Connect external storage cluster to NSP
Purpose
Perform this procedure to connect an external storage cluster to NSP.
In this procedure, the following terms are used:
- External storage cluster—a Ceph cluster that is configured and managed outside the NSP cluster.
- Source cluster—the Ceph cluster that provides data to the NSP cluster in the external configuration; this is the external source cluster.
  See https://www.rook.io/docs/rook/v1.12/CRDs/Cluster/external-cluster/ for more information.
- Consumer cluster—the NSP Kubernetes cluster that consumes data from the external source cluster in the external configuration.
  See "Commands on the K8s consumer cluster" in https://www.rook.io/docs/rook/v1.12/CRDs/Cluster/external-cluster/ for more information.
The examples in the following procedure include sample names.
Steps
1. Log in as the root user on the NSP cluster.
2. Open a console window.
On the source cluster
3. Copy the rook-ceph-values.yaml and rook-ceph-cluster-values.yaml files, which are used in Step 6 and Step 8, to the external Ceph cluster.
Note: These files must be updated before configuring the consumer cluster.
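Note: The copy can be performed with scp; the following is a sketch only, which assumes the files are in the current directory and uses source-cluster-host as a hypothetical address for the external Ceph cluster host:
# scp rook-ceph-values.yaml rook-ceph-cluster-values.yaml root@source-cluster-host:/root/ ↵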
4. Enter the following command:
# helm repo add rook-release http://blr-orbw-artifactory.in.alcatel-lucent.com:8081/artifactory/api/helm/orbw-artifactory-helm-remotes-mirror ↵
5. Enter the following command:
# helm repo list ↵
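Note: The repository added in Step 4 should appear in the list as rook-release with the Artifactory URL. As an optional check, you can also confirm that the rook-ceph charts are visible in the repository:
# helm search repo rook-release ↵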
6. Enter the following command:
# helm install --wait --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f rook-ceph-values.yaml ↵
Output similar to the following is displayed:
NAME: rook-ceph
LAST DEPLOYED: Fri Sep 27 10:56:07 2024
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"
Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters
Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).
7. Enter the following command:
# kubectl get pods -A | grep rook-ceph-operator ↵
Output similar to the following is displayed:
rook-ceph   rook-ceph-operator-7bfd98674f-h8425   1/1   Running   0   10m
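Note: As an alternative readiness check, you can wait for the operator Deployment to finish rolling out:
# kubectl --namespace rook-ceph rollout status deployment/rook-ceph-operator ↵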
8. Enter the following command:
# helm install --wait --timeout 20m --namespace namespace rook-ceph-cluster repo_name/rook-ceph-cluster -f rook-ceph-cluster-values.yaml ↵
where
namespace is the namespace for the Rook Ceph cluster (rook-ceph in this example)
repo_name is the name of the Helm repository added in Step 4 (rook-release in this example)
Output similar to the following is displayed:
NAME: rook-ceph-cluster
LAST DEPLOYED: Fri Sep 27 11:09:33 2024
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph get cephcluster
Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.
Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
9. Enter the following command:
# kubectl get cephcluster -A ↵
Output similar to the following is displayed:
NAMESPACE   NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
rook-ceph   rook-ceph   /var/lib/rook     1          2m26s   Ready   Cluster created successfully   HEALTH_OK              filesystem_id
10. Get the create-external-cluster-resources.py script from the official Rook project:
# git clone https://github.com/rook/rook.git ↵
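Note: In the Rook repository, create-external-cluster-resources.py is located under deploy/examples. You can also clone only the release branch that matches your operator version; for example, for the Rook v1.12 documentation referenced in this procedure:
# git clone --depth 1 --branch release-1.12 https://github.com/rook/rook.git ↵
# ls rook/deploy/examples/create-external-cluster-resources.py ↵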
11. Enter the following command:
# kubectl cp create-external-cluster-resources.py rook-ceph-tools-6856b99dcb-mn8kx:/tmp -n rook-ceph ↵
12. Open a shell session on the Kubernetes tools pod.
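Note: A shell on the tools pod is typically opened with kubectl exec; for example, assuming the toolbox Deployment is named rook-ceph-tools:
# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash ↵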
13. Enter the following command:
# python3 create-external-cluster-resources.py --rbd-data-pool-name pool_name --cephfs-filesystem-name filesystem_name --format bash ↵
where
pool_name is the name of the RBD pool
filesystem_name is the name of the filesystem
Output similar to the following is displayed:
export ROOK_EXTERNAL_FSID=filesystem_id
export ROOK_EXTERNAL_USERNAME=username
export ROOK_EXTERNAL_CEPH_MON_DATA=a=monitoring_url:port
export ROOK_EXTERNAL_USER_SECRET=user_secret
export ROOK_EXTERNAL_DASHBOARD_LINK=dashboard_url
export CSI_RBD_NODE_SECRET=node_secret
export CSI_RBD_NODE_SECRET_NAME=secret_name
export CSI_RBD_PROVISIONER_SECRET=provisioner_secret
export CSI_RBD_PROVISIONER_SECRET_NAME=secret_name
export CEPHFS_POOL_NAME=pool_name
export CEPHFS_METADATA_POOL_NAME=metadata_pool_name
export CEPHFS_FS_NAME=filesystem_name
export CSI_CEPHFS_NODE_SECRET=node_secret
export CSI_CEPHFS_PROVISIONER_SECRET=provisioner_secret
export CSI_CEPHFS_NODE_SECRET_NAME=secret_name
export CSI_CEPHFS_PROVISIONER_SECRET_NAME=provisioner_secret_name
export MONITORING_ENDPOINT=monitoring_url
export MONITORING_ENDPOINT_PORT=monitoring_port
export RBD_POOL_NAME=pool_name
export RGW_POOL_PREFIX=default
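Note: To avoid copy-and-paste errors, you can instead capture the exports in a file on the tools pod and retrieve it afterward; the following sketch uses the sample pod name from Step 11 and a hypothetical file name, external-cluster-env.sh:
# python3 create-external-cluster-resources.py --rbd-data-pool-name pool_name --cephfs-filesystem-name filesystem_name --format bash > /tmp/external-cluster-env.sh ↵
# exit ↵
# kubectl -n rook-ceph cp rook-ceph-tools-6856b99dcb-mn8kx:/tmp/external-cluster-env.sh external-cluster-env.sh ↵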
On the consumer cluster
14. Paste the output of create-external-cluster-resources.py from Step 13 into your current shell so that the source-cluster data can be imported.
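Note: If you captured the exports in a file as sketched in Step 13, you can source the file instead of pasting:
# source ./external-cluster-env.sh ↵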
15. Navigate to the directory that contains the import-external-cluster.sh file.
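Note: import-external-cluster.sh ships with the Rook repository cloned in Step 10; in Rook v1.12 it is located under deploy/examples:
# cd rook/deploy/examples ↵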
16. Enter the following command:
# ./import-external-cluster.sh ↵
Output similar to the following is displayed:
namespace/rook-ceph created
secret/rook-ceph-mon created
configmap/rook-ceph-mon-endpoints created
configmap/external-cluster-user-command created
secret/rook-csi-rbd-node created
secret/rook-csi-rbd-provisioner created
secret/rook-csi-cephfs-node created
secret/rook-csi-cephfs-provisioner created
storageclass.storage.k8s.io/ceph-rbd created
storageclass.storage.k8s.io/cephfs created
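Note: As an optional check, confirm that the imported secrets and configmaps exist:
# kubectl -n rook-ceph get secrets,configmaps ↵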
17. Enter the following command:
# helm repo add rook-release http://blr-orbw-artifactory.in.alcatel-lucent.com:8081/artifactory/api/helm/orbw-artifactory-helm-remotes-mirror ↵
18. Enter the following command:
# helm repo list ↵
19. Enter the following command:
# helm install --create-namespace --namespace rook-ceph rook-ceph --set operatorNamespace=rook-ceph rook-release/rook-ceph -f consumer-rook-ceph-values.yaml ↵
Output similar to the following is displayed:
NAME: rook-ceph
LAST DEPLOYED: Thu Oct 3 14:49:22 2024
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).
20. Enter the following command:
# helm install --create-namespace --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f release-1.13/deploy/charts/rook-ceph-cluster/values-external.yaml ↵
Output similar to the following is displayed:
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephObjectStores: Not a table.
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephFileSystems: Not a table.
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephBlockPools: Not a table.
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephBlockPools: Not a table.
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephFileSystems: Not a table.
coalesce.go:237: warning: skipped value for rook-ceph-cluster.cephObjectStores: Not a table.
NAME: rook-ceph-cluster
LAST DEPLOYED: Thu Oct 3 14:50:44 2024
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph get cephcluster
Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.
Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
21. Enter the following command:
# kubectl --namespace rook-ceph get cephcluster ↵
Output similar to the following is displayed:
NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE       MESSAGE                          HEALTH        EXTERNAL   FSID
rook-ceph   /var/lib/rook     3          9s    Connected   Cluster connected successfully   HEALTH_WARN   true       fs_id
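Note: To investigate a reported HEALTH status such as HEALTH_WARN, you can describe the CephCluster resource for details; for example:
# kubectl -n rook-ceph describe cephcluster rook-ceph ↵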
22. Enter the following command:
# kubectl get pod -n rook-ceph ↵
Output similar to the following is displayed:
NAME                     READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-2hr7p   3/3     Running   0          15s
csi-cephfsplugin-4b5fb   3/3     Running   0          15s
csi-cephfsplugin-8bt5m   3/3     Running   0          15s
...
The output includes entries for the cephfsplugin, cephfsplugin-provisioner, rbdplugin, and rbdplugin-provisioner pods.
23. Enter the following command:
# kubectl get sc ↵
Output similar to the following is displayed:
NAME       PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   22m
cephfs     rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   22m
24. Configure the storage classes from Step 23 in the nsp-config.yaml file before installing NSP, as sketched below.
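Note: The following is a hypothetical sketch only; the exact nsp-config.yaml parameters are defined in the NSP installation documentation for your release. The key names below are illustrative, and the sketch assumes only that the configuration accepts the storage-class names reported in Step 23:
storage:
  rwoStorageClass: "ceph-rbd"   # illustrative key name; block storage (ReadWriteOnce)
  rwxStorageClass: "cephfs"     # illustrative key name; shared filesystem storage (ReadWriteMany)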
25. Check PVC and PV creation using the storage classes configured in Step 24:
# cat fs_pvc.yaml ↵
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-rwx-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 10Gi
# kubectl create -f fs_pvc.yaml ↵
persistentvolumeclaim/test-rwx-cephfs-pvc created
# kubectl get pvc -A ↵
NAMESPACE   NAME   STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     name   Bound    volume_id   size       RWX            cephfs         4s
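Note: As an optional complementary check (a sketch, not part of the documented procedure), you can verify the ceph-rbd storage class in the same way with a ReadWriteOnce claim, and then delete both test claims:
# cat rbd_pvc.yaml ↵
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-rwo-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi
# kubectl create -f rbd_pvc.yaml ↵
# kubectl get pvc test-rwo-rbd-pvc ↵
# kubectl delete -f rbd_pvc.yaml ↵
# kubectl delete -f fs_pvc.yaml ↵
End of steps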