Setting up the Rook Ceph storage cluster
- Add the Rook Ceph Helm chart repository:

    helm repo add rook-release https://charts.rook.io/release
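If the repository was already configured on the machine, refreshing the local chart index ensures the latest chart versions are visible; this is a standard Helm housekeeping step, not part of the generated configuration:

```shell
# Refresh the locally cached chart index for all configured repositories.
helm repo update
```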
- Using the rook-ceph-operator-values.yaml file that the edaadm tool generated based on the configuration, deploy the Rook Ceph Operator:

    helm install --create-namespace \
      --namespace rook-ceph \
      -f path/to/rook-ceph-operator-values.yaml \
      rook-ceph rook-release/rook-ceph
- Using the rook-ceph-cluster-values.yaml file that the edaadm tool generated, deploy the Rook Ceph Cluster:

    helm install \
      --namespace rook-ceph \
      --set operatorNamespace=rook-ceph \
      -f path/to/rook-ceph-cluster-values.yaml \
      rook-ceph-cluster rook-release/rook-ceph-cluster

  The output from this command may report missing CRDs; if it does, wait until the Rook Ceph Operator is running in the Kubernetes cluster.
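Rather than polling by hand, kubectl can block until the operator deployment reports Available. A minimal sketch, assuming the chart's default deployment name rook-ceph-operator:

```shell
# Block until the Rook Ceph operator deployment is Available,
# or give up after five minutes. The deployment name
# rook-ceph-operator is the chart default; adjust if renamed.
kubectl -n rook-ceph wait deploy/rook-ceph-operator \
  --for=condition=Available --timeout=300s
```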
- Before installing the EDA application, use kubectl to verify that the operator and the necessary pods are deployed. This example is for a six-node cluster, with three storage nodes.

    $ kubectl -n rook-ceph get pods
    NAME                                               READY   STATUS      RESTARTS        AGE
    csi-cephfsplugin-22rmj                             2/2     Running     1 (6m32s ago)   7m6s
    csi-cephfsplugin-25p9d                             2/2     Running     1 (6m30s ago)   7m6s
    csi-cephfsplugin-2gr8v                             2/2     Running     4 (5m16s ago)   7m6s
    csi-cephfsplugin-48cwk                             2/2     Running     1 (6m30s ago)   7m6s
    csi-cephfsplugin-fknch                             2/2     Running     2 (5m32s ago)   7m6s
    csi-cephfsplugin-provisioner-67c8454ddd-mpq4w      5/5     Running     1 (6m1s ago)    7m6s
    csi-cephfsplugin-provisioner-67c8454ddd-qmdrq      5/5     Running     1 (6m18s ago)   7m6s
    csi-cephfsplugin-vfxnf                             2/2     Running     1 (6m32s ago)   7m6s
    rook-ceph-mds-ceph-filesystem-a-7c54cdf5bc-lmf6n   1/1     Running     0               2m40s
    rook-ceph-mds-ceph-filesystem-b-6dc794b9f4-2lc64   1/1     Running     0               2m37s
    rook-ceph-mgr-a-55b449c844-wpps8                   2/2     Running     0               4m30s
    rook-ceph-mgr-b-5f97fd5746-fzngx                   2/2     Running     0               4m30s
    rook-ceph-mon-a-76fcb96c4c-vscnc                   1/1     Running     0               5m53s
    rook-ceph-mon-b-68bf5974bb-p2vnj                   1/1     Running     0               4m57s
    rook-ceph-mon-c-6d7c64dcb6-phs99                   1/1     Running     0               4m47s
    rook-ceph-operator-5f4c4bff8d-2fsq2                1/1     Running     0               7m54s
    rook-ceph-osd-0-bf89f779-zh4kd                     1/1     Running     0               3m49s
    rook-ceph-osd-1-64dcd64c5f-7xcbm                   1/1     Running     0               3m49s
    rook-ceph-osd-2-54ddd95489-5qkdt                   1/1     Running     0               3m49s
    rook-ceph-osd-3-56cbd54bd6-7mt8w                   1/1     Running     0               3m39s
    rook-ceph-osd-4-567dcff476-wljll                   1/1     Running     0               2m56s
    rook-ceph-osd-5-6f69c998b6-2l5wp                   1/1     Running     0               2m54s
    rook-ceph-osd-prepare-eda-dev-node01-7rfkn         0/1     Completed   0               4m8s
    rook-ceph-osd-prepare-eda-dev-node02-rqdkx         0/1     Completed   0               4m8s
    rook-ceph-osd-prepare-eda-dev-node03-xtznb         0/1     Completed   0               4m8s
    rook-ceph-osd-prepare-eda-dev-node04-db4v8         0/1     Completed   0               4m7s
    rook-ceph-osd-prepare-eda-dev-node05-29wwm         0/1     Completed   0               4m7s
    rook-ceph-osd-prepare-eda-dev-node06-zxp2x         0/1     Completed   0               4m7s
    rook-ceph-tools-b9d78b5d4-8r62p                    1/1     Running     0               7m6s
Note: Some of the pods may restart as they initiate Ceph. This behavior is expected.
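As a scripted alternative to scanning the listing by hand, the STATUS column of the kubectl output can be filtered for pods that are neither Running nor Completed; a count of zero means the cluster has settled. A minimal sketch, where the here-doc stands in for abbreviated live `kubectl -n rook-ceph get pods --no-headers` output:

```shell
# Count pods whose STATUS column is neither Running nor Completed.
# On a live cluster, replace the here-doc with:
#   kubectl -n rook-ceph get pods --no-headers
pending=$(awk '$3 != "Running" && $3 != "Completed"' <<'EOF' | wc -l | tr -d ' '
rook-ceph-operator-5f4c4bff8d-2fsq2          1/1  Running    0  7m54s
rook-ceph-osd-0-bf89f779-zh4kd               1/1  Running    0  3m49s
rook-ceph-osd-prepare-eda-dev-node01-7rfkn   0/1  Completed  0  4m8s
EOF
)
echo "pods still settling: $pending"
```

With the sample input above, this prints `pods still settling: 0`; any pod stuck in another state (for example, Pending or CrashLoopBackOff) raises the count.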