Setting up Rook Ceph for air-gapped environments
Note: This procedure applies to air-gapped environments and is executed on the
air-gapped tools-system.
Ensure that you use the correct paths and the correct Assets VM IP address in the commands below.
- Deploy the Rook Ceph operator.
  Using the rook-ceph-operator-values.yaml file that the edaadm tool generated based on the configuration, deploy the Rook Ceph operator from the Rook Ceph charts hosted on the Assets VM.
  helm install --create-namespace \
    --namespace rook-ceph \
    --version v1.15.0 \
    -f path/to/rook-ceph-operator-values.yaml \
    rook-ceph \
    http://eda:eda@<ASSETS VM IP>/artifacts/rook-ceph-v1.15.0.tgz
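Before running the install, it can help to confirm that the chart artifact is actually reachable on the Assets VM. A minimal sketch; `chart_url` is a hypothetical helper, and `192.0.2.10` is a placeholder documentation address standing in for your Assets VM IP:

```shell
# Hypothetical helper: build the chart URL in the same form the helm
# command above uses (eda:eda credentials, /artifacts/ path).
chart_url() {
  ip="$1"
  chart="$2"
  echo "http://eda:eda@${ip}/artifacts/${chart}"
}

# Reachability check against the real Assets VM (requires network access):
#   curl -fsI "$(chart_url <ASSETS VM IP> rook-ceph-v1.15.0.tgz)" >/dev/null && echo reachable
chart_url 192.0.2.10 rook-ceph-v1.15.0.tgz
```

A failed fetch at this point usually indicates a wrong IP or path rather than a Helm problem, so checking the URL first saves a misleading install error later.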
- Deploy the Rook Ceph cluster.
  Using the rook-ceph-cluster-values.yaml file that the edaadm tool generated, deploy the Rook Ceph cluster.
  helm install \
    --namespace rook-ceph \
    --set operatorNamespace=rook-ceph \
    -f path/to/rook-ceph-cluster-values.yaml \
    rook-ceph-cluster \
    http://eda:eda@<ASSETS VM IP>/artifacts/rook-ceph-cluster-v1.15.0.tgz
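The cluster takes several minutes to converge after this command returns. A small retry helper can poll for readiness instead of re-running checks by hand; this is a sketch, not part of the product tooling, and the `kubectl` probe shown in the comment assumes the default CephCluster name `rook-ceph`:

```shell
# Hypothetical helper: retry a command until it succeeds or the attempts
# run out. Example use against the cluster deployed above:
#   wait_for "kubectl -n rook-ceph get cephcluster rook-ceph \
#     -o jsonpath='{.status.phase}' | grep -q Ready" 60 10
wait_for() {
  cmd="$1"
  tries="${2:-30}"   # number of attempts
  delay="${3:-10}"   # seconds between attempts
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

The helper returns non-zero on timeout, so it can gate the next step of an automated install.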
- Using kubectl commands, verify that the operator is deployed and that the necessary pods are running before installing the EDA application.
  The following sample output is from a deployment with six nodes used for storage:
  $ kubectl -n rook-ceph get pods
  NAME                                               READY   STATUS      RESTARTS        AGE
  csi-cephfsplugin-22rmj                             2/2     Running     1 (6m32s ago)   7m6s
  csi-cephfsplugin-25p9d                             2/2     Running     1 (6m30s ago)   7m6s
  csi-cephfsplugin-2gr8v                             2/2     Running     4 (5m16s ago)   7m6s
  csi-cephfsplugin-48cwk                             2/2     Running     1 (6m30s ago)   7m6s
  csi-cephfsplugin-fknch                             2/2     Running     2 (5m32s ago)   7m6s
  csi-cephfsplugin-provisioner-67c8454ddd-mpq4w      5/5     Running     1 (6m1s ago)    7m6s
  csi-cephfsplugin-provisioner-67c8454ddd-qmdrq      5/5     Running     1 (6m18s ago)   7m6s
  csi-cephfsplugin-vfxnf                             2/2     Running     1 (6m32s ago)   7m6s
  rook-ceph-mds-ceph-filesystem-a-7c54cdf5bc-lmf6n   1/1     Running     0               2m40s
  rook-ceph-mds-ceph-filesystem-b-6dc794b9f4-2lc64   1/1     Running     0               2m37s
  rook-ceph-mgr-a-55b449c844-wpps8                   2/2     Running     0               4m30s
  rook-ceph-mgr-b-5f97fd5746-fzngx                   2/2     Running     0               4m30s
  rook-ceph-mon-a-76fcb96c4c-vscnc                   1/1     Running     0               5m53s
  rook-ceph-mon-b-68bf5974bb-p2vnj                   1/1     Running     0               4m57s
  rook-ceph-mon-c-6d7c64dcb6-phs99                   1/1     Running     0               4m47s
  rook-ceph-operator-5f4c4bff8d-2fsq2                1/1     Running     0               7m54s
  rook-ceph-osd-0-bf89f779-zh4kd                     1/1     Running     0               3m49s
  rook-ceph-osd-1-64dcd64c5f-7xcbm                   1/1     Running     0               3m49s
  rook-ceph-osd-2-54ddd95489-5qkdt                   1/1     Running     0               3m49s
  rook-ceph-osd-3-56cbd54bd6-7mt8w                   1/1     Running     0               3m39s
  rook-ceph-osd-4-567dcff476-wljll                   1/1     Running     0               2m56s
  rook-ceph-osd-5-6f69c998b6-2l5wp                   1/1     Running     0               2m54s
  rook-ceph-osd-prepare-eda-dev-node01-7rfkn         0/1     Completed   0               4m8s
  rook-ceph-osd-prepare-eda-dev-node02-rqdkx         0/1     Completed   0               4m8s
  rook-ceph-osd-prepare-eda-dev-node03-xtznb         0/1     Completed   0               4m8s
  rook-ceph-osd-prepare-eda-dev-node04-db4v8         0/1     Completed   0               4m7s
  rook-ceph-osd-prepare-eda-dev-node05-29wwm         0/1     Completed   0               4m7s
  rook-ceph-osd-prepare-eda-dev-node06-zxp2x         0/1     Completed   0               4m7s
  rook-ceph-tools-b9d78b5d4-8r62p                    1/1     Running     0               7m6s
Note: Some of the pods may restart while Ceph initializes. This behavior is expected.
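Rather than scanning the listing above by eye, a small filter can flag any pod that is neither Running nor Completed. This is a sketch; `not_ready` is a hypothetical helper, and the sample lines below stand in for live output:

```shell
# Hypothetical helper: print the name of every pod whose STATUS column
# (field 3 of `kubectl get pods --no-headers` output) is neither
# "Running" nor "Completed".
# Live use: kubectl -n rook-ceph get pods --no-headers | not_ready
not_ready() {
  awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Illustration with captured sample lines (one pod still pending):
printf '%s\n' \
  'rook-ceph-operator-5f4c4bff8d-2fsq2   1/1   Running     0   7m54s' \
  'rook-ceph-osd-prepare-node01-7rfkn    0/1   Completed   0   4m8s' \
  'rook-ceph-mon-a-76fcb96c4c-vscnc      0/1   Pending     0   5s' \
  | not_ready
```

An empty result means every pod has settled; transient restarts during Ceph initialization still show as Running and are not flagged.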