Back up data, perform individual cluster migrations
Step 1
Perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.
Step 2
Perform Step 6 to Step 20 in the standby data center.
Step 3
Perform an NSP DR switchover to change the role of the standby NSP cluster to primary.
Step 4
Perform Step 6 to Step 20 in the former primary data center.
Step 5
Go to Step 21.
Uninstall NSP
Step 6
Perform the following steps on the NSP deployer host to ensure that all existing PVCs, including nsp-backup-storage, are deleted.
- Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
- Edit the following line in the platform section, kubernetes subsection to read:
deleteOnUndeploy: true
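After this edit, the relevant portion of nsp-config.yml should resemble the following sketch. Only the deleteOnUndeploy line is specified by this procedure; the surrounding keys reflect the section and subsection names it references:

```yaml
platform:
  kubernetes:
    # Delete all PVCs, including nsp-backup-storage, when NSP is undeployed
    deleteOnUndeploy: true
```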
Step 7
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
Step 8
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall
# ./nspdeployerctl uninstall ↵
The NSP is uninstalled.
Step 9
Enter the following:
# ./nspdeployerctl unconfig ↵
Configure and test your own storage
Step 10
Create Kubernetes storage classes.
The following are examples of storage configurations. For other storage types, see the appropriate storage documentation.
- See NFS storage configuration for NFS storage class configuration information.
- See Ceph storage configuration for Ceph storage class configuration information.
You can also create your own storage classes.
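As an illustration of the storage class creation above, a minimal StorageClass manifest might look like the following sketch. The class name and provisioner are hypothetical placeholders; the actual values depend on your storage backend (see the NFS or Ceph configuration references above):

```yaml
# Hypothetical example; substitute the CSI provisioner for your storage backend
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class          # your storage class name
provisioner: example.com/provisioner   # backend-specific provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```

Applying such a manifest with kubectl apply -f makes the class appear in the kubectl get sc output of the next step.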
Step 11
Enter the following to list the storage classes created in Step 10:
# kubectl get sc ↵
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 9m32s
rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 27s
Step 12
Configure the following parameters in the platform section, kubernetes subsection:
storage:
  readWriteOnceLowIOPSClass: "storage_class"
  readWriteOnceHighIOPSClass: "storage_class"
  readWriteOnceClassForDatabases: "storage_class"
  readWriteManyLowIOPSClass: "storage_class"
  readWriteManyHighIOPSClass: "storage_class"
where
readWriteOnceLowIOPSClass—enables ReadWriteOnce operations with storage IOPS below 10,000
readWriteOnceHighIOPSClass—enables ReadWriteOnce operations with storage IOPS above 10,000
readWriteOnceClassForDatabases—enables ReadWriteOnce operations with storage IOPS below 10,000 for NSP databases
readWriteManyLowIOPSClass—enables ReadWriteMany operations with storage IOPS below 10,000
readWriteManyHighIOPSClass—enables ReadWriteMany operations with storage IOPS above 10,000
storage_class is your storage class name
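Using the example class names from the kubectl get sc output in Step 11, the storage subsection might be filled in as follows. The mapping of each parameter to rook-ceph-block or rook-cephfs is an assumption for illustration only; choose classes that match the access mode and IOPS requirement of each parameter:

```yaml
platform:
  kubernetes:
    storage:
      # ReadWriteOnce classes (block storage in this example)
      readWriteOnceLowIOPSClass: "rook-ceph-block"
      readWriteOnceHighIOPSClass: "rook-ceph-block"
      readWriteOnceClassForDatabases: "rook-ceph-block"
      # ReadWriteMany classes (shared filesystem in this example)
      readWriteManyLowIOPSClass: "rook-cephfs"
      readWriteManyHighIOPSClass: "rook-cephfs"
```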
Step 13
You must ensure that the storage classes meet the IOPS requirements; otherwise, system performance may degrade.
Run the script that checks the IOPS of the configured storage classes.
- Enter the following on the NSP deployer host to change to the tool bin directory:
# cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/support/storageIopsCheck/bin ↵
- Enter the following to execute the nspstorageiopsctl script:
# ./nspstorageiopsctl ↵
The following prompt is displayed:
date time year ----- BEGIN ./nspstorageiopsctl -----
[INFO]: SSH to NSP Cluster host IP_address successful
1) readWriteManyHighIOPSClass 5) readWriteOnceClassForDatabases
2) readWriteOnceHighIOPSClass 6) All Storage Classes
3) readWriteManyLowIOPSClass 7) Quit
4) readWriteOnceLowIOPSClass
Select an option:
- Select each storage class individually, or select All Storage Classes.
Output like the following indicates whether the check passes or fails:
[INFO] **** Calling IOPs check for readWriteManyHighIOPSClass - Storage Class Name (ceph-filesystem) Access Mode (ReadWriteMany) ****
[INFO] NSP Cluster Host: ip_address
[INFO] Validate configured storage classes are available on NSP Cluster
[INFO] Adding helm repo nokia-nsp
[INFO] Updating helm repo nokia-nsp
[INFO] Executing k8s job on NSP Cluster ip_address
[INFO] Creating /opt/nsp/nsp-storage-iops directory on NSP Cluster ip_address
[INFO] Copying values.yaml to /opt/nsp/nsp-deployer/tools/nsp-storage-iops
[INFO] Executing k8s job on NSP Cluster ip_address
[INFO] Waiting for K8s job status...
[INFO] Job storage-iops completed successfully.
[INFO] Cleaning up and uninstalling k8s job
[INFO] Helm uninstall cn-nsp-storage-iops successful
STORAGECLASS ACCESS MODE READIOPS WRITEIOPS RESULT STORAGECLASSTYPE
------------ ----------- -------- --------- ------ ----------------
storage_class ReadWriteMany 12400 12500 true readWriteManyHighIOPSClass
[INFO] READ IOPS and WRITE IOPS meet the threshold of 10000.
date time year ----- END ./nspstorageiopsctl - SUCCESS -----
Step 14
If the output for each class indicates that the IOPS threshold is met, the throughput requirements are sufficient for NSP deployment. Otherwise, you must reconfigure your storage to meet the requirements.
Note: You must not proceed unless the throughput requirements are met.
Install NSP
Step 15
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
Step 16
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:
nspdeployerctl --ask-pass config
# ./nspdeployerctl config ↵
Step 17
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install
# ./nspdeployerctl install ↵
The NSP is installed.
Verify storage configuration
Step 18
Check the state of the PVCs and PVs.
- Enter the following:
# kubectl get pvc -A ↵
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nsp-psa-privileged nfs-server-pvc Bound pvc-ID 47Gi RWO piraeus-storage 2d
nsp-psa-privileged pvc-nfs-subdir-provisioner Bound pvc-ID 10Mi RWO 2d
nsp-psa-restricted data-nspos-kafka-broker-0 Bound pvc-ID 10Gi RWO local-provisioner 2d
nsp-psa-restricted data-nspos-kafka-controller-0 Bound pvc-ID 10Gi RWO local-provisioner 2d
nsp-psa-restricted data-nspos-zookeeper-0 Bound pvc-ID 2Gi RWO local-provisioner 2d
nsp-psa-restricted datadir-nspos-neo4j-core-dc2-250b-0 Bound pvc-ID 10Gi RWO local-provisioner 2d
nsp-psa-restricted nsp-backup-storage Bound pvc-ID 10G RWX nfs-storage 2d
nsp-psa-restricted nspos-fluentd-logs-data Bound pvc-ID 50Mi ROX 2d
nsp-psa-restricted nspos-fluentd-posfile-data Bound pvc-ID 1Gi RWO 2d
nsp-psa-restricted nspos-postgresql-data-nspos-postgresql-primary-0 Bound pvc-ID 15Gi RWO local-provisioner 2d
nsp-psa-restricted opensearch-backup-pv Bound pvc-ID 1Gi RWX nfs-storage 2d
nsp-psa-restricted opensearch-cluster-master-opensearch-cluster-master-0 Bound pvc-ID 50Gi RWO local-provisioner 2d
nsp-psa-restricted solr-pvc-nspos-solr-statefulset-0 Bound pvc-ID 1Gi RWO local-provisioner 2d
nsp-psa-restricted storage-volume-nspos-prometheus-server-0 Bound pvc-ID 10Gi RWO local-provisioner 2d
- Enter the following:
# kubectl get pv ↵
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nspos-fluentd-logs-data 50Mi ROX Retain Bound nsp-psa-restricted/nspos-fluentd-logs-data 2d
nspos-fluentd-posfile-data 1Gi RWO Retain Bound nsp-psa-restricted/nspos-fluentd-posfile-data 2d
pv-nfs-subdir-provisioner 10Mi RWO Delete Bound nsp-psa-privileged/pvc-nfs-subdir-provisioner 2d
pvc-0c8d9fdf-6ffc-4208-... 47Gi RWO Delete Bound nsp-psa-privileged/nfs-server-pvc piraeus-storage 2d
pvc-4ccbe6ef-e38c-4739-... 10G RWX Delete Bound nsp-psa-restricted/nsp-backup-storage nfs-storage 2d
pvc-5b21e2e2-eec5-49a7-... 10Gi RWO Delete Bound nsp-psa-restricted/data-nspos-kafka-broker-0 local-provisioner 2d
pvc-50df27c7-fcf3-462d-... 10Gi RWO Delete Bound nsp-psa-restricted/storage-volume-nspos-prometheus-server-0 local-provisioner 2d
pvc-59c23b56-95fb-4cf6-... 50Gi RWO Delete Bound nsp-psa-restricted/opensearch-cluster-master-opensearch-cluster-master-0 local-provisioner 2d
pvc-96d2ceb0-e490-470d-... 2Gi RWO Delete Bound nsp-psa-restricted/data-nspos-zookeeper-0 local-provisioner 2d
pvc-370ee803-cd02-4eac-... 15Gi RWO Delete Bound nsp-psa-restricted/nspos-postgresql-data-nspos-postgresql-primary-0 local-provisioner 2d
pvc-32991a91-1b35-42c9-... 1Gi RWX Delete Bound nsp-psa-restricted/opensearch-backup-pv nfs-storage 2d
pvc-86581df8-026b-4b80-... 10Gi RWO Delete Bound nsp-psa-restricted/data-nspos-kafka-controller-0 local-provisioner 2d
pvc-bf1c522b-985c-485c-... 1Gi RWO Delete Bound nsp-psa-restricted/solr-pvc-nspos-solr-statefulset-0 local-provisioner 2d
pvc-ccee060b-6268-4601-... 10Gi RWO Delete Bound nsp-psa-restricted/datadir-nspos-neo4j-core-dc2-250b-0 local-provisioner 2d
- Verify that the PVCs are bound to PVs, and that each PV has the correct storage class.
Verify DR data synchronization
Step 19
Use the System Health dashboard to verify that data synchronization completes for the following:
- Neo4j database
- PostgreSQL database
- File Server service
Step 20
Perform the following on the NSP deployer host.
- Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
- Edit the following line in the platform section, kubernetes subsection to read:
deleteOnUndeploy: false
Step 21
Close the open console windows.
End of steps