To migrate to your own storage in a standalone NSP deployment

Purpose

Perform this procedure to migrate all data in a standalone NSP deployment that currently uses local storage to a deployment that uses your own storage.

Steps
Back up data, uninstall NSP
 

1 

Perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.


2 

Perform the following steps on the NSP deployer host to ensure that all existing PVCs, including nsp-backup-storage, are deleted.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

    deleteOnUndeploy: true
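Before running the uninstall, you can confirm that the flag is actually set. The following helper is an illustrative sketch only (it is not part of the NSP tooling); pass it the path to your nsp-config.yml:

```shell
# check_delete_on_undeploy FILE
# Hypothetical pre-check: exits 0 only if FILE contains an uncommented
# "deleteOnUndeploy: true" line, so the undeploy will remove existing PVCs.
check_delete_on_undeploy() {
  grep -Eq '^[[:space:]]*deleteOnUndeploy:[[:space:]]*true' "$1"
}
```

For example: `check_delete_on_undeploy /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml || echo "flag not set"`.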


3 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


4 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:

./nspdeployerctl --ask-pass uninstall

./nspdeployerctl uninstall ↵

The NSP is uninstalled.


5 

Enter the following:

./nspdeployerctl unconfig ↵


Configure and test your own storage
 

6 

Create Kubernetes storage classes.

See Appendix C, NFS storage configuration for NFS storage class configuration information.

See Appendix D, Ceph storage configuration for Ceph storage class configuration information.

You can also create your own storage classes.


7 

Enter the following command to list the storage classes that you created:

# kubectl get sc

NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   9m32s

rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   27s
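Before referencing a class name in nsp-config.yml, you may want to confirm that it exists in the cluster. A minimal helper for that check (illustrative only, not part of the NSP tools) that reads `kubectl get sc` output on standard input:

```shell
# sc_exists NAME
# Exits 0 only if a storage class called NAME appears in the output
# piped in from `kubectl get sc` (first column, header line skipped).
sc_exists() {
  awk -v name="$1" 'NR > 1 && $1 == name { found = 1 } END { exit !found }'
}
```

For example: `kubectl get sc | sc_exists rook-cephfs && echo "rook-cephfs present"`.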


8 

Configure the following parameters in the platform section, kubernetes subsection as shown below:

  storage:

    readWriteOnceLowIOPSClass: "storage_class"

    readWriteOnceHighIOPSClass: "storage_class"

    readWriteOnceClassForDatabases: "storage_class"

    readWriteManyLowIOPSClass: "storage_class"

    readWriteManyHighIOPSClass: "storage_class"

where

readWriteOnceLowIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000

readWriteOnceHighIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS above 10,000

readWriteOnceClassForDatabases—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000 for NSP databases

readWriteManyLowIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS below 10,000

readWriteManyHighIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS above 10,000

storage_class is your storage class name


9 

You must ensure that the storage classes meet the IOPS requirements; otherwise, system performance degradation may result.

Run the script that checks the IOPS of the configured storage classes.

  1. Enter the following on the NSP deployer host:

    # cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/support/storageIopsCheck/bin ↵

    # ./nspstorageiopsctl ↵

    The following prompt is displayed:

    date time year ----- BEGIN ./nspstorageiopsctl -----

    [INFO]: SSH to NSP Cluster host IP_address successful

    1) readWriteManyHighIOPSClass            5) readWriteOnceClassForDatabases

    2) readWriteOnceHighIOPSClass            6) All Storage Classes

    3) readWriteManyLowIOPSClass             7) Quit

    4) readWriteOnceLowIOPSClass

    Select an option:

  2. Select each storage class individually, or select All Storage Classes.

    Output like the following is displayed to indicate whether the check passes or fails.

    [INFO] **** Calling IOPs check for readWriteManyHighIOPSClass - Storage Class Name (ceph-filesystem) Access Mode (ReadWriteMany) ****

    [INFO] NSP Cluster Host: ip_address

    [INFO] Validate configured storage classes are available on NSP Cluster

    [INFO] Adding helm repo nokia-nsp

    [INFO] Updating helm repo nokia-nsp

    [INFO] Executing k8s job on NSP Cluster ip_address

    [INFO] Creating /opt/nsp/nsp-storage-iops directory on NSP Cluster ip_address

    [INFO] Copying values.yaml to /opt/nsp/nsp-deployer/tools/nsp-storage-iops

    [INFO] Executing k8s job on NSP Cluster ip_address

    [INFO] Waiting for K8s job status...

    [INFO] Job storage-iops completed successfully.

    [INFO] Cleaning up and uninstalling k8s job

    [INFO] Helm uninstall cn-nsp-storage-iops successful

    STORAGECLASS         ACCESS MODE    READIOPS   WRITEIOPS  RESULT    STORAGECLASSTYPE

    ------------         -----------    --------   ---------  ------    ----------------

    storage_class     ReadWriteMany  12400      12500      true      readWriteManyHighIOPSClass

    [INFO] READ IOPS and WRITE IOPS meet the threshold of 10000.

    date time year ------------------- END ./nspstorageiopsctl - SUCCESS --------------------


10 

If the output for each class indicates that the IOPS threshold is met, the throughput requirements are sufficient for NSP deployment. Otherwise, you must reconfigure your storage to meet the requirements.

Note: You must not proceed unless the throughput requirements are met.
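When several classes are checked, the RESULT column of the summary table can be scanned mechanically instead of by eye. A minimal sketch, assuming the six-column table layout shown in the sample output above (this helper is not part of the NSP tools):

```shell
# iops_all_pass
# Reads the script's summary table on stdin and exits 0 only if every
# data row has "true" in the RESULT column (5th field). The two header
# lines (column names and dashes) are skipped.
iops_all_pass() {
  awk 'NR > 2 && $5 != "true" { fail = 1 } END { exit fail }'
}
```

For example: `iops_all_pass < results.txt && echo "all storage classes meet the IOPS thresholds"`, where results.txt holds the captured summary table.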


Restore NSP data
 
11 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


12 

Enter the following:

./nspdeployerctl config ↵


13 

Perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Verify storage configuration
 
14 

Check the state of the PVCs and PVs.

  1. Enter the following:

    kubectl get pvc -A ↵

    NAMESPACE            NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE

    nsp-psa-privileged   data-volume-mdm-server-0   Bound    pvc-ID  5Gi        RWO            storage_class  age

    nsp-psa-restricted   data-nspos-kafka-0         Bound    pvc-ID  10Gi       RWO            storage_class   age

    nsp-psa-restricted   data-nspos-zookeeper-0     Bound    pvc-ID  2Gi        RWO            storage_class  age

    ...

  2. Enter the following:

    kubectl get pv ↵

    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM

    nspos-fluentd-logs-data   50Mi       ROX            Retain           Bound    nsp-psa-restricted/nspos-fluentd-logs-data

    pvc-ID                   10Gi       RWO            Retain           Bound    nsp-psa-restricted/data-nspos-kafka-0 

    pvc-ID                   2Gi        RWO            Retain           Bound    nsp-psa-restricted/data-nspos-zookeeper-0

    ...

  3. Verify that the PVCs are bound to PVs, and that each PV has the correct storage class.
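Checking the STATUS column by eye is error-prone when many PVCs are listed. A small helper (illustrative only, not part of the NSP tools) that fails if any PVC in the `kubectl get pvc -A` output is not Bound:

```shell
# all_pvcs_bound
# Reads `kubectl get pvc -A` output on stdin and exits 0 only if every
# row (header skipped) reports STATUS "Bound"; the STATUS value is the
# 3rd column, after NAMESPACE and NAME.
all_pvcs_bound() {
  awk 'NR > 1 && $3 != "Bound" { fail = 1 } END { exit fail }'
}
```

For example: `kubectl get pvc -A | all_pvcs_bound && echo "all PVCs bound"`.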


15 

Verify that the NSP cluster databases are restored.


16 

Close the console window.

End of steps