To migrate to your own storage in a standalone NSP deployment

Purpose

Perform this procedure to migrate all data in a standalone NSP deployment that currently uses local or shared storage to a deployment that uses your own storage.

Steps
Back up data, uninstall NSP
 

1 

Perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.


2 

Perform the following steps on the NSP deployer host to ensure that all existing PVCs, including nsp-backup-storage, are deleted.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

    deleteOnUndeploy: true
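
If you prefer to make this change non-interactively, a sed command such as the following can be used; this is a sketch that assumes the parameter currently reads deleteOnUndeploy: false and appears only once in the file, so verify the result afterward:

# sed -i 's/deleteOnUndeploy: false/deleteOnUndeploy: true/' /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml ↵

# grep deleteOnUndeploy /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml ↵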


3 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


4 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall

./nspdeployerctl uninstall ↵

The NSP is uninstalled.
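
To confirm that the undeploy removed the PVCs, you can list the PVCs in the NSP namespaces; this check is a sketch, and the nsp-psa-restricted and nsp-psa-privileged namespaces shown are the typical NSP namespaces, which may differ in your deployment:

kubectl get pvc -n nsp-psa-restricted ↵

kubectl get pvc -n nsp-psa-privileged ↵

Output of “No resources found” for each namespace indicates that the PVCs, including nsp-backup-storage, are deleted.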


5 

Enter the following:

./nspdeployerctl unconfig ↵


Configure and test your own storage
 

6 

Create Kubernetes storage classes.

The following are examples of storage configurations. For other storage types, see the documentation for your storage provider.

See NFS storage configuration for NFS storage class configuration information.

See Ceph storage configuration for Ceph storage class configuration information.

You can also create your own storage classes.
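
For illustration only, a minimal StorageClass manifest for the Kubernetes NFS CSI driver might look like the following; the class name, provisioner, and parameters are assumptions that you must adapt to your environment and installed CSI driver:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage                # example class name, referenced later in nsp-config.yml
provisioner: nfs.csi.k8s.io        # example provisioner; must match your installed CSI driver
parameters:
  server: nfs.example.com          # example NFS server address
  share: /exports/nsp              # example exported path
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

Apply the manifest with kubectl apply -f and a file name of your choice, then confirm that the class appears in the kubectl get sc output in the next step.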


7 

Enter the following command to list the storage classes that you created:

# kubectl get sc

NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   9m32s

rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   27s


8 

Configure the following parameters in the platform section, kubernetes subsection, as shown below:

  storage:

    readWriteOnceLowIOPSClass: "storage_class"

    readWriteOnceHighIOPSClass: "storage_class"

    readWriteOnceClassForDatabases: "storage_class"

    readWriteManyLowIOPSClass: "storage_class"

    readWriteManyHighIOPSClass: "storage_class"

where

readWriteOnceLowIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000

readWriteOnceHighIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS above 10,000

readWriteOnceClassForDatabases—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000 for NSP databases

readWriteManyLowIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS below 10,000

readWriteManyHighIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS above 10,000

storage_class is your storage class name
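
For example, using the Ceph storage classes from the sample kubectl get sc output in the previous step, the section could be configured as follows; the class names are illustrative, and you must use the names reported by your own cluster. Note that the readWriteMany classes must be backed by storage that supports RWX access (such as CephFS); block-backed classes such as rook-ceph-block typically support RWO only:

  storage:
    readWriteOnceLowIOPSClass: "rook-ceph-block"
    readWriteOnceHighIOPSClass: "rook-ceph-block"
    readWriteOnceClassForDatabases: "rook-ceph-block"
    readWriteManyLowIOPSClass: "rook-cephfs"
    readWriteManyHighIOPSClass: "rook-cephfs"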


9 

You must ensure that the storage classes meet the IOPS requirements; otherwise, system performance degradation may result.

Run the script that checks the IOPS of the configured storage classes.

  1. Enter the following on the NSP deployer host to change to the tool bin directory:

    # cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/support/storageIopsCheck/bin ↵

  2. Enter the following to execute the nspstorageiopsctl script:

    # ./nspstorageiopsctl ↵

    The following prompt is displayed:

    date time year ----- BEGIN ./nspstorageiopsctl -----

    [INFO]: SSH to NSP Cluster host IP_address successful

    1) readWriteManyHighIOPSClass            5) readWriteOnceClassForDatabases

    2) readWriteOnceHighIOPSClass            6) All Storage Classes

    3) readWriteManyLowIOPSClass             7) Quit

    4) readWriteOnceLowIOPSClass

    Select an option:

  3. Select each storage class individually, or select All Storage Classes.

    Output like the following is displayed to indicate whether the check passes or fails.

    [INFO] **** Calling IOPs check for readWriteManyHighIOPSClass - Storage Class Name (ceph-filesystem) Access Mode (ReadWriteMany) ****

    [INFO] NSP Cluster Host: ip_address

    [INFO] Validate configured storage classes are available on NSP Cluster

    [INFO] Adding helm repo nokia-nsp

    [INFO] Updating helm repo nokia-nsp

    [INFO] Executing k8s job on NSP Cluster ip_address

    [INFO] Creating /opt/nsp/nsp-storage-iops directory on NSP Cluster ip_address

    [INFO] Copying values.yaml to /opt/nsp/nsp-deployer/tools/nsp-storage-iops

    [INFO] Executing k8s job on NSP Cluster ip_address

    [INFO] Waiting for K8s job status...

    [INFO] Job storage-iops completed successfully.

    [INFO] Cleaning up and uninstalling k8s job

    [INFO] Helm uninstall cn-nsp-storage-iops successful

    STORAGECLASS         ACCESS MODE    READIOPS   WRITEIOPS  RESULT    STORAGECLASSTYPE

    ------------         -----------    --------   ---------  ------    ----------------

    storage_class        ReadWriteMany  12400      12500      true      readWriteManyHighIOPSClass

    [INFO] READ IOPS and WRITE IOPS meet the threshold of 10000.

    date time year ------------------- END ./nspstorageiopsctl - SUCCESS --------------------


10 

If the output for each class indicates that the IOPS threshold is met, the throughput requirements are sufficient for NSP deployment. Otherwise, you must reconfigure your storage to meet the requirements.

Note: You must not proceed unless the throughput requirements are met.
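
If a storage class fails the check and you want to investigate outside the script, one approach is to run fio manually in a pod that mounts a test PVC of the failing class; the following command is a sketch that approximates a 4k random read/write IOPS measurement and assumes the volume is mounted at /data in the pod:

fio --name=iops-test --directory=/data --rw=randrw --bs=4k --size=1G --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based --group_reporting

The read and write IOPS values that fio reports can be compared against the 10,000 IOPS threshold that the nspstorageiopsctl script enforces.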


Restore NSP data
 
11 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


12 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example; you are then prompted for the root password of each cluster member:

nspdeployerctl --ask-pass config

./nspdeployerctl config ↵


13 

Perform the steps starting from the “Enable NSP restore mode” section in “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Verify storage configuration
 
14 

Check the state of the PVCs and PVs.

  1. Enter the following:

    kubectl get pvc -A ↵

    NAMESPACE             NAME                                                   STATUS  VOLUME    CAPACITY   ACCESS MODES  STORAGECLASS         AGE

    nsp-psa-privileged    nfs-server-pvc                                         Bound   pvc-ID    47Gi       RWO           piraeus-storage      2d

    nsp-psa-privileged    pvc-nfs-subdir-provisioner                             Bound   pvc-ID    10Mi       RWO                                2d

    nsp-psa-restricted    data-nspos-kafka-broker-0                              Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-kafka-controller-0                          Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-zookeeper-0                                 Bound   pvc-ID    2Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    datadir-nspos-neo4j-core-dc2-250b-0                    Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    nsp-backup-storage                                     Bound   pvc-ID    10G        RWX           nfs-storage          2d

    nsp-psa-restricted    nspos-fluentd-logs-data                                Bound   pvc-ID    50Mi       ROX                                2d 

    nsp-psa-restricted    nspos-fluentd-posfile-data                             Bound   pvc-ID    1Gi        RWO                                2d

    nsp-psa-restricted    nspos-postgresql-data-nspos-postgresql-primary-0       Bound   pvc-ID    15Gi       RWO           local-provisioner    2d 

    nsp-psa-restricted    opensearch-backup-pv                                   Bound   pvc-ID    1Gi        RWX           nfs-storage          2d 

    nsp-psa-restricted    opensearch-cluster-master-opensearch-cluster-master-0  Bound   pvc-ID    50Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    solr-pvc-nspos-solr-statefulset-0                      Bound   pvc-ID    1Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    storage-volume-nspos-prometheus-server-0               Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

  2. Enter the following:

    kubectl get pv ↵

    NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                          STORAGECLASS            REASON      AGE

    nspos-fluentd-logs-data       50Mi       ROX            Retain           Bound       nsp-psa-restricted/nspos-fluentd-logs-data                                                                         2d

    nspos-fluentd-posfile-data    1Gi        RWO            Retain           Bound       nsp-psa-restricted/nspos-fluentd-posfile-data                                                                      2d

    pv-nfs-subdir-provisioner     10Mi       RWO            Delete           Bound       nsp-psa-privileged/pvc-nfs-subdir-provisioner                                                                      2d

    pvc-0c8d9fdf-6ffc-4208-...    47Gi       RWO            Delete           Bound       nsp-psa-privileged/nfs-server-pvc                                              piraeus-storage                     2d 

    pvc-4ccbe6ef-e38c-4739-...    10G        RWX            Delete           Bound       nsp-psa-restricted/nsp-backup-storage                                          nfs-storage                         2d

    pvc-5b21e2e2-eec5-49a7-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-broker-0                                   local-provisioner                   2d

    pvc-50df27c7-fcf3-462d-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/storage-volume-nspos-prometheus-server-0                    local-provisioner                   2d

    pvc-59c23b56-95fb-4cf6-...    50Gi       RWO            Delete           Bound       nsp-psa-restricted/opensearch-cluster-master-opensearch-cluster-master-0       local-provisioner                   2d

    pvc-96d2ceb0-e490-470d-...    2Gi        RWO            Delete           Bound       nsp-psa-restricted/data-nspos-zookeeper-0                                      local-provisioner                   2d

    pvc-370ee803-cd02-4eac-...    15Gi       RWO            Delete           Bound       nsp-psa-restricted/nspos-postgresql-data-nspos-postgresql-primary-0            local-provisioner                   2d

    pvc-32991a91-1b35-42c9-...    1Gi        RWX            Delete           Bound       nsp-psa-restricted/opensearch-backup-pv                                        nfs-storage                         2d

    pvc-86581df8-026b-4b80-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-controller-0                               local-provisioner                   2d

    pvc-bf1c522b-985c-485c-...    1Gi        RWO            Delete           Bound       nsp-psa-restricted/solr-pvc-nspos-solr-statefulset-0                           local-provisioner                   2d

    pvc-ccee060b-6268-4601-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/datadir-nspos-neo4j-core-dc2-250b-0                         local-provisioner                   2d

  3. Verify that the PVCs are bound to PVs, and that each PV has the correct storage class.
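
As a quick filter, you can list only the PVCs that are not in the Bound state; an empty result means that all PVCs are bound (a simple sketch using standard kubectl output options):

kubectl get pvc -A --no-headers | grep -v Bound ↵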


15 

Verify that the NSP cluster databases are restored.


16 

Close the console window.

End of steps