How do I restore the NSP cluster databases?

Purpose

Perform this procedure to restore one or more of the following in each NSP cluster:

  • NSP Kubernetes secrets

  • NSP file service data

  • Neo4j database

  • PostgreSQL database

  • nspos-solr database

  • nsp-tomcat database

  • nrcx-tomcat database

Note: The Neo4j, nsp-tomcat, and nrcx-tomcat database backup files are each named graph.db. You must ensure that you are using the correct graph.db backup file when you restore the Neo4j, nsp-tomcat, or nrcx-tomcat database.

Note: If you are performing the procedure as part of a system conversion, migration, or upgrade procedure in a DR deployment, you must perform the procedure only in the new primary NSP cluster.

Note: You can specify a local backup file path, or a remote path, if the remote server is reachable from the NSP deployer host and from the NSP cluster host.

To specify a remote path, use the following format for the backup_file parameter in the command, where user has access to backup_file at the server address:

user@server:/backup_file
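For example, a remote backup_file specification can be assembled as follows; every value here is hypothetical and must be replaced with your own user, server address, and backup path:

```shell
# All values are hypothetical; substitute your own user, server, and path.
BACKUP_USER="nspback"            # account with read access to the backup file
BACKUP_SERVER="203.0.113.10"     # server reachable from the deployer and cluster hosts
BACKUP_PATH="/opt/nsp/backups/nspos-postgresql_backup_20240101-0600.tar.gz"

# Assemble the backup_file parameter in user@server:/backup_file form
BACKUP_FILE="${BACKUP_USER}@${BACKUP_SERVER}:${BACKUP_PATH}"
echo "${BACKUP_FILE}"
```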

Note: If root access for remote operations is disabled in the NSP configuration, remote operations such as SSH and SCP as the root user are not permitted within an NSP cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.

For simplicity, such steps describe only root-user access.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

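As an illustration, with hypothetical values R.r.p = 23.11.0 and version = 300, a release-ID path expands as follows:

```shell
# Hypothetical values: NSP release 23.11.0, build version 300
RELEASE_ID="23.11.0-rel.300"
echo "/opt/nsp/NSP-CN-DEP-${RELEASE_ID}/bin"
```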

Steps
Prepare to restore databases
 

1 

Log in as the root or NSP admin user on the NSP deployer host.


2 

Open a console window.


3 

If you are restoring the data on new NSP cluster VMs, create and distribute an SSH key for password-free NSP deployer host access to each NSP cluster VM.

  1. Enter the following:

    ssh-keygen -N "" -f path -t rsa ↵

    where path is the SSH key file path, for example, /home/user/.ssh/id_rsa

    An SSH key is generated.

  2. Enter the following for each NSP cluster VM to distribute the key to the VM:

    ssh-copy-id -i key_file user@address ↵

    where

    user is the designated NSP ansible user, if root-user access is restricted; otherwise, user@ is not required

    key_file is the SSH key file, for example, /home/user/.ssh/id_rsa.pub

    address is the NSP cluster VM IP address
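The key distribution in the steps above can be sketched as a loop over the cluster VM addresses. This is a dry-run sketch that prints the ssh-copy-id commands rather than running them; the key path, user name, and VM addresses are hypothetical:

```shell
# Sketch only: prints the ssh-copy-id commands instead of running them.
# Key path, user, and VM addresses are hypothetical placeholders.
KEY_FILE="${HOME}/.ssh/id_rsa"
ANSIBLE_USER="nspadmin"   # omit user@ if root-user access is not restricted
CLUSTER_VMS="198.51.100.11 198.51.100.12 198.51.100.13"

for address in ${CLUSTER_VMS}; do
  echo "ssh-copy-id -i ${KEY_FILE}.pub ${ANSIBLE_USER}@${address}"
done
```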


4 

Perform one of the following.

Note: You must not proceed to the next step until the cluster is ready.

  1. If both of the following are true, you must stop each cluster and remove all existing cluster data:

    • You are restoring the data in an existing NSP cluster, rather than on new NSP cluster VMs.

    • You are restoring all NSP databases.

    Perform the following steps on the NSP deployer host in each NSP cluster.

    Note: In a DR deployment, you must perform the steps first on the standby cluster.

    1. Log in as the root or NSP admin user.

    2. Open the following file with a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection to read as shown below:

        deleteOnUndeploy: true

    4. Save and close the file.

    5. Enter the following:

      cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

    6. Enter the following:

      ./nspdeployerctl uninstall --undeploy ↵

      The NSP cluster is undeployed, and the existing data is removed.

  2. If you are not restoring all databases, you must delete only the existing data in each database that you are restoring.

    Perform the following steps on the NSP deployer host in each NSP cluster.

    Note: In a DR deployment, you must perform the steps first on the NSP cluster that you want to start as the standby cluster.

    1. Log in as the root or NSP admin user.

    2. Open the following file using a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection to read as shown below:

        deleteOnUndeploy: false

    4. Save and close the file.

    5. On the NSP cluster host, enter the following to determine which node the backup files are on:

      kubectl get pods -o wide -A | grep backup | awk '{print $8}' ↵

    6. Log in on the NSP cluster node where the backup files are.

    7. On the node, enter the following for each database that you are restoring:

      Note: Database instances are dynamically allocated to NSP cluster nodes, so some nodes may not have an instance of a specific database. If a database instance is not present on a node, the command returns an error message that you can safely ignore.

      rm -rf /opt/nsp/volumes/db_name/* ↵

      where db_name is the database name, and is one of:

      • nsp-file-service

      • nspos-neo4j

      • nspos-postgresql

      • nspos-solr

      • nsp-tomcat

      • nrcx-tomcat
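The per-database cleanup above can be sketched as a loop over the database names. This dry-run sketch prints the removal command for each database rather than running it; remove the echo on a real node, and include only the databases you are actually restoring:

```shell
# Dry-run sketch: prints the removal command for each database being restored.
# Remove the echo to actually delete the data; errors for databases with no
# instance on this node can be safely ignored.
DB_NAMES="nsp-file-service nspos-neo4j nspos-postgresql nspos-solr nsp-tomcat nrcx-tomcat"

for db_name in ${DB_NAMES}; do
  echo "rm -rf /opt/nsp/volumes/${db_name}/*"
done
```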


Enable NSP restore mode
 

5 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


6 

Enter the following to enter restore mode:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --restore

./nspdeployerctl install --config --restore ↵


7 

The following NSP cluster pods must be operational before the restore begins:

  • nsp-backup-storage-n

  • nspos-neo4j-core-default-n

  • nspos-postgresql-primary-n

  • nsp-file-service-app-n

  • nspos-solr-statefulset-n

  • the following, which are present only if the NSP deployment includes Path Control functions:

    • nsp-tomcat-dc_name-n

    • nrcx-tomcat-dc_name-n

    where dc_name is the dcName value in the cluster configuration file

Enter the following periodically to list the pods; the cluster is ready for the restore when each required pod is in the Running state:

kubectl get pods -A ↵


8 

If any required pod is not Running, return to Step 7.

Note: A restore attempt fails unless each required pod is Running.
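The readiness check can be scripted. The helper below is a hypothetical sketch: it counts how many required pods are absent or not Running in a captured kubectl listing, so a result of 0 means the cluster is ready. On a real system, feed it the output of kubectl get pods -A; the sample listing here is fabricated for illustration:

```shell
# Hypothetical helper: counts required pods that are absent from, or not
# Running in, a captured "kubectl get pods -A" listing. 0 means ready.
not_running() {
  listing="$1"; shift
  count=0
  for pod in "$@"; do
    printf '%s\n' "${listing}" | grep "${pod}" | grep -q "Running" || count=$((count + 1))
  done
  echo "${count}"
}

# Fabricated sample listing with one pod still Pending
SAMPLE='nsp  nspos-neo4j-core-default-0   1/1  Running  0  5m
nsp  nspos-postgresql-primary-0   0/1  Pending  0  5m'

not_running "${SAMPLE}" nspos-neo4j-core-default nspos-postgresql-primary
```

On a live cluster, the first argument would be `"$(kubectl get pods -A)"` and the remaining arguments the required pod name prefixes.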


Restore data
 

9 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵


10 

Enter one or more of the following, as required, to restore system data and databases:

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

  1. To restore the NSP file service data:

    ./nspos-db-restore-k8s.sh nsp-file-service backup_dir/backup_file ↵

  2. To restore the NSP Neo4j database:

    ./nspos-db-restore-k8s.sh nspos-neo4j backup_dir/backup_file ↵

  3. To restore the NSP PostgreSQL database:

    ./nspos-db-restore-k8s.sh nspos-postgresql backup_dir/backup_file ↵

  4. To restore the NSP Solr database:

    ./nspos-db-restore-k8s.sh nspos-solr backup_dir/backup_file ↵

  5. To restore the NSP Tomcat database:

    ./nspos-db-restore-k8s.sh nsp-tomcat backup_dir/backup_file ↵

  6. To restore the cross-domain Tomcat database:

    ./nspos-db-restore-k8s.sh nrcx-tomcat backup_dir/backup_file ↵

where

backup_dir is the directory that contains the backup file

backup_file is the backup file name, for example, for PostgreSQL, the name is nspos-postgresql_backup_timestamp.tar.gz
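As an illustration, a PostgreSQL restore invocation with hypothetical local paths would be assembled as follows; this sketch prints the command rather than running it:

```shell
# Hypothetical example: restore the PostgreSQL database from a local backup.
BACKUP_DIR="/opt/nsp/backups"
BACKUP_FILE="nspos-postgresql_backup_20240101-0600.tar.gz"

# Prints the command; run nspos-db-restore-k8s.sh directly on a real system.
echo "./nspos-db-restore-k8s.sh nspos-postgresql ${BACKUP_DIR}/${BACKUP_FILE}"
```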


Restore NSP Kubernetes secrets
 
11 

Perform the following steps in each data center to restore the NSP Kubernetes secrets.

Note: Ensure that you restore each backup file on the correct NSP cluster; an NSP secrets backup is specific to an NSP cluster.

  1. Enter the following on the NSP deployer host:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  2. Enter the following:

    ./nspdeployerctl secret -i backup_file restore ↵

    where backup_file is the absolute path and filename of the secrets backup file to restore

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  3. Enter the password recorded during the backup creation.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass


Start NSP clusters
 
12 

Perform the following steps in each data center.

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy OR nspdeployerctl --ask-pass install --deploy

  1. Log in as the root or NSP admin user on the NSP deployer host.

  2. Open a console window.

  3. Open the following file with a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  4. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  5. Save and close the file.

  6. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  7. Enter the following to exit restore mode and terminate the restore pods:

    ./nspdeployerctl uninstall --undeploy ↵

  8. Open a CLI on the NSP cluster host.

  9. Enter the following:

    kubectl get pods -A ↵

    The pods are listed.

  10. If any of the following restore pods is listed, the pod is not terminated; return to substep 9.

    • nsp-file-service-app-n

    • nspos-neo4j-core-default-n

    • nspos-postgresql-primary-n

    • nspos-solr-statefulset-n

    • nsp-tomcat-dc_name-n

    • nrcx-tomcat-dc_name-n

    Note: You must not proceed to the next step if a restore pod is listed.

  11. On the NSP deployer host, enter the following:

    ./nspdeployerctl install --deploy ↵

    The NSP initializes using the restored data.

  12. Enter the following periodically on the NSP cluster host to display the cluster status:

    kubectl get pods -A ↵

    The cluster is operational when the status of each pod is Running.


13 

Close the open console windows.

End of steps