How do I restore the NSP cluster databases?

Purpose

Perform this procedure to restore one or more of the following in each NSP cluster:

  • NSP file service data

  • Neo4j database

  • PostgreSQL database

  • nsp-tomcat database

  • nrcx-tomcat database

Note: The Neo4j, nsp-tomcat, and nrcx-tomcat database backup files are each named graph.db. You must ensure that you are using the correct graph.db backup file when you restore the Neo4j, nsp-tomcat, or nrcx-tomcat database.

Note: If you are performing the procedure as part of a system conversion, migration, or upgrade procedure in a DR deployment, you must perform the procedure only in the new primary NSP cluster.

Note: You can specify a local backup file path, or a remote path, if the remote server is reachable from the NSP deployer host and from the NSP cluster host.

To specify a remote path, use the following format for the backup_file parameter in the command, where user has access to backup_file at the server address:

user@server:/backup_file
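For example, the following hypothetical value specifies a backup file that the user nspadmin can read on a remote station at 203.0.113.10; the user name, address, and path are illustrative only:

nspadmin@203.0.113.10:/opt/nsp/backups/nspos-postgresql_backup_20230615-0300.tar.gz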

Note: If root access for remote operations is disabled in the NSP configuration, remote operations such as SSH and SCP as the root user are not permitted within an NSP cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.

For simplicity, such steps describe only root-user access.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
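For example, a hypothetical release-ID of 23.11.0-rel.325 yields paths such as the following; the release and version values are illustrative only:

/opt/nsp/NSP-CN-DEP-23.11.0-rel.325/bin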

Steps
Prepare to restore databases
 

1 

Log in as the root user on the NSP deployer host.


2 

Open a console window.


3 

If you are restoring the data on new NSP cluster VMs, create and distribute an SSH key for password-free NSP deployer host access to each NSP cluster VM.

  1. Enter the following:

    ssh-keygen -N "" -f ~/.ssh/id_rsa -t rsa ↵

  2. Enter the following for each NSP cluster VM to distribute the SSH key to the VM.

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@address

    where address is the NSP cluster VM IP address
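    For example, assuming a hypothetical three-node cluster whose VM addresses are 192.0.2.11 through 192.0.2.13, you would enter the following:

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.0.2.11 ↵

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.0.2.12 ↵

    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.0.2.13 ↵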


4 

Perform one of the following.

Note: You must not proceed to the next step until the cluster is ready.

  1. If both of the following are true, you must stop each cluster and remove all existing cluster data:

    • You are restoring the data in an existing NSP cluster, rather than on new NSP cluster VMs.

    • You are restoring all NSP databases.

    Perform the following steps on the NSP deployer host in each NSP cluster.

    Note: In a DR deployment, you must perform the steps first on the standby cluster.

    1. Log in as the root user.

    2. Open the following file with a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection to read as shown below:

        deleteOnUndeploy: true

    4. Save and close the file.

    5. Enter the following:

      cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

    6. Enter the following:

      ./nspdeployerctl uninstall --undeploy ↵

      The NSP cluster is undeployed, and the existing data is removed.

  2. If you are not restoring all databases, you must delete only the existing data in each database that you are restoring.

    Perform the following steps on the NSP deployer host in each NSP cluster.

    Note: In a DR deployment, you must perform the steps first on the NSP cluster that you want to start as the standby cluster.

    1. Log in as the root user.

    2. Open the following file using a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection to read as shown below:

        deleteOnUndeploy: false

    4. Save and close the file.

    5. On each NSP cluster node, enter the following for each database that you are restoring (see the example after this step):

      Note: Database instances are dynamically allocated to NSP cluster nodes, so some nodes may not have an instance of a specific database. If a database instance is not present on a node, the command returns an error message that you can safely ignore.

      rm -rf /opt/nsp/volumes/db_name/* ↵

      where db_name is the database name, and is one of:

      • nsp-file-service

      • nspos-neo4j

      • nspos-postgresql

      • nsp-tomcat

      • nrcx-tomcat
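      For example, the following hypothetical command deletes only the existing Neo4j data on a node; delete only the data of the databases that you are restoring:

      rm -rf /opt/nsp/volumes/nspos-neo4j/* ↵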


Enable NSP restore mode
 

5 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


6 

Enter the following to enter restore mode:

./nspdeployerctl install --config --restore ↵


7 

The following NSP cluster pods must be operational before the restore begins:

  • nsp-backup-storage-n

  • nspos-neo4j-core-default-n

  • nspos-postgresql-primary-n

  • nsp-file-service-app-n

  • the following, which are present only if MDM or IPRC-related installation options are enabled:

    • nsp-tomcat-dc_name-n

    • nrcx-tomcat-dc_name-n

    where dc_name is the dcName value in the cluster configuration file

Enter the following periodically to list the pods; the cluster is ready for the restore when each required pod is in the Running state:

kubectl get pods -A ↵
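As an optional convenience, you can filter the output to show only the required pods; the following example, which assumes the pod name prefixes listed in this step, is illustrative only:

kubectl get pods -A | grep -E 'backup-storage|neo4j-core|postgresql-primary|file-service|tomcat' ↵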


8 

If any required pod is not Running, return to Step 7.

Note: A restore attempt fails unless each required pod is Running.


Restore data
 

9 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵


10 

Enter one or more of the following, as required, to restore system data and databases:

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

  1. To restore the NSP file service data:

    ./nspos-db-restore-k8s.sh nsp-file-service backup_dir/backup_file

  2. To restore the NSP Neo4j database:

    ./nspos-db-restore-k8s.sh nspos-neo4j backup_dir/backup_file

  3. To restore the NSP PostgreSQL database:

    ./nspos-db-restore-k8s.sh nspos-postgresql backup_dir/backup_file

  4. To restore the NSP Tomcat database:

    ./nspos-db-restore-k8s.sh nsp-tomcat backup_dir/backup_file

  5. To restore the cross-domain Tomcat database:

    ./nspos-db-restore-k8s.sh nrcx-tomcat backup_dir/backup_file

where

backup_dir is the directory that contains the backup file

backup_file is the backup file name; for example, the PostgreSQL backup file name is nspos-postgresql_backup_timestamp.tar.gz
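For example, the following hypothetical command restores the PostgreSQL database from a local backup directory; the directory path and timestamp are illustrative only:

  ./nspos-db-restore-k8s.sh nspos-postgresql /opt/nsp/backups/nspos-postgresql_backup_20230615-0300.tar.gz

A remote backup file can be specified using the user@server:/backup_file format described earlier in the procedure.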


Start NSP clusters
 
11 

Perform the following steps in each data center.

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

  1. Log in as the root user on the NSP deployer host.

  2. Open a console window.

  3. Open the following file with a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  4. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  5. Save and close the file.

  6. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  7. Enter the following to exit restore mode and terminate the restore pods:

    ./nspdeployerctl uninstall --undeploy ↵

  8. Open a CLI on the NSP cluster host.

  9. Enter the following:

    kubectl get pods -A ↵

    The pods are listed.

  10. If any of the following restore pods is listed, the pod is not terminated; return to substep 9.

    • nsp-file-service-app-n

    • nspos-neo4j-core-default-n

    • nspos-postgresql-primary-n

    • nsp-tomcat-dc_name-n

    • nrcx-tomcat-dc_name-n

    Note: You must not proceed to the next step if a restore pod is listed.

  11. Restore the nsp-keycloak-*-secret files from the backup_dir that was specified during the database backup to the following NSP deployer host directory (see the example after this step):

    /opt/nsp/nsp-configurator/generated/

  12. On the NSP deployer host, enter the following:

    ./nspdeployerctl install --deploy ↵

    The NSP initializes using the restored data.

  13. Enter the following periodically on the NSP cluster host to display the cluster status:

    kubectl get pods -A ↵

    The cluster is operational when the status of each pod is Running.
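  For example, for substep 11, assuming that the backup directory is /opt/nsp/backups on the NSP deployer host, the following hypothetical command restores the secret files; use scp instead if the backup directory is on a remote station:

    cp /opt/nsp/backups/nsp-keycloak-*-secret /opt/nsp/nsp-configurator/generated/ ↵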


12 

Close the open console windows.

End of steps