How do I restore the CLM cluster databases?

Purpose

Perform this procedure to restore one or more of the following databases in each CLM cluster:

• nsp-file-service

• nspos-postgresql

Note: If you are performing the procedure as part of a system conversion, migration, or upgrade procedure in a DR deployment, you must perform the procedure only in the new primary CLM cluster.

Note: You can specify a local backup file path, or a remote path if the remote server is reachable from the CLM deployer host and from the CLM cluster host.

To specify a remote path, use the following format for the backup_file parameter in the command, where user has access to backup_file at the server address:

user@server:/backup_file
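For example, if the backup file resides on a hypothetical remote server at 203.0.113.10 and is readable there by a hypothetical user named clmadmin, the backup_file parameter would be specified as shown below; the server address, user name, directory, and timestamp are examples only, so substitute the values for your environment:

clmadmin@203.0.113.10:/opt/backups/nspos-postgresql_backup_20240115-0600.tar.gz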

Note: If root access for remote operations is disabled in the CLM configuration, remote operations such as SSH and SCP as the root user are not permitted within a CLM cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.

For simplicity, such steps describe only root-user access.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the CLM release, in the form MAJOR.minor.patch

version is a numeric value
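For example, assuming a hypothetical CLM release of 24.4.0 and a version value of 10, release-ID is 24.4.0-rel.10, and a path such as the deployer bin directory used later in this procedure reads as follows; substitute the release-ID value for your installed release:

/opt/nsp/NSP-CN-DEP-24.4.0-rel.10/bin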

Steps
Prepare to restore databases
 

1 

Log in as the root or CLM admin user on the CLM deployer host.


2 

Open a console window.


3 

If you are restoring the data on new CLM cluster VMs, create and distribute an SSH key for password-free CLM deployer host access to each CLM cluster VM; an end-to-end example follows the substeps.

  1. Enter the following:

    ssh-keygen -N "" -f path -t rsa ↵

    where path is the SSH key file path, for example, /home/user/.ssh/id_rsa

    An SSH key is generated.

  2. Enter the following for each CLM cluster VM to distribute the key to the VM:

    ssh-copy-id -i key_file user@address ↵

    where

    user is the designated CLM ansible user, if root-user access is restricted; otherwise, user@ is not required

    key_file is the SSH key file, for example, /home/user/.ssh/id_rsa.pub

    address is the CLM cluster VM IP address
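The following is a hypothetical end-to-end example of substeps 1 and 2, assuming the key file path /home/user/.ssh/id_rsa shown above, a CLM cluster VM at 203.0.113.21, and a designated CLM ansible user named clmansible; substitute the values for your environment:

ssh-keygen -N "" -f /home/user/.ssh/id_rsa -t rsa ↵

ssh-copy-id -i /home/user/.ssh/id_rsa.pub clmansible@203.0.113.21 ↵

Repeat the ssh-copy-id command for each CLM cluster VM IP address.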


4 

Perform one of the following.

Note: You must not proceed to the next step until the cluster is ready.

  1. If both of the following are true, you must stop each cluster and remove all existing cluster data:

    • You are restoring the data in an existing CLM cluster, rather than on new CLM cluster VMs.

    • You are restoring all CLM databases.

    Perform the following steps on the CLM deployer host in each CLM cluster.

    Note: In a DR deployment, you must perform the steps first on the standby cluster.

    1. Log in as the root or CLM admin user.

    2. Open the following file with a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection, to read as shown below:

        deleteOnUndeploy: true

    4. Save and close the file.

    5. Enter the following:

      cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

    6. Enter the following:

      ./nspdeployerctl uninstall --undeploy ↵

      The CLM cluster is undeployed, and the existing data is removed.

  2. If you are not restoring all databases, you must delete only the existing data in each database that you are restoring.

    Perform the following steps on the CLM deployer host in each CLM cluster.

    Note: In a DR deployment, you must perform the steps first on the CLM cluster that you want to start as the standby cluster.

    1. Log in as the root or CLM admin user.

    2. Open the following file using a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

    3. Edit the following line in the platform section, kubernetes subsection, to read as shown below:

        deleteOnUndeploy: false

    4. Save and close the file.

    5. On the CLM cluster host, enter the following to determine which node the backup files are on:

      kubectl get pods -o wide -A | grep backup | awk '{print $8}' ↵

    6. Log in on the CLM cluster node where the backup files are.

    7. On the node, enter the following for each database that you are restoring:

      Note: Database instances are dynamically allocated to CLM cluster nodes, so some nodes may not have an instance of a specific database. If a database instance is not present on a node, the command returns an error message that you can safely ignore.

      rm -rf /opt/nsp/volumes/db_name/* ↵

      where db_name is the database name, and is one of the following (a combined example appears after the list):

      • nsp-file-service

      • nspos-postgresql
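      For example, if you are restoring both databases and the node identified in substep 5 hosts an instance of each, the cleanup commands on that node are:

      rm -rf /opt/nsp/volumes/nsp-file-service/* ↵

      rm -rf /opt/nsp/volumes/nspos-postgresql/* ↵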


Enable CLM restore mode
 

5 

Enter the following on the CLM deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


6 

Enter the following to enter restore mode:

Note: If the CLM cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example; you are then prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --restore

./nspdeployerctl install --config --restore ↵


7 

The following CLM cluster pod must be operational before the restore begins:

  • nspos-postgresql-primary-n

Enter the following periodically to list the pods; the cluster is ready for the restore when each required pod is in the Running state:

kubectl get pods -A ↵
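If you prefer to check only the required pod rather than scan the full pod list, you can optionally filter the output; this filtering is a convenience and is not part of the documented procedure:

kubectl get pods -A | grep nspos-postgresql-primary ↵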


8 

If any required pod is not Running, return to Step 7.

Note: A restore attempt fails unless each required pod is Running.


Restore data
 

9 

Enter the following on the CLM deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵


10 

Enter one or more of the following to restore system data and the database:

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

To restore the CLM PostgreSQL database:

./nspos-db-restore-k8s.sh nspos-postgresql backup_dir/backup_file ↵

where

backup_dir is the directory that contains the backup file

backup_file is the backup file name; for example, the PostgreSQL backup file name is nspos-postgresql_backup_timestamp.tar.gz
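The following is a hypothetical invocation, assuming the backup file is in a local directory named /opt/nsp/backups and has an example timestamp of 20240115-0600; a remote path in the user@server:/backup_file format described earlier can be used instead:

./nspos-db-restore-k8s.sh nspos-postgresql /opt/nsp/backups/nspos-postgresql_backup_20240115-0600.tar.gz ↵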


Start CLM clusters
 
11 

Perform the following steps in each data center.

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

Note: If the CLM cluster VMs do not have the required SSH key, you must include the --ask-pass argument in each nspdeployerctl command, as shown in the following examples; you are then prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy OR nspdeployerctl --ask-pass install --deploy

  1. Log in as the root or CLM admin user on the CLM deployer host.

  2. Open a console window.

  3. Open the following file with a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  4. Edit the following line in the platform section, kubernetes subsection, to read as shown below:

      deleteOnUndeploy: false

  5. Save and close the file.

  6. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  7. Enter the following to exit restore mode and terminate the restore pods:

    ./nspdeployerctl uninstall --undeploy ↵

  8. Open a CLI on the CLM cluster host.

  9. Enter the following:

    kubectl get pods -A ↵

    The pods are listed.

  10. If the following restore pod is listed, the pod is not terminated; return to substep 9.

    • nspos-postgresql-primary-n

    Note: You must not proceed to the next step if a restore pod is listed.

  11. On the CLM deployer host, enter the following:

    ./nspdeployerctl install --deploy ↵

    The CLM initializes using the restored data.

  12. Enter the following periodically on the CLM cluster host to display the cluster status:

    kubectl get pods -A ↵

    The cluster is operational when the status of each pod is Running; an example check is shown below.
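    For example, the following optional command filters the pod list to entries that are not in the Running state; the column header line, and pods in other states such as Completed, also appear in the output:

    kubectl get pods -A | grep -v Running ↵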


12 

Close the open console windows.

End of steps