To convert a standalone NSP system to DR

Purpose

Perform this procedure to create a DR NSP deployment by adding an NSP cluster in a separate data center to an existing standalone deployment.

The following represent the redundant data centers in the procedure:

  • DC A—initial standalone data center

  • DC B—data center of new DR NSP cluster

Note: You require root user privileges on each station.

Note: Command lines use the # symbol to represent the RHEL CLI prompt for the root user. Do not type the leading # symbol when you enter a command.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
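For illustration only, a fabricated release ID can be substituted into the deployer paths used throughout this procedure; the value shown below is an example, not a real release:

```shell
# RELEASE_ID below is a fabricated example; use the release ID of your installation.
RELEASE_ID="24.8.0-rel.100"                          # R.r.p-rel.version
DEPLOYER_BIN="/opt/nsp/NSP-CN-DEP-${RELEASE_ID}/bin"
echo "${DEPLOYER_BIN}"                               # → /opt/nsp/NSP-CN-DEP-24.8.0-rel.100/bin
```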

Note: Before you attempt a DR conversion, ensure that any third-party software installed on the NSP deployer host or any NSP cluster VM is disabled and stopped.

Steps
Back up NSP databases
 

Back up the NSP databases on the standalone NSP; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.


Ensure that you have the Kubernetes secrets backup file from DC A.

Note: If you need to make a backup copy of the Kubernetes secrets from DC A, see Step 105 of To install an NSP cluster.


Create NSP cluster in DC B
 

Log in as the root user on the DC B station designated for the new NSP deployer host VM.


Open a console window.


Perform the following sections of the procedure To install an NSP cluster in DC B.

Note: The number of VMs in each NSP cluster must match.

  1. Create NSP deployer host VM ( Step 1 to Step 6)—creates the NSP deployer host VM in DC B

    Note: The DC B VM specifications, such as disk layout and capacity, must be identical to the DC A VM specifications.

  2. Configure NSP deployer host networking ( Step 7 to Step 17)—sets the NSP deployer host communication parameters

  3. Install Kubernetes registry ( Step 18 to Step 27)—configures and deploys the Kubernetes registry

  4. Create NSP cluster VMs ( Step 28 to Step 49)—creates the new NSP cluster VMs

    Note: The DC B VM specifications, such as disk layout and capacity, must be identical to the DC A VM specifications.

  5. Deploy Kubernetes environment ( Step 50 to Step 105)—deploys the NSP cluster in DC B


Configure NSP cluster in DC B
 

Log in as the root or NSP admin user on the NSP deployer host in DC B.


Open a console window.


Open the following file copied from DC A using a plain-text editor such as vi:

/tmp/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


Configure the following NSP cluster address parameters in the platform section, ingressApplications subsection as shown below:

Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.

Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_public_address.

  ingressApplications:

    ingressController:

      clientAddresses:

        virtualIp: "client_IP"

        advertised: "client_public_address"

      internalAddresses:

        virtualIp: "internal_IP"

        advertised: "internal_public_address"

      mediationAddresses:

        virtualIp: "mediation_IP"

        advertised: "mediation_public_address"

    trapForwarder:

      mediationAddresses:

        virtualIpV4: "trapV4_mediation_IP"

        advertisedV4: "trapV4_mediation_public_address"

        virtualIpV6: "trapV6_mediation_IP"

        advertisedV6: "trapV6_mediation_public_address"

where

client_IP is the address for external client access

internal_IP is the address for internal communication

mediation_IP is the address for network mediation

trapV4_mediation_IP is the address for IPv4 network mediation

trapV6_mediation_IP is the address for IPv6 network mediation

each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
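As a sketch only, a completed ingressController subsection might look like the following; every address shown is a hypothetical placeholder from documentation ranges, and the trapForwarder subsection follows the same pattern:

```yaml
ingressApplications:
  ingressController:
    clientAddresses:
      virtualIp: "203.0.113.10"          # example client_IP
      advertised: "nsp.example.com"      # example client_public_address (NAT case)
    internalAddresses:
      virtualIp: "10.10.1.10"            # example internal_IP
      advertised: "10.10.1.10"           # same address advertised (no NAT)
    mediationAddresses:
      virtualIp: "10.10.2.10"            # example mediation_IP
      advertised: "10.10.2.10"
```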


10 

If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below:

    flowForwarder:

      mediationAddresses:

        virtualIpV4: "flowV4_mediation_IP"

        advertisedV4: "flowV4_mediation_public_address"

        virtualIpV6: "flowV6_mediation_IP"

        advertisedV6: "flowV6_mediation_public_address"

where

flowV4_mediation_IP is the mediation address for IPv4 flow collection

flowV6_mediation_IP is the mediation address for IPv6 flow collection

each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
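For illustration, a completed flowForwarder subsection with hypothetical placeholder addresses might read:

```yaml
flowForwarder:
  mediationAddresses:
    virtualIpV4: "10.10.3.10"            # example flowV4_mediation_IP
    advertisedV4: "198.51.100.30"        # example public address (NAT case)
    virtualIpV6: "2001:db8::30"          # example flowV6_mediation_IP
    advertisedV6: "2001:db8:ffff::30"    # example public address
```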


11 

Configure the parameters in the dr section as shown below:

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC A and have the same format; if one value is a hostname, the other must also be a hostname.

dr:

   dcName: "data_center"

   mode: "dr"

   peer: "peer_address"

   internalPeer: "peer_internal_address"

   peerDCName: "peer_data_center"

where

data_center is the unique alphanumeric name of the DC B cluster

peer_address is the address at which the DC A cluster is reachable over the client network

peer_internal_address is the address at which the DC A cluster is reachable over the internal network

peer_data_center is the unique alphanumeric name of the DC A cluster
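For illustration, a DC B dr section with hypothetical names and addresses might read as follows; the values are placeholders only:

```yaml
dr:
  dcName: "dcB"                 # example name assigned to the DC B cluster
  mode: "dr"
  peer: "203.0.113.5"           # example DC A client-network address
  internalPeer: "10.10.1.5"     # example DC A internal-network address
  peerDCName: "dcA"             # example name of the DC A cluster
```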


12 

If you are using a custom TLS server certificate, configure the following tls parameter in the deployment section:

   tls:                     

     customCaCert: CA_key_file

where CA_key_file is the CA.pem file generated in Step 7 of To generate custom NSP TLS certificates
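For example, assuming the generated CA.pem file was copied to a hypothetical location on the NSP deployer host:

```yaml
tls:
  customCaCert: /opt/nsp/certs/CA.pem   # hypothetical path to the generated CA.pem
```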


13 

Save and close the file.


14 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


15 

Enter the following to apply the configuration to the NSP cluster:

./nspdeployerctl config ↵


16 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

./nspdeployerctl import ↵


Restore NSP Kubernetes secrets in DC B
 
17 

Obtain and restore the NSP Kubernetes secrets from DC A.

  1. Enter the following:

    ./nspdeployerctl secret -i backup_file restore ↵

    where backup_file is the absolute path and filename of the secrets backup file to restore

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  2. Enter the password recorded during the backup creation.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass


18 

If the NSP uses a custom server certificate for client access, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster.

Enter the following:

./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵

where

customKey is the full path of the private server key

customCert is the full path of the server public certificate

customCaCert is the full path of the CA public certificate

Custom certificate and key files are created by performing To generate custom NSP TLS certificates.

Messages like the following are displayed as the server secret is updated:

secret/nginx-nb-tls-nsp patched

The following files may contain sensitive information. They are no longer required by NSP and may be removed.

  customKey

  customCert

  customCaCert


Reconfigure NSP cluster in DC A
 
19 

Log in as the root or NSP admin user on the NSP deployer host in DC A.


20 

Open a console window.


21 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml


22 

Edit the following line in the platform section, kubernetes subsection to read as shown below:

  deleteOnUndeploy: true


23 

Save and close the file.


24 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


25 

Enter the following to stop the NSP cluster in DC A:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster stops.

Old PVCs and PVs are deleted.


26 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


27 

Configure the parameters in the dr section as shown below:

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC B and have the same format; if one value is a hostname, the other must also be a hostname.

dr:

   dcName: "data_center"

   mode: "dr"

   peer: "peer_address"

   internalPeer: "peer_internal_address"

   peerDCName: "peer_data_center"

where

data_center is the unique alphanumeric name to assign to the DC A cluster

peer_address is the address at which the DC B cluster is reachable over the client network

peer_internal_address is the address at which the DC B cluster is reachable over the internal network

peer_data_center is the unique alphanumeric name to assign to the DC B cluster


28 

Save and close the nsp-config.yml file.


29 

Enter the following to apply the configuration to the NSP cluster:

./nspdeployerctl config ↵


Enable NSP restore mode in DC A
 
30 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --restore

./nspdeployerctl install --config --restore ↵


31 

Note: The nspos-solr-statefulset-n and opensearch-cluster-master-n pods are only supported starting with Release 25.8. The pods are excluded from the restore operation if the system has been upgraded from a release earlier than Release 25.8.

The following NSP cluster pods must be operational before the restore begins:

  • nsp-backup-storage-n

  • nspos-neo4j-core-default-n

  • nspos-postgresql-primary-n

  • nsp-file-service-app-n

  • nspos-solr-statefulset-n

  • present only if the NSP deployment includes Path Control functions:

    • nsp-tomcat-dc_name-n

    • nrcx-tomcat-dc_name-n

    where dc_name is the dcName value in the cluster configuration file

  • present only if the NSP deployment enables the loggingMonitoring feature:

    • opensearch-cluster-master-n

Enter the following periodically to list the pods; the cluster is ready for the restore when each required pod is in the Running state:

kubectl get pods -A ↵
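The readiness check above can be scripted; the following is a minimal sketch in which the helper name and the sample listing are invented for illustration. In practice you would pipe in live kubectl get pods -A output and pass the required pod names for your deployment:

```shell
# Sketch: verify from "kubectl get pods -A" output that each required pod is Running.
pods_ready() {
  # Reads a pod listing on stdin; succeeds only if every pod name passed as an
  # argument appears on a line whose STATUS column is Running.
  listing=$(cat)
  for required in "$@"; do
    printf '%s\n' "$listing" | grep "$required" | grep -q 'Running' || return 1
  done
}

# Fabricated sample listing; replace with: kubectl get pods -A | pods_ready ...
if pods_ready nsp-backup-storage nspos-neo4j-core-default nspos-postgresql-primary <<'EOF'
NAMESPACE   NAME                          READY   STATUS    RESTARTS   AGE
nsp         nsp-backup-storage-0          1/1     Running   0          5m
nsp         nspos-neo4j-core-default-0    1/1     Running   0          5m
nsp         nspos-postgresql-primary-0    1/1     Running   0          5m
EOF
then
  echo "all required pods Running"
fi
```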


32 

If any required pod is not Running, return to Step 31.

Note: A restore attempt fails unless each required pod is Running.


Restore data in DC A
 
33 

Enter the following on the NSP deployer VM:

cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵


34 

Enter one or more of the following, as required, to restore system data and databases:

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

Note: If the kubeconfig files in /opt/nsp/nsp-configurator/kubeconfig on the NSP deployer host contain an expired certificate, enter the following command to resynchronize the kubeconfig files from the NSP cluster; see “How do I update nsp_kubeconfig on the NSP deployer host?” in the System Administrator Guide.

./nspk8sctl update --kubeconfig ↵

  1. To restore the NSP file service data:

    ./nspos-db-restore-k8s.sh nsp-file-service backup_dir/backup_file

  2. To restore the NSP Neo4j database:

    ./nspos-db-restore-k8s.sh nspos-neo4j backup_dir/backup_file

  3. To restore the NSP PostgreSQL database:

    ./nspos-db-restore-k8s.sh nspos-postgresql backup_dir/backup_file

  4. To restore the NSP Solr database:

    ./nspos-db-restore-k8s.sh nspos-solr backup_dir/backup_file

    Note: This step applies only starting with Release 25.8. The NSP Solr database is excluded from backup and restore operations if the system has been upgraded from a release earlier than Release 25.8.

  5. To restore the NSP Tomcat database:

    ./nspos-db-restore-k8s.sh nsp-tomcat backup_dir/backup_file

  6. To restore the cross-domain Tomcat database:

    ./nspos-db-restore-k8s.sh nrcx-tomcat backup_dir/backup_file

  7. To restore the OpenSearch database:

    ./nspos-db-restore-k8s.sh opensearch backup_dir/backup_file

Note: This step applies only in Release 25.8 or later. The OpenSearch database is excluded from backup and restore operations if the system has been upgraded from a release earlier than Release 25.8.

where

backup_dir is the directory that contains the backup file

backup_file is the backup file name, for example:

  • for PostgreSQL, the name is nspos-postgresql_backup_timestamp.tar.gz

  • for OpenSearch, the name is opensearch-cluster-master-n


Start NSP clusters in DC A
 
35 

Perform the following steps in each data center.

Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.

  1. Open a terminal session to the NSP deployer VM.

  2. Log in as the root or NSP admin user.

  3. Open the following file with a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  4. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  5. Save and close the file.

  6. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  7. On the NSP deployer VM, enter the following:

    ./nspdeployerctl install --deploy ↵

    The NSP initializes using the restored data.

  8. Enter the following periodically on the NSP cluster host to display the cluster status:

    kubectl get pods -A ↵

    The cluster is operational when the status of each pod is Running.


36 

Close the open console windows in DC A.


Deploy NSP software in DC B
 
37 

Enter the following on the NSP deployer host in DC B:

./nspdeployerctl install --config --deploy ↵

The NSP cluster initializes as the standby cluster in the new DR deployment.


38 

Enter the following every few minutes to monitor the NSP cluster initialization:

kubectl get pods -A ↵

The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

Note: Some pods may remain in the Pending state; for example, MDM pods created in addition to the default number for a deployment.


39 

Close the open console windows in DC B.


Reconfigure NFM-P
 
40 

If your system includes NFM-P:

  1. Shut down the NFM-P server.

  2. Log in to the main server station as the root user.

  3. Open a console window.

  4. Enter the following:

    samconfig -m main ↵

    The following is displayed:

    Start processing command line inputs...

    <main>

  5. Add the DC B client VIP to nspos ip-list.

    Perform Step 12 to Step 32 of To add an independent NFM-P to an existing NSP deployment.

  6. Start the main server.

End of steps