To convert a standalone NSP system to DR
Purpose
Perform this procedure to create a DR NSP deployment by adding an NSP cluster in a separate data center to an existing standalone deployment.
The following represent the redundant data centers in the procedure:
DC A is the data center that contains the existing standalone NSP cluster.
DC B is the data center that is to contain the new NSP cluster.
Note: You require root user privileges on each station.
Note: Command lines use the # symbol to represent the RHEL CLI prompt for the root user. Do not type the leading # symbol when you enter a command.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
Note: Before you attempt a DR conversion, ensure that any third-party software installed on the NSP deployer host or any NSP cluster VM is disabled and stopped.
Steps
Back up NSP databases

1. Back up the NSP databases on the standalone NSP; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.
2. Ensure that you have the Kubernetes secrets backup file from DC A.
Note: If you need to make a backup copy of the Kubernetes secrets from DC A, see Step 105 of To install an NSP cluster.
Create NSP cluster in DC B

3. Log in as the root user on the DC B station designated for the new NSP deployer host VM.
4. Open a console window.
5. Perform the required sections of the procedure To install an NSP cluster to create the NSP cluster in DC B.
Note: The number of VMs in each NSP cluster must match.
Configure NSP cluster in DC B

6. Log in as the root or NSP admin user on the NSP deployer host in DC B.
7. Open a console window.
8. Open the following file, which you copied from DC A, using a plain-text editor such as vi:
/tmp/nsp-config.yml
Note: See nsp-config.yml file format for configuration information.
9. Configure the following NSP cluster address parameters in the platform section, ingressApplications subsection, as shown below:
Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.
Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_public_address.

  ingressApplications:
    ingressController:
      clientAddresses:
        virtualIp: "client_IP"
        advertised: "client_public_address"
      internalAddresses:
        virtualIp: "internal_IP"
        advertised: "internal_public_address"
      mediationAddresses:
        virtualIp: "mediation_IP"
        advertised: "mediation_public_address"
    trapForwarder:
      mediationAddresses:
        virtualIpV4: "trapV4_mediation_IP"
        advertisedV4: "trapV4_mediation_public_address"
        virtualIpV6: "trapV6_mediation_IP"
        advertisedV6: "trapV6_mediation_public_address"

where
client_IP is the address for external client access
internal_IP is the address for internal communication
mediation_IP is the address for network mediation
trapV4_mediation_IP is the address for IPv4 network mediation
trapV6_mediation_IP is the address for IPv6 network mediation
each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
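For reference only, the following shows the subsection completed for a hypothetical dual-stack deployment behind NAT; every address and hostname is a placeholder, not a default or a recommendation:

  ingressApplications:
    ingressController:
      clientAddresses:
        virtualIp: "203.0.113.10"
        advertised: "nsp-dcb.example.com"
      internalAddresses:
        virtualIp: "10.10.20.10"
        advertised: "10.10.20.10"
      mediationAddresses:
        virtualIp: "198.51.100.10"
        advertised: "203.0.113.20"
    trapForwarder:
      mediationAddresses:
        virtualIpV4: "198.51.100.11"
        advertisedV4: "203.0.113.21"
        virtualIpV6: "2001:db8::11"
        advertisedV6: "2001:db8:ffff::11"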
10. If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection, as shown below:

  flowForwarder:
    mediationAddresses:
      virtualIpV4: "flowV4_mediation_IP"
      advertisedV4: "flowV4_mediation_public_address"
      virtualIpV6: "flowV6_mediation_IP"
      advertisedV6: "flowV6_mediation_public_address"

where
flowV4_mediation_IP is the mediation address for IPv4 flow collection
flowV6_mediation_IP is the mediation address for IPv6 flow collection
each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment
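As an illustration of placement, using placeholder addresses only, the flowForwarder block sits beside the ingressController and trapForwarder blocks under the same ingressApplications subsection:

  ingressApplications:
    flowForwarder:
      mediationAddresses:
        virtualIpV4: "198.51.100.12"
        advertisedV4: "203.0.113.22"
        virtualIpV6: "2001:db8::12"
        advertisedV6: "2001:db8:ffff::12"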
11. Configure the parameters in the dr section as shown below:
Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC A and have the same format; if one value is a hostname, the other must also be a hostname.

  dr:
    dcName: "data_center"
    mode: "dr"
    peer: "peer_address"
    internalPeer: "peer_internal_address"
    peerDCName: "peer_data_center"

where
data_center is the unique alphanumeric name of the DC B cluster
peer_address is the address at which the DC A cluster is reachable over the client network
peer_internal_address is the address at which the DC A cluster is reachable over the internal network
peer_data_center is the unique alphanumeric name of the DC A cluster
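For illustration only, a hypothetical DC B dr section in which both clusters are reached by hostname; every value is a placeholder:

  dr:
    dcName: "dc-b"
    mode: "dr"
    peer: "nsp-dca.example.com"
    internalPeer: "nsp-dca-internal.example.com"
    peerDCName: "dc-a"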
12. If you are using a custom TLS server certificate, configure the following tls parameter in the deployment section:

  tls:
    customCaCert: CA_key_file

where CA_key_file is the CA.pem file generated in Step 7 of To generate custom NSP TLS certificates
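A minimal sketch with a hypothetical file location; point the parameter at the CA.pem file that you actually generated:

  tls:
    customCaCert: /opt/nsp/tls/ca/CA.pem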
13. Save and close the file.
14. Enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
15. Enter the following to apply the configuration to the NSP cluster:
# ./nspdeployerctl config ↵
16. Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:
# ./nspdeployerctl import ↵
Restore NSP Kubernetes secrets in DC B

17. Obtain and restore the NSP Kubernetes secrets from DC A.
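As a sketch of the obtain step only, assuming a hypothetical backup file name and location, and SSH access from DC B to the DC A deployer host:

  # scp root@dca-deployer.example.com:/tmp/nsp-k8s-secrets.tar.gz /tmp/ ↵

The file name, path, and host are illustrative; use the Kubernetes secrets backup file that you verified in Step 2.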
18. If the NSP uses a custom server certificate for client access, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster. Enter the following:
# ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵
where
customKey is the full path of the private server key
customCert is the full path of the server public certificate
customCaCert is the full path of the CA public certificate
Custom certificate and key files are created by performing To generate custom NSP TLS certificates.
Messages like the following are displayed as the server secret is updated:

  secret/nginx-nb-tls-nsp patched
  The following files may contain sensitive information. They are no longer required by NSP and may be removed.
    customKey
    customCert
    customCaCert
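For example, with hypothetical file locations for the standby-specific certificate and key files:

  # ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=/opt/nsp/tls/standby/server.key -f tls.crt=/opt/nsp/tls/standby/server.crt -f ca.crt=/opt/nsp/tls/standby/ca.crt update ↵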
Reconfigure NSP cluster in DC A

19. Log in as the root or NSP admin user on the NSP deployer host in DC A.
20. Open a console window.
21. Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
22. Edit the following line in the platform section, kubernetes subsection, to read as shown below:

  deleteOnUndeploy: true
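For context, a minimal sketch of where the edited line sits in the file; all other kubernetes subsection parameters are omitted here:

  platform:
    kubernetes:
      deleteOnUndeploy: true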
23. Save and close the file.
24. Enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
25. Enter the following to stop the NSP cluster in DC A:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy --clean
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵
The NSP cluster stops, and the old PVCs and PVs are deleted.
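To confirm that the NSP software is stopped before you reconfigure, one optional check (a suggestion, not part of the documented procedure) is to list the remaining pods; no NSP pods should appear in the output:

  # kubectl get pods -A ↵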
26. Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
Note: See nsp-config.yml file format for configuration information.
27. Configure the parameters in the dr section as shown below:
Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC B and have the same format; if one value is a hostname, the other must also be a hostname.

  dr:
    dcName: "data_center"
    mode: "dr"
    peer: "peer_address"
    internalPeer: "peer_internal_address"
    peerDCName: "peer_data_center"

where
data_center is the unique alphanumeric name to assign to the DC A cluster
peer_address is the address at which the DC B cluster is reachable over the client network
peer_internal_address is the address at which the DC B cluster is reachable over the internal network
peer_data_center is the unique alphanumeric name to assign to the DC B cluster
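Continuing the hypothetical example from Step 11, the DC A dr section mirrors the DC B values, with the peer addresses now pointing at DC B; all names remain placeholders:

  dr:
    dcName: "dc-a"
    mode: "dr"
    peer: "nsp-dcb.example.com"
    internalPeer: "nsp-dcb-internal.example.com"
    peerDCName: "dc-b"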
28. Save and close the nsp-config.yml file.
29. Enter the following to apply the configuration to the NSP cluster:
# ./nspdeployerctl config ↵
Enable NSP restore mode in DC A

30. Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --restore
# ./nspdeployerctl install --config --restore ↵
31. Verify that each required NSP cluster pod is operational before the restore begins.
Note: The nspos-solr-statefulset-n and opensearch-cluster-master-n pods are supported only starting with Release 25.8; the pods are excluded from the restore operation if the system has been upgraded from a release earlier than Release 25.8.
Enter the following periodically to list the pods; the cluster is ready for the restore when each required pod is in the Running state:
# kubectl get pods -A ↵
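If you prefer not to re-run the command by hand, a small shell loop (a convenience sketch, not part of the documented procedure) polls every 30 seconds and exits when no pod remains outside the Running or Completed states:

  # while kubectl get pods -A --no-headers | grep -vE 'Running|Completed'; do sleep 30; done ↵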
32. If any required pod is not Running, return to Step 31.
Note: A restore attempt fails unless each required pod is Running.
Restore data in DC A

33. Enter the following on the NSP deployer VM:
# cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵
34. Enter one or more of the database restore commands, as required, to restore the system data and databases.
Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.
Note: If the kubeconfig files in /opt/nsp/nsp-configurator/kubeconfig on the NSP deployer host contain an expired certificate, enter the following command to resynchronize the kubeconfig files from the NSP cluster; see “How do I update nsp_kubeconfig on the NSP deployer host?” in the System Administrator Guide.
# ./nspk8sctl update --kubeconfig ↵
Note: The OpenSearch database restore applies only in Release 25.8 or later; the OpenSearch database is excluded from backup and restore operations if the system has been upgraded from a release earlier than Release 25.8.
In each restore command:
backup_dir is the directory that contains the backup file
backup_file is the backup file name
Start NSP clusters in DC A

35. Start the NSP cluster in each data center.
Note: In a DR deployment, you must perform the steps first in the data center that you want to start as the primary data center.
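The start is performed with nspdeployerctl from the bin directory on each NSP deployer host; for orientation only, the deploy invocation has the same form as the DC B deployment in Step 37 (confirm the exact options for starting a restored cluster before using it):

  # /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --config --deploy ↵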
36. Close the open console windows in DC A.
Deploy NSP software in DC B

37. Enter the following on the NSP deployer host in DC B:
# ./nspdeployerctl install --config --deploy ↵
The NSP cluster initializes as the standby cluster in the new DR deployment.
38. Enter the following every few minutes to monitor the NSP cluster initialization:
# kubectl get pods -A ↵
The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.
If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
Note: Some pods may remain in the Pending state; for example, MDM pods beyond the default number for a deployment.
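To list only the pods that are not yet Running or Completed, you can filter the output; this is a convenience suggestion rather than part of the documented procedure:

  # kubectl get pods -A | grep -vE 'Running|Completed' ↵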
39. Close the open console windows in DC B.
Reconfigure NFM-P

40. If your system includes NFM-P, reconfigure the NFM-P system as required for the new DR NSP deployment.
End of steps |