To convert a standalone NSP system to DR
Purpose
Perform this procedure to create a DR NSP deployment by adding an NSP cluster in a separate data center to an existing standalone deployment.
The following represent the redundant data centers in the procedure:
DC A is the data center that contains the existing standalone NSP cluster.
DC B is the data center that is to contain the new NSP cluster.
Note: You require root user privileges on each station.
Note: Command lines use the # symbol to represent the RHEL CLI prompt for the root user. Do not type the leading # symbol when you enter a command.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
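For example, with a hypothetical release-ID of 24.4.0-rel.325, a path such as /opt/nsp/NSP-CN-DEP-release-ID/bin resolves to /opt/nsp/NSP-CN-DEP-24.4.0-rel.325/bin; substitute the release-ID of your own installation.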
Steps
Create NSP cluster in DC B
1. Log in as the root user on the DC B station designated for the new NSP deployer host VM.
2. Open a console window.
3. Perform the following sections of the procedure To install the NSP in DC B.
Note: The number of VMs in each NSP cluster must match.
Reconfigure NSP cluster in DC A
4. Log in as the root or NSP admin user on the NSP deployer host in DC A.
5. Open a console window.
6. Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
7. Edit the following line in the platform section, kubernetes subsection to read as shown below:
deleteOnUndeploy: false
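After the edit, the subsection is expected to resemble the following sketch; surrounding parameters are omitted and the exact indentation depends on your file:
  platform:
    kubernetes:
      deleteOnUndeploy: false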
8. Save and close the file.
9. Enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
10. Enter the following to stop the NSP cluster in DC A:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy
# ./nspdeployerctl uninstall --undeploy ↵
The NSP cluster stops.
11. Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
Note: See nsp-config.yml file format for configuration information.
12. Configure the parameters in the dr section as shown below:
  dr:
    dcName: "data_center"
    mode: "dr"
    peer: "peer_address"
    internalPeer: "peer_internal_address"
    peerDCName: "peer_data_center"
where
data_center is the unique alphanumeric name to assign to the DC A cluster
peer_address is the address at which the DC B cluster is reachable over the client network
peer_internal_address is the address at which the DC B cluster is reachable over the internal network
peer_data_center is the unique alphanumeric name to assign to the DC B cluster
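For illustration only, a completed dr section in DC A might resemble the following sketch; the data-center names and addresses are hypothetical placeholders, not defaults:
  dr:
    dcName: "dc-a"
    mode: "dr"
    peer: "203.0.113.20"
    internalPeer: "10.10.20.20"
    peerDCName: "dc-b"
Here "dc-b" and the two peer addresses identify the DC B cluster created earlier in this procedure.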
13. Save and close the nsp-config.yml file.
14. Enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
15. Enter the following to start the NSP in DC A:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The DC A NSP initializes in DR mode.
16. Transfer the following file to the /tmp directory on the NSP deployer host in DC B:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
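For example, the file can be copied with scp; this is a sketch only, in which deployer_host_B is a hypothetical address of the DC B NSP deployer host:
# scp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml root@deployer_host_B:/tmp/ ↵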
17. Close the open console windows in DC A.
Configure NSP cluster in DC B
18. Log in as the root or NSP admin user on the NSP deployer host in DC B.
19. Open a console window.
20. Open the following file, copied from DC A, using a plain-text editor such as vi:
/tmp/nsp-config.yml
Note: See nsp-config.yml file format for configuration information.
21. Configure the following NSP cluster address parameters in the platform section, ingressApplications subsection as shown below:
Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.
Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.
  ingressApplications:
    ingressController:
      clientAddresses:
        virtualIp: "client_IP"
        advertised: "client_public_address"
      internalAddresses:
        virtualIp: "internal_IP"
        advertised: "internal_public_address"
    trapForwarder:
      mediationAddresses:
        virtualIpV4: "trapV4_mediation_IP"
        advertisedV4: "trapV4_mediation_public_address"
        virtualIpV6: "trapV6_mediation_IP"
        advertisedV6: "trapV6_mediation_public_address"
where
client_IP is the address for external client access
internal_IP is the address for internal communication
trapV4_mediation_IP is the address for IPv4 network mediation
trapV6_mediation_IP is the address for IPv6 network mediation
each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
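As a minimal sketch only, with hypothetical addresses and showing just the mandatory client and internal values, the subsection might begin as follows; configure the optional advertised values and the trapForwarder addresses in the same way if they apply to your network:
  ingressApplications:
    ingressController:
      clientAddresses:
        virtualIp: "203.0.113.30"
      internalAddresses:
        virtualIp: "10.10.20.30"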
22. If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below:
    flowForwarder:
      mediationAddresses:
        virtualIpV4: "flowV4_mediation_IP"
        advertisedV4: "flowV4_mediation_public_address"
        virtualIpV6: "flowV6_mediation_IP"
        advertisedV6: "flowV6_mediation_public_address"
where
flowV4_mediation_IP is the address for IPv4 flow collection
flowV6_mediation_IP is the address for IPv6 flow collection
each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment
23. Configure the parameters in the dr section as shown below:
  dr:
    dcName: "data_center"
    mode: "dr"
    peer: "peer_address"
    internalPeer: "peer_internal_address"
    peerDCName: "peer_data_center"
where
data_center is the unique alphanumeric name of the DC B cluster
peer_address is the address at which the DC A cluster is reachable over the client network
peer_internal_address is the address at which the DC A cluster is reachable over the internal network
peer_data_center is the unique alphanumeric name of the DC A cluster
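Continuing the hypothetical values from the DC A example earlier in this procedure, the DC B dr section is the mirror image:
  dr:
    dcName: "dc-b"
    mode: "dr"
    peer: "203.0.113.10"
    internalPeer: "10.10.20.10"
    peerDCName: "dc-a"
Here "dc-b" matches the peerDCName assigned in DC A and "dc-a" matches the dcName assigned there; 203.0.113.10 and 10.10.20.10 are hypothetical client and internal addresses of the DC A cluster.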
24. If you are using a custom TLS server certificate, configure the following tls parameter in the deployment section:
  tls:
    customCaCert: CA_key_file
where
CA_key_file is the CA.pem file generated in Step 7 of To generate custom TLS certificate files for the NSP
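For example, assuming a hypothetical location for the generated CA.pem file, the parameter might read:
  tls:
    customCaCert: /opt/nsp/certs/CA.pem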
25. Save and close the file.
26. Replace the original NSP configuration file on the NSP deployer host with the edited /tmp/nsp-config.yml file.
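A copy sketch, assuming the default installation path used earlier in this procedure:
# cp /tmp/nsp-config.yml /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml ↵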
Restore NSP Kubernetes secrets
27. Obtain the NSP Kubernetes secrets from the DC A cluster and restore them on the DC B cluster.
28. If the NSP uses a custom server certificate for client access, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster.
Enter the following:
# ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵
where
customKey is the full path of the private server key
customCert is the full path of the server public certificate
customCaCert is the full path of the CA public certificate
Custom certificate and key files are created by performing To generate custom TLS certificate files for the NSP.
Messages like the following are displayed as the server secret is updated:
secret/nginx-nb-tls-nsp patched
The following files may contain sensitive information. They are no longer required by NSP and may be removed.
customKey
customCert
customCaCert
Note: The message stating that the files are no longer required by the NSP is not correct for installations that use custom certificates. Do not delete the customCaCert file if you are using custom certificates; deleting it causes the deployment to fail.
Deploy NSP software in DC B
29. Enter the following on the NSP deployer host in DC B:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The NSP cluster initializes as the standby cluster in the new DR deployment.
30. Log in as the root or NSP admin user on the NSP cluster host in DC B.
31. Enter the following every few minutes to monitor the NSP cluster initialization:
# kubectl get pods -A ↵
The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.
If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
Note: Some pods may remain in the Pending state; for example, MDM pods in addition to the default for a deployment.
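As a convenience, a sketch only using standard shell filtering, you can list just the pods that are not yet Running or Completed:
# kubectl get pods -A | grep -Ev 'Running|Completed' ↵
When only the column header line is returned, all pods have reached the Running or Completed state.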
Start NFM-P
32. Perform the following steps on each NFM-P main server station to start the server.
Note: If the NFM-P system is redundant, you must perform the steps on the primary main server first.
End of steps