To convert a standalone NSP system to DR

Purpose

Perform this procedure to create a DR NSP deployment by adding an NSP cluster in a separate data center to an existing standalone deployment.

Note: Components such as the NFM-P that you intend to include in the DR deployment must also be DR systems.

The following represent the redundant data centers in the procedure:

  • DC A—initial standalone data center

  • DC B—data center of new DR NSP cluster

Note: You require root user privileges on each station.

Note: Command lines use the # symbol to represent the RHEL CLI prompt for the root user. Do not type the leading # symbol when you enter a command.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
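As a hypothetical illustration (the release number below is a placeholder, not an actual NSP release), the release-ID format can be sanity-checked with a quick shell test:

```shell
# Hypothetical release-ID; substitute the value from your installed software.
release_id="23.11.0-rel.325"

# R.r.p-rel.version: MAJOR.minor.patch, then "-rel.", then a numeric value
if printf '%s\n' "$release_id" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+-rel\.[0-9]+$'; then
  echo "valid release-ID: $release_id"
  echo "example path: /opt/nsp/NSP-CN-DEP-$release_id/bin"
else
  echo "unexpected release-ID format: $release_id"
fi
```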

Note: Before you attempt a DR conversion, ensure that any third-party software installed on the NSP deployer host or any NSP cluster VM is disabled and stopped.

Steps
Convert NFM-P to redundant system
 

1 

If the NSP system includes a standalone NFM-P system, you must convert the NFM-P system to a redundant system; see "NFM-P system conversion to redundancy" for information.

Note: The nspos ip-list parameter in samconfig on each NFM-P main server must include the advertised address of each NSP cluster.


Create NSP cluster in DC B
 

2 

Log in as the root user on the DC B station designated for the new NSP deployer host VM.


3 

Open a console window.


4 

Perform the following sections of the procedure "To install the NSP" in DC B.

Note: The number of VMs in each NSP cluster must match.

  1. Create NSP deployer host VM (Step 1 to Step 6)—creates the NSP deployer host VM in DC B

    Note: The DC B VM specifications, such as disk layout and capacity, must be identical to the DC A VM specifications.

  2. Configure NSP deployer host networking (Step 7 to Step 17)—sets the NSP deployer host communication parameters

  3. Create NSP cluster VMs (Step 28 to Step 49)—creates the new NSP cluster VMs

    Note: The DC B VM specifications, such as disk layout and capacity, must be identical to the DC A VM specifications.

  4. Deploy Kubernetes environment (Step 50 to Step 62)—deploys the NSP cluster in DC B


Reconfigure NSP cluster in DC A
 

5 

Log in as the root user on the NSP deployer host in DC A.


6 

Open a console window.


7 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml


8 

Edit the following line in the platform section, kubernetes subsection to read as shown below:

  deleteOnUndeploy: false


9 

Save and close the file.
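Before proceeding, you can sanity-check the setting; this sketch runs the same grep against a throwaway copy of the fragment above (in practice, point it at the actual nsp-config.yml path):

```shell
# Sketch: confirm deleteOnUndeploy is false before an undeploy.
# The temp file stands in for nsp-config.yml; use the real path in practice.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
platform:
  kubernetes:
    deleteOnUndeploy: false
EOF

if grep -q 'deleteOnUndeploy: *false' "$cfg"; then
  result="ok"
else
  result="check nsp-config.yml before undeploying"
fi
echo "$result"
rm -f "$cfg"
```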


10 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


11 

Enter the following to stop the NSP cluster in DC A:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy

./nspdeployerctl uninstall --undeploy ↵

The NSP cluster stops.


12 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


13 

Configure the parameters in the dr section as shown below:

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC B and have the same format; if one value is a hostname, the other must also be a hostname.

dr:

   dcName: "data_center"

   mode: "dr"

   peer: "peer_address"

   internalPeer: "peer_internal_address"

   peerDCName: "peer_data_center"

where

data_center is the unique alphanumeric name to assign to the DC A cluster

peer_address is the address at which the DC B cluster is reachable over the client network

peer_internal_address is the address at which the DC B cluster is reachable over the internal network

peer_data_center is the unique alphanumeric name to assign to the DC B cluster
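For illustration only (the data center names and addresses below are hypothetical placeholders, not values from any deployment), a completed dr section for DC A might look like:

```yaml
dr:
   dcName: "dc-a"
   mode: "dr"
   peer: "203.0.113.20"
   internalPeer: "10.1.2.20"
   peerDCName: "dc-b"
```

In this sketch, "203.0.113.20" would be the advertisedAddress configured on the DC B cluster; if DC B advertises a hostname instead, the peer value here must also be a hostname.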


14 

Save and close the file.


15 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


16 

Enter the following to start the NSP in DC A:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The DC A NSP initializes in DR mode.


17 

Transfer the /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml file to the /tmp directory on the NSP deployer host in DC B.


18 

Close the open console windows in DC A.


Configure NSP cluster in DC B
 
19 

Log in as the root user on the NSP deployer host in DC B.


20 

Open a console window.


21 

Copy the security artifacts from the NSP in DC A.

  1. Enter the following:

scp -r root@address:/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ ↵

    where address is the address of the NSP deployer host in DC A

  2. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  3. Enter the following:

    scp root@address:/opt/nsp/nsp-configurator/generated/nsp-keycloak-*-secret /opt/nsp/nsp-configurator/generated/ ↵


22 

Open the following file copied from DC A using a plain-text editor such as vi:

/tmp/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


23 

Configure the cluster addressing parameters in the platform section as shown below; you must specify the client_address value, which is used as the default for any optional address parameter that you do not configure:

Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_address value.

Note: You must preserve the lead spacing of each line.

  advertisedAddress: "client_address"

  mediationAdvertisedAddress: "IPv4_mediation_address"

  mediationAdvertisedAddressIpv6: "IPv6_mediation_address"

  internalAdvertisedAddress: "internal_cluster_address"

where

client_address is the public IPv4 address or hostname that is advertised to clients

IPv4_mediation_address is the optional address for IPv4 NE management traffic

IPv6_mediation_address is the optional address for IPv6 NE management traffic

internal_cluster_address is the optional IPv4 or IPv6 address, or internal hostname, for internal NSP communication
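As a hypothetical illustration (the hostname and addresses below are documentation placeholders), a fully configured addressing block might read:

```yaml
  advertisedAddress: "nsp-dcb.example.com"
  mediationAdvertisedAddress: "192.0.2.30"
  mediationAdvertisedAddressIpv6: "2001:db8::30"
  internalAdvertisedAddress: "10.1.2.10"
```

Any optional parameter left unconfigured defaults to the advertisedAddress value.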


24 

Configure the parameters in the dr section as shown below:

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the cluster in DC A and have the same format; if one value is a hostname, the other must also be a hostname.

dr:

   dcName: "data_center"

   mode: "dr"

   peer: "peer_address"

   internalPeer: "peer_internal_address"

   peerDCName: "peer_data_center"

where

data_center is the unique alphanumeric name of the DC B cluster

peer_address is the address at which the DC A cluster is reachable over the client network

peer_internal_address is the address at which the DC A cluster is reachable over the internal network

peer_data_center is the unique alphanumeric name of the DC A cluster


25 

Save and close the file.


26 

Replace the original NSP configuration file with the edited file.

  1. Enter the following to create a local backup of the original file:

    mv /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml nsp-config.orig ↵

  2. Enter the following:

    cp /tmp/nsp-config.yml /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config ↵


Install NSP software in DC B
 
27 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


28 

Enter the following to deploy the NSP cluster in DC B:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP cluster initializes as the standby cluster in the new DR deployment.


29 

Enter the following every few minutes to monitor the NSP cluster initialization:

kubectl get pods -A ↵

The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

Note: Some pods, such as MDM pods beyond the default number for a deployment, may remain in the Pending state.
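The monitoring step above can be scripted; this sketch applies the same Running/Completed check to illustrative sample output (the namespace and pod names are hypothetical), where a live system would pipe real kubectl output through the same awk filter:

```shell
# List pods whose STATUS column is neither Running nor Completed.
# On a live cluster you would run:
#   kubectl get pods -A | awk 'NR>1 && $4 != "Running" && $4 != "Completed"'
# Here the same filter runs over illustrative sample output.
sample='NAMESPACE   NAME          READY   STATUS      RESTARTS   AGE
nsp-psa     nspos-asm-0   1/1     Running     0          5m
nsp-psa     job-init-1    0/1     Completed   0          5m
nsp-psa     mdm-server-3  0/1     Pending     0          5m'

# NR>1 skips the header row; $4 is the STATUS column; $2 is the pod name.
pending=$(printf '%s\n' "$sample" | awk 'NR>1 && $4 != "Running" && $4 != "Completed" {print $2}')
echo "$pending"
```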


Start NFM-P
 
30 

Perform the following steps on each NFM-P main server station to start the server.

Note: If the NFM-P system is redundant, you must perform the steps on the primary main server first.

  1. Enter the following:

    bash$ cd /opt/nsp/nfmp/server/nms/bin ↵

  2. Enter the following:

    bash$ ./nmsserver.bash start ↵

  3. Enter the following:

    bash$ ./nmsserver.bash appserver_status ↵

    The server status is displayed; the server is fully initialized if the status is the following:

    Application Server process is running.  See nms_status for more detail.

    If the server is not fully initialized, wait five minutes and then repeat this step. Do not perform the next step until the server is fully initialized.
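The wait-and-recheck logic above can be sketched as a simple status test; the status line is the one quoted in this step, and the classification shown is an illustrative sketch, not part of nmsserver.bash:

```shell
# Illustrative only: classify a captured appserver_status line.
status_line="Application Server process is running.  See nms_status for more detail."

case "$status_line" in
  "Application Server process is running."*)
    state="initialized" ;;   # safe to proceed to the next step
  *)
    state="wait" ;;          # wait five minutes, then re-check
esac
echo "$state"
```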

End of steps