How do I replace an NSP cluster node?

Purpose

The following steps describe how to replace a node in an NSP cluster, as may be required in the event of a node failure.

Note: If root access for remote operations is disabled in the NSP configuration, remote operations such as SSH and SCP as the root user are not permitted within an NSP cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.

For simplicity, such steps describe only root-user access.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
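As a purely illustrative example (the release and version numbers below are hypothetical, not taken from any actual NSP release), a release-ID expands into a file path as follows:

```shell
# Hypothetical example values; substitute your actual NSP release and version.
RELEASE="24.4.0"   # R.r.p, in the form MAJOR.minor.patch
VERSION="100"      # numeric value
RELEASE_ID="${RELEASE}-rel.${VERSION}"

# The release-ID appears in NSP file paths, for example:
echo "/opt/nsp/NSP-CN-DEP-${RELEASE_ID}/bin"
```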

Node replacement in a standalone deployment

To perform the procedure in a standalone NSP deployment, you must have a recent NSP system backup.

Note: If you need to replace a node in a standalone NSP cluster and do not have a recent NSP system backup, you cannot use the procedure to replace the node. Instead, you must recreate the cluster configuration; contact technical support for assistance.

CAUTION 

Service outage

Performing the procedure in a standalone NSP deployment causes a service outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the supervision of Nokia technical support.

Steps
Acquire node information
 

1 

Log in as the root user on the NSP cluster host in the NSP cluster that requires the node replacement.


2 

Open a console window.


3 

Enter the following to show the node roles:

kubectl get nodes --show-kind ↵

Output like the following is displayed; the example below is for a three-node cluster:

NAME         STATUS   ROLES                  AGE   VERSION
node/node1   Ready    control-plane,master   18d   v1.20.7
node/node2   Ready    control-plane,master   18d   v1.20.7
node/node3   Ready    <none>                 18d   v1.20.7

If the Roles value for the node you are replacing includes control-plane or master, the node has a master role; if the value is <none>, the node is a worker node.
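As an illustration only (using the sample output above rather than a live cluster), the role check can be scripted by inspecting the ROLES column; on a real system you would pipe the output of kubectl get nodes --show-kind instead of the here-document:

```shell
# Classify each node as master or worker based on the ROLES column.
# Sample data is shown; pipe real "kubectl get nodes --show-kind" output instead.
classify() {
  while read -r name status roles rest; do
    case "$roles" in
      *control-plane*|*master*) echo "$name: master role" ;;
      "<none>")                 echo "$name: worker node" ;;
    esac
  done
}

classify <<'EOF'
node/node1   Ready    control-plane,master   18d   v1.20.7
node/node2   Ready    control-plane,master   18d   v1.20.7
node/node3   Ready    <none>                 18d   v1.20.7
EOF
```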


4 

Enter the following to show the node labels:

kubectl get nodes node_name --show-labels ↵

NAME    STATUS   ROLES                  AGE     VERSION   LABELS
node1   Ready    control-plane,master   4d12h   v1.20.7   act=true,backup=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-search=true,etcd=true,file-service=true,kafka=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=loriek8s-k8sc-node1,kubernetes.io/os=linux,mdm=true,neo4j-nr=true,neo4j=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,nrcs=true,postgresql=true,prometheus=true,rabbitmq=true,rta-detector=true,rta-ignite=true,rta-trainer=true,rta-windower=true,solr=true,wfm=true,zookeeper=true
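The LABELS field is a single comma-separated string that is hard to compare by eye. A small sketch like the following (using a shortened sample string, not NSP tooling) prints one label per line for easier review:

```shell
# Shortened sample of a LABELS string; substitute the full string
# from your own "kubectl get nodes node_name --show-labels" output.
labels="act=true,etcd=true,kafka=true,postgresql=true,neo4j=true"

# Split on commas, one label per line.
echo "$labels" | tr ',' '\n'
```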


5 

Record the command output, which is required later in the procedure.


6 

If the node to be replaced is not in a standalone single-node cluster, you can skip this step.

You must determine which NSP databases to restore on the replacement node.

Make note of which of the database entries in Table 12-1, NSP cluster node labels and associated data, are present in the Step 4 command output.

Note: Although the output may include “nsp-sdn=true”, the IPRC Tomcat database is installed only if IP resource control is enabled using an installation option in the nsp-config.yml file.

Table 12-1: NSP cluster node labels and associated data

Node label                   Data or database

etcd=true                    Kubernetes etcd data
nsp-file-service-app=true    NSP file service data
postgresql=true              PostgreSQL database
nsp-sdn=true                 IPRC Tomcat database
neo4j-nr=true, neo4j=true    Neo4j database
nsp-nrcx=true                Cross-domain database
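Assuming the labels recorded in Step 4 are available in a shell variable, the Table 12-1 mapping can be sketched as follows. This is an illustration only, not an NSP tool; the sample label string is hypothetical and must be replaced with your recorded output:

```shell
# Sample label string; replace with the labels recorded in Step 4.
labels="etcd=true,postgresql=true,neo4j=true,neo4j-nr=true"

# Map a node label to the data or database listed in Table 12-1.
map_label() {
  case "$1" in
    etcd=true)                 echo "Kubernetes etcd data" ;;
    nsp-file-service-app=true) echo "NSP file service data" ;;
    postgresql=true)           echo "PostgreSQL database" ;;
    nsp-sdn=true)              echo "IPRC Tomcat database (only if IP resource control is enabled)" ;;
    neo4j=true|neo4j-nr=true)  echo "Neo4j database" ;;
    nsp-nrcx=true)             echo "Cross-domain database" ;;
  esac
}

echo "$labels" | tr ',' '\n' | while read -r label; do
  map_label "$label"
done
```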


Ensure correct DR cluster roles
 

7 

If the NSP system is not a DR deployment, skip this step.

In order to replace an NSP cluster node, the node must be in the standby cluster. After the failure of a DR NSP cluster node that hosts an essential NSP service, an NSP switchover occurs automatically. However, no automatic switchover occurs for a node that does not host an essential service.

If the node to replace is currently in the primary NSP cluster, perform How do I perform an NSP DR switchover?.


Preserve NSP cluster data
 

8 

Log in as the root user on the NSP deployer host.


9 

Open a console window.


10 

Configure the NSP to preserve the NSP cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  3. Save and close the file.
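As a sketch of the edit (the surrounding keys are omitted and the exact file layout may differ by release; the section and subsection names are taken from this procedure):

```yaml
# nsp-config.yml fragment: platform section, kubernetes subsection.
platform:
  kubernetes:
    deleteOnUndeploy: false
```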


Stop NSP cluster
 
11 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


12 

Enter the following:

Note: In a standalone deployment, performing this step marks the beginning of the service outage.

./nspdeployerctl uninstall --undeploy ↵

The NSP stops.


Reconfigure and start cluster
 
13 

If the replacement node has the same IP address as the node you are replacing, skip this step.

Update the node IP address in the NSP cluster configuration.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml

  2. Change the former node IP address to the new IP address.

  3. Save and close the file.

  4. Enter the following:

    cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵

  5. Enter the following:

    ./nspk8sctl config -c ↵

    A new /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml file is created.


14 

Enter the following:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@address ↵

where address is the replacement node IP address

The required SSH key is transferred to the replacement node.


15 

Enter the following:

./nspk8sctl uninstall ↵

The Kubernetes software in the cluster is uninstalled.


16 

Enter the following:

./nspk8sctl install ↵

The Kubernetes software in the cluster is re-installed.


17 

Add the labels from the former node to the replacement node.

  1. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  2. Enter the following:

    ./nspdeployerctl config ↵

  3. On the NSP cluster host, enter the following:

    kubectl get nodes node --show-labels ↵

    where node is the node name

  4. Verify that the labels match the labels recorded in Step 4.


18 

Update the node IP address in the NSP software configuration file.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. If the former node IP address is present in the file, replace it with the new IP address.

  3. Save and close the file.


19 

Enter the following on the NSP deployer host:

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --config ↵

The new node IP address is propagated to the deployment configuration.


20 

Perform one of the following.

  1. If the NSP is deployed in a DR configuration, enter the following on the standby NSP deployer host:

    /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --deploy ↵

    The NSP starts.

  2. If the NSP system is configured as an enhanced deployment without DR, you must ensure that the PostgreSQL and Neo4j databases in the cluster initialize on an existing node, and not on the replacement node, by cordoning the replacement node until after the initialization.

    1. Enter the following on the NSP cluster host:

      kubectl cordon node ↵

      where node is the node name

      The replacement node is cordoned.

    2. On the NSP deployer host, enter the following:

      /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --deploy ↵

    3. On the NSP cluster host, enter the following:

      kubectl get pods -A ↵

      The pods are listed.

    4. View the output; if all of the following are not true, repeat substep 3.

      Note: You must not proceed to the next step until the conditions are met.

      • One postgres-primary pod instance is in the Running state.

      • At least two nspos-neo4j-core pod instances are in the Running state.

      • At least two nsp-tomcat pod instances are in the Running state.

      • If nrcx-tomcat is installed, at least two nrcx-tomcat pods are in the Running state.

    5. When the conditions are met, enter the following:

      kubectl uncordon node ↵

      The replacement node is uncordoned.

  3. If the NSP system is a standalone deployment, that is, the node to be replaced is not in a DR or enhanced deployment:

    Perform How do I restore the NSP cluster databases? using a copy of the appropriate NSP system backup, which is typically the most recent.

    Note: Do not perform Step 4, which deletes all databases.

    Note: You must restore only the databases identified in Step 6.

    Note: The restore procedure starts the NSP cluster when the restore is complete.
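The pod-readiness conditions in the enhanced-deployment case above can be scripted as a sketch. The listing below is shortened sample data, not real cluster output; on a live system, pipe the output of kubectl get pods -A into the function instead:

```shell
# Count Running instances of a pod name prefix in "kubectl get pods -A" output.
# Columns assumed: NAMESPACE NAME READY STATUS RESTARTS AGE.
count_running() {  # $1 = pod name prefix, stdin = pod listing
  awk -v p="$1" '$2 ~ "^"p && $4 == "Running" {n++} END {print n+0}'
}

# Shortened sample listing; replace with real "kubectl get pods -A" output.
sample='nsp postgres-primary-0 1/1 Running 0 5m
nsp nspos-neo4j-core-0 1/1 Running 0 5m
nsp nspos-neo4j-core-1 1/1 Running 0 5m
nsp nsp-tomcat-0 1/1 Running 0 5m
nsp nsp-tomcat-1 1/1 Running 0 5m'

echo "$sample" | count_running postgres-primary   # expect 1
echo "$sample" | count_running nspos-neo4j-core   # expect 2
```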


21 

Back up the NSP cluster data, as described in How do I back up the NSP cluster databases?.


22 

Close the open console windows.

End of steps