How do I replace an NSP cluster node?
Purpose
The following steps describe how to replace a node in an NSP cluster, as may be required in the event of a node failure.
Note: If root access for remote operations is disabled in the NSP configuration, remote operations such as SSH and SCP as the root user are not permitted within an NSP cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.
For simplicity, such steps describe only root-user access.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
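For illustration only, hypothetical values of R.r.p = 24.4.0 and version = 100 give a release-ID of 24.4.0-rel.100, which appears in paths such as the following:
/opt/nsp/NSP-CN-DEP-24.4.0-rel.100/bin/nspdeployerctl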
Node replacement in a standalone deployment
In order to perform the procedure in a standalone NSP deployment, the following must be true.
- Scheduled NSP backups are enabled, as described in Chapter 22, Classic management database administration.
Note: If you need to replace a node in a standalone NSP cluster and do not have a recent NSP system backup, you cannot use the procedure to replace the node. Instead, you must recreate the cluster configuration; contact technical support for assistance.
CAUTION: Service outage
Performing the procedure in a standalone NSP deployment causes a service outage.
Ensure that you perform the procedure only during a scheduled maintenance period with the supervision of Nokia technical support.
Steps
Acquire node information
1
Log in as the root or NSP admin user on the NSP cluster host in the NSP cluster that requires the node replacement.
2
Open a console window.
3
Enter the following to show the node roles:
# kubectl get nodes --show-kind ↵
Output like the following is displayed; the example below is for a three-node cluster:
NAME         STATUS   ROLES                  AGE   VERSION
node/node1   Ready    control-plane,master   18d   v1.20.7
node/node2   Ready    control-plane,master   18d   v1.20.7
node/node3   Ready    <none>                 18d   v1.20.7
If the ROLES value for the node you are replacing includes control-plane or master, the node has a master role; if the value is <none>, the node is a worker node.
4
Enter the following to show the node labels:
# kubectl get nodes --show-labels ↵
Output like the following is displayed:
NAME    STATUS   ROLES           AGE   VERSION   LABELS
node1   Ready    control-plane   10d   v1.32.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,etcd=true,isIngress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=test-node1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node2   Ready    control-plane   10d   v1.32.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,etcd=true,isIngress=true,kafka-0=true,kafka=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=test-node2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node3   Ready    <none>          10d   v1.32.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,etcd=true,file-service=true,isIngress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux,linbit.com/hostname=node3,storage=true
node4   Ready    <none>          10d   v1.32.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,isIngress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node4,kubernetes.io/os=linux,linbit.com/hostname=node4,storage=true
node5   Ready    <none>          10d   v1.32.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,isIngress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node5,kubernetes.io/os=linux,linbit.com/hostname=node5,storage=true
The storage=true label applies only to a multi-node setup.
Record the labels of the node you are replacing; you add them to the replacement node in Step 13.
Ensure correct DR cluster roles
5
If the NSP system is not a DR deployment, skip this step.
To replace an NSP cluster node, the node must be in the standby cluster. After the failure of a DR NSP cluster node that hosts an essential NSP service, an NSP switchover occurs automatically; however, no automatic switchover occurs for a node that does not host an essential service.
If the node to replace is currently in the primary NSP cluster, perform How do I perform an NSP DR switchover from the NSP UI?.
Stop NSP cluster
6
Stop the NSP cluster.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy
Note: In a standalone deployment, performing this step marks the beginning of the service outage.
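The exact invocation depends on your installed release; as a guide only, and assuming the same nspdeployerctl path shown in Step 15 of this procedure, the stop command has the following general form:
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy ↵
The NSP cluster stops.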
Reconfigure and start cluster
7
If the replacement node has the same IP address as the node you are replacing, you can skip this step.
Otherwise, update the node IP address in the NSP cluster configuration, as sketched below.
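A minimal sketch of this update follows; the file path and parameter names are assumptions based on a typical NSP deployer host layout, so verify them against your own deployment before editing. On the NSP deployer VM, open the cluster hosts configuration and replace the old address:
# vi /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml ↵
  hosts:
    - nodeName: node3
      nodeIp: 203.0.113.13   # hypothetical value; set to the replacement node IP address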
8
Enter the following:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@address ↵
where address is the replacement node IP address
The required SSH key is transferred to the replacement node.
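Optionally, to confirm that key-based access works, test a passwordless login from the same station, where address is the replacement node IP address:
# ssh root@address hostname ↵
If the node hostname is returned without a password prompt, the key transfer succeeded.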
9
Back up the Kubernetes secrets.
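A minimal generic sketch of such a backup follows, assuming kubectl access from the NSP cluster host; the file and station paths are hypothetical, and your release documentation may define different sub-steps. The secrets are exported to a file and copied to a safe station:
# kubectl get secrets --all-namespaces -o yaml > /tmp/secrets-backup.yaml ↵
# scp /tmp/secrets-backup.yaml root@station:/backup/ ↵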
10
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass uninstall
# ./nspk8sctl uninstall ↵
The Kubernetes software in the cluster is uninstalled.
11
Restore the NSP Kubernetes secrets.
12
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass install
# ./nspk8sctl install ↵
The Kubernetes software in the cluster is re-installed.
13
Add the labels from the former node to the replacement node, as sketched below.
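As a sketch, and assuming the node name and labels from the Step 4 example output (both are illustrative here), the labels are reapplied with kubectl label, and the result can be verified with the command from Step 4:
# kubectl label node node3 etcd=true file-service=true isIngress=true storage=true ↵
# kubectl get nodes --show-labels ↵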
14
Update the node IP address in the NSP software configuration file, as sketched below.
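For illustration, and assuming the NSP software configuration file is the nsp-config.yml file under the deployer configuration directory (verify the path for your release), the file is opened for editing as follows; update each parameter that holds the old node address:
# vi /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-config.yml ↵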
15
Enter the following on the NSP deployer VM:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --config ↵
The new node IP address is propagated to the deployment configuration.
16
Perform one of the following.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --deploy
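As a guide only, and assuming the same deployer path used in Step 15, the deploy invocation has the following general form:
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --deploy ↵
The NSP software is redeployed to the cluster.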
17
Back up the NSP cluster data, as described in How do I back up the NSP cluster databases?.
18
Close the open console windows.
End of steps