How do I replace an NSP cluster node?
Purpose
The following steps describe how to replace a node in an NSP cluster, as may be required in the event of a node failure.
Note: If root access for remote operations is disabled in the NSP configuration, remote operations such as SSH and SCP as the root user are not permitted within an NSP cluster. Steps that describe such an operation as the root user must be performed as the designated non-root user with sudoer privileges.
For simplicity, such steps describe only root-user access.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
Node replacement in a standalone deployment
To perform the procedure in a standalone NSP deployment, the following must be true.
- Scheduled NSP backups are enabled, as described in Chapter 22, Classic management database administration.
Note: If you need to replace a node in a standalone NSP cluster and do not have a recent NSP system backup, you cannot use the procedure to replace the node. Instead, you must recreate the cluster configuration; contact technical support for assistance.
CAUTION: Service outage
Performing the procedure in a standalone NSP deployment causes a service outage.
Ensure that you perform the procedure only during a scheduled maintenance period with the supervision of Nokia technical support.
Steps
Acquire node information
1. Log in as the root or NSP admin user on the NSP cluster host in the NSP cluster that requires the node replacement.
2. Open a console window.
3. Enter the following to show the node roles:
# kubectl get nodes --show-kind ↵
Output like the following is displayed; the example below is for a three-node cluster:
NAME         STATUS   ROLES                  AGE   VERSION
node/node1   Ready    control-plane,master   18d   v1.20.7
node/node2   Ready    control-plane,master   18d   v1.20.7
node/node3   Ready    <none>                 18d   v1.20.7
If the ROLES value for the node you are replacing includes control-plane or master, the node has a master role; if the value is <none>, the node is a worker node.
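The master-or-worker decision above can also be scripted from the ROLES column; a minimal sketch, where node3 is a hypothetical name for the node being replaced:

```shell
# Read the ROLES column (third field) for the node being replaced.
role=$(kubectl get node node3 --no-headers | awk '{print $3}')

# Classify the node from the recorded role value.
case "$role" in
  *control-plane*|*master*) echo "node3 has a master role" ;;
  '<none>')                 echo "node3 is a worker node" ;;
esac
```

This only illustrates the classification rule stated above; substitute the real node name from the Step 3 output.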
4. Enter the following to show the node labels:
# kubectl get nodes node_name --show-labels ↵
where node_name is the name of the node to replace
Output like the following is displayed:
NAME    STATUS   ROLES                  AGE     VERSION   LABELS
node1   Ready    control-plane,master   4d12h   v1.20.7   act=true,backup=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-search=true,etcd=true,file-service=true,kafka=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=loriek8s-k8sc-node1,kubernetes.io/os=linux,mdm=true,neo4j-nr=true,neo4j=true,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,nrcs=true,postgresql=true,prometheus=true,rabbitmq=true,rta-detector=true,rta-ignite=true,rta-trainer=true,rta-windower=true,solr=true,wfm=true,zookeeper=true
5. Record the command output, which is required later in the procedure.
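Instead of copying the output by hand, the LABELS field can be captured to a file, one label per line, ready for reuse later in the procedure; a sketch, where node1 and /tmp/node1-labels.txt are hypothetical names:

```shell
# Capture the LABELS column (sixth field of --show-labels output) and
# split the comma-separated list into one key=value label per line.
kubectl get node node1 --show-labels --no-headers \
  | awk '{print $6}' | tr ',' '\n' > /tmp/node1-labels.txt
```

Keep the file somewhere that survives the node replacement, such as the deployer host.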
6. If the node to be replaced is not in a standalone single-node cluster, skip this step.
You must determine which NSP databases to restore on the replacement node. Note which of the database entries in Table 12-1, NSP cluster node labels and associated data, are present in the Step 4 command output.
Note: Although the output may include “nsp-sdn=true”, the IPRC Tomcat database is installed only if IP resource control is enabled using an installation option in the nsp-config.yml file.
Table 12-1: NSP cluster node labels and associated data
Ensure correct DR cluster roles
7. If the NSP system is not a DR deployment, skip this step.
To replace an NSP cluster node, the node must be in the standby cluster. After the failure of a DR NSP cluster node that hosts an essential NSP service, an NSP switchover occurs automatically. However, no automatic switchover occurs for a node that does not host an essential service.
If the node to replace is currently in the primary NSP cluster, perform How do I perform an NSP DR switchover?.
Stop NSP cluster
8. Stop the NSP cluster.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy
Note: In a standalone deployment, performing this step marks the beginning of the service outage.
Reconfigure and start cluster
9. If the replacement node has the same IP address as the node you are replacing, skip this step.
Update the node IP address in the NSP cluster configuration.
10. Enter the following:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@address ↵
where address is the replacement node IP address
The required SSH key is transferred to the replacement node.
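Before continuing, it can be worth confirming that key-based root access to the replacement node now works; a sketch, where 203.0.113.10 is a placeholder for the replacement node address:

```shell
# BatchMode=yes makes ssh fail instead of prompting for a password,
# so success here proves the copied key is in effect.
ssh -o BatchMode=yes root@203.0.113.10 hostname
```

If the command prints the replacement node's hostname without prompting, the key transfer succeeded.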
11. Perform the following steps to back up the Kubernetes secrets.
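The product has its own secret backup steps; purely as a generic illustration of what an API-level snapshot of Kubernetes secrets looks like, assuming the cluster API is still reachable at this point and /tmp/nsp-secrets-backup.yaml is a hypothetical destination:

```shell
# Dump every secret in every namespace as YAML to a backup file.
kubectl get secrets --all-namespaces -o yaml > /tmp/nsp-secrets-backup.yaml
```

Store the file off the node being replaced; it contains sensitive material, so restrict its permissions accordingly.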
12. Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass uninstall
# ./nspk8sctl uninstall ↵
The Kubernetes software in the cluster is uninstalled.
13. Perform the following steps to restore the NSP Kubernetes secrets.
14. Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass install
# ./nspk8sctl install ↵
The Kubernetes software in the cluster is re-installed.
15. Add the labels from the former node to the replacement node.
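Reapplying the labels can be scripted from the Step 5 record; a sketch, assuming the former node's labels were saved one key=value pair per line in a hypothetical /tmp/node1-labels.txt and node3 is the replacement node name. Kubernetes-managed labels (kubernetes.io/hostname, arch, os, node-role) belong to the new node itself and are skipped:

```shell
# Filter out Kubernetes-managed labels, then apply the rest to the
# replacement node; --overwrite makes the command safe to re-run.
grep -vE '^(node-role\.|beta\.)?kubernetes\.io/' /tmp/node1-labels.txt |
while read -r label; do
  kubectl label node node3 "$label" --overwrite
done
```

Afterwards, compare `kubectl get nodes node3 --show-labels` against the Step 5 record to confirm nothing was missed.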
16. Update the node IP address in the NSP software configuration file.
17. Enter the following on the NSP deployer host:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl install --config ↵
The new node IP address is propagated to the deployment configuration.
18. Perform one of the following.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --deploy
19. Back up the NSP cluster data, as described in How do I back up the NSP cluster databases?.
20. Close the open console windows.
End of steps