To enlarge an NSP deployment
Purpose
CAUTION: System Alteration
The procedure includes operations that fundamentally reconfigure the NSP system.
You must contact Nokia support for guidance before you attempt to perform the procedure.
CAUTION: Special Deployment Limitation
Adding MDM nodes to an NSP cluster is supported only during NSP cluster installation or upgrade.
If you intend to add one or more MDM nodes after NSP deployment, contact technical support for assistance.
Perform this procedure to change an NSP deployment to a larger deployment type. Enlarging a deployment may be required, for example, to accommodate additional NSP functions, or to support growth in the size or scope of the managed network.
Note: To add only worker nodes to an NSP deployment without changing the deployment type, perform “How do I add an NSP cluster node?” in the NSP System Administrator Guide.
The operation involves backing up the NSP databases and configuration, uninstalling the NSP, creating and resizing the cluster VMs, updating the cluster and deployment configuration, redeploying the NSP in restore mode, restoring the databases, and then redeploying and starting the NSP.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
For example, a release-ID of 24.4.0-rel.123 (illustrative values only) indicates NSP Release 24.4.0.
Rollback operation
To revert to the previous deployment type, as may be required in the event of a problem with the deployment, perform “How do I remove an NSP cluster node?” in the NSP System Administrator Guide.
Steps
Prepare required resources
1
Submit an NSP Platform Sizing Request for the new deployment type.
Back up NSP databases, configuration
2
If you are expanding a standalone NSP cluster, or the primary cluster in a DR deployment, you must back up the NSP databases; perform "How do I back up the NSP cluster databases?" in the NSP System Administrator Guide.
Note: Ensure that you copy each backup file to a secure location on a station outside the NSP deployment that is reachable from the NSP cluster.
Enlarge cluster, standalone deployment
3
If the NSP is a standalone deployment, go to Step 7.
Enlarge clusters, DR deployment
4
Perform Step 7 to Step 38 on the standby NSP cluster.
5
Perform Step 7 to Step 51 on the primary NSP cluster.
6
Perform Step 46 to Step 51 on the standby NSP cluster.
Uninstall NSP
7
Log in as the root or NSP admin user on the NSP deployer host.
8
Open a console window.
9
Transfer the following file to a secure location on a station outside the NSP cluster that is unaffected by the conversion activity:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip
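For example, you can transfer the file using scp; the destination host and directory are placeholder values:
# scp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip user@backup-station:/path/to/secure/location/ ↵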
10
Configure the NSP to completely remove the existing deployment.
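The details of this configuration are release-specific; as a hypothetical sketch only, assuming the deployment parameters reside in the nsp-config.yml file under the NSP-CN directory shown earlier, you would open that file for editing (confirm the exact file and settings with Nokia support before you proceed):
# vi /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml ↵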
11
Enter the following to stop the NSP cluster:
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, and you are subsequently prompted for the root password of each cluster member; for example:
# /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl --ask-pass uninstall --undeploy --clean ↵
Create new NSP cluster
12
Using the resource specifications in the response to your Platform Sizing Request, create the new NSP cluster VMs and resize the existing VMs, as required.
Resize cluster as required
13
Identify the dedicated MDM nodes in the NSP cluster; you require this information when you create the new cluster configuration later in the procedure.
14
Log in as the root or NSP admin user on the NSP deployer host.
15
Open a console window.
16
Open the following file using a plain-text editor such as vi:
/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml
17
Add each new node to the hosts section, as shown below. Configure the following parameters for each new node; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.
- nodeName: noden
  nodeIp: private_IP_address
  accessIp: public_IP_address
  isIngress: value
Note: Any dedicated MDM nodes must be placed at the end of the hosts section. For example, if you are expanding your NSP cluster from a node-labels-standard-sdn-4nodes deployment to node-labels-standard-sdn-5nodes, the dedicated MDM nodes must be listed after node 5.
Note: The nodeName value must meet the requirements described in Hostname configuration requirements.
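For illustration, a completed entry for a fifth node might look like the following; the node name and addresses are placeholder values only:
- nodeName: node5
  nodeIp: 192.168.98.15
  accessIp: 203.0.113.15
  isIngress: true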
18
Save and close the k8s-deployer.yml file.
19
Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system and preferably in a remote facility.
Note: The backup file is crucial in the event of an NSP deployer host failure, so must be available from a separate station.
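For example, you can create and transfer the backup copy as follows; the destination host and path are placeholder values:
# cp /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml /tmp/k8s-deployer.yml.bak ↵
# scp /tmp/k8s-deployer.yml.bak user@remote-station:/path/to/backup/location/ ↵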
20
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵
21
Enter the following to create the cluster configuration:
# ./nspk8sctl config -c ↵
The following is displayed when the creation is complete:
✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml
22
For each NSP cluster VM that you are adding, enter the following to distribute the local SSH key to the VM:
# ssh-copy-id -i ~/.ssh/id_rsa.pub user@address ↵
where
address is the NSP cluster VM IP address
user is the designated NSP ansible user, if root-user access is restricted; otherwise, user@ is not required
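For example, for a cluster VM at the placeholder address 192.168.98.15, when root-user access is unrestricted:
# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.98.15 ↵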
23
Enter the following:
# ./nspk8sctl install ↵
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, and you are subsequently prompted for the root password of each cluster member; for example:
# ./nspk8sctl --ask-pass install ↵
The NSP Kubernetes environment is deployed.
24
On the NSP cluster host, enter the following to verify that the new node is added to the cluster:
# kubectl get nodes ↵
25
On the NSP deployer host, enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
26
Open the following file using a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
27
Configure the following parameters:
hosts: "hosts_file"
labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"
where
hosts_file is the absolute path of the hosts.yml file; the default is /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml
labels_file is the file name that corresponds to your cluster deployment type
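For illustration, drawing on the deployment types named in Step 17, a five-node standard SDN configuration might read as follows; verify the exact labels file name for your deployment type:
hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"
labelProfile: "../ansible/roles/apps/nspos-labels/vars/node-labels-standard-sdn-5nodes"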
28
Save and close the nsp-deployer.yml file.
29
Enter the following to apply the node labels to the NSP cluster:
# ./nspdeployerctl config ↵
30
If you are not adding any dedicated MDM nodes beyond the member nodes of a standard or enhanced NSP cluster, go to Step 35.
31
On the NSP cluster host, perform the following steps for each additional MDM node.
32
Enter the following:
# kubectl get nodes -o wide ↵
A list of nodes like the following is displayed.
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
node1   Ready    master   nd    version   int_IP        ext_IP
node2   Ready    master   nd    version   int_IP        ext_IP
node3   Ready    <none>   nd    version   int_IP        ext_IP
33
Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.
34
For each such node, enter the following:
# kubectl label node node mdm=true ↵
where node is the recorded NAME value of the MDM node
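For example, if the recorded NAME value of an MDM node is node6 (a placeholder value):
# kubectl label node node6 mdm=true ↵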
35
On the NSP cluster host, enter the following:
# kubectl get nodes --show-labels ↵
36
Verify that the labels are added to the nodes.
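Alternatively, you can use a label selector to list only the MDM nodes:
# kubectl get nodes -l mdm=true ↵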
Configure deployment
37
Perform the following steps on the NSP deployer host to update the NSP configuration.
38
Configure the NSP to preserve the deployment.
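The settings that preserve the deployment are release-specific; as a hypothetical illustration only, assuming the configuration archived in Step 9 serves as the reference, you could retrieve the saved archive and review its contents before updating the configuration (confirm the correct procedure for your release with Nokia support):
# scp user@backup-station:/path/to/secure/location/nspConfiguratorConfigs.zip /tmp/ ↵
# unzip -l /tmp/nspConfiguratorConfigs.zip ↵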
Deploy NSP in restore mode
39
On the NSP deployer host, enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
40
Enter the following:
# ./nspdeployerctl install --config --restore ↵
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, and you are subsequently prompted for the root password of each cluster member; for example:
# ./nspdeployerctl --ask-pass install --config --restore ↵
The NSP cluster is deployed in restore mode.
Restore databases
41
On the NSP cluster host, enter the following:
# kubectl get pods -A ↵
The database pods are listed.
42
Verify that the database pods are running; the number of replica pods depends on the deployment type.
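The NSP database pods typically include PostgreSQL and Neo4j instances; assuming those naming conventions, you can filter the pod listing, for example:
# kubectl get pods -A | grep -Ei 'postgres|neo4j' ↵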
43
Perform "How do I restore the NSP cluster databases?" in the NSP System Administrator Guide.
Undeploy NSP
44
On the NSP deployer host, enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
45
Enter the following:
# ./nspdeployerctl uninstall --undeploy ↵
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, and you are subsequently prompted for the root password of each cluster member; for example:
# ./nspdeployerctl --ask-pass uninstall --undeploy ↵
The NSP cluster is undeployed.
Start NSP
46
Enter the following:
# ./nspdeployerctl install --config --deploy ↵
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, and you are subsequently prompted for the root password of each cluster member; for example:
# ./nspdeployerctl --ask-pass install --config --deploy ↵
The NSP cluster is deployed, and the NSP starts.
47
On the NSP cluster host, enter the following every few minutes to display the cluster status:
# kubectl get pods -A ↵
The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.
Note: You must not proceed to the next step until the cluster is fully operational.
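Rather than re-entering the command manually, you can use the watch utility to rerun it at a fixed interval, for example every five minutes:
# watch -n 300 kubectl get pods -A ↵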
48
Use a browser to open the NSP cluster URL.
49
Verify the following.
Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access. |
50
Use the NSP to monitor device discovery and to check network management functions.
51
Close the open console windows.
End of steps