To upgrade the NSP Kubernetes environment
Purpose
Perform this procedure to upgrade the Kubernetes deployment environment in an NSP system. The procedure upgrades only the deployment infrastructure, and not the NSP software.
Note: You must upgrade Kubernetes in each NSP cluster of a DR deployment, as described in the procedure.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
Steps
Download Kubernetes upgrade bundle
1
Download the following files from the NSP downloads page on the Nokia Support portal to a local station that is not part of the NSP deployment:
- NSP_K8S_DEPLOYER_R_r.tar.gz
- the associated .cksum file
where R_r is the NSP release ID, in the form Major_minor
Note: The download takes considerable time; while the download is in progress, you may proceed to Step 2.
Verify NSP cluster readiness
2
Perform the following steps on each NSP cluster to verify that the cluster is fully operational.
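As a quick sanity check before the upgrade, pod status can be inspected with kubectl. The following is a hypothetical helper, not part of the official readiness substeps: it parses `kubectl get pods -A` style output and counts pods that are not Running or Completed. The sample output is canned so the sketch is self-contained; on an NSP cluster host you would pipe the real kubectl output instead, and the official substeps may include additional checks.

```shell
#!/bin/sh
# Count pods whose STATUS is neither Running nor Completed; reads
# "kubectl get pods -A" style output (header line, STATUS in column 4).
count_not_ready() {
    awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# Canned sample standing in for live kubectl output; the namespaces and pod
# names are illustrative only.
sample='NAMESPACE     NAME        READY   STATUS      RESTARTS   AGE
kube-system   coredns-1   1/1     Running     0          5d
nsp           job-1       0/1     Completed   0          5d
nsp           app-1       0/1     Pending     0          2m'

printf '%s\n' "$sample" | count_not_ready    # prints 1 (the Pending pod)
```

On a live cluster, `kubectl get pods -A | count_not_ready` printing 0 would indicate that every pod has settled.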
Back up NSP databases
3
On the standalone NSP cluster, or the primary cluster in a DR deployment, perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.
Note: The backup takes considerable time; while the backup is in progress, you may proceed to Step 4.
Back up system configuration files
4
Perform the following on the NSP deployer host in each data center.
Note: In a DR deployment, you must clearly identify the source cluster of each set of backup files.
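One way to keep the backup sets clearly identified per cluster is to label and timestamp the archive. The sketch below is an assumption, not the official backup list: the file names k8s-deployer.yml and hosts.yml appear later in this procedure, but the source paths, cluster label, and backup location here are illustrative, and a stand-in source directory is created so the example runs anywhere.

```shell
#!/bin/sh
# Sketch: archive deployer configuration files with a cluster label and
# timestamp so the source cluster of each backup set is unambiguous.
set -e

CLUSTER="dc1"                                   # hypothetical cluster identifier
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP="/tmp/nsp-config-${CLUSTER}-${STAMP}.tar.gz"

# Stand-in source directory; on a real deployer host this would be the
# deployer configuration directory under /opt/nsp.
SRC=$(mktemp -d)
printf 'hosts: "example"\n' > "$SRC/k8s-deployer.yml"
printf 'node_1_name: {}\n'  > "$SRC/hosts.yml"

tar -czf "$BACKUP" -C "$SRC" .    # -C keeps archive paths relative to SRC
tar -tzf "$BACKUP"                # list the archive contents to confirm
```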
Verify checksum of downloaded file
5
It is strongly recommended that you verify the message digest of each NSP file that you download from the Nokia Support portal. The downloaded .cksum file contains checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum commands. When the file download is complete, verify the file checksum.
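The check can be sketched as follows. A local file and its digest are created here so the example is self-contained; for the real bundle you would compare the sha256sum (or md5sum/sha512sum) output for the downloaded file with the corresponding line in the .cksum file.

```shell
#!/bin/sh
# Sketch: record a digest the way a checksum file stores it, then verify
# the file against it with sha256sum -c.
set -e

WORK=$(mktemp -d)
cd "$WORK"
FILE="NSP_K8S_DEPLOYER_R_r.tar.gz"     # placeholder bundle name from this procedure
echo "example payload" > "$FILE"

sha256sum "$FILE" > checksums.txt      # one "digest  filename" line per file
sha256sum -c checksums.txt             # prints "NSP_K8S_DEPLOYER_R_r.tar.gz: OK"
```

A mismatched digest makes `sha256sum -c` report FAILED and exit nonzero, so the check is easy to script.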
Upgrade NSP registry
6
Perform Step 7 to Step 16 on the NSP deployer host in each data center, and then go to Step 17.
Note: In a DR deployment, you must perform the steps first on the NSP deployer host in the primary data center.
7
If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform “To apply a RHEL update to an NSP image-based OS”.
8
Copy the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory.
9
Expand the software bundle file.
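The expansion can be sketched as below. A stand-in archive is built first so the example is self-contained; on the deployer host the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz would already be in /opt/nsp, and the single `tar -xzf` line is the whole step.

```shell
#!/bin/sh
# Sketch: expand the software bundle in place and confirm the expected
# release directory appears.
set -e

WORK=$(mktemp -d)                              # stands in for /opt/nsp
cd "$WORK"
mkdir -p nsp-registry-new-release-ID/bin       # minimal stand-in bundle content
tar -czf NSP_K8S_DEPLOYER_R_r.tar.gz nsp-registry-new-release-ID
rm -rf nsp-registry-new-release-ID

# The actual step: expand the bundle.
tar -xzf NSP_K8S_DEPLOYER_R_r.tar.gz
ls -d nsp-registry-new-release-ID/bin          # confirms the expansion
```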
10
If you are not upgrading Kubernetes from the immediately previous version supported by the NSP, but from an earlier version, you must uninstall the Kubernetes registry; otherwise, you can skip this step. See the Host Environment Compatibility Guide for NSP and CLM for information about Kubernetes version support. Enter the following:
# /opt/nsp/nsp-registry-old-release-ID/bin/nspregistryctl uninstall ↵
The Kubernetes software is uninstalled.
11
Enter the following:
# cd /opt/nsp/nsp-registry-new-release-ID/bin ↵
12
Enter the following to perform the registry upgrade:
Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that starts, is in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.
# ./nspregistryctl install ↵
13
If you did not perform Step 10 to uninstall the Kubernetes registry, go to Step 16.
14
Enter the following to import the original Kubernetes images:
# /opt/nsp/NSP-CN-DEP-base_load/bin/nspdeployerctl import ↵
where base_load is the initially deployed version of the installed NSP release
15
If you have applied any NSP service pack since the original deployment of the installed release, you must import the Kubernetes images from the latest applied service pack. Enter the following:
# /opt/nsp/NSP-CN-DEP-latest_load/bin/nspdeployerctl import ↵
where latest_load is the version of the latest applied NSP service pack
Verify NSP cluster initialization
16
When the registry upgrade is complete, verify the cluster initialization.
Prepare to upgrade NSP Kubernetes deployer
17
Perform Step 18 to Step 31 on the NSP deployer host in each cluster, and then go to Step 32.
Note: In a DR deployment, you can perform the steps on each NSP deployer host concurrently; the order is unimportant.
18
You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file. Open the following files using a plain-text editor such as vi:
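Before merging by hand, a unified diff of the two files makes it obvious which settings must be carried over. The sketch below uses stand-in files so it runs anywhere; on the deployer host you would diff the k8s-deployer.yml files under the old and new nsp-k8s-deployer-release-ID/config directories (the directory layout is an assumption based on the paths used in this procedure).

```shell
#!/bin/sh
# Sketch: compare old and new k8s-deployer.yml to see which settings to merge.
set -e

WORK=$(mktemp -d)
cd "$WORK"
mkdir -p old new
printf 'cluster:\n  name: c1\n' > old/k8s-deployer.yml
printf 'cluster:\n  name: CHANGE_ME\n  newParam: x\n' > new/k8s-deployer.yml

# diff exits 1 when the files differ, which is the expected case here;
# "|| true" keeps set -e from treating that as a failure.
diff -u old/k8s-deployer.yml new/k8s-deployer.yml || true
```

Lines prefixed with `-` show current settings to carry forward; lines prefixed with `+` show new parameters introduced by the new deployer version.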
19
Apply the settings in the old file to the same parameters in the new file.
20
Close the old k8s-deployer.yml file.
21
In the new k8s-deployer.yml file, edit the following line in the cluster section to read:
hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"
22
If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:
  sshAccess:
    userName: "user"
    privateKey: "path"
where
user is the designated NSP ansible user
path is the SSH key path, for example, /home/user/.ssh/id_rsa
23
Each NSP cluster VM has a parameter block like the following in the hosts section; configure the parameters for each VM, as required:
- isIngress: value
  nodeIp: private_IP
  accessIp: public_IP
  nodeName: node_name
where
value is true or false, and indicates whether the node acts as a load-balancer endpoint
private_IP is the VM IP address
public_IP is the public VM address; required only in a NAT environment
node_name is the VM name
24
In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.
Note: A single-node NSP cluster requires at least the client_IP address.
The addresses are the values for NSP client, internal, and mediation access that you specify in the ingressApplications subsection of the platform section of the nsp-config.yml file during NSP cluster deployment: from the internalAddresses subsection, if configured; otherwise, from the clientAddresses subsection.
  loadBalancerExternalIps:
    - client_IP
    - internal_IP
    - trapV4_mediation_IP
    - trapV6_mediation_IP
    - flowV4_mediation_IP
    - flowV6_mediation_IP
25
Configure the following parameter, which specifies whether dual-stack NE management is enabled:
Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:
enable_dual_stack_networks: value
where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing
26
Save and close the new k8s-deployer.yml file.
27
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵
28
Enter the following to create the new hosts.yml file:
# ./nspk8sctl config -c ↵
29
Enter the following to list the node entries in the new hosts.yml file:
# ./nspk8sctl config -l ↵
Output like the following example for a four-node cluster is displayed:
Note: If NAT is used in the cluster, the ansible_host value must match the access_ip value; otherwise, the ansible_host value must match the ip value.
Existing cluster hosts configuration is:
node_1_name:
  ansible_host: 203.0.113.11
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_2_name:
  ansible_host: 203.0.113.12
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_3_name:
  ansible_host: 203.0.113.13
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_4_name:
  ansible_host: 203.0.113.14
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "false"
30
Verify the IP addresses.
31
Enter the following to import the Kubernetes images to the repository:
# ./nspk8sctl import ↵
The images are imported.
Stop and undeploy NSP cluster
32
Perform Step 33 to Step 35 on each NSP cluster, and then go to Step 36.
Note: In a DR deployment, you must perform the steps first on the standby cluster.
33
Perform the following steps on the NSP deployer host to preserve the existing cluster data.
34
Enter the following on the NSP deployer host to undeploy the NSP cluster:
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy --clean
# /opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵
The NSP cluster is undeployed.
35
On the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:
Note: You must not proceed to the next step until the output lists only the following:
# kubectl get pods -A ↵
The pods are listed.
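The periodic check can be automated with a small polling loop. In the sketch below, get_pods is a stub returning canned output so the example is self-contained; on the cluster host it would be `kubectl get pods -A`. Treating "kube-system pods only" as the termination condition is an assumption for illustration; follow the exact pod list given in the official step.

```shell
#!/bin/sh
# Sketch: poll pod status until only the expected system pods remain.

get_pods() {    # stand-in for: kubectl get pods -A
    cat <<'EOF'
NAMESPACE     NAME           READY   STATUS    RESTARTS   AGE
kube-system   coredns-1      1/1     Running   0          5d
kube-system   kube-proxy-a   1/1     Running   0          5d
EOF
}

# Succeeds (exit 0) only when every listed pod is in the kube-system
# namespace, i.e. all NSP pods have terminated.
only_system_pods() {
    get_pods | awk 'NR > 1 && $1 != "kube-system" { bad = 1 } END { exit bad }'
}

until only_system_pods; do
    sleep 30    # illustrative polling interval
done
echo "only system pods remain"
```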
Deploy new NSP Kubernetes software
36
Perform Step 37 to the end of the procedure on each NSP cluster.
Note: In a DR deployment, you must perform the steps first on the primary cluster.
37
Perform one of the following.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in a command, as shown in the following examples, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass uninstall
nspk8sctl --ask-pass install
38
Enter the following on the NSP cluster host periodically to display the status of the Kubernetes system pods:
Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.
# kubectl get pods -A ↵
The pods are listed.
39
Enter the following periodically on the NSP cluster host to display the status of the NSP cluster nodes:
Note: You must not proceed to the next step until each node STATUS reads Ready.
# kubectl get nodes -o wide ↵
The NSP cluster nodes are listed, as shown in the following three-node cluster example:
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
node1   Ready    master   nd    version   int_IP        ext_IP
node2   Ready    master   nd    version   int_IP        ext_IP
node3   Ready    <none>   nd    version   int_IP        ext_IP
40
Update the NSP cluster configuration.
Disable pod security policy
41
If your NSP deployment is at Release 23.4 and has a Kubernetes version newer than the version initially shipped with NSP Release 23.4, you must disable the pod security policy.
Redeploy NSP software
42
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵
43
Enter the following:
# ./nspdeployerctl config ↵
44
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The NSP starts.
Verify NSP initialization
45
On the NSP cluster host, monitor and validate the NSP cluster initialization.
Note: You must not proceed to the next step until each NSP pod is operational.
46
Enter the following on the NSP cluster host to display the status of the NSP cluster members:
Note: You must not proceed to the next step until each node is operational.
# kubectl get nodes ↵
The status of each node is listed; all nodes are operational when the displayed STATUS value is Ready.
The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.
Verify upgraded NSP cluster operation
47
Use a browser to open the NSP cluster URL.
48
Verify the following.
49
As required, use the NSP to monitor device discovery and to check network management functions.
Note: You do not need to perform this step on the standby NSP cluster.
Note: If you are upgrading Kubernetes in a standalone NSP cluster, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage.
Purge Kubernetes image files
50
Note: You must perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.
Enter the following on the NSP deployer host:
# cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵
51
Enter the following on the NSP deployer host:
# ./nspk8sctl purge-registry -e ↵
The images are purged.
52
Close the open console windows.
End of steps