To upgrade the CLM Kubernetes environment
Purpose
Perform this procedure to upgrade the Kubernetes deployment environment in a CLM system. The procedure upgrades only the deployment infrastructure, and not the CLM software.
Note: You must upgrade Kubernetes in each CLM cluster of a DR deployment, as described in the procedure.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the CLM release, in the form MAJOR.minor.patch
version is a numeric value
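As a hypothetical illustration (the values below are invented, not a real release), a CLM release of 24.8.0 with numeric version 123 yields a release-ID of 24.8.0-rel.123, so a deployer path would resolve as follows:

```shell
# Hypothetical values: R.r.p = 24.8.0, version = 123 (both invented for illustration)
# release-ID = 24.8.0-rel.123, so a path such as
# /opt/nsp/nsp-k8s-deployer-release-ID/bin resolves to:
echo "/opt/nsp/nsp-k8s-deployer-24.8.0-rel.123/bin"
```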
Steps
Download Kubernetes upgrade bundle | |
1 |
Download the following from the NSP downloads page on the Nokia Support portal to a local station that is not part of the CLM deployment: Note: The download takes considerable time; while the download is in progress, you may proceed to Step 2.
where R_r is the CLM release ID, in the form Major_minor |
Verify CLM cluster readiness | |
2 |
Perform the following steps on each CLM cluster to verify that the cluster is fully operational.
|
Back up CLM databases | |
3 |
On the standalone CLM cluster, or the primary cluster in a DR deployment, perform Back up the existing CLM. |
Back up system configuration files | |
4 |
Perform the following on the CLM deployer host in each data center. Note: In a DR deployment, you must clearly identify the source cluster of each set of backup files.
|
Back up Kubernetes secrets | |
5 |
If the CLM is at Release 24.8 or later, and the Kubernetes secrets were not backed up as part of the database backup in Step 3, back up the Kubernetes secrets.
|
Verify checksum of downloaded file | |
6 |
It is strongly recommended that you verify the message digest of each CLM file that you download from the Nokia Support portal. The downloaded .cksum file contains checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum commands. When the file download is complete, verify the file checksum.
|
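The verification can be sketched as follows; the bundle and .cksum file names are placeholders, and a stand-in file is generated here so the commands are runnable outside the CLM environment:

```shell
# Sketch of checksum verification; the bundle name is a placeholder, and a
# stand-in file is created here so the commands run end to end anywhere.
cd "$(mktemp -d)"
echo "bundle contents" > NSP_CLM_K8S_DEPLOYER_R_r.tar.gz
sha256sum NSP_CLM_K8S_DEPLOYER_R_r.tar.gz > NSP_CLM_K8S_DEPLOYER_R_r.tar.gz.cksum
# Compare the recorded digest with the file on disk; prints "<file>: OK" on success
sha256sum -c NSP_CLM_K8S_DEPLOYER_R_r.tar.gz.cksum
```

In practice, compare the digests in the downloaded .cksum file against the output of the matching command (md5sum, sha256sum, or sha512sum) run on the downloaded bundle.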
Upgrade CLM registry | |
7 |
Perform Step 8 to Step 17 on the CLM deployer host in each data center, and then go to Step 18. Note: In a DR deployment, you must perform the steps first on the CLM deployer host in the primary data center. |
8 |
If the CLM deployer host is deployed in a VM created using a RHEL OS disk image, perform To apply a RHEL update to a CLM image-based OS. |
9 |
Copy the downloaded NSP_CLM_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory. |
10 |
Expand the software bundle file.
|
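The expansion can be sketched as below; the bundle name is a placeholder, /opt/nsp is simulated with a temporary directory, and a dummy archive is created first so the sketch is runnable:

```shell
# Sketch of expanding the software bundle; a dummy archive stands in for the
# real NSP_CLM_K8S_DEPLOYER_R_r.tar.gz, and a temp dir stands in for /opt/nsp.
cd "$(mktemp -d)"
mkdir -p nsp-k8s-deployer-demo/bin
tar -czf NSP_CLM_K8S_DEPLOYER_demo.tar.gz nsp-k8s-deployer-demo
rm -r nsp-k8s-deployer-demo
# The expansion step itself:
tar -zxf NSP_CLM_K8S_DEPLOYER_demo.tar.gz
ls nsp-k8s-deployer-demo
```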
11 |
If you are not upgrading Kubernetes from the immediately previous version supported by the CLM, but from an earlier version, you must uninstall the Kubernetes registry; otherwise, you can skip this step. See the Host Environment Compatibility Guide for NSP and CLM for information about Kubernetes version support. Enter the following: # /opt/nsp/nsp-registry-old-release-ID/bin/nspregistryctl uninstall ↵ The Kubernetes software is uninstalled. |
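The decision in this step amounts to a version comparison, sketched below; the version strings are invented placeholders, not actual CLM-supported Kubernetes versions (consult the compatibility guide for those):

```shell
# Invented placeholder versions; see the Host Environment Compatibility Guide
# for NSP and CLM for the versions that are actually supported.
current="1.26"; immediately_previous="1.27"
if [ "$current" != "$immediately_previous" ]; then
  echo "uninstall the Kubernetes registry before upgrading"
fi
```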
12 |
Enter the following: # cd /opt/nsp/nsp-registry-new-release-ID/bin ↵ |
13 |
Enter the following to perform the registry upgrade: Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, a CLM pod that restarts on a new cluster node, or a pod that starts, is in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required. # ./nspregistryctl install ↵ |
14 |
If you did not perform Step 11 to uninstall the Kubernetes registry, go to Step 17. |
15 |
Enter the following to import the original Kubernetes images: # /opt/nsp/NSP-CN-DEP-base_load/bin/nspdeployerctl import ↵ where base_load is the initially deployed version of the installed CLM release |
16 |
If you have applied any CLM service pack since the original deployment of the installed release, you must import the Kubernetes images from the latest applied service pack. Enter the following: # /opt/nsp/NSP-CN-DEP-latest_load/bin/nspdeployerctl import ↵ where latest_load is the version of the latest applied CLM service pack |
Verify CLM cluster initialization | |
17 |
When the registry upgrade is complete, verify the cluster initialization.
|
Prepare to upgrade CLM Kubernetes deployer | |
18 |
Perform Step 19 to Step 32 on the CLM deployer host in each cluster, and then go to Step 33. Note: In a DR deployment, you can perform the steps on each CLM deployer host concurrently; the order is unimportant. |
19 |
You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file. Open the following files using a plain-text editor such as vi:
|
20 |
Apply the settings in the old file to the same parameters in the new file. |
21 |
Close the old k8s-deployer.yml file. |
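Before closing the old file, a unified diff can help confirm that every customized setting was carried over; the paths and keys below are stand-ins created on the fly, not the real parameter set:

```shell
# Illustrative diff of an old and new k8s-deployer.yml; the files and keys
# here are stand-ins, not the real configuration.
old=$(mktemp); new=$(mktemp)
printf 'hosts: "/opt/nsp/old/config/hosts.yml"\nsetting: customized\n' > "$old"
printf 'hosts: "/opt/nsp/new/config/hosts.yml"\nsetting: default\n'    > "$new"
# Lines prefixed with '-' exist only in the old file and may need merging
diff -u "$old" "$new" || true   # diff exits nonzero when the files differ
```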
22 |
In the new k8s-deployer.yml file, edit the following line in the cluster section to read: hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml" |
23 |
If you have disabled remote root access to the CLM cluster VMs, configure the following parameters in the cluster section, sshAccess subsection: sshAccess: userName: "user" privateKey: "path" where user is the designated CLM ansible user path is the SSH key path, for example, /home/user/.ssh/id_rsa |
24 |
Each CLM cluster VM has a parameter block like the following in the hosts section; configure the parameters for each VM, as required: - isIngress: value nodeIp: private_IP accessIp: public_IP nodeName: node_name where value is true or false, and indicates whether the node acts as a load-balancer endpoint private_IP is the VM IP address public_IP is the public VM address; required only in a NAT environment node_name is the VM name |
25 |
In the following section, specify the virtual IP addresses for the CLM to use as the internal load-balancer endpoints. Note: A single-node CLM cluster requires at least the client_IP address. The addresses are the values for CLM client, internal, and mediation access that you specify in the platform, ingressApplications section of the nsp-config.yml file during CLM cluster deployment. Specify the addresses in the internalAddresses subsection, if configured; otherwise, in the clientAddresses subsection: loadBalancerExternalIps: - client_IP - internal_IP |
26 |
Configure the following parameter, which specifies whether dual-stack NE management is enabled: Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:
enable_dual_stack_networks: value where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing |
27 |
Save and close the new k8s-deployer.yml file. |
28 |
Enter the following: # cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵ |
29 |
Enter the following to create the new hosts.yml file: # ./nspk8sctl config -c ↵ |
30 |
Enter the following to list the node entries in the new hosts.yml file: # ./nspk8sctl config -l ↵ Output like the following example for a one-node cluster is displayed. Note: The ansible_host value must match the access_ip value; in a NAT environment, access_ip is the public VM address, and otherwise it matches the ip value.
Existing cluster hosts configuration is:
node_1_name:
  ansible_host: 203.0.113.11
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true" |
31 |
Verify the IP addresses. |
32 |
Enter the following to import the Kubernetes images to the repository: # ./nspk8sctl import ↵ The images are imported. |
Stop and undeploy CLM cluster | |
33 |
Perform Step 34 to Step 36 on each CLM cluster, and then go to Step 37. Note: In a DR deployment, you must perform the steps first on the standby cluster. |
34 |
Perform the following steps on the CLM deployer host to preserve the existing cluster data.
|
35 |
Enter the following on the CLM deployer host to undeploy the CLM cluster: Note: If you are upgrading a standalone CLM system, or the primary CLM cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade. Note: If the CLM cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member: nspdeployerctl --ask-pass uninstall --undeploy --clean # /opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵ The CLM cluster is undeployed. |
36 |
On the CLM cluster host, enter the following periodically to display the status of the Kubernetes system pods: Note: You must not proceed to the next step until the output lists only the following: # kubectl get pods -A ↵ The pods are listed. |
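One way to check that nothing but Kubernetes system pods remain is to filter the NAMESPACE column of the kubectl output; the sample output below is simulated so the filter can run standalone:

```shell
# Simulated 'kubectl get pods -A' output; in practice, pipe the real command
# into the awk filter instead of using this variable.
pods='kube-system   coredns-abc        1/1   Running   0   5m
kube-system   kube-proxy-node1   1/1   Running   0   6m'
# Any pod outside the kube-system namespace means undeployment is incomplete
leftover=$(printf '%s\n' "$pods" | awk '$1 != "kube-system"')
[ -z "$leftover" ] && echo "only system pods remain"
```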
Deploy new CLM Kubernetes software | |
37 |
Perform Step 38 to the end of the procedure on each CLM cluster. Note: In a DR deployment, you must perform the steps first on the primary cluster. |
38 |
If the new Kubernetes version is more than one version later than the existing version, you cannot upgrade the software; instead, you must completely replace the existing version by uninstalling the software and then installing the new version. Perform the following steps. Note: The software replacement on a cluster takes approximately 30 minutes, and is the recommended option. Note: If the CLM cluster VMs do not have the required SSH key, you must include the --ask-pass argument in a command, as shown in the following example, and are subsequently prompted for the root password of each cluster member: nspk8sctl --ask-pass uninstall
|
39 |
Enter the following: # cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵ |
40 |
If the CLM is at Release 24.8 or later, restore the backed-up Kubernetes secrets.
|
41 |
Enter the following to install or upgrade the Kubernetes software: Note: If you do not uninstall Kubernetes in Step 38, the software is upgraded rather than installed. An upgrade takes considerable time; during the upgrade process, each cluster node is individually cordoned, drained, upgraded, and uncordoned. The operation on each node may take 15 minutes or more. # ./nspk8sctl install ↵ The new CLM Kubernetes environment is deployed. |
42 |
Enter the following on the CLM cluster host periodically to display the status of the Kubernetes system pods: Note: You must not proceed to the next step until each pod STATUS reads Running or Completed. # kubectl get pods -A ↵ The pods are listed. |
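The Running-or-Completed check can be automated by filtering the STATUS column; the sample output below is simulated so the filter runs standalone:

```shell
# Simulated 'kubectl get pods -A' output; pipe the real command in practice.
pods='kube-system  coredns-abc  1/1  Running    0  5m
kube-system  etcd-node1   1/1  Running    0  6m
nsp          job-init     0/1  Completed  0  2m'
# Keep any pod whose STATUS (4th column) is neither Running nor Completed
not_ready=$(printf '%s\n' "$pods" | awk '$4 != "Running" && $4 != "Completed"')
[ -z "$not_ready" ] && echo "all pods Running or Completed"
```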
43 |
Enter the following periodically on the CLM cluster host to display the status of the CLM cluster nodes: Note: You must not proceed to the next step until each node STATUS reads Ready. # kubectl get nodes -o wide ↵ The CLM cluster nodes are listed, as shown in the following example:
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
node1   Ready    master   nd    version   int_IP        ext_IP |
44 |
Update the CLM cluster configuration.
|
Redeploy CLM software | |
45 |
Enter the following on the CLM deployer host: # cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵ |
46 |
Enter the following: # ./nspdeployerctl config ↵ |
47 |
Enter the following: Note: If the CLM cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member: nspdeployerctl --ask-pass install --config --deploy # ./nspdeployerctl install --config --deploy ↵ The CLM starts. |
Verify CLM initialization | |
48 |
On the CLM cluster host, monitor and validate the CLM cluster initialization. Note: You must not proceed to the next step until each CLM pod is operational.
|
49 |
Enter the following on the CLM cluster host to display the status of the CLM cluster members: Note: You must not proceed to the next step until each node is operational. # kubectl get nodes ↵ The status of each node is listed; all nodes are operational when the displayed STATUS value is Ready. The CLM Kubernetes deployer log file is /var/log/nspk8sctl.log. |
Verify upgraded CLM cluster operation | |
50 |
Use a browser to open the CLM cluster URL. |
51 |
Verify the following.
|
52 |
As required, use the CLM to monitor device discovery and to check network management functions. Note: You do not need to perform this step on the standby CLM cluster. Note: If you are upgrading Kubernetes in a standalone CLM cluster, or the primary CLM cluster in a DR deployment, the completed CLM cluster initialization marks the end of the network management outage. |
Purge Kubernetes image files | |
53 |
Note: You must perform this and the following step only after you verify that the CLM system is operationally stable and that an upgrade rollback is not required. Enter the following on the CLM deployer host: # cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵ |
54 |
Enter the following on the CLM deployer host: # ./nspk8sctl purge-registry -e ↵ The images are purged. |
55 |
Close the open console windows. End of steps |