To upgrade the NSP Kubernetes environment

Purpose

Perform this procedure to upgrade the Kubernetes deployment environment in an NSP system. The procedure upgrades only the deployment infrastructure, and not the NSP software.

Note: You must upgrade Kubernetes in each NSP cluster of a DR deployment, as described in the procedure.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
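
For example, under a hypothetical NSP Release 24.4 patch 0 with deployer build 123 (values chosen for illustration only), release-ID would be 24.4.0-rel.123 and a path such as the following would result:

/opt/nsp/NSP-CN-DEP-24.4.0-rel.123/config/nsp-deployer.yml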

Steps
Download Kubernetes upgrade bundle
 

1 

Download the following from the NSP downloads page on the Nokia Support portal to a local station that is not part of the NSP deployment:

Note: The download takes considerable time; while the download is in progress, you may proceed to Step 2.

  • NSP_K8S_DEPLOYER_R_r.tar.gz—software bundle for installing the registry and deploying the container environment

  • associated .cksum file

where

R_r is the NSP release ID, in the form Major_minor
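
For example, for a hypothetical NSP Release 24.4 (the value is for illustration only), the software bundle would be named NSP_K8S_DEPLOYER_24_4.tar.gz; the associated .cksum file accompanies it on the downloads page.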


Verify NSP cluster readiness
 

2 

Perform the following steps on each NSP cluster to verify that the cluster is fully operational.

  1. Log in as the root user on the NSP cluster host.

  2. Open a console window.

  3. Enter the following to display the status of the NSP cluster nodes:

    kubectl get nodes -A ↵

    The status of each cluster node is displayed.

    The NSP cluster is fully operational if the status of each node is Ready.

  4. If any node is not in the Ready state, you must correct the condition; contact technical support for assistance, if required.

    Do not proceed to the next step until the issue is resolved.

  5. Enter the following to display the NSP pod status:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is operational if the status of each pod is Running or Completed.

  6. If any pod is not in the Running or Completed state, you must correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
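
If the cluster hosts many pods, a filter like the following is a convenience only, not part of the product tooling; empty output from both commands indicates that the node and pod checks above have passed:

kubectl get nodes --no-headers | grep -v ' Ready' ↵

kubectl get pods -A --no-headers | grep -Ev 'Running|Completed' ↵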


Back up NSP databases
 

3 

On the standalone NSP cluster, or the primary cluster in a DR deployment, perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.

Note: The backup takes considerable time; while the backup is in progress, you may proceed to Step 4.


Back up system configuration files
 

4 

Perform the following on the NSP deployer host in each data center.

Note: In a DR deployment, you must clearly identify the source cluster of each set of backup files.

  1. Back up the following Kubernetes registry certificate files in the following directory:

    /opt/nsp/nsp-registry/tls

    • nokia-nsp-registry.crt

    • nokia-nsp-registry.key

  2. Back up the Kubernetes deployer configuration file:

    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml

  3. Back up the NSP deployer configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml

  4. Back up the NSP configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip

  5. Copy the backed-up files to a separate station that is not part of the NSP deployment.
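
The following is one way to gather and copy the files; the remote station name backup-station and the /backups/dc1 directory are hypothetical examples only, and the release-ID values must match your deployment:

mkdir -p /tmp/nsp-config-backup ↵

cp /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key /tmp/nsp-config-backup/ ↵

cp /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml /tmp/nsp-config-backup/ ↵

cp /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml /tmp/nsp-config-backup/ ↵

cp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip /tmp/nsp-config-backup/ ↵

scp -r /tmp/nsp-config-backup root@backup-station:/backups/dc1/ ↵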


Verify checksum of downloaded file
 

5 

It is strongly recommended that you verify the message digest of each NSP file that you download from the Nokia Support portal. The downloaded .cksum file contains checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum commands.

When the file download is complete, verify the file checksum.

  1. Enter the following:

    command file

    where

    command is md5sum, sha256sum, or sha512sum

    file is the name of the downloaded file

    A file checksum is displayed.

  2. Compare the checksum and the associated value in the .cksum file.

  3. If the values do not match, the file download has failed. Retry the download, and then repeat Step 5.
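
For example, to verify the bundle using sha256sum (the same pattern applies to md5sum and sha512sum):

sha256sum NSP_K8S_DEPLOYER_R_r.tar.gz ↵

Then open the downloaded .cksum file and confirm that the sha256 value it lists for the bundle exactly matches the displayed checksum.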


Upgrade NSP registry
 

6 

Perform Step 7 to Step 16 on the NSP deployer host in each data center, and then go to Step 17.

Note: In a DR deployment, you must perform the steps first on the NSP deployer host in the primary data center.


7 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform “To apply a RHEL update to an NSP image-based OS”.


8 

Copy the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory.


9 

Expand the software bundle file.

  1. Enter the following:

    cd /opt/nsp ↵

  2. Enter the following:

    tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

    The bundle file is expanded, and the following directories are created:

    • /opt/nsp/nsp-registry-new-release-ID

    • /opt/nsp/nsp-k8s-deployer-new-release-ID

  3. After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:

    rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


10 

If you are upgrading Kubernetes from a version earlier than the immediately previous version supported by the NSP, you must first uninstall the Kubernetes software; otherwise, skip this step. See the Host Environment Compatibility Guide for NSP and CLM for information about Kubernetes version support.

Enter the following:

/opt/nsp/nsp-registry-old-release-ID/bin/nspregistryctl uninstall ↵

The Kubernetes software is uninstalled.


11 

Enter the following:

cd /opt/nsp/nsp-registry-new-release-ID/bin ↵


12 

Enter the following to perform the registry upgrade:

Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that starts, remains in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.

./nspregistryctl install ↵


13 

If you did not perform Step 10 to uninstall Kubernetes, go to Step 16.


14 

Enter the following to import the original Kubernetes images:

/opt/nsp/NSP-CN-DEP-base_load/bin/nspdeployerctl import ↵

where base_load is the initially deployed version of the installed NSP release


15 

If you have applied any NSP service pack since the original deployment of the installed release, you must import the Kubernetes images from the latest applied service pack.

Enter the following to import the Kubernetes images from the latest applied service pack:

/opt/nsp/NSP-CN-DEP-latest_load/bin/nspdeployerctl import ↵

where latest_load is the version of the latest applied NSP service pack
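
For example, if the installed release was originally deployed at a hypothetical load 24.4.0-rel.100 and the latest applied service pack is a hypothetical 24.4.0-rel.245 (values for illustration only), Step 14 and Step 15 would run the following:

/opt/nsp/NSP-CN-DEP-24.4.0-rel.100/bin/nspdeployerctl import ↵

/opt/nsp/NSP-CN-DEP-24.4.0-rel.245/bin/nspdeployerctl import ↵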


Verify NSP cluster initialization
 
16 

When the registry upgrade is complete, verify the cluster initialization.

  1. Enter the following:

    kubectl get nodes ↵

    NSP deployer node status information like the following is displayed:

    NAME        STATUS    ROLES                  AGE     VERSION

    node_name   status    control-plane,master   xxdnnh   version

  2. Verify that status is Ready; do not proceed to the next step otherwise.

  3. Enter the following periodically to monitor the NSP cluster initialization:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is fully operational when the status of each pod is Running or Completed.

  4. If any pod fails to enter the Running or Completed state, correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
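
Rather than re-entering the command manually, you can use the standard RHEL watch utility, if it is available on the host, to refresh the pod listing automatically; for example, every 30 seconds. Press Ctrl+C to exit when all pods are Running or Completed:

watch -n 30 kubectl get pods -A ↵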


Prepare to upgrade NSP Kubernetes deployer
 
17 

Perform Step 18 to Step 23 on the NSP deployer host in each cluster, and then go to Step 24.

Note: In a DR deployment, you can perform the steps on each NSP deployer host concurrently; the order is unimportant.


18 

Update the NSP deployer configuration.

  1. Open the following files using a plain-text editor such as vi:

    • old configuration file—/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml

    • new configuration file—/opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml

  2. Merge the settings from the old file into the new file.

  3. Edit the following line in the new file to read:

      hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"

  4. Save and close the new k8s-deployer.yml file.

  5. Close the old k8s-deployer.yml file.


19 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


20 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


21 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The access_ip value is the public IP address of the cluster node.

  • The ip value is the private IP address of the cluster node.

  • The ansible_host value is the same as the access_ip value.

Note: If NAT is not used in the cluster:

  • The access_ip value is the IP address of the cluster node.

  • The ip value matches the access_ip value.

  • The ansible_host value is the same as the access_ip value.

Existing cluster hosts configuration is:

all:
  hosts:
    node1:
      ansible_host: 203.0.113.11
      ip: ip
      access_ip: access_ip
    node2:
      ansible_host: 203.0.113.12
      ip: ip
      access_ip: access_ip
    node3:
      ansible_host: 203.0.113.13
      ip: ip
      access_ip: access_ip
    node4:
      ansible_host: 203.0.113.14
      ip: ip
      access_ip: access_ip


22 

Verify that the IP addresses in the output are correct for each cluster node.


23 

Enter the following to import the Kubernetes images to the repository:

./nspk8sctl import ↵

The images are imported.


Stop and undeploy NSP cluster
 
24 

Perform Step 25 to Step 27 on each NSP cluster, and then go to Step 28.

Note: In a DR deployment, you must perform the steps first on the standby cluster.


25 

Perform the following steps on the NSP deployer host to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the file.
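
After the edit, the relevant portion of nsp-config.yml looks broadly like the following sketch; surrounding parameters are omitted and the nesting shown is illustrative only — preserve the indentation that is already present in your file:

platform:
  kubernetes:
    deleteOnUndeploy: false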


26 

Enter the following on the NSP deployer host to undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:

nspdeployerctl --ask-pass --option --option

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.
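
For example, if the cluster members lack the required SSH key, the undeploy command in this step would be entered with the --ask-pass argument, following the pattern shown in the note above:

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl --ask-pass uninstall --undeploy --clean ↵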


27 

On the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until the output lists only the following:

  • pods in kube-system namespace

  • nsp-backup-storage pod

kubectl get pods -A ↵

The pods are listed.
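
A filter like the following is a convenience only; once the undeploy completes, its output should list only the nsp-backup-storage pod, because all remaining pods belong to the kube-system namespace:

kubectl get pods -A --no-headers | grep -v '^kube-system' ↵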


Deploy new NSP Kubernetes software
 
28 

Perform Step 29 to the end of the procedure on each NSP cluster.

Note: In a DR deployment, you must perform the steps first on the primary cluster.


29 

Perform one of the following.

  1. Upgrade the Kubernetes software, which is recommended if the new version is only one version later than your current version.

    Note: The installation takes considerable time; during the process, each cluster node is cordoned, drained, upgraded, and uncordoned, one node at a time. The operation on each node may take 15 minutes or more.

    Enter the following on the NSP deployer host:

    /opt/nsp/nsp-k8s-deployer-new-release-ID/bin/nspk8sctl install ↵

    The upgraded NSP Kubernetes environment is deployed.

  2. Replace the current Kubernetes software with the new version.

    Note: Replacement is the recommended option if the new Kubernetes version is more than one version later than your current version.

    Note: The replacement takes approximately 30 minutes per cluster.

    1. Enter the following on the NSP deployer host:

      cd /opt/nsp/nsp-k8s-deployer-old-release-ID/bin ↵

    2. Enter the following:

      ./nspk8sctl uninstall ↵

      The existing Kubernetes software is uninstalled.

    3. Enter the following:

      cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵

    4. Enter the following:

      ./nspk8sctl install ↵

    The new NSP Kubernetes environment is deployed.


30 

Enter the following on the NSP cluster host periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


31 

Enter the following periodically on the NSP cluster host to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


32 

Update the NSP cluster configuration.

  1. Open the following files using a plain-text editor such as vi:

    • old configuration file—/opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml

    • new configuration file—/opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

  2. Merge the settings from the old file into the new file.

  3. Edit the following line in the new file to read:

      hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"

  4. Save and close the new nsp-deployer.yml file.

  5. Close the old nsp-deployer.yml file.


Disable pod security policy
 
33 

If your NSP deployment is at Release 23.4 and has a Kubernetes version newer than the version initially shipped with NSP Release 23.4, you must disable the pod security policy.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-latest_load/NSP-CN-latest_load/config/nsp-config.yml

    where latest_load is the ID of the latest applied NSP service pack

  2. Edit the following line in the nsp section, podSecurityPolicies subsection to read:

          enabled: false

  3. Save and close the file.
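
To confirm the change before redeploying, you can display the setting in context; this assumes the podSecurityPolicies keyword appears literally in the file, as the subsection name suggests:

grep -A 2 'podSecurityPolicies' /opt/nsp/NSP-CN-DEP-latest_load/NSP-CN-latest_load/config/nsp-config.yml ↵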


Redeploy NSP software
 
34 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


35 

Enter the following:

./nspdeployerctl install --config --deploy ↵

The NSP starts.


Verify NSP initialization
 
36 

On the NSP cluster host, monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is listed; all pods are operational when each displayed STATUS value is Running or Completed.

    The NSP deployer log file is /var/log/nspdeployerctl.log.

  2. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  3. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.


37 

Enter the following on the NSP cluster host to display the status of the NSP cluster members:

Note: You must not proceed to the next step until each node is operational.

kubectl get nodes ↵

The status of each node is listed; all nodes are operational when the displayed STATUS value is Ready.

The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.
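
If the initialization appears stalled, the log files named in this step and in Step 36 can be followed live on the NSP deployer host, where the nspdeployerctl and nspk8sctl commands are run; for example:

tail -f /var/log/nspdeployerctl.log ↵

tail -f /var/log/nspk8sctl.log ↵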


Verify upgraded NSP cluster operation
 
38 

Use a browser to open the NSP cluster URL.


39 

Verify the following.

  • In a DR deployment, if you specify the standby cluster address, the browser is redirected to the primary cluster address.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.


40 

As required, use the NSP to monitor device discovery and to check network management functions.

Note: You do not need to perform this step on the standby NSP cluster.

Note: If you are upgrading Kubernetes in a standalone NSP cluster, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage.


Purge Kubernetes image files
 
41 

Note: You must perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


42 

Enter the following on the NSP deployer host:

./nspk8sctl purge-registry -e ↵

The images are purged.


43 

Close the open console windows.

End of steps