To upgrade the NSP Kubernetes environment

Purpose

Perform this procedure to upgrade the Kubernetes deployment environment in an NSP system. The procedure upgrades only the deployment infrastructure, and not the NSP software.

Note: You must upgrade Kubernetes in each NSP cluster of a DR deployment, as described in the procedure.

Note: The following denote a specific NSP release ID in a file path:

  • old-release-ID—currently installed NSP release

  • new-release-ID—NSP release you are upgrading to

  • base_load—original deployed version of the installed NSP release, for example, Release 24.11

  • latest_load—version of the latest applied NSP service pack, for example, Release 24.11 SP7

Each release ID has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Kubernetes upgrade considerations

  • Before you attempt to upgrade the Kubernetes deployment environment:

    • The latest RHEL OS patches must be applied to the deployer host and all cluster VMs.

    • All NSP pods must be healthy.

  • For non-root installs, you must add the nspk8supdate script to the sudoers file for the NSP admin user.

    See Restricting root-user system access for examples of sudoers files.

  • If you are not upgrading from the immediately previous Kubernetes release and you are using your own storage, the Kubernetes cluster is uninstalled by default and data is not preserved during the Kubernetes upgrade. Perform Step 15 to preserve your data.

Steps

Note: The script in Deploy new NSP Kubernetes and NSP registry software disables SELinux enforcing mode on the deployer host and the cluster VMs, and does not automatically re-enable it at the end of the upgrade process. You must manually re-enable SELinux enforcing mode.

Download Kubernetes upgrade bundle
 
1 

Download the following from the NSP downloads page on the Nokia Support portal to a local station that is not part of the NSP deployment:

Note: The download takes considerable time; while the download is in progress, you may proceed to Step 2.

  • NSP_K8S_DEPLOYER_R_r.tar.gz—software bundle for installing the registry and deploying the container environment

  • associated .cksum file

where

R_r is the NSP release ID, in the form Major_minor


Verify NSP cluster readiness
 
2 

Perform the following steps on each NSP cluster to verify that the cluster is fully operational.

  1. Log in as the root user on the NSP cluster host.

  2. Open a console window.

  3. Enter the following to display the status of the NSP cluster nodes:

    kubectl get nodes -A ↵

    The status of each cluster node is displayed.

    The NSP cluster is fully operational if the status of each node is Ready.

  4. If any node is not in the Ready state, you must correct the condition; contact technical support for assistance, if required.

    Do not proceed to the next step until the issue is resolved.

  5. Enter the following to display the NSP pod status:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is operational if the status of each pod is Running or Completed.

  6. If any pod is not in the Running or Completed state, you must correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
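As a convenience, the checks above can be reduced to a filter that prints only problem pods. The following sketch runs an awk filter against captured sample output (the pod names and namespaces shown are illustrative, not from a live system); on the NSP cluster host, you can pipe the output of kubectl get pods -A directly into the same filter, and anything it prints needs attention.

```shell
# Filter a 'kubectl get pods -A' listing down to pods whose STATUS is
# neither Running nor Completed. Demonstrated on captured sample output;
# on a live cluster, pipe 'kubectl get pods -A' into the same awk filter.
cat <<'EOF' > pods.txt
NAMESPACE            NAME                       READY   STATUS      RESTARTS   AGE
nsp-psa-restricted   nspos-kafka-0              1/1     Running     0          5d
nsp-psa-restricted   nspos-job-sample           0/1     Completed   0          5d
nsp-psa-privileged   mdm-server-0               0/1     Pending     0          5d
EOF

# NR > 1 skips the header row; $4 is the STATUS column.
awk 'NR > 1 && $4 != "Running" && $4 != "Completed"' pods.txt
```

An empty result from the filter corresponds to a fully operational cluster; in the sample above, only the Pending mdm-server-0 line is printed.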


Back up NSP databases
 
3 

On the standalone NSP cluster, or the primary cluster in a DR deployment, perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.

Note: The backup takes considerable time; while the backup is in progress, you may proceed to Step 4.


Back up system configuration files
 
4 

Perform the following on the NSP deployer host in each data center.

Note: In a DR deployment, you must clearly identify the source cluster of each set of backup files.

  1. Log in as the root or NSP admin user.

  2. Back up the Kubernetes deployer configuration file:

    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml

  3. Back up the NSP deployer configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml

  4. Back up the NSP configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip

  5. Copy the backed-up files to a separate station that is not part of the NSP deployment.
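The substeps above can be scripted. The following is a minimal sketch, assuming the release-ID placeholders are replaced with the installed NSP release ID; the staging directory and archive name are illustrative choices, not mandated locations, and missing files are reported rather than silently skipped.

```shell
# Stage the NSP configuration backups and pack them into one archive.
# Replace the release-ID placeholders with the installed NSP release ID.
STAGE=$(mktemp -d /tmp/nsp-cfg-backup.XXXXXX)   # illustrative staging directory

for f in \
    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml \
    /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml \
    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip
do
    cp "$f" "$STAGE"/ 2>/dev/null || echo "WARNING: not found: $f" >&2
done

tar -czf "$STAGE.tar.gz" -C "$STAGE" .
echo "Backup archive: $STAGE.tar.gz"
# Copy the archive to a station outside the NSP deployment, for example with scp.
```

In a DR deployment, include the source cluster name in the archive name so that the origin of each set of backup files remains clear.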


Verify checksum of downloaded file
 
5 

It is strongly recommended that you verify the message digest of each NSP file that you download from the Nokia Support portal. The downloaded .cksum file contains checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum commands.

When the file download is complete, verify the file checksum.

  1. Enter the following:

    command file

    where

    command is md5sum, sha256sum, or sha512sum

    file is the name of the downloaded file

    A file checksum is displayed.

  2. Compare the checksum and the associated value in the .cksum file.

  3. If the values do not match, the file download has failed. Retry the download, and then repeat Step 5.
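The comparison in the substeps above can be scripted. The sketch below demonstrates the mechanics on a small stand-in file; for the real bundle, substitute the downloaded file name and the matching value from the .cksum file.

```shell
# Demonstrate the checksum comparison on a stand-in file. In practice,
# substitute the downloaded NSP file and the value from its .cksum file.
printf 'sample payload' > sample.bin                 # stand-in for the download
expected=$(sha256sum sample.bin | awk '{print $1}')  # stand-in for the .cksum value

computed=$(sha256sum sample.bin | awk '{print $1}')
if [ "$computed" = "$expected" ]; then
    echo "checksum OK"
else
    # On mismatch, retry the download and repeat Step 5.
    echo "checksum MISMATCH - retry the download" >&2
fi
```

The same pattern applies with md5sum or sha512sum, depending on which digest the .cksum file provides.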


6 

Perform the following steps on the NSP deployer host in each data center, and then go to Deploy new NSP Kubernetes and NSP registry software.

Note: In a DR deployment, you must perform the steps first on the NSP deployer host in the primary data center.


7 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform To apply a RHEL update to an NSP image-based OS.


8 

Copy the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory.


9 

Expand the software bundle file.

  1. Enter the following:

    cd /opt/nsp ↵

  2. Enter the following:

    tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

    The bundle file is expanded, and the following directories are created:

    • /opt/nsp/nsp-registry-new-release-ID

    • /opt/nsp/nsp-k8s-deployer-new-release-ID

  3. After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:

    rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


Deploy new NSP Kubernetes and NSP registry software
 
10 

Perform the following on the NSP deployer host in each data center.

Note: In a DR deployment, upgrade the clusters as follows:

  1. Upgrade the standby cluster by performing Step 11 to Step 23.

  2. Perform an NSP DR switchover to change the role of the standby cluster to primary.

  3. Upgrade the former primary cluster by performing Step 11 to Step 23.


11 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


12 

If you are not upgrading from the immediately previous Kubernetes release, and you are using your own storage, go to Step 15.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspk8supgrade ... --ask-pass

Note: You can use the --skip-nsp-redeploy argument for SELinux enforcing systems that need to update their SELinux policies.

Enter the following to install or upgrade the Kubernetes software:

./nspk8supgrade -K /opt/nsp/nsp-k8s-deployer-old-release-ID -R /opt/nsp/nsp-registry-old_load-ID -N /opt/nsp/NSP-CN-DEP-base_load-ID -P /opt/nsp/NSP-CN-DEP-latest_load/bin/ ↵

where

nsp-k8s-deployer-old-release-ID is the previous Kubernetes base directory

nsp-registry-old_load-ID is the previous registry base directory

NSP-CN-DEP-base_load-ID is the original deployed version of the installed NSP release

NSP-CN-DEP-latest_load is the version of the latest applied NSP service pack

The following warning is displayed before the script runs:

Wed Jul  9 17:01:06 EDT 2025 -> *** WARNING ***

Wed Jul  9 17:01:06 EDT 2025 -> 

Wed Jul  9 17:01:06 EDT 2025 -> Upgrading k8s will require NSP be uninstalled and reinstalled.

Wed Jul  9 17:01:06 EDT 2025 -> Data will be preserved.

Wed Jul  9 17:01:06 EDT 2025 -> 

Wed Jul  9 17:01:06 EDT 2025 -> If SELinux is in enforcing mode on the deployer or cluster hosts it will be set to permissive.

Wed Jul  9 17:01:06 EDT 2025 -> The Sys Admin procedures for updating SELinux and setting it back to enforcing must be

Wed Jul  9 17:01:06 EDT 2025 -> performed manually by the user after this upgrade completes.

Wed Jul  9 17:01:06 EDT 2025 -> 

Would you like to continue with the upgrade? [y/n]  


13 

If the script fails, an error message is displayed that indicates the step at which the script failed.

  1. View the error message and find where the script failed.

    The following is an example of an error message:

    Wed Jul  9 10:29:08 EDT 2025 -> Step 3/23: Verify readiness

    Wed Jul  9 10:29:08 EDT 2025 -> Waiting for the cluster to be ready...

    Wed Jul  9 10:29:19 EDT 2025 ->

    Wed Jul  9 10:29:19 EDT 2025 -> Step 5/23: Uninstall old registry  

    Wed Jul  9 10:29:19 EDT 2025 ->

    Wed Jul  9 10:29:19 EDT 2025 -> The existing registry does not support in-place upgrades to 25.0.0-rel.8.

    Wed Jul  9 10:29:19 EDT 2025 -> The existing registry will be uninstalled so this version can be installed.

    Wed Jul  9 10:29:19 EDT 2025 -> Images and charts will be re-installed later.

    Wed Jul  9 10:29:19 EDT 2025 -> Executing: /opt/build/nsp-registry-23.4.0-rel.34/bin/nspregistryctl uninstall --assumeyes

    Wed Jul  9 10:29:54 EDT 2025 ->

    Wed Jul  9 10:29:54 EDT 2025 -> Step 6/23: Install new registry

    Wed Jul  9 10:29:54 EDT 2025 -> Executing: /opt/build/nsp-registry-25.0.0-rel.8/bin/nspregistryctl install

    Wed Jul  9 10:29:57 EDT 2025 -> Error: [6]: Failed to re-install the registry

    Wed Jul  9 10:29:57 EDT 2025 ->

    Wed Jul  9 10:29:57 EDT 2025 -> ********** Failed! Please contact Nokia support.

    Wed Jul  9 10:29:57 EDT 2025 -> ********** For more information see /var/log/nspk8supgrade.log and /var/log/nspregistryctl.log

    Wed Jul  9 10:29:57 EDT 2025 -> -------------------- END upgrade - FAILED(1) --------------------

    In the example above, the line "Error: [6]" indicates that the upgrade failed at Step 6.

  2. Resolve the error.

  3. Continue the upgrade from the step where the script failed.

    Enter the following:

    # ./nspk8supgrade -K /opt/nsp/nsp-k8s-deployer-old-release-ID -R /opt/nsp/nsp-registry-old_load-ID -N /opt/nsp/NSP-CN-DEP-base_load-ID -P /opt/nsp/NSP-CN-DEP-latest_load/bin/ -C step_number

    where

    nsp-k8s-deployer-old-release-ID is the previous Kubernetes base directory

    nsp-registry-old_load-ID is the previous registry base directory

    NSP-CN-DEP-base_load-ID is the original deployed version of the installed NSP release

    NSP-CN-DEP-latest_load is the version of the latest applied NSP service pack

    step_number is the step number at which you want to restart the script.

    In the above example, enter 6 as the step_number.

    # ./nspk8supgrade -K /opt/nsp/nsp-k8s-deployer-old-release-ID -R /opt/nsp/nsp-registry-old_load-ID -N /opt/nsp/NSP-CN-DEP-base_load-ID -P /opt/nsp/NSP-CN-DEP-latest_load/bin/ -C 6 ↵


14 

Go to Step 22.


Deploy new NSP Kubernetes and NSP registry software using your own storage
 
15 

If you are not upgrading from the immediately previous Kubernetes release, and you are using your own storage, perform one of the following pathways to install or upgrade the Kubernetes software and to preserve your data.

  • For a DR deployment, go to Step 16.

  • For a standalone cluster, go to Step 17.


16 

In a DR deployment, use the following pathway.

Perform the steps on the standby cluster first, and then on the original active cluster; this ensures that the databases are not lost.

  1. Run the script as described in Step 18.

    If the script fails, go to Step 13.

  2. Reconfigure the storage classes as described in Step 19.

  3. Deploy NSP as described in Step 20 and Step 21.

  4. Databases are automatically synchronized from the active cluster to the standby cluster.

  5. Perform “How do I check NSP database synchronization?” in the NSP System Administrator Guide to ensure that the primary and standby databases are completely synchronized.

  6. Go to Step 22.


17 

In a standalone cluster, use the following pathway.

  1. Run the script as described in Step 18.

    If the script fails, go to Step 13.

  2. Reconfigure the storage classes as described in Step 19.

  3. Perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide, starting from Step 5 of that procedure.

  4. Deploy NSP as described in Step 20 and Step 21.

  5. Go to Step 22.


18 

Enter the following with the --skip-nsp-redeploy option:

./nspk8supgrade -K /opt/nsp/nsp-k8s-deployer-old-release-ID -R /opt/nsp/nsp-registry-old_load-ID -N /opt/nsp/NSP-CN-DEP-base_load-ID -P /opt/nsp/NSP-CN-DEP-latest_load/bin/ --skip-nsp-redeploy ↵

where

nsp-k8s-deployer-old-release-ID is the previous Kubernetes base directory

nsp-registry-old_load-ID is the previous registry base directory

NSP-CN-DEP-base_load-ID is the original deployed version of the installed NSP release

NSP-CN-DEP-latest_load is the version of the latest applied NSP service pack


19 

Reconfigure the storage classes for the NSP cluster.

Step 55 has examples of storage class configurations; if you are using other types of storage, see the appropriate storage documentation.


20 

Enter the following to deploy the NSP software in the NSP cluster:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The specified NSP functions are installed and initialized.


21 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. On the NSP cluster host, enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.

    • If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, the status of each mdm-server pod is shown as Pending, rather than Running or Completed.

  2. Verify that each PVC is bound to a PV, and that each PV is created with the expected STORAGECLASS, as shown in the following example output:

    # kubectl get pvc -A

    NAMESPACE            NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE

    nsp-psa-privileged   data-volume-mdm-server-0   Bound    pvc-ID   5Gi        RWO            storage_class   age

    nsp-psa-restricted   data-nspos-kafka-0         Bound    pvc-ID   10Gi       RWO            storage_class   age

    nsp-psa-restricted   data-nspos-zookeeper-0     Bound    pvc-ID   2Gi        RWO            storage_class   age

    ...

    # kubectl get pv

    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM

    nspos-fluentd-logs-data   50Mi       ROX            Retain           Bound    nsp-psa-restricted/nspos-fluentd-logs-data

    pvc-ID                    10Gi       RWO            Retain           Bound    nsp-psa-restricted/data-nspos-kafka-0

    pvc-ID                    2Gi        RWO            Retain           Bound    nsp-psa-restricted/data-nspos-zookeeper-0

    ...

  3. Verify that all pods are in the Running state.

  4. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  5. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
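The PVC verification in this step can likewise be reduced to a filter that prints only unbound claims. The sketch below applies an awk filter to captured sample output (the entries shown are illustrative); on the NSP cluster host, pipe the output of kubectl get pvc -A into the same filter, and investigate anything it prints.

```shell
# Print any PVC whose STATUS column is not Bound. Demonstrated on captured
# sample output; on a live cluster, pipe 'kubectl get pvc -A' into awk.
cat <<'EOF' > pvcs.txt
NAMESPACE            NAME                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nsp-psa-privileged   data-volume-mdm-server-0   Bound     pvc-ID   5Gi        RWO            sc             5d
nsp-psa-restricted   data-nspos-kafka-0         Pending                                      sc             5d
EOF

# NR > 1 skips the header row; $3 is the STATUS column.
awk 'NR > 1 && $3 != "Bound"' pvcs.txt
```

An empty result means every PVC is bound to a PV; in the sample above, only the Pending data-nspos-kafka-0 claim is printed.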


Verify upgraded NSP cluster operation
 
22 

Use a browser to open the NSP cluster URL.


23 

Verify the following:

  • In a DR deployment, if you specify the standby cluster address, the browser is redirected to the primary cluster address.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.


24 

As required, use the NSP to monitor device discovery and to check network management functions.

Note: You do not need to perform this step on the standby NSP cluster.

Note: If you are upgrading Kubernetes in a standalone NSP cluster, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage.


Restore SELinux enforcing mode
 
25 

If SELinux enforcing mode was changed to permissive mode during the upgrade, perform the following steps:

  1. Perform the following procedures to update SELinux policies.

  2. Perform “How do I switch between SELinux mode on NSP system components?” in the System Administrator Guide.


26 

Close the open console windows.

End of steps