To upgrade the NSP Kubernetes environment

Purpose

Perform this procedure to upgrade the Kubernetes deployment environment in an NSP system. The procedure upgrades only the deployment infrastructure, and not the NSP software.

Note: You must upgrade Kubernetes in each NSP cluster of a DR deployment, as described in the procedure.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Steps
Download Kubernetes upgrade bundle
 

1 

Download the following from the NSP downloads page on the Nokia Support portal to a local station that is not part of the NSP deployment:

Note: The download takes considerable time; while the download is in progress, you may proceed to Step 2.

  • NSP_K8S_DEPLOYER_R_r.tar.gz—software bundle for installing the registry and deploying the container environment

  • associated .cksum file

where

R_r is the NSP release ID, in the form Major_minor


Verify NSP cluster readiness
 

2 

Perform the following steps on each NSP cluster to verify that the cluster is fully operational.

  1. Log in as the root user on the NSP cluster host.

  2. Open a console window.

  3. Enter the following to display the status of the NSP cluster nodes:

    kubectl get nodes ↵

    The status of each cluster node is displayed.

    The NSP cluster is fully operational if the status of each node is Ready.

  4. If any node is not in the Ready state, you must correct the condition; contact technical support for assistance, if required.

    Do not proceed to the next step until the issue is resolved.

  5. Enter the following to display the NSP pod status:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is operational if the status of each pod is Running or Completed.

  6. If any pod is not in the Running or Completed state, you must correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod. The diagnostic sketch below may also help.
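The following generic kubectl commands are often a useful first step in diagnosing a node that is not Ready or a pod in an errored state; this is a sketch of standard Kubernetes tooling rather than an NSP-specific requirement, and node_name, pod_name, and namespace are placeholders:

# display node conditions, such as memory, disk, and PID pressure
kubectl describe node node_name ↵

# display recent events and container states for an errored pod
kubectl describe pod pod_name -n namespace ↵

# display the logs of the previous container instance of a restarting pod
kubectl logs pod_name -n namespace --previous ↵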


Back up NSP databases
 

3 

On the standalone NSP cluster, or the primary cluster in a DR deployment, perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.

Note: The backup takes considerable time; while the backup is in progress, you may proceed to Step 4.


Back up system configuration files
 

4 

Perform the following on the NSP deployer host in each data center.

Note: In a DR deployment, you must clearly identify the source cluster of each set of backup files.

  1. Log in as the root or NSP admin user.

  2. Back up the Kubernetes deployer configuration file:

    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml

  3. Back up the NSP deployer configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml

  4. Back up the NSP configuration file:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip

  5. Copy the backed-up files to a separate station that is not part of the NSP deployment; a sketch of the copy operation follows this list.
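The following is a minimal sketch of the copy operation, assuming a primary cluster and a remote station reachable as backup_station; the backup directory, user, and destination path are placeholders, chosen so that the source cluster of the files remains identifiable:

# collect the backed-up files in a directory labeled with the cluster role and date
BACKUP_DIR=/tmp/nsp-config-backup-primary-$(date +%Y%m%d)
mkdir -p $BACKUP_DIR
cp /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml $BACKUP_DIR
cp /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml $BACKUP_DIR
cp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip $BACKUP_DIR

# transfer the directory to a station that is not part of the NSP deployment
scp -r $BACKUP_DIR user@backup_station:/opt/backups/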


Verify checksum of downloaded file
 

5 

It is strongly recommended that you verify the message digest of each NSP file that you download from the Nokia Support portal. The downloaded .cksum file contains checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum commands.

When the file download is complete, verify the file checksum.

  1. Enter the following:

    command file ↵

    where

    command is md5sum, sha256sum, or sha512sum

    file is the name of the downloaded file

    A file checksum is displayed.

  2. Compare the checksum with the associated value in the .cksum file.

  3. If the values do not match, the file download has failed. Retry the download, and then repeat Step 5; a worked example follows this list.
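For example, to verify with SHA-256, assuming the checksum file was saved as NSP_K8S_DEPLOYER_R_r.tar.gz.cksum (adjust both names to the actual downloaded files):

# compute the digest of the downloaded bundle
sha256sum NSP_K8S_DEPLOYER_R_r.tar.gz ↵

# display the recorded digest for comparison
cat NSP_K8S_DEPLOYER_R_r.tar.gz.cksum ↵

The two SHA-256 values must be identical.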


Upgrade NSP registry
 

6 

Perform Step 7 to Step 16 on the NSP deployer host in each data center, and then go to Step 17.

Note: In a DR deployment, you must perform the steps first on the NSP deployer host in the primary data center.


7 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform “To apply a RHEL update to an NSP image-based OS”.


8 

Copy the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory.


9 

Expand the software bundle file.

  1. Enter the following:

    cd /opt/nsp ↵

  2. Enter the following:

    tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

    The bundle file is expanded, and the following directories are created:

    • /opt/nsp/nsp-registry-new-release-ID

    • /opt/nsp/nsp-k8s-deployer-new-release-ID

  3. After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:

    rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


10 

If you are not upgrading Kubernetes from the immediately previous version supported by the NSP, but from an earlier version, you must uninstall the Kubernetes registry; otherwise, you can skip this step. See the Host Environment Compatibility Guide for NSP and CLM for information about Kubernetes version support.

Enter the following:

/opt/nsp/nsp-registry-old-release-ID/bin/nspregistryctl uninstall ↵

The Kubernetes software is uninstalled.


11 

Enter the following:

cd /opt/nsp/nsp-registry-new-release-ID/bin ↵


12 

Enter the following to perform the registry upgrade:

Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that newly starts, remains in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.

./nspregistryctl install ↵


13 

If you did not perform Step 10 to uninstall the Kubernetes registry, go to Step 16.


14 

Enter the following to import the original Kubernetes images:

/opt/nsp/NSP-CN-DEP-base_load/bin/nspdeployerctl import ↵

where base_load is the initially deployed version of the installed NSP release


15 

If you have applied any NSP service pack since the original deployment of the installed release, you must import the Kubernetes images from the latest applied service pack.

Enter the following to import the Kubernetes images from the latest applied service pack:

/opt/nsp/NSP-CN-DEP-latest_load/bin/nspdeployerctl import ↵

where latest_load is the version of the latest applied NSP service pack


Verify NSP cluster initialization
 
16 

When the registry upgrade is complete, verify the cluster initialization.

  1. Enter the following:

    kubectl get nodes ↵

    NSP deployer node status information like the following is displayed:

    NAME        STATUS   ROLES                  AGE   VERSION

    node_name   status   control-plane,master   age   version

  2. Verify that status is Ready; do not proceed to the next step otherwise.

  3. Enter the following periodically to monitor the NSP cluster initialization:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is fully operational when the status of each pod is Running or Completed.

  4. If any pod fails to enter the Running or Completed state, correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.


Prepare to upgrade NSP Kubernetes deployer
 
17 

Perform Step 18 to Step 31 on the NSP deployer host in each cluster, and then go to Step 32.

Note: In a DR deployment, you can perform the steps on each NSP deployer host concurrently; the order is unimportant.


18 

You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file.

Open the following files using a plain-text editor such as vi:

  • old configuration file—/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml

  • new configuration file—/opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml


19 

Apply the settings in the old file to the same parameters in the new file.
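To identify the parameters that must be carried over, you can first compare the two files; this is an optional aid that uses standard RHEL tooling, not a required step:

# lines prefixed with - show the old settings; lines prefixed with + show the new defaults
diff -u /opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml /opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml ↵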


20 

Close the old k8s-deployer.yml file.


21 

In the new k8s-deployer.yml file, edit the following line in the cluster section to read:

  hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"


22 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path, for example, /home/user/.ssh/id_rsa
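If the designated user does not yet have a key pair that is distributed to the cluster VMs, a typical preparation, shown here as a general example rather than an NSP-specific requirement, is the following, where user and cluster_VM_address are placeholders:

# as the designated user, generate a key pair
ssh-keygen -t rsa -f /home/user/.ssh/id_rsa ↵

# install the public key on each NSP cluster VM
ssh-copy-id -i /home/user/.ssh/id_rsa.pub user@cluster_VM_address ↵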


23 

Each NSP cluster VM has a parameter block like the following in the hosts section; configure the parameters for each VM, as required. A filled-in example follows the parameter descriptions.

  - isIngress: value

    nodeIp: private_IP

    accessIp: public_IP

    nodeName: node_name

where

value is true or false, and indicates whether the node acts as a load-balancer endpoint

private_IP is the VM IP address

public_IP is the public VM address; required only in a NAT environment

node_name is the VM name
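For illustration, a completed block for an ingress-capable node behind NAT might look like the following; the addresses and node name are hypothetical:

  - isIngress: true
    nodeIp: 192.168.96.11
    accessIp: 203.0.113.11
    nodeName: nsp-node1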


24 

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the values for NSP client, internal, and mediation access that you specify in the platform section, ingressApplications subsection of the nsp-config.yml file during NSP cluster deployment.

For each address, take the value from the internalAddresses subsection, if configured; otherwise, take it from the clientAddresses subsection. In either subsection, use:

  • the advertised value, if configured

  • otherwise, the virtualIp value

A populated example follows the address block below.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP
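As an illustration only, a populated block for a deployment that uses IPv4 trap and flow mediation might read as follows; the addresses are hypothetical, and the IPv6 entries are omitted because this example does not use them:

loadBalancerExternalIps:
    - 203.0.113.101   # client_IP
    - 203.0.113.102   # internal_IP
    - 203.0.113.103   # trapV4_mediation_IP
    - 203.0.113.104   # flowV4_mediation_IP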


25 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enable_dual_stack_networks: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing


26 

Save and close the new k8s-deployer.yml file.


27 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


28 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


29 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value is the public address of the cluster node.

Otherwise:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value matches the ip value.

Note: The ansible_host value must match the access_ip value.

Existing cluster hosts configuration is:

node_1_name:

  ansible_host: 203.0.113.11

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_2_name:

  ansible_host: 203.0.113.12

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_3_name:

  ansible_host: 203.0.113.13

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_4_name:

  ansible_host: 203.0.113.14

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "false"


30 

Verify that the displayed IP addresses match the values that you configured in the new k8s-deployer.yml file.


31 

Enter the following to import the Kubernetes images to the repository:

./nspk8sctl import ↵

The images are imported.


Stop and undeploy NSP cluster
 
32 

Perform Step 33 to Step 35 on each NSP cluster, and then go to Step 36.

Note: In a DR deployment, you must perform the steps first on the standby cluster.


33 

Perform the following steps on the NSP deployer host to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the file.


34 

Enter the following on the NSP deployer host to undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


35 

On the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until the output lists only the following:

  • pods in kube-system namespace

  • nsp-backup-storage pod

kubectl get pods -A ↵

The pods are listed.
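To avoid re-entering the command, you can refresh the listing automatically; this is an optional convenience that uses the standard RHEL watch utility:

# refresh the pod listing every 30 seconds; interrupt with Ctrl+C
# when only the expected pods remain
watch -n 30 kubectl get pods -A ↵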


Deploy new NSP Kubernetes software
 
36 

Perform Step 37 to the end of the procedure on each NSP cluster.

Note: In a DR deployment, you must perform the steps first on the primary cluster.


37 

Perform one of the following.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in each command, as shown in the following examples, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass uninstall

nspk8sctl --ask-pass install

  1. Upgrade the Kubernetes software, which is recommended if the new version is only one version later than your current version.

    Note: The upgrade takes considerable time; during the process, each cluster node is cordoned, drained, upgraded, and uncordoned, one node at a time. The operation on each node may take 15 minutes or more.

    Enter the following on the NSP deployer host:

    /opt/nsp/nsp-k8s-deployer-new-release-ID/bin/nspk8sctl install ↵

    The upgraded NSP Kubernetes environment is deployed.

  2. Replace the current Kubernetes software with the new version.

    Note: Replacement is the recommended option if the new Kubernetes version is more than one version later than your current version.

    Note: The replacement takes approximately 30 minutes per cluster.

    1. Enter the following on the NSP deployer host:

      cd /opt/nsp/nsp-k8s-deployer-old-release-ID/bin ↵

    2. Enter the following:

      ./nspk8sctl uninstall ↵

      The existing Kubernetes software is uninstalled.

    3. Enter the following:

      cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵

    4. Enter the following:

      ./nspk8sctl install ↵

    The new NSP Kubernetes environment is deployed.


38 

Enter the following on the NSP cluster host periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


39 

Enter the following periodically on the NSP cluster host to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP
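As an alternative to checking periodically, the following standard kubectl command blocks until every node reports Ready, or fails after the specified timeout; the 30-minute value is an arbitrary example:

# return when all cluster nodes reach the Ready condition
kubectl wait --for=condition=Ready node --all --timeout=30m ↵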


40 

Update the NSP cluster configuration.

  1. Open the following files using a plain-text editor such as vi:

    • old configuration file—/opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml

    • new configuration file—/opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

  2. Merge the settings from the old file into the new file.

  3. Edit the following line in the new file to read:

      hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"

  4. Save and close the new nsp-deployer.yml file.

  5. Close the old nsp-deployer.yml file.


Disable pod security policy
 
41 

If your NSP deployment is at Release 23.4 and has a Kubernetes version newer than the version initially shipped with NSP Release 23.4, you must disable the pod security policy.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-latest_load/NSP-CN-latest_load/config/nsp-config.yml

    where latest_load is the ID of the latest applied NSP service pack

  2. Edit the following line in the nsp section, podSecurityPolicies subsection to read:

          enabled: false

  3. Save and close the file.


Redeploy NSP software
 
42 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


43 

Enter the following:

./nspdeployerctl config ↵


44 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP starts.


Verify NSP initialization
 
45 

On the NSP cluster host, monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is listed; all pods are operational when the displayed STATUS value is Running or Completed.

    The NSP deployer log file is /var/log/nspdeployerctl.log.

  2. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  3. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod. The filtering sketch below can help to isolate such pods.
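To isolate the pods that still need attention, a field selector can filter the listing; this is generic kubectl usage rather than an NSP-specific command:

# list only pods whose phase is not Running and not Succeeded (Completed)
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded ↵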


46 

Enter the following on the NSP cluster host to display the status of the NSP cluster members:

Note: You must not proceed to the next step until each node is operational.

kubectl get nodes ↵

The status of each node is listed; all nodes are operational when the displayed STATUS value is Ready.

The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.


Verify upgraded NSP cluster operation
 
47 

Use a browser to open the NSP cluster URL.


48 

Verify the following.

  • In a DR deployment, if you specify the standby cluster address, the browser is redirected to the primary cluster address.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.


49 

As required, use the NSP to monitor device discovery and to check network management functions.

Note: You do not need to perform this step on the standby NSP cluster.

Note: If you are upgrading Kubernetes in a standalone NSP cluster, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage.


Purge Kubernetes image files
 
50 

Note: You must perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


51 

Enter the following on the NSP deployer host:

./nspk8sctl purge-registry -e ↵

The images are purged.


52 

Close the open console windows.

End of steps