To upgrade a Release 22.9 or later NSP cluster

Purpose
CAUTION

Network management outage

The procedure requires a shutdown of the NSP system, which causes a network management outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.

Perform this procedure to upgrade a standalone or DR NSP system at Release 22.9 or later after you have performed To prepare for an NSP system upgrade from Release 22.9 or later.

Note: The following denote a specific NSP release ID in a file path:

  • old-release-ID—currently installed release

  • new-release-ID—release you are upgrading to

Each release ID has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
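For illustration, with a hypothetical release ID of 24.4.0-rel.123, the directories referenced later in this procedure would resolve to paths such as the following; the actual values depend on the releases involved in your upgrade:

  /opt/nsp/nsp-registry-24.4.0-rel.123

  /opt/nsp/NSP-CN-DEP-24.4.0-rel.123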

Steps
Back up NSP deployer host configuration files
 

1 

Log in as the root user on the NSP deployer host.


2 

Open a console window.


3 

Back up the following NSP Kubernetes registry certificate files:

Note: The files are in one of the following directories, depending on the release you are upgrading from:

  • Release 22.9—/opt/nsp/nsp-registry-old-release-ID/config

  • Release 22.11 or later—/opt/nsp/nsp-registry/tls

  • nokia-nsp-registry.crt

  • nokia-nsp-registry.key


4 

Back up the following Kubernetes deployer configuration file:

/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml


5 

Back up the following NSP deployer configuration file:

/opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml


6 

Copy the files backed up in Step 3, Step 4, and Step 5 to a separate station outside the NSP cluster for safekeeping.
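The following is a minimal sketch of one way to perform the copy using scp; the station address, destination directory, and user name are hypothetical, and the registry certificate paths shown are those for Release 22.11 or later:

  # Hypothetical remote station and directory; replace with your own values

  scp /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key user@backup-station:/backup/nsp-upgrade/

  scp /opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml user@backup-station:/backup/nsp-upgrade/

  scp /opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml user@backup-station:/backup/nsp-upgrade/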


Disable SELinux enforcing mode
 

7 

If SELinux enforcing mode is enabled on the NSP deployer host and NSP cluster members, you must switch to permissive mode on each; otherwise, you can skip this step.

Perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and on each NSP cluster member.

Note: If SELinux enforcing mode is enabled on any NSP component during the upgrade, the upgrade fails.
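As a quick check, you can display the current SELinux mode on the NSP deployer host and on each NSP cluster member before proceeding; the mode-switch procedure itself remains the one in the NSP System Administrator Guide:

  getenforce

  # The command must not return Enforcing on any NSP component during the upgrade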


Apply OS update to NSP deployer host
 

8 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform To apply a RHEL update to an NSP image-based OS.


9 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, you do not need to upgrade Kubernetes; go to Step 28.


Prepare for Kubernetes upgrade
 
10 

Transfer the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory on the NSP deployer host.


11 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


12 

Enter the following:

tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directories are created:

  • /opt/nsp/nsp-registry-new-release-ID

  • /opt/nsp/nsp-k8s-deployer-new-release-ID


13 

After the file expansion completes successfully, enter the following to remove the file, which is no longer required:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


14 

If you are upgrading from Release 22.9, restore the Kubernetes registry certificates.

  1. Enter the following on the NSP deployer host:

    mkdir -p /opt/nsp/nsp-registry/tls ↵

  2. Copy the following certificate files backed up in Step 3 to the /opt/nsp/nsp-registry/tls directory:

    • nokia-nsp-registry.crt

    • nokia-nsp-registry.key
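A minimal sketch of substep 2, assuming the certificates are restored directly from the Release 22.9 location identified in Step 3:

  cp /opt/nsp/nsp-registry-old-release-ID/config/nokia-nsp-registry.crt /opt/nsp/nsp-registry-old-release-ID/config/nokia-nsp-registry.key /opt/nsp/nsp-registry/tls/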


Upgrade Kubernetes registry
 
15 

Enter the following:

cd /opt/nsp/nsp-registry-new-release-ID/bin ↵


16 

Enter the following to begin the registry upgrade:

Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that starts, is in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.

./nspregistryctl install ↵


17 

When the registry upgrade is complete, verify the upgrade.

  1. Enter the following:

    kubectl get nodes ↵

    NSP deployer node status information like the following is displayed:

    NAME        STATUS    ROLES                  AGE     VERSION

    node_name   status    control-plane,master   xxdnnh   version

  2. Verify that status is Ready; do not proceed to the next step otherwise.

  3. Enter the following periodically to monitor the NSP cluster initialization:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is fully operational when the status of each pod is Running or Completed.

  4. If any pod fails to enter the Running or Completed state, correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
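As a convenience when monitoring substep 3, you can filter the output so that only pods that are not yet Running or Completed are listed; this is not part of the formal verification:

  kubectl get pods -A | grep -Ev 'Running|Completed'

  # Only the header line remains when all pods are Running or Completed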


Configure Kubernetes deployer
 
18 

You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file.

  1. Open the following files using a plain-text editor such as vi:

    • old configuration file—/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml

    • new configuration file—/opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml

  2. Apply the settings in the old file to the same parameters in the new file.

  3. Save and close the new k8s-deployer.yml file.

  4. Close the old k8s-deployer.yml file.
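Before merging, a comparison of the two files can help identify the parameters that were customized in the old deployment; a minimal sketch:

  diff -u /opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml /opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml

  # Lines prefixed with - are from the old file; lines prefixed with + are from the new file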


19 

Edit the following line in the cluster section of the new file to read:

  hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"


20 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated root-equivalent user

path is the SSH key path, for example, /home/user/.ssh/id_rsa
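For example, with a hypothetical root-equivalent user named nspadmin, the subsection might read as follows; you can also confirm from the NSP deployer host that key-based SSH access to a cluster node works before continuing (the node address is illustrative):

  sshAccess:

    userName: "nspadmin"

    privateKey: "/home/nspadmin/.ssh/id_rsa"

  ssh -i /home/nspadmin/.ssh/id_rsa nspadmin@203.0.113.11 hostname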


21 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enable_dual_stack_networks: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing


22 

Save and close the new k8s-deployer.yml file.


23 

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


24 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


25 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The access_ip value is the public IP address of the cluster node.

  • The ip value is the private IP address of the cluster node.

  • The ansible_host value is the same as the access_ip value.

Note: If NAT is not used in the cluster:

  • The access_ip value is the IP address of the cluster node.

  • The ip value matches the access_ip value.

  • The ansible_host value is the same as the access_ip value.

Existing cluster hosts configuration is:

all:
  hosts:
    node1:
      ansible_host: 203.0.113.11
      ip: ip
      access_ip: access_ip
    node2:
      ansible_host: 203.0.113.12
      ip: ip
      access_ip: access_ip
    node3:
      ansible_host: 203.0.113.13
      ip: ip
      access_ip: access_ip
    node4:
      ansible_host: 203.0.113.14
      ip: ip
      access_ip: access_ip


26 

Verify that the displayed IP addresses are correct for each cluster node.


27 

Enter the following to import the Kubernetes images to the repository:

./nspk8sctl import ↵


Update NSP cluster configuration
 
28 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_DEPLOYER_R_r.tar.gz


29 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


30 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directory of NSP installation files is created:

/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID


31 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


32 

Restore the required NSP configuration files.

  1. Enter the following:

    mkdir /tmp/appliedConfig ↵

  2. Enter the following:

    cd /tmp/appliedConfig ↵

  3. Transfer the following configuration backup file saved in To prepare for an NSP system upgrade from Release 22.9 or later to the /tmp/appliedConfig directory:

    nspConfiguratorConfigs.zip

  4. Enter the following:

    unzip nspConfiguratorConfigs.zip ↵

    The configuration files are extracted to the current directory, and include some or all of the following, depending on the previous deployment:

    • license file

    • nsp-config.yml file

    • TLS files; may include subdirectories

    • SSH key files

    • nsp-configurator/generated directory content

  5. Copy the extracted TLS files and subdirectories to the following directory:

    /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/tls

  6. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  7. Copy all extracted nsp-configurator/generated files to the /opt/nsp/nsp-configurator/generated directory.
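A minimal sketch of substeps 5 through 7, assuming the TLS files were extracted to a tls subdirectory of /tmp/appliedConfig; the actual extraction paths depend on how the backup was created, so adjust the source paths as required:

  cp -r /tmp/appliedConfig/tls/. /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/tls/

  mkdir -p /opt/nsp/nsp-configurator/generated

  cp -r /tmp/appliedConfig/opt/nsp/nsp-configurator/generated/. /opt/nsp/nsp-configurator/generated/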


33 

Merge the current nsp-deployer.yml settings into the new nsp-deployer.yml file.

  1. Open the following new cluster configuration file and the old nsp-deployer.yml file saved in Step 5 using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

  2. Apply the settings in the old file to the same parameters in the new file.

  3. Close the old nsp-deployer.yml file.


34 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated root-equivalent user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


35 

Save and close the new nsp-deployer.yml file.


36 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

Note: The import operation may take 20 minutes or longer.

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl import ↵


Configure NSP software
 
37 

You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file.

Open the following files using a plain-text editor such as vi:

  • old configuration file—/opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  • new configuration file—/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.

Note: The following REST-session parameters in the nsp section of the nsp-config.yml file apply only to an NSP system that uses CAS authentication, and are not to be configured otherwise:

  • ttlInMins

  • maxNumber


38 

Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file.

Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.

Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.

Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.

  • OpenSearch replaces Elasticsearch as the local log-viewing utility. Consequently, the Elasticsearch configuration cannot be directly copied from the current NSP configuration to the new configuration. Instead, you must configure the parameters in the logging > forwarding > applicationLogs > opensearch section of the new NSP configuration file.

  • Elasticsearch is introduced as a remote log-forwarding option. You can enable NSP application-log forwarding to a remote Elasticsearch server in the logging > forwarding > applicationLogs > elasticsearch section of the new NSP configuration file.

    See Centralized logging for more information about configuring NSP logging options.


39 

Configure the following parameter in the platform section as shown below:

Note: You must preserve the leading spaces in the line.

  clusterHost: "cluster_host_address"

where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations
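For example, using the node1 address from the hosts.yml example in Step 25; the address is illustrative only:

  clusterHost: "203.0.113.11"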


40 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


41 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.

  1. Enable the following installation option:

    id: networkInfrastructureManagement-gnmiTelemetry

  2. Configure the throughputFactor parameter in the nsp > modules > telemetry > gnmi section; see the parameter description in the nsp-config.yml file for the required value, which is based on the management scale:

    throughputFactor: n

    where n is the required throughput factor for your deployment


42 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""


43 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


44 

If the NSP system includes one or more Release 22 analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.

Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true as shown below:

  analyticsServer:

    legacyPortEnabled: true


45 

If required, configure the user authentication parameters in the sso section, as shown below; see NSP SSO configuration parameters for configuration information.


46 

If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that is specified in the nsp-config.yml file.


47 

Save and close the new nsp-config.yml file.


48 

Close the previous nsp-config.yml file.


49 

The steps in the following section align with the cluster-specific actions described in Workflow for DR NSP system upgrade from Release 22.9 or later.

If you are upgrading a standalone NSP system, go to Step 55.


DR-specific instructions
 
50 

Perform Step 55 to Step 58 on the standby NSP cluster.


51 

Perform Step 55 to Step 86 on the primary NSP cluster.


52 

Perform Step 59 to Step 86 on the standby NSP cluster.


53 

Perform Step 87 to Step 92 on each NSP cluster.


54 

Go to Step 93.


Stop and undeploy NSP cluster
 
55 

Perform the following steps on the NSP deployer host to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the nsp-config.yml file.
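As a quick check before undeploying, you can confirm the setting; a minimal sketch:

  grep deleteOnUndeploy /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  # The output must show: deleteOnUndeploy: false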


56 

Enter the following on the NSP deployer host to undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:

nspdeployerctl --ask-pass --option --option

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


57 

On the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until the output lists only the following:

  • pods in kube-system namespace

  • nsp-backup-storage pod

kubectl get pods -A ↵

The pods are listed.
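As a convenience, you can hide the kube-system pods so that only the remaining pods are listed; do not proceed until only the header line and the nsp-backup-storage pod remain:

  kubectl get pods -A | grep -v kube-system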


Apply OS update to NSP cluster VMs
 
58 

If the NSP cluster VMs were created using an NSP RHEL OS disk image, perform the following steps on each NSP cluster VM to apply the required OS update.

  1. Log in as the root user on the VM.

  2. Perform To apply a RHEL update to an NSP image-based OS on the VM.


Upgrade Kubernetes deployment environment
 
59 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, go to Step 62.

See the Host Environment Compatibility Guide for NSP and CLM for Kubernetes version-support information.


60 

If you are not upgrading Kubernetes from the immediately previous version supported by the NSP, but from an earlier version, you must uninstall Kubernetes; otherwise, you can skip this step.

Enter the following on the NSP deployer host:

/opt/nsp/nsp-k8s-deployer-old-release-ID/bin/nspk8sctl uninstall ↵

The Kubernetes software is uninstalled.


61 

Enter the following on the NSP deployer host:

/opt/nsp/nsp-k8s-deployer-new-release-ID/bin/nspk8sctl install ↵

Note: The installation takes considerable time; during the process, each cluster node is cordoned, drained, upgraded, and uncordoned, one node at a time. The operation on each node may take 15 minutes or more.

The NSP Kubernetes environment is deployed.


Label NSP cluster nodes
 
62 

Open a console window on the NSP cluster host.


63 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


64 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


65 

Enter the following on the NSP deployer host to apply the node labels to the NSP cluster:

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl config ↵


66 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 74.


67 

On the NSP cluster host, if remote root access is disabled, switch to the designated root-equivalent user.


68 

Perform the following steps for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node

    where MDM_node is the node IP address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵


69 

If remote root access is disabled, switch back to the root user.


70 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


71 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


72 

For each node, enter the following sequence of commands:

kubectl label node node mdm=true ↵

kubectl cordon node ↵

where node is the NAME value recorded in Step 71 for the MDM node
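If several MDM nodes were added, you can label and cordon them in one pass; the node names below are hypothetical examples of the NAME values recorded in Step 71:

  for n in node5 node6; do kubectl label node $n mdm=true; kubectl cordon $n; done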


Upgrade NSP software
 
73 

Return to the console window on the NSP deployer host.


74 

Restore the Keycloak secret files.

  1. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  2. Enter the following:

    cd /tmp/appliedConfig/opt/nsp/nsp-configurator/generated ↵

  3. Enter the following:

    cp nsp-keycloak-*-secret /opt/nsp/nsp-configurator/generated/ ↵


75 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


76 

Enter the following:

./nspdeployerctl install --config --deploy ↵

The NSP is upgraded.


Monitor NSP initialization
 
77 

Open a console window on the NSP cluster host.


78 

If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, perform the following steps to uncordon the nodes cordoned in Step 72.

  1. Enter the following:

    kubectl get pods -A | grep Pending ↵

    The pods in the Pending state are listed; an mdm-server pod name has the format mdm-server-ID.

    Note: Some mdm-server pods may be in the Pending state because the manually labeled MDM nodes are cordoned in Step 72. You must not proceed to the next step if any pods other than the mdm-server pods are listed as Pending. If any other pod is shown, re-enter the command periodically until no pods, or only mdm-server pods, are listed.

  2. Enter the following for each manually labeled and cordoned node:

    kubectl uncordon node

    where node is an MDM node name recorded in Step 72

    The MDM pods are deployed.

    Note: The deployment of all MDM pods may take a few minutes.

  3. Enter the following periodically to display the MDM pod status:

    kubectl get pods -A | grep mdm-server ↵

  4. Ensure that the number of mdm-server-ID instances is the same as the mdm clusterSize value in nsp-config.yml, and that each pod is in the Running state. Otherwise, contact technical support for assistance.
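A minimal sketch of the check in substep 4; the clusterSize value to compare against is the one in the mdm section of nsp-config.yml:

  kubectl get pods -A | grep mdm-server | grep -c Running

  grep clusterSize /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/config/nsp-config.yml

  # If clusterSize appears in more than one section, use the value from the mdm section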


79 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.

  2. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  3. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

  4. Enter the following to display the status of the NSP cluster members:

    kubectl get nodes ↵

    The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.


80 

Enter the following on the NSP cluster host to ensure that all pods are running:

kubectl get pods -A ↵

The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.

The NSP deployer log file is /var/log/nspdeployerctl.log.


Verify upgraded NSP cluster operation
 
81 

Use a browser to open the NSP cluster URL.


82 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


Upgrade MDM adaptors
 
83 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources, or CRs.

Perform the following steps.

Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.

  1. Upgrade the adaptor suites; see “How do I install adaptor artifacts that are not supported in the Artifacts view?” in the NSP System Administrator Guide for information.

  2. When the adaptor suites are upgraded successfully, use NSP Artifacts to install the required telemetry artifact bundles that are packaged with the adaptor suites.

    • nsp-telemetry-cr-nodeType-version.rel.release.ct.zip

    • nsp-telemetry-cr-nodeType-version.rel.release-va.zip

  3. View the messages displayed during the installation to verify that the artifact installation is successful.


Upgrade or enable additional components and systems
 
84 

If the NSP deployment includes the VSR-NRC, upgrade the VSR-NRC as described in the VSR-NRC documentation.


85 

If you are including an existing NFM-P system in the deployment, perform one of the following.

  1. Upgrade the NFM-P to the NSP release; see NFM-P system upgrade from Release 22.9 or later.

  2. Enable NFM-P and NSP compatibility; perform To enable NSP compatibility with an earlier NFM-P system.

Note: An NFM-P system upgrade procedure includes steps for upgrading the following components in an orderly fashion:

  • NSP auxiliary database

  • NSP Flow Collectors / Flow Collector Controllers

  • NSP analytics servers


86 

If the NSP system includes the WS-NOC, perform the appropriate procedure in WS-NOC and NSP integration to enable WS-NOC integration with the upgraded NSP system.


Purge Kubernetes image files
 
87 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


88 

Enter the following:

./nspk8sctl purge-registry -e ↵

The images are purged.


Purge NSP image files
 
89 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


90 

Enter the following:

./nspdeployerctl purge-registry -e ↵

The charts and images are purged.


Restore SELinux enforcing mode
 
91 

If either of the following is true, perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and each NSP cluster VM.

  • You switched from SELinux enforcing mode to permissive mode before the upgrade, and want to restore the use of enforcing mode.

  • The upgrade has enabled SELinux in the NSP cluster for the first time, but in permissive mode, and you want the more stringent security of enforcing mode.


92 

Close the open console windows.


Remove path-control subscriptions
 
93 

If you are upgrading a Release 23.4 or later system that has path-control telemetry flow integration enabled, you must remove the older subscriptions, which can no longer be used.

Issue the following REST API call:

Note: In order to issue a REST API call, you require a token; see the My First NSP API Client tutorial on the Network Developer Portal for information.

POST https://{{address}}:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe

where address is the NSP advertised address

The message body is the following:

{

"subscription" : "nrcp-sub"

}

The subscriptions are removed.
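A minimal curl sketch of the call, assuming a token has already been obtained as described in the tutorial; the address and token values are placeholders:

  curl -k -X POST https://address:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe -H "Authorization: Bearer token" -H "Content-Type: application/json" -d '{"subscription": "nrcp-sub"}'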


Restore classic telemetry collection
 
94 

Telemetry data collection for classically mediated NEs does not automatically resume after an upgrade to NSP Release 24.4. Manual action is required to restore the data collection.

If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.

  1. The upgrade changes IPv6 and dual-stack NE IDs. Using the updated NE ID from Device Management, edit your subscriptions and update the NE ID in the Object Filter. See “How do I manage subscriptions?” in the NSP Data Collection and Analysis Guide.

  2. If you have subscriptions with telemetry type telemetry:/base/interfaces/utilization, update the control files:

    1. Disable the subscriptions with telemetry type telemetry:/base/interfaces/utilization.

    2. Obtain and install the latest nsp-telemetry-cr-va-sros artifact bundle from the Nokia Support portal; see “How do I install an artifact bundle?” in the NSP Network Automation Guide.

    3. Enable the subscriptions.

  3. Add the classically mediated devices to a unified discovery rule that uses GRPC mediation; see “How do I stitch a classic device to a unified discovery rule?” in the NSP Device Management Guide.

    Collection resumes for Classic NEs.

Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.

End of steps