To upgrade a Release 22.9 or later NSP cluster

Purpose
CAUTION

Network management outage

The procedure requires a shutdown of the NSP system, which causes a network management outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.

Perform this procedure to upgrade a standalone or DR NSP system at Release 22.9 or later after you have performed To prepare for an NSP system upgrade from Release 22.9 or later.

Note: The NSP RHEL user named nsp on an NSP deployer host or NSP cluster VM requires user ID 1000. If another user has ID 1000, you must make the ID available to the nsp user before the upgrade; otherwise, the upgrade fails. Do one of the following; a brief example follows the list:

  • deleting the user

  • using the RHEL usermod command to change the ID of the user
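For example, the following minimal sketch, run as root, shows how to identify a conflicting user and move it to an unused ID; the user name olduser and the replacement ID 5001 are assumptions, not values from your system:

  # display the user, if any, that currently has ID 1000
  getent passwd 1000 ↵

  # if a user other than nsp is returned, move it to an unused ID
  usermod -u 5001 olduser ↵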

Note: The following denote a specific NSP release ID in a file path:

  • old-release-ID—currently installed release

  • new-release-ID—release you are upgrading to

Each release ID has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Steps
Back up NSP deployer host configuration files
 

1 

Log in as the root user on the NSP deployer host.


2 

Open a console window.


3 

Back up the following NSP Kubernetes registry certificate files:

Note: The files are in one of the following directories, depending on the release you are upgrading from:

  • Release 22.9—/opt/nsp/nsp-registry-old-release-ID/config

  • Release 22.11 or later—/opt/nsp/nsp-registry/tls

  • nokia-nsp-registry.crt

  • nokia-nsp-registry.key


4 

Back up the following Kubernetes deployer configuration file:

/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml


5 

Back up the following NSP deployer configuration file:

/opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml


6 

Copy the files backed up in Step 3, Step 4, and Step 5 to a separate station outside the NSP cluster for safekeeping.
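For example, assuming a reachable backup station named backup-station and a target directory /backups/nsp, both hypothetical, you could transfer the files with scp; adjust the registry-certificate path if you are upgrading from Release 22.9:

  scp /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt \
      /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key \
      /opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml \
      /opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml \
      user@backup-station:/backups/nsp/ ↵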


Disable SELinux enforcing mode
 

7 

If SELinux enforcing mode is enabled on the NSP deployer host and NSP cluster members, you must switch to permissive mode on each; otherwise, you can skip this step.

Perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and on each NSP cluster member.

Note: If SELinux enforcing mode is enabled on any NSP component during the upgrade, the upgrade fails.
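As a quick check, you can confirm the active mode on the NSP deployer host and each NSP cluster member before proceeding; getenforce is a standard RHEL command, and the expected output for this procedure is Permissive (or Disabled):

  getenforce ↵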


Apply OS update to NSP deployer host
 

8 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform To apply a RHEL update to an NSP image-based OS.


9 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, you do not need to upgrade Kubernetes; go to Step 33.


Prepare for Kubernetes upgrade
 
10 

Log in as the root or NSP admin user on the NSP deployer host.


11 

Transfer the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory on the NSP deployer host.


12 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


13 

Enter the following:

tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directories are created:

  • /opt/nsp/nsp-registry-new-release-ID

  • /opt/nsp/nsp-k8s-deployer-new-release-ID


14 

After the file expansion completes successfully, enter the following to remove the file, which is no longer required:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


15 

If you are upgrading from Release 22.9, restore the Kubernetes registry certificates.

  1. Enter the following on the NSP deployer host:

    mkdir -p /opt/nsp/nsp-registry/tls ↵

  2. Copy the following certificate files backed up in Step 3 to the /opt/nsp/nsp-registry/tls directory, as shown in the sketch after this list:

    • nokia-nsp-registry.crt

    • nokia-nsp-registry.key
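A minimal sketch, assuming the files were copied to a backup station named backup-station under /backups/nsp in Step 6; both names are hypothetical:

  scp user@backup-station:/backups/nsp/nokia-nsp-registry.crt \
      user@backup-station:/backups/nsp/nokia-nsp-registry.key \
      /opt/nsp/nsp-registry/tls/ ↵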


Upgrade Kubernetes registry
 
16 

Enter the following:

cd /opt/nsp/nsp-registry-new-release-ID/bin ↵


17 

Enter the following to begin the registry upgrade:

Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that starts, is in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.

./nspregistryctl install ↵


18 

When the registry upgrade is complete, verify the upgrade.

  1. Enter the following:

    kubectl get nodes ↵

    NSP deployer node status information like the following is displayed:

    NAME        STATUS    ROLES                  AGE     VERSION

    node_name   status    control-plane,master   xxdnnh   version

  2. Verify that status is Ready; do not proceed to the next step otherwise.

  3. Enter the following periodically to monitor the NSP cluster initialization:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is fully operational when the status of each pod is Running or Completed.

  4. If any pod fails to enter the Running or Completed state, correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.


Configure Kubernetes deployer
 
19 

You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file.

Open the following files using a plain-text editor such as vi:

  • old configuration file—/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml

  • new configuration file—/opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml


20 

Apply the settings in the old file to the same parameters in the new file.


21 

Close the old k8s-deployer.yml file.


22 

Edit the following line in the cluster section of the new file to read:

  hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"


23 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


24 

Configure the following parameters for each NSP cluster VM; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.

- nodeName: noden

  nodeIp: private_IP_address

  accessIp: public_IP_address

  isIngress: value


25 

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the virtualIP values for NSP client, internal, and mediation access that you specify in Step 45 and Step 46 in the nsp-config.yml file.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP


26 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enable_dual_stack_networks: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing


27 

Save and close the new k8s-deployer.yml file.


28 

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


29 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


30 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value is the public address of the cluster node.

Otherwise:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value matches the ip value.

Note: The ansible_host value must match the access_ip value.

Existing cluster hosts configuration is:

node_1_name:

  ansible_host: 203.0.113.11

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_2_name:

  ansible_host: 203.0.113.12

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_3_name:

  ansible_host: 203.0.113.13

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_4_name:

  ansible_host: 203.0.113.14

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "false"


31 

Verify that the ip and access_ip values listed for each node are correct.


32 

Enter the following to import the Kubernetes images to the repository:

./nspk8sctl import ↵


Restore NSP system files
 
33 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_DEPLOYER_R_r.tar.gz


34 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


35 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directory of NSP installation files is created:

/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID


36 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


37 

Restore the required NSP configuration files.

  1. Enter the following:

    mkdir /tmp/appliedConfig ↵

  2. Enter the following:

    cd /tmp/appliedConfig ↵

  3. Transfer the following configuration backup file saved in To prepare for an NSP system upgrade from Release 22.9 or later to the /tmp/appliedConfig directory:

    nspConfiguratorConfigs.zip

  4. Enter the following:

    unzip nspConfiguratorConfigs.zip ↵

    The configuration files are extracted to the current directory, and include some or all of the following, depending on the previous deployment:

    • license file

    • nsp-config.yml file

    • TLS files; may include subdirectories

    • SSH key files

    • nsp-configurator/generated directory content


38 

Perform one of the following.

  1. If you are upgrading from Release 23.11 or earlier, you must merge the current nsp-deployer.yml settings into the new nsp-deployer.yml file.

    1. Open the following new cluster configuration file and the old nsp-deployer.yml file saved in Step 5 using a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

    2. Apply the settings in the old file to the same parameters in the new file.

    3. Close the old nsp-deployer.yml file.

  2. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml


39 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


40 

Save and close the new nsp-deployer.yml file.


41 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

Note: The import operation may take 20 minutes or longer.

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl import ↵


Configure NSP software
 
42 

You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file.

Open the following files using a plain-text editor such as vi:

  • old configuration file—/opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  • new configuration file—/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


43 

Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file, if applicable; a comparison sketch follows the notes in this step. The following parameters are not present in the new nsp-config.yml file, and are not to be merged:

  • platform section—address configuration now in ingressApplications section

    • advertisedAddress

    • mediationAdvertisedAddress

    • mediationAdvertisedAddressIpv6

    • internalAdvertisedAddress

  • elb section—all parameters; section removed

  • tls section—information now held in secrets:

    • truststorePass

    • keystorePass

    • customKey

    • customCert

  • mtls section—information now held in secrets:

    • mtlsCACert

    • mtlsClientCert

    • mtlsKey

  • sso section—authMode parameter obsolete

Note: You must maintain the structure of the new file; any configuration options that are new in this release must remain in place.

Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.

Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.

  • OpenSearch replaces Elasticsearch as the local log-viewing utility. Consequently, the Elasticsearch configuration cannot be directly copied from the current NSP configuration to the new configuration. Instead, you must configure the parameters in the logging, forwarding, applicationLogs, opensearch section of the new NSP configuration file.

  • Elasticsearch is introduced as a remote log-forwarding option. You can enable NSP application-log forwarding to a remote Elasticsearch server in the logging, forwarding, applicationLogs, elasticsearch section of the new NSP configuration file.

    See Centralized logging for more information about configuring NSP logging options.
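As noted above, before you merge you can compare the two files to see every difference at a glance; a minimal sketch using the release-ID placeholders from this procedure:

  diff -u /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml \
          /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/config/nsp-config.yml | less ↵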


44 

Configure the following parameter in the platform section as shown below:

Note: You must preserve the lead spacing of the line.

  clusterHost: "cluster_host_address"

where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations


45 

You must apply the address values from the former configuration file to the new parameters.

Configure the following parameters in the platform section, ingressApplications subsection as shown below.

Each address is an address from the loadBalancerExternalIps list in the k8s-deployer.yml file, described in Step 25.

Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.

Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.

Note: The trapForwarder addresses that you specify must differ from the client_IP value, even in a single-interface deployment.

  ingressApplications:

    ingressController:

      clientAddresses:

        virtualIp: "client_IP"

        advertised: "client_public_address"

      internalAddresses:

        virtualIp: "internal_IP"

        advertised: "internal_public_address"

    trapForwarder:

      mediationAddresses:

        virtualIpV4: "trapV4_mediation_IP"

        advertisedV4: "trapV4_mediation_public_address"

        virtualIpV6: "trapV6_mediation_IP"

        advertisedV6: "trapV6_mediation_public_address"

where

client_IP is the address for external client access

internal_IP is the address for internal communication

trapV4_mediation_IP is the address for IPv4 network mediation

trapV6_mediation_IP is the address for IPv6 network mediation

each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment


46 

If flow data collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below:

    flowForwarder:

      mediationAddresses:

        virtualIpV4: "flowV4_mediation_IP"

        advertisedV4: "flowV4_mediation_public_address"

        virtualIpV6: "flowV6_mediation_IP"

        advertisedV6: "flowV6_mediation_public_address"

where

flowV4_mediation_IP is the address for IPv4 flow collection

flowV6_mediation_IP is the address for IPv6 flow collection

each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment


47 

If you are using your own storage instead of local NSP storage, perform the following steps:

  1. Create Kubernetes storage classes; a minimal sketch follows this step.

  2. Configure the following parameters in the platform section, kubernetes subsection as shown below:

      storage:

        readWriteOnceLowIOPSClass: "storage_class"

        readWriteOnceHighIOPSClass: "storage_class"

        readWriteOnceClassForDatabases: "storage_class"

        readWriteManyLowIOPSClass: "storage_class"

        readWriteManyHighIOPSClass: "storage_class"

    where

    readWriteOnceLowIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000

    readWriteOnceHighIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS above 10,000

    readWriteOnceClassForDatabases—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000 for NSP databases

    readWriteManyLowIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS below 10,000

    readWriteManyHighIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS above 10,000

    storage_class is your storage class name
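The following is a minimal sketch of creating one such storage class; the class name, the provisioner csi.example.com, and the reclaim policy are assumptions that must be replaced with values appropriate to your storage backend and CSI driver:

  kubectl apply -f - <<'EOF'
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: my-rwx-high-iops          # name referenced as storage_class in nsp-config.yml
  provisioner: csi.example.com      # placeholder; use your storage vendor's CSI provisioner
  reclaimPolicy: Retain
  allowVolumeExpansion: true
  EOF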


48 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


49 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.

  1. Enable the following installation option:

    id: networkInfrastructureManagement-gnmiTelemetry

  2. Configure the throughputFactor parameter in the nsp, modules, telemetry, gnmi section; see the parameter description in the nsp-config.yml file for the required value, which is based on the management scale:

    throughputFactor: n

    where n is the required throughput factor for your deployment


50 

If required, configure NSP collection of accounting statistics. Perform the following steps:

  1. Enable the following installation option:

    id: networkInfrastructureManagement-accountingTelemetry

  2. Configure the collectFromClassicNes parameter in the nsp, modules, telemetry, accounting section. The default is false; set the parameter to true to enable the NSP to process accounting files from classic NEs.

Note: If the collectFromClassicNes flag is true, you must disable file rollover traps on the NE to prevent duplicate file collection; see the NE CLI documentation. This change affects SAA and AA accounting collection, which are not supported in NSP.


51 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""


52 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


53 

If the NSP system includes one or more Release 22 analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.

Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true as shown below:

  analyticsServer:

    legacyPortEnabled: true


54 

If required, configure the user authentication parameters in the sso section, as shown below; see NSP SSO configuration parameters for configuration information.


55 

If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that is specified in the nsp-config.yml file.


56 

Save and close the new nsp-config.yml file.


57 

Close the previous nsp-config.yml file.


58 

If you are using your own storage instead of local NSP storage, perform the following steps.

See the NSP Planning Guide for the required IOPS and latency storage throughput.

  1. Check if the storage classes meet the IOPS requirements.

    Run the script to check the IOPS of the configured storage classes.

    1. Enter the following on the NSP deployer host:

      # cd /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/tools/support/storageIopsCheck/bin

    2. Run the script by selecting each storage class individually or by selecting All Storage Classes.

      Output like the following is displayed and indicates if the script passes or fails.

      [root@ ]# ./nspstorageiopsctl 

      date time year -------------------- BEGIN ./nspstorageiopsctl --------------------

       

      [INFO]: SSH to NSP Cluster host ip_address successful

      1) readWriteManyHighIOPSClass            5) readWriteOnceClassForDatabases

      2) readWriteOnceHighIOPSClass            6) All Storage Classes

      3) readWriteManyLowIOPSClass             7) Quit

      4) readWriteOnceLowIOPSClass

      Select an option: 1

      [INFO] **** Calling IOPs check for readWriteManyHighIOPSClass - Storage Class Name (ceph-filesystem) Access Mode (ReadWriteMany) ****

      [INFO] NSP Cluster Host: ip_address

      [INFO] Validate configured storage classes are available on NSP Cluster

      [INFO] Adding helm repo nokia-nsp

      [INFO] Updating helm repo nokia-nsp

      [INFO] Executing k8s job on NSP Cluster ip_address

      [INFO] Creating /opt/nsp/nsp-storage-iops directory on NSP Cluster ip_address

      [INFO] Copying values.yaml to /opt/nsp/nsp-deployer/tools/nsp-storage-iops

      [INFO] Executing k8s job on NSP Cluster ip_address

      [INFO] Waiting for K8s job status...

      [INFO] Job storage-iops completed successfully.

      [INFO] Cleaning up and uninstalling k8s job

      [INFO] Helm uninstall cn-nsp-storage-iops successful

      STORAGECLASS         ACCESS MODE    READIOPS   WRITEIOPS  RESULT    STORAGECLASSTYPE

      ------------         -----------    --------   ---------  ------    ----------------

      storage_class     ReadWriteMany  12400      12500      true      readWriteManyHighIOPSClass

      [INFO] READ IOPS and WRITE IOPS meet the threshold of 10000.

      date time year ------------------- END ./nspstorageiopsctl - SUCCESS --------------------

    If the requirements are not met, system performance may be degraded.


59 

The steps in the following section align with the DR cluster-specific actions described in Workflow for DR NSP system upgrade from Release 22.9 or later.

If you are upgrading a standalone NSP system, go to Step 65.


DR-specific instructions
 
60 

Perform Step 65 to Step 68 on the standby NSP cluster.


61 

Perform Step 65 to Step 103 on the primary NSP cluster.


62 

Perform Step 69 to Step 103 on the standby NSP cluster.


63 

Perform Step 104 to Step 109 on each NSP cluster.


64 

Go to Step 110.


Stop and undeploy NSP cluster
 
65 

Perform the following steps on the NSP deployer host to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the nsp-config.yml file.


66 

Enter the following on the NSP deployer host to undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


67 

As the root or NSP admin user on the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until the output lists only the following:

  • pods in kube-system namespace

  • nsp-backup-storage pod

kubectl get pods -A ↵

The pods are listed.


Apply OS update to NSP cluster VMs
 
68 

If the NSP cluster VMs were created using an NSP RHEL OS disk image, perform the following steps on each NSP cluster VM to apply the required OS update.

  1. Log in as the root user on the VM.

  2. Perform To apply a RHEL update to an NSP image-based OS on the VM.


Upgrade Kubernetes deployment environment
 
69 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, go to Step 72.

See the Host Environment Compatibility Guide for NSP and CLM for Kubernetes version-support information.


70 

If you are not upgrading Kubernetes from the immediately previous version supported by the NSP, but from an earlier version, you must uninstall Kubernetes; otherwise, you can skip this step.

Enter the following on the NSP deployer host:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass uninstall

/opt/nsp/nsp-k8s-deployer-old-release-ID/bin/nspk8sctl uninstall ↵

The Kubernetes software is uninstalled.


71 

Enter the following on the NSP deployer host:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

/opt/nsp/nsp-k8s-deployer-new-release-ID/bin/nspk8sctl install ↵

Note: The installation takes considerable time; during the process, each cluster node is cordoned, drained, upgraded, and uncordoned, one node at a time. The operation on each node may take 15 minutes or more.

The NSP Kubernetes environment is deployed.


Label NSP cluster nodes
 
72 

Log in as the root or NSP admin user on the NSP cluster host.


73 

Open a console window.


74 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


75 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


76 

Enter the following on the NSP deployer host to apply the node labels to the NSP cluster:

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl config ↵


77 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 82.


78 

Perform the following steps on the NSP cluster host for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node

    where MDM_node is the node IP address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵


79 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


80 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


81 

For each node, enter the following sequence of commands:

kubectl label node node mdm=true ↵

kubectl cordon node ↵

where node is the NAME value recorded in Step 80


Install Kubernetes secrets
 
82 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


83 

Enter the following:

Note: To install the Kubernetes secrets, you require the backed-up TLS artifacts extracted in Step 37.

./nspdeployerctl secret install ↵

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]


84 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the internal key and certificate files.

  2. Provide your own certificate to secure the internal network.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'ca-key-pair-internal-nspdeployer'

      The CA key pair used to sign certificates generated by the NSP Internal Issuer.

      Please enter the internal CA private key:

    2. Enter the full path of the internal private key.

      The following prompt is displayed:

      Please enter the internal CA certificate:

    3. Enter the full path of the internal certificate:

      The following messages are displayed for each Kubernetes namespace:

      Adding secret ca-key-pair-internal-nspdeployer to namespace namespace...

      secret/ca-key-pair-internal-nspdeployer created

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP External Issuer? [yes,no]


85 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the external key and certificate files.

  2. Provide your own certificate to secure the external network.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'ca-key-pair-external-nspdeployer'

      The CA key pair used to sign certificates generated by the NSP External Issuer.

      Please enter the external CA private key:

    2. Enter the full path of the external private key.

      The following prompt is displayed:

      Please enter the external CA certificate:

    3. Enter the full path of the external certificate:

      The following messages are displayed for each Kubernetes namespace:

      Adding secret ca-key-pair-external-nspdeployer to namespace namespace...

      secret/ca-key-pair-external-nspdeployer created

The following prompt is displayed:

Would you like to provide a custom private key and certificate for use by NSP endpoints when securing TLS connections over the client network? [yes,no]


86 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the client key and certificate files.

  2. Provide your own certificate for the client network.

    1. Enter yes ↵

      The following messages and prompt are displayed:

      Building secret 'nginx-nb-tls-nsp'

      TLS certificate for securing the ingress gateway.

      Please enter the ingress gateway private key:

    2. Enter the full path of the private key file for client access.

      The following prompt is displayed:

      Please enter the ingress gateway public certificate:

    3. Enter the full path of the public certificate file for client access.

      The following prompt is displayed:

      Please enter the ingress gateway public trusted CA certificate bundle:

    4. Enter the full path of the public trusted CA certificate bundle file.

      The following message is displayed:

        Adding secret nginx-nb-tls-nsp to namespace namespace...


87 

If the deployment includes MDM, the following prompt is displayed:

Would you like to provide mTLS certificates for the NSP mediation interface for two-way TLS authentication? [yes,no]

Perform one of the following.

  1. Enter no ↵ if you are not using mTLS or have no certificate to provide for mTLS.

  2. Provide your own certificate to secure MDM and gNMI telemetry.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'mediation-mtls-key'

      mTLS artifacts used to secure MDM communications with nodes.

      Please enter the mediation private key:

    2. Enter the full path of the mediation private key.

      The following prompt is displayed:

      Please enter the mediation CA certificate:

    3. Enter the full path of the mediation CA certificate.

      The following messages are displayed:

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created


88 

Back up the Kubernetes secrets.

  1. Enter the following:

    ./nspdeployerctl secret -o backup_file backup ↵

    where backup_file is the absolute path and name of the backup file to create

    As the secrets are backed up, messages like the following are displayed for each Kubernetes namespace:

    Backing up secrets to /opt/backupfile...

      Including secret namespace:ca-key-pair-external

      Including secret namespace:ca-key-pair-internal

      Including secret namespace:nsp-tls-store-pass

    When the backup is complete, the following prompt is displayed:

    Please provide an encryption password for backup_file

    enter aes-256-ctr encryption password:

  2. Enter a password.

    The following prompt is displayed:

    Verifying - enter aes-256-ctr encryption password:

  3. Re-enter the password.

    The backup file is encrypted using the password.

  4. Record the password for use when restoring the backup.

  5. Record the name of the data center associated with the backup.

  6. Transfer the backup file to a secure location in a separate facility for safekeeping.


Upgrade NSP software
 
89 

Return to the console window on the NSP deployer host.


90 

If you are configuring the new standby (former primary) cluster in a DR deployment, obtain and restore the secrets backup file from the NSP cluster in the new primary data center.

  1. Enter the following:

    scp address:path/backup_file /tmp/ ↵

    where

    address is the address of the NSP deployer host in the primary cluster

    path is the absolute file path of the backup file created in Step 88

    backup_file is the secrets backup file name

    The backup file is transferred to the local /tmp directory.

  2. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵

  3. Enter the following:

    ./nspdeployerctl secret -i /tmp/backup_file restore ↵

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  4. Enter the password recorded in Step 88.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass

  5. If you answered yes to the Step 86 prompt for client access during the primary NSP cluster configuration, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster.

    Enter the following:

    ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵

    where

    customKey is the full path of the private server key file

    customCert is the full path of the server public certificate file

    customCaCert is the full path of the CA public certificate file

    Messages like the following are displayed as the server secret is updated:

    secret/nginx-nb-tls-nsp patched

    The following files may contain sensitive information. They are no longer required by NSP and may be removed.

      customKey

      customCert

      customCaCert


91 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


92 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP is upgraded.


Monitor NSP initialization
 
93 

Return to the console window on the NSP cluster host.


94 

If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, perform the following steps to uncordon the nodes cordoned in Step 81.

  1. Enter the following:

    kubectl get pods -A | grep Pending ↵

    The pods in the Pending state are listed; an mdm-server pod name has the format mdm-server-ID.

    Note: Some mdm-server pods may be in the Pending state because the manually labeled MDM nodes are cordoned in Step 81. You must not proceed to the next step if any pods other than the mdm-server pods are listed as Pending. If any other pod is shown, re-enter the command periodically until no pods, or only mdm-server pods, are listed.

  2. Enter the following for each manually labeled and cordoned node:

    kubectl uncordon node

    where node is an MDM node name recorded in Step 81

    The MDM pods are deployed.

    Note: The deployment of all MDM pods may take a few minutes.

  3. Enter the following periodically to display the MDM pod status:

    kubectl get pods -A | grep mdm-server ↵

  4. Ensure that the number of mdm-server-ID instances is the same as the mdm clusterSize value in nsp-config.yml, and that each pod is in the Running state. Otherwise, contact technical support for assistance.


95 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.

  2. Check that the PVCs are bound to PVs, and that the PVs are created with the expected STORAGECLASS, as shown below:

    # kubectl get pvc -A

    NAMESPACE            NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE

    nsp-psa-privileged   data-volume-mdm-server-0   Bound    pvc-ID  5Gi        RWO            storage_class  age

    nsp-psa-restricted   data-nspos-kafka-0         Bound    pvc-ID  10Gi       RWO            storage_class   age

    nsp-psa-restricted   data-nspos-zookeeper-0     Bound    pvc-ID  2Gi        RWO            storage_class  age

    ...

    # kubectl get pv

    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM

    nspos-fluentd-logs-data   50Mi       ROX            Retain           Bound    nsp-psa-restricted/nspos-fluentd-logs-data

    pvc-ID                   10Gi       RWO            Retain           Bound    nsp-psa-restricted/data-nspos-kafka-0 

    pvc-ID                   2Gi        RWO            Retain           Bound    nsp-psa-restricted/data-nspos-zookeeper-0

    ...

  3. Verify that all pods are in the Running state.

  4. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  5. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

  6. Enter the following to display the status of the NSP cluster members:

    kubectl get nodes ↵

    The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.


96 

Enter the following on the NSP cluster host to ensure that all pods are running:

kubectl get pods -A ↵

The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.

The nsp deployer log file is /var/log/nspdeployerctl.log.


97 

To remove the sensitive NSP security information from the local file system, enter the following:

rm -rf /tmp/appliedConfigs ↵

The /tmp/appliedConfigs directory is deleted.


Verify upgraded NSP cluster operation
 
98 

Use a browser to open the NSP cluster URL.
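If you first want to confirm from a command line that the cluster URL responds, a quick curl check like the following should return an HTTP status code; the address nsp.example.com is a placeholder for your NSP cluster address:

  curl -sk -o /dev/null -w '%{http_code}\n' https://nsp.example.com/ ↵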


99 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


Upgrade MDM adaptors
 
100 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources, or CRs.

Perform the following steps.

Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.

  1. Upgrade the adaptor suites; see “How do I install adaptor artifacts that are not supported in the Artifacts view?” in the NSP System Administrator Guide for information.

  2. When the adaptor suites are upgraded successfully, use NSP Artifacts to install the required telemetry artifact bundles that are packaged with the adaptor suites.

    • nsp-telemetry-cr-nodeType-version.rel.release.ct.zip

    • nsp-telemetry-cr-nodeType-version.rel.release-va.zip

  3. View the messages displayed during the installation to verify that the artifact installation is successful.


Upgrade or enable additional components and systems
 
101 

If the NSP deployment includes the VSR-NRC, upgrade the VSR-NRC as described in the VSR-NRC documentation.


102 

If you are including an existing NFM-P system in the deployment, perform one of the following.

  1. Upgrade the NFM-P to the NSP release; see NFM-P system upgrade from Release 22.9 or later.

  2. Enable NFM-P and NSP compatibility; perform To enable NSP compatibility with an earlier NFM-P system.

Note: An NFM-P system upgrade procedure includes steps for upgrading the following components in an orderly fashion:

  • NSP auxiliary database

  • NSP analytics servers


103 

If the NSP system includes the WS-NOC, perform the appropriate procedure in WS-NOC and NSP integration to enable WS-NOC integration with the upgraded NSP system.


Purge Kubernetes image files
 
104 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


105 

Enter the following:

./nspk8sctl purge-registry -e ↵

The images are purged.


Purge NSP image files
 
106 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


107 

Enter the following:

./nspdeployerctl purge-registry -e ↵

The charts and images are purged.


Restore SELinux enforcing mode
 
108 

If either of the following is true, perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and each NSP cluster VM.

  • You switched from SELinux enforcing mode to permissive mode before the upgrade, and want to restore the use of enforcing mode.

  • The upgrade has enabled SELinux in the NSP cluster for the first time, but in permissive mode, and you want the more stringent security of enforcing mode.


109 

Close the open console windows.


Import NFM-P users and groups
 
110 

If you need to import NFM-P users to the NSP local user database as you transition to OAUTH2 user authentication, perform “How do I import users and groups from NFM-P?” in the NSP System Administrator Guide.


Remove path-control subscriptions
 
111 

If you are upgrading a Release 23.4 or later system that has path-control telemetry flow integration enabled, you must remove the older subscriptions, which can no longer be used.

Issue the following REST API call:

Note: In order to issue a REST API call, you require a token; see the My First NSP API Client tutorial on the Network Developer Portal for information.

POST https://{{address}}:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe

where address is the NSP advertised address

The message body is the following:

{

"subscription" : "nrcp-sub"

}

The subscriptions are removed.
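A minimal curl sketch of the call, assuming you have already obtained a token as described in the tutorial and exported it as the TOKEN environment variable; replace address with the NSP advertised address:

  curl -k -X POST "https://address:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"subscription": "nrcp-sub"}' ↵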


Restore classic telemetry collection
 
112 

Telemetry data collection for classically mediated NEs does not automatically resume after an upgrade from NSP Release 23.11 or earlier. Manual action is required to restore the data collection.

If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.

  1. The upgrade changes IPv6 and dual-stack NE IDs. Using the updated NE ID from Device Management, edit your subscriptions and update the NE ID in the Object Filter. See “How do I manage subscriptions?” in the NSP Data Collection and Analysis Guide.

  2. If you have subscriptions with telemetry type telemetry:/base/interfaces/utilization, update the control files:

    1. Disable the subscriptions with telemetry type telemetry:/base/interfaces/utilization.

    2. Obtain and install the latest nsp-telemetry-cr-va-sros artifact bundle from the Nokia Support portal; see “How do I install an artifact bundle?” in the NSP Network Automation Guide.

    3. Enable the subscriptions.

  3. Add the classically mediated devices to a unified discovery rule that uses GRPC mediation; see “How do I stitch a classic device to a unified discovery rule?” in the NSP Device Management Guide.

    Collection resumes for Classic NEs.

Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.


Synchronize LSP paths
 
113 

After upgrading NSP to Release 24.8 or later from any release earlier than 24.8, path properties for existing LSP paths require updating using the sync API.

The following API call ensures that the tunnel-id and signaling-type properties for LSPs that existed prior to the upgrade to NSP 24.8 or later are updated with the correct device mappings. This step is required for MDM NEs only, and is not required for classically managed NEs.

https://server_IP/restconf/operations/nsp-admin-resync:trigger-resync

Method: POST

Body:

{
  "nsp-admin-resync:input": {
    "plugin-id": "mdm",
    "network-element": [
      {
        "ne-id": {{ne_ipaddress}},
        "sbi-classes": [
          {
            "class-id": "nokia-state:/state/router/mpls/lsp/primary"
          },
          {
            "class-id": "nokia-state:/state/router/mpls/lsp/secondary"
          }
        ]
      }
    ]
  }
}

where ne_ipaddress is the IP address of the NE
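A minimal curl sketch of the resync request, again assuming a token in the TOKEN environment variable; server_IP and the NE address 192.0.2.10 are placeholders:

  curl -k -X POST "https://server_IP/restconf/operations/nsp-admin-resync:trigger-resync" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"nsp-admin-resync:input": {"plugin-id": "mdm", "network-element": [{"ne-id": "192.0.2.10",
         "sbi-classes": [{"class-id": "nokia-state:/state/router/mpls/lsp/primary"},
                         {"class-id": "nokia-state:/state/router/mpls/lsp/secondary"}]}]}}' ↵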


114 

Confirm the completion of the resync operation for the affected MDM NEs by performing the following steps:

  1. On the nspos-resync-fw-0 pod, enter the following command:

    kubectl exec -it -n nsp-psa-restricted nspos-resync-fw-0 -- /bin/bash ↵

  2. Run a tail on mdResync.log:

    tail -f /opt/nsp/os/resync-fw/logs/mdResync.log ↵

  3. The log contains messages similar to the following after completing sync on an NE:

    done resync neId=3.3.3.3, resync task id=48, isFullResync=false, isScheduleResync=false, device classes=[nokia-state:/state/router/mpls/lsp/primary, nokia-state:/state/router/mpls/lsp/secondary]

  4. Verify that the sync is completed for all required MDM NEs.


115 

The following API call updates path control UUIDs and properties in existing IETF TE tunnel primary-paths and secondary-paths. This API syncs values from the path control database to the YANG database. The synchronization process occurs in the background when the API call is executed.

https://server_IP/lspcore/api/v1/syncLspPaths

Method: GET

Sample result:

HTTP Status OK/40X

{    response: {

        data: null,

        status:  40x,

        errors: {

            errorMessage: ""

        },

        startRow: 0,

        endRow: 0,

        totalRows: {}

    }

}

HTTP Status OK

{

    response: {

        data: null,

        status:  200,

        startRow: 0,

        endRow: 0,

        totalRows: {}

    }

}
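A minimal curl sketch of the call, again with a token in the TOKEN environment variable and server_IP as a placeholder:

  curl -k -X GET "https://server_IP/lspcore/api/v1/syncLspPaths" \
    -H "Authorization: Bearer $TOKEN" ↵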


Perform post-upgrade tasks
 
116 

Modify your UAC configuration to maintain user access to the Service Configuration Health dashlet in the Network Map and Health view.

At a minimum, open each role object that controls access to the Network Map and Health view, make at least one change to the role (even if only to the Description field), and save the change. This action is required to maintain user access to the Service Configuration Health dashlet after an upgrade.

See “How do I configure a role?” in the NSP System Administrator Guide for information on modifying roles.

End of steps