To upgrade a Release 25.8 or earlier NSP cluster

Purpose
CAUTION

Network management outage

The procedure requires a shutdown of the NSP system, which causes a network management outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.

Perform this procedure to upgrade a standalone or DR NSP system after you have performed To prepare for an NSP system upgrade from Release 25.8 or earlier.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Steps
Stop and undeploy NSP cluster
 

Log in as the root or NSP admin user on the NSP deployer host.


Perform the following steps to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the file.
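The edit in the substeps above can also be scripted. The following sketch uses a temporary stand-in for nsp-config.yml; on the NSP deployer host, substitute the real path for your installed release:

```shell
# Stand-in for /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
platform:
  kubernetes:
    deleteOnUndeploy: true
EOF

# Set deleteOnUndeploy to false without disturbing the line's indentation
sed -i 's/\(deleteOnUndeploy:\).*/\1 false/' "$CFG"

# Confirm the change took effect
grep 'deleteOnUndeploy' "$CFG"
```

Verifying with grep after the edit guards against a typo silently leaving the flag at true, which would cause the undeploy to remove the cluster data.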


CAUTION

Data Loss

Undeploying an NSP cluster as described in this step permanently removes the cluster data.

If you are upgrading a DR NSP system, you must ensure that you have the latest database backup from the primary cluster before you perform this step.

Undeploy the NSP cluster.

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in a command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

  1. Enter the following:

    /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


Preserve NSP cluster configuration
 

Ensure that the following file is copied to a separate station that is unaffected by the upgrade activity.

/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml


Ensure that the following file is copied to a separate station that is unaffected by the upgrade activity.

/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
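One way to confirm that the preserved copies are intact after transfer is to record checksums alongside them. The sketch below uses temporary stand-in files; in practice the two files are the k8s-deployer.yml and nsp-deployer.yml paths named above, and the copy is an scp to the separate station:

```shell
# Stand-ins for the two preserved deployer configuration files
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "hosts: ..."   > "$SRC/k8s-deployer.yml"
echo "cluster: ..." > "$SRC/nsp-deployer.yml"

# Record checksums before the transfer
( cd "$SRC" && sha256sum k8s-deployer.yml nsp-deployer.yml > sums.txt )

# Transfer (cp here; scp to the separate station in practice)
cp "$SRC"/*.yml "$SRC/sums.txt" "$DEST"

# Verify the copies on the destination
( cd "$DEST" && sha256sum -c sums.txt )
```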


Create NSP deployer host
 

Log in as the root or NSP admin user on the station that will host the NSP deployer host VM.


Open a console window.


Enter the following:

dnf -y install virt-install libguestfs-tools ↵


Before you create the new NSP deployer host VM, you must disable the existing VM; the following options are available.

  • stop but do not delete existing VM—simplifies upgrade rollback, but VM consumes platform resources

  • delete existing VM—complicates upgrade rollback, but conserves platform resources

  1. Log in as the root or NSP admin user on the station that hosts the NSP deployer host VM.

  2. Enter the following to list the VMs on the station:

    virsh list ↵

    The VMs are listed.

  3. Enter the following:

    virsh destroy VM ↵

    where VM is the name of the NSP deployer host VM

    The NSP deployer host VM stops.

  4. To delete the VM, enter the following:

    Note: If you intend to use the same VM name for the new NSP deployer host VM, you must delete the VM.

    virsh undefine VM ↵

    The VM is deleted.
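The stop-and-delete sequence in the substeps above can be rehearsed as a dry run before touching the real host. In this sketch the VM name is hypothetical, and DRYRUN=echo prints each virsh command instead of executing it; set DRYRUN to empty on the actual station:

```shell
# Dry-run rehearsal of the virsh substeps; clear DRYRUN to execute for real
DRYRUN=echo
VM=nsp-deployer-host          # hypothetical VM name; use the name from 'virsh list'

$DRYRUN virsh list            # list the VMs on the station
$DRYRUN virsh destroy "$VM"   # stop the NSP deployer host VM
$DRYRUN virsh undefine "$VM"  # delete the VM (required to reuse the same name)
```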


10 

Perform one of the following to create the new NSP deployer host VM.

Note: The NSP deployer host VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL9_yy_mm.qcow2 disk image; perform Step 6 to Step 15 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL9_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


Install NSP Kubernetes registry
 
11 

Log in as the root or NSP admin user on the NSP deployer host.


12 

Enter the following:

mkdir /opt/nsp ↵


13 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_K8S_DEPLOYER_R_r.tar.gz


14 

Enter the following:

cd /opt/nsp ↵


15 

Enter the following:

tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form Major_minor

The file is expanded, and the following directories are created:

  • /opt/nsp/nsp-k8s-deployer-release-ID

  • /opt/nsp/nsp-registry-release-ID


16 

After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The file is deleted.
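As a precaution, you can make the bundle removal conditional on both extracted directories actually existing. The sketch below uses a temporary directory and an illustrative release ID; on the deployer host the root is /opt/nsp and the directories carry the real release-ID:

```shell
# Stand-in for /opt/nsp after extraction; release ID 25.11 is illustrative
ROOT=$(mktemp -d)
mkdir -p "$ROOT/nsp-k8s-deployer-25.11" "$ROOT/nsp-registry-25.11"
touch "$ROOT/NSP_K8S_DEPLOYER_R_r.tar.gz"

# Remove the bundle only if both expected directories were created
if [ -d "$ROOT"/nsp-k8s-deployer-* ] && [ -d "$ROOT"/nsp-registry-* ]; then
    rm -f "$ROOT/NSP_K8S_DEPLOYER_R_r.tar.gz"
    echo "bundle removed"
fi
```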


17 

Enter the following:

cd nsp-registry-release-ID/bin ↵


18 

Enter the following:

./nspregistryctl install ↵

The following prompt is displayed.

Enter a registry admin password:


19 

Create a registry administrator password; the password must:

  • be a minimum of 10 characters

  • include at least one:

    • uppercase character

    • lowercase character

    • digit

    • special character in the following list:

      ! # $ % & ( ) * + , - . / : ; = ? @ \ ^ _ { | }
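A candidate password can be checked against these rules locally before it is entered at the prompt. The helper below is a local sketch, not part of nspregistryctl:

```shell
# Return success only if the password meets the registry rules above
valid_registry_password() {
    p=$1
    [ ${#p} -ge 10 ] || return 1                                  # minimum 10 characters
    printf %s "$p" | grep -q '[A-Z]' || return 1                  # an uppercase character
    printf %s "$p" | grep -q '[a-z]' || return 1                  # a lowercase character
    printf %s "$p" | grep -q '[0-9]' || return 1                  # a digit
    printf %s "$p" | grep -q '[!#$%&()*+,./:;=?@\\^_{|}-]' \
        || return 1                                               # a listed special character
}

valid_registry_password 'Example#2025' && echo "password ok"
valid_registry_password 'short' || echo "password rejected"
```

The sample passwords are illustrative only; choose your own value for the actual installation.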


20 

Enter the password.

The following prompt is displayed.

Confirm the registry admin password:


21 

Re-enter the password.

The registry installation begins, and messages like the following are displayed.

✔ New installation detected.

✔ Initialize system.

date time Copy container images ...

date time Install/update package [container-selinux] ...

✔ Installation of container-selinux has completed.

date time Install/update package [k3s-selinux] ...

✔ Installation of k3s-selinux has completed.

date time Setup required tools ...

✔ Initialization has completed.

date time Install k3s ...

date time Waiting for up to 10 minutes for k3s initialization ...

..............................................

✔ Installation of k3s has completed.

➜ Generate self-signed key and cert.

date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key

date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt

date time Install registry apps ...

date time Waiting for up to 10 minutes for registry services to be ready ...

..........

✔ Registry apps installation is completed.

date time Generate artifacts ...

date time Apply artifacts ...

date time Setup registry.nsp.nokia.local certs ...

date time Setup a default project [nsp] ...

date time Setup a cron to regenerate the k3s certificate [nsp] ...

✔ Post configuration is completed.

✔ Installation has completed.


22 

If you want to create a non-root user, see Restricting root-user system access.


Migrate legacy cluster parameters
 
23 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/tools ↵


24 

Copy the k8s-deployer.yml file saved in Step 4 to the current directory.


25 

You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file.

Open the following files using a plain-text editor such as vi:

  • old configuration file—the k8s-deployer.yml file copied to the current directory in Step 24

  • new configuration file—/opt/nsp/nsp-k8s-deployer-new-release-ID/config/k8s-deployer.yml


26 

Apply the settings in the old file to the same parameters in the new file.


27 

Close the old k8s-deployer.yml file.


28 

Edit the following line in the cluster section of the new file to read:

  hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"


29 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path for the NSP admin user, for example, /home/NSP admin user/.ssh/id_rsa


30 

Configure the following parameters for each NSP cluster VM; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.

- nodeName: node_name

  nodeIp: private_IP_address

  nodeIpv6: private_IPv6_address

  accessIp: public_IP_address

  isIngress: value

where

node_name is the VM name

private_IP_address is the VM IP address on the internal network

private_IPv6_address is the optional VM IPv6 address on the internal network; in a NAT environment, this is the private IPv6 address.

public_IP_address is the public VM address; required when the NSP deployer host and cluster nodes have different interfaces for internal and public traffic

value is true or false, and indicates whether the node acts as a load-balancer endpoint


31 

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the virtualIP values for NSP client, internal, and mediation access that you specify as described above in the nsp-config.yml file.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - mediation_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP


32 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enableIpv6Stack: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing

Note: Each cluster VM must have a hosts.nodeIpv6 value defined in k8s-deployer.yml, or a default IPv6 route must be defined on all of the cluster VMs.


33 

Configure the following parameters for artifact import configuration:

  imageSignatureVerificationFile: "image_path"

where

image_path is the absolute path to a file in PEM format containing the certificates that are to be used to verify the signature of images imported into the NSP registry

The certificate is in the nsp-signing-keys.zip bundle that you downloaded in Obtain installation software.

To extract the contents, enter the following:

unzip nsp-signing-keys.zip ↵

The contents are extracted to the current directory.

They include Nokia_Root_Certificate.crt, which is used as the value for imageSignatureVerificationFile.
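Before pointing imageSignatureVerificationFile at the extracted certificate, a quick sanity check that the file is PEM-encoded can catch a wrong path or a binary (DER) file. The sketch below creates a stand-in file; on the deployer host, check the extracted Nokia_Root_Certificate.crt itself:

```shell
# Stand-in for the extracted Nokia_Root_Certificate.crt
CRT=$(mktemp)
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > "$CRT"

# A PEM certificate file contains a BEGIN CERTIFICATE header
if grep -q 'BEGIN CERTIFICATE' "$CRT"; then
    echo "PEM certificate found: $CRT"
else
    echo "not a PEM certificate: $CRT" >&2
fi
```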


34 

Save and close the new k8s-deployer.yml file.


Create NSP cluster VMs
 
35 

For each required NSP cluster VM, perform one of the following to create the VM.

Note: Each NSP cluster VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL9_yy_mm.qcow2 disk image; perform Step 6 to Step 15 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL9_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


36 

Perform Step 37 to Step 39 for each NSP cluster VM to configure the required interfaces.


Configure NSP cluster interfaces
 
37 

Enter the following on the NSP deployer host to open a console session on the VM:

virsh console NSP_cluster_VM

where NSP_cluster_VM is the VM name

You are prompted for credentials.


38 

Enter the following credentials:

  • username—root

  • password—available from technical support

A virtual serial console session opens on the NSP cluster VM.


39 

Perform Step 7 to Step 17 in Configure NSP deployer host networking.


Deploy container environment
 
40 

Log in as the root or NSP admin user on the NSP deployer host.


41 

Open a console window.


42 

For password-free NSP deployer host access to the NSP cluster VMs, you require an SSH key.

To generate and distribute the SSH key, perform the following steps.

  1. Enter the following:

    ssh-keygen -N "" -f path -t rsa ↵

    where path is the SSH key file path; for example, /home/user_deployer/.ssh/id_rsa where user_deployer is root or NSP admin user

    An SSH key is generated.

  2. Enter the following for each NSP cluster VM to distribute the key to the VM.

    ssh-copy-id -i key_file user@address ↵

    where

    user is the designated NSP ansible user configured in Step 29, if root-user access is restricted; otherwise, user@ is not required

    key_file is the SSH key file, for example, /home/user_deployer/.ssh/id_rsa.pub

    address is the NSP cluster VM IP address
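For a multi-node cluster, the key distribution is typically looped over the node addresses. The sketch below is a dry run: the addresses are illustrative, and DRYRUN=echo prints the commands instead of executing them; clear DRYRUN on the real deployer host:

```shell
# Dry-run rehearsal of key generation and distribution; addresses are illustrative
DRYRUN=echo
KEY=/tmp/demo_id_rsa
NODES="203.0.113.11 203.0.113.12 203.0.113.13"

$DRYRUN ssh-keygen -N "" -f "$KEY" -t rsa

for addr in $NODES; do
    # prefix the address with user@ only if root access is restricted (Step 29)
    $DRYRUN ssh-copy-id -i "$KEY.pub" "root@$addr"
done
```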


43 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


44 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


45 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value is the public address of the cluster node.

Otherwise:

  • The ip value is the private IP address of the cluster node.

  • The access_ip value matches the ip value.

Note: The ansible_host value must match the access_ip value.

Existing cluster hosts configuration is:

node_1_name:

  ansible_host: 203.0.113.11

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_2_name:

  ansible_host: 203.0.113.12

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_3_name:

  ansible_host: 203.0.113.13

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "true"

node_4_name:

  ansible_host: 203.0.113.14

  ip: private_IP

  access_ip: public_IP

  node_labels:

    isIngress: "false"
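The note that ansible_host must match access_ip can be checked mechanically over the listing. The helper below is a local sketch that parses `./nspk8sctl config -l` style output (demo input shown; pipe the real listing through it on the deployer host):

```shell
# Flag any node whose access_ip differs from its ansible_host value
check_hosts() {
    awk '
        /ansible_host:/ { ah = $2 }
        /access_ip:/    { if ($2 != ah) { print "mismatch: " $2; bad = 1 } }
        END             { exit bad }
    '
}

check_hosts <<'EOF' && echo "hosts ok"
node_1_name:
  ansible_host: 203.0.113.11
  ip: 10.0.0.11
  access_ip: 203.0.113.11
node_2_name:
  ansible_host: 203.0.113.12
  ip: 10.0.0.12
  access_ip: 203.0.113.12
EOF
```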


46 

Verify the IP addresses.


47 

Enter the following to import the Kubernetes images to the repository:

../nspk8sctl import ↵


48 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

../nspk8sctl install ↵

The NSP Kubernetes environment is deployed.


49 

The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.


50 

Open a console window on the NSP deployer host and enter the following:

export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵


51 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.
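Rather than re-running the command by hand, the wait can be scripted. The helper below is a local sketch that parses `kubectl get pods -A --no-headers` output, where the STATUS column is the fourth field; the demo feeds it sample listing text:

```shell
# Succeed only when every pod STATUS is Running or Completed
all_pods_ready() {
    awk '$4 != "Running" && $4 != "Completed" { bad = 1 } END { exit bad }'
}

# On the deployer host you would poll, for example:
#   until kubectl get pods -A --no-headers | all_pods_ready; do sleep 30; done

all_pods_ready <<'EOF' && echo "all pods ready"
kube-system  coredns-abc         1/1  Running    0  5m
kube-system  metrics-server-def  1/1  Running    0  5m
nsp          init-job-xyz        0/1  Completed  0  2m
EOF
```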


52 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


Restore NSP system files
 
53 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_DEPLOYER_R_r.tar.gz


54 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


55 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form Major_minor

The bundle file is expanded, and the following directory of NSP installation files is created:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID


56 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


57 

Restore the required NSP configuration files.

  1. Enter the following:

    mkdir /tmp/appliedConfig ↵

  2. Enter the following:

    cd /tmp/appliedConfig ↵

  3. Transfer the following configuration backup file saved in To prepare for an NSP system upgrade from Release 25.8 or earlier to the /tmp/appliedConfig directory:

    nspConfiguratorConfigs.zip

  4. Enter the following:

    unzip nspConfiguratorConfigs.zip ↵

    The configuration files are extracted to the current directory, and include some or all of the following, depending on the previous deployment:

    • license file

    • nsp-config.yml file

    • TLS files; may include subdirectories

    • SSH key files

    • nsp-configurator/generated directory content

  5. Copy all extracted TLS certificates from the tls subdirectories to the appropriate subdirectories under /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls.
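The TLS copy in substep 5 must preserve the subdirectory layout. The sketch below uses temporary stand-in directories; in practice the source is the extracted tls tree under /tmp/appliedConfig and the destination is the tls directory under the new NSP-CN-release-ID path:

```shell
# Stand-ins for the extracted tls tree and the new deployer tls directory
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/tls/ca" "$DEST/tls"
echo "cert" > "$SRC/tls/ca/ca_internal.pem"

# Copy the tls tree contents, keeping subdirectories such as tls/ca intact
cp -r "$SRC/tls/." "$DEST/tls/"

[ -f "$DEST/tls/ca/ca_internal.pem" ] && echo "tls files restored"
```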


58 

Perform one of the following.

  1. If you are upgrading from Release 23.11 or earlier, you must merge the current nsp-deployer.yml settings into the new nsp-deployer.yml file.

    1. Open the following new cluster configuration file and the old nsp-deployer.yml file saved in Step 5 using a plain-text editor such as vi:

      /opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

    2. Apply the settings in the old file to the same parameters in the new file.

    3. Close the old nsp-deployer.yml file.

  2. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

    Update hosts attributes.


59 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path for the NSP admin user, for example, /home/NSP admin user/.ssh/id_rsa


60 

In the Artifact import configuration section, configure the following parameters:

  imageSignatureVerificationFile: "/image_path/Nokia_Root_Certificate.crt"

  chartSignatureVerificationFile: "/chart_path/nokia_helm_public_keyring.gpg" 

where

image_path is the absolute path to a file in PEM format containing the certificates that are to be used to verify the signature of images imported into the NSP registry

chart_path is the absolute path to a GNU Privacy Guard (GPG) file that is to be used to verify the provenance of charts imported into the NSP registry

The certificate is in the nsp-signing-keys.zip bundle that you downloaded in Obtain installation software.

To extract the contents, enter the following:

unzip nsp-signing-keys.zip ↵

The contents are extracted to the current directory.

The contents include Nokia_Root_Certificate.crt and nokia_helm_public_keyring.gpg, which are used as the values for the imageSignatureVerificationFile and chartSignatureVerificationFile parameters.


61 

Save and close the new nsp-deployer.yml file.


62 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

Note: The import operation may take 20 minutes or longer.

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl import ↵


Upgrade auxiliary database
 
63 

If the NSP system includes the NFM-P and an auxiliary database, the auxiliary database upgrade is performed as part of the NFM-P upgrade; go to Step 68.

If the system has an auxiliary database and no NFM-P, the auxiliary database upgrade is performed as part of the NSP upgrade.


64 

On a standalone NSP cluster where NFM-P is not deployed, perform Step 66 or Step 67, depending on the auxiliary database deployment.


65 

On a DR NSP cluster where NFM-P is not deployed, perform Step 66 or Step 67 after the primary NSP cluster is undeployed, depending on the auxiliary database deployment.


66 

If a standalone auxiliary database is installed, perform To upgrade a standalone auxiliary database to upgrade the auxiliary database.


67 

If a georedundant auxiliary database is installed, perform the following steps:

  1. Perform concurrently:

  2. Activate the former primary (second) auxiliary database cluster; see Disable maintenance mode for auxiliary database agents.

  3. Verify that the auxiliary database is functioning correctly; see Verify auxiliary database status.


Configure NSP software
 
68 

You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file.

Open the following files using a plain-text editor such as vi:

  • former configuration file—/tmp/appliedConfig/nsp-config.yml file extracted in Step 57

  • new configuration file—/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


69 

Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file, if applicable. The following parameters are not present in the new nsp-config.yml file, and are not to be merged:

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the peer cluster and have the same format; if one value is a hostname, the other must also be a hostname.

  • platform section—address configuration now in ingressApplications section

    • advertisedAddress

    • mediationAdvertisedAddress

    • mediationAdvertisedAddressIpv6

    • internalAdvertisedAddress

  • elb section—all parameters; section removed

  • tls section—information now held in secrets:

    • truststorePass

    • keystorePass

    • customKey

    • customCert

  • mtls section—information now held in secrets:

    • mtlsCACert

    • mtlsClientCert

    • mtlsKey

  • sso section—parameters moved to Users and Security in the NSP UI

    • authMode (obsolete)

    • sessionIdleTimeout

    • accessTokenLifespan

    • ldap subsection

    • radius subsection

    • tacacs subsection

Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.

Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.

Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.

  • OpenSearch replaces Elasticsearch as the local log-viewing utility. Consequently, the Elasticsearch configuration cannot be directly copied from the current NSP configuration to the new configuration. Instead, you must configure the parameters in the opensearch subsection under logging, forwarding, applicationLogs in the new NSP configuration file.

  • Elasticsearch is introduced as a remote log-forwarding option. You can enable NSP application-log forwarding to a remote Elasticsearch server in the elasticsearch subsection under logging, forwarding, applicationLogs in the new NSP configuration file.

    See Centralized logging for more information about configuring NSP logging options.
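After the merge, a quick scan of the new file for the removed parameters listed above can catch lines carried over by mistake. The sketch below uses a small demo file; on the deployer host, point NEW at the new nsp-config.yml:

```shell
# Demo stand-in for the merged nsp-config.yml; the advertisedAddress line
# represents an obsolete parameter accidentally carried over
NEW=$(mktemp)
printf 'platform:\n  clusterHost: "192.0.2.10"\n  advertisedAddress: "x"\n' > "$NEW"

# Report any obsolete parameter still present in the merged file
for p in advertisedAddress mediationAdvertisedAddress internalAdvertisedAddress \
         truststorePass keystorePass customKey customCert; do
    grep -n "$p:" "$NEW" > /dev/null && echo "remove obsolete parameter: $p"
done
```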


70 

Configure the following parameter in the platform section as shown below:

Note: You must preserve the lead spacing of the line.

  clusterHost: "cluster_host_address"

where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations


71 

You must apply the address values from the former configuration file to the new parameters.

Configure the following parameters in the platform section, ingressApplications subsection as shown below.

Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 31.

Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.

Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.

Note: The trapForwarder addresses that you specify must differ from the client_IP value, even in a single-interface deployment.

  ingressApplications:

    ingressController:

      clientAddresses:

        virtualIp: "client_IP"

        advertised: "client_public_address"

      internalAddresses:

        virtualIp: "internal_IP"

        advertised: "internal_public_address"

      mediationAddresses:

        virtualIp: "mediation_IP"

        advertised: "mediation_public_address"

    trapForwarder:

      mediationAddresses:

        virtualIpV4: "trapV4_mediation_IP"

        advertisedV4: "trapV4_mediation_public_address"

        virtualIpV6: "trapV6_mediation_IP"

        advertisedV6: "trapV6_mediation_public_address"

where

client_IP is the address for external client access

internal_IP is the address for internal communication

mediation_IP is the address for network mediation

trapV4_mediation_IP is the address for IPv4 network mediation

trapV6_mediation_IP is the address for IPv6 network mediation

each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment


72 

If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below.

Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 31.

    flowForwarder:

      mediationAddresses:

        virtualIpV4: "flowV4_mediation_IP"

        advertisedV4: "flowV4_mediation_public_address"

        virtualIpV6: "flowV6_mediation_IP"

        advertisedV6: "flowV6_mediation_public_address"

where

flowV4_mediation_IP is the mediation address for IPv4 flow collection

flowV6_mediation_IP is the mediation address for IPv6 flow collection

each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment


73 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


74 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.

  1. Enable the following installation option:

    id: networkInfrastructureManagement-gnmiTelemetry

  2. Configure the throughputFactor parameter in the gnmi subsection under nsp, modules, telemetry; see the parameter description in the nsp-config.yml file for the required value, which is based on the management scale:

    throughputFactor: n

    where n is the required throughput factor for your deployment


75 

If required, configure NSP collection of accounting statistics. Perform the following steps:

  1. Enable the following installation option:

    id: networkInfrastructureManagement-accountingTelemetry

  2. Configure the collectFromClassicNes parameter in the accounting subsection under nsp, modules, telemetry. The default is false; set the parameter to true to enable the NSP to process accounting files from classic NEs.

CAUTION: If the collectFromClassicNes flag is true, you must disable file rollover traps on the NE and disable the polling policy for tmnxLogFileIdEntry from the NFM-P to prevent duplicate file collection. See the NSP Classic Management User Guide and the NE CLI documentation.

After the rollover trap is disabled on the NE, the NFM-P and all other third-party systems that depend on the trap to trigger file collection will no longer collect accounting and event files.


76 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""


77 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


78 

If required, configure the user authentication parameters in the sso section; see NSP SSO configuration parameters for configuration information.


79 

If you have an updated license, ensure that the location of your license.zip file, as indicated in the nsp-config.yml file, is in the correct location on the NSP deployer host.


80 

Save and close the new nsp-config.yml file.


81 

Close the previous nsp-config.yml file.


82 

If you are upgrading the new primary (former standby) cluster in a DR deployment, stop here and return to Pathway for DR NSP system upgrade from Release 25.8 or earlier.


Restore Kubernetes secrets
 
83 

If you are configuring a standalone NSP cluster, or the new primary cluster in a DR deployment, enter the following; otherwise, go to Step 94:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


84 

If you are upgrading from Release 24.8 or later, obtain and restore the secrets backup file.

  1. Enter the following:

    scp address:path/backup_file /tmp/ ↵

    where

    address is the address of the NSP deployer host in the primary cluster

    path is the absolute file path of the backup file

    backup_file is the secrets backup file name

    The backup file is transferred to the local /tmp directory.

  2. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵

  3. Enter the following:

    ./nspdeployerctl secret -i /tmp/backup_file restore ↵

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  4. Enter the password recorded in Step 6.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass


85 

If you are upgrading from Release 24.8 or later, perform the following steps.

Note: The Kubernetes secrets from the previous release already exist; you are subsequently prompted for new or optional secrets that were not created in the previous release. An example of an optional secret is custom certificates.

  1. Enter the following:

    ./nspdeployerctl secret install ↵

    A prompt may be displayed, depending on whether new or optional secrets were created.

  2. If any prompts are displayed, enter no ↵.

  3. Skip Step 86 through Step 91.


86 

If you are upgrading from Release 24.4 or earlier, perform Step 87 to Step 91.

Note: To install the Kubernetes secrets, you require the backed-up TLS certificates extracted in Step 57.


87 

Enter the following:

Note: To install the Kubernetes secrets, you require the backed-up TLS artifacts extracted in Step 57.

./nspdeployerctl secret install ↵

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]


88 

Provide the internal CA artifacts from the appliedConfig location extracted in Step 57.

  1. Enter yes ↵.

    The following messages and prompt are displayed:

    Building secret 'ca-key-pair-internal-nspdeployer'

    The CA key pair used to sign certificates generated by the NSP Internal Issuer.

    Please enter the internal CA private key:

  2. Enter the full path of the internal private key (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/ca/ca_internal.key).

    The following prompt is displayed:

    Please enter the internal CA certificate:

  3. Enter the full path of the internal certificate (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/ca/ca_internal.pem).

    The following messages are displayed for each Kubernetes namespace:

    Adding secret ca-key-pair-internal-nspdeployer to namespace namespace...

    secret/ca-key-pair-internal-nspdeployer created

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP External Issuer? [yes,no]


89 

Provide your own certificate to secure the external network.

  1. Enter yes ↵.

    The following messages and prompt are displayed:

    Building secret 'ca-key-pair-external-nspdeployer'

    The CA key pair used to sign certificates generated by the NSP External Issuer.

    Please enter the external CA private key:

  2. Enter the full path of the external private key (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/ca/ca.key).

    The following prompt is displayed:

    Please enter the external CA certificate:

  3. Enter the full path of the external certificate (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/ca/ca.pem).

    The following messages are displayed for each Kubernetes namespace:

    Adding secret ca-key-pair-external-nspdeployer to namespace namespace...

    secret/ca-key-pair-external-nspdeployer created

Would you like to provide a custom private key and certificate for use by NSP endpoints when securing TLS connections over the client network? [yes,no]
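Before supplying the CA files at the prompts in the preceding steps, it can be worth confirming that the certificate and private key actually belong together. The following sketch is illustrative only and is not part of nspdeployerctl; it generates a throwaway stand-in pair rather than touching the real appliedConfig files:

```shell
# Hypothetical pre-check: a CA certificate and key match when their public
# keys are identical. The stand-in pair below is generated for illustration;
# in practice you would point at ca_internal.key/ca_internal.pem or
# ca.key/ca.pem from the appliedConfig tls/ca directory.
workdir=$(mktemp -d)

openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example-ca" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null

# Hash the public key from each side and compare.
key_pub=$(openssl pkey -in "$workdir/ca.key" -pubout | openssl sha256)
crt_pub=$(openssl x509 -in "$workdir/ca.pem" -pubkey -noout | openssl sha256)

[ "$key_pub" = "$crt_pub" ] && echo "CA key pair matches"
```

A mismatch here would cause the installed issuer secret to sign certificates that do not verify, so the check is cheap insurance before entering the file paths.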


90 

If the original installation specified custom certificates in the tls section of nsp-config.yml, provide the custom certificate files.

  1. Enter yes ↵.

    The following messages and prompt are displayed:

    Building secret 'nginx-nb-tls-nsp'

    TLS certificate for securing the ingress gateway.

    Please enter the ingress gateway private key:

  2. Enter the full path of the private key file for client access (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/custom/customKey).

    The following prompt is displayed:

    Please enter the ingress gateway public certificate:

  3. Enter the full path of the public certificate file for client access (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/custom/customCert).

    The following prompt is displayed:

    Please enter the ingress gateway public trusted CA certificate bundle:

  4. Enter the full path of the public trusted CA certificate bundle file (/tmp/appliedConfig/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/tls/custom/customCaCert).

    The following message is displayed:

      Adding secret nginx-nb-tls-nsp to namespace namespace...
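If you provide custom ingress certificates, a quick openssl check can confirm that customCert chains to customCaCert before you enter the paths. This sketch is not part of the NSP tooling; it generates throwaway files that reuse the illustrative customKey/customCert/customCaCert names:

```shell
# Hypothetical pre-check: confirm the server certificate verifies against the
# CA bundle. All files below are throwaway stand-ins for customKey,
# customCert and customCaCert.
workdir=$(mktemp -d)

# Stand-in CA and a server certificate signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example-ca" \
  -keyout "$workdir/ca.key" -out "$workdir/customCaCert" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=nsp.example.com" \
  -keyout "$workdir/customKey" -out "$workdir/server.csr" 2>/dev/null
openssl x509 -req -in "$workdir/server.csr" -CA "$workdir/customCaCert" \
  -CAkey "$workdir/ca.key" -CAcreateserial -days 1 \
  -out "$workdir/customCert" 2>/dev/null

# "OK" on stdout means the chain is intact.
openssl verify -CAfile "$workdir/customCaCert" "$workdir/customCert"
```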


91 

If the deployment includes MDM and mTLS is enabled in nsp-config.yml, the following prompt is displayed:

Would you like to provide mTLS certificates for the NSP mediation interface for two-way TLS authentication? [yes,no]

Perform one of the following.

  1. Enter no ↵ if you are not using mTLS or have no certificate to provide for mTLS.

  2. Provide your own certificate to secure MDM and gNMI telemetry.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'mediation-mtls-key'

      mTLS artifacts use to secure MDM communications with nodes.

      Please enter the mediation private key:

    2. Enter the full path of the mediation private key.

      The following prompt is displayed:

      Please enter the mediation CA certificate:

    3. Enter the full path of the mediation CA certificate.

      The following messages are displayed:

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created


92 

If the NSP deployment is in Secure Boot mode, perform the following:

  1. The following prompt is displayed:

    Building secret 'drbd-signing-key'

    The certificate and key for signing drbd modules

    Please enter the private key enrolled with system: 

    Enter the private key from Step 1 when prompted.

  2. The following prompt is displayed:

    Please enter the public certificate enrolled with system:

    Enter the public certificate from Step 1.


93 

Back up the Kubernetes secrets.

  1. Enter the following:

    ./nspdeployerctl secret -o backup_file backup ↵

    where backup_file is the absolute path and name of the backup file to create

    As the secrets are backed up, messages like the following are displayed for each Kubernetes namespace:

    Backing up secrets to /opt/backupfile...

      Including secret namespace:ca-key-pair-external

      Including secret namespace:ca-key-pair-internal

      Including secret namespace:nsp-tls-store-pass

    When the backup is complete, the following prompt is displayed:

    Please provide an encryption password for backup_file

    enter aes-256-ctr encryption password:

  2. Enter a password.

    The following prompt is displayed:

    Verifying - enter aes-256-ctr encryption password:

  3. Re-enter the password.

    The backup file is encrypted using the password.

  4. Record the password for use when restoring the backup.

  5. Record the name of the data center associated with the backup.

  6. Transfer the backup file to a secure location in a separate facility for safekeeping.
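The backup is encrypted with aes-256-ctr using the password you enter. As an illustration of why the recorded password matters, the following standalone sketch mimics that cipher with openssl on a scratch file; it does not reproduce the actual nspdeployerctl backup format:

```shell
# Hypothetical illustration: encrypt and decrypt a scratch file with
# aes-256-ctr, as a reminder that the backup is unrecoverable without the
# exact password recorded at backup time. Paths and password are examples.
workdir=$(mktemp -d)
echo "example secret payload" > "$workdir/secrets.txt"

openssl enc -aes-256-ctr -pbkdf2 -salt -in "$workdir/secrets.txt" \
  -out "$workdir/secrets.enc" -pass pass:Example!Pw

openssl enc -d -aes-256-ctr -pbkdf2 -in "$workdir/secrets.enc" \
  -out "$workdir/secrets.dec" -pass pass:Example!Pw

cmp -s "$workdir/secrets.txt" "$workdir/secrets.dec" && echo "password verified"
```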


Restore standby Kubernetes secrets
 
94 

If you are configuring the new standby (former primary) cluster in a DR deployment, obtain the secrets backup file from the NSP cluster in the new primary data center.

  1. Enter the following:

    scp address:path/backup_file /tmp/ ↵

    where

    address is the address of the NSP deployer host in the primary cluster

    path is the absolute file path of the backup file created in Step 93

    backup_file is the secrets backup file name

    The backup file is transferred to the local /tmp directory.

  2. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  3. Enter the following:

    ./nspdeployerctl secret -i /tmp/backup_file restore ↵

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  4. Enter the password recorded in Step 93.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass

  5. If you answered yes to the Step 90 prompt for client access during the primary NSP cluster configuration, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster; enter the following:

    ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵

    where

    customKey is the full path of the private server key

    customCert is the full path of the server public certificate

    customCaCert is the full path of the CA public certificate

    Custom certificate and key files are created by performing To generate custom NSP TLS certificates.

    Messages like the following are displayed as the server secret is updated:

    secret/nginx-nb-tls-nsp patched

    The following files may contain sensitive information. They are no longer required by NSP and may be removed.

      customKey

      customCert

      customCaCert


Deploy NSP cluster
 
95 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


96 

If you are upgrading the new standby (former primary) cluster in a DR deployment, go to Step 101.


97 

If you are using your own storage, you must reconfigure the storage classes for the NSP cluster.

Step 79 has examples of storage class configurations; if you are using other types of storage, see the appropriate storage documentation.


98 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass config

./nspdeployerctl config ↵


99 

If you are creating the new standalone NSP cluster, or the new primary NSP (previous standby) cluster in a DR deployment, you must start the cluster in restore mode; enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --restore

./nspdeployerctl install --config --restore ↵


Restore NSP data
 
100 

If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must restore the NSP databases, file service data, and Kubernetes secrets; perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Start NSP
 
101 

If you are creating the new standby cluster in a DR deployment, enter the following on the NSP deployer host:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP starts.
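The --ask-pass decision described in the notes above can be scripted. The helper below is a hypothetical sketch (build_deploy_cmd and the key-file path are illustrative, and nothing is executed); it only assembles the command line shown in this step:

```shell
# Hypothetical helper: append --ask-pass only when no SSH key file exists for
# the cluster members. The resulting command line is printed, not run.
build_deploy_cmd() {
  keyfile=$1
  cmd="./nspdeployerctl"
  [ -f "$keyfile" ] || cmd="$cmd --ask-pass"
  echo "$cmd install --config --deploy"
}

build_deploy_cmd /nonexistent/id_rsa
```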


Monitor NSP initialization
 
102 

Open a console window on the NSP deployer host and enter the following:

export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵


103 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

Note: nfs-storage over linstor is available on clusters of two or more nodes for all profile types.

nfs-storage uses the RWX access mode; local-provisioner uses RWO.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.

  2. Verify that each PVC is bound to a PV and that each PV is created with the expected STORAGECLASS, as shown below.

    kubectl get pvc -A ↵

    NAMESPACE             NAME                                                   STATUS  VOLUME    CAPACITY   ACCESS MODES  STORAGECLASS         AGE

    nsp-psa-privileged    nfs-server-pvc                                         Bound   pvc-ID    47Gi       RWO           piraeus-storage      2d

    nsp-psa-privileged    pvc-nfs-subdir-provisioner                             Bound   pvc-ID    10Mi       RWO                                2d

    nsp-psa-restricted    data-nspos-kafka-broker-0                              Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-kafka-controller-0                          Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-zookeeper-0                                 Bound   pvc-ID    2Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    datadir-nspos-neo4j-core-dc2-250b-0                    Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    nsp-backup-storage                                     Bound   pvc-ID    10G        RWX           nfs-storage          2d

    nsp-psa-restricted    nspos-fluentd-logs-data                                Bound   pvc-ID    50Mi       ROX                                2d 

    nsp-psa-restricted    nspos-fluentd-posfile-data                             Bound   pvc-ID    1Gi        RWO                                2d

    nsp-psa-restricted    nspos-postgresql-data-nspos-postgresql-primary-0       Bound   pvc-ID    15Gi       RWO           local-provisioner    2d 

    nsp-psa-restricted    opensearch-backup-pv                                   Bound   pvc-ID    1Gi        RWX           nfs-storage          2d 

    nsp-psa-restricted    opensearch-cluster-master-opensearch-cluster-master-0  Bound   pvc-ID    50Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    solr-pvc-nspos-solr-statefulset-0                      Bound   pvc-ID    1Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    storage-volume-nspos-prometheus-server-0               Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    kubectl get pv ↵

    NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                          STORAGECLASS            REASON      AGE

    nspos-fluentd-logs-data       50Mi       ROX            Retain           Bound       nsp-psa-restricted/nspos-fluentd-logs-data                                                                         2d

    nspos-fluentd-posfile-data    1Gi        RWO            Retain           Bound       nsp-psa-restricted/nspos-fluentd-posfile-data                                                                      2d

    pv-nfs-subdir-provisioner     10Mi       RWO            Delete           Bound       nsp-psa-privileged/pvc-nfs-subdir-provisioner                                                                      2d

    pvc-0c8d9fdf-6ffc-4208-...    47Gi       RWO            Delete           Bound       nsp-psa-privileged/nfs-server-pvc                                              piraeus-storage                     2d 

    pvc-4ccbe6ef-e38c-4739-...    10G        RWX            Delete           Bound       nsp-psa-restricted/nsp-backup-storage                                          nfs-storage                         2d

    pvc-5b21e2e2-eec5-49a7-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-broker-0                                   local-provisioner                   2d

    pvc-50df27c7-fcf3-462d-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/storage-volume-nspos-prometheus-server-0                    local-provisioner                   2d

    pvc-59c23b56-95fb-4cf6-...    50Gi       RWO            Delete           Bound       nsp-psa-restricted/opensearch-cluster-master-opensearch-cluster-master-0       local-provisioner                   2d

    pvc-96d2ceb0-e490-470d-...    2Gi        RWO            Delete           Bound       nsp-psa-restricted/data-nspos-zookeeper-0                                      local-provisioner                   2d

    pvc-370ee803-cd02-4eac-...    15Gi       RWO            Delete           Bound       nsp-psa-restricted/nspos-postgresql-data-nspos-postgresql-primary-0            local-provisioner                   2d

    pvc-32991a91-1b35-42c9-...    1Gi        RWX            Delete           Bound       nsp-psa-restricted/opensearch-backup-pv                                        nfs-storage                         2d

    pvc-86581df8-026b-4b80-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-controller-0                               local-provisioner                   2d

    pvc-bf1c522b-985c-485c-...    1Gi        RWO            Delete           Bound       nsp-psa-restricted/solr-pvc-nspos-solr-statefulset-0                           local-provisioner                   2d

    pvc-ccee060b-6268-4601-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/datadir-nspos-neo4j-core-dc2-250b-0                         local-provisioner                   2d

  3. Verify that all pods are in the Running state.

  4. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has one baseline pod, two rta-ignite pods, and one instance of each remaining pod.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  5. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

  6. Enter the following to display the status of the NSP cluster members:

    kubectl get nodes ↵

    The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.
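The per-pod check in this step reduces to a single test: no pod outside the Running or Completed states. The sketch below is a hypothetical helper (all_pods_ready is not an NSP tool); it reads "kubectl get pods -A" text on stdin so it can be tried without a live cluster:

```shell
# Hypothetical readiness test over "kubectl get pods -A" output: succeed only
# when every pod's STATUS column is Running or Completed.
all_pods_ready() {
  # Skip the header row; STATUS is the fourth column.
  ! tail -n +2 | awk '{print $4}' | grep -vqE '^(Running|Completed)$'
}

# Canned sample standing in for live kubectl output.
sample='NAMESPACE  NAME   READY  STATUS     RESTARTS  AGE
ns1        pod-a  1/1    Running    0         2d
ns1        pod-b  0/1    Completed  0         2d'

if printf '%s\n' "$sample" | all_pods_ready; then
  echo "all pods ready"
else
  echo "pods still initializing"
fi
```

On a live system the same function could gate a polling loop, for example `until kubectl get pods -A | all_pods_ready; do sleep 60; done`.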


104 

Enter the following on the NSP deployer host:

export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵


105 

Enter the following to ensure that all pods are running:

kubectl get pods -A ↵

The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.

The NSP deployer log file is /var/log/nspdeployerctl.log.


106 

To remove the sensitive NSP security information from the local file system, enter the following:

rm -rf /tmp/appliedConfig ↵

The /tmp/appliedConfig directory is deleted.


Verify upgraded NSP cluster operation
 
107 

Use a browser to open the NSP cluster URL.


108 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


Import NFM-P users and groups
 
109 

If you need to import NFM-P users to the NSP local user database as you transition to OAUTH2 user authentication, perform “How do I import users and groups from NFM-P?” in the NSP System Administrator Guide.


Upgrade MDM adaptors
 
110 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptor artifacts to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources (CRs).

Perform the following steps.

Note: Upgrading the adaptor artifacts to the latest version is mandatory in order for gNMI and accounting telemetry collection to function.

  1. Upgrade the adaptor suites; see “How do I install artifacts that are not supported in the Artifacts view?” in the NSP System Administrator Guide for information.

  2. When the adaptor suites are upgraded successfully, use NSP Artifacts to install the required device telemetry and vendor-agnostic telemetry adaptation bundles that are packaged with the adaptor suites.

  3. View the messages displayed during the installation to verify that the artifact installation is successful.


Upgrade or enable additional components and systems
 
111 

If the NSP deployment includes the VSR-NRC, upgrade the VSR-NRC as described in the VSR-NRC documentation.


112 

If you are including an existing NFM-P system in the deployment, perform one of the following.

  1. Upgrade the NFM-P to the NSP release; see NFM-P system upgrade.

  2. Enable NFM-P and NSP compatibility; perform To enable NSP compatibility with an earlier NFM-P system.

Note: An NFM-P system upgrade procedure includes steps for upgrading the following components in an orderly fashion:

  • NSP auxiliary database


113 

If the NSP system includes the WS-NOC, perform the appropriate procedure in WaveSuite and NSP integration to enable WS-NOC integration with the upgraded NSP system.


Restore classic telemetry collection
 
114 

Telemetry data collection for classically mediated NEs does not automatically resume after an upgrade from NSP Release 23.11 or earlier. Manual action is required to restore the data collection.

If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.

  1. The upgrade changes IPv6 and dual-stack NE IDs. Using the updated NE ID from Device Management, edit your subscriptions and update the NE ID in the Object Filter. See "How do I manage subscriptions?" in the NSP Data Collection and Analysis Guide.

  2. If you have subscriptions with telemetry type telemetry:/base/interfaces/utilization, update the control files:

    1. Disable the subscriptions with telemetry type telemetry:/base/interfaces/utilization.

    2. Obtain the latest nsp-telemetry-cr-va-sros artifact bundle from the Nokia Support portal and install it; see “How do I install an artifact bundle?” in the NSP Network Automation Guide.

    3. Enable the subscriptions.

  3. Add the classically mediated devices to a unified discovery rule that uses GRPC mediation; see “How do I stitch a classic device to a unified discovery rule?” in the NSP Device Management Guide.

    Collection resumes for Classic NEs.

Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.


Synchronize LSP paths
 
115 

You must update the path properties for existing LSP paths.

For each model-driven NE that the NSP manages, issue the following RESTCONF API call to ensure that the tunnel-id and signaling-type LSP values are updated with the correct device mappings.

Note: This step is required for MDM NEs only, and is not required for classically managed NEs.

POST https://address/restconf/operations/nsp-admin-resync:trigger-resync

where address is the advertised address of the NSP cluster

The request body is the following:

{

  "nsp-admin-resync:input": {

    "plugin-id": "mdm",

    "network-element": [

      {

        "ne-id": "NE_IP",

        "sbi-classes": [

          {

            "class-id": "nokia-state:/state/router/mpls/lsp/primary"

          },

          {

            "class-id": "nokia-state:/state/router/mpls/lsp/secondary"

          }

        ]

      }

    ]

  }

}

where NE_IP is the IP address of the NE
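To avoid hand-editing the JSON for each NE, the request body can be generated with jq. The sketch below only builds and inspects the body; the curl call is commented out because it needs a reachable NSP and credentials, and the address and NE ID are illustrative:

```shell
# Hypothetical request-body builder for the trigger-resync call; the NE ID is
# substituted with --arg so the result is always valid JSON.
NE_IP="3.3.3.3"   # illustrative NE ID

body=$(jq -n --arg ne "$NE_IP" '{
  "nsp-admin-resync:input": {
    "plugin-id": "mdm",
    "network-element": [{
      "ne-id": $ne,
      "sbi-classes": [
        {"class-id": "nokia-state:/state/router/mpls/lsp/primary"},
        {"class-id": "nokia-state:/state/router/mpls/lsp/secondary"}
      ]
    }]
  }
}')

# Inspect the substituted NE ID.
echo "$body" | jq -r '."nsp-admin-resync:input"."network-element"[0]."ne-id"'

# curl -k -X POST "https://address/restconf/operations/nsp-admin-resync:trigger-resync" \
#   -H "Content-Type: application/json" -d "$body"
```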


116 

Confirm the completion of the resync operation for the affected MDM NEs by performing the following steps:

  1. Open a shell on the nspos-resync-fw-0 pod by entering the following command:

    kubectl exec -it -n nsp-psa-restricted nspos-resync-fw-0 -- /bin/bash ↵

  2. Run a tail on mdResync.log:

    tail -f /opt/nsp/os/resync-fw/logs/mdResync.log ↵

  3. After the sync of an NE completes, the log contains messages similar to the following:

    done resync neId=3.3.3.3, resync task id=48, isFullResync=false, isScheduleResync=false, device classes=[nokia-state:/state/router/mpls/lsp/primary, nokia-state:/state/router/mpls/lsp/secondary]

  4. Verify that the sync is completed for all required MDM NEs.
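Checking completion for many NEs by eye is error-prone; a grep over the log scales better. The sketch below runs against a throwaway copy of the log text (the sample lines mirror the message format shown above); on a live system you would run the grep against mdResync.log inside the nspos-resync-fw-0 pod:

```shell
# Hypothetical completion scan: one "done resync neId=..." line per NE means
# that NE's sync has finished. The log content and NE IDs here are stand-ins.
log=$(mktemp)
cat > "$log" <<'EOF'
done resync neId=3.3.3.3, resync task id=48, isFullResync=false
done resync neId=4.4.4.4, resync task id=49, isFullResync=false
EOF

for ne in 3.3.3.3 4.4.4.4; do
  if grep -q "done resync neId=$ne," "$log"; then
    echo "$ne resync complete"
  else
    echo "$ne resync pending"
  fi
done
```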


117 

The following API call updates path control UUIDs and properties in existing IETF TE tunnel primary-paths and secondary-paths. This API syncs values from the path control database to the YANG database. The synchronization process occurs in the background when the API call is executed.

https://server_IP/lspcore/api/v1/syncLspPaths

Method: GET

Sample result:

HTTP Status 40X

{
    "response": {
        "data": null,
        "status": 40X,
        "errors": {
            "errorMessage": ""
        },
        "startRow": 0,
        "endRow": 0,
        "totalRows": {}
    }
}

HTTP Status OK

{
    "response": {
        "data": null,
        "status": 200,
        "startRow": 0,
        "endRow": 0,
        "totalRows": {}
    }
}
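The success and failure bodies differ only in the status and errors fields, so a script can branch on response.status. The sketch below parses a canned copy of the success body with jq; the curl line is commented out because it needs a live server, and server_IP is a placeholder:

```shell
# Hypothetical result handling for syncLspPaths: treat status 200 as success,
# otherwise surface the error message. The response text is a canned sample.
# curl -k "https://server_IP/lspcore/api/v1/syncLspPaths"
resp='{"response": {"data": null, "status": 200,
       "startRow": 0, "endRow": 0, "totalRows": {}}}'

status=$(echo "$resp" | jq -r '.response.status')
if [ "$status" = "200" ]; then
  echo "LSP path sync accepted"
else
  echo "sync failed: $(echo "$resp" | jq -r '.response.errors.errorMessage')"
fi
```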


Perform post-upgrade tasks
 
118 

Modify your UAC configuration to maintain user access to the Service Configuration Health dashlet in the Network Map and Health view.

At a minimum, you must open each role object that controls access to the Network Map and Health view, make at least one change to the role (even if only to the Description field), and save the change. This is required to maintain user access to the Service Configuration Health dashlet after the upgrade.

See “How do I configure a role?” in the NSP System Administrator Guide for information on modifying roles.


119 

To remove the sensitive NSP security information from the local file system, enter the following:

rm -rf /tmp/appliedConfigs ↵

The /tmp/appliedConfigs directory is deleted.


120 

Upgrading a Release 23.8 or earlier NSP system that has User Access Control enabled does not preserve the Insights Administrator user permissions, as Data Collection and Analysis replaces Insights Administrator.

For example, a user role in a Release 23.8 NSP system has the Insights Administrator permissions set to Read / Write / Execute. A subsequent upgrade to Release 24.11 removes Insights Administrator, and sets the new Data Collection and Analysis Management permissions on the user role to None. To restore the user access to NSP functions and objects associated with the role, an NSP administrator must apply the Data Collection and Analysis Management Read / Write / Execute permissions to the role.

Restore the user permissions, if required.

  1. Sign in to the NSP as an administrator.

  2. Open Users and Security.

  3. Configure each role that formerly had Insights Administrator permissions using the same Data Collection and Analysis Management permissions.


121 

Use the NSP to monitor device discovery and to check network management functions.


122 

Back up the NSP databases; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.


123 

Close the open console windows.

End of steps