To upgrade a Release 25.8 or earlier NSP cluster
Purpose
CAUTION: Network management outage
The procedure requires a shutdown of the NSP system, which causes a network management outage.
Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.
Perform this procedure to upgrade a standalone or DR NSP system after you have performed "To prepare for an NSP system upgrade from Release 25.8 or earlier".
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
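For example, with hypothetical values, a deployer directory named NSP-CN-DEP-25.8.0-rel.100 has an R.r.p value of 25.8.0 and a version value of 100.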
Steps
Stop and undeploy NSP cluster
1
Log in as the root or NSP admin user on the NSP deployer host.
2
Preserve the existing cluster data; ensure that you have a copy of the following file:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
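A minimal sketch of the copy, assuming a placeholder user and backup_station as the preservation target:
# scp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml user@backup_station:/backup/ ↵
where user and backup_station identify the station that is unaffected by the upgrade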
3
Undeploying an NSP cluster as described in this step permanently removes the cluster data. If you are upgrading a DR NSP system, you must ensure that you have the latest database backup from the primary cluster before you perform this step.
Undeploy the NSP cluster.
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:
nspdeployerctl --ask-pass uninstall --undeploy --clean
Enter the following:
# ./nspdeployerctl uninstall --undeploy --clean ↵
The NSP cluster is undeployed.
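The command is run from the deployer bin directory; a minimal sketch, assuming the bin path used elsewhere in this procedure:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
# ./nspdeployerctl uninstall --undeploy --clean ↵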
Preserve NSP cluster configuration
4
Ensure that the following file is copied to a separate station that is unaffected by the upgrade activity:
/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml
5
Ensure that the following file is copied to a separate station that is unaffected by the upgrade activity:
/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
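A minimal sketch of the copy operations for Steps 4 and 5, assuming a placeholder user, backup_station, and /backup directory:
# scp /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml user@backup_station:/backup/ ↵
# scp /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml user@backup_station:/backup/ ↵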
Create NSP deployer host
6
Log in as the root or NSP admin user on the station that will host the NSP deployer host VM.
7
Open a console window.
8
Enter the following:
# dnf -y install virt-install libguestfs-tools ↵
9
Before you create the new NSP deployer host VM, you must disable the existing VM; the following options are available.
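For example, if the existing VM is managed by libvirt on the same station, one option is to shut the VM down and remove its definition; a minimal sketch, assuming a hypothetical VM name of old_deployer_host:
# virsh shutdown old_deployer_host ↵
# virsh undefine old_deployer_host ↵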
10
Perform one of the following to create the new NSP deployer host VM.
Note: The NSP deployer host VM requires a hostname; you must change the default of 'localhost' to an actual hostname.
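For example, to replace the default hostname on the new VM, assuming a hypothetical hostname of nsp-deployer-01:
# hostnamectl set-hostname nsp-deployer-01 ↵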
Install NSP Kubernetes registry
11
Log in as the root or NSP admin user on the NSP deployer host.
12
Enter the following:
# mkdir /opt/nsp ↵
13
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_K8S_DEPLOYER_R_r.tar.gz
14
Enter the following:
# cd /opt/nsp ↵
15
Enter the following:
# tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The file is expanded, and the following directories are created:
/opt/nsp/nsp-registry-release-ID
/opt/nsp/nsp-k8s-deployer-release-ID
16
After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:
# rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵
The file is deleted.
17
Enter the following:
# cd nsp-registry-release-ID/bin ↵
18
Enter the following:
# ./nspregistryctl install ↵
The following prompt is displayed:
Enter a registry admin password:
19
Create a registry administrator password that meets the password requirements.
20
Enter the password. The following prompt is displayed:
Confirm the registry admin password:
21
Re-enter the password. The registry installation begins, and messages like the following are displayed:
✔ New installation detected.
✔ Initialize system.
date time Copy container images ...
date time Install/update package [container-selinux] ...
✔ Installation of container-selinux has completed.
date time Install/update package [k3s-selinux] ...
✔ Installation of k3s-selinux has completed.
date time Setup required tools ...
✔ Initialization has completed.
date time Install k3s ...
date time Waiting for up to 10 minutes for k3s initialization ...
..............................................
✔ Installation of k3s has completed.
➜ Generate self-signed key and cert.
date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key
date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt
date time Install registry apps ...
date time Waiting for up to 10 minutes for registry services to be ready ...
..........
✔ Registry apps installation is completed.
date time Generate artifacts ...
date time Apply artifacts ...
date time Setup registry.nsp.nokia.local certs ...
date time Setup a default project [nsp] ...
date time Setup a cron to regenerate the k3s certificate [nsp] ...
✔ Post configuration is completed.
✔ Installation has completed.
22
If you want to create a non-root user, see Restricting root-user system access.
Migrate legacy cluster parameters
23
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/tools ↵
24
Copy the k8s-deployer.yml file saved in Step 4 to the current directory.
25
You must merge the current k8s-deployer.yml settings into the new k8s-deployer.yml file. Using a plain-text editor such as vi, open the old k8s-deployer.yml file copied in Step 24 and the following new file:
/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml
26
Apply the settings in the old file to the same parameters in the new file.
27
Close the old k8s-deployer.yml file.
28
Edit the following line in the cluster section of the new file to read:
hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"
29
If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:
sshAccess:
  userName: "user"
  privateKey: "path"
where
user is the designated NSP ansible user
path is the SSH key path for the NSP admin user, for example, /home/NSP admin user/.ssh/id_rsa
30
Configure the following parameters for each NSP cluster VM; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.
- nodeName: node_name
  nodeIp: private_IP_address
  nodeIpv6: private_IPv6_address
  accessIp: public_IP_address
  isIngress: value
where
node_name is the VM name
private_IP_address is the VM IP address on the internal network
private_IPv6_address is the optional VM IPv6 address on the internal network; in a NAT environment, this is the private IPv6 address
public_IP_address is the public VM address; required when the NSP deployer host and cluster nodes have different interfaces for internal and public traffic
value is true or false, and indicates whether the node acts as a load-balancer endpoint
31
In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.
Note: A single-node NSP cluster requires at least the client_IP address.
The addresses are the virtualIP values for NSP client, internal, and mediation access that you specify in the nsp-config.yml file, as described later in this procedure.
loadBalancerExternalIps:
  - client_IP
  - internal_IP
  - mediation_IP
  - trapV4_mediation_IP
  - trapV6_mediation_IP
  - flowV4_mediation_IP
  - flowV6_mediation_IP
32
Configure the following parameter, which specifies whether dual-stack NE management is enabled.
Note: Dual-stack NE management can function only when the network environment is appropriately configured.
enableIpv6Stack: value
where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing
Note: Each cluster VM must have a hosts.nodeIpv6 value defined in k8s-deployer.yml, or a default IPv6 route must be defined on all of the cluster VMs.
33
Configure the following parameter for artifact import:
imageSignatureVerificationFile: "image_path"
where image_path is the absolute path to a file in PEM format containing the certificates that are to be used to verify the signature of images imported into the NSP registry
The certificate is in the nsp-signing-keys.zip bundle that you downloaded in Obtain installation software. To extract the contents, enter the following:
# unzip nsp-signing-keys.zip ↵
The contents are extracted to the current directory and include the Nokia_Root_Certificate.crt file, which is used as the imageSignatureVerificationFile value.
34
Save and close the new k8s-deployer.yml file.
Create NSP cluster VMs
35
For each required NSP cluster VM, perform one of the following to create the VM.
Note: Each NSP cluster VM requires a hostname; you must change the default of 'localhost' to an actual hostname.
36
Perform Step 37 to Step 39 for each NSP cluster VM to configure the required interfaces.
Configure NSP cluster interfaces
37
Enter the following on the NSP deployer host to open a console session on the VM:
# virsh console NSP_cluster_VM ↵
where NSP_cluster_VM is the VM name
You are prompted for credentials.
38
Enter the login credentials. A virtual serial console session opens on the NSP cluster VM.
39
Perform Step 7 to Step 17 in Configure NSP deployer host networking.
Deploy container environment
40
Log in as the root or NSP admin user on the NSP deployer host.
41
Open a console window.
42
For password-free NSP deployer host access to the NSP cluster VMs, you require an SSH key. To generate and distribute the SSH key, perform the following steps.
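A minimal sketch of the key generation and distribution, assuming the default root key path and a placeholder cluster VM address:
# ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa ↵
# ssh-copy-id root@NSP_cluster_VM_address ↵
Repeat the ssh-copy-id command for each NSP cluster VM.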
43
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵
44
Enter the following to create the new hosts.yml file:
# ./nspk8sctl config -c ↵
45
Enter the following to list the node entries in the new hosts.yml file:
# ./nspk8sctl config -l ↵
Output like the following example for a four-node cluster is displayed:
Note: The required ansible_host value depends on whether NAT is used in the cluster.
Note: The ansible_host value must match the access_ip value.
Existing cluster hosts configuration is:
node_1_name:
  ansible_host: 203.0.113.11
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_2_name:
  ansible_host: 203.0.113.12
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_3_name:
  ansible_host: 203.0.113.13
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_4_name:
  ansible_host: 203.0.113.14
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "false"
46
Verify the IP addresses.
47
Enter the following to import the Kubernetes images to the repository:
# ./nspk8sctl import ↵
48
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass install
# ./nspk8sctl install ↵
The NSP Kubernetes environment is deployed.
49
The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.
50
Open a console window on the NSP deployer host and enter the following:
# export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵
51
Enter the following periodically to display the status of the Kubernetes system pods:
Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.
# kubectl get pods -A ↵
The pods are listed.
52
Enter the following periodically to display the status of the NSP cluster nodes:
Note: You must not proceed to the next step until each node STATUS reads Ready.
# kubectl get nodes -o wide ↵
The NSP cluster nodes are listed, as shown in the following three-node cluster example:
NAME    STATUS  ROLES   AGE  VERSION  INTERNAL-IP  EXTERNAL-IP
node1   Ready   master  nd   version  int_IP       ext_IP
node2   Ready   master  nd   version  int_IP       ext_IP
node3   Ready   <none>  nd   version  int_IP       ext_IP
Restore NSP system files
53
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_DEPLOYER_R_r.tar.gz
54
Enter the following on the NSP deployer host:
# cd /opt/nsp ↵
55
Enter the following:
# tar xvf NSP_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The bundle file is expanded, and the following directory of NSP installation files is created:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID
56
Enter the following:
# rm -f NSP_DEPLOYER_R_r.tar.gz ↵
The bundle file is deleted.
57
Restore the required NSP configuration files.
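A minimal sketch of the general pattern, assuming a placeholder user, backup_station, and source path: copy the preserved security and configuration files into a working location such as /tmp/appliedConfigs, which later steps reference:
# scp -r user@backup_station:/backup/appliedConfigs /tmp/ ↵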
58
Perform one of the following.
59
If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:
sshAccess:
  userName: "user"
  privateKey: "path"
where
user is the designated NSP ansible user
path is the SSH key path for the NSP admin user, for example, /home/NSP admin user/.ssh/id_rsa
60
In the Artifact import configuration section, configure the following parameters:
imageSignatureVerificationFile: "/image_path/Nokia_Root_Certificate.crt"
chartSignatureVerificationFile: "/chart_path/nokia_helm_public_keyring.gpg"
where
image_path is the absolute path to a file in PEM format containing the certificates that are to be used to verify the signature of images imported into the NSP registry
chart_path is the absolute path to a GNU Privacy Guard (GPG) file that is to be used to verify the provenance of charts imported into the NSP registry
The certificate is in the nsp-signing-keys.zip bundle that you downloaded in Obtain installation software. To extract the contents, enter the following:
# unzip nsp-signing-keys.zip ↵
The contents are extracted to the current directory and include the Nokia_Root_Certificate.crt and nokia_helm_public_keyring.gpg files, which are used as the imageSignatureVerificationFile and chartSignatureVerificationFile values.
61
Save and close the new nsp-deployer.yml file.
62
Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:
Note: The import operation may take 20 minutes or longer.
# /opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl import ↵
Upgrade auxiliary database
63
If the NSP system includes NFM-P and an auxiliary database, the auxiliary database upgrade is performed as part of the NFM-P upgrade; go to Step 68. If the system has an auxiliary database and no NFM-P, the auxiliary database upgrade is performed as part of the NSP upgrade.
64
On a standalone NSP cluster where NFM-P is not deployed, perform Step 66 or Step 67, depending on the auxiliary database deployment.
65
On a DR NSP cluster where NFM-P is not deployed, perform Step 66 or Step 67 after the primary NSP cluster is undeployed, depending on the auxiliary database deployment.
66
If a standalone auxiliary database is installed, perform "To upgrade a standalone auxiliary database" to upgrade the auxiliary database.
67
If a georedundant auxiliary database is installed, perform the following steps.
Configure NSP software
68
You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file. Using a plain-text editor such as vi, open the old nsp-config.yml file preserved in Step 2 and the following new file:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml
Note: See nsp-config.yml file format for configuration information.
69
Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file, if applicable. Some parameters from the previous release are not present in the new nsp-config.yml file and are not to be merged.
Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the peer cluster and have the same format; if one value is a hostname, the other must also be a hostname.
Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.
Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.
Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.
70
Configure the following parameter in the platform section as shown below:
Note: You must preserve the lead spacing of the line.
clusterHost: "cluster_host_address"
where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations
71
You must apply the address values from the former configuration file to the new parameters. Configure the following parameters in the platform section, ingressApplications subsection as shown below. Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 31.
Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.
Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.
Note: The trapForwarder addresses that you specify must differ from the client_IP value, even in a single-interface deployment.
ingressApplications:
  ingressController:
    clientAddresses:
      virtualIp: "client_IP"
      advertised: "client_public_address"
    internalAddresses:
      virtualIp: "internal_IP"
      advertised: "internal_public_address"
    mediationAddresses:
      virtualIp: "mediation_IP"
      advertised: "mediation_public_address"
  trapForwarder:
    mediationAddresses:
      virtualIpV4: "trapV4_mediation_IP"
      advertisedV4: "trapV4_mediation_public_address"
      virtualIpV6: "trapV6_mediation_IP"
      advertisedV6: "trapV6_mediation_public_address"
where
client_IP is the address for external client access
internal_IP is the address for internal communication
mediation_IP is the address for network mediation
trapV4_mediation_IP is the address for IPv4 network mediation
trapV6_mediation_IP is the address for IPv6 network mediation
each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
72
If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below. Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 31.
flowForwarder:
  mediationAddresses:
    virtualIpV4: "flowV4_mediation_IP"
    advertisedV4: "flowV4_mediation_public_address"
    virtualIpV6: "flowV6_mediation_IP"
    advertisedV6: "flowV6_mediation_public_address"
where
flowV4_mediation_IP is the mediation address for IPv4 flow collection
flowV6_mediation_IP is the mediation address for IPv6 flow collection
each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment
73
Configure the type parameter in the deployment section as shown below:
deployment:
  type: "deployment_type"
where deployment_type is one of the parameter options listed in the section
74
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.
75
If required, configure NSP collection of accounting statistics. Perform the following steps:
CAUTION: If the collectFromClassicNes flag is true, then to prevent duplicate file collection you must disable file rollover traps on the NE and disable the polling policy for tmnxLogFileIdEntry from NFM-P. See the NSP Classic Management User Guide and the NE CLI documentation. After the rollover trap is disabled on the NE, the NFM-P and all other third-party systems that depend on the trap to trigger file collection no longer collect accounting and event files.
76
If all of the following are true, configure the following parameters in the integrations section:
nfmpDB:
  primaryIp: ""
  standbyIp: ""
77
If both of the following are true, configure the following parameters in the integrations section:
auxServer:
  primaryIpList: ""
  standbyIpList: ""
78
If required, configure the user authentication parameters in the sso section; see NSP SSO configuration parameters for configuration information.
79
If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that is indicated in the nsp-config.yml file.
80
Save and close the new nsp-config.yml file.
81
Close the previous nsp-config.yml file.
82
If you are upgrading the new primary (former standby) cluster in a DR deployment, stop here and return to Pathway for DR NSP system upgrade from Release 25.8 or earlier.
Restore Kubernetes secrets
83
If you are configuring a standalone NSP cluster, or the new primary cluster in a DR deployment, enter the following; otherwise, go to Step 94:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
84
If you are upgrading from Release 24.8 or later, obtain and restore the secrets backup file.
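A minimal sketch, assuming an nspdeployerctl secret restore subcommand symmetric to the secrets backup in Step 93; verify the exact syntax for your release before use:
# ./nspdeployerctl secret restore ↵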
85
If you are upgrading from Release 24.8 or later, perform the following steps.
Note: The Kubernetes secrets from the previous release already exist; you are subsequently prompted only for new or optional secrets that were not created in the previous release. An example of an optional secret is custom certificates.
86
If you are upgrading from Release 24.4 or earlier, enter the following:
Note: To install the Kubernetes secrets, you require the backed-up TLS certificates extracted in Step 57.
# ./nspdeployerctl secret install ↵
The following prompt is displayed:
Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]
87
Enter the following:
Note: To install the Kubernetes secrets, you require the backed-up TLS artifacts extracted in Step 57.
# ./nspdeployerctl secret install ↵
The following prompt is displayed:
Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]
88
Provide the internal CA artifacts from the appliedConfigs location from Step 57.
The following prompt is displayed:
Would you like to use your own CA key pair for the NSP External Issuer? [yes,no]
89
Provide your own certificate to secure the external network. The following prompt is displayed:
Would you like to provide a custom private key and certificate for use by NSP endpoints when securing TLS connections over the client network? [yes,no]
90
If the original installation specified custom certificates in the tls section of nsp-config.yml, enter yes ↵.
91
If the deployment includes MDM and mTLS is enabled in nsp-config.yml, the following prompt is displayed:
Would you like to provide mTLS certificates for the NSP mediation interface for two-way TLS authentication? [yes,no]
Perform one of the following.
92
If the NSP deployment is in Secure Boot mode, perform the following:
93
Back up the Kubernetes secrets.
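A minimal sketch, assuming an nspdeployerctl secret backup subcommand; verify the exact syntax for your release before use:
# ./nspdeployerctl secret backup ↵
Store the resulting backup file on a station that is unaffected by the upgrade.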
Restore standby Kubernetes secrets
94
If you are configuring the new standby (former primary) cluster in a DR deployment, obtain the secrets backup file from the NSP cluster in the new primary data center.
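A minimal sketch of the transfer, assuming a placeholder user, host, and file name:
# scp user@primary_deployer_host:/path/to/secrets_backup.tar.gz /tmp/ ↵
where user and primary_deployer_host identify the NSP deployer host in the new primary data center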
Deploy NSP cluster
95
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
96
If you are upgrading the new standby (former primary) cluster in a DR deployment, go to Step 101.
97
If you are using your own storage, you must reconfigure the storage classes for the NSP cluster. Step 79 has examples of storage class configurations; if you are using other types of storage, see the appropriate storage documentation.
98
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass config
# ./nspdeployerctl config ↵
99
If you are creating the new standalone NSP cluster, or the new primary NSP (previous standby) cluster in a DR deployment, you must start the cluster in restore mode; enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --restore
# ./nspdeployerctl install --config --restore ↵
Restore NSP data
100
If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must restore the NSP databases, file service data, and Kubernetes secrets; perform "How do I restore the NSP cluster databases?" in the NSP System Administrator Guide.
Start NSP
101
If you are creating the new standby cluster in a DR deployment, enter the following on the NSP deployer host:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The NSP starts.
Monitor NSP initialization
102
Open a console window on the NSP deployer host and enter the following:
# export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵
103
Monitor and validate the NSP cluster initialization.
Note: You must not proceed to the next step until each NSP pod is operational.
Note: nfs-storage over linstor is available on two or more nodes for all profile types; nfs-storage uses RWX for access, and local-provisioner uses RWO for access.
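One convenient way to watch for pods that are not yet operational is to filter out the healthy ones; a minimal sketch using standard kubectl and grep:
# kubectl get pods -A | grep -v -e Running -e Completed ↵
Apart from the header line, only pods that are still initializing or in an error state are listed.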
104
Enter the following on the NSP deployer host:
# export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵
105
Enter the following to ensure that all pods are running:
# kubectl get pods -A ↵
The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.
The NSP deployer log file is /var/log/nspdeployerctl.log.
106
To remove the sensitive NSP security information from the local file system, enter the following:
# rm -rf /tmp/appliedConfigs ↵
The /tmp/appliedConfigs directory is deleted.
Verify upgraded NSP cluster operation
107
Use a browser to open the NSP cluster URL.
108
Verify the following.
Note: If the UI fails to open, perform "How do I remove the stale NSP allowlist entries?" in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.
Import NFM-P users and groups
109
If you need to import NFM-P users to the NSP local user database as you transition to OAUTH2 user authentication, perform "How do I import users and groups from NFM-P?" in the NSP System Administrator Guide.
Upgrade MDM adaptors
110
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptor artifacts to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources (CRs). Perform the following steps.
Note: Upgrading the adaptor artifacts to the latest version is mandatory for gNMI and accounting telemetry collection to function.
Upgrade or enable additional components and systems
111
If the NSP deployment includes the VSR-NRC, upgrade the VSR-NRC as described in the VSR-NRC documentation.
112
If you are including an existing NFM-P system in the deployment, perform one of the following.
Note: An NFM-P system upgrade procedure includes steps for upgrading the following components in an orderly fashion:
113
If the NSP system includes the WS-NOC, perform the appropriate procedure in WaveSuite and NSP integration to enable WS-NOC integration with the upgraded NSP system.
Restore classic telemetry collection
114
Telemetry data collection for classically mediated NEs does not automatically resume after an upgrade from NSP Release 23.11 or earlier; manual action is required to restore the collection. If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.
Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.
Synchronize LSP paths
115
You must update the path properties for existing LSP paths. For each model-driven NE that the NSP manages, issue the following RESTCONF API call to ensure that the tunnel-id and signaling-type LSP values are updated with the correct device mappings.
Note: This step is required for MDM NEs only, and is not required for classically managed NEs.
POST https://address/restconf/operations/nsp-admin-resync:trigger-resync
where address is the advertised address of the NSP cluster
The request body is the following:
{
  "nsp-admin-resync:input": {
    "plugin-id": "mdm",
    "network-element": [
      {
        "ne-id": NE_IP,
        "sbi-classes": [
          { "class-id": "nokia-state:/state/router/mpls/lsp/primary" },
          { "class-id": "nokia-state:/state/router/mpls/lsp/secondary" }
        ]
      }
    ]
  }
}
where NE_IP is the IP address of the NE
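A minimal sketch of the call using curl, assuming a placeholder bearer token and that the request body shown above is saved as resync-body.json:
# curl -k -X POST https://address/restconf/operations/nsp-admin-resync:trigger-resync -H "Content-Type: application/json" -H "Authorization: Bearer token" -d @resync-body.json ↵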
116
Confirm the completion of the resync operation for the affected MDM NEs by performing the following steps:
117
The following API call updates path control UUIDs and properties in existing IETF TE tunnel primary-paths and secondary-paths. The call syncs values from the path control database to the YANG database; the synchronization occurs in the background after the call is executed.
GET https://server_IP/lspcore/api/v1/syncLspPaths
Sample result, HTTP Status OK/40X:
{
  response: {
    data: null,
    status: 40x,
    errors: {
      errorMessage: ""
    },
    startRow: 0,
    endRow: 0,
    totalRows: {}
  }
}
Sample result, HTTP Status OK:
{
  response: {
    data: null,
    status: 200,
    startRow: 0,
    endRow: 0,
    totalRows: {}
  }
}
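A minimal sketch of the call using curl, assuming a placeholder bearer token:
# curl -k -H "Authorization: Bearer token" https://server_IP/lspcore/api/v1/syncLspPaths ↵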
Perform post-upgrade tasks
118
Modify your UAC configuration to maintain user access to the Service Configuration Health dashlet in the Network Map and Health view. At a minimum, you must open each role object that controls access to the Network Map and Health view, make at least one change to the role (even if only to the Description field), and save the change; otherwise, users lose access to the dashlet after the upgrade. See "How do I configure a role?" in the NSP System Administrator Guide for information on modifying roles.
119
To remove the sensitive NSP security information from the local file system, enter the following:
# rm -rf /tmp/appliedConfigs ↵
The /tmp/appliedConfigs directory is deleted.
120
Upgrading a Release 23.8 or earlier NSP system that has User Access Control enabled does not preserve the Insights Administrator user permissions, because Data Collection and Analysis replaces Insights Administrator. For example, a user role in a Release 23.8 NSP system has the Insights Administrator permissions set to Read / Write / Execute; a subsequent upgrade to Release 24.11 removes Insights Administrator and sets the new Data Collection and Analysis Management permissions on the user role to None. To restore the user access to NSP functions and objects associated with the role, an NSP administrator must apply the Data Collection and Analysis Management Read / Write / Execute permissions to the role. Restore the user permissions, if required.
121
Use the NSP to monitor device discovery and to check network management functions.
122
Back up the NSP databases; perform "How do I back up the NSP cluster databases?" in the NSP System Administrator Guide.
123
Close the open console windows.
End of steps