To upgrade a Release 22.6 or earlier NSP cluster
Purpose
CAUTION: Network management outage
The procedure requires a shutdown of the NSP system, which causes a network management outage.
Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.
Perform this procedure to upgrade a standalone or DR NSP system at Release 22.3 or 22.6 after you have performed To prepare for an NSP system upgrade from Release 22.6 or earlier.
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
Note: If you are upgrading from NSP Release 22.3, LLDP link discovery is performed by Original Service Fulfillment by default, rather than NSP Network Infrastructure Management. You can change this behavior only when the lldpv2 adaptors are available or deployed for all managed devices; otherwise, a loss of LLDP data occurs. See Configuring LLDP link discovery for more information.
Steps
Stop and undeploy NSP cluster
1
Log in as the root user on the appropriate station, based on the installed NSP release:
2
Perform the following steps to preserve the existing cluster data.
3
Undeploying an NSP cluster as described in this step permanently removes the cluster data. If you are upgrading a DR NSP system, you must ensure that you have the latest database backup from the primary cluster before you perform this step.
Undeploy the NSP cluster.
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in a command, as shown in the following examples, and are subsequently prompted for the common root password of each cluster member:
nsp-config.bash --ask-pass --undeploy --clean
nspdeployerctl uninstall --undeploy --clean
The NSP cluster is undeployed.
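For reference only, a minimal sketch of the undeploy sequence, assuming a Release 22.6 system where nspdeployerctl is installed under /opt/nsp/NSP-CN-DEP-release-ID (this directory is referenced elsewhere in this procedure; the exact path and release-ID on your system may differ):

    # change to the deployer bin directory (path is illustrative; substitute your release-ID)
    cd /opt/nsp/NSP-CN-DEP-release-ID/bin
    # undeploy the cluster and remove the deployed configuration
    ./nspdeployerctl uninstall --undeploy --clean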
4
Before you create the new NSP cluster VMs, you must disable each existing VM in the NSP cluster. The following options are available; a hedged example follows this step.
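As one illustration only, in a KVM environment the existing VMs can be shut down and prevented from restarting with standard virsh commands; the VM name below is an example, and this is only one of the available options:

    # gracefully stop an existing NSP cluster VM (VM name is an example)
    virsh shutdown nsp-node1
    # prevent the VM from starting automatically with the hypervisor
    virsh autostart --disable nsp-node1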
Uninstall IPRC, CDRC
5
If you are upgrading from Release 22.3 and the NSP deployment includes IP resource control or cross-domain resource control, uninstall each IPRC and CDRC server.
Preserve NSP cluster configuration
6
Log in as the root user on the existing NSP deployer host.
7
Open a console window.
8
Perform one of the following.
9
Copy the following file to a separate station that is unaffected by the upgrade activity:
/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
Create NSP deployer host
10
Log in as the root user on the station that will host the NSP deployer host VM.
11
Open a console window.
12
Enter the following:
# dnf -y install virt-install libguestfs-tools ↵
13
Before you create the new NSP deployer host VM, you must disable the existing VM; the following options are available.
14
Perform one of the following to create the new NSP deployer host VM; a hedged example follows this step.
Note: The NSP deployer host VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.
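As an illustration only, a deployer host VM in a KVM environment could be created from a prepared disk image with virt-install; the image path, resource sizing, OS variant, and network name below are assumptions, not values from this procedure:

    # create the deployer host VM from a prepared disk image (all values illustrative)
    virt-install --name nsp-deployer \
      --memory 16384 --vcpus 4 \
      --disk path=/var/lib/libvirt/images/nsp-deployer.qcow2 \
      --import --os-variant rhel8.0 \
      --network bridge=br0 --noautoconsole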
Configure NSP deployer host network interface
15
Enter the following to open a console session on the NSP deployer host VM:
# virsh console deployer_host ↵
You are prompted for credentials.
16
Enter the following credentials:
A virtual serial console session opens.
17
Enter the following:
# ip a ↵
The available network interfaces are listed; information like the following is displayed for each:
if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether MAC_address
    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name
       valid_lft forever preferred_lft forever
    inet6 IPv6_address/v6_netmask scope link
       valid_lft forever preferred_lft forever
18
Record the if_name and MAC_address values of the interface that you intend to use.
19
Enter the following:
# nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵
where
con_name is a connection name that you assign to the interface for ease of identification
if_name is the interface name recorded in Step 18
MAC_address is the MAC address recorded in Step 18
20
Enter the following:
# nmcli con mod con_name ipv4.addresses IP_address/netmask ↵
where
con_name is the connection name assigned in Step 19
IP_address is the IP address to assign to the interface
netmask is the subnet mask to assign
21
Enter the following:
# nmcli con mod con_name ipv4.method static ↵
22
Enter the following:
# nmcli con mod con_name ipv4.gateway gateway_IP ↵
where
gateway_IP is the gateway IP address to assign
23
Enter the following:
Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.
Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.
Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.
# nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵
where
nameserver_1 to nameserver_n are the available DNS name servers
24
To optionally specify one or more DNS search domains, enter the following:
# nmcli con mod con_name ipv4.dns-search search_domains ↵
where
search_domains is a comma-separated list of DNS search domains
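For illustration, the sequence in Steps 19 to 24 with example values filled in; the connection name, interface name, MAC address, IP addresses, and DNS servers below are hypothetical:

    # add a connection profile bound to the chosen interface by MAC address
    nmcli con add con-name DeployerInterface ifname eth0 type ethernet mac 52:54:00:aa:bb:cc
    # assign a static address, gateway, and DNS configuration
    nmcli con mod DeployerInterface ipv4.addresses 192.168.96.10/24
    nmcli con mod DeployerInterface ipv4.method static
    nmcli con mod DeployerInterface ipv4.gateway 192.168.96.1
    nmcli con mod DeployerInterface ipv4.dns 192.168.96.2,192.168.96.3
    nmcli con mod DeployerInterface ipv4.dns-search example.internal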
25
Enter the following to set the hostname:
# hostnamectl set-hostname hostname ↵
where
hostname is the hostname to assign
26
Enter the following to reboot the deployer host VM:
# systemctl reboot ↵
27
Close the console session by pressing Ctrl+] (right bracket).
Install NSP Kubernetes registry
28
Log in as the root or NSP admin user on the NSP deployer host.
29
Enter the following:
# mkdir /opt/nsp ↵
30
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_K8S_DEPLOYER_R_r.tar.gz
31
Enter the following:
# cd /opt/nsp ↵
32
Enter the following:
# tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The file is expanded, and the following directories are created:
33
After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:
# rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵
The file is deleted.
34
Enter the following:
# cd nsp-registry-release-ID/bin ↵
35
Enter the following:
# ./nspregistryctl install ↵
The following prompt is displayed:
Enter a registry admin password:
36
Create a registry administrator password; the password must:
37
Enter the password. The following prompt is displayed:
Confirm the registry admin password:
38
Re-enter the password. The registry installation begins, and messages like the following are displayed.
✔ New installation detected.
✔ Initialize system.
date time Copy container images ...
date time Install/update package [container-selinux] ...
✔ Installation of container-selinux has completed.
date time Install/update package [k3s-selinux] ...
✔ Installation of k3s-selinux has completed.
date time Setup required tools ...
✔ Initialization has completed.
date time Install k3s ...
date time Waiting for up to 10 minutes for k3s initialization ...
..............................................
✔ Installation of k3s has completed.
➜ Generate self-signed key and cert.
date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key
date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt
date time Install registry apps ...
date time Waiting for up to 10 minutes for registry services to be ready ...
..........
✔ Registry apps installation is completed.
date time Generate artifacts ...
date time Apply artifacts ...
date time Setup registry.nsp.nokia.local certs ...
date time Setup a default project [nsp] ...
date time Setup a cron to regenerate the k3s certificate [nsp] ...
✔ Post configuration is completed.
✔ Installation has completed.
Migrate legacy cluster parameters
39
If you are upgrading from Release 22.6, go to Step 44.
40
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/tools ↵
41
Copy the hosts.yml file saved in Step 8 to the current directory.
42
Enter the following:
# ./extracthosts hosts.yml ↵
The current NSP cluster node entries are displayed, as shown in the following example for a three-node cluster:
hosts:
  - nodeName: node1
    nodeIp: 192.168.96.11
    accessIp: 203.0.113.11
  - nodeName: node2
    nodeIp: 192.168.96.12
    accessIp: 203.0.113.12
  - nodeName: node3
    nodeIp: 192.168.96.13
    accessIp: 203.0.113.13
43
Review the output to ensure that each node entry is correct.
44
Open the following new deployer configuration file using a plain-text editor such as vi:
/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml
45
Perform one of the following to configure the cluster node entries for the deployment.
46
Edit the following line in the cluster section of the new file to read:
hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"
47
If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:
sshAccess:
  userName: "user"
  privateKey: "path"
where
user is the designated NSP ansible user
path is the SSH key path, for example, /home/user/.ssh/id_rsa
48
In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.
Note: A single-node NSP cluster requires at least the client_IP address.
The addresses are the virtualIP values for NSP client, internal, and mediation access that you intend to specify in Step 102 and Step 103 in the nsp-config.yml file.
loadBalancerExternalIps:
  - client_IP
  - internal_IP
  - trapV4_mediation_IP
  - trapV6_mediation_IP
  - flowV4_mediation_IP
  - flowV6_mediation_IP
49
Configure the following parameter, which specifies whether dual-stack NE management is enabled:
Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:
enable_dual_stack_networks: value
where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing
50
If the existing deployment includes an RPM-based IPRC, add a node entry for the IPRC after the existing node entries.
51
If the deployment includes dedicated MDM cluster VMs, as identified in Step 24 of To prepare for an NSP system upgrade from Release 22.6 or earlier, add an entry for each identified VM.
Note: If the deployment includes the IPRC, you must add the MDM node entries after the IPRC entry.
52
Save and close the new k8s-deployer.yml file.
Create NSP cluster VMs
53
For each required NSP cluster VM, perform one of the following to create the VM.
Note: Each NSP cluster VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.
54
Perform Step 55 to Step 72 for each NSP cluster VM to configure the required interfaces.
Configure NSP cluster interfaces
55
Enter the following on the NSP deployer host to open a console session on the VM:
# virsh console NSP_cluster_VM ↵
where NSP_cluster_VM is the VM name
You are prompted for credentials.
56
Enter the following credentials:
A virtual serial console session opens on the NSP cluster VM.
57
Enter the following:
# ip a ↵
The available network interfaces are listed; information like the following is displayed for each:
if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether MAC_address
    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name
       valid_lft forever preferred_lft forever
    inet6 IPv6_address/v6_netmask scope link
       valid_lft forever preferred_lft forever
58
Record the if_name and MAC_address values of the interfaces that you intend to use.
59
Enter the following for each interface:
# nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵
where
con_name is a connection name that you assign to the interface for ease of identification; for example, ClientInterface or MediationInterface
if_name is the interface name recorded in Step 58
MAC_address is the MAC address recorded in Step 58
60
Enter the following for each interface:
# nmcli con mod con_name ipv4.addresses IP_address/netmask ↵
where
con_name is the connection name assigned in Step 59
IP_address is the IP address to assign to the interface
netmask is the subnet mask to assign
61
Enter the following for each interface:
# nmcli con mod con_name ipv4.method static ↵
62
Enter the following for each interface:
# nmcli con mod con_name ipv4.gateway gateway_IP ↵
where
gateway_IP is the gateway IP address to assign
Note: This command sets the default gateway on the primary interface and the gateways for all secondary interfaces.
63
Enter the following for all secondary interfaces:
# nmcli con mod con_name ipv4.never-default yes ↵
64
Enter the following for each interface:
Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.
Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.
Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.
# nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵
where
nameserver_1 to nameserver_n are the available DNS name servers
65
To optionally specify one or more DNS search domains, enter the following for each interface:
# nmcli con mod con_name ipv4.dns-search search_domains ↵
where
search_domains is a comma-separated list of DNS search domains
66
Open the following file with a plain-text editor such as vi:
/etc/sysctl.conf
67
Locate the following line:
vm.max_map_count=value
68
Edit the line to read as follows; if the line is not present, add the line to the end of the file:
vm.max_map_count=262144
69
Save and close the file.
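As an optional check that is not part of the documented procedure, you can confirm the kernel setting after the reboot in Step 71:

    # display the current value; it should report 262144 after the reboot
    sysctl vm.max_map_count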
70
If you are installing in a KVM environment, enter the following:
# mkdir /opt/nsp ↵
71
Enter the following to reboot the NSP cluster VM:
# systemctl reboot ↵
72
Close the console session by pressing Ctrl+] (right bracket).
Deploy container environment
73
Log in as the root or NSP admin user on the NSP deployer host.
74
Open a console window.
75
For password-free NSP deployer host access to the NSP cluster VMs, you require an SSH key. To generate and distribute the SSH key, perform the following steps; a hedged example follows this step.
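For illustration only, one common way to generate a key pair and distribute the public key to each cluster VM; the key type, file location, and node addresses are assumptions:

    # generate an RSA key pair with no passphrase (file location is the default)
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # copy the public key to each NSP cluster VM (addresses are examples)
    ssh-copy-id root@192.168.96.11
    ssh-copy-id root@192.168.96.12
    ssh-copy-id root@192.168.96.13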
76
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵
77
Enter the following to create the new hosts.yml file:
# ./nspk8sctl config -c ↵
78
Enter the following to list the node entries in the new hosts.yml file:
# ./nspk8sctl config -l ↵
Output like the following example for a four-node cluster is displayed:
Note: If NAT is used in the cluster:
Otherwise:
Note: The ansible_host value must match the access_ip value.
Existing cluster hosts configuration is:
node_1_name:
  ansible_host: 203.0.113.11
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_2_name:
  ansible_host: 203.0.113.12
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_3_name:
  ansible_host: 203.0.113.13
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "true"
node_4_name:
  ansible_host: 203.0.113.14
  ip: private_IP
  access_ip: public_IP
  node_labels:
    isIngress: "false"
79
Verify the IP addresses.
80
Enter the following to import the Kubernetes images to the repository:
# ./nspk8sctl import ↵
81
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass install
# ./nspk8sctl install ↵
The NSP Kubernetes environment is deployed.
82
The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.
83
Open a console window on the NSP cluster host.
84
Enter the following periodically to display the status of the Kubernetes system pods:
Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.
# kubectl get pods -A ↵
The pods are listed.
85
Enter the following periodically to display the status of the NSP cluster nodes:
Note: You must not proceed to the next step until each node STATUS reads Ready.
# kubectl get nodes -o wide ↵
The NSP cluster nodes are listed, as shown in the following three-node cluster example:
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
node1   Ready    master   nd    version   int_IP        ext_IP
node2   Ready    master   nd    version   int_IP        ext_IP
node3   Ready    <none>   nd    version   int_IP        ext_IP
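As a convenience only, and not part of the documented procedure, the following filter can help spot pods that have not yet reached the Running or Succeeded (Completed) phase while you monitor Steps 84 and 85:

    # list pods that are not yet Running or Succeeded (Completed)
    kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
    # list nodes and repeat until every STATUS column reads Ready
    kubectl get nodes -o wide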
Restore NSP system files
86
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_DEPLOYER_R_r.tar.gz
87
Enter the following on the NSP deployer host:
# cd /opt/nsp ↵
88
Enter the following:
# tar xvf NSP_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The bundle file is expanded, and the following directory of NSP installation files is created:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID
89
Enter the following:
# rm -f NSP_DEPLOYER_R_r.tar.gz ↵
The bundle file is deleted.
90
Restore the required NSP configuration files; a hedged example follows this step.
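For illustration, assuming the configuration files preserved earlier in this procedure were copied to a separate station, they could be retrieved to the new deployer host as shown below; the station name, backup path, and destination files are assumptions:

    # retrieve the preserved files so they can be merged in Steps 91 and 99
    # (backup_station and the /backup/nsp path are illustrative)
    scp backup_station:/backup/nsp/nsp-deployer.yml /tmp/previous-nsp-deployer.yml
    scp backup_station:/backup/nsp/nsp-config.yml /tmp/previous-nsp-config.yml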
Label NSP cluster nodes
91
Merge the current nsp-deployer.yml settings into the new nsp-deployer.yml file.
92
If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:
sshAccess:
  userName: "user"
  privateKey: "path"
where
user is the designated NSP ansible user
path is the SSH key path, for example, /home/user/.ssh/id_rsa
93
Save and close the new nsp-deployer.yml file.
94
Enter the following to apply the node labels to the NSP cluster:
# ./nspdeployerctl config ↵
95
Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:
Note: The import operation may take 20 minutes or longer.
# ./nspdeployerctl import ↵
96
Open a console window on the NSP cluster host.
97
Enter the following to display the node labels:
# kubectl get nodes --show-labels ↵
Cluster node information is displayed.
98
View the information to ensure that all NSP labels are added to the cluster VMs.
Configure NSP software
99
You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file. Open the following files using a plain-text editor such as vi:
Note: See nsp-config.yml file format for configuration information.
100
Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file, if applicable. The following parameters are not present in the new nsp-config.yml file and are not to be merged:
Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the peer cluster and have the same format; if one value is a hostname, the other must also be a hostname.
Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.
Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.
Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.
101
Configure the following parameter in the platform section as shown below:
Note: You must preserve the lead spacing of the line.
clusterHost: "cluster_host_address"
where
cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations
102
You must apply the address values from the former configuration file to the new parameters. Configure the following parameters in the platform section, ingressApplications subsection as shown below. Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 48.
Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.
Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.
Note: The trapForwarder addresses that you specify must differ from the client_IP value, even in a single-interface deployment.
ingressApplications:
  ingressController:
    clientAddresses:
      virtualIp: "client_IP"
      advertised: "client_public_address"
    internalAddresses:
      virtualIp: "internal_IP"
      advertised: "internal_public_address"
  trapForwarder:
    mediationAddresses:
      virtualIpV4: "trapV4_mediation_IP"
      advertisedV4: "trapV4_mediation_public_address"
      virtualIpV6: "trapV6_mediation_IP"
      advertisedV6: "trapV6_mediation_public_address"
where
client_IP is the address for external client access
internal_IP is the address for internal communication
trapV4_mediation_IP is the address for IPv4 network mediation
trapV6_mediation_IP is the address for IPv6 network mediation
each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
103
If flow collection is enabled, configure the following parameters in the platform section, ingressApplications subsection as shown below. Each address is an address from the ingressApplications section of the k8s-deployer.yml file described in Step 48.
flowForwarder:
  mediationAddresses:
    virtualIpV4: "flowV4_mediation_IP"
    advertisedV4: "flowV4_mediation_public_address"
    virtualIpV6: "flowV6_mediation_IP"
    advertisedV6: "flowV6_mediation_public_address"
where
flowV4_mediation_IP is the address for IPv4 flow collection
flowV6_mediation_IP is the address for IPv6 flow collection
each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment
104
Configure the type parameter in the deployment section as shown below:
deployment:
  type: "deployment_type"
where
deployment_type is one of the parameter options listed in the section
105
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.
106
If required, configure NSP collection of accounting statistics. Perform the following steps:
CAUTION: If the collectFromClassicNes flag is true, then to prevent duplicate file collection you must disable file rollover traps on the NE and disable the polling policy for tmnxLogFileIdEntry from the NFM-P. See the NSP Classic Management User Guide and the NE CLI documentation. After the rollover trap is disabled on the NE, the NFM-P and all other third-party systems that depend on the trap to trigger file collection will no longer collect accounting and event files.
107
If all of the following are true, configure the following parameters in the integrations section:
nfmpDB:
  primaryIp: ""
  standbyIp: ""
108
If both of the following are true, configure the following parameters in the integrations section:
auxServer:
  primaryIpList: ""
  standbyIpList: ""
109
If the NSP system includes one or more Release 22 analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.
Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true, as shown below:
analyticsServer:
  legacyPortEnabled: true
110
If required, configure the user authentication parameters in the sso section, as shown below; see NSP SSO configuration parameters for configuration information.
111
If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that is indicated in the nsp-config.yml file.
112
Save and close the new nsp-config.yml file.
113
Close the previous nsp-config.yml file.
114
If you are upgrading the new primary (former standby) cluster in a DR deployment, stop here and return to Workflow for DR NSP system upgrade from Release 22.6 or earlier.
Restore dedicated MDM node labels
115
If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 130.
116
Log in as the root or NSP admin user on the NSP cluster host.
117
Open a console window.
118
Perform the following steps for each additional MDM node.
119
Enter the following:
# kubectl get nodes -o wide ↵
A list of nodes like the following is displayed:
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
node1   Ready    master   nd    version   int_IP        ext_IP
node2   Ready    master   nd    version   int_IP        ext_IP
node3   Ready    <none>   nd    version   int_IP        ext_IP
120
Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.
121
For each node, enter the following:
# kubectl label node node mdm=true ↵
where
node is the recorded NAME value of the MDM node
Install Kubernetes secrets
122
If you are configuring a standalone NSP cluster, or the new primary cluster in a DR deployment, enter the following; otherwise, go to Step 129:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
123
Enter the following:
Note: To install the Kubernetes secrets, you require the backed-up TLS artifacts extracted in Step 90.
# ./nspdeployerctl secret install ↵
The following prompt is displayed:
Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]
124
Provide the internal CA artifacts from the appliedConfigs location from Step 90.
The following prompt is displayed:
Would you like to use your own CA key pair for the NSP External Issuer? [yes,no]
125
Provide your own certificate to secure the external network.
The following prompt is displayed:
Would you like to provide a custom private key and certificate for use by NSP endpoints when securing TLS connections over the client network? [yes,no]
126
If the original installation specified custom certificates in the tls section of nsp-config.yml, enter yes ↵.
127
If the deployment includes MDM and mTLS is enabled in nsp-config.yml, the following prompt is displayed:
Would you like to provide mTLS certificates for the NSP mediation interface for two-way TLS authentication? [yes,no]
Perform one of the following.
128
Back up the Kubernetes secrets.
Restore standby Kubernetes secrets
129
Note: The install script advises you to delete the customCaCert file, stating that the file is no longer required by the NSP. This message is not correct for installations that use custom certificates; if you are using custom certificates, do not delete the file, or the deployment fails.
If you are configuring the new standby (former primary) cluster in a DR deployment, obtain the secrets backup file from the NSP cluster in the new primary data center.
Deploy NSP cluster
130
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
131
If you are upgrading the new standby (former primary) cluster in a DR deployment, go to Step 134.
132
If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must start the cluster in restore mode; enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --restore
# ./nspdeployerctl install --config --restore ↵
Restore NSP data
133
If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must restore the NSP databases, file service data, and Kubernetes secrets; perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.
Start NSP
134
If you are creating the new standby cluster in a DR deployment, enter the following on the NSP deployer host:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The NSP starts.
Monitor NSP initialization
135
Open a console window on the NSP cluster host.
136
Monitor and validate the NSP cluster initialization.
Note: You must not proceed to the next step until each NSP pod is operational.
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage associated with the upgrade.
137
Enter the following on the NSP cluster host to ensure that all pods are running:
# kubectl get pods -A ↵
The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.
The NSP deployer log file is /var/log/nspdeployerctl.log.
Verify upgraded NSP cluster operation
138
Use a browser to open the NSP cluster URL.
139
Verify the following.
Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.
Import NFM-P users and groups
140
If you need to import NFM-P users to the NSP local user database as you transition to OAUTH2 user authentication, perform “How do I import users and groups from NFM-P?” in the NSP System Administrator Guide.
Upgrade MDM adaptors
141
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest versions in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources (CRs). Perform the following steps.
Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.
Restore classic telemetry collection
142
Telemetry data collection for classically mediated NEs does not automatically resume after an upgrade from NSP Release 23.11 or earlier; manual action is required to restore the collection. If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.
Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.
Synchronize LSP paths
143
You must update the path properties for existing LSP paths. For each model-driven NE that the NSP manages, issue the following RESTCONF API call to ensure that the tunnel-id and signaling-type LSP values are updated with the correct device mappings.
Note: This step is required for MDM NEs only, and is not required for classically managed NEs.
POST https://address/restconf/operations/nsp-admin-resync:trigger-resync
where address is the advertised address of the NSP cluster
The request body is the following:
{
  "nsp-admin-resync:input": {
    "plugin-id": "mdm",
    "network-element": [
      {
        "ne-id": NE_IP,
        "sbi-classes": [
          { "class-id": "nokia-state:/state/router/mpls/lsp/primary" },
          { "class-id": "nokia-state:/state/router/mpls/lsp/secondary" }
        ]
      }
    ]
  }
}
where NE_IP is the IP address of the NE
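For illustration, the call could be issued with curl as shown below; the NSP address, NE address, bearer token, and the quoting of the ne-id value are assumptions, not values from this procedure:

    # send the resync request for one NE (addresses and token are illustrative)
    curl -k -X POST "https://203.0.113.100/restconf/operations/nsp-admin-resync:trigger-resync" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "nsp-admin-resync:input": {
          "plugin-id": "mdm",
          "network-element": [
            {
              "ne-id": "198.51.100.1",
              "sbi-classes": [
                { "class-id": "nokia-state:/state/router/mpls/lsp/primary" },
                { "class-id": "nokia-state:/state/router/mpls/lsp/secondary" }
              ]
            }
          ]
        }
      }'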
144
Confirm the completion of the resync operation for the affected MDM NEs by performing the following steps:
145
The following API call updates the path control UUIDs and properties in existing IETF TE tunnel primary-paths and secondary-paths by syncing values from the path control database to the YANG database. The synchronization runs in the background after the call is executed.
GET https://server_IP/lspcore/api/v1/syncLspPaths
Sample result:
HTTP Status OK/40X
{
  response: {
    data: null,
    status: 40x,
    errors: {
      errorMessage: ""
    },
    startRow: 0,
    endRow: 0,
    totalRows: {}
  }
}
HTTP Status OK
{
  response: {
    data: null,
    status: 200,
    startRow: 0,
    endRow: 0,
    totalRows: {}
  }
}
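For illustration, the synchronization could be triggered with curl; the server address and bearer token below are assumptions:

    # trigger the LSP path synchronization (address and token are illustrative)
    curl -k -X GET "https://203.0.113.100/lspcore/api/v1/syncLspPaths" \
      -H "Authorization: Bearer $TOKEN"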
Perform post-upgrade tasks
146
Modify your UAC configuration to maintain user access to the Service Configuration Health dashlet in the Network Map and Health view. At a minimum, open each role object that controls access to the Network Map and Health view, make at least one change to the role (even if only to the Description field), and save the change; this is essential to maintain access to the dashlet after an upgrade. See “How do I configure a role?” in the NSP System Administrator Guide for information on modifying roles.
147
To remove the sensitive NSP security information from the local file system, enter the following:
# rm -rf /tmp/appliedConfigs ↵
The /tmp/appliedConfigs directory is deleted.
148
If you uninstalled any NSP logical inventory adaptor suites in Step 25 of To prepare for an NSP system upgrade from Release 22.6 or earlier, perform the following steps.
149
Use the NSP to monitor device discovery and to check network management functions.
150
Back up the NSP databases; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.
151
Close the open console windows.
End of steps