To upgrade a Release 22.6 or earlier NSP cluster
Purpose
CAUTION: Network management outage
The procedure requires a shutdown of the NSP system, which causes a network management outage.
Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.
Perform this procedure to upgrade a standalone or DR NSP system at Release 22.6 or earlier after you have performed "To prepare for an NSP system upgrade from Release 22.6 or earlier".
Note: release-ID in a file path has the following format:
R.r.p-rel.version
where
R.r.p is the NSP release, in the form MAJOR.minor.patch
version is a numeric value
Note: If you are upgrading from an NSP release earlier than 22.6, LLDP link discovery is performed by Original Service Fulfillment by default, rather than NSP Network Infrastructure Management. You can change this behavior only when the lldpv2 adaptors are available or deployed for all managed devices; otherwise, a loss of LLDP data occurs. See Configuring LLDP link discovery for more information.
Steps
Stop and undeploy NSP cluster
1
Log in as the root user on the appropriate station, based on the installed NSP release:
2
Perform the following steps to preserve the existing cluster data.
3
Undeploying an NSP cluster as described in this step permanently removes the cluster data. If you are upgrading a DR NSP system, ensure that you have the latest database backup from the primary cluster before you perform this step.
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.
Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:
command --ask-pass option option ↵
Undeploy the NSP cluster:
The NSP cluster is undeployed.
4
Before you create new NSP cluster VMs, you must disable each existing VM in the NSP cluster. The following options are available.
Uninstall IPRC, CDRC
5
If you are upgrading from Release 22.3 and the NSP deployment includes IP resource control or cross-domain resource control, uninstall each IPRC and CDRC server.
Preserve NSP cluster configuration
6
Log in as the root user on the existing NSP deployer host.
7
Open a console window.
8
Perform one of the following.
Create NSP deployer host
9
Log in as the root user on the station that will host the NSP deployer host VM.
10
Open a console window.
11
Enter the following:
# dnf -y install virt-install libguestfs-tools ↵
12
Before you create the new NSP deployer host VM, you must disable the existing VM; the following options are available.
13
Perform one of the following to create the new NSP deployer host VM.
Note: The NSP deployer host VM requires a hostname; you must change the default of 'localhost' to an actual hostname.
Configure NSP deployer host network interface
14
Enter the following to open a console session on the NSP deployer host VM:
# virsh console deployer_host ↵
You are prompted for credentials.
15
Enter the following credentials:
A virtual serial console session opens.
16
Enter the following:
# ip a ↵
The available network interfaces are listed; information like the following is displayed for each:
if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether MAC_address
    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name
       valid_lft forever preferred_lft forever
    inet6 IPv6_address/v6_netmask scope link
       valid_lft forever preferred_lft forever
17
Record the if_name and MAC_address values of the interface that you intend to use.
18
Enter the following:
# nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵
where
con_name is a connection name that you assign to the interface for ease of identification
if_name is the interface name recorded in Step 17
MAC_address is the MAC address recorded in Step 17
19
Enter the following:
# nmcli con mod con_name ipv4.addresses IP_address/netmask ↵
where
con_name is the connection name assigned in Step 18
IP_address is the IP address to assign to the interface
netmask is the subnet mask to assign
20
Enter the following:
# nmcli con mod con_name ipv4.method static ↵
21
Enter the following:
# nmcli con mod con_name ipv4.gateway gateway_IP ↵
where gateway_IP is the gateway IP address to assign
22
Enter the following:
Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.
Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.
Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.
# nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵
where nameserver_1 to nameserver_n are the available DNS name servers
23
To optionally specify one or more DNS search domains, enter the following:
# nmcli con mod con_name ipv4.dns-search search_domains ↵
where search_domains is a comma-separated list of DNS search domains
24
Enter the following to set the hostname:
# hostnamectl set-hostname hostname ↵
where hostname is the hostname to assign
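The interface configuration sequence in Steps 18 to 24 can be sketched as a single dry-run script. Every value below (connection name, interface, MAC address, addresses, hostname) is a hypothetical placeholder; substitute the values recorded in Step 17 and your own addressing. The script echoes each command instead of executing it, so the sequence can be reviewed before use.

```shell
#!/bin/sh
# Dry-run sketch of the deployer host interface configuration (Steps 18-24).
# All values are hypothetical examples; substitute your own.
run() { echo "$@"; }           # echoes the command; remove the echo to execute

CON=ClientInterface            # con_name you assign
IF=eth0                        # if_name recorded in Step 17
MAC=52:54:00:aa:bb:cc          # MAC_address recorded in Step 17

run nmcli con add con-name "$CON" ifname "$IF" type ethernet mac "$MAC"
run nmcli con mod "$CON" ipv4.addresses 192.0.2.10/24
run nmcli con mod "$CON" ipv4.method static
run nmcli con mod "$CON" ipv4.gateway 192.0.2.1
run nmcli con mod "$CON" ipv4.dns 192.0.2.53
run nmcli con mod "$CON" ipv4.dns-search example.com
run hostnamectl set-hostname nsp-deployer.example.com
```

Reviewing the echoed output against the recorded interface values before running the commands for real helps catch a mistyped MAC address or netmask early.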
25
Enter the following to reboot the deployer host VM:
# systemctl reboot ↵
26
Close the console session by pressing Ctrl+] (right bracket).
Install NSP Kubernetes registry
27
Log in as the root user on the NSP deployer host.
28
Enter the following:
# mkdir /opt/nsp ↵
29
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_K8S_DEPLOYER_R_r.tar.gz
30
Enter the following:
# cd /opt/nsp ↵
31
Enter the following:
# tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The file is expanded, and the following directories are created:
32
Enter the following:
# rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵
The file is deleted.
33
Enter the following:
# cd nsp-registry-release-ID/bin ↵
34
Enter the following:
# ./nspregistryctl install ↵
The following prompt is displayed.
Enter a registry admin password:
35
Create a registry administrator password; the password must:
36
Enter the password. The following prompt is displayed.
Confirm the registry admin password:
37
Re-enter the password. The registry installation begins, and messages like the following are displayed.
✔ New installation detected.
✔ Initialize system.
date time Copy container images ...
date time Install/update package [container-selinux] ...
✔ Installation of container-selinux has completed.
date time Install/update package [k3s-selinux] ...
✔ Installation of k3s-selinux has completed.
date time Setup required tools ...
✔ Initialization has completed.
date time Install k3s ...
date time Waiting for up to 10 minutes for k3s initialization ...
..............................................
✔ Installation of k3s has completed.
➜ Generate self-signed key and cert.
date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key
date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt
date time Install registry apps ...
date time Waiting for up to 10 minutes for registry services to be ready ...
..........
✔ Registry apps installation is completed.
date time Generate artifacts ...
date time Apply artifacts ...
date time Setup registry.nsp.nokia.local certs ...
date time Setup a default project [nsp] ...
date time Setup a cron to regenerate the k3s certificate [nsp] ...
✔ Post configuration is completed.
✔ Installation has completed.
Migrate legacy cluster parameters
38
If you are upgrading from Release 22.6, go to Step 49.
39
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/tools ↵
40
Copy the hosts.yml file saved in Step 8 to the current directory.
41
Enter the following:
# ./extracthosts hosts.yml ↵
The current NSP cluster node entries are displayed, as shown in the following example for a three-node cluster:
hosts:
- nodeName: node1
  nodeIp: 192.168.96.11
  accessIp: 203.0.113.11
- nodeName: node2
  nodeIp: 192.168.96.12
  accessIp: 203.0.113.12
- nodeName: node3
  nodeIp: 192.168.96.13
  accessIp: 203.0.113.13
42
Review the output to ensure that each node entry is correct.
43
Open the following file using a plain-text editor such as vi:
/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml
44
You must configure the cluster node entries using the extracted hosts.yml file output. Table 8-1, "hosts.yml and k8s-deployer.yml parameters", shows the former and new parameter names, and the required value.
Configure the k8s-deployer.yml parameters shown below for each NSP cluster VM; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.
Note: The nodeName value:
Note: The node order in the k8s-deployer.yml file must match the order in the hosts.yml file.
Table 8-1: hosts.yml and k8s-deployer.yml parameters
45
Configure the following parameter, which specifies whether dual-stack NE management is enabled:
Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:
enable_dual_stack_networks: value
where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing
46
If the existing deployment includes an RPM-based IPRC, add a node entry for the IPRC after the existing node entries.
47
If the deployment includes dedicated MDM cluster VMs, as identified in Step 23 of "To prepare for an NSP system upgrade from Release 22.6 or earlier", add an entry for each identified VM.
Note: If the deployment includes the IPRC, you must add the MDM node entries after the IPRC entry.
48
Save and close the file.
Create NSP cluster VMs
49
If you are upgrading from Release 22.6, copy the k8s-deployer.yml file saved in Step 8 to the following directory on the new NSP deployer host:
/opt/nsp/nsp-k8s-deployer-release-ID/config
50
For each required NSP cluster VM, perform one of the following to create the VM.
Note: Each NSP cluster VM requires a hostname; you must change the default of 'localhost' to an actual hostname.
51
Perform Step 52 to Step 69 for each NSP cluster VM to configure the required interfaces.
Configure NSP cluster interfaces
52
Enter the following on the NSP deployer host to open a console session on the VM:
# virsh console NSP_cluster_VM ↵
where NSP_cluster_VM is the VM name
You are prompted for credentials.
53
Enter the following credentials:
A virtual serial console session opens on the NSP cluster VM.
54
Enter the following:
# ip a ↵
The available network interfaces are listed; information like the following is displayed for each:
if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether MAC_address
    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name
       valid_lft forever preferred_lft forever
    inet6 IPv6_address/v6_netmask scope link
       valid_lft forever preferred_lft forever
55
Record the if_name and MAC_address values of the interfaces that you intend to use.
56
Enter the following for each interface:
# nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵
where
con_name is a connection name that you assign to the interface for ease of identification; for example, ClientInterface or MediationInterface
if_name is the interface name recorded in Step 55
MAC_address is the MAC address recorded in Step 55
57
Enter the following for each interface:
# nmcli con mod con_name ipv4.addresses IP_address/netmask ↵
where
con_name is the connection name assigned in Step 56
IP_address is the IP address to assign to the interface
netmask is the subnet mask to assign
58
Enter the following for each interface:
# nmcli con mod con_name ipv4.method static ↵
59
Enter the following for each interface:
# nmcli con mod con_name ipv4.gateway gateway_IP ↵
where gateway_IP is the gateway IP address to assign
Note: This command sets the default gateway on the primary interface and the gateways for all secondary interfaces.
60
Enter the following for all secondary interfaces:
# nmcli con mod con_name ipv4.never-default yes ↵
61
Enter the following for each interface:
Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.
Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.
Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.
# nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵
where nameserver_1 to nameserver_n are the available DNS name servers
62
To optionally specify one or more DNS search domains, enter the following for each interface:
# nmcli con mod con_name ipv4.dns-search search_domains ↵
where search_domains is a comma-separated list of DNS search domains
63
Open the following file with a plain-text editor such as vi:
/etc/sysctl.conf
64
Locate the following line:
vm.max_map_count=value
65
Edit the line to read as follows; if the line is not present, add the line to the end of the file:
vm.max_map_count=262144
66
Save and close the file.
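The edit in Steps 63 to 66 can also be scripted idempotently, which is convenient when preparing several cluster VMs. A minimal sketch, assuming a POSIX shell with GNU sed; it is demonstrated on a temporary file here, and would be pointed at /etc/sysctl.conf on the VM:

```shell
#!/bin/sh
# Sketch of Steps 63-66: ensure vm.max_map_count=262144 in a sysctl file,
# replacing an existing vm.max_map_count line or appending one if absent.
set_max_map_count() {
  f=$1
  if grep -q '^vm\.max_map_count=' "$f"; then
    sed -i 's/^vm\.max_map_count=.*/vm.max_map_count=262144/' "$f"
  else
    printf 'vm.max_map_count=262144\n' >> "$f"
  fi
}

# Demonstration on a temp file instead of /etc/sysctl.conf:
tmp=$(mktemp)
printf 'vm.max_map_count=65530\n' > "$tmp"
set_max_map_count "$tmp"
grep '^vm\.max_map_count' "$tmp"    # prints vm.max_map_count=262144
rm -f "$tmp"
```

Because the function replaces or appends as needed, running it more than once leaves exactly one correct line in the file.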
67
If you are installing in a KVM environment, enter the following:
# mkdir /opt/nsp ↵
68
Enter the following to reboot the NSP cluster VM:
# systemctl reboot ↵
69
Close the console session by pressing Ctrl+] (right bracket).
Deploy container environment
70
Log in as the root user on the NSP deployer host.
71
Open a console window.
72
You must generate an SSH key for password-free NSP deployer host access to each NSP cluster VM. Enter the following:
# ssh-keygen -N "" -f path -t rsa ↵
where path is the SSH key file path, for example, /home/user/.ssh/id_rsa
An SSH key is generated.
73
Enter the following for each NSP cluster VM to distribute the key to the VM:
# ssh-copy-id -i key_file root@address ↵
where
key_file is the SSH key file, for example, /home/user/.ssh/id_rsa.pub
address is the NSP cluster VM IP address
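Step 73 repeats the same command once per cluster VM, so it lends itself to a short loop. A dry-run sketch; the key path and VM addresses are hypothetical examples, and the function echoes the command instead of running it:

```shell
#!/bin/sh
# Dry-run sketch of Step 73: distribute the deployer SSH key to every
# NSP cluster VM. Key path and addresses are hypothetical placeholders.
KEY=/home/user/.ssh/id_rsa.pub

copy_key() { echo "ssh-copy-id -i $KEY root@$1"; }   # echo, not execute

for vm in 203.0.113.11 203.0.113.12 203.0.113.13; do
  copy_key "$vm"
done
```

Removing the echo runs the real ssh-copy-id; each VM then prompts once for its root password before key-based access takes over.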
74
Enter the following:
# cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵
75
Enter the following to create the new hosts.yml file:
# ./nspk8sctl config -c ↵
76
Enter the following to list the node entries in the new hosts.yml file:
# ./nspk8sctl config -l ↵
Output like the following example for a four-node cluster is displayed:
Note: If NAT is used in the cluster:
Note: If NAT is not used in the cluster:
Existing cluster hosts configuration is:
all:
  hosts:
    node1:
      ansible_host: 203.0.113.11
      ip: ip
      access_ip: access_ip
    node2:
      ansible_host: 203.0.113.12
      ip: ip
      access_ip: access_ip
    node3:
      ansible_host: 203.0.113.13
      ip: ip
      access_ip: access_ip
    node4:
      ansible_host: 203.0.113.14
      ip: ip
      access_ip: access_ip
77
Verify the IP addresses.
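For the verification in Step 77, the addresses can be filtered out of the listing for a quick side-by-side check against your plan. A sketch that extracts the ansible_host lines from output shaped like the example above; the inline sample input is illustrative only:

```shell
#!/bin/sh
# Sketch for Step 77: extract each node's ansible_host address from
# 'nspk8sctl config -l' style output. Sample input is inline; in practice
# the listing would be piped in instead.
list_access_ips() { awk '$1 == "ansible_host:" { print $2 }'; }

printf '%s\n' \
  'all:' \
  '  hosts:' \
  '    node1:' \
  '      ansible_host: 203.0.113.11' \
  '    node2:' \
  '      ansible_host: 203.0.113.12' | list_access_ips
```

The two sample addresses are printed one per line, in node order, which makes a mismatch against the planned addressing easy to spot.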
78
Enter the following to import the Kubernetes images to the repository:
# ./nspk8sctl import ↵
79
Enter the following:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspk8sctl --ask-pass install
# ./nspk8sctl install ↵
The NSP Kubernetes environment is deployed.
80
The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.
81
Open a console window on the NSP cluster host.
82
Enter the following periodically to display the status of the Kubernetes system pods:
Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.
# kubectl get pods -A ↵
The pods are listed.
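Rather than re-reading the listing by eye, the check in Step 82 can be automated. A sketch that succeeds only when every pod STATUS is Running or Completed; it reads `kubectl get pods -A` output on stdin, and the inline sample rows are illustrative only:

```shell
#!/bin/sh
# Sketch of an automated check for Step 82: exit 0 only when every pod
# STATUS (4th column of 'kubectl get pods -A') is Running or Completed.
all_pods_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { bad++ }
       END { exit (bad > 0) }'
}

# In practice: kubectl get pods -A | all_pods_ready && echo ready
printf '%s\n' \
  'NAMESPACE     NAME          READY  STATUS     RESTARTS' \
  'kube-system   coredns-abc   1/1    Running    0' \
  'nsp           job-xyz       0/1    Completed  0' | all_pods_ready && echo ready
```

Wrapping the pipeline in a loop with a sleep gives a simple poll that returns as soon as the cluster settles.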
83
Enter the following periodically to display the status of the NSP cluster nodes:
Note: You must not proceed to the next step until each node STATUS reads Ready.
# kubectl get nodes -o wide ↵
The NSP cluster nodes are listed, as shown in the following three-node cluster example:
NAME   STATUS  ROLES   AGE  VERSION  INTERNAL-IP  EXTERNAL-IP
node1  Ready   master  nd   version  int_IP       ext_IP
node2  Ready   master  nd   version  int_IP       ext_IP
node3  Ready   <none>  nd   version  int_IP       ext_IP
Restore NSP system configuration
84
Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:
NSP_DEPLOYER_R_r.tar.gz
85
Enter the following on the NSP deployer host:
# cd /opt/nsp ↵
86
Enter the following:
# tar xvf NSP_DEPLOYER_R_r.tar.gz ↵
where R_r is the NSP release ID, in the form Major_minor
The bundle file is expanded, and the following directory of NSP installation files is created:
/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID
87
Enter the following:
# rm -f NSP_DEPLOYER_R_r.tar.gz ↵
The bundle file is deleted.
88
Restore the required NSP configuration files.
89
Perform one of the following.
Label NSP cluster nodes
90
Enter the following:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
91
Open the following file with a plain-text editor such as vi:
/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
Configure the following parameters:
hosts: "hosts_file"
labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"
where
hosts_file is the absolute path of the hosts.yml file; the default is /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml
labels_file is the file name below that corresponds to your cluster deployment type:
92
Save and close the file.
93
Enter the following to apply the node labels to the NSP cluster:
# ./nspdeployerctl config ↵
94
Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:
Note: The import operation may take 20 minutes or longer.
# ./nspdeployerctl import ↵
95
Open a console window on the NSP cluster host.
96
Enter the following to display the node labels:
# kubectl get nodes --show-labels ↵
Cluster node information is displayed.
97
View the information to ensure that all NSP labels are added to the cluster VMs.
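The review in Step 97 can be partly automated: given `kubectl get nodes --show-labels` output, report any node whose LABELS column lacks an expected label. A sketch; the label and the sample rows are hypothetical examples:

```shell
#!/bin/sh
# Sketch for Step 97: print the NAME of each node whose LABELS column
# (last field of 'kubectl get nodes --show-labels') lacks a given label.
nodes_missing_label() {
  awk -v l="$1" 'NR > 1 && index($NF, l) == 0 { print $1 }'
}

# Sample input; in practice: kubectl get nodes --show-labels | nodes_missing_label mdm=true
printf '%s\n' \
  'NAME   STATUS ROLES  AGE VERSION LABELS' \
  'node1  Ready  master 2d  v1.26   kubernetes.io/hostname=node1,mdm=true' \
  'node2  Ready  <none> 2d  v1.26   kubernetes.io/hostname=node2' \
  | nodes_missing_label mdm=true    # prints node2
```

An empty result means every node carries the label; any printed name points at a VM that still needs it.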
Configure NSP software
98
You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file. Open the following files using a plain-text editor such as vi:
Note: See nsp-config.yml file format for configuration information.
99
Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file.
Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.
Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.
Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the peer cluster and have the same format; if one value is a hostname, the other must also be a hostname.
Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.
100
Configure the following parameter in the platform section as shown below:
Note: You must preserve the leading spaces of the line.
clusterHost: "cluster_host_address"
where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations
101
Configure the type parameter in the deployment section as shown below:
deployment:
  type: "deployment_type"
where deployment_type is one of the parameter options listed in the section
102
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.
103
If all of the following are true, configure the following parameters in the integrations section:
nfmpDB:
  primaryIp: ""
  standbyIp: ""
104
If both of the following are true, configure the following parameters in the integrations section:
auxServer:
  primaryIpList: ""
  standbyIpList: ""
105
If the NSP system includes one or more Release 22.11 or earlier analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.
Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true, as shown below:
analyticsServer:
  legacyPortEnabled: true
106
Specify the user authorization mechanism in the sso section, as shown below.
sso:
  authMode: "mode"
where mode is one of the following:
107
If you use CAS authentication and are not migrating to OAUTH2 at this time, add the required parameter sections.
Note: The parameters apply only to an NSP system that uses CAS authentication.
108
If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that the nsp-config.yml file specifies.
109
Save and close the new nsp-config.yml file.
110
Close the previous nsp-config.yml file.
111
If you are configuring the new standby (former primary) cluster in a DR deployment, obtain the security artifacts from the NSP cluster in the primary data center.
112
If you are upgrading the new primary (former standby) cluster in a DR deployment, stop here and return to Workflow for DR NSP system upgrade from Release 22.6 or earlier.
Restore dedicated MDM node labels
113
If you are not including any dedicated MDM nodes in addition to the member nodes of a standard or enhanced NSP cluster, go to Step 120.
114
Log in as the root user on the NSP cluster host.
115
Open a console window.
116
Perform the following steps for each additional MDM node.
117
Enter the following:
# kubectl get nodes -o wide ↵
A list of nodes like the following is displayed.
NAME   STATUS  ROLES   AGE  VERSION  INTERNAL-IP  EXTERNAL-IP
node1  Ready   master  nd   version  int_IP       ext_IP
node2  Ready   master  nd   version  int_IP       ext_IP
node3  Ready   <none>  nd   version  int_IP       ext_IP
118
Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.
119
For each such node, enter the following:
# kubectl label node node mdm=true ↵
where node is the recorded NAME value of the MDM node
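When several MDM nodes were recorded in Step 118, the labeling in Step 119 can be looped. A dry-run sketch; the node names are hypothetical, and the command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of Step 119: apply the mdm=true label to each recorded
# MDM node. Node names below are hypothetical placeholders.
label_mdm() { echo "kubectl label node $1 mdm=true"; }   # echo, not execute

for n in node4 node5; do
  label_mdm "$n"
done
```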
Deploy NSP cluster
120
Enter the following on the NSP deployer host:
# cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵
121
If you are upgrading the new standby (former primary) cluster in a DR deployment, go to Step 124.
122
If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must start the cluster in restore mode; enter the following:
Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each host:
nspdeployerctl install arguments --ask-pass ↵
# ./nspdeployerctl install --config --restore ↵
Restore NSP data
123
If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must restore the NSP databases and file service data; perform "How do I restore the NSP cluster databases?" in the NSP System Administrator Guide.
Start NSP
124
If you are creating the new standby cluster in a DR deployment, enter the following on the NSP deployer host:
Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:
nspdeployerctl --ask-pass install --config --deploy
# ./nspdeployerctl install --config --deploy ↵
The NSP starts.
Monitor NSP initialization
125
Open a console window on the NSP cluster host.
126
Monitor and validate the NSP cluster initialization.
Note: You must not proceed to the next step until each NSP pod is operational.
Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage associated with the upgrade.
127
Enter the following on the NSP cluster host to ensure that all pods are running:
# kubectl get pods -A ↵
The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.
The NSP deployer log file is /var/log/nspdeployerctl.log.
Verify upgraded NSP cluster operation
128
Use a browser to open the NSP cluster URL.
129
Verify the following.
Note: If the UI fails to open, perform "How do I remove the stale NSP allowlist entries?" in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.
Upgrade MDM adaptors
130
If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources (CRs). Perform the following steps.
Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.
Synchronize auxiliary database password
131
If the NSP deployment includes an auxiliary database, perform the following steps on the NSP cluster host.
Perform post-upgrade tasks
132
If you uninstalled any NSP logical inventory adaptor suites in Step 24 of "To prepare for an NSP system upgrade from Release 22.6 or earlier", perform the following steps.
133
Use the NSP to monitor device discovery and to check network management functions.
134
Back up the NSP databases; perform "How do I back up the NSP cluster databases?" in the NSP System Administrator Guide.
135
Close the open console windows.
End of steps