To upgrade a Release 22.6 or earlier NSP cluster

Purpose
CAUTION

Network management outage

The procedure requires a shutdown of the NSP system, which causes a network management outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.

Perform this procedure to upgrade a standalone or DR NSP system at Release 22.3 or 22.6 after you have performed To prepare for an NSP system upgrade from Release 22.6 or earlier.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Note: If you are upgrading from NSP Release 22.3, LLDP link discovery is performed by Original Service Fulfillment by default, rather than NSP Network Infrastructure Management. You can change this behavior only when the lldpv2 adaptors are available or deployed for all managed devices; otherwise, a loss of LLDP data occurs. See Configuring LLDP link discovery for more information.

Steps
Stop and undeploy NSP cluster
 

Log in as the root user on the appropriate station, based on the installed NSP release:

  • Release 22.3—NSP configurator VM

  • Release 22.6—NSP deployer host


Perform the following steps to preserve the existing cluster data.

  1. Open the appropriate file, based on the installed NSP release, using a plain-text editor such as vi:

    • Release 22.3—/opt/nsp/NSP-CN-release-ID/config/nsp-config.yml

    • Release 22.6—/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the kubernetes subsection of the platform section to read as follows; a sketch of the edited subsection follows these steps:

      deleteOnUndeploy: false

  3. Save and close the file.
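
For reference, the edited subsection should then look like the following sketch; surrounding parameters are omitted here and must be left unchanged, and the exact indentation follows the existing file:

  platform:
    kubernetes:
      deleteOnUndeploy: false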


CAUTION

Data Loss

Undeploying an NSP cluster as described in this step permanently removes the cluster data.

If you are upgrading a DR NSP system, you must ensure that you have the latest database backup from the primary cluster before you perform this step.

Undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the common root password of each cluster member:

command --ask-pass option option

  1. If you are upgrading from Release 22.3, enter the following:

    /opt/nsp/NSP-CN-DEP-release-ID/bin/nsp-config.bash --undeploy --clean ↵

  2. If you are upgrading from Release 22.6 or later, enter the following:

    /opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


Before you create new NSP cluster VMs, you must disable each existing VM in the NSP cluster. The following options are available.

  • stop but do not delete existing VMs—simplifies upgrade rollback, but requires sufficient platform resources for new VMs

  • delete existing VMs—complicates upgrade rollback, but conserves platform resources

  1. Log in as the root user on the station that hosts the NSP cluster VM.

  2. Enter the following:

    virsh list ↵

    The NSP cluster VMs are listed.

  3. Enter the following for each listed VM:

    virsh destroy VM

    where VM is the VM name

    The VM stops.

  4. To delete the VM, enter the following:

    virsh undefine VM

    The VM is deleted.


Uninstall IPRC, CDRC
 

If you are upgrading from Release 22.3 and the NSP deployment includes IP resource control or cross-domain resource control, uninstall each IPRC and CDRC server.

  1. Log in as the root user on the IPRC or CDRC server station that has the extracted NSP software bundle from the previous installation or upgrade.

  2. Open a console window.

  3. Navigate to the NSP installer directory; enter the following:

    cd path/NSD_NRC_R_r

    where

    path is the directory that contains the extracted NSP software bundle

    R_r is the NSP software release, in the form MAJOR_minor

  4. Edit the hosts file in the directory to list only the addresses of the NSP components to uninstall.

    Note: The uninstaller uses the same root password on each server in the list; ensure that you specify only servers that have the same root password.

  5. Enter the following; include the --ask-pass option only if each target station has the same root user password:

    ./bin/uninstall.sh --ask-pass ↵

  6. Enter the root user password each time you are prompted.

    The NSP software is removed from each server listed in the hosts file.

  7. Close the console window.


Preserve NSP cluster configuration
 

Log in as the root user on the existing NSP deployer host.


Open a console window.


Perform one of the following.

  1. If you are upgrading from Release 22.3, copy the following file to a separate station that is unaffected by the upgrade activity:

    /opt/nsp/kubespray/inventory/nsp-deployer-default/hosts.yml

  2. If you are upgrading from NSP Release 22.6, copy the following file to a separate station that is unaffected by the upgrade activity:

    /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml


Copy the following file to a separate station that is unaffected by the upgrade activity:

/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml
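
For example, you can use scp with a hypothetical backup station named backup-station and a hypothetical user admin; the file saved in Step 8 can be copied the same way:

scp /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml admin@backup-station:/tmp/nsp-upgrade-backup/ ↵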


Create NSP deployer host
 
10 

Log in as the root user on the station that will host the NSP deployer host VM.


11 

Open a console window.


12 

Enter the following:

dnf -y install virt-install libguestfs-tools ↵


13 

Before you create the new NSP deployer host VM, you must disable the existing VM; the following options are available.

  • stop but do not delete existing VM—simplifies upgrade rollback, but VM consumes platform resources

  • delete existing VM—complicates upgrade rollback, but conserves platform resources

  1. Log in as the root user on the station that hosts the NSP deployer host VM.

  2. Enter the following to list the VMs on the station:

    virsh list ↵

    The VMs are listed.

  3. Enter the following:

    virsh destroy VM

    where VM is the name of the NSP deployer host VM

    The NSP deployer host VM stops.

  4. To delete the VM, enter the following:

    Note: If you intend to use the same VM name for the new NSP deployer host VM, you must delete the VM.

    virsh undefine VM

    The VM is deleted.


14 

Perform one of the following to create the new NSP deployer host VM.

Note: The NSP deployer host VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2 disk image; perform Step 6 to Step 16 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL8_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


Configure NSP deployer host network interface
 
15 

Enter the following to open a console session on the NSP deployer host VM:

virsh console deployer_host

You are prompted for credentials.


16 

Enter the following credentials:

  • username—root

  • password—available from technical support

A virtual serial console session opens.


17 

Enter the following:

ip a ↵

The available network interfaces are listed; information like the following is displayed for each:

if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether MAC_address

    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name

       valid_lft forever preferred_lft forever

    inet6 IPv6_address/v6_netmask scope link

       valid_lft forever preferred_lft forever
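
For example, an entry like the following, shown with hypothetical values, indicates an interface name of ens3 and a MAC address of 52:54:00:ab:cd:ef:

2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:ab:cd:ef brd ff:ff:ff:ff:ff:ff
    inet 203.0.113.5/24 brd 203.0.113.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever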


18 

Record the if_name and MAC_address values of the interface that you intend to use.


19 

Enter the following:

nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address

where

con_name is a connection name that you assign to the interface for ease of identification

if_name is the interface name recorded in Step 18

MAC_address is the MAC address recorded in Step 18


20 

Enter the following:

nmcli con mod con_name ipv4.addresses IP_address/netmask

where

con_name is the connection name assigned in Step 19

IP_address is the IP address to assign to the interface

netmask is the subnet mask to assign


21 

Enter the following:

nmcli con mod con_name ipv4.method manual ↵


22 

Enter the following:

nmcli con mod con_name ipv4.gateway gateway_IP

where gateway_IP is the gateway IP address to assign


23 

Enter the following:

Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.

Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.

Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.

nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n

where nameserver_1 to nameserver_n are the available DNS name servers


24 

To optionally specify one or more DNS search domains, enter the following:

nmcli con mod con_name ipv4.dns-search search_domains

where search_domains is a comma-separated list of DNS search domains


25 

Enter the following to set the hostname:

hostnamectl set-hostname hostname

where hostname is the hostname to assign
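
As an example, the command sequence in Step 19 to Step 25 might look like the following for a hypothetical interface named ens3 with MAC address 52:54:00:ab:cd:ef, a hypothetical connection name of DeployerInterface, and example addresses; substitute the values for your environment. Note that nmcli uses the keyword manual for a statically addressed interface.

nmcli con add con-name DeployerInterface ifname ens3 type ethernet mac 52:54:00:ab:cd:ef ↵
nmcli con mod DeployerInterface ipv4.addresses 203.0.113.5/24 ↵
nmcli con mod DeployerInterface ipv4.method manual ↵
nmcli con mod DeployerInterface ipv4.gateway 203.0.113.1 ↵
nmcli con mod DeployerInterface ipv4.dns 203.0.113.53,203.0.113.54 ↵
nmcli con mod DeployerInterface ipv4.dns-search example.com ↵
hostnamectl set-hostname nsp-deployer.example.com ↵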


26 

Enter the following to reboot the deployer host VM:

systemctl reboot ↵


27 

Close the console session by pressing Ctrl+] (right bracket).


Install NSP Kubernetes registry
 
28 

Log in as the root user on the NSP deployer host.


29 

Enter the following:

mkdir /opt/nsp ↵


30 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_K8S_DEPLOYER_R_r.tar.gz


31 

Enter the following:

cd /opt/nsp ↵


32 

Enter the following:

tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form MAJOR_minor

The file is expanded, and the following directories are created:

  • /opt/nsp/nsp-k8s-deployer-release-ID

  • /opt/nsp/nsp-registry-release-ID


33 

After the file expansion completes successfully, enter the following to remove the bundle file, which is no longer required:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The file is deleted.


34 

Enter the following:

cd nsp-registry-release-ID/bin ↵


35 

Enter the following:

./nspregistryctl install ↵

The following prompt is displayed.

Enter a registry admin password:


36 

Create a registry administrator password; the password must:

  • be a minimum of 10 characters

  • include at least one:

    • uppercase character

    • lowercase character

    • digit

    • special character in the following list:

      ! # $ % & ( ) * + , - . / : ; = ? @ \ ^ _ { | }


37 

Enter the password.

The following prompt is displayed.

Confirm the registry admin password:


38 

Re-enter the password.

The registry installation begins, and messages like the following are displayed.

✔ New installation detected.

✔ Initialize system.

date time Copy container images ...

date time Install/update package [container-selinux] ...

✔ Installation of container-selinux has completed.

date time Install/update package [k3s-selinux] ...

✔ Installation of k3s-selinux has completed.

date time Setup required tools ...

✔ Initialization has completed.

date time Install k3s ...

date time Waiting for up to 10 minutes for k3s initialization ...

..............................................

✔ Installation of k3s has completed.

➜ Generate self-signed key and cert.

date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key

date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt

date time Install registry apps ...

date time Waiting for up to 10 minutes for registry services to be ready ...

..........

✔ Registry apps installation is completed.

date time Generate artifacts ...

date time Apply artifacts ...

date time Setup registry.nsp.nokia.local certs ...

date time Setup a default project [nsp] ...

date time Setup a cron to regenerate the k3s certificate [nsp] ...

✔ Post configuration is completed.

✔ Installation has completed.


Migrate legacy cluster parameters
 
39 

If you are upgrading from Release 22.6, go to Step 44.


40 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/tools ↵


41 

Copy the hosts.yml file saved in Step 8 to the current directory.


42 

Enter the following:

./extracthosts hosts.yml ↵

The current NSP cluster node entries are displayed, as shown in the following example for a three-node cluster:

hosts:
- nodeName: node1
  nodeIp: 192.168.96.11
  accessIp: 203.0.113.11
- nodeName: node2
  nodeIp: 192.168.96.12
  accessIp: 203.0.113.12
- nodeName: node3
  nodeIp: 192.168.96.13
  accessIp: 203.0.113.13


43 

Review the output to ensure that each node entry is correct.


44 

Open the following new deployer configuration file using a plain-text editor such as vi:

/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml


45 

Perform one of the following to configure the cluster node entries for the deployment.

  1. If you are upgrading from Release 22.3, configure the cluster node entries in the new configuration file using the extracted hosts.yml file output. Table 8-1, hosts.yml and k8s-deployer.yml parameters lists the former and new parameter names, and the required values.

    Note: The nodeName value:

    • can include only ASCII alphanumeric and hyphen characters

    • cannot include an upper-case character

    • cannot begin or end with a hyphen

    • cannot begin with a number

    • cannot include an underscore

    • must end with a number

    Note: The node order in the k8s-deployer.yml file must match the order in the hosts.yml file.

    Table 8-1: hosts.yml and k8s-deployer.yml parameters

    hosts.yml parameter    k8s-deployer.yml parameter    Value
    -------------------    --------------------------    -----
    node_entry_header      nodeName                      node hostname
    ansible_host           (no equivalent)               same IP address used for access_ip
    ip                     nodeIp                        IP address; private, if NAT is used
    access_ip              accessIp                      IP address; public, if NAT is used

    An example of a resulting node entry appears after these steps.

  2. If you are upgrading from Release 22.6, merge the current k8s-deployer.yml settings into the new file.

    1. Open the old k8s-deployer.yml file saved in Step 8.

    2. Apply the settings in the old file to the same parameters in the new file.

    3. Close the old k8s-deployer.yml file.
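
The following is a minimal sketch of a single node entry in the new k8s-deployer.yml, built from the Table 8-1 mapping and the example values in the Step 42 output; add one such entry per cluster node, in the same order as in hosts.yml, and follow the enclosing key and indentation of the template entries already present in the file:

  - nodeName: node1
    nodeIp: 192.168.96.11
    accessIp: 203.0.113.11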


46 

Edit the following line in the cluster section of the new file to read:

  hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"


47 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated root-equivalent user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


48 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enable_dual_stack_networks: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing


49 

If the existing deployment includes an RPM-based IPRC, add a node entry for the IPRC after the existing node entries.


50 

If the deployment includes dedicated MDM cluster VMs, as identified in Step 22 of To prepare for an NSP system upgrade from Release 22.6 or earlier, add an entry for each identified VM.

Note: If the deployment includes the IPRC, you must add the MDM node entries after the IPRC entry.


51 

Save and close the new k8s-deployer.yml file.


Create NSP cluster VMs
 
52 

For each required NSP cluster VM, perform one of the following to create the VM.

Note: Each NSP cluster VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2 disk image; perform Step 6 to Step 16 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL8_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


53 

Perform Step 54 to Step 71 for each NSP cluster VM to configure the required interfaces.


Configure NSP cluster interfaces
 
54 

Enter the following on the NSP deployer host to open a console session on the VM:

virsh console NSP_cluster_VM

where NSP_cluster_VM is the VM name

You are prompted for credentials.


55 

Enter the following credentials:

  • username—root

  • password—available from technical support

A virtual serial console session opens on the NSP cluster VM.


56 

Enter the following:

ip a ↵

The available network interfaces are listed; information like the following is displayed for each:

if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether MAC_address

    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name

       valid_lft forever preferred_lft forever

    inet6 IPv6_address/v6_netmask scope link

       valid_lft forever preferred_lft forever


57 

Record the if_name and MAC_address values of the interfaces that you intend to use.


58 

Enter the following for each interface:

nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address

where

con_name is a connection name that you assign to the interface for ease of identification; for example, ClientInterface or MediationInterface

if_name is the interface name recorded in Step 57

MAC_address is the MAC address recorded in Step 57


59 

Enter the following for each interface:

nmcli con mod con_name ipv4.addresses IP_address/netmask

where

con_name is the connection name assigned in Step 58

IP_address is the IP address to assign to the interface

netmask is the subnet mask to assign


60 

Enter the following for each interface:

nmcli con mod con_name ipv4.method manual ↵


61 

Enter the following for each interface:

nmcli con mod con_name ipv4.gateway gateway_IP

where gateway_IP is the gateway IP address to assign

Note: This command sets the default gateway on the primary interface and the gateways for all secondary interfaces.


62 

Enter the following for all secondary interfaces:

nmcli con mod con_name ipv4.never-default yes ↵
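
For example, for hypothetical connections named ClientInterface (primary) and MediationInterface (secondary) with example gateway addresses, Step 61 and Step 62 might look like the following; only the primary connection keeps the default route:

nmcli con mod ClientInterface ipv4.gateway 203.0.113.1 ↵
nmcli con mod MediationInterface ipv4.gateway 192.168.96.1 ↵
nmcli con mod MediationInterface ipv4.never-default yes ↵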


63 

Enter the following for each interface:

Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.

Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.

Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.

nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n

where nameserver_1 to nameserver_n are the available DNS name servers


64 

To optionally specify one or more DNS search domains, enter the following for each interface:

nmcli con mod con_name ipv4.dns-search search_domains

where search_domains is a comma-separated list of DNS search domains


65 

Open the following file with a plain-text editor such as vi:

/etc/sysctl.conf


66 

Locate the following line:

vm.max_map_count=value


67 

Edit the line to read as follows; if the line is not present, add the line to the end of the file:

vm.max_map_count=262144


68 

Save and close the file.
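
If you want to confirm the setting, you can query it after the reboot in Step 70; for example:

sysctl vm.max_map_count ↵

The expected output is vm.max_map_count = 262144. To apply the value immediately, without waiting for the reboot, you can also enter sysctl -p ↵.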


69 

If you are installing in a KVM environment, enter the following:

mkdir /opt/nsp ↵


70 

Enter the following to reboot the NSP cluster VM:

systemctl reboot ↵


71 

Close the console session by pressing Ctrl+] (right bracket).


Deploy container environment
 
72 

Log in as the root user on the NSP deployer host.


73 

Open a console window.


74 

If remote root access is disabled, switch to the designated root-equivalent user.


75 

You must generate an SSH key for password-free deployer host access to each NSP cluster VM.

Enter the following:

ssh-keygen -N "" -f ~/.ssh/id_rsa -t rsa ↵


76 

Enter the following for each NSP cluster VM to distribute the SSH key to the VM:

ssh-copy-id -i ~/.ssh/id_rsa.pub address

where address is the NSP cluster VM IP address
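
For example, for a hypothetical three-node cluster with addresses 203.0.113.11 through 203.0.113.13, the key distribution can be scripted as follows; you are prompted for the root password of each VM:

for address in 203.0.113.11 203.0.113.12 203.0.113.13; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$address"
done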


77 

If remote root access is disabled, switch back to the root user.


78 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


79 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


80 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The access_ip value is the public IP address of the cluster node.

  • The ip value is the private IP address of the cluster node.

  • The ansible_host value is the same value as access_ip

Note: If NAT is not used in the cluster:

  • The access_ip value is the IP address of the cluster node.

  • The ip value matches the access_ip value.

  • The ansible_host value is the same value as access_ip

Existing cluster hosts configuration is:

all:
  hosts:
    node1:
      ansible_host: 203.0.113.11
      ip: ip
      access_ip: access_ip
    node2:
      ansible_host: 203.0.113.12
      ip: ip
      access_ip: access_ip
    node3:
      ansible_host: 203.0.113.13
      ip: ip
      access_ip: access_ip
    node4:
      ansible_host: 203.0.113.14
      ip: ip
      access_ip: access_ip


81 

Verify the IP addresses.


82 

Enter the following to import the Kubernetes images to the repository:

../nspk8sctl import ↵


83 

Enter the following:

../nspk8sctl install ↵

The NSP Kubernetes environment is deployed.


84 

The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.


85 

Open a console window on the NSP cluster host.


86 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.
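
To check progress at a glance, you can filter the output so that only pods that are not yet Running or Completed are listed; for example:

kubectl get pods -A | grep -vE 'Running|Completed' ↵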


87 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


Restore NSP system configuration
 
88 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_DEPLOYER_R_r.tar.gz


89 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


90 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form MAJOR_minor

The bundle file is expanded, and the following directory of NSP installation files is created:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID


91 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


92 

Restore the required NSP configuration files.

  1. Enter the following:

    mkdir /tmp/appliedConfig ↵

  2. Enter the following:

    cd /tmp/appliedConfig ↵

  3. Transfer the following configuration backup file saved in To prepare for an NSP system upgrade from Release 22.6 or earlier to the /tmp/appliedConfig directory:

    nspConfiguratorConfigs.zip

  4. Enter the following:

    unzip nspConfiguratorConfigs.zip ↵

    The configuration files are extracted to the current directory, and include some or all of the following, depending on the previous deployment:

    • license file

    • nsp-config.yml file

    • TLS files; may include subdirectories

    • SSH key files

    • nsp-configurator/generated directory content

  5. Copy the extracted TLS files to the following directory:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls

  6. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  7. Copy all extracted nsp-configurator/generated files to the /opt/nsp/nsp-configurator/generated directory; an example of the copy operations in substeps 5 and 7 follows these steps.
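
For example, assuming the zip file extracted tls and nsp-configurator/generated subdirectories directly under /tmp/appliedConfig, the copy operations in substeps 5 and 7 could look like the following; adjust the source paths to match the actual extracted layout:

cp -pr /tmp/appliedConfig/tls/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ ↵
cp -pr /tmp/appliedConfig/nsp-configurator/generated/* /opt/nsp/nsp-configurator/generated/ ↵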


Label NSP cluster nodes
 
93 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


94 

Merge the current nsp-deployer.yml settings into the new nsp-deployer.yml file.

  1. Open the following new cluster configuration file and the old nsp-deployer.yml file saved in Step 9 using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml

  2. Apply the settings in the old file to the same parameters in the new file.

  3. Close the old nsp-deployer.yml file.


95 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated root-equivalent user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


96 

Save and close the new nsp-deployer.yml file.


97 

Enter the following to apply the node labels to the NSP cluster:

./nspdeployerctl config ↵


98 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

Note: The import operation may take 20 minutes or longer.

./nspdeployerctl import ↵


99 

Open a console window on the NSP cluster host.


100 

Enter the following to display the node labels:

kubectl get nodes --show-labels ↵

Cluster node information is displayed.


101 

View the information to ensure that all NSP labels are added to the cluster VMs.


Configure NSP software
 
102 

You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file.

Open the following files using a plain-text editor such as vi:

  • former configuration file—/tmp/appliedConfig/nsp-config.yml file extracted in Step 92

  • new configuration file—/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.


103 

Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file.

Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.

Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.

Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.

  • OpenSearch replaces Elasticsearch as the local log-viewing utility. Consequently, the Elasticsearch configuration cannot be directly copied from the current NSP configuration to the new configuration. Instead, you must configure the parameters in the logging > forwarding > applicationLogs > opensearch section of the new NSP configuration file.

  • Elasticsearch is introduced as a remote log-forwarding option. You can enable NSP application-log forwarding to a remote Elasticsearch server in the logging > forwarding > applicationLogs > elasticsearch section of the new NSP configuration file.

    See Centralized logging for more information about configuring NSP logging options.


104 

Configure the following parameter in the platform section as shown below:

Note: You must preserve the lead spacing of the line.

  clusterHost: "cluster_host_address"

where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations


105 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


106 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.

  1. Enable the following installation option:

    id: networkInfrastructureManagement-gnmiTelemetry

  2. Configure the throughputFactor parameter in the nsp > modules > telemetry > gnmi section; see the parameter description in nsp-config.yml for the required value, which is based on the management scale:

               throughputFactor: n

    where n is the required throughput factor for your deployment


107 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""


108 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


109 

If the NSP system includes one or more Release 22 analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.

Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true as shown below:

  analyticsServer:

    legacyPortEnabled: true


110 

If required, configure the user authentication parameters in the sso section; see NSP SSO configuration parameters for configuration information.


111 

If you have an updated license, ensure that the location of your license.zip file, as indicated in the nsp-config.yml file, is in the correct location on the NSP deployer host.


112 

Save and close the new nsp-config.yml file.


113 

Close the previous nsp-config.yml file.


114 

If you are configuring the new standby (former primary) cluster in a DR deployment, obtain the TLS and telemetry artifacts from the NSP cluster in the new primary data center.

  1. If remote root access is disabled, switch to the designated root-equivalent user.

  2. Enter the following:

    scp -r address:/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ca/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ca/ ↵

    where address is the address of the NSP deployer host in the primary cluster

  3. Enter the following:

    scp address:/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/telemetry/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/telemetry/ ↵

  4. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  5. Enter the following to copy any Keycloak secret files that are on the primary cluster:

    scp address:/opt/nsp/nsp-configurator/generated/nsp-keycloak-*-secret /opt/nsp/nsp-configurator/generated/ ↵

  6. If remote root access is disabled, switch back to the root user.


115 

If you are upgrading the new primary (former standby) cluster in a DR deployment, stop here and return to Workflow for DR NSP system upgrade from Release 22.6 or earlier.


Restore dedicated MDM node labels
 
116 

If you are not including any dedicated MDM nodes in addition to the member nodes of a standard or enhanced NSP cluster, go to Step 125.


117 

Log in as the root user on the NSP cluster host.


118 

Open a console window.


119 

If remote root access is disabled, switch to the designated root-equivalent user.


120 

Perform the following steps for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node

    where MDM_node is the node IP address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵


121 

If remote root access is disabled, switch back to the root user.


122 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


123 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


124 

For each node, enter the following sequence of commands:

kubectl label node node mdm=true ↵

kubectl cordon node ↵

where node is the NAME value of the MDM node, as recorded in Step 123
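
For example, if the nodes recorded in Step 123 are hypothetical nodes named node5 and node6:

for node in node5 node6; do
  kubectl label node "$node" mdm=true
  kubectl cordon "$node"
done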


Deploy NSP cluster
 
125 

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


126 

If you are upgrading the new standby (former primary) cluster in a DR deployment, go to Step 129.


127 

If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must start the cluster in restore mode; enter the following:

Note: If the NSP cluster members do not have the required SSH key, you must include the --ask-pass argument in the nspdeployerctl command, as shown in the following example, and are subsequently prompted for the root password of each host:

nspdeployerctl install arguments --ask-pass ↵

./nspdeployerctl install --config --restore ↵


Restore NSP data
 
128 

If you are creating the new standalone NSP cluster, or the new primary NSP cluster in a DR deployment, you must restore the NSP databases and file service data; perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Start NSP
 
129 

If you are creating the new standby cluster in a DR deployment, enter the following on the NSP deployer host:

./nspdeployerctl install --config --deploy ↵

The NSP starts.


Monitor NSP initialization
 
130 

Open a console window on the NSP cluster host.


131 

If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, perform the following steps to uncordon the nodes cordoned in Step 124.

  1. Enter the following:

    kubectl get pods -A | grep Pending ↵

    The pods in the Pending state are listed; an mdm-server pod name has the format mdm-server-ID.

    Note: Some mdm-server pods may be in the Pending state because the manually labeled MDM nodes are cordoned in Step 124. You must not proceed to the next step if any pods other than the mdm-server pods are listed as Pending. If any other pod is shown, re-enter the command periodically until no pods, or only mdm-server pods, are listed.

  2. Enter the following for each manually labeled and cordoned node:

    kubectl uncordon node

    where node is an MDM node name recorded in Step 124

    The MDM pods are deployed.

    Note: The deployment of all MDM pods may take a few minutes.

  3. Enter the following periodically to display the MDM pod status:

    kubectl get pods -A | grep mdm-server ↵

  4. Ensure that the number of mdm-server-ID instances is the same as the mdm clusterSize value in nsp-config.yml, and that each pod is in the Running state. Otherwise, contact technical support for assistance.


132 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, the completed NSP cluster initialization marks the end of the network management outage associated with the upgrade.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.

  2. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  3. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

  4. Enter the following to display the status of the NSP cluster members:

    kubectl get nodes ↵

    The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.


133 

Enter the following on the NSP cluster host to ensure that all pods are running:

kubectl get pods -A ↵

The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.

The NSP deployer log file is /var/log/nspdeployerctl.log.


Verify upgraded NSP cluster operation
 
134 

Use a browser to open the NSP cluster URL.


135 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


Upgrade MDM adaptors
 
136 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources, or CRs.

Perform the following steps.

Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.

  1. Upgrade the adaptor suites; see “How do I install adaptor artifacts that are not supported in the Artifacts view?” in the NSP System Administrator Guide for information.

  2. When the adaptor suites are upgraded successfully, use NSP Artifacts to install the required telemetry artifact bundles that are packaged with the adaptor suites.

    • nsp-telemetry-cr-nodeType-version.rel.release.ct.zip

    • nsp-telemetry-cr-nodeType-version.rel.release-va.zip

  3. View the messages displayed during the installation to verify that the artifact installation is successful.


Restore classic telemetry collection
 
137 

Telemetry data collection for classically mediated NEs does not resume automatically after an upgrade to NSP Release 24.4; manual action is required.

If your NSP system collects telemetry data from classically mediated NEs, restore the telemetry data collection.

  1. The upgrade changes IPv6 and dual-stack NE IDs. Using the updated NE ID from Device Management, edit your subscriptions and update the NE ID in the Object Filter. See “How do I manage subscriptions?” in the NSP Data Collection and Analysis Guide.

  2. If you have subscriptions with telemetry type telemetry:/base/interfaces/utilization, update the control files:

    1. Disable the subscriptions with telemetry type telemetry:/base/interfaces/utilization.

    2. Obtain the latest nsp-telemetry-cr-va-sros artifact bundle from the Nokia Support portal and install it; see “How do I install an artifact bundle?” in the NSP Network Automation Guide.

    3. Enable the subscriptions.

  3. Add the classically mediated devices to a unified discovery rule that uses GRPC mediation; see “How do I stitch a classic device to a unified discovery rule?” in the NSP Device Management Guide.

    Collection resumes for Classic NEs.

Note: The subscription processing begins after the execution of a discovery rule, and may take 15 minutes or more.


Perform post-upgrade tasks
 
138 

If you uninstalled any NSP logical inventory adaptor suites in Step 23 of To prepare for an NSP system upgrade from Release 22.6 or earlier, perform the following steps.

  1. Perform “How do I install adaptor artifacts that are not supported in the Artifacts view?” in the NSP System Administrator Guide to re-install the adaptor suites.

  2. Enable logical inventory polling policies in the NSP.


139 

Use the NSP to monitor device discovery and to check network management functions.


140 

Back up the NSP databases; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.


141 

Close the open console windows.

End of steps