To upgrade a Release 22.9 or later NSP cluster

Purpose
CAUTION

Network management outage

The procedure requires a shutdown of the NSP system, which causes a network management outage.

Ensure that you perform the procedure only during a scheduled maintenance period with the assistance of technical support.

Perform this procedure to upgrade a standalone or DR NSP system at Release 22.9 or later after you have performed To prepare for an NSP system upgrade from Release 22.9 or later.

Note: The NSP RHEL user named nsp that is created on an NSP deployer host or NSP cluster VM during deployment requires user ID 1000. If either of the following is true, you must make the ID available to the nsp user on the affected station before the upgrade, or the upgrade fails:

  • A user other than node_exporter on the NSP deployer host has ID 1000.

  • A user on an NSP cluster VM has ID 1000.

The following RHEL command returns the name of the user that has ID 1000, or nothing if the user ID is unassigned:

awk -F: '$3 == 1000 { print $1 }' /etc/passwd

You can make the ID available to the nsp user by doing one of the following (see the example after the list):

• deleting the user

• using the RHEL usermod command to change the user ID
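For example, the following hedged commands identify and move the conflicting user; the user name other_user and the new ID 1500 are placeholders, and the new ID must be unused:

awk -F: '$3 == 1000 { print $1 }' /etc/passwd

usermod -u 1500 other_user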

Note: The following tokens denote a specific NSP release ID in a file path:

  • old-release-ID—currently installed release

  • new-release-ID—release you are upgrading to

Each release ID has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
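For example, 23.11.0-rel.123 is an illustrative release ID in which 23.11.0 is the NSP release and 123 is the version.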

Steps
Back up NSP deployer host configuration files
 

1 

Log in as the root user on the NSP deployer host.


2 

Open a console window.


3 

Back up the following NSP Kubernetes registry certificate files (an example copy command follows the file list):

Note: The files are in one of the following directories, depending on the release you are upgrading from:

  • Release 22.9—/opt/nsp/nsp-registry-old-release-ID/config

  • Release 22.11 or later—/opt/nsp/nsp-registry/tls

  • nokia-nsp-registry.crt

  • nokia-nsp-registry.key
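For example, assuming an upgrade from Release 22.11 or later and an arbitrary /tmp/nsp-upgrade-backup backup directory (adjust the source directory for Release 22.9):

mkdir -p /tmp/nsp-upgrade-backup ↵

cp /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key /tmp/nsp-upgrade-backup/ ↵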


4 

Back up the following Kubernetes deployer configuration file:

/opt/nsp/nsp-k8s-deployer-old-release-ID/config/k8s-deployer.yml


5 

Back up the following NSP deployer configuration file:

/opt/nsp/NSP-CN-DEP-old-release-ID/config/nsp-deployer.yml


6 

Copy the files backed up in Step 3, Step 4, and Step 5 to a separate station outside the NSP cluster for safekeeping.
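For example, assuming the files from Step 3, Step 4, and Step 5 have been collected in an arbitrary /tmp/nsp-upgrade-backup directory, and backup_station is a placeholder for a reachable station:

scp -r /tmp/nsp-upgrade-backup root@backup_station:/backups/ ↵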


Disable SELinux enforcing mode
 

7 

If SELinux enforcing mode is enabled on the NSP deployer host and NSP cluster members, you must switch to permissive mode on each; otherwise, you can skip this step.

Perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and on each NSP cluster member.

Note: If SELinux enforcing mode is enabled on any NSP component during the upgrade, the upgrade fails.
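For example, you can confirm the current SELinux mode on each station with the following command; the supported method for switching modes is the procedure referenced above:

getenforce ↵

The output is Enforcing, Permissive, or Disabled.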


Apply OS update to NSP deployer host
 

8 

If the NSP deployer host is deployed in a VM created using an NSP RHEL OS disk image, perform To apply a RHEL update to an NSP image-based OS.


9 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, you do not need to upgrade Kubernetes; go to Step 24.


Prepare for Kubernetes upgrade
 
10 

Transfer the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz file to the /opt/nsp directory on the NSP deployer host.
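For example, if the file was downloaded to another station, a hedged transfer command is the following, where deployer_host is a placeholder for the NSP deployer host address:

scp NSP_K8S_DEPLOYER_R_r.tar.gz root@deployer_host:/opt/nsp/ ↵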


11 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


12 

Enter the following:

tar -zxvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directories are created:

  • /opt/nsp/nsp-registry-new-release-ID

  • /opt/nsp/nsp-k8s-deployer-new-release-ID


13 

After the file expansion completes successfully, enter the following to remove the file, which is no longer required:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵


14 

If you are upgrading from Release 22.9, restore the Kubernetes registry certificates.

  1. Enter the following on the NSP deployer host:

    mkdir -p /opt/nsp/nsp-registry/tls ↵

  2. Copy the following certificate files backed up in Step 3 to the /opt/nsp/nsp-registry/tls directory (see the example after the list):

    • nokia-nsp-registry.crt

    • nokia-nsp-registry.key
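For example, assuming the backed-up certificate files have been returned to an arbitrary /tmp/nsp-upgrade-backup directory on the NSP deployer host:

cp /tmp/nsp-upgrade-backup/nokia-nsp-registry.crt /tmp/nsp-upgrade-backup/nokia-nsp-registry.key /opt/nsp/nsp-registry/tls/ ↵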


Upgrade Kubernetes registry
 
15 

Enter the following:

cd /opt/nsp/nsp-registry-new-release-ID/bin ↵


16 

Enter the following to begin the registry upgrade:

Note: During the registry upgrade, the registry may be temporarily unavailable. During such a period, an NSP pod that restarts on a new cluster node, or a pod that starts, is in the ImagePullBackOff state until the registry upgrade completes. Any such pods recover automatically after the upgrade, and no user intervention is required.

./nspregistryctl install ↵


17 

When the registry upgrade is complete, verify the upgrade.

  1. Enter the following:

    kubectl get nodes ↵

    NSP deployer node status information like the following is displayed:

    NAME        STATUS    ROLES                  AGE     VERSION

    node_name   status    control-plane,master   xxdnnh   version

  2. Verify that status is Ready, and that version is greater than the value recorded in Step 5; do not proceed to the next step otherwise.

  3. Enter the following periodically to monitor the NSP cluster initialization:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is fully operational when the status of each pod is Running or Completed.

  4. If any pod fails to enter the Running or Completed state, correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.


Configure Kubernetes deployer
 
18 

Copy the k8s-deployer.yml file backed up in Step 4 to the following directory on the NSP deployer host:

/opt/nsp/nsp-k8s-deployer-new-release-ID/config


19 

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


20 

Enter the following to create the new hosts.yml file:

./nspk8sctl config -c ↵


21 

Enter the following to list the node entries in the new hosts.yml file:

./nspk8sctl config -l ↵

Output like the following example for a four-node cluster is displayed:

Note: If NAT is used in the cluster:

  • The access_ip value is the public IP address of the cluster node.

  • The ip value is the private IP address of the cluster node.

  • The ansible_host value is the same value as access_ip

Note: If NAT is not used in the cluster:

  • The access_ip value is the IP address of the cluster node.

  • The ip value matches the access_ip value.

  • The ansible_host value is the same value as access_ip

Existing cluster hosts configuration is:

all:
  hosts:
    node1:
      ansible_host: 203.0.113.11
      ip: ip
      access_ip: access_ip
    node2:
      ansible_host: 203.0.113.12
      ip: ip
      access_ip: access_ip
    node3:
      ansible_host: 203.0.113.13
      ip: ip
      access_ip: access_ip
    node4:
      ansible_host: 203.0.113.14
      ip: ip
      access_ip: access_ip


22 

Verify that the IP addresses shown in the output are correct for each cluster node.


23 

Enter the following to import the Kubernetes images to the repository:

./nspk8sctl import ↵


Update NSP cluster configuration
 
24 

Transfer the following downloaded file to the /opt/nsp directory on the NSP deployer host:

NSP_DEPLOYER_R_r.tar.gz


25 

Enter the following on the NSP deployer host:

cd /opt/nsp ↵


26 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is expanded, and the following directory of NSP installation files is created:

/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID


27 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


28 

Restore the required NSP configuration files.

  1. Enter the following:

    mkdir /tmp/appliedConfig ↵

  2. Enter the following:

    cd /tmp/appliedConfig ↵

  3. Transfer the following configuration backup file saved in To prepare for an NSP system upgrade from Release 22.9 or later to the /tmp/appliedConfig directory:

    nspConfiguratorConfigs.zip

  4. Enter the following:

    unzip nspConfiguratorConfigs.zip ↵

    The configuration files are extracted to the current directory, and include some or all of the following, depending on the previous deployment:

    • license file

    • nsp-config.yml file

    • TLS files; may include subdirectories

    • SSH key files

    • nsp-configurator/generated directory content

  5. Copy the extracted TLS files to the following directory (see the example after this list):

    /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/tls

  6. Copy all extracted nsp-configurator/generated files to the /opt/nsp/nsp-configurator/generated directory.
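For example, hedged copy commands are shown below; the source paths under /tmp/appliedConfig depend on how the backup was created during the preparation procedure, and the tls source path shown is illustrative:

mkdir -p /opt/nsp/nsp-configurator/generated ↵

cp -pr /tmp/appliedConfig/tls/* /opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/tls/ ↵

cp -pr /tmp/appliedConfig/opt/nsp/nsp-configurator/generated/* /opt/nsp/nsp-configurator/generated/ ↵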


29 

Open the following file with a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-new-release-ID/config/nsp-deployer.yml

Configure the following parameters:

hosts: "hosts_file"

labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"

where

hosts_file is the absolute path of the hosts.yml file; the default is /opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml

labels_file is the file name below that corresponds to your cluster deployment type (see the sample values after the list):

  • node-labels-basic-1node.yml

  • node-labels-basic-sdn-2nodes.yml

  • node-labels-enhanced-6nodes.yml

  • node-labels-enhanced-sdn-9nodes.yml

  • node-labels-standard-3nodes.yml

  • node-labels-standard-4nodes.yml

  • node-labels-standard-sdn-4nodes.yml

  • node-labels-standard-sdn-5nodes.yml
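For example, illustrative values for a three-node NSP cluster of the standard deployment type are the following:

hosts: "/opt/nsp/nsp-k8s-deployer-new-release-ID/config/hosts.yml"
labelProfile: "../ansible/roles/apps/nspos-labels/vars/node-labels-standard-3nodes.yml"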


30 

Save and close the file.


31 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

Note: The import operation may take 20 minutes or longer.

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl import ↵


Configure NSP software
 
32 

You must merge the nsp-config.yml file content from the existing deployment into the new nsp-config.yml file.

Open the following files using a plain-text editor such as vi:

  • former configuration file—/opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  • new configuration file—/opt/nsp/NSP-CN-DEP-new-release-ID/NSP-CN-new-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.

Note: The following REST-session parameters in the nsp section of the nsp-config.yml file apply only to an NSP system that uses CAS authentication, and are not to be configured otherwise:

  • ttlInMins

  • maxNumber


33 

Copy each configured parameter line from the previous nsp-config.yml file and use the line to overwrite the same line in the new file.

Note: You must maintain the structure of the new file, as any new configuration options for the new release must remain.

Note: You must replace each configuration line entirely, and must preserve the leading spaces in each line.

Note: The peer_address value that you specify must match the advertisedAddress value in the configuration of the peer cluster and have the same format; if one value is a hostname, the other must also be a hostname.

Note: If NSP application-log forwarding to NSP Elasticsearch is enabled, special configuration is required.

  • OpenSearch replaces Elasticsearch as the local log-viewing utility. Consequently, the Elasticsearch configuration cannot be directly copied from the current NSP configuration to the new configuration. Instead, you must configure the parameters in the logging > forwarding > applicationLogs > opensearch section of the new NSP configuration file.

  • Elasticsearch is introduced as a remote log-forwarding option. You can enable NSP application-log forwarding to a remote Elasticsearch server in the logging > forwarding > applicationLogs > elasticsearch section of the new NSP configuration file.

    See Centralized logging for more information about configuring NSP logging options.


34 

Configure the following parameter in the platform section as shown below:

Note: You must preserve the lead spacing of the line.

  clusterHost: "cluster_host_address"

where cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations
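For example, using the illustrative node1 address from the earlier hosts.yml output:

  clusterHost: "203.0.113.11"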


35 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


36 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, perform the following steps.

  1. Enable the following installation option:

    id: networkInfrastructureManagement-gnmiTelemetry

  2. Configure the throughputFactor parameter in the nsp > modules > telemetry > gnmi section; see the parameter description in the nsp-config.yml file for the required value, which is based on the management scale:

     throughputFactor: n

    where n is the required throughput factor for your deployment


37 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""
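An illustrative completed example of the section above, using placeholder addresses for the primary and standby main database stations, is the following:

    nfmpDB:
      primaryIp: "203.0.113.21"
      standbyIp: "203.0.113.22"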


38 

If all of the following are true, configure the following parameters in the integrations section:

  • The NSP system includes the NFM-P.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


39 

If the NSP system includes one or more Release 22.11 or earlier analytics servers that are not being upgraded as part of the current NSP system upgrade, you must enable NSP and analytics compatibility; otherwise, you can skip this step.

Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true as shown below:

  analyticsServer:

    legacyPortEnabled: true


40 

Specify the user authorization mechanism in the sso section, as shown below.

  sso:

    authMode: "mode"

where mode is one of the following:

  • oauth2—default; uses a local NSP user database, and can include remote authentication servers

  • cas—deprecated; uses the NFM-P or remote authentication servers for authentication


41 

If you use CAS authentication and are not migrating to OAUTH2 at this time, add the required parameter sections.

Note: The parameters apply only to an NSP system that uses CAS authentication.

  1. Add the following to the nsp > modules > nspos section of the file:

          rest:

            session:

              ttlInMins: 60

              maxNumber: 50

  2. Add the following to the end of the nsp > sso section:

        authMode: "cas"

        session:

          concurrentLimitsEnabled: false

          maxSessionsPerUser: 10

          maxSessionsForAdmin: 10

        throttling:

          enabled: true

          rateThreshold: 3

          rateSeconds: 9

          lockoutPeriod: 5

        loginFailure:

          enabled: false

          threshold: 3

          lockoutMinutes: 1

  3. Add the required remote-authentication subsections from the previous configuration to the end of the section.


42 

If you have an updated license, ensure that your license.zip file is in the location on the NSP deployer host that is indicated in the nsp-config.yml file.


43 

Save and close the new nsp-config.yml file.


44 

Close the previous nsp-config.yml file.


45 

The steps in the following section align with the cluster-specific actions described in Workflow for DR NSP system upgrade from Release 22.9 or later.

If you are upgrading a standalone NSP system, go to Step 50.


DR-specific instructions
 
46 

Perform Step 50 to Step 53 on the standby NSP cluster.


47 

Perform Step 50 to Step 80 on the primary NSP cluster.


48 

Perform Step 54 to Step 80 on the standby NSP cluster.


49 

Perform Step 82 to Step 87 on each NSP cluster.


Stop and undeploy NSP cluster
 
50 

Perform the following steps on the NSP deployer host to preserve the existing cluster data.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-old-release-ID/NSP-CN-old-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read:

      deleteOnUndeploy: false

  3. Save and close the file.


51 

Enter the following on the NSP deployer host to undeploy the NSP cluster:

Note: If you are upgrading a standalone NSP system, or the primary NSP cluster in a DR deployment, this step marks the beginning of the network management outage associated with the upgrade.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-old-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵

The NSP cluster is undeployed.


52 

On the NSP cluster host, enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until the output lists only the following:

  • pods in kube-system namespace

  • nsp-backup-storage pod

kubectl get pods -A ↵

The pods are listed.


Apply OS update to NSP cluster VMs
 
53 

If the NSP cluster VMs were created using an NSP RHEL OS disk image, perform the following steps on each NSP cluster VM to apply the required OS update.

  1. Log in as the root user on the VM.

  2. Perform To apply a RHEL update to an NSP image-based OS on the VM.


Upgrade Kubernetes deployment environment
 
54 

If your Kubernetes version is supported, as determined in To prepare for an NSP system upgrade from Release 22.9 or later, go to Step 57.

See the Host Environment Compatibility Guide for NSP and CLM for Kubernetes version-support information.


55 

If you are upgrading Kubernetes from a version earlier than the immediately previous version supported by the NSP, you must uninstall Kubernetes first; otherwise, you can skip this step.

Enter the following on the NSP deployer host:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following examples, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass uninstall

/opt/nsp/nsp-k8s-deployer-old-release-ID/bin/nspk8sctl uninstall ↵

The Kubernetes software is uninstalled.


56 

Enter the following on the NSP deployer host:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following examples, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

/opt/nsp/nsp-k8s-deployer-new-release-ID/bin/nspk8sctl install ↵

Note: The installation takes considerable time; during the process, each cluster node is cordoned, drained, upgraded, and uncordoned, one node at a time. The operation on each node may take 15 minutes or more.

The NSP Kubernetes environment is deployed.


Label NSP cluster nodes
 
57 

Open a console window on the NSP cluster host.


58 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


59 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


60 

Enter the following on the NSP deployer host to apply the node labels to the NSP cluster:

/opt/nsp/NSP-CN-DEP-new-release-ID/bin/nspdeployerctl config ↵


61 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 69.


62 

Perform the following steps for each additional MDM node.

  1. Enter the following on the NSP cluster host to open an SSH session as the root user on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh root@MDM_node_IP_address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵


63 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


64 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


65 

For each such node, enter the following command:

kubectl label node node mdm=true ↵

where node is the recorded NAME value of the MDM node
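For example, if the recorded NAME value of an added MDM node is node5 (an illustrative name):

kubectl label node node5 mdm=true ↵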


Upgrade NSP software
 
66 

Return to the console window on the NSP deployer host.


67 

Enter the following:

mkdir -p /opt/nsp/nsp-configurator/generated ↵


68 

If you are not changing the NSP user authentication mode during the upgrade, restore the Keycloak secret files, as described in the following step.

Note: Keycloak secret files may or may not be present, depending on the deployment.


69 

Restore the Keycloak secret files from backup.

Note: Depending on the system configuration, secret files may or may not exist.

  1. Enter the following on the NSP deployer host:

    mkdir -p /opt/nsp/nsp-configurator/generated ↵

  2. Enter the following:

    cd /tmp/appliedConfig/opt/nsp/nsp-configurator/generated ↵

  3. Enter the following:

    cp nsp-keycloak-*-secret /opt/nsp/nsp-configurator/generated/ ↵


70 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


71 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP starts.


Monitor NSP initialization
 
72 

Open a console window on the NSP cluster host.


73 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.

  2. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  3. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

  4. Enter the following to display the status of the NSP cluster members:

    kubectl get nodes ↵

    The NSP Kubernetes deployer log file is /var/log/nspk8sctl.log.


74 

Enter the following on the NSP cluster host to ensure that all pods are running:

kubectl get pods -A ↵

The status of each pod is listed; all pods are running when the displayed STATUS value is Running or Completed.

The nsp deployer log file is /var/log/nspdeployerctl.log.


Verify upgraded NSP cluster operation
 
75 

Use a browser to open the NSP cluster URL.


76 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


Upgrade MDM adaptors
 
77 

If the NSP system currently performs model-driven telemetry or classic telemetry statistics collection, you must upgrade your MDM adaptors to the latest in the adaptor suite delivered as part of the new NSP release, and install the required Custom Resources (CRs).

Perform the following steps.

Note: Upgrading the adaptors to the latest version is mandatory in order for gNMI telemetry collection to function.

  1. Upgrade the adaptor suites; see “How do I install or upgrade MDM adaptors?” in the NSP System Administrator Guide for information.

  2. When the adaptor suites are upgraded successfully, use Artifact Administrator to install the required telemetry artifact bundles that are packaged with the adaptor suites.

    • nsp-telemetry-cr-nodeType-version.rel.release.ct.zip

    • nsp-telemetry-cr-nodeType-version.rel.release-va.zip

  3. View the messages displayed during the installation to verify that the artifact installation is successful.


Upgrade or enable additional components and systems
 
78 

If the NSP deployment includes the VSR-NRC, upgrade the VSR-NRC as described in the VSR-NRC documentation.


79 

If you are including an existing NFM-P system in the deployment, perform one of the following.

  1. Upgrade the NFM-P to the NSP release; see NFM-P system upgrade from Release 22.9 or later.

  2. Enable NFM-P and NSP compatibility; perform To enable NSP compatibility with an earlier NFM-P system.

Note: An NFM-P system upgrade procedure includes steps for upgrading the following components in an orderly fashion:

  • auxiliary database

  • NSP Flow Collectors / Flow Collector Controllers

  • NSP analytics servers


80 

If the NSP system includes the WS-NOC, perform the appropriate procedure in WS-NOC and NSP integration to enable WS-NOC integration with the upgraded NSP system.


Synchronize auxiliary database password
 
81 

If the NSP deployment includes an auxiliary database, perform the following steps on the NSP cluster host.

  1. Enter the following:

    cd /opt ↵

  2. Enter the following:

    sftp root@deployer_IP ↵

    where deployer_IP is the NSP deployer host IP address

    The prompt changes to sftp>.

  3. Enter the following:

    sftp> cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵

  4. Enter the following:

    sftp> get sync-auxdb-password.bash ↵

  5. Enter the following:

    sftp> quit ↵

  6. Enter the following:

    chmod 777 sync-auxdb-password.bash ↵

  7. Enter the following:

    ./sync-auxdb-password.bash ↵

    Output like the following is displayed:

    timestamp: Synchronizing password for Auxiliary DB Output...

    timestamp: deployment.apps/tlm-vertica-output scaled

    timestamp: secret/tlm-vertica-output patched

    timestamp: deployment.apps/tlm-vertica-output scaled

    timestamp: Synchronization completed.


Purge Kubernetes image files
 
82 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-new-release-ID/bin ↵


83 

Enter the following:

./nspk8sctl purge-registry -e ↵

The images are purged.


Purge NSP image files
 
84 

Note: Perform this and the following step only after you verify that the NSP system is operationally stable and that an upgrade rollback is not required.

Enter the following on the NSP deployer host:

cd /opt/nsp/NSP-CN-DEP-new-release-ID/bin ↵


85 

Enter the following:

./nspdeployerctl purge-registry -e ↵

The charts and images are purged.


Restore SELinux enforcing mode
 
86 

If either of the following is true, perform “How do I switch between SELinux modes on NSP system components?” in the NSP System Administrator Guide on the NSP deployer host and each NSP cluster VM.

  • You switched from SELinux enforcing mode to permissive mode before the upgrade, and want to restore the use of enforcing mode.

  • The upgrade has enabled SELinux in the NSP cluster for the first time, but in permissive mode, and you want the more stringent security of enforcing mode.


87 

Close the open console windows.


Post-upgrade removal of path control subscriptions
 
88 

In NSP Release 23.11, the path control internal engine that processes flow records from the flow collector for telemetry purposes was moved into a new dedicated service. If you upgraded from NSP Release 23.4 or later to NSP Release 23.11 or later, and path control telemetry flow integration was previously enabled, you must invoke the following REST API to remove subscriptions that are no longer used:

POST https://{{fcc}}:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe

{
  "subscription": "nrcp-sub"
}
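For example, a hedged curl invocation is shown below; the FCC address, credentials, and certificate-verification options are placeholders that depend on your deployment:

curl -k -X POST "https://fcc_address:8443/rest/flow-collector-controller/rest/api/v1/export/unsubscribe" -H "Content-Type: application/json" -H "Authorization: Bearer access_token" -d '{"subscription": "nrcp-sub"}'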

End of steps