To prepare for an NSP system upgrade from Release 22.6 or earlier

Purpose

Perform this procedure to prepare for an NSP system upgrade from Release 22.3 or 22.6.

Note: The NSP RHEL user named nsp that is created on an NSP deployer host or NSP cluster VM during deployment requires user ID 1000. If either of the following is true, you must make the ID available to the nsp user on the affected station before the upgrade, or the upgrade fails:

  • A user other than node_exporter on the NSP deployer host has ID 1000.

  • A user on an NSP cluster VM has ID 1000.

The following RHEL command returns the name of the user that has ID 1000, or nothing if the user ID is unassigned:

awk -F: ' { print $1" "$3 } ' /etc/passwd | grep 1000

You can make the ID available to the nsp user by doing one of the following:

• deleting the user

• using the RHEL usermod command to change the user ID, as shown in the example below
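
For example, assuming the command above reports a hypothetical user named olduser as the holder of ID 1000, either of the following commands frees the ID (the replacement ID 1001 is illustrative only; use any ID that is not already assigned):

userdel olduser ↵

or

usermod -u 1001 olduser ↵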

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
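
For example, a hypothetical release-ID of 24.4.0-rel.5 (the values are illustrative only) corresponds to a directory name such as NSP-CN-DEP-24.4.0-rel.5.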

Steps
Back up NSP databases, system data
 

1

Log in as the root user on the appropriate station, based on the installed NSP release:

  • Release 22.3—NSP configurator VM

  • Release 22.6—NSP deployer host


2

Transfer the appropriate file, based on the installed NSP release, to a secure location on a separate station that is unaffected by the upgrade activity (for example, using scp as shown after this list):

  • Release 22.3—/opt/nsp/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip

  • Release 22.6—/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip
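
The following scp command is a minimal transfer example using the Release 22.6 path; the user name, host name, and destination directory are placeholders, and the release-ID portion of the path must match your installation:

scp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip user@backup-station:/opt/backups/ ↵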


3

If you are upgrading a standalone NSP cluster, or the primary cluster in a DR deployment, perform the following steps.

Note: The backup operations may take considerable time, during which you can start the software download described in Step 4.

  1. Back up the NSP databases; perform the appropriate NSP database backup procedure in the NSP System Administrator Guide for the currently installed NSP release.

  2. If the NSP system is at Release 22.3 and includes the NSP file service, perform To back up the Release 22.3 NSP file service data.

  3. If the NSP deployment includes IP resource control deployed outside the NSP cluster, back up the IPRC Tomcat database of IP resource control as described in the NSP System Administrator Guide for the currently installed NSP release.


Obtain installation software
 

4

Download the following from the NSP downloads page on the Nokia Support portal to a local station that is not affected by the upgrade activity:

Note: You must also download the .cksum file associated with each.

Note: This step applies only when using an NSP OEM disk image.

  • NSP_K8S_DEPLOYER_R_r.tar.gz—bundle for installing the registry and deploying the container environment

  • one of the following RHEL OS images for creating the NSP deployer host and NSP cluster VMs:

    • NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2

    • NSP_K8S_PLATFORM_RHEL8_yy_mm.ova

  • NSP_DEPLOYER_R_r.tar.gz—bundle for installing the NSP application software

where

R_r is the NSP release ID, in the form Major_minor

yy_mm represents the year and month of issue


5

Record benchmarks such as system KPIs, equipment inventories, and service lists for verification after the upgrade.


6

It is strongly recommended that you verify the message digest of each NSP image file or software bundle that you download from the Nokia Support portal. The download page includes checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum command.

To verify a file checksum, perform the following steps.

  1. Enter the following:

    command file

    where

    command is md5sum, sha256sum, or sha512sum

    file is the name of the file to check

    A file checksum is displayed.

  2. Compare the displayed checksum value with the value in the associated .cksum file.

  3. If the values do not match, the file download has failed. Download a new copy of the file, and then repeat this step.
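
For example, to check the NSP deployer bundle using SHA-256 (the file name shown is generic; use the name of the downloaded file and compare the output with the matching value in its .cksum file):

sha256sum NSP_DEPLOYER_R_r.tar.gz ↵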


7

If the downloaded NSP_DEPLOYER_R_r.tar.gz file has multiple parts, enter the following to create one NSP_DEPLOYER_R_r.tar.gz file from the partial image files:

cat filename.part* >filename.tar.gz ↵

where filename is the image file name

A filename.tar.gz file is created in the current directory.
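
For example, if the bundle parts are named NSP_DEPLOYER_R_r.part1, NSP_DEPLOYER_R_r.part2, and so on (the part naming is an assumption; adjust the command to the names of your downloaded files):

cat NSP_DEPLOYER_R_r.part* >NSP_DEPLOYER_R_r.tar.gz ↵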


Back up Elasticsearch log data
 

8

If log forwarding to Elasticsearch is not enabled in the NSP system, go to Step 22.

Starting in NSP Release 23.4, OpenSearch replaces Elasticsearch as the NSP log-viewing utility. If you are upgrading from an NSP release that uses Elasticsearch for viewing NSP logs, it is strongly recommended that you preserve the Elasticsearch log data collected by the NSP before you upgrade the NSP.

You can later restore the backed-up data for import by an Elasticsearch server in order to review the log data, if required, as described in “How do I restore the NSP Elasticsearch log data?” in the NSP System Administrator Guide.

Log in as the root user on the station that has the downloaded NSP_DEPLOYER_R_r.tar.gz file.


9

Navigate to the directory that contains the NSP_DEPLOYER_R_r.tar.gz file.


10 

Enter the following:

tar --wildcards -xvf NSP_DEPLOYER_R_r.tar.gz '*nsp-log-collector.zip' '*README.txt' ↵

The nsp-log-collector.zip file and a README.txt file are extracted to the following directory path below the current directory:

NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/support/logCollector

Note: The README.txt contains information about using the backup utility.


11 

Log in as the root user on NSP cluster node 1, which is called one of the following, based on the installed NSP release:

  • Release 22.3—NSP configurator VM

  • Release 22.6—NSP cluster host


12 

Open a console window.


13 

Navigate to a directory that has sufficient free space for the backup log data, such as /opt.

Note: The space required for the log backup is based on the number of days for which log data is stored by the NSP, which is specified by the logRetentionPeriodInDaysOverride parameter value in the NSP cluster configuration file, and the average amount of log data per day.
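
To confirm the free space that is available in the target directory, you can use the standard RHEL df command; for example:

df -h /opt ↵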


14 

Transfer the extracted nsp-log-collector.zip and README.txt files to the current directory.


15 

In order to perform an Elasticsearch data backup, the java-1.8.0-openjdk RHEL OS package must be installed. However, the package may not be present on a system at an earlier NSP release.

Enter the following:

yum -y install java-1.8.0-openjdk ↵

If the package is not installed, the yum utility installs the package. Otherwise, the utility indicates that the package is installed, and nothing is done.
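
Optionally, you can confirm afterward that the package is present by using a standard RHEL rpm query; for example:

rpm -q java-1.8.0-openjdk ↵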


16 

Enter the following:

unzip nsp-log-collector.zip ↵

The following files are created in an nsp-log-collector-release-ID/bin directory in the current directory:

  • nsp-log-collector

  • nsp-log-collector.bat


17 

After the files are extracted, enter the following:

cd nsp-log-collector-release-ID/bin ↵


18 

Enter the following to back up all collected Elasticsearch log data:

./nsp-log-collector --getAll path ↵

where path is the local directory in which to store the backed-up log data

The following prompt is displayed:

Do you want to proceed with log collection? (y/n) :


19 

Enter y.

The backup process begins.

The backup process creates the following .zip file in the specified path directory:

Logs-timestamp.zip

where timestamp is the backup creation date and time


20 

Transfer the Logs-timestamp.zip file for safekeeping to a secure location on a station that is not part of the NSP deployment.


Prepare NFM-P migration to OAUTH2 user authentication
 
21 

If your NSP deployment currently uses CAS user authentication, you must migrate to OAUTH2 authentication.

If your NSP deployment includes the NFM-P, edit NFM-P user accounts as required to prepare for importing the users to the NSP local user database. For example, remove duplicate user IDs, or assign e-mail addresses.

Note: For users whose user account includes an e-mail address, the import operation sends a new randomly generated temporary password. Users who lack an e-mail address are assigned a global temporary password.


Check and prepare NSP cluster
 
22 

Perform the following steps to verify that the local NSP cluster is fully operational.

  1. Log in as the root user on an NSP cluster member in the data center.

  2. Enter the following:

    kubectl get pods -A ↵

    The status of each pod is displayed.

    The NSP cluster is operational if the status of each member pod is Running or Completed.

  3. If any pod is not in the Running or Completed state, you must correct the condition; see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
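
As an optional convenience, the following command (a standard kubectl and grep filter, not part of the documented output) lists only the pods that are not in the Running or Completed state:

kubectl get pods -A | grep -Ev 'Running|Completed' ↵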


23 

Ensure that the RHEL chronyd time-synchronization service is running on each component, and that chronyd is actively tracking a central time source. See the RHEL documentation for information about using the chronyc command to view the chronyd synchronization status.

Note: NSP deployment is blocked if the chronyd service is not active.
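
For example, the following standard RHEL commands show the chronyd service state and the time sources that chronyd is tracking; the output varies by configuration:

systemctl status chronyd ↵

chronyc sources ↵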


24 

Identify the dedicated MDM nodes in the NSP cluster; you require the information for restoring the cluster configuration later in the procedure.

  1. Log in as the root user on any NSP cluster node.

  2. Open a console window.

  3. Enter the following:

    kubectl get nodes --show-labels ↵

  4. Identify the dedicated MDM nodes, which have only the following label and no other NSP labels:

    mdm=true

    For example:

    /os=linux,mdm=true

  5. Record the name of each dedicated MDM node.
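
As an optional check, you can list only the nodes that carry the mdm=true label by using a kubectl label selector; for example:

kubectl get nodes -l mdm=true --show-labels ↵

Because the selector matches every node that has the label, you must still confirm that a node has no other NSP labels before recording it as a dedicated MDM node.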


25 
WARNING

Upgrade Failure

An NSP system upgrade from Release 22.3 fails if any logical inventory adaptor suites are installed.

In order to successfully upgrade an NSP system at Release 22.3, you must remove each logical inventory adaptor suite before the upgrade.

The NSP upgrade procedure includes a step for re-installing the adaptor suites after the upgrade.

If you are upgrading from Release 22.3, perform “How do I uninstall MDM adaptor suites?” in the NSP System Administrator Guide for each installed logical inventory adaptor suite.

End of steps