To migrate from an independent NFM-P system to an NSP cluster deployment

Purpose

Perform this procedure to transfer the nspOS functions of an independent NFM-P system to a greenfield NSP cluster deployment that includes the NFM-P.

Note: The migration transfers the NFM-P service groups to the NSP. The transfer may take hours, depending on the number of services. During this time, you must not auto-create any service groups. You can use the Find Ungrouped members count in Map Layouts and Groups to help determine when the upload is complete, after which you can auto-create groups.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
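
For example, a hypothetical Release 23.11, patch 0, with build version 325 would give a release-ID of 23.11.0-rel.325, and a path element such as NSP-CN-DEP-23.11.0-rel.325. These numbers are illustrative only; use the values from your delivered software bundle.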

Steps
Prepare for migration
 

1 

Perform Step 1 to Step 49 of “To install the NSP” to do the following:

  • Obtain the NSP software.

  • Create the NSP deployer host.

  • Create the NSP cluster VMs.


2 

Using the specifications in the response to your Nokia Platform Sizing Request, create a VM for each additional RPM-based NSP component.
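
The VM creation method depends on your virtualization environment and is outside the scope of this procedure. As an illustrative sketch only, assuming a RHEL KVM host and a qcow2 component image, where the name, resource values, and paths are placeholders to be replaced with the figures from your sizing response:

  # placeholder name, sizes, and paths; substitute your sizing-response values
  virt-install --name nsp-component-1 --vcpus 8 --memory 65536 \
    --disk path=/var/lib/libvirt/images/nsp-component-1.qcow2 \
    --import --os-variant rhel8.0 --network bridge=br0 --noautoconsole ↵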


3 

Perform the steps in the “To back up the main database from the client GUI” procedure in one of the following guides, depending on the installed NFM-P release:

  • Release 22.9 or earlier—NSP NFM-P Administrator Guide

  • Release 22.11 or later—NSP System Administrator Guide


4 

Transfer the following TLS files from the current PKI-server station to a secure location on a station that is unaffected by the migration activity, as shown in the sketch after this list:

Note: The PKI server address can be viewed using the samconfig utility on an NFM-P main server station.

  • ca.pem

  • ca.key

  • ca_internal.pem

  • ca_internal.key
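
A minimal transfer sketch, assuming scp connectivity from the PKI-server station; tls_dir, user, safe_station, and the destination path are placeholders for the directory that holds the files, a login account, the unaffected station, and the secure location:

  scp tls_dir/ca.pem tls_dir/ca.key tls_dir/ca_internal.pem tls_dir/ca_internal.key user@safe_station:/path/to/secure/location/ ↵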


5 

Perform Step 7 to Step 12 on each NSP cluster.

Note: In a DR deployment, you must perform the steps first on the NSP cluster that you want to be the primary cluster after the upgrade.


6 

Go to Step 13.


Deploy NSP clusters
 

7 

Perform Step 50 to Step 106 of “To install the NSP” to deploy the NSP Kubernetes environment and configure the NSP deployment parameters.


Restore NSP databases
 

8 

Copy the following Neo4j and PostgreSQL database backup files created in Step 3 to an empty temporary directory on the NSP deployer host; a copy sketch follows the list:

  • nspos-neo4j_backup_timestamp.tar.gz

  • nspos-postgresql_backup_timestamp.tar.gz

where timestamp is the backup creation time
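
A copy sketch, assuming the backup files are reachable over scp; backup_station, backup_dir, and the temporary directory name are placeholders:

  mkdir -p /tmp/nfmp-db-backup ↵

  scp backup_station:backup_dir/nspos-neo4j_backup_timestamp.tar.gz backup_station:backup_dir/nspos-postgresql_backup_timestamp.tar.gz /tmp/nfmp-db-backup/ ↵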


9 

Perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide to restore only the following databases on the NSP cluster:

Note: Performing the procedure also starts the NSP.

  • Neo4j database

  • PostgreSQL database


10 

Monitor the NSP initialization. If the status of any pod is Error, you must correct the error; see the NSP System Administrator Guide for information about recovering an errored pod.

Note: You must not proceed to the next step until the cluster is operational and no pods are in error.
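
To identify errored pods, you can use standard Kubernetes commands from a station with kubectl access to the cluster; this is generic Kubernetes usage, not an NSP-specific tool:

  # list pods in any state other than Running or Completed; the header line always appears
  kubectl get pods -A | grep -vE 'Running|Completed' ↵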


Transfer security artifacts to standby cluster
 
11 

If you are configuring the standby NSP cluster in a DR deployment, obtain the SSH, TLS, and telemetry artifacts from the NSP cluster in the primary data center.

  1. Open a console window on the NSP deployer host.

  2. If remote root access is disabled, switch to the root-equivalent user specified in Step 55 of “To install the NSP”.

  3. Enter the following:

    mkdir -p /opt/nsp/nsp-configurator/generated/ssh ↵

  4. Enter the following:

    scp address:/opt/nsp/nsp-configurator/generated/ssh/* /opt/nsp/nsp-configurator/generated/ssh/ ↵

    where address is the address of the NSP deployer host in the primary cluster

  5. Enter the following:

    scp -r address:/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ca/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ca/ ↵

  6. Enter the following:

    scp address:/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/telemetry/* /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/telemetry/ ↵

  7. If remote root access is disabled, switch back to the root user.
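
As an optional check under the same assumptions, list the transferred artifacts on the standby deployer host to confirm that the copies succeeded:

  ls -l /opt/nsp/nsp-configurator/generated/ssh/ ↵

  ls -lR /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tls/ ↵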


Set IGP topology data source
 
12 

For the IGP maps to function after the migration, you may need to change the IGP topology data source; see “Configuring the IGP topology data source” for information.


Reconfigure NFM-P
 
13 

Perform Step 1 to Step 17 of “To add an independent NFM-P to an existing NSP deployment”.


Synchronize system data
 
14 

Verify that the NFM-P main server is fully initialized.

Note: You must not advance to the next step until the server is fully initialized.

  1. Enter the following as the nsp user on the main server station:

    cd /opt/nsp/nfmp/nms/bin ↵

  2. Enter the following:

    ./nmsserver.bash -s nms_status ↵

    Output like the following is displayed:

    Network Functions Manager - Packet: Main Server Information

    -- Build Information

      -- NSP Version R.r.v - Built on date

    -- System Information

      -- Host/IP: 203.0.113.164

    -- Server Information

      -- Server is a standalone system

      -- Server is server_state

      -- SSL Channel Disabled for OSS and GUI Clients

    The server is fully initialized if the following is displayed:

      -- Server is Up

    If the server is not fully initialized, wait five minutes and then repeat this step.
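
Rather than rerunning the command manually, you can poll from the shell; the following minimal loop, keyed on the “Server is Up” text shown above, checks every five minutes:

  until ./nmsserver.bash -s nms_status | grep -q "Server is Up"; do sleep 300; done ↵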


15 

Log in as the root user on the NSP deployer host in the primary data center.


16 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/database ↵


17 

Enter the following:

./nfmp-resync.sh ↵

The following time-stamped lines are displayed as the script synchronizes the NFM-P and NSP system data:

timestamp: Running with namespace/nsp-platform-tomcat-pod_ID

timestamp: Resync triggered successfully

where

namespace is the Kubernetes namespace

pod_ID is an alphanumeric pod ID


Reconfigure NSP analytics servers
 
18 

If the system from which you are migrating includes one or more NSP analytics servers, perform the following steps on each NSP analytics server station.

  1. Log in as the nsp user.

  2. Enter the following:

    bash$ cd /opt/nsp/analytics/bin ↵

  3. Enter the following to stop the server:

    bash$ ./AnalyticsAdmin.sh stop ↵

    The following and other messages are displayed:

    Stopping Analytics Application

    When the analytics server is completely stopped, the following is displayed:

    Analytics Application is not running

    Do not proceed to the next step until the server is completely stopped.

  4. Enter the following:

    bash$ ./AnalyticsAdmin.sh updateConfig ↵

    The script displays the following message and prompt:

    THIS ACTION UPDATES THE CONFIG FILE

    Please type 'YES' to continue

  5. Enter YES.

    The script displays the following message, followed by the first in a series of prompts. At each prompt, you must enter a parameter value; to accept a default shown in brackets, press ↵.

    Config file found.

  6. Configure the following parameters as described below.

    • Primary PostgreSQL Repository Database Host—standalone or primary NSP cluster VIP address

    • Secondary PostgreSQL Repository Database Host—standby NSP cluster VIP address; leave blank for a standalone NSP deployment

    • Zookeeper Connection String—the NSP cluster VIP addresses and ports, separated by a semicolon, in the following form (a concrete illustration follows this list):

      address_1:port;address_2:port

    • Use NFM-P-only mode—must be set to true
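
    As a concrete illustration of the Zookeeper Connection String, hypothetical VIP addresses 203.0.113.10 and 203.0.113.20 with the standard ZooKeeper client port of 2181 (confirm the port used in your deployment) would give:

      203.0.113.10:2181;203.0.113.20:2181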


19 

Enter the following:

bash$ ./AnalyticsAdmin.sh start ↵

The analytics server starts.


Reconfigure WS-NOC
 
20 

If the deployment includes the WS-NOC, configure it to use the new nspOS instance by specifying the advertised address of the NSP cluster as the nspOS address.

End of steps