To enlarge an NSP deployment

Purpose
CAUTION

System Alteration

The procedure includes operations that fundamentally reconfigure the NSP system.

You must contact Nokia support for guidance before you attempt to perform the procedure.

CAUTION

Special Deployment Limitation

Adding MDM nodes to an NSP cluster is supported only during NSP cluster installation or upgrade.

If you intend to add one or more MDM nodes after NSP deployment, contact technical support for assistance.

Perform this procedure to change an NSP deployment to a larger deployment type. Enlarging a deployment may be required, for example, to accommodate additional NSP functions or growth in the network management scope.

The operation involves the following:

  • backing up the current NSP cluster configuration and data

  • adding VMs to the cluster, as required

  • increasing the deployment scope in the cluster configuration

  • restoring the cluster data

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
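For example, using illustrative values only, an NSP Release 23.11 deployment with patch level 0 and build version 123 would have paths such as the following:

/opt/nsp/NSP-CN-DEP-23.11.0-rel.123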

Rollback operation

To revert to the previous deployment type, as may be required in the event of a problem with the deployment, perform “How do I remove an NSP cluster node?” in the NSP System Administrator Guide.

Steps
Prepare required resources
 

1

Log in to the Nokia NSP Sizing Home page.


2

Submit an NSP Platform Sizing Request for the new deployment type.


Back up NSP databases, configuration
 

3

If you are expanding a standalone NSP cluster, or the primary cluster in a DR deployment, you must back up the following NSP databases; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.

Note: Ensure that you copy each backup file to a secure location on a station outside the NSP deployment that is reachable from the NSP cluster.

  • file-server-app

  • nspos-postgresql

  • nspos-neo4j

  • nsp-tomcat

  • nrcx-tomcat


Enlarge cluster, standalone deployment
 

4

If the NSP is a standalone deployment, go to Step 8.


Enlarge clusters, DR deployment
 

5

Perform Step 8 to Step 45 on the standby NSP cluster.


6

Perform Step 8 to Step 59 on the primary NSP cluster.


7

Perform Step 53 to Step 59 on the standby NSP cluster.


Uninstall NSP
 

8

Log in as the root user on the NSP deployer host.


9

Open a console window.


10 

Transfer the following file to a secure location on a station outside the NSP cluster that is unaffected by the conversion activity.

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip
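For example, you can use scp; the destination station and directory shown are placeholder values:

scp /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip user@secure_station:/path/to/backup/ ↵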


11 

Configure the NSP to completely remove the existing deployment.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: true

  3. Save and close the nsp-config.yml file.
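For reference, the following is a minimal sketch of the affected subsection; the exact nesting and neighboring parameters in nsp-config.yml may differ:

platform:

  kubernetes:

    deleteOnUndeploy: true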


12 

Enter the following to stop the NSP cluster:

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵


Create new NSP cluster
 
13 

Using the resource specifications in the response to your Platform Sizing Request, create the required new NSP cluster VMs and resize the existing VMs, as required.


Resize cluster as required
 
14 

Identify the dedicated MDM nodes in the NSP cluster; you require the information for creating the new cluster configuration later in the procedure.

  1. Log in as the root user on the NSP cluster host.

  2. Open a console window.

  3. Enter the following:

    kubectl get nodes --show-labels ↵

  4. Identify the dedicated MDM nodes, which have only the following label and no other NSP labels:

    mdm=true

    For example:

    kubernetes.io/os=linux,mdm=true

  5. Record the name of each dedicated MDM node.
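As an optional convenience, you can filter the node list using a standard kubectl label selector. Note that this lists every node that carries the mdm=true label, so you must still confirm that a dedicated node has no other NSP labels:

kubectl get nodes -l mdm=true --show-labels ↵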


15 

Log in as the root user on the NSP deployer host.


16 

Open a console window.


17 

Open the following file using a plain-text editor such as vi:

/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml


18 

You must add each new node to the hosts section, as shown below.

Configure the following parameters for each new node; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.

- nodeName: noden

  nodeIp: private_IP_address

  accessIp: public_IP_address

Note: Any dedicated MDM nodes must be placed at the end of the hosts section. For example, if you are expanding your NSP cluster from a node-labels-standard-sdn-4nodes deployment to node-labels-standard-sdn-5nodes, the dedicated MDM nodes must be listed after node 5.
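For illustration, the following hedged example shows the tail of a hosts section for such an expansion; all node names and addresses are placeholder values:

- nodeName: node5

  nodeIp: 10.1.2.15

  accessIp: 203.0.113.15

- nodeName: mdm1

  nodeIp: 10.1.2.21

  accessIp: 203.0.113.21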

Note: The nodeName value:

  • can include only ASCII alphanumeric and hyphen characters

  • cannot include an upper-case character

  • cannot begin or end with a hyphen

  • cannot begin with a number

  • cannot include an underscore

  • must end with a number


19 

Save and close the k8s-deployer.yml file.


20 

Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system and preferably in a remote facility.

Note: The backup file is crucial in the event of an NSP deployer host failure, so must be available from a separate station.


21 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


22 

Enter the following to create the cluster configuration:

./nspk8sctl config -c ↵

The following is displayed when the creation is complete:

✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml


23 

If remote root access is disabled, switch to the designated root-equivalent user.


24 

For each NSP cluster VM that you are adding, enter the following to distribute the SSH key to the VM:

ssh-copy-id -i ~/.ssh/id_rsa.pub address ↵

where address is the NSP cluster VM IP address


25 

If remote root access is disabled, switch back to the root user.


26 

Enter the following:

./nspk8sctl install ↵

The NSP Kubernetes environment is deployed.


27 

Log in as the root user on the NSP cluster host.


28 

Enter the following to verify that the new node is added to the cluster:

kubectl get nodes ↵


29 

Log in as the root user on the NSP deployer host.


30 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


31 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml


32 

Configure the following parameters:

hosts: "hosts_file"

labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"

where

hosts_file is the absolute path of the hosts.yml file; the default is /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml

labels_file is the file name below that corresponds to your cluster deployment type:

  • node-labels-basic-1node.yml

  • node-labels-basic-sdn-2nodes.yml

  • node-labels-enhanced-6nodes.yml

  • node-labels-enhanced-sdn-9nodes.yml

  • node-labels-standard-3nodes.yml

  • node-labels-standard-4nodes.yml

  • node-labels-standard-sdn-4nodes.yml

  • node-labels-standard-sdn-5nodes.yml
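For example, the following is a sketch of the two parameters for an expansion to a standard-sdn five-node deployment, assuming the default hosts.yml location:

hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"

labelProfile: "../ansible/roles/apps/nspos-labels/vars/node-labels-standard-sdn-5nodes.yml"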


33 

Save and close the nsp-deployer.yml file.


34 

Enter the following to apply the node labels to the NSP cluster:

./nspdeployerctl config ↵


35 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 42.


36 

On the NSP cluster host, if remote root access is disabled, switch to the designated root-equivalent user.


37 

Perform the following steps for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node ↵

    where MDM_node is the node IP address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵
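Alternatively, the directory creation and ownership change can be combined in a single remote command; this is an optional sketch, where MDM_node is the node IP address:

ssh MDM_node "mkdir -p /opt/nsp/volumes/mdm-server && chown -R 1000:1000 /opt/nsp/volumes" ↵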


38 

If remote root access is disabled, switch back to the root user.


39 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


40 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


41 

For each node, enter the following sequence of commands:

kubectl label node node mdm=true ↵

kubectl cordon node ↵

where node is a NAME value recorded in Step 40
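For example, if the recorded NAME value is node6 (a placeholder name):

kubectl label node node6 mdm=true ↵

kubectl cordon node6 ↵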


42 

On the NSP cluster host, enter the following:

kubectl get nodes --show-labels ↵


43 

Verify that the labels are added to the nodes.


Configure deployment
 
44 

Perform the following steps on the NSP deployer host to update the NSP configuration.

  1. Open the nsp-config.yml file backed up in Step 10 using a plain-text editor such as vi.

  2. In the deployment subsection of the nsp section, configure the type parameter to the new deployment type, which corresponds to the labels_file value specified in Step 32.

  3. If the deploy parameter in the platform section, elb subsection is set to true, add or reconfigure nodes in the hosts subsection, as required.

  4. Verify the file content to ensure that the configuration is correct.

  5. Save and close the file.
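For reference, the following is a minimal sketch of the affected parameter; the surrounding keys are assumptions based on the section and subsection names above, and deployment_type is a placeholder for the new deployment type:

nsp:

  deployment:

    type: "deployment_type"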


45 

Configure the NSP to preserve the deployment.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  3. Save and close the nsp-config.yml file.


Deploy NSP in restore mode
 
46 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


47 

Enter the following:

./nspdeployerctl install --config --restore ↵

The NSP cluster is deployed in restore mode.


Restore databases
 
48 

On the NSP deployer host, enter the following:

kubectl get pods -A ↵

The database pods are listed.


49 

Verify that the database pods are running; the number of replica pods running depends on the deployment type:

  • standard, medium, or basic—one replica each of nspos-neo4j-core and nsp-tomcat, and one replica of nrcx-tomcat, if nrcx-tomcat is included in the deployment

  • enhanced—three replicas each of nspos-neo4j-core and nsp-tomcat
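For example, to display only the database pods, you can filter the pod list; this is an optional convenience:

kubectl get pods -A | grep -E 'nspos-neo4j-core|nsp-tomcat|nrcx-tomcat' ↵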


50 

Perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Undeploy NSP
 
51 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


52 

Enter the following:

./nspdeployerctl uninstall --undeploy ↵

The NSP cluster is undeployed.


Start NSP
 
53 

Enter the following:

./nspdeployerctl install --config --deploy ↵

The NSP cluster is deployed, and the NSP starts.


54 

On the NSP cluster host, enter the following every few minutes to display the cluster status:

Note: You must not proceed to the next step until the cluster is fully operational.

kubectl get pods -A ↵

The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.

  • If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, the status of each mdm-server pod is shown as Pending, rather than Running or Completed.


55 

If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, perform the following steps to uncordon the nodes cordoned in Step 41.

  1. Enter the following:

    kubectl get pods -A | grep Pending ↵

    The pods in the Pending state are listed; an mdm-server pod name has the format mdm-server-ID.

    Note: Some mdm-server pods may be in the Pending state because the manually labeled MDM nodes are cordoned in Step 41. You must not proceed to the next step if any pods other than the mdm-server pods are listed as Pending. If any other pod is shown, re-enter the command periodically until no pods, or only mdm-server pods, are listed.

  2. Enter the following for each manually labeled and cordoned node:

    kubectl uncordon node ↵

    where node is an MDM node name recorded in Step 41

    The MDM pods are deployed.

    Note: The deployment of all MDM pods may take a few minutes.

  3. Enter the following periodically to display the MDM pod status:

    kubectl get pods -A | grep mdm-server ↵

  4. Ensure that the number of mdm-server-ID instances is the same as the mdm clusterSize value in nsp-config.yml, and that each pod is in the Running state. Otherwise, contact technical support for assistance.


56 

Use a browser to open the NSP cluster URL.


57 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


58 

Use the NSP to monitor device discovery and to check network management functions.


59 

Close the open console windows.

End of steps