To enlarge an NSP deployment

Purpose
CAUTION

System Alteration

The procedure includes operations that fundamentally reconfigure the NSP system.

You must contact Nokia support for guidance before you attempt to perform the procedure.

CAUTION

Special Deployment Limitation

Adding MDM nodes to an NSP cluster is supported only during NSP cluster installation or upgrade.

If you intend to add one or more MDM nodes after NSP deployment, contact technical support for assistance.

Perform this procedure to change an NSP deployment to a larger deployment type. Enlarging a deployment may be required, for example, to accommodate additional NSP functions, network growth, or an increase in the network management scope.

Note: To add only worker nodes to an NSP deployment without changing the deployment type, perform “How do I add an NSP cluster node?” in the NSP System Administrator Guide.

The operation involves the following:

  • backing up the current NSP cluster configuration and data

  • adding VMs to the cluster, as required

  • increasing the deployment scope in the cluster configuration

  • restoring the cluster data

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value
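As a concrete illustration, a hypothetical release-ID can be substituted into a documented path as follows; the values 24.8.0 and 500 are invented for the example and do not denote an actual NSP release:

```shell
# Hypothetical example only: 24.8.0 is R.r.p (MAJOR.minor.patch) and 500 is
# the numeric version; substitute your actual release-ID values.
RELEASE_ID="24.8.0-rel.500"
NSP_BIN="/opt/nsp/NSP-CN-DEP-$RELEASE_ID/bin"
echo "$NSP_BIN"
```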

Rollback operation

To revert to the previous deployment type, as may be required in the event of a problem with the deployment, perform “How do I remove an NSP cluster node?” in the NSP System Administrator Guide.

Steps
Prepare required resources
 

1 

Submit an NSP Platform Sizing Request for the new deployment type.


Back up NSP databases, configuration
 

2 

If you are expanding a standalone NSP cluster, or the primary cluster in a DR deployment, you must back up the following NSP databases; perform “How do I back up the NSP cluster databases?” in the NSP System Administrator Guide.

Note: Ensure that you copy each backup file to a secure location on a station outside the NSP deployment that is reachable from the NSP cluster.

  • file-server-app

  • nspos-postgresql

  • nspos-neo4j

  • nspos-solr

  • nsp-tomcat

  • nrcx-tomcat


Enlarge cluster, standalone deployment
 

3 

If the NSP is a standalone deployment, go to Step 7.


Enlarge clusters, DR deployment
 

4 

Perform Step 7 to Step 38 on the standby NSP cluster.


5 

Perform Step 7 to Step 51 on the primary NSP cluster.


6 

Perform Step 46 to Step 51 on the standby NSP cluster.


Uninstall NSP
 

7 

Log in as the root or NSP admin user on the NSP deployer host.


8 

Open a console window.


9 

Transfer the following file to a secure location on a station outside the NSP cluster that is unaffected by the conversion activity.

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/appliedConfigs/nspConfiguratorConfigs.zip


10 

Configure the NSP to completely remove the existing deployment.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: true

  3. Save and close the nsp-config.yml file.
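Shown in context, the edited fragment would look like the following sketch; other parameters in the platform section and kubernetes subsection are omitted:

```yaml
# Sketch only; surrounding parameters omitted
platform:
  kubernetes:
    deleteOnUndeploy: true
```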


11 

Enter the following to stop the NSP cluster:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵


Create new NSP cluster
 
12 

Using the resource specifications in the response to your Platform Sizing Request, create the required new NSP cluster VMs and resize the existing VMs, as required.


Resize cluster as required
 
13 

Identify the dedicated MDM nodes in the NSP cluster; you require the information for creating the new cluster configuration later in the procedure.

  1. Log in as the root or NSP admin user on the NSP cluster host.

  2. Open a console window.

  3. Enter the following:

    kubectl get nodes --show-labels ↵

  4. Identify the dedicated MDM nodes, which have only the following label and no other NSP labels:

    mdm=true

    For example:

    /os=linux,mdm=true

  5. Record the name of each dedicated MDM node.
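The filtering in substeps 4 and 5 can be rehearsed offline against saved command output. The sketch below assumes `kubectl get nodes --show-labels` output saved in kubectl's whitespace-separated format; the node names, label strings, and file path are invented for the example:

```shell
# Save hypothetical "kubectl get nodes --show-labels" output; real label
# strings are longer, but the dedicated MDM node carries mdm=true and no
# other NSP labels.
cat > /tmp/nodes-labels.txt <<'EOF'
node1  Ready  master  5d  v1.27.0  kubernetes.io/os=linux,nsp=true,mdm=true
mdm1   Ready  <none>  5d  v1.27.0  kubernetes.io/os=linux,mdm=true
EOF
# Print only nodes whose label list contains mdm=true and no nsp labels:
awk '$6 ~ /(^|,)mdm=true(,|$)/ && $6 !~ /nsp/ {print $1}' /tmp/nodes-labels.txt
```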


14 

Log in as the root or NSP admin user on the NSP deployer host.


15 

Open a console window.


16 

Open the following file using a plain-text editor such as vi:

/opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml


17 

You must add each new node to the hosts section, as shown below.

Configure the following parameters for each new node; see the descriptive text at the head of the file for parameter information, and Hostname configuration requirements for general configuration information.

- nodeName: noden

  nodeIp: private_IP_address

  accessIp: public_IP_address

  isIngress: value

Note: Any dedicated MDM nodes must be placed at the end of the hosts section. For example, if you are expanding your NSP cluster from a node-labels-standard-sdn-4nodes deployment to node-labels-standard-sdn-5nodes, the dedicated MDM nodes must be listed after node 5.
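For illustration only, an expanded hosts section might end as follows, with node5 as the added member node and a dedicated MDM node listed last; all names and addresses are invented, and the isIngress values depend on your design:

```yaml
hosts:
  # ... entries for node1 through node4 precede ...
  - nodeName: node5
    nodeIp: 10.1.2.15
    accessIp: 203.0.113.15
    isIngress: true
  - nodeName: mdm1          # dedicated MDM node, placed after the member nodes
    nodeIp: 10.1.2.21
    accessIp: 203.0.113.21
    isIngress: false
```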

Note: The nodeName value:

  • can include only ASCII alphanumeric and hyphen characters

  • cannot include an upper-case character

  • cannot begin or end with a hyphen

  • cannot begin with a number

  • cannot include an underscore

  • must end with a number
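The constraints above can be checked with a simple pattern before you edit the file; this is an illustrative sketch, not a Nokia-supplied validator, and the candidate names are invented:

```shell
# One pattern covers all of the rules: starts with a lower-case letter
# (so not a digit or hyphen), contains only lower-case alphanumerics and
# hyphens, and ends with a digit (so not a hyphen).
valid='^[a-z][a-z0-9-]*[0-9]$'
for name in node5 Node5 5node node- my_node node7; do
  if echo "$name" | grep -Eq "$valid"; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```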


18 

Save and close the k8s-deployer.yml file.


19 

Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system and preferably in a remote facility.

Note: The backup file is crucial in the event of an NSP deployer host failure, so must be available from a separate station.


20 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


21 

Enter the following to create the cluster configuration:

./nspk8sctl config -c ↵

The following is displayed when the creation is complete:

✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml


22 

For each NSP cluster VM that you are adding, enter the following to distribute the local SSH key to the VM:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@address ↵

where

address is the NSP cluster VM IP address

user is the designated NSP ansible user, if root-user access is restricted; otherwise, user@ is not required
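When several VMs are added, the key distribution can be scripted. The sketch below is a dry run over invented addresses; remove the leading echo to actually copy the key, and replace root with the designated NSP ansible user if root access is restricted:

```shell
# Hypothetical addresses of the new NSP cluster VMs; adjust to your plan.
NEW_NODES="203.0.113.11 203.0.113.12"
for addr in $NEW_NODES; do
  # Dry run: print the command for each node instead of executing it.
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$addr"
done
```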


23 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

./nspk8sctl install ↵

The NSP Kubernetes environment is deployed.


24 

On the NSP cluster host, enter the following to verify that the new node is added to the cluster:

kubectl get nodes ↵


25 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


26 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml


27 

Configure the following parameters:

hosts: "hosts_file"

labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"

where

hosts_file is the absolute path of the hosts.yml file; the default is /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml

labels_file is the file name below that corresponds to your cluster deployment type:

  • node-labels-basic-1node.yml

  • node-labels-basic-sdn-2nodes.yml

  • node-labels-enhanced-6nodes.yml

  • node-labels-enhanced-sdn-9nodes.yml

  • node-labels-standard-3nodes.yml

  • node-labels-standard-4nodes.yml

  • node-labels-standard-sdn-4nodes.yml

  • node-labels-standard-sdn-5nodes.yml
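As a sketch only, for a change to the standard-sdn-5nodes deployment type the two parameters might read as follows; release-ID is the placeholder defined earlier, and you should verify both paths against your installation:

```yaml
hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"
labelProfile: "../ansible/roles/apps/nspos-labels/vars/node-labels-standard-sdn-5nodes.yml"
```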


28 

Save and close the nsp-deployer.yml file.


29 

Enter the following to apply the node labels to the NSP cluster:

./nspdeployerctl config ↵


30 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 35.


31 

On the NSP cluster host, perform the following steps for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node ↵

    where MDM_node is the node IP address

  2. Enter the following:

    mkdir -p /opt/nsp/volumes/mdm-server ↵

  3. Enter the following:

    chown -R 1000:1000 /opt/nsp/volumes ↵

  4. Enter the following:

    exit ↵
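The substeps above amount to the following commands. The sketch parameterizes the root directory so it can be rehearsed on a scratch path; on the real MDM node, run it as root with NSP_ROOT=/opt/nsp:

```shell
# Rehearsal of the per-node volume preparation; paths match the procedure
# when NSP_ROOT=/opt/nsp. chown requires root privileges on the real node.
NSP_ROOT="${NSP_ROOT:-/tmp/nsp-demo}"
mkdir -p "$NSP_ROOT/volumes/mdm-server"
chown -R 1000:1000 "$NSP_ROOT/volumes" 2>/dev/null || true
ls -d "$NSP_ROOT/volumes/mdm-server"
```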


32 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


33 

Record the NAME value of each node whose INTERNAL-IP value is the IP address of a node that has been added to host an additional MDM instance.


34 

For each node, enter the following sequence of commands:

kubectl label node node mdm=true ↵

where node is the recorded NAME value of the MDM node
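The labeling can be batched over the names recorded in Step 33. The sketch below is a dry run over invented node names; remove the leading echo to apply the labels:

```shell
# Hypothetical NAME values recorded for the added MDM nodes.
MDM_NODES="node6 node7"
for n in $MDM_NODES; do
  # Dry run: print the label command for each node instead of executing it.
  echo kubectl label node "$n" mdm=true
done
```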


35 

On the NSP cluster host, enter the following:

kubectl get nodes --show-labels ↵


36 

Verify that the labels are added to the nodes.


Configure deployment
 
37 

Perform the following steps on the NSP deployer host to update the NSP configuration.

  1. Open the nsp-config.yml file backed up in Step 9 using a plain-text editor such as vi.

  2. In the deployment subsection of the nsp section, configure the type parameter to the new deployment type configured in Step 26.

  3. Verify the file content to ensure that the configuration is correct.

  4. Save and close the file.


38 

Configure the NSP to preserve the deployment.

  1. Open the following file on the NSP deployer host using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Edit the following line in the platform section, kubernetes subsection to read as shown below:

      deleteOnUndeploy: false

  3. Save and close the nsp-config.yml file.


Deploy NSP in restore mode
 
39 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


40 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --restore

./nspdeployerctl install --config --restore ↵

The NSP cluster is deployed in restore mode.


Restore databases
 
41 

On the NSP cluster host, enter the following:

kubectl get pods -A ↵

The database pods are listed.


42 

Verify that the database pods are running; the number of replica pods running depends on the deployment type:

  • standard, medium, or basic—one replica each of nspos-neo4j-core and nsp-tomcat, and one replica of nrcx-tomcat, if included in the deployment

  • enhanced—three replicas each of nspos-neo4j-core and nsp-tomcat
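The replica check can be scripted against saved `kubectl get pods -A` output. The sketch below uses invented pod lines and a scratch file; on a live cluster you would pipe the kubectl output directly:

```shell
# Save hypothetical "kubectl get pods -A" output (namespace, name, ready,
# status, restarts, age).
cat > /tmp/pods-running.txt <<'EOF'
nsp  nspos-neo4j-core-0  1/1  Running  0  5m
nsp  nsp-tomcat-0        1/1  Running  0  5m
nsp  nrcx-tomcat-0       1/1  Running  0  5m
EOF
# Count the running nspos-neo4j-core replicas:
awk '$2 ~ /^nspos-neo4j-core/ && $4 == "Running"' /tmp/pods-running.txt | wc -l
```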


43 

Perform “How do I restore the NSP cluster databases?” in the NSP System Administrator Guide.


Undeploy NSP
 
44 

On the NSP deployer host, enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


45 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy

./nspdeployerctl uninstall --undeploy ↵

The NSP cluster is undeployed.


Start NSP
 
46 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The NSP cluster is deployed, and the NSP starts.


47 

On the NSP cluster host, enter the following every few minutes to display the cluster status:

Note: You must not proceed to the next step until the cluster is fully operational.

kubectl get pods -A ↵

The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.

  • If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, the status of each mdm-server pod is shown as Pending, rather than Running or Completed.
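The readiness condition can be checked mechanically against saved `kubectl get pods -A` output: list every pod that is neither Running nor Completed while ignoring mdm-server pods, which may stay Pending. The pod lines and file path below are invented for the sketch:

```shell
# Save hypothetical "kubectl get pods -A" output.
cat > /tmp/pods-status.txt <<'EOF'
nsp  nspos-neo4j-core-0  1/1  Running    0  5m
nsp  mdm-server-1        0/1  Pending    0  5m
nsp  nsp-tomcat-0        0/1  Init:0/1   0  5m
EOF
# Print the pods still blocking readiness; an empty result (apart from any
# mdm-server pods) means the cluster is operational.
awk '$2 !~ /^mdm-server/ && $4 != "Running" && $4 != "Completed" {print $2, $4}' /tmp/pods-status.txt
```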


48 

Use a browser to open the NSP cluster URL.


49 

Verify the following.

  • The NSP sign-in page opens.

  • The NSP UI opens after you sign in.

  • In a DR deployment, if the standby cluster is operational and you specify the standby cluster address, the browser is redirected to the primary cluster address.

Note: If the UI fails to open, perform “How do I remove the stale NSP allowlist entries?” in the NSP System Administrator Guide to ensure that unresolvable host entries from the previous deployment do not prevent NSP access.


50 

Use the NSP to monitor device discovery and to check network management functions.


51 

Close the open console windows.

End of steps