To add a VIP to an NSP cluster deployment

Purpose

Perform this procedure to add a VIP to an NSP cluster already configured with one client VIP.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Steps
 

1

Enter the following on the NSP deployer host to stop the NSP cluster:

Note: In a DR deployment, stop the standby cluster first and then the primary cluster.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵
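
If you want to confirm that the undeploy has completed before continuing, the following optional check is a generic kubectl query rather than part of the documented procedure; it assumes that kubectl access is configured on the NSP cluster host:

# kubectl get pods -A

When the uninstall is complete, the NSP application pods should no longer be listed, or should be shown as terminating.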


2

In a DR deployment, perform Step 3 to Step 14 on the primary cluster first.

When the primary cluster is up and running, perform Step 3 to Step 14 on the standby cluster.


3

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-release-ID/config ↵


4

Open the following file using a plain-text editor such as vi:

k8s-deployer.yml


5

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the virtualIP values for NSP client, internal, and mediation access that you intend to specify in Step 11 in the nsp-config.yml file.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - mediation_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP
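
The following is an illustrative example of the completed section; the addresses are sample values from documentation ranges and must be replaced with the VIPs planned for your deployment:

loadBalancerExternalIps:
    - 192.0.2.10        # client_IP (sample value)
    - 192.0.2.11        # internal_IP (sample value)
    - 192.0.2.12        # mediation_IP (sample value)
    - 192.0.2.13        # trapV4_mediation_IP (sample value)
    - 2001:db8::13      # trapV6_mediation_IP (sample value)
    - 192.0.2.14        # flowV4_mediation_IP (sample value)
    - 2001:db8::14      # flowV6_mediation_IP (sample value)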


6

Save and close the k8s-deployer.yml file.


7

Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system, preferably in a remote facility.

Note: The backup file is crucial in the event of an NSP deployer host failure, and must be copied to a separate station.
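
One way to create and transfer the backup copy is shown below; the destination host and directory are placeholders for your backup station, not values defined by this procedure:

# cp /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml /tmp/k8s-deployer.yml.bak
# scp /tmp/k8s-deployer.yml.bak user@backup_station:/path/to/backup/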


8

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


9

Enter the following to create the cluster configuration:

./nspk8sctl config -c ↵

The following is displayed when the creation is complete:

✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml
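
Optionally, you can review the generated file to confirm that it lists the expected cluster nodes; this is a general file check, not a required step:

# cat /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml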


10 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the following command, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

./nspk8sctl install ↵

The NSP Kubernetes environment is deployed.
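
As an optional sanity check that is not part of the documented procedure, you can confirm that the Kubernetes nodes are up using a standard kubectl command from a station with kubectl access, typically the NSP cluster host; each node should report a STATUS of Ready:

# kubectl get nodes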


11 

Add the virtual IP addresses from Step 5 to the nsp-config.yml file.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

  2. Add the virtual IP addresses to the platform section, ingressApplications subsection.

    See Step 75 in To install an NSP cluster for more information about the ingressApplications subsection.

  3. Save and close the file.
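
If you want to confirm that the addresses were entered as intended before deploying, a standard grep of the edited file displays the subsection; the path matches the file opened above, and the number of context lines (20) is arbitrary:

# grep -n -A 20 "ingressApplications" /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml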


12 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


13 

Enter the following to deploy the NSP software in the NSP cluster:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The specified NSP functions are installed and initialized.


14 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. On the NSP cluster host, enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.

    • If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, the status of each mdm-server pod is shown as Pending, rather than Running or Completed.

  2. Verify that each PVC is bound to a PV and that the PVs are created with the expected STORAGECLASS, as shown in the following output examples; a filter command for unbound PVCs is sketched after this list.

    # kubectl get pvc -A

    NAMESPACE            NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE

    nsp-psa-privileged   data-volume-mdm-server-0   Bound    pvc-ID   5Gi        RWO            storage_class   age

    nsp-psa-restricted   data-nspos-kafka-0         Bound    pvc-ID   10Gi       RWO            storage_class   age

    nsp-psa-restricted   data-nspos-zookeeper-0     Bound    pvc-ID   2Gi        RWO            storage_class   age

    ...

    # kubectl get pv

    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM

    nspos-fluentd-logs-data   50Mi       ROX            Retain           Bound    nsp-psa-restricted/nspos-fluentd-logs-data

    pvc-ID                    10Gi       RWO            Retain           Bound    nsp-psa-restricted/data-nspos-kafka-0

    pvc-ID                    2Gi        RWO            Retain           Bound    nsp-psa-restricted/data-nspos-zookeeper-0

    ...

  3. Verify that all pods are in the Running state.

  4. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  5. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.
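
As a convenience when performing the checks in this step, the following standard kubectl filters list only the pods that are not yet Running or Completed, and only the PVCs that are not Bound; empty output means the corresponding check has passed. Note that the mdm-server exception described above still applies:

# kubectl get pods -A --no-headers | grep -Ev 'Running|Completed'
# kubectl get pvc -A --no-headers | grep -v 'Bound'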

End of steps