To add a VIP to an NSP cluster deployment

Purpose

Perform this procedure to add a VIP to an NSP cluster already configured with one client VIP.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Steps
 

1 

Enter the following on the NSP deployer host to stop the NSP cluster:

Note: In a DR deployment, stop the standby cluster first and then the primary cluster.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass uninstall --undeploy --clean

/opt/nsp/NSP-CN-DEP-release-ID/bin/nspdeployerctl uninstall --undeploy --clean ↵


2 

In a DR deployment, perform Step 3 to Step 14 on the primary cluster first.

When the primary cluster is up and running, perform Step 3 to Step 14 on the standby cluster.


3 

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-release-ID/config ↵


4 

Open the following file using a plain-text editor such as vi:

k8s-deployer.yml


5 

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the virtualIP values for NSP client, internal, and mediation access that you intend to specify in Step 11 in the nsp-config.yml file.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - mediation_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP
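For illustration only, a completed section might look like the following; the addresses shown are documentation-range examples, not values to use in a deployment:

```yaml
loadBalancerExternalIps:
    - 203.0.113.10    # client_IP
    - 203.0.113.11    # internal_IP
    - 203.0.113.12    # mediation_IP
    - 203.0.113.13    # trapV4_mediation_IP
    - 2001:db8::13    # trapV6_mediation_IP
    - 203.0.113.14    # flowV4_mediation_IP
    - 2001:db8::14    # flowV6_mediation_IP
```

A single-node cluster that needs only client access would list just the client_IP entry.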


6 

Save and close the k8s-deployer.yml file.


7 

Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system, preferably in a remote facility.

Note: The backup file is crucial in the event of an NSP deployer host failure, and must be copied to a separate station.
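As a sketch, the backup could be scripted as follows; backup_cfg is a hypothetical helper, not an NSP command, and the demonstration uses a stand-in file in place of /opt/nsp/nsp-k8s-deployer-release-ID/config/k8s-deployer.yml:

```shell
# Hypothetical helper: copy a file to a timestamped name under a
# destination directory. In production, the destination would be a
# separate station, e.g. reached with scp or rsync.
backup_cfg() {
    src=$1; dest=$2
    mkdir -p "$dest" && cp "$src" "$dest/$(basename "$src").$(date +%Y%m%d)"
}

# Demonstration with a stand-in file; substitute the real
# k8s-deployer.yml path on the NSP deployer host.
printf 'loadBalancerExternalIps: []\n' > /tmp/k8s-deployer.yml
backup_cfg /tmp/k8s-deployer.yml /tmp/nsp-config-backup
ls /tmp/nsp-config-backup
```

The timestamp suffix keeps successive backups from overwriting each other, which matters if the file is updated again later.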


8 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


9 

Enter the following to create the cluster configuration:

./nspk8sctl config -c ↵

The following is displayed when the creation is complete:

✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml


10 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the following command, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

./nspk8sctl install ↵

The NSP Kubernetes environment is deployed.
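After the deployment, you would typically confirm that all cluster nodes report Ready, for example with kubectl get nodes. The filter below flags any node that is not Ready; the sample output (node names, versions) stands in for a live cluster:

```shell
# Sample `kubectl get nodes` output for illustration; pipe real output
# through the same awk filter on a live cluster.
nodes='NAME      STATUS     ROLES           AGE   VERSION
node1     Ready      control-plane   2d    v1.27.4
node2     NotReady   <none>          2d    v1.27.4'

# Print any node whose STATUS column is not Ready.
printf '%s\n' "$nodes" | awk 'NR>1 && $2!="Ready" {print $1, $2}'
```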


11 

Add the virtual IP addresses from Step 5 to the nsp-config.yml file.

  1. Open the following file using a plain-text editor such as vi:

    /opt/nsp/NSP-CN-DEP-latest-release-ID/NSP-CN-latest-release-ID/config/nsp-config.yml

  2. Add the virtual IP addresses to the platform section, ingressApplications subsection.

    See Step 75 in To install an NSP cluster for more information about the ingressApplications subsection.

  3. Save and close the file.


12 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


13 

Enter the following to deploy the NSP software in the NSP cluster:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The specified NSP functions are installed and initialized.


14 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. On the NSP deployer host, enter the following:

    export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵

  2. Enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed.
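A quick way to spot stragglers is to filter out the healthy states; this is an illustrative one-liner, not an NSP tool, shown here against sample output in place of a live kubectl get pods -A:

```shell
# Sample `kubectl get pods -A` output for illustration; the pod names
# and states are hypothetical.
sample='NAMESPACE            NAME            READY   STATUS             RESTARTS   AGE
nsp-psa-restricted   nspos-solr-0    1/1     Running            0          2d
nsp-psa-restricted   job-init-xyz    0/1     Completed          0          2d
nsp-psa-restricted   nspos-kafka-0   0/1     CrashLoopBackOff   5          2d'

# Print any pod whose STATUS column is neither Running nor Completed.
printf '%s\n' "$sample" | awk 'NR>1 && $4!="Running" && $4!="Completed" {print $1"/"$2, $4}'
```

An empty result from the filter means every pod has reached a healthy state.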

  3. Verify that each PVC is bound to a PV, and that each PV is created with the expected STORAGECLASS, as shown below.

    kubectl get pvc -A ↵

    NAMESPACE             NAME                                                   STATUS  VOLUME    CAPACITY   ACCESS MODES  STORAGECLASS         AGE

    nsp-psa-privileged    nfs-server-pvc                                         Bound   pvc-ID    47Gi       RWO           piraeus-storage      2d

    nsp-psa-privileged    pvc-nfs-subdir-provisioner                             Bound   pvc-ID    10Mi       RWO                                2d

    nsp-psa-restricted    data-nspos-kafka-broker-0                              Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-kafka-controller-0                          Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    data-nspos-zookeeper-0                                 Bound   pvc-ID    2Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    datadir-nspos-neo4j-core-dc2-250b-0                    Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    nsp-backup-storage                                     Bound   pvc-ID    10G        RWX           nfs-storage          2d

    nsp-psa-restricted    nspos-fluentd-logs-data                                Bound   pvc-ID    50Mi       ROX                                2d 

    nsp-psa-restricted    nspos-fluentd-posfile-data                             Bound   pvc-ID    1Gi        RWO                                2d

    nsp-psa-restricted    nspos-postgresql-data-nspos-postgresql-primary-0       Bound   pvc-ID    15Gi       RWO           local-provisioner    2d 

    nsp-psa-restricted    opensearch-backup-pv                                   Bound   pvc-ID    1Gi        RWX           nfs-storage          2d 

    nsp-psa-restricted    opensearch-cluster-master-opensearch-cluster-master-0  Bound   pvc-ID    50Gi       RWO           local-provisioner    2d

    nsp-psa-restricted    solr-pvc-nspos-solr-statefulset-0                      Bound   pvc-ID    1Gi        RWO           local-provisioner    2d

    nsp-psa-restricted    storage-volume-nspos-prometheus-server-0               Bound   pvc-ID    10Gi       RWO           local-provisioner    2d

    kubectl get pv ↵

    NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                          STORAGECLASS            REASON      AGE

    nspos-fluentd-logs-data       50Mi       ROX            Retain           Bound       nsp-psa-restricted/nspos-fluentd-logs-data                                                                         2d

    nspos-fluentd-posfile-data    1Gi        RWO            Retain           Bound       nsp-psa-restricted/nspos-fluentd-posfile-data                                                                      2d

    pv-nfs-subdir-provisioner     10Mi       RWO            Delete           Bound       nsp-psa-privileged/pvc-nfs-subdir-provisioner                                                                      2d

    pvc-0c8d9fdf-6ffc-4208-...    47Gi       RWO            Delete           Bound       nsp-psa-privileged/nfs-server-pvc                                              piraeus-storage                     2d 

    pvc-4ccbe6ef-e38c-4739-...    10G        RWX            Delete           Bound       nsp-psa-restricted/nsp-backup-storage                                          nfs-storage                         2d

    pvc-5b21e2e2-eec5-49a7-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-broker-0                                   local-provisioner                   2d

    pvc-50df27c7-fcf3-462d-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/storage-volume-nspos-prometheus-server-0                    local-provisioner                   2d

    pvc-59c23b56-95fb-4cf6-...    50Gi       RWO            Delete           Bound       nsp-psa-restricted/opensearch-cluster-master-opensearch-cluster-master-0       local-provisioner                   2d

    pvc-96d2ceb0-e490-470d-...    2Gi        RWO            Delete           Bound       nsp-psa-restricted/data-nspos-zookeeper-0                                      local-provisioner                   2d

    pvc-370ee803-cd02-4eac-...    15Gi       RWO            Delete           Bound       nsp-psa-restricted/nspos-postgresql-data-nspos-postgresql-primary-0            local-provisioner                   2d

    pvc-32991a91-1b35-42c9-...    1Gi        RWX            Delete           Bound       nsp-psa-restricted/opensearch-backup-pv                                        nfs-storage                         2d

    pvc-86581df8-026b-4b80-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/data-nspos-kafka-controller-0                               local-provisioner                   2d

    pvc-bf1c522b-985c-485c-...    1Gi        RWO            Delete           Bound       nsp-psa-restricted/solr-pvc-nspos-solr-statefulset-0                           local-provisioner                   2d

    pvc-ccee060b-6268-4601-...    10Gi       RWO            Delete           Bound       nsp-psa-restricted/datadir-nspos-neo4j-core-dc2-250b-0                         local-provisioner                   2d
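The same filtering idea applies to storage: any PVC whose STATUS is not Bound needs attention. This illustrative filter runs against sample output in place of a live kubectl get pvc -A:

```shell
# Sample `kubectl get pvc -A` output for illustration; the claim names
# are hypothetical.
pvc_sample='NAMESPACE            NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
nsp-psa-restricted   nsp-backup-storage   Bound     pvc-a1   10G        RWX            nfs-storage         2d
nsp-psa-restricted   solr-pvc             Pending                                      local-provisioner   2d'

# Print any PVC whose STATUS column is not Bound.
printf '%s\n' "$pvc_sample" | awk 'NR>1 && $3!="Bound" {print $1"/"$2, $3}'
```

An empty result means every claim has been bound to a volume.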

  4. Verify that each pod is in the Running or Completed state.

  5. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  6. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.

End of steps