To install the NSP

Purpose

Perform this procedure to deploy a new standalone or DR NSP system.

Note: To create a DR deployment, you must perform the procedure on the NSP cluster in each data center. The NSP cluster on which you first perform the procedure initializes as the primary cluster.

Note: You require root user privileges on the NSP deployer host, and on each VM that you create.

Note: release-ID in a file path has the following format:

R.r.p-rel.version

where

R.r.p is the NSP release, in the form MAJOR.minor.patch

version is a numeric value

Note: Command lines use the # symbol to represent the RHEL CLI prompt for the root user. Do not type the leading # symbol when you enter a command.

Steps
Create NSP deployer host VM
 

1

Download the following from the NSP downloads page on the Nokia Support portal:

Note: You must also download the .cksum file associated with each.

Note: This step applies only when using an NSP OEM disk image.

  • NSP_K8S_DEPLOYER_R_r.tar.gz—bundle for installing the registry and deploying the container environment

  • one of the following RHEL OS images for creating the NSP deployer host and NSP cluster VMs:

    • NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2

    • NSP_K8S_PLATFORM_RHEL8_yy_mm.ova

  • NSP_DEPLOYER_R_r.tar.gz—bundle for installing the NSP application software

where

R_r is the NSP release ID, in the form Major_minor

yy_mm represents the year and month of issue


2

It is strongly recommended that you verify the message digest of each NSP image file or software bundle that you download from the Nokia Support portal. The download page includes checksums for comparison with the output of the RHEL md5sum, sha256sum, or sha512sum command.

To verify a file checksum, perform the following steps.

  1. Enter the following:

    command file

    where

    command is md5sum, sha256sum, or sha512sum

    file is the name of the file to check

    A file checksum is displayed.

  2. Compare the displayed checksum with the value in the .cksum file.

  3. If the values do not match, the file download has failed. Download a new copy of the file, and then repeat this step.
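
For example, to check the NSP software bundle with sha256sum:

sha256sum NSP_DEPLOYER_R_r.tar.gz ↵

A 64-character digest followed by the file name is displayed; the digest must match the corresponding value in the associated .cksum file.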


3

Log in as the root user on the station designated to host the NSP deployer host VM.


4

Open a console window.


5

If the downloaded NSP_DEPLOYER_R_r.tar.gz file has multiple parts, enter the following to create one NSP_DEPLOYER_R_r.tar.gz file from the partial image files:

cat filename.part* >filename.tar.gz ↵

where filename is the image file name

A filename.tar.gz file is created in the current directory.


6

Perform one of the following to create the NSP deployer host VM.

Note: The NSP deployer host VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2 disk image; perform Step 6 to Step 16 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL8_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


Configure NSP deployer host networking
 

7

Enter the following to open a console session on the NSP deployer host:

virsh console deployer_host ↵

You are prompted for credentials.


8

Enter the following credentials:

  • username—root

  • password—available from technical support

A virtual serial console session opens on the deployer host VM.


9

Enter the following:

ip a ↵

The available network interfaces are listed; information like the following is displayed for each:

if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether MAC_address

    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name

       valid_lft forever preferred_lft forever

    inet6 IPv6_address/v6_netmask scope link

       valid_lft forever preferred_lft forever


10 

Record the if_name and MAC_address values of the interface that you intend to use.


11 

Enter the following:

nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵

where

con_name is a connection name that you assign to the interface for ease of identification

if_name is the interface name recorded in Step 10

MAC_address is the MAC address recorded in Step 10


12 

Enter the following:

nmcli con mod con_name ipv4.addresses IP_address/netmask ↵

where

con_name is the connection name assigned in Step 11

IP_address is the IP address to assign to the interface

netmask is the subnet mask to assign


13 

Enter the following:

nmcli con mod con_name ipv4.method static ↵


14 

Enter the following:

nmcli con mod con_name ipv4.gateway gateway_IP ↵

where gateway_IP is the gateway IP address to assign


15 

Enter the following:

Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.

Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.

Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.

nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵

where nameserver_1 to nameserver_n are the available DNS name servers


16 

To optionally specify one or more DNS search domains, enter the following:

nmcli con mod con_name ipv4.dns-search search_domains ↵

where search_domains is a comma-separated list of DNS search domains
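
For example, the following sequence illustrates Step 11 to Step 16 using hypothetical values; the connection name, interface name, MAC address, IP addresses, and search domain shown are illustrative only and must be replaced with your own values:

nmcli con add con-name NSPDeployer ifname eth0 type ethernet mac 00:11:22:33:44:55 ↵

nmcli con mod NSPDeployer ipv4.addresses 192.0.2.10/24 ↵

nmcli con mod NSPDeployer ipv4.method static ↵

nmcli con mod NSPDeployer ipv4.gateway 192.0.2.1 ↵

nmcli con mod NSPDeployer ipv4.dns 192.0.2.53,192.0.2.54 ↵

nmcli con mod NSPDeployer ipv4.dns-search example.com ↵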


17 

Enter the following to reboot the VM:

systemctl reboot ↵


Install NSP Kubernetes registry
 
18 

Log in as the root or NSP admin user on the NSP deployer host.


19 

Enter the following:

mkdir /opt/nsp ↵


20 

Copy the downloaded NSP_K8S_DEPLOYER_R_r.tar.gz bundle file to the following directory:

/opt/nsp


21 

Enter the following:

cd /opt/nsp ↵


22 

Enter the following:

tar xvf NSP_K8S_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form Major_minor

The bundle file is expanded, and the following directories are created:

  • /opt/nsp/nsp-k8s-deployer-release-ID

  • /opt/nsp/nsp-registry-release-ID


23 

Remove the bundle file to save disk space; enter the following:

rm -f NSP_K8S_DEPLOYER_R_r.tar.gz ↵

The file is deleted.


24 

Enter the following:

cd nsp-registry-release-ID/bin ↵


25 

Enter the following:

./nspregistryctl install ↵

The following prompt is displayed.

Enter a registry admin password:


26 

Create a registry administrator password, and enter the password.

The following prompt is displayed.

Confirm the registry admin password:


27 

Re-enter the password.

The registry installation begins, and messages like the following are displayed.

✔ New installation detected.

✔ Initialize system.

date time Copy container images ...

date time Install/update package [container-selinux] ...

✔ Installation of container-selinux has completed.

date time Install/update package [k3s-selinux] ...

✔ Installation of k3s-selinux has completed.

date time Setup required tools ...

✔ Initialization has completed.

date time Install k3s ...

date time Waiting for up to 10 minutes for k3s initialization ...

..............................................

✔ Installation of k3s has completed.

➜ Generate self-signed key and cert.

date time Registry TLS key file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.key

date time Registry TLS cert file: /opt/nsp/nsp-registry/tls/nokia-nsp-registry.crt

date time Install registry apps ...

date time Waiting for up to 10 minutes for registry services to be ready ...

..........

✔ Registry apps installation is completed.

date time Generate artifacts ...

date time Apply artifacts ...

date time Setup registry.nsp.nokia.local certs ...

date time Setup a default project [nsp] ...

date time Setup a cron to regenerate the k3s certificate [nsp] ...

✔ Post configuration is completed.

✔ Installation has completed.


28 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


Create NSP cluster VMs
 
29 

For each required NSP cluster VM, perform one of the following to create the VM.

Note: Each NSP cluster VM requires a hostname; you must change the default of ‘localhost’ to an actual hostname.

  1. Deploy the downloaded NSP_K8S_PLATFORM_RHEL8_yy_mm.qcow2 disk image; perform Step 6 to Step 16 of To deploy an NSP RHEL qcow2 disk image.

  2. Deploy the NSP_K8S_PLATFORM_RHEL8_yy_mm.ova disk image; see the documentation for your virtualization environment for information.

    Note: For OVA-image deployment, it is strongly recommended that you mount the /opt directory on a separate hard disk that has sufficient capacity to allow for future expansion.

  3. Manually install the RHEL OS and configure the disk partitions, as described in Manual NSP RHEL OS installation and Chapter 2, NSP disk setup and partitioning.


30 

Record the MAC address of each interface on each VM.


31 

Perform Step 32 to Step 50 for each NSP cluster VM to configure the required interfaces.


Configure NSP cluster networking
 
32 

Enter the following to open a console session on the VM:

virsh console NSP_cluster_VM ↵

where NSP_cluster_VM is the VM name

You are prompted for credentials.


33 

Enter the following credentials:

  • username—root

  • password—available from technical support

A virtual serial console session opens on the NSP cluster VM.


34 

Enter the following:

ip a ↵

The available network interfaces are listed; information like the following is displayed for each:

if_n: if_name: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether MAC_address

    inet IPv4_address/v4_netmask brd broadcast_address scope global noprefixroute if_name

       valid_lft forever preferred_lft forever

    inet6 IPv6_address/v6_netmask scope link

       valid_lft forever preferred_lft forever


35 

Record the if_name and MAC_address values of the interfaces that you intend to use.


36 

Enter the following for each interface:

nmcli con add con-name con_name ifname if_name type ethernet mac MAC_address ↵

where

con_name is a connection name that you assign to the interface for ease of identification; for example, ClientInterface or MediationInterface

if_name is the interface name recorded in Step 35

MAC_address is the MAC address recorded in Step 35


37 

Enter the following for each interface:

nmcli con mod con_name ipv4.addresses IP_address/netmask ↵

where

con_name is the connection name assigned in Step 36

IP_address is the IP address to assign to the interface

netmask is the subnet mask to assign


38 

Enter the following for each interface:

nmcli con mod con_name ipv4.method static ↵


39 

Enter the following for each interface:

nmcli con mod con_name ipv4.gateway gateway_IP ↵

where gateway_IP is the gateway IP address to assign

Note: This command sets the default gateway on the primary interface and the gateways for all secondary interfaces.


40 

Enter the following for all secondary interfaces:

nmcli con mod con_name ipv4.never-default yes ↵


41 

Enter the following for each interface:

Note: You must specify a DNS name server. If DNS is not deployed, you must use a non-routable IP address as a nameserver entry.

Note: Any hostnames used in an NSP deployment must be resolved by a DNS server.

Note: An NSP deployment that uses IPv6 networking for client communication must use a hostname configuration.

nmcli con mod con_name ipv4.dns nameserver_1,nameserver_2...nameserver_n ↵

where nameserver_1 to nameserver_n are the available DNS name servers


42 

To optionally specify one or more DNS search domains, enter the following for each interface:

nmcli con mod con_name ipv4.dns-search search_domains ↵

where search_domains is a comma-separated list of DNS search domains


43 

Open the following file with a plain-text editor such as vi:

/etc/sysctl.conf


44 

Locate the following line:

vm.max_map_count=value


45 

Edit the line to read as follows; if the line is not present, add the line to the end of the file:

vm.max_map_count=262144


46 

Save and close the file.
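
Optionally, to apply and confirm the new setting before the reboot in Step 49, you can enter the following standard RHEL commands:

sysctl -p ↵

sysctl vm.max_map_count ↵

The second command displays the current value, which must be 262144.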


47 

If you are installing in a KVM environment, enter the following:

mkdir /opt/nsp ↵


48 

It is essential that the disk I/O on each VM in the NSP cluster meets the NSP specifications.

On each NSP cluster VM, perform the tests described in “To verify disk performance for NSP” in the NSP Troubleshooting Guide.

If any test fails, contact technical support for assistance.


49 

Enter the following to reboot the NSP cluster VM:

systemctl reboot ↵


50 

Close the console session by pressing Ctrl+] (right bracket).


Deploy Kubernetes environment
 
51 

Enter the following on the NSP deployer host:

cd /opt/nsp/nsp-k8s-deployer-release-ID/config ↵


52 

Open the following file using a plain-text editor such as vi:

k8s-deployer.yml


53 

Configure the following parameters for each NSP cluster VM.

- nodeName: noden

  nodeIp: private_IP_address

  accessIp: public_IP_address

  isIngress: value

Note: The nodeName value:

  • can include only ASCII alphanumeric and hyphen characters

  • cannot include an upper-case character

  • cannot begin or end with a hyphen

  • cannot begin with a number

  • cannot include an underscore

  • must end with a number
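
For example, the following shows hypothetical entries for a three-node NSP cluster; the addresses are illustrative only, and the isIngress values depend on your deployment:

- nodeName: node1

  nodeIp: 10.1.2.11

  accessIp: 203.0.113.11

  isIngress: true

- nodeName: node2

  nodeIp: 10.1.2.12

  accessIp: 203.0.113.12

  isIngress: true

- nodeName: node3

  nodeIp: 10.1.2.13

  accessIp: 203.0.113.13

  isIngress: true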


54 

In the following section, specify the virtual IP addresses for the NSP to use as the internal load-balancer endpoints.

Note: A single-node NSP cluster requires at least the client_IP address.

The addresses are the virtualIP values for NSP client, internal, and mediation access that you intend to specify in Step 75 and Step 76 in the nsp-config.yml file.

loadBalancerExternalIps:

    - client_IP

    - internal_IP

    - trapV4_mediation_IP

    - trapV6_mediation_IP

    - flowV4_mediation_IP

    - flowV6_mediation_IP


55 

Configure the following parameter, which specifies whether dual-stack NE management is enabled:

Note: Dual-stack NE management can function only when the network environment is appropriately configured, for example:

  • Only valid, non-link-local static or DHCPv6-assigned addresses are used.

  • A physical or virtual IPv6 subnet is configured for IPv6 communication with the NEs.

  enable_dual_stack_networks: value

where value must be set to true if the cluster VMs support both IPv4 and IPv6 addressing


56 

Configure the following parameter in the cluster section:

  hosts: "path"

where path is the location of the hosts file for deploying the NSP cluster


57 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


58 

Save and close the k8s-deployer.yml file.


59 

Create a backup copy of the updated k8s-deployer.yml file, and transfer the backup copy to a station that is separate from the NSP system, and preferably in a remote facility.

Note: The backup file is crucial in the event of an NSP deployer host failure, and must be copied to a separate station.


60 

Enter the following:

cd /opt/nsp/nsp-k8s-deployer-release-ID/bin ↵


61 

Enter the following to create the cluster configuration:

./nspk8sctl config -c ↵

The following is displayed when the creation is complete:

✔ Cluster hosts configuration is created at: /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml


62 

Enter the following to import the Kubernetes container images to the registry:

./nspk8sctl import ↵

Messages like the following are displayed as the import proceeds:

✔ Pushing artifacts to registry (it takes a while) ...

date time Load container image from [/opt/nsp/nsp-k8s-deployer-release-ID/artifact/nsp-k8s-R.r.0-rel.tar.gz] ...

date time Push image [image_name] to registry.nsp.nokia.local/library ...

date time Push image [image_name] to registry.nsp.nokia.local/library ...

.

.

.

date time Push image [image_name] to registry.nsp.nokia.local/library ...


63 

For password-free NSP deployer host access to the NSP cluster VMs, you require an SSH key.

To generate and distribute the SSH key, perform the following steps.

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the following Step 64 command, and are subsequently prompted for the common password of each cluster member:

nspk8sctl --ask-pass install

  1. Enter the following:

    ssh-keygen -N "" -f ~/.ssh/id_rsa -t rsa ↵

    An SSH key is generated.

  2. Enter the following for each NSP cluster VM to distribute the key to the VM.

    ssh-copy-id -i ~/.ssh/id_rsa.pub user@address ↵

    where

    address is the NSP cluster VM IP address

    user is the designated NSP ansible user configured in Step 57, if root-user access is restricted; otherwise, user@ is not required
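
    For example, assuming a hypothetical NSP cluster VM address and a designated NSP ansible user named nsp:

    ssh-copy-id -i ~/.ssh/id_rsa.pub nsp@203.0.113.11 ↵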


64 

Enter the following:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the following command, and are subsequently prompted for the root password of each cluster member:

nspk8sctl --ask-pass install

./nspk8sctl install ↵

The NSP Kubernetes environment is deployed.


65 

The NSP cluster member named node1 is designated the NSP cluster host for future configuration activities; record the NSP cluster host IP address for future reference.


Check NSP cluster status
 
66 

Open a console window on the NSP cluster host.


67 

Enter the following periodically to display the status of the Kubernetes system pods:

Note: You must not proceed to the next step until each pod STATUS reads Running or Completed.

kubectl get pods -A ↵

The pods are listed.


68 

Enter the following periodically to display the status of the NSP cluster nodes:

Note: You must not proceed to the next step until each node STATUS reads Ready.

kubectl get nodes -o wide ↵

The NSP cluster nodes are listed, as shown in the following three-node cluster example:

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   

node1   Ready    master   nd    version   int_IP        ext_IP

node2   Ready    master   nd    version   int_IP        ext_IP

node3   Ready    <none>   nd    version   int_IP        ext_IP


Configure NSP software
 
69 

Open a console window on the NSP deployer host.


70 

Enter the following:

cd /opt/nsp ↵


71 

Enter the following:

tar xvf NSP_DEPLOYER_R_r.tar.gz ↵

where R_r is the NSP release ID, in the form Major_minor

The bundle file is expanded, and the following directory is created:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID


72 

Enter the following:

rm -f NSP_DEPLOYER_R_r.tar.gz ↵

The bundle file is deleted.


73 

Open the following file using a plain-text editor such as vi to specify the system parameters and enable the required installation options:

/opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/config/nsp-config.yml

Note: See nsp-config.yml file format for configuration information.

Note: You must preserve the leading spaces in each line.


74 

Configure the following parameter in the platform section as shown below:

  clusterHost: "cluster_host_address"

where

cluster_host_address is the address of NSP cluster member node1, which is subsequently used for cluster management operations


75 

Configure the following NSP cluster address parameters in the platform section, ingressApplications subsection as shown below.

Each address is an address from the loadBalancerExternalIps section of the k8s-deployer.yml file described in Step 54.

Note: The client_IP value is mandatory; the address is used for interfaces that remain unconfigured, such as in a single-interface deployment.

Note: If the client network uses IPv6, you must specify the NSP cluster hostname as the client_IP value.

Note: The trapForwarder addresses that you specify must differ from the client_IP value, even in a single-interface deployment.

  ingressApplications:

    ingressController:

      clientAddresses:

        virtualIp: "client_IP"

        advertised: "client_public_address"

      internalAddresses:

        virtualIp: "internal_IP"

        advertised: "internal_public_address"

    trapForwarder:

      mediationAddresses:

        virtualIpV4: "trapV4_mediation_IP"

        advertisedV4: "trapV4_mediation_public_address"

        virtualIpV6: "trapV6_mediation_IP"

        advertisedV6: "trapV6_mediation_public_address"

where

client_IP is the address for external client access

internal_IP is the address for internal communication

trapV4_mediation_IP is the address for IPv4 network mediation

trapV6_mediation_IP is the address for IPv6 network mediation

each public_address value is an optional address to advertise instead of the associated _IP value, for example, in a NAT environment
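
For example, the following shows the subsection completed with hypothetical addresses; all values are illustrative only, and the optional advertised parameters are left empty on the assumption that no NAT is used:

  ingressApplications:

    ingressController:

      clientAddresses:

        virtualIp: "203.0.113.100"

        advertised: ""

      internalAddresses:

        virtualIp: "10.1.2.100"

        advertised: ""

    trapForwarder:

      mediationAddresses:

        virtualIpV4: "203.0.113.101"

        advertisedV4: ""

        virtualIpV6: "2001:db8::101"

        advertisedV6: ""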


76 

If you have enabled flow statistics collection using the flowCollection installation option, configure the following parameters in the platform section, ingressApplications subsection as shown below.

Each address is an address from the loadBalancerExternalIps section of the k8s-deployer.yml file described in Step 54.

    flowForwarder:

      mediationAddresses:

        virtualIpV4: "flowV4_mediation_IP"

        advertisedV4: "flowV4_mediation_public_address"

        virtualIpV6: "flowV6_mediation_IP"

        advertisedV6: "flowV6_mediation_public_address"

where

flowV4_mediation_IP is the address for IPv4 flow collection

flowV6_mediation_IP is the address for IPv6 flow collection

each _public_address value is an optional address to advertise instead of the associated mediation_IP value, for example, in a NAT environment


77 

If you are using your own storage instead of local NSP storage, perform the following steps:

  1. Create Kubernetes storage classes.

  2. Configure the following parameters in the platform section, kubernetes subsection as shown below:

      storage:

        readWriteOnceLowIOPSClass: "storage_class"

        readWriteOnceHighIOPSClass: "storage_class"

        readWriteOnceClassForDatabases: "storage_class"

        readWriteManyLowIOPSClass: "storage_class"

        readWriteManyHighIOPSClass: "storage_class"

    where

    readWriteOnceLowIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000

    readWriteOnceHighIOPSClass—for enabling ReadWriteOnce operations and maintaining storage IOPS above 10,000

    readWriteOnceClassForDatabases—for enabling ReadWriteOnce operations and maintaining storage IOPS below 10,000 for NSP databases

    readWriteManyLowIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS below 10,000

    readWriteManyHighIOPSClass—for enabling ReadWriteMany operations and maintaining storage IOPS above 10,000

    storage_class is your storage class name
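
    For example, with a hypothetical Ceph-based storage backend that provides ceph-block and ceph-filesystem storage classes, the subsection might read as follows; the class names are illustrative only:

      storage:

        readWriteOnceLowIOPSClass: "ceph-block"

        readWriteOnceHighIOPSClass: "ceph-block"

        readWriteOnceClassForDatabases: "ceph-block"

        readWriteManyLowIOPSClass: "ceph-filesystem"

        readWriteManyHighIOPSClass: "ceph-filesystem"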


78 

Configure the remaining parameters in the platform section as shown below:

platform section, docker subsection:

    repo: "registry.nsp.nokia.local/nsp/images"

    pullPolicy: "IfNotPresent"

platform section, helm subsection:

    repo: "oci://registry.nsp.nokia.local/nsp/charts"

    timeout: "300"


79 

Configure the type parameter in the deployment section as shown below:

deployment:

    type: "deployment_type"

where deployment_type is one of the parameter options listed in the section


80 

If you are using a custom server certificate, configure the following tls parameter in the deployment section:

   tls:                     

     customCaCert: certificate_path

where certificate_path is the file path of the custom root CA certificate file


81 

If the NSP system is a DR deployment, configure the parameters in the dr section as shown below:

dr:

   dcName: "data_center"

   mode: "deployment_mode"

   peer: "peer_address"

   internalPeer: "peer_internal_address"

   peerDCName: "peer_data_center"

where

data_center is the unique alphanumeric name to assign to the cluster

deployment_mode is the case-sensitive deployment type, dr or standalone

peer_address is the address at which the peer data center is reachable over the client network

peer_internal_address is the address at which the peer data center is reachable over the internal network

peer_data_center is the unique alphanumeric name of the peer cluster
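
For example, the following shows hypothetical values for the cluster in data center dc1 of a DR deployment; the names and addresses are illustrative only:

dr:

   dcName: "dc1"

   mode: "dr"

   peer: "203.0.113.200"

   internalPeer: "10.2.2.100"

   peerDCName: "dc2"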


82 

If you are integrating one or more existing systems or components with the NSP, configure the required parameters in the integrations section.

For example:

To integrate a standalone NFM-P system, you must configure the nfmp parameters in the section as shown below:

Note: When the section includes an NFM-P IP address, the NSP UI is accessible only when the NFM-P is operational.

Note: In the client section of samconfig on the NFM-P main servers, if the address for client access is set using the hostname parameter, the primaryIp and standbyIp values in the nfmp section of the NSP configuration file, nsp-config.yml, must be set to hostnames.

Likewise, if the public-ip parameter in the client section is configured on the main server, the primaryIp and standbyIp values in the nsp-config.yml file must be set to IP addresses.

 integrations:

   nfmp:

     primaryIp: "main_server_address"

     standbyIp: 

     tlsEnabled: true | false


83 

If all of the following are true, configure the following parameters in the integrations section:

  • You are integrating an NFM-P system with the NSP.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P main server and main database are on separate stations:

    nfmpDB:

      primaryIp: ""

      standbyIp: ""


84 

If all of the following are true, configure the following parameters in the integrations section:

  • You are integrating an NFM-P system with the NSP.

  • You want the NFM-P to forward system metrics to the NSP cluster.

  • The NFM-P system includes one or more auxiliary servers:

    auxServer:

      primaryIpList: ""

      standbyIpList: ""


85 

If the NSP deployment includes one or more Release 22 analytics servers that are to remain at the earlier release, you must enable NSP and analytics compatibility; otherwise, you can skip this step.

Set the legacyPortEnabled parameter in the analyticsServer subsection of the integrations section to true as shown below:

  analyticsServer:

    legacyPortEnabled: true


86 

If the NSP deployment includes an auxiliary database, configure the required parameters.

Note: If the NSP deployment is to be integrated with a Release 22 or 23 NFM-P system, the auxiliary database release must match the NFM-P release.

Note: If the deployment is geo-redundant and is to include the NFM-P, you must record the following values for addition to the local NFM-P main server configuration:

  • ipList addresses, which you must set as the cluster_1 addresses in the local main server configuration

  • standbyIpList addresses, which you must set as the cluster_2 addresses in the local main server configuration

  1. Locate the following section:

        auxDb:

          secure: "true"

          ipList: ""

          standbyIpList: ""

  2. Edit the section to read as follows:

    Note: If the auxiliary database is at the same release as the NSP, the secure parameter must be set to true.

        auxDb:

          secure: "true"

          ipList: "cluster_1_IP1,cluster_1_IP2...cluster_1_IPn"

          standbyIpList: "cluster_2_IP1,cluster_2_IP2...cluster_2_IPn"

    where

    cluster_1_IP1, cluster_1_IP2...cluster_1_IPn are the external IP addresses of the stations in the local cluster

    cluster_2_IP1, cluster_2_IP2...cluster_2_IPn are the external IP addresses of the stations in the peer cluster; required only for geo-redundant deployment


87 

If you are including VMs to host MDM instances in addition to a standard or enhanced NSP cluster deployment, configure the following mdm parameters in the modules section:

 modules:

   mdm:

     clusterSize: members

     backupServers: n

where

members is the total number of VMs to host MDM instances

n is the total number of VMs to allocate as backup instances
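
For example, the following specifies a hypothetical deployment of four MDM VMs, one of which is reserved as a backup instance:

 modules:

   mdm:

     clusterSize: 4

     backupServers: 1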


88 

Configure the user authentication parameters in the sso section; see NSP SSO configuration parameters for configuration information.


89 

Save and close the nsp-config.yml file.


90 

Ensure that your license.zip file is on the NSP deployer host in the location specified in the nsp-config.yml file.


91 

If you are not including any dedicated MDM nodes in addition to the number of member nodes in a standard or enhanced NSP cluster, go to Step 99.


92 

Log in as the root or NSP admin user on the NSP cluster host.


93 

Open a console window.


94 

Perform the following steps for each Flow Collector node.

  1. Enter the following to open an SSH session on the node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh FC_node

    where FC_node is the node IP address

  2. Enter the following sequence of commands:

    mkdir -p /opt/nsp/volumes/flow-collector-aa

    chown nsp:nsp /opt/nsp/volumes/flow-collector-aa

    mkdir -p /opt/nsp/volumes/flow-collector-sys

    chown nsp:nsp /opt/nsp/volumes/flow-collector-sys

  3. Enter the following:

    exit ↵


95 

Perform the following steps for each additional MDM node.

  1. Enter the following to open an SSH session on the MDM node.

    Note: The root password for a VM created using the Nokia qcow2 image is available from technical support.

    ssh MDM_node

    where MDM_node is the node IP address

  2. Enter the following sequence of commands:

    mkdir -p /opt/nsp/volumes/mdm-server

    chown -R 1000:1000 /opt/nsp/volumes

  3. Enter the following:

    exit ↵


Label Flow Collector, MDM nodes
 
96 

Enter the following:

kubectl get nodes -o wide ↵

A list of nodes like the following is displayed.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   

node1   Ready    master   nd   version   int_IP   ext_IP

node2   Ready    master   nd   version   int_IP   ext_IP

node3   Ready    <none>   nd   version   int_IP   ext_IP


97 

If flow collection is enabled, label the Flow Collector nodes.

  1. Record the NAME value of each node listed in Step 96 whose INTERNAL-IP value is the IP address of a node designated to host a Flow Collector instance.

  2. Enter the following for each recorded name:

    kubectl label nodes node_name fc=true ↵

    where node_name is the recorded NAME value of the Flow Collector node


98 

If you are adding any MDM nodes in addition to the MDM instances in a standard or enhanced NSP cluster deployment, label the MDM nodes.

  1. Record the NAME value of each node listed in Step 96 whose INTERNAL-IP value is the IP address of a node designated to host an MDM instance.

  2. Enter the following for each recorded name:

    kubectl label node node mdm=true ↵

    kubectl cordon node ↵

    where node is the recorded NAME value of the MDM node
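
    For example, if node3 in the sample Step 96 output is designated to host an MDM instance; the node name is illustrative only:

    kubectl label node node3 mdm=true ↵

    kubectl cordon node3 ↵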


Configure hosts file
 
99 

Log in as the root or NSP admin user on the NSP deployer host.


100 

Open the following file using a plain-text editor such as vi:

/opt/nsp/NSP-CN-DEP-release-ID/config/nsp-deployer.yml


101 

Configure the following parameters:

  hosts: "hosts_file"

  labelProfile: "../ansible/roles/apps/nspos-labels/vars/labels_file"

where

hosts_file is the absolute path of the hosts.yml file created in Step 61, typically /opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml

labels_file is the file name below that corresponds to the cluster deployment type specified in Step 79:

  • node-labels-basic-1node.yml

  • node-labels-basic-sdn-2nodes.yml

  • node-labels-enhanced-6nodes.yml

  • node-labels-enhanced-sdn-9nodes.yml

  • node-labels-standard-3nodes.yml

  • node-labels-standard-4nodes.yml

  • node-labels-standard-sdn-4nodes.yml

  • node-labels-standard-sdn-5nodes.yml
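
For example, for a hypothetical deployment that uses the standard three-node cluster type:

  hosts: "/opt/nsp/nsp-k8s-deployer-release-ID/config/hosts.yml"

  labelProfile: "../ansible/roles/apps/nspos-labels/vars/node-labels-standard-3nodes.yml"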


102 

If you have disabled remote root access to the NSP cluster VMs, configure the following parameters in the cluster section, sshAccess subsection:

  sshAccess:

    userName: "user"

    privateKey: "path"

where

user is the designated NSP ansible user

path is the SSH key path, for example, /home/user/.ssh/id_rsa


103 

Save and close the nsp-deployer.yml file.


Install Kubernetes secrets
 
104 

If you are configuring the standby cluster in a DR deployment, go to Step 114.


105 

If you are including an existing NFM-P system in the NSP deployment, and the NFM-P TLS certificate is self-signed or root-CA-signed, you must use the NFM-P TLS artifacts in the NSP deployment. Otherwise, you can skip this step.

Transfer the following TLS files from the NFM-P PKI server to an empty directory on the standalone or primary NSP deployer host.

Note: The PKI server address can be viewed using the samconfig utility on an NFM-P main server station.

  • ca.pem

  • ca.key

  • ca_internal.pem

  • ca_internal.key


106 

Open a console window on the standalone or primary NSP deployer host.


107 

Enter the following:

cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵


108 

Enter the following:

./nspdeployerctl secret install ↵

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP Internal Issuer? [yes,no]


109 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the internal key and certificate files.

  2. Provide your own certificate to secure the internal network.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'ca-key-pair-internal-nspdeployer'

      The CA key pair used to sign certificates generated by the NSP Internal Issuer.

      Please enter the internal CA private key:

    2. Enter the full path of the internal private key.

      The following prompt is displayed:

      Please enter the internal CA certificate:

    3. Enter the full path of the internal certificate.

      The following messages are displayed for each Kubernetes namespace:

      Adding secret ca-key-pair-internal-nspdeployer to namespace namespace...

      secret/ca-key-pair-internal-nspdeployer created

The following prompt is displayed:

Would you like to use your own CA key pair for the NSP External Issuer? [yes,no]


110 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the external key and certificate files.

  2. Provide your own certificate to secure the external network.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'ca-key-pair-external-nspdeployer'

      The CA key pair used to sign certificates generated by the NSP External Issuer.

      Please enter the external CA private key:

    2. Enter the full path of the external private key.

      The following prompt is displayed:

      Please enter the external CA certificate:

    3. Enter the full path of the external certificate:

      The following messages are displayed for each Kubernetes namespace:

      Adding secret ca-key-pair-external-nspdeployer to namespace namespace...

      secret/ca-key-pair-external-nspdeployer created

The following prompt is displayed:

Would you like to provide a custom private key and certificate for use by NSP endpoints when securing TLS connections over the client network? [yes,no]


111 

Perform one of the following.

  1. Enter no ↵.

    The NSP generates the client key and certificate files.

  2. Provide your own certificate for the client network.

    1. Enter yes ↵

      The following messages and prompt are displayed:

      Building secret 'nginx-nb-tls-nsp'

      TLS certificate for securing the ingress gateway.

      Please enter the ingress gateway private key:

    2. Enter the full path of the private key file for client access.

      The following prompt is displayed:

      Please enter the ingress gateway public certificate:

    3. Enter the full path of the public certificate file for client access.

      The following prompt is displayed:

      Please enter the ingress gateway public trusted CA certificate bundle:

    4. Enter the full path of the public trusted CA certificate bundle file.

      The following message is displayed:

        Adding secret nginx-nb-tls-nsp to namespace namespace...


112 

If the deployment includes MDM, the following prompt is displayed:

Would you like to provide mTLS certificates for the NSP mediation interface for two-way TLS authentication? [yes,no]

Perform one of the following.

  1. Enter no ↵ if you are not using mTLS or have no certificate to provide for mTLS.

  2. Provide your own certificate to secure MDM and gNMI telemetry.

    1. Enter yes ↵.

      The following messages and prompt are displayed:

      Building secret 'mediation-mtls-key'

      mTLS artifacts use to secure MDM communications with nodes.

      Please enter the mediation private key:

    2. Enter the full path of the mediation private key.

      The following prompt is displayed:

      Please enter the mediation CA certificate:

    3. Enter the full path of the mediation CA certificate.

      The following messages are displayed:

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created

        Adding secret mediation-mtls-key to namespace namespace...

      secret/mediation-mtls-key created


113 

Back up the Kubernetes secrets.

  1. Enter the following:

    ./nspdeployerctl secret -o backup_file backup ↵

    where backup_file is the absolute path and name of the backup file to create

    As the secrets are backed up, messages like the following are displayed for each Kubernetes namespace:

    Backing up secrets to /opt/backupfile...

      Including secret namespace:ca-key-pair-external

      Including secret namespace:ca-key-pair-internal

      Including secret namespace:nsp-tls-store-pass

    When the backup is complete, the following prompt is displayed:

    Please provide an encryption password for backup_file

    enter aes-256-ctr encryption password:

  2. Enter a password.

    The following prompt is displayed:

    Verifying - enter aes-256-ctr encryption password:

  3. Re-enter the password.

    The backup file is encrypted using the password.

  4. Record the password for use when restoring the backup.

  5. Record the name of the data center associated with the backup.

  6. Transfer the backup file to a secure location in a separate facility for safekeeping.


Restore secrets on standby cluster, DR deployment
 
114 

If you are configuring the standby cluster in a DR deployment, obtain and restore the NSP secrets backup file from the NSP cluster in the primary data center.

  1. Enter the following on the standby NSP deployer host:

    scp address:path/backup_file /tmp/ ↵

    where

    address is the address of the NSP deployer host in the primary cluster

    path is the absolute file path of the backup file created in Step 113

    backup_file is the secrets backup file name

    The backup file is transferred to the local /tmp directory.

  2. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/bin ↵

  3. Enter the following:

    ./nspdeployerctl secret -i /tmp/backup_file restore ↵

    The following prompt is displayed:

    Please provide the encryption password for /opt/backupfile

    enter aes-256-ctr decryption password:

  4. Enter the password recorded in Step 113.

    As the secrets are restored, messages like the following are displayed for each Kubernetes namespace:

    Restoring secrets from backup_file...

    secret/ca-key-pair-external created

      Restored secret namespace:ca-key-pair-external

    secret/ca-key-pair-internal created

      Restored secret namespace:ca-key-pair-internal

    secret/nsp-tls-store-pass created

      Restored secret namespace:nsp-tls-store-pass

  5. If you answered yes to the Step 111 prompt for client access during the primary NSP cluster configuration, you must update the standby server secret for client access using the custom certificate and key files that are specific to the standby cluster.

    Enter the following:

    ./nspdeployerctl secret -s nginx-nb-tls-nsp -n psaRestricted -f tls.key=customKey -f tls.crt=customCert -f ca.crt=customCaCert update ↵

    where

    customKey is the full path of the private server key

    customCert is the full path of the server public certificate

    customCaCert is the full path of the CA public certificate

    Messages like the following are displayed as the server secret is updated:

    secret/nginx-nb-tls-nsp patched

    The following files may contain sensitive information. They are no longer required by NSP and may be removed.

      customKey

      customCert

      customCaCert


Deploy NSP software, monitor initialization
 
115 

If you are using your own storage instead of local NSP storage, perform the following steps.

See the NSP Planning Guide for the required IOPS and latency storage throughput.

  1. Check if the storage classes meet the IOPS requirements.

    Run the script to check the IOPS of the configured storage classes.

    1. Enter the following on the NSP deployer host:

      # cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/support/storageIopsCheck/bin

    2. Run the script by selecting each storage class individually or by selecting All Storage Classes.

      Output like the following is displayed and indicates if the script passes or fails.

      [root@ ]# ./nspstorageiopsctl 

      date time year -------------------- BEGIN ./nspstorageiopsctl --------------------

       

      [INFO]: SSH to NSP Cluster host ip_address successful

      1) readWriteManyHighIOPSClass            5) readWriteOnceClassForDatabases

      2) readWriteOnceHighIOPSClass            6) All Storage Classes

      3) readWriteManyLowIOPSClass             7) Quit

      4) readWriteOnceLowIOPSClass

      Select an option: 1

      [INFO] **** Calling IOPs check for readWriteManyHighIOPSClass - Storage Class Name (ceph-filesystem) Access Mode (ReadWriteMany) ****

      [INFO] NSP Cluster Host: ip_address

      [INFO] Validate configured storage classes are available on NSP Cluster

      [INFO] Adding helm repo nokia-nsp

      [INFO] Updating helm repo nokia-nsp

      [INFO] Executing k8s job on NSP Cluster ip_address

      [INFO] Creating /opt/nsp/nsp-storage-iops directory on NSP Cluster ip_address

      [INFO] Copying values.yaml to /opt/nsp/nsp-deployer/tools/nsp-storage-iops

      [INFO] Executing k8s job on NSP Cluster ip_address

      [INFO] Waiting for K8s job status...

      [INFO] Job storage-iops completed successfully.

      [INFO] Cleaning up and uninstalling k8s job

      [INFO] Helm uninstall cn-nsp-storage-iops successful

      STORAGECLASS         ACCESS MODE    READIOPS   WRITEIOPS  RESULT    STORAGECLASSTYPE

      ------------         -----------    --------   ---------  ------    ----------------

      storage_class     ReadWriteMany  12400      12500      true      readWriteManyHighIOPSClass

      [INFO] READ IOPS and WRITE IOPS meet the threshold of 10000.

      date time year ------------------- END ./nspstorageiopsctl - SUCCESS --------------------

    If the IOPS requirements are not met, system performance may be degraded.


116 

Enter the following to apply the node labels to the NSP cluster:

./nspdeployerctl config ↵


117 

Enter the following to import the NSP images and Helm charts to the NSP Kubernetes registry:

./nspdeployerctl import ↵


118 

Enter the following to deploy the NSP software in the NSP cluster:

Note: If the NSP cluster VMs do not have the required SSH key, you must include the --ask-pass argument in the command, as shown in the following example, and are subsequently prompted for the root password of each cluster member:

nspdeployerctl --ask-pass install --config --deploy

./nspdeployerctl install --config --deploy ↵

The specified NSP functions are installed and initialized.


119 

Monitor and validate the NSP cluster initialization.

Note: You must not proceed to the next step until each NSP pod is operational.

  1. On the NSP cluster host, enter the following every few minutes:

    kubectl get pods -A ↵

    The status of each NSP cluster pod is displayed; the NSP cluster is operational when the status of each pod is Running or Completed, with the following exception.

    • If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, the status of each mdm-server pod is shown as Pending, rather than Running or Completed.

  2. Verify that each PVC is bound to a PV, and that each PV is created with the expected STORAGECLASS, as shown in the following examples:

    # kubectl get pvc -A

    NAMESPACE            NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE

    nsp-psa-privileged   data-volume-mdm-server-0   Bound    pvc-ID  5Gi        RWO            storage_class  age

    nsp-psa-restricted   data-nspos-kafka-0         Bound    pvc-ID  10Gi       RWO            storage_class   age

    nsp-psa-restricted   data-nspos-zookeeper-0     Bound    pvc-ID  2Gi        RWO            storage_class  age

    ...

    # kubectl get pv

    NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM

    nspos-fluentd-logs-data   50Mi       ROX            Retain           Bound    nsp-psa-restricted/nspos-fluentd-logs-data

    pvc-ID                   10Gi       RWO            Retain           Bound    nsp-psa-restricted/data-nspos-kafka-0 

    pvc-ID                   2Gi        RWO            Retain           Bound    nsp-psa-restricted/data-nspos-zookeeper-0

    ...

  3. Verify that all pods are in the Running state.

  4. If the Network Operations Analytics - Baseline Analytics installation option is enabled, ensure that the following pods are listed; otherwise, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod:

    Note: The output for a non-HA deployment is shown below; an HA cluster has three sets of three baseline pods, three rta-ignite pods, and two spark-operator pods.

    • analytics-rtanalytics-tomcat

    • baseline-anomaly-detector-n-exec-1

    • baseline-trainer-n-exec-1

    • baseline-window-evaluator-n-exec-1

    • rta-anomaly-detector-app-driver

    • rta-ignite-0

    • rta-trainer-app-driver

    • rta-windower-app-driver

    • spark-operator-m-n

  5. If any pod fails to enter the Running or Completed state, see the NSP Troubleshooting Guide for information about troubleshooting an errored pod.


120 

If you are including any MDM VMs in addition to a standard or enhanced NSP cluster deployment, perform the following steps to uncordon the nodes cordoned in Step 98.

  1. Enter the following:

    kubectl get pods -A | grep Pending ↵

    The pods in the Pending state are listed; an mdm-server pod name has the format mdm-server-ID.

    Note: Some mdm-server pods may be in the Pending state because the manually labeled MDM nodes are cordoned in Step 98. You must not proceed to the next step if any pods other than the mdm-server pods are listed as Pending. If any other pod is shown, re-enter the command periodically until no pods, or only mdm-server pods, are listed.

  2. Enter the following for each manually labeled and cordoned node:

    kubectl uncordon node ↵

    where node is an MDM node name recorded in Step 98

    The MDM pods are deployed.

    Note: The deployment of all MDM pods may take a few minutes.

  3. Enter the following periodically to display the MDM pod status:

    kubectl get pods -A | grep mdm-server ↵

  4. Ensure that the number of mdm-server-ID instances is the same as the mdm clusterSize value in nsp-config.yml, and that each pod is in the Running state. Otherwise, contact technical support for assistance.
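
    For example, the following optional command counts the mdm-server pods that are in the Running state, for comparison with the clusterSize value:

    kubectl get pods -A | grep mdm-server | grep -c Running ↵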


Set password for NFM-P XML API access
 
121 

If the NSP deployment includes the NFM-P, you must update the NSP cluster password for NFM-P XML API access; perform “How do I update the admin-user password for NSP XML API access?” in the NSP System Administrator Guide.


122 

Close the open console windows.

End of steps