NSP cluster requirements

Cluster sizing criteria

An NSP cluster must be sized according to the requirements defined by the Nokia NSP Platform Sizing Tool, which calculates the minimum platform requirements based on the specified installation options.

The platform requirements for NSP cluster VMs and VMs that host additional components depend on, but are not limited to, the following factors:

Note: The NSP cluster VMs require sufficient available user IDs to create system users for NSP applications.

Note: The disk layout and partitioning of each VM in a multi-node NSP cluster, including DR deployments, must be identical.

NSP VM hardware requirements

NSP deployments are server- and vendor-agnostic, but must meet all NSP component hardware criteria and performance targets. Server-class hardware is required; desktop hardware is inadequate. Processor support is limited to specific Intel Xeon-based x86-64 and AMD Epyc x86-64 CPUs that have the required minimum CPU core speed listed in Table 2-5, VM processor requirements.

Table 2-5: VM processor requirements

Processor microarchitecture | Minimum CPU core speed | Supported deployments
Intel Xeon Haswell or newer | 2.4 GHz | All NSP deployments
Intel Skylake or newer | 2.0 GHz | All NSP deployments
AMD Epyc Zen 3 or newer | 2.0 GHz | All NSP components except vCPAA and VSR-NRC

Provisioned CPU resources are based on threaded CPUs. The NSP platform requirements specify a minimum number of vCPUs to assign to the VM. Nokia recommends configuring all vCPUs of a VM on a single virtual socket.
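
As an illustrative check, the vCPU topology that a VM presents to its guest OS can be verified with the lscpu command; the filter below is an example only, and the expected values depend on the sizing defined by the NSP Platform Sizing Tool.

lscpu | grep -E "^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)):"

A Socket(s) value of 1 indicates that all vCPUs are presented on a single virtual socket.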

A host system requires its own CPU, memory, and disk resources in addition to the resources allocated to the NSP VMs. Contact the hypervisor provider for requirements and best practices related to the hosting environment.

You must provide information about the provisioned VMs to Nokia support. You can provide the information through read-only hypervisor access, or make the information available upon request. Failure to provide the information may adversely affect NSP support.

NSP cluster storage-layer performance

The storage layer of an NSP cluster must provide a minimum read/write IOPS that depends on the deployment type and network size. Each NSP cluster member requires the minimum IOPS listed in Table 2-6, Minimum NSP cluster IOPS requirements. See Chapter 8, Appendix A for information about how to determine the current storage-layer performance.

Table 2-6: Minimum NSP cluster IOPS requirements

Deployment type and network size | Minimum storage-layer read/write IOPS
Lab/trial | 2000
Production: fewer than 2000 NEs | 2500
Production: more than 2000 NEs | 3000
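
The procedure for measuring the current storage-layer performance is described in Chapter 8, Appendix A. As a rough, generic illustration only, a mixed random read/write test with the open-source fio tool resembles the following; the file path, size, and runtime are arbitrary example values.

fio --name=nsp-iops-check --filename=/var/tmp/fio-testfile --size=2G \
  --rw=randrw --rwmixread=50 --bs=4k --ioengine=libaio --direct=1 \
  --iodepth=32 --runtime=60 --time_based --group_reporting

The read and write IOPS values in the fio summary can then be compared with the minimums in Table 2-6; remove the test file after the test completes.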

Minimum and production platform requirements
WARNING

Service Degradation Risk

The NSP deployer host is a crucial element of an NSP system that holds the required container images and Helm repositories for deployment to each NSP cluster VM. If the NSP deployer host is unavailable, NSP cluster recovery in the event of a failure may be compromised.

Ensure that the NSP deployer host remains operational and reachable by the NSP cluster after the cluster deployment.
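
As a simple illustration, reachability of the deployer host from an NSP cluster VM can be confirmed with standard tools; the hostname and port below are placeholders to be replaced with the values used in your deployment.

ping -c 3 nsp-deployer.example.com
nc -zv nsp-deployer.example.com 443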

A deployment of a container-based NSP software component in a Nokia-provided container environment is sized according to the deployment type and the number of installation options enabled. Each deployment requires a deployer node and one or more worker nodes; the worker nodes require the vCPU, memory, and disk space specified by the NSP Sizing Tool. These platform requirements support the network dimensions described in Chapter 5, Scaling and performance.

The CPU and memory resources defined for worker nodes must be reserved for the individual guest OSs and cannot be shared or oversubscribed; this includes the individual vCPUs, which must be reserved for the VM. A deployer node does not require reserved CPU and memory resources.
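
As a generic sketch for a KVM host (other hypervisors provide equivalent reporting), the total number of vCPUs allocated to running VMs can be compared with the number of physical CPU threads to confirm that CPU resources are not oversubscribed; this is not an NSP-specific procedure.

# Sum the vCPUs of all running domains on the host
for dom in $(virsh list --name); do
  virsh dominfo "$dom" | awk '/^CPU\(s\)/ {print $2}'
done | paste -sd+ - | bc

# Number of CPU threads available on the host
nproc

If the vCPU total exceeds the nproc value, the host CPUs are oversubscribed.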

The following table shows the NSP cluster requirements in a Nokia-provided container environment.

Table 2-7: Platform requirements for NSP cluster deployment profiles

Requirement | Basic Deployment | Medium/Standard Deployment | Enhanced Deployment
NSP worker node count | 1 node | 3-5 nodes | 6-10 nodes
NSP deployer host minimum specifications | vCPU: 4, Memory: 8 GB | vCPU: 4, Memory: 8 GB | vCPU: 4, Memory: 8 GB

Disk partitioning recommendations for cluster deployer nodes are detailed in the NSP Installation and Upgrade Guide.

The NSP Sizing Tool specifies the overall vCPU, memory and disk space requirements; typically, however, a Kubernetes worker node is a VM with the following specifications:

A lab/trial NSP cluster deployer node requires a minimum of 4 vCPU, 8 GB memory and 250 GB disk.

The Simulation Tool NSP deployment must be deployed as type “lab” or “ip-mpls-sim”. Type “ip-mpls-sim” uses higher minimum resources for CPU and memory. The Simulation Tool NSP cluster virtual machine requires a minimum of 80 GB of memory. The Simulation Tool NSP cluster deployer requires a minimum of 2 vCPU, 4 GB memory and 400 GB disk.

Note: The worker plus deployer node count represents the total number of nodes in a single NSP cluster. A redundant NSP cluster requires that number of nodes at each datacenter.

Note: A virtual machine running a VSR-NRC instance requires CPU pinning and isolation from other virtual machines running on the host system. For platform requirements of the VSR-NRC, refer to the VSR Installation and Setup Guide.
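
On a KVM host, CPU pinning and isolation are typically achieved with per-vCPU pinning and host CPU isolation; the commands below are a generic sketch with placeholder domain and CPU numbers, not the VSR-specific procedure, which is documented in the VSR Installation and Setup Guide.

# Pin vCPU 0 of a hypothetical domain "vsr-nrc" to host CPU 4; repeat for each vCPU
virsh vcpupin vsr-nrc 0 4

# Isolate host CPUs 4-11 from the general scheduler (kernel boot parameter; requires a reboot)
grubby --update-kernel=ALL --args="isolcpus=4-11"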

Virtual Machine Mapping to Physical Hosts - HA Deployment

In environments where multiple physical hosts are used to run NSP cluster VMs, resiliency can be further improved by deploying certain VMs on different physical hosts. This allows a physical host to fail completely without necessarily triggering a site switchover.

Note: This VM distribution recommendation only applies for HA deployment types.

Note: If a physical host fails, multiple VMs are affected. NSP guarantees uninterrupted service only for a single VM failure; when multiple VMs fail, the data center will likely run in a degraded state, with some non-essential services completely unavailable.

Three physical hosts allow for this additional resiliency; however, four physical hosts are recommended to minimize the number of VMs affected if a physical host fails.

Table 2-8: Three physical host layout

Physical Host | NSP VM | NSP VM | NSP VM
1 | node1 | node4 | node7
2 | node2 | node5 | node8
3 | node3 | node6 | node9

Table 2-9: Four physical host layout

Physical Host | NSP VM | NSP VM | NSP VM
1 | node1 | node5 | node7
2 | node2 | node8 |
3 | node3 | node6 |
4 | node4 | node9 |

In both cases, any additional nodes can be placed on any physical host.

If more than four physical hosts are required, please contact your Nokia representative.

VM memory

The virtual memory configuration of each NSP cluster VM requires a parameter change to support Centralized Logging.

The following command entered as the root user displays the current setting:

sysctl -a | grep "vm.max_map_count"

If the setting is not at least 262144, you must enter the following command before you deploy the NSP:

sysctl -w vm.max_map_count=262144
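
To make the setting persistent across reboots, the parameter can also be written to a sysctl configuration file; the file name below is an example.

echo "vm.max_map_count=262144" > /etc/sysctl.d/90-nsp.conf
sysctl -p /etc/sysctl.d/90-nsp.conf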

Time synchronization

The system clocks of the NSP components must always be closely synchronized. The RHEL chronyd service is the required time-synchronization mechanism and must be enabled on each NSP component during deployment.

Caution: Some components, such as the members of an etcd cluster, do not trust the integrity of data if a time difference is detected. Failure to closely synchronize system clocks can therefore complicate debugging and cause outages or other unexpected behavior.

Note: Only one time-synchronization mechanism can be active in an NSP system. Before you enable a service such as chronyd on an NSP component, you must ensure that no other time-synchronization mechanism, for example, the VMware Tools synchronization utility, is enabled.
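
For example, the following commands enable chronyd and display its synchronization status on a RHEL station; the VMware Tools command applies only to VMware-hosted VMs.

systemctl enable --now chronyd
chronyc tracking
chronyc sources -v

# On VMware-hosted VMs, confirm that VMware Tools time synchronization is disabled
vmware-toolbox-cmd timesync status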

Using hostnames

The hostname of an NSP component station must meet the following criteria:

Note: If you use hostnames or FQDNs to identify the NSP components in the NSP configuration, the hostnames or FQDNs must be resolvable using DNS.
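
As a quick check, you can verify that a hostname or FQDN resolves on each station; the name below is a placeholder.

getent hosts nsp-cluster-vm1.example.com    # resolves through NSS (DNS or /etc/hosts)
host nsp-cluster-vm1.example.com            # queries DNS directly (bind-utils package)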