NSP deployment network addressing requirements
NSP MetalLB load balancer
MetalLB is a load balancer for Kubernetes clusters. A pool of IP addresses is required for the following communications:
- All NSP clusters require an ingress controller client VIP for client traffic.
- An NSP cluster with MDM deployed requires a trap forwarder mediation VIP for mediation traffic.
- An NSP cluster with Flow Collector deployed requires a flow forwarder VIP for mediation traffic.
Note: Each VIP specified must be distinct from every other VIP and from the IP addresses of the cluster nodes.
When installing the NSP Kubernetes cluster, you must specify which NSP cluster nodes act as load balancer endpoints. These nodes, called ingress gateways, accept ingress application traffic such as GUI client connections or SNMP traps from network elements. For redundancy, there should be at least two ingress gateway nodes, and they should reside on different host stations. Each ingress application accepts traffic on one ingress gateway node; different ingress applications can accept traffic on different ingress gateway nodes.
For each VIP configured for the cluster, every NSP cluster node that acts as an ingress gateway must have a network interface with an address in the same subnet as that VIP.
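The VIP rules above can be checked mechanically. The following sketch uses hypothetical addresses (the VIPs, subnets, and node interfaces shown are illustrative, not NSP defaults) to verify that each VIP is distinct from the node addresses and that every ingress gateway has an interface in each VIP's subnet:

```python
# Hypothetical addressing plan -- illustrative only, not NSP defaults.
from ipaddress import ip_address, ip_network

# Ingress gateway node interface addresses, one interface per VIP subnet.
gateway_interfaces = {
    "node-1": ["192.0.2.11", "198.51.100.11"],
    "node-2": ["192.0.2.12", "198.51.100.12"],
}

# VIPs paired with the subnets they belong to.
vips = {
    "client": ("192.0.2.100", "192.0.2.0/24"),
    "trap-forwarder": ("198.51.100.100", "198.51.100.0/24"),
}

def validate(gateways, vip_plan):
    node_ips = {ip for addrs in gateways.values() for ip in addrs}
    for name, (vip, subnet) in vip_plan.items():
        # Each VIP must be distinct from every node interface address.
        assert vip not in node_ips, f"{name} VIP collides with a node address"
        net = ip_network(subnet)
        # Every ingress gateway needs an interface in the VIP's subnet.
        for node, addrs in gateways.items():
            assert any(ip_address(a) in net for a in addrs), (
                f"{node} has no interface in {subnet} for the {name} VIP")
    return True

print(validate(gateway_interfaces, vips))  # True when the plan is consistent
```

Adding a third VIP subnet to the plan without giving each gateway node an interface in it would cause the check to fail, which mirrors the requirement stated above.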
NSP cluster nodes in same subnet
All NSP cluster nodes must belong to the same subnet when you configure the NSP Kubernetes cluster; the nodeIP parameters in k8s-deployer.yml must be within the same subnet. This applies to NSP cluster deployments that use a single network interface or multiple network interfaces, and is independent of any restrictions on the virtual IP addresses used in an NSP cluster deployment.
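As a quick sanity check, the same-subnet requirement for nodeIP values can be verified with the Python standard library. The addresses below are hypothetical; substitute the nodeIP values from your k8s-deployer.yml:

```python
# Hypothetical nodeIP values -- substitute the addresses from k8s-deployer.yml.
from ipaddress import ip_address, ip_network

node_ips = ["203.0.113.21", "203.0.113.22", "203.0.113.23"]
cluster_subnet = ip_network("203.0.113.0/24")

# Every nodeIP must belong to the same subnet.
all_in_subnet = all(ip_address(ip) in cluster_subnet for ip in node_ips)
print(all_in_subnet)  # True
```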
Using IPv4 and IPv6 in NSP deployments
The NSP supports IPv4 and IPv6 network connectivity with client applications and other components in the NSP architecture.
The following limitations and restrictions apply to deploying the NSP with IPv6 network communications:
- The deployer host for an NSP cluster must have IPv4 connectivity to the NSP cluster nodes. The NSP cluster can be configured for IPv6 communications for NSP applications, but the cluster nodes must still be reachable from the deployer host over IPv4.
- Common web browsers have security policies that may prevent the use of bracketed IPv6 addresses in the address bar. Customers who use IPv6 networking for client communications to the NSP must use the hostname configuration for NSP components. (The hostname of an NSP cluster is defined in nsp-config.yml with the platform.ingressApplications.ingressController.clientAddresses.advertised parameter. The hostname of an NFM-P server is defined in samconfig with the client.hostname parameter.)
- All components in an NSP deployment must use the same IP version, IPv4 or IPv6, for inter-component communications. Integrated components in an NSP deployment cannot mix IPv4 and IPv6 when communicating with each other; for example, if the NSP is deployed with IPv6, the NFM-P must also be deployed with IPv6.
- NSP Kubernetes cluster communication uses internal addressing in the 10.233.0.0/18 subnet. Customers should avoid using this subnet on VM network interfaces in their NSP deployment.
- WS-NOC does not support IPv6 deployment. An integrated deployment of NSP with WS-NOC must be deployed with IPv4 addressing.
- When there is an IPv4 interface and an IPv6 interface on the same switch, each interface needs its own VPLS.
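Because the cluster's internal addressing uses 10.233.0.0/18, it is worth checking any planned VM subnet against it before deployment. A minimal sketch using the Python standard library; the candidate subnets are hypothetical examples:

```python
from ipaddress import ip_network

# Internal subnet used by NSP Kubernetes cluster communication.
internal = ip_network("10.233.0.0/18")

# Hypothetical candidate subnets planned for VM network interfaces.
candidates = ["10.233.10.0/24", "10.250.0.0/16", "192.0.2.0/24"]

for subnet in candidates:
    clash = internal.overlaps(ip_network(subnet))
    print(subnet, "conflicts" if clash else "ok")
# 10.233.10.0/24 conflicts; the other two are ok
```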
The NSP can be deployed with multiple network interfaces using IPv4 and IPv6 addressing. Chapter 7 of this guide documents the requirements and limitations of a multiple network interface NSP deployment.
VSR-NRC
For NSP to VSR-NRC communications, both IPv4 and IPv6 are supported. These protocols are also supported in communications between VSR-NRC and PCCs.