Network Address Translation

Terminology

  • BNG subscriber

    This is a broader term than the ESM subscriber, independent of the platform on which the subscriber is instantiated. It includes ESM subscribers on the 7750 SR as well as subscribers instantiated on third-party BNGs. Some NAT functions, such as Subscriber Aware Large Scale NAT44 utilizing standard RADIUS attributes, work with subscribers independently of the platform on which they are instantiated.

  • deterministic NAT

    This is a mode of operation where mappings between the NAT subscriber and the outside IP address and port range are allocated at configuration time. Each subscriber is permanently mapped to an outside IP address and a dedicated port block, referred to as the deterministic port block. Logging is not needed because the reverse mapping can be obtained using a known formula (a conceptual sketch of such a formula appears after this terminology list). The subscriber's ports can be expanded by allocating a dynamic port block in case all ports in the deterministic port block are exhausted. In that case, logging for the dynamic port block allocation/de-allocation is required.

  • Enhanced Subscriber Management (ESM) subscriber

    This is a host or a collection of hosts instantiated in the 7750 SR Broadband Network Gateway (BNG). The ESM subscriber represents a household or a business entity for which various services with committed Service Level Agreements (SLAs) can be delivered. The NAT function is not part of basic ESM functionality.

  • Layer 2–aware NAT

    In the context of the 7750 SR platform, Layer 2–aware NAT combines the Enhanced Subscriber Management (ESM) subscriber ID and the inside IP address to perform translation into a unique outside IP address and outside port. This is in contrast to the classical NAT technique, where only the inside IP address is considered for address translation. Because the subscriber ID alone is sufficient to make the address translation unique, Layer 2–aware NAT allows many ESM subscribers to share the same inside IP address. The scalability, performance, and reliability requirements are the same as in LSN.

  • Large Scale NAT (LSN)

    This refers to a collection of network address translation techniques used in service provider networks, implemented on highly scalable, high-performance hardware that facilitates various intra-node and inter-node redundancy mechanisms. The purpose of the LSN term is to delineate the high-scale, high-performance NAT functions found in service provider networks from enterprise NAT, which usually serves a much smaller customer base at lower speeds. The following NAT techniques can be grouped under the LSN name:

    • Large Scale NAT44 or Carrier Grade NAT (CGN)

    • DS-Lite

    • NAT64

    Each distinct NAT technique is referred to by its corresponding name (Large Scale NAT44 [or CGN], DS-Lite, and NAT64), with the understanding that in the context of the 7750 SR platform, they are all part of LSN (and not enterprise-based NAT).

    The term Large Scale NAT44 can be used interchangeably with the term Carrier Grade NAT (CGN), which in its name implies high reliability, high scale, and high performance. These are, again, typical requirements found in service provider (carrier) networks.

  • NAT RADIUS accounting

    This is the reporting (or logging) of address translation related events (port-block allocation/de-allocation) via the RADIUS accounting facility. NAT RADIUS accounting is facilitated via regular RADIUS accounting messages (start/interim-update/stop) as defined in RFC 2866, RADIUS Accounting, with NAT-specific VSAs.

  • NAT RADIUS logging

    This term can be used interchangeably with the term NAT RADIUS accounting.

  • NAT subscriber

    In NAT terminology, a NAT subscriber is an inside entity whose true identity is hidden from the outside. There are a few types of NAT implementation in the 7750 SR, and the subscriber for each implementation is defined as follows:

    • Large Scale NAT44 (or CGN)

      The subscriber is an inside IPv4 address.

    • Layer 2–aware NAT

      The subscriber is an ESM subscriber which can spawn multiple IPv4 inside addresses.

    • DS-Lite

      The subscriber in DS-Lite can be identified by the CPE’s IPv6 address (B4 element) or an IPv6 prefix. The selection of address or prefix as the representation of a DS-Lite subscriber is configuration dependent.

    • NAT64

      The subscriber is an IPv6 prefix.

  • non-deterministic NAT

    This is a mode of operation where all outside IP address and port block allocations are made dynamically at the time of subscriber instantiation. Logging is required in this case.

  • port block

    This is a collection of ports assigned to a subscriber. A deterministic LSN subscriber can have only one deterministic port block, which can be extended by multiple dynamic port blocks. A non-deterministic LSN subscriber can be assigned only dynamic port blocks. All port blocks for an LSN subscriber must be allocated from a single outside IP address.

  • port-range

    This is a collection of ports that can spawn multiple port blocks of the same type. For example, the deterministic port range includes all ports that are reserved for deterministic consumption. Similarly, the dynamic port range is the total collection of ports that can be allocated in the form of dynamic port blocks. Other types of port ranges are well-known ports and static port forwards.
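
The reverse mapping formula mentioned under deterministic NAT can be illustrated with a short Python sketch. The inside prefix, outside addresses, port range, and block size below are illustrative assumptions, and the mapping scheme is a conceptual example, not the SR OS algorithm.

import ipaddress

# Illustrative assumptions (not the SR OS algorithm): inside hosts from a
# contiguous prefix map, in order, to outside IPs and fixed-size port blocks.
# The inside prefix must not exceed len(OUTSIDE_IPS) * BLOCKS_PER_IP hosts.
INSIDE_PREFIX = ipaddress.ip_network("10.0.0.0/24")
OUTSIDE_IPS = [ipaddress.ip_address("192.0.2.1"), ipaddress.ip_address("192.0.2.2")]
DET_RANGE_START, DET_RANGE_END = 1024, 65535    # deterministic port range
BLOCK_SIZE = 2016                               # deterministic port block size

BLOCKS_PER_IP = (DET_RANGE_END - DET_RANGE_START + 1) // BLOCK_SIZE

def forward(inside_ip):
    """Map an inside IP to its outside IP and deterministic port block."""
    index = int(ipaddress.ip_address(inside_ip)) - int(INSIDE_PREFIX.network_address)
    outside_ip = OUTSIDE_IPS[index // BLOCKS_PER_IP]
    start = DET_RANGE_START + (index % BLOCKS_PER_IP) * BLOCK_SIZE
    return outside_ip, (start, start + BLOCK_SIZE - 1)

def reverse(outside_ip, outside_port):
    """Recover the inside IP from an observed outside IP and port; no logs needed."""
    block = (outside_port - DET_RANGE_START) // BLOCK_SIZE
    index = OUTSIDE_IPS.index(ipaddress.ip_address(outside_ip)) * BLOCKS_PER_IP + block
    return ipaddress.ip_address(int(INSIDE_PREFIX.network_address) + index)

ip, block = forward("10.0.0.7")
assert reverse(ip, block[0]) == ipaddress.ip_address("10.0.0.7")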

Network Address Translation (NAT) overview

The 7750 SR supports Network Address (and port) Translation (NAPT) to provide continuity of legacy IPv4 services during the migration to native IPv6. By equipping the multiservice ISA (MS ISA) in an IOM4-e, IOM4-e-B, IOM4-e-HS, or in a 7750 SR-1e, 7750 SR-2e, or 7750 SR-3e (IOM-e), the 7750 SR can operate in two different modes, known as:

  • Large Scale NAT

  • Layer 2–aware NAT

These two modes both perform source address and port translation, as commonly deployed for shared Internet access. The 7750 SR with NAT is used to provide consumer broadband or business Internet customers access to IPv4 Internet resources with a shared pool of IPv4 addresses, such as may occur around the forecast IPv4 exhaustion. During this time, it is expected that native IPv6 services continue to grow while a significant amount of Internet content remains IPv4.

Principles of NAT

Network Address Translation devices modify the IP headers of packets between a host and server, changing some or all of the source address, destination address, source port (TCP/UDP), destination port (TCP/UDP), or ICMP query ID (for ping). The 7750 SR in both NAT modes performs Source Network Address and Port Translation (S-NAPT). S-NAPT devices are commonly deployed in residential gateways and enterprise firewalls to allow multiple hosts to share one or more public IPv4 addresses to access the Internet. The common terms inside and outside in the context of NAT refer to devices inside the NAT (that is, behind or masqueraded by the NAT) and outside the NAT, on the public Internet.

TCP/UDP connections use ports for multiplexing, with 65536 ports available for every IP address. When many hosts try to share a single public IP address, there is a chance of port collision, where two different hosts use the same source port for a connection. S-NAPT devices avoid the resulting collision by translating the source port and tracking it in a stateful manner. All S-NAPT devices are stateful in nature and must monitor connection establishment and traffic to maintain translation mappings. The 7750 SR NAT implementation does not use the well-known port range (1 to 1023).

In most circumstances, S-NAPT requires the inside host to establish a connection to the public Internet host or server before a mapping and translation occurs. With the initial outbound IP packet, the S-NAPT knows the inside IP, inside port, remote IP, remote port and protocol. With this information the S-NAPT device can select an IP and port combination (referred to as outside IP and outside port) from its pool of addresses and create a unique mapping for this flow of data.

Any traffic returned from the server uses the outside IP and outside port in the destination IP/port fields – matching the unique NAT mapping. The mapping then provides the inside IP and inside port for translation.

The requirement to create a mapping with inside port and IP, outside port and IP, and protocol generally prevents new connections from being established from the outside to the inside, as may be needed when an inside host needs to act as a server.
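
The mapping life cycle described above can be modeled with a minimal Python sketch. The table layout, sequential port selection, and names are illustrative assumptions; SR OS additionally applies the port block and timeout machinery described later in this section.

# Minimal S-NAPT model: the first outbound packet of a flow creates a mapping;
# return traffic is matched on (outside port, protocol). Illustrative only.
import itertools

OUTSIDE_IP = "198.51.100.1"
_port_pool = itertools.count(1024)      # the well-known range (1-1023) is not used

_out = {}   # (inside_ip, inside_port, remote_ip, remote_port, proto) -> outside_port
_in = {}    # (outside_port, proto) -> (inside_ip, inside_port)

def outbound(inside_ip, inside_port, remote_ip, remote_port, proto):
    key = (inside_ip, inside_port, remote_ip, remote_port, proto)
    if key not in _out:                 # first packet of the flow creates the mapping
        port = next(_port_pool)
        _out[key] = port
        _in[(port, proto)] = (inside_ip, inside_port)
    return OUTSIDE_IP, _out[key]        # translated source IP and port

def inbound(outside_port, proto):
    # Return traffic matches only if a mapping already exists; otherwise it is
    # dropped, which is why unsolicited outside-to-inside connections fail.
    return _in.get((outside_port, proto))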

Traffic load balancing

NAT traffic in SR OS is distributed over ISAs and ESAs within each NAT group. As a result, NAT capacity grows incrementally by adding more ISAs and ESAs to the system while each ISA or ESA participates equally in load sharing.

SR OS load balancing mechanisms in CGN (LSN44, DS-Lite, and NAT64) differ in the upstream and downstream directions; the two directions are independent and unaware of each other:

  • In the upstream direction, traffic is load balanced based on source IPv4 or IPv6 addresses or subnets.

  • In the downstream direction, outside IP address ranges (NAT pool address ranges) are micronetted (divided into smaller subnets), and these micronets are assigned to individual ISAs or ESAs in a balanced way. Downstream traffic is assigned to each ISA or ESA based on the micronets.

Load balancing over ISAs and ESAs shows traffic load balancing within SR OS. In the upstream direction, traffic is hashed based on the source IP addresses or subnets from the 10.10.0.0/16 range. A sample of 64,000 source IP addresses guarantees equal load distribution.

In this example, in the downstream direction, a pool of 256 public addresses is divided into four equal subnets and each subnet is assigned to one ISA or ESA; each ISA or ESA serves 64 public IP addresses.

Figure 1. Load balancing over ISAs and ESAs
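
As a rough illustration of both directions, the following Python sketch hashes inside source addresses to ISAs/ESAs upstream and carves the /24 pool into per-ISA micronets downstream, mirroring the example above. The modulo hash and the subnet arithmetic are illustrative assumptions, not the SR OS implementation.

import ipaddress

NUM_ISAS = 4
POOL = ipaddress.ip_network("198.51.100.0/24")    # 256 outside addresses

# Upstream: load balance on the inside source address (illustrative hash).
def upstream_isa(src_ip):
    return int(ipaddress.ip_address(src_ip)) % NUM_ISAS

# Downstream: carve the pool into equal micronets, one per ISA/ESA.
micronets = list(POOL.subnets(new_prefix=26))     # four /26s, 64 addresses each

def downstream_isa(outside_ip):
    addr = ipaddress.ip_address(outside_ip)
    return next(i for i, net in enumerate(micronets) if addr in net)

assert downstream_isa("198.51.100.70") == 1       # second /26 covers .64-.127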

If there are not enough IP addresses on the inside and outside in relation to the number of ISAs and ESAs, unequal load balancing and, in extreme cases, traffic blackholing can occur. Traffic blackholing shows an example of an extreme case, where a single IP address is assigned to a pool in a NAT group with four ISAs or ESAs. This single outside IP address can be assigned to a single ISA or ESA that serves downstream traffic. Upstream traffic is unaware of the downstream load distribution, so it sends traffic to all four ISAs and ESAs, and as a result this traffic is dropped at ISAs or ESAs that do not have the public IP address assigned.

Figure 2. Traffic blackholing

The operator is notified when the number of outside IP addresses in a pool is smaller than the number of ISAs or ESAs in the NAT group. The notification is sent in the form of a log, as shown in the example below.

3 2020/04/03 18:48:42.010 CEST MINOR: NAT #2015 Base Resource problem
"The address configuration for pool 'test.' causes one or more ISAs not getting an IP address"
4 2020/04/03 18:48:42.010 CEST MINOR: NAT #2014 Base Resource alarm raised
"The status of the NAT resource problem indication changed to true."

In the classic CLI, this configuration is permitted, but a message is displayed in direct response to pool activation.

configure router nat outside pool "test" no shutdown
INFO: BB #1221 The address configuration for this pool causes one or more members not getting an IP address - Router 'Base', pool 'test'

The load balancing mechanism in Layer 2–aware NAT relies on a different algorithm than CGN. In Layer 2–aware NAT, on the inside, traffic is distributed across ISAs and ESAs based on the resource utilization of each ISA or ESA. This load balancing mechanism is control plane driven, contrary to CGN, which is forwarding plane driven (hashing is based purely on source IP addresses or subnets). In Layer 2–aware NAT, an ESM subscriber is directed to the least occupied ISA or ESA, considering the number of subscribers, hosts, and port blocks as an aggregate. In Layer 2–aware NAT, traffic is not blackholed when the number of outside IP addresses is smaller than the number of ISAs and ESAs in the pool within a single NAT group. Instead, the outside IP addresses are assigned to some of the ISAs or ESAs and the ESM hosts are directed to those. ISAs and ESAs without assigned outside IP addresses remain unused.
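
A conceptual sketch of this control-plane placement follows. The aggregate of subscribers, hosts, and port blocks is the metric described above; weighting the three counters equally is an assumption.

# Pick the ISA/ESA with the lowest aggregate utilization for a new ESM subscriber.
def place_subscriber(isas):
    """isas: list of dicts like {"subs": 10, "hosts": 25, "port_blocks": 40}."""
    def load(isa):
        # Equal weighting of the three counters is an illustrative assumption.
        return isa["subs"] + isa["hosts"] + isa["port_blocks"]
    return min(range(len(isas)), key=lambda i: load(isas[i]))

isas = [{"subs": 10, "hosts": 25, "port_blocks": 40},
        {"subs": 3, "hosts": 7, "port_blocks": 12}]
assert place_subscriber(isas) == 1      # the second, less occupied ISA is chosen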

Application compatibility

Applications which operate as servers (such as HTTP, SMTP, and so on) or peer-to-peer applications can have difficulty when operating behind an S-NAPT because traffic from the Internet cannot reach the NAT without a mapping in place.

Different methods can be employed to overcome this, including:

  • Port forwarding

  • STUN support

  • Application Layer Gateways (ALG)

The 7750 SR supports all three methods following the best-practice RFCs for TCP (RFC 5382, NAT Behavioral Requirements for TCP) and UDP (RFC 4787, Network Address Translation (NAT) Behavioral Requirements for Unicast UDP). Port forwarding is supported on the 7750 SR to allow servers that operate on well-known ports (<1024), such as HTTP and SMTP, to request the appropriate outside port for permanent allocation.

STUN is facilitated by the support of Endpoint-Independent Filtering and Endpoint-Independent Mapping (RFC 4787) in the NAT device, allowing STUN-capable applications to detect the NAT and allow inbound P2P connections for that specific application. Many new SIP clients and IM chat applications are STUN capable.

Application Layer Gateways (ALGs) allow the NAT to monitor the application running over TCP or UDP and make appropriate changes in the NAT translations to suit. The 7750 SR has an FTP ALG enabled, following the recommendation of the IETF BEHAVE RFC for NAT (RFC 5382).

Even with these three mechanisms, some applications still experience difficulty operating behind a NAT. As this is an industry-wide issue, forums like UPnP, the IETF, and operator and vendor communities are seeking technical alternatives for application developers to traverse NAT (including STUN support). In many cases, the alternative of an IPv6-capable application gives better long-term support without the cost or complexity associated with NAT.

Large-Scale NAT

Large-Scale NAT (LSN) represents the most common deployment of S-NAPT in carrier networks today; it is already employed by mobile operators around the world for handset access to the Internet.

An LSN is typically deployed in a central network location with two interfaces, the inside toward the customers, and the outside toward the Internet. A Large Scale NAT functions as an IP router and is located between two routed network segments (the ISP network and the Internet).

Traffic can be sent to the LSN function on the 7750 SR using IP filters (ACLs) applied to SAPs or by installing static routes with a next-hop of the NAT application. These two methods allow for increased flexibility in deploying the LSN, especially in environments where IP MPLS VPNs are used, in which case the NAT function can be deployed on a single PE and perform NAT for any number of other PEs by simply exporting the default route.

The 7750 SR NAT implementation supports NAT in the base routing instance and in VPRNs, and NAT traffic may originate in one VPRN (the inside) and leave through another VPRN or the base routing instance (the outside). This technique can be employed to provide customers of IP MPLS VPNs with Internet access by introducing a default static route in the customer VPRN and NATing it into the Internet routing instance.

As LSN is deployed between two routed segments, the IP addresses allocated to hosts on the inside must be unique to each host within the VPRN. While RFC 1918 private addresses have typically been used for this in enterprise or mobile environments, challenges can occur in fixed residential environments where a subscriber has an existing S-NAPT in their residential gateway. In these cases, the RFC 1918 private address in the home network may conflict with the address space assigned to the residential gateway WAN interface. Should a conflict occur, many residential gateways fail to forward IP traffic.

Port range blocks

The S-NAPT service on the 7750 SR BNG incorporates a port range block feature to address the scalability of a NAT mapping solution. With a single BNG capable of hundreds of thousands of NAT mappings every second, logging each mapping as it is created and destroyed for later retrieval (as may be required by law enforcement) could quickly overwhelm the fastest of databases and messaging protocols. Port range blocks address the issue of logging and customer location functions by allocating a block of contiguous outside ports to a single subscriber. Instead of logging each NAT mapping, a single log entry is created when the first mapping is created for a subscriber and a final log entry when the last mapping is destroyed. This can reduce the number of log entries by 5000x or more. An added benefit is that, because the range is allocated on the first mapping, external applications or customer location functions may be populated with this data to perform real-time subscriber identification, instead of having to query the NAT for the subscriber identity in real time and possibly delay applications.

Port range blocks are configurable as part of the outside pool configuration, allowing the operator to specify the number of ports allocated to each subscriber when a mapping is created. When a range is allocated to the subscriber, these ports are used for all outbound dynamic mappings and are assigned in a random manner to minimize the predictability of port allocations (draft-ietf-tsvwg-port-randomization-05).
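
A minimal Python sketch of both ideas, with an assumed block size and free-block list: one log entry covers the whole block, and individual ports are then drawn from the block in random order, in the spirit of draft-ietf-tsvwg-port-randomization-05.

import random

BLOCK_SIZE = 128    # assumed port-reservation setting (ports per block)

def allocate_block(free_block_starts, subscriber, log):
    """Allocate one contiguous port block; a single log entry covers all of it."""
    start = free_block_starts.pop()
    log.append(f"ALLOC subscriber={subscriber} ports={start}-{start + BLOCK_SIZE - 1}")
    ports = list(range(start, start + BLOCK_SIZE))
    random.shuffle(ports)               # randomize port picks within the block
    return ports

# One log line now covers up to BLOCK_SIZE mappings instead of one line each.
log = []
ports = allocate_block([1024, 1152, 1280], "sub-1", log)
next_outside_port = ports.pop()         # draw ports in random order as flows appear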

Port range blocks also serve another useful function in a Large Scale NAT environment, and that is to manage the fair allocation of the shared IP resources among different subscribers.

When a subscriber exhausts all ports in their block, further mappings are prohibited. As with any enforcement system, some exceptions are allowed and the NAT application can be configured with reserved ports to give high-priority applications access to outside port resources even when the port block has been exhausted by low-priority applications.

Reserved ports and priority sessions

Reserved ports allow an operator to set aside a small number of ports for designated applications should a port range block be exhausted. Such a scenario may occur when a subscriber is unwittingly subjected to a virus or engaged in extreme cases of P2P file transfers. In these situations, instead of blocking all new mappings indiscriminately, the 7750 SR NAT application allows operators to nominate a number of reserved ports and then assign a 7750 SR forwarding class as containing high-priority traffic for the NAT application. Whenever traffic that matches a priority session forwarding class reaches the NAT application, reserved ports are consumed to improve the chances of success. Priority sessions could be used by the operator for services such as DNS, web portal, e-mail, VoIP, and so on, to allow these applications even when a subscriber has exhausted their ports.

Preventing port block starvation

Dynamic port block starvation in LSN

The outside IP address is always shared between the subscriber's port forwards (static or via PCP) and its dynamically allocated port blocks, provided that the port of the port forward is in the range above 1023. This behavior can lead to starvation of dynamic port blocks for the subscriber. An example of this scenario is shown in Dynamic port block starvation in LSN.

  • A static port forward for the Web server in Home 1 is allocated in the CPE and the CGN. At the time of static port forward creation, no other dynamic port blocks for Home 1 exist (the PCs are powered off).

  • Assume that the outside IP address for the newly created static port forward in the CGN is 10.3.3.1.

  • Over time, dynamic port blocks are allocated for a number of other homes that share the same outside IP address, 10.3.3.1. Eventually, those allocations exhaust the entire dynamic port block range for the address 10.3.3.1.

  • After the dynamic port blocks are exhausted for outside IP address 10.3.3.1, a new outside IP address (for example, 10.3.3.2) is allocated for additional homes.

Eventually, the PCs in Home 1 come online and try to connect to the Internet. Because of the dynamic port block exhaustion for the IP address 10.3.3.1 (which is mandated by the static port forward for the Web server), the dynamic port block allocation fails and, consequently, the PCs are not able to access the Internet. There is no additional attempt within the CGN to allocate another outside IP address. In the CGN there is no distinction between the PCs in Home 1 and the Web server when it comes to the source IP address; both share the same source IP address 10.2.2.1 on the CPE.

The solution for this is to reserve a port block (or blocks) during the static port forward creation for the specific subscriber.

Figure 3. Dynamic port block starvation in LSN
Dynamic port block reservation

To prevent starvation of dynamic port blocks for subscribers that use port forwards, a dynamic port block (or blocks) is reserved for the lifetime of the port forward. Those reserved dynamic port blocks are associated with the same subscriber that created the port forward. However, a log is not generated until the dynamic port block is actually used and mappings within that block are created.

At the time of the port forward creation, the dynamic port block is reserved in the following fashion:

  • If a dynamic port block for the subscriber does not already exist, one is reserved. No log for the reserved dynamic port block is generated until the block starts being used (a mapping is created because of traffic flow).

  • If the corresponding dynamic port block already exists, it remains reserved even after the last mapping within the last port block has expired.

The reserved dynamic port block (even without any mapping) continues to be associated with the subscriber as long as the port forward for the subscriber is present. The log (syslog or RADIUS) is generated only when there is no active mapping within the dynamic port block and all port forwards for the subscriber are deleted.

Additional considerations with dynamic port block reservation:

  • The port block reservation is triggered only by the first port forward for the subscriber. Subsequent port forwards do not trigger additional dynamic port block reservations.

  • Only a single dynamic port block for the subscriber is reserved (that is, no multiple port-block reservations for the subscriber are possible).

  • This feature is enabled with the following commands:

    • MD-CLI
      configure router nat outside pool port-forwarding dynamic-block-reservation 
      configure service vprn nat outside pool port-forwarding dynamic-block-reservation
    • classic CLI
      configure router nat outside pool port-forwarding-dyn-block-reservation
      configure service vprn nat outside pool port-forwarding-dyn-block-reservation

    These commands can be enabled only if the maximum number of configured port blocks per outside IP address is greater than or equal to the maximum configured number of subscribers per outside IP address (see the sketch after this list). This guarantees that all subscribers (up to the maximum number per outside IP address) configured with port forwards can reserve a dynamic port block.

  • If port reservation is enabled while the outside pool is operational and subscriber traffic is already present, the following two cases must be considered:

    • The configured number of subscribers per outside IP address is less than or equal to the configured number of port blocks per outside IP address (this is permitted), but all dynamic port blocks per outside IP address are occupied at the moment when port reservation is enabled. This leaves existing subscribers with port forwards that do not have any dynamic port blocks allocated (orphaned subscribers) unable to reserve dynamic port blocks. In this case, the orphaned subscribers must wait until dynamic port blocks allocated to subscribers without port forwards are freed.

    • The configured number of subscribers per outside IP address is greater than the configured number of port blocks per outside IP address. In addition, all dynamic port blocks per outside IP address are allocated. Before port reservation can even be enabled, the subscriber limit per outside IP address must be lowered (by configuration) so that it is less than or equal to the configured number of port blocks per outside IP address. This action causes random deletion of subscribers that do not have any port forwards. Such subscribers are deleted until the number of subscribers falls below the newly configured subscriber limit. Subscribers with static port forwards are not deleted, regardless of the configured subscriber limit. When the number of subscribers is within the newly configured subscriber limit, the port reservation can take place, under the condition that dynamic port blocks are available. If specific subscribers with port forwards have more than one dynamic port block allocated, the orphaned subscribers must wait for those additional dynamic port blocks to expire and consequently be released.

  • This feature is supported on the following applications: CGN, DS-Lite and NAT64.
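
The configuration guard mentioned in the command description above reduces to a simple inequality. A sketch of the check, with hypothetical variable names:

def reservation_allowed(port_blocks_per_ip, subscriber_limit_per_ip):
    # Dynamic block reservation can be enabled only if every subscriber that may
    # share an outside IP address can still be guaranteed one port block.
    return port_blocks_per_ip >= subscriber_limit_per_ip

assert reservation_allowed(port_blocks_per_ip=64, subscriber_limit_per_ip=64)
assert not reservation_allowed(port_blocks_per_ip=32, subscriber_limit_per_ip=64)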

Pools with flexible port allocations

Pools with flexible port allocations are specialized pools that allow per-port allocations instead of per-port-block allocations for specific use cases. Logging of port allocations and deallocations within such pools is not provided. These pools are compatible with ESA-VM and vISA (VSR) and are not applicable to ISA2. They cater to users that have a dedicated private pool. Pools with flexible port allocations ensure that the external IP addresses of the pools are associated with a single user entity or tenant even before the pool provisioning phase, which eliminates the need for logging.

When pools with flexible port allocations are configured, static port forwards are interspersed with dynamically allocated ports. These pools can be linked only via a NAT policy to a source prefix or to a static port forward. This allows NAT processing of traffic solely originating from the configured source prefix or address. Neither of these two mechanisms inherently steers traffic to the ESA-VM/vISA modules. Therefore, traffic for NAT processing is directed to pools with flexible port allocations based on either a destination prefix or filter criteria. In this case, an explicitly configured NAT policy is not allowed within a destination prefix or a filter. After the traffic is in the ESA-VM/vISA, the pool is selected based on the source prefix or an existing flow created under the static port forward, which does not have a NAT policy explicitly configured.

Execute the following command to configure pools with flexible port allocations:

  • MD-CLI
    configure router nat outside pool applications flexible-port-allocation true
  • classic CLI
    configure router nat outside pool flexible-port-allocation

The traffic is steered to the ESA-VM/vISA based on a destination prefix or filter criteria.

Note: A default NAT policy is not allowed for pools with flexible port allocations. A default NAT policy is the one configured with the following commands:
  • MD-CLI
    configure service vprn nat inside large-scale nat-policy 
  • classic CLI
    configure service vprn nat inside nat-policy 

The following examples show configured NAT pools with flexible port allocations.

Destination prefix (MD-CLI)

[ex:/configure service vprn nat inside large-scale]
A:admin@node-2# info
    traffic-identification {
        source-prefix-only true
    }
    nat44 {
        destination-prefix 0.0.0.0/0 {
        }
        source-prefix 10.10.10.0/24 {
            nat-policy "nat-pol-1"
        }
        source-prefix 10.10.11.0/24 {
            nat-policy "nat-pol-1"
        }
        source-prefix 10.10.12.0/25 {
            nat-policy "nat-pol-2"
        }
        source-prefix 10.10.12.128/25 {
            nat-policy "nat-pol-3"
        }
    }

Destination prefix (classic CLI)

A:node-2>config>service>vprn>nat>inside# info
----------------------------------------------
                    destination-prefix 0.0.0.0/0
                    traffic-identification
                        source-prefix-only
                    exit
                    source-prefix 10.10.10.0/24 nat-policy "nat-pol-1"
                    source-prefix 10.10.11.0/24 nat-policy "nat-pol-1"
                    source-prefix 10.10.12.0/25 nat-policy "nat-pol-2"
                    source-prefix 10.10.12.128/25 nat-policy "nat-pol-3"
----------------------------------------------

Filter definition (MD-CLI)

[ex:/configure filter ip-filter "demo-nat" entry 10]
A:admin@node-2# info    
    match {
        protocol 6
        dst-ip {
            address 0.0.0.0/1
        }
        dst-port {
            eq 2000
        }
    }
    action {
        nat {
        }
    }

Filter definition (classic CLI)

A:node-2>config>filter>ip-filter>entry# info
----------------------------------------------
                match protocol 6
                    dst-ip 0.0.0.0/1
                    dst-port eq 2000
                exit
                action
                    nat
                exit
----------------------------------------------

Applying a filter to the ingress interface (MD-CLI)

[ex:/configure service vprn "nat"]
A:admin@node-2# info
    admin-state enable
    service-id 20
    customer "1"
    nat {
        inside {
            large-scale {
                traffic-identification {
                    source-prefix-only true
                }
                nat44 {
                    destination-prefix 0.0.0.0/0 {
                    }
                    source-prefix 10.10.10.0/24 {
                        nat-policy "nat-pol-1"
                    }
                    source-prefix 10.10.11.0/24 {
                        nat-policy "nat-pol-1"
                    }
                    source-prefix 10.10.12.0/25 {
                        nat-policy "nat-pol-2"
                    }
                    source-prefix 10.10.12.128/25 {
                        nat-policy "nat-pol-3"
                    }
                }
            }
        }
    }
    interface "access" {
        admin-state enable
        ipv4 {
            primary {
                address 172.16.102.1
                prefix-length 24
            }
        }
        sap lag-4:2 {
            ingress {
                filter {
                    ip "demo-nat"
                }
            }
        }
    }

Applying a filter to the ingress interface (classic CLI)

A:node-2>config>service>vprn# info
----------------------------------------------
            interface "access" create
                address 172.16.102.1/24
                sap lag-4:2 create
                    ingress
                        filter ip 1
                    exit
                exit
            exit
            nat
                inside
                    destination-prefix 0.0.0.0/0
                    traffic-identification
                        source-prefix-only
                    exit
                    source-prefix 10.10.10.0/24 nat-policy "nat-pol-1"
                    source-prefix 10.10.11.0/24 nat-policy "nat-pol-1"
                    source-prefix 10.10.12.0/25 nat-policy "nat-pol-2"
                    source-prefix 10.10.12.128/25 nat-policy "nat-pol-3"
                exit
            exit
            no shutdown
----------------------------------------------

Static port forwards in a pool with flexible port allocations

Static Port Forwards (SPF) are supported in pools with flexible port allocations.

Typically, the SPF command requires that the NAT subscriber be mapped to the same pool to which its source prefix is mapped. For instance, a subscriber with the IP address 10.10.12.1, targeted in an SPF command, needs to correspond with the pool related to its source prefix 10.10.12.0/25. This mapping is indirectly configured via the following nat-policy command.

Use the following command to create an SPF.

tools perform nat port-forwarding-action lsn create router "20" ip 10.10.12.1 protocol tcp port 2000 lifetime 3000 outside-ip 192.168.255.160 outside-port 2000 nat-policy "nat-policy-2"
Source prefix configuration (MD-CLI)
[ex:/configure service vprn "1" nat]
A:admin@node-2# info
    inside {
        large-scale {
            traffic-identification {
                source-prefix-only true
            }
            nat44 {
                destination-prefix 0.0.0.0/0 {
                }
                source-prefix 10.10.11.0/24 {
                    nat-policy "nat-policy-1"
                }
                source-prefix 10.10.12.0/25 {
                    nat-policy "nat-policy-2"
                }
                source-prefix 10.10.12.128/25 {
                    nat-policy "nat-policy-3"
                }
            }
        }
    }
Source prefix configuration (classic CLI)
A:node-2>config>service>vprn>nat>inside$ info
----------------------------------------------
                    destination-prefix 0.0.0.0/0
                    traffic-identification
                        source-prefix-only
                    exit
                    source-prefix 10.10.11.0/24 nat-policy "nat-policy-1"
                    source-prefix 10.10.12.0/25 nat-policy "nat-policy-2"
                    source-prefix 10.10.12.128/25 nat-policy "nat-policy-3"
----------------------------------------------

Failure to match the SPF request with a corresponding NAT policy and source prefix results in SPF creation failure. To circumvent this and enable SPF allocation from a different NAT pool than that of the corresponding source prefix, the NAT policy for the SPF must be predeclared within the inside routing context. For example, if the SPF is allocated from a different pool, the NAT policy must be declared as follows.

tools perform nat port-forwarding-action lsn create router "20" ip 10.10.12.1 protocol tcp port 2000 lifetime 3000 outside-ip 192.168.255.160 outside-port 2000 nat-policy "nat-policy-1"

Use the following command to declare a NAT policy:

  • MD-CLI
    configure service vprn "esm" nat inside large-scale static-port-forwards spf-nat-policy "nat-policy-1"
  • classic CLI
    configure service vprn nat inside spf-policy nat-policy "nat-policy-1"

This enables the creation of an SPF for the subscriber with the IP address 10.10.12.1 in a pool different from that associated with the corresponding source prefix. Specifically, the SPF is allocated in the pool indicated by nat-policy-1, diverging from the pool associated with nat-policy-2, which is linked to the source prefix.

Multiple NAT policies can be declared within this configuration hierarchy.

Free port limit

The free port limit feature allows the user to configure a limit on free ports per protocol for each external IP address. This avoids rapid port depletion for new subscribers in paired pooling mode, or unnecessary toggling between external IP addresses for existing subscribers in arbitrary pooling mode.

Such port limits ensure that only IP addresses with sufficient free ports, in accordance with the configured limit, are considered for selection and added to the eligible IP address list.

Note: The free port limit does not interfere with port allocation from an outside IP address for subscribers that are already assigned to the IP address. These subscribers can continue to use ports until the ports on that outside IP address are exhausted. After all of the ports on an outside IP address are used up, the system maps subscribers, new or those in arbitrary pooling mode, to a new IP address if that address has a free port count above the configured limit.
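
The selection rule can be sketched as follows; the data structures and names are illustrative assumptions, and the per-protocol limits match the configuration example that follows.

FREE_PORT_LIMIT = {"tcp": 1000, "udp": 1000, "icmp": 1000}    # per-protocol limits

def eligible_ips(pool):
    """pool: dict of outside IP -> per-protocol free port counts."""
    # Only addresses with sufficient free ports for every protocol stay eligible.
    return [ip for ip, free in pool.items()
            if all(free[proto] >= limit for proto, limit in FREE_PORT_LIMIT.items())]

pool = {"192.0.2.1": {"tcp": 5000, "udp": 900, "icmp": 2000},     # udp below limit
        "192.0.2.2": {"tcp": 3000, "udp": 1500, "icmp": 1200}}
assert eligible_ips(pool) == ["192.0.2.2"]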

The following example displays configured free port limits in pools with flexible port allocations.

MD-CLI
[ex:/configure router "Base" nat outside pool "demo" large-scale]
A:admin@node-2# info
    flexible-port-allocation {
        free-port-limit {
            tcp 1000
            udp 1000
            icmp 1000
        }
    }
classic CLI
A:node-2>config>router>nat>outside>pool# info
----------------------------------------------
...
                    flexible-port-allocation
                        free-port-limit tcp 1000 udp 1000 icmp 1000
                    exit
...
----------------------------------------------

Restrictions

The following functionalities are not supported for pools with flexible port allocations and are blocked in the CLI:

  • referencing this pool in the destination prefix or filter
  • destination-based NAT (dNAT)
  • PCP
  • deterministic NAT
  • Layer 2–aware NAT
  • 1:1 NAT
  • maximum number of subscribers per-IP address
  • reservation of ports based on QoS settings (port priorities)
  • Stateful Inter-Chassis NAT Redundancy (SICR)
  • WLAN-GW or L2 aware firewall-specific functionality (dormant pool or V6 translations)
  • scaling profile 1 and scaling profile 3

Association between NAT subscribers and IP addresses in a NAT pool

A NAT subscriber can allocate ports on a single outside IP address or multiple IP addresses in a NAT pool. Nokia recommends that NAT subscribers allocate ports from a single outside IP address. If this IP address runs out of ports, the NAT subscriber runs out of ports. In other words, there is no attempt for a new port to be allocated from a different outside IP address. This method of address allocation to a NAT subscriber is referred to as Paired Address Pooling and is the default behavior in SR OS.

With the alternative method of port allocation, when ports are exhausted on the originally allocated IP address, an attempt is made to allocate ports from another IP address that has free ports available. This results in a NAT subscriber being associated with multiple outside IP addresses. This method is referred to as Arbitrary Address Pooling and can be optionally enabled in SR OS. See RFC 7857, Section 4 for more information.

Arbitrary address pooling may offer more efficient allocation of port blocks across outside IP addresses in a NAT pool, but it may negatively affect some applications. For example, an application may require two channels for communication, a control channel and a data channel, each on a different source port from the client perspective on the inside of the NAT. The communication channel may be established on the outside address IP1 and outside port X. If port X is the last free port on IP1, SR OS attempts to allocate the next port Y for the data channel from a different outside address, IP2. If the application is robust enough to accept communication from the same client on two different IP addresses, there are no issues. However, some applications may not support this scenario and the communication fails.
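
The difference between the two pooling modes comes down to what happens at port exhaustion. The following Python sketch models this under assumed state (per-IP free port counts and a per-subscriber address binding); it is a conceptual illustration, not the SR OS allocator.

def allocate_port(subscriber, free_ports, bindings, arbitrary=False):
    """free_ports: outside IP -> free port count; bindings: subscriber -> [IPs]."""
    ips = bindings.setdefault(subscriber, [])
    # Try the outside IP(s) already bound to this subscriber first.
    for ip in ips:
        if free_ports[ip] > 0:
            free_ports[ip] -= 1
            return ip
    if ips and not arbitrary:
        return None                     # paired pooling: no spill to another IP
    # First allocation, or arbitrary pooling spilling to an IP with free ports.
    for ip, free in free_ports.items():
        if free > 0 and ip not in ips:
            ips.append(ip)
            free_ports[ip] -= 1
            return ip
    return None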

Arbitrary address pooling implies the following:

  • The subscriber limit per outside IP address loses its meaning because the subscriber can now be associated with multiple IP addresses. Hence, the following command cannot be set:
    • MD-CLI
      configure router nat outside pool large-scale subscriber-limit
    • classic CLI
      configure router nat outside pool subscriber-limit

    For more information about the subscriber-limit command in paired address pooling, see Managing port block space.

  • The number of outside IP addresses in a pool must be at least double the number of ESA-VMs, ISA2s, or vISAs in the NAT group hosting the subscribers. Each subscriber is hashed to a single ESA-VM, ISA2, or vISA; therefore, at least two outside IP addresses must be available per ESA-VM, ISA2, or vISA for the subscriber to be able to use more than one outside IP address.

  • The number of port blocks configured in a NAT policy using the following command is the aggregate limit that a NAT subscriber can be allocated across multiple outside IP addresses.

    configure service nat nat-policy block-limit
  • Reserving a port block by SPF configuration (when an SPF is configured before any port blocks are allocated to the subscriber) is not supported. In other words, the following commands are not supported:

    • MD-CLI
      configure router nat outside pool port-forwarding dynamic-block-reservation
    • classic CLI
      configure router nat outside pool port-forwarding-dyn-block-reservation
  • Arbitrary address pooling is not supported in Layer 2–aware NAT.

Use the following command to show NAT LSN information for the subscriber.

show service nat lsn-subscribers subscriber

The asterisk (*) next to the IP address field in the output indicates that additional outside IP addresses are associated with this subscriber in this pool.

===============================================================================
NAT LSN subscribers
===============================================================================
Subscriber                  : [LSN-Host@192.168.1.1]
NAT policy                  : nat-policy-lsn-deterministic
Subscriber ID               : 276824064
-------------------------------------------------------------------------------
Type                        : classic-lsn-sub
Inside router               : "Base"
Inside IP address prefix    : 192.168.1.1/32
ISA NAT group               : 1
ISA NAT group member        : 1
Outside router              : 4
Outside IP address          : 192.0.0.1*  

Use the detailed version of the command to see additional outside IP addresses and port blocks.

Timeouts

Creating a NAT mapping is only one half of the problem – removing a NAT mapping at the appropriate time maximizes the shared port resource. Having ports mapped when an application is no longer active reduces solution scale and may impact the customer experience should they exhaust their port range block. The NAT application provides timeout configuration for TCP, UDP and ICMP.

TCP state is tracked for all TCP connections, supporting both three-way handshake and simultaneous TCP SYN connections. Separate and configurable timeouts exist for TCP SYN, TCP transition (between SYN and Open), established and time-wait state. Time-wait assassination is supported and enabled by default to quickly remove TCP mappings in the TIME WAIT state.

UDP does not have the concept of connection state and is subject to a simple inactivity timer. Company-sponsored research into applications and NAT behavior suggested that some applications, like the BitTorrent Distributed Hash Table (DHT) protocol, can make a large number of outbound UDP connections that are unsuccessful. Instead of waiting the default five (5) minutes to time these out, the 7750 SR NAT application supports a udp-initial timeout, which defaults to 15 seconds. When the first outbound UDP packet is sent, the 15-second timer starts; only after subsequent packets (inbound or outbound) does the default UDP timer become active, greatly reducing the number of UDP mappings.
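
The udp-initial behavior can be sketched as follows, using the default timers quoted above (15 seconds initial, five minutes thereafter); the class and field names are illustrative.

import time

UDP_INITIAL_TIMEOUT = 15        # seconds, while only one packet has been seen
UDP_TIMEOUT = 5 * 60            # regular UDP inactivity timeout

class UdpMapping:
    def __init__(self):
        self.last_seen = time.monotonic()
        self.packets = 1                    # the first outbound packet

    def packet_seen(self):
        self.packets += 1
        self.last_seen = time.monotonic()

    def expired(self):
        # A one-packet mapping ages out quickly; after any further traffic
        # (inbound or outbound) the regular inactivity timer applies.
        timeout = UDP_INITIAL_TIMEOUT if self.packets == 1 else UDP_TIMEOUT
        return time.monotonic() - self.last_seen > timeout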

Layer 2–aware NAT

L2-Aware tree shows the L2-Aware tree.

Figure 4. L2-Aware tree

NAT is supported on DHCP, PPPoE and L2TP. Static and ARP hosts are not supported.

Layer 2–aware (or subscriber-aware) NAT is combined with Enhanced Subscriber Management on the 7750 SR BNG to overcome the issues of colliding address space between home networks and the inside routed network between the customer and the Large Scale NAT.

Layer 2–aware NAT allows every broadband subscriber to be allocated the exact same IPv4 address on their residential gateway WAN link and then translates this into a public IP address through the NAT application.

Layer 2–aware NAT is supported on any of the ESM access technologies, including PPPoE, IPoE (DHCP), and L2TP LNS. For IPoE, both n:1 (VLAN per service) and 1:1 (VLAN per subscriber) models are supported. A subscriber device operating with Layer 2–aware NAT needs no modification or enhancement; existing address mechanisms (DHCP or PPP/IPCP) are identical to a public IP service. The 7750 SR BNG simply translates all IPv4 traffic into a pool of IPv4 addresses, allowing many Layer 2–aware NAT subscribers to share the same IPv4 address.

More information about Layer 2–aware NAT can be found in draft-miles-behave-l2nat-00.

Port block extensions

Similarly to LSN, a Layer 2–aware NAT subscriber is assigned a single outside IP address per NAT pool, with one or more port blocks tied to the IP address. The outside IP address is shared by multiple subscribers, each with its own unique set of port blocks.

To ensure that a predetermined number of subscribers receive NAT service, an outside IP address and at least one port block on that IP address must be guaranteed. For this reason, the port block space in a pool is divided into two partitions:

  • port block space reserved for new Layer 2–aware NAT subscribers

    Each new subscriber is guaranteed to receive at least one port block, referred to as the initial port block.

  • port block space reserved for the extended port-blocks of existing NAT subscribers

    This port partition can be used by subscribers who exhaust the ports in their initial port block and need additional ports. Depending on availability and configuration, they are assigned additional port blocks.

Without this type of port space partitioning, the outside IP addresses in the NAT pool could be taken over by users with heavier port consumption, denying NAT services to the majority of users with lower port consumption.

This division of port space is controlled by limiting the number of subscribers per outside IP address and configuring the size of the initial port block.
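
As a worked illustration, the following Python sketch partitions the port space of one outside IP address using the values from the configuration example later in this section (static port forwards up to port 4119, initial blocks of 1000 ports, extension blocks of 335 ports, 32 subscribers per outside IP address). The exact partition boundaries in SR OS may differ; this only shows the arithmetic.

# Partition the port space of one outside IP address, using the example values
# from the configuration later in this section.
TOTAL_PORTS = 65535
PORT_FORWARDING_RANGE_END = 4119    # well-known ports plus static port forwards
INITIAL_BLOCK = 1000                # port-reservation ports
EXTENDED_BLOCK = 335                # port-block-extension ports
SUBSCRIBER_LIMIT = 32               # subscribers per outside IP address

dynamic_ports = TOTAL_PORTS - PORT_FORWARDING_RANGE_END       # 61,416 ports
initial_partition = SUBSCRIBER_LIMIT * INITIAL_BLOCK          # 32,000 ports
extension_partition = dynamic_ports - initial_partition       # 29,416 ports
extended_blocks = extension_partition // EXTENDED_BLOCK       # 87 extension blocks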

The following shows configuration information relevant to port-block allocation in Layer 2–aware NAT.

Use the following commands to configure the initial port block size for new subscribers:

  • MD-CLI
    configure service vprn nat outside pool port-reservation port-blocks
    configure service vprn nat outside pool port-reservation ports
    
  • classic CLI
    configure service vprn nat outside pool port-reservation blocks
    configure service vprn nat outside pool port-reservation ports
    Note: Only one of the above commands can be specified at a time.

    The pool type must be l2-aware.

    The port-reservation blocks value can be configured only if port-block-extension is not enabled.

The following command configures the extended port block size for existing subscribers and the maximum number of subscribers per outside IP address. The size of the initial port blocks and extended port block may differ.

  • MD-CLI
    configure service vprn nat outside pool l2-aware port-block-extension ports subscriber-limit
  • classic CLI
    configure service vprn nat outside pool port-block-extensions ports subscriber-limit
    
  • Note: The pool type must be l2-aware.

Use the following command to configure the upper boundary for static port forwarding:

  • MD-CLI
    configure service vprn nat outside pool port-forwarding range-end
  • classic CLI
    configure service vprn nat outside pool port-forwarding-range
    

Port space partitioning for an outside IP address (MD-CLI) and Port space partitioning for an outside IP address (classic CLI) show the effects of the commands.

Figure 5. Port space partitioning for an outside IP address (MD-CLI)
Figure 6. Port space partitioning for an outside IP address (classic CLI)

The maximum number of port blocks that can be allocated per subscriber is configured in the NAT policy by the following command.

configure service nat nat-policy block-limit

Managing port block space

Both port partitions, initial and extended, are served on a first-come-first-served basis. The initial port partition guarantees at least one port block for each of the preconfigured number of subscribers per outside IP address. Use the following command to configure subscriber limits:

  • MD-CLI
    configure router nat outside pool l2-aware port-block-extension subscriber-limit
  • classic CLI
    configure router nat outside pool port-block-extensions ports subscriber-limit

If there are more subscribers in the network than the preconfigured number of NAT subscribers, then this space becomes oversubscribed.

Extended port partitioning, however, does not guarantee that each existing NAT subscriber receives additional port blocks. Each subscriber can allocate additional free port blocks only if they are available, up to the maximum combined limit (initial and extended) set in the NAT policy using the following command.

configure service nat nat-policy block-limit

For optimized NAT pool management and correct capacity planning, understanding the following characteristics of the operator's network is essential:

  • IP address compression ratio (the number of subscribers who share one outside IP address)

  • subscriber oversubscription ratio (the number of NAT subscribers who are active simultaneously)

  • statistical port usage for subscribers (the percentage of subscribers who are heavy, medium, and light port users)

  • port block sizes

Based on these characteristics, the average port usage per subscriber can be determined and the following NAT options can be set:

  • the subscriber limit per outside IP address configured in the NAT pool

  • the size of the initial and extended port blocks configured in the NAT pool

  • the maximum number of port blocks per subscriber configured in NAT policy

  • the outside IP address range configured in the NAT pool

The following are reasonable guidelines with an example that can serve as an initial configuration for operators who are unsure of their current traffic patterns in terms of port usage for their subscribers.

  • An operator has 10,000 subscribers that require NAT, but only 8,000 of them are active simultaneously. This means that the operator can allow oversubscription of outside (NAT) IP addresses.

  • Average port usage:

    • 60% of the subscribers are light port users with less than 1000 ports.

    • 30% of the subscribers are medium port users with less than 2000 ports.

    • 10% of the subscribers are heavy port users with less than 4000 ports.

These assumptions lead to the following calculations, also shown as a runnable sketch after this list:

  • 8,000 active subscribers x (0.6 x 1000 + 0.3 x 2,000 + 0.1 x 4,000) = 12,800,000 total ports.

  • Consider that one outside IP address can accommodate ~50,000 ports (64K ports less the static port forwards and well-known ports). This yields 256 outside IP addresses (a /24) in a pool: 12,800,000 / 50,000 = 256.

  • The compression ratio is 8,000 / 256 ≈ 32 (32 subscribers share one outside IP address); therefore, the subscriber limit equals 32.

    Based on this calculation, a reasonable initial port block size is 1000 ports and a reasonable extended port block size is 335 ports.

  • The maximum number of port blocks per subscriber is set to 10 to accommodate heavy users with 4,000 ports (1000 + 9 x 335 = 4015)
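
The same arithmetic, as a runnable Python check using the example's values:

# Capacity-planning arithmetic from the example above.
active_subs = 8000
usage = [(0.6, 1000), (0.3, 2000), (0.1, 4000)]    # share of subscribers, ports each

total_ports = active_subs * sum(share * ports for share, ports in usage)
usable_ports_per_ip = 50000                        # ~64K minus well-known/SPF ports
outside_ips = total_ports / usable_ports_per_ip    # 256.0 -> a /24 pool
subscriber_limit = active_subs / outside_ips       # 31.25 -> subscriber limit 32

initial_block, extended_block, max_blocks = 1000, 335, 10
assert initial_block + (max_blocks - 1) * extended_block >= 4000   # heavy users fit

print(total_ports, outside_ips, subscriber_limit)  # 12800000.0 256.0 31.25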

Setting the subscriber limit in a pool to 32, the initial and extended port block sizes to 1000 and 335 respectively, the maximum number of port blocks per subscriber to 10, and configuring a /24 address range in a pool would produce the needed results. This assumes that the subscribers are properly load-balanced over ISAs or ESAs. The following is an example configuration.

MD-CLI
[ex:/configure service vprn "demo" nat outside pool "demo pool"]
A:admin@node-2# info
    type l2-aware
    nat-group 1
    port-forwarding {
        range-end 4119
    }
    port-reservation {
        ports 1000
    }
    l2-aware {
        port-block-extension {
            ports 335
            subscriber-limit 32
        }
    }


[ex:/configure service nat nat-policy "demo-policy"]
A:admin@node-2# info
    block-limit 10
classic CLI
A:node-2>config>service>vprn>nat>outside$ info
----------------------------------------------
                    pool "demo pool" nat-group 1 type l2-aware create
                        shutdown
                        port-reservation ports 1000
                        port-forwarding-range 4119
                        port-block-extensions
                            ports 335 subscriber-limit 32
                        exit
                    exit
---------------------------------------------

A:node-2>config>service>nat>nat-policy$ info
----------------------------------------------
                block-limit 10
----------------------------------------------

Layer 2–aware NAT bypass

Layer 2–aware NAT bypass refers to the functionality where all or part of the traffic from a Layer 2–aware NAT-enabled ESM subscriber circumvents the local NAT function. There are three types of bypass supported for Layer 2–aware NAT in SR OS:

  • full ESM host bypass

  • selective ESM host bypass based on an IP filter match

  • bypass of the entire ESM subscriber because of an ISA/ESA failure. This type of bypass is described in NAT redundancy.

Full ESM host bypass

In this type of bypass, a subscriber host is implicitly excluded from Layer 2–aware NAT if its IP address falls outside of the configured subnet in the inside NAT CLI hierarchy under the L2-Aware CLI node.

In the following example, the address under the L2-Aware CLI node (address 10.10.1.254/24) represents the default gateway and a L2-Aware subnet. Hosts with IP addresses within the configured L2-Aware subnet (in this example, 10.10.1.0/24) are subjected to Layer 2–aware NAT (the exception is the default gateway address 10.10.1.254). Hosts outside of this IP range bypass NAT. In this way, a mix of hosts under the same L2-Aware-enabled ESM subscriber can coexist, some of which are subject to NAT and some of which bypass NAT.

MD-CLI
A:admin@node-2# configure router nat inside l2-aware subscribers 10.10.1.254/24
classic CLI
A:node-2>configure router nat inside l2-aware address 10.10.1.254/24

Selective Layer 2–aware NAT bypass

In selective Layer 2–aware NAT bypass, a decision whether to perform NAT is made based on the traffic classifiers (match conditions) defined in an IP filter applied to an ESM host.

A typical use case for selective Layer 2–aware NAT bypass is based on destinations, where on-net services need to be accessed without NAT, while other, off-net destinations require NAT. Traffic to those on-net services is identified based on the destination IP addresses (L2-Aware bypass based on traffic destination).

Figure 7. L2-Aware bypass based on traffic destination

Layer 2–aware NAT subscribers that are candidates for selective bypass in SR OS must first be identified and enabled using the following command:

  • MD-CLI
    configure subscriber-mgmt sub-profile nat allow-bypass
  • classic CLI
    configure subscriber-mgmt sub-profile nat-allow-bypass

After the selective Layer 2–aware NAT bypass is enabled, the determination of whether specific traffic from a host bypasses NAT comes via an IP filter with the following action.

configure filter ip-filter entry action l2-aware-nat-bypass

This action must be configured in addition to the existing action below:

  • MD-CLI
    configure filter ip-filter entry action accept
  • classic CLI
    configure filter ip-filter entry action forward

This defined set of actions diverts the identified traffic away from NAT.

Although most typical use cases require traffic identification based on destination IP addresses, generic match statements in IP filters allow identification of traffic based on any Layer 3 fields.

The filter entries are executed in top-to-bottom order as shown in Filtering example for Layer 2–aware NAT bypass.

Figure 8. Filtering example for Layer 2–aware NAT bypass

Configuration options for selective Layer 2–aware NAT bypass describes the behavior in relation to the three configuration options that directly influence selective Layer 2–aware NAT bypass.

Table 1. Configuration options for selective Layer 2–aware NAT bypass

Layer 2–aware NAT-enabled host | Selective bypass enabled | IP filter actions (l2-aware-nat-bypass + accept/forward) | Behavior
Yes | Yes | Yes | Selective bypass is in effect.
Yes | Yes | No | The host is enabled for bypass, but without the corresponding IP filter action. Bypass is not in effect and all traffic from the host is NAT’d. After the bypass action is provided via the IP filter, traffic identified in the IP filter is bypassed.
Yes | No | Yes | The host is not enabled for bypass, but the IP filter is configured for bypass. This is an incorrect condition in which host traffic is bypassed in the upstream direction but not in the downstream direction. As a result, downstream traffic is dropped.
Yes | No | No | The host is not enabled for bypass. All host traffic is NAT’d.
No | Yes | Yes | The host is not a Layer 2–aware NAT host. This is a full bypass case.
No | Yes | No | The host is not a Layer 2–aware NAT host. This is a full bypass case.
No | No | Yes | The host is not a Layer 2–aware NAT host. This is a full bypass case.
No | No | No | The host is not a Layer 2–aware NAT host. This is a full bypass case.
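
As a compact cross-check of Configuration options for selective Layer 2–aware NAT bypass, the decision logic condenses into a few lines. This is an illustrative Python sketch under the table's assumptions, not SR OS code.

def bypass_outcome(l2_aware_host: bool, bypass_enabled: bool,
                   filter_bypass_action: bool) -> str:
    """Condense Table 1: outcome per host flag, sub-profile flag, filter action."""
    if not l2_aware_host:
        return "full bypass (not a Layer 2-aware NAT host)"
    if bypass_enabled and filter_bypass_action:
        return "selective bypass is in effect"
    if bypass_enabled:
        return "all traffic NAT'd until the filter bypass action is provided"
    if filter_bypass_action:
        return "misconfiguration: upstream bypassed, downstream dropped"
    return "all traffic NAT'd"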

The following are configuration considerations:

  • An ESM-enabled host can be enabled for Layer 2–aware NAT if the following two conditions are met:

    • The subscriber’s sub-profile contains a NAT policy

    • The host IP address belongs to the subnet configured under one of the following contexts:

      • MD-CLI
        configure router nat inside l2-aware subscribers
        configure service vprn nat inside l2-aware subscribers
      • classic CLI
        configure router nat inside l2-aware address
        configure service vprn nat inside l2-aware address
  • Selective bypass is enabled if the following command is configured under the sub-profile:

    • MD-CLI
      configure subscriber-mgmt sub-profile nat allow-bypass
    • classic CLI
      configure subscriber-mgmt sub-profile nat-allow-bypass
  • All configuration options are allowed in the CLI; the operator should consult Configuration options for selective Layer 2–aware NAT bypass for the expected results.

On-line change of the selective NAT bypass

While traffic is flowing, it is possible to change its path from going through NAT to bypassing NAT. This kind of transition while traffic is flowing is referred to as on-line change.

NAT bypass for the Layer 2-aware NAT subscriber can be influenced using the NAT access mode configuration command options. See Configuration options for selective Layer 2–aware NAT bypass for information about the possible (valid and invalid) combinations of the two settings.

Use the following command to enable and disable NAT bypass; this is supported by changing the subscriber profile for the ESM subscriber through RADIUS/Gx. Changing the subscriber profile configuration while the profile is in use is not supported.

  • MD-CLI
    configure subscriber-mgmt sub-profile nat access-mode
  • classic CLI
    configure subscriber-mgmt sub-profile nat-access-mode

Nokia recommends changing the IP filter action with the following command, by overriding the existing IP filter through RADIUS or Gx.

configure filter ip-filter entry action l2-aware-nat-bypass
NAT bypass verification

Use the following commands to verify that NAT bypass is in effect.

This command shows that the ESM subscriber is a Layer 2–aware NAT subscriber for which bypass is enabled.

show service active-subscribers detail
===============================================================================
Active Subscribers
===============================================================================
-------------------------------------------------------------------------------
Subscriber AL_x0ffx6x0x0x2
           (sub_l2-dhcp1)
-------------------------------------------------------------------------------
NAT Policy    : pol-B-1
Outside IP    : 130.0.0.201
Ports         : 1536-1570
NAT Policy    : pol-o-1
Outside IP    : 19.0.0.87 (vprn101)
Ports         : 1024-1055
NAT Policy    : pol-o1-1
Outside IP    : 130.0.0.68 (vprn601)
Ports         : 1152-1349
NAT Policy    : pol-o2-1
Outside IP    : 130.0.0.222 (vprn602)
Ports         : 1152-1349
-------------------------------------------------------------------------------
I. Sched. Policy : N/A                              
E. Sched. Policy : N/A                              E. Agg Rate Limit: Max
                                                    E. Min Resv Bw   : 1
I. Policer Ctrl. : N/A                              
E. Policer Ctrl. : N/A                              
I. vport-hashing : Disabled                         
I. sec-sh-hashing: Disabled                         
Q Frame-Based Ac*: Disabled                         
Acct. Policy     : N/A                              Collect Stats    : Disabled
ANCP Pol.        : N/A                              
Accu-stats-pol   : (Not Specified)                  
HostTrk Pol.     : N/A                              
IGMP Policy      : N/A                              
MLD Policy       : N/A                              
PIM Policy       : N/A                              
Sub. MCAC Policy : N/A                              
NAT Policy       : pol-o-1                          
Firewall Policy  : N/A                              
UPnP Policy      : N/A                              
NAT Prefix List  : npl-4                            
Allow NAT bypass : Yes

The following command provides insight into whether the NAT bypass is in effect.

show filter ip
===============================================================================
IP Filter
===============================================================================
Filter Id           : 10                           Applied        : Yes
Scope               : Template                     Def. Action    : Drop
Type                : Normal
System filter       : Unchained
Radius Ins Pt       : n/a
CrCtl. Ins Pt       : n/a
RadSh. Ins Pt       : n/a
PccRl. Ins Pt       : n/a
Entries             : 1
Description         : (Not Specified)
Filter Name         : 10
-------------------------------------------------------------------------------
Filter Match Criteria : IP
-------------------------------------------------------------------------------
Entry               : 1
Description         : (Not Specified)
Log Id              : n/a
Src. IP             : 0.0.0.0/0
Src. Port           : n/a
Dest. IP            : 0.0.0.0/0
Dest. Port          : n/a
Protocol            : Undefined                    Dscp           : Undefined
ICMP Type           : Undefined                    ICMP Code      : Undefined
Fragment            : Off                          Src Route Opt  : Off
Sampling            : Off                          Int. Sampling  : On
IP-Option           : 0/0                          Multiple Option: Off
Tcp-flag            : (Not Specified)
Option-pres         : Off
Egress PBR          : Disabled
Primary Action      : Forward
L2 Aware NAT Bypass : Enabled
Ing. Matches        : 0 pkts
Egr. Matches        : 0 pkts
===============================================================================

Layer 2–aware NAT destination-based multiple NAT policies

Multiple NAT policies for an L2-Aware subscriber can be selected based on the destination IP address of the packet. This allows the operator to assign different NAT pools and outside routing contexts based on traffic destinations.

The mapping between the destination IP prefix and the NAT policy is defined in a NAT prefix list. This NAT prefix list is applied to the L2-Aware subscriber through a subscriber profile. After the subscriber traffic arrives at the MS-ISA where NAT is performed, an additional lookup based on the destination IP address of the packet is executed to select the specific NAT policy (and, consequently, the outside NAT pool). Failure to find a specific NAT policy based on the destination IP address lookup results in the selection of the default NAT policy referenced in the subscriber profile.

MD-CLI

[ex:/configure service]
A:admin@node-2# info
    nat {
        prefix-list "prefixlist1" {
            prefix 192.168.0.0/30 {
                nat-policy "l2aw nat policy"
            }
            prefix 192.168.0.64/30 {
                nat-policy "l2aw nat policy"
            }
            prefix 192.168.0.128/30 {
                nat-policy "l2aw nat policy"
            }
            prefix 192.168.1.0/30 {
                nat-policy "another-l2aw-nat-policy"
            }
            prefix 192.168.1.64/30 {
                nat-policy "another-l2aw-nat-policy"
            }
            prefix 192.168.1.128/30 {
                nat-policy "another-l2aw-nat-policy"
            }
        }
        nat-policy "another-l2aw-nat-policy" {
            pool {
                router-instance "Base"
                name "another-l2-aw-nat-pool"
            }
        }
        nat-policy "default-nat-policy" {
            pool {
                router-instance "Base"
                name "default-nat-pool"
            }
        }
        nat-policy "l2aw nat policy" {
            pool {
                router-instance "Base"
                name "l2-aw-nat-pool"
            }
        }
    }

[ex:/configure subscriber-mgmt]
A:admin@node-2# info
     sub-profile "sub_profile" {
        nat {
            policy "default-nat-policy"
            prefix-list "prefixlist1"
        }
    }

classic CLI

A:node-2>config>service# info
----------------------------------------------
        nat
            nat-policy "l2aw-nat-policy" create
                pool "l2aw-nat-pool" router 1
            exit
            nat-policy "another-l2aw-nat-policy" create
                pool "another-l2aw-nat-pool" router 2
            exit
            nat-policy "default-nat-policy" create
                pool "default-nat-pool" router Base
            exit
            nat-prefix-list "prefixlist1" application l2-aware-dest-to-policy create
                prefix 192.168.0.0/30 nat-policy "l2aw-nat-policy"
                prefix 192.168.0.64/30 nat-policy "l2aw-nat-policy"
                prefix 192.168.0.128/30 nat-policy "l2aw-nat-policy"
                prefix 192.168.1.0/30 nat-policy "another-l2aw-nat-policy"
                prefix 192.168.1.64/30 nat-policy "another-l2aw-nat-policy"
                prefix 192.168.1.128/30 nat-policy "another-l2aw-nat-policy"
            exit
        exit
----------------------------------------------

A:node-2>config>subscr-mgmt# info
----------------------------------------------
        sub-profile "sub_profile" create
            nat-policy "default-nat-policy"
            nat-prefix-list "prefixlist1"
        exit
----------------------------------------------

As shown in the preceding example, multiple IP prefixes can be mapped to the same NAT policy.

The NAT prefix list cannot reference the default NAT policy. The default NAT policy is the one that is referenced directly under the subscriber profile.
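
Conceptually, the per-packet policy selection behaves like the following Python sketch. This is illustrative only, not SR OS code; a longest-prefix match is assumed, and the policy names echo the preceding example.

from ipaddress import ip_address, ip_network

PREFIX_LIST = {  # NAT prefix list: destination prefix -> NAT policy
    ip_network("192.168.0.0/30"): "l2aw-nat-policy",
    ip_network("192.168.1.0/30"): "another-l2aw-nat-policy",
}
DEFAULT_POLICY = "default-nat-policy"  # referenced in the sub-profile

def select_nat_policy(dst_ip: str) -> str:
    """Pick the NAT policy for a packet by its destination IP address."""
    matches = [p for p in PREFIX_LIST if ip_address(dst_ip) in p]
    if not matches:
        return DEFAULT_POLICY  # lookup failure: fall back to the default
    return PREFIX_LIST[max(matches, key=lambda p: p.prefixlen)]

print(select_nat_policy("192.168.1.2"))   # another-l2aw-nat-policy
print(select_nat_policy("203.0.113.9"))   # default-nat-policy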

Logging

In Layer 2–aware NAT with multiple NAT policies, NAT resources are allocated in each pool associated with the subscriber. This NAT resource allocation is performed when the ESM subscriber is instantiated. Each NAT resource allocation is followed by log generation.

For example, if RADIUS logging is enabled, one Alc-Nat-Port-Range VSA per NAT policy is included in the acct START/STOP messages.

Alc-Nat-Port-Range = "192.168.20.1 1024-1055 router base nat-pol-1"
Alc-Nat-Port-Range = "193.168.20.1 1024-1055 router base nat-pol-2"
Alc-Nat-Port-Range = "194.168.20.1 1024-1055 router base nat-pol-3"
RADIUS logging and NAT-policy change via CoA

NAT policy change for Layer 2–aware NAT is supported through a sub-profile change triggered by CoA. However, a change of the sub-profile alone through CoA does not trigger the generation of a new RADIUS accounting message, and therefore NAT events related to NAT policy changes are not promptly logged. For this reason, each CoA initiating a sub-profile change in a NAT environment must do one of the following:

  • Change the SLA profile.

  • Include the Alc-Trigger-Acct-Interim VSA in the CoA messages.

Note that the SLA profile has to be changed, not just refreshed. In other words, replacing the existing SLA profile with the same one does not trigger a new accounting message.

Both of these events trigger an accounting update at the time CoA is processed. This keeps NAT logging current. The information about NAT resources for logging purposes is conveyed in the following RADIUS attributes:

  • Alc-Nat-Port-Range-Freed VSA → NAT resources released because of CoA.

  • Alc-Nat-Port-Range VSA → NAT resources in use. These can be existing NAT resources that were not affected by CoA, or new NAT resources allocated because of CoA.

NAT logging behavior in response to CoA depends on the deployed accounting mode of operation, as described in NAT-policy change and CoA in L2-Aware NAT. The interim-update keyword must be configured for host or session accounting for Interim-Update messages to be triggered.

configure subscriber-mgmt radius-accounting-policy session-accounting interim-update
configure subscriber-mgmt radius-accounting-policy host-accounting interim-update
Table 2. NAT-policy change and CoA in L2-Aware NAT

CoA content: sub-profile change + ATAI VSA
  Host or session accounting: a single I-U with the released NAT info, unchanged NAT info, new NAT info, AATR, and ATAI.
  Queue-instance accounting: a single I-U with the released NAT info, unchanged NAT info, new NAT info, AATR, and ATAI.
  Comments: a single I-U message is triggered by CoA.

CoA content: sub-profile change + SLA profile change
  Host or session accounting: a first I-U with the released, unchanged, and new NAT info, followed by a second I-U with the unchanged and new NAT info.
  Queue-instance accounting: an Acct Stop with the released, unchanged, and new NAT info, followed by an Acct Start with the unchanged and new NAT info.
  Comments: two accounting messages are triggered in succession.

CoA content: sub-profile change only
  Host or session accounting and queue-instance accounting: no accounting messages are triggered by CoA. The next regular I-U messages contain the old (released) NAT info, unchanged NAT info, and new NAT info.

CoA content: sub-profile change + SLA profile change + ATAI VSA
  Host or session accounting: a first I-U with the released, unchanged, and new NAT info, followed by a second I-U with the unchanged and new NAT info, AATR, and ATAI.
  Queue-instance accounting: an Acct Stop with the released, unchanged, and new NAT info, followed by an Acct Start with the unchanged and new NAT info.
  Comments: two accounting messages are triggered in succession.

Table Legend:

  • AATR (Alc-Acct-Triggered-Reason) VSA — This VSA is optionally carried in Interim-Update messages that are triggered by CoA.

  • ATAI (Alc-Trigger-Acct-Interim) VSA — This VSA can be carried in CoA to trigger an Interim-Update message. The string carried in this VSA is reflected in the triggered Interim-Update message.

  • I-U (Interim-Update Message)

For example, the second CoA row describes the outcome of a CoA carrying new sub- and SLA profiles. In host/session accounting mode, this creates two Interim-Update messages. The first Interim-Update message carries information about:

  • the released NAT resources at the time when CoA is activated

  • existing NAT resources that are not affected by CoA

  • new NAT resources allocated at the time when CoA is activated

The second Interim-Update message carries information about the NAT resources that are in use (existing and new) when CoA is activated.

From this, the operator can infer which NAT resources are released by CoA and which NAT resources continue to be in use when CoA is activated.

Delay between the NAT resource allocation and logging during CoA

A NAT policy change induced by CoA triggers immediate log generation (for example, acct STOP or INTERIM-UPDATE) indicating that the NAT resources have been released. However, the NAT resources (outside IP addresses and port blocks) in SR OS are not released for another five seconds. This delay is needed to facilitate proper termination of traffic flow between the NAT user and the outside server during the NAT policy transition. A typical example of this scenario is the following:

  1. HTTP traffic is redirected to a web portal for authentication. Only when the user is authenticated is access to the Internet granted, along with a new NAT policy that provides more NAT resources (larger port ranges, and so on).

  2. After the user is authenticated, CoA is used to change the user’s forwarding properties (the HTTP redirect is removed and the NAT policy is changed). However, the CoA must be sent before the authentication acknowledgment (ACK) message is sent; otherwise, the next HTTP request would be redirected again.

  3. Authentication acknowledgment is sent to the NAT user following the CoA that removed the HTTP redirect and instantiated the new NAT policy. Because the original communication between the web portal and the NAT user relied on the original NAT policy, the NAT resources associated with that policy must be preserved to terminate this communication gracefully; hence the five-second delay before the NAT resources are freed.

Similar to other stale dynamic mappings, stale port forwards are released after five seconds. Note that static port forwards are kept on the CPM. New CoAs related to NAT are rejected (NAK’d) if the previous change is still in progress (during the five-second interval until the stale mappings are purged).
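
The interplay of immediate logging and deferred release can be pictured with a small sketch. This is illustrative Python, not SR OS code; emit_log stands in for whatever logging facility is in use.

import threading

RELEASE_DELAY_S = 5.0  # resources stay allocated for 5 s after the log

class PortBlock:
    """One outside IP address and port block held by a NAT subscriber."""

    def __init__(self, outside_ip: str, ports: range) -> None:
        self.outside_ip, self.ports = outside_ip, ports
        self.change_in_progress = False

    def release_on_policy_change(self, emit_log) -> None:
        # The release is logged immediately ...
        emit_log(f"released {self.outside_ip} {self.ports.start}-{self.ports.stop - 1}")
        # ... but the block is freed only after the delay; a new NAT-related
        # CoA arriving in this window would be NAK'd.
        self.change_in_progress = True
        threading.Timer(RELEASE_DELAY_S, self._free).start()

    def _free(self) -> None:
        self.change_in_progress = False  # stale mapping purged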

Static port forwards

Unless a specific NAT policy is provided during Static Port Forward (SPF) creation, the port forward is created in the pool referenced in the default NAT policy. A NAT policy can be part of the command used to modify or delete an SPF. If the NAT policy is not provided, the behavior is as follows:

  • If there is only one match, the port forward is modified or deleted.

  • If there is more than one match, modify or delete port forward must specify a NAT policy. Otherwise, the modify or delete action fails.

A match occurs when at least the following command options from the modify or delete command are matched (these are the mandatory command options in the tools perform nat port-forwarding-action lsn command); a minimal sketch of this matching logic follows the list:

  • subscriber identification string

  • inside IP address

  • inside port

  • protocol
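
The following sketch shows that matching logic. It is illustrative Python, not SR OS code; the data model is hypothetical.

from dataclasses import dataclass

@dataclass
class PortForward:
    subscriber: str
    inside_ip: str
    inside_port: int
    protocol: str
    nat_policy: str

def delete_spf(spfs: list, subscriber: str, inside_ip: str,
               inside_port: int, protocol: str, nat_policy: str = None):
    """Delete an SPF; with several matches, a NAT policy must disambiguate."""
    matches = [f for f in spfs
               if (f.subscriber, f.inside_ip, f.inside_port, f.protocol)
               == (subscriber, inside_ip, inside_port, protocol)]
    if nat_policy is not None:
        matches = [f for f in matches if f.nat_policy == nat_policy]
    if len(matches) != 1:
        raise ValueError("no unique match; specify a NAT policy")
    spfs.remove(matches[0])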

For Layer 2-aware NAT, an alternative AAA interface can be used to specify the SPF. An alternative AAA interface and CLI-based port forwards are mutually exclusive. See the 7450 ESS, 7750 SR, and VSR RADIUS Attributes Reference Guide for more details.

L2-Aware ping

As with the non-L2-Aware ping command, understanding how the ICMP Echo Request packets are sourced in L2-Aware ping is crucial for the correct execution of this command and the interpretation of its results. The ICMP Echo Reply packets must be able to reach the source IP address that was used in the ICMP Echo Request packets on the SR OS node on which the L2-Aware ping command was executed. See L2-Aware ping.

The return packet (the ICMP Echo Reply sent by the targeted host) is subject to L2-Aware NAT routing executed in the MS-ISA. The Layer 2–aware NAT routing process looks at the destination IP address of the upstream packet and directs the packet to the correct outside routing context. The result of this lookup is a NAT policy that references the NAT pool in an outside routing context. This outside routing context must be the same as the one from which the L2-Aware ping command was sourced; otherwise, the L2-Aware ping command fails.

The L2-Aware ping command can be run in two modes:

  • basic mode – the subscriber ID is a required field to differentiate subscriber hosts that are assigned the same IP address (although each host has its own instantiation of this IP address)

    • MD-CLI
      ping subscriber
    • classic CLI
      ping subscriber-id
  • extended mode – additional command options can be selected. The two most important command options are the source IP address (source) and the routing context (router).

    • MD-CLI
      ping subscriber source-address router-instance
    • classic CLI
      ping subscriber-id source router

The following example shows the traffic flow for an L2-Aware ping command targeting the subscriber’s IP address 10.2.3.4, sourced from the Base routing context using an arbitrary source IP address of 10.6.7.8 (it is not required that this IP address belong to the L2-Aware ping originating node).

When the host 10.2.3.4 replies, the incoming packets with the destination IP address of 10.6.7.8 are matched against the destination prefix 10.6.7.0/24, which references nat-policy-1. nat-policy-1 contains Pool B, which resides in the Base routing context. Hence, the loop is closed and the execution of the L2-Aware ping command is successful.

MD-CLI
[ex:/configure service nat]
A:admin@node-2# info
    prefix-list "prefixlist1" {
        prefix 10.6.7.0/24 {
            nat-policy "nat-policy-1"
        }
    }
     nat-policy "default nat-policy" {
        pool {
            router-instance 2
            name "Pool A"
        }
    }
    nat-policy "nat-policy-1" {
        pool {
            router-instance "Base"
            name "Pool B"
        }
    }

[ex:/configure subscriber-mgmt]
A:admin@node-2# info
    sub-profile "sub profile" {
        nat {
            policy "default nat-policy"
            prefix-list "prefixlist1"
        }
    }
classic CLI
A:node-2>config>service>nat# info
----------------------------------------------
            nat-policy "default nat-policy" create
                pool "Pool A" router 2
            exit
            nat-policy "nat-policy-1" create
                pool "Pool B" router Base
            exit
            nat-prefix-list "prefixlist1" application l2-aware-dest-to-policy create
                prefix 10.6.7.0/24 nat-policy "nat-policy-1"
             exit
----------------------------------------------

A:node-2>config>subscr-mgmt# info
----------------------------------------------
        sub-profile "sub profile" create
            nat-policy "default nat-policy"
            nat-prefix-list "prefixlist1"
        exit
----------------------------------------------
Figure 9. L2-Aware ping

L2-Aware ping is always sourced from the outside routing context, never from the inside routing context. If the router is not specifically configured as an option in the L2-Aware ping command, the Base routing context is selected by default. If the Base routing context is not one of the outside routing contexts for the subscriber, the L2-Aware ping command execution fails with the following error message:

"MINOR: OAM #2160 router ID is not an outside router for this subscriber."

UPnP

UPnP uses the default NAT policy.

Layer 2–aware NAT and multicast

Multicast traffic through NAT is not supported. However, if downstream multicast traffic is received in the inside routing context without going through NAT, the traffic can be forwarded to an L2-Aware host.

The following figure shows an example of downstream multicast traffic on the inside.
Figure 10. Downstream multicast traffic originated on the inside

To enable this type of traffic, the following entities must be configured:

  • A NAT policy in the sub-profile context. This configuration enables L2-Aware subscribers.

  • An IGMP policy in the sub-profile context. This enables multicast traffic that is originated on the inside.

  • One of the following commands must be enabled. This configuration enforces the uniqueness of L2-Aware subscriber IPv4 addresses; by default, Layer 2–aware NAT supports overlapping subscriber IPv4 addresses in the inside routing context.

    configure service vprn nat inside l2-aware force-unique-ip-addresses
    configure router nat inside l2-aware force-unique-ip-addresses

The following examples show the configuration of Layer 2–aware NAT and multicast.

MD-CLI
[ex:/configure subscriber-mgmt]
A:admin@node-2# info
    sub-profile "demo-profile" {
        igmp-policy "demo-mcast"
        nat {
            policy "demo-nat-pol"
        }
    }

[ex:/configure router "Base" nat inside]
A:admin@node-2# info
    l2-aware {
        force-unique-ip-addresses true
        subscribers 192.168.100.200/32 { }
    }

[ex:/configure service vprn "demo-vprn" nat inside]
A:admin@node-2# info
    l2-aware {
        force-unique-ip-addresses true
    }
classic CLI
A:node-2>config>subscr-mgmt# info
----------------------------------------------
        sub-profile "demo-profile" create
            nat-policy "demo-nat-pol"
            igmp-policy "demo-mcast"
        exit
----------------------------------------------


A:node-2>config>service# info
----------------------------------------------
        vprn 105 name "demo-vprn" customer 1 create
            nat
                inside
                    l2-aware
                        address 192.168.100.200/32
                    exit
                exit
            exit
            no shutdown
        exit
----------------------------------------------

A:node-2>config>router>nat>inside>l2-aware# info
----------------------------------------------
                    force-unique-ip-addresses
----------------------------------------------

A:node-2>config>service>vprn>nat>inside>l2-aware$ info 
----------------------------------------------
                        force-unique-ip-addresses
----------------------------------------------

NAT pool addresses and ICMP Echo Request/Reply (ping)

The outside IPv4 addresses in a NAT pool can be configured to answer pings. ICMPv4 Echo Requests are answered with ICMPv4 Echo Replies.

In 1:1 NAT, ICMP Echo Requests are propagated to the host on the inside. The host identified by a NAT binding then answers the ping.

In Network Address Port Translation (NAPT), ICMP Echo Requests are not propagated to the hosts behind the NAT. Instead, the reply is issued by the SR OS from the ESA or ISA.

In Layer 2-aware NAT, use the following command to configure how replies from outside IP addresses are handled:

  • MD-CLI
    configure router nat outside pool l2-aware port-block-extension
  • classic CLI
    configure router nat outside pool port-block-extensions
    

In NAPT, the behavior is as follows:

  • In Layer 2–aware NAT when port-block-extensions is disabled, the reply from an outside IP address is generated only when the IP address has at least one host (binding) behind it.

  • In Layer 2–aware NAT when port-block-extensions is enabled, the reply from an outside IP address is generated regardless of whether a binding is present.

  • In LSN, the reply from an outside IP address is generated regardless of whether a binding is present.

For security reasons, the ICMP Echo Reply functionality is disabled by default. Use the following command to enable ICMP Echo Reply functionality.

configure router nat outside pool icmp-echo-reply

This functionality is configured on a per-pool basis and can be changed online while the pool is enabled.

Traffic steering to NAT

Traffic steering to NAT refers to the mechanism by which traffic in the SR line card is redirected to the ISA or VM-ESA for NAT processing. This traffic must first be identified and then redirected to the ISA or VM-ESA. The mechanism by which traffic is steered to NAT in an SR node in the upstream direction depends on the NAT type.

For LSN44, the upstream traffic (in the private-to-public direction) is steered (redirected) to NAT in an SR node through one of two mechanisms:

  • routing

  • filters

Both methods are applied in the inside (private) routing context. Traffic matched through routing or filter criteria is sent to the ISA or VM-ESA for NAT processing and from there to the outside (public) routing context where it exits the node.

In NAT64 and DS-Lite, traffic is steered to NAT mainly through routing: the NAT64 prefix in NAT64, and the Address Family Transition Router (AFTR) IPv6 address in DS-Lite. However, the routing can be augmented with IPv6 filters to accommodate mapping to multiple NAT pools per subscriber.

In L2-Aware NAT, where NAT is integrated with ESM, traffic is steered to NAT automatically, provided that the subscriber session is associated with NAT during the session instantiation phase.

In all NAT types, the downstream traffic arriving in the outside (public) routing context is forwarded to NAT through routing: public pool IPv4 addresses are installed in the routing table with the next hop pointing to the ISA or VM-ESA.

The following sections describe the steering logic for LSN44, which is the only NAT type that supports dynamic routing.

Routing approach in LSN44

The routing approach relies on a destination IP-based match. A destination IP route leading to NAT can be static (explicitly configured) or dynamic (installed through the BGP routing protocol).

Static NAT routes

Static steering to LSN44 is based on the destination prefix, with a statically configured routing prefix in an inside routing context (VPRN or GRT). This static route points to an ISA or VM-ESA. In transit from the inside routing context to the outside routing context, frames must be redirected through an ISA or VM-ESA where NAT is performed.

If there are multiple ISA or VM-ESAs in a NAT group, an internal LAG per NAT group is used with member ports connected to each ISA or VM-ESA. Upstream traffic is load-balanced between the ISAs or VM-ESAs based on the source IP addresses or prefixes.

The CLI configuration used for static steering to LSN44 is shown in Basic CLI for NAT (MD-CLI) and Basic CLI for NAT (classic CLI), where a NAT policy containing a pool name and the outside routing context acts as the bond between the inside and the outside routing contexts. A destination-prefix without an explicitly configured NAT policy uses the default NAT policy. As shown in Basic CLI for NAT (MD-CLI) and Basic CLI for NAT (classic CLI), the prefix 192.168.1.0/24 is mapped to NAT policy cgn2.

Figure 11. Basic CLI for NAT (MD-CLI)
Figure 12. Basic CLI for NAT (classic CLI)

Logical representation of NAT routing through SR displays the logical configuration.

Figure 13. Logical representation of NAT routing through SR

Use the following command to display the route table on the inside (private side). The ISA or VM-ESA next hops for NAT destination prefixes are shown as NAT inside, and the listed static routes belong to protocol NAT in the routing table.

show router 1 route-table 
===============================================================================
Route Table (Service: 1)
===============================================================================
Dest Prefix[Flags]                            Type    Proto     Age        Pref
      Next Hop[Interface Name]                                    Metric   
-------------------------------------------------------------------------------
192.168.1.0/24                                Remote  NAT       05d15h21m  0
       NAT inside                                                   0
192.168.2.0/24                                Remote  NAT       05d15h21m  0
       NAT inside                                                   0
192.168.3.0/24                                Remote  NAT       05d15h21m  0
       NAT inside                                                   0

When forwarding in the downstream direction (in the public to private direction), the outside address ranges (NAT pool ranges 172.16.x.x, as shown in the following example) are subdivided (micronetted) and distributed on a more granular level across the ISAs and VM-ESAs in the same NAT group. The micronetting is necessary so that traffic can be distributed across ISAs or VM-ESAs. The micronets are visible in the routing table and their next hops point to the corresponding ISA or VM-ESA. Micronets are used to attract traffic toward the ISAs or VM-ESAs in the downstream direction.

Use the following command to display information for the route table on the outside (public side).

show router route-table
===============================================================================
Route Table (Router: Base)
===============================================================================
Dest Prefix[Flags]                            Type    Proto     Age        Pref
      Next Hop[Interface Name]                                    Metric   
-------------------------------------------------------------------------------
172.16.2.0/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 1/2                                       0
172.16.2.16/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 2/2                                       0
172.16.2.32/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 1/2                                       0
172.16.2.48/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 2/2                                       0
172.16.2.64/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 1/2                                       0
172.16.2.80/28                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 2/2                                       0
172.16.2.96/31                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 1/2                                       0
172.16.2.98/31                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 2/2                                       0
172.16.2.100/32                                   Remote  NAT       05d15h32m  0
       NAT outside to mda 1/2                                       0
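
The alternation of next hops in this output can be reproduced with a short sketch. This is illustrative Python, not SR OS code; the micronet size and the round-robin placement are assumptions made for the example.

from ipaddress import ip_network

def micronets(pool: str, micronet_prefix: int, isas: list):
    """Subdivide a pool range and spread the micronets across the ISAs."""
    subnets = ip_network(pool).subnets(new_prefix=micronet_prefix)
    for i, net in enumerate(subnets):
        yield net, isas[i % len(isas)]  # round-robin across the NAT group

for net, isa in micronets("172.16.2.0/25", 28, ["mda 1/2", "mda 2/2"]):
    print(f"{net} -> NAT outside to {isa}")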

Dynamic routing to LSN44

LSN44 steering in the inside routing context is supported through routes received via the BGP-VPN protocol. NAT-related BGP-VPN routes are received from BGP-VPN peers in the outside routing contexts and are imported into the inside routing context. This way, the routing information for NAT is dynamically updated without user intervention. Typical users are NAT administrators who peer with third parties and frequently repurpose their IPv4 prefixes based on usage or redundancy; a cloud solution is an example of this type of peering partner.

A typical network design describing this scenario is shown in Connectivity diagram.

Figure 14. Connectivity diagram

Dynamic import of BGP-VPN routes with the next-hop leading to NAT (ISA or VM-ESA) is performed through a routing policy where:

  • A route received on the outside by BGP-VPN is matched against the configured route target community and installed in the outside VPRN automatically. Alternatively, the route can be matched against any BGP-VPN supported criteria through an import route policy and installed in the outside routing table.

  • Within the NAT context on the inside, the BGP-VPN route is imported through an import route policy, where the route is matched against any BGP-VPN supported criteria. In the same route policy, the route is associated with a NAT policy which determines the NAT pool and outside routing context.

An example configuration for the route policy that is referenced as import within a NAT context is shown below.

Import route-policy within a NAT context (MD-CLI)
[ex:/configure policy-options]
A:admin@node-2# info
    policy-statement "BGP-VPN-import" {
        entry 1 {
            from {
                protocol {
                    name [bgp-vpn]  <<any match condition supported for BGP-VPN routes>>
                }
            }
            action {
                action-type accept
                nat-policy "NAT_policy"
            }
        }
    }
Import policy within a NAT context (classic CLI)
A:node-2>config>router>policy-options# info
----------------------------------------------
            policy-statement "BGP-VPN-import"
                entry 1
                    from
                        protocol bgp-vpn
                    exit
                    action accept
                        nat-policy "NAT_policy"
                    exit
                exit
            exit
----------------------------------------------

If the NAT policy under the action is omitted, then a default NAT policy from the inside routing context is used.

The configured route policy is then applied under NAT in the inside routing context.

Application of the route policy under NAT (MD-CLI)
[ex:/configure service vprn "1"]
A:admin@node-2# info
    nat {
        inside {
            large-scale {
                nat-policy "NAT-policy"  <<default nat-policy can be used here>>
                nat44 {
                    nat-import ["BGP-VPN-import"]
                }
            }
        }
    }
Application of the route policy under NAT (classic CLI)
A:node-2>config>service>vprn# info
----------------------------------------------
...
            nat
                inside
                    nat-policy "NAT_policy"
                    nat-import "BGP-VPN-import"
                exit
            exit
----------------------------------------------
Deterministic LSN44, non-deterministic LSN44, and 1:1 static LSN44 in a dynamic routing environment

Deterministic and non-deterministic LSN44 can be simultaneously configured in an inside routing context. See Multiple NAT policies and deterministic NAT for more details.

Combination of static and dynamic routes

If the same route is provided by the static configuration and dynamically by BGP-VPN, only the configured (static) route is installed in the routing table. In other words, a static route has a higher priority than a dynamic route.

Scale and logging notes

A NAT route is not installed in the routing table on the inside if one of the following occurs:

  • A maximum number of NAT policies per inside VPRN is reached. A NAT policy indirectly represents the route’s next hop in the inside routing context (toward the ISA or VM-ESA).

  • The next hop in the outside routing context is not available.

  • A maximum number of imported NAT routes is reached. There is a maximum number of routes that can be imported into the inside routing context from each outside VPRN.

  • A maximum number of the dynamic NAT routes per system is reached.

NAT steering through IP filters

Traffic steering to NAT through IP filters is more customizable than steering through routing, because traffic identification can use the extensive matching criteria offered by IP filters.

An IP filter can be applied on:

  • ingress access and network interfaces on the private side

  • network ingress of a VPRN, for example, auto-bind or spoke SDPs

  • ingress SLA profile in subscriber management

The filter entry used for NAT steering has the action nat, which redirects traffic identified through the match criteria toward the ISAs and VM-ESAs. Entries in the filter are evaluated in top-to-bottom order, and the first match ends the filter evaluation, as the following example shows.

Filter entry for NAT steering (MD-CLI)

[ex:/configure filter]
A:admin@node-2# info
    match-list {
        ip-prefix-list "nat-dest" {
            prefix 172.16.0.0/24 { }
        }
    }
    ip-filter "demo-filter" {
        default-action accept
        filter-id 12
        entry 10 {
            match {
                protocol udp
                src-port {
                    eq 30000
                }
            }
            action {
                accept
            }
        }
        entry 20 {
            match {
                protocol udp
                dst-ip {
                    ip-prefix-list "nat-dest"
                }
                src-port {
                    range {
                        start 40000
                        end 50000
                    }
                }
            }
            action {
                nat {
                }
            }
        }
        entry 30 {
            match {
                protocol udp
            }
            action {
                drop
            }
        }
    }

Filter entry for NAT steering (classic CLI)

A:node-2>config>filter# info
----------------------------------------------
        match-list
            ip-prefix-list "nat-dest" create
                prefix 172.16.0.0/24
            exit
        exit
        ip-filter 12 name "demo-filter" create
            default-action forward
            entry 10 create
                match protocol udp
                    src-port eq 30000
                exit
                action
                    forward
                exit
            exit
            entry 20 create
                match protocol udp
                    dst-ip ip-prefix-list "nat-dest"
                    src-port range 40000 50000
                exit
                action
                    nat
                exit
            exit
            entry 30 create
                match protocol udp
                exit
                action
                    drop
                exit
            exit
        exit
----------------------------------------------

In this scenario, any UDP traffic with source port 30000, as indicated in entry 10, is allowed through the system, bypassing NAT. UDP traffic with source ports in the range 40000 to 50000 that is destined for network 172.16.0.0/24, as indicated in entry 20, is NAT’d. The remaining UDP traffic is dropped, according to entry 30.

The remaining non-UDP traffic is allowed through the filter and bypasses NAT, as indicated by the default action accept (forward in the classic CLI).

The following example shows a filter applied to a network interface on ingress.

Configuring the filter applied to a network interface on ingress (MD-CLI)

[ex:/configure]
A:admin@node-2# info
 router "Base" {
    interface "annex" {
        port 1/1/1
        ingress {
            filter {
                ip "demo-filter"
            }
        }
        ipv4 {
            primary {
                address 192.168.12.2
                prefix-length 24
            }
        }
    }

Configuring the filter applied to a network interface on ingress (classic CLI)

A:node-2>config>router# info
----------------------------------------------
#--------------------------------------------------
echo "IP Configuration"
#--------------------------------------------------
        interface "annex"
            address 192.168.12.2/24
            port 1/1/1
            ingress
                filter ip 12
            exit
            no shutdown
        exit
        interface "system"
            no shutdown
        exit
#--------------------------------------------------

The following example shows a filter applied to all ingress network interfaces for a specific VPRN.

Filter applied to all ingress network interfaces for a specific VPRN (MD-CLI)

[ex:/configure service vprn "demo"]
A:admin@node-2# info
        service-id 1
        customer "1"
        network {
            ingress {
                filter {
                    ip "demo-filter"
                }
            }
        }    

Filter applied to all ingress network interfaces for a specific VPRN (classic CLI)

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            network
                ingress
                    filter ip 12
                exit
            exit
----------------------------------------------

The following example shows a filter applied on ingress in an SLA profile.

Filter applied on ingress in an SLA profile (MD-CLI)

[ex:/configure]
A:admin@node-2# info
    subscriber-mgmt {
        sla-profile "demo" {
            ingress {
                ip-filter "demo-filter"
            }
        }

Filter applied on ingress in an SLA profile (classic CLI)

A:node-2>config>subscr-mgmt# info
----------------------------------------------
        sla-profile "demo" create
            ingress
                ip-filter 12
            exit
        exit

Layer 2-aware support for residential gateway types

Layer 2–aware NAT functionality is tightly coupled with ESM; therefore, the type of residential gateway supported in Layer 2–aware NAT depends on the anti-spoof setting of the ESM subscriber. In this context, the residential gateway types can be:

  • bridged

    Subscriber-hosts behind the residential gateway are individually set up in the BNG and their IP and MAC addresses are known to the BNG during the host setup phase (DHCP/PPPoE).

  • routed with NAT

    Only the residential gateway is set up in the BNG. The residential gateway’s IP and MAC addresses are known to the BNG during the setup phase. Subscriber hosts behind the residential gateway are not known to the BNG; instead, they are hidden behind the residential gateway’s NAT.

  • routed without NAT

    The residential gateway is set up in the BNG. The hosts behind the residential gateway are not set up in the BNG, and the control plane in the BNG is not aware of their IP and MAC addresses. To forward data traffic from these routed hosts in the upstream direction, anti-spoof in the BNG must be set to nh-mac using one of the following commands:

    • MD-CLI
      configure service ies subscriber-interface group-interface sap-parameters anti-spoof
      configure service vprn subscriber-interface group-interface sap-parameters anti-spoof 
    • classic CLI
      configure service ies subscriber-interface group-interface sap anti-spoof
      configure service vprn subscriber-interface group-interface sap anti-spoof

    In the downstream direction, a framed route pointing to the residential gateway must be present in the BNG.

    In this model, DHCP relay on the residential gateway is disabled. If DHCP relay is enabled, routed hosts can be set up in the BNG using the following commands:

    • MD-CLI
      configure service ies subscriber-interface group-interface ipv4 dhcp lease-populate l2-header
      configure service ies subscriber-interface group-interface ipv4 dhcp lease-populate max-leases
      configure service vprn subscriber-interface group-interface ipv4 dhcp lease-populate l2-header
      configure service vprn subscriber-interface group-interface ipv4 dhcp lease-populate max-leases
      
    • classic CLI
      configure service ies subscriber-interface group-interface dhcp lease-populate l2-header
      configure service vprn subscriber-interface group-interface dhcp lease-populate l2-header

Anti-spoof settings in ESM that are relevant to this context include:

  • ip-mac

    Anti-spoof is based on the MAC address and the source IP address of the host. This anti-spoof type is more stringent and secure.

  • nh-mac

    Anti-spoof is based only on the MAC address of the host. This is used in the presence of IP hosts behind a routed RG without NAT. The IP addresses of these hosts are exposed in the data traffic received by the BNG, even though the hosts were never explicitly set up in the BNG (using DHCP/PPP). Nh-mac anti-spoof ensures that upstream data traffic from IP addresses that are unknown at the control-plane level passes through the BNG, because those hosts sit behind a known subscriber host, in this case a routed residential gateway without NAT.

In addition to the anti-spoof setting, the following command is required in the BNG to select the needed residential gateway type:

  • MD-CLI
    configure subscriber-mgmt sub-profile nat access-mode {auto | bridged}
  • classic CLI
    configure subscriber-mgmt sub-profile nat-access-mode {auto | bridged} 
    

The relationship between the anti-spoof setting in ESM, NAT access mode CLI flag, and a compatible residential gateway model is shown in Anti-spoof setting comparisons.

Table 3. Anti-spoof setting comparisons

Model no. | Home model | Anti-spoof | NAT access mode CLI flag | Supported in SR OS | Comments
1 | Bridged RG | ip-mac | auto, bridged | Yes | All bridged subscriber hosts are eligible for Layer 2–aware NAT with the most stringent anti-spoof settings. If there is only one host behind the bridged RG, this model becomes the same as model 3.
2 | Bridged RG | nh-mac | bridged | Yes | All bridged subscriber hosts are eligible for Layer 2–aware NAT. In this model, MAC addresses within the subscriber and SAP must be unique. Even though anti-spoof in ESM is set to nh-mac, the NAT function still checks the source IP address of the upstream traffic and drops any traffic from spoofed IP addresses (source IP addresses that do not belong to the bridged hosts as initially set up in ESM).
3 | Routed RG with NAT | ip-mac | auto, bridged | Yes | Subscriber hosts behind the residential gateway are hidden behind the routed RG’s NAT and are not visible in the BNG.
4 | Routed RG with NAT | nh-mac | auto, bridged | Yes | This combination is supported, but with inferior anti-spoofing.
5 | Routed RG, no NAT | ip-mac | n/a | No | This combination is not supported. The ip-mac anti-spoof command option in ESM blocks traffic for hosts with exposed source IP addresses that reside behind the RG. Those hosts are not set up in the BNG at the control-plane level (DHCP/PPPoE is not sent from those hosts).
6 | Routed RG, no NAT | nh-mac | auto, bridged | Yes | Subscriber hosts with exposed source IP addresses pass the nh-mac anti-spoof check and are eligible for Layer 2–aware NAT.

One-to-one (1:1) NAT

In 1:1 NAT, each source IP address is translated in 1:1 fashion to a corresponding outside IP address. However, the source ports are passed transparently without translation.

The mapping between the inside IP addresses and outside IP addresses in 1:1 NAT supports two modes:

  • dynamic

    The user can specify the outside IP addresses in the pool, but the exact mapping between the inside IP address and the configured outside IP addresses is performed dynamically by the system in a semi-random fashion.

  • static

    The mappings between IP addresses are configurable and they can be explicitly set.

The dynamic version of 1:1 NAT is protocol dependent. Only TCP/UDP/ICMP protocols are allowed to traverse such NAT. All other protocols are discarded, with the exception of PPTP with ALG. In this case, only GRE traffic associated with PPTP is allowed through dynamic 1:1 NAT.

The static version of 1:1 NAT is protocol agnostic. This means that all IP based protocols are allowed to traverse static 1:1 NAT.

The following points are applicable to 1:1 NAT:

  • Even though source ports are not translated, state maintenance for TCP and UDP traffic is still performed.

  • Traffic can be initiated from outside toward any statically mapped IPv4 address.

  • 1:1 NAT can be supported simultaneously with NAPT (classic non 1:1 NAT) within the same inside routing context. This is accomplished by configuring two separate NAT pools, one for 1:1 NAT and the other for non 1:1 NAPT.
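
The translation itself reduces to rewriting the source address while leaving the port untouched, as in this illustrative Python sketch (not SR OS code; the mapping entries are hypothetical).

# inside IP -> outside IP, one-to-one
MAPPINGS = {"10.10.0.220": "192.168.255.206"}

def translate(src_ip: str, src_port: int):
    """1:1 NAT: translate the address, pass the source port transparently."""
    return MAPPINGS[src_ip], src_port

print(translate("10.10.0.220", 51342))  # ('192.168.255.206', 51342)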

Static 1:1 NAT

In static 1:1 NAT, inside IP addresses are statically mapped to the outside IP addresses. This way, devices on the outside can predictably initiate traffic to the devices on the inside.

Static configuration is based on the CLI concepts used in deterministic NAT. The following example shows a deterministic NAT configuration.

Deterministic NAT configuration (MD-CLI)

[ex:/configure router "Base" nat inside large-scale nat44 deterministic]
A:admin@node-2# info
    prefix-map 10.10.0.220/30 nat-policy "one-to-one-agnostic" {
        map 10.10.0.220 to 10.10.0.220 {
            first-outside-address 192.168.255.206
        }
        map 10.10.0.221 to 10.10.0.221 {
            first-outside-address 192.168.255.207
        }
        map 10.10.0.222 to 10.10.0.222 {
            first-outside-address 192.168.255.208
        }
        map 10.10.0.223 to 10.10.0.223 {
            first-outside-address 192.168.255.209
        }
    }

Deterministic NAT configuration (classic CLI)

A:node-2>config>router>nat>inside>deterministic# info
----------------------------------------------
 prefix-map 10.10.0.220/30 subscriber-type classic-lsn-sub nat-policy "one-to-one-agnostic" create
         map start 10.10.0.220 end 10.10.0.220 to 192.168.255.206
         map start 10.10.0.221 end 10.10.0.221 to 192.168.255.207
         map start 10.10.0.222 end 10.10.0.222 to 192.168.255.208
         map start 10.10.0.223 end 10.10.0.223 to 192.168.255.209
         no shutdown
    exit
----------------------------------------------

Static mappings are configured according to the map statements:

  • In the MD-CLI, the map statement must be configured by the user, but the following command can be used to produce system-generated maps.
    tools perform nat deterministic calculate-maps
    The preceding command outputs a set of system-generated map statements. The map command options can then be copied and pasted into an MD-CLI candidate configuration by the user.
  • In classic CLI, the map statement can be configured manually by the user or automatically by the system.

IP addresses from the automatically generated map statements are sequentially mapped to the available outside IP addresses in the pool:

  • The first inside IP address is mapped to the first available outside IP address from the pool.

  • The second inside IP address is mapped to the second available outside IP address from the pool.

The following mappings apply to the preceding example.

Table 4. Static mappings

Inside IP address | Outside IP address
10.10.0.220 | 192.168.255.206
10.10.0.221 | 192.168.255.207
10.10.0.222 | 192.168.255.208
10.10.0.223 | 192.168.255.209
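
The sequential assignment shown in the table can be expressed as a short sketch (illustrative Python, not SR OS code).

from ipaddress import ip_address, ip_network

inside = [str(a) for a in ip_network("10.10.0.220/30")]  # .220 to .223
first_outside = ip_address("192.168.255.206")  # first free address in the pool

# First inside address -> first outside address, second -> second, and so on.
maps = {ip: str(first_outside + i) for i, ip in enumerate(inside)}
print(maps)  # {'10.10.0.220': '192.168.255.206', ..., '10.10.0.223': '192.168.255.209'}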

Protocol-agnostic behavior

Although static 1:1 NAT is protocol agnostic, the state maintenance for TCP and UDP traffic is still required to support ALGs. Therefore, the existing scaling limits related to the number of supported flows still apply.

Protocol-agnostic behavior in 1:1 NAT is a property of a NAT pool, as the following example shows.

Protocol-agnostic behavior configuration (MD-CLI)
[ex:/configure router "Base" nat outside]
A:admin@node-2# info
    pool "one-to-one" {
        admin-state enable
        type large-scale
        nat-group 1
              mode one-to-one
        applications {
            agnostic true
        }
        port-forwarding {
            range-start 0
            range-end 0
        }
        port-reservation {
            port-blocks 1
        }
        large-scale {
            subscriber-limit 1
            }
            deterministic {
                port-reservation 65325
            }
        address-range 192.168.2.0 end 192.168.2.10 {
        }
    }
Protocol-agnostic behavior configuration (classic CLI)
A:node-2>config>router>nat>outside# info
----------------------------------------------
               pool "one-to-one" nat-group 1 type large-scale applications agnostic create
                    no shutdown
                    port-reservation blocks 1
                    port-forwarding-range 0 0
                    subscriber-limit 1
                    deterministic
                        port-reservation 65325 
                    exit
                    mode one-to-one
                    address-range 192.168.2.0 192.168.2.10 create
                    exit
                exit
----------------------------------------------

The applications agnostic command option is a pool create-time option. It automatically pre-sets the following pool command options:

  • mode is set to one-to-one

  • port forwarding range start is set to 0

  • port forwarding range end is set to 0

  • number of port reservation blocks is set to 1

  • the subscriber limit is set to 1

  • the deterministic port reservation is set to 65325, which configures the pool to operate in static (or deterministic) mode

When pre-set, these command options cannot be changed while the pool is operating in protocol agnostic mode.

Modification of parameters in static 1:1 NAT

Note: This information applies for the classic CLI.

In classic CLI only, command options in the static 1:1 NAT can be changed according to the following rules:

  • The deterministic pool must be in a no shutdown state when a prefix or a map command in deterministic NAT is added or removed.

  • All configured prefixes referencing the pool via the NAT policy must be deleted (unconfigured) before the pool can be shut down.

  • Map statements can be modified only when the prefix is in a shutdown state. All existing map statements must be removed before new ones are created.

These rules do not apply in MD-CLI.

Load distribution over ISAs in static 1:1 NAT

For best traffic distribution over ISAs, the value of the maximum subscriber limit command option should be set to 1 using the following command:

  • MD-CLI
    configure router nat inside large-scale nat44 max-subscriber-limit 1
  • classic CLI
    configure router nat inside deterministic classic-lsn-max-subscriber-limit 1

This means that traffic is load-balanced over ISAs based on individual inside IP addresses. In static 1:1 NAT, this is possible because the subscriber-limit command option at the pool level is preset to a fixed value of 1.

However, if static 1:1 NAT is used simultaneously with regular (many-to-one) deterministic NAT, where the subscriber-limit command option can be set to a value greater than 1, then the maximum subscriber limit also has to be set to a value greater than 1. The consequence is that traffic is load-balanced based on consecutive blocks of IP addresses (subnets) instead of individual IP addresses. See Deterministic NAT for information about deterministic NAT behavior.
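
The effect of the limit on load-balancing granularity can be sketched as follows. This is illustrative Python, not SR OS code; the modulo operation is a stand-in for the internal distribution function.

from ipaddress import ip_address

def isa_for(inside_ip: str, max_subscriber_limit: int, num_isas: int) -> int:
    """With limit 1 the key is the /32; larger limits group consecutive IPs."""
    block = int(ip_address(inside_ip)) // max_subscriber_limit
    return block % num_isas  # stand-in for the real distribution function

# Limit 1: neighboring hosts may land on different ISAs.
print(isa_for("10.10.10.1", 1, 2), isa_for("10.10.10.2", 1, 2))
# Limit 128: a whole block of 128 consecutive addresses lands on one ISA.
print(isa_for("10.10.10.1", 128, 2), isa_for("10.10.10.2", 128, 2))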

NAT-policy selection

The traffic match criteria used in the selection of a specific NAT policy in static 1:1 NAT (the deterministic part of the configuration) must not overlap with the traffic match criteria used in the selection of a specific NAT policy in filters or in a destination-prefix statement (both are used for traffic diversion to NAT). Otherwise, traffic is dropped in the ISA.

A specific NAT policy in this context refers to a non-default NAT policy, or a NAT policy that is directly referenced in a filter, in a destination prefix or a deterministic prefix.

The following example is used to clarify this point.

NAT policy selection (MD-CLI)
[ex:/configure router "Base" nat inside large-scale]
A:admin@node-2# info
    nat44 {
        max-subscriber-limit 128
        destination-prefix 192.0.2.0/24 {
            nat-policy "pol-2"
        }
        deterministic {
            prefix-map 10.10.10.0/24 nat-policy "pol-1" {
                map 10.10.10.0 to 10.10.10.255 {
                    first-outside-address 192.168.0.1
                }
            }
        }
    }
NAT policy selection (classic CLI)
A:node-2>config>router>nat>inside# info
----------------------------------------------
                destination-prefix 192.0.2.0/24 nat-policy "pol-2"
                classic-lsn-max-subscriber-limit 128
                deterministic
                    prefix-map 10.10.10.0/24 subscriber-type classic-lsn-sub nat-policy "pol-1" create
                        shutdown
                        map start 10.10.10.0 end 10.10.10.255 to 192.168.0.1
                    exit
                exit
----------------------------------------------

In the preceding example:

  • Traffic is diverted to NAT using specific nat-policy pol-2.

  • The deterministic (source) prefix 10.10.10.0/24 is explicitly mapped to nat-policy pol-1, which points to a protocol-agnostic 1:1 NAT pool.

  • Packets received in the ISA have a source IP of 10.10.10.0/24 and a destination IP of 192.0.2.0/24.

  • If no NAT mapping for this traffic exists in the ISA, a NAT policy (and with this, the NAT pool) must be determined to create the mapping. Traffic is diverted to NAT using NAT policy pol-2, while the deterministic mapping suggests that the NAT policy pol-1 should be used (this is a different pool from the one referenced in NAT policy pol-2). Because of the specific NAT policy conflict, traffic is dropped in the ISA.

To successfully pass traffic between two subnets through NAT while simultaneously using static 1:1 NAT and regular LSN44, a default (non-specific) NAT policy can be used for regular LSN44.

A specific NAT policy (in a filter, destination-prefix command, or in deterministic prefix-map command) always takes precedence over a default NAT policy. However, traffic that matches classification criteria (in a filter, destination-prefix command, or a deterministic prefix-map command) that leads to multiple specific NAT policies, is dropped.

In this case, the four hosts from the prefix 10.10.10.0/24 are mapped in 1:1 fashion to 4 IP addresses from the pool referenced in the specific NAT policy pol-1, while all other hosts from the 10.10.10.0/24 network are mapped to the NAPT pool referenced by the default NAT policy pol-2. In this way, a NAT policy conflict is avoided.

Mapping timeout

Static 1:1 NAT mappings are explicitly configured, and therefore, their lifetime is tied to the configuration.

Logging

The logging mechanism for static mapping is the same as in Deterministic NAT. Configuration changes are logged via syslog and enhanced with reverse querying on the system.

Restrictions

Static 1:1 NAT is supported only for LSN44. There is no support for DS-Lite/NAT64 or Layer 2–aware NAT.

ICMP

In 1:1 NAT, specific ICMP messages contain an additional IP header embedded in the ICMP header. For example, when an ICMP message is sent to the source because a datagram cannot be delivered to its destination, the ICMP-generating node includes the original IP header of the packet plus 64 bits of the original datagram. This information helps the source node to match the ICMP message to the process associated with it.

When these messages are received in the downstream direction (on the outside), 1:1 NAT recognizes them and changes the destination IP address not only in the outside header but also in the ICMP header. In other words, a lookup in the downstream direction is performed in the ISA to determine if the packet is ICMP with a specific type. Depending on the outcome, the destination IP address in the ICMP header is changed (reverted to the original source IP address).

Messages carrying the original IP header within ICMP header are:

  • Destination Unreachable Messages (Type 3)

  • Time Exceeded Message (Type 11)

  • Parameter Problem Message (Type 12)

  • Source Quench Message (Type 4)
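
The downstream rewrite described above can be sketched in Python. This is only an illustration of the embedded-header manipulation on raw ICMP bytes, with names of our own choosing, not the ISA implementation; the outer ICMP checksum update is noted but omitted for brevity.

import socket
import struct

# ICMP error types that embed the original IP header: destination
# unreachable (3), source quench (4), time exceeded (11), and
# parameter problem (12).
EMBEDDING_TYPES = {3, 4, 11, 12}

def ip_checksum(header: bytes) -> int:
    # Standard 16-bit one's-complement sum over an IPv4 header.
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_embedded(icmp: bytes, outside_ip: str, inside_ip: str) -> bytes:
    # Revert the NAT outside address inside the embedded IP header of an
    # ICMP error message received in the downstream (outside) direction.
    if icmp[0] not in EMBEDDING_TYPES:
        return icmp                        # no embedded header to translate
    embedded = bytearray(icmp[8:])         # embedded IP header + 64 bits of datagram
    old, new = socket.inet_aton(outside_ip), socket.inet_aton(inside_ip)
    for offset in (12, 16):                # embedded source/destination address fields
        if bytes(embedded[offset:offset + 4]) == old:
            embedded[offset:offset + 4] = new
    ihl = (embedded[0] & 0x0F) * 4         # embedded header length in bytes
    embedded[10:12] = b"\x00\x00"          # clear, then recompute the header checksum
    embedded[10:12] = struct.pack("!H", ip_checksum(bytes(embedded[:ihl])))
    return icmp[:8] + bytes(embedded)      # the outer ICMP checksum must also be updated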

Deterministic NAT

Overview

In deterministic NAT the subscriber is deterministically mapped into an outside IP address and a port block. The algorithm that performs this deterministic mapping is reversible, which means that a NAT subscriber can be uniquely derived from the outside IP address and the outside port (and the routing instance). Thus, logging in deterministic NAT is not needed.

The deterministic [subscriber <-> outside-ip, deterministic-port-block] mapping can be automatically extended by a dynamic port block in case the deterministic port block becomes exhausted of ports. Extending the original deterministic port block of the NAT subscriber with a dynamic port block yields a satisfactory compromise between deterministic NAT and non-deterministic NAT. There is no logging as long as the translations are in the domain of the deterministic NAT. After a dynamic port block is allocated for port extension, logging is automatically activated.

NAT subscribers in deterministic NAT are not assigned an outside IP address and deterministic port block on a first-come-first-served basis. Instead, deterministic mappings are pre-created at the time of configuration, regardless of whether the NAT subscriber is active or not. In other words, overbooking of the outside address pool is not supported in deterministic NAT. Consequently, all configured deterministic subscribers (for example, inside IP addresses in LSN44 or IPv6 addresses/prefixes in DS-Lite) are guaranteed access to NAT resources.

Supported deterministic NAT types

The routers support Deterministic LSN44 and Deterministic DS-Lite. The basic deterministic NAT principle is applied equally to both NAT flavors. The difference between the two stems from the difference in interpretation of the subscriber – in LSN44 a subscriber is an IPv4 address, whereas in DS-Lite the subscriber is an IPv6 address or prefix (configuration dependent).

With the exception of the following commands in the inside routing context, the deterministic NAT configuration blocks are for the most part common to LSN44 and DS-Lite:
  • MD-CLI
    configure router nat inside large-scale nat44 max-subscriber-limit
    configure router nat inside large-scale dual-stack-lite max-subscriber-limit
  • classic CLI
    configure router nat inside classic-lsn-max-subscriber-limit
    configure router nat inside dslite-max-subscriber-limit

See Deterministic DS-Lite at the end of this section for the features specific to DS-Lite.

Number of subscribers per-outside IP and per-pool

The maximum number of NAT subscribers that can be mapped to a single outside IP address is configurable using the following command.

  • MD-CLI
    configure router nat outside pool large-scale subscriber-limit
    configure service vprn nat outside pool large-scale subscriber-limit
  • classic CLI
    configure router nat outside pool subscriber-limit
    configure service vprn nat outside pool subscriber-limit

For Deterministic NAT in a system with multiple ESA-VMs in a NAT group, this number is restricted to a power of 2 (2^n). In a system with a single ESA-VM in a NAT group and NAT44, this restriction does not apply and a flexible number of NAT subscribers can be mapped to an outside IP address. See Flexible number of subscribers per outside IP (single IP) address.

For example, in a multi-ESA-VM system with NAT44 where the NAT subscriber is an IP address, the deterministic subscribers are configured with prefixes (for example, a prefix 10.10.10.0/24 corresponds to 256 or 2^8 subscribers) as opposed to an IP address range that would contain a flexible number of addresses (for example, a range of 10.10.0.5 – 10.10.0.104, which would contain 100 IP addresses).

However, in a system with a single ESA-VM in a NAT group, deterministic NAT44 subscribers can be configured with an address map with a flexible number of IP addresses, that is, not restricted by the 2^n rule.

Referencing a pool

In deterministic NAT, the outside pool can be shared amongst subscribers from multiple routing instances. Also, NAT subscribers from a single routing instance can be selectively mapped to different outside pools.

Outside pool configuration

The number of deterministic mappings that a single outside IP address can sustain is determined through the configuration of the outside pool.

The port allocation per outside IP is shown in the following figures.

Figure 15. Outside pool configuration (MD-CLI)
Figure 16. Outside pool configuration (classic CLI)

The well-known ports are predetermined and are in the range 0 to 1023.

The upper limit of the port range for static port forwards (wildcard range) is determined by the following existing command:
  • MD-CLI
    configure router nat outside pool port-forwarding range-end
  • classic CLI
    configure router nat outside pool port-forwarding-range
The range of ports allocated for deterministic mappings (DetP) is determined by multiplying the number of subscribers per outside IP (the subscriber-limit command) with the number of ports per deterministic block (the deterministic port-reservation command). The number of subscribers per outside IP in deterministic NAT must be a power of 2 (2^n). Use the following commands to configure the subscriber limit and deterministic port reservation:
  • MD-CLI
    configure router nat outside pool large-scale subscriber-limit
    configure router nat outside pool large-scale deterministic port-reservation
  • classic CLI
    configure router nat outside pool subscriber-limit
    configure router nat outside pool deterministic port-reservation

The remaining ports, extending from the end of the deterministic port range to the end of the total port range (65535), are used for dynamic port allocation. The size of each dynamic port block is determined with the existing port-reservation command.

The deterministic port-reservation command enables deterministic mode of operation for the pool.

Three examples follow, with deterministic Large Scale NAT44, where the requirements are:

  • 300, 500, or 700 (three separate examples) ports are in each deterministic port block.

  • A subscriber (an inside IPv4 address in LSN44) can extend its deterministic ports by a minimum of one dynamic port-block and by a maximum of four dynamic port blocks.

  • Each dynamic port-block contains 100 ports.

  • Oversubscription of dynamic port blocks is 4:1. This means that one quarter of the inside IP addresses may be starved out of dynamic port blocks in the worst case.

  • The wildcard (static) port range is 3000 ports.

First, the ideal case is examined, where an arbitrary number of subscribers per outside IP address is allocated according to the requirements described above. Then the limitation that the number of subscribers must be a power of 2 is factored in.

Table 5. Contiguous number of subscribers
Well-known ports | Static port range | Number of ports in deterministic block | Number of deterministic blocks | Number of ports in dynamic block | Number of dynamic blocks | Number of inside IP addresses per outside IP address | Block limit per inside IP address | Wasted ports
0-1023 | 1024-4023 | 300 | 153 | 100 | 153 | 153 | 5 | 312
0-1023 | 1024-4023 | 500 | 102 | 100 | 102 | 102 | 5 | 312
0-1023 | 1024-4023 | 700 | 76 | 100 | 76 | 76 | 5 | 712

The example in Contiguous number of subscribers shows how port ranges would be carved out in an ideal scenario.

The other values are calculated according to the fixed requirements.

The port-block limit includes the deterministic port blocks plus all dynamic port blocks.

Next, in Preserving Det/Dyn port ratio with 2^n subscribers, a more realistic example is considered, where the number of subscribers is equal to 2^n. The ratios between the deterministic ports and the dynamic ports per port block are preserved just as in the example above: 3/1, 5/1 and 7/1. In this case, the number of ports per port block is dictated by the number of subscribers per outside IP address.

Table 6. Preserving Det/Dyn port ratio with 2^n subscribers
Well-known ports | Static port range | Number of ports in deterministic block | Number of deterministic blocks | Number of ports in dynamic block | Number of dynamic blocks | Number of inside IP addresses per outside IP address | Block limit per inside IP address | Wasted ports
0-1023 | 1024-4023 | 180 | 256 | 60 | 256 | 256 | 5 | 72
0-1023 | 1024-4023 | 400 | 128 | 80 | 128 | 128 | 5 | 72
0-1023 | 1024-4023 | 840 | 64 | 120 | 64 | 64 | 5 | 72

The final example (Fixed number of deterministic ports with 2^n subscribers) is similar to Contiguous number of subscribers, with the difference that the number of ports in the deterministic block is kept fixed, as in the original example (300, 500, and 700).

Table 7. Fixed number of deterministic ports with 2^n subscribers
Well-known ports | Static port range | Number of ports in deterministic block | Number of deterministic blocks | Number of ports in dynamic block | Number of dynamic blocks | Number of inside IP addresses per outside IP address | Block limit per inside IP address | Wasted ports
0-1023 | 1024-4023 | 300 | 128 | 180 | 128 | 128 | 5 | 72
0-1023 | 1024-4023 | 500 | 64 | 461 | 64 | 64 | 5 | 8
0-1023 | 1024-4023 | 700 | 64 | 261 | 64 | 64 | 5 | 8

The three preceding examples should give a perspective on the size of deterministic and dynamic port blocks in relation to the number of subscribers (2^n) per outside IP address. Users should run a similar dimensioning exercise before they start configuring their deterministic NAT.

The following CLI example corresponds to the first case in Fixed number of deterministic ports with 2^n subscribers (300 deterministic ports and 128 subscribers per outside IP address).

Configuring the number of deterministic ports per subscriber (MD-CLI)

[ex:/configure service vprn "7"]
A:admin@node-2# info
    customer "1"
    nat {
        outside {
            pool "mypool" {
                type large-scale
                nat-group 1
                port-forwarding {
                    range-end 4023
                }
                port-reservation {
                    ports 180
                }
                large-scale {
                    subscriber-limit 128
                    deterministic {
                        port-reservation 300
                    }
                }
            }
        }
    }

Configuring the number of deterministic ports per subscriber (classic CLI)

A:node-2>config>service>vprn$ info
----------------------------------------------
            no shutdown
            nat
                outside
                    pool "mypool" nat-group 1 type large-scale create
                        shutdown
                        port-reservation ports 180
                        port-forwarding-range 4023
                        subscriber-limit 128
                        deterministic
                            port-reservation 300
                        exit
                    exit
                exit
            exit
----------------------------------------------

where:

128 subs * 300 ports = 38,400 ports in the deterministic port range

128 subs * 180 ports = 23,040 ports in the dynamic port range

det + dyn available ports = 65,536 – 4,024 = 61,512

det + dyn usable ports = 128*300 + 128*180 = 61,440 ports

72 ports per outside IP are wasted.
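
The same arithmetic can be reproduced in a few lines of Python. The values are taken from the example configuration above; the variable names are illustrative, not CLI keywords.

# Port carving for the example pool: 128 subscribers per outside IP,
# 300-port deterministic blocks, 180-port dynamic blocks.
STATIC_END = 4023        # port-forwarding range ends at 4023 (0-1023 are well-known)
SUB_LIMIT = 128          # subscriber-limit
DET_BLOCK = 300          # deterministic port-reservation
DYN_BLOCK = 180          # port-reservation ports (dynamic)

available = 65536 - (STATIC_END + 1)           # 61,512 ports left for det + dyn
det_range = SUB_LIMIT * DET_BLOCK              # 38,400 deterministic ports
dyn_range = SUB_LIMIT * DYN_BLOCK              # 23,040 dynamic ports
wasted = available - det_range - dyn_range     # 72 ports per outside IP

print(det_range, dyn_range, wasted)            # 38400 23040 72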

Configuring a NAT policy and maximum number of blocks per subscriber (MD-CLI)

[ex:/configure service nat]
A:admin@node-2# info
    nat-policy "mypolicy" {
        block-limit 5
    }

Configuring a NAT policy and maximum number of blocks per subscriber (classic CLI)

A:node-2>config>service>nat# info
----------------------------------------------
            nat-policy "mypolicy" create
                block-limit 5      # 1 deterministic port block + 4 dynamic port blocks
            exit
----------------------------------------------

This configuration allows 128 subscribers (inside IP addresses in LSN44) for each outside address (compression ratio is 128:1) with each subscriber being assigned up to 1020 ports (300 deterministic and 720 dynamic ports over 4 dynamic port blocks).

The outside IP addresses in the pool and their corresponding port ranges are organized as shown in the following figure.

Figure 17. Outside address ranges

Assuming that the preceding figure depicts an outside deterministic pool, the number of subscribers that can be accommodated by this deterministic pool is represented by the purple squares (the number of IP addresses in the outside pool * subscriber-limit). The number of subscribers across all configured prefixes on the inside that are mapped to the same deterministic pool must be less than what the outside pool can accommodate. In other words, an outside address pool in deterministic NAT cannot be oversubscribed.

The following example displays a CLI representation of a deterministic pool definition including the outside IP ranges.

Deterministic pool definition (MD-CLI)

[ex:/configure service vprn "1"]
A:admin@node-2# info
    customer "1"
    nat {
        outside {
            pool "mypool" {
                type large-scale
                nat-group 1
                port-forwarding {
                    range-end 4023
                }
                port-reservation {
                    ports 461
                }
                large-scale {
                    subscriber-limit 64
                    deterministic {
                        port-reservation 2
                    }
                }
            }
        }
    }

Deterministic pool definition (classic CLI)

A:node-2>config>service>vprn$ info
----------------------------------------------
            shutdown
            nat
                outside
                    pool "mypool" nat-group 1 type large-scale create
                        shutdown
                        port-reservation ports 461
                        port-forwarding-range 4023
                        subscriber-limit 64
                        deterministic
                            port-reservation 2
                        exit
                    exit
                exit
            exit
----------------------------------------------

Mapping rules and the map command in deterministic LSN44

The common building block on the inside in the deterministic LSN44 configuration is an IPv4 prefix. The NAT subscribers (inside IPv4 addresses) from the configured prefix are deterministically mapped to the outside IP addresses and corresponding deterministic port blocks. Any inside prefix in any routing instance can be mapped to any pool in any routing instance (including the one in which the inside prefix is defined).

The mapping between the inside prefix and the deterministic pool is achieved through a NAT policy that can be referenced per each individual inside IPv4 prefix. IPv4 addresses from the prefixes on the inside are distributed over the IP addresses defined in the outside pool referenced by the NAT policy.

The mapping itself is represented by the map command under the prefix hierarchy. The following example displays the configuration of source IP prefixes on the inside and their association with outside deterministic NAT pools through the NAT policy in a vprn or router context.

Configuring source IP prefixes on the inside and their association with outside deterministic NAT pools (MD-CLI)

[ex:/configure service vprn "12"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    deterministic {
                        prefix-map 10.0.0.0/24 nat-policy "nat-policy-1" {
                            map 10.0.0.0 to 10.0.0.255 {
                                first-outside-address 192.168.0.1
                            }
                        }
                    }
                }
            }
        }
    }

[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 256
                    deterministic {
                        prefix-map 10.10.4.0/22 nat-policy "nat-policy-2" {
                            map 10.10.4.0 to 10.10.7.255 {
                                first-outside-address 192.168.0.3
                            }
                        }
                    }
                }
            }
        }
    }

Configuring source IP prefixes on the inside and their association with outside deterministic NAT pools (classic CLI)

A:node-2>config>service>vprn# info
----------------------------------------------
	nat
		inside
			deterministic
				prefix-map 10.0.0.0/24 subscriber-type classic-lsn-sub nat-policy "nat-policy-1" create
					map start 10.0.0.0 end 10.0.0.255 to 192.168.0.1
				no shutdown
				exit
			exit
		exit
	exit
----------------------------------------------

A:node-2>config>router# info
...
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
	nat
		inside
			classic-lsn-max-subscriber-limit 256
			deterministic
				prefix-map 10.10.4.0/22 subscriber-type classic-lsn-sub nat-policy "nat-policy-2" create
					map start 10.10.4.0 end 10.10.7.255 to 192.168.0.3
				no shutdown
				exit
			exit
		exit
	exit
----------------------------------------------
The purpose of the map statement is to split the number of subscribers within the configured prefix over available sequences of outside IP addresses. The key command option that governs mappings between the inside IPv4 addresses and outside IPv4 addresses in deterministic LSN44 is defined by the following command.
  • MD-CLI
    configure router nat outside pool large-scale subscriber-limit
  • classic CLI
    configure router nat outside pool subscriber-limit

This command option must be a power of 2 and it limits the maximum number of NAT subscribers that can be mapped to the same outside IP address.

The following rules govern the configuration of the map statement.

If the number of subscribers (IP addresses in LSN44) in the map statement is larger than the subscriber limit per outside IP, the subscribers are split over a block of consecutive outside IP addresses, where the outside IP address in the map statement represents only the first outside IP address in that block.

The number of subscribers (the range of inside IP addresses in LSN44) in the map statement does not have to be a power of 2. Rather, it must be a multiple of a power of 2: m * 2^n, where m is the number of consecutive outside IP addresses to which the subscribers are mapped and 2^n is the subscriber limit per outside IP.
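
The splitting logic implied by these rules can be sketched in Python. This is a hypothetical helper, not the router's implementation; it only illustrates how a map range is carved into chunks of subscriber-limit (2^n) addresses, each chunk consuming one consecutive outside IP address.

import ipaddress

def split_map(start, end, first_outside, subscriber_limit):
    s = int(ipaddress.IPv4Address(start))
    e = int(ipaddress.IPv4Address(end))
    count = e - s + 1
    assert count % subscriber_limit == 0, "range must contain m * 2^n addresses"
    if count > subscriber_limit:
        assert s % subscriber_limit == 0, "lower n bits of the start address must be 0"
    out = int(ipaddress.IPv4Address(first_outside))
    chunks = []
    for i in range(count // subscriber_limit):
        lo = ipaddress.IPv4Address(s + i * subscriber_limit)
        hi = ipaddress.IPv4Address(s + (i + 1) * subscriber_limit - 1)
        chunks.append((str(lo), str(hi), str(ipaddress.IPv4Address(out + i))))
    return chunks

# map 10.0.0.0 to 10.0.0.255 with first-outside-address 192.168.0.1 and a
# subscriber limit of 128 splits into two implicit mappings:
print(split_map("10.0.0.0", "10.0.0.255", "192.168.0.1", 128))
# [('10.0.0.0', '10.0.0.127', '192.168.0.1'),
#  ('10.0.0.128', '10.0.0.255', '192.168.0.2')]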

The following displays an example of the configuration of the map statement.

Configuring the map statement (MD-CLI)

[ex:/configure router "Base" nat]
A:admin@node-2# info
...
    outside {
        pool "my-det-pool" {
            type large-scale
            nat-group 1
            large-scale {
                subscriber-limit 128
                deterministic {
                    port-reservation 31
                }
            }
            address-range 192.168.0.0 end 192.168.0.10 {
            }
        }
    }

[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 128
                    deterministic {
                        prefix-map 10.0.0.0/24 nat-policy "det" {
                            map 10.0.0.0 to 10.0.0.255 {
                                first-outside-address 192.168.0.1
                            }
                        }
                    }
                }
            }
        }
    }

Configuring the map statement (classic CLI)

A:node-2>config>router>nat# info
----------------------------------------------
...
            outside
                pool "my-det-pool" nat-group 1 type large-scale create
                    shutdown
                    subscriber-limit 128
                    deterministic
                        port-reservation 31
                    exit
                    address-range 192.168.0.0 192.168.0.10 create
                    exit
                exit
            exit
----------------------------------------------

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            nat
                inside
                    classic-lsn-max-subscriber-limit 128
                    deterministic
                        prefix-map 10.0.0.0/24 subscriber-type classic-lsn-sub nat-policy "det" create
                            map start 10.0.0.0 end 10.0.0.255 to 192.168.0.1
                            no shutdown
                        exit
                    exit
                exit
            exit
----------------------------------------------

In this case, the configured 10.0.0.0/24 prefix is represented by the range of IP addresses in the map statement (10.0.0.0 to 10.0.0.255). Because the range of 256 IP addresses in the map statement cannot be mapped into a single outside IP address (subscriber-limit=128), this range must be further implicitly split within the system and mapped into multiple outside IP addresses. The implicit split creates two IP address ranges, each with 128 IP addresses (10.0.0.0/25 and 10.0.0.128/25), so that addresses from each IP range are mapped to one outside IP address. The hosts from the range 10.0.0.0-10.0.0.127 are mapped to the first IP address in the pool (192.168.0.1) as explicitly stated in the map statement.

In the classic CLI, this is done in the to statement.

In the MD-CLI, this is done in the first-outside-address statement.

The hosts from the second range, 10.0.0.128-10.0.0.255, are implicitly mapped to the next consecutive IP address (192.168.0.2).

Alternatively, the map statement can be configured as:

Configuring the map statement (MD-CLI)

[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 128
                    deterministic {
                        prefix-map 10.0.0.0/24 nat-policy "det" {
                            map 10.0.0.0 to 10.0.0.127 {
                                first-outside-address 192.168.0.1
                            }
                            map 10.0.0.128 to 10.0.0.255 {
                                first-outside-address 192.168.0.5
                            }
                        }
                    }
                }
            }
        }
    }

Configuring the map statement (classic CLI)

A:node-2>config>service>vprn$ info
----------------------------------------------
	shutdown
	nat
		inside
			classic-lsn-max-subscriber-limit 128
			deterministic
				prefix-map 10.0.0.0/24 subscriber-type classic-lsn-sub nat-policy "det" create
					map start 10.0.0.0 end 10.0.0.127 to 192.168.0.1
					map start 10.0.0.128 end 10.0.0.255 to 192.168.0.5
				shutdown
				exit
			exit
		exit
	exit
----------------------------------------------

In this case, the IP address range in the map statement is split across two non-consecutive outside IP addresses. This gives the user more freedom in configuring the mappings.

However, the following configuration is not supported:

Unsupported configuration (MD-CLI)

Note: The following configuration is not supported.
[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 128
                    deterministic {
                        prefix-map 10.0.0.0/24 nat-policy "det" {
                            map 10.0.0.0 to 10.0.0.63 {
                                first-outside-address 192.168.0.1
                            }
                            map 10.0.0.64 to 10.0.0.127 {
                                first-outside-address 192.168.0.3
                            }
                            map 10.0.0.128 to 10.0.0.255 {
                                first-outside-address 192.168.0.5
                            }
                        }
                    }
                }
            }
        }
    }

Unsupported configuration (classic CLI)

Note: The following configuration is not supported.
A:node-2>config>service>vprn$ info
----------------------------------------------
            shutdown
            nat
                inside
                    classic-lsn-max-subscriber-limit 128
                    deterministic
                        prefix-map 10.0.0.0/24 subscriber-type classic-lsn-sub nat-policy "det" create
                            map start 10.0.0.0 end 10.0.0.63 to 192.168.0.1
                            map start 10.0.0.64 end 10.0.0.127 to 192.168.0.3
                            map start 10.0.0.128 end 10.0.0.255 to 192.168.0.5

Considering that the subscriber-limit = 128 (2^n; where n=7), the lower n bits of the start address in the second map statement are not 0.

In the classic CLI, this appears as map start 10.0.0.64 end 10.0.0.127 to 192.168.0.3 in the configuration.

In the MD-CLI, this appears as map 10.0.0.64 to 10.0.0.127 first-outside-address 192.168.0.3 in the configuration.

This violates rule #1 governing the provisioning of the map statement.

Assuming that the same pool with 128 subscribers per outside IP address is used, the following scenario is also not supported (the configured prefix in this example is different from the one in the previous example).

Unsupported configuration (MD-CLI)

Note: The following configuration is not supported.
[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 128
                    deterministic {
                        prefix-map 10.0.0.0/26 nat-policy "det" {
                            map 10.0.0.0 to 10.0.0.63 {
                                first-outside-address 192.168.0.1
                            }
                        }
                        prefix-map 10.0.1.0/26 nat-policy "det" {
                            map 10.0.1.0 to 10.0.1.63 {
                                first-outside-address 192.168.0.1
                            }
                        }
                    }
                }
            }
        }
    }

Unsupported configuration (classic CLI)

Note: The following configuration is not supported.
A:node-2>config>service>vprn$ info
----------------------------------------------
            shutdown
            nat
                inside
                    classic-lsn-max-subscriber-limit 128
                    deterministic
                        prefix-map 10.0.0.0/26 subscriber-type classic-lsn-sub nat-policy "det" create
                            map start 10.0.0.0 end 10.0.0.63 to 192.168.0.1
                        exit
                        prefix-map 10.0.1.0/26 subscriber-type classic-lsn-sub nat-policy "det" create
                            map start 10.0.1.0 end 10.0.1.63 to 192.168.0.1
                        exit

Although the lower n bits in both map statements are 0, both statements reference the same outside IP address (192.168.0.1). This violates rule #2 governing the provisioning of the map statement. Each of the prefixes in this case has to be mapped to a different outside IP address, which leads to underutilization of outside IP addresses (half of the deterministic port blocks in each of the two outside IP addresses are not used).

In conclusion, considering that the number of subscribers per outside IP (subscriber-limit) must be 2^n, the inside IP addresses from the configured prefix are split on a 2^n boundary so that every deterministic port block of an outside IP is used. If the originally configured prefix contains fewer subscribers (IP addresses in LSN44) than an outside IP address can accommodate (2^n), all subscribers from that prefix are mapped to a single outside IP. Because the outside IP cannot be shared with NAT subscribers from other prefixes, some of the deterministic port blocks for this particular outside IP address are not used.

Each configured prefix can evaluate into multiple map commands. The number of map commands depends on the length of the configured prefix, the subscriber-limit command and fragmentation of the outside address range within the pool with which the prefix is associated.

In classic CLI, the map statement can be configured manually by the user or automatically by the system.

In MD-CLI, the map statement must be configured by the user, but the following command can be used to produce system-generated maps if needed.
tools perform nat deterministic calculate-maps
The calculate-maps command outputs a set of system-generated map statements. The map command options can then be copied and pasted into an MD-CLI candidate configuration by the user.
  • If the number of subscribers per configured prefix is greater than the subscriber limit per outside IP command option (2^n), the lowest n bits of the map start inside IP address must be set to 0.

  • If the number of subscribers per configured prefix is equal to or less than the subscriber limit per outside IP command option (2^n), only one map command for this prefix is allowed. In this case there is no restriction on the lower n bits of the map start IP address. The range of the inside IP addresses in such a map statement represents the prefix itself.

  • The outside IP address in the map statements must be unique amongst all map statements referencing the same pool. In other words, two map statements cannot reference the same outside IP address in the pool.
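
A hedged sketch of the first and third rules as a Python check follows. The second rule depends on the prefix context and is not covered; in a real check, the set of used outside addresses would also be tracked across all prefixes referencing the same pool.

import ipaddress

def check_prefix_maps(prefix, maps, sub_limit):
    # prefix: the configured inside prefix; maps: (start, end, first_outside)
    # tuples; sub_limit: subscribers per outside IP (2^n).
    pfx = ipaddress.IPv4Network(prefix)
    used_outside = set()
    for start, end, outside in maps:
        s = int(ipaddress.IPv4Address(start))
        count = int(ipaddress.IPv4Address(end)) - s + 1
        # Rule 1: when the prefix holds more subscribers than sub_limit,
        # the lower n bits of every map start address must be 0.
        if pfx.num_addresses > sub_limit and s % sub_limit != 0:
            raise ValueError(f"{start}: lower n bits of the start address must be 0")
        # Rule 3: outside IP addresses must be unique among the maps.
        o = int(ipaddress.IPv4Address(outside))
        span = set(range(o, o + max(1, count // sub_limit)))
        if span & used_outside:
            raise ValueError(f"{outside}: outside IP address reused within the pool")
        used_outside |= span

# The first unsupported example above fails rule 1 on its second map:
try:
    check_prefix_maps("10.0.0.0/24",
                      [("10.0.0.0", "10.0.0.63", "192.168.0.1"),
                       ("10.0.0.64", "10.0.0.127", "192.168.0.3"),
                       ("10.0.0.128", "10.0.0.255", "192.168.0.5")], 128)
except ValueError as err:
    print(err)   # 10.0.0.64: lower n bits of the start address must be 0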

Hashing considerations in deterministic LSN44

Support for multiple MS-ISAs in the NAT group calls for traffic hashing on the inside in the ingress direction. This ensures fair load balancing of the traffic amongst multiple MS-ISAs. While hashing in non-deterministic LSN44 can be performed per source IP address, hashing in deterministic LSN44 is based on subnets instead of individual IP addresses. The length of the hashing subnet is common for all configured prefixes within an inside routing instance. In the case where prefixes from an inside routing instance are referencing multiple pools, the common hashing prefix length is chosen according to the pool with the highest number of subscribers per outside IP address. This ensures that subscribers mapped to the same outside IP address are always hashed to the same MS-ISA.

In general, load distribution based on hashing is dependent on the sample. A large and more diverse sample ensures better load balancing. Therefore, the efficiency of load distribution between the MS-ISAs is dependent on the number and diversity of subnets that the hashing algorithm takes into consideration within the inside routing context.

A simple rule for good load balancing is to configure a large number of subscribers relative to the largest subscriber limit in any pool that is referenced from this inside routing instance.
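
As an illustration of this behavior, the hashing subnet length can be derived as 32 minus log2 of the largest subscriber limit among the referenced pools. This formula is inferred from the example that follows and the sketch is shown for illustration only.

import math

def hashing_prefix_length(pool_subscriber_limits):
    largest = max(pool_subscriber_limits)      # largest subscriber-limit referenced
    return 32 - int(math.log2(largest))        # length of the hashing subnet

print(hashing_prefix_length([64]))             # 26: a /24 spreads over 4 subnets
print(hashing_prefix_length([64, 256]))        # 24: a /24 is a single hashing subnet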

Figure 18. Deterministic LSN44 configuration example (MD-CLI)
Figure 19. Deterministic LSN44 configuration example (classic CLI)

Deterministic LSN44 configuration example (classic CLI) shows a case in which prefixes from multiple routing instances are mapped to the same outside pool and at the same time the prefixes from a single inside routing instance are mapped to different pools (Nokia does not support the latter with non-deterministic NAT).

Note: In the preceding example, the inside prefix 10.10.10.0/24 is present in VPRN 1 and VPRN 2. In both VPRNs, this prefix is mapped to the same pool (pool-1) with a subscriber limit of 64. Four outside IP addresses per prefix per VPRN (eight in total) are allocated to accommodate the mappings for all hosts in prefix 10.10.10.0/24. However, the hashing prefix length in VPRN 1 is based on the subscriber limit of 64 (VPRN 1 references only pool-1), while the hashing prefix length in VPRN 2 is based on the subscriber limit of 256 in pool-2. VPRN 2 references both pools, pool-1 and pool-2, and the larger subscriber limit must be selected. The consequence is that traffic from subnet 10.10.10.0/24 in VPRN 1 can be load balanced over four MS-ISAs (the hashing prefix length is 26) while traffic from the subnet 10.10.10.0/24 in VPRN 2 is always sent to the same MS-ISA (the hashing prefix length is 24).

Distribution of outside IP addresses across MS-ISAs in a NAT group

Distribution of outside IP addresses across the MS-ISAs is dependent on the ingress hashing algorithm. Because traffic from the same subscriber is always pre-hashed to the same MS-ISA, the corresponding outside IP address must also reside on that MS-ISA. The CPM runs the hashing algorithm in advance to determine on which MS-ISA the traffic from a particular inside subnet lands, and the corresponding outside IP address (according to the deterministic NAT mapping algorithm) is then configured on that MS-ISA.

Flexible number of subscribers per outside IP (single IP) address

In deterministic NAT with a single ESA-VM in a NAT group, the number of subscribers per outside IP address does not have to be restricted to 2^n. Instead, a flexible number of subscribers can be mapped to an outside IP address. This mode of operation is enabled via configuration. The default mode of operation is the one with a discrete number (2^n) of subscribers per outside IP address. To enable this feature, the NAT group must contain a single ESA-VM.

The mapping between the flexible number of subscribers and the outside IP addresses is configured using commands shown in the following examples.

Configuring a flexible number of subscribers and the outside IP addresses (MD-CLI)

[ex:/configure service vprn "demo-vprn" nat inside large-scale nat44]
A:admin@node-2# info
    deterministic {
        address-map 10.10.0.5 to 10.10.0.104 nat-policy "demo-nat-policy" {
            admin-state enable
            outside-range 192.0.2.15
        }
    }

Configuring a flexible number of subscribers and the outside IP addresses (classic CLI)

A:node-2>config>service>vprn>nat>inside# info 
----------------------------------------------
                    deterministic
                        address-map 10.10.0.5 to 10.10.0.104 subscriber-type classic-lsn-sub nat-policy "demo-nat-policy" create
                            no shutdown
                            outside-range start 192.0.2.15
                        exit
                    exit

In the following example, the inside IPv4 addresses (NAT subscribers) from 10.10.0.5 to 10.10.0.104 are sequentially mapped to the outside IP addresses in a NAT pool referenced by the NAT policy "demo-nat-policy", starting with the outside IP address 192.0.2.15.

Mapping the inside IP addresses to outside IP addresses in a NAT pool (MD-CLI)

[ex:/configure router "Base" nat outside pool "pool-1"]
A:admin@node-2# info
    type large-scale
    nat-group 1
    port-forwarding {
        range-end 5534
    }
    large-scale {
        subscriber-limit 20
        deterministic {
            port-reservation 3000
        }
    }
    address-range 192.0.2.15 end 192.0.2.19 {
    }

Mapping the inside IP addresses to outside IP addresses in a NAT pool (classic CLI)

A:node-2>config>router>nat>outside>pool# info
----------------------------------------------
                    shutdown
                    port-forwarding-range 5534
                    subscriber-limit 20
                    deterministic
                        port-reservation 3000
                    exit
                    address-range 192.0.2.15 192.0.2.19 create
                    exit
----------------------------------------------

Where:

  • the number of subscribers per outside IP address is 20

  • the size of the deterministic port block of each subscriber is 3000 ports

  • the deterministic port block allocations for each outside IP address starts at port 5535, immediately after the ports allocated for port forwards (5534)

  • the outside IP addresses in the NAT pool are 192.0.2.15 to 192.0.2.19

In the preceding examples, 100 NAT44 subscribers from the address-map are sequentially divided into sets of 20 subscribers per outside IP address. This requires five outside IP addresses, which results in the following mapping:

10.10.0.5 – 10.10.0.24 → 192.0.2.15

10.10.0.25 – 10.10.0.44 → 192.0.2.16

10.10.0.45 – 10.10.0.64 → 192.0.2.17

10.10.0.65 – 10.10.0.84 → 192.0.2.18

10.10.0.85 – 10.10.0.104 → 192.0.2.19
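
The same mapping can be reproduced with a few lines of Python. The addresses and the per-IP subscriber count are taken from the example above; the snippet is purely illustrative.

import ipaddress

first_inside = ipaddress.IPv4Address("10.10.0.5")
first_outside = ipaddress.IPv4Address("192.0.2.15")
subscribers, per_outside_ip = 100, 20

# Divide the 100 subscribers into sets of 20 per outside IP address.
for i in range(0, subscribers, per_outside_ip):
    lo, hi = first_inside + i, first_inside + i + per_outside_ip - 1
    print(f"{lo} - {hi} -> {first_outside + i // per_outside_ip}")
# 10.10.0.5 - 10.10.0.24 -> 192.0.2.15
# ...
# 10.10.0.85 - 10.10.0.104 -> 192.0.2.19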

Sharing of deterministic NAT pools

Sharing of the deterministic pools between LSN44 and DS-Lite is supported.

Simultaneous support of dynamic and deterministic NAT

Simultaneous support for deterministic and non-deterministic NAT inside of the same routing instance is supported. However, an outside pool can be only deterministic (although expandable by dynamic port blocks) or non-deterministic at any time.

Ingress hashing for all NAT’d traffic within the VRF, in this case, is performed based on the subnets driven by the classic-lsn-max-subscriber-limit parameter.

Selecting traffic for NAT

Deterministic NAT does not change the way traffic is selected for the NAT function; it only defines a predictable way for translating subscribers into outside IP addresses and port blocks.

Traffic is still diverted to NAT using the existing methods:

  • routing based

    Traffic is forwarded to the NAT function if it matches a configured destination prefix that is part of the routing table. In this case inside and outside routing context must be separated.

  • filter based

    Traffic is forwarded to the NAT function based on any criteria that can be defined inside an IP filter. In this case the inside and outside routing context can be the same.

Inverse mappings

The inverse mapping can be performed with a MIB locally on the node or externally via a script generated by the router. In both cases, the input parameters are <outside routing instance, outside IP, outside port>. The output from the mapping is the subscriber and the inside routing context in which the subscriber resides.

MIB approach

The reverse mapping information can be obtained using the following command.

tools dump nat deterministic-mapping outside-ip router outside-port
Obtaining reverse mapping information (MD-CLI)
[/tools dump nat]
A:admin@node-2# deterministic-mapping outside-ip 10.0.0.2 router "Base" outside-port 2333
Obtaining reverse mapping information (classic CLI)
A:node-2>tools>dump>nat# deterministic-mapping outside-ip 10.0.0.2 router "Base" outside-port 2333
Output example

Inside router 10 ip 10.0.5.171 -- outside router Base ip 10.0.0.2 port 2333 at Mon Jan 7 10:02:02 PST 2013

Offline approach to obtain deterministic mappings

Instead of querying the system directly, there is an option where a Python script can be generated on the router and exported to an external node. This Python script contains the mapping logic for the configured deterministic NAT in the router. The script can then be queried offline to obtain mappings in either direction. The external node must have the Python scripting language installed, with the following modules: getopt, math, os, socket, and sys.

The purpose of such an offline approach is to provide fast queries without accessing the router. Exporting the Python script for reverse querying is a manual operation that needs to be repeated every time there is a configuration change in deterministic NAT.

The script is exported from the router to a remote location (assuming that write permissions on the external node are correctly set). The remote location is specified with the following command.

configure service nat deterministic-script location remote-url

Use the following command to show the status of the script.

show service nat deterministic-script
========================================================================
Deterministic NAT script data
========================================================================
Location            : ftp://10.10.10.10/pub/det-nat-script/det-nat.py
Save needed         : yes
Last save result    : none
Last save time      : N/A
========================================================================

After the script location is specified, the script can be exported to that location, using the following command.

admin nat save-deterministic-script

This needs to be repeated manually every time the configuration affecting deterministic NAT changes. After the script is exported (saved), the status of the script is changed as well.

show service nat deterministic-script
========================================================================
Deterministic NAT script data
========================================================================
Location            : ftp://10.10.10.10/pub/det-nat-script/det-nat.py
Save needed         : no
Last save result    : success
Last save time      : 2013/01/07 10:33:43
========================================================================

The script itself can be run to obtain mappings in the forward or backward direction.

user@external-server:/home/ftp/pub/det-nat-script$ ./det-nat.py 
Usage: det-nat.py {{DIRECTION PARAMS} | -h[elp] }
where  DIRECTION := { -f[orward] | -b[ackward] }
where  PARAMS := { -s[ervice] -a[ddress] -p[ort] }

The following displays an example in which source addresses are mapped in the following manner.

Router 10, source-ip 10.0.5.0-10.0.5.127 to router Base, outside-ip 10.0.0.1
Router 10, source-ip 10.0.5.128-10.0.5.255 to router Base, outside-ip 10.0.0.2

The forward query for the preceding example is performed as follows.

user@external-server:/home/ftp/pub/det-nat-script$ ./det-nat.py -f -s 10 -a 10.0.5.10 
Output example

subscriber has public ip address 10.0.0.1 from service 0 and is using ports [1324 - 1353]

The reverse query for this example is performed as follows.

user@external-server:/home/ftp/pub/det-nat-script$ ./det-nat.py -b -s 0 -a 10.0.0.1  -p 3020
Output example

subscriber has private ip address 10.0.5.66 from service 10
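
The mapping logic behind these two queries can be sketched in Python. The parameters below (128 subscribers per outside IP address and 30-port deterministic blocks starting at port 1024) are deduced from the example output; they are assumptions for illustration, not values read from the exported script.

import ipaddress

FIRST_INSIDE = int(ipaddress.IPv4Address("10.0.5.0"))
FIRST_OUTSIDE = int(ipaddress.IPv4Address("10.0.0.1"))
SUB_LIMIT, BLOCK, BASE_PORT = 128, 30, 1024   # assumed pool parameters

def forward(inside_ip):
    # inside IP -> (outside IP, deterministic port range)
    idx = int(ipaddress.IPv4Address(inside_ip)) - FIRST_INSIDE
    outside = ipaddress.IPv4Address(FIRST_OUTSIDE + idx // SUB_LIMIT)
    start = BASE_PORT + (idx % SUB_LIMIT) * BLOCK
    return str(outside), (start, start + BLOCK - 1)

def backward(outside_ip, port):
    # (outside IP, outside port) -> inside IP
    idx = (port - BASE_PORT) // BLOCK
    ip_offset = int(ipaddress.IPv4Address(outside_ip)) - FIRST_OUTSIDE
    return str(ipaddress.IPv4Address(FIRST_INSIDE + ip_offset * SUB_LIMIT + idx))

print(forward("10.0.5.10"))         # ('10.0.0.1', (1324, 1353))
print(backward("10.0.0.1", 3020))   # 10.0.5.66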

Logging

Every configuration change concerning the deterministic pool is logged, and the script (if configured for export) is automatically updated (although not exported). This is needed to keep an up-to-date record of the deterministic mappings. In addition, every time a deterministic port block is extended by a dynamic block, the dynamic block is logged just as in non-deterministic NAT. The same logic is followed when the dynamic block is de-allocated.

All static port forwards (including PCP) are also logged.

PCP allocates static port forwards from the wildcard-port range.

Deterministic DS-Lite

A subscriber in non-deterministic DS-Lite is defined as an IPv6 prefix, with the prefix length being configured under the DS-Lite NAT node. Use the following command to configure the IPv6 prefix length of the DS-Lite subscribers:
  • MD-CLI
    configure service vprn nat inside large-scale dual-stack-lite subscriber-prefix-length
  • classic CLI
    configure service vprn nat inside dual-stack-lite subscriber-prefix-length

All incoming IPv6 traffic with source IPv6 addresses falling under a unique IPv6 prefix that is configured with the subscriber-prefix-length command is considered as a single subscriber. As a result, all source IPv4 addresses carried within that IPv6 prefix are mapped to the same outside IPv4 address.

The concept of deterministic DS-Lite is very similar to deterministic LSN44. The DS-Lite subscribers (IPv6 addresses/prefixes) are deterministically mapped to outside IPv4 addresses and corresponding deterministic port-blocks.

Although the subscriber in DS-Lite is considered to be either a B4 element (IPv6 address) or the aggregation of B4 elements (IPv6 prefix determined by the subscriber-prefix-length command), only the IPv4 source addresses and ports carried inside of the IPv6 tunnel are actually translated.

The prefix statement for deterministic DS-Lite remains under the same deterministic CLI node as for deterministic LSN44. However, the prefix statement parameters for deterministic DS-Lite differ from those for deterministic LSN44 in the following ways:

  • DS-Lite prefix is an IPv6 prefix (instead of IPv4). The DS-Lite subscriber whose traffic is mapped to a particular outside IPv4 address and the deterministic port block is deduced from the prefix statement and the subscriber-prefix-length statement.

  • In the classic CLI, the subscriber type is set to dslite-lsn-sub.

Use the following command to configure the DS-Lite LSN subscriber source:
  • MD-CLI
    configure service vprn nat inside large-scale dual-stack-lite deterministic prefix-map IPv6-prefix/length nat-policy policy-name 
    
  • classic CLI
    configure service vprn nat inside deterministic prefix-map IPv6-prefix/length subscriber-type dslite-lsn-sub nat-policy policy-name 
    

Configuring the DS-Lite LSN subscriber source (MD-CLI)

[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                dual-stack-lite {
                    max-subscriber-limit 128
                    subscriber-prefix-length 60
                    deterministic {
                        prefix-map 2001:db8::/56 nat-policy "det-policy" {
                        }
                    }
                }
            }
        }
    }

Configuring the DS-Lite LSN subscriber source (classic CLI)

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            nat
                inside
                    dslite-max-subscriber-limit 128
                    deterministic
                        prefix-map 2001:db8::/56 subscriber-type dslite-lsn-sub nat-policy "det-policy" create
                            shutdown
                        exit
                    exit
                    dual-stack-lite
                        shutdown
                        subscriber-prefix-length 60
                    exit
                exit
            exit
----------------------------------------------

In this case, 16 IPv6 prefixes (from 2001:db8::/60 to 2001:db8:00:F0::/60) are considered DS-Lite subscribers. The source IPv4 addresses and ports inside of the IPv6 tunnels are mapped into the respective deterministic port blocks within an outside IPv4 address according to the map statement.

The map statement contains minor modifications as well. It maps DS-Lite subscribers (IPv6 address or prefix) to corresponding outside IPv4 addresses, as shown in the following example.

Mapping DS-Lite subscribers to outside IP addresses (MD-CLI)

[ex:/configure service vprn "10" nat inside large-scale dual-stack-lite deterministic prefix-map 2001:db8::/56 nat-policy "policy-1"]
A:admin@node-2# info
    map 2001:db8::/60 to 2001:db8:0:f0::/60 {
        first-outside-address 192.168.1.1
    }

Mapping DS-Lite subscribers to outside IP addresses (classic CLI)

A:node-2>config>service>vprn>nat>inside>deterministic>prefix-map# info
	map start 2001:db8::/60 end 2001:db8:00:F0::/60 to 192.168.1.1

The prefix length (/60), in this case, must be the same as the configured subscriber-prefix-length. Assuming that the subscriber limit in the corresponding pool is set to 8 and the outside IP address range is 192.168.1.1 to 192.168.1.10, the actual mapping is the following.

2001:db8::/60 to 2001:db8:00:70::/60 to 192.168.1.1 
2001:db8:00:80::/60 to 2001:db8:00:F0::/60 to 192.168.1.2
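
The same DS-Lite mapping can be expressed as a short Python sketch using the standard ipaddress module; the values are from the example and the snippet is illustrative only.

import ipaddress

prefix_map = ipaddress.IPv6Network("2001:db8::/56")
subscriber_prefix_length, sub_limit = 60, 8
first_outside = ipaddress.IPv4Address("192.168.1.1")

# 16 /60 subscriber prefixes, 8 of them per outside IPv4 address.
for i, sub in enumerate(prefix_map.subnets(new_prefix=subscriber_prefix_length)):
    print(sub, "->", first_outside + i // sub_limit)
# 2001:db8::/60 ... 2001:db8:0:70::/60 -> 192.168.1.1
# 2001:db8:0:80::/60 ... 2001:db8:0:f0::/60 -> 192.168.1.2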

Hashing considerations in DS-Lite

The ingress hashing and load distribution between the ISAs in Deterministic DS-Lite is governed by the highest number of configured subscribers per outside IP address in any pool referenced within the specific inside routing context.

The following example displays the configuration of the largest value for all subscriber limits in each deterministic pool in a router or VPRN context.

Configuring the largest value for all subscriber limits (MD-CLI)
[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        inside {
            large-scale {
                dual-stack-lite {
                    max-subscriber-limit 128
                }
            }
        }
    }

[ex:/configure service vprn "1"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                dual-stack-lite {
                    max-subscriber-limit 128
                }
            }
        }
    }
Configuring the largest value for all subscriber limits (classic CLI)
A:node-2>config>router# info
----------------------------------------------
...
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
        nat
            inside
                dslite-max-subscriber-limit 128
                exit
            exit
----------------------------------------------

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            nat
                inside
                    dslite-max-subscriber-limit 128
                    exit
                exit
            exit
----------------------------------------------

While ingress hashing in non-deterministic DS-Lite is governed by the subscriber-prefix-length command, in deterministic DS-Lite the ingress hashing is governed by the combination of the configured DS-Lite maximum subscriber limit and the configured subscriber prefix length. This ensures that all DS-Lite subscribers that are mapped to a single outside IP address are always sent to the same MS-ISA (on which that outside IPv4 address resides). In essence, as soon as deterministic DS-Lite is enabled, the ingress hashing is performed on an aggregated set of 2^n contiguous subscribers, where n = log2(DS-Lite maximum subscriber limit); that is, n is the number of bits used to represent the largest number of subscribers within an inside routing context that are mapped to the same outside IP address in any pool referenced from this inside routing context (through the NAT policy).

After the deterministic DS-Lite is enabled (a prefix-map command under the deterministic CLI node is configured), the ingress hashing influenced by the DS-Lite maximum subscriber limit is in effect for both flavors of DS-Lite (deterministic and non-deterministic) within the inside routing context assuming that both flavors are configured simultaneously.

With the introduction of deterministic DS-Lite, the configuration of the subscriber-prefix-length must adhere to the following rule: the configured value for the subscriber-prefix-length minus the number of bits representing the dslite-max-subscriber-limit value must be in the range [32..64, 128]. That is:

subscriber-prefix-length – n = [32..64, 128], where n = log2(dslite-max-subscriber-limit) [or dslite-max-subscriber-limit = 2^n]

This is clarified by the following two examples:

dslite-max-subscriber-limit = 64 → n = 6 [log2(64) = 6]

This means that 64 DS-Lite subscribers are mapped to the same outside IP address. Consequently, the prefix length of those subscribers must be reduced by six bits for hashing purposes (so that chunks of 64 subscribers are always hashed to the same ISA).

According to this rule, the prefix of those subscribers (subscriber-prefix-length) can be only in the range of [38 to 64], and no longer in the range [32 to 64, 128].

dslite-max-subscriber-limit = 1 → n = 0 [log2(1) = 0]

This means that each DS-Lite subscriber is mapped to its own outside IPv4 address. Consequently, there is no need to aggregate subscribers for hashing purposes, because each DS-Lite subscriber is mapped to an entire outside IPv4 address (with all of its ports). Because the subscriber prefix length is not contracted in this case, the prefix length can be configured in the range [32 to 64, 128].

In other words, the shortest configured prefix length for the deterministic DS-Lite subscriber is 32+n, where n = log2(DS-Lite maximum subscriber limit). The subscriber prefix length can extend up to 64 bits. Beyond 64 bits, only one value is allowed: 128. In that case, n must be 0, which means that the mapping between B4 elements (IPv6 addresses) and outside IPv4 addresses is a 1:1 ratio (no sharing of outside IPv4 addresses).
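
A minimal sketch of this rule as a Python check, assuming the formula above (the function name is ours):

import math

def valid_subscriber_prefix_length(spl, dslite_max_subscriber_limit):
    n = int(math.log2(dslite_max_subscriber_limit))   # limit is 2^n
    return 32 <= spl - n <= 64 or spl - n == 128

print(valid_subscriber_prefix_length(60, 64))    # True: 60 - 6 = 54
print(valid_subscriber_prefix_length(36, 64))    # False: 36 - 6 = 30 < 32
print(valid_subscriber_prefix_length(128, 1))    # True: n = 0, 1:1 mapping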

The dependency between the subscriber definition in DS-Lite (based on the subscriber-prefix-length) and the subscriber hashing mechanism on ingress (based on the DS-Lite maximum subscriber limit value), influences the order in which deterministic DS-Lite is configured.

Order of configuration steps in deterministic DS-Lite

Configure deterministic DS-Lite in the following order:

  1. Configure DS-Lite subscriber-prefix-length.
  2. Configure the DS-Lite maximum subscriber limit.
  3. Configure deterministic prefix (using a NAT policy).
  4. Optionally, configure map statements under the prefix.
  5. Configure DS-Lite AFTR endpoints.
  6. Enable the DS-Lite node.

Modifying the DS-Lite maximum subscriber limit requires that all NAT policies be removed from the inside routing context.

To migrate a non-deterministic DS-Lite configuration to a deterministic DS-Lite configuration, the non-deterministic DS-Lite configuration must first be removed from the system. Follow these steps:

  1. Administratively disable DS-Lite node.
  2. Remove DS-Lite AFTR endpoints.
  3. Remove the global NAT policy.
  4. Configure or modify the DS-Lite subscriber-prefix-length.
  5. Configure DS-Lite maximum subscriber limit.
  6. Reconfigure the global NAT policy.
  7. Configure the deterministic prefix.
  8. Optionally, configure one or more manual map statements under the prefix.
  9. Reconfigure DS-Lite AFTR endpoints.
  10. Enable the DS-Lite node.

Configuration restrictions in deterministic NAT

  • NAT pool

    • To modify the NAT pool command options, the NAT pool must be in an administratively disabled state.

    • Administratively disabling the NAT pool by configuration is not allowed if any NAT policy referencing this pool is active. In other words, all configured prefixes referencing the pool via the NAT policy must be deleted system-wide before the pool can be administratively disabled. When the pool is enabled again, all prefixes referencing this pool (via the NAT policy) must be recreated. For a large number of prefixes, this can be performed with an offline configuration file executed using the exec command.

  • NAT policy

    • All NAT policies (deterministic and non-deterministic) in the same inside routing instance must point to the same NAT group.

    • A NAT policy (whether global or within a deterministic prefix) must be configured before an AFTR endpoint can be configured.

  • NAT group

    The active-mda-limit in a NAT group cannot be modified as long as a deterministic prefix using that NAT group exists in the configuration (even if that prefix is administratively disabled). In other words, all deterministic prefixes referencing (via the NAT policy) any pool in that NAT group must first be removed.

  • deterministic mappings (prefix and map statements)

    • A non-deterministic policy must be removed before adding deterministic mappings.

    • Modifying, adding, or deleting the prefix-map and map statements in deterministic DS-Lite requires that the corresponding NAT pool is enabled.

    • Removing an existing prefix statement requires that the prefix node is in an administratively disabled state.

    Example: Deterministic mapping configuration (MD-CLI)
    [ex:/configure service vprn "10"]
    A:admin@node-2# info
        customer "1"
        nat {
            inside {
                large-scale {
                    nat44 {
                        max-subscriber-limit 128
                        deterministic {
                            prefix-map 10.0.5.0/24 nat-policy "det" {
                                map 10.0.5.0 to 10.0.5.127 {
                                    first-outside-address 192.168.0.7
                                }
                                map 10.0.5.128 to 10.0.5.255 {
                                    first-outside-address 192.168.0.2
                                }
                            }
                        }
                    }
                }
            }
        }
    
    [ex:/configure service vprn "11"]
    A:admin@node-2# info
        customer "1"
        nat {
            inside {
                large-scale {
                    dual-stack-lite {
                        max-subscriber-limit 128
                        deterministic {
                            prefix-map 2001:db8:0:1/64 nat-policy "det" {
                                map 2001:db8::/64 to 2001:db8::FF:0:0:0:0/64 {
                                    first-outside-address 10.0.0.5
                                }
                            }
                        }
                    }
                }
            }
        }
    
    [ex:/configure service vprn "11"]
    A:admin@node-2# info
        customer "1"
        nat {
            inside {
                large-scale {
                    dual-stack-lite {
                        subscriber-prefix-length 64
                    }
                }
            }
        }
    
    Example: Deterministic mapping configuration (classic CLI)
    A:node-2>config>service>vprn# info
    ----------------------------------------------
                shutdown
                nat
                    inside
                        classic-lsn-max-subscriber-limit 128
                        deterministic
                            prefix-map 10.0.5.0/24 subscriber-type classic-lsn-sub nat-policy "det" create
                                map start 10.0.5.0 end 10.0.5.127 to 192.168.0.7
                                map start 10.0.5.128 end 10.0.5.255 to 192.168.0.2
                                shutdown
                            exit
                        exit
                    exit
                exit
    ----------------------------------------------
    
    A:node-2>config>service>vprn# info
    ----------------------------------------------
                shutdown
                nat
                    inside
                        dslite-max-subscriber-limit 128
                        deterministic
                            prefix-map 2001:db8:0:1/64 subscriber-type dslite-lsn-sub nat-policy "det" create
                                map start 2001:db8::/64 end 2001:db8::FF:0:0:0:0/64 to 10.0.0.5
                                shutdown
                            exit
                        exit
                    exit
                exit
    ----------------------------------------------
    
    A:node-2>config>service>vprn# info
    ----------------------------------------------
                shutdown
                nat
                    inside
                        dual-stack-lite
                            shutdown
                            subscriber-prefix-length 64
                        exit
                    exit
                exit
    ----------------------------------------------

    Similarly, the map statements can be added or removed only if the prefix node is in an administratively disabled state.

    There are a few rules governing the configuration of the map statement:

    • If the number of subscribers per configured prefix is greater than the subscriber limit per outside IP parameter (2^n), the lowest n bits of the map start inside IP address must be set to 0.

    • If the number of subscribers per configured prefix is equal to or less than the subscriber limit per outside IP command option (2^n), only one map command for this prefix is allowed. In this case, there is no restriction on the lower n bits of the map start inside IP address. The range of the inside IP addresses in such a map statement represents the prefix itself.

    The outside IP address in the map statements must be unique amongst all map statements referencing the same pool. In other words, two map statements cannot reference the same outside IP address in a pool.
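
    The alignment and uniqueness rules above can be illustrated with a short Python sketch; this is a hypothetical model of the documented checks, not SR OS code:

    import ipaddress

    def check_map(map_start: str, map_end: str,
                  prefix_subscribers: int, subs_per_outside_ip: int) -> None:
        """Sanity-check a deterministic map statement. subs_per_outside_ip is
        the configured subscriber limit per outside IP (2^n in the text)."""
        start = int(ipaddress.IPv4Address(map_start))
        end = int(ipaddress.IPv4Address(map_end))
        assert end >= start
        if prefix_subscribers > subs_per_outside_ip:
            # Rule 1: the lowest n bits of the map start address must be 0.
            assert start & (subs_per_outside_ip - 1) == 0, "start not 2^n-aligned"
        # Rule 2 (a single map spanning the whole prefix) and the uniqueness of
        # outside IP addresses across map statements are not modeled here.

    # From the example above: 10.0.5.0/24 holds 256 subscribers, 128 of which
    # share one outside IP, so each map start must be aligned to 128 addresses.
    check_map("10.0.5.0", "10.0.5.127", 256, 128)
    check_map("10.0.5.128", "10.0.5.255", 256, 128)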

  • configuration command options

    • The subscriber limit in a deterministic NAT pool must be a power of 2.

    • The NAT inside large scale NAT maximum subscriber limit must be a power of 2 and at least as large as the largest subscriber limit in any deterministic NAT pool referenced by this routing instance. To change this command option, all NAT policies in that inside routing instance must be removed.

    • The NAT inside DS-Lite maximum subscriber limit must be a power of 2 and at least as large as the largest subscriber limit in any deterministic NAT pool referenced by this routing instance. To change this command option, all NAT policies in that inside routing instance must be removed.

    • In DS-Lite, the [subscriber-prefix-length - log2 (DS-Lite maximum subscriber limit)] value must fall within [32 to 64, 128].

    • In DS-Lite, the subscriber-prefix-length can be only modified if the DS-Lite CLI node is in the administratively disabled state and there are no deterministic DS-Lite prefixes configured.

  • miscellaneous

    • Deterministic NAT is not supported in combination with 1:1 NAT. Therefore, the NAT pool cannot be in 1:1 mode when used as a deterministic pool. Even if each subscriber is mapped to its own unique outside IP address (subscriber limit = 1, with a deterministic port reservation of 64512 ports [65535 – 1023]), the NAPT (port translation) function is still performed.

    • Wildcard port forwards (including PCP) map to the wildcard port ranges and not the deterministic port range. Consequently, logs are generated for static port forwards using PCP.

Destination Based NAT (DNAT)

Destination NAT (DNAT) in SR OS is supported for LSN44 and Layer 2–aware NAT. DNAT can be used for traffic steering, where the destination IP address of the packet is rewritten. In this fashion, traffic can be redirected to an appliance or set of servers that are in control of the user, without the need for a separate transport service (for example, PBR plus LSP). Applications using traffic steering via DNAT normally require some form of inline traffic processing, such as inline content filtering (parental control, antivirus/spam, firewalling), video caching, and so on.

After the destination IP address of the packet is translated, traffic is naturally routed based on the destination IP address lookup. DNAT translates the destination IP address in the packet while leaving the original destination port untranslated.

Similar to source based NAT (Source Network Address and Port Translation [SNAPT]), the SR OS maintains state of DNAT translations so that the source IP address in the return (downstream) packet is translated back to the original address.

Traffic selection for DNAT processing in MS-ISA is performed via a NAT classifier.

Combination of SNAPT and DNAT

In specific cases, SNAPT is required along with DNAT; in other cases, only DNAT is required without SNAPT. Supported combinations of SNAPT and DNAT shows the combinations of SNAPT and DNAT supported in SR OS.

Table 8. Supported combinations of SNAPT and DNAT

             SNAPT    DNAT-only    SNAPT + DNAT

LSN44        X        X            X

L2-Aware     X        -            X

The SNAPT/DNAT address translations are shown in IP address/port translation modes.

Figure 20. IP address/port translation modes

Forwarding model in DNAT

NAT forwarding in SR OS is implemented in two stages:

  1. Traffic is first directed toward the MS-ISA. This is performed via a routing lookup, via a filter or via a subscriber-management lookup (Layer 2–aware NAT). DNAT does not introduce any changes to the steering logic responsible for directing traffic from the I/O line card toward the MS-ISA.

  2. When traffic reaches the MS-ISA, translation logic is performed. DNAT functionality incurs an additional lookup in the MS-ISA. This lookup is based on the protocol type and the destination port of the packets, as defined in the nat-classifier.

As part of the NAT state maintenance, the SR OS maintains the following fields for each DNATed flow:

<inside host/port, outside IP/port, foreign IP address/port, destination IP address/port, protocol (TCP, UDP, ICMP)>

The inside host in LSN is the inside IP address; in Layer 2–aware NAT, it is the <inside IP address + subscriber-index>. The subscriber index is carried in the session-id of the L2TP.

The foreign IP address represents the destination IP address in the original packet, while the destination IP address represents the DNAT address (translated destination IP address).
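
A minimal sketch of this per-flow state, with illustrative field names (this is not the internal SR OS data structure):

from dataclasses import dataclass

@dataclass(frozen=True)
class DnatFlow:
    """State kept per DNATed flow, as described above."""
    inside_host: str     # inside IP (LSN) or inside IP + subscriber index (L2-aware)
    inside_port: int
    outside_ip: str      # SNAPT result
    outside_port: int
    foreign_ip: str      # destination IP in the original upstream packet
    foreign_port: int
    dnat_ip: str         # translated destination IP (the DNAT address)
    dnat_port: int       # DNAT leaves the original destination port untranslated
    protocol: str        # "tcp", "udp", or "icmp"

# Downstream, the source IP of the return packet (dnat_ip) is translated back
# to foreign_ip, so the inside host sees the original destination address.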

DNAT traffic selection via NAT classifier

Traffic intended for DNAT processing is selected via a NAT classifier. The NAT classifier has configurable protocol and destination ports. The inclusion of the classifier in the NAT policy is the trigger for performing DNAT. The configuration of the NAT classifier determines which of the following is true:

  • A specific traffic defined in the match criteria is DNATed while the rest of the traffic is transparently passed through the NAT classifier.

  • A specific traffic defined in the match criteria is transparently passed through the NAT classifier while the rest of the traffic is DNATed.

The classifier cannot drop traffic (there is no drop action). However, a non-reachable destination IP address in DNAT causes traffic to be black-holed.
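
The classifier semantics described above can be modeled as follows. This Python sketch is only an illustration of the documented rules (match on protocol and destination port, DNAT or transparent forwarding, no drop action); the entry representation is hypothetical:

from typing import Optional

def classify_dnat(protocol: str, dst_port: int, entries: list,
                  default_dnat_ip: Optional[str]) -> Optional[str]:
    """Return the DNAT address for a packet, or None for transparent forwarding."""
    for e in entries:
        if e.get("protocol") not in (None, protocol):
            continue
        lo, hi = e.get("dst_port_range", (0, 65535))
        if not lo <= dst_port <= hi:
            continue
        if e["action"] == "dnat":
            # An explicit address in the entry overrides the default DNAT IP.
            return e.get("ip") or default_dnat_ip
        return None                  # 'forward': pass through transparently
    return None                      # no match: transparent pass-through

# Mirrors the example classifier shown below: UDP ports 1 to 1200 are DNATed.
entry = {"action": "dnat", "protocol": "udp",
         "dst_port_range": (1, 1200), "ip": "192.168.8.5"}
assert classify_dnat("udp", 80, [entry], None) == "192.168.8.5"
assert classify_dnat("udp", 5000, [entry], None) is None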

Configuring DNAT

Use the command options in the following contexts to configure DNAT:
  • MD-CLI
    configure service nat nat-policy dnat classifier
    configure service nat nat-policy dnat-only
  • classic CLI
    configure service nat nat-policy dnat nat-classifier 
    configure service nat nat-policy dnat dnat-only

The DNAT function is triggered by the presence of the NAT classifier, referenced in the NAT policy. The dnat-only command is configured in cases where SNAPT is not required. This command is necessary to determine the outside routing context and the NAT group (nat-group command), when SNAPT is not configured. The pool (relevant to SNAPT) and dnat-only command options within the NAT policy are mutually exclusive.

DNAT traffic selection and destination IP address configuration

DNAT traffic selection is performed via a NAT classifier. A NAT classifier is defined under the configure service nat context and is referenced within the nat-policy.

Configuring a NAT classifier (MD-CLI)
[ex:/configure service nat]
A:admin@node-2# info
    classifier "test-classifer" {
        entry 1 {
            action {
                destination-nat {
                    ip-address 192.168.8.5
                }
            }
            match {
                dst-port-range {
                    start 1
                    end 1200
                }
            }
        }
    }
Configuring a NAT classifier (classic CLI)
A:node-2>config>service>nat# info
----------------------------------------------
            nat-classifier "test-classifer" create
                entry 1 create
                    action dnat ip-address 192.168.0.5
                    match protocol udp
                        dst-port-range start 1 end 1200
                    exit
                exit
            exit
Information on other NAT classifier command options

The default DNAT IP address is used in all match criteria that contain DNAT action without specific destination IP address. However, the default DNAT IP address is ignored in cases where the IP address is explicitly configured as part of the action within the match criteria.

The default action is applied to all packets that do not satisfy any match criteria.

The forward (forwarding action) has no effect on the packets and transparently forwards packets through the NAT classifier.

By default, packets that do not match any matching criteria are transparently passed through the classifier.

For more information about NAT classifier commands options, see the following guides:
  • 7450 ESS, 7750 SR, 7950 XRS, and VSR MD-CLI Command Reference Guide
  • 7450 ESS, 7750 SR, 7950 XRS, and VSR Classic CLI Command Reference Guide

Micronetting original source (inside) IP space in a DNAT-only case

To forward upstream and downstream traffic for the same NAT binding to the same MS-ISA, the original source IP address space must be known in advance and, consequently, hashed on the inside ingress toward the MS-ISAs and micronetted on the outside.

The following example displays the configuration of the following:
  • the source prefix used to identify traffic for NAT processing
  • the granularity of traffic distribution in the upstream direction across the MS-ISA within the scope of an inside routing context
  • the prefix list for the application
  • the NAT prefix
Micronetting the original source IP space in a DNAT-only case (MD-CLI)
[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        inside {
            large-scale {
                traffic-identification {
                    source-prefix-only true
                }
                nat44 {
                    max-subscriber-limit 8
                    source-prefix 192.168.2.0/24 {
                        nat-policy "ls-outPolicy"
                    }
                }
            }
        }
    }

[ex:/configure service vprn "1"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                traffic-identification {
                    source-prefix-only true
                }
                nat44 {
                    source-prefix 192.168.2.0/24 {
                        nat-policy "ls-outPolicy"
                    }
                }
            }
        }
    }


[ex:/configure service nat]
A:admin@node-2# info
    prefix-list "list1" {
        application dnat-only-subscribers
        prefix 192.168.2.0/24 {
        }
    }
Micronetting the original source IP space in a DNAT-only case (classic CLI)
A:node-2>config>router# info
----------------------------------------------
...
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
        nat
            inside
                traffic-identification
                    source-prefix-only
                exit
                source-prefix 192.168.2.0/24 nat-policy "ls-outPolicy"
                classic-lsn-max-subscriber-limit 8
                dnat-only
                exit
----------------------------------------------

A:node-2>config>service>vprn$ info
----------------------------------------------
            shutdown
            nat
                inside
                    traffic-identification
                        source-prefix-only
                    exit
                    source-prefix 192.168.2.0/24 nat-policy "ls-outPolicy"
                exit
            exit
----------------------------------------------

A:node>config>service>nat# info
----------------------------------------------
...
            nat-prefix-list "list1" application dnat-only-subscribers create
                prefix 192.168.2.0/24
            exit
----------------------------------------------

In the classic CLI, the classic-lsn-max-subscriber-limit command was introduced by deterministic NAT and it is reused here.

In the MD-CLI, the max-subscriber-limit command was introduced by deterministic NAT and it is reused here.

This command, which differs between the MD-CLI and the classic CLI as referenced in the preceding information, affects the distribution of traffic across multiple MS-ISAs in the upstream direction. The hashing mechanism based on source IPv4 addresses and prefixes is used to distribute incoming traffic on the inside (private side) across the MS-ISAs. Hashing based on the entire IPv4 address produces the most granular traffic distribution, while hashing based on the IPv4 prefix (determined by the prefix length) produces less granular hashing; see the sketch after the following list. For more details about this command, see the descriptions in the following guides:
  • 7450 ESS, 7750 SR, 7950 XRS, and VSR MD-CLI Command Reference Guide
  • 7450 ESS, 7750 SR, 7950 XRS, and VSR Classic CLI Command Reference Guide
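
The granularity trade-off can be illustrated with the following sketch. The hash function and the number of MS-ISAs are hypothetical; the point shown is only the masking of the source address to its prefix before hashing:

import ipaddress

def isa_index(src_ip: str, prefix_len: int, num_isas: int) -> int:
    """Hash a source IPv4 address onto an MS-ISA using only its top
    prefix_len bits. prefix_len = 32 hashes on the full address (most
    granular); shorter prefixes send whole subnets to the same ISA."""
    addr = int(ipaddress.IPv4Address(src_ip))
    net = addr >> (32 - prefix_len)       # keep only the prefix bits
    return hash(net) % num_isas

# All hosts of 192.168.2.0/24 land on the same ISA when hashed on the /24:
assert len({isa_index(f"192.168.2.{h}", 24, 4) for h in range(256)}) == 1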

The source IP prefix is defined in the NAT prefix list and then applied under the DNAT-only node in the inside routing context. This instructs the SR OS to create micronets in the outside routing context. The number of routes installed in this fashion is limited by the following configuration.

Configuring the route limit in the DNAT-only case (MD-CLI)
[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        outside {
            dnat-only {
                route-limit 500
            }
        }
    }

[ex:/configure service vprn "1"]
A:admin@node-2# info
    customer "1"
    nat {
        outside {
            dnat-only {
                route-limit 500
            }
        }
    }
Configuring the route limit in the DNAT-only case (classic CLI)
A:node-2>config>router# info
----------------------------------------------
...
echo "NAT Configuration"
#--------------------------------------------------
        nat
            outside
                dnat-only
                    route-limit 500
                exit
            exit
        exit
----------------------------------------------

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            nat
                outside
                    dnat-only
                        route-limit 500
                    exit
                exit
            exit
----------------------------------------------

The configurable range is 1 to 128K, with a default value of 32K. The DNAT provisioning concept is shown in the following figures.

Figure 21. DNAT provisioning model (MD-CLI)
Figure 22. DNAT provisioning model (classic CLI)

LSN – multiple NAT policies per inside routing context

Restrictions

The following restrictions apply to multiple NAT policies per inside routing context:

  • There is no support for Layer 2–aware NAT.

  • DS-Lite and NAT64 diversion to NAT is supported only through IPv6 filters.

  • The default NAT policy is counted toward the limit of eight NAT policies per inside routing context.

Multiple NAT policies per inside routing context

The selection of the NAT pool and the outside routing context is performed through the NAT policy. Multiple NAT policies can be used within an inside routing context. This feature effectively allows selective mapping of the incoming traffic within an inside routing context to different NAT pools (with different mapping properties, such as port-block size, subscriber-limit per pool, address range, port-forwarding range, deterministic vs non-deterministic behavior, port-block watermarks, and so on) and to different outside routing contexts. NAT policies can be configured:

  • via filters as part of the action nat command

  • via routing with the destination-prefix command within the inside routing context

The concept of the NAT pool selection mechanism based on the destination of the traffic via routing is shown in Pool selection based on traffic destination.

Figure 23. Pool selection based on traffic destination

Diversion of the traffic to NAT based on the source of the traffic is shown in NAT pool selection based on the inside source IP address.

Only the filter-based diversion solution is supported for this case. The filter-based solution can be extended to a five tuple matching criteria.

Figure 24. NAT pool selection based on the inside source IP address

The following considerations must be taken into account when deploying multiple NAT policies per inside routing context:

  • The inside IP address can be mapped into multiple outside IP addresses based on the traffic destination. The relationship between the inside IP and the outside IP is 1:N.

  • In the case where the source IP address is selected as a matching criterion for NAT policy (or pool) selection, the inside IP address always stays mapped to the same outside IP address (the relationship between the inside IP and the outside IP address is, in this case, 1:1).

  • Static Port Forwards (SPF): each SPF can be created in only one pool. This means that the pool (or NAT policy) must be an input parameter for SPF creation.

Routing approach for NAT diversion

The routing approach relies on upstream traffic being directed (or diverted) to the NAT function based on the following commands:
  • MD-CLI
    configure service vprn nat inside large-scale nat44 destination-prefix
    configure router nat inside large-scale nat44 destination-prefix
  • classic CLI
    configure service vprn nat inside destination-prefix
    configure router nat inside destination-prefix

In other words, the upstream traffic is NAT’d only if it matches a preconfigured destination IP prefix. The destination-prefix command creates a static route in the routing table of the inside routing context. This static route diverts all traffic with the destination IP address that matches the created entry, toward the MS-ISA. The NAT function itself is performed when the traffic is in the correct context in the MS-ISA.

The following example displays the configuration of multiple NAT policies per inside routing context with routing based diversion to NAT.

Configuring multiple NAT policies per inside routing context (MD-CLI)

[ex:/configure service vprn "66"]
A:admin@node-2# info
    customer "1"
    nat {
        inside {
            large-scale {
                nat44 {
                    destination-prefix 10.20.10.0/24 {
                        nat-policy "policy-1"
                    }
                    destination-prefix 10.30.30.0/24 {
                        nat-policy "policy-1"
                    }
                    destination-prefix 10.40.40.0/24 {
                        nat-policy "policy-2"
                    }
                }
            }
        }
    }

[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        inside {
            large-scale {
                nat44 {
                    max-subscriber-limit 256
                    destination-prefix 10.20.10.0/24 {
                        nat-policy "policy-1"
                    }
                    destination-prefix 10.30.30.0/24 {
                        nat-policy "policy-1"
                    }
                    destination-prefix 10.40.40.0/24 {
                        nat-policy "policy-2"
                    }
                }
            }
        }
    }

Configuring multiple NAT policies per inside routing context (classic CLI)

A:node-2>config>service>vprn# info
----------------------------------------------
            shutdown
            nat
                inside
                    destination-prefix 10.20.10.0/24 nat-policy "policy-1"
                    destination-prefix 10.30.30.0/24 nat-policy "policy-1"
                    destination-prefix 10.40.40.0/24 nat-policy "policy-2"
                exit
            exit
----------------------------------------------

A:node-2>config>router# info
...
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
            shutdown
            nat
                inside
                    destination-prefix 10.20.10.0/24 nat-policy "policy-1"
                    destination-prefix 10.30.30.0/24 nat-policy "policy-1"
                    destination-prefix 10.40.40.0/24 nat-policy "policy-2"
                exit
            exit
----------------------------------------------

Different destination prefixes can reference a single NAT policy (policy-1 in this case).

In the case where the destination prefix does not directly reference a NAT policy, the default NAT policy is used. The default NAT policy is configured directly in the following context:
  • MD-CLI
    configure service vprn nat inside large-scale
    configure router nat inside large-scale
  • classic CLI
    configure service vprn nat inside
    configure router nat inside

After the destination-prefix command referencing the NAT policy is configured, an entry in the routing table is created that directs the traffic to the MS-ISA.
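
The selection logic described above (longest destination-prefix match, with fallback to the default NAT policy when a matching prefix does not reference one) can be sketched as follows; this is an illustrative model, not SR OS code:

import ipaddress
from typing import Optional

def select_nat_policy(dst_ip: str, destination_prefixes: dict,
                      default_policy: str) -> Optional[str]:
    """destination_prefixes maps prefix strings to a NAT policy name or None.
    Returns None when no prefix matches (traffic is not diverted to NAT)."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, policy in destination_prefixes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, policy)
    if best is None:
        return None
    return best[1] or default_policy      # fall back to the default NAT policy

prefixes = {"10.20.10.0/24": "policy-1",
            "10.30.30.0/24": "policy-1",
            "10.40.40.0/24": "policy-2"}
assert select_nat_policy("10.40.40.7", prefixes, "default-pol") == "policy-2"
assert select_nat_policy("192.0.2.1", prefixes, "default-pol") is None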

Filter-based approach

Use the options under the following context to use a filter-based approach to divert traffic to NAT based on the IP matching criteria.
configure filter ip-filter entry match
Use the following command to use the filter-based diversion in conjunction with multiple NAT policies.
configure filter ip-filter entry action nat [nat-policy nat-policy-name]

The association with the NAT policy is made after the filter is applied to the SAP.

Multiple NAT policies and deterministic NAT

Combination of deterministic LSN44, non-deterministic LSN44, and MNP

Deterministic LSN44 is supported in combination with multiple NAT policies (MNP) based on the destination prefix or on a filter term in non-deterministic LSN44. For simplicity, the destination prefix configuration is used throughout this section instead of the filter terms. However, the combination of deterministic and non-deterministic LSN44 can lead to conflicting scenarios for which the outcomes must be well defined.

The reasons for these conflicting scenarios are the following:

  • In private to public (or inside to outside) direction, deterministic NAT uses source IP addresses of the traffic as a match criterion to find the correct NAT pool and outside routing context. This is performed through configuration where each source prefix is associated with a NAT policy.

  • In contrast, non-deterministic NAT uses destination IP addresses of the traffic as a match criterion to find the correct NAT pool and outside routing context. This is performed through configuration where each destination prefix is explicitly associated with its own NAT policy.

If both NAT variants are used simultaneously in the same inside NAT routing context, the conflict resulting from different NAT policies for the same traffic must be resolved. Deterministic and non-deterministic NAT must always use different NAT policies within the same inside routing context.

The rules used to resolve this conflict are:

  • Destination prefixes are used to determine the outside routing context from their associated NAT policies. A NAT policy must be explicitly defined for each destination prefix.

  • When the outside routing context is determined, the pool selection within that routing context is selected based on the NAT policies associated with the source prefixes.

This means that the outside routing context in both NAT policies must match. If they do not match, traffic is dropped.

This logic ensures that traffic intended for NAT processing is identified based on the traffic destination (destination-prefix) while the pool selection (with outside IP addresses) is determined by the source prefix.
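
A compact model of this two-step resolution follows; the policy records and context names are hypothetical:

from typing import Optional

def resolve_mnp_deterministic(dst_policy: dict,
                              src_policy: dict) -> Optional[dict]:
    """The NAT policy matched on the destination prefix supplies the outside
    routing context; the NAT policy matched on the source prefix supplies the
    pool. If the two outside routing contexts differ, the traffic is dropped."""
    if dst_policy["outside_context"] != src_policy["outside_context"]:
        return None                       # conflicting contexts: drop
    return {"outside_context": dst_policy["outside_context"],
            "pool": src_policy["pool"]}

# Both policies point at the same outside VPRN; the pool comes from the
# source-prefix policy:
out = resolve_mnp_deterministic(
    {"outside_context": "vprn-600", "pool": "pool-a"},
    {"outside_context": "vprn-600", "pool": "pool-b"})
assert out == {"outside_context": "vprn-600", "pool": "pool-b"}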

This is useful when non-deterministic NAT and static 1:1 NAT are simultaneously used in the same inside routing context. In this case, a customer to which NAT’d traffic is destined can change and repurpose its routes that can then be dynamically advertised to the NAT user. At the same time, the NAT user who is providing static 1:1 NAT can ensure that its clients are predictably mapped to static outside IP addresses because the source prefixes and their associated NAT policies are not changed, even if the destination prefixes are changed.

An example of a supported scenario is shown in MNP and deterministic NAT44 (MD-CLI) where all destination prefixes are explicitly associated with NAT policies (no destination-prefix is using a default NAT policy) and the source prefixes are explicitly mapped to a different set of NAT policies. The outside routing contexts in the two NAT policies for traffic matching destination and source prefixes simultaneously, must match.

As shown in MNP and deterministic NAT44 (MD-CLI), traffic from the source network 10.10.0.0/24 is mapped to two different pools in two different outside VPRNs based on the pool names configured in NAT policies associated with the source prefixes. The actual outside VPRN is selected based on the NAT policy associated with the relevant destination prefix.

Figure 25. MNP and deterministic NAT44 (MD-CLI)

Multiple NAT policies with DS-Lite and NAT64

DS-Lite and NAT64 diversion to NAT with multiple NAT policies is supported only through IPv6 filters. The following example displays the configuration of NAT traffic diversion for NAT64 based on an IPv6 filter.

Configuration of NAT traffic diversion based on an IPv6 filter NAT64 (MD-CLI)

[ex:/configure filter]
A:admin@node-2# info
    ipv6-filter "1" {
        entry 1 {
            action {
                nat {
                    nat-policy "policy-1"
                    nat-type nat64
                }
            }
        }
    }

Configuration of NAT traffic diversion based on an IPv6 filter NAT64 (classic CLI)

A:node-2>config>filter# info
----------------------------------------------
        ipv6-filter 1 name "1" create
            entry 1 create
                action
                    nat nat-type nat64 nat-policy "policy-1"
                exit
            exit
        exit
----------------------------------------------

The nat-type option can be either dslite or nat64.

The DS-Lite AFTR address and the NAT64 destination prefix must be configured under the corresponding (DS-Lite or NAT64) node in the following contexts, even when only filters are used for traffic diversion to NAT.
  • MD-CLI
    configure router nat inside large-scale
    configure service vprn nat inside large-scale
  • classic CLI
    configure router nat inside
    configure service vprn nat inside

That is, every AFTR address and NAT64 prefix that is configured as a match criterion in the filter must also be configured in the preceding context. However, the opposite is not required.

IPv6 traffic with the destination address outside of the AFTR/NAT64 address/prefix follows normal IPv6 routing path within the 7750 SR.

Default NAT policy

The default nat-policy is always mandatory and must be configured under the following context:
  • MD-CLI
    configure service vprn nat inside large-scale
    configure router nat inside large-scale
  • classic CLI
    configure service vprn nat inside
    configure router nat inside

This default NAT policy can reference any configured pool in the needed ISA group. The pool referenced in the default NAT policy can be then overridden by the NAT policy associated with the destination prefix in LSN44 or by the NAT policy referenced in the IPv4 filter or IPv6-filter used for NAT diversion in LSN44, DS-Lite, NAT64.

The NAT CLI nodes fail to activate unless a valid NAT policy is referenced in the preceding context.

Scaling considerations

Each subscriber using multiple policies is counted as one subscriber for the inside resources scaling limits (such as the number of subscribers per MS-ISA), and counted as one subscriber per (subscriber and policy combination) for the outside limits (subscriber-limit subscribers per IP; port-reservation port/block reservations per subscriber).

Multiple NAT policies and SPF configuration considerations

Any Static Port Forward (SPF) can be created only in one pool. This pool, which is referenced through the NAT policy, has to be specified at the SPF creation time, either explicitly through the configuration request or implicitly via defaults.

Explicit requests are submitted either using NSP or the following CLI command:
  • MD-CLI
    tools perform nat port-forwarding-action lsn add router string [b4 ipv6 address] [aftr ipv6 address] ip IPaddress protocol keyword [port number] lifetime string [outside-ip ipv4 address] [outside-port number] [nat-policy string] [force]
    
  • classic CLI
    tools perform nat port-forwarding-action lsn create router router-instance [b4 ipv6-address] [aftr ipv6-address] ip ip-address protocol {tcp|udp} [port port] lifetime lifetime [outside-ip ipv4-address] [outside-port port] [nat-policy nat-policy-name]
In the absence of the NAT policy referenced in the SPF creation request, the default nat-policy command under the following context is used:
  • MD-CLI
    configure service vprn nat inside large-scale
    configure router nat inside large-scale
  • classic CLI
    configure service vprn nat inside
    configure router nat inside

The consequence of this is that the user must know the NAT policy in which the SPF is to be created. The SPF cannot be created via PCP outside of the pool referenced by the default NAT policy, because PCP does not provide a means to communicate the NAT policy name in the SPF creation request.

The static port forward creation and their use by the subscriber types must follow these rules:

  • default NAT policy

    Any subscriber type can use an SPF created in the pool referenced by the default NAT policy.

  • deterministic LSN44 NAT policy

    Only deterministic LSN44 subscribers matching the configured prefix can use the SPF created in the pool referenced by the deterministic LSN44 prefix NAT policy.

  • deterministic DS-Lite NAT policy

    Only deterministic DS-Lite subscribers matching the configured prefix can use the SPF created in the pool referenced by the deterministic DS-Lite prefix NAT policy.

  • LSN44 filter based NAT policy

    Only LSN44 subscribers matching the configured filter entry can use the SPF created in the pool referenced by the non-deterministic LSN44 NAT policy within the filter.

  • DS-Lite filter based NAT policy

    Only DS-Lite subscribers matching the configured filter entry can use the SPF created in the pool referenced by the DS-Lite NAT policy within the filter.

  • NAT64 filter based NAT policy

    Only NAT64 subscribers matching the configured filter entry can use the SPF created in the pool referenced by the NAT64 NAT policy within the filter.

When the last relevant policy for a specific subscriber type is removed from the virtual router, the associated port forwards are automatically deleted.

Multiple NAT policies and forwarding considerations

SPF with multiple NAT policies and Bypassing NAT policy rule describe specific scenarios that are more theoretical and are less likely to occur in reality. However, they are described here for the purpose of completeness.

SPF with multiple NAT policies represents the case where traffic from the WEB server 10.1.1.1 is initiated toward the destined network 10.11.0.0/8. Such traffic ends up translated in the Pool B and forwarded to the 10.11.0.0/8 network even though the static port forward has been created in Pool A. In this case, the NAT policy rule (dest 10.11.0.0/8 pool B) determines the pool selection in the upstream direction (even though the SPF for the WEB server already exists in the Pool A).

Figure 26. SPF with multiple NAT policies

The next example in Bypassing NAT policy rule shows a case where the Flow 1 is initiated from the outside. Because the partial mapping matching this flow already exists (created by SPF) and there is no more specific match (FQF) present, the downstream traffic is mapped according to the SPF (through Pool A to the Web server). At the same time, a more specific entry (FQF) is created (initiated by the very same outside traffic). This FQF now determines the forwarding path for all traffic originating from the inside that is matching this flow. This means that the Flow 2 (reverse of the Flow 1) is not mapped to an IP address from the pool B (as the policy dictates) but instead to the Pool A which has a more specific match.

In this case, a more specific match would be a fully qualified flow (FQF), which contains information about the foreign host: <host, inside IP/port, outside IP/port, foreign IP address/port, protocol>.

Figure 27. Bypassing NAT policy rule

NAT policy selection in non-deterministic NAT

In deterministic NAT, the NAT policy, and consequently, the NAT pool selection is based on the source IP prefix, as discussed in Deterministic NAT.

The selection of the NAT policy (and then the NAT pool) is based on the source prefix for non-deterministic Large Scale NAT (LSN44). This functionality can be also referred to as NAT traffic identification based on the source prefix.

As described in Traffic steering to NAT, traffic can be redirected to an ESA-VM for NAT processing using routing (via the destination prefix) or any criteria defined in the IPv4 filter. The NAT policy selection can be configured in the destination prefix or IPv4 filter. A default NAT policy configured in the inside routing context is applied to all traffic that does not have an explicitly configured NAT policy in the destination prefix or IPv4 filter.

The source prefix can be used for NAT policy selection as an alternative method to the destination prefix, IPv4 filter, or default NAT policy. After the traffic reaches the ESA-VM, as directed by the destination prefix or IPv4 filter, a source-prefix configuration maps traffic to the NAT policy, and by extension to the NAT pool. Traffic without an explicit mapping between the source prefix and NAT policy is dropped.

NAT policy selection via the source prefix and all other mechanisms for policy selection (such as specific NAT policies in destination prefix, filter, or a default NAT policy) are mutually exclusive. In other words, traffic steering to ESA-VMs for NAT processing is not performed based on the source prefixes, and instead, relies on existing methods (such as the destination prefix or filter). Source prefixes are only used to associate subscribers with NAT policies, and through the policies with NAT pools.

The following are disabled when the source-prefix-only command is enabled:

  • default NAT policy

  • destination prefixes with an explicit NAT policy

  • filter with an explicit NAT policy

  • NAT64

  • DS-Lite

  • destination based NAT

  • deterministic DS-lite prefixes

  • Stateful Inter-Chassis NAT Redundancy (SICR)

The following example displays a NAT configuration based on the source prefix in non-deterministic NAT.

Configuring the source prefix in non-deterministic NAT (MD-CLI)

[ex:/configure router "Base" nat]
A:admin@node-2# info
    inside {
        large-scale {
            traffic-identification {
                source-prefix-only true
            }
            nat44 {
                destination-prefix 0.0.0.0/0 {
                }
                source-prefix 10.10.10.0/24 {
                    nat-policy "nat-pol-1"
                }
                source-prefix 10.10.11.0/24 {
                    nat-policy "nat-pol-1"
                }
                source-prefix 10.10.12.0/25 {
                    nat-policy "nat-pol-2"
                }
                source-prefix 10.10.12.128/25 {
                    nat-policy "nat-pol-3"
                }
            }
        }
    }

Configuring the source prefix in non-deterministic NAT (classic CLI)

A:node-2>config>router>nat>inside# info
----------------------------------------------
            destination-prefix 0.0.0.0/0
            traffic-identification
                source-prefix-only
            exit
            source-prefix 10.10.10.0/24 nat-policy "nat-pol-1"
            source-prefix 10.10.11.0/24 nat-policy "nat-pol-1"
            source-prefix 10.10.12.0/25 nat-policy "nat-pol-2"
            source-prefix 10.10.12.128/25 nat-policy "nat-pol-3"

Configuration notes:

  • Aside from the NAT policies linked with the source prefix, other NAT policies are not allowed in this configuration, including the default NAT policy and NAT policies configured under the destination prefix or IPv4 filter.

  • The source-prefix-only command is mandatory. Only traffic identified with the source-prefix command is processed by NAT. Any other traffic that is diverted to NAT and arrives at the ESA-VM is discarded.

  • Either the destination-prefix command or an IPv4 filter must be configured; this is how traffic is steered toward the ESA-VMs.

  • This configuration allows the use of multiple source prefixes.

Default DMZ host

A default demilitarized zone (DMZ) host is a node to which all unmatched traffic from the outside can be redirected. This redirection is achieved by changing the destination IPv4 address in the traffic header to the IPv4 address of the default DMZ host. On the default DMZ host, unmatched traffic can be inspected as part of a threat analysis.

A default DMZ host does not have to be directly connected to the inside NAT segment, but can be located deeper in the network. The default DMZ host does not send any replies to the unknown traffic; therefore, no state is maintained for the unknown traffic in NAT. The rate of unmatched traffic sent to the default DMZ host can be restricted by configuration.

In the redirected traffic with swapped destination IPv4 addresses, the Layer 3 and Layer 4 (UDP and TCP) checksums are recalculated.

The following figure shows a basic default DMZ host configuration.

Figure 28. Default DMZ host

The following example shows a default DMZ host for LSN in a VPRN service configuration.

Configuring the default DMZ host for LSN in a VPRN service (MD-CLI)

[ex:/configure service vprn "77"]
A:admin@node-2# info
    customer "1"
    nat {
        outside {
            pool "demo-pool" {
                type large-scale
                nat-group 1
                large-scale {
                    default-host {
                        ip-address 10.10.10.10
                        inside-router-instance "Base"
                        rate-limit 10
                    }
                }
            }
        }
    }

Configuring the default DMZ host for LSN in a VPRN service (classic CLI)

A:node-2>config>service>vprn>nat$ info
----------------------------------------------
                outside
                    pool "demo-pool" nat-group 1 type large-scale create
                        shutdown
                        default-host 10.10.10.10 inside-router-id "Base" rate 10
                    exit
                exit
----------------------------------------------

The following example shows a Layer 2–aware NAT in a Base router configuration.

Configuring Layer 2–aware NAT in a Base router (MD-CLI)

[ex:/configure router "Base" nat]
A:admin@node-2# info
    outside {
        pool "demo-pool" {
            type l2-aware
            nat-group 1
            l2-aware {
                default-host {
                    ip-address 10.10.10.10
                    inside-router-instance "Base"
                    rate-limit 100
                }
            }
        }
    }

Configuring Layer 2–aware NAT in a Base router (classic CLI)

A:node-2>config>router>nat# info
----------------------------------------------
                pool "demo-pool" nat-group 1 type l2-aware create
                    no shutdown
                    default-host 10.10.10.10 inside-router-id "Base" rate 100
                exit
            exit
----------------------------------------------

NAT and CoA

RADIUS Change of Authorization (CoA) can be used in Enhanced Subscriber Management (ESM) to modify the NAT behavior of the subscriber. This can be performed by:

  • replacing a NAT policy in a subscriber profile for the Layer 2–aware NAT subscriber

  • replacing or removing a NAT policy within the IP filter for the ESM subscriber using LSN44, DS-Lite, or NAT64

  • modifying DNAT command options directly via CoA for the L2-Aware subscriber

CoA and NAT policies

The behavior for NAT policy changes via CoA for LSN and Layer 2–aware NAT is summarized in NAT policy changes via CoA.

Table 9. NAT policy changes via CoA

Action: CoA - replacing the NAT policy

Outcome (L2-Aware): Stale flows using the old NAT policy are cleared after 5 seconds. New flows immediately start using the new NAT policy.

Restrictions:

  • Allowed only when the previous change is completed (wait for the 5 second interval during which the stale mappings caused by the previous CoA are purged).

  • Not allowed if the L2-Aware subscriber has multiple hosts and the new prefix list contains one or more 1:1 NAT policies.

  • Not allowed if the new NAT policy references a pool in a different NAT group.

Outcome (LSN): Stale flows using the old NAT policy continue to exist and are used for traffic forwarding until they naturally time out or are terminated by TCP. The exception is when the reference to the NAT policy in the filter was the last one for the inside VRF; in this scenario, the flows from the removed NAT policy are cleared immediately. New flows immediately start using the new NAT policy.

Remarks: A NAT policy change via CoA is performed by changing the sub-profile for the ESM subscriber (Layer 2–aware NAT) or by changing the ESM subscriber filter (LSN).1

A sub-profile change alone does not trigger accounting messages in Layer 2–aware NAT and consequently the logging information is lost.

To ensure timely RADIUS logging of the NAT policy change in Layer 2–aware NAT, each CoA must, in addition to the sub-profile change, also do one of the following:

  • change the SLA profile2

  • include the Alc-Trigger-Acct-Interim VSA in the CoA messages

Both of the above events trigger an accounting update at the time when CoA is processed. This keeps NAT logging current.

1 In non-ESM environments, the NAT policy can be changed by replacing the interface filter via the CLI in the LSN case.
2 The SLA profile has to be changed and not just refreshed. In other words, replacing the existing SLA profile with the same one does not trigger a new accounting message.

CoA and DNAT

Adding, removing, or replacing DNAT options in LSN44 can be achieved through NAT policy manipulation in an IP filter for ESM subscriber. The rules for NAT policy manipulation via CoA are given in NAT policy changes via CoA. In Layer 2–aware NAT, CoA can be used to:

  • enable or disable DNAT functionality while leaving the Source Network Address and Port Translation (SNAPT) uninterrupted

  • modify the default destination IP address in DNAT

After the DNAT configuration is modified via CoA (enable or disable DNAT or change the default DNAT IP address), the existing flows affected by the change remain active for 5 more seconds while the new flows are created in accordance with the new configuration. After a 5 second timeout, the stale flows are cleared from the system.

The RADIUS attribute used to perform DNAT modifications is a composite attribute with the following format:

Alc-DNAT-Override (234) = "{<DNAT-state> | <DNAT-ip-addr>}[,<nat-policy>]"

where:

  • DNAT-state = none | disable. The DNAT-state and DNAT-ip-addr options are mutually exclusive.

  • DNAT-ip-addr is the destination IPv4 address in dotted format (a.b.c.d); providing it implicitly enables DNAT. The DNAT-ip-addr and DNAT-state options are mutually exclusive.

  • nat-policy is the NAT policy name. This option is optional; if it is not present, the default NAT policy is assumed.

For example:

Alc-DNAT-Override=none – This negates any previous DNAT-related override in the default NAT policy. Consequently, the DNAT functionality is set as originally defined in the default NAT policy. In case that the none value is received while DNAT is already enabled, a CoA ACK is sent back to the originator.

Alc-DNAT-Override = none,nat-pol-1 – This re-enables DNAT functionality in the specific NAT policy named "nat-pol-1".

Alc-DNAT-Override = none,10.1.1.1 – The DNAT-state and DNAT-ip-addr options are mutually exclusive within the same Alc-DNAT-Override attribute. Although a CoA ACK reply is returned to the RADIUS server, an error log message is generated in the SR OS indicating that the attempted override failed.

Alc-DNAT-Override = 10.1.1.1 – This changes the default DNAT IP address to 10.1.1.1 in the default NAT policy. If DNAT was disabled before receiving this CoA, it is implicitly enabled.

Alc-DNAT-Override = 10.1.1.1,nat-pol-1 – This changes the default DNAT IP address to 10.1.1.1 in the specific NAT policy named "nat-pol-1". DNAT is implicitly enabled if it was disabled before receiving this CoA.
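
The attribute format and the override outcomes listed above can be modeled with a small parser. This Python sketch is illustrative only; it mirrors the documented semantics rather than the SR OS implementation:

def parse_dnat_override(value: str) -> dict:
    """Parse an Alc-DNAT-Override value '{none|disable|a.b.c.d}[,nat-policy]'."""
    first, _, policy = value.partition(",")
    first = first.strip()
    result = {"nat_policy": policy.strip() or None}   # None -> default policy
    if first == "none":
        result["action"] = "re-enable-dnat"           # negate a previous override
    elif first == "disable":
        result["action"] = "disable-dnat"
    else:
        result["action"] = "set-default-dnat-ip"      # implicitly enables DNAT
        result["dnat_ip"] = first
    return result

assert parse_dnat_override("none")["action"] == "re-enable-dnat"
assert parse_dnat_override("10.1.1.1,nat-pol-1") == {
    "nat_policy": "nat-pol-1",
    "action": "set-default-dnat-ip",
    "dnat_ip": "10.1.1.1"}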

The combination of sub-fields with the Alc-DNAT-Override RADIUS attribute and the corresponding actions are shown in CoA and DNAT.

Table 10. CoA and DNAT

DNAT-state | DNAT-ip-addr | NAT policy | DNAT action in Layer 2–aware NAT

none | - | - | Re-enable DNAT in the default NAT policy. If DNAT was enabled before receiving this CoA, no specific action is carried out by the SR OS, other than sending the CoA ACK back to the CoA server. This negates any previous DNAT-related override in the default NAT policy; consequently, the DNAT functionality is set as originally defined in the default NAT policy. If the DNAT classifier is not present in the default NAT policy when this CoA is received, an error log message is raised.

none | - | NAT policy name | Re-enable DNAT in the referenced NAT policy. This negates any previous DNAT-related override in the referenced NAT policy; consequently, the DNAT functionality is set as originally defined in the referenced NAT policy. If the DNAT classifier is not present in the referenced NAT policy when this CoA is received, a CoA ACK reply is returned to the RADIUS server and an error log message is generated in the SR OS indicating that the attempted override has failed.

none | a.b.c.d | - | These two options are mutually exclusive in the same Alc-DNAT-Override attribute. Although a CoA ACK reply is returned to the RADIUS server, an error log message is generated in the SR OS indicating that the attempted override has failed.

none | a.b.c.d | NAT policy name | The DNAT-state and DNAT-ip-addr options are mutually exclusive in the same Alc-DNAT-Override attribute. Although a CoA ACK reply is returned to the RADIUS server, an error log message is generated in the SR OS indicating that the attempted override has failed.

disable | - | - | Disable DNAT in the default NAT policy. If the DNAT classifier is not present in the default NAT policy when this CoA is received, a CoA ACK reply is returned to the RADIUS server and an error log message is generated in the SR OS indicating that the attempted override has failed.

disable | - | NAT policy name | Disable DNAT in the referenced NAT policy. If the DNAT classifier is not present in the referenced NAT policy when this CoA is received, a CoA ACK reply is returned to the RADIUS server and an error log message is generated in the SR OS indicating that the attempted override has failed.

disable | a.b.c.d | - | The DNAT-state and DNAT-ip-addr options are mutually exclusive in the same Alc-DNAT-Override attribute. Although a CoA ACK reply is returned to the RADIUS server, an error log message is generated in the SR OS indicating that the attempted override has failed.

disable | a.b.c.d | NAT policy name | The DNAT-state and DNAT-ip-addr options are mutually exclusive in the same Alc-DNAT-Override attribute. Although a CoA ACK reply is returned to the RADIUS server, an error log message is generated in the SR OS indicating that the attempted override has failed.

- | a.b.c.d | - | The default destination IP address is changed in the default NAT policy.

- | a.b.c.d | NAT policy name | The default destination IP address is changed in the referenced NAT policy.

- | - | - or NAT policy name | A CoA NAK (error) is generated. Either the DNAT-state or the DNAT-ip-addr option must be present in the Alc-DNAT-Override attribute.

If multiple Alc-DNAT-Override attributes with conflicting actions are received in the same CoA or Access-Accept, the action that occurred last takes precedence.

For example, if the following two Alc-DNAT-Override attributes are received in the same CoA, the last one takes effect and consequently DNAT is disabled in the default NAT policy:

Alc-DNAT-Override = "10.1.1.1"

Alc-DNAT-Override = "disable"

Modifying an active NAT prefix list or NAT classifier via CLI

Modifying active NAT prefix list or NAT classifier describes the outcome when the active NAT prefix list or NAT classifier is modified using CLI.

Table 11. Modifying active NAT prefix list or NAT classifier

Action: CLI - modifying a prefix in the NAT prefix list

Outcome (L2-Aware): Existing flows are always checked for compliance with the NAT prefix list that is currently applied in the subscriber profile for the subscriber. Flows that do not comply with the current NAT prefix list are cleared after 5 seconds. New flows immediately start using the updated settings. Changing the prefix in the NAT prefix list internally re-subnets the outside IP address space.

Remarks: A NAT prefix list is used with multiple NAT policies in Layer 2–aware NAT, and for the downstream internal subnet in the DNAT-only scenario for LSN. The prefix can be modified (added, removed, remapped) at any time in the NAT prefix list. In the classic CLI, to modify the NAT policy, you must first administratively disable the NAT policy.

Action: CLI - modifying or replacing the NAT classifier

Outcome (L2-Aware): Existing flows are always checked for compliance with the NAT classifier that is currently applied in the active NAT policy for the subscriber. Flows that do not comply with the current NAT classifier are cleared after 5 seconds. New flows immediately start using the updated settings.

Outcome (LSN): Changing the NAT classifier has the same effect as in Layer 2–aware NAT; all existing flows using the NAT classifier are checked for compliance with the classifier.

Remarks: The NAT classifier is used for DNAT and is referenced in the NAT policy.

Action: CLI - removing or adding a NAT policy in the NAT prefix list

Outcome (L2-Aware): Blocked. Outcome (LSN): Not applicable.

Action: CLI - removing or adding a NAT policy in the subscriber profile

Outcome (L2-Aware): Blocked. Outcome (LSN): Not applicable.

Action: CLI - removing, adding, or replacing the NAT prefix list under the router/nat/inside/dnat-only context

Outcome (L2-Aware): Not applicable.

Outcome (LSN): This action internally re-subnets the source address space according to the new NAT prefix list. However, the current flows in the MS-ISA are not affected by this change; they are not removed if the associated prefix is removed from the prefix list.

Watermarks

Watermarks can be configured to monitor the actual usage of sessions, ports, and port blocks.

For each watermark, a high and a low value must be set. When the high threshold value is crossed in the upward direction, an event is generated (SNMP trap), notifying the user that a NAT resource may be approaching exhaustion. When the low threshold value is crossed in the downward direction, a similar event is generated (clearing the first event), notifying the user that the resource utilization has dropped below the low threshold value.

Watermarks can be defined on the NAT group, pool, and policy level.

  • NAT group

    Watermarks can be placed to monitor the total number of sessions on an MDA.

  • NAT pool on each NAT group member

    Watermarks can be placed to monitor the port and port-block occupancy in a pool within each NAT group member.

  • NAT policy

    In the policy, the user can define watermarks on session and port usage. In both cases, the usage per subscriber (for Layer 2–aware NAT) or per host (for large-scale NAT) is monitored.
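
The watermark behavior described above is a simple hysteresis around a high and a low threshold: one event when usage crosses the high mark upward, a clearing event when it falls back below the low mark. A minimal Python sketch of the idea (the threshold values and event names are illustrative, not SR OS configuration):

class Watermark:
    """Hysteresis around a high and a low threshold."""

    def __init__(self, high, low):
        assert low < high
        self.high, self.low = high, low
        self.raised = False             # True while the 'usage high' event is active

    def update(self, usage):
        if not self.raised and usage >= self.high:
            self.raised = True
            return "usage-high"         # e.g. SNMP trap with value 'true'
        if self.raised and usage <= self.low:
            self.raised = False
            return "usage-high-cleared" # clearing trap with value 'false'
        return None

wm = Watermark(high=90, low=70)         # percent of, for example, a session quota
for usage in (50, 85, 92, 95, 75, 69):
    event = wm.update(usage)
    if event:
        print(usage, event)             # 92 usage-high / 69 usage-high-cleared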

Port forwards

Port forwards allow devices on the public side of NAT (NAT outside) to initiate sessions toward those devices, usually servers, that are hidden behind NAT (NAT inside). Another term for port forwards is NAT pinhole.

A port forward represents a previously created (before any traffic is received from the inside) mapping between a TCP/UDP port on the outside IP address and a TCP/UDP port on the inside IP address assigned to a device behind the NAT. This mapping can be created statically by configuration (such as CLI, MIB, YANG, or NETCONF), or it can be created dynamically with protocols such as PCP or UPnP. Port forwards are supported only in NAT pools in Network Address and Port Translation (NAPT) mode. NAT pools in 1:1 mode do not support configured port forwards because, by default, the pools allow traffic from the outside to the inside and this cannot be disabled. Pools in 1:1 mode (whether protocol agnostic or not) do not perform port translation; therefore, the inside and outside ports always match.

In UPnP, the forwarded ports are created with the port range of the NAT subscriber, whereas with PCP and Static Port Forwards (SPF), the forwarded ports are allocated from a dedicated port range outside of the port blocks allocated to individual NAT subscribers. There are two ranges dedicated to port forwards in NAT:

  • well-known ports (1 to 1023)

    This range is always enabled and cannot be disabled in NAT pools that support configured port forwards (non 1:1 NAT pools).

  • ports from the ephemeral port range (1025 to 65535)

    Port forwards from the ephemeral port space must be explicitly enabled by configuration. They are allocated from a contiguous block of ports where upper and lower limits are defined. Ports reserved for port forwards allocated in the ephemeral port space are also referred to as wildcard ports.

Port forwarding ranges (well-known ports and wildcard ports) are shared by all NAT subscribers on a specific outside IP address. Port blocks that are individually assigned to the subscriber cannot be allocated from the port forwarding range. The wildcard port forwarding range can be configured only when the pool is administratively disabled.
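
A short sketch can summarize how the outside port space is partitioned. The well-known range is fixed; the wildcard range boundaries below are purely illustrative (in SR OS the wildcard range is configurable within 1025 to 65535):

WELL_KNOWN = range(1, 1024)             # always reserved for port forwards
WILDCARD = range(2000, 4000)            # example wildcard (ephemeral) forward range

def classify_outside_port(port):
    """Decide which allocation domain an outside port belongs to."""
    if port in WELL_KNOWN:
        return "port-forward (well-known)"
    if port in WILDCARD:
        return "port-forward (wildcard)"
    return "subscriber port block"      # dynamic blocks never overlap forward ranges

for p in (80, 3000, 10000):
    print(p, "->", classify_outside_port(p))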

See the Port Control Protocol (PCP) and Universal plug and play Internet Gateway Device service sections as well as the SR OS R24.x.Rx Software Release Notes for more information about these protocols and the supported NAT types.

Static port forwards

Use the options of the following commands to manage port forwarding for Large Scale NAT (LSN):

  • MD-CLI, NETCONF, and classic CLI

    In the MD-CLI, NETCONF, and classic CLI, use the options under the following command to manage NAT Static Port Forwards (SPFs). This command enables large-scale NAT port forwarding actions.

    tools perform nat port-forwarding-action lsn
    For the preceding tools command, if you do not explicitly configure the following optional fields, the system selects them automatically:
    • port number – number of the source port
    • outside IP – IPv4 address for the outside IP address
    • outside-port number – number of the outside port
    • NAT policy – name of the NAT policy

    If the preceding tools command is used to manage SPFs and you want to preserve the SPFs across reboots, you must use the following command to enable persistency of the SPFs. With persistency enabled, the SPF configuration is stored on the compact flash.

    configure system persistence nat-port-forwarding
  • classic CLI

    In the classic CLI, you can manage SPFs through the preceding tools command or the following configuration command. This command creates NAT static port forwards for LSN44, DS-Lite, and NAT64.

    configure service nat port-forwarding lsn

Manage the SPFs using the tools command (MD-CLI)

[/tools perform nat port-forwarding-action]
A:admin@node-2# lsn add router 100 ip 10.2.3.4 protocol udp lifetime infinite outside-port 888

[/]
*A:node-2# configure system persistence nat-port-forwarding location cf3

[/]
*A:node-2# tools dump persistence nat-port-forwarding
----------------------------------------
Persistence Info
----------------------------------------
Client : nat-fwds
File Info :
Filename : cf3:\nat_fwds.002
File State : CLOSED (Not enough space on disk)
Subsystem Info :
Nbr Of Registrations : 524288
Registrations In Use : 2
Subsystem State : NOK

Manage the SPFs using the tools command (classic CLI)

*A:node-2# tools perform nat port-forwarding-action lsn create router 100
ip 10.2.3.4 protocol udp lifetime infinite outside-port 888
*A:node-2# configure system persistence nat-port-forwarding location cf3:
*A:node-2# tools dump persistence nat-port-forwarding
----------------------------------------
Persistence Info
----------------------------------------
Client : nat-fwds
File Info :
Filename : cf3:\nat_fwds.002
File State : CLOSED (Not enough space on disk)
Subsystem Info :
Nbr Of Registrations : 524288
Registrations In Use : 2
Subsystem State : NOK

Manage the SPFs using the configuration command (classic CLI)

The following command applies only to the classic CLI.

*A:node-2>config>service>nat>fwd# lsn router 101 ip 11.11.13.7 protocol udp port 12345 outside-ip 130.0.255.254 outside-port 3171 nat-policy "pol1_for_2001-pool-0"

You can specify a force option that is applicable only to LSN pools with flexible port allocations, where the dynamic ports in the pool are allocated individually instead of in port blocks. The dynamic ports are interleaved with Static Port Forwards (SPFs), which increases the possibility of a collision between a dynamically allocated port and the requested SPF during an SPF request.

For instance, if a user requests port X on a public IP address Y, there is a chance that port X is already in use because of the dynamic allocation.

To resolve such conflicts, use the force option to ensure that the requested SPF has higher priority, allowing it to preempt an existing dynamically-allocated port. This action overwrites the previous port mapping and deletes all associated sessions.

If you omit the force option in such a scenario, the static-port allocation fails. The force option can only preempt dynamically-allocated ports and does not affect pre-existing SPFs.
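
The force semantics can be sketched as follows, assuming a simple per-outside-IP table that records which entity owns each port (the names and structure are hypothetical):

def allocate_spf(mappings, port, force=False):
    """Try to claim 'port' for a static port forward (SPF).

    mappings: dict port -> 'dynamic' or 'static' describing current owners.
    Returns True when the SPF is installed.
    """
    owner = mappings.get(port)
    if owner is None:                   # port free: plain allocation
        mappings[port] = "static"
        return True
    if owner == "dynamic" and force:    # force may preempt a dynamic port only;
        mappings[port] = "static"       # the associated sessions are deleted
        return True
    return False                        # collision with dynamic (no force) or an SPF

table = {888: "dynamic"}
print(allocate_spf(table, 888))               # False: collision, no force
print(allocate_spf(table, 888, force=True))   # True: dynamic mapping preempted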

Port Control Protocol (PCP)

PCP is a protocol that operates between subscribers and the NAT directly. This makes the protocol similar to DHCP or PPP in that the subscriber has limited but direct control over the NAT behavior.

PCP allows clients to configure static port forwards, obtain information about existing port forwards, and learn the outside IP address from software running in the home network or on the CPE.

PCP runs on each MS-ISA as its own process and makes use of the same source-IP hash algorithm as the NAT mappings themselves. The protocol itself is UDP-based and request/response in nature, in some ways similar to UPnP.

PCP operates on a specified loopback interface in a similar way to the local DHCP server. It operates over UDP on a port specified in the CLI. Because the Epoch is used to help recover mappings, a unique PCP service must be configured for each NAT group.

When the Epoch is lowered, there is no mechanism to inform the clients to refresh their mappings en masse. External synchronization of mappings is possible between two chassis (the Epoch does not need to be synchronized). If the Epoch is unsynchronized, clients re-create their mappings on the next communication with the PCP server.

     0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |  Version = 1  |R|   OpCode    |      Reserved (16 bits)       |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                      Requested Lifetime                       |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     :                                                               :
     :             (optional) opcode-specific information            :
     :                                                               :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     :                                                               :
     :             (optional) PCP Options                            :
     :                                                               :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The R bit indicates a request (0) or a response (1); because this is a request, R = 0.

OpCode identifies the requested operation; the opcodes supported by the SR OS are listed in the Creating a Mapping section (MAP4 = 1, PEER4 = 3).

Requested Lifetime: a lifetime of 0 means delete.
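
Because the request header shown in the figure is a fixed-size structure, it is straightforward to construct. A minimal Python sketch that packs the common header fields of a version 1 request (field layout only, per the figure; the opcode-specific payload is omitted):

import struct

def pcp_v1_request_header(opcode, lifetime):
    """Pack the common PCP v1 request header from the figure:
    Version(8) | R+OpCode(8) | Reserved(16) | Requested Lifetime(32)."""
    version = 1
    r_bit = 0                           # 0 = request, 1 = response
    return struct.pack("!BBHI", version,
                       (r_bit << 7) | (opcode & 0x7F),
                       0,               # reserved
                       lifetime)        # a lifetime of 0 means delete

hdr = pcp_v1_request_header(opcode=1, lifetime=3600)   # MAP4 = 1
print(hdr.hex())                        # 0101000000000e10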

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |  Version = 1  |R|   OpCode    |   Reserved    |  Result Code  |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                           Lifetime                            |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                             Epoch                             |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     :                                                               :
     :             (optional) OpCode-specific response data          :
     :                                                               :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     :             (optional) Options                                :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

As this is a response, R = 1.

The Epoch field increments by 1 every second and can be used by the client to determine if state needs to be restored. On any failure of the PCP server or the NAT with which it is associated, the Epoch must restart from zero (0).
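
A client can use the Epoch field to detect server state loss. The following simplified Python sketch illustrates the kind of validity check a client may perform (modeled loosely on the PCP specification's epoch comparison; the tolerance values are illustrative):

import time

class EpochTracker:
    """Track the server Epoch seen in PCP responses and flag state loss."""

    def __init__(self):
        self.prev_epoch = None
        self.prev_client_time = None

    def server_state_valid(self, epoch):
        now = time.monotonic()
        try:
            if self.prev_epoch is None:
                return True                     # first observation
            if epoch < self.prev_epoch:
                return False                    # epoch went backward: server reset
            client_delta = now - self.prev_client_time
            server_delta = epoch - self.prev_epoch
            # allow modest clock skew; a large mismatch implies a restart
            return abs(server_delta - client_delta) <= 2 + client_delta / 16
        finally:
            self.prev_epoch, self.prev_client_time = epoch, now

If the check fails, the client should assume that its mappings were lost and re-create them, which matches the re-creation behavior described above.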

Result Codes:

0 SUCCESS, success.

1 UNSUPP_VERSION, unsupported version.

2 MALFORMED_REQUEST, a general catch-all error.

3 UNSUPP_OPCODE, unsupported OpCode.

4 UNSUPP_OPTION, unsupported option; returned only if the option was mandatory.

5 MALFORMED_OPTION, malformed option.

6 UNSPECIFIED_ERROR, the server encountered an error.

7 MISORDERED_OPTIONS, options not in the correct order.

Creating a Mapping

Client Sends

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |  Protocol     |          Reserved (24 bits)                   |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |        Internal port          |   Suggested external port     |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     :                                                               :
     : Suggested External IP Address (32 or 128, depending on OpCode):
     :                                                               :
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

MAP4 opcode is (1). Protocols: 0 – all; 1 – ICMP; 6 – TCP; 17 – UDP.

MAP4 (1), PEER4 (3) and PREFER_FAILURE are supported. FILTER and THIRD_PARTY are not supported.

PORT_SET option

Terminology

The terms internal port and inside port are used interchangeably. They both refer to the original source port before NAT is performed.

The terms external port and outside port are used interchangeably. They both refer to the translated source port after NAT is performed.

The PCP PORT_SET option is defined in RFC 7753, Port Control Protocol (PCP) Extension for Port-Set Allocation, and is used by applications that require a consecutive block of port forwards. The reasons to provide a block of ports in a single request as opposed to multiple requests for single port in a MAP request are described in Section 2 of RFC 7753.

A PCP PORT_SET option indicates to the PCP server (SR OS) that a client needs a block of sequential port forwards. The number of requested port forwards must be greater than one (otherwise, a plain MAP opcode can be used). The ports in the block start at the Internal Port (in MAP opcode) and map to the same number of ports on the outside (or the external side), starting from either the Suggested External Port (in MAP opcode) or from an arbitrary external port. The returned number of ports may be fewer than the requested number, but the number cannot be larger. The allocated port set cannot fall outside of the range defined by the Internal Port (as requested in MAP opcode) plus the PORT_SET Size. Before a port set is assigned to a client, the SR OS always checks if any of the internal ports in the MAP request carrying the PORT_SET option has already been allocated.

The following figure shows an example of PORT_SET option format.

Figure 29. PORT_SET option format example

In the MAP request, the needed number of ports is specified in the PORT_SET Size field.

The First Internal Port is set to the same value as the Internal Port field in the MAP opcode.

In the MAP response, the PORT_SET Size represents the number of allocated ports. The First Internal Port represents the first internal port in the port set, which may be different from the Internal Port field in the MAP opcode (see the example of overlapping requests in Section 6.3 of RFC 7753). The first external port is the value returned in the Assigned External Port field in the MAP opcode.

Enabling the PORT_SET option

The port-set command enables PORT_SET option support. Instead of a client asking for each individual port in multiple MAP requests, the port-set command allows a client to ask the SR OS for a set of ports at once in a single request. The following example displays the configuration of the PORT_SET option.

MD-CLI
[ex:/configure service nat]
A:admin@node-2# info
    pcp-server-policy "test1" {
        option {
            port-set true
        }
    }
classic CLI
A:node-2>config>service>nat>pcp-server-policy$ info
    option 
        port-set

Port allocation scheme

The appropriate port set size is determined from the combination of the port set size requested in the PORT_SET option and the locally configured policy, which may limit the port set size.

If the requested port set is initially not available at the Suggested External Port, an attempt is made to find a new set of ports of the appropriate size. The appropriate PORT_SET size in this context means a combination of the requested PORT_SET size and the local limits set by the user in the SR OS node.

If the number of available consecutive ports in a set for the specified external IP address is smaller than the requested amount or the amount stated in the policy, the maximum number of available consecutive ports is allocated to the client.

In summary, the SR OS tries to find the biggest available port set (as dictated by the combination of the requested size and local policy), instead of allocating random port sets available at the suggested external port.
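
The following minimal Python sketch illustrates this strategy: the suggested external port is tried first; otherwise the largest free run of consecutive ports up to the effective size is selected (parity handling is omitted for brevity; all names are illustrative):

def allocate_port_set(free, suggested, requested, policy_limit):
    """Return a list of consecutive free external ports.

    free: set of available external ports on the outside IP address.
    The effective size combines the requested size and the local policy
    limit; fewer ports than requested may be returned, never more.
    """
    size = min(requested, policy_limit)

    def run_at(start):
        ports = []
        while start + len(ports) in free and len(ports) < size:
            ports.append(start + len(ports))
        return ports

    best = run_at(suggested)            # first try the suggested external port
    if len(best) < size:
        for start in sorted(free):      # otherwise pick the largest run found
            candidate = run_at(start)
            if len(candidate) > len(best):
                best = candidate
    return best

free = set(range(1024, 1030)) | set(range(1040, 1060))
print(allocate_port_set(free, suggested=1024, requested=10, policy_limit=8))
# -> [1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047]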

Limits and quotas

The maximum number of port forwards per subscriber can be limited. The following example displays the configuration to limit the number of port forwards per subscriber.

This limit is the total number of port forwards, regardless of the method by which the port forwards are requested (PCP MAP, PORT_SET, or static port forwards).

MD-CLI
[ex:/configure service nat nat-policy "test1"]
A:admin@node-2# info
    port-limits {
        forwarding 50
    }
classic CLI
A:node-2>config>service>nat>nat-policy$ info
----------------------------------------------
                port-limits
                    forwarding 50
                exit
----------------------------------------------

Port overlaps

PCP clients should not request overlapping ports; doing so produces an erroneous condition. If this condition occurs, the request is considered a refresh of the existing ports, as described in RFC 7753, Section 4.4.1.

Port allocation example

The following example depicts PCP port allocation with the PORT_SET option in action. For additional examples, see RFC 7753.

In this example, the sequence at the top of the PORT_SET example figure represents the state of the port forwards for an external IP address before the PCP request with the PORT_SET option is received. Port 1032 (shown in red) has already been mapped to a source port for the same client or for a different client.

A PCP client requesting an overlapping set of external ports (while internal ports are different) triggers the following action in SR OS (PCP server):

  • The SR OS checks if the existing mapping is overlapping with the one for this client. It checks to see if the occupied external port is already mapped to one of the requested internal ports from the 20000 to 20009 range for the same client.

  • If such overlap between internal and external ports is detected (for example, 20001 is mapped to 1032), this is considered a refresh, and the response is:

    opcode: 1 (MAP)
    internal port: 20001
    assigned external port: 1032
    assigned external IP address: 192.0.2.3
    

    Every overlapping pair of internal and external ports is individually acknowledged. No new mapping is allocated.

  • If there is no overlap (when the external port is mapped to a port outside of the 20000 to 20009 range), and no limit is reached, then the SR OS honors the request where it finds any set containing consecutive 10 ports.

In the PORT_SET example, there is no overlap (external port 1032 belongs to another client, or to the same client outside of the 20000 to 20009 range) and consequently a block of 10 ports (following parity) is allocated to the client.

Figure 30. PORT_SET example
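
The overlap check described above reduces to a lookup of the requested internal ports against the existing mappings of the same client. A minimal Python sketch, using the values from this example (the names are hypothetical):

def handle_map_request(existing, client, internal_ports):
    """existing: dict (client, internal_port) -> external_port.

    If any requested internal port of this client already has a mapping,
    the request is a refresh: acknowledge each such pair individually
    and allocate nothing new.
    """
    overlaps = {p: existing[(client, p)]
                for p in internal_ports if (client, p) in existing}
    if overlaps:
        return "refresh", overlaps
    return "allocate", None             # proceed with a fresh PORT_SET

existing = {("client-a", 20001): 1032}
print(handle_map_request(existing, "client-a", range(20000, 20010)))
# -> ('refresh', {20001: 1032})
print(handle_map_request(existing, "client-b", range(20000, 20010)))
# -> ('allocate', None)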

Operational considerations

Consider the following points when the PCP PORT_SET option is enabled. The points listed below are in accordance with RFC 7753:

  • The SR OS attempts to allocate the first external port from the suggested external port. If the port is not available, another port is selected.

  • Parity is honored.

  • The PORT_SET size value 0xffff in the request indicates that the client is willing to accept as many ports as the SR OS can offer.

  • If the PORT_SET request cannot be fully (or exactly) satisfied because the requested consecutive port range is unavailable, the SR OS tries to find and allocate the next largest port set.

  • If the SR OS, because of the lack of contiguous port ranges, allocates only a single port, the PORT_SET option is not present in the response.

  • If the SR OS receives a PCP MAP request, with or without a PORT_SET option that tries to map one or more internal ports or port sets belonging to already existing mappings, then the request is considered to be a refresh request. Each of the matching port or port set mappings is processed independently, as if a separate refresh request was received. Consequently, the SR OS sends a Mapping Update message for each of the mappings.

  • If multiple PORT_SET options are present in a single PCP MAP request, a MALFORMED_OPTION error is returned.

  • If the PORT_SET size is zero, a MALFORMED_OPTION error is returned.

  • If a Prefer Failure option is present, a MALFORMED_OPTION error is returned.

  • When a PCP request contains both the PORT_SET and Port Reservation Port (N/N+1) options, only the PORT_SET is honored.

  • PCP (with PORT_SET option or otherwise) should not be configured simultaneously with static port forwards in the same NAT pool. PCP allows for dynamic refreshment of port forwards while static port forwards do not have this capability. As a result, PCP port refresh of a port forward allocated statically may lead to unwanted behavior.

  • If the PORT_SET capability is added to or removed from an operationally up PCP server on an SR OS node, the server resets its Epoch time and sends a Version 2 ANNOUNCE message as described in the PCP specification.

Universal plug and play Internet Gateway Device service

Universal Plug and Play (UPnP) is a set of specifications defined by the UPnP Forum. One of these specifications, Internet Gateway Device (IGD), defines a protocol for clients to automatically configure port mappings on a NAT device. Today, many gaming, P2P, and VoIP applications support the UPnP IGD protocol. The SR OS supports the following UPnP version 1 InternetGatewayDevice version 1 features:

  • Only Layer 2–aware NAT hosts are supported.

  • Distributed subscriber management is not supported.

  • The UPnP server runs on NAT ISA and only serves the local Layer 2–aware NAT hosts on the same ISA.

  • The UPnP server can be enabled per subscriber by configuring a UPnP policy in the sub-profile with the following command.

    configure subscriber-mgmt sub-profile upnp-policy 
  • UPnP discovery is supported.

  • UPnP event notification (eventing) is not supported.

  • The following IGD devices and services are supported:

    • WANDevice

    • WANConnectionDevice

    • WANIPConnection service

  • For WANIPConnection services:

    • Optional state variables in a WANIPConnection service are not supported.

    • Optional actions in a WANIPConnection service are not supported.

    • Wildcard ExternalPort is not supported.

    • Only wildcard RemoteHost is supported.

    • Up to 64 bytes of port mapping description are supported.

    • The SR OS supports a vendor-specific action, X_ClearPortMapping, which clears all port mappings of the subscriber belonging to the requesting host. This action has no in or out arguments.

  • If the NewExternalPort in an AddPortMapping request is the same as the external port of an existing UPnP port mapping:

    • If NewInternalClient is different from the InternalClient of the existing mapping, the system rejects the request.

    • If NewInternalClient is the same as the InternalClient of the existing mapping:

      • With strict-mode on, the request is accepted if the source IP address of the request is the same as the InternalClient of the existing mapping; otherwise, the request is rejected.

      • With strict-mode off, the request is accepted.

    This decision logic is sketched in the Python example after this list.

  • The system also supports the Alc-UPnP-Sub-Override-Policy RADIUS VSA, which can be included in an Access-Accept or CoA request. It can be used to override the upnp-policy command configured in the sub-profile or to disable UPnP for the subscriber. See the RADIUS reference guide for detailed usage.
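
The AddPortMapping acceptance rules above reduce to a small decision function. A minimal Python sketch (the argument names mirror the IGD action arguments; the function itself is illustrative):

def accept_add_port_mapping(existing_internal_client, new_internal_client,
                            request_source_ip, strict_mode):
    """Decide an AddPortMapping request whose NewExternalPort collides
    with an existing UPnP mapping owned by existing_internal_client."""
    if new_internal_client != existing_internal_client:
        return False                    # different internal client: reject
    if strict_mode:
        # only the host that owns the mapping may refresh it
        return request_source_ip == existing_internal_client
    return True                         # strict-mode off: accept

print(accept_add_port_mapping("10.0.0.5", "10.0.0.5", "10.0.0.5", True))   # True
print(accept_add_port_mapping("10.0.0.5", "10.0.0.5", "10.0.0.9", True))   # False
print(accept_add_port_mapping("10.0.0.5", "10.0.0.9", "10.0.0.9", False))  # False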

Configuring UPnP IGD service

  1. Configure Layer 2–aware NAT.
  2. Create a upnp-policy.
    MD-CLI
    [ex:/configure service]
    A:admin@node-2# info
        upnp {
            policy "test" {
                port 5000
                mapping-limit 100
            }
        }
    classic CLI
    A:node-2>config>service# info
    ----------------------------------------------
            upnp
                upnp-policy "test" create
                    http-listening-port 5000
                    mapping-limit 100
                exit
            exit
  3. Associate the upnp-policy created in step 2 with the subscriber profile.
    MD-CLI
    [ex:/configure subscriber-mgmt]
    A:admin@node-2# info
        sub-profile "l2nat-upnp" {
            upnp-policy "test"
            nat {
                policy "l2"
            }
        }
    classic CLI
    A:node-2>config>subscr-mgmt# info
    ----------------------------------------------
            sub-profile "l2nat-upnp" create
                nat-policy "l2"
                upnp-policy "test"
            exit
    ----------------------------------------------

NAT Point-to-Point Tunneling Protocol (PPTP) ALG

PPTP is defined in RFC 2637, Point-to-Point Tunneling Protocol (PPTP), and is used to provide a VPN connection for home and mobile users to gain secure access to the enterprise network. The encrypted payload is transported over a GRE tunnel that is negotiated over a TCP control channel. For PPTP traffic to pass through the NAT, the NAT device must correlate the TCP control channel with the corresponding GRE tunnel. This mechanism is referred to as the PPTP ALG.

PPTP protocol

There are two components of PPTP:

  • a TCP control connection between the two endpoints

  • an IP tunnel operating between the same endpoints, which is used to transport GRE-encapsulated PPP packets for user sessions between the endpoints; PPTP uses an extended version of GRE to carry user PPP packets

The control connection is established from the PPTP clients (for example, home users behind the NAT) to the PPTP server, which is located on the outside of the NAT. Each session that carries data between the two endpoints can be referred to as a call. Multiple sessions (or calls) can carry data in a multiplexed fashion over a tunnel. The tunnel protocol is defined by a modified version of GRE. The Call ID in the GRE header is used to multiplex sessions over the tunnel. The Call ID is negotiated during the session/call establishment phase.

Supported control messages

  • control connection management

    The following messages are used to maintain the control connection:

    • Start-Control-Connection-Request

    • Start-Control-Connection-Reply

    • Stop-Control-Connection-Request

    • Stop-Control-Connection-Reply

    • Echo-Request

    • Echo-Reply

    The remaining control message types are sent over the established TCP session to open/maintain sessions and to convey information about the link state:

  • call management

    Call management messages are used to establish/terminate a session/call and to exchange information about the multiplexing field (Call-id). Call-IDs must be captured and translated by the NAT. The call management messages are:

    • Outgoing-Call-Request (contains Call ID)

    • Outgoing-Call-Reply (contains Call ID and peer’s Call-ID)

    • Call-Clear-Request (contains Call ID)

    • Call-Disconnect-Notify (contains Call ID)

  • error reporting

    This message is sent by the client to indicate WAN error conditions that occur on the interface supporting PPP.

    Wan-Error-Notify contains Call ID and Peer’s Call ID.

  • PPP session control

    This message is sent in both directions to setup PPP-negotiated options.

    Set-Link-Info contains Call ID and Peer’s Call ID.

After the Call ID is negotiated by both endpoints, it is inserted in the GRE header and used as the multiplexing field in the tunnel that carries data traffic.

GRE tunnel

A GRE tunnel is used to transport data between two PPTP endpoints. The packet transmitted over this tunnel has the general structure shown in the following figure.

Figure 31. Structure of a packet transmitted between two PPTP endpoints over a GRE tunnel

The following figure shows an example GRE header containing the Call ID of the peer for the session for which the GRE packet belongs.

Figure 32. GRE header example

PPTP ALG operation

The PPTP ALG is aware of the control session (Start-Control-Connection-Request/Reply) and consequently captures the Call ID field in all PPTP messages that carry it. In addition to translating the inside IP address and TCP port, the PPTP ALG processes data beyond the TCP header to extract the Call ID field and translate it inside the Outgoing-Call-Request messages initiated from the inside of the NAT.

The GRE packets with corresponding Call IDs are translated through the NAT as follows:

  • The inside source IP address is replaced by the outside IP address, and the reverse translation is performed for traffic in the opposite direction. This is the standard IP address translation technique. The key is to keep the outside IP address of the control packets and the corresponding data packets (GRE tunnel) the same.

  • The Call ID in GRE packets in the outside-to-inside direction is translated by the NAT according to the mappings that were created during session negotiation.

In addition, the following applies:

  • GRE packets are translated and passed through the NAT only if they can be matched to an existing PPTP call for which the mapping already exists.

  • The Call IDs advertised by the PPTP server in the Outgoing-Call-Reply control message (sent from the outside of the NAT to the inside) are not translated; they pass transparently through the NAT. There is no need to translate these Call IDs because their uniqueness between the two endpoints is guaranteed by the selection algorithm of the PPTP server. They can be thought of as destination TCP/UDP ports, which are not translated by the NAT; only the source ports are translated.

  • PPTP session initiation in the outside to inside direction through the NAT is not supported.

  • Call IDs are allocated and used in the same fashion as outside TCP/UDP ports (random with parity). They are taken from the same port range as ICMP ports.

The basic principle of PPTP NAT ALG is shown in NAT PPTP operation.

Figure 33. NAT PPTP operation

The scenario where multiple clients behind the NAT are terminated to the same PPTP server is shown in Merging of endpoints in NAT. In this case, it is possible that the source IP addresses of the two PPTP clients are mapped to the same outside address of the NAT. Because the endpoints of the GRE tunnel from the NAT to the PPTP server are the same for both PPTP clients (although their real source IP addresses are different), the NAT must ensure the uniqueness of the Call-IDs in the outbound data connection. This is where Call-ID translation in the NAT becomes crucial.

Figure 34. Merging of endpoints in NAT

Multiple sessions initiated from the same PPTP client node

The router supports a deployment scenario where multiple calls (or tunnels) are established from a single PPTP node within a single control connection. In this case, there is only one set of Start-Control-Connection-Request/Reply messages (one control channel) and multiple sets of Outgoing-Call-Request/Reply messages.

Selection of call IDs in NAT

Call IDs are taken from the same pool as the ICMP port ranges. Port ranges and Call IDs are both 16-bit values. The Call ID selection mechanism is the same as the outside TCP/UDP port selection mechanism (random with parity).
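
The random-with-parity selection can be sketched in a few lines of Python (simplified; the range boundaries below are illustrative, whereas the real allocator draws from the shared ICMP port range):

import random

def pick_call_id(inside_call_id, in_use, low=1024, high=65535):
    """Pick a free outside Call ID with the same parity (odd/even) as the
    inside Call ID, chosen at random from the allowed range."""
    parity = inside_call_id & 1
    candidates = [c for c in range(low, high + 1)
                  if (c & 1) == parity and c not in in_use]
    if not candidates:
        raise RuntimeError("Call ID range exhausted")
    return random.choice(candidates)

used = set()
outside = pick_call_id(4097, used)      # odd inside Call ID -> odd outside Call ID
used.add(outside)
print(outside, outside & 1)             # parity is preserved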

Modifying active NAT prefix list or NAT classifier via CLI

The following table describes the outcome when the active NAT prefix list or NAT classifier is modified using the CLI.

Table 12. Modifying active NAT prefix list or NAT classifier

  • Action: CLI – modifying a prefix in the NAT prefix list

    Outcome (Layer 2–aware NAT): Existing flows are always checked to determine whether they comply with the NAT prefix list that is currently applied in the sub-profile for the subscriber. If the flows do not comply with the current NAT prefix list, they are cleared after 5 seconds. New flows immediately start using the updated settings.

    Outcome (LSN): Changing the prefix in the NAT prefix list internally re-subnets the outside IP address space.

    Remarks: A NAT prefix list is used with multiple NAT policies in Layer 2–aware NAT and for the downstream internal subnet in a DNAT-only scenario for LSN. The prefix can be modified (added, removed, or remapped) at any time in the NAT prefix list. In the classic CLI, the NAT policy must first be administratively disabled.

  • Action: CLI – modifying the NAT classifier

    Outcome (Layer 2–aware NAT): Existing flows are always checked to determine whether they comply with the NAT classifier that is currently applied in the active NAT policy for the subscriber. If the flows do not comply with the current NAT classifier, they are cleared after 5 seconds. New flows immediately start using the updated settings.

    Outcome (LSN): Changing the NAT classifier has the same effect as in Layer 2–aware NAT; all existing flows using the NAT classifier are checked to see whether they comply with this classifier.

    Remarks: The NAT classifier is used for DNAT and is referenced in the NAT policy.

  • Action: CLI – removing or adding a NAT policy in the NAT prefix list

    Outcome (Layer 2–aware NAT): Blocked.

    Outcome (LSN): Not applicable.

  • Action: CLI – removing, adding, or replacing the NAT policy in the sub-profile

    Outcome (Layer 2–aware NAT): Blocked.

    Outcome (LSN): Not applicable.

  • Action: CLI – removing, adding, or replacing the NAT prefix list under rtr/nat/inside/dnat-only

    Outcome (Layer 2–aware NAT): Not applicable.

    Outcome (LSN): Internal re-subnetting occurs; there is no effect on the flows.

NAT logging

LSN logging is extremely important to service providers (SPs), which may be required by government agencies to track the source of suspicious Internet activities back to the users hidden behind the LSN device.

The 7750 SR supports several modes of logging for LSN applications. Choosing the right logging model depends on the required scale, the simplicity of deployment, and the granularity of the logged data.

For most purposes, logging the allocation and de-allocation of outside port blocks and outside IP addresses, along with the corresponding LSN subscriber and inside service ID, is sufficient.

In some cases, port-block-based logging is not satisfactory and per-flow logging is required.

Syslog, SNMP, local-file logging

The simplest form of LSN and Layer 2–aware NAT logging is via the logging facility in the 7750 SR, commonly called the logger. Each port-block allocation or de-allocation event is recorded and sent to the system logging facility (logger). Such an event can be:

  • recorded in the system memory as part of regular logs

  • written to a local file

  • sent to an external server by a syslog facility

  • sent to an SNMP trap destination

In this mode of logging, all applications in the system share the same logger.

Syslog, SNMP, and local-file logging for LSN are mutually exclusive with NAT RADIUS-based logging.

Syslog, SNMP, and local-file logging must be separately enabled for LSN and Layer 2–aware NAT. Use the options under the following context to enable syslog, SNMP, and local-file logging for LSN and Layer 2–aware NAT:

  • MD-CLI
    configure log log-events
  • classic CLI
    configure log event-control

The following output example displays relevant MIB events.

Relevant MIB events

2012 tmnxNatPlBlockAllocationLsn 
2013 tmnxNatPlBlockAllocationL2Aw

Filtering LSN events to system memory

In the following example, a single port block [1884-1888] is allocated or de-allocated for the inside IP address 10.5.5.5, which is mapped to the outside IP address 198.51.100.1. Consequently, the event is logged in memory as shown.

Event log memory output
2 2012/07/12 16:40:58.23 WEST MINOR: NAT #2012 Base NAT
"{2} Free 198.51.100.1 [1884-1888] -- vprn10 10.5.5.5 at 2012/07/12 16:40:58"

1 2012/07/12 16:39:55.15 WEST MINOR: NAT #2012 Base NAT
"{1} Map  198.51.100.1 [1884-1888] -- vprn10 10.5.5.5 at 2012/07/12 16:39:55"

When the needed LSN events are enabled for logging via the following configuration, they can be logged to memory through the standard log ID 99 or filtered with a custom log ID, as in the example that follows (log ID 5).

Enable LSN events for logging (MD-CLI)
[ex:/configure log]
A:admin@node-2# info
    log-events {
        nat event tmnxNatPlL2AwBlockUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatIsaMemberSessionUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatPlLsnMemberBlockUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatL2AwSubIcmpPortUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatL2AwSubUdpPortUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatL2AwSubTcpPortUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatL2AwSubSessionUsageHigh {
            generate false
            throttle false
        }
        nat event tmnxNatPlBlockAllocationLsn {
            generate true
        }
        nat event tmnxNatResourceProblemDetected {
            generate false
            throttle false
        }
        nat event tmnxNatResourceProblemCause {
            generate false
            throttle false
        }
        nat event tmnxNatPlLsnRedActiveChanged {
            generate false
            throttle false
        }
    }
    filter "1" {
        default-action drop
        named-entry "1" {
            action forward
            match {
                application {
                    eq nat
                }
                event {
                    eq 2012
                }
            }
        }
    }
    log-id "5" {
        filter "1"
        source {
            main true
        }
        destination {
            memory {
            }
        }
    }
Enable LSN events for logging (classic CLI)
A:node-2>config>log# info 
----------------------------------------------
        filter 1 
            default-action drop
            entry 1 
                action forward
                match
                    application eq "nat"
                    number eq 2012
                exit 
            exit 
        exit 
        event-control "nat" 2001 suppress
        event-control "nat" 2002 suppress
        event-control "nat" 2003 suppress
        event-control "nat" 2004 suppress
        event-control "nat" 2005 suppress
        event-control "nat" 2006 suppress
        event-control "nat" 2007 suppress
        event-control "nat" 2008 suppress
        event-control "nat" 2009 suppress
        event-control "nat" 2010 suppress
        event-control "nat" 2011 suppress
        event-control "nat" 2012 generate
        event-control "nat" 2014 suppress
        event-control "nat" 2015 suppress
        event-control "nat" 2017 suppress
        syslog 10
        exit 
        log-id 5 name "5" 
            filter 1 
            from main 
            to memory
        exit 
----------------------------------------------

Use the following command to display the log event information.

show log event-control "nat"  
Log events output
=======================================================================
Log Events
=======================================================================
Application
 ID#    Event Name                       P   g/s     Logged     Dropped
-----------------------------------------------------------------------
   2001 tmnxNatPlL2AwBlockUsageHigh      WA  thr          0           0
   2002 tmnxNatIsaMemberSessionUsageHigh WA  thr          0           0
   2003 tmnxNatPlLsnMemberBlockUsageHigh WA  thr          0           0
   2007 tmnxNatL2AwSubIcmpPortUsageHigh  WA  thr          0           0
   2008 tmnxNatL2AwSubUdpPortUsageHigh   WA  thr          0           0
   2009 tmnxNatL2AwSubTcpPortUsageHigh   WA  thr          0           0
   2010 tmnxNatL2AwSubSessionUsageHigh   WA  thr          0           0
   2012 tmnxNatPlBlockAllocationLsn      MI  sup          0           0
   2013 tmnxNatPlBlockAllocationL2Aw     MI  sup          0           0
   2014 tmnxNatResourceProblemDetected   MI  thr          0           0
   2015 tmnxNatResourceProblemCause      MI  thr          0           0
   2016 tmnxNatPlAddrFree                MI  sup          0           0
   2017 tmnxNatPlLsnRedActiveChanged     WA  thr          0           0
   2018 tmnxNatPcpSrvStateChanged        MI  thr          0           0
   2020 tmnxNatMdaActive                 MI  thr          0           0
   2021 tmnxNatLsnSubBlksFree            MI  sup          0           0
   2022 tmnxNatDetPlcyChanged            MI  thr          0           0
   2023 tmnxNatMdaDetectsLoadSharingErr  MI  thr          0           0
   2024 tmnxNatIsaGrpOperStateChanged    MI  thr          0           0
   2025 tmnxNatIsaGrpIsDegraded          MI  thr          0           0
   2026 tmnxNatLsnSubIcmpPortUsgHigh     WA  thr          0           0
   2027 tmnxNatLsnSubUdpPortUsgHigh      WA  thr          0           0
   2028 tmnxNatLsnSubTcpPortUsgHigh      WA  thr          0           0
   2029 tmnxNatLsnSubSessionUsgHigh      WA  thr          0           0
   2030 tmnxNatInAddrPrefixBlksFree      MI  sup          0           0
   2031 tmnxNatFwd2EntryAdded            MI  sup          0           0
   2032 tmnxNatDetPlcyOperStateChanged   MI  thr          0           0
   2033 tmnxNatDetMapOperStateChanged    MI  thr          0           0
   2034 tmnxNatFwd2OperStateChanged      WA  thr          0           0
=======================================================================

The event description is shown in the MIB information that follows.

Event description output
tmnxNatPlL2AwBlockUsageHigh 
        The tmnxNatPlL2AwBlockUsageHigh notification is sent when 
         the block usage of a Layer-2-Aware NAT address pool 
         reaches its high watermark ('true')
         or when it reaches its low watermark again ('false').

tmnxNatIsaMemberSessionUsageHigh 
        The tmnxNatIsaMemberSessionUsageHigh notification is sent when 
         the session usage of a NAT ISA group member reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatPlLsnMemberBlockUsageHigh 
        The tmnxNatPlLsnMemberBlockUsageHigh notification is sent when 
         the block usage of a Large Scale NAT address pool 
         reaches its high watermark ('true')
         or when it reaches its low watermark again ('false')
         on a particular member MDA of its ISA group.
   
tmnxNatLsnSubIcmpPortUsageHigh 
        The tmnxNatLsnSubIcmpPortUsageHigh notification is sent when 
         the ICMP port usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatLsnSubUdpPortUsageHigh 
        The tmnxNatLsnSubUdpPortUsageHigh notification is sent when 
         the UDP port usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatLsnSubTcpPortUsageHigh 
        The tmnxNatLsnSubTcpPortUsageHigh notification is sent when 
         the TCP port usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatL2AwSubIcmpPortUsageHigh 
        The tmnxNatL2AwSubIcmpPortUsageHigh notification is sent when 
         the ICMP port usage of a Layer-2-Aware NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatL2AwSubUdpPortUsageHigh 
        The tmnxNatL2AwSubUdpPortUsageHigh notification is sent when 
         the UDP port usage of a Layer-2-Aware NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatL2AwSubTcpPortUsageHigh 
        The tmnxNatL2AwSubTcpPortUsageHigh notification is sent when 
         the TCP port usage of a Layer-2-Aware NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatL2AwSubSessionUsageHigh 
        The tmnxNatL2AwSubSessionUsageHigh notification is sent when 
         the session usage of a Layer-2-Aware NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatLsnSubSessionUsageHigh 
        The tmnxNatLsnSubSessionUsageHigh notification is sent when 
         the session usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').

tmnxNatPlBlockAllocationLsn 
        The tmnxNatPlBlockAllocationLsn notification is sent when 
         an outside IP address and a range of ports is allocated to 
         a NAT subscriber associated with a Large Scale NAT (LSN) pool, 
         and when this allocation expires.

tmnxNatPlBlockAllocationL2Aw 
        The tmnxNatPlBlockAllocationL2Aw notification is sent when 
         an outside IP address and a range of ports is allocated to 
         a NAT subscriber associated with a Layer-2-Aware NAT pool, 
         and when this allocation expires.

tmnxNatResourceProblemDetected 
        The tmnxNatResourceProblemDetected notification is sent when 
         the value of the object tmnxNatResourceProblem changes.

tmnxNatResourceProblemCause 
        The tmnxNatResourceProblemCause notification is to describe the cause
         of a NAT resource problem.

tmnxNatPlAddrFree 
        The tmnxNatPlAddrFree notification is sent when 
         a range of outside IP addresses becomes free at once.

tmnxNatPlLsnRedActiveChanged 
       The tmnxNatPlLsnRedActiveChanged notification is related to NAT Redundancy
       sent when the value of the object tmnxNatPlLsnRedActive changes. The cause is
       explained in the tmnxNatNotifyDescription which is a printable character
       string.

tmnxNatMdaActive
        The tmnxNatMdaActive notification is sent when 
         the value of the object tmnxNatIsaMdaStatOperState changes from
         'primary' to any other value, or the other way around.
         The value 'primary' means that the MDA is active in the group.

tmnxNatLsnSubBlksFree 
        The tmnxNatLsnSubBlksFree notification is sent when 
         all port blocks allocated to a Large Scale NAT (LSN) subscriber
         are released.
         
         The NAT subscriber is identified with its subscriber ID 
         tmnxNatNotifyLsnSubId.
         
         To further facilitate the identification of the NAT subscriber,
         its type tmnxNatNotifySubscriberType, 
         inside IP address tmnxNatNotifyInsideAddr
         and inside virtual router instance tmnxNatNotifyInsideVRtrID
         are provided.
         
         The values of tmnxNatNotifyMdaChassisIndex, tmnxNatNotifyMdaCardSlotNum
         and tmnxNatNotifyMdaSlotNum identify the ISA MDA where the blocks were
         processed.
         
         All notifications of this type are sequentially numbered with
         the tmnxNatNotifyPlSeqNum.
         
         The value of tmnxNatNotifyNumber is the numerical identifier of the
         NAT policy used for this allocation; it can be used for correlation
         with the tmnxNatPlBlockAllocationLsn notification; the value zero
         means that this notification can be correlated with all the
         tmnxNatPlBlockAllocationLsn notifications of the subscriber.

tmnxNatDetPlcyChanged
        The tmnxNatDetPlcyChanged notification is sent when 
         something changed in the Deterministic NAT map.
         
         [CAUSE] Such a change may be caused by a modification of the 
        tmnxNatDetPlcyTable or the tmnxNatDetMapTable.
        
     [EFFECT] Traffic flows of one or more given subscribers, subject to NAT, may be
        assigned different outside IP address and/or outside port.
        
        [RECOVERY] Managers that rely on the offline representation of the
        Deterministic NAT map should get an updated copy.

tmnxNatMdaDetectsLoadSharingErr 
        The tmnxNatMdaDetectsLoadSharingErr notification is sent  
         periodically at most every 10 seconds while a NAT ISA MDA
         detects that it is receiving packets erroneously, due to
         incorrect load-balancing by the ingress IOM.
         
         The value of tmnxNatNotifyCounter is the incremental count of
         dropped packets since the previous notification sent by the same MDA.
         
         [CAUSE] The ingress IOM hardware does not support a particular
         NAT function's load-balancing, for example an IOM-2 does not 
         support deterministic NAT.
         
         [EFFECT] The MDA drops all incorrectly load-balanced traffic.
         
         [RECOVERY] Upgrade the ingress IOM, or change the configuration. 

tmnxNatIsaGrpOperStateChanged
        The tmnxNatIsaGrpOperStateChanged notification is sent when 
         the value of the object tmnxNatIsaGrpOperState changes.

tmnxNatIsaGrpIsDegraded
        The tmnxNatIsaGrpIsDegraded notification is sent when 
         the value of the object tmnxNatIsaGrpDegraded changes.

tmnxNatLsnSubIcmpPortUsgHigh 
      The tmnxNatLsnSubIcmpPortUsgHigh notification is sent when
      the ICMP port usage of a Large Scale NAT subscriber reaches its high watermark
      ('true') or when it reaches its low watermark again ('false').
         
      The subscriber is identified with its inside IP address or prefix
      tmnxNatNotifyInsideAddr in the inside virtual router instance
      tmnxNatNotifyInsideVRtrID.

tmnxNatLsnSubUdpPortUsgHigh
        The tmnxNatLsnSubUdpPortUsgHigh notification is sent when 
         the UDP port usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').
         
         The subscriber is identified with its inside IP address or prefix
         tmnxNatNotifyInsideAddr in the inside virtual router instance
         tmnxNatNotifyInsideVRtrID.

tmnxNatLsnSubTcpPortUsgHigh 
        The tmnxNatLsnSubTcpPortUsgHigh notification is sent when 
         the TCP port usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').
         
         The subscriber is identified with its inside IP address or prefix
          tmnxNatNotifyInsideAddr in the inside virtual router instance
          tmnxNatNotifyInsideVRtrID.

tmnxNatLsnSubSessionUsgHigh
        The tmnxNatLsnSubSessionUsgHigh notification is sent when 
         the session usage of a Large Scale NAT subscriber reaches its high 
         watermark ('true') or when it reaches its low watermark 
         again ('false').
         
         The subscriber is identified with its inside IP address or prefix
         tmnxNatNotifyInsideAddr
         in the inside virtual router instance tmnxNatNotifyInsideVRtrID.

tmnxNatInAddrPrefixBlksFree 
        The tmnxNatInAddrPrefixBlksFree notification is sent when 
        all port blocks allocated to one or more subscribers
        associated with a particular set of inside addresses 
        are released by this system.
        
        The type of subscriber(s) is indicated by tmnxNatNotifySubscriberType.
        
        The set of inside IP addresses is associated with the virtual
        router instance indicated by tmnxNatNotifyInsideVRtrID and is of the
         type indicated by tmnxNatNotifyInsideAddrType
        
        The set of inside IP addresses consists of the address prefix
        indicated with tmnxNatNotifyInsideAddr and tmnxNatNotifyInsideAddrPrefixLen
        unless these objects are empty and zero; if tmnxNatNotifyInsideAddr is empty
        and tmnxNatNotifyInsideAddrPrefixLen is zero, the set contains 
        all IP addresses of the indicated type.
        
        The values of tmnxNatNotifyMdaChassisIndex, tmnxNatNotifyMdaCardSlotNum
        and tmnxNatNotifyMdaSlotNum identify the ISA MDA where the blocks were 
        processed.
        
        All notifications of this type are sequentially numbered with
        the tmnxNatNotifyPlSeqNum.
        
        This type of notification is typically the consequence of one or
        more configuration changes; the nature of these changes is indicated
        in the tmnxNatNotifyDescription.

tmnxNatFwd2EntryAdded 
        [CAUSE] The tmnxNatFwd2EntryAdded notification is sent when 
         a row is added to or removed from the tmnxNatFwd2Table by other means
         than operations on the tmnxNatFwdAction; 
         a conceptual row can be added to or removed from the table by operations on
         the tmnxNatFwdAction
         object group or otherwise, by means of the PCP protocol
         or automatically by the system, for example when a subscriber profile is
         changed.
         When the row is added, the value of the object 
         tmnxNatNotifyTruthValue is 'true'; when the row is removed,
         it is 'false'.
         
         [EFFECT] The specified NAT subscriber can start receiving inbound
         traffic flows.
         [RECOVERY] No recovery required; this notification is the result 
         of an operator or protocol action. 

tmnxNatDetPlcyOperStateChanged
        [CAUSE] The tmnxNatDetPlcyOperStateChanged notification is sent when
         the value of the object tmnxNatDetPlcyOperState changes. The cause is
         explained in the tmnxNatNotifyDescription.

tmnxNatDetMapOperStateChanged
        [CAUSE] The tmnxNatDetMapOperStateChanged notification is sent when 
         the value of the object tmnxNatDetMapOperState changes. The cause is
         explained in the tmnxNatNotifyDescription.

tmnxNatFwd2OperStateChanged
      [CAUSE] The tmnxNatFwd2OperStateChanged notification is sent when 
      the value of the object tmnxNatFwd2OperState changes. This
      is related to the state of the ISA MDA where the forwarding entry
      is located, or the availability of resources on that MDA.
      
      In the case of Layer-2-Aware NAT subscribers, the tmnxNatFwd2OperState
      is 'down' while the subscriber is not instantiated. This would typically
      be a transient situation.
         
      [EFFECT] The corresponding inward bound packets are dropped while the
      operational status is 'down'.
         
      [RECOVERY] If the ISA MDA reboots successfully, or another ISA MDA takes over,
      no recovery is required. If more resources become available on the ISA MDA, no
         recovery is required.

NAT logging to a local file

The following example displays NAT logging to a local file instead of memory.

Enable NAT logging to a local file (MD-CLI)
[ex:/configure log]
A:admin@node-2# info
...
file "5" {
        description "nat logging"
        rollover 15
        retention 12
        compact-flash-location {
            primary cf1
        }
    }
    log-id "5" {
        filter "1"
        source {
            main true
        }
        destination {
            file "5"
        }
    }
Enable NAT logging to a local file (classic CLI)
A:node-2>config>log# info  
---------------------------------------------- 
        file-id 5  
            description "nat logging" 
            location cf1:  
            rollover 15 retention 12  
        exit  
         
        log-id 5  
            filter 1  
            from main  
            to file 5 
        exit  

The events are logged to a file under the /log directory on the compact flash (CF) cf1.

Note: Logging to the CF represents a single point of failure. Performance (logs per second) of logging onto the CF is limited in comparison to other logging methods (RADIUS, Syslog, and IPFIX). Failure to generate logs because of a failed CF or performance limitation results in dropped NAT traffic. For this reason, local NAT logging in the SR OS is recommended only in a lab environment.

SNMP trap logging

In the case of SNMP logging to a remote node, set the log destination to the SNMP destination. The allocation or de-allocation of each port block triggers the sending of an SNMP trap message to the trap destination.

Configure SNMP trap logging (MD-CLI)

[ex:/configure log]
A:admin@node-2# info
...
    filter "1" {
        default-action drop
        named-entry "1" {
            action forward
            match {
                application {
                    eq nat
                }
                event {
                    eq 2012
                }
            }
        }
    }
    log-id "6" {
        filter "1"
        source {
            main true
        }
        destination {
            snmp {
            }
        }
    }
    snmp-trap-group "6" {
        trap-target "nat" {
            address 192.168.1.10
            port 9001
            version snmpv2c
            notify-community "private"
        }
    }

Configure SNMP trap logging (classic CLI)

A:node-2>config>log# info 
----------------------------------------------
       filter 1 
            default-action drop
            entry 1 
                action forward
                match
                    application eq "nat"
                    number eq 2012
                exit 
            exit 
        exit 
        
        snmp-trap-group 6
            trap-target "nat" address 192.168.1.10 port 9001 snmpv2c notify-community "private"
        exit 
        log-id 6 
            filter 1 
            from main 
            to snmp
        exit 

The following figure shows an SNMP trap message.

Figure 35. SNMP trap message

NAT syslog

The following example shows NAT logs configured to be sent to a remote syslog facility. A separate syslog message is generated for every port-block allocation or de-allocation.

Configure the sending of NAT logs to a syslog remote facility (MD-CLI)

[ex:/configure log]
A:admin@node-2# info
...
    filter "1" {
        default-action drop
        named-entry "1" {
            action forward
            match {
                application {
                    eq nat
                }
                event {
                    eq 2012
                }
            }
        }
    }
    log-id "7" {
        filter "1"
        source {
            main true
        }
        destination {
            syslog "7"
        }
    }
    syslog "7" {
        address 192.168.1.10
    }

Configure the sending of NAT logs to a syslog remote facility (classic CLI)

A:node-2>config>log# info 
----------------------------------------------
...
        filter 1 
            default-action drop
            entry 1 name "1"
                action forward
                match
                    application eq "nat"
                    number eq 2012
                exit 
            exit 
        exit 
        syslog 7
            address 192.168.1.10
        exit 
        
        
         log-id 7 name "7"
            filter 1 
            from main 
            to syslog 7
            no shutdown
        exit 
----------------------------------------------

The following figure displays a syslog message.

Figure 36. Syslog message
The following example shows how to change the severity level for this event. Select from the following options:
  • cleared
  • indeterminate
  • critical
  • major
  • minor
  • warning

Change the event severity level (MD-CLI)

*[ex:/configure]
A:admin@node-2# log log-events nat event * severity major

Change the event severity level (classic CLI)

*A:node-2# configure log event-control "nat" 2012 generate major

LSN RADIUS logging

LSN RADIUS logging (or accounting) is based on the RADIUS accounting messages defined in RFC 2866 and requires the user to have a RADIUS accounting infrastructure in place. For that reason, the terms LSN RADIUS logging and LSN RADIUS accounting can be used interchangeably.

This mode of logging operation is introduced so that the shared logging infrastructure in 7750 SR can be offloaded by disabling syslog/SNMP/local-file LSN logging. The result is increased performance and higher scale, particularly in cases when multiple BB-ISA cards within the same system are deployed to perform aggregated LSN functions.

An additional benefit of LSN RADIUS logging over syslog/SNMP/local-file logging is reliable transport. Although RADIUS accounting relies on unreliable UDP transport, each accounting message from the RADIUS client must be acknowledged on the application level by the receiving end (accounting server).

Each port-block allocation or deallocation is reported to an external accounting (logging) server in the form of START, INTERIM-UPDATE, or STOP messages. The type of accounting messages generated depends on the mode of operation. The modes of operation are as follows:

  • START and STOP per port-block

    An accounting START is generated when a new port-block for the LSN subscriber is allocated. Similarly, the accounting STOP is generated when the port-block is released. Each accounting START and STOP pair of messages that are triggered by port-block allocation or deallocation within the same subscriber have the same Acct-Multi-Session-Id (subscriber significant) but a different Acct-Session-Id (port-block significant). This mode of operation is enabled by the inclusion of Acct-Multi-Session-Id within the NAT accounting policy.

  • START and STOP per subscriber

    An accounting START is generated when the first port block for the NAT subscriber is allocated. Each consecutive port-block allocation or deallocation triggers an INTERIM-UPDATE message with the same Acct-Session-Id (subscriber significant). The termination cause attribute in accounting STOP messages indicates the reason for port-block deallocation. Deallocation of the last port block for the LSN subscriber triggers an accounting STOP message. There is no Acct-Multi-Session-Id present in this mode of operation.

The accounting messages are generated and reported directly from the BB-ISA card, therefore bypassing accounting infrastructure residing on the Control Plane Module (CPM).

LSN RADIUS logging is enabled per NAT group. To achieve the required scale, each BB-ISA card in the NAT group with LSN RADIUS logging enabled runs a RADIUS client with its own unique source IP address. Accounting messages can be distributed to up to five accounting servers that can be accessed in round-robin fashion. Alternatively, in direct access mode, only one accounting server in the list is used. When this server fails, the next one in the list is used.

Perform the following steps to enable LSN RADIUS logging:

  1. Configure the LSN RADIUS policy. The policy defines the following:

    • accounting destination

    • inclusion of RADIUS attributes that are sent in accounting messages to the destination

    • source IP addresses per BB-ISA card (RADIUS client) in the NAT group

    Note: The accounting policy applies to both LSN and WLAN-GW. Some attributes are only applicable to NAT, some are only applicable to WLAN-GW, and some are applicable to both.
  2. Apply this policy to the NAT group. This automatically enables RADIUS accounting on every BB-ISA card in the group, provided that each BB-ISA card has an IP address.

Configure the LSN RADIUS accounting policy (MD-CLI)

[ex:/configure aaa radius isa-policy "1"]
A:admin@node-2# info detail
 ## apply-groups
 ## apply-groups-exclude
    description "RADIUS accounting policy for NAT"
 ## password
    nas-ip-address-origin system-ip
 ## python-policy
    accounting {
        include-attributes {
            acct-delay-time false
            acct-triggered-reason false
            called-station-id true
            calling-station-id false
            circuit-id false
            class false
            dhcp-options false
            dhcp-vendor-class-id false
            frame-counters true
            framed-ip-address true
            framed-ip-netmask false
            framed-ipv6-prefix false
            hardware-timestamp true
            ipv6-address false
            mac-address false
            multi-session-id true
            nas-identifier true
            nas-ip-address false
            nas-ipv6-address false
            nas-port false
            nas-port-id false
            nas-port-type false
            nat-inside-service-id true
            nat-outside-ip-address true
            nat-outside-service-id true
            nat-port-range-block true
            nat-subscriber-string false
            octet-counters true
            proxied-subscriber-data false
            release-reason true
            remote-id false
            rssi false
            session-time true
            subscriber-id false
            toserver-dhcp6-options false
            ue-creation-type false
            user-name true
            wlan-ssid-vlan false
            xconnect-tunnel-local-ipv6-address false
            xconnect-tunnel-remote-ipv6-address false
            xconnect-tunnel-service false
            xconnect-tunnel-type false
            xconnect-tunnel-home-address false
            millisecond-event-timestamp false
            credit-control-quota false
        }
    }

    ...

    servers {
        source-address-range 192.168.1.20
        timeout 5
        total-tries 3
        router-instance "Base"
        access-algorithm direct
        ipv6 {
            mtu 9000
         ## source-prefix
        }
        server 1 {
         ## apply-groups
         ## apply-groups-exclude
            admin-state disable
            ip-address 192.168.1.10
            secret "ZVo7IYMjSxbdW1ocPvLeFh5a8Xa1DVLOc2uzKVmGRnIKJo37JJjKleoTXIPkD7hQljmD3aC8ZdQOSlw=" hash2
            purpose {
                accounting {
                    udp-port 1813
                }
             ## authentication
             ## coa
            }
        }
    }

Configure the LSN RADIUS accounting policy (classic CLI)

A:node-2>config>aaa>isa-radius-plcy$ info detail
----------------------------------------------
            description "RADIUS accounting policy for NAT"
            nas-ip-address-origin system-ip
            no password
            no periodic-update
            user-name-format mac mac-format alu
            acct-include-attributes
                no acct-delay-time
                no acct-trigger-reason
                called-station-id
                no calling-station-id
                no circuit-id
                no class
                no credit-control-quota
                no dhcp-options
                no dhcp-vendor-class-id
                no dhcp6-options
                frame-counters
                framed-ip-addr
                no framed-ip-netmask
                no framed-ipv6-prefix
                hardware-timestamp
                inside-service-id
                no ipv6-address
                no mac-address
                no millisecond-event-timestamp
                multi-session-id
                nas-identifier
                no nas-ip-address
                no nas-ipv6-address
                no nas-port
                no nas-port-id
                no nas-port-type
                no nat-subscriber-string
                octet-counters
                outside-ip
                outside-service-id
                port-range-block
                release-reason
                no remote-id
                session-time
                no subscriber-data
                no subscriber-id
                no ue-creation-type
                user-name
                no wifi-rssi
                no wifi-ssid-vlan
                no xconnect-tunnel-home-address
                no xconnect-tunnel-local-ipv6-address
                no xconnect-tunnel-remote-ipv6-address
                no xconnect-tunnel-service
                no xconnect-tunnel-type
            exit

            ...

            servers
                access-algorithm direct
                retry 3
                router "Base"
                source-address-range 192.168.1.20
                timeout sec 5
                ipv6
                    mtu 9000
                    no source-prefix
                exit
                server 1 create
                    shutdown
                    accounting port 1813
                    no authentication
                    no coa
                    ip-address 192.168.1.10
                    secret "ZVo7IYMjSxbdW1ocPvLeFh5a8Xa1DVLOc2uzKVmGRnIKJo37JJjKleoTXIPkD7hQljmD3aC8ZdQOSlw=" hash2
                exit
            exit
----------------------------------------------
Note: The NAT subscriber string and subscriber data attributes are only relevant when subscriber-aware NAT is enabled.

Use the following command to assign one unique IPv4 address to each BB-ISA card from the range of IPv4 addresses configured:

  • MD-CLI

    configure aaa radius isa-policy servers source-address-range
  • classic CLI

    configure aaa isa-radius-policy servers source-address-range
Note: This IPv4 address must be accessible from the accounting server.

The IP addresses are consecutively assigned to each BB-ISA, starting from the IP address configured by this command. The number of IP addresses allocated internally by the system corresponds to the number of BB-ISAs in the system.

Each BB-ISA is provisioned automatically with the first free IP address available, starting from the IP address that is configured using the source-address-range command. When a BB-ISA is removed from the system (or NAT group), it releases that IP address to be available to the next BB-ISA that comes online within the NAT group.

It is important to be mindful of the internally allocated IP addresses, because they are not explicitly configured in the system (other than the first IP address configured using the source-address-range command). However, those internally-assigned IP addresses can be seen using show commands in the routing table.
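As an illustration of this consecutive assignment, the following minimal Python sketch (not SR OS code; the starting address and the number of BB-ISAs are hypothetical values) computes the per-BB-ISA source addresses:

    import ipaddress

    def bb_isa_source_addresses(start: str, num_isas: int) -> list[str]:
        # Consecutive IPv4 addresses, one per BB-ISA, starting from the
        # address configured with source-address-range.
        first = ipaddress.IPv4Address(start)
        return [str(first + i) for i in range(num_isas)]

    # Example: source-address-range 192.168.1.20 with three BB-ISAs in the group
    print(bb_isa_source_addresses("192.168.1.20", 3))
    # ['192.168.1.20', '192.168.1.21', '192.168.1.22']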

Use the following command to show route-table information.

show router route-table

The following output example shows there is one BB-ISA card in the NAT group 1. Its source IP address is 192.168.1.20.

NAT group with one BB-ISA card output

===============================================================================================
Route Table (Router: Base)
===============================================================================================
Dest Prefix[Flags]  Type     Proto    Age         Pref  Next Hop[Interface Name]         Metric    
-----------------------------------------------------------------------------------------------
80.0.0.1/32         Remote   NAT      02d18h24m   0     NAT outside: group 1 member 1    0
192.168.1.0/28      Local    Local    02d20h25m   0     radius                           0
192.168.1.20/32     Remote   NAT      00h38m29s   0     NAT outside: group 1 member 1    0

To communicate with IPv6 servers, provide a /64 source prefix for all ISAs and ESA VMs to use. Use the following command to configure this source prefix:

  • MD-CLI

    configure aaa radius isa-policy servers ipv6 source-prefix
  • classic CLI

    configure aaa isa-radius-policy servers ipv6 source-prefix

Each ISA or ESA uses one /128 address from this prefix as a source address when communicating with an IPv6 RADIUS server. Because this is a /64 prefix, there is no risk of the ISA or ESA VMs running out of allocated addresses or using addresses not assigned to them, as there is with IPv4.
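The following minimal Python sketch illustrates why exhaustion is not a concern; the prefix value is hypothetical, and the way SR OS internally selects each /128 is not modeled here:

    import ipaddress

    # A /64 source prefix offers 2^64 candidate /128 source addresses.
    prefix = ipaddress.IPv6Network("2001:db8:0:1::/64")  # hypothetical prefix
    print(prefix.num_addresses)  # 18446744073709551616

    # Any /128 taken from this prefix is a valid per-ISA source address,
    # for example the first few host addresses:
    base = int(prefix.network_address)
    print([str(ipaddress.IPv6Address(base + i)) for i in range(1, 4)])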

There is no path MTU discovery when communicating with an IPv6 server. With IPv6, only the originating host is allowed to fragment a packet; in this case, the host is the ISA or ESA VM. This means that an MTU applied to the IOM does not cause fragmentation of the packet, as it does with IPv4. However, an MTU for IPv6 fragmentation can be configured manually. Use the following command to configure an MTU for IPv6 fragmentation:

  • MD-CLI

    configure aaa radius isa-policy servers ipv6 mtu
  • classic CLI

    configure aaa isa-radius-policy servers ipv6 mtu

When the ISA or ESA VM generates a packet larger than the MTU, IPv6 fragmentation is applied to the packet.

Note: Fragmentation and reassembly add processing overhead. To avoid this, increase the MTU or reduce the message size (for example, by including fewer RADIUS attributes) when possible.

It is possible to load-balance accounting messages over multiple logging servers by configuring the access-algorithm to round-robin mode. After the LSN RADIUS accounting policy is defined, it must be applied to a NAT group.

Configure the NAT group with an LSN RADIUS accounting policy (MD-CLI)

[ex:/configure isa nat-group 1]
A:admin@node-2# info
    admin-state enable
    radius-accounting-policy "nat-acct-basic"
    redundancy {
        active-mda-limit 1
    }
    mda 1/2 { }

Configure the NAT group with an LSN RADIUS accounting policy (classic CLI)

A:node-2>config>isa>nat-group# info 
----------------------------------------------
            active-mda-limit 1
            radius-accounting-policy "nat-acct-basic"
            mda 1/2
            no shutdown

The following output shows the RADIUS accounting messages generated when a Large Scale NAT44 subscriber allocates two port blocks in a logging mode where an accounting START or STOP is generated per port block.

RADIUS accounting messages after a Large Scale NAT44 subscriber has allocated two port blocks

Fri Jul 13 09:55:15 2012
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = "left-a20"
        NAS-Port = 37814272
        Acct-Status-Type = Start
        Acct-Multi-Session-Id = "500052cd2edcaeb97c2dad3d7c2dad3d"
        Acct-Session-Id = "500052cd2edcaeb96206475d7c2dad3d"
        Called-Station-Id = "00-00-00-00-01-01"
        User-Name = "LSN44@10.0.0.58"
        Alc-Serv-Id = 10
        Framed-IP-Address = 10.0.0.58
        Alc-Nat-Outside-Ip-Addr = 198.51.100.1
        Alc-Nat-Port-Range = "198.51.100.1 2024-2028 router base"
        Acct-Input-Packets = 0
        Acct-Output-Packets = 0
        Acct-Input-Octets = 0
        Acct-Output-Octets = 0
        Acct-Input-Gigawords = 0
        Acct-Output-Gigawords = 0
        Acct-Session-Time = 0
        Event-Timestamp = "Jul 13 2012 09:54:37 PDT"
        Acct-Unique-Session-Id = "21c45a8b92709fb8"
        Timestamp = 1342198515
        Request-Authenticator = Verified

Fri Jul 13 09:55:16 2012
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = "left-a20"
        NAS-Port = 37814272
        Acct-Status-Type = Start
        Acct-Multi-Session-Id = "500052cd2edcaeb97c2dad3d7c2dad3d"
        Acct-Session-Id = "500052cd2edcaeb9620647297c2dad3d"
        Called-Station-Id = "00-00-00-00-01-01"
        User-Name = "LSN44@10.0.0.58"
        Alc-Serv-Id = 10
        Framed-IP-Address = 10.0.0.58
        Alc-Nat-Outside-Ip-Addr = 198.51.100.1
        Alc-Nat-Port-Range = "198.51.100.1 2029-2033 router base"
        Acct-Input-Packets = 0
        Acct-Output-Packets = 5
        Acct-Input-Octets = 0
        Acct-Output-Octets = 370
        Acct-Input-Gigawords = 0
        Acct-Output-Gigawords = 0
        Acct-Session-Time = 1
        Event-Timestamp = "Jul 13 2012 09:54:38 PDT"
        Acct-Unique-Session-Id = "baf26e8a35e31020"
        Timestamp = 1342198516
        Request-Authenticator = Verified

The following output shows the RADIUS accounting messages generated when a Large Scale NAT44 subscriber deallocates two port blocks in a logging mode where an accounting START or STOP is generated per port block.

RADIUS accounting messages after a Large Scale NAT44 subscriber has deallocated two port blocks

Fri Jul 13 09:56:18 2012
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = "left-a20"
        NAS-Port = 37814272
        Acct-Status-Type = Stop
        Acct-Multi-Session-Id = "500052cd2edcaeb97c2dad3d7c2dad3d"
        Acct-Session-Id = "500052cd2edcaeb96206475d7c2dad3d"
        Called-Station-Id = "00-00-00-00-01-01"
        User-Name = "LSN44@10.0.0.58"
        Alc-Serv-Id = 10
        Framed-IP-Address = 10.0.0.58
        Alc-Nat-Outside-Ip-Addr = 198.51.100.1
        Alc-Nat-Port-Range = "198.51.100.1 2024-2028 router base"
        Acct-Terminate-Cause = Port-Unneeded
        Acct-Input-Packets = 0
        Acct-Output-Packets = 25
        Acct-Input-Octets = 0
        Acct-Output-Octets = 1850
        Acct-Input-Gigawords = 0
        Acct-Output-Gigawords = 0
        Acct-Session-Time = 64
        Event-Timestamp = "Jul 13 2012 09:55:41 PDT"
        Acct-Unique-Session-Id = "21c45a8b92709fb8"
        Timestamp = 1342198578
        Request-Authenticator = Verified

Fri Jul 13 09:56:20 2012
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = "left-a20"
        NAS-Port = 37814272
        Acct-Status-Type = Stop
        Acct-Multi-Session-Id = "500052cd2edcaeb97c2dad3d7c2dad3d"
        Acct-Session-Id = "500052cd2edcaeb9620647297c2dad3d"
        Called-Station-Id = "00-00-00-00-01-01"
        User-Name = "LSN44@10.0.0.58"
        Alc-Serv-Id = 10
        Framed-IP-Address = 10.0.0.58
        Alc-Nat-Outside-Ip-Addr = 198.51.100.1
        Alc-Nat-Port-Range = "198.51.100.1 2029-2033 router base"
        Acct-Terminate-Cause = Host-Request
        Acct-Input-Packets = 0
        Acct-Output-Packets = 25
        Acct-Input-Octets = 0
        Acct-Output-Octets = 1850
        Acct-Input-Gigawords = 0
        Acct-Output-Gigawords = 0
        Acct-Session-Time = 65
        Event-Timestamp = "Jul 13 2012 09:55:42 PDT"
        Acct-Unique-Session-Id = "baf26e8a35e31020"
        Timestamp = 1342198580
        Request-Authenticator = Verified

Including the Acct-Multi-Session-Id attribute in the NAT accounting policy enables generating START and STOP messages for each allocation or deallocation of a port block within the subscriber. Otherwise, only the first and last port block for the subscriber would generate a pair of START and STOP messages. All port blocks in between would generate INTERIM-UPDATE messages.

The User-Name attribute in accounting messages is set to app-name@inside-ip-address, where app-name can be any of the following (a parsing sketch follows the list):

  • LSN44

  • DS-Lite

  • NAT64
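A collector that post-processes these accounting records can split the User-Name attribute into its two components. The following is a minimal Python sketch under that assumption; the function name is illustrative:

    def parse_nat_user_name(user_name: str) -> tuple[str, str]:
        # Split the "app-name@inside-ip-address" form used in NAT accounting.
        app_name, _, inside_ip = user_name.partition("@")
        if app_name not in ("LSN44", "DS-Lite", "NAT64"):
            raise ValueError(f"unexpected application name: {app_name}")
        return app_name, inside_ip

    print(parse_nat_user_name("LSN44@10.0.0.58"))  # ('LSN44', '10.0.0.58')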

Subscriber session time versus port-block time

The Acct-Session-Time RADIUS attribute in NAT logging specifies the duration of the entire NAT session (subscriber), and not the duration of individual port blocks (PB) or port forwards (PF) within the subscriber session. This is valid regardless of whether the per-subscriber (with Acct-Multi-Session-Id disabled) or per-port-block (with Acct-Multi-Session-Id enabled) NAT RADIUS logging mode is used.

In the per-NAT-subscriber logging mode (with Acct-Multi-Session-Id disabled), an Acct-Start message is generated when the subscriber comes online. This can occur for various reasons, such as lawful intercept (LI), PF allocation without prior PB allocation, enabling debugging for the user, or allocation of the first PB. An Acct-Stop message is generated when the NAT subscriber goes offline (for example, when the last PB is released, or when LI, debugging, or the PF is removed); all PB allocations and deallocations in between are logged with triggered Acct-Interim-Update messages. In the Acct-Interim-Update and Acct-Stop messages, the Acct-Session-Time attribute indicates the duration of the NAT subscriber session, not the duration of individual PBs.

Similarly, in per-port-block logging mode (with Acct-Multi-Session-ID or AMSI enabled), Acct-Start and Acct-Stop messages are generated for each PB with a unique Acct-Session-Id and the same per-subscriber AMSI. In these messages, the Acct-Session-Time attribute also represents the duration of the NAT subscriber session.

The Alc-Nat-Port-Time RADIUS attribute is used to record the duration of individual PBs and PFs in either logging mode.

The Acct-Session-Time attribute is included by default in RADIUS NAT logging. To enable the Alc-Nat-Port-Time attribute, use the following command.

  • MD-CLI
    configure aaa radius isa-policy accounting include-attributes nat-port-time true
  • classic CLI
    configure aaa isa-radius-policy acct-include-attributes port-time

Periodic RADIUS logging

Currently-allocated NAT resources (such as a public IP address and a port block for a NAT subscriber) can be periodically refreshed via Interim-Update (I-U) accounting messages. This functionality is enabled by the periodic RADIUS logging facility. Its primary purpose is to keep logging information preserved for long-lived sessions in environments where NAT logs are periodically and deliberately deleted from the network of the service provider. This is typically the case in countries where privacy laws impose a limit on the amount of time that the information about the traffic of a customer can be retained or stored in the network of a service provider.

Configure periodic RADIUS logging for NAT (MD-CLI)
[ex:/configure aaa]
A:admin@node-2# info
    radius {
        isa-policy "radius1" {
            accounting {
                nat-periodic-update {
                    interval 2
                    rate-limit 1000
                }
            }
        }
    }
Configure periodic RADIUS logging for NAT (classic CLI)
A:node-2>config>aaa# info
----------------------------------------------
        isa-radius-policy "radius1" create
            periodic-update interval 2 rate-limit 1000
        exit
----------------------------------------------

The configurable interval dictates the frequency of I-U messages that are generated for each currently allocated NAT resource (such as a public IP address and a port block).

By default, the I-U messages are sent in rapid succession for a subscriber, without any intentional delay inserted by SR OS. For example, a NAT subscriber with 8 NAT policies, each configured with 40 port ranges, generates 320 consecutive I-U messages at the expiration of the configured interval. This can create a surge in I-U message generation in cases where intervals are synchronized for multiple NAT subscribers, which can have adverse effects on logging behavior. For example, the logging server can drop messages because of its inability to process the high rate of incoming I-U messages.

To prevent this, the rate of I-U message generation can be controlled by the rate-limit command option.

The periodic logging is applicable to both modes of RADIUS logging in NAT:

  • Acct-Multi-Session-Id AVP is enabled

    In this case, accounting START/STOP messages are generated for each NAT resource (such as a public IP address and a port block) allocation/de-allocation. The Acct-Multi-Session-Id and Acct-Session-Id values in the periodic I-U messages for the currently allocated NAT resource are inherited from the accounting START message related to the same NAT resource.

  • Acct-Multi-Session-Id AVP is disabled

    In this case, the acct START is generated for the first allocated NAT resource for the subscriber (a public IP address and a port block) and the acct STOP message is generated when the last NAT resource for the subscriber is released. All of the in-between port block allocations for the same subscriber trigger I-U messages with the same acct-session-id as the one contained in the acct START message. To differentiate between the port-block allocations, releases, and updates within the I-U messages for the same NAT subscriber, the Alc-Acct-Triggered-Reason AVP is included in every periodic I-U message. Sending the Alc-Acct-Triggered-Reason AVP is configuration dependent. Use the commands in the following context to include attributes in RADIUS accounting messages:

    • MD-CLI
      configure aaa radius isa-policy accounting include-attributes
    • classic CLI
      configure aaa isa-radius-policy acct-include-attributes

    The supported values for the Alc-Acct-Triggered-Reason AVP in I-U messages are as follows (a lookup sketch follows this list):

    • Alc-Acct-Triggered-Reason=Nat-FREE (19), generated when the port block is released

    • Alc-Acct-Triggered-Reason=Nat-MAP (20), generated when the port block is allocated

    • Alc-Acct-Triggered-Reason=Nat-UPDATE (21), generated during a periodically scheduled I-U update

    The log for each port-block periodic update is carried in a separate I-U message.
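On the collector side, these codes can be resolved with a simple lookup, as in the following minimal Python sketch (the dictionary and function names are illustrative):

    # Numeric codes as documented for the Alc-Acct-Triggered-Reason AVP.
    ALC_ACCT_TRIGGERED_REASON = {
        19: "Nat-FREE",    # port block released
        20: "Nat-MAP",     # port block allocated
        21: "Nat-UPDATE",  # periodically scheduled I-U update
    }

    def describe_trigger(code: int) -> str:
        return ALC_ACCT_TRIGGERED_REASON.get(code, f"unknown ({code})")

    print(describe_trigger(20))  # Nat-MAP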

Message pacing

Periodic Interim Update (I-U) message output can be paced to avoid congestion at the logging server. Use the rate-limit command option in the following command to control pacing:

  • MD-CLI
    configure aaa radius isa-policy accounting nat-periodic-update rate-limit
  • classic CLI
    configure aaa isa-radius-policy periodic-update interval rate-limit

As an example, consider the following hypothetical case; the arithmetic is worked through in the sketch after this list:

  • 1 million NAT subscribers came up within 1 hour (16,666 NAT-subs per minute).

  • On average, each NAT subscriber allocates two port blocks.

  • This means that 2 million logs are sent to the logging server.

  • If the rate-limit command option is set to 100 (messages per second), on average it would take over 5 hours to send all those messages at that specific rate.

  • In this case, it is prudent to set the interval value to at least 6 hours, or increase the rate-limit command option so there is no time overlap between the old and new logs.
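The following minimal Python sketch works through the arithmetic of this hypothetical case:

    # Values from the hypothetical case above.
    subscribers = 1_000_000
    port_blocks_per_sub = 2
    rate_limit = 100  # I-U messages per second

    total_logs = subscribers * port_blocks_per_sub  # 2,000,000 logs
    drain_seconds = total_logs / rate_limit         # 20,000 s
    print(f"{drain_seconds / 3600:.1f} hours")      # ~5.6 hours

    # The configured interval should exceed this drain time, hence the
    # recommendation of at least 6 hours (or a higher rate-limit).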

In the case of an MS-ISA switchover or a NAT multichassis redundancy switchover, there is a chance that a large number of subscribers become active at approximately the same time on the newly active MS-ISA (or chassis). This causes a large number of logs to be sent in a relatively short amount of time, which may overwhelm the logging server. The rate-limit command option is designed to help in such situations.

RADIUS buffer management on ISA or ESA-VM

RADIUS accounting messages (Accounting-Request) sent from an SR OS node to a RADIUS server are acknowledged by an Accounting-Response message generated by the RADIUS server. This acknowledgment confirms that the RADIUS server has successfully received and recorded the client information, such as NAT logs. This communication between the SR OS node and the RADIUS server occurs over UDP transport, as defined in RFC 2866.

If acknowledgments do not reach the SR OS node because of RADIUS server overload, server failure, or network failure, the node backs off and retransmits the RADIUS messages in line with the configured ISA RADIUS policy. After several retransmissions, as specified by the ISA RADIUS policy, the message is discarded.

Each ISA or ESA-VM maintains a buffer capable of storing 32,000 outstanding transactions toward the RADIUS server. A slow or unresponsive RADIUS server can result in buffer exhaustion.

When the buffer is full and unable to accept additional messages, new NAT port blocks cannot be allocated and existing ones cannot be released.

The Acct Tx Timeouts counter in the following output example shows the number of RADIUS messages dropped because of these timeouts. This command was executed while the RADIUS server was unresponsive.

Use the following command to display ISA RADIUS policy information.

show aaa isa-radius-policy "AcctPolicy1"
RADIUS policy information output
===============================================================================
ISA RADIUS policy "AcctPolicy1"
===============================================================================
Description                 : AcctPolicy1 associated with nat-grp 1
Include attributes acct     : framed-ip-addr
                              nas-identifier
                              nat-subscriber-string
                              user-name
                              inside-service-id
                              outside-service-id
                              outside-ip
                              port-range-block
                              hardware-timestamp
                              release-reason
                              multi-session-id
                              frame-counters
                              octet-counters
                              session-time
                              called-station-id
                              subscriber-data
                              framed-ip-netmask
                              circuit-id
                              remote-id
                              dhcp-options
                              dhcp-vendor-class-id
                              mac-address
                              nas-port-id
                              nas-port-type
                              calling-station-id
                              subscriber-id
                              acct-trigger-reason
                              ue-creation-type
                              wifi-rssi
                              acct-delay-time
                              wifi-ssid-vlan
                              nas-ip-address
                              nas-port
                              class
                              ipv6-address
                              framed-ipv6-prefix
                              dhcp6-options
Include attributes auth     : nas-ip-address
                              nas-ipv6-address
User name format            : mac
User name MAC format        : alu
NAS-IP-Address              : system-ip
Python policy               : (Not Specified)
Periodic update
  interval (hours)          : (Not Specified)
  rate limit (messages/s)   : (Not Specified)
-------------------------------------------------------------------------------
RADIUS server settings
-------------------------------------------------------------------------------
Router                      : 2000
Source address start        : 192.0.2.100
Source address end          : 192.0.2.100
IPv6 Source prefix          : (Not Specified)
IPv6 MTU                    : 9000
Access algorithm            : direct
Retry                       : 10
Timeout (s)                 : 60
Last management change      : 05/04/2023 11:36:05
===============================================================================

===============================================================================
Servers for "AcctPolicy1"
===============================================================================
Index Address                                 Acct-port Auth-port CoA-port 
-------------------------------------------------------------------------------
1     10.10.17.7                              1813      0         0        
===============================================================================

===============================================================================
Status for ISA RADIUS server policy "AcctPolicy1"
===============================================================================
Server 1, group 1, member 1
-------------------------------------------------------------------------------
Purposes Up                                     : (None)
Purposes Hold down                              : (None)
Source IP address                               : 192.0.2.100
Acct Tx Requests                                : 655451
Acct Tx Retries                                 : 589905
Acct Tx Timeouts                                : 65544
Acct Rx Replies                                 : 0
Auth Tx Requests                                : 0
Auth Tx Retries                                 : 0
Auth Tx Timeouts                                : 0
Auth Rx Replies                                 : 0
CoA Rx Requests                                 : 0

===============================================================================

Summarization logs and bulk operations

Bulk operations, such as removing a NAT policy or shutting down a NAT pool, can trigger a cascade of events, such as the release of all NAT subscribers associated with that NAT policy or NAT pool. To avoid excessive logging during these operations, summarization logs are used. These logs carry relational information that connects multiple events and are categorized under event log 99 on the CPM. Configurable destinations for these logs include SNMP notification (trap), syslog (sent in syslog format to the syslog collector), memory (sent to the memory buffer), local file, and NETCONF.

Tracking NAT subscribers based on the logs becomes more complicated if they were terminated because of bulk operations. A MAP log is generated when NAT resources for the subscriber are allocated; a FREE log is generated when NAT resources for the subscriber are released. Typically, individual MAP logs are paired with corresponding FREE logs to determine the identity and activity duration for the subscriber. However, during bulk operations, individual FREE logs are substituted with a summarized log containing relational information. In such cases, identifying NAT subscriber mappings may necessitate examining multiple logging sources, such as a combination of RADIUS and summarization logs.

To simplify log summarization, a policy ID is added as a connecting option in all logs. The policy ID follows the format plcy-id XX, where XX is a unique number representing a NAT policy, assigned by the router for each inside routing context, as shown in the following example.

670 2023/05/31 12:55:00.952 UTC MINOR: NAT #2012 vprn601 NAT
"{986} Map  10.10.10.1 [4001-4279] MDA 5/1 -- 1166016512 classic-lsn-sub 
%203 vprn101 192.0.2.1 at 2023/05/31 12:55:00"

When an active NAT policy is removed from the configuration within an inside routing context, all NAT subscribers associated with that NAT policy in that context are removed from the system. Instead of generating individual FREE logs for each subscriber, a single summarized log is generated. This summarized log entry contains only the policy ID of the removed NAT policy and the inside service ID. To determine which NAT resources were released, the user must match the policy ID and the service ID in the summarization log with those in all MAP logs that lack a matching explicit FREE log.
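The matching step can be sketched as follows. This is a minimal Python illustration, assuming the logs have already been parsed into dictionaries; the field names (event, plcy_id, service_id, inside_ip, block) are hypothetical:

    def unmatched_maps(logs: list[dict], plcy_id: int, service_id: int) -> list[dict]:
        # Keys that identify a mapping: policy, service, inside IP, port block.
        freed = {(l["plcy_id"], l["service_id"], l["inside_ip"], l["block"])
                 for l in logs if l["event"] == "FREE"}
        # MAP logs with no explicit FREE log that match the summarization log.
        return [l for l in logs
                if l["event"] == "MAP"
                and (l["plcy_id"], l["service_id"], l["inside_ip"], l["block"]) not in freed
                and l["plcy_id"] == plcy_id and l["service_id"] == service_id]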

A summarization log is always created on the CPM, regardless of whether RADIUS logging is enabled.

A summarization log is generated on the CPM under the following circumstances:

  • NAT policy removal

    If there is a single NAT policy for each inside routing context, the summarization log contains the inside service ID (VPRN or Base). To identify the terminated NAT mappings for subscribers, search all individual MAP logs matching the service ID from the summarization log.

    When there are multiple NAT policies per inside routing context, the summarization log contains the inside service ID and policy ID. Search individual logs based on policy ID and inside service ID to identify subscribers affected by the NAT policy removal.

  • pool administratively disabled

    The router sends a summarization log with the outside service ID and all IP address ranges in the pool. Match individual logs based on outside IP address and outside service ID to identify released subscribers.

  • IP address range removal from the pool

    The summarization log includes the outside service ID and the removed IP address range. Match individual logs based on the outside IP addresses in the range and the outside service ID to identify the released subscribers.

  • Non-deterministic source prefix removal

    The summarization log includes the removed source prefix, policy ID, and inside service ID.

  • Last AFTR address removal

    The summarization log includes the inside service ID.

  • DS-Lite or NAT64 node administratively disabled

    The summarization log includes the inside service ID.

  • Deterministic NAT prefix creation or removal

    The summarization log includes the inside service ID.

Summarization logs are enabled by event controls 2021 (tmnxNatLsnSubBlksFree), 2016 (tmnxNatPlAddrFree), and 2030 (tmnxNatInAddrPrefixBlksFree). These events are suppressed by default. Event control 2021 also reports when all port blocks for a subscriber are freed.

Summarization logs and RADIUS logging

RADIUS logging does not generate summarization logs because RADIUS accounting messages (START, INTERIM-UPDATE, and STOP) are generated for each subscriber individually. Therefore, using RADIUS logging to send summarization logs for every subscriber would be ineffective.

Instead, during RADIUS logging bulk operations, summarization logs are generated exclusively on the CPM using event logs. One exception is when a NAT accounting policy is removed, in which case a RADIUS acct-off message is sent without an accompanying summarization log. For bulk operations with RADIUS logging, users must rely on both RADIUS logging and summarization logs on the CPM.

For example, if a RADIUS log sequence indicates a mapping for <inside IP 1, outside IP 1, port-block 1>, and later a mapping log for <inside IP 2, outside IP 1, port-block 1>, it suggests that the FREE log for <inside IP 1, outside IP 1, port-block 1> is missing. This could mean either that the FREE log for <inside IP 1, outside IP 1, port-block 1> was lost, or that a policy, pool, or address range was removed from the configuration. In the latter case, the user should check the CPM log for the summarization message.

Integrated Layer 2–aware NAT RADIUS logging and BNG accounting

In Layer 2–aware NAT, the logging of NAT resources is integrated with ESM RADIUS accounting. The reporting of NAT-related resources is described in Integrated ESM and NAT accounting.

Accounting START messages carry only the RADIUS Event-Timestamp (type 55), which correctly reflects the creation of the initial port block and outside IP address for Layer 2–aware NAT. The initial port block and outside IP address allocation in the ISA or ESA for an L2-Aware subscriber is triggered by the control plane (CPM) when the first session or host is created. This means that the initial port block and outside IP address creation in the ISA or ESA is not triggered by data traffic. However, data traffic triggers the creation of extended port blocks.

Interim-Updates and STOP accounting messages carry two timestamps. This is because the RADIUS accounting message is generated by the CPM at the time indicated by the Event-Timestamp, which may not accurately reflect the time of the extended port block allocation or de-allocation that occurs on ISA or ESA.

  • RADIUS Event-Timestamp (type 55) with a 1 second resolution

    This timestamp is updated by the CPM with the time that the Interim-Update message is generated.

  • Nokia Alc-ISA-Event-Timestamp (type 86)

    This is updated only when an event occurs on the ISA or ESA, for example, when an extension port block is allocated or de-allocated. The format and resolution of this timestamp are the same as those of the Event-Timestamp.

A summary of integrated ESM and NAT RADIUS logging is shown in Integrated ESM and NAT accounting. Only RADIUS attributes relevant to NAT are shown.

Table 13. Integrated ESM and NAT accounting

ESM and NAT integrated RADIUS accounting/logging is summarized per accounting message type. For each message type, the behavior is described for queue-instance (SLA-profile instance) accounting and for session or host accounting, followed by comments where applicable.

Start

Queue-instance (SLA-profile instance) accounting:

An Acct START message is generated for every SLA profile instantiation, and every accounting START message contains NAT-related information carried in

Alc-Nat-Port-Range (26.6527.121)

which includes the outside IP address, newly allocated initial port block, outside router ID, and NAT policy ID.

If there are multiple SLA profile instances per NAT-enabled ESM subscriber, this information is repeated for all additional SLA profile instances.

Session or host accounting:

An Acct START is generated for every new session or host of a NAT-enabled subscriber.

This message carries:

  • the outside IP address and the initial port for the first session or the host for the subscriber

  • the outside IP address, the initial port block, and extended port blocks for any existing sessions or hosts of the subscriber

The NAT related information is carried in the following RADIUS attribute:

Alc-Nat-Port-Range(26.6527.121)

This attribute includes the outside IP address, port blocks, outside router ID, and NAT policy. There is no distinction between NAT-enabled and non-NAT-enabled sessions or hosts (that is, non-NAT-enabled sessions or hosts also carry NAT information) for a NAT-enabled subscriber.

Comments:

The initial port block and outside IP address are always advertised in accounting START messages, regardless of whether there is a single session or host or multiple sessions or hosts per subscriber, and regardless of whether the sessions or hosts are NAT-enabled.

Regular Interim-Update

Queue-instance (SLA-profile instance) accounting:

The message reports existing in-use NAT resources (the cumulative update) for each SLA profile instance:

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, all existing port blocks, outside router ID, and NAT policy ID.

Alc-ISA-Event-Timestamp(241.26.6527.86)

The time of the last extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM.

This is repeated for all NAT-enabled sessions of an ESM subscriber.

Session or host accounting:

This message reports the existing in-use NAT resources (the cumulative update) for each session:

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, all existing port blocks, outside router ID, and NAT policy.

Alc-ISA-Event-Timestamp(241.26.6527.86)

The time of the last extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM.

This is repeated for all NAT-enabled sessions or hosts of an ESM subscriber.

Triggered Interim-Update

Queue-instance (SLA-profile instance) accounting:

This message carries differential updates, tracking changes only for extended port blocks of the existing subscriber. The initial port block is not advertised in the triggered Interim-Update; it is only advertised in the accounting START (map) or STOP (free) message.

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, newly allocated or de-allocated extended port block, outside router ID, and NAT policy ID.

Alc-Acct-Triggered-Reason (26.6527.163)

  • NAT-MAP (20)

  • NAT-FREE (19)

These are the reasons for this triggered Interim-Update message. An extended port block is allocated (MAP) or de-allocated (FREE).

Alc-ISA-Event-Timestamp (241.26.6527.86)

The time of the extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM.

This is repeated for each SLA-profile instance (queuing instance).

Session or host accounting:

This message carries differential updates, tracking changes only for extended port blocks of the existing subscriber. The initial port block is never advertised in the triggered Interim-Update; it is only advertised in the accounting START (map) or STOP (free) message.

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, newly allocated or de-allocated extended port block, outside router ID, and NAT policy ID.

Alc-Acct-Triggered-Reason (26.6527.163)

  • NAT-MAP (20)

  • NAT-FREE (19)

The reason for this triggered Interim-Update message. An extended port block is allocated (MAP) or de-allocated (FREE).

Alc-ISA-Event-Timestamp (241.26.6527.86)

The time of the extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM.

This is repeated for all sessions or hosts of an ESM subscriber.

Comments:

If the last session of the subscriber is terminated, and at the same time this session has extended port blocks in use, two consecutive RADIUS accounting messages are sent (regardless of the accounting model):

  • a triggered I-U message with extended PBs

  • a STOP message for the last session termination for the subscriber. This STOP message contains the initial PB (and outside IP address).

A subscriber termination is an infrequent event.

Stop

Queue-instance (SLA-profile instance) accounting:

An accounting STOP message is sent when an SLA profile instance (queuing instance) is terminated, that is, when the last session associated with it is terminated.

If the terminated SLA-profile instance (queuing instance) is the last for the subscriber, the accounting STOP message carries only the initial port block (and outside IP address). Any extended port blocks that were released are reported in the immediately preceding triggered Interim-Update message.

If the terminated SLA-profile instance (queuing instance) is not the last for the subscriber, the accounting STOP message carries the initial port block (and outside IP address) and any extended port blocks that are still allocated for the subscriber but no longer used by this terminated SLA-profile instance.

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, initial port block, outside router ID, and NAT policy ID.

Alc-ISA-Event-Timestamp (241.26.6527.86)

The time of the last extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM. This information is generated for every SLA-profile instance (queuing instance) termination, meaning that the information is repeated if the subscriber has multiple SLA-profile instances.

Session or host accounting:

An accounting STOP message is sent when a session or host of a NAT-enabled subscriber is terminated.

If the terminated session or host is the last for the subscriber, the accounting STOP message carries only the initial port block (and outside IP address). Any extended port blocks that were released are reported in the immediately preceding triggered Interim-Update messages.

If the terminated session or host is not the last for the subscriber, the accounting STOP message carries the initial port block (and outside IP address) and any extended port blocks that are still allocated for the subscriber but no longer used by this terminated session or host.

Alc-Nat-Port-Range (26.6527.121)

The outside IP address, initial port block, outside router ID, and NAT policy ID.

Alc-ISA-Event-Timestamp (241.26.6527.86)

The time of the last extended port block allocation or de-allocation on the ISA or ESA.

Event-Timestamp (55)

The time when the RADIUS message is generated on the CPM.

This information is generated upon termination of every session or host of an L2-Aware subscriber.

Each accounting stream (START, I-U, STOP) is treated as a separate entity and it contains NAT information that can overlap with other accounting streams (for the queuing instance or a session) of the same subscriber.

Complete NAT information is always conveyed within an accounting stream; for example, for every PB allocation, a matching de-allocation can be found in the same stream. In other words, there are no known cases where a PB allocation is reported on one accounting stream but its de-allocation is reported on another.

The following are examples showing only relevant NAT-related attributes:

  • A session is created for a Layer 2–aware NAT subscriber. At the time of session instantiation, a RADIUS accounting START message is generated.

         Alc-Nat-Port-Range = "192.168.20.2 2001-2024 router base l2-aware" 
         Event-Timestamp = T1 
    

    The outside IP address 192.168.20.2 and the initial port block [2001-2024] are allocated at time T1.

  • A new extended port block is allocated. Differential data is carried in a triggered Interim-Update message.

         Alc-Nat-Port-Range = "192.168.20.2 3000-3023 router base l2-aware" 
         Alc-Acct-Triggered-Reason = Nat-Map (20) 
         Event-Timestamp = T3
         Alc-ISA-Event-Timestamp = T2
    

    Only the newly allocated port block is present in this update, with the triggered reason Nat-Map (20).

    This port block was allocated on the ISA or ESA at time T2, which may be different from time T3, at which the Interim-Update is sent to the RADIUS server.

    This difference may be small if there is no congestion in the system. It may be larger if there is congestion while the notifications from the ISA or ESA are queued internally, waiting to be transported to a backlogged CPM. One reason for a CPM backlog can be a high volume of RADIUS messages being sent to the RADIUS servers.

  • Periodic Interim-Update messages are triggered at regular intervals and carry cumulative (or absolute) data.

         Alc-Nat-Port-Range = "192.168.20.2 2001-2024, 3000-3023 router base l2-aware"
         Event-Timestamp = T4
         Alc-ISA-Event-Timestamp = T2 
    

    This update carries both previously allocated port blocks: the initial port block and the extended port block.

    T4 in Event-Timestamp reflects the time when the message is generated, while the Alc-ISA-Event-Timestamp is unchanged from the previous update because no new event occurred on the ISA or ESA.

  • An existing extended port block is de-allocated. Differential data is carried in a triggered Interim-Update message.

         Alc-Acct-Triggered-Reason = Nat-Free (19)    
         Alc-Nat-Port-Range = "192.168.20.2 3000-3023 router base l2-aware" 
         Event-Timestamp = T6
         Alc-ISA-Event-Timestamp = T5
    

    Only the de-allocated port block is present in this update with the triggered reason NAT-Free (19).

    This port block was de-allocated on the ISA or ESA at time T5, which may be different from time T6, at which the Interim-Update is sent to the RADIUS server.

  • At session termination, a RADIUS accounting STOP message with initial port block is generated.

         Alc-Nat-Port-Range = "192.168.20.2 2001-2024 router base l2-aware" 
         Event-Timestamp = T7
         Alc-ISA-Event-Timestamp = T5
    

    This final update for the session carries the initial port block that is no longer used by the terminated session, host, or queuing instance. Although this session is terminated, the initial port block can still be used by other sessions present under the same Layer 2–aware NAT subscriber.

    T7 in Event-Timestamp reflects the time when the message is generated, while the Alc-ISA-Event-Timestamp is always the same as in the previous triggered accounting Interim-Update message.

Enabling RADIUS logging for Layer 2–aware NAT subscribers

The following example displays the configuration to enable RADIUS logging for Layer 2–aware NAT subscribers.

Configure RADIUS logging for Layer 2–aware NAT subscribers (MD-CLI)
[ex:/configure subscriber-mgmt radius-accounting-policy "policy1"]
A:admin@node-2# info
    include-radius-attribute {
        acct-triggered-reason true
        nat-port-range true
    }
Configure RADIUS logging for Layer 2–aware NAT subscribers (classic CLI)
A:node-2>config>subscr-mgmt>acct-plcy# info
----------------------------------------------
            include-radius-attribute
                nat-port-range
                alc-acct-triggered-reason
            exit
----------------------------------------------

For session or host type accounting, the generation of periodic Interim-Update messages must be enabled. The following example displays the configuration to generate periodic Interim-Update (I-U) messages.

Enable the generation of periodic I-U messages (MD-CLI)
[ex:/configure subscriber-mgmt radius-accounting-policy "policy1"]
A:admin@node-2# info
    session-accounting {
        admin-state enable
        interim-update true
    }
Enable the generation of periodic I-U messages (classic CLI)
A:node-2>config>subscr-mgmt>acct-plcy# info
----------------------------------------------
            session-accounting interim-update
----------------------------------------------

Timestamp interpretation

Extended port block functionality in Layer 2–aware NAT introduces an additional timestamp into the logging framework. In addition to the standardized Event-Timestamp that is carried in every RADIUS accounting message, a NAT-related timestamp is included. This additional timestamp is introduced in the accounting stream after the first extended port block for the subscriber is allocated, and is then present in every accounting message in the stream. It represents the time of the last extended port block allocation or de-allocation as recorded by the ISA or ESA.

The two timestamps should be interpreted as follows:

  • The Standard Event-Timestamp (55) attribute records the time when the accounting message was generated on the CPM.

  • The Alc-ISA-Event-Timestamp (241.26.6527.86) attribute records the time of the last NAT related event (extended port block allocation or de-allocation).

For example, the periodic I-U message in the following example indicates that at time 1000, a subscriber has two port blocks allocated: [2001-2024] and [3000-3023]. The last change related to extended port blocks was at time 500.

The following are Periodic Interim-Updates with the NAT-related attributes.

Alc-Nat-Port-Range = "192.168.20.2 2001-2024,3000-3023 router base l2-aware" 
Event-Timestamp = 1000
Alc-ISA-Event-Timestamp = 500

In the preceding example:

  • the extended port block [3000-3023] was released a few milliseconds before the previous periodic Interim-Update message was sent

  • notification from the ISA or ESA about this event has not reached the CPM in time for the event to be included in the periodic Interim-Update

Then, a triggered Interim-Update immediately follows the preceding periodic Interim-Update with the relevant NAT-related attributes:

Alc-Nat-Port-Range = "192.168.20.2 3000-3023 router base l2-aware" 
Alc-Acct-Triggered-Reason = Nat-Free 
Event-Timestamp = 1000
Alc-ISA-Event-Timestamp = 999

Both messages have the same Event-Timestamp of 1000 because the resolution of this timestamp is 1 second. However, the port block [3000-3023] was released at time 999 as indicated by the Alc-ISA-Event-Timestamp. This scenario is shown in Alc-ISA-Event-Timestamp.
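
For collector-side processing, the Alc-Nat-Port-Range value can be split into its components. The following is a minimal parsing sketch in Python (illustrative only; the attribute layout follows the preceding examples, and all names are assumptions rather than SR OS identifiers).

Parse an Alc-Nat-Port-Range value (Python sketch)

from dataclasses import dataclass

@dataclass
class PortRangeInfo:
    outside_ip: str
    port_blocks: list   # list of (start, end) tuples
    router: str
    nat_type: str

def parse_nat_port_range(value: str) -> PortRangeInfo:
    # Example value: "192.168.20.2 2001-2024,3000-3023 router base l2-aware"
    ip, blocks, _keyword, router, nat_type = value.split()
    ranges = [tuple(int(p) for p in b.split("-")) for b in blocks.split(",")]
    return PortRangeInfo(ip, ranges, router, nat_type)

info = parse_nat_port_range("192.168.20.2 2001-2024,3000-3023 router base l2-aware")
# info.port_blocks == [(2001, 2024), (3000, 3023)]

Parsing the attribute this way allows a collector to reconcile the port blocks reported in periodic Interim-Update messages with the allocation and release events reported in triggered Interim-Update messages.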

Figure 37. Alc-ISA-Event-Timestamp

High logging rates

A system with on-demand port block allocations is dynamic, with the potential to generate a high volume of logs. Transporting NAT logs through ESM accounting relies on the generic RADIUS accounting infrastructure implemented in the SR, which supports multiple RADIUS servers and failover mechanisms. When the rate of accounting messages exceeds the capacity of the entire accounting system, the queue of accounting messages toward the RADIUS servers in the SR starts to fill up. This can be caused by internal conditions in the SR or by slow or even unresponsive RADIUS servers. Considering that NAT is only one contributor of accounting messages in a larger accounting framework that includes ESM, the rate of allocations and de-allocations of extended port blocks is internally limited. Although this does not prevent the loss of accounting messages in an overloaded accounting system (for example, caused by slow RADIUS servers), it reduces the chances that the system becomes overloaded.

Intra-chassis redundancy

Initial port blocks are preserved during an ISA or ESA switchover. However, extended port blocks are released during the switchover; consequently, their release is reported in triggered Interim-Update messages.

LSN and Layer 2–aware NAT flow logging

LSN and Layer 2–aware NAT flow logging allows each BB-ISA card to export the creation and deletion of NAT flows to an external server. A NAT flow, or a Fully Qualified Flow (FQF), consists of the following: inside IP, inside port, outside IP, outside port, foreign IP, foreign port, and protocol (UDP, TCP, ICMP).
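
The FQF can be modeled as a simple record. The following is a minimal sketch in Python (illustrative only; the field names are assumptions, not SR OS identifiers) of the tuple that uniquely identifies a NAT flow.

Model of a Fully Qualified Flow (Python sketch)

from dataclasses import dataclass

@dataclass(frozen=True)
class NatFlow:
    # Fully Qualified Flow (FQF): the 7-tuple that identifies one NAT flow
    inside_ip: str       # source IP before translation
    inside_port: int     # source port before translation
    outside_ip: str      # source IP after translation
    outside_port: int    # source port after translation
    foreign_ip: str      # original destination IP as received on the inside
    foreign_port: int    # destination port
    protocol: str        # "UDP", "TCP", or "ICMP"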

Use the following command to display LSN and Layer 2–aware NAT flow logging information.

tools dump nat sessions
Owner               : LSN-Host@10.10.10.101
Router              : 1
Policy              : mnp
FlowType            : UDP               
Inside IP Addr      : 10.10.10.101                            
Inside Port         : 20001            
Outside IP Addr     : 192.168.20.28                           
Outside Port        : 2001             
Foreign IP Addr     : 192.168.5.4                             
Foreign Port        : 20001            
Dest IP Addr        : 192.168.5.4                             
Nat Group           : 1                
Nat Group Member    : 1 

The foreign IP address is the original IPv4 destination address as received by NAT on the inside. The destination IP address is the translated foreign IP address if destination NAT is active (destination NAT translates the destination IPv4 address of the packet).

Additional information, such as the inside or outside service ID and subscriber string, can be added to a flow record.

Flow logging can be deployed as an alternative to port-range logging or can be complementary (providing a more granular log for offline reporting or compliance). Certain users have legal and compliance requirements that require extremely detailed logs, created per flow, to be exportable from the NAT node.

Because the setup rate of new flows can be very high, logging to an internal facility (such as compact flash) is not possible, except in debugging mode (which must specify match criteria down to the inside IP and service level).

Flow logging can be enabled on a per-NAT policy basis and, consequently, is initiated from each BB-ISA card. The flow records can be exported to an external collector in IPFIX or syslog format, both of which use UDP as the transport protocol. These UDP streams are stateless because of the significant volume of transactions; however, they contain sequence numbers so that packet loss can be identified. They egress the chassis using the network forwarding class.

IPFIX and SYSLOG flow logging are configured using respective flow logging policies:

  • MD-CLI
    configure service ipfix export-policy
    configure service nat syslog export-policy
    configure service nat nat-policy flow-log-policy ipfix
    configure service nat up-nat-policy flow-log-policy ipfix
  • classic CLI
    configure service ipfix ipfix-export-policy
    configure service nat syslog syslog-export-policy
    configure service nat nat-policy ipfix-export-policy
    configure service nat up-nat-policy ipfix-export-policy

Each flow logging policy supports two destinations (collectors). One IPFIX export policy and one syslog export policy can be used simultaneously in any one NAT group.

IPFIX flow logging

IPFIX defines two types of messages that are sent from the IPFIX exporter (the SR OS NAT node). The first contains a template set, which is an IPFIX message that defines the fields for subsequent IPFIX messages but contains no data of its own. The second IPFIX message type contains data sets. Here, the data is passed using the previous template set message to define the fields. This means that an IPFIX message is not passed as a set of TLVs; instead, the data is encoded with a scheme defined through the template set message.

While an IPFIX message can contain both a template set and a data set, the SR OS node sends template set messages periodically without any data, whereas the data set messages are sent on demand and as required. When IPFIX is used over UDP, template set messages are retransmitted every 10 minutes by default. The retransmission interval is configurable in the CLI, with a minimum of one minute and a maximum of 10 minutes. When the exporter first initializes, or when a configuration change occurs, the template set is sent out three times, one second apart. Templates are sent before any data sets, assuming that the collector is enabled, so that an IPFIX collector can establish the data template set.

Although the UDP transport is unreliable, the IPFIX sequence number is a 32-bit number that contains the total number of IPFIX data records sent for the UDP transport session before the receipt of the new IPFIX message. The sequence number starts at 0 and rolls over when it reaches 4,294,967,295.
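
Because the sequence number counts data records (not packets), a collector can estimate record loss by comparing its own record count with the sequence number announced in each received message, using modulo 2^32 arithmetic. The following is a minimal sketch in Python (illustrative only).

Estimate lost data records from IPFIX sequence numbers (Python sketch)

def records_lost(counted: int, announced_seq: int) -> int:
    # counted: data records the collector has received in this UDP session
    # announced_seq: sequence number carried in the newly received message
    # Returns the number of records missing, allowing for 32-bit wraparound.
    return (announced_seq - counted) % (1 << 32)

# Example: the collector counted 4,294,967,290 records, and the next
# message announces sequence number 10: 16 records were lost across
# the 32-bit rollover.
assert records_lost(4_294_967_290, 10) == 16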

The default packet size is 1500 bytes unless another value is defined in the configuration (the range is 512 to 9212 bytes inclusive). Traffic is originated from a random high port to the collector on port 4739. Multiple create and delete flow records are packed into a single IPFIX packet (although the creation of mappings is not delayed) until adding another data record would exceed the MTU or a timer expires. The timer is not configurable and is set to 250 milliseconds (that is, whenever a mapping occurs, a packet is sent within 250 milliseconds of that mapping being created).
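
The packing behavior described above can be illustrated with a short sketch: records accumulate in a buffer, and the buffer is flushed when adding another record would exceed the MTU or when 250 milliseconds have elapsed since the first buffered record. This is a simplified sketch in Python (illustrative only), not the BB-ISA implementation.

Pack flow records until MTU or timer expiry (Python sketch)

import time

class RecordBatcher:
    def __init__(self, mtu=1500, max_delay=0.250, send=print):
        self.mtu, self.max_delay, self.send = mtu, max_delay, send
        self.buf = []            # pending encoded flow records
        self.first_ts = None     # time the oldest buffered record arrived

    def add(self, record: bytes):
        # Flush first if this record would push the packet over the MTU.
        if self.buf and sum(map(len, self.buf)) + len(record) > self.mtu:
            self.flush()
        if not self.buf:
            self.first_ts = time.monotonic()
        self.buf.append(record)

    def poll(self):
        # Call periodically; flushes when the 250 ms timer expires.
        if self.buf and time.monotonic() - self.first_ts >= self.max_delay:
            self.flush()

    def flush(self):
        self.send(b"".join(self.buf))
        self.buf.clear()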

Each collector has a 50-packet buffering space. If, because of excessive logging, the buffering space becomes unavailable, new flows are denied and the deletion of flows is delayed until buffering space becomes available.

Two collector nodes can be defined in the same IPFIX export policy for redundancy purposes.

Template formats

The SR OS supports two data formats, format1 and format2. The following example displays the configuration of format2.

Configure the template-format (MD-CLI)
[ex:/configure service]
A:admin@node-2# info
    ipfix {
        export-policy "test" {
            template-format format2
        }
    }
Configure the template-format (classic CLI)
A:node-2>config>service>ipfix# info
----------------------------------------------
            ipfix-export-policy "test" create
                template-format format2
            exit
----------------------------------------------

The difference between the two formats is related to the fields conveying information about the translated source IP addresses and ports (outside IP addresses and ports).

The format1 option carries information about the translated (outside) IP address in the sourceIPv4Address information element, while in format2 this information element is replaced by postNATSourceIPv4Address. Further, format1 does not convey any information about the translated source port (post-NAT), while a new information element, postNAPTSourceTransportPort, is introduced in format2 to carry this information.

Both formats use the proprietary information element aluNatSubString, which carries the original source IP address, before NAT is performed.

The template and data sets are formatted according to RFC 5101, Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information.

Standardized data fields are defined in RFC 5102, Information Model for IP Flow Information Export, and in the IANA registry at https://www.iana.org/assignments/ipfix/ipfix.xhtml#ipfix-information-elements.

In addition to standardized data fields, IPFIX supports vendor-proprietary data fields, which contain an Enterprise Number specific to each vendor.

The supported information elements and their descriptions for each format are provided in the following table. EN stands for Enterprise Number (0 = IETF, 637 = Nokia) and IE-Id represents the Information Element Identifier.

Table 14. IPFIX fields and formats

  • flowId (EN 0, IE-Id 148)
    Format 1 and format 2: a unique (per-observation domain ID) ID for this flow. Used for tracking purposes only (opaque value). The flow ID in a create and a delete mapping record must be the same for a specific NAT mapping.

  • sourceIPv4Address (EN 0, IE-Id 8)
    Format 1: the outside (translated) IP address used in the NAT mapping. In format2, this information element is replaced by postNATSourceIPv4Address.
    Format 2: N/A

  • postNATSourceIPv4Address (EN 0, IE-Id 225)
    Format 1: N/A
    Format 2: the outside (translated) IP address used in the NAT mapping. This replaces the sourceIPv4Address field from format1.

  • destinationIPv4Address (EN 0, IE-Id 12)
    Format 1 and format 2: the foreign or remote IP address used in the NAT mapping.

  • sourceTransportPort (EN 0, IE-Id 7)
    Format 1: the outside (translated) source port used in the NAT mapping.
    Format 2: the original source port (before NAT translation) on the inside.

  • postNAPTSourceTransportPort (EN 0, IE-Id 227)
    Format 1: N/A
    Format 2: the outside (translated) source port used in the NAT mapping.

  • destinationTransportPort (EN 0, IE-Id 11)
    Format 1 and format 2: the destination port used in the NAT mapping.

  • flowStartMilliseconds (EN 0, IE-Id 152)
    Format 1 and format 2: the timestamp of when the flow was created (chassis NTP derived) in milliseconds from epoch.

  • flowEndMilliseconds (EN 0, IE-Id 153)
    Format 1 and format 2: the timestamp of when the flow was destroyed (chassis NTP derived) in milliseconds from epoch.

  • protocolIdentifier (EN 0, IE-Id 4)
    Format 1 and format 2: the protocol (UDP, TCP, ICMP).

  • flowEndReason (EN 0, IE-Id 136)
    Format 1 and format 2: the reason for flow termination. The following flow end reasons are supported:
    • 0x01: idle timeout. A mapping expired (because of a UDP or TCP timeout).
    • 0x03: end of flow detected. A mapping closed (only used for TCP after a FIN or RST).
    • 0x04: forced end. Collects all other reasons, including administrative and failure cases.

  • paddingOctets (EN 0, IE-Id 210)
    Format 1: padding.
    Format 2: N/A

  • aluInsideServiceId (EN 637, IE-Id 91)
    Format 1 and format 2: the 16-bit service ID representing the inside service ID. This field is not applicable in Layer 2–aware NAT and is set to NULL in this case.

  • aluOutsideServiceId (EN 637, IE-Id 92)
    Format 1 and format 2: the 16-bit service ID representing the outside service ID.

  • aluNatSubString (EN 637, IE-Id 93)
    Format 1 and format 2: a variable 8B-aligned string that represents the NAT subscriber construct, as currently used in the tools dump nat sessions command. The original source IP address, before NAT is performed, is included in this string. For example: LSN-Host@10.10.10.101

Template format 1 and format 2

Template format 1 and Template format 2 show information elements that are present in data sets during flow creation and deletion for the two formats. Each template set carries a unique template ID that is used to match the corresponding data set that carries the same ID in the set header (Set Id field).

Table 15. Template format 1

Flow creation template set    Size (B)   Flow deletion template set    Size (B)
flowId                        8          flowId                        8
sourceIPv4Address             4          sourceIPv4Address             4
destinationIPv4Address        4          destinationIPv4Address        4
sourceTransportPort           2          sourceTransportPort           2
destinationTransportPort      2          destinationTransportPort      2
flowStartMilliseconds         8          flowEndMilliseconds           8
protocolIdentifier            1          protocolIdentifier            1
paddingOctets                 1          flowEndReason                 1
aluInsideServiceID            2          aluInsideServiceID            2
aluOutsideServiceID           2          aluOutsideServiceID           2
aluNatSubString               var        aluNatSubString               var

Table 16. Template format 2

Flow creation template set    Size (B)   Flow deletion template set    Size (B)
flowId                        8          flowId                        8
postNATSourceIPv4Address      4          postNATSourceIPv4Address      4
destinationIPv4Address        4          destinationIPv4Address        4
sourceTransportPort           2          sourceTransportPort           2
postNAPTSourceTransportPort   2          postNAPTSourceTransportPort   2
destinationTransportPort      2          destinationTransportPort      2
flowStartMilliseconds         8          protocolIdentifier            1
protocolIdentifier            1          flowEndReason                 1
paddingOctets                 1          flowEndMilliseconds           8
aluInsideServiceID            2          aluInsideServiceID            2
aluOutsideServiceID           2          aluOutsideServiceID           2
aluNatSubString               var        aluNatSubString               var

Configuring LSN flow logging

Configure policies to manage flow logging for LSN44 with format2.

  1. Define a collector node along with other local transport command options through an IPFIX export policy.
    MD-CLI
    [ex:/configure service ipfix]
    A:admin@node-2# info
    ...
        export-policy "flow-logging" {
            template-format format2
            collector router-instance "Base" ip-address 192.168.115.1 {
                admin-state enable
                mtu 1500
                source-ip-address 192.0.2.2
                refresh-timeout 300
            }
        }
    classic CLI
    A:node-2>config>service>ipfix# info 
    ----------------------------------------------
                ipfix-export-policy "flow-logging" create
                    no description
                    template-format format2
                    collector router "Base" ip 192.168.115.1 create
                        mtu 1500
                        source-address 192.0.2.2
                        template-refresh-timeout min 5 
                        no shutdown
                    exit
                exit

    To export flow records using a UDP stream, the BB-ISA card must be configured with an appropriate IPv4 address within a designated VPRN. This address (/32) acts as the source for sending all IPFIX records and is shared by all ISAs.

  2. Apply the IPFIX export policy created in step 1 within the NAT policy.
    MD-CLI
    [ex:/configure service nat]
    A:admin@node-2# info
        nat-policy "mnp" {
            pool {
                name "mnp"
                router-instance "Base"
            }
            flow-log-policy {
                ipfix "flow-logging"
            }
        }
    
    classic CLI
    A:node-2>config>service>nat# info 
    ----------------------------------------------
                nat-policy "mnp" create
                    pool "mnp" router Base
                    ipfix-export-policy "flow-logging"
                exit

    Flow creation and flow deletion templates for format2, as captured in Wireshark, are shown in the following figure.

    Figure 38. Format2 templates

    IPFIX flow creation data set, as captured in Wireshark, is shown in the following figure.

    Figure 39. Flow creation data set

    IPFIX flow destruction data set, as captured in Wireshark, is shown in the following figure.

    Figure 40. Flow destruction

Syslog flow logging

The format of syslog messages for NAT flow logging in SR OS adheres to RFC 3164, The BSD Syslog Protocol:

<PRI> <HEADER><MSG>

where:

  • <PRI> (the "<" and ">" are included in the syslog message) is the configured facility*8 + severity (as described in the 7450 ESS, 7750 SR, 7950 XRS, and VSR System Management Guide and RFC 3164).

  • <HEADER> defines the MMM DD HH:MM:SS <hostname>. Two characters always appear for the day (DD) field. Single-digit days are preceded with a space character. Time is recorded as local time (and not UTC). The time zone designator is not shown in this example, but each event has its own timestamp where the time-zone designator is shown.

  • <MSG> defines the <log-prefix>: <seq> <application> [<subject>]: <message>\n

where:

  • <log-prefix> is an optional 32-character string of text (default = 'TMNX') as configured in the log-prefix command.

  • <seq> is the log event sequence number (always preceded by a colon and a space char).

  • The [<subject>] field may be empty resulting in []:

  • <message> displays a custom message relevant to the log event.

  • \n is the standard ASCII new line character (hex 0A).

Syslog message fields for NAT flow logging shows the syslog message fields for NAT flow logging.

Table 17. Syslog message fields for NAT flow logging

  • PRI
    severity (default: 6, configurable) and facility (default: 16, configurable)

  • Timestamps
    MMM DD HH:MM:SS

  • <hostname>
    The IP address of the SR OS system that is generating the message.

  • <log-prefix>
    Configurable. This can be used as a field to differentiate between vendors. For example, NOK(ia) in the log-prefix indicates that this is a log format from a Nokia node, so the user can apply parsing logic accordingly.

  • <seq>
    Sequence numbers can be used for tracking if loss in transit occurs.

  • <application>
    NAT; the application that generated the log.

  • [<subject>]
    MDA ID; the BB-ISA on which the event occurred.

  • <message>
    A custom part with specific information related to the event itself.

The message portion contains information relevant to the respective log event, even if this information is already repeated outside of the message (for example, the timestamp). The fields in the message part are separated by a single whitespace for easier parsing and are placed in the order shown in Message fields.

Table 18. Message fields

  • NAT type
    Value: LSN44 or NAT64. Presence: M(andatory).

  • Event name
    Value: SADD or SDEL. Presence: M. SADD is a session added event; SDEL is a session deleted event.

  • Timestamp
    Value: <TimeStamp>: <Year> <Mon> <Day> <hh:mm:ss:cs> <TZ>, where Year is 4 digits, Mon is a 3-letter abbreviation, and TZ is a 1-5 character time-zone designator. Presence: M. Because events can be combined in the same syslog message, each event is uniquely timestamped with the local time (not UTC), including the time-zone designator. During daylight saving time (summer), the time-zone designator is replaced by the DST designator, which is configurable.

  • Protocol ID
    Value: 1, 6, or 17. Presence: M. ICMP (1), TCP (6), or UDP (17).

  • Inside router
    Value: 0 to 2147483650. Presence: M. 0 represents the Base router; 1 to 2147483650 represents VPRNs.

  • Source IP address
    Value: IPv4 address in LSN44, IPv6 address in NAT64. Presence: M.

  • Source port or ICMP identifier
    Value: 0 to 65535. Presence: M.

  • Outside router
    Value: 0 to 2147483650. Presence: M.

  • Outside (post-NAT) IP address
    Value: IPv4 address. Presence: M.

  • Outside (post-NAT) port or ICMP identifier
    Value: 0 to 65535. Presence: M.

  • Foreign IP address
    Value: IPv4 address. Presence: O(ptional). This is the original destination IPv4 address.

  • Foreign port or ICMP identifier
    Value: 0 to 65535. Presence: O.

  • Destination IP address
    Value: IPv4 address. Presence: O. It represents the translated destination IP address.

  • Nat-policy
    Value: <name>. Presence: O.

  • Sub-ID
    Value: <sub-name>. Presence: O. "-" if requested by the configuration (the sub-id statement is included) but subscriber-aware NAT is not enabled; otherwise, the sub-ID in subscriber-aware NAT.

Sequence numbers

Each syslog message contains a sequence number. The sequence numbers are independently generated by each BB-ISA per collector, and they are monotonically increased by 1. The MDA ID carried in the syslog message is used to differentiate between the overlapping sequence numbers generated by different BB-ISAs in the same NAT group.

Timestamp

Each flow creation or deletion event is timestamped individually using the local time in the system. The event timestamp (including the time-zone designator) is carried in the message part of the log. This timestamp is carried in addition to the syslog timestamp which is generated at the time of syslog message generation and carried in the <HEADER> part of the syslog messages.

Event aggregation

By default, flow logging events are transported to the collector as fast as they are generated. This does not imply that each event is transported individually; instead, a few events can still be aggregated in a single message. However, this aggregation is not user controllable and depends on the current conditions in the system (events that are generated at approximately the same time).

To further optimize transport of logging events to the collector, the events can be aggregated in a controlled fashion. The flow logging events can be aggregated based on:

  • expiry of a configurable timer

  • transport message size (logs are collected until the size of the syslog message reaches the MTU size)

Whichever of the two conditions is met first triggers the generation of a syslog frame carrying multiple events. The separating character between the logs in a syslog message is "|", surrounded by whitespace on each side.

<186>Jan 11 18:51:22 135.221.38.108 NOK: 47 NAT [MDA 1/1]: NAT44 SADD 2017 Jan 11 18:51:22:50 PST 6 0 10.10.10.1 3000 20 11.11.11.11 5000 12.12.12.12 8000 pol-name-1 sub-1 | NAT44 SADD 2017 Jan 11 18:51:22:60 PST 6 0 10.10.10.2 4000 20 11.11.11.11 6000 13.13.13.13 9000 pol-name-1 sub-1\n
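
The aggregated message above can be decomposed by splitting on the " | " separator and decoding the PRI field as facility*8 + severity. The following is a minimal parsing sketch in Python (illustrative only; it extracts just the leading mandatory fields listed in Message fields, and all names are assumptions).

Parse an aggregated NAT syslog message (Python sketch)

import re

def decode_pri(pri: int):
    # PRI = facility*8 + severity
    return pri // 8, pri % 8

line = ('<186>Jan 11 18:51:22 135.221.38.108 NOK: 47 NAT [MDA 1/1]: '
        'NAT44 SADD 2017 Jan 11 18:51:22:50 PST 6 0 10.10.10.1 3000 20 '
        '11.11.11.11 5000 12.12.12.12 8000 pol-name-1 sub-1 | '
        'NAT44 SADD 2017 Jan 11 18:51:22:60 PST 6 0 10.10.10.2 4000 20 '
        '11.11.11.11 6000 13.13.13.13 9000 pol-name-1 sub-1')

pri = int(re.match(r'<(\d+)>', line).group(1))
print(decode_pri(pri))                # (23, 2): facility 23, severity 2

events = line.split(']: ', 1)[1].split(' | ')   # one entry per flow event
for event in events:
    nat_type, event_name = event.split()[:2]    # for example, NAT44 SADD
    print(nat_type, event_name)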

Syslog transmission rate limit and overload conditions

The transmission rate of syslog messages can be limited by configuration. The rate limit is enforced in packets per second (pps). When the rate limit is exceeded, NAT flow logs are buffered. An overload condition is characterized by the exhaustion of this buffer space. This condition can occur because of the imposed rate limit or the software speed limit. After the buffer space is exhausted, new flow creation is denied, and the teardown of existing flows is delayed until buffer space becomes available. The rate limit determines how fast the buffers are freed (by sending packets to the collector).

Logging of port forwards via RADIUS

Logging of port forwards via RADIUS is optional functionality that can be enabled in addition to the logging of port blocks.

Enable logging of port forwards via RADIUS (MD-CLI)

[ex:/configure aaa]
A:admin@node-2# info
    radius {
        isa-policy "demo" {
            accounting {
                include-attributes {
                    nat-port-forward-logging true
                }
            }
        }
    }

Enable logging of port forwards via RADIUS (classic CLI)

A:node-2>config>aaa# info
----------------------------------------------
        isa-radius-policy "demo" create
            acct-include-attributes
                port-forward-logging
            exit
        exit
----------------------------------------------

The logging information is conveyed in vendor specific RADIUS attribute Alc-Nat-Port-Forward. For more information about this attribute, see the 7450 ESS, 7750 SR, and VSR RADIUS Attributes Reference Guide.

In essence, this attribute contains the protocol, port-forwarding type, inside and outside IP address, inside to outside port mapping, and outside routing context.

The NAT type is derived from the User-Name RADIUS attribute, and the inside service is conveyed in a separate vendor specific RADIUS attribute, Alc-Serv-Id, that must be explicitly included in RADIUS accounting.

Note: PCP port forwards can also be logged using CPM logger and syslog with the following command:
  • MD-CLI
    configure log log-events nat event tmnxNatFwd2EntryAdded
  • classic CLI
    configure log event-control "nat" tmnxNatFwd2EntryAdded

DS-Lite and NAT64 fragmentation

Overview

Fragmentation functionality is invoked when the size of a fragmentation-eligible packet exceeds the MTU of the egress interface or tunnel. Packets eligible for fragmentation are:

  • IPv4 packets or fragments with the DF bit in the IPv4 header cleared. Fragmentation can be performed on any routing node between the source and the destination of the packet.

  • IPv6 packets on the source node. Fragmentation of IPv6 packets on transit routing nodes is not allowed.

The best practice is to avoid fragmentation in the network by ensuring an adequate MTU size on the transit and source nodes. The drawbacks of fragmentation are:

  • increased processing and memory demands on the network nodes (especially during the reassembly process)

  • increased byte overhead

  • increased latency

Fragmentation can be particularly deceptive in a tunneled environment, where the tunnel encapsulation adds extra overhead to the original packet. This extra overhead can tip the size of the resulting packet over the egress MTU limit.

Fragmentation can be a solution in cases where a restriction in the MTU size on the packet's path from source to destination cannot be avoided. Routers support IPv6 fragmentation in DS-Lite and NAT64 with some enriched capabilities, such as optional IPv6 fragmentation even in cases where the DF bit in the corresponding IPv4 packet is set.

In general, the lengths of the fragments must be chosen such that the resulting fragment packets fit within the MTU of the path to the packet's destination.

In the downstream direction, fragmentation can be implemented in two ways:

  • The IPv4 packet can be fragmented in the carrier IOM before it reaches the ISA for any NAT function.

  • The IPv6 packet can be fragmented in the ISA, after the IPv4 packet is IPv6 encapsulated in DS-Lite or IPv6 translated in NAT64.

In the upstream direction, IPv4 packets can be fragmented after they are decapsulated in DS-Lite or translated in NAT64. The fragmentation occurs in the IOM.

IPv6 fragmentation in DS-Lite

In the downstream direction, the IPv6 packet carrying the IPv4 packet (IPv4-in-IPv6) is fragmented in the ISA when the configured DS-Lite tunnel MTU is smaller than the size of the IPv4 packet that is to be tunneled inside the IPv6 packet. The maximum IPv6 fragment size is 48 bytes larger than the value set by the tunnel MTU. The additional 48 bytes are added by the IPv6 header fields: 40 bytes for the basic IPv6 header plus 8 bytes for the IPv6 fragmentation extension header. The NAT implementation in the routers does not insert any IPv6 extension headers other than the fragmentation header.

DS-Lite shows DS-Lite IPv6 fragmentation.

Figure 41. DS-Lite

If the IPv4 packet is larger than the value set by the tunnel MTU, the fragmentation action depends on the configuration options and the DF bit setting in the received IPv4 header:

  • The IPv4 packet can be dropped, regardless of the DF bit setting. IPv6 fragmentation is disabled.

  • The IPv4 packet can be encapsulated in an IPv6 packet and the IPv6 packet then fragmented, regardless of the DF bit setting in the tunneled IPv4 packet. The IPv6 fragment payload is limited to the value set by the tunnel MTU.

  • The IPv4 packet can be encapsulated in an IPv6 packet and the IPv6 packet then fragmented only if the DF bit is cleared. The IPv6 fragment payload is limited to the value set by the tunnel MTU.

If the IPv4 packet is dropped because fragmentation is not allowed, an ICMPv4 Datagram Too Big message is returned to the source. This message carries the supported MTU size, notifying the source to reduce its MTU to the requested value (the tunnel MTU).

The maximum number of supported fragments per IPv6 packet is 8. Considering that the minimum standard MTU for IPv6 is 1280 bytes, 8 fragments are enough to cover jumbo Ethernet frames.

Use the following commands to configure downstream IPv6 fragmentation behavior in DS-Lite and the DS-Lite tunnel MTU for this DS-Lite address:

  • MD-CLI
    configure router nat inside large-scale dual-stack-lite endpoint ip-fragmentation
    configure router nat inside large-scale dual-stack-lite endpoint tunnel-mtu
    configure service vprn nat inside large-scale dual-stack-lite endpoint ip-fragmentation
    configure service vprn nat inside large-scale dual-stack-lite endpoint tunnel-mtu
    
  • classic CLI
    configure router nat inside dual-stack-lite address ip-fragmentation
    configure router nat inside dual-stack-lite address tunnel-mtu
    configure service vprn nat inside dual-stack-lite address ip-fragmentation
    configure service vprn nat inside dual-stack-lite address tunnel-mtu
    

NAT64

Downstream fragmentation in NAT64 works in a similar fashion. The difference from DS-Lite is that in NAT64 the configured ipv6-mtu represents the MTU size of the IPv6 packet (as opposed to the payload of the IPv6 tunnel in DS-Lite). In addition, the IPv4 packet in NAT64 is not tunneled; instead, the IPv4 and IPv6 headers are translated. Consequently, the fragmented IPv6 packet size is 28 bytes larger than the translated IPv4 packet: a 20-byte difference in basic IP header sizes (a 40-byte IPv6 header versus a 20-byte IPv4 header) plus 8 bytes for the IPv6 fragmentation extension header. The only IPv6 extension header that NAT64 generates is the fragmentation header.

If the IPv4 packet is dropped because fragmentation is not allowed, the returned ICMP message contains an MTU size of ipv6-mtu minus 28 bytes. Otherwise, the fragmentation options are the same as in DS-Lite.
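
The size relationships for both technologies reduce to simple arithmetic: DS-Lite adds 48 bytes of encapsulation overhead (40-byte IPv6 header plus 8-byte fragmentation header), while NAT64 adds 28 bytes of translation overhead (20-byte header size difference plus 8-byte fragmentation header). The following worked sketch in Python (illustrative only) summarizes this.

DS-Lite and NAT64 size arithmetic (Python sketch)

DSLITE_OVERHEAD = 40 + 8        # IPv6 base header + fragmentation header
NAT64_OVERHEAD = (40 - 20) + 8  # IPv6/IPv4 header difference + frag header

def dslite_max_fragment(tunnel_mtu: int) -> int:
    # Largest IPv6 fragment produced for a DS-Lite tunnel payload
    return tunnel_mtu + DSLITE_OVERHEAD

def nat64_icmp_reported_mtu(ipv6_mtu: int) -> int:
    # MTU reported in the ICMPv4 message when fragmentation is not allowed
    return ipv6_mtu - NAT64_OVERHEAD

assert dslite_max_fragment(1500) == 1548       # tunnel-mtu 1500
assert nat64_icmp_reported_mtu(1280) == 1252   # ipv6-mtu 1280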

Use the following commands to configure downstream IPv6 fragmentation behavior in NAT64 and the size of the IPv6 downstream packet in NAT64:

  • MD-CLI
    configure router nat inside large-scale nat64 ip-fragmentation
    configure router nat inside large-scale nat64 endpoint ipv6-mtu
    configure service vprn nat inside large-scale nat64 ip-fragmentation
    configure service vprn nat inside large-scale nat64 ipv6-mtu
  • classic CLI
    configure router nat inside nat64 ip-fragmentation
    configure router nat inside nat64 ipv6-mtu
    configure service vprn nat inside nat64 ip-fragmentation
    configure service vprn nat inside nat64 ipv6-mtu

DS-Lite reassembly

In a tunneled environment such as DS-Lite, a fragmented packet must be reassembled in the end node before it is decapsulated. DS-Lite reassembly is implemented in-line, which means that the reassembly function runs in the same MS-ISA where native DS-Lite processing occurs. The presence of the Fragment Extension header in the IPv6 header signals the need for reassembly for all traffic destined for the AFTR in the upstream direction.

Fragments of a frame can be buffered for up to two seconds in an MS-ISA while waiting for all fragments of the original frame to arrive so that the frame can be reassembled.

DS-Lite reassembly is performed only in the upstream direction.

Interpreting fragmentation statistics

Use the following command to display fragmentation statistics in DS-Lite.

show isa nat-group statistics mda

The command output displays only relevant DS-Lite fragmentation counters. The remaining counters are removed from the output for easier reading.

show isa nat-group 1 statistics mda 1/2

Fragmentation statistics in DS-Lite

===============================================================================
ISA NAT Group 1 MDA 1/2
===============================================================================
...
too many fragments for IP packet                        : 0
too many fragmented packets                             : 0
too many fragment holes                                 : 0
too many frags buffered                                 : 0
fragment list expired                                   : 0
Reassembly Failures                                     : 0
Fragments RX DSL                                        : 0
Fragments RX DORMANT                                    : 0
Fragments TX DSL                                        : 0
Fragments TX DORMANT                                    : 0
Fragments RX OUT                                        : 0
Fragments TX OUT                                        : 0
too many frag. lists for flow                           : 0
frag. list cleanup in progress                          : 0
...
===============================================================================

To interpret these counters, familiarity with the following terms in the context of fragmentation is required:

  • packet

    An IP packet that is split into fragments (multiple frames) because its original size is larger than the MTU configured on a node servicing this packet.

  • fragment

    A fragment is one of the frames that together make up a packet. Multiple fragments (frames on the wire) are eventually reassembled into the original packet.

  • fragment list

    The MS-ISA maintains a list of fragments belonging to a single packet. Each list represents a single fragmented packet and contains multiple fragments.

  • fragment hole

    A hole refers to a missing fragment or a group of consecutive missing fragments in a fragment list. Fragments of a packet are sequentially numbered from first to last and they must be reassembled in the same order in which they were fragmented. For example, if a packet contains 9 fragments but only fragments [1,3,5,9] are received by the MS-ISA, then there are 3 holes in this list: [2], [4], and [6,7,8] (see the sketch after this list).

  • flow

    This is identified by the 5-tuple <src IP, dst IP, src port, dst port, protocol>. Flows can have many packets, and each packet of a flow can be fragmented.
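
The hole bookkeeping in the list above can be illustrated by deriving the holes from the set of received fragment numbers. The following is a minimal sketch in Python (illustrative only).

Derive fragment holes from received fragments (Python sketch)

def fragment_holes(received: set, total: int) -> list:
    # Group the missing fragment numbers 1..total into consecutive runs;
    # each run is one hole in the fragment list.
    holes, run = [], []
    for n in range(1, total + 1):
        if n in received:
            if run:
                holes.append(run)
                run = []
        else:
            run.append(n)
    if run:
        holes.append(run)
    return holes

# 9 fragments expected, only 1, 3, 5, and 9 received: 3 holes remain.
assert fragment_holes({1, 3, 5, 9}, 9) == [[2], [4], [6, 7, 8]]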

The following table describes the counter names.

Table 19. Counter names and descriptions
Counter name Description

too many fragments for IP packet

This counter increments if there are more than 20 fragments for a single received packet (the maximum number of fragments per packet is 20). In this case, all fragments of the packet are dropped.

too many fragmented packets

This counter increases if the maximum number of fragmented packets per MS-ISA is reached. See the MS-ISA Scaling Guides for the maximum number of fragmented packets per MS-ISA (specifically, the max num of frag lists option).

too many fragment holes

This counter increments if there are more than 11 holes tracked for a single packet. In this case, all fragments are dropped.

too many frags buffered

This counter increases if the maximum number of fragments that can be stored on MS-ISA is reached.

fragment list expired

This counter increases if all fragments of a single packet are not received within two seconds. In this case, all fragments of this packet are dropped.

too many frag. lists for flow

This counter increases when more than five fragmented packets per flow are maintained simultaneously in the MS-ISA. In this case, the fragments of the sixth packet are dropped.

Reassembly Failures

This counter increases when the reassembly of a packet fails after all fragments are received. This can be attributed to the size of the first DS-Lite IPv6 fragment being smaller than 1280 bytes, or to the total reassembled packet being too large (greater than 9212 bytes).

Fragments RX DSL

This counter increments only when a DS-Lite packet/fragment received in the upstream direction contains an IPv4 fragment. In other words, this counter is relevant only to IPv4 fragments inside the DS-Lite packet/fragment received from the subscriber, and is not affected by DS-Lite fragments.

Fragments TX DSL

This counter increments only when a DS-Lite packet/fragment sent in the downstream direction contains an encapsulated IPv4 fragment (received from the public side). In other words, this counter is relevant only to IPv4 fragments inside the DS-Lite packet/fragment sent toward the subscriber, and is not affected by DS-Lite fragments.

Fragments RX OUT

This counter increments when an IPv4 fragment is received in the downstream direction, toward the subscriber.

Fragments TX OUT

This counter increments when an IPv4 fragment is transmitted in the upstream direction (public side).

Support for small first fragments

RFC 8200, Internet Protocol, Version 6 (IPv6) Specification, recommends a minimum MTU size of 1280 bytes in IPv6. However, some devices sourcing IPv6 traffic do not follow this recommendation and fragment packets to a size smaller than 1280 bytes. To accommodate these devices, the DS-Lite implementation in SR OS can process first fragments (with the fragment offset equal to 0 in the IPv6 fragmentation header) that are smaller than 1280 bytes. SR OS can reassemble such packets in the upstream direction, as well as fragment them in the downstream direction.

Upstream reassembly with small first IPv6 fragments less than 1280 bytes

By default, DS-Lite implementation in SR OS drops the first fragment in the upstream direction if it is smaller than 1280 bytes.

For a router instance, use the following command to configure the minimum MTU size for the first fragment in the upstream direction and enable the processing of first IPv6 fragments smaller than 1280 bytes:

  • MD-CLI

    configure router nat inside large-scale dual-stack-lite endpoint min-first-fragment-size-rx
  • classic CLI

    configure router nat inside dual-stack-lite address min-first-fragment-size-rx

For a VPRN service, use the following command to configure the minimum MTU size for the first fragment in the upstream direction and enable the processing of first IPv6 fragments smaller than 1280 bytes:

  • MD-CLI

    configure service vprn nat inside large-scale dual-stack-lite endpoint min-first-fragment-size-rx
  • classic CLI

    configure service vprn nat inside dual-stack-lite address min-first-fragment-size-rx

Downstream fragmentation with small first IPv6 fragment

Use the following command to set the size of the frames in the downstream direction for a router instance:

  • MD-CLI

    configure router nat inside large-scale dual-stack-lite endpoint tunnel-mtu
  • classic CLI

    configure router nat inside dual-stack-lite address tunnel-mtu

Use the following command to set the size of the frames in the downstream direction for a VPRN service:

  • MD-CLI

    configure service vprn nat inside large-scale dual-stack-lite endpoint tunnel-mtu
  • classic CLI

    configure service vprn nat inside dual-stack-lite address tunnel-mtu

The tunnel-mtu value represents the size of the IPv4 payload which is encapsulated in the IPv6 packet with an additional 48-byte header. These IPv6 packets can then be fragmented when fragmentation is enabled.

Histogram

The distribution of the following resources in a NAT pool is tracked in the form of a histogram:

  • Ports and subscribers

    The distribution of outside ports in a NAT pool is tracked for an aggregate number of subscribers. The output of the following command can reveal the number of subscribers in a pool that are heavy port users, or it can reveal the average number of ports used by most subscribers.

    show router nat pool histogram
  • Port blocks and subscribers in a NAT pool

    The distribution of port blocks in a Layer 2–aware NAT pool is tracked for an aggregate number of subscribers. The output of the histogram command can reveal how subscribers are using port blocks in the aggregate.

  • Subscribers and IP addresses

    The distribution of subscribers across IP addresses is tracked. The output of the histogram command is used to determine if any substantial imbalances exist.

  • Extended port blocks and outside IP addresses in a NAT pool

    The distribution of extended port blocks in the NAT pool is tracked in relation to an aggregate number of outside IP addresses. The output of the histogram command can reveal how extended port blocks are distributed over IP addresses in an aggregate. This is applicable only for a Layer 2–aware NAT pool with extended port blocks enabled, or a deterministic LSN pool.

The user can use the displayed information to adjust the port block size per subscriber or the number of port blocks per subscriber, and to observe port usage trends over time. Consequently, the user can adjust the configuration as the port demand per subscriber increases or decreases. For example, a user may find that the port usage in a pool increased over a period of time. Accordingly, the user can plan to increase the number of ports per port block.
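
The bucketing performed by the histogram commands can be reproduced offline from exported data. The following is a minimal sketch in Python (illustrative only; the bucket-size and num-buckets arguments mirror the command options, and the last bucket is open-ended).

Bucket subscribers by port usage (Python sketch)

from collections import Counter

def histogram(ports_per_sub, bucket_size: int, num_buckets: int) -> Counter:
    # Count subscribers per port-usage bucket; bucket index num_buckets - 1
    # collects everything at or above the last boundary.
    buckets = Counter()
    for ports in ports_per_sub:
        buckets[min(ports // bucket_size, num_buckets - 1)] += 1
    return buckets

# Three light users and one heavy user, with 200-port buckets:
print(histogram([50, 150, 380, 2500], bucket_size=200, num_buckets=10))
# Counter({0: 2, 1: 1, 9: 1})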

Execute the following show commands to display the histogram output.

Ports and subscribers per NAT pool (L2-Aware or LSN)

Use the following command to show ports and subscribers per NAT pool (L2-Aware or LSN). The output is organized in port buckets with the number of subscribers in each bucket.

show router nat pool "pool-1" histogram ports bucket-size 200 num-buckets 10

Ports and subscribers per NAT pool (L2-Aware or LSN) output

===============================================================================
Usage histogram NAT pool "pool-1" router "Base"
===============================================================================
Num-ports   Sub-TCP    Sub-UDP    Sub-ICMP
-------------------------------------------------------------------------------
1-199       17170      0          0
200-399     8707       0          0
400-599     2406       0          0
600-799     635        0          0
800-999     322        0          0
1000-1199   0          0          0
1200-1399   0          0          0
1400-1599   0          0          0
1600-1799   0          0          0
1800-       0          0          0
-------------------------------------------------------------------------------
No. of entries: 10
===============================================================================

Port blocks and subscribers per NAT pool (L2-Aware and LSN)

Use the following command to show port blocks and subscribers per NAT pool (L2-Aware or LSN). The output is organized by the increasing number of port blocks in a NAT pool, with each line indicating the number of subscribers using that number of port blocks.

show router nat pool "l2a" histogram port-blocks

Port blocks and subscribers per NAT pool (L2-Aware and LSN) output

===============================================================================
Usage histogram NAT pool "l2a" router "Base" port blocks per subscriber
===============================================================================
Num port-blocks      Num subscribers
-------------------------------------------------------------------------------
1                    17398
2                    8550
3                    2352
4                    940
5                    0
6                    0
7                    0
8                    0
9                    0
10                   0
-------------------------------------------------------------------------------
No. of entries: 10
===============================================================================

Subscribers and IP addresses per NAT pool (LSN)

Use the following command to show subscribers and IP addresses per NAT pool (LSN). The output is organized in buckets, where each bucket shows how the subscribers are spread over the preferred outside IP addresses. For example, the output of the following command shows that each of the 513 IP addresses in the pool has 120 to 129 subscribers. This is a fairly even distribution of subscribers over IP addresses and a favorable output of this command.

show router 5 nat pool "demo" histogram subscribers-per-ip bucket-size 10 num-buckets 50

Subscribers and IP addresses per NAT pool (LSN) output

===============================================================================
Usage histogram NAT pool "demo" router 5 subscribers per IP address
===============================================================================
Num subscribers     Num IP addresses
-------------------------------------------------------------------------------
1-9                 0
10-19               0
20-29               0
30-39               0
40-49               0
50-59               0
60-69               0
70-79               0
80-89               0
90-99               0
100-109             0
110-119             0
120-129             513
130-139             0
140-149             0
150-159             0
160-169             0
170-179             0
180-189             0
190-199             0
200-209             0
210-219             0
220-229             0
230-239             0
240-249             0
250-259             0
260-269             0
270-279             0
280-289             0
290-299             0
300-309             0
310-319             0
320-329             0
330-339             0
340-349             0
350-359             0
360-369             0
370-379             0
380-389             0
390-399             0
400-409             0
410-419             0
420-429             0
430-439             0
440-449             0
450-459             0
460-469             0
470-479             0
480-489             0
490-                0

Extended port blocks in a NAT pool and outside IP addresses (Layer 2–aware NAT and deterministic LSN)

Use the following command to show extended port blocks and outside IP addresses per NAT pool. The output is organized in extended port-block buckets with the number of outside IP addresses in each bucket.

show router nat pool "l2a" histogram extended-port-blocks-per-ip bucket-size 1 num-buckets 10

Extended port blocks in a NAT pool and outside IP addresses (Layer 2–aware NAT and deterministic LSN) output

===============================================================================
Usage histogram NAT pool "l2a" router "Base" extended port blocks per IP address
===============================================================================
Num extended-port-blocks      Num IP addresses
-------------------------------------------------------------------------------
-                             -
1-1                           1039
2-2                           6182
3-3                           777
4-4                           194
5-5                           0
6-6                           0
7-7                           0
8-8                           0
9-                            0
-------------------------------------------------------------------------------
No. of entries: 10
===============================================================================

The output of each command can be periodically exported to an external destination with the cron command.

The following example displays the script, script policy, and CRON configuration.

Configure the script, script policy, and CRON (MD-CLI)

[ex:/configure system]
A:admin@node-2# info
...
    cron {
        schedule "nat_histogram_schedule" owner "TiMOSCLI" {
            admin-state enable
            interval 600
            script-policy {
                name "dump_nat_histogram"
            }
        }
    }
...
    script-control {
        script "nat_histogram" owner "TiMOSCLI" {
            admin-state enable
            location "ftp://*:*@138.203.8.62/nat-histogram.txt"
        }
        script-policy "dump_nat_histogram" owner "TiMOSCLI" {
            admin-state enable
            results "ftp://*:*@138.203.8.62/nat_histogram_results.txt"
            script {
                name "nat_histogram"
            }
        }
    }

Configure the script, script policy, and CRON (classic CLI)

A:node-2>config>system# info
----------------------------------------------
#--------------------------------------------------
echo "System Configuration"
#--------------------------------------------------
...
        script-control
            script "nat_histogram" owner "TiMOSCLI"
                no shutdown
                location "ftp://*:*@138.203.8.62/nat-histogram.text"
            exit
            script-policy "dump_nat_histogram" owner "TiMOSCLI"
                no shutdown
                results "ftp://*:*@130.203.8.62/nat_histogram_results.text"
                script "nat_histogram"
            exit
        exit
        cron
            schedule "nat_histogram_schedule" owner "TiMOSCLI"
                interval 600
                script-policy "dump_nat_histogram"
                no shutdown
            exit
        exit
#--------------------------------------------------

The nat-histogram.txt file contains the command execution line.

show router nat pool "pool-1" histogram ports bucket-size 200 num-buckets 10

This command is executed every 10 minutes (600 seconds) and the output of the command is written into a set of files on an external FTP server, as displayed in the following example.

Files on an external FTP server

[root@ftp]# ls nat_histogram_results.txt*
    nat_histogram_results.txt_20130117-153548.out
    nat_histogram_results.txt_20130117-153648.out
    nat_histogram_results.txt_20130117-153748.out
    nat_histogram_results.txt_20130117-153848.out
    nat_histogram_results.txt_20130117-153948.out
    nat_histogram_results.txt_20130117-154048.out
    [root@ftp]#

NAT redundancy

NAT ISA redundancy helps protect against Integrated Service Adapter (ISA) failures. This protection mechanism relies on the CPM maintaining a configuration copy of each ISA. If an ISA fails, the CPM restores the NAT configuration from the failed ISA to the remaining ISAs in the system. The NAT configuration copy of each ISA, as maintained by the CPM, is concerned with the configuration of outside IP addresses and port forwards on each ISA. However, the CPM does not maintain the state of dynamically created translations on each ISA. This causes an interruption in traffic until the translations are re-initiated by the devices behind the NAT.

Two modes of operation are supported:

  • Active/Standby (A/S)

    In this mode of operation, any number of standby ISAs can be allocated for protection purposes. When there are no failures in the router, standby ISAs are idle, in a state ready to accept traffic from a failed ISA. The mapping between the failed ISA and the standby ISA is always 1:1; one standby ISA entirely replaces one failed ISA. In this respect, NAT bandwidth from the failed ISA is reserved and restored upon failure. This model is shown in Active/Standby intra-chassis redundancy model.

    Figure 42. Active/Standby intra-chassis redundancy model
  • Active/Active (A/A)

    In this mode, all ISAs in the system are active. When an ISA fails, its load is distributed across the remaining active ISAs. In this mode of operation, there is no bandwidth reservation across active ISAs; each ISA can operate at full speed at any time. However, the memory resources necessary to set up new translations from the failed ISAs are reserved. The reserved resources are:

    • subscribers (inside IPv4 addresses for LSN44, IPv6 prefixes for DS-Lite/NAT64 and L2-Aware subscriber)

    • outside IPv4 addresses

    • outside port-ranges

By reserving memory resources, it is assured that failed traffic can be recovered by the remaining ISAs, potentially with some bandwidth reduction if the remaining ISAs operated at or close to full speed before the failure occurred. The A/A ISA redundancy model is shown in Active/Active intra-chassis redundancy model.

Figure 43. Active/Active intra-chassis redundancy model

If an ISA fails, the member-id of the failed member ISA is contained in the FREE log. This information is used to find the corresponding MAP log, which also contains the member-id field.

In the case of RADIUS logging, a CPM summarization trap is generated (because the RADIUS log is sent from the ISA, which has failed).

NAT stateless dual-homing

Note: See "NAT Stateless Dual-Homing" in the 7450 ESS, 7750 SR, and VSR Multiservice ISA and ESA Advanced Configuration Guide for Classic CLI for information about advanced configurations.

Multichassis stateless NAT redundancy is based on a switchover of the NAT pool that can assume active (master) or standby state. The inside/outside routes that attract traffic to the NAT pool are always advertised from the active node (the node on which the pool is active).

This dual-homed redundancy based on the pool mastership state works well in scenarios where each inside routing context is configured with a single NAT policy (NAT’d traffic within this inside routing context is mapped to a single NAT pool).

However, in cases where the inside traffic is mapped to multiple pools (with deterministic NAT and in case when multiple NAT policies are configured per inside routing context), the basic per pool multichassis redundancy mode can cause the inside traffic within the same routing instance to fail because some pools referenced from the routing instance may be active on one node while other pools may be active on the other node.

Imagine a case where traffic ingressing the same inside routing instance is mapped as follows (this mapping can be achieved via filters):

  • Source ip-address A → Pool 1 (nat-policy 1) active on Node 1

  • Source ip-address B → Pool 2 (nat-policy 2) active on Node 2

Traffic for the same destination is normally attracted to only one NAT node (the destination route is advertised only from a single NAT node). Assume that this node is Node 1 in the example. After the traffic arrives at the NAT node, it is mapped to the corresponding pool according to the mapping criteria (routing-based or filter-based). But if the active pools are not co-located, traffic destined for the pool that is active on the neighboring node fails. In this example, traffic from source IP address B arrives at Node 1, while the corresponding Pool 2 is inactive on that node. Consequently, traffic forwarding fails.

To remedy this situation, a group of pools targeted from the same inside routing context must be active on the same node simultaneously. In other words, the active pools referenced from the same inside routing instance must be co-located. This group of pools is referred to as a Pool Fate Sharing Group (PFSG). The PFSG is defined as the group of all NAT pools referenced by a set of inside routing contexts whereby at least one of those pools is shared by those inside routing contexts. This is shown in Pool fate sharing group.

Even though only Pool 2 is shared between subscribers in VRF 1 and VRF 2, the remaining pools in VRF 1 and VRF 2 must be made part of PFSG 1 as well.

This ensures that the inside traffic is always mapped to pools that are active in a single box.

Pool fate sharing group shows the pool fate sharing group.

Figure 44. Pool fate sharing group

There is always one lead pool in a PFSG. The lead pool is the only pool that exports and monitors routes. The other pools in the PFSG reference the lead pool and inherit its activity state. If any of the pools in the PFSG fails, all the pools in the PFSG switch activity; in other words, they share the fate of the lead pool (active/standby/disabled).

There is one lead pool per PFSG per node in a dual-homed environment. Each lead pool in a PFSG has its own export route that must match the monitoring route of the lead pool in the corresponding PFSG on the peering node.

PFSG is implicitly enabled by configuring multiple pools to follow the same lead pool.
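
Because a PFSG is the transitive closure of pools connected through shared inside routing contexts, it can be computed as connected components over a graph of pools and inside contexts. The following is a minimal sketch in Python (illustrative only; the input maps each inside routing context to the set of pools it references).

Compute pool fate sharing groups (Python sketch)

from collections import defaultdict

def pool_fate_sharing_groups(ctx_pools: dict) -> list:
    # Union-find over pools: pools referenced by the same inside routing
    # context share fate, and sharing is transitive across contexts.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for pools in ctx_pools.values():
        pools = sorted(pools)
        for p in pools:
            find(p)                           # register every pool
        for p in pools[1:]:
            parent[find(p)] = find(pools[0])  # merge into one group

    groups = defaultdict(set)
    for p in parent:
        groups[find(p)].add(p)
    return list(groups.values())

# VRF 1 references Pool 1 and Pool 2; VRF 2 references Pool 2 and Pool 3.
# Pool 2 is shared, so all three pools land in one PFSG.
print(pool_fate_sharing_groups({'vrf1': {'pool1', 'pool2'},
                                'vrf2': {'pool2', 'pool3'}}))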

Configuration considerations

Attracting traffic to the active NAT node (from inside and outside) is based on the routing.

On the outside, the active pool address range is advertised. On the inside, the destination prefix or steering route (in case of filter based diversion to the NAT function) is advertised by the node with the active pool.

The advertisement of the routes is driven by the activity of the pools in the pool fate sharing group.

The following example displays the configuration of the NAT outside pool under the configure router context.
Note: This can also be configured under the configure service vprn context.
Configure NAT outside pool (MD-CLI)
[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        outside {
            pool "nat0-pool" {
                admin-state enable
                type large-scale
                nat-group 1
                port-reservation {
                    ports 252
                }
                large-scale {
                    redundancy {
                        follow {
                            router-instance "500"
                            name "nat500-pool"
                        }
                    }
                }
                address-range 192.168.12.0 end 192.168.12.10 {
                }
            }
        }
    }
Configure NAT outside pool (classic CLI)
A:node-2>config>router# info
...
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
        nat
            outside
                pool "nat0-pool" nat-group 1 type large-scale create
                    port-reservation ports 252
                    redundancy
                        follow router 500 pool "nat500-pool"
                    exit
                    address-range 192.168.12.0 192.168.12.10 create
                    exit
                    no shutdown
                exit
            exit
        exit

A pool can be one of the following:

  • a leading pool administratively enabled and with the following configured:
    • MD-CLI
      configure router nat outside pool large-scale redundancy export-route
      configure router nat outside pool large-scale redundancy monitor-route
    • classic CLI
      configure router nat outside pool redundancy export 
      configure router nat outside pool redundancy monitor
  • a following pool with the following configured:

    • MD-CLI
      configure router nat outside pool large-scale redundancy follow
    • classic CLI
      configure router nat outside pool redundancy follow

Both sets of options are therefore mutually exclusive.
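
For reference, the following is a minimal MD-CLI sketch of a leading pool configuration. The export and monitor route values match the troubleshooting output shown later in this section; the remaining pool options are omitted for brevity, and the exact syntax should be verified against the command reference.

Configure a leading pool (MD-CLI sketch)
[ex:/configure service vprn "500" nat outside pool "nat500-pool"]
A:admin@node-2# info
    large-scale {
        redundancy {
            export-route 10.0.0.3/32
            monitor-route 10.0.0.2/32
        }
    }

The export route is advertised by the node on which the pool is active, and the monitor route must match the export route of the lead pool in the corresponding PFSG on the peering node.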

For a leading pool, redundancy is only enabled when the redundancy node is administratively enabled. For a following pool, the administrative state has no effect, and redundancy is only enabled when the leading pool is enabled.

Before a lead pool is enabled, consistency checks are performed to make sure that the PFSG is properly configured and that all pools in the PFSG belong to the same NAT ISA group. To ensure the effective functioning of a PFSG, it is essential that all participating NAT pools are enabled. Modifications to the PFSG, such as adding or removing pools, can only be executed while the lead pool is temporarily disabled.

For example, in the case shown in the following figure, the consistency check fails because pool 1 is not part of PFSG 1 (where it should be).

Figure 45. Consistency check

Troubleshooting commands

Use the following command to display the state of the leading pool (dual-homing section toward the bottom of the command output):

  • MD-CLI
    show router 500 nat pool pool-name "nat500-pool"
  • classic CLI
    show router 500 nat pool "nat500-pool"
State of the leading pool output
===============================================================================
NAT Pool nat500-pool
===============================================================================
Description                           : (Not Specified)
ISA NAT Group                         : 1
Pool type                             : largeScale
Admin state                           : inService
Mode                                  : auto (napt)
Port forwarding dyn blocks reserved   : 0
Port forwarding range                 : 1 - 1023
Port reservation                      : 2300 blocks
Block usage High Watermark (%)        : (Not Specified)
Block usage Low Watermark (%)         : (Not Specified)
Subscriber limit per IP address       : 65535
Active                                : true
Deterministic port reservation        : (Not Specified)
Last Mgmt Change                      : 02/17/2014 09:41:43
===============================================================================
===============================================================================
NAT address ranges of pool nat500-pool
===============================================================================
Range                                                          Drain Num-blk
-------------------------------------------------------------------------------
192.168.1.0 - 192.168.1.255                                          0
-------------------------------------------------------------------------------
No. of ranges: 1
===============================================================================

===============================================================================
NAT members of pool nat500-pool ISA NAT group 1
===============================================================================
Member                                                        Block-Usage-% Hi
-------------------------------------------------------------------------------
1                                                             < 1           N
2                                                             < 1           N
3                                                             < 1           N
4                                                             < 1           N
5                                                             < 1           N
6                                                             < 1           N
-------------------------------------------------------------------------------
No. of members: 6
===============================================================================
===============================================================================
Dual-Homing
===============================================================================
Type                                  : Leader
Export route                          : 10.0.0.3/32
Monitor route                         : 10.0.0.2/32
Admin state                           : inService
Dual-Homing State                     : Active
===============================================================================
===============================================================================
Dual-Homing fate-share-group
===============================================================================
Router         Pool                                                   Type
-------------------------------------------------------------------------------
Base           nat0-pool                                              Follower
vprn500        nat500-pool                                            Leader
vprn501        nat501-pool                                            Follower
vprn502        nat502-pool                                            Follower
-------------------------------------------------------------------------------
No. of pools: 4
===============================================================================

Use the following command to display the state of the follower pool (dual-homing section toward the bottom of the command output):

  • MD-CLI
    show router 501 nat pool pool-name "nat501-pool"
  • classic CLI
    show router 501 nat pool "nat501-pool"
State of the follower pool output
===============================================================================
NAT Pool nat501-pool
===============================================================================
Description                           : (Not Specified)
ISA NAT Group                         : 1
Pool type                             : largeScale
Admin state                           : inService
Mode                                  : auto (napt)
Port forwarding dyn blocks reserved   : 0
Port forwarding range                 : 1 - 1023
Port reservation                      : 2300 blocks
Block usage High Watermark (%)        : (Not Specified)
Block usage Low Watermark (%)         : (Not Specified)
Subscriber limit per IP address       : 65535
Active                                : true
Deterministic port reservation        : (Not Specified)
Last Mgmt Change                      : 02/17/2014 09:41:43
===============================================================================
===============================================================================
NAT address ranges of pool nat501-pool
===============================================================================
Range                                                          Drain Num-blk
-------------------------------------------------------------------------------
192.168.2.0 - 192.168.2.255                                          0
192.168.3.0 - 192.168.3.255                                          0
-------------------------------------------------------------------------------
No. of ranges: 2
===============================================================================
===============================================================================
NAT members of pool nat501-pool ISA NAT group 1
===============================================================================
Member                                                        Block-Usage-% Hi
-------------------------------------------------------------------------------
1                                                             < 1           N
2                                                             < 1           N
3                                                             < 1           N
4                                                             < 1           N
5                                                             < 1           N
6                                                             < 1           N
-------------------------------------------------------------------------------
No. of members: 6
===============================================================================
===============================================================================
Dual-Homing
===============================================================================
Type                                  : Follower
Follow-pool                           : "nat500-pool" router 500
Dual-Homing State                     : Active
===============================================================================
===============================================================================
Dual-Homing fate-share-group
===============================================================================
Router         Pool                                                   Type
-------------------------------------------------------------------------------
Base           nat0-pool                                              Follower
vprn500        nat500-pool                                            Leader
vprn501        nat501-pool                                            Follower
vprn502        nat502-pool                                            Follower
-------------------------------------------------------------------------------
No. of pools: 4
===============================================================================

Use the following command to list all the configured pools along with their NAT inside and outside routing contexts.

show service nat overview
NAT overview output
===============================================================================
NAT overview
===============================================================================
Inside/        Policy/                                   Type
Outside        Pool                                      
-------------------------------------------------------------------------------
vprn550        lsn-policy_unused                         default
Base           nat0-pool                                 

vprn550        lsn-policy_nat1                           destination prefix
vprn500        nat500-pool                               

vprn550        lsn-policy-nat2                           destination prefix
vprn501        nat501-pool                               

vprn551        lsn-policy_unused                         default
Base           nat0-pool                                 

vprn551        lsn-policy-nat3                           destination prefix
vprn501        nat501-pool                               

vprn551        lsn-policy-nat4                           destination prefix
vprn502        nat502-pool                               

vprn552        lsn-policy_unused                         default
Base           nat0-pool                                 

vprn552        lsn-policy-nat5                           destination prefix
vprn502        nat502-pool                               
===============================================================================

Active/active ESA-VM or ISA redundancy model

In active/active (A/A) ESA-VM or ISA redundancy, each ESA-VM or ISA is subdivided into multiple logical ESA-VMs or ISAs. These logical sub-entities are referred to as members. The NAT configuration of each member is saved in the CPM. If any one ESA-VM or ISA fails, its members are downloaded by the CPM to the remaining active ESA-VMs or ISAs. Memory resources on each ESA-VM or ISA are reserved to accommodate additional traffic from failed ESA-VMs or ISAs. The amount of resources reserved per ESA-VM or ISA depends on the number of ESA-VMs or ISAs in the system and the number of simultaneously supported ESA-VM or ISA failures. The number of simultaneous ESA-VM or ISA failures per system is configurable. Memory reservation affects the NAT scale per ESA-VM or ISA.

Traffic received on the inside is forwarded by the ingress forwarding complex to a predetermined member ESA-VM or ISA for further NAT processing. Each ingress forwarding complex maintains an internal link per member. The number of these internal links, along with other factors, determines the maximum number of members per system and, with this, the granularity of traffic distribution over the remaining ESA-VMs or ISAs in case of an ESA-VM or ISA failure. The segmentation of ESA-VMs or ISAs into members for a single-failure scenario is shown in Load distribution in A/A intra-chassis redundancy model. The protection mechanism in this example is designed to cover one physical ESA-VM or ISA failure. Each ESA-VM or ISA is divided into four members. Three of those carry traffic during normal operation, while the fourth has resources reserved to accommodate traffic from one of the members in case of failure. When an ESA-VM or ISA failure occurs, its three members are delegated to the remaining ESA-VMs or ISAs. Each member from the failed ESA-VM or ISA is mapped to a corresponding reserved member on the remaining ESA-VMs or ISAs.

Figure 46. Load distribution in A/A intra-chassis redundancy model

The A/A ESA-VM or ISA redundancy model supports multiple simultaneous failures. The protection mechanism shown in Multiple failures is designed to protect against two simultaneous ESA-VM or ISA failures. As in the previous case, each ESA-VM or ISA is divided into members; here there are six members, three of which carry traffic under normal circumstances while the remaining three have reserved memory resources.

Figure 47. Multiple failures

Load distribution in A/A ESA-VM or ISA redundancy model supporting single ESA-VM or ISA failure shows resource utilization for a single ESA-VM or ISA failure in relation to the total number of ESA-VMs or ISAs in the system. The resource utilization affects only the scale of each ESA-VM or ISA. Bandwidth per ESA-VM or ISA is not reserved, and each ESA-VM or ISA can operate at full speed at any time (with or without failures).

Table 20. Load distribution in A/A ESA-VM or ISA redundancy model supporting single ESA-VM or ISA failure

Number of physical    Member ESA-VMs or ISAs    Resource utilization    Resource utilization
ESA-VMs or ISAs       per physical ESA-VM or    per system in           per system with one
per system            ISA (active/reserved)     non-failed condition    failed ESA-VM or ISA
-------------------------------------------------------------------------------------------
2                     1A 1R                     50%                     100%
3                     2A 1R                     67%                     100%
4                     3A 1R                     75%                     100%
5                     3A 1R                     75%                     ~94%
6                     2A 1R                     67%                     ~80%
7                     2A 1R                     67%                     ~78%
8                     2A 1R                     67%                     ~76%
9                     1A 1R                     50%                     ~56%
10                    1A 1R                     50%                     ~56%
11                    1A 1R                     50%                     ~55%
12                    1A 1R                     50%                     ~54%
13                    1A 1R                     50%                     ~54%
14                    1A 1R                     50%                     ~54%
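
The utilization figures in the table follow a simple pattern (an observation derived from the table values, not a formula stated elsewhere in this document): with N physical ESA-VMs or ISAs, each split into A active and R reserved members, utilization in the non-failed condition is A/(A+R), and utilization with one failed ESA-VM or ISA is (N × A)/((N − 1) × (A + R)). For example, for 5 ESA-VMs or ISAs with a 3A 1R split, this gives 75% and 15/16 ≈ 94%, matching the table.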

Startup conditions

During the first five minutes of system startup or nat-group activation, the system behaves as if all ISAs are operational. Consequently, ISAs are segmented into their members according to the configured maximum number of supported failures.

Upon expiration of this initial five-minute interval, the system is re-evaluated. If one or more ISAs are found in a faulty state during the re-evaluation, the members of the failed ISAs are distributed to the remaining operational ISAs.

Recovery

After a failed ISA is recovered, the system automatically accepts it and traffic is assigned to it. Traffic that is moved to the recovered ISA is interrupted.

Adding additional ESA-VM or ISAs in the ESA-VM or ISA group

Adding ESA-VMs or ISAs to an operational NAT group requires reconfiguration of the active MDA limit and the failed MDA limit for the NAT group.

In the classic CLI, this is only possible when the NAT group is in an administratively disabled state.

Active MDA limit

The following commands determine how the resources are divided among the ESA-VMs or ISAs.

Use the following commands respectively to:

  • configure the number of active ISAs in active-standby ISA redundancy model for NAT
  • configure the maximum number of simultaneous failures supported in the active-active intra-chassis NAT redundancy model
  • MD-CLI
    configure isa nat-group redundancy active-mda-limit
    configure isa nat-group redundancy intra-chassis active-active failed-mda-limit
  • classic CLI
    configure isa nat-group active-mda-limit
    configure isa nat-group failed-mda-limit
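
For example, the following MD-CLI sketch (the values shown are illustrative, not recommendations) sizes a NAT group at four active ESA-VMs or ISAs and reserves resources for one simultaneous failure in the active-active model:

Configure the active and failed MDA limits (MD-CLI sketch)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            active-mda-limit 4
            intra-chassis {
                active-active {
                    failed-mda-limit 1
                }
            }
        }
    }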

L2-Aware bypass

L2-Aware bypass provides the basis for traffic continuity if an MS-ISA fails. With L2-Aware bypass disabled and without an intra-chassis redundancy scheme deployed (such as active/active or active/standby), the traffic to be processed by the failed MS-ISA is blackholed; that is, traffic continues to be sent to the failed MS-ISA. With L2-Aware bypass enabled, instead of being blackholed, the traffic is routed outside of the SR OS node without being NAT’d, in accordance with the routing table in the inside routing context. The intent is that the non-NAT’d traffic is intercepted by a central NAT node that performs the NAT function. This way, traffic served by the failed MS-ISA continues to be NAT’d by a central NAT node. The central NAT node provides redundancy for multiple SR OS nodes, reducing the need to equip each individual SR OS node with the multiple MS-ISAs that are normally used in an active/active or active/standby intra-chassis redundancy mode.

This concept is shown in L2-Aware bypass. The example shows the base router as an inside routing context where the global routing table (GRT) is used to decide where to send traffic if an MS-ISA is unavailable. Apart from this example, the inside routing context is not limited to the base router and can instead be a VPRN instance.

L2-Aware bypass is an optional redundancy model in Layer 2–aware NAT which is mutually exclusive with the other two MS-ISA redundancy modes (active/active and active/standby).

Use the following command option to enable L2-Aware bypass:

  • MD-CLI
    configure isa nat-group redundancy intra-chassis l2aware-bypass
  • classic CLI
    configure isa nat-group redundancy l2aware-bypass
Figure 48. L2-Aware bypass

Sharing IP addresses in Layer 2–aware NAT

Layer 2–aware NAT allows overlap of inside (private) IP addresses between Enhanced Subscriber Management (ESM) (or Layer 2–aware NAT) subscribers. For example, IP addresses assigned to hosts within subscriber SUB-1 can be identical to IP addresses assigned to hosts within subscriber SUB-2. This is made possible by the subscriber-ID field (which must be unique in the system) being part of the NAT translation key. This way, the return traffic (in the downstream direction) belonging to different ESM subscribers with overlapping IP addresses can still be differentiated by the unique ESM subscriber-id field, which is used in the reverse NAT translation.

L2-Aware bypass with a failed MS-ISA breaks this logic, because traffic is not translated (NAT’d) in the SR OS node and, therefore, the return traffic cannot take the subscriber-id field into consideration when forwarding. For this reason, the overlap of inside (private) IP addresses between ESM subscribers is not supported by the L2-Aware bypass functionality for routed traffic within the same inside routing context. In other words, private IP addresses must be unique across the subscribers within a specified inside routing context.

Recovery

Upon the recovery of the failed MS-ISA, all existing subscribers that are affected by the bypass continue to use the bypass. However, all new subscribers that come online after the recovery are automatically Layer 2–aware NAT’d (and therefore do not use the bypass).

Restoring bypassed subscribers to Layer 2–aware NAT after the recovery requires manual intervention. Use the following command to restore bypassed subscribers after the recovery.

tools perform nat recover-l2aw-bypass mda 

In Layer 2–aware NAT, the <subscriber to outside IP, port-block> mappings are allocated during the subscriber attachment phase (when the subscriber comes online) and are maintained in the CPM. Therefore, they are preserved in the CPM during MS-ISA failure. This means that the original mappings for the recovered subscribers continue to be used when the MS-ISA is recovered.

Be aware that only the partial mappings <subscriber to outside IP, port-block> are preserved. This does not include the preservation of NAT sessions, sometimes referred to as fully qualified flows. NAT sessions are maintained in the MS-ISA, and they are lost during MS-ISA failures. Hence, this model provides stateless failover to an external NAT node.

The user is notified about the MS-ISA failure by a log message or an SNMP trap. The following example shows an MS-ISA failure log message.

MS-ISA failure log message
9 2017/06/07 11:32:49.748 UTC MINOR: NAT #2020 Base NAT
"The NAT MDA 5/1 is now inactive in group 2."

Default bypass during reboot or MS-ISA provisioning

If enabled, L2-Aware bypass takes effect automatically if an MS-ISA does not become operational within 10 minutes of provisioning (configuring) or after a system bootup.

Logging

Because partial mappings in Layer 2–aware NAT (subscriber to outside IP addresses, port block) are preserved in the CPM during an MS-ISA failure, no logging is generated for existing ESM/NAT subscribers when the MS-ISA fails or is recovered.

Stateful inter-chassis NAT redundancy

Stateful inter-chassis NAT redundancy provides seamless NAT failover between two redundant SR OS nodes. A pair of redundant nodes operates in active or standby mode per NAT group. If traffic distribution between the nodes is needed, up to four NAT groups per node can be deployed, each NAT group having its own set of ISAs. In this scenario, traffic between the nodes can be load-balanced per NAT group.

CGN stateful inter-chassis redundancy shows a scenario where inside routes are advertised from the node with the active NAT group, which ensures that traffic is symmetric; that is, upstream and downstream traffic flows fully through the same node. Although this scenario represents the majority of use cases, the upstream traffic is allowed to arrive on the node with the standby NAT group and be shunted over to the node with the active NAT group over a link that interconnects the two nodes. This scenario is shown in Asymmetric traffic.

The redundant pair of NAT nodes protects against the following:

  • node failure

  • ISA failure

  • link failure

  • path failure (BFD, VRRP)

Figure 49. CGN stateful inter-chassis redundancy
Figure 50. Asymmetric traffic

The basic premise of stateful inter-chassis NAT redundancy is as follows:

  • ISAs/ESAs within a NAT group can be either active or standby. This means that in a pair of redundant NAT nodes, only one node attracts traffic per NAT group while the NAT group on the peering node is in a standby state with synchronized flows (or sessions).

  • The activity (active or standby) of a NAT group is determined by an internally calculated health value that represents the node’s ability to perform NAT at full capacity. The health value can change dynamically, based on events related to the various failures against which stateful inter-chassis NAT redundancy offers protection. The health value is communicated between the redundant (or peering) nodes at the CPM level.

  • In case of equal health, a user can, through configuration, influence the activity of a NAT group.

  • The activity of a NAT group is characterized by the advertisement of the NAT-related routes on the inside (steering and destination-prefix) and on the outside (pool routes).

  • Only TCP/UDP/ICMP flows that are older than the preconfigured amount of time are synchronized.

  • Flow synchronization is performed directly between the ISAs, bypassing the CPM. The CPM’s main role is to determine the activity of a NAT group based on health-related options, as shown in NAT synchronization.

  • Flows are created on the active NAT group and are synchronized from the active NAT group to the standby NAT group.

  • Immediately following the switchover, all flows on the newly standby NAT group are resynchronized from the newly active NAT group.

  • If the link between the chassis is lost, then each chassis operates in a standalone mode.

Figure 51. NAT synchronization

A reliable and redundant link should always be available between the two redundant NAT nodes. This link is referred to as the Inter-Chassis Link (ICL) and is used for:

  • CPM communication (activity negotiation and presence detection)

  • ISA-to-ISA communication (flow synchronization)

  • transient forwarding of data traffic during switchovers

Health status and failure events

A health value determines the activity of a NAT group within a pair of redundant nodes. The health value of a NAT group is internally calculated. The system can automatically decrease this value depending on events that can negatively affect the system’s ability to perform NAT at the needed capacity.

A NAT group with a higher health value becomes active.

Activity states at equal health shows the activity states when paired NAT groups have equal health values on both nodes. Preferred is a command option that influences the activity state for a pair of NAT groups with equal health values (a typical use case is load balancing per NAT group).

Table 21. Activity states at equal health

Node 1       Node 2       Active node         Comments
-------------------------------------------------------------------------------------
no           no           Whichever node      If both nodes become active
preferred    preferred    becomes active      simultaneously, the node with the
configured   configured   first remains       highest system chassis MAC address
                          active              becomes the controller node, which
                                              decides which node becomes active and
                                              which becomes standby, based on the
                                              health and preference values.
                                              When the health and preference are
                                              equal, the controller node does not
                                              preempt (trigger a switchover on) an
                                              already active node.

preferred    no           Node 1              Node 1 always preempts Node 2 (if the
configured   preferred                        health values are equal)
             configured

no           preferred    Node 2              Node 2 always preempts Node 1 (if the
preferred    configured                       health values are equal)
configured

preferred    preferred    Whichever node      Same as for no preference on both
configured   configured   becomes active      nodes
                          first remains
                          active

The health value is initially set to 1000 under the following circumstances:

  • The number of active ISAs in a NAT group matches the configured value of the active-mda-limit command option.

  • There are no port failures that are being monitored.

  • There are no failures within the operational groups that are being monitored.

The above circumstances imply that the system is fully operational with no failures that would affect NAT operation.

However, the health value can be influenced by the events that can affect NAT operation, and that are outside of ISA-related failures, for example, unhealthy ports and paths that lead traffic in and out of the NAT node. Such events are explicitly tracked or monitored for the purpose of dynamically adjusting the health value and therefore influencing the activity of the NAT groups.

Stateful inter-chassis NAT redundancy protects against the following failures:

  • Nodal failure; if the active node fails, the standby node is notified of such an event by the lack of received keepalives.

  • ISA failure; a NAT group must have exactly the active-mda-limit number of ISAs that are operationally up to participate in stateful inter-chassis NAT redundancy. If the number of operational ISAs falls below the configured limit, then the health of the NAT group drops to 0.

  • Ports on the node can be monitored, and their operational state can trigger a change of the health value.

  • BFD sessions on the node can be monitored using the operational groups and their state change can trigger a change of the health value.

  • VRRP instances under interface configurations can be monitored using the operational groups and their state change can trigger a change of the health value.

  • SAPs on the node can be monitored using the operational groups and their operational state can trigger a change of the health value.

Port and operational group state changes influence the reachability of the NAT node and consequently affect network-wide NAT operation. If the port or path capacity in and out of the NAT node drops below a specific level, a switchover to a healthier NAT node may be needed.

Port states can be tracked or monitored on the private side (inside) and on the public side (outside) of NAT.

Operational groups are constructs that track the states of BFD-enabled interfaces, SAPs, and VRRP instances.

BFD sessions targeted to the next hop can traverse intermediate Layer 2 nodes and can have longer reach than port tracking.

Another benefit of monitoring ports and paths is that it can help reduce the amount of traffic on the inter-chassis link (ICL) if the active node loses its direct connection to the node downstream or upstream of it. The ICL must always be present (for synchronization purposes). However, this link does not need to be designed for heavy traffic loads over extended periods of time, which would occur if traffic-bearing ports were not colocated with the active node; instead, it carries such traffic only for the shorter transient periods caused by switchovers.

Route advertisements

NAT-related routes are active only on the node with the active NAT group. On the outside, these are the pool routes that attract traffic in the downstream direction. On the inside, the destination-prefix (which can be configured per NAT policy) represents a route that diverts traffic to NAT in the upstream direction. In case of a filter-based diversion to NAT, a steering route is advertised only from the active node. The steering route is used in the realm of this virtual router instance as an indirect next-hop for all the traffic that must be routed to the large scale NAT function. Use the following commands to configure the IP address and prefix length of the steering route:

  • MD-CLI
    configure router nat inside large-scale redundancy steering-route 
    configure service vprn nat inside large-scale redundancy steering-route 
  • classic CLI
    configure router nat inside redundancy steering-route 
    configure service vprn nat inside redundancy steering-route 
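
For example, the following MD-CLI sketch (the prefix value is illustrative) configures a steering route in the base routing instance:

Configure a steering route (MD-CLI sketch)
[ex:/configure router "Base" nat inside large-scale]
A:admin@node-2# info
    redundancy {
        steering-route 10.255.0.1/32
    }

Because the steering route is advertised only while the NAT group on this node is active, traffic that uses it as an indirect next-hop for filter-based diversion is always attracted to the active node.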

The route advertisement concept is shown in Route advertisement.

Figure 52. Route advertisement

Flow synchronization

The goal of flow synchronization is to minimize service interruption after a switchover. Minimum service interruption in this context allows some packet drop during a switchover, but the user sessions are preserved without requiring a user to restart them. Flows are synchronized directly between the ISAs at the time of creation and deletion, and always only from the active side to the standby side.

The amount of traffic carried across the inter-chassis link is proportional to the number of flows that are synchronized, and it also depends on the NAT type (NAT44, DS-Lite, or NAT64). The size of a flow record on the wire is around 100 bytes, and a number of flow records can be packed into a single frame whose MTU is adjustable. With approximately 90 bytes of header overhead per frame and a known number of synchronized flows, it is relatively easy to estimate the required bandwidth of the inter-chassis link. The minimum recommended bandwidth of the inter-chassis link is 10 Gb/s.
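
As an illustration of such an estimate (the numbers are assumptions, not measured values): synchronizing 500,000 flow creations and deletions per second at 100 bytes per record amounts to roughly 50 MB/s, or about 400 Mb/s. With a 9000-byte MTU, approximately 89 records fit into one frame, so the 90 bytes of per-frame overhead add only a few Mb/s more, comfortably within the recommended 10 Gb/s.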

Excluding short-lived flows from synchronization could further reduce the necessary bandwidth for synchronization.

Configure the flow replication threshold (MD-CLI)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            active-mda-limit 10
            inter-chassis {
                replication-threshold 20
            }
        }
    }
Configure the flow replication threshold (classic CLI)
A:node-2>config>isa# info detail
----------------------------------------------
        nat-group 1 create
            active-mda-limit 10
            inter-chassis-redundancy
                replication-threshold 20
            exit
        exit
----------------------------------------------

For the replication-threshold, Nokia recommends choosing a value larger than the typical lifetime of short-lived flows that are not closed by the protocol itself but instead rely on a timeout for their expiration (UDP DNS, UDP initial, or ICMP query).

After a switchover, a resynchronization of flows occurs. The new standby ISA starts clearing all its flows and the synchronization process restarts. This means that an attempt is made to resynchronize flows from the currently active side to the standby side. During this process, the active ISAs continue to forward traffic and create new flows.

While the flow synchronization is in progress, a switchover is not allowed unless the health on the active side drops to 0, which means that one or more ISAs on the active side have failed.

Loss of synchronization

After the loss of synchronization, an ISA transitions into a ‟com-sync” state. Some of the events that can cause loss of synchronization on the ISA level are:

  • Misconfigurations such as:

    • pools not matching on both nodes (outside IPs do not match between the ISAs)

    • NAT policies not matching on both nodes

  • ISA-to-ISA timeout. If an ACK for any flow synchronization frame is not received within one second, the system transitions to a non-synchronized state.

When the synchronization is lost, the standby ISA starts clearing all the flows and the synchronization process restarts. This means that an attempt is made to resynchronize flows from the currently active side to the standby side. During this process, the active ISAs continue to forward traffic and create new flows.

Flow timeout on the standby node

When a flow is synchronized to a standby ISA, its record is present in the standby ISA until it is deleted explicitly by the active node with a delete synchronization message. There are no additional updates sent for that flow from the active node between its creation and deletion time.

Flow timeout following the switchover

When an ISA transitions from a standby to an active state, the timeout for the synchronized flows on the newly active ISA is set to a percentage of the flow timeout value that is configured in the NAT policy. The timeout of a flow refers to the clearing of its state in NAT after a period of traffic inactivity. This adjustment of the flow timeout on the newly active ISA is necessary because the standby ISA, although aware of the flows, is not aware of their forwarding status at the time of the switchover. In other words, a flow on the active node just before the switchover may have been inactive and close to timer expiry; in this case, it may not be desirable to extend the lifetime of such a flow on the newly active ISA to its initially configured value.

Use the following command to configure the flow timeout after the switchover:

  • MD-CLI
    configure isa nat-group redundancy inter-chassis flow-timeout-on-switchover
  • classic CLI
    configure isa nat-group inter-chassis-redundancy flow-timeout-on-switchover

A value of 50 for the flow-timeout-on-switchover command means that synchronized flows on the newly active ISA inherit half of the value set in the NAT policy.
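
For example, with the default TCP timeout of 4 hours (noted in the subscriber-aware NAT discussion later in this document), a flow-timeout-on-switchover value of 50 gives synchronized TCP flows a remaining timeout of 2 hours on the newly active ISA.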

Rapid consecutive switchovers

Excessive switchovers can be caused by unstable network elements. Events causing instability should be dampened at the source (ports may support event dampening). Event dampening control is not configurable under NAT. However, a dampening mechanism is built into stateful inter-chassis NAT redundancy by not allowing a switchover while synchronization is in progress.

Considering that a full re-synchronization is triggered after every switchover, the next switchover can occur only after the amount of time needed for full synchronization of flows. The new standby starts deleting flows and after this is completed, all flows from the newly active ISA are copied over to the standby.

ISA-to-ISA communication

Flow synchronization occurs directly between the ISAs, bypassing CPMs. Each ISA has its own IP address through which it communicates with its counterpart on the peering node.

Although each ISA has its own IP address, only one IP address in a NAT group is configured. This IP address is used by one of the ISAs and the rest of the ISAs are assigned consecutive IP addresses automatically by the system.
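
For example, the following MD-CLI sketch (the addresses are illustrative) assigns the ISA-to-ISA peering addresses:

Configure ISA-to-ISA peering addresses (MD-CLI sketch)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            inter-chassis {
                local-ip-range-start 10.9.9.1
                remote-ip-range-start 10.9.10.1
            }
        }
    }

With this configuration, the first ISA in the NAT group on the local node uses 10.9.9.1 and synchronizes with 10.9.10.1 on the remote node; the remaining ISAs are automatically assigned the consecutive addresses (10.9.9.2 paired with 10.9.10.2, and so on).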

Preemption

Preemption is a feature that allows one node to relinquish activity to a peering node when the health values are equal. Use the following command to set the preference for activity of a NAT group in stateful inter-chassis redundancy configuration if both nodes have equal health:

  • MD-CLI
    configure isa nat-group redundancy inter-chassis preferred
  • classic CLI
    configure isa nat-group inter-chassis-redundancy preferred

If one of the nodes in the redundant pair is enabled for preemption (has the preferred command configured), this node, on boot-up, takes over the activity from the existing active peering node with the same health value.

In a setup where two nodes have the same health value and the preferred command is not configured on either node, the currently active node remains active and does not relinquish its activity status to a newly booted node unless the health value changes.

If the health values are different between the peering nodes, preemption is automatically disabled, and the activity is driven solely by the health value.

A freshly booted node that is about to take over the activity from an existing node (because of preemption), waits for all the flows to be synchronized and transferred from the currently active node. Only after this transfer is complete, does switchover occur.

Message delivery prioritization

Control messages originated by the CPM that pertain to NAT redundancy are treated with the highest priority and are marked by the system with a DSCP value of Network Control NC1 (48d). Those messages are crucial to the stability of NAT redundancy (otherwise, inadvertent switchovers could occur).

Flow synchronization control messages originated by the ISAs are marked with DSCP EF (46d) and dot1p 5, which are lower priorities than CPM-originated messages. The sync (ISA keepalive) messages are sent with DSCP NC1 and dot1p 6.

Flow synchronization messages are more tolerant to delays and loss.

Both of these types of control messages should have a higher priority than any customer-originated traffic that is expected to cross the ICL. Customer traffic can be marked in the SR OS node by the appropriate QoS configuration on egress interfaces.

Subscriber-aware NAT

Subscriber-aware information must be supplied to both nodes (active and standby) through RADIUS accounting.

With Nokia BNG, the following configuration ensures that subscriber information is sent through RADIUS to more than one target.

The following example displays the configuration of the RADIUS accounting policy for the subscriber that is using this subscriber profile.

Configure a RADIUS accounting policy for a subscriber (MD-CLI)
[ex:/configure subscriber-mgmt sub-profile "test"]
A:admin@node-2# info
    radius-accounting {
        policy ["acct-nat-1" "acct-nat-2"]
    }
Configure a RADIUS accounting policy for a subscriber (classic CLI)
A:node-2>config>subscr-mgmt>sub-prof# info
----------------------------------------------
            radius-accounting
                policy "acct-nat-1" "acct-nat-2"
            exit

The accounting policies contain the respective NAT node destinations; consequently, the subscriber information is sent to both NAT nodes.

Matching configuration on redundant pair of nodes

A pair of nodes participating in stateful NAT inter-chassis redundancy must have matching NAT configurations, including the inside and outside service IDs. That is, command options other than the configuration items referring to local objects, such as ports and interfaces, should be configured with the same values on both nodes.

For example, the following keepalive commands must be the same on both nodes. Use them to configure the keepalives between the CPMs residing on the different chassis and to configure the drop count:

  • MD-CLI
    configure isa nat-group redundancy inter-chassis keepalive interval 
    configure isa nat-group redundancy inter-chassis keepalive dropcount
  • classic CLI
    configure isa nat-group inter-chassis-redundancy keepalive dropcount
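
For example, a matching keepalive configuration on both nodes could look as follows (MD-CLI sketch; the interval and dropcount values are illustrative):

Configure keepalives between the CPMs (MD-CLI sketch)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            inter-chassis {
                keepalive {
                    interval 10
                    dropcount 3
                }
            }
        }
    }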

However, the following command does not need to match because each node can monitor its own set of unique ports, or not monitor ports at all. Use the following command to configure the monitoring of the ports to adjust the overall health of the node in a redundant inter-chassis NAT system:

  • MD-CLI
    configure isa nat-group redundancy inter-chassis monitor-port 
    configure isa nat-group redundancy inter-chassis monitor-port health-drop
  • classic CLI
    configure isa nat-group inter-chassis-redundancy monitor-port health-drop

Detection of configuration mismatch is logged in the system and users are encouraged to check the logs periodically for any misaligned statements.

Online configuration changes

Certain NAT configuration changes can be performed online, which allows NAT to continue running in a redundant configuration.

In the classic CLI, the user cannot perform the following changes online:

  • delete flows

  • block subscribers

In MD-CLI, these changes are allowed; however, during the commit, the inter-chassis NAT redundancy is temporarily disabled and then re-enabled after the commit is completed. Examples of configuration changes that require halting synchronization include the manipulation of NAT pools with active flows or subscribers, or removing NAT policies for active subscribers.

For configuration changes that can be performed online without halting the synchronization, there is a period during which the configuration between the nodes is misaligned (while the user is performing the configuration changes). During these periods, the system continuously tries to synchronize the configuration. This perpetual attempt to synchronize flows is demanding on processing power, and Nokia recommends avoiding it.

To properly perform NAT configuration changes in a redundant configuration, the user must temporarily disable synchronization of the flows between the nodes, as shown in the following examples.

Disable synchronization of the flows between the nodes (MD-CLI)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            inter-chassis {
                sync false
            }
        }
    }
Disable synchronization of the flows between the nodes (classic CLI)
A:node-2>config>isa# info
----------------------------------------------
        nat-group 1 create
            shutdown
            inter-chassis-redundancy
                no sync
            exit
        exit
----------------------------------------------

In this configuration, the nodes continue to operate in active or standby mode but the newly added and deleted flows on the active node are not synchronized. During this non-synchronizing period, the user can make any NAT changes that are possible in a standalone node. When the configuration changes are performed, the sync command must be reversed on both nodes. At that point, the two nodes resynchronize.

Setting up inter-chassis redundancy
Perform the following steps to change the NAT configuration in an inter-chassis redundant setup.
  1. Disable the committed synchronization of flows between the ISAs or ESAs on both nodes.
    MD-CLI
    *[ex:/configure isa]
    A:admin@node-2# nat-group 1 redundancy inter-chassis sync false
    
    *[ex:/configure isa]
    A:admin@node-2# commit
    
    classic CLI
    *A:node-2>config>isa>nat-group>inter-chassis-redundancy# no sync

This sync command causes the nodes to behave as if flow synchronization is not configured, which allows the online configuration changes.

The order in which the nodes (active or standby) are configured is irrelevant.

  2. Perform the configuration changes on both nodes.
  3. Re-enable synchronization of flows on both nodes.
    MD-CLI
    *[ex:/configure isa]
    A:admin@node-2# nat-group 1 redundancy inter-chassis sync true
    
    *[ex:/configure isa]
    A:admin@node-2# commit
    
    classic CLI
    *A:node-2>config>isa>nat-group>inter-chassis-redundancy# sync

Scenario with monitoring ports

This example, shown in Port monitoring scenario, relies on the following assumptions:

  • Load sharing over the redundant chassis is achieved through two NAT groups that are, under normal conditions (no failures), active on their respective chassis:

    • nat-group 1 is active on NAT node 1.

    • nat-group 2 is active on NAT node 2.

  • Two 100G links on the network/public/outside side are shared between the two NAT groups on each node (Internet access). These links are redundant, and failure of one link does not have a negative effect on the traffic.

  • Each NAT group has five 10G ports connected on the subscriber/private/inside side. Planned traffic load over those links is between 30G and 40G, which means that one link can be safely lost, without affecting traffic in the NAT group.

The rules for managing failures include the following:

  • The scheme protects against two access link failures per NAT group and one network link failure, simultaneously.

  • The scheme protects against three access link failures per NAT group, simultaneously. However, in this case, there cannot be any network link failures.

  • In the two scenarios above, if both network links fail on the same node (while at least one is available on the other node), the node with the two failed links becomes standby.

According to those rules, the following configuration can be applied.

Monitor ports to adjust the overall health of the node in a redundant inter-chassis NAT system (MD-CLI)
[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            active-mda-limit 5
            inter-chassis {
                monitor-port 1/1/1 {
                    health-drop 6
                }
                monitor-port 1/1/2 {
                    health-drop 6
                }
                monitor-port 1/1/3 {
                    health-drop 6
                }
                monitor-port 1/1/4 {
                    health-drop 6
                }
                monitor-port 1/1/5 {
                    health-drop 6
                }
                monitor-port 1/1/11 {
                    health-drop 10
                }
                monitor-port 1/1/12 {
                    health-drop 10
                }
            }
        }
    }
Monitor ports to adjust the overall health of the node in a redundant inter-chassis NAT system (classic CLI)
A:node-2>config>isa# info
----------------------------------------------
      nat-group 1 create
         active-mda-limit 5   
         inter-chassis-redundancy
            monitor-port 1/1/1 health-drop 6
            monitor-port 1/1/2 health-drop 6
            monitor-port 1/1/3 health-drop 6
            monitor-port 1/1/4 health-drop 6
            monitor-port 1/1/5 health-drop 6
            monitor-port 1/1/11 health-drop 10
            monitor-port 1/1/12 health-drop 10
         exit
      exit
Figure 53. Port monitoring scenario

The results for a randomly selected set of failure combinations (out of 360 valid combinations) are shown in Randomly selected number of failure combinations.

‟N” indicates that the health values are equal; unless preemption is enabled, the node that becomes active first remains active.

Table 22. Randomly selected number of failure combinations

Node   Failures in    Failures in    Failures on       Health of      Health of      State of       State of
       nat-group 1    nat-group 2    shared network    nat-group 1    nat-group 2    nat-group 1    nat-group 2
       (10G ports)    (10G ports)    side (100G                                      (active/       (active/
                                     ports)                                          standby)       standby)
---------------------------------------------------------------------------------------------------------------
1      0              0              0                 1000           1000           A              A
2      1              0              1                 984            990            S              S

1      0              0              1                 990            990            A              A
2      2              1              1                 978            984            S              S

1      0              1              0                 1000           994            N              S
2      0              0              0                 1000           1000           N              A

1      1              1              0                 1000           994            A              S
2      2              0              0                 988            1000           S              A

1      1              0              1                 984            990            S              A
2      0              2              1                 990            978            A              S

1      1              1              1                 984            984            A              N
2      2              1              1                 978            984            S              N

1      1              2              0                 994            988            N              S
2      1              0              0                 994            1000           N              A

1      1              2              1                 984            978            S              S
2      0              1              1                 990            984            A              A

1      1              2              1                 984            978            A              S
2      2              0              1                 978            990            S              A

1      2              2              0                 988            988            S              A
2      0              2              1                 990            978            A              S

Configuring stateful inter-chassis NAT redundancy

Stateful inter-chassis NAT redundancy requires synchronization on the CPM level and on the ISA and ESA levels.

CPM level synchronization is required to primarily exchange health information and keepalives between the nodes for the purpose of determining active and standby NAT groups between the two peers (nodes). Each peer is identified by a single IP address. The level of traffic exchanged between the peers for CPM synchronization is low.

The ISA or ESA level synchronization is required to synchronize flows between the ISA or ESAs. Each ISA or ESA becomes a peer and is identified by its own IP address. The level of traffic exchanged between ISA or ESA for synchronization purposes depends on the configuration and the amount of NAT traffic.

Basic configuration steps are described below with command syntax examples. Some of the steps are optional and can assume default values. For more information about each command, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Classic CLI Command Reference Guide.

  1. Configure a synchronization peer on the CPM level using the command options in the following context.

    configure redundancy multi-chassis peer sync nat nat-group

    The health of the NAT group is exchanged between the chassis, and one node is elected as active for the NAT group. The other node becomes the standby for the same NAT group.

  2. Configure keepalives between the nodes (CPMs) using the command options in the following context:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis keepalive
    • classic CLI

      configure isa nat-group inter-chassis-redundancy keepalive
  3. Configure the minimum duration of the flow before it is synchronized using the following command:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis replication-threshold
    • classic CLI

      configure isa nat-group inter-chassis-redundancy replication-threshold

    The user may choose to synchronize only long-lived flows.

  4. Configure a timeout of the flow after a switchover using the following command:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis flow-timeout-on-switchover
    • classic CLI

      configure isa nat-group inter-chassis-redundancy flow-timeout-on-switchover

    Independent of stateful redundancy, and depending on the type of traffic, each flow has a timeout value that determines its expiration time if there is inactivity. The initial flow timeouts are configured in a NAT policy. After a switchover, this timeout can be reset to the percentage of the originally-configured value. This can be useful because some of the flows switched over may already have been in an inactive state before the switchover.

  5. Configure the IP-MTU size of the packets carrying flow synchronization information between the ISA or ESAs using the following command:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis ip-mtu
    • classic CLI

      configure isa nat-group inter-chassis-redundancy ip-mtu
  6. Configure the IP address of the first ISA or ESA in a NAT group on local and remote nodes using the following commands:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis local-ip-range-start
      configure isa nat-group redundancy inter-chassis remote-ip-range-start
    • classic CLI

      configure isa nat-group inter-chassis-redundancy local-ip-range-start
      configure isa nat-group inter-chassis-redundancy remote-ip-range-start

    The IP addresses for the remaining ISAs or ESAs are assigned automatically and consecutively. These are the peering addresses between the ISAs or ESAs over which the flows are synchronized. Traffic from the first IP address on the local node is sent to the first IP address on the remote node.

  7. Configure the monitoring status of the ports and other objects, such as SAPs, BFD sessions, or VRRP sessions in the system using the command options in the following contexts:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis monitor-oper-group
      configure isa nat-group redundancy inter-chassis monitor-port
    • classic CLI

      configure isa nat-group inter-chassis-redundancy monitor-oper-group
      configure isa nat-group inter-chassis-redundancy monitor-port

    The status of these objects can affect the health of the system and can trigger a switchover.

  8. Select the activity preference for a NAT group using the following command:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis preferred true
    • classic CLI

      configure isa nat-group inter-chassis-redundancy preferred
  9. Reference a routing instance through which ISA or ESAs on redundant nodes exchange synchronization information using the following command:

    • MD-CLI

      configure isa nat-group redundancy inter-chassis router-instance
    • classic CLI

      configure isa nat-group inter-chassis-redundancy router
  10. Display information about the stateful inter-chassis NAT redundancy configuration using the following relevant command options for the show isa nat-group command.
    show isa nat-group inter-chassis-redundancy
    show isa nat-group inter-chassis-redundancy statistics
    show isa nat-group member inter-chassis-redundancy
    show isa nat-group member inter-chassis-redundancy statistics

ISA feature interactions

This section describes the interaction between MS-ISA applications and other system features.

MS-ISA use with service mirrors

All MS-ISA features and applications support simultaneous service mirroring, without impact. For example, a service that is diverted to AA, IPsec, NAT, LNS, or supported combinations of MS-ISA applications also supports service mirroring simultaneously.

Network Address Translation

Subscriber aware Large Scale NAT44

Subscriber aware Large Scale NAT44 attempts to combine the positive attributes of Large Scale NAT44 and Layer 2–aware NAT, namely:

  • the ability for some traffic, such as IPTV and VoIP traffic, to bypass the NAT function whenever a unique IP address per subscriber is used (unlike Layer 2–aware NAT, where all subscribers share the same IP address); this can be achieved using existing Large Scale NAT44 mechanisms (ingress IP filters)

  • the use of RADIUS accounting for logging of port ranges, including multiple port-range blocks

  • the use of the subscriber identification or RADIUS username to identify the customer and simplify management of Large Scale NAT44 subscribers

Subscriber awareness in Large Scale NAT44 facilitates the release of NAT resources immediately after the BNG subscriber is terminated, without having to wait for the last flow of the subscriber to expire on its own (TCP timeout is 4 hours by default).

The subscriber aware Large Scale NAT44 function leverages the RADIUS accounting proxy built into the 7750 SR. The RADIUS accounting proxy allows the 7750 SR to learn about individual BNG subscribers from the RADIUS accounting messages generated by a remote BNG and to use this information in the management of Large Scale NAT44 subscribers. The combination of the two allows, for example, a 7750 SR running Large Scale NAT44 to correlate the BNG subscriber (represented in the Large Scale NAT44 by the inside IP address) with RADIUS attributes such as User-Name, Alc-Sub-Ident-String, Calling-Station-Id, or Class. These attributes can subsequently be used either for management of the Large Scale NAT44 subscriber or in the NAT RADIUS accounting messages generated by the 7750 SR Large Scale NAT44 application. This simplifies both the administration of the Large Scale NAT44 and the logging function for port-range blocks.

As BNG subscribers authenticate and come online, the RADIUS accounting messages are 'snooped' by the RADIUS accounting proxy, which creates a cache of attributes for each BNG subscriber. BNG subscribers are correlated with the NAT subscriber by the Framed-IP-Address and one of the following attributes, which must be present in the accounting messages generated by the BNG:

  • username

  • subscriber ID

  • RADIUS Class attribute

  • Calling-Station-id

  • IMSI

  • IMEI

The Framed-IP-Address attribute must also be present in the accounting messages generated by the BNG.

The Large Scale NAT44 Subscriber Aware application receives a number of cached attributes that are used to manage Large Scale NAT44 subscribers, for example:

  • Delete the Large Scale NAT44 subscriber when the BNG subscriber is terminated.

  • Report attributes in Large Scale NAT44 accounting messages according to configuration options.

Creation and removal of RADIUS accounting proxy cache entries related to a BNG subscriber is triggered by the receipt of accounting start/stop messages for that subscriber. Modification of entries can be triggered by interim-update messages carrying updated attributes. Cached entries can also be purged using the CLI.
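This lifecycle can be modeled with a short sketch. The following Python fragment is illustrative only, not the SR OS implementation; the dictionary-based message format and the function name on_accounting_message are assumptions, while the attribute names match those listed earlier.

from typing import Dict

CORRELATING = ("User-Name", "Alc-Sub-Ident-String", "Calling-Station-Id", "Class")
cache: Dict[str, dict] = {}   # Framed-IP-Address -> cached subscriber attributes

def on_accounting_message(msg: dict) -> None:
    ip = msg["Framed-IP-Address"]            # must be present in every message
    attrs = {k: v for k, v in msg.items() if k in CORRELATING}
    status = msg["Acct-Status-Type"]
    if status == "Start":
        cache[ip] = attrs                    # create the cache entry
    elif status == "Interim-Update" and ip in cache:
        cache[ip].update(attrs)              # refresh updated attributes
    elif status == "Stop":
        # Removing the entry lets the NAT delete the subscriber immediately,
        # instead of waiting for its last flow to time out on its own.
        cache.pop(ip, None)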

In addition to passing one of the preceding attributes in Large Scale NAT44 RADIUS accounting messages, a set of opaque BNG subscriber RADIUS attributes can optionally be passed in Large Scale NAT44 RADIUS accounting messages. Up to 128 bytes of these opaque attributes are accepted; the remaining attributes are truncated.

Large Scale NAT44 subscriber instantiation can optionally be denied if the corresponding BNG subscriber cannot be identified in Large Scale NAT44 through the RADIUS accounting proxy.

Subscriber aware Large Scale NAT44 configuration

Consider the following configuration guidelines:
  • Configure the RADIUS accounting proxy functionality in a routing instance that receives accounting messages from the remote or local BNG. Optionally, forward accounting messages received by the RADIUS accounting proxy to the final accounting destination (accounting server).

  • Point the BNG RADIUS accounting destination to the RADIUS accounting proxy to enable the RADIUS accounting proxy to receive and snoop BNG RADIUS accounting data.

  • A BNG subscriber can be associated with two accounting policies, each pointing to a different accounting destination; for example, one to the RADIUS accounting proxy and the other to the real accounting server.

  • Configure subscriber aware Large Scale NAT44. From the Large Scale NAT44 subscriber aware application, reference the RADIUS accounting proxy server and define the string that is used to correlate the BNG subscriber with the Large Scale NAT44 subscriber.

  1. Optionally, enable NAT RADIUS accounting that includes BNG subscriber relevant data.
    1. Configure the RADIUS accounting proxy.

      The RADIUS accounting proxy listens to accounting messages on the interface 'rad-proxy-loopback'. In the following example, the name 'proxy-acct' is used to reference this proxy accounting server from Large Scale NAT44.

      Configure the RADIUS accounting proxy (MD-CLI)
      [ex:/configure service vprn "56"]
      A:admin@node-2# info
          radius {
              proxy "proxy-acct" {
                  admin-state enable
                  description "two side server -interface;client ; default-plcy:real server"
                  purpose accounting
                  defaults {
                      accounting-server-policy "lsn-policy"
                  }
                  interface "rad-proxy-loopback" { }
                  }
                  secret "TEg1UEZzemRMyZXD1HvvQGkeGfoQ58MF hash2"
          }
      Configure the RADIUS accounting proxy (classic CLI)
      A:node-2>config>service>vprn#
            radius-proxy
                      server "proxy-acct" purpose accounting create
                          default-accounting-server-policy "lsn-policy"
                          description "two side server -interface:client ; default-plcy:real server"
                          interface "rad-proxy-loopback"
                          secret "TEg1UEZzemRMyZXD1HvvQGkeGfoQ58MF" hash2
                          no shutdown
                      exit
                  exit
    2. Create and configure a RADIUS server policy.

      Received accounting messages can be relayed from the RADIUS accounting proxy to the accounting server, which is indirectly referenced through the default accounting policy 'lsn-policy'. The LSN policy is defined as follows.

      Configure a RADIUS server policy (MD-CLI)
      [ex:/configure aaa]
      A:admin@node-2# info
          radius {
              server-policy "lsn-policy" {
                  servers {
                      router-instance "Base"
                      source-address 192.168.1.12
                      server 1 {
                          server-name "192"
                      }
                  }
              }
          }
      Configure a RADIUS server policy (classic CLI)
      A:node-2>config>aaa#
                     radius-server-policy "lsn-policy" create
                      servers
                          router "Base"
                          source-address 192.168.1.12
                          server 1 name "192"
                      exit
                  exit
    3. Configure an external RADIUS server in the routing instance.

      The LSN policy in the preceding example can then reference an external RADIUS accounting server with its own security credentials. This external accounting server can be configured in any routing instance.

      Configure an external RADIUS server (MD-CLI)
      [ex:/configure router "Base" radius]
      A:admin@node-2# info
          server "192" {
              description "real radius or acct server"
              address 192.168.1.10
              secret "KRr7H.K3i0z9O/hj2BUSmdJUdl.zWrkE hash2"
              acct-port 1813
          }
      Configure an external RADIUS server (classic CLI)
      A:node-2>config>router>radius-server# info 
      ----------------------------------------------
                  server "192" address 192.168.1.10 secret "KRr7H.K3i0z9O/hj2BUSmdJUdl.zWrkE" hash2 port 1813 create
                      description "real radius or acct server"
                  exit
  2. Configure two RADIUS accounting policies in BNG, one to the real RADIUS server and the other one to the RADIUS accounting proxy.
    Configure two RADIUS accounting policies in BNG (MD-CLI)
    [ex:/configure subscriber-mgmt sub-profile "test"]
    A:admin@node-2# info
        egress {
            qos {
                agg-rate {
                    rate 10000
                }
            }
        }
        radius-accounting {
            policy ["real-acct-srvr" "lsn"]
        }
        radius-accounting-policy "lsn" {
            session-id-format number
            update-interval {
                interval 5
            }
            include-radius-attribute {
                acct-authentic true
                acct-delay-time true
                acct-triggered-reason true
                called-station-id true
                circuit-id true
                framed-interface-id true
                framed-ip-address true
                framed-ip-netmask true
                mac-address true
                nas-identifier true
                nat-port-range true
                remote-id true
                sla-profile true
                sub-profile true
                subscriber-id true
                user-name true
                calling-station-id {
                    type remote-id
                }
                nas-port-id {
                }
                nas-port-type {
                }
            }
        }
        radius-accounting-policy "real-acct-srvr" {
            session-id-format number
            update-interval {
                interval 5
            }
            include-radius-attribute {
                acct-authentic true
                acct-delay-time true
                acct-triggered-reason true
                called-station-id true
                circuit-id true
                framed-interface-id true
                framed-ip-address true
                framed-ip-netmask true
                mac-address true
                nas-identifier true
                nat-port-range true
                remote-id true
                sla-profile true
                sub-profile true
                subscriber-id true
                user-name true
                calling-station-id {
                    type remote-id
                }
                nas-port-id {
                }
                nas-port-type {
                }
            }
        }
    Configure two RADIUS accounting policies in BNG (classic CLI)
    A:node-2>config>subscr-mgmt>sub-prof# info
    ----------------------------------------------
                radius-accounting
                    policy "real-acct-srvr" ‟lsn”
                exit
                egress
                    agg-rate-limit 10000
                exit
    ----------------------------------------------
    A:node-2>config>subscr-mgmt>acct-plcy# info 
                description "lsn radius-acct-policy"
                update-interval 5
                include-radius-attribute
                    acct-authentic
                    acct-delay-time
                    called-station-id
                    calling-station-id remote-id
                    circuit-id
                    framed-interface-id
                    framed-ip-addr
                    framed-ip-netmask
                    mac-address
                    nas-identifier
                    nas-port-id  
                    nas-port-type
                    nat-port-range
                    remote-id
                    sla-profile
                    sub-profile
                    subscriber-id
                    user-name
                    alc-acct-triggered-reason
                exit
                session-id-format number

    In the following configuration, the router instance is the service ID where the RADIUS proxy is configured. The RADIUS proxy IP address is 10.5.5.5 on interface "rad-proxy-loopback"; the secret is the same as the one configured on the RADIUS accounting proxy.

    Configure a RADIUS server (MD-CLI)
    [ex:/configure application-assurance radius-accounting-policy "lsn-radius-acct-policy"]
    A:admin@node-2# info
        radius-accounting-server {
            router-instance "10"
            server 1 {
                address 10.5.5.5
                secret "cVi1sidvgH28Pd9QoN1flE hash2"
            }
        }
    
    Configure a RADIUS server (classic CLI)
    A:node-2>config>app-assure>rad-acct-plcy# info
    ----------------------------------------------
                radius-accounting-server
                    router "10" 
                    server 1 address 10.5.5.5 secret "cVi1sidvgH28Pd9QoN1flE" hash2 port 1813 create
    
                exit
    ----------------------------------------------
  3. Configure sub-aware Large Scale NAT44 references.
    The following configuration references the RADIUS accounting proxy server 'proxy-acct' and defines the Calling-Station-Id attribute from the BNG subscriber as the matching attribute.
    Configure sub-aware Large Scale NAT44 references (MD-CLI)
    [ex:/configure service vprn "57" nat inside]
    A:admin@node-2# info
        large-scale {
            nat-policy "nat-base"
            nat44 {
                destination-prefix 10.0.0.0/16 {
                }
            }
            subscriber-identification {
                admin-state enable
                description "sub-aware CGN"
                attribute {
                    vendor standard
                    type station-id
                }
                radius-proxy-server {
                    router-instance "10"
                    server "proxy-acct"
                }
            }
        }
    Configure sub-aware Large Scale NAT44 references (classic CLI)
    A:node-2>config>service>vprn>nat>inside# info 
    ----------------------------------------------
       nat-policy "nat-base"
          destination-prefix 10.0.0.0/16
          subscriber-identification
              attribute vendor "standard" attribute-type "station-id"
          description "sub-aware CGN"
          radius-proxy-server router 10 name "proxy-acct"
          no shutdown
        exit
    ----------------------------------------------
  4. Optionally, enable RADIUS NAT accounting.
    Enable RADIUS NAT accounting (MD-CLI)
    [ex:/configure isa]
    A:admin@node-2# info
        nat-group 1 {
            admin-state enable
            radius-accounting-policy "nat-acct-basic"
            redundancy {
                active-mda-limit 1
            }
            mda 1/2 { }
        }
    
    [ex:/configure aaa]
    A:admin@node-2# info
        radius {
    ...
            isa-policy "nat-acct-basic" {
                description "radius accounting policy for NAT"
                accounting {
                    include-attributes {
                        called-station-id true
                        frame-counters true
                        framed-ip-address true
                        hardware-timestamp true
                        multi-session-id true
                        nas-identifier true
                        nat-inside-service-id true
                        nat-outside-ip-address true
                        nat-outside-service-id true
                        nat-port-range-block true
                        nat-subscriber-string true
                        octet-counters true
                        proxied-subscriber-data true
                        release-reason true
                        session-time true
                        user-name true
                    }
                }
            }
        }
    
    
    [ex:/configure application-assurance]
    A:admin@node-2# info
        radius-accounting-policy "nat-base" {
            radius-accounting-server {
                router-instance "Base"
                source-address 192.168.1.20
                server 1 {
                    address 192.168.1.10
                    port 1813
                    secret "0m4WtRekKSTUElrAhicryM0B9Ncnk7rkLg== hash2"
                }
            }
        }
    Enable RADIUS NAT accounting (classic CLI)
    A:node-2>config>isa>nat-group# info 
    ----------------------------------------------
                active-mda-limit 1
                radius-accounting-policy "nat-acct-basic"
                mda 1/2
                no shutdown
    
    A:node-2>config>aaa>isa-radius-plcy# info
    ----------------------------------------------
                description "radius accounting policy for NAT"
                acct-include-attributes
                    called-station-id
                    frame-counters
                    framed-ip-addr
                    hardware-timestamp
                    inside-service-id
                    multi-session-id
                    nas-identifier
                    nat-subscriber-string
                    octet-counters
                    outside-ip
                    outside-service-id
                    port-range-block
                    release-reason
                    session-time
                    subscriber-data
                    user-name
                exit
    ----------------------------------------------
    A:node-2>config>app-assure# info
    ----------------------------------------------
            radius-accounting-policy "nat-base" create
                radius-accounting-server
                    router "Base"
                    server 1 address 192.168.1.10 secret "0m4WtRekKSTUElrAhicryBgQx5VRG90x2A==" hash2 port 1813 create
                    source-address 192.168.1.20
                exit
            exit
    ----------------------------------------------
    The preceding setup produces a stream of Large Scale NAT44 RADIUS accounting messages similar to the following.
    Large Scale NAT44 RADIUS accounting messages output
    Mon Jul 16 10:59:27 2012
            NAS-IP-Address = 10.1.1.1
            NAS-Identifier = "left-a20"
            NAS-Port = 37814272
            Acct-Status-Type = Start
            Acct-Multi-Session-Id = "500456500365a4de7c29a9a07c29a9a0"
            Acct-Session-Id = "500456500365a4de6201d7b87c29a9a0"
            Called-Station-Id = "00-00-00-00-01-01"
            User-Name = "remote0"
            Calling-Station-Id = "remote0"
            Alc-Serv-Id = 10
            Framed-IP-Address = 10.0.0.7
            Alc-Nat-Outside-Ip-Addr = 198.51.100.1
            Alc-Nat-Port-Range = "198.51.100.1 1054-1058 router base"
            Acct-Input-Packets = 0
            Acct-Output-Packets = 0
            Acct-Input-Octets = 0
            Acct-Output-Octets = 0
            Acct-Input-Gigawords = 0
            Acct-Output-Gigawords = 0
            Acct-Session-Time = 0
            Event-Timestamp = "Jul 16 2012 10:58:40 PDT"
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Event-Timestamp = "Jul 16 2012 10:58:24 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Acct-Unique-Session-Id = "10f8bce6e5e7eb41"
            Timestamp = 1342461567
            Request-Authenticator = Verified
    
    Mon Jul 16 11:03:56 2012
            NAS-IP-Address = 10.1.1.1
            NAS-Identifier = "left-a20"
            NAS-Port = 37814272
            Acct-Status-Type = Interim-Update
            Acct-Multi-Session-Id = "500456500365a4de7c29a9a07c29a9a0"
            Acct-Session-Id = "500456500365a4de6201d7b87c29a9a0"
            Called-Station-Id = "00-00-00-00-01-01"
            User-Name = "remote0"
            Calling-Station-Id = "remote0"
            Alc-Serv-Id = 10
            Framed-IP-Address = 10.0.0.7
            Alc-Nat-Outside-Ip-Addr = 198.51.100.1
            Alc-Nat-Port-Range = "198.51.100.1 1054-1058 router base"
            Acct-Input-Packets = 0
            Acct-Output-Packets = 1168
            Acct-Input-Octets = 0
            Acct-Output-Octets = 86432
            Acct-Input-Gigawords = 0
            Acct-Output-Gigawords = 0
            Acct-Session-Time = 264
            Event-Timestamp = "Jul 16 2012 11:03:04 PDT"
            Acct-Delay-Time = 5
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Acct-Session-Time = 279
            Event-Timestamp = "Jul 16 2012 11:03:04 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Acct-Unique-Session-Id = "10f8bce6e5e7eb41"
            Timestamp = 1342461836
            Request-Authenticator = Verified
    
    Mon Jul 16 11:04:34 2012
            NAS-IP-Address = 10.1.1.1
            NAS-Identifier = "left-a20"
            NAS-Port = 37814272
            Acct-Status-Type = Stop
            Acct-Multi-Session-Id = "500456500365a4de7c29a9a07c29a9a0"
            Acct-Session-Id = "500456500365a4de6201d7b87c29a9a0"
            Called-Station-Id = "00-00-00-00-01-01"
            User-Name = "remote0"
            Calling-Station-Id = "remote0"
            Alc-Serv-Id = 10
            Framed-IP-Address = 10.0.0.7
            Alc-Nat-Outside-Ip-Addr = 198.51.100.1
            Alc-Nat-Port-Range = "198.51.100.1 1054-1058 router base"
            Acct-Terminate-Cause = Host-Request
            Acct-Input-Packets = 0
            Acct-Output-Packets = 1321
            Acct-Input-Octets = 0
            Acct-Output-Octets = 97754
            Acct-Input-Gigawords = 0
            Acct-Output-Gigawords = 0
            Acct-Session-Time = 307
            Event-Timestamp = "Jul 16 2012 11:03:47 PDT"
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Acct-Session-Time = 279
            Event-Timestamp = "Jul 16 2012 11:03:04 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Acct-Unique-Session-Id = "10f8bce6e5e7eb41"
            Timestamp = 1342461874
            Request-Authenticator = Verified
    
    

    The matching accounting stream generated on the BNG is shown in the following output.

    Accounting stream generated on the BNG output
    Mon Jul 16 10:59:11 2012
            Acct-Status-Type = Start
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Address = 10.0.0.7
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            Calling-Station-Id = "remote0"
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Event-Timestamp = "Jul 16 2012 10:58:24 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            ADSL-Agent-Circuit-Id = "cgn_1_ipoe"
            ADSL-Agent-Remote-Id = "remote0"
            Alc-Subsc-ID-Str = "CGN1"
            Alc-Subsc-Prof-Str = "nat"
            Alc-SLA-Prof-Str = "tp_sla_prem"
            Alc-Client-Hardware-Addr = "2001:db8:65:05:10:01"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Acct-Unique-Session-Id = "9c1723d05e87c043"
            Timestamp = 1342461551
            Request-Authenticator = Verified
    
    Mon Jul 16 11:03:51 2012
            Acct-Status-Type = Interim-Update
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Address = 10.0.0.7
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            Calling-Station-Id = "remote0"
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Acct-Session-Time = 279
            Event-Timestamp = "Jul 16 2012 11:03:04 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            ADSL-Agent-Circuit-Id = "cgn_1_ipoe"
            ADSL-Agent-Remote-Id = "remote0"
            Alc-Subsc-ID-Str = "CGN1"
            Alc-Subsc-Prof-Str = "nat"
            Alc-SLA-Prof-Str = "tp_sla_prem"
            Alc-Client-Hardware-Addr = "2001:db8:65:05:10:01"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Alcatel-IPD-Attr-163 = 0x00000001
            Alc-Acct-I-Inprof-Octets-64 = 0x00010000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x00010000000000020468
            Alc-Acct-I-Inprof-Pkts-64 = 0x00010000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x0001000000000000052a
            Alc-Acct-I-Inprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-I-Inprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-I-Inprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-I-Inprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-O-Inprof-Octets-64 = 0x00010000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00010000000000003154
            Alc-Acct-O-Inprof-Pkts-64 = 0x00010000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x0001000000000000009a
            Alc-Acct-O-Inprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-O-Inprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-O-Inprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-O-Inprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x00050000000000000000
            Acct-Unique-Session-Id = "9c1723d05e87c043"
            Timestamp = 1342461831
            Request-Authenticator = Verified
    
    Mon Jul 16 11:04:34 2012
            Acct-Status-Type = Stop
            NAS-IP-Address = 10.1.1.1
            User-Name = "cgn_1_ipoe"
            Framed-IP-Address = 10.0.0.7
            Framed-IP-Netmask = 255.255.255.0
            Class = 0x63676e2d636c6173732d7375622d6177617265
            Calling-Station-Id = "remote0"
            NAS-Identifier = "left-a20"
            Acct-Session-Id = "D896FF0000000550045640"
            Acct-Session-Time = 322
            Acct-Terminate-Cause = User-Request
            Event-Timestamp = "Jul 16 2012 11:03:47 PDT"
            NAS-Port-Type = Ethernet
            NAS-Port-Id = "1/1/5:5.10"
            ADSL-Agent-Circuit-Id = "cgn_1_ipoe"
            ADSL-Agent-Remote-Id = "remote0"
            Alc-Subsc-ID-Str = "CGN1"
            Alc-Subsc-Prof-Str = "nat"
            Alc-SLA-Prof-Str = "tp_sla_prem"
            Alc-Client-Hardware-Addr = "2001:db8:65:05:10:01"
            Acct-Delay-Time = 0
            Acct-Authentic = RADIUS
            Alc-Acct-I-Inprof-Octets-64 = 0x00010000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x000100000000000248c4
            Alc-Acct-I-Inprof-Pkts-64 = 0x00010000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x000100000000000005d9
            Alc-Acct-I-Inprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-I-Inprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-I-Inprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-I-Outprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-I-Inprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-I-Outprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-O-Inprof-Octets-64 = 0x00010000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00010000000000003860
            Alc-Acct-O-Inprof-Pkts-64 = 0x00010000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x000100000000000000b0
            Alc-Acct-O-Inprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00030000000000000000
            Alc-Acct-O-Inprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x00030000000000000000
            Alc-Acct-O-Inprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-O-Outprof-Octets-64 = 0x00050000000000000000
            Alc-Acct-O-Inprof-Pkts-64 = 0x00050000000000000000
            Alc-Acct-O-Outprof-Pkts-64 = 0x00050000000000000000
            Acct-Unique-Session-Id = "9c1723d05e87c043"
            Timestamp = 1342461874
            Request-Authenticator = Verified
    

Mapping of Address and Port using Translation (MAP-T)

Note: MAP-T is available on both the Nokia 7750 SR and Virtualized Service Router (VSR) platforms. In 7750 SR, forwarding occurs in the data plane, except for fragmentation and ICMP processing, which require ESA-VM. This section focuses on the common components shared by both VSR and 7750 SR implementations. The later sections are divided into VSR-specific (see MAP-T on VSR) and SR-specific (see MAP-T on SR) parts.

MAP-T is a NAT technique defined in RFC 7599. Its key advantage is the decentralization of stateful NAT while enabling the sharing of public IPv4 addresses among the customer edge (CE) devices. In a nutshell, the CE performs the stateful NAT44 function and translates the resulting IPv4 packet into an IPv6 packet. The IPv6 packet is transported over the IPv6 network to the Border Router (BR), which then translates the IPv6 packet to IPv4 and sends it into the public domain.

As multiple CEs can share a single public IPv4 address, MAP-T must rely on an algorithm (A+P algorithm running on the CEs and BR) to ensure that each CE is assigned a unique port-range on a shared IPv4 public address. In this way, each CE can be uniquely identified at the BR by a combination of the shared IPv4 public address and unique port-range. A set of CEs and BR that share a common set of MAP algorithm rules constitutes a MAP domain.

MAP-T offers the following advantages mainly as a result of its stateless BR operation:

  • improved scaling

    State maintenance is decentralized, which enables better scaling.

  • simplified redundancy

    There are no sessions synchronized between redundant BRs and this translates to simplified redundancy.

  • reduced logging

    As there are no NAT resources in the BR that require logging, only configuration changes in the BR are logged, which reduces the volume of logging data.

  • simpler communication

    MAP-T simplifies user-to-user communication.

  • higher throughput

    MAP-T offers higher throughput than a stateful solution, with less processing required in the BR.

Mapping of address and port (MAP) is a generic function, regardless of the underlying transport mechanism (MAP-T or MAP-E) used. Each MAP CE is assigned the following:

  • a shared public IPv4 address with a unique port-range on the shared IPv4 address

    Although a shared IPv4 address is used in most cases, the CE is sometimes assigned a unique IPv4 address or even an IPv4 prefix. This information is used for stateful NAT44 at the CE.

  • an IPv6 prefix (IA-PD)

    A ‟subnet” from the IPv6 prefix is allocated to the CE as a MAP prefix. The MAP prefix is used to encode public IPv4 information and identify the CE in a MAP domain. The remainder of the IA-PD is used on the LAN side of the CE.

  • an IPv6 address (IA-NA)

    The IPv6 address is independent of MAP and is a regular IPv6 address on the WAN side. The address is used for native end-to-end IPv6 communication (it can participate in forming routing adjacencies and other tasks).

The CE and BR perform the following functions in the MAP-T domain:

  • CE upstream direction (IPv4→IPv6)

    • Perform stateful NAT44 function (private→public).

    • Translate the public IPv4 address and port into an assigned IPv6 MAP source address.

    • Send the IPv6 packet with encoded IPv4 information toward the BR.

  • BR upstream direction (IPv6→IPv4)

    • Perform an anti-spoof check on the received IPv6 packet to ensure that it is coming from a trusted source (CE).

      Anti-spoofing is achieved by checking the source IPv6 MAP address against the configured MAP rules and making sure that the correct public IPv4 address and port-range of the CE are encoded in the CE's source IPv6 MAP address.

    • Translate the IPv6 packet into an IPv4 packet and forward it into the public domain.

  • BR downstream direction (IPv6←IPv4)

    • Translate the IPv4 packet into an IPv6 packet according to MAP rules.

      The IPv4 destination address of the received packet is translated into an IPv6 MAP address of the CE.

    • Send the IPv6 packet toward the CE.

  • CE downstream direction (IPv4←IPv6)

    • Perform the anti-spoofing function using the destination IPv6 address to verify that the packet is destined for the CE.

      MAP rules are used to verify that the public IPv4 address and the port-range of the CE is encoded in the IPv6 destination IP address of the received packet (IPv6 MAP address of the CE).

    • Translate the IPv6 packet into an IPv4 packet.

    • Perform the NAT44 function (public→private).

    • Forward the packet into the private IPv4 network.

Each device (CE and BR) is also responsible for fragmentation handling and ICMP error reporting (MTU too small, TTL expired, and so on).

MAP-T rules

MAP-T rules control the address translation in a MAP-T domain. The mapping rules can be delivered to the devices in the MAP domain using RADIUS or DHCP, or be statically provisioned.

The MAP-T rules are:

  • Basic Mapping Rule (BMR)

    The BMR is used to translate the public IPv4 address and port-range assigned to the CE into the IPv6 MAP address. It is composed of the following parameters:

    • rule IPv6 prefix (including prefix length)

    • rule IPv4 prefix (including prefix length)

    • rule Embedded Address bits (EA-bits) define the portion of the IA-PD that encodes the IPv4 suffix and port-range

    • rule Port Parameters (optional)

  • Forwarding Mapping Rule (FMR)

    The FMR is used for forwarding within the MAP domain. FMRs are instantiated in the BR so that the BR can forward traffic to the CEs. FMRs can also be instantiated in CEs to forward traffic directly between CEs, effectively bypassing BR. The FMR is composed of the same set of parameters as the BMR:

    • rule IPv6 prefix (including prefix length)

    • rule IPv4 prefix (including prefix length)

    • rule Embedded Address (EA) bits that define the portion of the IA-PD that encodes the IPv4 suffix and port-range

    • rule Port Parameters (optional)

  • Default Mapping Rule (DMR)

    The DMR is used to forward traffic outside the MAP domain. This rule contains the IPv6 prefix of the BR in MAP-T and it is used as the default route.

A+P mapping algorithm

The public IPv4 address and the port-range information of the CE is encoded in its assigned IPv6 delegated prefix (IA-PD). The BMR holds the key to decode this information from the IA-PD of the CE. The BMR identifies the portion of bits of the IA-PD that contain the suffix of the IPv4 address and the port set ID (PSID). These bits are called the EA bits. The PSID represents the port range assigned to the CE.

The public IPv4 address of the CE is constructed by concatenating the IPv4 prefix carried in the BMR and the suffix, which is extracted from the EA bits within the IA-PD. The port range is identified by the remaining EA bits (PSID portion). The EA bits uniquely identify the CE within the IPv6 network in a MAP domain.

The PSID offset value must be set to a value greater than 0. It represents ports that are omitted from the mapping (for example, well-known ports).

An IPv4 address and port on the private side of the CE must be statefully translated to a public IPv4 address and a port within the assigned port set on the public side of the CE. This ensures that the BR, based on the same MAP rules, can reconstruct the IPv4 source of the packet for verification (anti-spoofing) purposes in the upstream direction, and conversely, determine the destination IPv6 MAP address (CE address) in the downstream direction (based on the destination IPv4 address and port).

The IPv6 MAP address is constructed by setting the subnet ID in the delegated IPv6 prefix to 0. In this way, the subnet ID of 0 is reserved for MAP function. The remaining subnets can be delegated on the LAN side of the CE.

The interface ID is set to the IPv4 public address and PSID. This is described in RFC 7599, §6.

In this way, the IPv4 and IPv6 addresses of the CE are defined and easily converted to each other based on the BMR and the port information in the packet. A+P mapping shows the A+P mapping algorithm.

Figure 54. A+P mapping
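To make the algorithm concrete, the following Python sketch decodes a CE's public IPv4 address, PSID, and port set from its IA-PD. The rule values mirror the BMR rules implementation example later in this document; the IA-PD value and the helper names (bmr_decode, port_set) are hypothetical illustrations, not router output.

import ipaddress

def bmr_decode(ia_pd: str, rule_v6: str, rule_v4: str, ea_len: int):
    """Derive the CE public IPv4 address and PSID from its IA-PD."""
    pd = ipaddress.ip_network(ia_pd)
    r6 = ipaddress.ip_network(rule_v6)
    r4 = ipaddress.ip_network(rule_v4)
    assert pd.prefixlen == r6.prefixlen + ea_len  # EA bits sit between the two

    # EA bits: the low-order ea_len bits of the end-user prefix.
    ea = (int(pd.network_address) >> (128 - pd.prefixlen)) & ((1 << ea_len) - 1)

    # High-order EA bits complete the IPv4 address; the rest are the PSID.
    psid_len = ea_len - (32 - r4.prefixlen)
    public_v4 = ipaddress.ip_address(int(r4.network_address) | (ea >> psid_len))
    psid = ea & ((1 << psid_len) - 1)
    return public_v4, psid, psid_len

def port_set(psid: int, psid_len: int, psid_offset: int = 6):
    """Contiguous port ranges of a PSID (RFC 7597 mapping algorithm)."""
    m = 16 - psid_offset - psid_len        # contiguous low-order port bits
    for a in range(1, 1 << psid_offset):   # a > 0 skips ports 0..1023
        start = (a << (16 - psid_offset)) | (psid << m)
        yield start, start + (1 << m) - 1

v4, psid, plen = bmr_decode("2001:db8:0:1230::/60", "2001:db8::/48",
                            "10.11.11.0/24", ea_len=12)
first = next(port_set(psid, plen))
# v4 = 10.11.11.18, psid = 3; ranges 1216-1279, 2240-2303, ... (63 x 64 ports)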

Routing considerations

The following figure shows a MAP-T deployment scenario.

Figure 55. MAP-T deployment scenario

The routes related to MAP-T are:

  • IPv4 prefixes from the MAP-rules

    These routes are advertised in the upstream direction.

  • DMR

    This is the BR prefix for a specific domain. This route is advertised in the downstream direction.

Use the following commands to advertise routes related to MAP-T through IPv4 and IPv6 routing protocols:
  • MD-CLI
    configure policy-options policy-statement entry from protocol name
    configure policy-options policy-statement entry to protocol name
  • classic CLI
    configure router policy-options policy-statement entry from protocol name
    configure router policy-options policy-statement entry to protocol name

MAP-T routes in the 7750 SR and VSR are owned by protocol nat and have a metric of 50. These properties can be used to configure an export routing policy when advertising MAP-T routes in IGP or BGP.

Multiple MAP-T domains can be supported in the same routing context.

Note: IPv6 IA-PD end-user prefixes are carved out of the IPv6 rule prefix. Aside from MAP-T, IA-PD is used for native IPv6 end-to-end traffic outside of MAP-T. Although the IPv6 rule prefix is not marked as a NAT route in the routing table, it is nonetheless advertised in the upstream direction.

Forwarding considerations in the BR

In the upstream direction, when the BR receives an IPv6 packet destined for the BR prefix, a source-based IPv6 address lookup (anti-spoofing) is performed to verify that the packet was sent by a trusted CE.

In the downstream direction, a destination-based IPv4 lookup is performed. This leads to the MAP-T rule entry, which provides the information necessary to derive the IPv6 address of the destination CE.

The MAP-T forwarding function in the VSR is also responsible for:

  • address conversion between IPv4 and IPv6 based on the BMR rule

  • header translation between IPv4 and IPv6, as described in RFC 6145

In address-sharing scenarios, address translation is performed for TCP/UDP and a subset of ICMP traffic; everything else is dropped. In contrast, 1:1 and prefix-sharing scenarios are protocol agnostic.

IPv6 addresses

An IPv6 address of the MAP-T node is constructed according to RFC 7597, §5.2 and RFC 7599, §6. The following figure shows the IPv6 address of the MAP-T node.

Figure 56. IPv6 address construction

The subnet ID for a MAP node (CE) is set to 0. The following figure shows the node interface (PSID is left-padded with zeros to create a 16-bit field and the IPv4 address is the public IPv4 address assigned to the CE).

Figure 57. Node interface

This constructed IPv6 address represents the source IPv6 address of traffic sent from the CE to BR (upstream direction), and the destination IPv6 address in the opposite direction (downstream traffic sent from the BR to the CE).

The source IPv6 address in the downstream direction is a combination of the BR IPv6 prefix and the source IPv4 address (per RFC 7599, §5.1) received in the original packet.

The destination IPv6 address in the upstream direction is a combination of the BR IPv6 prefix and the IPv4 destination address (RFC 7599, §5.1) in the original packet.
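The following Python sketch, under the same hypothetical values as the earlier decoding example, builds both address forms: the CE MAP address (subnet ID 0 plus the RFC 7597 interface ID) and a BR-side address with the IPv4 address embedded in the BR prefix (shown here for a /96 prefix, using the RFC 6052-style embedding). The helper names and prefix values are assumptions for illustration.

import ipaddress

def map_ce_ipv6(end_user_pd: str, public_v4: str, psid: int) -> ipaddress.IPv6Address:
    """CE MAP IPv6 address: subnet ID 0, interface ID = 16 zero bits |
    32-bit public IPv4 | 16-bit PSID (RFC 7597 sec. 5.2, RFC 7599 sec. 6)."""
    pd = ipaddress.ip_network(end_user_pd)     # subnet ID bits remain 0
    iid = (int(ipaddress.ip_address(public_v4)) << 16) | (psid & 0xFFFF)
    return ipaddress.IPv6Address(int(pd.network_address) | iid)

def dmr_ipv6(br_prefix: str, v4: str) -> ipaddress.IPv6Address:
    """BR-side address for traffic outside the MAP domain: the IPv4 address
    embedded in the BR prefix (RFC 6052-style embedding, /96 case)."""
    net = ipaddress.ip_network(br_prefix)
    return ipaddress.IPv6Address(int(net.network_address) |
                                 int(ipaddress.ip_address(v4)))

print(map_ce_ipv6("2001:db8:0:1230::/60", "10.11.11.18", 3))
# -> 2001:db8:0:1230:0:a0b:b12:3
print(dmr_ipv6("2001:db8:ffff::/96", "203.0.113.9"))
# -> 2001:db8:ffff::cb00:7109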

1:1 translations and IPv4 prefix translations

1:1 translations refer to the case in which each CE is assigned a distinct public IPv4 address; that is, there is no public IPv4 address sharing between the CEs. In this case, the PSID field is 0 and the sum of lengths for the IPv4 rule prefix and EA bits is 32. In other words, all the EA bits represent the IPv4 suffix. The public IPv4 address of the CE is created by concatenating the Rule IPv4 Prefix and the EA bits.

IPv4 Prefix translations refer to the case where an IPv4 prefix is assigned to a CE. In this case, the PSID field is 0 and the sum of the lengths for IPv4 rule prefix and EA bits is less than 32.

In both preceding cases, the translations are protocol agnostic; all protocols, not just TCP/UDP or ICMP, are translated.

Hub-and-spoke topology

The BR supports hub-and-spoke topology, which means that the BR facilitates communication between MAP-T CEs.

Rule prefix overlap

Rule prefix overlap is not supported because it can cause lookup ambiguity. The following figure shows a rule prefix overlap example.

Figure 58. IPv6 rule prefix overlap

In the case where rule IPv6 prefix 1 is a subset of rule IPv6 prefix 2, the overlap between the EA bits in end-user prefix 2 and rule prefix 1 (represented by the shaded sections in IPv6 rule prefix overlap) could render end-user prefixes 1 and 2 indistinguishable (everything else being the same) when the anti-spoof lookup is performed in the upstream direction, producing an incorrect anti-spoofing result.

Similar logic applies to overlapping IPv4 prefixes in the downstream direction, where the longest prefix match always leads to the same CE, while the shorter match (leading to a different CE) is never evaluated.
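The ambiguity can be demonstrated with a small Python example using two hypothetical overlapping rules. Both rules accept the same end-user prefix but decode it to different public IPv4 addresses and PSIDs, so an upstream anti-spoof lookup cannot tell the two CEs apart. The function and rule values are invented for this illustration.

import ipaddress

def decode(pd_str, rule_v6_len, rule_v4, ea_len):
    """Minimal BMR decode (see the sketch in 'A+P mapping algorithm')."""
    pd = ipaddress.ip_network(pd_str)
    assert pd.prefixlen == rule_v6_len + ea_len
    ea = (int(pd.network_address) >> (128 - pd.prefixlen)) & ((1 << ea_len) - 1)
    r4 = ipaddress.ip_network(rule_v4)
    psid_len = ea_len - (32 - r4.prefixlen)
    v4 = ipaddress.ip_address(int(r4.network_address) | (ea >> psid_len))
    return str(v4), ea & ((1 << psid_len) - 1)

pd = "2001:db8:0:1230::/60"   # falls under both overlapping rule prefixes
# Rule 1: IPv6 prefix 2001:db8::/48 (more specific), IPv4 10.1.1.0/24, EA = 12
# Rule 2: IPv6 prefix 2001:db8::/44, IPv4 10.0.0.0/24, EA = 16
print(decode(pd, 48, "10.1.1.0/24", 12))   # ('10.1.1.18', 3)
print(decode(pd, 44, "10.0.0.0/24", 16))   # ('10.0.0.1', 35)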

BMR rules implementation example

This section examines an example MAP-T deployment with three MAP rules. The deployment assumes the following:

  • There are about 12,000 private IPv4 addresses that need to be translated via MAP-T.

  • Each CE should have approximately 4000 ports available. Therefore, the IP address sharing ratio is 16:1; that is, 16 CEs share the same public IP address.

  • The public IPv4 addresses that are available to the operator for this translation are from three /24 subnets (10.11.11.0/24, 10.12.12.0/24 and 10.13.13.0/24).

  • All users (or CEs) are assigned a /60 IA-PD.

The 12,000 private IPv4 addresses (CEs) in a 16:1 sharing scenario can be covered using three /24 subnets as follows:

(3 * 2^8 * 16 = 12,288)

The IPv4 rule prefix and EA bits length per rule in this scenario are:

  • 10.11.11.0/24 EA length: 12 bits (8 bits for the IPv4 suffix and 4 bits for PSID)

  • 10.12.12.0/24 EA length: 12 bits (8 bits for the IPv4 suffix and 4 bits for PSID)

  • 10.13.13.0/24 EA length: 12 bits (8 bits for the IPv4 suffix and 4 bits for PSID)

The first 6 bits of the 16-bit port number form the PSID offset; ports in which these bits are all zero (ports 0 to 1023, the well-known ports) are excluded from the mapping. Therefore, the user-allocated port space per CE is calculated as follows:

65536/16 - 1024/16 = 4096 - 64 = 4032 ports
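These figures can be sanity-checked with a few lines of Python; the values are taken from this example.

# CEs covered: 3 subnets x 256 addresses x 16 CEs per shared address
assert 3 * 2**8 * 16 == 12288

# Ports per CE: a 16-bit port space shared 16 ways, minus each CE's share
# of the 1024 ports (0-1023) excluded by the 6-bit PSID offset.
psid_offset, psid_len = 6, 4
assert (2**psid_offset - 1) * 2**(16 - psid_offset - psid_len) == 4032
assert 65536 // 16 - 1024 // 16 == 4032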

The IPv6 rule prefix is the next parameter in the MAP rule. The following figure shows the relevant bits in the IPv6 address: only bits /32 to /64 are considered; the irrelevant bits of the IPv6 addresses are ignored in this example.

Figure 59. Determining the rule IPv6 prefix

The following three rules are created in this example:

  • Rule 1 covers subnet 1.

  • Rule 2 covers subnet 2.

  • Rule 3 covers subnet 3.

In each of the three cases, the EA bits extend from the IPv6 rule prefix length (/48) to the PD length (/60).

The IPv6 rule prefix length is determined for each of the three rules. However, the IPv6 rule prefixes must not overlap; see Rule prefix overlap for more information. Non-overlapping IPv6 rule prefixes ensure that each CE is assigned a unique IA-PD. IPv6 rule prefixes describes the rules.

Table 23. IPv6 rule prefixes

                   Rule 1               Rule 2               Rule 3
IPv6 rule prefix   2001:db8:0000::/48   2001:db8:0001::/48   2001:db8:0002::/48
IPv4 rule prefix   10.11.11.0/24        10.12.12.0/24        10.13.13.0/24
EA bits            12                   12                   12
PSID offset        6                    6                    6

The final step is to ensure that the DHCPv6 server hands out correct end-user prefixes (IA-PD), and the rules are also delegated.

In this example, each /48 IPv6 rule prefix supports 4,096 MAP-T CEs, where each CE can further delegate 15 IPv6 ‟subnets” on the LAN side and each CE is allocated 4,032 ports to use in stateful NAT44.

Note: The 7750 SR and VSR BR supports only IPv6 rule prefixes of the same length within a domain. To accommodate a different prefix length assignment for IA-PD (for example /56), create another domain with a different IPv6 rule prefix (/44 instead of /48).

ICMP

The following ICMPv4 messages are supported in MAP-T on the 7750 SR and VSR platforms; other types of ICMP messages are not supported:

  • ICMP Query messages

    These messages contain an identifier field in the ICMP header, which is referred to as the ‟query identifier” or ‟query ID”; it is used in MAP-T in the same way as the L4 ports are used in TCP or UDP (see the sketch at the end of this section). ICMP Echo Req/Rep (PING) and traceroute are examples that rely on ICMP Query messages.

  • ICMP Error messages

    These messages contain the embedded original datagram that triggers the ICMP error message. The ICMP error messages do not contain the query-id field.

The ICMP Query messages and ICMP Error messages are supported regardless of whether they are transit messages passing through the 7750 SR or VSR, or are terminated in or generated by the 7750 SR or VSR.

The NAT-related ICMPv4 behavior is described in RFC 5508. The following ICMPv4 messages are supported in the MAP-T 7750 SR and VSR (RFC 5508, §7, Requirement 10a):

  • ICMPv4 Error Message: Destination Unreachable Message (Type 3)

  • ICMPv4 Error Message: Time Exceeded Message (Type 11)

  • ICMPv4 Query Message: Echo and Echo Reply Messages (Type 8 and Type 0)
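Because the query ID plays the role of an L4 port, validating it against a CE's port set uses the same PSID extraction as for TCP or UDP. A minimal sketch follows, with the PSID offset and length values used in the examples in this document; the function name psid_of is hypothetical.

def psid_of(port: int, psid_offset: int = 6, psid_len: int = 4) -> int:
    """Extract the PSID bits from a TCP/UDP port or ICMP query ID
    (RFC 7597 port-mapping algorithm)."""
    return (port >> (16 - psid_offset - psid_len)) & ((1 << psid_len) - 1)

# Query IDs below 1024 have all PSID-offset bits set to zero and are never
# valid; query ID 1216 belongs to the port set of the CE with PSID 3.
assert psid_of(1216) == 3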

Fragmentation

In fragmentation, the following cases are distinguished:

  • SR/VSR fragments packets

    • In the downstream direction, SR/VSR fragments IPv6 packets but not IPv4 packets.

    • In the upstream direction, no additional translation is required. IPv4 fragmentation occurs after translation, following standard behavior.

  • SR/VSR receives fragments that need to be translated

    • In the downstream direction, fragments are translated into IPv6, and caching is required.

      Note: The received fragments may undergo further fragmentation.
    • In the upstream direction, fragments are translated into IPv4 and caching is NOT required.

The IPv6 header of the IPv4-translated packet in MAP-T can be up to 28 bytes larger than the IPv4 header (40-byte IPv6 header plus 8-byte fragmentation header versus 20-byte IPv4 header).

The IPv6 MTU in the 7750 SR and VSR is configurable for each MAP-T domain. The Layer 2 header is excluded from the IPv6 MTU.

Fragmentation in the downstream direction

All fragments of the same IPv4 packet are translated and sent toward the same CE. As the second and consecutive fragments do not contain any port information, the translation is performed based on the <SA, DA, Prot, Ident> cached flow records extracted from the IPv4 header.

Note: The VSR may further fragment an IPv4 fragment that it has received to fit it within the IPv6 MTU.
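The caching idea can be pictured with the following toy Python sketch (an illustration only, not the VSR data path): the first fragment, which carries the L4 header, creates a flow record keyed on <SA, DA, Prot, Ident>; later fragments resolve against it or must wait for it.

from typing import Dict, Optional, Tuple

FlowKey = Tuple[str, str, int, int]  # <SA, DA, Prot, Ident> from the IPv4 header

class FragCache:
    """Map first-fragment port lookups onto later fragments (illustration)."""
    def __init__(self) -> None:
        self._flows: Dict[FlowKey, int] = {}   # key -> destination L4 port

    def first_fragment(self, key: FlowKey, dport: int) -> int:
        # The first fragment (FO=0) carries the L4 header: create the record.
        self._flows[key] = dport
        return dport

    def next_fragment(self, key: FlowKey) -> Optional[int]:
        # Non-first fragments have no L4 header: resolve via the cached
        # record, or buffer/drop if the first fragment was not seen yet.
        return self._flows.get(key)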

The following figure shows downstream fragmentation scenarios.

Figure 60. Fragmentation in the downstream direction

Fragmentation in the upstream direction

In the upstream direction, the received IPv6 fragments are artifacts of the IPv4 packets being fragmented on the CE side, before they are translated into IPv6. No flow caching is performed in the upstream direction. The BR performs an anti-spoof check on each fragment; if the check is successful, the fragment is translated to IPv4. The following figure shows the upstream fragmentation scenario.

Figure 61. Fragmentation in the upstream direction

Inter-chassis redundancy

MAP natively provides multichassis redundancy through the use of the anycast BR prefix that is advertised from multiple nodes.

As there is no state maintenance in the MAP-T BR, any BR node can process traffic for the same domain at all times. The only traffic interruption during a switchover is for the fragmented traffic in the downstream direction being handled at the time of the switchover (the flow record cache is not synchronized between the nodes).

Logging

As with any NAT operation where the identity of the user is hidden behind the NAT identity, logging of the NAT translation information is required. In the MAP-T domain, NAT logging is based on configuration changes because the user identity can be derived from the configured rules.

A system can have a large number of rules and each configured MAP rule generates a separate log. As a result, the volume of logs generated can be substantial. Logging is explicitly enabled using a log event.

A NAT log contains information about the following:

  • MAP type (Map-T)

  • map-domain name

  • map-rule name

  • v6 rule-prefix

  • v4 rule-prefix

  • EA bits

  • psid-offset bits

  • associated routing context for the MAP-T rule

  • timestamp

A MAP rule log is generated when both of the following conditions are met:

  • a MAP rule is activated or deactivated in the system (administratively enabled or disabled, the corresponding MAP domain is associated with or dissociated from the routing context, the corresponding MAP domain is administratively enabled or disabled, and so on)

  • event tmnxNatMapRuleChange (id=2036) has been enabled in event-control

MAP rule log output

551 2016/04/22 14:56:35.44 UTC MINOR: NAT #2036 vprn220 NAT MAP
"map-type=map-t map-domain=domain-name-1 rule-name=rule-name-1 rule-prefix=2001:db8::/44 ipv4-prefix=192.168.10.0/24 ea-length=12 psid-offset=6 enabled router=vprn220 at 2016/04/22 14:56:35"

MAP-T on VSR

VSR-specific implementation is described in the following sections.

Fragmentation statistics

Use the following command to clear fragmentation statistics.

clear nat map statistics frag-stats

Use the following command to display NAT MAP fragmentation information.

show service nat map frag-stats

The following fragmentation statistics are available using the preceding command:

  • Rx Resolved Frags

    This counter shows fragments that were resolved and never buffered; for example:

    • first fragments (MF=1, FO=0), which are always resolved by definition

    • non-first fragments with matching flow records

  • Rx Unresolved Frags

    This counter shows the number of packets that were queued in the system since the last clear command was invoked. For example, out-of-order fragments without a matching flow record (missing first fragment) can eventually be resolved and forwarded, or discarded (for example, because of a timeout).

  • Tx Frags

    This counter shows the fragments that were transmitted (Rx Resolved Frags and Rx Unresolved Frags that were eventually resolved) out of fragmentation logic within the VSR. There is no guarantee that the fragments are transmitted out of the system as they may be dropped on egress because of congestion or restrictions imposed by the configured filter.

  • Dropped Frags

    This counter represents the fragments that are dropped because of fragmentation issues such as timeout, buffer full, and so on.

  • Created Flows

    This is a cumulative counter that represents the total number of flow records since the last clear command was invoked. It only counts the first fragment. It represents the number of fragmented packets that were processed by the system since the last clear command. This counter does not indicate the number of flows (packets whose fragments were transmitted fully) that were actually transmitted.

  • Flow Collisions

    This counter represents the number of overlapping first fragments; for example, when a flow record already exists and another first fragment for the same flow is received.

  • Exceeded Max Flows

    This counter represents the number of times that the number of flows in the system has exceeded the maximum supported value.

  • Exceeded Max Timeouts

    This counter shows the number of fragments that have timed out since the last clear command. The represented fragments are:

    • Rx unresolved (buffered) fragments that have timed out because of a missing first fragment

    • deleted flow-records because they have not received all fragments within the timeout period

  • Exceeded Max Buffers

    This counter represents the number of times that the number of buffers in the system has exceeded the maximum supported value.

  • Exceeded Max Buffers Per Flow

    This counter represents the number of times that the fragment counter per flow has exceeded its limit.

  • In-Use Flows %

    This counter gives an approximation of the number of flow records currently in use (that is, the number of fragmented packets being processed at the time the command was invoked), expressed as a percentage.

  • Max Flows %

    This is a non-cumulative counter that represents the maximum number of flow records reached since the last clear command. The counter shows the highest value of the In-Use Flows counter since the last clear command, expressed as a percentage.

  • In-Use Buffers %

    This counter represents the amount of buffered fragments expressed as a percentage of the maximum buffer space that can be used for fragmentation.

  • Max Buffers %

    This is a non-cumulative counter that represents the maximum number of buffers allocated since the last clear command. The counter captures the highest value of the In-Use Buffers counter since the last clear command. The unit of this counter is the percentage of the total buffer space that can be used by fragmentation.

Maximum Segment Size (MSS) adjust

The MSS adjust feature is used to prevent fragmentation of TCP traffic. TCP synchronize (SYN) packets are intercepted and their MSS value is inspected to ensure that it conforms to the configured MSS value. If the inspected value is greater than the value configured in the VSR BR, the MSS value in the packet is lowered to match the configured value before the TCP SYN packet is forwarded.

The end nodes governing the MSS value are IPv4 nodes; therefore, this feature is supported for IPv4 packets only.

An MSS adjust is performed in both the upstream and downstream directions.
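A conceptual Python sketch of the clamping rule follows: it parses the TCP options of a SYN and lowers the MSS option if it exceeds the configured value. Checksum recomputation, malformed-option handling, and the surrounding packet processing are omitted; the function name clamp_mss is hypothetical.

def clamp_mss(tcp_options: bytes, configured_mss: int) -> bytes:
    """Rewrite the MSS option (kind=2, len=4) in a TCP SYN if it exceeds
    the configured value."""
    opts = bytearray(tcp_options)
    i = 0
    while i < len(opts):
        kind = opts[i]
        if kind == 0:                     # End of Option List
            break
        if kind == 1:                     # NOP: one byte, no length field
            i += 1
            continue
        length = opts[i + 1]
        if kind == 2 and length == 4:     # MSS option: kind, len, 16-bit MSS
            mss = int.from_bytes(opts[i + 2:i + 4], "big")
            if mss > configured_mss:
                opts[i + 2:i + 4] = configured_mss.to_bytes(2, "big")
        i += length
    return bytes(opts)

# Example: an advertised MSS of 1460 is lowered to a configured 1400.
syn_opts = bytes([2, 4, 0x05, 0xB4])      # MSS = 1460
assert clamp_mss(syn_opts, 1400) == bytes([2, 4, 0x05, 0x78])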

Statistics collection

The Virtualized Service Router Border Router (VSR BR) maintains statistics per MAP-T domain and per MAP-T rule.

Per-direction aggregated counts of forwarded and dropped packets are always available. However, detailed statistics for individual rules must be explicitly enabled via configuration for each rule that requires such tracking. See the scaling guide for the appropriate release for the specific maximum number of simultaneous rules that can have detailed statistics enabled.

Use the following command to enable detailed statistics collection per MAP-T rule:

  • MD-CLI
    configure service nat map-t domain string mapping-rule statistics-collection true
  • classic CLI
    configure service nat map-domain mapping-rule enable-statistics

Use the following commands to display statistics per MAP-T domain or per MAP-T rule.

show service nat map map-domain domain-name statistics
show service nat map map-domain domain-name statistics mapping-rule rule-name 

The VSR maintains the following statistics, which are described in VSR statistics field descriptions:

Table 24. VSR statistics field descriptions

Field name

Description

Upstream (IPv6->IPv4) forwarded packets

Specifies the number of packets forwarded in the upstream direction

Upstream (IPv6->IPv4) forwarded octets

Specifies the number of octets forwarded in the upstream direction

Upstream (IPv6->IPv4) dropped packets

Specifies the number of packets discarded in the upstream direction

Upstream (IPv6->IPv4) dropped octets

Specifies the number of octets discarded in the upstream direction

Upstream (IPv6->IPv4) dropped anti-spoof packets

Specifies the number of packets dropped in the upstream direction because of an anti-spoof lookup failure

Upstream (IPv6->IPv4) dropped icmpv6 packets

Specifies the number of ICMPv6 packets dropped in the upstream direction

Upstream (IPv6->IPv4) dropped other packets

Specifies the number of packets dropped in the upstream direction because of other reasons

Upstream (IPv6->IPv4) dropped unknown protocol packets

Specifies the number of packets dropped in the upstream direction because of an unknown protocol (not TCP/UDP/ICMP)

Upstream (IPv6->IPv4) fragmented packets

Specifies the number of fragmented packets received in the upstream direction

Upstream (IPv6->IPv4) icmpv6 node info packets

Specifies the number of ICMPv6 node information packets received in the upstream direction

Upstream (IPv6->IPv4) cpe icmpv6 error packets

Specifies the number of CPE-generated ICMPv6 error report packets received in the upstream direction

Upstream (IPv6->IPv4) intermediate icmpv6 error packets

Specifies the number of intermediate node-generated ICMPv6 error report packets received in the upstream direction

Downstream (IPv4->IPv6) forwarded packets

Specifies the number of packets forwarded in the downstream direction

Downstream (IPv4->IPv6) forwarded octets

Specifies the number of octets forwarded in the downstream direction

Downstream (IPv4->IPv6) dropped packets

Specifies the number of packets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped octets

Specifies the number of octets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped fragmented packets

Specifies the number of packets discarded in the downstream direction because of fragmentation

Downstream (IPv4->IPv6) dropped icmpv4 packets

Specifies the number of ICMPv4 packets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped unknown protocol packets

Specifies the number of packets dropped in the downstream direction because of an unknown protocol (not TCP/UDP/ICMP)

Downstream (IPv4->IPv6) fragmented packets

Specifies the number of fragmented packets received in the downstream direction

Downstream (IPv4->IPv6) need fragmentation packets

Specifies the number of packets received in the downstream direction that require fragmentation

Downstream (IPv4->IPv6) icmpv4 error packets

Specifies the number of ICMPv4 error report packets received in the downstream direction

Downstream (IPv4->IPv6) icmpv4 echo packets

Specifies the number of ICMPv4 echo packets received in the downstream direction

Use the following command to display MAP domain statistics.

show service nat map map-domain "map-domain1" statistics
===============================================================================
MAP domain "map-domain1"
===============================================================================
Upstream (IPv6->IPv4) forwarded packets                 : 0
Upstream (IPv6->IPv4) forwarded octets                  : 0
Upstream (IPv6->IPv4) dropped packets                   : 0
Upstream (IPv6->IPv4) dropped octets                    : 0
Upstream (IPv6->IPv4) dropped anti-spoof packets        : 0
Upstream (IPv6->IPv4) dropped icmpv6 packets            : 0
Upstream (IPv6->IPv4) dropped other packets             : 0
Upstream (IPv6->IPv4) dropped unknown protocol packets  : 0
Upstream (IPv6->IPv4) fragmented packets                : 0
Upstream (IPv6->IPv4) icmpv6 node info packets          : 0
Upstream (IPv6->IPv4) cpe icmpv6 error packets          : 0
Upstream (IPv6->IPv4) intermediate icmpv6 error packets : 0
Downstream (IPv4->IPv6) forwarded packets               : 0
Downstream (IPv4->IPv6) forwarded octets                : 0
Downstream (IPv4->IPv6) dropped packets                 : 0
Downstream (IPv4->IPv6) dropped octets                  : 0
Downstream (IPv4->IPv6) dropped fragmented packets      : 1
Downstream (IPv4->IPv6) dropped icmpv4 packets          : 0
Downstream (IPv4->IPv6) dropped unknown protocol packets: 0
Downstream (IPv4->IPv6) fragmented packets              : 0
Downstream (IPv4->IPv6) need fragmentation packets      : 1
Downstream (IPv4->IPv6) icmpv4 error packets            : 0
Downstream (IPv4->IPv6) icmpv4 echo packets             : 0
===============================================================================

Use the following command to display MAP domain statistics per mapping rule.

show service nat map map-domain "map-domain1" statistics mapping-rule "map-domain1-rule1"
===============================================================================
MAP-T domain "map-domain1" mapping-rule "map-domain1-rule1"
===============================================================================
Upstream (IPv6->IPv4) forwarded packets                 : 0
Upstream (IPv6->IPv4) forwarded octets                  : 0
Upstream (IPv6->IPv4) dropped packets                   : 0
Upstream (IPv6->IPv4) dropped octets                    : 0
Upstream (IPv6->IPv4) dropped anti-spoof packets        : 0
Upstream (IPv6->IPv4) dropped icmpv6 packets            : 0
Upstream (IPv6->IPv4) dropped other packets             : 0
Upstream (IPv6->IPv4) dropped unknown protocol packets  : 0
Upstream (IPv6->IPv4) fragmented packets                : 0
Upstream (IPv6->IPv4) icmpv6 node info packets          : 0
Upstream (IPv6->IPv4) cpe icmpv6 error packets          : 0
Upstream (IPv6->IPv4) intermediate icmpv6 error packets : 0
Downstream (IPv4->IPv6) forwarded packets               : 0
Downstream (IPv4->IPv6) forwarded octets                : 0
Downstream (IPv4->IPv6) dropped packets                 : 0
Downstream (IPv4->IPv6) dropped octets                  : 0
Downstream (IPv4->IPv6) dropped fragmented packets      : 1
Downstream (IPv4->IPv6) dropped icmpv4 packets          : 0
Downstream (IPv4->IPv6) dropped unknown protocol packets: 0
Downstream (IPv4->IPv6) fragmented packets              : 0
Downstream (IPv4->IPv6) need fragmentation packets      : 1
Downstream (IPv4->IPv6) icmpv4 error packets            : 0
Downstream (IPv4->IPv6) icmpv4 echo packets             : 0
===============================================================================

Licensing

A valid MAP-T license is required to enable the MAP-T functionality in the VSR BR. A MAP-T domain can only be instantiated with the appropriate license, which enables the following CLI command:

  • MD-CLI
    configure service nat map-t domain
  • classic CLI
    configure service nat map-domain

MAP-T configuration

The MAP-T configuration consists of defining MAP-T command options within a template. The MAP-T domain is then instantiated by applying (referencing) this template within a routing (router or VPRN) context.

Define a MAP domain template (MD-CLI)
[ex:/configure service nat]
A:admin@node-2# info
    map-t {
        domain "test" {
            dmr-prefix 2001::/96
            mtu 5000
            tcp-mss-adjust 200
            ip-fragmentation {
                v6-frag-header true
            }
            mapping-rule "test1" {
                ipv4-prefix 192.0.0.0/32
                ea-length 48
                psid-offset 16
                rule-prefix 2001::/64
            }
            mapping-rule "test2" {
                ipv4-prefix 192.0.0.0/28
                ea-length 12
                psid-offset 10
                rule-prefix 3ffe::/64
            }
        }
    }
Define a MAP domain template (classic CLI)
A:node-2>config>service>nat# info
----------------------------------------------
            map-domain "test" create
                shutdown
                dmr-prefix 2001::/96
                mtu 5000
                tcp-mss-adjust 200
                ip-fragmentation
                    v6-frag-header
                exit
                mapping-rule "test1" create
                    shutdown
                    ea-length 48
                    ipv4-prefix 192.0.0.0/32
                    psid-offset 16
                    rule-prefix 2001::/64
                exit
                mapping-rule "test2" create
                    shutdown
                    ea-length 12
                    ipv4-prefix 192.0.0.0/28
                    psid-offset 10
                    rule-prefix 3ffe::/64
                exit
            exit
----------------------------------------------
Configure a MAP-T domain instantiation on a routing instance (MD-CLI)
[ex:/configure router "Base"]
A:admin@node-2# info
    nat {
        map {
            map-domain "test" { }
        }
    }
Configure a MAP-T domain instantiation on a routing instance (classic CLI)
A:node-2>config>router# info
#--------------------------------------------------
echo "NAT Configuration"
#--------------------------------------------------
        nat
            map
                map-domain "test"
            exit
        exit
----------------------------------------------
Configure a MAP-T domain instantiation on a VPRN instance (MD-CLI)
[ex:/configure service vprn "5"]
A:admin@node-2# info
    admin-state enable
    customer "1"
    nat {
        map {
            map-domain "test1" { }
        }
    }
Configure a MAP-T domain instantiation on a VPRN instance (classic CLI)
A:node-2>config>service>vprn# info
----------------------------------------------
            nat
                map
                    map-domain "test1"
                exit
            exit
            no shutdown
----------------------------------------------
MAP domain template for the BMRs

The following example shows the MAP domain template for the BMRs defined in the BMR rules implementation example.

Define a MAP domain template for the BMRs (MD-CLI)
[ex:/configure service]
A:admin@node-2# info
    nat {
        map-t {
            domain "domain_1" {
                admin-state enable
                dmr-prefix 2001:db8:100::/64
                mapping-rule "rule_1" {
                    ipv4-prefix 10.11.11.0/24
                    ea-length 12
                    rule-prefix 2001:db8::/48
                }
                mapping-rule "rule_2" {
                    ipv4-prefix 10.12.12.0/24
                    ea-length 12
                    rule-prefix 2001:db8:1::/48
                }
                mapping-rule "rule_3" {
                    ipv4-prefix 10.13.13.0/24
                    ea-length 12
                    rule-prefix 2001:db8:2::/48
                }
            }
MAP domain template for the BMRs (classic CLI)
A:node-2>config>service# info
----------------------------------------------
        customer 1 name "1" create
            description "Default customer"
        exit
        nat
            map-domain "domain_1" create
                dmr-prefix 2001:db8:100::/64
                mapping-rule "rule_1" create
                    shutdown
                    ea-length 12
                    ipv4-prefix 10.11.11.0/24
                    rule-prefix 2001:db8::/48
                exit
                mapping-rule "rule_2" create
                    shutdown
                    ea-length 12
                    ipv4-prefix 10.12.12.0/24
                    rule-prefix 2001:db8:1::/48
                exit
                mapping-rule "rule_3" create
                    shutdown
                    ea-length 12
                    ipv4-prefix 10.13.13.0/24
                    rule-prefix 2001:db8:2::/48
                exit
                no shutdown
            exit
MAP-T domain instantiation example

The following example shows the MAP-T domain instantiation for the BMRs defined in the BMR rules implementation example.

Configure a MAP-T domain instantiation for the BMRs (MD-CLI)
[ex:/configure service vprn "10"]
A:admin@node-2# info
    customer "1"
    nat {
        map {
            map-domain "domain_1" { }
        }
    }
Configure a MAP-T domain instantiation for the BMRs (classic CLI)
A:node-2>config>service>vprn$ info
----------------------------------------------
            shutdown
            nat
                map
                    map-domain "domain_1"
                exit
            exit
----------------------------------------------
Modifying MAP-T command options when the MAP-T domain is active

You can add new rules to an existing MAP-T domain while the MAP-T domain is instantiated and forwarding traffic. In the classic CLI, however, each rule must be in the administratively disabled state before any of its command options are modified.

In the classic CLI, the MAP-T domain must be in the administratively disabled state to modify the dmr-prefix command option. The remaining command options (tcp-mss-adjust, mtu, ip-fragmentation) can be modified while the domain is active.

A MAP domain does not have to be in an administratively disabled state while rule modification is in progress.
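
For example, the following classic CLI sketch, which reuses the domain and rule names from the template example earlier in this section, disables a rule, modifies one of its command options, and re-enables it while the domain remains active (the new psid-offset value is hypothetical):

configure service nat map-domain "test" mapping-rule "test1" shutdown
configure service nat map-domain "test" mapping-rule "test1" psid-offset 12
configure service nat map-domain "test" mapping-rule "test1" no shutdown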

MAP-T on SR

MAP-T on 7750 SR provides high throughput because most of the traffic is handled directly in the data plane. Only fragmented and ICMP traffic is offloaded to the ESA-VM.

When fragments are received downstream, they require caching and buffering, which demands significant memory. Such traffic is unsuitable for processing in the data plane or CPM because it can expose the CPM and the overall routing and management plane to vulnerabilities, such as fragmentation attacks or memory exhaustion. Because this fragmented traffic is transit traffic and is not locally terminated, processing it in the CPM can overwhelm system resources.

The ESA-VM, with its greater memory and processing capabilities, is optimized to handle this type of traffic. The ESA-VM provides a safer environment by isolating routing and management functions from forwarding processes.

MAP-T on the 7750 SR also requires a Port-Cross-Connect (PXC), which acts as a loopback in the system. Use of the PXC is simplified through the Forwarding Path Extensions (FPE) concept.

For more information about PXC and FPE, see sections "Port-Cross Connect" and "FPE" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide. See MAP-T configuration for configuration examples detailing MAP-T setup on the 7750 SR.

Fragmentation statistics and resources

When downstream fragments are received out of order (where non-first fragments arrive before the first fragment), the non-first fragments are buffered until the first fragment is received. The number of fragments buffered per packet is configurable, with a default value of 5. Use the following command to configure the number of buffered fragments per packet.

configure isa map-t-group fragments-per-packet
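
For example, the following sketch raises the limit to a hypothetical value of 8 fragments per packet for map-t-group 1 (the group ID matches the ESA-VM configuration shown later in this section):

configure isa map-t-group 1 fragments-per-packet 8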

The ESA-VM offers resource monitoring capabilities to track fragmentation-related resources. The following metrics are available for total, current usage, and peak values:

  • Buffers

    This denotes the total buffers available for fragmentation in the system, the buffers currently in use, and the maximum buffers allocated since the last reset. These are absolute values.

  • Fragment Lists

    A fragment list tracks the fragments of a single packet. The total, currently in-use, and maximum number of fragment lists since the last reset are monitored. These are absolute values.

Use the following command to display fragmentation related resources.

show isa nat-system-resources esa-vm 1/1
===============================================================================
MAP-T ESA-VM 1/1 resources
===============================================================================
Name                                                                          
                      Maximum        Actual          Peak       Peak Timestamp
-------------------------------------------------------------------------------
Downstream fragment lists
                      1835008             0             0                  N/A
Downstream fragment bufs
                      1835008             0             0                  N/A
===============================================================================

In addition to resource monitoring, the following statistics for processed fragments are available:

  • RX Resolved frags

    This counter represents fragments that were resolved and never buffered since the last clear command was invoked. For example:

    • first fragments (MF=1, FO=0), which are always resolved by definition

    • non-first fragments with matching flow records

  • RX Unresolved frags

    This counter represents the number of packets that were buffered in the system since the last clear command was invoked; for example, out-of-order fragments without a matching flow record (missing first fragment). These packets can eventually be resolved and forwarded, or discarded (for example, because of a timeout).

  • TX frags

    This counter represents the fragments that were transmitted from the ESA-VM.

  • Dropped timeout

    This counter represents the fragments that have timed out since the last clear command.

    • Unresolved (buffered) fragments that timed out due to a missing first fragment

    • Deleted flow-records due to incomplete fragments within the timeout period (2s)

  • Dropped buffer exhaustion

    This counter represents the number of fragments dropped because the total buffer capacity was exhausted.

  • Dropped too many frags

    This counter represents the number of fragments dropped because the number of fragments per packet exceeded the configured limit.

  • Dropped too many lists

    This counter represents the number of fragments dropped due to too many fragmented packets being processed concurrently.

  • Dropped fragment lists timeout

    This counter represents the number of fragment lists dropped after reaching the timeout limit.

  • Overlapping first fragments

    This counter represents the number of overlapping first fragments detected. For example, if a fragment list already exists, and another fragment with the same fields (MF=1, FO=0) arrives, this is counted as a collision.

Use the following command to display the number of fragments per flow in a fragment list at the time when the fragment list was resolved or dropped.

show service nat map map-domain "demo-domain-1" esa-vm "1/1" frag-stats
===============================================================================
MAP-T domain "demo-domain-1" esa-vm 1/1 fragmentation statistics
===============================================================================
RX Resolved frags                                       : 0
RX Unresolved frags                                     : 0
TX Frags                                                : 0
Dropped timeout                                         : 0
Dropped buffer exhaustion                               : 0
Dropped too many frags                                  : 0
Dropped too many lists                                  : 0
Dropped fragment lists timeout                          : 0
Overlapping first fragments                             : 0
===============================================================================

Use the following command to display the number of fragments that were buffered when the fragment list was either resolved or dropped. An increase in counters toward the end of the output suggests a rise in out-of-order packets.

show service nat map map-domain "demo-domain-1" esa-vm "1/1" frag-stats lists 
===============================================================================
MAP-T domain "demo-domain-1" esa-vm 1/1 fragment lists histogram
===============================================================================
Histogram of resolved fragment lists:
0 fragments                                             : 0
1 fragments                                             : 0
2 fragments                                             : 0
3 fragments                                             : 0
4 fragments                                             : 0
5 fragments                                             : 0

Histogram of dropped fragment lists:
0 fragments                                             : 0
1 fragments                                             : 0
2 fragments                                             : 0
3 fragments                                             : 0
4 fragments                                             : 0
5 fragments                                             : 0
===============================================================================

Use the following command to clear fragmentation statistics.

clear nat map statistics frag-stats map-domain "demo-domain-1" esa-vm "1/1" lists 

Statistics collection

MAP-T statistics on the 7750 SR are collected in two locations:

  • ESA-VM

    This location gathers statistics related to ICMP traffic and to packets that are received fragmented by the 7750 SR and punted to the ESA-VM for further processing (in both the upstream and downstream directions).

  • FP

    This location tracks statistics for MAP-T traffic that is processed directly by the forwarding path (FP) without being punted to ESA-VM.

FP statistics

The following table describes the domain-level statistics enabled on the FP for the 7750 SR.

Table 25. Domain-level statistics on FP

Field name

Description

Upstream (IPv6->IPv4) forwarded packets

Specifies the number of packets forwarded in the upstream direction

Upstream (IPv6->IPv4) forwarded octets

Specifies the number of octets forwarded in the upstream direction

Upstream (IPv6->IPv4) dropped anchor-interface packets

Specifies the number of packets discarded on the anchor interface in the upstream direction

Downstream (IPv4->IPv6) forwarded packets

Specifies the number of packets forwarded in the downstream direction

Downstream (IPv4->IPv6) forwarded octets

Specifies the number of octets forwarded in the downstream direction

Downstream (IPv4->IPv6) dropped anchor-interface packets

Specifies the number of packets discarded on the anchor interface in the downstream direction

The following table describes the domain-level failure statistics enabled on the FP for the 7750 SR.

Table 26. Domain-level failures on FP

Field name

Description

Upstream (IPv6->IPv4) dropped anti-spoof packets

Specifies the number of packets dropped in the upstream direction because of anti-spoof failures

Upstream (IPv6->IPv4) dropped unknown protocol packets

Specifies the number of packets dropped in the upstream direction because of an unknown protocol (not TCP/UDP/ICMP)

Downstream (IPv4->IPv6) dropped unknown protocol packets

Specifies the number of packets dropped in the downstream direction because of an unknown protocol (not TCP/UDP/ICMP)

The following table describes the rule-level statistics enabled on the FP for the 7750 SR.

Table 27. Rule-level statistics on FP

Field name

Description

Upstream (IPv6->IPv4) dropped anti-spoof packets

Specifies the number of packets dropped in the upstream direction because of anti-spoof failures

Upstream (IPv6->IPv4) forwarded packets

Specifies the number of packets forwarded in the upstream direction

Upstream (IPv6->IPv4) forwarded octets

Specifies the number of octets forwarded in the upstream direction

Downstream (IPv4->IPv6) forwarded packets

Specifies the number of packets forwarded in the downstream direction

Downstream (IPv4->IPv6) forwarded octets

Specifies the number of octets forwarded in the downstream direction

ESA-VM statistics

The following table describes the statistics enabled for the ESA-VM.

Table 28. Domain- and rule-level statistics on ESA-VM

Field name

Description

Upstream (IPv6->IPv4) forwarded packets

Specifies the number of packets forwarded in the upstream direction

Upstream (IPv6->IPv4) forwarded octets

Specifies the number of octets forwarded in the upstream direction

Upstream (IPv6->IPv4) dropped packets

Specifies the number of packets discarded in the upstream direction

Upstream (IPv6->IPv4) dropped octets

Specifies the number of octets discarded in the upstream direction

Upstream (IPv6->IPv4) dropped anti-spoof packets

Specifies the number of packets dropped in the upstream direction because of an anti-spoof lookup failure

Upstream (IPv6->IPv4) dropped icmpv6 packets

Specifies the number of ICMPv6 packets dropped in the upstream direction

Upstream (IPv6->IPv4) dropped unknown protocol packets

Specifies the number of packets dropped in the upstream direction because of an unknown protocol (not TCP/UDP/ICMP)

Upstream (IPv6->IPv4) fragmented packets

Specifies the number of fragmented packets received in the upstream direction

Upstream (IPv6->IPv4) icmpv6 echo packets

Specifies the number of ICMPv6 echo packets received in the upstream direction

Upstream (IPv6->IPv4) cpe icmpv6 error packets

Specifies the number of CPE-generated ICMPv6 error report packets received in the upstream direction

Upstream (IPv6->IPv4) intermediate icmpv6 error packets

Specifies the number of intermediate node-generated ICMPv6 error report packets received in the upstream direction

Downstream (IPv4->IPv6) forwarded packets

Specifies the number of packets forwarded in the downstream direction

Downstream (IPv4->IPv6) forwarded octets

Specifies the number of octets forwarded in the downstream direction

Downstream (IPv4->IPv6) dropped packets

Specifies the number of packets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped octets

Specifies the number of octets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped fragmented packets

Specifies the number of fragments discarded in the downstream direction

Downstream (IPv4->IPv6) dropped need-fragmentation pckts

Specifies the number of packets discarded in the downstream direction that require fragmentation but have the DF bit set

Downstream (IPv4->IPv6) dropped icmpv4 packets

Specifies the number of ICMPv4 packets discarded in the downstream direction

Downstream (IPv4->IPv6) dropped unknown protocol packets

Specifies the number of packets dropped in the downstream direction because of an unknown protocol (not TCP/UDP/ICMP)

Downstream (IPv4->IPv6) fragmented packets

Specifies the number of fragmented packets received in the downstream direction

Downstream (IPv4->IPv6) need fragmentation packets

Specifies the number of packets received in the downstream direction that are larger than the domain MTU

Downstream (IPv4->IPv6) icmpv4 error packets

Specifies the number of ICMPv4 error reports packets received in the downstream direction

Downstream (IPv4->IPv6) icmpv4 echo packets

Specifies the number of ICMPv4 echo packets received in the downstream direction

Generated icmpv4 Fragmentation Problem

Specifies the number of generated ICMPv4 error messages with code Fragmentation Needed and DF Set

For both fragmentation-related drop cases in the preceding table, an ICMP error is generated (type=3, code=4) in which the next-hop MTU value is adjusted for the IPv4→IPv6 translation.

Show commands

Use the following command to display FP statistics.

show service nat map map-domain "demo-domain-1" fp statistics
===============================================================================
MAP-T domain "demo-domain-1" fp statistics
===============================================================================
Upstream (IPv6->IPv4) forwarded packets                 : 0
Upstream (IPv6->IPv4) forwarded octets                  : 0
Upstream (IPv6->IPv4) dropped anchor-interface packets  : 0
Upstream (IPv6->IPv4) dropped anti-spoof packets        : 0
Upstream (IPv6->IPv4) dropped unknown protocol packets  : 0
Downstream (IPv4->IPv6) forwarded packets               : 0
Downstream (IPv4->IPv6) forwarded octets                : 0
Downstream (IPv4->IPv6) dropped anchor-interface packets: 0
Downstream (IPv4->IPv6) dropped unknown protocol packets: 0
=============================================================================== 

Use the following command to display FP mapping rule statistics.

show service nat map map-domain "map-domain1" fp mapping-rule "map-domain1-rule1" statistics
===============================================================================
MAP-T domain "map-domain1" mapping-rule "map-domain1-rule1"
===============================================================================
Upstream (IPv6->IPv4) forwarded packets                 : 0
Upstream (IPv6->IPv4) forwarded octets                  : 0
Upstream (IPv6->IPv4) dropped anti-spoof packets        : 0
Downstream (IPv4->IPv6) forwarded packets               : 0
Downstream (IPv4->IPv6) forwarded octets                : 0
===============================================================================

Use the following command to display ESA-VM statistics.

show service nat map map-domain "demo-domain-1" esa-vm 1/1 statistics 
===============================================================================
MAP-T domain "demo-domain-1" esa-vm 1/1 statistics
===============================================================================
Upstream (IPv6->IPv4) forwarded packets                 : 0
Upstream (IPv6->IPv4) forwarded octets                  : 0
Upstream (IPv6->IPv4) dropped packets                   : 0
Upstream (IPv6->IPv4) dropped octets                    : 0
Upstream (IPv6->IPv4) dropped anti-spoof packets        : 0
Upstream (IPv6->IPv4) dropped icmpv6 packets            : 0
Upstream (IPv6->IPv4) dropped unknown protocol packets  : 0
Upstream (IPv6->IPv4) fragmented packets                : 0
Upstream (IPv6->IPv4) icmpv6 echo packets               : 0
Upstream (IPv6->IPv4) cpe icmpv6 error packets          : 0
Upstream (IPv6->IPv4) intermediate icmpv6 error packets : 0
Downstream (IPv4->IPv6) forwarded packets               : 0
Downstream (IPv4->IPv6) forwarded octets                : 0
Downstream (IPv4->IPv6) dropped packets                 : 0
Downstream (IPv4->IPv6) dropped octets                  : 0
Downstream (IPv4->IPv6) dropped fragmented packets      : 0
Downstream (IPv4->IPv6) dropped need fragmentation pckts: 0
Downstream (IPv4->IPv6) dropped icmpv4 packets          : 0
Downstream (IPv4->IPv6) dropped unknown protocol packets: 0
Downstream (IPv4->IPv6) fragmented packets              : 0
Downstream (IPv4->IPv6) need fragmentation packets      : 0
Downstream (IPv4->IPv6) icmpv4 echo packets             : 0
Downstream (IPv4->IPv6) icmpv4 error packets            : 0
Generated icmpv4 Fragmentation Problem                  : 0
=============================================================================== 
Clear commands

Use the following commands to clear FP statistics:

clear nat map map-domain domain-name fp statistics
clear nat map map-domain domain-name mapping-rule rule-name fp statistics

Use the following commands to clear ESA statistics:

clear nat map map-domain domain-name esa-vm id statistics
clear nat map map-domain domain-name mapping-rule rule-name esa-vm id statistics

MAP-T configuration

To configure MAP-T, perform the following:

  • Configure a MAP-T group with ESA-VM.

  • Configure PXC and FPE.

  • Configure MAP domain templates under the generic configure service nat hierarchy. The MAP domain templates contain MAP domain rules with their command options and reference the FPE (PXC) and MAP-T group (ESA-VM).

  • Reference the MAP domain template within a routing context (router or VPRN). This step triggers MAP-T domain instantiation (rules are activated and programmed in the system).

Configuring ESA-VM

The ESA-VM is responsible for processing transit fragmented and ICMP traffic. ESA-VM instances are configured within a map-t-group, which is then associated with a MAP-T domain when the domain is instantiated within a specific routing context.

The following example displays the ESA-VM configuration.

MD-CLI
[ex:/configure]
A:admin@node-2# info
    isa {
        map-t-group 1 {
            admin-state enable
            esa 1 vm 1 { }
        }
    }
    esa 1 {
        admin-state enable
        host-port 3/1/c1/1 {
        }
        vm 1 {
            admin-state enable
            host-port 3/1/c1/1
            vm-type bb
            cores 15
            memory 96
        }
    }
classic CLI
A:node-2>configure# info
    esa 1 create
        host-port 3/1/c1/1
        vm 1 create
            host-port 3/1/c1/1
            vm-type bb
            cores 15
            memory 96
            no shutdown
        exit
        no shutdown
    exit
    isa
        map-t-group 1 create
            esa-vm 1/1
            no shutdown
        exit
    exit
Configuring PXC and FPE

The PXC can be allocated on a faceplate port, or it can be allocated internally on the E-chip (MAC). In this example, the PXC is allocated internally on a loopback port created on MAC 2.

The following example displays the PXC configuration.

Creating loopback port (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    card 3 {
        mda 1 {
            xconnect {
                mac 2 {
                    loopback 1 {
                        bandwidth 400
                    }
                }
            }
        }
    }
Creating loopback port (classic CLI)
A:node-2>configure# info
    card 3
        mda 1
            xconnect
                mac 2 create
                    loopback 1 create
                        bandwidth 400
                    exit
                exit
            exit
            no shutdown

The following example displays the association of a PXC with an internal loopback port.

Associating PXC with an internal loopback port (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    port-xc {
        pxc 1 {
            admin-state enable
            port-id 3/1/m2/1
        }
    }
Associating PXC with an internal loopback port (classic CLI)
A:node-2>configure# info
    port-xc
        pxc 1 create
            port 3/1/m2/1
            no shutdown

The following example displays the creation of the logical PXC ports.

Creating logical PXC ports (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    port pxc-1.a {
        admin-state enable
    }
    port pxc-1.b {
        admin-state enable
    }
Creating logical PXC ports (classic CLI)
A:node-2>configure# info
    port pxc-1.a
        ethernet
        exit
        no shutdown
    exit
    port pxc-1.b
        ethernet
        exit
        no shutdown

The following example displays the association of a PXC with a Forwarding Path Extension (FPE) configured for MAP-T. FPE 1 is associated with a MAP-T domain when the domain is instantiated within a specific routing context.

Associating PXC with FPE (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    fwd-path-ext {
        sdp-id-range {
            start 128
            end 128
        }
        fpe 1 {
            path {
                pxc 1
            }
            application {
                map-t true
            }
        }
    }
Associating PXC with FPE (classic CLI)
A:node-2>configure# info
    fwd-path-ext
        sdp-id-range from 128 to 128
        fpe 1 create
            path pxc 1
            map-t
        exit
    exit
Defining a MAP-T domain template

The following example displays a MAP-T domain template with three MAP-T rules.

MAP-T template with three rules (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    service {
        nat {
            map-t {
                domain "demo-domain-1" {
                    admin-state enable
                    description "map-t domain used in this example"
                    dmr-prefix 2001:db8:100::/64
                    fpe 1
                    map-t-group 1
                    mtu 1514
                    mapping-rule "rule_1" {
                        admin-state enable
                        ipv4-prefix 11.11.11.0/24
                        ea-length 12
                        rule-prefix 2001:db8::/48
                    }
                    mapping-rule "rule_2" {
                        admin-state enable
                        ipv4-prefix 12.12.12.0/24
                        ea-length 12
                        rule-prefix 2001:db8:1::/48
                    }
                    mapping-rule "rule_3" {
                        admin-state enable
                        ipv4-prefix 13.13.13.0/24
                        ea-length 12
                        rule-prefix 2001:db8:2::/48
                    }
                }
            }
        }
    }
MAP-T template with three rules (classic CLI)
A:node-2>configure# info
    service
        nat
            map-domain "demo-domain-1" create
                description "map-t domain used in this example"
                dmr-prefix 2001:db8:100::/64
                fpe 1
                map-t-group 1
                mtu 1514
                mapping-rule "rule_1" create
                    ea-length 12
                    ipv4-prefix 11.11.11.0/24
                    rule-prefix 2001:db8::/48
                    no shutdown
                exit
                mapping-rule "rule_2" create
                    ea-length 12
                    ipv4-prefix 12.12.12.0/24
                    rule-prefix 2001:db8:1::/48
                    no shutdown
                exit
                mapping-rule "rule_3" create
                    ea-length 12
                    ipv4-prefix 13.13.13.0/24
                    rule-prefix 2001:db8:2::/48
                    no shutdown
                exit
                no shutdown
            exit
        exit
MAP-T domain instantiation in the VPRN "demo-vprn"

The following example displays a MAP-T domain instantiation in the VPRN "demo-vprn".

MAP-T domain instantiation (MD-CLI)
[ex:/configure]
A:admin@node-2# info
    service {
        vprn "demo-vprn" {
            service-id 1
            customer "1"
            nat {
                map {
                    map-domain "demo-domain-1" { }
                }
            }
        }
    }
MAP-T domain instantiation (classic CLI)
A:node-2>configure# info
    service
        vprn 1 name "demo-vprn" customer 1 create
            nat
                map
                    map-domain "demo-domain-1"
                exit
            exit
        exit
Changing MAP-T parameters online

Rules within the domain can be added while the domain is instantiated and operational (active). However, each rule must be administratively disabled before any of its command options are modified.

A MAP-T domain must be administratively disabled to modify the dmr-prefix command option. The remaining command options (mtu and ip-fragmentation) can be changed while the domain is active.

A MAP domain does not have to be administratively disabled while its rules are being modified.

The MAP-T group and FPE associations of a domain cannot be changed online; the domain must first be administratively disabled.
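
For example, the following MD-CLI sketch shows the required sequence for changing the dmr-prefix of the domain configured in this section (the replacement prefix is hypothetical):

configure service nat map-t domain "demo-domain-1" admin-state disable
configure service nat map-t domain "demo-domain-1" dmr-prefix 2001:db8:200::/64
configure service nat map-t domain "demo-domain-1" admin-state enable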

Configuring NAT

This section provides information to configure NAT using the command line interface.

ISA redundancy

The 7750 SR supports ISA redundancy to provide reliable NAT even when an MDA fails. The active-mda-limit command allows a user to specify how many MDAs are active in a NAT group. Any MDAs configured above the active-mda-limit are spare MDAs; they take over the NAT function if one of the currently active MDAs fails.

The following example displays a sample ISA redundancy configuration.

Configure ISA redundancy (MD-CLI)

[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            active-mda-limit 1
        }
        mda 1/2 { }
        mda 2/2 { }
    }

Configure ISA redundancy (classic CLI)

A:node-2>config>isa# info
----------------------------------------------
        nat-group 1 create
            no shutdown
            active-mda-limit 1
            mda 1/2
            mda 2/2
        exit
----------------------------------------------

Use the following command to display the actual state of a NAT group and its corresponding MDAs.

show isa nat-group 1

NAT group state output

===============================================================================
ISA NAT Group 1
===============================================================================
Description                 : (Not Specified)
Admin state                 : inService
Operational state           : inService
Degraded                    : false
Redundancy                  : active-standby
Active MDA limit            : 1
Failed MDA limit            : 0
Scaling profile             : profile3

-------------------------------------------------------------------------------
NAT specific information for ISA group 1
-------------------------------------------------------------------------------
Reserved sessions           : 0
High Watermark (%)          : (Not Specified)
Low Watermark (%)           : (Not Specified)
Accounting policy           : (Not Specified)
UPnP mapping limit          : 524288
Suppress LsnSubBlksFree     : false
LSN support                 : enabled
Last Mgmt Change            : 06/11/2024 09:26:46
-------------------------------------------------------------------------------
===============================================================================

===============================================================================
ISA Group 1 members
===============================================================================
Group Member State          MDA/VM   Addresses  Blocks     Se-% Hi Se-Prio
-------------------------------------------------------------------------------
1     1      active         3/1      101        30300      < 1  N  0
-------------------------------------------------------------------------------
No. of members: 1
===============================================================================

A NAT group cannot become active (administratively enabled) if the number of configured MDAs is lower than the active-mda-limit.

An MDA can be configured in several NAT groups, but it can only be active in a single NAT group at any moment in time. Spare MDAs can be shared by several NAT groups, but a spare can only become active in one NAT group at a time. Changing the active-mda-limit, or adding or removing MDAs, can only be done when the NAT group is administratively disabled.

NAT groups that share spare MDAs must be configured with the same list of MDAs. It is possible to remove or add spare MDAs to a NAT group while the NAT group is administratively enabled.

Configure NAT groups with the same MDAs (MD-CLI)

[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        redundancy {
            active-mda-limit 1
        }
        mda 1/2 { }
        mda 2/2 { }
        mda 3/1 { }
    }
    nat-group 2 {
        redundancy {
            active-mda-limit 1
        }
        mda 1/2 { }
        mda 2/2 { }
        mda 3/1 { }
    }

Configure NAT groups with the same MDAs (classic CLI)

A:node-2>config>isa# info
----------------------------------------------
        nat-group 1 create
            active-mda-limit 1
            mda 1/2
            mda 2/2
            mda 3/1
            no shutdown
        exit
        nat-group 2 create
            active-mda-limit 1
            mda 1/2
            mda 2/2
            mda 3/1
            no shutdown
        exit

Use the following command to display an overview of all the NAT groups and MDAs.

show isa nat-group

NAT group and MDA overview output

===============================================================================
ISA NAT Group Summary
===============================================================================
Mda  Group 1            Group 2           
-------------------------------------------------------------------------------
1/2  active             busy           
2/2  busy               active    
3/1  standby            standby 
===============================================================================

If an MDA fails, the spare (if available) takes over. All active sessions are lost, but new incoming sessions make use of the spare MDA.

In case of an MDA failure in a NAT group without any spare MDA, all traffic toward that MDA is blackholed.

For Layer 2–aware NAT, the user can clear all the subscribers on the affected MDA (clear nat isa), terminating all the subscriber leases. New incoming subscribers make use of the MDAs that are still available in the NAT group.

NAT Layer 2–Aware configurations

The following examples display NAT Layer 2–aware configurations.

Subscribers using the sub-profiles shown in these examples are subject to Layer 2–aware NAT. The configured NAT policies determine which IP pool to use.

Configure NAT Layer 2–Aware (MD-CLI)

[/]
A:admin@node-2# admin show configuration
configure {
    card 1 {
        card-type iom4-e-b
        mda 1 {
            mda-type me12-10/1gb-sfp+
        }
        mda 2 {
            mda-type isa2-bb
        }
    }
    card 2 {
        card-type iom4-e-b
        mda 1 {
            mda-type me12-10/1gb-sfp+
        }
        mda 2 {
            mda-type isa2-bb
        }
    }
    isa {
        nat-group 1 {
            admin-state enable
            description "1 active + 1 spare"
            redundancy {
                active-mda-limit 1
            }
            mda 1/2 { }
            mda 2/2 { }
        }
    }
...
    router "Base" {
        nat {
            outside {
                pool "pool1" {
                    type l2-aware
                    nat-group 1
                    address-range 10.81.0.0 end 10.81.0.200 {
                    }
                }
            }
        }
    }
    service {
        nat {
            nat-policy "l2-aware-nat-policy1" {
                pool {
                    router-instance "Base"
                    name "pool1"
                }
            }
            nat-policy "l2-aware-nat-policy2" {
                pool {
                    router-instance "100"
                    name "pool2"
                }
            }
        }
        vprn "100" {
            customer "1"
            nat {
                outside {
                    pool "pool2" {
                        admin-state enable
                        type l2-aware
                        nat-group 1
                        address-range 10.0.0.0 end 10.0.0.200 {
                        }
                    }
                }
            }
        }
        vprn "101" {
            customer "1"
            nat {
                inside {
                    l2-aware {
                        subscribers 10.0.0.1/29 { }
                        subscribers 10.1.0.1/29 { }
                    }
                }
            }
        }
    }
    subscriber-mgmt {
        sub-profile "l2-aware-profile1" {
            nat {
                policy "l2-aware-nat-policy1"
            }
        }
        sub-profile "l2-aware-profile2" {
            nat {
                policy "l2-aware-nat-policy2"
            }
        }
    }

Configure NAT Layer 2–Aware (classic CLI)

A:node-2# admin display-config
configure
#--------------------------------------------------
echo "Card Configuration"
#--------------------------------------------------
    card 1
        card-type iom4-e-b
        mda 1
            mda-type me12-10/1gb-sfp+
            no shutdown
        exit
        mda 2
            mda-type isa2-bb
            no shutdown
        exit
        no shutdown
    exit
    card 2
        card-type iom4-e-b
        mda 1
            mda-type me12-10/1gb-sfp+
            no shutdown
        exit
        mda 2
            mda-type isa2-bb
            no shutdown
        exit
        no shutdown
    exit

#--------------------------------------------------
echo "ISA Configuration"
#--------------------------------------------------
    isa
        nat-group 1 create
            description "1 active + 1 spare"
            active-mda-limit 1
            mda 1/2
            mda 2/2
            no shutdown
        exit
    exit
#--------------------------------------------------
echo "Router (Network Side) Configuration"
#--------------------------------------------------
    router 
        ...
#--------------------------------------------------
echo "NAT (Network Side) Configuration"
#--------------------------------------------------
        nat
            outside
                pool "pool1" nat-group 1 type l2-aware create 
                    address-range 10.81.0.0 10.81.0.200 create
                    exit
                    no shutdown
                exit
            exit
        exit
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
    service
        customer 1 create
            description "Default customer"
        exit
        ...
        vprn 100 customer 1 create
            ...
            nat
                outside
                    pool "pool2" nat-group 1 type l2-aware create 
                        address-range 10.0.0.0 10.0.0.200 create
                        exit
                        no shutdown
                    exit
                exit
            exit
        exit

        vprn 101 customer 1 create
            ...
            nat
                inside
                    l2-aware
                        # Hosts in this service with IP addresses in these ranges
                        # will be subject to Layer 2–aware NAT.
                        address 10.0.0.1/29
                        address 10.1.0.1/29
                    exit
                exit
            exit
        exit
        ...
        nat
            nat-policy "l2-aware-nat-policy1" create 
                pool "pool1" router Base
            exit
            nat-policy "l2-aware-nat-policy2" create 
                pool "pool2" router 100
            exit
        exit
        ...
    exit
#--------------------------------------------------
echo "Subscriber-mgmt Configuration"
#--------------------------------------------------
    subscriber-mgmt
        sub-profile "l2-aware-profile1" create
            nat-policy "l2-aware-nat-policy1"
        exit
        sub-profile "l2-aware-profile2" create
            nat-policy "l2-aware-nat-policy2"
        exit
        ...
    exit 

Large scale NAT configuration

The following example displays a Large Scale NAT configuration.

MD-CLI

[ex:/configure]
A:admin@node-2# admin show configuration
configure {
...
    card 3 {
        card-type imm-2pac-fp3
        mda 1 {
            mda-type isa2-bb
        }
        mda 2 {
            mda-type isa2-bb
        }
    }
    filter {
        ip-filter "123" {
            entry 10 {
                match {
                    src-ip {
                        address 10.0.0.1/8
                    }
                }
                action {
                    nat {
                    }
                }
            }
        }
    }
    isa {
        nat-group 1 {
            admin-state enable
            redundancy {
                active-mda-limit 2
            }
            mda 3/1 { }
            mda 3/2 { }
        }
    }
    service {
        nat {
            nat-policy "ls-outPolicy" {
                pool {
                    router-instance "500"
                    name "nat1-pool"
                }
                timeouts {
                    udp {
                        normal 18000
                        initial 240
                    }
                }
            }
        }
        vprn "500" {
            admin-state enable
            customer "1"
            router-id 10.21.1.2
            nat {
                outside {
                    pool "nat1-pool" {
                        admin-state enable
                        type large-scale
                        nat-group 1
                        port-reservation {
                            ports 200
                        }
                        address-range 10.81.0.0 end 10.81.6.0 {
                        }
                    }
                }
            }
            bgp-ipvpn {
                mpls {
                    admin-state enable
                    route-distinguisher "500:10"
                    vrf-target {
                        import-community "target:500:1"
                        export-community "target:500:1"
                    }
                }
            }
            interface "ip-192.168.113.1" {
                ipv4 {
                    primary {
                        address 192.168.113.1
                        prefix-length 24
                    }
                    neighbor-discovery {
                        static-neighbor 192.168.113.5 {
                            mac-address 00:00:5e:00:53:00
                        }
                    }
                }
                sap 1/1/1:200 {
                }
            }
        }
        vprn "550" {
            customer "1"
            router-id 10.21.1.2
            nat {
                inside {
                    large-scale {
                        nat-policy "ls-outPolicy"
                    }
                }
            }
            bgp-ipvpn {
                mpls {
                    admin-state enable
                    route-distinguisher "550:10"
                    vrf-target {
                        import-community "target:550:1"
                        export-community "target:550:1"
                    }
                }
            }
            interface "ip-192.168.13.1" {
                ipv4 {
                    primary {
                        address 192.168.13.1
                        prefix-length 8
                    }
                }
                sap 1/2/1:900 {
                    ingress {
                        filter {
                            ip "123"
                        }
                    }
                }
            }
        }
    }
...

classic CLI

A:node-2# admin display-config
configure
#--------------------------------------------------
echo "Card Configuration"
#--------------------------------------------------
    card 3
        card-type imm-2pac-fp3
        mda 1
            mda-type isa2-bb
        exit
        mda 2
            mda-type isa2-bb
        exit
    exit
#--------------------------------------------------
echo "ISA Configuration"
#--------------------------------------------------
    isa
        nat-group 1 create
            active-mda-limit 2
            mda 3/1
            mda 3/2
            no shutdown
        exit
    exit
#--------------------------------------------------
echo "Filter Configuration"
#--------------------------------------------------
    filter 
        ip-filter 123 create
            entry 10 create
                match 
                    src-ip 10.0.0.1/8
                exit 
                action nat
            exit 
        exit 
    exit 
#--------------------------------------------------
echo "NAT (Declarations) Configuration"
#--------------------------------------------------
    service
        nat
            nat-policy "ls-outPolicy" create 
            exit
        exit
    exit
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
    service
        customer 1 create
            description "Default customer"
        exit
        vprn 500 customer 1 create
            interface "ip-192.168.113.1" create
            exit
            nat
                outside
                    pool "nat1-pool" nat-group 1 type large-scale create 
                        port-reservation ports 200 
                        address-range 10.81.0.0 10.81.6.0 create
                        exit
                        no shutdown
                    exit
                exit
            exit
        exit
        vprn 550 customer 1 create
            interface "ip-192.168.13.1" create
            exit
        exit
        nat
            nat-policy "ls-outPolicy" create 
                pool "nat1-pool" router 500
                timeouts
                    udp hrs 5 
                    udp-initial min 4 
                exit
            exit
        exit
        vprn 500 customer 1 create
            router-id 10.21.1.2
            route-distinguisher 500:10
            vrf-target export target:500:1 import target:500:1
            interface "ip-192.168.113.1" create
                address 192.168.113.1/24
                static-arp 192.168.113.5 00-00-5e-00-53-00
                sap 1/1/1:200 create
                exit
            exit
            no shutdown
        exit
        vprn 550 customer 1 create
            router-id 10.21.1.2
            route-distinguisher 550:10
            vrf-target export target:550:1 import target:550:1
            interface "ip-192.168.13.1" create
                address 192.168.13.1/8
                sap 1/2/1:900 create
                    ingress
                        filter ip 123
                    exit
                exit
            exit
            nat
                inside
                    nat-policy "ls-outPolicy"
                exit
            exit
            no shutdown
        exit
    exit
exit all

NAT configuration examples

The following examples display configuration information for a VPRN service, a router NAT, and a service NAT.

Configure the VPRN service (MD-CLI)

[ex:/configure service vprn "100" nat]
A:admin@node-2# info
    inside {
        l2-aware {
            force-unique-ip-addresses false
        }
        large-scale {
            nat-policy "priv-nat-policy"
            traffic-identification {
                source-prefix-only false
            }
            nat44 {
                destination-prefix 0.0.0.0/0 {
                }
            }
            dual-stack-lite {
                admin-state enable
                subscriber-prefix-length 128
                endpoint 2001:db8:470:fff:190:1:1:1 {
                    tunnel-mtu 1500
                    reassembly false
                    min-first-fragment-size-rx 1280
                }
                endpoint 2001:db8:470:1f00:ffff:190:1:1 {
                    tunnel-mtu 1500
                    reassembly false
                    min-first-fragment-size-rx 1280
                }
            }
            subscriber-identification {
                admin-state disable
                drop-unidentified-traffic false
                attribute {
                    vendor nokia
                    type alc-sub-string
                }
            }
        }
    }

Configure the VPRN service (classic CLI)

A:node-2>config>service# info detail
----------------------------------------------
        vprn 100 name "100" customer 1 create
            shutdown
            nat
                inside
                    nat-policy "priv-nat-policy"
                    destination-prefix 0.0.0.0/0
                    dual-stack-lite
                        subscriber-prefix-length 128
                        address 2001:db8:470:1f00:ffff:190:1:1
                            tunnel-mtu 1500
                        exit
                        no shutdown
                    exit
                    redundancy
                        no peer
                        no steering-route
                    exit
                    subscriber-identification
                        shutdown
                        no attribute
                        no description
                        no radius-proxy-server
                    exit
                    l2-aware
                    exit
                exit
                outside
                    no mtu
                exit

Configure a router NAT (MD-CLI)

[ex:/configure router "Base" nat outside]
A:admin@node-2# info
    pool "privpool" {
        admin-state enable
        type large-scale
        nat-group 3
        address-pooling paired
        icmp-echo-reply false
        mode auto
        applications {
            agnostic false
            flexible-port-allocation false
        }
        port-forwarding {
            dynamic-block-reservation false
            range-start 1
            range-end 1023
        }
        port-reservation {
            port-blocks 128
        }
        large-scale {
            subscriber-limit 65535
            redundancy {
                admin-state disable
            }
        }
        address-range 10.0.0.5 end 10.0.0.6 {
            drain false
        }
    }
    pool "pubpool" {
        admin-state enable
        type large-scale
        nat-group 1
        address-pooling paired
        icmp-echo-reply false
        mode auto
        applications {
            agnostic false
            flexible-port-allocation false
        }
        port-forwarding {
            dynamic-block-reservation false
            range-start 1
            range-end 1023
        }
        port-reservation {
            port-blocks 1
        }
        large-scale {
            subscriber-limit 65535
            redundancy {
                admin-state disable
            }
        }
        address-range 192.168.8.241 end 192.168.8.247 {
            drain false
        }
    }
    pool "test" {
        type large-scale
        nat-group 1
    }

Configure a router NAT (classic CLI)

A:node-2>config>router>nat# info detail
----------------------------------------------
            outside
                no mtu
                pool "privpool" nat-group 3 type large-scale create 
                    no description
                    port-reservation blocks 128 
                    port-forwarding-range 1023
                    redundancy
                        no export
                        no monitor
                    exit
                    subscriber-limit 65535
                    no watermarks
                    mode auto
                    address-range 10.0.0.5 10.0.0.6 create
                        no description
                        no drain
                    exit
                    no shutdown
                exit
                pool "pubpool" nat-group 1 type large-scale create 
                    no description
                    port-reservation blocks 1 
                    port-forwarding-range 1023
                    redundancy
                        no export
                        no monitor
                    exit
                    subscriber-limit 65535
                    no watermarks
                    mode auto
                    address-range 192.168.8.241 192.168.8.247 create
                        no description
                        no drain
                    exit
                    no shutdown
                exit
            exit

Configure a service NAT (MD-CLI)

[ex:/configure service nat nat-policy "priv-nat-policy"]
A:admin@node-2# info
    block-limit 4
    filtering endpoint-independent
    port-forwarding-range-end 1023
    pool {
        router-instance "Base"
        name "privpool"
    }
    alg {
        ftp true
        pptp false
        rtsp true
        sip true
    }
    port-limits {
        forwarding 64
        dynamic-ports 65536
    }
    priority-sessions {
        fc {
            be false
            l2 false
            af false
            l1 false
            h2 false
            ef false
            h1 false
            nc false
        }
    }
    session-limits {
        max 65535
    }
    tcp {
        reset-unknown false
    }
    timeouts {
        icmp-query 60
        sip 120
        subscriber-retention 0
        tcp {
            established 7440
            rst 0
            syn 15
            time-wait 0
            transitory 240
        }
        udp {
            normal 300
            dns 15
            initial 15
        }
    }
    udp {
        inbound-refresh false
    }

[ex:/configure service nat nat-policy "pub-nat-policy"]
A:admin@node-2# info
    block-limit 1
    filtering endpoint-independent
    port-forwarding-range-end 1023
    pool {
        router-instance "Base"
        name "pubpool"
    }
    alg {
        ftp true
        pptp false
        rtsp false
        sip false
    }
    port-limits {
        dynamic-ports 65536
    }
    priority-sessions {
        fc {
            be false
            l2 false
            af false
            l1 false
            h2 false
            ef false
            h1 false
            nc false
        }
    }
    session-limits {
        max 65535
    }
    tcp {
        reset-unknown false
    }
    timeouts {
        icmp-query 60
        sip 120
        subscriber-retention 0
        tcp {
            established 7440
            rst 0
            syn 15
            time-wait 0
            transitory 240
        }
        udp {
            normal 300
            dns 15
            initial 15
        }
    }
    udp {
        inbound-refresh false
    }

Configure a service NAT (classic CLI)

A:node-2>config>service>nat# info detail
----------------------------------------------
            nat-policy "priv-nat-policy" create
                alg
                    ftp
                    rtsp
                    sip
                exit
                block-limit 4
                no destination-nat
                no description
                filtering endpoint-independent
                pool "privpool" router Base
                no ipfix-export-policy
                port-limits
                    forwarding 64
                    no reserved
                    no watermarks
                exit
                priority-sessions
                exit
                session-limits
                    max 65535
                    no reserved
                    no watermarks
                exit
                timeouts
                    icmp-query min 1 
                    sip min 2 
                    no subscriber-retention
                    tcp-established hrs 2 min 4 
                    tcp-syn sec 15 
                    no tcp-time-wait
                    tcp-transitory min 4 
                    udp min 5 
                    udp-initial sec 15 
                    udp-dns sec 15 
                exit
                no tcp-mss-adjust
                no udp-inbound-refresh
            exit
            nat-policy "pub-nat-policy" create
                alg
                    ftp
                    no rtsp
                    no sip
                exit
                block-limit 1
                no destination-nat
                no description
                filtering endpoint-independent
                pool "pubpool" router Base
                no ipfix-export-policy
                port-limits
                    no forwarding
                    no reserved
                    no watermarks
                exit
                priority-sessions
                exit
                session-limits
                    max 65535
                    no reserved
                    no watermarks
                exit
                timeouts
                    icmp-query min 1 
                    sip min 2 
                    no subscriber-retention
                    tcp-established hrs 2 min 4 
                    tcp-syn sec 15 
                    no tcp-time-wait
                    tcp-transitory min 4 
                    udp min 5 
                    udp-initial sec 15 
                    udp-dns sec 15 
                exit
                no tcp-mss-adjust
                no udp-inbound-refresh
            exit

Configuring VSR-NAT

This section provides information about the VSR-NAT functionality, including licensing requirements, statistics collection, and examples of show command output.

VSR-NAT licensing

Appropriate licensing is required to enable the VSR-NAT functionality in the system. However, no further licensing enforcement is performed based on resource utilization, such as the consumed bandwidth or the number of NAT bindings.

The following NAT-related functionality is enabled through licensing:

  • LSN (LSN44, DS-Lite, and NAT64)

  • Layer 2–aware NAT

  • UPnP

  • geo-redundancy

You can use the CLI or the MIB on VSR-NAT to get more information about the number of LSN bindings and LSN bandwidth.

NAT licenses required to unlock NAT functionality describes the licenses required to enable the VSR-NAT functionality.

Table 29. NAT licenses required to unlock NAT functionality

LSN

  Functionality enabled: LSN Pool

    configure router nat outside pool type large-scale
    configure service vprn nat outside pool type large-scale

  License purchased: The following two scaling licenses are required:

    • license for the number of LSN bindings

    • license for consumed bandwidth

  You must purchase both licenses to enable the LSN functionality.

L2AWARE

  Functionality enabled: L2Aware Pool

    configure router nat outside pool type l2-aware
    configure service vprn nat outside pool type l2-aware

  License purchased: Purchase the L2-Aware license to enable the functionality. The LSN scaling license is not required.

  Note: The Layer 2–aware NAT functionality can only be used with the VBNG.

UPnP

  Functionality enabled: UPnP commands

    configure subscriber-mgmt sub-profile upnp-policy

  License purchased: Purchase the UPnP license to enable the functionality.

  Note: The UPnP functionality can only be used with the Layer 2–aware NAT.

GEO REDUNDANCY

  Functionality enabled: Geo-redundancy Pool

    • MD-CLI

      configure router nat outside pool large-scale redundancy
      configure service vprn nat outside pool large-scale redundancy

    • classic CLI

      configure router nat outside pool redundancy
      configure service vprn nat outside pool redundancy

  License purchased: Purchase the Geo Redundancy license to enable the functionality.

Statistics collection for LSN bindings

A NAT subscriber is an internal entity whose true identity is hidden outside the network. The NAT subscriber is represented by a binding that is a set of stateful mappings between the internal and external representations of the subscriber. From the licensing perspective, the terms "NAT bindings" and "NAT subscribers" can be used interchangeably.

VSR-NAT collects the number of LSN subscribers for licensing purposes; the Layer 2–aware NAT subscribers are excluded from this count. An LSN subscriber is defined as follows:

  • Large Scale NAT44 (or CGN)

    The subscriber is an internal IPv4 address.

  • DS-Lite

    The subscriber is identified by the CPE IPv6 address (B4 element) or an IPv6 prefix. The selection of the address or prefix as the representation of a DS-Lite subscriber is configuration-dependent.

  • NAT64

    The subscriber is an IPv6 address.

The number of LSN subscribers (LSN44, DS-Lite, and NAT64) in VSR-NAT is sampled every hour on the hour (for example, at 00:00, 01:00, 02:00, and so on). Each sample is a snapshot of the number of subscribers at the time that the statistics are collected.

The CLI can be used to view the following information:

  • 24 samples (one per hour) in the current day

  • maximum value for each of the last 7 days

  • average value for each of the last 7 days

  • maximum value since the system booted

For the list of CLI commands available for use, see VSR-NAT show command examples.
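The daily values above are straightforward reductions over the hourly snapshots. The following Python sketch illustrates the relationship; it is a minimal illustration with invented sample data and variable names, not SR OS code.

# Minimal sketch, not SR OS code: reduces one day of hourly LSN
# subscriber snapshots to the daily peak and average values that the
# weekly display reports. The sample data is invented for illustration.
hourly_snapshots = [
    370, 375, 374, 373, 360, 370, 373, 380, 390, 410, 430, 456,
    440, 420, 400, 395, 390, 385, 380, 378, 376, 374, 372, 371,
]

daily_peak = max(hourly_snapshots)                             # "peak" column
daily_average = sum(hourly_snapshots) / len(hourly_snapshots)  # "average" column

print(f"peak={daily_peak} average={daily_average:.0f}")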

Statistics collection for LSN bandwidth

The measurement of LSN bandwidth includes translated packets and octets in the upstream and downstream direction. Packets that are rejected for any reason and traffic carrying logging information are both excluded from the statistics.

LSN bandwidth statistics for VSR-NAT are collected every 10 minutes. The bandwidth is derived as the difference in octet count between two consecutive collections, divided by the 10-minute interval. There is no bandwidth differentiation per LSN type (LSN44, DS-Lite, and NAT64) or per direction. Aggregate bandwidth values per node are maintained in kb/s units. Layer 2–aware NAT and WLAN gateway statistics are not included in the statistics collection.
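As a worked illustration of this derivation, the following Python sketch computes one aggregate bandwidth value from two consecutive octet-counter readings. The counter values and names are invented for the example; this is not SR OS code.

# Minimal sketch, not SR OS code: derives the aggregate LSN bandwidth
# in kb/s from two octet-counter readings taken 10 minutes apart.
INTERVAL_SECONDS = 10 * 60       # collection interval (600 seconds)

octets_previous = 1_000_000_000  # counter at the previous collection (invented)
octets_current = 1_935_446_725   # counter at the current collection (invented)

delta_bits = (octets_current - octets_previous) * 8
bandwidth_kbps = delta_bits / INTERVAL_SECONDS / 1000  # kb/s, aggregated per node

print(f"average bandwidth over the last interval: {bandwidth_kbps:.0f} kb/s")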

The CLI can be used to view the following LSN bandwidth information:

  • 144 bandwidth values for the current day (bandwidth statistics are collected every 10 minutes)

  • maximum bandwidth value for each of the last 7 days

  • average bandwidth value for each of the last 7 days

  • maximum bandwidth value since the system booted

For the list of CLI commands available for use, see VSR-NAT show command examples.

VSR-NAT show command examples

Use the following show commands to display more information about VSR-NAT.

show system license-statistics 24-hours application nat
show system license-statistics week application nat
show system license-statistics peak application nat

Use the following command to display weekly statistics.

show system license-statistics week application nat

Weekly statistics output

========================================================================
week license statistics for nat
========================================================================
index       time                average              peak
------------------------------------------------------------------------
LSN subscribers
1           2016/02/01 00:00:00 370                  456
2           2016/01/31 00:00:00 375                  512
3           2016/01/30 00:00:00 374                  510
4           2016/01/29 00:00:00 373                  478
5           2016/01/28 00:00:00 360                  450
6           2016/01/27 00:00:00 370                  496
7           2016/01/26 00:00:00 373                  503
LSN bandwidth
1           2016/02/01 00:00:00 12472623             12472623
2           2016/01/31 00:00:00 12472623             12472623
3           2016/01/30 00:00:00 12472623             12472623
4           2016/01/29 00:00:00 12472623             12472623
5           2016/01/28 00:00:00 12472623             12472623
6           2016/01/27 00:00:00 12472623             12472623
7           2016/01/26 00:00:00 12472623             12472623
------------------------------------------------------------------------
No. of license statistics entries: 14
========================================================================

Use the following command to display statistics over a 24-hour period.

show system license-statistics 24-hours application nat

Statistics over a 24-hour period output

========================================================================
24 hours license statistics for nat
========================================================================
index       time                value                
------------------------------------------------------------------------
LSN subscribers
1           2016/06/29 19:00:00 512                  
2           2016/06/29 20:00:00 512                  
LSN bandwidth
1           2016/06/29 18:10:00 0                    
2           2016/06/29 18:20:00 0                    
3           2016/06/29 18:30:00 0                    
4           2016/06/29 18:40:00 2996286              
5           2016/06/29 18:50:00 12472524             
6           2016/06/29 19:00:00 12472424             
7           2016/06/29 19:10:00 12471020             
8           2016/06/29 19:20:00 12471980             
9           2016/06/29 19:30:00 12471566             
10          2016/06/29 19:40:00 12471881             
11          2016/06/29 19:50:00 12472116             
12          2016/06/29 20:00:00 12472623             
------------------------------------------------------------------------
No. of license statistics entries: 14
========================================================================

Use the following command to display peak statistics.

show system license-statistics peak application nat

Peak statistics output

========================================================================
peak license statistics for nat
========================================================================
                                      time                peak
------------------------------------------------------------------------
LSN subscribers                       2016/06/29 19:00:00 512
LSN bandwidth                         2016/06/29 20:00:00 12472623
------------------------------------------------------------------------
No. of license statistics entries: 2
========================================================================

The following table describes the NAT statistics output fields.

Table 30. NAT statistics output fields

Index

  The entry number of the displayed value.

  A weekly display contains 7 entries, one for each of the last 7 days.

  A 24-hour display can contain up to 24 values for NAT subscribers (statistics are collected hourly) and 144 values for NAT bandwidth (statistics are collected every 10 minutes).

Time

  The timestamp of the statistics collection.

  The bandwidth is averaged over 10-minute intervals; consequently, the bandwidth value at a specific time represents the average bandwidth for the last 10-minute period.

Value

  The value for the number of NAT subscribers at a specific time, or the average bandwidth in kb/s for the last 10-minute period.

Average

  In the weekly display, the average daily value for the number of NAT subscribers or the NAT bandwidth.

Peak

  In the weekly display, the daily peak value for the number of NAT subscribers or the NAT bandwidth.

VSR scaling profiles on BB-ISA

Scaling profiles on the VSR

To meet flexible scaling requirements on common compute platforms, you can use the CLI to select a VSR-NAT scaling profile that corresponds to the amount of memory allocated to the VM.

The following scaling profiles have predefined upper scaling limits and are available for VSR-NAT and IPv6 FW:

  • profile1 – a lower scaling profile

  • profile2 – a higher scaling profile

The default scaling profile is profile1.

Contact your Nokia representative for more information about NAT scaling figures in each profile.

See the Virtualized 7250 IXR, 7750 SR, and 7950 XRS Simulator (vSIM) Installation and Setup Guide, "Sysinfo" section, for information about the number of required CPU control cores on a VSR-I in relation to profiles.

See the SR OS R24.x.Rx Software Release Notes, "VM memory requirements by function mix" section, for information about the amount of required memory on a VSR-I in relation to profiles.

The following examples show the scaling profile being applied.

Configure the scaling profile (MD-CLI)

[ex:/configure isa nat-group 1]
A:admin@node-2# info
    scaling-profile profile2

Configure the scaling profile (classic CLI)

A:node-2>config>isa>nat-group# info
----------------------------------------------
            shutdown
            scaling-profile profile2
----------------------------------------------

Scale profile modification

A scaling profile can be changed only when the NAT group is in an administratively disabled state. After the scaling profile is changed and the NAT group is activated (administratively enabled), the system tries to allocate the necessary memory. If successful, the vISA transitions into service; if unsuccessful (for example, if there is not enough memory in the system), the NAT group remains in the administratively disabled state.

If there are insufficient resources to accommodate the required scaling profile, the vMDA on which the vISA resides transitions into a failed state, and logs describing the failure are raised.

Failure log output

33 2018/05/28 09:36:30.484 UTC MAJOR: CHASSIS #2001 Base Mda 1/1
"Class MDA Module : failed, reason: Insufficient memory to boot"

36 2018/09/18 14:13:02.426 UTC MAJOR: CHASSIS #2001 Base Mda 1/1
"Class MDA Module : failed, reason: Insufficient mgmt cores to boot"

NAT scaling profiles on ESA

Scaling profiles for NAT on ESA

NAT on ESA offers the following scaling profiles, each of which is adapted to the amount of memory allocated to the VM:

  • profile1 – a lower scaling profile that requires 8 CPU cores and 32 GB of DRAM memory per ESA-VM

  • profile2 – a medium scaling profile that requires 11 CPU cores and 96 GB of DRAM memory per ESA-VM

  • profile3 – a high scaling profile that requires 15 CPU cores and 115 GB of DRAM memory per ESA-VM

The default scaling profile is profile1.

Contact your Nokia representative for more specific information about the NAT scaling figures in each profile.

Use the following command to configure scaling profiles.

configure isa nat-group scaling-profile

Scale profile modification

A scaling profile can be changed using the CLI only when all ESA-VMs in a NAT group are removed from the configuration.

For example, in the following case, transitioning from profile2 to profile1 is not possible until esa-vm 1/1 is removed from the configuration.

MD-CLI

[ex:/configure isa]
A:admin@node-2# info
    nat-group 1 {
        scaling-profile profile2
        esa 1 vm 1 { }
    }
*[ex:/configure isa nat-group 1]
A:admin@node-2# scaling-profile profile1
MINOR: BBGRPMGR #1052: configure isa nat-group 1 - Cannot change scale-profile with MDAs or ESA-VMs provisioned

classic CLI

A:node-2>config>isa# info
----------------------------------------------
        nat-group 1 create
            esa-vm 1/1
            scaling-profile profile2
        exit
----------------------------------------------
A:node-2>config>isa>nat-group# scaling-profile profile1
MINOR: BBGRPMGR #1052 Cannot change scale-profile with MDAs or ESA-VMs provisioned

Expanding a NAT group

Adding or removing an MDA from a NAT group affects all currently active subscribers and may invalidate existing static port forwards and mappings configured in deterministic NAT.

Before removing configuration as part of the NAT group modification process described below, store the affected configuration offline. You can restore the configuration to the node after the change is complete.

The following procedures describe how to add or remove an MDA from a NAT group in the MD-CLI and in the classic CLI.

Adding and removing an MDA from a NAT group in the MD-CLI

In the MD-CLI, use the following steps to add or remove an MDA from a NAT group:
  1. Administratively disable the deterministic prefix policies and delete their mappings. Perform this for every deterministic prefix and mapping used in the NAT group whose size is modified. Store the deterministic mapping configuration offline before removing it, and reapply it after the change. When the NAT group size is modified and the deterministic mappings are reapplied, the commit may fail; if the commit fails, you must create a new mapping. Use the following command to create a new mapping.

    tools perform nat deterministic calculate-maps

    Static port forward configurations created with the tools command are automatically deleted when the modified NAT group is committed.

  2. Commit the changes.

  3. Change the active and failed MDA limit.

  4. Commit the changes.

  5. Re-apply deterministic mappings and static port forwards.

Adding and removing an MDA from a NAT group in the classic CLI

In the classic CLI, use the following steps to add or remove an MDA from a NAT group:

  1. Shut down the NAT group.

  2. Remove all statically configured large-scale subscribers (such as deterministic, LI, debug, and subscriber-aware) in the NAT group that is being modified.

  3. Static port forward configurations created via the tools command are deleted automatically.

  4. Manually delete any static port forward configurations that were created.

  5. Shut down and remove the deterministic policies.

  6. Delete NAT policy references in all inside routing contexts associated with the NAT group that is being modified.

  7. Reconfigure the active-mda-limit and failed-mda-limit options.

  8. Administratively enable the NAT group.

  9. Restore the previously removed NAT policy references in all of the inside routing contexts associated with the modified NAT group.

  10. Reapply the subscriber-aware and deterministic subscribers (prefixes and maps), static port forwards, LI, and debug.
