Label Distribution Protocol

Label Distribution Protocol (LDP) is a protocol used to distribute labels in non-traffic-engineered applications. LDP allows routers to establish label switched paths (LSPs) through a network by mapping network-layer routing information directly to data link layer-switched paths.

An LSP is defined by the set of labels from the ingress Label Switching Router (LSR) to the egress LSR. LDP associates a Forwarding Equivalence Class (FEC) with each LSP it creates. A FEC is a collection of common actions associated with a class of packets. When an LSR assigns a label to a FEC, it must let the other LSRs in the path know about the label. LDP helps to establish the LSP by providing a set of procedures that LSRs can use to distribute labels.

The FEC associated with an LSP specifies which packets are mapped to that LSP. LSPs are extended through a network as each LSR splices incoming labels for a FEC to the outgoing label assigned to the next hop for the FEC. The next-hop for a FEC prefix is resolved in the routing table. LDP can only resolve FECs for IGP and static prefixes. LDP does not support resolving FECs of a BGP prefix.

LDP allows an LSR to request a label from a downstream LSR so it can bind the label to a specific FEC. The downstream LSR responds to the request from the upstream LSR by sending the requested label.

LSRs can distribute a FEC label binding in response to an explicit request from another LSR. This is known as Downstream On Demand (DOD) label distribution. LSRs can also distribute label bindings to LSRs that have not explicitly requested them. This is called Downstream Unsolicited (DU).
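The two distribution modes can be sketched as follows. This is a minimal Python model with invented names (`Lsr`, `advertise_to`, `request_from`), not an actual LDP implementation:

```python
# Hypothetical sketch of the two LDP label distribution modes.
# Class and method names are illustrative, not a real API.

class Lsr:
    def __init__(self, name):
        self.name = name
        self.bindings = {}          # FEC prefix -> label this LSR assigned
        self.received = {}          # FEC prefix -> label learned from a peer
        self._next = 100

    def assign_label(self, fec):
        self.bindings[fec] = self._next
        self._next += 1
        return self.bindings[fec]

    def advertise_to(self, peer, fec):
        """Downstream Unsolicited: push the binding without a request."""
        peer.received[fec] = self.bindings[fec]

    def request_from(self, peer, fec):
        """Downstream On Demand: ask the downstream LSR for a binding."""
        label = peer.bindings.get(fec) or peer.assign_label(fec)
        self.received[fec] = label
        return label

downstream = Lsr("egress")
upstream = Lsr("ingress")

# DU: the downstream LSR assigns a label and advertises it unsolicited.
downstream.assign_label("10.0.0.0/24")
downstream.advertise_to(upstream, "10.0.0.0/24")

# DOD: the upstream LSR explicitly requests a binding for another FEC.
upstream.request_from(downstream, "10.0.1.0/24")
```

In both modes the binding ends up in the upstream LSR's received-label table; only the trigger (unsolicited push versus explicit request) differs.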

LDP and MPLS

LDP performs the label distribution only in MPLS environments. The LDP operation begins with a hello discovery process to find LDP peers in the network. LDP peers are two LSRs that use LDP to exchange label/FEC mapping information. An LDP session is created between LDP peers. A single LDP session allows each peer to learn the other's label mappings (LDP is bidirectional) and to exchange label binding information.

LDP signaling works with the MPLS label manager to manage the relationships between labels and the corresponding FEC. For service-based FECs, LDP works in tandem with the Service Manager to identify the virtual leased lines (VLLs) and Virtual Private LAN Services (VPLSs) to signal.

An MPLS label identifies a set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling protocol (in this case, LDP) and allocated a label. The mapping between the label and the FEC is communicated to the forwarding plane. For this processing to occur at high speed, optimized tables that enable fast access and packet identification are maintained in the forwarding plane.

When an unlabeled packet ingresses the router, classification policies associate it with a FEC. The appropriate label is imposed on the packet, and the packet is forwarded. Other actions that can take place before a packet is forwarded include imposing additional labels, other encapsulations, learning actions, and so on. When all actions associated with the packet are completed, the packet is forwarded.

When a labeled packet ingresses the router, the label or stack of labels indicates the set of actions associated with the FEC for that label or label stack. The actions are performed on the packet and then the packet is forwarded.
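The two ingress cases above can be illustrated with a toy model of the forwarding-plane tables. The table layout and names (`fec_table`, `label_actions`, `ingress`) are invented for clarity and are not the actual data-plane implementation:

```python
# Illustrative sketch: one table classifies unlabeled packets to a FEC,
# and one maps an incoming top label to its set of actions.

fec_table = {"10.1.0.0/16": "FEC-A"}           # classification policy -> FEC
fec_to_label = {"FEC-A": 300}                  # label imposition at ingress LER
label_actions = {                              # incoming top label -> actions
    300: ["swap:301", "forward:if-1"],
    400: ["pop", "forward:if-2"],
}

def ingress(packet):
    """Return the ordered list of actions applied to the packet."""
    if packet.get("label") is None:
        # Unlabeled: classify to a FEC, then impose the FEC's label.
        fec = fec_table[packet["prefix"]]
        label = fec_to_label[fec]
        return [f"push:{label}", "forward"]
    # Labeled: the top label selects the action set directly.
    return label_actions[packet["label"]]
```

An unlabeled packet matching `10.1.0.0/16` gets label 300 pushed and is forwarded; a packet arriving with label 300 is swapped and forwarded out the next-hop interface.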

The LDP implementation provides support for DOD, DU, ordered control, and liberal label retention modes.

LDP architecture

LDP comprises a few processes that handle the protocol PDU transmission, timer-related issues, and protocol state machine. The number of processes is kept to a minimum to simplify the architecture and to allow for scalability. Scheduling within each process prevents starvation of any particular LDP session, while buffering alleviates TCP-related congestion issues.

The LDP subsystems and their relationships to other subsystems are illustrated in Subsystem interrelationships. This illustration shows the interaction of the LDP subsystem with other subsystems, including memory management, label management, service management, SNMP, interface management, and RTM. In addition, debugging capabilities are provided through the logger.

Communication within LDP tasks is typically done by inter-process communication through the event queue, as well as through updates to the various data structures. The primary data structures that LDP maintains are:

  • FEC/label database

    This database contains all FEC-to-label mappings, both sent and received. It contains both address FECs (prefixes and host addresses) and service FECs (L2 VLLs and VPLS).

  • timer database

    This database contains all timers for maintaining sessions and adjacencies.

  • session database

    This database contains all session and adjacency records and serves as a repository for the LDP MIB objects.
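The three databases can be sketched as simple structures. Field names here are illustrative assumptions, not the actual internal layout:

```python
# A minimal sketch of the three primary LDP data structures described
# above; all field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class FecLabelDb:
    sent: dict = field(default_factory=dict)       # FEC -> label advertised
    received: dict = field(default_factory=dict)   # (peer, FEC) -> label learned

@dataclass
class TimerDb:
    hello: dict = field(default_factory=dict)      # adjacency -> hello timer
    keepalive: dict = field(default_factory=dict)  # session -> keepalive timer

@dataclass
class SessionDb:
    sessions: dict = field(default_factory=dict)     # peer LSR-ID -> session record
    adjacencies: dict = field(default_factory=dict)  # peer LSR-ID -> adjacency record

fecdb, timers, sessions = FecLabelDb(), TimerDb(), SessionDb()
fecdb.sent["192.0.2.1/32"] = 131071                  # an address FEC binding
sessions.sessions["10.0.0.2"] = {"state": "OPERATIONAL"}
```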

Subsystem interrelationships

The sections below describe how LDP and the other subsystems work to provide services. Subsystem interrelationships shows the interrelationships among the subsystems.

Figure 1. Subsystem interrelationships

Memory manager and LDP

LDP does not use any memory until it is instantiated. It pre-allocates some amount of fixed memory so that initial startup actions can be performed. Memory allocation for LDP comes out of a pool reserved for LDP that can grow dynamically as needed. Fragmentation is minimized by allocating memory in larger chunks and managing the memory internally to LDP. When LDP is shut down, it releases all memory allocated to it.

Label manager

LDP assumes that the label manager is up and running. LDP aborts initialization if the label manager is not running. The label manager is initialized at system boot up; therefore, anything that causes it to fail likely implies that the system is not functional. The router uses the dynamic label range to allocate all dynamic labels, including RSVP and BGP allocated labels and VC labels.

LDP configuration

The router uses a single consistent interface to configure all protocols and services. CLI commands are translated to SNMP requests and are handled through an agent-LDP interface. LDP can be instantiated or deleted through SNMP. Also, LDP targeted sessions can be set up to specific endpoints. Targeted-session parameters are configurable.

Logger

LDP uses the logger interface to generate debug information relating to session setup and teardown, LDP events, label exchanges, and packet dumps. Per-session tracing can be performed.

Service manager

All interaction occurs between LDP and the service manager, because LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.

Execution flow

LDP activity in the operating system is limited to service-related signaling. Therefore, the configurable parameters are restricted to system-wide parameters, such as hello and keepalive timeouts.

Initialization

LDP makes sure that the various prerequisites, such as ensuring the system IP interface is operational, the label manager is operational, and there is memory available, are met. It then allocates itself a pool of memory and initializes its databases.

Session lifetime

In order for a targeted LDP (T-LDP) session to be established, an adjacency must be created. The LDP extended discovery mechanism requires Hello messages to be exchanged between two peers for session establishment. After the adjacency establishment, session setup is attempted.

Adjacency establishment

In the router, the adjacency management is done through the establishment of a Service Distribution Path (SDP) object, which is a service entity in the Nokia service model.

The Nokia service model uses logical entities that interact to provide a service. The service model requires the service provider to create configurations for four main entities:

  • customers

  • services

  • Service Access Paths (SAPs) on the local routers

  • Service Distribution Points (SDPs) that connect to one or more remote routers

An SDP is the network-side termination point for a tunnel to a remote router. An SDP defines a local entity that includes the system IP address of the remote routers and a path type. Each SDP comprises:

  • the SDP ID

  • the transport encapsulation type, either MPLS or GRE

  • the far-end system IP address

If the SDP is identified as using LDP signaling, then an LDP extended Hello adjacency is attempted.

Note: If the tldp option is selected as the mechanism for exchanging service labels over an MPLS or GRE SDP and the T-LDP session is automatically established, an explicit T-LDP session that is subsequently configured takes precedence over the automatic T-LDP session. However, if the explicit, manually-configured session is then removed, the system does not revert to the automatic session and the automatic session is also deleted. To address this, recreate the T-LDP session by disabling and re-enabling the SDP using the shutdown and no shutdown commands. To address this in MD-CLI, recreate the T-LDP session by using the admin-state command to administratively disable and then enable the SDP.

If another SDP is created to the same remote destination, and if LDP signaling is enabled, no further action is taken, because only one adjacency and one LDP session exist between the pair of nodes.

An SDP is a unidirectional object, so a pair of SDPs pointing at each other must be configured in order for an LDP adjacency to be established. When an adjacency is established, it is maintained through periodic Hello messages.

Session establishment

When the LDP adjacency is established, the session setup follows as per the LDP specification. Initialization and keepalive messages complete the session setup, followed by address messages to exchange all interface IP addresses. Periodic keepalives or other session messages maintain the session liveliness.
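The setup sequence above can be sketched as a toy state machine. The state names (NONEXISTENT, INITIALIZED, OPENREC, OPERATIONAL) loosely follow the LDP specification (RFC 5036); the transition table is a simplified sketch, not the router's actual implementation:

```python
# Hedged sketch of the LDP session setup sequence: Hello discovery,
# then Initialization and KeepAlive to complete setup, then Address
# messages once the session is up.

SEQUENCE = ["Hello", "Initialization", "KeepAlive", "Address"]

def advance(state, message):
    """Advance a toy session state machine on a received message."""
    transitions = {
        ("NONEXISTENT", "Hello"): "INITIALIZED",
        ("INITIALIZED", "Initialization"): "OPENREC",
        ("OPENREC", "KeepAlive"): "OPERATIONAL",
    }
    return transitions.get((state, message), state)

state = "NONEXISTENT"
for msg in SEQUENCE:
    state = advance(state, msg)
# Address messages are exchanged only after the session is OPERATIONAL,
# and periodic KeepAlives then maintain session liveliness.
```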

Because TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and re-attempted as the back-pressure eases.

Label exchange

Label exchange is initiated by the service manager. When an SDP is attached to a service (for example, the service gets a transport tunnel), a message is sent from the service manager to LDP. This causes a label mapping message to be sent. Additionally, when the SDP binding is removed from the service, the VC label is withdrawn. The peer must send a label release to confirm that the label is not in use.

Other reasons for label actions

Other reasons for label actions include:

  • MTU changes

    LDP withdraws the previously assigned label and re-signals the FEC with the new MTU in the interface parameter.

  • clear labels

    When a service manager command is issued to clear the labels, the labels are withdrawn, and new label mappings are issued.

  • SDP down

    When an SDP goes administratively down, the VC label associated with that SDP for each service is withdrawn.

  • memory allocation failure

    If there is no memory to store a received label, it is released.

  • VC type unsupported

    When an unsupported VC type is received, the received label is released.

Cleanup

LDP closes all sockets, frees all memory, and shuts down all its tasks when it is deleted, so its memory usage is 0 when it is not running.

Configuring implicit null label

The implicit null label option allows an egress LER to receive MPLS packets from the previous hop without the outer LSP label. This operation of the previous hop is referred to as penultimate hop popping (PHP). The option is signaled by the egress LER to the previous hop during FEC signaling by the LDP control protocol.

Enable the use of the implicit null option, for all LDP FECs for which this node is the egress LER, using the following command:

config>router>ldp>implicit-null-label

When the user changes the implicit null configuration option, LDP withdraws all the FECs and re-advertises them using the new label value.
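The effect of signaling implicit null can be sketched as follows. The implicit null label value 3 is defined in RFC 3032; the function and label values here are illustrative:

```python
# Illustrative sketch of penultimate hop popping (PHP): when the egress
# LER signals the implicit null label (reserved value 3, RFC 3032), the
# penultimate LSR pops the outer LSP label instead of swapping it.

IMPLICIT_NULL = 3

def penultimate_hop(stack, signaled_label):
    """Return the label stack forwarded to the egress LER."""
    if signaled_label == IMPLICIT_NULL:
        return stack[1:]                 # pop the outer LSP label (PHP)
    return [signaled_label] + stack[1:]  # normal swap to the signaled label

# Outer LSP label 500 carried over a service (VC) label 6000:
php_stack = penultimate_hop([500, 6000], IMPLICIT_NULL)
swap_stack = penultimate_hop([500, 6000], 501)
```

With implicit null signaled, the egress LER receives the packet with only the service label; otherwise it receives the swapped outer label as usual.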

Global LDP filters

Both inbound and outbound LDP label binding filtering are supported.

Inbound filtering is performed by way of the configuration of an import policy to control the label bindings an LSR accepts from its peers. Label bindings can be filtered based on:

  • prefix-list (match on bindings with the specified prefix/prefixes)

  • neighbor (match on bindings received from the specified peer)

The default import policy is to accept all FECs received from peers.
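The inbound filtering behavior can be sketched in a few lines. The policy structure and names (`accept_binding`, `prefix_list`, `neighbors`) are invented for illustration; they are not the router's policy syntax:

```python
# Hedged sketch of inbound LDP label-binding filtering: a binding is
# matched on prefix-list and/or neighbor, and rejected only if a
# matching policy entry says so; the default is to accept all FECs.

def accept_binding(prefix, neighbor, policy=None):
    """Apply a toy import policy; no policy means accept everything."""
    if policy is None:
        return True
    # An absent criterion is treated as a wildcard that always matches.
    prefix_match = prefix in policy.get("prefix_list", [prefix])
    neighbor_match = neighbor in policy.get("neighbors", [neighbor])
    if prefix_match and neighbor_match:
        return policy.get("action", "accept") == "accept"
    return True  # non-matching bindings fall through to the default

reject_10 = {"prefix_list": ["10.0.0.0/8"], "action": "reject"}
```

A binding for 10.0.0.0/8 is rejected by this policy, while any other prefix still falls through to the accept-all default.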

Outbound filtering is performed by way of the configuration of an export policy. The Global LDP export policy can be used to explicitly originate label bindings for local interfaces. The Global LDP export policy does not filter out or stop propagation of any FEC received from neighbors. Use the LDP peer export prefix policy for this purpose.

By default, the system does not interpret the presence or absence of the system IP in global policies, and as a result always exports a FEC for that system IP. The consider-system-ip-in-gep command causes the system to interpret the presence or absence of the system IP in global export policies in the same way as it does for the IP addresses of other interfaces.

Export policy enables configuration of a policy to advertise label bindings based on:

  • direct (all local subnets)

  • prefix-list (match on bindings with the specified prefix or prefixes)

The default export policy is to originate label bindings for system address only and to propagate all FECs received from other LDP peers.

Finally, the 'neighbor interface' statement inside a global import policy is not considered by LDP.

Per LDP peer FEC import and export policies

The FEC prefix export policy provides a way to control which FEC prefixes received from other LDP and T-LDP peers are re-distributed to this LDP peer.

The user configures the FEC prefix export policy using the following command:

config>router>ldp>session-params>peer>export-prefixes policy-name

By default, all FEC prefixes are exported to this peer.

The FEC prefix import policy provides a means of controlling which FEC prefixes received from this LDP peer are imported and installed by LDP on this node. If resolved, these FEC prefixes are then re-distributed to other LDP and T-LDP peers.

The user configures the FEC prefix import policy using the following command:

config>router>ldp>session-params>peer>import-prefixes policy-name

By default, all FEC prefixes are imported from this peer.

Configuring multiple LDP LSR ID

The multiple LDP LSR-ID feature provides the ability to configure and initiate multiple Targeted LDP (T-LDP) sessions on the same system using different LDP LSR-IDs. Without this feature, all T-LDP sessions must use the system interface address as the LSR-ID. The feature continues to use the system interface address by default, but also allows any other network interface address, including a loopback, on a per-T-LDP-session basis. The LDP control plane does not allow more than one T-LDP session with different local LSR-ID values to the same LSR-ID on a remote node.

An SDP of type LDP can use a provisioned targeted session with the local LSR-ID set to any network IP for the T-LDP session to the peer matching the SDP far-end address. If, however, no targeted session has been explicitly pre-provisioned to the far-end node under LDP, then the SDP auto-establishes one but uses the system interface address as the local LSR ID.

An SDP of type RSVP must use an RSVP LSP with the destination address matching the remote node LDP LSR-ID. An SDP of type GRE can only use a T-LDP session with a local LSR-ID set to the system interface.

The multiple LDP LSR-ID feature also provides the ability to use the address of the local LDP interface, or any other network IP interface configured on the system, as the LSR-ID to establish link LDP Hello adjacency and LDP session with directly connected LDP peers. The network interface can be a loopback or not.

Link LDP sessions to all peers discovered over a specific LDP interface share the same local LSR-ID. However, LDP sessions on different LDP interfaces can use different network interface addresses as their local LSR-ID.

By default, the link and targeted LDP sessions to a peer use the system interface address as the LSR-ID unless explicitly configured using this feature. The system interface must always be configured on the router or else the LDP protocol does not come up on the node. There is no requirement to include it in any routing protocol.

When an interface other than system is used as the LSR-ID, the transport connection (TCP) for the link or targeted LDP session also uses the address of that interface as the transport address.

Advertisement of FEC for local LSR ID

The FEC for a Local LSR ID is not advertised by default by the system, unless it is explicitly configured to do so. The advertisement of the local-lsr-id is configured using the adv-local-lsr-id commands in the session parameters for a specified peer or the targeted-session peer-template.

Extend LDP policies to mLDP

In addition to link LDP, a policy can be assigned to mLDP as an import policy. For example, if the following policy was assigned as an import policy to mLDP, any FEC arriving with an IP address of 100.0.1.21 is dropped.

*A:SwSim2>config>router>policy-options# info 
----------------------------------------------
            prefix-list "100.0.1.21/32"
                prefix 100.0.1.21/32 exact
            exit
            policy-statement "policy1"
                entry 10
                    from
                        prefix-list "100.0.1.21/32"
                    exit
                    action drop
                    exit
                exit
                entry 20
                exit
                default-action accept
                exit
            exit

The policy can be assigned to mLDP using the configure router ldp import-mcast-policy policy1 command. Based on this configuration, the prefix list matches the mLDP outer FEC and the action is executed.

Note: mLDP import policies are only supported for IPv4 FECs.

The mLDP import policy is useful for enforcing root only functionality on a network. For a PE to be a root only, enable the mLDP import policy to drop any arriving FEC on the P router.

Recursive FEC behavior

In the case of recursive FEC, the prefix list matches the outer root. For example, for recursive FEC <outerROOT, opaque <ActualRoot, opaque<lspID>> the import policy works on the outerROOT of the FEC.

The policy only matches to the outer root address of the FEC and no other field in the FEC.
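This matching rule can be sketched directly. The FEC tuple layout below is an illustrative simplification of the encoding, not the wire format:

```python
# Sketch of the rule above: for a recursive mLDP FEC the import policy
# compares only the outer root address, never the inner fields.

def policy_matches(fec, prefix_list):
    """fec = (outer_root, opaque); opaque may nest (actual_root, lsp_id)."""
    outer_root = fec[0]
    return outer_root in prefix_list     # only the outer root is compared

recursive_fec = ("100.0.1.21", ("10.20.1.3", "lsp-17"))
```

Here the policy matches the recursive FEC on its outer root 100.0.1.21, and the inner actual root 10.20.1.3 never influences the decision.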

Import policy

For mLDP, a policy can be assigned as an import policy only. Import policies only affect FECs arriving to the node, and do not affect the self-generated FECs on the node. The import policy causes the multicast FECs received from the peer to be rejected and stored in the LDP database but not resolved. Therefore, the show router ldp binding command displays the FEC but the FEC is not shown by the show router ldp binding active command. The FEC is not resolved if it is not allowed by the policy.

Only global import policies are supported for mLDP FEC. Per-peer import policies are not supported.

As defined in RFC 6388 for P2MP FEC, SR OS only matches the prefix against the root node address field of the FEC, and no other fields. This means that the policy works on all P2MP Opaque types.

The P2MP FEC Element is encoded as shown in P2MP FEC element encoding.

Figure 2. P2MP FEC element encoding

LDP FEC resolution per specified community

LDP communities provide separation between groups of FECs at the LDP session level. LDP sessions are assigned a community value and any FECs received or advertised over them are implicitly associated with that community.

Note: The community value only has local significance to a node. The user must therefore ensure that communities are assigned consistently to sessions across the network.

SR OS supports multiple targeted LDP sessions over a specified network IP interface between LDP peer systems, each with its own local LSR ID. This makes it especially suitable for building multiple LDP overlay topologies over a common IP infrastructure, each with their own community.

LDP FEC resolution per specified community is supported in combination with stitching to SR or BGP tunnels as follows:

  • Although a FEC is only advertised within a specific LDP community, the FEC can resolve to SR or BGP tunnels if those are the only available tunnels.

  • If LDP has received a label from an LDP peer with an assigned community, that FEC is assigned the community of that session.

  • If no LDP peer has advertised the label, LDP leaves the FEC with no community.

  • The FEC may be resolvable over an SR or BGP tunnel, but the community it is assigned at the stitching node depends on whether LDP has also advertised that FEC to that node, and the community assigned to the LDP session over which the FEC was advertised.

Configuration

Note: The no local-lsr-id or local-lsr-id system commands are synonymous and mean that there is no local LSR ID for a session. These commands apply to classic CLI only.

A community is assigned to an LDP session by configuring a community string in the corresponding session parameters for the peer or the targeted-session peer template. A community only applies to the local LSR ID for a session configured with the following commands:

  • configure router ldp interface-parameters interface ipv4 local-lsr-id

  • configure router ldp interface-parameters interface ipv6 local-lsr-id

  • configure router ldp targeted-session peer local-lsr-id

  • configure router ldp targeted-session peer-template local-lsr-id

A community is never applied to a system FEC or local static FEC. A system FEC or static FEC cannot have a community associated with it and is therefore not advertised over an LDP session with a configured community. Only a single community string can be configured for a session toward a specified peer or within a specified targeted peer template. The FEC advertised by the following commands is automatically put in the community configured on the session:

  • configure router ldp session-parameters peer adv-local-lsr-id

  • configure router ldp targeted-session peer-template adv-local-lsr-id

The specified community is only associated with IPv4 and IPv6 address FECs incoming or outgoing on the relevant session, and not to IPv4/IPv6 P2MP FECs, or service FECs incoming/outgoing on the session.

Static FECs are treated as having no community associated with them, even if they are also received over another session with an assigned community. A mismatch is declared if this situation arises.

Operation

If a FEC is received over a session of a specified community, it is assumed to be associated with that community and is only broadcast to peers using sessions of that community. Likewise, a FEC received over a session with no community is only broadcast over other sessions with no community.

A FEC received over a session with no assigned community is treated, relative to a FEC previously received over a session with a community, as if it came from a session with a differing community. In other words, any particular FEC must only be received from sessions with a single assigned community, or only from sessions with no community. In any other case (sessions with differing communities, or a combination of sessions with a community and sessions without one), the FEC is considered to have a community mismatch.
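The re-advertisement rule can be sketched as a simple filter; `None` models "no community", and all names are invented for illustration:

```python
# Sketch of the rule above: a FEC associated with community C is
# advertised only over other sessions assigned the same community,
# and a no-community FEC only over no-community sessions.

def advertise_targets(fec_community, sessions):
    """sessions: dict of peer -> community (None = no community)."""
    return sorted(peer for peer, comm in sessions.items()
                  if comm == fec_community)

sessions = {"peerA": "RED", "peerB": "RED", "peerC": "BLUE", "peerD": None}
```

A RED FEC is advertised only to peerA and peerB; a FEC with no community reaches only peerD.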

The following procedures apply:

  1. The system remembers the first community (including no community) of the session that a FEC is received on.

  2. If the same FEC is subsequently received over a session with a differing community, the FEC is marked as mismatched and the system raises a trap indicating community mismatch.

    Note: Subsequent traps because of a mismatch for a FEC arriving over a session of the same community (or no community) are squelched for a period of 60 seconds after the first trap. The trap indicates the session and the community of the session, but does not need to indicate the FEC itself.
  3. After a FEC has been marked as mismatched, the FEC is no longer advertised over sessions (or resolved to sessions) that differ either from the original community or in whether a community has been assigned. This can result in asymmetrical leaking of traffic between communities in specific cases, as illustrated by the following scenario. It is therefore recommended that FEC mismatches be resolved as soon as possible after they occur.

    Consider a triangle topology of Nodes A-B-C with iLDP sessions between them, using community=RED. At bootstrap, all the adv-local-lsrId FECs are exchanged, and the FECs are activated correctly as per routing. On each node, for each FEC there is a [local push] and a [local swap] as there is more than one peer advertising such a FEC. At this point all FECs are marked as being RED.

    • Focusing on Node C, consider:

      • Node A-owned RED FEC=X/32

      • Node B-owned RED FEC=Y/32

    • On Node C, the community of the session to node B is changed to BLUE. The consequence of this on Node C follows:

      • The [swap] operation for the remote Node A RED FEC=X/32 is de-programmed, because the Node B peer is now BLUE and Node C therefore no longer receives Node A's FEC=X/32 from B. Only the push remains programmed.

      • The [swap] operation for the remote Node B RED FEC=Y/32 is still programmed, even though this RED FEC is in mismatch, because it is received from both the BLUE peer Node B and the RED peer Node A.

  4. When a session community changes, the session is flapped and the FEC community audited. If the original session is flapped, the FEC community changes as well. The following scenarios illustrate the operation of FEC community auditing:

    • scenario A

      • The FEC comes in on blue session A. The FEC is marked blue.

      • The FEC comes in on red session B. The FEC is marked "mismatched" and stays blue.

      • Session B is changed to green. Session B is bounced. The FEC community is audited, stays blue, and stays mismatched.

    • scenario B

      • The FEC comes in on blue session A. The FEC is marked blue.

      • The FEC comes in on red session B. The FEC is marked "mismatched" and stays blue.

      • Session A is changed to red. The FEC community audit occurs. The "mismatch" indication is cleared and the FEC is marked as red. The FEC remains red when session A comes back up.

    • scenario C

      • The FEC comes in on blue session A. The FEC is marked blue.

      • The FEC comes in on red session B. The FEC is marked "mismatched" and stays blue.

      • Session A goes down. The FEC community audit occurs. The FEC is marked as red and the "mismatch" indication is cleared. The FEC is advertised over red session B.

      • Session A subsequently comes back up and it is still blue. The FEC remains red but is marked "mismatched". The FEC is no longer advertised over blue session A.
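The bookkeeping in the scenarios above can be modeled in a few lines. This is one interpretation of the described behavior, with invented names; it is not the SR OS implementation:

```python
# Toy model of FEC community tracking: the FEC keeps the first
# community it was received with, marks a mismatch on a differing one,
# and an audit re-derives both from the remaining advertising sessions.

class Fec:
    def __init__(self):
        self.sources = {}          # session -> community it advertised with
        self.community = None
        self.assigned = False
        self.mismatch = False

    def receive(self, session, community):
        self.sources[session] = community
        if not self.assigned:
            self.community, self.assigned = community, True
        elif community != self.community:
            self.mismatch = True

    def audit(self):
        """Re-derive community and mismatch from the remaining sources."""
        comms = set(self.sources.values())
        if len(comms) == 1:
            self.community = comms.pop()
            self.mismatch = False
        elif comms:
            self.mismatch = True

# Scenario C: blue session A, then red session B -> mismatch, stays blue.
fec = Fec()
fec.receive("A", "blue")
fec.receive("B", "red")

# Session A goes down; the audit re-marks the FEC red and clears the flag.
del fec.sources["A"]
fec.audit()
```

Replaying scenario C's last step, a returning blue session A re-marks the now-red FEC as mismatched, matching the behavior described above.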

The community mismatch state for a prefix FEC is visible in the show>router>ldp>bindings>prefixes command output, and is also indicated by a MIB flag (in the vRtrLdpNgAddrFecFlags object).

The fact that a FEC is marked "mismatched" has no bearing on its accounting with respect to the limit of the number of FECs that may be received over a session.

The ability of a policy to reject a FEC is independent of the FEC mismatch. A policy prevents the system from using the label for resolution, but if the corresponding session is sending community-mismatched FECs, there is a problem and it should be flagged. In other words, the policy and community mismatch checks are independent, and a FEC should still be marked with a community mismatch, if needed, per the rules above.

T-LDP hello reduction

This feature implements a new mechanism to suppress the transmission of the Hello messages following the establishment of a Targeted LDP session between two LDP peers. The Hello adjacency of the targeted session does not require periodic transmission of Hello messages as in the case of a link LDP session. In link LDP, one or more peers can be discovered over a specific network IP interface and therefore, the periodic transmission of Hello messages is required to discover new peers in addition to the periodic keepalive message transmission to maintain the existing LDP sessions. A Targeted LDP session is established to a single peer. Thus, after the Hello adjacency is established and the LDP session is brought up over a TCP connection, keepalive messages are sufficient to maintain the LDP session.

When this feature is enabled, the targeted Hello adjacency is brought up by advertising the Hold-Time value the user configured in the hello timeout parameter for the targeted session. The LSR node then starts advertising an exponentially increasing Hold-Time value in the Hello message as soon as the targeted LDP session to the peer is up. Each new incremented Hold-Time value is sent in a number of Hello messages equal to the value of the hello reduction factor before the next exponential value is advertised. This provides time for the two peers to settle on the new value. When the Hold-Time reaches the maximum value of 0xffff (65535), the two peers send Hello messages at a frequency of every [(65535-1)/local helloFactor] seconds for the lifetime of the targeted LDP session (for example, if the local hello factor is 3, Hello messages are sent every 21844 seconds).
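The advertised Hold-Time progression can be sketched as follows. A doubling progression is assumed here for "exponentially increasing", and the function names are invented:

```python
# Sketch of the Hold-Time schedule described above: each exponentially
# increased value is advertised in `factor` Hello messages, capped at
# 0xffff, after which Hellos settle at a fixed interval.

def hold_time_schedule(initial, factor, steps=32):
    """Return (hold_time, repeats) pairs up to and including the cap."""
    ht = initial
    schedule = []
    while ht < 0xFFFF and len(schedule) < steps:
        schedule.append((ht, factor))
        ht = min(ht * 2, 0xFFFF)       # assumed doubling progression
    schedule.append((0xFFFF, factor))  # cap reached; steady state begins
    return schedule

sched = hold_time_schedule(initial=45, factor=3)
steady_interval = (0xFFFF - 1) // 3    # per the formula in the text
```

With a 45-second initial Hold-Time and a hello factor of 3, the schedule doubles up to the 0xffff cap, after which Hellos are sent every 21844 seconds.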

Both LDP peers must be configured with this feature to gradually bring their advertised Hold-Time up to the maximum value. If one of the LDP peers is not, the frequency of the Hello messages of the targeted Hello adjacency continues to be governed by the smaller of the two Hold-Time values. This feature complies with draft-pdutta-mpls-tldp-hello-reduce.

Tracking a T-LDP peer with BFD

BFD tracking of an LDP session associated with a T-LDP adjacency allows for faster detection of the liveliness of the session by registering the peer transport address of an LDP session with a BFD session. The source or destination address of the BFD session is the local or remote transport address of the targeted or link (if peers are directly connected) Hello adjacency that triggered the LDP session.

By enabling BFD for a selected targeted session, the state of that session is tied to the state of the underlying BFD session between the two nodes. The parameters used for BFD are set with the BFD command under the IP interface that has the source address of the TCP connection.

Link LDP hello adjacency tracking with BFD

LDP can only track an LDP peer using the Hello and keepalive timers. If an IGP protocol is registered with BFD on an IP interface to track a neighbor, and the BFD session times out, the next-hops for prefixes advertised by the neighbor are no longer resolved. However, this does not bring down the link LDP session to the peer, because the LDP peer is not directly tracked by BFD.

To properly track the link LDP peer, LDP needs to track the Hello adjacency to its peer by registering with BFD.

The user effects Hello adjacency tracking with BFD by enabling BFD on an LDP interface:

config>router>ldp>if-params>if>enable-bfd [ipv4][ipv6]

The parameters used for the BFD session, that is, transmit-interval, receive-interval, and multiplier, are those configured under the IP interface:

config>router>if>bfd

The source or destination address of the BFD session is the local or remote address of link Hello adjacency. When multiple links exist to the same LDP peer, a Hello adjacency is established over each link. However, a single LDP session exists to the peer and uses a TCP connection over one of the link interfaces. Also, a separate BFD session should be enabled on each LDP interface. If a BFD session times out on a specific link, LDP immediately brings down the Hello adjacency on that link. In addition, if there are FECs that have their primary NHLFE over this link, LDP triggers the LDP FRR procedures by sending to IOM and line cards the neighbor/next-hop down message. This results in moving the traffic of the impacted FECs to an LFA next-hop on a different link to the same LDP peer or to an LFA backup next-hop on a different LDP peer depending on the lowest backup cost path selected by the IGP SPF.

As soon as the last Hello adjacency goes down as a result of the BFD timing out, the LDP session goes down and the LDP FRR procedures are triggered. This results in moving the traffic to an LFA backup next-hop on a different LDP peer.

LDP LSP statistics

RSVP-TE LSP statistics are extended to LDP to provide the following counters:

  • per-forwarding-class forwarded in-profile packet count

  • per-forwarding-class forwarded in-profile byte count

  • per-forwarding-class forwarded out-of-profile packet count

  • per-forwarding-class forwarded out-of-profile byte count

The counters are available for the egress data path of an LDP FEC at the ingress LER and at the LSR. Because an ingress LER is also potentially an LSR for an LDP FEC, combined egress data path statistics are provided whenever applicable.

MPLS entropy label

The router supports the MPLS entropy label (RFC 6790) on LDP LSPs used for IGP and BGP shortcuts. This allows LSR nodes in a network to load-balance labeled packets in a much more granular fashion than allowed by simply hashing on the standard label stack.

To configure insertion of the entropy label on IGP or BGP shortcuts, use the entropy-label command under the configure router context.

Importing LDP tunnels to non-host prefixes into TTM

When an LDP LSP is established, TTM is automatically populated with the corresponding tunnel. This automatic behavior does not apply to non-host prefixes. The config>router>ldp>import-tunnel-table command allows for TTM to be populated with LDP tunnels to such prefixes in a controlled manner for both IPv4 and IPv6.

TTL security for BGP and LDP

The BGP TTL Security Hack (BTSH) was originally designed to protect the BGP infrastructure from CPU utilization-based attacks. It is derived from the fact that the vast majority of ISP EBGP peerings are established between adjacent routers. Because TTL spoofing is considered nearly impossible, a mechanism based on an expected TTL value can provide a simple and reasonably robust defense from infrastructure attacks based on forged BGP packets.

While TTL Security Hack (TSH) is most effective in protecting directly connected peers, it can also provide a lower level of protection to multi-hop sessions. When a multi-hop BGP session is required, the expected TTL value can be set to 255 minus the configured range-of-hops. This approach can provide a qualitatively lower degree of security for BGP (such as a DoS attack could, theoretically, be launched by compromising a box in the path). However, BTSH catches a vast majority of observed distributed DoS (DDoS) attacks against EBGP.

TSH can be used to protect LDP peering sessions as well. For more information, see draft-chen-ldp-ttl-xx.txt, TTL-Based Security Option for LDP Hello Message.

The TSH implementation supports the ability to configure TTL security per BGP/LDP peer and evaluate (in hardware) the incoming TTL value against the configured TTL value. If the incoming TTL value is less than the configured TTL value, the packets are discarded and a log is generated.
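The per-packet evaluation described above amounts to a simple comparison. The following sketch illustrates it; the function name and log format are illustrative, not the router's implementation.

```python
def ttl_security_check(incoming_ttl: int, configured_ttl: int) -> bool:
    """Accept only packets whose TTL is at least the configured minimum;
    anything lower is discarded and logged (illustrative model)."""
    if incoming_ttl < configured_ttl:
        print(f"discard: TTL {incoming_ttl} < configured {configured_ttl}")
        return False
    return True

# A directly connected peer sending with TTL 255 arrives with TTL 255;
# a forged packet originated farther away arrives with a lower TTL.
print(ttl_security_check(255, 255))   # True
print(ttl_security_check(252, 255))   # False
```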

ECMP support for LDP

ECMP support for LDP performs load balancing for LDP-based LSPs by having multiple outgoing next-hops for a specific IP prefix on ingress and transit LSRs.

An LSR that has multiple equal cost paths to a specific IP prefix can receive an LDP label mapping for this prefix from each of the downstream next-hop peers. As the LDP implementation uses the liberal label retention mode, it retains all the labels for an IP prefix received from multiple next-hop peers.

Without ECMP support for LDP, only one of these next-hop peers is selected and installed in the forwarding plane. The algorithm used to determine the next-hop peer to be selected involves looking up the route information obtained from the RTM for this prefix and finding the first valid LDP next-hop peer (for example, the first neighbor in the RTM entry from which a label mapping was received). If, for some reason, the outgoing label to the installed next-hop is no longer valid, say the session to the peer is lost or the peer withdraws the label, a new valid LDP next-hop peer is selected out of the existing next-hop peers and LDP reprograms the forwarding plane to use the label sent by this peer.

With ECMP support, all the valid LDP next-hop peers, that is, those that sent a label mapping for a specific IP prefix, are installed in the forwarding plane. At both the ingress LER and the transit LSR, an ingress label is mapped to the next hops that are in the RTM and from which a valid mapping label has been received. The forwarding plane then uses an internal hashing algorithm to determine how the traffic is distributed amongst these multiple next-hops, assigning each ‟flow” to a particular next-hop.

The hash algorithm at LER and transit LSR are described in the "Traffic Load Balancing Options" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide.

LDP supports up to 64 ECMP next hops. LDP takes its maximum limit from the lower of config>router>ecmp and config>router>ldp>max-ecmp-routes.

Label operations

If an LSR is the ingress for a specific IP prefix, LDP programs a push operation for the prefix in the forwarding engine. This creates an LSP ID to Next Hop Label Forwarding Entry (NHLFE) (LTN) mapping and an LDP tunnel entry in the forwarding plane. LDP also informs the Tunnel Table Manager (TTM) of this tunnel. Both the LTN entry and the tunnel entry have an NHLFE for the label mapping that the LSR received from each of its next-hop peers.

If the LSR is a transit LSR for a specific IP prefix, LDP programs a swap operation for the prefix in the forwarding engine. This involves creating an Incoming Label Map (ILM) entry in the forwarding plane. The ILM entry may map an incoming label to multiple NHLFEs. If an LSR is the egress for a specific IP prefix, LDP programs a pop operation in the forwarding engine. This also results in an ILM entry being created in the forwarding plane, but with no NHLFEs.

When unlabeled packets arrive at the ingress LER, the forwarding plane consults the LTN entry and uses a hashing algorithm to map the packet to one of the NHLFEs (push label), then forwards the packet to the corresponding next-hop peer. For labeled packets arriving at a transit or egress LSR, the forwarding plane consults the ILM entry and either uses a hashing algorithm to map the packet to one of the NHLFEs, if they exist (swap label), or simply routes the packet if there are no NHLFEs (pop label).
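This per-packet behavior can be sketched as a toy model in Python. The actual forwarding-plane hash and data structures are internal to the router; the crc32-based hash and the names below are assumptions for illustration.

```python
import zlib

def forward(flow_key: bytes, nhlfes: list):
    """Toy model of the LTN/ILM lookup result: hash the flow onto one NHLFE
    if any exist (push/swap), otherwise route the packet unlabeled (pop)."""
    if not nhlfes:
        return "route"                       # pop: no NHLFEs, IP routing applies
    # A deterministic hash keeps all packets of a flow on the same next hop.
    return nhlfes[zlib.crc32(flow_key) % len(nhlfes)]

labels = ["peer1:1001", "peer2:1002", "peer3:1003"]
choice = forward(b"10.0.0.1->10.1.1.1", labels)
assert choice in labels
assert choice == forward(b"10.0.0.1->10.1.1.1", labels)  # stable per flow
print(forward(b"any", []))   # route
```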

A static FEC swap is not activated unless there is a matching route in the system route table that also matches the user-configured static FEC next-hop.

Weighted ECMP support for LDP

The router supports weighted ECMP in cases where LDP resolves a FEC over an ECMP set of direct next hops corresponding to IP network interfaces, and where it resolves the FEC over an ECMP set of RSVP-TE tunnels. See Weighted load-balancing for LDP over RSVP and SR-TE for information about LDP over RSVP.

Weighted ECMP for direct IP network interfaces uses a load-balancing-weight configured under the config>router>ldp>interface-parameters>interface context. As with LDP over RSVP, weighted ECMP for LDP is enabled using the weighted-ecmp command under the config>router>ldp context. If the interface becomes an ECMP next hop for an LDP FEC, and all the other ECMP next hops are interfaces with configured (non-zero) load-balancing weights, then the traffic distribution over the ECMP interfaces is proportional to the normalized weight. LDP performs the normalization with a granularity of 64.

If one or more of the LDP interfaces in the ECMP set does not have a configured load-balancing weight, then the system falls back to regular ECMP.
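The proportional distribution can be illustrated with a sketch of the normalization step. The rounding behavior here is an assumption; the document states only that normalization is performed with a granularity of 64.

```python
def normalize_weights(weights, granularity=64):
    """Scale configured load-balancing weights to a fixed granularity so the
    forwarding plane can spray traffic in proportion (illustrative rounding)."""
    total = sum(weights)
    return [max(1, round(w * granularity / total)) for w in weights]

# Two equally weighted interfaces each receive half of the 64 buckets.
print(normalize_weights([100, 100]))   # [32, 32]
# A 3:1 weighting yields 48:16.
print(normalize_weights([300, 100]))   # [48, 16]
```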

If both an IGP shortcut tunnel and a direct next hop exist to resolve a FEC, LDP prefers the tunneled resolution. Therefore, if an ECMP set consists of both IGP shortcuts and direct next hops, LDP only load balances across the IGP shortcuts.

Note:
  • LDP only uses configured LDP interface load balancing weights with non-LDP over RSVP resolutions.

  • Weights are normalized across all possible next-hops for a FEC. If the number of ECMP routes configured with config>router>ldp>max-ecmp-routes is less than the actual number of next-hops, traffic is load-balanced using the normalized weights of the first max-ecmp-routes next-hops. This can cause a load distribution within the LDP max-ecmp-routes that is not representative of the distribution that would occur across all ECMP next-hops.

Unnumbered interface support in LDP

This feature allows LDP to establish Hello adjacency and to resolve unicast and multicast FECs over unnumbered LDP interfaces.

This feature also extends the support of lsp-ping, p2mp-lsp-ping, and ldp-treetrace to test an LDP unicast or multicast FEC which is resolved over an unnumbered LDP interface.

Feature configuration

This feature does not introduce a new CLI command for adding an unnumbered interface into LDP. Rather, the fec-originate command is extended to specify the interface name because an unnumbered interface does not have an IP address of its own. The user can, however, specify the interface name for numbered interfaces.

See the CLI section for the changes to the fec-originate command.

Operation of LDP over an unnumbered IP interface

Consider the setup shown in LDP adjacency and session over unnumbered interface.

Figure 3. LDP adjacency and session over unnumbered interface

LSR A and LSR B have the following LDP identifiers respectively:

<LSR Id=A> : <label space id=0>

<LSR Id=B> : <label space id=0>

There are two P2P unnumbered interfaces between LSR A and LSR B. These interfaces are identified on each system with their unique local link identifier. In other words, the combination of {Router-ID, Local Link Identifier} uniquely identifies the interface in OSPF or IS-IS throughout the network.

A borrowed IP address is also assigned to the interface to be used as the source address of IP packets which need to be originated from the interface. The borrowed IP address defaults to the system loopback interface address, A and B respectively in this setup. The user can change the borrowed IP interface to any configured IP interface, loopback or not, by applying the following command:

config>router>if>unnumbered [<ip-int-name | ip-address>]

When the unnumbered interface is added into LDP, it has the following behavior.

Link LDP

When the IPv6 contexts of interfaces I/F1 and I/F2 are brought up, the following procedures are performed.

  1. LSR A (LSR B) sends an IPv6 Hello message with the source IP address set to the link-local unicast address of the specified local LSR ID interface, for example, fe80::a1 (fe80::a2), and the destination IP address set to the link-local multicast address ff02:0:0:0:0:0:0:2.

  2. LSR A (LSR B) sets the LSR-ID in LDP identifier field of the common LDP PDU header to the 32-bit IPv4 address of the specified local LSR-ID interface LoA1 (LoB1), for example, A1/32 (B1/32).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv4 address configured, the adjacency does not come up and an error is returned (lsrInterfaceNoValidIp [17]) in the output of the following command.
    show router ldp interface detail
  3. LSR A (LSR B) sets the transport address TLV in the Hello message to the IPv6 address of the specified local LSR-ID interface LoA1 (LoB1), for example, A2/128 (B2/128).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv6 address configured, the adjacency does not come up and an error is returned (interfaceNoValidIp [16]) in the output of the following command.

    show router ldp interface detail
  4. LSR A (LSR B) includes in each IPv6 Hello message the dual-stack TLV with the transport connection preference set to IPv6 family.

    • If the peer is a third-party LDP IPv6 implementation and does not include the dual-stack TLV, then LSR A (LSR B) resolves IPv6 FECs only because IPv6 addresses are not advertised in Address messages as per RFC 7552 [ldp-ipv6-rfc].

    • If the peer is a third-party LDP IPv6 implementation and includes the dual-stack TLV with the transport connection preference set to IPv4, LSR A (LSR B) does not bring up the Hello adjacency and discards the Hello message. If the LDP session was already established, then LSR A (LSR B) sends a fatal Notification message with the status code 'Transport Connection Mismatch' (0x00000032) and restarts the LDP session [ldp-ipv6-rfc]. In both cases, a counter for transport connection mismatches is incremented in the output of the following command.

      show router ldp statistics
  5. The LSR with the highest transport address takes on the active role and initiates the TCP connection for the LDP IPv6 session using the corresponding source and destination IPv6 transport addresses.
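The active/passive role selection in the last step is a straightforward address comparison, sketched below using Python's ipaddress module (the addresses are illustrative).

```python
import ipaddress

def is_active(local: str, peer: str) -> bool:
    """The LSR with the higher transport address takes the active role and
    initiates the TCP connection for the LDP session (illustrative model)."""
    return ipaddress.ip_address(local) > ipaddress.ip_address(peer)

# With IPv6 transport addresses, a numeric comparison decides the role.
print(is_active("2001:db8::b2", "2001:db8::a2"))   # True: this LSR initiates
print(is_active("2001:db8::a2", "2001:db8::b2"))   # False: this LSR listens
```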

Targeted LDP

Source and destination addresses of targeted Hello packet are the LDP LSR-IDs of systems A and B. The user can configure the local-lsr-id option on the targeted session and change the value of the LSR-ID to either the local interface or to some other interface name, loopback or not, numbered or not. If the local interface is selected or the provided interface name corresponds to an unnumbered IP interface, the unnumbered interface borrowed IP address is used as the LSR-ID. In all cases, the transport address for the LDP session and the source IP address of targeted Hello message is updated to the new LSR-ID value.

The LSR with the highest transport address, that is, LSR-ID in this case, bootstraps the TCP connection and LDP session. Source and destination IP addresses of LDP messages are the transport addresses, that is, LDP LSR-IDs of systems A and B in this case.

FEC resolution

LDP advertises/withdraws unnumbered interfaces using the Address/Address-Withdraw message. The borrowed IP address of the interface is used.

A FEC can be resolved to an unnumbered interface in the same way as it is resolved to a numbered interface. The outgoing interface and next-hop are looked up in RTM cache. The next-hop consists of the router-id and link identifier of the interface at the peer LSR.

LDP FEC ECMP next-hops over a mix of unnumbered and numbered interfaces is supported.

All LDP FEC types are supported.

The fec-originate command is supported when the next-hop is over an unnumbered interface.

All LDP features are supported except for the following:

  • BFD cannot be enabled on an unnumbered LDP interface. This is a consequence of the fact that BFD is not supported on unnumbered IP interfaces on the system.

  • As a consequence of the previous restriction, LDP FRR procedures are not triggered via a BFD session timeout but only by physical failures and local interface down events.

  • Unnumbered IP interfaces cannot be added into LDP global and peer prefix policies.

LDP over RSVP tunnels

LDP over RSVP-TE provides end-to-end tunnels that have two important properties, fast reroute and traffic engineering, which are not available in LDP alone. LDP over RSVP-TE is aimed at large networks (over 100 nodes). Simply using end-to-end RSVP-TE tunnels does not scale: while an LER may not have that many tunnels, any transit node potentially has thousands of LSPs, and if each transit node also has to deal with detours or bypass tunnels, this number can overburden the LSR.

LDP over RSVP-TE allows tunneling of user packets using an LDP LSP inside an RSVP LSP. The main application of this feature is for deployment of MPLS based services, for example, VPRN, VLL, and VPLS services, in large scale networks across multiple IGP areas without requiring full mesh of RSVP LSPs between PE routers.

Figure 4. LDP over RSVP application

The network displayed in LDP over RSVP application consists of two metro areas, Area 1 and 2 respectively, and a core area, Area 3. Each area makes use of TE LSPs to provide connectivity between the edge routers. To enable services between PE1 and PE2 across the three areas, LSP1, LSP2, and LSP3 are set up using RSVP-TE. There are in fact 6 LSPs required for bidirectional operation but we refer to each bidirectional LSP with a single name, for example, LSP1. A targeted LDP (T-LDP) session is associated with each of these bidirectional LSP tunnels. That is, a T-LDP adjacency is created between PE1 and ABR1 and is associated with LSP1 at each end. The same is done for the LSP tunnel between ABR1 and ABR2, and finally between ABR2 and PE2. The loopback address of each of these routers is advertised using T-LDP. Similarly, backup bidirectional LDP over RSVP tunnels, LSP1a and LSP2a, are configured by way of ABR3.

This setup effectively creates an end-to-end LDP connectivity which can be used by all PEs to provision services. The RSVP LSPs are used as a transport vehicle to carry the LDP packets from one area to another. Only the user packets are tunneled over the RSVP LSPs. The T-LDP control messages are still sent unlabeled using the IGP shortest path.

In this application, the bidirectional RSVP LSP tunnels are not treated as IP interfaces and are not advertised back into the IGP. A PE must always rely on the IGP to look up the next hop for a service packet. LDP-over-RSVP introduces a new tunnel type, tunnel-in-tunnel, in addition to the existing LDP tunnel and RSVP tunnel types. If multiple tunnel types match the destination PE FEC lookup, LDP prefers an LDP tunnel over an LDP-over-RSVP tunnel by default.

The design in LDP over RSVP application allows a service provider to build and expand each area independently without requiring a full mesh of RSVP LSPs between PEs across the three areas.

To participate in a VPRN service, PE1 and PE2 perform the autobind to LDP. The LDP label, which represents the target PE loopback address, is used below the RSVP LSP label. Therefore, a three-label stack is required.

To provide a VLL service, PE1 and PE2 are still required to set up a targeted LDP session directly between them. Again, a three-label stack is required: the RSVP LSP label, followed by the LDP label for the loopback address of the destination PE, and finally the pseudowire label (VC label).

This implementation supports a variation of the application in LDP over RSVP application, in which area 1 is an LDP area. In that case, PE1 pushes a two-label stack, while ABR1 swaps the LDP label and pushes the RSVP label, as illustrated in LDP over RSVP application variant. LDP-over-RSVP tunnels can also be used as IGP shortcuts.

Figure 5. LDP over RSVP application variant

Signaling and operation

LDP label distribution and FEC resolution

The user creates a targeted LDP (T-LDP) session to an ABR or the destination PE. This results in LDP hellos being sent between the two routers. These messages are sent unlabeled over the IGP path. Next, the user enables LDP tunneling on this T-LDP session and optionally specifies a list of LSP names to associate with this T-LDP session. By default, all RSVP LSPs which terminate on the T-LDP peer are candidates for LDP-over-RSVP tunnels. At this point in time, the LDP FECs resolving to RSVP LSPs are added into the Tunnel Table Manager as tunnel-in-tunnel type.

If LDP is also running on regular interfaces, the prefixes LDP learns are distributed over both the T-LDP session and the regular IGP interfaces. LDP FEC prefixes with a subnet mask length of 32 or less are resolved over RSVP LSPs. The policy controls which prefixes go over the T-LDP session, for example, only /32 prefixes, or a particular prefix range.

LDP-over-RSVP works with both OSPF and IS-IS. These protocols include the advertising router when adding an entry to the RTM. LDP-over-RSVP tunnels can also be used as shortcuts for BGP next-hop resolution.

Default FEC resolution procedure

When LDP tries to resolve a prefix received over a T-LDP session, it performs a lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the destination PE and the advertising router (ABR or destination PE itself). If the next-hop router advertised the same FEC over link-level LDP, LDP prefers the LDP tunnel by default unless the user explicitly changed the default preference using the system wide prefer-tunnel-in-tunnel command. If the LDP tunnel becomes unavailable, LDP selects an LDP-over-RSVP tunnel if available.

When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router with the best route. If the advertising router matches the T-LDP peer, LDP then performs a second lookup for the advertising router in the Tunnel Table Manager (TTM), which returns the user-configured RSVP LSP with the best metric. If more than one configured LSP has the best metric, LDP selects the first available LSP.

If all user configured RSVP LSPs are down, no more action is taken. If the user did not configure any LSPs under the T-LDP session, the lookup in TTM returns the first available RSVP LSP which terminates on the advertising router with the lowest metric.
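The default resolution order can be sketched as follows. All data structures and names in this sketch are illustrative stand-ins for the RTM and TTM lookups described above.

```python
def resolve_fec(prefix, rtm, ldp_tunnels, ttm, configured_lsps=None):
    """Sketch of default FEC resolution: prefer a link-level LDP tunnel to
    the RTM next hop; otherwise pick the best-metric, available RSVP LSP
    terminating on the advertising router (illustrative model)."""
    next_hop, adv_router = rtm[prefix]          # RTM lookup
    if next_hop in ldp_tunnels:                 # LDP tunnel preferred by default
        return ("ldp", next_hop)
    lsps = ttm.get(adv_router, [])              # second lookup, in the TTM
    if configured_lsps:
        lsps = [l for l in lsps if l["name"] in configured_lsps]
    candidates = sorted((l for l in lsps if l["up"]), key=lambda l: l["metric"])
    if candidates:
        return ("ldp-over-rsvp", candidates[0]["name"])
    return None                                 # all configured LSPs are down

rtm = {"10.20.1.2/32": ("192.168.0.2", "10.20.1.5")}
ttm = {"10.20.1.5": [{"name": "LSP1", "metric": 20, "up": True},
                     {"name": "LSP2", "metric": 10, "up": True}]}
print(resolve_fec("10.20.1.2/32", rtm, set(), ttm))   # ('ldp-over-rsvp', 'LSP2')
```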

FEC resolution procedure When prefer-tunnel-in-tunnel is enabled

When LDP tries to resolve a prefix received over a T-LDP session, it performs a lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the destination PE and the advertising router (ABR or destination PE itself).

When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router with the best route. If the advertising router matches the targeted LDP peer, LDP then performs a second lookup for the advertising router in the Tunnel Table Manager (TTM), which returns the user-configured RSVP LSP with the best metric. If more than one configured LSP has the best metric, LDP selects the first available LSP.

If all user configured RSVP LSPs are down, then an LDP tunnel is selected if available.

If the user did not configure any LSPs under the T-LDP session, a lookup in TTM returns the first available RSVP LSP which terminates on the advertising router. If none are available, then an LDP tunnel is selected if available.

Rerouting around failures

Every failure in the network can be protected against, except for the ingress and egress PEs. All other constructs have protection available. These constructs are LDP-over-RSVP tunnel and ABR.

LDP-over-RSVP tunnel protection

An RSVP LSP can deal with a failure in two ways:

  • If the LSP is a loosely routed LSP, then RSVP finds a new IGP path around the failure, and traffic follows this new path. This may involve some churn in the network if the LSP comes down and then gets re-routed. The tunnel damping feature was implemented on the LSP so that all the dependent protocols and applications do not flap unnecessarily.

  • If the LSP is a CSPF-computed LSP with the fast reroute option enabled, then RSVP switches to the detour path very quickly. From that point, a new LSP is attempted from the head-end (global revertive). When the new LSP is in place, the traffic switches over to the new LSP with make-before-break.

ABR protection

If an ABR fails, then routing around the ABR requires that a new next-hop LDP-over-RSVP tunnel be found to a backup ABR. If an ABR fails, then the T-LDP adjacency fails. Eventually, the backup ABR becomes the new next hop (after SPF converges), and LDP learns of the new next-hop and can reprogram the new path.

LDP over RSVP without area boundary

The LDP over RSVP capability set includes the ability to stitch LDP-over-RSVP tunnels at internal (non-ABR) OSPF and IS-IS routers.

Figure 6. LDP over RSVP without ABR stitching point

In LDP over RSVP without ABR stitching point, assume that the user wants to use LDP over RSVP between router A and destination ‟Dest”. The first thing that happens is that either OSPF or IS-IS performs an SPF calculation resulting in an SPF tree. This tree specifies the lowest possible cost to the destination. In the example shown, the destination ‟Dest” is reachable at the lowest cost through router X. The SPF tree has the following path: A>C>E>G>X.

Using this SPF tree, router A searches for the eligible LSP endpoint that is closest to ‟Dest” (that is, farthest, at the highest cost, from the origin). Assuming that all LSPs in the above diagram are eligible, LSP endpoint G is selected as it terminates on router G, while the other LSPs only reach routers C and E, respectively.

IGP and LSP metrics associated with the various LSPs are ignored; only the tunnel endpoint matters to the IGP. The endpoint that terminates closest to ‟Dest” (highest IGP path cost) is selected for further selection of the LDP over RSVP tunnels to that endpoint. The explicit path the tunnel takes may not match the IGP path that the SPF computes.

If routers A and G have an additional LSP terminating on router G, there would then be two tunnels terminating on the same router closest to the final destination. To the IGP, the number of LSPs to G makes no difference, only that there is at least one LSP to G. In this case, the LSP metric is considered by LDP when deciding which LSP to stitch for the LDP over RSVP connection.

The IGP only passes endpoint information to LDP. LDP looks up the tunnel table for all tunnels to that endpoint and picks the one with the least tunnel metric. There may be many tunnels with the same least cost. LDP FEC prefixes with a subnet mask length of 32 or less are resolved over RSVP LSPs within an area.
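The endpoint selection along the SPF path can be sketched as a simplified model of the procedure described above (router names follow the A>C>E>G>X example).

```python
def select_stitch_endpoint(spf_path, eligible_endpoints):
    """Walk the SPF path from origin to destination and keep the last
    (farthest, highest-cost) router that is an eligible LSP endpoint."""
    farthest = None
    for router in spf_path:
        if router in eligible_endpoints:
            farthest = router
    return farthest

# SPF path A>C>E>G>X with LSPs terminating on C, E, and G: G is selected.
print(select_stitch_endpoint(["A", "C", "E", "G", "X"], {"C", "E", "G"}))  # G
```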

LDP over RSVP and ECMP

ECMP for LDP over RSVP is supported (also see ECMP support for LDP). If ECMP applies, all LSP endpoints found over the ECMP IGP paths are installed in the routing table by the IGP for consideration by LDP. IGP costs to each endpoint may differ because the IGP selects the farthest endpoint per ECMP path.

LDP chooses the endpoint that is highest cost in the route entry and does further tunnel selection over those endpoints. If there are multiple endpoints with equal highest cost, then LDP considers all of them.

Weighted load-balancing for LDP over RSVP and SR-TE

Weighted load-balancing (Weighted ECMP) is supported for LDP over RSVP (LoR):
  • when the LDP next hop resolves to an IGP shortcut tunnel over RSVP
  • when LDP resolves to a static route with next hops which in turn use RSVP tunnels
  • where the tunneling command is configured for the LDP peer (classic LDP over RSVP)

It is also supported when the LDP next hop resolves to an IGP shortcut tunnel over SR-TE. Weighted load-balancing is supported for both push and swap NHLFEs.

At a high level, weighted load-balancing operates as follows:

  1. All the RSVP or SR-TE LSPs in the ECMP set must have a load-balancing-weight configured; otherwise, non-weighted ECMP behavior is used.

  2. The normalized weight of each RSVP or SR-TE LSP is calculated based on its configured load-balancing weight. LDP performs the calculation to a resolution of 64, meaning if there are values between 1 and 200, the system buckets these into 64 values. These LSP next-hops are then populated in TTM.

  3. RTM entries are updated accordingly for LDP shortcuts.

  4. When weighted ECMP is configured for LDP, the normalized weight is downloaded to the IOM when the LDP route is resolved. This occurs for both push and swap NHLFEs.

  5. LDP labeled packets are then sprayed in proportion to the normalized weight of the RSVP or SR-TE LSPs that they are forwarded over.

  6. No per-service differentiation exists between packets. LDP-labeled packets from all services are sprayed in proportion to the normalized weight.

  7. Tunnel-in-tunnel takes precedence over the existence of a static route with a tunneled next hop. If tunneling is configured, then LDP uses these LSPs instead of those used by the static route. This means that LDP may use different tunnels to those pointed to by static routes.

Weighted ECMP for LDP over RSVP, when using IGP shortcuts or static routes, or LDP over SR-TE, when using IGP shortcuts, is enabled as follows:

  • Classic CLI commands

    configure
       router
          weighted-ecmp [strict]
          no weighted-ecmp
    
    configure
       router
          ldp
             [no] weighted-ecmp
    
  • MD-CLI commands

    configure
       router
          weighted-ecmp {false|true|strict}
    
    configure
       router
          ldp
             weighted-ecmp {true|false}
    

However, in the case of classic LoR, the operator only needs to configure weighted ECMP under LDP. The maximum number of ECMP tunnels is taken from the lower of the config>router>ecmp and config>router>ldp>max-ecmp-routes commands.

The following configuration illustrates the case of LDP resolving to a static route with one or more indirect next hops and a set of RSVP tunnels specified in the resolution filter:
  • Classic CLI

    config>router 
       static-route-entry 192.0.2.102/32 
          indirect 192.0.2.2 
            tunnel-next-hop 
               resolution-filter 
                  rsvp-te 
                     lsp "LSP-ABR-1-1" 
                     lsp "LSP-ABR-1-2" 
                     lsp "LSP-ABR-1-3" 
                     exit 
               exit 
          indirect 192.0.2.3 
            tunnel-next-hop 
               resolution-filter 
                  rsvp-te 
                     lsp "LSP-ABR-2-1" 
                     lsp "LSP-ABR-2-2" 
                     lsp "LSP-ABR-2-3" 
                     exit 
                exit 
                no shutdown 
           exit
    
  • MD-CLI

    !*[gl:/configure router "Base"]
        static-routes route 192.0.2.102/32 route-type unicast
          indirect 192.0.2.2
            tunnel-next-hop
              resolution-filter
                rsvp-te
                  lsp "LSP-ABR-1-1"
                  lsp "LSP-ABR-1-2" 
                  lsp "LSP-ABR-1-3" 
                  exit 
            exit
          indirect 192.0.2.3 
            tunnel-next-hop 
               resolution-filter 
                  rsvp-te 
                     lsp "LSP-ABR-2-1" 
                     lsp "LSP-ABR-2-2" 
                     lsp "LSP-ABR-2-3" 
                     exit 
               exit 
            exit

If both config>router>weighted-ecmp and config>router>ldp>weighted-ecmp are configured, then the weights of all of the RSVP tunnels for the static route are normalized to 64 and these are used to spray LDP labeled packets across the set of LSPs. This applies across all shortcuts (static and IGP) to which a route is resolved to the far-end prefix.

Interaction with Class-Based Forwarding

Class Based Forwarding (CBF) is not supported together with Weighted ECMP in LoR.

If both weighted ECMP and class-forwarding are configured under LDP, then LDP uses weighted ECMP only if all LSP next hops have non-default weight values configured. If any of the LSP next hops in the ECMP set does not have a weight configured, then LDP uses CBF, if possible. If weighted ECMP is configured for both LDP and the IGP shortcut for the RSVP tunnel (config>router>weighted-ecmp), then weighted ECMP is used.

LDP resolves and programs FECs according to the weighted ECMP information if the following conditions are met:

  • LDP has both CBF and weighted ECMP fully configured.

  • All LSPs in ECMP set have both a load-balancing weight and CBF information configured.

  • weighted-ecmp is enabled under config>router.

Subsequently, deleting the CBF configuration has no effect; however, deleting the weighted ECMP configuration causes LDP to resolve according to CBF, if complete, consistent CBF information is available. Otherwise LDP sprays over all the LSPs equally, using non-weighted ECMP behavior.

If the IGP shortcut tunnel using the RSVP LSP does not have complete weighted ECMP information (for example, config>router>weighted-ecmp is not configured or one or more of the RSVP tunnels has no load-balancing-weight) then LDP attempts CBF resolution. If the CBF resolution is complete and consistent, then LDP programs that resolution. If a complete, consistent CBF resolution is not received, then LDP sprays over all the LSPs equally, using regular ECMP behavior.
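The precedence between weighted ECMP, CBF, and regular ECMP described in the preceding paragraphs can be summarized as a decision function. This is an assumed paraphrase of the documented behavior, not SR OS source, and all names are hypothetical:

```python
# Illustrative decision sketch: how LDP picks a resolution mode for a
# FEC over an ECMP set of RSVP LSP next hops, per the text above.

def resolution_mode(router_weighted_ecmp, ldp_weighted_ecmp,
                    all_lsps_have_weight, cbf_complete_and_consistent):
    """Return 'weighted-ecmp', 'cbf', or 'ecmp'."""
    if router_weighted_ecmp and ldp_weighted_ecmp and all_lsps_have_weight:
        return "weighted-ecmp"   # complete weighted ECMP information wins
    if cbf_complete_and_consistent:
        return "cbf"             # otherwise attempt class-based forwarding
    return "ecmp"                # otherwise spray equally over all LSPs
```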

Where entropy labels are supported on LoR, the entropy label (both insertion and extraction at LER for the LDP label and hashing at LSR for the LDP label) is supported when weighted ECMP is in use.

Class-Based Forwarding of LDP prefix packets over IGP shortcuts

Within large ISP networks, services are typically required from any PE to any PE and can traverse multiple domains. Also, within a service, different traffic classes can coexist, each with specific requirements on latency and jitter.

SR OS provides a comprehensive set of Class Based Forwarding capabilities. Specifically the following can be performed:

  • class-based forwarding, in conjunction with ECMP, for incoming unlabeled traffic resolving to an LDP FEC, over IGP IPv4 shortcuts (LER role)

  • class-based forwarding, in conjunction with ECMP, for incoming labeled LDP traffic, over IGP IPv4 shortcuts (LSR role)

  • class-based forwarding, in conjunction with ECMP, of GRT IPv4/IPv6 prefixes over IGP IPv4 shortcuts

    See chapter IP Router Configuration, Section 2.3 in 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide, for a description of this case.

  • class-based forwarding, in conjunction with ECMP, of VPN-v4/-v6 prefixes over RSVP-TE or SR-TE

    See chapter Virtual Private Routed Network Service, Section 3.2.27 in 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 3 Services Guide: IES and VPRN, for a description of this case.

In all four cases, the IGP IPv4 shortcuts use MPLS RSVP-TE or SR-TE LSPs.

Configuration and operation

The class-based forwarding feature enables service providers to control which LSPs, out of the set of ECMP tunnel next hops that resolve an LDP FEC prefix, forward packets classified to specific forwarding classes, as opposed to normal ECMP spraying where packets are sprayed over the whole set of LSPs.

To activate CBF, the user should enable the following:

  • IGP shortcuts or forwarding adjacencies in the routing instance

  • ECMP

  • advertisement of unicast prefix FECs on the Targeted LDP session to the peer

  • class-based forwarding in the LDP context (LSR role, LER role or both)

The FC-to-Set based configuration mode is controlled by the following commands:

config>router>mpls>class-forwarding-policy policy-name

config>router>mpls>class-forwarding-policy>fc> {be | l2 | af | l1 | h2 | ef | h1 | nc} forwarding-set value

config>router>mpls>class-forwarding-policy>default-set value

config>router>mpls>lsp>class-forwarding>forwarding-set policy policy-name set set-id

The last command also applies to the lsp-template context, so LSPs created from that template acquire the assigned CBF configuration.

Multiple FCs can be assigned to a specific set. Also, multiple LSPs can map to the same (policy, set) pair. However, an LSP cannot map to more than one (policy, set) pair.

The two configuration modes are mutually exclusive on a per-LSP basis.
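The mapping constraints above can be expressed as a small validation routine. This is an illustrative sketch with hypothetical names; it enforces only that an LSP never maps to more than one (policy, set) pair, while multiple FCs or LSPs may share a set:

```python
# Illustrative sketch of the FC-to-Set assignment constraints: several
# LSPs may share a (policy, set) pair, but one LSP must not be assigned
# to two different pairs.

def validate_lsp_assignments(assignments):
    """assignments: iterable of (lsp_name, policy_name, set_id) tuples.

    Returns {lsp: (policy, set_id)} or raises ValueError if an LSP is
    assigned to more than one (policy, set) pair.
    """
    mapping = {}
    for lsp, policy, set_id in assignments:
        pair = (policy, set_id)
        if lsp in mapping and mapping[lsp] != pair:
            raise ValueError(
                f"LSP {lsp} mapped to more than one (policy, set) pair")
        mapping[lsp] = pair
    return mapping
```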

The CBF behavior depends on the configuration used, and on whether CBF was enabled for the LER or LSR roles, or both. The table below illustrates the different modes of operation of Class Based Forwarding depending on the node functionality where enabled, and on the type of configuration present in the ECMP set.

These modes of operation are described in the following sections.

LSR and LER roles with FC-to-Set configuration

Both LSR and LER roles behave in the same way with this type of configuration. Before installing CBF information in the forwarding path, the system performs a consistency check on the CBF information of the ECMP set of tunnel next hops that resolve an LDP prefix FEC.

If no LSP in the full ECMP set has been assigned a class forwarding policy configuration, the set is considered inconsistent from a CBF perspective. The system then programs, in the forwarding path, the whole ECMP set without any CBF information, and regular ECMP spraying occurs over the full set.

If the ECMP set is assigned to more than one class forwarding policy, the set is inconsistent from a CBF perspective. The system then programs, in the forwarding path, the whole ECMP set without any CBF information, and regular ECMP spraying occurs over the full set.

A full ECMP set is consistent from a CBF perspective when the ECMP set:

  • is assigned to a single class forwarding policy

  • contains either an LSP assigned to the default set (implicitly or explicitly), or an LSP assigned to a non-default set that has explicit FC mappings

If there is no default set in a consistent ECMP set, the system automatically selects one set as the default. The selected set is the one with the lowest ID among those referenced by the LSPs of the ECMP set.

If the ECMP set is consistent from a CBF perspective, the system programs in the forwarding path all the LSPs which have CBF configuration, and packets classified to a specific FC are forwarded by using the LSPs of the corresponding forwarding set.

If there is more than one LSP in a forwarding set, the system performs a modulo operation on those LSPs only to select one. As a result, ECMP spraying occurs for multiple packets of this forwarding class. The system also programs, in the forwarding path, the remaining LSPs of the ECMP set without any CBF information; these LSPs are not used for class-based forwarding.

If there is no operational LSP in a specific forwarding set, the system forwards packets which have been classified to the corresponding forwarding class onto the default set. Additionally, if there is no operational LSP in the default set, the system reverts to regular ECMP spraying over the full ECMP set.
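The selection and fallback behavior described above (the FC's set, then the default set, then the full ECMP set) can be sketched as follows. This is an assumed model, not SR OS code; the modulo over the operational LSPs of a set stands in for the real hash-based spraying:

```python
# Illustrative sketch of CBF LSP selection with the documented
# fallbacks, for a CBF-consistent ECMP set.

def select_lsp(fc_to_set, default_set, set_to_lsps, operational,
               fc, flow_hash):
    """Pick an operational LSP for forwarding class `fc`.

    Tries the FC's forwarding set, then the default set; within a set a
    modulo of the flow hash picks one LSP. Falls back to regular ECMP
    over the full set if neither set has an operational LSP.
    """
    def pick(lsps):
        up = [lsp for lsp in lsps if lsp in operational]
        return up[flow_hash % len(up)] if up else None

    for set_id in (fc_to_set.get(fc, default_set), default_set):
        lsp = pick(set_to_lsps.get(set_id, []))
        if lsp is not None:
            return lsp
    # Last resort: regular ECMP spraying over the full ECMP set.
    return pick([lsp for lsps in set_to_lsps.values() for lsp in lsps])
```

For example, with set 2 mapped to FC "ef" and set 1 as the default, an "ef" packet uses set 2 while that set has an operational LSP, and falls back to set 1 when it does not.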

If the user changes (by adding, modifying, or deleting) the CBF configuration associated with an LSP that was previously selected as part of an ECMP set, the FEC resolution is automatically updated and a CBF consistency check is performed. The change can result in an updated forwarding configuration.

The LSR role applies to incoming labeled LDP traffic whose FEC is resolved to IGP IPv4 shortcuts.

The LER role applies to the following:

  • IPv4 and IPv6 prefixes in GRT (with an IPv4 BGP NH)

  • VPN-v4 and VPN-v6 routes

However, LER does not apply to any service which uses either explicit binding to an SDP (static or T-LDP signaled services), or auto-binding to SDP (BGP-AD VPLS, BGP-VPLS, BGP-VPWS, Dynamic MS-PW).

For BGP-LU, ECMP+CBF is supported only in the absence of the VPRN label. Therefore, ECMP+CBF is not supported when a VPRN label runs on top of BGP-LU (itself running over LDPoRSVP).

The CBF capability is available with any system profile. The number of sets is limited to four with system profile None or A, and to six with system profile B. This capability does not apply to CPM-generated packets, including OAM packets, which are looked up in the RTM and forwarded over tunnel next hops. These packets are forwarded using either regular ECMP or by selecting one next hop from the set.

LDP ECMP uniform failover

LDP ECMP uniform failover allows the ingress data path to quickly redistribute packets forwarded over an LDP FEC next-hop to other next-hops of the same FEC when the currently used next-hop fails. The switchover is performed within a bounded time that does not depend on the number of impacted LDP ILMs (LSR role) or service records (ingress LER role). The uniform failover time is only supported for a single LDP interface or LDP next-hop failure event.

This feature complements the coverage provided by the LDP Fast-ReRoute (FRR) feature, which provides a Loop-Free Alternate (LFA) backup next-hop with uniform failover time. Prefixes that are protected by one or more ECMP next-hops are not programmed with an LFA backup next-hop, and vice versa.

The LDP ECMP uniform failover feature builds on the concept of Protect Group ID (PG-ID) introduced in LDP FRR. LDP assigns a unique PG-ID to all FECs that have their primary Next-Hop Label Forwarding Entry (NHLFE) resolved to the same outgoing interface and next-hop.

When an ILM record (LSR role) or LSPid-to-NHLFE (LTN) record (LER role) is created on the IOM, it has the PG-ID of each ECMP NHLFE the FEC is using.

When a packet is received on this ILM/LTN, the hash routine selects one of the ECMP NHLFEs for the FEC, up to 64 or the ECMP value configured on the system, whichever is less, based on a hash of the packet's header. If the selected NHLFE has its PG-ID in the DOWN state, the hash routine re-computes the hash to select a backup NHLFE among the first 16 NHLFEs of the FEC, or the configured ECMP value, whichever is less, excluding the one in the DOWN state. Packets of the subset of flows that resolved to the failed NHLFE are therefore sprayed among a maximum of 16 NHLFEs.

LDP then re-computes the new ECMP set to exclude the failed path and downloads it to the IOM. At that point, the hash routine updates its computation and begins spraying over the updated set of NHLFEs.
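The failover selection described in the two preceding paragraphs can be sketched as follows. This is an illustrative model of the documented behavior, not the actual data path microcode, and the names are hypothetical:

```python
# Illustrative sketch of LDP ECMP uniform failover NHLFE selection:
# hash into the ECMP set; if the chosen NHLFE's protect group (PG-ID)
# is DOWN, re-select among the first up-to-16 NHLFEs, excluding it.

def select_nhlfe(nhlfes, pg_state, pkt_hash, ecmp=64):
    """nhlfes: ordered NHLFE ids; pg_state: id -> 'UP' or 'DOWN'."""
    primary_set = nhlfes[:min(64, ecmp)]
    chosen = primary_set[pkt_hash % len(primary_set)]
    if pg_state.get(chosen) != "DOWN":
        return chosen
    # Backup selection among the first 16 (or the ECMP value, whichever
    # is less), excluding the NHLFE whose PG-ID is DOWN.
    backup_set = [n for n in nhlfes[:min(16, ecmp)] if n != chosen]
    return backup_set[pkt_hash % len(backup_set)]
```

For example, a flow that hashes to a failed NHLFE is immediately re-sprayed over the remaining NHLFEs without waiting for LDP to download the re-computed ECMP set.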

LDP sends the DOWN state update of the PG-ID to the IOM when the outgoing interface or a specific LDP next-hop goes down. This can be the result of any of the following events:

  • interface failure detected directly

  • failure of the LDP session detected via T-LDP BFD or LDP keepalive

  • failure of LDP Hello adjacency detected via link LDP BFD or LDP Hello

In addition, PIP sends an interface down event to the IOM if the interface failure is detected by other means than the LDP control plane or BFD. In that case, all PG-IDs associated with this interface have their state updated by the IOM.

When tunneling LDP packets over an RSVP LSP, it is the detection of the T-LDP session going down, via BFD or keepalive, that triggers the LDP ECMP uniform failover procedures. If the RSVP LSP alone fails and it is not protected by RSVP FRR, the failure event triggers the re-resolution of the impacted FECs in the slow path.

When a multicast LDP (mLDP) FEC is resolved over ECMP links to the same downstream LDP LSR, the PG-ID DOWN state causes packets of the FEC resolved to the failed link to be switched to another link using the linear FRR switchover procedures.

The LDP ECMP uniform failover is not supported in the following forwarding contexts:

  • VPLS BUM packets

  • packets forwarded to an IES/VPRN spoke-interface

  • packets forwarded toward VPLS spoke in routed VPLS

Finally, the LDP ECMP uniform failover is only supported for a single LDP interface, LDP next-hop, or peer failure event.

LDP Fast-Reroute for IS-IS and OSPF prefixes

LDP Fast Re-Route (FRR) is a feature which allows the user to provide local protection for an LDP FEC by pre-computing and downloading to the IOM or XCM both a primary and a backup NHLFE for this FEC.

The primary NHLFE corresponds to the label of the FEC received from the primary next-hop as per standard LDP resolution of the FEC prefix in RTM. The backup NHLFE corresponds to the label received for the same FEC from a Loop-Free Alternate (LFA) next-hop.

The LFA next-hop pre-computation by IGP is described in RFC 5286 – Basic Specification for IP Fast Reroute: Loop-Free Alternates. LDP FRR relies on using the label-FEC binding received from the LFA next-hop to forward traffic for a prefix as soon as the primary next-hop is not available. This means that a node resumes forwarding LDP packets to a destination prefix without waiting for the routing convergence. The label-FEC binding is received from the loop-free alternate next-hop ahead of time and is stored in the Label Information Base because LDP on the router operates in the liberal retention mode.

This feature requires that IGP performs the Shortest Path First (SPF) computation of an LFA next-hop, in addition to the primary next-hop, for all prefixes used by LDP to resolve FECs. IGP also populates both routes in the Routing Table Manager (RTM).

LDP FRR configuration

Use the following commands to enable Loop-Free Alternate (LFA) computation by SPF under the IS-IS or OSPF routing protocol level:

  • MD-CLI
    configure router isis loopfree-alternate
    configure router ospf loopfree-alternate
  • classic CLI
    configure router isis loopfree-alternates
    configure router ospf loopfree-alternates

The preceding commands instruct the IGP SPF to attempt to pre-compute both a primary next hop and an LFA next hop for every learned prefix. When found, the LFA next hop is populated into the RTM along with the primary next hop for the prefix.

Next the user enables the use by LDP of the LFA next hop by configuring the following command.

configure router ldp fast-reroute

When this command is enabled, LDP uses both the primary next hop and LFA next hop, when available, for resolving the next hop of an LDP FEC against the corresponding prefix in the RTM. This results in LDP programming a primary NHLFE and a backup NHLFE into the IOM or XCM for each next hop of a FEC prefix for the purpose of forwarding packets over the LDP FEC.

Because LDP can detect the loss of a neighbor/next hop independently, it is possible that it switches to the LFA next hop while IGP is still using the primary next hop. To avoid this situation, Nokia recommends the user enable IGP-LDP synchronization on the LDP interface with the following command.

configure router interface ldp-sync-timer

LDP FRR procedures

The LDP FEC resolution when LDP FRR is not enabled operates as follows. When LDP receives a FEC-label binding for a prefix, it resolves it by checking whether the exact prefix, or a longest-match prefix when the following command is enabled in LDP, exists in the routing table and is resolved to a next hop that is an address belonging to the LDP peer that advertised the binding, as identified by its LSR-id.
configure router ldp aggregate-prefix-match
When the next hop is no longer available, LDP deactivates the FEC and deprograms the NHLFE in the datapath. LDP also immediately withdraws the labels it advertised for this FEC and deletes the ILM in the datapath unless the user configured the following command to delay this operation.
configure router ldp label-withdrawal-delay
Traffic received while the ILM is still in the data path is dropped. When routing computes and populates the routing table with a new next hop for the prefix, LDP resolves the FEC again and programs the data path accordingly.
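The resolution check described above (an exact match, or a longest match when aggregate-prefix-match is enabled, plus the requirement that the advertising peer be the next hop) can be sketched as follows. This is an assumed model using Python's ipaddress module, not SR OS code:

```python
# Illustrative sketch of LDP FEC resolution against the routing table.
import ipaddress

def resolve_fec(fec_prefix, routing_table, peer_addresses,
                aggregate_prefix_match=False):
    """routing_table: {prefix: next_hop}. Returns the next hop used to
    resolve the FEC, or None if the FEC cannot be resolved."""
    fec = ipaddress.ip_network(fec_prefix)
    candidates = []
    for prefix, next_hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if net == fec:
            candidates.append((net.prefixlen, next_hop))
        elif aggregate_prefix_match and fec.subnet_of(net):
            candidates.append((net.prefixlen, next_hop))
    if not candidates:
        return None
    # Longest match wins; an exact match has the greatest prefix length.
    _, next_hop = max(candidates)
    # The advertising LDP peer must be the next hop for the prefix.
    return next_hop if next_hop in peer_addresses else None
```

For example, a /32 FEC covered only by a /24 route resolves solely when aggregate-prefix-match is enabled and the advertising peer is the route's next hop.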

When LDP FRR is enabled and an LFA backup next hop exists for the FEC prefix in RTM, or for the longest prefix the FEC prefix matches to when the aggregate-prefix-match command is enabled in LDP, LDP resolves the FEC as above but programs the data path with both a primary NHLFE and a backup NHLFE for each next hop of the FEC.

In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the uniform FRR failover procedures, which are also supported with RSVP FRR.

When any of the following events occurs, LDP instructs the IOM on the line cards to enable, in the fast path, the backup NHLFE for each FEC next hop impacted by the event. The IOM line cards do this by simply flipping a single state bit associated with the failed interface or neighbor/next hop:

  • An LDP interface goes operationally down, or is admin shutdown. In this case, LDP sends a neighbor/next hop down message to the IOM line cards for each LDP peer it has adjacency with over this interface.

  • An LDP session to a peer went down as the result of the Hello or keepalive timer expiring over a specific interface. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.

  • The TCP connection used by a link LDP session to a peer went down, for example, due to next-hop tracking of the LDP transport address in RTM, which brings down the LDP session. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.

  • A BFD session enabled on a T-LDP session to a peer times out and, as a result, the link LDP session to the same peer, which uses the same TCP connection as the T-LDP session, also goes down. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only.

  • A BFD session enabled on the LDP interface to a directly connected peer times out and brings down the link LDP session to this peer. In this case, LDP sends a neighbor/next-hop down message to the IOM line cards for this LDP peer only. BFD support on LDP interfaces provides faster tracking of link LDP peers.

The following commands, when enabled, do not cause the corresponding timer to be activated for a FEC as long as a backup NHLFE is still available.
configure router ldp tunnel-down-damp-time
configure router ldp label-withdrawal-delay

ECMP considerations

Whenever the SPF computation determines that there is more than one primary next hop for a prefix, it does not program any LFA next hop in RTM. In this case, the LDP FEC resolves to the multiple primary next hops, which provide the required protection.

Also, when the system ECMP value is configured as configure router ecmp 1 (the default value), SPF can use the overflow ECMP links as LFA next hops in these two cases.

LDP FRR and LDP shortcut

When LDP FRR is enabled in LDP and the ldp-shortcut option is enabled at the router level, in-transit IPv4 packets and specific CPM-generated IPv4 control plane packets with a prefix resolving to the LDP shortcut are protected by the backup LDP NHLFE.

LDP FRR and LDP-over-RSVP

When LDP-over-RSVP is enabled, the RSVP LSP is modeled as an endpoint, that is, the destination node of the LSP, and not as a link in the IGP SPF. Thus, it is not possible for IGP to compute a primary or alternate next hop for a prefix which FEC path is tunneled over the RSVP LSP. Only LDP is aware of the FEC tunneling but it cannot determine on its own a loop-free backup path when it resolves the FEC to an RSVP LSP.

As a result, LDP does not activate the LFA next hop it learned from RTM for a FEC prefix when the FEC is resolved to an RSVP LSP. LDP activates the LFA next hop as soon as the FEC is resolved to a direct primary next hop.

An LDP FEC tunneled over an RSVP LSP because of the LDP-over-RSVP feature therefore does not support the LDP FRR procedures and follows the slow path procedure of the prior implementation.

When the user enables the following command option for an RSVP LSP, as described in "Loop-Free Alternate calculation in the presence of IGP shortcuts" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide, the LSP is not used by LDP to tunnel an LDP FEC even when IGP shortcut is disabled but LDP-over-RSVP is enabled in IGP.
  • MD-CLI
    configure router mpls lsp igp-shortcut lfa-type lfa-only
  • classic CLI
    configure router mpls lsp igp-shortcut lfa-only

LDP FRR and RSVP shortcut (IGP shortcut)

When an RSVP LSP is used as a shortcut by IGP, it is included by SPF as a P2P link and can also be optionally advertised into the rest of the network by IGP. Thus, the SPF is able to use a tunneled next hop as the primary next hop for a specific prefix. LDP is also able to resolve a FEC to a tunneled next hop when the IGP shortcut feature is enabled.

When both IGP shortcut and LFA are enabled in IS-IS or OSPF, and LDP FRR is also enabled, then the following additional LDP FRR capabilities are supported:

  • A FEC which is resolved to a direct primary next hop can be backed up by an LFA tunneled next hop.

  • A FEC which is resolved to a tunneled primary next hop does not have an LFA next hop. It relies on RSVP FRR for protection.

The LFA SPF is extended to use IGP shortcuts as LFA next hops as described in "Loop-Free Alternate calculation in the presence of IGP shortcuts" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

IS-IS and OSPF support for Loop-Free Alternate calculation

See "OSPF and IS-IS support for LFA calculation" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

Loop-Free Alternate calculation in the presence of IGP shortcuts

See "Loop-Free Alternate calculation in the presence of IGP shortcuts" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

Loop-Free Alternate calculation for inter-area/inter-level prefixes

See "LFA calculation for inter-area and inter-level prefixes" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

LFA SPF Policies

A Loop-Free Alternate Shortest Path First (LFA SPF) policy allows the user to apply specific criteria, such as admin group and SRLG constraints, to the selection of a LFA backup next hop for a subset of prefixes that resolve to a specific primary next hop. For more information, see "Loop-Free Alternate Shortest Path First (LFA SPF) Policies" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

LDP FEC to BGP label route stitching

The stitching of an LDP FEC to a BGP labeled route allows the LDP capable PE devices to offer services to PE routers in other areas or domains without the need to support BGP labeled routes.

This feature is used in a large network to provide services across multiple areas or autonomous systems. Application of LDP to BGP FEC stitching shows a network with a core area and regional areas.

Figure 7. Application of LDP to BGP FEC stitching

Specific /32 routes in a regional area are not redistributed into the core area. Therefore, only nodes within a regional area and the ABR nodes in the same area exchange LDP FECs. A PE router, for example, PE21, in a regional area learns the reachability of PE routers in other regional areas by way of RFC 8277 BGP labeled routes redistributed by the remote ABR nodes by way of the core area. The remote ABR then sets next-hop self on the labeled routes before re-distributing them into the core area. The local ABR for PE21, for example, ABR3, may or may not set next-hop self when it re-distributes these labeled BGP routes from the core area to the local regional area.

When forwarding a service packet to the remote PE, PE21 inserts a VC label, the BGP route label to reach the remote PE, and an LDP label to reach either ABR3, if ABR3 sets next-hop self, or ABR1.

In the same network, an MPLS capable DSLAM also acts as PE router for VLL services and needs to establish a PW to a PE in a different regional area by way of router PE21, acting now as an LSR. To achieve that, PE21 is required to perform the following operations:

  • Translate the LDP FEC it learned from the DSLAM into a BGP labeled route and re-distribute it by way of Interior Border Gateway Protocol (IBGP) within its area. This is in addition to redistributing the FEC to its LDP neighbors in the same area.

  • Translate the BGP labeled routes it learns through IBGP into an LDP FEC and re-distribute it to its LDP neighbors in the same area. In the application in Application of LDP to BGP FEC stitching, the DSLAM requests the LDP FEC of the remote PE router using LDP Downstream on Demand (DoD).

  • When a packet is received from the DSLAM, PE21 swaps the LDP label into a BGP label and pushes the LDP label to reach ABR3 or ABR1. When a packet is received from ABR3, the top label is removed and the BGP label is swapped for the LDP label corresponding to the DSLAM FEC.

Configuration

Note: The no local-lsr-id or local-lsr-id system commands are synonymous and mean that there is no local LSR ID for a session. These commands apply to classic CLI only.
A community is assigned to an LDP session by configuring a community string in the corresponding session parameters for the peer or the targeted session peer template. A community only applies to a local LSR ID for a session for the following commands.
configure router ldp interface-parameters interface ipv4 local-lsr-id
configure router ldp interface-parameters interface ipv6 local-lsr-id
configure router ldp targeted-session peer local-lsr-id
configure router ldp targeted-session peer-template local-lsr-id
It is never applied to a system FEC or local static FEC. A system FEC or static FEC cannot have a community associated with it and is therefore not advertised over an LDP session with a configured community. Only a single community string can be configured for a session toward a specified peer or within a specified targeted peer template. The FEC advertised by the following commands is automatically put in the community configured on the session.
configure router ldp session-parameters peer adv-local-lsr-id
configure router ldp targeted-session peer-template adv-local-lsr-id

The specified community is only associated with IPv4 and IPv6 address FECs incoming or outgoing on the relevant session, and not to IPv4/IPv6 P2MP FECs, or service FECs incoming/outgoing on the session.

Static FECs are treated as having no community associated with them, even if they are also received over another session with an assigned community. A mismatch is declared if this situation arises.

Detailed LDP FEC resolution

When an LSR receives a FEC-label binding from an LDP neighbor for a specific FEC1 element, the following procedures are performed:

  1. LDP installs the FEC if:

    • It was able to perform a successful exact match or a longest match, if the following command is enabled in LDP, of the FEC /32 prefix with a prefix entry in the routing table.

      configure router ldp aggregate-prefix-match
    • The advertising LDP neighbor is the next hop to reach the FEC prefix.

  2. When such a FEC-label binding has been installed in the LDP FIB, LDP performs the following:

    1. Program push and swap NHLFE entries in the egress data path to forward packets to FEC1.

    2. Program the CPM tunnel table with a tunnel entry for the NHLFE.

    3. Advertise a new FEC-label binding for FEC1 to all its LDP neighbors according to the global and per-peer LDP prefix export policies.

    4. Install the ILM entry pointing to the swap NHLFE.

  3. When BGP learns the LDP FEC by way of the CPM tunnel table and the FEC prefix exists in the BGP route export policy, it performs the following:

    1. Originate a labeled BGP route for the same prefix with this node as the next hop and advertise it by way of IBGP to its BGP neighbors. For example, the local ABR/ASBR nodes, which have the following command enabled:

      • MD-CLI
        configure router bgp neighbor advertise-ldp-prefix
      • classic CLI
        configure router bgp group neighbor advertise-ldp-prefix
    2. Install the ILM entry pointing to the swap NHLFE programmed by LDP.

Detailed BGP labeled route resolution

When an LSR receives a BGP labeled route by way of IBGP for a specific /32 prefix, the following procedures are performed:

  1. BGP resolves and installs the route if an LDP LSP exists to the BGP neighbor (for example, the ABR or ASBR) that advertised the route and that is the next hop of the BGP labeled route.

  2. When the BGP route is installed, BGP does the following:

    1. pushes NHLFE in the egress data path to forward packets to this BGP labeled route

    2. programs the CPM tunnel table with a tunnel entry for the NHLFE

  3. When LDP learns the BGP labeled route from the CPM tunnel table and the prefix exists in the new LDP tunnel table route export policy, it does the following:

    1. Advertise a new LDP FEC-label binding for the same prefix to its LDP neighbors according to the global and per-peer LDP export prefix policies. If LDP already advertised a FEC for the same /32 prefix after receiving it from an LDP neighbor, then no action is required. For LDP neighbors that negotiated LDP Downstream on Demand (DoD), the FEC is advertised only when this node receives a Label Request message for this FEC from its neighbor.

    2. Install the ILM entry pointing to the BGP NHLFE if a new LDP FEC-label binding is advertised. If an ILM entry exists and points to an LDP NHLFE for the same prefix then no update to the ILM entry is performed. The LDP route is always preferred over the BGP labeled route.

The following command (in the LDP context) has no effect on LDP-to-BGP stitching except for one specific case as described below.
configure router ldp prefer-protocol-stitching
Typically BGP does not add a TTM entry if the BGP-LU route is not the most preferred route in RTM. Because a BGP-LU route cannot be used for LDP FEC resolution, there are no two TTM entries to choose from, so the command has no effect. However, it is possible to program BGP-LU tunnels for prefixes available in the IGP by blocking those prefixes from the label IPv4 RIB using the following command.
configure router bgp rib-management label-ipv4 route-table-import
In this case, the prefer-protocol-stitching command impacts the stitching and prefers stitching to BGP instead of LDP.
Note: The following BGP command, if set to a lower value than the IGP preference in the route table, overrides the IGP preference.
configure router bgp label-preference

When resolving a FEC, LDP prefers the RTM over the TTM when both resolutions are possible. That is, swapping the LDP ILM to an LDP NHLFE is preferred over stitching the LDP ILM to an SR NHLFE. This behavior can be overridden by enabling the prefer-protocol-stitching command in the LDP context, in which case LDP prefers stitching to the SR tunnel, even if an LDP tunnel exists. This capability interacts with SR-to-LDP stitching. When SR stitches to LDP, no SR tunnel entry is added to the TTM and the command has no effect.
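The preference logic described above can be reduced to a small decision function. This is an illustrative sketch of the documented behavior, with hypothetical names, not SR OS code:

```python
# Illustrative sketch: when both an RTM (LDP) resolution and a TTM (SR)
# tunnel exist for a FEC, LDP swaps to the LDP NHLFE by default and
# stitches to the SR tunnel only if prefer-protocol-stitching is set.

def choose_ilm_next_hop(ldp_nhlfe, sr_tunnel,
                        prefer_protocol_stitching=False):
    """Return ('swap', ldp_nhlfe), ('stitch', sr_tunnel), or None."""
    if ldp_nhlfe and sr_tunnel:
        if prefer_protocol_stitching:
            return ("stitch", sr_tunnel)   # stitch LDP ILM to SR tunnel
        return ("swap", ldp_nhlfe)         # default: RTM preferred
    if ldp_nhlfe:
        return ("swap", ldp_nhlfe)
    if sr_tunnel:
        return ("stitch", sr_tunnel)
    return None
```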

Data plane forwarding

When a packet is received from an LDP neighbor, the LSR swaps the LDP label into a BGP label and pushes the LDP label to reach the BGP neighbor, for example, ABR/ASBR, which advertised the BGP labeled route with itself as the next hop.

When a packet is received from a BGP neighbor such as an ABR/ASBR, the top label is removed and the BGP label is swapped for the LDP label to reach the next hop for the prefix.

LDP-SR stitching for IPv4 prefixes

This feature enables stitching between an LDP FEC and an SR node-SID route for the same IPv4 /32 prefix.

LDP-SR stitching configuration

The user enables the stitching between an LDP FEC and SR node-SID route for the same prefix by configuring the export of SR (LDP) tunnels from the CPM Tunnel Table Manager (TTM) into LDP (IGP).

In the LDP-to-SR data path direction, the existing tunnel table route export policy in LDP, which was introduced for LDP-BGP stitching, is enhanced to support the export of SR tunnels from the TTM to LDP. The user adds the following IS-IS or OSPF configuration information:
  • IS-IS (MD-CLI)
    configure policy-options policy-statement entry from protocol name isis
    configure policy-options policy-statement entry from protocol instance
  • IS-IS (classic CLI)
    configure router policy-options policy-statement entry from protocol isis instance
  • OSPF (MD-CLI)
    configure policy-options policy-statement entry from protocol name ospf
    configure policy-options policy-statement entry from protocol instance
  • OSPF (classic CLI)
    configure router policy-options policy-statement entry from protocol ospf instance
The preceding configuration information is added to the LDP tunnel table export policy using the following command.
configure router ldp export-tunnel-table

The user can restrict the export to LDP of SR tunnels from a specific prefix list. The user can also restrict the export to a specific IGP instance by optionally specifying the instance ID in the "from" statement.
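For example, the following classic CLI configuration (the policy name, prefix-list name, and prefix are illustrative) exports to LDP only the SR tunnels of IS-IS instance 0 whose destinations match a prefix list, and then applies the policy in LDP:

  configure router policy-options
      begin
      prefix-list "sr-loopbacks"
          prefix 10.20.1.0/24 longer
      exit
      policy-statement "export-sr-to-ldp"
          entry 10
              from
                  protocol isis instance 0
                  prefix-list "sr-loopbacks"
              exit
              action accept
              exit
          exit
      exit
      commit
  configure router ldp export-tunnel-table "export-sr-to-ldp"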

The "from protocol" statement has an effect only when the protocol value is IS-IS, OSPF, or BGP. Policy entries configured with any other value are ignored when the policy is applied. If the user configures multiple "from" statements in the same policy or does not include the "from" statement but adds a default accept action using the following command:
  • MD-CLI
    configure policy-options policy-statement default-action action-type accept
    
  • classic CLI
    configure router policy-options policy-statement default-action accept
In these cases, LDP follows the TTM selection rules described in "Segment Routing Tunnel Management" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide to select the tunnel to which it stitches the LDP ILM:
  1. LDP selects the tunnel from the lowest TTM preference protocol.

  2. If any two or all of the IS-IS, OSPF, and BGP protocols have the same preference, LDP uses the default TTM protocol preference to select the protocol.

  3. Within the same IGP protocol, LDP selects the lowest instance ID.

When this policy is enabled in LDP, LDP listens to SR tunnel entries in the TTM. If an LDP FEC primary next hop cannot be resolved using an RTM route but an SR tunnel of type SR IS-IS or SR OSPF to the same destination exists in the TTM, LDP programs an LDP ILM and stitches it to the SR node-SID tunnel endpoint. LDP also originates a FEC for the prefix and redistributes it to its LDP and T-LDP peers. The latter allows an LDP FEC that is tunneled over an RSVP-TE LSP to have its ILM stitched to an SR tunnel endpoint. When an LDP FEC is stitched to an SR tunnel, forwarded packets benefit from the protection provided by the LFA/remote LFA backup next hop of the SR tunnel.

When resolving a FEC, LDP prefers the RTM over the TTM when both resolutions are possible. That is, swapping the LDP ILM to an LDP NHLFE is preferred over stitching the LDP ILM to an SR NHLFE. This behavior can be overridden by enabling the prefer-protocol-stitching command in the LDP context, in which case LDP prefers stitching to the SR tunnel, even if an LDP tunnel exists. This capability interacts with SR-to-LDP stitching; when SR stitches to LDP, no SR tunnel entry is added to the TTM and the command has no effect.

Note: Forcing the stitching to SR affects forwarding at the LER and LSR roles. Typically, a specific prefix has a "push" and a "swap" binding for the LER and LSR roles, respectively. When prefer-protocol-stitching is enabled, the "swap" binding points to an SR tunnel and the "push" binding is removed. Services using the LDP tunnel should use the SR tunnel instead.

In the SR-to-LDP data path direction, the SR mapping server provides a global policy for prefixes corresponding to the LDP FECs the SR needs to stitch to. For more information, see "Segment Routing Mapping Server" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide. As a result, a tunnel table export policy is not required. Instead, you can export to an IGP instance the LDP tunnels for FEC prefixes advertised by the mapping server using the following commands:

configure router isis segment-routing export-tunnel-table ldp
configure router ospf segment-routing export-tunnel-table ldp

When this command is enabled in the segment-routing context of an IGP instance, IGP listens to LDP tunnel entries in the TTM. When a /32 LDP tunnel destination matches a prefix for which IGP has received a prefix-SID sub-TLV from a mapping server, IGP instructs the SR module to program the SR ILM and stitch it to the LDP tunnel endpoint. The SR ILM can stitch to an LDP FEC resolved over either link LDP or T-LDP. In the latter case, the stitching is performed to an LDP-over-RSVP tunnel. When an SR tunnel is stitched to an LDP FEC, forwarded packets benefit from the FRR protection of the LFA backup next hop of the LDP FEC.

When resolving a node SID, IGP prefers a prefix SID received in an IP Reach TLV over a prefix SID received via the mapping server. That is, swapping the SR ILM to an SR NHLFE is preferred over stitching it to an LDP tunnel endpoint. For more information about prefix SID resolution, see "Segment Routing Mapping Server Prefix SID Resolution" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.

Nokia recommends enabling the BFD option on the interfaces in both LDP and IGP instance contexts to speed up the failure detection and the activation of the LFA/remote-LFA backup next hop in either direction. This is particularly true if the injected failure is a remote failure.

This feature is limited to IPv4 /32 prefixes in both LDP and SR.

Stitching in the LDP-to-SR direction

Stitching in the data plane in the LDP-to-SR direction is based on the LDP module monitoring the TTM for an SR tunnel of a prefix matching an entry in the LDP TTM export policy.

Figure 8. Stitching in the LDP-to-SR direction

In Stitching in the LDP-to-SR direction, the boundary router R1 performs the following procedure to effect stitching:

  1. Router R1 is at the boundary between an SR domain and LDP domain and is configured to stitch between SR and LDP.
  2. Link R1-R2 is LDP-enabled, but router R2 does not support SR (or SR is disabled).
  3. Router R1 receives a prefix-SID sub-TLV in an IS-IS IP reachability TLV originated by router Ry for prefix Y.
  4. R1 resolves the prefix-SID and programs an NHLFE on the link toward the next hop in the SR domain. R1 programs an SR ILM and points it to this NHLFE.
  5. Because R1 is programmed to stitch LDP to SR, the LDP in R1 discovers in TTM the SR tunnel to Y. LDP programs an LDP ILM and points it to the SR tunnel. As a result, both the SR ILM and LDP ILM now point to the SR tunnel, one via the SR NHLFE and the other via the SR tunnel endpoint.
  6. R1 advertises the LDP FEC for prefix Y to all its LDP peers. R2 is now able to install an LDP tunnel toward Ry.
  7. If R1 finds multiple SR tunnels to destination prefix Y, it uses the following TTM tunnel selection rules to select the SR tunnel.
    1. R1 selects the tunnel from the lowest preference IGP protocol.
    2. If the protocols have the same preference, R1 selects the protocol using the default TTM protocol preference.
    3. Within the same IGP protocol, R1 uses the lowest instance ID to select the tunnel.
  8. If the user concurrently configured BGP, IS-IS, and OSPF "from protocol" statements (as follows) in the same LDP tunnel table export policy, or did not include the "from" statement but added a default action of accept, R1 uses the TTM tunnel selection rules to select the tunnel to destination prefix Y to which it stitches the LDP ILM.
    • MD-CLI
      configure policy-options policy-statement entry from protocol name {bgp | isis | ospf}
    • classic CLI
      configure router policy-options policy-statement entry from protocol {bgp | isis | ospf}
      
    The TTM tunnel selection rules are:
    1. R1 selects the tunnel from the lowest preference protocol.
    2. If any two or all of IS-IS, OSPF, and BGP protocols have the same preference, then R1 selects the protocol using the default TTM protocol preference.
    3. Within the same IGP protocol, R1 uses the lowest instance ID to select the tunnel.
    Note: If R1 has already resolved a LDP FEC for prefix Y, it has an ILM for it, but this ILM is not updated to point toward the SR tunnel. This is because LDP resolves in RTM first before going to TTM and, therefore, prefers the LDP tunnel over the SR tunnel. Similarly, if an LDP FEC is received after the stitching is programmed, the LDP ILM is updated to point to the LDP NHLFE because LDP can resolve the LDP FEC in RTM.
  9. The user enables SR in R2. R2 resolves the prefix SID for Y and installs the SR ILM and the SR NHLFE. R2 is now capable of forwarding packets over the SR tunnel to router Ry. No processing occurs in R1 because the SR ILM is already programmed.
  10. The user disables LDP on the interface R1-R2 (both directions), and the LDP FEC ILM and NHLFE are removed in R1. The same occurs in R2, which can then forward only over the SR tunnel toward Ry.

Stitching in the SR-to-LDP direction

Stitching in the data plane in the SR-to-LDP direction is based on the IGP monitoring the TTM for an LDP tunnel of a prefix matching an entry in the SR TTM export policy.

In stitching in the SR-to-LDP direction, the boundary router R1 performs the following procedure to effect stitching:

  1. Router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP.

    Link R1-R2 is LDP-enabled, but router R2 does not support SR (or SR is disabled).

  2. R1 receives an LDP FEC for prefix X owned by router Rx further down in the LDP domain.

    RTM in R1 shows that the interface to R2 is the next hop for prefix X.

  3. LDP in R1 resolves this FEC in RTM and creates an LDP ILM for it with, for example, ingress label L1, and points it to an LDP NHLFE toward R2 with egress label L2.
  4. Later on, R1 receives a prefix-SID sub-TLV from the mapping server R5 for prefix X.
  5. IGP in R1 resolves the next hop of prefix X in its routing table to the interface toward R2. R1 knows that R2 did not advertise support of segment routing and, therefore, SID resolution for prefix X in the routing table fails.
  6. IGP in R1 attempts to resolve the prefix SID of X in the TTM because it is configured to stitch SR-to-LDP. R1 finds an LDP tunnel to X in the TTM, instructs the SR module to program an SR ILM with ingress label L3, and points it to the LDP tunnel endpoint, consequently stitching ingress label L3 to egress label L2.
    Note:
    • Here, two ILMs, the LDP ILM and the SR ILM, point to the same LDP tunnel: one via the LDP NHLFE and one via the tunnel endpoint.

    • No SR tunnel to destination X should be programmed in TTM following this resolution step.

    • A trap is generated for prefix SID resolution failure only after IGP fails to complete both step 5 and step 6. The existing trap for prefix SID resolution failure is enhanced to state whether the prefix SID that failed resolution was part of a mapping server TLV or a prefix TLV.

  7. The user enables segment routing on R2.
  8. IGP in R1 discovers that R2 supports SR via the SR capability.

    Because R1 still has a prefix-SID for X from the mapping server R5, it maintains the stitching of the SR ILM for X to the LDP FEC unchanged.

  9. The operator disables the LDP interface between R1 and R2 (both directions) and the LDP FEC ILM and NHLFE for prefix X are removed in R1.
  10. This triggers the re-evaluation of the SIDs. R1 first attempts the resolution in the routing table and, because the next hop for X now supports SR, IGP instructs the SR module to program an NHLFE for the prefix SID of X with egress label L4 and the outgoing interface to R2. R1 installs an SR tunnel in the TTM for destination X. R1 also changes the SR ILM with ingress label L3 to point to the SR NHLFE with egress label L4.

    Router R2 now becomes the SR-LDP stitching router.

  11. Later, router Rx, which owns prefix X, is upgraded to support SR. R1 now receives a prefix-SID sub-TLV in an IS-IS or OSPF prefix TLV originated by Rx for prefix X. The SID information may or may not be the same as that received from the mapping server R5. In this case, IGP in R1 prefers the prefix SID originated by Rx and updates the SR ILM and NHLFE with the appropriate labels.
  12. Finally, the operator cleans up the mapping server and removes the mapping entry for prefix X, which then gets withdrawn by IS-IS.

LDP FRR LFA backup using SR tunnel for IPv4 prefixes

The user enables the use of an SR tunnel as a remote LFA or as a TI-LFA backup tunnel next hop by an LDP FEC via the following command.

configure router ldp fast-reroute backup-sr-tunnel

As a prerequisite, the user must enable the stitching of LDP and SR in the LDP-to-SR direction as described in LDP-SR stitching configuration. That is because the LSR must perform the stitching of the LDP ILM to SR tunnel when the primary LDP next hop of the FEC fails. Thus, LDP must listen to SR tunnels programmed by the IGP in TTM, but the mapping server feature is not required.
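For example, a minimal configuration combining the prerequisite and the feature might look as follows (the export policy name is illustrative and must correspond to a policy that accepts SR tunnels from the TTM):

  configure router isis loopfree-alternates remote-lfa
  configure router ldp export-tunnel-table "export-sr-to-ldp"
  configure router ldp fast-reroute backup-sr-tunnel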

Assume the backup-sr-tunnel command option is enabled in LDP and that remote-lfa or ti-lfa, or both, are enabled in the IGP instance:
  • MD-CLI
    configure router isis loopfree-alternate remote-lfa
    configure router ospf loopfree-alternate remote-lfa
    configure router isis loopfree-alternate ti-lfa 
    configure router ospf loopfree-alternate ti-lfa
  • classic CLI
    configure router isis loopfree-alternates remote-lfa
    configure router ospf loopfree-alternates remote-lfa
    configure router isis loopfree-alternates ti-lfa 
    configure router ospf loopfree-alternates ti-lfa
    
and that LDP was able to resolve the primary next hop of the LDP FEC in RTM. IGP SPF runs both the base LFA and TI-LFA algorithms and, if it does not find a backup next hop for the prefix of an LDP FEC, also runs the remote LFA algorithm. If IGP finds a TI-LFA or remote LFA tunnel next hop, LDP programs the primary next hop of the FEC using an LDP NHLFE and programs the LFA backup next hop using an LDP NHLFE pointing to the SR tunnel endpoint.
Note: The LDP packet is not "tunneled" over the SR tunnel. The LDP label is actually stitched to the segment routing label stack. LDP points both the LDP ILM and the LTN to the backup LDP NHLFE, which itself uses the SR tunnel endpoint.

The behavior of the feature is similar to the LDP-to-SR stitching feature described in the LDP-SR stitching for IPv4 prefixes section, except the behavior is augmented to allow the stitching of an LDP ILM/LTN to an SR tunnel for the LDP FEC backup NHLFE when the primary LDP NHLFE fails.

The following is the behavior of this feature:

  • When LDP resolves a primary next hop in RTM and a TI-LFA or a remote LFA backup next hop using SR tunnel in TTM, LDP programs a primary LDP NHLFE as usual and a backup LDP NHLFE pointing to the SR tunnel, which has the TI-LFA or remote LFA backup for the same prefix.

  • If the LDP FEC primary next hop fails and LDP has pre-programmed a TI-LFA or remote LFA next hop with an LDP backup NHLFE pointing to the SR tunnel, the LDP ILM/LTN switches to it.

    Note: If, for some reason, the failure impacted only the LDP tunnel primary next hop but not the SR tunnel primary next hop, the LDP backup NHLFE effectively points to the primary next hop of the SR tunnel and traffic of the LDP ILM/LTN follows this path instead of the TI-LFA or remote LFA next hop of the SR tunnel until the latter is activated.
  • If the LDP FEC primary next hop becomes unresolved in RTM, LDP switches the resolution to an SR tunnel in TTM, if one exists, as per the LDP-to-SR stitching procedures described in Stitching in the LDP-to-SR direction.

  • If both the LDP primary next hop and a regular LFA next hop become resolved in RTM, the LDP FEC programs the primary and backup NHLFEs as usual.

  • Nokia recommends enabling the bfd-enable command option on the interfaces in both the LDP and IGP instance contexts to speed up failure detection and the activation of the LFA/TI-LFA/remote LFA backup next hop in either direction.

LDP Remote LFA

LDP Remote LFA (rLFA) builds on the pre-existing capability to compute repair paths to a remote LFA node (or PQ node), which puts the packets onto the shortest path without looping them back to the node that forwarded them over the repair tunnel. See "Remote LFA with Segment Routing" in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide for more information about rLFA computation. In SR OS, a repair tunnel can also be an SR tunnel; however, this section describes an LDP-in-LDP tunnel.

As a prerequisite for LDP rLFA configuration, enable Remote LFA computation using the following commands:
  • MD-CLI
    configure router isis loopfree-alternate remote-lfa
    configure router ospf loopfree-alternate remote-lfa
    
  • classic CLI
    configure router isis loopfree-alternates remote-lfa
    configure router ospf loopfree-alternates remote-lfa
Enable attaching rLFA information to RTM entries using the following commands:
  • MD-CLI
    configure router isis loopfree-alternate augment-route-table
    configure router ospf loopfree-alternate augment-route-table
  • classic CLI
    configure router isis loopfree-alternates augment-route-table
    configure router ospf loopfree-alternates augment-route-table

These commands attach rLFA-specific information to route entries that are necessary for LDP to program repair tunnels toward the PQ node using a specific neighbor.

Finally, enable tunneling on both the PQ node and the source node using the following command.

configure router ldp targeted-session peer tunneling

The following figure shows the general principles of LDP rLFA operation.

Figure 9. General principles of LDP rLFA operation

In the preceding figure, S is the source node and D is the destination node. The primary path is the direct link between S and D. The rLFA algorithm has determined the PQ node. In the event of a failure between S and D, for traffic not to loop back to S, the traffic must be sent directly to the PQ node. An LDP targeted session is required between PQ and S. Over that T-LDP session, the PQ node advertises label 23 for FEC D. All other labels are link LDP bindings, which allow traffic to reach the PQ node. On S, LDP creates an NHLFE that has two labels, where label 23 is the inner label. Label 23 is tunneled up to the PQ node, which then forwards traffic on the shortest path to D.
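For example, with illustrative LSR IDs of 10.0.0.1 for the source node S and 10.0.0.5 for the PQ node, the manual configuration consists of a targeted session on each node with tunneling enabled:

On the source node S:
  configure router ldp targeted-session peer 10.0.0.5 tunneling
On the PQ node:
  configure router ldp targeted-session peer 10.0.0.1 tunneling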

Note: LDP rLFA applies to IPv4 FECs only. LDP rLFA requires the targeted sessions (between the source node and the PQ node) to be manually configured beforehand; the system does not automatically set up T-LDP sessions toward the PQ nodes that the rLFA algorithm has identified. These targeted sessions must be set up with router IDs that match the ones the rLFA algorithm uses. LDP rLFA is designed to be operated in LDP-only environments; therefore, LDP does not establish rLFA backups in the presence of LDP-over-RSVP-TE or LDP-over-SR-TE tunnels. The following OAM command is not supported over the repair tunnels.
oam lsp-trace

Automatic LDP rLFA

The manual LDP rLFA configuration method requires the user to specify beforehand, on each node, the list of peers with which a targeted session is established. See LDP Remote LFA for information about the rLFA LDP tunneling technology, and how to configure LDP to establish targeted sessions.

This section describes the automatic LDP rLFA mechanisms used to establish targeted LDP sessions without the need to specify, on each node, the list of peers with which the targeted sessions must be established. The automatic LDP rLFA method considerably reduces the overall configuration and increases flexibility.

The basic principles of operation for the automatic LDP rLFA capability are described in LDP Remote LFA. In the example shown in General principles of LDP rLFA operation, considering a failure on the shortest path between S and D nodes, S needs a targeted LDP session toward the PQ node to learn the label-binding information configured on PQ node for FEC D. As a prerequisite, the LFA algorithm has run successfully and the PQ node information is attached to the route entries used by LDP.

Enable remote LFA computation using the following command:

  • MD-CLI
    configure router isis loopfree-alternate remote-lfa
  • classic CLI
    configure router isis loopfree-alternates remote-lfa
    

Enable attaching rLFA information to RTM entries using the following command:

  • MD-CLI
    configure router isis loopfree-alternate augment-route-table
  • classic CLI
    configure router isis loopfree-alternates augment-route-table
    

In the General principles of LDP rLFA operation scenario, the S node requires the T-LDP session and therefore initiates the session request; the PQ node receives the request. The S node configuration is as follows.

MD-CLI

[ex:/configure router "Base" ldp targeted-session auto-tx ipv4]
A:admin@node-2# info
    admin-state enable
    tunneling false

classic CLI

A:node-2>config>router>ldp>targ-session>auto-tx>ipv4# info
----------------------------------------------
                        no shutdown
----------------------------------------------

And PQ node configuration is as follows:

MD-CLI

[ex:/configure router "Base" ldp targeted-session auto-tx ipv4]
A:admin@node-2# info
    admin-state enable
    tunneling true

classic CLI

A:node-2>config>router>ldp>targ-session>auto-tx>ipv4# info
----------------------------------------------
                        tunneling
                        no shutdown
----------------------------------------------

Based on the preceding configurations, the S node, using the PQ node information attached to the route entries, automatically starts sending LDP targeted Hello messages to the PQ node. The PQ node accepts them and the T-LDP session is established. As in the case of manual LDP rLFA, enabling tunneling at the PQ node is required for PQ to send to S the label that it has bound to FEC D. With this configuration, if a change in the network topology changes the PQ node of S for FEC D, S automatically tears down the session to the previous PQ node and establishes a new one toward the new PQ node.

Note: It is not possible to configure command options specifically for automatic T-LDP sessions. The system inherits command options, either those defined for the IPv4 family (under targeted-session) or the default command options of the system. This applies to the following configurations.
configure router ldp targeted-session ipv4 hello
configure router ldp targeted-session ipv4 hello-reduction
configure router ldp targeted-session ipv4 keepalive
Also, the automatic T-LDP session can use parameters defined for the following configuration if the specified address is the router ID of the peer.
configure router ldp tcp-session-parameters peer-transport
In typical network deployments, each node is potentially the source node as well as the PQ node of a source node for a specific destination FEC. Therefore, all nodes may have both auto-tx and auto-rx configured and enabled as follows:
configure router ldp targeted-session auto-tx
configure router ldp targeted-session auto-rx
Nodes may also have other configurations defined (for example, peer, peer-template, and so on).

There are several implications of having multiple configurations (explicit or implicit) for a peer.

One implication is that LDP operates using precedence levels. When a targeted session is established with a peer, LDP uses the session parameters with the highest precedence. The order of precedence is as follows (highest to lowest):

  • peer

  • template

  • auto-tx

  • auto-rx

  • sdp

Consider the case where a T-LDP session is needed between nodes A (source) and B (PQ node). If A has auto-tx enabled and a per-peer configuration for B also exists, A establishes the session using the parameters defined in the per-peer configuration for B, instead of those defined under auto-tx. The same applies on B. However, if B uses the per-peer configuration for A and that configuration does not enable tunneling, LDP rLFA does not work because the PQ node does not tunnel the FEC/label bindings. This mechanism also applies between auto-tx and auto-rx.

In a typical scenario in which the auto-tx and auto-rx modes are both enabled on a node that acts as the PQ node, and the node chooses the auto-tx configuration for the T-LDP session (because it has higher precedence than auto-rx), LDP rLFA only works if tunneling is enabled under auto-tx. The configuration from which the session command options are taken is indicated by the "creator" label in the output of the following command.
show router ldp targ-peer detail

Another implication is that redundant T-LDP sessions may remain up after a topology change when they are no longer required. The following clear command enables the user to delete these redundant T-LDP sessions.

clear router ldp targeted-auto-rx hold-time

The operator must run the command during a specific time window on all nodes on which auto-rx is configured. The hold-time value should be greater than the hello-timer value plus the time required to run the clear command on all applicable nodes. A system check verifies that a non-zero value is configured; no other checks are enforced. It is the responsibility of the operator to ensure that the configured non-zero value is long enough to meet the preceding criterion.
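For example, with a targeted Hello timer of 45 seconds and an expected few minutes to run the clear command on all applicable nodes, a value such as the following (illustrative) leaves sufficient margin:

  clear router ldp targeted-auto-rx hold-time 300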

While the hold timer for the clear command is in progress, the remaining timeout value can be displayed using the following command.
tools dump router ldp timers

The clear command is not synchronized to the standby CPM. If the user runs the clear command with a large hold-time value and a CPM switchover occurs during that time, the operator must restart the clear on the newly active CPM.

Note: The following considerations apply when configuring automatic LDP rLFA:
  • automatic LDP rLFA works with IS-IS only
  • only IPv4 FECs are supported
  • local-lsr-id configuration and templates are not supported
  • lsp-trace on the backup path is not supported

Automatic creation of a targeted Hello adjacency and LDP session

This feature enables the automatic creation of a targeted Hello adjacency and LDP session to a discovered peer.

Feature configuration

The user first creates a targeted LDP session peer parameter template by using the following command.

configure router ldp targeted-session peer-template

Inside the template the user configures the common T-LDP session command options shared by all peers using this template with the following commands:

  • MD-CLI
    configure router ldp targeted-session peer-template bfd-liveness
    configure router ldp targeted-session peer-template hello
    configure router ldp targeted-session peer-template hello-reduction
    configure router ldp targeted-session peer-template keepalive
    configure router ldp targeted-session peer-template local-lsr-id
    configure router ldp targeted-session peer-template tunneling
  • classic CLI
    configure router ldp targeted-session peer-template bfd-enable
    configure router ldp targeted-session peer-template hello
    configure router ldp targeted-session peer-template hello-reduction
    configure router ldp targeted-session peer-template keepalive
    configure router ldp targeted-session peer-template local-lsr-id
    configure router ldp targeted-session peer-template tunneling

The tunneling option does not support adding explicit RSVP LSP names. LDP selects RSVP LSP for an endpoint in LDP-over-RSVP directly from the Tunnel Table Manager (TTM).

Then the user references a peer prefix list, defined inside a policy statement in the global policy manager, using the following commands:

  • MD-CLI
    configure router ldp targeted-session peer-template-map template-map-name
    configure router ldp targeted-session peer-template-map policy-map
  • classic CLI
    configure router ldp targeted-session peer-template-map peer-template policy

Each application of a targeted session template to a specific prefix in the prefix list results in the establishment of a targeted Hello adjacency to an LDP peer using the template parameters as long as the prefix corresponds to a router-id for a node in the TE database. The targeted Hello adjacency either triggers a new LDP session or is associated with an existing LDP session to that peer.
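For example, the following configuration (the template and policy names are illustrative, and the policy must reference the peer prefix list) creates a template with tunneling enabled and maps it to a prefix policy:

  configure router ldp targeted-session peer-template "t-ldp-auto" tunneling
  configure router ldp targeted-session peer-template-map peer-template "t-ldp-auto" policy "te-peers"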

Up to five peer prefix policies can be associated with a single peer template at any time. Also, the user can associate multiple templates with the same or different peer prefix policies, so multiple templates can match a specific peer prefix. In all cases, the targeted session parameters applied to a specific peer prefix are taken from the template that the user created first. This provides deterministic behavior regardless of the order in which the templates are associated with the prefix policies.

Each time the user executes the above command, with the same or different prefix policy associations, or the user changes a prefix policy associated with a targeted peer template, the system re-evaluates the prefix policy. The outcome of the re-evaluation tells LDP if an existing targeted Hello adjacency needs to be torn down or if an existing targeted Hello adjacency needs to have its parameters updated on the fly.

If a /32 prefix is added to (removed from) or if a prefix range is expanded (shrunk) in a prefix list associated with a targeted peer template, the same prefix policy re-evaluation described above is performed.

The template comes up in the enabled state and therefore it takes effect immediately. After a template is in use, the user can change any of the parameters on the fly without shutting down the template. In this case, all targeted Hello adjacencies are updated.

Feature behavior

Whether the prefix list contains one or more specific /32 addresses or a range of addresses, an external trigger is required to indicate to LDP to instantiate a targeted Hello adjacency to a node whose address matches an entry in the prefix list. The objective of the feature is the automatic creation of a T-LDP session to the same destination as an auto-created RSVP LSP, to achieve automatic tunneling of LDP-over-RSVP. The external trigger occurs when the router with the matching address appears in the Traffic Engineering (TE) database; an external module monitoring the TE database for the peer prefixes provides the trigger to LDP. As a result, the user must enable traffic engineering in IS-IS or OSPF using one of the following commands.

configure router isis traffic-engineering
configure router ospf traffic-engineering

Each mapping of a targeted session peer parameter template to a policy prefix that exists in the TE database results in LDP establishing a targeted Hello adjacency to this peer address using the targeted session parameters configured in the template. This Hello adjacency is then either associated with an existing LDP session to the peer or triggers the establishment of a new targeted LDP session to the peer.

The SR OS supports multiple ways of establishing a targeted Hello adjacency to a peer LSR:

  • User configuration of the peer with the targeted session command options inherited from the following top-level context.
    configure router ldp targeted-session ipv4
  • User configuration of the peer with the targeted session command options explicitly configured for this peer in the following context, which overrides the top-level command options shared by all targeted peers.
    configure router ldp targeted-session peer
    The top-level configuration context is referred to as the global context. Some command options exist only in the global context; their values are always inherited by all targeted peers regardless of which event triggered the adjacency.
  • User configuration of an SDP of any type to a peer with the following command enabled (default configuration). In this case the targeted session command option values are taken from the global context.
    configure service sdp signaling tldp
  • User configuration of a (FEC 129) PW template binding in a BGP-VPLS service. In this case the targeted session parameter values are taken from the global context.

  • User configuration of a (FEC 129 type II) PW template binding in a VLL service (dynamic multi-segment PW). In this case the target session parameter values are taken from the global context.

  • User configuration of a mapping of a targeted session peer parameter template to a prefix policy when the peer address exists in the TE database. In this case, the targeted session command option values are taken from the template.

  • Features using an LDP LSP, which itself is tunneled over an RSVP LSP (LDP-over-RSVP), as a shortcut do not automatically trigger the creation of the targeted Hello adjacency and LDP session to the destination of the RSVP LSP. The user must manually configure the peer command options or configure a mapping of a targeted session peer parameter template to a prefix policy. These features are the following:

    • BGP shortcut

      configure router bgp next-hop-resolution shortcut-tunnel
    • IGP shortcut

      configure router isis igp-shortcut  
      configure router ospf igp-shortcut
      configure router ospf3 igp-shortcut
      
    • LDP shortcut for IGP routes

      • MD-CLI
        configure router ldp ldp-shortcut
      • classic CLI
        configure router ldp-shortcut
    • static route LDP shortcut (ldp option in a static route)

      • MD-CLI
        configure router static-routes route indirect tunnel-next-hop resolution-filter ldp
      • classic CLI
        configure router static-route-entry indirect tunnel-next-hop resolution-filter ldp
    • VPRN service

      configure service vprn bgp-ipvpn mpls auto-bind-tunnel resolution-filter ldp
      configure service vprn bgp-evpn mpls auto-bind-tunnel resolution-filter ldp
Because the above triggering events can occur simultaneously or in any arbitrary order, the LDP code implements a priority handling mechanism to decide which event overrides the active targeted session parameters. The overriding trigger becomes the owner of the targeted adjacency to a specific peer and is displayed using the following command.
show router ldp targ-peer

Targeted LDP adjacency triggering events and priority summarizes the triggering events and the associated priority.

Table 1. Targeted LDP adjacency triggering events and priority

Triggering event | Automatic creation of targeted Hello adjacency | Active targeted adjacency parameter override priority
Manual configuration of peer parameters (creator=manual) | Yes | 1
Mapping of targeted session template to prefix policy (creator=template) | Yes | 2
Manual configuration of SDP with signaling tldp option enabled (creator=service manager) | Yes | 3
PW template binding in BGP-AD VPLS (creator=service manager) | Yes | 3
PW template binding in FEC 129 VLL (creator=service manager) | Yes | 3
LDP-over-RSVP as a BGP/IGP/LDP/Static shortcut | No | (not applicable)
LDP-over-RSVP in VPRN auto-bind | No | (not applicable)
LDP-over-RSVP in BGP Label Route resolution | No | (not applicable)
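The arbitration summarized in Table 1 can be sketched as a minimal priority comparison; the creator names and priorities follow the table, but the code itself is an illustrative model, not SR OS internals:

```python
# Lower number = higher override priority, per Table 1. Triggers that do
# not auto-create an adjacency (the LDP-over-RSVP rows) carry no priority.
PRIORITY = {
    "manual": 1,           # manual configuration of peer parameters
    "template": 2,         # targeted session template mapped to a prefix policy
    "service manager": 3,  # SDP signaling tldp, BGP-AD VPLS / FEC 129 PW bindings
}

def adjacency_owner(active_triggers):
    """Return the creator that owns the targeted adjacency, or None
    if no active trigger auto-creates one."""
    eligible = [t for t in active_triggers if t in PRIORITY]
    return min(eligible, key=PRIORITY.get) if eligible else None
```

For example, when a peer has both an SDP-triggered adjacency and a template mapping, adjacency_owner(["service manager", "template"]) returns "template", matching the creator displayed by show router ldp targ-peer.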

Any parameter value change to an active targeted Hello adjacency caused by any of the above triggering events is performed by having LDP immediately send a Hello message with the new parameters to the peer without waiting for the next scheduled time for the Hello message. This allows the peer to adjust its local state machine immediately and maintains both the Hello adjacency and the LDP session in UP state. The only exceptions are the following:

  • The triggering event caused a change to the local-lsr-id value. In this case, the Hello adjacency is brought down which also causes the LDP session to be brought down if this is the last Hello adjacency associated with the session. A new Hello adjacency and LDP session then get established to the peer using the new value of the local LSR ID.

  • The triggering event caused the targeted peer to become disabled. In this case, the Hello adjacency is brought down which also causes the LDP session to be brought down if this is the last Hello adjacency associated with the session.

Finally, the value of any LDP parameter which is specific to the LDP/TCP session to a peer is inherited from the following context.
configure router ldp session-parameters peer
This includes MD5 authentication, LDP prefix per-peer policies, label distribution mode (DU or DOD), and so on.

Multicast P2MP LDP for GRT

The P2MP LDP LSP setup is initiated by each leaf node of the multicast tree. A leaf PE node learns to initiate a multicast tree setup from a client application and sends a label map upstream toward the root node of the multicast tree. As the label map propagates, intermediate nodes that are common to the path of multiple leaf nodes become branch nodes of the tree.

Video distribution using P2MP LDP illustrates wholesale video distribution over a P2MP LDP LSP. Static IGMP entries on the edge are bound to a P2MP LDP LSP tunnel interface for multicast video traffic distribution.

Figure 10. Video distribution using P2MP LDP

LDP P2MP support

LDP P2MP configuration

A node running LDP also supports P2MP LSP setup using LDP. By default, it advertises the capability to a peer node using the P2MP capability TLV in the LDP initialization message.

A per-interface configuration option is provided to restrict or allow the use of an interface in LDP multicast traffic forwarding toward a downstream node. The interface configuration option does not restrict or allow the exchange of P2MP FECs over an established session to the peer on an interface; it only restricts or allows the use of next hops over the interface.

LDP P2MP protocol

Only a single generic identifier range is defined for signaling a multipoint tree for all client applications. The implementation on the 7750 SR or 7950 XRS reserves the range (1..8292) of the generic LSP P2MP-ID on the root node for static P2MP LSPs.

MBB

When a transit or leaf node detects that the upstream node toward the root node of the multicast tree has changed, it follows a graceful procedure that allows a make-before-break transition to the new upstream node. Make-before-break support is optional. If the new upstream node does not support MBB procedures, the downstream node waits for the configured timer before switching over to the new upstream node.

Inter-AS non-segmented mLDP

This feature allows multicast services to use segmented protocols and span them over multiple autonomous systems (ASs), as in unicast services. As IP VPN or GRT services span multiple IGP areas or multiple ASs, either because of a network designed to deal with scale or as a result of commercial acquisitions, operators may require inter-AS VPN (unicast) connectivity. For example, an inter-AS VPN can break the IGP, MPLS, and BGP protocols into access segments and core segments, allowing higher scaling of protocols by segmenting them into their own islands. SR OS allows for similar provision of multicast services and for spanning these services over multiple IGP areas or multiple ASs.

mLDP supports non-segmented mLDP trees for inter-AS solutions. This applies to multicast services in the GRT (Global Routing Table) that need to traverse mLDP point-to-multipoint tunnels, as well as to NG-MVPN services.

In-band signaling with non-segmented mLDP trees in GRT

mLDP can be used to transport multicast in GRT. For mLDP LSPs to be generated, a multicast request from the leaf node is required to force mLDP to generate a downstream unsolicited (DU) FEC toward the root to build the P2MP LSPs.

For inter-AS solutions, the root may not be in the RTM of the leaf node or, if it is present, it is installed using BGP with ASBRs acting as the local AS root of the leaf. Therefore, the intermediate routers in the leaf's local AS may not know the path to the root.

Control protocols used for constructing P2MP LSPs contain a field that identifies the address of a root node. Intermediate nodes are expected to be able to look up that address in their routing tables; however, this is not possible if the route to the root node is a BGP route and the intermediate nodes are part of a BGP-free core (for example, if they use IGP).

To enable an mLDP LSP to be constructed through a BGP-free segment, the root node address is temporarily replaced by an address that is known to the intermediate nodes and is on the path to the true root node. For example, Inter-AS Option C shows the procedure when the PE-2 (leaf) receives the route for the root through ASBR-3. This route resolves the root with ASBR-3 as the next hop. The leaf, in this case, generates an LDP FEC which has an opaque value and has the root address set to ASBR-3. This opaque value carries the additional information needed to reach the root from ASBR-3. As a result, the SR core AS3 only needs to be able to resolve the local AS ASBR-3 for the LDP FEC. ASBR-3 uses the LDP FEC opaque value to find the path to the root.

Figure 12. Inter-AS Option C

Because non-segmented d-mLDP requires end-to-end mLDP signaling, the ASBRs support both mLDP and BGP signaling between them.

LDP recursive FEC process

For inter-AS networks where the leaf node does not have the root in the RTM or where the leaf node has the root in the RTM using BGP, and the leaf’s local AS intermediate nodes do not have the root in their RTM because they are not BGP-enabled, RFC 6512 defines a recursive opaque value and procedure for LDP to build an LSP through multiple ASs.

For mLDP to be able to signal through a multiple-AS network where the intermediate nodes do not have a routing path to the root, a recursive opaque value is needed. The LDP FEC root resolves the local ASBR, and the recursive opaque value contains the P2MP FEC element, encoded as specified in RFC 6513, with a type field, a length field, and a value field of its own.

RFC 6826 section 3 defines the Transit IPv4 opaque for P2MP LDP FEC, where the leaf in the local AS wants to establish an LSP to the root for P2MP LSP. mLDP FEC for single AS with transit IPv4 opaque shows this FEC representation.

Figure 13. mLDP FEC for single AS with transit IPv4 opaque

mLDP FEC for inter-AS with recursive opaque value shows an inter-AS FEC with recursive opaque based on RFC 6512.

Figure 14. mLDP FEC for inter-AS with recursive opaque value

As shown in the preceding figure, the root "10.0.0.21" is an ASBR and the opaque value contains the original mLDP FEC. As such, in the AS of the leaf where the actual root "10.0.0.14" is not known, the LDP FEC can be routed using the local root of the ASBR. When the FEC arrives at an ASBR that is colocated in the same AS as the actual root, an LDP FEC with transit IPv4 opaque is generated. The end-to-end picture for inter-AS mLDP for non-VPN multicast is shown in Non-VPN mLDP with recursive opaque for inter-AS.

Figure 15. Non-VPN mLDP with recursive opaque for inter-AS

As shown in the preceding figure, the leaf is in AS3, where the AS3 intermediate nodes do not have ROOT-1 in their RTM. The leaf has S1 installed in the RTM via BGP. All ASBRs are acting as next-hop-self in the BGP domain. The leaf, resolving S1 via BGP, generates an mLDP FEC with recursive opaque, represented as:

Leaf FEC: <Root=ASBR-3, opaque-value=<Root=Root-1, <opaque-value = S1,G1>>>

This FEC is routed through the AS3 Core to ASBR-3.

Note: AS3 intermediate nodes do not have ROOT-1 in their RTM; that is, they are not BGP-capable.

At ASBR-3 the FEC is changed to:

ASBR-3 FEC: <Root=ASBR-1, opaque-value=<Root=Root-1, <opaque-value = S1,G1>>>

This FEC is routed from ASBR-3 to ASBR-1. ASBR-1 is colocated in the same AS as ROOT-1; therefore, ASBR-1 does not need a FEC with a recursive opaque value.

ASBR-1 FEC: <Root=Root-1, <opaque-value =S1,G1>>

This process allows all multicast services to work over inter-AS networks. All d-mLDP opaque types can be used in a FEC with a recursive opaque value.
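The per-hop FEC rewriting described above can be sketched as follows. This is an illustrative model: real mLDP FECs are TLV-encoded per RFC 6512, and plain tuples stand in for them here.

```python
# FECs are modeled as (root, opaque) tuples; a recursive opaque is itself
# an (actual_root, (S, G)) tuple, per the examples in this section.

def leaf_fec(local_asbr, actual_root, sg):
    # The leaf cannot resolve the actual root, so it wraps the basic FEC
    # in a recursive opaque rooted at its local ASBR (e.g. ASBR-3).
    return (local_asbr, (actual_root, sg))

def asbr_rewrite(fec, upstream_asbr, local_rtm):
    # An ASBR swaps the outer root for its own upstream ASBR, unless the
    # actual root is resolvable in its own AS, in which case it unwraps
    # the recursive opaque and forwards the basic FEC (e.g. at ASBR-1).
    _outer_root, inner = fec
    actual_root, sg = inner
    if actual_root in local_rtm:
        return (actual_root, sg)
    return (upstream_asbr, inner)

fec = leaf_fec("ASBR-3", "Root-1", ("S1", "G1"))
fec = asbr_rewrite(fec, "ASBR-1", local_rtm=set())   # at ASBR-3
fec = asbr_rewrite(fec, None, local_rtm={"Root-1"})  # at ASBR-1
```

After the last step the FEC is the basic ("Root-1", ("S1", "G1")), matching the ASBR-1 FEC shown above.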

Supported recursive opaque values

A recursive FEC is built using the Recursive Opaque Value and VPN-Recursive Opaque Value types (opaque values 7 and 8 respectively). All SR non-recursive opaque values can be recursively embedded into a recursive opaque.

The following table lists all supported opaque values in SR OS.

Table 2. Opaque types supported by SR OS

Opaque type | Opaque name | RFC | SR OS use | FEC representation
1 | Generic LSP Identifier | RFC 6388 | VPRN Local AS | <Root, Opaque<P2MPID>>
3 | Transit IPv4 Source TLV Type | RFC 6826 | IPv4 multicast over mLDP in GRT | <Root, Opaque<SourceIPv4, GroupIPv4>>
4 | Transit IPv6 Source TLV Type | RFC 6826 | IPv6 multicast over mLDP in GRT | <Root, Opaque<SourceIPv6, GroupIPv6>>
7 | Recursive Opaque Value | RFC 6512 | Inter-AS IPv4 multicast over mLDP in GRT | <ASBR, Opaque<Root, Opaque<SourceIPv4, GroupIPv4>>>
7 | Recursive Opaque Value | RFC 6512 | Inter-AS IPv6 multicast over mLDP in GRT | <ASBR, Opaque<Root, Opaque<SourceIPv6, GroupIPv6>>>
7 | Recursive Opaque Value | RFC 6512 | Inter-AS Option C MVPN over mLDP | <ASBR, Opaque<Root, Opaque<P2MPID>>>
8 | VPN-Recursive Opaque Value | RFC 6512 | Inter-AS Option B MVPN over mLDP | <ASBR, Opaque<RD, Root, P2MPID>>
250 | Transit VPNv4 Source TLV Type | RFC 7246 | In-band signaling for VPRN | <Root, Opaque<SourceIPv4 or RPA, GroupIPv4, RD>>
251 | Transit VPNv6 Source TLV Type | RFC 7246 | In-band signaling for VPRN | <Root, Opaque<SourceIPv6 or RPA, GroupIPv6, RD>>

Optimized Option C and basic FEC generation for inter-AS

Not all leaf nodes can support label routes or recursive opaque FECs, so the recursive opaque functionality can be transferred from the leaf to the ASBR, as shown in Optimized Option C — leaf router not responsible for recursive FEC.

Figure 16. Optimized Option C — leaf router not responsible for recursive FEC

In Optimized Option C — leaf router not responsible for recursive FEC, the root advertises its unicast routes to ASBR-3 using IGP, and the ASBR-3 advertises these routes to ASBR-1 using label-BGP. ASBR-1 can redistribute these routes to IGP with next-hop ASBR-1. The leaf resolves the actual root 10.0.0.14 using IGP and creates a type 1 opaque value <Root 10.0.0.14, Opaque <8193>> to ASBR-1. In addition, all P routers in AS 2 know how to resolve the actual root because of BGP-to-IGP redistribution within AS 2.

ASBR-1 resolves the actual root 10.0.0.14 via BGP, and creates a recursive type 7 opaque value <Root 10.0.0.2, Opaque <10.0.0.14, 8193>>.

Basic opaque generation when root PE is resolved using BGP

For inter-AS or intra-AS MVPN, the root PE (the PE on which the source resides) loopback IP address is usually not advertised into each AS or area. As such, the P routers in the ASs or areas that the root PE is not part of are not able to resolve the root PE loopback IP address. To resolve this issue, the leaf PE, which has visibility of the root PE loopback IP address using BGP, creates a recursive opaque with an outer root address of the local ASBR or ABR and an inner recursive opaque of the actual root PE.

Some non-Nokia routers do not support recursive opaque FEC when the root node loopback IP address is resolved using IBGP or EBGP. These routers accept and generate a basic opaque type. In such cases, there should not be any P routers between a leaf PE and ASBR or ABR, or any P routers between ASBR or ABR and the upstream ASBR or ABR. Example AS shows an example of this situation.

Figure 17. Example AS

In Example AS, the leaf HL1 is directly attached to ABR HL2, and ABR HL2 is directly attached to ABR HL3. In this case, it is possible to generate a non-recursive opaque because there is no intervening P router that cannot resolve the root PE loopback IP address. All elements are BGP-speaking and have received the root PE loopback IP address via IBGP or EBGP.

In addition, SR OS can be prevented from generating a recursive FEC. The following global command disables recursive opaque FEC generation when the provider needs basic opaque FEC generation on the node.
configure router ldp generate-basic-fec-only
In Example AS, the basic non-recursive FEC is generated even if the root node HL6 is resolved via BGP (IBGP or EBGP).

Currently, when the root node HL6 systemIP is resolved via BGP, a recursive FEC is generated by the leaf node HL1:

HL1 FEC = <HL2, <HL6, OPAQUE>>

When the generate-basic-fec-only command is enabled on the leaf node or any ABR, they generate a basic non-recursive FEC:

HL1 FEC = <HL6, OPAQUE>

When this FEC arrives at HL2, if the generate-basic-fec-only command is enabled then HL2 generates the following FEC:

HL2 FEC = <HL6, OPAQUE>

If there are any P routers between the leaf node and an ASBR or ABR, or any P routers between ASBRs or ABRs that do not have the root node (HL6) in their RTM, then this type 1 opaque FEC is not resolved and forwarded upstream, and the solution fails.

Leaf and ABR behavior
When the following command is enabled on a leaf node, LDP generates a basic opaque type 1 FEC.
configure router ldp generate-basic-fec-only

When generate-basic-fec-only is enabled on the ABR, LDP accepts a lower FEC of basic opaque type 1 and generates a basic opaque type 1 upper FEC. LDP then stitches the lower and upper FECs together to create a cross connect.

When generate-basic-fec-only is enabled and the ABR receives a lower FEC, the ABR behaves as follows:

  • For a recursive FEC with a type 7 opaque, the ABR stitches the lower FEC to an upper FEC with basic opaque type 1.

  • For any FEC type other than a recursive FEC with type 7 opaque or a non-recursive FEC with type 1 basic opaque, the ABR processes the packet in the same manner as when generate-basic-fec-only is disabled.
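The decision can be sketched as a small lookup; this is a hypothetical model of the behavior described above, with opaque type numbers per RFC 6388 and RFC 6512:

```python
BASIC = 1      # type 1 basic opaque
RECURSIVE = 7  # type 7 recursive opaque

def abr_upper_fec_opaque(lower_type, inner_type=None):
    """Return the opaque type the ABR generates for the upper FEC when
    generate-basic-fec-only is enabled, or None when the FEC is processed
    as if the command were disabled."""
    if lower_type == BASIC:
        return BASIC                      # stitch basic to basic
    if lower_type == RECURSIVE and inner_type == BASIC:
        return BASIC                      # type 7 with inner type 1
    return None                           # normal (disabled) processing
```

This mirrors the leaf and ABR rules above; Table 3 lists the full per-opaque-type behavior.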

Intra-AS support

The ABR uses IBGP and peers between the system IP or loopback IP addresses, as shown in ABR and IBGP.

Figure 18. ABR and IBGP

The generate-basic-fec-only command is supported on leaf PE and ABR nodes. The generate-basic-fec-only command only interoperates with intra-AS option C, or opaque type 7 with inner opaque type 1. No other opaque type is supported.

Opaque type behavior with basic FEC generation

Opaque type behavior with basic FEC generation describes the behavior of different opaque types when the generate-basic-fec-only command is enabled or disabled.

Table 3. Opaque type behavior with basic FEC generation

FEC opaque type | Behavior with generate-basic-fec-only enabled
1 | Generate type 1 basic opaque when the FEC is resolved using a BGP route
3 | Same behavior as when generate-basic-fec-only is disabled
4 | Same behavior as when generate-basic-fec-only is disabled
7 with inner type 1 | Generate type 1 basic opaque
7 with inner type 3 or 4 | Same behavior as when generate-basic-fec-only is disabled
8 with inner type 1 | Same behavior as when generate-basic-fec-only is disabled

Inter-AS support

In the inter-AS case, the ASBRs use EBGP as shown in ASBR and EBGP.

The two ASBRs become peers via a local interface. The generate-basic-fec-only command can be used on the LEAF or the ASBR to force SR OS to generate a basic opaque FEC when the actual ROOT is resolved via BGP. The opaque type behavior is the same as in the intra-AS scenario shown in ABR and IBGP.

Figure 19. ASBR and EBGP

The generate-basic-fec-only command is supported on LEAF PE and ASBR nodes in the inter-AS case. The generate-basic-fec-only command only interoperates with inter-AS option C and opaque type 7 with inner opaque type 1.

Redundancy and resiliency

For mLDP, MoFRR is supported within the IGP domain; for example, between ASBRs that are not directly connected. MoFRR is not supported between directly connected ASBRs, such as ASBRs that use EBGP without IGP.

Figure 20. ASBRs using EBGP without IGP

ASBR physical connection

Non-segmented mLDP functions with ASBRs directly connected or connected via an IGP domain, as shown in the preceding figure.

OAM

Note: The oam p2mp-lsp-ping command only applies to the classic CLI.

LSPs are unidirectional tunnels. When an LSP ping is sent, the echo request is transmitted via the tunnel and the echo response is transmitted via vanilla IP to the source. Similarly, for the oam p2mp-lsp-ping command, on the root, the echo request is transmitted via the mLDP P2MP tunnel to all leafs, and the leafs use vanilla IP to respond to the root.

The echo request for mLDP is generated carrying a root Target FEC Stack TLV, which is used to identify the multicast LDP LSP under test at the leaf. The Target FEC Stack TLV must carry an mLDP P2MP FEC Stack Sub-TLV from RFC 6388 or RFC 6512. See ECHO request target FEC Stack TLV.

Figure 21. ECHO request target FEC Stack TLV

The same concept applies to inter-AS and non-segmented mLDP. The leafs in the remote AS should be able to resolve the root via GRT routing. This is possible for inter-AS Option C, where the root is usually in the leaf RTM, resolved with a next-hop ASBR.

For inter-AS Option B where the root is not present in the leaf RTM, the echo reply cannot be forwarded via the GRT to the root. To solve this problem, for inter-AS Option B, the SR OS uses VPRN unicast routing to transmit the echo reply from the leaf to the root via VPRN.

Figure 22. MVPN inter-AS Option B OAM
Note: The vpn-recursive-fec command option only applies to the classic CLI.

As shown in the preceding figure, the echo request for VPN recursive FEC is generated from the root node by executing the oam p2mp-lsp-ping with the vpn-recursive-fec command option. When the echo request reaches the leaf, the leaf uses the sub-TLV within the echo request to identify the corresponding VPN via the FEC which includes the RD, the root, and the P2MP-ID.

After identifying the VPRN, the echo response is sent back via the VPRN and unicast routes. A unicast route (for example, root 10.0.0.14, as shown in MVPN inter-AS Option B OAM) must be present in the leaf VPRN to allow the unicast routing of the echo reply back to the root via VPRN. To distribute this root from the root VPRN to all VPRN leafs, a loopback interface should be configured in the root VPRN and distributed to all leafs via MP-BGP unicast routes.

The OAM functionality for Options B and C is summarized in OAM functionality for Options B and C.

Notes:
  • For SR OS in the classic CLI, all P2MP mLDP FEC types respond to the vpn-recursive-fec echo request. Leafs in the local AS and inter-AS Option C respond to the recursive-FEC TLV echo request, in addition to the leafs in inter-AS Option B.

    For cases other than inter-AS Option B, where the root system IP is visible through the GRT, the echo reply is sent via the GRT, that is, not via the VPRN.

  • In the classic CLI, the vpn-recursive-fec is a Nokia proprietary implementation; therefore, third-party routers do not recognize the recursive FEC and do not generate an echo response.

    In the classic CLI, the user can generate the p2mp-lsp-ping without the vpn-recursive-fec to discover non-Nokia routers in the local-AS and inter-AS Option C, but not the inter-AS Option B leafs.

Note: The information in the following table only applies to the classic CLI.
Table 4. OAM functionality for Options B and C
OAM command (for mLDP) Leaf and root in same AS Leaf and root in different AS (Option B) Leaf and root in different AS (Option C)

p2mp-lsp-ping ldp

p2mp-lsp-ping ldp-ssm

p2mp-lsp-ping ldp vpn-recursive-fec

p2mp-lsp-trace

ECMP support

In ECMP support, the leaf discovers the ROOT-1 from all three ASBRs (ASBR-3, ASBR-4 and ASBR-5).

Figure 23. ECMP support

The leaf uses the following process to choose the ASBR used for the multicast stream:

  1. The leaf determines the number of ASBRs that should be part of the hash calculation.

    The number of ASBRs that are part of the hash calculation comes from the configured ECMP (configure router ecmp). For example, if the ECMP value is set to 2, only two of the ASBRs are part of the hash algorithm selection.

  2. After deciding the upstream ASBR, the leaf determines whether there are multiple equal cost paths between it and the chosen ASBR.

    • If there are multiple ECMP paths between the leaf and the ASBR, the leaf performs another ECMP selection based on the configured value in configure router ecmp. This is a recursive ECMP lookup.

    • The first lookup chooses the ASBR and the second lookup chooses the path to that ASBR.

      For example, if ASBR-5 was chosen in ECMP support, there are three paths between the leaf and ASBR-5. As such, a second ECMP decision is made to choose the path.

  3. At ASBR-5, the process is repeated. For example, in ECMP support, ASBR-5 goes through steps 1 and 2 to choose between ASBR-1 and ASBR-2, and a second recursive ECMP lookup to choose the path to that ASBR.

When there are several candidate upstream LSRs, the LSR must select one upstream LSR. The algorithm used for the LSR selection is a local matter. If the LSR selection is done over a LAN interface and the Section 6 procedures are applied, the procedure described in ECMP hash algorithm is applied to ensure that the same upstream LSR is elected among a set of candidate receivers on that LAN.

The ECMP hash algorithm ensures that there is a single forwarder over the LAN for a specific LSP.

ECMP hash algorithm

The ECMP hash algorithm requires the opaque value of the FEC (see ECMP hash algorithm) and is based on RFC 6388 section 2.4.1.1.

  • The candidate upstream LSRs are numbered from lower to higher IP addresses.

  • The following hash is performed: H = (CRC32 (Opaque Value)) modulo N, where N is the number of upstream LSRs and "Opaque Value" is the field identified in the FEC element after "Opaque Length". The "Opaque Length" indicates the size of the opaque value used in this calculation.

  • The selected upstream LSR U is the LSR that has the number H above.
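The election can be sketched directly from the three steps above; the helper below is illustrative, but the hash itself follows RFC 6388 section 2.4.1.1:

```python
import zlib

def select_upstream_lsr(candidate_ips, opaque_value):
    """H = CRC32(opaque value) modulo N over the candidate upstream LSRs
    numbered from lower to higher IP address."""
    # Sort numerically by octet, not lexically ("10.0.0.10" > "10.0.0.2").
    ordered = sorted(candidate_ips,
                     key=lambda ip: tuple(int(o) for o in ip.split(".")))
    h = zlib.crc32(opaque_value) % len(ordered)
    return ordered[h]
```

Because every candidate receiver on the LAN computes the same H over the same opaque value, all of them elect the same upstream LSR, which is what guarantees a single forwarder per LSP on the LAN.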

Dynamic mLDP and static mLDP coexisting on the same node

When creating a static mLDP tunnel, use the commands in the following context to configure the P2MP tunnel ID.

configure router tunnel-interface

This P2MP ID can coincide with a dynamic mLDP P2MP ID. The dynamic mLDP tunnel is created by PIM automatically, without manual configuration. If the node has a static and a dynamic mLDP tunnel with the same label and P2MP ID, there are collisions and OAM errors.

Do not use a static and dynamic mLDP on the same node. If it is necessary to do so, ensure that the P2MP ID is not the same between the two tunnel types.

Static mLDP FECs originate at the leaf node. If the FEC is resolved using BGP, it is not forwarded downstream. A static mLDP FEC is only created and forwarded if it is resolved using IGP. For optimized Option C, the static mLDP can originate at the leaf node because the root is exported from BGP to IGP at the ASBR; therefore the leaf node resolves the root using IGP.

In the optimized Option C scenario, it is possible to have a static mLDP FEC originate from a leaf node as follows:

static-mLDP <Root: ROOT-1, Opaque: <p2mp-id-1>>

A dynamic mLDP FEC can also originate from a separate leaf node with the same FEC:

dynamic-mLDP <Root: ROOT-1, Opaque: <p2mp-id-1>>

In this case, the trees merge at the ASBR and the upper FEC carries both the static mLDP and dynamic mLDP traffic. The user must ensure that the static mLDP P2MP ID is not used by any dynamic mLDP LSPs on the path to the root.

Static and dynamic mLDP interaction illustrates the scenario where one leaf (LEAF-1) is using dynamic mLDP for NG-MVPN and a separate leaf (LEAF-2) is using static mLDP for a tunnel interface.

Figure 24. Static and dynamic mLDP interaction

In the preceding figure, both FECs generated by LEAF-1 and LEAF-2 are identical, and the ASBR-3 merges the FECs into a single upper FEC. Any traffic arriving from ROOT-1 to ASBR-3 over VPRN-1 is forked to LEAF-1 and LEAF-2, even if the tunnels were signaled for different services.
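The merge at ASBR-3 can be sketched as follows; this is an illustrative model (FECs as tuples, node names from the figure), not SR OS code:

```python
def merge_downstream_fecs(downstream_requests):
    """Hypothetical sketch: upstream state is keyed by FEC, so identical
    FECs from different leaves share one upper FEC and traffic is forked
    to every downstream branch."""
    tree = {}
    for leaf, fec in downstream_requests:
        tree.setdefault(fec, set()).add(leaf)
    return tree

requests = [
    ("LEAF-1", ("ROOT-1", "p2mp-id-1")),  # dynamic mLDP (NG-MVPN)
    ("LEAF-2", ("ROOT-1", "p2mp-id-1")),  # static mLDP (tunnel interface)
]
branches = merge_downstream_fecs(requests)
# One upper FEC toward ROOT-1; traffic is forked to both leaves.
```

Keeping the static P2MP ID out of the range used by dynamic mLDP makes the two FEC keys distinct and prevents this unintended fork.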

Intra-AS non-segmented mLDP

Non-segmented mLDP intra-AS (inter-area) is supported with options B and C only. Intra-AS non-segmented topology shows a typical intra-AS topology, with a backbone IGP area 0 and access (non-backbone) IGP areas 1 and 2. In these topologies, the ABRs usually set next-hop-self for BGP label routes, which requires a recursive FEC.

Figure 25. Intra-AS non-segmented topology
For option B, the ABR routers change the next hop of the MVPN AD routes to the ABR system IP or Loopback IP. The following commands for BGP do not change the next hop of the MVPN AD routes.
configure router bgp group next-hop-self
configure router bgp group neighbor next-hop-self 
configure service vprn bgp group next-hop-self
configure service vprn bgp group neighbor next-hop-self
Instead, a BGP policy can be used to change the next hop of the MVPN AD routes at the ABR.

ABR MoFRR for intra-AS

With ABR MoFRR in the intra-AS environment, the leaf chooses a local primary ABR and a backup ABR, with separate mLDP signaling toward these two ABRs. In addition, each path from a leaf to the primary ABR and from a leaf to the backup ABR supports IGP MoFRR. This behavior is similar to ASBR MoFRR in the inter-AS environment; for more details, see ASBR MoFRR. MoFRR is only supported for intra-AS option C, with or without RR.

Interaction with an inter-AS non-segmented mLDP solution

Intra-AS option C is supported in conjunction with inter-AS option B or C. Intra-AS option B with inter-AS option B is not supported.

Intra-AS/inter-AS Option B

For intra-AS or inter-AS option B, the root is not visible to the leaf. LDP is responsible for building the recursive FEC and signaling the FEC to the ABR/ASBR on the leaf. The ABR/ASBR needs the PMSI AD route to rebuild the FEC (recursive or basic), depending on whether it is connected to another ABR/ASBR or to a root node. LDP must import the MVPN PMSI AD routes. To reduce resource usage, importing of the MVPN PMSI AD routes is done manually using the following command.
configure router ldp import-pmsi-routes mvpn
When enabled, LDP requests BGP to provide the LDP task with all of the MVPN PMSI AD routes, and LDP caches these routes internally. If import-pmsi-routes mvpn is disabled, the cached routes are discarded to save resources.

The import-pmsi-routes mvpn command is enabled if there is an upgrade from a software version that does not support this inter-AS case. Otherwise, by default import-pmsi-routes mvpn is disabled for MVPN inter-AS, MVPN intra-AS, and EVPN, so LDP does not cache any MVPN PMSI AD routes.

ASBR MoFRR

ASBR MoFRR in the inter-AS environment allows the leaf PE to signal a primary path to the remote root through the first ASBR and a backup path through the second ASBR, so that there is an active LSP signaled from the leaf node to the first local root (ASBR-1 in BGP neighboring for MoFRR) and a backup LSP signaled from the leaf node to the second local root (ASBR-2 in BGP neighboring for MoFRR) through the best IGP path in the AS.

Using BGP neighboring for MoFRR as an example, ASBR-1 and ASBR-2 are local roots for the leaf node, and ASBR-3 and ASBR-4 are local roots for ASBR-1 or ASBR-2. The actual root node (ROOT-1) is also a local root for ASBR-3 and ASBR-4.

Figure 26. BGP neighboring for MoFRR

In BGP neighboring for MoFRR, ASBR-2 is a disjointed ASBR. Within the AS spanning from the leaf to the local root (the ASBR selected in that AS), the traditional IGP MoFRR is used: ASBR MoFRR is used from the leaf node to the local root, and IGP MoFRR is used by any P router that connects the leaf node to the local root.

IGP MoFRR versus BGP (ASBR) MoFRR

The local leaf can be the actual leaf node that is connected to the host, or an ASBR node that acts as the local leaf for the LSP in that AS, as illustrated in ASBR node acting as local leaf.

Figure 27. ASBR node acting as local leaf

Two types of MoFRR can exist in a single AS:

  • IGP MoFRR

    When the following command is enabled for LDP, the local leaf selects a single local root, either an ASBR or the actual root, and creates a FEC toward two different upstream LSRs using LFA/ECMP for the ASBR route.
    configure router ldp mcast-upstream-frr
    If there are multiple ASBRs directed toward the actual root, the local leaf only selects a single ASBR; for example, ASBR-1 in IGP MoFRR. In this example, LSPs are not set up for ASBR-2. The local root ASBR-1 is selected by the local leaf and the primary path is set up to ASBR-1, while the backup path is set up through ASBR-2.

    For more information, see Multicast LDP fast upstream switchover.

    Figure 28. IGP MoFRR
  • ASBR MoFRR

    When the following command is enabled for LDP, and the mcast-upstream-frr command is not enabled, the local leaf selects a single ASBR as the primary ASBR and another ASBR as the backup ASBR.
    configure router ldp mcast-upstream-asbr-frr
    The primary and backup LSPs are set to these two ASBRs, as shown in ASBR MoFRR. Because the mcast-upstream-frr command is not configured, IGP MoFRR is not enabled in AS2, and therefore none of the P routers perform local IGP MoFRR.

    BGP neighboring and sessions can be used to detect BGP peer failure from the local leaf to the ASBR, and can cause a MoFRR switch from the primary LSP to the backup LSP. Multihop BFD can be used between BGP neighbors to detect failure more quickly and remove the primary BGP peer (ASBR-1 in ASBR MoFRR) and its routes from the routing table so that the leaf can switch to the backup LSP and backup ASBR.

    Figure 29. ASBR MoFRR

The mcast-upstream-frr and mcast-upstream-asbr-frr commands can be configured together on the local leaf of each AS to create a high-resilience MoFRR solution. When both commands are enabled, the local leaf sets up ASBR MoFRR first and sets up a primary LSP to one ASBR (ASBR-1 in ASBR MoFRR and IGP MoFRR) and a backup LSP to another ASBR (ASBR-2 in ASBR MoFRR and IGP MoFRR). In addition, the local leaf protects each LSP using IGP MoFRR through the P routers in that AS.

Figure 30. ASBR MoFRR and IGP MoFRR
Note: Enabling both the mcast-upstream-frr and mcast-upstream-asbr-frr commands can cause extra multicast traffic to be created. Ensure that the network is designed and the appropriate commands are enabled to meet network resiliency needs.

At each AS, either command can be configured; for example, in ASBR MoFRR and IGP MoFRR, the leaf is configured with mcast-upstream-asbr-frr enabled and sets up a primary LSP to ASBR-1 and a backup LSP to ASBR-2. ASBR-1 and ASBR-2 are configured with mcast-upstream-frr enabled, and both perform IGP MoFRR to ASBR-3 only. ASBR-2 can select ASBR-3 or ASBR-4 as its local root for IGP MoFRR; in this example, ASBR-2 has selected ASBR-3 as its local root.

There are no ASBRs in the root AS (AS-1), so IGP MoFRR is performed if mcast-upstream-frr is enabled on ASBR-3.

The mcast-upstream-frr and mcast-upstream-asbr-frr commands work separately depending on the needed behavior. If there is more than one local root, then mcast-upstream-asbr-frr can provide extra resiliency between the local ASBRs, and mcast-upstream-frr can provide extra redundancy between the local leaf and the local root by creating a disjointed LSP for each ASBR.

If the mcast-upstream-asbr-frr command is disabled and mcast-upstream-frr is enabled, and there is more than one local root, only a single local root is selected and IGP MoFRR can provide local AS resiliency.

In the actual root AS, only the mcast-upstream-frr command needs to be configured.

ASBR MoFRR leaf behavior

With inter-AS MoFRR at the leaf, the leaf selects a primary ASBR and a backup ASBR. These ASBRs are disjointed ASBRs.

The primary and backup LSPs are set up using the primary and backup ASBRs, as illustrated in ASBR MoFRR leaf behavior.

Figure 31. ASBR MoFRR leaf behavior
Note: Using ASBR MoFRR leaf behavior as a reference, ensure that the paths from the leaf to ASBR-1 and ASBR-2 are disjointed. mLDP does not support TE and cannot create two disjointed LSPs from the leaf to ASBR-1 and ASBR-2; the operator and IGP architect must define the disjointed paths.

ASBR MoFRR ASBR behavior

Each LSP at the ASBR creates its own primary and backup LSPs.

As shown in ASBR MoFRR ASBR behavior, the primary LSP from the leaf to ASBR-1 generates a primary-primary LSP to ASBR-3 (P-P) and a primary-backup LSP to ASBR-4 (P-B). The backup LSP from the leaf also generates a backup-primary LSP to ASBR-4 (B-P) and a backup-backup LSP to ASBR-3 (B-B). When two similar FECs of an LSP intersect, the LSPs merge.

Figure 32. ASBR MoFRR ASBR behavior

MoFRR root AS behavior

In the root AS, MoFRR is based on regular IGP MoFRR. At the root, there are primary and backup LSPs for each of the primary and backup LSPs that arrive from the neighboring AS, as shown in MoFRR root AS behavior.

Figure 33. MoFRR root AS behavior

Traffic flow

Traffic flow illustrates traffic flow based on the LSP setup. The backup LSPs of the primary and backup LSPs (B-B, P-B) are blocked in the non-leaf AS.

Figure 34. Traffic flow

Failure detection and handling

Failure detection can be achieved by using either of the following:

  • IGP failure detection

    • Enabling BFD is recommended for IGP protocols or static routes (if static routes are used for IGP forwarding), because it enables faster IGP failure detection.

    • IGP can detect P router failures for IGP MoFRR (single AS).

    • If the ASBR fails, IGP can detect the failure and reconverge the routing table on the local leaf. The local leaf in an AS can be either the ASBR or the actual leaf.

    • IGP routes to the ASBR address must be deleted for IGP failure to be handled.

  • BGP failure detection

    • BGP neighboring must be established between the local leaf and each ASBR. Using multi-hop BFD for ASBR failure is recommended.

    • Each local leaf attempts to calculate a primary ASBR and a backup ASBR. The local leaf sets up a primary LSP to the primary ASBR and a backup LSP to the backup ASBR. If the primary ASBR fails, the local leaf removes the primary ASBR from the next-hop list and allows traffic to be processed from the backup LSP and the backup ASBR.

    • BGP MoFRR can offer faster ASBR failure detection than IGP MoFRR.

    • BGP MoFRR can also be activated via IGP changes, such as if the node detects a direct link failure, or if IGP removes the BGP neighbor system IP address from the routing table. These events can cause a switch from the primary ASBR to a backup ASBR. It is recommended to deploy IGP and BFD in tandem for fast failure detection.

Failure scenario

As shown in Failure scenario 1, when ASBR-3 fails, ASBR-1 detects the failure using ASBR MoFRR and enables the primary backup path (P-B). This is the case for every LSP that has been set up for ASBR MoFRR in any AS.

Figure 35. Failure scenario 1

In another example, as shown in Failure scenario 2, a failure of ASBR-1 causes the attached P router to generate a route update to the leaf, removing ASBR-1 from the routing table and triggering an ASBR MoFRR switch on the leaf node.

Figure 36. Failure scenario 2

ASBR MoFRR consideration

As illustrated in Resolution via ASBR-3, it is possible for the ASBR-1 primary-primary (P-P) LSP to be resolved using ASBR-3, and for the ASBR-2 backup-primary (B-P) LSP to be resolved using the same ASBR-3.

Figure 37. Resolution via ASBR-3

In this case, both the backup-primary LSP and primary-primary LSP are affected when a failure occurs on ASBR-3, as illustrated in ASBR-3 failure.

Figure 38. ASBR-3 failure

In ASBR-3 failure, MoFRR can switch to the primary-backup LSP between ASBR-4 and ASBR-1 after BGP MoFRR detects the failure of ASBR-3.

It is strongly recommended that LDP signaling be enabled on all links between the local leaf and the local roots, and that all P routers enable ASBR MoFRR and IGP MoFRR. If LDP signaling is not enabled on all links, the routing table may resolve a next hop for an LDP FEC over a link that has no LDP signaling, and the primary or backup MoFRR LSPs may not be set up.

ASBR MoFRR guarantees that the ASBRs are disjointed, but does not guarantee that the paths from the local leaf to the local ASBRs are disjointed. The primary and backup LSPs take the best paths as calculated by IGP, and if IGP selects the same path for the primary ASBR and the backup ASBR, the two LSPs are not disjointed. Ensure that two disjointed paths are created to the primary and backup ASBRs.
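The disjointedness requirement can be checked offline before relying on the backup LSP. The following Python sketch is purely illustrative (the function name and hop-list representation are hypothetical, not part of SR OS); it flags two LSP paths from the same leaf that share an intermediate hop:

```python
def paths_are_disjointed(path_a, path_b):
    """Return True if two LSP paths share no hop beyond the common leaf.

    Each path is a list of node names from the local leaf to an ASBR,
    for example ["leaf", "P-1", "ASBR-1"]. The first hop (the shared
    leaf) is excluded from the comparison.
    """
    return not (set(path_a[1:]) & set(path_b[1:]))
```

If IGP selects the same transit P router for both the primary and the backup ASBR, the check fails and the IGP metrics should be revisited.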

ASBR MoFRR opaque support

ASBR MoFRR opaque support lists the FEC opaque types that are supported by ASBR MoFRR.

Table 5. ASBR MoFRR opaque support

  • Type 1
  • Type 3
  • Type 4
  • Type 7, inner type 1
  • Type 7, inner type 3 or 4
  • Type 8, inner type 1
  • Type 250
  • Type 251

MBB for MoFRR

Any optimization of the MoFRR primary LSP should be performed by the Make Before Break (MBB) mechanism. For example, if the primary LSP fails, a switch to the backup LSP occurs and the primary LSP is re-signaled. After the primary LSP is successfully re-established, MoFRR switches from the backup LSP back to the primary LSP.

MBB is performed from the leaf node to the root node, and therefore it is not performed per autonomous system (AS); the MBB signaling must be successful from the leaf PE to the root PE, including all ASBRs and P routers in between.

The conditions of MBB for mLDP LSPs are:

  • re-calculation of the SPF

  • failure of the primary ASBR

If the primary ASBR fails and a switch is made to the backup ASBR, and the backup ASBR is the only other ASBR available, the MBB mechanism does not signal any new LSP and uses this backup LSP as the primary.

Add-paths for route reflectors

If the ASBRs and the local leaf are connected by a route reflector, the following BGP add-paths command must be enabled on the route reflector.
configure router bgp add-paths
This allows for the configuration of the following commands.
configure router bgp add-paths mcast-vpn-ipv4
configure router bgp add-paths mcast-vpn-ipv6
configure router bgp add-paths label-ipv4 (if Option C is used)
The add-paths command forces the route reflector to advertise all ASBRs to the local leaf as the next hop for the actual root.

If the add-paths command is not enabled on the route reflector, only a single ASBR is advertised to the local leaf, and ASBR MoFRR is not available.

Multicast LDP fast upstream switchover

This feature allows a downstream LSR of a multicast LDP (mLDP) FEC to perform a fast switchover and source the traffic from another upstream LSR while IGP and LDP are converging after a failure of the upstream LSR that is the primary next hop toward the root LSR for the P2MP FEC. In essence, it provides an upstream Fast Reroute (FRR) node-protection capability for the mLDP FEC packets. It does this at the expense of traffic duplication from two different upstream nodes into the node that performs the fast upstream switchover.

The detailed procedures for this feature are described in draft-pdutta-mpls-mldp-up-redundancy.

Feature configuration

The user enables the mLDP fast upstream switchover feature by configuring the following option in CLI.

configure router ldp mcast-upstream-frr

When this command is enabled and LDP is resolving an mLDP FEC received from a downstream LSR, it checks whether an ECMP next hop or an LFA next hop to the root LSR node exists. If LDP finds one, it programs a primary ILM on the interface corresponding to the primary next hop and a backup ILM on the interface corresponding to the ECMP or LFA next hop. LDP then sends the corresponding labels to both upstream LSR nodes. In normal operation, the primary ILM accepts packets while the backup ILM drops them. If the interface or the upstream LSR of the primary ILM goes down, causing the LDP session to go down, the backup ILM starts accepting packets.

To make use of the ECMP next hop, the user must set the value of the following command to two or more.

configure router ecmp max-ecmp-routes

To make use of the LFA next hop, the user must enable LFA using the following commands:

  • MD-CLI
    configure router isis loopfree-alternate
    configure router ospf loopfree-alternate
  • classic CLI
    configure router isis loopfree-alternates
    configure router ospf loopfree-alternates

Enabling IP FRR or LDP FRR using the following commands is not strictly required, because LDP only needs to know the alternate next hop to the root LSR to send the Label Mapping message that programs the backup ILM at the initial signaling of the tree; enabling the LFA option is sufficient. If, however, unicast IP and LDP prefixes also need to be protected, these features and the mLDP fast upstream switchover can be enabled concurrently using the following commands:

  • MD-CLI
    configure routing-options ip-fast-reroute
    configure router ldp fast-reroute
  • classic CLI
    configure router ip-fast-reroute
    configure router ldp fast-reroute
Caution: The mLDP FRR fast switchover relies on fast detection of the loss of the LDP session to the upstream peer to which the primary ILM label was advertised. Nokia strongly recommends that you perform the following:
  1. Enable BFD on all LDP interfaces to upstream LSR nodes. When BFD detects the loss of the last adjacency to the upstream LSR, it immediately brings down the LDP session, which causes the IOM to activate the backup ILM.

  2. If there is a concurrent T-LDP adjacency to the same upstream LSR node, enable BFD on the T-LDP peer in addition to enabling it on the interface.

  3. Enable the following command option on all interfaces to the upstream LSR nodes.

    configure router interface ldp-sync-timer

    If an LDP session to the upstream LSR to which the primary ILM is resolved goes down for any reason other than a failure of the interface or of the upstream LSR, routing and LDP go out of sync. This means the backup ILM remains activated until the next time IGP reruns SPF. By enabling the IGP-LDP synchronization feature, the advertised link metric is changed to the maximum value as soon as the LDP session goes down. This in turn triggers an SPF run, and LDP likely downloads a new set of primary and backup ILMs.

Feature behavior

This feature allows a downstream LSR to send a label binding to two upstream LSR nodes, but to accept traffic only from the ILM on the interface to the primary next hop toward the root LSR for the P2MP FEC in normal operation, and from the ILM on the interface to the backup next hop under failure. A candidate upstream LSR node must be either an ECMP next hop or a Loop-Free Alternate (LFA) next hop. This allows the downstream LSR to perform a fast switchover and source the traffic from another upstream LSR while IGP is converging after a failure of the LDP session to the upstream peer that is the primary next hop toward the root LSR for the P2MP FEC. In a sense, it provides an upstream Fast Reroute (FRR) node-protection capability for the mLDP FEC packets.

Figure 39. mLDP LSP with backup upstream LSR nodes

Upstream LSR U in mLDP LSP with backup upstream LSR nodes is the primary next hop for the root LSR R of the P2MP FEC. This is also referred to as primary upstream LSR. Upstream LSR U’ is an ECMP or LFA backup next hop for the root LSR R of the same P2MP FEC. This is referred to as backup upstream LSR. Downstream LSR Z sends a label mapping message to both upstream LSR nodes and programs the primary ILM on the interface to LSR U and a backup ILM on the interface to LSR U’. The labels for the primary and backup ILMs must be different. LSR Z therefore attracts traffic from both of them. However, LSR Z blocks the ILM on the interface to LSR U’ and only accepts traffic from the ILM on the interface to LSR U.

If a failure of the link to LSR U, or of LSR U itself, causes the LDP session to LSR U to go down, LSR Z detects it, reverses the ILM blocking state, and immediately starts receiving traffic from LSR U’. This continues until IGP converges and provides a new primary next hop, and a new ECMP or LFA backup next hop, which may or may not be on the interface to LSR U’. At that point, LSR Z updates the primary and backup ILMs in the data path.

LDP uses the interface of either an ECMP next hop or an LFA next hop to the root LSR prefix, whichever is available, to program the backup ILM. ECMP next hops and LFA next hops are, however, mutually exclusive for a specified prefix: IGP installs the ECMP next hop in preference to an LFA next hop for a prefix in the Routing Table Manager (RTM).

If one or more ECMP next hops for the root LSR prefix exist, LDP picks the interface for the primary ILM based on the rules of mLDP FEC resolution specified in RFC 6388:

  • The candidate upstream LSRs are numbered from lower to higher IP address.

  • The following hash is performed: H = (CRC32(Opaque Value)) modulo N, where N is the number of upstream LSRs. The Opaque Value is the field identified in the P2MP FEC Element right after 'Opaque Length' field. The 'Opaque Length' indicates the size of the opaque value used in this calculation.

  • The selected upstream LSR U is the LSR that has the number H.

LDP then picks the interface for the backup ILM using the following new rules:

if (H + 1 < NUM_ECMP) {
    // If the hashed entry is not last in the next hops, then pick up the next as backup.
    backup = H + 1;
} else {
    // Wrap around and pick up the first.
    backup = 1;
}

In some topologies, it is possible that no ECMP or LFA next hop is found. In this case, LDP programs the primary ILM only.
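The primary and backup upstream selection described above can be sketched as follows. This is an illustrative model only (the function name and inputs are hypothetical, not SR OS internals): it numbers candidates by ascending IP address per RFC 6388, hashes the opaque value with CRC32, and takes the next candidate, wrapping around to the first, as the backup:

```python
import ipaddress
import zlib

def select_upstream_lsrs(candidates, opaque_value):
    """Pick (primary, backup) upstream LSRs for an mLDP FEC.

    candidates: upstream LSR addresses as strings.
    opaque_value: raw bytes of the P2MP FEC Element Opaque Value.
    """
    # RFC 6388: number the candidate upstream LSRs from lower to higher IP address.
    lsrs = sorted(candidates, key=ipaddress.ip_address)
    n = len(lsrs)
    # H = CRC32(Opaque Value) modulo N selects the primary upstream LSR.
    h = zlib.crc32(opaque_value) % n
    # Backup: the entry after the hashed one, wrapping around to the first;
    # None when no ECMP or LFA alternative exists (primary ILM only).
    backup = lsrs[(h + 1) % n] if n > 1 else None
    return lsrs[h], backup
```

When only one candidate exists, no backup ILM is programmed, matching the behavior described above.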

Uniform failover from primary to backup ILM

When LDP programs the primary ILM record in the data path, it provides the IOM with the Protect-Group Identifier (PG-ID) associated with this ILM, which identifies the protected upstream LSR.

For the system to perform a fast switchover to the backup ILM in the fast path, LDP applies to the primary ILM uniform FRR failover procedures similar in concept to the ones applied to an NHLFE in the existing implementation of LDP FRR for unicast FECs. There are, however, important differences to note. LDP associates a unique Protect Group ID (PG-ID) with all mLDP FECs that have their primary ILM on any LDP interface pointing to the same upstream LSR. This PG-ID is assigned per upstream LSR, regardless of the number of LDP interfaces configured to this LSR. Therefore, this PG-ID is different from the one associated with unicast FECs, which is assigned to each downstream LDP interface and next hop. However, if a failure causes an interface to go down and also causes the LDP session to the upstream peer to go down, both PG-IDs have their state updated in the IOM, and therefore the uniform FRR procedures are triggered both for the unicast LDP FECs forwarding packets toward the upstream LSR and for the mLDP FECs receiving packets from the same upstream LSR.

When the mLDP FEC is programmed in the data path, the primary and backup ILM records therefore contain the PG-ID the FEC is associated with. The IOM also maintains a list of PG-IDs and a state bit that indicates whether each is UP or DOWN. When the PG-ID state is UP, the primary ILM for each mLDP FEC is open and accepts mLDP packets, while the backup ILM is blocked and drops mLDP packets. LDP sends a PG-ID DOWN notification to the IOM when it detects that the LDP session to the peer has gone down. This notification causes the backup ILMs associated with this PG-ID to open and accept mLDP packets immediately. When IGP re-converges, LDP downloads an updated pair of primary and backup ILMs into the IOM for each mLDP FEC, with the corresponding PG-IDs.
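The PG-ID mechanism amounts to a small lookup table: a single state flip per upstream LSR switches every associated FEC from its primary to its backup ILM. The Python sketch below is a toy model of this behavior (class and method names are hypothetical, not SR OS internals):

```python
class PgIdTable:
    """Toy model of the IOM PG-ID list driving uniform ILM failover."""

    def __init__(self):
        self.pg_state = {}   # PG-ID -> "UP" or "DOWN"
        self.fec_pg = {}     # FEC -> PG-ID of its primary upstream LSR

    def program_fec(self, fec, pg_id):
        # Primary and backup ILM records for the FEC carry this PG-ID.
        self.fec_pg[fec] = pg_id
        self.pg_state.setdefault(pg_id, "UP")

    def session_down(self, pg_id):
        # LDP's PG-ID DOWN notification: opens all backup ILMs at once.
        self.pg_state[pg_id] = "DOWN"

    def active_ilm(self, fec):
        # The primary ILM accepts packets while the PG-ID is UP; backup otherwise.
        return "primary" if self.pg_state[self.fec_pg[fec]] == "UP" else "backup"
```

Because the PG-ID is per upstream LSR, one DOWN notification switches all mLDP FECs whose primary ILM points to that LSR, without touching FECs tied to other upstream peers.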

If multiple LDP interfaces exist to the upstream LSR, a failure of one interface brings down the link Hello adjacency on that interface but not the LDP session, which is still associated with the remaining link Hello adjacencies. In this case, the upstream LSR updates the NHLFE for the mLDP FEC in the IOM to use one of the remaining links. The switchover time in this case is not managed by the uniform failover procedures.

Multi-area and multi-instance extensions to LDP

To extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036 requires that all /32 prefixes of PEs be leaked between the areas or instances. This is because an exact match of the prefix in the routing table is required to install the prefix binding in the LDP Forwarding Information Base (FIB). Although a router does this by default when configured as an Area Border Router (ABR), leaking increases IGP convergence time on routers when the number of PE nodes scales to thousands.

Multi-area and multi-instance extensions to LDP provide an optional behavior by which LDP installs a prefix binding in the LDP FIB by simply performing a longest prefix match with an aggregate prefix in the routing table (RIB). In this way, the ABR can be configured to summarize the /32 prefixes of PE routers. This method is compliant with RFC 5283, LDP Extension for Inter-Area Label Switched Paths (LSPs).

LDP shortcut for BGP next hop resolution

LDP shortcuts for BGP next-hop resolution allow for the deployment of a 'route-less core' infrastructure on the 7750 SR and 7950 XRS. Many service providers either have removed or intend to remove the IBGP mesh from their network core, retaining only the mesh between routers connected to areas of the network that require routing to external routes.

Shortcuts are implemented by utilizing Layer 2 tunnels (that is, MPLS LSPs) as next hops for prefixes that are associated with the far end termination of the tunnel. By tunneling through the network core, the core routers forwarding the tunnel have no need to obtain external routing information and are immune to attack from external sources.

The tunnel table contains all available tunnels indexed by remote destination IP address. LSPs derived from received LDP /32 route FECs are automatically installed in the table associated with the advertising router-ID when IGP shortcuts are enabled.

Evaluating tunnel preference is based on the following order in descending priority:

  1. LDP /32 route FEC shortcut

  2. Actual IGP next-hop

If a higher priority shortcut is not available or is not configured, a lower priority shortcut is evaluated. When no shortcuts are configured or available, the IGP next-hop is always used. Shortcut and next-hop determination is event driven based on dynamic changes in the tunneling mechanisms and routing states.
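The two-step preference order above amounts to an ordered lookup. The following minimal Python sketch illustrates the resolution (the table names and function are hypothetical, not an SR OS API):

```python
def resolve_next_hop(prefix, ldp_fec_tunnels, igp_next_hops):
    """Resolve a next hop in descending priority:
    1. LDP /32 route FEC shortcut from the tunnel table,
    2. the actual IGP next hop otherwise.
    """
    tunnel = ldp_fec_tunnels.get(prefix)
    if tunnel is not None:
        return ("ldp-shortcut", tunnel)
    # No shortcut available or configured: fall back to IGP forwarding.
    return ("igp", igp_next_hops[prefix])
```

Because resolution is event driven, a change in either table (an LSP going down, an IGP reconvergence) would re-run this evaluation for the affected prefixes.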

See the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide for details on the use of LDP FEC and RSVP LSP for BGP Next-Hop Resolution.

LDP shortcut for IGP routes

The LDP shortcut for IGP route resolution feature allows forwarding of packets to IGP-learned routes using an LDP LSP. When LDP shortcut is enabled globally, IP packets forwarded over a network IP interface are labeled with the label received from the next hop for the route, corresponding to the FEC prefix matching the destination address of the IP packet. In this case, the routing table has the shortcut next hop as the best route. If no such LDP FEC exists, the routing table has the regular IP next hop, and regular IP forwarding is performed on the packet.

An egress LER advertises and maintains a FEC-to-label binding for each IGP-learned route. This is performed by the existing LDP fec-originate capability.

LDP shortcut configuration

The user enables the use of LDP shortcut for resolving IGP routes by entering the following global command:

  • MD-CLI
    configure router ldp ldp-shortcut
  • classic CLI
    configure router ldp-shortcut

This command enables forwarding of user IP packets and specified control IP packets using LDP shortcuts over all network interfaces in the system which participate in the IS-IS and OSPF routing protocols. The default is to disable the LDP shortcut across all interfaces in the system.

IGP route resolution

When LDP shortcut is enabled, LDP populates the RTM with next-hop entries corresponding to all prefixes for which it activated an LDP FEC. For a specified prefix, two route entries are populated in RTM. One corresponds to the LDP shortcut next hop and has an owner of LDP. The other one is the regular IP next hop. The LDP shortcut next hop always has preference over the regular IP next hop for forwarding user packets and specified control packets over a specified outgoing interface to the route next hop.

The prior activation of the FEC by LDP is done by performing an exact match with an IGP route prefix in RTM. It can also be done by performing a longest prefix-match with an IGP route in RTM if the aggregate-prefix-match option is enabled globally in LDP.

This feature is not restricted to /32 FEC prefixes. However only /32 FEC prefixes are populated in the CPM Tunnel Table for use as a tunnel by services.

All user packets and specified control packets for which the longest prefix match in RTM yields the FEC prefix are forwarded over the LDP LSP. Currently, the control packets that could be forwarded over the LDP LSP are ICMP ping and UDP-traceroute. The following is an example of the resolution process.

Assume the egress LER advertised a FEC for some /24 prefix using the following command.
configure router ldp fec-originate
At the ingress LER, LDP resolves the FEC by checking in RTM that an exact match exists for this prefix. After LDP activated the FEC, it programs the NHLFE in the egress data path and the LDP tunnel information in the ingress data path tunnel table.

Next, LDP provides the shortcut route to RTM which associates it with the same /24 prefix. There are two entries for this /24 prefix, the LDP shortcut next hop and the regular IP next hop. The latter was used by LDP to validate and activate the FEC. RTM then resolves all user prefixes which succeed a longest prefix match against the /24 route entry to use the LDP LSP.

Assume now that aggregate-prefix-match is enabled and that LDP found a /16 prefix in RTM to activate the FEC for the /24 FEC prefix. In this case, RTM adds a new, more specific /24 route entry with the LDP LSP as the next hop, but still does not have a specific /24 IP route entry. RTM then resolves all user prefixes that succeed a longest prefix match against the /24 route entry to use the LDP LSP, while all other prefixes that succeed a longest prefix match against the /16 route entry use the IP next hop.
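The resolution in the two examples above can be sketched as a longest-prefix-match over RTM-like entries. The entry layout and function name are hypothetical; the sketch only illustrates that the more specific /24 shortcut entry wins over the /16 IP entry, and that an equal-length shortcut entry is preferred over the regular IP entry:

```python
import ipaddress

def rtm_lookup(rtm, dest):
    """Longest-prefix-match over (prefix, owner, next_hop) entries.

    On equal prefix length, the 'ldp-shortcut' entry is preferred over
    the regular IP entry, matching the shortcut preference in RTM.
    """
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), owner, nh)
               for p, owner, nh in rtm
               if addr in ipaddress.ip_network(p)]
    # Sort so the longest prefix, then the shortcut owner, ends up last.
    matches.sort(key=lambda m: (m[0].prefixlen, m[1] == "ldp-shortcut"))
    return matches[-1][1], matches[-1][2]
```

With a /16 IP entry and a /24 shortcut entry installed, a destination inside the /24 resolves to the LDP LSP, while any other destination inside the /16 resolves to the IP next hop.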

LDP shortcut forwarding plane

After LDP activates a FEC for a specified prefix and programs RTM, it also programs the ingress tunnel table in the forwarding engine with the LDP tunnel information.

When an IPv4 packet is received on an ingress network interface, a subscriber IES interface, or a regular IES interface, and the lookup by the ingress forwarding engine selects a preferred RTM entry that corresponds to an LDP shortcut, the packet is sent labeled with the label stack corresponding to the NHLFE of the LDP LSP.

If the preferred RTM entry corresponds to an IP next-hop, the IPv4 packet is forwarded unlabeled.

ECMP considerations

When ECMP is enabled and multiple equal-cost next hops exist for the IGP route, the ingress forwarding engine sprays the packets for this route based on the hashing routine currently supported for IPv4 packets.

When the preferred RTM entry corresponds to an LDP shortcut route, spraying is performed across the multiple next hops for the LDP FEC. The FEC next hops can either be direct link LDP neighbors or T-LDP neighbors reachable over RSVP LSPs in the case of LDP-over-RSVP but not both. This is as per ECMP for LDP in existing implementation.

When the preferred RTM entry corresponds to a regular IP route, spraying is performed across regular IP next hops for the prefix.

Disabling TTL propagation in an LSP shortcut

This feature provides the option for disabling TTL propagation from a transit or a locally generated IP packet header into the LSP label stack when an LDP LSP is used as a shortcut for BGP next-hop resolution, a static-route next hop resolution, or for an IGP route resolution.

A transit packet is a packet received from an IP interface and forwarded over the LSP shortcut at ingress LER.

A locally-generated IP packet is any control plane packet generated from the CPM and forwarded over the LSP shortcut at ingress LER.

TTL handling can be configured for all LDP LSP shortcuts originating on an ingress LER using the following global commands.

configure router ldp shortcut-transit-ttl-propagate
configure router ldp shortcut-local-ttl-propagate

These commands apply to all LDP LSPs which are used to resolve static routes, BGP routes, and IGP routes.

When the following command is configured, TTL propagation is disabled on all locally generated IP packets, including ICMP ping, traceroute, and OAM packets that are destined for a route that is resolved to the LSP shortcut:
  • MD-CLI
    configure router ldp shortcut-local-ttl-propagate false
  • classic CLI
    configure router ldp no shortcut-local-ttl-propagate
In this case, a TTL of 255 is programmed onto the pushed label stack. This is referred to as pipe mode.
Similarly, when the following command is configured, TTL propagation is disabled on all IP packets received on any IES interface and destined for a route that is resolved to the LSP shortcut:
  • MD-CLI
    configure router ldp shortcut-transit-ttl-propagate false
  • classic CLI
    configure router ldp no shortcut-transit-ttl-propagate
In this case, a TTL of 255 is programmed onto the pushed label stack.
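The per-packet TTL decision described above reduces to a one-line rule; a minimal sketch (hypothetical function, not an SR OS API):

```python
def label_stack_ttl(ip_ttl, ttl_propagate):
    """TTL programmed onto the pushed label stack for an LSP shortcut.

    ttl_propagate True  -> uniform behavior: copy the IP header TTL.
    ttl_propagate False -> pipe mode: program a TTL of 255.
    """
    return ip_ttl if ttl_propagate else 255
```

The local and transit commands apply this same rule independently to locally generated and transit packets.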

LDP graceful handling of resource exhaustion

This feature enhances the behavior of LDP when a datapath or CPM resource required for the resolution of a FEC is exhausted. In prior releases, the LDP module shut down, and the user was required to fix the issue causing the FEC scaling to be exceeded and to restart the LDP module by executing the following command:

  • MD-CLI
    configure router ldp admin-state enable
  • classic CLI
    configure router ldp no shutdown

LDP base graceful handling of resources

This feature implements a base graceful handling capability by which the LDP interface to the peer, or the targeted peer in the case of a Targeted LDP (T-LDP) session, is shut down. If LDP tries to resolve a FEC over a link or a targeted LDP session and runs out of data path or CPM resources, it brings down that interface or targeted peer, which brings down the Hello adjacency over that interface to the resolved link LDP peer or to the targeted peer. The interface is brought down in the LDP context only and remains available to other applications, such as IP forwarding and RSVP LSP forwarding.

Depending on what type of resource was exhausted, the scope of the action taken by LDP differs. Some resources, such as NHLFEs, have interface-local impact, meaning that only the interface to the downstream LSR that advertised the label is shut down. Other resources, such as ILMs, have global impact, meaning that they affect every downstream peer or targeted peer that advertised the FEC to the node. The following examples illustrate this:

  • For NHLFE exhaustion, one or more interfaces or targeted peers (if the FEC is ECMP) are shut down. The ILM is maintained as long as there is at least one downstream peer for the FEC for which the NHLFE has been successfully programmed.

  • For an exhaustion of an ILM for a unicast LDP FEC, all interfaces to peers, or all targeted peers, that sent the FEC are shut down. No deprogramming of the data path is required because the FEC is not programmed.

  • An exhaustion of ILM for an mLDP FEC can happen during primary ILM programming, MBB ILM programming, or multicast upstream FRR backup ILM programming. In all cases, the P2MP index for the mLDP tree is deprogrammed and the interfaces to each downstream peer that sent a Label Mapping message associated with this ILM are shut down.
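The interface-local versus global scope described in the examples above can be summarized in a small sketch. The function and its names are hypothetical illustrations of the decision logic, not SR OS internals:

```python
def peers_to_shut_down(resource, advertiser, fec_advertisers):
    """Return the set of interfaces/targeted peers that LDP brings down.

    NHLFE exhaustion has interface-local impact: only the interface toward
    the downstream LSR that advertised the label is shut down.  ILM and
    P2MP-index exhaustion have global impact: every peer that advertised
    the FEC is affected.
    """
    if resource == "NHLFE":
        return {advertiser}              # only the advertising downstream LSR
    if resource in ("ILM", "P2MP index"):
        return set(fec_advertisers)      # every peer that sent the FEC
    raise ValueError(f"unknown resource: {resource}")
```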

After the user has taken action to free resources up, the user must manually unshut the interface or the targeted peer to bring it back into operation. This then re-establishes the Hello adjacency and resumes the resolution of FECs over the interface or to the targeted peer.

Detailed guidelines for using the feature and for troubleshooting a system which activated this feature are provided in the following sections.

This behavior is the default behavior and interoperates with the SR OS based LDP implementation and any other third party LDP implementation.

The following datapath resources can trigger this mechanism:

  • NHLFE

  • ILM

  • Label-to-NHLFE (LTN)

  • Tunnel Index

  • P2MP Index

The label allocation CPM resource can also trigger this mechanism.

LDP enhanced graceful handling of resources

This feature is an enhanced graceful handling capability that is supported only among SR OS based implementations. If LDP tries to resolve a FEC over a link or a targeted session and runs out of data path or CPM resources, it puts the LDP/T-LDP session into the overload state. As a result, it releases to its LDP peer the labels of the FECs that it could not resolve and also sends an LDP notification message to all LDP peers with the new overload status for the FEC type that caused the overload. The notification of overload is per FEC type (that is, unicast IPv4, P2MP mLDP, and so on), not per individual FEC. The peer that caused the overload, and all other peers, stop sending any new FECs of that type until this node updates the notification to state that it is no longer in the overload state for that FEC type. Traffic for FECs of this type that were previously resolved, and for other FEC types, continues to be forwarded normally to this peer and to all other peers.

After the user has taken action to free resources up, the overload state of the LDP/T-LDP sessions toward its peers must be manually cleared.

The enhanced mechanism is enabled instead of the base mechanism only if both LSR nodes advertise this new LDP capability at the time the LDP session is initialized. Otherwise, they continue to use the base mechanism.

This feature operates among SR OS LSR nodes using two private vendor LDP capabilities:

  • The first one is the LSR Overload Status TLV to signal or clear the overload condition.

  • The second one is the Overload Protection Capability Parameter, which allows LDP peers to negotiate the use of the overload notification feature and therefore the enhanced graceful handling mechanism.

When interoperating with an LDP peer which does not support the enhanced resource handling mechanism, the router reverts automatically to the default base resource handling mechanism.

The following are the details of the mechanism.

LSR overload notification

When an upstream LSR is overloaded for a FEC type, it notifies one or more downstream peer LSRs that it is overloaded for the FEC type.

When a downstream LSR receives overload status ON notification from an upstream LSR, it does not send further label mappings for the specified FEC type. When a downstream LSR receives overload OFF notification from an upstream LSR, it sends pending label mappings to the upstream LSR for the specified FEC type.

This feature introduces a new TLV referred to as LSR Overload Status TLV, shown below. This TLV is encoded using vendor proprietary TLV encoding as per RFC 5036. It uses a TLV type value of 0x3E02 and the Timetra OUI value of 0003FA.

0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|U|F| Overload Status TLV Type  |            Length             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                  Timetra OUI  = 0003FA                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S|                         Reserved                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

where:
   U-bit: Unknown TLV bit, as described in RFC 5036. The value MUST
   be 1, which means that a receiver that does not recognize this TLV
   silently ignores it.

   F-bit: Forward unknown TLV bit, as described in RFC 5036. The value
   of this bit MUST be 0, because an LSR Overload Status TLV is sent only
   between two immediate LDP peers and is not forwarded.

   S-bit: The State Bit. It indicates whether the sender is setting the
   LSR Overload Status ON or OFF. The State Bit value is used as
   follows:

   1 - The TLV is indicating LSR overload status as ON.

   0 - The TLV is indicating LSR overload status as OFF.

When an LSR that implements the procedures defined in this document generates an LSR overload status, it must send the LSR Overload Status TLV in an LDP Notification message accompanied by a FEC TLV. The FEC TLV must contain one Typed Wildcard FEC TLV that specifies the FEC type to which the overload status notification applies.

The feature in this document reuses the Typed Wildcard FEC Element defined in RFC 5918.
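Following the figure above, the TLV could be packed as in this sketch. This is an illustration only; the placement of the U and F bits in the two high-order bits of the 16-bit type field is assumed from the RFC 5036 TLV encoding:

```python
import struct

TIMETRA_OUI = 0x0003FA          # carried in a 4-octet word, per the figure
OVERLOAD_STATUS_TYPE = 0x3E02   # vendor-private TLV type from the text

def encode_overload_status_tlv(overload_on: bool) -> bytes:
    """Pack the LSR Overload Status TLV as laid out in the figure above."""
    # U=1 (receiver ignores if unknown), F=0 (not forwarded) occupy the two
    # high-order bits of the type word (assumption from RFC 5036)
    type_word = (1 << 15) | OVERLOAD_STATUS_TYPE
    # Value: a 4-octet word carrying the Timetra OUI, then a 4-octet word
    # with the S (state) bit in the most significant position
    value = struct.pack("!II", TIMETRA_OUI, (1 << 31) if overload_on else 0)
    return struct.pack("!HH", type_word, len(value)) + value
```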

LSR overload protection capability

To ensure backward compatibility with the procedures in RFC 5036, an LSR supporting Overload Protection needs a means to determine whether a peering LSR supports overload protection.

An LDP speaker that supports the LSR Overload Protection procedures as defined in this document must inform its peers of this support by including an LSR Overload Protection Capability Parameter in its Initialization message. The Capability Parameter follows the guidelines and all Capability Negotiation procedures defined in RFC 5561. This TLV is encoded using vendor-proprietary TLV encoding as per RFC 5036. It uses a TLV type value of 0x3E03 and the Timetra OUI value of 0003FA.

       0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |U|F| LSR Overload Cap TLV Type |            Length             |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                  Timetra OUI = 0003FA                         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |S| Reserved    |
      +-+-+-+-+-+-+-+-+
Where:

   U and F bits : MUST be 1 and 0 respectively as per section 3 of LDP
   Capabilities [RFC5561].
   
   S-bit : MUST be 1 (indicates that capability is being advertised).
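Under the same assumptions as the Overload Status TLV sketch, the capability parameter from the figure above could be packed as follows. This is an illustrative sketch, not SR OS code:

```python
import struct

TIMETRA_OUI = 0x0003FA
OVERLOAD_CAP_TYPE = 0x3E03   # vendor-private TLV type from the text

def encode_overload_capability() -> bytes:
    """Pack the LSR Overload Protection Capability Parameter per the figure."""
    # U=1, F=0 in the two high-order bits of the type word, per the text above
    type_word = (1 << 15) | OVERLOAD_CAP_TYPE
    # Value: a 4-octet OUI word, then one octet with S=1 (capability advertised)
    value = struct.pack("!IB", TIMETRA_OUI, 0x80)
    return struct.pack("!HH", type_word, len(value)) + value
```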

Procedures for LSR overload protection

The procedures defined in this document apply only to LSRs that support Downstream Unsolicited (DU) label advertisement mode and Liberal Label Retention mode. An LSR that implements LSR overload protection follows these procedures:

Note: An LSR must not use the LSR overload notification procedures with a peer LSR that has not advertised the LSR Overload Protection Capability in the Initialization message received from that peer LSR.
  1. When an upstream LSR detects that it is overloaded for a FEC type, it must initiate an LDP Notification message with the S-bit ON in the LSR Overload Status TLV and a FEC TLV containing the Typed Wildcard FEC Element for the specified FEC type. This message may be sent to one or more peers.

  2. After it has notified its peers of its overload status ON for a FEC type, the overloaded upstream LSR can send a Label Release for a set of FEC elements to the respective downstream LSRs to offload its LIB to below a specified watermark.

  3. When an upstream LSR that was previously overloaded for a FEC type detects that it is no longer overloaded, it must send an LDP Notification message with the S-bit OFF in the LSR Overload Status TLV and a FEC TLV containing the Typed Wildcard FEC Element for the specified FEC type.

  4. When an upstream LSR has notified its peers that it is overloaded for a FEC type, then a downstream LSR must not send new label mappings for the specified FEC type to the upstream LSR.

  5. When a downstream LSR receives an LSR overload notification with status OFF for a FEC type from a peering LSR, the receiving LSR must send to the upstream LSR any label mappings for that FEC type that were pending and are now eligible to be sent.

  6. When an upstream LSR is overloaded for a FEC type and receives a Label Mapping for that FEC type from a downstream LSR, it can send a Label Release to the downstream peer for the received Label Mapping with the LDP status code No_Label_Resources, as defined in RFC 5036.
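From the downstream LSR's perspective, steps 4 and 5 amount to gating label mappings per FEC type. The following is a minimal sketch of that state tracking (a hypothetical class for illustration, not SR OS code):

```python
class DownstreamPeerState:
    """Tracks per-FEC-type overload state of an upstream peer and holds
    back label mappings while that FEC type is overloaded (steps 4-5)."""

    def __init__(self):
        self.overloaded = set()   # FEC types the upstream declared overloaded
        self.pending = {}         # fec_type -> labels waiting to be sent

    def on_overload_notification(self, fec_type, status_on):
        """Process an overload notification; on OFF, return the pending
        label mappings that are now eligible to be sent (step 5)."""
        if status_on:
            self.overloaded.add(fec_type)
            return []
        self.overloaded.discard(fec_type)
        return self.pending.pop(fec_type, [])

    def send_label_mapping(self, fec_type, label):
        """Return True if the mapping may be sent now; False if it must be
        held back because the upstream is overloaded for this FEC type."""
        if fec_type in self.overloaded:
            self.pending.setdefault(fec_type, []).append(label)
            return False
        return True
```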

User guidelines and troubleshooting procedures

Common procedures

When troubleshooting an LDP resource exhaustion situation on an LSR, the user must first determine which of the LSR and its peers support the enhanced handling of resources. This is done by checking whether the local LSR or its peers advertised the LSR Overload Protection Capability, using the following command.

show router ldp status
===============================================================================
LDP Status for IPv4 LSR ID 0.0.0.0
               IPv6 LSR ID ::
===============================================================================
Created at         : 01/08/19 17:57:06    
Last Change        : 01/08/19 17:57:06    
Admin State        : Up                   
IPv4 Oper State    : Down                 IPv6 Oper State      : Down
IPv4 Down Time     : 0d 00:12:58          IPv6 Down Time       : 0d 00:12:58
IPv4 Oper Down Rea*: systemIpDown         IPv6 Oper Down Reason: systemIpDown
IPv4 Oper Down Eve*: 0                    IPv6 Oper Down Events: 0
Tunn Down Damp Time: 3 sec                Weighted ECMP        : Disabled
Label Withdraw Del*: 0 sec                Implicit Null Label  : Disabled
Short. TTL Local   : Enabled              Short. TTL Transit   : Enabled
ConsiderSysIPInGep : Disabled             
Imp Ucast Policies :                      Exp Ucast Policies   : 
    pol1                                      none
Imp Mcast Policies :                                              
    pol1                                  
    policy2                               
    policy-3                              
    policy-four                           
    pol-five                              
Tunl Exp Policies  : None                 Tunl Imp Policies    : None
FRR                : Disabled             Mcast Upstream FRR   : Disabled
Mcast Upst ASBR FRR: Disabled 

Base resource handling procedures

  1. If the peer or the local LSR does not support the Overload Protection Capability, the associated adjacency [interface/peer] is brought down as part of the base resource handling mechanism.

    The user can determine which interface or targeted peer was administratively disabled, by applying the following commands.

    show router ldp interface resource-failures
    show router ldp targ-peer resource-failures
    show router ldp interface resource-failures
    ===============================================================================
    LDP Interface Resource Failures
    ===============================================================================
    srl                                     srr
    sru4                                    sr4-1-5-1
    ===============================================================================
    show router ldp targ-peer resource-failures
    ===============================================================================
    LDP Peers Resource Failures
    ===============================================================================
    10.20.1.22                              192.168.1.3
    ===============================================================================

    A trap is also generated for each interface or targeted peer:

    16 2013/07/17 14:21:38.06 PST MINOR: LDP #2003 Base LDP Interface Admin State
    "Interface instance state changed - vRtrID: 1, Interface sr4-1-5-1, administrati
    ve state: inService, operational state: outOfService"
    
    13 2013/07/17 14:15:24.64 PST MINOR: LDP #2003 Base LDP Interface Admin State
    "Interface instance state changed - vRtrID: 1, Peer 10.20.1.22, administrative s
    tate: inService, operational state: outOfService"
    

    The user can then check that the base resource handling mechanism has been applied to a specific interface or peer by running the following show commands.

    show router ldp interface detail
    show router ldp targ-peer detail
    show router ldp interface detail 
    ===============================================================================
    LDP Interfaces (Detail)
    ===============================================================================
    -------------------------------------------------------------------------------
    Interface "sr4-1-5-1"
    -------------------------------------------------------------------------------
    Admin State        : Up                  Oper State       : Down
    Oper Down Reason   : noResources  <----- //link LDP resource exhaustion handled
    Hold Time          : 45                  Hello Factor     : 3
    Oper Hold Time     : 45                  
    Hello Reduction    : Disabled            Hello Reduction *: 3
    Keepalive Timeout  : 30                  Keepalive Factor : 3
    Transport Addr     : System              Last Modified    : 07/17/13 14:21:38
    Active Adjacencies : 0                   
    Tunneling          : Disabled            
    Lsp Name           : None
    Local LSR Type     : System
    Local LSR          : None
    BFD Status         : Disabled            
    Multicast Traffic  : Enabled             
    -------------------------------------------------------------------------------
    show router ldp discovery interface "sr4-1-5-1" detail 
    ===============================================================================
    LDP Hello Adjacencies (Detail)
    ===============================================================================
    -------------------------------------------------------------------------------
    Interface "sr4-1-5-1"
    -------------------------------------------------------------------------------
    Local Address      : 192.168.2.110      Peer Address        : 192.168.0.2
    Adjacency Type     : Link               State               : Down            
    ===============================================================================
    show router ldp targ-peer detail 
    ===============================================================================
    LDP Peers (Detail)
    ===============================================================================
    -------------------------------------------------------------------------------
    Peer 10.20.1.22
    -------------------------------------------------------------------------------
    Admin State        : Up              Oper State           : Down
    Oper Down Reason   : noResources     <----- // T-LDP resource exhaustion handled
    Hold Time          : 45              Hello Factor         : 3
    Oper Hold Time     : 45              
    Hello Reduction    : Disabled        Hello Reduction Fact*: 3
    Keepalive Timeout  : 40              Keepalive Factor     : 4
    Passive Mode       : Disabled        Last Modified        : 07/17/13 14:15:24
    Active Adjacencies : 0               Auto Created         : No
    Tunneling          : Enabled         
    Lsp Name           : None
    Local LSR          : None
    BFD Status         : Disabled        
    Multicast Traffic  : Disabled        
    -------------------------------------------------------------------------------
    show router ldp discovery peer 10.20.1.22 detail       
    ===============================================================================
    LDP Hello Adjacencies (Detail)
    ===============================================================================
    -------------------------------------------------------------------------------
    Peer 10.20.1.22
    -------------------------------------------------------------------------------
    Local Address      : 192.168.1.110      Peer Address        : 10.20.1.22
    Adjacency Type     : Targeted           State               : Down   <----- 
    //T-LDP resource exhaustion handled
    ===============================================================================
    
  2. Besides interfaces and targeted peers, locally originated FECs may also be put into overload. These are the following:
    • unicast fec-originate pop

    • multicast local static p2mp-fec type=1 [on leaf LSR]

    • multicast local Dynamic p2mp-fec type=3 [on leaf LSR]

    The user can check whether remote FECs, local FECs, or both have been set in overload by the base resource exhaustion mechanism by using the tools dump router ldp instance command.

    The relevant part of the output is described below:

    {...... snip......}
    Num OLoad Interfaces:      4     <----- //#LDP interfaces resource in exhaustion
    Num Targ Sessions:         72          Num Active Targ Sess:  62
    Num OLoad Targ Sessions:   7     <----- //#T-LDP peers in resource exhaustion
    Num Addr FECs Rcvd:        0           Num Addr FECs Sent:    0
    Num Addr Fecs OLoad:       1     <----- //# of local/remote unicast FECs in Overload
    Num Svc FECs Rcvd:         0           Num Svc FECs Sent:     0
    Num Svc FECs OLoad:        0     <----- // # of local/
    remote service Fecs in Overload
    Num mcast FECs Rcvd:       0           Num Mcast FECs Sent:   0
    Num mcast FECs OLoad:      0     <----- // # of local/
    remote multicast Fecs in Overload
    {...... snip......}
    

    When at least one local FEC has been set in overload the following trap occurs:

    23 2013/07/17 15:35:47.84 PST MINOR: LDP #2002 Base LDP Resources Exhausted 
    "Instance
     state changed - vRtrID: 1, administrative state: inService, operationa l state:
     inService"
    
  3. After the user has detected that at least one link LDP or T-LDP adjacency has been brought down by the resource exhaustion mechanism, they must protect the router by applying one or more of the following actions to free up resources:
    • Identify the source for the [unicast/multicast/service] FEC flooding.

    • Configure the appropriate [import/export] policies and/or delete the excess [unicast/multicast/service] FECs that are not currently handled.

  4. Next, the user must manually attempt to clear the overload (no resource) state and allow the router to attempt to restore the link and targeted sessions to its peers.
    Note: Because of the dynamic nature of FEC distribution and resolution by LSR nodes, one cannot predict exactly which FECs and which interfaces or targeted peers are restored after performing the following commands if the LSR activates resource exhaustion again.

    Use one of the following commands to clear the overload state:

    • clear router ldp resource-failures
      • clears the overload state and attempts to restore the adjacency and session for LDP interfaces and peers

      • clears the overload state for the local FECs

    • clear router ldp interface

      or

      clear router ldp peer
      • clears the overload state and attempts to restore the adjacency and session for LDP interfaces and peers

      • these two commands do not clear the overload state for the local FECs

Enhanced resource handling procedures

  1. If the peer and the local LSR both support the Overload Protection Capability, the LSR signals the overload state for the FEC type that caused the resource exhaustion as part of the enhanced resource handling mechanism.

    To verify if the local router has received or sent the overload status TLV, use the following command.

    show router ldp session 192.168.1.1 detail
    -------------------------------------------------------------------------------
    Session with Peer 192.168.1.1:0, Local 192.168.1.110:0
    -------------------------------------------------------------------------------
    Adjacency Type         : Both           State                  : Established
    Up Time                : 0d 00:05:48    
    Max PDU Length         : 4096           KA/Hold Time Remaining : 24
    Link Adjacencies       : 1              Targeted Adjacencies   : 1
    Local Address          : 192.168.1.110  Peer Address           : 192.168.1.1
    Local TCP Port         : 51063          Peer TCP Port          : 646
    Local KA Timeout       : 30             Peer KA Timeout        : 45
    Mesg Sent              : 442            Mesg Recv              : 2984
    FECs Sent              : 16             FECs Recv              : 2559
    Addrs Sent             : 17             Addrs Recv             : 1054
    GR State               : Capable        Label Distribution     : DU
    Nbr Liveness Time      : 0              Max Recovery Time      : 0
    Number of Restart      : 0              Last Restart Time      : Never
    P2MP                   : Capable        MP MBB                 : Capable
    Dynamic Capability     : Not Capable    LSR Overload           : Capable
    Advertise              : Address/Servi* BFD Operational Status : inService
    Addr FEC OverLoad Sent : Yes            Addr FEC OverLoad Recv : No     <---- 
    // this LSR sent overLoad for unicast FEC type to peer
    Mcast FEC Overload Sent: No             Mcast FEC Overload Recv: No
    Serv FEC Overload Sent : No             Serv FEC Overload Recv : No
    -------------------------------------------------------------------------------
    
    show router ldp session 192.168.1.110 detail
    -------------------------------------------------------------------------------
    Session with Peer 192.168.1.110:0, Local 192.168.1.1:0
    -------------------------------------------------------------------------------
    Adjacency Type         : Both           State                  : Established
    Up Time                : 0d 00:08:23    
    Max PDU Length         : 4096           KA/Hold Time Remaining : 21
    Link Adjacencies       : 1              Targeted Adjacencies   : 1
    Local Address          : 192.168.1.1    Peer Address           : 192.168.1.110
    Local TCP Port         : 646            Peer TCP Port          : 51063
    Local KA Timeout       : 45             Peer KA Timeout        : 30
    Mesg Sent              : 3020           Mesg Recv              : 480
    FECs Sent              : 2867           FECs Recv              : 16
    Addrs Sent             : 1054           Addrs Recv             : 17
    GR State               : Capable        Label Distribution     : DU
    Nbr Liveness Time      : 0              Max Recovery Time      : 0
    Number of Restart      : 0              Last Restart Time      : Never
    P2MP                   : Capable        MP MBB                 : Capable
    Dynamic Capability     : Not Capable    LSR Overload           : Capable
    Advertise              : Address/Servi* BFD Operational Status : inService
    Addr FEC OverLoad Sent : No             Addr FEC OverLoad Recv : Yes     <---- 
    // this LSR received overLoad for unicast FEC type from peer
    Mcast FEC Overload Sent: No             Mcast FEC Overload Recv: No
    Serv FEC Overload Sent : No             Serv FEC Overload Recv : No
    =============================================================================== 

    A trap is also generated:

    70002 2013/07/17 16:06:59.46 PST MINOR: LDP #2008 Base LDP Session State Change
    "Session state is operational. Overload Notification message is sent to/from peer 
      192.168.1.1:0 with overload state true for fec type prefixes"
    
  2. Besides interfaces and targeted peers, locally originated FECs may also be put into overload. These are the following:
    • unicast fec-originate pop

    • multicast local static p2mp-fec type=1 [on leaf LSR]

    • multicast local Dynamic p2mp-fec type=3 [on leaf LSR]

    The user can check whether remote FECs, local FECs, or both have been set in overload by the enhanced resource exhaustion mechanism by using the following command.

    tools dump router ldp instance

    The relevant part of the output is described below:

      Num Entities OLoad (FEC: Address Prefix  ):  Sent: 7           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=unicast
      Num Entities OLoad (FEC: PWE3            ):  Sent: 0           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=service
      Num Entities OLoad (FEC: GENPWE3         ):  Sent: 0           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=service
      Num Entities OLoad (FEC: P2MP            ):  Sent: 0           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=MulticastP2mp
      Num Entities OLoad (FEC: MP2MP UP        ):  Sent: 0           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=MulticastMP2mp
      Num Entities OLoad (FEC: MP2MP DOWN      ):  Sent: 0           Rcvd: 0   <----- 
    // # of session in OvLd for fec-type=MulticastMP2mp
      Num Active Adjacencies:    9
      Num Interfaces:            6           Num Active Interfaces: 6
      Num OLoad Interfaces:      0      <----- // link LDP interfaces in resource 
     exhaustion
     should be zero when Overload Protection Capability is supported
      Num Targ Sessions:         72          Num Active Targ Sess:  67
      Num OLoad Targ Sessions:   0      <----- // T-LDP peers in resource exhaustion
     should be zero if Overload Protection Capability is supported
      Num Addr FECs Rcvd:        8667        Num Addr FECs Sent:    91
      Num Addr Fecs OLoad:       1                                    <----- 
    // # of local/remote unicast Fecs in Overload
      Num Svc FECs Rcvd:         3111        Num Svc FECs Sent:     0
      Num Svc FECs OLoad:        0                                    <----- 
    // # of local/remote service   Fecs in Overload
      Num mcast FECs Rcvd:       0           Num Mcast FECs Sent:   0
      Num mcast FECs OLoad:      0                                    <----- 
    // # of local/remote multicast Fecs in Overload
      Num MAC Flush Rcvd:        0           Num MAC Flush Sent:    0
    

    When at least one local FEC has been set in overload the following trap occurs:

    69999 2013/07/17 16:06:59.21 PST MINOR: LDP #2002 Base LDP Resources Exhausted
     "Instance state changed - vRtrID: 1, administrative state: inService, operational
     state: inService"
    
  3. After the user has detected that at least one overload status TLV has been sent or received by the LSR, they must protect the router by applying one or more of the following actions to free up resources:
    • Identify the source of the [unicast/multicast/service] FEC flooding. These are most likely the LSRs whose sessions received the overload status TLV.

    • Configure the appropriate [import/export] policies and delete the excess [unicast/multicast/service] FECs from the FEC type in overload.

  4. Next, the user must manually attempt to clear the overload state on the affected sessions and for the affected FEC types, and allow the router to clear the overload status TLV toward its peers.
    Note: Because of the dynamic nature of FEC distribution and resolution by LSR nodes, one cannot predict exactly which sessions and which FECs are cleared after performing the following commands if the LSR activates overload again.

    One of the following commands can be used depending on whether the user wants to clear all sessions in one step or one session at a time:

    • clear router ldp resource-failures
      • clears the overload state for the affected sessions and FEC types

      • clears the overload state for the local FECs

    • clear router ldp session ip-address overload fec-type 
      • clears the overload state for the specified session and FEC type

      • clears the overload state for the local FECs

LDP-IGP synchronization

The SR OS supports the synchronization of an IGP and LDP based on a solution described in RFC 5443, which consists of setting the cost of a restored link to infinity to give both the IGP and LDP time to converge. When a link is restored after a failure, the IGP sets the link cost to infinity and advertises it. The actual value advertised in OSPF is 0xFFFF (65535). The actual value advertised in an IS-IS regular metric is 0x3F (63) and in IS-IS wide-metric is 0xFFFFFE (16777214). This synchronization feature is not supported on RIP interfaces.
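The infinite-metric values quoted above can be captured in a small helper. The names are hypothetical and the timer mechanics are reduced to a flag, for illustration only:

```python
# Infinite-metric values advertised while LDP-IGP synchronization holds the
# restored link's cost up, as listed in the text above
INFINITE_METRIC = {
    "ospf": 0xFFFF,          # 65535
    "isis-regular": 0x3F,    # 63
    "isis-wide": 0xFFFFFE,   # 16777214
}

def advertised_cost(igp: str, configured_cost: int, ldp_synced: bool) -> int:
    """Cost the IGP advertises for a restored link: infinity until the LDP
    synchronization timer expires, then the configured cost."""
    return configured_cost if ldp_synced else INFINITE_METRIC[igp]
```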

When the LDP synchronization timer subsequently expires, the actual cost is put back and the IGP readvertises it and uses it at the next SPF computation. The LDP synchronization timer is configured using the following command:

  • MD-CLI
    configure router interface ldp-sync-timer seconds seconds
  • classic CLI
    configure router interface ldp-sync-timer seconds

The SR OS also supports an LDP End of LIB message, as defined in RFC 5919, that allows a downstream node to indicate to its upstream peer that it has advertised its entire label information base. The effect of this on the IGP-LDP synchronization timer is described below.

If an interface belongs to both IS-IS and OSPF, a physical failure causes both IGPs to advertise an infinite metric and to follow the IGP-LDP synchronization procedures. If only one IGP bounces on this interface or on the system, then only the affected IGP advertises the infinite metric and follows the IGP-LDP synchronization procedures.

Next, an LDP Hello adjacency is brought up with the neighbor. The LDP synchronization timer is started by the IGP when the LDP session to the neighbor is up over the interface. This is to allow time for the label-FEC bindings to be exchanged.

When the LDP synchronization timer expires, the link cost is restored and is readvertised. The IGP announces a new best next hop and LDP uses it if the label binding for the neighbor’s FEC is available.

If the user changes the cost of an interface, the new value is advertised at the next flooding of link attributes by the IGP. However, if the LDP synchronization timer is still running, the new cost value is only advertised after the timer expires. The new cost value is also advertised after the user executes any of the following commands:

  • MD-CLI
    configure router isis ldp-sync false
    configure router ospf ldp-sync false
    configure router interface ldp-sync-timer delete seconds
    tools perform router isis ldp-sync-exit 
    tools perform router ospf ldp-sync-exit
  • classic CLI
    configure router isis disable-ldp-sync
    configure router ospf disable-ldp-sync
    configure router interface no ldp-sync-timer
    tools perform router isis ldp-sync-exit 
    tools perform router ospf ldp-sync-exit

If the user changes the value of the LDP synchronization timer command option, the new value takes effect at the next synchronization event. If the timer is still running, it continues to use the previous value.

If parallel links exist to the same neighbor, then the bindings and services should remain up as long as there is one interface that is up. However, the user-configured LDP synchronization timer still applies on the interface that failed and was restored. In this case, the router only considers this interface for forwarding after the IGP readvertises its actual cost value.

The LDP End of LIB message is used by a node to signal completion of label advertisements, using a FEC TLV with the Typed Wildcard FEC element, for all negotiated FEC types. This is done even if the system has no label bindings to advertise. SR OS also supports the Unrecognized Notification TLV (RFC 5919), which indicates to a peer node that the system silently ignores unrecognized status TLVs. This tells the peer node that it is safe to send End of LIB notifications even if this node is not configured to process them.

The behavior of a system that receives an End of LIB status notification is configured through the CLI on a per-interface basis as follows:

  • MD-CLI
    configure router interface ldp-sync-timer seconds seconds
    configure router interface ldp-sync-timer end-of-lib 
  • classic CLI
    configure router interface ldp-sync-timer seconds end-of-lib

If the end-of-lib command option is not configured, then the LDP synchronization timer is started when the LDP Hello adjacency comes up over the interface, as described above. Any received End of LIB LDP messages are ignored.

If the end-of-lib command option is configured, then the system behaves as follows on the receive side:

  • The ldp-sync-timer is started.

  • If LDP End of LIB Typed Wildcard FEC messages are received for every FEC type negotiated for a specified session to an LDP peer for that IGP interface, the ldp-sync-timer is terminated and processing proceeds as if the timer had expired, that is, by restoring the IGP link cost.

  • If the ldp-sync-timer expires before the LDP End of LIB messages are received for every negotiated FEC type, then the system restores the IGP link cost.

  • The receive side drops any unexpected End of LIB messages.
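
The receive-side rules above reduce to a simple predicate: restore the IGP link cost either when the timer expires or when End of LIB has been received for every negotiated FEC type, whichever comes first. The following is a minimal sketch of that logic; the function and FEC-type names are illustrative, not SR OS identifiers.

```python
# Illustrative model of the receive-side End of LIB handling: the
# ldp-sync-timer is cut short once End of LIB Typed Wildcard FEC messages
# have arrived for every FEC type negotiated on the session.
def sync_done(negotiated_fec_types, end_of_lib_received, timer_expired):
    """Return True when the IGP link cost should be restored."""
    if timer_expired:
        return True
    # Early termination: End of LIB seen for every negotiated FEC type.
    return set(negotiated_fec_types) <= set(end_of_lib_received)

assert not sync_done({"prefix-ipv4", "p2mp"}, {"prefix-ipv4"}, timer_expired=False)
assert sync_done({"prefix-ipv4", "p2mp"}, {"prefix-ipv4", "p2mp"}, timer_expired=False)
assert sync_done({"prefix-ipv4", "p2mp"}, set(), timer_expired=True)
```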

If the end-of-lib command option is configured, then the system also sends out an End of LIB message for prefix and P2MP FECs after all FECs are sent for all peers that have advertised the Unrecognized Notification Capability TLV.

MLDP resolution using multicast RTM

When unicast services use IGP shortcuts, IGP shortcut next hops are installed in the RTM. Therefore, for multicast P2MP MLDP, the leaf node would resolve the root using these IGP shortcuts. However, MLDP cannot currently be resolved using IGP shortcuts. To avoid this, MLDP can do its lookup in the multicast RTM (MRTM) instead, because IGP shortcuts are not installed in the MRTM. The following command selects whether MLDP does next-hop lookups in the RTM or the MRTM.
configure router ldp resolve-root-using {ucast-rtm | mcast-rtm}

By default, the resolve-root-using command is set to the ucast-rtm command option and MLDP uses the unicast RTM for resolution of the FEC in all cases. When MLDP uses the unicast RTM to resolve the FEC, it does not resolve the FEC if its next hop is resolved using an IGP shortcut.

To force MLDP resolution to use the multicast RTM, use the resolve-root-using mcast-rtm command option. When this command is enabled:

  • For FEC resolution using IGP, static or local, the ROOT in this FEC is resolved using the multicast RTM.

  • A FEC resolved using BGP is recursive, so the FEC next hop (ASBR/ABR) is resolved using the multicast RTM first and, if this fails, using the unicast RTM. This next hop must then be recursively resolved again using an IGP, static, or local route; this second (recursive) resolution uses the multicast RTM only. See Recursive FEC behavior.

  • When resolve-root-using ucast-rtm is set, MLDP uses the unicast RTM to resolve the FEC and does not resolve the FEC if its next hop is resolved using an IGP shortcut.

For inter-AS or intra-AS, IGP shortcuts are limited to each AS or area connecting LEAF to ASBR, ASBR to ASBR, or ASBR to ROOT.

Figure 40. Recursive FEC behavior

In Recursive FEC behavior, the FEC between LEAF and ASBR-3 is resolved using an IGP shortcut. When resolve-root-using mcast-rtm is set, the inner root 100.0.0.14 is resolved using the multicast RTM first. If the multicast RTM lookup fails, a second lookup for 100.0.0.14 is done in the unicast RTM. Resolution of 100.0.0.14 results in a next hop of 100.0.0.21, which is ASBR-3; therefore, ASBR-3 (100.0.0.21) is resolved using the multicast RTM only when mcast-rtm is enabled.
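
The lookup order for this recursive case can be sketched as follows. This is a toy model, not SR OS behavior verbatim: the table structures and the `recursive_bgp_fec` flag are assumptions of this example, and the addresses reuse the figure above.

```python
# Hedged sketch of MLDP root resolution with mcast-rtm configured: the
# multicast RTM is tried first, and only the outer (ASBR/ABR) lookup of a
# recursive BGP FEC may fall back to the unicast RTM.
def resolve_root(root, mcast_rtm, ucast_rtm, recursive_bgp_fec):
    """Return (table_used, next_hop), or None if the root cannot resolve."""
    if root in mcast_rtm:
        return ("mcast-rtm", mcast_rtm[root])
    if recursive_bgp_fec and root in ucast_rtm:
        # Fallback allowed only for the recursive BGP FEC's next hop.
        return ("ucast-rtm", ucast_rtm[root])
    return None

mcast = {"100.0.0.14": "100.0.0.21"}           # inner root -> ASBR-3
ucast = {"100.0.0.14": "100.0.0.99", "100.0.0.21": "100.0.0.1"}
assert resolve_root("100.0.0.14", mcast, ucast, True) == ("mcast-rtm", "100.0.0.21")
# The recursive (second) resolution uses the multicast RTM only:
assert resolve_root("100.0.0.21", mcast, ucast, False) is None
```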

Other considerations for multicast RTM MLDP resolution

When the configure router ldp resolve-root-using command is changed from mcast-rtm to ucast-rtm, or the reverse, traffic is disrupted. If MoFRR is enabled, toggling between mcast-rtm and ucast-rtm also affects MoFRR: it is torn down and re-established using the new routing table.

The mcast-rtm command option has only a local effect. All MLDP routing calculations on this specific node use the MRTM instead of the RTM.

If mcast-rtm is enabled, all MLDP functionality is based on MRTM. This includes MoFRR, ASBR-MoFRR, policy-based SPMSI, and non-segmented inter-AS.

Bidirectional forwarding detection for LDP LSPs

Bidirectional forwarding detection (BFD) for MPLS LSPs monitors the LSP between its LERs, irrespective of how many LSRs the LSP may traverse. This feature enables the detection of faults that are local to individual LSPs, irrespective of whether they also affect forwarding for other LSPs or IP packet flows. BFD for MPLS LSPs is ideal for monitoring LSPs that carry high-value services, and for which the quick detection of forwarding failures is critical. If an LSP BFD session goes down, the system raises an SNMP trap and indicates the BFD session state in the show and tools dump commands.

SR OS supports LSP BFD on RSVP and LDP LSPs. See MPLS and RSVP for information about using LSP BFD on RSVP LSPs. BFD packets are encapsulated in an MPLS label stack corresponding to the FEC that the BFD session is associated with, as described in RFC 5884, Section 7. SR OS does not support using multiple simultaneous LSP BFD sessions to monitor multiple ECMP paths associated with the same LDP FEC. However, LSP BFD still provides continuity checking for a path associated with a target FEC. LDP provides a single path to LSP BFD, corresponding to the first resolved next hop with the lowest interface index and, for LDP-over-RSVP cases, the first resolved lowest TID index. The path may change over the lifetime of the FEC based on resolution changes; the system tracks the changing path and maintains the LSP BFD session.

Because LDP LSPs are unidirectional, a routed return path is used for the BFD control packets traveling from the egress LER to the ingress LER.

Bootstrapping and maintaining LSP BFD sessions

A BFD session on an LSP is bootstrapped using LSP ping. LSP ping is used to exchange the local and remote discriminator values to use for the BFD session for a specific MPLS LSP or FEC.

The process for bootstrapping an LSP BFD session for LDP is the same as for RSVP, as described in Bidirectional forwarding detection for MPLS LSPs.

SR OS supports the sending of periodic LSP ping messages on an LSP for which LSP BFD has been configured, as specified in RFC 5884. The ping messages are sent, along with the bootstrap TLV, at a configurable interval for LSPs where bfd-enable is configured. The default interval is 60 s, with a maximum interval of 300 s. The LSP ping echo request message uses the system IP address as the default source address. An alternative source address consisting of any routable address that is local to the node may be configured and used if the local system IP address is not routable from the far-end node.

Note: SR OS does not take any action if a remote system fails to respond to a periodic LSP ping message. However, when the show test-oam lsp-bfd command is executed, it displays a return code of zero and a replying node address of 0.0.0.0 if the periodic LSP ping times out.
The periodic LSP ping interval is configured using the following command.
configure router ldp lsp-bfd lsp-ping-interval

Configuring an LSP ping interval of 0 disables periodic LSP ping for LDP FECs matching the specified prefix list. The lsp-ping-interval command has a default value of 60 s.

LSP BFD sessions are recreated after a high-availability switchover between active and standby CPMs. However, some disruption may occur to the LSP ping used by LSP BFD.

At the head end of an LSP, sessions are bootstrapped if the local and remote discriminators are not known. The sessions experience jitter at 0 to 25% of a retry time of 5 seconds. A side effect of the bootstrapping is that the following current information is lost from an active show test-oam lsp-bfd display:

  • Replying Node

  • Latest Return Code

  • Latest Return SubCode

  • Bootstrap Retry Count

  • Tx Lsp Ping Requests

  • Rx Lsp Ping Replies

If the local and remote discriminators are known, the system immediately begins generating periodic LSP pings. The pings experience jitter at 0 to 25% of the lsp-ping-interval time of 60 to 300 seconds. The lsp-ping-interval time is synchronized across CPMs by LSP BFD. A side effect of the switchover is that the following current information is lost from an active show test-oam lsp-bfd display:

  • Replying Node

  • Latest Return Code

  • Latest Return SubCode

  • Bootstrap Retry Count

  • Tx Lsp Ping Requests

  • Rx Lsp Ping Replies
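
The 0 to 25% jitter mentioned above for bootstrap retries and periodic pings can be sketched numerically. This is an assumption-laden example: the direction of the jitter (added on top of the base interval) and the helper name are illustrative, not documented SR OS behavior.

```python
import random

# Minimal sketch of 0-25% jitter on a base interval, as applied to the
# 5 s bootstrap retry time and the 60-300 s lsp-ping-interval above.
def jittered_interval(base_s, rng=random.random):
    # rng() returns a float in [0, 1); scale it to a 0-25% stretch.
    return base_s * (1.0 + 0.25 * rng())

rng = random.Random(7).random  # seeded for a repeatable demonstration
for _ in range(100):
    t = jittered_interval(60, rng)
    assert 60.0 <= t <= 75.0  # 60 s base, jittered up to +25%
```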

At the tail end of an LSP, sessions are recreated on the standby CPM following a switchover. The side effect of this is that the following current information is lost from an active tools dump test-oam lsp-bfd tail display:

  • handle

  • seqNum

  • rc

  • rsc

New, incoming bootstrap requests are dropped until the LSP BFD session is active. When the LSP BFD session is active, new bootstrap requests are considered.

BFD configuration on LDP LSPs

Use the commands under the following context to configure LSP BFD for LDP.

configure router ldp lsp-bfd

The lsp-bfd command creates the context for LSP BFD configuration for a set of LDP LSPs with a FEC matching the one defined by the prefix-list-name. The default is unconfigured. Using the following command, for a specified prefix list, removes LSP BFD for all matching LDP FECs except those that also match another LSP BFD prefix list.

  • MD-CLI
    delete lsp-bfd
  • classic CLI
    no lsp-bfd

The prefix-list-name variable refers to a named prefix list configured in the following context:

  • MD-CLI
    configure policy-options
  • classic CLI
    configure router policy-options

Up to 16 instances of LSP BFD can be configured under LDP in the base router instance.

The following optional command configures a priority value that is used to order the processing if multiple prefix lists are configured.
configure router ldp lsp-bfd priority
The default value is 1.

If more than one prefix in a prefix list, or more than one prefix list, contains a prefix that corresponds to the same LDP FEC, then the system tests the prefix against the configured prefix lists in the following order:

  1. numerically by priority level

  2. alphabetically by prefix list name

The system uses the first matching configuration, if one exists.
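
The ordering rule above (numerically by priority, then alphabetically by prefix list name, first match wins) can be sketched directly. The data layout and function name here are hypothetical; only the ordering rule comes from the text.

```python
# Illustrative ordering of LSP BFD prefix-list matching: sort configured
# lists by (priority, name), return the first list containing the FEC.
def match_fec(fec, prefix_lists):
    """prefix_lists: {name: (priority, set_of_prefixes)} -> matching name."""
    ordered = sorted(prefix_lists.items(), key=lambda kv: (kv[1][0], kv[0]))
    for name, (_priority, prefixes) in ordered:
        if fec in prefixes:
            return name
    return None

lists = {
    "blue": (2, {"10.0.0.1/32"}),
    "amber": (1, {"10.0.0.1/32", "10.0.0.2/32"}),
}
# Both lists contain the FEC; "amber" wins on its lower priority value.
assert match_fec("10.0.0.1/32", lists) == "amber"
assert match_fec("10.0.0.9/32", lists) is None
```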

If LSP BFD is removed for a prefix list, but another LSP BFD configuration with a matching prefix list remains, then any FECs that matched that prefix list are rematched against the remaining prefix list configurations in the same manner as described above.

A non-existent prefix list is equivalent to an empty prefix list. When a prefix list is created and populated with prefixes, LDP matches its FECs against that prefix list. It is not necessary to configure a named prefix list in the configure router policy-options context before specifying a prefix list using the following command.
configure router ldp lsp-bfd

If a prefix list contains a longest match corresponding to one or more LDP FECs, the BFD configuration is applied to all of the matching LDP LSPs.

Only /32 IPv4 and /128 IPv6 host prefix FECs are considered for BFD. BFD on PW FECs uses VCCV BFD.

The following command is used to configure the source address of periodic LSP ping packets and BFD control packets for LSP BFD sessions associated with LDP prefixes in the prefix list.
configure router ldp lsp-bfd source-address
The default value is the system IP address. If the system IP address is not routable from the far-end node of the BFD session, then an alternative routable IP address local to the source node should be used.

The system does not initialize an LSP BFD session if there is a mismatch between the address family of the source address and the address family of the prefix in the prefix list.

If the system has both IPv4 and IPv6 system IP addresses, and the source-address command is not configured, then the system uses a source address of the matching address family for IPv4 and IPv6 prefixes in the prefix list.
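
The source-address selection and the address-family mismatch rule above can be condensed into a small helper. This is a sketch under stated assumptions: the function name and argument layout are invented for illustration.

```python
import ipaddress

# Sketch of source-address selection for LSP BFD: an explicitly configured
# source-address wins; otherwise pick the system address whose address
# family matches the prefix being monitored.
def pick_source(prefix, configured_source, system_v4, system_v6):
    if configured_source is not None:
        return configured_source
    net = ipaddress.ip_network(prefix)
    return system_v6 if net.version == 6 else system_v4

assert pick_source("10.1.1.1/32", None, "192.0.2.1", "2001:db8::1") == "192.0.2.1"
assert pick_source("2001:db8:1::1/128", None, "192.0.2.1", "2001:db8::1") == "2001:db8::1"
assert pick_source("10.1.1.1/32", "198.51.100.7", "192.0.2.1", "2001:db8::1") == "198.51.100.7"
```

A fuller model would also refuse to initialize the session when a configured source address and the prefix belong to different address families, per the mismatch rule above.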

The following command applies the specified BFD template to the BFD sessions for LDP LSPs with FECs that match the prefix list.
configure router ldp lsp-bfd bfd-template
The default is no bfd-template. The named BFD template must first be configured using the following command before it can be referenced by LSP BFD, otherwise a CLI error is generated:
  • MD-CLI
    configure bfd bfd-template
  • classic CLI
    configure router bfd bfd-template
The minimum receive interval and transmit interval supported for LSP BFD on LDP LSPs are 1 second.

The bfd-enable command enables BFD on the LDP LSPs with FECs that match the prefix list.

LDP IPv6 control and data planes

SR OS extends the LDP control plane and data plane to support LDP IPv6 adjacency and session using 128-bit LSR-ID.

The implementation allows for concurrent support of independent LDP IPv4 (32-bit LSR-ID) and IPv6 (128-bit LSR-ID) adjacencies and sessions between peer LSRs and over the same or different set of interfaces.

LDP operation in an IPv6 network

LDP IPv6 can be enabled on the SR OS interface. LDP adjacency and session over an IPv6 interface shows the LDP adjacency and session over an IPv6 interface.

Figure 41. LDP adjacency and session over an IPv6 interface

LSR-A and LSR-B have the following IPv6 LDP identifiers respectively:

  • <LSR Id=A/128> : <label space id=0>

  • <LSR Id=B/128> : <label space id=0>

By default, A/128 and B/128 use the system interface IPv6 address.

Note: Although the LDP control plane can operate using only the IPv6 system address, the user must configure the IPv4-formatted router ID for OSPF, IS-IS, and BGP to operate properly.

The following sections describe the behavior when LDP IPv6 is enabled on the interface.

Link LDP

The SR OS LDP IPv6 implementation uses a 128-bit LSR-ID as defined in draft-pdutta-mpls-ldp-v2-00. See LDP process overview for more information about interoperability of this implementation with 32-bit LSR-ID, as defined in RFC 7552.

The Hello adjacency is brought up using a link Hello packet with the source IP address set to the interface link-local unicast address and the destination IP address set to the link-local multicast address FF02:0:0:0:0:0:0:2.

The transport address for the TCP connection, which is encoded in the Hello packet, is set to the LSR-ID of the LSR by default. It is set to the interface IPv6 address if the user enabled the interface option under one of the following contexts.

configure router ldp interface-parameters ipv6 transport-address
configure router ldp interface-parameters interface ipv6 transport-address

The interface global unicast address, meaning the primary IPv6 unicast address of the interface, is used.

The user can configure the local LSR ID option on the interface and change the value of the LSR-ID to either the local interface or another interface name, loopback or not, using the following command.
configure router ldp interface-parameters interface ipv6 local-lsr-id
The global unicast IPv6 address corresponding to the primary IPv6 address of the interface is used as the LSR-ID. If the user invokes an interface which does not have a global unicast IPv6 address in the configuration of the transport address or the configuration of the local-lsr-id command option, the session does not come up and an error message is displayed.

The LSR with the highest transport address bootstraps the IPv6 TCP connection and IPv6 LDP session.

Source and destination addresses of LDP/TCP session packets are the IPv6 transport addresses.
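
The "highest transport address bootstraps the TCP connection" rule above can be shown with a numeric address comparison. The helper name is illustrative; the comparison itself follows standard LDP active/passive role selection.

```python
import ipaddress

# Sketch of active/passive role selection: the LSR whose transport address
# is numerically higher opens (bootstraps) the TCP connection; the other
# side waits passively. Addresses here are example (documentation) values.
def active_side(local_transport_addr, peer_transport_addr):
    return ipaddress.ip_address(local_transport_addr) > \
           ipaddress.ip_address(peer_transport_addr)

# fc00::2 > fc00::1, so the LSR holding fc00::2 initiates the session.
assert active_side("fc00::2", "fc00::1")
assert not active_side("fc00::1", "fc00::2")
```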

Targeted LDP

Source and destination addresses of targeted Hello packet are the LDP IPv6 LSR-IDs of systems A and B.

The user can configure the local LSR ID option on the targeted session and change the value of the LSR-ID to either the local interface or to some other interface name, loopback or not, by using the following command.
configure router ldp interface-parameters interface ipv6 local-lsr-id
The global unicast IPv6 address corresponding to the primary IPv6 address of the interface is used as the LSR-ID. If the user invokes an interface that does not have a global unicast IPv6 address in the configuration of the transport address or the configuration of the local-lsr-id command option, the session does not come up and an error message is displayed. In all cases, the transport address for the LDP session and the source IP address of targeted Hello message are updated to the new LSR-ID value.

The LSR with the highest transport address (in this case, the LSR-ID) bootstraps the IPv6 TCP connection and IPv6 LDP session.

Source and destination IP addresses of LDP/TCP session packets are the IPv6 transport addresses (in this case, LDP LSR-IDs of systems A and B).

FEC resolution

LDP advertises and withdraws all interface IPv6 addresses using the Address/Address-Withdraw message. Both the link-local unicast address and the configured global unicast addresses of an interface are advertised.

All LDP FEC types can be exchanged over an LDP IPv6 session, as in an LDP IPv4 session.

The LSR does not advertise a FEC for a link-local address and, if received, the LSR does not resolve it.

An IPv4 or IPv6 prefix FEC can be resolved to an LDP IPv6 interface in the same way as it is resolved to an LDP IPv4 interface. The outgoing interface and next hop are looked up in the RTM cache. The next hop can be the link-local unicast address of the other side of the link or a global unicast address. The FEC is resolved to the LDP IPv6 interface of the downstream LDP IPv6 LSR that advertised the IPv4 or IPv6 address of the next hop.

An mLDP P2MP FEC with an IPv4 root LSR address, carrying one or more IPv4 or IPv6 multicast prefixes in the opaque element, can be resolved to an upstream LDP IPv6 LSR by checking whether that LSR advertised the next hop for the IPv4 root LSR address. The IPv4 P2MP FEC is then resolved to one of the LDP IPv6 links to this upstream LSR.

Note: Beginning in Release 13.0, a P2MP FEC with an IPv6 root LSR address, carrying one or more IPv4 or IPv6 multicast prefixes in the opaque element, is not supported. Manually configured mLDP P2MP LSP, NG-mVPN, and dynamic mLDP cannot operate in an IPv6-only network.

A PW FEC can be resolved to a targeted LDP IPv6 adjacency with an LDP IPv6 LSR if there is a context for the FEC with local spoke-SDP configuration or spoke-SDP auto-creation from a service such as BGP-AD VPLS, BGP-VPWS or dynamic MS-PW.

LDP session capabilities

LDP supports advertisement of all FEC types over an LDP IPv4 or an LDP IPv6 session. These FEC types are: IPv4 prefix FEC, IPv6 prefix FEC, IPv4 P2MP FEC, PW FEC 128, and PW FEC 129.

In addition, LDP supports signaling the enabling or disabling of the advertisement of the following subset of FEC types both during the LDP IPv4 or IPv6 session initialization phase, and subsequently when the session is already up.

  • IPv4 prefix FEC

    This is performed using the State Advertisement Control (SAC) capability TLV as specified in RFC 7473. The SAC capability TLV includes the IPv4 SAC element having the D-bit (Disable-bit) set or reset to disable or enable this FEC type respectively. The LSR can send this TLV in the LDP Initialization message and subsequently in a LDP Capability message.

  • IPv6 prefix FEC

    This is performed using the State Advertisement Control (SAC) capability TLV as specified in RFC 7473. The SAC capability TLV includes the IPv6 SAC element having the D-bit (Disable-bit) set or reset to disable or enable this FEC type respectively. The LSR can send this TLV in the LDP Initialization message and subsequently in a LDP Capability message to update the state of this FEC type.

  • P2MP FEC

    This is performed using the P2MP capability TLV as specified in RFC 6388. The P2MP capability TLV has the S-bit (State-bit) with a value of set or reset to enable or disable this FEC type respectively. Unlike the IPv4 SAC and IPv6 SAC capabilities, the P2MP capability does not distinguish between IPv4 and IPv6 P2MP FEC. The LSR can send this TLV in the LDP Initialization message and, subsequently, in a LDP Capability message to update the state of this FEC type.

During LDP session initialization, each LSR indicates to its peers which FEC type it supports by including the capability TLV for it in the LDP Initialization message. The SR OS implementation enables the above FEC types by default and sends the corresponding capability TLVs in the LDP initialization message. If one or both peers advertise the disabling of a capability in the LDP Initialization message, no FECs of the corresponding FEC type are exchanged between the two peers for the lifetime of the LDP session unless a Capability message is sent subsequently to explicitly enable it. The same behavior applies if no capability TLV for a FEC type is advertised in the LDP initialization message, except for the IPv4 prefix FEC which is assumed to be supported by all implementations by default.

Dynamic Capability, as defined in RFC 5561, allows all above FEC types to update the enabled or disabled state after the LDP session initialization phase. An LSR informs its peer that it supports the Dynamic Capability by including the Dynamic Capability Announcement TLV in the LDP Initialization message. If both LSRs advertise this capability, the user is allowed to enable or disable any of the above FEC types while the session is up and the change takes effect immediately. The LSR then sends a SAC Capability message with the IPv4 or IPv6 SAC element having the D-bit (Disable-bit) set or reset, or the P2MP capability TLV in a Capability message with the S-bit (State-bit) set or reset. Each LSR then takes the consequent action of withdrawing or advertising the FECs of that type to the peer LSR. If one or both LSRs did not advertise the Dynamic Capability Announcement TLV in the LDP Initialization message, any change to the enabled or disabled FEC types only takes effect at the next time the LDP session is restarted.

The user can enable a specific FEC type for a specific LDP session to a peer by using the following commands.

configure router ldp session-parameters peer fec-type-capability p2mp
configure router ldp session-parameters peer fec-type-capability prefix-ipv4
configure router ldp session-parameters peer fec-type-capability prefix-ipv6
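
The negotiation rules above reduce to two decisions: a FEC type is exchanged only if both peers advertised it enabled at initialization, and a later change takes effect immediately only if both peers advertised Dynamic Capability. The following is a toy model of those two rules; the function names are invented for illustration.

```python
# Toy model of LDP FEC-type capability negotiation (RFC 7473 SAC /
# RFC 5561 Dynamic Capability), per the session rules described above.
def fec_type_exchanged(init_local_enabled, init_peer_enabled):
    # At session initialization, both sides must advertise the FEC type
    # enabled for any FECs of that type to be exchanged.
    return init_local_enabled and init_peer_enabled

def apply_runtime_change(current_state, new_state, both_support_dynamic):
    # A subsequent Capability/SAC message takes effect immediately only if
    # both LSRs advertised Dynamic Capability; otherwise the change waits
    # for the next session restart.
    return new_state if both_support_dynamic else current_state

assert fec_type_exchanged(True, True)
assert not fec_type_exchanged(True, False)   # peer disabled it at init
assert apply_runtime_change(False, True, both_support_dynamic=True) is True
assert apply_runtime_change(False, True, both_support_dynamic=False) is False
```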

LDP adjacency capabilities

Adjacency-level FEC-type capability advertisement is defined in draft-pdutta-mpls-ldp-adj-capability. By default, all FEC types supported by the LSR are advertised in the LDP IPv4 or IPv6 session initialization; see LDP session capabilities for more information. If a specific FEC type is enabled at the session level, it can be disabled over a specified LDP interface at the IPv4 or IPv6 adjacency level for all IPv4 or IPv6 peers over that interface. If a specific FEC type is disabled at the session level, then FECs are not advertised and enabling that FEC type at the adjacency level does not have any effect. The LDP adjacency capability can be configured on link Hello adjacency only and does not apply to targeted Hello adjacency.

The LDP adjacency capability TLV is advertised in the Hello message with the D-bit (Disable-bit) set or reset to disable or enable the resolution of this FEC type over the link of the Hello adjacency. It is used to restrict which FECs can be resolved over a specified interface to a peer. This provides the ability to dedicate links and data path resources to specific FEC types. For IPv4 and IPv6 prefix FECs, a subset of ECMP links to an LSR peer may each be configured to carry one of the two FEC types. For an mLDP P2MP FEC, specific links to a downstream LSR can be excluded from being used to resolve this type of FEC.

Like the LDP session-level FEC-type capability, the adjacency FEC-type capability is negotiated for both directions of the adjacency. If one or both peers advertise the disabling of a capability in the LDP Hello message, no FECs of the corresponding FEC type are resolved by either peer over the link of this adjacency for the lifetime of the LDP Hello adjacency, unless one or both peers sends the LDP adjacency capability TLV subsequently to explicitly enable it.

The user can enable a FEC type for a specified LDP interface to a peer by using the following commands.

configure router ldp interface-parameters interface ipv4 fec-type-capability p2mp-ipv4
configure router ldp interface-parameters interface ipv4 fec-type-capability p2mp-ipv6
configure router ldp interface-parameters interface ipv4 fec-type-capability prefix-ipv4
configure router ldp interface-parameters interface ipv4 fec-type-capability prefix-ipv6 

configure router ldp interface-parameters interface ipv6 fec-type-capability p2mp-ipv4
configure router ldp interface-parameters interface ipv6 fec-type-capability p2mp-ipv6
configure router ldp interface-parameters interface ipv6 fec-type-capability prefix-ipv4
configure router ldp interface-parameters interface ipv6 fec-type-capability prefix-ipv6

Unlike the session-level capability, these commands can disable multicast FEC for IPv4 and IPv6 separately.

The encoding of the adjacency capability TLV uses a PRIVATE Vendor TLV. It is used only in a Hello message to negotiate a set of capabilities for a specific LDP IPv4 or IPv6 Hello adjacency.

0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0| ADJ_CAPABILITY_TLV        |      Length                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           VENDOR_OUI                          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S|  Reserved   |                                               |
+-+-+-+-+-+-+-+-+                                               +
|                 Adjacency capability elements                 |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The value of the U-bit for the TLV is set to 1 so that a receiver silently ignores the TLV if it is deemed unknown.

The value of the F-bit is 0. After being advertised, this capability cannot be withdrawn; accordingly, the S-bit is always set to 1 in a Hello message.

Adjacency capability elements are encoded as follows:

0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|D|  CapFlag    |
+-+-+-+-+-+-+-+-+
D bit: controls the capability state
  • 1: disable capability
  • 0: enable capability

CapFlag: the adjacency capability
  • 1: prefix IPv4 forwarding
  • 2: prefix IPv6 forwarding
  • 3: P2MP IPv4 forwarding
  • 4: P2MP IPv6 forwarding
  • 5: MP2MP IPv4 forwarding
  • 6: MP2MP IPv6 forwarding

Each CapFlag appears no more than once in the TLV. If duplicates are found, the D-bit of the first element is used. For forward compatibility, if the CapFlag is unknown, the receiver must silently discard the element and continue processing the rest of the TLV.
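
The one-octet element layout shown above (D-bit in the most significant bit, 7-bit CapFlag) and the "first duplicate wins" rule lend themselves to a short bit-manipulation sketch. The constant and function names are invented for illustration; the bit positions follow the element diagram.

```python
# Bit-level sketch of the adjacency capability element: D-bit (MSB)
# plus a 7-bit CapFlag, with first-duplicate-wins processing.
PREFIX_IPV4, PREFIX_IPV6, P2MP_IPV4, P2MP_IPV6 = 1, 2, 3, 4

def encode_element(disable, capflag):
    assert 0 <= capflag <= 0x7F
    return (0x80 if disable else 0x00) | capflag

def decode_element(octet):
    return bool(octet & 0x80), octet & 0x7F

# Disable P2MP IPv4 forwarding on this adjacency:
octet = encode_element(True, P2MP_IPV4)
assert octet == 0x83
assert decode_element(octet) == (True, P2MP_IPV4)

def first_wins(elements):
    """Duplicate CapFlags: keep the D-bit of the first element seen."""
    state = {}
    for element in elements:
        d_bit, capflag = decode_element(element)
        state.setdefault(capflag, d_bit)  # later duplicates are ignored
    return state

# The second (duplicate) element for PREFIX_IPV4 is ignored:
assert first_wins([encode_element(True, PREFIX_IPV4),
                   encode_element(False, PREFIX_IPV4)]) == {PREFIX_IPV4: True}
```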

Address and FEC distribution

After an LDP LSR initializes the LDP session to the peer LSR and the session comes up, local IPv4 and IPv6 interface addresses are exchanged using the Address and Address Withdraw messages. Similarly, FECs are exchanged using Label Mapping messages.

By default, IPv6 address distribution is determined by whether the Dual-stack capability TLV, which is defined in RFC 7552, is present in the Hello message from the peer. This coupling is introduced because of interoperability issues found with existing third-party LDP IPv4 implementations.

The following is the detailed behavior:

  • If the peer sent the dual-stack capability TLV in the Hello message, then IPv6 local addresses are sent to the peer. The user can configure a new address export policy to further restrict which local IPv6 interface addresses to send to the peer. If the peer explicitly stated enabling of LDP IPv6 FEC type by including the IPv6 SAC TLV with the D-bit (Disable-bit) set to 0 in the initialization message, then IPv6 FECs are sent to the peer. FEC prefix export policies can be used to restrict which LDP IPv6 FEC can be sent to the peer.

  • If the peer sent the dual-stack capability TLV in the Hello message, but explicitly stated disabling of LDP IPv6 FEC type by including the IPv6 SAC TLV with the D-bit (Disable-bit) set to 1 in the initialization message, then IPv6 FECs are not sent but IPv6 local addresses are sent to the peer. A CLI is provided to allow the configuration of an address export policy to further restrict which local IPv6 interface addresses to send to the peer. FEC prefix export policy has no effect because the peer explicitly requested disabling the IPv6 FEC type advertisement.

  • If the peer did not send the dual-stack capability TLV in the Hello message, then no IPv6 addresses or IPv6 FECs are sent to that peer, regardless of the presence or not of the IPv6 SAC TLV in the initialization message. This case is added to prevent interoperability issues with existing third-party LDP IPv4 implementations. The user can override this by explicitly configuring an address export policy and a FEC export policy to select which addresses and FECs to send to the peer.

The above behavior applies to LDP IPv4 and IPv6 addresses and FECs. The procedure is summarized in the flowchart diagrams in LDP IPv6 address and FEC distribution procedure and LDP IPv6 address and FEC distribution procedure.
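
The three bullets above form a small decision table keyed on the peer's dual-stack capability TLV and its IPv6 SAC D-bit. The sketch below encodes that table; export-policy overrides are deliberately omitted, and the function name is illustrative.

```python
# Decision-table sketch of IPv6 address/FEC distribution: without the
# peer's dual-stack capability TLV, nothing IPv6 is sent; with it, IPv6
# addresses are always sent and IPv6 FECs are gated by the SAC D-bit.
def ipv6_distribution(peer_sent_dual_stack_tlv, peer_ipv6_sac_disable_bit):
    if not peer_sent_dual_stack_tlv:
        # Guards against third-party LDP IPv4 interoperability issues.
        return {"send_ipv6_addresses": False, "send_ipv6_fecs": False}
    return {
        "send_ipv6_addresses": True,
        "send_ipv6_fecs": not peer_ipv6_sac_disable_bit,
    }

assert ipv6_distribution(True, False) == {"send_ipv6_addresses": True, "send_ipv6_fecs": True}
assert ipv6_distribution(True, True) == {"send_ipv6_addresses": True, "send_ipv6_fecs": False}
assert ipv6_distribution(False, False) == {"send_ipv6_addresses": False, "send_ipv6_fecs": False}
```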

Figure 42. LDP IPv6 address and FEC distribution procedure
Figure 43. LDP IPv6 address and FEC distribution procedure

Controlling IPv6 FEC distribution during an upgrade to SR OS supporting LDP IPv6

A FEC for each of the IPv4 and IPv6 system interface addresses is advertised and resolved automatically by the LDP peers when the LDP session comes up, regardless of whether the session is IPv4 or IPv6.

To avoid the automatic advertisement and resolution of IPv6 system FEC when the LDP session is IPv4, the following procedure must be followed before and after the upgrade to the SR OS version which introduces support of LDP IPv6.

Note: Before the upgrade, implement a global prefix policy which rejects prefix [::0/0 longer] to prevent IPv6 FECs from being installed after the upgrade.
  • In MISSU case:

    • If new IPv4 sessions are created on the node, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.

    • Until an existing IPv4 session is flapped, FEC-capabilities have no effect on filtering out IPv6 FECs. The import global policy must remain configured in place until the session flaps. Alternatively, a per-peer-import-policy [::0/0 longer] can be associated with this peer.

  • In cold upgrade case:

    • If new IPv4 sessions are created on the node, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.

    • On older, pre-existing IPv4 sessions, the per-peer FEC-capabilities must be configured to filter out IPv6 FECs.

  • When all LDP IPv4 sessions have dynamic capabilities enabled, with per-peer FEC-capabilities for IPv6 FECs disabled, then the GLOBAL IMPORT policy can be removed.

Handling of duplicate link-local IPv6 addresses in FEC resolution

Link-local IPv6 addresses are scoped to a link, and duplicate addresses can be used on different links to the same or different peer LSRs. When duplicate addresses exist on the same LAN, routing detects them and blocks one of them. In all other cases, duplicate link-local addresses are valid because they are scoped to the local link.

In this section, LLn refers to Link-Local address (n).

FEC resolution in LAN shows FEC resolution in a LAN.

Figure 44. FEC resolution in LAN

LSR B resolves an mLDP FEC with the root node being Root LSR. The route lookup shows that the best route to the loopback of Root LSR is {interface if-B and next-hop LL1}.

However, LDP finds that both LSR A and LSR C advertised address LL1 and that there are Hello adjacencies (IPv4 or IPv6) to both A and C. To resolve this ambiguity, an LSR only advertises link-local IPv6 addresses to a peer for the links over which it established a Hello adjacency to that peer. In this case, LSR C advertises LL1 to LSR E but not to LSRs A, B, and D. This behavior applies to both P2P and broadcast interfaces.

Ambiguity also exists with prefix FEC (unicast FEC); the above solution also applies.
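The advertisement rule above can be sketched as follows; the function and data shapes are illustrative assumptions, not SR OS internals. Each link carries its own link-local address and the set of peers with which a Hello adjacency exists on that link.

```python
def link_local_addrs_to_advertise(peer, links):
    """Advertise a link-local address to a peer only for links over which a
    Hello adjacency to that peer exists. 'links' maps an interface name to a
    tuple (local link-local address, set of peers with a Hello adjacency on
    that interface). Shapes are illustrative only."""
    return {addr for addr, peers in links.values() if peer in peers}
```

With the LAN example above, LSR C would advertise LL1 only to LSR E, never to A, B, or D.
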

FEC Resolution over P2P links

                              |---------(LL1)-[C]------
                              |
[Root LSR]-----[A]-(LL1)-----[B]------(LL4)-[D]------
                |             |
                |-(LL2)-------|
                |             |
                |-(LL3)-------|

LSR B resolves an mLDP FEC with the root node being Root LSR. The route lookup shows that the best route to the loopback of Root LSR is {interface if-B and next-hop LL1}.

  • case 1

    LDP is enabled on all links. This case has no ambiguity. LDP only selects LSR A because the address LL1 from LSR C is discovered over a different interface. This case also applies to prefix FEC (unicast FEC) and there is no ambiguity in the resolution.

  • case 2

    LDP is disabled on link A-B with next-hop LL1. LSR B can still select one of the two other interfaces to upstream LSR A, as long as LSR A advertised the LL1 address in the LDP session.
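The selection rule of case 1 can be sketched as follows, with illustrative data shapes: the upstream LSR is the peer that advertised the route's next-hop link-local address over the route's outgoing interface.

```python
def select_upstream(next_hop, out_if, advertised):
    """Return the peer that advertised next_hop over out_if. 'advertised'
    maps a peer name to the set of (interface, link-local address) pairs
    learned from that peer (illustrative shapes, not SR OS internals)."""
    for peer, pairs in advertised.items():
        if (out_if, next_hop) in pairs:
            return peer
    return None  # no unambiguous upstream found
```

In the P2P topology above, both A and C advertise LL1, but only A advertised it over if-B, so A is selected.
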

IGP and static route synchronization with LDP

The IGP-LDP synchronization and the static route to LDP synchronization features are modified to operate on a dual-stack IPv4/IPv6 LDP interface as follows:

  • If the router interface goes down, or both the LDP IPv4 and LDP IPv6 sessions go down, IGP sets the interface metric to the maximum value, and all static routes with the following command enabled and resolved on this interface are deactivated.
    configure router static-route-entry next-hop ldp-sync 
  • If the router interface is up and only one of the LDP IPv4 or LDP IPv6 interfaces goes down, no action is taken.

  • When the router interface comes up from a down state, and one of either the LDP IPv4 or LDP IPv6 sessions comes up, IGP starts the sync timer at the expiry of which the interface metric is restored to its configured value. All static routes with the ldp-sync command option enabled are also activated at the expiry of the timer.

Because of this behavior, it is recommended that the user configure the sync timer to a value that allows enough time for both the LDP IPv4 and LDP IPv6 sessions to come up.
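The dual-stack synchronization rules above can be summarized in a small sketch; MAX_METRIC and the function are illustrative stand-ins, not an SR OS API.

```python
MAX_METRIC = 0xFFFF  # illustrative stand-in for the IGP maximum metric

def igp_metric(intf_up, v4_up, v6_up, cfg_metric, sync_timer_expired):
    """Dual-stack IGP-LDP sync sketch: advertise the maximum metric when the
    interface is down or both LDP sessions are down, or while the sync timer
    is still running after the interface comes back up; otherwise use the
    configured metric. One session going down alone triggers no action."""
    if not intf_up or (not v4_up and not v6_up):
        return MAX_METRIC
    if not sync_timer_expired:
        return MAX_METRIC
    return cfg_metric
```
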

BFD operation

The operation of BFD over an LDP interface tracks the next-hops of IPv4 and IPv6 prefixes, in addition to tracking the LDP peer address of the Hello adjacency over that link. This tracking is required because LDP can now resolve both IPv4 and IPv6 prefix FECs over a single IPv4 or IPv6 LDP session, and therefore the next-hop of a prefix does not necessarily match the LDP peer source address of the Hello adjacency. The failure of either BFD session, the one tracking the FEC next-hop or the one tracking the Hello adjacency, causes the LFA backup NHLFE for the FEC to be activated, or the FEC to be re-resolved if there is no FRR backup.

The following commands allow the user to decide if they want to track only with an IPv4 BFD session, only with an IPv6 BFD session, or both:

  • MD-CLI
    configure router ldp interface-parameters interface bfd-liveness ipv4
    configure router ldp interface-parameters interface bfd-liveness ipv6
    
  • classic CLI
    configure router ldp interface-parameters interface bfd-enable ipv4
    configure router ldp interface-parameters interface bfd-enable ipv6
    

This command provides flexibility when the user does not need to track both the Hello adjacency and the next-hops of FECs. For example, if the user configures ipv6 only, to save on the number of BFD sessions, LDP tracks the IPv6 Hello adjacency and the next-hops of IPv6 prefix FECs, but does not track the next-hops of IPv4 prefix FECs resolved over the same LDP IPv6 adjacency. If the IPv4 data plane encounters errors while the IPv6 Hello adjacency is unaffected and remains up, traffic for the IPv4 prefix FECs resolved over that IPv6 adjacency is black-holed. If the BFD session tracking the IPv6 Hello adjacency times out, all IPv4 and IPv6 prefix FECs are updated.
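The per-family tracking described above can be sketched as follows; the function name, string labels, and shapes are illustrative, not SR OS internals.

```python
def bfd_tracked(bfd_ipv4, bfd_ipv6, adjacency_family):
    """Sketch of what BFD tracks on an LDP interface: each enabled family
    tracks the next-hops of that family's prefix FECs, and the Hello
    adjacency is tracked only when its family's BFD is enabled."""
    tracked = set()
    if bfd_ipv4:
        tracked.add("ipv4-fec-next-hops")
        if adjacency_family == "ipv4":
            tracked.add("hello-adjacency")
    if bfd_ipv6:
        tracked.add("ipv6-fec-next-hops")
        if adjacency_family == "ipv6":
            tracked.add("hello-adjacency")
    return tracked
```

The ipv6-only case illustrates the black-hole risk: IPv4 FEC next-hops resolved over the IPv6 adjacency are simply not in the tracked set.
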

The tracking of an mLDP FEC has the following behavior:

  • IPv4 and IPv6 mLDP FECs are only tracked with the Hello adjacency because they do not have the concept of downstream next-hop.

  • The upstream LSR peer for an mLDP FEC supports the multicast upstream FRR procedures, and the upstream peer is tracked using the Hello adjacency on each link or the IPv6 transport address if there is a T-LDP session.

  • The tracking of a targeted LDP peer with BFD does not change with the support of IPv6 peers. BFD tracks the transport address conveyed by the Hello adjacency which bootstrapped the LDP IPv6 session.

Services using SDP with an LDP IPv6 FEC

The SDP of type LDP, configured using configure service sdp ldp, is supported with IPv6 addresses using the following commands.

Use the following command to configure the system IP address of the far-end destination router for the SDP that is the termination point for a service:
  • MD-CLI
    configure service sdp far-end ip-address
  • classic CLI
    configure service sdp far-end
Use the following command to specify an SDP tunnel destination address that is different from the configuration of the SDP far-end option.
configure service sdp tunnel-far-end

The addresses need not be of the same family (IPv6 or IPv4) for the SDP configuration to be allowed. The user can have an SDP with an IPv4 (or IPv6) control plane for the T-LDP session and an IPv6 (or IPv4) LDP FEC as the tunnel.

Because an IPv6 LSP is only supported with LDP, the use of a far-end IPv6 address is not allowed with a BGP or RSVP/MPLS LSP. In addition, the CLI does not allow an SDP with a combination of an IPv6 LDP LSP and an IPv4 LSP of a different control plane. As a result, the following commands are blocked within the SDP configuration context when the far end is an IPv6 address:

configure service sdp bgp-tunnel
configure service sdp lsp
configure service sdp mixed-lsp-mode

SDP admin groups are not supported with an SDP using an LDP IPv6 FEC, and the attempt to assign them is blocked in CLI.

Services that use the LDP control plane (such as T-LDP VPLS and R-VPLS, VLL, and IES/VPRN spoke interface) have the spoke SDP (PW) signaled with an IPv6 T-LDP session when the far-end command option is configured to an IPv6 address. The spoke SDP for these services binds by default to an SDP that uses an LDP IPv6 FEC whose prefix matches the far-end address. The spoke SDP can use a different LDP IPv6 FEC, or an LDP IPv4 FEC, as the tunnel by configuring the tunnel-far-end command option. In addition, the IPv6 PW control word is supported with both data plane packets and VCCV OAM packets. Hash label is also supported with the above services, including the signaling and negotiation of hash label support using T-LDP (Flow sub-TLV) with the LDP IPv6 control plane. Finally, network domains are supported in VPLS.

Mirror services and lawful intercept

The user can configure a spoke SDP bound to an LDP IPv6 LSP to forward mirrored packets from a mirror source to a remote mirror destination. In the configuration of the mirror destination service at the destination node, the following command must use a spoke SDP with a VC-ID that matches the one that is configured in the mirror destination service at the mirror source node.
configure mirror mirror-dest remote-source
The following command is not supported with an IPv6 address.
configure mirror mirror-dest remote-source far-end

This also applies to the configuration of the mirror destination for a LI source.

Configuration at mirror source node

Note: This section applies to the classic CLI.

Use the following rules to configure at the mirror source node:

  • The sdp-id must match an SDP which uses LDP IPv6 FEC.

  • Configuring egress-vc-label is optional.

    configure mirror mirror-dest 10

The following example shows an optional vc-label configuration.

MD-CLI
[ex:/configure mirror mirror-dest "10"]
A:admin@node-2# info
    spoke-sdp 2:1 {
        egress {
            vc-label 16
        }
    }
classic CLI
A:node-2>config>mirror>mirror-dest$ info
----------------------------------------------
            shutdown
            spoke-sdp 2:1 create
                egress
                    vc-label 16
                exit
                no shutdown
            exit
----------------------------------------------

Configuration at mirror destination node

Use the following rules to configure at the mirror destination node.

  • The following command is not supported with an LDP IPv6 transport tunnel. The user must instead reference a spoke SDP using an LDP IPv6 SDP coming from the mirror source node:

    • MD-CLI
      configure mirror mirror-dest remote-source far-end far-end-addr
    • classic CLI
      configure mirror mirror-dest remote-source far-end
  • Use the following command to configure a spoke SDP for the remote source.
    configure mirror mirror-dest remote-source spoke-sdp
    The vc-id should match that of the configured spoke SDP in the mirror-destination context at mirror source node.
    configure mirror mirror-dest spoke-sdp
  • Configuring ingress-vc-label is optional; both static and t-ldp are supported in the following command.
    configure mirror mirror-dest 10 remote-source

The following example shows an optional vc-label configuration.

MD-CLI
[ex:/configure mirror mirror-dest "10" remote-source]
A:admin@node-2# info
    far-end 10.10.10.5 {
    }
    spoke-sdp 2:1 {
        ingress {
            vc-label 33
        }
    }
classic CLI
A:node-2>config>mirror>mirror-dest>remote-source$ info
----------------------------------------------
            far-end 10.10.10.5

            spoke-sdp 2:1 create
                ingress
                    vc-label 33
                exit
                no shutdown
----------------------------------------------

Mirroring and LI are also supported with the PW redundancy feature when the endpoint spoke SDP, including the ICB, uses an LDP IPv6 tunnel.

Static route resolution to a LDP IPv6 FEC

An LDP IPv6 FEC can be used to resolve a static IPv6 route with an indirect next hop matching the FEC prefix. Use the following command to configure a resolution filter to specify the LDP tunnel type to be selected from TTM:

  • MD-CLI
    configure router static-routes route indirect tunnel-next-hop resolution-filter
  • classic CLI
    configure router static-route-entry indirect tunnel-next-hop resolution-filter

A static route of an IPv6 prefix cannot be resolved to an indirect next hop using an LDP IPv4 FEC. An IPv6 prefix can only be resolved to an IPv4 next hop using the 6-over-4 encapsulation, in which the outer IPv4 header uses the system IPv4 address as the source and the next hop as the destination. As a result, the following example returns an error:

A:node-2>config>router# static-route-entry 3ffe::30/128 indirect 192.168.1.1 tunnel-next-hop resolution-filter ldp

INFO: PIP #2209 Tunnel parameters cannot be used on 6over4 static-routes

IGP route resolution to a LDP IPv6 FEC

LDP IPv6 shortcuts for IGP IPv6 prefixes are supported. The following commands allow a user to select whether shortcuts are enabled for IPv4 prefixes only, for IPv6 prefixes only, or for both:

  • MD-CLI
    configure router ldp ldp-shortcut ipv4
    configure router ldp ldp-shortcut ipv6
  • classic CLI
    configure router ldp-shortcut [ipv4] [ipv6]

This CLI command has the following behaviors:

  • When executing a pre-Release 13.0 config file, the existing command configure router ldp-shortcut is converted to configure router ldp-shortcut ipv4.

  • If the user enters the command without the command options in the CLI, it defaults to enabling shortcuts for IPv4 IGP prefixes.

  • When the user enters both IPv4 and IPv6 command options in the CLI, shortcuts for both IPv4 and IPv6 prefixes are enabled.
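The option handling above can be summarized in a short sketch; this is illustrative, not the actual CLI parser.

```python
def ldp_shortcut_families(options):
    """Sketch of classic CLI 'ldp-shortcut [ipv4] [ipv6]' handling: with no
    options the command defaults to IPv4 shortcuts only, which is also where
    a converted pre-Release 13.0 config lands; explicit options enable the
    listed families."""
    return set(options) if options else {"ipv4"}
```
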

OAM support with LDP IPv6

The following MPLS OAM tools are updated to operate with LDP IPv6:
  • MD-CLI
    oam lsp-ping ldp prefix [path-destination] [source-ip-address]
    oam lsp-trace ldp prefix [path-destination] [source-ip-address]
  • classic CLI
    oam lsp-ping ldp prefix [path-destination] [src-ip-address]
    oam lsp-trace ldp prefix [path-destination] [src-ip-address]
These MPLS OAM tools support the following:
  • use of IPv6 addresses in the echo request and echo reply messages, including in DSMAP TLV, as per RFC 8029

  • use of LDP IPv6 prefix target FEC stack TLV as per RFC 8029

  • use of IPv6 addresses in the DDMAP TLV and FEC stack change sub-TLV, as per RFC 6424

  • use of a 127/8 IPv4-mapped IPv6 address, that is, an address in the range ::ffff:127/104, as the destination address of the echo request message, as per RFC 8029

  • use of a 127/8 IPv4-mapped IPv6 address, that is, an address in the range ::ffff:127/104, as the path-destination address when the user wants to exercise a specific LDP ECMP path
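The ::ffff:127/104 range cited above can be checked with Python's ipaddress module; the helper name is illustrative.

```python
import ipaddress

# IPv4-mapped IPv6 form of the IPv4 loopback range 127/8
MAPPED_127 = ipaddress.ip_network("::ffff:127.0.0.0/104")

def in_mapped_127(addr: str) -> bool:
    """True if addr is an IPv4-mapped IPv6 address inside ::ffff:127/104."""
    return ipaddress.ip_address(addr) in MAPPED_127
```
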

The behavior at the sender and receiver nodes is updated to support both LDP IPv4 and IPv6 target FEC stack TLVs. Specifically:

  • The IP family (IPv4/IPv6) of the UDP/IP echo request message always matches the family of the LDP target FEC stack TLV as entered by the user in the prefix command option.

  • The source-ip-address command option is extended to accept an IPv6 address of the sender node. If the user does not enter a source IP address, the system IPv6 address is used. If the user enters a source IP address of a different family than the LDP target FEC stack TLV, an error is returned and the test command is aborted.

  • The IP family of the UDP/IP echo reply message must match that of the received echo request message.

  • For lsp-trace, the downstream information in DSMAP/DDMAP is encoded as the same family as the LDP control plane of the link LDP or targeted LDP session to the downstream peer.

  • The sender node inserts the experimental value 65503 in the Router Alert Option of the echo request packet’s IPv6 header, as per RFC 5350. When IANA allocates a value for MPLS OAM as part of draft-ietf-mpls-oam-ipv6-rao, the implementation is updated to use it.

Finally, the oam vccv-ping and oam vccv-trace commands for a single-hop PW are updated to support IPv6 PW FEC 128 and FEC 129, as per RFC 6829. These two commands apply only to the classic CLI. In addition, the PW OAM control word is supported with VCCV packets when the control word option is enabled on the spoke-SDP configuration. The value of the Channel Type field is set to 0x57, which indicates that the Associated Channel carries an IPv6 packet, as per RFC 4385.

LDP IPv6 interoperability considerations

Interoperability with implementations compliant with RFC 7552

The SR OS implementation uses a 128-bit LSR-ID, as defined in draft-pdutta-mpls-ldp-v2, to establish an LDP IPv6 Hello adjacency and session with a peer LSR. This allows a routable system IPv6 address to be used by default to bring up the LDP task on the router and establish link LDP and T-LDP sessions to other LSRs, as is the common practice with LDP IPv4 in existing customer deployments. More importantly, this allows for the establishment of control plane independent LDP IPv4 and LDP IPv6 sessions between two LSRs over the same interface or set of interfaces. The SR OS implementation allows for multiple separate LDP IPv4 and LDP IPv6 sessions between two routers over the same interface or a set of interfaces, as long as each session uses a unique LSR-ID (32-bit for IPv4 and 128-bit for IPv6).

The SR OS LDP IPv6 implementation complies with the control plane procedures defined in RFC 7552 for establishing an LDP IPv6 Hello adjacency and LDP session. However, the implementation does not interoperate, by default, with third-party implementations of this standard because the latter encode a 32-bit LSR-ID in the IPv6 Hello message while SR OS encodes a 128-bit LSR-ID.

To ensure interoperability in deployments strictly adhering to RFC 7552, SR OS provides the option of configuring and encoding a 32-bit LSR-ID in the LDP IPv6 Hello message. When this option is enabled, an SR OS LSR establishes an LDP IPv6 Hello adjacency and an LDP IPv6 session with an RFC 7552 compliant peer or targeted peer LSR, using a 32-bit LSR-ID and a 128-bit transport address. See LDP IPv6 32-bit LSR-ID for more information.

In a dual-stack IPv4/IPv6 interface environment, the SR OS based LSR does not originate both IPv6 and IPv4 Hello messages with the configured 32-bit LSR-ID value when both IPv4 and IPv6 contexts are enabled on the same LDP interface, although this behavior is allowed in RFC 7552 for migration purposes. Instead, SR OS implements separate IPv4 and IPv6 Hello adjacencies and LDP sessions with different LSR-ID values for the LDP IPv4 (32-bit value) and LDP IPv6 (32-bit or 128-bit value) Hello adjacencies. Therefore, the LDP IPv4 and LDP IPv6 sessions are independent in the control plane.

However, if the peer LSR sends both IPv4 and IPv6 Hello messages using the same 32-bit LSR-ID value, as allowed in RFC 7552, only a single LDP session with the local 32-bit LSR-ID comes up toward that peer LSR-ID, depending on which of the IPv4 or IPv6 adjacencies came up first.

The dual-stack capability TLV, in the Hello message, is used by an LSR to inform its peer that it is capable of establishing either an LDP IPv4 or LDP IPv6 session, and the IP family preference for the LDP Hello adjacency for the resulting LDP session.

Finally, the SR OS LDP implementation interoperates with an implementation using a 32-bit LSR-ID, as defined in RFC 7552, to establish an IPv4 LDP session and to resolve both IPv4 and IPv6 prefix FECs. In this case, the dual-stack capability TLV implicitly indicates the LSR's support for resolving IPv6 FECs over an IPv4 LDP session.

LDP IPv6 32-bit LSR-ID

The SR OS implementation provides the option for configuring and encoding a 32-bit LSR-ID in the LDP IPv6 Hello message to achieve interoperability in deployments strictly adhering to RFC 7552.

The LSR-ID of an LDP Label Switching Router (LSR) is a 32-bit integer used to uniquely identify it in a network. SR OS also supports LDP IPv6 in both the control plane and data plane. However, the implementation uses a 128-bit LSR-ID, as defined in draft-pdutta-mpls-ldp-v2, to establish an LDP IPv6 Hello adjacency and session with a peer LSR.

The SR OS LDP IPv6 implementation complies with the control plane procedures defined in RFC 7552 for establishing an LDP IPv6 Hello adjacency and LDP session. However, the SR OS LDP IPv6 implementation does not interoperate with third-party implementations of this standard, because the latter encode a 32-bit LSR-ID in the IPv6 Hello message, while SR OS encodes a 128-bit LSR-ID.

When this feature is enabled, an SR OS LSR is able to establish an LDP IPv6 Hello adjacency and an LDP IPv6 session with an RFC 7552 compliant peer or targeted peer LSR, using a 32-bit LSR-ID and a 128-bit transport address.

Feature configuration

The user configures the 32-bit LSR-ID on an LDP peer or targeted peer using the following commands:

  • MD-CLI
    configure router ldp interface-parameters interface ipv6 local-lsr-id format-32bit
    configure router ldp targeted-session peer local-lsr-id format-32bit
    
  • classic CLI
    configure router ldp interface-parameters interface ipv6 local-lsr-id interface [32bit-format]
    configure router ldp interface-parameters interface ipv6 local-lsr-id [32bit-format]
    configure router ldp targeted-session peer local-lsr-id [32bit-format]

When the local-lsr-id command is enabled with 32-bit formatting, an SR OS LSR can establish an LDP IPv6 Hello adjacency and an LDP IPv6 session with an RFC 7552 compliant peer or targeted peer LSR, using a 32-bit LSR-ID set to the IPv4 address of the specified local LSR-ID interface and a 128-bit transport address set to the IPv6 address of that interface.

Note: The system interface cannot be used as a local LSR-ID with 32-bit formatting enabled, because it is the default LSR-ID and transport address for all LDP sessions to peers and targeted peers on this LSR. This configuration is blocked in the CLI.

If the user enables 32-bit formatting in the IPv6 context of a running LDP interface, or in the targeted-session peer context of a running IPv6 peer, the already established LDP IPv6 Hello adjacency and LDP IPv6 session are brought down and re-established with the new 32-bit LSR-ID value.

The detailed control plane procedures are provided in LDP LSR IPv6 operation with 32-bit LSR-ID.

LDP LSR IPv6 operation with 32-bit LSR-ID

Consider the setup shown in LDP adjacency and session over IPv6 interface.

Figure 45. LDP adjacency and session over IPv6 interface

LSR A and LSR B have the following LDP command options.

LSR A

  • Interface I/F1 : link local address = fe80::a1

  • Interface I/F2 : link local address = fe80::a2

  • Interface LoA1: IPv4 address = <A1/32>; primary IPv6 unicast address = <A2/128>

  • Interface LoA2: IPv4 address = <A3/32>; primary IPv6 unicast address = <A4/128>

  • local-lsr-id = interface LoA1; 32 bit formatting option enabled

    Use the following commands to configure the interface and enable 32 bit formatting:

    • MD-CLI
      configure router ldp interface-parameters interface ipv6 local-lsr-id interface-name LoA1
      configure router ldp interface-parameters interface ipv6 local-lsr-id format-32bit
    • classic CLI
      configure router ldp interface-parameters interface ipv6 local-lsr-id LoA1 32bit-format

    LDP identifier = {<LSR Id=A1/32> : <label space id=0>}; transport address = <A2/128>

  • local-lsr-id = interface LoA2; 32 bit formatting option enabled

    Use the following commands to configure the interface and enable 32 bit formatting:

    • MD-CLI
      configure router ldp targeted-session peer local-lsr-id interface-name LoA2
      configure router ldp targeted-session peer local-lsr-id format-32bit
    • classic CLI
      configure router ldp targeted-session peer local-lsr-id LoA2 32bit-format

    LDP identifier = {<LSR Id=A3/32> : <label space id=0>}; transport address = <A4/128>

LSR B

  • Interface I/F1 : link local address = fe80::b1

  • Interface I/F2 : link local address = fe80::b2

  • Interface LoB1: IPv4 address = <B1/32>; primary IPv6 unicast address = <B2/128>

  • Interface LoB2: IPv4 address = <B3/32>; primary IPv6 unicast address = <B4/128>

  • local-lsr-id = interface LoB1; 32 bit formatting option enabled

    Use the following commands to configure the interface and enable 32 bit formatting:

    • MD-CLI
      configure router ldp interface-parameters interface ipv6 local-lsr-id interface-name LoB1
      configure router ldp interface-parameters interface ipv6 local-lsr-id format-32bit
    • classic CLI
      configure router ldp interface-parameters interface ipv6 local-lsr-id LoB1 32bit-format

    LDP identifier = {<LSR Id=B1/32> : <label space id=0>}; transport address = <B2/128>

  • local-lsr-id = interface LoB2; 32 bit formatting option enabled

    Use the following commands to configure the interface and enable 32 bit formatting:

    • MD-CLI
      configure router ldp targeted-session peer local-lsr-id interface-name LoB2
      configure router ldp targeted-session peer local-lsr-id format-32bit
    • classic CLI
      configure router ldp targeted-session peer local-lsr-id LoB2 32bit-format

    LDP identifier = {<LSR Id=B3/32> : <label space id=0>}; transport address = <B4/128>

Link LDP

When the IPv6 contexts of interfaces I/F1 and I/F2 are brought up, the following procedures are performed.

  1. LSR A (LSR B) sends an IPv6 Hello message with the source IP address set to the link-local unicast address of the LDP interface, for example, fe80::a1 (fe80::b1) on interface I/F1, and the destination IP address set to the link-local multicast address ff02:0:0:0:0:0:0:2.

  2. LSR A (LSR B) sets the LSR-ID in the LDP identifier field of the common LDP PDU header to the 32-bit IPv4 address of the specified local LSR-ID interface LoA1 (LoB1), for example, A1/32 (B1/32).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv4 address configured, the adjacency does not come up and an error is returned (lsrInterfaceNoValidIp [17]) in the output of the following command.
    show router ldp interface detail
  3. LSR A (LSR B) sets the transport address TLV in the Hello message to the IPv6 address of the specified local LSR-ID interface LoA1 (LoB1), for example, A2/128 (B2/128).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv6 address configured, the adjacency does not come up and an error is returned (interfaceNoValidIp [16]) in the output of the following command.

    show router ldp interface detail
  4. LSR A (LSR B) includes in each IPv6 Hello message the dual-stack TLV with the transport connection preference set to IPv6 family.

    • If the peer is a third-party LDP IPv6 implementation and does not include the dual-stack TLV, then LSR A (LSR B) resolves IPv6 FECs only, because IPv6 addresses are not advertised in Address messages, as per RFC 7552.

    • If the peer is a third-party LDP IPv6 implementation and includes the dual-stack TLV with the transport connection preference set to IPv4, LSR A (LSR B) does not bring up the Hello adjacency and discards the Hello message. If the LDP session was already established, LSR A (LSR B) sends a fatal Notification message with the status code 'Transport Connection Mismatch' (0x00000032) and restarts the LDP session, as per RFC 7552. In both cases, a counter for transport connection mismatches is incremented in the output of the following command.

      show router ldp statistics
  5. The LSR with the highest transport address takes the active role and initiates the TCP connection for the LDP IPv6 session, using the corresponding source and destination IPv6 transport addresses.
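Step 5 can be sketched as a simple transport address comparison, assuming both transport addresses are of the same family; the helper is illustrative.

```python
import ipaddress

def takes_active_role(local_ta: str, peer_ta: str) -> bool:
    """The LSR with the higher transport address takes the active role and
    initiates the TCP connection for the LDP session (illustrative sketch)."""
    return ipaddress.ip_address(local_ta) > ipaddress.ip_address(peer_ta)
```
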

Targeted LDP

Similarly, when the new option is invoked on a targeted IPv6 peer, the router sends an IPv6 targeted Hello message with the source IP address set to the global unicast IPv6 address corresponding to the primary IPv6 address of the specified interface, and the destination IP address set to the configured IPv6 address of the peer. The LSR-ID field in the LDP identifier in the common LDP PDU header is set to the 32-bit IPv4 address of the specified interface. If the specified interface does not have an IPv4 address configured, the adjacency does not come up. Any subsequent adjacency-level or session-level messages are sent with the common LDP PDU header set as above.

When the targeted IPv6 peer contexts are brought up, the following procedures are performed:

  1. LSR A (LSR B) sends an IPv6 Hello message with the source IP address set to the primary IPv6 unicast address of the specified local LSR-ID interface LoA2 (LoB2), for example, A4/128 (B4/128), and the destination IP address set to the peer address B4/128 (A4/128).

  2. LSR A (LSR B) sets the LSR-ID in the LDP identifier field of the common LDP PDU header to the 32-bit IPv4 address of the specified local LSR-ID interface LoA2 (LoB2), for example, A3/32 (B3/32).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv4 address configured, the adjacency does not come up and an error is returned.

  3. LSR A (LSR B) sets the transport address TLV in the Hello message to the IPv6 address of the specified local LSR-ID interface LoA2 (LoB2), for example, A4/128 (B4/128).

    If the specified local LSR-ID interface is unnumbered or does not have an IPv6 address configured, the adjacency does not come up and an error is returned.

  4. LSR A (LSR B) includes in each IPv6 Hello message the dual-stack TLV with the preference set to IPv6 family.

    • If the peer is a third-party LDP IPv6 implementation and does not include the dual-stack TLV, then LSR A (LSR B) resolves IPv6 FECs only, because IPv6 addresses are not advertised in Address messages, as per RFC 7552.

    • If the peer is a third-party LDP IPv6 implementation and includes the dual-stack TLV with the transport connection preference set to IPv4, LSR A (LSR B) does not bring up the Hello adjacency and discards the Hello message. If the LDP session was already established, LSR A (LSR B) sends a fatal Notification message with the status code 'Transport Connection Mismatch' (0x00000032) and restarts the LDP session, as per RFC 7552. In both cases, a counter for transport connection mismatches is incremented in the output of the following command.

      show router ldp statistics
  5. The LSR with the highest transport address takes the active role and initiates the TCP connection for the LDP IPv6 session, using the corresponding source and destination IPv6 transport addresses.

Link and targeted LDP feature interaction

The following describes feature interactions:

  • LSR A (LSR B) does not originate both IPv6 and IPv4 Hello messages with the configured 32-bit LSR-ID value when both IPv4 and IPv6 contexts are enabled on the same LDP interface (dual-stack LDP IPv4/IPv6). This behavior is allowed in RFC 7552 for migration purposes, but SR OS implements separate IPv4 and IPv6 Hello adjacencies and LDP sessions with different LSR-ID values. Therefore, an IPv6 context that uses a 32-bit LSR-ID address matching that of the IPv4 context on the same interface is not allowed to be brought up, and vice versa.

    Furthermore, an IPv6 context of any interface or targeted peer that uses a 32-bit LSR-ID address matching that of an IPv4 context of any other interface, an IPv6 context of any other interface using a 32-bit LSR-ID, a targeted IPv4 peer, a targeted IPv6 peer using a 32-bit LSR-ID, or an auto T-LDP IPv4 template on the same router is not allowed to be brought up, and vice versa.

  • With the introduction of a 32-bit LSR-ID for an IPv6 LDP interface or peer, it is possible to configure the same IPv6 transport address for an IPv4 LSR-ID and an IPv6 LSR-ID on the same node. For example, assume the following configuration:

    • interface I/F1

      • local-lsr-id = interface LoA1; option 32 bit-format enabled.

        Use the following commands to configure the interface and enable the 32 bit-format:

        • MD-CLI
          configure router ldp interface-parameters interface ipv6 local-lsr-id interface-name LoA1
          configure router ldp interface-parameters interface ipv6 local-lsr-id format-32bit
        • classic CLI
          configure router ldp interface-parameters interface ipv6 local-lsr-id LoA1 32bit-format
      • LDP identifier = {<LSR Id=A1/32> : <label space id=0>}; transport address = <A2/128>

    • interface I/F2

      • local-lsr-id = interface LoA1

        Use the following command to configure the interface:

        • MD-CLI
          configure router ldp interface-parameters interface ipv6 local-lsr-id interface-name LoA1
          
        • classic CLI
          configure router ldp interface-parameters interface ipv6 local-lsr-id LoA1
      • LDP identifier = {<LSR Id=A2/128> : <label space id=0>}; transport address = <A2/128>

    • targeted session

      • local-lsr-id = interface LoA1

        Use the following command to configure the interface:

        • MD-CLI
          configure router ldp targeted-session peer local-lsr-id interface-name LoA1
          
        • classic CLI
          configure router ldp targeted-session peer local-lsr-id LoA1
      • LDP identifier = {<LSR Id=A2/128> : <label space id=0>}; transport address = <A2/128>

    The above configuration results in two interfaces and a targeted session with the same local-end IPv6 transport address, but the local LSR-ID of interface I/F1 differs from that of the other two contexts.

    If an IPv6 Hello adjacency over interface I/F1 toward a specified peer comes up first and initiates an IPv6 LDP session, then the other two Hello adjacencies to the same peer do not come up.

    If one of the IPv6 Hello adjacencies of interface I/F2 or Targeted Session 1 comes up first to a peer, it triggers an IPv6 LDP session shared by both these adjacencies and the Hello adjacency over interface I/F1 to the same peer does not come up.

Migration considerations
Migrating services from LDP IPv4 session to 32-bit LSR-ID LDP IPv6 session

Assume the user deploys, on an SR OS-based LSR, a service bound to an SDP that auto-creates an IPv4 targeted LDP session to a peer LSR running a third-party LDP implementation. In this case, the auto-created T-LDP session uses the system interface IPv4 address as both the local LSR-ID and the local transport address, because no targeted session is configured in LDP to change these command options from their default values.

When both LSR nodes are migrated to LDP IPv6 with a 32-bit LSR-ID, the user must configure the IPv6 context of the local LDP interfaces to use a local LSR-ID interface different from the system interface and with the 32bit-format option enabled. Similarly, the user must configure a new targeted session in LDP with that same local LSR-ID interface and with the 32bit-format option enabled. This results in an LDP IPv6 session triggered by whichever of the link LDP IPv6 Hello adjacency or the targeted IPv6 Hello adjacency came up first. This LDP IPv6 session uses the IPv4 address and the IPv6 address of the configured local LSR-ID interface as the LSR-ID and transport address, respectively.

The user must then modify the service configuration on both ends to use a far-end address matching the far-end IPv6 transport address of the LDP IPv6 session. On the SR OS based LSR, this can be done by creating a new IPv6 SDP of type LDP with the far-end address matching the far-end IPv6 transport address.

If the service has PW redundancy enabled, the migration may be eased by creating a standby backup PW bound to the IPv6 SDP and adding it to the same VLL or VPLS endpoint that the spoke SDP bound to the IPv4 SDP belongs to. Then, activate the backup PW using the following command.
tools perform service id endpoint force-switchover
This makes the spoke SDP bound to the IPv6 SDP the primary PW. Finally, the spoke SDP bound to the IPv4 SDP can be deleted.
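As a sketch of these steps in the classic CLI, the IPv6 SDP and standby PW described above could be configured as follows. The SDP ID (20), service ID (100), endpoint name, and far-end address are hypothetical, and the exact syntax may vary by release:

```
configure service sdp 20 mpls create
    far-end 2001:db8::99
    ldp
    no shutdown
exit
configure service epipe 100
    endpoint "ep1" create
    exit
    spoke-sdp 20:100 endpoint "ep1" create
        no shutdown
    exit
exit
```

After the standby PW bound to SDP 20 comes up, the force-switchover command shown above activates it, and the spoke SDP bound to the IPv4 SDP can then be deleted.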

Interoperability with implementations compliant with RFC 5036 for IPv4 LDP control plane only

The SR OS implementation supports advertising and resolving IPv6 prefix FECs over an LDP IPv4 session using a 32-bit LSR-ID, in compliance with RFC 7552. When an SR OS-based LSR is introduced in a LAN with a broadcast interface, it can peer with third-party LSR implementations that support RFC 7552 and with LSRs that do not. When it peers, using an IPv4 LDP control plane, with a third-party LSR implementation that does not support RFC 7552, the advertisement of IPv6 addresses or IPv6 FECs to that peer may cause the peer to bring down the IPv4 LDP session.

That is, there are deployed third-party LDP implementations that are compliant with RFC 5036 for LDP IPv4, but that are not compliant with RFC 5036 for handling IPv6 addresses or IPv6 FECs over an LDP IPv4 session. To resolve this issue, RFC 7552 modifies RFC 5036 by requiring implementations complying with RFC 7552 to check for the dual-stack capability TLV in the IPv4 Hello message from the peer. If the peer does not advertise this TLV, an LSR must not send IPv6 addresses or FECs to that peer. The SR OS implementation supports this requirement.

Configuring LDP with CLI

This section provides information to configure LDP using the command line interface.

LDP configuration overview

When the implementation of LDP is instantiated, the protocol is in the no shutdown state. In addition, targeted sessions are then enabled. The default command options for LDP are set to the documented values for targeted sessions in draft-ietf-mpls-ldp-mib-09.txt.

LDP must be enabled in order for signaling to be used to obtain the ingress and egress labels in frames transmitted and received on the service distribution path (SDP). When signaling is off, labels must be manually configured when the SDP is bound to a service.
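For example, when signaling is turned off on an SDP, the VC labels can be configured manually on the service binding. The following classic CLI fragment is a sketch; the SDP ID, service ID, and label values are hypothetical:

```
configure service sdp 10 signaling off
configure service epipe 100
    spoke-sdp 10:100 create
        ingress vc-label 131070
        egress vc-label 131069
        no shutdown
    exit
exit
```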

Basic LDP configuration

Use this section to configure LDP and review configuration examples of common configuration tasks.

The LDP protocol instance is created in the enabled state.

The following example displays the default LDP configuration.

MD-CLI

[ex:/configure router "Base" ldp]
A:admin@node-2# info detail
... 
    import-pmsi-routes {
        mvpn false
        mvpn-no-export-community false
    }
 ## fec-originate
    egress-statistics {
     ## fec-prefix
    }
 ## lsp-bfd
    session-parameters {
     ## peer
    }
    tcp-session-parameters {
     ## authentication-keychain
     ## authentication-key
     ## peer-transport
    }
    interface-parameters {
        ipv4 {
            transport-address system
            hello {
                timeout 15
                factor 3
            }
            keepalive {
                timeout 30
                factor 3
            }
        }
        ipv6 {
            transport-address system
            hello {
                timeout 15
                factor 3
            }
            keepalive {
                timeout 30
                factor 3
            }
        }
     ## interface
    }
    targeted-session {
        sdp-auto-targeted-session true
     ## export-prefixes
     ## import-prefixes
        resolve-v6-prefix-over-shortcut false
        ipv4 {
            hello {
                timeout 45
                factor 3
            }
            keepalive {
                timeout 40
                factor 4
            }
            hello-reduction {
                admin-state disable
                factor 3
            }
        }
        ipv6 {
            hello {
                timeout 45
                factor 3
            }
            keepalive {
                timeout 40
                factor 4
            }
            hello-reduction {
                admin-state disable
                factor 3
            }
        }
  ...

classic CLI

A:node-2>config>router>ldp$ info detail
----------------------------------------------
...
            import-pmsi-routes
                no mvpn
                no mvpn-no-export-community
            exit
            tcp-session-parameters
                no auth-keychain
                no authentication-key
            exit
            interface-parameters
                ipv4
                    no hello
                    no keepalive
                    no transport-address
                exit
                ipv6
                    no hello
                    no keepalive
                    no transport-address
                exit
            exit
            targeted-session
                no disable-targeted-session
                no import-prefixes
                no export-prefixes
                ipv4
                    no hello
                    no keepalive
                    no hello-reduction
                exit
                ipv6
                    no hello
                    no keepalive
                    no hello-reduction
                exit
                auto-tx
                    ipv4
                        shutdown
                        no tunneling
                    exit
                exit
                auto-rx
                    ipv4
                        shutdown
                        no tunneling
                    exit
                exit
                no resolve-v6-prefix-over-shortcut
            exit
            no shutdown
----------------------------------------------

Common configuration tasks

This section provides an overview of the tasks to configure LDP and provides the CLI commands.

Enabling LDP

Note: This section applies for the classic CLI.

LDP must be enabled in order for the protocol to be active. MPLS is enabled in the configure router mpls context.

Use the following command to enable LDP on a router:

configure router ldp

The following displays the enabled LDP configuration.

classic CLI
A:node-2>config>router# info
...
#--------------------------------------------------
echo "LDP Configuration"
#--------------------------------------------------
        ldp
            import-pmsi-routes
            exit
            tcp-session-parameters
            exit
            interface-parameters
            exit
            targeted-session
            exit
            no shutdown
        exit
----------------------------------------------

Configuring FEC originate

A FEC can be added to the LDP IP prefix database with a specific label operation on the node. The permitted operations are pop and swap. For a swap operation, an incoming label can be swapped with a label in the range of 16 to 1048575. If a swap-label is not configured, the default value is 3.

A route-table entry is required for a FEC with a pop operation to be advertised. For a FEC with a swap operation, a route-table entry must exist, and the user-configured next-hop for the swap operation must match one of the next-hops in the route-table entry.

Use the commands in the following context to configure FEC originate.

configure router ldp fec-originate

The following example displays a FEC originate configuration.

MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
    fec-originate 10.1.1.1/32 {
        pop true
    }
    fec-originate 10.1.2.1/32 {
        advertised-label 1000
        next-hop 10.10.1.2
    }
    fec-originate 10.1.3.1/32 {
        advertised-label 1001
        next-hop 10.10.2.3
        swap-label 131071
    }
classic CLI
A:node-2>config>router# info

#--------------------------------------------------
echo "LDP Configuration"
#--------------------------------------------------
        ldp
            fec-originate 10.1.1.1/32 pop
            fec-originate 10.1.2.1/32 advertised-label 1000 next-hop 10.10.1.2
            fec-originate 10.1.3.1/32 advertised-label 1001 next-hop 10.10.2.3 swap-label 131071
            import-pmsi-routes
            exit
            tcp-session-parameters
            exit
            interface-parameters
            exit
            targeted-session
            exit
            no shutdown
        exit
----------------------------------------------

Configuring the graceful-restart helper

The graceful-restart helper advertises its capability to LDP neighbors by carrying the fault tolerant (FT) session TLV in the LDP initialization message, assisting a restarting neighbor in preserving its IP forwarding state across the restart. Nokia’s recovery is self-contained and relies on information stored internally to self-heal. This feature is only used to help third-party routers without a self-healing capability to recover.

Maximum recovery time is the time (in seconds) the sender of the TLV would like the receiver to wait, after detecting the failure of LDP communication with the sender.

Neighbor liveness time is the time (in seconds) the LSR is willing to retain its MPLS forwarding state. The time should be long enough to allow the neighboring LSRs to re-sync all the LSPs in a graceful manner, without creating congestion in the LDP control plane.

Use the commands in the following context to configure graceful-restart.

configure router ldp graceful-restart
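A minimal sketch of a graceful-restart helper configuration follows. The timer values are hypothetical, and the leaf names (maximum-recovery-time, neighbor-liveness-time) are assumed from the option descriptions above; verify them against the command reference for your release:

```
configure router ldp graceful-restart maximum-recovery-time 120
configure router ldp graceful-restart neighbor-liveness-time 30
```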

Applying export and import policies

Both inbound and outbound label binding filtering are supported. Inbound filtering allows a route policy to control the label bindings an LSR accepts from its peers. An import policy can accept or reject label bindings received from LDP peers.

Label bindings can be filtered based on:

  • neighbor (match on bindings received from the specified peer)

  • prefix-list (match on bindings with the specified prefix/prefixes)

Outbound filtering allows a route policy to control the set of LDP label bindings advertised by the LSR. By default, label bindings for only the system address are advertised, and all FECs that are received are propagated. All other local interface FECs can be advertised using policies.

Note: Static FECs cannot be blocked using an export policy.

Matches can be based on:

  • all (all local subnets)

  • match (match on bindings with the specified prefix/prefixes)

Use the commands in the following contexts to apply import and export policies.

configure router ldp export
configure router ldp import
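The policies referenced by these commands are defined in the route policy context. The following classic CLI fragment sketches an export policy that matches a prefix list; the prefix-list name and prefixes are hypothetical:

```
configure router policy-options
    begin
    prefix-list "ldp-locals"
        prefix 192.168.0.0/16 longer
    exit
    policy-statement "LDP-export"
        entry 10
            from
                prefix-list "ldp-locals"
            exit
            action accept
            exit
        exit
    exit
    commit
```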

The following example displays the export and import policy configuration.

MD-CLI
[ex:/configure router "Base"]
A:admin@node-2# info
  ldp {
        import-policy ["LDP-import"]
        export-policy ["LDP-export"]
        fec-originate 192.168.2.1/32 {
            advertised-label 1000
            next-hop 10.10.1.2
        }
        fec-originate 192.168.1.1/32 {
            pop true
        }
    }
classic CLI
A:node-2>config>router# info
#--------------------------------------------------
echo "LDP Configuration"
#--------------------------------------------------
        ldp
            export "LDP-export"
            import "LDP-import"
            fec-originate 192.168.1.1/32 pop
            fec-originate 192.168.2.1/32 advertised-label 1000 next-hop 10.10.1.2
            import-pmsi-routes
            exit
            tcp-session-parameters
            exit
            interface-parameters
            exit
            targeted-session
            exit
            no shutdown
        exit

Targeted session command options

Use the commands in the following context to specify targeted-session command options.

configure router ldp targeted-session

The following example displays an LDP configuration.

MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
    targeted-session {
        ipv4 {
            hello {
                timeout 120
            }
            keepalive {
                timeout 120
                factor 3
            }
        }
        peer 10.10.10.104 {
            hello {
                timeout 240
                factor 3
            }
            keepalive {
                timeout 240
                factor 3
            }
        }
    }
classic CLI
A:node-2>config>router>ldp# info
----------------------------------------------
...
            targeted-session
                ipv4
                    hello 120 3
                    keepalive 120 3
                exit
                peer 10.10.10.104
                    hello 240 3
                    keepalive 240 3
                exit
            exit
----------------------------------------------

Configuring the LDP interface

Use the commands in the following context to configure the interface.

configure router ldp interface-parameters

The following example displays an LDP interface configuration.

MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
 ...
    interface-parameters {
        interface "to-DUT1" {
            ipv4 {
                hello {
                    timeout 240
                    factor 3
                }
                keepalive {
                    timeout 240
                    factor 3
                }
            }
        }
    }
 
classic CLI
A:node-2>config>router>ldp# info
----------------------------------------------
 ...
            interface-parameters
                interface "to-DUT1" dual-stack
                    ipv4
                        hello 240 3
                        keepalive 240 3
                        no shutdown
                    exit
                    no shutdown
                exit
            exit
----------------------------------------------

Configuring the LDP session parameters

Use the commands in the following contexts to specify session parameters.

configure router ldp session-parameters
configure router ldp tcp-session-parameters

The following example displays an LDP session parameter configuration.

MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
    session-parameters {
        peer 10.1.1.1 {
        }
        peer 10.10.10.104 {
        }
    }
    tcp-session-parameters {
        peer-transport 10.10.10.104 {
            authentication-key "McTNkSePNJMVFysxyZa4yw8iLZbb7ys= hash2"
        }
    }

classic CLI
A:node-2>config>router>ldp# info
----------------------------------------------
            import-pmsi-routes
            exit
            session-parameters
                peer 10.1.1.1
                exit
                peer 10.10.10.104
                exit
            exit
            tcp-session-parameters
                peer-transport 10.10.10.104
                    authentication-key "McTNkSePNJMVFysxyZa4yw8iLZbb7ys=" hash2
                exit
            exit
            interface-parameters
            exit
            targeted-session
            exit
            no shutdown
----------------------------------------------

LDP signaling and services

When LDP is enabled, targeted sessions can be established to create remote adjacencies with nodes that are not directly connected. When service distribution paths (SDPs) are configured, extended discovery mechanisms enable LDP to send periodic targeted hello messages to the SDP far-end point. The exchange of LDP hellos triggers session establishment. The SDP signaling default enables targeted LDP (T-LDP).
configure service sdp signaling tldp
The service SDP uses the targeted-session configured in the following context.
configure router ldp targeted-session

The SDP LDP and LSP commands are mutually exclusive; either one LSP can be specified or LDP can be enabled. If LDP is already enabled on an MPLS SDP, then an LSP cannot be specified on the SDP. If an LSP is specified on an MPLS SDP, then LDP cannot be enabled on the SDP.

For more information about configuring SDPs, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide.

Use the commands in the following contexts to configure LDP on an MPLS SDP.

configure service sdp ldp
configure service sdp signaling

The following example displays an SDP configuration showing the signaling default tldp enabled.

MD-CLI
[ex:/configure service sdp 1]
A:admin@node-2# info detail
 ...
    description "MPLS: to-99"
    path-mtu 4462
    signaling tldp
    far-end {
        ip-address 10.10.10.99
    }
...
  
classic CLI

In the classic CLI, you must remove the LSP from the configuration using the no lsp lsp-name command to enable LDP on the SDP when an LSP is already specified.

A:node-2>config>service>sdp# info detail
----------------------------------------------
...
            description "MPLS: to-99"
            far-end 10.10.10.99
            signaling tldp
            path-mtu 4462
...
           ----------------------------------------------

The following shows a working configuration of LDP over RSVP-TE (Example 1), where the tunnels are configured as shown in Example 2:

Example 1 — LDP over RSVP-TE

MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
    prefer-tunnel-in-tunnel false
    interface-parameters {
        interface "LDP-test" {
        }
    }
    targeted-session {
        peer 10.51.0.1 {
            admin-state disable
            tunneling {
                lsp "to_P_1" { }
            }
        }
        peer 10.51.0.17 {
            admin-state disable
            tunneling {
                lsp "to_P_6" { }
            }
        }
    }
classic CLI
A:node-2>config>router>ldp# info
----------------------------------------------
            prefer-tunnel-in-tunnel
            interface-parameters
                interface "port-1/1/3"
                exit
                interface "port-lag-1"
                exit
            exit
            targeted-session
                peer 10.51.0.1
                    shutdown
                    tunneling
                        lsp "to_P_1"
                    exit
                exit
                peer 10.51.0.17
                    shutdown
                    tunneling
                        lsp "to_P_6"
                    exit
                exit
            exit
----------------------------------------------

Example 2 — Tunnels

MD-CLI
[ex:/configure router "Base" interface "LDP-test" if-attribute]
A:admin@node-2# info
    admin-group ["1" "2"]

[ex:/configure router "Base" mpls]
A:admin@node-2# info
    admin-state enable
    resignal-timer 30
    path "dyn" {
        admin-state enable
    }
    lsp "to_P_1" {
        admin-state enable
        type p2p-rsvp
        to 10.51.0.1
        fast-reroute {
            frr-method facility
        }
        primary "dyn" {
        }
    }
    lsp "to_P_6" {
        admin-state enable
        type p2p-rsvp
        to 10.51.0.17
        fast-reroute {
            frr-method facility
        }
        primary "dyn" {
        }
    }
classic CLI
A:node-2>config>router>if-attr# info
----------------------------------------------
            
admin-group "lower" value 2 
admin-group "upper" value 1 
----------------------------------------------
*A:ALA-1>config>router>mpls# info
----------------------------------------------
            resignal-timer 30
            interface "system"
            exit
            interface "port-1/1/3"
            exit
            interface "port-lag-1"
            exit
            path "dyn"
                no shutdown
            exit
            lsp "to_P_1"
                to 10.51.0.1
                cspf
                fast-reroute facility
                exit
                primary "dyn"
                exit
                no shutdown
            exit
            lsp "to_P_6"
                to 10.51.0.17
                cspf
                fast-reroute facility
                exit
                primary "dyn"
                exit
                no shutdown
            exit
            no shutdown
----------------------------------------------

LDP configuration management tasks

This section discusses LDP configuration management tasks.

Disabling LDP

Disabling LDP removes the protocol configuration from the router; all command options revert to their default settings.

Use the following commands to disable LDP:

  • MD-CLI
    configure router ldp admin-state disable
  • classic CLI

    In the classic CLI, LDP must be shut down before it can be disabled.

    configure router ldp shutdown
    configure router no ldp

Modifying targeted session command options

The modification of LDP targeted session command options does not take effect until the next time the session goes down and is re-established. Individual command options cannot be deleted. Different defaults can be configured for IPv4 and IPv6 LDP targeted Hello adjacencies.

The following example displays the default values.

MD-CLI

[ex:/configure router "Base" ldp targeted-session]
A:admin@node-2# info detail
    sdp-auto-targeted-session true
 ## export-prefixes
 ## import-prefixes
    resolve-v6-prefix-over-shortcut false
    ipv4 {
        hello {
            timeout 45
            factor 3
        }
        keepalive {
            timeout 40
            factor 4
        }
        hello-reduction {
            admin-state disable
            factor 3
        }
    }
    ipv6 {
        hello {
            timeout 45
            factor 3
        }
        keepalive {
            timeout 40
            factor 4
        }
        hello-reduction {
            admin-state disable
            factor 3
        }
    }
 ## peer
 ...

classic CLI

A:node-2>config>router>ldp>targ-session# info detail
----------------------------------------------
                no disable-targeted-session
                no import-prefixes
                no export-prefixes
                ipv4
                    no hello
                    no keepalive
                    no hello-reduction
                exit
                ipv6
                    no hello
                    no keepalive
                    no hello-reduction
                exit
                ...
----------------------------------------------

Modifying interface parameters

Individual parameters cannot be deleted. The modification of LDP interface parameters does not take effect until the next time the session goes down and is re-established.

The following example displays the default values.

MD-CLI

!*[pr:/configure router "Base" ldp interface-parameters]
A:admin@node-2# info detail
    ipv4 {
        transport-address system
        hello {
            timeout 15
            factor 3
        }
        keepalive {
            timeout 30
            factor 3
        }
    }
    ipv6 {
        transport-address system
        hello {
            timeout 15
            factor 3
        }
        keepalive {
            timeout 30
            factor 3
        }
    }
    interface "LDP-test" {
     ## apply-groups
     ## apply-groups-exclude
        admin-state enable
     ## load-balancing-weight
        bfd-liveness {
            ipv4 false
            ipv6 false
        }
     ## ipv4
     ## ipv6
    }

classic CLI

In the classic CLI, the no form of an interface-parameters interface command reverts modified values back to the defaults.

A:node-2>config>router>ldp>if-params>if$ info detail
----------------------------------------------
                    no bfd-enable
                    no load-balancing-weight
                    ipv4
                        no hello
                        no keepalive
                        no local-lsr-id
                        fec-type-capability
                            prefix-ipv4 enable
                            prefix-ipv6 enable
                            p2mp-ipv4 enable
                            p2mp-ipv6 enable
                        exit
                        no transport-address
                        no shutdown
                    exit
                    no shutdown
----------------------------------------------