Configuring LDP

Enabling LDP

You must enable LDP for the protocol to be active. This procedure applies to both IPv4 and IPv6 LDP.

Enable LDP

The following example administratively enables LDP for the default network instance.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp admin-state
    network-instance default {
        protocols {
            ldp {
                admin-state enable
            }
        }
    }

Configuring LDP neighbor discovery

You can configure LDP neighbor discovery, which allows SR Linux to discover and connect to IPv4 and IPv6 LDP peers without manually specifying the peers. SR Linux supports basic LDP discovery for discovering LDP peers, using multicast UDP hello messages.

Configure LDP neighbor discovery

The following example configures LDP neighbor discovery for a network instance and enables it on a subinterface for IPv4 and IPv6. The hello-interval parameter specifies the number of seconds between LDP link hello messages. The hello-holdtime parameter specifies how long the LDP link hello adjacency is maintained in the absence of link hello messages from the LDP neighbor.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp discovery
    network-instance default {
        protocols {
            ldp {
                discovery {
                    interfaces {
                        hello-holdtime 30
                        hello-interval 10
                        interface ethernet-1/1.1 {
                            ipv4 {
                                admin-state enable
                            }
                            ipv6 {
                                admin-state enable
                            }
                        }
                    }
                }
            }
        }
    }
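The interplay of the hello-interval and hello-holdtime parameters above can be modeled with a short Python sketch (illustrative only, not SR Linux code; the values mirror the example configuration):

```python
# Illustrative model of the LDP link hello timers (not SR Linux code).
HELLO_INTERVAL = 10   # seconds between LDP link hello messages
HELLO_HOLDTIME = 30   # seconds the adjacency survives without hellos

def adjacency_up(seconds_since_last_hello):
    """The hello adjacency is maintained until the holdtime elapses."""
    return seconds_since_last_hello < HELLO_HOLDTIME

# With these values, a neighbor may lose two consecutive hellos
# (20 seconds of silence) before the adjacency is at risk.
print(adjacency_up(20))   # True: adjacency still up
print(adjacency_up(30))   # False: holdtime expired, adjacency torn down
```

Choosing a holdtime that is a small multiple of the hello interval, as above, tolerates occasional hello loss without flapping the adjacency.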

Configuring LDP peers

You can configure settings that apply to connections between SR Linux and IPv4 and IPv6 LDP peers, including session keepalive parameters. For individual LDP peers, you can configure the maximum number of FEC-label bindings that can be accepted by the peer.

If LDP receives a FEC-label binding from a peer that brings the number of FECs received from that peer to the configured FEC limit, the peer is put into overload. If the peer advertised the Nokia-overload capability (that is, it is another SR Linux router or an SR OS device), the overload TLV is transmitted to the peer, and the peer stops sending any further FEC-label bindings. If the peer did not advertise the Nokia-overload capability, no overload TLV is sent to the peer. In either case, the received FEC-label binding is deleted.
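The FEC-limit decision logic can be summarized in a Python sketch (an illustrative simplification, not SR Linux code):

```python
# Illustrative sketch of the FEC-limit behavior (not SR Linux code):
# reaching the limit puts the peer into overload, the overload TLV is
# sent only to peers that advertised the Nokia-overload capability,
# and the triggering binding is deleted in either case.
def receive_fec_binding(fecs_from_peer, fec_limit, peer_sent_nokia_overload):
    fecs_from_peer += 1                       # count the new binding
    if fecs_from_peer >= fec_limit:
        return {"peer_overloaded": True,
                "overload_tlv_sent": peer_sent_nokia_overload,
                "binding_kept": False}        # received binding is deleted
    return {"peer_overloaded": False,
            "overload_tlv_sent": False,
            "binding_kept": True}

# A peer with the Nokia-overload capability hits its limit of 1024:
# the peer goes into overload, the TLV is sent, the binding is deleted.
print(receive_fec_binding(1023, 1024, True))
```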

Configure LDP peers

The following example configures settings for IPv4 and IPv6 LDP peers:

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp peers
    network-instance default {
        protocols {
            ldp {
                peers {
                    session-keepalive-holdtime 240
                    session-keepalive-interval 90
                    peer 10.1.1.1 label-space-id 0 {
                        fec-limit 1024
                    }
                    peer 2001:db8::0a01:0101 label-space-id 0 {
                        fec-limit 1024
                    }
                }
            }
        }
    }

In this example, the session-keepalive-holdtime parameter specifies the number of seconds an LDP session can remain inactive (no LDP packets received from the peer) before the LDP session is terminated and the corresponding TCP session closed. The session-keepalive-interval parameter specifies the number of seconds between LDP keepalive messages. SR Linux sends LDP keepalive messages at this interval only when no other LDP packets are transmitted over the LDP session.

For an individual LDP peer, indicated by its LSR ID and label space ID, a FEC limit is specified. SR Linux deletes FEC-label bindings received from this peer beyond this limit.

Configuring a label block for LDP

To configure LDP, you must specify a reference to a predefined range of labels, called a label block. A label block configuration includes a start-label value and an end-label value. LDP uses labels in the range between the start-label and end-label in the label block.

A label block can be static or dynamic. See Static and dynamic label blocks for information about each type of label block and how to configure them. LDP requires a dynamic, non-shared label block.

Configure dynamic LDP label block

The following example configures LDP to use a dynamic label block named d1. The dynamic label block is configured with a start-label value and an end-label value. See Configuring label blocks for an example of a dynamic label block configuration.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp dynamic-label-block
    network-instance default {
        protocols {
            ldp {
                dynamic-label-block d1
            }
        }
    }

Configuring longest-prefix match for IPv4 and IPv6 FEC resolution

By default, SR Linux supports /32 IPv4 and /128 IPv6 FEC resolution using IGP routes. You can optionally enable longest-prefix match for IPv4 and IPv6 FEC resolution. When this is enabled, IPv4 and IPv6 prefix FECs can be resolved by less-specific routes in the route table, as long as the prefix bits of the route match the prefix bits of the FEC. The IP route with the longest prefix match is the route that is used to resolve the FEC.
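The longest-prefix matching behavior can be illustrated with the Python standard library (an illustrative sketch, not SR Linux code):

```python
# Illustrative sketch of longest-prefix-match FEC resolution
# (not SR Linux code), using the standard-library ipaddress module.
import ipaddress

def resolve_fec(fec, route_table):
    """Return the most specific route that covers the FEC, or None."""
    fec_net = ipaddress.ip_network(fec)
    covering = [ipaddress.ip_network(r) for r in route_table
                if fec_net.subnet_of(ipaddress.ip_network(r))]
    if not covering:
        return None
    return str(max(covering, key=lambda net: net.prefixlen))

routes = ["10.0.0.0/8", "10.1.0.0/16", "192.168.0.0/24"]
# By default only an exact /32 route could resolve this FEC; with
# longest-prefix match, the less specific 10.1.0.0/16 route is used.
print(resolve_fec("10.1.1.1/32", routes))   # 10.1.0.0/16
```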

Configure longest-prefix match for IPv4 and IPv6 FEC resolution

The following example enables longest-prefix match for IPv4 and IPv6 FEC resolution.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp fec-resolution
    network-instance default {
        protocols {
            ldp {
                fec-resolution {
                    longest-prefix true
                }
            }
        }
    }

Configuring load balancing over equal-cost paths

ECMP support for LDP on SR Linux performs load balancing for LDP-based tunnels by using multiple outgoing next-hops for an IP prefix on ingress and transit LSRs. You can specify the maximum number of next-hops (up to 64) to be used for load balancing toward a specific FEC.
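The general idea of per-flow load balancing over equal-cost next-hops can be sketched in Python (illustrative only; SR Linux's actual hashing is not specified here):

```python
# Illustrative sketch of per-flow ECMP next-hop selection (not SR Linux
# code): a hash of the flow key pins each flow to one of at most
# max-paths next-hops, so packets of one flow follow one path.
import hashlib

def select_next_hop(flow_key, next_hops, max_paths=64):
    usable = next_hops[:max_paths]                      # cap at max-paths
    digest = hashlib.sha256(flow_key.encode()).digest()
    return usable[int.from_bytes(digest[:4], "big") % len(usable)]

next_hops = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]
# The same flow key always selects the same next-hop:
a = select_next_hop("10.1.1.1->10.2.2.2", next_hops)
b = select_next_hop("10.1.1.1->10.2.2.2", next_hops)
print(a == b)   # True
```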

Configure load balancing for LDP

The following example configures the maximum number of next-hops that SR Linux can use for load balancing toward an IPv4 or IPv6 FEC.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp multipath
    network-instance default {
        protocols {
            ldp {
                multipath {
                    max-paths 64
                }
            }
        }
    }

Configuring graceful restart helper capability

Graceful restart allows a router that has restarted its control plane but maintained its forwarding state to restore LDP with minimal disruption.

To do this, the router relies on LDP peers, which have also been configured for graceful restart, to maintain forwarding state while the router restarts. LDP peers configured in this way are known as graceful restart helpers.

You can configure SR Linux to operate as a graceful restart helper for LDP. When the graceful restart helper capability is enabled, SR Linux advertises it to its LDP peers by carrying the fault tolerant (FT) session TLV in the LDP initialization message, which helps the LDP peer preserve its LDP forwarding state across the restart.

Configure graceful restart

The following example enables the graceful restart helper capability for LDP for both IPv4 and IPv6 peers.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp graceful-restart
    network-instance default {
        protocols {
            ldp {
                graceful-restart {
                    helper-enable true
                    max-reconnect-time 180
                    max-recovery-time 240
                }
            }
        }
    }

In this example, the max-reconnect-time parameter specifies the number of seconds SR Linux waits for the remote LDP peer to reconnect after an LDP communication failure. The max-recovery-time parameter specifies the number of seconds the SR Linux router preserves its MPLS forwarding state after receiving the LDP initialization message from the restarted LDP peer.
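A simplified model of these two timers (illustrative only, not SR Linux code; the values mirror the example configuration):

```python
# Illustrative model of the graceful restart helper timers (not SR Linux
# code): the helper gives up if the peer does not reconnect within
# max-reconnect-time, and preserves the peer's MPLS forwarding state for
# at most max-recovery-time after the new initialization message arrives.
MAX_RECONNECT_TIME = 180   # seconds to wait for the peer to reconnect
MAX_RECOVERY_TIME = 240    # seconds to preserve forwarding state afterward

def helper_preserves_state(peer_reconnected, seconds_since_failure,
                           seconds_since_init=0):
    if not peer_reconnected:
        return seconds_since_failure < MAX_RECONNECT_TIME
    return seconds_since_init < MAX_RECOVERY_TIME

print(helper_preserves_state(False, 60))        # True: still waiting
print(helper_preserves_state(False, 200))       # False: reconnect window over
print(helper_preserves_state(True, 100, 120))   # True: within recovery time
```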

Configuring LDP-IGP synchronization

You can configure synchronization between LDP and IPv4 or IPv6 interior gateway protocols (IGPs). LDP-IGP synchronization is supported for IS-IS.

When LDP-IGP synchronization is configured, LDP notifies the IGP to advertise the maximum cost for a link in the following scenarios: when the LDP hello adjacency goes down, when the LDP session goes down, or when LDP is not configured on an interface.

The following apply when LDP-IGP synchronization is configured:

  • If a session goes down, the IGP increases the metric of the corresponding interface to max-metric.

  • When the LDP adjacency is reestablished, the IGP starts a hold-down timer (default 60 seconds). When this timer expires, the IGP restores the normal metric, if it has not been restored already.

  • When LDP informs the IGP that all label-FEC mappings have been received from the peer, the IGP can be configured to immediately restore the normal metric, even if time remains on the hold-down timer.

LDP-IGP synchronization does not take place on LAN interfaces unless the IGP has a point-to-point connection over the LAN, and does not take place on IGP passive interfaces.
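The metric rules above can be condensed into a Python sketch (illustrative only, not SR Linux code; the max-metric value shown is a stand-in for the IGP's maximum link metric):

```python
# Illustrative sketch of the LDP-IGP synchronization metric rules
# (not SR Linux code). MAX_METRIC is an assumed stand-in value.
MAX_METRIC = 16777214   # illustrative IS-IS wide-style maximum metric

def advertised_metric(normal_metric, session_up,
                      holddown_expired, all_fecs_received,
                      end_of_lib=True):
    if not session_up:
        return MAX_METRIC        # session down: advertise maximum cost
    if holddown_expired:
        return normal_metric     # hold-down timer expired: restore metric
    if end_of_lib and all_fecs_received:
        return normal_metric     # all label-FEC mappings in: restore early
    return MAX_METRIC            # adjacency back up, still in hold-down

print(advertised_metric(10, session_up=False, holddown_expired=False,
                        all_fecs_received=False))   # 16777214
print(advertised_metric(10, True, False, True))     # 10 (end-of-lib restore)
```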

Configure LDP-IGP synchronization

The following example configures LDP-IGP synchronization for an IS-IS instance.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols isis instance i1 ldp-synchronization
    network-instance default {
        protocols {
            isis {
                instance i1 {
                    ldp-synchronization {
                        hold-down-timer 120
                        end-of-lib true
                    }
                }
            }
        }
    }

In this example, if the LDP adjacency goes down, the IGP increases the metric of the corresponding interface to max-metric. If the adjacency is subsequently reestablished, the IGP waits for the amount of time configured with the hold-down-timer parameter before restoring the normal metric.

When the end-of-lib parameter is set to true, it causes the IGP to restore the normal metric when all label-FEC mappings have been received from the peer, even if time remains in the hold-down timer.

Configuring static FEC (FEC originate)

Static FEC (also known as FEC originate) originates a label-to-prefix announcement without requiring LDP to be enabled on a subinterface or a label to be received from a neighbor. Network security requirements may dictate that prefixes advertised using LDP must not be directly associated with a subinterface or system IP address. Static FEC allows the router to advertise these prefixes and their labels to peers as LDP FECs, providing reachability to the desired IP addresses or subnets without explicitly using a subinterface address. The router can advertise a FEC with a pop action.

A FEC can be added to the LDP IP prefix database with a specific label operation on the node. The only permitted operation is pop (the swap parameter must be set to false).

A route-table entry is required for a FEC to be advertised. Static FEC is supported for both IPv4 and IPv6 FECs.

To configure static FEC, use the static-fec command under the network-instance protocols ldp context.

Configure static FEC

--{ + candidate shared default }--[  ]--
# info network-instance default protocols ldp static-fec 10.10.10.2/32
    network-instance default {
        protocols {
            ldp {
                static-fec 10.10.10.2/32 {
                    swap false
                }
            }
        }
    }

Overriding the LSR ID on an interface

For security reasons, it can be beneficial to override the default LSR ID with a different LSR ID for link or targeted LDP sessions. The following LSR IDs can be overridden:

  • IPv4 l-LDP: local subinterface IPv4 address
  • IPv6 l-LDP: local subinterface IPv4 or IPv6 address, in one of the following combinations:
    • the subinterface IPv4 address as the 32-bit LSR-ID and the subinterface IPv6 address as the transport connection address
    • the subinterface IPv6 address as both a 128-bit LSR-ID and transport connection address
  • IPv4 T-LDP: IPv4 loopback or any IPv4 LDP subinterface
  • IPv6 T-LDP: IPv4 loopback, IPv6 loopback, IPv4 LDP subinterface, or IPv6 LDP subinterface

Note that a loopback interface cannot be used with the 32-bit format.

To change the value of the LSR ID on an interface to the local interface IP address, use the override-lsr-id option. When override-lsr-id is enabled, the transport address for the LDP session and the source IP address of the hello messages are updated to the interface IP address.

In IPv6 networks, either the IPv4 or IPv6 interface address can override the IPv6 LSR ID.

To override the value of the LSR ID on an interface, use the following commands under the network-instance context:

  • IPv4:

    protocols ldp discovery interfaces interface <name> ipv4 override-lsr-id local-subinterface ipv4

  • IPv6:

    protocols ldp discovery interfaces interface <name> ipv6 override-lsr-id local-subinterface [ipv4 | ipv6]

Configure the LSR ID

--{ * candidate shared default }--[  ]--
# info network-instance default protocols ldp discovery interfaces interface ethernet-1/1.1
    network-instance default {
        protocols {
            ldp {
                discovery {
                    interfaces {
                        interface ethernet-1/1.1 {
                            ipv4 {
                                override-lsr-id {
                                    local-subinterface ipv4
                                }
                            }
                            ipv6 {
                                override-lsr-id {
                                    local-subinterface ipv6
                                }
                            }
                        }
                    }
                }
            }
        }
    }

LDP FEC import and export policies

SR Linux supports FEC prefix import and export policies for LDP, which provide filtering of both inbound and outbound LDP label bindings.

FEC prefix export policy

A FEC prefix export policy controls the set of LDP prefixes and associated LDP label bindings that a router advertises to its LDP peers. By default, the router advertises local label bindings for only the system address, but propagates all FECs that are received from neighbors. The export policy can also be configured to advertise local interface FECs.

The export policy can accept or reject label bindings for advertisement to the LDP peers.

When applied globally, LDP export policies apply to FECs advertised to all neighbors. To control the propagation of FECs to a specific LDP neighbor, you can apply an LDP export prefix policy to the specified peer.

Note: The export policy does not support blocking of static FECs.

FEC prefix import policy

A FEC prefix import policy controls the set of LDP prefixes and associated LDP label bindings received from other LDP peers that a router accepts. The router redistributes all accepted LDP prefixes that it receives from its neighbors to other LDP peers (unless rejected by the FEC prefix export policy).

The import policy can accept or reject label bindings received from LDP peers. By default, the router imports all FEC prefixes from its LDP peers.

When applied globally, LDP import policies apply to FECs received from all neighbors. To control the import of FECs from a specific LDP neighbor, you can apply an LDP import prefix policy to the specified peer.

Routing policies

The filtering of label bindings in LDP FEC import and export policies is based on prefix lists that are defined using routing policies.
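The evaluation model (first matching statement wins, otherwise the default action applies) can be sketched in Python (illustrative only, not SR Linux code):

```python
# Illustrative sketch of prefix-based policy evaluation (not SR Linux
# code): statements are checked in order; the first whose prefix set
# contains the FEC decides the result, otherwise the default action.
def evaluate_policy(prefix, statements, default_action="accept"):
    """statements: list of (prefix_set, action) pairs, evaluated in order."""
    for prefix_set, action in statements:
        if prefix in prefix_set:
            return action
    return default_action

# Mirrors an export policy that rejects two specific /32 prefixes
# and accepts everything else by default:
export_policy = [({"10.1.1.2/32", "10.1.1.3/32"}, "reject")]
print(evaluate_policy("10.1.1.2/32", export_policy))   # reject (matched)
print(evaluate_policy("10.1.1.9/32", export_policy))   # accept (default)
```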

Configuring routing policies for LDP FEC import and export policies

To configure routing policies for LDP FEC import and export policies, use the routing-policy command.

Configure routing policy for global LDP FEC import and export

The following shows an example configuration of global LDP FEC import and export policies, defining match rules to accept an import prefix set, and reject an export prefix set.

--{ * candidate shared default }--[ ]--
# info routing-policy
    routing-policy {
        prefix-set export-prefix-set-test {
            prefix 10.1.1.2/32 mask-length-range exact {
            }
            prefix 10.1.1.3/32 mask-length-range exact {
            }
        }
        prefix-set import-prefix-set-test {
            prefix 10.1.1.1/32 mask-length-range exact {
            }
            prefix 10.1.1.2/32 mask-length-range exact {
            }
            prefix 10.1.1.3/32 mask-length-range exact {
            }
        }
        policy export-fec-test {
            default-action {
                policy-result accept
            }
            statement export-statement-test {
                match {
                    prefix-set export-prefix-set-test
                }
                action {
                    policy-result reject
                }
            }
        }
        policy import-fec-test {
            default-action {
                policy-result accept
            }
            statement import-statement-test {
                match {
                    prefix-set import-prefix-set-test
                }
                action {
                    policy-result accept
                }
            }
        }
    }

Configure routing policy for peer LDP FEC import and export

The following shows an example configuration of peer LDP FEC import and export policies, defining match rules to reject an import prefix set and an export prefix set for the specified peer.

--{ * candidate shared default }--[ ]--
# info routing-policy
    routing-policy {
        prefix-set peer-export-prefix-test {
            prefix 10.1.1.4/32 mask-length-range exact {
            }
        }
        prefix-set peer-import-prefix-test {
            prefix 10.1.1.5/32 mask-length-range exact {
            }
        }
        policy peer-export-test {
            statement peer-export-statement-test {
                match {
                    prefix-set peer-export-prefix-test
                }
                action {
                    policy-result reject
                }
            }
        }
        policy peer-import-test {
            statement peer-import-statement-test {
                match {
                    prefix-set peer-import-prefix-test
                }
                action {
                    policy-result reject
                }
            }
        }
    }

Applying global LDP FEC import and export policies

Use the ldp import-prefix-policy and ldp export-prefix-policy commands to apply global LDP FEC import and export policies.

The following example applies global export and import policies to LDP FECs.

Apply global LDP import and export policies

--{ +* candidate shared default }--[  ]--
# info network-instance default protocols ldp
    network-instance default {
        protocols {
            ldp {
                export-prefix-policy export-fec-test
                import-prefix-policy import-fec-test
            }
        }
    }

Applying per-peer LDP FEC import and export policies

Use the import-prefix-policy and export-prefix-policy commands under the ldp peers peer context to apply per-peer LDP FEC import and export policies.

The following example applies LDP FEC export and import policies to LDP peer 10.10.10.1.

Apply per-peer LDP import and export policies

--{ +* candidate shared default }--[  ]--
# info network-instance default protocols ldp peers peer 10.10.10.1 label-space-id 1
    network-instance default {
        protocols {
            ldp {
                peers {
                    peer 10.10.10.1 label-space-id 1 {
                        export-prefix-policy peer-export-test
                        import-prefix-policy peer-import-test
                    }
                }
            }
        }
    }

LSP ping and trace for LDP tunnels

To check connectivity and trace the path to any midpoint or endpoint of an LDP tunnel, SR Linux supports the following OAM commands:
  • tools oam lsp-ping ldp fec <prefix>
  • tools oam lsp-trace ldp fec <prefix>

Supported parameters include destination-ip, source-ip, timeout, ecmp-next-hop-select, and traffic-class. However, the only mandatory parameter is fec.

Results from the lsp-ping and lsp-trace operations are displayed using info from state commands.

For more information, see the SR Linux OAM and Diagnostics Guide.