Configuring LDP
This chapter provides information about configuring Label Distribution Protocol (LDP) on SR Linux.
Enabling LDP
You must enable LDP for the protocol to be active.
The following example administratively enables LDP for the default network-instance.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp admin-state
network-instance default {
protocols {
ldp {
admin-state enable
}
}
}
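To verify the setting, you can display the same path from state. The following is a minimal sketch; because the command requests only the admin-state leaf, the output is limited to that leaf.
--{ + running }--[ ]--
# info from state network-instance default protocols ldp admin-state
network-instance default {
protocols {
ldp {
admin-state enable
}
}
}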
Configuring LDP neighbor discovery
You can configure LDP neighbor discovery, which allows SR Linux to discover and connect to LDP peers without manually specifying the peers. SR Linux supports basic LDP discovery, which discovers directly connected LDP peers using multicast UDP hello messages.
The following example configures LDP neighbor discovery for a network-instance and enables it for a subinterface. The hello-interval parameter specifies the number of seconds between LDP link hello messages. The hello-holdtime parameter specifies how long the LDP link hello adjacency is maintained in the absence of link hello messages from the LDP neighbor.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp discovery
network-instance default {
protocols {
ldp {
discovery {
interfaces {
hello-holdtime 30
hello-interval 10
interface ethernet-1/1.1 {
ipv4 {
admin-state enable
}
}
}
}
}
}
}
Configuring LDP peers
You can configure settings that apply to connections between the SR Linux and LDP peers, including session keepalive parameters. For individual LDP peers, you can configure the maximum number of FEC-label bindings that can be accepted from the peer.
If LDP receives a FEC-label binding from a peer that brings the number of FECs received from that peer to the configured FEC limit, the peer is placed in overload. If the peer advertised the Nokia-overload capability (that is, it is another SR Linux router or an SR OS device), the overload TLV is transmitted to the peer, and the peer stops sending any further FEC-label bindings. If the peer did not advertise the Nokia-overload capability, no overload TLV is sent to the peer. In either case, the received FEC-label binding is deleted.
The following example configures settings for LDP peers:
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp peers
network-instance default {
protocols {
ldp {
peers {
session-keepalive-holdtime 240
session-keepalive-interval 90
peer 1.1.1.1 label-space-id 1254 {
fec-limit 1024
}
}
}
}
}
In this example, the session-keepalive-holdtime parameter specifies the number of seconds an LDP session can remain inactive (no LDP packets received from the peer) before the LDP session is terminated and the corresponding TCP session closed. The session-keepalive-interval parameter specifies the number of seconds between LDP keepalive messages. SR Linux sends LDP keepalive messages at this interval only when no other LDP packets are transmitted over the LDP session.
For an individual LDP peer, identified by its LSR ID and label space ID, a FEC limit can be specified. SR Linux deletes FEC-label bindings received from this peer beyond this limit.
Configuring a label block for LDP
To configure LDP, you must specify a reference to a predefined range of labels, called a label block. A label block configuration includes a start-label value and an end-label value. LDP uses labels in the range between the start-label and end-label in the label block.
A label block can be static or dynamic. See Static and dynamic label blocks for information about each type of label block and how to configure them. LDP requires a dynamic, non-shared label block.
The following example configures LDP to use a dynamic label block named d1. The dynamic label block is configured with a start-label value and an end-label value. See Configuring label blocks for an example of a dynamic label block configuration.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp dynamic-label-block
network-instance default {
protocols {
ldp {
dynamic-label-block d1
}
}
}
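The referenced dynamic label block d1 must be defined before LDP can use it. The following is a minimal sketch of such a definition, assuming the label block is configured under system mpls label-ranges as covered in Configuring label blocks; the start-label and end-label values are illustrative only, and the block must not be shared with other protocols.
--{ * candidate shared default }--[ ]--
# info system mpls label-ranges dynamic d1
system {
mpls {
label-ranges {
dynamic d1 {
start-label 20000
end-label 21999
}
}
}
}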
Configuring longest-prefix match for IPv4 FEC resolution
By default, SR Linux supports /32 IPv4 FEC resolution using IGP routes. You can optionally enable longest-prefix match for IPv4 FEC resolution. When this is enabled, IPv4 prefix FECs can be resolved by less-specific IPv4 routes in the route table, as long as the prefix bits of the route match the prefix bits of the FEC. The IP route with the longest prefix match is the route that is used to resolve the FEC.
The following example enables longest-prefix match for IPv4 FEC resolution.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp ipv4
network-instance default {
protocols {
ldp {
ipv4 {
fec-resolution {
longest-prefix true
}
}
}
}
}
Configuring load balancing over equal-cost paths
ECMP support for LDP on SR Linux performs load balancing for LDP-based LSPs by using multiple outgoing next-hops for an IP prefix on ingress and transit LSRs. You can specify the maximum number of next-hops (up to 64) to be used for load balancing toward a specific FEC.
The following example configures the maximum number of next-hops the SR Linux can use for load balancing toward a FEC.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp multipath
network-instance default {
protocols {
ldp {
multipath {
max-paths 64
}
}
}
}
Configuring graceful restart helper capability
Graceful restart allows a router that has restarted its control plane but maintained its forwarding state to restore LDP with minimal disruption.
To do this, the router relies on LDP peers, which have also been configured for graceful restart, to maintain forwarding state while the router restarts. LDP peers configured in this way are known as graceful restart helpers.
You can configure the SR Linux to operate as a graceful restart helper for LDP. When the graceful restart helper capability is enabled, the SR Linux advertises it to its LDP peers by carrying the fault tolerant (FT) session TLV in the LDP initialization message, which assists the LDP peers in preserving their LDP forwarding state across the restart.
The following example enables the graceful restart helper capability for LDP.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ldp graceful-restart
network-instance default {
protocols {
ldp {
graceful-restart {
helper-enable true
max-reconnect-time 180
max-recovery-time 240
}
}
}
}
In this example, the max-reconnect-time parameter specifies the number of seconds the SR Linux waits for the remote LDP peer to reconnect after an LDP communication failure. The max-recovery-time parameter specifies the number of seconds the SR Linux router preserves its MPLS forwarding state after receiving the LDP initialization message from the restarted LDP peer.
Configuring LDP-IGP synchronization
You can configure synchronization between LDP and interior gateway protocols (IGPs). LDP-IGP synchronization is supported for IS-IS and OSPF.
When LDP-IGP synchronization is configured, LDP notifies the IGP to advertise the maximum cost for a link whenever the LDP hello adjacency goes down, the LDP session goes down, or LDP is not configured on an interface.
The following apply when LDP-IGP synchronization is configured:
- If a session goes down, the IGP increases the metric of the corresponding interface to max-metric.
- When the LDP adjacency is reestablished, the IGP starts a hold-down timer (default 60 seconds). When this timer expires, the IGP restores the normal metric, if it has not been restored already.
- When LDP informs the IGP that all label-FEC mappings have been received from the peer, the IGP can be configured to immediately restore the normal metric, even if time remains on the hold-down timer.
LDP-IGP synchronization does not take place on LAN interfaces unless the IGP has a point-to-point connection over the LAN, and does not take place on IGP passive interfaces.
The following example configures LDP-IGP synchronization for an IS-IS instance.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols isis instance i1 ldp-synchronization
network-instance default {
protocols {
isis {
instance i1 {
ldp-synchronization {
hold-down-timer 120
end-of-lib true
}
}
}
}
}
In this example, if the LDP adjacency goes down, the IGP increases the metric of the corresponding interface to max-metric. If the adjacency is subsequently reestablished, the IGP waits for the amount of time configured with the hold-down-timer parameter before restoring the normal metric.
When the end-of-lib parameter is set to true, it causes the IGP to restore the normal metric when all label-FEC mappings have been received from the peer, even if time remains in the hold-down timer.
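Because LDP-IGP synchronization is also supported for OSPF, a similar configuration can be applied under an OSPF instance. The following is a minimal sketch, assuming the OSPF instance exposes the same ldp-synchronization parameters as IS-IS (hold-down-timer and end-of-lib); the instance name o1 and the timer value are illustrative, so check the OSPF configuration reference for the exact options.
--{ * candidate shared default }--[ ]--
# info network-instance default protocols ospf instance o1 ldp-synchronization
network-instance default {
protocols {
ospf {
instance o1 {
ldp-synchronization {
hold-down-timer 120
end-of-lib true
}
}
}
}
}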
LSP ping and trace for LDP tunnels
SR Linux supports the following commands for checking connectivity and tracing the path of LDP tunnels:
- tools oam lsp-ping ldp fec <prefix>
- tools oam lsp-trace ldp fec <prefix>
Supported optional parameters include destination-ip, source-ip, timeout, and ecmp-next-hop-select; the only mandatory parameter is fec.
Results from the lsp-ping and lsp-trace operations are displayed using info from state commands.
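Optional parameters can be supplied together with the mandatory fec parameter on the same command line. The following is a sketch only; the source-ip and timeout values are illustrative and should be adapted to your network.
--{ + running }--[ ]--
# tools oam lsp-ping ldp fec 10.20.1.6/32 source-ip 10.20.1.3 timeout 10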
Performing an LSP ping to an LDP tunnel endpoint
To check connectivity to an LDP tunnel endpoint, use the tools oam lsp-ping ldp command, specifying the FEC prefix of the LDP tunnel. To display the results, use the info from state oam lsp-ping ldp command, specifying the session ID output from the lsp-ping.
Perform an LSP ping to an LDP tunnel endpoint
--{ + running }--[ ]--
# tools oam lsp-ping ldp fec 10.20.1.6/32
/oam/lsp-ping/ldp/fec[prefix=10.20.1.6/32]:
Initiated LSP Ping to prefix 10.20.1.6/32 with session id 49152
Display results of the LSP ping
--{ + running }--[ ]--
# info from state oam lsp-ping ldp fec 10.20.1.6/32 session-id 49152
oam {
lsp-ping {
ldp {
fec 10.20.1.6/32 {
session-id 49152 {
test-active false
statistics {
round-trip-time {
minimum 4292
maximum 4292
average 4292
standard-deviation 0
}
}
path-destination {
ip-address 127.0.0.1
}
sequence 1 {
probe-size 48
request-sent true
out-interface ethernet-1/33.1
reply {
received true
reply-sender 10.20.1.6
udp-data-length 40
mpls-ttl 255
round-trip-time 4292
return-code replying-router-is-egress-for-fec-at-stack-depth-n
return-subcode 1
}
}
}
}
}
}
}
Performing an LSP trace for an LDP tunnel
To trace the path to any midpoint or endpoint of an LDP tunnel, use the tools oam lsp-trace ldp command, specifying the FEC prefix of the LDP tunnel. To display the results, use the info from state oam lsp-trace ldp command, specifying the session ID output from the lsp-trace.
Perform an LSP trace to an LDP tunnel endpoint
--{ + running }--[ ]--
# tools oam lsp-trace ldp fec 10.20.1.6/32
/oam/lsp-trace/ldp/fec[prefix=10.20.1.6/32]:
Initiated LSP Trace to prefix 10.20.1.6/32 with session id 49153
Display results of the LSP trace
--{ + running }--[ ]--
# info from state oam lsp-trace ldp fec 10.20.1.6/32 session-id 49153
oam {
lsp-trace {
ldp {
fec 10.20.1.6/32 {
session-id 49153 {
test-active false
path-destination {
ip-address 127.0.0.1
}
hop 1 {
probe 1 {
probe-size 76
probes-sent 1
reply {
received true
reply-sender 10.20.1.2
udp-data-length 60
mpls-ttl 1
round-trip-time 4824
return-code label-switched-at-stack-depth-n
return-subcode 1
}
downstream-detailed-mapping 1 {
mtu 1500
address-type ipv4-numbered
downstream-router-address 10.10.4.4
downstream-interface-address 10.10.4.4
mpls-label 1 {
label 2002
protocol ldp
}
}
}
}
hop 2 {
probe 1 {
probe-size 76
probes-sent 1
reply {
received true
reply-sender 10.20.1.4
udp-data-length 60
mpls-ttl 2
round-trip-time 4693
return-code label-switched-at-stack-depth-n
return-subcode 1
}
downstream-detailed-mapping 1 {
mtu 1500
address-type ipv4-numbered
downstream-router-address 10.10.9.6
downstream-interface-address 10.10.9.6
mpls-label 1 {
label 2000
protocol ldp
}
}
}
}
hop 3 {
probe 1 {
probe-size 76
probes-sent 1
reply {
received true
reply-sender 10.20.1.6
udp-data-length 32
mpls-ttl 3
round-trip-time 4597
return-code replying-router-is-egress-for-fec-at-stack-depth-n
return-subcode 1
}
}
}
}
}
}
}
}