OAM fault and performance tools and protocols

This chapter provides information about the operation, administration, and maintenance (OAM) tools and protocols. The tools and protocols are used for:

  • fault detection and isolation

  • performance measurement

This chapter covers IP OAM tools and protocols and MPLS OAM tools and protocols.

IP OAM tools and protocols

This section provides information about the IP OAM tools and protocols.

ICMP ping and trace

Overview

Internet Control Message Protocol (ICMP) is part of the IP suite as defined in RFC 792, Internet Control Message Protocol, for IPv4 and RFC 4443, Internet Control Message Protocol (ICMPv6) for the Internet Protocol Version 6 (IPv6) Specification. ICMP and ICMPv6 send and receive control and error messages used to manage the behavior of the TCP/IP stack. ICMP and ICMPv6 provide the following:

  • debugging tools and error reporting mechanisms to assist in troubleshooting an IP network

  • the ability to send and receive error and control messages to far-end IP entities

Ping

The ping command uses an echo request message to elicit an echo response from a host or gateway. The ping6 command is the IPv6 version of the ping command. See Performing an ICMP ping for more information.

Traceroute

The traceroute command is used to trace the route that the packets take from the current system to the destination. It uses the time to live (TTL) parameter to elicit an ICMP time exceeded response from each gateway along the path to the host. The traceroute6 command is the IPv6 version of the traceroute command. See Performing an ICMP trace for more information.
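
The TTL mechanism that traceroute relies on can be sketched as a simulation (illustrative only; the hop addresses and function names below are made-up examples, not SR Linux APIs or real devices):

```python
# Simulation of TTL-based path discovery: each router decrements the
# TTL, and the router at which it reaches 0 answers with an ICMP
# "time exceeded", revealing its address hop by hop.

PATH = ["172.18.18.1", "172.21.40.1", "10.1.1.1"]  # hypothetical hops

def probe(ttl, path, destination):
    """Return (responder, kind) for a probe sent with the given TTL."""
    for router in path:
        ttl -= 1                        # each hop decrements the TTL
        if ttl == 0:
            if router == destination:
                return router, "port-unreachable"  # destination reached
            return router, "time-exceeded"         # transit router replies
    return path[-1], "port-unreachable"

def trace(path, destination):
    """Discover the path by probing with TTL = 1, 2, 3, ..."""
    hops = []
    for ttl in range(1, len(path) + 1):
        responder, kind = probe(ttl, path, destination)
        hops.append(responder)
        if kind == "port-unreachable":
            break
    return hops

print(trace(PATH, "10.1.1.1"))  # ['172.18.18.1', '172.21.40.1', '10.1.1.1']
```

Each TTL value exposes exactly one hop, which is why a real traceroute prints one line per TTL increment.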

Performing an ICMP ping

Use the ping (IPv4) or ping6 (IPv6) command to contact an IP address. Use this command in any mode.

ping for IPv4
--{ running }--[  ]--
# ping 192.168.1.1 network-instance default
Pinging 192.168.1.1 in srbase-default
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.030 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.027/0.030/0.032/0.002 ms

Performing an ICMP trace

To display the path a packet takes to a destination, use the traceroute (IPv4) or traceroute6 (IPv6) command.

To trace the route using TCP SYN packets instead of UDP or ICMP echo packets, use the tcptraceroute command.

traceroute for IPv4
--{ running }--[  ]--
# traceroute 10.1.1.1 network-instance mgmt
Using network instance srbase-mgmt
traceroute to 10.1.1.1 (10.1.1.1), 30 hops max, 60 byte packets
 1  172.18.18.1 (172.18.18.1)  1.268 ms  1.260 ms  1.256 ms
 2  172.21.40.1 (172.21.40.1)  1.253 ms  1.848 ms  1.851 ms
 3  172.22.35.230 (172.22.35.230)  1.835 ms  1.834 ms  1.828 ms
 4  66.201.62.1 (66.201.62.1)  3.222 ms  3.222 ms  3.216 ms
 5  66.201.34.17 (66.201.34.17)  5.474 ms  5.475 ms  5.480 ms
 6  * * *
 7  206.81.81.10 (206.81.81.10)  32.577 ms  32.542 ms  32.400 ms
 8  10.1.1.1 (10.1.1.1)  22.627 ms  22.637 ms  22.638 ms

TWAMP

Two-Way Active Measurement Protocol (TWAMP) is a standards-based method to measure the IP performance between two devices including packet loss, delay, and jitter. TWAMP leverages the methodology and architecture of One-Way Active Measurement Protocol (OWAMP) to define a method to measure two-way or round-trip metrics.

Components

The following are the four logical entities in TWAMP:

  • control client: initiates the TWAMP control session and negotiates the session information to be used and the tests to be performed with the server

  • server: negotiates with the control client to establish the control session

  • session sender: transmits test packets to the session reflector

  • session reflector: transmits a packet to the session sender in response to each packet it receives

The control client and session sender are implemented in one physical device, referred to as the client. The server and session reflector are implemented in a second physical device, referred to as the server. The router acts as the server and the session reflector.

See Configuring a TWAMP server for more information about steps to configure a TWAMP server.

Protocols

The following protocols are used in TWAMP sessions:

  • TWAMP control protocol: used to establish and manage control sessions between the control client and the server

  • TWAMP test protocol: used to generate and send test traffic between the session sender and session reflector, and to measure network performance metrics like delay

Establishing a control session

The control client initiates a TCP connection and exchanges TWAMP control messages over this connection. The server accepts the TCP control session from the control client and responds with a server greeting message. This greeting includes the modes that are supported by the server. The modes are in the form of a bit mask. Each bit in the mask represents a functionality supported on the server.

Server mode support includes:

  • unauthenticated mode (mode bit 0: value 1)

  • individual session control (mode bit 4: value 16)

  • reflected octets (mode bit 5: value 32)

  • symmetrical size test packet (mode bit 6: value 64)

To start testing, the control client communicates the test parameters to the server, requesting any of the modes that the server supports. If the server agrees to conduct the described tests, the test begins as soon as the control client sends a Start-Sessions or Start-N-Sessions message.
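
Because the greeting's Modes field is a bit mask, the client can recover the server's capability list by testing each bit. A minimal sketch (the helper name and mapping variable are illustrative, not SR Linux code; bit values follow the list above):

```python
# Decode the Modes bit mask advertised in a TWAMP server greeting.
MODES = {
    1: "unauthenticated",               # mode bit 0
    16: "individual-session-control",   # mode bit 4
    32: "reflect-octets",               # mode bit 5
    64: "symmetrical-size",             # mode bit 6
}

def decode_modes(mask):
    """Return the capabilities encoded in the greeting's Modes field."""
    return [name for bit, name in sorted(MODES.items()) if mask & bit]

# A server advertising all four capabilities sends 1 + 16 + 32 + 64 = 113.
print(decode_modes(113))
```

The control client then requests a subset of these modes when it negotiates the test parameters.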

Executing a test session

The session sender initiates the test session by sending a stream of UDP-based TWAMP test packets to the session reflector. The session reflector responds to each received packet with a UDP-based TWAMP test packet. This exchange of TWAMP test PDUs is referred to as a TWAMP test. The session sender calculates the various delay and loss metrics based on the received TWAMP test PDUs. The TWAMP test PDU does not achieve symmetrical packet size in both directions unless the frame is padded with a minimum of 27 bytes. The session sender is responsible for applying the required padding. After the frame is appropriately padded, the session reflector reduces the padding by the number of bytes needed to provide symmetry.
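
The 27-byte minimum follows from the PDU sizes: in unauthenticated mode the sender test PDU is 14 bytes and the reflector test PDU is 41 bytes, so the sender must pad by at least 41 - 14 = 27 bytes before both directions can carry equal-sized packets. A sketch of the arithmetic (function name is illustrative):

```python
SENDER_PDU = 14      # unauthenticated session-sender test PDU, bytes
REFLECTOR_PDU = 41   # unauthenticated session-reflector test PDU, bytes

def reflector_padding(sender_padding):
    """Padding the reflector keeps after trimming for symmetry.

    Returns None when the sender padding is below the 27-byte minimum,
    in which case symmetrical packet sizes cannot be achieved.
    """
    trim = REFLECTOR_PDU - SENDER_PDU          # 27 bytes
    if sender_padding < trim:
        return None
    return sender_padding - trim

# With exactly 27 bytes of sender padding, both directions are 41 bytes.
assert SENDER_PDU + 27 == REFLECTOR_PDU + reflector_padding(27)
print(reflector_padding(27), reflector_padding(100))  # 0 73
```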

The control client eventually closes the control session, marking the end of the measurement process.

TWAMP statistics

The following TWAMP statistics are available in SR Linux:

  • system-level TWAMP statistics

  • server statistics

  • client connection statistics

  • control connection statistics

  • session reflector statistics

See Displaying TWAMP statistics for more information.

A clear command is available at the server network-instance level to clear all test session transmit and receive statistics and error counters. See Clearing TWAMP session statistics for more information.

Configuring a TWAMP server

To configure a TWAMP server (server and session reflector) for a network instance, use the oam twamp command and specify the network instance, client (control client and session sender) parameters, and server parameters as shown in the example.

Configure TWAMP server

The following example configures a TWAMP server.

--{ + candidate shared default }--[  ]--
# info oam twamp
    oam {
        twamp {
            server {
                network-instance default {
                    admin-state enable
                    servwait 900
                    control-packet-dscp 12
                    enforce-test-session-start-time true
                    client-connection 192.15.20.9/32 {
                        maximum-connections 32
                        maximum-sessions 32
                    }
                }
            }
        }
    }

Displaying TWAMP statistics

To display system-level TWAMP statistics, use the info from state oam twamp command.

Displaying TWAMP statistics

The following example displays TWAMP statistics.

--{ + candidate shared default }--[  ]--
# info from state oam twamp
    oam {
        twamp {
            server {
                network-instance default {
                    admin-state enable
                    oper-state up
                    servwait 60
                    control-packet-dscp CS7
                    enforce-test-session-start-time true
                    maximum-connections 64
                    maximum-sessions 128
                    modes [
                        unauthenticated
                        individual-session-control
                        reflect-octets
                        symmetrical-size
                    ]
                    statistics {
                        test-sessions-active 1
                        test-sessions-completed 0
                        test-sessions-rejected 0
                        test-sessions-aborted 0
                        test-packets-received 2
                        test-packets-transmitted 2
                        control-connections-active 1
                        control-connections-rejected 0
                    }
                    client-connection 10.32.5.0/24 {
                        maximum-connections 64
                        maximum-sessions 128
                        statistics {
                            test-sessions-active 1
                            test-sessions-completed 0
                            test-sessions-rejected 0
                            test-sessions-aborted 0
                            test-packets-received 2
                            test-packets-transmitted 2
                            control-connections-active 1
                            control-connections-rejected 0
                        }
                    }
                    client-connection 10.11.1.0/24 {
                        maximum-connections 64
                        maximum-sessions 128
                        statistics {
                            test-sessions-active 0
                            test-sessions-completed 0
                            test-sessions-rejected 0
                            test-sessions-aborted 0
                            test-packets-received 0
                            test-packets-transmitted 0
                            control-connections-active 0
                            control-connections-rejected 0
                        }
                    }
                    client-connection 10.12.1.0/24 {
                        maximum-connections 32
                        maximum-sessions 32
                        statistics {
                            test-sessions-active 0
                            test-sessions-completed 0
                            test-sessions-rejected 0
                            test-sessions-aborted 0
                            test-packets-received 0
                            test-packets-transmitted 0
                            control-connections-active 0
                            control-connections-rejected 0
                        }
                    }
                    client-connection 2001:db8:101:1:1::/120 {
                        maximum-connections 64
                        maximum-sessions 128
                        statistics {
                            test-sessions-active 0
                            test-sessions-completed 0
                            test-sessions-rejected 0
                            test-sessions-aborted 0
                            test-packets-received 0
                            test-packets-transmitted 0
                            control-connections-active 0
                            control-connections-rejected 0
                        }
                    }
                    control-connection 10.32.5.0 client-tcp-port 58116 server-ip 10.20.1.3 server-tcp-port 862 {
                        state active
                        control-packet-dscp 20
                        statistics {
                            test-sessions-active 1
                            test-sessions-completed 0
                            test-sessions-rejected 0
                            test-sessions-aborted 0
                            test-packets-received 2
                            test-packets-transmitted 2
                        }
                    }
                    session-reflector {
                        test-session 10.32.5.0 sender-udp-port 20100 reflector-ip 10.20.1.3 reflector-udp-port 862 {
                            test-session-id 0A:14:01:03:EA:0B:06:CC:1A:36:F3:B2:A3:97:A2:55
                            parent-connection-client-ip 32.32.5.2
                            parent-connection-client-tcp-port 58116
                            parent-connection-server-ip 10.20.1.3
                            parent-connection-server-tcp-port 862
                            test-packet-dscp 0
                            last-sequence-number-transmitted 1
                            last-sequence-number-received 0
                            statistics {
                                test-packets-received 2
                                test-packets-transmitted 2
                            }
                        }
                    }
                }
            }
            statistics {
                dropped-connections {
                    tcp-connection-closed 0
                    tcp-connection-fatal-error 0
                    tcp-unexpected-event 0
                    message-send-error 0
                    memory-allocation-error 0
                    no-client-prefix-match 0
                    maximum-global-limit-exceed 0
                    maximum-prefix-limit-exceed 0
                    unspecified-mode 0
                    unsupported-mode 0
                    control-command-not-valid 0
                    incorrect-stop-session-count 0
                    connection-timeout 0
                    no-internal-resource 0
                    non-zero-sid-in-client-control-message 0
                    invalid-hmac 0
                }
                dropped-connection-states {
                    idle 0
                    setup-wait 0
                    started 0
                    active 0
                    process-started 0
                    process-stop 0
                    process-tw-session 0
                }
                rejected-session {
                    invalid-ip-address-version 0
                    non-local-ip-destination 0
                    bad-type-p 0
                    padding-too-big 0
                    non-zero-mbz-value 0
                    non-zero-session-sender-sid 0
                    timeout-too-large 0
                    maximum-global-session-exceed 0
                    maximum-prefix-session-exceed 0
                    client-source-ip-unreachable 0
                    udp-port-in-use 0
                    duplicate-session 0
                    no-internal-resource 0
                    refwait-timeout 0
                }
                dropped-test-packet {
                    incorrect-packet-size 0
                    incorrect-source-address 0
                    arrived-before-start-time 0
                    no-start-sessions 0
                    invalid-error-estimate 0
                    reply-error 0
                    invalid-server-octets 0
                    invalid-symmetric-mbz 0
                }
            }
        }
    }

Clearing TWAMP session statistics

You can clear the TWAMP session statistics for each network instance using the tools oam twamp server network-instance default clear command.

Clearing TWAMP session statistics

The following example clears TWAMP session statistics for the default network instance.

--{ + candidate shared default }--[  ]--
# tools oam twamp server network-instance default clear

STAMP

The Simple Two-Way Active Measurement Protocol (STAMP) defined in RFC 8762 is a standards-based method to measure the IP performance without the use of a control channel to pre-signal session parameters.

The PDU structure allows for the collection of frame delay, frame delay range, inter-frame delay variation, and frame loss ratio. The RFC 8972 STAMP Optional Extensions specification maintains the existing structure of the STAMP PDU but redefines existing fields and adds the capability to include TLVs. SR Linux supports RFC 8762 and the structural changes with TLV processing in the options draft RFC 8972.

STAMP operation

For each routed network instance, the STAMP session sender transmits STAMP test packets to the destination UDP port of the session reflector. The session reflector receives and processes the STAMP test packets, then sends them back to the session sender. The session sender receives the reflected packets and uses the timestamps and sequence numbers to calculate delay and loss performance metrics. The session reflector supports a prefix list that filters based on IPv4 or IPv6 addressing. The reflector is stateful and uses the tuple of source IP address, destination IP address, source UDP port, destination UDP port, and SSID to identify individual STAMP test sessions.
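
The stateful session lookup can be sketched as follows (the class and field names are illustrative assumptions, not SR Linux code; the addresses and SSIDs are made-up example values):

```python
# A STAMP session reflector keys its per-session state on the tuple of
# source IP, destination IP, source port, destination port, and SSID.
class ReflectorState:
    def __init__(self):
        self.sessions = {}   # tuple -> next sequence number to transmit

    def handle_test_packet(self, sip, dip, sport, dport, ssid):
        """Return the reflector's transmit sequence number for this packet."""
        key = (sip, dip, sport, dport, ssid)
        seq = self.sessions.get(key, 0)   # unseen tuple -> new session at 0
        self.sessions[key] = seq + 1
        return seq

state = ReflectorState()
# Same tuple -> same session, sequence numbers advance:
print(state.handle_test_packet("10.20.1.3", "10.20.1.2", 44000, 862, 1736))  # 0
print(state.handle_test_packet("10.20.1.3", "10.20.1.2", 44000, 862, 1736))  # 1
# Any field differing (here the SSID) identifies a distinct session:
print(state.handle_test_packet("10.20.1.3", "10.20.1.2", 44000, 862, 1737))  # 0
```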

See Configuring STAMP session reflector for more information about how to configure a session reflector.

Session sender packet format

STAMP defines the STAMP test request packets sent by the STAMP session sender to the STAMP session reflector as probes for performance measurement. The following table lists the key protocol elements.

Table 1. Fields in a test request packet
Field Description

Sequence Number

Packet sequence number generated based on the transmission sequence. For each new session, its value starts at 0 and is incremented by one with each transmitted packet.

Timestamp

Timestamp when a test packet is sent.

Error Estimate

Estimated error field. The format is as follows:

  • S bit is set to 0 regardless of time synchronization.

  • Z bit is set to 0 because the timestamp format is NTP.

  • Scale bits are set to 0.

  • Multiplier bits are non-zero.

SSID

Session Sender ID (SSID) automatically generated by the system.

MBZ

Must-Be-Zero (MBZ). The value must be 0. This field is used to ensure data packet symmetry between the session sender and session reflector.

Session reflector packet format

STAMP defines the STAMP test response packets reflected by the STAMP session reflector to the STAMP session sender. The following table lists the key protocol elements.

Table 2. Fields in a test response packet
Field Description

Sequence Number

Packet sequence number generated based on the transmission sequence. For each new session, its value starts at 0 and is incremented by one with each transmitted packet.

Timestamp

Timestamp when a test packet is transmitted from the session reflector.

Error Estimate

Estimated error field. The format is as follows:

  • S bit is set to 0 regardless of time synchronization.

  • Z bit is set to 0 because the timestamp format is NTP.

  • Scale bits are set to 0.

  • Multiplier bits are non-zero.

SSID

Session Sender ID (SSID) automatically generated by the system.

MBZ

Must-Be-Zero (MBZ). The value must be 0. This field is used to ensure data packet symmetry between the session sender and session reflector.

Receive Timestamp

Timestamp when a test packet is received on the session reflector.

Session-Sender Sequence Number

It is copied from the Sequence Number field of the STAMP Test request packet.

Session-Sender Timestamp

It is copied from the Timestamp field of the STAMP Test request packet.

Session-Sender Error Estimate

It is copied from the Error Estimate field of the STAMP Test request packet.

Session-Sender TTL

TTL value of the STAMP test packet as received by the session reflector.
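
Taken together, the echoed and reflector-generated timestamps let the session sender compute a round-trip delay that excludes the reflector's processing time. A minimal sketch of that calculation (variable names are illustrative; timestamps are in seconds):

```python
def round_trip_delay(t1, t2, t3, t4):
    """Round-trip network delay, excluding reflector processing time.

    t1: sender transmit time (Session-Sender Timestamp, echoed back)
    t2: reflector receive time (Receive Timestamp)
    t3: reflector transmit time (Timestamp)
    t4: arrival time of the reflected packet at the sender
    """
    return (t4 - t1) - (t3 - t2)   # subtract time spent in the reflector

# 10 ms sender-observed total minus 2 ms of reflector processing
# leaves 8 ms of network delay.
print(round_trip_delay(0.000, 0.004, 0.006, 0.010))
```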

Interoperability of STAMP and TWAMP Light

The following guidelines ensure interoperability between STAMP and TWAMP Light by defining packet-processing rules based on packet size and content, particularly the 45th byte, to distinguish between the two protocols:
  • UDP packets with a length less than 44 bytes are processed using TWAMP Light processing rules, which involves simple padding and symmetrical packet size handling.

  • UDP packets with a length equal to 44 bytes are processed as STAMP packets.

  • For UDP packets with a length equal to 45 bytes or more, the 45th byte is checked for the flags structure (100xxxxx).

    • If found, the packets are processed as STAMP packets.

    • If not found, the packet is assumed to be a TWAMP Light padded packet and processed accordingly. The TWAMP Light packet uses all zeros padding to avoid matching the 100xxxxx pattern by accident.

  • Multiple TLVs in a STAMP test packet are parsed using the length field.

  • If a TWAMP Light test packet mistakenly matches the 100xxxxx pattern at byte 45, the reflector attempts to parse the TLV. Failure to parse results in marking the byte as 110xxxxx and halting further STAMP TLV processing. However, the base STAMP packet continues to be processed.

  • TWAMP Light packets arriving on a STAMP session reflector must use all zeros padding to avoid unintentional mismatching.
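
The size and flags-byte rules above can be sketched as a classifier (illustrative only; the function name is an assumption and the logic covers just the initial demultiplexing step, not TLV parsing):

```python
# Classify a received test packet as STAMP or TWAMP Light from its UDP
# payload length and the flags byte at offset 44 (the 45th byte).
def classify(payload: bytes) -> str:
    if len(payload) < 44:
        return "twamp-light"    # shorter than the base STAMP PDU
    if len(payload) == 44:
        return "stamp"          # exactly the base STAMP PDU size
    # 45 bytes or more: check the 45th byte for the 100xxxxx flags pattern
    if payload[44] & 0b11100000 == 0b10000000:
        return "stamp"          # flags structure found -> STAMP with TLVs
    return "twamp-light"        # assumed all-zeros TWAMP Light padding

print(classify(bytes(14)))                   # twamp-light (short packet)
print(classify(bytes(44)))                   # stamp (base PDU)
print(classify(bytes(44) + bytes([0x80])))   # stamp (flags 100xxxxx)
print(classify(bytes(45)))                   # twamp-light (zero padding)
```

This is why TWAMP Light padding must be all zeros: a non-zero 45th byte could accidentally match the 100xxxxx pattern and be misclassified as STAMP.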

STAMP statistics

The following STAMP statistics are available in SR Linux:

  • system-level session reflector statistics

  • session reflector statistics for each network instance

  • test session statistics

See Displaying STAMP statistics for more information.

Configuring STAMP session reflector

To configure a STAMP session reflector for a network instance, use the oam stamp command and specify the network instance, IP address prefix, and the UDP port as shown in the example.

Configuring STAMP session reflector

The following example configures a session reflector.

--{ + candidate shared default }--[  ]--
# info oam stamp
    oam {
        stamp {
            session-reflector {
                network-instance default {
                    description test
                    admin-state enable
                    udp-port 862
                    ip-prefix 192.20.13.20/32 {
                    }
                }
            }
        }
    }

Displaying STAMP statistics

To display system-level STAMP session reflector statistics, use the info from state oam stamp command.

Displaying STAMP statistics

The following example displays STAMP statistics.

--{ + candidate shared default }--[  ]--
# info from state oam stamp
    oam {
        stamp {
            session-reflector {
                inactivity-timer 900
                statistics {
                    test-frames-received 400
                    test-frames-sent 400
                    test-session-count 4
                    reflector-table-entries-full 0
                    packet-discards-on-reception 0
                    packet-discards-on-transmission 0
                    session-reflector-not-found 0
                    reflectors-configured 1
                    reflectors-operational 1
                    reflectors-not-operational 0
                }
                network-instance default {
                    admin-state enable
                    udp-port 862
                    oper-state up
                    ip-prefix 10.11.1.0/24 {
                    }
                    ip-prefix 10.10.11.0/24 {
                    }
                    ip-prefix 10.10.12.0/24 {
                    }
                    ip-prefix 10.10.14.0/24 {
                    }
                    ip-prefix 10.20.1.0/24 {
                    }
                    ip-prefix 10.12.1.0/24 {
                    }
                    ip-prefix 10.13.1.0/24 {
                    }
                    ip-prefix 10.14.1.0/24 {
                    }
                    ip-prefix 2001:db8:101:1:1::/120 {
                    }
                    ip-prefix 2001:db8:102:1:1::/120 {
                    }
                    ip-prefix 2001:db8:103:1:1::/120 {
                    }
                    ip-prefix 2001:db8:104:1:1::/120 {
                    }
                    ip-prefix 2001:db8:105:1:1::/120 {
                    }
                    ip-prefix 2001:db8:106:1:1::/120 {
                    }
                    ip-prefix 2001:db8:107:1:1::/120 {
                    }
                    ip-prefix 2001:db8:108:1:1::/120 {
                    }
                    statistics {
                        test-frames-received 400
                        test-frames-sent 400
                        test-sessions 4
                        prefix-match-failure 0
                        session-reflector-udp-port-registration-failure 0
                        malformed-packet 0
                        packet-discards-source-destination-equal 0
                    }
                    test-session-statistics 10.20.1.3 session-sender-udp 44000 session-reflector-ip 11.20.1.2 session-reflector-udp 862 session-identifier 1736 {
                        last-sequence-number-received 99
                        last-sequence-number-transmitted 99
                        test-frames-received 100
                        test-frames-sent 100
                        malformed-tlv 0
                    }
                    test-session-statistics 10.10.3.3 session-sender-udp 44000 session-reflector-ip 20.10.3.2 session-reflector-udp 862 session-identifier 1737 {
                        last-sequence-number-received 99
                        last-sequence-number-transmitted 99
                        test-frames-received 100
                        test-frames-sent 100
                        malformed-tlv 0
                    }
                    test-session-statistics 2001:db8:103:1:1:: session-sender-udp 44000 session-reflector-ip fc00::b14:102 session-reflector-udp 862 session-identifier 1738 {
                        last-sequence-number-received 99
                        last-sequence-number-transmitted 99
                        test-frames-received 100
                        test-frames-sent 100
                        malformed-tlv 0
                    }
                    test-session-statistics 2001:db8:104:1:1:: session-sender-udp 44000 session-reflector-ip fc00::140a:302 session-reflector-udp 862 session-identifier 1739 {
                        last-sequence-number-received 99
                        last-sequence-number-transmitted 99
                        test-frames-received 100
                        test-frames-sent 100
                        malformed-tlv 0
                    }
                }
            }
        }
    }

MPLS OAM tools and protocols

This section provides information about the MPLS OAM tools and protocols.

LSP ping and trace

The LSP diagnostics include implementations of LSP ping and LSP trace based on RFC 8029, Detecting Multiprotocol Label Switched (MPLS) Data Plane Failures. LSP ping provides a mechanism to detect data plane failures in MPLS LSPs. LSP ping and LSP trace are modeled after the ICMP echo request or reply used by ping and trace to detect and localize faults in IP networks.

For a specific LDP FEC, LSP ping verifies whether the packet reaches the egress label edge router (LER), while for LSP trace, the packet is sent to the control plane of each transit Label Switching Router (LSR) that performs various checks to see if it is intended to be a transit LSR for the path.

The downstream mapping TLV is used in LSP ping and LSP trace to provide a mechanism for the sender and responder nodes to exchange and validate interface and label stack information for each downstream hop in the path of an LDP FEC.

The following topics provide more information about performing LSP ping and trace.

ECMP considerations for LSP ping and LSP trace

If an LSP trace is initiated without the destination IP address, the sender node does not include multi-path information in the Downstream Mapping TLV of the echo request message (multipath type=0). The responder node replies with a Downstream Mapping TLV for each outgoing interface which is part of the ECMP next hop set for the FEC. The sender node selects the first Downstream Mapping TLV to use for subsequent probes one hop further toward the destination.

If an LSP trace is initiated with the destination IP address, the sender node includes the multipath information in the Downstream Mapping TLV in the echo request message (multipath type=8). The ecmp-interface-select and ecmp-next-hop-select options allow the LER to exercise a specific ECMP path. If both the options are specified, the ecmp-interface-select takes precedence. The ecmp-interface-select and ecmp-next-hop-select options can be used to direct the echo request message at the sender node to be sent out to a specific outgoing interface which is part of an ECMP path set for the FEC.

LSP ping and trace for LDP tunnels

To check connectivity and trace the path to any midpoint or endpoint of an LDP tunnel, SR Linux supports the following OAM commands:
  • tools oam lsp-ping ldp fec <prefix>
  • tools oam lsp-trace ldp fec <prefix>

Supported parameters include destination-ip, source-ip, timeout, ecmp-next-hop-select, and traffic-class; only the fec parameter is mandatory.

Results from the lsp-ping and lsp-trace operations are displayed using info from state commands.

Performing an LSP ping to an LDP tunnel endpoint

To check connectivity to an LDP tunnel endpoint, use the tools oam lsp-ping ldp command, specifying the IPv4 or IPv6 FEC prefix of the LDP tunnel. To display the results, use the info from state oam lsp-ping ldp command, specifying the session ID output from the lsp-ping.

Perform an LSP ping to an LDP tunnel endpoint (IPv4)
--{ + running }--[  ]--
# tools oam lsp-ping ldp fec 10.20.1.6/32
/oam/lsp-ping/ldp/fec[prefix=10.20.1.6/32]:
    Initiated LSP Ping to prefix 10.20.1.6/32 with session id 49152 
Display results of the LSP ping (IPv4)
--{ + running }--[  ]--
# info from state oam lsp-ping ldp fec 10.20.1.6/32 session-id 49152
    oam {
        lsp-ping {
            ldp {
                fec 10.20.1.6/32 {
                    session-id 49152 {
                        test-active false
                        statistics {
                            round-trip-time {
                                minimum 4292
                                maximum 4292
                                average 4292
                                standard-deviation 0
                            }
                        }
                        path-destination {
                            ip-address 127.0.0.1
                        }
                        sequence 1 {
                            probe-size 48
                            request-sent true
                            out-interface ethernet-1/33.1
                            reply {
                                received true
                                reply-sender 10.20.1.6
                                udp-data-length 40
                                mpls-ttl 255
                                round-trip-time 4292
                                return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                return-subcode 1
                            }
                        }
                    }
                }
            }
        }
    }
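The statistics block in this state output summarizes the per-sequence round-trip times (all values in microseconds). As an illustration only (not SR Linux code; the sample values are hypothetical), the same four fields can be computed from a list of RTT samples:

```python
import math

def rtt_statistics(samples_us):
    """Summarize round-trip-time samples (microseconds) the way the
    lsp-ping state output reports them: minimum, maximum, average,
    and (population) standard deviation."""
    n = len(samples_us)
    avg = sum(samples_us) / n
    var = sum((s - avg) ** 2 for s in samples_us) / n
    return {
        "minimum": min(samples_us),
        "maximum": max(samples_us),
        "average": avg,
        "standard-deviation": math.sqrt(var),
    }

# With a single probe, as in the example output, all values collapse
# to the sample itself and the standard deviation is zero.
print(rtt_statistics([4292]))
```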
Perform an LSP ping to an LDP tunnel endpoint (IPv6)
--{ + running }--[  ]--
# tools oam lsp-ping ldp fec fc00::a14:106/128
/oam/lsp-ping/ldp/fec[prefix=fc00::a14:106/128]:
    Initiated LSP Ping to prefix fc00::a14:106/128 with session id 49169 
Display results of the LSP ping (IPv6)
--{ + running }--[  ]--
# info from state oam lsp-ping ldp fec fc00::a14:106/128 session-id 49169
    oam {
        lsp-ping {
            ldp {
                fec fc00::a14:106/128 {
                    session-id 49169 {
                        test-active false
                        statistics {
                            round-trip-time {
                                minimum 47539
                                maximum 47539
                                average 47539
                                standard-deviation 0
                            }
                        }
                        path-destination {
                            ip-address ::ffff:127.0.0.0
                        }
                        sequence 1 {
                            probe-size 60
                            request-sent true
                            out-interface ethernet-1/31.1
                            reply {
                                received true
                                reply-sender fc00::a14:106
                                udp-data-length 40
                                mpls-ttl 255
                                round-trip-time 47539
                                return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                return-subcode 1
                            }
                        }
                    }
                }
            }
        }
    }
Performing an LSP trace for an LDP tunnel

To trace the path to any midpoint or endpoint of an LDP tunnel, use the tools oam lsp-trace ldp command, specifying the IPv4 or IPv6 FEC prefix of the LDP tunnel. To display the results, use the info from state oam lsp-trace ldp command, specifying the session ID output from the lsp-trace.

Perform an LSP trace to an LDP tunnel endpoint (IPv4)
--{ + running }--[  ]--
# tools oam lsp-trace ldp fec 10.20.1.6/32
/oam/lsp-trace/ldp/fec[prefix=10.20.1.6/32]:
    Initiated LSP Trace to prefix 10.20.1.6/32 with session id 49153 
Display results of the LSP trace (IPv4)
--{ + running }--[  ]--
# info from state oam lsp-trace ldp fec 10.20.1.6/32 session-id 49153
    oam {
        lsp-trace {
            ldp {
                fec 10.20.1.6/32 {
                    session-id 49153 {
                        test-active false
                        path-destination {
                            ip-address 127.0.0.1
                        }
                        hop 1 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.2
                                    udp-data-length 60
                                    mpls-ttl 1
                                    round-trip-time 4824
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv4-numbered
                                    downstream-router-address 10.10.4.4
                                    downstream-interface-address 10.10.4.4
                                    mpls-label 1 {
                                        label 2002
                                        protocol ldp
                                    }
                                }
                            }
                        }
                        hop 2 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.4
                                    udp-data-length 60
                                    mpls-ttl 2
                                    round-trip-time 4693
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv4-numbered
                                    downstream-router-address 10.10.9.6
                                    downstream-interface-address 10.10.9.6
                                    mpls-label 1 {
                                        label 2000
                                        protocol ldp
                                    }
                                }
                            }
                        }
                        hop 3 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.6
                                    udp-data-length 32
                                    mpls-ttl 3
                                    round-trip-time 4597
                                    return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                    return-subcode 1
                                }
                            }
                        }
                    }
                }
            }
        }
    }
Perform an LSP trace to an LDP tunnel endpoint (IPv6)
--{ + running }--[  ]--
# tools oam lsp-trace ldp fec fc00::a14:106/128
/oam/lsp-trace/ldp/fec[prefix=fc00::a14:106/128]:
    Initiated LSP Trace to prefix fc00::a14:106/128 with session id 49168 
Display results of the LSP trace (IPv6)
--{ + running }--[  ]--
# info from state oam lsp-trace ldp fec fc00::a14:106/128 session-id 49168
    oam {
        lsp-trace {
            ldp {
                fec fc00::a14:106/128 {
                    session-id 49168 {
                        test-active false
                        path-destination {
                            ip-address ::ffff:127.0.0.0
                        }
                        hop 1 {
                            probe 1 {
                                probe-size 112
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender fc00::a14:102
                                    udp-data-length 84
                                    mpls-ttl 1
                                    round-trip-time 41527
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv6-numbered
                                    downstream-router-address fe80::201:4ff:feff:1e
                                    downstream-interface-address fe80::201:4ff:feff:1e
                                    mpls-label 1 {
                                        label 2008
                                        protocol ldp
                                    }
                                }
                            }
                        }
                        hop 2 {
                            probe 1 {
                                probe-size 112
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender fc00::a14:104
                                    udp-data-length 84
                                    mpls-ttl 2
                                    round-trip-time 76569
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv6-numbered
                                    downstream-router-address fe80::201:6ff:feff:3
                                    downstream-interface-address fe80::201:6ff:feff:3
                                    mpls-label 1 {
                                        label 2001
                                        protocol ldp
                                    }
                                }
                            }
                        }
                        hop 3 {
                            probe 1 {
                                probe-size 112
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender fc00::a14:106
                                    udp-data-length 32
                                    mpls-ttl 3
                                    round-trip-time 41739
                                    return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                    return-subcode 1
                                }
                            }
                        }
                    }
                }
            }
        }
    }
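In both traces, each hop entry corresponds to an echo request sent with an incrementing MPLS TTL: transit LSRs answer with label-switched-at-stack-depth-n, and probing stops when the egress answers with replying-router-is-egress-for-fec-at-stack-depth-n. A simplified sketch of that loop (hypothetical path data, not SR Linux code):

```python
# Return codes as they appear in the lsp-trace state output.
TRANSIT = "label-switched-at-stack-depth-n"
EGRESS = "replying-router-is-egress-for-fec-at-stack-depth-n"

def trace(path):
    """path: list of (router, is_egress) tuples. Probe with MPLS
    TTL = 1, 2, ... until the egress answers, mirroring the
    hop 1..N entries in the state output."""
    hops = []
    for ttl, (router, is_egress) in enumerate(path, start=1):
        code = EGRESS if is_egress else TRANSIT
        hops.append((ttl, router, code))
        if is_egress:
            break
    return hops

# Assumed three-hop path matching the shape of the IPv4 example above.
for ttl, router, code in trace([("10.20.1.2", False),
                                ("10.20.1.4", False),
                                ("10.20.1.6", True)]):
    print(ttl, router, code)
```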

LSP ping and trace for segment routing tunnels

To check connectivity and trace the path to any midpoint or endpoint of an SR-ISIS shortest path tunnel, SR Linux supports the following OAM commands:
  • tools oam lsp-ping sr-isis prefix-sid <prefix>
  • tools oam lsp-trace sr-isis prefix-sid <prefix>

Supported parameters include destination-ip, source-ip, timeout, ecmp-next-hop-select, igp-instance, and traffic-class. However, the only mandatory parameter is prefix-sid.

Results from the lsp-ping and lsp-trace operations are displayed using info from state commands.

In the case of ECMP, even when the destination IP is configured, the SR Linux node may not exercise all NHLFEs.

Performing an LSP ping to a segment routing prefix

To check connectivity to a segment routing prefix, use the tools oam lsp-ping sr-isis command. To display the results, use the info from state oam lsp-ping sr-isis command, specifying the session ID output from the lsp-ping.

Perform an LSP ping to a destination segment routing prefix
# tools oam lsp-ping sr-isis prefix-sid 10.20.1.6/32
/oam/lsp-ping/sr-isis/prefix-sid[prefix=10.20.1.6/32]:
    Initiated LSP Ping to prefix 10.20.1.6/32 with session id 49152 
Display results of the LSP ping
--{ + running }--[  ]--
# info from state oam lsp-ping sr-isis prefix-sid 10.20.1.6/32 session-id 49152
    oam {
        lsp-ping {
            sr-isis {
                prefix-sid 10.20.1.6/32 {
                    session-id 49152 {
                        test-active false
                        statistics {
                            round-trip-time {
                                minimum 4292
                                maximum 4292
                                average 4292
                                standard-deviation 0
                            }
                        }
                        path-destination {
                            ip-address 127.0.0.1
                        }
                        sequence 1 {
                            probe-size 48
                            request-sent true
                            out-interface ethernet-1/33.1
                            reply {
                                received true
                                reply-sender 10.20.1.6
                                udp-data-length 40
                                mpls-ttl 255
                                round-trip-time 4292
                                return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                return-subcode 1
                            }
                        }
                    }
                }
            }
        }
    }
Performing an LSP trace to a segment routing prefix

To trace the path to any midpoint or endpoint of a segment routing tunnel, use the tools oam lsp-trace sr-isis command. To display the results, use the info from state oam lsp-trace sr-isis command, specifying the session ID output from the lsp-trace.

Perform an LSP trace to a destination segment routing prefix
# tools oam lsp-trace sr-isis prefix-sid 10.20.1.6/32
/oam/lsp-trace/sr-isis/prefix-sid[prefix=10.20.1.6/32]:
    Initiated LSP Trace to prefix 10.20.1.6/32 with session id 49153 
Display results of the LSP trace
--{ + running }--[  ]--
# info from state oam lsp-trace sr-isis prefix-sid 10.20.1.6/32 session-id 49153
    oam {
        lsp-trace {
            sr-isis {
                prefix-sid 10.20.1.6/32 {
                    session-id 49153 {
                        test-active false
                        path-destination {
                            ip-address 127.0.0.1
                        }
                        hop 1 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.2
                                    udp-data-length 60
                                    mpls-ttl 1
                                    round-trip-time 2768
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv4-numbered
                                    downstream-router-address 10.10.4.4
                                    downstream-interface-address 10.10.4.4
                                    mpls-label 1 {
                                        label 27000
                                        protocol isis
                                    }
                                }
                            }
                        }
                        hop 2 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.4
                                    udp-data-length 60
                                    mpls-ttl 2
                                    round-trip-time 3414
                                    return-code label-switched-at-stack-depth-n
                                    return-subcode 1
                                }
                                downstream-detailed-mapping 1 {
                                    mtu 1500
                                    address-type ipv4-numbered
                                    downstream-router-address 10.10.9.6
                                    downstream-interface-address 10.10.9.6
                                    mpls-label 1 {
                                        label 27000
                                        protocol isis
                                    }
                                }
                            }
                        }
                        hop 3 {
                            probe 1 {
                                probe-size 76
                                probes-sent 1
                                reply {
                                    received true
                                    reply-sender 10.20.1.6
                                    udp-data-length 32
                                    mpls-ttl 3
                                    round-trip-time 4429
                                    return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                    return-subcode 1
                                }
                            }
                        }
                    }
                }
            }
        }
    }

LSP ping and trace for uncolored SR-MPLS TE policy

To check connectivity and trace the path of a segment routing (SR) traffic-engineered (TE) tunnel using an uncolored SR-MPLS TE policy, SR Linux supports the following OAM commands:
  • tools oam lsp-ping te-policy sr-uncolored policy <policy-name> protocol-origin <value>
  • tools oam lsp-trace te-policy sr-uncolored policy <policy-name> protocol-origin <value>

Supported parameters include destination-ip, source-ip, interval, segment-list-index, timeout, ecmp-interface-select, mpls-ttl, ecmp-next-hop-select, send-count, traffic-class, and probe-size. The mandatory parameters are policy and protocol-origin.

Results from the lsp-ping and lsp-trace operations are displayed using info from state commands.

Performing an LSP ping to an uncolored SR-MPLS TE policy

To check connectivity to an SR-TE tunnel that uses an uncolored SR-MPLS TE policy, use the tools oam lsp-ping te-policy sr-uncolored policy command, specifying the uncolored SR-MPLS TE policy name and the protocol origin. To display the results, use the info from state oam lsp-ping te-policy sr-uncolored policy protocol-origin session-id command, specifying the session ID output from the lsp-ping.

Perform an LSP ping to an uncolored SR-MPLS TE policy
--{ + running }--[  ]--
# tools oam lsp-ping te-policy sr-uncolored policy polABCEF protocol-origin local
/:
Initiated LSP Ping for TE-policy polABCEF with session id 49152.
Please check "info from state oam" for result
Display results of the LSP ping
--{ + running }--[  ]--
# info from state oam lsp-ping te-policy sr-uncolored policy polABCEF protocol-origin local session-id 49152
    oam {
        lsp-ping {
            te-policy {
                sr-uncolored {
                    policy polABCEF protocol-origin local {
                        session-id 49152 {
                            test-active false
                            statistics {
                                round-trip-time {
                                    minimum 83
                                    maximum 83
                                    average 83
                                    standard-deviation 0
                                }
                            }
                            path-destination {
                                ip-address 127.0.0.1
                            }
                            sequence 1 {
                                probe-size 64
                                request-sent true
                                out-interface ethernet-1/31.1
                                reply {
                                    received true
                                    reply-sender 10.20.1.6
                                    udp-data-length 40
                                    mpls-ttl 255
                                    round-trip-time 83
                                    return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                    return-subcode 1
                                }
                            }
                        }
                    }
                }
            }
        }
    }
Performing an LSP trace to an uncolored SR-MPLS TE policy

To trace the path of an SR-TE tunnel that uses an uncolored SR-MPLS TE policy, use the tools oam lsp-trace te-policy sr-uncolored policy command, specifying the uncolored SR-MPLS TE policy name and the protocol origin. To display the results, use the info from state oam lsp-trace te-policy sr-uncolored policy protocol-origin session-id command, specifying the session ID output from the lsp-trace.

Perform an LSP trace to an uncolored SR-MPLS TE policy
--{ + running }--[  ]--
# tools oam lsp-trace te-policy sr-uncolored policy polABCEF protocol-origin local
/:
Initiated LSP Trace for TE-policy polABCEF with session id 49153.
Please check "info from state oam" for result

Display results of the LSP trace
--{ + running }--[  ]--
# info from state oam lsp-trace te-policy sr-uncolored policy polABCEF protocol-origin local session-id 49153
    oam {
        lsp-trace {
            te-policy {
                sr-uncolored {
                    policy polABCEF protocol-origin local {
                        session-id 49153 {
                            test-active false
                            path-destination {
                                ip-address 127.0.0.1
                            }
                            hop 1 {
                                probe 1 {
                                    probe-size 188
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.2
                                        udp-data-length 32
                                        mpls-ttl 1
                                        round-trip-time 165172
                                        return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                        return-subcode 4
                                    }
                                }
                                probe 2 {
                                    probe-size 156
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.2
                                        udp-data-length 68
                                        mpls-ttl 1
                                        round-trip-time 54673
                                        return-code label-switched-at-stack-depth-n
                                        return-subcode 3
                                    }
                                    downstream-detailed-mapping 1 {
                                        mtu 1500
                                        address-type ipv4-numbered
                                        downstream-router-address 10.10.3.3
                                        downstream-interface-address 10.10.3.3
                                        mpls-label 1 {
                                            label IMPLICIT_NULL
                                            protocol isis
                                        }
                                        mpls-label 2 {
                                            label 70019
                                            protocol isis
                                        }
                                        mpls-label 3 {
                                            label 70009
                                            protocol isis
                                        }
                                    }
                                }
                            }
                            hop 2 {
                                probe 1 {
                                    probe-size 156
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.3
                                        udp-data-length 32
                                        mpls-ttl 2
                                        round-trip-time 103751
                                        return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                        return-subcode 3
                                    }
                                }
                                probe 2 {
                                    probe-size 124
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.3
                                        udp-data-length 64
                                        mpls-ttl 2
                                        round-trip-time 8262
                                        return-code label-switched-at-stack-depth-n
                                        return-subcode 2
                                    }
                                    downstream-detailed-mapping 1 {
                                        mtu 1500
                                        address-type ipv4-numbered
                                        downstream-router-address 10.10.5.5
                                        downstream-interface-address 10.10.5.5
                                        mpls-label 1 {
                                            label IMPLICIT_NULL
                                            protocol isis
                                        }
                                        mpls-label 2 {
                                            label 70009
                                            protocol isis
                                        }
                                    }
                                }
                            }
                            hop 3 {
                                probe 1 {
                                    probe-size 124
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.5
                                        udp-data-length 32
                                        mpls-ttl 3
                                        round-trip-time 57971
                                        return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                        return-subcode 2
                                    }
                                }
                                probe 2 {
                                    probe-size 92
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.5
                                        udp-data-length 60
                                        mpls-ttl 3
                                        round-trip-time 101694
                                        return-code label-switched-at-stack-depth-n
                                        return-subcode 1
                                    }
                                    downstream-detailed-mapping 1 {
                                        mtu 1500
                                        address-type ipv4-numbered
                                        downstream-router-address 10.10.10.6
                                        downstream-interface-address 10.10.10.6
                                        mpls-label 1 {
                                            label IMPLICIT_NULL
                                            protocol isis
                                        }
                                    }
                                }
                            }
                            hop 4 {
                                probe 1 {
                                    probe-size 92
                                    probes-sent 1
                                    reply {
                                        received true
                                        reply-sender 10.20.1.6
                                        udp-data-length 32
                                        mpls-ttl 4
                                        round-trip-time 14393
                                        return-code replying-router-is-egress-for-fec-at-stack-depth-n
                                        return-subcode 1
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
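The return codes and subcodes in this trace follow the LSP ping specification (RFC 8029): the return-subcode carries the label-stack depth (RSC) the reply refers to, which is why it shrinks hop by hop (4 down to 1) as segment labels are popped along the uncolored policy's path. A hedged sketch of how those codes read (not SR Linux code):

```python
# Mapping of the return codes seen in the state output to their
# RFC 8029 meanings; <RSC> stands for the reported stack depth.
RETURN_CODES = {
    "label-switched-at-stack-depth-n":
        "Label switched at stack-depth <RSC> (transit LSR)",
    "replying-router-is-egress-for-fec-at-stack-depth-n":
        "Replying router is an egress for the FEC at stack-depth <RSC>",
}

def describe(return_code, return_subcode):
    """Expand <RSC> with the stack depth reported in return-subcode."""
    return RETURN_CODES[return_code].replace("<RSC>", str(return_subcode))

print(describe("label-switched-at-stack-depth-n", 3))
```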

Bidirectional Forwarding Detection

BFD is a lightweight mechanism used to monitor the liveness of a remote neighbor. It is lightweight enough that the ongoing send and receive processing can be implemented in forwarding hardware. Because of this, BFD can send and receive messages at a much higher rate than other control-plane hello mechanisms, providing faster detection of connection failures.

SR Linux supports BFD asynchronous mode, where BFD control packets are sent between two systems to activate and maintain BFD neighbor sessions between them.

BFD can be configured to monitor connectivity for the following:

SR Linux supports one BFD session per port/connector, or up to 1152 sessions on an eight-slot chassis, depending on the hardware configuration.

On SR Linux systems that support link aggregation groups (LAGs), SR Linux supports micro-BFD, where BFD sessions are established for individual members of a LAG. If the BFD session for one of the links indicates a connection failure, the link is taken out of service from the perspective of the LAG. See Micro-BFD.

Configuring BFD for a subinterface

You can enable BFD on a subinterface and set values for the intervals and the criteria for declaring a session down.

Timer values are in microseconds. The detection interval for the BFD session is calculated by multiplying the negotiated transmission interval by the detection-multiplier value.

The following example configures BFD for a subinterface.

--{ candidate shared default }--[  ]--
# info bfd
    bfd {
        subinterface ethernet-1/2.1 {
            admin-state enable
            desired-minimum-transmit-interval 250000
            required-minimum-receive 250000
            detection-multiplier 3
        }
    }
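The detection interval implied by this configuration follows the multiplication described above. A small sketch of the arithmetic (an illustration assuming RFC 5880 asynchronous-mode negotiation, not SR Linux code):

```python
def detection_time_us(local_rx_us, peer_tx_us, peer_multiplier):
    """BFD asynchronous-mode detection time at the local system:
    the negotiated receive interval (the greater of our required
    minimum receive interval and the peer's desired transmit
    interval) times the peer's detection multiplier."""
    negotiated = max(local_rx_us, peer_tx_us)
    return negotiated * peer_multiplier

# With both peers configured as in the example above:
# 250000 us negotiated interval x multiplier 3 = 750000 us (750 ms).
print(detection_time_us(250000, 250000, 3))
```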

Configuring BFD under the BGP protocol

You can configure BFD under the BGP protocol at the global, group, or neighbor level.

Before enabling BFD, you must first configure it for a subinterface and set timer values. See Configuring BFD for a subinterface.

Configure BFD under the BGP protocol at the global level

--{ candidate shared default }--[  ]--
# info network-instance default
    network-instance default {
        protocols {
            bgp {
                failure-detection {
                    enable-bfd true
                }
            }
        }
    }

Configure BFD for a BGP peer group

The following example configures BFD for the links between peers within an associated BGP peer group.

--{ * candidate shared default }--[  ]--
# info network-instance default protocols bgp
    network-instance default {
        protocols {
            bgp {
                group test {
                    failure-detection {
                        enable-bfd true
                    }
                }
            }
        }
    }

Configure BFD for BGP neighbors

The following example configures BFD for the link between BGP neighbors.

--{ candidate shared default }--[  ]--
# info network-instance default protocols bgp
    network-instance default {
        protocols {
            bgp {
                neighbor 192.168.0.1 {
                    failure-detection {
                        enable-bfd true
                        fast-failover true
                    }
                }
            }
        }
    }

Configuring BFD for static routes

You can use BFD as a failure detection mechanism for monitoring the reachability of next hops for static routes. When BFD is enabled for a static route, an active BFD session between the local router and the defined next hops is required for the static route to be operationally active.

You enable BFD for specific next-hop groups; as a result, BFD is enabled for any static route that refers to the next-hop group. If multiple next hops are defined within the next-hop group, a BFD session is established between the local address and each next hop in the next-hop group.

A static route is considered operationally up if at least one of the configured next-hop addresses can establish a BFD session. If the BFD session fails, the associated next hop is removed from the FIB as an active next hop.
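
The next-hop resolution behavior described above can be modeled with a short sketch (an illustrative pseudomodel, not SR Linux internals; the addresses are hypothetical):

```python
# Illustrative model of BFD-gated static-route next hops.
# A next hop stays active in the FIB only while its BFD session is up;
# the route is operationally up if at least one next hop remains active.

def active_next_hops(bfd_session_up: dict[str, bool]) -> list[str]:
    return [nh for nh, up in bfd_session_up.items() if up]

def route_operationally_up(bfd_session_up: dict[str, bool]) -> bool:
    return len(active_next_hops(bfd_session_up)) > 0

sessions = {"192.0.2.10": True, "192.0.2.11": False}  # hypothetical next hops
print(active_next_hops(sessions))        # ['192.0.2.10']
print(route_operationally_up(sessions))  # True
```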

The following example enables BFD for a static route next hop:

--{ * candidate shared default }--[ network-instance black ]--
# info next-hop-groups
    next-hop-groups {
        group static-ipv4-grp {
            admin-state enable
            nexthop 1 {
                failure-detection {
                    enable-bfd {
                        local-address 192.0.2.1
                    }
                }
            }
        }
    }

A BFD session is established between the address configured with the local-address parameter and each next-hop address before that next-hop address is installed in the forwarding table.

All next-hop BFD sessions share the same timer settings, which are taken from the BFD configuration for the subinterface where the address in the local-address parameter is configured. See Bidirectional Forwarding Detection.

Configuring BFD under OSPF

For OSPFv2 and OSPFv3, you can enable BFD at the interface level to monitor the connectivity between the router and its attached network.

--{ candidate shared default }--[  ]--
# info network-instance default protocols ospf
    network-instance default {
        interface ethernet-1/1.1 {
            interface-ref {
                interface ethernet-1/1
                subinterface 1
            }
        }
        protocols {
            ospf {
                instance o1 {
                    version ospf-v2
                    area 1.1.1.1 {
                        interface ethernet-1/1.1 {
                            failure-detection {
                                enable-bfd true
                            }
                        }
                    }
                }
            }
        }
    }

Configuring BFD under IS-IS

You can configure BFD at the interface level for IS-IS. You can optionally configure a BFD-enabled TLV to be included for IPv4 or IPv6 on the IS-IS interface.

--{ candidate shared default }--[  ]--
# info network-instance default protocols isis
    network-instance default {
        interface ethernet-1/1.1 {
            interface-ref {
                interface ethernet-1/1
                subinterface 1
            }
        }
        protocols {
            isis {
                instance i1 {
                    ipv4-unicast {
                        admin-state enable
                    }
                    interface ethernet-1/1.1 {
                        ipv4-unicast {
                            enable-bfd true
                            include-bfd-tlv true
                        }
                    }
                }
            }
        }
    }

Viewing the BFD state

Use the info from state command to verify the BFD state for a network instance.

# info from state bfd network-instance default peer 30 
    bfd {
        network-instance default {
            peer 30 {
                oper-state up
                local-address 192.168.1.5
                remote-address 192.168.1.3
                remote-discriminator 25
                subscribed-protocols bgp_mgr
                session-state UP
                remote-session-state UP
                last-state-transition 2020-01-24T16:22:55.224Z
                failure-transitions 0
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 250000
                active-receive-interval 250000
                remote-multiplier 3
                async {
                    last-packet-transmitted 2020-01-24T16:23:19.385Z
                    last-packet-received 2020-01-24T16:23:18.906Z
                    transmitted-packets 32
                    received-packets 32
                    up-transitions 1
                }
            }
        }
    }

Micro-BFD

Micro-BFD refers to running BFD over the individual links in a LAG to monitor the bidirectional liveliness of the Ethernet links that make up the LAG.

A LAG member cannot be made operational within the LAG until the micro-BFD session is fully established. If a micro-BFD session fails, the corresponding Ethernet link is taken out of service from the perspective of the LAG.

Micro-BFD is supported on Ethernet LAG interfaces with an IP interface. Micro-BFD sessions are associated with each individual link. When enabled, the state of the individual links depends on the micro-BFD session state:

  • Micro-BFD sessions must be established between both endpoints of a link before the link can be operationally up.

  • If the micro-BFD session fails, the associated Ethernet link becomes operationally down from the perspective of the LAG.

  • If LACP is not enabled for the LAG and the Ethernet port is up, the system attempts to re-establish the micro-BFD session with the far end of the link.

  • If LACP is enabled for the LAG and the Ethernet port is up, the system attempts to re-establish the micro-BFD session with the far end of the link when LACP reaches the distributing state.

If a link is not active for forwarding from the perspective of a LAG, ARP can still be performed across the link. For example, when a link is being brought up, and its micro-BFD session is not yet established, ARP can still be performed for the MAC address at the far end of the link, even though the link is not yet part of the LAG.

Micro-BFD packets bypass ingress and egress subinterface/interface ACLs, but received micro-BFD packets can be matched by CPM filters for filtering and logging.

Micro-BFD is supported on all SR Linux systems that also support LAGs: 7250 IXR and 7250 IXR-X; 7220 IXR-D1, D2, and D3; and 7220 IXR-H2 and H3.

Configuring micro-BFD for a LAG interface

To configure micro-BFD for a LAG interface, you configure a local IP address to be used as the source address for the BFD packets, and a remote address for the far end of the BFD session.

You can specify the minimum interval in microseconds between transmissions of BFD control packets, as well as the minimum acceptable interval between received BFD control packets. The detection-multiplier setting specifies the number of consecutive packets that must be missed before the BFD session is declared down.

--{ * candidate shared default }--[  ]--
# info bfd micro-bfd-sessions
    micro-bfd-sessions {
        lag-interface lag1 {
            admin-state enable
            local-address 192.35.2.5
            remote-address 192.35.2.3
            desired-minimum-transmit-interval 250000
            required-minimum-receive 250000
            detection-multiplier 3
        }
    }

Viewing the micro-BFD state

Use the info from state command to verify the micro-BFD state for members of a LAG interface.

# info from state micro-bfd-sessions lag-interface lag1 member-interface ethernet-2/1
    micro-bfd-sessions {
        lag-interface lag1 {
            admin-state UP
            local-address 192.0.2.5
            remote-address 192.0.2.3
            desired-minimum-transmit-interval 250000
            required-minimum-receive 250000
            detection-multiplier 3
            member-interface ethernet-2/1 {
                session-state UP
                remote-session-state UP
                last-state-transition 2020-01-24T16:22:55.224Z
                last-failure-time 2020-01-24T16:22:55.224Z
                failure-transitions 0
                local-discriminator 25
                remote-discriminator 25
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 250000
                active-receive-interval 250000
                remote-multiplier 3
                async {
                   last-clear 2020-01-23T16:21:19.385Z
                   last-packet-transmitted 2020-01-24T16:23:19.385Z
                   last-packet-received 2020-01-24T16:23:18.906Z
                   transmitted-packets 32
                   received-errored-packets 3
                   received-packets 32
                   up-transitions 1
                }
            }
        }
    }

Seamless Bidirectional Forwarding Detection (S-BFD)

Overview

BFD detects connection failures faster than other hello mechanisms. However, when many BFD sessions are configured, the session negotiation and state establishment overhead reduces system performance. Seamless bidirectional forwarding detection (S-BFD) is a simplified mechanism that speeds up session establishment by eliminating the negotiation and state establishment process. This is accomplished primarily by predetermining the session discriminator and using specific mechanisms to distribute the discriminators to a remote network entity, which allows client applications or protocols to quickly initiate and perform connectivity tests. Per-session state is maintained only at the head end of a session; the tail end simply reflects BFD control packets back to the head end.

Initiator and reflector

An S-BFD session is established between an initiator and a reflector. SR Linux supports only one reflector instance per node. A discriminator is assigned to both the initiator and the reflector.

The initiator initiates an S-BFD session on a network node and performs a continuity test by sending S-BFD packets to the reflector. The reflector receives each S-BFD packet and reflects it back, along with a state value based on its current state.

The following information is swapped in the S-BFD response:

  • The source and destination IP addresses

  • The source and destination UDP ports

  • The initiator and reflector discriminators
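
The reflector's swap of these fields can be sketched as follows. This is a minimal model of the behavior described above; the field names, addresses, and the initiator's source port are illustrative.

```python
# Minimal sketch of an S-BFD reflector response: the source/destination
# IP addresses, the UDP ports, and the two discriminators are swapped.
from dataclasses import dataclass, replace

@dataclass
class SBFDPacket:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    my_discriminator: int    # sender's own discriminator
    your_discriminator: int  # discriminator of the intended receiver

def reflect(pkt: SBFDPacket) -> SBFDPacket:
    """Build the reflected packet sent back to the initiator."""
    return replace(
        pkt,
        src_ip=pkt.dst_ip, dst_ip=pkt.src_ip,
        src_port=pkt.dst_port, dst_port=pkt.src_port,
        my_discriminator=pkt.your_discriminator,
        your_discriminator=pkt.my_discriminator,
    )

# Hypothetical probe from an initiator (16385) to a reflector (524289).
probe = SBFDPacket("192.0.2.1", "192.0.2.9", 49152, 7784, 16385, 524289)
resp = reflect(probe)
print(resp.dst_ip, resp.your_discriminator)  # 192.0.2.1 16385
```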

See Configuring an S-BFD reflector for information about how to configure a reflector. An SR Linux router can act as both an initiator and a reflector, allowing you to configure different S-BFD sessions.

S-BFD discriminator

SR Linux supports the following methods of mapping an S-BFD remote IP address with its discriminator:

  • Static configuration

  • Automatic learning using opaque IS-IS routing extensions

You can statically configure an S-BFD remote IP address and discriminator for each network instance. The S-BFD initiator starts sending S-BFD packets immediately if the discriminator value of the far-end reflector is known; no session setup is required, and there is no INIT state in an S-BFD session. The initiator state changes from AdminDown to Up when it begins to send S-BFD packets. The following table lists the S-BFD packet information that the initiator sends to the reflector.

Table 3. Fields in S-BFD packet

  • Source IP address: the local session IP address. For IPv6, this is a global unicast address belonging to the node.

  • Destination IP address: the IP address of the reflector; this must be configured.

  • My discriminator: the locally assigned discriminator.

  • Your discriminator: the discriminator value of the reflector; this must be configured.

See Statically configuring an S-BFD discriminator for more information about how to configure an S-BFD discriminator.

If the initiator receives a valid response from the reflector with an Up state, the initiator declares the S-BFD session state as Up. If the initiator fails to receive a specific number of responses, as determined by the BFD multiplier in the BFD template for the session, the initiator declares the S-BFD session state as Failed. If any discriminator changes, the session fails and the router attempts to restart it with the new values. If the reflector discriminator changes at the far-end peer, the session fails because the mapping may not have been updated locally before the system checks the local mapping table for the new reflector discriminator; the session is then bounced and brought up with the new values. If any discriminator is deleted, the corresponding S-BFD sessions are deleted.
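
The up/failed decision described above can be sketched as a simple counter of consecutive missed responses (an illustrative model; the threshold is the BFD multiplier from the session's BFD template):

```python
# Illustrative S-BFD initiator state logic: there is no INIT state; the
# initiator goes straight to Up when it starts transmitting, and declares
# Failed after <multiplier> consecutive missed responses from the reflector.
class SBFDInitiator:
    def __init__(self, multiplier: int = 3):
        self.multiplier = multiplier
        self.state = "AdminDown"
        self.missed = 0

    def start_sending(self) -> None:
        self.state = "Up"  # transitions from AdminDown as soon as it transmits

    def on_interval(self, response_received: bool) -> None:
        if response_received:
            self.missed = 0
            self.state = "Up"
        else:
            self.missed += 1
            if self.missed >= self.multiplier:
                self.state = "Failed"

init = SBFDInitiator(multiplier=3)
init.start_sending()
for _ in range(3):          # three consecutive intervals with no response
    init.on_interval(False)
print(init.state)  # Failed
```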

SR Linux supports automatic mapping of an S-BFD remote IP address with its discriminator using the IS-IS protocol extensions. The IS-IS protocol uses a sub-TLV of the capabilities TLV to advertise and distribute discriminators. See Automatically mapping an S-BFD discriminator for more information.

Routed and controlled return path

S-BFD supports the following forms of returning transmitted S-BFD packets back to the initiator:

  • Routed return

  • Controlled return path

In routed return, S-BFD uses an initiator-reflector model in which the initiator sends S-BFD messages to a reflector using the reflector's discriminator. The reflector reflects the S-BFD message back to the initiator via IPv4 or IPv6 routing.

In controlled return path for SR policy, the initiating node embeds a SID, typically a binding SID, that the reflecting node uses to determine the correct path back to the initiator. The S-BFD message is then forwarded along a path that is identical or similar to the original path on which the initiator sent the message.

S-BFD state

S-BFD session state is reported at the network instance, policy, and system levels. See Viewing the S-BFD state for more information.

Statically configuring an S-BFD discriminator

To statically map an S-BFD remote IP address with its discriminator for each network instance, you configure the network-instance bfd seamless-bfd command and specify the peer IP address and discriminator.

Statically configuring an S-BFD discriminator

The following example statically configures an S-BFD discriminator.

--{ + candidate shared default }--[  ]--
# info network-instance default bfd seamless-bfd
    network-instance default {
        bfd {
            seamless-bfd {
                peer 192.0.2.0 {
                    discriminator 30
                }
            }
        }
    }

Automatically mapping an S-BFD discriminator

SR Linux supports automatic mapping of an S-BFD remote IP address with its discriminator using IGP routing protocol extensions. The IS-IS protocol uses a sub-TLV of the capabilities TLV to distribute S-BFD discriminators. There is no explicit configuration to enable or disable router capability advertisement.

Output from BFD state

The following example shows the output for an automatically mapped S-BFD discriminator.

--{ + running }--[  ]--
# info from state bfd
    bfd {
        total-bfd-sessions 2
        total-unmatched-bfd-packets 1
        network-instance base {
            peer 16385 {
                oper-state up
                local-address 1.1.1.3
                remote-address 127.0.64.1
                remote-discriminator 524289
                subscribed-protocols SRPOLICY
                session-state UP
                remote-session-state UP
                last-state-transition "2024-05-15T19:15:58.117Z (49 seconds ago)"
                failure-transitions 0
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 1000000
                active-receive-interval 1000000
                remote-multiplier 3
                te-policy-name C_to_Fipv4
                te-policy-segment-list-index 1
                te-policy-protocol-origin LOCAL
                te-policy-segment-list-lsp-index 216
                sr-policy-endpoint 1.1.1.6
                async {
                    last-packet-transmitted "2024-05-15T19:16:43.140Z (4 seconds ago)"
                    last-packet-received "2024-05-15T19:16:43.146Z (4 seconds ago)"
                    transmitted-packets 61
                    received-packets 61
                    up-transitions 1
                }
            }
            peer 16386 {
                oper-state up
                local-address 1.1.1.3
                remote-address 127.0.64.2
                remote-discriminator 524289
                subscribed-protocols SRPOLICY
                session-state UP
                remote-session-state UP
                last-state-transition "2024-05-15T19:15:58.119Z (49 seconds ago)"
                failure-transitions 0
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 1000000
                active-receive-interval 1000000
                remote-multiplier 3
                te-policy-name C_to_Fipv4
                te-policy-segment-list-index 2
                te-policy-protocol-origin LOCAL
                te-policy-segment-list-lsp-index 217
                sr-policy-endpoint 1.1.1.6
                async {
                    last-packet-transmitted "2024-05-15T19:16:43.651Z (4 seconds ago)"
                    last-packet-received "2024-05-15T19:16:43.695Z (4 seconds ago)"
                    transmitted-packets 62
                    received-packets 62
                    up-transitions 1
                }
            }
        }
    }

Configuring an S-BFD reflector

To enable and configure an S-BFD reflector, use the network-instance bfd seamless-bfd reflector command. You must allocate the discriminator value from the S-BFD reflector pool, which ranges from 524288 to 526335.

Note:

Only a single reflector discriminator is supported for each network instance.

Configuring an S-BFD reflector

The following example configures an S-BFD reflector.

--{ + candidate shared default }--[  ]--
# info network-instance default bfd seamless-bfd reflector abc
    network-instance default {
        bfd {
            seamless-bfd {
                reflector abc {
                    local-discriminator 524289
                    admin-state enable
                    description test
                }
            }
        }
    }
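
As a quick sanity check of the reflector discriminator pool described above (524288 to 526335 per the text; the helper function is illustrative):

```python
# The S-BFD reflector discriminator pool: 524288-526335 inclusive.
SBFD_REFLECTOR_POOL = range(524288, 526336)  # range() excludes the stop value

def valid_reflector_discriminator(value: int) -> bool:
    return value in SBFD_REFLECTOR_POOL

print(valid_reflector_discriminator(524289))  # True  (value used in the example)
print(valid_reflector_discriminator(524287))  # False (below the pool)
```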

Viewing the S-BFD state

Use the info from state command to verify the S-BFD state.

Viewing the S-BFD state at network instance level

The following example displays the S-BFD status at the network instance level.

--{ + running }--[  ]--
# info from state network-instance base bfd seamless-bfd
    network-instance base {
        bfd {
            seamless-bfd {
                peer 1.1.1.6 {
                    discriminator 524289
                }
                reflector 1.1.1.3 {
                    local-discriminator 524289
                    admin-state enable
                }
            }
        }
    }

Viewing the S-BFD state at the policy level

The following example displays the S-BFD status at the policy level.

--{ + running }--[  ]--
# info from state network-instance base maintenance-policies policy mp
    network-instance base {
        maintenance-policies {
            policy mp {
                revert-timer disable
                seamless-bfd {
                    detection-multiplier 3
                    desired-minimum-transmit-interval 1000000
                    hold-down-timer 4
                    wait-for-up-timer 3
                    mode linear
                    threshold 1
                }
            }
        }
    }

Viewing the S-BFD state at the system level

The following example displays the S-BFD status at the system level.

--{ + running }--[  ]--
# info from state bfd
    bfd {
        total-bfd-sessions 2
        total-unmatched-bfd-packets 1
        network-instance base {
            peer 16385 {
                oper-state up
                local-address 1.1.1.3
                remote-address 127.0.64.1
                remote-discriminator 524289
                subscribed-protocols SRPOLICY
                session-state UP
                remote-session-state UP
                last-state-transition "2024-05-15T19:15:58.117Z (49 seconds ago)"
                failure-transitions 0
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 1000000
                active-receive-interval 1000000
                remote-multiplier 3
                te-policy-name C_to_Fipv4
                te-policy-segment-list-index 1
                te-policy-protocol-origin LOCAL
                te-policy-segment-list-lsp-index 216
                sr-policy-endpoint 1.1.1.6
                async {
                    last-packet-transmitted "2024-05-15T19:16:43.140Z (4 seconds ago)"
                    last-packet-received "2024-05-15T19:16:43.146Z (4 seconds ago)"
                    transmitted-packets 61
                    received-packets 61
                    up-transitions 1
                }
            }
            peer 16386 {
                oper-state up
                local-address 1.1.1.3
                remote-address 127.0.64.2
                remote-discriminator 524289
                subscribed-protocols SRPOLICY
                session-state UP
                remote-session-state UP
                last-state-transition "2024-05-15T19:15:58.119Z (49 seconds ago)"
                failure-transitions 0
                local-diagnostic-code NO_DIAGNOSTIC
                remote-diagnostic-code NO_DIAGNOSTIC
                remote-minimum-receive-interval 1000000
                remote-control-plane-independent false
                active-transmit-interval 1000000
                active-receive-interval 1000000
                remote-multiplier 3
                te-policy-name C_to_Fipv4
                te-policy-segment-list-index 2
                te-policy-protocol-origin LOCAL
                te-policy-segment-list-lsp-index 217
                sr-policy-endpoint 1.1.1.6
                async {
                    last-packet-transmitted "2024-05-15T19:16:43.651Z (4 seconds ago)"
                    last-packet-received "2024-05-15T19:16:43.695Z (4 seconds ago)"
                    transmitted-packets 62
                    received-packets 62
                    up-transitions 1
                }
            }
        }
    }