Port features
Multilink point-to-point protocol
MLPPP overview
Multilink point-to-point protocol (MLPPP) is a method of splitting, recombining, and sequencing packets across multiple logical data links. MLPPP is defined in RFC 1990, The PPP Multilink Protocol (MP).
MLPPP allows multiple PPP links to be bundled together, providing a single logical connection between two routers. Data can be distributed across the multiple links within a bundle to achieve high bandwidth. As well, MLPPP allows for a single frame to be fragmented and transmitted across multiple links. This capability allows for lower latency and also for a higher maximum receive unit (MRU).
Multilink protocol is negotiated during the initial LCP option negotiations of a standard PPP session. A system indicates to its peer that it is willing to perform MLPPP by sending the MP option as part of the initial LCP option negotiation.
Sending this option indicates that the system has the following capabilities:
The system offering the option is capable of combining multiple physical links into one logical link.
The system is capable of receiving upper layer protocol data units (PDUs) that are fragmented using the MP header and then reassembling the fragments back into the original PDU for processing.
The system is capable of receiving PDUs of size N octets, where N is specified as part of the option, even if N is larger than the maximum receive unit (MRU) for a single physical link.
When MLPPP has been successfully negotiated, the sending system is free to send PDUs encapsulated and/or fragmented with the MP header.
MP introduces a new protocol type with a protocol ID (PID) of 0x003d. MLPPP 24-bit fragment format and MLPPP 12-bit fragment format show the MLPPP fragment frame structure. Framing to indicate the beginning and end of the encapsulation is the same as that used by PPP and described in RFC 1662, PPP in HDLC-like Framing.
MP frames use the same HDLC address and control pair value as PPP: Address – 0xFF and Control – 0x03. The 2-octet protocol field is also structured the same way as in PPP encapsulation.
The required and default format for MP is the 24-bit format. During the LCP state, the 12-bit format can be negotiated. The 7705 SAR is capable of supporting and negotiating the alternate 12-bit frame format.
The maximum differential delay supported for MLPPP is 25 ms.
Protocol field
The protocol field (PID) is 2 octets. Its value identifies the datagram encapsulated in the Information field of the packet. For MP, the PID also identifies the presence of a 4-octet MP header (or 2-octet, if negotiated).
A PID of 0x003d identifies the packet as MP data with an MP header.
The LCP packets and protocol states of the MLPPP session follow those defined by PPP in RFC 1661. The options used during the LCP state for creating an MLPPP NCP session are described in the sections that follow.
B&E bits
The B&E bits are used to indicate the start and end of a packet. Packets entering the MLPPP process may be larger than the maximum received reconstructed unit (MRRU) of the MLPPP network. The B&E bits manage the fragmentation of an ingress packet when the packet exceeds the MRRU.
The B-bit indicates the first (or beginning) fragment of a packet. The E-bit indicates the last (or ending) fragment of a packet. If the ingress packet is not fragmented, both the B-bit and E-bit are set to true (=1).
Sequence number
Sequence numbers can be either 12 or 24 bits long. The sequence number is 0 for the first fragment on a newly constructed bundle and increments by one for each fragment sent on that bundle. The receiver keeps track of the incoming sequence numbers on each link in a bundle and reconstructs the required unbundled flow through processing of the received sequence numbers and B&E bits. For a detailed description of the algorithm, see RFC 1990.
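The following Python sketch illustrates how the B&E bits and the running sequence number combine when one PDU is split into MP fragments. It is an illustration only: the 24-bit sequence space is assumed, and the class bits and the actual MP header packing are not modeled.

```python
# Sketch: split one PDU into MP fragments with B/E bits and sequence numbers.
def fragment_pdu(pdu: bytes, max_frag: int, next_seq: int):
    pieces = [pdu[i:i + max_frag] for i in range(0, len(pdu), max_frag)] or [b""]
    frags = []
    for idx, piece in enumerate(pieces):
        frags.append({
            "B": idx == 0,                        # beginning fragment of the PDU
            "E": idx == len(pieces) - 1,          # ending fragment of the PDU
            "seq": (next_seq + idx) % (1 << 24),  # increments by one per fragment on the bundle
            "data": piece,
        })
    return frags, (next_seq + len(pieces)) % (1 << 24)
```

An unfragmented packet produces a single fragment with both the B-bit and the E-bit set, matching the B&E bit rules above.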
Information field
The Information field is 0 or more octets. The Information field contains the datagram for the protocol specified in the protocol field.
The MRRU has the same default value as the MTU for PPP. The MRRU is always negotiated during LCP.
Padding
On transmission, the Information field of the ending fragment may be padded with an arbitrary number of octets up to the MRRU. It is the responsibility of each protocol to distinguish padding octets from real information. Padding must only be added to the last fragment (E-bit set to true).
FCS
The FCS field of each MP packet is inherited from the normal framing mechanism from the member link on which the packet is transmitted. There is no separate FCS applied to the reconstituted packet as a whole if it is transmitted in more than one fragment.
LCP
The link control protocol (LCP) is used to establish the connection through an exchange of configure packets. This exchange is complete, and the LCP opened state entered, once a Configure-Ack packet has been both sent and received.
LCP allows for the negotiation of multiple options in a PPP session. MP is somewhat different from PPP, and therefore the following options are set for MP and are not negotiated:
no async control character map
no magic number
no link quality monitoring
address and control field compression
protocol field compression
no compound frames
no self-describing padding
Any non-LCP packets received during this phase must be silently discarded.
T1/E1 link hold timers
T1/E1 link hold timers (or MLPPP link flap dampening) guard against the node reporting excessive interface transitions. Timers can be set to determine when link up and link down events are advertised; that is, up-to-down and down-to-up transitions of the interface are not advertised to upper layer protocols (are dampened) until the configured timer has expired.
Multiclass MLPPP
The 7705 SAR supports multiclass MLPPP (MC-MLPPP) to address end-to-end delay caused by low-speed links transporting a mix of small and large packets. With MC-MLPPP, large, low-priority packets are fragmented to allow opportunities to send high-priority packets. QoS for MC-MLPPP is described in QoS in MC-MLPPP.
MC-MLPPP allows for the prioritization of multiple types of traffic flowing over MLPPP links, such as traffic between the cell site routers and the mobile operator’s aggregation routers. MC-MLPPP, as defined in RFC 2686, The Multi-Class Extension to Multi-Link PPP, is an extension of the MLPPP standard. MC-MLPPP is supported on access ports wherever PPP/MLPPP is supported, except on the 2-port OC3/STM1 Channelized Adapter card. It allows multiple classes of fragments to be transmitted over an MLPPP bundle, with each class representing a different priority level mapped to a forwarding class. The highest-priority traffic is transmitted over the MLPPP bundle with minimal delay regardless of the order in which packets are received.
Original MLPPP header format shows the original MLPPP header format that allowed only two implied classes. The two classes were created by transmitting two interleaving flows of packets; one with MLPPP headers and one without. This resulted in two levels of priority sent over the physical link, even without the implementation of multiclass support.
MC-MLPPP header format shows the short and long sequence number fragment format MC-MLPPP headers. The short sequence number fragment format header includes two class bits to allow for up to four classes of service. Four class bits are available in the long sequence number fragment format header, but a maximum of four classes are still supported. This extension to the MLPPP header format is detailed in RFC 2686.
The new MC-MLPPP header format uses the previously unused bits before the sequence number as the class identifier to allow four distinct classes of service to be identified.
QoS in MC-MLPPP
MC-MLPPP on the 7705 SAR supports scheduling based on multiclass implementation. Instead of the standard profiled queue-type scheduling, an MC-MLPPP encapsulated access port performs class-based traffic servicing. The four MC-MLPPP classes are scheduled in a strict priority fashion, as shown in the following table.
| MC-MLPPP class | Priority |
|---|---|
| 0 | Priority over all other classes |
| 1 | Priority over classes 2 and 3 |
| 2 | Priority over class 3 |
| 3 | No priority |
For example, if a packet is sent to an MC-MLPPP class 3 queue and all other queues are empty, the 7705 SAR fragments the packet according to the configured fragment size and begins sending the fragments. If a new packet arrives at an MC-MLPPP class 2 queue while the class 3 fragment is still being serviced, the 7705 SAR finishes sending any fragments of the class 3 packet that are on the wire, then holds back the remaining fragments in order to service the higher-priority packet.
The fragments of the first packet remain at the top of the class 3 queue. For packets of the same class, MC-MLPPP class queues operate on a first-in, first-out basis.
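The class-servicing order can be sketched as follows. This is an illustration only; the queue structure and function names are assumptions, and fragmentation and wire-level transmission are not modeled.

```python
from collections import deque

# Four per-class FIFO queues; class 0 has strict priority over all other classes.
class_queues = {cls: deque() for cls in range(4)}

def next_fragment():
    """Return the next (class, fragment) to transmit, lowest class number first."""
    for cls in range(4):
        if class_queues[cls]:
            return cls, class_queues[cls].popleft()
    return None
```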
The user configures the required number of MLPPP classes to use on a bundle. The forwarding class of the packet, as determined by the ingress QoS classification, is used to determine the MLPPP class for the packet. The mapping of forwarding class to MLPPP class is a function of the user-configurable number of MLPPP classes. The mapping for 4-class, 3-class, and 2-class MLPPP bundles is shown in the following table.
| FC ID | FC name | MLPPP class (4-class bundle) | MLPPP class (3-class bundle) | MLPPP class (2-class bundle) |
|---|---|---|---|---|
| 7 | NC | 0 | 0 | 0 |
| 6 | H1 | 0 | 0 | 0 |
| 5 | EF | 1 | 1 | 1 |
| 4 | H2 | 1 | 1 | 1 |
| 3 | L1 | 2 | 2 | 1 |
| 2 | AF | 2 | 2 | 1 |
| 1 | L2 | 3 | 2 | 1 |
| 0 | BE | 3 | 2 | 1 |
If one or more forwarding classes are mapped to a queue, the scheduling priority of the queue is based on the lowest forwarding class mapped to it. For example, if forwarding classes 0 and 7 are mapped to a queue, the queue is serviced by MC-MLPPP class 3 in a 4-class bundle model.
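For reference, the mapping in the table above can be restated as a simple lookup. This sketch only restates the table; it is not an implementation of the classifier or scheduler.

```python
# (FC ID) -> {number of MLPPP classes in the bundle: MLPPP class}, from the table above.
FC_TO_MLPPP_CLASS = {
    7: {4: 0, 3: 0, 2: 0},   # NC
    6: {4: 0, 3: 0, 2: 0},   # H1
    5: {4: 1, 3: 1, 2: 1},   # EF
    4: {4: 1, 3: 1, 2: 1},   # H2
    3: {4: 2, 3: 2, 2: 1},   # L1
    2: {4: 2, 3: 2, 2: 1},   # AF
    1: {4: 3, 3: 2, 2: 1},   # L2
    0: {4: 3, 3: 2, 2: 1},   # BE
}

def mlppp_class(fc_id: int, bundle_classes: int = 4) -> int:
    return FC_TO_MLPPP_CLASS[fc_id][bundle_classes]
```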
cHDLC
The 7705 SAR supports Cisco HDLC, which is an encapsulation protocol for information transfer. Cisco HDLC is a bit-oriented synchronous data-link layer protocol that specifies a data encapsulation method on synchronous serial links using frame characters and checksums.
Cisco HDLC monitors line status on a serial interface by exchanging keepalive request messages with peer network devices. The protocol also allows routers to discover IP addresses of neighbors by exchanging SLARP address-request and address-response messages with peer network devices.
The basic frame structure of a cHDLC frame is shown in the following table.
| Flag | Address | Control | Protocol | Information | FCS |
|---|---|---|---|---|---|
| 0x7E | 0x0F, 0x8F | 0x00 | 0x0800, 0x8035 | — | 16 or 32 bits |
The fields in the cHDLC frame have the following characteristics:
Address field – supports unicast (0x0F) and broadcast (0x8F) addresses
Control field – always set to 0x00
Protocol field – supports IP (0x0800) and SLARP (0x8035; see SLARP for information about limitations)
Information field – the length can be 0 to 9 kB
FCS field – can be 16 or 32 bits. The default is 16 bits for ports with a speed equal to or lower than OC3, and 32 bits for all other ports. The FCS for cHDLC is calculated with the same method and same polynomial as PPP.
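The following Python sketch assembles a cHDLC frame from the fields above and appends a 16-bit FCS computed with the PPP/HDLC algorithm of RFC 1662. It is an illustration only: the 0x7E flags, any transparency processing, and the 32-bit FCS option are not modeled.

```python
def fcs16(data: bytes) -> int:
    """PPP/HDLC 16-bit FCS (RFC 1662): init 0xFFFF, polynomial 0x8408, then complement."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def chdlc_frame(payload: bytes, protocol: int = 0x0800, unicast: bool = True) -> bytes:
    """Build address + control + protocol + information + FCS (flags omitted)."""
    header = bytes([0x0F if unicast else 0x8F, 0x00]) + protocol.to_bytes(2, "big")
    body = header + payload
    return body + fcs16(body).to_bytes(2, "little")   # FCS sent least significant octet first
```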
SLARP
The 7705 SAR supports only the SLARP keepalive protocol.
For the SLARP keepalive protocol, each system sends the other a keepalive packet at a user-configurable interval. The default interval is 10 seconds. Both systems must use the same interval to ensure reliable operation. Each system assigns sequence numbers to the keepalive packets it sends, starting with zero, independent of the other system. These sequence numbers are included in the keepalive packets sent to the other system. Also included in each keepalive packet is the sequence number of the last keepalive packet received from the other system, as assigned by the other system. This number is called the returned sequence number. Each system keeps track of the last returned sequence number it has received. Immediately before sending a keepalive packet, the system compares the sequence number of the packet it is about to send with the returned sequence number in the last keepalive packet it has received. If the two differ by 3 or more, it considers the line to have failed and does not route higher-level data across it until an acceptable keepalive response is received.
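The following Python sketch models only the sequence-number bookkeeping and the failure test described above. The class and attribute names are assumptions, and timers, SLARP packet formats, and recovery handling are not modeled.

```python
class SlarpKeepalive:
    def __init__(self):
        self.next_seq = 0        # sequence number assigned to the next keepalive sent
        self.last_returned = 0   # last returned sequence number heard from the peer (assumed start)

    def on_keepalive_received(self, returned_seq: int):
        self.last_returned = returned_seq

    def send_keepalive(self) -> bool:
        """Return False when the line is considered failed (difference of 3 or more)."""
        line_ok = (self.next_seq - self.last_returned) < 3
        self.next_seq += 1
        return line_ok
```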
IMA
Inverse Multiplexing over ATM (IMA) is a cell-based protocol where an ATM cell stream is inverse-multiplexed and demultiplexed in a cyclical fashion among ATM-supporting channels to form a higher bandwidth logical link. This logical link is called an IMA group. By grouping channels into an IMA group, customers gain bandwidth management capability at in-between rates (for example, between DS1 and DS3 or between E1 and E3) through the addition or removal of channels to or from the IMA group. The 7705 SAR supports the IMA protocol as specified by the Inverse Multiplexing for ATM (IMA) Specification version 1.1.
In the ingress direction, traffic coming over multiple ATM channels configured as part of a single IMA group is converted into a single ATM stream and passed for further processing to the ATM layer, where service-related functions (for example, Layer 2 traffic management or feeding into a pseudowire) are applied. In the egress direction, a single ATM stream (after service functions are applied) is distributed over all paths that are part of an IMA group after ATM layer processing takes place.
An IMA group interface compensates for differential delay and allows for only a minimal cell delay variation. The maximum differential delay supported for IMA is 75 ms on the 16-port T1/E1 ASAP Adapter card and 32-port T1/E1 ASAP Adapter card and 50 ms on the 2-port OC3/STM1 Channelized Adapter card.
The interface deals with links that are added or deleted, or that fail. The higher layers see only an IMA group and not individual links; therefore, service configuration and management is done using IMA groups, and not individual links that are part of it.
The IMA protocol uses an IMA frame as the unit of control. An IMA frame consists of a series of 128 consecutive cells. In addition to ATM cells received from the ATM layer, the IMA frame contains IMA OAM cells. Two types of cells are defined: IMA Control Protocol (ICP) cells and IMA filler cells. ICP cells carry information used by the IMA protocol at both ends of an IMA group (for example, IMA frame sequence number, link stuff indication, status and control indication, IMA ID, Tx and Rx test patterns, version of the IMA protocol). A single ICP cell is inserted at the ICP cell offset position (the offset may be different on each link of the group) of each frame. Filler cells are used by the transmitting side to fill up each IMA frame in case there are not enough ATM stream cells from the ATM layer, so a continuous stream of cells is presented to the physical layer. Those cells are then discarded by the receiving end. IMA frames are transmitted simultaneously on all paths of an IMA group, and when they are received out of sync at the other end of the IMA group link, the receiver compensates for differential link delays among all paths.
Network synchronization on ports and circuits
The 7705 SAR provides network synchronization on the following ports and CES circuits:
Network synchronization on T1/E1 and Ethernet ports
Line timing mode provides physical layer timing (Layer 1) that can be used as an accurate reference for nodes in the network. This mode is immune to any packet delay variation (PDV) occurring on a Layer 2 or Layer 3 link. Physical layer timing provides the best synchronization performance through a synchronization distribution network.
On the 7705 SAR-A variant with T1/E1 ports, line timing is supported on T1/E1 ports. Line timing is also supported on all synchronous Ethernet ports on both 7705 SAR-A variants. Synchronous Ethernet is supported on the XOR ports (1 to 4), configured as either RJ45 ports or SFP ports. Synchronous Ethernet is also supported on SFP ports 5 to 8. Ports 9 to 12 do not support synchronous Ethernet and therefore do not support line timing.
On the 7705 SAR-Ax, line timing is supported on all Ethernet ports.
On the 7705 SAR-H, line timing is supported on:
all Ethernet ports
T1/E1 ports on a chassis equipped with a 4-port T1/E1 and RS-232 Combination module
On the 7705 SAR-Hc, line timing is supported on all Ethernet ports.
On the 7705 SAR-M variants with T1/E1 ports, line timing is supported on T1/E1 ports. Line timing is also supported on all RJ45 Ethernet ports and SFP ports on all 7705 SAR-M variants.
In addition, line timing is supported on the following 7705 SAR-M modules:
2-port 10GigE (Ethernet) module
6-port SAR-M Ethernet module
On the 7705 SAR-Wx, line timing is supported on:
RJ45 Ethernet ports and optical SFP ports (these ports support synchronous Ethernet and IEEE 1588v2 PTP)
On the 7705 SAR-X, line timing is supported on T1/E1 ports and Ethernet ports.
On the 7705 SAR-8 Shelf V2 and 7705 SAR-18, line timing is supported on:
16-port T1/E1 ASAP Adapter card
32-port T1/E1 ASAP Adapter card
6-port Ethernet 10Gbps Adapter card
8-port Gigabit Ethernet Adapter card (dual-rate and copper SFPs do not support synchronous Ethernet)
2-port 10GigE (Ethernet) Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card (supported on the 7705 SAR-18 only)
4-port DS3/E3 Adapter card
2-port OC3/STM1 Channelized Adapter card
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card
4-port OC3/STM1 Clear Channel Adapter card
Packet Microwave Adapter card on ports that support synchronous Ethernet and on ports that support PCR
Synchronous Ethernet is a variant of line timing and is automatically enabled on ports and SFPs that support it. The operator can select a synchronous Ethernet port as a candidate for the timing reference. The recovered timing from this port is then used to time the system. This ensures that any of the system outputs are locked to a stable, traceable frequency source.
Network synchronization on SONET/SDH ports
Each SONET/SDH port can be independently configured to be loop-timed (recovered from an Rx line) or node-timed (recovered from the SSU in the active CSM).
A SONET/SDH port’s receive clock rate can be used as a synchronization source for the node.
Network synchronization on DS3/E3 ports
Each clear channel DS3/E3 port on a 4-port DS3/E3 Adapter card can be independently configured to be loop-timed (recovered from an Rx line), node-timed (recovered from the SSU in the active CSM), or differential-timed (derived from the comparison of a common clock to the received RTP timestamp in TDM pseudowire packets). When a DS3 port is channelized, each DS1 or E1 channel can be independently configured to be loop-timed, node-timed, or differential-timed (differential timing on DS1/E1 channels is supported only on the first three ports of the card). When not configured for differential timing, a DS3/E3 port can be configured to be a timing source for the node.
Network synchronization on DS3 CES circuits
Each DS3 CES circuit on a 2-port OC3/STM1 Channelized Adapter card can be loop-timed (recovered from an Rx line) or free-run (timing source is from its own clock). A DS3 circuit can be configured to be a timing source for the node.
Network synchronization on T1/E1 ports and circuits
Each T1/E1 port can be independently configured for loop-timing (recovered from an Rx line) or node-timing (recovered from the SSU in the active CSM).
In addition, T1/E1 CES circuits on the following can be independently configured for adaptive timing (clocking is derived from incoming TDM pseudowire packets):
16-port T1/E1 ASAP Adapter card
32-port T1/E1 ASAP Adapter card
7705 SAR-M (variants with T1/E1 ports)
7705 SAR-X
7705 SAR-A (variant with T1/E1 ports)
T1/E1 ports on the 4-port T1/E1 and RS-232 Combination module
T1/E1 CES circuits on the following can be independently configured for differential timing (recovered from RTP in TDM pseudowire packets):
16-port T1/E1 ASAP Adapter card
32-port T1/E1 ASAP Adapter card
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card (DS1/E1 channels)
4-port DS3/E3 Adapter card (DS1/E1 channels on DS3 ports; E3 ports cannot be channelized); differential timing on DS1/E1 channels is supported only on the first three ports of the card
7705 SAR-M (variants with T1/E1 ports)
7705 SAR-X
7705 SAR-A (variant with T1/E1 ports)
T1/E1 ports on the 4-port T1/E1 and RS-232 Combination module
A T1/E1 port can be configured to be a timing source for the node.
Node synchronization from GNSS receiver ports
The GNSS receiver port on the 7705 SAR-Ax, 7705 SAR-Wx, or 7705 SAR-H GPS Receiver module, and the GNSS Receiver card installed in a 7705 SAR-8 Shelf V2 or 7705 SAR-18, can provide a synchronization clock to the SSU in the router with the corresponding QL for SSM. This frequency can then be distributed to the rest of the router from the SSU as configured with the ref-order and ql-selection commands; see the 7705 SAR Basic System Configuration Guide for information. The GNSS reference is qualified only if the GNSS receiver port is operational, has sufficient satellites locked, and has a frequency successfully recovered. A PTP master/boundary clock can also use this frequency reference with PTP peers.
In the event of GNSS signal loss or jamming resulting in the unavailability of timing information, the GNSS receiver automatically prevents output of clock or synchronization data to the system, and the system can revert to alternate timing sources.
A 7705 SAR using GNSS or IEEE 1588v2 PTP for time of day/phase recovery can perform high-accuracy OAM timestamping and measurements. See the 7705 SAR Basic System Configuration Guide for information about node timing sources.
Flow control on Ethernet ports
IEEE 802.3x flow control, which is the process of pausing transmission based on received pause frames, is supported on Fast Ethernet, Gigabit Ethernet, and 10-Gigabit Ethernet (SFP+) ports. In the transmit direction, the Ethernet ports generate pause frames if buffer occupancy reaches critical levels or if port FIFO buffers are overloaded. Pause frame generation is handled automatically by the Ethernet Adapter card when the system-wide constant thresholds are exceeded. Generating pause frames ensures that newly arriving frames can still be processed and queued, mainly to maintain SLAs.
If autonegotiation is on for an Ethernet port, enabling and disabling of IEEE 802.3x flow control is autonegotiated for receive and transmit directions separately. If autonegotiation is turned off, the reception and transmission of IEEE 802.3x flow control is enabled by default and cannot be disabled.
Ingress flow control for the 6-port SAR-M Ethernet module is Ethernet link-based rather than port-based. When IEEE 802.3x flow control is enabled on the 6-port SAR-M Ethernet module, pause frames are multicast to all ports on the Ethernet link. There are two Ethernet links on the 6-port SAR-M Ethernet module: one for ports 1, 3, and 5, and one for ports 2, 4, and 6. Pause frames are sent to either ports 1, 3, and 5, or to ports 2, 4, and 6, depending on the link from which the pause frame originates.
Ethernet OAM
For more information about Ethernet OAM, see the 7705 SAR OAM and Diagnostics Guide, ‟Ethernet OAM capabilities”.
Ethernet OAM overview
802.3ah Clause 57 (EFM OAM) defines the operations, administration, and maintenance (OAM) sublayer, a link-level Ethernet OAM. It provides mechanisms for monitoring link operations, such as remote fault indication and remote loopback control.
Ethernet OAM gives network operators the ability to monitor the status of Ethernet links and quickly determine the location of failing links or fault conditions.
Because some of the sites where the 7705 SAR will be deployed will only have Ethernet uplinks, this OAM functionality is mandatory. For example, mobile operators must be able to request remote loopbacks from the peer router at the Ethernet layer in order to debug any connectivity issues. EFM OAM provides this capability.
EFM OAM is supported on network and access Ethernet ports and is configured at the port level. The access ports can be configured to tunnel the OAM traffic originated by the far-end devices.
EFM OAM has the following characteristics:
All EFM OAM, including loopbacks, operate on point-to-point links only.
EFM loopbacks are always line loopbacks (line Rx to line Tx).
When a port is in loopback, all frames (except EFM frames) are discarded. If dynamic signaling and routing is used (dynamic LSPs, OSPF, IS-IS, or BGP routing), all services also go down. If all signaling and routing protocols are static (static routes, LSPs, and service labels), the frames are discarded but services stay up.
The following EFM OAM functions are supported:
OAM capability discovery
configurable transmit interval with an Information OAMPDU
active or passive mode
OAM loopback
OAMPDU tunneling and termination (for Epipe service)
dying gasp at network and access ports
non-zero vendor-specific information field – the 32-bit field is encoded using the format 00:PP:CC:CC and references TIMETRA-CHASSIS-MIB
00 – must be zeros
PP – the platform type from tmnxHwEquippedPlatform
CC:CC – the chassis type index value from tmnxChassisType that is indexed in tmnxChassisTypeTable. The table identifies the specific chassis backplane.
The value 00:00:00:00 is sent for all releases that do not support the non-zero value or are unable to identify the required elements. There is no decoding of the peer or local vendor information fields on the network element. The hexadecimal value is included in the show port port-id ethernet efm-oam output.
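The following Python sketch shows one way to pack and unpack the 00:PP:CC:CC field described above. The byte layout (one zero octet, one platform octet, two chassis-type octets in network order) is an assumption based on this description, not a published encoding.

```python
def encode_vendor_info(platform: int, chassis_type: int) -> bytes:
    """Pack the 32-bit vendor-specific information field as 00:PP:CC:CC (assumed layout)."""
    return bytes([0x00, platform & 0xFF]) + chassis_type.to_bytes(2, "big")

def decode_vendor_info(field: bytes) -> tuple[int, int]:
    """Return (platform, chassis_type) from a 4-octet field (assumed layout)."""
    return field[1], int.from_bytes(field[2:4], "big")

print(encode_vendor_info(0, 0).hex(":"))   # 00:00:00:00 -- the value sent by releases without support
```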
With ignore-efm-state configured, if the EFM OAM protocol cannot negotiate a peer session or an established session fails, the port will enter the link up state. The link up state is used by many protocols to indicate that the port is administratively up and there is physical connectivity but a protocol (such as EFM OAM) has caused the port operational state to be down. The show port slot/mda/port command output includes a Reason Down field to indicate if the protocol is the underlying reason for the link up state. For EFM OAM, the Reason Down code is efmOamDown. This is shown in the following command output example, where port 1/1/3 is in a link up state.
*A:ALU-1># show port
===============================================================================
Ports on Slot 1
===============================================================================
Port Admin Link Port Cfg Oper LAG/ Port Port Port C/QS/S/XFP/
Id State State MTU MTU Bndl Mode Encp Type MDIMDX
-------------------------------------------------------------------------------
1/1/1 Down No Down 1578 1578 - netw null xcme
1/1/2 Down No Down 1578 1578 - netw null xcme
1/1/3 Up Yes Link Up 1522 1522 - accs qinq xcme
1/1/4 Down No Down 1578 1578 - netw null xcme
1/1/5 Down No Down 1578 1578 - netw null xcme
1/1/6 Down No Down 1578 1578 - netw null xcme
*A:ALU-1># show port 1/1/3
===============================================================================
Ethernet Interface
===============================================================================
Description : 10/100/Gig Ethernet SFP
Interface : 1/1/3 Oper Speed : N/A
Link-level : Ethernet Config Speed : 1 Gbps
Admin State : up Oper Duplex : N/A
Oper State : down Config Duplex : full
Reason Down : efmOamDown
Physical Link : Yes MTU : 1522
Single Fiber Mode : No Min Frame Length : 64 Bytes
IfIndex : 35749888 Hold time up : 0 seconds
Last State Change : 12/18/2012 15:58:29 Hold time down : 0 seconds
Last Cleared Time : N/A DDM Events : Enabled
Phys State Chng Cnt: 1
......
The EFM OAM protocol can be decoupled from the port state and operational state. In cases where an operator wants to remove the protocol, monitor only the protocol, migrate, or make changes, the ignore-efm-state command can be configured under the config>port>ethernet>efm-oam context.
When the ignore-efm-state command is configured on a port, the protocol behavior is normal. However, any failure in the EFM protocol state (discovery, configuration, time-out, loops, and so on) will not affect the port. Only a protocol warning message will be raised to indicate issues with the protocol. When the ignore-efm-state command is not configured on a port, the default behavior is that the port state will be affected by any EFM OAM protocol fault or clear conditions.
Enabling and disabling this command immediately affects the port state and operating state based on the active configuration, and this is displayed in the show port command output. For example, if the ignore-efm-state command is configured on a port that is exhibiting a protocol error, that protocol error does not affect the port state or operational state and there is no Reason Down code in the output. If the ignore-efm-state command is disabled on a port with an existing EFM OAM protocol error, the port will transition to port state link up, operational state down with reason code efmOamDown.
If the port is a member of a microwave link, the ignore-efm-state command must be enabled before the EFM OAM protocol can be activated. This restriction is required because EFM OAM is not compatible with microwave links.
CRC monitoring
Cyclic redundancy check (CRC) errors typically occur when Ethernet links are compromised by optical fiber degradation, weak optical signals, bad optical connections, or problems on a third-party networking element. In addition, higher-layer OAM mechanisms such as EFM and BFD may not detect intermittent errors or trigger the appropriate alarms and switchovers, because intermittent errors do not affect the continuous operation of those OAM functions.
CRC error monitoring on Ethernet ports allows degraded links to be alarmed or failed in order to detect network infrastructure issues, trigger necessary maintenance, or switch to redundant paths. This is achieved through monitoring ingress error counts and comparing them to the configured error thresholds. The rate at which CRC errors are detected on a port can trigger two alarm states. Crossing the configured signal degrade (SD) threshold (sd-threshold) causes an event to be logged and an alarm to be raised, which alerts the operator to a potential issue on a link. Crossing the configured signal failure (SF) threshold (sf-threshold) causes the affected port to enter the operationally down state, and causes an event to be logged and an alarm to be raised.
The CRC error rates are calculated as M✕10E-N, which is the ratio of errored frames allowed to total frames received. The operator can configure both the threshold exponent (N) and the multiplier (M). If the multiplier is not configured, the default multiplier (1) is used.
For example, setting the SD threshold to 3 results in a signal degrade error rate threshold of 1✕10E-3 (1 errored frame per 1000 frames). Changing the configuration to an SD threshold of 3 and a multiplier of 5 results in a signal degrade error rate threshold of 5✕10E-3 (5 errored frames per 1000 frames). The signal degrade error rate threshold must be lower than the signal failure error rate threshold because it is used to notify the operator that the port is operating in a degraded but not failed condition.
A sliding window (window-size) is used to calculate a statistical average of CRC error statistics collected every second. Each second, the oldest statistics are dropped from the calculation. For example, if the default 10-s sliding window is configured, at the 11th second the oldest second of statistical data is dropped and the 11th second is included. This sliding average is compared against the configured SD and SF thresholds to determine if the error rate over the window exceeds one or both of the thresholds, which will generate an alarm and log event.
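The following Python sketch illustrates the sliding-window comparison described above. The sampling structure and function names are assumptions; it is not the monitoring implementation itself.

```python
from collections import deque

def crossed(samples: deque, m: int, n: int) -> bool:
    """True if the windowed error rate is at or above the M x 10^-N threshold."""
    errors = sum(e for e, _ in samples)
    total = sum(t for _, t in samples)
    return total > 0 and errors / total >= m * 10 ** -n

window = deque(maxlen=10)             # default 10-second window; the oldest second falls out
window.append((6, 1000))              # one second of statistics: 6 errored frames in 1000
sd_alarm = crossed(window, m=5, n=3)  # SD threshold 5 x 10^-3 -> True for this sample
sf_alarm = crossed(window, m=1, n=2)  # SF threshold 1 x 10^-2 -> False for this sample
```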
When a port enters the failed condition as a result of crossing an SF threshold, the port is not automatically returned to service. Because the port is operationally down without a physical link, error monitoring stops. The operator can enable the port by using the shutdown and no shutdown port commands or by using other port transition functions such as clearing the MDA (clear mda command) or removing the cable. A port that is down due to crossing an SF threshold can also be re-enabled by changing or disabling the SD threshold. The SD state is self-clearing, and it clears if the error rate drops below 1/10th of the configured SD rate.
Remote loopback
EFM OAM provides a link-layer frame loopback mode, which can be controlled remotely.
To initiate a remote loopback, the local EFM OAM client sends a loopback control OAMPDU by enabling the OAM remote loopback command. After receiving the loopback control OAMPDU, the remote OAM client puts the remote port into local loopback mode.
OAMPDUs are slow protocol frames that contain appropriate control and status information used to monitor, test, and troubleshoot OAM-enabled links.
To exit a remote loopback, the local EFM OAM client sends a loopback control OAMPDU by disabling the OAM remote loopback command. After receiving the loopback control OAMPDU, the remote OAM client puts the port back into normal forwarding mode.
When a port is in local loopback mode (the far end requested an Ethernet OAM loopback), any packets received on the port will be looped back, except for EFM OAMPDUs. No data will be transmitted from the node; only data that is received on the node will be sent back out.
When the node is in remote loopback mode, local data from the CSM is transmitted, but any data received on the node is dropped, except for EFM OAMPDUs.
Remote loopbacks should be used with caution; if dynamic signaling and routing protocols are used, all services go down when a remote loopback is initiated. If only static signaling and routing is used, the services stay up. On the 7705 SAR, the Ethernet port can be configured to accept or reject the remote-loopback command.
802.3ah OAMPDU tunneling and termination for Epipe service
Customers who subscribe to Epipe service may have customer equipment running 802.3ah at both ends. The 7705 SAR can be configured to tunnel EFM OAMPDUs received from a customer device to the other end through the existing network using MPLS or GRE, or to terminate received OAMPDUs at a network or an access Ethernet port.
While tunneling offers the ability to terminate and process the OAM messages at the head-end, termination on the first access port at the cell site can be used to detect immediate failures or can be used to detect port failures in a timelier manner. The user can choose either tunneling or termination, but not both at the same time.
In the following figure, scenario 1 shows the termination of received EFM OAMPDUs from a customer device on an access port, while scenario 2 shows the same thing except for a network port. Scenario 3 shows tunneling of EFM OAMPDUs through the associated Ethernet PW. To configure termination (scenario 1), use the config>port>ethernet>efm-oam>no shutdown command.
Dying gasp
Dying gasp is used to notify the far end that EFM-OAM is disabled or shut down on the local port. The dying gasp flag is set on the OAMPDUs that are sent to the peer. The far end can then take immediate action and inform upper layers that EFM-OAM is down on the port.
When a dying gasp is received from a peer, the node logs the event and generates an SNMP trap to notify the operator.
Ethernet loopbacks
The following loopbacks are supported on Ethernet ports:
timed network line loopback
timed and untimed access line loopbacks
timed and untimed access internal loopbacks
persistent access line loopback
persistent access internal loopback
MAC address swapping
CFM loopback on network and access ports
CFM loopback on ring ports and v-port
Line and internal Ethernet loopbacks
A line loopback loops frames received on the corresponding port back toward the transmit direction. Line loopbacks are supported on ports configured for access or network mode.
Similarly, a line loopback with MAC addressing loops frames received on the corresponding port back toward the transmit direction, and swaps the source and destination MAC addresses before transmission. See MAC swapping for more information.
An internal loopback loops frames from the local router back to the framer. This is usually referred to as an equipment loopback. The transmit signal is looped back and received by the interface. Internal loopbacks are supported on ports configured in access mode.
If a loopback is enabled on a port, the port mode cannot be changed until the loopback has been disabled.
A port can support only one loopback at a time. If a loopback exists on a port, it must be disabled or the timer must expire before another loopback can be configured on the same port. EFM-OAM cannot be enabled on a port that has an Ethernet loopback enabled on it. Similarly, an Ethernet loopback cannot be enabled on a port that has EFM-OAM enabled on it.
When an internal loopback is enabled on a port, autonegotiation is turned off silently. This is to allow an internal loopback when the operational status of a port is down. Any user modification to autonegotiation on a port configured with an internal Ethernet loopback will not take effect until the loopback is disabled.
The loopback timer can be configured from 30 s to 86400 s. All non-zero timed loopbacks are turned off automatically under the following conditions: an adapter card reset, an activity switch, or timer expiry. Line or internal loopback timers can also be configured as a latched loopback by setting the timer to 0 s, or as a persistent loopback with the persistent keyword. Latched and persistent loopbacks are enabled indefinitely until turned off by the user. Latched loopbacks survive adapter card resets and activity switches, but are lost if there is a system restart. Persistent loopbacks survive adapter card resets and activity switches and can survive a system restart if the admin save or admin save detail command was executed before the restart. Latched loopbacks (untimed) and persistent loopbacks can be enabled only on Ethernet access ports.
Persistent loopbacks are the only Ethernet loopbacks saved to the database by the admin save and admin save detail commands.
An Ethernet port loopback may interact with other features. See Interaction of Ethernet port loopback with other features for more information.
MAC swapping
Typically, an Ethernet port loopback only echoes back received frames; that is, the received source and destination MAC addresses are not swapped. However, not all Ethernet equipment supports echo mode, because the original sender of the frame must then accept frames whose destination MAC address is its own port MAC address.
The MAC swapping feature on the 7705 SAR is an optional feature that will swap the received destination MAC address with the source MAC address when an Ethernet port is in loopback mode. After the swap, the FCS is recalculated to ensure the validity of the Ethernet frame and to ensure that the frame is not dropped by the original sender due to a CRC error.
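The following Python sketch illustrates the swap-and-recalculate operation. It assumes an untagged frame with the FCS carried as the last four bytes and uses the standard IEEE CRC-32 (as computed by zlib.crc32), so it is an illustration rather than the 7705 SAR implementation.

```python
import zlib

def swap_and_refcs(frame_with_fcs: bytes) -> bytes:
    """Swap destination and source MAC addresses and recompute the Ethernet FCS (sketch)."""
    body = frame_with_fcs[:-4]                       # strip the received 4-byte FCS
    dst, src, rest = body[:6], body[6:12], body[12:]
    swapped = src + dst + rest                       # return the frame toward its sender
    fcs = zlib.crc32(swapped) & 0xFFFFFFFF
    return swapped + fcs.to_bytes(4, "little")       # FCS appended least significant byte first (assumed wire order)
```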
Interaction of Ethernet port loopback with other features
EFM OAM and line loopback are mutually exclusive. If one of these functions is enabled, it must be disabled before the other can be used.
However, a line loopback takes precedence over the dot1x behavior. That is, if the port is already dot1x-authenticated, it remains authenticated; if it is not, EAP authentication fails.
Ethernet port-layer line loopback and Ethernet port-layer internal loopback can be enabled on the same port with the down-when-looped feature. EFM OAM cannot be enabled on the same port with the down-when-looped feature. For more information, see Ethernet port down-when-looped.
CFM loopbacks for OAM on Ethernet ports
CFM loopback overview
Connectivity fault management (CFM) loopback support for loopback messages (LBMs) on Ethernet ports allows operators to run standards-based Layer 1 and Layer 2 OAM tests on ports receiving unlabeled packets.
The 7705 SAR supports CFM MEPs associated with different endpoints (that is, Up and Down SAP MEPs, Up and Down spoke SDP MEPs, Up and Down mesh SDP MEPs, and network interface facility Down MEPs). In addition, for traffic received from an uplink (network ingress), the 7705 SAR supports CFM LBM for both labeled and unlabeled packets. CFM loopbacks are applied to the Ethernet port.
See the 7705 SAR OAM and Diagnostics Guide, ‟Ethernet OAM Capabilities”, for information about CFM MEPs.
The following figure shows an application where an operator leases facilities from a transport network provider in order to transport traffic from a cell site to their MTSO. The operator leases a certain amount of bandwidth between the two endpoints (the cell site and the MTSO) from the transport provider, who offers Ethernet Virtual Private Line (EVPL) or Ethernet Private Line (EPL) PTP service. Before the operator offers services on the leased bandwidth, the operator runs OAM tests to verify the SLA. Typically, the transport provider (MEN provider) requires that the OAM tests be run in the direction of (toward) the first Ethernet port that is connected to the transport network. This is done to eliminate the potential effect of queuing, delay, and jitter that may be introduced by an SDP or SAP.
The figure shows an Ethernet verifier at the MTSO that is directly connected to the transport network (in front of the 7750 SR). Therefore, the Ethernet OAM frames are not label-encapsulated. Because Ethernet verifiers do not support label operations and the transport provider mandates that OAM tests be run between the two hand-off Ethernet ports, the verifier cannot be relocated behind the 7750 SR node at the MTSO. Therefore, CFM loopback frames received are not MPLS-encapsulated, but are simple Ethernet frames where the type is set to CFM (dot1ag or Y.1731).
CFM loopback mechanics
The following are important facts to consider when working with CFM loopbacks:
CFM loopbacks can be enabled on a per-port basis, and:
the port can be in access or network mode
when enabled on a port, all received LBM frames are processed, regardless of the VLAN and the service that the VLAN or SAP is bound to
there is no associated MEP creation involved with this feature; therefore, no domain, association, or similar checks are performed on the received frame
upon finding a destination address MAC match, the LBM frame is sent to the CFM process
CFM loopback support on a physical ring port on the 2-port 10GigE (Ethernet) Adapter card or 2-port 10GigE (Ethernet) module differs from other Ethernet ports. For these ports, cfm-loopback is configured, optionally, using dot1p and match-vlan to create a list of up to 16 VLANs. The null VLAN is always applied. The CFM loopback message will be processed if it does not contain a VLAN header or if it contains a VLAN header with a VLAN ID that matches one in the configured match-vlan list.
received LBM frames undergo no queuing or scheduling in the ingress direction
at egress, loopback reply (LBR) frames are stored in their own queue; that is, a separate new queue is added exclusively for LBR frames
users can configure the way a response frame is treated among other user traffic stored in network queues; the configuration options are high-priority, low-priority, or dot1p, where dot1p applies only to physical ring ports
for network egress or access egress, where 4-priority scheduling is enabled:
high-priority – either cir = port_speed, which applies to all frames that are scheduled via an expedited in-profile scheduler, or RR for all other (network egress queue) frames that reside in expedited queues and are in an in-profile state
low-priority – either cir = 0, pir = port_speed, which applies to all frames that are scheduled via a best effort out-of-profile scheduler, or RR for all other frames that reside in best-effort queues and are in an out-of-profile state
for the 8-port Gigabit Ethernet Adapter card, the 10-port 1GigE/1-port 10GigE X-Adapter card, and the v-port on the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module, for network egress, where 16-priority scheduling is enabled:
high-priority – has higher priority than any user frames
low-priority – has lower priority than any user frames
for the physical ring ports on the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module, which can only operate as network egress, the priority of the LBR frame is derived from the dot1p setting of the received LBM frame. Based on the assigned ring-type network queue policy, dot1p-to-queue mapping is handled using the same mapping rule that applies to all other user frames.
the above queue parameters and scheduler mappings are all preconfigured and cannot be altered. The desired QoS treatment is selected by enabling the CFM loopback and specifying high-priority, low-priority, or dot1p.
Ethernet port down-when-looped
Newly provisioned circuits are often put into loopback with a physical loopback cable for testing and to ensure the ports meet the SLA. If loopbacks are not cleared, or physically removed, by the operator when the testing is completed, they can adversely affect the performance of all other SDPs and customer interfaces (SAPs). This is especially problematic for point-to-multipoint services such as VPLS, since Ethernet does not support TTL, which is essential in terminating loops.
The down-when-looped feature is used on the 7705 SAR to detect loops within the network and to ensure continued operation of other ports. When the down-when-looped feature is activated, a keepalive loop PDU is transmitted periodically toward the network. The Ethernet port then listens for returning keepalive loop PDUs. In unicast mode, a loop is detected if any of the received PDUs have an Ethertype value of 0x9000, which indicates a loopback (Configuration Test Protocol), and the source (SRC) and destination (DST) MAC addresses are identical to the MAC address of the Ethernet port. In broadcast mode, a loop is detected if any of the received PDUs have an Ethertype value of 0x9000, the SRC MAC address matches the MAC address of the Ethernet port, and the DST MAC address matches the broadcast MAC address. When a loop is detected, the Ethernet port is immediately brought down.
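The following Python sketch captures only the match conditions described above; frame parsing, keepalive transmission, and the action of bringing the port down are not modeled.

```python
LOOPBACK_ETHERTYPE = 0x9000            # Configuration Test Protocol (loopback)
BROADCAST_MAC = bytes([0xFF] * 6)

def loop_detected(dst_mac: bytes, src_mac: bytes, ethertype: int,
                  port_mac: bytes, broadcast_mode: bool) -> bool:
    if ethertype != LOOPBACK_ETHERTYPE:
        return False
    if broadcast_mode:
        return src_mac == port_mac and dst_mac == BROADCAST_MAC
    return src_mac == port_mac and dst_mac == port_mac   # unicast mode
```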
Ethernet port-layer line loopbacks and the down-when-looped feature can be enabled on the same port. The keepalive loop PDU is still transmitted; however, if the port receives its own keepalive loop PDU, the keepalive PDU is extracted and processed to avoid infinite looping.
Ethernet port-layer internal loopbacks and the down-when-looped feature can also be enabled on the same port. When the keepalive PDU is internally looped back, it is extracted and processed as usual. If the SRC MAC address matches the port MAC address, the port is disabled due to detection of a loop. If the SRC MAC address is a broadcast MAC address because the swap-src-dst-mac option in the loopback command is enabled, then there is no change to port status and it remains operationally up.
EFM OAM and down-when-looped cannot be enabled on the same port.
Ethernet ring (adapter card and module)
The 2-port 10GigE (Ethernet) Adapter card can be installed in a 7705 SAR-8 Shelf V2 or 7705 SAR-18 chassis and the 2-port 10GigE (Ethernet) module can be installed in a 7705 SAR-M to connect to and from access rings carrying a high concentration of traffic. For the maximum number of cards or modules supported per chassis, see Maximum number of cards/modules supported in each chassis.
A number of 7705 SAR nodes in a ring typically aggregate traffic from customer sites, map the traffic to a service, and connect to an SR node. The SR node acts as a gateway point out of the ring. A 10GigE ring allows for higher bandwidth services and aggregation on a per-7705 SAR basis. The 2-port 10GigE (Ethernet) Adapter card/module increases the capacity of backhaul networks by providing 10GigE support on the aggregation nodes, thus increasing the port capacity.
In a deployment of a 2-port 10GigE (Ethernet) Adapter card/module, each 7705 SAR node in the ring is connected to the east and west side of the ring over two different 10GigE ports. If 10GigE is the main uplink, the following are required for redundancy:
two cards per 7705 SAR-8 Shelf V2
two cards per 7705 SAR-18
two 7705 SAR-M nodes, each equipped with 2-port 10GigE (Ethernet) module
With two cards per 7705 SAR-8 Shelf V2 or 7705 SAR-18 node, for example, east and west links of the ring can be terminated on two different adapter cards, reducing the impact of potential hardware failure.
The physical ports on the 2-port 10GigE (Ethernet) Adapter card/module boot up in network mode and this network setting cannot be disabled or altered. At boot-up, the MAC address of the virtual port (v-port) is programmed automatically for efficiency and security reasons.
There is native built-in Ethernet bridging among the ring ports and the v-port. Bridging destinations for traffic received from one of the ring ports include the 10GigE ring port and the network interfaces on the v-port. Bridging destinations for traffic received from the v-port include one or both of the 10GigE ring ports.
With bridging, broadcast and multicast frames are forwarded over all ports except the received one. Unknown frames are forwarded to both 10GigE ports if received from the v-port or forwarded to the other 10GigE port only if received from one of the 10GigE ports (the local v-port MAC address is always programmed).
The bridge traffic of the physical 10GigE ports is based on learned and programmed MAC addresses.
MTU configuration guidelines
MTU configuration overview
Because of the service overhead (that is, pseudowire/VLL, MPLS tunnel, dot1q/qinq, and dot1p overhead), it is crucial that configurable frame sizes be supported for end-to-end service delivery.
Observe the following general rules when planning your service and physical maximum transmission unit (MTU) configurations:
The 7705 SAR must contend with MTU limitations at many service points. The physical (access and network) port, service, and SDP MTU values must be individually defined. MTU points on the 7705 SAR identifies the various MTU points on the 7705 SAR.
The ports that will be designated as network ports intended to carry service traffic must be identified.
MTU values should not be modified frequently.
MTU values must conform to both of the following conditions:
the service MTU must be less than or equal to the SDP path MTU
the service MTU must be less than or equal to the access port (SAP) MTU
When the allow-fragmentation command is enabled on an SDP, the current MTU algorithm is overwritten with the configured path MTU. The administrative MTU and operational MTU both show the specified MTU value. If the path MTU is not configured or available, the operational MTU is set to 2000 bytes, and the administrative MTU displays a value of 0. When allow-fragmentation is disabled, the operational MTU reverts to the previous value.
For more information, see the ‟MTU Settings” section in the 7705 SAR Services Guide. To configure various MTU points, use the following commands:
port MTUs are set with the mtu command, under the config>port context, where the port type can be Ethernet, TDM, serial, or SONET/SDH
service MTUs are set in the appropriate config>service context
path MTUs are set with the path-mtu command under the config>service>sdp context
Frame size configuration is supported for an Ethernet port configured as an access or a network port.
For an Ethernet adapter card that does not support jumbo frames, all frames received at an ingress network or access port are policed against 1576 bytes (1572 + 4 bytes of FCS), regardless of the port MTU. Any frames longer than 1576 bytes are discarded and the ‟Too Long Frame” and ‟Error Stats” counters in the port statistics display are incremented. See Jumbo frames for more information.
At network egress, Ethernet frames are policed against the configured port MTU. If the frame exceeds the configured port MTU, the ‟Interface Out Discards” counter in the port statistics is incremented.
When the network group encryption (NGE) feature is used, additional bytes due to NGE packet overhead must be considered. See the ‟NGE Packet Overhead and MTU Considerations” section in the 7705 SAR Services Guide for more information.
IP fragmentation
IP fragmentation is used to fragment a packet that is larger than the MTU of the egress interface, so that the packet can be transported over that interface.
For IPv4, the router fragments or discards the IP packets based on whether the DF (Do not fragment) bit is set in the IP header. If the packet that exceeds the MTU cannot be fragmented, the packet is discarded and an ICMP message ‟Fragmentation Needed and Don’t Fragment was Set” is sent back to the source IP address.
For IPv6, the router cannot fragment the packet so must discard it. An ICMP message ‟Packet too big” is sent back to the source node.
As a source of self-generated traffic, the 7705 SAR can perform packet fragmentation.
Fragmentation can be enabled for GRE tunnels. See the ‟GRE Fragmentation” section in the 7705 SAR Services Guide for more information.
Jumbo frames
Jumbo frames are supported on all Ethernet ports.
The maximum MTU size for a jumbo frame on the 7705 SAR is 9732 bytes. The maximum MTU for a jumbo frame may vary depending on the Ethernet encapsulation type, as shown in the following table. The calculations of the other MTU values (service MTU, path MTU, and so on) are based on the port MTU. The values in the table are also maximum receive unit (MRU) values. MTU values are user-configured values. MRU values are the maximum MTU value that a user can configure on an adapter card that supports jumbo frames.
| Encapsulation | Maximum MTU (bytes) |
|---|---|
| Null | 9724 |
| Dot1q | 9728 |
| QinQ | 9732 |
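The 4-byte steps between these maxima correspond to the VLAN tags that dot1q and qinq add on top of the 14-byte Ethernet header. The sketch below derives the values from an assumed 9710-byte maximum payload (9724 minus the 14-byte header); the payload figure is arithmetic inferred from the table, not a value stated in this guide.

```python
ETH_HEADER = 14   # destination MAC + source MAC + Ethertype
VLAN_TAG = 4      # one 802.1Q tag

def max_jumbo_mtu(encap: str, max_payload: int = 9710) -> int:
    """Jumbo port MTU per encapsulation (sketch; the payload size is an inferred assumption)."""
    tags = {"null": 0, "dot1q": 1, "qinq": 2}[encap]
    return max_payload + ETH_HEADER + tags * VLAN_TAG

assert max_jumbo_mtu("null") == 9724
assert max_jumbo_mtu("dot1q") == 9728
assert max_jumbo_mtu("qinq") == 9732
```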
For an Ethernet adapter card, all frames received at an ingress network or access port are policed against the MRU for the ingress adapter card, regardless of the configured MTU. Any frames larger than the MRU are discarded and the ‟Too Long Frame” and ‟Error Stats” counters in the port statistics display are incremented.
At network egress, frames are checked against the configured port MTU. If the frame exceeds the configured port MTU and the DF bit is set, then the ‟MTU Exceeded” discard counter will be incremented on the ingress IP interface statistics display, or on the MPLS interface statistics display if the packet is an MPLS packet.
For example, on adapter cards that do not support an MTU greater than 2106 bytes, fragmentation is not supported for frames greater than the maximum supported MTU for that card (that is, 2106 bytes). If the maximum supported MTU is exceeded, the following occurs:
An appropriate ICMP reply message (Destination Unreachable) is generated by the 7705 SAR. The router ensures that the ICMP generated message cannot be used as a DOS attack (that is, the router paces the ICMP message).
The appropriate statistics are incremented.
Jumbo frames offer better utilization of an Ethernet link because as more payload is packed into an Ethernet frame of constant size, the ratio of overhead to payload is minimized.
From the traffic management perspective, large payloads may cause long delays, so a balance between link utilization and delay must be found. For example, for ATM VLLs, concatenating a large number of ATM cells when the MTU is set to a very high value could generate a 9-kB ATM VLL frame. Transmitting a frame that large would take more than 23 ms on a 3-Mb/s policed Ethernet uplink.
Behavior of adapter cards not supporting jumbo frames
The 7705 SAR-8 Shelf V2 and the 7705 SAR-18 do not support ingress fragmentation, and this is true for jumbo frames. Therefore, any jumbo frame packet that gets routed to an adapter card that does not have Ethernet ports and therefore does not support jumbo frame MTU (for example, a 16-port T1/E1 ASAP Adapter card or a 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card) is discarded if the packet size is greater than the TDM port’s maximum supported MTU. If the maximum supported MTU is exceeded, the following occurs:
An ICMP reply message (Destination Unreachable) is generated by the 7705 SAR. The router ensures that the ICMP-generated message cannot be used as a DOS attack (that is, the router paces the ICMP message).
The port statistics show IP or MPLS Interface MTU discards, for IP or MPLS traffic, respectively. MTU Exceeded Packets and Bytes counters exist separately for IPv4/6 and MPLS under the IP interface hierarchy for all discarded packets where ICMP Error messages are not generated.
For example, if a packet arrives on an 8-port Gigabit Ethernet Adapter card and is to be forwarded to a 16-port T1/E1 ASAP Adapter card with a maximum port MTU of 2090 bytes and a channel group configured for PPP with the port MTU of 1000 bytes, the following may occur:
If the arriving packet is 800 bytes, forward the packet.
If the arriving packet is 1400 bytes, forward the packet, which will be fragmented by the egress adapter card.
If the arriving packet is fragmented and the fragments are 800 bytes, forward the packet.
If the arriving packet is 2500 bytes, send an ICMP error message (because the egress adapter card has a maximum port MTU of 2090 bytes).
If the arriving packet is fragmented and the fragment size is 2500 bytes, there is an ICMP error.
Jumbo frame behavior on the fixed platforms
The 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-M, 7705 SAR-Wx, and 7705 SAR-X are able to fragment packets between Ethernet ports (which support jumbo frames) and TDM ports (which do not support jumbo frames). In this case, when a packet arrives from a port that supports jumbo frames and is routed to a port that does not support jumbo frames (that is, a TDM port), the packet is fragmented to the port MTU of the TDM port.
For example, if a packet arrives on a 7705 SAR-A and is to be forwarded to a TDM port that has a maximum port MTU of 2090 bytes and a channel group configured for PPP with the port MTU of 1000 bytes (PPP port MTU), the following may occur:
If the arriving packet is 800 bytes, forward the packet.
If the arriving packet is 1400 bytes and the DF bit is 0, forward the packet, which will be fragmented to the PPP port MTU size.
If the arriving packet is 2500 bytes and the DF bit is 0, forward the packet, which will be fragmented to the PPP port MTU size.
Multicast support for jumbo frames
Jumbo frames are supported in a multicast configuration as long as all adapter cards in the multicast group support jumbo frames. If an adapter card that does not support jumbo frames is present in the multicast group, the replicated multicast jumbo frame packet will be discarded by the fabric because of an MRU error of the fabric port (Rx).
The multicast group replicates the jumbo frame for all adapter cards, regardless of whether they support jumbo frames, only when forwarding the packet through the fabric. The replicated jumbo frame packet is discarded on adapter cards that do not support jumbo frames.
PMC jumbo frame support
For the Packet Microwave Adapter card (PMC), ensure that the microwave hardware installed with the card supports the corresponding jumbo frame MTU. If the microwave hardware does not support the jumbo frame MTU, it is recommended that the MTU of the PMC port be set to the maximum frame size that is supported by the microwave hardware.
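The following sketch shows one way the port MTU could be lowered on a PMC port; the port ID and MTU value are placeholders, and the exact syntax should be verified against the command reference.
    configure port 1/5/1 ethernet
        mtu 4000
    exit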
Default port MTU values
The following table displays the default and maximum port MTU values that are dependent upon the port type, mode, and encapsulation type.
| Port type | Mode | Encap type | Default (bytes) | Max MTU (bytes) |
|---|---|---|---|---|
| 10/100 Ethernet¹ | Access/Network | null | 1514 | 9724² |
| 10/100 Ethernet¹ | Access/Network | dot1q | 1518 | 9728² |
| 10/100 Ethernet¹ | Access/Network | qinq³ | 1522 (access only) | 9732 (access only)² |
| GigE SFP¹ and 10-GigE SFP+ | Access/Network | null | 1514 (access), 1572 (network) | 9724 (access and network) |
| GigE SFP¹ and 10-GigE SFP+ | Access/Network | dot1q | 1518 (access), 1572 (network) | 9728 (access and network) |
| GigE SFP¹ and 10-GigE SFP+ | Access/Network | qinq³ | 1522 (access only) | 9732 (access only) |
| Ring port | Network | null | 9728 (fixed) | 9728 (fixed) |
| v-port (on Ring adapter card) | Network | null | 1572 | 9724 |
| v-port (on Ring adapter card) | Network | dot1q | 1572 | 9728 |
| TDM (PW) | Access | cem | 1514 | 1514 |
| TDM (ATM PW) | Access | atm | 1524 | 1524 |
| TDM (FR PW) | Access | frame-relay | 1514 | 2090 |
| TDM (HDLC PW) | Access | hdlc | 1514 | 2090 |
| TDM (IW PW) | Access | cisco-hdlc | 1514 | 2090 |
| TDM (PPP/MLPPP) | Access | ipcp | 1502 | 2090 |
| TDM (PPP/MLPPP) | Network | ppp-auto | 1572 | 2090 |
| Serial V.35 or X.21 (FR PW)⁴ | Access | frame-relay | 1514 | 2090 |
| Serial V.35 or X.21 (HDLC PW)⁴ | Access | hdlc | 1514 | 2090 |
| Serial V.35 or X.21 (IW PW)⁴ | Access | frame-relay | 1514 | 2090 |
| Serial V.35 or X.21 (IW PW)⁴ | Access | ipcp | 1502 | 2090 |
| Serial V.35 or X.21 (IW PW)⁴ | Access | cisco-hdlc | 1514 | 2090 |
| SONET/SDH | Access | atm | 1524 | 1524 |
| SONET/SDH | Network | ppp-auto | 1572 | 2090 |
Notes:
1. The maximum MTU value is supported only on cards that have buffer chaining enabled.
2. On the Packet Microwave Adapter card, the MWA ports support 4 bytes less than the Ethernet ports. MWA ports support a maximum MTU of 9720 bytes (null) or 9724 bytes (dot1q). MWA ports do not support QinQ.
3. QinQ is supported only on access ports.
4. For X.21 serial ports at super-rate speeds.
For more information, see the ‟MTU Settings” section in the 7705 SAR Services Guide.
LAG
LAG overview
The 7705 SAR supports link aggregation groups (LAGs) based on the IEEE 802.1ax standard (formerly 802.3ad). Link aggregation provides:
increased bandwidth by combining multiple links into one logical link (in active/active mode)
load sharing by distributing traffic across multiple links (in active/active mode)
redundancy and increased resiliency between devices by having a standby link to act as backup if the active link fails (in active/standby mode)
In the 7705 SAR implementation, all links must operate at the same speed.
Packet sequencing must be maintained for any given session. The hashing algorithm deployed by Nokia routers is based on the type of traffic transported to ensure that all traffic in a flow remains in sequence while providing effective load sharing across the links in the LAG. See LAG and ECMP hashing for more information.
LAGs can be either statically configured or formed dynamically with the Link Aggregation Control Protocol (LACP). See LACP and active/standby operation for information about LACP.
All Ethernet-based supported services can benefit from LAG, including:
network interfaces and SDPs
spoke SDPs, mesh SDPs, and EVPN endpoints
IES and VPRN interfaces and SAPs
Ethernet and IP pseudowire SAPs
routed VPLS (r-VPLS) SAPs
LAGs are supported on access, network, and hybrid ports. A LAG can be in active/active mode or in active/standby mode for access, network, or hybrid ports. Active/standby mode is a subset of active/active mode if subgroups are enabled.
LAGs are supported on access ports on the following:
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card (10-port GigE mode)
6-port Ethernet 10Gbps Adapter card
4-port SAR-H Fast Ethernet module
6-port SAR-M Ethernet module
Packet Microwave Adapter card (for ports not in a microwave link)
all fixed platforms
LAGs are supported on network ports on the following:
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card
6-port Ethernet 10Gbps Adapter card
4-port SAR-H Fast Ethernet module
6-port SAR-M Ethernet module
Packet Microwave Adapter card (for ports not in a microwave link and ports in a 1+0 network microwave link; LAGs are not supported on ports in a 1+1 HSB microwave link)
all fixed platforms
LAGs are supported on hybrid ports on the following:
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card (10-port GigE mode)
6-port Ethernet 10Gbps Adapter card
6-port SAR-M Ethernet module
Packet Microwave Adapter card (for ports not in a microwave link)
all fixed platforms
On access ports, a LAG supports active/active and active/standby operation. For active/standby operation the links must be in different subgroups. Links can be on the same platform or adapter card/module or distributed over multiple components. Load sharing is supported among the active links in a LAG group.
On network ports, a LAG supports active/active and active/standby operation. For active/standby operation the links must be in different subgroups. Links can be on the same platform or adapter card/module or distributed over multiple components. Load sharing is supported among the active links in a LAG group. Any tunnel type (for example, IP, GRE, or MPLS) transporting any service type, any IP traffic, or any labeled traffic (LER, LSR) can use the LAG load-sharing, active/active, and active/standby functionality.
LAGs are supported on network 1+0 microwave links. Ports that are in a microwave link can be added to the same LAG as ports that are not in a microwave link. Ports belonging to a microwave link must have limited autonegotiation enabled before the link can be added to a LAG.
A LAG that contains ports in a microwave link must have LACP enabled for active/standby operation. Static LAG configuration (without LACP) is not supported for active/standby LAGs with microwave-enabled ports.
On hybrid ports, a LAG supports active/active and active/standby operation. For active/standby operation the links must be in different subgroups. Links can be on the same platform or adapter card/module or distributed over multiple components. Load sharing is supported among the active links in a LAG group.
A LAG group with assigned members can be converted from one mode to another as long as the number of member ports is supported in the new mode, all of the member ports support the new mode, none of the members belong to a microwave link, and the LAG group is not associated with a network interface or a SAP.
A subgroup is a group of links within a LAG. On access, network, or hybrid ports, a LAG can have a maximum of four subgroups, and a subgroup can have up to the maximum number of links supported on the LAG. The LAG is active/active if there is only one subgroup and active/standby if there is more than one subgroup.
When configuring a LAG, most port features (port commands) can only be configured on the primary member port. The configuration, or any change to the configuration, is automatically propagated to any remaining ports within the same LAG. Operators cannot modify the configurations on non-primary ports. For more information, see Configuring LAG parameters.
If the LAG has one member link on a second-generation (Gen-2) Ethernet adapter card and the other link on a third-generation (Gen-3) Ethernet adapter card or platform, a mix-and-match scenario exists for traffic management on the LAG SAP. In this case, all QoS parameters for the LAG SAP are configured but only those parameters applicable to the active member link are used. See LAG support on mixed-generation hardware for more information.
Configuring a multiservice site (MSS) aggregate rate can restrict the use of LAG SAPs. For more information, see the ‟MSS and LAG interaction on the 7705 SAR-8 Shelf V2 and 7705 SAR-18” section in the 7705 SAR Quality of Service Guide.
LACP and active/standby operation
On access, network, and hybrid ports, where multiple links in a LAG can be active at the same time, normal operation is that all non-failing links are active and traffic is load-balanced across all the active links. In some cases, however, it is desirable to have only some of the links active and the other links kept in standby mode. The Link Aggregation Control Protocol (LACP) is used to make the selection of the active links in a LAG predictable and compatible with any vendor equipment. The mechanism is based on the IEEE 802.1ax standard so that interoperability is ensured.
LACP is disabled by default and therefore must be enabled on the LAG if required. LACP can be used in either active mode or passive mode. The mode must be compatible with the mode configured on the connected CE devices for proper operation. For example, if the LAG on the 7705 SAR end is configured to be active, the CE end must be passive.
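The following sketch shows one way an active-mode LACP LAG might be configured; the LAG ID, port IDs, and exact keyword placement are illustrative assumptions and should be checked against the command reference.
    configure lag 1
        port 1/2/1
        port 1/3/1
        lacp active
        no shutdown
    exit
The peer device would then typically run LACP in active or passive mode on its corresponding ports.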
The following figure shows the interconnection between a DSLAM and a LAG aggregation node. In this configuration, LAG is used to protect against hardware failure. If the active link goes down, the link on standby takes over (see LAG on access failure switchover). The links are distributed across two different adapter cards to eliminate a single point of failure.
LACP handles active/standby operation of LAG subgroups as follows:
Each link in a LAG is assigned to a subgroup. On access, network, and hybrid ports, a LAG can have a maximum of four subgroups and a subgroup can have up to the maximum number of links supported for the LAG. The selection algorithm implemented by LACP ensures that only one subgroup in a LAG is selected as active.
The algorithm selects the active subgroup as follows:
Initially, the subgroup containing the highest-priority (lowest value) eligible link is selected as active. If multiple subgroups satisfy the selection criteria, the subgroup that is currently active remains active.
An eligible member is a link that can potentially become active. This means it is operationally up, and if the slave-to-partner flag is set, the remote system did not disable its use (by signaling standby).
The selection algorithm works in a revertive mode (for details, see the IEEE 802.1ax standard). This means that every time the configuration or status of a subgroup changes, the selection algorithm reruns. If multiple subgroups satisfy the selection criteria, the subgroup currently active remains active. This behavior does not apply if the selection-criteria hold-time parameter is set to infinite.
Log events and traps are generated at both the LAG and link level to indicate any LACP changes. See the TIMETRA-LAG-MIB for details.
QoS adaptation for LAG on access
QoS on access port LAGs (access ports and hybrid ports in access mode) is handled differently from QoS on network port LAGs (see QoS for LAG on network). Based on the configured hashing, traffic on a SAP can be sent over multiple LAG ports or can use a single port of a LAG. There are two user-selectable adaptive QoS modes (distribute and link) that allow the user to determine how the configured QoS rate is distributed to each of the active LAG port SAP queue schedulers, SAP schedulers (H-QoS), and MSS schedulers. These modes are:
adapt-qos distribute
For SAP queue schedulers, SAP schedulers (H-QoS), and SAP egress MSS schedulers, distribute mode divides the QoS rates (as specified by the SLA) equally among the active LAG links (ports). For example, if a SAP queue PIR and CIR are configured on an active/active LAG SAP to be 200 Mb/s and 100 Mb/s respectively, and there are four active LAG ports, the SAP queue on each LAG port will be configured with a PIR of 50 Mb/s (200/4) and a CIR of 25 Mb/s (100/4).
For the SAP ingress MSS scheduler, the scheduler rate is configured on an MDA basis. Distributive adaptive QoS divides the QoS rates (as specified by the SLA) among the active link MDAs proportionally to the number of active links on each MDA.
For example, if an MSS shaper group with an aggregate rate of 200 Mb/s and a CIR of 100 Mb/s is assigned to an active/active LAG SAP where the LAG has two ports on MDA 1 and three ports on MDA 2, the MSS shaper group on MDA 1 will have an aggregate rate of 80 Mb/s (200 ✕ 2/5 of the SLA) and a CIR of 40 Mb/s (100 ✕ 2/5 of the SLA). MDA 2 will have an aggregate rate of 120 Mb/s (200 ✕ 3/5) and a CIR of 60 Mb/s (100 ✕ 3/5).
adapt-qos link (default)
For SAP queue schedulers, SAP schedulers (H-QoS), and SAP egress MSS schedulers, link mode forces the full QoS rates (as specified by the SLA) to be configured on each of the active LAG links. For example, if a SAP queue PIR and CIR are configured on an active/active LAG SAP to be 200 Mb/s and 100 Mb/s respectively, and there are two active LAG ports, the SAP queue on each LAG port will be configured to the full SLA, which is a PIR of 200 Mb/s and a CIR of 100 Mb/s.
For the SAP ingress MSS scheduler, the scheduler rate is configured on an MDA basis. In LAG link mode, each active LAG link MDA MSS shaper scheduler is configured with the full SLA. For example, if an MSS shaper group is configured with an aggregate rate of 200 Mb/s and CIR of 100 Mb/s and is assigned to an active/active LAG SAP with three ports on MDA 1 and two ports on MDA 2, the MSS shaper group on MDA 1 and MDA 2 are each configured with the full SLA of 200 Mb/s for the aggregate rate and 100 Mb/s for the CIR.
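As an illustrative sketch only (assuming the adapt-qos command resides under the LAG access context; the LAG ID is a placeholder), the adaptive QoS mode for an access or hybrid LAG could be selected as follows:
    configure lag 1
        access
            adapt-qos distribute
        exit
    exit
Omitting this setting leaves the LAG in the default link mode described above.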
The following table shows examples of rate and bandwidth distributions based on the adapt-qos mode configuration.
| Scheduler | Distribute | Link |
|---|---|---|
| SAP queue scheduler | Rate distributed = rate / number of active links | 100% rate configured on each LAG SAP queue |
| SAP scheduler (H-QoS) | Rate distributed = rate / number of active links | 100% rate configured on each SAP scheduler |
| SAP egress MSS scheduler | Rate distributed = rate / number of active links | 100% rate configured on each port’s MSS scheduler |
| SAP ingress MSS scheduler | Rate distributed per active LAG MDA = rate ✕ (number of active links on MDA / total number of active links) | 100% rate configured on each active LAG MDA MSS scheduler |
The following restrictions apply to ingress MSS LAG adaptive QoS (distribute mode):
A unique MSS shaper group must be used per LAG when a non-default ingress MSS shaper group is assigned to a LAG SAP using adaptive QoS.
When a shaper group is assigned to a LAG SAP using adaptive QoS, all ports in the LAG group must have their MDAs assigned to the same shaper policy.
The following restrictions apply to egress MSS LAG:
The shaper policy for all LAG ports in a LAG must be the same and can only be configured on the primary LAG port member.
The following limitations apply to adaptive QoS (distribute mode):
The QoS rates for an ingress LAG using adaptive QoS are only distributed among the active links when a non-default shaper group is used. If a default shaper group is used, the full QoS rates are configured for each port in the LAG as if link mode is being used.
The QoS rates for an ingress or egress LAG using adaptive QoS will not be distributed among the active links when a user sets the PIR/CIR on a SAP queue, or aggregate rate/CIR on a SAP scheduler or MSS scheduler, to the default values (max and 0).
Adaptive QoS examples (distribute mode)
The following examples can be used as guidelines for configuring adapt-qos distribute.
SLA distribution for SAP queue-level PIR/CIR configuration
Configure a qos sap-ingress policy with a queue ID of 2, a PIR of 200 Mb/s, and a CIR of 100 Mb/s. Assign it to an active/active LAG SAP with five active ports.
For each port, the PIR/CIR configuration of SAP queue 2 is calculated so that the PIR = 40 Mb/s and CIR = 20 Mb/s.
If one link goes down, the PIR/CIR configuration of SAP queue 2 on each active port is recalculated so that the PIR = 50 Mb/s and CIR = 25 Mb/s.
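A minimal sketch of how this policy might be defined is shown below; the policy ID is a placeholder, rates are assumed to be entered in kb/s, and the exact create and assignment syntax should be verified against the command reference.
    configure qos sap-ingress 10 create
        queue 2 create
            rate 200000 cir 100000
        exit
    exit
The policy would then be assigned to the LAG SAP in the SAP ingress context of the service.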
SLA distribution for ingress/egress (H-QoS)
Create a LAG SAP with two different ports (for example, port 1/1/1 and port 1/1/2) in a LAG subgroup.
Configure a LAG SAP aggregate rate of 200 Mb/s and a CIR of 100 Mb/s.
To maintain the SLA, the SAP aggregate rate and CIR must be divided by the number of operational links in the LAG group.
Because there are two active ports (links) in this LAG, the H-QoS aggregate rate and CIR are divided evenly between the two ports.
The port 1/1/1 SAP scheduler (H-QoS) aggregate rate is 100 Mb/s and the CIR is 50 Mb/s.
The port 1/1/2 SAP scheduler (H-QoS) aggregate rate is 100 Mb/s and the CIR is 50 Mb/s.
SLA distribution for Ingress MSS
Configure a shaper group with an ID of 2 with an aggregate rate of 200 Mb/s and a CIR of 100 Mb/s.
Create a LAG SAP using shaper group 2 that has two ports from one MDA (for example, ports 1/1/1 and 1/1/2) and three ports from a different MDA (for example, ports 1/2/1, 1/2/2, and 1/2/3) in its LAG group.
The ingress MSS scheduler rate is configured on an MDA basis. Adaptive QoS divides the QoS rates among the active link MDAs, proportionally to the number of active links on each MDA.
For MDA 1, the MSS shaper group aggregate rate is 80 Mb/s and the CIR is 40 Mb/s (2/5 of the bandwidth with two active links on MDA 1).
For MDA 2, the MSS shaper group aggregate rate is 120 Mb/s and the CIR is 60 Mb/s (3/5 of the bandwidth with three active links on MDA 2).
QoS for LAG on network
QoS on network port LAGs is handled differently from QoS on access port LAGs. The adapt-qos command is not supported on network port LAGs. However, QoS behavior on network port LAGs is similar to QoS on access port LAGs configured for adapt-qos link mode. For network queue and per-VLAN shapers, the full QoS rates are configured on each of the active LAG links. For example, if a per-VLAN shaper agg-rate-limit aggregate rate (PIR) and CIR are configured on an active/active LAG interface to be 200 Mb/s and 100 Mb/s respectively, and there are two active LAG ports, the per-VLAN shaper on each LAG port will be configured to an aggregate rate of 200 Mb/s and a CIR of 100 Mb/s.
Access ingress fabric shaping
To prevent traffic congestion and ease the effects of possible bursts, a fabric shaper is implemented on each adapter card. Traffic being switched to a LAG SAP on an access interface goes through fabric shapers that are either in aggregate mode or destination mode. When in destination mode, the multipoint shaper is used to set the rate on all adapter cards. For more information about the modes used in fabric shaping, see the 7705 SAR Quality of Service Guide, ‟Configurable ingress shaping to fabric (access and network)”.
Hold-down timers
Hold-down timers control how quickly a LAG responds to operational port state changes. The following timers are supported:
port-level hold-time (up/down) timer
This timer controls the delay before a port is added to or removed from a LAG when the port comes up or goes down. Each port in the LAG has the same timer value, which is configured on the primary LAG link (port). The timer is set with the config>port>ethernet>hold-time command.
subgroup-level hold-down timer
This timer controls the delay before a switch from the current subgroup to a new candidate subgroup, selected by the LAG subgroup selection algorithm. The timer is set with the config>lag>selection-criteria command.
The timer can be configured to never expire, which prevents a switch from an operationally up subgroup to a new candidate subgroup. This setting can be manually overridden by using the tools>perform>force>lag-id command (see the 7705 SAR OAM and Diagnostics Guide, ‟Tools Command Reference”, for information about this command).
If the port-level timer is set, it must expire before the subgroup selection occurs and this timer is started. The subgroup-level timer is supported only for LAGs running LACP.
LAG-level hold-down timer
This timer controls the delay before a LAG is declared operationally down when the available links fall below the required port or bandwidth minimum. This timer is recommended for MC-LAG operation. The timer prevents a LAG from being brought down when an MC-LAG switchover executes a make-before-break switch. The LAG-level timer is set with the config>lag>hold-time down command.
If the port-level timer is set, it must expire before the LAG operational status is processed and this timer is started.
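A hedged sketch of where these timers are set follows; the port ID, LAG ID, and timer values are placeholders, and the value units and ranges should be taken from the command reference.
    configure port 1/2/1 ethernet
        hold-time up 10 down 5
    exit
    configure lag 1
        hold-time down 20
    exit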
Multi-chassis LAG
Multi-chassis LAG (MC-LAG) is a redundancy feature on the 7705 SAR, useful for nodes that are taken out of service for maintenance, upgrades, or relocation. MC-LAG also provides redundancy for incidents of peer nodal failure. See the ‟Multi-chassis LAG redundancy” section in the 7705 SAR Basic System Configuration Guide.
Static LAG (active/standby LAG operation without LACP)
Some Layer 2-capable network equipment devices support LAG protected links in an active/standby mode but without LACP. This is commonly referred to as static LAG. In order to interwork with these products, the 7705 SAR supports configuring LAG without LACP.
LACP provides a standard means of communicating health and status information between LAG peers. If LACP is not used, the peers must be initially configured in a way that ensures that the ports on each end are connected and communicating; otherwise, the LAG will not become active. The choice of which port is made active is a local decision on each peer. If the port priority settings are the same for all ports, it is possible that the two ends will select ports on different physical links and the LAG will not be active. Decide the primary link by setting the port priority for the LAG on each peer so that the active ports on each end coincide with the same physical link.
The key parameters for configuring static LAG are selection-criteria (set to best-port) and standby-signaling (set to power-off). The selection criteria determine which selection algorithm decides the primary port (the active port in a no-fault condition); the subgroup containing the best port (the highest-priority port, that is, the port with the lowest configured priority value) is always chosen as the active subgroup. The selection criteria must be set to best-port before standby signaling can be placed in power-off mode. Once the selection criteria is set to best-port, setting the standby-signaling parameter to power-off causes the transmitters on the standby ports to be powered down.
After a switchover caused by a failure on the active link, the transmitters on the standby link are powered on. The switch time for static LAG is typically longer than it is with LACP because of the time it takes for the transmitters to come up and for transmission to be established. When the fault is cleared, static LAG causes a revertive switch to take place. The revertive switch is of shorter duration than the initial switchover because the system is able to prepare the other side for the switch and initiate the switchover once it is ready.
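The following sketch illustrates the static LAG parameters described above; the LAG ID, port IDs, priorities, and subgroup numbers are hypothetical, and the port and sub-group syntax should be verified against the command reference. LACP is left at its default (disabled).
    configure lag 1
        port 1/2/1 priority 10 sub-group 1
        port 1/3/1 priority 20 sub-group 2
        selection-criteria best-port
        standby-signaling power-off
        no shutdown
    exit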
LAG support on mixed-generation hardware
LAG configuration at SAP level
The 6-port Ethernet 10Gbps Adapter card and the 7705 SAR-X are third-generation (Gen-3) hardware components. All other Ethernet hardware components are second-generation (Gen-2) components. See Ethernet adapter card, module, and platform generations for a list of second-generation and third-generation Ethernet adapter cards, ports, and platforms.
The 7705 SAR supports mix-and-match traffic management (TM) across LAG members, where one member is a port on a Gen-3 adapter card or platform and the other member is a port on a Gen-2 adapter card or platform. Mix-and-match LAG does not apply to the 7705 SAR-X because it has only Gen-3 Ethernet ports.
For mix-and-match LAG TM scenarios, the 7705 SAR supports a generic QoS configuration, where the operator can configure all the settings available on each generation adapter card, but it is the card responsible for transporting traffic that determines which settings are applicable. That is, only the settings that apply to the active member port are used.
For example, configuring scheduling-mode applies to Gen-2 adapter card SAPs but does not apply to the Gen-3 adapter card SAPs because Gen-3 cards support only one scheduling mode (4-priority), which is its implicit (default) scheduler mode and is not configurable.
Because it cannot be known whether SAP traffic rides over a Gen-2 or a Gen-3 adapter card and whether both adapter cards support H-QoS (tier 2, per-SAP shapers), the operator can choose to configure per-SAP aggregate CIR and PIR shaper rates. When the active link is on a Gen-2- or Gen-3-based port, per-SAP aggregate CIR and PIR rates are both used to enforce shaper rates, except when the active link is on a Gen-3-based port and traffic is in the network egress direction. In this case, only the PIR portion of the per-SAP aggregate rate is used to enforce shaper rates.
In the following descriptions of LAG configuration, scheduler-mode, agg-rate, and cir-rate refer to SAP configuration, as shown below for an Epipe SAP. Similar commands exist for SAPs in other services as well as for egress traffic.
- Example:
config>service>epipe>sap lag-id>ingress#
scheduler-mode {4-priority | 16-priority}
agg-rate-limit agg-rate [cir cir-rate]
The SAP identifier in the previous command has a lag-id (LAG SAP), not a port-id (regular SAP). A LAG SAP references two ports (one active and one standby), but only one port at a time carries traffic.
The agg-rate is a PIR rate.
For information about traffic management for Gen-3 adapter cards and platforms, see the ‟QoS for Gen-3 adapter cards and platforms” section in the 7705 SAR Quality of Service Guide.
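As a concrete instance of the scheduler-mode and agg-rate-limit syntax shown above (the service ID, LAG ID, and rates are placeholders, and rates are assumed to be in kb/s):
    configure service epipe 100
        sap lag-2 create
            ingress
                scheduler-mode 16-priority
                agg-rate-limit 50000 cir 20000
            exit
        exit
    exit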
For mix-and-match LAG configurations, the following behaviors apply:
The configured aggregate rate on the LAG SAP is used to dictate the per-SAP aggregate rate on the active LAG port, regardless of which generation of adapter card is used (Gen-3 or Gen-2) or the configured scheduler mode. On a Gen-2 adapter card, the aggregate rate only applies when the port is in 16-priority scheduler mode. This behavior implies the following points:
The scheduler mode can be set to 16-priority or 4-priority. When servicing packets, the Gen-2-based datapath uses the configured scheduler mode (16-priority or 4-priority), while the Gen-3-based datapath always uses 4-priority scheduling.
When the traffic is transported over a Gen-3-based port (that is, the active link is on a Gen-3-based adapter card), the aggregate rate (agg-rate) is used to enforce a maximum shaper rate, as is the aggregate rate CIR (cir-rate).
When the active link is on a Gen-2-based adapter card, both aggregate rate CIR and PIR (cir-rate and agg-rate) are used. The aggregate rate (PIR) enforces the per-SAP bandwidth limit, and the CIR is used to identify in-profile and out-of-profile packets for aggregate scheduling purposes.
In addition, the following items describe mix-and-match LAG configuration behavior (that is, how the LAG SAP settings are applied or ignored depending on the active member port):
For a LAG SAP, scheduler-mode, agg-rate, and cir-rate are all configurable on a per-SAP basis, regardless of the LAG member port combination (that is, both Gen-2 ports, both Gen-3 ports, or a Gen-2-/Gen-3 port mix).
The configured scheduler-mode can be set to 4-priority or 16-priority, regardless of the LAG member port combination.
Agg-rate and cir-rate can be set whether scheduler-mode is set to 4-priority or 16-priority.
The configured scheduler-mode applies to Gen-2-based LAG member ports only and is not used for Gen-3-based LAG member ports. Gen-3 cards always use 4-priority scheduler mode. The unshaped-sap-cir keyword does not apply to Gen-3 SAPs because Gen-3 SAPs are all shaped SAPs.
If scheduler-mode is 4-priority on the LAG SAP, where the LAG has one Gen-2-based port member and one Gen-3-based port member, the following points apply:
The Gen-2-based adapter card is configured with 4-priority scheduling, while agg-rate and cir-rate are not applied, and H-QoS is not enabled.
The Gen-3-based adapter card is configured with agg-rate and cir-rate, while scheduler-mode is ignored.
When LAG active/standby switching occurs from an active Gen-3-based port to an active Gen-2-based port, traffic management is changed from a 4-priority scheduler with H-QoS to a 4-priority scheduler without H-QoS that functions like an unshaped SAP.
For the reverse case, when LAG active/standby switching occurs from an active Gen-2-based port to an active Gen-3-based port, traffic management is changed from a 4-priority scheduler without H-QoS to a 4-priority scheduler with H-QoS.
If scheduler-mode is 16-priority on the LAG SAP, where the LAG has one Gen-2-based port member and one Gen-3-based port member, the following points apply:
The Gen-2-based adapter card is configured with 16-priority scheduling mode, agg-rate and cir-rate. This means that H-QoS is enabled.
The Gen-3-based adapter card is configured with agg-rate and cir-rate, while scheduler-mode is ignored.
When LAG active/standby switching occurs from an active Gen-3-based port to an active Gen-2-based port, traffic management is changed from a 4-priority scheduler with H-QoS using the agg-rate and cir-rate, to a 16-priority scheduler with H-QoS using the agg-rate and the cir-rate (that is, from 4-priority (Gen-3) mode to 16-priority mode for shaped SAPs).
For the reverse case, when LAG active/standby switching occurs from an active Gen-2-based port to an active Gen-3-based port, traffic management is changed from a 16-priority scheduler with H-QoS using the agg-rate and the cir-rate, to a 4-priority (Gen-3) scheduler with H-QoS enabled using the agg-rate and the cir-rate.
If scheduler-mode is 16-priority mode on the LAG SAP, the combination of a Gen-1-based port with a Gen-2-based or Gen-3-based port is blocked because Gen-1 adapter cards do not support 16-priority mode. The only valid option for this combination of ports is 4-priority scheduling mode.
Lastly, for LAG on access ports, the primary port configuration settings are applied to both the primary and secondary LAG ports. Therefore, in order to support unshaped SAPs when the primary port is a Gen-3-based port and the secondary port is a Gen-2-based port, configuring the unshaped-sap-cir on the Gen-3-based port is allowed, even though it does not apply to the Gen-3-based port. This is because unshaped-sap-cir is needed by the secondary Gen-2-based port when it becomes the active port. The full command is config>port>ethernet>access>egress>unshaped-sap-cir cir-rate.
LAG configuration at port level
The 7705 SAR allows all configurations on Gen-2 and Gen-3 ports, even if some or all of the configurations are not applicable to all the ports. The software uses only the settings that are applicable to the particular port and ignores those that are not applicable. Any change to the primary LAG member configuration propagates to all non-primary ports.
The following table lists the port commands that can be affected by LAG configuration, indicates the command’s applicability to Gen-2 and Gen-3 ports, and describes the LAG behavior for mixed LAG configuration.
| CLI command | Gen-2 port | Gen-2 port on module¹ | Gen-3 port | Configuration behavior |
|---|---|---|---|---|
| unshaped-if-cir | Supported² | Supported² | Supported³ | Allowed on Gen-2 and Gen-3 hardware, but not on Fast Ethernet ports. All port members of the same LAG must have the same value. |
| unshaped-sap-cir | Supported | Supported | N/A | Allowed on Gen-2 and Gen-3 hardware. All LAG members are allowed if all member ports have the same unshaped-sap-cir value. Change the value only on the primary member; the value is propagated to all other members. |
| shaper-policy | Supported | Supported | Supported | Allowed on Gen-2 and Gen-3 hardware |
| cbs | Supported | Supported | Supported | Allowed on Gen-2 and Gen-3 hardware. All LAG members must have the same value. Change the value only on the primary member; the value is propagated to all other members. |
| src-pause | Enable or disable | Disable | Enable or disable | Allowed to change enable/disable on Gen-2 and Gen-3 hardware, except for a Gen-3 port on a 6-port SAR-M Ethernet module, where only the no src-pause command is supported and cannot be changed. All LAG members must have the same value. Change the value only on the primary member; the value is propagated to all other members. |
| include-fcs | Enable or disable | Always enabled | Enable or disable | Allowed on Gen-2 and Gen-3 hardware |
| scheduler-mode (for port) | 16-priority | 16-priority | 4-priority | Allowed to configure per-port independently, whether the port is a standalone or an active/standby member. There is no propagation among ports within the same LAG. |
Notes:
1. Refers to the 6-port SAR-M Ethernet module.
2. Not supported on Fast Ethernet ports.
3. If the port is in network mode, the unshaped-if-cir command can be configured but does not take effect. If the port is in hybrid mode, the command takes effect.
As indicated in the table, each generation of adapter card uses its own configured scheduler mode or uses the only command option available for Gen-2 and Gen-3 adapter cards. For example, on a LAG where:
one member link is on Gen-2 hardware – this port uses 16-priority scheduler mode, which is the default mode and cannot be changed
one member link is on Gen-3 hardware – this port uses 4-priority (Gen-3) scheduler mode, which is the default mode and cannot be changed
BFD over LAG links (micro-BFD)
The 7705 SAR supports the application of BFD to monitor individual LAG link members in order to speed up the detection of link failures. When BFD is associated with an Ethernet LAG, BFD sessions are set up over each link member. These asynchronous independent sessions are referred to as micro-BFD sessions. When micro-BFD is configured, a link is not operational in the associated LAG until the associated micro-BFD session is fully established. The link member is taken out of the operational state in the LAG if the micro-BFD session fails.
Although ETH-EFM can be used on individual LAG links, EFM timers are limited to 100 ms. With micro-BFD, 10-ms timers are supported, which allows for much faster detection times. The micro-BFD sessions use the well-known destination UDP port 6784 over LAG links. The source MAC address is the local system MAC address for the LAG interface. The micro-BFD packets use the well-known destination MAC address 01:00:5e:90:00:01.
Configuration rules
The following table shows the rules to configure the micro-BFD IP addresses.
| LAG and associated interface | Local IP address | Remote IP address |
|---|---|---|
| Null encap LAG and interface | BFD IP must match the interface IP | Same subnet as interface IP |
| Dot1q LAG and zero VLAN interface | BFD IP must match the interface IP | Same subnet as interface IP |
| Dot1q LAG and non-zero VLAN interface or no interface | Any IP | Any IP |
The remote-ip-address must match the BFD local-ip-address configured on the remote system.
If the LAG bundle is associated with a different IP interface, the local IP and remote IP addresses must be modified to match the new IP subnet.
The local and remote LAG nodes must be configured with the same values for the following micro-BFD parameters:
bfd-on-distributing-only
max-setup-time
max-admin-down-time
If these values do not match between the local and remote ends, micro-BFD and LAG may not come up.
The following table shows the services supported with micro-BFD.
| LAG and associated interface | Network interface | IES | VPRN | Epipe | Ipipe | VPLS |
|---|---|---|---|---|---|---|
| Null encap LAG interface | ✓ | ✓ | ✓ | | | |
| Dot1q LAG and zero VLAN interface | ✓ | ✓ | | | | |
| Dot1q LAG and non-zero VLAN interface or no interface | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Configuration examples
This section provides micro-BFD configuration examples for null and dot1q encapsulation interface types. In each case, equivalent configurations are applied on the local and remote LAG nodes, and the remote-ip-address configured on one node must match the BFD local-ip-address configured on its peer.
For LAG null encapsulation, an interface must be created and the micro-BFD IP addresses must match the interface IP address.
For LAG dot1q encapsulation with no interface, the micro-BFD IP addresses can be any valid addresses.
For LAG dot1q encapsulation with multiple non-zero VLAN interfaces, the micro-BFD IP addresses can be any valid addresses and the interfaces must have non-zero VLANs.
For LAG dot1q encapsulation with a zero VLAN interface, the micro-BFD IP addresses must match the interface addresses of the zero VLAN interface.
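A hedged sketch of the null-encapsulation case on the local node follows; the LAG ID, interface name, IP addresses, and the placement of the bfd family context are assumptions to be verified against the command reference. The remote node would mirror this configuration with the local and remote addresses reversed.
    configure router interface "lag-if"
        port lag-1
        address 10.10.10.1/30
    exit
    configure lag 1
        bfd
            family ipv4
                local-ip-address 10.10.10.1
                remote-ip-address 10.10.10.2
                no shutdown
            exit
        exit
    exit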
LAG and ECMP hashing
If it is necessary to increase the available bandwidth of a logical link beyond the bandwidth of a single physical link, or to add redundancy for a physical link, one of two methods is typically applied: LAG or ECMP. A system can also deploy both at the same time, using ECMP over two or more LAGs or single links.
The 7705 SAR supports per-flow and per-service hashing, as described in the following sections:
Per-flow hashing
The 7705 SAR supports per-flow hashing for LAG and ECMP. Per-flow hashing uses information in a packet as an input to the hash function, ensuring that any given traffic flow maps to the same egress LAG port or ECMP path.
Depending on the type of traffic that needs to be distributed in an ECMP or LAG path, different variables are used as the input to the hashing algorithm that determines the selection of the next hop (ECMP) or port (LAG). The hashing result can be changed using the options described in Per-service hashing, LSR hashing, Layer 4 load balancing, TEID hashing for GTP-encapsulated traffic, and Entropy labels.
The following table summarizes the possible inputs to the hashing algorithm for ECMP and LAG.
Fragmented packets cannot use Layer 4 UDP/TCP ports or tunnel endpoint IDs (TEIDs) as hashing inputs. For fragmented packets, the datapath uses only the IP source and destination addresses, even if it is configured to use Layer 4 UDP/TCP ports or the TEID.
In the table, the hashing inputs in the Service ID column and the inputs in the other columns are mutually exclusive. Where checkmarks appear on both the per-service and per-flow sides of the table, see the table note in the Service ID column to determine when per-service hashing is used.
Traffic type |
Per- service |
Per-flow |
||||||||
---|---|---|---|---|---|---|---|---|---|---|
Service ID |
System IPv4 address 1 |
Ingress port 2 |
Source and destination |
TEID 4 |
Internal multicast group ID 5 |
MPLS label stack |
Entropy label |
|||
MAC address |
IP address |
UDP/TCP port 3 |
||||||||
ECMP |
||||||||||
IPv4 routed |
✓6 |
✓ |
✓ |
✓ |
✓ |
✓ |
||||
IPv6 routed |
✓6 |
✓ |
✓ |
✓ |
✓ |
✓ |
||||
MPLS LSR |
✓ |
✓ |
✓7 |
✓7 |
✓9 |
✓9 |
||||
MPLS MVPN (LSR, eLER) |
||||||||||
VPLS |
✓10 |
|||||||||
Epipe |
✓ |
|||||||||
Apipe, Cpipe, Fpipe, Ipipe, Hpipe |
✓ |
|||||||||
LAG |
||||||||||
IPv4 routed |
✓ |
✓ |
✓ |
✓ |
✓ |
|||||
IPv6 routed |
✓ |
✓ |
✓ |
✓ |
✓ |
|||||
MPLS LSR |
✓ |
✓ |
✓7 |
✓7 |
✓9 |
✓9 |
||||
MPLS MVPN (LSR, eLER) |
✓ |
✓ |
✓ |
✓ |
✓ |
|||||
VPLS |
✓11 |
✓ |
✓ |
✓ |
✓ |
✓ |
||||
Epipe |
✓11 |
✓ |
✓ |
✓ |
✓ |
|||||
Apipe, Cpipe, Fpipe, Ipipe, Hpipe |
✓ |
Notes:
1. The system IP address can be included as a hashing input using the system-ip-load-balancing command at the system level. For MPLS LSR, this configuration is ignored when the hashing algorithm is configured as lbl-only using the lsr-load-balancing command.
2. Optional hashing input that is included when the use-ingress-port option is enabled in the lsr-load-balancing command.
3. Optional hashing input that is included when the l4-load-balancing command is enabled (for all except MPLS LSR) or when the hashing algorithm is configured as lbl-ip-l4-teid using the lsr-load-balancing command (for MPLS LSR only). Layer 4 load balancing at the service level is not affected by Layer 4 load balancing at the system, router interface, or service interface levels (IES and VPRN).
4. Optional hashing input that is included when the teid-load-balancing command is enabled (for all except MPLS LSR) or when the hashing algorithm is configured as lbl-ip-l4-teid using the lsr-load-balancing command (for MPLS LSR only). TEID load balancing at the service level is not affected by TEID load balancing at the router interface or service interface levels (IES and VPRN).
5. Only applies to multicast traffic. The internal multicast group ID is generated from either the (S,G) record (IGMP snooping, MLD snooping, and PIM snooping), the point-to-multipoint label binding, or the VPLS service creation.
6. Only for Layer 3 traffic going to a Layer 3 spoke SDP interface.
7. Only included when the first 4 bits (first nibble) after the last MPLS header (bottom of stack = 1) have a value of 4 (decimal), in which case the next header encapsulation is considered to be an IPv4 header.
8. Optional hashing input that is included when LSR hashing is configured as lbl-ip or lbl-ip-l4-teid using the lsr-load-balancing command.
9. MPLS label stack and entropy label are mutually exclusive hashing inputs. When an entropy label indicator (ELI) and entropy label (EL) are found in the label stack, the MPLS labels are not used as hashing inputs.
10. When the per-service-hashing command is enabled in a VPLS service, the service ID and an internal spoke SDP binding ID are included as inputs to the hashing algorithm.
11. When the per-service-hashing command is enabled in a VPLS or Epipe service, only the service ID is included as an input to the hashing algorithm.
Per-service hashing
The 7705 SAR supports load balancing based on service ID, as shown in Hashing algorithm inputs (ECMP and LAG). The 7705 SAR uses the service ID as the input to the hash function. Per-service and per-flow hashing are mutually exclusive features.
For IPv4 and IPv6 routed traffic under ECMP operation, the service ID is used as the hashing input for Layer 3 traffic going to a Layer 3 spoke SDP interface. Otherwise, per-flow load balancing is used.
For Epipe and VPLS services under LAG operation, the per-service-hashing command and the l4-load-balancing and teid-load-balancing commands are mutually exclusive. Load balancing via per-service hashing is configured under the config>service>epipe>load-balancing and config>service>vpls>load-balancing contexts.
If per-service-hashing is not enabled, a 4-byte hash value is appended as internal overhead to VPLS multicast traffic at ingress. The internal hash value is discarded at egress before scheduling. Therefore, shaping rates at access and network ingress and for fabric policies may need to be adjusted accordingly. In addition, the 4-byte internal hash value may be included in any affected statistics counters.
The 7705 SAR supports multiple LSPs (RSVP-TE or segment routing TE (SR-TE)) in the same SDP as part of the mixed-LSP SDP feature (see the ‟Mixed-LSP SDPs” section in the 7705 SAR Services Guide for details). When an SDP is configured with multiple LSPs of the same type, it allows load balancing of the traffic in a similar manner as load balancing for LDP ECMP, but only at the iLER point. Therefore, the per-flow hashing and per-service hashing behavior described in this section for LDP ECMP at the iLER also applies to multiple LSPs (RSVP-TE or SR-TE) in the same SDP.
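A minimal sketch of enabling per-service hashing on a VPLS service, based on the load-balancing context named above (the service ID is a placeholder):
    configure service vpls 200
        load-balancing
            per-service-hashing
        exit
    exit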
LSR hashing
LSR hashing operates on the label stack and can also include hashing on the IP header if the packet is an IPv4 packet. The label-IP hashing algorithm can also include the Layer 4 header and the TEID field. The default hash is on the label stack only. IPv4 is the only IP hashing supported on a 7705 SAR LSR.
When a 7705 SAR is acting as an LSR, it considers a packet to be IP if the first nibble following the bottom of the label stack is 4 (IPv4). This allows the user to include an IP header in the hashing routine at an LSR in order to spray labeled IP packets over multiple equal-cost paths in ECMP in an LDP LSP and/or over multiple links of a LAG group in all types of LSPs.
Other LSR hashing options include label stack profile options on the significance of the bottom-of-stack label (VC label), the inclusion or exclusion of the ingress port, and the inclusion or exclusion of the system IP address.
LSR load balancing is configured using the config>system>lsr-load-balancing or config>router>if>load-balancing>lsr-load-balancing command. Configuration at the router interface level overrides the system-level configuration for the specified interface.
If an ELI is found in the label stack, the entropy label is used as the hash result. Hashing continues based on the configuration of label-only (lbl-only), label-IP (lbl-ip), or label-IP with Layer 4 header and TEID (lbl-ip-l4-teid) options.
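The following sketch shows where the LSR hashing options described above are set; the option keywords are taken from this section, but their exact combination and placement should be verified against the command reference, and the interface name is a placeholder.
    configure system
        lsr-load-balancing lbl-ip
    exit
    configure router interface "to-core"
        load-balancing
            lsr-load-balancing lbl-ip-l4-teid
        exit
    exit
The interface-level setting overrides the system-level setting for that interface.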
LSR label-only hashing
ECMP operation consists of an initial hash based on the system IP address, then on the global port number if the use-ingress-port option is enabled.
Each label in the stack is then hashed separately with the result of the previous hash, up to a maximum of 16 labels. The net result is used to select which LDP FEC next hop to send the packet to using a threshold hashing operation of the net result with the number of next hops. Threshold hashing is described in RFC 2992, Analysis of an Equal-Cost Multi-Path Algorithm.
If an ELI is found in the label stack, the entropy label replaces the MPLS label stack hashing result and hashing continues.
If the selected LDP FEC or LSP has its NHLFE programmed with a LAG interface, a second round of hashing is needed, using the net result of the first round of hashing as the hashing input.
LSR label-IP hashing
In the first round of hashing for LSR label IP hashing, the algorithm parses down the label stack as described in LSR label-only hashing.
When the algorithm reaches the bottom of the stack, it checks the next nibble. If the nibble value is 4, the packet is assumed to be an IPv4 packet and the result of the label hash is fed into another hash along with the source and destination address fields in the IP packet header. If the nibble value is not 4, the algorithm will just use the label stack hash already calculated for the ECMP path selection.
The second round of hashing for LAG reuses the net result of the first round of hashing.
LSR label-IP hashing with Layer 4 header and TEID
If the lbl-ip-l4-teid option is configured, the Layer 4 source and destination UDP or TCP port fields and the TEID field in the GTP header are included in the label-IP hashing calculation. See Layer 4 load balancing and TEID hashing for GTP-encapsulated traffic for more information.
Label stack profile options
The lsr-load-balancing command includes a bottom-of-stack option that determines the significance of the bottom-of-stack label (VC label) based on which label stack profile option is specified. The profiles are:
profile 1 – favors better load balancing for pseudowires when the VC label distribution is contiguous (default)
profile 2 – similar to profile 1 where the VC labels are contiguous, but provides an alternate distribution
profile 3 – all labels have equal influence in hash key generation
Ingress port
The use-ingress-port option, when enabled, specifies that the ingress port will be used by the hashing algorithm at the LSR. This option should be enabled for ingress LAG ports because packets with the same label stack can arrive on all ports of a LAG interface. In this case, using the ingress port in the hashing algorithm will result in better egress load balancing, especially for pseudowires.
The option should be disabled for LDP ECMP so that the ingress port is not used by the hashing algorithm. For ingress LDP ECMP, if the ingress port is used by the hashing algorithm, the hash distribution could be biased, especially for pseudowires.
Layer 4 load balancing
The IP Layer 4 load-balancing option includes the TCP/UDP source and destination port numbers in addition to the source and destination IP addresses in per-flow hashing of IP packets. By including the Layer 4 information, a source address/destination address default hash flow can be subdivided into multiple finer-granularity flows if the ports used between a source address and destination address vary.
Layer 4 load balancing is configured at the system level using the config>system>l4-load-balancing command. It can also be configured at the router interface level or the service interface level (IES and VPRN). Configuration at the router interface or service interface level overrides the system-level configuration for the specified interface or service.
For LSR LDP ECMP, Layer 4 load balancing is configured using the lbl-ip-l4-teid option in the lsr-load-balancing command at the system level or router interface level. Configuration at the router interface level overrides the system-level configuration for the specified interface.
Layer 4 load balancing can also be configured at the service level for Epipe and VPLS services. Layer 4 load balancing at the service level is not impacted by Layer 4 load balancing at the system, router interface, or service interface levels.
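A minimal sketch of enabling Layer 4 load balancing at the system level, assuming the command is a simple toggle as described above:
    configure system
        l4-load-balancing
    exit
The corresponding router interface, service interface, and service-level commands can then be used to override or extend this behavior where required.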
TEID hashing for GTP-encapsulated traffic
GTP is the GPRS (general packet radio service) tunneling protocol. The tunnel endpoint identifier (TEID) is a field in the GTP header. TEID hashing can be enabled on Layer 3 interfaces. The hash algorithm identifies the GTP-U protocol by checking the UDP destination port (2152) of an IP packet to be hashed. If the value of the port matches, the packet is assumed to be GTP-U. For GTPv1 packets, the TEID value from the expected header location is then included in the hash. For GTPv2 packets, the TEID flag value in the expected header is additionally checked to verify whether the TEID is present. If the TEID is present, it is included in the hash algorithm inputs.
TEID load balancing is configured at the router interface level using the config>router>if>teid-load-balancing command. It can also be configured at the IES or VPRN service interface level.
For LSR LDP ECMP, TEID load balancing is configured using the lbl-ip-l4-teid option in the lsr-load-balancing command at the system level or router interface level. Configuration at the router interface level overrides the system-level configuration for the specified interface.
TEID load balancing can also be configured at the service level for Epipe and VPLS services. TEID load balancing at the service level is not impacted by TEID load balancing at the router interface or service interface levels.
Entropy labels
The 7705 SAR supports MPLS entropy labels on RSVP-TE and SR-TE LSPs, as per RFC 6790. The entropy label provides greater granularity for load balancing on an LSR where load balancing is typically based on the MPLS label stack.
If an ELI is found in the label stack, the entropy label is used as the hash result and hashing continues based on the configuration of label-only (lbl-only) or label-IP (lbl-ip) options. For information about the behavior of LSR hashing when entropy label is enabled, see LSR hashing.
To support entropy labels on RSVP-TE and SR-TE LSPs:
the eLER must signal to the ingress node that entropy label capability (ELC) is enabled, meaning that the eLER can receive and process an entropy label for an LSP tunnel. Entropy labels are supported on RSVP-TE and SR-TE tunnels. Entropy labels are not supported on point-to-multipoint LSPs, BGP tunnels, or LDP FECs.
the iLER must receive the entropy label capability signal and be configured to enable the insertion of entropy labels for the spoke SDP, mesh SDP, or EVPN endpoint. Inserting an entropy label adds two labels in the MPLS label stack: the entropy label itself and the ELI.
At the eLER, use the config>router>rsvp>entropy-label-capability command to enable entropy label capability on RSVP-TE LSPs.
At the iLER, use the entropy-label command to enable the insertion of the entropy label into the label stack. The command is found under the following services and protocols:
Epipe and VPLS
config>service>epipe>spoke-sdp
config>service>epipe>bgp-evpn>mpls
config>service>vpls>spoke-sdp
config>service>vpls>mesh-sdp
config>service>vpls>bgp-evpn>mpls
IS-IS, OSPF, and MPLS
config>router>isis>segment-routing
config>router>ospf>segment-routing
config>router>mpls
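Combining the commands listed above, a hedged end-to-end sketch follows; the service ID, spoke SDP identifier, and context nesting are placeholders to be verified against the command reference.
On the eLER, advertise entropy label capability for RSVP-TE LSPs:
    configure router rsvp entropy-label-capability
On the iLER, enable entropy label insertion on the service spoke SDP:
    configure service epipe 100
        spoke-sdp 5:100 create
            entropy-label
        exit
    exit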
For details on entropy labels, see the ‟MPLS entropy labels” section in the 7705 SAR MPLS Guide.
SPI load balancing
Security parameter index (SPI) load balancing provides a mechanism to improve the hashing performance of IPSec encrypted traffic. IPSec-tunneled traffic transported over a LAG typically relies on IP header hashing only. For example, in LTE deployments, TEID hashing cannot be performed because of encryption, and the system performs IP-only tunnel-level hashing. Because each SPI in the IPSec header identifies a unique SA, and therefore a unique flow, these flows can be hashed individually without impacting packet ordering. In this way, hashing on the SPI improves the distribution of IPSec flows across the links of a LAG or the paths of an ECMP group.
The 7705 SAR allows enabling SPI hashing per Layer 3 interface (this is the incoming interface for hash on system egress) or per Layer 2 VPLS service. When SPI hashing is enabled, an SPI value from the ESP/AH header is used in addition to any other IP hash input based on the per-flow hash configuration: source/destination IPv4/IPv6 addresses and Layer 4 source/destination ports in case NAT traversal is required and Layer 4 load balancing is enabled. If the ESP/AH header is not present in a packet received on a given interface, the SPI will not be part of the hash inputs and the packet is hashed as per other hashing configurations. SPI hashing is not used for fragmented traffic in order to ensure that first and subsequent fragments use the same hash inputs.
SPI hashing is supported for IPv4 and IPv6 tunnel unicast traffic.
SONET/SDH
The 7705 SAR supports SONET/SDH ports on the following adapter cards:
4-port OC3/STM1 Clear Channel Adapter card
2-port OC3/STM1 Channelized Adapter card
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card
SONET/SDH ports can be clear channel (non-channelized) and channelized. The 4-port OC3/STM1 Clear Channel Adapter card supports only clear channel ports. The 2-port OC3/STM1 Channelized Adapter card supports channelized ports. The 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card is a mixed-use adapter card that supports clear channel and channelized ports. The mda-mode command is used to configure the 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card for 4-port or 1-port mode. See Configuring cards for details.
Clear channel ports use the whole port (other than overhead bytes) as a single stream of bits. Channelized ports use various channel hierarchies to split the larger bandwidth into smaller channels, such as DS1, E1, DS3, or E3. SONET hierarchy at STS-12 and SDH hierarchy at STM-4 show the standards-based channel mapping for SONET and SDH, respectively. Channelized ports on the 2-port OC3/STM1 Channelized Adapter card and the 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card support a subset of the standards-based mapping options, as shown in SONET/SDH paths supported on the 7705 SAR.
For SONET, the basic frame format unit is STS-1 (51.84 Mb/s), which is carried in the optical carrier level 1 (OC-1) signal, and three STS-1 frames can be carried in an STS-3 frame at the OC-3 level. For SDH, the basic frame format unit is STM-1 (155.52 Mb/s), which is carried in an OC-3 signal. SDH STM-1 using OC-3 and SONET STS-3 are functionally equivalent.
SONET
The following figure shows the SONET hierarchy at STS-12 (OC-12).
A SONET multiplexing structure allows several combinations of signal transportation. For example, at the STS-3 (OC-3) level:
STS-3 is achieved by interleaving three STS-1s byte by byte
an STS-1 payload can be subdivided into virtual tributary groups (VTGs) and virtual tributaries (VTs). Each STS-1 may contain seven VTGs, which in turn carry sub-STS traffic in VTs. There are four VT sizes:
VT1.5 (1.728 Mb/s) (typically used for DS1, indicated in the CLI as vt15)
VT2 (2.304 Mb/s) (typically transports one E1)
VT3 (3.456 Mb/s) (not shown in the figure)
VT6 (6.912 Mb/s)
each VTG can contain four VT1.5s, three VT2s, two VT3s, or one VT6
each VTG can carry only VTs of the same size
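As a quick cross-check of the VTG/VT sizes listed above (illustrative arithmetic only, not 7705 SAR code), each STS-1 carries seven VTGs, giving 28 VT1.5s or 21 VT2s per STS-1:

```python
# Illustrative arithmetic based on the VTG/VT sizes listed above.
VTGS_PER_STS1 = 7
VTS_PER_VTG = {"vt15": 4, "vt2": 3, "vt3": 2, "vt6": 1}

def vts_per_sts1(vt_size):
    return VTGS_PER_STS1 * VTS_PER_VTG[vt_size]

print(vts_per_sts1("vt15"))      # 28 VT1.5s (DS1s) per STS-1
print(vts_per_sts1("vt2"))       # 21 VT2s (E1s) per STS-1
print(3 * vts_per_sts1("vt15"))  # 84 DS1s per OC-3
```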
SDH
The following figure shows the SDH hierarchy at STM-4 (OC-12).
An SDH multiplexing structure allows several combinations of signal transportation. For example, at the STM-1 (OC-3) level:
one STM-1 payload supports one administrative unit group (AUG)
each AUG can contain either three administrative units (AU-3s) or a single AU-4
For example, the hierarchical possibilities for a single AU-4 are:
each AU-4 transports data via a virtual container (VC-4)
a VC-4 consists of three tributary unit groups (TUG-3s), where either:
a single tributary unit (TU-3) can be multiplexed via a TUG-3, containing a VC-3 plus the VC-3 path overhead (POH) and TU-3 pointer
seven TUG-2s can be multiplexed via a TUG-3, where each TUG-2 can contain one TU-2, three TU-12s, or four TU-11s
the AU-4 structure addresses the data as (K, L, M), where K is the TUG-3 number, L is the TUG-2 number, and M is the TU-11/TU-12 number
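The (K, L, M) addressing described above can be enumerated directly; the short sketch below (illustrative only, not 7705 SAR code) counts 63 TU-12s or 84 TU-11s per VC-4:

```python
# Illustrative enumeration of the (K, L, M) addressing described above.
def tu_addresses(tu_type="tu12"):
    """Yield (K, L, M) addresses in a VC-4: K = TUG-3, L = TUG-2, M = TU."""
    m_range = {"tu12": 3, "tu11": 4}[tu_type]
    for k in range(1, 4):          # 3 TUG-3s per VC-4
        for l in range(1, 8):      # 7 TUG-2s per TUG-3
            for m in range(1, m_range + 1):
                yield (k, l, m)

print(len(list(tu_addresses("tu12"))))  # 63 TU-12s (E1s) per VC-4
print(len(list(tu_addresses("tu11"))))  # 84 TU-11s (DS1s) per VC-4
```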
SONET/SDH path support
The 7705 SAR supports a subset of the standards-based channel mapping options. The following table shows path support on the 7705 SAR.
The 7705 SAR CLI always uses the SONET VT frame convention. For example, the same SONET CLI syntax and nomenclature would be used to configure both a VT1.5 and a VC11. The framing {sonet | sdh} command determines whether VTs or VCs are being configured. Use the show>port-tree port-id command to display the SONET/SDH path containers.
Path type |
Port framing |
Path configuration |
2-port OC3/STM1 Channelized Adapter card |
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card |
|
---|---|---|---|---|---|
4-port mode |
1-port mode |
||||
OC3 clear channel |
SDH |
STM1>AUG1>VC4 | Yes |
||
OC3 clear channel |
SONET |
OC3>STS3>STS3c SPE | Yes |
||
E1 |
SDH |
STM1>AUG1>VC4>TUG3>TUG2>VC12 | Yes |
Yes |
|
E1 |
SDH |
STM1>AUG1>VC3>TUG2>VC12 | Yes |
Yes |
|
E1 |
SDH |
STM1>AUG1>VC4>TUG3>VC3>DS3 | Yes |
|
|
E1 |
SDH |
STM1>AUG1>VC3>DS3 | Yes |
|
|
E1 |
SONET |
OC3>STS1 SPE>DS3 | Yes |
||
DS1 |
SDH |
STM1>AUG1>VC4>TUG3>TUG2>TU11>VC11 | Yes |
Yes |
|
DS1 |
SDH |
STM1>AUG1>VC3>TUG2>VC11 | Yes |
Yes |
|
DS1 |
SDH |
STM1>AUG1>VC4>TUG3>VC3>DS3 | Yes |
|
|
DS1 |
SDH |
STM1>AUG1>VC3>DS3 | Yes |
|
|
DS1 |
SONET |
OC3>STS1 SPE>VT GROUP>VT1.5 SPE | Yes |
Yes |
|
DS1 |
SONET |
OC3>STS1 SPE>DS3 | Yes |
||
OC12 clear channel |
SDH |
STM4>AUG4>VC4-C4 | Yes |
||
OC12 clear channel |
SONET |
OC12>STS12>STS12c SPE | Yes |
||
E1 using STS-3 |
SDH |
STM4>AUG4>AUG1>VC4>TUG3> TUG2> VC12 | Yes |
||
E1 using STS-1 |
SDH |
STM4>AUG4>AUG1>VC3> TUG2> VC12 | Yes |
||
DS1 using STS-3 |
SDH |
STM4>AUG4>AUG1>VC4>TUG3> TUG2> TU11> VC11 | Yes |
||
DS1 using STS-1 |
SDH |
STM4>AUG4>AUG1>VC3>TUG2>VC11 | Yes |
||
DS1 using STS-1 |
SONET |
OC12>STS12>STS1 SPE >VT GROUP >VT1.5 SPE | Yes |
SONET/SDH channelized port ID
When configuring a SONET/SDH port, users configure both SONET/SDH and TDM aspects of a channel. The CLI uses the sonet-sdh-index variable to identify a channel in order to match SONET/SDH parameters with TDM parameters for the channel.
A channelized port ID has one of the syntaxes shown in the following table, as applicable to channelization and mapping options. In the table, the syntax contains port and path components, where the port is slot/mda/port and the path is the sonet-sdh-index. The sonet-sdh-index consists of one or more index values (shown in braces and separated by dots) and can include a high-level path label (for example, sts1- or vt15-).
For example, port.sts1-{1 to 3} in the OC3/STM1 column represents a SONET/SDH port divided into STS-1 (or STM-0) payloads identified as sts1-1, sts1-2, and sts1-3.
| Channel speed | Port ID: OC12/STM4 | Port ID: OC3/STM1 | Port ID: DS3/E3 |
|---|---|---|---|
| SONET/SDH | | | |
| STS12/STM4 | port.sts12 | N/A | N/A |
| STS3/STM1 | port.sts3-{1 to 4} | port.sts3 | N/A |
| STS1/STM0 | port.sts1-{1 to 4}.{1 to 3} | port.sts1-{1 to 3} | N/A |
| TUG3 | port.tug3-{1 to 4}.{1 to 3} | port.tug3-{1 to 3} | N/A |
| TU3 | N/A | port.tu3-{1 to 3} | N/A |
| VT1.5/VC11 1 | port.vt15-{1 to 4}.{1 to 3}.{1 to 4}.{1 to 7} | port.vt15-{1 to 3}.{1 to 4}.{1 to 7} | N/A |
| VT2/VC12 1 | port.vt2-{1 to 4}.{1 to 3}.{1 to 3}.{1 to 7} | port.vt2-{1 to 3}.{1 to 3}.{1 to 7} | N/A |
| TDM | | | |
| DS3/E3 | N/A | port.{1 to 3} | port |
| DS1 in DS3 | N/A | port.{1 to 3}.{1 to 28} | port.{1 to 28} |
| DS1 in VT2 | port.{1 to 4}.{1 to 3}.{1 to 3}.{1 to 7} | port.{1 to 3}.{1 to 3}.{1 to 7} | N/A |
| DS1 in VT1.5 | port.{1 to 4}.{1 to 3}.{1 to 4}.{1 to 7} | port.{1 to 3}.{1 to 4}.{1 to 7} | N/A |
| E1 in DS3 | N/A | port.{1 to 3}.{1 to 21} | port.{1 to 21} |
| E1 in VT2 | port.{1 to 4}.{1 to 3}.{1 to 3}.{1 to 7} | port.{1 to 3}.{1 to 3}.{1 to 7} | N/A |
| N*DS0 in DS1 in DS3 | N/A | port.{1 to 3}.{1 to 28}.{1 to 24} | port.{1 to 28}.{1 to 24} |
| N*DS0 in DS1 in VT2 | port.{1 to 4}.{1 to 3}.{1 to 3}.{1 to 7}.{1 to 24} | port.{1 to 3}.{1 to 3}.{1 to 7}.{1 to 24} | N/A |
| N*DS0 in DS1 in VT1.5 | port.{1 to 4}.{1 to 3}.{1 to 4}.{1 to 7}.{1 to 24} | port.{1 to 3}.{1 to 4}.{1 to 7}.{1 to 24} | N/A |
| N*DS0 in E1 in DS3 | N/A | port.{1 to 3}.{1 to 21}.{2 to 32} | port.{1 to 21}.{2 to 32} |
| N*DS0 in E1 in VT2 | port.{1 to 4}.{1 to 3}.{1 to 3}.{1 to 7}.{2 to 32} | port.{1 to 3}.{1 to 3}.{1 to 7}.{2 to 32} | N/A |

Note:
1. Supported by TDM satellite.
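As an illustration of how the sonet-sdh-index values in the table combine (hypothetical helper functions, not 7705 SAR code), the following sketch builds the port ID and matching path container for a DS1 carried in a VT1.5 on an OC3/STM1 port:

```python
# Illustrative builder for channelized port IDs on an OC3/STM1 port,
# following the table above. Helper names are hypothetical.
def ds1_in_vt15_port_id(slot, mda, port, sts1, vtg, vt):
    """DS1 in VT1.5 on OC3/STM1: port.{1 to 3}.{1 to 4}.{1 to 7}."""
    if not (1 <= sts1 <= 3 and 1 <= vtg <= 4 and 1 <= vt <= 7):
        raise ValueError("index out of range for an OC3/STM1 port")
    return f"{slot}/{mda}/{port}.{sts1}.{vtg}.{vt}"

def vt15_path_id(slot, mda, port, sts1, vtg, vt):
    """Matching SONET path container: port.vt15-{1 to 3}.{1 to 4}.{1 to 7}."""
    return f"{slot}/{mda}/{port}.vt15-{sts1}.{vtg}.{vt}"

print(ds1_in_vt15_port_id(1, 2, 1, sts1=2, vtg=3, vt=5))  # 1/2/1.2.3.5
print(vt15_path_id(1, 2, 1, sts1=2, vtg=3, vt=5))         # 1/2/1.vt15-2.3.5
```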
Automatic protection switching
This section contains information about the following topics:
APS overview
Automatic protection switching (APS) allows users to protect a SONET/SDH port or link with a backup (protection) facility of the same speed but from a different adapter card. APS provides protection against a port, signal, or adapter card failure. The 7705 SAR supports 1+1 APS protection in compliance with GR-253-CORE and ITU-T Recommendation G.841 to provide SONET/SDH carrier-grade reliability. All SONET/SDH paths and channels within a SONET/SDH port are protected.
When APS is enabled, the 7705 SAR constantly monitors the health of the APS links, APS ports, and APS-equipped adapter cards. If the signal on the active (working) port degrades or fails, the network proceeds through a predefined sequence of steps to transfer (or switch over) traffic processing to the protection port. This switchover is done very quickly to minimize traffic loss. Traffic is streamed from the protection port until the fault on the working port is cleared, at which time the traffic may optionally revert to the working port.
The 7705 SAR supports 1+1 single-chassis APS (SC-APS) and 1+1 multi-chassis APS (MC-APS). In an SC-APS group, both the working and protection circuit must be configured on the same node. In an MC-APS group, the working and protection circuits are configured on two separate nodes, providing protection from node failure in addition to protection from link and hardware failure.
Unidirectional and bidirectional modes are supported:
unidirectional APS (Uni-1Plus1) – in unidirectional mode, only the port in the failed direction switches to the protection port. Unidirectional mode is supported only on SC-APS.
bidirectional APS – in bidirectional mode, a failure in either direction causes both the near-end and far-end equipment to switch to the protection port in each direction. Bidirectional mode is the default mode and is supported on both SC-APS and MC-APS.
For SC-APS and MC-APS with MEF 8 services where the remote device performs source MAC validation, the MAC address of the channel group in each of the redundant interfaces may be configured to the same MAC address using the mac CLI command.
SC-APS
In an SC-APS group, both the working and protection circuits terminate on the same node. SC-APS is supported in unidirectional or bidirectional mode on:
2-port OC3/STM1 Channelized Adapter cards for TDM CES (Cpipes) and TDM CESoETH with MEF 8 with DS3/DS1/E1/DS0 channels
4-port OC3/STM1 / 1-port OC12/STM4 Adapter cards for MLPPP access ports or TDM CES (Cpipes) and TDM CESoETH (MEF 8) access ports with DS1/E1 channels, or on a network port configured for POS
4-port OC3/STM1 Clear Channel Adapter cards network side (configured for POS operation)
SC-APS with TDM access is supported on DS3, DS1, E1, and DS0 (64 kb/s) channels.
The working and protection circuits of an SC-APS group must be on two ports on different adapter cards.
The following figure shows an SC-APS group with physical port and adapter card failure protection. SC-APS application shows a packet network using SC-APS.
MC-APS
MC-APS extends the functionality offered by SC-APS to include protection against 7705 SAR node failure. MC-APS is supported in bidirectional mode on:
2-port OC3/STM1 Channelized Adapter cards for TDM CES (Cpipes) and TDM CESoETH with MEF 8 with DS3/DS1/E1/DS0 channels
4-port OC3/STM1 / 1-port OC12/STM4 Adapter cards for MLPPP access ports or CES (Cpipes) and TDM CESoETH (MEF 8) access ports with DS1/E1 channels
MC-APS with TDM access is supported on DS3, DS1, E1, and DS0 (64 kb/s) channels. TDM SAP-to-SAP with MC-APS is not supported.
With MC-APS, the working circuit of an APS group can be configured on one 7705 SAR node while the protection circuit of the same APS group is configured on a different 7705 SAR node. The working and protection nodes are connected by an IP link that establishes an MC-APS signaling path between the nodes.
The working and protection circuits must have compatible configurations, such as the same speed, framing, and port type. The circuits in an APS group on both the working and protection nodes must also have the same group ID, but they can have different port descriptions. In order for MC-APS to function correctly, pseudowire redundancy must be configured on both the working and protection circuits. For more information, see the 7705 SAR Services Guide. MC-APS with pseudowire redundancy also supports inter-chassis backup (ICB); see MC-APS and inter-chassis backup for more information.
The working and protection nodes can be different platforms, such as a 7705 SAR-8 Shelf V2 and a 7705 SAR-18. However, to prevent possible switchover performance issues, avoid mixing different platform types in the same MC-APS group. The 7705 SAR does not enforce configuration consistency between the working circuit and the protection circuit. Additionally, no service or network-specific configuration data is signaled or synchronized between the two routers.
An MC-APS signaling path is established using the IP link between the two routers by matching APS group IDs. A heartbeat protocol can also be used to add robustness. The signaling path verifies that one router is configured as the working circuit and the other is configured as the protection circuit. In case of a mismatch, an incompatible neighbor trap is generated. The protection router uses K1/K2 byte data, member circuit status, and the settings configured for the APS tools commands to select the working circuit. Changes in working circuit status are sent across the MC-APS signaling link from the working router to keep the protection router synchronized. External requests such as lockout, force, and manual switches are allowed only on the node with the protection circuit.
The following figure shows an MC-APS group with physical port, adapter card, and node protection. MC-APS application shows a packet network using MC-APS.
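A minimal sketch of the neighbor compatibility check described above (illustrative only, with hypothetical field names): the APS group IDs must match and exactly one node must be configured with the working circuit.

```python
# Illustrative MC-APS neighbor compatibility check; not 7705 SAR code.
def check_mc_aps_neighbors(local, remote):
    """Each argument is a dict such as {'group_id': 1, 'role': 'working'}."""
    if local["group_id"] != remote["group_id"]:
        return "no match: group IDs differ"
    if {local["role"], remote["role"]} != {"working", "protection"}:
        return "incompatible neighbor: one working and one protection circuit required"
    return "compatible"

print(check_mc_aps_neighbors({"group_id": 1, "role": "working"},
                             {"group_id": 1, "role": "protection"}))  # compatible
print(check_mc_aps_neighbors({"group_id": 1, "role": "working"},
                             {"group_id": 1, "role": "working"}))     # incompatible neighbor
```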
MC-APS and inter-chassis backup
Inter-chassis backup (ICB) spoke SDPs are supported for use with Cpipe services in an MC-APS configuration. ICB improves switch times, provides additional protection in case of network failures, and reduces packet loss when an active endpoint is switched from a failed MC-APS node to the protection node. The following figure shows an MC-APS group with pseudowire redundancy and ICB protection.
If the active link on the access side fails, an MC-APS switchover is triggered and a pseudowire switchover occurs. A failure on the network side triggers a pseudowire switchover but not an MC-APS switchover. For detailed information about pseudowire redundancy with ICB protection, see the 7705 SAR Services Guide, ‟PW Redundancy and Inter-Chassis Backup”.
K1 and K2 bytes
The APS protocol uses the K1 and K2 bytes of the SONET/SDH header to exchange commands and replies between the near end and far end.
The switch priority of a request is assigned by bits 1 through 4 of the K1 byte, as shown in the following table.
| Bits | Condition |
|---|---|
| 1111 | Lockout of Protection |
| 1110 | Forced Switch |
| 1101 | SF - High Priority (not used in 1+1 APS) |
| 1100 | SF - Low Priority |
| 1011 | SD - High Priority (not used in 1+1 APS) |
| 1010 | SD - Low Priority |
| 1001 | Not used |
| 1000 | Manual Switch |
| 0111 | Not used |
| 0110 | Wait-to-Restore |
| 0101 | Not used |
| 0100 | Exercise |
| 0011 | Not used |
| 0010 | Reverse Request |
| 0001 | Do Not Revert |
| 0000 | No Request |
In unidirectional mode, the K1 and K2 bytes are not used to coordinate switch action; however, the K1 byte is still used to inform the other end of the local action, and bit 5 of the K2 byte is set to 0 to indicate 1+1 APS mode (see K2 byte functions).
In bidirectional mode, the highest-priority local request is compared to the remote request (received from the far-end node using an APS command), and whichever request has the greater priority is selected. The requests can be automatically initiated (such as Signal Failure or Signal Degrade), external (such as Lockout, Forced Switch, Request Switch), or state requests (such as Revert-Time timers).
The channels requesting the switch action are assigned by bits 5 through 8. Only channel number codes 0 and 1 are supported on the 7705 SAR. If channel 0 is selected, the condition bits show the received protection channel status. If channel 1 is selected, the condition bits show the received working channel status.
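A minimal sketch of the selection rule described above (illustrative only, not the 7705 SAR implementation): the request code occupies K1 bits 1 to 4, the channel number occupies bits 5 to 8, and in bidirectional mode the numerically higher request code between the local and remote requests is selected.

```python
# Illustrative K1-byte handling for 1+1 bidirectional APS; not 7705 SAR code.
# Subset of K1 request codes relevant to 1+1 APS (see the table above).
K1_REQUESTS = {
    0b1111: "Lockout of Protection", 0b1110: "Forced Switch",
    0b1100: "SF - Low Priority",     0b1010: "SD - Low Priority",
    0b1000: "Manual Switch",         0b0110: "Wait-to-Restore",
    0b0100: "Exercise",              0b0010: "Reverse Request",
    0b0001: "Do Not Revert",         0b0000: "No Request",
}

def k1_byte(request, channel):
    """Bits 1 to 4 carry the request code, bits 5 to 8 the channel (0 or 1)."""
    return (request << 4) | (channel & 0x0F)

def select_request(local_request, remote_request):
    """Bidirectional mode: the higher-priority (numerically larger) request wins."""
    return max(local_request, remote_request)

local, remote = 0b1010, 0b0000            # local SD, remote No Request
winner = select_request(local, remote)
print(K1_REQUESTS[winner], hex(k1_byte(winner, channel=1)))
```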
The K2 byte is used to indicate bridging actions performed at the line termination equipment (LTE), the provisioned architecture, and mode of operation, as shown in the following table.
| Bits | Value | Function |
|---|---|---|
| 1 to 4 | — | Channel number codes |
| 5 | 0 | Provisioned for 1+1 mode |
| 5 | 1 | Provisioned for 1:n mode |
| 6 to 8 | 111 | Line AIS |
| 6 to 8 | 110 | Line RDI |
| 6 to 8 | 101 | Provisioned for bidirectional switching |
| 6 to 8 | 100 | Provisioned for unidirectional switching |
| 6 to 8 | 011 | Reserved for future use |
| 6 to 8 | 010 | Reserved for future use |
| 6 to 8 | 001 | Reserved for future use |
| 6 to 8 | 000 | Reserved for future use |
Bidirectional 1+1 APS example
The following table outlines the steps that the bidirectional APS process will go through during a typical automatic switching event. The example is read row by row, from left to right, to provide the complete process of the bidirectional switching event.
APS commands are exchanged in the K1 and K2 bytes on the protection line.

| Status | APS command sent B to A | APS command sent A to B | Action at Site B | Action at Site A |
|---|---|---|---|---|
| No failure (protection line is not in use) | ‟No request” | ‟No request” | No action | No action |
| Working line degraded in direction A to B | ‟SD” on working channel 1 | ‟No request” | Failure detected, notify A and switch to protection line | No action |
| Site A receives SD failure condition | Same | ‟Reverse request” | No action | Remote failure detected, acknowledge and switch to protection line |
| Site B receives ‟Reverse request” | Same | Same | No action | No action |
Revertive mode
1+1 APS provides revertive and non-revertive modes; non-revertive mode is the default option. In revertive mode, the activity is switched back to the working port after the working line has recovered from a failure (or the manual switch is cleared). In non-revertive mode, a switch to the protection line is maintained even after the working line has recovered from a failure (or the manual switch is cleared).
To prevent frequent automatic switches that result from intermittent failures, a revert-time is defined for revertive switching. The revert-time is configurable from 0 to 60 min in increments of 1 min; the default value is 5 min. In some scenarios, performance issues can occur if the revert-time is set to 0; therefore, it is recommended that the revert-time always be set to a value of 1 or higher. Any change in the revert-time value takes effect upon the next initiation of the wait-to-restore (WTR) timer. The change does not modify the duration of a WTR timer that has already been started. The WTR timer of a non-revertive switch can be assumed to be infinite.
If both working and protection lines fail, the line that has less-severe errors will be active. If there is signal degradation on both ports, the active port that failed last will stay active. If there is signal failure on both ports, the working port will always be active because signal failure on the protection line is a higher priority than on the working line.
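A minimal sketch of the revertive behavior described above (illustrative only, not the 7705 SAR implementation): when the working line recovers, a wait-to-restore timer equal to the configured revert-time runs before activity reverts, and non-revertive mode behaves as if the timer were infinite.

```python
# Illustrative wait-to-restore (WTR) logic for revertive switching; not 7705 SAR code.
import math

class RevertTimer:
    def __init__(self, revertive=False, revert_time_min=5):
        # Non-revertive mode behaves as if the WTR timer were infinite.
        self.wtr_minutes = revert_time_min if revertive else math.inf
        self.remaining = None

    def working_line_recovered(self):
        """Start the WTR timer when the working line clears its fault."""
        self.remaining = self.wtr_minutes

    def tick(self, minutes=1):
        """Return True when activity should revert to the working line."""
        if self.remaining is None:
            return False
        self.remaining -= minutes
        return self.remaining <= 0

timer = RevertTimer(revertive=True, revert_time_min=5)
timer.working_line_recovered()
print(any(timer.tick() for _ in range(5)))  # True after 5 minutes
```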
APS tools commands
Lockout protection
The lockout protection command (tools>perform>aps>lockout) disables use of the protection line. Because the command has the highest priority, a failed working line using the protection line is switched back to itself even if it is in a fault condition. No switches to the protection line are allowed when the line is locked out. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS lockout command.
Request switch of active to protection
The request or manual switch of active to protection command (tools>perform>aps> request) switches the active line to use the protection line (by issuing a manual switch request) unless a request of equal or higher priority is already in effect. If the active line is already on the protection line, no action takes place. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS request command.
Request switch of active to working
The request or manual switch of active to working command (tools>perform>aps> request) switches the active line back from the protection line to the working line (by issuing a manual switch request) unless a request of equal or higher priority is already in effect. If the active line is already on the working line, no action takes place. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS request command.
Forced switch from working to protection
The command tools>perform>aps>force>working forces an activity switch away from the working line to the protection line unless a request of equal or higher priority is already in effect. When the forced switch of working to protection command is in effect, it may be overridden either by a lockout command or by a signal fault on the protection line. If the active line is already on the protection line, no action takes place. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS force command.
Forced switch from protection to working
The command tools>perform>aps>force>protect forces an activity switch away from the protection line and back to the working line unless a request of equal or higher priority is already in effect. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS force command.
Exercise
The exercise command (tools>perform>aps>exercise) is only supported in 1+1 APS bidirectional mode. The command exercises the protection line by sending an exercise request over the protection line to the far end and expecting a reverse request response back. The switch is not completed during the exercise routine. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the APS exercise command.
APS failure codes
Protection switching byte failure (APS-PSB)
This failure indicates that the received K1 byte is either invalid or inconsistent. An invalid code defect occurs if the same K1 value is received for three consecutive frames and that value is either an unused code or is irrelevant for the specific switching operation. An inconsistent code defect occurs when no three consecutive K1 bytes received in the last 12 frames are identical.
If the failure persists for 2.5 s, a Protection Switching Byte alarm is raised. When this failure is declared, the protection line is treated as if it were in the SF state. The received signal is then selected from the working line.
When the failure is absent for 10 s, the alarm is cleared and the SF state of the protection line is removed.
This alarm can only be raised by the active port operating in bidirectional mode.
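The defect detection and alarm timing described above can be sketched as follows (illustrative only, not the 7705 SAR implementation):

```python
# Illustrative APS-PSB defect detection and alarm soaking; not 7705 SAR code.
from collections import deque

def k1_inconsistent(last_12_k1_bytes):
    """True if no three consecutive K1 bytes of the last 12 frames are identical."""
    window = list(last_12_k1_bytes)
    for i in range(len(window) - 2):
        if window[i] == window[i + 1] == window[i + 2]:
            return False
    return True

class SoakTimer:
    """Raise the alarm after 2.5 s of defect; clear it after 10 s defect-free."""
    def __init__(self, raise_s=2.5, clear_s=10.0):
        self.raise_s, self.clear_s = raise_s, clear_s
        self.present_for = self.absent_for = 0.0
        self.alarm = False

    def update(self, defect_present, dt):
        if defect_present:
            self.present_for += dt
            self.absent_for = 0.0
            if self.present_for >= self.raise_s:
                self.alarm = True
        else:
            self.absent_for += dt
            self.present_for = 0.0
            if self.absent_for >= self.clear_s:
                self.alarm = False
        return self.alarm

frames = deque([0x21, 0x22, 0x23, 0x21, 0x22, 0x23] * 2, maxlen=12)
print(k1_inconsistent(frames))          # True: no run of 3 identical bytes
soak = SoakTimer()
print(soak.update(True, dt=2.5))        # True: alarm raised after 2.5 s of defect
```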
Channel mismatch failure (APS-CM)
This failure indicates that there is a channel mismatch between the transmitted K1 bytes and the received K2 bytes. A defect is declared when the received K2 channel number differs from the transmitted K1 channel number for more than 50 ms after 3 identical K1 bytes are sent. The monitoring for this condition is continuous, not just when the transmitted value of K1 changes.
If the failure persists for 2.5 s, a Channel Mismatch Failure alarm is raised. When this failure is declared, the protection line is treated as if it were in the SF state. The received signal is then selected from the working line.
When the failure is absent for 10 s, the alarm is cleared and the SF state of the protection line is removed.
This alarm can only be raised by the active port operating in bidirectional mode.
APS mode mismatch failure (APS-MM)
This failure can occur for two reasons. The first reason is that the received K2 byte indicates that 1:N protection switching is being used by the far end of the OC-N line, instead of 1+1 protection switching. The second reason is that the received K2 byte indicates that unidirectional mode is being used by the far end while the near end is using bidirectional mode. This defect is detected within 100 ms of receiving a K2 byte that indicates either of these conditions.
If the failure persists for 2.5 s, a Mode Mismatch Failure alarm is raised. When this failure is declared, if the defect indicates that the far end is configured for unidirectional mode, then the OC-N port reverts from its current bidirectional mode to unidirectional mode. However, the port continues to monitor the received K2 byte, and if the K2 byte indicates that the far end has switched to bidirectional mode, the OC-N port then reverts to bidirectional mode as well. The monitoring stops if the user explicitly reconfigures the local port to operate in unidirectional mode.
When the failure is absent for 10 s, the alarm is cleared, and the configured mode, which is 1+1 bidirectional, is used.
This alarm can only be raised by the active port operating in bidirectional mode.
Far-end protection line failure (APS-FEPL)
This failure occurs when a K1 byte is received in three consecutive frames that indicates a signal fail (SF) at the far end of the protection line. This failure forces the received signal to be selected from the working line.
If the failure persists for 2.5 s, a Far-End Protection Line Failure alarm is raised. This alarm can only be raised by the active port operating in bidirectional mode. When the failure is absent for 10 s, the alarm is cleared.
T1/E1 line card redundancy
This section contains information about the following topics:
T1/E1 LCR overview
T1/E1 line card redundancy (LCR) uses redundant adapter cards to protect T1/E1 services in case of hardware failures. T1/E1 LCR provides protection against adapter card or node failures. When T1/E1 LCR is used in conjunction with pseudowire redundancy, the network path between the endpoints is also protected. Protection is provided specifically for Cpipe services at the clear channel level and at the channelized level.
When T1/E1 LCR is enabled, the 7705 SAR constantly monitors the health of the adapter cards. If the active (working) adapter card fails (for example, because a card has been removed or due to a bus error), the system proceeds through a predefined sequence of steps to transfer (or switch over) traffic processing to the protection MDA. This switchover is done very quickly to minimize traffic loss. Traffic is moved to the protection adapter card until the fault on the working adapter card is cleared, at which time the traffic may optionally revert to the working adapter card.
T1/E1 LCR is supported on the following cards on the 7705 SAR-8 Shelf V2 and the 7705 SAR-18:
16-port T1/E1 ASAP Adapter card
32-port T1/E1 ASAP Adapter card
T1/E1 LCR includes support for single-chassis LCR (SC-LCR) and multi-chassis LCR (MC-LCR). In an SC-LCR group, both the working and protection adapter cards must be configured on the same node. In an MC-LCR group, the working adapter card and protection adapter card are configured on two separate nodes, providing protection from node failure in addition to protection from adapter card hardware failure.
SC-LCR
In an SC-LCR group, both the working and protection adapter cards are configured with the same LCR group ID on the same node. The working and protection adapter cards are required to be the same type.
SC-LCR is supported for TDM CES (Cpipes). SC-LCR with TDM access is supported on DS1, E1, and DS0 (64 kb/s) channels.
SC-LCR supports TDM SAP-to-SAP connections when both SAPs are configured as LCR SAPs.
SC-LCR also supports TDM SAP-to-spoke SDP connections over an MPLS network. In this configuration, the far-end connection may or may not be configured for LCR.
MC-LCR
MC-LCR extends the functionality offered by SC-LCR to include protection against 7705 SAR node failure. With MC-LCR, the working adapter card of an LCR group is configured on one 7705 SAR node while the protection adapter card of the same LCR group is configured on a different 7705 SAR node. The working and protection nodes are connected by an IP link (directly or indirectly) that establishes a multi-chassis protocol (MCP) link between the nodes.
MC-LCR is supported for TDM CES (Cpipes). MC-LCR with TDM access is supported on DS1, E1, and DS0 (64 kb/s) channels.
MC-LCR supports TDM SAP-to-SAP connections when both LCR SAPs are configured using the same adapter card on each node.
MC-LCR also supports TDM SAP-to-spoke SDP connections over an MPLS network. In this configuration, the far-end connection may or may not be configured for LCR.
The working and protection adapter cards must be the same type and must have compatible configurations, such as the same speed, framing, and port type. The adapter cards in an LCR group on both the working and protection nodes must also have the same group ID. The LCR groups can have different descriptions. In order for MC-LCR to function correctly, pseudowire redundancy must be configured on both the working and protection adapter cards. For information about pseudowire redundancy, see the 7705 SAR Services Guide, ‟Pseudowire Redundancy”. MC-LCR with pseudowire redundancy also supports inter-chassis backup (ICB); see MC-LCR and inter-chassis backup for more information.
An MCP link can be established using the IP link between the two nodes by matching LCR group IDs. The signaling path verifies that one node is configured as the working adapter card and the other is configured as the protection adapter card. In case of a mismatch, an incompatible neighbor trap is generated. The protection node uses member adapter card status and the settings configured in the LCR tools commands to select the working adapter card. Changes in working adapter card status are sent across the MC-LCR signaling link from the working node to keep the protection node synchronized. External requests such as lockout and force switch are allowed only on the node with the protection adapter card.
MC-LCR and inter-chassis backup
ICB spoke SDPs are supported for use with Cpipe services in an MC-LCR configuration. ICB improves switch times, provides additional protection in case of network failures, and reduces packet loss when an active endpoint is switched from a failed MC-LCR node to the protection node.
If the active link on the access side fails, an MC-LCR switchover is triggered and a pseudowire switchover is subsequently triggered. A failure on the network side triggers a pseudowire switchover but not an MC-LCR switchover. For detailed information about pseudowire redundancy with ICB protection, see the 7705 SAR Services Guide, ‟PW Redundancy and Inter-Chassis Backup”.
Revertive mode
T1/E1 LCR provides revertive and non-revertive modes; non-revertive mode is the default option. In revertive mode, the activity is switched back to the working adapter card after it has recovered from a failure. In non-revertive mode, a switch to the protection adapter card is maintained even after the working adapter card has recovered from a failure.
To prevent frequent automatic switches that result from intermittent failures, a revert-time is defined for revertive switching. The revert-time is configurable from 0 to 60 min in increments of 1 min; the default value is 5 min. In some scenarios, performance issues can occur if the revert-time is set to 0; therefore, it is recommended that the revert-time always be set to a value of 1 or higher. Any change in the revert-time value takes effect upon the next initiation of the wait-to-restore (WTR) timer. The change does not modify the duration of a WTR timer that has already been started. The WTR timer of a non-revertive switch can be assumed to be infinite.
LCR tools commands
The LCR tools commands can only be executed on the node used in an SC-LCR group or on the protection node of an MC-LCR group. The commands cannot be executed on the working node of an MC-LCR group.
Force activity from working card
The tools>perform>lcr>force>working command forces activity away from the working adapter card to the protection adapter card so that the protection adapter card becomes active unless an internal request of equal or higher priority is already in effect. When this command is in effect, it can be overridden either by a tools>perform>lcr>lockout command or by a signal fault on the protection adapter card. If the protection adapter card is already the active adapter card, no action takes place. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the LCR force command.
Force activity from protection card
The tools>perform>lcr>force>protect command forces activity away from the protection adapter card to the working adapter card so that the working adapter card becomes active unless an internal request of equal or higher priority is already in effect. See the 7705 SAR OAM and Diagnostics Guide, ‟Tools”, for information about the LCR force command.
Deploying preprovisioned components
When a CSM or adapter card is installed in a preprovisioned slot, the system tests for discrepancies between the preprovisioned card and card-type configurations and the hardware types actually installed. If there are inconsistencies, error messages are displayed and the card does not initialize. When the correct preprovisioned cards are installed in the appropriate chassis slots, alarm, status, and performance details are displayed in the CLI.
Microwave link
This section contains information about the following topics:
Microwave link overview
A microwave link allows a 7705 SAR-8 Shelf V2 or 7705 SAR-18 to be connected to a 9500 MPR-e radio node. The MPR-e is the zero-footprint (outdoor) microwave solution offered by Nokia that allows customers to migrate from TDM microwave to pure packet microwave. The following MPR-e radio variants are supported:
MPT-MC - Microwave Packet Transport, Medium Capacity (ODU)
MPT-HC V2/9558HC - Microwave Packet Transport, High Capacity Version 2 (ODU)
MPT-XP - Microwave Packet Transport, High Capacity (very high power version of the MPT-HC V2/9558HC) (ODU)
MPT-HQAM - Microwave Packet Transport, High Capacity (MPT-HC-QAM) or Extended Power (MPT-XP-QAM) with 512/1024 QAM (ODU)
MPT-HLC and MPT-HLC plus - Microwave Packet Transport, High-Capacity Long-Haul Cubic (ANSI) (IDU)
A microwave link is configured on a 7705 SAR-8 Shelf V2 or 7705 SAR-18 as a virtual port object (not as a physical port) using the CLI command mw-link-id (for more information about how to configure a microwave link, see Microwave link commands).
The supported microwave link types are 1+0 and 1+1 Hot Standby (HSB). To deploy an N+0 link (with N ≥ 2), multiple links of 1+0 can be configured separately.
A microwave link connection is made from ports 1 through 4 on a Packet Microwave Adapter card to an MPR-e radio using one of the methods described in the 7705 SAR Packet Microwave Adapter Card Installation Guide, ‟Delivering Data to an MPR-e Radio”. The radio can be configured in standalone mode to provide a basic microwave connection as described in Standalone mode or in single network element (single NE) mode to provide the advanced networking capabilities described in Single NE mode. The default configuration is single NE mode.
When connected to an MPR-e radio, these ports, with a microwave link configured, operate as Gigabit Ethernet ports and provide the same features as the other ports (ports 5 through 8), except for the following:
802.1x authentication
active/standby operation on Ethernet access ports configured as LAGs
hard policing on Ethernet ports
If a microwave link is not configured on ports 1 through 4, they provide all of the same features as the other Gigabit Ethernet ports (ports 5 through 8).
Standalone mode
A microwave link from ports 1 through 4 on a Packet Microwave Adapter card to an MPR-e radio that is configured in standalone mode provides a basic microwave connection to the MPR-e radio. In standalone mode, each MPR-e radio that is connected to a 7705 SAR-8 Shelf V2 or 7705 SAR-18 is managed as a separate standalone NE by the MPT Craft Terminal (MCT) Element Manager.
Single NE mode
A microwave link from ports 1 through 4 on a Packet Microwave Adapter card to an MPR-e radio that is configured in single NE mode provides the following networking capabilities to the radio over the microwave link:
Single NE management
MWA allows the 7705 SAR-8 Shelf V2 or 7705 SAR-18 and the MPR-e radios to which it is connected to be integrated and managed as a single NE. The following features are part of single NE management:
One management IP address
The individual management and IP addresses of the MPR-e radios are no longer required for network management. When managing a microwave network (consisting of a 7705 SAR-8 Shelf V2 or 7705 SAR-18 that is connected to one or more MPR-e radios) using an element/network manager, only the IP address of the 7705 SAR-8 Shelf V2 or 7705 SAR-18 needs to be entered. This capability optimizes the microwave network’s IP addressing plan.
MPR-e radio configuration management
For an MPR-e configuration, the required MWA-specific parameters are configured on the 7705 SAR side using the CLI and the required non-MWA parameters are configured on the MPR-e side using the MCT.
The following MWA-specific parameters are configured on the 7705 SAR side:
1+1 HSB parameters
Epipe VLAN SAP parameters (in a mixed microwave link scenario, where there is interworking between a 7705 SAR MPR-e system and a Wavence MSS system using a TDM2Ethernet service, specific MPR-e system parameters are configured under the Epipe VLAN SAP; for more information, see the 7705 SAR Services Guide, ‟Configuring Epipe SAP Microwave Link Parameters for Interworking with TDM2 Ethernet”).
The following parameters are configured on the MPR-e side:
radio link parameters
QoS classification parameters
Configuration done on the MPR-e side is collected in a configuration file; this file can be saved to a 7705 SAR-8 Shelf V2 or 7705 SAR-18 using the Commit button function on the MCT or an admin>save CLI command on the 7705 SAR-8 Shelf V2 or 7705 SAR-18.
MPR-e radio alarm management
An MPR-e radio generates alarms for fault conditions pertaining to the MPR-e hardware and to the microwave link over which it is connected. The alarms are sent to the 7705 SAR-8 Shelf V2 or 7705 SAR-18, which turns the alarm notifications into SNMP traps and log events. These log events are controlled in the same way as all other events on the 7705 SAR-8 Shelf V2 and 7705 SAR-18 and can be displayed using the show>log>event-control>mwmgr command. See the 7705 SAR System Management Guide, ‟Event and accounting logs”, for more information.
MPR-e radio software and upgrade management
The single NE capability optimizes the MPR-e radio software installation and upgrade process. The MPR-e radio software is bundled with the 7705 SAR software as one package, so there is no need to look for and download the MPR-e radio software separately. The 7705 SAR software package containing the MPR-e radio software can be downloaded from a directory on OLCS. The operator can copy the software package onto a compact flash or network store on the 7705 SAR-8 Shelf V2 or 7705 SAR-18.
MPR-e radio configuration database file management
An MPR-e radio’s database file is stored and backed up on a 7705 SAR-8 Shelf V2 or 7705 SAR-18. If an old MPR-e radio is replaced by a new one, the new MPR-e radio downloads the MPR-e radio software from the 7705 SAR-8 Shelf V2 or 7705 SAR-18, along with the backed-up database file of the old MPR-e radio. This means that the MPR-e radio does not need to be reconfigured after a radio hardware replacement.
A separate database file is required for each managed MPR-e radio. The user specifies the filename of the database file to be used during provisioning of the radio on the 7705 SAR-8 Shelf V2 or 7705 SAR-18 using the config>port>mw>radio> database CLI command.
MPR-e radio inventory and microwave link performance statistics
The following MPR-e radio system information and microwave link information and statistics can be accessed through a CLI session on the 7705 SAR-8 Shelf V2 or 7705 SAR-18:
MPR-e radio system information
equipment type
inventory information
radio frequency band
temperature
radio transmit status
microwave link statistics
MPR-e radio Ethernet statistics
local Tx power
local Rx power
remote Tx power
remote Rx power
Note: Local/remote Rx power monitoring and local/remote Tx power monitoring are also known as receive signal level (RSL) monitoring and transmit signal level (TSL) monitoring, respectively.
MPR-e radio reset control
MPR-e radio reset control is provided on the 7705 SAR-8 Shelf V2 or 7705 SAR-18. During an MPR-e radio reset, the microwave link is brought down and an upper layer applications action is triggered, such as message rerouting and clock source switching by the system synchronization unit (SSU).
MPR-e radio mute control
MPR-e radio mute control can be enabled through the CLI/SNMP or by using the MCT. The MCT and CLI are synchronized to show the current state of the MPR-e radio mute function.
Microwave link fast fault detection
The microwave link fast fault detection (FFD) capability allows a 7705 SAR-8 Shelf V2 or 7705 SAR-18 to directly detect MPR-e radio or microwave link faults using proprietary messaging. The following fault types are detected by FFD:
a radio signal failure
an MPR-e radio hardware failure
an incompatible MPR-e radio setting
a high bit error rate (HBER) condition
a remote defect indication (RDI) condition
FFD does not cause the SSU to disqualify the microwave link as a clock source if a fault condition is detected; SSM must be enabled in order to provide this function.
The microwave link hold time (hold-up time and hold-down time) must be configured in order to suppress link flapping. The hold-up and hold-down times delay advertising the transition of the microwave link status to the upper layer applications, including IP/MPLS and SSU. The hold-time range is between 0 and 900 s.
If microwave link faults are detected, an event is logged and the link is disabled. Some detected faults may be selectively suppressed using the suppress-faults command. When faults are suppressed, the event is still logged, but the microwave link is not disabled. Operators can suppress HBER faults, RSL threshold crossing faults, or RDI faults. By default, the system does not suppress faults for FFD.
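A minimal sketch of the hold-time behavior described above (illustrative only, not 7705 SAR code): a link-state transition is advertised to the upper layers only after the new state has been stable for the configured hold-up or hold-down time. The class and timer names are hypothetical.

```python
# Illustrative microwave link hold-up/hold-down debouncing; not 7705 SAR code.
class MwLinkHoldTimer:
    def __init__(self, hold_up_s=5, hold_down_s=5):   # configurable 0 to 900 s
        self.hold_up_s, self.hold_down_s = hold_up_s, hold_down_s
        self.advertised_up = False
        self.pending_state = False
        self.stable_for = 0.0

    def update(self, link_up, dt):
        """Advertise a transition only after it has been stable long enough."""
        if link_up != self.pending_state:
            self.pending_state = link_up
            self.stable_for = 0.0
        else:
            self.stable_for += dt
        hold = self.hold_up_s if self.pending_state else self.hold_down_s
        if self.pending_state != self.advertised_up and self.stable_for >= hold:
            self.advertised_up = self.pending_state
        return self.advertised_up   # state seen by IP/MPLS and the SSU

link = MwLinkHoldTimer(hold_up_s=2, hold_down_s=2)
states = [link.update(up, dt=1) for up in (True, True, True, False, True, True, True)]
print(states)  # the single-sample flap is suppressed
```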
1+1 HSB
MWA uses 1+1 HSB to protect against microwave link, MPR-e radio, and Packet Microwave Adapter card failures, as well as frequency channel selective fading. Additionally, hitless (errorless) switching provides zero packet loss if a switchover occurs from a main to a spare MPR-e radio.
The following are required for 1+1 HSB:
-
one frequency channel
-
two MWA Gigabit Ethernet ports (configured in network mode) on two different Packet Microwave Adapter cards installed in adjacent slots (for example, slot 1/2 or slot 5/6); port 1 on one card protects port 1 on the adjacent card, port 2 protects port 2 on the adjacent card, and so on.
-
two MPR-e radios (one main and one spare), each connected to one of the MWA Gigabit Ethernet ports on a Packet Microwave Adapter card
Note: An MPR-e radio that is connected to an odd-numbered port on the Packet Microwave Adapter card must be configured as the main radio.
The following protection schemes make up 1+1 HSB:
1+1 equipment protection switching (EPS)
1+1 transmission protection switching (TPS)
1+1 radio protection switching (RPS)
1+1 HSB transmit diversity antenna (TDA)
These protection schemes are enabled using the config>port>mw>protection command, with the exception of transmit diversity antenna, which is enabled via the MCT. They interwork with each other as described in the sections that follow.
1+1 equipment protection switching (EPS)
EPS protects against MPR-e radio, MWA Gigabit Ethernet link, and Packet Microwave Adapter card failures. After the radio frames are processed by the active EPS MPR-e radio, the radio sends the Ethernet traffic down to the 7705 SAR-8 Shelf V2 or 7705 SAR-18. The standby EPS MPR-e radio does not send any Ethernet traffic down to the 7705 SAR-8 Shelf V2 or 7705 SAR-18.
The switching criteria for EPS are:
an MPR-e radio hardware failure
an MWA Gigabit Ethernet link failure between the 7705 SAR-8 Shelf V2 or 7705 SAR-18 and an MPR-e radio
a Packet Microwave Adapter card connected to an active EPS MPR-e radio going into a missing or failure state
1+1 transmission protection switching (TPS)
In a 1+1 HSB configuration, TPS protects against a microwave link transmission failure by ensuring that only one MPR-e radio at a time uses the antenna for signaling. The 7705 SAR-8 Shelf V2 or 7705 SAR-18 sends traffic to both the active and standby TPS MPR-e radios. Upon receiving the baseband traffic, both radios modulate it and up-convert it to signals. However, only the active TPS MPR-e radio transmits the RF signals; the standby TPS MPR-e radio suppresses the signals. When the active TPS MPR-e radio fails, the standby radio becomes active and restores the microwave link channel.
The switching criteria for TPS are identical to EPS.
The states of the EPS and TPS MPR-e radios are linked to each other. If an alarm occurs, an automatic switchover for EPS and TPS is activated simultaneously. However, if a manual switchover is configured, the switchover is decoupled and the state of the EPS and TPS MPR-e radios is no longer identical.
A manual switchover can be configured for EPS but not for TPS.
1+1 radio protection switching (RPS)
RPS is a hitless radio function that provides space diversity protection for the microwave channel. On the receive side, each MPR-e radio monitors the same radio frequency channel, with the main MPR-e radio being the active receiver by default. Both active and standby RPS MPR-e radios receive both streams of radio frames. The standby RPS MPR-e radio sends the stream of radio frames that it receives to the active EPS MPR-e radio.
The following figure shows a typical application of 1+1 HSB with SD deployment. Only one microwave frequency channel is active and only the main MPR-e radio is transmitting data to the remote ends; the spare MPR-e radio is acting as a standby.
1+1 HSB transmit diversity antenna (TDA)
The TDA feature provides another layer of protection over a microwave link. The TDA configuration uses a main antenna mounted on one MPT-HLC or HLC plus radio and a diversity antenna mounted on another MPT-HLC or HLC plus radio. In combination with the 1+1 HSB radio configuration (redundant MPR-e radios), the traffic is transmitted on either the main antenna or the diversity antenna to achieve the space diversity (SD) receiver configuration.
TDA provides protection switching independent of TPS. TDA is capable of counteracting either negative propagation conditions or permanent antenna failure.
The main antenna is the default main unit that controls the antenna traffic flow using the TDA algorithm. If the main unit fails, the TDA algorithm is no longer operational on the main unit; its transmission switches over to the diversity antenna.
Non-operation of the main antenna switch does not affect transmission, even while traffic is being transmitted on the diversity antenna.
TDA configuration is done via the MCT. TDA status is available using the 7705 SAR CLI/SNMP and via the MCT. The CLI command that is used is show>mw>link. The status information includes the current TDA configuration, which antenna is active, and the active antenna position.
The following figure shows an example of a TDA application.
Communication method between the main and spare MPR-e radios
In a 1+1 HSB configuration, the communication path between the main (active) and spare (standby) MPR-e radios installed on a tower is set up using a tight cable.
1+1 switching operation
The following list defines the types of EPS, TPS, and RPS MPR-e radio switching operations that can be enabled using the tools>perform>mw>link command:
lockout – prevents the spare MPR-e radio from ever becoming the main radio, even when the main MPR-e radio fails; this operation overrides any forced, automatic, or manual operation
forced – forces the spare MPR-e radio to become the main MPR-e radio, even though it may not be in a fit state to assume the role. A forced switch operation overrides any automatic or manual switch operation that is in place.
automatic – allows an MPR-e radio to perform an automatic switchover if a fault condition exists. An automatic switch operation overrides any manual switch operation that is in place.
manual – attempts to switch the main/spare status of an MPR-e radio; however, if port failures, equipment failures, and reception failures do not allow the switchover, an automatic switch operation is triggered.
See the 7705 SAR OAM and Diagnostics Guide, ‟Tools command reference”, ‟Tools perform commands”, for more information.
Revertive switching can also be configured for RPS and EPS/TPS (when revertive switching is configured for EPS, it is also applied to TPS; revertive switching for TPS cannot be configured separately). Revertive switching occurs when the MPR-e radio operation switches from the spare radio back to the main radio after a fault condition is cleared.
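The override rules in the switch operation list above reduce to a simple priority ordering, sketched below (illustrative only, not 7705 SAR code):

```python
# Illustrative priority arbitration for EPS/TPS/RPS switch operations; not 7705 SAR code.
PRIORITY = {"lockout": 4, "forced": 3, "automatic": 2, "manual": 1}

def effective_operation(requests):
    """Return the request that takes effect, given the override rules above."""
    return max(requests, key=lambda r: PRIORITY[r])

print(effective_operation(["manual", "automatic"]))           # automatic
print(effective_operation(["forced", "automatic", "manual"])) # forced
print(effective_operation(["lockout", "forced"]))             # lockout
```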
Frequency synchronization
Depending on the type of Gigabit Ethernet microwave link used to connect the Packet Microwave Adapter card and an MPR-e radio, different frequency synchronization mechanisms can be used.
When using optical 1000Base-SX to connect the Packet Microwave Adapter card and an MPR-e radio, synchronous Ethernet and SSM are the frequency synchronization mechanisms that are used. SSM is used as the mechanism to detect a microwave link failure, including loss of frame and MPR-e radio hardware failure.
When using electrical 1000Base-T to connect the Packet Microwave Adapter card and an MPR-e radio, PCR is the frequency synchronization mechanism that is used (a copper SFP is mandatory on ports 3 and 4).
For more information about PCR, synchronous Ethernet, and SSM, see the 7705 SAR Basic System Configuration Guide, ‟Node timing”.
RSL history
An MPR-e radio that is connected to the 7705 SAR can automatically upload its received signal level (RSL) history file to the 7705 SAR host. The RSL file contains a history of radio attributes and alarms that radio operators can use to isolate and diagnose radio-layer problems that may exist in the network.
Up to 24 MPR-e radios can independently upload their RSL history file every 15 minutes when the rsl-history command is configured on the 7705 SAR for each radio. When uploaded, the file is stored on the 7705 SAR compact flash. Each RSL file can be up to 1 MB and contain up to 10 000 lines. Each time a new file from a specific MPR-e radio is sent to the 7705 SAR, the new file overwrites the previous version for that radio. After the file is uploaded to the 7705 SAR, the operator can view it in its raw format using the file>type command or FTP it to an external server.
The following table lists the attributes in the RSL history file.
| Attribute | Description |
|---|---|
| Time | Time of record |
| LocTxPower | Local transmit power |
| RemTxPower | Remote transmit power |
| LocRxPower | Local received power |
| RemRxPower | Remote received power |
| LocDivRxPower | Local diversity received power (significant for diversity configuration only) |
| RemDivRxPower | Remote diversity received power (significant for diversity configuration only) |
| LocXPD | Local cross-polar discrimination (significant for XPIC configuration only) |
| RemXPD | Remote cross-polar discrimination (significant for XPIC configuration only) |
| LocMSE | Local mean squared error |
| RemMSE | Remote mean squared error |
| TxMod | Transmitter modulation |
| RxMod | Receiver modulation |
| LocEPS | Local equipment protection switching |
| RemEPS | Remote equipment protection switching |
| LocRPS | Local radio protection switching |
| RemRPS | Remote radio protection switching |
| LocTPS | Local transmit protection switching |
| RemTPS | Remote transmit protection switching |
| LocHBERAlm | Local high bit error rate alarm |
| RemHBERAlm | Remote high bit error rate alarm |
| LocEWAlm | Local early warning alarm |
| RemEWAlm | Remote early warning alarm |
| LocDemFailAlm | Local demodulation failure alarm |
| RemDemFailAlm | Remote demodulation failure alarm |
Custom alarms on Ethernet ports
The 7705 SAR supports custom alarms on Ethernet ports without the need to deploy a dry-contact alarm aggregator. Custom alarms can be created and assigned to any RJ45 port; the port must be configured for 100Base-Tx operation with autonegotiation disabled. One alarm input can be configured for each port with the following:
name
description
association with a user-defined alarm
Alarm inputs must be associated with an alarm in order for them to be triggered. Alarm inputs consist of an Ethernet LOS event caused by breaking contact loops between pins 1 and 3 or 2 and 6 on the Ethernet port. Breaking either loop will trigger the port alarm, and reconnecting the loops will clear the alarm.
For information about configuring the alarm inputs, see Configuring Auxiliary Alarm card, chassis, and Ethernet port external alarm parameters.