DCBX
The Data Center Bridging eXchange (DCBX) protocol is a discovery and exchange protocol for advertising configurations and capabilities between directly connected peers. In addition to propagating configurations, DCBX allows for the detection of misconfigurations between peers.
DCBX is defined in Clause 38 of the IEEE 802.1Q-2022 specification. DCBX protocol information is propagated by LLDP using TLVs, as defined in Annex D.2.10 of the 802.1Q specification.
Because LLDP is a unidirectional protocol, each node sends its local configuration to its neighbor, and the remote neighbor's state machine determines how to process and apply the received information.
DCBX supports the exchange of Priority-based Flow Control (PFC) information.
DCBX operation
In SR Linux, DCBX is enabled on every interface by default. The DCBX TLV for PFC propagates the status of each PFC priority (0 to 7) to the peer as follows (see the encoding sketch after this list):
- If the interface has one or more PFC priorities enabled, DCBX advertises the per-priority status (enabled or disabled) for all PFC priorities.
- If the interface has no PFC priorities enabled, DCBX advertises all PFC priorities as disabled.
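The following sketch (Python, illustrative only, not SR Linux code; function and constant names are assumptions) shows how this per-priority status maps into the enable bitmap of the LLDP PFC Configuration TLV, based on the TLV layout defined in Annex D of IEEE 802.1Q.

import struct

# Illustrative encoder for the IEEE 802.1 organizationally specific
# PFC Configuration TLV (OUI 00-80-C2, subtype 0x0B). Field layout
# follows IEEE 802.1Q Annex D; names here are hypothetical.
IEEE_8021_OUI = b"\x00\x80\xc2"
PFC_CONFIG_SUBTYPE = 0x0B

def build_pfc_config_tlv(enabled_priorities, willing=False, mbc=False, pfc_cap=8):
    """Return the PFC Configuration TLV as bytes.

    enabled_priorities: priorities (0-7) with PFC enabled; an empty
    iterable advertises all priorities as disabled.
    """
    enable_bitmap = 0
    for prio in enabled_priorities:
        if not 0 <= prio <= 7:
            raise ValueError(f"invalid PFC priority: {prio}")
        enable_bitmap |= 1 << prio  # bit n corresponds to priority n

    # Willing (1 bit) | MBC (1 bit) | reserved (2 bits) | PFC cap (4 bits)
    flags = (int(willing) << 7) | (int(mbc) << 6) | (pfc_cap & 0x0F)

    value = IEEE_8021_OUI + bytes([PFC_CONFIG_SUBTYPE, flags, enable_bitmap])
    # LLDP TLV header: type 127 (organizationally specific) in the top
    # 7 bits, value length in the low 9 bits
    header = struct.pack("!H", (127 << 9) | len(value))
    return header + value

# PFC enabled on priorities 3 and 4 -> enable bitmap 0x18;
# no PFC priorities enabled -> enable bitmap 0x00.
print(build_pfc_config_tlv([3, 4]).hex())
print(build_pfc_config_tlv([]).hex())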
In the state datastore, the system maintains information about the operational state of DCBX for the local node and for the remote peer. In SR Linux, all DCBX-enabled interfaces are effectively in the unwilling state, which means the system never acts on the state received from the remote peer. Instead, the system maintains the local and remote state so that both are available for display. Any discrepancy between the local and remote state indicates a possible misconfiguration; you can then update the configuration as required to resolve it.
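Because the interface is unwilling, reconciling a mismatch is a manual step. The following minimal sketch (Python, illustrative only, not SR Linux code) shows the kind of per-priority comparison involved in spotting such a discrepancy between the local and remote PFC state:

def pfc_mismatches(local_enabled, remote_enabled):
    """Return priorities (0-7) whose PFC state differs between peers.

    local_enabled / remote_enabled: sets of priorities with PFC enabled
    on the local interface and as advertised by the remote peer.
    """
    return sorted(set(local_enabled) ^ set(remote_enabled))

# Local interface has PFC enabled on priorities 3 and 4, the peer only
# on priority 3: priority 4 is flagged for reconciliation.
print(pfc_mismatches({3, 4}, {3}))  # [4]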
Configuring DCBX
By default, DCBX is enabled on every interface. You can disable it administratively with the admin-state setting in the qos interfaces interface dcbx context. When DCBX is disabled, LLDP stops advertising the DCBX capability.
To configure DCBX on an interface, use the dcbx admin-state command in the qos interfaces interface context.
The following example enables DCBX on interface eth-1/4.
--{ candidate shared default }--[ ]--
# info with-context qos interfaces interface eth-1/4
qos {
    interfaces {
        interface eth-1/4 {
            interface-ref {
                interface ethernet-1/4
            }
            dcbx {
                admin-state enable
            }
        }
    }
}
The DCBX configuration is at interface level only. There is no system-level admin-state setting for DCBX.
Displaying DCBX state information
To display the local and remote operational state for DCBX, use the info from state command.
--{ state }--[ ]--
# info from state qos interfaces interface eth-1/4 dcbx
admin-state disable
oper-state down
oper-state-reason dcbx-admin-disabled
pfc-priority 0 {
    oper-state down
    remote-state remote-down
}
pfc-priority 1 {
    oper-state down
    remote-state remote-down
}
...
pfc-priority 7 {
    oper-state down
    remote-state remote-down
}
--{ state }--[ ]--
# info from state qos interfaces interface ethernet-2/3 dcbx
admin-state enable
oper-state down
oper-state-reason remote-dcbx-down
pfc-priority 0 {
    oper-state up
    remote-state remote-down
}
...
pfc-priority 7 {
    oper-state up
    remote-state remote-down
}
--{ state }--[ ]--
# info from state qos interfaces interface ethernet-1/3 dcbx
admin-state enable
oper-state down
oper-state-reason lldp-oper-state-down
pfc-priority 0 {
    oper-state up
    remote-state remote-down
}
...
pfc-priority 7 {
    oper-state up
    remote-state remote-down
}
If DCBX is enabled on an interface but the interface does not receive a DCBX capability message from its peer (because DCBX is disabled on the remote node), the DCBX oper-state is shown as down with an oper-state-reason of remote-dcbx-down. Similarly, the dcbx-admin-disabled reason indicates that DCBX is administratively disabled on the local interface, and the lldp-oper-state-down reason indicates that LLDP itself is operationally down on the interface.