Multi-Chassis Endpoint for VPLS Active/Standby Pseudowire
This chapter provides information about multi-chassis endpoint for VPLS active/standby pseudowire.
Applicability
This chapter was initially written for SR OS Release 7.0.R6, but the CLI in this edition is based on SR OS Release 19.5.R2.
Overview
When implementing a large VPLS, one of the limiting factors is the number of T-LDP sessions required for the full mesh of SDPs. Mesh-SDPs are required between all PEs participating in the VPLS with a full mesh of T-LDP sessions.
This solution does not scale, because the number of T-LDP sessions grows quadratically with the number of participating PEs. Several options exist to reduce the number of T-LDP sessions required in a large VPLS.
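To quantify the scaling problem: a full mesh of n PEs requires n(n-1)/2 bidirectional T-LDP sessions. The following is a small illustrative Python calculation (not part of the original chapter):

```python
def full_mesh_sessions(n_pes: int) -> int:
    """Number of bidirectional T-LDP sessions in a full mesh of n PEs."""
    return n_pes * (n_pes - 1) // 2

# The session count grows quadratically with the number of PEs:
for n in (4, 8, 16, 32):
    print(n, "PEs ->", full_mesh_sessions(n), "T-LDP sessions")
```

Doubling the number of PEs roughly quadruples the session count, which is why H-VPLS partitions the network into smaller meshed clouds.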
The first option is hierarchical VPLS (H-VPLS) with spoke-SDPs. By using spoke-SDPs between two clouds of fully meshed PEs, any-to-any T-LDP sessions for all participating PEs are not required.
However, if spoke-SDP redundancy is required, STP must be used to avoid a loop in the VPLS. Management VPLS can be used to reduce the number of STP instances and separate customer and STP traffic (H-VPLS with STP).
VPLS pseudowire redundancy provides H-VPLS redundant spoke connectivity. The active spoke is in forwarding state, while the standby spoke is in blocking state; therefore, STP is no longer needed to break the loop, as illustrated in VPLS pseudowire redundancy.
However, the PE implementing the active and standby spokes represents a single point of failure in the network.
Multi-chassis endpoint (MC-EP) for VPLS active/standby pseudowire builds on VPLS pseudowire redundancy and removes this single point of failure.
Only one spoke-SDP is in forwarding state; all standby spoke-SDPs are in blocking state. Mesh and square resiliency are supported.
Mesh resiliency can protect against simultaneous node failure in the core and in the MC-EP (double failure), but requires more SDPs (and therefore more T-LDP sessions). Mesh resiliency is illustrated in Multi-chassis endpoint with mesh resiliency.
Square resiliency protects against single node failures and requires fewer SDPs (and thus fewer T-LDP sessions). Square resiliency is illustrated in Multi-chassis endpoint with square resiliency.
Example topology
The network topology is displayed in Example topology.
The setup consists of:
Two core nodes (PE-1 and PE-2), and three nodes for each metro area (PE-3, PE-4, PE-5 and PE-6, PE-7, PE-8, respectively).
VPLS 1 is the core VPLS, used to interconnect the two metro areas represented by VPLS 2 and VPLS 3.
VPLS 2 will be connected to the core VPLS in mesh resiliency.
VPLS 3 will be connected to the core VPLS in square resiliency.
Three separate VPLS identifiers are used for clarity. However, the same identifier could be used for each. For interoperation, the only requirement is that the same VC-ID is used on both ends of each spoke-SDP.
The following configuration tasks should be done first:
IS-IS or OSPF throughout the network.
RSVP or LDP-signaled LSPs over the paths used for mesh/spoke-SDPs.
Configuration
SDP configuration
On each PE, SDPs are created to match the topology described in Example topology.
The SDP naming convention is XY, where X is the originating node and Y is the target node.
An example of the SDP configuration in PE-3 (using LDP):
# on PE-3:
configure
service
sdp 31 mpls create
far-end 192.0.2.1
ldp
no shutdown
exit
sdp 32 mpls create
far-end 192.0.2.2
ldp
no shutdown
exit
sdp 34 mpls create
far-end 192.0.2.4
ldp
no shutdown
exit
sdp 35 mpls create
far-end 192.0.2.5
ldp
no shutdown
exit
Verification of the SDPs on PE-3:
*A:PE-3# show service sdp
============================================================================
Services: Service Destination Points
============================================================================
SdpId AdmMTU OprMTU Far End Adm Opr Del LSP Sig
----------------------------------------------------------------------------
31 0 1556 192.0.2.1 Up Up MPLS L TLDP
32 0 1556 192.0.2.2 Up Up MPLS L TLDP
34 0 1556 192.0.2.4 Up Up MPLS L TLDP
35 0 1556 192.0.2.5 Up Up MPLS L TLDP
----------------------------------------------------------------------------
Number of SDPs : 4
----------------------------------------------------------------------------
Legend: R = RSVP, L = LDP, B = BGP, M = MPLS-TP, n/a = Not Applicable
I = SR-ISIS, O = SR-OSPF, T = SR-TE, F = FPE
============================================================================
Full mesh VPLS configuration
Next, three fully meshed VPLS services are configured.
VPLS 1 is the core VPLS, on PE-1 and PE-2
VPLS 2 is the metro 1 VPLS, on PE-3, PE-4 and PE-5
VPLS 3 is the metro 2 VPLS, on PE-6, PE-7 and PE-8
On PE-1 (similar configuration on PE-2):
# on PE-1:
configure
service
vpls 1 name ‟VPLS 1” customer 1 create
description "core VPLS"
mesh-sdp 12:1 create
exit
no shutdown
exit
On PE-3 (similar configuration on PE-4 and PE-5):
# on PE-3:
configure
service
vpls 2 name ‟VPLS 2” customer 1 create
description "Metro 1 VPLS"
mesh-sdp 34:2 create
exit
mesh-sdp 35:2 create
exit
no shutdown
exit
On PE-6 (similar configuration on PE-7 and PE-8):
# on PE-6:
configure
service
vpls 3 name ‟VPLS 3” customer 1 create
description "Metro 2 VPLS"
mesh-sdp 67:3 create
exit
mesh-sdp 68:3 create
exit
no shutdown
exit
Verification of the VPLS:
The service must be operationally up.
All mesh-SDPs must be up in the VPLS service.
On PE-6 (similar on other nodes):
*A:PE-6# show service id 3 base
===============================================================================
Service Basic Information
===============================================================================
Service Id : 3 Vpn Id : 0
Service Type : VPLS
MACSec enabled : no
Name : VPLS 3
Description : Metro 2 VPLS
Customer Id : 1 Creation Origin : manual
Last Status Change: 06/21/2019 08:08:29
Last Mgmt Change : 06/21/2019 08:08:24
Etree Mode : Disabled
Admin State : Up Oper State : Up
MTU : 1514
SAP Count : 0 SDP Bind Count : 2
Snd Flush on Fail : Disabled Host Conn Verify : Disabled
SHCV pol IPv4 : None
Propagate MacFlush: Disabled Per Svc Hashing : Disabled
Allow IP Intf Bind: Disabled
Fwd-IPv4-Mcast-To*: Disabled Fwd-IPv6-Mcast-To*: Disabled
Mcast IPv6 scope : mac-based
Def. Gateway IP : None
Def. Gateway MAC : None
Temp Flood Time : Disabled Temp Flood : Inactive
Temp Flood Chg Cnt: 0
SPI load-balance : Disabled
TEID load-balance : Disabled
Src Tep IP : N/A
VSD Domain : <none>
-------------------------------------------------------------------------------
Service Access & Destination Points
-------------------------------------------------------------------------------
Identifier Type AdmMTU OprMTU Adm Opr
-------------------------------------------------------------------------------
sdp:67:3 M(192.0.2.7) Mesh 0 1556 Up Up
sdp:68:3 M(192.0.2.8) Mesh 0 1556 Up Up
===============================================================================
* indicates that the corresponding row element may have been truncated.
Multi-chassis configuration
Multi-chassis is configured on the MC peer pairs PE-3/PE-4 and PE-6/PE-7. The peer system address is configured, and mc-endpoint is enabled.
On PE-3 (similar configuration on PE-4, PE-6, and PE-7):
configure
redundancy
multi-chassis
peer 192.0.2.4 create
mc-endpoint
no shutdown
exit
no shutdown
exit
Verification of the multi-chassis synchronization (MCS):
If the MCS fails, both nodes will fall back to single-chassis mode. In that case, two spoke-SDPs could become active at the same time. It is important to verify the MCS before enabling the redundant spoke-SDPs.
*A:PE-3# show redundancy multi-chassis mc-endpoint peer 192.0.2.4
===============================================================================
Multi-Chassis MC-Endpoint
===============================================================================
Peer Addr : 192.0.2.4 Peer Name :
Admin State : up Oper State : up
Last State chg : Source Addr :
System Id : 04:0d:ff:00:00:00 Sys Priority : 0
Keep Alive Intvl: 10 Hold on Nbr Fail : 3
Passive Mode : disabled Psv Mode Oper : No
Boot Timer : 300 BFD : disabled
Last update : 06/21/2019 08:08:44 MC-EP Count : 0
===============================================================================
Mesh resiliency configuration
PE-3 and PE-4 will be connected to the core VPLS in mesh resiliency.
First an endpoint is configured.
The no suppress-standby-signaling command ensures that the standby pseudowire status bit is signaled to the remote peer, which blocks the standby spoke-SDP.
The multi-chassis endpoint peer is configured. The mc-endpoint ID must match between the two peers.
On PE-3 (similar on PE-4):
configure
service
vpls 2
endpoint "CORE" create
no suppress-standby-signaling
mc-endpoint 1
mc-ep-peer 192.0.2.4
exit
exit
After this configuration, the MC-EP count in the preceding show command changes to 1, as follows:
*A:PE-3# show redundancy multi-chassis mc-endpoint peer 192.0.2.4
===============================================================================
Multi-Chassis MC-Endpoint
===============================================================================
Peer Addr : 192.0.2.4 Peer Name :
Admin State : up Oper State : up
Last State chg : Source Addr :
System Id : 04:0d:ff:00:00:00 Sys Priority : 0
Keep Alive Intvl: 10 Hold on Nbr Fail : 3
Passive Mode : disabled Psv Mode Oper : No
Boot Timer : 300 BFD : disabled
Last update : 06/21/2019 08:10:07 MC-EP Count : 1
===============================================================================
Two spoke-SDPs are configured on each peer of the multi-chassis to the two nodes of the core VPLS (mesh resiliency). Each spoke-SDP refers to the endpoint CORE.
The precedence is defined on the spoke-SDPs as follows:
Spoke-SDP 31 on PE-3 will be active. It is configured as primary (= precedence 0).
Spoke-SDP 32 on PE-3 will be the first backup. It is configured with precedence 1.
Spoke-SDP 41 on PE-4 will be the second backup. It is configured with precedence 2.
Spoke-SDP 42 on PE-4 will be the third backup. It is configured with precedence 3.
On PE-3:
configure
service
vpls 2
spoke-sdp 31:1 endpoint "CORE" create
precedence primary
exit
spoke-sdp 32:1 endpoint "CORE" create
precedence 1
exit
On PE-4:
configure
service
vpls 2
spoke-sdp 41:1 endpoint "CORE" create
precedence 2
exit
spoke-sdp 42:1 endpoint "CORE" create
precedence 3
exit
Verification of the spoke-SDPs:
On PE-3 and PE-4, the spoke-SDPs must be up.
*A:PE-3# show service id 2 sdp
===============================================================================
Services: Service Destination Points
===============================================================================
SdpId Type Far End addr Adm Opr I.Lbl E.Lbl
-------------------------------------------------------------------------------
31:1 Spok 192.0.2.1 Up Up 524277 524278
32:1 Spok 192.0.2.2 Up Up 524276 524278
34:2 Mesh 192.0.2.4 Up Up 524279 524279
35:2 Mesh 192.0.2.5 Up Up 524278 524279
-------------------------------------------------------------------------------
Number of SDPs : 4
-------------------------------------------------------------------------------
===============================================================================
The endpoints on PE-3 and PE-4 can be verified. One spoke-SDP is in Tx-Active mode (31:1 on PE-3, because it is configured as primary).
*A:PE-3# show service id 2 endpoint "CORE" | match "Tx Active"
Tx Active (SDP) : 31:1
Tx Active Up Time : 0d 01:16:04
Tx Active Change Count : 1
Last Tx Active Change : 06/21/2019 08:10:41
There is no active spoke-SDP on PE-4.
*A:PE-4# show service id 2 endpoint "CORE" | match "Tx Active"
Tx Active : none
Tx Active Up Time : 0d 00:00:00
Tx Active Change Count : 0
Last Tx Active Change : 06/21/2019 07:59:47
On PE-1 and PE-2, the spoke-SDPs are operationally up.
*A:PE-1# show service id 1 sdp
===============================================================================
Services: Service Destination Points
===============================================================================
SdpId Type Far End addr Adm Opr I.Lbl E.Lbl
-------------------------------------------------------------------------------
12:1 Mesh 192.0.2.2 Up Up 524279 524279
13:1 Spok 192.0.2.3 Up Up 524278 524277
14:1 Spok 192.0.2.4 Up Up 524277 524277
-------------------------------------------------------------------------------
Number of SDPs : 3
-------------------------------------------------------------------------------
===============================================================================
However, because pseudowire signaling has been enabled, only one spoke-SDP will be active; the others are set in standby.
On PE-1, spoke-SDP 13:1 is active (no pseudowire bit signaled from peer PE-3) and the spoke-SDP 14:1 is signaled in standby by peer PE-4.
*A:PE-1# show service id 1 sdp 13:1 detail | match "Peer Pw Bits"
Peer Pw Bits : None
*A:PE-1# show service id 1 sdp 14:1 detail | match "Peer Pw Bits"
Peer Pw Bits : pwFwdingStandby
On PE-2, both spoke-SDPs are signaled in standby by peers PE-3 and PE-4.
*A:PE-2# show service id 1 sdp 23:1 detail | match "Peer Pw Bits"
Peer Pw Bits : pwFwdingStandby
*A:PE-2# show service id 1 sdp 24:1 detail | match "Peer Pw Bits"
Peer Pw Bits : pwFwdingStandby
There is one active and three standby spoke-SDPs.
Square resiliency configuration
PE-6 and PE-7 will be connected to the core VPLS in square resiliency.
First an endpoint is configured.
The no suppress-standby-signaling command ensures that the standby pseudowire status bit is signaled to the remote peer, which blocks the standby spoke-SDP.
The multi-chassis endpoint peer is configured. The mc-endpoint ID must match between the two peers.
One spoke-SDP is configured on each peer of the multi-chassis to one node of the core VPLS (square resiliency). Each spoke-SDP refers to the endpoint CORE.
On PE-7 (similar on PE-6):
# on PE-7:
configure
service
vpls 3
endpoint "CORE" create
no suppress-standby-signaling
mc-endpoint 1
mc-ep-peer 192.0.2.6
exit
exit
exit
exit
exit
The precedence will be defined on the spoke-SDPs as follows:
Spoke-SDP 72:1 on PE-7 will be active. It is configured as primary (= precedence 0)
Spoke-SDP 61:1 on PE-6 will be the first backup with precedence 1.
On PE-7:
configure
service
vpls 3
spoke-sdp 72:1 endpoint "CORE" create
precedence primary
exit
exit
exit
On PE-6:
configure
service
vpls 3
spoke-sdp 61:1 endpoint "CORE" create
precedence 1
exit
exit
exit
Verification of the spoke-SDPs:
*A:PE-7# show service id 3 sdp
===============================================================================
Services: Service Destination Points
===============================================================================
SdpId Type Far End addr Adm Opr I.Lbl E.Lbl
-------------------------------------------------------------------------------
72:1 Spok 192.0.2.2 Up Up 524277 524276
76:3 Mesh 192.0.2.6 Up Up 524279 524279
78:3 Mesh 192.0.2.8 Up Up 524278 524278
-------------------------------------------------------------------------------
Number of SDPs : 3
-------------------------------------------------------------------------------
===============================================================================
On PE-6 and PE-7, the spoke-SDPs must be up.
The endpoints on PE-7 and PE-6 can be verified. One spoke-SDP is in Tx-Active mode (72 on PE-7 because it is configured as primary).
*A:PE-7# show service id 3 endpoint | match "Tx Active"
Tx Active (SDP) : 72:1
Tx Active Up Time : 0d 00:17:24
Tx Active Change Count : 1
Last Tx Active Change : 06/21/2019 08:13:18
There is no active spoke-SDP on PE-6.
*A:PE-6# show service id 3 endpoint | match "Tx Active"
Tx Active : none
Tx Active Up Time : 0d 00:00:00
Tx Active Change Count : 2
Last Tx Active Change : 06/21/2019 08:13:18
The output shows that on PE-1, spoke-SDP 16:1 is signaled with the peer in standby mode.
*A:PE-1# show service id 1 sdp 16:1 detail | match "Peer Pw Bits"
Peer Pw Bits : pwFwdingStandby
On PE-2, spoke-SDP 27:1 is signaled with the peer active (no pseudowire status bits).
*A:PE-2# show service id 1 sdp 27:1 detail | match "Peer Pw Bits"
Peer Pw Bits : None
There is one active and one standby spoke-SDP.
Additional parameters
Multi-chassis
*A:PE-3# configure redundancy multi-chassis peer 192.0.2.4 mc-endpoint
- mc-endpoint
- no mc-endpoint
[no] bfd-enable - Configure BFD
[no] boot-timer - Configure boot timer interval
[no] hold-on-neighb* - Configure hold time applied on neighbor failure
[no] keep-alive-int* - Configure keep alive interval for this MC-Endpoint
[no] passive-mode - Configure passive-mode
[no] shutdown - Administratively enable/disable the multi-chassis
peer end-point
[no] system-priority - Configure system priority
These parameters will be explained in the following sections.
Peer failure detection
The default mechanism is based on the keep-alive messages exchanged between the peers.
The keep-alive interval is the interval at which keep-alive messages are sent to the MC peer. It is set in tenths of a second (from 5 to 500), with a default value of 5.
Hold-on-neighbor failure is the number of keep-alive intervals that the node waits for a packet from the peer before declaring it failed. After this interval, the node reverts to single-chassis behavior. It can be set from 2 to 25, with a default value of 3.
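The worst-case peer failure detection time is therefore the keep-alive interval (in tenths of a second) multiplied by the hold-on-neighbor-failure count. A minimal sketch of this arithmetic (illustrative Python, not an SR OS artifact):

```python
def mcep_detection_time_s(keep_alive_tenths: int = 5, hold_multiplier: int = 3) -> float:
    """Worst-case time (seconds) before the node declares the MC peer failed.

    keep_alive_tenths: keep-alive interval in tenths of a second (5..500, default 5).
    hold_multiplier:   hold-on-neighbor-failure count (2..25, default 3).
    """
    if not 5 <= keep_alive_tenths <= 500:
        raise ValueError("keep-alive interval out of range")
    if not 2 <= hold_multiplier <= 25:
        raise ValueError("hold-on-neighbor-failure out of range")
    return keep_alive_tenths * hold_multiplier / 10.0

print(mcep_detection_time_s())        # defaults: 0.5 s x 3 = 1.5 s
print(mcep_detection_time_s(10, 3))   # values in the show output above: 3.0 s
```

With the values shown in the show command output (keep-alive interval 10, hold-on-neighbor failure 3), detection takes up to 3 seconds; BFD (next section) can detect a peer loss faster.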
BFD session
BFD is another peer failure detection mechanism. It can be used to speed up the convergence in case of peer loss.
*A:PE-3# configure
redundancy
multi-chassis
peer 192.0.2.4
mc-endpoint
bfd-enable
exit
exit
The MC-EP uses a centralized BFD session, so BFD must be enabled on the system interface.
*A:PE-3# configure
router
interface "system"
address 192.0.2.3/32
bfd 100 receive 100 multiplier 3
exit
Verification of the BFD session:
*A:PE-3# show router bfd session
===============================================================================
Legend:
Session Id = Interface Name | LSP Name | Prefix | RSVP Sess Name | Service Id
wp = Working path pp = Protecting path
===============================================================================
BFD Session
===============================================================================
Session Id State Tx Pkts Rx Pkts
Rem Addr/Info/SdpId:VcId Multipl Tx Intvl Rx Intvl
Protocols Type LAG Port LAG ID
-------------------------------------------------------------------------------
system Up 175 53
192.0.2.4 3 100 100
mcep central N/A N/A
-------------------------------------------------------------------------------
No. of BFD sessions: 1
===============================================================================
Boot timer
The boot-timer command specifies the time after a reboot during which the node tries to establish a connection with the MC peer before declaring a peer failure. On failure, the node reverts to single-chassis behavior.
System priority
The system priority influences the selection of the MC master. The lowest priority node will become the master.
In case of equal priorities, the lowest system ID (=chassis MAC address) will become the master.
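The election rule can be sketched as follows (illustrative Python, not an SR OS artifact): the lowest system priority wins, and a tie is broken by the lowest system ID (chassis MAC address).

```python
def elect_master(peers):
    """peers: list of (system_priority, system_id_mac) tuples.

    Lowest system priority wins; on a tie, the lowest system ID
    (chassis MAC address) wins. Returns the winning tuple.
    """
    # MAC strings such as "04:0d:ff:00:00:00" compare correctly as
    # fixed-width lowercase hex strings.
    return min(peers, key=lambda p: (p[0], p[1]))

# With equal priorities, the lower chassis MAC becomes the master:
print(elect_master([(0, "04:0f:ff:00:00:00"), (0, "04:0d:ff:00:00:00")]))
```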
VPLS endpoint and spoke-SDP
Ignore standby pseudowire bits
*A:PE-1# configure service vpls 1 spoke-sdp 14:1
---snip---
[no] ignore-standby* - Ignore 'standby-bit' received from LDP peer
---snip---
With ignore-standby-signaling, the pseudowire status bits received from the peer are ignored and traffic is forwarded over the spoke-SDP. This can speed up convergence for multicast traffic in case of spoke-SDP failure. Traffic sent over the standby spoke-SDP is discarded by the peer.
In this topology, if the ignore-standby-signaling command is enabled on PE-1, PE-1 sends multicast traffic to PE-3 and PE-4 (and to PE-6). If PE-3 fails, PE-4 can start forwarding traffic in the VPLS as soon as it detects that PE-3 is down; no signaling is needed between PE-1 and PE-4.
Block-on-mesh failure
*A:PE-3# configure service vpls 2 endpoint "CORE"
---snip---
[no] block-on-mesh-* - Block traffic on mesh-SDP failure
---snip---
If a PE loses all the mesh-SDPs of a VPLS, it should block its spoke-SDPs to the core VPLS and inform the MC-EP peer, which can then activate one of its own spoke-SDPs.
If block-on-mesh-failure is enabled, the PE signals all the pseudowires of the endpoint as standby.
In this topology, if PE-3 no longer has any valid mesh-SDP to the VPLS 2 mesh, it sets the spoke-SDPs under endpoint CORE to standby.
When block-on-mesh-failure is activated under an endpoint, it is automatically set under the spoke-SDPs belonging to this endpoint.
*A:PE-3# configure service vpls 2
*A:PE-3>config>service>vpls# info
----------------------------------------------
description "Metro 1 VPLS"
stp
shutdown
exit
endpoint "CORE" create
no suppress-standby-signaling
mc-endpoint 1
mc-ep-peer 192.0.2.4
exit
exit
spoke-sdp 31:1 endpoint "CORE" create
stp
shutdown
exit
precedence primary
no shutdown
exit
spoke-sdp 32:1 endpoint "CORE" create
stp
shutdown
exit
precedence 1
no shutdown
exit
mesh-sdp 34:2 create
no shutdown
exit
mesh-sdp 35:2 create
no shutdown
exit
no shutdown
----------------------------------------------
*A:PE-3>config>service>vpls# endpoint "CORE" block-on-mesh-failure
*A:PE-3>config>service>vpls# info
----------------------------------------------
description "Metro 1 VPLS"
stp
shutdown
exit
endpoint "CORE" create
no suppress-standby-signaling
block-on-mesh-failure
mc-endpoint 1
mc-ep-peer 192.0.2.4
exit
exit
spoke-sdp 31:1 endpoint "CORE" create
stp
shutdown
exit
block-on-mesh-failure
precedence primary
no shutdown
exit
spoke-sdp 32:1 endpoint "CORE" create
stp
shutdown
exit
block-on-mesh-failure
precedence 1
no shutdown
exit
mesh-sdp 34:2 create
no shutdown
exit
mesh-sdp 35:2 create
no shutdown
exit
no shutdown
----------------------------------------------
Precedence
*A:PE-3# configure service vpls 2 spoke-sdp 31:1
---snip---
[no] precedence - Configure the spoke-sdp precedence
---snip---
The precedence is used to indicate in which order the spoke-SDPs should be used. The value is from 0 to 4 (0 being primary), the lowest having higher priority. The default value is 4.
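The resulting selection behavior can be sketched as follows (illustrative Python, not an SR OS artifact): among the operationally up spoke-SDPs of an endpoint, the one with the lowest precedence becomes Tx-Active.

```python
def select_tx_active(spokes):
    """spokes: list of (sdp_id, precedence, oper_up) tuples.

    Returns the sdp_id that becomes Tx-Active: the lowest precedence
    (0 = primary, default 4) among the up spoke-SDPs, or None if no
    spoke-SDP is up.
    """
    candidates = [(prec, sdp) for sdp, prec, up in spokes if up]
    if not candidates:
        return None
    return min(candidates)[1]

# Mesh-resiliency example from this chapter: 31:1 primary, 32:1 precedence 1.
print(select_tx_active([("31:1", 0, True), ("32:1", 1, True)]))   # 31:1
# If 31:1 fails, 32:1 takes over:
print(select_tx_active([("31:1", 0, False), ("32:1", 1, True)]))  # 32:1
```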
Revert time
*A:PE-3# configure service vpls 2 endpoint "CORE"
---snip---
[no] revert-time - Configure the time to wait before reverting to primary spoke-sdp
---snip---
If the precedence is equal between the spoke-SDPs, there is no revertive behavior. Changing the precedence of a spoke-SDP will not trigger a revert. The default is no revert.
MAC flush parameters
When a spoke-SDP goes from standby to active (due to the active spoke-SDP failure), the node will send a flush-all-but-mine message.
After a restoration of the spoke-SDP, a new flush-all-but-mine message will be sent.
*A:PE-1# configure service vpls 1 propagate-mac-flush
A node configured with propagate MAC flush will forward the flush messages received on the spoke-SDP to its other mesh/spoke-SDPs.
A node configured with send flush on failure will send a flush-all-from-me message when one of its SDPs goes down.
*A:PE-1# configure service vpls 1 send-flush-on-failure
Failure scenarios
For the subsequent failure scenarios, the configuration of the nodes is as described in the Configuration section.
Core node failure
When the core node PE-1 fails, the spoke-SDPs from PE-3 and PE-4 go down.
Because spoke-SDP 31 between PE-3 and PE-1 was active, the MC master (PE-3 in this case) selects the next best spoke-SDP, which is 32 between PE-3 and PE-2 (precedence 1). See Core node failure.
*A:PE-3# show service id 2 endpoint
===============================================================================
Service 2 endpoints
===============================================================================
Endpoint name : CORE
Description : (Not Specified)
Creation Origin : manual
Revert time : 0
Act Hold Delay : 0
Ignore Standby Signaling : false
Suppress Standby Signaling : false
Block On Mesh Fail : true
Multi-Chassis Endpoint : 1
MC Endpoint Peer Addr : 192.0.2.4
Psv Mode Active : No
Tx Active (SDP) : 32:1
Tx Active Up Time : 0d 00:00:12
Revert Time Count Down : N/A
Tx Active Change Count : 1
Last Tx Active Change : 06/21/2019 08:16:48
-------------------------------------------------------------------------------
Members
-------------------------------------------------------------------------------
Spoke-sdp: 31:1 Prec:0 Oper Status: Down
Spoke-sdp: 32:1 Prec:1 Oper Status: Up
===============================================================================
===============================================================================
*A:PE-4# show service id 2 endpoint
===============================================================================
Service 2 endpoints
===============================================================================
Endpoint name : CORE
Description : (Not Specified)
Creation Origin : manual
Revert time : 0
Act Hold Delay : 0
Ignore Standby Signaling : false
Suppress Standby Signaling : false
Block On Mesh Fail : false
Multi-Chassis Endpoint : 1
MC Endpoint Peer Addr : 192.0.2.3
Psv Mode Active : No
Tx Active : none
Tx Active Up Time : 0d 00:00:00
Revert Time Count Down : N/A
Tx Active Change Count : 0
Last Tx Active Change : 06/21/2019 07:59:47
-------------------------------------------------------------------------------
Members
-------------------------------------------------------------------------------
Spoke-sdp: 41:1 Prec:2 Oper Status: Down
Spoke-sdp: 42:1 Prec:3 Oper Status: Up
===============================================================================
===============================================================================
Multi-chassis node failure
When the multi-chassis node PE-3 fails, both spoke-SDPs from PE-3 go down.
PE-4 reverts to single chassis mode and selects the best spoke-SDP, which will be 41 between PE-4 and PE-1 (precedence 2). See Multi-chassis node failure.
*A:PE-4# show redundancy multi-chassis mc-endpoint peer 192.0.2.3
===============================================================================
Multi-Chassis MC-Endpoint
===============================================================================
Peer Addr : 192.0.2.3 Peer Name :
Admin State : up Oper State : down
Last State chg : Source Addr :
System Id : 04:0f:ff:00:00:00 Sys Priority : 0
Keep Alive Intvl: 10 Hold on Nbr Fail : 3
Passive Mode : disabled Psv Mode Oper : No
Boot Timer : 300 BFD : enabled
Last update : 06/21/2019 08:13:23 MC-EP Count : 1
===============================================================================
*A:PE-4# show service id 2 endpoint
===============================================================================
Service 2 endpoints
===============================================================================
Endpoint name : CORE
Description : (Not Specified)
Creation Origin : manual
Revert time : 0
Act Hold Delay : 0
Ignore Standby Signaling : false
Suppress Standby Signaling : false
Block On Mesh Fail : false
Multi-Chassis Endpoint : 1
MC Endpoint Peer Addr : 192.0.2.3
Psv Mode Active : No
Tx Active (SDP) : 41:1
Tx Active Up Time : 0d 00:02:40
Revert Time Count Down : N/A
Tx Active Change Count : 1
Last Tx Active Change : 06/21/2019 08:17:47
-------------------------------------------------------------------------------
Members
-------------------------------------------------------------------------------
Spoke-sdp: 41:1 Prec:2 Oper Status: Up
Spoke-sdp: 42:1 Prec:3 Oper Status: Up
===============================================================================
===============================================================================
Multi-chassis communication failure
If the multi-chassis communication is interrupted, both nodes will revert to single chassis mode.
To simulate a communication failure between the two nodes, define a static route on PE-3 that will black-hole the system address of PE-4.
# on PE-3:
configure
router
static-route-entry 192.0.2.4/32
black-hole
no shutdown
exit
exit
Verify that the MC synchronization is down.
*A:PE-4# show redundancy multi-chassis mc-endpoint peer 192.0.2.3
===============================================================================
Multi-Chassis MC-Endpoint
===============================================================================
Peer Addr : 192.0.2.3 Peer Name :
Admin State : up Oper State : down
Last State chg : Source Addr :
System Id : 04:0f:ff:00:00:00 Sys Priority : 0
Keep Alive Intvl: 10 Hold on Nbr Fail : 3
Passive Mode : disabled Psv Mode Oper : No
Boot Timer : 300 BFD : enabled
Last update : 06/21/2019 08:13:23 MC-EP Count : 1
===============================================================================
The spoke-SDPs are active on PE-3 and on PE-4.
*A:PE-3# show service id 2 endpoint | match "Tx Active"
Tx Active (SDP) : 31:1
Tx Active Up Time : 0d 00:05:58
Tx Active Change Count : 6
Last Tx Active Change : 06/21/2019 08:19:09
*A:PE-4# show service id 2 endpoint | match "Tx Active"
Tx Active (SDP) : 41:1
Tx Active Up Time : 0d 00:04:56
Tx Active Change Count : 3
Last Tx Active Change : 06/21/2019 08:19:05
This can potentially cause a loop in the VPLS. The section Passive mode describes how to avoid this loop.
Passive mode
As in the preceding Multi-chassis communication failure subsection, if there is a failure in the multi-chassis communication, both nodes will assume that the peer is down and will revert to single-chassis mode. This can create loops because two spoke-SDPs can become active.
One solution is to synchronize the two core nodes, and configure them in passive mode. See Multi-chassis passive mode.
In passive mode, both peers will stay dormant as long as one active spoke-SDP is signaled from the remote end. If more than one spoke-SDP becomes active, the MC-EP algorithm will select the best SDP. All other spoke-SDPs are blocked locally (in Rx and Tx directions). There is no signaling sent to the remote PEs.
If one peer is configured in passive mode, the other peer will be forced to passive mode as well.
The no suppress-standby-signaling and no ignore-standby-signaling commands are required.
The following output shows the multi-chassis configuration on PE-1 (similar on PE-2).
# on PE-1:
configure
redundancy
multi-chassis
peer 192.0.2.2 create
mc-endpoint
no shutdown
passive-mode
exit
no shutdown
exit
exit
The following output shows the VPLS spoke-SDP configuration on PE-1 (similar on PE-2).
# on PE-1:
configure
service
vpls 1
endpoint "METRO1" create
no suppress-standby-signaling
mc-endpoint 1
mc-ep-peer 192.0.2.2
exit
exit
spoke-sdp 13:1 endpoint "METRO1" create
exit
spoke-sdp 14:1 endpoint "METRO1" create
exit
no shutdown
exit
To simulate a communication failure between the two nodes, a static route is defined on PE-3 that will black-hole the system address of PE-4.
# on PE-3:
configure
router
static-route-entry 192.0.2.4/32
black-hole
no shutdown
exit
exit
The spoke-SDPs are active on PE-3 and on PE-4.
*A:PE-3# show service id 2 endpoint | match "Tx Active"
Tx Active (SDP) : 31:1
Tx Active Up Time : 0d 00:00:28
Tx Active Change Count : 8
Last Tx Active Change : 06/21/2019 08:20:24
*A:PE-4# show service id 2 endpoint | match "Tx Active"
Tx Active (SDP) : 41:1
Tx Active Up Time : 0d 00:00:22
Tx Active Change Count : 5
Last Tx Active Change : 06/21/2019 08:20:25
PE-1 and PE-2 have blocked one spoke-SDP which avoids a loop in the VPLS.
*A:PE-1# show service id 1 endpoint "METRO1" | match "Tx Active"
Tx Active (SDP) : 13:1
Tx Active Up Time : 0d 00:00:58
Tx Active Change Count : 5
Last Tx Active Change : 06/21/2019 08:20:50
*A:PE-2# show service id 1 endpoint "METRO1" | match "Tx Active"
Tx Active : none
Tx Active Up Time : 0d 00:00:00
Tx Active Change Count : 2
Last Tx Active Change : 06/21/2019 08:20:15
The passive nodes do not set the pseudowire status bits; therefore, the nodes PE-3 and PE-4 are not aware that one spoke-SDP is blocked.
Conclusion
Multi-chassis endpoint for VPLS active/standby pseudowire allows building a hierarchical VPLS without a single point of failure, and without requiring STP to avoid loops.
Care must be taken to avoid loops: the multi-chassis peer communication is critical and should remain possible over different interfaces and paths.
Passive mode can be a solution to avoid loops in case of multi-chassis communication failure.