IEEE 802.1ah Provider Backbone Bridging

PBB overview

The IEEE 802.1ah draft standard, also known as Provider Backbone Bridges (PBB), defines an architecture and bridge protocols for the interconnection of multiple Provider Bridge Networks (PBNs; IEEE 802.1ad QinQ networks). PBB is defined in IEEE as a connectionless technology based on multipoint VLAN tunnels. IEEE 802.1ah employs Provider MSTP as the core control plane for loop avoidance and load balancing. As a result, the coverage of the solution is limited by STP scale in the core of large service provider networks.

Virtual Private LAN Service (VPLS), RFC 4762, Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling, provides a solution for extending Ethernet LAN services using MPLS tunneling capabilities through a routed, traffic-engineered MPLS backbone without running (M)STP across the backbone. As a result, VPLS has been deployed on a large scale in service provider networks.

The Nokia implementation fully supports a native PBB deployment and an integrated PBB-VPLS model in which desirable PBB features, such as MAC hiding and service aggregation, are combined with the service provider fit of the initial VPLS model to provide the best of both worlds.

PBB features

This section provides information about PBB features.

Integrated PBB-VPLS solution

HVPLS introduced a service-aware device in a central core location to provide efficient replication and controlled interaction at domain boundaries. The core network facing provider edge (N-PE) devices have knowledge of all VPLS services and customer MAC addresses for local and related remote regions, resulting in potential scalability issues, as depicted in Large HVPLS deployment.

Figure 1. Large HVPLS deployment

In a large VPLS deployment, it is important to improve the stability of the overall solution and to speed up service delivery. These goals are achieved by reducing the load on the N-PEs and by minimizing the number of provisioning touches on the N-PEs.

The integrated PBB-VPLS model introduces an additional PBB hierarchy in the VPLS network to address these goals as depicted in Large PBB-VPLS deployment.

Figure 2. Large PBB-VPLS deployment

PBB encapsulation is added at the user facing PE (U-PE) to hide the customer MAC addressing and topology from the N-PE devices. The core N-PEs only need to handle backbone MAC addressing and do not need visibility of each customer VPN. As a result, the integrated PBB-VPLS solution decreases the load in the N-PEs and improves the overall stability of the backbone.

The Nokia PBB-VPLS solution also provides automatic discovery of the customer VPNs through the implementation of IEEE 802.1ak MMRP, minimizing the number of provisioning touches required at the N-PEs.

PBB technology

The IEEE 802.1ah specification encapsulates the customer or QinQ payload in a provider header, as shown in QinQ payload in provider header example.

Figure 3. QinQ payload in provider header example

PBB adds a regular Ethernet header where the B-DA and B-SA are the backbone destination and source MACs, respectively, of the edge U-PEs. The backbone MACs (B-MACs) are used by the core N-PE devices to switch the frame through the backbone.

A special group MAC is used for the backbone destination MAC (B-DA) when handling an unknown unicast, multicast, or broadcast frame. This backbone group MAC is derived from the I-service instance identifier (ISID) using the following rule: a standard group OUI (01-1E-83) followed by the 24-bit ISID coded in the last three bytes of the MAC address.
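
For example, ISID 10001 is 0x002711 in hexadecimal, so the derived backbone group MAC is 01:1E:83:00:27:11; this is the address that appears for I-VPLS 10001 in the show service spb mfib output later in this section.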

The BVID (backbone VLAN ID) field is a regular DOT1Q tag and controls the size of the backbone broadcast domain. When the PBB frame is sent over a VPLS pseudowire, this field may be omitted depending on the type of pseudowire used.

The ITAG that follows (standard Ethertype value 0x88E7) identifies the customer VPN to which the frame is addressed through the 24-bit ISID. Support for service QoS is provided through the priority (3-bit I-PCP) and DEI (1-bit) fields.
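
Assembling these fields, the resulting PBB (MAC-in-MAC) frame can be sketched as follows; field widths follow the preceding description, and the BVID tag may be absent, as noted above:

B-DA (6 bytes) | B-SA (6 bytes) | BVID tag | ITAG (Ethertype 0x88E7: I-PCP 3 bits, DEI 1 bit, ISID 24 bits) | customer frame (C-DA, C-SA, customer tags, payload)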

PBB mapping to existing VPLS configurations

The IEEE model for PBB is organized around a B-component handling the provider backbone layer and an I-component concerned with the mapping of the customer/provider bridge (QinQ) domain (MACs, VLANs) to the provider backbone (B-MACs, B-VLANs); that is, the I-component contains the boundary between the customer and backbone MAC domains.

The Nokia implementation extends the IEEE model for PBB to allow support for MPLS pseudowires using a chain of two VPLS contexts linked together, as depicted in PBB mapping to VPLS configurations.

Figure 4. PBB mapping to VPLS configurations

A VPLS context is used to provide the backbone switching component. The white circle marked B, referred to as backbone-VPLS (B-VPLS), operates on backbone MAC addresses providing a core multipoint infrastructure that may be used for one or multiple customer VPNs. The Nokia B-VPLS implementation allows the use of both native PBB and MPLS infrastructures.

Another VPLS context (I-VPLS) can be used to provide the multipoint I-component functionality emulating the E-LAN service (see the triangle marked ‟I” in PBB mapping to VPLS configurations). Similar to B-VPLS, I-VPLS inherits from the regular VPLS the pseudowire (SDP bindings) and native Ethernet (SAPs) handoffs, thereby accommodating different types of access: for example, direct customer link, QinQ, or HVPLS.

To support PBB E-Line (point-to-point service), the use of an Epipe as the I-component is allowed. All Ethernet SAPs supported by a regular Epipe are also supported in the PBB Epipe.
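
As an illustration, the following hedged sketch binds an Epipe I-component to a B-VPLS tunnel; the service IDs, ISID, and backbone destination MAC are illustrative, and the exact CLI syntax should be verified for the release in use:

configure service
    epipe 20001 customer 1 create
        pbb
            tunnel 100001 backbone-dest-mac 00:10:00:01:00:02 isid 20001
        exit
        sap 1/2/1:200 create
        exit
        no shutdown
    exit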

SAP and SDP support

This section provides information about SAP and SDP support.

PBB B-VPLS

  • The following describes SAP support for PBB B-VPLS:

    • Ethernet DOT1Q and QinQ encapsulations are supported. This is applicable to most PBB use cases; for example, one backbone VLAN ID used for native Ethernet tunneling. In the case of QinQ, a single tag x is supported on a QinQ encapsulation port; for example, 1/1/1:x.* or 1/1/1:x.0.

    • Ethernet null is supported. This supports a direct connection between PBB PEs where no BVID is required.

    • Default SAP types are blocked in the CLI for the B-VPLS SAP.

  • The following rules apply to the SAP processing of PBB frames:

    • For ‟transit frames” (frames not destined for a local B-MAC), there is no need to process the ITAG component of the PBB frames. Regular Ethernet SAP processing is applied to the backbone header (B-MACs and BVID).

    • If a local I-VPLS instance is associated with the B-VPLS, ‟local frames” originated or terminated on local I-VPLS instances are PBB encapsulated or de-encapsulated using the pbb-etype provisioned under the related port or SDP component (see the sketch after this list).

  • The following describes SDP support for PBB B-VPLS:

    • For MPLS, both mesh and spoke-SDPs with split horizon groups are supported.

    • Similar to regular pseudowires, the outgoing PBB frame on an SDP (that is, a B-pseudowire) contains a BVID qtag only if the pseudowire type is Ethernet VLAN. If the pseudowire type is Ethernet, the BVID qtag is stripped before the frame goes out.
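
As a hedged sketch of the pbb-etype provisioning mentioned in the SAP processing rules above (the port and SDP identifiers are illustrative; 0x88e7 is the default PBB Ethertype):

configure port 1/1/1 ethernet pbb-etype 0x88e7
configure service sdp 1 pbb-etype 0x88e7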

PBB I-VPLS

  • port level

    All existing Ethernet encapsulation types are supported: null, dot1q, qinq.

  • SAPs

    The following describes SAP support for PBB I-VPLS:

    • The I-VPLS SAPs can coexist on the same port with SAPs for other business services; for example, VLL or VPLS SAPs.

    • All existing Ethernet encapsulations are supported: null, dot1q, qinq.

  • SDPs

    GRE and MPLS SDPs are supported as spoke-SDPs only. Mesh behavior can be emulated by placing all spoke-SDPs in the same split horizon group, as shown in the sketch below.
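
For example, mesh-like behavior for an I-VPLS can be emulated by placing all spoke-SDPs in one split horizon group; the SDP and service identifiers below are illustrative:

configure service
    vpls 10001 customer 1 i-vpls create
        split-horizon-group "core-shg" create
        exit
        spoke-sdp 10:10001 split-horizon-group "core-shg" create
        exit
        spoke-sdp 20:10001 split-horizon-group "core-shg" create
        exit
    exit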

Existing SAP processing rules still apply for the I-VPLS case; the SAP encapsulation definition on Ethernet ingress ports defines which VLAN tags are used to determine the service that the packet belongs to:

  • For null encapsulations defined on ingress, any VLAN tags are ignored and the packet goes to the default service for the SAP.

  • For dot1q encapsulations defined on ingress, only the first VLAN tag is considered.

  • For qinq encapsulations defined on ingress, both VLAN tags are considered; wildcard support is for the inner VLAN tag.

  • For dot1q/qinq encapsulations, traffic encapsulated with VLAN tags for which there is no definition is discarded.

  • Any VLAN tag used for service selection on the I-SAP is stripped before the PBB encapsulation is added. Appropriate VLAN tags are added at the remote PBB PE when sending the packet out on the egress SAP.

I-VPLS services do not support the forwarding of PBB encapsulated frames received on SAPs or spoke SDPs through their associated B-VPLS service. PBB frames are identified based on the configured PBB Ethertype (0x88e7 by default).

PBB packet walkthrough

This section describes the path of a packet that traverses the B-VPLS and I-VPLS instances, using the example of a unicast frame between two customer stations, as depicted in PBB packet walkthrough.

Figure 5. PBB packet walkthrough

The station with C-MAC (customer MAC) X wants to send a unicast frame to C-MAC Y through the PBB-VPLS network. A customer frame arriving at PBB-VPLS U-PE1 is encapsulated with the PBB header. The local I-VPLS FDB on U-PE1 is consulted to determine the destination B-MAC of the egress U-PE for C-MAC Y. In this example, B2 is assumed to be known as the B-DA for Y. If C-MAC Y is not present in the U-PE1 forwarding database, the PBB packet is sent in the B-VPLS using the standard group MAC address for the ISID associated with the customer VPN. If the uplink to the N-PE is a spoke pseudowire, the related PWE3 encapsulation is added in front of the B-DA.

Next, only the backbone header (shown in green) is used to switch the frame through the green B-VPLS/VPLS instances in the N-PEs. At the receiving U-PE2, C-MAC X is learned as being behind B-MAC B1; then the PBB encapsulation is removed and the lookup for C-MAC Y is performed. Where a pseudowire is used between the N-PE and U-PE2, the pseudowire encapsulation is removed first.

PBB control planes

PBB technology can be deployed in a number of environments. Natively, PBB is an Ethernet data plane technology that offers service scalability and multicast efficiency.

PBB can operate in the following environments:

  • MPLS (mesh and spoke-SDPs)

  • Ethernet SAPs

Within these environments, SR OS offers a number of optional control planes; see B-VPLS control planes.

In general, a control plane is required on Ethernet SAPs or SDPs where there could be physical loops. Some network configurations of mesh and spoke-SDPs can avoid physical loops, and in those cases no control plane is required.

The choice of control plane is based on the requirements of the network. SPBM for PBB offers a scalable link state control plane without B-MAC flooding and learning or MMRP. RSTP and MSTP offer spanning tree options based on B-MAC flooding and learning. MMRP is used with flooding and learning to improve multicast.

SPBM

Shortest Path Bridging (SPB) enables a next generation control plane for PBB based on IS-IS that adds the stability and efficiency of link state to unicast and multicast services. Specifically, this is an implementation of SPBM (SPB MAC mode). The SR OS PBB B-VPLS offers point-to-point and multipoint-to-multipoint services at large scale. PBB B-VPLS is deployed in both Ethernet and MPLS networks supporting Ethernet VLL and VPLS services. SPB removes the flooding and learning mode from the PBB backbone network and replaces MMRP for ISID group MAC registration, providing flood containment. SR OS SPB provides true shortest path forwarding for unicast and efficient forwarding on a single tree for multicast. It supports the selection of shortest path equal cost tie-breaking algorithms to enable diverse forwarding in an SPB network.

Flooding and learning versus link state

SPB brings a link state capability that improves scalability and performance for large networks over the xSTP flooding and learning models. Flooding and learning has two consequences: first, a message invoking a flush must be propagated; second, the data plane is allowed to flood and relearn while flushing is happening. Message-based operation over these data planes may experience congestion and packet loss.

Table 1. B-VPLS control planes

Control plane    Flooding and learning    Multipath                                          Convergence time
xSTP             Yes                      MSTP                                               xSTP + MMRP
G.8032           Yes                      Multiple ring instances (ring topologies only)    Eth-OAM based + MMRP
SPB-M            No                       Yes (ECT based)                                   IS-IS link state (incremental)

Link state operates differently: only the information that truly changes needs to be updated. Traffic that is not affected by a topology change is not disturbed and does not experience congestion because there is no flooding. SPB is a link state mechanism that uses restoration to re-establish the paths affected by a topology change. It is more deterministic and reliable than the RSTP and MMRP mechanisms. SPB can handle any number of topology changes, and as long as the network has some connectivity, SPB does not isolate any traffic.

SPB for B-VPLS

The SR OS model supports PBB Epipes and I-VPLS services on the B-VPLS. SPB is added to B-VPLS in place of other control planes (see B-VPLS control planes). SPB runs in a separate instance of IS-IS. SPB is configured in a single service instance of B-VPLS that controls the SPB behavior (through IS-IS parameters) for the SPB IS-IS session between nodes. Up to four independent instances of SPB can be configured. Each SPB instance requires a separate control B-VPLS service. A typical SPB deployment uses a single control B-VPLS with zero, one, or more user B-VPLS instances. SPB is multi-topology (MT) capable in terms of the IS-IS LSP TLV definitions; however, logical instances offer nearly the same capability as MT. The SR OS SPB implementation always uses MT topology instance zero. Area addresses are not used, and SPB is assumed to be a single area. SPB must be consistently configured on nodes in the system. SPB regions information and the IS-IS hello logic that detects mismatched configuration are not supported.

SPB Link State PDUs (LSPs) contain B-MACs, I-SIDs (for multicast services), and link and metric information for an IS-IS database. Epipe I-SIDs are not distributed in SR OS SPB, allowing high scalability of PBB Epipes. I-VPLS I-SIDs are distributed in SR OS SPB, and the respective multicast group addresses are automatically populated in forwarding in a manner that provides automatic pruning of multicast to the subset of the multicast tree that supports I-VPLS with a common I-SID. This replaces the function of MMRP and is more efficient than MMRP, allowing SPB to scale to a greater number of I-SIDs.

SPB on SR OS can leverage MPLS networks or Ethernet networks or combinations of both. SPB allows PBB to take advantage of multicast efficiency and at the same time leverage MPLS features such as resiliency.

Control B-VPLS and user B-VPLS

A control B-VPLS is required for the configuration of the SPB parameters and as a service to enable SPB. A control B-VPLS therefore must be configured everywhere SPB forwarding is expected to be active, even if there are no terminating services. SPB uses the logical instance and a Forwarding ID (FID) to identify SPB locally on the node. The FID is used in place of the SPB VLAN identifier (Base VID) in IS-IS LSPs, enabling a reference to exchange SPB topology and addresses. More specifically, SPB advertises B-MACs and I-SIDs in a B-VLAN context. Because the service model in SR OS separates the VLAN tag used on the port for encapsulation from the VLAN ID used in SPB, the SPB VLAN is a logical concept and is represented by configuring a FID. B-VPLS SAPs use VLAN tags (SAPs with Ethernet encapsulation) that are independent of the FID value. The encapsulation is local to the link in SR/ESS, so the SAP encapsulation must be configured the same between neighboring switches. The FID for a specified instance of SPB between two neighbor switches must be the same. The independence of VID encapsulation is inherent to SR OS PBB B-VPLS. This also allows spoke-SDP bindings to be used between neighboring SPB instances without any VID tags. The one exception is that mesh SDPs are not supported; arbitrary mesh topologies are, however, supported by SR OS SPB.

Control and user B-VPLS with FIDs shows two switches where an SPB control B-VPLS is configured with FID 1 and uses SAP 1/1/1:5, therefore using VLAN tag 5 on the link. SAP 1/1/1:1 could also have been used, but in SR OS the VID does not have to equal the FID. Alternatively, an MPLS PW (spoke-SDP binding) could be used for some interfaces in place of the SAP. Control and user B-VPLS with FIDs shows a control B-VPLS and two user B-VPLS instances. The user B-VPLS must share the same topology and are required to have interfaces on SAPs/spoke-SDPs on the same links or LAG groups as the control B-VPLS. To allow services on different B-VPLS instances to use different paths when multiple paths exist, a different ECT algorithm can be configured on a B-VPLS instance. In this case, the user B-VPLS still fate-shares the same topology, but it may use different paths for data traffic; see Shortest path and single tree.

Figure 6. Control and user B-VPLS with FIDs

Each user B-VPLS offers the same service capability as a control B-VPLS and is configured to ‟follow”, or fate-share with, a control B-VPLS. A user B-VPLS must be configured as active on the whole topology where the control B-VPLS is configured and active. If there is a mismatch between the topology of a user B-VPLS and the control B-VPLS, only the user B-VPLS links and nodes that are in common with the control B-VPLS function. The services on any B-VPLS are independent of a particular user B-VPLS, so a misconfiguration of one user B-VPLS does not affect other B-VPLS instances. For example, if a SAP or spoke-SDP is missing in a user B-VPLS, any traffic from that user B-VPLS that would use that interface is missing forwarding information, and traffic is dropped only for that B-VPLS. The computation of paths is based only on the control B-VPLS topology.

User B-VPLS instances supporting only unicast services (PBB Epipes) may share the FID with other B-VPLS instances (control or user). This is a configuration shortcut that reduces the LSP advertisement size for B-VPLS services but results in the same separation for forwarding between the B-VPLS services. In the case of PBB Epipes, only B-MACs are advertised per FID, but B-MACs are populated per B-VPLS in the FDB. If I-VPLS services are to be supported on a B-VPLS, that B-VPLS must have an independent FID.

Shortest path and single tree

The IEEE 802.1aq standard SPB uses a source-specific tree model. The standard model is more computationally intensive for multicast traffic because, in addition to the SPF algorithm for unicast and multicast from a single node, an all-pairs shortest path computation is needed for the other nodes in the network. In addition, the computation must be repeated for each ECT algorithm. While the standard yields efficient shortest paths, this computation is overhead for systems where multicast traffic volume is low. Ethernet VLL and VPLS unicast services are popular in PBB networks, and the SR OS SPB design is optimized for unicast delivery using shortest paths. Ethernet networks supporting unicast and multicast services are commonly deployed as Ethernet transport networks. SR OS SPB single tree multicast (also called shared tree or *,G) operates similarly. The difference is that SPB multicast never floods unknown traffic.

The SR OS implementation of SPB, with shortest path unicast and single tree multicast, requires only two SPF computations per topology change, reducing the computation requirements: one computation for unicast forwarding and the other for multicast forwarding.

A single tree multicast requires selecting a root node, much like RSTP. Bridge priority controls the choice of the root node and alternate root nodes. The numerically lowest bridge priority is the criterion for choosing a root node. If multiple nodes have the same bridge priority, the lowest Bridge Identifier (System Identifier) is the root.
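
As an illustrative sketch, and assuming bridge priority is set under the SPB level context (consistent with the Bridge Priority field in the show service spb base output later in this section; verify the exact syntax for the release in use):

configure service vpls 100001
    spb 1024 fid 1 create
        level 1
            bridge-priority 8
        exit
    exit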

In SPB, the source-bmac can override the chassis-mac, allowing independent control of tie breaking. Shortest path unicast forwarding does not require any special configuration other than selecting the ECT algorithm: a B-VPLS is configured to use a FID with the low-path-id or high-path-id algorithm as the tiebreaker between equal cost paths. Bridge priority allows some adjustment of paths. Configuring link metrics adjusts the number of equal cost paths.

To illustrate the behavior of the path algorithms, an example network is shown in Example partial mesh network.

Figure 7. Example partial mesh network

Assume that Node A has the lowest Bridge Identifier and is the multicast root node, and that all links have equal metrics. Also assume that Bridge Identifiers are ordered such that Node A has a numerically lower Bridge Identifier than Node B, Node B has a lower Bridge Identifier than Node C, and so on. Unicast paths are configured to use the shortest path tree (SPT). Unicast paths for low-path-id and high-path-id shows the shortest paths computed from Node A and Node E to Node F. There are only two shortest paths from A to F. The low-path-id algorithm uses Node B as the transit node, and the high-path-id algorithm uses Node D as the transit node. The reverse paths from Node F to A are the same (all unicast paths are reverse path congruent). From Node E to Node F there are three paths: E-A-B-F, E-A-D-F, and E-C-D-F. The low-path-id algorithm uses path E-A-B-F and the high-path-id algorithm uses E-C-D-F. These paths are disjoint and reverse path congruent. Any nodes that are directly connected in this network have only one path between them (not shown for simplicity).

Figure 8. Unicast paths for low-path-id and high-path-id

For multicast paths, the algorithms used are the same low-path-id or high-path-id, but the tree is always a single tree using the root selected as described earlier (in this case, Node A). Multicast paths for low-path-id and high-path-id shows the multicast paths for the low-path-id and high-path-id algorithms.

Figure 9. Multicast paths for low-path-id and high-path-id

All nodes in this network use one of these trees. The path for multicast to/from Node A is the same as for unicast traffic to/from Node A for both low-path-id and high-path-id. However, the multicast path for other nodes is now different from the unicast paths for some destinations. For example, the path from Node E to Node F is now different for high-path-id because the path must transit the root, Node A. In addition, the Node E multicast path to C is E-A-C, even though E has a direct path to Node C. A rule of thumb is that the node chosen as root should be a well-connected node with available resources. In this example, Node A and Node D are the best choices for root nodes.

The distribution of I-SIDs allows efficient pruning of the multicast single tree on a per-I-SID basis because only MFIB entries between nodes on the single tree are populated. For example, if Nodes A, B, and F share an I-SID and they use the low-path-id algorithm, only those three nodes have multicast traffic for that I-SID. If the high-path-id algorithm is used, traffic from Nodes A and B must go through D to reach Node F.

Data path and forwarding

The implementation of SPB on SR OS uses the PBB data plane. There is no flooding of B-MAC-based traffic. If a B-MAC is not found in the FDB, traffic is dropped until the control plane populates that B-MAC. Unicast B-MAC addresses are populated in all FDBs regardless of I-SID membership. There is a unicast FDB per B-VPLS, both control B-VPLS and user B-VPLS. B-VPLS instances that do not have any I-VPLS have only a default multicast tree and do not have any multicast MFIB entries.

The data plane supports an ingress check (reverse path forwarding check) for unicast and multicast frames on the respective trees. The ingress check is performed automatically. For unicast or multicast frames, the B-MAC of the source must be in the FDB and the interface must be valid for that B-MAC, or traffic is dropped. The PBB encapsulation (see PBB technology) is unchanged from current SR OS. Multicast frames use the PBB multicast frame format, and SPBM distributes I-VPLS I-SIDs, which allows SPB to populate forwarding only to the relevant branches of the multicast tree. Therefore, SPB replaces both spanning tree control and MMRP functionality in one protocol.

By using a single tree for multicast, the amount of MFIB space used for multicast is reduced (per-source shortest path trees for multicast are not currently offered on SR OS). In addition, a single tree reduces the amount of computation required when there is a topology change.

SPB Ethernet OAM

Ethernet OAM works on Ethernet services and uses a combination of unicast with learning and multicast addresses. SPB on SR OS supports both unicast and multicast forwarding, but with no learning, and unicast and multicast may take different paths. In addition, the SR OS SPB control plane offers a wide variety of show commands. The SPB IS-IS control plane takes the place of many Ethernet OAM functions. SPB IS-IS frames (hello, PDU, and so on) are multicast, but they are sent per SPB interface on the control B-VPLS interfaces and are not PBB encapsulated.

All client Ethernet OAM is supported from I-VPLS interfaces and PBB Epipe interfaces across the SPB domain. Client OAM is the only true test of the PBB data plane. The only form of Eth-OAM supported directly on an SPB B-VPLS is virtual MEPs (vMEPs). Only CCM is supported on these vMEPs; vMEPs use an S-tag encapsulation and follow the SPB multicast tree for the specified B-VPLS. Each vMEP has an associated unicast MAC to terminate various ETH-CFM tools. However, CCM messages always use a destination Layer 2 multicast address of 01:80:C2:00:00:3x (where x = 0 to 7). vMEPs terminate CCM with the multicast address. Unicast CCM can be configured for point-to-point associations or hub-and-spoke configurations, but this would not be typical (when unicast addresses are configured on vMEPs, they are automatically distributed by SPB in IS-IS).

Up MEPs on services (I-VPLS and PBB Epipes) are also supported, and these behave as any service OAM. These OAM frames use the PBB encapsulation and follow the PBB path to the destination.

Link OAM (IEEE 802.3ah EFM) is supported below SPB as standard. This combined strategy of SPB IS-IS and OAM provides coverage across the layers.

Table 2. SPB Ethernet OAM operation summary

  • OAM origination: PBB Epipe or customer CFM on PBB Epipe; Up MEPs on PBB Epipe.
    Data plane support: fully supported; unicast PBB frames encapsulating unicast/multicast.
    Comments: transparent operation; uses encapsulated PBB with a unicast B-MAC address.

  • OAM origination: I-VPLS or customer CFM on I-VPLS; Up MEPs on I-VPLS.
    Data plane support: fully supported; unicast/multicast PBB frames determined by OAM type.
    Comments: transparent operation; uses encapsulated PBB frames with a multicast/unicast B-MAC address.

  • OAM origination: vMEP on B-VPLS service.
    Data plane support: CCM only; S-tagged multicast frames.
    Comments: Ethernet CCM only; follows the multicast tree; unicast addresses may be configured for peer operation.

In summary, SPB offers an automated control plane and optional Eth-CFM/Eth-EFM to allow monitoring of Ethernet services using SPB. B-VPLS services, PBB Epipes, and I-VPLS services support the existing set of Ethernet capabilities.

SPB levels

Levels are part of IS-IS. SPB supports Level 1 within a control B-VPLS. Future enhancements may make use of levels.

SPBM to non-SPBM interworking

By using static definitions of B-MACs and ISIDs, interworking of PBB Epipes and I-VPLS services between SPBM networks and non-SPBM PBB networks can be achieved.

Static MACs and static ISIDs

To extend SPBM networks to other PBB networks, static MACs and ISIDs can be defined under SPBM SAPs/SDPs. The declaration of a static MAC in an SPBM context allows a non-SPBM PBB system to receive frames from an SPBM system. These static MACs are conditional on the SAP/SDP operational state. Currently, this is only supported for SPBM, because SPBM can advertise these B-MACs and ISIDs without any requirement for flushing. The B-MAC (and B-MAC-to-ISID mapping) must remain consistent when advertised in the IS-IS database.

The declaration of static ISIDs allows an efficient connection of ISID-based services. The ISID is advertised as supported on the local nodal B-MAC, and the static B-MACs that are the true destinations for the ISIDs are also advertised. When the I-VPLS learns the remote B-MAC, it associates the ISID with the true destination B-MAC. Therefore, if redundancy is used, the B-MACs and ISIDs that are advertised must be the same on any redundant interfaces.

If the interface is an MC-LAG interface, the static MACs and ISIDs on the SAPs/SDPs using that interface are only active when the associated MC-LAG interface is active. If the interface is a spoke-SDP on an active/standby pseudowire (PW), the ISIDs and B-MACs are only active when the PW is active.
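
A hedged sketch of a conditional static MAC and a static ISID under a B-VPLS SAP facing a non-SPBM system follows; the SAP, MAC, and ISID values are illustrative, and the exact syntax should be verified for the release in use:

configure service vpls 100001
    sap 1/1/5:5 create
        static-isid range 1 isid 10001 to 10001 create
        exit
        static-mac mac 00:99:00:01:00:01 create monitor fwd-status
    exit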

Epipe static configuration

For Epipes, only the B-MACs need to be advertised; there is no multicast for PBB Epipes. Unicast traffic follows the unicast path (shortest path or single tree). By configuring remote B-MACs, Epipes can be set up to non-SPBM systems. A special conditional static-mac is used for SPBM PBB B-VPLS SAPs/SDPs that are connected to a remote system. In Static MACs example, ISID 500 is used for the PBB Epipe, but only conditional MACs A and B are configured on the MC-LAG ports. The B-VPLS advertises the static MAC either always or, optionally, based on a condition of the port forwarding.

Figure 10. Static MACs example

I-VPLS static configuration

The I-VPLS static configuration consists of two components: static MACs, and static ISIDs that represent a remote B-MAC-to-ISID combination.

The static MACs are configured as with Epipes: the special conditional static-mac is used for SPBM PBB B-VPLS SAPs/SDPs that are connected to a remote system. The B-VPLS advertises the static MAC either always or, optionally, based on a condition of the port forwarding.

The static ISIDs are created under the B-VPLS SAPs/SDPs that are connected to a non-SPBM system. These ISIDs are typically advertised but may be controlled by an ISID policy.

For I-VPLS ISIDs, the ISIDs are advertised and the multicast MACs are automatically created using the PBB OUI and the ISID. SPBM supports the pruned multicast single tree. Unicast traffic follows the unicast path (shortest path or single tree). Multicast and unknown unicast traffic follows the pruned single tree for that ISID.

Figure 11. Static ISIDs example

SPBM ISID policies

ISID policies are an optional aspect of SPBM that allows additional control of ISIDs for I-VPLS. PBB services using SPBM automatically populate multicast for I-VPLS and static ISIDs. Incorrect use of an isid-policy can create black holes or additional flooding of multicast.

To enable more flexible multicast, ISID policies control the amount of MFIB space used by ISIDs by trading off the default multicast tree against the per-ISID multicast tree. Occasionally, customers want I-VPLS services that have multiple sites but carry primarily unicast traffic. The ISID policy can be used on any node where an I-VPLS is defined or static ISIDs are defined.

The typical use is to suppress the installation of the ISID in the MFIB using use-def-mcast and to suppress the distribution of the ISID in SPBM using no advertise-local.

The use-def-mcast policy instructs SPBM to use the default B-VPLS multicast forwarding for the ISID range. The ISID multicast frame remains unchanged by the policy (the standard format with the PBB OUI and the ISID as the multicast destination address), but no MFIB entry is allocated. This causes the forwarding to use the default BVID multicast tree, which is not pruned. When this policy is in place, it only governs the forwarding locally on the current B-VPLS.

The advertise-local policy applies to both static ISIDs and I-VPLS ISIDs. The policies define whether the ISIDs are advertised in SPBM and whether they use the local MFIB. When ISIDs are advertised, they use the MFIB in the remote nodes. Locally, the use of the MFIB is controlled by the use-def-mcast policy.
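
A minimal isid-policy sketch combining the two controls described above might look as follows; the entry number and ISID range are illustrative, and the exact syntax should be verified for the release in use:

configure service vpls 100001
    isid-policy
        entry 10 create
            use-def-mcast
            no advertise-local
            range 10001 to 10100
        exit
    exit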

The types of interfaces are summarized in SPBM ISID policies table.

Table 3. SPBM ISID policies table

  • Service type: Epipe.
    ISID policy on B-VPLS: no effect.
    Notes: PBB Epipe ISIDs are not advertised or in the MFIB.

  • Service type: I-VPLS.
    ISID policy on B-VPLS: none; uses the ISID multicast tree and advertises the ISIDs of the I-VPLS.
    Notes: I-VPLS uses a dedicated (pruned) multicast tree. ISIDs are advertised.

  • Service type: I-VPLS (for unicast).
    ISID policy on B-VPLS: use-def-mcast, no advertise-local.
    Notes: I-VPLS uses default multicast. Policy only required where ISIDs are defined. ISIDs are not advertised. Must be consistently defined on all nodes with the same ISIDs.

  • Service type: I-VPLS (for unicast).
    ISID policy on B-VPLS: use-def-mcast, advertise-local.
    Notes: I-VPLS uses default multicast. Policy only required where ISIDs are defined. ISIDs are advertised and the pruned tree is used elsewhere. May be inconsistent for an ISID.

  • Service type: static ISIDs for I-VPLS interworking.
    ISID policy on B-VPLS: none (recommended); uses the ISID multicast tree.
    Notes: I-VPLS uses a dedicated (pruned) multicast tree. ISIDs are advertised.

  • Service type: static ISIDs for I-VPLS interworking (defined locally).
    ISID policy on B-VPLS: use-def-mcast.
    Notes: I-VPLS uses default multicast. Policy only required where ISIDs are configured or where the I-VPLS is located.

  • Service type: no MFIB for any ISIDs (policy defined on all nodes).
    ISID policy on B-VPLS: use-def-mcast, no advertise-local.
    Notes: each B-VPLS with the policy does not install MFIB entries. Policy defined on all switches where ISIDs are defined. ISIDs advertised and pruned tree used elsewhere. May be inconsistent for an ISID.

ISID policy control

Static ISID advertisement

Static ISIDs are advertised between nodes using the SPBM Service Identifier and Unicast Address sub-TLV in IS-IS when there is no ISID policy. This TLV advertises the local B-MAC and one or more ISIDs. The B-MAC used is the source-bmac of the control/user B-VPLS. Typically, remote B-MACs (the ultimate source-bmac) and the associated ISIDs are configured as static under the SPBM interface. This allows all remote B-MACs and all remote ISIDs to be configured once per interface.

I-VPLS for unicast service

If a service uses unicast only, an I-VPLS still uses MFIB space and SPBM advertises the ISID. By using the default multicast tree locally, a node saves MFIB space. By using no advertise-local, SPBM does not advertise the ISIDs covered by the policy. Note that the actual PBB multicast frames are the same regardless of policy. Unicast traffic is not changed by the ISID policies.

The static B-MAC configuration is allowed under Multi-Chassis LAG (MC-LAG) based SAPs and active/standby PW SDPs.

Unicast traffic follows the unicast path (shortest path or single tree). With the ISID policy, multicast and unknown unicast (BUM) traffic follows the default B-VPLS tree in the SPBM domain. This should be used sparingly for services carrying any high volume of multicast.

Figure 12. ISID policy example

Default behaviors

When static ISIDs are defined, the default is to advertise the static ISIDs when the parent interface (SAP or SDP) is up.

If the advertisement is not needed, an ISID policy can be created to prevent advertising the ISID.

  • use-def-mcast

    If a policy is defined with use-def-mcast, the local MFIB does not contain a multicast MAC based on the PBB OUI + ISID, and the frame is flooded out the local tree. This applies to any node where the policy is defined. On other nodes, if the ISID is advertised, the ISID uses the MFIB for that ISID.

  • no advertise-local

    If a policy of no advertise-local is applied to ISIDs, those ISIDs are not advertised. This combination should be used everywhere there is an I-VPLS with the ISID, or where the static ISID is defined, to prevent black holes. If an ISID is to be moved from advertising to no advertising, it is advisable to first use use-def-mcast on all the nodes for that ISID, which prevents the MFIB from being installed and starts using the default multicast tree at each node with that policy. Then the no advertise-local option can be used.

Each policy may be used alone or in combination.

Example network configuration

Figure 13. Example network

Example network shows four nodes with SPB B-VPLS. The SPB instance is configured on B-VPLS 100001. B-VPLS 100001 uses FID 1 for SPB instance 1024. All B-MACs and I-SIDs are learned in the context of B-VPLS 100001. B-VPLS 100001 has an I-VPLS 10001 service, which also uses I-SID 10001. B-VPLS 100001 is configured to use VID 1 on SAPs 1/2/2 and 1/2/3; while the VID does not need to be the same as the FID, the VID does need to be the same on the other side (Dut-B and Dut-C).

A user B-VPLS service 100002 is configured, and it uses control B-VPLS 100001 to provide forwarding. It fate-shares the control topology. In Example network, the control B-VPLS uses the low-path-id algorithm and the user B-VPLS uses the high-path-id algorithm. Any B-VPLS can use any algorithm. The difference is illustrated in the path between Dut-A and Dut-D. The short dashed line through Dut-B is the low-path-id algorithm, and the long dashed line through Dut-C is the high-path-id algorithm.

Example configuration for Dut-A


Dut-A:

Control B-VPLS:

*A:Dut-A>config>service>vpls# pwc
-------------------------------------------------------------------------------
Present Working Context :
-------------------------------------------------------------------------------
 <root>
  configure
  service
  vpls "100001"
-------------------------------------------------------------------------------
*A:Dut-A>config>service>vpls# info
----------------------------------------------
            pbb
                source-bmac 00:10:00:01:00:01
            exit
            stp
                shutdown
            exit
            spb 1024 fid 1 create
                level 1
                    ect-algorithm fid-range 100-100 high-path-id
                exit
                no shutdown
            exit
            sap 1/2/2:1.1 create
                spb create
                    no shutdown
                exit
            exit
            sap 1/2/3:1.1 create
                spb create
                    no shutdown
                exit
            exit
            no shutdown
----------------------------------------------
User B-VPLS:
*A:Dut-A>config>service>vpls# pwc
-------------------------------------------------------------------------------
Present Working Context :
-------------------------------------------------------------------------------
 <root>
  configure
  service
  vpls "100002"
-------------------------------------------------------------------------------
*A:Dut-A>config>service>vpls# info
----------------------------------------------
            pbb
                source-bmac 00:10:00:02:00:01
            exit
            stp
                shutdown
            exit
            spbm-control-vpls 100001 fid 100
            sap 1/2/2:1.2 create
            exit
            sap 1/2/3:1.2 create
            exit
            no shutdown
----------------------------------------------

I-VPLS:
configure service
        vpls 10001 customer 1 i-vpls create
            service-mtu 1492
            pbb
                backbone-vpls 100001
                exit
            exit
            stp
                shutdown
            exit
            sap 1/2/1:1000.1 create
            exit
            no shutdown
        exit
        vpls 10002 customer 1 i-vpls create
            service-mtu 1492
            pbb
                backbone-vpls 100002
                exit
            exit
            stp
                shutdown
            exit
            sap 1/2/1:1000.2 create
            exit
            no shutdown
        exit
exit
Show command outputs

The show base command outputs a summary of the instance parameters under a control B-VPLS. The show command for a user B-VPLS indicates the control B-VPLS. The base parameters, except for Bridge Priority and Bridge ID, must match on neighbor nodes.

*A:Dut-A# show service id 100001 spb base
===============================================================================
Service SPB Information
===============================================================================
Admin State        : Up                 Oper State         : Up
ISIS Instance      : 1024               FID                : 1
Bridge Priority    : 8                  Fwd Tree Top Ucast : spf
Fwd Tree Top Mcast : st
Bridge Id          : 80:00.00:10:00:01:00:01
Mcast Desig Bridge : 80:00.00:10:00:01:00:01
===============================================================================
ISIS Interfaces
===============================================================================
Interface                        Level CircID  Oper State   L1/L2 Metric
-------------------------------------------------------------------------------
sap:1/2/2:1.1                    L1    65536   Up           10/-
sap:1/2/3:1.1                    L1    65537   Up           10/-
-------------------------------------------------------------------------------
Interfaces : 2
===============================================================================
FID ranges using ECT Algorithm
-------------------------------------------------------------------------------
1-99      low-path-id
100-100   high-path-id
101-4095  low-path-id
===============================================================================

The show adjacency command displays the system IDs of the connected SPB B-VPLS neighbors and the interfaces connecting those neighbors.

*A:Dut-A# show service id 100001 spb adjacency
===============================================================================
ISIS Adjacency
===============================================================================
System ID                Usage State Hold Interface                     MT Enab
-------------------------------------------------------------------------------
Dut-B                    L1    Up    19   sap:1/2/2:1.1                 No
Dut-C                    L1    Up    21   sap:1/2/3:1.1                 No
-------------------------------------------------------------------------------
Adjacencies : 2
===============================================================================

Details about the topology can be displayed with the database command. The detail option displays the contents of the LSPs.

*A:Dut-A# show service id 100001 spb database
===============================================================================
ISIS Database
===============================================================================
LSP ID                                  Sequence  Checksum Lifetime Attributes
-------------------------------------------------------------------------------
Displaying Level 1 database
-------------------------------------------------------------------------------
Dut-A.00-00                             0xc       0xbaba   1103     L1
Dut-B.00-00                             0x13      0xe780   1117     L1
Dut-C.00-00                             0x13      0x85a    1117     L1
Dut-D.00-00                             0xe       0x174a   1119     L1
Level (1) LSP Count : 4
===============================================================================

The show routes command shows the next hop for the MAC addresses, both unicast and multicast. The path to 00:10:00:01:00:04 (Dut-D) illustrates the ECT algorithms: for FID 1 (low-path-id) the neighbor is Dut-B, and for FID 100 (high-path-id) the neighbor is Dut-C. Because Dut-A is the root of the multicast single tree, the multicast forwarding is the same for Dut-A. However, unicast and multicast routes differ on most other nodes. Also, the I-SIDs exist on all of the nodes, so ISID-based multicast follows the multicast tree exactly. If the I-SID did not exist on Dut-B or Dut-D, then for FID 1 there would be no entry. Note that only designated nodes (root nodes) show metrics; non-designated nodes do not show metrics.

*A:Dut-A# show service id 100001 spb routes
================================================================
MAC Route Table
================================================================
Fid  MAC                                          Ver.   Metric
      NextHop If                    SysID
----------------------------------------------------------------
Fwd Tree: unicast
----------------------------------------------------------------
1    00:10:00:01:00:02                            10     10
      sap:1/2/2:1.1                 Dut-B
1    00:10:00:01:00:03                            10     10
      sap:1/2/3:1.1                 Dut-C
1    00:10:00:01:00:04                            10     20
      sap:1/2/2:1.1                 Dut-B
100  00:10:00:02:00:02                            10     10
      sap:1/2/2:1.1                 Dut-B
100  00:10:00:02:00:03                            10     10
      sap:1/2/3:1.1                 Dut-C
100  00:10:00:02:00:04                            10     20
      sap:1/2/3:1.1                 Dut-C

Fwd Tree: multicast
----------------------------------------------------------------
1    00:10:00:01:00:02                            10     10
      sap:1/2/2:1.1                 Dut-B
1    00:10:00:01:00:03                            10     10
      sap:1/2/3:1.1                 Dut-C
1    00:10:00:01:00:04                            10     20
      sap:1/2/2:1.1                 Dut-B
100  00:10:00:02:00:02                            10     10
      sap:1/2/2:1.1                 Dut-B
100  00:10:00:02:00:03                            10     10
      sap:1/2/3:1.1                 Dut-C
100  00:10:00:02:00:04                            10     20
      sap:1/2/3:1.1                 Dut-C
----------------------------------------------------------------
No. of MAC Routes: 12
================================================================

================================================================
ISID Route Table
================================================================
Fid  ISID                                         Ver.
      NextHop If                    SysID
----------------------------------------------------------------
1    10001                                        10
      sap:1/2/2:1.1                 Dut-B
      sap:1/2/3:1.1                 Dut-C
100  10002                                        10
      sap:1/2/2:1.1                 Dut-B
      sap:1/2/3:1.1                 Dut-C
----------------------------------------------------------------
No. of ISID Routes: 2
================================================================

The show service spb fdb command shows the programmed unicast and multicast source MACs in the SPB-managed B-VPLS service.

*A:Dut-A# show service id 100001 spb fdb

==============================================================================
User service FDB information
==============================================================================
MacAddr           UCast Source          State   MCast Source          State
------------------------------------------------------------------------------
00:10:00:01:00:02 1/2/2:1.1             ok      1/2/2:1.1             ok
00:10:00:01:00:03 1/2/3:1.1             ok      1/2/3:1.1             ok
00:10:00:01:00:04 1/2/2:1.1             ok      1/2/2:1.1             ok
------------------------------------------------------------------------------
Entries found: 3
==============================================================================

*A:Dut-A# show service id 100002 spb fdb
==============================================================================
User service FDB information
==============================================================================
MacAddr           UCast Source          State   MCast Source          State
------------------------------------------------------------------------------
00:10:00:02:00:02 1/2/2:1.2             ok      1/2/2:1.2             ok
00:10:00:02:00:03 1/2/3:1.2             ok      1/2/3:1.2             ok
00:10:00:02:00:04 1/2/3:1.2             ok      1/2/3:1.2             ok
------------------------------------------------------------------------------
Entries found: 3
==============================================================================

The show service spb mfib command shows the programmed multicast ISID PBB group MAC addresses in the SPB-managed B-VPLS service. Other types of *,G multicast traffic are sent over the multicast tree, and those MACs are not shown; for example, OAM traffic that uses multicast (such as vMEP CCM) takes this path.

*A:Dut-A# show service id 100001 spb mfib
===============================================================================
User service MFIB information
===============================================================================
MacAddr           ISID     Status
-------------------------------------------------------------------------------
01:1E:83:00:27:11 10001    Ok
-------------------------------------------------------------------------------
Entries found: 1
===============================================================================
*A:Dut-A# show service id 100002 spb mfib
===============================================================================
User service MFIB information
===============================================================================
MacAddr           ISID     Status
-------------------------------------------------------------------------------
01:1E:83:00:27:12 10002    Ok
-------------------------------------------------------------------------------
Entries found: 1
===============================================================================
Debug commands

Use the following commands to debug an SPB-managed B-VPLS service:

  • debug service id svc-id spb

  • debug service id svc-id spb adjacency [{sap sap-id | spoke-sdp sdp-id:vc-id | nbr-system-id}]

  • debug service id svc-id spb interface [{sap sap-id | spoke-sdp sdp-id:vc-id}]

  • debug service id svc-id spb l2db

  • debug service id svc-id spb lsdb [{system-id | lsp-id}]

  • debug service id svc-id spb packet [packet-type] [{sap sap-id | spoke-sdp sdp-id:vc-id}] [detail]

  • debug service id svc-id spb spf system-id

Tools commands

Use the following commands to troubleshoot an SPB-managed B-VPLS service:

  • tools perform service id svc-id spb run-manual-spf

  • tools dump service id svc-id spb

  • tools dump service id svc-id spb default-multicast-list

  • tools dump service id svc-id spb fid fid default-multicast-list

  • tools dump service id svc-id spb fid fid forwarding-path destination isis-system-id forwarding-tree {unicast | multicast}

Clear commands

Use the following commands to clear SPB-related data:

  • clear service id svc-id spb

  • clear service id svc-id spb adjacency system-id

  • clear service id svc-id spb database system-id

  • clear service id svc-id spb spf-log

  • clear service id svc-id spb statistics

IEEE 802.1ak MMRP for service aggregation and zero touch provisioning

IEEE 802.1ah supports an M:1 model where multiple customer services, represented by ISIDs, are transported through a common infrastructure (the B-component). The Nokia PBB implementation supports the M:1 model, allowing for a service architecture where multiple customer services (I-VPLS or Epipe) can be transported through a common B-VPLS infrastructure, as depicted in Customer services transported in 1 B-VPLS (M:1 model).

Figure 14. Customer services transported in 1 B-VPLS (M:1 model)

The B-VPLS infrastructure represented by the white circles is used to transport multiple customer services represented by the triangles of different colors. This service architecture minimizes the number of provisioning touches and reduces the load in the core PEs: for example, G and H use fewer VPLS instances and pseudowires.

In a real-life deployment, different customer VPNs do not share the same community of interest; for example, VPN instances may be located on different PBB PEs. The M:1 model depicted in Flood containment requirement in M:1 model requires a per-VPN flood containment mechanism so that VPN traffic is distributed only to the B-VPLS locations that have customer VPN sites: for example, flooded traffic originated in the blue I-VPLS should be distributed only to the PBB PEs where blue I-VPLS instances are present (PBB PEs B, E, and F).

Per-customer VPN distribution trees need to be created dynamically throughout the B-VPLS as new customer I-VPLS instances are added in the PBB PEs.

The Nokia PBB implementation employs the IEEE 802.1ak Multiple MAC Registration Protocol (MMRP) to dynamically build per I-VPLS distribution trees inside a specific B-VPLS infrastructure.

IEEE 802.1ak Multiple Registration Protocol (MRP) specifies changes to IEEE Std 802.1Q that provide a replacement for the GARP, GMRP, and GVRP protocols. The MMRP application of IEEE 802.1ak specifies the procedures that allow the registration/de-registration of MAC addresses over an Ethernet switched infrastructure.

In the PBB case, as I-VPLS instances are enabled in a specific PE, a group B-MAC address is instantiated by default using the standards-based PBB Group OUI and the ISID value associated with the I-VPLS.

When a new I-VPLS instance is configured in a PE, the IEEE 802.1ak MMRP application is automatically invoked to advertise the presence of the related group B-MAC on all active B-VPLS SAPs and SDP bindings.

When at least two I-VPLS instances with the same ISID value are present in a B-VPLS, an optimal distribution tree is built by MMRP in the related B-VPLS infrastructure as depicted in Flood containment requirement in M:1 model.

Figure 15. Flood containment requirement in M:1 model

MMRP support over B-VPLS SAPs and SDPs

MMRP is supported in B-VPLS instances over all the supported BVPLS SAPs and SDPs, including the primary and standby pseudowire scheme implemented for VPLS resiliency.

When a B-VPLS with MMRP enabled receives a packet destined for a specific group B-MAC, it checks its own MFIB entries and, if the group B-MAC does not exist, it floods the packet everywhere. This should never happen, because this kind of packet is generated at the I-VPLS/PBB PE only when a registration was received for a local I-VPLS group B-MAC.

I-VPLS changes and related MMRP behavior

This section describes the MMRP behavior for different changes in I-VPLS.

  • When an ISID is set for a specific I-VPLS and a link to a related B-VPLS is activated (for example, through the configure service vpls backbone-vpls vpls id:isid command), the group B-MAC address is declared on all B-VPLS virtual ports (SAPs or SDPs).

  • When the ISID is changed from one value to a new one, the old group B-MAC address is undeclared on all ports and the new group B-MAC address is declared on all ports in the B-VPLS.

  • When the I-VPLS is disassociated with the B-VPLS, the old group B-MAC is no longer advertised as a local attribute in the B-VPLS if no other peer B-VPLS PEs have it declared.

  • When an I-VPLS goes operationally down (all SAPs/SDPs are down) or the I-VPLS is shut down, the associated group B-MAC is undeclared on all ports in the B-VPLS.

  • When the I-VPLS is deleted, the group B-MAC should already be undeclared on all ports in the B-VPLS, because the I-VPLS has to be shut down before it can be deleted.

Limiting the number of MMRP entries on a per B-VPLS basis

MMRP exchanges create one entry per attribute (group B-MAC) in the B-VPLS where the MMRP protocol is running. When the first registration is received for an attribute, an MFIB entry is created for it.

The Nokia implementation allows the user to control the number of MMRP attributes (group B-MACs) created on a per-B-VPLS basis. Control over the number of related MFIB entries in the B-VPLS FDB is inherited from previous releases through the configure service vpls mfib-table-size table-size command. This ensures that no B-VPLS takes up all the resources from the total pool.
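
For example, to cap the MFIB space that a single B-VPLS can consume (the service ID and size value are illustrative):

configure service vpls 100001 mfib-table-size 4096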

Optimization for improved convergence time

Assuming that MMRP is used in a specific B-VPLS, under failure conditions the time it takes for B-VPLS forwarding to resume may depend on the data plane and control plane convergence, plus the time it takes for MMRP exchanges to settle down the flooding trees on a per-ISID basis.

To minimize the convergence time, the Nokia PBB implementation offers a mode in which B-VPLS forwarding reverts for a short time to flooding so that MMRP has enough time to converge. This mode is selected using the configure service vpls b-vpls mrp flood-time value command, where value represents the amount of time, in seconds, that flooding is enabled.

If this behavior is selected, the forwarding plane reverts to B-VPLS flooding for a configurable period (for example, a few seconds), then returns to the MFIB entries installed by MMRP.
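
A minimal sketch of enabling this mode, assuming B-VPLS service ID 10 and a 10-second flood window:

configure service vpls 10 b-vpls
  mrp
      flood-time 10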

The following B-VPLS events initiate the switch from per I-VPLS (MMRP) MFIB entries to ‟B-VPLS flooding”:

  • Reception or local triggering of a TCN

  • B-SAP failure

  • Failure of a B-SDP binding

  • Pseudowire activation in a primary/standby HVPLS resiliency solution

  • SF/CPM switchover because of STP reconvergence

Controlling MRP scope using MRP policies

MMRP advertises the group B-MACs associated with ISIDs throughout the whole BVPLS context, regardless of whether a specific IVPLS is present in one or all of the related PEs or BEBs. When evaluating the overall scalability, the resource consumption in both the control plane and the data plane must be considered:

  • control plane

    The control plane is responsible for MMRP processing and the number of attributes advertised.

  • data plane

    One tree is instantiated per ISID or Group B-MAC attribute.

In a multi-domain environment, for example multiple MANs interconnected through a WAN, the BVPLS and implicitly MMRP advertisement may span across domains. The MMRP attributes are flooded throughout the BVPLS context indiscriminately, regardless of the distribution of IVPLS sites.

The solution described in this section limits the scope of MMRP control plane advertisements to a specific network domain using MRP policies. ISID-based filters are also provided as a safety measure for the BVPLS data plane.

Figure 16. Inter-domain topology

Inter-domain topology shows the case of an inter-domain deployment where multiple metro domains (MANs) are interconnected through a wide area network (WAN). A BVPLS is configured across these domains running the PBB M:1 model to provide infrastructure for multiple IVPLS services. MMRP is enabled in the BVPLS to build per-IVPLS flooding trees. To limit the load in the core PEs or PBB BCBs, the local IVPLS instances must use MMRP and data plane resources only in the MAN regions where they have sites. A solution to these requirements is depicted in Limiting the scope of MMRP advertisements. The case of native PBB metro domains interconnected via an MPLS core is used in this example. Other technology combinations are possible.

Figure 17. Limiting the scope of MMRP advertisements

An MRP policy can be applied at the edge of the MAN1 domain to restrict the MMRP advertisements for local ISIDs outside the local domain. Alternatively, the MRP policy can specify the inter-domain ISIDs allowed to be advertised outside MAN1. The configuration of an MRP policy is similar to the configuration of a filter. It can be specified as a template or exclusively for a specific endpoint under the service MRP object. An ISID or a range of ISIDs can be used to specify one or multiple match criteria, which generate the list of group MACs used as filters to control which MMRP attributes can be advertised. An example of a simple mrp-policy that allows the advertisement of group B-MACs associated with the ISID range 100 to 150 follows:

*A:ALA-7>config>service>mrp# info
----------------------------------------------
            mrp-policy "test" create
                default-action block
                entry 1 create
                    match
                        isid 100 to 150
                    exit 
        action allow
                exit
            exit
----------------------------------------------

A special end-station action is available under the mrp-policy entry object to allow a specific SAP/PW to emulate an MMRP end-station. This is usually required when the operator does not want to activate MRP in the WAN domain for interoperability reasons, or prefers to manually specify which ISIDs are interconnected over the WAN. In this case, MRP transmission is shut down on that SAP/PW and the configured ISIDs are used in the same way as an IVPLS connection into the BVPLS, emulating static entries in the related BVPLS MFIB. Also, if MRP is active in the BVPLS context, MMRP declares the related group B-MACs continuously over all the other BVPLS SAPs/PWs until the end-station action is removed from the mrp-policy assigned to that BVPLS context.
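
A sketch of an end-station entry, following the mrp-policy syntax shown above (the policy name and ISID values are illustrative):

mrp-policy "wan-end-station" create
    default-action block
    entry 1 create
        match
            isid 200 to 210
        exit
        action end-station
    exit
exit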

The MMRP usage of the mrp-policy automatically ensures that traffic using the group B-MACs is not flooded between domains. There may still be small transitory periods when traffic originated from a PBB BEB with a unicast B-MAC destination is flooded in the BVPLS context as unknown unicast, for both IVPLS and PBB Epipe. To restrict distribution of this traffic for local PBB services, an ISID match criterion is added to the existing mac-filters. A mac-filter configured with ISID match criteria can be applied to the same interconnect endpoints, BVPLS SAP or PW, as the mrp-policy to restrict the egress transmission of any frames that contain a local ISID. An example of this configuration option is as follows:

----------------------------------------------
A:ALA-7>config>filter# info
----------------------------------------------
        mac-filter 90 create
            description "filter-wan-man"
            type isid
            scope template
            entry 1 create
                description "drop-local-isids"
                match
                    isid from 100 to 1000
                exit
                action drop
            exit
        exit
----------------------------------------------

These filters are applied as required, on a per B-SAP or B-PW basis, in the egress direction only. The ISID match criterion is exclusive with any other criteria under mac-filter: a mac-filter type attribute controls the use of ISID match criteria and must be set to isid before ISID match criteria can be used. The ISID tag is identified using the PBB Ethertype provisioned under config>port>ethernet>pbb-etype, as in the following sketch.
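
For example, the mac-filter 90 defined above could be applied in the egress direction of a B-SAP, together with the PBB Ethertype provisioning; the port, service, and SAP identifiers in this sketch are assumed values:

configure port 1/1/3
  ethernet
      pbb-etype 0x88e7

configure service vpls 10
  sap 1/1/3:500
      egress
          filter mac 90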

PBB and BGP-AD

BGP auto-discovery is supported only in the BVPLS to automatically instantiate the BVPLS pseudowires and SDPs.

PBB E-Line service

E-Line service is defined in PBB (IEEE 802.1ah) as a point-to-point service over the B-component infrastructure. The Nokia implementation offers support for PBB E-Line through the mapping of multiple Epipe services to a backbone VPLS infrastructure.

The use of Epipe scales the E-Line services as no MAC switching, learning or replication is required to deliver the point-to-point service.

All packets that ingress the customer SAP/spoke SDP are PBB encapsulated and unicasted through the B-VPLS ‟tunnel” using the backbone destination MAC of the remote PBB PE. The Epipe service does not support the forwarding of PBB encapsulated frames received on SAPs or spoke SDPs through their associated B-VPLS service. PBB frames are identified based on the configured PBB Ethertype (0x88e7 by default).

All the packets that ingress the B-VPLS destined for the Epipe are PBB de-encapsulated and forwarded to the customer SAP/spoke SDP.

A PBB E-Line service supports the configuration of a SAP or a non-redundant spoke SDP.

Non-redundant PBB Epipe spoke termination

This feature provides the capability to use non-redundant pseudowire connections on the access side of a PBB Epipe, where previously only SAPs could be configured.

PBB using G.8031 protected Ethernet tunnels

IEEE 802.1ah Provider Backbone Bridging (PBB) specification employs provider MSTP (P-MSTP) to ensure loop avoidance in a resilient native Ethernet core. The usage of P-MSTP means failover times depend largely on the size and the connectivity model used in the network. The use of MPLS tunnels provides a way to scale the core while offering fast failover times using MPLS FRR. There are still service provider environments where Ethernet services are deployed using native Ethernet backbones. A solution based on a native Ethernet backbone is required to achieve the same fast failover times as in the MPLS FRR case.

The Nokia PBB implementation offers the capability to use core Ethernet tunnels compliant with the ITU-T G.8031 specification to achieve 50 ms resiliency for backbone failures. This is required to comply with the stringent SLAs provided by service providers in the current competitive environment. The implementation also allows a LAG-emulating Ethernet tunnel providing a complementary native Ethernet E-LAN capability. The LAG-emulating Ethernet tunnels and G.8031 protected Ethernet tunnels operate independently.

The next section describes an applicability example where an Ethernet service provider using native PBB offers a carrier of carrier backhaul service for mobile operators.

Solution overview

A simplified topology example for a PBB network offering a carrier of carrier service for wireless service providers is depicted in Mobile backhaul use case.

Figure 18. Mobile backhaul use case

The wireless service provider in this example purchases an E-Line service between the ENNIs on PBB edge nodes, BEB1 and BEBn. PBB services employ Ethernet tunneling (eth-tunnels) between BEBs, where primary and backup member paths controlled by G.8031 1:1 protection ensure faster backbone convergence. Ethernet CCMs based on the IEEE 802.1ag specification may be used to monitor the liveliness of each individual member path.

The Ethernet paths span a native Ethernet backbone where the BCBs are performing simple Ethernet switching between BEBs using an Epipe or a VPLS service.

Although the network diagram shows just the Epipe case, both PBB E-Line and E-LAN services are supported.

Detailed solution description

This section discusses the details of the Ethernet tunneling for PBB. The main solution components are depicted in PBB-Epipe with B-VPLS over Ethernet tunnel.

Figure 19. PBB-Epipe with B-VPLS over Ethernet tunnel

The PBB E-Line service is represented in the BEBs as a combination of an Epipe mapped to a BVPLS instance. An eth-tunnel object is used to group two possible paths, each defined by specifying a member port and a control tag. In this example, the blue circle representing the eth-tunnel groups two paths in a protection group, each instantiated as (port, control-tag/BVID): a primary path on port 1/1/1 with control-tag 100, and a secondary path on port 2/2/2 with control-tag 200, as in the sketch below.
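
A minimal sketch of this eth-tunnel, reusing the port and control-tag values from the example (the precedence keyword for the primary path reflects common usage and is an assumption here):

configure eth-tunnel 1
  path 1
      member 1/1/1
      control-tag 100
      precedence primary
      no shutdown
  path 2
      member 2/2/2
      control-tag 200
      no shutdown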

The BCB devices stitch each BVID between different BEB-BCB links using either a VPLS or an Epipe service. Epipe instances are recommended because of the increased tunnel scalability.

Fast failure detection on the primary and backup paths is provided using IEEE 802.1ag CCMs, which can be configured to transmit at 10 ms intervals. Alternatively, link layer fault detection mechanisms such as LoS/RDI or 802.3ah can be employed.

Path failover is controlled by an Ethernet protection module, based on standard G.8031 Ethernet Protection Switching. The Nokia implementation of Ethernet protection switching supports only the 1:1 model which is common practice for packet based services because it makes better use of available bandwidth. The following additional functions are provided by the protection module:

  • Synchronization between BEBs such that both send and receive on the same Ethernet path in stable state.

  • Revertive / non-revertive choices.

  • Compliant G.8031 control plane.

The secondary path requires a MEP to exchange the G.8031 APS PDUs. The following Ethernet CFM configuration in the eth-tunnel>path>eth-cfm>mep context can be used to enable the G.8031 protection without activating the Ethernet CCMs (a configuration sketch follows the list):

  1. Create the domain (MD) in CFM.

  2. Create the association (MA) in CFM and do not put remote MEPs.

  3. Create the MEP.

  4. Configure control-mep and no shutdown on the MEP.

  5. Use the no ccm-enable command to keep the CCM transmission disabled.
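
A sketch of these steps follows; the domain, association, and MEP identifiers are hypothetical and the exact syntax may vary by release. CCM transmission stays disabled because ccm-enable is never configured (step 5):

configure eth-cfm
  domain 1 format none level 0
      association 1 format icc-based name "tun1-path2"
      exit
  exit

configure eth-tunnel 1
  path 2
      eth-cfm
          mep 1 domain 1 association 1
              control-mep
              no shutdown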

If a MEP is required for troubleshooting issues on the primary path, the configuration described above for the secondary path must be used to enable the use of Link Layer OAM on the primary path.

LAG loadsharing is offered to complement G.8031 protected Ethernet tunnels for situations where unprotected VLAN services are to be offered on some or all of the same native Ethernet links.

Figure 20. G.8031 P2P tunnels and LAG-like loadsharing coexistence

In G.8031 P2P tunnels and LAG-like loadsharing coexistence, the G.8031 Ethernet tunnels are used by the B-SAPs mapped to the green BVPLS entities supporting the E-Line services. A LAG-like loadsharing solution is provided for the Multipoint BVPLS (white circles) supporting the E-LAN (IVPLS) services. The green G.8031 tunnels coexist with LAG-emulating Ethernet tunnels (loadsharing mode) on both BEB-BCB and BCB-BCB physical links.

The G.8031-controlled Ethernet tunnels select an active tunnel based on G.8031 APS operation, while emulated-LAG Ethernet tunnels hash traffic across the configured links. Upon failure of one of the links, the emulated-LAG tunnels rehash traffic across the remaining links and fail the tunnel when the number of links drops below the configured minimum (independently of any G.8031-controlled Ethernet tunnels sharing the same links).

Detailed PBB emulated LAG solution description

This section discusses the details of the emulated LAG Ethernet tunnels for PBB. The main solution components are depicted in Ethernet tunnel overlay, which overlays the Ethernet tunnel services on the network from PBB-Epipe with B-VPLS over Ethernet tunnel.

Figure 21. Ethernet tunnel overlay

For a PBB Ethernet VLAN to make efficient use of an emulated LAG solution, a management VPLS (m-VPLS) is configured enabling Provider Multi-Instance Spanning Tree Protocol (P-MSTP). The m-VPLS is assigned to two SAPs: the eth-tunnels connecting BEB1 to BCB-E and BCB-F, respectively, reserving a range of VLANs for P-MSTP.

The PBB P-MSTP service is represented in the BEBs as a combination of an Epipe mapped to a BVPLS instance as before, but now the PBB service is able to use the Ethernet tunnels under P-MSTP control and load share traffic on the emulated LAG. In this example, the blue circle representing the BVPLS is assigned to the SAPs, which define two paths each. All paths are specified with primary precedence to load share the traffic.

A management VPLS (m-VPLS) is first configured with a VLAN range and assigned to the SAPs containing the paths to the BCBs. The load-shared eth-tunnel objects are defined by specifying member ports and a control tag of zero, as in the sketch below. Individual B-VPLS services can then be assigned to the member paths of the emulated LAGs, defining the path encapsulation, and individual services such as IVPLS can be assigned to the B-VPLS.
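
A hedged sketch of a load-shared eth-tunnel (the tunnel and port identifiers are illustrative, and the protection-type loadsharing keyword for selecting the emulated-LAG mode is an assumption):

configure eth-tunnel 2
  protection-type loadsharing
  path 1
      member 1/1/1
      control-tag 0
      no shutdown
  path 2
      member 1/1/2
      control-tag 0
      no shutdown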

At the BCBs, the tunnels are terminated in the next BVPLS instance, which is controlled by P-MSTP on the BCBs and forwards the traffic.

In the event of link failure, the emulated LAG group automatically adjusts the number of paths. A threshold can be set whereby the LAG group is declared down. All emulated LAG operations are independent of any G.8031 1:1 operation.

Support service and solution combinations

The following considerations apply when Ethernet tunnels are configured under a VPLS service:

  • Only ports in access or hybrid mode can be configured as eth-tunnel path members. The member ports can be located on the same or different IOMs, MDAs, XCMs, or XMAs.

  • Dot1q and QinQ ports are supported as eth-tunnel path members.

  • The same port cannot be used as member in both a LAG and an Ethernet-tunnel.

  • A mix of regular and multiple eth-tunnel SAPs and PWs can be configured in the same BVPLS.

  • Split horizon groups in BVPLS are supported on eth-tunnel SAPs. The use of split horizon groups allows the emulation of a VPLS model over the native Ethernet core, eliminating the need for P-MSTP.

  • STP and MMRP are not supported in a BVPLS using eth-tunnel SAPs.

  • Both PBB E-Line (Epipe) and E-LAN (IVPLS) services can be transported over a BVPLS using Ethernet-tunnel SAPs.

  • MC-LAG access multihoming into PBB services is supported in combination with Ethernet tunnels:

    • MC-LAG SAPs can be configured in IVPLS or Epipe instances mapped to a BVPLS that uses eth-tunnel SAPs

    • Blackhole Avoidance using native PBB MAC flush/MAC move solution is also supported

  • Support is also provided for BVPLS with P-MSTP and MMRP control planes running as ships-in-the-night on the same links as the Ethernet tunneling, which is mapped by a SAP to a different BVPLS.

    Epipes must be used in the BCBs to support scalable point-to-point tunneling between the eth-tunnel endpoints when management VPLS is used.

  • The following solutions or features are not supported in the current implementation for the 7450 ESS and 7750 SR and are blocked:

    • Capture SAP

    • Subscriber management

    • Application assurance

    • Eth-tunnels usage as a logical port in the config>redundancy>multi-chassis>peer>sync>port context

For more information, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide.

Periodic MAC notification

Virtual B-MAC learning frames (that is, frames sent with the source MAC set to the virtual B-MAC) can be sent periodically, allowing all BCBs/BEBs to keep the virtual B-MAC in their Layer 2 forwarding database.

This periodic mechanism is useful in the following cases:

  • A new BEB is added after the current mac-notification method has stopped sending learning frames.

  • A new combination of [MC-LAG:SAP|A/S PW]+[PBB-Epipe]+[associated B-VPLS]+[at least one B-SDP|B-SAP] becomes active. The current mechanism only sends learning frames when the first such combination becomes active.

  • A BEB containing the remote endpoint of a dual-homed PBB-Epipe is rebooted.

  • Traffic is not seen for the MAC aging timeout (assuming that the new periodic sending interval is less than the aging timeout).

  • There is unidirectional traffic.

In each of the above cases, all of the remote BEBs/BCBs learn the virtual MAC, in the worst case, after the next learning frame is sent.

In addition, this allows all of the above cases to be used in conjunction with discard-unknown in the B-VPLS. Currently, if discard-unknown is enabled in all related B-VPLSs (to avoid any traffic flooding), all of the above cases could experience an increased traffic interruption, or a permanent loss of traffic, as only traffic toward the dual-homed PBB-Epipe can restart bidirectional communication. For example, periodic MAC notification reduces the traffic outage when:

  • The PBB-Epipe virtual MAC is flushed on a remote BEB/BCB because of the failover of an MC-LAG or A/S pseudowires within the customer’s access network, for example, between the dual-homed PBB-Epipe peers and their remote tunnel endpoint.

  • There is a failure in the PBB core causing the path between the two BEBs to pass through a different BCB.

It should be noted that this does not help in the case where the remote tunnel endpoint BEB fails. In this case traffic is flooded when the remote B-MAC ages out if discard-unknown is disabled. If discard-unknown is enabled, then the traffic follows the path to the failed BEB but is eventually dropped on the source BEB when the remote B-MAC ages out on all systems.

To scale the implementation it is expected that the timescale for sending the periodic notification messages is much longer than that used for the current notification messages.

MAC flush

PBB resiliency for B-VPLS over pseudowire infrastructure

The following VPLS resiliency mechanisms are also supported in PBB VPLS:

  • Native Ethernet resiliency supported in both I-VPLS and B-VPLS contexts

  • Distributed LAG, MC-LAG, RSTP

  • MSTP in a management VPLS monitoring (B- or I-) SAPs and pseudowires

  • BVPLS service resiliency, loop avoidance solutions - Mesh, active/standby pseudowires and multi-chassis endpoint

  • IVPLS service resiliency, loop avoidance solutions - Mesh, active/standby pseudowires (PE-rs only role), BGP multihoming

To support these resiliency options, extensive support for blackhole avoidance mechanisms is required.

Porting existing VPLS LDP MAC flush in PBB VPLS

Both the I-VPLS and B-VPLS components inherit the LDP MAC flush capabilities of a regular VPLS to fast age the related FDB entries for each domain: C-MACs for I-VPLS and B-MACs for B-VPLS. Both types of LDP MAC flush are supported for I-VPLS and B-VPLS domains:

  • flush-all-but-mine

    This refers to flushing on a positive event, for example:

    • pseudowire activation (VPLS resiliency using active/standby pseudowire)

    • reception of a STP TCN

  • flush-all-from-me

    This refers to flushing on a negative event, for example:

    • SAP failure (link down or MC-LAG out-of-sync)

    • pseudowire or endpoint failure

In addition, only for the B-VPLS domain, changing the backbone source MAC of a B-VPLS triggers an LDP MAC flush-all-from-me to be sent in the related active topology. At the receiving PBB PE, a B-MAC flush automatically triggers a flushing of the C-MACs associated with the old source B-MAC of the B-VPLS.

PBB blackholing issue

In the PBB VPLS solution, a B-VPLS may be used as infrastructure for one or more I-VPLS instances. The B-VPLS control plane (LDP signaling or P-MSTP) replaces the I-VPLS control plane throughout the core. This raises an additional challenge related to blackhole avoidance in the I-VPLS domain, as described in this section.

To understand the PBB blackholing issue, assume that the link between PE A1 and node 5 is active; the remote PEs participating in the orange VPN (for example, PE D) learn the C-MAC X associated with backbone MAC A1. On failure of the link between node 5 and PE A1 and activation of the link to PE A2, the remote PEs (for example, PE D) blackhole the traffic destined for customer MAC X to B-MAC A1 until the aging timer expires or a packet flows from X to Y through PE A2. This may take a long time (the default aging timer is 5 minutes) and may affect a large number of flows across multiple I-VPLSs.

A similar issue occurs in the case where node 5 is connected to A1 and A2 I-VPLS using active/standby pseudowires. For example, when node 5 changes the active pseudowire, the remote PBB PE keeps sending to the old PBB PE.

Another case is when the QinQ access network dual-homed to a PBB PE uses RSTP or M-VPLS with MSTP to provide loop avoidance at the interconnection between the PBB PEs and the QinQ SWs. In the case where the access topology changes, a TCN event is generated and propagated throughout the access network. Similarly, this change needs to be propagated to the remote PBB PEs to avoid blackholing.

A solution is required to propagate the I-VPLS events through the backbone infrastructure (B-VPLS) to flush the customer MAC to B-MAC entries in the remote PBB. As there are no IVPLS control plane exchanges across the PBB backbone, extensions to B-VPLS control plane are required to propagate the I-VPLS MAC flush events across the B-VPLS.

LDP MAC flush solution for PBB blackholing

In the case of an MPLS core, B-VPLS uses T-LDP signaling to set up the pseudowire forwarding. The following I-VPLS events must be propagated across the core B-VPLS using LDP MAC flush-all-but-mine or flush-all-from-me indications:

For flush-all-but-mine indication (‟positive flush”):

  • TCN event in one or more of the I-VPLS instances, or in the related M-VPLS for the MSTP use case

  • Pseudowire/SDP binding activation with active/standby pseudowire (standby, active or down, up)

  • Reception of an LDP MAC withdraw ‟flush-all-but-mine” in the related I-VPLS

For flush-all-from-me indication (‟negative flush”):

  • MC-LAG failure does not require send-flush-on-failure to be enabled in I-VPLS.

  • Failure of a local SAP requires send-flush-on-failure to be enabled in I-VPLS.

  • Failure of a local pseudowires/SDP binding requires send-flush-on-failure to be enabled in I-VPLS.

  • Reception of an LDP MAC withdraw flush-all-from-me in the related I-VPLS.

To propagate the MAC flush indications triggered by the above events, the PE that originates the LDP MAC withdraw message must be identified. In regular VPLS ‟mine”/”me” is represented by the pseudowire associated with the FEC and the T-LDP session on which the LDP MAC withdraw was received. In PBB, this is achieved using the B-VPLS over which the signaling was propagated and the B-MAC address of the originator PE.

The Nokia PBB-VPLS solution addresses this requirement by inserting a new PBB-TLV (type-length-value) element in the BVPLS LDP MAC withdraw message. The new PBB TLV contains the source B-MAC identifying the originator (‟mine”/‟me”) of the flush indication and the ISID list identifying the I-VPLS instances affected by the flush indication.

There are a number of advantages to this approach. Firstly, the PBB-TLV presence indicates this is a PBB MAC flush. As a result, all PEs containing only the B-VPLS instance automatically propagate the LDP MAC withdraw in the B-VPLS context, respecting the split horizon and active link topology. There is no flushing of the B-VPLS FDBs throughout the core PEs. Subsequently, the receiving PBB VPLS PEs use the B-MAC and ISID list information to identify the specific I-VPLS FDBs and the C-MAC entries pointing to the source B-MAC included in the PBB TLV.

An example of processing steps involved in PBB MAC Flush is depicted in TCN triggered PBB flush-all-but-mine procedure for the case when a Topology Change Notification (TCN) is received on PBB PE 2 from a QinQ access in the I-VPLS domain.

Figure 22. TCN triggered PBB flush-all-but-mine procedure

The received TCN may be related to one or more I-VPLS domains. This generates a MAC flush in the local I-VPLS instances and, if configured, originates a PBB MAC flush-all-but-mine throughout the related B-VPLS contexts, represented by the white circles 1 to 8 in this example.

A PBB-TLV is added by PE2 to the regular LDP MAC flush-all-but-mine. B-MAC2, the source B-MAC associated with the B-VPLS on PE2, is carried inside the PBB TLV to indicate who ‟mine” is. The ISID list identifying the I-VPLS instances affected by the TCN is also included if the number of affected I-VPLS instances is 100 or less. No ISID list is included in the PBB-TLV if more than 100 ISIDs are affected. If no ISID list is included, the receiving PBB PE flushes all the local I-VPLS instances associated with the B-VPLS context identified by the FEC TLV in the LDP MAC withdraw message. This is done to speed up delivery and processing of the message.

Recognizing the PBB MAC flush, the B-VPLS only PEs 3, 4, 5 and 6 refrain from flushing their B-VPLS FDB tables and propagate the MAC flush message regardless of their ‟propagate-mac-flush” setting.

When the LDP MAC withdraw reaches the terminating PBB PEs 1 and 7, the PBB-TLV information is used to flush from the I-VPLS FDBs all C-MAC entries except those associated with the originating B-MAC B-MAC2. If specific I-VPLS ISIDs are indicated in the PBB TLV, the PBB PEs flush only the C-MAC entries from the specified I-VPLS instances, except those mapped to the originating B-MAC. The flush-all-but-mine indication is not propagated further in the I-VPLS context to avoid information loops.

The other events that trigger flush-all-but-mine propagation in the B-VPLS (pseudowire/SDP binding activation, reception of an LDP MAC withdraw) are handled similarly. The generation of PBB MAC flush-all-but-mine in the B-VPLS must be activated explicitly on a per I-VPLS basis with the send-bvpls-flush all-but-mine command; similarly, the generation of PBB MAC flush-all-from-me must be activated with the send-bvpls-flush all-from-me command, as in the sketch below.
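
A minimal sketch of arming both triggers in an I-VPLS; the service ID is illustrative and the placement of the send-bvpls-flush commands under the pbb node is an assumption that may vary by release:

configure service vpls 100 i-vpls
  send-flush-on-failure
  pbb
      send-bvpls-flush all-but-mine
      send-bvpls-flush all-from-me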

Access multihoming for native PBB (B-VPLS over SAP infrastructure)

The Nokia PBB implementation allows the operator to use a native Ethernet infrastructure as the PBB core. Native Ethernet tunneling can be emulated using Ethernet SAPs to interconnect the related B-VPLS instances. This kind of solution may fit specific operational environments where Ethernet services were previously provided using a QinQ solution. The drawback is that no LDP signaling is available to provide support for access multihoming for Epipe (pseudowire active/standby status) or I-VPLS services (LDP MAC withdraw). An alternate solution is required.

A PBB network using Native Ethernet core is depicted in Access dual-homing into PBB BEBs - topology view. MC-LAG is used to multihome a number of edge switches running QinQ to PBB BEBs.

Figure 23. Access dual-homing into PBB BEBs - topology view

The interrupted line from the MC-LAG represents the standby, inactive link; the solid line is the active link. The BEBs are dual-homed to two core switches, BCB1 and BCB2, using native Ethernet SAPs on the B-VPLS side. Multipoint B-VPLS with MSTP for loop avoidance can be used as the PBB core tunneling. Alternatively, point-to-point G.8031 protected Ethernet tunnels can also be used to interconnect B-VPLS instances in the BEBs, as described in PBB using G.8031 protected Ethernet tunnels.

The Nokia implementation provides a solution for both PBB E-Line (Epipe) and E-LAN (IVPLS) services that avoids PBB blackholing when the active ES11-BEB1 link fails. It also provides consistent behavior for both service types and for different backbone types: for example, native Ethernet, MPLS, or a combination. Only MC-LAG is supported initially as the access multihoming mechanism.

Solution description for I-VPLS over native PBB core

The use case described in the previous section is addressed by enhancing the existing native PBB solution to provide for blackhole avoidance.

The topology depicted in PBB active topology and access multihoming describes the details of the solution for the I-VPLS use case. Although the native PBB use case is used, the solution works the same for any other PBB infrastructure: for example, G.8031 Ethernet tunnels, pseudowire/MPLS, or a combination.

Figure 24. PBB active topology and access multihoming

ES1 and ES2 are dual-homed using MC-LAG into two BEB devices: ES1 to BEB C and BEB D, ES2 to BEB A and BEB B. MC-LAG P11 on BEB C and P9 on BEB A are active on each side.

In the service context, the triangles are I-VPLS instances while the small circles are B-VPLS components, with the related per-BVPLS source B-MACs indicated next to each BVPLS instance. P-MSTP or RSTP may be used for loop avoidance in the multipoint BVPLS. For simplicity, only the active SAPs (BEB P2, P4, P6 and P8) are shown in the diagram.

In addition to the source B-MAC associated with each BVPLS, there is an additional B-MAC associated with each MC-LAG supporting multihomed I-VPLS SAPs. The BEBs that are in a multihomed MC-LAG configuration share a common B-MAC on the related MC-LAG interfaces. For example, a common B-MAC C1 is associated with ports P11 and P12 participating in the MC-LAG between BEB C and BEB D, while B-MAC A1 is associated with ports P9 and P10 in the MC-LAG between BEB A and BEB B. While B-MAC C1 is associated through the I-VPLS SAPs with both BVPLS instances in BEB C and BEB D, it is actively used for forwarding to I-VPLS SAPs only on BEB C, which contains the active link P11.

The MC-LAG protocol keeps track of which side (port or LAG) is active and which is standby for a specified MC-LAG grouping, and activates the standby if the active one fails. The source B-MACs C1 and A1 are used for PBB encapsulation as traffic arrives at the IVPLS SAPs on P11 and P9, respectively. MAC learning in the BVPLS instances installs MAC FDB entries in BCB-E and BEB A as depicted in PBB active topology and access multihoming.

Failure of the active link (P11) or of the access node (BEB C) activates, through the MC-LAG protocol, the standby link (P12) participating in the MC-LAG on the peer MC-LAG device (BEB D).

Access multihoming - link failure shows the case of access link failure.

Figure 25. Access multihoming - link failure

On failure of the active link P11 on BEB C the following processing steps apply:

  1. The MC-LAG protocol activates the standby link P12 on the peer BEB D.

  2. B-MAC C1 becomes active on BEB D and any traffic received on BEB D with destination B-MAC C1 is forwarded on the corresponding I-VPLS SAPs on P12.

  3. BEB D determines the related B-VPLS instances associated with all the I-VPLS SAPs mapped to P12, the newly activated MC-LAG links/LAG components.

  4. Subsequently, BEB D floods in the related B-VPLS instances an Ethernet CFM-like message using C1 as the source B-MAC. A vendor CFM opcode is used, followed by a Nokia OUI.

  5. As a result, all the FDB entries in BCBs or BEBs along the path are automatically updated to reflect the move of B-MAC C1 to BEB D.

In this particular configuration, the entries on BEB A do not need to be updated, saving a MAC flush operation.

In other topologies, it is possible that the B-MAC C1 FDB entries in the B-VPLS instances on the remote BEBs (like BEB A) need to move between B-SAPs. This involves moving all C-MACs that use B-MAC C1 as their next hop to the new egress line card.

An identical procedure is used when the whole BEB C fails.

Solution description for PBB Epipe over G.8031 Ethernet tunnels

This section discusses the access multihoming solution for PBB E-Line over an infrastructure of G.8031 Ethernet tunnels. Although a specific use case is used, the solution works the same for any other PBB infrastructure: for example, native PBB, pseudowire/MPLS, or a combination.

The PBB E-Line service and the related BVPLS infrastructure are depicted in Access multihoming solution for PBB Epipe.

Figure 26. Access multihoming solution for PBB Epipe

The E-Line instances are connected through the B-VPLS infrastructure. Each B-VPLS is interconnected to the BEBs in the remote pair using G.8031 Ethernet Protection Switching (EPS) tunnels. Only the active Ethernet paths are shown in the network diagram to simplify the explanation. Split horizon groups may be used on EPS tunnels to avoid running MSTP/RSTP in the PBB core.

The same B-MAC addressing scheme is used as in the E-LAN case: a B-MAC per B-VPLS and additional B-MACs associated with each MC-LAG connected to an Epipe SAP. The B-MACs associated with the active MC-LAG are actively used for forwarding into B-VPLS the traffic ingressing related Epipe SAPs.

MC-LAG protocol keeps track of which side is active and which is standby for a specified MC-LAG grouping and activates the standby link in a failure scenario. The source B-MACs C1 and A1 are used for PBB encapsulation as traffic arrives at the Epipe SAPs on P11 and P9, respectively. MAC learning in the B-VPLS instances installs MAC FDB entries in BEB C and BEB A as depicted in Access multihoming solution for PBB Epipe. The highlighted Ethernet tunnel (EPS) is used to forward the traffic between BEB A and BEB C.

Failure of the active link (P11) or of the access node (BEB C) activates, through the MC-LAG protocol, the standby link (P12) participating in the MC-LAG on the peer MC-LAG device (BEB D). The failure of BEB C is depicted in Access dual-homing for PBB E-Line - BEB failure. The same procedure applies in the link failure case.

Figure 27. Access dual-homing for PBB E-Line - BEB failure

The following process steps apply:

  1. BEB D loses MC-LAG communication with its peer BEB C; for example, keepalives from BEB C stop arriving, or next-hop tracking kicks in.

  2. BEB D assumes BEB C is down and activates all shared MC-LAG links, including P12.

  3. B-MAC C1 becomes active on BEB D and any traffic received on BEB D with destination B-MAC C1 is forwarded on the corresponding Epipe SAPs on P12.

  4. BEB D determines the related B-VPLS instances associated with all the Epipe SAPs mapped to P12, the newly activated MC-LAG links/LAG components.

  5. Subsequently, BEB D floods in the related B-VPLS instances the same Ethernet CFM message using C1 as source B-MAC.

  6. As a result, the FDB entries in BEB A and BEB B are automatically updated to reflect the move of B-MAC C1 from EP1 to EP2 and from EP3 to EP4, respectively.

The same process is executed for all the MC-LAGs affected by the BEB C failure, making BEB failure the worst-case scenario.

Dual-homing into PBB Epipe - local switching use case

When the service SAPs were mapped to MC-LAGs belonging to the same pair of BEBs, earlier releases required an IVPLS to be configured even if just two SAPs were active at any point in time. The PBB Epipe model has since been enhanced to support configuring two SAPs and a BVPLS up link in the same Epipe instance, as depicted in Solution for access dual-homing with local switching for PBB E-Line/Epipe.

Figure 28. Solution for access dual-homing with local switching for PBB E-Line/Epipe

The PBB Epipe represented by the yellow diamond on BEB1 points through the BVPLS up link to the B-MAC associated with BEB2. The destination B-MAC can be either the address associated with the green BVPLS on BEB2 or the B-MAC of the SAP associated with the peer MC-LAG on BEB2 (the preferred option).

The Epipe information model is expanded to accommodate the configuration of two SAPs (I-SAPs) and a BVPLS up link at the same time. For this configuration to work in an Epipe environment, only two of them are active in the forwarding plane at any point in time, specifically:

  • SAP1 and SAP2 when both MC-LAG links are active on the local BEB1 (see Solution for access dual-homing with local switching for PBB E-Line/Epipe)

  • The Active SAP and the BVPLS uplink if one of the MC-LAG links is inactive on BEB1

    • PBB tunnel is considered as a backup path only when the SAP is operationally down.

    • If the SAP is administratively down, then all traffic is dropped.

  • Although the CLI allows configuration of two SAPs and a BVPLS up link in the same PBB Epipe, the BVPLS up link is inactive as long as both SAPs are active.

    The traffic received through PBB tunnel is dropped if BVPLS up link is inactive. The same rules apply to BEB2.

BGP multihoming for I-VPLS

This section describes the application of BGP multihoming to I-VPLS services. BGP multihoming for I-VPLS uses the same mechanisms as those used when BGP multihoming is configured in a non-PBB VPLS service, which are described in detail in this guide.

The multihomed sites can be configured with either a SAP or spoke-SDP, and support both split horizon groups and fate-sharing by the use of oper-groups.

When the B-VPLS service is using LDP signaled pseudowires, blackhole protection is supported after a multihoming failover event when send-flush-on-failure and send-bvpls-flush all-from-me are configured within the I-VPLS. This causes the system on which the site object fails to send a MAC flush all-from-me message so that customer MACs are flushed on the remote backbone edge bridges. The message sent includes a PBB TLV which contains the source B-MAC identifying the originator (‟mine”/‟me”) of the flush indication and the ISID list identifying the I-VPLS instances affected by the flush indication; see section LDP MAC flush solution for PBB blackholing.

The VPLS preference sent in BGP multihoming updates is always set to zero; however, if a non-zero value is received in a valid BGP multihoming update, it is used to influence the designated forwarder (DF) election.

Access multihoming over MPLS for PBB Epipes

It is possible to connect backbone edge bridges (BEBs) configured with PBB Epipes to an edge device using active/standby pseudowires over an MPLS network. This is shown in Active/standby PW into PBB Epipes.

Figure 29. Active/standby PW into PBB Epipes

In this topology, the edge device (CE1) is configured with multiple Epipes to provide virtual leased line (VLL) connectivity across a PBB network. CE1 uses active/standby pseudowires (PWs) which terminate in PBB Epipe services on BEB1 and BEB2 and are signaled accordingly using the appropriate pseudowire status bits.

Traffic is sent from CE1 on the active pseudowires into the PBB Epipe services, then onto the remote devices through the B-VPLS service. It is important that traffic sent to CE1 is directed to the BEB that is attached to the active pseudowire connected to CE1. To achieve this, a virtual backbone MAC (vB-MAC) is associated with the services on CE1.

The vB-MAC is announced into the PBB core by the BEB connected to the active pseudowire using SPBM configured in the B-VPLS services; therefore, SPBM is mandatory. In Active/standby PW into PBB Epipes, the vB-MAC would be announced by BEB1; if the pseudowires failed over to BEB2, BEB1 would stop announcing the vB-MAC and BEB2 starts announcing it.

The remote services are configured to use the vB-MAC as the backbone destination MAC (backbone-dest-mac) which results in traffic being sent to the specified BEB.

The vB-MAC is configured under the SDP used to connect to the edge device’s active/standby pseudowires using the command source-bmac-lsb. This command defines a sixteen (16) bit value which overrides the sixteen least significant bits of the source backbone MAC (source-bmac) to create the vB-MAC. The operator must ensure that the vB-MACs match on the two peering BEBs for a corresponding SDP.

The PBB Epipe pseudowires are identified to be connected to an edge device active/standby pseudowire using the spoke-sdp parameter use-sdp-bmac. Enabling this parameter causes traffic forwarded from this spoke-SDP into the B-VPLS domain to use the vB-MAC as its source MAC address when both this, and the control pseudowire, are in the active state on this BEB.

PBB Epipe pseudowires connected to edge device’s non-active/standby pseudowires are still able to use the same SDP.

To cater for the case where there are multiple edge device active/standby pseudowires using a specified SDP, one pseudowire must be identified as the control pseudowire (using the source-bmac-lsb parameter control-pw-vc-id). The state of the control pseudowire determines the announcing of the vB-MAC by SPBM into the B-VPLS based on the following conditions (a configuration sketch follows this list):

  • The source-bmac-lsb and control-pw-vc-id have both been configured.

  • The spoke-SDP referenced by the control-pw-vc-id has use-sdp-bmac configured.

  • The spoke-SDP referenced by the control-pw-vc-id is operationally up and the ‟Peer Pw Bits” do not include pwFwdingStandby.

  • If multiple B-VPLS services are used with different SPBM Forward IDs (FIDs), the vB-MAC is advertised into any FID which has a PBB Epipe with a spoke-SDP configured with use-sdp-bmac that is using an SDP with source-bmac-lsb configured (regardless of whether the PBB Epipe spoke-SDP defined as the control pseudowire is associated with the B-VPLS).
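
A hedged sketch of the BEB-side configuration combining these elements; the SDP and service identifiers, far-end address, LSB value encoding, and VC IDs are all illustrative and may vary by release:

configure service sdp 10 mpls create
  far-end 192.0.2.1
  source-bmac-lsb 01-01 control-pw-vc-id 100

configure service epipe 200
  pbb
      tunnel 1 backbone-dest-mac ab-bc-cd-ef-01-01 isid 200
  spoke-sdp 10:100 create
      use-sdp-bmac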

It is expected that pseudowires configured using an SDP with source-bmac-lsb and with the parameter use-sdp-bmac are in the same state (up, down, active, standby) as the control pseudowire. If this is not the case, the following scenarios are possible (based on Active/standby PW into PBB Epipes):

  • If any non-control pseudowires are active on BEB2 and standby on BEB1, this continues to allow bidirectional traffic for the related services, as the return traffic to CE1 is sent to BEB1, specifically to the BEB announcing the vB-MAC. As the non-control PW is in standby state, it is used to send this traffic to the edge device. If this operation is not needed, it is possible to prevent traffic being sent on a standby PW using the standby-signaling-slave parameter under the spoke-SDP definition.

  • If any non-control pseudowires are active on BEB2 but down on BEB1, then only unidirectional traffic is possible. The return traffic to CE1 is sent to BEB1, as it is announcing the vB-MAC but the pseudowire on BEB1 is down for this service.

Alarms are raised to track if, on the BEB with the control pseudowire in the standby/down state, any non-control pseudowires go active. Specifically, there is an alarm when the first non-control pseudowire becomes active and another alarm when the last non-control pseudowire becomes standby/down.

If both control pseudowires are active (neither in standby), then both BEBs would announce the vB-MAC; this would happen if the edge device was a 7450 ESS, 7750 SR, or 7950 XRS using an Epipe service without standby-signaling-master configured. Traffic from remote BEBs on any service related to the vB-MAC would be sent to the nearest SPBM BEB, and it would depend on the state of the pseudowires on each BEB whether it could reach the edge device. Similarly, the operator must ensure that the corresponding service pseudowires on each BEB are configured as the control pseudowire; otherwise, SPBM may advertise the vB-MAC from both BEBs, resulting in the same consequences.

All traffic received from the edge device on a pseudowire into a PBB Epipe, on the BEB with the active control pseudowire, is forwarded by the B-VPLS using the vB-MAC as the source backbone MAC, otherwise the source-bmac is used.

The control pseudowire can be changed dynamically without shutting down the spoke-SDPs, SDP or withdrawing the SPBM advertisement of the vB-MAC; this allows a graceful change of the control pseudowire. Clearly, any change should be performed on both BEBs as closely in time as possible to avoid an asymmetric configuration, ensuring that the new control pseudowire is in the same state as the current control pseudowire on both BEBs during the change.

The following are not supported:

  • active/standby pseudowires within the PBB Epipe. Consequently, the following are not supported:

    • configuration of endpoints

    • configuration of precedence under the spoke-SDP

  • PW switching

  • BGP-MH support, namely configuring the pseudowires to be part of a multihomed site

  • network-domains

  • support for the following tunneling technologies:

    • RFC 8277

    • GRE

    • L2TPv3

PBB and IGMP/MLD snooping

The IGMP/MLD snooping feature provided for VPLS is supported similarly in the PBB I-VPLS context, to provide efficient multicast replication in the customer domain. The difference from regular VPLS is the handling of IGMP/MLD messages arriving from the B-VPLS side over a B-VPLS SAP or SDP.

The first IGMP/MLD join message received over the local B-VPLS adds all the B-VPLS SAP and SDP components into the related multicast table associated with the I-VPLS context. This is in line with the PBB model, where the B-VPLS infrastructure emulates a backbone LAN to which every I-VPLS is connected by one virtual link.

When the querier is connected to a remote I-VPLS instance, over the B-VPLS infrastructure, its location is identified by the B-VPLS SDP and SAP on which the query was received. It is also identified by the source B-MAC address used in the PBB header for the query message. This is the B-MAC associated with the B-VPLS instance on the remote PBB PE.

It is also possible to configure that a multicast router exists in a remote I-VPLS service. This can be achieved using the mrouter-dest command to specify the MAC name of the destination B-MAC to be used to reach the remote I-VPLS service. This command is available in the VPLS service PBB IGMP and MLD snooping contexts.
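
A sketch of this configuration, assuming a MAC name defined under the service PBB context; the name, MAC address, service ID, and exact context placement are assumptions:

configure service pbb
  mac-name "remote-beb" 00-11-22-aa-bb-cc

configure service vpls 100 i-vpls
  pbb
      igmp-snooping
          mrouter-dest "remote-beb"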

The following are not supported in a PBB I-VPLS context with IGMP snooping or MLD snooping:

  • multicast VPLS Registration (MVR)

  • multicast CAC

  • configuration under a default SAP

The following are not supported in a PBB I-VPLS context with MLD snooping:

  • configuration of the maximum number of multicast group sources allowed per group

  • configuration of the maximum number of multicast sources allowed per group

PBB and PIM snooping

The PIM snooping feature for IPv4 is supported in the PBB I-VPLS context to provide efficient multicast replication in the customer domain. This is similar to PIM snooping for IPv4 in a regular VPLS with the difference being the handling of PIM messages arriving from the B-VPLS side over a B-VPLS SAP or SDP.

The first PIM join message received over the local B-VPLS adds all the B-VPLS SAP and SDP components into the related multicast table associated with the I-VPLS context, and the multicast for the join is flooded throughout the B-VPLS. This is in line with the PBB model, where the B-VPLS infrastructure emulates a backbone LAN to which every I-VPLS is connected by one virtual link.

When a neighbor is located on a remote I-VPLS instance over the B-VPLS infrastructure, its location is identified by the B-VPLS SDP and SAP on which the hello message was received. The neighbor is also identified by the source B-MAC address used in the PBB header of the hello message. This is the B-MAC associated with the B-VPLS instance on the remote PBB PE.

PIM snooping for IPv4 in an I-VPLS is not supported with the following forms of default SAP:

  • :*

  • *.null

  • *.*

PBB QoS

For PBB encapsulation, the configuration used for DE and dot1p in SAP and SDP policies applies to the related bits in both backbone dot1q (BTAG) and ITAG fields.

The following QoS processing rules apply for PBB B-VPLS SAPs and SDPs:

B-VPLS SAP ingress

  • If dot1p/DE-based classification is enabled and a BTAG field is present, the BTAG fields are used by default to evaluate the internal forwarding class (fc) and discard profile. The 802.1ah ITAG is used only if the BTAG is absent (null SAP).

  • If dot1p or DE based classification is not explicitly enabled, or the packets are untagged, the default fc and profile are assigned.

B-VPLS SAP egress

  • If the sap-egress policy for the SAP contains an fc to dot1p/de mapping, this entry is used to set the dot1p and DE bits in the BTAG of the frame going out from the SAP (see the sketch after this list). The same applies for the ITAG on frames originated locally from an I-VPLS. The mapping does not have any effect on the ITAG of frames transiting the B-VPLS.

  • If no explicit mapping exists, the related dot1p and DE bits are set to zero on both ITAG and BTAG if the frame is originated locally from an I-VPLS. If the frame is transiting the B-VPLS, the ITAG stays unchanged and the BTAG is set according to the type of ingress SAP.

    • If the ingress SAP is tagged, the values of the dot1p, DE bits are preserved in the BTAG going out on the egress SAP.

    • If the ingress SAP is untagged, the dot1p, DE bits are set to zero in the BTAG going out on the egress SAP.
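
For illustration, a sketch of an fc-to-dot1p/DE mapping in a sap-egress policy (the policy ID, fc name, and marking values are assumptions, and marking command syntax may vary by release):

configure qos sap-egress 20 create
  fc af create
      dot1p 3
      de-mark
  exit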

B-VPLS SDP (network) ingress policy

QoS policies for dot1p and DE bits apply only to the outer VLAN ID: this is the VLAN ID associated with the link layer and not the PBB BTAG. As a result, the dot1p and DE bits are checked if an outer VLAN ID exists in the packets that ingress the SDP. If that VLAN ID is absent, nothing above the pseudowire SL is checked; for example, no dot1p bits in the BTAG or ITAG are checked. It is expected that the EXP bits are used to transport QoS information across the MPLS backbone and into the PEs.

B-VPLS SDP (network) egress policy

  • When building PBB packets originating from a local I-VPLS, the BTAG and ITAG values (dot1p, DE bits) are set according to the network egress policy. The same applies for newly added BTAG (VLAN mode pseudowires) in a packet transiting the B-VPLS (SAP/SDP to SDP). If either dot1p or DE based classification is not explicitly enabled in the CLI, the values from the default fc to dot1p, DE mapping are assumed.

  • Dot1p, DE bits for existing BTAGs remain unchanged - for example, applicable to packets transiting the B-VPLS and going out on SDP.

Transparency of customer QoS indication through PBB backbone

Similar to PW transport, operators want to allow their customers to preserve all eight Ethernet CoS markings (three dot1p bits) and the discard eligibility indication (DE bit) while transiting through a PBB backbone.

This means any customer CoS marking on the packets inbound to the ingress SAP must be preserved when going out on the egress SAP at the remote PBB PE even if the customer VLAN tag is used for SAP identification at the ingress.

A solution to the above requirements is depicted in PCP, DE bits transparency in PBB.

Figure 30. PCP, DE bits transparency in PBB

The PBB BVPLS is represented by the blue pipe in the middle with its associated CoS represented through both the service (I-tag) and tunnel CoS (BVID dot1p+DE or PW EXP bits).

The customer CoS is contained in the orange dot1q VLAN tags managed in the customer domains. There may be one (CVID) or two (CVID, SVID) tags used to provide service classification at the SAP. IVPLS or PBB Epipe instances (orange triangles) are used to provide a Carrier-of-Carrier service.

As the VLAN tags are stripped at the ingress SAP and added back at the egress SAP, the PBB implementation must provide a way to maintain the customer QoS marking. This is done using a force-qtag-forwarding configuration on a per IVPLS/Epipe basis under the node specifying the up link to the related BVPLS. When force-qtag-forwarding is enabled, a new VLAN tag is added right after the C-MAC addresses using the configured qtag. The dot1p, DE bits from the specified outer/inner customer qtag are copied in the newly added tag.

If force-qtag-forwarding is enabled in one IVPLS/PBB Epipe instance, it is enabled in all of the related instances.

At the remote PBB PE/BEB on the egress SAPs or SDPs, the first qtag after the C-MAC addresses is removed and its dot1p, DE bits are copied in the newly added customer qtags.

Configuration examples

This section gives usage examples for the new commands under PBB Epipe or IVPLS instances.

PBB IVPLS usage example:

configure service vpls 100 ivpls
  sap 1/1/1:101
  pbb
      backbone-vpls 10 isid 100
      force-qtag-forwarding

PBB Epipe usage example:

configure service epipe 200
  sap 1/1/1:201
  pbb
      tunnel 10 backbone-dest-mac ab-bc-cd-ef-01-01 
        isid 200
      force-qtag-forwarding

Detailed solution description

PCP, DE bits transparency in PBB shows a specific use case. Keeping the same topology (an ingress PBB PE, a PBB core, and an egress PBB PE), consider the generic use case where:

  1. the packet arrives on the ingress PBB PE on an I-SAP or an I-SDP binding/PW and it is assigned to a PBB service instance (Epipe/IVPLS)

  2. goes next through a PBB core (native Ethernet B-SAPs or PW/MPLS based B-SDP)

  3. and finally, egresses at another PBB PE through a PBB service instance on either an I-SAP or I-SDP binding/PW.

Similar to the Ethernet-VLAN VC Type, the following packet processing steps apply for different scenarios:

  • Ingress PE, ingress I-SAP case with force-qtag-forwarding enabled under PBB Epipe or IVPLS

    The qtag is inserted automatically right after the C-MAC addresses; an Ethertype value of 0x8100 is used.

    • case 1

      SAP type = null/dot1q default (1/1/1 or 1/1/1.*) so there is no service delimiting tag used and stripped on the ingress side.

      VLAN and Dot1p+DE bits on the inserted qtag are set to zero regardless of ingress QoS policy.

    • case 2

      SAP type = dot1q or qinq default (1/1/1.100 or 1/1/1.100.*) so there is a service delimiting tag used and stripped.

      The service delimiting qtag (dot1p + DE bits and VLAN) is copied as is in the inserted qtag.

    • case 3

      SAP type = qinq (1/1/1.100.10) so there are two service delimiting tags used and stripped.

      The service delimiting qtag (VLAN and dot1p + DE bits) is copied as is from the inner tag in the inserted qtag.

  • Ingress PE, ingress I-SDP/PW case with force-qtag-forwarding enabled under PBB Epipe or IVPLS

    The qtag is inserted automatically right after the C-MAC addresses; an Ethertype value of 0x8100 is used.

    • case 1

      SDP vc-type = Ethernet (force-vlan-vc-forwarding is not supported for I-PWs) so there is no service delimiting tag stripped on the ingress side.

      VLAN and Dot1p+DE bits on the inserted qtag are set to zero regardless of ingress QoS policy.

    • case 2

      SDP vc-type = Ethernet VLAN so there is a service delimiting tag stripped.

      VLAN and Dot1p + DE bits on the inserted qtag are preserved from the service delimiting tag.

PBB packets are tunneled through the core the same way for native ETH/MPLS cases.

  • Egress PE, egress I-SAP case with force-qtag-forwarding enabled under PBB Epipe or VPLS

    • The egress QoS policy (FC->dot1p+DE bits) is used to determine the QoS settings of the added qtags. If it is required to preserve the ingress QoS, no egress policy should be added.

      If a QinQ SAP is used, at least the qinq-mark-top-only option must be enabled to preserve the CTAG.

    • The ‟core qtag” (core = received over the PBB core, 1st after C-MAC addresses) is always removed after QoS information is extracted.

      If no force-qtag-forwarding is used at egress PE, the inserted qtag is maintained.

    • If the egress SAP is on the ingress PE, the dot1p+DE value is derived directly from the procedures described in the Ingress PE, ingress I-SAP and Ingress PE, ingress I-SDP/PW cases. The use cases below still apply:

      • case 1

        SAP type = null/dot1q default (2/2/2 or 2/2/2.*) so there is no service delimiting tag added on the egress side.

        Dot1p+DE bits and the VLAN value contained in the qtag are ignored.

      • case 2

        SAP type = dot1q/qinq default (3/1/1.300 or 3/1/1.300.*) so a service delimiting tag is added on egress.

        The FC->dot1p, DE bit entries in the SAP egress QoS policy are applied.

        If there are no such entries, then the values of the dot1p+DE bits from the stripped qtag are used.

      • case 3

        SAP type = qinq (3/1/1.300.30), so two service delimiting tags are added on egress.

        The FC->dot1p, DE bit entries in the SAP egress QoS policy are applied.

        If the qinq-mark-top-only command under vpls>sap>egress is not enabled (default), the policy is applied to both service delimiting tags.

        If the qinq-mark-top-only command is enabled, the policy is applied only to the outer service delimiting tag.

        On the tags where the egress QoS policies do not apply, the values of the dot1p+DE bits from the stripped qtag are used.

  • Egress PE, egress I-SDP case with force-qtag-forwarding enabled under PBB Epipe or IVPLS

    • case 1

      I-SDP vc-type = Ethernet VLAN, so a service delimiting tag is added after the PW encapsulation.

      The dot1p+DE bits from the qtag received over the PBB core side are copied to the qtag added on the I-SDP.

      The VLAN value in the qtag may change to match the provisioned value for the I-SDP configuration.

    • case 2

      I-SDP vc-type = Ethernet (force-vlan-vc-forwarding is not supported for I-SDPs), so no service delimiting tag is added on the egress PW.

      The qtag received over the PBB core is stripped and the QoS information is lost.
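
The following minimal sketch ties the preceding egress cases together; the service ID, SAP, ISID, and QoS policy ID are illustrative only. It enables force-qtag-forwarding on an IVPLS and qinq-mark-top-only under the egress context of a QinQ SAP, so that the egress QoS policy re-marks only the outer service delimiting tag while the inner tag keeps the dot1p+DE bits recovered from the stripped qtag:

configure service vpls 300 ivpls
  pbb
      backbone-vpls 10 isid 300
      force-qtag-forwarding
  sap 3/1/1.300.30
      egress
          qos 300 // hypothetical SAP egress QoS policy
          qinq-mark-top-only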

Egress B-SAP per ISID shaping

This feature allows users to perform egress data path shaping of packets forwarded within a B-VPLS SAP. The shaping is performed in a more granular context inside the SAP; for a B-SAP, that context is an ISID.

B-SAP egress ISID shaping configuration

Users can enable the per-ISID shaping on the egress context of a B-VPLS SAP by configuring an encapsulation group, referred to as encap-group in CLI, under the QoS sub-context, referred to as encap-defined-qos.

config>service>vpls>sap>egress>encap-defined-qos>encap-group group-name [type group-type] [qos-per-member] [create]

The group name is unique across all member types. The isid type is currently the only option.

The user adds members to, or removes members from, the encap-group, one at a time or as a range of contiguous values. However, when the qos-per-member option is enabled, members must be added or removed one at a time. These members are also referred to as ISID contexts.

config>service>vpls>sap>egress>encap-defined-qos>encap-group

[no] member encap-id [to encap-id]

The user can configure one or more encap-groups in the egress context of the same B-SAP, defining different ISID values and applying to each a different SAP egress QoS policy and, optionally, a different scheduler policy or agg-rate-limit. ISID values are unique within the context of a B-SAP: the same ISID value cannot be re-used in another encap-group under the same B-SAP, but it can be re-used in an encap-group under a different B-SAP. Adding an ISID value that is already a member of an encap-group to that same group has no effect, and the same applies to removing an ISID value that is not a member of the group.

When a group is created, the user assigns a SAP egress QoS policy, and optionally a scheduler policy or aggregate rate limit, using the following commands:

configure service vpls sap egress encap-defined-qos encap-group qos sap-egress-policy-id

configure service vpls sap egress encap-defined-qos encap-group scheduler-policy scheduler-policy-name

configure service vpls sap egress encap-defined-qos encap-group agg-rate-limit kilobits-per-second

A SAP egress QoS policy must first be assigned to the created encap-group before the user can add members to this group. Conversely, the user cannot perform the no qos command until all members are deleted from the encap-group.
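
The resulting ordering can be illustrated with the following minimal sketch; the group name, QoS policy ID, and ISID range are hypothetical:

configure service vpls 100 sap 1/1/1:100 egress
    encap-defined-qos
        encap-group "grp1" type isid create
            qos 100           // the SAP egress QoS policy must be assigned first
            member 100 to 199 // members can be added only once qos is set
            // "no qos" would be rejected at this point, while members still exist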

An explicit or the default SAP egress QoS policy continues to be applied to the entire B-SAP, but it serves only to create the set of egress queues used to store and forward packets that do not match any of the ISID values defined in any of the encap-groups for this SAP.

Only the queue definition and FC-to-queue mapping from the encap-group SAP egress QoS policy is applied to the ISID members. All other parameters configurable in a SAP egress QoS policy are inherited from the egress QoS policy applied to the B-SAP.

Furthermore, any other CLI option configured in the egress context of the B-SAP continues to apply to packets matching a member of any encap-group defined in this B-SAP.

The SAP egress QoS policy must not contain an active policer or an active queue-group queue, or the application of the policy to the encap-group fails. A policer or a queue-group queue is referred to as active if one or more FCs map to it in the QoS policy, or if the policer is referenced within the action statement of an IP or IPv6 criteria statement. Conversely, the user is not allowed to assign an FC to a policer or a queue-group queue, or to reference a policer within the action statement of an IP or IPv6 criteria statement, after the QoS policy is applied to an encap-group.

The qos-per-member keyword allows the user to specify that a separate queue set instance and scheduler/agg-rate-limit instance are created for each ISID value in the encap-group. By default, shared instances are created for the entire encap-group.

When the B-SAP is configured on a LAG port, the ISID queue instances defined by all the encap-groups applied to the egress context of the SAP are replicated on each member link of the LAG. The set of scheduler/agg-rate-limit instances is replicated per link or per IOM or XMA, depending on whether the adapt-qos option is set to link/port-fair mode or distribute mode. This is the same behavior as that applied to the entire B-SAP in the current implementation.

Provisioning model

The main objective of this proposed provisioning model is to separate the definition of the QoS attributes from the definition of the membership of an encap-group. The user can apply the same SAP egress QoS policy to a large number of ISID members without having to configure the QoS attributes for each member.

The following are conditions of the provisioning model:

  • A SAP egress policy ID must be assigned to an encap-group before any member can be added regardless of the setting of the qos-per-member option.

  • When qos-per-member is specified in the encap-group creation, the user must add or remove ISID members one at a time. The command fails if a range is entered.

  • When qos-per-member is specified in the encap-group creation, the sap-egress QoS policy ID and the scheduler policy name cannot be changed unless the group membership is empty. However, the agg-rate-limit parameter value can be changed or the command removed (no agg-rate-limit).

  • When qos-per-member is not specified in the encap-group creation, the user may add or remove ISID members as a singleton or as a range of contiguous values.

  • When qos-per-member is not specified in the encap-group creation, the sap-egress QoS policy ID, the scheduler policy name, or the agg-rate-limit parameter value may be changed at any time. Note, however, that the user still cannot remove the SAP egress QoS policy (no qos) while there are members defined in the encap-group.

  • The QoS policy or the scheduler policy itself may be edited and modified while members are associated with the policy.

  • There is a maximum number of ISID members allowed in the lifetime of an encap-group.

Operationally, the provisioning consists of the following steps:

  1. Create an encap-group.

  2. Define and assign a SAP egress QoS policy to the encap-group. This step is mandatory; if it is not performed, the user is not allowed to add members to the encap-group.

  3. Manage memberships for the encap-group using the member command (or SNMP equivalent).
    Note: The member command supports both range and singleton ISIDs.

    The following restrictions apply to the member command:

    • An ISID cannot be added if it already exists on the SAP in another encap-group. If the member command fails for this reason, the following applies:

      • The member command is all-or-nothing. No ISID in a range is added if one fails.

      • The first ISID that fails is the only one identified in the error message.

      • An ISID that already exists on the SAP in another encap-group must be removed from its encap-group using the no member command before it can be added to a new one.

    • Specifying an ISID that already exists within the group is a no-op (no failure).

    • If insufficient queues, scheduler policies, or FC-to-Queue lookup table space exists to support a new member or a modified membership range, the command fails.

  4. Optionally, define and assign a scheduling policy or agg-rate-limit for the encap-group.

Logically, the encap-group membership operation can be viewed as three distinct functions:

  • Creation or deletion of new queue sets and optionally scheduler/agg-rate-limit at QoS policy association time.

  • Mapping or un-mapping the member ISID to either the group queue set and scheduler (group QoS) or the ISID specific queue set and scheduler (qos-per-member).

  • Modifying the group's objective membership as ranges or singletons are created or expanded by the membership operation.

Egress queue scheduling

Figure 31. Egress queue scheduling

Egress queue scheduling displays an example of egress queue scheduling.

The queuing and scheduling re-uses existing scheduler policies and the port scheduler policy, with the difference that a separate set of FC queues is created for each ISID context defined by the encap-groups configured under the egress context of the B-SAP. This is in addition to the set of queues defined in the SAP egress QoS policy applied to the egress of the entire SAP.

The user type in Egress queue scheduling maps to a specific encap-group defined for the B-SAP in CLI. The operator has the flexibility of scheduling many user types by assigning different scheduling parameters as follows:

  • A specific scheduler policy to each encap-group with a root scheduler which shapes the aggregate rate of all queues in the ISID context of the encap-group and provides strict priority scheduling to its children.

    A second tier scheduler can be used as a WFQ scheduler to aggregate a subset of the ISID context FC queues. Alternatively, the operator can apply an aggregate rate limit to the ISID context instead of a scheduler policy.

  • A specific priority level when parenting the ISID queues or the root of the scheduler policy serving the ISID queues to the port scheduler.

  • Ability to use the weighted scheduler group to further distribute the bandwidth to the queues or root schedulers within the same priority level according to configured weights.

To make the shaping of the ISID context reflect the SLA associated with each user type, it is required to subtract the operator’s PBB overhead from the Ethernet frame size. For that purpose, a packet byte-offset parameter is added to the context of a queue.

config>qos>sap-egress>queue>packet-byte-offset {add bytes | subtract bytes}

When a packet-byte-offset value is applied to a queue instance, it adjusts the immediate packet size. This means that the queue rates, like the operational PIR and CIR, and queue bucket updates use the adjusted packet size. In addition, the queue statistics also reflect the adjusted packet size. Scheduler policy rates, which are data rates, use the adjusted packet size.
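
As a worked example, assume a PBB overhead of 22 bytes (12 bytes of B-MAC addresses plus a 4-byte B-TAG and a 6-byte I-TAG), as used in the configuration example that follows. With packet-byte-offset subtract bytes 22 configured on a queue, a frame that reaches the egress queue as 1522 bytes is accounted as 1500 bytes, so the operational PIR and CIR and the queue statistics reflect the customer frame size rather than the PBB-encapsulated size.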

The port scheduler max-rate and priority level rates and weights, if a Weighted Scheduler Group is used, are always ‟on-the-wire” rates and therefore use the actual frame size. The same applies to the agg-rate-limit on a SAP, a subscriber, or a Multi-Service Site (MSS) when the queue is port-parented.

When the user enables frame-based-accounting in a scheduler policy, or queue-frame-based-accounting with agg-rate-limit in a port scheduler policy, the queue rate is capped to a user-configured ‟on-the-wire” rate and the packet-byte-offset is not included; however, the packet-byte-offset is applied to the statistics.

B-SAP per-ISID shaping configuration example

The following CLI configuration for B-SAP per-ISID shaping achieves the specific use case shown in Egress queue scheduling.

config
    qos
        port-scheduler-policy "bvpls-backbone-port-scheduler"
            group scheduler-group1 create
                rate 1000
            exit
            level 3 rate 1000 group scheduler-group1 weight w1
            level 4 rate 1000 group scheduler-group1 weight w4
            level 5 rate 1000 cir-rate 100
            level 7 rate 5000 cir-rate 5000
            level 8 rate 500 cir-rate 500
        exit

       scheduler-policy "user-type1"
       tier 1
       scheduler root
port-parent level 8 rate pir1 weight w-pir1 cir-level 8 cir-rate cir1 
cir-weight w-cir1
            exit
       tier 3
       scheduler wfq
           rate pir1
       parent root
            exit
        exit
exit

       scheduler-policy "user-type2"
       tier 1
       scheduler root
port-parent level 7 rate pir2 weight w-pir2 cir-level 7 cir-rate cir2 
cir-weight w-cir2
            exit
       tier 3
       scheduler wfq
           rate pir2
       parent root
            exit
        exit
exit

       scheduler-policy "b-sap"
       tier 1
       scheduler root
port-parent level 5 rate pir5 weight w-pir5 cir-level 1 cir-rate cir5 cir-weight 
w-cir5
            exit
       tier 3
       scheduler wfq
           rate pir5
       parent root
            exit
        exit
exit

        sap-egress 100 // user type 1 QoS policy
            queue 1
                parent wfq weight x level 3 cir-weight x cir-level 3
                packet-byte-offset subtract bytes 22
            queue 2
                parent wfq weight y level 3 cir-weight y cir-level 3
                packet-byte-offset subtract bytes 22
            queue 3
                parent wfq weight z level 3 cir-weight z cir-level 3
                packet-byte-offset subtract bytes 22
            queue 4
                parent root level 8 cir-level 8
                packet-byte-offset subtract bytes 22
            fc be queue 1
            fc l2 queue 2
            fc h2 queue 3
            fc ef queue 4
        exit
     
        sap-egress 200 // user type 2 QoS policy
            queue 1
                parent wfq weight x level 3 cir-weight x cir-level 3
                packet-byte-offset subtract bytes 26
            queue 2
                parent wfq weight y level 3 cir-weight y cir-level 3
                packet-byte-offset subtract bytes 26
            queue 3
                parent wfq weight z level 3 cir-weight z cir-level 3
                packet-byte-offset subtract bytes 26
            queue 4
                parent root level 8 cir-level 8
                packet-byte-offset subtract bytes 26
            fc be queue 1
            fc l2 queue 2
            fc h2 queue 3
            fc ef queue 4
        exit

        sap-egress 300 // user type 3 QoS policy
            queue 1
                port-parent level 4 rate pir3 weight w-pir3 cir-level 4 cir-rate cir3 cir-weight w-cir3
                packet-byte-offset subtract bytes 22
            fc be queue 1
        exit

        sap-egress 400 // user type 4 QoS policy
            queue 1
                port-parent level 3 rate pir4 weight w-pir4 cir-level 3 cir-rate cir4 cir-weight w-cir4
                packet-byte-offset subtract bytes 22
            fc be queue 1
        exit

        sap-egress 500 // B-SAP default QoS policy
            queue 1
                parent wfq weight x level 3 cir-weight x cir-level 3
            queue 2
                parent wfq weight y level 3 cir-weight y cir-level 3
            queue 3
                parent wfq weight z level 3 cir-weight z cir-level 3
            queue 4
                parent root level 8 cir-level 8
            fc be queue 1
            fc l2 queue 2
            fc h2 queue 3
            fc ef queue 4
        exit
    exit
exit

config
    service
        vpls 100 bvpls
            sap 1/1/1:100
                egress
                    encap-defined-qos
                        encap-group type1-grouped type isid
                            qos 100
                            scheduler-policy user-type1
                            member 1 to 10
                        exit
                        encap-group type1-separate type isid qos-per-member
                            qos 100
                            scheduler-policy user-type1
                            member 16
                        exit
                        encap-group type2-grouped type isid
                            qos 200
                            scheduler-policy user-type2
                            member 21 to 30
                        exit
                        encap-group type2-separate type isid qos-per-member
                            qos 200
                            scheduler-policy user-type2
                            member 36
                        exit
                        encap-group type3-grouped type isid
                            qos 300
                            member 41 to 50
                        exit
                        encap-group type4-grouped type isid
                            qos 400
                            member 61 to 70
                        exit
                    exit
                    qos 500
                    scheduler-policy b-sap
                exit
            exit
        exit
    exit
exit

PBB OAM

The Nokia PBB implementation supports both MPLS and native Ethernet tunneling. In the MPLS case, SDP bindings are used as the B-VPLS infrastructure and T-LDP is used for signaling. As a result, the existing VPLS and MPLS diagnostic tools are supported in both the I-VPLS and B-VPLS domains, as depicted in PBB OAM view for MPLS infrastructure.

Figure 32. PBB OAM view for MPLS infrastructure

When an Ethernet switching backbone is used for aggregation between PBB PEs, a SAP is used as the B-VPLS up link instead of an SDP. No T-LDP signaling is available.

The existing IEEE 802.1ag implemented for regular VPLS SAPs may be used to troubleshoot connectivity at I-VPLS and B-VPLS layers.

Mirroring

There are no restrictions for mirroring in I-VPLS or B-VPLS.

OAM commands

All VPLS OAM commands may be used in both I-VPLS and B-VPLS instances.

I-VPLS
  • The following OAM commands are meaningful only toward another I-VPLS service instance (spoke-SDP in I-VPLS):

    • LSP-ping
    • LSP-trace
    • SDP-MTU
  • The following I-VPLS OAM exchanges are transparently transported over the B-VPLS core:

    • SVC-ping
    • MAC-ping
    • MAC-trace
    • MAC-populate
    • MAC-purge
    • CPE-ping (toward customer CPE)
    • 802.3ah EFM
    • SAA
  • For PBB up links using MPLS or SAPs, there are no PBB-specific OAM commands.
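
For example, a MAC-ping toward a customer MAC is issued in the I-VPLS context exactly as for a regular VPLS; the service ID and MAC address below are illustrative:

oam mac-ping service 1000 destination 00:22:22:22:22:22 send-count 3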

B-VPLS

In the case of an Ethernet switching backbone (B-SAPs on the B-VPLS), 802.1ag OAM is supported on the B-SAP, operating at:

  • the customer level (C-SA/C-DA and C-type layer)

  • the tunnel level (B-SA/B-DA and B-type layer)

CFM support

There is no special 802.1ag CFM (Connectivity Fault Management) support for PBB. The B-component and I-components run their own maintenance domains and levels. CFM for I-components runs transparently over the PBB network, so the I-components appear to be directly connected.

Configuration examples

Use the CLI syntax displayed to configure PBB.

PBB using G.8031 protected Ethernet tunnels

The following displays PBB configuration examples:

Ethernet links on BEB1:

BEB1 to BCB1 L1: 1/1/1 – Member port of LAG-emulation ET1, terminate ET3

BEB1 to BCB1 L2: 2/1/1 – Member port of LAG-emulation ET1

BEB1 to BCB1 L3: 3/1/1 – Member port of LAG-emulation ET1

BEB1 to BCB2: 4/1/1 – terminate ET3

*A:7750_ALU>config>eth-tunnel 1
        description "LAG-emulation to BCB1 ET1"
        protection-type loadsharing
        ethernet
            mac 00:11:11:11:11:12
            encap-type dot1q
        exit
        ccm-hold-time down 5 up 10 // 50 ms down, 1 sec up
        lag-emulation
           access adapt-qos distribute
           path-threshold 1
        exit
        path 1
            member 1/1/1
            control-tag 0 
            eth-cfm 
                …
            exit
            no shutdown
        exit
        path 2
            member 2/1/1
            control-tag 0
            eth-cfm 
                …
            exit
            no shutdown
        exit
        path 3
            member 3/1/1
            control-tag 0
            eth-cfm
                …
            exit
            no shutdown
        exit
        no shutdown
--------------------------------------------------
*A:7750_ALU>config>eth-tunnel 3
        description "G.8031 tunnel ET3"
        protection-type 8031_1to1
        ethernet
            mac 00:11:11:11:11:11
            encap-type dot1q
        exit
        ccm-hold-time down 5 // 50 ms down, no up hold-down
        path 1
            member 1/1/1
            control-tag 5
            precedence primary
            eth-cfm
                mep 2 domain 1 association 1
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        path 2
            member 4/1/1
            control-tag 5 
            eth-cfm
                mep 2 domain 1 association 2
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        no shutdown
--------------------------------------------------
# Service config
--------------------------------------------------
*A:7750_ALU>config>service vpls 1 customer 1 m-vpls b-vpls create
   description "m-VPLS for multipoint traffic"
   stp
      mst-name  "BVPLS"
      mode p-mstp
      mst-instance 10
         mst-priority 4096
         vlan-range 100-199
      exit
      mst-instance 20
         mst-priority 8192
         vlan-range 200-299
      exit
      no shutdown
   exit

   sap eth-tunnel-1 create // BSAP0 to BCB E
   sap 4/1/1:0 create // physical link to BCB F (NOTE: 0 or 0.*
                      // indicates untagged for m-VPLS)
   exit
   no shutdown
---------------------------------------------------
# Service config: one of the same-fate SAP over
# loadsharing tunnel
---------------------------------------------------
A:7750_ALU>config service vpls 100 customer 1 b-vpls create
   sap eth-tunnel-1:1 create // to BCB E
      // must specify tags for each path for loadsharing
      eth-tunnel
         path 1 tag 100
         path 2 tag 100
         path 3 tag 100
      exit
   no shutdown
   …
   sap 3/1/1:200 // to BCB F
   …

A:7750_ALU>config service vpls 1000 customer 1 i-vpls create
   pbb backbone-vpls 100 isid 1000
   sap 4/1/1:200 // access SAP to QinQ
…
--------------------------------------------------
# Service config: one of epipes into b-VPLS protected tunnel
# as per R7.0 R4
--------------------------------------------------
A:7750_ALU>config service vpls 3 customer 1 b-vpls create
   sap eth-tunnel-3 create
   …
service epipe 2000
    pbb-tunnel 100 backbone-dest-mac to-AS20 isid 2000
    sap 3/1/1:400 create

Example:

port 1/1/1
        — ethernet
            — encap-type dot1q
    port 2/2/2
        — ethernet
            — encap-type dot1q
    config eth-tunnel 1
        — path 1
            — member 1/1/1
            — control-tag 100
            — precedence primary
            — eth-cfm
                — mep 51 domain 1 association 1 direction down
                — ccm-enable
                — low-priority-defect allDef
                — mac-address 00:AE:AE:AE:AE:AE
                — control-mep
                — no shutdown
            — no shutdown
        — path 2
            — member 2/2/2 
            — control-tag 200
            — eth-cfm
                — mep 52 domain 1 association 2 direction down
                — ccm-enable
                — low-priority-defect allDef
                — mac-address 00:BE:BE:BE:BE:BE
                — control-mep
                — no shutdown
            — no shutdown
    
    config service vpls 1 b-vpls
        — sap eth-tunnel-1
    config service epipe 1000
        — pbb-tunnel 1 backbone-dest-mac remote-beb
        — sap 3/1/1:400.10 

MC-LAG multihoming for native PBB

This section describes a configuration example for BEB C, with the following assumptions:

  • BEB C and BEB D are MC-LAG peers

  • B-VPLS 100 on BEB C and BEB D

  • VPLS 1000 on BEB C and BEB D

  • MC-LAG 1 on BEB C and BEB D

CLI syntax:

service pbb
        — source-bmac ab-ac-ad-ef-00-00
    port 1/1/1
        — ethernet
            — encap-type qinq
    lag 1
        — port 1/1/1 priority 20
        — lacp active administrative-key 32768
    redundancy
        — multi-chassis
            — peer 10.1.1.3 create
                — source-address 10.1.1.1
                — mc-lag
                    — lag 1 lacp-key 1 system-id 00:00:00:01:01:01 
                    — system-priority 100
                    — source-bmac-lsb use-lacp-key 
    service vpls 100 bvpls
        — sap 2/2/2:100 // bvid 100 
        — mac-notification
            — no shutdown
    
    service vpls 101 bvpls
        — sap 2/2/2:101 // bvid 101
        — mac-notification
            — no shutdown
    // no per BVPLS source-bmac configuration, the chassis one (ab-ac-ad-ef-00-00) is used
    
    service vpls 1000 ivpls
        — backbone-vpls 100
        — sap lag-1:1000 //automatically associates the SAP with ab-ac-ad-ef-00-01 (first 32 bits from BVPLS 100 sbmac + 16-bit source-bmac-lsb)
    
    service vpls 1001 ivpls
        — backbone-vpls 101
        — sap lag-1:1001 //automatically associates the SAP with ab-ac-ad-ef-00-01 (first 32 bits from BVPLS 101 sbmac + 16-bit source-bmac-lsb)

Access multihoming over MPLS for PBB Epipes

This section gives an example configuration for BEB1 from Active/standby PW into PBB Epipes.

*A:BEB1>config>service# info
----------------------------------------------
        pbb
            source-bmac 00:00:00:00:11:11
            mac-name "remote-BEB" 00:44:44:44:44:44
        exit
        sdp 1 mpls create
            far-end 10.1.1.4
            ldp
            keep-alive
                shutdown
            exit
            source-bmac-lsb 33:33 control-pw-vc-id 100
            no shutdown
        exit
        vpls 10 customer 1 b-vpls create
            service-mtu 1532
            stp
                shutdown
            exit
            spb 1024 fid 1 create
                no shutdown
            exit
            sap 1/1/1:10 create
                spb create
                    no shutdown
                exit
            exit
            sap 1/1/5:10 create
                spb create
                    no shutdown
                exit
            exit
            no shutdown
        exit
        epipe 100 customer 1 create
            pbb
                tunnel 10 backbone-dest-mac "remote-BEB" isid 100
            exit
            spoke-sdp 1:100 create
                use-sdp-bmac
                no shutdown
            exit
            no shutdown
        exit
        epipe 101 customer 1 create
            pbb
                tunnel 10 backbone-dest-mac "remote-BEB" isid 101
            exit
            spoke-sdp 1:101 create
                use-sdp-bmac
                no shutdown
            exit
            no shutdown
        exit
----------------------------------------------
*A:BEB1>config>service#

The SDP control pseudowire information can be seen using this command:

*A:BEB1# show service sdp 1 detail

===============================================================================
Service Destination Point (Sdp Id : 1) Details
===============================================================================
-------------------------------------------------------------------------------
 Sdp Id 1  -10.1.1.4
-------------------------------------------------------------------------------
Description           : (Not Specified)
SDP Id               : 1                     SDP Source         : manual
...
Src B-MAC LSB        : 33-33                 Ctrl PW VC ID      : 100
Ctrl PW Active       : Yes

...
===============================================================================
*A:BEB1#

The configuration of a pseudowire to support remote active/standby PBB Epipe operation can be seen using this command:

*A:BEB1# show service id 100 sdp 1:100 detail

===============================================================================
Service Destination Point (Sdp Id : 1:100) Details
===============================================================================
-------------------------------------------------------------------------------
 Sdp Id 1:100  -(10.1.1.4)
-------------------------------------------------------------------------------
Description     : (Not Specified)
SDP Id             : 1:100                    Type              : Spoke
...
Use SDP B-MAC      : True
...
===============================================================================
*A:BEB1#