Virtual Private Routed Network service
VPRN service overview
RFC 4364, BGP/MPLS IP VPNs (the successor to the original RFC 2547), details a method of distributing routing information using BGP, and of using MPLS forwarding, to provide a Layer 3 Virtual Private Network (VPN) service to end customers.
Each Virtual Private Routed Network (VPRN) consists of a set of customer sites connected to one or more PE routers. Each associated PE router maintains a separate IP forwarding table for each VPRN. Additionally, the PE routers exchange the routing information configured or learned from all customer sites via MP-BGP peering. Each route exchanged via the MP-BGP protocol includes a Route Distinguisher (RD), which identifies the VPRN association and handles the possibility of IP address overlap.
The service provider uses BGP to exchange the routes of a particular VPN among the PE routers that are attached to that VPN. This is done in a way which ensures that routes from different VPNs remain distinct and separate, even if two VPNs have an overlapping address space. The PE routers peer with locally connected CE routers and exchange routes with other PE routers to provide end-to-end connectivity between CEs belonging to a specific VPN. Because the CE routers do not peer with each other there is no overlay visible to the CEs.
When BGP distributes a VPN route it also distributes an MPLS label for that route. On an SR series router, the method of allocating a label to a VPN route depends on the VPRN label mode and the configuration of the VRF export policy. SR series routers support three label allocation methods: label per VRF, label per next hop, and label per prefix.
Before a customer data packet travels across the service provider's backbone, it is encapsulated with the MPLS label that corresponds, in the customer's VPN, to the route that best matches the packet's destination address. The MPLS packet is further encapsulated with one or more additional MPLS labels or a GRE tunnel header so that it is tunneled across the backbone to the correct PE router. As a result, the backbone core routers do not need to know the VPN routes. Virtual Private Routed Network displays a VPRN network diagram example.
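The three label allocation methods differ only in the granularity at which a VPN service label is assigned. The following Python sketch is purely illustrative (it is not SR OS code, and all names such as `LabelAllocator` and the label range are hypothetical):

```python
import itertools

class LabelAllocator:
    """Illustrative sketch of the three VPRN service-label allocation modes."""

    def __init__(self, mode):
        assert mode in ("per-vrf", "per-next-hop", "per-prefix")
        self.mode = mode
        self._labels = itertools.count(131072)  # arbitrary example label range
        self._table = {}

    def label_for(self, vrf, next_hop, prefix):
        # The only difference between the modes is the key granularity:
        # one label per VRF, per next hop within a VRF, or per prefix.
        key = {
            "per-vrf": (vrf,),
            "per-next-hop": (vrf, next_hop),
            "per-prefix": (vrf, next_hop, prefix),
        }[self.mode]
        if key not in self._table:
            self._table[key] = next(self._labels)
        return self._table[key]
```

For example, a per-VRF allocator returns the same label for every route in a VRF, while a per-prefix allocator returns a distinct label for each prefix.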
Routing prerequisites
RFC 4364 requires the following features:
multiprotocol extensions to BGP
extended BGP community support
BGP capability negotiation
Tunneling protocol options are as follows:
Label Distribution Protocol (LDP)
MPLS RSVP-TE tunnels
Generic Router Encapsulation (GRE) tunnels
BGP route tunnel (RFC 8277)
Core MP-BGP support
BGP is used with BGP extensions mentioned in Routing prerequisites to distribute VPRN routing information across the service provider’s network.
BGP was initially designed to distribute IPv4 routing information. Therefore, multiprotocol extensions and the use of a VPN-IP address were created to extend BGP’s ability to carry overlapping routing information. A VPN-IPv4 address is a 12-byte value consisting of the 8-byte route distinguisher (RD) and the 4-byte IPv4 IP address prefix. A VPN-IPv6 address is a 24-byte value consisting of the 8-byte RD and 16-byte IPv6 address prefix. Service providers typically assign one or a small number of RDs per VPN service network-wide.
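The 12-byte VPN-IPv4 layout can be illustrated with a short Python sketch (the helper name `vpn_ipv4_address` is hypothetical; the RD shown uses the Type 0 format described under Route distinguishers):

```python
import ipaddress
import struct

def vpn_ipv4_address(rd: bytes, prefix: str) -> bytes:
    """Concatenate an 8-byte RD and a 4-byte IPv4 prefix into a
    12-byte VPN-IPv4 address (illustrative only)."""
    assert len(rd) == 8
    net = ipaddress.ip_network(prefix)
    return rd + net.network_address.packed

# Type 0 RD: 2-byte type, 2-byte AS administrator, 4-byte assigned number
rd = struct.pack(">HHI", 0, 64512, 100)
addr = vpn_ipv4_address(rd, "10.1.0.0/16")
assert len(addr) == 12  # 8-byte RD + 4-byte IPv4 prefix
```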
Route distinguishers
The route distinguisher (RD) is an 8-byte value consisting of two major fields, the Type field and Value field. The Type field determines how the Value field should be interpreted. The 7750 SR and 7950 XRS implementation supports the three (3) Type values as defined in the standard.
The three Type values are:
Type 0: Value field consists of an Administrator subfield (2 bytes) and an Assigned number subfield (4 bytes). The Administrator subfield must contain an AS number (using private AS numbers is discouraged). The Assigned subfield contains a number assigned by the service provider.

Type 1: Value field consists of an Administrator subfield (4 bytes) and an Assigned number subfield (2 bytes). The Administrator subfield must contain an IP address (using private IP address space is discouraged). The Assigned subfield contains a number assigned by the service provider.

Type 2: Value field consists of an Administrator subfield (4 bytes) and an Assigned number subfield (2 bytes). The Administrator subfield must contain a 4-byte AS number (using private AS numbers is discouraged). The Assigned subfield contains a number assigned by the service provider.
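The three RD layouts can be sketched in Python (illustrative only; `encode_rd` is a hypothetical helper that follows the byte layout described above, not an SR OS facility):

```python
import ipaddress
import struct

def encode_rd(rd_type: int, administrator, assigned: int) -> bytes:
    """Illustrative encoding of the three RD types described above."""
    if rd_type == 0:
        # 2-byte AS administrator, 4-byte assigned number
        return struct.pack(">HHI", 0, administrator, assigned)
    if rd_type == 1:
        # 4-byte IPv4 address administrator, 2-byte assigned number
        ip = ipaddress.IPv4Address(administrator).packed
        return struct.pack(">H", 1) + ip + struct.pack(">H", assigned)
    if rd_type == 2:
        # 4-byte AS administrator, 2-byte assigned number
        return struct.pack(">HIH", 2, administrator, assigned)
    raise ValueError("unknown RD type")

# Every RD is 8 bytes regardless of type
assert len(encode_rd(0, 64512, 100)) == 8
assert len(encode_rd(1, "192.0.2.1", 7)) == 8
assert len(encode_rd(2, 4200000000, 7)) == 8
```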
eiBGP load balancing
eiBGP load balancing allows a route to have multiple next hops of different types, using both IPv4 next hops and MPLS LSPs simultaneously.
Basic eiBGP topology displays a basic topology that could use eiBGP load balancing. In this topology, CE1 is dual-homed and therefore reachable by two separate PE routers. CE2 (a site in the same VPRN) is also attached to PE1. With eiBGP load balancing, PE1 uses both its own local IPv4 next hop and the route advertised via MP-BGP by PE2.
Another example, displayed in Extranet load balancing, shows an extranet VPRN (VRF). The traffic ingressing the PE that should be load balanced is part of a second VPRN, and the route over which the load balancing occurs belongs to a separate VPRN instance and is leaked into the second VPRN by route policies.
Here, both routes can have a source protocol of VPN-IPv4 but one still has an IPv4 next hop and the other can have a VPN-IPv4 next hop pointing out a network interface. Traffic is still load balanced (if eiBGP is enabled) as if only a single VRF was involved.
Traffic is load balanced across both the IPv4 and VPN-IPv4 next hops. This helps to use all available bandwidth to reach a dual-homed VPRN.
Route reflector
The use of Route Reflectors is supported in the service provider core. Multiple sets of route reflectors can be used for different types of BGP routes, including IPv4 and VPN-IPv4 as well as multicast and IPv6 (multicast and IPv6 apply to the 7750 SR only).
CE to PE route exchange
Routing information between the Customer Edge (CE) and Provider Edge (PE) can be exchanged by the following methods:
Static Routes
EBGP
RIP
OSPF
OSPFv3
Each protocol provides controls to limit the number of routes learned from each CE router.
Route redistribution
Routing information learned from the CE-to-PE routing protocols, as well as configured static routes, is injected into the associated local VPN routing/forwarding (VRF) table. For dynamic routing protocols, protocol-specific route policies may modify or reject specific routes before they are injected into the local VRF.
Route redistribution from the local VRF to CE-to-PE routing protocols is to be controlled via the route policies in each routing protocol instance, in the same manner that is used by the base router instance.
The advertisement or redistribution of routing information from the local VRF to or from the MP-BGP instance is specified per VRF and is controlled by VRF route target associations or by VRF route policies.
- MD-CLI
  configure policy-options policy-statement entry from protocol name
- classic CLI
  configure router policy-options policy-statement entry from protocol
CPE connectivity check
Static routes are used in many IES and VPRN services. Unlike dynamic routing protocols, static routes cannot change state based on availability information for the associated CPE. The CPE connectivity check adds this flexibility: unavailable destinations are removed from the VPRN routing tables dynamically, minimizing wasted bandwidth.
The following figure shows a setup with a directly connected IP target.
The following figure shows the setup with multiple hops to an IP target.
The availability of the far-end static route is monitored through periodic polling. The polling period is configurable. If a specified number of sequential polls fail, the static route is marked as inactive.
Either the ICMP ping or the unicast ARP mechanism can be used to test connectivity; ICMP ping is preferred.
If the connectivity check fails and the static route is deactivated, the router continues to send polls and reactivates the route when connectivity is restored.
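The polling behavior described above can be sketched as a small state machine (a hypothetical illustration, not SR OS code; `drop_count` stands in for the configurable threshold of sequential failed polls):

```python
class CpeCheck:
    """Illustrative CPE connectivity-check state machine: the static route
    is deactivated after drop_count consecutive failed polls and
    reactivated on the next successful poll."""

    def __init__(self, drop_count=3):
        self.drop_count = drop_count  # failures before deactivation
        self.failures = 0
        self.route_active = True

    def poll(self, target_reachable: bool) -> bool:
        if target_reachable:
            # Polls continue even while the route is down, so a restored
            # target brings the route back immediately.
            self.failures = 0
            self.route_active = True
        else:
            self.failures += 1
            if self.failures >= self.drop_count:
                self.route_active = False
        return self.route_active
```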
RT Constraint
Constrained VPN Route Distribution based on route targets
Constrained Route Distribution (or RT Constraint) is a mechanism that allows a router to advertise Route Target membership information to its BGP peers to indicate interest in receiving only VPN routes tagged with specific Route Target extended communities. Upon receiving this information, peers restrict the advertised VPN routes to only those requested, minimizing control plane load in terms of protocol traffic and possibly also RIB memory.
The Route Target membership information is carried using MP-BGP, using an AFI value of 1 and SAFI value of 132. In order for two routers to exchange RT membership NLRI they must advertise the corresponding AFI/SAFI to each other during capability negotiation. The use of MP-BGP means RT membership NLRI are propagated, loop-free, within an AS and between ASes using well-known BGP route selection and advertisement rules.
ORF can also be used for RT-based route filtering, but ORF messages have a limited scope of distribution (to direct peers) and therefore do not automatically create pruned inter-cluster and inter-AS route distribution trees.
Configuring the route target address family
RT Constraint is supported only by the base router BGP instance. When the family command at the BGP router group or neighbor CLI context includes the route-target command option, the RT Constraint capability is negotiated with the associated set of EBGP and IBGP peers.
ORF and RT Constraint are mutually exclusive on a particular BGP session. The CLI does not attempt to block this configuration, but if both capabilities are enabled on a session, the ORF capability is not included in the OPEN message sent to the peer.
Originating RT constraint routes
When the base router has one or more RTC peers (BGP peers with which the RT Constraint capability has been successfully negotiated), one RTC route is created for each RT extended community imported into a locally-configured L2 VPN or L3 VPN service. Use the following commands to configure these imported route targets.
configure service vprn
configure service vprn mvpn
By default, these RTC routes are automatically advertised to all RTC peers, without the need for an export policy to explicitly ‟accept” them. Each RTC route has a prefix, a prefix length and path attributes. The prefix value is the concatenation of the origin AS (a 4-byte value representing the 2- or 4-octet AS of the originating router, as configured using the configure router autonomous-system command) and 0 or 16-64 bits of a route target extended community encoded in one of the following formats: 2-octet AS specific extended community, IPv4 address specific extended community, or 4-octet AS specific extended community.
Use the following commands to configure a router to send the default RTC route to any RTC peer.
- MD-CLI
  configure router bgp group default-route-target true
  configure router bgp group neighbor default-route-target true
- classic CLI
  configure router bgp group default-route-target
  configure router bgp group neighbor default-route-target
The default RTC route is a special type of RTC route that has zero prefix length. Sending the default RTC route to a peer conveys a request to receive all VPN routes (regardless of route target extended community) from that peer. The default RTC route is typically advertised by a route reflector to its clients. The advertisement of the default RTC route to a peer does not suppress other more specific RTC routes from being sent to that peer.
Receiving and re-advertising RT Constraint routes
All received RTC routes that are deemed valid are stored in the RIB-IN. An RTC route is considered invalid and treated as withdrawn if any of the following applies:
The prefix length is 1-31.
The prefix length is 33-47.
The prefix length is 48-96 and the 16 most-significant bits are not 0x0002, 0x0102 or 0x0202.
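The validity rules above can be expressed as a short Python check (illustrative only; `rtc_prefix_is_valid` is a hypothetical helper, and `rt_type` is the 16 most-significant bits of the RT portion, which only matter for prefix lengths of 48-96):

```python
def rtc_prefix_is_valid(prefix_len: int, rt_type=None) -> bool:
    """Apply the RTC route validity rules listed above."""
    if prefix_len in (0, 32):
        return True
    if 1 <= prefix_len <= 31 or 33 <= prefix_len <= 47:
        return False  # invalid ranges
    if 48 <= prefix_len <= 96:
        # The first 16 bits of the RT must be a known extended
        # community type: 2-octet AS, IPv4 address, or 4-octet AS.
        return rt_type in (0x0002, 0x0102, 0x0202)
    return False  # lengths above 96 bits cannot occur in a valid NLRI

assert rtc_prefix_is_valid(0)           # default RTC route
assert rtc_prefix_is_valid(64, 0x0102)  # origin AS + IPv4-address-type RT
assert not rtc_prefix_is_valid(40)      # falls in the 33-47 invalid range
```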
If multiple RTC routes are received for the same prefix value then standard BGP best path selection procedures are used to determine the best of these routes.
The best RTC route per prefix is re-advertised to RTC peers based on the following rules:
The best path for a default RTC route (prefix-length 0, origin AS only with prefix-length 32, or origin AS plus 16 bits of an RT type with prefix-length 48) is never propagated to another peer.
A PE with only IBGP RTC peers that is neither a route reflector nor an ASBR does not re-advertise the best RTC route to any RTC peer because of standard IBGP split horizon rules.
A route reflector that receives its best RTC route for a prefix from a client peer re-advertises that route (subject to export policies) to all of its client and non-client IBGP peers (including the originator), per standard RR operation. When the route is re-advertised to client peers, the RR:
- sets the ORIGINATOR_ID to its own router ID
- modifies the NEXT_HOP to be its local address for the sessions (for example, the system IP address)
A route reflector that receives its best RTC route for a prefix from a non-client peer re-advertises that route (subject to export policies) to all of its client peers, per standard RR operation. If the RR has a non-best path for the prefix from any of its clients, it advertises the best of the client-advertised paths to all non-client peers.
An ASBR that is neither a PE nor a route reflector that receives its best RTC route for a prefix from an IBGP peer re-advertises that route (subject to export policies) to its EBGP peers. It modifies the NEXT_HOP and AS_PATH of the re-advertised route per standard BGP rules. No aggregation of RTC routes is supported.
An ASBR that is neither a PE nor a route reflector that receives its best RTC route for a prefix from an External Border Gateway Protocol (EBGP) peer re-advertises that route (subject to export policies) to its EBGP and IBGP peers. When re-advertised routes are sent to EBGP peers, the ASBR modifies the NEXT_HOP and AS_PATH per standard BGP rules. No aggregation of RTC routes is supported.
These advertisement rules do not handle hierarchical RR topologies properly. This is a limitation of the current RT constraint standard.
Using RT Constraint routes
In general (ignoring IBGP-to-IBGP rules, Add-Path, Best-external, and so on), the best VPN route for every prefix/NLRI in the RIB is sent to every peer supporting the VPN address family, but export policies may be used to prevent some prefix/NLRI from being advertised to specific peers. These export policies may be configured statically or created dynamically based on use of ORF or RT constraint with a peer. ORF and RT Constraint are mutually exclusive on a session.
When RT Constraint is configured on a session that also supports VPN address families using route targets (that is, VPN-IPv4, VPN-IPv6, L2-VPN, MVPN-IPv4, MVPN-IPv6, Mcast-VPN-IPv4, or EVPN), the advertisement of the VPN routes is affected as follows:
- When the session comes up, the advertisement of the VPN routes is delayed for a short while to allow RTC routes to be received from the peer.
- After the initial delay, the received RTC routes are analyzed and acted upon. If S1 is the set of routes previously advertised to the peer and S2 is the set of routes that should be advertised based on the most recent received RTC routes, the following applies:
  - The set of routes in S1 but not in S2 is withdrawn immediately (subject to MRAI).
  - The set of routes in S2 but not in S1 is advertised immediately (subject to MRAI).
- If a default RTC route is received from a peer P1, the VPN routes advertised to P1 are those that meet all of the following requirements:
  - The routes are eligible for advertisement to P1 per BGP route advertisement rules.
  - The routes have not been rejected by manually configured export policies.
  - The routes have not already been advertised to the peer.

  In this context, a default RTC route is any of the following:
  - a route with NLRI length = zero
  - a route with NLRI value = origin AS and NLRI length = 32
  - a route with NLRI value = {origin AS+0x0002 | origin AS+0x0102 | origin AS+0x0202} and NLRI length = 48
- If an RTC route for prefix A (origin-AS = A1, RT = A2/n, n > 48) is received from an IBGP peer I1 in autonomous system A1, the VPN routes advertised to I1 are those that meet all of the following requirements:
  - The routes are eligible for advertisement to I1 per BGP route advertisement rules.
  - The routes have not been rejected by manually configured export policies.
  - The routes carry at least one route target extended community with value A2 in the n most significant bits.
  - The routes have not already been advertised to the peer.

  Note: This applies whether or not I1 advertised the best route for A.
- If the best RTC route for a prefix A (origin-AS = A1, RT = A2/n, n > 48) is received from an IBGP peer I1 in autonomous system B, the VPN routes advertised to I1 are those that meet all of the following requirements:
  - The routes are eligible for advertisement to I1 per BGP route advertisement rules.
  - The routes have not been rejected by manually configured export policies.
  - The routes carry at least one route target extended community with value A2 in the n most significant bits.
  - The routes have not already been advertised to the peer.

  Note: This applies only if I1 advertised the best route for A.
- If the best RTC route for a prefix A (origin-AS = A1, RT = A2/n, n > 48) is received from an EBGP peer E1, the VPN routes advertised to E1 are those that meet all of the following requirements:
  - The routes are eligible for advertisement to E1 per BGP route advertisement rules.
  - The routes have not been rejected by manually configured export policies.
  - The routes carry at least one route target extended community with value A2 in the n most significant bits.
  - The routes have not already been advertised to the peer.

  Note: This applies only if E1 advertised the best route for A.
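The S1/S2 comparison described above amounts to two set differences (illustrative Python; `reconcile` is a hypothetical helper, not an SR OS facility):

```python
def reconcile(s1: set, s2: set):
    """Given S1 (routes previously advertised to the peer) and S2 (routes
    that should be advertised per the latest RTC routes), return what must
    be withdrawn and what must be advertised, both subject to MRAI."""
    withdraw = s1 - s2   # in S1 but not in S2: withdraw immediately
    advertise = s2 - s1  # in S2 but not in S1: advertise immediately
    return withdraw, advertise

w, a = reconcile({"10.0.0.0/24", "10.0.1.0/24"},
                 {"10.0.1.0/24", "10.0.2.0/24"})
assert w == {"10.0.0.0/24"}
assert a == {"10.0.2.0/24"}
```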
BGP fast reroute in a VPRN
BGP fast reroute is a feature that brings together indirection techniques in the forwarding plane and pre-computation of BGP backup paths in the control plane to support fast reroute of BGP traffic around unreachable/failed next-hops. In a VPRN context BGP fast reroute is supported using unlabeled IPv4, unlabeled IPv6, VPN-IPv4, and VPN-IPv6 VPN routes. The supported VPRN scenarios are described in BGP fast reroute scenarios (VPRN context).
BGP fast reroute information specific to the base router BGP context is described in the BGP Fast Reroute section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide.
| Ingress packet | Primary route | Backup route | Prefix independent convergence |
|---|---|---|---|
| IPv4 (ingress PE) | IPv4 route with next-hop A resolved by an IPv4 route | IPv4 route with next-hop B resolved by an IPv4 route | Yes |
| IPv4 (ingress PE) | VPN-IPv4 route with next-hop A resolved by a GRE, LDP, RSVP or BGP tunnel | VPN-IPv4 route with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes |
| MPLS (egress PE) | IPv4 route with next-hop A resolved by an IPv4 route | IPv4 route with next-hop B resolved by an IPv4 route | Yes |
| MPLS (egress PE) | IPv4 route with next-hop A resolved by an IPv4 route | VPN-IPv4 route* with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes |
| IPv6 (ingress PE) | IPv6 route with next-hop A resolved by an IPv6 route | IPv6 route with next-hop B resolved by an IPv6 route | Yes |
| IPv6 (ingress PE) | VPN-IPv6 route with next-hop A resolved by a GRE, LDP, RSVP or BGP tunnel | VPN-IPv6 route with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes |
| MPLS (egress PE) | IPv6 route with next-hop A resolved by an IPv6 route | IPv6 route with next-hop B resolved by an IPv6 route | Yes |
| MPLS (egress PE) | IPv6 route with next-hop A resolved by an IPv6 route | VPN-IPv6 route* with next-hop B resolved by a GRE, LDP, RSVP or BGP tunnel | Yes |

* VPRN label mode must be VRF. The VPRN must export its VPN-IP routes with RD ≠ y. For the best performance, the backup next hop must advertise the same VPRN label value with all routes (per-VRF label).
BGP fast reroute in a VPRN configuration
In a VPRN context, BGP fast reroute is optional and must be enabled. Fast reroute can be applied to all IPv4 prefixes, all IPv6 prefixes, all IPv4 and IPv6 prefixes, or to a specific set of IPv4 and IPv6 prefixes.
If all IP prefixes require backup-path protection, use a combination of the following BGP instance-level and VPRN-level commands:
- MD-CLI
  configure router bgp backup-path ipv4 true
  configure router bgp backup-path ipv6 true
  configure service vprn bgp-vpn-backup ipv4 true
  configure service vprn bgp-vpn-backup ipv6 true
- classic CLI
  configure router bgp backup-path
  configure service vprn enable-bgp-vpn-backup
Use the following commands to enable BGP fast reroute for all IPv4 prefixes or all IPv6 prefixes, or both, that have a best path through a VPRN BGP peer:
- MD-CLI
  configure service vprn bgp backup-path ipv4 true
  configure service vprn bgp backup-path ipv6 true
  configure service vprn bgp backup-path label-ipv4 true
  configure service vprn bgp backup-path label-ipv6 true
- classic CLI
  configure service vprn bgp backup-path
Enabling BGP VPN backup at the VPRN level enables BGP fast reroute for all IPv4 prefixes, all IPv6 prefixes, or both, that have a best path through a remote PE peer.
If only some IP prefixes require backup path protection, use the following commands in the route policy to apply the install-backup-path action to the best paths of the IP prefixes requiring protection:
- MD-CLI
  configure policy-options policy-statement default-action install-backup-path
  configure policy-options policy-statement action install-backup-path
- classic CLI
  configure router policy-options policy-statement default-action install-backup-path
  configure router policy-options policy-statement entry action install-backup-path
See the ‟BGP Fast Reroute” section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide for more information.
Export of inactive VPRN BGP routes
A BGP route learned from a VPRN BGP peer is exportable as a VPN-IP route only if it is the best route for the prefix and is installed in the route table of the VPRN. Use the following command to relax this rule and allow the best inactive VPRN BGP route to be exportable as a VPN-IP route, provided that the active installed route for the prefix is an imported VPN-IP route.
configure service vprn export-inactive-bgp
Use the following command to further relax the preceding rule and allow the best inactive VPRN BGP route (best amongst all routes received from all CEs) to be exportable as a VPN-IP route, regardless of the route type of the active installed route.
configure service vprn export-inactive-bgp-enhanced
The export-inactive-bgp command (or the export-inactive-bgp-enhanced command, which is a superset of the functionality) is useful in a scenario where two or more PE routers connect to a multihomed site, and the failure of one of the PE routers or a PE-CE link must be handled by rerouting the traffic over the alternate paths. The traffic failover time in this situation can be reduced if all PE routers have prior knowledge of the potential backup paths and do not have to wait for BGP route advertisements or withdrawals to reprogram their forwarding tables. Achieving this can be challenging with normal BGP procedures. A PE router is not allowed to advertise a BGP route that it has learned from a connected CE device to other PE routers, if that route is not its active route for the destination in the route table. If the multihoming scenario requires all traffic destined for an IP prefix to be carried over a preferred primary path (passing through PE1-CE1, for example), all other PE routers (PE2, PE3, and so on) have that VPN route set as their active route for the destination and are unable to advertise their own routes for the same IP prefix.
For the best inactive BGP route to be exported, it must be matched and accepted by the VRF export policy or there must be an equivalent VRF target configuration.
When a VPN-IP route is advertised because either the export-inactive-bgp command or the export-inactive-bgp-enhanced command is enabled, the label carried in the route is either a per next-hop label corresponding to the next-hop IP address of the CE-BGP route, or a per-prefix label. This helps to avoid packet looping issues caused by unsynchronized IP FIBs.
When a PE router that advertised an inactive VPRN BGP route for an IP prefix receives a withdrawal for the VPN-IP route that was the active primary route, the inactive backup path may be promoted to the primary path; that is, the CE-BGP route may become the active route for the destination. In this case, the PE router is required to readvertise the VPN-IP route with a per-VRF label, if that is the behavior specified by the default allocation policy, and there is no label-per-prefix policy override. In the time it takes for the new VPN-IP route to reach all ingress routers and for the ingress routers to update their forwarding tables, traffic continues to be received with the old per next-hop label. The egress PE drops the in-flight traffic, unless the following command is used to configure label retention.
configure router mpls-labels bgp-labels-hold-timer
The bgp-labels-hold-timer command configures a delay (in seconds) between the withdrawal of a VPN-IP route with a per next-hop label and the deletion of the corresponding label forwarding entry in the IOM. The value of the bgp-labels-hold-timer command must be large enough to account for the propagation delay of the route withdrawal to all ingress routers.
VPRN features
This section describes various VPRN features and any special capabilities or considerations as they relate to VPRN services.
IP interfaces
VPRN customer IP interfaces can be configured with most of the same options found on the core IP interfaces. The advanced configuration options supported are as follows:
VRRP
Cflowd
secondary IP addresses
ICMP options
QoS Policy Propagation Using BGP
This section discusses QPPB as it applies to VPRN, IES, and router interfaces. See the QoS Policy Propagation Using BGP (QPPB) section and the IP Router Configuration section in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
The QoS Policy Propagation Using BGP (QPPB) feature applies only to the 7450 ESS and 7750 SR.
QoS policy propagation using BGP (QPPB) is a feature that allows a route to be installed in the routing table with a forwarding class and priority so that packets matching the route can receive the associated QoS. The forwarding class and priority associated with a BGP route are set using BGP import route policies. In the industry this feature is called QPPB and, even though the name refers to BGP specifically, on SR OS QPPB is supported for BGP (IPv4, IPv6, VPN-IPv4, VPN-IPv6), RIP, and static routes.
While SAP ingress and network QoS policies can achieve the same end result as QPPB, the effort involved in creating the QoS policies, keeping them up to date, and applying them across many nodes is much greater than with QPPB, because such policies must assign each packet arriving on a particular IP interface to a specific forwarding class and priority/profile based on the packet's source or destination IP address. In a typical application of QPPB, a BGP route is advertised with a BGP community attribute that conveys a particular QoS. Routers that receive the advertisement accept the route into their routing table and set the forwarding class and priority of the route from the community attribute.
QPPB applications
The typical applications of QPPB are as follows:
-
coordination of QoS policies between different administrative domains
-
traffic differentiation within a single domain, based on route characteristics
Inter-AS coordination of QoS policies
The user of an administrative domain A can use QPPB to signal to a peer administrative domain B that traffic sent to specific prefixes advertised by domain A should receive a particular QoS treatment in domain B. More specifically, an ASBR of domain A can advertise a prefix XYZ to domain B and include a BGP community attribute with the route. The community value implies a particular QoS treatment, as agreed by the two domains (in their peering agreement or service level agreement, for example). When the ASBR and other routers in domain B accept and install the route for XYZ into their routing table, they apply a QoS policy on selected interfaces that classifies traffic toward network XYZ into the QoS class implied by the BGP community value.
QPPB may also be used to request that traffic sourced from specific networks receive appropriate QoS handling in downstream nodes that may span different administrative domains. This can be achieved by advertising the source prefix with a BGP community, as discussed above. However, in this case other approaches are equally valid, such as marking the DSCP or other CoS fields based on source IP address so that downstream domains can take action based on a common understanding of the QoS treatment implied by different DSCP values.
In the above examples, coordination of QoS policies using QPPB could be between a business customer and its IP VPN service provider, or between one service provider and another.
Traffic differentiation based on route characteristics
There may be times when a network user wants to provide differentiated service to certain traffic flows within its network, and these traffic flows can be identified with known routes. For example, the user of an ISP network may want to give priority to traffic originating in a particular ASN (the ASN of a content provider offering over-the-top services to the ISP’s customers), following a specific AS_PATH, or destined for a particular next-hop (remaining on-net vs. off-net).
Use of QPPB to differentiate traffic in an ISP network shows an example of an ISP that has an agreement with the content provider managing AS300 to provide traffic sourced and terminating within AS300 with differentiated service appropriate to the content being transported. In this example we presume that ASBR1 and ASBR2 mark the DSCP of packets terminating and sourced, respectively, in AS300 so that other nodes within the ISP’s network do not need to rely on QPPB to determine the correct forwarding class to use for the traffic. The DSCP or other CoS markings could be left unchanged in the ISP’s network and QPPB used on every node.
QPPB
There are two main aspects of the QPPB feature on the 7450 ESS and 7750 SR:
the ability to associate a forwarding class and priority with specific routes in the routing table
the ability to classify an IP packet arriving on a particular IP interface to the forwarding class and priority associated with the route that best matches the packet
Associating an FC and priority with a route
Use the following commands to set the forwarding class and optionally the priority associated with routes accepted by a route-policy entry:
- MD-CLI
  configure policy-options policy-statement entry action forwarding-class fc
  configure policy-options policy-statement entry action forwarding-class priority
- classic CLI
  configure router policy-options policy-statement entry action fc priority
The following example shows the configuration of the forwarding class and priority.
MD-CLI
[ex:/configure policy-options]
A:admin@node-2# info
community "gold" {
member "300:100" { }
}
policy-statement "qppb_policy" {
entry 10 {
from {
community {
name "gold"
}
protocol {
name [bgp]
}
}
action {
action-type accept
forwarding-class {
fc h1
priority high
}
}
}
}
classic CLI
A:node-2>config>router>policy-options# info
----------------------------------------------
community "gold"
members "300:100"
exit
policy-statement "qppb_policy"
entry 10
from
protocol bgp
community "gold"
exit
action accept
fc h1 priority high
exit
exit
exit
----------------------------------------------
The fc command is supported with all existing from and to match conditions in a route policy entry and with any action other than reject; that is, it is supported with the next-entry, next-policy, and accept actions. If a next-entry or next-policy action results in multiple matching entries, the last entry with a QPPB action determines the forwarding class and priority.
A route policy that includes the fc command in one or more entries can be used in any import or export policy, but the fc command has no effect except in the following types of policies:
- MD-CLI
  - VRF import policies
    configure service vprn bgp-evpn mpls vrf-import
    configure service vprn bgp-ipvpn mpls vrf-import
    configure service vprn bgp-ipvpn srv6 vrf-import
    configure service vprn mvpn vrf-import
  - BGP import policies
    configure router bgp import
    configure router bgp group import
    configure router bgp neighbor import
    configure service vprn bgp import
    configure service vprn bgp group import
    configure service vprn bgp neighbor import
  - RIP import policies
    configure router rip import-policy
    configure router rip group import-policy
    configure router rip group neighbor import-policy
    configure service vprn rip import-policy
    configure service vprn rip group import-policy
    configure service vprn rip group neighbor import-policy
- classic CLI
  - VRF import policies
    configure service vprn bgp-evpn mpls vrf-import
    configure service vprn bgp-ipvpn mpls vrf-import
    configure service vprn bgp-ipvpn segment-routing-v6 vrf-import
    configure service vprn mvpn vrf-import
  - BGP import policies
    configure router bgp import
    configure router bgp group import
    configure router bgp group neighbor import
    configure service vprn bgp import
    configure service vprn bgp group import
    configure service vprn bgp group neighbor import
  - RIP import policies
    configure router rip import
    configure router rip group import
    configure router rip group neighbor import
    configure service vprn rip import
    configure service vprn rip group import
    configure service vprn rip group neighbor import
The QPPB route policies support routes learned from RIP and BGP neighbors of a VPRN as well as routes learned from RIP and BGP neighbors of the base/global routing instance.
QPPB is supported for BGP routes belonging to any of the address families listed below:
- IPv4 (AFI=1, SAFI=1)
- IPv6 (AFI=2, SAFI=1)
- VPN-IPv4 (AFI=1, SAFI=128)
- VPN-IPv6 (AFI=2, SAFI=128)
A VPN-IP route may match both a VRF import policy entry and a BGP import policy entry (if vpn-apply-import is configured in the base router BGP instance). In this case the VRF import policy is applied first and then the BGP import policy, so the QPPB QoS is based on the BGP import policy entry.
This feature also introduces the ability to associate a forwarding class and optionally priority with IPv4 and IPv6 static routes. This is achieved by specifying the forwarding class within the static route entry using the next-hop or indirect commands.
Priority is optional when specifying the forwarding class of a static route, but when configured it can only be deleted and returned to unspecified by deleting the entire static route.
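For illustration, a static route with an associated forwarding class and priority might be configured as follows. This is a sketch in the classic CLI; the prefix, next hop, and FC values are examples only, and the exact command form may vary by release.
configure router static-route 192.0.2.0/24 next-hop 10.10.10.1 fc h2 priority low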
Displaying QoS information associated with routes
Use the commands in the following contexts to show the forwarding class and priority associated with the displayed routes.
show router route-table
show router fib
show router bgp routes
show router rip database
show router static-route
Use the following command with the qos command option to show QoS information associated with a route. The output includes an additional line per route entry that displays the forwarding class and priority of the route. If a route has no forwarding class and priority information, the QoS line is blank.
show router route-table 10.1.5.0/24 qos
===============================================================================
Route Table (Router: Base)
===============================================================================
Dest Prefix Type Proto Age Pref
Next Hop[Interface Name] Metric
QoS
-------------------------------------------------------------------------------
10.1.5.0/24 Remote BGP 15h32m52s 0
PE1_to_PE2 0
h1, high
-------------------------------------------------------------------------------
No. of Routes: 1
===============================================================================
Enabling QPPB on an IP interface
To enable QoS classification of ingress IP packets on an interface based on the QoS information associated with the routes that best match the packets, configure the qos-route-lookup command in the IP interface. The qos-route-lookup command has command options to indicate whether the QoS result is based on lookup of the source or destination IP address in every packet. There are separate qos-route-lookup commands for the IPv4 and IPv6 packets on an interface, which allows QPPB to be enabled for IPv4 only, IPv6 only, or both IPv4 and IPv6. Currently, QPPB based on a source IP address is not supported for IPv6 packets or for ingress subscriber management traffic on a group interface.
The qos-route-lookup command is supported on the following types of IP interfaces:
- Base router network interfaces
  configure router interface
- VPRN SAP and spoke SDP interfaces
  configure service vprn interface
- VPRN group interfaces
  configure service vprn subscriber-interface group-interface
- IES SAP and spoke SDP interfaces
  configure service ies interface
- IES group interfaces
  configure service ies subscriber-interface group-interface
When the qos-route-lookup command with the destination command option is applied to an IP interface and the destination address of an incoming IP packet matches a route with QoS information, the packet is classified to the FC and priority associated with that route. The command overrides the FC and priority/profile determined from the SAP ingress or network QoS policy associated with the IP interface (see section 5.7 for more information). If the destination address of the incoming packet matches a route with no QoS information, the FC and priority of the packet remain as determined by the sap-ingress or network qos policy.
Similarly, when the qos-route-lookup command with the source command option is applied to an IP interface and the source address of an incoming IP packet matches a route with QoS information, the packet is classified to the FC and priority associated with that route. The command overrides the FC and priority/profile determined from the SAP ingress or network QoS policy associated with the IP interface. If the source address of the incoming packet matches a route with no QoS information, the FC and priority of the packet remain as determined by the SAP ingress or network QoS policy.
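For example, destination-based QPPB for IPv4 packets might be enabled on a base router network interface as follows. This is a sketch in the classic CLI; the interface name is illustrative and the command form may vary by release.
configure router interface "to-core" qos-route-lookup destination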
Currently, QPPB is not supported for ingress MPLS traffic on CsC PE or CsC CE interfaces or on network interfaces under the following context.
configure service vprn network-interface
QPPB when next hops are resolved by QPPB routes
In some circumstances (IP VPN inter-AS model C, Carrier Supporting Carrier, indirect static routes, and so on) an IPv4 or IPv6 packet may arrive on a QPPB-enabled interface and match a route A1 whose next-hop N1 is resolved by a route A2 with next-hop N2 and perhaps N2 is resolved by a route A3 with next-hop N3, and so on. The QPPB result is based only on the forwarding-class and priority of route A1. If A1 does not have a forwarding-class and priority association then the QoS classification is not based on QPPB, even if routes A2, A3, and so on have forwarding-class and priority associations.
QPPB and multiple paths to a destination
When ECMP is enabled some routes may have multiple equal-cost next-hops in the forwarding table. When an IP packet matches such a route the next-hop selection is typically based on a hash algorithm that tries to load balance traffic across all the next-hops while keeping all packets of a specific flow on the same path. The QPPB configuration model described in Associating an FC and priority with a route allows different QoS information to be associated with the different ECMP next hops of a route. The forwarding-class and priority of a packet matching an ECMP route is based on the particular next-hop used to forward the packet.
When BGP FRR is enabled some BGP routes may have a backup next-hop in the forwarding table in addition to the one or more primary next-hops representing the equal-cost best paths allowed by the ECMP/multipath configuration. When an IP packet matches such a route a reachable primary next-hop is selected (based on the hash result) but if all the primary next-hops are unreachable then the backup next-hop is used. The QPPB configuration model described in Associating an FC and priority with a route allows the forwarding-class and priority associated with the backup path to be different from the QoS characteristics of the equal-cost best paths. The forwarding class and priority of a packet forwarded on the backup path is based on the forwarding class and priority configured for the backup route.
QPPB and policy-based routing
When an IPv4 or IPv6 packet with destination address X arrives on an interface with both QPPB and policy-based-routing enabled:
- There is no QPPB classification if the IP filter action redirects the packet to a directly connected interface, even if X is matched by a route with a forwarding class and priority.
- QPPB classification is based on the forwarding class and priority of the route matching IP address Y if the IP filter action redirects the packet to the indirect next-hop IP address Y, even if X is matched by a route with a forwarding class and priority.
QPPB and GRT lookup
Source-address based QPPB is not supported on any SAP or spoke SDP interface of a VPRN configured with the grt-lookup command.
QPPB interaction with SAP ingress QoS policy
When QPPB is enabled on a SAP IP interface, the forwarding class of a packet may change from fc1, the original forwarding class determined by the SAP ingress QoS policy, to fc2, the new forwarding class determined by QPPB. In the ingress datapath, SAP ingress QoS policies are applied in the first P chip and route lookup/QPPB occurs in the second P chip. This has the following implications:
- Ingress remarking (based on profile state) is always based on the original fc (fc1) and sub-class (if defined).
- The profile state of a SAP ingress packet that matches a QPPB route depends on the configuration of fc2 only. If the de-1-out-profile flag is enabled in fc2 and fc2 is not mapped to a priority mode queue, then the packet is marked out of profile if its DE bit = 1. If the profile state of fc2 is explicitly configured (in or out) and fc2 is not mapped to a priority mode queue, then the packet is assigned this profile state. In both cases there is no consideration of whether fc1 was mapped to a priority mode queue.
- The priority of a SAP ingress packet that matches a QPPB route depends on several factors. If the de-1-out-profile flag is enabled in fc2 and the DE bit is set in the packet, then priority is low regardless of the QPPB priority or fc2 mapping to profile mode queue, priority mode queue, or policer. If fc2 is associated with a profile mode queue, then the packet priority is based on the explicitly configured profile state of fc2 (in profile = high, out profile = low, undefined = high), regardless of the QPPB priority or fc1 configuration. If fc2 is associated with a priority mode queue or policer, then the packet priority is based on QPPB (unless DE=1), but if no priority information is associated with the route, then the packet priority is based on the configuration of fc1 (if fc1 is mapped to a priority mode queue, then it is based on DSCP/IP prec/802.1p, and if fc1 is mapped to a profile mode queue, then it is based on the profile state of fc1).
QPPB interactions with SAP ingress QoS summarizes these interactions.
Original FC object mapping | New FC object mapping | Profile | Priority (drop preference) | DE=1 override | In/out of profile marking
---|---|---|---|---|---
Profile mode queue | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
Priority mode queue | Priority mode queue | Ignored | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Policer | Policer | From new base FC unless overridden by DE=1 | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Priority mode queue | Policer | From new base FC unless overridden by DE=1 | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Policer | Priority mode queue | Ignored | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then from original dot1p/exp/DSCP mapping or policy default. | From new base FC | From original FC and sub-class
Profile mode queue | Priority mode queue | Ignored | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then follows original FC’s profile mode rules. | From new base FC | From original FC and sub-class
Priority mode queue | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
Profile mode queue | Policer | From new base FC unless overridden by DE=1 | If DE=1 override then low otherwise from QPPB. If no DEI or QPPB overrides then follows original FC’s profile mode rules. | From new base FC | From original FC and sub-class
Policer | Profile mode queue | From new base FC unless overridden by DE=1 | From QPPB, unless packet is marked in or out of profile in which case follows profile. Default is high priority. | From new base FC | From original FC and sub-class
Object grouping and state monitoring
This feature introduces a generic operational group object which associates different service endpoints (pseudowires and SAPs) located in the same or in different service instances. The operational group status is derived from the status of the individual components using rules specific to the application using the concept. A number of other service entities, the monitoring objects, can be configured to monitor the operational group status and to perform specific actions as a result of status transitions. For example, if the operational group goes down, the monitoring objects are brought down.
VPRN IP interface applicability
This concept is used by an IPv4 VPRN interface to derive its operational state from the state of the operational group it monitors. Individual SAPs and spoke SDPs are supported as monitoring objects.
The following rules apply:
- An object can only belong to one group at a time.
- An object that is part of a group cannot monitor the status of a group.
- An object that monitors the status of a group cannot be part of a group.
- An operational group may contain any combination of member types: SAPs or spoke SDPs.
- An operational group may contain members from different VPLS service instances.
- Objects from different services may monitor the operational group.
- Identify a set of objects whose forwarding state should be considered as a whole group, then group them under an operational group using the oper-group command.
- Associate the IP interface to the operational group using the monitor-oper-group command.
The status of the operational group is dictated by the status of one or more members according to the following rules:
- The operational group goes down if all the objects in the operational group go down. The operational group comes up if at least one of the components is up.
- An object in the group is considered down if it is not forwarding traffic in at least one direction. That could be because the operational state is down or the direction is blocked through some validation mechanism.
- If a group is configured but no members are specified yet, its status is considered up.
- As soon as the first object is configured, the status of the operational group is dictated by the status of the provisioned members.
The following configuration shows the operational group g1, the VPLS SAP that is mapped to it, and the IP interfaces in IES service 2001 monitoring the operational group g1. This example uses an R-VPLS context.
Operational group g1 has a single SAP (1/1/1:2001) mapped to it and the IP interfaces in the IES service 2001 derive their state from the state of operational group g1.
MD-CLI
In the MD-CLI, the VPLS instance includes the name v1. The IES interface links to the VPLS using the vpls command.
[ex:/configure service]
A:admin@node-2# info
    oper-group "g1" {
    }
    ies "2001" {
        customer "1"
        interface "i2001" {
            monitor-oper-group "g1"
            vpls "v1" {
            }
            ipv4 {
                primary {
                    address 192.168.1.1
                    prefix-length 24
                }
            }
        }
    }
    vpls "v1" {
        admin-state enable
        service-id 1
        customer "1"
        routed-vpls {
        }
        stp {
            admin-state disable
        }
        sap 1/1/1:2001 {
            oper-group "g1"
            eth-cfm {
                mep md-admin-name "1" ma-admin-name "1" mep-id 1 {
                }
            }
        }
        sap 1/1/2:2001 {
            admin-state enable
        }
        sap 1/1/3:2001 {
            admin-state enable
        }
    }
classic CLI
In the classic CLI, the VPLS instance includes the allow-ip-int-bind command and the name v1. The IES interface links to the VPLS using the vpls command.
A:node-2>config>service# info
----------------------------------------------
        oper-group "g1" create
        exit
        vpls 1 name "v1" customer 1 create
            allow-ip-int-bind
            exit
            stp
                shutdown
            exit
            sap 1/1/1:2001 create
                oper-group "g1"
                eth-cfm
                    mep 1 domain 1 association 1 direction down
                        ccm-enable
                        no shutdown
                    exit
                exit
            exit
            sap 1/1/2:2001 create
                no shutdown
            exit
            sap 1/1/3:2001 create
                no shutdown
            exit
            no shutdown
        exit
        ies 2001 name "2001" customer 1 create
            shutdown
            interface "i2001" create
                monitor-oper-group "g1"
                address 192.168.1.1/24
                vpls "v1"
                exit
            exit
        exit
----------------------------------------------
Subscriber interfaces
Subscriber interfaces are composed of two key constructs: subscriber interfaces and group interfaces. While the subscriber interface defines the subscriber subnets, the group interfaces are responsible for aggregating the SAPs.
Subscriber interfaces apply only to the 7450 ESS and 7750 SR.
subscriber interface
This is an interface that allows the sharing of a subnet among one or many group interfaces in the routed CO model.
group interface
This interface aggregates multiple SAPs on the same port.
redundant interfaces
This is a special spoke-terminated Layer 3 interface. It is used in a Layer 3 routed CO dual-homing configuration to shunt downstream traffic (network to subscriber) to the active node for a specific subscriber.
SAPs
Encapsulations
The following SAP encapsulations are supported on the 7750 SR and 7950 XRS VPRN service:
- Ethernet null
- Ethernet dot1q
- QinQ
- LAG
- Tunnel (IPsec or GRE)
Pseudowire SAPs
Pseudowire SAPs are supported on VPRN interfaces for the 7750 SR in the same way as on IES interfaces.
QoS policies
When applied to a VPRN SAP, service ingress QoS policies only create the unicast queues defined in the policy if PIM is not configured on the associated IP interface; if PIM is configured, the multipoint queues are applied as well.
With VPRN services, service egress QoS policies function as with other services where the class-based queues are created as defined in the policy.
Both Layer 2 and Layer 3 criteria can be used in the QoS policies for traffic classification in a VPRN.
Filter policies
Ingress and egress IPv4 and IPv6 filter policies can be applied to VPRN SAPs.
DSCP marking
Specific DSCP, forwarding class, and Dot1P command options can be specified for every protocol packet generated by the VPRN. This enables prioritization or de-prioritization of every protocol (as required). The markings effect a change in behavior on ingress when queuing. For example, if OSPF is not a critical protocol in a particular service, its traffic can be de-prioritized to the best-effort (be) DSCP; this de-prioritizes OSPF traffic to the CPU complex.
Internally generated control and management traffic should be marked with the DSCP value appropriate to the specific application. This can be configured per routing instance. For example, OSPF packets can carry a different DSCP marking for the base instance than for a VPRN service. ISIS and ARP traffic are not IP-generated traffic types, so their DSCP markings are not configurable. See DSCP/FC marking.
When an application is configured to use a specified DSCP value then the MPLS EXP, Dot1P bits are marked in accordance with the network or access egress policy as it applies to the logical interface the packet is egressing.
The DSCP value can be set per application. This setting is forwarded to the egress line card. The egress line card does not alter the coded DSCP value and marks the LSP-EXP and IEEE 802.1p (Dot1P) bits according to the appropriate network or access QoS policy.
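As a sketch, the per-application DSCP value for self-generated traffic can be set in the sgt-qos context; the application name and DSCP value below are examples only, and the command form may vary by release.
configure router sgt-qos application telnet dscp af41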
Protocol | IPv4 | IPv6 | DSCP marking | Dot1P marking | Default FC
---|---|---|---|---|---
ARP | — | — | — | Yes | NC
BGP | Yes | Yes | Yes | Yes | NC
BFD | Yes | — | Yes | Yes | NC
RIP | Yes | Yes | Yes | Yes | NC
PIM (SSM) | Yes | Yes | Yes | Yes | NC
OSPF | Yes | Yes | Yes | Yes | NC
SMTP | Yes | — | — | — | AF
IGMP/MLD | Yes | Yes | Yes | Yes | AF
Telnet | Yes | Yes | Yes | Yes | AF
TFTP | Yes | — | Yes | Yes | AF
FTP | Yes | — | — | — | AF
SSH (SCP) | Yes | Yes | Yes | Yes | AF
SNMP (get, set, and so on) | Yes | Yes | Yes | Yes | AF
SNMP trap/log | Yes | Yes | Yes | Yes | AF
syslog | Yes | Yes | Yes | Yes | AF
OAM ping | Yes | Yes | Yes | Yes | AF
ICMP ping | Yes | Yes | Yes | Yes | AF
Traceroute | Yes | Yes | Yes | Yes | AF
TACPLUS | Yes | Yes | Yes | Yes | AF
DNS | Yes | Yes | Yes | Yes | AF
SNTP/NTP | Yes | — | — | — | AF
RADIUS | Yes | — | — | — | AF
Cflowd | Yes | — | — | — | AF
DHCP (7450 ESS and 7750 SR only) | Yes | Yes | Yes | Yes | AF
Bootp | Yes | — | — | — | AF
IPv6 Neighbor Discovery | Yes | — | — | — | NC
Default DSCP mapping table
DSCP Name | DSCP Value (Decimal) | DSCP Value (Hexadecimal) | DSCP Value (Binary) | Label
---|---|---|---|---
Default | 0 | 0x00 | 0b000000 | be
nc1 | 48 | 0x30 | 0b110000 | h1
nc2 | 56 | 0x38 | 0b111000 | nc
ef | 46 | 0x2e | 0b101110 | ef
af11 | 10 | 0x0a | 0b001010 | assured
af12 | 12 | 0x0c | 0b001100 | assured
af13 | 14 | 0x0e | 0b001110 | assured
af21 | 18 | 0x12 | 0b010010 | l1
af22 | 20 | 0x14 | 0b010100 | l1
af23 | 22 | 0x16 | 0b010110 | l1
af31 | 26 | 0x1a | 0b011010 | l1
af32 | 28 | 0x1c | 0b011100 | l1
af33 | 30 | 0x1e | 0b011110 | l1
af41 | 34 | 0x22 | 0b100010 | h2
af42 | 36 | 0x24 | 0b100100 | h2
af43 | 38 | 0x26 | 0b100110 | h2
default* | 0 | | |
*The default forwarding class mapping is used for all DSCP names or values for which there is no explicit forwarding class mapping.
Configuration of TTL propagation for VPRN routes
This feature allows the separate configuration of TTL propagation for in transit and CPM generated IP packets, at the ingress LER within a VPRN service context. Use the following commands to configure TTL propagation.
configure router ttl-propagate vprn-local
configure router ttl-propagate vprn-transit
You can enable TTL propagation behavior separately as follows:
for locally generated packets by CPM (using the vprn-local command)
for user and control packets in transit at the node (using the vprn-transit command)
The following command options can be specified:
all – enables TTL propagation from the IP header into all labels in the stack, for VPN-IPv4 and VPN-IPv6 packets forwarded in the context of all VPRN services in the system.
vc-only – reverts to the default behavior by which the IP TTL is propagated into the VC label but not to the transport labels in the stack. You can explicitly set the default behavior by configuring the vc-only value.
none – disables the propagation of the IP TTL to all labels in the stack, including the VC label. This is needed for a transparent operation of UDP traceroute in VPRN inter-AS Option B such that the ingress and egress ASBR nodes are not traced.
In the classic CLI, the vprn-local command does not have a no version.
In the MD-CLI, use the delete command to remove the configuration.
Use the following commands to override the global configuration within each VPRN instance.
configure service vprn ttl-propagate local
configure service vprn ttl-propagate transit
The default behavior for a VPRN instance is to inherit the global configuration for the same command. You can explicitly set the default behavior by configuring the inherit value.
In the classic CLI, the local and transit commands do not have no versions.
In the MD-CLI, use the delete command to remove the configuration.
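For example, the global default could keep vc-only behavior for transit packets while one VPRN overrides it to propagate the TTL into all labels. This is a sketch; the service name is illustrative.
configure router ttl-propagate vprn-transit vc-only
configure service vprn "1" ttl-propagate transit all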
The commands do not apply when the VPRN packet is forwarded over GRE transport tunnel.
If a packet is received in a VPRN context and a lookup is done in the Global Routing Table (GRT), when leaking to GRT is enabled, for example, the behavior of the TTL propagation is governed by the LSP shortcut configuration as follows:
- when the matching route is an RSVP LSP shortcut
configure router mpls shortcut-transit-ttl-propagate
- when the matching route is an LDP LSP shortcut
configure router ldp shortcut-transit-ttl-propagate
When the matching route is an RFC 8277 label route or a 6PE route, it is governed by the BGP label route configuration.
When a packet is received on one VPRN instance and is redirected using Policy Based Routing (PBR) to be forwarded in another VPRN instance, the TTL propagation is governed by the configuration of the outgoing VPRN instance.
Packets that are forwarded in different contexts can use different TTL propagation over the same BGP tunnel, depending on the TTL configuration of each context. An example of this may be VPRN using a BGP tunnel and an IPv4 packet forwarded over a BGP label route of the same prefix as the tunnel.
CE to PE routing protocols
The 7750 SR and 7950 XRS VPRN supports the following PE to CE routing protocols:
BGP
Static
RIP
OSPF
PE to PE tunneling mechanisms
The 7750 SR and 7950 XRS support multiple mechanisms to provide transport tunnels for the forwarding of traffic between PE routers within the 2547bis network.
The 7750 SR and 7950 XRS VPRN implementation supports the use of:
RSVP-TE protocol to create tunnel LSPs between PE routers
LDP protocol to create tunnel LSPs between PE routers
GRE tunnels between PE routers
These transport tunnel mechanisms provide the flexibility of using dynamically created LSPs, where the service tunnels are automatically bound (the autobind feature), and the ability to provide specific VPN services with their own transport tunnels by explicitly binding SDPs if needed. When autobind is used, all services traverse the same LSPs and do not allow alternate tunneling mechanisms (like GRE) or the ability to craft sets of LSPs with bandwidth reservations for specific customers, as is available with explicit SDPs for the service.
Per VRF route limiting
The 7750 SR and 7950 XRS allow setting the maximum number of routes that can be accepted in the VRF for a VPRN service. Options are available to specify a percentage threshold at which to generate an event indicating that the VRF table is nearly full, and to either disable additional route learning when the table is full or only generate an event.
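A sketch of such a limit in the classic CLI; the service ID, route limit, and threshold are illustrative, and the exact command form may vary by release.
configure service vprn 1 maximum-routes 1000 log-only threshold 90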
Spoke SDPs
Distributed services use service distribution points (SDPs) to direct traffic to another router via service tunnels. SDPs are created on each participating router and then bound to a specific service. An SDP can be created as either GRE or MPLS. See the 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide for information about configuring SDPs.
This feature provides the ability to cross-connect traffic entering on a spoke SDP, used for Layer 2 services (VLLs or VPLS), on to an IES or VPRN service. From a logical point of view, the spoke SDP entering on a network port is cross-connected to the Layer 3 service as if it had entered through a service SAP. The main exception is that traffic entering the Layer 3 service through a spoke SDP is handled with network QoS policies, not access QoS policies.
SDP-ID and VC label service identifiers depicts traffic terminating on a specific IES or VPRN service that is identified by the SDP ID and VC label present in the service packet.
See ‟VCCV BFD support for VLL, Spoke SDP Termination on IES and VPRN, and VPLS Services” in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 2 Services and EVPN Guide for information about using VCCV BFD in spoke-SDP termination.
Use the following command to display the service label bindings:
show router ldp bindings services
T-LDP status signaling for spoke-SDPs terminating on IES/VPRN
T-LDP status signaling and PW active/standby signaling capabilities are supported on Ipipe and Epipe spoke SDPs.
Spoke SDP termination on an IES or VPRN provides the ability to cross-connect traffic entering on a spoke SDP, used for Layer 2 services (VLLs or VPLS), on to an IES or VPRN service. From a logical point of view, the spoke SDP entering on a network port is cross-connected to the Layer 3 service as if it had entered using a service SAP. The main exception is that traffic entering the Layer 3 service using a spoke SDP is handled with network QoS policies instead of access QoS policies.
When a SAP down or SDP binding down status message is received by the PE on which the Ipipe or Ethernet spoke SDP is terminated on an IES or VPRN interface, the interface is brought down and all associated routes are withdrawn, in the same way as when the spoke SDP goes down locally. The same actions are taken when the standby T-LDP status message is received by the IES/VPRN PE.
This feature can be used to provide redundant connectivity to a VPRN or IES from a PE providing a VLL service, as shown in Active/standby VRF using resilient Layer 2 circuits.
Spoke SDP redundancy into IES/VPRN
This feature can be used to provide redundant connectivity to a VPRN or IES from a PE providing a VLL service, as shown in Active/standby VRF using resilient Layer 2 circuits, using either Epipe or Ipipe spoke-SDPs. This feature is supported on the 7450 ESS and 7750 SR only.
In Active/standby VRF using resilient Layer 2 circuits, PE1 terminates two spoke SDPs that are bound to one SAP connected to CE1. PE1 chooses to forward traffic on one of the spoke SDPs (the active spoke-SDP), while blocking traffic on the other spoke SDP (the standby spoke SDP) in the transmit direction. PE2 and PE3 take any spoke SDPs for which PW forwarding standby has been signaled by PE1 to an operationally down state.
The 7450 ESS, 7750 SR, and 7950 XRS routers are expected to fulfill both functions (VLL and VPRN/IES PE), while the 7705 SAR must be able to fulfill the VLL PE function. Spoke SDP redundancy model illustrates the model for spoke SDP redundancy into a VPRN or IES.
Weighted ECMP for spoke-SDPs terminating on IES/VPRN and R-VPLS interfaces
ECMP and weighted ECMP into RSVP-TE and SR-TE LSPs is supported for Ipipe and Epipe spoke SDPs terminating on IP interfaces in an IES or VPRN, or for spoke SDP termination on a routed VPLS. It is also supported for SDPs using LDP over RSVP tunnels. The following example shows the configuration of weighted ECMP under the SDP used by the service.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
sdp 1 {
delivery-type mpls
weighted-ecmp true
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
sdp 1 mpls create
shutdown
weighted-ecmp
exit
When a service uses a provisioned SDP on which weighted ECMP is configured, a path is selected based on the configured hash. Paths are then load balanced across LSPs used by an SDP according to normalized LSP load balancing weights. If one or more LSPs in the ECMP set to a specific next hop have no load-balancing-weight value configured, regular ECMP spraying is used.
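The LSP load-balancing weights referenced above are configured per LSP. The following MD-CLI sketch is illustrative; the LSP name and weight value are examples only, and the command form may vary by release.
configure router mpls lsp "lsp-to-pe2" load-balancing-weight 10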
IP-VPNs
Using OSPF in IP-VPNs
Using OSPF as the CE-to-PE routing protocol allows a customer currently running OSPF as the IGP to migrate to an IP-VPN backbone without changing the IGP, introducing BGP as the CE-PE routing protocol, or relying on static routes for the distribution of routes into the service provider's IP-VPN. The following features are supported:
Advertisement/redistribution of BGP-VPN routes as summary (type 3) LSAs flooded to CE neighbors of the VPRN OSPF instance. This occurs if the OSPF route type (in the OSPF route type BGP extended community attribute carried with the VPN route) is not external (or NSSA) and the locally configured domain-id matches the domain-id carried in the OSPF domain ID BGP extended community attribute carried with the VPN route.
OSPF sham links; a sham link is a logical PE-to-PE unnumbered point-to-point interface that essentially rides over the PE-to-PE transport tunnel. A sham link can be associated with any area and can therefore appear as an intra-area link to CE routers attached to different PEs in the VPN.
IPCP subnet negotiation
This feature enables negotiation between the Broadband Network Gateway (BNG) and customer premises equipment (CPE) so that the CPE is allocated both an IP address and an associated subnet.
Some CPEs use the network uplink in PPPoE mode and perform the DHCP server function for all ports on the LAN side. Instead of wasting a subnet on the point-to-point uplink, CPEs use the allocated subnet for the LAN portion, as shown in CPEs network up-link mode.
From the BNG perspective, the PPPoE host is allocated a subnet (instead of a /32) by RADIUS, an external DHCP server, or the local user database. Locally, the host is associated with a managed route. This managed route is a subset of the subscriber-interface subnet (on a 7450 ESS or 7750 SR), and the subscriber host IP address is taken from the managed-route range. The negotiation between the BNG and the CPE allows the CPE to be allocated both an IP address and an associated subnet.
Cflowd for IP-VPNs
The cflowd feature allows service providers to collect IP flow data within the context of a VPRN. This data can be used to monitor the types and general proportion of traffic traversing a VPRN context. The data can also be shared with the VPN customer, who can see the types of traffic traversing the VPN and use the information for traffic engineering.
This feature should not be used for billing purposes. Existing queue counters are designed for this purpose and provide very accurate per-bit accounting records.
Inter-AS VPRNs
Inter-AS IP-VPN services have been driven by the popularity of IP services and service provider expansion beyond the borders of a single Autonomous System (AS) or the requirement for IP VPN services to cross the AS boundaries of multiple providers. Three options for supporting inter-AS IP-VPNs are described in RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs).
This feature applies to the 7450 ESS and 7750 SR only.
The first option, referred to as Option-A (Inter-AS Option-A: VRF-to-VRF model), is considered inherent in any implementation. This method uses a back-to-back connection between separate VPRN instances in each AS. As a result, each VPRN instance views the inter-AS connection as an external interface to a remote VPRN customer site. The back-to-back VRF connections between the ASBR nodes require individual sub-interfaces, one per VRF.
The second option, referred to as Option-B (Inter-AS Option-B), relies heavily on the AS Boundary Routers (ASBRs) as the interface between the autonomous systems. This approach enhances the scalability of the EBGP VRF-to-VRF solution by eliminating the need for per-VPRN configuration on the ASBRs. However, it requires that the ASBRs provide a control plane and forwarding plane connection between the autonomous systems. The ASBRs are connected to the PE nodes in their local autonomous system using Interior Border Gateway Protocol (IBGP), either directly or through route reflectors. This means the ASBRs receive all the VPRN information and forward these VPN-IPv4 updates to all of their EBGP peers (the ASBRs in the other autonomous system), setting themselves as the next hop. Each ASBR also changes the label associated with a route, so it must maintain a mapping between the labels received and the labels issued for those routes. The peer ASBRs in turn forward those updates to all local IBGP peers.
In this form of inter-AS VPRNs, the ASBRs perform all necessary mapping functions, and the PE routers do not need to perform any functions beyond those of a non-inter-AS VPRN.
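The Option-B ASBR behavior described above can be pictured with a small illustrative sketch (Python; the class, field names, and label values are hypothetical, not SR OS internals): for each VPN route received over IBGP, the ASBR allocates a new local label, records the received label, and re-advertises the route with itself as the next hop; in the data path, the local label is swapped back to the received one.

```python
# Hypothetical sketch of the Option-B ASBR label mapping function.

class OptionBAsbr:
    def __init__(self, my_address):
        self.my_address = my_address
        self.next_label = 100          # illustrative local label pool
        self.label_map = {}            # local label -> (received label, IBGP next hop)

    def readvertise(self, prefix, received_label, ibgp_next_hop):
        """Re-advertise a VPN-IPv4 route to EBGP peers with a new label
        and next-hop-self; remember the mapping for the data path."""
        local = self.next_label
        self.next_label += 1
        self.label_map[local] = (received_label, ibgp_next_hop)
        return {"prefix": prefix, "label": local, "next_hop": self.my_address}

    def swap(self, local_label):
        """Data-path action: swap the locally issued label for the one
        received from the IBGP peer and forward toward that peer."""
        return self.label_map[local_label]

asbr = OptionBAsbr("192.0.2.1")
adv = asbr.readvertise("10.1.0.0/16", received_label=2001, ibgp_next_hop="192.0.2.10")
```

The mapping table is exactly the "labels received to labels issued" association the ASBR must maintain.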
On the 7750 SR, this form of inter-AS VPRNs does not require instances of the VPRN to be created on the ASBR, as in Option-A. As a result, there is less management overhead. This is also the most common form of Inter-AS VPRNs used between different service providers as all routes advertised between autonomous systems can be controlled by route policies on the ASBRs by using the following command.
configure router bgp next-hop-resolution labeled-routes transport-tunnel
The third option, referred to as Option-C (Option-C example), allows for a higher scale of VPRNs across AS boundaries but also expands the trust model between ASNs. As a result this model is typically used within a single company that may have multiple ASNs for various reasons. This model differs from Option-B, in that in Option-B all direct knowledge of the remote AS is contained and limited to the ASBR. As a result, in Option-B the ASBR performs all necessary mapping functions and the PE routers do not need to perform any additional functions than in a non-Inter-AS VPRN.
With Option-C, knowledge from the remote AS is distributed throughout the local AS. This distribution allows for higher scalability but also requires all PEs and ASBRs involved in the Inter-AS VPRNs to participate in the exchange of inter-AS routing information.
In Option-C, the ASBRs distribute reachability information for the remote PEs' system IP addresses only. This is done between the ASBRs by exchanging MP-EBGP labeled routes, using RFC 8277, Using BGP to Bind MPLS Labels to Address Prefixes. Either an RSVP-TE or an LDP LSP can be selected to resolve the next hop for multihop EBGP peering by using the following command.
configure router bgp next-hop-resolution labeled-routes transport-tunnel
Distribution of VPRN routing information is handled either by direct MP-BGP peering between the PEs in the different ASNs or, more likely, by one or more route reflectors in each ASN.
VPRN label security at inter-AS boundary
This feature allows the user to enforce security at an inter-AS boundary and to configure a router, acting in a PE role or in an ASBR role, or both, to accept packets of VPRN prefixes only from direct EBGP neighbors to which it advertised a VPRN label.
Feature configuration
To use this feature, first identify the network IP interfaces on which it is to be enabled. Participating interfaces are identified as having the untrusted state. The router supports a maximum of 15 network interfaces that can participate in this feature.
Use the following command to configure the state of untrusted for a network IP interface.
configure router interface untrusted
Normally, the user applies the untrusted command to an inter-AS interface and PIP keeps track of the untrusted status of each interface. In the datapath, an inter-AS interface that is flagged by PIP causes the default forwarding to be set to the value of the default-forwarding option (forward or drop).
For backward compatibility, default-forwarding on the interface is set to the forward command option. This means that labeled packets are checked in the normal way against the table of programmed ILMs to decide if each packet should be dropped or forwarded in a GRT, a VRF, or a Layer 2 service context.
If the user sets the default-forwarding argument to the drop command option, all labeled packets received on that interface are dropped. For details, see Datapath forwarding behavior.
This feature sets the default behavior for an untrusted interface in the data path and for all ILMs. To allow the data path to deviate from this default forwarding behavior for VPRN ILMs, BGP must flag those ILMs to the data path.
Use the following command to enable the exceptional ILM forwarding behavior, on a per-VPN-family basis.
configure router bgp neighbor-trust vpn-ipv4
configure router bgp neighbor-trust vpn-ipv6
At a high level, BGP tracks each direct EBGP neighbor reachable over an untrusted interface to which it sent a VPRN prefix label. For each of those VPRN prefixes, BGP programs a bit map in the ILM that indicates, on a per-untrusted-interface basis, whether matching packets must be forwarded or dropped. For details, see CPM behavior.
CPM behavior
This feature affects PIP behavior for management of network IP interfaces and in BGP for the resolution of BGP VPN-IPv4 and VPN-IPv6 prefixes.
The following are characteristics of CPM behavior related to PIP and the VPRN label security at inter-AS boundary feature:
PIP manages the status of an untrusted interface based on the user configuration on the interface, as described in Feature configuration. It programs the interface record in the data path using a 4-bit untrusted interface identification number. A trusted interface has no untrusted record.
BGP determines the status of trusted or untrusted of an EBGP neighbor by checking the untrusted record provided by PIP for the index of the interface used by the EBGP session to the neighbor.
BGP only tracks the status of trusted or untrusted for directly connected EBGP neighbors. The neighbor address and the local address must be on the same local subnet.
BGP includes the neighbor status of trusted or untrusted in the tribe criteria. For example, if a group consists of two untrusted EBGP neighbors and one trusted EBGP neighbor and all three neighbors have the same local-AS, neighbor-AS, and export policy, then the result is two different tribes.
As a result, if the interface status changes from trusted to untrusted or untrusted to trusted, the EBGP neighbors on that interface bounce.
When the feature is enabled for a specified VPN family and BGP advertises a label for one or more resolved VPN prefixes to a group of trusted and untrusted EBGP neighbors, it creates a 16-bit map in the ILM record in which it sets the bit position corresponding to the identification number of each untrusted interface used by an EBGP session to a neighbor to which it sent the label.
A bit in the ILM record bit-map is referred to as the untrusted interface forwarding bit. The bit position corresponding to the identification number of any other untrusted interface is left clear.
For details on the data path of the ILM bit-map record, see Datapath forwarding behavior.
Because the same label value is advertised for prefixes in the same VRF (label per-VRF mode) and for prefixes with the same next hop (label per-next-hop mode), BGP programs the forwarding bit position in the ILM bit map for both VPN IPv4 and VPN IPv6 prefixes sharing the same label, as long as the feature is enabled for at least one of the two VPN families.
BGP tracks, on a per-untrusted interface basis, the number of RIB-Out entries to EBGP neighbors that reference a specific VPN label. When that reference transitions from zero to a positive value or from a positive value to zero, the label for the ILM of the VPN prefix is re-downloaded to the IOM with the forwarding bit position in the ILM bit map record updated accordingly (set or unset, respectively).
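The reference tracking described above can be modeled with a short illustrative sketch (Python; the class and names are hypothetical, not SR OS internals): a per-untrusted-interface reference count drives the forwarding bit in the 16-bit ILM bit map, and the ILM is re-downloaded only on zero-to-positive and positive-to-zero transitions.

```python
# Hypothetical sketch of per-untrusted-interface tracking for one VPN label.
# Interface IDs are the 4-bit untrusted identification numbers (0-15).

class VpnLabelTracker:
    def __init__(self):
        self.refcount = {}   # untrusted interface id -> RIB-Out references
        self.bitmap = 0      # 16-bit ILM forwarding bit map

    def add_rib_out(self, intf_id):
        self.refcount[intf_id] = self.refcount.get(intf_id, 0) + 1
        if self.refcount[intf_id] == 1:
            # 0 -> positive: set the forwarding bit (ILM re-downloaded)
            self.bitmap |= 1 << intf_id

    def del_rib_out(self, intf_id):
        self.refcount[intf_id] -= 1
        if self.refcount[intf_id] == 0:
            # positive -> 0: clear the forwarding bit (ILM re-downloaded)
            self.bitmap &= ~(1 << intf_id)

t = VpnLabelTracker()
t.add_rib_out(3)   # label advertised to a neighbor on untrusted interface 3
t.add_rib_out(3)   # a second neighbor on the same interface
t.del_rib_out(3)   # one withdrawal: the bit stays set while references remain
```

Intermediate reference-count changes that do not cross zero leave the bit map, and therefore the programmed ILM, untouched.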
This feature supports label per-VRF and label per-next-hop modes for the PE role. The feature supports label per-next-hop mode for the ASBR role.
The feature is not supported with label per-prefix mode in a PE role and is not supported in a Carrier Supporting Carrier (CSC) PE role.
Datapath forwarding behavior
ILM forwarding on a trusted interface behaves as in earlier releases and is not changed. The ILM forwarding bit map is ignored and packets are forwarded normally.
ILM forwarding on an untrusted interface follows these rules:
Only the top-most label in the label stack in a received packet is checked against the next set of rules. The top label can correspond to any one of the following applications:
a transport label with a pop or swap operation of static, RSVP-TE, SR-TE, LDP, SR-ISIS, SR-OSPF, or BGP-LU
a BGP VPRN inter-AS option B label with a swap operation when the router acts in the ASBR role for VPN routes
a service delimiting label for a local VRF when the router acts as a PE in a VPRN service
The datapath checks the bit position in the bit map in the ILM record, when present, that corresponds to the untrusted interface identification number in the interface record and then makes a forwarding decision to drop or forward.
A decision to forward means that a labeled packet proceeds to the regular ILM processing and its label stack is checked against the table of programmed ILMs to decide if the packet should be:
dropped
forwarded to CPM
forwarded as an MPLS packet
forwarded as an IP packet in a GRT or a VRF context
forwarded as a packet in a Layer 2 service context
The following are the processing rules of the ILM:
interface default-forwarding = forward and ILM bit-map not present ⇒ forward packet
interface default-forwarding = forward and interface forwarding bit position in the ILM bit-map is 1 ⇒ forward packet
interface default-forwarding = forward and interface forwarding bit position in the ILM bit-map is 0 ⇒ drop packet
interface default-forwarding = drop and ILM bit-map not present ⇒ drop packet
interface default-forwarding = drop and interface forwarding bit position in the ILM bit-map is 0 ⇒ drop packet
interface default-forwarding = drop and interface forwarding bit position in the ILM bit-map is 1 ⇒ forward packet
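Taken together, the six rules reduce to a simple decision: when the ILM carries no bit map, the interface's default-forwarding option decides; when a bit map is present, the interface's forwarding bit decides. A minimal illustrative sketch (Python, not SR OS code):

```python
# Illustrative decision logic for a labeled packet arriving on an
# untrusted interface. default_forwarding is "forward" or "drop";
# bitmap is None when the ILM carries no bit map.

def ilm_decision(default_forwarding, bitmap, intf_id):
    """Return True to forward (continue normal ILM processing),
    False to drop."""
    if bitmap is None:
        # No bit map: the interface default-forwarding option decides.
        return default_forwarding == "forward"
    # Bit map present: the interface forwarding bit decides.
    return (bitmap >> intf_id) & 1 == 1
```

A forwarding decision here means only that the packet proceeds to the regular ILM processing described above; the normal label-stack checks still apply.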
When the EBGP neighbor is not directly connected, BGP does not track that neighbor (see CPM behavior). In this case, the VPRN packet is received with a transport label or without a transport label if implicit-null is enabled in LDP or RSVP-TE for the transport label. Either way, the forwarding decision for the packet is solely dictated by the configuration of the default-forwarding command option on the incoming interface.
If the direct EBGP neighbor sends a VPRN packet using the MPLS-over-GRE encapsulation, the datapath does not check the interface forwarding bit position in the ILM bit map. In this case, the forwarding decision of the packet is solely dictated by the configuration of the default-forwarding command option on the incoming interface.
SR OS EBGP neighbors never use the MPLS-over-GRE encapsulation over an inter-AS link, but third-party implementations may do this.
CSC
Carrier Supporting Carrier (CSC) is a solution for the 7750 SR and 7950 XRS that allows one service provider (the Customer Carrier) to use the IP VPN service of another service provider (the Super Carrier) for some or all of its backbone transport. RFC 4364 defines a Carrier Supporting Carrier solution for BGP/MPLS IP VPNs that uses MPLS on the interconnections between the two service providers to provide a scalable and secure solution.
CSC support in SR OS allows a 7750 SR or 7950 XRS to be deployed as any of the following devices shown in Carrier Supporting Carrier reference diagram:
PE1 (service provider PE)
CSC-CE1, CSC-CE2 and CSC-CE3 (CE device from the point of view of the backbone service provider)
CSC-PE1, CSC-PE2 and CSC-PE3 (PE device of the backbone service provider)
ASBR1 and ASBR2 (ASBR of the backbone service provider)
Terminology
- CE
- customer premises equipment dedicated to one particular business/enterprise
- PE
- service provider router that connects to a CE to provide a business VPN service to the associated business/enterprise
- CSC-CE
an ASBR/peering router that is connected to the CSC-PE of another service provider for purposes of using the associated CSC IP VPN service for backbone transport
- CSC-PE
- a PE router belonging to the backbone service provider that supports one or more CSC IP VPN services
CSC connectivity models
A PE router deployed by a customer service provider to provide Internet access, IP VPNs, or L2 VPNs may connect directly to a CSC-PE device, or it may backhaul traffic within its local ‟site” to the CSC-CE that provides this direct connection. Here, ‟site” means a set of routers owned and managed by the customer service provider that can exchange traffic through means other than the CSC service. The function of the CSC service is to provide IP/MPLS reachability between isolated sites.
The CSC-CE is a ‟CE” from the perspective of the backbone service provider. There may be multiple CSC-CEs at a specific customer service provider site and each one may connect to multiple CSC-PE devices for resiliency/multihoming purposes.
The CSC-PE is owned and managed by the backbone service provider and provides CSC IP VPN service to connected CSC-CE devices. In many cases, the CSC-PE also supports other services, including regular business IP VPN services. A single CSC-PE may support multiple CSC IP VPN services. Each customer service provider is allocated its own VRF within the CSC-PE; VRFs maintain routing and forwarding separation and allow the use of overlapping IP addresses by different customer service providers.
A backbone service provider may not have the geographic span to connect, with reasonable cost, to every site of a customer service provider. In this case, multiple backbone service providers may coordinate to provide an inter-AS CSC service. Different inter-AS connectivity options are possible, depending on the trust relationships between the different backbone service providers.
The CSC Connectivity Models apply to the 7750 SR and 7950 XRS only.
CSC-PE configuration and operation
This section applies to CSC-PE1, CSC-PE2 and CSC-PE3 in Carrier Supporting Carrier reference diagram.
This section applies only to the 7750 SR.
CSC interface
From the point of view of the CSC-PE, the IP/MPLS interface between the CSC-PE and a CSC-CE has these characteristics:
The CSC interface is associated with one (and only one) VPRN service. Routes with the CSC interface as next-hop are installed only in the routing table of the associated VPRN.
The CSC interface supports EBGP or IBGP for exchanging labeled IPv4 routes (RFC 8277). The BGP session may be established between the interface addresses of the two routers or else between a loopback address of the CSC-PE VRF and a loopback address of the CSC-CE. In the latter case, the BGP next-hop is resolved by either a static or OSPFv2 route.
An MPLS packet received on a CSC interface is dropped if the top-most label was not advertised over a BGP (RFC 8277) session associated with one of the VPRN’s CSC interfaces.
The CSC interface supports ingress QoS classification based on 802.1p or MPLS EXP. It is possible to configure a default FC and default profile for the CSC interface.
The CSC interface supports QoS (re)marking for egress traffic. Policies to remark 802.1p or MPLS EXP based on forwarding-class and profile are configurable per CSC interface.
By associating a port-based egress queue group instance with a CSC interface, the egress traffic can be scheduled/shaped with per-interface, per-forwarding-class granularity.
By associating a forwarding-plane based ingress queue group instance with a CSC interface, the ingress traffic can be policed to per-interface, per-forwarding-class granularity.
Ingress and egress statistics and accounting are available per CSC interface. The exact set of collected statistics depends on whether a queue-group is associated with the CSC interface, the traffic direction (ingress vs. egress), and the stats mode of the queue-group policers.
An Ethernet port or LAG with a CSC interface can be configured in hybrid mode or network mode. The port or LAG supports null, dot1q, or QinQ encapsulation. Use the following commands to create a CSC interface on a port or LAG in null mode.
configure service vprn network-interface port port-id
configure service vprn network-interface lag lag-id
Use the following commands to create a CSC interface on a port or LAG in dot1q mode.
configure service vprn network-interface port port-id:qtag1
configure service vprn network-interface lag lag-id:qtag1
Use the following commands to create a CSC interface on a port or LAG in QinQ mode.
configure service vprn network-interface port port-id:qtag1.qtag2
configure service vprn network-interface port port-id:qtag1.*
configure service vprn network-interface lag lag-id:qtag1.qtag2
configure service vprn network-interface lag lag-id:qtag1.*
A CSC interface supports the same capabilities (and supports the same commands) as a base router network interface, except it does not support:
IPv6
LDP
RSVP
Proxy ARP (local/remote)
Network domain configuration
DHCP
Ethernet CFM
Unnumbered interfaces
QoS
Egress
Egress traffic on a CSC interface can be shaped and scheduled by associating a port-based egress queue-group instance with the CSC interface. The steps for doing this are summarized below:
- Create an egress queue-group-template.
- Define one or more queues in the egress queue-group. For each one specify scheduling command options such as CIR, PIR, CBS and MBS and, if using H-QoS, the parent scheduler.
- Apply an instance of the egress queue-group template to the network egress context of the Ethernet port with the CSC interface. When doing so, and if applicable, associate an accounting policy or a scheduler policy, or both, with this instance.
- Create a network QoS policy.
- In the egress part of the network QoS policy define EXP remarking rules, if necessary.
- In the egress part of the network QoS policy, map a forwarding class to a queue ID using the port-redirect-group command.
MD-CLI
[ex:/configure qos network "2" egress fc l2]
A:admin@node-2# info
    port-redirect-group {
        queue 5
    }
classic CLI
A:node-2>config>qos>network>egress$ info
----------------------------------------------
fc l2
    port-redirect-group queue 5
exit
----------------------------------------------
- Apply the network QoS policy created in step 4 to the CSC interface and specify the name of the egress queue-group created in step 1 and the specific instance defined in step 3.
Ingress
- Create an ingress queue-group-template.
- Define one or more policers in the ingress queue-group. For each one specify command options such as CIR, PIR, CBS and MBS and, if using H-Pol, the parent arbiter.
- Apply an instance of the ingress queue-group template to the network ingress context of the forwarding plane with the CSC interface. When doing so, and if applicable, associate an accounting policy or a policer-control-policy, or both, with this instance.
- Create a network QoS policy.
- In the ingress part of the network QoS policy define EXP classification rules, if necessary.
- In the ingress part of the network QoS policy, map a forwarding class to a policer ID using the fp-redirect-group policer command.
MD-CLI
[ex:/configure qos network "3" ingress fc l2]
A:admin@node-2# info
    fp-redirect-group {
        policer 5
    }
classic CLI
A:node-2>config>qos>network>ingress$ info
----------------------------------------------
fc l2
    fp-redirect-group policer 5
exit
----------------------------------------------
- Apply the network QoS policy created in step 4 to the CSC interface and specify the name of the ingress queue-group created in step 1 and the specific instance defined in step 3.
MPLS
BGP-8277 is used as the label distribution protocol on the CSC interface. When BGP in a CSC VPRN needs to distribute a label corresponding to a received VPN-IPv4 route, it takes the label from the global label space. The allocated label is not re-used for any other FEC, regardless of the routing instance (base router or VPRN). If a label L is advertised to the BGP peers of CSC VPRN A, then a received packet with label L as the top-most label is only valid if received on an interface of VPRN A; otherwise, the packet is discarded.
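The per-VPRN label validation described above can be sketched as follows (illustrative Python; the table, interface names, and label value are hypothetical, not SR OS internals): a label advertised to the BGP peers of CSC VPRN A is accepted only on interfaces belonging to VPRN A.

```python
# Hypothetical sketch of CSC label validation at the receiving node.

class CscLabelTable:
    def __init__(self):
        self.label_vprn = {}   # label -> VPRN whose BGP sessions advertised it
        self.intf_vprn = {}    # CSC interface -> VPRN it is associated with

    def advertise(self, label, vprn):
        # Labels are drawn from the global label space, so one table
        # covers all routing instances.
        self.label_vprn[label] = vprn

    def accept(self, label, interface):
        """A packet with this top-most label is valid only if it arrives
        on an interface of the VPRN that advertised the label."""
        vprn = self.label_vprn.get(label)
        return vprn is not None and self.intf_vprn.get(interface) == vprn

t = CscLabelTable()
t.intf_vprn = {"to-csc-ce1": "vprn-A", "to-csc-ce9": "vprn-B"}
t.advertise(131070, "vprn-A")
```

A packet carrying label 131070 on the VPRN-B interface, or carrying an unadvertised label anywhere, is discarded.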
To use BGP-8277 as the label distribution protocol on the CSC interface, add the family label-ipv4 command to the family configuration at the instance, group, or neighbor level. This causes the capability to send and receive labeled-IPv4 routes {AFI=1, SAFI=4} to be negotiated with the CSC-CE peers.
CSC VPRN service configuration
To configure a VPRN to support CSC service, the carrier-carrier-vpn command must be enabled. The command fails if the VPRN service has any existing SAP or spoke SDP interfaces. A CSC interface can be added to a VPRN (using the network-interface command) only if the carrier-carrier-vpn command is enabled.
A VPRN service with the carrier-carrier-vpn command may be provisioned to use auto-bind-tunnel, configured spoke SDPs, or some combination. All SDP types are supported except for:
- GRE SDPs
- LDP over RSVP-TE tunnel SDPs
Other aspects of VPRN configuration are the same in a CSC model as in a non-CSC model.
Node management using VPRN
There are two basic approaches that can be used to manage a node using a VPRN. In both cases, management traffic is received and sent in the VPRN router instance:
- Management traffic can target the IP address of a VPRN interface
- Management traffic can target the system address in the base router instance (using GRT leaking)
In the first approach, node management can be enabled using the local interface of any VPRN service. A management VPRN is separated from other traffic using an MPLS transport tunnel. This provides IP domain separation and security for the management domain. The management domain supports IPv4 and IPv6 address families, and the AAA server is connected to the same VPRN for authentication, authorization, and accounting. The SR OS allows management using a VPRN as long as the management packet is destined for a local interface of the VPRN; in addition, it allows configuration of the AAA servers within a VPRN; see VRF network example.
In the second approach, node management is achieved using GRT leaking. In this case the management traffic uses an IP address in the Base routing context. See Management via VPRN using GRT leaking for details on this method.
The remainder of this section describes node management using the local VPRN interfaces (non GRT leaking).
VPRN management
VPRN management can be enabled by configuring the appropriate management protocol within the VPRN from the following context.
configure service vprn management
The following protocols can be enabled in this context:
FTP
gRPC
NETCONF
SSH
Telnet
Telnetv6
By default, all protocols are disabled. When one of these protocols is enabled, that VRF becomes a management VRF.
All other gRPC configurations remain global under the following context.
configure system grpc
All other NETCONF configurations remain global under the following context.
configure system management-interface netconf
Configure the TLS profiles needed for gRPC under the following context.
configure system security tls
AAA management
Use the following commands to configure the authentication order:
- MD-CLI
configure system security user-params authentication-order order
- classic CLI
configure system security password authentication-order
Use the commands in the following context to configure the system local user profile configuration for local user authentication and authorization, including VPRNs:
- MD-CLI
configure system security aaa local-profiles profile
- classic CLI
configure system security profile
Use the commands in the following contexts to configure AAA servers:
- MD-CLI
- System AAA servers
configure system security aaa
- AAA remote servers under the VPRN
configure service vprn aaa remote-servers
- classic CLI
- System AAA servers
configure system security
- AAA remote servers under the VPRN
configure service vprn
When AAA servers are configured using the preceding commands, they are used as follows:
- If servers are configured under the VPRN AAA, only the VPRN AAA servers are used. For example, the authentication-order command lists the order as local, TACACS+, and RADIUS; the VPRN has only a RADIUS server configured; and under the system AAA servers both TACACS+ and RADIUS are configured. In this case, if a management session connects to the VPRN and the destination IP matches a local interface in the VPRN, the SR OS tries local AAA first, and then RADIUS as configured in the VPRN. The SR OS does not try the system AAA servers because an AAA server is configured in the VPRN.
- If servers are configured under VPRN AAA and the VPRN AAA command options are configured for in-band, out-of-band, or VPRN, the servers can be used for the VPRN and the system.
- If no AAA servers are configured under VPRN AAA, the system AAA servers are used.
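The server-selection rules above can be summarized in a short illustrative sketch (Python; the function, method names, and addresses are hypothetical). It walks the configured authentication order and, when any server is configured under the VPRN AAA, consults only the VPRN servers for remote methods:

```python
# Hypothetical sketch of AAA server selection for a management session
# terminating on a VPRN local interface.

def servers_to_try(auth_order, vprn_servers, system_servers):
    """auth_order: e.g. ["local", "tacplus", "radius"].
    vprn_servers / system_servers: method -> list of server addresses."""
    plan = []
    for method in auth_order:
        if method == "local":
            plan.append(("local", None))
        elif vprn_servers:
            # Any server configured under the VPRN AAA: only VPRN servers
            # are used; the system AAA servers are not consulted.
            if method in vprn_servers:
                plan.append((method, vprn_servers[method]))
        elif method in system_servers:
            plan.append((method, system_servers[method]))
    return plan

# The example from the text: order is local, TACACS+, RADIUS; the VPRN
# has only a RADIUS server; the system has both TACACS+ and RADIUS.
plan = servers_to_try(
    ["local", "tacplus", "radius"],
    {"radius": ["10.0.0.1"]},
    {"tacplus": ["10.9.9.1"], "radius": ["10.9.9.2"]},
)
# plan: local first, then only the VPRN RADIUS server
```

With no servers under the VPRN AAA, the same walk falls through to the system AAA servers, matching the third rule.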
SNMP management
The SR OS SNMP agent can be reached via a VPRN interface address when the following command is enabled:
- MD-CLI
configure service vprn snmp access true
- classic CLI
configure service vprn snmp access
Using an SNMP community defined inside the VPRN context or a user associated with an SNMPv3 USM access group defined in the system context allows access to a subset of the full SNMP data model.
Use the following command to define an SNMP community inside the VPRN context.
configure service vprn snmp community
Use the following command to define a user associated with an SNMPv3 USM access group in the system context.
configure system snmp access
Use the following command to view this subset.
show system security view
Use an SNMP community defined in the system context to allow access to the full SNMP data model (unless otherwise restricted using SNMP views). Use the following command to create the SNMP community strings for SNMPv1 and SNMPv2 access.
configure system security snmp community
Alternatively, GRT leaking and a Base routing IP address can be used (along with an SNMP community defined at the system context) to allow access to the entire SNMP data model (see the allow-local-management command).
A network manager using SNMP cannot discover or fully manage an SR OS router using an SNMP community defined inside the VPRN context. Full SNMP access requires one of the approaches described above.
SNMP communities configured under a VPRN are associated with the SNMP context "vprn". For example, walking the ifTable (IF-MIB) using the community configured for VPRN 5 returns counters and status for interfaces in VPRN 5 only.
To access VPRN ifTable entries, use the community string that is defined inside that VPRN context.
configure service vprn snmp community
Events and notifications
Syslog, SNMP traps, and NETCONF notifications are generated via the Event Logging System.
Use the commands in the following context to define the VPRN syslog destinations.
configure service vprn log syslog
Use the commands in the following context to define the SNMP trap destination.
configure service vprn log snmp-trap-group
Use the following command to direct events for the whole system to a destination within the management VPRN.
configure log services-all-events
See the 7450 ESS, 7750 SR, 7950 XRS, and VSR System Management Guide for more information about this command.
DNS resolution
DNS default domain name and DNS servers for domain name resolution can be defined within a VPRN in the following context.
configure service vprn dns
Traffic leaking to GRT
Traffic leaking to Global Route Table (GRT) for the 7750 SR and 7950 XRS allows service providers to offer VPRN and Internet services to their customers over a single VRF interface.
Packets entering a local VRF interface can have route processing results derived from the VPRN forwarding table or the GRT. The leaking and preferred lookup results are configured on a per-VPRN basis. Configuration can be general (for example, any lookup miss in the VPRN forwarding table can be resolved in the GRT) or specific (for example, specific routes are looked up only in the GRT and ignored in the VPRN). To simplify operations, the CLI configuration is contained within the context of the VPRN service.
Use the commands in the following contexts to configure the traffic leaking to GRT feature:
- MD-CLI
In the MD-CLI, configuring grt-lookup to true enables the basic functionality.
configure service vprn grt-leaking
- classic CLI
In the classic CLI, the enable-grt command establishes the basic functionality.
configure service vprn grt-lookup
This is an administrative context and provides the container under which the user can enter all specific commands, except policy definition. Policy definitions remain unchanged but are referenced from this context.
When it is configured, any lookup miss in the VRF table is resolved in the GRT, if available. By itself, this only provides part of the solution. Packet forwarding within the GRT must route packets back to the correct node and to the specific VPRN in which the destination exists. Destination prefixes must be leaked from the VPRN to the GRT through the use of policy. Use the commands in the following context to create the policies:
- MD-CLI
configure policy-options
- classic CLI
configure router policy-options
By default, the number of prefixes leaked from the VPRN to the GRT is limited to five. Use the following command to override the default or remove the limit:
- MD-CLI
configure service vprn grt-leaking export-limit
- classic CLI
configure service vprn grt-lookup export-limit
When a VPRN forwarding table consists of a default route or an aggregate route, the customer may require the service provider to poke holes in those, or provide more specific route resolution in the GRT. In this case, the service provider may configure a static-route-entry and specify the GRT as the nexthop type.
The lookup result prefers any successful lookup in the GRT that is equal to or more specific than the static route, bypassing any successful lookup in the local VPRN.
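The lookup-miss fallback described above can be illustrated with a short Python sketch. This is purely illustrative, not SR OS code; the table contents and the simplified longest-prefix-match helper are hypothetical:

```python
import ipaddress

def lpm(fib, dst):
    """Longest-prefix match: return the most specific route covering dst, or None."""
    best = None
    for prefix, nh in fib.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > ipaddress.ip_network(best[0]).prefixlen:
                best = (prefix, nh)
    return best

def resolve(dst, vrf_fib, grt_fib, grt_lookup=True):
    """VPRN lookup first; on a miss, fall back to the GRT if grt-lookup is enabled."""
    hit = lpm(vrf_fib, dst)
    if hit:
        return ("vrf", hit)
    if grt_lookup:
        hit = lpm(grt_fib, dst)
        if hit:
            return ("grt", hit)
    return ("miss", None)

# Hypothetical tables: the VPRN carries only the corporate prefix,
# while Internet destinations resolve in the GRT.
vrf = {"10.1.0.0/16": "CE-1"}
grt = {"0.0.0.0/0": "internet-gw"}
print(resolve("10.1.2.3", vrf, grt))   # resolved in the VPRN
print(resolve("192.0.2.1", vrf, grt))  # miss in the VRF, resolved in the GRT
```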
This feature and Unicast Reverse Path Forwarding (uRPF) are mutually exclusive. When a VPRN service is configured with either of these functions, the other cannot be enabled. Also, prefixes leaked from any VPRN should never conflict with prefixes leaked from any other VPRN or with existing prefixes in the GRT. Prefixes should be globally unique within the service provider network and, if these are propagated outside a single provider network, they must be from the public IP space and globally unique. Network Address Translation (NAT) is not supported as part of this feature. The following types of routes are not leaked from a VPRN into the Global Routing Table (GRT):
- Aggregate routes
- BGP VPN extranet routes
Management via VPRN using GRT leaking
In addition to node management using the IP addresses of VPRN interfaces (see Node management using VPRN), management via a VPRN can also be achieved using IP addresses in the Base routing instance and GRT leaking.
When a management packet arrives on a VPRN, a lookup is performed for the destination IP address of the packet. If the destination IP address is resolved using the VPRN and the corresponding protocol is enabled under VPRN management, the packet is extracted to the CPM.
If the destination IP address is not a VRF IP and GRT leaking is enabled, a second lookup is done in the GRT FIB. If the IP address belongs to a local interface in GRT and allow-local-management is enabled under the following context, the packet is extracted using GRT leaking to the CPM.
- MD-CLI
configure service vprn grt-leaking
- classic CLI
configure service vprn grt-lookup enable-grt
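The extraction decision described above can be sketched as a small decision function in Python (illustrative only; the flag names are hypothetical shorthand for the lookup results and configuration states in the text):

```python
def extract_to_cpm(dst_in_vprn, proto_enabled_in_vprn,
                   dst_is_local_grt_if, grt_leaking, allow_local_mgmt):
    """Decide whether a management packet arriving on a VPRN reaches the CPM.

    First lookup: destination resolves in the VPRN and the protocol is
    enabled under VPRN management. Second lookup: GRT leaking is enabled,
    the destination is a local GRT interface, and allow-local-management
    is configured.
    """
    if dst_in_vprn:
        return proto_enabled_in_vprn
    if grt_leaking and dst_is_local_grt_if:
        return allow_local_mgmt
    return False

# Destination is a local GRT interface; GRT leaking and
# allow-local-management are both enabled -> extracted.
print(extract_to_cpm(False, False, True, True, True))
```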
Traffic leaking from VPRN to GRT for IPv6
This feature allows IPv6 destination lookups in two distinct routing tables and applies only to the 7750 SR and 7950 XRS. IPv6 packets within a Virtual Private Routed Network (VPRN) service can have their IPv6 destination looked up against the Global Route Table (GRT) as well as within the local VPRN.
Currently, VPRN-to-VPRN routing exchange is accomplished through the use of import and export policies based on Route Targets (RTs); that is, the creation of extranets. This feature allows the use of a single VPRN interface for both corporate VPRN routing and other services (for example, Internet) that are reachable outside the local routing context. The feature takes advantage of the capability to perform lookups in two separate routing tables in parallel.
This feature enables IPv6 capability in addition to the existing IPv4 VPRN-to-GRT Route Leaking feature.
RIP metric propagation in VPRNs
When RIP is used as the PE-CE protocol for VPRNs (IP-VPNs), the RIP metric is only used by the local node running RIP with the Customer Equipment (CE). The metric is not encoded into the MP-BGP path attributes exchanged between PE routers.
The RIP metric can also be propagated between PE routers, so that if a customer network is dual-homed to separate PEs, the RIP metric learned from the CE router can be used to choose the best route to the destination subnet. By using the learned RIP metric to set the BGP MED attribute, remote PEs can choose the lowest MED and, in turn, the PE with the lowest advertised RIP metric as the preferred egress point for the VPRN. RIP metric propagation in VPRNs shows RIP metric propagation in VPRNs.
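The egress PE selection this enables can be illustrated with a minimal Python sketch (hypothetical advertisement data, not SR OS code): each PE copies its learned RIP metric into the MED, and a remote PE prefers the lowest MED.

```python
def best_egress_pe(advertisements):
    """Pick the egress PE whose advertised MED (copied from the learned
    RIP metric) is lowest; ties broken by PE name for determinism."""
    return min(advertisements, key=lambda a: (a["med"], a["pe"]))

# Hypothetical dual-homed site: PE-1 learned the subnet at RIP metric 2,
# PE-2 at metric 5; both set the MED from the RIP metric.
adverts = [
    {"pe": "PE-1", "med": 2},
    {"pe": "PE-2", "med": 5},
]
print(best_egress_pe(adverts)["pe"])  # PE-1 is the preferred egress
```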
NTP within a VPRN service
Communication to external NTP clocks through VPRNs is supported in two ways: communication with external servers and peers, and communication with external clients.
Communication with external servers and peers is controlled using the same commands as used for access via base routing (see Network Time Protocol (NTP) in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Basic System Configuration Guide). Communication with external clients is controlled via the VPRN routing configuration. External clients can be supported using unicast or broadcast service. In addition, authentication keys for external clients are configurable on a per-VPRN basis.
Only a single NTP instance exists in the node; it can be time-sourced from as many as five NTP servers attached to the base or management network.
The NTP show command displays NTP servers and all known clients. Because NTP is UDP-based only, no state is maintained. As a result, the show system ntp command output only displays the time at which the last message from each client was received.
PTP within a VPRN service
PTP within a VPRN service provides access to the PTP clock within the 7750 SR through one or more VPRN services. Only one VPRN or the base routing instance may have configured peers, but all may have discovered peers. If needed, a limit on the maximum number of dynamic peers allowed may be configured on a per-routing-instance basis.
For more information about PTP see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Basic System Configuration Guide.
VPN route label allocation
The method used for allocating a label value to an originated VPN-IP route (exported from a VPRN) depends on the configuration of the VPRN service and its VRF export policies. SR OS supports three label modes:
label per VRF
label per next hop (LPN)
label per prefix (LPP)
Label per VRF is the label allocation default. It is used when the label mode is configured as VRF (or not configured) and the VRF export policies do not apply an advertise-label per-prefix action. All routes exported from the VPRN with the per-VRF label have the same label value. When the PE receives a terminating MPLS packet with a per-VRF label, the label value selects the VRF context in which to perform a forwarding table lookup and this lookup determines the outgoing interface (or set of interfaces if ECMP applies).
Label per next hop is used when the exported route is not a local or aggregate route, the label mode is configured as next-hop, and the VRF export policies do not apply an advertise-label per-prefix override. It is also used when an inactive (backup path) BGP route is exported by effect of the export-inactive-bgp command if there is no advertise-label per-prefix override. All LPN-exported routes with the same primary next hop have the same per-next-hop label value. When the PE receives a terminating MPLS packet with a per-next-hop label, the label lookup selects the outgoing interface for forwarding, without any FIB lookup that may cause problems with overlapping prefixes. LPN does not support ECMP, BGP fast reroute, QPPB, or policy accounting features that may otherwise apply.
Label per-prefix is used when a qualifying IP route is exported by matching a VRF export policy action with advertise-label per-prefix. Any IPv4 or IPv6 route that is not a local route, aggregate route, BGP-VPN route, or GRT lookup static route qualifies. With LPP, every prefix is associated with its own unique label value that does not change while the route is present in the route table. When the PE receives a terminating MPLS packet with a per-prefix label value, the packet is forwarded as if the FIB lookup found only the matching prefix route and not any of the more specific prefix routes that would normally be selected. LPP supports ECMP, QPPB, and policy accounting as part of the egress forwarding decision. It does not support BGP fast reroute or BGP sticky ECMP.
The following points summarize the logic that determines the label allocation method for an exported route:
If the IP route is LOCAL, AGGREGATE, or BGP-VPN, always use the per-VRF label.
If the IP route is accepted by a VRF export policy with the advertise-label per-prefix action, use LPP.
If the IP (BGP) route is exported by the export-inactive-bgp command (VPRN best external), use LPN.
If the IP route is exported by a VPRN configured for label-mode next-hop, use LPN.
Else, use the per-VRF label.
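The decision logic summarized above can be expressed as a short Python sketch. This is illustrative only (the parameter names are hypothetical shorthand for the route properties and VPRN configuration described in the text):

```python
def label_method(route_type, matched_per_prefix_policy,
                 exported_by_export_inactive_bgp, vprn_label_mode):
    """Sketch of the label allocation decision for an exported route.

    route_type: "local", "aggregate", "bgp-vpn", or any other type
    (for example "bgp" or "static"). The checks are applied in the
    same order as the summary in the text.
    """
    if route_type in ("local", "aggregate", "bgp-vpn"):
        return "per-vrf"
    if matched_per_prefix_policy:
        return "per-prefix"        # VRF export policy: advertise-label per-prefix
    if exported_by_export_inactive_bgp:
        return "per-next-hop"      # LPN for VPRN best external
    if vprn_label_mode == "next-hop":
        return "per-next-hop"
    return "per-vrf"

print(label_method("bgp", False, False, "next-hop"))   # per-next-hop
print(label_method("local", True, False, "next-hop"))  # per-vrf
```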
Configuring the service label mode
Use the following command to change the service label mode of the VPRN for the 7750 SR.
configure service vprn label-mode
The default mode (if the command is not present in the VPRN configuration) is vrf, meaning distribution of one service label for all routes of the VPRN. When a VPRN X is configured with the label-mode next-hop command option, the service label that it distributes with an IPv4 or IPv6 route that it exports depends on the type of route as summarized in Service labels distributed in service label per next hop mode.
Route type | Distributed service label |
---|---|
remote route with IP A (associated with a SAP) as resolved next hop | platform-wide unique label allocated to next-hop A |
remote route with IP B (associated with a spoke SDP) as resolved next hop | platform-wide unique label allocated to next-hop B |
local route | platform-wide unique label allocated to VPRN X |
aggregate route | platform-wide unique label allocated to VPRN X |
ECMP route | platform-wide unique label allocated to next-hop A (the lowest next-hop address in the ECMP set) |
BGP route with a backup next hop (BGP FRR) | platform-wide unique label allocated to next-hop A (the lowest next-hop address of the primary next hops) |
In the classic CLI, a change to the label mode of a VPRN requires the VPRN to first be administratively disabled.
Restrictions and usage notes
The service label per next-hop mode has the following restrictions (applies only to the 7750 SR):
ECMP
The VPRN label mode should be set to VRF if distribution of traffic across the multiple PE-CE next-hop interfaces of an ECMP route is needed.
hub and spoke VPN
The VPRN label mode should not be set to next-hop if the user does not want the hub-connected CE to be involved in the forwarding of spoke-to-spoke traffic.
BGP next-hop indirection
BGP next-hop indirection has no benefit in service label per next-hop mode. When the resolved next-hop interface of a BGP next-hop changes, all of the affected BGP routes must be re-advertised to VPRN peers with the new service label corresponding to the new resolved next-hop.
BGP anycast
When a PE failure results in redirection of MPLS packets to the other PE in a dual-homed pair, the service label mode is forced to VRF; that is, FIB lookup determines the next hop even if the label mode of the VPRN is configured as next-hop.
U-turn routing
U-turn routing is effectively disabled by service-label per next-hop.
Carrier Supporting Carrier
The label-mode configuration of a VPRN with CSC interfaces is ignored for BGP-8277 routes learned from connected CSC-CE devices.
VPRN Support for BGP FlowSpec
When a VPRN BGP instance receives an IPv4 or IPv6 flow route, and that route is valid and best, the system attempts to construct an IPv4 or IPv6 filter entry from the NLRI contents and the actions encoded in the UPDATE message. If the attempt is successful, the filter entry is added to the system-created "fSpec-n" IPv4 or IPv6 embedded filter, where n is the service ID of the VPRN. These embedded filters may be inserted into configured IPv4 and IPv6 filter policies that are applied to ingress traffic on a selected set of the VPRN's IP interfaces. These interfaces can include SAP and spoke-SDP interfaces, but not CsC network interfaces.
When FlowSpec rules are embedded into a user-defined filter policy, the insertion point of the rules is configurable through the offset command option in the following contexts:
- MD-CLI
configure filter ip-filter embed filter
configure filter ipv6-filter embed filter
- classic CLI
configure filter ip-filter embed-filter
configure filter ipv6-filter embed-filter
The sum of the ip-filter-max-size and offset must not exceed the maximum filter entry ID range.
MPLS entropy label and hash label
The router supports both the MPLS entropy label (RFC 6790) and the Flow Aware Transport label, known as the hash label (RFC 6391). LSR nodes in a network can load-balance labeled packets in a more granular way than by hashing on the standard label stack. See the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for more information.
The entropy label is supported for VPRN services, as well as Epipe and Ipipe spoke-SDP termination on VPRN interfaces. To configure insertion of the entropy label, use the entropy-label command in the vprn context or spoke-sdp context of an interface.
The hash label is also supported for Epipe and Ipipe spoke-SDP termination on VPRN and VPRN services bound to any MPLS-type encapsulated SDP, as well as to a VPRN service using the auto-bind-tunnel command with the resolution-filter configured as any MPLS tunnel type. Configure the hash label using the hash-label command in the following contexts.
configure service vprn
configure service vprn spoke-sdp
configure service vprn interface spoke-sdp
Either the hash label or the entropy label can be configured on one object, but not both.
LSP tagging for BGP next hops or prefixes and BGP-LU
It is possible to constrain the tunnels used by the system for resolution of BGP next-hops or prefixes and BGP labeled unicast routes using LSP administrative tags. See the "LSP Tagging and Auto-Bind Using Tag Information" section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for more information.
Route leaking from the global route table to VPRN instances
The Global Route Table (GRT) to VPRN route leaking feature allows routes from the global route table to be exported into specific VPRN instances, allowing those routes to be used for forwarding as well as re-advertised within the VPRN context.
There are two stages for route leaking. The first stage requires the configuration of a set of leak-export route policies that identify which GRT routes are subject to being exported into VPRN services. The leak-export command in the configure router context is used to configure between one and four route policies. The GRT routes must match policy entries with an action of accept, configured using the following command:
- MD-CLI
configure policy-options policy-statement entry action action-type
- classic CLI
configure router policy-options policy-statement entry action
In addition, the leak-export-limit command is used to specify the maximum number of GRT routes that can be included in the GRT leak pool.
The second stage requires the configuration of an import GRT policy that specifies which routes within the GRT leak pool are leaked into the route table of the associated VPRN instance. The import-grt command in the following context is used and accepts one route policy:
- MD-CLI
configure service vprn grt-leaking
- classic CLI
configure service vprn grt-lookup
For the GRT route to be leaked into the local VPRN, the route must match a policy entry with the following command set to accept:
- MD-CLI
configure policy-options policy-statement entry action action-type
- classic CLI
configure router policy-options policy-statement entry action
If a GRT route passes both stages, it is added to the VPRN route table, which allows it to be used for IP forwarding as well as re-advertisement within other routing protocols in the VPRN context.
Both IPv4 and IPv6 routes can be leaked using this process from the GRT into one or more VPRN instances. The GRT route types that can be leaked using this process are:
RIP, OSPF, and IS-IS routes
Direct routes
Static routes
Class-based forwarding of VPN-v4/v6 prefixes over RSVP-TE or SR-TE LSPs
This feature enables class-based forwarding (CBF) with ECMP of BGP VPN-v4/v6 prefixes that are resolved using RSVP-TE or SR-TE configured as auto-bind-tunnel.
Feature configuration
To configure this feature:
- Enable resolution to RSVP-TE or SR-TE tunnels in the auto-bind-tunnel context.
- Enable ECMP in the auto-bind-tunnel context.
- Enable class-forwarding in the vprn context.
- Define at least one class-forwarding policy in the mpls context, including the FC-to-set associations and the LSP-to-(policy, set) associations.
The SR OS CBF implementation supports spraying of packets over a maximum of six forwarding sets of ECMP LSPs only when the system profile is profile-b, which is supported on an FP4 or later-based CPM. In any other case, the maximum number of forwarding sets of ECMP LSPs is four.
MD-CLI
[ex:/configure router "Base" mpls]
A:admin@node-2# info
class-forwarding-policy "test" {
fc l2 {
forwarding-set 2
}
fc af {
forwarding-set 3
}
fc l1 {
forwarding-set 4
}
}
classic CLI
A:node-2>config>router>mpls$ info
----------------------------------------------
...
class-forwarding-policy "test"
fc l2 forwarding-set 2
fc af forwarding-set 3
fc l1 forwarding-set 4
exit
----------------------------------------------
All FCs are mapped to set 1 as soon as the policy is created. The user can change the mapping of FCs as required. An FC that is not added to the class-forwarding policy is always mapped to set 1. An FC can be mapped to at most one forwarding set. One or more FCs can be mapped to the same set. The user can indicate the initial default set by including the default-set option.
The default forwarding set is used to forward packets of any FC in cases where all LSPs of the forwarding set the FC maps to become operationally DOWN. The router uses the user-configured default set as the initial default set; otherwise, the router selects the lowest-numbered set as the default forwarding set in a class-forwarding policy. When the last LSP in a default forwarding set goes operationally DOWN, the router designates the next lowest-numbered set as the new default forwarding set.
A mapping to a class-forwarding policy and a set is added to the existing CBF configuration of an RSVP-TE or SR-TE LSP or to an LSP template. Use the following commands to perform this function:
- MD-CLI
configure router mpls lsp class-forwarding forwarding-set policy
configure router mpls lsp class-forwarding forwarding-set set
configure router mpls lsp-template class-forwarding forwarding-set policy
configure router mpls lsp-template class-forwarding forwarding-set set
- classic CLI
configure router mpls lsp class-forwarding forwarding-set policy set
configure router mpls lsp-template class-forwarding forwarding-set policy set
An MPLS LSP only maps to a single class-forwarding policy and forwarding set. Multiple LSPs can map to the same policy and set. If they form an ECMP set, from the IGP shortcut perspective, packets of the FCs mapped to this set are sprayed over these LSPs based on a modulo operation of the output of the hash routine on the headers of the packet and the number of LSPs in the set.
Feature behavior
When a VPN-v4/v6 prefix is resolved, the default behavior of the data path is to spray the packets over the entire ECMP set using a modulo operation of the number of resolved next hops in the ECMP set and the output of the hash on the packet header fields. With class-based forwarding enabled, the FC of the packet is used to look up the forwarding set ID. Then, a modulo operation is performed on the tunnel next hops of this set ID only, to spray packets of this FC. The data path concurrently implements ECMP within the tunnels of each set ID.
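The two-step selection (FC to set ID, then modulo over that set's tunnels) can be sketched in Python. This is illustrative only; the FC map mirrors the hypothetical "test" policy shown earlier, and the LSP names are invented:

```python
def cbf_next_hop(fc, hash_value, fc_to_set, set_tunnels):
    """Sketch of CBF spraying: the packet's FC selects a forwarding set,
    then a modulo of the hash output over that set's tunnels selects the
    LSP. FCs absent from the class-forwarding policy map to set 1."""
    set_id = fc_to_set.get(fc, 1)
    tunnels = set_tunnels[set_id]
    return tunnels[hash_value % len(tunnels)]

# Mirrors the "test" policy above: l2 -> set 2, af -> set 3, l1 -> set 4.
fc_map = {"l2": 2, "af": 3, "l1": 4}
sets = {1: ["lsp-a"], 2: ["lsp-b1", "lsp-b2"], 3: ["lsp-c"], 4: ["lsp-d"]}
print(cbf_next_hop("l2", 7, fc_map, sets))  # lsp-b2 (7 % 2 == 1)
print(cbf_next_hop("be", 7, fc_map, sets))  # lsp-a (unmapped FC -> set 1)
```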
The CBF information of the LSPs forming the ECMP set is checked for consistency before programming. If the LSPs in the set reference more than a single class-forwarding policy, the set is considered inconsistent from a CBF perspective; no CBF information is programmed in the data path and regular ECMP occurs.
Also, regardless of the CBF consistency check, the system programs the data-path with the full ECMP set.
The following describes the fallback behavior of the CBF feature in the data path.
An FC, for which all LSPs in the forwarding set are operationally DOWN, has its packets forwarded over the default forwarding set. The default forwarding set is either the initial default forwarding set configured by the user or the lowest numbered set in the class-forwarding policy that has one or more LSPs in the operationally UP state. If the initial or subsequently elected default forwarding set has all its LSPs operationally DOWN, the next lower numbered forwarding set, which has at least one LSP in the operationally UP state, is elected as the default forwarding set.
If all LSPs of all forwarding sets become operationally DOWN, the router resumes regular ECMP spraying on the remaining LSPs in the full ECMP set.
Whenever the first LSP in a forwarding set becomes operationally UP, the router triggers the re-election of the default set; it selects this set as the new default set if it is the initial default set, otherwise it selects the lowest-numbered set.
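The default-set election described above can be sketched in Python. This is an illustrative reading of the rules, assuming "next lower-numbered" means the lowest-numbered set that still has an operationally UP LSP; the data structures are hypothetical:

```python
def elect_default_set(sets_up, initial_default):
    """Sketch of default forwarding set election.

    sets_up maps set ID -> number of operationally UP LSPs. The configured
    initial default set is used while it has at least one UP LSP; otherwise
    the lowest-numbered set with an UP LSP is elected. None means all sets
    are DOWN, so the router falls back to regular ECMP over the full set.
    """
    if sets_up.get(initial_default, 0) > 0:
        return initial_default
    candidates = sorted(s for s, up in sets_up.items() if up > 0)
    return candidates[0] if candidates else None

print(elect_default_set({1: 0, 2: 3, 3: 1}, initial_default=3))  # 3
print(elect_default_set({1: 0, 2: 3, 3: 0}, initial_default=3))  # 2
print(elect_default_set({1: 0, 2: 0, 3: 0}, initial_default=3))  # None
```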
SR OS implements a hierarchical ECMP architecture for BGP prefixes. The first level is ECMP at the VPRN level between different BGP next hops; the second level is ECMP at the auto-bind-tunnel level, within the same next hop. This CBF feature is applied at the auto-bind-tunnel level. Weighted ECMP and the CBF feature are mutually exclusive on a per-BGP next-hop basis. When both are configured, weighted ECMP takes precedence. CPM-originated packets on the router, including control plane and OAM packets, are forwarded over a single LSP from the set of LSPs that the packet's FC is mapped to, as per the CBF configuration.
Weighted ECMP and ECMP for VPRN IPv4 and IPv6 over MPLS LSPs
ECMP over MPLS LSPs for VPRN services refers to spraying packets across multiple named RSVP and SR-TE LSPs within the same ECMP set.
The ECMP-like spraying consists of hashing the relevant fields in the header of a labeled packet and selecting the next-hop tunnel based on the modulo operation of the output of the hash and the number of ECMP tunnels. The maximum number of ECMP tunnels selected from the TTM matches the value of the user-configured ecmp command option. Only LSPs with the same lowest LSP metric can be part of the ECMP set. If the number of these LSPs is higher than the value configured in the ecmp command option, the LSPs with the lowest tunnel IDs are selected first.
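The member-selection rule (lowest metric first, then lowest tunnel IDs, capped at the configured ecmp value) can be sketched in Python. This is illustrative only; the tunnel IDs and metrics are hypothetical:

```python
def select_ecmp_lsps(lsps, ecmp):
    """Sketch of ECMP member selection from the tunnel table: keep only
    LSPs sharing the lowest LSP metric; if more remain than the configured
    ecmp value, prefer the lowest tunnel IDs.

    lsps is a list of (tunnel_id, metric) tuples."""
    lowest = min(metric for _, metric in lsps)
    eligible = sorted(tid for tid, metric in lsps if metric == lowest)
    return eligible[:ecmp]

# Hypothetical tunnel table entries: three LSPs share metric 10.
lsps = [(101, 10), (102, 10), (103, 20), (104, 10)]
print(select_ecmp_lsps(lsps, ecmp=2))  # [101, 102]
```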
Weighted ECMP is supported where the service resolves directly to an ECMP set of RSVP or SR-TE LSPs with a configured load balancing weight, or where it resolves to a BGP tunnel which in turn uses an ECMP set of RSVP or SR-TE LSPs with a configured load balancing weight. The weight of the LSP is configured using the following commands:
configure router mpls lsp load-balancing-weight
configure router mpls lsp-template load-balancing-weight
If one or more LSPs in the ECMP set have no load-balancing-weight configured, and the ECMP is set to a specific next hop, regular ECMP spraying is used.
Weighted ECMP is configured for VPRN services with SDP auto bind by using the following commands:
- MD-CLI
configure service vprn bgp-evpn mpls auto-bind-tunnel ecmp
configure service vprn bgp-evpn mpls auto-bind-tunnel weighted-ecmp
- classic CLI
configure service vprn auto-bind-tunnel ecmp
configure service vprn auto-bind-tunnel weighted-ecmp
The ecmp command allows explicit configuration of the number of tunnels that auto-bind-tunnel can use to resolve for a VPRN.
If weighted ECMP is enabled, a path is selected based on the output of the hashing algorithm. Packet paths are mapped to LSPs in the SDP in proportion to the configured load balancing weight of the LSP. The hash is based on the system load balancing configuration.
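The proportional mapping of hash results to LSPs can be illustrated with a short Python sketch (not the actual hardware hash; the LSP names and weights are hypothetical):

```python
def weighted_pick(hash_value, lsps):
    """Sketch of weighted ECMP: map the hash output onto LSPs in
    proportion to each LSP's configured load-balancing weight.

    lsps is a list of (name, weight) tuples."""
    total = sum(weight for _, weight in lsps)
    slot = hash_value % total
    for name, weight in lsps:
        if slot < weight:
            return name
        slot -= weight

# Hypothetical weights: lsp-1 should carry 3x the share of lsp-2.
lsps = [("lsp-1", 3), ("lsp-2", 1)]
picks = [weighted_pick(h, lsps) for h in range(4)]
print(picks)  # ['lsp-1', 'lsp-1', 'lsp-1', 'lsp-2']
```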
IP VPN independent domains using BGP attribute set
In most IP VPN deployments, the network behind each CE router is relatively simple. In these networks, BGP is typically used to create an EBGP session between the CE device and service provider PE router. Occasionally, the service provider is required to provide an IP VPN service to a larger enterprise customer, or to another service provider that has complex sites using BGP for intra-site routing. Under such circumstances, it is useful to isolate the customer BGP domain from the service provider BGP domain, to ensure that routing policies applied in one domain do not affect the other domain. This functionality is achieved using BGP independent domains (based on RFC 6368), which introduce an optional transitive BGP path attribute called attribute set (ATTR_SET).
ATTR_SET offers benefits for both the service provider and customer. For example, ATTR_SET can hide the global AS number of the service provider from customer domains, even in inter-AS VPN scenarios. For the customer, ATTR_SET ensures that BGP routing decisions based on LOCAL_PREF or MED attributes (for example) are not affected when the service provider manipulates the same attributes in the core domain.
The following is the expected flow of route advertisements when a VPRN supports an independent domain:
- The customer CE router advertises a BGP route to the service provider PE router, and it is received by the VPRN BGP instance. Although it is not expected, the BGP route may have an ATTR_SET attached. Use the following command to configure the router to remove ATTR_SETs from BGP routes received by the VPRN BGP instance:
- MD-CLI
configure service vprn bgp attribute-set remove true
- classic CLI
configure service vprn bgp attribute-set remove
Note: If the configuration of the remove command is changed, ROUTE_REFRESH messages are sent to all PE-CE peers of the VPRN.
- If the BGP route in step 1 is matched and accepted by the VRF export policy of the VPRN (or the equivalent VRF target configuration takes effect), an ATTR_SET is added to the VPN-IP route created by the export process. Use the following command to configure the router to add ATTR_SETs to exported VPN-IP routes:
- MD-CLI
configure service vprn bgp-ipvpn attribute-set export true
- classic CLI
configure service vprn bgp-ipvpn attribute-set export
The ATTR_SET contains an exact copy of the BGP path attributes (post import policy) of the BGP route from step 1, excluding the NEXT_HOP, MP_REACH, MP_UNREACH, AS4_PATH, and AS4_AGGREGATOR attributes. After the ATTR_SET is added to the VPN-IP route, the other path attributes of the VPN-IP route are initialized to the basic values that apply to exported local routes. The regenerated path attributes are influenced by the VRF export policy of the VPRN or the export policy that applies to the base router BGP session carrying the VPN-IP routes, provided that the vpn-apply-export command is configured. Neither the VRF export policy nor a regular BGP export policy can modify the contents of the ATTR_SET.
- The VPN-IP route in step 2 is received by another PE router and imported into a VPRN service that participates in the independent domain. Use the following command option to configure the VPRN service to accept and process ATTR_SETs in received VPN-IP routes.
configure service vprn bgp-ipvpn attribute-set import accept
Note: For a VPRN service not participating in an independent domain, Nokia recommends configuring the import command to drop. If the import command is configured to accept, only the attributes contained in the ATTR_SET influence the comparison of the route containing the ATTR_SET with other BGP routes in the VPRN BGP context. If BGP must compare an imported VPN-IP route containing an ATTR_SET to an imported VPN-IP route without an ATTR_SET, BGP compares ATTR_SET attributes against non-ATTR_SET attributes. However, Nokia recommends avoiding this scenario.
- If the imported VPN-IP route in step 3 is the best overall route for the prefix, it is advertised to BGP CE peers of the VPRN. The following cases are exceptions where the best route with an ATTR_SET is not advertised to BGP CE peers:
- If the AS number of the VPRN is equal to the origin AS signaled inside the ATTR_SET, BGP routes with attributes derived from the ATTR_SET are not advertised to non-client IBGP peers of the VPRN (peers not covered by a cluster configuration).
- BGP routes with attributes derived from the ATTR_SET are not advertised to confederation EBGP or IBGP peers of the VPRN.
- In the routes advertised to BGP CE peers of the VPRN, the signaled attribute values are generally copies of the attribute values contained inside the ATTR_SET. However, the BGP export policy can modify the final values. If a BGP CE route is derived from a VPN-IP route with an ATTR_SET, the attributes in the route advertised to the CE are not based on the path attributes of the VPN-IP route.
Note: As per RFC 6368, when the AS number of the importing VPRN is not equal to the origin AS signaled inside the ATTR_SET, the origin AS is prepended to the AS path before advertising the route to the CE.
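The attribute copying performed in step 2 can be sketched in Python. This is illustrative only; the attribute dictionary is a hypothetical stand-in for the encoded BGP path attributes:

```python
# Attributes that are never copied into the ATTR_SET, per the text above.
EXCLUDED = {"NEXT_HOP", "MP_REACH", "MP_UNREACH", "AS4_PATH", "AS4_AGGREGATOR"}

def build_attr_set(ce_route_attrs):
    """Sketch of ATTR_SET construction on export: copy the post-import-policy
    path attributes of the CE route, minus the excluded attributes."""
    return {k: v for k, v in ce_route_attrs.items() if k not in EXCLUDED}

# Hypothetical CE route attributes.
ce_route = {"LOCAL_PREF": 200, "MED": 10, "AS_PATH": [65001],
            "NEXT_HOP": "192.0.2.1"}
print(build_attr_set(ce_route))  # NEXT_HOP is dropped; the rest are copied
```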
QoS on ingress bindings
Traffic is tunneled between VPRN service instances on different PEs over service tunnels bound to MPLS LSPs or GRE tunnels.
configure service vprn spoke-sdp
QoS control can be applied to the service tunnels for traffic ingressing into a VPRN service; see Ingress QoS control on VPRN bindings.
An ingress queue group must be configured and applied to the ingress network FP where the traffic is received for the VPRN. All traffic received on that FP for any binding in the VPRN (either automatically or statically configured) which is redirected to a policer in the FP queue group is controlled by that policer. Use the following command to configure the redirection in the network QoS policy.
configure qos network ingress fc fp-redirect-group
As a result, the traffic from all such bindings is treated as a single entity (per forwarding class) with regard to ingress QoS control. The following commands in the network QoS policy are ignored for this traffic (IP multicast traffic would use the ingress network queues or queue group related to the network interface).
configure qos network ingress fc fp-redirect-group broadcast-policer
configure qos network ingress fc fp-redirect-group mcast-policer
configure qos network ingress fc fp-redirect-group unknown-policer
configure qos network ingress ler-use-dscp
Ingress bandwidth control does not take into account the outer Ethernet header, the MPLS labels/control word or GRE headers, or the FCS of the incoming frame.
Use the following command to associate the network QoS policy and the FP queue group and instance within the network ingress of a VPRN.
configure service vprn network ingress qos fp-redirect-group instance
The preceding command overrides the QoS applied to the related network interfaces for unicast traffic arriving on bindings in that VPRN. The IP and IPv6 criteria statements are not supported in the applied network QoS policy.
This is supported for all available transport tunnel types and is independent of the allocation mode for VPRN service labels (vrf or next-hop) used within the VPRN. It is also supported for Carrier-Supporting-Carrier VPRNs.
The ingress network interfaces on which the traffic is received must be on FP2- and higher-based hardware.
Multicast in IP-VPN applications
This section and its subsections focus on Multicast in IP VPN functionality. See the 7450 ESS, 7750 SR, 7950 XRS, and VSR Multicast Routing Protocols Guide for information about multicast protocols.
Applications for this feature include enterprise customers implementing a VPRN solution for their WAN networking needs, customer applications such as stock-ticker information, financial institutions distributing stock and other trading data, and video delivery systems.
Implementing multicast in IP VPNs entails supporting and separating the provider's core multicast domain from the various customer multicast domains, and the customer multicast domains from each other.
Multicast in IP-VPN applications depicts an example of multicast in an IP-VPN application. The provider’s domain encompasses the core routers (1 through 4) and the edge routers (5 through 10). The IP-VPN customers each have their own multicast domain: VPN-1 (CE routers 12, 13, and 16) and VPN-2 (CE routers 11, 14, 15, 17, and 18). In this example, the VPN-1 data generated by the customer behind router 16 is multicast only by PE 9 to PE routers 6 and 7, for delivery to CE routers 12 and 13 respectively. The VPN-2 data generated by the customer behind router 15 is forwarded by PE 8 to PE routers 5, 7, and 10, for delivery to CE routers 18, 11, 14, and 17.
The demarcation of these domains is in the PE’s (routers 5 through 10). The PE router participates in both the customer multicast domain and the provider’s multicast domain. The customer’s CEs are limited to a multicast adjacency with the multicast instance on the PE specifically created to support that specific customer’s IP-VPN. This way, customers are isolated from the provider’s core multicast domain and other customer multicast domains while the provider’s core routers only participate in the provider’s multicast domain and are isolated from all customers’ multicast domains.
The PE for a specific customer’s multicast domain becomes adjacent to the CE routers attached to that PE and to all other PEs that participate in the IP-VPN (or customer) multicast domain. The PE achieves this by encapsulating the customer multicast control data and multicast streams inside the provider’s multicast packets. These encapsulated packets are forwarded only to the PE nodes that are attached to the same customer’s edge routers as the originating stream and are part of the same customer VPRN. This prunes the distribution of the multicast control and data traffic to the PEs that participate in the customer’s multicast domain. The Rosen draft refers to this as the default multicast distribution tree (default MDT) for the multicast domain; the default MDT is associated with a unique multicast group address within the provider’s network.
Use of data MDTs
Using the preceding method, all multicast data offered by a specific CE is always delivered to all other CEs that are part of the same multicast domain. It is possible that a number of CEs do not require the delivery of a particular multicast stream because they have no downstream receivers for a specific multicast group. At low traffic volumes, the impact of this is limited. However, at high data rates this can be optimized by pruning from the distribution tree those PEs that, although part of the customer multicast domain, have no need to deliver a specific multicast stream to their attached CEs. To facilitate this optimization, the Rosen draft specifies the use of data MDTs. These data MDTs are signaled after the bandwidth for a specific SG exceeds a configurable threshold.
When a PE detects it is transmitting data for the SG in excess of this threshold, it sends an MDT join TLV (at 60 second intervals) over the default MDT to all PEs. All PEs that require the SG specified in the MDT join TLV join the data MDT that is used by the transmitting PE to send the specific SG. PEs that do not require the SG do not join the data MDT, therefore pruning the multicast distribution tree to just the PEs requiring the SG. After providing sufficient time for all PEs to join the data MDT, the transmitting PE switches the specific multicast stream to the data MDT.
PEs that do not require the SG to be delivered keep state that allows them to join the data MDT later, as required.
When the bandwidth requirement no longer exceeds the threshold, the PE stops announcing the MDT join TLV. At this point the PEs using the data MDT leave this group and transmission resumes over the default MDT.
Sampling to check whether an (S,G) has exceeded the threshold occurs every ten seconds. If the rate has exceeded the configured threshold during that sample period, the data MDT is created; if not, it is not created. If the data MDT is active and the transmission rate in the last sample period has not exceeded the configured rate, the data MDT is torn down and the multicast stream resumes transmission over the default MDT.
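The sampling decision above can be sketched as follows. The class name, kbps units, and threshold value are hypothetical, for illustration only.

```python
# Sketch of the data MDT sampling decision above: every 10-second sample,
# move the stream to a data MDT when its rate exceeds the configured
# threshold, and back to the default MDT when it no longer does. Class and
# unit names are hypothetical.

SAMPLE_INTERVAL = 10  # seconds, per the sampling described above

class SgState:
    def __init__(self, threshold_kbps: int):
        self.threshold_kbps = threshold_kbps
        self.on_data_mdt = False

    def sample(self, rate_kbps: int) -> str:
        """Evaluate one sample period; return the MDT in use afterwards."""
        # above threshold: create/keep the data MDT; otherwise tear it down
        self.on_data_mdt = rate_kbps > self.threshold_kbps
        return "data-MDT" if self.on_data_mdt else "default-MDT"

sg = SgState(threshold_kbps=500)     # hypothetical threshold
first = sg.sample(800)               # rate above threshold
second = sg.sample(200)              # rate dropped below threshold
```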
Multicast protocols supported in the provider network
When MVPN auto-discovery is disabled, PIM-SM can be used for the I-PMSI, and PIM-SSM or PIM-SM (Draft-Rosen Data MDT) can be used for the S-PMSI. When MVPN S-PMSI auto-discovery is enabled, both PIM-SM and PIM-SSM can be used for the I-PMSI, and PIM-SSM can be used for the S-PMSI. In the customer network, both PIM-SM and PIM-SSM are supported.
An MVPN is defined by two sets of sites: sender sites set and receiver sites set, with the following properties:
Hosts within the sender sites set could originate multicast traffic for receivers in the receiver sites set.
Receivers not in the receiver sites set should not be able to receive this traffic.
Hosts within the receiver sites set could receive multicast traffic originated by any host in the sender sites set.
Hosts within the receiver sites set should not be able to receive multicast traffic originated by any host that is not in the sender sites set.
A site could be both in the sender sites set and receiver sites set, which implies that hosts within such a site could both originate and receive multicast traffic. An extreme case is when the sender sites set is the same as the receiver sites set, in which case all sites could originate and receive multicast traffic from each other.
Sites within a specific MVPN may be either within the same, or in different organizations, which implies that an MVPN can be either an intranet or an extranet. A site may be in more than one MVPN, which implies that MVPNs may overlap. Not all sites of a specific MVPN have to be connected to the same service provider, which implies that an MVPN can span multiple service providers.
Another way to look at MVPN is to say that an MVPN is defined by a set of administrative policies. Such policies determine both sender sites set and receiver site set. Such policies are established by MVPN customers, but implemented by MVPN service providers using the existing BGP/MPLS VPN mechanisms, such as route targets, with extensions, as necessary.
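As a toy illustration of membership as administrative policy, route-target sets can model which receiver sites import traffic from which sender sites. All RT values below are made up.

```python
# Toy illustration (all RT values made up): MVPN membership as administrative
# policy, where a receiver site imports traffic from a sender site when the
# sender's export route targets intersect the receiver's import route targets.

def can_receive(receiver_import_rts: set, sender_export_rts: set) -> bool:
    return bool(receiver_import_rts & sender_export_rts)

vpn_a_receiver = {"target:64500:10"}
vpn_a_sender = {"target:64500:10", "target:64500:20"}
```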
MVPN membership autodiscovery using BGP
BGP-based autodiscovery is performed by a multicast VPN address family. Any PE that attaches to an MVPN must issue a BGP update message containing an NLRI in this address family, along with a specific set of attributes.
The PE router uses route targets to specify MVPN route import and export. The route target may be the same as the one used for the corresponding unicast VPN, or it may be different. The PE router can specify separate import route targets for sender sites and receiver sites for a specific MVPN.
The route distinguisher (RD) that is used for the corresponding unicast VPN can also be used for the MVPN.
configure service vprn mvpn c-mcast-signaling pim
The following tables describe the supported configuration combinations. If a CLI combination is not allowed, the system returns an error message. If a CLI command is marked as ‟ignored” in the table, the configuration is not blocked, but its value is ignored by the software.
| Auto-discovery | Inclusive PIM SSM | Action |
|---|---|---|
| Yes | Yes | Allowed |
| MDT-SAFI | Yes | Allowed |
| No | Yes | Not Allowed |
| Yes or No | No | Allowed |
| MDT-SAFI | No | Ignored |
| MDT-SAFI | No (RSVP and MLDP) | Not Allowed |
| Auto-discovery | C-mcast-signaling | S-PMSI auto-discovery | Action |
|---|---|---|---|
| Yes | BGP | Ignored | Allowed |
| Yes | PIM | Yes | Allowed |
| Yes | PIM | No | Allowed |
| No | BGP | Ignored | Not Allowed |
| No | PIM | Ignored | Allowed |
| MDT-SAFI | Ignored (PIM behavior) | Ignored (‟No” behavior) | Allowed |
C-multicast signaling in BGP requires autodiscovery to be enabled.
If c-mcast-signaling bgp is configured, disabling autodiscovery in the following context fails.
configure service vprn mvpn provider-tunnel selective
The error message is as follows.
C-multicast signaling in BGP requires autodiscovery to be enabled
When c-mcast-signaling bgp is configured, S-PMSI A-D is always enabled (its configuration is ignored).
When autodiscovery is disabled, S-PMSI A-D is always disabled (its configuration is ignored).
When autodiscovery is enabled and c-multicast-signaling pim is configured, the S-PMSI A-D configuration value is used.
MDT-SAFI uses PIM C-multicast signaling and S-PMSI signaling regardless of what is configured. A C-multicast signaling or S-PMSI signaling configuration is ignored, but both pim and bgp command options are allowed.
MDT-SAFI is only applicable to PIM-SSM I-PMSI. PIM-SM (ASM) I-PMSI is configurable but is ignored. RSVP and MLDP I-PMSI are not allowed.
MVPN implementation based on RFC 6037, Cisco Systems’ Solution for Multicast in MPLS/BGP IP VPNs can support membership autodiscovery using BGP MDT-SAFI. A CLI option is provided per MVPN instance to enable auto-discovery using either BGP MDT-SAFI or NG-MVPN. Only PIM-MDT is supported with the BGP MDT-SAFI method.
PE-PE transmission of C-multicast routing using BGP
MVPN c-multicast routing information is exchanged between PEs by using c-multicast routes that are carried using MCAST-VPN NLRI.
VRF route import extended community
VRF route import is an IP address-specific extended community, of an extended type, and is transitive across AS boundaries (RFC 4360, BGP Extended Communities Attribute).
To support MVPN, in addition to the import/export route target extended communities used by the unicast routing, each VRF on a PE must have an import route target extended community that controls imports of C-multicast routes into a particular VRF.
The c-multicast import RT uniquely identifies a VRF, and is constructed as follows:
The Global Administrator field of the c-multicast import RT must be set to an IP address of the PE. This address should be common for all the VRFs on the PE (this address may be the PE’s loopback address).
The Local Administrator field of the c-multicast import RT associated with a specific VRF contains a 2 octets long number that uniquely identifies that VRF within the PE that contains the VRF.
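A minimal sketch of this construction, assuming the standard RFC 4360 wire format for a transitive IPv4-address-specific Route Target (type 0x01, subtype 0x02); the function name, PE address, and VRF identifier are illustrative.

```python
import socket
import struct

# Hedged sketch (names illustrative): a c-multicast import RT encoded as a
# transitive IPv4-address-specific Route Target extended community
# (RFC 4360: type 0x01, subtype 0x02). Global Administrator = a PE loopback
# address; Local Administrator = a 2-octet VRF identifier unique on that PE.

def c_mcast_import_rt(pe_loopback: str, vrf_id: int) -> bytes:
    type_field, subtype = 0x01, 0x02
    return struct.pack("!BB4sH", type_field, subtype,
                       socket.inet_aton(pe_loopback),  # Global Administrator
                       vrf_id)                         # Local Administrator

rt = c_mcast_import_rt("192.0.2.1", 7)  # example PE loopback and VRF id
```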
A PE that has sites of a specific MVPN connected to it communicates the value of the c-multicast import RT associated with the VRF of that MVPN on the PE to all other PEs that have sites of that MVPN. To accomplish this, a PE that originates a (unicast) route to VPN-IP addresses includes, in the BGP update message that carries this route, the VRF Route Import extended community that has the value of the c-multicast import RT of the VRF associated with the route. The exception is when it is known a priori that none of these addresses act as multicast sources or RP, or both, in which case the (unicast) route need not carry the VRF Route Import extended community.
All c-multicast routes with the c-multicast import RT specific to the VRF must be accepted. In this release, vrf-import and vrf-target policies do not apply to C-multicast routes.
The decision flow path is shown below.
if (route-type == c-mcast-route) {
    if (route_target_list includes C-multicast_Import_RT) {
        accept;
    } else {
        drop;
    }
} else {
    run vrf-import or vrf-target, or both;
}
Provider tunnel support
Point-to-Multipoint Inclusive (I-PMSI) and Selective (S-PMSI) Provider Multicast Service Interface
BGP C-multicast signaling must be enabled for an MVPN instance to use P2MP RSVP-TE or LDP as I-PMSI (equivalent to ‛Default MDT’, as defined in draft Rosen MVPN) and S-PMSI (equivalent to ‛Data MDT’, as defined in draft Rosen MVPN).
By default, all PE nodes participating in an MVPN receive data traffic over I-PMSI. Optionally, (C-*, C-*) wildcard S-PMSI can be used instead of I-PMSI. See section Wildcard (C-*, C-*) P2MP LSP S-PMSI for more information. For efficient data traffic distribution, one or more S-PMSIs can be used, in addition to the default PMSI, to send traffic to PE nodes that have at least one active receiver connected to them. For more information, see P2MP LSP S-PMSI.
Only one unique multicast flow is supported over each P2MP RSVP-TE or P2MP LDP LSP S-PMSI. The number of S-PMSIs that can be initiated per MVPN instance is restricted by the maximum-p2mp-spmsi command. A P2MP LSP S-PMSI cannot be used for more than one (S,G) stream (that is, multiple multicast flows) after the number of S-PMSIs per MVPN limit is reached. Multicast flows that cannot switch to an S-PMSI remain on the I-PMSI.
P2MP RSVP-TE I-PMSI and S-PMSI
A Point-to-Multipoint RSVP-TE LSP as an inclusive or selective provider tunnel is available with BGP NG-MVPN only. The P2MP RSVP-TE LSP is dynamically set up from the root node upon auto-discovery of the leaf PE nodes that are participating in the multicast VPN. Each RSVP-TE I-PMSI or S-PMSI LSP can be used with a single MVPN instance only.
An RSVP-TE LSP template must be defined (see the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide) and bound to the MVPN as an inclusive or selective provider tunnel (the S-PMSI is for efficient data distribution and is optional) to dynamically initiate the P2MP LSP to the leaf PE nodes learned via NG-MVPN auto-discovery signaling. Each P2MP LSP S2L is signaled based on parameters defined in the LSP template.
P2MP LDP I-PMSI and S-PMSI
A Point-to-Multipoint LDP LSP as an inclusive or selective provider tunnel is available with BGP NG-MVPN only. The P2MP LDP LSP is dynamically set up from the leaf PE nodes upon autodiscovery of the PE nodes that are participating in the multicast VPN. Each LDP I-PMSI or S-PMSI LSP can be used with a single MVPN instance only.
The multicast-traffic command must be configured per LDP interface to enable P2MP LDP setup. P2MP LDP must also be configured as inclusive or selective (S-PMSI is for efficient data distribution and is optional) provider tunnel per MVPN to dynamically initiate P2MP LSP to leaf PE nodes learned via NG-MVPN auto-discovery signaling.
Wildcard (C-*, C-*) P2MP LSP S-PMSI
Wildcard S-PMSI allows the use of selective tunnel as a default tunnel for a specific MVPN. Users can avoid a full mesh of LSPs between the MVPN PEs, reducing related signaling, state, and bandwidth consumption for multicast distribution. No traffic is sent to PEs unless receivers are active on the default PMSI.
Use the following command to configure the wildcard S-PMSI functionality for NG-MVPN using LDP and RSVP-TE in P-instance.
configure service vprn mvpn provider-tunnel inclusive wildcard-spmsi
The support includes:
IPv4 and IPv6
PIM ASM and SSM
directly attached receivers
The SR OS (C-*, C-*) wildcard implementation uses wildcard S-PMSI instead of I-PMSI for a specific MVPN. A VPRN shutdown is required to switch MVPN from I-PMSI to (C-*, C-*) S-PMSI. ISSU and Upstream Multicast Hop (UMH) redundancy can be used to minimize the impact.
To minimize outage, the following upgrade order is recommended:
Route Reflector
receiver PEs
backup UMH
active UMH
Use the following command to configure RSVP-TE/mLDP under the inclusive provider tunnel. The configuration applies to the wildcard S-PMSI when enabled.
configure service vprn mvpn provider-tunnel inclusive
Wildcard C-S and C-G values are encoded as defined in RFC 6625; that is, using zero for the Multicast Source Length and Multicast Group Length, and omitting the Multicast Source and Multicast Group values, respectively, in the MCAST_VPN_NLRI. For example, a (C-*, C-*) is advertised as: RD, 0x00, 0x00, and the IP address of the originating router.
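That encoding can be sketched as follows, assuming a type-0 RD and an IPv4 originating router address; the function name and all values are illustrative.

```python
import socket
import struct

# Illustrative sketch of a (C-*, C-*) S-PMSI A-D route NLRI per RFC 6625:
# RD, zero Multicast Source Length, zero Multicast Group Length (both values
# omitted), then the originating router's IP address. Values are examples.

def wildcard_spmsi_nlri(rd: bytes, originator_ipv4: str) -> bytes:
    return (rd
            + b"\x00"   # Multicast Source Length = 0, value omitted
            + b"\x00"   # Multicast Group Length = 0, value omitted
            + socket.inet_aton(originator_ipv4))

rd = struct.pack("!HHI", 0, 64500, 1)    # type-0 RD "64500:1" (example)
nlri = wildcard_spmsi_nlri(rd, "192.0.2.1")
```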
Procedures implemented by SR OS are compliant with sections 3 and 4 of RFC 6625. Wildcards encoded as described in the preceding paragraph are carried in the NLRI field of the MP_REACH_NLRI attribute. Both IPv4 and IPv6 are supported: an Address Family Identifier (AFI) of 1 or 2, and a Subsequent AFI (SAFI) of MCAST-VPN.
The (C-*, C-*) S-PMSI is established as follows:
- UMH PEs advertise I-PMSI A-D routes without tunnel information present (empty PTA), encoded in accordance with RFC 6513 and RFC 6514, before advertising the wildcard S-PMSI. The I-PMSI must be signaled and installed on the receiver PEs, because the (C-*, C-*) S-PMSI is only installed when a first receiver is added. However, no LSP is established for the I-PMSI.
- UMH PEs advertise the S-PMSI A-D route whose NLRI contains (C-*, C-*), with tunnel information encoded in accordance with RFC 6625.
- Receiver PEs join the wildcard S-PMSI if receivers are present.
To ensure correct operation between PEs with (C-*, C-*) S-PMSI signaling, two BSR modes of operation are implemented. These modes are the BSR unicast (default) and BSR S-PMSI.
The following applies to the BSR unicast mode:
- BSR PDUs are sent or forwarded as unicast PDUs to neighbor PEs when the I-PMSI with pseudo-tunnel interface is installed.
- At every BSR interval timer, the BSR unicast PDUs are sent to all I-PMSI interfaces when this PE is the elected BSR.
- BSMs received as multicast from C-instance interfaces are flooded as unicast in the P-instance.
- All PEs process BSR PDUs received on the I-PMSI pseudo-tunnel interface as unicast packets.
- BSR PDUs are not forwarded to the management control interface of the PE.
- BSR unicast PDUs use the system IP address of the destination PE as the destination IP, and the system address of the sender PE as the source IP.
- The BSR unicast functionality ensures that no special state needs to be created for BSR when (C-*, C-*) S-PMSI is enabled, which is beneficial considering the low volume of BSR traffic.
For BSR unicast, the base IPv4 system address (IPv4) or the mapped version of the base IPv4 system address (IPv6) must be configured under the VPRN to ensure BSR unicast messages can reach the VPRN.
For BSR S-PMSI, the base IPv4 or IPv6 system address must be configured under the VPRN to ensure BSR S-PMSIs are established.
The BSR S-PMSI mode can be enabled to allow interoperability with other vendors. In this mode, full mesh S-PMSI is required and created between all PEs in MVPN to exchange BSR PDUs. To operate as expected, the BSR S-PMSI mode requires a selective P-tunnel configuration. For IPv6 support (including dual-stack) of BSR S-PMSI mode, the IPv6 default system interface address must be configured as a loopback interface address under the VPRN and VPRN PIM contexts. Changing the BSR signaling requires a VPRN shutdown.
Other key feature interactions and restrictions for (C-*, C-*) include the following:
- Extranet is fully supported with wildcard S-PMSI trees.
- (C-S, C-G) S-PMSIs are supported when (C-*, C-*) S-PMSI is configured (including both BW-driven and receiver-PE-driven thresholds).
- Geo-redundancy is supported. Deploying with geo-redundancy eliminates traffic duplication when a geo-redundant source has no active receivers, at the cost of a slightly increased outage upon a switch, because the wildcard S-PMSI may need to be re-established.
- PIM in the P-instance is not supported.
- The implementation requires wildcard encoding as described in RFC 6625, and I-PMSI/S-PMSI signaling as defined above (I-PMSI signaled with an empty PTA, then S-PMSI signaled with a P-tunnel PTA), for interoperability. Implementations that do not adhere to the RFC 6625 encoding, or that signal both I-PMSI and S-PMSI with a P-tunnel PTA, do not interoperate with the SR OS implementation.
P2MP LSP S-PMSI
NG-MVPN supports P2MP RSVP-TE and P2MP LDP LSPs as a Selective Provider Multicast Service Interface (S-PMSI). An S-PMSI is used to avoid sending traffic to PEs that participate in the multicast VPN but do not have any receivers for a specific C-multicast flow. This allows more bandwidth-efficient distribution of multicast traffic over the provider network, especially for high-bandwidth multicast flows. S-PMSIs are spawned dynamically based on configured triggers, as described in S-PMSI trigger thresholds.
In MVPN, the head-end PE discovers all the leaf PEs via I-PMSI A-D routes. It then signals the P2MP LSP to all the leaf PEs using RSVP-TE. In the scenario of S-PMSI:
The head-end PE sends an S-PMSI A-D route for a specific C-flow with the Leaf Information Required bit set.
The PEs interested in the C-flow respond with Leaf A-D routes.
The head-end PE signals the P2MP LSP to all the leaf PEs using RSVP-TE.
Because the receivers may come and go, the implementation supports dynamically adding and pruning leaf nodes to and from the P2MP LSP.
When the tunnel type in the PMSI attribute is set to RSVP-TE P2MP LSP, the tunnel identifier is <Extended Tunnel ID, Reserved, Tunnel ID, P2MP ID>, as carried in the RSVP-TE P2MP LSP SESSION Object.
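Assuming the field widths of the RSVP-TE P2MP SESSION object (4-octet Extended Tunnel ID, 2-octet Reserved, 2-octet Tunnel ID, 4-octet P2MP ID), the 12-octet tunnel identifier can be sketched as follows; the field values are examples.

```python
import socket
import struct

# Hedged sketch: the 12-octet tunnel identifier <Extended Tunnel ID,
# Reserved, Tunnel ID, P2MP ID> carried in the PMSI attribute when the
# tunnel type is RSVP-TE P2MP LSP. Field widths are assumptions from the
# SESSION object; values below are examples.

def rsvp_p2mp_tunnel_id(ext_tunnel_id_ipv4: str, tunnel_id: int,
                        p2mp_id: int) -> bytes:
    return struct.pack("!4sHHI",
                       socket.inet_aton(ext_tunnel_id_ipv4),  # Extended Tunnel ID
                       0,                                      # Reserved
                       tunnel_id,                              # Tunnel ID
                       p2mp_id)                                # P2MP ID

tid = rsvp_p2mp_tunnel_id("192.0.2.1", 100, 1)
```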
The PE can also learn via an A-D route that it needs to receive traffic on a particular RSVP-TE P2MP LSP before the LSP is actually set up. In this case, the PE must wait until the LSP is operational before it can modify its forwarding tables as directed by the A-D route.
Because of the way that LDP normally works, mLDP P2MP LSPs are set up without solicitation from the leaf PEs toward the head-end PE. The leaf PE discovers the head-end PE via I-PMSI or S-PMSI A-D routes. The tunnel identifier carried in the PMSI attribute is used as the P2MP FEC element. The tunnel identifier consists of the address of the head-end PE and a Generic LSP identifier value. The Generic LSP identifier value is automatically generated by the head-end PE.
Dynamic multicast signaling over P2MP LDP in VRF
This feature provides a multicast signaling solution for IP-VPNs, allowing the connection of IP multicast sources and receivers in C-instances that run the PIM multicast protocol, using P2MP mLDP in-band signaling in the P-instance instead of Rosen MVPN with BGP SAFI. The solution dynamically maps each PIM multicast flow to a P2MP LDP LSP on the source and receiver PEs.
The feature uses procedures defined in RFC 7246: Multipoint Label Distribution Protocol In-Band Signaling in Virtual Routing and Forwarding (VRF) Table Context. On the receiver PE, PIM signaling is dynamically mapped to the P2MP LDP tree setup. On the source PE, signaling is handed back from the P2MP mLDP to the PIM. Because of dynamic mapping of multicast IP flow to P2MP LSP, provisioning and maintenance overhead is eliminated as multicast distribution services are added and removed from the VRF. Per (C-S, C-G) IP multicast state is also removed from the network, because P2MP LSPs are used to transport multicast flows.
Dynamic mLDP signaling for IP multicast in VPRN illustrates dynamic mLDP signaling for IP multicast in VPRN.
As illustrated in Dynamic mLDP signaling for IP multicast in VPRN, P2MP LDP LSP signaling is initiated from the receiver PE that receives PIM JOIN from a downstream router (Router A). Use the commands in the following context to enable dynamic multicast signaling on PIM customer-facing interfaces for the specific VPRN of Router A.
configure service vprn pim interface p2mp-ldp-tree-join
This enables handover of multicast tree signaling from PIM to the P2MP LDP LSP. As a leaf node of the P2MP LDP LSP, Router A selects the upstream hop as the root node of the P2MP LDP FEC, based on a routing table lookup. If an ECMP path is available for a specific route, the trees are equally balanced toward the multiple root nodes. The PIM joins are carried to the Source PE (Router B), where multicast tree signaling is handed back to PIM and propagated upstream as a native-IP PIM JOIN toward the C-instance multicast source.
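The root selection just described can be sketched as follows; the per-(S,G) hash spread over ECMP roots is an illustrative stand-in for the actual balancing logic, and all addresses are made up.

```python
# Sketch of leaf-PE root selection (illustrative only): the root of the
# P2MP LDP FEC is the upstream hop toward the source from a routing table
# lookup; with ECMP, trees are spread across the candidate root nodes.

def select_root(source_group: tuple, ecmp_roots: list) -> str:
    # deterministic per-(S,G) spread over the available ECMP root nodes
    return ecmp_roots[hash(source_group) % len(ecmp_roots)]

roots = ["10.0.0.1", "10.0.0.2"]            # hypothetical ECMP root candidates
chosen = select_root(("10.1.1.1", "232.0.0.1"), roots)
```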
The feature is supported with IPv4 and IPv6 PIM SSM and IPv4 mLDP. Directly connected IGMP/MLD receivers are also supported, with PIM enabled on outgoing interfaces and SSM mapping configured, if required.
The following are feature restrictions:
Dynamic mLDP signaling in a VPRN instance and Rosen or NG-MVPN are mutually exclusive.
A single instance of P2MP LDP LSP is supported between the receiver PE and Source PE per multicast flow; there is no stitching of dynamic trees.
Extranet functionality is not supported.
The router LSA link ID or the advertising router ID must be a routable IPv4 address (including IPv6 into IPv4 mLDP use cases).
IPv6 PIM with dynamic IPv4 mLDP signaling is not supported with EBGP or IBGP with IPv6 next-hop.
Inter-AS and IGP inter-area scenarios where the originating router is altered at the ASBR and ABR, respectively (so that PIM has no way to create the LDP LSP toward the source), are not supported.
When dynamic mLDP signaling is deployed, a change in Route Distinguisher (RD) on the Source PE is not acted upon for any (C-S, C-G)s until the receiver PEs learn about the new RD (via BGP) and send explicit delete and create with the new RD.
Procedures of Section 2 of RFC 7246 for a case where UMH and the upstream PE do not have the same IP address are not supported.
MVPN sender-only/receiver-only
In multicast MVPN, if multiple PE nodes form a peering with a common MVPN instance, by default each PE node originates a multicast tree locally toward the remaining PE nodes that are members of this MVPN instance. This behavior creates a mesh of I-PMSIs across all PE nodes in the MVPN. It is often the case that a specific VPN has many sites that host multicast receivers, but only a few sites that host either both receivers and sources, or sources only.
MVPN Sender-only/Receiver-only allows optimization of control and data plane resources by preventing unnecessary I-PMSI mesh when a specific PE hosts multicast sources only or multicast receivers only for a specific MVPN.
For PE nodes that host only multicast sources for a specific VPN, the user can block those PEs, through configuration, from joining I-PMSIs from other PEs in this MVPN. For PE nodes that host only multicast receivers for a specific VPN, the user can block those PEs, through configuration, from setting up a local I-PMSI to other PEs in this MVPN.
MVPN Sender-only/Receiver-only is supported with NG-MVPN using IPv4 RSVP-TE or IPv4 LDP provider tunnels for both IPv4 and IPv6 customer multicast. MVPN sender-only/receiver-only example depicts 4-site MVPN with sender-only, receiver-only and sender-receiver (default) sites.
Extra attention needs to be paid to BSR/RP placement when sender-only/receiver-only is enabled. The source DR sends unicast-encapsulated traffic toward the RP; therefore, the RP must be at a sender-receiver or sender-only site, so that (*,G) traffic can be sent over the tunnel. The BSR must be deployed at a sender-receiver site. The BSR can be at a sender-only site if the RPs are at the same site. The BSR needs to receive packets from other candidate BSRs and candidate RPs, and also needs to send BSM packets to everyone.
S-PMSI trigger thresholds
The mLDP and RSVP-TE S-PMSIs support two types of data thresholds: bandwidth-driven and receiver-PE-driven. The threshold evaluation and bandwidth driven threshold functionality are described in Use of data MDTs.
In addition to the bandwidth threshold functionality, the user can enable receiver-PE-driven threshold behavior. Receiver PE thresholds ensure that S-PMSI is only created when BW savings in P-instance justify extra signaling required to establish a new S-PMSI. For example, the number of receiver PEs interested in a specific C-multicast flow is meaningfully smaller than the number of receiver PEs for default PMSI (I-PMSI or wildcard S-PMSI). To ensure that S-PMSI is not constantly created/deleted, two thresholds need to be specified: receiver PE add threshold and receiver PE delete threshold (expected to be significantly higher).
When a (C-S, C-G) crosses a data threshold to create S-PMSI, instead of regular S-PMSI signaling, sender PE originates S-PMSI explicit tracking procedures to detect how many receiver PEs are interested in a specific (C-S, C-G). When receiver PEs receive an explicit tracking request, each receiver PE responds, indicating whether there are multicast receivers present for that (C-S, C-G) on the specific PE (PE is interested in a specific (C-S, C-G)). If the geo-redundancy feature is enabled, receiver PEs do not respond to explicit tracking requests for suppressed sources and therefore only Receiver PEs with an active join are counted against the configured thresholds on Source PEs.
Upon the regular sampling and check interval, if the previous check interval had a non-zero receiver PE count (a one-interval delay, so that the S-PMSI is not triggered prematurely) and the current count of receiver PEs interested in the specific (C-S, C-G) is non-zero and less than the configured receiver PE add threshold, the source PE sets up an S-PMSI for this (C-S, C-G), following standard NG-MVPN procedures augmented with explicit tracking for the S-PMSI being established.
The data threshold timer should be set to a value that gives explicit tracking enough time to complete (setting the timer to a value that is too low may create the S-PMSI prematurely).
Upon regular data-delay-interval expiry processing, when the BW threshold validity is being checked, the current receiver PE count is also checked (explicit tracking continues on the established S-PMSI). If the BW threshold no longer applies, or the number of receiver PEs exceeds the receiver PE delete threshold, the S-PMSI is torn down and the (C-S, C-G) joins the default PMSI again.
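A sketch of the add/delete threshold hysteresis described in the preceding paragraphs; the function and parameter names are hypothetical, and the bandwidth-threshold check is omitted for brevity.

```python
# Sketch of the receiver-PE-driven S-PMSI thresholds described above; the
# delete threshold is expected to be significantly higher than the add
# threshold, giving hysteresis. Names and logic are illustrative.

def evaluate(on_spmsi: bool, prev_count: int, curr_count: int,
             add_thr: int, del_thr: int) -> bool:
    """True if the (C-S, C-G) should be on an S-PMSI after this check."""
    if not on_spmsi:
        # one-interval delay: the previous count must already be non-zero,
        # and the current interested-PE count must be non-zero and below
        # the add threshold, before the S-PMSI is created
        return prev_count > 0 and 0 < curr_count < add_thr
    # already on the S-PMSI: tear down once receivers exceed the delete threshold
    return curr_count <= del_thr
```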
Changing the thresholds (including enabling or disabling them) is allowed in service. The configuration change is evaluated at the next periodic threshold evaluation.
The explicit tracking procedures follow RFC 6513/6514 with clarification and wildcard S-PMSI explicit tracking extensions as described in IETF Draft: draft-dolganow-l3vpn-expl-track-00.
Migration from existing Rosen implementation
The existing Rosen implementation remains compatible, to provide an easy migration path.
The following migration procedures are supported:
Upgrade all the PE nodes that need to support MVPN to the newer release.
The old configuration is converted automatically to the new style.
Node by node, enable the MCAST-VPN address family for BGP and enable auto-discovery using BGP.
Change PE-to-PE signaling to BGP.
Policy-based S-PMSI
SR OS creates a single Selective P-Multicast Service Interface (S-PMSI) per multicast stream: (S,G) or (*,G). To better manage bandwidth allocation in the network, multiple multicast streams from the same root node are often bundled into a single, multistream S-PMSI. Network bandwidth is usually managed per package or group of packages, instead of per channel.
Multi-stream S-PMSI supports a single S-PMSI for one or more IPv4 (C-S, C-G) or IPv6 (C-S, C-G) streams. Multiple multicast streams starting from the same VPRN going to the same set of leafs can use a single S-PMSI. Multi-stream S-PMSIs can:
carry exclusively IPv4 or exclusively IPv6, or a mix of channels
coexist with a single group S-PMSI
To create a multistream S-PMSI, an S-PMSI policy needs to be configured in the VPN context on the source node. This policy maps multiple (C-S, C-G) streams to a single S-PMSI. Because this configuration is done per MVPN, multiple VPNs can have identical policies, each configured for its own VPN context.
When mapping a multicast stream to a multistream S-PMSI policy, the data traverses the S-PMSI without first using the I-PMSI. (Before this feature, when a multicast stream was sourced, the data used the I-PMSI first until a configured threshold was met. After this, if the multicast data exceeded the threshold, it would signal the S-PMSI tunnel and switch from I-PMSI to S-PMSI.)
For multistream S-PMSI, if the policy is configured and the multicast data matches the policy rules, the multistream S-PMSI is used from the start without using the default I-PMSI.
Multiple multistream S-PMSI policies can be assigned to a specific S-PMSI configuration. In this case, the policies act as a linked list: the first (lowest index) matching multistream S-PMSI policy is used for that specific stream.
The rules for matching a multistream S-PMSI on the source node are listed here.
S-PMSI to (C-S, C-G) mapping on Source-PE, in sequence:
The multistream S-PMSI policies are evaluated starting from the lowest numerical policy index; this allows the feature to be enabled in the service when per-(C-S, C-G) stream configuration is present. Only entries that are not shut down are evaluated. The first multistream S-PMSI (the lowest policy index) that the (C-S, C-G) stream maps to is selected.
If the (C-S, C-G) does not map to any of the multistream S-PMSIs, per-(C-S, C-G) S-PMSIs are used for transmission if the (C-S, C-G) maps to an existing S-PMSI (based on data thresholds).
If no S-PMSI can be used, the default PMSI is used.
To address multistream S-PMSI P-tunnel failure, if an S-PMSI P-tunnel is not available, a default PMSI tunnel is used. When an S-PMSI tunnel fails, all (C-S, C-G) streams using this multistream S-PMSI move to the default PMSI. The groups move back to S-PMSI after the S-PMSI tunnel is restored.
Supported MPLS tunnels
Multistream S-PMSI is configured in the auto-discovery default context (that is, NG-MVPN). It supports all existing per-mLDP/RSVP-TE P2MP S-PMSI tunnel functionality for multistream S-PMSI LSP templates (RSVP-TE P-instance).
Per-multicast-group statistics are not available for multistream S-PMSIs at the S-PMSI level.
GRE tunnels are not supported for multistream S-PMSI.
Supported multicast features
S-PMSI is supported with PIM ASM and PIM SSM in C-instances.
The multistream S-PMSI model uses a BSR RP co-located with the source PE, or an RP between the source PE and the multicast source, that is, upstream of receivers. Either of the BSR signaling type options, unicast or spmsi, can be deployed as applicable.
The model also supports other RP types.
In-service changes to multistream S-PMSI
The user can change the mapping in service; that is, the user can move active streams (C-S, C-G) from one S-PMSI to another using the configuration, or from the default PMSI to the S-PMSI, without having to stop data transmission or having to disable a PMSI.
The change is performed by moving a (C-S, C-G) stream from a per-group S-PMSI to a multistream S-PMSI (or vice versa), or by moving a (C-S, C-G) stream from one multistream S-PMSI to another.
During re-mapping, a changed (C-S, C-G) stream is first moved to the default PMSI before it is moved to a new S-PMSI, regardless of the type of move. Unchanged (C-S, C-G) streams remain on their existing PMSIs.
Any change to a multistream S-PMSI policy, or to a preferred multistream S-PMSI policy (for example, changing an index to a value less than or equal to the current policy index), should be performed during a maintenance window. Failure to perform these types of changes in a maintenance window could cause a traffic outage.
Configuration example
In the following example, two policies are created on the source node: multistream S-PMSI 1 and multistream S-PMSI 10.
Multicast streams with a group in 224.0.0.0/24 and a source in 192.0.2.0/24 map to policy 1. Streams with a group in 224.0.0.0/24 and a source in 192.0.1.0/24 map to policy 10.
MD-CLI
[ex:/configure service vprn "5" mvpn]
A:admin@node-2# info
    c-mcast-signaling bgp
    auto-discovery {
        type bgp
    }
    provider-tunnel {
        inclusive {
            mldp {
                admin-state enable
            }
        }
        selective {
            auto-discovery true
            data-threshold {
                group-prefix 239.0.0.0/8 {
                    threshold 1
                }
                group-prefix 239.70.1.0/24 {
                    threshold 1
                }
            }
            multistream-spmsi 1 {
                admin-state enable
                group-prefix 224.0.0.0/24 source-prefix 192.0.2.0/24 { }
            }
            multistream-spmsi 10 {
                admin-state enable
                group-prefix 224.0.0.0/24 source-prefix 192.0.1.0/24 { }
            }
            mldp {
                admin-state enable
            }
        }
    }
classic CLI
A:node-2:>config>service>vprn>mvpn# info
            auto-discovery default
            c-mcast-signaling bgp
            provider-tunnel
                inclusive
                    mldp
                        no shutdown
                    exit
                exit
                selective
                    mldp
                        no shutdown
                    exit
                    no auto-discovery-disable
                    data-threshold 239.0.0.0/8 1
                    data-threshold 239.70.1.0/24 1
                    multistream-spmsi 1 create
                        group 224.0.0.0/24
                            source 192.0.2.0/24
                        exit
                    exit
                    multistream-spmsi 10 create
                        group 224.0.0.0/24
                            source 192.0.1.0/24
                        exit
                    exit
                exit
            exit
Policy-based data MDT
A single data MDT can transport one or more IPv4 (C-S, C-G) streams. This allows multiple multicast streams starting from the same VPRN going to the same set of leafs to use a single data MDT. Characteristics of an MDT include:
a multistream data MDT can carry IPv4 only
a multistream data MDT can coexist with a single-group data MDT
a default MDT must be configured
To create a multistream data MDT, a data MDT policy must be configured in the MVPN context on the source node. This policy maps multiple (C-S, C-G) streams to a single data MDT. Because this configuration is per MVPN, multiple VPNs can have identical policies configured, each for its own VPN context.
When a multicast stream is mapped to a multistream data MDT policy, the data traverses the default MDT first. The data delay timer is used to switch the data from the default MDT to the multistream data MDT.
When the multistream data MDT is deleted, the traffic switches back to the default MDT.
In some cases, when a new multistream data MDT that is better suited is configured, some streams may prefer this new multistream data MDT. To switch, the traffic first moves to the default MDT and then to the new multistream data MDT.
MDT data can be configured as SSM or ASM.
There can be multiple multistream policies assigned to a single data MDT configuration. In this case, the policies act as a linked list: the first (lowest index) matching multistream policy is used for that specific stream. The following are the rules for matching a multistream data MDT to a (C-S, C-G) mapping on a source PE (in order):
The multistream policies are evaluated starting from the lowest numerical policy index; this enables the feature in service when per-(C-S, C-G) configuration is present (only entries that are not shut down are evaluated). The first multistream data MDT (the lowest policy index) that the (C-S, C-G) maps to is selected.
If (C-S, C-G) does not map to any of the multistream data MDTs, per-(C-S, C-G) single data MDTs are used if a (C-S, C-G) maps to an existing MDT based on data-thresholds.
The default MDT is used if no policy matches to a data MDT.
When a (C-S, C-G) arrives, it starts on the default MDT but can switch to a data MDT when the data delay interval expires. It does not check the data threshold before switching.
When going from one multistream data MDT to a more suitable one, the traffic first switches to the default MDT and then switches to the new multistream data MDT based on the data-delay-interval.
When a data MDT tunnel fails, all (C-S, C-G)s using this multistream data MDT move to the default MDT, and the groups move back to the data MDT when it is restored.
MVPN (NG-MVPN) upstream multicast hop fast failover
The MVPN upstream PE or P node fast failover detection method is supported with RSVP P2MP I-PMSI only. A receiver PE achieves fast upstream failover based on its capability to subscribe to the multicast flow from multiple UMH nodes and to monitor the health of the upstream PE and intermediate P nodes using a unidirectional multipoint BFD session running over the provider tunnel.
A receiver PE subscribes to the multicast flow from multiple upstream PE nodes so that an active redundant multicast flow is available during a failure of the primary flow. The active redundant multicast flow from the standby upstream PE allows instant switchover of the multicast flow when the primary multicast flow fails.
Faster detection of a multicast flow failure is achieved by tracking the unidirectional multipoint BFD sessions enabled on the provider tunnel. Multipoint BFD sessions must be configured with a 10 ms transmit interval on the sender (root) PE to achieve sub-50 ms fast failover on the receiver (leaf) PE.
Configure the tunnel-status command option of the following command on the receiver PE for upstream fast failover.
configure service vprn mvpn umh-selection
Primary and standby upstream PE pairs must be configured on the receiver PE to enable active redundant multicast flow to be received from the standby upstream PE.
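As a minimal sketch, the tunnel-status option referenced above could be enabled on the receiver PE as follows (MD-CLI; the service ID is illustrative):

```
[ex:/configure service vprn "5" mvpn]
A:admin@node-2# umh-selection tunnel-status
```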
Multicast VPN extranet
Multicast VPN extranet distribution allows multicast traffic to flow across different routing instances. A routing instance that receives a PIM/IGMP join but cannot reach the multicast source directly within its own instance is the receiver routing instance (receiver C-instance). A routing instance that contains the multicast source and accepts PIM/IGMP joins from other routing instances is the source routing instance (source C-instance). A routing instance that has neither source nor receivers but is used in the core is a transit instance (transit P-instance). The following subsections detail the supported functionality.
Multicast extranet for Rosen MVPN for PIM SSM
Multicast extranet is supported for Rosen MVPN with MDT SAFI. Extranet is supported for IPv4 multicast stream for default and data MDTs (PIM and IGMP).
The following extranet cases are supported:
local replication into a receiver VRF from a source VRF on a source PE
transit replication from a source VRF onto a tunnel of a transit core VRF on a source PE
A source VRF can replicate its streams into multiple core VRFs as long as any specific stream from source VRF uses a single core VRF (the first tunnel in any core VRF on which a join for the stream arrives). Streams with overlapping group addresses (same group address, different source address) are supported in the same core VRF.
remote replication from source or transit VRF into one or more receiver VRFs on receiver PEs
multiple replications from multiple source or transit VRFs into a receiver VRF on receiver PEs
Rosen MVPN extranet requires routing information exchange between the source VRF and the receiver VRF based on route export or import policies:
Routing information for multicast sources must be exported using an RT export policy from the source VRF instance.
Routing information must be imported into the receiver or transit VRF instance using an RT import policy.
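As a sketch of this route exchange, assuming the standard vrf-target command (the service IDs and route-target values are illustrative), in classic CLI:

```
# Source VRF: export routes to the multicast sources with an extranet route target
A:node-2# configure service vprn 1
A:node-2>config>service>vprn# vrf-target export target:64496:100 import target:64496:1

# Receiver VRF: import the source-VRF routes in addition to its own target
A:node-2# configure service vprn 2
A:node-2>config>service>vprn# vrf-target export target:64496:2 import target:64496:100
```

With this mapping, routes to the multicast sources exported from VPRN 1 become reachable in VPRN 2, satisfying the extranet RPF requirement described above.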
The following are restrictions:
The source VRF instance and receiver VRF instance of extranet must exist on a common PE node (to allow local extranet mapping).
SSM translation is required for IGMP (C-*, C-G).
An I-PMSI route cannot be imported into multiple VPRNs, and NG-MVPN routes do not need to be imported into multiple VPRNs.
In Multicast VPN traffic flow, VPRN-1 is the source VPRN instance and VPRN-2 and VPRN-3 are receiver VPRN instances. The PIM/IGMP JOIN received on VPRN-2 or VPRN-3 is for (S1, G1) multicast flow. Source S1 belongs to VPRN-1. Because of the route export policy in VPRN-1 and the import policy in VPRN-2 and VPRN-3, the receiver host in VPRN-2 or VPRN-3 can subscribe to the stream (S1, G1).
Multicast extranet for NG-MVPN for PIM SSM
Multicast extranet is supported for ng-MVPN with IPv4 RSVP-TE and mLDP I-PMSIs and S-PMSIs including (C-*, C-*) S-PMSI support where applicable. Extranet is supported for IPv4 C-multicast traffic (PIM/IGMP joins).
The following extranet cases are supported:
local replication into a receiver C-instance MVPNs on a source PE from a source P-instance MVPN
remote replication from P-instance MVPN into one or more receiver C-instance MVPNs on receiver PEs
multiple replications from multiple source/transit P-instance MVPNs into a receiver C-instance MVPN on receiver PEs
Multicast extranet for ng-MVPN, similarly to extranet for Rosen MVPN, requires routing information exchange between source ng-MVPN and receiver ng-MVPN based on route export and import policies. Routing information for multicast sources is exported using an RT export policy from a source ng-MVPN instance and imported into a receiver ng-MVPN instance using an RT import policy. S-PMSI/I-PMSI establishment and C-multicast route exchange occurs in a source ng-MVPN P-instance only (import and export policies are not used for MVPN route exchange). Sender-only functionality must not be enabled for the source/transit ng-MVPN on the receiver PE. It is recommended to enable receiver-only functionality on a receiver ng-MVPN instance.
The following are restrictions:
Source P-instance MVPN and receiver C-instance MVPN must reside on the receiver PE (to allow local extranet mapping).
SSM translation is required for IGMP (C-*, C-G).
Multicast extranet with per-group mapping for PIM SSM
The architecture displayed in the preceding figures requires a source routing instance MVPN to place its multicast streams into one or more transit core routing instance MVPNs (each stream mapping to a single transit core instance only). It also requires receivers within each receiver routing instance MVPN to know which transit core routing instance MVPN they need to join for each of the multicast streams. To achieve this functionality, transit replication from a source routing instance MVPN onto a tunnel of a transit core routing instance MVPN on a source PE (see earlier sub-sections for MVPN topologies supporting transit replication on source PEs) and per-group mapping of multicast groups from receiver routing instance MVPNs to transit core routing instance MVPNs (as defined below) are required.
For per-group mapping on a receiver PE, the user must configure a receiver routing instance MVPN per-group mapping to one or more source/transit core routing instance MVPNs. The mapping allows propagation of PIM joins received in the receiver routing instance MVPN into the core routing instance MVPN defined by the map. All multicast streams sourced from a single multicast source address are always mapped to a single core routing instance MVPN for a specific receiver routing instance MVPN (multiple receiver MVPNs can use different core MVPNs for groups from the same multicast source address). If the per-group map in a receiver MVPN maps multicast streams sourced from the same multicast source address to multiple core routing instance MVPNs, the first PIM join processed for those streams selects the core routing instance MVPN to be used for all multicast streams from that source address for this receiver MVPN. PIM joins for streams sourced from the source address but not carried by the selected core VRF MVPN instance remain unresolved. When a PIM join or prune is received in a receiver routing instance MVPN with per-group mapping configured, and no mapping is defined for the PIM join's group address, non-extranet processing applies when resolving how to forward the PIM join/prune.
The main attributes for per-group SSM extranet mapping on receiver PE include support for:
Rosen MVPN with MDT SAFI. RFC6513/6514 NG-MVPN with IPv4 RSVP-TE/mLDP in P-instance (a P-instance tunnel must be in the same VPRN service as multicast source)
IPv4 PIM SSM
IGMP (C-S, C-G), and for IGMP (C-*, C-G) using SSM translation
a receiver routing instance MVPN to map groups to multiple core routing instance MVPNs
in-service changes of the map to a different transit/source core routing instance (this is service affecting)
The following are restrictions:
When a receiver routing instance MVPN is on the same PE as a source routing instance MVPN, basic extranet functionality and not per-group (C-S, C-G) mapping must be configured (extranet from receiver routing instance to core routing instance to source routing instance on a single PE is not supported).
Local receivers in the core routing instance MVPN are not supported when per-group mapping is deployed.
Receiver routing instance MVPN that has per-group mapping enabled cannot have tunnels in its OIF lists.
Per-group mapping is blocked if GRT/VRF extranet is configured.
Multicast GRT-source/VRF-receiver extranet with per group mapping for PIM SSM
Multicast GRT-source/VRF-receiver (GRT/VRF) extranet with per-group mapping allows multicast traffic to flow from GRT into VRF MVPN instances. A VRF routing instance that receives a PIM/IGMP join but cannot reach the source of the multicast stream directly within its own instance is the receiver routing instance. A GRT instance that has the sources of multicast streams and accepts PIM joins from other VRF MVPN instances is the source routing instance.
GRT/VRF extranet shows an example deployment.
Routing information is exchanged between the GRT and VRF receiver MVPN instances of the extranet by enabling grt-extranet under a receiver MVPN PIM configuration for all or a subset of multicast groups. When enabled, multicast receivers in a receiver routing instance can subscribe to streams from any multicast source node reachable in the GRT source instance.
The main feature attributes are:
GRT/VRF extranet can be performed on all streams or on a configured group of prefixes within a receiver routing instance.
The GRT instance requires classic Rosen multicast.
IPv4 PIM joins are supported in receiver VRF instances.
Local receivers using IGMP: (C-S, C-G) and (C-*, C-G) using SSM translation are supported.
The feature is blocked if a per-group mapping extranet is configured in receiver VRF.
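A hypothetical MD-CLI sketch of enabling grt-extranet in a receiver VRF follows; the service ID, the group prefix, and the group-prefix leaf name are illustrative assumptions, not confirmed syntax:

```
[ex:/configure service vprn "2" pim ipv4]
A:admin@node-2# info
    grt-extranet {
        group-prefix 232.1.0.0/16 { }
    }
```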
Multicast extranet with per-group mapping for PIM ASM
Multicast extranet with per-group mapping for PIM ASM allows multicast traffic to flow from a source routing instance to a receiver routing instance when a PIM ASM join is received in the receiver routing instance.
Multicast extranet with per group PIM ASM mapping depicts PIM ASM extranet map support.
PIM ASM extranet is achieved by local mapping from the receiver to source routing instances on a receiver PE. The mapping allows propagation of anycast RP PIM register messages between the source and receiver routing instances over an auto-created internal extranet interface. This PIM register propagation allows the receiver routing instance to resolve PIM ASM joins to multicast sources and to propagate PIM SSM joins over an auto-created extranet interface to the source routing instance. PIM SSM joins are then propagated toward the multicast source within the source routing instance.
The following MVPN topologies are supported:
Rosen MVPN with MDT SAFI: a local replication on a source PE and multiple-source/multiple-receiver replication on a receiver PE
RFC 6513/6514 NG-MVPN (including RFC 6625 (C-*, C-*) wildcard S-PMSI): a local replication on a source PE and a multiple source/multiple receiver replication on a receiver PE
Extranet for GRT-source/VRF receiver with a local replication on a source PE and a multiple-receiver replication on a receiver PE
Locally attached receivers are supported without SSM mapping.
To achieve the extranet replication:
Configure in one of the following contexts, as applicable, the local PIM ASM mapping on a receiver PE from a receiver routing instance to a source routing instance.
- MD-CLI
configure service vprn mvpn rpf-select core-mvpn
configure service vprn pim ipv4 grt-extranet
- classic CLI
configure service vprn mvpn rpf-select core-mvpn
configure service vprn pim grt-extranet
Configure an anycast RP mesh between source and receiver PEs in the source routing instance.
The following are restrictions:
This feature is supported for IPv4 multicast.
The multicast source must reside in the source routing instance the ASM map points to on a receiver PE (the deployment of transit replication extranet from source instance to core instance on Source PE with ASM map extranet from receiver instance to core instance on a receiver PE is not supported).
A specific multicast group can be mapped in a receiver routing instance using either PIM SSM mapping or PIM ASM mapping but not both.
A specific multicast group cannot map to multiple source routing instances.
Non-congruent unicast and multicast topologies for multicast VPN
Users who prefer to keep unicast and multicast traffic on separate links in a network have the option to maintain two separate instances of the route table (unicast and multicast) per VPRN.
Multicast BGP can be used to advertise separate multicast routes using the Multicast NLRI (SAFI 2) on the PE-CE link within a VPRN instance. Multicast routes maintained per VPRN instance can be propagated between PEs using the BGP Multicast-VPN NLRI (SAFI 129).
SR OS supports the option to perform the RPF check per VPRN instance using the multicast route table, the unicast route table, or both.
Non-congruent unicast and multicast topology is supported with NG-MVPN. Draft Rosen is not supported.
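As a sketch, the per-VPRN RPF table selection could be configured as follows (classic CLI; the rpf-table option names rtable-m, rtable-u, and both are assumptions):

```
A:node-2# configure service vprn 1 pim
A:node-2>config>service>vprn>pim# rpf-table both
```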
Automatic discovery of Group-to-RP mappings (auto-RP)
Auto-RP is a proprietary group discovery and mapping mechanism for IPv4 PIM that is described in cisco-ipmulticast/pim-autorp-spec, Auto-RP: Automatic discovery of Group-to-RP mappings for IP multicast. The functionality is similar to the IETF standard bootstrap router (BSR) mechanism that is described in RFC 5059, Bootstrap Router (BSR) Mechanism for Protocol Independent Multicast (PIM), to dynamically learn about the availability of Rendezvous Points (RPs) in a network. Use the following command to configure the router as an RP-mapping agent that listens to the CISCO-RP-ANNOUNCE (224.0.1.39) group and caches the announced mappings:
- MD-CLI
configure router pim rp ipv4 auto-rp-discovery
- classic CLI
configure router pim rp auto-rp-discovery
The RP-mapping agent then periodically sends out RP-mapping packets to the CISCO-RP-DISCOVERY (224.0.1.40) group. SR OS supports version 1 of the auto-RP specification, so the ability to deny RP-mappings by advertising negative group prefixes is not supported.
PIM dense-mode (PIM-DM) as described in RFC 3973, Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised), is used for the auto-RP groups to support multihoming and redundancy. The RP-mapping agent supports announcing, mapping, and discovery functions; candidate RP functionality is not supported.
Auto-RP is supported for IPv4 in multicast VPNs and in the global routing instance. Either BSR or auto-RP for IPv4 can be configured; the two mechanisms cannot be enabled together. BSR for IPv6 and auto-RP for IPv4 can be enabled together. In a multicast VPN, auto-RP cannot be enabled together with sender-only or receiver-only multicast distribution trees (MDTs), or wildcard S-PMSI configurations that could block flooding.
IPv6 MVPN support
IPv6 multicast support in SR OS allows users to offer customers an IPv6 multicast MVPN service. An IPv4 mLDP or RSVP-TE core is used to carry IPv6 C-multicast traffic inside IPv4 mLDP or RSVP-TE provider tunnels (P-tunnels). The IPv6 customer multicast on a specific MVPN can be blocked, enabled on its own, or enabled in addition to IPv4 multicast, per PE or per interface. When both IPv4 and IPv6 multicast are enabled for a specific MVPN, a single tree is used to carry both IPv6 and IPv4 traffic. IPv6 MVPN example shows an example of a user with an IPv4 MPLS backbone providing IPv6 MVPN service to Customer 1 and Customer 2.
SR OS IPv6 MVPN multicast implementation provides the following functionality:
IPv6 C-PIM-SM (ASM and SSM)
MLDv1 and MLDv2
SSM mapping for MLDv1
I-PMSI and S-PMSI using IPv4 P2MP mLDP p-tunnels
I-PMSI and S-PMSI using IPv4 P2MP RSVP p-tunnels
BGP auto-discovery
PE-PE transmission of C-multicast routing using BGP mvpn-ipv6 address family
IPv6 BSR/RP functions on functional par with IPv4 (auto-RP using IPv4 only)
Embedded RP
Inter-AS Option A
The following known restrictions exist for IPv6 MVPN support:
Non-congruent topologies are not supported.
IPv6 is not supported in MCAC.
If both IPv4 and IPv6 multicast are enabled, the per-MVPN multicast limits apply to the combined IPv4 and IPv6 multicast traffic, because it is carried in a single PMSI. For example, IPv4 and IPv6 S-PMSIs are counted against a single S-PMSI maximum per MVPN.
IPv6 Auto-RP is not supported.
Multicast core diversity for Rosen MDT_SAFI MVPNs
Multicast core diversity depicts Rosen MVPN core diversity deployment:
Core diversity allows the user to optionally deploy multicast MVPN in either the default IGP instance or one of two non-default IGP instances to provide, for example, topology isolation or different levels of service. The main feature attributes are as follows:
Rosen MVPN IPv4 multicast with MDT SAFI is supported with default and data MDTs.
Rosen MVPN can use a non-default OSPF or IS-IS instance (using their loopback addresses instead of a system address).
Up to 3 distinct core instances are supported: system + 2 non-default OSPF instances, referred to as "red" and "blue" below.
The BGP connector also uses the non-default OSPF loopback as next hop, allowing Inter-AS Option B/C functionality to work with core diversity as well.
The feature is supported with CSC-VPRN.
On source PEs (PE1: UMH, PE2: UMH in Multicast core diversity), an MVPN is assigned to a non-default IGP core instance as follows:
The MVPN is statically pointed to use one of the non-default "red"/"blue" IGP instance loopback addresses as the source address instead of the system loopback IP.
MVPN export policy is used to change unicast route next-hop VPN address (no longer required as of SR OS Release 12.0.R4 - BGP Connector support for non-default instances).
The above configuration ensures that MDT SAFI and IP-VPN routes for the non-default core instance use the non-default IGP loopback instead of the system IP. This ensures PIM advertisements/joins run in the correct core instance, and GRE tunnels for multicast can be set up using, and terminated on, non-system IP.
If a BGP export policy is used to change the unicast route next-hop VPN address, unicast traffic must be forwarded in the non-default "red" or "blue" core instance, and LDP or RSVP (terminating on non-system IP) must be used. GRE unicast traffic termination on non-system IP is not supported, and any GRE traffic arriving at the PE in the "blue" or "red" instance destined for a non-default IGP loopback IP is forwarded to the CPM (ACL or CPM filters can be used to prevent the traffic from reaching the CPM). This limitation does not apply if the BGP connector attribute is used to resolve the multicast route.
No configuration is required on non-source PEs.
The following are feature restrictions:
The VPRN instance must be shut down to change the mdt-safi source-address. A CLI rollback that includes a change of the auto-discovery is therefore service impacting.
To reset the mdt-safi source-address to the system IP, the user must first execute no auto-discovery (or auto-discovery default) and then auto-discovery mdt-safi.
Configuring the system IP as a source-address consumes one of the two IP addresses allowed and should therefore be avoided.
Users must configure the correct IGP instance loopback IP addresses within the Rosen MVPN context and must configure correct BGP policies (before Release 12.0.R4) for the feature to operate as expected. There is no verification that the address entered for the MVPN provider tunnel source-address is such an address or is not a system IP address.
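Putting the restrictions above together, a hedged classic CLI sketch of changing the mdt-safi source address follows; the exact placement of source-address under auto-discovery mdt-safi and the IP value are assumptions:

```
A:node-2# configure service vprn 1
A:node-2>config>service>vprn# shutdown
A:node-2>config>service>vprn# mvpn
A:node-2>config>service>vprn>mvpn# auto-discovery mdt-safi source-address 192.0.2.10
A:node-2>config>service>vprn>mvpn# exit
A:node-2>config>service>vprn# no shutdown
```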
NG-MVPN core diversity
See Logical networks using multi-instance IGP for an operational example of logical networks using Multi-Instance IGP.
SR OS is positioned for multi-instance IGP as a virtualization or migration strategy in numerous cases. One specific application is as a virtual LSR core whereby various topologies are created using separate IGP instances. Specifically, the migration to SR solutions requires the deployment of multi-instance IGP with service migrations. The objective is to more cleanly segregate protocols and service bindings to specific routing instances. NG-MVPN does not currently allow non-system loopbacks to be used for PMSI (for example, the MVPN address family).
The ability to support binding MVPN with mLDP tunnels to different loopback interfaces is one of the main requirements. In addition, assigning these loopback interfaces to different IGP instances creates parallel NG-MVPN services, each running over a separate IGP instance.
This feature has two main components to it:
the ability to advertise an MVPN route via a loopback interface and to generate an mLDP FEC with that loopback interface
the ability to create multiple IGP instances and assign a loopback to each IGP instance; combined with the component above, this creates parallel NG-MVPN services on different IGP instances
NG-MVPN to loopback interface
The system IP address is usually used for management purposes; therefore, it may be necessary to create services to a separate loopback interface. Because of security concerns, many users do not want to create services to the system IP address. See NG-MVPN setup via loopback interface and Intra-AS basic opaque FEC to loopback interface.
The first portion of this feature allows MVPN routes to be advertised with a specific loopback as next hop.
This can be achieved via two methods:
having an IBGP session to the loopback interface of the peer, and using the corresponding local loopback as the local-address
having an IBGP session to the system IP, but using policies to change the next hop of the AD route to the corresponding loopback interface
After the AD routes are advertised via the loopback interface as next hop, PIM generates an mLDP FEC for the loopback interface.
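The first method can be sketched in classic CLI as follows; the group name, AS number, and loopback addresses are illustrative:

```
A:node-2>config>router>bgp# info
    group "ngmvpn-red"
        family vpn-ipv4 mvpn-ipv4
        local-address 198.51.100.1
        peer-as 64496
        neighbor 198.51.100.2
        exit
    exit
```

Here local-address is the local loopback and the neighbor address is the peer's loopback, so AD routes advertised over this session carry the loopback as next hop.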
The preceding configuration allows an NG-MVPN to be established via a specific loopback.
It should be noted that this setup works in a single area or multi-area through an ABR.
NG-MVPN core diversity
In Core diversity with parallel NG-MVPN services on parallel IGP instances, Red and Blue networks correspond to separate IGP instances tied to separate loopback interfaces. In core diversity, MVPN services on the LER are bound to a domain by advertising MP-BGP routes, with next hop set to the appropriate loopback address, and having the receiving LERs resolve those BGP next hops, based on the corresponding IGP instance. All subsequent MVPN BGP routing exchanges must set and resolve the BGP next hop with the configured non-system loopback.
With this feature, an MP-BGP next hop would resolve to a link LDP label which is indirectly associated with a specific IGP instance. Services which advertise BGP routes with next hop Red loopback would result in traffic flowing over the Red network using LDP and Blue loopback would result in traffic flowing over Blue network. See Core diversity with parallel NG-MVPN services on parallel IGP instances.
Most importantly, the common routes in each IGP instance must have a unique and active label. This is required to ensure that the same route advertised in two different IGP domains does not resolve to the same label. An LDP local LSR ID can be used to ensure that FECs and label mappings are advertised via the right IGP instance.
Core diversity allows the user to optionally deploy multicast NG-MVPN in either default IGP instance or one of the non-default IGP instances to provide, for example, topology isolation or different level of services. The following describes the main feature attributes:
NG-MVPN can use IPv4/IPv6 multicast.
NG-MVPN can use a non-default OSPF or ISIS instance.
Note: This is accomplished by using their loopback addresses instead of a system address.
The BGP connector also uses the non-default OSPF loopback as next hop, via two methods:
- setting the BGP local-address to a loopback interface IP address
- creating a routing policy to set the next hop of an MVPN AD route to the corresponding loopback IP
RSVP-TE/LDP transport tunnels also use non-system IP loopbacks for session creation. As an example, RSVP-TE P2MP LSPs are created to non-system IP loopbacks, and mLDP creates a session via the local LSR ID.
The source address must also be configured for the MVPN auto-discovery default.
On the source PEs, a NG-MVPN is assigned to a non-default IGP core instance as follows:
NG-MVPN is statically configured to use one of the non-default Red/Blue IGP instances loopback addresses as source address instead of system loopback IP.
MVPN export policy is used to change unicast route next-hop VPN address (BGP Connector support for non-default instances).
Alternatively, the BGP local-address can be set to the correct loopback interface assigned for that specific instance.
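As a sketch of the local-address alternative just described, the following MD-CLI fragment binds the MVPN BGP sessions for a non-default instance to its loopback address (the group name "mvpn-red" and the address 10.1.1.1 are hypothetical):

```
[ex:/configure router "Base" bgp]
A:admin@node-2# info
    group "mvpn-red" {
        local-address 10.1.1.1
    }
```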
The preceding configuration ensures that MVPN-IPv4/v6 and IP-VPN routes for the non-default core instance use the non-default IGP loopback instead of the system IP. This ensures MVPN-IPv4/v6 advertisements/joins run in the correct core instance, and that mLDP and P2MP RSVP tunnels (I-PMSI and S-PMSI) for multicast can be set up using, and terminate on, non-system IP.
If a BGP export policy is used to change the unicast route next-hop VPN address, and unicast traffic must be forwarded in the non-default Red or Blue core instance, LDP or RSVP (terminating on non-system IP) must be used.
P2MP RSVP-TE core diversity with UFD for UMH redundancy
Each NG-MVPN can be established over a separate IGP instance for core diversity support. In P2MP RSVP-TE core diversity with UFD for UMH redundancy, there are two IGP instances, Blue and Red, each having its own dedicated NG-MVPN service. The Blue plane is identified with the system IP address, and the Red plane is identified with a loopback IP address. MVPN Auto-Discovery (AD) routes are advertised and installed in each instance based on their NLRI. To advertise these MVPN AD routes in the corresponding planes, route policies are used to change the next hops of the AD routes. For example, for the Blue plane the AD routes are advertised by default with the system IP address as their next hop, so no route policy is necessary. In the Red plane, a route policy can change the next hop of the corresponding NG-MVPN AD routes to use the loopback IP address.
Core diversity also supports UMH redundancy solutions. In the case of P2MP RSVP-TE, which uses UFD to detect source UMH failure, the UFD packets are generated for each IGP instance (Blue, Red) with the correct source and destination IP addresses belonging to that IGP instance.
UFD packet generation
SR OS generates the UFD packets with the correct source and destination IP addresses for each IGP instance used in the core diversity configuration. SR OS uses the NLRI next-hop information of the AD route for the destination IP address and derives the source IP address from the MVPN autodiscovery default source-address loopback for the UFD packets. This ensures the UFD packets are traversing the appropriate core corresponding to their UMH reachability.
For example, in P2MP RSVP-TE core diversity with UFD for UMH redundancy, the Blue plane UFD packets are generated from the root PE to the leaf PE with the local system IP on the root as the source IP address and the system IP of the leaf as the destination IP address. The Red plane generates the UFD packets with the corresponding loopback IP addresses as the source and destination.
Configuration example
In Core diversity with parallel NG-MVPN services on parallel IGP instances, there are three IGP instances: the default Instance 0, Instance 1, and Instance 2. Each instance binds to its own loopback interface. For Instance 0, the system interface (system IP) is used as the loopback. LDP, RSVP, and MP-BGP need to run between the corresponding loopbacks associated with each instance.
For example, for the Blue Instance 2, both MP-BGP and LDP need to be configured to use the corresponding loopback interface "loopback2", and the next hop for the BGP MVPN-IPv4/IPv6 and VPN-IPv4/IPv6 routes needs to be loopback2.
From a configuration point of view, the following steps need to be performed:
- For mLDP, configure LDP with local-lsr-id set to the instance 2 loopback interface "loopback2", as shown in the following example.
MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
    interface-parameters {
        interface "loopback2" {
            ipv4 {
            }
        }
        interface "to-root" {
            ipv4 {
                local-lsr-id {
                    interface-name "loopback2"
                }
            }
        }
    }
classic CLI
A:node-2>config>router>ldp# info
...
        interface-parameters
            interface "to-root" dual-stack
                ipv4
                    local-lsr-id interface-name "loopback2"
                    no shutdown
                exit
                no shutdown
            exit
            interface "loopback2" dual-stack
                ipv4
                    no shutdown
                exit
                no shutdown
            exit
        exit
        targeted-session
        exit
        no shutdown
- Configure the source address for default auto-discovery.
MD-CLI
[ex:/configure service vprn "2"]
A:admin@node-2# info
...
    mvpn {
        auto-discovery {
            type bgp
            source-address loopback1
        }
        vrf-export {
            policy ["vprnexp100"]
        }
    }

[ex:/configure service vprn "3"]
A:admin@node-2# info
...
    mvpn {
        auto-discovery {
            type bgp
            source-address loopback2
        }
        vrf-export {
            policy ["vprnexp101"]
        }
    }
classic CLI
A:node-2>config>service# info
    vprn 2 name "2" customer 1 create
        ...
        mvpn
            auto-discovery default source-address loopback1
            vrf-export "vprnexp100"
        exit
    exit
    vprn 3 name "3" customer 1 create
        ...
        mvpn
            auto-discovery default source-address loopback2
            vrf-export "vprnexp101"
        exit
- Define the community vprnXXXX for each VPRN using a non-default core instance, and define a policy to tag each VPRN with either a blue or red standard community attribute.
MD-CLI
[ex:/configure policy-options]
A:admin@node-2# info
policy-options {
    community "blue" {
    }
    community "red" {
    }
    community "vprn2" {
        member "target:70:70" { }
    }
    community "vprn3" {
        member "target:80:80" { }
    }
    policy-statement "vprnexp2" {
        entry 10 {
            from {
                protocol {
                    name [direct]
                }
            }
            action {
                action-type accept
                community {
                    add ["vprn2" "red"]
                }
            }
        }
    }
    policy-statement "vprnexp3" {
        entry 10 {
            from {
                protocol {
                    name [direct]
                }
            }
            action {
                action-type accept
                community {
                    add ["vprn3" "blue"]
                }
            }
        }
    }
}
classic CLI
A:node-2>config>router>policy-options# info
----------------------------------------------
            community "red"
            exit
            community "blue"
            exit
            community "vprn2"
                members "target:70:70"
            exit
            community "vprn3"
                members "target:80:80"
            exit
            policy-statement "vprnexp2"
                entry 10
                    from
                        protocol direct
                    exit
                    action accept
                        community add "vprn2" "red"
                    exit
                exit
            exit
            policy-statement "vprnexp3"
                entry 10
                    from
                        protocol direct
                    exit
                    action accept
                        community add "vprn3" "blue"
                    exit
                exit
            exit
----------------------------------------------
- Define a single global BGP policy to change the next hop for red and blue MVPNs.
MD-CLI
[ex:/configure policy-options]
A:admin@node-2# info
    policy-statement "MVPN_CoreDiversity_Exp" {
        entry 10 {
            from {
                community {
                    name "red"
                }
            }
            to {
                protocol {
                    name [bgp-vpn]
                }
            }
            action {
                action-type accept
                next-hop "@loopback1@"
            }
        }
        entry 20 {
            from {
                community {
                    name "blue"
                }
            }
            to {
                protocol {
                    name [bgp-vpn]
                }
            }
            action {
                action-type accept
                next-hop "@loopback2@"
            }
        }
    }
classic CLI
A:node-2>config>router>policy-options# info
            policy-statement "MVPN_CoreDiversity_Exp"
                entry 10
                    from
                        community "red"
                    exit
                    to
                        protocol bgp-vpn
                    exit
                    action accept
                        next-hop @loopback1@
                    exit
                exit
                entry 20
                    from
                        community "blue"
                    exit
                    to
                        protocol bgp-vpn
                    exit
                    action accept
                        next-hop @loopback2@
                    exit
                exit
            exit
- Configure the BGP default MVPN export in the group as required.
MD-CLI
[ex:/configure router "Base"]
A:admin@node-2# info
    bgp {
        group "mvpn" {
            export {
                policy ["MVPN_CoreDiversity_Exp"]
            }
        }
    }
    mpls {
    }
    rsvp {
    }
classic CLI
A:node-2>config>router>bgp$ info
----------------------------------------------
            group "mvpn"
                export "MVPN_CoreDiversity_Exp"
            exit
            no shutdown
----------------------------------------------
- Configure each VPRN to use the correct IGP source address and the correct VRF export policy.
MD-CLI
[ex:/configure service vprn "2" mvpn]
A:admin@node-2# auto-discovery source-address loopback1

*[ex:/configure service vprn "2" mvpn]
A:admin@node-2# vrf-export policy "vprnexp2"

[ex:/configure service vprn "3" mvpn]
A:admin@node-2# auto-discovery source-address loopback2

*[ex:/configure service vprn "3" mvpn]
A:admin@node-2# vrf-export policy "vprnexp3"
classic CLI
*A:node-2>config>service>vprn 2
    mvpn
        auto-discovery default source-address loopback1
        vrf-export "vprnexp2"
*A:node-2>config>service>vprn 3
    mvpn
        auto-discovery default source-address loopback2
        vrf-export "vprnexp3"
NG-MVPN multicast source geo-redundancy
Multicast source geo-redundancy is targeted primarily at MVPN deployments for multicast delivery services such as IPTV. The solution allows the users to configure a list of geographically dispersed redundant multicast sources (with different source IPs) and then, using configured BGP policies, ensure that each Receiver PE (a PE with receivers in its C-instance) selects only a single, most-preferred multicast source for a specific group from the list. Although the data may still be replicated in the P-instance (each multicast source sends (C-S, C-G) traffic onto its I-PMSI or S-PMSI tree), each Receiver PE only forwards data to its receivers from the preferred multicast source. This allows the users to support multicast source geo-redundancy without replication of traffic for each (C-S, C-G) in the C-instance, while allowing fast recovery of service when an active multicast source fails.
Preferred source selection for multicast source geo-redundancy shows an operational example of multicast source geo-redundancy.
The users can configure a list of prefixes for multicast source redundancy per MVPN on Receiver PEs. Up to 8 multicast source prefixes per VPRN are supported. Any multicast source that is not part of the source prefix list is treated as a unique source and automatically joined in addition to joining the most preferred source from the redundant multicast source list.
A Receiver PE selects a single, most-preferred multicast source from the list of pre-configured sources for a specific MVPN during (C-*, C-G) processing as follows:
A single join for the group is sent to the most preferred multicast source from the user-configured multicast source list. Joins to other multicast sources for a specific group are suppressed. The user can see active and suppressed joins on a Receiver PE. Although a join is sent to a single multicast source only, (C-S, C-G) state is created for every source advertising Type-5 S-A route on the Receiver PE.
The most preferred multicast source is a reachable source with the highest local preference for Type-5 SA route based on the BGP policy, as described later in this section.
On a failure of the preferred multicast source, or when a new multicast source with a better local preference is discovered, the Receiver PE joins the new most-preferred multicast source. The outage experienced depends on how quickly the Receiver PE receives the Type-5 S-A route withdrawal or loses the unicast route to the multicast source, and how quickly the network can process joins to the newly selected preferred multicast source.
Local multicast sources on a Receiver PE are not subject to the most-preferred source selection, regardless of whether they are part of the redundant source list.
BGP policy on Type-5 SA advertisements is used to determine the most preferred multicast source based on the best local preference as follows:
- Each Source PE (a PE with multicast sources in its C-instance) tags Type-5 SA routes with a unique standard community attribute using a global BGP policy or an MVPN vrf-export policy. Depending on the multicast topology, the policy may require source-aware tagging. Either all MVPN routes or only Type-5 SA routes can be tagged in the policy (attribute mvpn-type 5). Use the following command to configure the MVPN type to 5:
- MD-CLI
configure policy-options policy-statement entry from mvpn-type
- classic CLI
configure router policy-options policy-statement entry from mvpn-type
- Each Receiver PE has a BGP VRF import policy that sets the local preference using a match on Type-5 SA routes (attribute mvpn-type 5) and the standard community attribute value (as tagged by the Source PEs). Using policy statements that also include a group address match allows the Receiver PEs to select the best multicast source per group. Use the following command to apply the BGP VRF import policy:
configure service vprn mvpn vrf-import
Use the following command to configure the default action to accept; otherwise, all MVPN routes other than those matched by specified entries are rejected:
- MD-CLI
configure policy-options policy-statement default-action action-type
- classic CLI
configure router policy-options policy-statement default-action
In addition, the VRF route target must be configured as a community match condition because the MVPN route target configuration is ignored when the VRF import policy is defined.
Use the following command to configure the VRF route target.
configure service vprn mvpn vrf-target
Use the following command to configure the VRF import policy.
configure service vprn mvpn vrf-import
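Putting the preceding policy requirements together, the following MD-CLI sketch shows a possible Receiver PE VRF import policy; the policy name, community values, and local preference are hypothetical. Entry 10 prefers Type-5 SA routes tagged with the "red" community, entry 20 matches the VRF route target community, and the default action accepts the remaining MVPN routes:

```
[ex:/configure policy-options]
A:admin@node-2# info
    community "red" {
        member "70:100" { }
    }
    community "vprn1" {
        member "target:70:70" { }
    }
    policy-statement "MVPN_SA_Import" {
        entry 10 {
            from {
                mvpn-type 5
                community {
                    name "red"
                }
            }
            action {
                action-type accept
                local-preference 200
            }
        }
        entry 20 {
            from {
                community {
                    name "vprn1"
                }
            }
            action {
                action-type accept
            }
        }
        default-action {
            action-type accept
        }
    }

[ex:/configure service vprn "1" mvpn]
A:admin@node-2# vrf-import policy "MVPN_SA_Import"
```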
The users can change the redundant source list or the BGP policy affecting source selection in service. If such a change of the list or policy results in a new preferred multicast source election, make-before-break is used to join the new source and prune the previously best source.
For correct operation, MVPN multicast source geo-redundancy requires the following:
To maintain the list of eligible multicast sources on Receiver PEs, Source PE routers must generate a Type-5 S-A route even if the Source PE sees no active joins from any receiver for a specific group.
To trigger a switch from the currently active multicast source on a Receiver PE, Source PE routers must withdraw the Type-5 S-A route when the multicast source fails; alternatively, the unicast route to the multicast source must be withdrawn or go down on the Receiver PE.
The MVPN multicast source redundancy solution is supported for the following configurations only. Enabling the feature in unsupported configurations must be avoided.
- NG-MVPN with RSVP-TE or mLDP or PIM with BGP c-multicast signaling in the P-instance. Both I-PMSI and S-PMSI trees are supported.
- IPv4 and IPv6 (C-*, C-G) PIM ASM joins in the C-instance.
- Configuring the use of inter-site shared C-trees is optional. Use the following command to configure the use of inter-site shared C-trees:
configure service vprn mvpn intersite-shared
If intersite-shared is configured, the user must enable generation of Type-5 S-A routes even in the absence of receivers seen on Source PEs (intersite-shared persistent-type5-adv must be enabled).
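For example, assuming the MD-CLI form of the preceding commands, inter-site shared C-trees with persistent Type-5 advertisement could be enabled as follows (the service name and exact leaf syntax are illustrative):

```
[ex:/configure service vprn "1" mvpn]
A:admin@node-2# info
    intersite-shared {
        persistent-type5-adv true
    }
```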
- The Source PEs must be configured as sender-receiver; the Receiver PEs can be configured as sender-receiver or receiver-only.
- The RPs must be on the Source PE side. Static RP, anycast RP, and embedded RP types are supported.
- UMH redundancy can be deployed to protect the Source PE connection to any multicast source. When deployed, UMH selection is executed independently of source selection, after the most preferred multicast source has been chosen. Supported umh-selection command options include highest-ip, hash-based, tunnel-status (not supported for IPv6), and unicast-rt-pref.
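As an illustration, hash-based UMH selection could be enabled with a command of the following assumed form (the service name is hypothetical):

```
[ex:/configure service vprn "1" mvpn]
A:admin@node-2# umh-selection hash-based
```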
Multicast core diversity for Rosen MDT SAFI MVPNs
Multicast core diversity shows a Rosen MVPN core diversity deployment.
Core diversity allows the users to optionally deploy multicast MVPN in either the default IGP instance or one of two non-default IGP instances to provide, for example, topology isolation or different levels of service. The following describes the main feature attributes:
Rosen MVPN IPv4 multicast with MDT SAFI is supported with default and data MDTs.
Rosen MVPN can use a non-default OSPF or ISIS instance (using their loopback addresses instead of a system address).
Up to three distinct core instances are supported: the system instance plus two non-default OSPF instances, as shown in Multicast core diversity.
The BGP Connector also uses non-default OSPF loopback as NH, allowing Inter-AS Option B/C functionality to work with Core diversity as well.
The feature is supported with CSC-VPRN.
On source PEs (PE1: UMH, PE2: UMH in Multicast core diversity), an MVPN is assigned to a non-default IGP core instance as follows:
MVPN is statically configured to use one of the non-default IGP instance loopback addresses as the source address instead of the system loopback IP.
MVPN export policy is used to change the unicast route next-hop VPN address (BGP Connector support for non-default instances).
The preceding configuration ensures that MDT SAFI and IP-VPN routes for the non-default core instance use the non-default IGP loopback instead of the system IP. This ensures PIM advertisements/joins run in the correct core instance, and GRE tunnels for multicast can be set up using, and terminated on, non-system IP. If a BGP export policy is used to change the unicast route next-hop VPN address instead of BGP Connector attribute-based processing, and unicast traffic must be forwarded in non-default core instance 1 or 2, LDP or RSVP (terminating on non-system IP) must be used. GRE unicast traffic termination on non-system IP is not supported; any GRE traffic arriving at the PE in instance 1 or 2, destined for the non-default IGP loopback IP, is forwarded to the CPM (ACL or CPM filters can be used to prevent the traffic from reaching the CPM).
No configuration is required on the non-source PEs.
Known feature restrictions include:
The VPRN instance must be shut down to change the MDT SAFI source address. A CLI rollback that includes changing the autodiscovery is therefore service impacting.
To reset the MDT SAFI source address to the system IP, the user must configure no autodiscovery (or autodiscovery default), then autodiscovery MDT-SAFI.
Configuring the system IP as a source address consumes one of the two IP addresses allowed; therefore, it should not be done.
The users must configure the correct IGP instance loopback IP addresses within the Rosen MVPN context and must configure the correct BGP policies (before Release 12.0.R4) for the feature to operate as expected. There is no verification that the address entered for the MVPN provider tunnel source address is such an address or is not a system IP address.
Inter-AS MVPN
The Inter-AS MVPN feature allows the setup of Multicast Distribution Trees (MDTs) that span multiple Autonomous Systems (ASes). This section focuses on the multicast aspects of the Inter-AS MVPN solution.
To support the inter-AS option for MVPNs, a mechanism is required that allows the setup of an inter-AS multicast tree across multiple ASes. Because of limited routing information across AS domains, it is not possible to set up the tree directly to the source PE. Inter-AS VPN Option A does not require anything specific for inter-AS support, because customer instances terminate on the ASBR and each customer instance is handed over to the other AS domain via a unique instance. This approach allows the users to provide full isolation of ASes, but it is the least scalable solution, as customer instances across the network have to exist on the ASBR.
Inter-AS MVPN Option B allows the users to improve upon Option A scalability while still maintaining AS isolation. Inter-AS MVPN Option C further improves inter-AS scaling but requires the exchange of inter-AS routing information, and is therefore typically deployed when common management exists across all ASes involved in the inter-AS MVPN. The following subsections provide further details on Inter-AS Option B and Option C functionality.
BGP connector attribute
The BGP connector attribute is a transitive attribute (unchanged by intermediate BGP speaker nodes) that is carried with VPNv4 advertisements. It specifies the address of the source PE node that originated the VPNv4 advertisement.
With Inter-AS MVPN Option B, the BGP next-hop is modified by the local and remote ASBRs during re-advertisement of VPNv4 routes. On a BGP next-hop change, information about the originator of the prefix is lost as the advertisement reaches the receiver PE node.
The BGP connector attribute keeps the source PE address information available to the receiver PE, so that the receiver PE can associate the VPNv4 advertisement with the corresponding source PE.
PIM RPF vector
In the case of Inter-AS MVPN Option B, routing information toward the source PE is not available in a remote AS domain, because IGP routes are not exchanged between ASes. Routers in an AS other than that of a source PE have no routes available to reach the source PE, and therefore PIM JOINs would never be sent upstream. To enable setup of an MDT toward a source PE, BGP next-hop (ASBR) information from that PE's MDT-SAFI advertisement is used to fake a route to the PE. If the BGP next-hop is a PIM neighbor, the PIM JOINs are sent upstream. Otherwise, the PIM JOINs are sent to the immediate IGP next-hop (P) to reach the BGP next-hop. Because the IGP next-hop does not have a route to the source PE, the PIM JOIN would not be propagated forward unless it carried the extra information contained in the RPF Vector.
In the case of Inter-AS MVPN Option C, unicast routing information toward the source PE is available to remote AS PEs/ASBRs as BGP 8277 tunnels, but is unavailable at remote P routers. If the tunneled next-hop (ASBR) is a PIM neighbor, the PIM JOINs are sent upstream. Otherwise, the PIM JOINs are sent to the immediate IGP next-hop (P) to reach the tunneled next-hop. Because the IGP next-hop does not have a route to the source PE, the PIM JOIN would not be propagated forward unless it carried the extra information contained in the RPF Vector.
To enable setup of MDT toward a source PE, PIM JOIN therefore carries BGP next hop information in addition to source PE IP address and RD for this MVPN. For option-B, both these pieces of information are derived from MDT-SAFI advertisement from the source PE. For option-C, both these pieces of information are obtained from the BGP tunneled route.
Use the following command to add the RPF vector to a PIM join at a PE router.
configure router pim rpfv
Also use the preceding command to configure P routers and ASBR routers to allow RPF Vector processing. If not configured, the RPF Vector is dropped and the PIM JOIN is processed as if the RPF Vector were not present.
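Assuming the MD-CLI rendering of the preceding command, RPF Vector processing for MVPN could be enabled on the P and ASBR routers as follows (boolean leaf syntax assumed):

```
[ex:/configure router "Base" pim]
A:admin@node-2# info
    rpfv {
        mvpn true
    }
```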
For further details about RPF Vector processing, see RFC 5496, RFC 5384, and RFC 6513.
Inter-AS MVPN Option B
Inter-AS Option B is supported for Rosen MVPN PIM SSM using BGP MDT SAFI, PIM RPF Vector and BGP Connector attribute. Inter-AS Option B default MDT setup is an example of a default MDT setup.
SR OS inter-AS Option B is designed to be standards-compliant based on the following RFCs:
- RFC 5384, The Protocol Independent Multicast (PIM) Join Attribute Format
- RFC 5496, The Reverse Path Forwarding (RPF) Vector TLV
- RFC 6513, Multicast in MPLS/BGP IP VPNs
The SR OS implementation was also designed to interoperate with older routers' inter-AS implementations that do not comply with RFC 5384 and RFC 5496.
Inter-AS MVPN Option C
Inter-AS Option C is supported for Rosen MVPN PIM SSM using BGP MDT SAFI and the PIM RPF Vector. Inter-AS Option C default MDT setup depicts a default MDT setup.
Additional restrictions for Inter-AS MVPN Option B and C support are the following:
- Inter-AS MVPN Option B is not supported with duplicate PE addresses.
- For Inter-AS Option C, BGP 8277 routes are installed into the unicast RTM (rtable-u) unless the routes are installed by some other means into the multicast RTM (rtable-m). Because Option C does not build core MDTs, rpf-table must be configured to rtable-u or both.
Additional Cisco interoperability notes are the following:
The SR OS implementation was designed to interoperate with Cisco routers' inter-AS implementations that do not comply with RFC 5384 and RFC 5496.
The following command configures RPF Vector processing for Inter-AS Rosen MVPN Option B and Option C.
configure router pim rpfv mvpn
For interoperability, when such a configuration is used, Cisco routers need to be configured to include RD in an RPF vector using the following command: ip multicast vrf vrf-name rpf proxy rd vector.
When Cisco routers are not configured to include RD in an RPF vector, use the following commands to configure the SR OS router (if supported):
- MD-CLI
configure router pim rpfv core
configure router pim rpfv mvpn
- classic CLI
configure router pim rpfv core mvpn
PIM joins received can be a mix of core and mvpn RPF vectors.
NG-MVPN non-segmented inter-AS solution
This feature allows multicast services to use segmented protocols and span them over multiple autonomous systems (ASs), as done in unicast services. As IP VPN or GRT services span multiple IGP areas or multiple ASs, either because of a network designed to deal with scale or as result of commercial acquisitions, the users may require Inter-AS VPN (unicast) connectivity. For example, an Inter-AS VPN can break the IGP, MPLS and BGP protocols into access segments and core segments, allowing higher scaling of protocols by segmenting them into their own islands. SR OS also allows for similar provision of multicast services and for spanning these services over multiple IGP areas or multiple ASs.
For multicast VPN (MVPN), SR OS previously supported Inter-AS Model A/B/C for Rosen MVPN; however, when MPLS was used, only Model A was supported for Next Generation Multicast VPN (NG-MVPN) and d-mLDP signaling.
For unicast VPRNs, the Inter-AS or Intra-AS Option B and C breaks the IGP, BGP and MPLS protocols at ABR routers (in case of multiple IGP areas) and ASBR routers (in case of multiple ASs). At ABR and ASBR routers, a stitching mechanism of MPLS transport is required to allow transition from one segment to next, as shown in Unicast VPN Option B with segmented MPLS and Unicast VPN Option C with segmented MPLS.
In Unicast VPN Option B with segmented MPLS, the Service Label (S) is stitched at the ASBR routers.
In Unicast VPN Option C with segmented MPLS, the 8277 BGP Label Route (LR) is stitched at ASBR1 and ASBR3. At ASBR1, the LR1 is stitched with LR2, and at ASBR3, the LR2 is stitched with TL2.
Previously, in the case of NG-MVPN, segmenting an LDP MPLS tunnel at ASBRs or ABRs was not possible. As such, RFC 6512 and RFC 6513 used a non-segmented mechanism to transport the multicast data over P-tunnels end-to-end through ABR and ASBR routers. LDP signaling needed to be present and possible between two ABR routers or two ASBR routers in different ASs.
SR OS now has d-mLDP non-segmented intra-AS and inter-AS signaling for NG-MVPN and GRT multicast. The non-segmented solution for d-mLDP is possible for inter-ASs as Option B and C.
Non-segmented d-mLDP and inter-AS VPN
There are three types of VPN inter-AS solutions: Option A, Option B, and Option C.
Options B and C use recursive opaque types 8 and 7, respectively, from Recursive opaque types.
Opaque type | Opaque name | RFC | SR OS use
---|---|---|---
1 | Basic Type | RFC 6388 | VPRN Local AS
3 | Transit IPv4 | RFC 6826 | IPv4 multicast over mLDP in GRT
4 | Transit IPv6 | RFC 6826 | IPv6 multicast over mLDP in GRT
7 | Recursive Opaque (Basic Type) | RFC 6512 | Inter-AS Option C MVPN over mLDP
8 | Recursive Opaque (VPN Type) | RFC 6512 | Inter-AS Option B MVPN over mLDP
Inter-AS Option A
In Inter-AS Option A, ASBRs communicate using VPN access interfaces, which need to be configured under PIM for the two ASBRs to exchange multicast information.
Inter-AS Option B
The recursive opaque type used for Inter-AS Option B is the Recursive Opaque (VPN Type), shown as opaque type 8 in Recursive opaque types.
Inter-AS option B support for NG-MVPN
Inter-AS Option B requires additional processing on ASBR routers and recursive FEC encoding compared with Inter-AS Option A. Because the BGP adjacency is not end-to-end, ASBRs must cache and use PMSI routes to build the tree. For that, the mLDP recursive FEC must carry RD information; therefore, the VPN recursive FEC (opaque type 8) is required.
In Inter-AS Option B, the PEs in two different ASs do not have their system IP address in the RTM. As such, for NG-MVPN, a recursive opaque value in mLDP FEC is required to signal the LSP to the first ASBR in the local AS path.
Because the system IPs of the peer PEs (Root-1 and Root-2) are not installed on the local PE (leaf), it is possible to have two PEs in different ASs with the same system IP address, as shown in Identical system IP on multiple PEs (Option B). However, SR OS does not support this topology. The system IP address of all nodes (root or leaf) in different ASs must be unique.
For inter-AS Option B and NG-MVPN, SR OS as a leaf does not support multiple roots in multiple ASs with the same system IP and different RDs; however, the first root that is advertised to an SR OS leaf is used by PIM to generate an mLDP tunnel to this actual root. Any dynamic behavior after this point, such as removal of the root and its replacement by a second root in a different AS, is not supported, and the SR OS behavior is nondeterministic.
I-PMSI and S-PMSI establishment
I-PMSI and S-PMSI functionality follows RFC 6513 section 8.1.1 and RFC 6512 sections 3.1 and 3.2.1. For routing, the same rules as for the GRT d-mLDP use case apply, but the VRF Route Import extended community now encodes the VRF instance in the local administrator field.
Option B uses an outer opaque value of type 8 and an inner opaque value of type 1 (see Recursive opaque types).
Non-segmented mLDP PMSI establishment (Option B) depicts the processing required for I-PMSI and S-PMSI Inter-AS establishment.
For non-segmented mLDP trees, A-D procedures follow those of the Intra-AS model, with the exception that NO EXPORT community must be excluded; LSP FEC includes mLDP VPN-recursive FEC.
For I-PMSI on Inter-AS Option B:
-
A-D routes must be installed by ASBRs and next-hop information is changed as the routes are propagated, as shown in Non-segmented mLDP PMSI establishment (Option B).
-
PMSI A-D routes are used to provide inter-domain connectivity on remote ASBRs.
On a receipt of an Intra-AS PMSI A-D route, PE2 resolves PE1’s address (next-hop in PMSI route) to a labeled BGP route with a next-hop of ASBR3, because PE1 (Root-1) is not known via IGP. Because ASBR3 is not the originator of the PMSI route, PE2 sources an mLDP VPN recursive FEC with a root node of ASBR3, and an opaque value containing the information advertised by Root-1 (PE-1) in the PMSI A-D route, shown below, and forwards the FEC to ASBR 3 using IGP.
PE-2 LEAF FEC: {Root: ASBR3, Opaque Value {Root: ROOT-1, RD 60:60, Opaque Value: P2MP LSP-ID xx}}
When the mLDP VPN-recursive FEC arrives at ASBR3, it notes that it is the identified root node and that the opaque value is a VPN-recursive opaque value. Because Root-1 (PE1) is not known via IGP, ASBR3 resolves the root node of the VPN-recursive FEC using the PMSI A-D (I or S) route matching the information in the VPN-recursive FEC (the originator being PE1 (Root-1), RD being 60:60, and P2MP LSP ID xx). This yields ASBR1 as the next hop. ASBR3 creates a new mLDP FEC element with a root node of ASBR1 and an opaque value being the received recursive opaque value, as shown below. ASBR3 then forwards the FEC using IGP.
ASBR-3 FEC: {Root ASBR 1, Opaque Value {Root: ROOT-1, RD 60:60, Opaque Value: P2MPLSP-ID xx}}
When the mLDP FEC arrives at ASBR1, it notes that it is the root node and that the opaque value is a VPN-recursive opaque value. As PE1’s ROOT-1 address is known to ASBR1 through the IGP, no further recursion is required. Regular processing begins, using received Opaque mLDP FEC information.
ASBR-1 FEC: {Root: ROOT-1, Opaque Value: P2MP LSP-ID xx}
The functionality described above for I-PMSI also applies to S-PMSI and (C-*, C-*) S-PMSI.
C-multicast route processing
C-multicast route processing functionality follows RFC 6513 section 8.1.2 (BGP used for route exchange). The processing is analogous to BGP Unicast VPN route exchange described in Unicast VPN Option B with segmented MPLS and Unicast VPN Option C with segmented MPLS. Non-segmented mLDP C-multicast exchange (Option B) shows C-multicast route processing with non-segmented mLDP PMSI details.
Inter-AS Option C
In Inter-AS Option C, the PEs in two different ASs have their system IP address in the RTM, but the intermediate nodes in the remote AS do not have the system IP of the PEs in their RTM. As such, for NG-MVPN, a recursive opaque value in mLDP FEC is needed to signal the LSP to the first ASBR in the local AS path.
The recursive opaque type used for Inter-AS Option C is the Recursive Opaque (Basic Type), shown as opaque type 7 in Recursive opaque types.
Inter-AS Option C support for NG-MVPN
For Inter-AS Option C, a route exists on a leaf PE to reach the root PE’s system IP. Because ASBRs can use BGP unicast routes, recursive FEC processing using BGP unicast routes is required, rather than VPN-recursive FEC processing using PMSI routes.
I-PMSI and S-PMSI establishment
I-PMSI and S-PMSI functionality follows RFC 6513 section 8.1.1 and RFC 6512 Section 2. The same rules as per the GRT d-mLDP use case apply, but the VRR Route Import External community now encodes the VRF instance in the local administrator field.
Option C uses an outer opaque value of type 7 and an inner opaque value of type 1.
Non-segmented mLDP PMSI establishment (Option C) shows the processing required for I-PMSI and S-PMSI Inter-AS establishment.
For non-segmented mLDP trees, A-D procedures follow those of the Intra-AS model, with the exception that the NO_EXPORT community must be excluded; the LSP FEC includes an mLDP recursive FEC (and not a VPN-recursive FEC).
For I-PMSI on Inter-AS Option C:
- A-D routes are not installed by ASBRs and next-hop information is not changed in MVPN A-D routes.
- BGP-labeled routes are used to provide inter-domain connectivity on remote ASBRs.
On receipt of an Intra-AS I-PMSI A-D route, PE2 resolves PE1’s address (the next hop in the PMSI route) to a labeled BGP route with a next-hop of ASBR3, because PE1 is not known via IGP. PE2 sources an mLDP FEC with a root node of ASBR3 and an opaque value, shown below, containing the information advertised by PE1 in the I-PMSI A-D route.
PE-2 LEAF FEC: {root = ASBR3, opaque value: {Root: ROOT-1, opaque value: P2MP-ID xx}}
When the mLDP FEC arrives at ASBR3, it notes that it is the identified root node, and that the opaque value is a recursive opaque value. ASBR3 resolves the root node of the Recursive FEC (ROOT-1) to a labeled BGP route with the next-hop of ASBR1, because PE-1 is not known via IGP. ASBR3 creates a new mLDP FEC element with a root node of ASBR1, and an opaque value being the received recursive opaque value.
ASBR3 FEC: {root: ASBR1, opaque value: {root: ROOT-1, opaque value: P2MP-ID xx}}
When the mLDP FEC arrives at ASBR1, it notes that it is the root node and that the opaque value is a recursive opaque value. As PE-1’s address is known to ASBR1 through the IGP, no further recursion is required. Regular processing begins, using the received Opaque mLDP FEC information.
The functionality described above for I-PMSI also applies to S-PMSI and (C-*, C-*) S-PMSI.
C-multicast route processing
C-multicast route processing functionality follows RFC 6513 section 8.1.2 (BGP used for route exchange). The processing is analogous to BGP Unicast VPN route exchange. Non-segmented mLDP C-multicast exchange (Option C) shows C-multicast route processing with non-segmented mLDP PMSI details.
LEAF node caveats
The LEAF (PE-2) must have the ROOT-1 system IP installed in the RTM via BGP. If ROOT-1 is installed in the RTM via IGP, the LEAF does not generate the recursive opaque FEC, and as a result ASBR 3 does not process the mLDP FEC correctly.
Configuration example
No MVPN-specific configuration is required for Option B or Option C on ASBRs. For Option B, use the following command to configure inter-AS non-segmented mLDP through the ASBR router:
- MD-CLI
configure router bgp inter-as-vpn
- classic CLI
configure router bgp enable-inter-as-vpn
A policy is required on a root or leaf PE to remove the NO_EXPORT community from MVPN routes; it can be configured as an export policy on the PE.
The following example displays a policy configured on PEs to remove the no-export community.
MD-CLI
[ex:/configure router "Base" bgp]
A:admin@node-2# info
community "no-export" {
member "no-export" { }
}
policy-statement "remNoExport" {
default-action {
action-type accept
community {
remove ["no-export"]
}
}
}
classic CLI
A:node-2>config>router>policy-options# info
----------------------------------------------
community "no-export"
members "no-export"
exit
policy-statement "remNoExport"
default-action accept
community remove "no-export"
exit
exit
The following example shows how to apply the policy in BGP at the global, group, or peer level.
MD-CLI
[ex:/configure router "Base" bgp]
A:admin@node-2# info
vpn-apply-export true
export {
policy ["remNoExport"]
}
classic CLI
A:node-2>config>router>bgp# info
----------------------------------------------
vpn-apply-export
export "remNoExport"
Inter-AS non-segmented MLDP
See the ‟Inter-AS Non-segmented MLDP” section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for more information.
ECMP
See the ‟ECMP” section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for more information about ECMP.
mLDP non-segmented intra-AS (inter-area) MVPN solution
SR OS supports intra-AS (inter-area) Option B and Option C. The supported combinations of inter-AS and intra-AS options are as follows:
- Intra-AS option B with inter-AS option B
- Intra-AS option C with inter-AS option C
Intra-AS and inter-AS Option B
For intra-AS/inter-AS Option B, the root is not visible on the leaf. LDP is responsible for building the recursive FEC and signaling the FEC to the ABR/ASBR on the leaf. The ABR/ASBR must have the PMSI AD route to rebuild the FEC (recursive or basic), depending on whether it is connected to another ABR/ASBR or to the root node. As such, LDP must import the MVPN PMSI AD routes. To save resources, importing MVPN PMSI AD routes is enabled manually using configuration commands.
Use the following command to configure LDP to request that BGP provide the LDP task with all MVPN PMSI AD routes; LDP internally caches these routes.
configure router ldp import-pmsi-routes mvpn-no-export-community
When mvpn-no-export-community is disabled, MVPN discards the cached routes to save resources.
Consider a node running an older image that does not support the mvpn-no-export-community command and that has an inter-AS MVPN configuration. When that node is upgraded to a newer image that supports the command, mvpn-no-export-community is enabled by default, to ensure a smooth upgrade. This ensures that all the routes are imported to mLDP, so the inter-AS functionality works after the upgrade.
SR OS preserves this automatic enabling across up to two major upgrades to a load supporting this command.
MVPN next hop self on ABRs
For option B, the ABR routers must change the next hop of MVPN AD routes to the ABR system IP, or to the loopback IP for core diversity. Currently, the next-hop-self BGP command does not change the next hop of the MVPN AD routes. This functionality will be available in a future release.
In the meantime, a BGP policy can be used to change the MVPN AD routes next hop at the ABR.
MVPN next-hop-self policy example
MVPN Type 1 routes (intra-AS I-PMSI AD routes) and MVPN Type 3 routes (S-PMSI AD routes) must have a policy to set their next hop to the ABR system IP. In the following example, the ABR system IP is 10.20.1.4. By the same token, the unicast vpn-ipv4 family can be configured within the policy to have its next hop changed to the ABR system IP.
Configure three policies on all ABRs:
- a policy to change the next hop of mvpn-ipv4 intra-AS I-PMSI AD routes (type 1) to next-hop-self
- a policy to change the next hop of vpn-ipv4 routes to next-hop-self
- a policy to change the next hop of mvpn-ipv4 S-PMSI AD routes (type 3) to next-hop-self
MD-CLI
[ex:/configure policy-options]
A:admin@node-2# info
policy-statement "mod_nh_10.20.1.4" {
entry 1 {
from {
mvpn-type intra-as-ipmsi-auto-discovery
}
to {
neighbor {
ip-address 10.20.1.4
}
}
action {
action-type accept
next-hop 10.20.1.4
}
}
default-action {
action-type next-policy
}
}
policy-statement "mod_nh_spmsi_10.20.1.4" {
entry 1 {
from {
mvpn-type s-pmsi-auto-discovery
}
to {
neighbor {
ip-address 10.20.1.4
}
}
action {
action-type accept
next-hop 10.20.1.4
}
}
default-action {
action-type next-policy
}
}
policy-statement "mod_nh_vpn_10.20.1.4" {
entry 1 {
from {
family [vpn-ipv4]
}
to {
neighbor {
ip-address 10.20.1.4
}
}
action {
action-type accept
next-hop 10.20.1.4
}
}
default-action {
action-type next-policy
}
}
classic CLI
A:node-2>config>router>policy-options# info
----------------------------------------------
policy-statement "mod_nh_10.20.1.4"
entry 1
from
mvpn-type 1
exit
action accept
next-hop 10.20.1.4
exit
exit
default-action next-policy
exit
exit
policy-statement "mod_nh_vpn_10.20.1.4"
entry 1
from
family vpn-ipv4
exit
action accept
next-hop 10.20.1.4
exit
exit
default-action next-policy
exit
exit
policy-statement "mod_nh_spmsi_10.20.1.4"
entry 1
from
mvpn-type 3
exit
action accept
next-hop 10.20.1.4
exit
exit
default-action next-policy
exit
exit
----------------------------------------------
LDP configuration example
Use the following commands to import all inter-AS and intra-AS (inter-area) routes on all ABR and non-ABR routers.
configure router ldp import-pmsi-routes mvpn
configure router ldp import-pmsi-routes mvpn-no-export-community
The following example shows the configuration.
MD-CLI
[ex:/configure router "Base" ldp]
A:admin@node-2# info
mp-mbb-time 10
generate-basic-fec-only true
fast-reroute {
}
import-pmsi-routes {
mvpn true
mvpn-no-export-community true
}
classic CLI
A:node-2>config>router>ldp# info
----------------------------------------------
fast-reroute
mp-mbb-time 10
generate-basic-fec-only
import-pmsi-routes
mvpn
mvpn-no-export-community
exit
BGP configuration example
Use the following commands to configure next-hop-self and the vpn-ipv4 address family for the BGP group:
configure router bgp group next-hop-self
configure router bgp group family vpn-ipv4
In addition, for unicast vpn-ipv4 connectivity, use the following command to configure inter-AS non-segmented mLDP through the ASBR router:
- MD-CLI
configure router bgp inter-as-vpn
- classic CLI
configure router bgp enable-inter-as-vpn
configure router bgp enable-inter-as-vpn
The following example shows the BGP configuration.
MD-CLI
[ex:/configure router "Base" bgp]
A:admin@node-2# info
connect-retry 10
keepalive 10
vpn-apply-export true
vpn-apply-import true
inter-as-vpn true
hold-time {
seconds 30
}
family {
vpn-ipv4 true
ipv6 true
vpn-ipv6 true
mvpn-ipv4 true
mcast-vpn-ipv4 true
mvpn-ipv6 true
mcast-vpn-ipv6 true
}
...
rapid-update {
vpn-ipv4 true
vpn-ipv6 true
mvpn-ipv4 true
mcast-vpn-ipv4 true
mvpn-ipv6 true
mcast-vpn-ipv6 true
}
group "ibgp_A" {
next-hop-self true
cluster {
cluster-id 10.20.1.2
}
}
group "ibgp_D" {
next-hop-self true
cluster {
cluster-id 10.180.4.2
}
}
classic CLI
A:node-2>config>router>bgp# info
----------------------------------------------
family ipv4 ipv6 vpn-ipv4 vpn-ipv6 mvpn-ipv4 mcast-vpn-ipv4 mvpn-ipv6 mcast-vpn-ipv6
vpn-apply-import
vpn-apply-export
connect-retry 10
keepalive 10
hold-time 30
enable-inter-as-vpn
rapid-update vpn-ipv4 vpn-ipv6 mvpn-ipv4 mcast-vpn-ipv4 mvpn-ipv6 mcast-vpn-ipv6
group "ibgp_A"
next-hop-self
cluster 10.20.1.2
export "mod_nh_10.20.1.2" "mod_nh_spmsi_10.20.1.2" "mod_nh_vpn_10.20.1.2"
neighbor 10.20.1.1
local-address 10.20.1.2
med-out 100
peer-as 100
exit
exit
group "ibgp_D"
next-hop-self
cluster 10.180.4.2
export "mod_nh_10.20.1.2" "mod_nh_spmsi_10.20.1.2" "mod_nh_vpn_10.20.1.2"
exit
no shutdown
----------------------------------------------
UMH redundancy using bandwidth monitoring
Bandwidth monitoring is used in MVPN for NG-MVPN and mLDP transport. It is used for multicast source redundancy, where both sources have the same IP address but are connected to two different root nodes. Bandwidth monitoring can be used with basic or recursive mLDP FEC. Upstream Multicast Hop (UMH) redundancy for bandwidth monitoring is supported for mLDP basic FEC and recursive FEC types 7 and 8 only.
With bandwidth monitoring, the leaf node sends a single (S1,G1) join to both root nodes. PIM SSM and ASM can be used between the receiver and the leaf, or between the UMH and the source. For ASM, bandwidth monitoring works only when traffic is switched from <*,G> to <S,G>.
After the source starts the multicast flow toward the root PEs, both root nodes transport the traffic to the leaf node on the PMSI (I-PMSI or S-PMSI).
The leaf listens to the active PMSI, blocks the other PMSI, and monitors the traffic rate on both the active and inactive PMSI. For faster than 50 ms switchover, both the active and the inactive PMSIs must arrive on the same IOM, because a single IOM must make the decision about which PMSI the leaf listens to and which PMSI to block.
The threshold for the rate of traffic lost between the active PMSI and the inactive PMSI is configured on the leaf PE. If the rate exceeds the configured value, the traffic switches from the active PMSI to the inactive PMSI. Rate monitoring is per PMSI, and not per (C-S,C-G).
After the active PMSI traffic rate is restored, revertive behavior applies, governed by a configurable timer. The revertive timer starts after the active PMSI traffic is recovered. When the timer expires and the primary PMSI traffic is stable, the traffic is switched back to the primary path. If the traffic goes below the threshold while the timer is decrementing, the timer is reset. This feature supports up to 1K PMSI switchovers within 50 ms.
Fault recovery mitigation at PMSI switchover time
<S,G> switching between I-PMSI and S-PMSI is not symmetrical (synchronized in time) on the active and the inactive UMH. While the active UMH attempts to switch an <S,G> between I-PMSI and S-PMSI, the active PMSI traffic rate arriving from the active UMH may be different from that arriving from the inactive UMH. This asymmetrical behavior can generate a premature switch from the active PMSI to the inactive PMSI.
The traffic rate delta can be set to account for this behavior. For example, if a 1080P channel uses 5 Mb/s, the traffic rate delta can be set to 15 Mb/s to avoid the switchover from the primary to the secondary PMSI if one or two 1080 <S,G>s are switched between I-PMSI and S-PMSI. This provides a 10 Mb/s tolerance of asymmetric traffic.
S-PMSI behavior
If the network FDV is large or the sources are not synchronized, switching from I-PMSI to S-PMSI can happen at a different time on the primary and backup UMHs. This can cause asymmetric traffic on the I-PMSI and S-PMSI, resulting in a switch from the active UMH. The <S,G> traffic can arrive for the I-PMSI from the backup UMH and for S-PMSI from the active UMH, which causes temporary duplicate traffic until both UMHs switch to S-PMSI.
Multistream S-PMSI provides a solution for this case by mapping an <S,G> to an S-PMSI. The <S,G> is locked to the multistream S-PMSI, which is always configured and never torn down, even if the traffic goes down to 0, so the multistream S-PMSI is not susceptible to S-PMSI traffic drops.
Use the following commands to configure the maximum number of S-PMSI for the MVPN selective provider tunnel.
- MD-CLI
configure service vprn mvpn provider-tunnel selective mldp maximum-p2mp-spmsi
configure service vprn mvpn provider-tunnel selective rsvp maximum-p2mp-spmsi
- classic CLI
configure service vprn mvpn provider-tunnel selective maximum-p2mp-spmsi
The number of <S,G>s must be less than or equal to this limit. Otherwise, different <S,G>s can switch to the S-PMSI, and when the S-PMSI limit is exhausted, the primary and backup UMHs become out of sync.
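As a sketch, the MD-CLI command above could be used to cap the number of mLDP S-PMSIs for a VPRN; the service name "1" and the limit of 100 are illustrative values, not taken from this guide:

```
configure service vprn "1" mvpn provider-tunnel selective mldp maximum-p2mp-spmsi 100
```

Choose the limit so that it is at least as large as the number of <S,G>s expected to switch to S-PMSI, as described above.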
Bandwidth monitoring on single IOMs
Bandwidth monitoring is supported on single IOMs, that is, both the active and the backup PMSI terminate on the same IOM. The IOM monitors the statistics of both PMSIs and makes the switchover decision. The IOM does not include the CPM in any of the bandwidth monitoring decisions, which ensures fast detection times and switchover times under 50 ms.
All leaf and bud nodes must be configured with the same UMH PEs, I-PMSI and S-PMSI, and bandwidth threshold configuration to avoid traffic drops.
All LAG members must be on the same IOM that is performing the bandwidth monitoring function. LAG interfaces spanning multiple port members that belong to different IOMs have unpredictable behavior, including traffic duplication.
ASM behavior
Bandwidth monitoring is supported with ASM only after the traffic is switched from <*,G> to <S,G>. Traffic arriving on separate IOMs from the active UMH and the inactive UMH results in traffic duplication because the pairing of the active and inactive UMH is per IOM, and the IOMs do not have a view of the pair. If the active traffic and the backup traffic arrive on different IOMs, each IOM treats the flow as the active flow and processes the traffic accordingly.
Low traffic rate
At low traffic rates, the packet on the active PMSI and the packet on the inactive PMSI can arrive at different times. If the packets arrive at the moment the statistics are read, there can be an inconsistency, resulting in a switchover. To avoid spurious switchovers, UMH redundancy using bandwidth monitoring should be used only when the traffic rate is higher than 2 or 3 packets per second.
Revertive timer
Use the following commands to configure the revertive timer.
configure service vprn mvpn provider-tunnel inclusive umh-rate-monitoring revertive-timer
configure service vprn mvpn provider-tunnel selective umh-rate-monitoring group source revertive-timer
If the active PMSI traffic goes below the threshold while the revertive timer is running, the timer is reset.
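For example, the revertive timer could be set on the inclusive provider tunnel as follows; the service name "1" and the timer value of 60 are hypothetical values used only for illustration:

```
configure service vprn "1" mvpn provider-tunnel inclusive umh-rate-monitoring revertive-timer 60
```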
MVPN upstream PE fast failover
MVPN upstream PE fast failover provides the ability to do the following:
- select two UMH nodes
- monitor the upstream PE health using UFD
MVPN upstream PE fast failover for tree SID
MVPN upstream PE fast failover for tree SID is supported as follows:
- Only inclusive PMSIs are supported.
- SM and SSM modes are supported. For SM, only fast switchover is supported on SPT. Fast protection is not supported on a shared tree.
- UFD sessions with 10-millisecond interval are supported on the CPM.
- For the MVPN UMH redundancy feature with BFD fault detection, if using P2MP RSVP or P2MP SR-policy transport, the switchover time from the primary to the standby tunnels increases linearly as the number of tunnel pairs increases.
- Traffic duplication for restoration of the primary stream from the standby can occur for up to a second or more, depending on the CPU load during switchover.
- The P2MP policy tunnel ID is used as the UFD discriminator. Consequently, the solution is not interoperable with other vendors.
Multicast-only Fast Reroute
Multicast-only Fast Reroute (MoFRR) is not supported when UMH redundancy with bandwidth monitoring is enabled.
FIB prioritization
The RIB processing of specific routes can be prioritized using the rib-priority command. This command allows specific routes to be prioritized through the protocol processing so that updates are propagated to the FIB as quickly as possible.
The rib-priority command can be configured within the VPRN instance of the OSPF or IS-IS routing protocols. For OSPF, a prefix list can be specified that identifies which route prefixes should be considered high priority.
Use the following command to configure all routes learned through the specified interface as high priority.
configure service vprn ospf area interface rib-priority high
configure service vprn ospf3 area interface rib-priority high
For the IS-IS routing protocol, RIB prioritization can be specified through either a prefix list or an IS-IS tag value. If a prefix list is specified, route prefixes matching any of the prefix list criteria are considered high priority. If an IS-IS tag value is specified instead, any IS-IS route with that tag value is considered high priority.
The routes designated as high priority are processed first and then passed to the FIB update process so that the forwarding engine can be updated. All known high-priority routes should be processed before the routing protocol moves on to other, standard-priority routes. This feature has the most impact when a large number of routes is being learned through routing protocols.
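As an illustration of the OSPF interface form of the command above, all routes learned through a specific VPRN OSPF interface could be marked high priority as follows; the service name "32", area 0.0.0.0, and interface name "to-ce1" are hypothetical values:

```
configure service vprn "32" ospf area 0.0.0.0 interface "to-ce1" rib-priority high
```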
Configuring a VPRN service with CLI
This section provides information to configure Virtual Private Routed Network (VPRN) services using the command line interface.
Basic configuration
The following command options require specific input (there are no defaults) to configure a basic VPRN service:
- customer ID (see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide)
- interface command options
- spoke SDP command options
The following example displays the configuration of a VPRN service.
MD-CLI
[ex:/configure service vprn "32"]
A:admin@node-2# info
admin-state enable
customer "1"
autonomous-system 10000
ecmp 8
pim {
apply-to all
rp {
ipv4 {
bsr-candidate {
admin-state disable
}
}
ipv6 {
bsr-candidate {
admin-state disable
}
}
}
}
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-target {
community "target:10001:1"
}
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
auto-bind-tunnel {
resolution filter
resolution-filter {
ldp true
}
}
}
}
bgp {
router-id 10.0.0.1
ebgp-default-reject-policy {
import false
export false
}
group "to-ce1" {
peer-as 65101
export {
policy ["vprnBgpExpPolCust1"]
}
}
neighbor "10.1.1.2" {
group "to-ce1"
}
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.0.1
prefix-length 24
}
neighbor-discovery {
proxy-arp-policy ["test"]
}
dhcp {
admin-state enable
description "DHCP test"
}
vrrp 1 {
}
}
sap 1/1/9:2 {
ingress {
qos {
sap-ingress {
policy-name "100"
}
}
}
egress {
qos {
sap-egress {
policy-name "1010"
}
}
filter {
ip "10"
}
}
}
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
admin-state enable
}
}
}
rip {
admin-state enable
export-policy ["vprnRipExpPolCust1" "vprnRipExpoPolCust1"]
group "ce1" {
admin-state enable
neighbor "to-ce1" {
admin-state enable
}
}
}
classic CLI
A:node-2>config>service>vprn# info
----------------------------------------------
ecmp 8
autonomous-system 10000
interface "to-ce1" create
address 10.1.0.1/24
dhcp
no shutdown
description "DHCP test"
exit
vrrp 1
exit
proxy-arp-policy "test"
sap 1/1/9:2 create
ingress
qos 100
exit
egress
qos 1010
filter ip 10
exit
exit
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
no shutdown
exit
exit
bgp-ipvpn
mpls
auto-bind-tunnel
resolution-filter
ldp
exit
resolution filter
exit
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
vrf-target target:10001:1
no shutdown
exit
exit
bgp
router-id 10.0.0.1
group "to-ce1"
export "vprnBgpExpPolCust1"
peer-as 65101
neighbor 10.1.1.2
exit
exit
no shutdown
exit
pim
apply-to all
rp
static
exit
bsr-candidate
shutdown
exit
rp-candidate
shutdown
exit
exit
no shutdown
exit
rip
export "vprnRipExpPolCust1"
group "ce1"
neighbor "to-ce1"
no shutdown
exit
no shutdown
exit
no shutdown
exit
no shutdown
Common configuration tasks
This section provides a brief overview of the tasks that must be performed to configure a VPRN service and provides the CLI commands.
- Associate a VPRN service with a customer ID.
- Optionally define an autonomous system.
- Define a route distinguisher (mandatory).
- Define VRF route-target associations or VRF import/export policies.
- Optionally define PIM command options.
- Create a subscriber interface (applies to the 7750 SR only and is optional).
- Create an interface.
- Define SAP command options on the interface.
- Select nodes and ports.
- Optionally select QoS policies other than the default (configured in the configure qos context).
- Optionally select filter policies (configured in the configure filter context).
- Optionally select accounting policy (configured in the configure log context).
- Optionally configure DHCP features (applies to the 7450 ESS and 7750 SR).
- Optionally define BGP command options. BGP must be enabled in the configure router bgp context.
- Optionally define RIP command options.
- Optionally define spoke SDP command options.
- Optionally create confederation autonomous systems within an AS.
- Enable the service.
Configuring VPRN components
This section provides VPRN configuration examples.
Creating a VPRN service
Use the commands in the following context to create a VPRN service.
configure service vprn
A route distinguisher must be defined and the VPRN service must be administratively up in order for VPRN to be operationally active.
The following example displays a VPRN service configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
vprn "test" {
admin-state enable
service-id 1
customer "1"
...
admin-state enable
route-distinguisher "10001:0"
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name test customer 1 create
route-distinguisher 10001:0
no shutdown
exit
...
----------------------------------------------
Configuring a global VPRN service
The following example displays a VPRN service with configured command options.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
vprn "test" {
admin-state enable
service-id 27
customer "1"
autonomous-system 10000
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
}
}
spoke-sdp 2:27 {
}
classic CLI
A:node-2>config>service# info
...
vprn 27 name "test" customer 1 create
autonomous-system 10000
bgp-ipvpn
mpls
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
no shutdown
exit
exit
spoke-sdp 2:27 create
Configuring a VPRN log
The following example displays a VPRN log configuration.
MD-CLI
[ex:/configure service vprn "101"]
A:admin@node-2# info
customer "1"
log {
filter "1" {
named-entry "1" {
action forward
}
}
log-id "1" {
filter "1"
source {
main true
change true
}
destination {
syslog "1"
}
}
log-id "32" {
filter "1"
source {
main true
change true
}
destination {
snmp {
}
}
}
snmp-trap-group "32" {
trap-target "3" {
address 3ffe::e01:403
port 9000
version snmpv2c
notify-community "vprn1"
}
}
syslog "1" {
address 3ffe::e01:403
log-prefix "vprn1"
}
}
snmp {
access true
community "dMHKqSM+0Ki7WFsaGl3Fy9Sn6wDeooe9Ltjrwvc5lw== hash2" {
access-permissions r
}
community "80Ixno7aOLReeFUINhWFGeYS0vjzfLCX167ZYtjQp2o= hash2" {
access-permissions rw
version v2c
}
}
dhcp-server {
dhcpv4 "vprn_1" {
admin-state enable
force-renews true
pool-selection {
use-pool-from-client {
}
}
}
}
classic CLI
A:node-2>config>service>vprn# info
----------------------------------------------
dhcp
local-dhcp-server "vprn_1" create
use-pool-from-client
force-renews
no shutdown
exit
exit
snmp
community "YsMv96H2KZVKQeakNAq.38gvyr.MH9vA" hash2 r version both
community "gkYL94l90FFgu91PiRNvn3Rnl0edkMU1" hash2 rw version v2c
access
exit
log
filter 1 name "1"
default-action forward
entry 1 name "1"
action forward
exit
exit
syslog 1 name "1"
address 3ffe::e01:403
log-prefix "vprn1"
exit
snmp-trap-group 32 name "32"
trap-target "3" address 3ffe::e01:403 port 9000 snmpv2c notify-community "vprn1"
exit
log-id 1 name "1"
filter 1
from main change
to syslog 1
exit
log-id 32 name "32"
filter 1
from main change
to snmp
no shutdown
exit
exit
----------------------------------------------
Configuring a spoke SDP
Use the commands in the following context to create or enable a spoke SDP.
configure service vprn spoke-sdp
Use the commands in the following context to configure the spoke-SDP command options.
configure service vprn interface spoke-sdp
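The following minimal classic CLI sketch binds a spoke SDP to a VPRN and enables it; the SDP ID 2 and VC ID 27 are illustrative, reusing the values from the global VPRN example earlier in this section:

```
A:node-2>config>service>vprn# info
----------------------------------------------
            spoke-sdp 2:27 create
                no shutdown
            exit
```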
Configuring VPRN protocols - PIM
The following example displays a VPRN PIM configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
vprn "101" {
admin-state enable
customer "1"
pim {
apply-to all
interface "if1" {
}
interface "if2" {
}
rp {
ipv4 {
bsr-candidate {
admin-state disable
}
}
ipv6 {
bsr-candidate {
admin-state disable
}
}
}
}
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "1:11"
}
}
interface "if1" {
loopback true
ipv4 {
primary {
address 10.13.14.15
prefix-length 32
}
}
}
interface "if2" {
ipv4 {
primary {
address 10.13.14.1
prefix-length 24
}
}
sap 1/1/9:0 {
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
vprn 101 name "101" customer 1 create
interface "if1" create
address 10.13.14.15/32
loopback
exit
interface "if2" create
address 10.13.14.1/24
sap 1/1/9:0 create
exit
exit
bgp-ipvpn
mpls
route-distinguisher 1:11
no shutdown
exit
exit
pim
interface "if1"
exit
interface "if2"
exit
rp
static
exit
bsr-candidate
shutdown
exit
rp-candidate
shutdown
exit
exit
no shutdown
exit
no shutdown
exit
Configuring router interfaces
For the MD-CLI command descriptions and information to configure router interfaces, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR MD-CLI Command Reference Guide.
For the classic CLI command descriptions and information to configure router interfaces, see 7450 ESS, 7750 SR, 7950 XRS, and VSR Classic CLI Command Reference Guide.
The following example displays a router interface configuration.
MD-CLI
[ex:/configure service vprn "32"]
A:admin@node-2# info
...
interface "if1" {
port 1/1/33
ipv4 {
primary {
address 10.2.2.1
prefix-length 24
}
}
}
interface "if2" {
port 1/1/34
ipv4 {
primary {
address 10.49.1.46
prefix-length 24
}
}
}
...
classic CLI
A:node-2>config>router# info
#------------------------------------------
echo "IP Configuration"
#------------------------------------------
...
interface "if1"
address 10.2.2.1/24
port 1/1/33
exit
interface "if2"
address 10.49.1.46/24
port 1/1/34
exit
Configuring VPRN protocols - BGP
The autonomous system number and router ID configured in the VPRN context apply only to that particular service.
The minimum command options that should be configured for a VPRN BGP instance are:
- Specify an autonomous system number for the router. See Configuring a global VPRN service.
- Specify a router ID. If a new or different router ID value is entered in the BGP context, the new value takes precedence and overwrites the VPRN-level router ID. See Configuring a global VPRN service.
- Specify a VPRN BGP peer group.
- Specify a VPRN BGP neighbor with which to peer.
- Specify a VPRN BGP peer-AS that is associated with the above peer.
VPRN BGP is administratively enabled upon creation. There are no default VPRN BGP groups or neighbors. Each VPRN BGP group and neighbor must be explicitly configured.
All command options configured for VPRN BGP are applied to the group and are inherited by each peer, but a group command option can be overridden on a specific neighbor basis. The VPRN BGP command hierarchy consists of three levels:
- global
configure service vprn bgp
- group
configure service vprn bgp group
- neighbor
configure service vprn bgp neighbor
If two systems have multiple BGP peer sessions between them, the local address must be explicitly configured for the sessions to be established.
For more information about the BGP protocol, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
Configuring VPRN BGP groups and neighbors
A group is a collection of related VPRN BGP peers. The group name should be a descriptive name for the group. Follow your group name and ID naming conventions for consistency and to help when troubleshooting faults.
All command options configured for a peer group are applied to the group and are inherited by each peer (neighbor), but a group command option can be overridden on a specific neighbor-level basis.
After a group name is created and options are configured, neighbors can be added within the same autonomous system to create IBGP connections or neighbors in different autonomous systems to create EBGP peers. All command options configured for the peer group level are applied to each neighbor, but a group command option can be overridden on a specific neighbor basis.
Configuring route reflection
Route reflection can be implemented in autonomous systems with a large internal BGP mesh to reduce the number of IBGP sessions required. One or more routers can be selected to act as focal points for internal BGP sessions. Several BGP-speaking routers can peer with a route reflector. A route reflector forms peer connections to other route reflectors. A router assumes the role of route reflector by configuring the commands in the cluster context.
If you configure the cluster command at the global level, all subordinate groups and neighbors are members of the cluster. The route reflector cluster ID is expressed in dotted-decimal notation. The ID should be a significant topology-specific value. No other command is required unless you want to disable reflection to specific peers.
If a route reflector client is fully meshed, use the following command to stop the route reflector from reflecting redundant route updates to a client.
- MD-CLI
configure router bgp client-reflect false
- classic CLI
configure router bgp disable-client-reflect
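The reflection behavior that this command disables can be sketched as a small decision function (a Python sketch with hypothetical names, following standard route reflector semantics per RFC 4456, not SR OS internals): routes learned from non-clients are reflected only to clients, routes learned from clients are normally reflected to all peers, and disabling client reflection stops client routes from being reflected back to the fully meshed clients.

```python
def should_reflect(from_client: bool, to_client: bool, client_reflect: bool = True) -> bool:
    """Decide whether a route reflector re-advertises an IBGP route.

    from_client     -- route was learned from a route reflector client
    to_client       -- candidate peer is a route reflector client
    client_reflect  -- False models 'client-reflect false' / 'disable-client-reflect'
    """
    if from_client:
        # Client routes go to everyone; with client reflection disabled,
        # they are not reflected back to other (fully meshed) clients.
        return True if not to_client else client_reflect
    # Routes from non-client IBGP peers are reflected to clients only.
    return to_client
```

With reflection disabled, a client route is still advertised to non-client IBGP peers; only the redundant client-to-client reflection is suppressed.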
Configuring BGP confederations
A VPRN can be configured to belong to a BGP confederation. BGP confederations are one technique for reducing the degree of IBGP meshing within an AS. When the confederation command is present in the configuration of a VPRN, the type of BGP session formed with a VPRN BGP neighbor is determined as follows:
- The session is of type IBGP if the peer AS is the same as the local AS.
- The session is of type confed-EBGP if the peer AS is different from the local AS and the peer AS is listed as one of the members in the confederation command.
- The session is of type EBGP if the peer AS is different from the local AS and the peer AS is not listed as one of the members in the confederation command.
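The rules above amount to a three-way classification, sketched here in Python (the function name and inputs are illustrative, not SR OS code):

```python
def session_type(local_as: int, peer_as: int, confed_members: set) -> str:
    """Classify a VPRN BGP session per the confederation rules above."""
    if peer_as == local_as:
        return "IBGP"           # same AS: internal BGP
    if peer_as in confed_members:
        return "confed-EBGP"    # different member AS of the same confederation
    return "EBGP"               # truly external peer
```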
VPRN BGP configuration
The following example displays a VPRN BGP configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
sdp 2 {
admin-state enable
keep-alive {
admin-state disable
}
far-end {
ip-address 1.2.3.4
}
}
...
vprn "27" {
customer "1"
autonomous-system 10000
ecmp 8
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-target {
community "target:10001:1"
}
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
auto-bind-tunnel {
resolution filter
resolution-filter {
ldp true
}
}
}
}
bgp {
router-id 10.0.0.1
ebgp-default-reject-policy {
import false
export false
}
group "to-ce1" {
peer-as 65101
export {
policy ["vprnBgpExpPolCust1"]
}
}
neighbor "10.1.1.2" {
group "to-ce1"
}
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.0.1
prefix-length 24
}
}
sap 1/1/9:2 {
ingress {
qos {
sap-ingress {
policy-name "100"
}
scheduler-policy {
policy-name "SLA2"
}
}
}
egress {
qos {
sap-egress {
policy-name "1010"
}
scheduler-policy {
policy-name "SLA1"
}
}
filter {
ip "6"
}
}
}
}
spoke-sdp 2:27 {
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
}
}
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
sdp 2 create
far-end 1.2.3.4
keep-alive
shutdown
exit
no shutdown
exit
customer 1 name "1" create
description "Default customer"
exit
ipipe 1 name "1" customer 1 create
shutdown
exit
vprn 27 name "27" customer 1 create
shutdown
ecmp 8
autonomous-system 10000
interface "to-ce1" create
address 10.1.0.1/24
sap 1/1/9:2 create
ingress
scheduler-policy "SLA2"
qos 100
exit
egress
scheduler-policy "SLA1"
qos 1010
filter ip 6
exit
exit
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
shutdown
exit
exit
bgp-ipvpn
mpls
auto-bind-tunnel
resolution-filter
ldp
exit
resolution filter
exit
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
vrf-target target:10001:1
no shutdown
exit
exit
bgp
router-id 10.0.0.1
group "to-ce1"
export "vprnBgpExpPolCust1"
peer-as 65101
neighbor 10.1.1.2
exit
exit
no shutdown
exit
spoke-sdp 2:27 create
exit
exit
Configuring VPRN protocols - RIP
PE routers attached to a specific VPN must learn the set of addresses for each site in that VPN. There are several ways for a PE router to obtain this information; one of them is the Routing Information Protocol (RIP). RIP sends routing update messages that carry routing table entry changes, and receivers update their routing tables with the new information.
RIP can be used as a PE/CE distribution technique. PE and CE routers can be configured as RIP peers, and the CE router can transmit RIP updates to inform the PE router about the set of address prefixes which are reachable at the CE router's site. When RIP is configured in the CE router, care must be taken to ensure that address prefixes from other sites (address prefixes learned by the CE router from the PE router) are never advertised to the PE router. Specifically, if a PE router receives a VPN-IPv4 route and, as a result, distributes an IPv4 route to a CE router, then that route must not be distributed back from that CE's site to a PE router (either the same router or different routers).
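The loop-prevention requirement can be sketched as a simple export filter on the CE (a Python sketch with hypothetical names, not an actual SR OS routing policy): prefixes learned from the PE are never advertised back toward a PE.

```python
def ce_export_to_pe(rib):
    """Select CE routes eligible for advertisement to the PE.

    rib -- iterable of (prefix, source) tuples, where source is
           'local' or 'site' for prefixes originated at this site,
           and 'pe' for prefixes learned from the PE router.
    Only site-originated prefixes may be advertised back to the PE.
    """
    return [prefix for prefix, source in rib if source != "pe"]
```

A real deployment would express this with an export policy on the CE's RIP instance; the tagging scheme here is only illustrative.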
Use the commands in the following context to enable a VPRN RIP instance and enable the RIP protocol.
configure service vprn rip
VPRN RIP is administratively enabled upon creation. Configuring other RIP commands and command options is optional.
The command options configured at the VPRN RIP global level are inherited by the group and neighbor levels. Several hierarchical VPRN RIP commands can be modified on different levels; the most specific value is used. That is, a VPRN RIP group-specific command takes precedence over a global VPRN RIP command. A neighbor-specific command takes precedence over a global VPRN RIP and group-specific command. For example, if you modify a VPRN RIP neighbor-level command default, the new value takes precedence over VPRN RIP group- and global-level settings. VPRN RIP groups and neighbors are not created by default. Each VPRN RIP group and neighbor must be explicitly configured.
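This precedence order (neighbor over group over global) amounts to a most-specific-wins lookup, sketched here in Python (the dictionary structure is hypothetical, not the actual configuration database):

```python
def effective_value(option, global_cfg, group_cfg, neighbor_cfg):
    """Return the most specific configured value for a VPRN RIP option.

    Scopes are searched from most specific (neighbor) to least
    specific (global); the first scope that sets the option wins.
    """
    for scope in (neighbor_cfg, group_cfg, global_cfg):
        if option in scope:
            return scope[option]
    return None  # fall back to the protocol default
```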
The minimal command options that should be configured for a VPRN RIP instance are:
- Specify a VPRN RIP peer group.
- Specify a VPRN RIP neighbor with which to peer.
The VPRN RIP command hierarchy consists of three levels:
- the global level
- the group level
- the neighbor level
MD-CLI
[ex:/configure service vprn "1" rip]
A:admin@node-2# info
group "RIP-ALU-A" {
neighbor "to-ALU-4" {
}
}
classic CLI
A:node-2>config>service>vprn>rip# info
----------------------------------------------
group "RIP-ALU-A"
neighbor "to-ALU-4"
exit
exit
----------------------------------------------
Configuring VPRN RIP
The following example displays a VPRN RIP configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
vprn "1" {
admin-state enable
customer "1"
autonomous-system 10000
ecmp 8
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-target {
community "target:1001:1"
}
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
auto-bind-tunnel {
resolution filter
resolution-filter {
ldp true
}
}
}
}
bgp {
router-id 10.0.0.1
ebgp-default-reject-policy {
import false
export false
}
group "to-ce1" {
peer-as 65101
export {
policy ["vprnBgpExpPolCust1"]
}
}
neighbor "10.1.1.2" {
group "to-ce1"
}
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.0.1
prefix-length 24
}
}
sap 1/1/10:1 {
ingress {
qos {
sap-ingress {
policy-name "100"
}
scheduler-policy {
policy-name "SLA2"
}
}
}
egress {
qos {
sap-egress {
policy-name "1010"
}
scheduler-policy {
policy-name "SLA1"
}
}
filter {
ip "6"
}
}
}
}
spoke-sdp 2:1 {
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
admin-state enable
}
}
}
rip {
admin-state enable
export-policy ["vprnRipExpPolCust1"]
group "ce1" {
admin-state enable
neighbor "to-ce1" {
admin-state enable
}
}
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name "1" customer 1 create
ecmp 8
autonomous-system 10000
interface "to-ce1" create
address 10.1.0.1/24
sap 1/1/10:1 create
ingress
scheduler-policy "SLA2"
qos 100
exit
egress
scheduler-policy "SLA1"
qos 1010
filter ip 6
exit
exit
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
no shutdown
exit
exit
bgp-ipvpn
mpls
auto-bind-tunnel
resolution-filter
ldp
exit
resolution filter
exit
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
vrf-target target:1001:1
no shutdown
exit
exit
bgp
router-id 10.0.0.1
group "to-ce1"
export "vprnBgpExpPolCust1"
peer-as 65101
neighbor 10.1.1.2
exit
exit
no shutdown
exit
rip
export "vprnRipExpPolCust1"
group "ce1"
neighbor "to-ce1"
no shutdown
exit
no shutdown
exit
no shutdown
exit
spoke-sdp 2:1 create
exit
no shutdown
exit
...
----------------------------------------------
For more information about the RIP protocol, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
Configuring VPRN protocols - OSPF
Each VPN routing instance is isolated from any other VPN routing instance, and from the routing used across the backbone. OSPF can be run with any VPRN, independently of the routing protocols used in other VPRNs, or in the backbone itself. For more information about the OSPF protocol, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
The OSPF backbone area, area 0.0.0.0, must be contiguous and all other areas must be connected to the backbone area. The backbone distributes routing information between areas. If it is not practical to connect an area to the backbone (see Area 0.0.0.5 in OSPF areas), the area border routers (such as routers Y and Z) must be connected via a virtual link. The two area border routers form a point-to-point-like adjacency across the transit area (see Area 0.0.0.4). A virtual link can only be configured while in the area 0.0.0.0 context.
Configuring VPRN OSPF
The following example displays the VPRN OSPF configuration shown in OSPF areas.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
vprn "1" {
admin-state enable
customer "1"
interface "test" {
}
ospf 0 {
admin-state enable
area 0.0.0.0 {
virtual-link 1.2.3.4 transit-area 1.2.3.4 {
hello-interval 9
dead-interval 40
}
}
area 1.2.3.4 {
}
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name "1" customer 1 create
interface "test" create
exit
ospf
no shutdown
area 0.0.0.0
virtual-link 1.2.3.4 transit-area 1.2.3.4
hello-interval 9
dead-interval 40
exit
exit
area 1.2.3.4
exit
exit
no shutdown
exit
...
----------------------------------------------
Configuring a VPRN interface
An interface name associates an IP address with the IP interface, and the IP interface is then associated with a physical port. The logical interface can be associated with attributes such as an IP address, a port, a Link Aggregation Group (LAG), or the system.
There are no default interfaces.
You can configure a VPRN interface as a loopback interface by issuing the loopback command instead of the sap command. The loopback flag cannot be set on an interface where a SAP is already defined and a SAP cannot be defined on a loopback interface.
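The mutual exclusion between the loopback flag and a SAP can be sketched as a pair of guarded setters (a Python sketch with hypothetical names; SR OS enforces this check in the CLI):

```python
class VprnInterface:
    """Illustrative model of a VPRN interface's loopback/SAP exclusion."""

    def __init__(self, name):
        self.name = name
        self.sap = None
        self.loopback = False

    def set_loopback(self):
        # The loopback flag cannot be set once a SAP is defined.
        if self.sap is not None:
            raise ValueError("loopback cannot be set: a SAP is already defined")
        self.loopback = True

    def add_sap(self, sap_id):
        # A SAP cannot be defined on a loopback interface.
        if self.loopback:
            raise ValueError("a SAP cannot be defined on a loopback interface")
        self.sap = sap_id
```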
When using mtrace or mstat in a Layer 3 VPN context, the VPRN should have a loopback address configured with the same address as the core instance's system address (the BGP next hop).
The following example displays a VPRN interface configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
vprn "1" {
admin-state enable
customer "1"
autonomous-system 10000
ecmp 8
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-target {
community "target:1001:1"
}
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
auto-bind-tunnel {
resolution filter
resolution-filter {
ldp true
}
}
}
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.0.1
prefix-length 24
}
}
}
spoke-sdp 2:1 {
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
admin-state enable
}
}
}
}
...
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name "1" customer 1 create
ecmp 8
autonomous-system 10000
interface "to-ce1" create
address 10.1.0.1/24
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
no shutdown
exit
exit
bgp-ipvpn
mpls
auto-bind-tunnel
resolution-filter
ldp
exit
resolution filter
exit
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
vrf-target target:1001:1
no shutdown
exit
exit
spoke-sdp 2:1 create
exit
no shutdown
exit
...
----------------------------------------------
Configuring overload state on a single SFM
When a router has fewer than the full set of SFMs functioning, the forwarding capacity can be reduced. Some scenarios include:
- fewer than the maximum number of SFMs installed in the system
- one or more SFMs have failed
- the system is in the ISSU process and the SFM is co-located on the CPM
An overload condition can be set for IS-IS and OSPF to enable the router to still participate in exchanging routing information, but route all traffic away from it when insufficient SFMs are active. Use the following commands to configure an overload condition.
- MD-CLI
configure router sfm-overload
configure service vprn sfm-overload
tools perform redundancy forced-single-sfm-overload
- classic CLI
configure router single-sfm-overload
configure service vprn single-sfm-overload
tools perform redundancy forced-single-sfm-overload
These commands cause an overload state in the IGP, which triggers the traffic reroute by setting the overload bit in IS-IS or setting the metric to maximum in OSPF. When PIM uses IS-IS or OSPF to determine the upstream router, a next-hop change in IS-IS or OSPF causes PIM to join the new path and prune the old path, which effectively reroutes the downstream multicast traffic as well as the unicast traffic.
When the problem is resolved and the required complement of SFMs becomes active in the router, the overload condition is cleared, which causes traffic to be routed back through the router.
The conditions to set overload are:
- For 7950 XRS-20, 7750 SR-12/SR-7, and 7450 ESS-12/ESS-7 platforms, if an SF/CPM fails, the protocol sets the overload.
- For 7950 XRS-40 and 7750 SR-12e platforms, if two SFMs fail (a connected pair on the XRS-40), the protocol sets the overload.
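The rerouting effect of the overload state can be illustrated with a small SPF sketch (Python; simplified, assuming the OSPF-style behavior of advertising maximum metric, 0xFFFF, on links leaving the overloaded router, as in RFC 6987 stub-router advertisement): the overloaded router is avoided for transit while remaining reachable.

```python
import heapq

def spf(graph, src, overloaded=frozenset(), max_metric=0xFFFF):
    """Dijkstra over {node: [(neighbor, cost), ...]}.

    Links *leaving* an overloaded router (other than the SPF source
    itself) are costed at max_metric, so shortest paths route transit
    traffic around that router.
    """
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph.get(u, []):
            c = max_metric if (u in overloaded and u != src) else cost
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist
```

For example, with A-B-C costing 1+1 and a direct A-C link costing 10, traffic normally transits B; marking B overloaded shifts the best path to the direct link.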
Configuring a VPRN interface SAP
A SAP is a combination of a port and encapsulation command options that identifies the service access point on the interface and within the router. Each SAP must be unique within a router. A SAP cannot be defined if the interface loopback command is enabled.
When configuring VPRN interface SAP command options, a default QoS policy is applied to each ingress and egress SAP. Additional QoS policies and scheduler policies must be configured in the configure qos context. Filter policies are configured in the configure filter context and must be explicitly applied to a SAP. There are no default filter policies.
The following example displays a VPRN interface SAP configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
vprn "1" {
admin-state enable
customer "1"
autonomous-system 10000
ecmp 8
bgp-ipvpn {
mpls {
admin-state enable
route-distinguisher "10001:1"
vrf-target {
community "target:1001:1"
}
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
auto-bind-tunnel {
resolution filter
resolution-filter {
ldp true
}
}
}
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.0.1
prefix-length 24
}
}
sap 1/1/10:1 {
ingress {
qos {
sap-ingress {
policy-name "100"
}
scheduler-policy {
policy-name "SLA2"
}
}
}
egress {
qos {
sap-egress {
policy-name "1010"
}
scheduler-policy {
policy-name "SLA1"
}
}
filter {
ip "6"
}
}
}
}
spoke-sdp 2:1 {
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
admin-state enable
}
}
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name "1" customer 1 create
ecmp 8
autonomous-system 10000
interface "to-ce1" create
address 10.1.0.1/24
sap 1/1/10:1 create
ingress
scheduler-policy "SLA2"
qos 100
exit
egress
scheduler-policy "SLA1"
qos 1010
filter ip 6
exit
exit
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
no shutdown
exit
exit
bgp-ipvpn
mpls
auto-bind-tunnel
resolution-filter
ldp
exit
resolution filter
exit
route-distinguisher 10001:1
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
vrf-target target:1001:1
no shutdown
exit
exit
spoke-sdp 2:1 create
exit
no shutdown
exit
Service management tasks
This section discusses VPRN service management tasks.
Modifying a VPRN service
The following example displays a VPRN service configuration that is used in the following sections to demonstrate how to modify the configuration.
MD-CLI
[ex:/configure service]
A:admin@node-2# info
...
vprn "1" {
admin-state enable
customer "1"
autonomous-system 10000
ecmp 8
bgp-ipvpn {
mpls {
admin-state enable
vrf-import {
policy ["vrfImpPolCust1"]
}
vrf-export {
policy ["vrfExpPolCust1"]
}
}
}
bgp {
router-id 10.0.0.1
ebgp-default-reject-policy {
import false
export false
}
group "to-ce1" {
admin-state enable
peer-as 65101
export {
policy ["vprnBgpExpPolCust1"]
}
}
neighbor "10.1.1.2" {
group "to-ce1"
}
}
maximum-ipv4-routes {
value 2000
}
interface "to-ce1" {
ipv4 {
primary {
address 10.1.1.1
prefix-length 24
}
}
sap 1/1/10:1 {
}
}
spoke-sdp 2:1 {
}
static-routes {
route 10.5.0.0/24 route-type unicast {
next-hop "10.1.1.2" {
admin-state enable
}
}
}
}
classic CLI
A:node-2>config>service# info
----------------------------------------------
...
vprn 1 name "1" customer 1 create
no shutdown
ecmp 8
maximum-routes 2000
autonomous-system 10000
interface "to-ce1" create
address 10.1.1.1/24
sap 1/1/10:1 create
exit
exit
static-route-entry 10.5.0.0/24
next-hop 10.1.1.2
no shutdown
exit
exit
bgp-ipvpn
mpls
no shutdown
vrf-import "vrfImpPolCust1"
vrf-export "vrfExpPolCust1"
exit
exit
bgp
router-id 10.0.0.1
group "to-ce1"
export "vprnBgpExpPolCust1"
peer-as 65101
neighbor 10.1.1.2
exit
exit
no shutdown
exit
spoke-sdp 2:1 create
exit
exit
Deleting a VPRN service
The following example displays the deletion of a VPRN service.
MD-CLI
[ex:/configure service vprn "1"]
A:admin@node-2# exit
[ex:/configure service]
A:admin@node-2# delete vprn 1
classic CLI
In the classic CLI, a VPRN service cannot be deleted until SAPs and interfaces are administratively disabled and deleted. If protocols, a spoke-SDP, or both are defined, they must also be shut down and removed from the configuration.
*A:node-2>config>service# vprn 1
*A:node-2>config>service>vprn# interface "to-ce1"
*A:node-2>config>service>vprn>if# sap 1/1/10:1
*A:node-2>config>service>vprn>if>sap# shutdown
*A:node-2>config>service>vprn>if>sap# exit
*A:node-2>config>service>vprn>if# no sap 1/1/10:1
*A:node-2>config>service>vprn>if# shutdown
*A:node-2>config>service>vprn>if# exit
*A:node-2>config>service>vprn# no interface "to-ce1"
*A:node-2>config>service>vprn# bgp
*A:node-2>config>service>vprn>bgp# shutdown
*A:node-2>config>service>vprn>bgp# exit
*A:node-2>config>service>vprn# no bgp
*A:node-2>config>service>vprn# rip
*A:node-2>config>service>vprn>rip$ shutdown
*A:node-2>config>service>vprn>rip$ exit
*A:node-2>config>service>vprn# no rip
*A:node-2>config>service>vprn# no spoke-sdp 2:1
*A:node-2>config>service>vprn# no ecmp
*A:node-2>config>service>vprn# static-route-entry 10.5.0.0/24
*A:node-2>config>service>vprn>static-route-entry# no next-hop 10.1.1.2
*A:node-2>config>service>vprn>static-route-entry# shutdown
*A:node-2>config>service>vprn>static-route-entry# exit
*A:node-2>config>service>vprn# no static-route-entry 10.5.0.0/24
*A:node-2>config>service>vprn# exit
*A:node-2>config>service# no vprn 1
Disabling a VPRN service
A VPRN service can be administratively disabled without deleting any service command options. The following example displays the disabling of a VPRN service.
MD-CLI
[ex:/configure service]
A:admin@node-2# vprn 1
[ex:/configure service vprn "1"]
A:admin@node-2# admin-state disable
[ex:/configure service vprn "1"]
A:admin@node-2# info
admin-state disable
customer "1"
...
classic CLI
A:node-2>config>service# vprn 1
A:node-2>config>service>vprn# shutdown
A:node-2>config>service>vprn# info
----------------------------------------------
shutdown
ecmp 8
...
Re-enabling a VPRN service
The following example displays the re-enabling of a VPRN service that had been administratively disabled.
MD-CLI
[ex:/configure service]
A:admin@node-2# vprn 1
[ex:/configure service vprn "1"]
A:admin@node-2# admin-state enable
[ex:/configure service vprn "1"]
A:admin@node-2# info
admin-state enable
customer "1"
...
classic CLI
A:node-2>config>service# vprn 1
A:node-2>config>service>vprn# no shutdown
A:node-2>config>service>vprn# info
----------------------------------------------
no shutdown
ecmp 8
...