QoS and QoS policies
This chapter provides an overview of the 7705 SAR Quality of Service (QoS) and information about QoS policy management.
QoS overview
Overview
To provide what network engineers call Quality of Service (QoS), the flow of data in the form of packets must be predetermined, and resources must be assured for that predetermined flow. Simple routing does not provide a predetermined path for the traffic, and the priorities described by Class of Service (CoS) coding simply increase the odds of successful transit for one packet over another; there is still no guarantee of service quality. This guarantee is what distinguishes QoS from CoS. CoS is one element of overall QoS.
By using the traffic management features of the 7705 SAR, network engineers can achieve a QoS for their customers. Multiprotocol label switching (MPLS) provides a predetermined path, while policing, shaping, scheduling, and marking features ensure that traffic flows in a predetermined and predictable manner.
When managing traffic flow, there is a need to distinguish between high-priority traffic (that is, mission-critical traffic such as signaling) and best-effort traffic. Within these priority levels, a second level of prioritization is needed: between the volume of traffic that is contracted to be transported and any additional traffic that is transported only if system resources allow. Throughout this guide, contracted traffic is referred to as in-profile traffic. Traffic that exceeds the user-configured traffic limits is either serviced at a lower priority or discarded in an appropriate manner to ensure that the overall quality of service is achieved.
The 7705 SAR must be properly configured to provide QoS. To ensure end-to-end QoS, every intermediate node, together with the egress node, must be coherently configured. Proper QoS configuration requires careful end-to-end planning, allocation of appropriate resources, and coherent configuration among all the nodes along the path of a given service. Once properly configured, each service provided by the 7705 SAR is contained within the QoS boundaries associated with that service and the general QoS parameters assigned to network links.
The 7705 SAR is designed with QoS mechanisms at both egress and ingress to support different customers and different services per physical interface or card, concurrently and harmoniously (see Egress and ingress traffic direction for a definition of egress and ingress traffic). The 7705 SAR has extensive and flexible capabilities to classify, police, shape and mark traffic to make this happen.
The 7705 SAR supports multiple forwarding classes (FCs) and associated class-based queuing. Ingress traffic can be classified to multiple FCs, and the FCs can be flexibly associated with queues. This provides the ability to control the priority and drop priority of a packet while allowing the fine-tuning of bandwidth allocation to individual flows.
Each forwarding class is important only in relation to the other forwarding classes. A forwarding class allows network elements to weigh the relative importance of one packet over another. With such flexible queuing, packets belonging to a specific flow within a service can be preferentially forwarded based on the CoS of a queue. The forwarding decision is based on the forwarding class of the packet, as assigned by the ingress QoS policy defined for the service access point (SAP).
7705 SAR routers use QoS policies to control how QoS is handled at distinct points in the service delivery model within the device. QoS policies act like a template. Once a policy is created, it can be applied to many other similar services and ports. As an example, if there is a group of Node Bs connected to a 7705 SAR node, one QoS policy can be applied to all services of the same type, such as High-Speed Downlink Packet Access (HSDPA) offload services.
There are different types of QoS policies that cater to the different QoS needs at each point in the service delivery model. QoS policies are defined in a global context in the 7705 SAR and only take effect when the policy is applied to a relevant entity.
QoS policies are uniquely identified with a policy ID number or a policy ID name. Policy ID 1 and policy ID "default" are reserved for the default policy, which is used if no policy is explicitly applied.
The different QoS policies within the 7705 SAR can be divided into two main types.
QoS policies are used for classification, queue attributes, and marking.
Slope policies define default buffer allocations and Random Early Discard (RED) and Weighted Random Early Discard (WRED) slope definitions.
The sections that follow provide an overview of the QoS traffic management performed on the 7705 SAR.
Egress and ingress traffic direction
Throughout this document, the terms ingress and egress, when describing traffic direction, are always defined relative to the fabric. For example:
ingress direction describes packets moving into the switch fabric away from a port (on an adapter card)
egress direction describes packets moving from the switch fabric and into a port (on an adapter card)
When combined with the terms access and network, which are port and interface modes, the four traffic directions relative to the fabric are (see Egress and ingress traffic direction):
access ingress direction describes packets coming in from customer equipment and switched toward the switch fabric
network egress direction describes packets switched from the switch fabric into the network
network ingress direction describes packets switched in from the network and moving toward the switch fabric
access egress direction describes packets switched from the switch fabric toward customer equipment
Ring traffic
On the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module, traffic can flow between the Layer 2 bridging domain and the Layer 3 IP domain (see Ingress and egress traffic on a 2-port 10GigE (Ethernet) Adapter card). In the bridging domain, ring traffic flows from one ring port to another, as well as to and from the add/drop port. From the network point of view, traffic from the ring toward the add/drop port and the v-port is considered ingress traffic (drop traffic). Similarly, traffic from the fabric toward the v-port and the add/drop port is considered egress traffic (add traffic).
The 2-port 10GigE (Ethernet) Adapter card or 2-port 10GigE (Ethernet) module functions as an add/drop card to a network side 10 Gb/s optical ring. Conceptually, the card or module should be envisioned as having two domains: a Layer 2 bridging domain where the add/drop function operates and a Layer 3 IP domain where the normal IP processing and IP nodal traffic flows are managed. Ingress and egress traffic flow remains in the context of the nodal fabric. The ring ports are considered to be east-facing and west-facing and are referenced as Port 1 and Port 2. A virtual port (or v-port) provides the interface to the IP domain within the structure of the card or module.
Forwarding classes
Queues can be created for each forwarding class to determine the manner in which the queue output is scheduled and the type of parameters the queue accepts. The 7705 SAR supports eight forwarding classes per SAP. The following table shows the default mapping of these forwarding classes in order of priority, with Network Control having the highest priority.
| FC name | FC designation | Queue type | Typical use |
|---|---|---|---|
| Network Control | NC | Expedited | For network control and traffic synchronization |
| High-1 | H1 | Expedited | For delay/jitter sensitive traffic |
| Expedited | EF | Expedited | For delay/jitter sensitive traffic |
| High-2 | H2 | Expedited | For delay/jitter sensitive traffic |
| Low-1 | L1 | Best Effort | For best-effort traffic |
| Assured | AF | Best Effort | For best-effort traffic |
| Low-2 | L2 | Best Effort | For best-effort traffic |
| Best Effort | BE | Best Effort | For best-effort traffic |
The traffic flows of different forwarding classes are mapped to the queues. This mapping is user-configurable. Each queue has a unique priority. Packets from high-priority queues are scheduled separately, before packets from low-priority queues. More than one forwarding class can be mapped to a single queue. In such a case, the queue type defaults to the priority of the lowest forwarding class (see Queue type for more information about queue type). By default, the following logical order is followed:
FC-8 - NC
FC-7 - H1
FC-6 - EF
FC-5 - H2
FC-4 - L1
FC-3 - AF
FC-2 - L2
FC-1 - BE
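As a minimal sketch, mapping two forwarding classes to a single SAP-ingress queue might look as follows. The policy ID, queue number, and the fc/queue keyword syntax are assumptions based on the command contexts named in this guide; the exact syntax may differ by release.

```
# Hypothetical SAP-ingress policy 10: AF and L2 share queue 2, so the
# queue type defaults to that of the lower forwarding class (best-effort)
config>qos# sap-ingress 10 create
config>qos>sap-ingress# queue 2 create
config>qos>sap-ingress# fc "af" create
config>qos>sap-ingress>fc# queue 2
config>qos>sap-ingress# fc "l2" create
config>qos>sap-ingress>fc# queue 2
```

As with any QoS policy, the mapping only takes effect once the policy is applied to a SAP.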
At access ingress, traffic can be classified as unicast traffic or one of the multipoint traffic types (broadcast, multicast, or unknown (BMU)). After classification, traffic can be assigned to a queue that is configured to support one of the four traffic types, namely:
unicast (or implicit)
broadcast
multicast
unknown
Scheduling modes
The scheduler modes available on adapter cards are 4-priority and 16-priority. Which modes are supported on a particular adapter card depends on whether the adapter card is a second-generation or third-generation card.
On Gen-3 hardware, 4-priority scheduling mode is the implicit, default scheduling mode and is not user-configurable. Gen-3 platforms with a TDM block support 4-priority scheduling mode. Gen-2 adapter cards support 16-priority and 4-priority scheduling modes.
For more information about differences between Gen-2 and Gen-3 hardware related to scheduling mode QoS behavior, see QoS for Gen-3 adapter cards and platforms.
Intelligent discards
Most 7705 SAR systems are susceptible to network processor congestion if the packet rate of small packets received on a node or card exceeds the processing capacity. If a node or card receives a high rate of small packet traffic, the node or card enters overload mode. Before the introduction of intelligent discards, when a node or card entered an overload state, the network processor would randomly drop packets.
The "intelligent discards during overload" feature allows the network processor to discard packets according to a preset priority order. In the egress direction, intelligent discards applies to traffic entering the card from the fabric. Traffic is discarded in the following order: low-priority out-of-profile user traffic first, followed by high-priority out-of-profile user traffic, then low-priority in-profile user traffic, then high-priority in-profile user traffic, and lastly control plane traffic. In the ingress direction, intelligent discards applies to traffic entering the card from the physical ports. Traffic is discarded in the following order: low-priority user traffic first, followed by high-priority user traffic. In both directions, low-priority user traffic is always the most susceptible to discards.
In the egress direction, the system differentiates between high-priority and low-priority user traffic based on the internal forwarding class and queue-type fabric header markers. In the ingress direction, the system differentiates between high-priority and low-priority user traffic based on packet header bits. The following table details the classification of user traffic in the ingress direction.
| Packet header field | High-priority values | Low-priority values |
|---|---|---|
| MPLS TC | 7 to 4 | 3 to 0 |
| IP DSCP | 63 to 32 | 31 to 0 |
| Eth Dot1p | 7 to 4 | 3 to 0 |
Intelligent discards during overload ensures priority-based handling of traffic and complements existing traffic management mechanisms. It does not change how QoS-based classification, buffer management, or scheduling operates on the 7705 SAR. If the node or card is not in overload mode, there is no change to the way packets are handled by the network processor.
There are no commands to configure intelligent discards during overload; the feature is automatically enabled on the following cards, modules, and ports:
10-port 1GigE/1-port 10GigE X-Adapter card
2-port 10GigE (Ethernet) Adapter card (only on the 2.5 Gb/s v-port)
2-port 10GigE (Ethernet) module (only on the v-port)
8-port Gigabit Ethernet Adapter card
6-port Ethernet 10Gbps Adapter card
Packet Microwave Adapter card
4-port SAR-H Fast Ethernet module
6-port SAR-M Ethernet module
7705 SAR-A Ethernet ports
7705 SAR-Ax Ethernet ports
7705 SAR-Wx Ethernet ports
7705 SAR-M Ethernet ports
7705 SAR-H Ethernet ports
7705 SAR-Hc Ethernet ports
7705 SAR-X Ethernet ports
Buffering
Buffer space is allocated to queues based on the committed buffer space (CBS), the maximum buffer space (MBS), the availability of resources, and the total amount of buffer space. The CBS and the MBS define the queue depth for a particular queue. The MBS represents the maximum buffer space that can be allocated to a particular queue. Whether that much space can actually be allocated depends on buffer usage (that is, the number of other queues and their sizes).
Memory allocation is optimized to guarantee the CBS for each queue. The allocated queue space beyond the CBS is limited by the MBS and depends on the use of buffer space and the guarantees accorded to queues as configured in the CBS.
Buffer pools
The 7705 SAR supports two types of buffer pools that allocate memory as follows:
reserved pool – represents the CBS that is guaranteed for all queues. The reserved pool is limited to a maximum of 75% of the total buffer space.
shared pool – represents the buffer space that remains after the reserved pool has been allocated. The shared pool always has at least 25% of the total buffer space.
Both buffer pools can be displayed in the CLI using the show pools command.
CBS and MBS configuration
On the access side, CBS is configured in bytes and MBS in bytes or kilobytes using the CLI. See, for example, the config>qos>sap-ingress/egress>queue>cbs and mbs configuration commands.
On the network side, CBS and MBS values are expressed as a percentage of the total number of available buffers. If the buffer space is further segregated into pools (for example, ingress and egress, access and network, or a combination of these), the CBS and MBS values are expressed as a percentage of the applicable buffer pool. See the config>qos>network-queue>queue>cbs and mbs configuration commands.
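As an illustrative sketch of the commands named above (all values hypothetical), the access side takes byte-based sizes while the network side takes percentages of the applicable buffer pool:

```
# Access side: CBS and MBS sized in bytes
config>qos>sap-ingress>queue# cbs 16384
config>qos>sap-ingress>queue# mbs 65536

# Network side: CBS and MBS as a percentage of the applicable buffer pool
config>qos>network-queue>queue# cbs 5
config>qos>network-queue>queue# mbs 12
```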
The configured CBS and MBS values are converted to the number of buffers by dividing the CBS or MBS value by a fixed buffer size of 512 bytes or 2304 bytes, depending on the type of adapter card or platform. The number of buffers can be displayed for an adapter card using the show pools command.
Buffer allocation for multicast traffic
When a packet is being multicast to two or more interfaces on the egress adapter card or block of fixed ports, or when a packet at port ingress is mirrored, one extra buffer per packet is used.
In previous releases, this extra buffer was not added to the queue count. When checking CBS during multicast traffic enqueuing, the CBS was divided by two to prevent buffer overconsumption by the extra buffers. As a result, during multicast traffic enqueuing, the CBS buffer limit for the queue was considered reached when half of the available buffers were in use.
As of Release 8.0 of the 7705 SAR, the CBS is no longer divided by two. Instead, the extra buffers are added to the queue count when enqueuing, and are removed from the queue count when the multicast traffic exits the queue. The full CBS value is used, and the extra buffer allocation is visible in buffer allocation displays.
Buffer unit allocation and buffer chaining
Packetization buffers and queues are supported in the packet memory of each adapter card or platform. All adapter cards and platforms allocate a fixed space for each buffer. The 7705 SAR supports two buffer sizes: 512 bytes or 2304 bytes, depending on the type of adapter card or platform.
The adapter cards and platforms that support a buffer size of 2304 bytes do not support buffer chaining (see the description below) and only allow a 1-to-1 correspondence of packets to buffers.
The adapter cards and platforms that support a buffer size of 512 bytes use a method called buffer chaining to process packets that are larger than 512 bytes. To accommodate such packets, these adapter cards or platforms divide the packet dynamically into a series of concatenated 512-byte buffers. An internal 64-byte header is prepended to the packet, so only 448 bytes of buffer space is available for customer traffic in the first buffer. The remaining customer traffic is split among the concatenated 512-byte buffers.
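The buffer count follows directly from this scheme: the first buffer holds 448 customer bytes (512 bytes minus the 64-byte internal header), and each additional buffer holds 512 bytes. For a packet of $P$ customer bytes:

\[
n(P) = 1 + \left\lceil \frac{\max(0,\; P - 448)}{512} \right\rceil
\]

So a 64-byte packet occupies one buffer, and a 1280-byte packet occupies three buffers (1536 bytes).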
The following table shows the supported buffer sizes on the 7705 SAR adapter cards and platforms. If a version number or variant is not specified, all versions of the adapter card or variants of the platform are implied. Adapter cards and platforms that support buffer chaining have a 512-byte buffer size ("Yes"); those that do not have a 2304-byte buffer size ("No").
| Adapter card or platform | Buffer space per card/platform (MB) | Buffer chaining support |
|---|---|---|
| 2-port 10GigE (Ethernet) Adapter card | 268; 201 (for L2 bridging domain) | Yes; Yes (each buffer unit is 768 bytes) |
| 2-port 10GigE (Ethernet) module | 201 (for L2 bridging domain) | Yes (each buffer unit is 768 bytes) |
| 2-port OC3/STM1 Channelized Adapter card | 310 | No |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | 217 | Yes |
| 4-port OC3/STM1 Clear Channel Adapter card | 352 | No |
| 4-port DS3/E3 Adapter card | 280 | No |
| 6-port E&M Adapter card | 38 | No |
| 6-port FXS Adapter card | 38 | No |
| 6-port Ethernet 10Gbps Adapter card | 1177 | Yes |
| 8-port FXO Adapter card | 38 | No |
| 8-port Gigabit Ethernet Adapter card | 268 | Yes |
| 8-port Voice & Teleprotection card | 38 | No |
| 8-port C37.94 Teleprotection card | 38 | No |
| 10-port 1GigE/1-port 10GigE X-Adapter card | 537 | Yes |
| 12-port Serial Data Interface card, version 2 and version 3 | 268 | Yes |
| 16-port T1/E1 ASAP Adapter card | 38 | No |
| 32-port T1/E1 ASAP Adapter card | 57 | No |
| Integrated Services card | 268 | Yes |
| Packet Microwave Adapter card | 268 | Yes |
| 7705 SAR-A | 268 | Yes |
| 7705 SAR-Ax | 268 | Yes |
| 7705 SAR-H | 268 | Yes |
| 7705 SAR-Hc | 268 | Yes |
| 7705 SAR-M | 268 | Yes |
| 7705 SAR-Wx | 268 | Yes |
| 7705 SAR-X (Ethernet ports) 1 | 1177 | Yes |
| 7705 SAR-X (TDM ports) 1 | 46 | Yes |

1. The 7705 SAR-X has three buffer pools. Each block of ports (MDA) has its own buffer pool.
Advantages of buffer chaining
Buffer chaining offers improved efficiency, which is especially evident when smaller packet sizes are transmitted. For example, to queue a 64-byte packet, a card with a fixed buffer of 2304 bytes allocates 2304 bytes, whereas a card with a fixed buffer of 512 bytes allocates only 512 bytes. To queue a 1280-byte packet, a card with a fixed buffer of 2304 bytes allocates 2304 bytes, whereas a card with a fixed buffer of 512 bytes allocates only 1536 bytes (that is, 512 bytes ✕ 3 buffers).
Per-SAP aggregate shapers (H-QoS) on Gen-2 hardware
This section provides information about per-SAP aggregate shapers for Gen-2 adapter cards and platforms. For information about Gen-3 adapter cards and platforms, see QoS for Gen-3 adapter cards and platforms.
Hierarchical QoS (H-QoS) provides the 7705 SAR with the ability to shape traffic on a per-SAP basis for traffic from up to eight CoS queues associated with that SAP.
On Gen-2 hardware, the per-SAP aggregate shapers apply to access ingress and access egress traffic and operate in addition to the 16-priority scheduler, which must be used for per-SAP aggregate shaping.
The 16-priority scheduler acts as a soft policer, servicing the SAP queues in strict priority order, with conforming traffic (less than CIR) serviced prior to non-conforming traffic (between CIR and PIR). The 16-priority scheduler on its own cannot enforce a traffic limit on a per-SAP basis; to do this, per-SAP aggregate shapers are required (see H-QoS example).
The per-SAP shapers are considered aggregate shapers because they shape traffic from the aggregate of one or more CoS queues assigned to the SAP.
Access ingress scheduling for 4-priority and 16-priority SAPs (with per-SAP aggregate shapers) and Access egress scheduling for 4-priority and 16-priority SAPs (with per-SAP aggregate shapers) (per port) illustrate per-SAP aggregate shapers for access ingress and access egress, respectively. They indicate how shaped and unshaped SAPs are treated.
H-QoS is not supported on the 4-port SAR-H Fast Ethernet module.
Shaped and unshaped SAPs
Shaped SAPs have user-configured rate limits (PIR and CIR)—called the aggregate rate limit—and must use 16-priority scheduling mode. Unshaped SAPs use default rate limits (PIR is maximum and CIR is 0 kb/s) and can use 4-priority or 16-priority scheduling mode.
Shaped 16-priority SAPs are configured with a PIR and a CIR using the agg-rate-limit command in the config>service>service-type service-id>sap context, where service-type is epipe, ipipe, ies, vprn, or vpls (including routed VPLS). The PIR is set using the agg-rate variable and the CIR is set using the cir-rate variable.
Unshaped 4-priority SAPs are considered unshaped by definition of the default PIR and CIR values (PIR is maximum and CIR is 0 kb/s). Therefore, they do not require any configuration other than to be set to 4-priority scheduling mode.
Unshaped 16-priority SAPs are created when 16-priority scheduling mode is selected and the PIR and CIR are left at their defaults (PIR is maximum and CIR is 0 kb/s), which are the same default settings as for a 4-priority SAP. The main reason for preferring unshaped SAPs using 16-priority scheduling over unshaped SAPs using 4-priority scheduling is to have coherent scheduler behavior (one scheduling model) across all SAPs.
In order for unshaped 4-priority SAPs to compete fairly for bandwidth with 16-priority shaped and unshaped SAPs, a single, aggregate CIR for all the 4-priority SAPs can be configured. This aggregate CIR is applied to all the 4-priority SAPs as a group, not to individual SAPs. In addition, the aggregate CIR is configured differently for access ingress and access egress traffic. On the 7705 SAR-8 Shelf V2 and 7705 SAR-18, access ingress is configured in the config>qos>fabric-profile context. On the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, access ingress is configured in the config>system>qos>access-ingress-aggregate-rate context. For all platforms, access egress configuration uses the config>port>ethernet context.
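As a sketch of the agg-rate-limit command described above (the service ID, SAP context, and rates are hypothetical; rates are in kb/s):

```
# Shaped 16-priority SAP on an Epipe: PIR 100 Mb/s, CIR 10 Mb/s
config>service>epipe>sap# agg-rate-limit 100000 cir 10000
```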
For more information about access ingress scheduling and traffic arbitration from the 16-priority and 4-priority schedulers toward the fabric, see Access ingress per-SAP aggregate shapers (access ingress H-QoS).
Per-SAP aggregate shaper support
The per-SAP aggregate shapers are supported in both access ingress and access egress directions and can be enabled on the following Ethernet access ports:
6-port Ethernet 10Gbps Adapter card
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card (10-port GigE mode)
Packet Microwave Adapter card
7705 SAR-A
7705 SAR-Ax
7705 SAR-M
7705 SAR-H (all Ethernet access ports except those on the 4-port SAR-H Fast Ethernet module)
7705 SAR-Hc
7705 SAR-Wx
7705 SAR-X
H-QoS example
A typical example in which H-QoS is used is where a transport provider uses a 7705 SAR as a PE device and sells 100 Mb/s of fixed bandwidth for point-to-point Internet access, and offers premium treatment to 10% of the traffic. A customer can mark up to 10% of their critical traffic such that it is classified into high-priority queues and serviced prior to low-priority traffic.
Without H-QoS, there is no way to enforce a limit to ensure that the customer does not exceed the leased 100 Mb/s bandwidth, as illustrated in the following two scenarios:
If a queue hosting high-priority traffic is serviced at 10 Mb/s and the low-priority queue is serviced at 90 Mb/s, then at a moment when the customer transmits less than 10 Mb/s of high-priority traffic, the customer bandwidth requirement is not met (the transport provider transports less traffic than the contracted rate).
If the scheduling rate for the high-priority queue is set to 10 Mb/s and the rate for low-priority traffic is set to 100 Mb/s, then when the customer transmits both high- and low-priority traffic, the aggregate amount of bandwidth consumed by customer traffic exceeds the contracted rate of 100 Mb/s and the transport provider transports more traffic than the contracted rate.
The second-tier shaper—that is, the per-SAP aggregate shaper—is used to limit the traffic at a configured rate on a per-SAP basis. The per-queue rates and behavior are not affected when the aggregate shaper is enabled. That is, as long as the aggregate rate is not reached then there are no changes to the behavior. If the aggregate rate limit is reached, then the per-SAP aggregate shaper throttles the traffic at the configured aggregate rate while preserving the 16-priority scheduling priorities that are used on shaped SAPs.
Per-VLAN network egress shapers
This section provides information about per-VLAN network egress shapers for Gen-2 adapter cards and platforms. For information about Gen-3 adapter cards and platforms, see QoS for Gen-3 adapter cards and platforms.
The 7705 SAR supports a set of eight network egress queues on a per-port or on a per-VLAN basis for network Ethernet ports. Eight unique per-VLAN CoS queues are created for each VLAN when a per-VLAN shaper is enabled. When using per-VLAN shaper mode, in addition to the per-VLAN eight CoS queues, there is a single set of eight queues for hosting traffic from all unshaped VLANs, if any. VLAN shapers are enabled on a per-interface basis (that is, per VLAN) when a network queue policy is assigned to the interface. See Per-VLAN shaper support for a list of cards and nodes that support per-VLAN shapers.
On a network port with dot1q encapsulation, shaped and unshaped VLANs can coexist. In such a scenario, each shaped VLAN has its own set of eight CoS queues and is shaped with its own configured dual-rate shaper. The remaining VLANs (that is, the unshaped VLANs) are serviced using the unshaped-if-cir rate, which is configured using the config>port>ethernet>network>egress>unshaped-if-cir command. Assigning a rate to the unshaped VLANs is required for arbitration between the shaped VLANs and the bulk (aggregate) of unshaped VLANs, where each shaped VLAN has its own shaping rate while the aggregate of the unshaped VLANs has a single rate assigned to it.
Per-VLAN shapers are supported on dot1q-encapsulated ports. They are not supported on null- or qinq-encapsulated ports.
The following figure illustrates the queuing and scheduling blocks for network egress VLAN traffic.
Shaped and unshaped VLANs
Shaped VLANs have user-configured rate limits (PIR and CIR)—called the aggregate rate limit—and must use 16-priority scheduling mode. Shaped VLANs operate on a per-interface basis and are enabled after a network queue policy is assigned to the interface. If a VLAN does not have a network queue policy assigned to the interface, it is considered an unshaped VLAN.
To configure a shaped VLAN with aggregate rate limits, use the agg-rate-limit command in the config>router>if>egress context. If the VLAN shaper is not enabled, the agg-rate-limit settings do not apply. The default aggregate rate limit (PIR) is set to the port egress rate.
Unshaped VLANs use default rate limits (PIR is the maximum possible port rate and CIR is 0 kb/s) and use 16-priority scheduling mode. All unshaped VLANs are classified, queued, buffered, and scheduled into an aggregate flow that gets prepared for third-tier arbitration by a single VLAN aggregate shaper.
In order for the aggregated unshaped VLANs to compete fairly for bandwidth with the shaped VLANs, a single, aggregate CIR for all the unshaped VLANs can be configured using the unshaped-if-cir command. The aggregate CIR is applied to all the unshaped VLANs as a group, not to individual VLANs, and is configured in the config>port>ethernet> network>egress context.
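As a sketch of the two commands above (interface context and rates are hypothetical; rates are in kb/s):

```
# Per-VLAN shaper on a network interface: PIR 50 Mb/s, CIR 10 Mb/s
# (takes effect once a network queue policy is assigned to the interface)
config>router>if>egress# agg-rate-limit 50000 cir 10000

# Single aggregate CIR for the bulk of unshaped VLANs on the port
config>port>ethernet>network>egress# unshaped-if-cir 20000
```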
Per-VLAN shaper support
The following cards and nodes support network egress per-VLAN shapers:
6-port Ethernet 10Gbps Adapter card
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card (1-port 10GigE mode and 10-port 1GigE mode)
Packet Microwave Adapter card (includes 1+1 redundancy)
v-port on the 2-port 10GigE (Ethernet) Adapter card/module
7705 SAR-A
7705 SAR-Ax
7705 SAR-H
7705 SAR-Hc
7705 SAR-M
7705 SAR-Wx
7705 SAR-X
Per-VLAN network egress shapers are not supported on the following:

Fast Ethernet ports (including ports 9 to 12 on the 7705 SAR-A)

Gigabit Ethernet ports in Fast Ethernet mode

non-datapath Ethernet ports (for example, the Management port)
VLAN shaper applications
This section describes the following two scenarios:
VLAN shapers for dual uplinks
One of the main uses of per-VLAN network egress shapers is to enable load balancing across dual uplinks out of a spoke site. VLAN shapers for dual uplinks represents a typical hub-and-spoke mobile backhaul topology. To ensure high availability through the use of redundancy, a mobile operator invests in dual 7750 SR nodes at the MTSO. Dual 7750 SR nodes at the MTSO offer equipment protection, as well as protection against backhaul link failures.
In this example, the cell site 7705 SAR is dual-homed to 7750 SR_1 and SR_2 at the MTSO, using two disjoint Ethernet virtual connections (EVCs) leased from a transport provider. Typically, the EVCs have the same capacity and operate in a forwarding/standby manner: one EVC transports all the user/mobile traffic to and from the cell site at any given time, while the other transports only minor volumes of control plane traffic between network elements (the 7705 SAR and the 7750 SR). Leasing two EVCs of the same capacity and actively using only one of them wastes bandwidth and is expensive (the mobile operator pays for two EVCs of the same capacity).
Mobile operators with increasing volumes of mobile traffic look for ways to use both of the EVCs simultaneously, in an active/active manner. In this case, using per-VLAN shapers would ensure that each EVC is loaded up to the leased capacity. Without per-VLAN shapers, the 7705 SAR supports a single per-port shaper, which does not meet the active/active requirement:
If the egress rate is set to twice the EVC capacity, either one of the EVCs can end up with more traffic than its capacity.
If the egress rate is set to the EVC capacity, half of the available bandwidth can never be consumed, which is similar to the 7705 SAR having no per-VLAN egress shapers.
VLAN shapers for aggregation site
Another typical use of per-VLAN shapers at network egress is shown in VLAN shapers in aggregation site scenario. The figure shows a hub-and-spoke mobile backhaul network where EVCs leased from a transport provider are groomed to a single port, typically a 10-Gigabit Ethernet or a 1-Gigabit Ethernet port, at the hand-off point at the hub site. Traffic from different cell sites is handed off to the aggregation node over a single port, where each cell site is uniquely identified by the VLAN assigned to it.
In the network egress direction of the aggregation node, per-VLAN shaping is required to ensure traffic to different cell sites is shaped at the EVC rate. The EVC for each cell site would typically have a different rate. Therefore, every VLAN feeding into a particular EVC needs to be shaped at its own rate. For example, compare a relatively small cell site (Cell Site-1) at 20 Mb/s rate with a relatively large cell site (Cell Site-2) at 200 Mb/s rate. Without the granularity of per-VLAN shaping, shaping only at the per-port level cannot ensure that an individual EVC does not exceed its capacity.
Per-customer aggregate shapers (multiservice site) on Gen-2 hardware
This section provides information about per-customer aggregate shapers for Gen-2 adapter cards and platforms. For information about Gen-3 adapter cards and platforms, see QoS for Gen-3 adapter cards and platforms.
A per-customer aggregate shaper is an aggregate shaper into which multiple SAP aggregate shapers can feed. The SAPs can be shaped at a desired rate, called the Multiservice Site (MSS) aggregate rate. At ingress, SAPs that are bound to a per-customer aggregate shaper can span a whole Ethernet MDA, meaning that SAPs mapped to the same MSS can reside on any port on a given Ethernet MDA.
At egress, SAPs that are bound to a per-customer aggregate shaper can only span a port. Toward the fabric at ingress and toward the port at egress, multiple per-customer aggregate shapers are shaped at their respective configured rates to ensure fair sharing of available bandwidth among different per-customer aggregate shapers. Deep ingress queuing capability ensures that traffic bursts are absorbed rather than dropped. Multi-tier shapers are based on an end-to-end backpressure mechanism that uses the following order (egress is given as an example):
per-port egress rate (if configured), backpressures to
per-customer aggregate shapers, backpressures to
per-SAP aggregate shapers, backpressures to
per-CoS queues (in the scheduling priority order)
To configure per-customer aggregate shaping, a shaper policy must be created and shaper groups must be created within that shaper policy. For access ingress per-customer aggregate shaping, a shaper policy must be assigned to an Ethernet MDA and SAPs on that Ethernet MDA must be bound to a shaper group within the shaper policy bound to that Ethernet MDA. For access egress per-customer aggregate shaping, a shaper policy must be assigned to a port and SAPs on that port must be bound to a shaper group within the shaper policy bound to that port. The unshaped SAP shaper group within the policy provides the shaper rate for all the unshaped SAPs (4-priority scheduled SAPs). For each shaped SAP, however, an ingress or egress shaper group can be specified. For more information about shaper policies, see Applying a shaper QoS policy and shaper groups.
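The configuration steps above can be sketched in CLI form. The policy and group names and the rate values below are hypothetical, and the exact rate syntax within a shaper group is an assumption; see Applying a shaper QoS policy and shaper groups for the authoritative commands.

```
# Create a shaper policy containing two shaper groups (names and rates are examples)
config>qos# shaper-policy "cust-agg" create
config>qos>shaper-policy# shaper-group "cust-A" create
config>qos>shaper-policy>shaper-group# rate 50000 cir 20000    # PIR/CIR; units assumed kb/s
config>qos>shaper-policy>shaper-group# exit
config>qos>shaper-policy# shaper-group "cust-B" create
config>qos>shaper-policy>shaper-group# rate 100000 cir 40000
```

SAPs are then bound to one of these shaper groups at ingress or egress, as described above.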
The access ingress shaper policy is configured at the MDA level for fixed platforms. The default value for an access ingress shaper policy for each MDA and module is blank, as configured using the no shaper-policy command. On all 7705 SAR fixed platforms (with the exception of the 7705 SAR-X), when no MSS is configured, the existing access ingress aggregate rate is used as the shaper rate for the bulk of access ingress traffic. In order to use MSS, a shaper policy must be assigned to the access ingress interface of one MDA, and the shaper policy change is cascaded to all MDAs and modules in the chassis.
Before the access ingress shaper policy is assigned, the config system qos access-ingress-aggregate-rate 10000000 unshaped-sap-cir max command must be configured. Once a shaper policy is assigned to an access ingress MDA, the values configured using the access-ingress-aggregate-rate command cannot be changed.
On all 7705 SAR fixed platforms (with the exception of the 7705 SAR-X), when a shaper policy is assigned to an Ethernet MDA for access ingress aggregate shaping, it is automatically assigned to all the Ethernet MDAs in that chassis. The shaper group members contained in the shaper policy span all the Ethernet MDAs. SAPs on different Ethernet MDAs configured with the same ingress shaper group will share the shaper group rate.
Once the first MSS is configured, traffic from the designated SAPs is mapped to the MSS and shaped at the configured rate. The remainder of the traffic is then shaped according to the configured unshaped SAP rate. When a second MSS is added, SAPs that are mapped to the second MSS are shaped at the configured rate and traffic is arbitrated between the first MSS, the second MSS and unshaped SAP traffic.
In the access egress direction, the default shaper policy is assigned to each MSS-capable port. Ports that cannot support MSS are assigned a blank value, as configured using the no shaper-policy command. The default egress shaper group is assigned to each egress SAP that supports MSS. If the SAP does not support MSS, the egress SAP is assigned a blank value, as configured using the no shaper-group command.
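As a sketch of the bindings described above, assuming hypothetical identifiers and context paths (only the command names shaper-policy and shaper-group are taken from this section):

```
# Access egress: assign a shaper policy to a port (context path assumed)
config>port# ethernet access egress shaper-policy "cust-agg"

# Bind an egress SAP on that port to a shaper group in the policy (path assumed)
config>service>epipe>sap# egress shaper-group "cust-A"
```

Any SAP on the port that is not bound to a specific shaper group falls under the unshaped SAP shaper group of the policy.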
MSS support
The following cards, modules, and platforms support MSS:
6-port Ethernet 10Gbps Adapter card
8-port Gigabit Ethernet Adapter card
10-port 1GigE/1-port 10GigE X-Adapter card
Packet Microwave Adapter card
6-port SAR-M Ethernet module
7705 SAR-A
7705 SAR-Ax
7705 SAR-H
7705 SAR-Hc
7705 SAR-M
7705 SAR-Wx
7705 SAR-X
4-port SAR-H Fast Ethernet module
Fast Ethernet ports on the 7705 SAR-A
MSS and LAG interaction on the 7705 SAR-8 Shelf V2 and 7705 SAR-18
A SAP that uses a LAG can include two or more ports from the same adapter card or from two different adapter cards.
In the access egress direction, each port can be assigned a shaper policy for access and can have shaper groups configured with different shaping rates. If a shaper group is not defined, the default shaper group is used. The port egress shaper policy, when configured on a LAG, must be configured on the primary LAG member. This shaper policy is propagated to each of the LAG port members, ensuring that each LAG port member has the same shaper policy.
The following egress MSS restrictions ensure that both active and standby LAG members have the same configuration:
When a SAP is created using a LAG, the default shaper group is automatically assigned to the SAP, regardless of whether the LAG has any port members and regardless of whether the egress scheduler mode is 4-priority or 16-priority.
Shaper groups cannot be changed from the default if they are assigned to SAPs that use LAGs with no port members.
The last LAG port member cannot be deleted from a LAG that is used by any SAP that is assigned a non-default shaper group.
The shaper policy or shaper group is not checked when the first port is added as a member of a LAG. When a second port is added as a member of a LAG, it can only be added if the shaper policy on the second port is the same as the shaper policy on the first member port of the LAG.
The shaper group of a LAG SAP can be changed to a non-default shaper group only if the new shaper group exists in the shaper policy used by the active LAG port member.
A shaper group cannot be deleted if it is assigned to unshaped SAPs (unshaped-sap-shaper-group command) or if it is used by at least one LAG SAP or non-LAG SAP.
The shaper policy assigned to a port cannot be changed unless all of the SAPs on that port are assigned to the default shaper group.
In the ingress direction, there can be two different shaper policies on two different adapter cards for the two port members in a LAG. When assigning a shaper group to an ingress LAG SAP, each shaper policy assigned to the LAG port MDAs must contain that shaper group or the shaper group cannot be assigned. In addition, after a LAG activity switch occurs, the CIR/PIR configuration from the shaper group in the policy of the adapter card of the newly active member is used.
The following ingress MSS restrictions allow the configuration of shaper groups for LAG SAPs, but the router ignores shaper groups that do not meet the restrictions:
When a SAP is created using a LAG, the default shaper group is automatically assigned to the SAP, regardless of whether the LAG has any port members and regardless of whether the ingress scheduler mode is 4-priority or 16-priority.
Shaper groups cannot be changed from the default if they are assigned to SAPs that use LAGs with no port members.
The last LAG port member cannot be deleted from a LAG that is used by any SAP that is assigned a non-default shaper group.
The shaper policy or shaper group is not checked when the first port is added as a member of a LAG. When a second port is added as a member of a LAG, all SAPs using the LAG are checked to ensure that any non-default shaper groups already configured on the SAPs are part of the shaper policy assigned to the adapter card of the second port. If the check fails, the port member is rejected.
The shaper group of a LAG SAP can be changed to a non-default shaper group only if the new shaper group exists in the shaper policies used by all adapter cards of all LAG port members.
A shaper group cannot be deleted if it is assigned to unshaped SAPs (unshaped-sap-shaper-group command) or if it is used by at least one LAG SAP or non-LAG SAP.
The shaper policy assigned to an adapter card cannot be changed unless all of the SAPs on that adapter card are assigned to the default shaper group.
QoS for hybrid ports on Gen-2 hardware
This section provides information about QoS for hybrid ports on Gen-2 adapter cards and platforms. For information about Gen-3 adapter cards and platforms, see QoS for Gen-3 adapter cards and platforms.
In the ingress direction of a hybrid port, traffic management behavior is the same as it is for access and network ports. See Access ingress and Network ingress.
In the egress direction of a hybrid port, access and network aggregate shapers are used to arbitrate between the bulk (aggregate) of access and network traffic flows. As shown in Hybrid port egress shapers and schedulers on Gen-2 hardware, on the access side (above the solid line), both the access egress SAP aggregates (#1) and the unshaped SAP shaper (#2) feed into the access egress aggregate shaper (#3). On the network side (below the solid line), both the per-VLAN shapers (#4) and the unshaped interface shaper (#5) feed into the network egress aggregate shaper (#6). Then, the access and the network aggregate shapers are arbitrated in a dual-rate manner, in accordance with their respective configured committed and peak rates (#7). As a last step, the egress-rate for the port (when configured) applies backpressure to both the access and the network aggregate shapers, which apply backpressure all the way to the FC queues belonging to both access and network traffic.
As part of the hybrid port traffic management solution, access and network second-tier shapers are bound to access and network aggregate shapers, respectively. The hybrid port egress datapath can be visualized as access and network datapaths that coexist separately up until the access and network aggregate shapers at Tier 3 (#3 and #6).
In the figure, the top half is identical to access egress traffic management, where CoS queues (Tier 1) feed into either per-SAP shapers for shaped SAPs (#1) or a single second-tier shaper for all unshaped SAPs (#2). Up to the end of the second-tier, per-SAP aggregate shapers, the access egress datapath is maintained in the same manner as an Ethernet port in access mode. The same logic applies for network egress. The bottom half of the figure shows the datapath from the CoS queues to the per-VLAN shapers, which is identical to the datapath for any other Ethernet port in network mode.
The main difference between hybrid mode and access and network modes is shown when the access and the network traffic is arbitrated toward the port (Tier 3). At this point, a new set of dual-rate shapers (called shaper groups) are introduced: one shaper for the aggregate (bulk) of the access traffic (#3) and another shaper for the aggregate of the network traffic (#6), to ensure rate-based arbitration among access and network traffic.
Depending on the use and the application, the committed rate for any one mode of flow may need to be fine-tuned to minimize delay, jitter and loss. In addition, through the use of egress-rate limiting, a fourth level of shaping can be achieved.
When egress-rate is configured (under config>port>ethernet), the following events occur:
egress-rate applies backpressure to the access and network aggregate shapers
as a result, the aggregate shapers apply backpressure to the per-SAP and per-VLAN aggregate shapers
access aggregate shapers apply backpressure to the per-SAP aggregate shapers and the unshaped SAP aggregate shaper
network aggregate shapers apply backpressure to the per-VLAN aggregate shapers and the unshaped VLAN aggregate shaper
as a result, the per-SAP and per-VLAN aggregate shapers apply backpressure to their respective CoS queues
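The trigger for this backpressure chain is the port-level egress rate. A minimal sketch, with a hypothetical port identifier and rate value (units assumed to be kb/s):

```
# Configure the port egress rate on a hybrid port (rate value is an example)
config# port 1/1/1
config>port# ethernet
config>port>ethernet# egress-rate 500000
```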
QoS for Gen-3 adapter cards and platforms
Third-generation (Gen-3) Ethernet adapter cards and Ethernet ports on Gen-3 platforms support 4-priority scheduling.
The main differences between Gen-3 hardware and Gen-2 hardware are that on Gen-3 hardware:
SAPs are shaped (that is, no unshaped SAPs)
SAPs and VLANs are shaped by 4-priority schedulers, not 16-priority schedulers
4-priority scheduling is done on a per-SAP basis
backpressure is applied according to relative priority across VLANs and interfaces. That is, scheduling is carried out in priority order, ignoring per-VLAN and per-interface boundaries. Conforming, expedited traffic across all queues is scheduled regardless of the VLAN boundaries. After all the conforming, expedited traffic across all queues has been serviced, the servicing of conforming, best effort traffic begins.
See Scheduling modes for a summary of scheduler mode support. For information about adapter card generations, see the ‟Evolution of Ethernet Adapter Cards, Modules, and Platforms” section in the 7705 SAR Interface Configuration Guide.
The following table describes the access, network, and hybrid port scheduling behavior for Gen-3 hardware and compares it with the scheduling behavior of Gen-2 hardware.
| Port type | Configuration | Gen-3 hardware with 4-priority mode | Gen-2 hardware with 4-priority mode |
|---|---|---|---|
| Access | Within a SAP | EXP over BE | EXP over BE |
| Access | Default configuration | Simple round-robin (RR) scheduling among SAPs | EXP (across all queues, no SAP boundaries) over BE |
| Access | H-QoS and MSS aggregate shapers | RR among aggregates based on PIR and CIR (SAP at tier 2, MSS at tier 3) | N/A |
| Network | Default configuration (8 queues per port) | EXP over BE | EXP over BE |
| Network | Per-VLAN shaper | EXP over BE | RR among VLAN shapers based on PIR and CIR |
| Hybrid | Default configuration (8 queues per port) | EXP over BE | EXP (across all SAPs and network queues) over BE |
| Hybrid | Per-VLAN shaper | RR among VLAN shapers based on PIR and CIR | RR among VLAN shapers based on PIR and CIR |
In summary, the following updates to Gen-3 scheduling are implemented:
enabled CIR-based shaping for:
per-SAP aggregate ingress and egress shapers
per-customer aggregate ingress and egress shapers
per-VLAN shaper at network egress when port is in hybrid mode
access and network aggregate shapers for hybrid ports
disabled backpressure to the FC queues dependent on the relative priority among all VLAN-bound IP Interfaces at:
access ingress and access egress when port is in access mode
access ingress and access egress when port is in hybrid mode
network ingress
network egress when port is in hybrid mode
6-port SAR-M Ethernet module
The egress datapath shapers on a 6-port SAR-M Ethernet module operate on the same frame size as any other shaper. These egress datapath shapers are:
per-queue shapers
per-SAP aggregate shapers
per-customer aggregate (MSS) shapers
The egress port shaper on a 6-port SAR-M Ethernet module does not account for the 4-byte FCS. Packet byte offset can be used to make adjustments to match the desired operational rate or eliminate the implications of FCS. See Packet byte offset for more information.
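For example, if the shaped rate should account for the 4-byte FCS that the port shaper ignores, a packet byte offset along the following lines could be applied. The command context and syntax shown here are assumptions; see Packet byte offset for the authoritative usage.

```
# Add 4 bytes to the accounted frame size of a queue to compensate for the FCS
# (queue number and context are examples; syntax is assumed)
config>qos>network-queue>queue# packet-byte-offset add 4
```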
4-priority scheduling behavior on Gen-3 hardware
- 4-priority scheduling at access ingress (Gen-3 hardware) (access ingress)
- 4-priority scheduling at access egress (Gen-3 hardware) (access egress)
- 4-priority scheduling at network ingress (Gen-3 hardware): per-destination mode (network ingress, destination mode)
- 4-priority scheduling at network ingress (Gen-3 hardware): aggregate mode (network ingress, aggregate mode)
For network egress traffic through a network port on Gen-3 hardware, the behavior of 4-priority scheduling is as follows: traffic priority is determined at the queue-level scheduler, which is based on the queue PIR and CIR and the queue type. The queue-level priority is carried through the various shaping stages and is used by the 4-priority Gen-3 VLAN scheduler at network egress. See 4-priority scheduling at network egress (Gen-3 hardware) on a network port and its accompanying description.
For hybrid ports, both access and network egress traffic use 4-priority scheduling that is similar to 4-priority scheduling on Gen-2 hardware. See 4-priority scheduling for hybrid port egress (Gen-3 hardware) and its accompanying description.
In the following figure, the shaper groups all belong within one shaper policy and only one shaper policy is assigned to an ingress adapter card. Each SAP can be associated with one shaper group. Multiple SAPs can be associated with the same shaper group. All the access ingress traffic flows to the access ingress fabric shaper, in-profile (conforming) traffic first, then out-of-profile (non-conforming) traffic. Network ingress traffic functions similarly.
The 4-priority schedulers on Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP basis.
The following figure shows 4-priority scheduling for access egress on Gen-3 hardware. QoS behavior for access egress is similar to QoS behavior for access ingress.
The 4-priority schedulers on Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP basis.
4-priority scheduling at network ingress (Gen-3 hardware): per-destination mode and 4-priority scheduling at network ingress (Gen-3 hardware): aggregate mode show network ingress scheduling for per-destination and aggregate modes, which are configured under the fabric-profile command. Traffic arriving on a network port is examined for its destination MDA and directed to the QoS block that sends traffic to the appropriate MDA. There is one set of queues for each block, and an additional set for multipoint traffic.
In the following figure, there is one per-destination shaper for each destination MDA.
In the following figure, there is a single shaper to handle all the traffic.
The following figure shows 4-priority scheduling for Gen-3 hardware at network egress. Queue-level CIR and PIR values and the queue type are determined at queuing and provide the scheduling priority for a specific flow across all shapers toward the egress port (#1 in the figure). At the per-VLAN aggregate level (#2), only a single rate—the total aggregate rate (PIR)—can be configured; CIR configuration is not supported at the per-VLAN aggregate-shaper level for network egress traffic. All VLANs are aggregated and scheduled by a 4-priority aggregate scheduler (#3). The flow is then fed to the port shaper and processed at the egress rate. In case of congestion, the port shaper provides backpressure, resulting in the buffering of traffic by individual FC queues until the congested state ends.
The following figure shows 4-priority scheduling for Gen-3 hardware where ports are in hybrid mode. The QoS behavior for both access and network egress traffic is similar except that the access egress path includes tier 3, per-customer aggregate shapers. Access and network shapers prepare and present traffic to the port shaper, which arbitrates between access and network traffic.
The 4-priority schedulers on the Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP or a per-VLAN basis (for access egress and network egress, respectively).
Gen-3 hardware and LAG
When a Gen-3-based port and a Gen-2-based port are attached to a LAG SAP (also referred to as a mix-and-match LAG), the scheduler mode must be configured for the LAG SAP; it is used by the Gen-2 ports but ignored by the Gen-3 ports.
For more information, see the ‟LAG Support on Third-Generation Ethernet Adapter Cards, Ports, and Platforms” section in the 7705 SAR Interface Configuration Guide.
QoS on a ring adapter card or module
This section contains overview information as well as information about the following topics:
Network QoS and network queue policies on a ring adapter card or module
Considerations for using ring adapter card or module QoS policies
The following figure shows a simplified diagram of the ports on a 2-port 10GigE (Ethernet) Adapter card (also known as a ring adapter card). The ports can also be conceptualized the same way for a 2-port 10GigE (Ethernet) module. A ring adapter card or module has physical Ethernet ports used for Ethernet bridging in a ring network (labeled Port 1 and Port 2 in the figure). These ports are referred to as the ring ports because they connect to the Ethernet ring. The ring ports operate on the Layer 2 bridging domain side of the ring adapter card or module, as does the add/drop port, which is an internal port on the card or module.
On the Layer 3 IP domain side of a ring adapter card or module, there is a virtual port (v-port) and a fabric port. The v-port is also an internal port. Its function is to help control traffic on the IP domain side of a ring adapter card or module.
To manage ring and add/drop traffic mapping to queues in the Layer 2 bridging domain, a ring type network QoS policy can be configured for the ring at the adapter card level (under the config>card>mda context). To manage ring and add/drop traffic queuing and scheduling in the Layer 2 bridging domain, network queue QoS policies can be configured for the ring ports and the add-drop port.
To manage add/drop traffic classification and remarking in the Layer 3 IP domain, IP-interface type network QoS policies can be configured for router interfaces on the v-port. To manage add/drop traffic queuing and scheduling in the Layer 3 IP domain, network queue QoS policies can be configured for the v-port and at network ingress at the adapter card level (under the config>card>mda context).
Network and network queue QoS policy types
All ports on a ring adapter card or module are possible congestion points and therefore can have network queue QoS policies applied to them.
In the bridging domain, a single ring type network QoS policy can be applied at the adapter card level and operates on the ring ports and the add/drop port. In the IP domain, IP interface type network QoS policies can be applied to router interfaces.
Network QoS policies are created using the config>qos>network command, which includes the network-policy-type keyword to specify the type of policy:
ring (for bridging domain policies)
ip-interface (for network ingress and network egress IP domain policies)
default (for setting the policy to policy 1, the system default policy)
When the policy has been created, its default action and classification mapping can be configured.
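For example, the ring and ip-interface policy types can be created as follows (the policy IDs are hypothetical):

```
# Ring type policy for the Layer 2 bridging domain
config>qos# network 10 network-policy-type ring create

# IP-interface type policy for the Layer 3 IP domain
config>qos# network 20 network-policy-type ip-interface create
```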
Network QoS and network queue policies on a ring adapter card or module
Network QoS policies are applied to the ring ports and the add/drop port using the qos-policy command found under the config>card>mda context. These ports are not explicitly specified in the command.
Network queue QoS policies are applied to the ring ports and the v-port using the queue-policy command found under the config>port context. Similarly, a network queue policy is applied to the add/drop port using the add-drop-port-queue-policy command, found under the config>card>mda context. The add/drop port is not explicitly specified in this command.
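Putting the commands named above together, the application points can be sketched as follows. The policy identifiers and the exact sub-contexts are hypothetical; only the command names are taken from this section.

```
# Bridging domain: ring network QoS policy at the adapter card level
config>card>mda# qos-policy 10

# Ring ports and v-port: network queue policy under the port context
config>port# queue-policy "ring-queues"

# Add/drop port: network queue policy at the adapter card level
config>card>mda# add-drop-port-queue-policy "adp-queues"
```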
The CLI commands for applying QoS policies are listed in this guide. The CLI command descriptions are in the 7705 SAR Interface Configuration Guide.
Considerations for using ring adapter card or module QoS policies
The following notes apply to configuring and applying QoS policies to a ring adapter card or module, as well as other adapter cards:
The ring ports and the add/drop port cannot use a network queue policy that is being used by the v-port or the network ingress port or any other port on other cards, and vice versa. This does not apply to the default network queue policy.
If a network-queue policy is assigned to a ring port or the add/drop port, all queues that are configured in the network-queue policy are created regardless of any FC mapping to the queues. All FC queue mapping in the network-queue policy is meaningless and is ignored.
If the QoS policy has a dot1p value mapped to a queue that is not configured in the network-queue policy, the traffic of this dot1p value is sent to queue 1.
If a dot1p-to-queue mapping is defined in a network QoS policy, and the queue is not configured on any ring port or the add/drop port, all traffic received on a port is sent out from queue 1 on the other two ports. For example, if traffic is received on port 1, it is sent out on port 2 and/or the add/drop port.
Upon provisioning an MDA slot for a ring adapter card or module (config>card>mda>mda-type), an additional eight network ingress queues are allocated to account for the network queues needed for the add/drop port. When the ring adapter card or module is deprovisioned, the eight queues are deallocated.
The number of ingress network queues can be displayed using the tools>dump>system-resource command.
QoS for IPSec traffic
For specific information about QoS for IPSec traffic, see the ‟QoS” section in the ‟IPSec” chapter in the 7705 SAR Services Guide.
QoS for network group encryption traffic
The 7705 SAR provides priority and scheduling for traffic into the encryption and decryption engines on nodes that support network group encryption (NGE). This applies to traffic at network ingress or network egress.
For specific information, see the ‟QoS for NGE Traffic” section in the ‟Network Group Encryption” chapter in the 7705 SAR Services Guide.
Access ingress
This section contains the following topics for traffic flow in the access ingress direction:
Access ingress traffic classification
Traffic classification identifies a traffic flow and maps the packets belonging to that flow to a preset forwarding class, so that the flow can receive the required special treatment. Up to eight forwarding classes are supported for traffic classification. See Default forwarding classes for a list of these forwarding classes.
For TDM channel groups, all of the traffic is mapped to a single forwarding class. Similarly, for ATM VCs, each VC is linked to one forwarding class. On Ethernet ports and VLANs, up to eight forwarding classes can be configured based on 802.1p (dot1p) bits or DSCP bits classification. On PPP/MLPPP, FR (for Ipipes), or cHDLC SAPs, up to eight forwarding classes can be configured based on DSCP bits classification. FR (for Fpipes) and HDLC SAPs are mapped to one forwarding class.
If an Ethernet port is set to null encapsulation, the dot1p value has no meaning and cannot be used for classification purposes.
If a port or SAP is set to qinq encapsulation, use the match-qinq-dot1p top | bottom command to indicate which qtag contains the dot1p bits that are used for classification purposes. The match-qinq-dot1p command is found under the config>service context. See the 7705 SAR Services Guide, ‟VLL Services Command Reference”, for details.
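For a qinq SAP, the tag selection can be sketched as follows; the service and SAP identifiers are hypothetical, and the exact position of the command within the config>service context is an assumption:

```
# Use the dot1p bits of the top (outer) tag for classification (IDs are examples)
config>service# epipe 100
config>service>epipe# sap 1/2/3:10.20 create
config>service>epipe>sap# match-qinq-dot1p top
```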
After the classification takes place, forwarding classes are mapped to queues as described in the sections that follow.
Traffic classification types
The various traffic classification methods used on the 7705 SAR are described in the following table. A list of classification rules follows the table.
| Traffic classification based on... | Description |
|---|---|
| a channel group (n ✕ DS0) | Applies to 16-port T1/E1 ASAP Adapter card and 32-port T1/E1 ASAP Adapter card ports, 2-port OC3/STM1 Channelized Adapter card ports, 12-port Serial Data Interface card ports, 4-port T1/E1 and RS-232 Combination module ports, and 6-port E&M Adapter card ports in structured or unstructured circuit emulation mode. In this mode, a number of DS0s are transported within the payload of the same Circuit Emulation over Packet Switched Networks (CESoPSN), Circuit Emulation over Ethernet (CESoETH), or Structure-Agnostic TDM over Packet (SAToP) packet. Therefore, the timeslots transporting the same type of traffic are classified all at once. |
| an ATM VCI | On ATM-configured ports, any virtual connection, regardless of service category, is mapped to the configured forwarding class. One-to-one mapping is the only supported option. Both VP- and VC-based classifications are supported: a VC with a specified VPI and VCI is mapped to the configured forwarding class, and a VP connection with a specified VPI is mapped to the configured forwarding class. |
| an ATM service category | Similar ATM service categories can be mapped to the same forwarding class. Traffic from a given VC with a specified service category is mapped to the configured forwarding class. VC selection is based on the ATM VC identifier. |
| an Ethernet port | All the traffic from an access ingress Ethernet port is mapped to the selected forwarding class. More granular classification can be performed based on the dot1p or DSCP bits of the incoming packets. Classification rules applied to traffic flows on Ethernet ports function in the same way as access/filter lists. There can be multiple tiers of classification rules associated with an Ethernet port; in this case, classification is performed based on the priority of the classifier. The order of the priorities is described in Hierarchy of classification rules. |
| an Ethernet VLAN (dot1q or qinq) | Traffic from an access Ethernet VLAN (dot1q or qinq) interface can be mapped to a forwarding class. Each VLAN can be mapped to one forwarding class. |
| IEEE 802.1p bits (dot1p) | The dot1p bits in the Ethernet/VLAN ingress packet headers are used to map the traffic to up to eight forwarding classes. |
| PPP/MLPPP, FR (for Ipipes), and cHDLC SAPs | Traffic from an access ingress SAP is mapped to the selected forwarding class. More granular classification can be performed based on the DSCP bits of the incoming packets. |
| FR (for Fpipes) and HDLC SAPs | Traffic from an access ingress SAP is mapped to the selected (default) forwarding class. |
| DSCP bits | When the Ethernet payload is IP, ingress traffic can be mapped to a maximum of eight forwarding classes based on DSCP bit values. DSCP-based classification supports untagged, single-tagged, double-tagged, and triple-tagged Ethernet frames. If an ingress frame has more than three VLAN tags, dot1q or qinq dot1p-based classification must be used. |
| multi-field classifiers | Traffic is classified based on any IP criteria currently supported by the 7705 SAR filter policies; for example, source and destination IP address, source and destination port, whether or not the packet is fragmented, ICMP code, and TCP state. For information about multi-field classification, see the 7705 SAR Router Configuration Guide, ‟Multi-field Classification (MFC)” and ‟IP, MAC, and VLAN Filter Entry Commands”. |
Hierarchy of classification rules
The following table shows classification options for various access entities (SAP identifiers) and service types. For example, traffic from a TDM port using a TDM (Cpipe) PW maps to one FC (all traffic has the same CoS). Traffic from an Ethernet port using an Epipe PW can be classified to as many as eight FCs based on DSCP classification rules, while traffic from a SAP with dot1q or qinq encapsulation can be classified to up to eight FCs based on dot1p or DSCP rules.
For Ethernet traffic, dot1p-based classification for dot1q or QinQ SAPs takes precedence over DSCP-based classification. For null-encapsulated Ethernet ports, only DSCP-based classification applies. In either case, when defining classification rules, a more specific match rule is always preferred to a general match rule.
For more information about hierarchy rules, see Forwarding class and enqueuing priority classification hierarchy based on rule type in the Service ingress QoS policies section.
| Access type (SAP) | TDM PW | ATM PW | FR PW | HDLC PW | Ethernet PW | IP PW | VPLS | VPRN |
|---|---|---|---|---|---|---|---|---|
| TDM port | 1 FC | | | | | | | |
| Channel group | 1 FC | | | | | | | |
| ATM virtual connection identifier | | 1 FC | | | | | 1 FC | |
| FR | | | 1 FC | | | DSCP, up to 8 FCs | | |
| HDLC | | | | 1 FC | | | | |
| PPP / MLPPP | | | | | | DSCP, up to 8 FCs | | DSCP, up to 8 FCs |
| cHDLC | | | | | | DSCP, up to 8 FCs | | |
| Ethernet port | | | | | DSCP, up to 8 FCs | DSCP, up to 8 FCs | DSCP, up to 8 FCs | DSCP, up to 8 FCs |
| Dot1q encapsulation | | | | | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs |
| QinQ encapsulation | | | | | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs |
Discard probability of classified traffic
When traffic is mapped to a forwarding class, its discard probability can be configured as high or low priority at ingress. When traffic is further classified as high or low priority, different congestion management schemes can be applied based on this priority. For example, WRED curves can then be run against the high-priority and low-priority traffic separately, as described in Slope policies (WRED and RED).
The ability to specify the discard probability is significant because it controls the amount of traffic that is discarded under congestion or high usage. If you know the characteristics of your traffic, particularly its burst characteristics, the ability to change the discard probability can be used to great advantage. The objective is to customize the properties of the random discard functionality so that a minimal amount of data is discarded.
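The separate WRED curves for high- and low-priority traffic can be pictured as piecewise-linear drop-probability functions. The following Python sketch is illustrative only; the thresholds and maximum drop probabilities are hypothetical values, not 7705 SAR defaults.

```python
import random

def wred_drop_probability(avg_queue_depth, min_thresh, max_thresh, max_p):
    """Piecewise-linear drop probability for a given average queue depth."""
    if avg_queue_depth < min_thresh:
        return 0.0                       # below min threshold: never drop
    if avg_queue_depth >= max_thresh:
        return 1.0                       # above max threshold: always drop
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue_depth - min_thresh) / (max_thresh - min_thresh)

# Hypothetical per-priority curves: low-priority traffic starts dropping
# earlier and more aggressively than high-priority traffic.
HIGH_CURVE = dict(min_thresh=60, max_thresh=90, max_p=0.1)
LOW_CURVE = dict(min_thresh=30, max_thresh=70, max_p=0.5)

def should_drop(priority, avg_depth):
    """Randomly discard a packet according to its priority's WRED curve."""
    curve = HIGH_CURVE if priority == "high" else LOW_CURVE
    return random.random() < wred_drop_probability(avg_depth, **curve)
```

Running the two curves against the same average queue depth shows how low-priority traffic is discarded first as congestion builds.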
Access ingress queues
After the traffic is classified to different forwarding classes, the next step is to create the ingress queues and bind forwarding classes to these queues.
There is no restriction requiring a one-to-one association between forwarding classes and queues; more than one forwarding class can be mapped to the same queue. This capability is beneficial because it allows a bulk-sum amount of resources to be allocated to traffic flows of a similar nature. For example, in the case of 3G UMTS services, HSDPA and OAM traffic are both considered BE in nature. However, HSDPA traffic can be mapped to a better forwarding class (such as L2) while OAM traffic remains mapped to the BE forwarding class. Both can then be mapped to a single queue to control the total amount of resources for the aggregate of the two flows.
A large but finite amount of memory is available for the queues. Within this memory space, many queues can be created. The queues are defined by user-configurable parameters. This flexibility and complexity is necessary in order to create services that offer optimal quality of service, and it offers far more control than a restrictive, fixed-buffer implementation.
Memory allocation is optimized to guarantee the CBS for each queue. The allocated queue space beyond the CBS that is bounded by the MBS depends on the usage of buffer space and existing guarantees to queues (that is, the CBS). The CBS defaults to 8 kB (for a 512-byte buffer size) or 18 kB (for a 2304-byte buffer size) for all access ingress queues on the 7705 SAR. With a small default queue depth (CBS) allocated for each queue, all services at full scale are guaranteed to have buffers for queuing. The default value may need to be altered to meet the requirements of a specific traffic flow or flows.
Access ingress queuing and scheduling
Traffic management on the 7705 SAR uses a packet-based implementation of the dual leaky bucket model. Each queue has a guaranteed space limited with CBS and a maximum depth limited with MBS. New packets are queued as they arrive. Any packet that causes the MBS to be exceeded is discarded.
The packets in the queue are serviced by two different profiled (rate-based) schedulers, the In-Profile and Out-of-Profile schedulers, where CIR traffic is scheduled before PIR traffic. These two schedulers empty the queue continuously.
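The MBS tail-drop check and the CIR/PIR profiling described above can be approximated in a short sketch. This is a simplified illustration, not the 7705 SAR implementation; the queue sizes and rates are hypothetical.

```python
from collections import deque

class IngressQueue:
    """Sketch of an access ingress queue bounded by MBS, with CIR/PIR
    profiling (an approximation of the dual leaky bucket model;
    all parameters are illustrative)."""

    def __init__(self, mbs_bytes, cir_bps, pir_bps):
        self.mbs = mbs_bytes
        self.cir = cir_bps
        self.pir = pir_bps
        self.depth = 0
        self.packets = deque()

    def enqueue(self, size):
        # Any packet that would push the queue depth past the MBS is discarded.
        if self.depth + size > self.mbs:
            return False                 # tail drop
        self.packets.append(size)
        self.depth += size
        return True

    def profile(self, arrival_rate_bps):
        # Traffic at or below CIR is in-profile; between CIR and PIR it is
        # out-of-profile; traffic above PIR is not scheduled.
        if arrival_rate_bps <= self.cir:
            return "in-profile"
        if arrival_rate_bps <= self.pir:
            return "out-of-profile"
        return "exceeds-pir"
```

The In-Profile scheduler then services "in-profile" traffic from such queues before the Out-of-Profile scheduler services the remainder.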
For 4-priority scheduling, rate-based schedulers (CIR and PIR) are combined with queue-type schedulers (EXP or BE). For 16-priority scheduling, the rate-based schedulers are combined with the strict priority schedulers (CoS-8 queue first to CoS-1 queue last).
Access ingress scheduling is supported on the adapter cards and ports listed in the following table. The supported scheduling modes are 4-priority scheduling and 16-priority scheduling. The table shows which scheduling mode each card and port supports at access ingress.
This section also contains information about the following topics:
| Adapter card or port | 4-priority | 16-priority |
|---|---|---|
| 8-port Gigabit Ethernet Adapter card | ✓ | ✓ |
| Packet Microwave Adapter card | ✓ | ✓ |
| 6-port Ethernet 10Gbps Adapter card (1) | ✓ | |
| 10-port 1GigE/1-port 10GigE X-Adapter card (in 10-port 1GigE mode) | ✓ | ✓ |
| 4-port SAR-H Fast Ethernet module | ✓ | |
| 6-port SAR-M Ethernet module | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-A | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Ax | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-H | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Hc | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-M | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Wx | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-X (1) | ✓ | |
| 16-port T1/E1 ASAP Adapter card | ✓ | |
| 32-port T1/E1 ASAP Adapter card | ✓ | |
| 2-port OC3/STM1 Channelized Adapter card | ✓ | |
| 4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
| 4-port DS3/E3 Adapter card | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-A | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-M | ✓ | |
| TDM ports on the 7705 SAR-X | ✓ | |
| 12-port Serial Data Interface card | ✓ | |
| 6-port E&M Adapter card | ✓ | |
| 6-port FXS Adapter card | ✓ | |
| 8-port FXO Adapter card | ✓ | |
| 8-port Voice & Teleprotection card | ✓ | |
| 8-port C37.94 Teleprotection card | ✓ | |
| Integrated Services card | ✓ | |

(1) 4-priority scheduler for Gen-3 adapter card or platform.
Profiled (rate-based) scheduling
Each queue is serviced based on the user-configured CIR and PIR values. If the packets that are collected by a scheduler from a queue are flowing at a rate that is less than or equal to the CIR value, the packets are scheduled as in-profile. Packets with a flow rate that exceeds the CIR value but is less than the PIR value are scheduled as out-of-profile. 4-priority scheduling depicts this behavior by the ‟In-Prof” and ‟Out-Prof” labels. This behavior is comparable to the dual leaky bucket implementation in ATM networks. With in-profile and out-of-profile scheduling, traffic that flows at rates up to the traffic contract (that is, CIR) from all the queues is serviced prior to traffic that flows at rates exceeding the traffic contract. This mode of operation ensures that service-level agreements (SLAs) are honored and traffic that is committed to be transported is switched prior to traffic that exceeds the contract agreement.
Queue-type scheduling
In addition to the profiled scheduling described above, queue-type scheduling is supported at access ingress. Queues are divided into two categories: those serviced by the Expedited scheduler and those serviced by the Best Effort scheduler.
The Expedited scheduler has precedence over the Best Effort scheduler. Therefore, at access ingress, CoS queues that are marked with an Expedited priority are serviced first. Then the Best Effort marked queues are serviced. In a default configuration, the Expedited scheduler services the following CoS queues before the Best Effort scheduler services the rest:
Expedited scheduler: NC, H1, EF, H2
Best Effort scheduler: L1, AF, L2, BE
If a packet with an Expedited forwarding class arrives while a Best Effort queue is being serviced, the Expedited scheduler takes over and services the Expedited CoS queue as soon as the current Best Effort packet has been serviced.
The schedulers at access ingress in the 7705 SAR service the group of all Expedited queues exhaustively ahead of the group of all Best Effort queues. This means that all expedited queues have to be empty before any packet from a Best Effort queue is serviced.
The following basic rules apply to the queue-type scheduling of CoS queues:
1. Queues marked for Expedited scheduling are serviced in a round-robin fashion before any queues marked as Best Effort (in a default configuration, these would be queues CoS-8 through CoS-5).
2. These Expedited queues are serviced exhaustively within the round robin. For example, if in a default configuration there are two packets scheduled for service in both CoS-8 and CoS-5, one packet from CoS-8 is serviced, then one packet from CoS-5 is serviced, and then the scheduler returns to CoS-8, until all the packets are serviced.
3. After the Expedited scheduler has serviced all the packets in the queues marked for Expedited scheduling, the Best Effort scheduler starts servicing the queues marked as Best Effort, following the same principle described in step 2, until all the packets in the Best Effort queues are serviced.
4. If a packet arrives at any of the queues marked for Expedited scheduling while the scheduler is servicing a packet from a Best Effort queue, the Best Effort scheduler finishes servicing the current packet and then the Expedited scheduler immediately activates to service the packet in the Expedited queue. If there are no other packets to be serviced in any of the Expedited queues, the Best Effort scheduler resumes servicing the packets in the Best Effort queues. If the queues are configured according to the tables and defaults described in this guide, CoS-4 is scheduled before CoS-1 among the queues marked as Best Effort.
5. After one cycle is completed across all the queues marked as Best Effort, the next pass is started until all the packets in all the queues marked as Best Effort are serviced, or a packet arrives at a queue marked as Expedited and is serviced as described in step 2.
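The exhaustive Expedited-before-Best-Effort round robin described by these rules can be sketched as follows. This is a static snapshot (no arrivals during the run) with illustrative queue contents, not the actual scheduler implementation.

```python
from collections import deque

def queue_type_schedule(expedited, best_effort):
    """Service all Expedited queues exhaustively in round-robin order
    before any Best Effort queue is served; return packets in the order
    they would be serviced."""
    order = []
    for group in (expedited, best_effort):
        queues = [deque(q) for q in group]
        while any(queues):
            # one packet per non-empty queue per round-robin pass
            for q in queues:
                if q:
                    order.append(q.popleft())
    return order

# Default mapping: NC, H1, EF, H2 -> Expedited; L1, AF, L2, BE -> Best Effort
exp = [["nc1"], ["ef1", "ef2"]]
be = [["af1"], ["be1"]]
```

With these queue contents, every Expedited packet is serviced before the first Best Effort packet, matching rules 1 through 3 above.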
4-priority scheduling
With 4-priority scheduling, profiled scheduling and queue-type scheduling are combined and applied to all of the access ingress queues, providing the flexibility and scalability needed to meet the stringent QoS requirements of modern network applications. See Profiled (rate-based) scheduling and Queue-type scheduling for information about these types of scheduling.
Packets with a flow rate that is less than or equal to the CIR value of a queue are scheduled as in-profile. Packets with a flow rate that exceeds the CIR value but is less than the PIR value of a queue are scheduled as out-of-profile.
The scheduling cycle for 4-priority scheduling of CoS queues is shown in 4-priority scheduling. The following basic steps apply:
1. In-profile traffic from Expedited queues is serviced in round-robin fashion up to the CIR value. When a queue exceeds its configured CIR value, its state is changed to out-of-profile.
2. When all of the in-profile packets from the Expedited queues are serviced, in-profile packets from Best Effort queues are serviced in round-robin fashion until the configured CIR value is exceeded. When a queue exceeds its configured CIR value, its state is changed to out-of-profile.
3. When all of the in-profile packets from the Best Effort queues are serviced, out-of-profile packets from Expedited queues are serviced in round-robin fashion.
4. When all of the out-of-profile packets from the Expedited queues are serviced, out-of-profile packets from the Best Effort queues are serviced in round-robin fashion.
Note: If a packet arrives at any of the queues marked for Expedited scheduling while the scheduler is servicing a packet from a Best Effort queue or is servicing an out-of-profile packet, the scheduler finishes servicing the current packet and then returns to the Expedited queues immediately.
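The four scheduling passes above can be summarized in a simplified, static sketch. Real scheduling is work-conserving and re-evaluates as packets arrive; the queue layout here is illustrative.

```python
def four_priority_order(queues):
    """Order of service under 4-priority scheduling: in-profile Expedited,
    in-profile Best Effort, out-of-profile Expedited, out-of-profile
    Best Effort. `queues` maps a queue name to a dict with 'type'
    ('exp' or 'be') and 'in_profile'/'out_profile' packet lists."""
    order = []
    for profile in ("in_profile", "out_profile"):
        for qtype in ("exp", "be"):
            # round-robin one packet at a time across queues of this type
            group = [list(q[profile]) for q in queues.values() if q["type"] == qtype]
            while any(group):
                for pkts in group:
                    if pkts:
                        order.append(pkts.pop(0))
    return order

queues = {
    "cos8": {"type": "exp", "in_profile": ["e-in"], "out_profile": ["e-out"]},
    "cos1": {"type": "be", "in_profile": ["b-in"], "out_profile": ["b-out"]},
}
print(four_priority_order(queues))  # → ['e-in', 'b-in', 'e-out', 'b-out']
```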
4-priority (Gen-3) scheduling
At access ingress, 4-priority scheduling for Gen-3 hardware is the same as 4-priority scheduling for Gen-2 hardware, except that scheduling is done on a per-SAP basis. For more information, see QoS for Gen-3 adapter cards and platforms.
16-priority scheduling
For 16-priority scheduling, the rate-based schedulers (CIR and PIR) are combined with the strict priority schedulers (CoS-8 queue first to CoS-1 queue last).
For general information about 16-priority scheduling, see Network egress 16-priority scheduling. Access ingress 16-priority scheduling functions in a similar fashion to network egress 16-priority scheduling.
Ingress queuing and scheduling for BMU traffic
The 7705 SAR treats broadcast, multicast, and unknown traffic in the same way as unicast traffic. After being classified, the BMU traffic can be mapped to individual queues in order to be forwarded to the fabric. Classification of unicast and BMU traffic does not differ, which means that BMU traffic that has been classified to a BMU-designated queue can be shaped at its own rate, offering better control and fairer usage of fabric resources. For more information, see BMU support.
Access ingress per-SAP aggregate shapers (access ingress H-QoS)
On the 7705 SAR, H-QoS adds second-tier (or second-level), per-SAP aggregate shapers. As shown in Access ingress scheduling for 4-priority and 16-priority SAPs (with per-SAP aggregate shapers), traffic ingresses at an Ethernet SAP and is classified and mapped to up to eight different CoS queues on a per-ingress SAP basis. The aggregate rate CIR and PIR values are then used to shape the traffic. The conforming loop (aggregate CIR loop) schedules the packets out of the eight CoS queues in strict priority manner (queue priority CIRs followed by queue priority PIRs). If the aggregate CIR is crossed at any time during the scheduling operation, regardless of the per-queue CIR/PIR configuration, then the aggregate conforming loop for the SAP ends and the aggregate non-conforming loop begins.
The aggregate non-conforming loop schedules the packets out of the eight CoS queues in strict priority manner. SAPs sending traffic to the 4-priority scheduler do not have a second-tier per-SAP aggregate shaper unless traffic arbitration is needed, in which case an aggregate CIR for all the 4-priority SAPs can be configured (see Access ingress per-SAP shapers arbitration). See Per-SAP aggregate shapers (H-QoS) on Gen-2 hardware for general information.
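The aggregate conforming (CIR) and non-conforming loops described above can be sketched as follows. The strict-priority ordering, packet sizes, and byte-granular accounting are illustrative simplifications, not the actual shaper implementation.

```python
def sap_aggregate_shape(queues, agg_cir, agg_pir):
    """Per-SAP aggregate shaping sketch: a conforming loop serves the CoS
    queues in strict priority order until the aggregate CIR is consumed,
    then a non-conforming loop serves the remainder up to the aggregate
    PIR. `queues` is a list of (priority, [packet_sizes]); limits are in
    bytes per scheduling interval (a simplification)."""
    ordered = sorted(queues, key=lambda q: -q[0])   # strict priority

    def loop(limit, total, sent):
        for _, pkts in ordered:
            while pkts:
                if total + pkts[0] > limit:
                    return total                     # aggregate limit crossed: loop ends
                total += pkts[0]
                sent.append(pkts.pop(0))
        return total

    conforming, nonconforming = [], []
    used = loop(agg_cir, 0, conforming)              # aggregate CIR loop
    loop(agg_pir, used, nonconforming)               # aggregate PIR loop
    return conforming, nonconforming
```

For example, with two queues holding 600-byte packets, an aggregate CIR of 1000 bytes admits one packet as conforming, and the rest are served by the non-conforming loop up to the aggregate PIR.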
The aggregate rate limit for the per-SAP aggregate shaper is configured in the service context, using the sap>ingress>agg-rate-limit or sap>egress>agg-rate-limit command.
For per-SAP aggregate shaping on Gen-2 adapter cards, the SAP must be scheduled using a 16-priority scheduler.
The 16-priority scheduler can be used without setting an aggregate rate limit for the SAP, in which case traffic out of the SAP queues is serviced in strict priority order, the conforming traffic before the non-conforming traffic. Using 16-priority schedulers without a configured per-SAP aggregate shaper (PIR = maximum and CIR = 0 kb/s) may be preferred over 4-priority mode for the following reasons:
coherent scheduler behavior across SAPs (one scheduler model)
ease of configuration
As shown in the figure, all the traffic leaving from the shaped SAPs must be serviced using 16-priority scheduling mode.
The SAPs without an aggregate rate limit, which are called unshaped SAPs, can be scheduled using either 4-priority or 16-priority mode as one of the following:
unshaped SAPs bound to a 4-priority scheduler
unshaped SAPs bound to a 16-priority scheduler
The arbitration of access ingress traffic leaving the 4-priority and 16-priority schedulers and continuing toward the fabric is described in the following section.
Access ingress per-SAP shapers arbitration
The 7705 SAR provides per-SAP aggregate shapers for access ingress SAPs. With this feature, both shaped and unshaped SAPs can coexist on the same adapter card. When switching traffic from shaped and unshaped SAPs to the fabric, arbitration is required.
Access ingress per-SAP arbitration to fabric shows how the 7705 SAR arbitrates traffic to the fabric between 4-priority unshaped SAPs, and 16-priority shaped and unshaped SAPs.
All SAPs support configurable CIR and PIR rates on a per-CoS queue basis (per-queue level). In addition, each 16-priority SAP has its own configurable per-SAP aggregate CIR and PIR rates that operate one level above the per-queue rates.
To allow the 4-priority unshaped SAPs to compete for fabric bandwidth with the aggregate CIR rates of the shaped SAPs, the 4-priority unshaped SAPs (as a group) have their own configurable unshaped SAP aggregate CIR rate, which is configured on the 7705 SAR-8 Shelf V2 and 7705 SAR-18 under the config>qos>fabric-profile aggregate-mode context using the unshaped-sap-cir parameter. On the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, the CIR rate is configured in the config>system>qos>access-ingress-aggregate-rate context.
The configured CIR and PIR for the 16-priority shaped SAPs dictate committed and uncommitted fabric bandwidth for each of these SAPs. Configuring the unshaped-sap-cir parameter for the group (aggregate) of 4-priority unshaped SAPs ensures that the unshaped SAPs can compete for fabric bandwidth with the aggregate CIR rate of the shaped SAPs. Otherwise, the unshaped SAPs would only be able to send traffic into the fabric after the aggregate CIR rates of all the shaped SAPs were serviced. The 16-priority unshaped SAPs are serviced as if they were non-conforming traffic for 16-priority shaped SAPs.
The aggregate fabric shaper shown in the figure performs round-robin selection between the 16-priority SAPs (shaped and unshaped) and the 4-priority unshaped SAP aggregate until:
the aggregate fabric shaper rate is exceeded
the conforming (CIR) traffic for every 16-priority SAP and the 4-priority unshaped SAP aggregate is exceeded
the non-conforming traffic for every 16-priority SAP and the 4-priority unshaped SAP aggregate is completed, provided that the aggregate PIR rate is not exceeded
Ingress shaping to fabric (access and network)
After the traffic is scheduled, it must be sent to the fabric interface. In order to avoid congestion in the fabric and ease the effects of possible bursts, a shaper is implemented on each adapter card.
The shapers smooth out any packet bursts and ease the flow of traffic onto the fabric. The shapers use buffer space on the adapter cards and eliminate the need for large ingress buffers in the fabric.
The ingress to-fabric shapers are user-configurable. For the 7705 SAR-8 Shelf V2 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable ingress shaping to fabric (access and network) for details. For the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, the shapers can operate at a maximum rate of 5 Gb/s. For the 7705 SAR-X, the shapers are not user-configurable. See Fabric shaping on the fixed platforms (access and network) for details.
After the shaping function, all of the traffic is forwarded to the fabric interface in round-robin fashion, one packet at a time, from every access ingress adapter card.
BMU support
Fabric shapers support both unicast and multipoint traffic. Multipoint traffic can be any combination of broadcast, multicast, and unknown (BMU) frames. From access ingress to the fabric, BMU traffic is treated as unicast traffic. A single copy of BMU traffic is handed off to the fabric, where it is replicated and sent to all potential destination adapter cards.
Aggregate mode BMU support
An aggregate mode shaper provides a single aggregate shaping rate. The rate defines the maximum bandwidth that an adapter card can switch through its fabric interface at any given time. The rate is a bulk value and is independent of the destination or the type of traffic. For example, in aggregate mode, an ingress adapter card may use the full rate to communicate with a single destination adapter card, or it may use the same rate to communicate with multiple egress adapter cards.
Aggregate mode and the aggregate rate apply to fabric shapers that handle combined unicast/BMU traffic, unicast-only traffic, or BMU-only traffic. One aggregate rate sets the rate on all adapter cards. The proportional distribution between unicast and BMU traffic can be fine-tuned using queue-level schedulers, while the to-fabric shaper imposes a maximum rate that ensures fairness on the fabric for traffic from all adapter cards.
When services (IES, VPRN, and VPLS) are enabled, the fabric profile mode for access ingress should be set to aggregate mode.
Destination mode BMU support
Destination mode offers granular to-fabric shaping rates on a per-destination adapter card basis. While destination mode offers more flexibility and gives more control than aggregate mode, it also requires a greater understanding of network topology and flow characteristics under conditions such as node failures and link, adapter card, or port failures.
In a destination mode fabric profile, the unicast traffic and BMU traffic are always shaped separately.
For unicast traffic, individual destination rates can be configured on each adapter card. For BMU traffic, one multipoint rate sets the rate on all adapter cards. Fairness among different BMU flows is ensured by tuning the QoS queues associated with the port.
LAG SAP support (access only)
Fabric shapers support access ingress traffic being switched from a SAP to another SAP residing on a port that is part of a link aggregation group (LAG). Either the aggregate mode or destination mode can be used for fabric shaping.
When the aggregate mode is used, one aggregate rate sets the rate on all adapter cards. When the destination mode is used, the multipoint shaper is used to set the fabric shaping rate for traffic switched to a LAG SAP.
Configurable ingress shaping to fabric (access and network)
The use of fabric profiles allows the ingress (to the fabric) shapers to be user-configurable for access ingress and network ingress traffic.
For the 7705 SAR-8 Shelf V2 and 7705 SAR-18, the maximum rates are:
- 2.5 Gb/s for the 7705 SAR-8 Shelf V2 (all 6 MDA slots)
- 10 Gb/s for the 7705 SAR-8 Shelf V2 (MDA slots 1 and 2)
- 1 Gb/s or 2.5 Gb/s for the 7705 SAR-18 (12 MDA slots)
- 10 Gb/s for the 7705 SAR-18 (4 XMDA slots)
For information about fabric shapers on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-X, see Fabric shaping on the fixed platforms (access and network).
Because a rate of 1 Gb/s or higher can be configured from any adapter card to the fabric, the fabric may become congested. Therefore, the collection and display of fabric statistics are provided. These statistics report on fabric traffic flow and potential discards. See the 7705 SAR Interface Configuration Guide, ‟Configuring Adapter Card Fabric Statistics”, ‟Configuration Command Reference”, and ‟Show, Monitor, Clear, and Debug Command Reference” for information about how to configure, show, and monitor fabric statistics on an adapter card.
The ingress buffers for a card are much larger than the ingress buffers for the fabric; therefore, it is advantageous to use the larger card buffers for ingress shaping. In order to use the ingress card buffers and have much more granular control over traffic, two fabric profile modes are supported, per-destination mode and aggregate mode. Both modes offer shaping toward the fabric from an adapter card, but per-destination shapers offer the maximum flexibility by precisely controlling the amount of traffic to each destination card at a user-defined rate. Aggregate mode is used for simpler deployments, where the amount of traffic flowing to a destination adapter card is not controlled.
The default mode of operation for the 7705 SAR is set to aggregate, and the fixed aggregate rate of 200 Mb/s is set for both access ingress and network ingress traffic. Therefore, in a default configuration, each adapter card can switch up to 200 Mb/s of access ingress and network ingress traffic toward the fabric.
All the switched traffic can be destined for a single adapter card or it can be spread among multiple adapter cards. For higher-bandwidth applications, a network traffic analysis is recommended to determine which shaper rates would best suit the application and traffic patterns of a particular environment.
The to-fabric shapers are provided on the 7705 SAR to ensure adequate use of ingress buffers in case of congestion. With the ingress shapers, the large ingress card buffers can be configured to absorb bursty traffic and pace the traffic for better use of resources.
For example, if the average access ingress traffic bandwidth for an adapter card is 400 Mb/s and the peak bandwidth is 800 Mb/s, the rate of the to-fabric shapers can be configured as 400 Mb/s. The initial burst is then absorbed at the adapter card where the bursty traffic ingresses the 7705 SAR: the ingress buffers absorb the burst while the shaper paces traffic onto the fabric at 400 Mb/s, so the fabric buffers are not exhausted by any single adapter card. The same example applies to network ingress traffic.
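The burst-absorption arithmetic in this example can be checked with a small helper: the ingress buffer fills at the difference between the peak arrival rate and the shaper rate. The buffer size used below is hypothetical, not an actual 7705 SAR buffer allocation.

```python
def max_burst_duration(buffer_bytes, peak_bps, shaper_bps):
    """How long an ingress buffer can absorb a full-rate burst before
    discards begin: the buffer fills at (peak rate - shaper rate)."""
    fill_rate_bps = peak_bps - shaper_bps
    return (buffer_bytes * 8) / fill_rate_bps   # seconds

# A hypothetical 50 MB ingress buffer, 800 Mb/s peak, shaper at 400 Mb/s:
print(max_burst_duration(50_000_000, 800e6, 400e6))  # → 1.0 (one second)
```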
The following table summarizes the different capabilities offered by the two modes.
| Capability | Per-destination mode | Aggregate mode |
|---|---|---|
| Access ingress to-fabric shapers | ✓ | ✓ |
| Network ingress to-fabric shapers | ✓ | ✓ |
| Individual shaping from an ingress card toward each destination card based on a user-defined rate | ✓ | |
| Aggregate/bulk sum shaping regardless of destination from an ingress card | | ✓ |
Fabric shapers in per-destination mode and Fabric shapers in aggregate mode illustrate the functionality of fabric shapers in per-destination mode and aggregate mode, respectively.
In the following figure, after the per-destination prioritization and scheduling takes place as described in previous sections in this chapter, the per-destination adapter card shapers take effect. With per-destination shapers, the maximum amount of bandwidth that each destination adapter card can receive from the fabric can be controlled. For example, the maximum amount of bandwidth that adapter card 1 can switch to the remaining adapter cards, as well as the amount of bandwidth switched back to adapter card 1, can be configured at a set rate.
The following figure illustrates the functionality of fabric shapers in aggregate mode. After the policing, classification, queuing and per-destination based priority queuing takes place as described in previous sections in this chapter, the aggregate mode adapter card shapers take effect. In aggregate mode, the aggregate of all the access ingress and network ingress traffic is shaped at a user-configured rate regardless of the destination adapter card.
Mixing different fabric shaper modes within the same chassis and on the same adapter card is not recommended; however, it is supported. As an example, an 8-port Gigabit Ethernet Adapter card in a 7705 SAR-18 can be configured for aggregate mode for access ingress and for per-destination mode for network ingress. The same chassis can also contain an adapter card (for example, the 32-port T1/E1 ASAP Adapter card) that is configured for per-destination mode for all traffic. This setup is shown in the following example.
| MDA | Card type | Access fabric mode | Network fabric mode |
|---|---|---|---|
| 1/1 | a8-1gb-v3-sfp | Destination | Destination |
| 1/2 | a8-1gb-sfp | Aggregate | Destination |
| 1/3 | a4-oc3 | Destination | Destination |
| 1/4 | a32-chds1v2 | Destination | Destination |
| 1/X1 | x-10GigE-v2 | Aggregate | Destination |
Gen-2 and Gen-3 adapter cards only support aggregate mode fabric shapers for access ingress traffic, regardless of the service types configured.
If multipoint services such as IES, VPRN, and VPLS are running on an adapter card, only aggregate mode fabric profile can be configured for the card at access ingress.
Fabric shaping on the fixed platforms (access and network)
The 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, and 7705 SAR-Wx support user-configurable fabric shapers at rates of up to 5 Gb/s for access ingress and network ingress traffic. The fabric interface on these nodes is a shared resource between both traffic types, and one buffer pool serves all MDAs (ports).
These nodes do not support fabric profiles; instead, they have a single aggregate rate limit for restricting access traffic into the fabric and a single aggregate rate limit for restricting network traffic into the fabric. These limits apply to all MDAs. Both access ingress and network ingress traffic can be configured to shaping rates of between 1 kb/s and 5 Gb/s. The default rate for access ingress traffic is 500 Mb/s. The default rate for network ingress traffic is 2 Gb/s. Statistics can be viewed for aggregate access and network traffic flow through the fabric and possible discards.
The 7705 SAR-X fabric shaper rate is not configurable for access ingress or network ingress traffic, and is set to the maximum rate for the platform. There are three buffer pools on the 7705 SAR-X, one for each MDA (block of ports).
Traffic flow across the fabric
The 7705 SAR uses an Ethernet-based fabric. Each packet that is sent to the fabric is equipped with a fabric header that contains its specific CoS requirement. Because all of the packets switched across the fabric are already classified, queued, scheduled and marked according to the required QoS parameters, each of these packets has been passed through the Traffic Management (TM) block on an adapter card, or the Control and Switching Module (CSM) in the case of control packets. Therefore, each packet arrives at the fabric having been already scheduled for optimal flow. The function of the fabric is to switch each packet through to the appropriate destination adapter card, or CSM in the case of control packets, in an efficient manner.
Because the traffic is shaped at a certain rate by the ingress adapter card (that is, bursts are smoothed by the traffic management function), minimal buffering should be needed on the switch fabric. However, the buffer space allocation and usage is in accordance with the priorities at the ingress adapter card. As is the case with schedulers at the adapter cards, there are two priorities supported on the switch fabric. The switch fabric serves the traffic in the following priority order:

1. Expedited
2. Best Effort

The switch fabric does not support profile scheduling.

Because the fabric has a limited buffer space, it is possible for tail drop to occur. Tail drop discards any packet that exceeds the maximum buffer space allocation. The shaping that is performed on the adapter cards helps to prevent or minimize congestion.
Network egress
This section contains the following topics for traffic flow in the network egress direction:
BMU traffic at network egress
BMU traffic at network egress is handled in the same way as unicast traffic in terms of scheduling, queuing, and port-level shaping. Both unicast and BMU traffic are mapped to queues according to their FC markings. Traffic from these queues, whether unicast or BMU, is scheduled according to user-configured rates. Port-level shapers treat all the queues identically, regardless of traffic type.
Network egress queuing aggregation
After traffic is switched through the fabric from one or several access ingress adapter cards to a network egress adapter card, queuing-level aggregation on a per-forwarding-class basis is performed on all of the received packets.
An adapter card that is used for network egress can, and typically does, receive packets from multiple adapter cards that are configured for access ingress operations, and from the CSM. Adapter cards that are configured for network access allow user configuration of queues and the association of forwarding classes to the queues. These are the same configuration principles that are used for adapter cards that are configured for access ingress connectivity. As at access ingress, more than one forwarding class can share the same queue.
Aggregation of different forwarding classes under queues takes place for each bundle or port. If a port is a member of a bundle, such as a Multilink Point-to-Point Protocol (MLPPP) bundle, the aggregation and queuing are implemented for the entire bundle. If a port is a standalone port, that is, not a member of a bundle, the queuing takes place for the port.
Network egress per-VLAN queuing
Network Ethernet ports support network egress per-VLAN (per-interface) shapers with eight CoS queues per VLAN, which is an extension to the eight CoS queues per port shared by all unshaped VLANs. Eight unique per-VLAN CoS queues are created for each VLAN when the VLAN shaper is enabled. These per-VLAN CoS queues are separate from the eight unshaped VLAN queues. The eight CoS queues that are shared by all the remaining unshaped VLANs are referred to as unshaped VLAN CoS queues. VLAN shapers are enabled when the queue-policy command is used to assign a network queue policy to the interface.
For details on per-VLAN network egress queuing and scheduling, see Per-VLAN network egress shapers.
Network egress scheduling
Network egress scheduling is supported on the adapter cards and ports listed in the following table. The supported scheduling modes are 4-priority and 16-priority. The table shows which scheduling mode each card and port supports at network egress.
This section also contains information about the following topics:
| Adapter card or port | 4-priority | 16-priority |
|---|---|---|
| 8-port Gigabit Ethernet Adapter card | ✓ | |
| Packet Microwave Adapter card | ✓ | |
| 2-port 10GigE (Ethernet) Adapter card/module | ✓ | |
| 6-port Ethernet 10Gbps Adapter card¹ | ✓ | |
| 10-port 1GigE/1-port 10GigE X-Adapter card | ✓ | |
| 4-port SAR-H Fast Ethernet module | ✓ | |
| 6-port SAR-M Ethernet module | ✓ | |
| Ethernet ports on the 7705 SAR-A | ✓ | |
| Ethernet ports on the 7705 SAR-Ax | ✓ | |
| Ethernet ports on the 7705 SAR-H | ✓ | |
| Ethernet ports on the 7705 SAR-Hc | ✓ | |
| Ethernet ports on the 7705 SAR-M | ✓ | |
| Ethernet ports on the 7705 SAR-Wx | ✓ | |
| Ethernet ports on the 7705 SAR-X¹ | ✓ | |
| 16-port T1/E1 ASAP Adapter card | ✓ | |
| 32-port T1/E1 ASAP Adapter card | ✓ | |
| 2-port OC3/STM1 Channelized Adapter card | ✓ | |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
| 4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
| 4-port DS3/E3 Adapter card | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-A | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-M | ✓ | |
| TDM ports on the 7705 SAR-X | ✓ | |

¹ 4-priority scheduler for Gen-3 adapter card or platform.
Network egress 4-priority scheduling
The implementation of network egress scheduling on the cards and ports listed in Scheduling modes supported by adapter cards and ports at network egress under ‟4-Priority” is very similar to the scheduling mechanisms used for adapter cards that are configured for access ingress traffic. 4-priority scheduling is a combination of queue-type scheduling (Expedited versus Best Effort scheduling) and profile scheduling (rate-based scheduling). It applies to the following cards and ports:

- T1/E1 ports on the 7705 SAR-A
- T1/E1 ports on the 7705 SAR-M
- T1/E1 ports on the 7705 SAR-X
- 16-port T1/E1 ASAP Adapter card
- 32-port T1/E1 ASAP Adapter card
- 2-port OC3/STM1 Channelized Adapter card
- 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card
- T1/E1 ports on the 4-port T1/E1 and RS-232 Combination module (on the 7705 SAR-H)
Packets arriving at rates at or below the CIR are scheduled as in-profile. Packets that arrive at rates greater than the CIR but less than the PIR are scheduled as out-of-profile. In-profile traffic is exhaustively transmitted from the queues before out-of-profile traffic is transmitted; that is, all of the in-profile packets must be transmitted before any out-of-profile packets are transmitted. In addition, Expedited queues are always scheduled before Best Effort queues.
The default configuration of scheduling CoS queues provides a logical and consistent means to manage the traffic priorities. The default configuration is as follows:
- CoS-8 to CoS-5: Expedited in-profile
- CoS-4 to CoS-1: Best Effort in-profile
- CoS-8 to CoS-5: Expedited out-of-profile
- CoS-4 to CoS-1: Best Effort out-of-profile
The order shown below is maintained when scheduling the traffic on the adapter card’s network ports. A strict priority is applied between the four schedulers, and all four schedulers are exhaustive:
1. Expedited in-profile traffic
2. Best Effort in-profile traffic
3. Expedited out-of-profile traffic
4. Best Effort out-of-profile traffic
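The strict 4-priority order above can be sketched as follows. The packet records and field names are hypothetical; the real scheduler operates on hardware queues and rate measurements, not on tagged packets.

```python
# Sketch of 4-priority scheduling: a strict, exhaustive order across
# queue type (Expedited/Best Effort) and profile state (in/out).
# A packet is in-profile when it arrives at or below the CIR, and
# out-of-profile between CIR and PIR (simplified; profiling is
# rate-based in the actual implementation).
PRIORITY_ORDER = [
    ("expedited", "in"),
    ("best-effort", "in"),
    ("expedited", "out"),
    ("best-effort", "out"),
]

def schedule(packets):
    """Return packets in 4-priority service order (stable within a level)."""
    rank = {key: i for i, key in enumerate(PRIORITY_ORDER)}
    return sorted(packets, key=lambda p: rank[(p["qtype"], p["profile"])])

pkts = [
    {"id": 1, "qtype": "best-effort", "profile": "out"},
    {"id": 2, "qtype": "expedited", "profile": "out"},
    {"id": 3, "qtype": "best-effort", "profile": "in"},
    {"id": 4, "qtype": "expedited", "profile": "in"},
]
print([p["id"] for p in schedule(pkts)])  # [4, 3, 2, 1]
```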
Network egress 4-priority (Gen-3) scheduling
The adapter cards and ports that support 4-priority scheduling for network egress traffic on Gen-3 hardware are identified in Scheduling modes supported by adapter cards and ports at network egress. This type of scheduling takes into consideration the traffic’s profile type and the CoS queue priority. It also uses priority information to apply backpressure to lower-level CoS queues. See QoS for Gen-3 adapter cards and platforms for details.
Network egress 16-priority scheduling
The adapter cards and ports that support 16-priority scheduling for network egress traffic are listed in Scheduling modes supported by adapter cards and ports at network egress. This type of scheduling takes into consideration the traffic’s profile type and the priority of the CoS queue that the traffic is coming from.
Packets arriving at rates at or below the CIR are scheduled as in-profile. Packets that arrive at rates greater than the CIR but less than the PIR are scheduled as out-of-profile. In total, eight CoS queues are available for packets to go through.
In-profile traffic is exhaustively transmitted from the queues, starting with the highest-priority CoS queue. A strict priority is applied between the eight CoS queues. If a packet arrives at a queue of higher priority than the one being serviced, the scheduler services the packet at the higher-priority queue as soon as it finishes servicing the current packet.
When all the in-profile traffic is transmitted, the out-of-profile traffic is transmitted, still maintaining priority of the queues. If an in-profile packet arrives and the scheduler is servicing an out-of-profile packet, the scheduler finishes servicing the out-of-profile packet and then immediately services the in-profile packet.
The order of priority in the default configuration is as follows:
1. CoS-8 in-profile traffic
2. CoS-7 in-profile traffic
3. CoS-6 in-profile traffic
4. CoS-5 in-profile traffic
5. CoS-4 in-profile traffic
6. CoS-3 in-profile traffic
7. CoS-2 in-profile traffic
8. CoS-1 in-profile traffic
9. CoS-8 out-of-profile traffic
10. CoS-7 out-of-profile traffic
11. CoS-6 out-of-profile traffic
12. CoS-5 out-of-profile traffic
13. CoS-4 out-of-profile traffic
14. CoS-3 out-of-profile traffic
15. CoS-2 out-of-profile traffic
16. CoS-1 out-of-profile traffic
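The resulting 16-level order (profile state first, then CoS queue priority) reduces to a simple ranking function, sketched below. This is an illustration of the ordering only, not the hardware scheduler.

```python
# 16-priority scheduling sketch: all in-profile traffic is served
# before any out-of-profile traffic, and within each profile state the
# CoS queues are served in strict priority (CoS-8 highest).
def service_rank(cos, profile):
    """Lower rank = served earlier. cos is 1..8; profile is 'in' or 'out'."""
    profile_rank = 0 if profile == "in" else 1
    return profile_rank * 8 + (8 - cos)

order = sorted(
    [(cos, prof) for prof in ("out", "in") for cos in range(1, 9)],
    key=lambda x: service_rank(*x),
)
print(order[0], order[7], order[8], order[15])
# Highest priority is CoS-8 in-profile; the lowest in-profile level
# (CoS-1) still precedes the highest out-of-profile level (CoS-8).
```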
Network egress shaping
All network egress traffic is shaped at the bundle or interface rate. An interface does not necessarily correspond directly to a port; it can be a sub-channel of a port. For example, Fast Ethernet could be the choice of network egress, but the leased bandwidth could still be a fraction of the port speed. In this case, it is possible to shape at the interface rate of, for example, 15 Mb/s.
The same also applies to MLPPP bundles. The shaping takes place per MLPPP bundle, and the traffic is shaped at the aggregate rate of the MLPPP bundle.
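Shaping at an aggregate interface or bundle rate can be sketched with a token bucket, as below. The model and its parameters (rate, burst size) are illustrative assumptions; the guide does not specify the shaper's internal implementation.

```python
# Token-bucket sketch of shaping at an aggregate (interface or MLPPP
# bundle) rate: a packet is released only when enough tokens,
# accumulated at the shaping rate, are available.
class Shaper:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token fill rate in bytes/s
        self.burst = burst_bytes        # bucket depth (illustrative)
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, now, pkt_bytes):
        """Refill tokens for elapsed time; release the packet if it fits."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True   # released toward the line
        return False      # buffered until enough tokens accumulate

shaper = Shaper(rate_bps=15_000_000, burst_bytes=1500)  # 15 Mb/s interface
print(shaper.conforms(0.0, 1500))   # full burst available
print(shaper.conforms(0.0, 1500))   # no tokens left at t=0
print(shaper.conforms(0.001, 1500)) # 1 ms at 15 Mb/s refills 1875 bytes
```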
Network egress shaping for hybrid ports
Hybrid ports use a third-tier, dual-rate aggregate shaper to provide arbitration between the bulk of access and network egress traffic flows. For details, see QoS for hybrid ports on Gen-2 hardware.
Network egress per-VLAN shapers
Network egress VLAN traffic uses second-tier (or second-level), per-VLAN shapers to prepare network egress traffic for arbitration with the aggregate of the unshaped VLAN shaper. All the shaped VLAN shapers are arbitrated with one unshaped VLAN shaper.
As shown in the following figure, traffic from the fabric flows to one or more VLANs where it is classified and mapped to up to eight different CoS queues on a per-VLAN basis. The VLANs can be shaped or unshaped. Each shaped VLAN has its own set of CoS queues. The aggregate of unshaped VLANs uses the same set of CoS queues (that is, one set of queues for all unshaped VLANs).
For more information, see Per-VLAN network egress shapers and Shaped and unshaped VLANs.
Because the per-VLAN shapers are dual-rate shapers, their aggregate rate CIR and PIR values shape the traffic, as follows:
- The conforming, in-profile loop (aggregate CIR loop) schedules the packets out of the eight CoS queues in a strict priority manner (queue priority CIRs followed by queue priority PIRs).
- If the aggregate CIR is crossed at any time during the scheduling operation, regardless of the per-queue CIR/PIR configuration, the aggregate conforming loop for the VLAN ends and the aggregate non-conforming (out-of-profile) loop begins.
- The aggregate non-conforming loop schedules the packets out of the eight CoS queues in a strict priority manner.
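The two aggregate loops can be sketched as follows. This is a simplified byte-count model with hypothetical values; it ignores the per-queue CIR/PIR stages within the conforming loop for brevity.

```python
# Sketch of the per-VLAN dual-rate aggregate shaper: a conforming
# (aggregate CIR) loop drains the eight CoS queues in strict priority
# until the aggregate CIR budget is crossed, then a non-conforming
# loop continues, still in strict priority, up to the aggregate PIR.
def drain(queues, agg_cir_bytes, agg_pir_bytes):
    """queues: {cos: [pkt_len, ...]} with CoS-8 the highest priority."""
    sent, total = [], 0
    for loop in ("conforming", "non-conforming"):
        budget = agg_cir_bytes if loop == "conforming" else agg_pir_bytes
        for cos in sorted(queues, reverse=True):   # strict priority
            while queues[cos] and total < budget:
                total += queues[cos].pop(0)
                sent.append((loop, cos))
    return sent

q = {8: [100], 5: [100, 100], 1: [100]}
print(drain(q, agg_cir_bytes=200, agg_pir_bytes=400))
# The CoS-8 and first CoS-5 packets fit the aggregate CIR; the rest
# are sent by the non-conforming loop within the aggregate PIR.
```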
A shaped VLAN configured with default aggregate rate limits (PIR = maximum and CIR = 0 kb/s) is equivalent to an unshaped VLAN except that its traffic flows through a per-VLAN shaper instead of getting combined with the bulk (aggregate) of the unshaped VLANs. Using a shaped VLAN in this way (default rate limits) may be preferred over using an unshaped VLAN for the following reasons:
- coherent scheduler behavior across VLANs (that is, the use of only one scheduler model)
- ease of configuration
- higher throughput, as each shaped VLAN gets to transmit one packet at each pass of the out-of-profile scheduler, as opposed to one packet from the aggregate of unshaped VLAN queues
The arbitration of shaped and unshaped VLAN traffic at the third-tier shaper is described in the following section.
Network egress per-VLAN shapers arbitration
For shaped VLANs, the configured CIR and PIR limits dictate committed and uncommitted port bandwidth for each of these VLANs. To ensure that the bulk (aggregate) of unshaped VLANs can compete for port bandwidth with the aggregate of CIR rates for the shaped VLANs, the unshaped VLANs (as a group) have their own aggregate CIR rate, which is configured using the unshaped-if-cir command (in the config>port>ethernet>network>egress context). Otherwise, without their own aggregate CIR rate, the unshaped VLANs are only able to send traffic into the port after the aggregate CIR rates of all the shaped VLANs are serviced. Shaped VLANs using default aggregate rate limits (PIR = maximum and CIR = 0 kb/s) are serviced as if they are non-conforming traffic for shaped VLANs.
Referring to Network egress shaped and unshaped VLAN queuing and scheduling, at the port shaper, conforming (CIR) traffic has priority over non-conforming traffic. The arbitration between the shaped VLANs and unshaped VLANs is handled in the following priority order:
- committed traffic: the per-VLAN committed rate (CIR) for shaped VLANs, as set by the agg-rate-limit command, and the aggregate committed rate for all the unshaped VLANs, as set by the unshaped-if-cir command
- uncommitted traffic: the per-VLAN uncommitted rate (PIR) for shaped VLANs, as set by the agg-rate-limit command, and the aggregate uncommitted rate for all the unshaped VLANs, as set by the unshaped-if-cir command
Network egress marking and re-marking
The EXP bit settings can be marked at network egress. The EXP bit markings associated with the forwarding class are used for this purpose; the tunnel and pseudowire EXP bits are marked based on the forwarding class value.
The default network egress QoS marking settings are listed in Default network QoS policy egress marking.
Network egress marking and re-marking on Ethernet ports
For MPLS tunnels, if network egress Ethernet ports are used, dot1p bit marking can be enabled in conjunction with EXP bit marking. In this case, the tunnel and pseudowire EXP bits do not have to be the same as the dot1p bits.
For GRE and IP tunnels, dot1p marking and pseudowire EXP marking can be enabled, and DSCP marking can also be enabled.
Network egress dot1p is supported for Ethernet frames, which can carry IPv4, IPv6, or MPLS packets. EXP re-marking is supported for MPLS packets.
Network ingress
This section contains the following topics for traffic flow in the network ingress direction:
Network ingress classification
Network ingress traffic originates from a network egress port located on another interworking device, such as a 7750 Service Router or another 7705 SAR, and flows from the network toward the fabric in the 7705 SAR.
The ingress MPLS packets can be mapped to forwarding classes based on EXP bits that are part of the headers in the MPLS packets. These EXP bits are used across the network to ensure an end-to-end network-wide QoS offering. With pseudowire services, there are two labels, one for the MPLS tunnel and one for the pseudowire. Mapping is performed using the EXP values from the outer tunnel MPLS label. This ensures that the EXP bit settings, which may have been altered along the path by the tandem label switch routers (LSRs), are used to identify the forwarding class of the encapsulated traffic.
Ingress GRE and IP packets are mapped to forwarding classes based on DSCP bit settings of the IP header. GRE tunnels are not supported for IPv6; therefore, DSCP bit classification of GRE packets is only supported for IPv4. DSCP bit classification of IP packets is supported for both IPv4 and IPv6.
Untrusted traffic uses multi-field classification (MFC), where the traffic is classified based on any IP criteria currently supported by the 7705 SAR filter policies; for example, source and destination IP address, source and destination port, whether the packet is fragmented, ICMP code, and TCP state. For information about MFC, see the 7705 SAR Router Configuration Guide, ‟Multi-field classification” and ‟IP, MAC, and VLAN filter entry commands”.
Network ingress tunnel QoS override
To simplify QoS management through the network core, some operators aggregate multiple forwarding classes of traffic at the ingress LER or PE and use two or three QoS markings instead of the eight different QoS markings that a customer device may be using to dictate QoS treatment. However, to ensure the end-to-end QoS enforcement required by the customer, the aggregated markings must be mapped back to their original forwarding classes at the egress LER (eLER) or PE.
For IP traffic (including IPSec packets) riding over MPLS or GRE tunnels that will be routed to the base router, a VPRN interface, or an IES interface at the tunnel termination point (the eLER), the 7705 SAR can be configured to ignore the EXP/DSCP bits in the tunnel header when the packets arrive at the eLER. Instead, classification is based on the inner IP header, which is essentially the customer IP packet header. This configuration is done using the ler-use-dscp command.
When the command is enabled on an ingress network IP interface, the IP interface will ignore the tunnel’s QoS mapping and will derive the internal forwarding class and associated profile state based on the DSCP values of the IP header ToS field rather than on the network QoS policy defined on the IP interface. This function is useful when the mapping for the tunnel QoS marking does not completely reflect the required QoS handling for the IP packet. The command applies only on the eLER where the tunnel or service is terminated and the next header in the packet is IP.
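The selection logic of the ler-use-dscp behavior can be sketched as a simple decision, shown below. The mapping values and packet fields are hypothetical; the real forwarding-class mappings come from the network QoS policy.

```python
# Sketch of eLER ingress classification with ler-use-dscp enabled:
# for a tunnel that terminates on this node and whose next header is
# IP, classify on the inner IP DSCP instead of the tunnel QoS marking.
# The mapping tables below are illustrative, not the defaults.
TUNNEL_EXP_TO_FC = {0: "be", 5: "h2"}
INNER_DSCP_TO_FC = {0: "be", 46: "ef"}

def classify(pkt, ler_use_dscp):
    terminates_here = pkt["eLER"] and pkt["next_header"] == "ip"
    if ler_use_dscp and terminates_here:
        # Ignore the tunnel marking; use the customer IP header DSCP.
        return INNER_DSCP_TO_FC[pkt["inner_dscp"]]
    # Normal case: classify on the tunnel QoS marking (EXP here).
    return TUNNEL_EXP_TO_FC[pkt["tunnel_exp"]]

pkt = {"eLER": True, "next_header": "ip", "tunnel_exp": 5, "inner_dscp": 46}
print(classify(pkt, ler_use_dscp=False), classify(pkt, ler_use_dscp=True))
# Without the command the aggregated tunnel marking wins; with it,
# the original customer DSCP restores the intended forwarding class.
```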
Network ingress queuing
Network ingress traffic can be classified in up to eight different forwarding classes, which are served by 16 queues (eight queues for unicast traffic and eight queues for multicast (BMU) traffic). Each queue serves at least one of the eight forwarding classes that are identified by the incoming EXP bits. These queues are automatically created by the 7705 SAR. The following table shows the default network QoS policy for the 16 CoS queues.
The values for CBS and MBS are percentages of the size of the buffer pool for the adapter card. MBS can be shared across queues, which allows overbooking to occur.
| Queue/FC | CIR (%) | PIR (%) | CBS (%) | MBS (%) |
|---|---|---|---|---|
| Queue-1/BE | 0 | 100 | 0.1 | 5 |
| Queue-2/L2 | 25 | 100 | 0.25 | 5 |
| Queue-3/AF | 25 | 100 | 0.75 | 5 |
| Queue-4/L1 | 25 | 100 | 0.25 | 2.5 |
| Queue-5/H2 | 100 | 100 | 0.75 | 5 |
| Queue-6/EF | 100 | 100 | 0.75 | 5 |
| Queue-7/H1 | 10 | 100 | 0.25 | 2.5 |
| Queue-8/NC | 10 | 100 | 0.25 | 2.5 |
| Queue-9/BE | 0 | 100 | 0.1 | 5 |
| Queue-10/L2 | 5 | 100 | 0.1 | 5 |
| Queue-11/AF | 5 | 100 | 0.1 | 5 |
| Queue-12/L1 | 5 | 100 | 0.1 | 2.5 |
| Queue-13/H2 | 100 | 100 | 0.1 | 5 |
| Queue-14/EF | 100 | 100 | 0.1 | 5 |
| Queue-15/H1 | 10 | 100 | 0.1 | 2.5 |
| Queue-16/NC | 10 | 100 | 0.1 | 2.5 |
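Because CBS and MBS are expressed as percentages of the adapter card buffer pool, the byte reservations follow directly, as in this sketch. The pool size is an illustrative assumption, not an actual card value.

```python
# Sketch: convert the CBS/MBS percentages of the default network
# ingress policy into byte values for a given buffer pool. The pool
# size is hypothetical; only a subset of the 16 queues is shown.
POOL_BYTES = 64 * 1024 * 1024  # illustrative adapter card pool size

DEFAULTS = {  # queue: (CBS %, MBS %), taken from the table above
    "Queue-1/BE": (0.1, 5),
    "Queue-6/EF": (0.75, 5),
    "Queue-8/NC": (0.25, 2.5),
}

def to_bytes(pct):
    """Percentage of the buffer pool, truncated to whole bytes."""
    return int(POOL_BYTES * pct / 100)

for queue, (cbs, mbs) in DEFAULTS.items():
    # MBS is overbookable across queues; CBS is a committed reservation.
    print(queue, "CBS:", to_bytes(cbs), "MBS:", to_bytes(mbs))
```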
Network ingress queuing for BMU traffic
At network ingress, broadcast, multicast, and unknown (BMU) traffic identified using DSCP and/or EXP (also known as LSP TC) is mapped to a forwarding class (FC). Because BMU traffic is considered to be multipoint traffic, the queue hosting BMU traffic must be configured with the multipoint keyword. Queues 9 through 16 support multipoint traffic (see Default network ingress QoS policy). For any adapter card hosting any number of network ports, up to 16 queues can be configured: 8 unicast and 8 multicast queues.
Similar to unicast queues, BMU queues require configuration of:
- queue depth (committed and maximum)
- scheduled rate (committed and peak)
In addition, as is the case for unicast queues, all other queue-based congestion management techniques apply to multipoint queues.
The benefits of using multipoint queues occur when the to-fabric shapers begin scheduling traffic toward the destination line card. To-fabric shapers can be configured for aggregate or per-destination mode. For more information, see BMU support.
Network ingress scheduling
Network ingress scheduling is supported on the adapter cards and ports listed in the following table. The supported scheduling modes are 4-priority and 16-priority. The table shows which scheduling mode each card and port supports at network ingress.
This section also contains information about the following topics:
| Adapter card or port | 4-priority | 16-priority |
|---|---|---|
| 8-port Gigabit Ethernet Adapter card | ✓ | |
| Packet Microwave Adapter card | ✓ | |
| 2-port 10GigE (Ethernet) Adapter card/module | | ✓ |
| 6-port Ethernet 10Gbps Adapter card¹ | ✓ | |
| 10-port 1GigE/1-port 10GigE X-Adapter card | ✓ | |
| 4-port SAR-H Fast Ethernet module | ✓ | |
| 6-port SAR-M Ethernet module | ✓ | |
| Ethernet ports on the 7705 SAR-A | ✓ | |
| Ethernet ports on the 7705 SAR-Ax | ✓ | |
| Ethernet ports on the 7705 SAR-M | ✓ | |
| Ethernet ports on the 7705 SAR-H | ✓ | |
| Ethernet ports on the 7705 SAR-Hc | ✓ | |
| Ethernet ports on the 7705 SAR-Wx | ✓ | |
| Ethernet ports on the 7705 SAR-X¹ | ✓ | |
| 16-port T1/E1 ASAP Adapter card | ✓ | |
| 32-port T1/E1 ASAP Adapter card | ✓ | |
| 2-port OC3/STM1 Channelized Adapter card | ✓ | |
| 4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
| 4-port DS3/E3 Adapter card | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-A | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-M | ✓ | |
| TDM ports on the 7705 SAR-X | ✓ | |

¹ 4-priority scheduler for Gen-3 adapter card or platform.
Network ingress 4-priority scheduling
The adapter cards listed in Scheduling modes supported by adapter cards and ports at network ingress under ‟4-Priority” can receive network ingress traffic. One or more ports on the card are configured for PPP/MLPPP for this purpose.
The implementation of network ingress scheduling on the cards listed in the table under ‟4-Priority” is very similar to the scheduling mechanisms used for adapter cards that are configured for access ingress traffic; that is, 4-priority scheduling is used (queue-type scheduling combined with profile scheduling). It applies to the following cards and ports:

- T1/E1 ports on the 7705 SAR-A
- T1/E1 ports on the 7705 SAR-M
- T1/E1 ports on the 7705 SAR-X
- 16-port T1/E1 ASAP Adapter card
- 32-port T1/E1 ASAP Adapter card
- 2-port OC3/STM1 Channelized Adapter card
- 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card
- T1/E1 ports on the 4-port T1/E1 and RS-232 Combination module (on the 7705 SAR-H)
The adapter cards provide sets of eight queues for incoming traffic: 7 sets of queues for the 7705 SAR-8 Shelf V2 and 17 sets of queues for the 7705 SAR-18. Each set of queues is specific to a destination adapter card. For the 7705 SAR-8 Shelf V2 and the 7705 SAR-18 respectively, 6 and 16 sets of queues are automatically created (one set for each access egress adapter card), plus 1 set of queues for multicast traffic.
There is one additional set of queues for slow-path (control) traffic destined for the CSMs.
The individual queues within each set of queues provide buffer space for traffic isolation based on the CoS values being applied (from the received EXP bits).
All of the network ingress ports of the adapter card share the same sets of queues, which are created automatically.
When the packets received from the network are mapped to queues, four access ingress-like queue-type and profile (rate-based) schedulers per destination card service the queues in strict priority. The following queue-type and profiled schedulers service the queues in the order listed:
1. Expedited in-profile scheduler
2. Best Effort in-profile scheduler
3. Expedited out-of-profile scheduler
4. Best Effort out-of-profile scheduler
To complete the operation, user-configurable shapers send the traffic into the fabric. See Configurable ingress shaping to fabric (access and network) for details. Throughout this operation, each packet retains its individual CoS value.
Network ingress 4-priority (Gen-3) scheduling
The adapter cards and ports that support 4-priority (Gen-3) scheduling for network ingress traffic are listed in Scheduling modes supported by adapter cards and ports at network ingress. See QoS for Gen-3 adapter cards and platforms for details.
Network ingress 16-priority scheduling
The cards and ports that support 16-priority scheduling for network ingress traffic are listed in Scheduling modes supported by adapter cards and ports at network ingress.
For a detailed description of how 16-priority scheduling functions, see Network egress 16-priority scheduling.
The 7705 SAR-8 Shelf V2 and 7705 SAR-18 adapter cards, and the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx ports provide sets of 8 queues for incoming traffic: 7 sets of queues for the 7705 SAR-8 Shelf V2, 17 sets of queues for the 7705 SAR-18, and 4 sets of queues for the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx.
Each set of queues is specific to a destination adapter card. For the 7705 SAR-8 Shelf V2, 6 sets of queues are automatically created for each access egress adapter card, plus 1 set of queues for multicast traffic. For the 7705 SAR-18, 16 sets of queues are automatically created, plus 1 set of queues for multicast traffic. For the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, 3 sets of queues are automatically created, plus 1 set of queues for multicast traffic. For all these platforms, there is 1 additional set of queues for slow-path (control) traffic that is destined for the CSMs.
Each queue within each set provides buffer space for traffic isolation based on the classification carried out on EXP bits of the MPLS packet header (that is, the CoS setting).
All of the network ingress ports on an adapter card on a 7705 SAR-8 Shelf V2 or 7705 SAR-18 share the same sets of queues, which are created automatically. All of the network ingress ports across the entire 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, or 7705 SAR-Wx also share the same sets of queues, which are created automatically.
Network ingress shaping to fabric
After the traffic is scheduled, it must be sent to the fabric interface. To avoid congestion in the fabric and to ease the effects of possible bursts, a shaper is implemented on each adapter card.
Network ingress shaping to the fabric operates in a similar fashion to access ingress shaping to the fabric. See Ingress shaping to fabric (access and network) for details.
Configurable network ingress shaping to fabric
Configuring network ingress shapers to the fabric is similar to configuring access ingress shapers to the fabric.
The ingress to-fabric shapers are user-configurable. For the 7705 SAR-8 Shelf V2 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable ingress shaping to fabric (access and network) for details.
For information about fabric shapers on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, see Fabric shaping on the fixed platforms (access and network). The 7705 SAR-X does not support configurable network ingress shapers.
Network fabric shaping on the fixed platforms
The 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, and 7705 SAR-Wx support user-configurable fabric shapers at rates of up to 5 Gb/s for access ingress traffic and network ingress traffic.
On the 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, and 7705 SAR-Wx, network ingress shapers to the fabric operate similarly to access ingress shapers to the fabric. The 7705 SAR-X does not support configurable network ingress shapers. See Fabric shaping on the fixed platforms (access and network) for more information.
Access egress
This section contains the following topics for traffic flow in the access egress direction:
Access egress queuing and scheduling
The following sections discuss the queuing and scheduling of access egress traffic, which is traffic that egresses the fabric on the access side:
Access egress scheduling takes place at the native traffic layer. As an example, when the ATM pseudowire payload is delivered from the network ingress to the access egress, the playback of the ATM cells to the appropriate ATM SAP is done according to ATM traffic management specifications.
Access egress scheduling is supported on the adapter cards and ports listed in the following table. The supported scheduling modes are 4-priority and 16-priority. The table shows which scheduling mode each card and port supports at access egress.
| Adapter card or port | 4-priority | 16-priority |
|---|---|---|
| 8-port Gigabit Ethernet Adapter card | ✓ | ✓ |
| Packet Microwave Adapter card | ✓ | ✓ |
| 6-port Ethernet 10Gbps Adapter card¹ | ✓ | |
| 10-port 1GigE/1-port 10GigE X-Adapter card (10-port 1GigE mode) | ✓ | ✓ |
| 4-port SAR-H Fast Ethernet module | ✓ | |
| 6-port SAR-M Ethernet module | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-A | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Ax | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-H | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Hc | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-M | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Wx | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-X¹ | ✓ | |
| 16-port T1/E1 ASAP Adapter card | ✓ | |
| 32-port T1/E1 ASAP Adapter card | ✓ | |
| 2-port OC3/STM1 Channelized Adapter card | ✓ | |
| 4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
| 4-port DS3/E3 Adapter card | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-A | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-M | ✓ | |
| TDM ports on the 7705 SAR-X | ✓ | |
| 12-port Serial Data Interface card | ✓ | |
| 6-port E&M Adapter card | ✓ | |
| 6-port FXS Adapter card | ✓ | |
| 8-port FXO Adapter card | ✓ | |
| 8-port Voice & Teleprotection card | ✓ | |
| 8-port C37.94 Teleprotection card | ✓ | |
| Integrated Services card | ✓ | |

¹ 4-priority scheduler for Gen-3 adapter card or platform.
BMU traffic access egress queuing and scheduling
At access egress, the 7705 SAR handles traffic management for unicast and BMU traffic in the same way. Unicast or BMU traffic is mapped to a queue and the mapping is based on the FC classification. Individual queues are then scheduled based on the available traffic.
ATM access egress queuing and scheduling
After the ATM pseudowire is terminated at the access egress, all the ATM cells are mapped to the default queue (queue 1), and queuing is performed per SAP. ATM access egress queuing and scheduling applies to the 16-port T1/E1 ASAP Adapter card, the 32-port T1/E1 ASAP Adapter card, and the 2-port OC3/STM1 Channelized Adapter card with atm/ima encapsulation, and to the 4-port OC3/STM1 Clear Channel Adapter card and the 4-port DS3/E3 Adapter card with atm encapsulation.
After the per-SAP queuing takes place, the ATM scheduler services these queues in the order described below, based on the service categories assigned to each of these SAPs.
At access egress, CBR and rt-VBR VCs are always shaped because there is no option for the user to turn shaping off. Shaping for nrt-VBR is optional.
Strict priority scheduling in an exhaustive fashion takes place for the shaped VCs in the following order:

1. CBR (always shaped)
2. rt-VBR (always shaped)
3. nrt-VBR (when shaped; user-configurable as shaped or unshaped)
UBR traffic is not shaped. To offer maximum flexibility to the user, unshaped nrt-VBR (also known as scheduled nrt-VBR) is also supported.
ATM traffic is serviced in priority order. CBR traffic has the highest priority and is serviced ahead of all other traffic. After all of the CBR traffic has been serviced, rt-VBR traffic is serviced. Then, nrt-VBR traffic is serviced.
After scheduling all the other traffic from the CBR and VBR service categories, UBR is serviced. If there is no other traffic, UBR can burst up to the line rate. Scheduled nrt-VBR is treated the same way as UBR. Both UBR and unshaped nrt-VBR are scheduled using the weighted round-robin scheduler.
The scheduler weight assigned to queues hosting scheduled nrt-VBR and UBR traffic is determined by the configured traffic rate. The weight used by the scheduler for UBR+ VCs depends on the minimum information rate (MIR) defined by the user; UBR traffic with no MIR is treated as having an MIR of 0.
Similarly, for scheduled nrt-VBR, the scheduler weight depends on the sustained information rate (SIR). The weight used by the scheduler is programmed automatically based on the user-configured MIR or SIR value and is not user-configurable.
The following tables are used to determine the weight of a UBR+ VC. They also apply to scheduled nrt-VBR weight determination, with the SIR used instead of the MIR.
| Minimum information rate | Scheduler weight |
|---|---|
| <64 kb/s | 1 |
| <128 kb/s | 2 |
| <256 kb/s | 3 |
| <512 kb/s | 4 |
| <1024 kb/s | 5 |
| <1536 kb/s | 6 |
| <1920 kb/s | 7 |
| ≥1920 kb/s | 8 |
| Range OC3 ATM | Range DS3 ATM | Weight |
|---|---|---|
| 0 to 1 Mb/s | 0 to 512 kb/s | 1 |
| >1 Mb/s to 4 Mb/s | >512 kb/s to 1 Mb/s | 2 |
| >4 Mb/s to 8 Mb/s | >1 Mb/s to 2 Mb/s | 3 |
| >8 Mb/s to 16 Mb/s | >2 Mb/s to 4 Mb/s | 4 |
| >16 Mb/s to 32 Mb/s | >4 Mb/s to 8 Mb/s | 5 |
| >32 Mb/s to 50 Mb/s | >8 Mb/s to 16 Mb/s | 6 |
| >50 Mb/s to 100 Mb/s | >16 Mb/s to 32 Mb/s | 7 |
| >100 Mb/s | >32 Mb/s | 8 |
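The MIR-to-weight mapping in the first table above reduces to a small lookup function, sketched here. For scheduled nrt-VBR, the SIR would be passed in place of the MIR.

```python
# Scheduler weight for a UBR+ VC from its MIR (kb/s), per the
# MIR-to-weight table above. UBR with no MIR uses MIR = 0, which
# yields the lowest weight, 1.
_THRESHOLDS_KBPS = [64, 128, 256, 512, 1024, 1536, 1920]

def scheduler_weight(mir_kbps):
    for weight, limit in enumerate(_THRESHOLDS_KBPS, start=1):
        if mir_kbps < limit:
            return weight
    return 8   # 1920 kb/s or higher

print(scheduler_weight(0), scheduler_weight(512), scheduler_weight(2048))
# 1 5 8
```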
The access egress ATM scheduling behavior is shown in the following table. For UBR traffic, the lowest possible scheduler weight (a value of 1) is always used. Only cell-based operations are carried out.
| Flow type | Transmission rate | Priority |
|---|---|---|
| Shaped CBR | Limited to configured PIR | Strict priority over all other traffic |
| Shaped rt-VBR | Limited to configured SIR, but with bursts up to PIR within MBS | Strict priority over all but shaped CBR |
| Shaped nrt-VBR | Limited to configured SIR, but with bursts up to PIR within MBS | Strict priority over all scheduled traffic |
| Scheduled nrt-VBR | Weighted share (according to SIR) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as UBR+ and UBR |
| Scheduled UBR+ | Weighted share (according to MIR) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as nrt-VBR and UBR |
| Scheduled UBR | Weighted share (with weight of 1) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as nrt-VBR and UBR+ |
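As an illustration, a UBR+ VC of this kind is provisioned through an ATM traffic descriptor profile. The profile ID, description, and parameter values below are placeholders, and the exact parameter forms should be checked against the atm-td-profile command reference:

```
config>qos# atm-td-profile 5 create
config>qos>atm-td-profile$ description "UBR+ VC with 256 kb/s MIR"
config>qos>atm-td-profile$ service-category ubr
config>qos>atm-td-profile$ traffic mir 256
config>qos>atm-td-profile$ exit
```

With an MIR of 256 kb/s, the weight tables above place this VC in the <512 kb/s band, so the WRR scheduler automatically programs a weight of 4.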
Ethernet access egress queuing and scheduling
Ethernet access egress queuing and scheduling is very similar to the Ethernet access ingress behavior. When the Ethernet pseudowire is terminated, traffic is mapped to up to eight different forwarding classes per SAP. The mapping to forwarding classes is performed by network ingress classification, based on the EXP bit settings of the received Ethernet pseudowire.
Queue-type and profile scheduling are both supported for Ethernet access egress ports. If the queues are configured according to the tables and defaults described in this guide (implying a default mode of operation), the configuration is as follows:
- CoS-8 to CoS-5: Expedited in-profile
- CoS-4 to CoS-1: Best Effort in-profile
- CoS-8 to CoS-5: Expedited out-of-profile
- CoS-4 to CoS-1: Best Effort out-of-profile
In this default configuration, for queue-type scheduling, CoS-8 to CoS-5 are serviced by the Expedited scheduler, and CoS-4 to CoS-1 are serviced by the Best Effort scheduler. This default mode of operation can be altered to better fit the operating characteristics of specific SAPs.
With profile scheduling, the Ethernet frames can be either in-profile or out-of-profile, and scheduling takes into account the state of the Ethernet frames in conjunction with the configured CIR and PIR rates.
After the queuing, an aggregate queue-type and profile scheduling takes place in the following order:
Expedited in-profile traffic
Best Effort in-profile traffic
Expedited out-of-profile traffic
Best Effort out-of-profile traffic
After the traffic is scheduled using the aggregate queue-type and profile schedulers, the per-port shapers shape the traffic to a sub-rate (that is, to the configured/shaped port rate). Per-port shapers ensure that the configured sub-rate is enforced at all times.
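The port sub-rate is set with the egress-rate command under the Ethernet port context. The port ID and rate (in kb/s) below are illustrative:

```
config# port 1/1/1
config>port# ethernet
config>port>ethernet# egress-rate 50000
```

Here the port shaper caps egress at 50 Mb/s even if the physical interface runs at a higher line rate.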
Access egress per-SAP aggregate shapers (access egress H-QoS)
Per-SAP aggregate shapers in the access egress direction operate in a similar fashion to aggregate shapers for access ingress, except that egress traffic goes through the schedulers to the egress port shaper instead of through the schedulers to the fabric port as in the access ingress case. For information about how access egress and access ingress per-SAP shaping is similar, see Access ingress per-SAP aggregate shapers (access ingress H-QoS). For general information about per-SAP shapers, see Per-SAP aggregate shapers (H-QoS) on Gen-2 hardware.
The arbitration of access egress traffic from the per-SAP aggregate shapers to the schedulers is described in the following section.
Access egress per-SAP shapers arbitration
The arbitration of traffic from 4-priority and 16-priority schedulers toward an access egress port is achieved by configuring a committed aggregate rate limit for the aggregate of all the 4-priority unshaped SAPs. By configuring the 4-priority unshaped SAPs committed aggregate rate, the arbitration between the 16-priority shaped SAPs, 16-priority unshaped SAPs, and 4-priority unshaped SAPs is handled in the following priority order:
committed traffic: 16-priority per-SAP agg-rate-limit committed for shaped SAPs and 4-priority aggregate committed rate for all the unshaped SAPs
uncommitted traffic: 16-priority per-SAP agg-rate-limit uncommitted for shaped and unshaped SAPs and 4-priority aggregate uncommitted rate for all the unshaped SAPs
The following figure illustrates the traffic treatment for a single Ethernet port. It also illustrates that the shaped SAP aggregate CIR rate competes with the unshaped 4-priority aggregate CIR rate for port bandwidth. When the aggregate CIR rates are satisfied, the shaped SAP aggregate PIR rate competes with the 4-priority PIR rate (always maximum) for port bandwidth.
The egress aggregate CIR rate limit for all the unshaped 4-priority SAPs is configured using the config>port>ethernet>access>egress>unshaped-sap-cir command.
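For example, to reserve a 10 Mb/s committed aggregate for all the unshaped 4-priority SAPs on a port (the value, assumed here to be in kb/s, is illustrative):

```
config>port>ethernet>access>egress# unshaped-sap-cir 10000
```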
Access egress shaping for hybrid ports
Hybrid ports use a third-tier, dual-rate aggregate shaper to provide arbitration between the bulk of access and network egress traffic flows. For details, see QoS for hybrid ports on Gen-2 hardware.
Access egress for 4-priority (Gen-3) scheduling
The adapter cards and ports that support 4-priority (Gen-3) scheduling for access egress traffic are listed in Scheduling modes supported by adapter cards and ports at access egress. See QoS for Gen-3 adapter cards and platforms for details.
Access egress marking and re-marking
At access egress, where the network-wide QoS boundary is reached, there may be a requirement to mark or re-mark the CoS indicators to match customer requirements. Dot1p and DSCP marking and re-marking is supported at Ethernet access egress.
Similar to access ingress for Ethernet, DSCP marking or re-marking is supported for untagged, single-tagged, or double-tagged Ethernet frames.
On Ipipe SAPs over an Ethernet VLAN, both dot1p and DSCP marking and re-marking are supported at access egress. On Ipipe SAPs over PPP/MLPPP, DSCP marking and re-marking are supported at access egress. DSCP re-marking is supported at access egress for Ipipes using FR or cHDLC SAPs.
Packet byte offset
Packet byte offset (PBO), or internal headerless rate, allows 7705 SAR schedulers to operate on a modified packet size by adding or subtracting a certain number of bytes. The actual packet size remains the same but schedulers take into account the modified size as opposed to the actual size of the packet. One of the main uses of the packet byte offset feature is to allow scheduling, at access ingress, to be carried out on the received packet size without taking into account service (for example, PW, MPLS) or internal overhead. Transport providers who sell bandwidth to customers typically need the 7705 SAR shapers/schedulers to only take into account the received packet size without the added overhead in order to accurately calculate the bandwidth they need to provide to their customers. Packet byte offset addresses this requirement. Another common use is at egress where port shapers can take into account four additional bytes, associated with Ethernet FCS.
Packet byte offset is configured under QoS profiles. Packet size modification might be desired to accommodate inclusion or exclusion of certain headers or even fields of headers during the scheduling operation. The packet size that the schedulers take into account is altered to accommodate or omit the desired number of bytes. Both addition and subtraction options are supported by the packet-byte-offset command. The actual packet size is not modified by the command; only the size used by ingress or egress schedulers is changed. The scheduling rates are affected by the offset, as well as the statistics (accounting) associated with the queue. Packet byte offset does not affect port-level and service-level statistics. It only affects the queue statistics.
When a QoS policy configured with packet byte offset is applied to a SAP or network interface, all the octet counters and statistics operate and report based on the new adjusted value. If configured, per-SAP aggregate shapers and per-customer aggregate shapers also operate on the adjusted packet sizes. The only exceptions to this rule are port shapers. The egress port shapers do not take the adjusted packet size into account but operate only on the final packet size.
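As a sketch, packet byte offset is configured per queue within a QoS policy. The policy ID, queue number, and byte count below are illustrative; subtracting 14 bytes would, for example, remove an Ethernet header from the size seen by the scheduler:

```
config>qos# sap-ingress 100 create
config>qos>sap-ingress$ queue 1 create
config>qos>sap-ingress>queue$ packet-byte-offset subtract 14
config>qos>sap-ingress>queue$ exit
```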
The following tables show PBO support on the 7705 SAR for second- and third-generation adapter cards and platforms, by traffic direction.

| Traffic direction / PBO count | Per SAP CoS Queue | Per SAP Shaper | Per Customer Shaper | Fabric Shaper (7705 SAR-8 Shelf V2 / 7705 SAR-18): sum of adjusted MSS shapers ≤ fabric shapers | Fabric Shaper (7705 SAR-8 Shelf V2 / 7705 SAR-18): sum of adjusted MSS shapers > fabric shapers |
|---|---|---|---|---|---|
| Access ingress | ✓ | ✓ | ✓ | internal packet size, no FCS | internal packet size, no FCS |
| auto | 3 | 3 | 3 | 3 | internal packet size, no FCS (fabric shaper rate) |
| add 50 | 2 | 2 | 2 | 2 | 3 |
| subtract 50 | 6 | 6 | 6 | 6 | 3 |

| Traffic direction / PBO count | Per CoS Queue | Bypass | Fabric Shaper (on non-chassis-based nodes), otherwise bypass | Fabric Shaper (7705 SAR-8 Shelf V2 / 7705 SAR-18) |
|---|---|---|---|---|
| Network ingress | ✓ | n/a | ✓ | ✓ |
| add 50 | 2 | n/a | 2 | 2 |
| subtract 50 | 6 | n/a | 6 | 6 |

| Traffic direction / PBO count | Per SAP CoS Queue | Per SAP Shaper | Per Customer Shaper | Port Shaper: sum of adjusted MSS shapers ≤ egress rate | Port Shaper: sum of adjusted MSS shapers > egress rate |
|---|---|---|---|---|---|
| Access egress | ✓ | ✓ | ✓ | ✓ | final packet size, FCS optional |
| add 50 | 2 | 2 | 2 | 2 (room for 3) | 3 |
| subtract 50 | 6 | 6 | 6 | 6 | 3 |

| Traffic direction / PBO count | Per CoS Queue | Per VLAN Shaper | Bypass | Port Shaper: sum of adjusted VLAN shapers ≤ egress rate | Port Shaper: sum of adjusted VLAN shapers > egress rate |
|---|---|---|---|---|---|
| Network egress | ✓ | ✓ | n/a | ✓ | final packet size, FCS optional |
| add 50 | 2 | 2 | n/a | 2 (room for 3) | 2 (room for 3) |
| subtract 50 | 6 | 6 | n/a | 6 | 3 |

| Traffic direction / PBO count | Per Access / Network CoS Queue | SAP / VLAN Shaper | Per Customer Shaper / Access-Network Arbitrator | Port Shaper: sum of adjusted MSS/NW arbitrator shapers ≤ egress rate | Port Shaper: sum of adjusted MSS/NW arbitrator shapers > egress rate |
|---|---|---|---|---|---|
| Hybrid egress | ✓ | ✓ | ✓ | ✓ | final packet size, FCS optional |
| add 50 | 2 | 2 | 2 | 2 (room for 3) | 3 |
| subtract 50 | 6 | 6 | 6 | 6 | 3 |
QoS policies overview
This section contains the following topics related to QoS policies:
Overview
7705 SAR QoS policies are applied on service ingress, service egress, and network interfaces. The service ingress and service egress points may be considered as the network QoS boundaries for the service being provided.
The QoS policies define:
classification rules for how traffic is mapped to forwarding classes
how forwarding classes are aggregated under queues
the queue parameters used for policing, shaping, and buffer allocation
QoS marking/interpretation
There are several types of QoS policies (see QoS policy types and descriptions for summaries and references to details):
service ingress (also known as access ingress)
service egress (also known as access egress)
MC-MLPPP SAP egress
network (for ingress and egress and ring)
- IP interface type policy for network ingress and egress
- ring type policy for Ethernet bridging domain on a ring adapter card
network queue (for ingress and egress)
slope
ATM traffic descriptor profile
fabric profile
shaper
Note: The terms access ingress/egress and service ingress/egress are interchangeable. The previous sections used the term access, and the sections that follow use the term service.
Service ingress QoS policies are applied to the customer-facing SAPs and map traffic to forwarding class queues on ingress. The mapping of traffic to queues can be based on combinations of customer QoS marking (dot1p bits and DSCP values). The number of forwarding class queues for ingress traffic and the queue characteristics are defined within the policy. There can be up to eight ingress forwarding class queues in the policy, one for each forwarding class.
Within a service ingress QoS policy, up to three queues per forwarding class can be used for multipoint traffic for multipoint services. Multipoint traffic consists of broadcast, multicast, and unknown (BMU) traffic types. For VPLS, four types of forwarding are supported (which are not to be confused with forwarding classes): unicast, broadcast, multicast, and unknown. The BMU types are flooded to all destinations within the service, while the unicast forwarding type is handled in a point-to-point fashion within the service.
Service ingress QoS policies on the 7705 SAR allow flexible arrangement of these queues. For example, more than one FC can be mapped to a single queue, both unicast and multipoint (BMU) traffic can be mapped to a single queue, or unicast and BMU traffic can be mapped to separate queues. Therefore, customers are not limited to the default configurations that are described in this guide.
Service egress QoS policies are applied to egress SAPs and provide the configurations needed to map forwarding classes to service egress queues. Each service can have up to eight queues configured, since a service may require multiple forwarding classes. A service egress QoS policy also defines how to re-mark dot1p bits and DSCP values of the customer traffic in native format based on the forwarding class of the customer traffic.
Network ingress and egress QoS policies are applied to network interfaces. On ingress for traffic received from the network, the policy maps incoming EXP values to forwarding classes and profile states. On egress, the policy maps forwarding classes and profile states to EXP values for traffic to be transmitted into the network.
On the network side, there are two types of QoS policies: network and network queue (see QoS policy types and descriptions ). The network type of QoS policy is applied to the network interface under the config>router>interface command and contains the EXP marking rules for both ingress and egress. The network queue type of QoS policy defines all of the internal settings; that is, how the queues, or sets of queues (for ingress), are set up and used per physical port on egress and per adapter card for ingress.
A ring type network policy can be applied to the ring ports and the add/drop port on a ring adapter card. The policy is created under the config>qos>network command, and applied at the adapter card level under the config>card>mda command. The policy maps each dot1p value to a queue and a profile state.
If GRE or IP tunneling is enabled, policy mapping can be set up to use DSCP bits.
Network queue policies are applied on egress to network ports and channels and on ingress to adapter cards. The policies define the forwarding class queue characteristics for these entities.
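A minimal network-queue policy might be sketched as follows. The policy name and values are placeholders; rates in network-queue policies are expressed as percentages of the available bandwidth rather than absolute rates:

```
config>qos# network-queue "nq-example" create
config>qos>network-queue$ queue 1 create
config>qos>network-queue>queue$ rate 100 cir 25
config>qos>network-queue>queue$ exit
config>qos>network-queue$ fc be create
config>qos>network-queue>fc$ queue 1
config>qos>network-queue>fc$ exit
```

The policy is then applied per adapter card for ingress and per port for egress.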
Service ingress, service egress, and network QoS policies are defined with a scope of either template or exclusive. Template policies can be applied to multiple SAPs or interfaces, whereas exclusive policies can only be applied to a single entity.
One service ingress QoS policy and one service egress QoS policy can be applied to a specific SAP. One network QoS policy can be applied to a specific interface. A network QoS policy defines both ingress and egress behavior. If no QoS policy is explicitly applied to a SAP or network interface, a default QoS policy is applied.
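For example, service policies might be applied to a SAP and a network policy to an interface as follows; the service, SAP, interface, and policy IDs are placeholders:

```
config>service# epipe 10
config>service>epipe# sap 1/2/3:100 create
config>service>epipe>sap$ ingress
config>service>epipe>sap>ingress# qos 100
config>service>epipe>sap>ingress# exit
config>service>epipe>sap# egress
config>service>epipe>sap>egress# qos 200
config>service>epipe>sap>egress# exit

config>router# interface "to-core"
config>router>if# qos 2
```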
The following table provides a summary of the major functions performed by the QoS policies.
| Policy type | Applied at… | Description |
|---|---|---|
| Service Ingress | SAP ingress | Defines up to eight forwarding class queues and queue parameters for traffic classification. Defines match criteria to map flows to the queues based on combinations of customer QoS (dot1p bits and DSCP values). |
| Service Egress | SAP egress | Defines up to eight forwarding class queues and queue parameters for traffic classification. Maps one or more forwarding classes to the queues. |
| MC-MLPPP | SAP egress | Defines up to eight forwarding class queues and queue parameters for traffic classification. Maps one or more forwarding classes to the queues. |
| Network | Network interface | Packets are marked using QoS policies on edge devices, such as the 7705 SAR at access ingress. Invoking a QoS policy on a network port allows packets that match the policy criteria to be re-marked at network egress for appropriate CoS handling across the network. |
| Network Queue | Adapter card network ingress and egress | Defines forwarding class mappings to network queues |
| Slope | Adapter card ports | Enables or disables the high-slope and low-slope parameters within the egress or ingress queue |
| ATM Traffic Descriptor Profile | SAP ingress | Defines the expected rates and characteristics of traffic. Specified traffic parameters are used for policing ATM cells and for selecting the service category for the per-VC queue. |
| ATM Traffic Descriptor Profile | SAP egress | Defines the expected rates and characteristics of traffic. Specified traffic parameters are used for scheduling and shaping ATM cells and for selecting the service category for the per-VC queue. |
| Fabric Profile | Adapter card access and network ingress | Defines access and network ingress to-fabric shapers at user-configurable rates |
| Shaper | Adapter card ports | Defines dual-rate shaping parameters for a shaper group in a shaper policy |
Service ingress QoS policies
Service ingress QoS policies define ingress service forwarding class queues and map flows to those queues. When a service ingress QoS policy is created, it always has a default ingress traffic queue defined that cannot be deleted. The queues exist within the definition of the policy and are only instantiated when the policy is applied to a SAP.
In the simplest service ingress QoS policy, all traffic is treated as a single flow and mapped to a single queue. The required elements to define a service ingress QoS policy are:
a unique service ingress QoS policy ID
a QoS policy scope of template or exclusive
at least one default ingress forwarding class queue. The parameters that can be configured for a queue are discussed in Network and service QoS queue parameters.
Optional service ingress QoS policy elements include:
additional ingress queues up to a total of eight
QoS policy match criteria to map packets to a forwarding class
Each queue can have unique queue parameters to allow individual policing and rate shaping of the flow mapped to the forwarding class. The following figure depicts service traffic being classified into three different forwarding class queues.
The mapping of flows to forwarding classes is controlled by comparing each packet to the match criteria in the QoS policy. The ingress packet classification to forwarding class and enqueuing priority is subject to a classification hierarchy. Each type of classification rule is interpreted with a specific priority in the hierarchy.
The following table is an example for an Ethernet SAP (that is, a SAP defined over a whole Ethernet port, over a single VLAN, or over QinQ VLANs). The table lists the classification rules in the order in which they are evaluated.
| Rule | Forwarding class | Enqueuing priority | Comments |
|---|---|---|---|
| default-fc | Set to the policy’s default FC | Set to the policy default | All packets match the default rule |
| dot1p dot1p-value | Set when an fc-name exists in the policy; otherwise, preserved from the previous match | Set when the priority parameter is high or low; otherwise, preserved from the previous match | Each dot1p-value must be explicitly defined. Each packet can only match a single dot1p rule. For QinQ applications, the dot1p-value used (top or bottom) is specified by the match-qinq-dot1p command. |
| dscp dscp-name | Set when an fc-name exists in the policy; otherwise, preserved from the previous match | Set when the priority parameter is high or low in the entry; otherwise, preserved from the previous match | Each dscp-name that defines the DSCP value must be explicitly defined. Each packet can only match a single DSCP rule. |
The enqueuing priority is specified as part of the classification rule and is set to high or low. The enqueuing priority relates to the forwarding class queue’s high-priority-only allocation, where only packets with a high enqueuing priority are accepted into the queue when the queue’s depth reaches the defined threshold. See High-priority-only buffers.
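Pulling these elements together, the following is a hedged sketch of a service ingress policy that maps dot1p 5 traffic to an expedited queue with high enqueuing priority; the policy ID, rates, and mappings are illustrative:

```
config>qos# sap-ingress 100 create
config>qos>sap-ingress$ description "voice plus default best effort"
config>qos>sap-ingress$ queue 2 expedite create
config>qos>sap-ingress>queue$ rate 10000 cir 2000
config>qos>sap-ingress>queue$ exit
config>qos>sap-ingress$ fc "ef" create
config>qos>sap-ingress>fc$ queue 2
config>qos>sap-ingress>fc$ exit
config>qos>sap-ingress$ dot1p 5 fc "ef" priority high
config>qos>sap-ingress$ default-fc "be"
```

Frames received with dot1p 5 are classified to the ef forwarding class and accepted even when queue 2 has reached its high-priority-only threshold; all other traffic falls through to the default-fc rule.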
The mapping of ingress traffic to a forwarding class based on dot1p or DSCP bits is optional. The default service ingress policy is implicitly applied to all SAPs that do not explicitly have another service ingress policy assigned. The characteristics of the default policy are listed in the following table.
| Characteristic | Item | Definition |
|---|---|---|
| Queues | Queue 1 | One queue for all ingress traffic |
| Flows | Default FC | One flow defined for all traffic |
- See Buffer support on adapter cards and platforms for a list of adapter cards and buffer sizes.
Service egress QoS policies
Service egress queues are implemented at the transition from the service network to the service access network. The advantages of per-service queuing before transmission into the access network are:
per-service egress shaping, soft-policing capabilities
more granular, more fair scheduling per service into the access network
per-service statistics for forwarded and discarded service packets
The subrate capabilities and per-service scheduling control are required to make multiple services per physical port possible. Without egress shaping, it is impossible to support more than one service per port. There is no way to prevent service traffic from bursting to the available port bandwidth and starving other services.
For accounting purposes, per-service statistics can be logged. When statistics from service ingress queues are compared with service egress queues, the ability to conform to per-service QoS requirements within the service network can be measured. The service network statistics are a major asset to network provisioning tools.
Service egress QoS policies define egress service queues and map forwarding class flows to queues. In the simplest service egress QoS policy, all forwarding classes are treated as a single flow and mapped to a single queue.
To define a basic service egress QoS policy, the following are required:
a unique service egress QoS policy ID
a QoS policy scope of template or exclusive
at least one defined default queue. The parameters that can be configured for a queue are discussed in Network and service QoS queue parameters.
Optional service egress QoS policy elements include:
additional queues, up to a total of eight separate queues
dot1p priority and DSCP value re-marking based on forwarding class
Each queue in a policy is associated with one or more of the supported forwarding classes. Each queue can have its individual queue parameters, allowing individual rate shaping of the forwarding classes mapped to the queue. More complex service queuing models are supported in the 7705 SAR where each forwarding class is associated with a dedicated queue.
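A comparable hedged sketch of a service egress policy, with a dedicated queue for the ef forwarding class and dot1p re-marking; the policy ID, rates, and marking values are illustrative:

```
config>qos# sap-egress 200 create
config>qos>sap-egress$ queue 2 expedite create
config>qos>sap-egress>queue$ rate 10000 cir 2000
config>qos>sap-egress>queue$ exit
config>qos>sap-egress$ fc ef create
config>qos>sap-egress>fc$ queue 2
config>qos>sap-egress>fc$ dot1p 5
config>qos>sap-egress>fc$ exit
```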
The forwarding class of each service egress packet is determined at ingress. If the packet ingressed the service on the same 7705 SAR router, the service ingress classification rules determine the forwarding class of the packet. If the packet was received over a service transport tunnel, the forwarding class is marked in the tunnel transport encapsulation.
Service egress QoS policy ID 1 is reserved as the default service egress policy. The default policy cannot be deleted or changed.
The default service egress policy is applied to all SAPs that do not have another service egress policy explicitly assigned. The characteristics of the default policy are listed in the following table.
| Characteristic | Item | Definition |
|---|---|---|
| Queues | Queue 1 | One queue defined for all traffic classes |
| Flows | Default action | One flow defined for all traffic classes |
- See Buffer support on adapter cards and platforms for a list of adapter cards and buffer sizes.
MC-MLPPP SAP egress QoS policies
SAPs running MC-MLPPP have their own SAP egress QoS policies that differ from standard policies. Unlike standard SAP policies, MC-MLPPP SAP egress policies do not contain queue types, CIR, CIR adaptation rules, or dot1p re-marking.
Standard and MC-MLPPP SAP egress policies can never have the same policy ID except when the policy ID is 1 (default). Standard SAP egress QoS policies cannot be applied to SAPs running MC-MLPPP. Similarly, MC-MLPPP SAP egress QoS policies cannot be applied to standard SAPs. The default policy can be applied to both MC-MLPPP and other SAPs. It will remain the default policy regardless of SAP type.
MC-MLPPP on the 7705 SAR supports scheduling based on multiclass implementation. Instead of the standard profiled queue-type scheduling, an MC-MLPPP encapsulated access port performs class-based traffic servicing.
The four MC-MLPPP classes are scheduled in a strict priority fashion, as shown in the following table.
| MC-MLPPP class | Priority |
|---|---|
| 0 | Priority over all other classes |
| 1 | Priority over classes 2 and 3 |
| 2 | Priority over class 3 |
| 3 | No priority |
For example, if a packet is sent to an MC-MLPPP class 3 queue and all other queues are empty, the 7705 SAR fragments the packet according to the configured fragment size and begins sending the fragments. If a new packet is sent to an MC-MLPPP class 2 queue, the 7705 SAR finishes sending any fragments of the class 3 packet that are on the wire, then holds back the remaining fragments in order to service the higher-priority packet. The fragments of the first packet remain at the top of the class 3 queue. For packets of the same class, MC-MLPPP class queues operate on a first-in, first-out basis.
The user configures the required number of MLPPP classes to use on a bundle. The forwarding class of the packet, as determined by the ingress QoS classification, is used to determine the MLPPP class for the packet. The mapping of forwarding class to MLPPP class is a function of the user-configurable number of MLPPP classes. The default mapping for a 4-class, 3-class, and 2-class MLPPP bundle is shown in the following table.
| FC ID | FC name | MLPPP class (4-class bundle) | MLPPP class (3-class bundle) | MLPPP class (2-class bundle) |
|---|---|---|---|---|
| 7 | NC | 0 | 0 | 0 |
| 6 | H1 | 0 | 0 | 0 |
| 5 | EF | 1 | 1 | 1 |
| 4 | H2 | 1 | 1 | 1 |
| 3 | L1 | 2 | 2 | 1 |
| 2 | AF | 2 | 2 | 1 |
| 1 | L2 | 3 | 2 | 1 |
| 0 | BE | 3 | 2 | 1 |
If one or more forwarding classes are mapped to a queue, the scheduling priority of the queue is based on the lowest forwarding class mapped to it. For example, if forwarding classes 0 and 7 are mapped to a queue, the queue is serviced by MC-MLPPP class 3 in a 4-class bundle model.
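The number of MLPPP classes is configured on the bundle itself. The bundle ID below is a placeholder and the exact command context may differ by platform and release, so consult the interface configuration command reference:

```
config# port bundle-ppp-1/1.1
config>port# multilink-bundle
config>port>multilink-bundle# mlppp
config>port>multilink-bundle>mlppp# multiclass 4
```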
Network and network queue QoS policies
The QoS mechanisms within the 7705 SAR are specialized for the type of traffic on the interface. For customer interfaces, there is service ingress and service egress traffic, and for network interfaces, there is network ingress and network egress traffic.
The 7705 SAR uses QoS policies applied to a SAP for a service or to a network port to define the queuing, queue attributes, and QoS marking/interpretation.
The 7705 SAR supports the following types of network and service QoS policies:
Service ingress QoS policies (described previously)
Service egress QoS policies (described previously)
Note: Queuing parameters are the same for both network and service QoS policies. See Network and service QoS queue parameters.
Network QoS policies
Network QoS policies define egress QoS marking and ingress QoS classification for traffic on network interfaces. The 7705 SAR automatically creates egress queues for each of the forwarding classes on network interfaces.
A network QoS policy defines ingress, egress, and ring handling of QoS on the network interface. The following functions are defined:
ingress
defines label switched path Experimental bit (LSP EXP) value mappings to forwarding classes
defines DSCP name mappings to forwarding classes
egress
defines forwarding class to LSP EXP and dot1p value markings
defines forwarding class to DSCP value markings
ring
defines dot1p bit value mappings to queue and profile state
The required elements to be defined in a network QoS policy are:
a unique network QoS policy ID
egress forwarding class to LSP EXP value mappings for each forwarding class used
a default ingress forwarding class and in-profile/out-of-profile state
a default queue and in-profile/out-of-profile state for ring type network QoS policy
Optional ip-interface type network QoS policy elements include the LSP EXP value or DSCP name to forwarding class and profile state mappings for all EXP values or DSCP values received. Optional ring type network QoS policy elements include the dot1p bits value to queue and profile state mappings for all dot1p bit values received.
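A hedged sketch of an ip-interface type network policy covering the required and optional elements above; the policy ID and the specific mappings are illustrative:

```
config>qos# network 2 create
config>qos>network$ ingress
config>qos>network>ingress$ default-action fc be profile out
config>qos>network>ingress$ lsp-exp 5 fc ef profile in
config>qos>network>ingress$ exit
config>qos>network$ egress
config>qos>network>egress$ fc ef create
config>qos>network>egress>fc$ lsp-exp-in-profile 5
config>qos>network>egress>fc$ lsp-exp-out-profile 5
config>qos>network>egress>fc$ exit
```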
Network policy ID 1 is reserved as the default network QoS policy. The default policy cannot be deleted or changed. The default network QoS policy is applied to all network interfaces and ring ports (for ring adapter cards) that do not have another network QoS policy explicitly assigned.
The following tables list the various network QoS policy default mappings:
- Default network QoS policy egress marking
- Default network QoS policy DSCP-to-forwarding class mappings
- Default network QoS policy LSP EXP-to-forwarding class mappings
- Default network QoS policy dot1p-to-queue class mappings
The following table lists the default mapping of forwarding class to LSP EXP values and DSCP names for network egress.
| FC-ID | FC name | FC label | DiffServ name | Egress LSP EXP marking (in-profile) | Egress LSP EXP marking (out-of-profile) | Egress DSCP marking (in-profile) | Egress DSCP marking (out-of-profile) |
|---|---|---|---|---|---|---|---|
| 7 | Network Control | nc | NC2 | 111 - 7 | 111 - 7 | nc2 111000 - 56 | nc2 111000 - 56 |
| 6 | High-1 | h1 | NC1 | 110 - 6 | 110 - 6 | nc1 110000 - 48 | nc1 110000 - 48 |
| 5 | Expedited | ef | EF | 101 - 5 | 101 - 5 | ef 101110 - 46 | ef 101110 - 46 |
| 4 | High-2 | h2 | AF4 | 100 - 4 | 100 - 4 | af41 100010 - 34 | af42 100100 - 36 |
| 3 | Low-1 | l1 | AF2 | 011 - 3 | 010 - 2 | af21 010010 - 18 | af22 010100 - 20 |
| 2 | Assured | af | AF1 | 011 - 3 | 010 - 2 | af11 001010 - 10 | af12 001100 - 12 |
| 1 | Low-2 | l2 | CS1 | 001 - 1 | 001 - 1 | cs1 001000 - 8 | cs1 001000 - 8 |
| 0 | Best Effort | be | BE | 000 - 0 | 000 - 0 | be 000000 - 0 | be 000000 - 0 |
For network ingress, the following table lists the default mapping of DSCP name to forwarding class and profile state for the default network QoS policy.
| DSCP name | DSCP value | FC ID | FC name | FC label | Profile state |
|---|---|---|---|---|---|
| Default 1 |  | 0 | Best-Effort | be | Out |
| ef | 101110 - 46 | 5 | Expedited | ef | In |
| cs1 | 001000 - 8 | 1 | Low-2 | l2 | In |
| nc1 | 110000 - 48 | 6 | High-1 | h1 | In |
| nc2 | 111000 - 56 | 7 | Network Control | nc | In |
| af11 | 001010 - 10 | 2 | Assured | af | In |
| af12 | 001100 - 12 | 2 | Assured | af | Out |
| af13 | 001110 - 14 | 2 | Assured | af | Out |
| af21 | 010010 - 18 | 3 | Low-1 | l1 | In |
| af22 | 010100 - 20 | 3 | Low-1 | l1 | Out |
| af23 | 010110 - 22 | 3 | Low-1 | l1 | Out |
| af31 | 011010 - 26 | 3 | Low-1 | l1 | In |
| af32 | 011100 - 28 | 3 | Low-1 | l1 | Out |
| af33 | 011110 - 30 | 3 | Low-1 | l1 | Out |
| af41 | 100010 - 34 | 4 | High-2 | h2 | In |
| af42 | 100100 - 36 | 4 | High-2 | h2 | Out |
| af43 | 100110 - 38 | 4 | High-2 | h2 | Out |
- The default forwarding class mapping is used for all DSCP name values for which there is no explicit forwarding class mapping.
The following table lists the default mapping of LSP EXP values to forwarding class and profile state for the default network QoS policy.
| LSP EXP ID | LSP EXP value | FC ID | FC name | FC label | Profile state |
|---|---|---|---|---|---|
| Default 1 |  | 0 | Best-Effort | be | Out |
| 1 | 001 - 1 | 1 | Low-2 | l2 | In |
| 2 | 010 - 2 | 2 | Assured | af | Out |
| 3 | 011 - 3 | 2 | Assured | af | In |
| 4 | 100 - 4 | 4 | High-2 | h2 | In |
| 5 | 101 - 5 | 5 | Expedited | ef | In |
| 6 | 110 - 6 | 6 | High-1 | h1 | In |
| 7 | 111 - 7 | 7 | Network Control | nc | In |
- The default forwarding class mapping is used for all LSP EXP values for which there is no explicit forwarding class mapping.
The following table lists the default mapping of dot1p values to queue and profile state for the default network QoS policy.
| Dot1p value | Queue | Profile state |
|---|---|---|
| 0 | 1 | Out |
| 1 | 2 | In |
| 2 | 3 | Out |
| 3 | 3 1 | In |
| 4 | 5 | In |
| 5 | 6 | In |
| 6 | 7 | In |
| 7 | 8 | In |
- The default queue mapping for dot1p values 2 and 3 are both queue 3.
CoS marking for self-generated traffic
The 7705 SAR is the source of some types of traffic; for example, a link state PDU for sending IS-IS topology updates or an SNMP trap sent to indicate that an event has happened. Traffic created by the 7705 SAR in this way is considered self-generated traffic (SGT). Telnet is another example of self-generated traffic, although in that application user commands initiate the sending of the Telnet traffic.
Network operators often have different QoS models throughout their networks and apply different QoS schemes to portions of the networks to better accommodate delay, jitter, and loss requirements of different applications. The class of service (DSCP or dot1p) bits of self-generated traffic can be marked on a per-application basis to match the network operator’s QoS scheme. This marking option enhances the ability of the 7705 SAR to match the various requirements of these applications.
The 7705 SAR supports marking self-generated traffic for the base routers and for virtual routers. See ‟QoS Policies” in the 7705 SAR Services Guide for information about SGT QoS as applied to virtual routers (for VPRN services).
The DSCP and dot1p values of the self-generated traffic, where applicable, are marked in accordance with the values that are configured under the sgt-qos command. In the egress direction, self-generated traffic is forwarded using the egress control queue to ensure premium treatment, unless SGT redirection is configured (see SGT redirection). PTP (IEEE 1588v2) and SAA-enabled ICMP traffic is forwarded using the CoS queue. The next-hop router uses the DSCP values to classify the traffic accordingly.
The following table lists various applications and indicates whether they have configurable DSCP or dot1p markings.
Application | Supported marking | Default DSCP/dot1p |
---|---|---|
ARP | dot1p | 7 |
IS-IS | dot1p | 7 |
BGP | DSCP | NC1 |
DHCP | DSCP | NC1 |
DNS | DSCP | AF41 |
FTP | DSCP | AF41 |
ICMP (ping) | DSCP | BE |
IGMP | DSCP | NC1 |
LDP (T-LDP) | DSCP | NC1 |
MCFW | DSCP | NC1 |
MLD | DSCP | NC1 |
NDIS | DSCP | NC1 |
NTP | DSCP | NC1 |
OSPF | DSCP | NC1 |
PIM | DSCP | NC1 |
1588 PTP | DSCP | NC1 |
RADIUS | DSCP | AF41 |
RIP | DSCP | NC1 |
RSVP | DSCP | NC1 |
SNMP (get, set, etc.) | DSCP | AF41 |
SNMP trap/log | DSCP | AF41 |
SSH (SCP) | DSCP | AF41 |
syslog | DSCP | AF41 |
TACACS+ | DSCP | AF41 |
Telnet | DSCP | AF41 |
TFTP | DSCP | AF41 |
Traceroute | DSCP | BE |
VRRP | DSCP | NC1 |
PTP in the context of SGT QoS is defined as Precision Timing Protocol and is an application in the 7705 SAR. The PTP application name is also used in areas such as event-control and logging. Precision Timing Protocol is defined in IEEE 1588-2008.
PTP in the context of IP filters is defined as Performance Transparency Protocol. IP protocols can be used as IP filter match criteria; the match is made on the 8-bit protocol field in the IP header.
SGT redirection
The 7705 SAR can be used in deployments where the uplink bandwidth capacity is considerably lower than in typical fixed or mobile backhaul applications. The 7705 SAR is optimized to operate in environments with megabits per second of uplink capacity for network operations; therefore, many of its software timers are designed to ensure the fastest possible detection of failures, without considering bandwidth limitations. In deployments with very low bandwidth constraints, the system must also be optimized so that the routers operate effectively without any interruption to mission-critical customer traffic.
In lower-bandwidth deployments, SGT can impact mission-critical user traffic such as TDM pseudowire traffic. To minimize the impact on this traffic, SGT can be redirected to a data queue rather than to the high-priority control queue on egress. All SGT applications can be redirected to a data queue, but the type of application must be considered because not all SGT is suitable to be scheduled at a lower priority. SGT applications such as FTP, TFTP, and syslog can be mapped to a lower-priority queue.
As an example, in a scenario where the uplink bandwidth is limited to a fractional E1 link with 2 x DS0 channel groups, downloading software for a new release can disrupt TDM pseudowire traffic, especially if SGT traffic is always serviced first over all other traffic flows. Having the option to map a subset of SGT to data queues will ensure that the mission-critical traffic flows effectively. For example, if FTP traffic is redirected to the best-effort forwarding queue, FTP traffic is then serviced only after all higher-priority traffic is serviced, including network control traffic and TDM pseudowire traffic. This redirection ensures the correct treatment of all traffic types matching the requirements of the network.
Redirection of SGT applications is done using the config>router>sgt-qos>application>fc-queue or config>service>vprn>sgt-qos>application>fc-queue command.
Redirection of the global ping application is not done through the sgt-qos menu hierarchy; this is configured using the fc-queue option in the ping command. See the 7705 SAR OAM and Diagnostics Guide, ‟OAM and SAA Command Reference”, for details.
SGT redirection is supported on the base router and the virtual routers on ports with Ethernet or PPP/MLPPP encapsulation.
Network queue QoS policies
Network queue policies define the queue characteristics that are used in determining the scheduling and queuing behavior for a forwarding class. Network queue policies are applied on ingress and egress network ports as well as on the ring ports and the add/drop port on the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module.
Network queue policies are identified with a unique policy name that conforms to the standard 7705 SAR alphanumeric naming conventions. The policy name is user-configured when the policy is created.
Network queue policies can be configured to use up to 16 queues (8 unicast and 8 multicast), so the number of queues in use can vary. Not all user-created policies require all 16 queues; however, the system default network queue policy (named "default") does define all 16 queues.
The queue characteristics that can be configured on a per-forwarding class basis are:
committed buffer size (CBS) as a percentage of the buffer pool
maximum buffer size (MBS) as a percentage of the buffer pool
high-priority-only buffers as a percentage of MBS
peak information rate (PIR) as a percentage of egress port bandwidth
committed information rate (CIR) as a percentage of egress port bandwidth
The following table describes the default network queue policy definition.
The system default network queue policy cannot be modified or deleted.
In the table, the value for Rate in the Definition column is the PIR value.
Forwarding class | Queue | Definition | Queue | Definition |
---|---|---|---|---|
Network-Control (nc) | 8 | Rate = 100%, CIR = 10%, MBS = 2.5%, CBS = 0.25%, High-Prio-Only = 10% | 16 | Rate = 100%, CIR = 10%, MBS = 2.5%, CBS = 0.1%, High-Prio-Only = 10% |
High-1 (h1) | 7 | Rate = 100%, CIR = 10%, MBS = 2.5%, CBS = 0.25%, High-Prio-Only = 10% | 15 | Rate = 100%, CIR = 10%, MBS = 2.5%, CBS = 0.1%, High-Prio-Only = 10% |
Expedited (ef) | 6 | Rate = 100%, CIR = 100%, MBS = 5%, CBS = 0.75%, High-Prio-Only = 10% | 14 | Rate = 100%, CIR = 100%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% |
High-2 (h2) | 5 | Rate = 100%, CIR = 100%, MBS = 5%, CBS = 0.75%, High-Prio-Only = 10% | 13 | Rate = 100%, CIR = 100%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% |
Low-1 (l1) | 4 | Rate = 100%, CIR = 25%, MBS = 2.5%, CBS = 0.25%, High-Prio-Only = 10% | 12 | Rate = 100%, CIR = 5%, MBS = 2.5%, CBS = 0.25%, High-Prio-Only = 10% |
Assured (af) | 3 | Rate = 100%, CIR = 25%, MBS = 5%, CBS = 0.75%, High-Prio-Only = 10% | 11 | Rate = 100%, CIR = 5%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% |
Low-2 (l2) | 2 | Rate = 100%, CIR = 25%, MBS = 5%, CBS = 0.25%, High-Prio-Only = 10% | 10 | Rate = 100%, CIR = 5%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% |
Best Effort (be) | 1 | Rate = 100%, CIR = 0%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% | 9 | Rate = 100%, CIR = 0%, MBS = 5%, CBS = 0.1%, High-Prio-Only = 10% |
Network and service QoS queue parameters
The following queue parameters are provisioned on network and service queues:
Queue ID
The queue ID is used to uniquely identify the queue. The queue ID is only unique within the context of the QoS policy within which the queue is defined.
Committed information rate
The CIR for a queue defines a limit for scheduling. Packets queued at service ingress queues are serviced by in-profile or out-of-profile schedulers based on the queue’s CIR and the rate at which the packets are flowing. For each packet in a service ingress queue, the CIR is checked with the current transmission rate of the queue. If the current rate is at or below the CIR threshold, the transmitted packet is internally marked in-profile. If the flow rate is above the threshold, the transmitted packet is internally marked out-of-profile.
All 7705 SAR queues support the concept of in-profile and out-of-profile. The network QoS policy applied at network egress determines how or if the profile state is marked in packets transmitted into the network core. This is done by enabling or disabling the appropriate priority marking of network egress packets within a particular forwarding class. If the profile state is marked in the packets that are sent toward the network core, then out-of-profile packets are preferentially dropped over in-profile packets at congestion points in the network.
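As a minimal sketch (illustrative Python, not actual 7705 SAR code), the in-profile/out-of-profile decision described above can be expressed as a comparison of the queue's current transmission rate against its CIR:

```python
# Hypothetical sketch: internally marking a packet in-profile or out-of-profile
# by comparing the queue's current transmission rate to its configured CIR.

def mark_profile(current_rate_kbps: float, cir_kbps: float) -> str:
    """Return the internal profile state for a packet leaving the queue."""
    # At or below the CIR threshold -> in-profile; above it -> out-of-profile.
    return "in-profile" if current_rate_kbps <= cir_kbps else "out-of-profile"

print(mark_profile(800, 1000))   # rate below CIR -> in-profile
print(mark_profile(1200, 1000))  # rate above CIR -> out-of-profile
```

If the network QoS policy at egress marks the profile state into the packet, downstream congestion points can then preferentially drop the out-of-profile packets.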
When defining the CIR for a queue, the value specified is the administrative CIR for the queue. The 7705 SAR maps a user-configured value to a hardware supported rate that it uses to determine the operational CIR for the queue. The user has control over how the administrative CIR is converted to an operational CIR if a slight adjustment is required. The interpretation of the administrative CIR is discussed in Adaptation rule.
The CIR value for a service queue is assigned to ingress and egress service queues based on service ingress QoS policies and service egress QoS policies, respectively.
The CIR value for a network queue is defined within a network queue policy specifically for the forwarding class. The queue-id parameter links the CIR values to the forwarding classes. The CIR values for the forwarding class queues are defined as a percentage of the network interface bandwidth.
Peak information rate
The PIR value defines the maximum rate at which packets are allowed to exit the queue. It does not specify the maximum rate at which packets may enter the queue; this is governed by the queue’s ability to absorb bursts and is user-configurable using its maximum burst size (MBS) value.
The PIR value is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively.
The PIR values for network queues are defined within network queue policies and are specific for each forwarding class. The PIR value for each queue for the forwarding class is defined as a percentage of the network interface bandwidth.
When defining the PIR for a queue, the value specified is the administrative PIR for the queue. The 7705 SAR maps a user-configured value to a hardware supported rate that it uses to determine the operational PIR for the queue. The user has control over how the administrative PIR is converted to an operational PIR if a slight adjustment is required. The interpretation of the administrative PIR is discussed in Adaptation rule.
Adaptation rule
The schedulers on the network processor can only operate with a finite set of rates. These rates are called the operational rates. The configured rates for PIR and CIR do not necessarily correspond to the operational rates. In order to offer maximum flexibility to the user, the adaptation-rule command can be used to choose how an operational rate is selected based on the configured PIR or CIR rate.
The max parameter causes the network processor to be programmed at an operational rate that is less than the configured PIR or CIR rate by up to 1.0%. The min parameter causes the network processor to be programmed at an operational rate that is greater than the configured PIR or CIR rate by up to 1.0%. The closest parameter causes the network processor to be programmed at an operational rate that is closest to the configured PIR or CIR rate.
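The selection logic can be sketched as follows (illustrative Python; the hardware rate set below is invented for the example, since real operational rates are hardware-specific):

```python
# Hypothetical sketch of adaptation-rule selection against a finite set of
# hardware-supported operational rates.

def operational_rate(admin_rate: float, hw_rates: list, rule: str) -> float:
    if rule == "min":
        # smallest operational rate that is >= the configured rate
        return min(r for r in hw_rates if r >= admin_rate)
    if rule == "max":
        # largest operational rate that is <= the configured rate
        return max(r for r in hw_rates if r <= admin_rate)
    # "closest": operational rate nearest to the configured rate
    return min(hw_rates, key=lambda r: abs(r - admin_rate))

rates = [900, 950, 1000, 1050, 1100]  # assumed hardware rates, kb/s
print(operational_rate(975, rates, "min"))  # 1000
print(operational_rate(975, rates, "max"))  # 950
```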
A 4-priority scheduler on the network processor of a third-generation (Gen-3) Ethernet adapter card or platform can be programmed at an operational CIR rate that differs from the configured CIR rate by more than 1.0%. The PIR rate (that is, the maximum rate for the queue) and the SAP aggregate rates (CIR and PIR) maintain an accuracy of +/- 1.0% of the configured rates.
The average difference between the configured CIR rate and the programmed (operational) CIR rate is as follows:
2.0% for frame sizes that are less than 2049 bytes
4.0% for other frame sizes
The Gen-3 network processor PIR rate is programmed to an operational PIR rate that is within 1.0% of the configured rate, which ensures that the FC/CoS queue does not exceed its fair share of the total bandwidth.
Committed burst size
The CBS parameter specifies the committed buffer space allocated for a specific queue.
The CBS is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively. The CBS for a queue is specified in kilobytes.
The CBS values for network queues are defined within network queue policies based on the forwarding class. The CBS values for the queues for the forwarding class are defined as a percentage of buffer space for the pool.
Maximum burst size
When the reserved buffers for a queue have been used, the queue contends with other queues for additional buffer resources up to the maximum burst size. The MBS parameter specifies the maximum queue depth to which a queue can grow. This parameter ensures that a traffic flow (that is, a customer or a traffic type within a customer port) that is massively or continuously oversubscribing the PIR of a queue does not consume all the available buffer resources. For high-priority forwarding class service queues, the MBS can be small because high-priority service packets are scheduled with priority over other service forwarding classes; only very small queues are needed for high-priority traffic because their contents are serviced first by the best available scheduler.
The MBS value is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively. The MBS value for a queue is specified in bytes or kilobytes.
The MBS values for network queues are defined within network queue policies based on the forwarding class. The MBS values for the queues for the forwarding class are defined as a percentage of buffer space for the pool.
High-priority-only buffers
High-priority-only buffers are defined on a queue and allow buffers to be reserved for traffic classified as high priority. When the queue depth reaches a specified level, only high-priority traffic can be enqueued. The high-priority-only reservation for a queue is defined as a percentage of the MBS value and has a default value of 10% of the MBS value.
On service ingress, the high-priority-only reservation for a queue is defined in the service ingress QoS policy. High-priority traffic is specified in the match criteria for the policy.
On service egress, the high-priority-only reservation for a queue is defined in the service egress QoS policy. Service egress queues are specified by forwarding class. High-priority traffic for a given traffic class is traffic that has been marked as in-profile either on ingress classification or based on interpretation of the QoS markings.
The high-priority-only buffers for network queues are defined within network queue policies based on the forwarding class. High-priority-only traffic for a specific traffic class is marked as in-profile either on ingress classification or based on interpretation of the QoS markings.
High and low enqueuing thresholds
The high/low priority feature allows a provider to offer a customer the ability to have some packets treated with a higher priority when buffered to the ingress queue. If the queue is configured with a high-prio-only setting (which sets the high-priority MBS threshold higher than the queue’s low-priority MBS threshold), then a portion of the ingress queue’s allowed buffers is reserved for high-priority traffic. An access ingress packet must match an ingress QoS action for the ingress forwarding plane to treat the packet as high priority (the default is low priority).
If the packet’s ingress queue is above the low-priority MBS, the packet will be discarded unless it has been classified as high priority. The priority of the packet is not retained after the packet is placed into the ingress queue. After the packet is scheduled out of the ingress queue, the packet will be considered in-profile or out-of-profile based on the dynamic rate of the queue relative to the queue’s CIR parameter.
If an ingress queue is not configured with a high-prio-only parameter (the parameter is set to 0%), the low-priority and high-priority MBS thresholds are the same. There is no difference in high-priority and low-priority packet handling. At access ingress, the priority of a packet has no effect on which packets are scheduled first. Only the first buffering decision is affected. At ingress and egress, the current dynamic rate of the queue relative to the queue’s CIR does affect the scheduling priority between queues going to the same destination (egress port).
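The first buffering decision described above can be sketched as follows (illustrative Python under assumed values; buffer counts and percentages are invented for the example):

```python
# Hypothetical sketch of the enqueue decision with a high-prio-only reservation:
# low-priority packets are dropped once the queue passes its low-priority MBS
# threshold, while high-priority packets can use the full MBS.

def enqueue_allowed(queue_depth: int, mbs: int, high_prio_only_pct: float,
                    high_priority: bool) -> bool:
    low_prio_threshold = mbs * (1 - high_prio_only_pct / 100)
    threshold = mbs if high_priority else low_prio_threshold
    return queue_depth < threshold

# MBS = 1000 buffers, high-prio-only = 10% -> low-priority threshold = 900
print(enqueue_allowed(950, 1000, 10, high_priority=True))   # True
print(enqueue_allowed(950, 1000, 10, high_priority=False))  # False
```

With high-prio-only set to 0%, both thresholds collapse to the MBS and the decision no longer depends on packet priority, matching the behavior described above.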
From highest to lowest, the strict operating priority for queues is:
expedited queues within the CIR (conform)
best effort queues within the CIR (conform)
expedited queues above the CIR (exceed)
best effort queues above the CIR (exceed)
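The four-level strict order above can be sketched as a simple priority function (illustrative Python; queue names are invented for the example):

```python
# Illustrative sketch of the strict scheduling order: expedited/best-effort
# crossed with conforming to (within) or exceeding the CIR.

def scheduling_priority(expedited: bool, within_cir: bool) -> int:
    """Lower number = served first."""
    order = {
        (True, True): 1,    # expedited, within CIR
        (False, True): 2,   # best effort, within CIR
        (True, False): 3,   # expedited, above CIR
        (False, False): 4,  # best effort, above CIR
    }
    return order[(expedited, within_cir)]

# (name, expedited?, within CIR?)
queues = [("ef", True, False), ("be", False, True), ("nc", True, True)]
served = sorted(queues, key=lambda q: scheduling_priority(q[1], q[2]))
print([q[0] for q in served])  # ['nc', 'be', 'ef']
```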
For access ingress, the CIR controls both dynamic scheduling priority and the marking threshold. At network ingress, the queue’s CIR affects the scheduling priority but does not provide a profile marking function (as the network ingress policy trusts the received marking of the packet based on the network QoS policy).
At egress, the profile of a packet is only important for egress queue buffering decisions and egress marking decisions, not for scheduling priority. The egress queue’s CIR determines the dynamic scheduling priority, but does not affect the packet’s ingress determined profile.
Queue counters
The 7705 SAR maintains extensive counters for queues within the system to allow granular debugging and planning; information about queue usage and about which scheduler services a queue or packet is extremely useful in network planning activities. The following separate billing and accounting counters are maintained for each queue:
counters for packets and octets accepted into the queue
counters for packets and octets rejected at the queue
counters for packets and octets transmitted in-profile
counters for packets and octets transmitted out-of-profile
Queue type
The 7705 SAR allows two kinds of queue types: Expedited queues and Best Effort queues. Users can configure the queue type manually using the expedite and best-effort keywords, or automatically using the auto-expedite keyword. The queue type is specified as part of the queue command (for example, config>qos>sap-ingress>queue queue-id queue-type create).
With expedite, the queue is treated in an expedited manner, independent of the forwarding classes mapped to the queue.
With best-effort, the queue is treated in a non-expedited (best-effort) manner, independent of the forwarding classes mapped to the queue.
With auto-expedite, the queue type is automatically determined by the forwarding classes that are assigned to the queue. The queues that are set as auto-expedite are still either Expedited or Best Effort queues, but whether a queue is Expedited or Best Effort is determined by its assigned forwarding classes. In the default configuration, four of the eight forwarding classes (NC, H1, EF, and H2) result in an Expedited queue type, while the other four forwarding classes (L1, AF, L2, and BE) result in a Best Effort queue type.
With auto-expedite, assigning one or more of the L1, AF, L2, or BE forwarding classes to an otherwise Expedited queue results in a Best Effort queue type. See Forwarding classes for more information about default configuration values.
The expedite, best-effort, and auto-expedite queue types are mutually exclusive. Each defines the method that the system uses to service the queue from a hardware perspective.
Queue mode
The 7705 SAR supports two queue modes: priority mode and profile mode. Users can configure the queue mode using the priority-mode or profile-mode keywords when issuing the config>qos>sap-ingress>queue queue-id queue-mode create command; the default is priority-mode. The queue mode defines how an ingress access packet is categorized as in-profile or out-of-profile.
With priority mode, an access packet’s in-profile or out-of-profile state is based on the dynamic rate of the ingress queue before being forwarded to the fabric. When the queue rate is lower than or equal to the configured CIR, the packet is considered in-profile. When the queue rate is higher than the CIR, the packet is considered out-of-profile. The profile state is determined when the packet is scheduled out of the queue, not when the packet is buffered into the queue.
With profile mode, the in-profile or out-of-profile state for packets assigned to a particular forwarding class is explicitly configured using the config>qos>sap-ingress>fc>profile command. This configuration places a forwarding class in a color-aware profile mode. Packets assigned to this forwarding class profile are only marked based on this profile marking if the forwarding class is mapped to a queue configured for profile-mode. If the forwarding class is mapped to a queue configured for priority-mode, the forwarding class profile setting is ignored and the packet’s state is defined as in-profile or out-of-profile based on the dynamic rate of the ingress queue.
When the profile in command is executed on a forwarding class that is mapped to a queue operating in profile mode, all packets associated with the class are handled as in-profile. When the profile out command is executed on a forwarding class that is mapped to a queue operating in profile mode, all packets associated with the class are handled as out-of-profile.
When the no profile command is executed on a forwarding class that is mapped to a queue operating in profile mode, the data packets using the forwarding class are marked as in-profile or out-of-profile based on the dynamic rate of the ingress queue relative to its CIR.
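The classification rules above can be sketched as follows (illustrative Python, not actual 7705 SAR code):

```python
# Hypothetical sketch of profile-mode classification: an explicit forwarding
# class profile ("in"/"out") overrides the dynamic CIR-based state, while
# "no profile" (None here) falls back to the queue rate versus its CIR.

def packet_profile(fc_profile, queue_rate: float, cir: float) -> str:
    if fc_profile == "in":
        return "in-profile"      # profile in: always in-profile
    if fc_profile == "out":
        return "out-of-profile"  # profile out: always out-of-profile
    # no profile configured: state follows the dynamic queue rate
    return "in-profile" if queue_rate <= cir else "out-of-profile"

print(packet_profile("out", 100, 1000))  # out-of-profile despite a low rate
print(packet_profile(None, 100, 1000))   # in-profile (rate below CIR)
```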
Color-aware profiling adds the ability to selectively treat packets received on a SAP as in-profile (green) or out-of-profile (yellow) regardless of the queue forwarding rate. For example, a network operator can color a packet out-of-profile with the intention of preserving in-profile bandwidth for higher-priority packets.
A queue operating in profile mode can support in-profile, out-of-profile, and non-profiled packets simultaneously because multiple forwarding classes with different forwarding class profiles can be assigned to a single queue.
All non-profiled and profiled packets are forwarded through the same ingress access queue to prevent out-of-sequence forwarding. Profiled packets that are in-profile are counted against the total number of packets flowing through the queue that are marked in-profile. This reduces the amount of CIR available to non-profiled packets, causing fewer packets to be marked in-profile. Profiled packets that are out-of-profile are not counted against the total number of packets flowing through the queue that are marked in-profile. This ensures that the number of non-profiled packets marked in-profile is not affected by the profiled out-of-profile packet rate.
A SAP ingress queue operating in profile mode is classified as high-priority or low-priority based on the configuration of the forwarding class profile rather than on the high-priority or low-priority configuration specified for DSCP or dot1p. All non-profile packets flowing through the queue are considered high priority. Profiled in-profile packets are also handled as high priority, while profiled out-of-profile packets are handled as low priority.
For SAP ingress queues in profile mode, statistics are collected for color in (for a forwarding class configured for profile in), color out (for a forwarding class configured for profile out), and uncolor (for a forwarding class configured for no profile).
Rate limiting
The 7705 SAR supports egress-rate limiting and ingress-rate limiting on Ethernet ports.
The egress rate is set at the port level in the config>port>ethernet context.
Egress-rate limiting sets a limit on the amount of traffic that can leave the port to control the total bandwidth on the interface. If the egress-rate limit is reached, the port applies backpressure on the queues, which stops the flow of traffic until the queue buffers are emptied. This feature is useful in scenarios where there is a fixed amount of bandwidth; for example, a mobile operator who has leased a fixed amount of bandwidth from the service provider.
The ingress-rate command configures a policing action to rate-limit the ingress traffic. Ingress-rate enforcement uses dedicated hardware for rate limiting; however, software configuration is required at the port level (ingress-rate limiter) to ensure that the network processor, adapter card, or port never receives more traffic than it is optimized for.
The configured ingress rate ensures that the network processor does not receive traffic greater than this configured value on a per-port basis. When the ingress-rate value is reached, all subsequent frames are dropped. The ingress-rate limiter drops excess traffic without determining whether the traffic has a higher or lower priority.
For more information about egress and ingress rate limiting, see the egress-rate and ingress-rate command descriptions in the 7705 SAR Interface Configuration Guide.
Slope policies (WRED and RED)
As part of 7705 SAR queue management, policies for WRED or RED queue management (also known as congestion management or buffer management) to manage the queue depths can be enabled at both access and network ports and associated with both ingress and egress queues. WRED policies can also be enabled on bridged domain (ring) ports.
Without WRED and RED, when a queue reaches its maximum fill size, the queue discards any new packets arriving at the queue (tail drop).
WRED and RED policies prevent a queue from reaching its maximum size by starting random discards when the queue reaches a user-configured threshold value. This avoids the impact of discarding all new incoming packets at once. By starting random discards at this threshold, end-system customer devices can adjust to the available bandwidth.
As an example, TCP has built-in mechanisms to adjust for packet drops. TCP-based flows lower the transmission rate when some of the packets fail to reach the far end. This mode of operation provides a much better way of dealing with congestion than dropping all the packets after the whole queue space is depleted.
The WRED and RED curve algorithms are based on two user-configurable thresholds (minThreshold and maxThreshold) and a discard probability factor (maxDProbability) (see WRED for high-priority and low-priority traffic in the same queue). The minThreshold (minT) indicates the level where discards start and the discard probability is zero. The maxThreshold (maxT) indicates the level where the discard probability reaches its maximum value; beyond the maxT level, all newly arriving packets are discarded. The steepness of the slope between minT and maxT is derived from the maxDProbability (maxDP); that is, the maxDP indicates the random discard probability at the maxT level.
The main difference between WRED and RED is that with WRED, there can be more than one curve managing the fill rate of the same queue.
WRED slope curves can run against high-priority and low-priority traffic separately for ingress and egress queues. This allows the flexibility to treat low-priority and high-priority traffic differently. WRED slope policies are used to configure the minT, maxT and maxDP values, instead of configuring these thresholds against every queue. It is the slope policies that are then applied to individual queues. Therefore, WRED slope policies affect how and when the high-priority and low-priority traffic is discarded within the same queue.
Referring to the following figure, one WRED slope curve can manage discards on high-priority traffic and another WRED slope curve can manage discards on low-priority traffic. The minT, maxT, and maxDP values configured for high-priority and low-priority traffic can be different and can start discarding traffic at different thresholds. The start-avr, max-avr, and max-prob commands are used to set the minThreshold, maxThreshold, and maxDProbability values, respectively.
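The shape of one such slope can be sketched as follows (illustrative Python; the threshold and probability values are invented for the example):

```python
# Illustrative WRED slope: the discard probability rises linearly from 0 at
# the minimum threshold (start-avg) to maxDP at the maximum threshold
# (max-avg); above max-avg, every newly arriving packet is discarded.

def discard_probability(avg_fill_pct: float, min_t: float, max_t: float,
                        max_dp: float) -> float:
    if avg_fill_pct <= min_t:
        return 0.0               # below minT: no random discards
    if avg_fill_pct > max_t:
        return 1.0               # beyond maxT: all new packets discarded
    # linear interpolation on the slope between minT and maxT
    return max_dp * (avg_fill_pct - min_t) / (max_t - min_t)

# minT = 50%, maxT = 90%, maxDP = 0.8
print(discard_probability(70, 50, 90, 0.8))  # halfway up the slope -> 0.4
print(discard_probability(95, 50, 90, 0.8))  # beyond maxT -> 1.0
```

A separate call with different minT/maxT/maxDP values models the second curve applied to the other priority class in the same queue.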
The formula to calculate the average queue size is:
average queue size = (previous average ✕ (1 – 1/2^TAF)) + (current queue size ✕ 1/2^TAF)
The Time Average Factor (TAF) is the exponential weight factor used in calculating the average queue size. The time_average_factor parameter is not user-configurable and is set to a system-wide default value of 3. By locking TAF to a static value of 3, the average queue size closely tracks the current queue size so that WRED can respond quickly to long queues.
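The averaging formula above can be sketched directly (illustrative Python; the queue-size sequence is invented for the example):

```python
# Sketch of the average queue size calculation with the fixed TAF of 3,
# i.e. a weight of 1/2^3 = 0.125 on the current queue size.

TAF = 3  # system-wide default, not user-configurable

def average_queue_size(prev_avg: float, current_size: float) -> float:
    w = 1 / 2 ** TAF
    return prev_avg * (1 - w) + current_size * w

avg = 0.0
for size in [100, 100, 100, 100]:  # queue holds steady at 100 buffers
    avg = average_queue_size(avg, size)
print(round(avg, 1))  # 41.4 -- the average tracks the current size quickly
```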
WRED MinThreshold and MaxThreshold computation
CBS is configured in kilobytes through the CLI; MBS is configured in bytes or kilobytes. These configured values are converted to the corresponding number of buffers. The conversion factor is a non-user-configurable, fixed default value that is equal to the system-defined maximum frame size, ensuring that even the largest frames can be hosted in the allocated buffer pools. This type of WRED is called buffer-based WRED.
User-defined minThreshold and maxThreshold values, each defined as a percentage, are also converted to the number of buffers. The minT is converted to the system-minThreshold, and the maxT is converted to the system-maxThreshold.
The system-minT must be the absolute closest value to the minT that satisfies the formula below (2^x means 2 to the exponent x):
system-maxThreshold – system-minThreshold = 2^x
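One way to picture this adjustment (an illustrative Python sketch, assuming the converted maxThreshold is held fixed and the minThreshold is moved; the buffer counts are invented for the example):

```python
# Hypothetical sketch: pick the system-minThreshold closest to the converted
# minThreshold such that system-maxThreshold - system-minThreshold = 2^x.

def system_min_threshold(min_buffers: int, max_buffers: int) -> int:
    candidates = []
    x = 0
    while (diff := 2 ** x) <= max_buffers:
        candidates.append(max_buffers - diff)
        x += 1
    # closest candidate to the configured (converted) minThreshold
    return min(candidates, key=lambda c: abs(c - min_buffers))

# maxT converts to 800 buffers, minT converts to 100 buffers:
print(system_min_threshold(100, 800))  # 288 (800 - 2^9 = 800 - 512)
```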
WRED on bridging domain (ring) queues
The bridging domain queues support the following DP (discard probability) values: 0% to 10%, 25%, 50%, 75%, and 100%. User-configured values are rounded down to match these DP values.
For example, configuring a DP to be 74% means that the actual value used is 50%.
Payload-based WRED
The third-generation Ethernet adapter cards and platforms use payload-based WRED rather than buffer-based WRED (see WRED MinThreshold and MaxThreshold computation). Payload-based WRED does not count the unused overhead space (empty space in the buffer) when making discard decisions, whereas buffer-based WRED counts the unused overhead space. Payload-based WRED is also referred to as byte-based WRED.
When a queue on an adapter card that uses payload-based WRED reaches its maximum fill (that is, the total byte count exceeds the configured maximum threshold), tail drop begins and operates in the same way as it does on any other adapter card or platform.
With payload-based WRED, the discard decision is based on the number of bytes in the queue instead of the number of buffers in the queue. For example, to accumulate 512 bytes of payload in a queue will take four buffers if the frame size is 128 bytes, but will take one buffer if the frame size is 512 bytes or more. Basing discards on bytes rather than buffers improves the efficient use of queues. In either case, byte- or buffer-based WRED, random discards begin at the minimum threshold (minT) point.
For example, assume a queue has MBS set to 512 kB (converts to 1000 buffers), minT (start-avg) is set to 10% (100 buffers), and maxT (max-avg) is set to 80% (800 buffers). The following table shows when discards and tail drop start when payload-based WRED is used.
Frame size | Discards start | Tail drop start
---|---|---
128 bytes | 400 buffers in the queue (100 x 4) | 3200 buffers in the queue (800 x 4)
512 bytes | 100 buffers in the queue | 800 buffers in the queue
1024 bytes | 100 buffers in the queue | 800 buffers in the queue
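The table values follow from simple arithmetic. A minimal sketch, assuming a 512-byte buffer size (implied by the 512 kB to 1000 buffers conversion above), reproduces them: a threshold of N buffers corresponds to N x 512 bytes of payload, which is then converted back to queue occupancy for a given frame size.

```python
import math

BUFFER_SIZE = 512          # bytes per buffer (assumption based on 512 kB -> 1000 buffers)
MIN_T, MAX_T = 100, 800    # thresholds in buffers (10% and 80% of 1000)

def buffers_at_threshold(frame_size, threshold_buffers):
    """With payload-based WRED, a threshold of N buffers corresponds to
    N * BUFFER_SIZE bytes of payload; convert that payload amount to the
    number of buffers actually occupied at the given frame size."""
    payload_bytes = threshold_buffers * BUFFER_SIZE
    frames = payload_bytes / frame_size
    buffers_per_frame = math.ceil(frame_size / BUFFER_SIZE)
    return math.ceil(frames * buffers_per_frame)

for size in (128, 512, 1024):
    print(size, buffers_at_threshold(size, MIN_T), buffers_at_threshold(size, MAX_T))
# 128 400 3200
# 512 100 800
# 1024 100 800
```

Note that 1024-byte frames occupy two buffers each, but only half as many frames fit in the same payload budget, so the buffer counts match those for 512-byte frames.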
For tail drop, if the high-priority-only threshold is set to 10%, the top 10% of the queue is reserved for high-priority traffic:
for frame sizes of 512 bytes or greater, tail drop of low-priority traffic starts after 900 buffers (90% of the 1000-buffer MBS) are in use
If the previous example instead uses an adapter card or platform with buffer-based WRED (that is, anything other than a third-generation adapter card or platform), the WRED discard and tail drop points are the same for 128-byte and 512-byte frames (100 buffers and 800 buffers, respectively), because each frame fills one full buffer (512 bytes).
Because tail drop (which is buffer-based) and WRED (which is payload-based) operate differently, using tail drop and WRED in the same queue is not recommended.
ATM traffic descriptor profiles
Traffic descriptor profiles capture the cell arrival pattern for resource allocation. Source traffic descriptors for an ATM connection include at least one of the following:
sustained information rate (SIR)
peak information rate (PIR)
minimum information rate (MIR)
maximum burst size (MBS)
QoS traffic descriptor profiles are applied on ATM VLL (Apipe) SAPs.
Fabric profiles
Fabric profiles allow access and network ingress-to-fabric shapers to operate at user-configurable rates, controlling the switching throughput from an adapter card toward the fabric.
Two fabric profile modes are supported: per-destination mode and aggregate mode. Both modes offer shaping toward the fabric from an adapter card, but per-destination shapers offer the maximum flexibility by precisely controlling the amount of traffic to each destination card at a user-defined rate.
For the 7705 SAR-8 Shelf V2 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable ingress shaping to fabric (access and network) for details. For information about fabric shaping on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, and 7705 SAR-Wx, see Fabric shaping on the fixed platforms (access and network).
Shaper policies
Shaper policies define dual-rate shaper parameters that control access or network traffic by providing tier-3 aggregate shaping to:
shaped and unshaped SAP traffic for access ingress flows
shaped and unshaped SAP traffic for access egress flows
shaped and unshaped VLAN traffic for network egress flows
See Per-SAP aggregate shapers (H-QoS) on Gen-2 hardware and Per-VLAN network egress shapers for details on per-SAP and per-VLAN shapers.
QoS policy entities
Services are configured with default QoS policies. Additional policies must be explicitly created and associated. There is one default service ingress QoS policy, one default service egress QoS policy, and one default network QoS policy. Only one ingress QoS policy and one egress QoS policy can be applied to a SAP or network port.
When a user creates a new QoS policy, default values are provided for most parameters with the exception of the policy ID and queue ID values, descriptions, and the default action queue assignment. Each policy has a scope, default action, a description, and at least one queue. The queue is associated with a forwarding class.
All QoS policy parameters can be configured in the CLI. QoS policies can be applied to the following service types:
Epipe – both ingress and egress policies are supported on an Epipe SAP
Apipe – both ingress and egress policies are supported on an Apipe SAP
Cpipe – only ingress policies are supported on a Cpipe SAP
Fpipe – both ingress and egress policies are supported on an Fpipe SAP
Hpipe – both ingress and egress policies are supported on an Hpipe SAP
Ipipe – both ingress and egress policies are supported on an Ipipe SAP
QoS policies can be applied to the following network entities:
network ingress interface
network egress interface
Default QoS policies treat all traffic with equal priority and allow an equal chance of transmission (Best Effort forwarding class) and an equal chance of being dropped during periods of congestion. QoS prioritizes traffic according to the forwarding class and uses congestion management to control access ingress, access egress, and network traffic with queuing according to priority.
Configuration notes
The following guidelines and restrictions apply to the implementation of QoS policies:
Creating additional QoS policies is optional.
Default policies are created for service ingress, service egress, network, network-queue, and slope policies.
Associating a service with a QoS policy other than the default policy is optional.
A network queue, service egress, or service ingress QoS policy must consist of at least one queue. Queues define the forwarding class, CIR, and PIR associated with the queue.