Network queue policies are applied on egress to network ports and channels and on ingress to MDAs. The policies define the forwarding class queue characteristics for these entities.
Service ingress, service egress, and network QoS policies are defined with a scope of either template or
exclusive. Template policies can be applied to multiple SAPs or IP interfaces whereas exclusive policies can only be applied to a single entity.
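To make the scope rule concrete, the following minimal Python sketch models it; the QosPolicy class and its attach method are illustrative placeholders, not part of the router CLI or management API.

```python
# Hypothetical sketch of template vs. exclusive policy scope (not a router API).
class QosPolicy:
    def __init__(self, policy_id, scope):
        assert scope in ("template", "exclusive")
        self.policy_id = policy_id
        self.scope = scope
        self.attached_to = set()   # SAPs or IP interfaces using this policy

    def attach(self, entity):
        # An exclusive policy may be applied to a single entity only.
        if self.scope == "exclusive" and self.attached_to and entity not in self.attached_to:
            raise ValueError(f"policy {self.policy_id} is exclusive and already in use")
        self.attached_to.add(entity)

template = QosPolicy(100, "template")
template.attach("sap 1/1/1:100")
template.attach("sap 1/1/2:200")      # allowed: template scope
exclusive = QosPolicy(200, "exclusive")
exclusive.attach("sap 1/1/3:300")
# exclusive.attach("sap 1/1/4:400")   # would raise: exclusive scope
```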
For network ingress, Table 4 and
Table 5 list the default mapping of DSCP name and LSP EXP values to forwarding class and profile state for the default network QoS policy.
Network queue policies define the network forwarding class queue characteristics. Network queue policies are applied on egress to core network ports and channels and on ingress to MDAs. Network queue policies can be configured to use as many queues as needed; the number of queues can vary, and not every policy needs to use eight queues as the default network queue policy does.
The system default network queue policy is named default and cannot be edited or deleted.
Table 5 describes the default network queue policy definition.
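As a rough illustration of a network queue policy's shape, the Python dictionaries below model a policy as a set of queues plus a forwarding-class-to-queue mapping; the queue numbering and the two-queue custom example are assumptions for illustration only, not the contents of Table 5.

```python
# Illustrative model of a network queue policy: queues plus an FC -> queue map.
# The number of queues may vary; the default-style policy below uses one queue
# per forwarding class (eight queues), while a custom policy might use fewer.
default_style_policy = {
    "queues": {q: {"cbs_percent": None, "mbs_percent": None} for q in range(1, 9)},
    "fc_to_queue": {"be": 1, "l2": 2, "af": 3, "l1": 4, "h2": 5, "ef": 6, "h1": 7, "nc": 8},
}

# A minimal two-queue policy (assumed mapping, for illustration): expedited
# forwarding classes share queue 2, all other classes share queue 1.
small_policy = {
    "queues": {1: {}, 2: {}},
    "fc_to_queue": {fc: (2 if fc in ("nc", "h1", "ef", "h2") else 1)
                    for fc in ("be", "l2", "af", "l1", "h2", "ef", "h1", "nc")},
}
```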
The router 20 Gbps Input/Output Module (IOM) uses a rate step value to define the granularity for both the CIR and PIR rates. The adaptation rule controls the method the system uses to choose the rate step based on the administrative rates defined by the rate command. The supported CIR and PIR value ranges and increments are summarized in Table 6.
The MDA hardware rate-step values are listed in Table 7 for all MDAs (except deep channel MDAs).
To illustrate how the adaptation rule constraints minimum, maximum, and closest are evaluated in determining the operational CIR or PIR for the router 20 Gbps IOM, assume a queue whose administrative CIR and PIR values are 401 Mbps and 403 Mbps, respectively. According to Table 6, because the PIR value takes precedence and falls in the range of 0 to 635 Mbps, the hardware rate step of 5 Mbps is used.
If the adaptation rule is minimum, the operational CIR and PIR values will both be 405 Mbps, as that is the lowest native hardware rate greater than or equal to the administrative CIR and PIR values.
If the adaptation rule is maximum, the operational CIR and PIR values will both be 400 Mbps, as that is the highest native hardware rate that does not exceed the administrative values.
If the adaptation rule is closest, the operational CIR and PIR values will be 400 Mbps and 405 Mbps, respectively, as those are the multiples of the 5 Mbps rate step closest to the administrative values.
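The arithmetic of this example can be reproduced with a short sketch; the operational_rate helper below is illustrative only, using the 5 Mbps rate step and the minimum/maximum/closest semantics described above.

```python
# Sketch of adaptation-rule rounding to a hardware rate step (values from the
# example above: 5 Mbps step, administrative CIR 401 Mbps, PIR 403 Mbps).
import math

def operational_rate(admin_mbps, step_mbps, rule):
    if rule == "min":      # lowest hardware rate >= administrative rate
        return math.ceil(admin_mbps / step_mbps) * step_mbps
    if rule == "max":      # highest hardware rate <= administrative rate
        return math.floor(admin_mbps / step_mbps) * step_mbps
    if rule == "closest":  # hardware rate nearest the administrative rate
        return round(admin_mbps / step_mbps) * step_mbps
    raise ValueError(rule)

for rule in ("min", "max", "closest"):
    cir = operational_rate(401, 5, rule)
    pir = operational_rate(403, 5, rule)
    print(f"{rule:8s} CIR -> {cir} Mbps, PIR -> {pir} Mbps")
# min -> 405/405, max -> 400/400, closest -> 400/405, matching the text above
```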
The expedite, best-effort, and auto-expedite queue types are mutually exclusive. Each defines the method that the system uses to service the queue from a hardware perspective. While parental virtual schedulers can be defined for the queue, they only enforce how the queue interacts for bandwidth with other queues associated with the same scheduler hierarchy. An internal mechanism is also needed to provide access rules when the queue is vying for bandwidth with queues in other virtual schedulers.
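A hedged sketch of how an auto-expedite queue could resolve to a hardware service class is shown below, assuming the queue is treated as expedited only when every forwarding class mapped to it is one of the expedited classes; the function is illustrative, not the platform's internal algorithm.

```python
# Hedged sketch of auto-expedite resolution (assumed interpretation, not an API):
# the queue is serviced as expedited only when every forwarding class mapped to
# it belongs to the expedited set; otherwise it is serviced as best-effort.
EXPEDITED_FCS = {"nc", "h1", "ef", "h2"}

def effective_queue_type(queue_type, mapped_fcs):
    if queue_type in ("expedite", "best-effort"):
        return queue_type                       # explicitly configured type wins
    if queue_type == "auto-expedite":
        return "expedite" if set(mapped_fcs) <= EXPEDITED_FCS else "best-effort"
    raise ValueError(queue_type)

print(effective_queue_type("auto-expedite", ["ef", "nc"]))   # expedite
print(effective_queue_type("auto-expedite", ["ef", "af"]))   # best-effort
```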
The lsp-exp command is supported in the sap-ingress QoS policy. This option can only be applied to Ethernet Layer 2 SAPs.
Figure 4 shows ingress service 1 using forwarding classes AF and L1, which are overridden to L1 at the network egress, and ingress service 2 using forwarding classes L1, AF, and L2, which are overridden to AF at the network egress.
For network ingress, a buffer pool is created for the MDA and is used for all network ingress queues for ports on the MDA.
Default buffer pools exist (logically) at the port and MDA levels. Each physical port has two pool objects associated with it:
Node-level pools are used by ingress network queues and bundle access queues. A single ingress network pool is created at the node level for ingress network queues.
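The pool sharing described above can be pictured with a small hypothetical model in which all network ingress queues on an MDA's ports draw from a single MDA-level pool; the Mda class below is purely illustrative.

```python
# Minimal illustration (hypothetical objects, not a router API): a single
# network-ingress buffer pool per MDA, shared by the network ingress queues
# of every port on that MDA.
class Mda:
    def __init__(self, name):
        self.name = name
        self.network_ingress_pool = {"owner": name, "queues": []}

    def add_network_ingress_queue(self, port, queue_id):
        # All ports on the MDA draw from the same MDA-level pool.
        self.network_ingress_pool["queues"].append((port, queue_id))

mda = Mda("mda 1/1")
mda.add_network_ingress_queue("port 1/1/1", 1)
mda.add_network_ingress_queue("port 1/1/2", 1)
print(len(mda.network_ingress_pool["queues"]))  # 2 queues share one pool
```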
SBAUn = Shared buffer average utilization for event n
SBAUn-1 = Shared buffer average utilization for event (n-1)
Table 17 shows the effect that the allowed values of TAF have on the relative weighting of the instantaneous SBU and the previous SBAU (SBAUn-1) when calculating the current SBAU (SBAUn).
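The relationship between TAF, the instantaneous SBU, and the previous SBAU reads as an exponentially weighted average; the sketch below assumes the new sample is weighted 1/2^TAF and the previous average (2^TAF - 1)/2^TAF, which is an assumption based on this description rather than a statement of the exact platform formula.

```python
# Hedged sketch of a TAF-weighted running average (assumed weighting:
# instantaneous SBU weighted 1/2**TAF, previous SBAU weighted (2**TAF - 1)/2**TAF).
def next_sbau(sbu_now, sbau_prev, taf):
    w_new = 1.0 / (2 ** taf)               # weight of the instantaneous utilization
    return sbu_now * w_new + sbau_prev * (1.0 - w_new)

sbau = 0.0
for sbu in (0.10, 0.50, 0.90, 0.90):        # sample instantaneous utilizations
    sbau = next_sbau(sbu, sbau, taf=3)      # TAF = 3 -> new sample weighted 1/8
    print(round(sbau, 4))
# A larger TAF weights SBAU(n-1) more heavily, so the average reacts more slowly.
```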
The slope policy ID default is reserved for the default slope policy. The default policy cannot be deleted or changed. The default slope policy is implicitly applied to all access buffer pools that do not have another slope policy explicitly assigned.
Table 18 lists the default values for the default slope policy.
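Slope policies describe WRED slopes applied against the pool's average utilization; the generic sketch below uses start-avg, max-avg, and max-prob style parameters with made-up values purely for illustration, not the defaults from Table 18.

```python
# Generic WRED slope sketch (illustrative parameter values, not Table 18 defaults):
# below start_avg nothing is dropped, between start_avg and max_avg the drop
# probability ramps linearly up to max_prob, and at or above max_avg all packets
# subject to this slope are dropped.
def wred_drop_probability(avg_util, start_avg, max_avg, max_prob):
    if avg_util < start_avg:
        return 0.0
    if avg_util >= max_avg:
        return 1.0
    return max_prob * (avg_util - start_avg) / (max_avg - start_avg)

for util in (0.30, 0.60, 0.80, 0.95):
    print(util, round(wred_drop_probability(util, 0.50, 0.90, 0.80), 3))
```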
Figure 6 depicts how child queues and schedulers interact with their parent scheduler to receive bandwidth. The scheduler distributes bandwidth to the children by first using each child’s CIR according to the CIR-level parameter (CIR L8 through CIR L1 weighted loops). The weighting at each CIR-Level loop is defined by the CIR weight parameter for each child. The scheduler then distributes any remaining bandwidth to the children up to each child’s rate parameter according to the Level parameter (L8 through L1 weighted loops). The weighting at each level loop is defined by the weight parameter for each child.
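The two-pass distribution described above can be sketched as follows; this is a simplified model (illustrative field names, water-filling within each level), not the platform scheduler implementation.

```python
# Simplified sketch of the parent scheduler's two-pass distribution: pass 1
# serves each child's CIR by strict cir-level (8 -> 1), sharing by cir-weight
# within a level; pass 2 serves the remainder up to each child's rate by strict
# level (8 -> 1), sharing by weight within a level.
def distribute(children, available):
    given = {c["name"]: 0.0 for c in children}

    def run_pass(level_key, weight_key, cap_key, available):
        for lvl in range(8, 0, -1):
            group = [c for c in children if c[level_key] == lvl]
            # Water-fill within the level until the level is satisfied or the
            # parent has no bandwidth left to offer.
            while available > 1e-9:
                active = [c for c in group if given[c["name"]] < c[cap_key] - 1e-9]
                if not active:
                    break
                total_w = sum(c[weight_key] for c in active)
                progress = 0.0
                for c in active:
                    share = available * c[weight_key] / total_w
                    take = min(share, c[cap_key] - given[c["name"]])
                    given[c["name"]] += take
                    progress += take
                available -= progress
                if progress < 1e-9:
                    break
        return available

    available = run_pass("cir_level", "cir_weight", "cir", available)
    run_pass("level", "weight", "rate", available)
    return given

children = [
    {"name": "q1", "cir": 10, "rate": 50, "cir_level": 8, "cir_weight": 1, "level": 3, "weight": 2},
    {"name": "q2", "cir": 20, "rate": 60, "cir_level": 6, "cir_weight": 1, "level": 3, "weight": 1},
]
# CIRs (10 and 20) are served first by cir-level; the remaining 30 is split 2:1
# at level 3, giving {'q1': 30.0, 'q2': 30.0}.
print(distribute(children, 60))
```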
Note that, by using the default QoS profile, all ingress traffic is treated as best effort (be), mapped to FC be and to the low-priority scheduler. For an egress SAP using the default QoS profile, all egress traffic uses the same queue.
A scheduler policy can be applied either on a SAP (Figure 8) or on a multi-service customer site, which is a group of SAPs with a common origination/termination point (Figure 9). Whenever a scheduler policy is applied, the individual schedulers comprising the policy are created on the object. When the object is an individual SAP, only queues created on that SAP can use the schedulers created by the policy association. When the object is a multi-service customer site, the schedulers are available to any SAPs associated with the site (see also Scheduler Policies Applied to SAPs).
Refer to the Subscriber Services Overview section of the Services Guide for information about subscriber services, service entities, configuration, and implementation.
Once a site is created, it must be assigned to a chassis slot or a port (except on the 7750 SR-1 model, where the slot is automatically set to 1). This allows the system to allocate the resources necessary to create the virtual schedulers defined in the ingress and egress scheduler policies. It also acts as verification that each SAP assigned to the site exists within the context of the customer ID and that the SAP was created on the correct slot, port, or channel. The specified slot or port must already be pre-provisioned (configured) on the system.
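The verification step can be pictured with a small hypothetical check: each SAP assigned to the site must belong to the same customer ID and sit on the site's assigned slot or port. The function and the string-prefix port check below are illustrative only, not a router API.

```python
# Hypothetical sketch of the verification described above: a multi-service site
# is tied to a slot (or port), and each SAP assigned to the site must belong to
# the same customer and be created on that slot/port.
def validate_sap_for_site(site, sap):
    if sap["customer_id"] != site["customer_id"]:
        raise ValueError("SAP does not belong to the site's customer")
    if not sap["port"].startswith(site["assignment"]):
        raise ValueError("SAP is not on the site's assigned slot/port")
    return True

site = {"name": "site-1", "customer_id": 1, "assignment": "1/"}        # assigned to slot 1
validate_sap_for_site(site, {"customer_id": 1, "port": "1/1/1:100"})   # ok
# validate_sap_for_site(site, {"customer_id": 1, "port": "2/1/1:100"}) # would raise
```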
Note that Table 22 presents the default definitions for the forwarding classes. The forwarding class behavior, in terms of ingress marking interpretation and egress marking, can be changed by a network QoS policy (see Network QoS Policies). All forwarding class queues support the concept of in-profile and out-of-profile.
The high-priority forwarding classes are Network Control (nc), Expedited (ef), High 1 (h1), and High 2 (h2). High-priority forwarding classes are always serviced at congestion points over other forwarding classes; this behavior is determined by the router queue scheduling algorithm (Virtual Hierarchical Scheduling).
The assured forwarding classes are Assured (af) and Low 1 (l1). Assured forwarding classes provide services with a committed rate and a peak rate, much like Frame Relay. Packets transmitted through the queue at or below the committed transmission rate are marked in-profile. If the core service network has sufficient bandwidth along the path for the assured traffic, all aggregate in-profile service packets will reach the service destination. Packets transmitted out the service queue above the committed rate will be marked out-of-profile. When an assured out-of-profile service packet is received at a congestion point in the network, it will be discarded before in-profile assured service packets.
The best-effort classes are Low 2 (l2) and Best-Effort (be). The best-effort forwarding classes have no delivery guarantees. All packets within this class are treated, at best, like out-of-profile assured service packets.
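For reference, the grouping described in the preceding paragraphs can be restated as a simple mapping:

```python
# The eight forwarding classes grouped into the three categories described above.
FC_CATEGORY = {
    "nc": "high-priority", "h1": "high-priority", "ef": "high-priority", "h2": "high-priority",
    "af": "assured",       "l1": "assured",
    "l2": "best-effort",   "be": "best-effort",
}
```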
Default QoS policies map all traffic with equal priority, giving it an equal chance of transmission (Best Effort (be) forwarding class) and an equal chance of being dropped during periods of congestion. QoS prioritizes traffic according to the forwarding class and uses congestion management to control access ingress, access egress, and network traffic with queuing according to priority.
The Committed Burst Size (CBS) specifies the relative amount of reserved buffers for a specific ingress network MDA forwarding class queue or egress network port forwarding class queue. The value is entered as a percentage.
The Maximum Burst Size (MBS) command specifies the relative amount of buffer pool space available as the maximum buffers for a specific ingress network MDA forwarding class queue or egress network port forwarding class queue. The value is entered as a percentage.
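As a simple illustration of how these percentages translate into buffers, assuming the percentage is applied against the relevant pool size (example numbers only):

```python
# Example arithmetic only: CBS reserves buffers for the queue, MBS caps the
# total buffers the queue may draw from the pool (assumed pool size and
# percentages, not platform defaults).
pool_buffers = 10_000
cbs_percent, mbs_percent = 10, 30

cbs_buffers = pool_buffers * cbs_percent // 100   # 1000 buffers reserved for the queue
mbs_buffers = pool_buffers * mbs_percent // 100   # queue depth capped at 3000 buffers
print(cbs_buffers, mbs_buffers)
```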