Filter policies

ACL filter policy overview

ACL filter policies, also referred to as Access Control Lists (ACLs) or just ‟filters”, are sets of ordered rule entries specifying packet match criteria and actions to be performed on a packet upon a match. Filter policies are created with a unique filter ID and filter name. The filter name is assigned during the creation of the filter policy. If a name is not specified at creation time, SR OS assigns a string version of the filter ID as the name.

There are three main filter policy types: ip-filter for IPv4, ipv6-filter for IPv6, and mac-filter for MAC-level filtering. Additionally, the filter policy scope defines whether the policy can be reused between different interfaces, embedded in another filter policy, or applied at the system level:

  • exclusive filter

    An exclusive filter defines policy rules explicitly for a single interface. An exclusive filter allows the highest level of customization but uses the most resources, because each exclusive filter consumes hardware resources on the line cards on which the interface exists.

  • template filter

    A template filter uses an identical set of policy rules across multiple interfaces. Template filters use a single set of resources per line card, regardless of how many interfaces use a specific template filter policy on that line card. Template filter policies used on access interfaces consume resources on line cards only if at least one access interface for a specific template filter policy is configured on a specific line card.

  • embedded filter

    An embedded filter defines a common set of policy rules that can then be used (embedded) by other exclusive or template filters in the system. This allows optimized management of filter policies.

  • system filter

    A system filter policy defines a common set of policy rules that can then be activated within other exclusive/template filters. It can be used, for example, as a system-level set of deny rules. This allows optimized management of common rules (similarly to embedded filters). However, active system filter policy entries are not duplicated inside each policy that activates the system policy (as is the case when embedding is used). Instead, the active system policy is downloaded once to line cards, and activating filter policies are chained to it.

After the filter policy is created, the policy must be associated with interfaces, services, subscribers, or with other filter policies (if the created policy cannot be directly deployed on an interface, service, or subscriber), so that incoming or outgoing traffic can be subjected to the filter rules. Filter policies are associated with interfaces, services, or subscribers separately in the ingress and egress directions. The policies deployed in the ingress and egress directions can be the same or different. In general, Nokia recommends using different filter policies for the ingress and egress directions and using different filter policies per service type, because filter policies support different match criteria and different actions for different directions and service contexts.

A filter policy is applied to a packet in ascending rule entry order. When a packet matches all the command options specified in a filter entry’s match criteria, the system takes the action defined for that entry. If a packet does not match the entry command options, the packet is compared to the next higher numerical filter entry rule, and so on.

In classic CLI, if the packet does not match any of the entries, the system executes the default action specified in the filter policy: drop or forward.

In MD-CLI, if the packet does not match any of the entries, the system executes the default action specified in the filter policy: drop or accept.

For Layer 2 services, either an IPv4/IPv6 or MAC filter policy can be applied. For Layer 3 services and network interfaces, an IPv4/IPv6 policy can be applied. For an R-VPLS service, a Layer 2 filter policy can be applied to Layer 2 forwarded traffic and a Layer 3 filter policy can be applied to Layer 3 routed traffic. For dual-stack interfaces, if both IPv4 and IPv6 filter policies are configured, the policy applied is based on the outer IP header of the packet. Non-IP packets do not affect an IP filter policy, so the default action in the IP filter policy does not apply to these packets. Egress IPv4 QoS-based classification criteria are ignored when an egress MAC filter policy is configured on the same interface.

Additionally, platforms that support Network Group Encryption (NGE) can use IP exception filters. IP exception filters scan all outbound traffic entering an NGE domain and allow packets that match the exception filter criteria to transit the NGE domain unencrypted. See Router encryption exceptions using ACLs for information about IP exception filters supported by NGE nodes.

Filter policy basics

The following subsections describe the main functionality supported by filter policies.

Filter policy packet match criteria

This section defines the packet match criteria supported on SR OS for IPv4, IPv6, and MAC filters. Supported criteria types depend on the hardware platform and filter direction; contact your Nokia representative for more information.

General notes:

  • If multiple unique match criteria are specified in a single filter policy entry, all criteria must be met in order for the packet to be considered a match against that filter policy entry (logical AND).

  • Any match criterion not explicitly defined is ignored during the match.

  • An ACL filter policy entry with match criteria defined but no action configured is considered incomplete and inactive (the entry is not downloaded to the line card). A filter policy must have at least one active entry for the policy to be considered active.

  • An ACL filter entry with no match conditions defined matches all packets.

  • Because an ACL filter policy is an ordered list, entries should be configured (numbered) from the most explicit to the least explicit.
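
The following MD-CLI sketch illustrates these notes using IPv4 criteria described in the next section: two entries ordered from most explicit to least explicit, several match criteria combined in one entry (logical AND), and an explicit default action. It is a minimal illustration only; the filter name and addresses are examples, and the exact match keyword nesting should be verified against the command reference for your release.

    ip-filter "acl-example" {
        default-action accept                # packets matching no entry are accepted
        entry 10 {                           # most explicit rule first
            match {
                protocol tcp                 # all criteria in this entry must match (logical AND)
                src-ip {
                    address 192.0.2.0/24     # example prefix
                }
                dst-port {
                    eq 80
                }
            }
            action {
                accept
            }
        }
        entry 20 {                           # less explicit rule evaluated next
            match {
                src-ip {
                    address 192.0.2.0/24
                }
            }
            action {
                drop
            }
        }
    }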

IPv4/IPv6 filter policy entry match criteria

This section describes the IPv4 and IPv6 match criteria supported by SR OS. The criteria are evaluated against the outer IPv4 or IPv6 header and a Layer 4 header that follows (if applicable). Support for match criteria may depend on hardware or filter direction. Nokia recommends not configuring a filter in a direction or on hardware where a match criterion is not supported because this may lead to unwanted behavior. Some match criteria may be grouped in match lists and may be autogenerated based on the router configuration; see Filter policy advanced topics for more information.

IPv4 and IPv6 filter policies support three different filter types: normal, source MAC, and packet length, each supporting a different set of match criteria.

The match criteria available using the normal filter type are defined in this section. Layer 3 match criteria include:

  • DSCP

    Match the specified DSCP command option against the Differentiated Services Code Point/Traffic Class field in the IPv4 or IPv6 packet header.

  • source IP, destination IP, or IP

    Match the specified source or destination IPv4 or IPv6 address prefix against the IP address field in the IPv4 or IPv6 packet header. The user can optionally configure a mask to be used in a match. The ip command can be used to configure a single filter-policy entry that provides non-directional matching of either the source or destination (logical OR).

  • flow label

    Match the specified flow label against the Flow label field in IPv6 packets. The user can optionally configure a mask to be used in a match. This operation is supported on ingress filters.

  • protocol

    Match the specified protocol against the Protocol field in the outer IPv4 packet header (for example, TCP, UDP, IGMP). ‟*” can be used to specify a TCP or UDP upper-layer protocol match (logical OR).

  • Next Header

    Match the specified upper-layer protocol (such as TCP, UDP, IGMPv6) against the Next Header field of the IPv6 packet header. ‟*” can be used to specify a TCP or UDP upper-layer protocol match (logical OR).

    Use the following command to match against up to six extension headers.

    configure system ip ipv6-eh max

    Use the following command to match against the Next Header value of the IPv6 header.

    configure system ip ipv6-eh limited

Fragment match criteria

Match for the presence of a fragmented packet. For IPv4, match against the MF bit or the Fragment Offset field to determine whether the packet is a fragment. For IPv6, match against the Next Header field for the Fragment Extension Header value to determine whether the packet is a fragment. Up to six extension headers are matched against to find the Fragmentation Extension Header.

IPv4 and IPv6 filters support matching against the initial fragment using first-only, or against non-initial fragments using non-first-only.

IPv4 match fragment true or false criteria are supported on both ingress and egress.

IPv4 match fragment first-only or non-first-only is supported on ingress only.
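
As an illustration of fragment matching, the following hedged MD-CLI sketch drops non-initial IPv4 fragments on ingress. The fragment keyword values follow the descriptions above, but the exact leaf name and nesting are assumptions to verify against the command reference for your release.

    ip-filter "frag-example" {
        entry 10 {
            match {
                fragment non-first-only      # assumption: leaf and value names mirror the criteria above
            }
            action {
                drop
            }
        }
    }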

Operational note for fragmented traffic

IP and IPv6 filters defined to match TCP, UDP, ICMP, or SCTP criteria (such as source port, destination port, port, TCP ACK, TCP SYN, ICMP type, ICMP code) with command options of zero or false also match non-initial fragment packets if the other match criteria within the same filter entry are also met. Non-initial fragment packets do not contain a UDP, TCP, ICMP, or SCTP header.

IPv4 options match criteria

You can configure the following IPv4 options match criteria:

  • IP option

    Matches the specified command option value in the first option of the IPv4 packet. A user can optionally configure a mask to be used in a match.

  • option present

    Matches the presence of IP options in the IPv4 packet. Padding and EOOL are also considered as IP options. Up to six IP options are matched against.

  • multiple option

    Matches the presence of multiple IP options in the IPv4 packet.

  • source route option

    Matches the presence of IP Option 3 or 9 (Loose or Strict Source Route) in the first three IP options of the IPv4 packet. A packet also matches this rule if the packet has more than three IP options.
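
As a hedged sketch only, an entry dropping packets that carry a source route option might look as follows in MD-CLI. The option-related leaf name mirrors the criterion name above but is an assumption to confirm in the command reference for your release.

    ip-filter "opt-example" {
        entry 10 {
            match {
                src-route-option true        # assumption: boolean leaf mirroring the source route option criterion
            }
            action {
                drop
            }
        }
    }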

IPv6 extension header match criteria

You can configure the following IPv6 Extension Header match criteria:

  • Authentication Header extension header

    Matches for the presence of the Authentication Header extension header in the IPv6 packet. This match criterion is supported on ingress only.

  • Encapsulating Security Payload extension header

    Matches for the presence of the Encapsulating Security Payload extension header in the IPv6 packet. This match criterion is supported on ingress only.

  • hop-by-hop options

    Matches for the presence of hop-by-hop options extension header in the IPv6 packet. This match criterion is supported on ingress only.

  • Routing extension header type 0

    Matches for the presence of Routing extension header type 0 in the IPv6 packet. This match criterion is supported on ingress only.

Upper-layer protocol match criteria

You can configure the following upper-layer protocol match criteria:

  • ICMP/ICMPv6 code field header

    Matches the specified value against the code field of the ICMP or ICMPv6 header of the packet. This match is supported only for entries that also define protocol or next-header match for the ICMP or ICMPv6 protocol.

  • ICMP/ICMPv6 type field header

    Matches the specified value against the type field of the ICMP or ICMPv6 header of the packet. This match is supported only for entries that also define the protocol or next-header match for the ICMP or ICMPv6 protocol.

  • source port number, destination port number, or port

    Matches the specified port, port list, or port range against the source port number or destination port number of the UDP, TCP, or SCTP packet header. An option to match either source or destination (Logical OR) using a single filter policy entry is supported by using a directionless port. Source or destination match is supported only for entries that also define protocol/next-header match for TCP, UDP, SCTP, or TCP or UDP protocols. A non-initial fragment never matches an entry with non-zero port criteria specified. Match on SCTP source port, destination port, or port is supported on ingress filter policy.

  • TCP ACK, TCP CWR, TCP ECE, TCP FIN, TCP NS, TCP PSH, TCP RST, TCP SYN, TCP URG

    Matches the presence or absence of the TCP flags defined in RFC 793, RFC 3168, and RFC 3540 in the TCP header of the packet. These match criteria also require defining the protocol/next-header match as TCP in the filter entry. TCP CWR, TCP ECE, TCP FIN, TCP NS, TCP PSH, and TCP URG are supported on FP4 and FP5-based line cards only. TCP RST is supported on FP3, FP4, and FP5-based line cards. When configured on other line cards, the bit for the unsupported TCP flags is ignored.

  • tcp-established

    Matches the presence of the TCP flags ACK or RESET in the TCP header of the packet. This match criterion requires defining the protocol/next-header match as TCP in the filter entry.
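
The following hedged MD-CLI sketch combines a protocol match with ICMP type and code criteria to drop ICMP echo requests; the icmp type/code nesting is an assumption based on the criteria described above and should be verified against the command reference for your release.

    ip-filter "icmp-example" {
        entry 10 {
            match {
                protocol icmp                # required for the ICMP type/code match to apply
                icmp {
                    type 8                   # echo request
                    code 0
                }
            }
            action {
                drop
            }
        }
    }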

Filter type match criteria

Additional match criteria for source MAC, packet length, and destination class are available using different filter types. See Filter policy type for more information.

MAC filter policy entry match criteria

MAC filter policies support three different filter types: normal, ISID, and VID, each supporting a different set of match criteria.

The following list describes the MAC match criteria supported by SR OS for all types of MAC filters (normal, ISID, and VID). The criteria are evaluated against the Ethernet header of the Ethernet frame. Support for match criteria may depend on hardware or filter direction, as described below. A match criterion is blocked if it is not supported by the specified frame type or MAC filter type. Nokia recommends not configuring a filter in a direction or on hardware where a match condition is not supported, as this may lead to unwanted behavior.

You can configure the following MAC filter policy entry match criteria:

  • frame format

    The filter searches to match a specific type of frame format. For example, configuring frame-type ethernet_II matches only ethernet-II frames.

  • source MAC address

    The filter searches to match source MAC address frames. The user can optionally configure a mask to be used in a match.

  • destination MAC address

    The filter searches to match destination MAC address frames. The user can optionally configure a mask to be used in a match.

  • 802.1p frames

    The filter searches to match 802.1p frames. The user can optionally configure a mask to be used in a match.

  • Ethernet II frames

    The filter searches to match Ethernet II frames. The Ethernet type field is a two-byte field used to identify the protocol carried by the Ethernet frame.

  • source access point

    The filter searches to match frames with a source access point on the network node designated in the source field of the packet. The user can optionally configure a mask to be used in a match.

  • destination access point

    The filter searches to match frames with a destination access point on the network node designated in the destination field of the packet. The user can optionally configure a mask to be used in a match.

  • specified three-byte OUI

    The filter searches to match frames with the specified three-byte OUI field.

  • specified two-byte protocol ID

    The filter searches to match frames with the specified two-byte protocol ID that follows the three-byte OUI field.

  • ISID

    The filter searches to match Ethernet frames with the specified 24-bit ISID value from the PBB I-TAG. This match criterion is mutually exclusive of all the other match criteria under a specific MAC filter policy and is applicable to MAC filters of type ISID only. The resulting MAC filter can only be applied on a BVPLS SAP or PW in the egress direction.

  • inner-tag or outer-tag

    The filter searches to match Ethernet frames with the non-service delimiting tags, as described in the VID MAC filters section. This match criterion is mutually exclusive of all other match criteria under a specific MAC filter policy and is applicable to MAC filters of type VID only.
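
A minimal hedged MD-CLI sketch of a normal-type MAC filter that drops frames from one source MAC address follows. The entry structure mirrors the IP filter examples; the exact leaf names and keyword forms (for example, the frame-type value) are assumptions to verify against the command reference for your release.

    mac-filter "mac-example" {
        default-action accept
        entry 10 {
            match {
                frame-type ethernet-ii           # assumption: keyword form (classic CLI uses ethernet_II)
                src-mac {
                    address 00:00:5e:00:53:01    # example address
                    mask ff:ff:ff:ff:ff:ff
                }
            }
            action {
                drop
            }
        }
    }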

Filter policy actions

The following actions are supported by ACL filter policies:

  • drop

    Allows users to deny traffic from ingressing or egressing the system.

    • IPv4 packet-length and IPv6 payload-length conditional drop

      Traffic can be dropped based on IPv4 packet length or IPv6 payload length by specifying a packet length or payload length value or range within the drop filter action (the IPv6 payload length field does not account for the size of the fixed IP header, which is 40 bytes).

      This filter action is supported on ingress IPv4 and IPv6 filter policies only. If the filter is configured on an egress interface, the packet-length or payload-length match condition is always true. A configuration sketch for conditional drop actions is shown at the end of this drop action description.

      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.

      Packets that match a filter policy entry match criteria and the drop packet-length-value or payload-length-value are dropped. Packets that match only the filter policy entry match criteria and do not match the drop packet-length-value or drop payload-length value are forwarded with no further matching in following filter entries.

      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.

    • IPv4 TTL and IPv6 hop limit conditional drop

      Traffic can be dropped based on an IPv4 TTL or IPv6 hop limit by specifying a TTL or hop-limit value or range within the drop filter action.

      This filter action is supported on ingress IPv4 and IPv6 filter policies only. If the filter is configured on an egress interface, the TTL or hop-limit match condition is always true.

      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.

      Packets that match filter policy entry match criteria and the drop TTL or drop hop limit value are dropped. Packets that match only the filter policy entry match criteria and do not match the drop TTL value or drop hop limit value are forwarded with no further match in following filter entries.

      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.

    • pattern conditional drop

      Traffic can be dropped based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset type, and offset value matched in the first 256 bytes of a packet.

      The pattern expression is up to 8 bytes long. The offset-type command identifies the starting point for the offset value and the supported offset-type command options are:

      • layer-3: layer 3 IP header

      • layer-4: layer 4 protocol header

      • data: data payload for TCP or UDP protocols

      • dns-qtype: DNS request or response query type

      The content of the packet is compared with the expression/mask value found at the offset type and offset value as defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, offset-value 20, the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.

      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.

      Packets that match a filter policy's entry match criteria and the pattern are dropped. Packets that match only the filter policy's entry match criteria and do not match the pattern are forwarded without a further match in subsequent filter entries.

      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4-based line cards, and cannot be configured on egress. A filter entry using a pattern is not supported on FP2 or FP3-based line cards. If programmed, the pattern is ignored and the action is forward.

      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.
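
    As referenced above, the following hedged MD-CLI sketch illustrates the general shape of a conditional drop entry that drops only small packets matching the entry. The conditional value is configured under the drop action, not under match; the leaf names and operators shown are assumptions to verify against the command reference for your release.

      ip-filter "cond-drop-example" {
          entry 10 {
              match {
                  protocol udp
              }
              action {
                  drop {                       # assumption: conditional value nested under the drop action
                      packet-length {
                          lt 100               # drop matching packets shorter than 100 bytes; other matching packets are forwarded
                      }
                  }
              }
          }
      }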

  • drop extracted traffic

    Traffic extracted to the CPM can be dropped using ingress IPv4 and IPv6 filter policies based on filter match criteria. Any IP traffic extracted to the CPM is subject to this filter action, including routing protocols, snooped traffic, and TTL expired traffic.

    Packets that match the filter entry match criteria and extracted to the CPM are dropped. Packets that match only the filter entry match criteria and are not extracted to the CPM are forwarded with no further match in the subsequent filter entries.

    Cflowd, log, mirror, and statistics apply to all traffic matching the filter entry, regardless of the drop or forward action.

  • forward

    Allows users to accept traffic ingressing or egressing the system, subjecting it to regular processing.

  • conditional accept

    Allows users to accept traffic that matches a filter entry only when an additional condition is also met. Use the following commands to configure a conditional filter action:

    • MD-CLI

      configure filter ip-filter entry action accept-when
      configure filter ipv6-filter entry action accept-when
    • classic CLI

      configure filter ip-filter entry action forward-when
      configure filter ipv6-filter entry action forward-when
    • pattern conditional accept

      Traffic can be accepted based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset type, and offset value match in the first 256 bytes of a packet. The pattern expression is up to 8 bytes long. The offset type identifies the starting point for the offset value and the supported offset types are:

      • layer-3: Layer 3 IP header

      • layer-4: Layer 4 protocol header

      • data: data payload for TCP or UDP protocols

      • dns-qtype: DNS request or response query type

      The content of the packet is compared with the expression/mask value found at the offset type and offset value defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, and offset-value 20, then the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.

      This accept condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria. Packets that match a filter policy's entry match criteria and the pattern, are accepted. Packets that match only the filter policy's entry match criteria and do not match the pattern, are dropped without a further match in subsequent filter entries.

      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4-based line cards and cannot be configured on egress. A filter entry using a pattern is not supported on FP2 or FP3-based line cards. If programmed, the pattern is ignored and the action is drop.

      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.

  • FC

    Allows users to mark the forwarding class (FC) of packets. This command is supported on ingress IP and IPv6 filter policies. This filter action can be combined with the rate-limit action.

    Packets matching this filter entry action bypass QoS FC marking and are still subject to QoS queuing, policing and priority marking.

    The QPPB forwarding class takes precedence over the filter FC marking action.

  • rate limit

    This action allows users to rate limit traffic matching a filter entry match criteria using IPv4, IPv6, or MAC filter policies.

    If multiple interfaces (including LAG interfaces) use the same rate-limit filter policy on different FPs, then the system allocates a rate limiter resource for each FP; an independent rate limit applies to each FP.

    If multiple interfaces (including LAG interfaces) use the same rate-limit filter policy on the same FP, then the system allocates a single rate limiter resource to the FP; a common aggregate rate limit is applied to those interfaces.

    Note that traffic extracted to the CPM is not rate limited by an ingress rate-limit filter policy, while any traffic generated by the router can be rate limited by an egress rate-limit filter policy.

    Rate-limit filter policy entries can coexist with cflowd, log, and mirror regardless of the outcome of the rate limit. This filter action is not supported on egress on the 7750 SR-a.

    Rate-limit policers are configured with MBS and CBS each equal to 10 ms of the configured rate, and with high-prio-only equal to 0.

    Interaction with QoS: packets matching an ingress rate-limit filter policy entry bypass ingress QoS queuing or policing, and only the filter rate-limit policer is applied. Packets matching an egress rate-limit filter policy bypass egress QoS policing; normal egress QoS queuing still applies.

    • Kilobits-per-second and packets-per-second rate limit

      The rate-limit action can be defined using kilobits per second or packets per second and is supported on both ingress and egress filter policies. The packets-per-second rate limit is not supported using a MAC filter policy and is not supported on the 7750 SR-a.

    • IPv4 packet-length and IPv6 payload-length conditional rate limit

      Traffic can be rate limited based on the IPv4 packet length and IPv6 payload length by specifying a packet-length value or payload-length value or range within the rate-limit filter action. The IPv6 payload-length field does not account for the size of the fixed IP header, which is 40 bytes.

      This filter action is supported on ingress IPv4 and IPv6 filter policies only and cannot be configured on egress access or network interfaces.

      This rate-limit condition is part of a filter entry action evaluation, and not a filter entry match evaluation. It is checked after the packet is determined to match the entry based on the configured filter entry match criteria.

      Packets that match a filter policy’s entry match criteria and the rate-limit packet-length value or rate-limit payload-length value are rate limited. Packets that match only the filter policy’s entry match criteria and do not match the rate-limit packet-length value or rate-limit payload-length value are forwarded with no further match in subsequent filter entries.

      Cflowd, logging, and mirroring apply to all traffic matching the ACL entry regardless of the outcome of the rate limiter and regardless of the packet-length value or payload-length value.

    • IPv4 TTL and IPv6 hop-limit conditional rate limit

      Traffic can be rate limited based on the IPv4 TTL or IPv6 hop-limit by specifying a TTL or hop-limit value or range within the rate-limit filter action using ingress IPv4 or IPv6 filter policies.

      The match condition is part of action evaluation (for example, after the packet is determined to match the entry based on other match criteria configured). Packets that match a filter policy entry match criteria and the rate-limit ttl or hop-limit value are rate limited. Packets that match only the filter policy entry match criteria and do not match the rate-limit ttl or hop-limit value are forwarded with no further matching in the subsequent filter entries.

      Cflowd, logging, and mirroring apply to all traffic matching the ACL entry regardless of the outcome of the rate-limit value and the ttl-value or hop-limit-value.

    • pattern conditional rate limit

      Traffic can be rate limited when it is based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset type, and offset value match in the first 256 bytes of a packet. The pattern expression is up to 8 bytes long. The offset-type command identifies the starting point for the offset value and the supported offset-type command options are:

      • layer-3: layer 3 IP header

      • layer-4: layer 4 protocol header

      • data: data payload for TCP or UDP protocols

      • dns-qtype: DNS request or response query type

      The content of the packet is compared with the expression/mask value found at the offset-type command option and offset value defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, and offset value 20, then the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.

      This rate limit condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.

      Packets that match a filter policy's entry match criteria and the pattern, are rate limited. Packets that match only the filter policy's entry match criteria and do not match the pattern, are forwarded without a further match in subsequent filter entries.

      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4 and FP5-based line cards and cannot be configured on egress. A filter entry using a pattern is not supported on FP2 or FP3-based line cards. If programmed, the pattern is ignored and the system forwards the packet.

      Cflowd, logging, and mirroring apply to all traffic matching this filter entry regardless of the pattern value.

    • extracted traffic conditional rate limit

      Traffic extracted to the CPM can be rate limited using ingress IPv4 and IPv6 filter policies based on filter match criteria. Any IP traffic extracted to the CPM is subject to this filter action, including routing protocols, snooped traffic, and TTL expired traffic.

      Packets that match the filter entry match criteria and are extracted to the CPM are rate limited by this filter action and not subject to distributed CPU protection policing.

      Packets that match only the filter entry match criteria and are not extracted to the CPM are forwarded with no further match in the subsequent filter entries.

      Cflowd, logging, and mirroring apply to all traffic matching the ACL entry regardless of the outcome of the rate limit or the extracted conditional match.
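
    The following hedged MD-CLI sketch shows the general shape of a rate-limit entry; the rate is expressed in kilobits per second as described above, but the leaf names under the action are assumptions to verify against the command reference for your release.

      ip-filter "rl-example" {
          entry 10 {
              match {
                  dst-ip {
                      address 192.0.2.0/24     # example prefix
                  }
              }
              action {
                  rate-limit {                 # assumption: leaf names for the rate value may differ by release
                      pir 10000                # 10 Mb/s expressed in kilobits per second
                  }
              }
          }
      }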

  • Forward Policy-based Routing and Policy-based Forwarding (PBR/PBF) actions

    Allows users to permit ingress traffic while changing the regular routing or forwarding that a packet would otherwise be subject to. PBR/PBF is applicable to unicast traffic only. The following PBR or PBF actions are supported (see Configuring filter policies with CLI for more information):

    • egress PBR

      Enabling egress-pbr activates a PBR action on egress, while disabling egress-pbr activates a PBR action on ingress (default).

      The following subset of the PBR actions (described later in this list) can be activated on egress: redirect-policy, next-hop router, and ESI.

      Egress PBR is supported in IPv4 and IPv6 filter policies for ESM only. Unicast traffic that is subject to slow-path processing on ingress (for example, IPv4 packets with options or IPv6 packets with hop-by-hop extension header) does not match egress PBR entries. Filter logging, cflowd, and mirror source are mutually exclusive of configuring a filter entry with an egress PBR action. Configuring pbr-down-action-override, if supported with a specific PBR ingress action type, is also supported when the action is an egress PBR action. Processing defined by pbr-down-action-override does not apply if the action is deployed in the wrong direction. If a packet matches a filter PBR entry and the entry is not activated for the direction in which the filter is deployed, the system forwards the packet. Egress PBR cannot be enabled in system filters.

    • ESI

      Forwards the incoming traffic using a VXLAN tunnel resolved using the EVPN MP-BGP control plane to the first service chain function identified by ESI (Layer 2) or ESI/SF-IP (Layer 3). Supported with VPLS (Layer 2) and IES/VPRN (Layer 3) services. If the service function forwarding cannot be resolved, traffic matching the entry is subject to the forward action.

      For VPLS, no cross-service PBF is supported; that is, the filter specifying ESI PBF entry must be deployed in the VPLS service where BGP EVPN control plane resolution takes place as configured for a specific ESI PBF action. The functionality is supported in filter policies deployed on ingress VPLS interfaces. BUM traffic that matches a filter entry with ESI PBF is unicast forwarded to the VTEP:VNI resolved through PBF forwarding.

      For IES/VPRN, the outgoing R-VPLS interface can be in any VPRN service. The outgoing interface and VPRN service for BGP EVPN control plane resolution must again be configured as part of ESI PBR entry configuration. The functionality is supported in filter policies deployed on ingress IES/VPRN interfaces and in filter policies deployed on ingress and egress for ESM subscribers. Only unicast traffic is subject to ESI PBR; any other traffic matching a filter entry with Layer 3 ESI action is subjected to action forward.

      When deployed in unsupported direction, traffic matching a filter policy ESI PBR/PBF entry is subject to action forward.

    • lsp

      Forwards the incoming traffic onto the specified LSP. Supports RSVP-TE LSPs (type static or dynamic only), MPLS-TP LSPs, or SR-TE LSPs. Supported for ingress IPv4/IPv6 filter policies and only deployed on IES SAPs or network interfaces. If the configured LSP is down, traffic matching the entry is subject to the forward action.

    • mpls-policy

      Redirects the incoming traffic to the active instance of the MPLS forwarding policy specified by its endpoint. This policy is applicable on any ingress interface (egress is blocked). The traffic is subject to a plain forward if no policy matches the one specified, if the policy has no programmed instance, or if it is applied on a non-Layer 3 interface.

    • next-hop address

      Changes the IP destination address used in routing from the address in the packet to the address configured in this PBR action. The user can configure whether the next-hop IP address must be direct (local subnet only) or indirect (any IP). This functionality is supported for ingress IPv4/IPv6 filter policies only, and is deployed on Layer 3 interfaces. If the configured next hop is not reachable, traffic is dropped and an ‟ICMP destination unreachable” message is sent. If indirect is not specified but the IP address is a remote IP address, traffic is dropped.

    • interface

      Forwards the incoming traffic onto the specified IPv4 interface. Supported for ingress IPv4 filter policies in the global routing table instance. If the configured interface is down or not of the supported type, traffic is dropped.

    • redirect policy

      Implements PBR next-hop or PBR next-hop router action with the ability to select and prioritize multiple redirect targets and monitor the specified redirect targets so PBR action can be changed if the selected destination goes down. Supported for ingress IPv4 and IPv6 filter policies deployed on Layer 3 interfaces only. See section Redirect policies for further details.

    • remark DSCP

      Allows a user to remark the DiffServ Code Point (DSCP) of packets matching the filter policy entry criteria. Packets are remarked regardless of the QoS-based in- or out-of-profile classification, and QoS-based DSCP remarking is overridden. DSCP remarking is supported both as a main action and as an extended action. As a main action, this functionality applies to IPv4 and IPv6 filter policies of any scope and can only be applied at ingress, on either access or network interfaces of Layer 3 services only. Although the filter is applied on ingress, the DSCP remarking is effectively performed on egress. As an extended action, this functionality applies to IPv4 and IPv6 filter policies of any scope and can be applied at ingress on either access or network interfaces of Layer 3 services, or at egress on Layer 3 subscriber interfaces.

    • router

      Changes the routing instance a packet is routed in from the incoming interface’s instance to the routing instance specified in the PBR action (supports both GRT and VPRN redirect). It is supported for ingress IPv4/IPv6 filter policies deployed on Layer 3 interfaces. The action can be combined with the next-hop action specifying a direct/indirect IPv4/IPv6 next hop. Packets are dropped if they cannot be routed in the configured routing instance. See section ‟Traffic Leaking to GRT” in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 3 Services Guide: IES and VPRN for more information.

    • SAP

      Forwards the incoming traffic onto the specified VPLS SAP. Supported for ingress IPv4/IPv6 and MAC filter policies deployed in VPLS service. The SAP that the traffic is to egress on must be in the same VPLS service as the incoming interface. If the configured SAP is down, traffic is dropped.

    • sdp

      Forwards the incoming traffic onto the specified VPLS SDP. Supported for ingress IPv4/IPv6 and MAC filter policies deployed in VPLS service. The SDP that the traffic is to egress on must be in the same VPLS service as the incoming interface. If the configured SDP is down, traffic is dropped.

    • srte-policy

      Redirects the incoming traffic to the active instance of the SR-TE forwarding policy specified by its endpoint and color. This policy is applicable on any ingress interface (egress is blocked). The traffic is subject to a plain forward if no policy matches the one specified, if the policy has no programmed instance, or if it is applied on a non-Layer 3 interface.

    • vprn-target

      Redirects the incoming traffic in a similar manner to combined next-hop and LSP redirection actions, but with greater control and slightly different behavior. This action is supported for both IPv4 and IPv6 filter policies and is applicable on ingress of access interfaces of IES/VPRN services. See Filter policy advanced topics for further details.

  • ISA forward processing actions

    ISA processing actions allow users to accept ingress traffic and send it for ISA processing according to the specified ISA action. See Configuring filter policies with CLI for command details. The following ISA actions are supported:

    • GTP local breakout

      Forwards matching traffic to NAT instead of being GTP tunneled to the mobile user’s PGW or GGSN. The action applies to GTP-subscriber-hosts. If the filter is deployed on other entities, the forward action is applied. Supported for IPv4 ingress filter policies only. If the ISAs performing NAT are down, traffic is dropped.

    • NAT

      Forwards matching traffic for NAT. Supported for IPv4/IPv6 filter policies for Layer 3 services in GRT or VPRN. If ISAs performing NAT are down, traffic is dropped.

      For classic CLI options, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Classic CLI Command Reference Guide.

      For MD-CLI options, see 7450 ESS, 7750 SR, 7950 XRS, and VSR MD-CLI Command Reference Guide.

    • reassemble

      Forwards matching packets to the reassembly function. Supported for IPv4 ingress filter policies only. If ISAs performing reassemble are down, traffic is dropped.

    • TCP for MSS adjustment

      Forwards matching packets (TCP SYN) to an ISA BB group for MSS adjustment. In addition to the IP filter, the user also needs to configure the MSS adjust group under the Layer 3 service to specify the group ID and the new segment-size.

  • HTTP redirect

    Implements the HTTP redirect captive portal. The HTTP GET is forwarded to the CPM for captive portal processing by the router. See the HTTP redirect (captive portal) section for more information.

  • ignore match

    This action allows the user to disable a filter entry; as a result, the entry is not programmed in hardware.

In addition to the preceding actions:

  • A user can select a default action for a filter policy. The default action is executed on packets subjected to an active filter when none of the filter’s active entries matches the packet. By default, filter policies have the default action set to drop, but the user can set the default action to forward instead.

  • A user can override the default action applied to packets matching a PBR/PBF entry when the PBR/PBF target is down by using pbr-down-action-override. Supported options are to drop the packet, forward the packet, or apply the same action as configured for the filter policy's default action. The override is supported for the following PBR/PBF actions. For the last three actions, the override is supported whether in redundancy mode or not.

    • forward ESI (Layer 2 or Layer 3)

    • forward SAP

    • forward SDP

    • forward next-hop indirect router

    Table 1 lists the default behavior for packets matching a PBR/PBF filter entry when the target is down.

    Table 1. Default behavior when a PBR/PBF target is down

    PBR/PBF action                 Default behavior when down
    Forward esi (any type)         Forward
    Forward lsp                    Forward
    Forward mpls-policy            Forward
    Forward next-hop (any type)    Drop
    Forward redirect-policy        Forward when the redirect policy is shutdown
    Forward redirect-policy        Forward when destination tests are enabled and the best destination is not reachable
    Forward redirect-policy        Drop when destination tests are not enabled and the best destination is not reachable
    Forward sap                    Drop
    Forward sdp                    Drop
    Forward srte-policy            Forward
    Forward router                 Drop
    Forward vprn-target            Forward
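
    The following hedged MD-CLI sketch shows a next-hop PBR entry with the default down behavior overridden; the pbr-down-action-override placement and the forward next-hop nesting are assumptions to verify against the command reference for your release.

      ip-filter "pbr-example" {
          entry 10 {
              match {
                  dst-ip {
                      address 203.0.113.0/24   # example prefix
                  }
              }
              pbr-down-action-override forward # assumption: configured at the entry level
              action {
                  forward {
                      next-hop {               # assumption: exact nesting of the next-hop redirect
                          address 192.0.2.1
                      }
                  }
              }
          }
      }

    With this override, traffic matching the entry is routed normally instead of being dropped when the configured next hop becomes unreachable.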

Viewing filter policy actions

A number of parameters determine the behavior of a packet after it has been matched to a defined criterion or set of criteria:

  • the action configured by the user

  • the context in which a filter policy is applied. For example, applying a filter policy in an unsupported context can result in simply forwarding the packet instead of applying the configured action.

  • external factors, such as the reachability (according to specific test criteria) of a target

Use the following commands to display how a packet is handled by the system.

show filter ip
show filter ipv6
show filter mac

This section describes the key information displayed as part of the output for the preceding show commands, and how to interpret the information.

From a configuration point of view, the show command output displays the main action (primary and secondary), as well as the extended action.

The ‟PBR Target Status” field shows the basic information that the system has about the target, based on simple verification methods. This information is only shown for filter entries that are configured in redundancy mode (that is, with both primary and secondary main actions configured), and for ESI redirections. Specifically, the target status in the case of redundancy depends on several factors; for example, on a match in the routing table for next-hop redirects, or on VXLAN tunnel resolution for ESI redirects.

The ‟Downloaded Action” field specifically describes the action that the system performs on the packets that match the criterion (or criteria). This typically depends on the context in which the filter has been applied (whether it is supported or not), but in the case of redundancy, it also depends on the target status. For example, the downloaded action is the secondary main action when the target associated with the primary action is down. In the nominal case (that is, under non-failure conditions), the ‟Downloaded Action” reflects the behavior a packet is subject to. However, in transient cases (for example, during a failure), it may not capture what effectively happens to the packet.

The output also displays relevant information such as the default action when the target is down (see Default behavior when a PBR/PBF target is down) as well as the overridden default action when pbr‑down‑action‑override has been configured.

There are situations where, collectively, this information does not capture what effectively happens to the packet throughout the system. Use the following commands to perform advanced checks and display accurate packet fates.

show filter ip effective-action
show filter ipv6 effective-action
show filter mac effective-action

The criteria for determining when a target is down can be ambiguous. While there is little ambiguity when the target is local to the system performing the steering action, the ambiguity is much more prominent when the target is distant. Therefore, because the use of effective-action triggers advanced tests, a discrepancy may be introduced compared to the action displayed when the effective-action command option is not used. This is, for example, the case for redundant actions.

Filter policy statistics

Filter policies support per-entry packet/byte match statistics. The cumulative matched packet/byte counters are available per ingress and per egress direction. Every packet arriving on an interface/service/subscriber using a filter policy increments the ingress or egress (as applicable) matched packet/byte count for the filter entry the packet matches (if any) on the line card the packet ingresses/egresses. For each policy, the counters for all entries are collected from all line cards, summarized, and made available to the operator.

Filter policies applied on access interfaces are downloaded only to line cards that have interfaces associated with those filter policies. If a filter policy is not downloaded to any line card, the statistics show 0. If a filter policy is removed from any of the line cards the policy is currently downloaded to (as a result of an association change or when a filter becomes inactive), the associated statistics are reset to 0.

Downloading a filter policy to a new line card continues incrementing existing statistics.

Operational notes:

Conditional action match criteria filter entries for ttl, hop-limit, packet-length, and payload-length support logging and statistics when the condition is met, allowing visibility of filter matched and action executed. If the condition is not met, packets are not logged and statistics against the entry are not incremented.

Filter policy logging

SR OS supports logging of information from packets that match a specific filter policy. Logging is configurable per filter policy entry by specifying a preconfigured filter log (configure filter log). A filter log can be applied to ACL filters and CPM hardware filters. Users can configure multiple filter logs and specify:
  • memory allocated to a filter log destination
  • syslog ID for filter log destination
  • filter logging summarization
  • wrap-around behavior
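
As a hedged illustration, a filter log with a memory destination might be defined and referenced from a filter entry as follows in MD-CLI. The log ID, the destination leaf names, and the entry-level log reference are assumptions to verify against the command reference for your release.

    filter {
        log 101 {
            destination {
                memory {                     # assumption: leaf names for the memory destination and its size
                    max-entries 1000
                }
            }
        }
        ip-filter "log-example" {
            entry 10 {
                log 101                      # assumption: entry-level reference to the preconfigured filter log
                match {
                    protocol tcp
                }
                action {
                    drop
                }
            }
        }
    }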

Notes related to filter log summarization:

  • The implementation of the feature applies to filter logs with destination syslog.

  • Summarization logging is the collection and summarization of log messages for one specific log ID within a period of time.

  • The summarization interval is 100 seconds.

  • Upon activation of a summary, a mini-table with source and destination address and count is created for each type (IPv4, IPv6, and MAC).

  • Every received log packet (because of filter match) is examined for source or destination address.

  • If the log packet's source/destination address matches a source/destination address entry in the mini-table from a previously received packet, the summary counter of the matching address is incremented.

  • If the source or destination address of the log message does not match an entry already present in the table, the source/destination address is stored in a free entry in the mini-table.

  • If the mini-table has no more free entries, only the total counter is incremented.

  • At expiry of the summarization interval, the mini-table for each type is flushed to the syslog destination.

Operational note

Conditional action match criteria filter entries for TTL, hop limit, packet length, and payload length support logging and statistics when the condition is met, allowing visibility of filter matched and action executed. If the condition is not met, packets are not logged and statistics against the entry are not incremented.

Filter policy cflowd sampling

Filter policies can be used to control how cflowd sampling is performed on an IP interface. If an IP interface has cflowd sampling enabled, a user can exclude some flows from interface sampling by configuring filter policy rules that match the flows and by disabling interface sampling as part of the filter policy entry configuration. Use the following commands to disable interface sampling:

  • MD-CLI

    configure filter ip-filter entry interface-sample false
    configure filter ipv6-filter entry interface-sample false
  • classic CLI

    configure filter ip-filter entry interface-disable-sample
    configure filter ipv6-filter entry interface-disable-sample

If an IP interface has cflowd sampling disabled, a user can enable cflowd sampling on a subset of flows by configuring filter policy rules that match the flows and by enabling cflowd sampling as part of the filter policy entry configurations. Use the following commands to enable cflowd sampling on a subset of flows:

  • MD-CLI

    configure filter ip-filter entry filter-sample true
    configure filter ipv6-filter entry filter-sample true
  • classic CLI

    configure filter ip-filter entry filter-sample
    configure filter ipv6-filter entry filter-sample

The preceding cflowd filter sampling behavior is exclusively driven by match criteria. The sampling logic applies regardless of whether an action was executed (including evaluation of conditional action match criteria, for example, packet length or TTL).

Filter policy management

Modifying an existing filter policy

There are several ways to modify an existing filter policy. A filter policy can be modified through a configuration change or can have entries populated through dynamic, policy-controlled interfaces; for example, RADIUS, OpenFlow, FlowSpec, or Gx. Although, in general, SR OS ensures filter resources exist before a filter can be modified, because of the dynamic nature of the policy-controlled interfaces, a configuration that was accepted may not be applied in hardware due to a lack of resources. When that happens, an error is raised.

A filter policy can be modified directly, by changing, adding, or deleting entries in that filter policy, or indirectly. Examples of indirect changes to a filter policy include changing an entry of an embedded filter that this policy embeds (see the Filter policy scope and embedded filters section) or changing a redirect policy that this filter policy uses.

Finally, a filter policy deployed on a specific interface can be changed by changing the policy the interface is associated with.

All of the preceding changes can be done in service. A filter policy that is associated with a service or interface cannot be deleted unless all associations are removed first.

For a large (complex) filter policy change, it may take a few seconds to load and initiate the filter policy configuration. Filter policy changes are downloaded to line cards immediately; therefore, users should use filter policy copy or transactional CLI to ensure that a partial policy change is not activated.

Filter policy copy

Perform bulk operations on filter policies by copying one filter’s entries to another filter. Either all entries or a specified entry of the source filter can be selected to copy. When entries are copied, entry order is preserved unless the destination filter’s entry ID is selected (applicable to single-entry copy).

Filter policy copy and renumbering in classic CLI
Note: The information applies to classic CLI.

SR OS supports entry copy and entry renumbering operations to assist in filter policy management.

Use the following commands to copy and overwrite filter entries.

configure filter copy ip-filter
configure filter copy ipv6-filter
configure filter copy mac-filter

The copy command allows overwriting of the existing entries in the destination filter by specifying the overwrite command option when using the copy command. Copy can be used, for example, when creating new policies from existing policies or when modifying an existing filter policy (an existing source policy is copied to a new destination policy, the new destination policy is modified, then the new destination policy is copied back to the source policy with overwrite specified).

Entry renumbering allows you to change the relative order of a filter policy entry by changing the entry ID. Entry renumbering can also be used to move two entries closer together or further apart, thereby creating additional entry space for new entries.

Filter policy advanced topics

Match list for filter policies

The filter match lists ip-prefix-list, ipv6-prefix-list, protocol-list, and port-list define lists of IP prefixes, IP protocols, and TCP/UDP ports that can be used as match criteria for line card IP and IPv6 filters. Additionally, ip-prefix-list, ipv6-prefix-list, and port-list can also be used in CPM filters.

A match list simplifies the filter policy configuration with multiple prefixes, protocols, or ports that can be matched in a single filter entry instead of creating an entry for each.

The same match list can be used in one or many filter policies. A change in match list content is automatically propagated across all policies that use that list.
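
The following hedged MD-CLI sketch defines an ip-prefix-list (using the syntax shown in the prefix-exclude examples later in this section) and references it from a filter entry. The list reference under dst-ip is an assumption to verify against the command reference for your release.

    filter {
        match-list {
            ip-prefix-list "web-servers" {
                prefix 192.0.2.0/28 { }
                prefix 198.51.100.0/28 { }
            }
        }
        ip-filter "list-example" {
            entry 10 {
                match {
                    protocol tcp
                    dst-ip {
                        ip-prefix-list "web-servers"   # assumption: leaf name of the list reference
                    }
                }
                action {
                    accept
                }
            }
        }
    }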

Apply-path

The router supports the autogeneration of IPv4 and IPv6 prefix list entries for BGP peers which are configured in the Base router or in VPRN services. Use the following commands to configure the autogeneration of IPv6 or IPv4 prefix list entries.

configure filter match-list ip-prefix-list apply-path
configure filter match-list ipv6-prefix-list apply-path

This capability simplifies the management of CPM filters to allow BGP control traffic from trusted configured peers only. By using the apply-path filter, the user can:

  • specify one or more regular expression matches per match list, including wildcard matches (".*")

  • mix auto-generated entries with statically configured entries within a match list

Additional rules are applied when using apply-path as follows:

  • Operational and administrative states of a specific router configuration are ignored when auto-generating address prefixes.

  • Duplicates are not removed when populated by different auto-generation matches and static configuration.

  • Configuration fails if the auto-generation of an address prefix results in filter policy resource exhaustion at the filter entry, system, or line card level.
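
As a hedged sketch, an auto-generated prefix list for configured BGP peers might look as follows. The apply-path child leaf names (the bgp-peers index and the group and neighbor regular expressions) are assumptions modeled on the command path shown above; verify them against the command reference for your release.

    ip-prefix-list "bgp-peers-v4" {
        apply-path {
            bgp-peers 1 {                    # assumption: index and leaf names of the auto-generation entry
                group ".*"                   # match all peer groups
                neighbor ".*"                # match all configured neighbors
            }
        }
        prefix 192.0.2.250/32 { }            # statically configured entries can be mixed in
    }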

Prefix-exclude

A prefix can be excluded from an IPv4 or IPv6 prefix list by using the prefix-exclude command.

For example, when the user needs to rate limit traffic to 10.0.0.0/16 with the exception of 10.0.2.0/24, the following options are available.

By applying prefix-exclude, a single IP prefix list with two prefixes is configured:

MD-CLI
*[ex:/configure filter match-list]
A:admin@node-2# info
    ip-prefix-list "list-1" {
        prefix 10.0.0.0/16 { }
        prefix-exclude 10.0.2.0/24 { }    
    }
classic CLI
*A:node-2>config>filter>match-list# info
----------------------------------------------
        ip-prefix-list "list-1" create
            prefix 10.0.0.0/16
            prefix-exclude 10.0.2.0/24
        exit
----------------------------------------------

Without applying prefix-exclude, all eight included subnets must be manually configured in the ip-prefix-list. The following example shows the manual configuration of an IP prefix list.

MD-CLI
*[ex:/configure filter match-list]
A:admin@node-2# info
    ip-prefix-list "list-1" {
        prefix 10.0.0.0/23 { }
        prefix 10.0.3.0/24 { }
        prefix 10.0.4.0/22 { }
        prefix 10.0.8.0/21 { }
        prefix 10.0.16.0/20 { }
        prefix 10.0.32.0/19 { }
        prefix 10.0.64.0/18 { }
        prefix 10.0.128.0/17 { }
    }
classic CLI
*A:node-2>config>filter>match-list# info
----------------------------------------------
            ip-prefix-list "list-1" create
                prefix 10.0.0.0/23
                prefix 10.0.3.0/24
                prefix 10.0.4.0/22
                prefix 10.0.8.0/21
                prefix 10.0.16.0/20
                prefix 10.0.32.0/19
                prefix 10.0.64.0/18
                prefix 10.0.128.0/17
            exit
----------------------------------------------

This is a time-consuming and error-prone task compared to using the prefix-exclude command.

The filter resources consumed in hardware are identical between the two configurations.

A filter match-list using prefix-exclude is mutually exclusive with apply-path, and is not supported as a match criterion in CPM filters.

Configured prefix-exclude prefixes are ignored when no overlapping larger subnet is configured in the prefix list. For example, prefix-exclude 1.1.1.0/24 is ignored if the only included subnet is 10.0.0.0/16.

Filter policy scope and embedded filters

The system supports four different filter policy scopes:

  • scope template

  • scope exclusive

  • scope embedded

  • scope system

Each scope provides different characteristics and capabilities: deploying a filter policy on a single interface or on multiple interfaces, optimizing the use of system resources, or simplifying the management of filter policies that share a common set of filter entries.

Template and exclusive

A scope template filter policy can be reused across multiple interfaces. This filter policy uses a single set of resources per line card regardless of how many interfaces use it. Template filter policies used on access interfaces consume resources only on line cards where the access interfaces are configured. A scope template filter policy is the most common type of filter policy configured in a router.

A scope exclusive filter policy defines a filter dedicated to a single interface. An exclusive filter allows the highest level of customization but uses the most resources on the system line cards as it cannot be shared with other interfaces.

Embedded

To simplify the management of filters sharing a common set of filter entries, the user can create a scope embedded filter policy. This filter can then be included in (embedded into) a scope template, scope exclusive, or scope system filter.

Using a scope embedded filter, a common set of filter entries can be updated in a single place and deployed across multiple filter policies. Embedded scope is supported for IPv4 and IPv6 filter policies.

A scope embedded filter policy is not directly downloaded to a line card and cannot be directly referenced in an interface. However, this policy helps the network user provision a common set of rules across different filter policies.

The following rules apply when using a scope embedded filter policy:

  • The user explicitly defines the offset at which to insert a filter of scope embedded in a template, exclusive, or system filter. The embedded filter entry-id X becomes entry-id (X + offset) in the main filter.

  • Multiple filter policies of scope embedded can be included in (embedded into) a single filter policy of scope template, exclusive, or system.

  • The same scope embedded filter policy can be included in multiple filter policies of scope template, exclusive, or system.

  • Configuration modifications to embedded filter policy entries are automatically applied to all filter policies that embed this filter.

  • The system performs a resource management check when a filter policy of scope embedded is updated or embedded in a new filter. If resources are not available, the configuration is rejected. In rare cases, a filter policy resource check may pass but the filter policy can still fail to load because of resource exhaustion on a line card (for example, when other filter policy entries are dynamically configured by applications like RADIUS in parallel). If that is the case, the configured embedded filter policy is deactivated (the configuration is changed from activate to inactivate).

  • An embedded filter is never partially embedded in a single filter; resources must exist to embed all of its entries in a specific exclusive, template, or system filter. However, an embedded filter may be embedded in only a subset of the filters that reference it, that is, only in those filters where sufficient resources are available.

  • Overlapping of filter entries between an embedded filter and a filter of scope template, exclusive, or system can happen but should be avoided. Nokia recommends using a large enough offset value and appropriate filter entry IDs in the main filter policy to avoid overlapping. If entries overlap, the main filter policy entry overwrites the embedded filter entry.

  • Configuring a default action in a filter of scope embedded is not required, because this information is not used when embedding filter entries.

Embedded Filter Policy shows a configuration in which two filter policies of scope template, filters 100 and 200, each embed filter policy 10 at a different offset:

  • Filter policies 100 and 200 are of scope template.

  • Filter policy 10 of scope embedded is configured with 4 filter entries: entry-id 10, 20, 30, 40.

  • Filter policy 100 embeds filter 10 at offset 0 and includes two additional static entries with entry-id 20010 and 20020.

  • Filter policy 200 embeds filter 10 at offset 10000 and includes two additional static entries with entry-id 100 and 110.

  • As a result, filter 100 automatically creates entries 10, 20, 30, and 40, while filter 200 automatically creates entries 10010, 10020, 10030, and 10040. Filter policies 100 and 200 consume a total of 12 entries when both policies are installed on the same line card.

Scope embedded filter configuration (MD-CLI)
*[ex:/configure filter]
A:admin@node-2# info
...
    ip-filter "10" {
        scope embedded
        entry 10 {
        }
        entry 20 {
        }
        entry 30 {
        }
        entry 40 {
        }
    }
    ip-filter "100" {
        scope template
        entry 20010 {
        }
        entry 20020 {
        }
        embed {
            filter "10" offset 0 {
            }
        }
    }
    ip-filter "200" {
        scope template
        entry 100 {
        }
        entry 110 {
        }
        embed {
            filter "10" offset 10000 {
            }
        }
    }
Scope embedded filter configuration (classic CLI)
*A:node-2>config>filter# info
----------------------------------------------
        ip-filter 10 name "10" create
            scope embedded
            entry 10 create
            exit
            entry 20 create
            exit
            entry 30 create
            exit
            entry 40 create
            exit
        exit
        ip-filter 100 name "100" create
            scope template
            embed-filter 10
            entry 20010 create
            exit
            entry 20020 create
            exit
        exit
        ip-filter 200 name "200" create
            scope template
            embed-filter 10 offset 10000
            entry 100 create
            exit
            entry 110 create
            exit
        exit
----------------------------------------------
Figure 1. Embedded Filter Policy
System

The scope system filter policy provides the most optimized use of hardware resources by programming its filter entries only once on the line cards, regardless of how many IPv4 or IPv6 filter policies of scope template or exclusive use this filter. The system filter policy entries are not duplicated inside each policy that uses it; instead, template or exclusive filter policies are chained to the system filter using the chain-to-system-filter command.

When a template or exclusive filter policy is chained to the system filter, the system filter rules are evaluated before any rules of the chaining filter (that is, the chaining filter's rules are matched only if no system filter match took place).

The system filter policy is intended primarily to deploy a common set of system-level deny rules and infrastructure-level filtering rules to allow, block, or rate limit traffic. Other actions, for example PBR actions or redirect to ISAs, should not be used unless the system filter policy is activated only in filters used by services that support such actions. The NAT action is not supported and should not be configured. Failure to observe these restrictions can lead to unwanted behavior, because system filter actions are not verified against the services in which the chaining filters are deployed. System filter policy entries also cannot be the sources of mirroring.

System filter policies can be populated using CLI, SNMP, NETCONF, OpenFlow and FlowSpec. System filter policy entries cannot be populated using RADIUS or Gx.

The following example shows the configuration of an IPv4 system filter:

  • System filter policy 10 includes a single entry to rate limit NTP traffic to the Infrastructure subnets.

  • Filter policy 100 of scope template is configured to use the system filter using the chain-to-system-filter command.

IPv4 system filter configuration (MD-CLI)
*[ex:/configure filter]
A:admin@node-2# info
    ip-filter "10" {
        scope system
        entry 10 {
            description "Rate Limit NTP to the Infrastructure"
            match {
                protocol udp
                dst-ip {
                    ip-prefix-list "Infrastructure IPs"
                }
                dst-port {
                    eq 123
                }
            }
            action {
                accept
                rate-limit {
                    pir 2000
                }
            }
        }
    }
    ip-filter "100" {
        description "Filter scope template for network interfaces"
        chain-to-system-filter true
    }
    system-filter {
        ip "10" { }
    }
IPv4 system filter configuration (classic CLI)
*A:node-2>config>filter# info
----------------------------------------------
        ip-filter 10 name "10" create
            scope system
            entry 10 create
                description "Rate Limit NTP to the Infrastructure"
                match protocol udp
                    dst-ip ip-prefix-list "Infrastructure IPs"
                    dst-port eq 123
                exit
                action
                    rate-limit 2000
                exit
            exit
        exit
        ip-filter 100 name "100" create
            chain-to-system-filter
            description "Filter scope template for network interfaces"
        exit
        system-filter
            ip 10
        exit
----------------------------------------------

Filter policy type

The filter policy type defines the list of match criteria available in a filter policy. It provides filtering flexibility by reallocating the line card CAM at the filter policy level to filter traffic using additional match criteria not available with filter type normal. The filter type is specific to the filter policy; it is not a system-wide or line card command option. You can configure different filter policy types on different interfaces of the same system and line card.

MAC filters support three different filter types: normal, ISID, and VID.

IPv4 and IPv6 filters support four different filter types: normal, source MAC, packet length, and destination class.

IPv4, IPv6 filter type source MAC

This filter policy type provides source MAC match criterion for IPv4 and IPv6 filters.

The following match criteria are not available for filter entries in a source MAC filter policy type:

  • IPv4

    source IP, DSCP, IP option, option present, multiple option, source-route option

  • IPv6

    source IP

For a QoS policy assigned to the same service or interface endpoint as a filter policy of type source MAC, QoS IP criteria cannot use source IP or DSCP and QoS IPv6 criteria cannot use source IP.

Filter type source MAC is available for egress filtering on VPLS services only. R-VPLS endpoints are not supported.

Dynamic filter entry embedding using OpenFlow, FlowSpec, and VSD is not supported with this filter type.

IPv4, IPv6 filter type packet-length

The following match criteria are available using packet-length filter type, in addition to the match criteria that are available using the normal filter type:

  • packet length

    Total packet length including both the IP header and payload for IPv4 and IPv6 ingress and egress filter policies.

  • TTL or hop limit

    Match criteria available on FP4-based cards for ingress filter policies; if configured on FP2- or FP3-based cards, the TTL or hop-limit match criteria part of the filter entries is not programmed in the line card.

The following match criteria are not available for filter entries in a packet-length type filter policy:

  • IPv4

    DSCP, IP option, option present, multiple option, source-route option

  • IPv6

    flow label

For a QoS policy assigned on egress to the same service or interface endpoint as a packet-length type filter policy, QoS IP criteria cannot use the DSCP match criterion; there is no such restriction on ingress.

This filter type is available for both ingress and egress on all service and router interface endpoints, with the exception of video ISA, service templates, and PW templates.

Dynamic filter entry embedding using OpenFlow and VSD is not supported using this filter type.

IPv4, IPv6 filter type destination-class

This filter policy provides BGP destination-class value match criterion capability using egress IPv4 and IPv6 filters, and is supported on network, IES, VPRN, and R-VPLS.

The following match criteria from the normal filter type are not available using the destination-class filter type:

  • IPv4

    DSCP, IP option, option present, multiple option, source-route option

  • IPv6

    flow label

Filtering egress on destination class requires the destination-class-lookup command to be enabled on the interface that the packet ingresses on. For a QoS policy or filter policy assigned to the same interface, the DSCP remarking action is performed only if a destination-class was not identified for this packet.

System filters, as well as dynamic filter embedding using OpenFlow, FlowSpec, and VSD, are not supported using this filter type.

IPv4 and IPv6 filter type and embedding

An IPv4 or IPv6 filter policy of scope embedded must have the same type as the main filter policy of scope template, exclusive, or system that embeds it:

  • If this condition is not met, the filter cannot be embedded.

  • When embedded, the main filter policy cannot change the filter type if one of the embedded filters is of a different type.

  • When embedded, the embedded filter cannot change the filter type if it does not match the main filter policy.

Similarly, the system filter type must be identical to the template or exclusive filter to allow chaining when using the chain-to-system-filter command.

Rate limit and shared policer

By default, when a user assigns a filter policy to a LAG endpoint, the system allocates the same user-configured rate limit policer value for each FP of the LAG.

The shared policer feature changes this default behavior. When the shared policer is configured to true, the filter policy can only be assigned to endpoints of the same LAG, and the configured rate limit policer value is shared between the LAG complexes based on the number of active ports in the LAG on each complex.

The formula to identify the policer value assigned to each FP complex is the following (a worked example follows the list):
  • for same-speed LAGs
    filter entry rate limit policer value per FP = (configured rate limit) * (number of active ports in the LAG for this FP) / (number of active ports in the LAG)
  • for mixed-speed LAGs and LAGs with a user-configured hash-weight
    filter entry rate limit policer value per FP = (configured rate limit) * (sum of the weights of all active ports for this FP) / (sum of the weights of all active ports in the LAG)
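
As a hypothetical worked example of the same-speed formula, assume a configured rate limit of 1000 Mb/s and a LAG with four active ports, three hosted on FP complex 1 and one on FP complex 2:
    filter entry rate limit policer value on FP 1 = 1000 Mb/s * 3 / 4 = 750 Mb/s
    filter entry rate limit policer value on FP 2 = 1000 Mb/s * 1 / 4 = 250 Mb/s
Without the shared policer, each FP would be programmed with the full 1000 Mb/s value, allowing up to 2000 Mb/s in aggregate across the two complexes.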

The shared policer feature is supported for IPv4 and IPv6 filter policies with the template or exclusive scope, in ingress and egress directions.

Note:
  • Rate limit policer entries embedded in template or exclusive filters follow the shared policer command option from that filter.
  • SR-a and VSR do not support the shared policer feature.
  • In the MD-CLI, configuration combinations with LAG per-link-hash, per-fp-egr-queuing, per-fp-sap-instance, link-map-profile, adapt-qos mode port-fair, and ESM do not support the shared policer feature.
  • In the classic CLI, configuration combinations with LAG per-link-hash, per-fp-egr-queuing, per-fp-sap-instance, link-map-profile, adapt-qos port-fair, and ESM do not support the shared policer feature.

Filter policies and dynamic policy-driven interfaces

Filter policy entries can be statically configured using CLI, SNMP, or NETCONF or dynamically created using BGP FlowSpec, OpenFlow, VSD (XMPP), or RADIUS/Diameter for ESM subscribers.

Dynamic filter entries for FlowSpec, OpenFlow, and VSD can be inserted into an IPv4 or IPv6 filter policy. The filter policy must be either exclusive or a template. Additionally, FlowSpec embedding is supported when using a filter policy that defines system-wide filter rules.

BGP FlowSpec

BGP FlowSpec routes are associated with a specific routing instance (based on the AFI/SAFI and possibly VRF import policies) and can be used to create filter entries in a filter policy dynamically.

Configure FlowSpec embedding using the following contexts:

  • MD-CLI

    configure filter ip-filter embed flowspec
    configure filter ipv6-filter embed flowspec
  • classic CLI

    configure filter ip-filter embed-filter flowspec
    configure filter ipv6-filter embed-filter flowspec

The following rules apply to FlowSpec embedding:

  • The user explicitly defines both the offset at which to insert FlowSpec filter entries and the router instance the FlowSpec routes belong to. The embedded FlowSpec filter entry ID is chosen by the system, in accordance with RFC 5575 Dissemination of Flow Specification Rules.

    Note: These entry IDs are not necessarily sequential and do not necessarily follow the order at which a rule is received.
  • The user can configure the maximum number of FlowSpec filter entries in a specific filter policy at the router or VPRN level using the ip-filter-max-size and ipv6-filter-max-size commands. This limit defines the boundary for FlowSpec embedding in a filter policy (the offset and maximum number of IPv4 or IPv6 FlowSpec routes).

  • When the user configures a template or exclusive filter policy, the router instance defined in the dynamic filter entry for FlowSpec must match the router instance of the interface that the filter policy is applied to.

  • When using a filter policy that defines system-wide rules, embedding FlowSpec entries from different router instances is allowed, and the filter policy can be applied to interfaces in any router instance.

  • See section IPv4/IPv6 filter policy entry match criteria on embedded filter scope for recommendations on filter entry ID spacing and overlapping of entries.

The following points describe the FlowSpec configuration example that follows:

  • The maximum number of FlowSpec routes in the base router instance is configured for 50,000 entries using the ip-filter-max-size command.

  • The filter policy 100 (template) is configured to embed FlowSpec routes from the base router instance at offset 100,000. The offset chosen in this example avoids overlapping with statically defined entries in the same policy. In this case, the statically defined entries can use the entry ID ranges 1-99999 and 150000-2M for defining static entries before or after the FlowSpec filter entries.

The following example shows the FlowSpec configuration.

FlowSpec configuration (MD-CLI)
*[ex:/configure router "Base"]
A:admin@node-2# info
    flowspec {
        ip-filter-max-size 50000
    }

[ex:/configure filter ip-filter "100"]
A:admin@node-2# info
...
    ip-filter "100" {
        embed {
            flowspec offset 100000 {
                router-instance "Base"
            }
        }
    }
FlowSpec configuration (classic CLI)
*A:node-2>config>router# info
----------------------------------------------
        flowspec
            ip-filter-max-size 50000
        exit
----------------------------------------------
*A:node-2>config>filter# info
----------------------------------------------
        ip-filter 100 name "100" create
            embed-filter flowspec router "Base" offset 100000
        exit
----------------------------------------------
OpenFlow

The embedded filter infrastructure is used to insert OpenFlow rules into an existing filter policy. See Hybrid OpenFlow switch for more information. Policy-controlled auto-created filters are re-created on system reboot. Policy-controlled filter entries are lost on system reboot and need to be reprogrammed.

VSD

VSD filters are created dynamically using XMPP and managed using a Python script so rules can be inserted into or removed from the correct VSD template or embedded filters. XMPP messages received by the 7750 SR are passed transparently to the Python module to generate the appropriate CLI. See the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 2 Services and EVPN Guide for more information about VSD filter provisioning, automation, and Python scripting details.

RADIUS or Diameter for subscriber management

The user can assign filter policies or filter entries used by a subscriber within a preconfigured filter entry range defined for RADIUS or Diameter. See the 7450 ESS, 7750 SR, and VSR Triple Play Service Delivery Architecture Guide and filter RADIUS-related commands for more information.

Primary and secondary filter policy action for PBR/PBF redundancy

In some deployments, users may want to specify a backup PBR/PBF target to be used if the primary target is down. SR OS allows the configuration of a primary and a secondary action as part of a single filter policy entry. The secondary action can only be configured if a primary action is configured.

Use the commands in the following contexts to configure a primary action.

configure filter ip-filter entry action
configure filter ipv6-filter entry action
configure filter mac-filter entry action

Use the commands in the following contexts to configure a secondary action.

configure filter ip-filter entry action secondary
configure filter ipv6-filter entry action secondary
configure filter mac-filter entry action secondary

For Layer 2 PBF redundancy, use the forward sap, forward sdp, secondary forward sap, or secondary forward sdp options in the following contexts, as shown in the sketch that follows the command list.

configure filter ip-filter entry action
configure filter ipv6-filter entry action
configure filter mac-filter entry action
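
The following flattened MD-CLI commands are a hedged sketch of Layer 2 PBF redundancy, assuming an illustrative MAC filter "pbf-1" and SAP IDs 1/1/1:100 and 1/1/2:100; the exact placement of the SAP ID value under the forward sap and secondary forward sap options is an assumption to be verified against the command reference.

configure filter mac-filter "pbf-1" entry 10 action forward sap 1/1/1:100
configure filter mac-filter "pbf-1" entry 10 action secondary forward sap 1/1/2:100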

For Layer 3 PBR redundancy, a user can configure any of the following actions as a primary action and any of the following (either the same as or different from the primary) as a secondary action. Furthermore, none of the command options need to be the same between primary and secondary actions. Although the following commands pertain to IPv4, similar command options also apply to IPv6.

Use the commands in the following contexts to configure forward actions:

  • MD-CLI

    configure filter ip-filter entry action forward next-hop nh-ip-vrf
    configure filter ip-filter entry action forward vprn-target
  • classic CLI

    configure filter ip-filter entry action forward
    

When primary and secondary actions are configured, PBR/PBF uses the primary action if its target is operationally up, or it uses the secondary action if the primary PBR/PBF target is operationally down. If both targets are down, the default target-down behavior of the primary action is used (see Default behavior when a PBR/PBF target is down), unless the pbr-down-action-override command is configured.

When PBR/PBF redundancy is configured, the user can use sticky destination functionality for a redundant filter entry. When sticky destination is configured, the functionality mimics that of sticky destination configured for redirect policies.

Use the following commands to configure sticky destination.

configure filter ip-filter entry sticky-dest
configure filter ipv6-filter entry sticky-dest
configure filter mac-filter entry sticky-dest

Use the following commands to force a switchover from the secondary to the primary action when sticky destination is enabled and secondary action is selected.

tools perform filter ip-filter entry activate-primary-action
tools perform filter ipv6-filter entry activate-primary-action
tools perform filter mac-filter entry activate-primary-action

Sticky destination can be configured even if no secondary action is configured.

The control plane monitors whether primary and secondary actions can be performed and programs the filter policy in the forwarding plane to use either the primary or the secondary action as required. More generally, the state of PBR/PBF targets is monitored in the following situations:

  • when a secondary action is configured

  • when sticky destination is configured

  • when a pbr-down-action-override is configured

Use the following command to display which redundant action is activated or downloaded, including when both PBR and PBF targets are down.

show filter ip 10 entry 1000

The following example shows the partial output of the command as applicable for PBF redundancy.

…
Primary Action      : Forward (SAP)
  Next Hop           : 1/1/1
  Service Id          : Not configured 
  PBR Target Status : Does not exist 
Secondary Action    : Forward (SAP)
  Next Hop          : 1/1/2 
  Service Id        : Not configured 
  PBR Target Status : Does not exist 
  PBR Down Action     : Forward (pbr-down-action-override)
  Downloaded Action   : None
  Dest. Stickiness    : 1000                         Hold Remain    : 0

Extended action for performing two actions at a time

In some deployment scenarios, for example, to realize service function chaining, users may want to perform a second action in addition to a traffic steering action. SR OS supports this behavior by configuring an extended action for a main action. This functionality is supported for Layer 3 traffic steering (that is, PBR) and specifically for the following main actions:

  • forward ESI (Layer 3 version)

  • forward LSP

  • forward next-hop indirect router

  • forward next-hop interface

  • forward redirect-policy

  • forward router

  • forward VPRN target

The capability to specify an extended action is also supported in the case of PBR redundancy, for the following actions:

  • forward next-hop indirect router
  • forward VPRN target BGP next hop

The supported extended action is remark dscp.

Use the commands in the following contexts to configure the extended action:

  • MD-CLI

    configure filter ip-filter entry action extended-action
    configure filter ipv6-filter entry action extended-action
  • classic CLI

    configure filter ip-filter entry action extended-action
    configure filter ipv6-filter entry action extended-action
Extended action restrictions

For forward LSP and for actions supporting redundancy, the extended action is not performed when the PBR target is down. Moreover, a filter policy containing an entry with the extended action remark DSCP is blocked in the following cases:

  • if applied on ingress with the egress-PBR flag set

  • if applied on egress without the egress-PBR flag set

The latter case includes actions that are not supported on egress (and for which egress PBR cannot be set).

Advanced VPRN redirection

The VPRN target action is a resilient redirection capability which combines both datapath and control plane lookups to achieve the needed redirection. It allows for the following redirection models:

  • redirection toward the default PE while selecting a specific LSP to use

  • redirection toward an alternative PE while optionally selecting a specific LSP to use. If a specific LSP is not selected, the system automatically selects one based on the BGP next-hop tunnel resolution mechanism

  • all of the preceding within any VPRN

When configuring this action, the user must specify the target BGP next-hop toward which the redirection should occur, as well as the routing context in which the necessary lookups are performed (to derive the service label).

The target BGP next-hop can be configured with any label allocation method (label per VRF, label per next-hop, label per prefix). These methods entail different forwarding behaviors; however, the steering node is not aware of the configuration of the target node. If the user does not specify an advertised route prefix, the steering node assumes that label per VRF is used by the target node and selects the service label accordingly. If the target node is not operating according to the label per VRF method, the user must specify an appropriate route prefix for which a service label is advertised by the target node, keeping in mind the resulting forwarding behavior at the target node of the redirected packet. This specification instructs the steering node to use that specific service label.

Be aware that the system performs an exact match between the specified IPv4 address and mask (or IPv6 address and prefix-length) and the advertised route.

The user can specify an LSP (RSVP-TE, MPLS-TP, or SR-TE LSP) to use toward the BGP next-hop. If no LSP is specified, the system automatically selects one the same way it would have done when normally forwarding a packet toward the BGP next-hop.

Note:

While the system only performs the redirection when the traffic is effectively able to reach the target BGP next-hop, it does not verify whether the redirected packets effectively reach their destination after that.

This action is resilient in that it tracks events affecting the redirection at the service level and reacts to those events. The system performs the redirection as long as it can reach the target BGP next-hop using the correct service label. If the redirection cannot be performed (for example, if no LSP is available, the peer is down, or there is no more-specific labeled route), the system reverts to normal forwarding. This behavior can be overridden and configured to drop. A maximum of 8k unique redirection targets (3-tuple {bgp-nh, router, adv-prefix}) can be tracked.

Destination MAC rewrite when deploying policy-based forwarding

For Layer 2 Policy-Based Forwarding (PBF) redirect actions, a far-end router may discard redirected packets when the PBF changes the destination IP interface the packet arrives on. This happens when a far-end IP interface uses a different MAC address than the IP interface reachable via normal forwarding (for example, one of the routers does not support a configurable MAC address per IP interface).

Use the following command to avoid the discards and deploy egress destination MAC rewrite functionality for VPLS SAPs.

configure service vpls sap egress dest-mac-rewrite
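
For example, assuming an illustrative VPLS service "vpls-1", SAP 1/1/1:100, and MAC address 00:00:5e:00:53:01 (the exact form and placement of the MAC address parameter should be verified against the command reference), the command could be used as follows.

configure service vpls "vpls-1" sap 1/1/1:100 egress dest-mac-rewrite 00:00:5e:00:53:01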

Layer 2 policy-based forwarding (PBF) redirect action shows a deployment.

Figure 2. Layer 2 policy-based forwarding (PBF) redirect action

When enabled, all unicast packets have their destination MAC rewritten to the user-configured value on a Layer 2 switch VPLS SAP. Multicast and broadcast packets are unaffected. The feature:

  • Is supported for regular and split-horizon group Ethernet SAPs in a regular VPLS Service

  • Is expected to be deployed on a SAP that faces the far-end IP interface (either a SAP that is the target of the PBF action, as shown in Layer 2 policy-based forwarding (PBF) redirect action, or a VPLS SAP of a downstream Layer 2 switch that is connected to a far-end router, not shown).

  • Applies to any unicast egress traffic including LI and mirror.

Restrictions:
The dest-mac-rewrite command is mutually exclusive with the SAP MAC ingress and egress loopback feature, which is enabled using the following commands:
  • MD-CLI

    tools perform service id loopback eth sap mac-swap
  • classic CLI

    tools perform service id loopback eth sap start mac-swap

Network port VPRN filter policy

The network port Layer 3 service-aware filter feature allows users to deploy VPRN service-aware ingress filtering on network ports. A single ingress filter of scope template can be defined for IPv4 and another for IPv6 against a VPRN service. The filter applies to all unicast traffic arriving on auto-bind and explicit-spoke network interfaces for that service. The network interface can be either inter-AS or intra-AS. The filter does not apply to traffic arriving on access interfaces (SAP, spoke SDP) or on network ingress for CsC, R-VPLS, and EVPN.

The same filter can be used on access interfaces of the specific VPRN, can embed other filters (including OpenFlow), can be chained to a system filter, and can be used by other Layer 2 or Layer 3 services.

The filter is deployed on all line cards (chassis network mode D is required). There are no limitations related to filter match/action criteria or embedding. The filter is programmed on line cards against ILM entries for this service. All label types are supported. If an ILM entry has a filter index programmed, that filter is used when the ILM is used in packet forwarding; otherwise, no filter is used on the service traffic.
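
A hedged sketch of applying such a filter follows, assuming IPv4 filter "100" and an illustrative VPRN "vprn-1"; it assumes the filter is referenced under the VPRN network ingress context, which should be verified against the command reference.

MD-CLI
*[ex:/configure service vprn "vprn-1"]
A:admin@node-2# info
    network {
        ingress {
            filter {
                ip "100"
            }
        }
    }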

Restrictions

Network port Layer 3 service-aware filters do not support FlowSpec or LI (the filter cannot be used inside the LI infrastructure, nor can LI sources be defined within the VPRN filter).

ISID MAC filters

ISID filters are a type of MAC filter that allows filtering based on ISID values instead of the Layer 2 criteria used by MAC filters of type normal or VID. ISID filters can be deployed on I-VPLS PBB SAPs and Epipe PBB SAPs in scenarios such as the following.

The MMRP usage of the MRP policy automatically ensures that traffic using a Group B-MAC is not flooded between domains. However, there could be small transitory periods when traffic originating from a PBB BEB with a unicast B-MAC destination may be flooded in the B-VPLS context as unknown unicast, for both I-VPLS and PBB Epipe. To restrict the distribution of this traffic for local PBB services, ISID filters can be deployed. A MAC filter configured with the ISID match criterion can be applied to the same interconnect endpoints (B-VPLS SAP or PW) as the MRP policy to restrict the egress transmission of any frames that contain a local ISID. The ISID filters are applied as required on a per B-SAP or B-PW basis, only in the egress direction.

The ISID match criterion is mutually exclusive with any other match criteria under mac-filter. The mac-filter type attribute controls the use of ISID match criteria and must be set to ISID to allow their use.
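
The following MD-CLI fragment is a rough sketch of such an egress ISID filter, assuming an illustrative filter name and a local ISID value of 100; the exact structure of the isid match criterion is an assumption and should be verified against the command reference.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    mac-filter "isid-local" {
        type isid
        entry 10 {
            match {
                isid {
                    value 100
                }
            }
            action {
                drop
            }
        }
    }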

VID MAC filters

VID filters are a type of MAC filter that extends the capability of current Ethernet ports with null or default SAP tag configuration to match and take action on VID tags. Service delimiting tags (for example, QinQ 1/1/1:10.20 or dot1q 1/1/1:10, where outer tag 10 and inner tag 20 are service delimiting) allow fine-grained control of frame operations based on the VID tag. Service delimiting tags are exact-match tags and are stripped from the frame, as shown in VID filtering examples. Exact-match or service delimiting tags do not require VID filters. VID filters can only be used to match on frame tags that come after the service delimiting tags.

With VID filters, users can choose to match VID tags for up to two tags on ingress, egress, or both.

  • The outer tag is the first tag in the packet that is carried transparently through the service.

  • The inner tag is the second tag in the packet that is carried transparently through the service.

VID filters add the capability to perform VID value filter policies on default tags (1/1/1:*, 1/1/1:x.*, or 1/1/1:*.0) or null tags (1/1/1, 1/1/1:0, or 1/1/1:x.0). The matching is based on the port configuration and the SAP configuration.

At ingress, the system looks for the two outer-most tags in the frame. If present, any service delimiting tags are removed and not visible to VID MAC filtering. For example:

  • 1/1/1:x.y SAP has no tag left for VID MAC filter to match on (outer-tag and inner-tag = 0)

  • 1/1/1:x.* SAP has potentially one tag in the * position for VID MAC filter to match on

  • SAP such as 1/1/1, 1/1/1:*, or 1/1/1:*.* can have as many as two tags for VID MAC filter to match on

  • For the remaining tags, the left (outer-most) tag is used as the outer tag in the MAC VID filter. The following tag is used as the inner tag in the filter. If any of these positions do not have tags, a value of 0 is used in the filter.

At egress, the VID MAC filter is applied to the frame before the additional service tags are added.

In the industry, the QinQ tags are often referred to as the C-VID (customer VID) and S-VID (service VID). The terms outer tag and inner tag allow flexibility without having to refer to the C-TAG and S-TAG explicitly. The position of inner and outer tags is relative to the port configuration and SAP configuration. Matching of tags is allowed for up to the first two tags on a frame, because there may be 0, 1, or 2 service delimiting tags.

The meaning of inner and outer has been designed to be consistent for egress and ingress when the number of non-service delimiting tags is consistent. Service 1 in VID filtering examples shows a conversion from QinQ to a single dot1q example, where there is one non-service delimiting tag on ingress and egress. Service 2 shows a symmetric example with two non-service delimiting tags on ingress (plus an additional tag for illustration) and two non-service delimiting tags on egress. Service 3 shows a single non-service delimiting tag on ingress and two tags, with one non-service delimiting tag, on egress.

The SAP-ingress QoS policy allows a MAC criteria type of VID, which uses the same VID matching capabilities as VID filters (see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Quality of Service Guide).

A VID filter entry can also be used as a debug or lawful intercept mirror source entry.

Figure 3. VID filtering examples

VID filters are available on Ethernet SAPs for Epipe, VPLS, or I-VPLS including eth-tunnel and eth-ring services.

Arbitrary bit matching of VID filters

In addition to matching an exact value, a VID filter mask allows masking any set of bits. The masking operation is ((value AND vid-mask) == (tag AND vid-mask)). For example, a value of 6 and a mask of 7 match all VIDs with the lower three bits equal to 6. VID filters allow explicit matching of VIDs and matching of any bit pattern within the VID tag.

When VID filters are used on a SAP, only VID filters are allowed on that SAP; filters of type normal and ISID are not allowed.

An additional check for the ‟0” VID tag may be required when using specific wildcard operations. For example, frames with no tags on null encapsulated ports match a value of 0 in the outer and inner tag, because there are no tags in the frame to match. If a zero tag is possible but not wanted, it can be explicitly filtered using an exact match on ‟0” before testing other bits for ‟0”.
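
The following MD-CLI sketch (filter name, entry IDs, and VID values are illustrative) shows both techniques: entry 1 explicitly matches the ‟0” tag first so that untagged frames are not caught by the masked tests that follow, entry 2 then drops frames whose upper VID bits are 0 (VIDs 0 to 255), and entry 3 drops frames whose lower three VID bits equal 6.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    mac-filter "vid-example" {
        default-action accept
        type vid
        entry 1 {
            match {
                outer-tag {
                    tag 0
                    mask 4095
                }
            }
            action {
                accept
            }
        }
        entry 2 {
            match {
                outer-tag {
                    tag 0
                    mask 3840
                }
            }
            action {
                drop
            }
        }
        entry 3 {
            match {
                outer-tag {
                    tag 6
                    mask 7
                }
            }
            action {
                drop
            }
        }
    }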

Use the following command to configure a special QinQ function for single-tagged QinQ frames with a null second tag.

Note: This information applies to the classic CLI.
configure system ethernet new-qinq-untagged-sap

Using this command in combination with VID filters is not recommended. The outer tag is the only tag available for filtering on egress for frames arriving from MPLS SDPs or from PBB services, even though additional tags may be carried transparently.

Port group configuration example
Figure 4. Port groups

Port groups shows a customer use example where some VLANs are prevented from ingressing or egressing specific ports. In the example, port A sap 1/1/1:1.* would have a filter as shown in the following example, while port A sap 1/1/1:2.* would not.

The following example shows the configuration of the MAC filter command options that apply to port A sap 1/1/1:1.*.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
...
    mac-filter "4" {
        default-action accept
        type vid
        entry 1 {
            match {
                outer-tag {
                    tag 30
                    mask 4095
                }
            }
            action {
                drop
            }
        }
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
        mac-filter 4 name "4" create
            default-action forward
            type vid
            entry 1 create
                match frame-type ethernet_II
                    outer-tag 30 4095
                exit
                action drop
            exit
        exit
----------------------------------------------

IP exception filters

IP exception filters scan all outbound traffic entering an NGE domain and allow packets that match the exception filter criteria to transit the NGE domain unencrypted. For information about IP exception filters supported by NGE nodes, see Router encryption exceptions using ACLs.

The most basic IP exception filter policy must have the following:

  • an exception filter policy ID

  • scope, either exclusive or template

  • at least one filter entry with a specified matching criteria

Redirect policies

SR OS-based routers support the configuration of IPv4 and IPv6 redirect policies. Redirect policies allow specifying multiple redirect target destinations and defining status check test methods used to validate the ability of a destination to receive redirected traffic. This destination monitoring allows routers to react to target destination failures. To specify an IPv4 redirect policy, define all destinations to be IPv4. To specify an IPv6 redirect policy, define all destinations to be IPv6. IPv4 redirect policies can only be deployed in IPv4 filter policies. IPv6 redirect policies can only be deployed in IPv6 filter policies.

Redirect policies support the following destination tests:

  • ping-test – with configurable interval, drop-count, and timeout

  • unicast-rt-test – for unicast routing reachability, supported only when a router instance is configured for the specific redirect policy. The test yields true if the route to the specified destination exists in the RTM for the configured router instance.

Each destination is assigned an initial or base priority describing this destination’s relative importance within the policy. The destination with the highest priority value is selected as the most-preferred destination and programmed on line cards in filter policies using this redirect policy as an action. Only destinations that are not disabled by the programmed test (if configured) are considered when selecting the most-preferred destination.
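
As an illustrative sketch (the policy name, destination addresses, priorities, and timer values are placeholders, and the exact nesting of the ping-test options should be verified against the command reference), a redirect policy with two monitored destinations could look like the following.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    redirect-policy "rp-dns" {
        router-instance "Base"
        destination 10.10.10.1 {
            priority 100
            ping-test {
                interval 10
                drop-count 3
                timeout 5
            }
        }
        destination 10.10.10.2 {
            priority 50
            ping-test {
                interval 10
                drop-count 3
                timeout 5
            }
        }
    }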

In some deployments, it may not be necessary to switch from a currently active, most-preferred redirect-policy destination when a new more-preferred destination becomes available. Use the following command to enable sticky destination functionality to support such deployments.

configure filter redirect-policy sticky-dest

When enabled, the currently active destination remains active unless it goes down or a user forces the switch. Use the following command to force the switch.

tools perform filter redirect-policy activate-best-dest

An optional sticky-dest hold-time-up value or no-hold-time-up command option is available to delay programming the sticky destination in the redirect policy (transition from action forward to PBR action to the most-preferred destination). When the timer is enabled, the first destination that comes up is not programmed and instead the timer is started. After the timer expires, the most-preferred destination at that time is programmed (which may be a different destination from the one that started the timer).

Note:
  • When the manual switchover to the most-preferred destination is executed as described in the preceding information, the hold-time-up timer is stopped.

  • When the timer value is changed, the new value takes immediate effect and the timer is restarted with the new value (or expires immediately if no-hold-time-up is configured).

Note:

The unicast-rt-test command fails when performed in the context of a VPRN routing instance if the destination is routable only through grt-leak functionality. The ping-test is recommended in these cases.

Feature restrictions

The following items are feature restrictions:

  • Redirect policy is supported for ingress IPv4 and IPv6 filter policies only.

  • Different platforms support different scaling limits for redirect policies. Contact your local Nokia representative to ensure the planned deployment does not exceed the recommended scale.

Router instance support for redirect policies

There are two modes of deploying redirect policies on VPRN interfaces. The functionality supported depends on the redirect policy router instance configuration. Use the following commands to configure the router instance of a redirect policy:

  • MD-CLI

    configure filter redirect-policy router-instance
  • classic CLI

    configure filter redirect-policy router
  • Redirect policy with router enabled (recommended):

    • When a PBR destination is up, the PBR lookup is performed in the redirect policy's configured routing instance. When that instance differs from the incoming interface where the filter policy using the specific redirect policy is deployed, the PBR action is equivalent to forward next-hop router filter policy action.

    • When all PBR destinations are down (or the hardware does not support the action router), the system simply forwards the packet, and the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.

    • Any destination tests configured are executed in the routing context specified by the redirect policy.

    • Changing router configuration for a redirect policy brings all destinations with a test configured down. The destinations are brought up after the test confirms reachability based on the new redirect policy router configuration.

  • Redirect policy with router disabled or with router not supported (legacy):

    • When a PBR destination is up, the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.

    • When all PBR destinations are down, the system simply forwards the packet, and the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.

    • Any destination tests configured are always executed in the Base router instance regardless of the router instance of the incoming interface where the filter policy using the specific redirect policy is deployed.

Restrictions

Only unicast-rt-test and ping-test are supported when redirect-policy router is enabled.

Binding redirect policies

Redirect policies can switch from one destination to another in a coordinated manner, as opposed to independently, as a function of the reachability test results of their configured destinations. Use the commands in the following context to bind together destinations of redirect policies.

configure filter redirect-policy-binding

SR OS combines the reachability test results (either TRUE or FALSE) from each of the bound destinations and forms a master test result that prevails over each independent result. The combined result can be obtained by applying either an AND function or an OR function. For the AND function, all destinations must be UP (reachability test result equals TRUE) for each destination to be considered UP; conversely, a single DOWN destination causes each destination to be considered DOWN. For the OR function, a single UP destination is enough for each destination to be considered UP. Apart from the master test, which overrides the test result of each destination forming a binding, redirect policies are unaltered. For the stickiness capability, switching toward a more-preferred destination in a specified redirect policy does not occur until the timers (if any) of each of the associated destinations have expired.
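
The following is a rough sketch only; the policy and destination names are placeholders, and the leaf names under redirect-policy-binding (in particular the operator leaf and the per-policy destination references) are assumptions to be verified against the command reference. It binds one destination from each of two redirect policies with an AND operator, so both must pass their tests for either to be considered UP.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    redirect-policy-binding "bind-1" {
        binding-operator and
        redirect-policy "rp-dns" {
            destination 10.10.10.1 { }
        }
        redirect-policy "rp-web" {
            destination 10.20.20.1 { }
        }
    }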

There is no specific constraint on which destinations can be bound together. For example, it is possible to bind destinations of different address families (IPv4 or IPv6), destinations with no test, destinations with multiple tests, or destinations of redirect policies that are administratively down. However, the following rules apply when binding redirect policies:

  • A destination that is in the Administratively down state is considered DOWN (that is, as if its test result was negative, even if no test had been performed).

  • An administratively down redirect policy is equivalent to a policy with all destinations in an administratively down state. In this case, the system simply forwards the traffic.

  • A destination with no test is considered always UP.

  • If a destination has multiple tests, all tests must be positive for the destination to be considered UP (logical AND between its own tests results).

  • Destination tests are performed even if a redirect policy has not been applied (that is, not declared as an action of a filter which itself has been applied).

HTTP redirect (captive portal)

SR OS routers support redirecting HTTP traffic by using the line card ingress IP and IPv6 filter policy action HTTP redirect. This capability is mainly used in the configure subscriber-mgmt context to redirect a subscriber web session to a captive portal landing page.

Examples of use cases include redirecting a subscriber after initial connection to a new network to accept the terms of service, or a subscriber out-of-quota redirection.

Traffic matching the HTTP redirect filter entry is sent to the SF/CPM for HTTP redirection:

  • The SF/CPM completes the TCP three-way handshake for new TCP sessions on behalf of the intended server, and responds to the HTTP GET request with a 302 redirect. Therefore, the subscriber web session is redirected to the portal landing page configured in the HTTP redirect filter action.

  • Non-TCP flows are ignored.

  • TCP flows other than HTTP that match an http-redirect filter action are reset (TCP RST) after the three-way handshake. Therefore, it is recommended to configure the http-redirect filter entry to match only TCP port 80. HTTPS uses TLS as the underlying protocol and cannot be redirected to a landing page.

Additional subscriber information may be required by the captive portal. This information can be appended as variables in the http-redirect URL and automatically substituted with the relevant subscriber session data, as follows:

  • $IP: subscriber host IP address

  • $MAC: subscriber host MAC address

  • $URL: original requested URL

  • $SAP: subscriber SAP

  • $SUB: subscriber identification string

  • $CID: circuit-ID, or interface-ID of the subscriber host (hexadecimal format)

  • $RID: remote-ID of the subscriber host (hexadecimal format)

  • $SAPDESC: configured SAP description

The recommended filter configuration to redirect HTTP traffic is described in the following information, using ingress ip-filter policy "10":

  • entry 10: Allows DNS (UDP port 53) traffic to a list of allowed DNS servers. Allowing DNS is mandatory for a web client to resolve a URL in the first place. The UDP destination port indicates a DNS request. The destination IP match criterion is optional; creating a list that includes the user DNS servers and the most common open DNS servers provides the most security. Alternatively, allowing UDP port 53 alone is another option.

  • entry 20: Allows HTTP (TCP port 80) traffic to the portal landing page, defined as a prefix list. The TCP destination port indicates an HTTP request. Optionally, the user can create an additional entry to allow TCP port 443 in case the landing page uses both HTTP and HTTPS.

  • entry 30: Redirects all TCP port 80 traffic not matched by entry 20 to the landing page URL http://www.mydomain/com/redirect.html?subscriber=$SUB&ipaddress=$IP&mac=$MAC&location=$SAP.

  • entry 40: Explicitly drops any other IP flows, as shown in the following configuration example.

Redirect HTTP filter configuration (MD-CLI)
*[ex:/configure filter]
A:admin@node-2# info
   ip-filter "10" {
        entry 10 {
            description "Allow DNS Traffic to DNS servers"
            match {
                protocol udp
                dst-ip {
                    ip-prefix-list "dns-servers"
                }
                dst-port {
                    eq 53
                }
            }
            action {
                accept
            }
        }
        entry 20 {
            description "Allow HTTP traffic to redirect portal"
            match {
                protocol tcp
                dst-ip {
                    ip-prefix-list "portal-servers"
                }
                dst-port {
                    eq 80
                }
            }
            action {
                accept
            }
        }
        entry 30 {
            description "HTTP Redirect all other TCP 80 flows"
            match {
                protocol tcp
                dst-port {
                    eq 80
                }
            }
            action {
                http-redirect {
                    url "http://www.mydomain/com/redirect.html?
subscriber=$SUB&ipaddress=$IP&mac=$MAC&location=$SAP."
                }
            }
        }
        entry 40 {
            description "Drop anything else"
            action {
                drop
            }
        }
    }
Redirect HTTP filter configuration (classic CLI)
*A:node-2>config>filter# info
----------------------------------------------
        ip-filter 10 name "10" create
            entry 10 create
                description "Allow DNS Traffic to DNS servers"
                match protocol udp
                    dst-ip ip-prefix-list "dns-servers"
                    dst-port eq 53
                exit
                action
                    forward
                exit
            exit
            entry 20 create
                description "Allow HTTP traffic to redirect portal"
                match protocol tcp
                    dst-ip ip-prefix-list "portal-servers"
                    dst-port eq 80
                exit
                action
                    forward
                exit
            exit
            entry 30 create
                description "HTTP Redirect all other TCP 80 flows"
                match protocol tcp
                    dst-port eq 80
                exit
                action
                    http-redirect "http://www.mydomain/com/redirect.html?
subscriber=$SUB&ipaddress=$IP&mac=$MAC&location=$SAP."
                exit
            exit
            entry 40 create
                description "Drop anything else"
                action
                    drop
                exit
            exit
        exit
----------------------------------------------

The router also supports two redirect scale modes, configurable at the system level. The optimized-mode command increases the number of HTTP redirect sessions supported by the system compared to when optimized mode is disabled.

Optimized-mode configuration (MD-CLI)
*[ex:/configure system cpm-http-redirect]
A:admin@node-2# info detail
 ...
    optimized-mode true
Optimized-mode configuration (classic CLI)
*A:node-2>config>system>cpm-http-redirect# info detail
----------------------------------------------
            optimized-mode
----------------------------------------------
Traffic flow

The following example provides a brief scenario of a subscriber connecting to a new network, where the subscriber is required to authenticate or accept the network terms of use before getting access to the Internet:

  1. The subscriber typically receives an IP address upon connecting to the network using DHCP, and is assigned a filter policy to redirect HTTP traffic to a web portal.

  2. The subscriber HTTP session TCP traffic is intercepted by the router. The CPM completes the TCP three-way handshake on behalf of the destination HTTP server, and responds to the HTTP request with an HTTP 302 ‟Moved Temporarily” response. This response contains the URL of the web portal configured in the filter policy.

  3. Upon receiving this redirect message, the subscriber web browser closes the original TCP session, and opens a new TCP session to the redirection portal.

  4. The subscriber can now authenticate or accept the terms of use. Afterward, the subscriber filter policy is dynamically modified.

  5. The subscriber can now connect to the original Internet site.

Figure 5. Web redirect traffic flow

Filter policy-based ESM service chaining

In some deployments, users may choose to redirect ESM subscribers to Value Added Services (VAS). Various deployment models can be used, but subscribers are often assigned to a specific residential tier of service, which also defines the VAS available to subscribers of that tier. Subscribers are redirected to VAS based on tier-of-service rules, but such an approach can be hard to manage when many VAS services and tiers of service are needed. Often, the only way to identify a subscriber’s traffic with a specific tier of service is to preallocate IP/IPv6 address pools to a specific service tier and use those addresses in VAS PBR match criteria. This creates a dependency between application services and the network infrastructure that can be hard to overcome, especially if fast and flexible application service delivery is needed.

Filter policy-based ESM service chaining removes the interdependency between ESM VAS steering and the network infrastructure. A user can configure upstream and downstream service chaining rules per tier of service or per individual VAS service, without the need to define subscriber or tier-of-service match conditions. ACL filter modeling for ESM service chaining shows a possible ACL model (embedded filters are used for VAS service chaining rules).

On the left in ACL filter modeling for ESM service chaining, the per-tier-of-service ACL model is depicted. Each tier of service (Gold or Silver) has a dedicated embedded VAS filter (‟Gold VAS”, ‟Silver VAS”) that contains all steering rules for all service chains applicable to the specific tier. Each VAS filter is then embedded by the ACL filter used by a specific tier. A subscriber is subject to VAS service chain rules based on the per-tier ACL assigned to that subscriber (for example, via RADIUS). If a new VAS rule needs to be added, a user must program that rule in all applicable tiers. Upstream and downstream rules can be configured in a single filter (as shown) or can use dedicated ingress and egress filters.

On the right in ACL filter modeling for ESM service chaining, the per-VAS-service ACL model is depicted. Each VAS has a dedicated embedded filter (‟VAS 1”, ‟VAS 2”, ‟VAS 3”) that contains all steering rules for all service chains applicable to that VAS service. A tier of service is then created by embedding multiple VAS-specific filters: Gold: VAS 1, VAS 2, VAS 3; Silver: VAS 1 and VAS 3. A subscriber is subject to VAS service chain rules based on the per-tier ACL assigned to that subscriber. If a new VAS rule needs to be added, a user needs to program that rule in a single VAS-specific filter only. Again, upstream and downstream rules can be configured in a single filter (as shown) or can use dedicated ingress and egress filters.

Figure 6. ACL filter modeling for ESM service chaining
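
The following classic CLI fragment is a minimal sketch of the per-VAS-service model, assuming that an embedded filter holds the steering rules for one VAS and that a template filter representing a tier of service embeds it. The filter IDs, names, match criteria, and the redirect policy are illustrative placeholders only, not a complete configuration.

# Sketch only: filter IDs, names, and the redirect policy "vas1-redirect"
# are hypothetical placeholders.
        ip-filter 100 create
            description "VAS 1 steering rules"
            scope embedded
            entry 10 create
                match protocol tcp
                    dst-port eq 80
                exit
                action forward redirect-policy vas1-redirect
            exit
        exit
        ip-filter 200 name "gold-tier" create
            scope template
            embed-filter 100 offset 1000
        exit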

Upstream ESM ACL policy-based service chaining shows upstream VAS service chaining steering using filter policies. Upstream subscriber traffic entering Res-GW is subject to the subscriber's ingress ACL filter assigned to that subscriber by a policy server. If the ACL contains VAS steering rules, the VAS-rule-matching subscriber traffic is steered for VAS processing over a dedicated to-from-access VAS interface in the same or a different routing instance. After the VAS processing, the upstream traffic can be returned to Res-GW by a to-from-network interface (shown in Upstream ESM ACL policy-based service chaining) or can be injected to WAN to be routed toward the final destination (not shown).

Figure 7. Upstream ESM ACL policy-based service chaining

Downstream ESM ACL-policy based service chaining shows downstream VAS service chaining steering using filter policies. Downstream subscriber traffic entering Res-GW is forwarded to a subscriber-facing line card. On that card, the traffic is subject to the egress ACL filter policy assigned to that subscriber by a policy server. If the ACL contains VAS steering rules, the VAS-rule-matching subscriber traffic is steered for VAS processing over a dedicated to-from-network VAS interface (in the same or a different routing instance). After the VAS processing, the downstream traffic must be returned to Res-GW via a ‟to-from-access” interface (shown in Downstream ESM ACL-policy based service chaining) to ensure the traffic is not redirected to VAS again when the subscriber-facing line card processes that traffic.

Figure 8. Downstream ESM ACL-policy based service chaining

Ensuring the correct configuration for the VAS interface type, for upstream and downstream traffic redirected to a VAS and returned after VAS processing, is critical for achieving loop-free network connectivity for VAS services.

Use the commands in the following contexts to configure the VAS interface type and its command options. The interface types are described in the list below, and a minimal configuration sketch follows the list.

configure service vprn if vas-if-type
configure service ies if vas-if-type
configure router if vas-if-type

  • deployments that use two separate interfaces for VAS connectivity (recommended, and required if local subscriber-to-subscriber VAS traffic must be supported):

    • to-from-access

      • upstream traffic arriving from subscribers over access interfaces must be redirected to a VAS PBR target reachable over this interface for upstream VAS processing

      • downstream traffic destined for subscribers after VAS processing must arrive on this interface, so that the traffic is subject to regular routing but is not subject to Application Assurance diversion, nor to egress subscriber PBR

      • the interface must not be used for downstream pre-VAS traffic; otherwise, routing loops occur

    • to-from-network

      • downstream traffic destined for subscribers arriving from network interfaces must be redirected to a VAS PBR target reachable over this interface for downstream VAS processing

      • upstream traffic after VAS processing, if returned to the router, must arrive on this interface so that regular routing can be applied

  • deployments that use a single interface for VAS connectivity (optional, no local subscriber-to-subscriber VAS traffic support):

    • to-from-both

      • both upstream traffic arriving from access interfaces and downstream traffic arriving from the network are redirected to a PBR target reachable over this interface for upstream/downstream VAS processing

      • after VAS processing, traffic must arrive on this interface (optional for upstream), so that the traffic is subject to regular routing but is not subject to AA diversion, nor to egress subscriber PBR

      • the interface must be used for downstream pre-VAS traffic; otherwise, routing loops occur
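
The following classic CLI fragment is a minimal sketch of the recommended two-interface model under the config>service>vprn>if context; the interface names are illustrative, and all other interface command options (addressing, SAP or SDP bindings) are omitted.

# Sketch only: the interface names are hypothetical placeholders.
            interface "to-vas-access" create
                vas-if-type to-from-access
            exit
            interface "to-vas-network" create
                vas-if-type to-from-network
            exit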

The ESM filter policy-based service chaining allows users to do the following:

  • Steer upstream and downstream traffic per-subscriber with full ACL-flow-defined granularity without the need to specify match conditions that identify subscriber or tier-of-service

  • Steer both upstream and downstream traffic on a single Res-GW

  • Flexibly assign subscribers to tier-of-service by changing the ACL filter policy a specific subscriber uses

  • Flexibly add new services to a subscriber or tier-of-service by adding the subscriber-independent filter rules required to achieve steering

  • Achieve isolation of VAS steering from other ACL functions like security through the use of embedded filters

  • Deploy integrated Application Assurance (AA) as part of a VAS service chain—both upstream and downstream traffic is processed by AA before a VAS redirect

  • Select whether to use IP-Src/IP-Dst address hash or IP-Src/IP-Dst address plus TCP/UDP port hash when LAG/ECMP connectivity to the DC is used. Layer 4 inputs are not used in the hash for IPv6 packets with extension headers present.

ESM filter policy-based traffic steering supports the following:

  • IPv4 and IPv6 steering of unicast traffic using IPv4 and IPv6 ACLs

  • use of an action forward redirect-policy filter or an action forward next-hop router filter for IP steering with TCAM-based load-balancing, -to-wire, and sticky destination

  • use of an action forward ESI SF-IP VAS interface router filter for an integrated service chaining solution

Operational notes

The following operational notes apply:

  • Downstream traffic steered toward a VAS on the subscriber-facing IOM is reclassified (FC and profile) based on the subscriber egress QoS policy, and is queued toward the VAS based on the network egress QoS configuration. Packets sent toward VAS do not have DSCP remarked (because they are not yet forwarded to a subscriber). DSCP remarking based on subscriber's egress QoS profile only applies to traffic ultimately forwarded to the subscriber (after VAS or not subject to VAS).

  • If mirroring of subscriber traffic is configured using an ACL entry, subscriber, SAP, or port mirror, the mirroring applies to traffic ultimately forwarded to the subscriber (after VAS or not subject to VAS). Traffic that is being redirected to VAS cannot be mirrored using an ACL filter implementing a PBR action (the same egress ACL filter entry being a mirror source and specifying an egress PBR action is not supported).

  • Use dedicated ingress and egress filter policies to prevent an ingress PBR entry from accidentally matching on egress, or vice versa, which would result in the matching traffic being forwarded or dropped (based on the filter's default action configuration).

Restrictions

The following restrictions apply:

  • This feature is not supported with HSMDAs on subscriber ingress.

  • This feature is not supported when the traffic is subject to non-AA ISA on Res-GW.

  • Traffic that matches an egress filter entry with an egress PBR action cannot be mirrored, cannot be sampled using cflowd, and cannot be logged using filter logging while being redirected to VAS on a subscriber-facing line card.

  • This feature is not supported with LAC/LNS ESM (PPPoE subscriber traffic encapsulated into or de-encapsulated from L2TP tunnels).

  • This feature is not supported for system filter policies.

Policy-based forwarding for deep packet inspection in VPLS

The purpose of policy-based forwarding is to capture traffic from a customer, perform deep packet inspection (DPI), and forward the traffic if the DPI allows it.

In the following example, split horizon groups are used to prevent flooding of traffic. Traffic from customers enters at SAP 1/1/5:5. Because mac-filter 100 is applied on ingress, all traffic with dot1p 7 marking is forwarded to SAP 1/1/22:1, which is the DPI.

DPI performs packet inspection/modification and either drops the traffic or forwards the traffic back into the box through SAP 1/1/21:1. Traffic is then sent to spoke-sdp 3:5.

SAP 1/1/23:5 is configured to verify whether the VPLS service is flooding all the traffic. If the router performed flooding, traffic would also be sent to SAP 1/1/23:5 (which should not happen).

Policy-based forwarding for deep packet inspection shows an example of configuring policy-based forwarding for deep packet inspection on a VPLS service. For information about configuring services, see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 2 Services and EVPN Guide.

Figure 9. Policy-based forwarding for deep packet inspection
VPLS service configuration with DPI (MD-CLI)
*[ex:/configure service]
A:admin@node-2# info
...   
    vpls "10" {
        admin-state enable
        customer "1"
        service-mtu 1400
        fdb {
            static-mac {
                mac 00:00:00:31:11:01 {
                    sap 1/1/21:1
                }
                mac 00:00:00:31:12:01 {
                    sap 1/1/22:1
                }
                mac 00:00:00:31:13:05 {
                    sap 1/1/23:5
                }
            }
        }
        split-horizon-group "dpi" {
            residential true
        }
        split-horizon-group "split" {
        }
        sap 1/1/21:1 {
            split-horizon-group "split"
            fdb {
                mac-learning {
                    learning false
                }
            }
        }
        sap 1/1/22:1 {
            split-horizon-group "dpi"
            arp-reply-agent with-subscr-ident
            stp {
                admin-state disable
            }
            fdb {
                mac-pinning true
                mac-learning {
                    learning false
                }
            }
        }
        sap 1/1/23:5 {
        }
    }
VPLS service configuration with DPI (classic CLI)
*A:node-2>config>service# info
----------------------------------------------
...
        vpls 10 customer 1 create
            service-mtu 1400
            split-horizon-group "dpi" residential-group create
            exit
            split-horizon-group "split" create
            exit
            stp
                shutdown
            exit
            sap 1/1/21:1 split-horizon-group "split" create
                disable-learning
                static-mac 00:00:00:31:11:01 create
            exit
            sap 1/1/22:1 split-horizon-group "dpi" create
                disable-learning
                static-mac 00:00:00:31:12:01 create
            exit
            sap 1/1/23:5 create
                static-mac 00:00:00:31:13:05 create
            exit
            no shutdown
        exit
...
----------------------------------------------
MAC filter configuration (MD-CLI)
*[ex:/configure filter]
A:admin@node-2# info
    mac-filter "100" {
        default-action accept
        entry 10 {
            log 101
            match {
                dot1p {
                    priority 7
                }
            }
            action {
                forward {
                    sap {
                        vpls "10"
                        sap-id 1/1/22:1
                    }
                }
            }
        }
    }
...
MAC filter configuration (classic CLI)
*A:node-2>config>filter# info
----------------------------------------------
...
        mac-filter 100 create
            default-action forward
            entry 10 create
                match
                    dot1p 7 7
                exit
                log 101
                action forward sap 1/1/22:1
            exit
        exit
...
----------------------------------------------
MAC filter added to the VPLS service configuration (MD-CLI)
*[ex:/configure service]
A:admin@node-2# info
...
    vpls "10" {
        admin-state enable
        customer "1"
        service-mtu 1400
        fdb {
            static-mac {
                mac 00:00:00:31:11:01 {
                    sap 1/1/21:1
                }
                mac 00:00:00:31:12:01 {
                    sap 1/1/22:1
                }
                mac 00:00:00:31:13:05 {
                    sap 1/1/23:5
                }
                mac 00:00:00:31:15:05 {
                    sap 1/1/5:5
                }
            }
        }
        split-horizon-group "dpi" {
            residential true
        }
        split-horizon-group "split" {
        }
        spoke-sdp 3:5 {
        }
        sap 1/1/21:1 {
            admin-state enable
            split-horizon-group "split"
            fdb {
                mac-learning {
                    learning false
                }
            }
        }
        sap 1/1/22:1 {
            split-horizon-group "dpi"
            arp-reply-agent with-subscr-ident
            stp {
                admin-state disable
            }
            fdb {
                mac-pinning true
                mac-learning {
                    learning false
                }
            }
        }
        sap 1/1/5:5 {
            split-horizon-group "split"
            ingress {
                filter {
                    mac "100"
                }
            }
        }
    }
...
MAC filter added to the VPLS service configuration (classic CLI)
*A:node-2>config>service# info
----------------------------------------------
...
        vpls 10 customer 1 create
            service-mtu 1400
            split-horizon-group "dpi" residential-group create
            exit
            split-horizon-group "split" create
            exit
            stp
                shutdown
            exit
            sap 1/1/5:5 split-horizon-group "split" create
                ingress
                    filter mac 100
                exit
                static-mac 00:00:00:31:15:05 create
            exit
            sap 1/1/21:1 split-horizon-group "split" create
                disable-learning
                static-mac 00:00:00:31:11:01 create
            exit
            sap 1/1/22:1 split-horizon-group "dpi" create
                disable-learning
                static-mac 00:00:00:31:12:01 create
            exit
            sap 1/1/23:5 create
                static-mac 00:00:00:31:13:05 create
            exit
            spoke-sdp 3:5 create
            exit
            no shutdown
        exit
...
----------------------------------------------

Storing filter entries

FP2-, FP3-, FP4-, and FP5-based cards store filter policy entries in dedicated memory banks in hardware, also referred to as CAM tables:

  • IP/MAC ingress

  • IP/MAC egress

  • IPv6 ingress

  • IPv6 egress

Additional CAM tables for CPM filters are used on SR-1, SR-1s, SR-2s line cards for MAC, IP and IPv6.

FP4- and FP5-based cards

To optimize both scale and performance, policy entries configured by the operator are compressed by each FP4- and FP5-based line card before being installed in hardware.

In an unexpected scenario, typically only achievable in a lab environment, this compression can result in an overload condition for a specific line card FP CAM. This overload condition can occur when applying a filter policy for the first time on a line card FP or when adding entries to a filter policy.

For a line card ACL filter, the system raises a trap if the utilization of a specific FP CAM exceeds 85%.

Applying a filter policy

A policy is installed for the first time on a line card FP if no router interface, service interface, SAP, spoke SDP, mesh SDP, or ESM subscriber host was using the policy on this FP.

A policy installed for the first time on a line card FP can lead to a compression failure resulting in an overload condition for this policy on this FP CAM. In this case, none of the entries for the affected filter policy are programmed and traffic is forwarded as if no filter was installed.

Adding filter entries

Adding an additional entry to a filter policy can lead to a compression failure resulting in an overload condition.

In this case, the newly added entry is not programmed on the affected FP CAM. Additional entries added to the same policy after the first overload condition are also not programmed on the affected FP CAM as the system attempts to install all outstanding additions in order.

A trap is raised when an overload condition occurs. After the first overload event is detected for a specified ACL FP CAM, the CPM interactively rejects the addition of filter policies or filter entries applied to the same FP CAM, providing an interactive error message to the user or to dynamic provisioning interfaces such as RADIUS.

Note:

The filter resource management task on the CPM controls the maximum number of filter entries per FP. If the user attempts to go over the scaling limit, the system returns an interactive error message. This mechanism is independent from the overload state of the FP CAM.

Removing filter entries

Removing filter entries from a filter policy is always accepted and can be used to resolve overload events.

Resolving overload

The overload condition should be resolved by the network user before adding new entries or policies in the affected FP CAM.

To help identify the affected policy, the system logs the overload event, providing the slot number, FP number, and impacted CAM. Use the following command to identify the policy and policy entries in the system that cannot be programmed on a specific FP CAM.

tools dump filter overload

To resolve the overload condition, the network user can remove the newly added entries from the affected policy or assign a different policy.
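
For example, assuming the overload log points at filter policy 10 and that entry 100 was the most recently added entry (both values are hypothetical), the condition can be cleared in the classic CLI as follows:

# Sketch only: filter 10 and entry 100 are hypothetical values.
*A:node-2# tools dump filter overload
*A:node-2# configure filter ip-filter 10
*A:node-2>config>filter>ip-filter# no entry 100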

Configuring filter policies with CLI

This section provides information to configure filter policies using the command line interface.

Common configuration tasks

This section provides a brief overview of the tasks that must be performed for all IPv4, IPv6, and MAC filter configurations and provides the CLI commands.

Creating an IPv4 filter policy

A filter policy has the following attributes:

  • policy ID and policy name

  • scope: template, exclusive, embedded, system

  • type: normal, src-mac, packet-length

  • one or more filter entries defining match criteria and action

  • default action to define how packets that do not match any of the filter entries are handled

Use the commands in the following context to create a template IPv4 filter policy.

configure filter ip-filter
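
For example, a minimal template policy in the classic CLI could look like the following sketch; the filter ID, name, and default action are illustrative.

# Sketch only: the filter ID and name are hypothetical placeholders.
        ip-filter 160 name "example-template" create
            description "template IPv4 filter"
            scope template
            default-action drop
        exit
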
IPv4 filter entry

Within a filter policy, configure filter entries that contain criteria against which ingress or egress traffic is matched. The action specified in the entry determines how the packets are handled, such as dropping or forwarding.

Configure the following to create an IPv4 filter entry:

  1. Enter a filter entry ID.

  2. Configure the filter matching criteria.

  3. Configure the filter action.

The following example shows an IPv4 filter entry configuration.

MD-CLI
*[ex:/configure filter ip-filter "1"]
A:admin@node-2# info
    description "filter-main"
    scope exclusive
    entry 10 {
        description "no-91"
        match {
            src-ip {
                address 10.10.0.100/24
            }
            dst-ip {
                address 10.10.10.91/24
            }
        }
        action {
            drop
        }
    }
classic CLI
*A:node-2>config>filter>ip-filter# info
----------------------------------------------
            description "filter-main"
            scope exclusive
            entry 10 create
                description "no-91"
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.100/24
                exit
                action drop
            exit
----------------------------------------------
Cflowd filter sampling

Within a filter entry, you can specify that traffic matching the associated IPv4 filter entry is sampled if the IPv4 interface is set to cflowd ACL mode. Configuring the filter-sample command enables the cflowd tool.

IPv4 filter entry configuration to sample in cflowd ACL mode (MD-CLI)
*[ex:/configure filter ip-filter "10"]
A:admin@node-2# info
    description "filter-main"
    scope exclusive
    entry 10 {
        description "no-91"
        filter-sample true
        interface-sample false
        action {
            forward {
                redirect-policy "redirect1"
            }
        }
    }
IPv4 filter entry configuration to sample in cflowd ACL mode (classic CLI)
*A:node-2>config>filter>ip-filter# info
----------------------------------------------
            description "filter-main"
            scope exclusive
            entry 10 create
                description "no-91"
                filter-sample
                interface-disable-sample
                match
                exit
                action forward redirect-policy redirect1
            exit
----------------------------------------------

Within a filter entry, you can also specify that traffic matching the associated IPv4 filter entry is not sampled by cflowd if the IPv4 interface is set to cflowd interface mode.

IPv4 filter entry configuration not to sample in cflowd ACL mode (MD-CLI)
*[ex:/configure filter ip-filter "1"]
A:admin@node-2# info
    description "filter-main"
    scope exclusive
    entry 10 {
        description "no-91"
        filter-sample false
        interface-sample true
        action {
            forward {
                redirect-policy "redirect1"
            }
        }
    }
IPv4 filter entry configuration not to sample in cflowd ACL mode (classic CLI)
*A:node-2>config>filter>ip-filter# info
----------------------------------------------
            description "filter-main"
            scope exclusive
            entry 10 create
                description "no-91"
                no filter-sample
                no interface-disable-sample
                match
                exit
                action forward redirect-policy redirect1
            exit
----------------------------------------------

Creating a MAC filter policy

Each filter policy must have the following:

  • the filter policy type specified (MAC normal, MAC ISID, MAC VID)

  • a filter policy ID

  • a default action, either drop or forward

  • filter policy scope, either exclusive or template

  • at least one filter entry, with a match criterion defined

MAC filter policy

The following example shows a MAC filter policy configuration.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
...
    mac-filter "90" {
        description "filter-west"
        scope exclusive
    }
*[ex:/configure filter]
A:admin@node-2# info detail
 ...
    mac-filter "90" {
...
        type normal
...
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
...
        mac-filter 90 create
            description "filter-west"
            scope exclusive
            type normal
        exit
----------------------------------------------
MAC ISID filter policy

The following example shows a MAC ISID filter policy configuration.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    mac-filter "90" {
        description "filter-wan-man"
        scope template
        type isid
        entry 1 {
            description "drop-local-isids"
            match {
                isid {
                    range {
                        start 100
                        end 1000
                    }
                }
            }
            action {
                drop
            }
        }
        entry 2 {
            description "allow-wan-isids"
            match {
                isid {
                    value 150
                }
            }
            action {
                accept
            }
        }
    }


classic CLI
*A:node-2>config>filter# info
----------------------------------------------
mac-filter 90 create
     description "filter-wan-man"
     scope template
     type isid
     entry 1 create
          description "drop-local-isids"
          match
               isid 100 to 1000
          exit
          action drop
     exit
     entry 2 create
          description "allow-wan-isids"
          match
               isid 150
          exit
          action forward
     exit
exit
----------------------------------------------
MAC VID filter policy

The following example shows a MAC VID filter policy configuration.

MD-CLI
*[ex:/configure filter mac-filter "101"]
A:admin@node-2# info
    default-action accept
    type vid
    entry 1 {
        match {
            frame-type ethernet-ii
            outer-tag {
                tag 85
                mask 4095
            }
        }
        action {
            drop
        }
    }
    entry 2 {
        match {
            frame-type ethernet-ii
            outer-tag {
                tag 43
                mask 4095
            }
        }
        action {
            drop
        }
    }
classic CLI
*A:node-2>config>filter>mac-filter# info
----------------------------------------------
      default-action forward
      type vid
      entry 1 create
         match frame-type ethernet_II
           outer-tag 85 4095
         exit
         action drop
      exit
      entry 2 create
         match frame-type ethernet_II
           outer-tag 43 4095
         exit
         action drop
      exit       
----------------------------------------------
MAC filter entry

Within a filter policy, configure filter entries that contain criteria against which ingress, egress, or network traffic is matched. The action specified in the entry determines how the packets are handled, such as dropping or forwarding.

Configure the following to create a MAC filter entry:

  1. Enter a filter entry ID. The system does not dynamically assign a value.

  2. Specify matching criteria.

  3. Assign an action.

The following example displays a MAC filter entry configuration.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
...
    mac-filter "90" {
       entry 1 {
            description "allow-104"
            match 
            action {
                drop
            }
        }
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
        mac-filter 90 create
            entry 1 create
                description "allow-104" 
                match 
                exit 
                action drop
            exit 
        exit 
----------------------------------------------

Creating an IPv4 exception filter policy

Configuring and applying IPv4 exception filter policies is optional. Each exception filter policy must have the following:

  • an exception filter policy ID

  • scope specified, either exclusive or template

  • at least one filter entry with matching criteria specified

IP exception filter policy

Use the commands in the following context to create an IP exception filter policy.

configure filter ip-exception

The following example displays a template IP exception filter policy configuration.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    ip-exception "1" {
        description "IP-exception"
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
...
        ip-exception 1 create
            description "IP-exception"
            scope template
        exit
...
----------------------------------------------
IP exception entry matching criteria

Within an exception filter policy, configure exception entries that contain criteria against which ingress, egress, and network traffic is matched. Packets that match the entry criteria are allowed to transit the NGE domain in cleartext.

Configure the following to create an IP exception entry:

  1. Enter an exception filter entry ID. The system does not dynamically assign a value.

  2. Specify matching criteria.

Use the commands in the following context to configure the IP exception filter matching criteria.

configure filter ip-exception entry match

The following example shows an IP exception entry matching criteria configuration.

MD-CLI
*[ex:/configure filter ip-exception "2"]
A:admin@node-2# info
    description "exception-main"
    entry 1 {
        match {
            src-ip {
                address 10.10.10.10/32
            }
            dst-ip {
                address 10.10.10.91/24
            }
        }
    }
classic CLI
*A:node-2>config>filter>ip-except# info
----------------------------------------------
            description "exception-main"
            scope exclusive
            entry 1 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.10/32
                exit
            exit
----------------------------------------------

Creating an IPv6 exception filter policy

Configuring and applying IPv6 exception filter policies is optional. Each exception filter policy must have the following:

  • an exception filter policy ID

  • at least one filter entry with matching criteria specified

IPv6 exception filter policy

Use the commands in the following context to create an IPv6 exception filter policy.

configure filter ipv6-exception

The following example shows an IPv6 exception filter policy configuration.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    ipv6-exception "1" {
        description "IPv6-exception"
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
...
        ipv6-exception 1 create
            description "IPv6-exception"
        exit
...
----------------------------------------------
IPv6 exception entry matching criteria

Within an exception filter policy, configure exception entries that contain criteria against which ingress and network traffic is matched. Packets that match the entry criteria are allowed to transit the IPsec domain in cleartext.

Configure the following to create an IPv6 exception entry:

  1. Enter an exception filter entry ID. The system does not dynamically assign a value.

  2. Specify matching criteria.

Use the commands in the following context to configure IPv6 exception filter matching criteria.

configure filter ipv6-exception entry match

The following example shows an IPv6 exception entry matching criteria configuration.

MD-CLI
*[ex:/configure filter ipv6-exception "2"]
A:admin@node-2# info
    description "exception main"
    entry 1 {
        match {
            src-ip {
                address 2001:db8::2/128
            }
            dst-ip {
                address 2001:db8::1/128
            }
        }
    }
classic CLI
*A:node-2>config>filter>ipv6-except# info
----------------------------------------------
            description "exception-main"
            entry 1 create
                match
                    dst-ip 2001:db8::1/128
                    src-ip 2001:db8::2/128
                exit
            exit
----------------------------------------------

Creating a match list for filter policies

To create a match list, you must:

  1. Specify a type of a match list (for example, an IPv4 address prefix list).

  2. Define a unique match list name (for example, an IPv4-Deny-List).

  3. Specify at least one entry in the list (for example, a valid IPv4 prefix).

The following example shows the IPv4 prefix list configuration and its usage in an IPv4 filter policy.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
...
    match-list {
        ip-prefix-list "IPv4-Deny-List" {
            description "IPv4-Deny-list"
            prefix 10.0.0.0/21 { }
            prefix 10.254.0.0/24 { }
        }
    }
    ip-filter "ip-edge-filter" {
        scope template
        filter-id 10
        entry 10 {
            match {
                src-ip {
                    ip-prefix-list "IPv4-Deny-List"
                }
            }
            action {
                drop
            }
        }
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
      match-list
        ip-prefix-list "IPv4-Deny-List" create
           description "IPv4-Deny-list"
           prefix 10.0.0.0/21
           prefix 10.254.0.0/24
        exit
     exit
     ip-filter 10 name "ip-edge-filter" create
        scope template
        entry 10 create
           match
              src-ip ip-prefix-list "IPv4-Deny-List"
           exit
           action 
               drop
            exit
      exit
exit

---------------------------------------------

Applying filter policies

Filter policies can be associated with the entities listed in Applying filter policies.

Table 2. Applying filter policies
IPv4 and IPv6 Filter Policies                             MAC Filter Policies

Epipe SAP, spoke SDP                                      Epipe SAP, spoke SDP

IES interface SAP, spoke SDP, R-VPLS                      spoke SDP

VPLS mesh SDP, spoke SDP, SAP                             VPLS mesh SDP, spoke SDP, SAP

VPRN interface SAP, spoke SDP, R-VPLS, network ingress    spoke SDP

Network interface

Applying IPv4/IPv6 and MAC filter policies to a service

IP and MAC filter policies are applied by associating them with a SAP or spoke SDP in the ingress or egress direction, as needed. An existing filter policy is associated using its filter ID or, if defined, its filter name in the CLI.

IP and MAC filters assigned to an ingress and egress SAP and spoke SDP (MD-CLI)
*[ex:/configure service epipe "5"]
A:admin@node-2# info
    admin-state enable
...
    spoke-sdp 8:8 {
        ingress {
            filter {
                ip "epipe sap default filter"
            }
        }
        egress {
            filter {
                mac "91"
            }
        }
    }
    sap 1/1/1.1.1 {
        ingress {
            filter {
                ip "10"
            }
        }
        egress {
            filter {
                mac "92"
            }
        }
    }
IP and MAC filters assigned to an ingress and egress SAP and spoke SDP (classic CLI)
*A:node-2>config>service>epipe# info
----------------------------------------------
            sap 1/1/1.1.1 create
                ingress
                    filter ip 10
                exit
                egress
                    filter mac 92
                exit
            exit
            spoke-sdp 8:8 create
                ingress
                    filter ip "epipe sap default filter"
                exit
                egress
                    filter mac 91
                exit
            exit
            no shutdown
----------------------------------------------
IPv6 filters assigned to an IES service interface (MD-CLI)
*[ex:/configure service ies "1001"]
A:admin@node-2# info
    admin-state enable
    customer "1"
    interface "testA" {
        sap 2/1/3:0 {
            ingress {
                filter {
                    ipv6 "100"
                }
            }
            egress {
                filter {
                    ipv6 "100"
                }
            }
        }
        ipv4 {
            primary {
                address 192.22.1.1
                prefix-length 24
            }
        }
        ipv6 {
        }
    }
IPv6 filters assigned to an IES service interface (classic CLI)
*A:node-2>config>service# info
----------------------------------------------
        ies 1001 name "1001" customer 1 create
            interface "testA" create
                address 192.22.1.1/24
                ipv6
                exit
                sap 2/1/3:0 create
                    ingress
                        filter ipv6 100
                    exit
                    egress
                        filter ipv6 100
                    exit
                exit
                no shutdown
            exit
----------------------------------------------
Applying IPv4/IPv6 filter policies to a network port

IP filter policies can be applied to network IPv4 and IPv6 interfaces. MAC filters cannot be applied to network IP interfaces or to routable IES services. Similar to applying filter policies to a service, IPv4/IPv6 filter policies are applied to network interfaces by associating a policy with the ingress or egress direction as required. An existing filter policy is associated using its filter ID or, if defined, its filter name in the CLI.

IP filter applied to an interface at ingress (MD-CLI)
*[ex:/configure router "Base"]
A:admin@node-2# info
...
    interface "to-104" {
        port 1/1/1
        egress {
            filter {
                ip "default network egress policy"
            }
        }
        ingress {
            filter {
                ip "10"
            }
        }
        ipv4 {
            primary {
                address 10.0.0.103
                prefix-length 24
            }
        }
    }
...
IP filter applied to an interface at ingress (classic CLI)
*A:node-2>config>router# info
#------------------------------------------
# IP Configuration
#------------------------------------------
...
        interface "to-104"
            address 10.0.0.103/24
            port 1/1/1
            ingress
                filter ip 10
            exit
            egress
                filter ip "default network egress policy"
            exit
        exit
...
#------------------------------------------
IPv4 and IPv6 filters applied to an interface at ingress and egress (MD-CLI)
*[ex:/configure router "Base" interface "test1"]
A:admin@node-2# info
    port 1/1/1
    egress {
        filter {
            ip "2"
            ipv6 "1"
        }
    }
    ingress {
        filter {
            ip "2"
            ipv6 "1"
        }
    }
    ipv6 {
        address 3ffe::101:101 {
            prefix-length 120
        }
    }
IPv4 and IPv6 filters applied to an interface at ingress and egress (classic CLI)
*A:node-2>config>router>if# info
----------------------------------------------
            port 1/1/1
            ipv6
                address 3FFE::101:101/120
            exit
            ingress
                filter ip 2
                filter ipv6 1
            exit
            egress
                filter ip 2
                filter ipv6 1
            exit 
----------------------------------------------

Creating a redirect policy

Configuring and applying redirect policies is optional. Each redirect policy must have the following:

  • a destination IP address

  • a priority (default is 100)

Configuring a ping test is recommended.

The following example shows the configuration for a redirect policy.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    redirect-policy "redirect1" {
        admin-state enable
        destination 10.10.10.104 {
            priority 105
        }
        destination 10.10.10.105 {
            admin-state enable
            priority 95
            ping-test {
                timeout 30
                drop-count 5
            }
        }
        destination 10.10.10.106 {
            admin-state enable
            priority 90
        }
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
        redirect-policy "redirect1" create
            destination 10.10.10.104 create
                priority 105
                no shutdown
            exit
            destination 10.10.10.105 create
                priority 95
                ping-test
                    timeout 30
                    drop-count 5
                exit
                no shutdown
            exit
            destination 10.10.10.106 create
                priority 90
                no shutdown
            exit
...
----------------------------------------------

Configuring filter-based GRE tunneling

Traffic matching an IP filter can be tunneled with GRE using the following mechanisms:

  • Configure a GRE tunnel template.

  • Associate the GRE tunnel template with the forwarding action of an IPv4 or IPv6 filter using the forward gre-tunnel command.

The GRE tunnel template defines the command options to create the GRE header used to encapsulate matching IP traffic:

  • One or more destination IP addresses must be defined in the GRE tunnel template.

    • If more than one destination is configured, traffic is hashed across all available destinations.

    • GRE tunnel templates using IPv6 transport are limited to a single destination address.

    • Traffic is routed to the selected destination address based on the route table in the forwarding context of the IP filter.

  • The source address can be configured to any address and is not validated against a local IP address on the local router.

  • The optional GRE key command option can be populated with the ifIndex of the ingress interface on which the matching IP packet was received.

  • An optional template command, skip-ttl-decrement, prevents the TTL of the payload IP packet from being decremented when the packet is encapsulated into the GRE header.

The following example shows the configuration for an IPv4-based GRE tunnel template and an IPv6-based GRE tunnel template.

MD-CLI
*[ex:/configure filter]
A:admin@node-2# info
    ip-filter "1" {
        entry 1 {
            pbr-down-action-override forward
            action {
            }
        }
        entry 2 {
            action {
                forward {
                    gre-tunnel "greTunnel_ipv4"
                }
            }
        }
    }
    ip-filter "2" {
        entry 1 {
            action {
                forward {
                    gre-tunnel "greTunnel_ipv6"
                }
            }
        }
    }
    gre-tunnel-template "greTunnel_ipv4" {
        description "10.20.1.5"
        ipv4 {
            source-address 10.20.1.3
            destination-address 9.9.9.9 { }
            destination-address 10.20.1.5 { }
            destination-address 13.13.13.13 { }
        }
    }
    gre-tunnel-template "greTunnel_ipv6" {
        ipv6 {
            source-address 3ffe::a14:100
            gre-key if-index
            destination-address 3ffe::a01:102 { }
        }
    }
classic CLI
*A:node-2>config>filter# info
----------------------------------------------
 ...
        gre-tunnel-template "greTunnel_ipv4" create
            description "10.20.1.5"
            ipv4
                source-address 10.20.1.3
                destination-address 9.9.9.9
                destination-address 10.20.1.5
                destination-address 13.13.13.13
            exit
        exit
        gre-tunnel-template "greTunnel_ipv6" create
            ipv6
                gre-key if-index
                source-address 3ffe::a14:100
                destination-address 3ffe::a01:102
            exit
        exit
        ip-filter 1 name "1" create
            entry 1 create
                action
                exit
                pbr-down-action-override forward
            exit
            entry 2 create
                action
                    forward gre-tunnel "greTunnel_ipv4"
                exit
            exit
        exit
        ip-filter 2 name "2" create
            entry 1 create
                action
                    forward gre-tunnel "greTunnel_ipv6"
                exit
            exit
        exit
----------------------------------------------

Filter management tasks

This section describes filter policy management tasks.

Renumbering filter policy entries

The system exits the matching process when the first match is found and then executes the action specified in the matching entry. Because the ordering of entries is important, the numbering sequence may need to be rearranged. Entries should be numbered from the most explicit to the least explicit.

Renumbering filter policy entries (MD-CLI)

[ex:/configure filter ip-filter "11"]
A:admin@node-2# rename entry 10 to 15

[ex:/configure filter ip-filter "11"]
A:admin@node-2# rename entry 20 to 10

[ex:/configure filter ip-filter "11"]
A:admin@node-2# rename entry 40 to 1

Original filter numbers and updated filter numbers configuration (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# info
...
    ip-filter "11" {
        description "filter-main"
        scope exclusive
        entry 10 {
            description "no-91"
            filter-sample true
            interface-sample false
            match {
                src-ip {
                    address 10.10.10.103/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                forward {
                    redirect-policy "redirect1"
                }
            }
        }
        entry 20 {
            match {
                src-ip {
                    address 10.10.0.100/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 30 {
            match {
                src-ip {
                    address 10.10.0.200/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                accept
            }
        }
        entry 40 {
            match {
                src-ip {
                    address 10.10.10.106/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
    }
...
----------------------------------------------
*[ex:/configure filter]
A:admin@node-2# info
...
ip-filter "11" {
        description "filter-main"
        scope exclusive
        entry 1 {
            match {
                src-ip {
                    address 10.10.10.106/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 10 {
            match {
                src-ip {
                    address 10.10.0.100/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 15 {
            description "no-91"
            filter-sample true
            interface-sample false
            match {
                src-ip {
                    address 10.10.10.103/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                forward {
                    redirect-policy "redirect1"
                }
            }
        }
        entry 30 {
            match {
                src-ip {
                    address 10.10.0.200/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                accept
            }
        }
    }
...

Renumbering filter policy entries (classic CLI)

*A:node-2>config>filter>ip-filter# renum 10 15
*A:node-2>config>filter>ip-filter# renum 20 10
*A:node-2>config>filter>ip-filter# renum 40 1

Original filter numbers and updated filter numbers configuration (classic CLI)

*A:node-2>config>filter# info
----------------------------------------------
...
        ip-filter 11 create
            description "filter-main"
            scope exclusive
            entry 10 create
                description "no-91"
                filter-sample
                interface-disable-sample 
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.103/24
                exit
                action forward redirect-policy redirect1
            exit
            entry 20 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.100/24
                exit
                action drop
            exit
            entry 30 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.200/24
                exit
                action forward
            exit
            entry 40 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action drop
            exit
        exit
...
----------------------------------------------
*A:node-2>config>filter# info
----------------------------------------------
...
        ip-filter 11 create
            description "filter-main"
            scope exclusive
            entry 1 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action drop
            exit
            entry 10 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.100/24
                exit
                action drop
            exit
            entry 15 create
                description "no-91"
                filter-sample
                interface-disable-sample 
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.103/24
                exit
                action forward redirect-policy redirect1
            exit
            entry 30 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.200/24
                exit
                action forward
            exit
        exit
...
----------------------------------------------

Modifying a filter policy

There are several ways to modify an existing filter policy. A filter policy can be modified dynamically as part of subscriber management dynamic insertion or removal of filter policy entries (see the 7450 ESS, 7750 SR, and VSR Triple Play Service Delivery Architecture Guide for details). A filter policy can be modified indirectly by configuration change to a match list the filter policy uses (as described earlier in this guide). In addition, a filter policy can be directly edited as described in the following information.

To access a specific IPv4, IPv6, or MAC filter, you must specify the filter ID, or if defined, filter name.

Modifying a filter (MD-CLI)

In MD-CLI, you can use delete to remove a command option from the configuration.

[ex:/configure filter ip-filter "11"]
A:admin@node-2# description "New IP filter info"

[ex:/configure filter ip-filter "11"]
A:admin@node-2# entry 2

[ex:/configure filter ip-filter "11" entry 2]
A:admin@node-2# description "new entry"

[ex:/configure filter ip-filter "11" entry 2]
A:admin@node-2# action drop

[ex:/configure filter ip-filter "11" entry 2]
A:admin@node-2# match dst-ip address 10.10.10.104/32

[ex:/configure filter ip-filter "11" entry 2]
A:admin@node-2# exit

Modified IP filter output (MD-CLI)

[ex:/configure filter]
A:admin@node-2# info
    ip-filter "11" {
        description "New IP filter info"
        scope exclusive
        entry 1 {
            match {
                src-ip {
                    address 10.10.10.106/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 2 {
            description "new entry"
            match {
                dst-ip {
                    address 10.10.10.104/32
                }
            }
            action {
                drop
            }
        }
        entry 10 {
            match {
                src-ip {
                    address 10.10.0.100/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 15 {
            description "no-91"
            match {
                src-ip {
                    address 10.10.10.103/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                accept
            }
        }
        entry 30 {
            match {
                src-ip {
                    address 10.10.0.200/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                accept
            }
        }
    }

Modifying a filter (classic CLI)

In classic CLI, you can use the no form of a command to remove a command option or return it to its default.

*A:node-2>config>filter>ip-filter# description "New IP filter info"
*A:node-2>config>filter>ip-filter# entry 2 create
*A:node-2>config>filter>ip-filter>entry$ description "new entry"
*A:node-2>config>filter>ip-filter>entry# action drop
*A:node-2>config>filter>ip-filter>entry# match dst-ip 10.10.10.104/32
*A:node-2>config>filter>ip-filter>entry# exit

Modified IP filter output (classic CLI)

*A:node-2>config>filter# info
----------------------------------------------
...
        ip-filter 11 create
            description "New IP filter info"
            scope exclusive
            entry 1 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action drop
            exit
            entry 2 create
                description "new entry"
                match
                    dst-ip 10.10.10.104/32
                exit
                action drop
            exit
            entry 10 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.100/24
                exit
                action drop
            exit
            entry 15 create
                description "no-91"
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.103/24
                exit
                action forward
            exit
            entry 30 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.0.200/24
                exit
                action forward
            exit
        exit
...
----------------------------------------------
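
The no form referred to above is not shown in the preceding example. As an illustrative sketch only (not reflected in the output above), the following removes the description from entry 2, returning it to its default of no description:

*A:node-2>config>filter>ip-filter# entry 2
*A:node-2>config>filter>ip-filter>entry# no description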

Deleting a filter policy

Before deleting a filter policy, its associations must be removed from all SAPs and network interfaces where it is applied, in both the ingress and egress directions.

Removing the filter from an SAP or network interface (MD-CLI)

In MD-CLI, remove the filter by using the delete command in every context where the filter is applied.

[ex:/configure service]
A:admin@node-2# epipe 5

[ex:/configure service epipe "5"]
A:admin@node-2# sap 1/1/2:3

[ex:/configure service epipe "5" sap 1/1/2:3]
A:admin@node-2# ingress

[ex:/configure service epipe "5" sap 1/1/2:3 ingress]
A:admin@node-2# delete filter
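
If the filter is also applied to a router network interface, it must be removed from that interface context as well. The following is a sketch only; the interface name "to-core" is hypothetical and the exact context may differ in your configuration.

[ex:/configure router "Base" interface "to-core" ingress]
A:admin@node-2# delete filter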

After you have removed the filter from the SAPs and network interfaces, you can delete the filter. The following example shows the deletion of a filter.

Deleting a filter (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# delete ip-filter 11

Removing the filter from an SAP or network interface (classic CLI)

In classic CLI, remove the filter by using the no filter command in every context where the filter is applied.

*A:node-2>config>service# epipe 5
*A:node-2>config>service>epipe# sap 1/1/2:3
*A:node-2>config>service>epipe>sap# ingress
*A:node-2>config>service>epipe>sap>ingress# no filter
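
If the filter is also applied to a router network interface, remove it from that interface context as well. The following is a sketch only; the interface name "to-core" is hypothetical and the exact no form may differ by release.

*A:node-2>config>router# interface "to-core"
*A:node-2>config>router>if# ingress
*A:node-2>config>router>if>ingress# no filter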

After you have removed the filter from the SAPs and network interfaces, you can delete the filter. The following example shows the deletion of a filter.

Deleting a filter (classic CLI)

*A:node-2>config>filter# no ip-filter 11

Modifying a redirect policy

To access a specific redirect policy, the policy name must be specified.

Modifying a redirect policy (MD-CLI)

Use the delete command to remove command options or return them to their default values.

*[ex:/configure filter]
A:admin@node-2# redirect-policy redirect1

*[ex:/configure filter redirect-policy "redirect1"]
A:admin@node-2# description "New redirect info"

*[ex:/configure filter redirect-policy "redirect1"]
A:admin@node-2# destination 10.10.10.104

*[ex:/configure filter redirect-policy "redirect1" destination 10.10.10.104]
A:admin@node-2# priority 105

*[ex:/configure filter redirect-policy "redirect1" destination 10.10.10.104]
A:admin@node-2# ping-test timeout 20

*[ex:/configure filter redirect-policy "redirect1" destination 10.10.10.104]
A:admin@node-2# ping-test drop-count 7

Modified redirect policy output (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# info
    redirect-policy "redirect1" {
        admin-state enable
        description "New redirect info"
        destination 10.10.10.104 {
            admin-state enable
            priority 105
            ping-test {
                timeout 20
                drop-count 7
            }
        }
        destination 10.10.10.105 {
            admin-state enable
            priority 95
            ping-test {
                timeout 30
                drop-count 5
            }
        }
    }
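
The delete command referred to above is not shown in the preceding example. As an illustrative sketch only (not reflected in the output above), the following removes the description from the redirect policy, returning it to its default of no description:

*[ex:/configure filter redirect-policy "redirect1"]
A:admin@node-2# delete description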

Modifying a redirect policy (classic CLI)

Use the no form of a command to remove command options or return them to their default values.

*A:node-2>config>filter# redirect-policy redirect1
*A:node-2>config>filter>redirect-policy# description "New redirect info"
*A:node-2>config>filter>redirect-policy# destination 10.10.10.104
*A:node-2>config>filter>redirect-policy>dest# priority 105
*A:node-2>config>filter>redirect-policy>dest# ping-test timeout 20
*A:node-2>config>filter>redirect-policy>dest# ping-test drop-count 7

Modified redirect policy output (classic CLI)

A:node-2>config>filter# info
----------------------------------------------
...
        redirect-policy "redirect1" create
            description "New redirect info"
            destination 10.10.10.104 create
                priority 105
                ping-test
                    timeout 20
                    drop-count 7
                exit
                no shutdown
            exit
            destination 10.10.10.105 create
                priority 95
                ping-test
                    timeout 30
                    drop-count 5
                exit
                no shutdown
            exit
            no shutdown
        exit
...
----------------------------------------------
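
The no form referred to above is not shown in the preceding example. As an illustrative sketch only (not reflected in the output above), the following removes the description from the redirect policy, returning it to its default of no description:

*A:node-2>config>filter>redirect-policy# no description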

Deleting a redirect policy

Before a redirect policy can be deleted from the filter configuration, the policy association must be removed from the IP filter.

The following example shows the replacement of redirect policy "redirect1" with redirect policy "redirect2" in entry 1 of IP filter "11", followed by the removal of "redirect1" from the filter configuration.

Replacing and deleting a redirect policy (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# ip-filter 11

*[ex:/configure filter ip-filter "11"]
A:admin@node-2# entry 1

*[ex:/configure filter ip-filter "11" entry 1]
A:admin@node-2# action forward redirect-policy redirect2

*[ex:/configure filter ip-filter "11" entry 1]
A:admin@node-2# exit

*[ex:/configure filter ip-filter "11"]
A:admin@node-2# exit

*[ex:/configure filter]
A:admin@node-2# delete redirect-policy redirect1

Output after deleting a redirect policy (MD-CLI)

*[ex:/configure filter ip-filter "11"]
A:admin@node-2# info
    description "This is new"
    scope exclusive
    entry 1 {
        filter-sample true
        interface-sample false
        match {
            src-ip {
                address 10.10.10.106/24
            }
            dst-ip {
                address 10.10.10.91/24
            }
        }
        action {
            forward {
                redirect-policy "redirect2"
            }
        }
    }
    entry 2 {
        description "new entry"
...

Replacing and deleting a redirect policy (classic CLI)

*A:node-2>config>filter# ip-filter 11
*A:node-2>config>filter>ip-filter# entry 1
*A:node-2>config>filter>ip-filter>entry# action forward redirect-policy "redirect2"
*A:node-2>config>filter>ip-filter>entry# exit
*A:node-2>config>filter>ip-filter# exit
*A:node-2>config>filter# no redirect-policy "redirect1"

Output after deleting a redirect policy (classic CLI)

*A:node-2>config>filter>ip-filter# info
----------------------------------------------
            description "This is new"
            scope exclusive
            entry 1 create
                filter-sample
                interface-disable-sample
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action forward redirect-policy redirect2
            exit
            entry 2 create
                description "new entry"
...
----------------------------------------------

Copying filter policies

When changes need to be made to an existing filter policy that is applied to one or more SAPs or network interfaces, Nokia recommends first copying the applied filter policy, modifying the copy, and then overwriting the applied policy with the modified copy. This ensures that a partially modified policy is never active, because any edits to a filter policy take effect immediately on all services where the policy is applied.

New filter policies can also be created by copying an existing policy and renaming the new filter.

The following example shows how the configuration of existing IP filter policy "11" is copied to create a new filter policy "12" that can then be edited. After the edits are completed, the modified policy can be copied back to overwrite existing policy "11".

Copying a filter policy (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# copy ip-filter 11 to ip-filter 12

Copied filter policy output (MD-CLI)

*[ex:/configure filter]
A:admin@node-2# info
   ip-filter "11" {
        description "This is new"
        scope exclusive
        entry 1 {
            match {
                src-ip {
                    address 10.10.10.106/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 2 {
...
    ip-filter "12" {
        description "This is new"
        scope exclusive
        entry 1 {
            match {
                src-ip {
                    address 10.10.10.106/24
                }
                dst-ip {
                    address 10.10.10.91/24
                }
            }
            action {
                drop
            }
        }
        entry 2 {
...

Copying a filter policy (classic CLI)

*A:node-2>config>filter# copy ip-filter 11 to 12

Copied filter policy output (classic CLI)

*A:node-2>config>filter# info
----------------------------------------------
...
        ip-filter 11 create
            description "This is new"
            scope exclusive
            entry 1 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action drop
            exit
            entry 2 create
...
        ip-filter 12 create
            description "This is new"
            scope exclusive
            entry 1 create
                match
                    dst-ip 10.10.10.91/24
                    src-ip 10.10.10.106/24
                exit
                action drop
            exit
            entry 2 create
...
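
After policy "12" has been edited, the modified policy can be copied back over the applied policy "11", as described above. The following is a sketch of that overwrite step in classic CLI, assuming the overwrite keyword of the copy command is available in your release; without it, the copy is expected to fail because destination policy "11" already exists.

*A:node-2>config>filter# copy ip-filter 12 to 11 overwrite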