Overview

Wavence LAGs

A link aggregation group (LAG) is a set of ports used to interconnect network nodes over multiple links, increasing the capacity and availability of the connection between them. LAGs also provide redundancy: if a link fails, traffic is redirected onto the remaining link or links. You can select a LAG as the terminating port when creating a network interface as part of an L3 service.

LAGs are represented in the NFM-P GUI as navigation tree objects located below a device icon.

You can configure LAGs using the configuration forms available when you choose Create LAG from the LAG object navigation tree contextual menu. See the “Logical group object configuration” chapter in the NSP NFM-P Classic Management User Guide for more information.

Supported LAG types

The NFM-P supports discovery of LAG associations from the NFM-P or the Wavence element managers for equipment functions on the Wavence.

You can configure the following LAG types for the Wavence:

  • L1 Radio LAGs

  • L2 Radio LAGs

Service configuration is supported across L1 and L2 Radio LAGs using the existing suite of service functionality. For example, support is provided for detecting end-to-end bandwidth on the VLAN path and for correlating link-level alarms up to paths that include L1 Radio LAGs.

The NFM-P supports L1 LAGs created in the following configurations using a Wavence element manager:

  • intra plug-in LAGs

  • cross plug-in LAGs

An intra plug-in LAG is a LAG with MPT-HLS configured on the same card. A cross plug-in LAG is a LAG with MPT-HLS configured on two EASv2 cards on the same MSS row.

L1 Radio port deployment guidelines

An L1 Radio LAG follows a deployment model similar to that of an L2 Radio LAG, except that the L1 Radio LAG functions are deployed at the Radio layer. As a result, the L1 Radio LAG has different port associations, cross-connections, and validations from the L2 Radio LAG. The advantage of an L1 Radio LAG is that highly correlated upper-layer traffic can still be distributed across the member links; in an L2 Radio LAG, such traffic (for example, an LSP) hashes to the same port. Both L1 and L2 Radio LAGs must be configured using a Wavence element manager. See the Wavence documentation for more information.
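The hashing limitation described above can be illustrated with a minimal sketch. This is not the Wavence implementation; the function name and hash scheme are hypothetical, and it only shows why flow-hash-based (L2-style) load balancing pins a single large flow to one member port.

```python
def l2_hash_port(flow_key: str, num_ports: int) -> int:
    """Hash a flow identifier (e.g. an LSP label) to one LAG member port."""
    # Hypothetical hash scheme for illustration only.
    return hash(flow_key) % num_ports

# A single LSP presents one flow key, so it always lands on the same member
# port: its throughput is capped at one link's capacity even though the LAG
# aggregates several links.
ports = {l2_hash_port("lsp-101", 4) for _ in range(1000)}
assert len(ports) == 1
```

An L1 Radio LAG avoids this by distributing traffic below the flow-hashing layer, so correlated upper-layer traffic is not confined to a single link.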

Applying drop priority to SDH data flow cross-connections on L1 Radio LAGs

You can create a cross-connection of SDH data flow from an SDHACC card to an L1 Radio LAG on an EASv2 card. The maximum number of SDH data flow cross-connections to an L1 LAG is 16. The SDH data flow cannot be cross-connected to any other LAG.

If the LAG rate drops below the bandwidth required by the SDH data flows, congestion and frame loss can occur. To prevent this, you can configure a subset of the SDH data flows cross-connected to the LAG to be dropped, ensuring that enough bandwidth remains to transmit the other flows.

You can use the Drop Priority parameter on the Microwave Backhaul Service GUI forms (at either the service level or the individual site level) to define the precedence a flow takes over other flows in the event of congestion. See To configure microwave backhaul services for information about configuring the Drop Priority parameter.

By default, the drop priority of each SDH data flow is 255 when the cross-connection is created. Drop priority can be configured for each SDH data flow. SDH data flows with the highest drop priority configured are dropped first.

Note: You must create an SDH2SDH service before configuring the drop priority value.
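The drop rule above can be sketched as follows. This is a hedged illustration of the stated behavior, not the Wavence algorithm: the flow records, field names, and greedy selection are assumptions; only the rule that the highest drop priority is dropped first comes from the text.

```python
def flows_to_keep(flows, lag_rate_mbps):
    """flows: list of (name, bandwidth_mbps, drop_priority) tuples.

    Flows with the lowest drop priority are kept longest, so sort
    ascending by priority and keep flows while bandwidth remains.
    """
    kept, used = [], 0.0
    for name, bw, prio in sorted(flows, key=lambda f: f[2]):
        if used + bw <= lag_rate_mbps:
            kept.append(name)
            used += bw
    return kept

# Three 150 Mb/s flows on a LAG degraded to 320 Mb/s: the flow with the
# default priority of 255 is dropped first.
flows = [("stm1-a", 150, 10), ("stm1-b", 150, 255), ("stm1-c", 150, 100)]
print(flows_to_keep(flows, lag_rate_mbps=320))  # ['stm1-a', 'stm1-c']
```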

Drop priority deployment guidelines:

  1. Configuration and reporting of drop priority can be performed at either a service or individual site level.

  2. Cross-connections of SDH data flows to L1 Radio LAGs are supported on the following MPT types on the EASv2:

    • MPT-HLS (version 5.2.1 or later)

    • MPT-xC / HQAM (version 6.0 or later)

  3. If you configure the drop priority at the service level, this value is propagated to all sites that join the service.

  4. On service discovery, if a mismatch is found between the service level and site level drop priority value, a serviceDropPriorityMismatch alarm is raised. You can clear the alarm by:

    • modifying the drop priority at each site to align with the value configured at the service level

    • modifying the drop priority at the service level, which in turn is propagated to all sites

  5. If two or more SDH data flows originate from the same site, apply a unique drop priority value to each flow. If the SDH data flows have the same drop priority value, a siteDropPriorityMismatch alarm is raised.
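The two mismatch checks in guidelines 4 and 5 can be sketched as a simple validation. The alarm names come from the text above; the data shapes and the function itself are illustrative assumptions, not an NFM-P API.

```python
def check_drop_priorities(service_priority, site_priorities, site_flows):
    """site_priorities: {site: drop_priority}; site_flows: {site: [drop_priority, ...]}."""
    alarms = []
    # Guideline 4: each site-level value must match the service-level value.
    for site, prio in site_priorities.items():
        if prio != service_priority:
            alarms.append(("serviceDropPriorityMismatch", site))
    # Guideline 5: flows originating at the same site need unique priorities.
    for site, prios in site_flows.items():
        if len(prios) != len(set(prios)):
            alarms.append(("siteDropPriorityMismatch", site))
    return alarms

print(check_drop_priorities(100, {"site-1": 100, "site-2": 120},
                            {"site-1": [10, 10]}))
# [('serviceDropPriorityMismatch', 'site-2'), ('siteDropPriorityMismatch', 'site-1')]
```

Clearing either alarm corresponds to making the offending values consistent, as described in guideline 4.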