Usage

OpenStack usage for OpenStack managed networks

The Connect-based OpenStack solution is fully compatible with upstream OpenStack: the standard API commands for creating networks and subnets remain valid.
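
For example, an administrator could create a fabric-offloaded VLAN network and a subnet in it with nothing more than the standard commands shown below; the physical network name, VLAN ID, and object names are illustrative values, not requirements of the solution:

  openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 100 vlan-network-1
  openstack subnet create --network vlan-network-1 --subnet-range 10.10.1.0/24 vlan-subnet-1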

When mapping to Connect:

  • The network in OpenStack is created within the user's project. If the network is created by an administrator, the administrator can optionally select the project to which the network belongs.

    This OpenStack project appears as a tenant in Connect. This tenant is created when the first OpenStack network is created in this project, and it is deleted when the last network in this project is deleted.

  • Only VLAN-based networks (where segmentation_type == 'vlan') are handled by the Fabric Services System ML2 plugin, meaning only VLAN networks are HW-offloaded to the fabric managed by the Fabric Services System.

    Neutron defines the default segmentation type for tenant networks. If you want non-administrators to create their own networks that are offloaded to the Fabric Services System fabric, the default segmentation type must be set to 'vlan' (see the configuration sketch after this list).

    However, if only administrators need this capability, then the segmentation type can be left at its default, since administrators can specify the segmentation type as part of network creation.

  • The plugin only supports Layer 2 Fabric Services System fabric offloaded networking. When a Layer 3 model is defined in OpenStack (using routers and router attachments), that model works, but Layer 3 remains software-based, following the native OpenStack implementation.

    Layer-3-associated features, like floating IPs, also work, but they too are software-based, following the native OpenStack implementation.

    Note: This is tested with the traditional non-Distributed Virtual Routing (DVR) implementation only; DVR has not been tested, and might not work.
  • Untagged networks are not supported in this release.
  • MTU provisioning is not supported in this release. That is, MTU information is not programmed into the fabric.
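
The segmentation type for tenant networks is part of the Neutron ML2 configuration. The following minimal sketch shows the relevant ml2_conf.ini settings, assuming a standard Neutron ML2 deployment; the physical network name and VLAN range are placeholders:

  [ml2]
  tenant_network_types = vlan

  [ml2_type_vlan]
  network_vlan_ranges = physnet1:100:199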

OpenStack usage for Fabric Services System managed networks

In addition to the OpenStack managed networking model, Connect supports networking managed by the Fabric Services System. In this case, the Workload EVPN and subnet resources are created first in the Fabric Services System directly, and then are consumed in OpenStack.

To support Fabric Services System managed networking, two pairs of proprietary extension attributes have been added to the Neutron network resource:

  • when using the UUIDs of the Workload Intent and Subnet: fss-workload-evpn and fss-subnet
  • when using the names of the Workload Intent and Subnet: fss-workload-evpn-name and fss-subnet-name

To consume these resources in OpenStack, follow these steps:
  1. Using the Fabric Services System intent-based REST API or the UI, create a Workload EVPN and one or more bridged subnets within that workload.
  2. Obtain the UUIDs of these resources.
    As an example, assume the Workload Intent has the name fss-evpn-1 and the UUID 410559942890618880, and the Subnet within it has the name fss-subnet-1 and the UUID 410560233203564544.
  3. Choose one of the following:
    • To complete the configuration using resource UUIDs, go to step 4.
    • To complete the configuration using object names, go to step 5.
  4. Complete the configuration using object UUIDs:
    1. In OpenStack, create the consuming network, linking to these pre-created entities.
      To continue the example, assume a network named os-network-1 is created within a project named os-project. Using the previous example values for the Workload EVPN and subnet, the command for this step would be:
      openstack network create --project os-project --fss-workload-evpn 410559942890618880 --fss-subnet 410560233203564544 os-network-1
    2. Create a subnet within this network using the standard API syntax. For example:
      openstack subnet create --network os-network-1 --subnet-range 10.10.1.0/24 os-subnet-1
      
  5. Complete the configuration using object names:
    1. In OpenStack, create the consuming network, linking to these pre-created entities.
      Assume the Workload EVPN is named fss-evpn-1 and the Subnet underneath is named fss-subnet-1. Further assume a network named os-network-1 is created within a project named os-project. Using these example values, the command for this step would be:
      openstack network create --project os-project --fss-workload-evpn-name fss-evpn-1 --fss-subnet-name fss-subnet-1 os-network-1
    2. Create a subnet within this network using the standard API syntax. For example:
      openstack subnet create --network os-network-1 --subnet-range 10.10.1.0/24 os-subnet-1
      

Creating a Neutron port and Nova server within this network and subnet also uses standard APIs.
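
For example, continuing with the names used in the previous steps, a port and a server could be created as follows; the image and flavor names are placeholders for whatever is available in the deployment:

  openstack port create --network os-network-1 os-port-1
  openstack server create --image <image> --flavor <flavor> --nic port-id=os-port-1 os-server-1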

When mapping these objects to Connect:

  • In Fabric Services System managed networking, the relationship between OpenStack projects and the Connect tenant disappears because the Connect tenant is dictated by the --fss-workload-evpn/fss-workload-evpn-name parameter set.

    This means that a new Connect tenant is created for every distinct --fss-workload-evpn/fss-workload-evpn-name value passed to network create.

    For identical --fss-workload-evpn/fss-workload-evpn-name values passed to different network create commands, the Connect tenant resource is reused (see the sketch after this list).

  • From the OpenStack point of view, Layer 3 is not supported in Fabric Services System managed networking. A subnet within a Fabric Services System managed network cannot be router-associated.

    From the Fabric Services System point of view, routed subnets defined within a Workload EVPN can be consumed in OpenStack through a Fabric Services System managed network.

    In this case, the Layer 3 forwarding and routing is offloaded to the Fabric Services System managed fabric, unlike the software-based model that applies for Layer 3 in OpenStack managed networking.
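
As a sketch of this mapping, both commands below pass the same Workload EVPN UUID from the earlier example, so they share a single Connect tenant; a different UUID in a later command would create an additional tenant. The second subnet UUID is a placeholder:

  openstack network create --project os-project --fss-workload-evpn 410559942890618880 --fss-subnet 410560233203564544 os-network-1
  openstack network create --project os-project --fss-workload-evpn 410559942890618880 --fss-subnet <other-subnet-uuid> os-network-2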

Edge topology introspection

Edge topology information is introspected using LLDP, so LLDP must be enabled on the OpenStack computes. For OVS computes, LLDP is enabled automatically by the nic-mapping Layer 2 agent extension. For more information about the nic-mapping agent, see NIC mapping.

NIC mapping

The following cases can apply for managing NIC mapping:

  • Automated NIC mapping through introspection by the extended Layer 2 agent on the compute:

    When the nic-mapping agent extension is configured, the Neutron OVS agent will inspect its configuration on startup and enable LLDP Tx on the interfaces under any OVS bridges it finds.

    It will also persist the <physnet> ↔ <compute, interface> relation in the Neutron database so it can wire Neutron ports properly in the fabric.

    Known interface mappings can be consulted using the CLI command openstack fss interface mapping list.

    Note: Enabling LLDP Tx from the OVS agent extension is supported only for DPDK interfaces. For enabling LLDP in other cases, see Compute requirements.
  • Programmed NIC mapping via Neutron API

    If you do not want to enable the OVS Layer 2 agent extension, you can provision interface mappings manually using the command openstack fss interface mapping create (see the sketch after this list).

    Note: To support this operation, the following configuration must be present in the ML2 plugin configuration file:
    [DEFAULT]
    nic_mapping_provisioning = True
Note: Nokia does not recommend combining the Layer 2 agent extension with manual provisioning, because when the agent starts or restarts, it clears entries that were provisioned manually.
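
A minimal sketch of the two CLI operations mentioned above follows. The list command is the one documented in this section; the options shown for the create command are hypothetical placeholders, so check openstack fss interface mapping create --help for the exact syntax of your installation:

  # Consult the known <physnet> <-> <compute, interface> mappings:
  openstack fss interface mapping list
  # Hypothetical example of manual provisioning (option names are assumptions):
  openstack fss interface mapping create --physnet physnet1 --host compute-0 --interface eth2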

SR-IOV NICs

SR-IOV NICs are supported in the OpenStack/Connect/Fabric Services System solution. They follow the standard OpenStack model, for example using the vnic_type 'direct'.

If physical NICs on the compute are used by SR-IOV only, so that their PF is not part of the OVS agent configuration, an LLDP agent such as lldpad or lldpd must be installed and running on the compute to provide neighbour information to the fabric. This does not occur automatically.
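
For example, an SR-IOV port and an instance using it could be created with the standard attributes as follows; the network, image, and flavor names are illustrative placeholders:

  openstack port create --network os-network-1 --vnic-type direct sriov-port-1
  openstack server create --image <image> --flavor <flavor> --nic port-id=sriov-port-1 sriov-server-1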

Bonded interfaces

There are two types of bonded interfaces:

  • Linux bonds
  • OVS bonds

Linux bonds can be applied on VIRTIO computes. In the case of DPDK computes, OVS bonds can be applied.

Both Linux and OVS bonds come as Active/Active (A/A) or Active/Standby (A/S).

Linux bonds can be deployed as static LAGs or dynamic LAGs using LACP.

OVS DPDK can only work with dynamic LAGs using LACP.

For computes configured with OVS DPDK in Active/Standby (A/S) mode, the preferred_active setting should select the device that has the active DPDK port.

Note: In the current release, only Active/Active LACP LAGs are supported. Linux mode-1 bonds (Active/Standby) are not yet supported. LACP LAGs also require the appropriate configuration to be present in the fabric configuration in the Fabric Services System, as described in LACP LAGs.
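
For the supported Active/Active LACP case on an OVS DPDK compute, the bond might be created as in the following sketch; the bridge name, bond name, and PCI device addresses are assumptions that depend on the compute:

  ovs-vsctl add-bond br-phy dpdkbond0 dpdk0 dpdk1 bond_mode=balance-tcp lacp=active \
      -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:3b:00.0 \
      -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:3b:00.1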

Trunk ports

The network trunk service allows multiple networks to be connected to an instance using a single virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port.

For information about configuring and operating the network trunk service, please see https://docs.openstack.org/neutron/train/admin/config-trunking.html

Trunking is supported for VIRTIO, DPDK, and SRIOV.

  • When the vnic_type is set to "normal ports" (VIRTIO/DPDK), trunking is supported through an upstream openvswitch trunk driver.
  • When the vnic_type is set to "direct ports" (SRIOV), trunking is supported by the Fabric Services System ML2 plugin trunk driver.

When using trunks with SRIOV, some limitations apply compared to the upstream OVS trunk model (a combined sketch follows this list):

  • The parent port of the trunk and all subports must be created with the vnic_type "direct".
  • To avoid the need for QinQ on switches, trunks for SRIOV instances must be created with a parent port belonging to a flat network (untagged).
  • If multiple projects within a deployment must be able to use trunks, the neutron network must be created as shared (using the --share attribute).
  • When adding subports to a trunk:
    • you must specify their segmentation-type as VLAN
    • their segmentation-id must match the segmentation ID of the neutron network to which the subport belongs
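
The following sketch combines these requirements for an SRIOV trunk; the flat provider network, the VLAN network used for the subport, and the object names are illustrative assumptions, and the subport's segmentation-id must match the VLAN ID of the network the subport belongs to:

  openstack network create --provider-network-type flat --provider-physical-network physnet1 --share trunk-flat-net
  openstack port create --network trunk-flat-net --vnic-type direct trunk-parent-port
  openstack network trunk create --parent-port trunk-parent-port trunk-1
  openstack port create --network <vlan-network> --vnic-type direct subport-1
  openstack network trunk set --subport port=subport-1,segmentation-type=vlan,segmentation-id=<vlan-id-of-that-network> trunk-1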