Usage

OpenStack usage for OpenStack managed networks

The Connect-based solution is fully compatible with standard OpenStack: the standard API commands for creating networks and subnets remain valid.
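For example, a tenant network and subnet can be created with the usual commands; the names and address range below are illustrative only:

    openstack network create my-network
    openstack subnet create --network my-network --subnet-range 192.0.2.0/24 my-subnet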

When mapping to Connect:

  • The network in OpenStack is created within the user's project. If the network is created by an administrator, the administrator can optionally select the project to which the network belongs.

    This OpenStack project appears as a tenant in Connect. This tenant is created when the first OpenStack network is created in this project, and it is deleted when the last network in this project is deleted.

  • Only VLAN-based networks (where segmentation_type == 'vlan') are handled by the Fabric Services System ML2 plugin, meaning only VLAN networks are HW-offloaded to the fabric managed by the Fabric Services System.

    Neutron defines the default segmentation type for tenant networks. If you want non-administrators to create their own networks that are offloaded to the Fabric Services System fabric, the segmentation type must be set to 'vlan'.

    However, if only administrators need this capability, then the segmentation type can be left at its default, since administrators can specify the segmentation type as part of network creation, as shown in the example after this list.

  • The plugin only supports Layer 2 Fabric Services System fabric-offloaded networking. When a Layer 3 model is defined in OpenStack (using a router and router attachments), this model works, but Layer 3 is software-based according to the native OpenStack implementation.

    Layer-3-associated features, like floating IPs, will also work; but these are only software-based according to the native OpenStack implementation.

    Note: This is tested with the traditional non-Distributed Virtual Routing (DVR) implementation only; DVR has not been tested, and might not work.
  • Untagged networks are not supported in this release.
  • MTU provisioning is not supported in this release. That is, MTU information is not programmed into the fabric.
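As an illustration of the VLAN requirement above, an administrator can create a network that is offloaded to the fabric by specifying the segmentation type explicitly. The physical network name and VLAN ID here are examples only:

    openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 100 offloaded-network

To let non-administrators create offloaded networks, tenant networks can be defaulted to VLAN in the Neutron ML2 configuration, for example:

    [ml2]
    tenant_network_types = vlan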

OpenStack usage for Fabric Services System managed networks

In addition to the OpenStack managed networking model, Connect supports Fabric Services System managed networking. In this case, the Workload VPN and subnet resources are created first in the Fabric Services System directly, and then are consumed in OpenStack.

To support Fabric Services System managed networking, two proprietary extensions have been added to the Neutron network resource:

  • fss-workload-evpn
  • fss-subnet

Fabric Services System managed networking uses the following management flow:

  1. In the Fabric Services System, create a Workload VPN and one or more bridged subnets within that workload using the Fabric Services System intent-based REST API or the UI.

  2. Obtain the UUIDs of these resources; a sketch of retrieving them follows these steps.

    As an example, assume the Workload VPN has the UUID 410559942890618880 and the subnet within it has the UUID 410560233203564544.

  3. In OpenStack, create the consuming network, linking to these pre-created entities.

    To continue the example, assume a network named demo-network is created within a project named demo-project. Using the previous example values for the VPN and subnet, the command for this step would be:

    openstack network create --project demo-project --fss-workload-evpn 410559942890618880 --fss-subnet 410560233203564544 demo-network
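The UUIDs in step 2 can be copied from the Fabric Services System UI or retrieved through its REST API. The sketch below is illustrative only; the host name, endpoint path, and token handling shown are assumptions, not the documented API:

    # Hypothetical example only: list workload intents and print their UUIDs
    curl -ks -H "Authorization: Bearer $TOKEN" https://fss.example.com/rest/workload-intents | jq -r '.[].uuid'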

Creating a subnet within this network follows the standard API syntax. For example:

    openstack subnet create --network demo-network --subnet-range 10.10.1.0/24 demo-subnet

Neutron port and Nova server creation within this network and subnet also use the standard APIs.
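For example, continuing with the demo-network and demo-subnet created above (the flavor and image names are illustrative):

    openstack port create --network demo-network demo-port
    openstack server create --flavor m1.small --image cirros --nic port-id=demo-port demo-server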

When mapping these objects to Connect:

  • In Fabric Services System managed networking, the relationship between OpenStack projects and the Connect tenant disappears because the Connect tenant is dictated by the --fss-workload-evpn parameter.

    This means that a new Connect tenant is created for every fss-workload-evpn passed to network create.

  • From the OpenStack point of view, Layer 3 is not supported in Fabric Services System managed networking. A subnet within a Fabric Services System managed network cannot be router-associated.

    From the Fabric Services System point of view, routed subnets defined within a Workload VPN can be consumed in OpenStack through a Fabric Services System managed network.

    In this case, the Layer 3 forwarding and routing is offloaded to the Fabric Services System managed fabric, unlike the software-based model that applies for Layer 3 in OpenStack managed networking.

Edge topology introspection

Edge topology information is introspected using LLDP, so LLDP must be enabled on the OpenStack computes. For OVS computes, LLDP is enabled automatically by the nic-mapping Layer 2 agent extension. For more information about the nic-mapping agent, see NIC mapping.

NIC mapping

The following cases can apply for managing NIC mapping:

  • Automated NIC mapping based on introspection by the extended Layer 2 agent on the compute:

    When the nic-mapping agent extension is configured, the Neutron OVS agent will inspect its configuration on startup and enable LLDP Tx on the interfaces under any OVS bridges it finds.

    It will also persist the <physnet> ↔ <compute, interface> relation in the Neutron database so it can wire Neutron ports properly in the fabric.

    Known interface mappings can be consulted using the CLI command openstack fss interface mapping list.

    Note: Enabling LLDP Tx from the OVS agent extension is supported only for DPDK interfaces. For enabling LLDP in other cases, see Compute requirements.
  • Programmed NIC mapping via Neutron API

    If you do not want to enable the OVS Layer 2 agent extension, you can provision interface mappings manually using the command openstack fss interface mapping create; a sketch follows at the end of this section.

    Note: To support this operation, the following configuration should be present in the ML2 plugin configuration file:
    [DEFAULT]
    nic_mapping_provisioning = True
Note: Nokia does not recommend combining the Layer 2 agent extension with manual provisioning, because when the agent starts or restarts, it clears entries that were provisioned manually.
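As a sketch of the manual flow, a mapping between a compute interface and its physical network is provisioned and then verified. The option names below are assumptions for illustration; consult openstack fss interface mapping create --help for the exact syntax:

    # Hypothetical option names shown for illustration only
    openstack fss interface mapping create --host compute-1 --interface ens1f0 --physnet physnet1
    openstack fss interface mapping list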

SR-IOV NICs

SR-IOV NICs are supported in the OpenStack/Connect/Fabric Services System solution. They follow the standard OpenStack model, for example using the vnic_type 'direct'.

If there are physical NICs on the compute that are used by SR-IOV only, so that the PF is not in the OVS agent configuration, an LLDP agent such as lldpad or lldpd must be installed and running on the compute to provide neighbour information to the fabric. This does not occur automatically.
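For example, an SR-IOV port is created using the standard model (the network and port names are illustrative):

    openstack port create --network demo-network --vnic-type direct sriov-port

A minimal sketch of enabling LLDP manually with lldpad on such a compute, assuming the SR-IOV PF is ens1f0:

    dnf install -y lldpad
    systemctl enable --now lldpad
    lldptool set-lldp -i ens1f0 adminStatus=rxtx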

Bonded interfaces

There are two types of bonded interfaces:

  • Linux bonds
  • OVS bonds

Linux bonds can be applied on VIRTIO computes; OVS bonds can be applied on DPDK computes.

Both Linux and OVS bonds come as Active/Active (A/A) or Active/Standby (A/S).

Linux bonds can be deployed as static LAGs or dynamic LAGs using LACP.
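A minimal sketch of a dynamic (LACP) Linux bond created with NetworkManager; the interface names are examples:

    # mode 802.3ad = dynamic LAG using LACP
    nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli connection add type ethernet con-name bond0-port1 ifname ens1f0 master bond0
    nmcli connection add type ethernet con-name bond0-port2 ifname ens1f1 master bond0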

OVS DPDK can only work with dynamic LAGs using LACP.

For computes configured with OVS DPDK in Active/Standby (A/S) mode, the preferred_active setting should select the device that has the active DPDK port.

Note: In the current release, only Active/Active LACP LAGs are supported; Linux mode-1 (Active/Standby) bonds are not yet supported. LACP LAGs also require the appropriate configuration to be present in the Fabric Services System fabric configuration, as described in LACP LAGs.
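A minimal sketch of an Active/Active LACP OVS DPDK bond; the bridge name, port names, and PCI addresses are examples only:

    ovs-vsctl add-bond br-phy dpdkbond0 dpdk0 dpdk1 bond_mode=balance-tcp lacp=active -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:3b:00.0 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:3b:00.1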

Trunk ports

Trunk ports are supported in the OpenStack/Connect/Fabric Services System solution. They follow the standard OpenStack model.
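For example, assuming a parent port and a child port already exist in the relevant networks, a trunk can be created and a VLAN subport attached with the standard commands; the names and VLAN ID are illustrative:

    openstack network trunk create --parent-port parent-port trunk0
    openstack network trunk set --subport port=child-port,segmentation-type=vlan,segmentation-id=100 trunk0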