Usage
OpenStack usage for OpenStack managed networks
The Connect-based OpenStack solution is fully compatible with OpenStack. The standard API commands for creating networks and subnets remain valid.
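For example (the network and subnet names, physical network, VLAN ID, and CIDR below are illustrative), an administrator could create a VLAN network and a subnet with the standard commands:
openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 100 net-1
openstack subnet create --network net-1 --subnet-range 10.10.1.0/24 subnet-1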
When mapping to Connect:
- The network in OpenStack is created within the user's project. If the network is created by an administrator, the administrator can optionally select the project to which the network belongs. This OpenStack project appears as a tenant in Connect. The tenant is created when the first OpenStack network is created in the project, and it is deleted when the last network in the project is deleted.
- Only VLAN-based networks (where segmentation_type == 'vlan') are handled by the Fabric Services System ML2 plugin, meaning only VLAN networks are HW-offloaded to the fabric managed by the Fabric Services System.
Neutron defines the default segmentation type for tenant networks. If you want non-administrators to create their own networks that are offloaded to the Fabric Services System fabric, the default segmentation type must be set to 'vlan' (see the configuration sketch after this list). However, if only administrators need this capability, the segmentation type can be left at its default, since administrators can specify the segmentation type as part of network creation.
- The plugin only supports Layer 2 Fabric Services System fabric offloaded networking. When a Layer 3 model is defined in OpenStack (using routers and router attachments), the model works, but Layer 3 is software-based according to the native OpenStack implementation. Layer-3-associated features, like floating IPs, also work, but these too are only software-based according to the native OpenStack implementation.
Note: This has been tested with the traditional non-Distributed Virtual Routing (DVR) implementation only; DVR has not been tested and might not work.
- Untagged networks are not supported in this release.
- MTU provisioning is not supported in this release. That is, MTU information is not programmed into the fabric.
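As a sketch of the segmentation type setting mentioned above (the file path follows the standard Neutron ML2 layout; adjust for your deployment), the default tenant network type can be set to VLAN as follows:
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vlan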
OpenStack usage for Fabric Services System managed networks
To support Fabric Services System managed networking, two pairs of proprietary extensions have been added to the Neutron network resource:
- fss-workload-evpn and fss-subnet, when using the UUIDs of the Workload Intent and Subnet
- fss-workload-evpn-name and fss-subnet-name, when using the names of the Workload Intent and Subnet
- Using the Fabric Services System intent-based REST API or the UI, create a Workload VPN and one or more bridged subnets within that workload.
- Obtain the UUIDs of these resources. As an example, assume the Workload Intent has the name fss-evpn-1 and the UUID 410559942890618880, and the Subnet within it has the name fss-subnet-1 and the UUID 410560233203564544.
- Choose one of the following; a sketch of each form follows this list:
  - Complete the configuration using object UUIDs.
  - Complete the configuration using object names.
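As a sketch (the network name is illustrative; the --fss-* parameters are the proprietary extensions described above, using the example UUIDs and names from the previous step), the two forms could look like:
# using object UUIDs
openstack network create --fss-workload-evpn 410559942890618880 --fss-subnet 410560233203564544 fss-network-1
# using object names
openstack network create --fss-workload-evpn-name fss-evpn-1 --fss-subnet-name fss-subnet-1 fss-network-1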
Creating a Neutron port and Nova server within this network and subnet also uses standard APIs.
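For instance (the port, server, flavor, and image names below are illustrative):
openstack port create --network fss-network-1 port-1
openstack server create --flavor m1.small --image cirros-0.5.2 --nic port-id=port-1 server-1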
When mapping these objects to Connect:
- In Fabric Services System managed networking, the relationship between OpenStack projects and the Connect tenant disappears, because the Connect tenant is dictated by the --fss-workload-evpn/--fss-workload-evpn-name parameter set. This means that a new Connect tenant is created for every --fss-workload-evpn/--fss-workload-evpn-name parameter passed to network create. For identical --fss-workload-evpn/--fss-workload-evpn-name parameters passed to different network create commands, the Connect tenant resource is re-used.
- From the OpenStack point of view, Layer 3 is not supported in Fabric Services System managed networking. A subnet within a Fabric Services System managed network cannot be router-associated.
From the Fabric Services System point of view, routed subnets defined within a Workload EVPN can be consumed by OpenStack through a Fabric Services System managed network.
In this case, Layer 3 forwarding and routing is offloaded to the Fabric Services System managed fabric, unlike the software-based model that applies to Layer 3 in OpenStack managed networking.
Edge topology introspection
Edge topology information is introspected using LLDP, so LLDP must be enabled on the OpenStack computes. For OVS computes, LLDP is enabled automatically by the nic-mapping Layer 2 agent extension. For more on the nic-mapping agent, see NIC mapping.
NIC mapping
The following cases can apply for managing NIC mapping:
- Automated NIC mapping through introspection by the extended Layer 2 agent on the compute:
When the nic-mapping agent extension is configured, the Neutron OVS agent will inspect its configuration on startup and enable LLDP Tx on the interfaces under any OVS bridges it finds.
It will also persist the <physnet> ↔ <compute, interface> relation in the Neutron database so it can wire Neutron ports properly in the fabric.
Known interface mappings can be consulted using the CLI command openstack fss interface mapping list.
Note: Enabling LLDP Tx from the OVS agent extension is supported only for DPDK interfaces. For enabling LLDP in other cases, see Compute requirements.
- Programmed NIC mapping via the Neutron API:
If you do not want to enable the OVS Layer 2 agent extension, you can provision interface mappings manually using the command openstack fss interface mapping create.
Note: To support this operation, the following configuration should be present in the ML2 plugin configuration:
[DEFAULT]
nic_mapping_provisioning = True
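A minimal way to explore manual provisioning without assuming option names is to list the mappings the plugin already knows and to consult the command help on your deployment:
openstack fss interface mapping list
openstack fss interface mapping create --help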
SR-IOV NICs
SR-IOV NICs are supported in the OpenStack/Connect/Fabric Services System solution. They follow the standard OpenStack model, for example using the vnic_type 'direct'.
If there are physical NICs on the compute that are used for SR-IOV only, so that the PF is not in the OVS agent configuration, an LLDP agent such as lldpad or lldpd must be installed and running on the compute to provide neighbour information to the fabric. This does not occur automatically.
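As a sketch (assuming an RPM-based, systemd-managed compute with the lldpad package; the interface name is illustrative), LLDP can be enabled on such an SR-IOV-only PF as follows:
dnf install -y lldpad
systemctl enable --now lldpad
# enable LLDP transmit and receive on the PF
lldptool set-lldp -i ens1f0 adminStatus=rxtx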
Bonded interfaces
There are two types of bonded interfaces:
- Linux bonds
- OVS bonds
Linux bonds can be applied on VIRTIO computes. In the case of DPDK computes, OVS bonds can be applied.
Both Linux and OVS bonds come as Active/Active (A/A) or Active/Standby (A/S).
Linux bonds can be deployed as static LAGs or dynamic LAGs using LACP.
OVS DPDK can only work with dynamic LAGs using LACP.
For computes configured with OVS DPDK Active/Standby (A/S), the preferred_active setting should select the device that has the active DPDK port.
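As a sketch (the bridge, bond, and PCI addresses below are illustrative, and the bridge is assumed to be an existing DPDK-capable bridge), an OVS DPDK bond using a dynamic LAG with LACP could be created as follows:
ovs-vsctl add-bond br-phy dpdkbond0 dpdk0 dpdk1 lacp=active -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:03:00.1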
Trunk ports
The network trunk service allows multiple networks to be connected to an instance using a single virtual NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port.
For information about configuring and operating the network trunk service, please see https://docs.openstack.org/neutron/train/admin/config-trunking.html
Trunking is supported for VIRTIO, DPDK and SRIOV.
- For normal ports (VIRTIO/DPDK, vnic_type "normal"), trunking is supported through the upstream Open vSwitch trunk driver.
- For direct ports (SRIOV, vnic_type "direct"), trunking is supported by the Fabric Services System ML2 plugin trunk driver.
When using trunks with SRIOV, some limitations apply compared to the upstream OVS trunk model:
- The parent port of the trunk and all subports must be created with the vnic_type "direct".
- To avoid the need for QinQ on switches, trunks for SRIOV instances must be created with a parent port belonging to a flat network (untagged).
- If multiple projects within a deployment must be able to use trunks, the Neutron network must be created as shared (using the --share attribute).
- When adding subports to a trunk (see the example after this list):
- you must specify their segmentation-type as VLAN
- their segmentation-id must match a segmentation-id of the neutron network to which the subport belongs
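As a sketch (the network, port, and trunk names plus the VLAN ID are illustrative), a trunk for an SRIOV instance could be built as follows:
# parent port with vnic_type direct on a flat (untagged), shared network
openstack port create --network flat-net-1 --vnic-type direct parent-port-1
openstack network trunk create --parent-port parent-port-1 trunk-1
# subport with vnic_type direct; the segmentation-id matches the VLAN of its network
openstack port create --network vlan-net-1 --vnic-type direct sub-port-1
openstack network trunk set --subport port=sub-port-1,segmentation-type=vlan,segmentation-id=100 trunk-1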