Host machine requirements
Overview
This section describes the requirements that must be fulfilled by a host machine to support vSIM virtual machines (VMs).
The host machine for vSIM VMs is usually a dedicated server or PC in a lab environment. vSIM VMs may also be deployed in a fully orchestrated data center, but this deployment model is beyond the scope of this guide.
Host machine hardware requirements
This section describes the host machine hardware requirements.
vCPU requirements
The minimum number of vCPUs that you can allocate to a vSIM VM is two. See vCPU for more information.
The 7250 IXR family has the following minimum requirements:
- four vCPUs for cpiom-ixr-r6
- a minimum of four vCPUs for imm36-100g-qsfp28; however, eight vCPUs are recommended
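For example, the vCPU allocation of a vSIM VM can be set with the vcpu element in its Libvirt domain XML file (see Linux KVM hypervisor); the value of 4 shown here is only illustrative and must be at least the minimum for the emulated card type:
<!-- allocate four vCPUs to this vSIM VM (illustrative value) -->
<vcpu placement='static'>4</vcpu>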
CPU and DRAM memory
vSIM VMs can be deployed on any PC or server with an Intel CPU based on the Sandy Bridge or later micro-architecture.
The PC or server should be equipped with sufficient DRAM memory to meet the memory requirement of the host, and have adequate resources to back the memory of each vSIM VM without oversubscription.
The minimum amount of memory for each vSIM VM depends on the emulated card type, as listed in VM memory requirements by card type.
Emulated card type | Minimum required memory (GB)
---|---
cpiom-ixr-r6 | 6
imm36-100g-qsfp28 | 6
xcm-14s | 8
xcm-1s | 6
xcm-2s | 6
xcm2-x20 | 6
xcm-7s | 6
cpm-1se/imm36-800g-qsfpdd | 6
all other card types | 4
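As a minimal sketch, the memory allocation for a vSIM VM in a Libvirt domain XML file could be expressed as follows; 6 Gbytes is shown only as an example and must match the minimum for the emulated card type in the table above:
<!-- allocate 6 Gbytes of memory to this vSIM VM (illustrative value) -->
<memory unit='GiB'>6</memory>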
Storage
Each vSIM VM needs only a moderate amount of persistent storage space; 5 to 10 Gbytes is sufficient in most cases.
The currently supported method for attaching a storage device to a vSIM VM is to attach a disk image that appears to the guest as an IDE hard drive. The vSIM VM disk images can be stored either on the host server hard drive or remotely.
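For example, a disk image stored on the host could be attached so that it appears to the guest as an IDE hard drive using a Libvirt disk definition similar to the following; the file path and qcow2 format are assumptions for illustration:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vsim1.qcow2'/>   <!-- hypothetical image location -->
  <target dev='hda' bus='ide'/>                          <!-- presented to the guest as an IDE drive -->
</disk>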
NICs
vSIM VMs are supported with any type of NIC, as long as it is supported by the hypervisor.
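For example, a vNIC connected to a Linux bridge could be defined in the Libvirt domain XML file as shown below; the bridge name and the virtio model are assumptions, and any vNIC model supported by the hypervisor can be used:
<interface type='bridge'>
  <source bridge='br0'/>        <!-- hypothetical bridge name -->
  <model type='virtio'/>        <!-- any hypervisor-supported model -->
</interface>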
Host machine software requirements
This section describes the host OS and virtualization software requirements for vSIM VMs.
Host OS and hypervisor
The supported host OS depends on the hypervisor selected to run the vSIM VMs. Integrated model vSIM VMs (SR-1, SR-1s, IXR-R6, IXR-ec) are supported with the following hypervisors:
- Linux KVM, as provided by one of the host OSs listed below
- VMware ESXi 6.0, 6.5, or 6.7
Distributed model vSIM VMs are only supported with the Linux KVM hypervisor, using one of the following host OSs:
- CentOS 7.0-1406 with 3.10.0-123 kernel
- CentOS 7.2-1511 with 3.10.0-327 kernel
- CentOS 7.4-1708 with 3.10.0-693 kernel
- CentOS 7.5-1804 with 3.10.0-862 kernel
- Red Hat Enterprise Linux 7.1 with 3.10.0-229 kernel
- Red Hat Enterprise Linux 7.2 with 3.10.0-327 kernel
- Red Hat Enterprise Linux 7.4 with 3.10.0-693 kernel
- Red Hat Enterprise Linux 7.5 with 3.10.0-862 kernel
- Ubuntu 14.04 LTS with 3.13 kernel
- Ubuntu 16.04 LTS with 4.4 kernel
Linux KVM hypervisor
vSIM VMs can be created and managed using the open-source Kernel-based Virtual Machine (KVM) hypervisor.
Nokia recommends the use of the Libvirt software package to manage the deployment of VMs in a Linux KVM environment. Libvirt is open source software that provides a set of APIs for creating and managing VMs on a host machine, independent of the hypervisor. Libvirt uses XML files to define the properties of VMs and virtual networks. It also provides a convenient virsh command line tool.
The vSIM VM examples shown in this guide assume that VM parameters in a domain XML file are read and acted upon by the virsh program.
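For example, assuming a domain XML file named vsim1.xml that defines a domain named vsim1 (both names are hypothetical), the VM could be defined, started, and accessed as follows:
virsh define vsim1.xml    # register the VM from its domain XML definition
virsh start vsim1         # boot the vSIM VM
virsh console vsim1       # attach to the VM console (if a serial console is defined)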
VMware hypervisor
You can install integrated model vSIM (SR-1, SR-1s, IXR-R6, IXR-ec) VMs on hosts running the VMware ESXi hypervisor. Only ESXi versions 6.0, 6.5, and 6.7 are supported with the vSIM.
Nokia recommends deployment of the vSphere vCenter server and use of the vSphere Web Client GUI for managing the virtual machines in a VMware environment.
The following VMware features are supported with vSIM VMs:
e1000 vNIC interfaces
vNIC association with a vSphere standard switch
vNIC association with a vSphere distributed switch
vMotion
High Availability
Features that are not supported include VMXNET3 device adapters, SR-IOV, PCI pass-through, DRS, fault tolerance, and Storage vMotion.
Virtual switch
A virtual switch (vSwitch) is a software implementation of a Layer 2 bridge or Layer 2-3 switch in the host OS software stack. When the host has one or more VMs, the vNIC interfaces (or some subset) can be logically connected to a vSwitch to enable the following:
- vNIC-to-vNIC communication within the same host without relying on the NIC or other switching equipment
- multiple vNICs to share the same physical NIC port
The Linux bridge vSwitch implementation option is available on Linux hosts.
The standard switch and distributed switch vSwitch implementation options are available on VMware ESXi hosts.
Linux bridge
The Linux bridge is a software implementation of an IEEE 802.1D bridge that forwards Ethernet frames based on learned MAC addresses. The Linux bridge datapath is implemented in the kernel (specifically, the bridge kernel module), and it is controlled by the brctl userspace program, which is part of the bridge-utils package.
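As a quick, non-persistent sketch (the interface names are illustrative), a Linux bridge can also be created and inspected directly with brctl and ip; the persistent, file-based configuration described next is the approach normally used for vSIM hosts:
brctl addbr br0          # create a new bridge named br0
brctl addif br0 eth0     # add physical port eth0 to the bridge
ip link set br0 up       # bring the bridge up
brctl show               # list bridges and their ports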
On CentOS and RHEL hosts, a Linux bridge can be created by adding an ifcfg-brN file (where N is a number) in the /etc/sysconfig/network-scripts/ directory. This file must include the following directives:
- DEVICE=brN (with N correctly substituted)
- TYPE=Bridge (Bridge is case-sensitive)
The following shows an example ifcfg file:
TYPE=Bridge
DEVICE=br0
IPADDR=192.0.2.1
PREFIX=24
GATEWAY=192.0.2.254
DNS1=8.8.8.8
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DELAY=0
To add another interface as a bridge port of brN, add the BRIDGE=brN directive to the ifcfg network-script file for that other interface.
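For example, the following ifcfg-eth0 file adds eth0 as a bridge port of br0; the interface and bridge names are illustrative only:
TYPE=Ethernet
DEVICE=eth0
BRIDGE=br0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no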
On Ubuntu hosts, a Linux bridge is created by adding an auto brN stanza followed by an iface brN stanza to the /etc/network/interfaces file. The iface brN stanza can include several attributes, including the bridge_ports attribute, which lists the other interfaces that are ports of the Linux bridge.
The following example shows an /etc/network/interfaces file that creates a bridge br0 with eth0 as a bridge port:
auto lo
iface lo inet loopback

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
By default, the Linux bridge is VLAN unaware: it does not look at VLAN tags, nor does it modify them when forwarding frames.
If the bridge is configured with VLAN sub-interfaces, frames without a matching VID are dropped (filtered).
If a VLAN sub-interface of a port is added as a bridge port, frames with the matching VID are presented to the bridge with the VLAN tag stripped. When the bridge forwards an untagged frame to this bridge port, a VLAN tag with the matching VID is automatically added.
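As an illustration only, the following commands create a VLAN sub-interface and add it as a bridge port; the interface name eth1, VLAN ID 100, and bridge name br1 are hypothetical and must be adapted to the host:
ip link add link eth1 name eth1.100 type vlan id 100    # create VLAN sub-interface with VID 100
ip link set eth1.100 up                                 # bring the sub-interface up
brctl addif br1 eth1.100                                # add it as a port of bridge br1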