Creating and starting a vSIM VM on a Linux KVM host

Introduction

This section describes how to create and start up vSIM virtual machines (VMs) on host machines using the Linux KVM hypervisor.

Several methods are available for creating a Linux KVM VM based on a specific set of parameters or constraints. These methods include:

  • specifying the VM parameters in a domain XML file read by virsh, the libvirt command shell

  • using the virt-manager GUI application available as part of the libvirt package

  • using the qemu-kvm (Red Hat/CentOS) or qemu-system-x86_64 (Ubuntu) commands

The Linux libvirt package provides the Virtual Shell (virsh) command-line application to facilitate the administration of VMs. The virsh application provides commands to create and start a VM using the information contained in a domain XML file. It also provides commands to shut down a VM, list all the VMs running on a host, and output specific information about the host or a VM.

This section describes how to define and manage your vSIM VM using the virsh tool.

VM configuration process overview

The libvirt domain XML file for a vSIM VM defines the important properties of the VM. You can use any text editor to create the domain XML file; pass the filename as a parameter of the virsh create command to start up the vSIM VM. For example, virsh create domain1.xml.
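A typical sequence, assuming a domain XML file named domain1.xml that defines a VM named v-sim-01-control (both names are placeholders; the VM name comes from the <name> element described later in this section), is the following:

virsh define domain1.xml          # register the domain so it persists across host reboots
virsh start v-sim-01-control      # boot the vSIM VM
virsh list --all                  # confirm that the VM is running
virsh console v-sim-01-control    # attach to the console if a serial PTY console is configured

Alternatively, virsh create domain1.xml starts a transient VM directly from the XML file in a single step.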

You can run virsh commands to display information about the VM or change specific properties. Basic virsh commands lists the basic virsh commands, where VM_name is the value that you configured for the name element in the XML configuration file. See https://libvirt.org/sources/virshcmdref/html/ for more information.

Table 1. Basic virsh commands
capabilities | grep cpu
   Example: virsh capabilities | grep cpu
   Result: Displays the number of cores on the physical machine, the vendor, and the model

console
   Example: virsh console VM_name
   Result: Connects to the serial console of the VM when the serial PTY port is used

define
   Example: virsh define VM_name.xml
   Result: Reads the XML configuration file and creates a domain. This is useful for providing persistence of the domain across reboots

destroy
   Example: virsh destroy VM_name
   Result: Stops and powers off a VM (domain). The terminated VM is still available on the host and can be started again; its status is "shut off"

dumpxml
   Example: virsh dumpxml VM_name
   Result: Displays the XML configuration information for the specified VM, including properties added automatically by libvirt

list
   Example: virsh list [--all | --inactive]
   Result: The --all argument displays all active and inactive VMs that have been configured and their state. The --inactive argument displays all VMs that are defined but inactive

nodeinfo
   Example: virsh nodeinfo
   Result: Displays the memory and CPU information, including the number of CPU cores on the physical machine

start
   Example: virsh start VM_name
   Result: Starts the VM domain

undefine
   Example: virsh undefine VM_name
   Result: Deletes a specified VM from the system

vcpuinfo
   Example: virsh vcpuinfo VM_name
   Result: Displays information about each vCPU of the VM

Note: The virsh shutdown and virsh reboot commands do not affect vSIM VMs because the vSIM software does not respond to ACPI signals.

Some VM property changes made from the virsh command line do not take immediate effect because the vSIM does not recognize and apply these changes until the VM is destroyed and restarted (see the example after this list). Examples of these changes include:

  • modifying the vCPU allocation with the virsh setvcpus command

  • modifying the vRAM allocation with the virsh setmem command

  • adding or removing a disk with the virsh attach-disk, virsh attach-device, virsh detach-disk, or virsh detach-device commands

  • adding or removing a vNIC with the virsh attach-interface, virsh attach-device, virsh detach-interface, or virsh detach-device commands
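The following is a minimal sketch of how to apply such a change, assuming a persistent domain named v-sim-01-control (a placeholder); edit whichever element corresponds to the property you are changing:

virsh destroy v-sim-01-control      # stop and power off the VM
virsh edit v-sim-01-control         # modify the domain XML, for example the <memory> or <vcpu> element
virsh start v-sim-01-control        # boot the VM with the updated configuration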

Libvirt domain XML structure

The libvirt domain XML file describes the configuration of a vSIM VM. The file begins with a <domain type='kvm'> line and ends with a </domain> line. In XML syntax, domain is an element and type='kvm' is an attribute of the domain element. vSIM VMs must have the type='kvm' attribute because KVM acceleration is mandatory. Other domain types, including type='qemu', are not valid.

The libvirt domain XML file structure can conceptually be interpreted as a tree, where the domain element is the root element and contains all the sub-elements (child elements) in the file. All sub-elements can contain their own child elements, and so on. The following domain child elements should be configured for vSIM VMs:

Domain name and UUID

Use the <name> element to assign each VM a meaningful name. The name should be composed of alphanumeric characters (spaces should be avoided) and must be unique within the scope of the host machine. Use the virsh list command to display the VM name. The following is an example of a <name> element:

<name>v-sim-01-control</name>

Each VM has a globally unique UUID identifier. The UUID format is described in RFC 4122. If you do not include a <uuid> element in the domain XML file, libvirt auto-generates a value that you can display (after the VM is created) using the virsh dumpxml command. Setting the UUID value explicitly ensures that it matches the UUID specified in the software license. See vSIM software licensing for information about vSIM software licenses. The following is an example of a <uuid> element, using the correct RFC 4122 syntax:

<uuid>ab9711d2-f725-4e27-8a52-ffe1873c102f</uuid>
Note: Do not use the UUID in this example; a unique UUID must be used. You can generate one with the uuidgen command on a Linux/UNIX machine or with an external UUID generation website such as http://uuidgenerator.net (select Version 4, not Version 1).

Memory

The maximum memory (vRAM) allocated to a VM at boot time is defined in the <memory> element. The unit attribute specifies the units in which the vRAM size is expressed.

Note: The unit value is specified in kibibytes (2^10 bytes) by default. However, all memory recommendations in this document are expressed in units of gigabytes (2^30 bytes), unless otherwise stated.

To express a memory requirement in gigabytes, include a unit='G' (or unit='GiB') attribute, as shown in the following example:

<memory unit='G'>6</memory>

The amount of vRAM needed for a vSIM VM depends on the vSIM system type, vSIM card type, and the MDAs installed in the system or card. See CPU and DRAM memory for more information.

vCPU

The <vcpu> element defines the number of vCPU cores allocated to a VM. The minimum number of vCPUs that you can allocate to a vSIM VM is two.

The <vcpu> element contains the following attributes:

  • cpuset

    The cpuset attribute provides a comma-separated list of physical CPU numbers or ranges, where "^" indicates exclusion. Any vCPU or vhost-net thread associated with the VM that is not explicitly assigned by the <cputune> configuration is assigned to one of the physical CPUs allowed by the cpuset attribute.

  • current

    The current attribute allows fewer than the maximum vCPUs to be allocated to the VM at boot up. This attribute is not required for vSIM VMs because in-service changes to the vCPU allocation are not allowed.

  • placement

    The placement attribute accepts a value of either 'static' or 'auto'. You should use 'static' when specifying a cpuset. When 'auto' is used, libvirt ignores the cpuset attribute and maps vCPUs to physical CPUs in a NUMA-optimized manner based on input from the numad process. The placement attribute defaults to the placement mode of <numatune>, or to static if a cpuset is specified.

The following example <vcpu> configuration for a vSIM VM allocates four vCPUs.

<vcpu>4</vcpu>
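If you also want to constrain the VM to specific physical CPUs, a sketch combining the attributes described above might look like the following; the CPU numbers are placeholders that depend on your host topology:

<vcpu placement='static' cpuset='2-5'>4</vcpu>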

CPU

The <cpu> element specifies CPU capabilities and topology presented to the guest, and applies to the model of the CPU. The mode attribute of <cpu> supports the following values:

  • custom

    In the custom mode, you must specify all the capabilities of the CPU that are presented to the guest.

  • host-model

    In the host-model mode, the model and features of the host CPU are read by libvirt just before the VM is started, and the guest is presented with a nearly identical CPU and feature set.

    If the exact host model cannot be supported by the hypervisor, libvirt falls back to the next closest supported model that has the same CPU features. This fallback is permitted by the <model fallback='allow'/> element.

  • host-passthrough

    In the host-passthrough mode, the guest CPU is represented as exactly the same as the host CPU, even for features that libvirt does not understand.

The <topology> child element specifies three values for the guest CPU topology: the number of CPU sockets, the number of CPU cores per socket, and the number of threads per CPU core.

The <numa> child element in the <cpu> element creates a specific guest NUMA topology. However, this is not applicable to the vSIM because the vSIM software is not NUMA-aware.

The following is the recommended configuration of the <cpu> element for vSIM VMs:

<cpu mode="custom" match="minimum">
        <model>SandyBridge</model>
        <vendor>Intel</vendor>
</cpu>

Sysinfo

The <sysinfo> element presents SMBIOS information to the guest. SMBIOS is divided into three blocks of information (blocks 0 to 2); each block consists of multiple entries. SMBIOS system block 1 is the most important for the vSIM; it contains entries for the manufacturer, product, version, serial number, UUID, SKU number, and family.

SMBIOS provides a necessary way to pass vSIM-specific configuration information from the host to the guest so that it is available to vSIM software when it boots. When a vSIM VM is started, the vSIM software reads the product entry of the SMBIOS system block. If the product entry begins with 'TIMOS: ' (without the quotes and case insensitive), the software recognizes the string that follows as containing important initialization information. The string following the 'TIMOS: ' characters contains one or more attribute-value pairs formatted as follows:

attribute1=value1 attribute2=value2 attribute3=value3

This pattern continues until all attributes have been specified.

The supported attribute-value pairs and their uses are summarized in vSIM boot parameters in SMBIOS product entry.

Table 2. vSIM boot parameters in SMBIOS product entry
Attribute name Valid values Description

address

For a vSIM VM acting as CPM:

<ip-prefix>/<ip-prefix-length>@active

<ip-prefix>/<ip-prefix-length>@standby

For a vSIM VM acting as IOM:

n/a

Where:

<ip-prefix>: an IPv4 or IPv6 prefix

<ip-prefix-length>: 1-128

Sets a management IP address.

In a vSIM with two CPMs, the SMBIOS product entry for each CPM should include two address attributes: one for the active CPM (ending with @active) and one for the standby CPM (ending with @standby).

The active and standby management IP addresses must be different addresses in the same IP subnet.

static-route

For a vSIM VM acting as CPM:

<ip-prefix>/<ip-prefix-length>@<next-hop-ip>

For a vSIM VM acting as IOM:

n/a

Where:

<ip-prefix>: an IPv4 or IPv6 prefix

<next-hop-ip>: an IPv4 or IPv6 address

Adds a static route for management connectivity.

Static default routes (0/0) are not supported.

license-file

For a vSIM VM acting as CPM:

<file-url>

For a vSIM VM acting as IOM:

n/a

Where:

<file-url>: <cflash-id>/<file-path>

or

ftp://<login>:<password>@<remote-host>/<file-path>

or

tftp://<login>:<password>@<remote-host>/<file-path>

<cflash-id>: cf1: | cf1-A: | cf1-B: | cf2: | cf2-A: | cf2-B: | cf3: | cf3-A: | cf3-B:

Specifies the local disk or remote FTP/TFTP location of the license file.

primary-config

For a vSIM VM acting as CPM: <file-url>

For a vSIM VM acting as IOM: n/a

Where:

<file-url>: <cflash-id>/<file-path>

or

ftp://<login>:<password>@<remote-host>/<file-path>

or

tftp://<login>:<password>@<remote-host>/<file-path>

<cflash-id>: cf1: | cf1-A: | cf1-B: | cf2: | cf2-A: | cf2-B: | cf3: | cf3-A: | cf3-B:

Specifies the local disk or remote FTP/TFTP location of the primary configuration file.

chassis

One of the chassis names listed in Appendix A: vSIM supported hardware. This parameter must be set to the same value for all CPM and IOM VMs that make up one system.

Specifies the emulated chassis type.

chassis-topology

For 7950 XRS-20/XRS-20e CPM and IOM VMs: XRS-40

This parameter must be set to the same value for all CPM and IOM VMs that make up one system.

Specifies that the 7950 XRS-20 or 7950 XRS-20e CPM or IOM VM should boot up as belonging to an extended 7950 XRS-40 system.

sfm

One of the SFM names from Appendix A.

The SFM type must be valid for the chassis type and must be set to the same value for all CPM and IOM VMs that make up one system.

Specifies the switch fabric module to be emulated.

slot

For a vSIM VM acting as CPM: A,B,C,D

For a vSIM VM acting as IOM: 1 to 20 (chassis dependent)

Specifies the slot number in the emulated chassis.

card

One of the card names from Appendix A: vSIM supported hardware. The card type must be valid for the chassis type, SFM type, and the slot number.

For some chassis types, notably the IXR-e and IXR-X, the required value of this parameter, for both the VM of slot=A and the VM of slot=1, is a CPM name followed by an IOM name, with a single forward slash character separating them. See Appendix A: vSIM supported hardware for more details.

Specifies the emulated card type.

xiom/m

m=x1, x2

One of the XIOM names from Appendix A: vSIM supported hardware. The XIOM type must be valid for the card type.

Specifies the emulated XIOM types that are logically equipped in the indicated card.

mda/n

n=1, 2, 3, 4, 5, 6

mda/m/n

m=x1, x2 and n=1, 2

One of the MDA names from Appendix A: vSIM supported hardware. The MDA type must be valid for the card type and XIOM type (if applicable).

Specifies the emulated MDA types that are logically equipped in the indicated card.

The mda/m/n form applies only to XIOM MDAs.

system-base-mac

For a vSIM VM acting as CPM:

hh:hh:hh:hh:hh:hh

For a vSIM VM acting as IOM:

n/a

Specifies the first MAC address in a range of 1024 contiguous values to use as chassis MACs.

The default is the same for all vSIMs and should be changed so that each vSIM has a unique, non-overlapping range.

The two CPMs of a vSIM node must specify the same system-base-mac value.

The following <sysinfo> example personalizes a vSIM VM to emulate an iom4-e card installed in slot 1 of an SR-12 chassis equipped with an m-sfm5-12 switch fabric module. The card is virtually equipped with one me10-10gb-sfp+ MDA and one me40-1gb-csfp MDA. Replace the attribute values in the SMBIOS product entry string with values appropriate for your deployment.

<sysinfo type='smbios'>
   <system>
      <entry name='product'>TIMOS:slot=1 chassis=SR-12 sfm=m-sfm5-12 card=iom4-e mda/1=me10-10gb-sfp+ mda/2=me40-1gb-csfp</entry> 
   </system>
</sysinfo>

The following <sysinfo> example personalizes a vSIM VM to emulate an xcm-14s card installed in slot 1 of an SR-14s chassis. This chassis has the sfm-s switch fabric module. The example simulates two XIOMs, where the first has two MDAs and the second has one MDA. Replace the attribute values in the SMBIOS product entry string with values appropriate for your deployment.

<sysinfo type='smbios'>
   <system>
      <entry name='product'>TIMOS:slot=1 chassis=SR-14s sfm=sfm-s card=xcm-14s xiom/x1=iom-s-3.0t mda/x1/1=ms18-100gb-qsfp28 mda/x1/2=ms4-400gb-qsfpdd+4-100gb-qsfp28 xiom/x2=iom-s-3.0t mda/x2/1=ms6-200gb-cfp2-dco</entry> 
   </system>
</sysinfo>

The following <sysinfo> example personalizes a vSIM VM to emulate a cpm5 card installed in slot A of an SR-12 chassis. This chassis has the m-sfm5-12 switch fabric module. Replace the attribute values in the SMBIOS product entry string with values appropriate for your deployment.

<sysinfo type='smbios'>
   <system>
     <entry name='product'>TIMOS:slot=A chassis=SR-12 sfm=m-sfm5-12 card=cpm5 \
      system-base-mac=de:ad:be:ef:00:01 \
      address=192.0.2.124/24@active address=192.0.2.2/24@standby \
      primary-config=ftp://user01:pass@10.0.0.1/home/user01/SR-12/config.cfg \
      license-file=ftp://user01:pass@10.0.0.1/home/user01/license.txt</entry>
   </system>
</sysinfo>

OS

The <os> element provides information about the guest OS to the hypervisor. It contains a <type> element that specifies the guest operating system type. For vSIM VMs, the <type> element must specify hvm, which means that the guest OS is designed to run on bare metal and requires full virtualization.

The arch attribute of the <type> element specifies the CPU architecture that is presented to the guest. For vSIM VMs, you must specify arch=x86_64 to allow the vSIM software to take advantage of 64-bit instructions and addressing.

The machine attribute of the <type> element specifies how QEMU should model the motherboard chipset in the guest system. For vSIM VMs, you should specify machine='pc', which is an alias for the latest I440FX/PIIX4 architecture supported by the hypervisor when the VM is created. The I440FX is a 1996-era motherboard chipset that combines Northbridge (memory controller) and Southbridge (I/O devices) functionality.

QEMU-KVM can also emulate a Q35 chipset if you specify machine='q35'. Q35 is a relatively modern (2009-era) chipset design; it separates the Northbridge controller (MCH) from the Southbridge controller (ICH9) and provides the guest with advanced capabilities such as an IOMMU and PCI-E.

Although the I440FX emulation is the older machine type, it is the more mature and hardened option and is recommended by Nokia.

The <os> element also contains the <smbios> child element that you must include in the configuration of vSIM VMs. Set the mode attribute to 'sysinfo', which allows you to pass the information specified in the <sysinfo> element (including the product entry) to the vSIM guest.

The <os> element can also include one or more <boot> child elements. The dev attribute of each <boot> element specifies a device such as 'hd' (hard drive), 'fd' (floppy disk), 'cdrom', or 'network', which indicates that the guest should load its OS from that device. The order of multiple <boot> elements determines the boot order. For vSIM VMs, you should always boot from the 'hd' device, which the vSIM maps to its CF3 disk.

The following example shows an <os> element configuration suitable for vSIM VMs of all types.

<os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
</os>

Clock

The <clock> element controls specific aspects of timekeeping within the guest. Each guest must initialize its clock to the correct time-of-day when booting and update its clock accurately as time passes.

The offset attribute of <clock> controls how the time-of-day clock of the guest is initialized at boot-up. For vSIM VMs, the offset attribute value should be set to utc, which enables the host and guest to belong to different timezones, if required.

The vSIM and other guests update the time-of-day clock by counting ticks of virtual timer devices. The hypervisor injects ticks to the guest in a manner that emulates traditional hardware devices, for example, the Programmable Interrupt Timer (PIT), CMOS Real Time Clock (RTC), or High Precision Event Timer (HPET). Each virtual timer presented to the guest is defined by a <timer> sub-element of <clock>. The name attribute of <timer> specifies the device name (for example, 'pit', 'rtc', or 'hpet'), the present attribute indicates whether the particular timer should be made available to the guest, and the tickpolicy attribute controls the action taken when the hypervisor (QEMU) discovers that it has missed a deadline for injecting a tick to the guest. A tickpolicy value of 'delay' means the hypervisor continues to deliver ticks at the normal rate, with a resulting slip in guest time relative to host time. A tickpolicy value of 'catchup' means the hypervisor delivers ticks at a higher rate to compensate for the missed ticks.

The following example shows a <clock> element configuration suitable for vSIM VMs.

<clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
</clock>

Devices

Use the <devices> element to add various devices to the VM, including hard drives, network interfaces, and serial console ports.

The <devices> element requires that the file path of the program used to emulate the devices be specified in the <emulator> child element. On CentOS and Red Hat hosts, the emulator is a binary named qemu-kvm; on Ubuntu hosts, it is named qemu-system-x86_64.
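For example, the <emulator> element typically references one of the following paths; these reflect standard packaging, so confirm the actual binary location on your host because it can vary by distribution and version:

<!-- CentOS / Red Hat hosts -->
<emulator>/usr/libexec/qemu-kvm</emulator>

<!-- Ubuntu hosts -->
<emulator>/usr/bin/qemu-system-x86_64</emulator>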

The device types used with vSIM VMs are described in the following sections.

Disk devices

The <disk> child element of the <devices> element allows you to add up to three disks to a vSIM VM.

The type attribute of the <disk> element specifies the underlying source for each disk. The only supported value for vSIM VMs is type='file', which indicates that the disk is a file residing on the host machine.

The device attribute of the <disk> element configures the representation of the disk to the guest OS. The supported value for vSIM VMs is device='disk'. When device='disk' is specified, QEMU-KVM attaches a hard drive to the guest VM and vSIM interprets this as a Compact Flash (CF) storage device.

The optional <driver> child element of the <disk> element provides additional details about the back-end driver. For vSIM VMs, set the name attribute to 'qemu' and the type attribute to 'qcow2'. These two attributes specify that the disk image has the QCOW2 format.

When you download the vSIM software, the zip file contains a QCOW2 disk image, which is a file that represents the vSIM software on a hard disk; you can boot any vSIM VM from this disk image. QCOW2 is a disk image format for QEMU-KVM VMs that uses thin provisioning (that is, the file size starts small and increases in size only as more data is written to disk). It supports snapshots, compression, encryption, and other features.

The optional cache attribute of the <driver> element controls the caching mechanism of the hypervisor. A value set to 'writeback' offers high performance but risks data loss (for example, if the host crashes but the guest believes the data was written). For vSIM VMs, it is recommended to set cache='none' (no caching) or cache='writethrough' (writing to cache and to the permanent storage at the same time).

The mandatory <source> child element of the <disk> element indicates the path (for disks where type='file') to the QCOW2 file used to represent the disk.

Note: The recommended storage location for QCOW2 disk image files is the /var/lib/libvirt/images directory; storing disk images in other locations may cause permission issues.

The mandatory <target> child element of the <disk> element controls how the disk appears to the guest in terms of bus and device. The dev attribute should be set to a value of 'hda', 'hdb' or 'hdc'. A value of 'hda' is the first IDE hard drive; it maps to CF3 on vSIM VMs. A value of 'hdb' is the second IDE hard drive; it maps to CF1 on vSIM VMs acting as CPMs. A value of 'hdc' is the third IDE hard drive; it maps to CF2 on vSIM VMs acting as CPMs. The bus attribute of the <target> element should be set to 'virtio' for vSIM virtual disks.

Each vSIM VM, including the ones acting as IOMs, must be provided with an 'hda' hard disk that contains the vSIM software images. You cannot write to the 'hda' disk associated with an IOM or browse its file system using SR OS file commands. Each virtual disk of each vSIM VM must be provided with its own, independent QCOW2 file.
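For example, assuming the vSIM software was delivered as a QCOW2 image named sros-vsim.qcow2 (a placeholder filename), you could prepare independent disk files for a CPM as follows; the empty CF1 image is shown only as an assumption about your setup, and its size is illustrative:

# copy the vSIM software image to a dedicated CF3 disk file for this VM
cp sros-vsim.qcow2 /var/lib/libvirt/images/SR-12-cpm.qcow2

# optionally create an empty QCOW2 file to use as the CF1 disk of a CPM
qemu-img create -f qcow2 /var/lib/libvirt/images/SR-12-cpm-cf1.qcow2 1G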

The following <disk> element configuration example provides a vSIM CPM with a CF3 device.

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/SR-12-cpm.qcow2'/>
      <target dev='hda' bus='virtio'/>
</disk>

Network interfaces

The <interface> sub-element of the <devices> element allows you to add up to 20 virtual NIC ports to a vSIM VM. The type attribute of <interface> supports one of two values for vSIM VMs: 'direct' or 'bridge'; both are described later in this section.

The following child elements of <interface> are common to most interface types:

  • <mac> contains an address attribute that indicates the MAC address of the guest vNIC port.

  • <model> contains a type attribute that indicates the NIC model presented to the guest.

    The default value for type is 'virtio', which indicates that the guest should use its VirtIO driver for the network interface.

  • <driver> contains several attributes corresponding to tunable driver settings.

    The queues attribute, when used in conjunction with the <model type='virtio'/> element, enables multi-queue VirtIO in the guest.

    Note: The vSIM does not support multi-queue VirtIO.
  • <address> specifies the guest PCI address of the vNIC interface when the type='pci' attribute is included; see the sketch after this list.

    The other attributes required to specify a PCI address are: domain (0x0000), bus (0x00-0xff), slot (0x00-0x1f), and function (0x0-0x7).

    If the <address> element is not included, the hypervisor assigns an address automatically as follows: the first interface defined in the libvirt domain XML has the lowest PCI address, the next one has the next-lowest PCI address, and so on.

    The vSIM maps vNIC interfaces to its own set of interfaces based on the order of the vNIC interfaces, from lowest to highest PCI address; this should be considered when you change the PCI address of a vNIC interface. See Guest vNIC mapping in vSIM VMs for information about how the vSIM maps vNIC interfaces.

  • <target> specifies the name of the target device representing the vNIC interface in the host.

    Note: You do not need to configure this element with vSIM VMs.
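The following sketch combines these child elements on a single vNIC interface with an explicitly assigned guest PCI address; the MAC address, host interface name, and PCI slot values are placeholders, and the type='direct' interface type is described next:

<interface type='direct'>
      <source dev='enp133s0' mode='passthrough'/>
      <mac address='52:54:00:ab:cd:01'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>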
type='direct'

The <interface> element with type='direct' allows you to create a direct connection between the guest vNIC port and a host physical NIC port. The interconnection uses a MACVTAP driver in the Linux host.

To connect a guest vNIC port to a physical NIC port using the MACVTAP driver, include a <source> sub-element with the dev attribute that indicates the interface name of the host interface and mode='passthrough'. The following example shows a configuration where 'enp133s0' is the host interface name.

Note: type='direct' should not be confused with PCI pass-through, which is not supported for vSIM VMs.
<interface type='direct'>
      <source dev='enp133s0' mode='passthrough'/>
      <model type='virtio'/>
</interface>
type='bridge'

The <interface> element with type='bridge' specifies that the guest vNIC port should be connected to a vSwitch or Linux bridge in the host. The interconnection uses the Vhost-Net back end driver when the <model type='virtio'/> element is included.

To use a Linux bridge named brX, include a <source> sub-element with a bridge='brX' attribute, as shown in the following configuration example.

<interface type='bridge'>
         <source bridge='br0'/>
         <model type='virtio'/>
</interface>

To use an OpenvSwitch bridge named brX, include a <source> sub-element with a bridge='brX' attribute. In addition, include a <virtualport type='openvswitch'/> element, as shown in the following configuration example.

<interface type='bridge'>
       <source bridge='br1'/>
       <virtualport type='openvswitch'/>
       <model type='virtio'/>
</interface>

Guest vNIC mapping in vSIM VMs

This section describes the relationship between a network interface defined in the libvirt XML for a vSIM VM and its use by the vSIM software.

In the current release, each vSIM VM supports a maximum of 20 vNIC interfaces. The vSIM software puts the defined interfaces in ascending order of (guest) PCI address.

The order of the defined interfaces and the vSIM VM type determines the use of each interface by the vSIM software. The vSIM interface mapping information is summarized in the following tables:

Table 3. 7750 SR-1, 7750 SR-1s, 7250 IXR-R6, 7250 IXR-ec vSIM interface mapping
Order (by guest PCI address) vSIM software use Supported interface types

First

Management port (A/1)

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Second

Fabric port 1

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Third

MDA port (1/1/1)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Fourth

MDA port (1/1/2)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

...

Eighth

MDA port (1/1/6)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

...

Twentieth

MDA port (1/1/18)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Table 4. vSIM CPM interface mapping
Order (by guest PCI address) vSIM software use Supported interface types

First

Management port (A/1)

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Second

Fabric port

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Table 5. vSIM IOM interface mapping
Order (by guest PCI address) vSIM software use Supported interface types

First

Management port (not used)

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Second

Fabric port

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

Third

MDA port (1/1/1)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

...

Eighth

MDA port (1/1/6)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

...

Twentieth

MDA port (1/1/18)

See Network interfaces for more information about the following interface types:

type='direct' with <model type='virtio'/>

type='bridge' with <model type='virtio'/>

If a vSIM is emulating an MDA with connectors (which is only supported by specific chassis), then only the first breakout port of each connector can be associated with a VM vNIC interface and pass traffic. For example, if an MDA has connectors 1/1/c1, 1/1/c2, 1/1/c3, and so on, then only ports 1/1/c1/1, 1/1/c2/1, 1/1/c3/1, and so on can become operationally up.

Console and serial ports

The <console> sub-element in the <devices> element allows you to add a console port to a vSIM VM. As on physical routers, the console port on a vSIM VM provides interactive access to the CLI.

There are several methods for creating and accessing a vSIM console port. The first method is to bind the console port to a TCP socket opened by the host. To access the console, you must establish a Telnet session with the host, using the port number of the TCP socket. The following example shows a configuration for this method:

<console type='tcp'>
      <source mode='bind' host='0.0.0.0' service='4000'/>
      <protocol type='telnet'/>
      <target type='virtio' port='0'/>
</console>

The second method is to bind the console port to an emulated serial port. In this case, the virsh console <domain-name> command is used to access the console. The following example shows a configuration for this method:

<serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
</console>

Seclabel

The <seclabel> element controls the generation of security labels required by security drivers such as SELinux or AppArmor. These drivers are not supported with vSIM VMs; therefore, you must specify <seclabel type='none'/> in the domain XML.

Example Libvirt domain XML

The following example shows a Libvirt domain XML configuration for a vSIM VM emulating one CPM of a 7750 SR-12. You should substitute the correct values for your configuration.

<domain type="kvm">
    <name>CPM.A</name>
    <uuid>cb0ba837-07db-4ebb-88ea-694271754675</uuid>
    <description>SR-12 CPMA VM</description>
    <memory>4194304</memory>
    <currentMemory>4194304</currentMemory>
    <cpu mode="custom" match="minimum">
        <model>SandyBridge</model>
        <vendor>Intel</vendor>
    </cpu>
    <vcpu current="2">2</vcpu>
    <sysinfo type="smbios">
        <system>
            <entry name="product">
                TiMOS: slot=A chassis=SR-12 sfm=m-sfm5-12 card=cpm5 \
                primary-config=ftp://user:pass@[135.121.120.218]/./dut-a.cfg \
                license-file=ftp://user:pass@[135.121.120.218]/./license.txt \
                address=135.121.123.4/21@active \
                address=135.121.123.8/21@standby \
                address=3000::135.121.123.4/117@active \
                address=3000::135.121.123.8/117@standby \
                static-route=128.251.10.0/24@135.121.120.1 \
                static-route=135.0.0.0/8@135.121.120.1 \
                static-route=138.0.0.0/8@135.121.120.1 \
                static-route=172.20.0.0/14@135.121.120.1 \
                static-route=172.31.0.0/16@135.121.120.1 \
                static-route=192.168.120.218/32@135.121.120.218 \
                system-base-mac=fa:ac:ff:ff:10:00 \
            </entry>
        </system>
    </sysinfo>
    <os>
        <type arch="x86_64" machine="pc">hvm</type>
        <smbios mode="sysinfo"/>
    </os>
    <clock offset="utc">
        <timer name="pit" tickpolicy="delay"/>
        <timer name="rtc" tickpolicy="delay"/>
    </clock>
    <devices>
        <emulator>/usr/libexec/qemu-kvm</emulator>
        <disk type="file" device="disk">
            <driver name="qemu" type="qcow2" cache="none"/>
            <source file="/var/lib/libvirt/images/cf3.qcow2"/>
            <target dev="hda" bus="virtio"/>
        </disk>
        <disk type="file" device="disk">
            <driver name="qemu" type="qcow2" cache="none"/>
            <source file="/var/lib/libvirt/images/cf1.qcow2"/>
            <target dev="hdb" bus="virtio"/>
        </disk>
        <interface type="bridge">
            <mac address="FA:AC:C0:04:06:00"/>
            <source bridge="breth0"/>
            <model type="virtio"/>
        </interface>
        <interface type="bridge">
            <mac address="FA:AC:C0:04:06:01"/>
            <source bridge="breth1"/>
            <model type="virtio"/>
        </interface>
        <console type="tcp">
            <source mode="bind" host="0.0.0.0" service="2500"/>
            <protocol type="telnet"/>
            <target type="virtio" port="0"/>
        </console>
    </devices>
    <seclabel type="none"/>
</domain>
1 For 7750 SR-1 and 7750 SR-1s, the fabric port is not defined and the second vNIC interface (by PCI device order) is the first MDA port.