Preparing the Fabric Services System virtual machine nodes
The procedures in this section describe how to create and configure Fabric Services System nodes in deployments that use virtual machine servers.
You must use the Fabric Services System base OS image. This image is specifically designed for use with the Fabric Services System deployment and comes with the necessary software and components pre-installed in a minimally hardened Rocky 9.6 operating system.
Complete the procedure for each individual Fabric Services System node, ensuring that each node is running on a separate hypervisor to minimize the risk of any impact if a hypervisor fails.
Downloading the Fabric Services System base OS image
Contact Nokia support for the location of the Fabric Services System base OS image. Download the OVA or QCOW2 image.
Networking considerations
Nokia recommends that you use two different networks for the Fabric Services System nodes.
Within the hypervisor, both networks should be available as bridged networks. Both networks require support for jumbo frames (MTU set to 9000).
Ensure that the MTU is set to 9000 on all interfaces on the hypervisor, the Fabric Services System VM nodes, the deployer, and the interconnecting devices.
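As a quick check, the following commands verify and, if needed, set the MTU on the hypervisor bridges. This is a minimal sketch that assumes Linux bridges named br0 and br1, as used in the KVM example below; make the change persistent through your distribution's network configuration so it survives a reboot.
# ip link show br0 | grep -o 'mtu [0-9]*'   # display the current MTU of the bridge
# ip link set dev br0 mtu 9000              # set the MTU to 9000 if it is not already
# ip link set dev br1 mtu 9000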
Creating the Fabric Services System virtual machine
Creating the VM on bridged networks on KVM
Complete the following steps to deploy a Fabric Services System node as a virtual machine on KVM. The OAM network is referred to as br0 and the fabric management network is referred to as br1.
- Ensure that the virt-install tool is installed on the KVM hypervisor.
If you need to install the tool, use the following command:
# yum install virt-install
Note: With Rocky 9.6, the --graphics flag is no longer supported with the virt-install tool.
- Copy the base OS image to the location on the hypervisor where the virtual disks should be stored.
# cp fss-baseos-rocky-0.0.0-alpha.1+20240430.191213-f50a660.qcow2 /path/to/fss-compute-01.qcow2
- Resize the base OS image.
By default, the Fabric Services System base OS image comes with a small partition to reduce the download size of the image. To assign the appropriate size to the image, execute the following commands:
# qemu-img create -f qcow2 -o preallocation=metadata /path/to/fss-compute-01.qcow2.temp 200G
# virt-resize --quiet --expand /dev/sda5 /path/to/fss-compute-01.qcow2 /path/to/fss-compute-01.qcow2.temp
# mv /path/to/fss-compute-01.qcow2.temp /path/to/fss-compute-01.qcow2
These commands create a temporary 200 GB qcow2 file, expand the root partition of fss-compute-01.qcow2 while writing the result to the temporary file, and then rename the .temp file back to the qcow2 file used to create the compute VM.
- Optional: If the node is also going to be used as a storage node, create the extra disk needed to form the storage cluster.
Create a 300 GB virtual disk using the following command:
# qemu-img create -f qcow2 /path/to/fss-node01-storage.qcow2 300G
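To confirm that the disk was created with the expected virtual size, you can inspect it; this is an optional check, not part of the official procedure.
# qemu-img info /path/to/fss-node01-storage.qcow2   # "virtual size" should report 300 GiB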
- Create the virtual machine.
The following command creates a node that also serves as a storage node. If a storage node is not needed, omit the second --disk line.
# virt-install --import --name fss-node01 \
  --memory 65536 --vcpus 32 --cpu host \
  --disk /path/to/fss-node01.qcow2,format=qcow2,bus=virtio \
  --disk /path/to/fss-node01-storage.qcow2,format=qcow2,bus=virtio \
  --network bridge=br0,model=virtio \
  --network bridge=br1,model=virtio \
  --os-variant=centos7.0 \
  --noautoconsole
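Because --noautoconsole returns immediately, you may want to confirm that the VM is running and attach to its console. These are standard libvirt commands, not specific to this procedure.
# virsh list --all            # fss-node01 should be listed as "running"
# virsh console fss-node01    # attach to the serial console; exit with Ctrl+]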
- After the VM boots, use the lsblk command to verify the partition sizes.
Note: Log in using the following credentials:
user: root
password: N0ki@FSSb4se!
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
vda    252:0    0   200G  0 disk
├─vda1 252:1    0    99M  0 part /boot/efi
├─vda2 252:2    0  1000M  0 part /boot
├─vda3 252:3    0     4M  0 part
├─vda4 252:4    0     1M  0 part
└─vda5 252:5    0 198.9G  0 part /
vdb    252:16   0   300G  0 disk
In the output, verify that the vda5 partition has been resized to approximately 198.9 GB and that the vdb disk is 300 GB. You can use the vdb disk for the storage cluster.
Creating the VM on bridged networks on VMware vSphere
You can deploy the base OS OVA image using one of the following methods:
- the VMware vSphere vCenter or ESXi UI
For instructions, see Deploy an OVF or OVA Template in the VMware vSphere documentation.
- the VMware Open Virtualization Format (OVF) Tool CLI
The following steps provide an example of how to use the VMware OVF Tool CLI.
- Download and install the latest version of the VMware OVF Tool from the VMware Developer website.
- Display details about the OVA image.
Execute the ovftool command with just the OVA image name as the argument.
$ ovftool fss-baseos-25.8.1-414.ova
OVF version:   1.0
VirtualApp:    false
Name:          fss-baseos
Download Size: 2.32 GB
Deployment Sizes:
  Flat disks:   128.00 GB
  Sparse disks: 4.08 GB
Networks:
  Name:        OAM
  Description: The Fabric Services System OAM (UI and API) network
  Name:        FABRIC
  Description: The Fabric Services System Fabric Management network
Virtual Machines:
  Name:             fss-baseos
  Operating System: centos7_64guest
  Virtual Hardware:
    Families: vmx-14
    Number of CPUs: 16
    Cores per socket: 1
    Memory: 64.00 GB
    Disks:
      Index: 0
      Instance ID: 4
      Capacity: 128.00 GB
      Disk Types: SCSI-lsilogic
    NICs:
      Adapter Type: VmxNet3
      Connection: OAM
      Adapter Type: VmxNet3
      Connection: FABRIC
References:
  File: fss-baseos-disk1.vmdk
- Deploy the OVA image using the OVF Tool.
For details about command line arguments, see the OVF Tool documentation on the VMware website.
Note: Ensure that you use thick provisioning for the disk and that you connect all the interfaces to a network. The secondary interface can be disconnected and disabled after the deployment and before you power on the VM.
$ ovftool --acceptAllEulas -dm=thick -ds=VSAN -n=fss-node01 \
  --net:"OAM=OAM-network" --net:"FABRIC=Fabric-network" \
  fss-baseos-25.8.1-414.ova \
  vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Opening OVA source: fss-baseos-25.8.1-414.ova
The manifest validates
Enter login information for target vi://vcenter.domain.tld/
Username: administrator%40vsphere.local
Password: ***********
Opening VI target: vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Deploying to VI: vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Transfer Completed
Completed successfully
- Optional: If you are using this node as a storage node, create the extra disk needed to form the storage cluster.
To create the extra disk, edit the VM in VMware vCenter and add a new 300 GB disk.
- Enable 100% resource reservation for both CPU and memory for the VM.
You can configure the resource reservation for CPU and memory by editing the VM in vCenter.
Configuring the Fabric Services System virtual machine
- From the VMware vSphere or KVM console, log in to the node VM.
Use the following credentials:
Username: root
Password: N0ki@FSSb4se!
- If your environment does not support or use the cloud-init services, disable and stop these services.
# systemctl stop cloud-init cloud-init-local cloud-config cloud-final
# systemctl disable cloud-init cloud-init-local cloud-config cloud-final
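To confirm that the services are disabled, you can query their unit state; this is an optional check.
# systemctl is-enabled cloud-init cloud-init-local cloud-config cloud-final
disabled
disabled
disabled
disabled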
- Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file to configure the correct static IP address, DNS servers, and gateway for the OAM network.
Note: If you are deploying a dual-stack system, provide the IPv6 details in the ifcfg-eth0 file. Additionally, ensure that the default gateway is configured for both IPv4 and IPv6 and that both gateways are functional before installing the Fabric Services System.
The final content should look similar to the following example, except with the IP address, DNS, and domain details specific to the target environment:
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.5
DNS2=192.0.2.6
DOMAIN=fss.nokia.local
MTU=9000
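For a dual-stack deployment, the IPv6 details mentioned in the note above are added to the same file. The following is a minimal sketch using the standard ifcfg IPv6 keys; the addresses shown use the 2001:db8:: documentation prefix and must be replaced with values for your environment.
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1
DNS3=2001:db8::5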
- Edit the /etc/sysconfig/network-scripts/ifcfg-eth1 file to configure the correct static IP address for the fabric management network.
Ensure that the MTU parameter is set to 9000 for all the interfaces.
The final content should look similar to the following, except with the IP address details specific to the target environment:
BOOTPROTO=static
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPADDR=198.51.100.10
PREFIX=24
MTU=9000
- Restart the network to apply the new configuration.
# systemctl restart NetworkManager.service
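You can then confirm that the addresses, MTU, and default route were applied. This is a quick sanity check using standard iproute2 commands.
# ip -br addr show                          # eth0 and eth1 should show the configured addresses
# ip link show eth0 | grep -o 'mtu [0-9]*'  # both interfaces should report mtu 9000
# ip route show default                     # the OAM gateway should be the default route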
- Configure the appropriate NTP servers.
Edit the /etc/chrony.conf configuration file and replace all lines that begin with server with the correct server lines for the environment, as in the example below.
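For example, the replacement server lines might look like the following. The server names are hypothetical; the iburst option speeds up initial synchronization.
server ntp1.example.com iburst
server ntp2.example.com iburst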
- Restart the chronyd service.
# systemctl restart chronyd
- Verify that time synchronization is functioning properly.
# chronyc tracking
If the Reference ID field is not set to any of the configured servers, but instead refers to something like 127.127.1.1, time synchronization is not functioning properly.
Reference ID    : 87E30FFE (192.0.2.5)
Stratum         : 4
Ref time (UTC)  : Wed Feb 16 01:20:36 2022
System time     : 0.000014215 seconds slow of NTP time
Last offset     : -0.000001614 seconds
RMS offset      : 0.000106133 seconds
Frequency       : 11.863 ppm slow
Residual freq   : -0.071 ppm
Skew            : 0.187 ppm
Root delay      : 0.063009784 seconds
Root dispersion : 0.018440660 seconds
Update interval : 64.5 seconds
Leap status     : Normal
- Synchronize the RTC clock and the system clock.
Ensure that the RTC and the system clock are synchronized after every reboot.
# hwclock --systohc
Then, verify that local time and the RTC time are synchronized.
# timedatectl
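The output should show matching Local time, Universal time, and RTC time, similar to this illustrative example (timestamps are placeholders):
               Local time: Wed 2022-02-16 01:25:14 UTC
           Universal time: Wed 2022-02-16 01:25:14 UTC
                 RTC time: Wed 2022-02-16 01:25:14
                Time zone: UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no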
- Change the hostname.
# hostnamectl set-hostname fss-node01.domain.tld
- Set up key-based authentication from the deployer VM to the Fabric Services System compute VMs.
If password authentication has been enabled on the node for SSH, enter the following command from the deployer VM.
# ssh-copy-id root@<node IP/FQDN>
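To verify that key-based authentication works, run a command on the node from the deployer VM; it should complete without prompting for a password.
# ssh root@<node IP/FQDN> hostname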