The Fabric Services System deployer VM
The procedures in this section describe how to deploy and configure the Fabric Services System deployer VM.
Downloading the Fabric Services System deployer image
| Deployment | Where to download the image |
|---|---|
| VMware vSphere | Download the OVA image to a host that can reach the VMware vCenter or ESXi host on which it will be deployed. |
| KVM | Download the QCOW2 image to the deployer host. |
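If you stage the image on a separate download host first, a minimal sketch of copying the QCOW2 image to the KVM deployer host over SSH follows; the host name and target path are placeholders for your environment:
# Copy the downloaded image to the deployer host (placeholder host and path).
scp ./fss-deployer-x.y.qcow2 root@kvm-host.example.com:/var/lib/libvirt/images/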
Preparing the Fabric Services System deployer hypervisor
Fabric Services System deployer VM creation
After you have downloaded the OVA or QCOW2 image and prepared the deployer node, follow the installation steps to create the deployer VM.
The Fabric Services System nodes contained in the cluster (worker nodes) and the node hosting the deployer VM must communicate with each other. Both the worker nodes and the deployer VM must be able to initiate connections.
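Once the deployer VM is running, a quick reachability check along these lines can confirm the requirement; the node address 192.0.2.21 is a placeholder, and the equivalent checks should also succeed from a worker node toward the deployer VM:
# From the deployer VM: verify a worker node answers (placeholder address).
ping -c 3 192.0.2.21
# Verify that an SSH session toward the node can be established.
ssh root@192.0.2.21 'hostname'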
Creating the VM on a bridged network on KVM
This section provides an example script used to create a VM in a KVM-based hypervisor. You can use this script or you can use your own procedure as long as the resulting VM meets the requirements for the Fabric Services System VM.
- Create an fssvm_create.sh file, then copy the following contents into the file:
create_fssvm() {
    BRIDGE="breth0:1"
    VM=fss-deployer
    VMDIR=/var/lib/libvirt/images/$VM
    FSSIMAGE=<path to fss-installer qcow2 image>
    sudo mkdir -vp $VMDIR
    sudo cp $FSSIMAGE $VMDIR/$VM.qcow2
    sudo virsh pool-create-as --name $VM --type dir --target $VMDIR
    sudo virt-install --import --name $VM \
        --memory 8096 --vcpus 1 --cpu host \
        --disk $VMDIR/$VM.qcow2,format=qcow2,bus=virtio \
        --network bridge=$BRIDGE,model=virtio \
        --os-variant=centos7.0 \
        --noautoconsole --debug
}

VMDIR=.
create_fssvm
- In the script, modify the FSSIMAGE=<path to fss-installer qcow2 image> line to point to the actual path of the Fabric Services System image on your system, for example:
FSSIMAGE=./fss-deployer-x.y.qcow2
- Modify the permissions of the shell script file.
chmod 755 fssvm_create.sh
- Execute the shell script.
./fssvm_create.sh
Creating the VM on VMware vSphere
You can deploy the OVA image using either of the following:
- the VMware vSphere vCenter or ESXi UI
For instructions, see Deploy an OVF or OVA Template in the VMware vSphere documentation.
- the VMware Open Virtualization Format (OVF) Tool CLI
The following section provides an example of how to use the VMware OVF Tool CLI.
- Download and install the latest version of the VMware OVF Tool from the VMware Developer website.
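To confirm that the tool is installed and available in your PATH, you can run a quick check; the exact version output varies by release:
# Print the installed OVF Tool version.
ovftool --version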
- Display details about the OVA image.
Execute the ovftool command with just the OVA image name as the argument.
$ ovftool fss-deployer-24.8.1-414.ova
OVF version:   1.0
VirtualApp:    false
Name:          fss-deployer
Download Size: 17.40 GB
Deployment Sizes:
  Flat disks:   40.00 GB
  Sparse disks: 21.38 GB
Networks:
  Name:        OAM
  Description: The Fabric Services System OAM (UI and API) network
  Name:        FABRIC
  Description: The Fabric Services System Fabric Management network
Virtual Machines:
  Name:             fss-deployer
  Operating System: centos7_64guest
  Virtual Hardware:
    Families:         vmx-14
    Number of CPUs:   2
    Cores per socket: 1
    Memory:           7.91 GB
    Disks:
      Index:       0
      Instance ID: 4
      Capacity:    40.00 GB
      Disk Types:  SCSI-lsilogic
    NICs:
      Adapter Type: VmxNet3
      Connection:   OAM
      Adapter Type: VmxNet3
      Connection:   FABRIC
References:
  File: fss-deployer-disk1.vmdk
- Deploy the OVA image using the OVF Tool.
For details about command line arguments, see the OVF Tool documentation on the VMware website.
Note: Ensure that you use thick provisioning for the disk and connect all the interfaces to a network. The secondary interface can be disconnected and disabled after the deployment, before you power on the VM.
$ ovftool --acceptAllEulas -dm=thick -ds=VSAN -n=fss-deployer \
  --net:"OAM=OAM-network" --net:"FABRIC=Fabric-network" \
  fss-deployer_24.8.1-414.ova \
  vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Opening OVA source: fss-deployer_24.8.1-414.ova
The manifest validates
Enter login information for target vi://vcenter.domain.tld/
Username: administrator%40vsphere.local
Password: ***********
Opening VI target: vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Deploying to VI: vi://administrator%40vsphere.local@vcenter.domain.tld/My-Datacenter/host/My-Cluster/Resources/My-Resource-Group
Transfer Completed
Configuring the Fabric Services System deployer VM
- From the VMware vSphere console or the KVM console, log in to the deployer VM.
Use the following credentials:
Username: root
Password: N0ki@FSSb4se!
Note: After the initial login, Nokia recommends that you change this default password to a stronger password to enhance the security of the deployer and the Fabric Services System environment.
- If your environment does not support or use cloud-init services, disable and stop these services.
# systemctl stop cloud-init cloud-init-local cloud-config cloud-final
# systemctl disable cloud-init cloud-init-local cloud-config cloud-final
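As a quick check, the following should report each unit as disabled after this step; units that are not installed produce an error instead, which is harmless:
# Print the enablement state of each cloud-init unit.
systemctl is-enabled cloud-init cloud-init-local cloud-config cloud-final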
- Enable SSH.
The base image is a hardened image, so SSH is disabled by default for the root user. To enable SSH, update the /etc/ssh/sshd_config file and change the following lines:
PasswordAuthentication no
PermitRootLogin no
to:
PasswordAuthentication yes
PermitRootLogin yes
Note: You can keep password authentication disabled to provide extra security. In this case, only key-based authentication works, and you must configure the appropriate public SSH keys for the root user to log in over SSH. In either case, root login must be permitted so that the deployer VM can reach the nodes.
- Restart SSH.
# systemctl restart sshd
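If you opt for key-based authentication, a minimal sketch of installing a public key for the root user, run from your workstation; the address 192.0.2.10 is a placeholder matching the example network configuration below:
# Generate a key pair if you do not already have one.
ssh-keygen -t ed25519
# Install the public key for the root user on the deployer VM.
ssh-copy-id root@192.0.2.10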
- Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file to configure the correct static IP address, DNS servers, and gateway.
The final content should look similar to the following, except with the IP address, DNS, and domain details specific to the target environment:
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.5
DNS2=192.0.2.6
DOMAIN=fss.nokia.local
MTU=9000
- Restart the network to apply the new configuration.
Execute the following command:
# systemctl restart NetworkManager.service
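To confirm that the new addressing is active, a quick check of the interface and routing table; the values should match the configuration file above:
# Verify the static address is assigned to eth0.
ip addr show eth0
# Verify the default route points at the configured gateway.
ip route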
- Configure the appropriate NTP servers.
Edit the /etc/chrony.conf configuration file and replace all lines beginning with "server" with the correct server lines for the environment, as in the sketch below.
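A minimal sketch of replacement server lines; the NTP host names are placeholders for your environment:
# Replace with your environment's NTP servers.
server ntp1.example.com iburst
server ntp2.example.com iburst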
- Restart the chronyd service.
# systemctl restart chronyd
- Verify that time synchronization is functioning properly.
# chronyc tracking
Reference ID    : 87E30FFE (192.0.2.5)
Stratum         : 4
Ref time (UTC)  : Wed Feb 16 01:20:36 2022
System time     : 0.000014215 seconds slow of NTP time
Last offset     : -0.000001614 seconds
RMS offset      : 0.000106133 seconds
Frequency       : 11.863 ppm slow
Residual freq   : -0.071 ppm
Skew            : 0.187 ppm
Root delay      : 0.063009784 seconds
Root dispersion : 0.018440660 seconds
Update interval : 64.5 seconds
Leap status     : Normal
If the Reference ID field does not show any of the configured servers, but instead refers to something like 127.127.1.1, time synchronization is not functioning properly.
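You can also list the configured sources directly; the source currently selected for synchronization is marked with an asterisk in the first column:
# Show each configured NTP source and its selection state.
chronyc sources -v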
- Synchronize the RTC clock and the system clock.
Ensure that the RTC and the system clock are synchronized after every reboot.
# hwclock --systohc
Then, verify that local time and the RTC time are synchronized.
# timedatectl
- Optional: Change the hostname.
# hostnamectl set-hostname new-hostname.domain.tld
- Reboot the Fabric Services System deployer VM to ensure that all services come up with the correct network configuration.
# reboot
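After the reboot, a quick end-to-end check from your workstation confirms that the deployer VM is reachable with its final configuration; the address is the placeholder from the example above:
# Log in over SSH and spot-check the network and time configuration.
ssh root@192.0.2.10 'ip addr show eth0; chronyc tracking'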