Fabric Services System installation
After the Fabric Services System deployer VM and the Fabric Services System nodes have been installed and configured, the environment is ready for the installation of the Fabric Services System application.
Using HTTPS for the UI and API
By default, HTTPS is enabled for the UI and API to enforce the use of TLS encryption (v1.2 or v1.3) for all communication to the Fabric Services System management interfaces. Enabling HTTPS ensures that all information is protected against snooping or tampering in transit.
During the installation, the installer generates a set of default certificates for use by the system in different functions. After the installation, a tool and process are available to replace these auto-generated certificates with customer-specific certificates. For more information, see Certificate Management in the Fabric Services System User Guide.
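As an optional check after installation (not part of the documented procedure), you can inspect the certificate currently being served and the negotiated TLS version with openssl; the address below is a placeholder for one of your node IP addresses or the OAM virtual IP.
# Display the subject, issuer, and validity dates of the certificate served on port 443.
# Replace 192.0.2.100 with a node IP address or the OAM virtual IP.
openssl s_client -connect 192.0.2.100:443 -tls1_2 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates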
Dual-stack networks
- The network must be ready for IPv6 and IPv4 IP addresses.
- Each node must be configured with an IPv4 and IPv6 default gateway and the gateways must be functional.
- The pods running in the VMs need to connect to SR Linux, which is in a different network.
- After creating the VMs, Nokia recommends that you verify the required connectivity over IPv4 and IPv6.
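A minimal sketch of such a verification, run from one of the VMs, assuming the iputils ping command is available; the target addresses are placeholders for your IPv4 gateway and an IPv6 gateway or peer address.
# Check IPv4 reachability (placeholder: the node's IPv4 gateway).
ping -4 -c 3 192.0.2.1
# Check IPv6 reachability (placeholder: an IPv6 gateway or peer address).
ping -6 -c 3 2001:db8:f685:0::1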
To enable dual-stack support, set the enable_dual_stack_networks parameter to True in the sample-input.json file. You must also set an IPv6 address in the ip6 parameter for each worker node and storage node.
Geo-redundant installations
For a geo-redundant system, you must deploy two Fabric Services System clusters, one for the intended active system and the other for the standby, using different sample-input.json files for the active and standby sites. The relevant geo-redundancy settings for the deployer on the active and standby sites are shown in Geo-redundant deployer configuration example. For related information, see Preparing the Fabric Services System clusters for geo-redundant configuration.
Load balancing using a VIP
The system supports cluster-based load balancing using one virtual IP (VIP) address for the OAM network (UI, API, and other northbound services) and one for the node network (SR Linux ZTP and DHCP relay). If a Fabric Services System compute node that services an OAM or node network service IP becomes unavailable, the system automatically moves the VIP to another compute node and continues to service these networks.
You can configure an IPv4 and IPv6 VIP for the OAM and node networks.
When cluster load balancing is configured, the MetalLB load balancer is installed on the Kubernetes cluster and provides load balancing in L2 mode. MetalLB assigns the configured IP address to the nodes in the cluster, but only one node at a time responds to ARP requests for it. If this active node fails, another node becomes active. Typically, the IP pool is within the CIDR of one of the node interfaces, so no extra routing configuration is needed in the network.
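As an optional check (assuming the arping utility is installed on a host in the same Layer 2 segment), you can see which node currently answers ARP for a VIP; the interface and address below are placeholders taken from the IPv4 example later in this section.
# The MAC address in the replies identifies the node that currently holds the VIP.
arping -I eth0 -c 3 192.168.5.3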
In the sample-input.json file, the parameters for configuring load balancing are in the k8s section, under lbconfig. First, set the lbtype parameter to native. The oam section configures the VIP, and the interface on which to advertise it, for the northbound interfaces (including the UI and REST API); the node section configures the VIP, and the interface on which to advertise it, for the node network (SR Linux ZTP and DHCP relay). The VIP for the node network must be the same as the settings for the ztpaddress and ztpv6address parameters in the sample-input.json file.
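A minimal sketch of the lbconfig portion of the k8s section, reusing the interface names and VIP addresses from the IPv4 example later in this section:
"k8s": {
    "lbconfig": {
        "lbtype": "native",
        "oam": {
            "interface": "eth0",
            "ipv4": ["192.168.5.3"]
        },
        "node": {
            "interface": "eth1",
            "ipv4": ["192.168.2.3"]
        }
    }
}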
Editing the installation configuration file
As part of the deployment, you must provide specific details about the configurable portions of the installation using a configuration file. The details you provide instruct the deployer how to proceed when setting up the Kubernetes deployment, the Fabric Services System software, and the Digital Sandbox.
The input configuration file is the sample-input.json file. For a geo-redundant deployment, create a different sample-input.json file for the active and standby sites. The relevant geo-redundancy settings for the deployer on the active and standby sites are shown in Geo-redundant deployer configuration example.
-
From the deployer, access the input configuration file.
[root@fss-deployer ~]$ vi sample-input.json
-
Edit the configuration file.
Update the file with the following settings:
- IP addresses of the nodes to be used in your Fabric Services System deployment
- deployer node settings
- worker node settings
- storage node settings
  Note: When you set the devices parameter, specify only the partition name ("sdb" or "vdb" in the example below); you do not need to specify the path.
- time synchronization
- replica count
- optional: Digital Sandbox installation characteristics
- optional: remote syslog settings
- optional: load balancer configuration
The deployer creates three Kubernetes master nodes. To specify which worker nodes are Kubernetes master nodes, tag the nodes with the master role in the configuration file.
The table below describes the fields in the sample-input.json file. Examples of sample-input.json files for IPv4 and dual-stack deployments follow.
Table 1. Field definitions
deployernode: Specifies the IP address, gateway, and netmask configured on the network interface of the deployer VM. The deployer VM must be reachable by all of the Fabric Services System nodes, and the Fabric Services System nodes must be reachable by the deployer VM.
- ipaddr: the IP address of the deployer VM.
- gateway: the gateway address of the interface on the deployer node.
- netmask: the netmask of the interface on the deployer node.
- sitename: specifies the name of the site in a geo-redundant configuration. If this field is not set, the deployer name is used by default.
- role: specifies whether the role of the site is active, standby, or standalone in a geo-redundant configuration. If this field is not set, it is set to standalone by default.
- accessip: specifies the IP address to use to access the deployer in a geo-redundant configuration. If this field is not set, the IP address configured for the ipaddr field is used.
rsyslog: Specifies the remote syslog server settings.
- host: the IP address or FQDN of the remote syslog server.
- port: the port that the rsyslog utility uses for network connectivity.
- proto: the protocol used for syslog traffic, either TCP or UDP.
Note: The system currently supports one remote syslog server.
digitalsandbox: Specifies Digital Sandbox parameters.
- enabled: when this flag is set to true, the Digital Sandbox component is installed; ensure that at least one worker node is tagged with the digitalsandbox role. When set to false, the Digital Sandbox component is not installed.
fss: Specifies Fabric Services System deployment options.
- ztpaddress: specifies an address associated with the node running Traefik. The node can be any of the Fabric Services System cluster nodes. The SR Linux nodes connect to this IP address during the boot process to get the software image and the configuration. This IP address must be reachable from the SR Linux management network.
- ztpv6address: specifies the IPv6 address associated with the node for SR Linux to connect using IPv6.
- dhcpnode: specifies a node on which the Fabric Services System DHCP pod is scheduled.
- dhcpinterface: specifies the address on which the DHCP server listens for DHCP requests coming from the DHCP relay agent. Optionally, you can connect SR Linux nodes through the relay agent to reach the Fabric Services System if they are not on the management network.
  If the network is not configured with a DHCP relay agent, do not set this parameter; remove it if it is present in the input configuration file.
- dhcpv6interface: specifies the IPv6 address of the DHCPv6 relay agent.
  If the network is not configured with a DHCP relay agent, do not set this parameter; remove it if it is present in the input configuration file.
- truststoreFilename: specifies the location of the truststore filename with the absolute path information. The JKS file must be generated to access the LDAP server from the Fabric Services System instance. The alternate names in the certificate should match the name and IP address configured for the federation provider (using the Fabric Services System UI or REST API).
- truststorePassword: specifies the password used to access the truststore.
- kafkaconfig: Configures the parameters that enable
third-party tools to access Fabric Services System alarms.
- port: the port number used by the client to connect to the Kafka service; specify a value between 30000 and 32767.
- groupprefix: the user group prefix for the client to use to connect to the Kafka service.
- user and password: the credentials to use to authenticate.
- maxConnections: the maximum number of clients that can connect to the Kafka service. The maximum allowed value is 10.
k8s: Specifies whether the system supports dual-stack networks and configures cluster-based load balancing.
- enable_dual_stack_networks: specifies whether dual-stack networking is supported. Set to True to enable support for IPv6 networks. Note: The system supports only a dual-stack network where each VM has an IPv4 and IPv6 address; the system does not support a pure IPv6 network.
- lbconfig section: configures OAM and node load balancing.
  lbtype: set to native to configure load balancing. If set, the oam section, the node section, or both must be provided.
- oam section: configures load balancing for the northbound interface.
  interface: specifies the interface on which the load balancer address is advertised.
  ipv4: specifies the IPv4 address for the load balancer to use. This value is mandatory.
  ipv6: specifies the IPv6 address for the load balancer to use.
- node section: configures load balancing for SR Linux ZTP and DHCP relay.
  interface: specifies the interface on which the load balancer address is advertised.
  ipv4: specifies the IPv4 address for the load balancer to use.
  ipv6: specifies the IPv6 address for the load balancer to use. This value must be the same as the value for the ztpv6address parameter.
workernodes: Specifies the list of nodes intended to be part of the deployment, except for the deployer host. Worker nodes include storage nodes and Digital Sandbox nodes.
- hostip: the IP address of the specific worker node.
- ip6: the IPv6 address of the worker node; required if the enable_dual_stack_networks parameter is set to True.
- hostname: the hostname of the worker node.
- role: the specified role of the worker node. For Digital Sandbox nodes, specify this value as digitalsandbox. For Kubernetes master nodes, specify this value as master.
replicacount: Specifies the replica count for Gluster volumes, including the active volume.
The default value is 1, indicating no replica (active volume only). A replica count higher than 1 creates the respective number of replica storage volumes. The value cannot be greater than the number of storage nodes.
storagenodes: Specifies the list of nodes used to create a storage pool. The number of storage nodes must match the value of replicacount, if configured. Nokia recommends that you configure a minimum of three storage nodes.
- hostip: the IP address of the specific storage node.
- ip6: the IPv6 address of the specific storage node; required if the enable_dual_stack_networks parameter is set to True.
- hostname: the hostname of the storage node.
- devices: separate block devices must be configured. Configure a raw partition, specifying only the partition name (for example, sdb1, as shown in the examples that follow). If an existing file system is present on the device, the setup cannot proceed; see the device check sketch after this table.
singlenode: Specifies whether the deployment consists of only a single node, for extra small deployments.
The default value is false, indicating that the deployment is a standard three- or six-node deployment. If set to true, the deployment is set up on a single node and has no redundancy built in.
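A minimal sketch of such a device check (an optional step, not part of the documented procedure); the device path is a placeholder for the partition you plan to list in the devices parameter.
# Verify that the partition has no existing file system before using it for storage.
# Replace /dev/sdb1 with the partition you intend to configure.
lsblk -f /dev/sdb1
# An empty FSTYPE column indicates that no file system is present.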
After you finish editing the input configuration file, you can install the Fabric Services System environment as described in Installing the Fabric Services System environment.
IPv4 example
The following is an example of a sample-input.json
configuration file for
an IPv4 network.
{
"deployernode": {
"ipaddr": "192.0.2.200",
"gateway": "192.0.2.1",
"netmask": "255.255.254.0"
"sitename": "SiteA",
"role": "active",
"accessip": "10.254.107.74"
},
"digitalsandbox": {
"enabled": true,
"volumenode": "fss-node04"
},
"fss": {
"dhcpnode": "fss-node01",
"dhcpinterface": "128.66.0.201/24",
"ztpaddress": "192.168.2.3",
"kafkaconfig": {
"port": "31000",
"groupprefix": "fsskafka",
"user": "fssalarms",
"password": "fssalarms",
"maxConnections": 2
}
},
"rsyslog": {
"host": "192.0.2.161",
"port": 51400,
"proto": "udp"
},
"k8s": {
"enable_dual_stack_networks": false
"lbconfig": {
"lbtype": "native",
"oam": {
"interface": "eth0"
"ipv4": ["192.168.5.3"]
}
"node": {
"interface": "eth1"
"ipv4": ["192.168.2.3"]
}
}
},
"replicacount": 2,
"workernodes": [
{
"hostip": "192.0.2.201",
"hostname": "fss-node01",
"role": "master"
},
{
"hostip": "192.0.2.202",
"hostname": "fss-node02",
"role": "master"
},
{
"hostip": "192.0.2.203",
"hostname": "fss-node03",
"role": "master"
},
{
"hostip": "192.0.2.204",
"hostname": "fss-node04",
"role": "digitalsandbox"
},
{
"hostip": "192.0.2.205",
"hostname": "fss-node05",
"role": "digitalsandbox"
},
{
"hostip": "192.0.2.206",
"hostname": "fss-node06",
"role": "digitalsandbox"
}
],
"storagenodes": [
{
"hostip": "192.0.2.204",
"hostname": "fss-node04",
"devices": [
"sdb1"
]
},
{
"hostip": "192.0.2.205",
"hostname": "fss-node05",
"devices": [
"sdb1"
]
},
{
"hostip": "192.0.2.206",
"hostname": "fss-node06",
"devices": [
"sdb1"
]
}
]
}
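As an optional syntax check before running the installer (assuming python3 is available on the deployer VM), you can validate the edited file with Python's built-in json.tool module, which catches errors such as missing commas.
# Parse the file; any JSON syntax error is reported with its line number.
python3 -m json.tool sample-input.json > /dev/null && echo "sample-input.json parses as valid JSON"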
Dual-stack example
The following sample-input.json configuration file is an example for a dual-stack (IPv4 and IPv6) network.
{
"deployernode": {
"ipaddr": "192.0.2.200",
"gateway": "192.0.2.1",
"netmask": "255.255.254.0"
},
"digitalsandbox": {
"enabled": true,
"volumenode": "fss-node04"
},
"fss": {
"dhcpnode": "fss-node01",
"dhcpinterface": "128.66.0.201/24",
"dhcpv6interface": "2001:db8:f128:0::201/64",
"ztpaddress": "128.66.0.201",
"ztpv6address": "2001:db8:f128:0::201",
"truststoreFilename": "/root/fss.truststore.jks",
"truststorePassword": "fss123"
"kafkaconfig": {
"port": "31000",
"groupprefix": "fsskafka",
"user": "fssalarms",
"password": "fssalarms",
"maxConnections": 2
}
},
"k8s": {
"enable_dual_stack_networks": true
"lbconfig": {
"lbtype": "native",
"oam": {
"interface": "eth0",
"ipv4": ["192.0.2.100"],
"ipv6": ["2001:db8:f685:0::100"]
},
"node": {
"interface": "eth1",
"ipv4": ["128.66.0.201"],
"ipv6": ["2001:db8:f128:0::201"]
}
}
},
"replicacount": 2,
"rsyslog": {
"host": "192.0.2.161",
"port": 514,
"proto": "udp"
},
"workernodes": [
{
"hostip": "192.0.2.201",
"ip6": "2001:db8:f685:0::201",
"hostname": "fss-node01",
"role": "master"
},
{
"hostip": "192.0.2.202",
"ip6": "2001:db8:f685:0::202",
"hostname": "fss-node02",
"role": "master"
},
{
"hostip": "192.0.2.203",
"ip6": "2001:db8:f685:0::203",
"hostname": "fss-node03",
"role": "master"
},
{
"hostip": "192.0.2.204",
"ip6": "2001:db8:f685:0::204",
"hostname": "fss-node04",
"role": "digitalsandbox"
},
{
"hostip": "192.0.2.205",
"ip6": "2001:db8:f685:0::205",
"hostname": "fss-node05",
"role": "digitalsandbox"
},
{
"hostip": "192.0.2.206",
"ip6": "2001:db8:f685:0::206",
"hostname": "fss-node06",
"role": "digitalsandbox"
}
],
"storagenodes": [
{
"hostip": "192.0.2.204",
"ip6": "2001:db8:f685:0::204",
"hostname": "fss-node04",
"devices": [
"sdb1"
]
},
{
"hostip": "192.0.2.205",
"ip6": "2001:db8:f685:0::205",
"hostname": "fss-node05",
"devices": [
"sdb1"
]
},
{
"hostip": "192.0.2.206",
"ip6": "2001:db8:f685:0::206",
"hostname": "fss-node06",
"devices": [
"sdb1"
]
}
]
}
Geo-redundant deployer configuration example
For a geo-redundant system, create two sample-input.json files: one for the active and one for the standby. The following examples show the deployer section for the active and standby sites.
For the active site, the deployernode section should be similar to the following example:
"deployernode": {
"ipaddr": "192.0.2.200",
"gateway": "192.0.2.1",
"netmask": "255.255.254.0"
"sitename": "primary",
"role": "active",
"accessip": "192.0.2.200"
},
For the standby site, the deployernode section should be similar to the following example:
"deployernode": {
"ipaddr": "192.0.20.200",
"gateway": "192.0.20.1",
"netmask": "255.255.254.0"
"sitename": "secondary",
"role": "standby",
"accessip": "192.0.20.200"
},
Installing the Fabric Services System environment
-
Initiate the setup.
[root@fss-deployer ~]$ fss-install.sh configure sample-input.json
The CLI prompt indicates when the configuration is complete.
-
Start the installation of Kubernetes, the Fabric Services System software, and
the Digital Sandbox.
[root@fss-deployer ~]$ fss-install.sh
The installation time varies depending on the capacity of your system.
-
After the installation script completes, verify the installation by logging in to the Fabric Services System user interface using one of the node IP addresses.
Log in using the following default username and password:
Username: admin
Password: NokiaFss1!
Note: After the initial login, Nokia recommends that you change this default admin password to a stronger password to secure the platform properly.
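As an optional reachability check from the command line (not part of the documented procedure), you can confirm that the UI answers over HTTPS; the address is a placeholder for one of your node IP addresses, and -k skips verification of the auto-generated certificate.
# Confirm that the UI/API endpoint responds over TLS (placeholder address).
curl -k --tlsv1.2 -s -o /dev/null -w "HTTP status: %{http_code}\n" https://192.0.2.201/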
Preparing the Fabric Services System clusters for geo-redundant configuration
-
Ensure that the clusters in the systems meet the following requirements:
-
Use the same deployer and compute image versions on both clusters.
-
The number of nodes on the active cluster should match the number of nodes on the standby cluster; the node configurations should also match.
-
The active and standby clusters should be installed with the site-specific network configurations.
-
Connectivity should be established between the active and standby sites.
-
The active and standby sites should have the correct network connectivity to the SR Linux network.
-
Configure geo-redundancy for the active and standby clusters.
For instructions, see
Geo-redundancy
in the Fabric Services System User Guide.
Troubleshooting a failed installation
If the Fabric Services System installation fails for any reason, you can use a script that is bundled with the system to generate information about the installation status. For assistance with troubleshooting, contact your Nokia support team.
The technical support script is included with the Fabric Services System.
For more information about the script and how to run it, see "Capturing troubleshooting data" in the Fabric Services System User Guide.