To install NSP Flow Collectors and Flow Collector Controllers

Purpose

Perform this procedure to install one or more NSP Flow Collectors and Flow Collector Controllers, which support collocated and distributed deployment.

Note: For a small-scale deployment, you can collocate an NSP Flow Collector Controller and an NSP Flow Collector on one station, as described in the procedure. A small-scale deployment has a maximum of two stations, and supports the following:

  • standalone—one station that hosts a Flow Collector Controller and Flow Collector, and a second station that hosts only a Flow Collector

  • redundant—two stations that each host a Flow Collector Controller and Flow Collector

Note: The root user password on each NSP Flow Collector Controller and Flow Collector station must be identical.

Note: An NSP Flow Collector or Flow Collector Controller uninstallation backs up the component configuration files in the /opt/nsp/backup_flow directory on the station. A subsequent NSP Flow Collector or Flow Collector Controller installation on the station automatically reloads the saved configuration files. If you do not want the previous configuration restored during a subsequent installation, you must delete the /opt/nsp/backup_flow directory before the installation.

Note: The install.sh utility requires SSH access to a target station. To enable SSH access, you must do one of the following.

  • Configure the required SSH keys on the stations.

  • If each remote station has the same root user password, include the --ask-pass argument in the install.sh command; for example:

    ./install.sh --ask-pass --target remote_station
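The SSH-key alternative can be sketched as follows; the key path and comment are illustrative choices, not values mandated by the procedure, and the `ssh-copy-id` step is shown as a comment because the station address is site-specific:

```shell
# Sketch of the SSH-key alternative: generate a dedicated key pair for
# install.sh and push the public key to each target station. The key path
# below is an illustrative choice, not a value mandated by the procedure.
KEY=/tmp/nsp_install_key
if command -v ssh-keygen >/dev/null 2>&1 && [ ! -f "$KEY" ]; then
    ssh-keygen -q -t rsa -N "" -f "$KEY" -C "nsp-installer"
fi
# Distribute the public key to every target station, for example:
#   ssh-copy-id -i "${KEY}.pub" root@<station_address>
```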

Steps
 

1 

Download the NSP component installer package (NSP_NSD_NRC_R_r.tar.gz) from OLCS and extract it on any station running a supported version of RHEL. This does not have to be the station on which an NSP Flow Collector Controller or Flow Collector is to be installed; the installer can perform remote installations.

Note: In subsequent steps, the directory is called the NSP_installer_directory.

The NSP_installer_directory/NSD_NRC_R_r directory is created, where R_r is the NSP release identifier in the form MAJOR_minor.
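The extraction step can be sketched in shell as follows; 23_11 is a hypothetical R_r value used only for illustration, and the file-existence guard keeps the sketch harmless if the package is not present:

```shell
# Sketch of the extract step; 23_11 is a hypothetical R_r value.
R_r=23_11
PKG="NSP_NSD_NRC_${R_r}.tar.gz"
# Extract only if the package is actually present in the current
# directory, so the sketch is inert anywhere else.
if [ -f "$PKG" ]; then
    tar xzf "$PKG"
fi
echo "installer directory: NSD_NRC_${R_r}"
```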


2 

Open a console window.


3 

Enter the following as the root user:

cd NSP_installer_directory/NSD_NRC_R_r ↵


4 

Create a hosts file in the current directory that contains the required entries in the following sections:

  • [nspos]—one entry for each ZooKeeper host; the ZooKeeper hosts are one of the following:

    • if the NSP system includes only the NFM-P, the NFM-P main servers

    • otherwise, the VIP address of each NSP cluster

  • [fcc]—one line entry for each Flow Collector Controller

  • [fc]—one line entry for each Flow Collector

Note: If an NSP Flow Collector Controller and Flow Collector are to be collocated on one station, specify the same address in the [fcc] and [fc] sections; for example:

[fcc]
203.0.113.3 advertised_address=198.51.100.3 ansible_host=198.51.100.3

[fc]
203.0.113.3 ansible_host=198.51.100.3 fc_mode=AA

See NSP hosts file for configuration information.

Note: A sample hosts file is in the following directory; you must use a modified copy of the file for installation:

  • NSP_installer_directory/NSD_NRC_R_r/examples

    where R_r is the NSP software release
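Putting the sections together, a minimal hosts file for a small-scale collocated deployment might look like the following sketch; all addresses are placeholders, and the exact [nspos] entry format must be taken from the NSP hosts file documentation:

```
[nspos]
203.0.113.10

[fcc]
203.0.113.3 advertised_address=198.51.100.3 ansible_host=198.51.100.3

[fc]
203.0.113.3 ansible_host=198.51.100.3 fc_mode=AA
```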


5 

Create a config.yml file in the NSP installer directory that includes the following sections; see NSP RPM-based configuration file for information.

  • multi-component deployment:

    • sso

    • tls

    • section for each component to install

  • independent deployment, for example, if you are adding a Flow Collector or Flow Collector Controller to an NFM-P-only system:

    • sso

    • tls

Note: The following parameter values in the tls section must match the values in the NSP configuration file or, if the deployment does not include an NSP cluster, the values in the NFM-P main server configuration:

  • secure

  • PKI server parameters

You can use the samconfig “show” command on a main server to display the tls parameters. See NFM-P samconfig utility for information about using the samconfig utility.

Note: A sample config.yml file is in the following directory; you must use a modified copy of the file for installation:

  • NSP_installer_directory/NSD_NRC_R_r/examples

    where R_r is the NSP software release
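As a rough illustration, a config.yml for an independent deployment could take a shape like the following; the key names and values shown are assumptions for illustration only, and the authoritative structure is defined in NSP RPM-based configuration file:

```yaml
# Illustrative sketch only; key names are assumptions, values are
# placeholders taken from documentation-style example addresses.
sso:
  # single sign-on settings for the component web interfaces

tls:
  secure: "true"               # must match the NSP or NFM-P main server value
  # PKI server parameters; these must also match the NSP or NFM-P values
  pki_server: "203.0.113.10"
  pki_server_port: "2391"
```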


6 

If you intend to use the NSP PKI server, start the PKI server.

  1. Log in as the root user on the NSP deployer host.

  2. Open a console window.

  3. Enter the following:

    cd /opt/nsp/NSP-CN-DEP-release-ID/NSP-CN-release-ID/tools/pki ↵

  4. Enter the following:

    ./pki-server ↵

    The PKI server starts, and the following is displayed:

    date time Using Root CA from disk, and serving requests on port nnnn


7 

Enter the following:

cd NSP_installer_directory/NSD_NRC_R_r/bin ↵


8 

Enter the following:

Note: Include the --ask-pass option only if each target station has the same root user password.

./install.sh --ask-pass --target target_list ↵

where target_list is a comma-separated list of the NSP Flow Collector Controller and NSP Flow Collector internal IP addresses

The NSP Flow Collector Controller or NSP Flow Collector software is installed on the stations.
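For a redundant small-scale deployment, the invocation might look like the sketch below; both addresses are hypothetical, and the executable-file guard simply keeps the sketch inert outside the installer bin directory:

```shell
# Hypothetical internal addresses; replace with your Flow Collector
# Controller and Flow Collector station addresses.
TARGETS="198.51.100.3,198.51.100.4"
# Run from NSP_installer_directory/NSD_NRC_R_r/bin; the guard makes the
# sketch a no-op when install.sh is not present.
if [ -x ./install.sh ]; then
    ./install.sh --ask-pass --target "$TARGETS"
fi
```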


Configure NFM-P in DR deployment
 

9 

If the NSP cluster and NSP Flow Collector Controllers are not deployed in a DR deployment, go to Step 15.


10 

Log in as the root user on the NFM-P main server in the same data center as the NSP Flow Collector Controller.


11 

Open a console window.


12 

Stop the main server.

  1. Enter the following to switch to the nsp user:

    su - nsp ↵

  2. Enter the following:

    bash$ cd /opt/nsp/nfmp/server/nms/bin ↵

  3. Enter the following:

    bash$ ./nmsserver.bash stop ↵

  4. Enter the following:

    bash$ ./nmsserver.bash appserver_status ↵

    The server status is displayed; the server is fully stopped if the status is the following:

    Application Server is stopped

    If the server is not fully stopped, wait five minutes and then repeat this step. Do not perform the next step until the server is fully stopped.

  5. Enter the following to switch back to the root user:

    bash$ su ↵

  6. If the NFM-P is not part of a shared-mode NSP deployment, enter the following to display the nspOS service status:

    nspdctl status ↵

    Information like the following is displayed.

    Mode:     DR

    Role:     redundancy_role

    DC-Role:  dc_role

    DC-Name:  dc_name

    Registry: IP_address:port

    State:    stopped

    Uptime:   0s

    SERVICE           STATUS

    service_a         inactive

    service_b         inactive

    service_c         inactive

You must not proceed to the next step until all NSP services are stopped; if the State is not ‘stopped’, or if the STATUS of any listed service is not ‘inactive’, repeat this substep.
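The stop check in the substep above can be scripted; the helper below is a sketch that parses captured `nspdctl status` output and succeeds only when the State is stopped and no service row still reports an active status:

```shell
# Sketch: return 0 only when the captured status text shows a stopped
# State and no service row contains the whole word "active" (a whole-word
# search does not match "inactive").
all_stopped() {
    out=$1
    printf '%s\n' "$out" | grep -q 'State:[[:space:]]*stopped' || return 1
    if printf '%s\n' "$out" | grep -qw 'active'; then
        return 1
    fi
    return 0
}
# Usage: all_stopped "$(nspdctl status)" && echo "safe to proceed"
```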


13 

You must create an association between the local NSP Flow Collector Controller and the local NFM-P main server to ensure that the Flow Collector and Controller remain in communication with the local NFM-P during NSP DR activity.

Add the local data center name to the main-server configuration.

Note: The data center name must be a name other than “default”.

  1. Enter the following:

    samconfig -m main ↵

    The samconfig utility opens, and the following is displayed:

    Start processing command line inputs...

    <main>

  2. Enter the following:

<main> configure nspos dc-name data_center ↵

    where data_center is the data center name, which must match the dcName value for the local NSP cluster in the NSP configuration file

    The prompt changes to <main configure nspos>.

  3. Enter the following:

    <main configure nspos> exit ↵

    The prompt changes to <main>.

  4. Enter the following:

    <main> apply ↵

    The configuration is applied.

  5. Enter the following:

    <main> exit ↵

    The samconfig utility closes.


14 

Start the main server.

  1. Enter the following to switch to the nsp user:

    su - nsp ↵

  2. Enter the following:

    bash$ cd /opt/nsp/nfmp/server/nms/bin ↵

  3. Enter the following:

    bash$ ./nmsserver.bash start ↵

  4. Enter the following:

    bash$ ./nmsserver.bash appserver_status ↵

    The server status is displayed; the server is fully initialized if the status is the following:

    Application Server process is running.  See nms_status for more detail.

    If the server is not fully initialized, wait five minutes and then repeat this step. Do not perform the next step until the server is fully initialized.


Start NSP Flow Collector Controllers
 
15 

Perform the following steps on each NSP Flow Collector Controller station.

Note: If an NSP Flow Collector is also installed on the station, the Flow Collector starts automatically.

  1. Log in to the station as the nsp user.

  2. Enter the following:

    bash$ /opt/nsp/flow/fcc/bin/flowCollectorController.bash start ↵

    The NSP Flow Collector Controller starts.

  3. Close the console window.


Configure NSP Flow Collector Controllers
 
16 

Perform Step 18 to Step 24 for each NSP Flow Collector Controller.


17 

Go to Step 25.


18 

Use a browser to open the following URL:

https://server:8443/fcc/admin

where server is the NSP Flow Collector Controller IP address or hostname
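The URL construction can be expressed as a tiny helper; the address in the example call is a placeholder:

```shell
# Build the Flow Collector Controller admin URL from a station address
# or hostname; port 8443 is the value given in this step.
fcc_admin_url() {
    printf 'https://%s:8443/fcc/admin\n' "$1"
}
fcc_admin_url 203.0.113.3
```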


19 

When the login form opens, enter the required user credentials and click OK. The default user credentials are available from technical support.

The NSP Flow Collector Controller page opens.


20 

Click on the NFM-P Configuration tab.


21 

Configure the parameters in the following table.

Table 14-1: NSP Flow Collector Controller parameters, NFM-P Configuration tab

Parameter            Description

NFM-P
  Active             The public IP address or hostname of the standalone main server, or the primary main server in a redundant deployment
  Standby            The public IP address or hostname of the standby main server in a redundant deployment

XML API
  User Name          The NFM-P user for XML API file transfers
  Password           The NFM-P user password for XML API file transfers
  HTTPS Port         The HTTPS port for file transfers

JMS
  JNDI Port          The TCP port on the main server for JMS communication
  Reconnect          Whether the NSP Flow Collector attempts to reconnect to the NFM-P after a connection failure
  Durable            Whether the NFM-P JMS subscription is durable
  Reconnect Attempts The number of times to attempt to reconnect to the NFM-P after a connection failure
  Reconnect Delay    The time, in seconds, to wait between NFM-P reconnection attempts
  Connection Timeout The time, in seconds, to wait for a response to an NFM-P connection attempt

File Transfer
  Protocol           The protocol to use for main server file transfers
  Port               The main server TCP port for file transfers
  User Name          The FTP or SFTP username for file transfers from the main server
  Password           The FTP or SFTP password for file transfers from the main server


22 

Click Save NFM-P configuration.


23 

Click on the Operations tab.


24 
CAUTION

Service Disruption

The Force Snapshot Extraction option consumes NFM-P main server resources.

Ensure that you perform the step only during a period of low NFM-P system activity.

  1. If the NSP Flow Collectors are to collect AA flow statistics, click Force AA Snapshot Extraction.

  2. If the NSP Flow Collectors are to collect system flow statistics, click Force SYS Snapshot Extraction.

The NSP Flow Collector Controller extracts the managed network information from the NFM-P.


Start NSP Flow Collectors
 
25 

Start each NSP Flow Collector that is not collocated with an NSP Flow Collector Controller.

Note: Any NSP Flow Collector that is collocated with a Flow Collector Controller is automatically started earlier in the procedure.

  1. Log in to the NSP Flow Collector station as the nsp user.

  2. Enter the following:

    bash$ /opt/nsp/flow/fc/bin/flowCollector.bash start ↵

    The NSP Flow Collector starts.

  3. Close the console window.


Configure NSP Flow Collectors
 
26 

As required, perform the following steps on each NSP Flow Collector station to specify the NEs from which the NSP Flow Collector is to collect statistics.

  1. Use a browser to open the following URL:

    https://server:8443/fc/admin

    where server is the NSP Flow Collector IP address or hostname

    The Collection Policy configuration page opens.

  2. Click Add. A new table row is displayed.

  3. Configure the following parameters:

    • System ID

      The System ID value must match the System ID that the NFM-P associates with the NE, for example, as shown on the NE properties form in the GUI.

      You can specify multiple MDAs on one NE by adding one table row for each MDA and using the same System ID in each row.

    • Description

    • Source IPFIX Address

      The Source IPFIX Address value is the NE address specified in the discovery rule for the NE.

    • Port

  4. If the NSP Flow Collector is to collect system Cflowd statistics, use the Flow Protocol drop-down to choose a protocol.

  5. To delete an NE, select the Delete on save check box beside the NE.

  6. Click Save Configuration. The configuration is saved.


27 

Click on the Aggregation Policy tab.


28 

Perform one of the following:

  1. If the NSP Flow Collector is to collect system Cflowd statistics, select the required aggregation types from the tabs in the lower panel.

  2. If the NSP Flow Collector is to collect AA statistics, select one or more statistics classes in the Subscriber Collection panel to enable aggregation for the classes.


29 

Configure the aggregations.

Note: The statistics collection interval affects NSP Flow Collector performance. A larger interval results in proportionally larger files, which take longer to store and transfer.

Note: For BB NAT statistics, you must set the collection interval no higher than the following, based on the expected flow rate:

  • 350 000 flows/sec—1 minute

  • 80 000 flows/sec—5 minutes

  • 40 000 flows/sec—15 minutes

You can achieve the stated rates only if you use a remote FTP client to retrieve the records and do not enable the record transfer in Step 30.

If you enable the record transfer in Step 30, you must set the collection interval no higher than the following, based on the expected flow rate:

  • 100 000 flows/sec—1 minute

  • 50 000 flows/sec—5 minutes

  • 25 000 flows/sec—15 minutes

  1. Use the Interval drop-down menus in the Aggregation Intervals panel to specify the aggregation interval for each statistic type, as required.

  2. The Interval Closing Timeout parameter specifies a latency value that is applied at the end of a collection interval to ensure that any queued statistics are written to the current file. Typically, the default value of one second is adequate; configure the parameter only at the request of technical support.

  3. Click on the tab in the lower panel that corresponds to the statistic type.

  4. Select or deselect aggregations, as required.
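The proportionality noted above can be sanity-checked with simple arithmetic; the figures below use the 1-minute BB NAT limit from this step and give only an upper-bound estimate of flows captured per file:

```shell
# Upper bound on flows recorded in one collection file:
# flow rate multiplied by the collection interval length.
RATE=350000          # flows per second (the 1-minute BB NAT limit)
INTERVAL=60          # collection interval, in seconds
FLOWS_PER_FILE=$((RATE * INTERVAL))
echo "$FLOWS_PER_FILE flows per 1-minute file"
```

Tripling the interval triples the per-file bound, which is why the 5- and 15-minute limits above drop to lower flow rates.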


30 

Configure the transfer of BB NAT records in CSV format to a file server, if required.

Note: A minimum 1 Gbyte/s link is required between the NSP Flow Collector and the file server.

Note: SFTP transfers are considerably slower than FTP transfers.

  1. Click on the NAT Transfer tab.

  2. Configure the parameters:

    • Enable Transfer—whether file transfers are enabled

    • Transfer Protocol—FTP or SFTP

    • IP Address / Host name—file server address

    • Port—file server port

    • Location—file server directory that is to contain the files

    • User—FTP or SFTP username

    • Password—FTP or SFTP password


31 

Click Save Configuration. The configuration is saved.


32 

Close the open browser pages.


33 

If no other components are to be deployed, stop the PKI server by entering Ctrl+C in the console window.


34 

Close the open console windows.

End of steps