To upgrade a standalone Release 25.11 or earlier auxiliary database

Description

Note: This procedure is performed as part of the NSP or NFM-P upgrade pathway.

The following procedure describes how to upgrade a standalone NSP auxiliary database from Release 25.11 or earlier.

Note: You require the following user privileges on each station:

Note: Ensure that you record the information that you specify, for example, directory names, passwords, and IP addresses.

Note: The following RHEL CLI prompts in command lines denote the active user, and are not to be included in typed commands.

Steps
Obtain software
 

Download the following installation files to an empty local directory on a station that is reachable by each station in the auxiliary database cluster:

  • nspos-auxdb-R.r.p-rel.v.rpm

  • VerticaSw_PreInstall.sh

  • nspos-jre-R.r.p-rel.v.rpm

  • vertica-R.r.p-rel.tar

where

R.r.p is the NSP release identifier, in the form MAJOR.minor.patch

v is a version number
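Before transferring the files, a quick presence check can catch an incomplete download. The sketch below is a hedged helper, not part of the product tooling; the directory path and release identifiers used in any invocation are placeholders:

```shell
#!/bin/sh
# Sketch: verify that a directory contains the four expected NSP auxiliary
# database installation files before transfer. The glob patterns follow the
# filename forms listed above; the directory argument is supplied by the user.
check_sw_dir() {
    dir="$1"
    missing=0
    for pattern in 'nspos-auxdb-*.rpm' 'VerticaSw_PreInstall.sh' \
                   'nspos-jre-*.rpm' 'vertica-*.tar'; do
        # Unquoted $pattern lets the shell expand the glob in $dir.
        if ! ls "$dir"/$pattern >/dev/null 2>&1; then
            echo "MISSING: $pattern"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "All installation files present."
}
```

Running `check_sw_dir /path/to/nsp-software` before the transfer step avoids discovering a missing RPM midway through the upgrade.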


Commission new stations, if required
 

If you are deploying the auxiliary database on one or more new stations, commission each station according to the platform specifications in this guide and in the NSP Planning Guide.

For information about deploying the RHEL OS using an NSP OEM disk image, see NSP disk-image deployment.

Note: The IP addresses on the interfaces of a new auxiliary database station must match the addresses on the station that it replaces.


Back up database
 
CAUTION

Data Loss

If you specify a backup location on the database data partition, data loss or corruption may occur.

The auxiliary database backup location must be an absolute path on a partition other than the database data partition.

Back up the auxiliary database as described in the NSP System Administrator Guide for the installed release.

Note: The backup location requires 20% more space than the database data consumes.

Note: If the backup location is remote, a 1 Gb/s link to the location is required; if achievable, a higher-capacity link is recommended.


Enable auxiliary database maintenance mode
 

Perform the following steps.

  1. Log in as the root or NSP admin user on the NSP deployer host.

  2. Enter the following:

    export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵

  3. Enter the following:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":true}}}"}}' ↵

    where namespace is the nspos-auxdb-agent namespace

  4. Enter the following to restart the nspos-auxdb-agent pod:

    kubectl delete -n namespace pod `kubectl describe -n namespace pods | grep -P ^Name: | grep -oP nspos-auxdb-agent[-a-zA-Z0-9]+` ↵

    Do not perform the next step until the nspos-auxdb-agent pod is up and running.

  5. Issue the following RESTCONF API call against the primary NSP cluster to verify that the agent is in maintenance mode:

    Note: In order to issue a RESTCONF API call, you require a token; see the My First NSP API Client tutorial on the Network Developer Portal for information.

    GET https://address/restconf/data/auxdb:auxdb-agent

    where address is the advertised address of the primary NSP cluster

    The call returns information like the following:

    {

        "auxdb-agent": {

            "name": "nspos-auxdb-agent",

            "application-mode": "MAINTENANCE",

            "copy-cluster": {

                "source-cluster": "",

                "target-cluster": "",

                "time-started": "",

                "status": "UNKNOWN"

            }

        }

    }

    The agent is in maintenance mode if the application-mode is MAINTENANCE, as shown in the example.
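The maintenance-mode check above can be scripted. The sketch below extracts the application-mode value from the response JSON on stdin; the curl invocation in the comment is an illustrative assumption (token acquisition is omitted), and the simple pattern match assumes the pretty-printed response shown above:

```shell
#!/bin/sh
# Sketch: report the application-mode value from an auxdb-agent RESTCONF
# response read on stdin. A typical (assumed) invocation might be:
#   curl -s -H "Authorization: Bearer $TOKEN" \
#        "https://$NSP_ADDRESS/restconf/data/auxdb:auxdb-agent" | application_mode
application_mode() {
    # Pull the quoted value that follows the "application-mode" key.
    sed -n 's/.*"application-mode": *"\([A-Z]*\)".*/\1/p'
}
```

A wrapper script can then loop until the function prints MAINTENANCE before continuing with the procedure.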


Stop database proxies
 

Perform the following to stop the auxiliary database proxy services.

  1. Log in as the root user on each auxiliary database station in the cluster.

  2. Enter the following:

    systemctl stop nspos-auxdbproxy.service ↵


Stop auxiliary database
 

Log in as the root user on an auxiliary database station.


Open a console window.


Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


Enter the following to block external access to the auxiliary database ports:

./auxdbAdmin.sh shieldsUp ↵


10 

Enter the following to stop the auxiliary database:

./auxdbAdmin.sh stop ↵

The following prompt is displayed:

Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:


11 

Enter the dba password.

Information like the following is displayed:

Database samdb stopped successfully

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1     | internal_IP_1 | STATE | version | db_name

 node_2     | internal_IP_2 | STATE | version | db_name

...

 node_n     | internal_IP_n | STATE | version | db_name

  Output captured in log_file

Note: If the initial stop does not succeed, force_stop is attempted automatically until the auxiliary database is stopped. If the database does not stop, do not proceed to the next step; contact Nokia support for assistance.


Prepare each station for upgrade
 
12 

Perform the following steps on each existing auxiliary database station in the cluster.

Log in as the root user on the auxiliary database station.


13 

Open a console window.


14 

Transfer the downloaded installation files to an empty directory on the station.

Note: You must ensure that the directory is empty.

Note: In subsequent steps, the directory is called the NSP software directory.


15 

Navigate to the NSP software directory.

Note: The directory must contain only the installation files.


16 

Enter the following:

# chmod +x * ↵


17 

Enter the following:

# ./VerticaSw_PreInstall.sh ↵

Messages like the following are displayed:

Logging Vertica pre install checks to log_file

INFO: About to remove proxy parameters set by a previous run of 

            this script from    /etc/profile.d/proxy.sh

INFO: Completed removing proxy parameters set by a previous run of 

            this script from    /etc/profile.d/proxy.sh

INFO: About to set proxy parameters in /etc/profile.d/proxy.sh...

INFO: Completed setting proxy parameters in /etc/profile.d/proxy.sh...

INFO: Found pre-existing [/opt/nsp/nfmp/auxdb/install/config/install.config]

      file on this machine. This file will be validated.

INFO: About to remove kernel parameters set by a previous run of 

      this script from /etc/sysctl.conf

INFO: Completed removing kernel parameters set by a previous run of 

      this script from /etc/sysctl.conf

INFO: About to set kernel parameters in /etc/sysctl.conf...

INFO: Completed setting kernel parameters in /etc/sysctl.conf...

INFO: About to change the current values of the kernel parameters

INFO: Completed changing the current values of the kernel parameters

INFO: About to remove ulimit parameters set by a previous run of 

      this script from  /etc/security/limits.conf

INFO: Completed removing ulimit parameters set by a previous run of 

      this script from  /etc/security/limits.conf

INFO: About to set ulimit parameters in /etc/security/limits.conf...

INFO: Completed setting ulimit parameters in /etc/security/limits.conf...

INFO: Removing /var/log/wtmp entry from /etc/logrotate.conf

INFO: Adding /etc/logrotate.d/wtmp

Checking user group nsp... 

WARNING: user group with the specified name already exists locally.

Checking user nsp... 

INFO: user with the specified name already exists locally. Setting shell to /usr/sbin/nologin.

Checking Linux user group samauxdb... 

WARNING: user group with the specified name already exists locally.

Checking user samauxdb... 

WARNING: user with the specified name already exists locally.

Moving logfile from /tmp ...

     ... to /opt/nsp/nfmp/auxdb/install/log

Changing ownership of the directory /opt/nsp/nfmp/auxdb/install to samauxdb:samauxdb.

Removing group write and world permissions from the directory /opt/nsp/nfmp/auxdb/install.

Changing ownership of /opt/nsp/nfmp/auxdb files.

Set virtio device queue scheduler for /dev/vda: [none] mq-deadline kyber bfq 

INFO: Updating auxiliary database prep script.

Stopping and disabling systemd service [nspos-auxdbproxy.service]

Stopping and disabling systemd service [nspos-auxdb.service]

************************************************************************
* Changes were made that require a restart.                            *
* Please restart this host before proceeding with Vertica installation.*
************************************************************************

18 

Enter the following to reboot the station:

systemctl reboot ↵

The station reboots.


19 

When the reboot is complete, log in as the root user on the station.


20 

Open a console window.


21 

Navigate to the NSP software directory.


22 

Enter the following:

dnf install nspos-*.rpm ↵

The dnf utility resolves any package dependencies, and displays the following prompt for each package:

Total size: nn G

Is this ok [y/N]: 


23 

Enter y. The following is displayed, along with the installation status, as each package is installed:

Downloading Packages:

Running transaction check

Transaction check succeeded.

Running transaction test

Transaction test succeeded.

Running transaction

The package installation is complete when the following is displayed:

Complete!


Upgrade database
 
24 

Log in as the root user on an auxiliary database station.


25 

Open a console window.


26 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


27 

Enter the following to start the upgrade:

./auxdbAdmin.sh upgrade tar_file ↵

where tar_file is the absolute path and filename of the vertica-R.r.p-rel.tar file in the NSP software directory.

The following prompt is displayed:

Updating Vertica - Please perform a backup before proceeding with this option

Do you want to proceed (YES/NO)?


28 

Enter YES ↵.

The following prompt is displayed:

Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:


29 

Enter the dba password.

The following prompt is displayed:

Please confirm auxiliary database dba password:


30 

Enter the dba password again.

The upgrade begins, and operational messages are displayed.

The upgrade is complete when the following is displayed:

# Post-upgrade tasks completed.


31 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1     | internal_IP_1 | STATE | version | db_name

 node_2     | internal_IP_2 | STATE | version | db_name

...

 node_n     | internal_IP_n | STATE | version | db_name

   ################################################################################

   WARNING: Shields are up on one or more auxiliary database hosts.

            External database access will be blocked until the shields are dropped.

   Shields up on host internal_IP_1

   Shields up on host internal_IP_2

...

   Shields up on host internal_IP_n

   ################################################################################

  Output captured in log_file

The cluster is running when each STATE entry reads UP.

Note: You must not proceed to the next step until the cluster is running.
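The STATE check can be scripted against the `auxdbAdmin.sh status` output rather than read by eye. The sketch below is a hedged helper that assumes the column layout shown in the example output (node rows labeled `node_`, STATE in the third pipe-delimited column):

```shell
#!/bin/sh
# Sketch: scan auxdbAdmin.sh status output on stdin and report whether every
# node row shows STATE UP. Node rows are identified by the leading "node_"
# label; the third |-delimited field is the STATE column.
all_nodes_up() {
    awk -F'|' '/^ *node_/ {
        gsub(/ /, "", $3)            # strip padding around the STATE value
        if ($3 != "UP") down++
    } END { print (down ? "NOT READY" : "ALL UP") }'
}
```

A polling loop that repeats `./auxdbAdmin.sh status | all_nodes_up` until it prints ALL UP gives an unattended way to satisfy the note above.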


Back up database for migration to new OS version
 
32 

Perform the following steps.

  1. Open the /opt/nsp/nfmp/auxdb/install/config/install.config file using a plain-text editor such as vi.

  2. Replace the lines in the TLS settings section with the following:

    # TLS settings.

    # A PKI Server IP address or hostname must be specified in order to generate certificates.

    pki_server=address

    pki_server_port=80

    where address is one of the following values in the platform → ingressApplications → ingressController section of the nsp-config.yml file on the local NSP deployer host, taken from the internalAddresses subsection if that subsection is configured, and otherwise from the clientAddresses subsection:

    • the advertised value, if configured

    • otherwise, the virtualIp value

    If an external PKI server is used to generate the NSP certificate artifacts, use the address and port of the external PKI server instead.

  3. Save and close the install.config file.

  4. Copy the install.config file to a secure location on a separate station that is unaffected by the upgrade activity.
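The TLS edit in substep 2 can also be applied non-interactively instead of with vi. The sketch below is an assumption-laden helper, not product tooling; it presumes the `pki_server=` and `pki_server_port=` lines already exist in the file, and the address used in any invocation is a placeholder for the value determined from nsp-config.yml:

```shell
#!/bin/sh
# Sketch: set the PKI server address and port in install.config in place.
# Assumes the file already contains pki_server= and pki_server_port= lines.
set_pki_server() {
    file="$1"; addr="$2"; port="$3"
    sed -i \
        -e "s/^pki_server=.*/pki_server=$addr/" \
        -e "s/^pki_server_port=.*/pki_server_port=$port/" \
        "$file"
}
```

For example, `set_pki_server /opt/nsp/nfmp/auxdb/install/config/install.config 10.1.2.50 80` applies both values in one step; verify the result with grep before saving a copy off-station.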


33 

Enter the following to regenerate the TLS certificates:

./auxdbAdmin.sh configureTLS ↵


34 

You must back up the auxiliary database data and configuration information to prepare for the migration to the new OS version.

Note: The operation may take considerable time.

Perform the following steps.

  1. Enter the following:

    ./auxdbAdmin.sh backup /path/samAuxDbBackup_restore.conf ↵

    where path is the backup file path specified in Step 3

    The following prompt is displayed:

    Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:

  2. Enter the dba password.

    The backup operation begins, and messages like the following are displayed.

    Copying backup config file to /tmp/auxdbadmin-backup-ID

    Backup snapshot name - AuxDbBackUpID_auxdbAdmin

    Starting auxiliary database backup...

    The backup is complete when the following is displayed:

    Output captured in 

    /opt/nsp/nfmp/auxdb/install/log/auxdbAdmin.sh.timestamp.log

  3. Transfer the database backup file to a secure location on a separate station that is unaffected by the upgrade activity.


35 

Perform the following steps on each auxiliary database station to preserve the upgrade logs.

  1. Enter the following:

    tar czf /opt/nsp/auxdb-node_N-upgrade-log.tar.gz /opt/vertica/log /opt/nsp/nfmp/auxdb/install/log /opt/nsp/nfmp/auxdb/install/proxy/log /opt/nsp/nfmp/auxdb/catalog/samdb/v_samdb_node*/vertica.log ↵

    where N is the node number, as shown in the Step 31 output example

    An auxdb-node_N-upgrade-log.tar.gz file is created in the current working directory.

  2. Transfer the file to a secure location on a separate station that is unaffected by the upgrade activity.


Recommission stations, if required
 
36 

If you are reusing any auxiliary database stations, recommission each station according to the platform specifications in this guide and in the NSP Planning Guide.

For information about deploying the RHEL 9 OS using an NSP OEM disk image, see NSP disk-image deployment.

Note: You must reformat the following disk partitions:

  • root

  • /opt/nsp/nfmp/auxdb/data


Install new software
 
37 

Perform "To prepare a station for NSP auxiliary database installation" on each auxiliary database station.


38 

Copy each auxdb-node_N-upgrade-log.tar.gz created in Step 35 to the /opt/nsp directory on the associated auxiliary database station.


39 

Log in as the root user on an auxiliary database station.


40 

Copy the install.config file saved in Step 32, substep 3 to the following directory:

/opt/nsp/nfmp/auxdb/install/config


41 

Log in to any auxiliary database station as the root user.

Note: The software is installed on one station, and then automatically propagated to the other stations in the cluster.


42 

Open a console window.


43 

Perform Step 10 to Step 14 of "To install the NSP auxiliary database".


Prepare to restore auxiliary database
 
44 

Stop the auxiliary database.

  1. Log in as the root user on any station in the auxiliary database cluster.

  2. Enter the following:

    /opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh stop ↵

    You are prompted to enter the database user password.

  3. Enter the password. The auxiliary database stops.


45 

If the backup files to restore are not in the original backup location on each auxiliary database station, perform the following steps.

  1. If you know the original backup location, go to substep 6.

  2. Open the following file for viewing:

    path/AuxDbBackUp/samAuxDbBackup_restore.conf

    where path is the current location of the backup file set

  3. Locate the [Mapping] section, which contains one line like the following for each auxiliary server station:

    v_name_node0001 = IP_address:path/AuxDbBackUp

    The path is the original backup location.

  4. Record the original backup location.

  5. Close the samAuxDbBackup_restore.conf file.

  6. Copy the AuxDbBackUp directory contents from the current backup location to the AuxDbBackUp directory in the original backup location on each auxiliary database station.

  7. As the root user, enter the following command on each auxiliary database station:

    chown -R samauxdb path

    where path is the absolute path of the original backup location


46 

If the backup is being restored on any stations that have different internal IP addresses, perform the following steps on each auxiliary database station.

  1. Open the following file using a plain-text editor such as vi:

    path/AuxDbBackUp/samAuxDbBackup_restore.conf

    where path is the location of the backup file set

  2. Add the following internal mapping section to the end of the file; the example below is for an auxiliary database of three stations:

    [NodeMapping]

    v_name_node0001 = IP_address_1

    v_name_node0002 = IP_address_2

    v_name_node0003 = IP_address_3

    where

    v_name_node000n is the station name shown in the [Mapping] section of the file

    IP_address_n is the new internal IP address of the station

  3. Save and close the samAuxDbBackup_restore.conf file.
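Drafting the [NodeMapping] stanza is easier if the node names are first extracted from the existing [Mapping] section. The sketch below assumes the file layout shown in the examples above; the new internal IP addresses still must be entered by hand:

```shell
#!/bin/sh
# Sketch: list the node names from the [Mapping] section of
# samAuxDbBackup_restore.conf, as a starting point for the [NodeMapping]
# stanza. Section boundaries are detected by lines beginning with "[".
mapping_nodes() {
    awk '/^\[Mapping\]/  { in_map = 1; next }
         /^\[/           { in_map = 0 }
         in_map && /^v_/ { print $1 }' "$1"
}
```

Piping the output through `sed 's/$/ = /'` yields skeleton `v_name_node000n = ` lines ready for the new addresses.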


Restore auxiliary database
 
47 

Enter the following on one auxiliary database station:

/opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh restore path/AuxDbBackUp/samAuxDbBackup_restore.conf ↵

where path is the absolute path of the database backup created in Step 32

The restore operation begins. The following messages and progress indicators are displayed:

Starting full restore of database db_name.

Participating nodes: node_1, node_2, ... node_n.

Restoring from restore point: AuxDbBackUpID_datestamp_timestamp

Determining what data to restore from backup.

[==============================================] 100%

Approximate bytes to copy: nnnnnnnn of nnnnnnnnn total.

Syncing data from backup to cluster nodes.

When the restore is complete, the second progress indicator reaches 100%, and the following message is displayed:

[==============================================] 100%

Restoring catalog. Restore complete!


Restore database user passwords
 
48 

If a database user password has changed since the creation of the database backup, the NSP components cannot authenticate with the restored auxiliary database.

In such a scenario, you must perform the following steps to update each affected database user password to the value that the NSP components recognize.

  1. Enter the following on the current auxiliary database station to connect to the auxiliary database:

    vsql ↵

    You are prompted for credentials.

  2. Enter the following:

    • user—samauxdb

    • password—database administrator password recorded during creation of backup

  3. Enter the following once for each user whose password has changed since the backup creation:

    ALTER USER user IDENTIFIED BY 'password';

    where

    user is the database user name

    password is the newer password recognized by the NSP

  4. Verify the success of the password change by:

    • attempting to connect to the auxiliary database as each affected user

    • ensuring that the NSP components are able to connect to the auxiliary database
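If several user passwords changed, the ALTER USER statements can be generated ahead of the vsql session to avoid typing errors. The sketch below is a trivial hedged helper; the user names and passwords in any invocation are placeholders:

```shell
#!/bin/sh
# Sketch: emit one ALTER USER statement, suitable for pasting into vsql.
# Arguments: $1 = database user name, $2 = new password (placeholder values).
alter_user_sql() {
    printf "ALTER USER %s IDENTIFIED BY '%s';\n" "$1" "$2"
}
```

Note that passwords containing single quotes would need escaping before being embedded in the statement.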


Enable required database services and proxies
 
49 

Enter the following on one auxiliary database station:

./auxdbAdmin.sh startAndEnableServices ↵

Database services are enabled and the auxiliary database proxy starts.


Disable maintenance mode for auxiliary database agent
 
50 

Perform the following steps.

  1. Log in as the root or NSP admin user on the NSP deployer host.

  2. Enter the following:

    export KUBECONFIG=/opt/nsp/nsp-configurator/kubeconfig/nsp_kubeconfig ↵

  3. Enter the following:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":false}}}"}}' ↵

    where namespace is the nspos-auxdb-agent namespace

  4. Enter the following to restart the nspos-auxdb-agent pod:

    kubectl delete -n namespace pod `kubectl describe -n namespace pods | grep -P ^Name: | grep -oP nspos-auxdb-agent[-a-zA-Z0-9]+` ↵
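The embedded command substitution in the delete command can be made easier to inspect by isolating the pod-name lookup first. The sketch below is a hedged alternative that reads `kubectl get pods` output; the pod name used in any invocation is hypothetical:

```shell
#!/bin/sh
# Sketch: isolate the nspos-auxdb-agent pod name from "kubectl get pods"
# output (read on stdin), so the delete target can be checked before use:
#   pod=$(kubectl get pods -n "$NAMESPACE" | agent_pod_name)
#   kubectl delete -n "$NAMESPACE" pod "$pod"
agent_pod_name() {
    # The pod name is the first column; match it at the start of the line.
    grep -oE '^nspos-auxdb-agent[-a-zA-Z0-9]+' | head -n 1
}
```

Printing `$pod` before the delete confirms that exactly the intended pod is restarted.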


Verify auxiliary database status
 
51 

Issue the following RESTCONF API call to verify that the auxiliary database cluster is in active mode:

Note: In order to issue a RESTCONF API call, you require a token; see the My First NSP API Client tutorial on the Network Developer Portal for information.

GET /data/auxdb:/auxdb-agent HTTP/1.1

Request headers:

    Host: address

    Content-Type: application/json

    Authorization: bearer_and_token_from_session_manager

where address is the advertised address of the NSP cluster

The cluster is in active mode if the response includes ACTIVE.


52 

Issue the following RESTCONF API call to verify the auxiliary database operation:

Note: In order to issue a RESTCONF API call, you require a token; see the My First NSP API Client tutorial on the Network Developer Portal for information.

GET https://address/restconf/data/auxdb:/clusters

where address is the advertised address of the NSP cluster

The call returns auxiliary database cluster status information, as shown below for a three-node cluster. If the mode and status values are not as shown, contact technical support. Otherwise, at this point, the auxiliary database is fully upgraded and prepared for the NFM-P main server startup.

<HashMap>

    <clusters>

        <cluster>

            <name>cluster_A</name>

            <mode>ACTIVE</mode>

            <status>UP</status>

            <nodes>

                <external-ip>203.0.113.101</external-ip>

                <internal-ip>10.1.2.101</internal-ip>

                <status>UP</status>

            </nodes>

            <nodes>

                <external-ip>203.0.113.102</external-ip>

                <internal-ip>10.1.2.102</internal-ip>

                <status>UP</status>

            </nodes>

            <nodes>

                <external-ip>203.0.113.103</external-ip>

                <internal-ip>10.1.2.103</internal-ip>

                <status>UP</status>

            </nodes>

        </cluster>

    </clusters>

</HashMap>
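The mode and status check on the XML response can be automated. The sketch below flags any mode other than ACTIVE or any status other than UP; the line-pattern parsing is an assumption that the response is pretty-printed with one element per line, as in the example:

```shell
#!/bin/sh
# Sketch: scan an auxdb:/clusters XML response on stdin and print OK when
# every <mode> is ACTIVE and every <status> is UP, DEGRADED otherwise.
cluster_health() {
    awk '/<mode>/   && !/<mode>ACTIVE<\/mode>/ { bad = 1 }
         /<status>/ && !/<status>UP<\/status>/ { bad = 1 }
         END { print (bad ? "DEGRADED" : "OK") }'
}
```

Anything other than OK is a signal to stop and contact technical support rather than proceed with the NFM-P main server startup.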


53 

Close the open console windows.

End of steps