To upgrade a geo-redundant auxiliary database

Description

The following procedure describes how to upgrade a geo-redundant NSP auxiliary database.

Note: A geo-redundant auxiliary database is composed of two clusters, one primary (active mode) and the other in standby mode at the time the upgrade begins. To reduce the overall system downtime during the upgrade, the auxiliary database cluster that is in standby mode at the beginning of the upgrade process is upgraded first while the second cluster continues to operate. When the first cluster is upgraded, it is activated (becomes the primary cluster), and the second cluster is then upgraded.

Note: You require the following user privileges on each auxiliary database station:

  • root

  • samauxdb

Note: Ensure that you record the information that you specify, for example, directory names, passwords, and IP addresses.

Note: The following RHEL CLI prompts in command lines denote the active user, and are not to be included in typed commands:

  • # —root user

  • bash$ —samauxdb user

Steps
Obtain software
 

1 

Download the following installation files:

  • nspos-auxdb-R.r.p-rel.v.rpm

  • VerticaSw_PreInstall.sh

  • nspos-jre-R.r.p-rel.v.rpm

  • vertica-R.r.p-rel.tar

where

R.r.p is the NSP release identifier, in the form MAJOR.minor.patch

v is a version number
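As a convenience, the expected file-name pattern can be checked before transferring the files; the following Python sketch encodes the R.r.p-rel.v naming rule described above (the concrete release numbers used in testing are hypothetical, not taken from this procedure):

```python
import re

# Sketch only: sanity-check that downloaded file names follow the
# documented nspos-*-R.r.p-rel.v.rpm pattern, where R.r.p is
# MAJOR.minor.patch and v is a version number.
RPM_PATTERN = re.compile(r"^nspos-(auxdb|jre)-\d+\.\d+\.\d+-rel\.\d+\.rpm$")

def is_valid_rpm_name(name: str) -> bool:
    """Return True if 'name' matches the nspos-*-R.r.p-rel.v.rpm pattern."""
    return bool(RPM_PATTERN.match(name))
```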


2 

For each auxiliary database cluster, transfer the downloaded files to a station that is reachable by each station in the cluster.


Verify auxiliary database synchronization
 

3 

Before upgrading the auxiliary database, you must verify the success of the most recent copy-cluster operation, which synchronizes the database data between the clusters.

Note: You must not proceed to the next step until the operation is complete and successful.

Issue the following RESTCONF API call periodically to check the copy-cluster status:

Note: In order to issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.

GET https://address/restconf/data/auxdb:auxdb-agent

where address is the advertised address of the primary NSP cluster

The call returns a status of SUCCESS, as shown below, for a successfully completed copy-cluster operation:

{

    "auxdb-agent": {

        "name": "nspos-auxdb-agent",

        "application-mode": "ACTIVE",

        "copy-cluster": {

            "source-cluster": "cluster_2",

            "target-cluster": "cluster_1",

            "time-started": "timestamp",

            "status": "SUCCESS"

        }

    }

}
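The periodic status check can be scripted. The following Python sketch assumes a bearer token obtained as described in the RESTCONF tutorial and a response body shaped like the example above; it is an illustration, not part of the official tooling:

```python
import json
import time
import urllib.request

def extract_copy_status(body: dict) -> str:
    """Return the copy-cluster status string from an auxdb-agent response."""
    return body["auxdb-agent"]["copy-cluster"]["status"]

def wait_for_copy_success(address: str, token: str, interval: int = 60) -> None:
    """Poll the RESTCONF endpoint until the copy-cluster operation reports
    SUCCESS. 'address' is the advertised address of the primary NSP cluster;
    the Authorization header format is an assumption."""
    url = f"https://{address}/restconf/data/auxdb:auxdb-agent"
    while True:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"}
        )
        with urllib.request.urlopen(req) as resp:
            if extract_copy_status(json.load(resp)) == "SUCCESS":
                return
        time.sleep(interval)
```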


Back up database
 
CAUTION

Data Loss

If you specify a backup location on the database data partition, data loss or corruption may occur.

The auxiliary database backup location must be an absolute path on a partition other than the database data partition.

4 

Before upgrading the first auxiliary database cluster (typically the standby cluster), back up the auxiliary database as described in the NSP System Administrator Guide for the installed release.

Note: The backup location requires 20% more space than the database data consumes.

Note: If the backup location is remote, a 1 Gb/s link to the location is required; if achievable, a higher-capacity link is recommended.
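The sizing and link-speed notes above reduce to simple arithmetic; the following is a hypothetical sketch (the example data sizes are illustrative):

```python
def required_backup_space_gb(db_data_gb: float) -> float:
    """The backup location requires 20% more space than the database
    data consumes."""
    return db_data_gb * 1.2

def transfer_hours(db_data_gb: float, link_gbps: float = 1.0) -> float:
    """Rough transfer time to a remote backup location: gigabytes times
    8 bits per byte, divided by link capacity in Gb/s and 3600 s/h."""
    return (db_data_gb * 8) / (link_gbps * 3600)
```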


Enable maintenance mode on auxiliary database agent
 

5 

Before upgrading the auxiliary database, perform the following steps.

  1. Log in as the root or NSP admin user on the NSP cluster host in the primary data center.

  2. Enter the following:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":true}}}"}}' ↵

    where namespace is the nspos-auxdb-agent namespace

  3. Repeat substep 2 as the root or NSP admin user on the NSP cluster host in the standby data center.

  4. On the NSP cluster host in the primary data center, enter the following to restart the nspos-auxdb-agent pod:

    kubectl delete -n namespace pod `kubectl describe -n namespace pods | grep -P '^Name:' | grep -oP nspos-auxdb-agent[-a-zA-Z0-9]+` ↵

  5. Issue the following RESTCONF API call against the primary NSP cluster to verify that the agent is in maintenance mode:

    Note: In order to issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.

    GET https://address/restconf/data/auxdb:auxdb-agent

    where address is the advertised address of the primary NSP cluster

    The call returns information like the following:

    {

        "auxdb-agent": {

            "name": "nspos-auxdb-agent",

            "application-mode": "MAINTENANCE",

            "copy-cluster": {

                "source-cluster": "cluster_2",

                "target-cluster": "cluster_1",

                "time-started": "timestamp",

                "status": "SUCCESS"

            }

        }

    }

    The agent is in maintenance mode if the application-mode is MAINTENANCE, as shown in the example.
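The -p argument in substep 2 embeds one JSON document (the agent overrides) as a string inside another (the configmap patch), which is why the quotes are escaped. A Python sketch of how that payload can be constructed without hand-escaping:

```python
import json

# The overrides document is embedded as a string value inside the patch;
# serializing it first, then serializing the enclosing patch, reproduces
# the escaped-quote form seen on the kubectl command line.
overrides = {"auxDbAgent": {"config": {"maintenance-mode": True}}}
patch = {"data": {"nspos-auxdb-agent-overrides.json": json.dumps(overrides)}}

# json.dumps(patch) yields the string passed to 'kubectl patch' with -p.
```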


Stop and disable auxiliary database proxies
 

6 

Perform the following to stop and disable the auxiliary database proxy services in the auxiliary database cluster that you are upgrading.

  1. Log in as the root user on each auxiliary database station in the cluster.

  2. Perform one of the following:

    1. If you are upgrading from Release 23.11 or later, enter the following:

      systemctl stop nspos-auxdbproxy.service ↵

      systemctl disable nspos-auxdbproxy.service ↵

    2. If you are upgrading from Release 23.8 or earlier, enter the following:

      systemctl stop nfmp-auxdbproxy.service ↵

      systemctl disable nfmp-auxdbproxy.service ↵


Stop database
 

7 

Log in as the root user on a station in the auxiliary database cluster that you are upgrading.


8 

Open a console window.


9 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


10 

Enter the following to block external access to the auxiliary database ports:

./auxdbAdmin.sh shieldsUp ↵


11 

Enter the following to stop the auxiliary database:

./auxdbAdmin.sh stop ↵


12 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1     | internal_IP_1 | STATE | version | db_name

 node_2     | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n     | internal_IP_n | STATE | version | db_name

      Output captured in log_file

The cluster is stopped when each STATE entry reads DOWN.

Note: If the cluster is not stopped, enter the following to force the auxiliary database to stop:

./auxdbAdmin.sh force_stop ↵
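Whether the cluster is stopped can be decided programmatically by parsing the status table shown above; the following Python sketch assumes the tabular layout is stable:

```python
def cluster_stopped(status_output: str) -> bool:
    """Return True when every node row in 'auxdbAdmin.sh status' output
    reports a STATE of DOWN. Rows are recognized by a first column that
    begins with 'node_'; header and separator lines are skipped."""
    states = []
    for line in status_output.splitlines():
        cols = [c.strip() for c in line.split("|")]
        if len(cols) >= 3 and cols[0].startswith("node_"):
            states.append(cols[2])
    return bool(states) and all(s == "DOWN" for s in states)
```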


13 

Repeat Step 12 periodically until the cluster is stopped.

Note: You must not proceed to the next step until the cluster is stopped.


Prepare all stations for upgrade
 
14 

Perform Step 16 to Step 33 on each station in the cluster that you are upgrading.


15 

Go to Step 34.


Prepare individual station for upgrade
 
16 

If the auxiliary database station is deployed in a VM created using an NSP RHEL OS disk image, perform "To apply a RHEL update to an NSP image-based OS".


17 

Log in as the root user on the station.


18 

Open a console window.


19 

On each auxiliary database station in the cluster, perform one of the following to stop and disable the auxiliary database services:

  1. If you are upgrading from Release 23.11 or later, enter the following sequence of commands:

    systemctl stop nspos-auxdb.service ↵

    systemctl disable nspos-auxdb.service ↵

  2. If you are upgrading from Release 23.8 or earlier, enter the following sequence of commands:

    systemctl stop nfmp-auxdb.service ↵

    systemctl disable nfmp-auxdb.service ↵


20 

Transfer the downloaded installation files to an empty directory on the station.

Note: You must ensure that the directory is empty.

Note: In subsequent steps, the directory is called the NSP software directory.


21 

Navigate to the NSP software directory.

Note: The directory must contain only the installation files.


22 

Enter the following:

# chmod +x * ↵


23 

Enter the following:

# ./VerticaSw_PreInstall.sh ↵

Information like the following is displayed:

Logging Vertica pre install checks to log_file

INFO: About to remove proxy parameters set by a previous run of 

            this script from    /etc/profile.d/proxy.sh

INFO: Completed removing proxy parameters set by a previous run of 

            this script from    /etc/profile.d/proxy.sh

INFO: About to set proxy parameters in /etc/profile.d/proxy.sh...

INFO: Completed setting proxy parameters in /etc/profile.d/proxy.sh...

INFO: Found pre-existing [/opt/nsp/nfmp/auxdb/install/config/install.config]

      file on this machine. This file will be validated.

INFO: About to remove kernel parameters set by a previous run of 

      this script from /etc/sysctl.conf

INFO: Completed removing kernel parameters set by a previous run of 

      this script from /etc/sysctl.conf

INFO: About to set kernel parameters in /etc/sysctl.conf...

INFO: Completed setting kernel parameters in /etc/sysctl.conf...

INFO: About to change the current values of the kernel parameters

INFO: Completed changing the current values of the kernel parameters

INFO: About to remove ulimit parameters set by a previous run of 

      this script from  /etc/security/limits.conf

INFO: Completed removing ulimit parameters set by a previous run of 

      this script from  /etc/security/limits.conf

INFO: About to set ulimit parameters in /etc/security/limits.conf...

INFO: Completed setting ulimit parameters in /etc/security/limits.conf...

INFO: Removing /var/log/wtmp entry from /etc/logrotate.conf

INFO: Adding /etc/logrotate.d/wtmp

Checking user group nsp... 

WARNING: user group with the specified name already exists locally.

Checking user nsp... 

INFO: user with the specified name already exists locally. Setting shell to /usr/sbin/nologin.

Checking Vertica user group samauxdb... 

WARNING: Vertica user group with the specified name already exists locally.

Checking Vertica user samauxdb... 

WARNING: Vertica user with the specified name already exists locally.

Moving logfile from /tmp ...

     ... to /opt/nsp/nfmp/auxdb/install/log

Changing ownership of the directory /opt/nsp/nfmp/auxdb/install to samauxdb:samauxdb.

Removing group write and world permissions from the directory /opt/nsp/nfmp/auxdb/install.

Changing ownership of /opt/nsp/nfmp/auxdb files.

Set virtio device queue scheduler for /dev/vda: [none] mq-deadline kyber bfq 

INFO: Updating auxiliary database prep script.

************************************************************************
* Changes were made that require a restart.                            *
* Please restart this host before proceeding with Vertica installation.*
************************************************************************


24 

Enter the following to reboot the station:

systemctl reboot ↵

The station reboots.


25 

When the reboot is complete, log in as the root user on the station.


26 

Open a console window.


27 

On one of the auxiliary database stations in the cluster, enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


28 

Enter the following to block external access to the auxiliary database ports:

./auxdbAdmin.sh shieldsUp ↵


29 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1     | internal_IP_1 | STATE | version | db_name

 node_2     | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n     | internal_IP_n | STATE | version | db_name

      Output captured in log_file


30 

If any STATE entry is not DOWN, perform the following steps.

  1. Enter the following to stop the auxiliary database:

    ./auxdbAdmin.sh stop ↵

    Note: If the cluster is not stopped, enter the following to force the auxiliary database to stop:

    ./auxdbAdmin.sh force_stop ↵

  2. Repeat Step 29 periodically until each STATE entry reads DOWN.

    Note: You must not proceed to the next step until each STATE entry reads DOWN.


31 

On each auxiliary database station in the cluster, navigate to the NSP software directory.


32 

Enter the following:

dnf install nspos-*.rpm ↵

The dnf utility resolves any package dependencies, and displays the following prompt for each package:

Total size: nn G

Is this ok [y/N]: 


33 

Enter y. The following is displayed, along with the installation status, as each package is installed:

Downloading Packages:

Running transaction check

Transaction check succeeded.

Running transaction test

Transaction test succeeded.

Running transaction

The package installation is complete when the following is displayed:

Complete!


Upgrade database
 
34 

Log in as the root user on a station in the auxiliary database cluster that you are upgrading.


35 

Open a console window.


36 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


37 

Enter the following to start the upgrade.

./auxdbAdmin.sh upgrade tar_file ↵

where tar_file is the absolute path and filename of the vertica-R.r.p-rel.tar file in the NSP software directory.

The following prompt is displayed:

Updating Vertica - Please perform a backup before proceeding with this option

Do you want to proceed (YES/NO)?


38 

Enter YES ↵.

The following prompt is displayed:

Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:


39 

Enter the dba password.

The following prompt is displayed:

Please confirm auxiliary database dba password:


40 

Enter the dba password again.

The upgrade begins, and operational messages are displayed.

The upgrade is complete when the following is displayed:

# Post-upgrade tasks completed.


41 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1     | internal_IP_1 | STATE | version | db_name

 node_2     | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n     | internal_IP_n | STATE | version | db_name

      Output captured in log_file

The cluster is running when each STATE entry reads UP.


42 

Repeat Step 41 periodically until the cluster is running.

Note: You must not proceed to the next step until the cluster is running.


Configure TLS
 
43 

Open the /opt/nsp/nfmp/auxdb/install/config/install.config file using a plain-text editor such as vi.


44 

Edit the following lines in the file to read as shown below:

secure=true

pki_server=address

pki_server_port=80

where address is one of the following values in the platform > ingressApplications > ingressController section of the nsp-config.yml file on the local NSP deployer host, taken from the internalAddresses subsection if it is configured, or otherwise from the clientAddresses subsection:

  • the advertised value, if configured

  • otherwise, the virtualIp value

Note: If an external pki-server is used for generation of NSP certificate artifacts, use the address and port of the external pki-server instead.
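The address-selection rules can be expressed as a small helper; the following Python sketch assumes the subsection and key names follow the nsp-config.yml structure named in this step:

```python
def pki_server_address(ingress_controller: dict) -> str:
    """Pick the pki_server address: prefer the internalAddresses
    subsection if configured, otherwise clientAddresses; within the
    chosen subsection, prefer 'advertised' over 'virtualIp'.
    Sketch only; key names are taken from the step above."""
    section = (
        ingress_controller.get("internalAddresses")
        or ingress_controller.get("clientAddresses", {})
    )
    return section.get("advertised") or section["virtualIp"]
```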


45 

Save and close the install.config file.


46 

Enter the following to regenerate the TLS certificates:

./auxdbAdmin.sh configureTLS ↵


47 

Enter the following:

./auxdbAdmin.sh shieldsDown ↵


Enable required database services and proxies
 
48 

Perform the following steps on each station in the auxiliary database cluster.

  1. Log in as the root user.

  2. Open a console window.

  3. Enter the following sequence of commands to enable the database services:

    systemctl enable nspos-auxdb.service ↵

    systemctl enable nspos-auxdbproxy.service ↵

    systemctl start nspos-auxdbproxy.service ↵


Activate upgraded auxiliary database cluster
 
49 

If the first auxiliary database cluster has been upgraded (typically the cluster that was in standby mode when the upgrade began), the cluster may now be activated.

Note: This step is only performed once during a geo-redundant auxiliary database upgrade, after the first cluster has been upgraded and before the second cluster is upgraded.

Issue the following RESTCONF API call to activate the upgraded auxiliary database cluster, after which the cluster assumes the primary role:

Note: In order to issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.

POST https://address/restconf/data/auxdb:clusters/cluster=cluster_N/activate

where

address is the advertised address of the primary NSP cluster

N is the auxiliary database cluster number

The following is the request body:

{

  "auxdb:input" : {

    "force": true

  }

}
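The request body shown above can be built from a plain dictionary and serialized for the POST call; a minimal sketch:

```python
import json

# Request body for the cluster activation call, as shown in this step.
activation_body = {"auxdb:input": {"force": True}}

# json.dumps(activation_body) is the JSON payload sent with the POST.
payload = json.dumps(activation_body)
```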


Disable maintenance mode for auxiliary database agents
 
50 

If both geo-redundant auxiliary database clusters have been upgraded, maintenance mode may be disabled for the auxiliary database agent. Perform the following steps.

  1. Log in as the root or NSP admin user on the NSP cluster host in the primary data center.

  2. Enter the following:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":false}}}"}}' ↵

    where namespace is the nspos-auxdb-agent namespace

  3. Repeat substep 2 as the root or NSP admin user on the NSP cluster host in the standby data center.

  4. On the NSP cluster host in the primary data center, enter the following to restart the nspos-auxdb-agent pod:

    kubectl delete -n namespace pod `kubectl describe -n namespace pods | grep -P '^Name:' | grep -oP nspos-auxdb-agent[-a-zA-Z0-9]+` ↵


51 

Close the open console windows.

End of steps