To upgrade a geo-redundant Release 22.6 or earlier auxiliary database

Description

The following procedure describes how to upgrade a geo-redundant NSP auxiliary database from Release 22.6 or earlier.

Note: You require root user privileges on each auxiliary database station.

Note: Ensure that you record the information that you specify, for example, directory names, passwords, and IP addresses.

Note: The following RHEL CLI prompts in command lines denote the active user, and are not to be included in typed commands: # indicates the root user; bash$ indicates a non-root user such as nsp or samauxdb.

Steps
Obtain software
 

Download the following installation files:

  • nspos-auxdb-R.r.p-rel.v.rpm

  • VerticaSw_PreInstall.sh

  • nspos-jre-R.r.p-rel.v.rpm

  • vertica-R.r.p-rel.tar

where

R.r.p is the NSP release identifier, in the form MAJOR.minor.patch

v is a version number


For each auxiliary database cluster, transfer the downloaded files to a station that is reachable by each station in the cluster.


Verify auxiliary database synchronization
 

If you are upgrading the standby auxiliary database cluster, you must verify the success of the most recent copy-cluster operation, which synchronizes the database data between the clusters.

Note: You must not proceed to the next step until the operation is complete and successful.

Issue the following RESTCONF API call periodically to check the copy-cluster status:

Note: In order to issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.

GET https://address/restconf/data/auxdb:/auxdb-agent

where address is the advertised address of the primary NSP cluster

The call returns a status of SUCCESS, as shown below, for a successfully completed copy-cluster operation:

<HashMap>

      <auxdb-agent>

         <name>nspos-auxdb-agent</name>

         <application-mode>ACTIVE</application-mode>

         <copy-cluster>

            <source-cluster>cluster_M</source-cluster>

            <target-cluster>cluster_N</target-cluster>

            <time-started>timestamp</time-started>

            <status>SUCCESS</status>

         </copy-cluster>

      </auxdb-agent>

</HashMap>
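
If you prefer to script the periodic check, a minimal sketch using curl is shown below. It assumes that TOKEN holds a valid RESTCONF access token obtained as described in the tutorial, that address is the advertised address of the primary NSP cluster, and that the response uses the XML format shown above; the -k option skips server certificate verification and may not be appropriate in your environment.

# Poll the copy-cluster status every 60 seconds until it reports SUCCESS.
# Stop the loop manually if the status reports FAILED.
while ! curl -sk -H "Authorization: Bearer $TOKEN" \
    "https://address/restconf/data/auxdb:/auxdb-agent" | grep -q "<status>SUCCESS</status>"; do
    sleep 60
done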


Stop and disable auxiliary database proxies
 

If you are upgrading the standby auxiliary database cluster, stop and disable the database proxy on each station in each auxiliary database cluster.

  1. Enter the following sequence of commands as the root user on each station in the standby auxiliary database cluster:

    systemctl stop nfmp-auxdbproxy.service

    systemctl disable nfmp-auxdbproxy.service

    The proxy stops, and is disabled.

  2. Enter the following sequence of commands as the root user on each station in the primary auxiliary database cluster:

    systemctl stop nfmp-auxdbproxy.service

    systemctl disable nfmp-auxdbproxy.service

    The proxy stops, and is disabled.
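
The same two commands must be run on every station in both clusters. If passwordless SSH access as the root user is available from a management station, a sketch such as the following can reduce repetition; the station addresses shown are placeholders.

# Stop and disable the auxiliary database proxy on every station in both clusters.
for station_IP in 203.0.113.11 203.0.113.12 203.0.113.13 198.51.100.11 198.51.100.12 198.51.100.13; do
    ssh root@"$station_IP" "systemctl stop nfmp-auxdbproxy.service && systemctl disable nfmp-auxdbproxy.service"
done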


Enable maintenance mode on auxiliary database agent
 

If you are upgrading the standby auxiliary database cluster, enable nspos-auxdb-agent maintenance mode.

  1. Log in as the root or NSP admin user on the NSP cluster host in the primary data center.

  2. Enter the following to set the nspos-auxdb-agent mode to maintenance:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":true}}}"}}' ↵

    where namespace is the nspos-auxdb-agent namespace

  3. Enter the following to restart the nspos-auxdb-agent pod:

    kubectl delete -n namespace pod `kubectl describe -n namespace pods | grep -P ^Name: | grep -oP nspos-auxdb-agent[-a-zA-Z0-9]+` ↵

  4. Issue the following RESTCONF API call against the primary NSP cluster to verify that the agent is in maintenance mode:

    NOTE: In order to issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.

    GET https://address/restconf/data/auxdb:/auxdb-agent

    where address is the advertised address of the primary NSP cluster

    The call returns information like the following:

    {

        "auxdb-agent": {

            "name": "nspos-auxdb-agent",

            "application-mode": "MAINTENANCE",

            "copy-cluster": {

                "source-cluster": "cluster_2",

                "target-cluster": "cluster_1",

                "time-started": "timestamp",

                "status": "SUCCESS"

            }

        }

    }

    The agent is in maintenance mode if the application-mode is MAINTENANCE, as shown in the example.

  5. Log in as the root or NSP admin user on the NSP cluster host in the standby data center.

  6. Enter the following to set the nspos-auxdb-agent mode to maintenance:

    kubectl patch configmap/nspos-auxdb-agent-overrides -n namespace --type=merge -p '{"data":{"nspos-auxdb-agent-overrides.json":"{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":true}}}"}}' ↵
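
For readability, the escaped override value passed with the -p option in the patch commands above expands to the following JSON; this is shown only as a reference, and the single-line form in the command is what you enter.

{
  "data": {
    "nspos-auxdb-agent-overrides.json": "{\"auxDbAgent\":{\"config\":{\"maintenance-mode\":true}}}"
  }
}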


Commission new stations, if required
 

If you are deploying the auxiliary database on one or more new stations, perform the following steps.

For information about deploying the RHEL OS using an NSP OEM disk image, see NSP disk-image deployment.

Note: The IP addresses on the interfaces of a new auxiliary database station must match the addresses on the station that it replaces.

  1. Commission each station according to the platform specifications in this guide and in the NSP Planning Guide.

  2. Perform "To apply the RHEL 8 swappiness workaround" on the station.


Back up database
 
CAUTION

Data Loss

If you specify a backup location on the database data partition, data loss or corruption may occur.

The auxiliary database backup location must be an absolute path on a partition other than the database data partition.

If you are upgrading the standby cluster, back up the auxiliary database as described in the NSP System Administrator Guide for the installed release.

Note: The backup location requires 20% more space than the database data consumes.

Note: If the backup location is remote, a 1 Gb/s link to the location is required; if achievable, a higher-capacity link is recommended.
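
Before you start the backup, you can confirm that the backup partition satisfies the space requirement. A quick check is shown below; it assumes that backup_path is the intended backup location and that the database data resides under /opt/nsp/nfmp/auxdb/data, the data partition listed later in this procedure.

df -h backup_path                    # available space must exceed the database data size by at least 20%
du -sh /opt/nsp/nfmp/auxdb/data      # approximate size of the database data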


Stop cluster
 

Log in as the root user on a station in the auxiliary database cluster that you are upgrading.


10 

Open a console window.


11 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


12 

Enter the following to stop the auxiliary database:

./auxdbAdmin.sh stop ↵


13 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1 | internal_IP_1 | STATE | version | db_name

 node_2 | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n | internal_IP_n | STATE | version | db_name

      Output captured in log_file

The cluster is stopped when each STATE entry reads DOWN.


14 

Repeat Step 13 periodically until the cluster is stopped.

Note: You must not proceed to the next step until the cluster is stopped.
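
If you prefer to script the recheck rather than repeating Step 13 manually, a minimal polling sketch is shown below; it assumes that the node lines in the status output follow the format shown in the Step 13 example.

# Wait until every node line in the status output reports DOWN.
cd /opt/nsp/nfmp/auxdb/install/bin
while ./auxdbAdmin.sh status | grep -E "node_[0-9]+" | grep -vq DOWN; do
    echo "cluster is still stopping; checking again in 30 seconds"
    sleep 30
done
echo "all nodes report DOWN"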


Prepare all stations for upgrade
 
15 

Perform Step 17 to Step 33 on each station in the auxiliary database cluster that you are upgrading.


16 

Go to Step 34.


Prepare individual station for upgrade
 
17 

Log in as the root user on the station.


18 

Open a console window.


19 

Enter the following sequence of commands to stop the auxiliary database services:

systemctl stop nfmp-auxdb.service

systemctl stop vertica_agent.service

systemctl stop verticad.service


20 

Enter the following sequence of commands to disable the database services:

systemctl disable nfmp-auxdb.service

systemctl disable vertica_agent.service

systemctl disable verticad.service


21 

Transfer the downloaded installation files to an empty directory on the station.

Note: You must ensure that the directory is empty.

Note: In subsequent steps, the directory is called the NSP software directory.
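
A minimal transfer sketch is shown below; the directory name /opt/nsp-software, the station name staging_host, and the download path are placeholder assumptions.

mkdir /opt/nsp-software                      # mkdir fails if the directory exists, which guarantees a new, empty directory
scp "staging_host:/path/to/downloads/*" /opt/nsp-software/
ls /opt/nsp-software                         # confirm that only the four installation files are present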


22 

Navigate to the NSP software directory.

Note: The directory must contain only the installation files.


23 

Enter the following:

# chmod +x * ↵


24 

Enter the following:

# ./VerticaSw_PreInstall.sh ↵

Information like the following is displayed:

Logging Vertica pre install checks to log_file

INFO: About to remove proxy parameters set by a previous run of this script from /etc/profile.d/proxy.sh

INFO: Completed removing proxy parameters set by a previous run of this script from /etc/profile.d/proxy.sh

INFO: About to set proxy parameters in /etc/profile.d/proxy.sh...

INFO: Completed setting proxy parameters in /etc/profile.d/proxy.sh...

INFO: About to remove kernel parameters set by a previous run of this script from /etc/sysctl.conf

INFO: Completed removing kernel parameters set by a previous run of this script from /etc/sysctl.conf

INFO: About to set kernel parameters in /etc/sysctl.conf...

INFO: Completed setting kernel parameters in /etc/sysctl.conf...

INFO: About to change the current values of the kernel parameters

INFO: Completed changing the current values of the kernel parameters

INFO: About to remove ulimit parameters set by a previous run of this script from /etc/security/limits.conf

INFO: Completed removing ulimit parameters set by a previous run of this script from /etc/security/limits.conf

INFO: About to set ulimit parameters in /etc/security/limits.conf...

INFO: Completed setting ulimit parameters in /etc/security/limits.conf...

Checking Vertica DBA group samauxdb...

WARNING: Vertica DBA group with the specified name already exists locally.

Checking Vertica user samauxdb...

WARNING: Vertica user with the specified name already exists locally.

Changing ownership of the directory /opt/nsp/nfmp/auxdb/install to samauxdb:samauxdb.

Adding samauxdb to sudoers file.

Changing ownership of /opt/nsp/nfmp/auxdb files.

INFO: About to remove commands set by a previous run of this script from /etc/rc.d/rc.local

INFO: Completed removing commands set by a previous run of this script from /etc/rc.d/rc.local

INFO: About to add setting to /etc/rc.d/rc.local...

INFO: Completed adding setting to /etc/rc.d/rc.local...


25 

Enter the following to reboot the station:

systemctl reboot ↵

The station reboots.


26 

When the reboot is complete, log in as the root user on the station.


27 

Open a console window.


28 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


29 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1 | internal_IP_1 | STATE | version | db_name

 node_2 | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n | internal_IP_n | STATE | version | db_name

      Output captured in log_file


30 

If any STATE entry is not DOWN, perform the following steps.

  1. Enter the following to stop the auxiliary database:

    ./auxdbAdmin.sh stop ↵

  2. Repeat Step 29 periodically until each STATE entry reads DOWN.

    Note: You must not proceed to the next step until each STATE entry reads DOWN.


31 

Navigate to the NSP software directory.


32 

Enter the following:

yum install nspos-*.rpm ↵

The yum utility resolves any package dependencies, and displays the following prompt for each package:

Total size: nn G

Installed size: nn G 

Is this ok [y/d/N]: 


33 

Enter y. The following is displayed, along with the installation status, as each package is installed:

Downloading Packages:

Running transaction check

Transaction check succeeded.

Running transaction test

Transaction test succeeded.

Running transaction

The package installation is complete when the following is displayed:

Complete!
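
Optionally, confirm that the packages are installed before you continue; the following check matches the names of the downloaded rpm files.

rpm -qa | grep -E "nspos-(auxdb|jre)"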


Upgrade database
 
34 

Log in as the root user on a station in the auxiliary database cluster that you are upgrading.


35 

Open a console window.


36 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


37 

Enter the following:

./auxdbAdmin.sh upgrade tar_file ↵

where tar_file is the absolute path and filename of the vertica-R.r.p-rel.tar file in the NSP software directory

The following prompt is displayed:

Updating Vertica - Please perform a backup before proceeding with this option

Do you want to proceed (YES/NO)?


38 

Enter YES ↵.

The following prompt is displayed:

Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:


39 

Enter the dba password.

The following prompt is displayed:

Please verify auxiliary database dba password:


40 

Enter the dba password again.

The upgrade begins, and operational messages are displayed.

The upgrade is complete when the following is displayed:

Database database_name started successfully

  Output captured in log_file


41 

Enter the following to display the auxiliary database status:

./auxdbAdmin.sh status ↵

Information like the following is displayed:

Database status

 Node       | Host          | State | Version | DB

------------+---------------+-------+---------+-------

 node_1 | internal_IP_1 | STATE | version | db_name

 node_2 | internal_IP_2 | STATE | version | db_name

.

.

.

 node_n | internal_IP_n | STATE | version | db_name

      Output captured in log_file

The cluster is running when each STATE entry reads UP.


42 

Repeat Step 41 periodically until the cluster is running.

Note: You must not proceed to the next step until the cluster is running.


Back up database for migration to new OS version
 
43 

If no backup has previously been performed on the cluster, for example, if the cluster has always had the standby role, perform the following steps.

  1. Open a console window on a station in the current cluster.

  2. Copy the following file from a station in the peer cluster to the same directory on the current station:

    backup_path/samAuxDbBackup_restore.conf

    where backup_path is the backup location that was used when the auxiliary database was backed up earlier in this procedure

  3. Enter the following:

    chown samauxdb:samauxdb backup_path/samAuxDbBackup_restore.conf ↵

  4. Enter the following:

    ./auxdbAdmin.sh status ↵

    Information like the following is displayed:

    Database status

     Node       | Host          | State | Version | DB

    ------------+---------------+-------+---------+-------

     node_1 | internal_IP_1 | STATE | version | db_name

     node_2 | internal_IP_2 | STATE | version | db_name

    .

    .

    .

     node_n | internal_IP_n | STATE | version | db_name

          Output captured in log_file

  5. Open the following file using a plain-text editor such as vi:

    backup_path/samAuxDbBackup_restore.conf

  6. Locate the section that begins with the following:

    [mapping]

  7. Replace the primary cluster node values in the section with the standby node values displayed in substep 4, as shown in the following example for a three-node cluster:

    node_1 = [internal_IP_1]:backup_path

    node_2 = [internal_IP_2]:backup_path

    node_3 = [internal_IP_3]:backup_path

  8. Save and close the file.


44 

You must back up the auxiliary database data and configuration information to prepare for the migration to the new OS version.

Note: The operation may take considerable time.

Perform the following steps.

  1. Enter the following:

    ./auxdbAdmin.sh backup /path/samAuxDbBackup_restore.conf ↵

    where path is the backup location that was used when the auxiliary database was backed up earlier in this procedure

    The following prompt is displayed:

    Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:

  2. Enter the dba password.

    The backup operation begins, and messages like the following are displayed.

    Copying backup config file to /tmp/auxdbadmin-backup-ID

    Backup snapshot name - AuxDbBackUpID_auxdbAdmin

    Starting auxiliary database backup...

    The backup is complete when the following is displayed:

    Output captured in 

    /opt/nsp/nfmp/auxdb/install/log/auxdbAdmin.sh.timestamp.log

  3. Transfer the database backup file to a secure location on a separate station that is unaffected by the upgrade activity.

  4. Open the following file using a plain-text editor such as vi:

    /opt/nsp/nfmp/auxdb/install/config/install.config

  5. Add the following lines directly above the line that reads, “# Do not modify.....”

    # TLS settings.

    # Set secure=false (default is enabled) to disable secure JDBC and

    # proxy connections.

    # If secure=true then a PKI Server IP address or hostname must be

    # specified in order to generate certificates.

    secure=true

    pki_server=address

    pki_server_port=80

    where address is one of the following values in the platform → ingressApplications → ingressController section of the nsp-config.yml file on the local NSP deployer host, taken from the internalAddresses subsection if it is configured, or otherwise from the clientAddresses subsection:

    • the advertised value, if configured

    • otherwise, the virtualIp value

  6. Save and close the install.config file.

  7. Copy the install.config file to a secure location on a separate station that is unaffected by the upgrade activity.


45 

Perform the following steps on each auxiliary database station to preserve the upgrade logs.

  1. Enter the following:

    tar czf /opt/nsp/auxdb-node_N-upgrade-log.tar.gz /opt/vertica/log /opt/nsp/nfmp/auxdb/install/log /opt/nsp/nfmp/auxdb/install/proxy/log /opt/nsp/nfmp/auxdb/catalog/samdb/v_samdb_node*/vertica.log ↵

    where N is the node number, as shown in the Step 41 output example

    An auxdb-node_N-upgrade-log.tar.gz file is created in the /opt/nsp directory.

  2. Transfer the file to a secure location on a separate station that is unaffected by the upgrade activity.
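
A transfer sketch for the preceding substep is shown below; the station name safe_host and the /backups directory are placeholder assumptions.

# N is the node number of the local station.
scp /opt/nsp/auxdb-node_N-upgrade-log.tar.gz root@safe_host:/backups/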


Recommission stations, if required
 
46 

If you are reusing any auxiliary database stations in the cluster that you are upgrading, recommission each station according to the platform specifications in this guide and in the NSP Planning Guide.

For information about deploying the RHEL OS using an NSP OEM disk image, see NSP disk-image deployment.

Note: You must reformat the following disk partitions:

  • root

  • /opt/nsp/nfmp/auxdb/data


Install new software, restore database
 
47 

Perform "To prepare a station for NSP auxiliary database installation" on each station in the auxiliary database cluster that you are upgrading.


48 

Copy each auxdb-node_N-upgrade-log.tar.gz created in Step 45 to the /opt/nsp directory on the associated auxiliary database station.


49 

Log in as the root user on an auxiliary database station in the cluster that you are upgrading.


50 

Copy the install.config file saved in Step 44, substep 6 to the following directory:

/opt/nsp/nfmp/auxdb/install/config


51 

Enter the following:

cd /opt/nsp/nfmp/auxdb/install/bin ↵


52 

Enter the following to block external access to the auxiliary database ports:

./auxdbAdmin.sh shieldsUp ↵


53 

Enter the following to regenerate the TLS certificates:

./auxdbAdmin.sh configureTLS force-gen ↵


54 

Enter the following:

./auxdbAdmin.sh install ↵

The script sequentially prompts you to enter and re-enter new passwords for the following user accounts:

  • samauxdb

  • samuser

  • samanalytic

  • samanalytic_ano


55 

At each prompt, enter or re-enter a password, as required.

The script then sequentially prompts for the root user password of each auxiliary database station.


56 

Enter the required password at each prompt.

Messages like the following are displayed as the software is installed on each station and the database is created:

Populating auxiliary database user passwords in the vault

Installing auxiliary database on IP_address ....

Cleaning auxiliary database host(s) in ssh known_hosts for root and samauxdb users.

Creating auxiliary database cluster

   Successfully created auxiliary database cluster

   Creating auxiliary database

Distributing changes to cluster.

        Creating database samdb

        Starting bootstrap node node_name (IP_address)

        Starting bootstrap node node_name (IP_address)

        .

        .

        .

        Starting nodes:

                node_name (IP_address

                node_name (IP_address

                .

                .

                .

        Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.

Installing OS_package package

        Success: package OS_package installed

Database creation SQL tasks completed successfully.

Database samdb created successfully.

  Successfully created auxiliary database

  Performing post install configuration tasks

Creating public interface for host node_name (IP_address

Creating public interface for host node_name (IP_address

.

.

.

CREATE NETWORK INTERFACE

ALTER NODE

Setting DB samdb restart policy to never and replicating to cluster...

Database samdb policy set to never

Installing user defined extension libraries and functions.

Unzipping Python libraries.

Setting sticky bit on all nodes.

  INFO: About to configure TLS ....

  Generating TLS certificates

  INFO: About to validate key and certificate

        Make sure the certificate has not expired and that the specified date range is current and valid.

            Not Before: date

            Not After : date

  INFO: Complete validating key and certificate

  INFO: Adding certificate to AuxDB

Distributing configuration to all nodes

   Post install configuration tasks completed successfully.

   Successfully installed auxiliary database.

  Output captured in /opt/nsp/nfmp/auxdb/install/log/auxdbAdmin.sh.timestamp.log


57 

When the software installation on each station is complete, perform the following steps on each station in the cluster that you are upgrading.

  1. Log in to the station as the root user.

  2. Open a console window.

  3. Enter the following:

    su - samauxdb ↵

  4. Enter the following for each station in the geo-redundant cluster:

    bash$ ssh-copy-id station_IP

    where station_IP is the IP address of a station in the geo-redundant cluster

  5. Enter the following to switch back to the root user:

    exit ↵
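
Substep 4 can also be scripted when the station addresses are known; a sketch for a three-station geo-redundant cluster is shown below, using placeholder addresses.

# Run as the samauxdb user; each ssh-copy-id call prompts for that station's password.
for station_IP in 203.0.113.11 203.0.113.12 203.0.113.13; do
    ssh-copy-id "$station_IP"
done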


58 

Stop the auxiliary database.

  1. Enter the following to stop the auxiliary database:

    ./auxdbAdmin.sh stop ↵

  2. Enter the following to display the auxiliary database status:

    ./auxdbAdmin.sh status ↵

    Information like the following is displayed:

    Database status

     Node       | Host          | State | Version | DB

    ------------+---------------+-------+---------+-------

     node_1 | internal_IP_1 | STATE | version | db_name

     node_2 | internal_IP_2 | STATE | version | db_name

    .

    .

    .

     node_n | internal_IP_n | STATE | version | db_name

          Output captured in log_file

    The cluster is stopped when each STATE entry reads DOWN.

  3. Repeat substep 2 periodically until the cluster is stopped.

    Note: You must not proceed to the next step until the cluster is stopped.

  4. Enter the following to block external access to the auxiliary database ports:

    ./auxdbAdmin.sh shieldsDown ↵


59 

Restore the database backup created in Step 44; see the auxiliary database restore procedure in the NSP System Administrator Guide for information.


Update database schema
 
60 

If the NSP deployment includes the NFM-P, update the NFM-P database schema.

Note: The schema update may take considerable time.

  1. Log in as the nsp user on the NFM-P main server.

  2. Open a console window.

  3. Enter the following:

    bash$ cd /opt/nsp/nfmp/server/nms/bin ↵

  4. Enter the following:

    bash$ ./nmsserver.bash upgradeAuxDbSchema ↵

    The following prompt is displayed:

    Auxiliary database clusters:

    1: IP_a,IP_b,IP_c

    2: IP_x,IP_y,IP_z

    Select auxiliary database to upgrade:

  5. Enter the number that corresponds to the cluster that you are upgrading.

    The following messages and prompt are displayed:

    WARNING: About to upgrade samdb schema on the auxiliary database cluster [IP_a,IP_b,IP_c].

    It is recommended that a database backup is performed before proceeding.

    Type "YES" to continue

  6. Enter YES.

    The following prompt is displayed:

    Please enter the auxiliary database port [5433]:

  7. Enter the auxiliary database port number; press Enter to accept the default of 5433.

    The following prompt is displayed:

    Please enter the auxiliary database user password:

  8. Enter the required password.

    The following messages are displayed as the upgrade begins:

    INFO: Database upgrade can take a very long time on large databases.

    INFO: logs are stored under /opt/nsp/nfmp/server/nms/log/auxdb. Check the logs for progress.

    INFO: Node Name[v_samdb_node0001]->IP[IP_address]->Status[UP]

    INFO: About to perform upgrade
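
To follow the upgrade progress described in the INFO output, you can watch the log directory that the output names; the log file names vary, so identify the newest file first.

ls -lrt /opt/nsp/nfmp/server/nms/log/auxdb/             # the newest log file is listed last
tail -f /opt/nsp/nfmp/server/nms/log/auxdb/log_file     # log_file is the newest file from the listing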


Enable required database services and proxies
 
61 

Perform the following steps on each station in the auxiliary database cluster.

  1. Log in as the root user.

  2. Open a console window.

  3. Enter the following sequence of commands to enable the database services:

    systemctl enable nspos-auxdb.service

    systemctl enable nspos-auxdbproxy.service

    systemctl enable vertica_agent.service

    systemctl enable verticad.service
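
Optionally, confirm on each station that the services are enabled; the systemctl is-enabled command accepts multiple unit names.

# Each service should report: enabled
systemctl is-enabled nspos-auxdb.service nspos-auxdbproxy.service vertica_agent.service verticad.service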


62 

If you are upgrading the primary auxiliary database cluster, enter the following on each station in the cluster:

systemctl start nspos-auxdbproxy.service ↵

The auxiliary database proxy starts.
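
Optionally, confirm that the proxy is active on each station in the primary cluster before you continue.

systemctl is-active nspos-auxdbproxy.service            # expected output: active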


63 

Close the open console windows.

End of steps