To upgrade a geo-redundant Release 22.6 or earlier auxiliary database
Description
The following procedure describes how to upgrade a geo-redundant NSP auxiliary database from Release 22.6 or earlier.
Note: You require the following user privileges on each auxiliary database station:
Note: Ensure that you record the information that you specify, for example, directory names, passwords, and IP addresses.
Note: The following RHEL CLI prompts in command lines denote the active user, and are not to be included in typed commands.
Steps
Obtain software

1
Download the following installation files, where R.r.p is the NSP release identifier in the form MAJOR.minor.patch, and v is a version number:

2
For each auxiliary database cluster, transfer the downloaded files to a station that is reachable by each station in the cluster.
Verify auxiliary database synchronization

3
If you are upgrading the standby auxiliary database cluster, you must verify the success of the most recent copy-cluster operation, which synchronizes the database data between the clusters.
Note: You must not proceed to the next step until the operation is complete and successful.
Issue the following RESTCONF API call periodically to check the copy-cluster status.
Note: To issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.
GET https://address/restconf/data/auxdb:/auxdb-agent
where address is the advertised address of the primary NSP cluster
The call returns a status of SUCCESS for a successfully completed copy-cluster operation, as shown below:
<HashMap>
   <auxdb-agent>
      <name>nspos-auxdb-agent</name>
      <application-mode>ACTIVE</application-mode>
      <copy-cluster>
         <source-cluster>cluster_M</source-cluster>
         <target-cluster>cluster_N</target-cluster>
         <time-started>timestamp</time-started>
         <status>SUCCESS</status>
      </copy-cluster>
   </auxdb-agent>
</HashMap>
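The periodic check above can be scripted. The following is a minimal sketch, assuming a bash shell with curl on a station that can reach the NSP cluster; the NSP_ADDRESS and TOKEN variables, the helper name, and the 60-second interval are assumptions, not part of the product:

```shell
#!/bin/bash
# Print the copy-cluster <status> value (e.g. SUCCESS) from an
# auxdb-agent response body supplied on stdin.
copy_cluster_status() {
  sed -n 's:.*<status>\([^<]*\)</status>.*:\1:p' | head -n 1
}

# Hypothetical polling loop; uncomment to run against a live system
# after setting NSP_ADDRESS and TOKEN.
# while :; do
#   body=$(curl -sk -H "Authorization: Bearer $TOKEN" \
#       "https://$NSP_ADDRESS/restconf/data/auxdb:/auxdb-agent")
#   [ "$(printf '%s' "$body" | copy_cluster_status)" = "SUCCESS" ] && break
#   sleep 60
# done
```

The parsing helper deliberately extracts only the first status element, which in the auxdb-agent response is the copy-cluster status.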
Stop and disable auxiliary database proxies

4
If you are upgrading the standby auxiliary database cluster, stop and disable the database proxy on each station in each auxiliary database cluster.
Enable auxiliary database maintenance mode

5
If you are upgrading the standby auxiliary database cluster, enable nspos-auxdb-agent maintenance mode.
Commission new stations, if required

6
If you are deploying the auxiliary database on one or more new stations, perform the following steps.
For information about deploying the RHEL OS using an NSP OEM disk image, see NSP disk-image deployment.
Note: The IP addresses on the interfaces of a new auxiliary database station must match the addresses on the station that it replaces.
Back up database

7
The auxiliary database backup location must be an absolute path on a partition other than the database data partition; if you specify a backup location on the database data partition, data loss or corruption may occur.
If you are upgrading the standby cluster, back up the auxiliary database as described in the NSP System Administrator Guide for the installed release.
Note: The backup location requires 20% more space than the database data consumes.
Note: If the backup location is remote, a 1-Gb/s link to the location is required; a higher-capacity link is recommended, if achievable.
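As a rough pre-check of the 20% headroom requirement, a sketch like the following can compare the database data size with the free space at the backup location. The helper name and both paths are hypothetical; substitute your actual data and backup directories:

```shell
#!/bin/bash
# Report whether the backup location has at least 120% of the space the
# database data currently consumes (the 20% headroom noted above).
check_backup_space() {
  local data_dir=$1 backup_dir=$2
  local used free needed
  used=$(du -sk "$data_dir" | awk '{print $1}')          # KiB of DB data
  free=$(df -Pk "$backup_dir" | awk 'NR==2 {print $4}')  # KiB free at backup location
  needed=$(( used + used / 5 ))                          # 120% of the data size
  if [ "$free" -ge "$needed" ]; then
    echo "OK: ${free} KiB free, ${needed} KiB required"
  else
    echo "INSUFFICIENT: ${free} KiB free, ${needed} KiB required"
    return 1
  fi
}

# Example with hypothetical paths:
# check_backup_space /opt/nsp/nfmp/auxdb/data /backup/auxdb
```

This checks capacity only; it does not verify that the backup path is on a different partition than the data, which you must still confirm yourself.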
Stop cluster

8
Log in as the root user on a station in the auxiliary database cluster that you are upgrading.

9
Open a console window.

10
Enter the following:
# cd /opt/nsp/nfmp/auxdb/install/bin ↵

11
Enter the following to stop the auxiliary database:
# ./auxdbAdmin.sh stop ↵

12
Enter the following to display the auxiliary database status:
# ./auxdbAdmin.sh status ↵
Information like the following is displayed:
Database status
 Node       | Host          | State | Version | DB
------------+---------------+-------+---------+-------
 node_1     | internal_IP_1 | STATE | version | db_name
 node_2     | internal_IP_2 | STATE | version | db_name
 .
 .
 .
 node_n     | internal_IP_n | STATE | version | db_name
Output captured in log_file
The cluster is stopped when each STATE entry reads DOWN.
13
Repeat Step 12 periodically until the cluster is stopped.
Note: You must not proceed to the next step until the cluster is stopped.
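The manual polling in Step 12 and Step 13 can be scripted. A sketch, assuming the status table layout shown in Step 12; the helper name and 30-second interval are assumptions:

```shell
#!/bin/bash
# Succeed (exit 0) only when every node row of "auxdbAdmin.sh status"
# output, read on stdin, shows the wanted State value (e.g. DOWN).
all_nodes_in_state() {
  local want=$1
  awk -F'|' -v want="$want" '
    /\|/ && $1 ~ /node/ {        # node rows only; skips header/separator
      gsub(/ /, "", $3)          # trim the State column
      total++
      if ($3 == want) matched++
    }
    END {
      if (total > 0 && matched == total) exit 0
      exit 1
    }'
}

# Hypothetical polling loop on a live station:
# cd /opt/nsp/nfmp/auxdb/install/bin
# until ./auxdbAdmin.sh status | all_nodes_in_state DOWN; do sleep 30; done
```

The same helper applies to the later status checks in this procedure, with UP as the wanted state when waiting for the cluster to start.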
Prepare all stations for upgrade

14
Perform Step 16 to Step 32 on each station in the auxiliary database cluster that you are upgrading.

15
Go to Step 33.
Prepare individual station for upgrade

16
Log in as the root user on the station.

17
Open a console window.

18
Enter the following sequence of commands to stop the auxiliary database services:
# systemctl stop nfmp-auxdb.service ↵
# systemctl stop vertica_agent.service ↵
# systemctl stop verticad.service ↵

19
Enter the following sequence of commands to disable the database services:
# systemctl disable nfmp-auxdb.service ↵
# systemctl disable vertica_agent.service ↵
# systemctl disable verticad.service ↵
20
Transfer the downloaded installation files to an empty directory on the station.
Note: You must ensure that the directory is empty.
Note: In subsequent steps, the directory is called the NSP software directory.

21
Navigate to the NSP software directory.
Note: The directory must contain only the installation files.

22
Enter the following:
# chmod +x * ↵
23
Enter the following:
# ./VerticaSw_PreInstall.sh ↵
Information like the following is displayed:
Logging Vertica pre install checks to log_file
INFO: About to remove proxy parameters set by a previous run of this script from /etc/profile.d/proxy.sh
INFO: Completed removing proxy parameters set by a previous run of this script from /etc/profile.d/proxy.sh
INFO: About to set proxy parameters in /etc/profile.d/proxy.sh...
INFO: Completed setting proxy parameters in /etc/profile.d/proxy.sh...
INFO: About to remove kernel parameters set by a previous run of this script from /etc/sysctl.conf
INFO: Completed removing kernel parameters set by a previous run of this script from /etc/sysctl.conf
INFO: About to set kernel parameters in /etc/sysctl.conf...
INFO: Completed setting kernel parameters in /etc/sysctl.conf...
INFO: About to change the current values of the kernel parameters
INFO: Completed changing the current values of the kernel parameters
INFO: About to remove ulimit parameters set by a previous run of this script from /etc/security/limits.conf
INFO: Completed removing ulimit parameters set by a previous run of this script from /etc/security/limits.conf
INFO: About to set ulimit parameters in /etc/security/limits.conf...
INFO: Completed setting ulimit parameters in /etc/security/limits.conf...
Checking Vertica DBA group samauxdb...
WARNING: Vertica DBA group with the specified name already exists locally.
Checking Vertica user samauxdb...
WARNING: Vertica user with the specified name already exists locally.
Changing ownership of the directory /opt/nsp/nfmp/auxdb/install to samauxdb:samauxdb.
Adding samauxdb to sudoers file.
Changing ownership of /opt/nsp/nfmp/auxdb files.
INFO: About to remove commands set by a previous run of this script from /etc/rc.d/rc.local
INFO: Completed removing commands set by a previous run of this script from /etc/rc.d/rc.local
INFO: About to add setting to /etc/rc.d/rc.local...
INFO: Completed adding setting to /etc/rc.d/rc.local...
24
Enter the following to reboot the station:
# systemctl reboot ↵
The station reboots.

25
When the reboot is complete, log in as the root user on the station.

26
Open a console window.

27
Enter the following:
# cd /opt/nsp/nfmp/auxdb/install/bin ↵

28
Enter the following to display the auxiliary database status:
# ./auxdbAdmin.sh status ↵
Information like the following is displayed:
Database status
 Node       | Host          | State | Version | DB
------------+---------------+-------+---------+-------
 node_1     | internal_IP_1 | STATE | version | db_name
 node_2     | internal_IP_2 | STATE | version | db_name
 .
 .
 .
 node_n     | internal_IP_n | STATE | version | db_name
Output captured in log_file
29 |
if any STATE entry is not DOWN, perform the following steps.
| ||
30
Navigate to the NSP software directory.

31
Enter the following:
# yum install nspos-*.rpm ↵
The yum utility resolves any package dependencies, and displays the following prompt for each package:
Total size: nn G
Installed size: nn G
Is this ok [y/d/N]:

32
Enter y.
The following is displayed, along with the installation status, as each package is installed:
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
The package installation is complete when the following is displayed:
Complete!
Upgrade database

33
Log in as the root user on a station in the auxiliary database cluster that you are upgrading.

34
Open a console window.

35
Enter the following:
# cd /opt/nsp/nfmp/auxdb/install/bin ↵

36
Enter the following:
# ./auxdbAdmin.sh upgrade tar_file ↵
where tar_file is the absolute path and filename of the vertica-R.r.p-rel.tar file in the NSP software directory
The following prompt is displayed:
Updating Vertica - Please perform a backup before proceeding with this option
Do you want to proceed (YES/NO)?

37
Enter YES ↵.
The following prompt is displayed:
Please enter auxiliary database dba password [if you are doing initial setup for auxiliary database, press enter]:

38
Enter the dba password.
The following prompt is displayed:
Please verify auxiliary database dba password:

39
Enter the dba password again.
The upgrade begins, and operational messages are displayed. The upgrade is complete when the following is displayed:
Database database_name started successfully
Output captured in log_file
40
Enter the following to display the auxiliary database status:
# ./auxdbAdmin.sh status ↵
Information like the following is displayed:
Database status
 Node       | Host          | State | Version | DB
------------+---------------+-------+---------+-------
 node_1     | internal_IP_1 | STATE | version | db_name
 node_2     | internal_IP_2 | STATE | version | db_name
 .
 .
 .
 node_n     | internal_IP_n | STATE | version | db_name
Output captured in log_file
The cluster is running when each STATE entry reads UP.

41
Repeat Step 40 periodically until the cluster is running.
Note: You must not proceed to the next step until the cluster is running.
Back up database for migration to new OS version

42
If no backup has previously been performed on the cluster, for example, if the cluster has always had the standby role, perform the following steps.

43
You must back up the auxiliary database data and configuration information to prepare for the migration to the new OS version. Perform the following steps.
Note: The operation may take considerable time.

44
Perform the following steps on each auxiliary database station to preserve the upgrade logs.
Recommission stations, if required

45
If you are reusing any auxiliary database stations in the cluster that you are upgrading, recommission each station according to the platform specifications in this guide and in the NSP Planning Guide.
For information about deploying the RHEL OS using an NSP OEM disk image, see NSP disk-image deployment.
Note: You must reformat the following disk partitions:
Install new software

46
Perform "To prepare a station for NSP auxiliary database installation" on each station in the auxiliary database cluster that you are upgrading.
47
Copy each auxdb-node_N-upgrade-log.tar.gz file created in Step 44 to the /opt/nsp directory on the associated auxiliary database station.

48
Log in as the root user on an auxiliary database station in the cluster that you are upgrading.

49
Copy the install.config file saved in Step 43, substep 5 to the following directory:
/opt/nsp/nfmp/auxdb/install/config

50
Enter the following:
# cd /opt/nsp/nfmp/auxdb/install/bin ↵

51
Enter the following to block external access to the auxiliary database ports:
# ./auxdbAdmin.sh shieldsUp ↵

52
Enter the following to regenerate the TLS certificates:
# ./auxdbAdmin.sh configureTLS force-gen ↵

53
Enter the following:
# ./auxdbAdmin.sh install ↵
The script sequentially prompts you to enter and re-enter new passwords for the following user accounts:

54
At each prompt, enter or re-enter a password, as required.
The script then sequentially prompts for the root user password of each auxiliary database station.
55
Enter the required password at each prompt.
Messages like the following are displayed as the software is installed on each station and the database is created:
Populating auxiliary database user passwords in the vault
Installing auxiliary database on IP_address ....
Cleaning auxiliary database host(s) in ssh known_hosts for root and samauxdb users.
Creating auxiliary database cluster
Successfully created auxiliary database cluster
Creating auxiliary database
Distributing changes to cluster.
Creating database samdb
Starting bootstrap node node_name (IP_address)
Starting bootstrap node node_name (IP_address)
.
.
.
Starting nodes:
node_name (IP_address)
node_name (IP_address)
.
.
.
Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
Installing OS_package package
Success: package OS_package installed
Database creation SQL tasks completed successfully.
Database samdb created successfully.
Successfully created auxiliary database
Performing post install configuration tasks
Creating public interface for host node_name (IP_address)
Creating public interface for host node_name (IP_address)
.
.
.
CREATE NETWORK INTERFACE
ALTER NODE
Setting DB samdb restart policy to never and replicating to cluster...
Database samdb policy set to never
Installing user defined extension libraries and functions.
Unzipping Python libraries.
Setting sticky bit on all nodes.
INFO: About to configure TLS ....
Generating TLS certificates
INFO: About to validate key and certificate
Make sure the certificate has not expired and that the specified date range is current and valid.
Not Before: date
Not After : date
INFO: Complete validating key and certificate
INFO: Adding certificate to AuxDB
Distributing configuration to all nodes
Post install configuration tasks completed successfully.
Successfully installed auxiliary database.
Output captured in /opt/nsp/nfmp/auxdb/install/log/auxdbAdmin.sh.timestamp.log
56
When the software installation on each station is complete, perform the following steps on each station in the cluster that you are upgrading.

57
Stop the auxiliary database.

Stop main server

58
If the NSP deployment includes the NFM-P, stop the standby NFM-P main server, if the server is running.

Prepare to restore auxiliary database

59
If the backup files to restore are not in the original backup location on each auxiliary database station, perform the following steps.

60
If the backup is being restored on any stations that have different internal IP addresses, perform the following steps on each auxiliary database station.
Restore auxiliary database

61
Enter the following on one auxiliary database station:
# /opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh restore path/AuxDbBackUp/samAuxDbBackup_restore.conf ↵
where path is the absolute path of the database backup created in Step 42
The restore operation begins. The following messages and progress indicators are displayed:
Starting full restore of database db_name.
Participating nodes: node_1, node_2, ... node_n.
Restoring from restore point: AuxDbBackUpID_datestamp_timestamp
Determining what data to restore from backup.
[==============================================] 100%
Approximate bytes to copy: nnnnnnnn of nnnnnnnnn total.
Syncing data from backup to cluster nodes.
When the restore is complete, the second progress indicator reaches 100%, and the following message is displayed:
[==============================================] 100%
Restoring catalog.
Restore complete!

62
Enter the following:
# /opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh uninstallUDxLibraries ↵

63
Enter the following:
# /opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh installUDxLibraries ↵
Restore database user passwords

64
If a database user password has changed since the creation of the database backup, the NSP components cannot authenticate with the restored auxiliary database. In that case, you must perform the following steps to update each affected database user password to the value that the NSP components recognize.
Update database schema

65
Enter the following to allow external access to the auxiliary database ports:
# /opt/nsp/nfmp/auxdb/install/bin/auxdbAdmin.sh shieldsDown ↵

66
If the NSP deployment includes the NFM-P, update the NFM-P database schema.
Note: The schema update may take considerable time.

Enable required database services and proxies

67
Perform the following steps on each auxiliary database station in each auxiliary database cluster.
68
Enter the following on each auxiliary database station:
# systemctl start nspos-auxdbproxy.service ↵
The auxiliary database proxy starts.

Disable maintenance mode for auxiliary database agents

69
Activate the nspos-auxdb-agent in each NSP cluster.
The cluster enters active mode within approximately one minute.
Verify auxiliary database status

70
Issue the following RESTCONF API call to verify that the primary auxiliary database cluster is in active mode:
Note: To issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.
GET /data/auxdb:/auxdb-agent HTTP/1.1
Host: address
Content-Type: application/json
Authorization: bearer_and_token_from_session_manager
where address is the advertised address of the primary NSP cluster
The cluster is in active mode if the response includes ACTIVE.
71
Issue the following RESTCONF API call to verify the auxiliary database operation:
Note: To issue a RESTCONF API call, you require a token; see this tutorial on the Network Developer Portal for information.
GET https://address/restconf/data/auxdb:/clusters
where address is the advertised address of the primary NSP cluster
The call returns auxiliary database cluster status information like the following; if each mode and status value is not as shown below, contact technical support.
<HashMap>
   <clusters>
      <cluster>
         <name>cluster_M</name>
         <mode>ACTIVE</mode>
         <status>UP</status>
         <nodes>
            <external-ip>203.0.113.101</external-ip>
            <internal-ip>10.1.2.101</internal-ip>
            <status>UP</status>
         </nodes>
         <nodes>
            <external-ip>203.0.113.102</external-ip>
            <internal-ip>10.1.2.102</internal-ip>
            <status>UP</status>
         </nodes>
         <nodes>
            <external-ip>203.0.113.103</external-ip>
            <internal-ip>10.1.2.103</internal-ip>
            <status>UP</status>
         </nodes>
      </cluster>
      <cluster>
         <name>cluster_N</name>
         <mode>STANDBY</mode>
         <status>ON_STANDBY</status>
         <nodes>
            <external-ip>203.0.113.104</external-ip>
            <internal-ip>10.1.2.104</internal-ip>
            <status>READY</status>
         </nodes>
         <nodes>
            <external-ip>203.0.113.105</external-ip>
            <internal-ip>10.1.2.105</internal-ip>
            <status>READY</status>
         </nodes>
         <nodes>
            <external-ip>203.0.113.106</external-ip>
            <internal-ip>10.1.2.106</internal-ip>
            <status>READY</status>
         </nodes>
      </cluster>
   </clusters>
</HashMap>
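A sketch of an automated check of this response, assuming bash with awk. It verifies only that the expected cluster-level mode and status values appear in the body; node-level statuses are not checked here, and the helper name, NSP_ADDRESS, and TOKEN are assumptions:

```shell
#!/bin/bash
# Exit 0 when a clusters response on stdin contains one ACTIVE/UP
# cluster and one STANDBY/ON_STANDBY cluster, as shown in the step above.
clusters_healthy() {
  awk '
    { body = body $0 }
    END {
      if (body ~ /<mode>ACTIVE<\/mode>/        &&
          body ~ /<status>UP<\/status>/        &&
          body ~ /<mode>STANDBY<\/mode>/       &&
          body ~ /<status>ON_STANDBY<\/status>/)
        exit 0
      exit 1
    }'
}

# Hypothetical usage with curl and a RESTCONF token:
# curl -sk -H "Authorization: Bearer $TOKEN" \
#     "https://$NSP_ADDRESS/restconf/data/auxdb:/clusters" | clusters_healthy \
#   && echo "auxiliary database clusters healthy"
```

A failing exit status means at least one expected value is absent, which per the step above is a cue to contact technical support rather than proceed.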
72
Close the open console windows.

End of steps