Actian Vector
This section is divided into the following key areas for Actian Vector:
- Installation Path
- Basic Commands
- Start up Vector
- Shut down Vector
- Maintenance Guide
Installation Path
Actian Vector will be installed by default on:
/opt/Actian
Basic Commands
- Start up Vector:
systemctl start actian-vectorVW
- Shut down Vector:
systemctl stop actian-vectorVW
Maintenance Guide
- Optimizing the Accelerator Database
- sysmod - Modifying the System Catalogs
- Propagating Accelerator In-Memory Changes and Condensing the Log
- Disable new Connections to the Accelerator Database
- Rolling the Transaction Log
- Backing-up Tables
- Restoring Tables
- Adding New Database Users
Optimizing the Accelerator Database
The Accelerator database needs to be optimized to ensure that the fastest execution paths are used for queries; this also affects the amount of memory used by queries. The database vendor recommends that you optimize a table when its data has changed by 10%. After the initial optimization of the database, and taking the vendor's recommendation into consideration, we recommend optimizing the database weekly, for each schema. For high-volume databases, optimization will need to be performed more frequently.
By default, there are two schemas in the database: one for the replication data and one for Hubble. If you have created, or are going to create, new users/schemas for additional data, these will need to be added to the optimization tasks. Optimization can be run at table level or schema level; instructions for both operations follow.
To use the optimizedb command as demonstrated in the following topics you must be logged on as the actian or root user.
Optimizing an Individual Table
Log onto the accelerator server and run the following command:
optimizedb -u<DB_UserName> -zfq <Database_Name> -r<TableName>
For example, if the default database is db, to optimize its f0911 table in the replication schema run the following:
optimizedb -ureplication -zfq db -rf0911
Optimizing all Tables in a Schema
Log onto the accelerator server and run the following command:
optimizedb -u<DB_UserName> -zfq <Database_Name>
For example, if the default database is db, to optimize all the tables in the replication schema run the following:
optimizedb -ureplication -zfq db
Example Output from optimizedb
Typically, optimizedb runs silently, returning you to the command prompt when optimization has completed. However, it will notify you of any tables that contain no rows.
Scheduling Optimization
Cron can be used to schedule the optimization.
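For instance, crontab entries along the following lines could run the weekly optimization for the two default schemas. This is a sketch only: the schedule, the database name (db), and the schema users (replication, hubble) are assumptions to adjust for your installation.

```shell
# Example crontab entries for the actian user (edit with: crontab -e).
# Runs optimizedb weekly, early on Sunday, once per schema.
# Times, database name, and schema users are illustrative only.
0 2 * * 0 optimizedb -ureplication -zfq db
0 3 * * 0 optimizedb -uhubble -zfq db
```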
sysmod - Modifying the System Catalogs
As well as optimizing the Accelerator database, it is recommended that you periodically run the sysmod command. The sysmod command sets the system catalogs of a database to their currently defined storage structure. Doing so removes overflow and deleted pages, which speeds up query processing.
To use the sysmod command as demonstrated in the following sections you must be logged on as the actian or root user.
Running sysmod for the Hubble Database
The sysmod operation requires exclusive access to the database. To ensure you have exclusive access to the database, you will need to follow the instructions in the following order:
- Shut down Replication.
- Disable new connections to the Accelerator Database.
For details of these procedures, refer to Chapter 5: Accelerator Shutdown Instructions of the Hubble Accelerator Maintenance Guide.
Log onto the accelerator server and run the following command:
sysmod <database>
For example, if the default database is db, run:
sysmod db
Then manually restart the Replication Service. For details of this procedure, refer to Chapter 6: Accelerator Startup Instructions of the Hubble Accelerator Maintenance Guide.
The replication tasks can be restarted in two modes, Resume and Reload:
Resume will read the source database log and resume reading the changes from where it left off.
Reload will reload the data from the source database and then read the changes from the source database log.
Propagating Accelerator In-Memory Changes and Condensing the Log
The high-performance columnar database that drives Hubble Accelerator achieves its speed by doing as much work as possible in memory; notably, transactions are only written to disk when the system is quiet. The Actian system will periodically attempt to propagate the transactions and condense the log itself, but on a system that is continually processing transactions this can fail. As such, one of the key maintenance tasks for accelerator administrators is to monitor the volume of in-memory changes, ensuring both that it does not grow too large and that when the accelerator is stopped, e.g. for an upgrade, it is done in such a manner that all database transactions are written to disk.
Failure to do so could result in delays at startup, as the unpropagated in-memory changes are read back into memory, or in the system failing to start if the in-memory changes have grown beyond the memory allowed for the Actian process.
This is a 2-stage process:
- Propagate the in-memory changes, writing them to the database.
- Condense the log to remove the in-memory changes now written to the database.
Additional information relating to accelerator database log propagation can be found in Appendix 1: Additional Information Relating to Log Propagation.
Propagating the in-memory changes and condensing the log can take some time, and it will fail if queries make in-memory changes while it is running. You must therefore carry out the steps in the previous sections to stop all queries running on the database, to ensure that it completes successfully. Given the time it is likely to take, this activity must be planned for a time when it is least disruptive to the users of the system.
How to Monitor the size of In-Memory Changes
The command vwinfo -M <DATABASE> returns which tables are accumulating in-memory changes and how large those changes are.
Thus on a typical Accelerator the steps will be:
- Connect to the Hubble accelerator server.
- Switch user to the Accelerator database user.
# su actian
- Run the above vwinfo command.
# vwinfo -M db
How to Propagate the In-Memory Changes into the Log and Condense the Log
Proceed as follows:
- Connect to the Hubble accelerator server.
- Switch user to the Accelerator database user.
# su actian
At this point, decide whether to propagate specific tables where the in-memory changes are particularly large, or all tables in the schema. As noted above, propagating changes can take some time; as the administrator, you need to know how large a downtime window your business can tolerate and plan accordingly.
To propagate the in-memory changes for specific tables, run the following for each table, replacing <TABLE> with the schema-qualified table name:
echo "call vectorwise (combine '<TABLE>')\g" | sql <database>
For example:
echo "call vectorwise (combine 'jde.f0911')\g" | sql db
Alternatively, to propagate the in-memory changes for all tables in the database run the following command.
echo "call vectorwise (combine)\g" | sql db
- After propagating the changes to all the desired tables in a given schema, run the following command to condense the log.
echo "call vectorwise (condense_log)\g" | sql db
Disable new Connections to the Accelerator Database
As noted in the instructions for shutting down Replication (see Chapter 5: Accelerator Shutdown Instructions of the Hubble Accelerator Maintenance Guide), propagating the log can fail if transactions are added while propagation is in progress. Apart from replication, we also need to stop new SQL sessions being started on the accelerator database; see Propagating Accelerator In-Memory Changes and Condensing the Log.
- Connect to the Hubble accelerator server.
- Switch user to the Actian database user.
# su actian
- Use iinamu to get the server number that you will need to pass to iimonitor.
# iinamu
Actian Vector NAME SERVICE MANAGEMENT UTILITY
-- Copyright (c) 2009 Actian Corporation --
IINAMU> show
INGRES * 41271 (sole server)
IINAMU> quit
- Run iimonitor using the number of the "sole server" returned by the above iinamu command.
# iimonitor 41271
IIMONITOR>
- Stop new SQL sessions from starting.
IIMONITOR> set server closed
User connections now disabled
IIMONITOR>
- Wait a few minutes to allow user sessions to terminate naturally, then get a formatted list of all the active sessions. The following shows only the details of a single session, but gives you a feel for what to expect.
IIMONITOR> show sessions formatted
Session 0000000007023C80:1399772928 (jde810a)
    cs_state: CS_EVENT_WAIT (BIOR)
    cs_mask: CS_INTERRUPT_MASK,CS_NOXACT_MASK
    OS_tid: 31151
    DB Name: db (Owned by: root)
    User: jde810a (jde810a)
    Session started at 3-May-2016 12:04:56 as user jde810a
    Terminal: batch
    Group Id:
    Role Id:
    Application Code: 00000000
    Current Facility: CLF (00000001)
    Client user: Edward
    Client host: edward0912
    Client tty:
    Client pid:
    Client connection target: db
    Client information: user='Edward',host='edward0912',conn='db',server='57468',session='1c250c0'
    Description:
    Query:
    Last Query: open ~Q cursor for SELECT CVCRCD, CVCDEC, CVDL01 FROM jde810a.F0013 for readonly
Session 000000000714BFC0:-566962432 (integration_test)
    cs_state: CS_EVENT_WAIT (BIOR)
    cs_mask: CS_INTERRUPT_MASK,CS_NOXACT_MASK
    OS_tid: 4031
...
- Shut down each non-terminated session using the following command. The session id to pass is the hexadecimal number that precedes the colon; thus, for the above example, Session 0000000007023C80:1399772928, the session identifier is 0000000007023C80.
IIMONITOR> remove 0000000007023C80
Session 0000000007023C80 removed
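When many sessions are listed, the identifiers can be picked out mechanically. The following is a small helper sketch (not part of iimonitor) that extracts the hexadecimal identifier from a session line; the sample line is taken from the example output above.

```shell
# Extract the session identifier (the hex number before the colon) from a
# "show sessions formatted" session line. In practice you would feed the
# captured iimonitor output in rather than a hard-coded sample line.
line='Session 0000000007023C80:1399772928 (jde810a )'
session_id=$(printf '%s\n' "$line" | awk '{print $2}' | cut -d: -f1)
echo "$session_id"   # prints 0000000007023C80
```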
Repeat the previous two steps (show sessions and remove) until there are no non-administrator sessions running.
- Re-enable connections to the database. This is to ensure that you can run the sql commands that propagate and condense the in-memory changes to the database.
IIMONITOR> set server open
User connections now allowed
- Exit iimonitor.
IIMONITOR> quit
#
The iimonitor command is an extremely useful tool for monitoring and understanding what the accelerator database is doing. For more information on this tool refer to Appendix A of the Actian Vector 5.0 User Guide.
Rolling the Transaction Log
The vectorwise.log file, stored in /opt/Actian/VectorVW/ingres/files, accumulates all log messages for the accelerator database and can grow quite large. It should be rolled over regularly to keep its size manageable.
How to Check the size of the vectorwise.log File
Run the following command as the Actian user:
# ls -lh /opt/Actian/VectorVW/ingres/files/vectorwise.log
How to Roll the vectorwise.log File
If the file is getting large it is advisable to roll the log file. The following command should be run as the Actian user.
# echo "call vectorwise(vwlog_rotate)\g" | sql db
After running the command, check the log size again; it should be much smaller. There should also be another file next to it named vectorwise.log-<today's date>. This file can be safely deleted.
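If the log is rolled on a schedule, the dated files can accumulate. The following is a hypothetical cleanup helper (not an Actian utility) that removes rotated logs older than seven days; the retention period is an assumption.

```shell
# Hypothetical helper: delete rotated vectorwise.log-<date> files older
# than 7 days in the given directory. The 7-day retention is an example.
prune_rotated_logs() {
    find "$1" -maxdepth 1 -name 'vectorwise.log-*' -mtime +7 -delete
}

# Typical invocation on the accelerator, using the default install path:
# prune_rotated_logs /opt/Actian/VectorVW/ingres/files
```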
Backing-up Tables
Log onto the accelerator server and check that you have enough space to take backups of the data.
The backup process creates uncompressed copies of the tables and hence consumes disk space. As a best practice, create an NFS mount, or similar, to a remote server to store the backup of your accelerator data.
- Create a directory to hold the backups:
mkdir /tmp/<Backup_Directory_Name>
- Navigate to the new directory:
cd /tmp/<Backup_Directory_Name>
- Optionally, create subdirectories for each schema you are backing up.
The command used to take a backup of tables and data is copydb. It creates two files, copy.in and copy.out, which are then used to create the exported data files (copy.out) and to restore the tables and data (copy.in).
The copydb command takes the following parameters:
- To copy all the tables in the schema owned by the database user:
copydb <database_name> -u<DB_UserName>
- To copy one or multiple tables from the schema owned by the database user:
copydb <database_name> -u<DB_UserName> -r<TableName1> <TableName2>
As previously explained, running the copydb command creates the two files copy.in and copy.out. For example, to back up TableA from the replication schema, run:
copydb db -ureplication -rTableA
This creates copy.in and copy.out in the directory where you ran the command.
- Then run the export command to generate the data file backup, based on the copy.out file. The syntax for the export command is:
sql <database_name> -u<DB_UserName> < copy.out
For example:
sql db -ureplication < copy.out
This creates individual data files named <tablename>.<schema> in the directory where you ran the command from; in this example, TableA.replication.
Restoring Tables
- Log on to the Accelerator.
- Copy the backup file onto the server (using SCP, for example) and into the directory that the backup was created from (see the Backing-up Tables section).
- Decompress the file if required.
- If the location of the files differs from the location the backup was made from, edit the copy.in file and change the location specified in it to the new location of the backup files. The copy.in file contains sections for creating tables, granting select, and copying data; if you want to restore the data but the table already exists in the database, edit the copy.in file accordingly.
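As a concrete sketch of that edit, the directory portion of the data-file paths in copy.in can be rewritten with sed. The file contents and both paths below are hypothetical; inspect the copy statements in your own copy.in before and after running such a substitution.

```shell
# Hypothetical example of repointing copy.in at a new backup location.
old_dir='/tmp/mybackup'
new_dir='/mnt/nfs/mybackup'

# Stand-in copy.in so the example is self-contained; a real one is
# generated by copydb and contains full create/grant/copy sections.
cat > copy.in <<'EOF'
copy tablea() from '/tmp/mybackup/TableA.replication'
EOF

# Rewrite every occurrence of the old directory with the new one.
sed -i "s|$old_dir|$new_dir|g" copy.in
grep 'copy ' copy.in   # the copy statement now references /mnt/nfs/mybackup
```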
- Then, to import the data using the copy.in file, run:
sql <database_name> -u<DB_UserName> < copy.in
For example:
sql db -uhubble < copy.in
Adding New Database Users
The ybdbsetup helper script is used to create a new database user.
A menu explaining how to use the script can be produced by running it without parameters:
[root@******]# ybdbsetup
Possible Argument for /usr/bin/ybdbsetup
To create the database use db
e.g. ybdbsetup db
For application user app_user -u -p/--password
e.g. ybdbsetup app_user -u=hubble -p=PassworD
For replication user use rep_user -u -p/--password
e.g. ybdbsetup rep_user -u=replication -p=PassworD
For attunity user use att_user
e.g. ybdbsetup att_user
You only need to use this script to create extra users.
The difference between the app_user and rep_user parameters is that app_user is set up with an idle disconnect timeout.
For example, to create a new app user called hubble2 with a password of mypassword enter the following:
ybdbsetup app_user -u=hubble2 -p=mypassword