photo by Thomas Hawk
$ ./pgmon.sh
Usage: pgmon.sh [username] [password] [host] <sid=postgres> <port=5432> <runtime=3600>
$ ./pgmon.sh kyle kyle mymachine.com
psql -t -h mymachine.com -p 5432 -U kyle postgres < /tmp/MONITOR/tmp/mymachine.com:postgres_collect.pipe &
RUN_TIME=-1
COLLECT_LIST=
FAST_SAMPLE=wts
TARGET=mymachine.com:postgres
DEBUG=0
Connected, starting collect at Wed Mar 8 12:05:12 PST 2017
starting stats collecting
SET
AAS| blks_hit | blks_read | tup_returned | tup_fetched | tup_inserted | tup_updated | tup_deleted
1 | 38281 | 1 | 628629 | 23600 | 1068 | 2316 | 0
16 | 146522 | 0 | 1497604 | 48647 | 2352 | 4599 | 2462
2 | 114046 | 0 | 1894329 | 46341 | 3822 | 3852 | 3066
2 | 146728 | 0 | 2239014 | 61420 | 3822 | 5668 | 3150
16 | 70446 | 0 | 945021 | 49284 | 2016 | 686 | 757
13 | 264149 | 0 | 1146816 | 53816 | 1638 | 2176 | 1852
15 | 54324 | 0 | 226542 | 19078 | 840 | 396 | 31
13 | 1170087 | 0 | 2301442 | 186967 | 2058 | 4276 | 1340
3 | 1036439 | 0 | 3411396 | 57392 | 4158 | 5041 | 3605
1 | 135927 | 0 | 1931473 | 90238 | 4788 | 5077 | 3654
5 | 92975 | 0 | 1427641 | 49175 | 2772 | 2812 | 1764
16 | 73695 | 0 | 1001290 | 35585 | 1806 | 1721 | 1915
14 | 65117 | 0 | 242383 | 22150 | 420 | 530 | 511
4 | 111906 | 0 | 1593090 | 49570 | 2982 | 4718 | 3086
When there are databases for which I’m responsible, I like to have powerful database performance analysis tools. In the Oracle world I use tools such as Lab128, DB Optimizer and OEM. There are a number of other tools that look good from companies such as Quest, Solarwinds and Orachrome. Often, though, when I’m dealing with a new situation, a new database, or a database that’s temporary, I just want a quick and easy way to see basic stats. For Oracle I have a number of easy monitoring scripts.
For monitoring databases other than Oracle, I have less experience. I did work on SQL Server, Sybase and DB2 during the development of DB Optimizer, but monitoring popular open source databases such as MySQL and Postgres is relatively new to me.
Right now I’m looking into Postgres.
For Postgres I wanted a quick and easy way to monitor various stats in real time, so I could start iterating and prototyping the kind of data I wanted to look at. So I put together a shell script. The advantage of a shell script is that it should work almost anywhere, and the only requirement is psql, which should be readily available in a PostgreSQL environment.
I took a script from my Oracle experiences, oramon.sh, and retrofitted it for Postgres. The script sets up a named pipe and then has psql (or, for Oracle, SQL*Plus) read from that named pipe. The script procedurally feeds SQL statements into the named pipe; the output from psql goes to files, and the script manages the output to the screen. The script is a little less optimal for PostgreSQL than for Oracle, as I had to use a temporary table to store variable values. psql does have variables, but they look to be static and not modifiable (they seem to act like defines).
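The named-pipe technique is worth sketching on its own. The idea is that one long-lived psql session keeps reading from a fifo, so the script pays the connection cost once and can feed it statement after statement. Here is a minimal, self-contained sketch of that plumbing; the file names are made up for the demo, and `cat` stands in for the long-lived psql session so the sketch runs without a database:

```shell
#!/bin/sh
# Sketch of the named-pipe technique pgmon.sh uses. In the real script the
# reader is a psql session, e.g.:
#   psql -t -h $HOST -p $PORT -U $USER $DB < $PIPE > $OUT &
# Here `cat` plays that role so the sketch is runnable anywhere.
PIPE=/tmp/pgmon_demo.pipe
OUT=/tmp/pgmon_demo.out
rm -f "$PIPE" "$OUT"
mkfifo "$PIPE"

# Long-lived reader: stays alive as long as some writer holds the pipe open.
cat < "$PIPE" > "$OUT" &
READER=$!

# Keep one write end open on fd 3 so the reader doesn't see EOF between
# statements, then feed "SQL" into the pipe procedurally.
exec 3> "$PIPE"
echo "select 1;" >&3
echo "select now();" >&3
exec 3>&-          # closing the write end ends the session

wait "$READER"
cat "$OUT"
rm -f "$PIPE"
```

Holding fd 3 open is the key detail: without it, the reader would see end-of-file after the first statement and exit, which is exactly the one-connection-per-query behavior the pipe is there to avoid.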
The script pulls out some basic stats and outputs them every 5 seconds on the command line, sort of like vmstat. The first stat is AAS, or average active sessions, which is the count of sessions actively running queries, followed by a few basic database stats.
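Two details behind those columns are worth spelling out. AAS can be sampled with a count over pg_stat_activity, and the remaining columns come from cumulative counters, so each 5-second line is the difference between two successive samples. A hedged sketch of both pieces — the query is my assumption about how such a sample would be taken, not a quote from pgmon.sh, and the two hard-coded rows stand in for two successive psql reads:

```shell
#!/bin/sh
# AAS sample: count of backends actively running a query. On PostgreSQL 9.2+
# pg_stat_activity exposes this as state = 'active' (an assumed query, not
# taken from pgmon.sh itself).
AAS_SQL="select count(*) from pg_stat_activity where state = 'active';"

# The blks_hit/blks_read/tup_* counters (from pg_stat_database) are
# cumulative since the last stats reset, so each displayed row is
# current sample minus previous sample. Two hard-coded samples:
prev="38281 1 628629 23600 1068 2316 0"
curr="184803 1 2126233 72247 3420 6915 2462"

# Subtract column by column to get the per-interval values.
printf '%s\n%s\n' "$prev" "$curr" |
awk 'NR==1 { for (i=1; i<=NF; i++) p[i]=$i; next }
     { for (i=1; i<=NF; i++) printf "%s%d", (i>1 ? " | " : ""), $i-p[i]
       print "" }'
```

With these particular samples the awk step prints `146522 | 0 | 1497604 | 48647 | 2352 | 4599 | 2462`, the same shape as a row of the output above.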