Sunday, 21 February 2010

Solaris logadm : Grouping logfiles

Solaris' official log-rotation tool, "logadm", lets us configure a rotation period of, say, daily or weekly, how many old logs are retained, and where those logs are kept. It's basically Solaris' equivalent of Linux's logrotate.

It's easy enough to configure, but I couldn't find info on how to group logfiles together, so I thought I'd blog my findings...

HOW TO GROUP LOGFILES IN LOGADM

The scenario I had was a couple of instances of Oracle DB running on one server. I wanted logadm to group their log files together so that, after the daily rotation, a single command/script was run which produced a brief report of any errors contained in any of the Oracle alert logs. That way, rather than an email report for every alert log, I could get a single email report detailing any issues.

The logadm man page suggests grouping is possible, but I couldn't find details on it, so I experimented and the following seems to work. The logadm config file is "/etc/logadm.conf":

# Oracle Alert Logs
# In this config the alert logs are grouped together.
ORACLE_ALERT_LOGS -C 0 -a /home/oracle/alert_log_report.sh -c -p 1d -s 1b -t '/var/log/oracle/$nodename_$basename.%Y%m%d' \
/path/to/first/db/alert_log \
/path/to/second/db/alert_log

The first time the above config is executed it will rotate and archive /path/to/first/db/alert_log and /path/to/second/db/alert_log into /var/log/oracle, with a $nodename prefix and a YYYYMMDD date suffix. Because these two logfiles are grouped together as ORACLE_ALERT_LOGS, the script alert_log_report.sh is executed once, after both logfiles have been rotated; this script can be used to send a daily summary of any Oracle errors. After that first run, "/etc/logadm.conf" looks like this:

# Oracle Alert Logs
# In this config the alert logs are grouped together.
ORACLE_ALERT_LOGS -C 0 -a /home/oracle/alert_log_report.sh -c -p 1d -s 1b -t '/var/log/oracle/$nodename_$basename.%Y%m%d' \
/path/to/first/db/alert_log \
/path/to/second/db/alert_log

# These lines are added the first time logadm is executed and are
# used by logadm to track when next to rotate the logs, i.e. if a
# rotation period of 1 week was set with "-p 1w", logadm wouldn't
# rotate these 2 log files until Fri 2nd Oct 2009
/path/to/first/db/alert_log  -P 'Fri Sep 25 04:10:00 2009'
/path/to/second/db/alert_log  -P 'Fri Sep 25 04:00:00 2009'
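The post doesn't show alert_log_report.sh itself; as a rough sketch (the archive path, filename pattern, and mail address here are my assumptions, not from the original setup), it might look something like:

```shell
#!/bin/sh
# Hypothetical sketch of alert_log_report.sh: scan today's rotated
# alert logs for ORA- errors and send ONE consolidated report.
# Assumes logadm archived them as /var/log/oracle/<node>_<log>.YYYYMMDD

ARCHIVE_DIR=/var/log/oracle
TODAY=`date +%Y%m%d`
REPORT=/tmp/alert_report.$$

: > "$REPORT"
for f in "$ARCHIVE_DIR"/*."$TODAY"; do
    [ -f "$f" ] || continue
    # the /dev/null argument forces grep to prefix each hit with its filename
    grep 'ORA-' "$f" /dev/null >> "$REPORT" || true
done

if [ -s "$REPORT" ]; then
    # one mail covering every instance, rather than one mail per alert log
    mailx -s "Oracle alert log errors $TODAY" dba@example.com < "$REPORT"
fi
rm -f "$REPORT"
```

Hooked in via the -a switch, a script like this runs exactly once per rotation because the logs are grouped.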

I got most of this info from http://docs.sun.com/app/docs/doc/816-5166/logadm-1m?a=view

Explanation of the other params/switches passed to logadm:

-s 1b means logs are only rotated if they are greater than 1 byte in size, so empty logs aren't needlessly rotated.

-C 15 sets the log retention count, here 15 logs (14 archived + the current log).
-C 0 turns off the deletion of retained logs; the idea being that we keep all logs in /var/log/archives and maybe at some point in the future have a process to archive them off to a centralised log server.

-P 'Wed Sep 23 10:03:43 2009'
This is an internal switch used by logadm: when it rotates a log it writes the timestamp back into "logadm.conf", then uses that timestamp as the base for deciding when the next rotation is due. NOTE: the timestamp must be in exactly the above format.
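If you ever need to seed one of these -P lines by hand, date can produce the same ctime-style format. This is my own sketch, not from the man page; note that on single-digit days %d prints "05" where ctime prints " 5", so verify against what logadm itself writes:

```shell
# Build a timestamp in the "Wed Sep 23 10:03:43 2009" style that
# logadm writes back after the -P switch
STAMP=`date '+%a %b %d %H:%M:%S %Y'`
echo "$STAMP"
```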

-c is a nice feature, especially for the likes of Oracle's listener.log: the log is copied first and then truncated in place rather than renamed, so the writing process keeps its file handle.

-p 1w is a period of 1 week before next rotation.
-p 1d is a period of daily log rotation.

-t '/var/log/archives/$basename.%Y%m%d'
is the destination and filename format of the rotated logs. I thought it was a good idea if they all went into a dedicated log-archive area and had a YYYYMMDD date suffix.

-a /home/oracle/alert_log_report.sh
The "-a" switch tells logadm to execute this command after the logs have been rotated, e.g. you might use the following when rotating the ssh daemon logs: -a 'kill -HUP `cat /var/run/sshd.pid`'

-V is incredibly useful as it validates the configuration file.

Simple date arithmetic in shell script

In the past I've always used this perl hack to do simple date arithmetic within a Unix shell script.

yesterday=$(perl -e 'use Date::Format;print time2str("%Y%m%d%H%M",time-1800)."\n";')

The above is a simple perl one-liner which gives you a timestamp from half an hour ago;
i.e. run at 12:51 on 01/02/2010 it returns 201002011221, i.e. 12:21.

This perl one-liner and the YYYYMMDDHHMM date/timestamp format are useful because the format can be passed to the touch command. I've used this in scripts to create a temp file with a timestamp of half an hour ago, then passed that file to the find command to pick up files modified before or after that time, e.g. for files that haven't been modified in the last 10 minutes:

tStamp=$(perl -e 'use Date::Format;print time2str("%Y%m%d%H%M",time-600)."\n";')
touch -t ${tStamp} /tmp/my_timestamp
find /path/to/dir -name "File_Glob_You_Want*" -type f ! -newer /tmp/my_timestamp ! -size 0
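Here's a self-contained run of that touch/find trick against a throwaway directory (all the paths and filenames are invented for the demo, and POSIX strftime is used in place of Date::Format since it ships with perl):

```shell
#!/bin/sh
# Demo: pick out files that have NOT been modified in the last 10 minutes.
DEMO=/tmp/find_demo.$$
mkdir -p "$DEMO"

# one file stamped 20 minutes ago, one stamped now
OLD_TS=`perl -e 'use POSIX qw(strftime); print strftime("%Y%m%d%H%M", localtime(time-1200));'`
touch -t "$OLD_TS" "$DEMO/old_file"
touch "$DEMO/new_file"

# marker file stamped 10 minutes ago
MARK_TS=`perl -e 'use POSIX qw(strftime); print strftime("%Y%m%d%H%M", localtime(time-600));'`
touch -t "$MARK_TS" "$DEMO/.marker"

# anything not newer than the marker hasn't changed for ~10 minutes
STALE=`find "$DEMO" -type f ! -name .marker ! -newer "$DEMO/.marker"`
echo "$STALE"

rm -rf "$DEMO"
```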


The server this script was to run on wasn't configured correctly: it had no nameservers in resolv.conf or dns entries in nsswitch.conf. I added these but it still wouldn't resolve domain names, so I looked for a way to find yesterday's date without using Perl. It turns out you can do this simply in Solaris by specifying the timezone just before the date command.

TZ=NZ date +'%d-%b-%Y %H:%M'

will print the current time in New Zealand, which is 12 hours ahead of UTC/GMT.

The problem is I needed yesterday's date and the script ran at approx 4am, so I needed a timezone at least 5 hours behind to be safe. If you want to see what timezones are available, have a look in the /usr/share/lib/zoneinfo/ directory; you can pick any of these for your date command, but perhaps the simplest to use are GMT+n and GMT-n.

So running at 4am you could use any of the following to get yesterday's date in Oracle DD-MON-YYYY format.

TZ=US/Alaska date +'%d-%b-%Y %H:%M'
TZ=GMT+5 date +'%d-%b-%Y %H:%M'
TZ=Chile/EasterIsland date +'%d-%b-%Y %H:%M'
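So in a 4am cron job, yesterday's date could be captured like this (the variable name is mine, and it assumes the server itself runs at or near UTC, as mine did):

```shell
# GMT+5 in the POSIX convention means 5 hours BEHIND UTC, so at 4am
# on a UTC-based server this yields yesterday's date
YESTERDAY=`TZ=GMT+5 date +'%d-%b-%Y'`
echo "$YESTERDAY"
```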

What I can't figure out is why GMT+n and GMT-n are intuitively the wrong way round. In the above example I had to use GMT+5 to get the current time in Lima, Peru, despite Lima being GMT-5. I thought at first this might be a Solaris 9 bug, but I tested on Ubuntu and it has exactly the same behaviour. (It turns out this is the POSIX TZ convention: the offset is the number of hours you add to local time to reach UTC, so GMT+5 means five hours behind UTC.)

TZ=Etc/GMT-12 date  # (gives current time + 12 hours)
TZ=Etc/GMT+12 date # (gives current time - 12 hours)
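A quick sanity check of the reversed signs: the clock under Etc/GMT-1 should always read two hours ahead of the one under Etc/GMT+1:

```shell
# Etc/GMT-1 is actually UTC+1 and Etc/GMT+1 is actually UTC-1, so
# the two hours below should differ by 2 (modulo 24)
EAST=`TZ=Etc/GMT-1 date +%H`
WEST=`TZ=Etc/GMT+1 date +%H`
DIFF=`expr \( "$EAST" - "$WEST" + 24 \) % 24`
echo "$DIFF"
```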


For more info on timezones have a look at Wikipedia.

Friday, 19 February 2010

Stornoway Takeaway Menus Online

My brother was round at the flat the other week and wanted to take a takeaway back to Harris with him. Out of curiosity I googled "stornoway takeaway menus" and as if by magic this site came up: http://sites.google.com/site/stornowayfood/

Well done Seumas for setting this up; I've forwarded the site to a few friends. Such a simple and brilliant idea, and so incredibly useful! :-)

Tuesday, 2 February 2010

ORA-12560: TNS:protocol adapter error

These notes only apply to a problem I had with a local install of Oracle on Windows.

I couldn't connect to a local install of Oracle on Windows with sqlplus, either with "sqlplus / as sysdba" or "sqlplus sys/sys as sysdba". I tried the usual stuff:
  • Check that the ORACLE_HOME environment variable is set.
  • Check that listener.ora is configured correctly.
  • Check that sqlnet.ora is OK.
  • Check the dba group to see if I was a member.
All of these seemed to be configured correctly.
Out of desperation I removed the NTS authentication from sqlnet.ora:

SQLNET.AUTHENTICATION_SERVICES= (NTS)

I then tried to connect to the database with "sqlplus / as sysdba", but this time got ORA-01031: insufficient privileges. So I tried database authentication instead, connecting as sys as sysdba and supplying the database password. Bingo, I was logged in.

Turns out all the above hassle was caused by the "NT LM Security Support Provider" service, turns out Oracle needs this service to be running to have transparent OS authentication.  When I started this service and restored the SQLNET.AUTHENTICATION_SERVICES= (NTS)into sqlnet.ora I found that I could use sqlplus / as sysdba and quickly login locally to db.