Oracle Database Maintenance
One recurring theme I see with junior DBAs is a lack of understanding of the 'analyze table' procedure. It is crucial for proper database performance, and it can be scheduled once a week or more frequently depending on the system's profile.
Basically, the 'analyze table' command gathers statistics on a table and stores them in the data dictionary, where the query optimizer uses them to choose execution plans, including whether and which indexes to use. For dynamic tables (large growth or shrinkage, or changes to key indexed columns), it is imperative that this is done frequently. For safety, run it after key database operations in any case. A quick example follows.
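Here is a minimal sketch; the schema and table names (APP_OWNER, ORDERS) are placeholders. The classic 'analyze table' syntax still works, but on newer releases Oracle recommends the DBMS_STATS package for optimizer statistics:

    -- Classic syntax: compute optimizer statistics for a table
    ANALYZE TABLE orders COMPUTE STATISTICS;

    -- Recommended interface on newer releases: DBMS_STATS
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',   -- schema name (placeholder)
        tabname => 'ORDERS',      -- table name (placeholder)
        cascade => TRUE);         -- gather statistics on the table's indexes too
    END;
    /

Either one can be wrapped in a script and scheduled weekly (or more often) as described above.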
Know what 'good' looks like for your system. Baseline and save (better still, commit to memory) the key behavioural aspects of your system, e.g. CPU utilization profile, throughput, response times, I/O levels, etc. That way you will notice when this profile changes, which is a key trigger for early detection of problems. More importantly, when you implement a change, you should compare the before and after profiles of your system.
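As a rough sketch, a few workload indicators can be pulled straight from the dynamic performance views and recorded on a schedule to build that baseline. The statistic names below are standard V$SYSSTAT entries, but which ones matter depends on your workload:

    -- Snapshot a handful of workload indicators from V$SYSSTAT
    -- (run periodically and store the results to build a baseline)
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('user commits',
                    'execute count',
                    'physical reads',
                    'redo size',
                    'session logical reads');

Comparing these snapshots before and after a change, or against your saved baseline, is exactly the kind of before/after profiling described above.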
Milan Gupta
1 comment:
Yes, this is exactly what I learnt while working as a support person: you need to know what normal looks like; only then can you understand and react when something goes wrong.
What I used to look out for was the number of log switches per hour/day. A sudden increase would mean that something has changed and is contributing to a higher level of database activity.
I found TOAD to be a really helpful tool in this regard. TOAD has an amazing range of diagnostic/health indicator tools, including a chart that lists the log switches per hour per day.
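For anyone without TOAD, roughly the same picture can be put together with a quick query against V$LOG_HISTORY (just a sketch; adjust the date range as needed):

    -- Count redo log switches per hour from V$LOG_HISTORY
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 7   -- last seven days
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;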