This is a subject that I see a lot of debate on, but as a DBA who works mostly in large data warehouse and mart environments, I am a pro-rebuild DBA.
Many of the large systems I work on are home-grown in design. This means there are either design or code issues that lend themselves to poor choices, leaving high fragmentation and ridiculously over-sized indexes. I have rarely seen this in smaller, OLTP environments, but offer me a large warehouse table that somebody has mistakenly decided to stage data to, and I will bet you the index has the potential to be larger than the table. Yes, you heard me: larger than the object it is only a partial representation of.
Homegrown systems are not the only culprits, either. Oracle E-Business Suite is a major offender in this scenario. I love every chance I get to be an Apps DBA. There are so many challenges and situations where Oracle broke its own “Ten Commandments” of Oracle database design. This is my kind of heaven, since I get to take it apart and put it back together the right way (well, as much as Oracle will let you when we’re talking about the Business Suite!). If you are looking for indexes that can grow larger than the tables they represent, check out the PO (Purchase Order) indexes. Due to the way the PO module continually inserts, updates, and deletes on what can grow to be quite large tables, the indexes can quickly become fragmented.
This subject came up after one of my fellow DBAs and I were looking at a poorly performing query with large waits on sequential reads. Given the logic in the code, we decided to do a quick check on the table and index sizes to determine whether it was time to partition this table and address the waits on the index.
select event, p1text, p1, count(*), sum(time_waited) waiter
from v$active_session_history
where session_id = 697
and event = 'db file sequential read'
group by event, p1text, p1
order by waiter desc;
EVENT                    P1TEXT     P1         COUNT(*)   WAITER
------------------------ ---------- ---------- ---------- ----------
db file sequential read file# 61 3158 147953231
db file sequential read file# 52 3031 140491700
db file sequential read file# 53 2123 75409199
db file sequential read file# 96 137 3999203
db file sequential read file# 124 141 3435120
db file sequential read file# 86 117 3224430
db file sequential read file# 85 92 2495747
db file sequential read file# 116 92 2009065
db file sequential read file# 76 69 1935742
select file_name, tablespace_name from dba_data_files where file_id in (52,53,61);
SELECT SEGMENT_NAME, BYTES/1024/1024 from DBA_SEGMENTS
WHERE SEGMENT_NAME LIKE 'EMAIL%'
AND SEGMENT_TYPE IN ('TABLE','INDEX');
Ready to be partitioned? Considering Oracle recommends investigating any segment in the 2–4 GB range for partitioning benefits, I’d say yes.
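As a rough sketch of what that partitioning could look like (the table, column, and index names below are hypothetical, and assume 11g interval partitioning is available):

```sql
-- Hedged sketch: range/interval partition the large table by its load date
-- (EMAIL_P, LOAD_DATE, and EMAIL_ID are placeholder names, not from the system above)
CREATE TABLE email_p
  PARTITION BY RANGE (load_date)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01'))
AS SELECT * FROM email;

-- A LOCAL index keeps each partition's index in its own segment,
-- so maintenance can be done partition by partition
CREATE INDEX email_p_ix ON email_p (email_id) LOCAL;
```

The local index is the key piece: it lets a single partition's index segment be rebuilt or exchanged without touching the rest of the table.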
Rebuild the index? Why would anyone NOT rebuild this index??
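For reference, the rebuild itself is a one-liner (index and tablespace names here are placeholders; ONLINE rebuilds are an Enterprise Edition feature):

```sql
-- Rebuild while still allowing DML against the table
ALTER INDEX email_ix REBUILD ONLINE;

-- Or rebuild into a different tablespace to relocate and compact the segment
ALTER INDEX email_ix REBUILD ONLINE TABLESPACE email_idx_ts;
```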
Investigate the code that is causing the serious fragmentation?
Yes, YES!! My choice would be to first partition the table, creating a local index, then investigate the code, with the recommendation changing from inserting directly into the main table to creating a staging table that performs all massaging of the data first, then building the index and performing an exchange partition. If it needs saying, ensure the code updates the statistics before the exchange so that the objects involved carry solid statistics.
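The staging-and-exchange approach above could be sketched like this (all object names are hypothetical, and the partitioned table is assumed to use a local index):

```sql
-- 1. Massage the data in a standalone staging table, not the main table
INSERT /*+ APPEND */ INTO email_stg
  SELECT * FROM email_feed_ext;
COMMIT;

-- 2. Build an index on the staging table that matches the local index
CREATE INDEX email_stg_ix ON email_stg (email_id);

-- 3. Gather statistics BEFORE the exchange, so the partition
--    inherits solid stats along with the data
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMAIL_STG', cascade => TRUE);
END;
/

-- 4. Swap the staging segment into the partitioned table; this is a
--    dictionary operation, so no data is physically moved
ALTER TABLE email_p EXCHANGE PARTITION p_2012_06
  WITH TABLE email_stg INCLUDING INDEXES WITHOUT VALIDATION;
```

Because the exchange is metadata-only, the main table never sees the churn of the load, which is what causes the fragmentation in the first place.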
With a table of this size, and this type of fragmentation and waits, you would only be asking for poorer performance in the long run if you did not advise your developers and management of what they need to hear.