Rebuild vs. No Rebuild on Indexes

This is a subject that I see a lot of debate on, but as a DBA who works in mostly large data warehouse and mart environments, I am a pro-rebuild DBA.

Many of the large systems I work on are home-grown designs. This means there are design or code issues that lend themselves to poor choices, leaving high fragmentation and ridiculously over-sized indexes. I have rarely seen this in smaller OLTP environments, but show me a large warehouse table that somebody has mistakenly decided to stage data to, and I will bet you the index has the potential to be larger than the table. Yes, you heard me: larger than the object it is only a partial representation of.

Homegrown systems are not the only culprits, either. Oracle E-Business Suite is a major offender in this scenario. I love every chance I get to be an Apps DBA. There are so many challenges and situations where Oracle broke its own “Ten Commandments” of Oracle database design. This is my kind of heaven, since I get to take it apart and put it back together the right way (well, as much as Oracle will let you when we’re talking about the business suite!). If you are looking for indexes that can grow larger than the tables they represent, check out the PO (Purchase Order) indexes. Due to the way the PO module continually inserts, updates, and deletes against what can grow to be quite large tables, the indexes can quickly become fragmented.
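Before deciding an index like this needs attention, it is worth confirming the fragmentation. One way is to validate the index structure and inspect INDEX_STATS — with the caveat that VALIDATE STRUCTURE locks the index while it runs, so save it for a maintenance window. A sketch, using a hypothetical PO index name:

```sql
-- Caution: VALIDATE STRUCTURE locks the index while it runs.
-- The index name below is hypothetical.
ANALYZE INDEX po.po_headers_n1 VALIDATE STRUCTURE;

-- INDEX_STATS holds one row, for the index just analyzed.
-- A high percentage of deleted leaf rows suggests bloat worth reclaiming.
SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows / lf_rows * 100, 1) AS pct_deleted
  FROM index_stats;
```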

This subject came up after one of my fellow DBAs and I were looking at a poorly performing query with large waits on sequential reads. Given the logic in the code, we did a quick check on the table and index sizes to determine whether it was time to partition the table, and checked where the waits on the index were occurring.

select event,p1text, p1, count(*), sum(time_waited) waiter
from v$active_session_history
where session_id = 697
and event = 'db file sequential read'
group by event, p1text, p1
order by waiter desc;

EVENT                     P1TEXT      P1   COUNT(*)      WAITER
------------------------- ------- ------ ---------- -----------
db file sequential read   file#       61       3158   147953231
db file sequential read   file#       52       3031   140491700
db file sequential read   file#       53       2123    75409199
db file sequential read   file#       96        137     3999203
db file sequential read   file#      124        141     3435120
db file sequential read   file#       86        117     3224430
db file sequential read   file#       85         92     2495747
db file sequential read   file#      116         92     2009065
db file sequential read   file#       76         69     1935742

select file_name, tablespace_name from dba_data_files where file_id in (52,53,61);
FILE_NAME                                     TABLESPACE_NAME
--------------------------------------------- ------------------------------
/u14/oradata/prodbase/email_index_01.dbf      EMAIL_IDX
/u15/oradata/prodbase/email_index_02.dbf      EMAIL_IDX
/u16/oradata/prodbase/email_index_03.dbf      EMAIL_IDX

A check of the index's size, in MB:

INDEX_NAME      MB
--------------- -------------
EMAIL_IDX1      13822
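A size check like that can be done against dba_segments; here is a sketch, where the owner filter and the assumption that the segment lives under a single name are mine:

```sql
-- Hypothetical check: total size of the EMAIL_IDX1 index, in MB.
SELECT segment_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS mb
  FROM dba_segments
 WHERE segment_name = 'EMAIL_IDX1'
   AND segment_type = 'INDEX'
 GROUP BY segment_name;
```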

Ready to be partitioned? Considering Oracle recommends investigating anything 2-4GB for partitioning benefits, I’d say yes.

Rebuild the index? Why would anyone NOT rebuild this index??

Investigate the code that is causing the serious fragmentation?
Yes, YES!! My choice would be to first partition the table, creating a local index, then investigate the code, with the recommendation changing from inserting directly into the main table to creating a staging table that performs all massaging of the data first, then creating the index and performing an exchange partition. If it needs saying: ensure the code updates statistics before the exchange, so the objects involved have solid statistics.
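A rough sketch of the staging-table-plus-exchange approach described above; every table, index, and partition name here is hypothetical:

```sql
-- 1. Build a staging table matching the partitioned table's shape.
CREATE TABLE email_stage AS
SELECT * FROM email_fact WHERE 1 = 0;

-- 2. Load and massage the data here instead of in the main table
--    (load steps omitted).

-- 3. Index the staging table to match the local index definition.
CREATE INDEX email_stage_idx1 ON email_stage (email_id);

-- 4. Gather statistics BEFORE the exchange, as recommended above.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMAIL_STAGE', cascade => TRUE);

-- 5. Swap the staging segment into the target partition; with matching
--    indexes in place, INCLUDING INDEXES keeps the local index usable.
ALTER TABLE email_fact
  EXCHANGE PARTITION p_2009_11 WITH TABLE email_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```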

With a table of this size, and this level of fragmentation and waits, you would only be asking for poorer performance in the long run if you did not advise your developers and management of what they need to hear.

November 14th, 2009

Comments:

  • Don Burleson

    >> I am a pro-rebuild DBA.

    But what of the "experts" who say otherwise?

  • Without starting a religious war here, I used to be a staunch defender of the (almost) never rebuild camp. I still try to err on the side of not rebuilding, absent compelling evidence for a rebuild. But, in my mind, it comes down to understanding the application, what it’s doing to the indexes, and the implications. I wouldn’t show up at a new site and say “What? You’ve never rebuilt any of your indexes? You need to rebuild them all, immediately!”

    But, if you are having poor performance, on an otherwise reasonably straightforward query with a clean execution plan, but see an unwarranted number of buffer gets and/or physical reads, then perhaps a rebuild is in order. If then, in talking to the system architect, you discover that this particular table (for example) has a constant stream of new data pouring in, and periodically, significant non-contiguous (in terms of the index order) chunks of data are deleted, then you start to see pretty strong evidence for a case to rebuild.

    Finally, before rebuilding, it’s worth considering a coalesce, particularly if you think the rebuild won’t reduce the blevel. It’s a lot less work, and is an online operation. (Yes, rebuild can be online, but it’s a lot of work, and under heavy load, the rebuild could go on almost indefinitely!)
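To make the commenter's two alternatives concrete, the operations look like this (the index name is hypothetical):

```sql
-- Lighter-weight option: merge adjacent sparse leaf blocks in place.
-- Online by nature and needs no extra space, but does not lower BLEVEL.
ALTER INDEX email_idx1 COALESCE;

-- Heavier option: rebuild the whole index while still allowing DML.
-- Can lower BLEVEL and reclaim space, but needs room for the copy
-- and can run a long time under heavy load.
ALTER INDEX email_idx1 REBUILD ONLINE;
```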

    Ok, I’ve blathered on long enough….


  • I thought it was funny, Mark, when you realize how old this post was… 🙂

    I was getting a lot of flak at the time for having rebuilt an index that I felt justified in rebuilding, yet I can honestly say I haven’t rebuilt an index since then. It really does come down to the usage and design of the system and their impact on the index. It’s a rare necessity, but this was a situation where it was deemed necessary.

    Thank you for your input!

