DBA Kevlar

Tips, tricks, (and maybe a few rants) so more DBAs become bulletproof!

Exadata Optimization Tips

As I spend more and more time optimizing Exadata environments, I seem to be answering the same questions repeatedly when it comes to database design, parameter settings, etc.

Here are a few high-level notes to always remember:

  • Always have baselines from previous environments.  Don’t go into performance expectations blind.  Have AWR and ASH data from the previous environment and mine the AWR data for facts when others start “guessing”.
  • Look at the big picture.  Those who focus on one thing often find that it’s a combination of issues that brews the perfect storm.
  • What are your top 10 goals in moving to Exadata?  They had better tell you it wasn’t just because someone thought “it was cool”.
  • Expectations should be listed clearly: “we are expecting an XX% average improvement in our ETL loads and an XX% average improvement in our reporting.  We expect to have an XX-week post-optimization exercise to address any performance issues after the migration and know that this may involve query rewrites.”
  • Do the challenges in your database environment coincide with the features that Exadata offers?  If not, you could have a really sweet engineered system that offers your current design minimal improvement.  What that leaves you with is future enhancements, which should be geared toward utilizing smart scans, storage indexes, HCC, etc.
  • Just because a statement offloaded and returned the data quickly doesn’t mean you have unlimited PGA to then sort and aggregate that data in a report.  Cell nodes cannot help you here.
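The AWR mining mentioned above doesn’t have to be fancy.  As a sketch (the stat_name and date range are just examples, and this assumes you are licensed for the Diagnostics Pack), you can pull a workload metric per snapshot from the old environment so you’re arguing from numbers instead of memories:

```sql
-- Hypothetical example: physical read volume per AWR snapshot for the
-- last week.  Note the values are cumulative since instance startup,
-- so compare deltas between snapshots, not raw values.
SELECT sn.snap_id,
       sn.end_interval_time,
       st.value AS physical_reads
FROM   dba_hist_snapshot sn
       JOIN dba_hist_sysstat st
         ON  st.snap_id         = sn.snap_id
         AND st.dbid            = sn.dbid
         AND st.instance_number = sn.instance_number
WHERE  st.stat_name = 'physical reads'
AND    sn.end_interval_time > SYSTIMESTAMP - INTERVAL '7' DAY
ORDER  BY sn.snap_id;
```

Capture the same query on the new Exadata environment and you have an apples-to-apples baseline when the “guessing” starts.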

“My goals are not your goals” is an opportunity for any DBA who hears these words…

  • Have you identified your stakeholders in the optimization exercise?  What enhancements are important to you may not be what is important to them.
  • Discuss what change control is required.  Ensure you know what is required to move anything to production.
  • Openly listen to all involved and realize that all input is valuable and in the end, if the user isn’t happy, nobody’s happy.
  • Options found on websites and stated to be “best practices” should not be taken as gospel.  Research and verify everything.

Successful migration and optimization on Exadata starts with good planning

  • Create a top-10 list from the group’s goals.  If a goal appears out of context, discuss it, find out what is important about the goal, and see if there is another option for attaining it.
  • Create a simple SOW for the optimization exercise.
  • Clearly state what will be included and what WILL NOT.  Ensure all stakeholders have signed off on the SOW.

Dispel Exadata myths.  I was told by one client that, “Tom Kyte said at the last Oracle Open World that parallel should never be used on Exadata…”  Although I was sure they had heard him incorrectly, I wanted to see the original slides so that I could show them the point he was actually trying to relate.  The outcome of a quick tweet in search of Tom’s slides from OOW resulted in a bizarre and interesting set of emails/tweets when I was hit with the “140 char misconstrued tweet” issue.  Yes, I am now responsible for a “Tom Kyte says you should never use parallel with Exadata” myth…sigh… :)

When you are making recommendations

  • Back up everything with data.
  • If you come to me with proposed parameter changes, (the silver bullet syndrome…) please let me apologize for my lack of enthusiasm.  Even when parameter changes made a big difference in earlier versions of Oracle, it was commonly only a 10% improvement, whereas with a SQL rewrite I could get at least 80%.  You choose… :)
  • Inspect opportunities for SQL rewrite, reassuring others that it is often required as systems grow and change, not because someone wrote “bad code”.  Pointing fingers is a waste of time and unprofessional.
  • Are there opportunities to incorporate parallelism?
  • Is plan stability an issue? (statistics, dynamic sampling, and profiles)
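On that last point, a quick way to spot plan instability candidates is to look for statements that have executed under more than one plan in AWR history.  A minimal sketch (again assuming Diagnostics Pack licensing):

```sql
-- Hypothetical check: SQL statements recorded in AWR with more than
-- one plan_hash_value are the first place to look for plan instability.
SELECT sql_id,
       COUNT(DISTINCT plan_hash_value) AS plan_count
FROM   dba_hist_sqlstat
GROUP  BY sql_id
HAVING COUNT(DISTINCT plan_hash_value) > 1
ORDER  BY plan_count DESC;
```

From there, you can decide whether statistics, dynamic sampling, or a SQL profile is the right stabilizer for each offender.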

And Tyler Muth reminded me of one I forgot, even after using it with a client just today: Exachk is your FRIEND!!  It will verify that no undocumented or non-default parameter settings are present, along with performing a health check on your Exadata environment.
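Exachk does far more than this, but if you just want a quick manual peek at one of the things it flags, listing the non-default parameters yourself is a one-liner:

```sql
-- Sketch of one check Exachk performs: which initialization parameters
-- have been changed from their defaults?
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;
```

Every row that comes back should have a documented reason for existing; on Exadata especially, leftover “tuning” parameters from a previous platform are prime suspects.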

Truth be known:

  • Lots of SGA is not always your friend on Exadata.  (If a query runs in memory due to a large SGA when it would be better served by offloading to a cell node, is that a good thing?)
  • Do not count on storage indexes.  The feature, as implemented by Oracle, relies on a complicated set of algorithms.  They can be your savior or your devil.
  • More parallel is not better parallel.  Find your “sweet spot”.  Most environments have a setting of DOP, (degree of parallelism) that works best with over 90% of the environment’s SQL and delivers over 90% of the overall performance improvement.  The DOP is also often much LOWER than many would guess, (how often 4 has been my magic DOP… :))
  • Know when it’s time to implement classic warehouse/mart strategies such as star schemas, bitmap indexing, partitioning, and mviews.  The harder you make Exadata work now on these kinds of tasks, the less smartly it’ll be working as the environment grows.
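Rather than assuming smart scans and storage indexes are earning their keep, verify it.  A minimal sketch using system-level statistics (statistic names as they appear on Exadata; check them against your own v$statname):

```sql
-- Sketch: are offloading and storage indexes actually helping?
-- Compare bytes eligible for offload against bytes returned over the
-- interconnect, and see how much I/O storage indexes skipped entirely.
SELECT name, value
FROM   v$sysstat
WHERE  name IN (
  'cell physical IO bytes eligible for predicate offload',
  'cell physical IO interconnect bytes returned by smart scan',
  'cell physical IO bytes saved by storage index');
```

If the “eligible” number is large but the savings are small, you have a design conversation to start, not a parameter to flip.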

Last note: although for some it may not seem believable, if I hear one more debate about any 11g database and the optimizer_index_caching or optimizer_index_cost_adj parameter settings, I may go postal… :)

See ya next week at RMOUG Training Days 2013!



3 thoughts on “Exadata Optimization Tips”
  • Uwe Hesse says:

    Helpful collection, Kellyn, thank you!

    It is probably human nature to complain when something decreases from ‘fast as lightning’ (Storage Indexes used) to just ‘very much faster than before migration to Exadata’ (Smart Scan only), right? So better be prepared for it :-)

    Kind regards
    Uwe

  • Hey, there’s a lot of really good stuff in this post. I’m curious, was this an instant brain-dump or a note that grew over time as you added things to it?

  • dbakevlar says:

    Hey Jeremy,
    It was all sitting in my brain, but I have two new Exadata customers and find myself answering the same questions between them, even though I’m doing mostly ETL design work on one, where the other is an Exadata consolidation. It’s surprising how often I am using the same techniques and answering the same questions. As much as we present, blog and write white papers on the cool features of Exadata, the Oracle community either ends up with a serious fear or a need to implement every feature offered…even if it’s not the right answer. Coming up with the right answer vs. knee-jerk reactions is a challenge, especially when stakeholders may be leaning over IT’s shoulder and repeatedly asking them when they are going to see their money’s worth. Exadata is an engineered system, not a silver bullet, and as you already know, it’s important to figure out what is really impacting the system. I’ve seen Exadata take the blame for a number of issues, but then it was up to me to prove it innocent and offer a solution for the real culprit… :)

