Going Old School with Optimization Parameters in Oracle 19c

I had an interesting opportunity with a complex Oracle environment that exemplified the challenges of technical debt and dependency on database platforms. There are those who believe a database should only be used to hold data and should be completely platform interchangeable. As DBAs, we know this approach makes very little use of relational database features and platform strengths, and rarely, if ever, scales. I fully support using the strengths of RDBMS platform features, but what happens when we embrace what a feature does and attempt to build it better at the database layer than the RDBMS vendor did?

Danger Zone

For this database, complex hinting capabilities had been built into database objects. Depending on the statement to be executed, a hint was constructed to influence the optimizer with a preference for indexes and nested loops. This way of influencing the optimizer had begun early in their adoption of Oracle, and simple /*+ RULE */ hints took me by surprise in the SQL from the current Oracle 19c database. As the database grew, the design had stayed simple and dependent on this approach, with ETL processing done row by row.
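
To make the pattern concrete, here is a hypothetical illustration of the style of hinted SQL this design produces; the table, index, and bind names are invented, but the INDEX and USE_NL hints, plus the legacy RULE hint, mirror the kind of hints that were being constructed and stored:

-- hint text assembled from the stored hinting metadata (names are fictitious)
select /*+ INDEX(o orders_cust_ix) USE_NL(ol) */
       o.order_id, ol.line_no, ol.amount
  from orders o
  join order_lines ol
    on ol.order_id = o.order_id
 where o.customer_id = :cust_id;

-- and the legacy form still showing up in the 19c SQL
select /*+ RULE */ count(*)
  from orders
 where customer_id = :cust_id;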

The hinting strategy had matured only to roughly an Oracle 10g level, even for their current customers with multi-TB databases. No partitioning strategy had been developed, and the ETL still used a basic insert, update, and delete process with history logic built via triggers (a hypothetical example of the pattern is sketched below). While this results in painful performance around hash joins and transaction contention at the object level, when an attempt was made to refactor it to SQL Server, lock escalation caused massive challenges, including deadlocks.
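
A minimal sketch of that trigger-based history pattern, with hypothetical table and column names:

create or replace trigger orders_hist_trg
  after update or delete on orders
  for each row
begin
  -- row-by-row history capture: every change writes a copy of the old row
  insert into orders_hist (order_id, status, change_type, changed_on)
  values (:old.order_id,
          :old.status,
          case when updating then 'U' else 'D' end,
          systimestamp);
end;
/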

As the dedicated CSA and I worked to optimize the ETL load on Oracle in Azure IaaS, I noticed that the physical VM and storage changes weren't delivering the significant improvement we expected. As I dug into the code and database design, I started to document what I've summarized above and realized that the database was quite frozen in time. Even though I couldn't make changes to the code (per the customer's request), I quickly understood why we had such limited success and why I was failing miserably as I attempted to replace their original parameter settings with recommended practices for an Oracle 19c database.

As I thought this through, I had an epiphany: this database was doing everything in its power to be a 10g or earlier database, so why shouldn't I optimize it like one?

  • 16K block size with heavy index row returns
  • Heavily dependent upon nested loops, with decreased performance when hash joins occurred.
  • Statistics were locked down and cardinality was controlled via the hints stored in tables.
  • Older hinting options dating from Oracle 7 through 10g, superseding the optimizer.
  • The 19c cost-based optimizer (CBO) was still breaking through with adaptive plans and similar features that would often impact performance (see the diagnostic sketch after this list).
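
A quick diagnostic sketch for confirming the last two points, locked statistics and adaptive plans still slipping through, using the standard dictionary views; the schema name is only a placeholder:

-- tables whose optimizer statistics are locked
select owner, table_name, stattype_locked
  from dba_tab_statistics
 where stattype_locked is not null
   and owner = 'APP_OWNER';

-- cursors where the 19c optimizer still resolved an adaptive plan
select sql_id, child_number, is_resolved_adaptive_plan
  from v$sql
 where is_resolved_adaptive_plan = 'Y';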

What if I changed my mindset to tune the database like I used to in Oracle 10g?

alter system set optimizer_index_caching=80 scope=both;

alter system set optimizer_index_cost_adj=30 scope=both;

What this change did was twofold:

  1. It makes the optimizer assume that a higher percentage of index blocks are cached, which makes indexes look more attractive.
  2. It makes index access, and by extension nested loops, less expensive to the optimizer, steering it away from hash joins (a quick session-level test of both settings is sketched after this list).
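
Before changing these system-wide, a session-level test is a low-risk way to watch the plan shift. This is only a sketch; the table, column, and bind names are made up, not the customer's SQL:

alter session set optimizer_index_caching = 80;
alter session set optimizer_index_cost_adj = 30;

-- hypothetical query; with the adjusted costing, an index-driven nested loop
-- plan should now cost out lower than the hash join alternative
explain plan for
select o.order_id, o.status
  from orders o
 where o.customer_id = :cust_id;

select * from table(dbms_xplan.display);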

As the customer was relying heavily on sort_area_size and hash_area_size and using very little PGA, I changed the retained size for the sort, too:

alter system set sort_area_retained_size=<match to sort_area_size>;
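
A related sanity check, before and after the change, is to confirm which work-area model the instance is actually relying on; this is just a diagnostic sketch against v$parameter:

select name, value
  from v$parameter
 where name in ('workarea_size_policy',
                'sort_area_size',
                'sort_area_retained_size',
                'hash_area_size',
                'pga_aggregate_target');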

The PGA was sized to recommended practice for Oracle 19c, but they really just weren't using it, so I shrank it down and reallocated the memory to the SGA_TARGET, which genuinely needed it to eliminate more physical IO.
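
The reallocation itself is two dynamic parameter changes. The sizes below are placeholders for illustration, not the values used in this engagement, and sga_target can only grow as far as sga_max_size allows without a restart:

alter system set pga_aggregate_target = 8G scope=both;
alter system set sga_target = 48G scope=both;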

The overall result of all these changes was an 18% savings on the ETL processing.

Was all the work worth it? The customer workload was able to redeploy from a constrained E64-16s v3 VM down to an E16ds v4 VM for cost savings, and it ran a half hour faster than the original times.

Without being able to change the database design (introduce partitioning, redesign a few objects, and fix the code), we're limited in how much of an impact we can make with just infrastructure and parameter tweaking.

It was a nice walk down memory lane, but it reminded me that when nothing is off the table and we're able to address the database environment in a holistic manner, that's when we can make a real impact on long-term efficiency.
