Optimizer Compatibility, Short Pants and Hot Sauce

OK, so I’m all over the map (technology-wise) right now.  One day I’m working with data masking on Oracle, the next it’s SQL Server or MySQL, and the next it’s DB2.  After almost six months of this, the chaos of feeling like a fast food drive-thru with 20 lanes open at all times is starting to make sense, and my brain is starting to find efficient ways to siphon all this information into the correct “lanes”.  No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂


One of the areas I’ve been spending some time on is the optimizer and its differences in Microsoft SQL Server 2016.  I’m quite adept on the Oracle side of the house, but on the MSSQL side, the cost-based optimizer was only *formally* introduced in SQL Server 2000, and filtered statistics weren’t introduced until 2008.  While I was digging into the deep challenges of the optimizer on the Oracle side during this time, with MSSQL I spent considerable time looking at execution plans via dynamic management views (DMVs) to optimize for efficiency.  It simply wasn’t at the same depth as Oracle until subsequent releases, and it has grown tremendously in the SQL Server community since.

Compatibility Mode

As SQL Server 2016 takes hold, the community is starting to embrace an option that Oracle folks have used historically: when a new release comes out, if you’re on the receiving end of significant performance degradation, you have the choice to set the compatibility mode back to the previous version.

I know there are a ton of Oracle folks out there that just read that and cringed.

Compatibility in MSSQL is now very similar to Oracle.  Optimizer features are tied to a release version value, which corresponds to the following for each platform:

Database              Version Value
Oracle 11.2.0.4       11.2.0.x
Oracle 12.1           12.1.0.0.x
Oracle 12.1.0.2       12.1.0.2.0
MSSQL 2012            110
MSSQL 2014            120
MSSQL 2016            130


SQL Server has had this for some time, as you can see by the following table:

Product              Database Engine Version   Compatibility Level Designation   Supported Compatibility Level Values
SQL Server 2016      13                        130                               130, 120, 110, 100
SQL Database         12                        120                               130, 120, 110, 100
SQL Server 2014      12                        120                               120, 110, 100
SQL Server 2012      11                        110                               110, 100, 90
SQL Server 2008 R2   10.5                      100                               100, 90, 80
SQL Server 2008      10                        100                               100, 90, 80
SQL Server 2005      9                         90                                90, 80
SQL Server 2000      8                         80                                80

These values can be viewed in each database with a query from the corresponding command line tool.

For Oracle:

SELECT name, value, description FROM v$parameter WHERE name = 'compatible';

Now if you’re on Database 12c and multi-tenant, then you need to make sure you’re in the correct pluggable database first:

ALTER SESSION SET CONTAINER = <pdb_name>;
-- COMPATIBLE is a static parameter, so it has to go to the spfile and takes effect after a restart
ALTER SYSTEM SET COMPATIBLE = '12.1.0.0.0' SCOPE=SPFILE;
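One caveat worth flagging: COMPATIBLE in Oracle governs on-disk features and, once raised, generally can’t be lowered again.  The closer analog of SQL Server’s compatibility level for the optimizer specifically is the OPTIMIZER_FEATURES_ENABLE parameter, which can be rolled back.  A sketch:

```sql
-- Check the current optimizer feature level
SELECT name, value FROM v$parameter
 WHERE name = 'optimizer_features_enable';

-- Roll optimizer behavior (not the on-disk format) back to 11.2.0.4
ALTER SYSTEM SET optimizer_features_enable = '11.2.0.4';
```

The same parameter can also be set at the session level for targeted testing before committing the whole instance to it.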

For MSSQL:

SELECT databases.name, databases.compatibility_level FROM sys.databases;
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120;
GO
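After an upgrade it’s handy to sweep for the stragglers.  A quick query (filtering on 130 as the SQL Server 2016 default) might look like:

```sql
-- List databases whose compatibility level trails the 2016 default (130)
SELECT name, compatibility_level
  FROM sys.databases
 WHERE compatibility_level < 130
 ORDER BY name;
```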

Features

How many of us have heard, “You can call it a bug or you can call it a feature”?  Microsoft has taken a page from Oracle’s book and refers to the ability to set the database to the previous compatibility level as the Compatibility Level Guarantee.  It’s a very positive-sounding “feature,” and those who upgrade and are suddenly faced with a business meltdown, whether from a surprise impact post-upgrade or simply from a lack of testing, are indeed going to find it to be one.

So what knowledge, due to many years of experience with this kind of feature, can the Oracle side of the house offer to the MSSQL community on this?

I think anyone deep into database optimization knows that “duct taping” around a performance problem like this, by moving the compatibility back to the previous version, is fraught with long-term issues.  This fix isn’t addressing a unique query or even a few transactional processes.  Although it should be a short-term measure before you launch to production, [we hope] experience on the Oracle side has taught us that databases can exist for years at a different compatibility version than the release version.  Many DBAs have databases they’re creating workarounds and applying one-off patch fixes for, because the compatibility either can’t or won’t be raised to the release version.  This is a database-level way of holding the optimizer at the previous version.  The WHOLE database.

You’re literally saying, “OK kid [database], we know you’re growing, so we upgraded you to the latest set of pants, but now we’re going to hem and cinch them back to the previous size.”  Afterwards we ask, “Why aren’t they performing well?  After all, we did buy them new pants!”

So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?

  • No 10,000 foreign key or referential constraints for you, no, you’re back to 253.
  • Parallel update of statistic samples
  • New Cardinality Estimator, (CE)
  • Sublinear threshold for statistics updates
  • A slew of miscellaneous enhancements that I won’t list here.
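A silver lining here: if the new Cardinality Estimator is the main thing biting you, SQL Server 2016 lets you revert just the CE with a database scoped configuration, instead of cinching the whole database back down a compatibility level:

```sql
-- Stay at compatibility level 130, but fall back to the pre-2014 CE only
ALTER DATABASE SCOPED CONFIGURATION
  SET LEGACY_CARDINALITY_ESTIMATION = ON;
```

That keeps the rest of the 130-level features in play while you work the CE regressions query by query.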

Statistics Collection

Now there is a change I don’t like, but I do prefer how Microsoft has addressed it in the architecture.  Trace flag 2371 controls whether the automatic statistics update threshold is a fixed change of roughly 20% of row count or a dynamic, sublinear threshold that shrinks as tables grow.  The dynamic behavior is now on by default under MSSQL 2016 compatibility level 130; with the flag off at lower levels, large tables have to hit the full 20% change before object-level statistics are automatically updated.  There are a number of ways to control this in Oracle, but it’s getting more difficult with dynamic sampling enhancements that put the power of statistics internal to Oracle and less in the hands of the database administrator.  Locking down stats collection requires about six parameter changes in Oracle, and as a DBA who’s attempted it, it’s easier said than done.  There were still ways Oracle was able to override my instructions at times.
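For databases still sitting below compatibility level 130, the dynamic threshold can be switched on instance-wide with the trace flag itself:

```sql
-- Enable the sublinear auto-update-statistics threshold globally (-1)
DBCC TRACEON (2371, -1);
```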

Optimizer Changes and Hot Fixes

There is also a flag to apply hotfixes, which I think is a solid feature in MSSQL that Oracle could benefit from (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating the value for it).  Trace flag 4199 granted the DBA the power to enable new optimizer hotfixes, but, just like in Oracle, with the introduction of SQL Server 2016 this is now controlled by the compatibility mode.  I’m sorry, MSSQL DBAs; this looks like one of those features that (in my opinion) I wish had infected cross-platform in the other direction.
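That said, SQL Server 2016 does still leave a database-level switch for the old 4199 behavior via scoped configuration:

```sql
-- Enable optimizer hotfixes for this database (the scoped equivalent of TF 4199)
ALTER DATABASE SCOPED CONFIGURATION
  SET QUERY_OPTIMIZER_HOTFIXES = ON;
```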

As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact Oracle DBAs have experienced over the many releases that optimizer compatibility control has been part of our database world.  We have databases living in the past.  Databases that are continually growing but can’t take advantage of the “new clothes” they’ve been offered.  Fixes we can’t use because we’d need to update the compatibility to do so, and the pain of doing that is too risky.  Nothing like being a tailor who can only hem and cinch.  As the tailors responsible for the future of our charges, there is a point where we need to make our voices heard, to ensure we aren’t complacent bystanders offering stability at the cost of watching the world change around us.

November 14th, 2016 by
