I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present: a SQL Server release for Linux.
I work for a company that supports both Oracle and SQL Server, so I wanted to know how *real* this release was. I first wanted to test it out on a new build, and since Microsoft recommends, and links to, an Ubuntu install, I created a new VM and started from there.
A couple of packages were missing until I updated the repository to pull from universe by adding repository locations to the sources.list file:
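As a sketch of what those additions look like (the release codename xenial here is an assumption; substitute the codename for your Ubuntu version), the sources.list entries are along these lines:

```
deb http://archive.ubuntu.com/ubuntu xenial universe
deb http://archive.ubuntu.com/ubuntu xenial-updates universe
```

Follow that with an apt-get update so the new package lists are pulled in.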
There is also a carriage return at the end of the MSSQL repository entry when it’s added to the sources.list file. Remove it before you save.
Once you do this, and if you’ve chosen to share your network connection with your Mac, you should be able to install successfully by running the commands found on Microsoft’s install page.
The second install I did was on a VM using CentOS 6.7 that was pre-discovered as a source for one of my Delphix engines. The installation failed upon running it, which you can see here:
Even attempting to work around this wasn’t successful: the older openssl wasn’t going to work with the new SQL Server installation, so I decided to simply upgrade to CentOS 7.
The actual process of upgrading is pretty easy, but some of the instructions out there are incorrect, so here are the proper steps. First, create a repo file under /etc/yum.repos.d/ with the following contents:
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
Save this file and then run the following:
yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant
You may see that one package states it won’t install because newer versions are already available. That’s fine; as long as the packages you have are at least that new, you’re fine. Now run the pre-upgrade check (the preupg command).
The final log output may not be written, either. If you can verify the runs outside of this and it says the process completed successfully, know that the pre-upgrade was successful as a whole.
Once this is done, import the GPG Key:
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
After the key is imported, then you can start the upgrade:
/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64
Once done, then you’ll need to reboot before you run your installation of SQL Server:
Once the VM has cycled, you can run the installation using the Red Hat instructions as root (my delphix user doesn’t have the rights, and I decided to have MSSQL installed under root for this first test run):
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo
Now run the install:
sudo yum install -y mssql-server
Once it’s completed, it’s time to set up your MSSQL admin password by running the setup utility, which ships under /opt/mssql/bin (the exact name has varied between releases, so check Microsoft’s documentation for yours):
One more reboot and you’re done!
You should then see your SQL Server service running with the following command:
systemctl status mssql-server
You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.
OK, you linux fans, go MSSQL! 🙂
OK, so I’m all over the map, (technology wise) right now. One day I’m working with data masking on Oracle, the next it’s SQL Server or MySQL, and the next it’s DB2. After almost six months of this, the chaos of feeling like a fast food drive-thru with 20 lanes open at all times is starting to make sense, and my brain is starting to find efficient ways to siphon all this information into the correct “lanes”. No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂
One of the areas that I’ve been spending some time on is the optimizer and its differences in Microsoft SQL Server 2016. I’m quite adept on the Oracle side of the house, but on the MSSQL side, the cost-based optimizer was only formally introduced in SQL Server 2000, and filtered statistics weren’t even introduced until 2008. While I was digging into the deep challenges of the optimizer during this time on the Oracle side, with MSSQL I spent considerable time looking at execution plans via dynamic management views, (DMVs) to optimize for efficiency. It simply wasn’t at the same depth as Oracle until subsequent releases, and it has grown tremendously in the SQL Server community.
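For anyone newer to the DMV approach, a minimal example of the kind of query I mean; the TOP count and ordering here are just illustrative choices:

```sql
-- Top 5 cached statements by total CPU (worker) time,
-- joining query stats to the statement text
SELECT TOP 5
       qs.total_worker_time,
       qs.execution_count,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```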
As SQL Server 2016 takes hold, the community is starting to embrace an option that Oracle folks have used historically: when a new release comes out, if you’re on the receiving end of significant performance degradation, you have the choice to set the compatibility mode back to the previous version.
I know there are a ton of Oracle folks out there that just read that and cringed.
Compatibility in MSSQL is now very similar to Oracle’s. Oracle allocates optimizer features by a release version value, which corresponds to the following:
|Oracle||12c Release 2||12.2.0.1.0|
SQL Server has had this for some time, as you can see by the following table:
| Product | Database Engine Version | Compatibility Level Designation | Supported Compatibility Level Values |
|---|---|---|---|
| SQL Server 2016 | 13 | 130 | 130, 120, 110, 100 |
| SQL Database | 12 | 120 | 130, 120, 110, 100 |
| SQL Server 2014 | 12 | 120 | 120, 110, 100 |
| SQL Server 2012 | 11 | 110 | 110, 100, 90 |
| SQL Server 2008 R2 | 10.5 | 100 | 100, 90, 80 |
| SQL Server 2008 | 10 | 100 | 100, 90, 80 |
| SQL Server 2005 | 9 | 90 | 90, 80 |
| SQL Server 2000 | 8 | 80 | 80 |
These values can be viewed in each database using a query in the corresponding command-line tool.
SELECT name, value, description from v$parameter where name='compatible';
Now if you’re on database 12c and multi-tenant, then you need to ensure you’re in the correct database first:
ALTER SESSION SET CONTAINER = <pdb_name>;
ALTER SYSTEM SET COMPATIBLE = '12.2.0' SCOPE=SPFILE;
SELECT name, compatibility_level FROM sys.databases;
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120;
GO
How many of us have heard, “You can call it a bug or you can call it a feature”? Microsoft has taken a page from Oracle’s book and refers to the need to set the database to the previous compatibility level as the Compatibility Level Guarantee. It’s a very positive-sounding “feature,” and those who upgrade and are suddenly faced with a business meltdown, whether from a surprise impact once they do upgrade or simply from a lack of testing, are going to find it to be one.
So what knowledge, due to many years of experience with this kind of feature, can the Oracle side of the house offer to the MSSQL community on this?
I think anyone deep into database optimization knows that “duct taping” around a performance problem like this, by moving the compatibility back to the previous version, is fraught with long-term issues. This fix doesn’t address a unique query or even a few transactional processes. Although it should only be a short-term measure before you launch to production, experience on the Oracle side has taught us that you end up with databases that exist for years at a different compatibility version than the release version. Many DBAs have databases they are creating workarounds and applying one-off patch fixes for, because the compatibility either can’t or won’t be raised to the release version. This is a database-level way of holding the optimizer at the previous version. The WHOLE database.
You’re literally saying, “OK kid, [database], we know you’re growing, so we upgraded you to latest set of pants, but now we’re going to hem and cinch them back to the previous size.” Afterwards we say, “Why aren’t they performing well? After all, we did buy them new pants!”
So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?
Now there is a change I don’t like, but I do prefer how Microsoft has addressed it in the architecture. Trace flag 2371 controls, via on or off, whether statistics are automatically updated at roughly a 20% change in row count values. This is now on by default with MSSQL 2016 compatibility level 130. If it’s set to off, statistics at the object level aren’t automatically updated. There are a number of ways to do this in Oracle, but it’s getting more difficult with dynamic sampling enhancements that put the power of statistics internal to Oracle and less in the hands of the database administrator. It requires about six parameter changes in Oracle, and as a DBA who’s attempted to lock down stats collection, it’s easier said than done. There were still ways that Oracle was able to override my instructions at times.
There is also a flag to apply hot fixes, which I think is a solid feature in MSSQL that Oracle could benefit from, (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating the value for it…) Trace flag 4199 granted the DBA the power to enable new optimizer fixes, but with the introduction of SQL Server 2016, this is now controlled by the compatibility mode, just like Oracle. I’m sorry, MSSQL DBAs; in my opinion, this is one of those Oracle “features” I wish had traveled cross-platform in the other direction.
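As a hedged sketch of the pre-2016 usage of the two trace flags discussed above (the -1 argument makes them global; try this on a test instance only):

```sql
-- Turn on the dynamic statistics-update threshold (trace flag 2371) globally
DBCC TRACEON (2371, -1);

-- Turn on optimizer hotfixes (trace flag 4199) globally;
-- in SQL Server 2016 this behavior is governed by the compatibility level instead
DBCC TRACEON (4199, -1);

-- Check which global trace flags are currently enabled
DBCC TRACESTATUS (-1);
```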
As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact that Oracle DBAs have experienced over the many releases in which optimizer compatibility control has been part of our database world. We have databases living in the past. Databases that are continually growing, but can’t take advantage of the “new clothes” they’ve been offered. Fixes we can’t take advantage of, because we’d need to update the compatibility to do so and the pain of doing so is too risky. Nothing like being a tailor who can only hem and cinch. As the tailors responsible for the future of our charges, there is a point where we need to ensure our voices are heard, so that we are not complacent bystanders, offering stability at the cost of watching the world change around us.
It’s Friday and the last day of my first Summit conference. I’ve wanted to attend this conference for quite a few years, but with my focus on the Oracle side and its dates so close to Oracle Open World, I couldn’t justify it before joining Delphix. When I was offered a spot to join the Delphix team and attend, I jumped at the opportunity. Look at this booth and these impressively talented Delphix employees- how could you resist??
The Summit event has around 5,000 attendees and is held in Seattle each year around the last week of October, at the Washington State Convention Center, which is a great venue for a conference. I’d attended KSCOPE there in 2014 and loved it, so I was looking forward to enjoying this great location once more.
As I’m just re-engaging the MSSQL side of my brain, I have some catching up to do, and there was no better place to do it than at Summit. I was able to dig in deep, attend a few sessions around booth duty, and meet folks I’ve rarely had opportunities to interact with outside of Twitter before this event.
Brent Ozar came by the Delphix booth to say “Hi” and absolutely made my day!
There is always that challenge of balancing what you need to know against what you want to know- this event was no different. I had a list of features and products that I really needed to come up to speed on, but with cool technology like PolyBase, analytics with R, and performance talks, it could be a bit distracting from my main purpose here. I’ve always been a performance freak, and it still pulls me away from the database knowledge that is important to day-to-day tasks. I know this is why I find such value in Delphix: it frees the DBA from the more mundane tasks so we can spend our time on the more interesting and challenging areas of database technology.
What I did learn was that many of the companies realizing the importance of virtualization on-premises and in the cloud aren’t coming close to Delphix when it comes to having the basics down first. You can’t just talk the talk, you have to walk the walk, too. I’m proud of Delphix and what we’ve accomplished- that we don’t say we can do something we can’t, and that we continue on the path to be Azure compliant for 2017. Azure was everywhere this year at Summit and will continue to be a major push for Microsoft.
Another important point of recognition at Summit was the percentage of women attendees and the Women in Technology (WIT) program at Summit, along with everyone’s support of that program.
The WIT luncheon and happy hour were on Thursday. In support of the day, over 100 men showed up in kilts. It may seem like a small gesture, but it shows in how the men and women interact at this conference and in everyone’s contribution to the event. There is a lot more collaboration, and the community is, on average, much more interactive and involved than I’ve experienced at any Oracle event. It’s not to say that Oracle is doing it wrong, it’s just to say that the SQL Server community is much farther ahead. They refer to their SQL Family, and they take it seriously in a good way.
Thanks to all of this, I was given the opportunity of a PassTV interview and met a number of incredible experts in the MSSQL community whom I’d only previously known on social media. I appreciate being so warmly embraced.
I want to thank my great cohorts from Delphix who manned the booth with me at Summit. We had incredible crowds that kept us so busy answering questions, doing demos and talking about how virtualized databases can increase productivity and revenue. Sacha, Venkat, Jonathan, Dante and Jenny and Richie- you all ROCKED!
Thanks to everyone for making my first SQL Pass Summit conference AWESOME!!
I’ll be attending my very first Pass Summit next week and I’m really psyched! Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix wear.)
It’s an amazing event, and for those of you who are my Oracle peeps and wondering what Summit is, think Oracle Open World for the Microsoft SQL Server expert folks.
I was a strong proponent of immersing in different database and technology platforms early on. You never know when the knowledge you gain in an area that you never thought would be useful ends up saving the day.
Yesterday this philosophy came into play again. A couple of folks were having some challenges with a testing scenario in a new MSSQL environment and asked other Delphix experts for assistance via Slack. I am known for multi-tasking, so I thought, while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work. As soon as I logged into the web session, the guys welcomed me and said, “Maybe Kellyn knows what’s causing this error…”
Me- “Whoops, guess I gotta pay attention…”
SQL Server, unlike Oracle, has always been multi-tenant for the broader database world. This translates to a historical architecture with a server-level login AND a user-database-level username. The login ID (login name) at the server level is linked to a user ID (user name) in each user database, which functions much like a schema. Oracle is starting to migrate to a similar architecture with Database version 12c, moving away from schemas within a database and toward multi-tenant, where the pluggable database, (PDB) serves that role.
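To make the two-level model concrete, a minimal sketch; the login, database, and user names here are hypothetical:

```sql
-- Server-level principal (the login)
CREATE LOGIN app_login WITH PASSWORD = 'Str0ng!Passw0rd1';
GO

-- Database-level principal (the user) mapped to that login
USE SalesDB
GO
CREATE USER app_user FOR LOGIN app_login;
GO
```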
I didn’t recognize the initial error that arose from the clone process, but that’s not uncommon, as error messages can change with versions and with proprietary code. I have also worked very little, if at all, on MSSQL 2014. When the guys clicked on the target user database in Management Studio and were told they didn’t have access, the natural step was to look at the login and user mapping, which showed the login wasn’t mapped to a username in this particular user database. What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed and failed.
This is where “old school” MSSQL knowledge came into play. Most of my database knowledge for SQL Server is from versions 6.5 through 2008. Along with a lot of recovery and migrations, I also performed a process very similar to the option in Oracle to plug or unplug a PDB, referred to in MSSQL terminology as “attach and detach” of a database. You could then easily move the database to another SQL Server, but you very often would end up with what are called “orphaned users.” This is where the login IDs weren’t connected to the user names in the database and needed to be resynchronized correctly. To perform this task, you could dynamically create a script to pull the logins if they didn’t already exist, run it against the “target” SQL Server, and then create one that ran a procedure to synchronize the logins and user names.
USE <user_dbname>
GO
EXEC sp_change_users_login 'Update_One', '<loginname>', '<username>'
GO
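Before resynchronizing, you can also list which users in the database are orphaned; this uses the same procedure in its ‘Report’ mode (note that sp_change_users_login is deprecated in newer releases in favor of ALTER USER … WITH LOGIN):

```sql
USE <user_dbname>
GO
-- Lists users in this database whose SIDs no longer match a server login
EXEC sp_change_users_login 'Report'
GO
```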
For the problem experienced above, it was simply the delphix user that wasn’t linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.
This wasn’t the long-term solution, but it pointed to where the break was in the clone design, which can now be addressed. It also shows that experience, no matter how benign it may seem, can come in handy later in our careers.
I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.
See you next week in Seattle!
This is a living document that I will continue to update and will add new database platforms to as I go along in my career. I spend a lot of time translating database platforms I’ve worked in for all tiers of the technical business. It just seems natural that we might need a location on the web where that information is kept.
I’ll add some diagrams at a later date, but we’ll start with a simple spreadsheet of common terms and questions and how each of the terms or tasks in Oracle, (the current #1 RDBMS) translates in other platforms. If there isn’t a similar term or task, I’ve marked not applicable, (N/A). If you have any additions or suggestions on how to improve the list, please feel free to comment or email me at DBAKevlar at Gmail.
As I’m still working on the best viewable format, keep in mind, you can export the table into a number of formats or even print it!
Oracle | MSSQL | MySQL | SAP Hana | Cassandra
Instance: Start of the SGA and one or more background processes
Instance: A Windows Service and following dbs: Master, model, tempdb, msdb, resource
MySQL Instance consists of background processes
Hana Instance, Multi-tenant does exist
Storage Engines: InnoDB, MyISAM, NDB (cluster), Maria, Falcon, etc.
SGA, (System Global Area) memory allocated to Oracle
Conventional memory mgmt, AWE, pages
Hana is an in-memory database
Java Heap Memory
Query Cache, Key Cache, (storage engine can determine some of this)
Memory Pool, (allocated to in-memory)
sys.tables WHERE name = 'TransactionHistoryArchive'
Persistence Layer, (some of this)
Partition Index Summary Cache
SQL Server Windows Service, MSSQL executable process
Connection and Session Manager
SQLCmd and Powershell
MySQL Workbench and mysql cli
SAP Hana Studio
TNS, (Transparent Network Substrate), Bequeath, EZConnect
ADO.net, OLEDB, ODBC, etc.
MySQL connectors (ODBC, JDBC, .NET, etc)
Performance Management Views, (i.e. V$ views)
Dynamic Management Views/functions
HANA_SQL* and HOST_* views
DB Management Memory Pool
Persistence layer for data store
Index, (one clustered index per object)
Index Management Layer
Index for partition key
Partition Key for row level
Partition key for row level
Partition key for row level
Partition Key as column level to store relevant rows
Compaction of SSTABLES
sql_trace = on
SQL Profiler and for version 2012+ Extended Events
C and ANSI SQL
Java and CQL
Database and DBOwner
Logins and Users
database backup/storage snapshot
incremental/incremental with redo only
part of transaction log, transactional commit
part of transaction log
Part of Transaction Logging and session manager
Temp Database per SQL Server
Data is stored in order, so no temp database is needed
Network /Disk Heartbeat
Part of Connection Manager
Only one heartbeat (network heartbeat in messages)
Master / Slave Nodes
Node Clustering, managed by Calculation Engine
Peer Nodes (no master); every node can act as coordinator
Shared Storage, (Voting Disk, too)
Local Storage to each node
Tokens: Data Stripes using token range at node level
Replication Factor: Data Mirrors across nodes using RF=ONE, ALL, N..
init.ora or spfile.ora
cassandra.yaml in /softwarelocation/conf/
redo log sizes & location: v$log, v$redolog
logging module manages this
redo log flush
backup transaction logs
transaction log volumes
Log located at /cassandrasoftware/clustername/nodename/logs/system.log
crsctl / srvctl
Rebalance of Data in diskgroups
nodetool repair or nodetool repair -st -et
private network or private ips
no private ip's
Microsoft Failover cluster, (MSFC) and Voting Disk
MySQL clusters, sharding
scn_to_timestamp or rowscn functions
Log Sequence Number, (LSN) to timestamp
alter session set schema
alter database set user
ALTER SYSTEM ALTER SESSION
v$asm_operation, rebalance operation
nodetool netstats or nodetool tpstats
cssd.log (disk heartbeat and network heartbeat)
size of table: bytes in dba_tables
calculate space used(total), bytes output from nodetool cfstats keyspace.tablename
number of rows in table: dba_tables.num_rows
calculate number of keys(estimate)
Size of Tablespace:sum(bytes) from dba_segments where tablespace_name=
Size of each schema-
SELECT table_schema "Data Base Name",
       SUM(data_length + index_length) / 1024 / 1024 "Data Base Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
No Undo tablespace in MSSQL
N/A dependent on storage engine, too.
Written to redo
gc_grace_seconds for holding tombstones
result cache , keep pools
memory tables, heap tables
Shutdown instance, srvctl stop instance
NET STOP MSSQLSERVER, NET STOP MSSQL$instancename
sudo /usr/local/mysql/bin/mysqladmin shutdown
rman>backup tablespace tag 'today'
backup database dbname filegroup = 'filegroup' to disk = 'path\name'
After read lock, backup table files
on each node, backup keyspace -t today
rman> backup incremental tag 'incr'
Backups database 'db' with differential
enable incremental backups
enable incremental backups
rebuild index or reorg table
rebuild index keyspace tablename idx1, idx2
rman> backup database
backup database, snapshot
logical backup at tablename
rman> recover block 57;
mysqlbinlog- use logs to create statement sql to recover transactions to PIT.
scrub [keyspace] [tablename] [-s|--skip-corrupted]
rman> recover datafile
RESTORE DATABASE adb FILEGROUP='filegroup'
Depends on storage engine- mysqldump, mysqlbackup mysqlndb, etc.
RECOVER DATA USING FILE ('
N/A SSTABLELOADER from snapshot
restricted mode: alter database restricted mode
set global read_only=1
Boot OS into single user mode
cost based statistics, (CBO or optimizer)
cost based statistics
SQL and MDX, (Multidimensional Expressions)
This last week I presented at Great Lakes Oracle Conference, (GLOC16), and the discussion of monitoring non-Oracle databases came up while we were on the topic of management packs, how to monitor their usage, and which ones were required to monitor non-Oracle databases. I didn’t realize how confusing the topic could be until I received an email while on a layover in Chicago relaying what one attendee had taken away from it. I was even more alarmed when I read the email again, planning to blog about it today after a full night’s sleep!
You’ll often hear me refer to EM13c as the single pane of glass when discussing hybrid cloud management, or performance management concerning the AWR Warehouse and such, but it can also make multi-platform environments easier to manage, too.
The difference between managing many Oracle features with EM13c and non-Oracle database platforms is that we need to shift the discussion from Management Packs to Plug-ins. I hadn’t really thought too much of it when I’d been asked what management packs were needed to manage Microsoft SQL Server, Sybase or DB2. My brain was solely focused on the topic of management packs and I told the audience how they could verify management packs on any page in EM, (while on the page, click on Settings, Management Packs, Packs Used for This Page) for any database they were monitoring:
As easily demonstrated in the image above, there aren’t any management packs utilized to access information about the MSSQL_2014 Microsoft SQL Server, and you can quickly see each user database’s status, CPU usage, and IO reads and writes, along with errors, and even control the agent from this useful EM dashboard.
I can do the same for a DB2_unit6024 database environment:
You’ll note that the DB2 database dashboard is different from the SQL Server one, displaying the pertinent data for that database platform.
Now, you may be saying, “Kellyn’s right, I don’t need any management packs” (which is true), but then you click on Settings, Extensibility, Plug-ins, and you’ll locate the database plug-ins used to add each of these databases to Enterprise Manager.
These plug-ins are often offered by third parties and must be licensed through them, and there are often charges from these providers. I should have been more in tune with the true discussion instead of stuck on the topic of management packs.
Luckily for me, there is a small amount of explanation on the very bottom of the management pack documentation that should clear up any questions. Hope this offers some insight and thank you to everyone who came to my sessions at GLOC!