Category: SQLServer

November 17th, 2016 by dbakevlar

I thought I'd do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present: a SQL Server release for Linux.


I work for a company that supports both Oracle and SQL Server, so I wanted to know how *real* this release was.  I first wanted to test it out on a fresh build, and since Microsoft recommends Ubuntu and links to an Ubuntu install guide, I created a new VM and started from there.

Ubuntu Repository Challenge

A couple of packages were missing until the repository was updated to pull from universe, by adding the repository locations to the sources.list file.

The MSSQL repository entry also picks up a carriage return at the end of the line when it's added to the sources.list file.  Remove it before you save.
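A quick way to strip that carriage return is with sed. The sketch below works on a demo copy rather than the live file, and the deb line shown is illustrative; against the real file you'd run the same sed on /etc/apt/sources.list with sudo:

```shell
# Create a demo sources.list entry that ends in a carriage return,
# then strip the \r with sed (GNU sed, as on Ubuntu).
printf 'deb [arch=amd64] https://packages.microsoft.com/ubuntu/16.04/mssql-server xenial main\r\n' > sources.list.demo
sed -i 's/\r$//' sources.list.demo
# On the real file: sudo sed -i 's/\r$//' /etc/apt/sources.list
```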

Once you do this, and if you've chosen to share your network connection with your Mac, you should be able to install successfully by running the commands found on Microsoft's install page.

CentOS And MSSQL

The second install I did was on a VM running CentOS 6.7 that had already been discovered as a source for one of my Delphix engines.  The installation failed as soon as I ran it.

Even attempting to work around this wasn't successful; the challenge was that the older openssl wasn't going to work with the new SQL Server installation, so I decided to simply upgrade to CentOS 7.

CentOS 7

The actual process of upgrading is pretty easy, but some of the instructions out there are incorrect, so here are the proper steps:

  1.  First, take a backup, (snapshot) of your image before you begin.
  2. Prep yum for the upgrade by creating the following file: /etc/yum.repos.d/upgrade.repo, with these contents:

[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
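As an alternative to editing by hand, the same file can be written in one shot with a heredoc. This is a sketch: the target path is a variable so you can try it in a scratch directory first, then point it at /etc/yum.repos.d/upgrade.repo as root:

```shell
# Write the upgrade repo definition in one step. For the real upgrade,
# set REPO_FILE=/etc/yum.repos.d/upgrade.repo and run as root.
REPO_FILE="${REPO_FILE:-./upgrade.repo}"
cat > "$REPO_FILE" <<'EOF'
[upgrade]
name=upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
EOF
```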

Save this file and then run the following:

yum install preupgrade-assistant-contents redhat-upgrade-tool preupgrade-assistant

You may see a message that one of the packages won't install because newer versions are already available; that's fine, as long as the newer packages are in place.  Now run the pre-upgrade assistant:

preupg

The final log output may not get written, either.  If you can verify the checks outside of the log and preupg reports that it completed successfully, you can take it that the pre-upgrade was successful as a whole.

Once this is done, import the GPG Key:

rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

After the key is imported, then you can start the upgrade:

/usr/bin/redhat-upgrade-tool-cli --force --network 7 --instrepo=http://mirror.centos.org/centos/7/os/x86_64

Once that's done, you'll need to reboot before you run your installation of SQL Server:

reboot

MSSQL Install

Once the VM has cycled, you can run the installation from the Red Hat repository as root, (my delphix user doesn't have the rights, and I decided to have MSSQL installed under root for this first test run):

su
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo

Now run the install:

sudo yum install -y mssql-server

Once it's completed, it's time to set up your MSSQL admin and password:

sudo /opt/mssql/bin/sqlservr-setup

One more reboot and you’re done!

reboot

You should then see your SQL Server service running when you check it with the following command:

systemctl status mssql-server

You’re ready to log in and create your database, which I’ll do in a second post on this fun topic.

OK, you linux fans, go MSSQL! 🙂

 

Posted in Delphix, SQLServer

November 14th, 2016 by dbakevlar

OK, so I'm all over the map, (technology wise) right now.  One day I'm working with data masking on Oracle, the next it's SQL Server or MySQL, and the next it's DB2.  After almost six months of this, the chaos of feeling like a fast food drive-thru with 20 lanes open at all times is starting to make sense, and my brain is starting to find efficient ways to siphon all this information into the correct "lanes".  No longer is the lane that asked for a hamburger getting fries with hot sauce… 🙂


One of the areas I've been spending some time on is the optimizer and its differences in Microsoft SQL Server 2016.  I'm quite adept on the Oracle side of the house, but in MSSQL, the cost-based optimizer was *formally* introduced in SQL Server 2000, and filtered statistics weren't even introduced until 2008.  While I was digging into the deep challenges of the optimizer on the Oracle side during this time, with MSSQL I spent considerable time looking at execution plans via dynamic management views, (DMVs) to optimize for efficiency.  It simply wasn't at the same depth as Oracle until subsequent releases, and it has grown tremendously in the SQL Server community since.

Compatibility Mode

As SQL Server 2016 takes hold, the community is starting to embrace an option that Oracle folks have used historically: when a new release comes out and you're on the receiving end of significant performance degradation, you have the choice to set the compatibility mode back to the previous version.

I know there are a ton of Oracle folks out there that just read that and cringed.

Compatibility in MSSQL is now very similar to Oracle: optimizer features are allocated by a release version value, which for each platform corresponds to the following:

Database               Version Value
Oracle 11.2.0.4        11.2.0.x
Oracle 12.1            12.1.0.0.x
Oracle 12.1.0.2        12.1.0.2.0
MSSQL 2012             110
MSSQL 2014             120
MSSQL 2016             130

 

SQL Server has had this for some time, as you can see by the following table:

Product              Database Engine Version   Compatibility Level Designation   Supported Compatibility Level Values
SQL Server 2016      13                        130                               130, 120, 110, 100
SQL Database         12                        120                               130, 120, 110, 100
SQL Server 2014      12                        120                               120, 110, 100
SQL Server 2012      11                        110                               110, 100, 90
SQL Server 2008 R2   10.5                      100                               100, 90, 80
SQL Server 2008      10                        100                               100, 90, 80
SQL Server 2005      9                         90                                90, 80
SQL Server 2000      8                         80                                80

These values can be viewed in each database with queries in the corresponding command line tool.

For Oracle:

SELECT name, value, description from v$parameter where name='compatible';

Now, if you're on database 12c with multi-tenant, then you need to ensure you're in the correct container first:

ALTER SESSION SET CONTAINER = <pdb_name>;
ALTER SYSTEM SET COMPATIBLE = '12.1.0.0.0' SCOPE=SPFILE;  -- static parameter, takes effect after a restart

For MSSQL:

SELECT name, compatibility_level FROM sys.databases
GO
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 120
GO
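From the command line, the same check and change can be scripted for sqlcmd. A sketch: the database name MyDatabase and the connection details are placeholders, and the sqlcmd call is commented out since it needs a live instance:

```shell
# Build a T-SQL batch that lists each database's compatibility level and
# pins one database (placeholder name: MyDatabase) back to the 2014 level.
cat > compat_check.sql <<'EOF'
SELECT name, compatibility_level FROM sys.databases
GO
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 120
GO
EOF
# Then run it against the instance (placeholder connection details):
# sqlcmd -S localhost -U sa -i compat_check.sql
```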

Features

How many of us have heard, "You can call it a bug or you can call it a feature"?  Microsoft has taken a page from Oracle's book and refers to the ability to set the database to the previous compatibility level as the Compatibility Level Guarantee.  It's a very positive-sounding "feature", and those who upgrade and are suddenly faced with a business meltdown, whether from a surprise impact or simply from a lack of testing, will indeed find it to be one.

So what knowledge, due to many years of experience with this kind of feature, can the Oracle side of the house offer to the MSSQL community on this?

I think anyone deep into database optimization knows that "duct taping" around a performance problem like this, by moving the compatibility back to the previous version, is wrought with long-term issues.  This isn't a unique query or even a few transactional processes being addressed with this fix.  Although it should only be a short-term fix before you launch to production, [we hope] experience on the Oracle side has taught us that databases can exist for years at a different compatibility version than the release version.  Many DBAs have databases they're creating workarounds and applying one-off patch fixes for because the compatibility either can't or won't be raised to the release version.  This is a database-level way of holding the optimizer at the previous version.  The WHOLE database.

You’re literally saying, “OK kid, [database], we know you’re growing, so we upgraded you to latest set of pants, but now we’re going to hem and cinch them back to the previous size.”  Afterwards we say, “Why aren’t they performing well? After all, we did buy them new pants!”

So by “cinching” the database compatibility mode back down, what are we missing in SQL Server 2016?

  • No 10,000 foreign key or referential constraints for you, no, you’re back to 253.
  • Parallel update of statistic samples
  • New Cardinality Estimator, (CE)
  • Sublinear threshold for statistics updates
  • A slew of miscellaneous enhancements that I won’t list here.

Statistics Collection

Now here's a change I don't like, but I do prefer how Microsoft has addressed it in the architecture.  Trace flag 2371 controls, on or off, whether statistics are automatically updated at around a 20% change in row count values.  This is now on by default with MSSQL 2016 compatibility 130; if it's set to off, statistics at the object level aren't automatically updated.  There are a number of ways to do the equivalent in Oracle, but it's getting more difficult with the dynamic sampling enhancements that put the power of statistics internal to Oracle and less in the hands of the database administrator.  Locking down stats collection requires about six parameter changes in Oracle, and as a DBA who's attempted it, it's easier said than done: there were still ways Oracle was able to override my instructions at times.

Optimizer Changes and Hot Fixes

There is also a flag to apply hot fixes, which I think is a solid feature in MSSQL that Oracle could benefit from, (instead of us DBAs scrambling to find out what feature was implemented, locating the parameter and updating the value for it…).  Trace flag 4199 granted the DBA the power to enable any new optimizer features, but, just like Oracle, with the introduction of SQL Server 2016 this is now controlled with the compatibility mode.  I'm sorry, MSSQL DBAs; this is one of those Oracle features that, (in my opinion) I wish had infected cross-platform in the other direction.

As stated, the Compatibility Level Guarantee sounds pretty sweet, but the bigger challenge is the impact that Oracle DBAs have experienced for multiple releases that optimizer compatibility control has been part of our database world.  We have databases living in the past.  Databases that are continually growing, but can’t take advantage of the “new clothes” they’ve been offered.  Fixes that we can’t take advantage of because we’d need to update the compatibility to do so and the pain of doing so is too risky.  Nothing like being a tailor that can only hem and cinch.  As the tailors responsible for the future of our charges, there is a point where we need to ensure our voices are heard, to ensure that we are not one of the complacent bystanders, offering stability at the cost of watching the world change around us.

Posted in Oracle, SQLServer

October 28th, 2016 by dbakevlar

It's Friday and the last day of my first Summit conference.  I've wanted to attend this conference for quite a few years, but with my focus on the Oracle side and a scheduling date so close to Oracle Open World, I couldn't justify it before joining Delphix.  When I was offered a spot to join the Delphix team and attend, I jumped at the opportunity.  Look at this booth and these impressively talented Delphix employees: how could you resist??


Pass Summit 2016

The Summit event draws around 5,000 attendees and is held in Seattle each year around the last week of October, in the Washington State Convention Center, which is a great venue for a conference.  I'd attended KSCOPE there in 2014 and loved it, so I was looking forward to enjoying this great location once more.

As I'm just re-engaging the MSSQL side of my brain, I have some catching up to do, and there was no place better to do it than at Summit.  I was able to dig in deep, attend a few sessions around booth duty, and meet folks I'd rarely had the opportunity to interact with outside of Twitter before this event.


Brent Ozar came by the Delphix booth to say “Hi” and absolutely made my day!

Opportunities to Learn

There is always the challenge of balancing what you need to learn against what you want to learn, and this event was no different.  I had a list of features and products I really needed to come up to speed on, but with cool technology like Polybase, analytics with R, and the performance talks, it could be a bit distracting from my main purpose here.  I've always been a performance freak, and I still find it pulling me from the database knowledge that's important to day-to-day tasks.  I know this is why I find such value in Delphix: it frees the DBA from spending extra time on the tasks I find more mundane, so we can spend it on more interesting and challenging areas of database technology.

What I did learn was that many of the companies realizing the importance of virtualization, on-premise and in the cloud, aren't coming close to Delphix in making sure they have the basics down first.  You can't just talk the talk, you have to walk the walk, too.  I'm proud of Delphix and what we've accomplished: we don't say we can do something we can't, and we continue on the path to be Azure compliant for 2017.  Azure was everywhere this year at Summit and will continue to be a major push for Microsoft.

Pass Women in Technology Power-  ACTIVATE!

Another important recognition at Summit was the percentage of women attendees and the Women in Technology, (WIT) program, along with the support for it from everyone at Summit.


The WIT luncheon and happy hour were on Thursday.  In support of the day, over 100 men showed up in kilts.  It may seem like a small gesture, but it shows in how the men and women interact at this conference and in everyone's contribution to the event.  There is a lot more collaboration, and the community is, on average, much more interactive and involved than I've experienced at any Oracle event.  It's not to say that Oracle is doing it wrong; it's just that the SQL Server community is much farther ahead.  They refer to their SQL Family, and they take it seriously in a good way.

PassTV

Due to all of this, I was given the opportunity of a PassTV interview and met a number of incredible experts in the MSSQL community whom I'd previously known only on social media.  I appreciate being so embraced.


I want to thank my great cohorts from Delphix who manned the booth with me at Summit.  We had incredible crowds that kept us so busy answering questions, doing demos and talking about how virtualized databases can increase productivity and revenue.  Sacha, Venkat, Jonathan, Dante and Jenny and Richie- you all ROCKED!

Thanks to everyone for making my first SQL Pass Summit conference AWESOME!!

 

Posted in DBA Life, SQLServer

October 20th, 2016 by dbakevlar

I’ll be attending my very first Pass Summit next week and I’m really psyched!  Delphix is a major sponsor at the event, so I’ll get to be at the booth and will be rocking some amazing new Delphix attire, (thank you to my boss for understanding that a goth girl has to keep up appearances and letting me order my own Delphix ware.)

It's an amazing event, and for those of you who are my Oracle peeps wondering what Summit is: think Oracle Open World for the Microsoft SQL Server expert folks.


I was a strong proponent of immersing myself in different database and technology platforms early on.  You never know when knowledge gained in an area you never thought would be useful ends up saving the day.

Just Goin to Take a Look

Yesterday this philosophy came into play again.  A couple of folks were having some challenges with a testing scenario in a new MSSQL environment and asked other Delphix experts for assistance via Slack.  I am known for multi-tasking, so I thought that while I was doing some research and building out content, I would just have the shared session going in the background while I continued to work.  As soon as I logged into the web session, the guys welcomed me and said, "Maybe Kellyn knows what's causing this error…"

Me- “Whoops, guess I gotta pay attention…”

SQL Server, unlike Oracle, has always been multi-tenant as far as the broader database world is concerned.  This translates to a historical architecture with a server-level login AND a user-database-level username: the login ID, (login name) is linked to a user ID, (and thus a user name) in each user database, which acts much like a schema.  Oracle is starting to migrate to a similar architecture with database version 12c, moving away from schemas within a database and toward multi-tenant, where the pluggable database, (PDB) serves that role.

I didn't recognize the initial error that arose from the clone process, but that's not uncommon, as error messages can change with versions and with proprietary code, and I have worked very little, if at all, on MSSQL 2014.  When the guys clicked on the target user database in Management Studio and were told they didn't have access, it wasn't lost on anyone to check the login and user mapping, which showed the login didn't have a mapping to a username for this particular user database.  What was challenging them was that when they tried to add the mapping, (username) for the login to the database, it stated the username already existed and failed.

Old School, New Fixes

This is where "old school" MSSQL knowledge came into play.  Most of my database knowledge for SQL Server is from versions 6.5 through 2008.  Along with a lot of recovery and migrations, I also performed a process very similar to the Oracle option to plug or unplug a PDB, referred to in MSSQL terminology as "attach and detach" of a database.  You could then easily move the database to another SQL Server, but you would very often end up with what are called "orphaned users": the login IDs weren't connected to the user names in the database and needed to be resynchronized.  To perform this task, you could dynamically create a script to pull the logins if they didn't already exist, run it against the "target" SQL Server, and then create another that ran a procedure to synchronize the logins and user names.

Use  <user_dbname>
go
exec sp_change_users_login 'Update_One','<loginname>','<username>'
go
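That "dynamically create a script" step can be sketched in shell: given a list of user names whose logins and database users share the same name, emit one sp_change_users_login call per user. The database name and user names below are hypothetical:

```shell
# Generate a T-SQL fix-up script for orphaned users. In practice you'd
# build USERS from the server/database catalogs; these names are examples.
USERS="delphix_user app_user report_user"
{
  echo "Use MyUserDB"
  echo "go"
  for u in $USERS; do
    echo "exec sp_change_users_login 'Update_One','$u','$u'"
    echo "go"
  done
} > fix_orphans.sql
```

The resulting fix_orphans.sql can then be run against the target database with sqlcmd or Management Studio.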

For the problem experienced above, it was simply the delphix user that wasn't linked post-restoration due to some privileges, and once we ran this command against the target database, all was good again.

This wasn't the long-term solution, but it pointed to where the break was in the clone design, which can now be addressed.  It also shows that experience, no matter how benign it may seem, can come in handy later in our careers.


I am looking forward to learning a bunch of NEW and AWESOME MSSQL knowledge to take back to Delphix at Pass Summit this next week, as well as meeting up with some great folks from the SQL Family.

See you next week in Seattle!


Posted in Delphix, SQLServer

August 19th, 2016 by dbakevlar

This is a living document that I will continue to update, adding new database platforms as I go along in my career.  I spend a lot of time translating between the database platforms I've worked in for all tiers of the technical business, so it seems natural that we might need a location on the web where that information is kept.

alive

I'll add some diagrams at a later date, but we'll start with a simple spreadsheet of common terms and questions, and how each of the terms or tasks in Oracle, (the current #1 RDBMS) translates to other platforms.  If there isn't a similar term or task, I've marked it not applicable, (N/A).  If you have any additions or suggestions on how to improve the list, please feel free to comment or email me at DBAKevlar at Gmail.

As I’m still working on the best viewable format, keep in mind, you can export the table into a number of formats or even print it!

Thanks!

The Rosetta Stone for Database Platforms


Posted in DBA Life, SQLServer

October 25th, 2012 by dbakevlar

A couple of folks have emailed me looking for my slides from Oracle Open World.  Both presentations can be viewed at Enkitec's website, along with all of my great coworkers' presentations.

If you are still looking for my interview with Oracle Technology Network on contributing to the Oracle community and User group involvement, that can be found here.

I'm thoroughly busy right now with KSCOPE database track abstract selection, RMOUG Training Days planning and abstract selection, and finishing up chapters with my wonderful co-authors on the EM12c book for Apress.  I'm really proud of this collaboration with Pete Sharman, Alex Gorbachev, Gokhan Atil, Niall Litchfield, Anand Akela and Leighton Nelson.  These guys really know their EM12c and are putting their hearts and minds into this great book.  If you are interested, it's already available for pre-order through Apress.  If you are one of my SQL Server folks, the new Pro SQL Server 2012 Practices book is out, too!

I’m hoping in another month I can get back to writing blog posts and not just chapters for books, but I am starting an EM CLI book as soon as the EM12c book is over, so cross your fingers!

 

Posted in DBA Life, Oracle, SQLServer

September 21st, 2012 by dbakevlar

I’m seeing a light at the end of the tunnel and it’s not a train, so that has to be good, right? 🙂

I wanted to send out a quick post to remind folks that I will be presenting at SQL Saturday #169 in Denver, (hashtag #sqlsat169) on the seemingly popular "Oracle for the SQL Server DBA".  I really enjoy these sessions, as the MSSQL DBAs are always a refreshing change from the more common Oracle world I've been in for the last year, and they in turn are happy to talk to an Oracle consultant who really, truly believes that a database is a database: forget the platform, it's what you put behind it that matters!

If you are in the neighborhood and interested in attending, the link will help you register.  If you are attending, please come see me, even if you aren’t interested in learning about Oracle, come by to say ‘hi’! 🙂

Stay tuned, next week-  The train ride on the California Zephyr out to Oracle Open World!

Posted in DBA Life, SQLServer

May 3rd, 2012 by Kellyn Pot'Vin

I ended up so wrapped up in everything going on when I returned home that I never did get my Miracle Oracle World 2012 post out on my blog, (bad DBA Goth Princess!:)) so here it is!

Miracle Oracle World, (aka MOW2012, hashtag #MOW2012) is a yearly event held by Mogens Nørgaard and his company Miracle, (http://www.miracleas.dk/) at Hotel Legoland in Billund, Denmark.  This year was no exception, and I was thrilled when Mogens asked me, while in attendance at RMOUG's Training Days, if I would come to Denmark and do my presentation on EM12c, (then let me choose one more topic to speak on, which turned out to be ASH Analytics.)

I'd never been to Denmark and it sounded like a wonderful opportunity, but there were personal scheduling clashes, and at first I wasn't sure I'd be able to attend.  Tim was adamant about me going to MOW this year, though, and stayed home to take care of responsibilities so I could speak in Denmark.

Upon reaching Copenhagen, Mogens picked me up at the airport, (still impressed he was able to locate me in the chaos that is any airport!) and after a short time to recover from the long flight at Mogens' house, I loaded up into the car, driven by Mogens and accompanied by Cary Millsap, his wife Mindy, and their five-year-old daughter, Cat.

I have to give a lot of credit to young Miss Cat.  She was so well behaved and content to just ride along, watching movies and playing games.  She's very close to both her parents and was quite enjoying her *special* trip to Denmark.  It's a 3½ hour trip to Billund from the town near Copenhagen where Mogens resides, and the trip was pleasant: good conversation, and humor when Cary attempted to calculate the miles per hour of the windmills from the circumference, length and speed of the blades.

Upon arriving in Billund, we toured Hotel Legoland for a bit, but I was glad to have the Millsaps drop me off at the house at Lalandia and let me recuperate.  Lalandia is a huge waterpark across from Hotel Legoland, with what look like small cul-de-sacs of houses of different sizes and designs.  The one I was in had four bedrooms, and I was to share my room with Heli Helskyaho, who I was so looking forward to catching up with, since I hadn't seen her since RMOUG Training Days!  My fellow house guests were Alex Gorbachev, Christian Antognini and Jonathan Lewis.

Most of the folks attended the party house, and I heard it was quite the event, but I won't lie: I tried to catch up on some sleep after being up for 36 hours straight!  Not much jet lag, but I was up and spoke to Jonathan when he came in at around 8pm, to Alex when he showed up at midnight, and then chatted with Christian when he was up at 2am working on his presentation slides.  Jet lag is a funny thing… 🙂

I did get enough sleep to be up and ready before 7am and Jonathan happened to be up at the same time. He asked if I would like some company on my walk over to the conference site and I was happy to have such an excellent fellow along for the 1 mile walk. As much as I enjoy Jonathan’s presentations, there has got to be something said for one on one discussions with him. He can converse on any number of subjects and we were at the hotel Legoland in no time at all.

I appreciate Cary Millsap and his daughter, Cat allowing me to sit with them at breakfast. This helped alleviate some of my separation anxiety from my own children just watching Cat interact with her father. I ran into Oyvind Isene from Norway’s Oracle User Group, who I’d had the pleasure of meeting last year at Oracle Open World and he was someone I could always count on to put up with my incessant chatting for the two days of the MOW conference… 🙂

Choosing sessions was more difficult for me than, I think, for most other folks at the conference.  The thing that really impressed me about MOW was that it was a mixed-platform conference.  This doesn't mean there were a few MySQL sessions scattered among the Oracle ones, but that there was a full set of SQL Server presentations.  For me, having as many years in SQL Server as I do in Oracle, it was a real treat.  I missed out on a few I would have liked to attend, but as always happens, they conflicted with others I had promised to attend.  Attending Ron Crisco's session, (someone I had spoken and emailed with while working with the Method R tool suite, but had never actually met…) meant I missed Thomas Kejser's "SQL 2012 OLTP Improvements" session.  This went on for most of the day, with even my own presentations competing with a few sessions I wanted to attend.

Christian Antognini’s “End to End Response Time Analysis” was engaging- anything to do with waits and response is going to interest me. His data was flawless and everyone in the room left with a few more ways to address performance challenges after seeing his examples and demos.

I then attended Dan Norris's presentation on Exa-environment platform integration techniques.  Dan didn't have a large crowd in his session, but this allowed the conversation to be very specific between him and the attendees, which in turn greatly satisfied those who came.  Dan has a lot of experience and insight, and I was impressed with the level of interest from those who had questions about Exa* products.

I proceeded to attend a SQL Server session by Alexei Khalyako on SQL 2012 high availability and disaster recovery.  I hadn't worked in this area of SQL Server for a couple of years and was quite impressed with some of the new features Microsoft has introduced into the product, including Availability Groups with multiple secondaries for HA and DR, advanced replication, and new clustering features.

The next session, by Thomas Kejser, I found very intriguing: Hadoop and Big Data vs. data warehousing from a SQL Server perspective.  I've heard plenty of presentations and read a number of papers on it from the Oracle side, but really enjoyed hearing a view of it from the SQL Server world.  The data behind the presentation was accurate, and Thomas is a solid presenter, so the session was one of my favorites.  I made enough of an impact, (can you say mouthy?) that Thomas came over promptly after his presentation was done, and we continued to talk in a small group about big data and what options DBAs have to handle it, no matter the platform or technology choice.

My ASH Analytics session was at 4pm, the last session of the day, and it went as well as I could have expected.  I was missing my demo for this session; the server I'd set it up on was down for maintenance, (I know, why did I trust a server to be up or even available??)  I'd already run into complications when I discovered that much of the data I'd been collecting over the last couple of months was missing from my external drives.  Unfortunately, I didn't realize this until I went to enter all the data I thought I had onto slides, just two weeks prior to the conference.  I then ran into challenges recollecting the data, as my current EM12c environments are on 10g RAC databases.  Anyone who has attempted to use ASH Analytics on a 10g environment will learn it's not going to happen: you can install the package to bring up the ASH Analytics page, you just won't have any data in 95% of the panes, so nice try.  Lucky for me, I was able to capture most of the data I needed to come up with a presentation just in time, though still not the quality I would have liked.

Of course, my ASH Analytics session was attended by Cary Millsap, Alex Gorbachev, Uri Shaft, John Beresniewicz and other folks, many on the DBA God list… 🙂  I still appreciated their time and was thrilled to have them attend, even if it was not my best session.  Afterwards, we talked for quite some time, (missing demos have a tendency to grant that time…) and Cary asked if I would like to attend the "Mastering Trace Data" class he was giving the next day.  The option to take it offsite later was on the table, but taking it in person was too good to pass up, so I agreed.

The afternoon ended with after-first-day-conference drinks at Hotel Legoland's bar while waiting for dinner to be served.  Heli Helskyaho had arrived by that point, and I was happy to see her, (this also meant I was no longer the only female presenter at the conference! :))  I hung out with her, Oyvind Isene and Christian Antognini, speaking with many others throughout the evening until we were informed dinner was ready.

Dinner was a fun affair: great food, great company, and again, Cary and Mindy Millsap's daughter Cat was well behaved, (more so than some of the adults? :))  Alex Gorbachev, amid our common banter, discussed his recent election to the IOUG Board of Directors, and we had a good conversation about RMOUG's relationship with the IOUG.

Post dinner, Heli Helskyaho and I went over to the “Party House” which was four doors down from our own. There was eating and drinking, although not as much eating and drinking as the first night, as I heard there was a roast pig that I chose to miss to catch up on sleep. We spent most of the evening talking with John Beresniewicz, Tuomas Pystynen and Uri Shaft. There were many others, but so many, I gave up on remembering faces, let alone names. Alex Gorbachev took my “hoppy” beer off my hands at one point and replaced it with red wine, which turned into my drink of choice for the evening.

The second day started with a lovely breakfast with Heli, Tuomas Pystynen, Jonathan Lewis and others. I took the first hour off, prepped for my EM12c presentation in a quiet location, and it was worth the time. My session was in full attendance and I was content with the outcome. The Q&A was not as intensive as at my RMOUG session, but the questions were well thought out and the discussion was lively. The presentation was well received and only makes me more anxious for KSCOPE 2012 in San Antonio come June!

I attended Cary Millsap's "Mastering Trace Data" class for the rest of the day. This was a great refresher course for me after taking the extended class over a year ago and reviewing the book used for the class back in December. Cary goes over the basic information so that anyone can easily utilize the Method R tool suite, along with offering clear reasons to trace and showing how having this knowledge can make us better DBAs.

The conference ended with a speech by Mogens Noorgard, awards for participants, and recognition for those who came up with a creative way to explain the huge pool, which had been a hit with the sauna partiers at the party house, to Lilandia management.
Dan Norris had arranged with Mogens that I would drive back with him and his group. It was a good drive back to Malov, Denmark, near Copenhagen. His friend Anne is lovely and easy to talk to, and Tuomas Pystynen is fun to be around, (except for those salted, black licorice fish candies he bought and had me try! :))

I spent the last 24hrs at the Yellow Mansion, home of the Oak Table. Dan, Anne and Tuomas made sure I arrived at the airport on time to catch my plane, and upon arriving at my connection at London/Heathrow, I entered the lift and heard a familiar voice. Looking over the crowd entering, I noticed Mark Rittman, and yes, the person that was missing at MOW was on her way to Collaborate in Las Vegas- Debra Lilley. We shared a few words, hugs and kisses, then I was off on the last leg of my flight home to Denver.

My only regret regarding Miracle Oracle World is that I didn't have more time in Denmark. Upon returning, I viewed some of the pictures Dan Norris' friend Anne took in Copenhagen after they dropped me off at the airport. Even one more day in the city would have been wonderful!

Posted in DBA Life, Oracle, SQLServer

April 2nd, 2012 by Kellyn Pot'Vin

~ The least questioned assumptions are often the most questionable ~ Paul Broca

I've always found assumptions to be one of the most common causes of failures in software releases and maintenance windows.

If I had a dime for every time I’ve heard, “I thought you were going to take care of/do that…” or “I didn’t involve you earlier because I thought the steps you were responsible for were so easy…”, well, you know the saying and I’d be rich.

Assumptions that bring the DBA into a project, release or maintenance window too late are widespread enough when you are onsite, but as a remote DBA, they can take on a whole new dimension. Where being in the office might allow you some over-heard conversation or make you privy to meetings that you realize you should be part of, working off-site can really set you up to miss out on the important prep time needed to offer the best level of support.

It's nothing new, not rare, and it's most obvious in hindsight, after the release or maintenance has completed. As paranoid as I think I am, which causes me to involve myself pretty thoroughly, (I'm not a fan of surprises… :)) I did experience it this last weekend as a new DBA for a client. Acclimating to a client, as well as them becoming comfortable with and involving you as their new DBA, takes time. That was something we just didn't have the opportunity to do much of, nor was it anyone's fault. Enkitec was unaware of this year-end maintenance, so they assumed I would easily take over ownership of the client from the previous DBA.

Friday, after they sent an email with the tasks they needed my time allocated for that night, and on my "real" first time in the client's systems, there was some concern that there might be a disk space issue for the backup required after the special, once-a-year release completed.

I did some quick research after this was discovered and offered an option, but the client's Unix admin cleared off some disk space and assumed it would be enough. Now, the estimated space usage for the release was not that big, definitely not large when you consider what I'm used to; we are talking gigabytes, not terabytes. Only being in the system for one day, I made my first mistake: I assumed the problem was taken care of, proceeded to perform the duties that had been assigned to me for that evening and let them work through the weekend.

The client had assumed the tasks were quite simple for a DBA- the previous DBA had been there the entire time they had been clients and the database growth had been of minimal concern. It was taken into consideration that I may require a little more time to find my way around the environment, become familiar with the specifics of design, jobs and backup scenarios, etc., but I had no issues with the pre-release work, so why would “reversing” the process for the post work result in any difficulties?

After the weekend work, they contacted me and asked me to start the post-year-end processing tasks. Everything worked out well until close to midnight, when the backup failed. We didn't have enough space.

The backup strategy is not RMAN backup sets, but image copies with level 1 incrementals, and the size of the database due to the release, along with additional UNDO, etc., caused the database to be too large to fit on the volume. Different strategies hadn't helped; even splitting across multiple channels to multiple volumes was not enough, and now I was having a formatting issue on the incremental apply. It did not like the change one bit and yes, it was after midnight, (have we discussed when a DBA's horse-drawn carriage turns back into a pumpkin yet? :))

The unique configuration and my newness to the environment meant that it did take me a bit longer to work through and investigate issues, (although I do think this may be the best way to retain knowledge about an environment, it's just more painful for everyone involved!) I finally had the answers I needed in the wee morning hours-
– how to retain the existing backup from before the release on the same volume I needed more space on.
– how to change from an image copy incremental to a level 0, RMAN compressed backup.
– what parallelism was acceptable during business hours.
– what jobs had to be stopped to avoid impacting production while all this occurred.
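The switch from image copy incrementals to a compressed level 0 backup set might look something like this minimal RMAN sketch; the channel count, disk path and tag are illustrative assumptions here, not the client's actual settings:

```
RUN {
  # limit parallelism for business hours (assumed: two disk channels to the larger volume)
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup2/%U';
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT '/backup2/%U';
  # level 0 compressed backup set instead of image copies
  BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0
    TAG 'post_release_lvl0'
    DATABASE PLUS ARCHIVELOG;
}
```

A compressed backup set trades CPU for disk, which is exactly the trade you want when the backup volume, not the server, is the constraint.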

Now the second level of the problem- I'd been up two of the last three nights, had been ill on Saturday, and I was starting to feel my IQ slip away like the free space on the backup volume. Enkitec believes in giving their clients valuable resources that are able to give them their best, and I was in no way close to that. I really appreciated that my secondary DBA on this client, Miranda, was so helpful, so willing to take what I, as the brand new DBA, had worked out and put the plan into action, (and make it sound so easy to me, who had so little IQ left at that point! :)) I promptly notified the client after I transitioned the information to her, and then collapsed into a deep slumber.

Now we come to the moral of our story.
This should have been simple work for the DBA. It was assumed to be so by BOTH parties: the DBA and the client. This was our downfall. We really should make only one assumption when it comes to maintenance and releases: whatever we assume, especially on the simple stuff, will most likely be what causes our biggest challenges. The DBA should be involved from the beginning of any project, maintenance or release; only then, once both sides have inspected the tasks and their level of difficulty, can they decide that the DBA is not required to be heavily involved. An open invitation should be available for the DBA to return if any "red flags" arise, and all communication should still include the DBA to ensure there are as few surprises as possible.

Posted in DBA Life, Oracle, SQLServer

March 27th, 2012 by Kellyn Pot'Vin

Chris Shaw from the Colorado Springs SQL Server SQLPass group invited me to come speak to this wonderful group last week, and I promised a few folks that I would upload my slides to my site, (they are also available via Chris, too…)

The group is a small, close-knit one that I was very pleased to have the opportunity to speak to, and I look forward to having more interaction with them in the future!

Thank you to Chris and the rest of Springs SQLPass!

Oracle for the SQL Server DBA Slides

Posted in Oracle, SQLServer

March 22nd, 2012 by Kellyn Pot'Vin

Two things to be happy about today!

1st-  Enkitec posted a lovely announcement regarding my coming on board.  I just arrived home last night after three days at the main office in Irving, TX, (the Dallas area is lovely with all the fields of Blue Bonnet flowers this time of year…) and am psyched about working remotely for them from my home, northwest of Denver.

2nd-  I have a wonderful opportunity to speak this evening at Colorado Springs SQL Pass group on “Oracle for the SQL Server DBA”.  SpringsSQL

It’s only going to get crazier the next two months from here, so stay tuned! 🙂

Posted in DBA Life, Oracle, SQLServer

September 22nd, 2011 by Kellyn Pot'Vin

Users complained that a monthly financial report would no longer run in SQL Server Reporting Services, (SSRS.)   Upon investigation, it was found that the report ran a stored procedure from one Annex database, sourcing from another, with outer joins to a SQL Server database on a remote server through a linked server configuration.  In attempts to run the report, my SQL Server Profiler traces on both the source SQL Server and the remote SQL Server showed consistent sp_reset_connection results from the source and no activity on the remote.

 

I ran my trusty and favorite script to tell me what processes were taking the most resources and quickly realized which statement was the cause:

SELECT TOP 1000
    [Object_Name] = OBJECT_NAME(st.objectid),
    creation_time, last_execution_time,
    total_cpu_time     = total_worker_time / 1000,
    avg_cpu_time       = (total_worker_time / execution_count) / 1000,
    min_cpu_time       = min_worker_time / 1000,
    max_cpu_time       = max_worker_time / 1000,
    last_cpu_time      = last_worker_time / 1000,
    total_time_elapsed = total_elapsed_time / 1000,
    avg_time_elapsed   = (total_elapsed_time / execution_count) / 1000,
    min_time_elapsed   = min_elapsed_time / 1000,
    max_time_elapsed   = max_elapsed_time / 1000,
    avg_physical_reads = total_physical_reads / execution_count,
    avg_logical_reads  = total_logical_reads / execution_count,
    execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE OBJECT_NAME(st.objectid) IS NOT NULL
--AND DB_NAME(st.dbid) = ''    -- optionally filter to a single database
ORDER BY total_cpu_time DESC
--ORDER BY DB_NAME(st.dbid), total_worker_time / execution_count DESC   -- alternate ordering

 

Now, the important data point here is that the report came right up at the top, as it was quite a challenge, to be sure:

Object_Name                    execution_time  total_cpu_time  min_cpu_time  max_cpu_time  last_cpu_time  avg_logical_reads
Rpt_ClientRevenueDownloadKa    33:15.1         1040436         507989        532447        532447         4701439

Note that the min and max CPU times are very consistent and the average logical reads are out of this world!  The other nice thing about this report is that it will tell you exactly which code is causing the issue.  You then have the choice of copying the code out of the final column and running an explain plan to see what the issue is, or executing a test to see the problem.  The code in question was a distributed query, so an explain plan often does not show what a test of the query would, so I ran the one statement, which did return to the remote server, but with frustrating results:
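When testing a suspect statement like this, a minimal harness using standard session options lets you compare CPU, elapsed time and reads without guessing; the placeholder comment stands in for whatever you copied out of the statement_text column:

```sql
-- Show per-statement CPU/elapsed time and logical/physical reads
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Paste the statement copied from statement_text here,
-- e.g. the distributed query under suspicion.

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Or, to capture the estimated plan without executing:
-- SET SHOWPLAN_XML ON;   -- must be the only statement in its batch
```

The IO statistics are especially telling for linked-server queries, since remote reads show up very differently than local ones.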

What you see in the diagram above is the server choosing to parallelize the query, flooding the server with both parallelism waits and network IO.   The query did end up returning, but only after approximately 10 minutes.

Something was coming across the network to run against the remote server, and after verifying statistics, etc. on the remote, I returned to the source server and looked at the query.

The query required a two-step temp table to be created before joining to the tables across the linked server.  The issue here is that this data was flooding both the network and the remote database.  The following, with the temp table removed, completes very quickly:

SELECT
--ISNULL(atv.NAME, 'UNKWN') AS name,
ISNULL(parentCrmA.NAME, 'UNKWN') AS parent,
MONTH(t.DT1) AS pmonth, YEAR(t.DT2) AS pyear,
RIGHT('0' + CONVERT(VARCHAR(3), MONTH(t.DT2)) + CONVERT(VARCHAR(5), YEAR(t.DT2)), 6) AS fdte,
t.DT2 AS pdate, t.P_AMT*(-1) AS amt,
'rev_amt' AS r_amt, 'adj_amt' AS a_amt,
a.GLA_NO AS GLNo, --atv.crmVert AS vertical,
crmA.OID_NAME AS oidname, s.S_NAME AS spc,
c.C_NAME AS Cname, crmA.NC_NAME AS c_cname,
crmA.I_CD AS i_cd
--INTO #FINData     <-- second-step temp table commented out, along with its columns…
FROM        R_SERVER.FIN_app.dbo.tglTbl t
INNER JOIN  R_SERVER.FIN_app.dbo.tglAtbl a ON t.GLAKey = a.GLAKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglItbl i ON t.TNo = i.TNo
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglCtbl c ON i.CKey = c.CKey
--LEFT OUTER JOIN @AccountToVerticalMap atv ON c.CID = atv.cName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglStbl s ON i.PSKey = s.SPKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglARtbl ar ON t.ARKey = ar.ARKey
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl crmA ON c.CID = crmA.Name COLLATE SQL_Latin1_General_CP1_CI_AS AND crmA.DeletionS_CD = 0 AND crmA.S_CD = 0
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl parentCrmA ON crmA.PA_ID = parentCrmA.A_ID AND parentCrmA.DS_CD = 0 AND parentCrmA.S_CD = 0
WHERE a.GLA_NO BETWEEN 4000 AND 4999

0.03 seconds, in fact…

The IO issue is that the code creates the two temp objects, then attempts to send this data across the network to the linked server and on to the remote database to process the request.  As this is a monthly report, you have to ask yourself, is a temp table really required??  Not in this instance; it is actually the bottleneck to performance.

 

ORIGINAL:

with cAName(MemName)
AS (SELECT      c.CID
    FROM  R_SERVER.FIN_app.dbo.tglCtbl c
    WHERE c.CID is not null
    UNION ALL
    SELECT      ar.ARCode
    FROM  R_SERVER.FIN_app.dbo.tglARtbl ar
    WHERE ar.ARCode is not null
)
INSERT      INTO @ActVertMap
SELECT      DISTINCT
    a.Name AS Name,
    a.New_Vertical AS Vert
FROM  S_SERVER_DB.dbo.Atbl a
LEFT OUTER JOIN cAName c ON a.Name = c.MemName COLLATE Latin1_General_CI_AS
WHERE a.Name is not null
order by a.Name

 

SIMPLE  REWRITE:

drop table cacct_wrk
go
SELECT      c.CID as MemName into cacct_wrk
FROM  R_SERVER.FIN_app.dbo.tglCtbl c
WHERE c.CID is not null
UNION ALL
SELECT      ar.ARCode
FROM  R_SERVER.FIN_app.dbo.tglARtbl ar
WHERE ar.ARCode is not null
go
drop table ActVertMap_wrk
go
SELECT      DISTINCT
    a.Name AS mapName,
    a.New_Vertical AS mapVert
into ActVertMap_wrk
FROM  S_SERVER_DB.dbo.Atbl a
LEFT OUTER JOIN cacct_wrk c ON a.Name = c.MemName COLLATE Latin1_General_CI_AS
WHERE a.Name is not null
order by a.Name
go
go

 

FINAL REPORTING QUERY REWRITE:

SELECT
ISNULL(atv.mapName, 'UNKWN') AS name,
ISNULL(parentCA.NAME, 'UNKWN') AS parent,
MONTH(t.DT1) AS pmonth, YEAR(t.DT2) AS pyear,
RIGHT('0' + CONVERT(VARCHAR(3), MONTH(t.DT2)) + CONVERT(VARCHAR(5), YEAR(t.DT2)), 6) AS fdte,
t.DT2 AS pdate, t.P_AMT*(-1) AS amt,
'r_amt' AS r_amt, 'a_amt' AS a_amt,
a.GLA_NO AS GLANo, atv.mapVert AS vertical,
cA.O_ID AS oidname, s.S_NAME AS spc,
c.C_NAME AS FIN_cname, cA.NC_NAME AS ccname,
cA.I_CD AS icode --INTO      #FINData
FROM        R_SERVER.FIN_app.dbo.tglTbl t
INNER JOIN  R_SERVER.FIN_app.dbo.tglAtbl a ON t.GLAKey = a.GLAKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglItbl i ON t.TNo = i.TNo
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglCtbl c ON i.CKey = c.CKey
LEFT OUTER JOIN   ActVertMap_wrk atv ON c.CID = atv.mapName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglStbl s ON i.PSKey = s.SPKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglARtbl ar ON t.ARKey = ar.ARKey
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl cA ON c.CID = cA.Name COLLATE SQL_Latin1_General_CP1_CI_AS AND cA.DeletionS_CD = 0 AND cA.S_CD = 0
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl parentCA ON cA.PA_ID = parentCA.A_ID AND parentCA.DeletionS_CD = 0 AND parentCA.S_CD = 0
WHERE a.GLA_NO BETWEEN 4000 AND 4999

And this returns in .03 seconds!! 😀

 

Using the correct feature for the goal and the use of the code is important.  While a temporary object might be a great option for local processing, the limitations of the optimizer, in any platform, in utilizing that temporary object across a linked server must be taken into consideration.  This was a complex process involving three databases that, with growth, resulted in poor performance.

Posted in SQLServer

October 6th, 2010 by Kellyn Pot'Vin

Part of this is going to be the SQL Server DBA in me ranting, so be patient and know, as it says in the lovely site disclaimer, this is MY OPINION and yes, I stick by this opinion 100%.

I have very strong feelings on who installs a SQL Server, especially one with SSRS, and how it's done.  Microsoft's continued mis-marketing, (albeit successful for sales) and ease of installation have created database environments that are misconfigured, poor-performing and poorly designed-  BY DEFAULT. 
Few experts in .Net or SSRS, when asked, know what transaction logs, filegroups, lock escalation or sp/dbcc procedures are.  If you don't know what each and every one of these is for, you shouldn't be installing SQL Server.  If you don't know why the tempdb and transaction logs should be on separate spindles, or why it's not a good idea to have one monstrous, huge drive, (see the first part of this sentence for a clue…) then you shouldn't be installing SQL Server.

What are the rules of thumb a DBA follows when we are installing?

  • Install the SQL Server as a dedicated SQL Server admin account that has the least amount of privileges required to perform all database tasks, (If I see one more database owned by “domain\standard user of database” I’m going to break something…:))
  • If this is a production database, high use, I’m going to want multiple filegroups, (Oracle DBA’s can think tablespaces at this point… :))  I am good with the primary filegroup for the standard tables, but I want another for indexes, a third, fourth, fifth, etc. for high use objects.
  • I do want multiple drives and I do not want to place my binaries in the default location on the C:\ drive.  The last thing my NT Admin needs is me filling up the drive that the OS resides on.  Give me a designated SQL Server binaries drive with plenty of room for upgrades, it’s only going to get larger.
  • I want drives for my data, my indexes, tempdb, transaction logs and backups.  Do not skimp on space and I am more than happy to tell you the ratio of sizes needed for the backups per retention period. 
  • Don’t give me RAID5 for my databases, you’ll only tick me off when people start complaining about the I/O issues.  I know you get less MB in the end, but trust me, RAID0+1/RAID10 is worth the cost.
  • Don’t give admin access to every Tom, Dick and Harry to the box.  You don’t do it to your Oracle servers, why would you do it to these database servers? 
  • Last, but not least-  treat them with the respect any database server deserves.  Back them up, move the files off to tape, secure the systems, leave the database admin work to a DBA.
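Several of the checklist items above can be sanity-checked after the fact with the standard catalog views; this is a sketch, not tied to any particular shop's naming:

```sql
-- Who owns each database? (catches databases owned by 'domain\standard user')
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
ORDER BY name;

-- Where do the data, log and tempdb files actually live?
-- (confirms tempdb/log separation and that nothing landed on the OS drive)
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                    -- ROWS vs LOG
       physical_name,
       size * 8 / 1024 AS size_mb    -- size is in 8KB pages
FROM sys.master_files
ORDER BY database_name, type_desc;
```

Running these on a newly handed-over server takes seconds and surfaces most of the "installed by default" sins in one pass.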

So, what started the rant?  One of my poor NT Admins went through a challenging process that renamed a service account.  After he corrected everything this procedure broke, we arrived a couple days later to find that no one could run any of the reports through the web from this main reporting server.

It turns out that when this SQL Server with SSRS was installed and configured for our company by a third-party vendor, it was performed with the service account that the NT Admin was forced to rename. Due to this, that account was the "db owner" of the Reporting Server, including all the encryption internally, and the cached authentication had "run out".

I found the error immediately, as there were also SQL Server Agent jobs attempting to authenticate with the service account, so the following message was reported in SQL Server’s error logs:
Message
[298] SQLServer Error: 15404, Could not obtain information about Windows NT group/user 'DOMAIN\SERVICE_ACCT', error code 0xffff0002. [SQLSTATE 42000] (ConnIsLoginSysAdmin)
It wasn’t as simple as renaming the database owner for the ReportServer database, as there are encryption keys that are created at the time of installation.  I will say, they have simplified the process for 2008 vs. earlier versions though! 🙂

• Change the ownership of the ReportServer and ReportServerTempDB databases to the DOMAIN\NEW_SRVC_ACCT account, (as the install should have originally been performed as,) using the stored proc sp_changedbowner:
USE ReportServer
go
exec sp_changedbowner 'DOMAIN\NEW_SRVC_ACCT'
go
USE ReportServerTempDB
go
exec sp_changedbowner 'DOMAIN\NEW_SRVC_ACCT'
go
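A quick check that the ownership change took; sp_helpdb is a standard system procedure and its output includes the owner column:

```sql
-- Confirm the new owner of both SSRS repository databases
exec sp_helpdb 'ReportServer'
go
exec sp_helpdb 'ReportServerTempDB'
go
```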
• Start the Reporting Services Configuration Manager and connect to the database repository for the SSRS.

  1. Click on Service Account and change from “Network Service” to “Use Another Account”.  Type in the username and password of the new DB Owner you specified in the sp_changedbowner step.
  2. Click Apply
  3. Click on “Encryption Keys” on the left and choose to back up, (always, always back up after every change.)
  4. Choose a secure local location, (and also make a copy of these on your backup server each time..) and password protect it.
  5. Click Apply.
  6. You have now officially reset everything in the Network service from the old domain user account to the new one.  You now have to reset all back to the Network service.
  7. Click again on Service Account and Change the “Use Another Account” back to “Network Service”
  8. Click Apply
  9. Click on “Encryption Keys” and make another backup, saving to a new file name each time you perform this step, (date and time in the file name helps…)  Saving off a copy of the final encryption keys to the backup server is essential if you wish to recover the SSRS some day, so DON’T FORGET!!
  10. Change the ownership of any other databases with a simple execution of sp_changedbowner as seen above, (only SSRS requires the other steps…) and change any agent jobs or other services that are running as the now-missing service account. 
  • Test reports and you should be good to go.
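For scripting these key backups instead of using the GUI, SSRS also ships a command-line utility, rskeymgmt.exe; a minimal sketch, where the file name and password are illustrative assumptions:

```
rem Back up the SSRS symmetric key to a password-protected file
rskeymgmt -e -f D:\Backups\ssrs_key_20101007.snk -p "StrongPassword!"

rem Apply (restore) the key on a rebuilt or migrated server
rskeymgmt -a -f D:\Backups\ssrs_key_20101007.snk -p "StrongPassword!"
```

Scheduling the extract alongside the database backups is an easy way to make sure the key backup never gets forgotten.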

Many issues, like the one above, can be avoided if a DBA works with the NT Admin to ensure the server is built correctly to support SQL Server, and if a DBA performs the installation and configuration of the database server.

**Added 10/07/10 after an email or two with a couple of other SQL Server DBAs-
If you don’t know the topics that I listed as requirements to install SQL Server, knowing them should also be a requirement before you are aliased as DBO or given admin privileges on a SQL Server box-  nuff’ said!!
~done rant!~ 🙂

Posted in SQLServer
