Category: SQLServer

March 22nd, 2012 by Kellyn Pot'Vin

Two things to be happy about today!

1st-  Enkitec posted a lovely announcement on their site regarding my coming on board.  I just arrived home last night after three days at the main office in Irving, TX, (the Dallas area is lovely with all the fields of bluebonnet flowers this time of year…) and am psyched about working remotely for them from my home, northwest of Denver.

2nd-  I have a wonderful opportunity to speak this evening at the Colorado Springs SQL PASS group, SpringsSQL, on “Oracle for the SQL Server DBA”.

It’s only going to get crazier the next two months from here, so stay tuned! 🙂

Posted in DBA Life, Oracle, SQLServer

September 22nd, 2011 by Kellyn Pot'Vin

Users complained that a monthly financial report would no longer run in SQL Server Reporting Services (SSRS).  Upon investigation, it was found that the report was driven by a stored procedure that ran from one Annex database, sourced from another, and outer joined to a SQL Server database on a remote server through a linked server configuration.  When I attempted to run the report, my SQL Server Profiler traces on both the source SQL Server and the remote SQL Server showed consistent sp_reset_connection calls from the source and no activity on the remote.
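Before reaching for Profiler, a quick way to confirm whether the remote server is doing any work at all is to check its active requests directly.  This is only a minimal sketch using the standard DMVs; 'SOURCE_SERVER' is a hypothetical host name standing in for the server issuing the linked server calls.

-- Run on the remote SQL Server: list user sessions and what, if anything, they are executing.
SELECT s.session_id, s.host_name, s.login_name,
       r.status, r.wait_type, r.cpu_time,
       t.text AS current_statement
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
AND s.host_name = 'SOURCE_SERVER'   -- hypothetical name of the source server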

 

I ran my trusty and favorite script to tell me which processes were taking the most resources and quickly realized which statement was the cause:

SELECT  TOP 1000
[Object_Name] = object_name(st.objectid),
creation_time, last_execution_time, total_cpu_time = total_worker_time / 1000,
avg_cpu_time = (total_worker_time / execution_count) / 1000,
min_cpu_time = min_worker_time / 1000, max_cpu_time = max_worker_time / 1000,
last_cpu_time = last_worker_time / 1000, total_time_elapsed = total_elapsed_time / 1000 ,
avg_time_elapsed = (total_elapsed_time / execution_count) / 1000,
min_time_elapsed = min_elapsed_time / 1000, max_time_elapsed = max_elapsed_time / 1000,
avg_physical_reads = total_physical_reads / execution_count,
avg_logical_reads = total_logical_reads / execution_count,
execution_count,
SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset
END
- qs.statement_start_offset
) /2) + 1) as statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) st
WHERE Object_Name(st.objectid) IS NOT NULL
--AND DB_NAME(st.dbid) = '
ORDER BY total_cpu_time DESC
--ORDER BY db_name(st.dbid), total_worker_time / execution_count DESC

 

Now the important data here is that the report came up promptly at the top of the results; it was quite a challenge, to be sure:

Object_Name                  execution_time  total_cpu_time  min_cpu_time  max_cpu_time  last_cpu_time  avg_logical_reads
Rpt_ClientRevenueDownloadKa  33:15.1         1040436         507989        532447        532447         4701439

Note that the min and max CPU times are very consistent and the average logical reads are out of this world!  The other nice thing about this report is that it will tell you exactly which code is causing the issue.  You then have the choice of copying the code out of the final column and running an explain plan to see what the issue is, or executing a test to see the problem.  The code in question was a distributed query, and an explain plan often does not show what a test of the query would, so I ran the one statement, which did reach out to the remote server, but with frustrating results.
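Rather than re-running a ten minute statement, the cached plan for the procedure can also be pulled straight from the plan cache.  A minimal sketch, filtering on the report procedure from the output above and assuming it is run in the database that owns the procedure:

SELECT OBJECT_NAME(st.objectid) AS [Object_Name],
       qp.query_plan                              -- XML showplan from the plan cache
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE OBJECT_NAME(st.objectid) = 'Rpt_ClientRevenueDownloadKa'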

What the query plan showed was the server choosing to parallelize the query, flooding the server with waits on both parallelism and network I/O.  The query did eventually return, but only after approximately 10 minutes.
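Those waits can also be confirmed numerically on the remote server.  Just a rough sketch against the standard wait stats DMV; CXPACKET corresponds to the parallelism waits and ASYNC_NETWORK_IO to the consumer, here the linked server call, not pulling rows fast enough:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('CXPACKET', 'ASYNC_NETWORK_IO')
ORDER BY wait_time_ms DESC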

Something was coming across the network to run against the remote server, and after verifying statistics, etc. on the remote, I returned to the source server and looked at the query.
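For the statistics check, the quickest sanity test is simply to see when statistics on the remote tables were last updated.  A minimal sketch to be run on the remote server; dbo.tglTbl is one of the (renamed) tables from the query below:

SELECT name AS stats_name,
       STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.tglTbl')
ORDER BY last_updated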

The query required a two-step temp table to be created before joining to the tables across the linked server.  The issue here is that this data was flooding both the network and the remote database.  The following, without the temp table, completes very quickly:

SELECT
--ISNULL(atv.NAME, 'UNKWN') AS name,
ISNULL(parentCrmA.NAME, 'UNKWN') AS parent,
month(t.DT1) AS pmonth, YEAR(t.DT2) AS pyear,
RIGHT('0' + CONVERT(VARCHAR(3), MONTH(t.DT2)) + CONVERT(VARCHAR(5), YEAR(t.DT2)), 6) AS fdte,
t.DT2 AS pdate, t.P_AMT*(-1) AS amt,
'rev_amt' AS r_amt, 'adj_amt' AS a_amt,
a.GLA_NO AS GLNo, --atv.crmVert AS vertical,
crmA.OID_NAME AS oidname, s.S_NAME AS spc,
c.C_NAME AS Cname, crmA.NC_NAME AS c_cname,
crmA.I_CD AS i_cd
--INTO      #FINData   <-- 2nd step temp table commented out, along with its columns…
FROM        R_SERVER.FIN_app.dbo.tglTbl t
INNER JOIN  R_SERVER.FIN_app.dbo.tglAtbl a ON t.GLAKey = a.GLAKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglItbl i ON t.TNo = i.TNo
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglCtbl c ON i.CKey = c.CKey
--LEFT OUTER JOIN @AccountToVerticalMap atv ON c.CID = atv.cName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglStbl s ON i.PSKey = s.SPKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglARtbl ar ON t.ARKey = ar.ARKey
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl crmA ON c.CID = crmA.Name COLLATE SQL_Latin1_General_CP1_CI_AS AND crmA.DeletionS_CD = 0 AND crmA.S_CD = 0
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl parentCrmA ON crmA.PA_ID = parentCrmA.A_ID AND parentCrmA.DeletionS_CD = 0 AND parentCrmA.S_CD = 0
WHERE a.GLANo between 4000 and 4999

:03 seconds, in fact…

The I/O issue is that the code creates the two temp objects, then attempts to send this data across the network through the linked server to the remote database to process the request.  As this is a monthly report, you have to ask yourself, is a temp table really required??  Not in this instance; it is actually the bottleneck to performance.

 

ORIGINAL:

with cAName(MemName)
AS (SELECT      c.CID
FROM  R_SERVER.FIN_app.dbo.tglCtbl c
WHERE c.CID is not null
UNION ALL
SELECT      ar.ARCode
FROM  R_SERVER.FIN_app.dbo.tglARtbl ar
WHERE ar.ARCode is not null
)
INSERT      INTO @ActVertMap
SELECT      DISTINCT
a.Name AS Name,
a.New_Vertical AS Vert
FROM  S_SERVER_DB.dbo.Atbl a
LEFT OUTER JOIN cAName c ON a.Name = c.MemName COLLATE Latin1_General_CI_AS
WHERE a.Name is not null
order by a.Name

 

SIMPLE  REWRITE:

drop table cacct_wrk
go
SELECT      c.CID  as "MemName" into cacct_wrk
FROM  R_SERVER.FIN_app.dbo.tglCtbl c
WHERE c.CID is not null
UNION ALL
SELECT      ar.ARCode
FROM  R_SERVER.FIN_app.dbo.tglARtbl ar
WHERE ar.ARCode is not null
go
drop table ActVertMap_wrk
go
SELECT         DISTINCT
a.Name AS mapName,
a.New_Vertical AS mapVert
into ActVertMap_wrk
FROM  S_SERVER_DB.dbo.Atbl a
LEFT OUTER JOIN cacct_wrk c ON a.Name = c.MemName COLLATE Latin1_General_CI_AS
WHERE a.Name is not null
order by a.Name
go

 

FINAL REPORTING QUERY REWRITE:

SELECT
ISNULL(atv.mapName, 'UNKWN') AS name,
ISNULL(parentCrmA.NAME, 'UNKWN') AS parent,
month(t.DT1) AS pmonth, YEAR(t.DT2) AS pyear,
RIGHT('0' + CONVERT(VARCHAR(3), MONTH(t.DT2)) + CONVERT(VARCHAR(5), YEAR(t.PDate)), 6) AS fdte,
t.DT2 AS pdate, t.P_AMT*(-1) AS amt,
'r_amt' AS r_amt, 'a_amt' AS a_amt,
a.GL_NO AS GLANo, atv.mapVert AS vertical,
cA.O_ID AS oidname, s.S_NAME AS spc,
c.C_NAME AS FIN_cname, cA.NC_NAME AS ccname,
cA.I_CD AS icode --INTO      #FINData
FROM        R_SERVER.FIN_app.dbo.tglTbl t
INNER JOIN  R_SERVER.FIN_app.dbo.tglAtbl a ON t.GLAKey = a.GLAKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglItbl i ON t.TNo = i.TNo
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglCtbl c ON i.CKey = c.CKey
LEFT OUTER JOIN   ActVertMap_wrk atv ON c.CID = atv.mapName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglStbl s ON i.PSKey = s.SPKey
LEFT OUTER JOIN   R_SERVER.FIN_app.dbo.tglARtbl ar ON t.ARKey = ar.ARKey
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl cA ON c.CID = cA.Name COLLATE SQL_Latin1_General_CP1_CI_AS AND cA.DeletionS_CD = 0 AND cA.S_CD = 0
LEFT OUTER JOIN   S_SERVER_DB.dbo.Atbl parentCrmA ON cA.P_ID = parentCrmA.A_ID AND parentCrmA.DeletionS_CD = 0 AND parentCrmA.S_CD = 0
WHERE a.GLANo between 4000 and 4999

And this returns in .03 seconds!! 😀

 

Using the correct feature for the goal and for how the code is used is important.  While a temporary object might be a great option for local processing, the optimizer’s limited ability, on any platform, to utilize that temporary object across a linked server must be taken into consideration.  This was a complex process involving three databases that, with growth, resulted in poor performance.

Posted in SQLServer

October 6th, 2010 by Kellyn Pot'Vin

Part of this is going to be the SQL Server DBA in me ranting, so be patient and know, as it says in the lovely site disclaimer, this is MY OPINION, and yes, I stick by this opinion 100%.

I have very strong feelings about who installs a SQL Server, especially one with SSRS, and how.  Microsoft’s continued mis-marketing, (albeit successful for sales) and ease of installation have created database environments that are misconfigured, poor-performing and poorly designed-  BY DEFAULT.
Few experts in .Net or SSRS, when asked, know what transaction logs, filegroups, lock escalation or sp/dbcc procedures are.  If you don’t know what each and every one of these is for, you shouldn’t be installing SQL Server.  If you don’t know why the tempdb and transaction logs should be on separate spindles, or why it’s not a good idea to have one monstrous, huge drive, (see the first part of this sentence for a clue…) then you shouldn’t be installing SQL Server.

What are the rules of thumb a DBA follows when we are installing?

  • Install SQL Server under a dedicated SQL Server admin account that has the least privileges required to perform all database tasks, (If I see one more database owned by “domain\standard user of database” I’m going to break something…:))
  • If this is a high-use production database, I’m going to want multiple filegroups, (Oracle DBAs can think tablespaces at this point… :))  I am good with the primary filegroup for the standard tables, but I want another for indexes, and a third, fourth, fifth, etc. for high-use objects; see the sketch after this list.
  • I do want multiple drives and I do not want to place my binaries in the default location on the C:\ drive.  The last thing my NT Admin needs is me filling up the drive that the OS resides on.  Give me a designated SQL Server binaries drive with plenty of room for upgrades, it’s only going to get larger.
  • I want drives for my data, my indexes, tempdb, transaction logs and backups.  Do not skimp on space and I am more than happy to tell you the ratio of sizes needed for the backups per retention period. 
  • Don’t give me RAID5 for my databases, you’ll only tick me off when people start complaining about the I/O issues.  I know you get less MB in the end, but trust me, RAID0+1/RAID10 is worth the cost.
  • Don’t give every Tom, Dick and Harry admin access to the box.  You don’t do it on your Oracle servers, so why would you do it on these database servers?
  • Last, but not least-  treat them with the respect any database server deserves.  Back them up, move the files off to tape, secure the systems, leave the database admin work to a DBA.
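For the filegroup point above, here is a minimal sketch of what such a layout might look like; the database name, drive letters and sizes are all hypothetical placeholders for whatever the real server dictates.

CREATE DATABASE FinanceDB                 -- hypothetical database name
ON PRIMARY
( NAME = FinanceDB_data, FILENAME = 'E:\SQLData\FinanceDB_data.mdf', SIZE = 10GB ),
FILEGROUP FG_INDEXES
( NAME = FinanceDB_idx, FILENAME = 'F:\SQLIndex\FinanceDB_idx.ndf', SIZE = 5GB ),
FILEGROUP FG_HIGHUSE
( NAME = FinanceDB_hot, FILENAME = 'G:\SQLData\FinanceDB_hot.ndf', SIZE = 5GB )
LOG ON
( NAME = FinanceDB_log, FILENAME = 'H:\SQLLogs\FinanceDB_log.ldf', SIZE = 2GB )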

So, what started the rant?  One of my poor NT Admins went through a challenging process that renamed a service account.  After he corrected everything that this procedure broke, we arrived a couple of days later to find that no one could run any of the reports through the web from this main reporting server.

It turns out that when this SQL Server with SSRS was installed and configured for our company by a third-party vendor, it was performed with the service account that the NT Admin was forced to rename.  Due to this, that account was the “db owner” of the Report Server, including all the internal encryption, and its cached authentication had “run out”.

I found the error immediately, as there were also SQL Server Agent jobs attempting to authenticate with the service account, so the following message was reported in SQL Server’s error logs:
Message
[298] SQLServer Error: 15404, Could not obtain information about Windows NT group/user ‘DOMAIN\SERVICE_ACCT’, error code 0xffff0002. [SQLSTATE 42000] (ConnIsLoginSysAdmin)
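To see how far the problem reaches, you can also list every Agent job along with its owner and look for the old account, or for owners that no longer resolve.  A minimal sketch against msdb:

SELECT j.name AS job_name,
       SUSER_SNAME(j.owner_sid) AS job_owner,
       j.enabled
FROM msdb.dbo.sysjobs j
ORDER BY job_owner, job_name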
It wasn’t as simple as changing the database owner for the ReportServer database, as there are encryption keys that are created at the time of installation.  I will say, they have simplified the process in 2008 vs. earlier versions, though! 🙂

• Change the ownership of the ReportServer and ReportServerTempDB databases to the DOMAIN\NEW_SRVC_ACCT account, (as it should originally have been performed) using the stored proc sp_changedbowner:
USE ReportServer
GO
EXEC sp_changedbowner 'DOMAIN\NEW_SRVC_ACCT'
GO
-- run the same against the ReportServerTempDB database
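A quick sanity check that the ownership change took on both databases; just a small sketch against sys.databases:

SELECT name, SUSER_SNAME(owner_sid) AS db_owner
FROM sys.databases
WHERE name IN ('ReportServer', 'ReportServerTempDB')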
• Start the Reporting Services Configuration Manager and connect to the database repository for the SSRS.

  1. Click on Service Account and change from “Network Service” to “Use Another Account”.  Type in the username and password of the new DB Owner you specified in the sp_changedbowner step.
  2. Click Apply
  3. Click on “Encryption Keys” on the left and choose to backup, (always, always backup after every change.)
  4. Choose a secure local location, (and also make a copy of these on your backup server each time..) and password protect it.
  5. Click Apply.
  6. You have now officially reset everything from the old domain user account to the new one.  Now you have to set everything back to the Network Service account.
  7. Click again on Service Account and Change the “Use Another Account” back to “Network Service”
  8. Click Apply
  9. Click on “Encryption Keys” and make another backup, saving to a new file name each time you perform this step, (date and time in the file name helps…)  Saving a copy of the final encryption keys off to the backup server is essential if you wish to recover the SSRS some day, so DON’T FORGET!!
  10. Change the ownership of any other databases with a simple execution of sp_changedbowner as seen above, (only SSRS requires the other steps…) and change any agent jobs or other services that are running as the now-missing service account, (see the sketch after this list.)
  • Test reports and you should be good to go.
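For that last ownership step, the Agent job owners can be flipped from T-SQL rather than through the GUI.  A minimal sketch; the job name is a hypothetical placeholder and msdb.dbo.sp_update_job is the standard procedure for this.

USE msdb
GO
-- Re-point a job that was owned by the renamed account to the new service account.
EXEC dbo.sp_update_job
@job_name = N'Nightly Finance Load',          -- hypothetical job name
@owner_login_name = N'DOMAIN\NEW_SRVC_ACCT'
GO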

Many issues, like the one above, can be avoided if a DBA works with the NT Admin to ensure the server is built correctly to support SQL Server, and if a DBA performs the installation and configuration of the database server.

**Added 10/07/10 after an email or two with a couple of other SQL Server DBAs-
If you don’t know any of the topics that I listed as requirements to install SQL Server, it should also be a requirement to know these before you are aliased as DBO or given admin privileges on a SQL Server box-  nuff’ said!!
~done rant!~ 🙂

Posted in SQLServer
