Friday, December 13, 2013

SQL Server - Auditing sessions over time - Code example

So today I had a requirement to provide a quick and dirty way of collecting data on the sessions connected to SQL Server.  Specifically, I had to write a job that collected basic data on the current sessions and recorded them into a table if they hadn't previously been seen.  If a session was identical in every way other than the session ID, a counter called 'session count' was incremented for that session instead.

The challenging part here was providing what is basically a mashup of an INSERT and UPDATE statement, sometimes called an UPSERT.  In SQL Server 2008 and later versions, this functionality is provided using the MERGE statement.  My requirement here is to update a table with values if they exist; otherwise, insert them.  
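In skeleton form - the table and column names here are placeholders, not anything from the real script - a MERGE upsert looks like this: match source rows to target rows on a key, update on a match, insert otherwise.

MERGE INTO dbo.TargetTable AS tgt
USING dbo.SourceTable AS src
    ON tgt.KeyColumn = src.KeyColumn
WHEN MATCHED THEN
    UPDATE SET tgt.ValueColumn = src.ValueColumn
WHEN NOT MATCHED THEN
    INSERT ( KeyColumn, ValueColumn )
    VALUES ( src.KeyColumn, src.ValueColumn );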

So here's an example of how to use the MERGE statement in the context of my requirement.  The example code is below.  You can modify it to suit your purpose - for example, write out to a permanent table, put it in a job, query the table in a different way, or monitor requests rather than sessions - or rewrite it entirely, or simply keep it as a reference example of MERGE.

There's also a self-join in there, which provides the columns on which to match.  This isn't strictly essential, but I needed to match on those columns and then update only the last login time and last request end time when a match was found.  The outer query simply aggregates the results in the target table to provide a good view of the data.

Note this won't collect data on sessions which are opened and closed between runs of the collection.  So if you modify this code - e.g. by using a permanent table and putting the code into a job (you can't use a temp table in that context; see the sketch after the script) - run the job frequently.  If you run it, say, every 15 minutes and a session is created and destroyed between the job execution intervals, you won't capture that data.



-- ---------------------------------------------------------------------------------------------
-- Script to audit sessions on the server. Accumulates session counts per distinct group of
-- values from sys.dm_exec_sessions: ignores duplicate sessions (by session_id and unique
-- set of values), updates when a new session is started with matching key values,
-- and inserts when a brand new session is opened. Uses MERGE.

-- Run inside a stored procedure, or in the query window, multiple times over the course of X
-- hours to get a full list of sessions.
-- Add filters if necessary on the final SELECT.
-- Change #Logins to a permanent table if required.

-- Requires SQL Server 2008 or later (MERGE is not available in 2005 and earlier).

-- Author: Derek Colley, 13/12/2013
-- ---------------------------------------------------------------------------------------------

-- TRUNCATE TABLE #Logins -- for debug

IF NOT EXISTS ( SELECT name FROM tempdb.sys.tables WHERE name LIKE '#Logins%' )
    CREATE TABLE #Logins (
        uqid INT IDENTITY(1,1),
        session_id INT,
        last_login_time DATETIME,
        host_name NVARCHAR(128),
        program_name NVARCHAR(128),
        client_interface_name NVARCHAR(32),
        login_name NVARCHAR(128),
        last_request_end_time DATETIME )

DECLARE @CurrentSessions TABLE (
    session_id INT,
    login_time DATETIME,
    host_name NVARCHAR(128),
    program_name NVARCHAR(128),
    client_interface_name NVARCHAR(32),
    login_name NVARCHAR(128),
    last_request_end_time DATETIME )

-- Snapshot the current user sessions, excluding this one and the system sessions
INSERT INTO @CurrentSessions
    SELECT   session_id, login_time, host_name, program_name, client_interface_name,
             login_name, last_request_end_time
    FROM     sys.dm_exec_sessions
    WHERE    session_id <> @@SPID
    AND      session_id > 50

-- Upsert: refresh the timestamps for known sessions, insert brand new ones
MERGE INTO #Logins
USING (
        SELECT      c.session_id, c.login_time, c.host_name, c.program_name,
                    c.client_interface_name, c.login_name, c.last_request_end_time
        FROM        @CurrentSessions c
        LEFT JOIN   #Logins l
        ON          c.session_id = l.session_id
        AND         c.host_name = l.host_name
        AND         c.program_name = l.program_name
        AND         c.client_interface_name = l.client_interface_name
        AND         c.login_name = l.login_name ) AS CurrentSessions
    ON      #Logins.session_id = CurrentSessions.session_id
    AND     #Logins.host_name = CurrentSessions.host_name
    AND     #Logins.program_name = CurrentSessions.program_name
    AND     #Logins.client_interface_name = CurrentSessions.client_interface_name
    AND     #Logins.login_name = CurrentSessions.login_name
WHEN MATCHED THEN
    UPDATE
    SET     last_login_time = CurrentSessions.login_time,
            last_request_end_time = CurrentSessions.last_request_end_time
WHEN NOT MATCHED THEN
    INSERT  ( session_id, last_login_time, host_name, program_name, client_interface_name,
              login_name, last_request_end_time )
    VALUES  ( CurrentSessions.session_id, CurrentSessions.login_time, CurrentSessions.host_name,
              CurrentSessions.program_name, CurrentSessions.client_interface_name,
              CurrentSessions.login_name, CurrentSessions.last_request_end_time );

-- Aggregate the audit table: one row per distinct session 'shape', with a session count
SELECT      COUNT(*) [session_count], MAX(last_login_time) [last_login_time], host_name,
            program_name, client_interface_name, login_name,
            MAX(last_request_end_time) [last_request_end_time]
FROM        #Logins
GROUP BY    host_name, program_name, client_interface_name, login_name
ORDER BY    COUNT(*) DESC
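
As promised above, here's a rough sketch of persisting and scheduling this via SQL Server Agent.  The names (MyDatabase, dbo.usp_CollectSessions, the job name) are my own placeholders, and it assumes you've swapped #Logins for a permanent table and wrapped the MERGE above in a stored procedure:

-- Sketch only: assumes #Logins is now a permanent table and the collection
-- code above lives in a stored procedure called dbo.usp_CollectSessions.
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Collect session audit';
EXEC dbo.sp_add_jobstep @job_name = N'Collect session audit',
    @step_name = N'Run collector',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_CollectSessions;',
    @database_name = N'MyDatabase';
-- Every 5 minutes, all day (freq_type 4 = daily, freq_subday_type 4 = minutes)
EXEC dbo.sp_add_jobschedule @job_name = N'Collect session audit',
    @name = N'Every 5 minutes',
    @freq_type = 4, @freq_interval = 1,
    @freq_subday_type = 4, @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'Collect session audit';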
   

Monday, June 24, 2013

Vendors! Wake up and hire a DBA!


Time to have a rant about third-party vendor databases.  Yes, I'm looking at you Sage, VersionOne, and most especially, YOU, Hybris CRM.

Yes, you.  Mr. 'Let's not use clustered indexes.'  'Let's model our schemata using an ORM.'  'Let's call our primary key column PK on almost every table.'  'Let's not bother naming any of our constraints.'  'And using stored procedures or functions - that's so 1990s, man.'

I mean, why NOT use BIGINT for every single integer column.  Or VARCHAR(255).  No, wait a minute, that's way too efficient.  Let's use NVARCHAR(255) instead.  For everything.

Take a look at this, part of the table definition from a vanilla Hybris CRM installation:


CREATE TABLE [dbo].[addresses](
[hjmpTS] [bigint] NULL,
[TypePkString] [bigint] NOT NULL,
[PK] [bigint] NOT NULL,
[createdTS] [datetime] NOT NULL,
[modifiedTS] [datetime] NULL,
[OwnerPkString] [bigint] NULL,
[aCLTS] [int] NULL,
[propTS] [int] NULL,
[p_dateofbirth] [datetime] NULL,
[p_middlename2] [nvarchar](255) NULL,
[p_streetname] [nvarchar](255) NULL,
[p_contactaddress] [tinyint] NULL,
[titlepk] [bigint] NULL,
[p_phone1] [nvarchar](255) NULL,
[p_remarks] [nvarchar](255) NULL,
[p_firstname] [nvarchar](255) NULL,
[p_phone2] [nvarchar](255) NULL,
[originalpk] [bigint] NULL,
[p_fax] [nvarchar](255) NULL,
[p_shippingaddress] [tinyint] NULL,
[p_streetnumber] [nvarchar](255) NULL,
[p_gender] [bigint] NULL,
[p_url] [nvarchar](255) NULL,
[p_district] [nvarchar](255) NULL,
[countrypk] [bigint] NULL,
[p_lastname] [nvarchar](255) NULL,
-- continues on for a loooooong time...

'p_gender' - BIGINT??? Really?  So there's 2^64, or 18,446,744,073,709,551,616 possible genders, are there?  On what planet?  Or how about 'PK' for the primary key?  In column 3, of all places?  I sincerely hope the user isn't intending to put some long ASP.NET-generated URL in 'p_url', since it's limited to 255 characters.  And 'p_dateofbirth' clearly needs to store time information too, to the millisecond, as that's important when profiling your customers.
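For contrast, here's roughly what saner typing might look like for a few of those columns - my own suggestions, obviously, not anything the vendor ships:

-- Illustrative alternatives, not actual Hybris DDL:
-- p_gender        TINYINT          -- 0-255 in one byte, rather than eight
-- p_dateofbirth   DATE             -- a birthday doesn't need millisecond precision
-- p_url           NVARCHAR(2083)   -- room for a browser-length URL, not a 255-character stub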

I'm dreading the day when we actually start using your hated system in production and your table scans and parameter-sniffed execution plans slam into my servers like a colossal tidal wave of shit.  I can absolutely guarantee that when I ring the vendor to complain I'll get through to some poor sod on a mere handful of Vietnamese Dong per hour telling me 'prease to call back in mornink'.  Believe me when I say I'm beginning to batten down the hatches now, and when 'main screen turn on' I'll be hiding in the server room, watching our monitoring software with my fist in my mouth.

Sage, you're not off the hook.  No matter how hard you insist, a database is not a 'company'.  It's a database.  I admire your overall design, placing your metadata in a separate DB and company / entity-specific information in separate DBs.  I don't appreciate you writing metadata to 'master' and bursting into tears when I remove it.  Nor, frankly, are your table structures much cop either.  It's not hilarious to dress up your SQL Server-related errors in another error wrap, spreading confusion and delay among the various support teams:

'An unexpected error has occurred.  SQL code 208.' (Sage)
=
'Msg 208, Level 16, State 1, Line 1 - Invalid object name ...' (SQL Server)

Why not just return the second message?  The problem will land in the lap of the DBAs anyway, won't it?  Save time!  'An unexpected error has occurred' simply means 'Something went wrong and I (the developer) can't be fucked to write a coherent error message, preferring to let support teams scramble to look up SQL Server error codes in sys.messages until the application dies a death and they switch to something written properly.'

Vendors!  Wake up!  It's time to start reconsidering old-fashioned concepts like 'testing' and 'good design'.  I know we're all about Agile now, but 30 years ago we had good, solid texts like 'The Theory of Relational Databases' (David Maier - free eText here) full of things like ... wait for it ... relations!  Functional dependencies!  Normalisation!  I know, yawn, yawn, why not just let nHibernate create it all for us, yeah, yeah ... But there are good, solid systems out there right now using databases built on these basic principles.  

Vendors, don't give up your day job.  Develop great apps and leave the database administration to the professionals.


The information and views expressed in this article are my own and do not represent the opinion or views of my employer or any third-party. 


Thursday, June 13, 2013

Killer Code - Nested Aggregates using CTEs 

(Or, 'what NOT to do on a production SQL Server box')


Yesterday, I was experimenting with pulling out some figures in SQL Server using windowed functions.  SQL Server 2012 comes with a couple of really neat functions called LAG and LEAD, which are useful for selecting values from n rows behind or ahead of a particular row, in an order you specify.  There are other articles out there that deal with using LAG/LEAD, so I'm not going to cover them here, save to give an example and the requirement I was aiming at.

For reasons of confidentiality I sadly cannot reproduce the exact code I was using, so here's a simplified example instead, starting with an initial table configuration:

CREATE TABLE dbo.TestData (
DateCreated DATE,
ArbitraryValue NUMERIC(16,2) )

INSERT INTO dbo.TestData
SELECT '2013-06-01', 342.14 UNION ALL
SELECT '2013-06-01', 659.45 UNION ALL
SELECT '2013-06-01', 283.49 UNION ALL
SELECT '2013-06-01', 903.34 UNION ALL
SELECT '2013-06-01', 129.06 UNION ALL
SELECT '2013-06-01', 756.65 UNION ALL
SELECT '2013-06-01', 239.05 UNION ALL
SELECT '2013-06-01', 194.52 UNION ALL
SELECT '2013-06-01', 804.44 UNION ALL
SELECT '2013-06-01', 116.69 


So, what I wanted was to have a third column, produced during a SELECT, which gave me the difference between each ArbitraryValue and the value in the row preceding it.  Using LAG, this was quite easy:

SELECT   DateCreated, ArbitraryValue,
         ArbitraryValue - LAG(ArbitraryValue, 1, NULL)
             OVER ( ORDER BY DateCreated ASC ) [Difference]
FROM     dbo.TestData
ORDER BY DateCreated ASC

Next, I decided I wanted a fourth column to work out the average of the differences - I'm sure there's a mathematical term for this not dissimilar to standard deviation, but I'm not a mathematician.  I tried defining a fourth column that averaged the values in all preceding rows using the AVG aggregate inside a subquery (with DateCreated standing in as a key).  The purpose of this column was to illustrate, over the DateCreated column, the regression to the mean of the differences in the ArbitraryValue column, with the latest value being the 'best' available average:

SELECT   t1.DateCreated, t1.ArbitraryValue,
         t1.ArbitraryValue - LAG(t1.ArbitraryValue, 1, NULL)
             OVER ( ORDER BY t1.DateCreated ASC ) [Difference],
         ( SELECT AVG(t2.ArbitraryValue)
           FROM   dbo.TestData t2
           WHERE  t2.DateCreated < t1.DateCreated ) [Average_Difference]
FROM     dbo.TestData t1
ORDER BY t1.DateCreated

This works wonderfully - for a small result set.  When I tested this with absolute values in rows, there was no problem.  The execution plan doesn't look bad either - table scans, but then again I haven't defined an index.
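
(If this were anything more than a scratch table, a clustered index on the ordering column would be the obvious first step - something like the line below, index name mine.)

CREATE CLUSTERED INDEX IX_TestData_DateCreated ON dbo.TestData ( DateCreated );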

So, I modified the query to use aggregates in place of absolute values, as this was the actual requirement.  Let's modify the table definition and content as follows which will illustrate what I mean:

TRUNCATE TABLE dbo.TestData
ALTER TABLE dbo.TestData DROP COLUMN ArbitraryValue
INSERT INTO dbo.TestData ( DateCreated )
SELECT '2013-06-01' UNION ALL SELECT '2013-06-01' UNION ALL
SELECT '2013-06-01' UNION ALL SELECT '2013-06-01' UNION ALL
SELECT '2013-06-02' UNION ALL SELECT '2013-06-02' UNION ALL
SELECT '2013-06-03' UNION ALL SELECT '2013-06-03' UNION ALL
SELECT '2013-06-03' UNION ALL SELECT '2013-06-04' UNION ALL
SELECT '2013-06-05' UNION ALL SELECT '2013-06-05' UNION ALL
SELECT '2013-06-05' UNION ALL SELECT '2013-06-06' UNION ALL
SELECT '2013-06-06' UNION ALL SELECT '2013-06-06' UNION ALL
SELECT '2013-06-06' UNION ALL SELECT '2013-06-07' UNION ALL
SELECT '2013-06-07' UNION ALL SELECT '2013-06-08' UNION ALL
SELECT '2013-06-08' UNION ALL SELECT '2013-06-08' UNION ALL
SELECT '2013-06-09' UNION ALL SELECT '2013-06-09' UNION ALL
SELECT '2013-06-10'

So now I have a table with ten distinct values in it, 25 rows in total.  I now want to modify my SELECT query to give me a row count per distinct value (i.e. a row count per distinct day), and calculate the day-to-day differences between these counts.  In the real scenario, each row in the table corresponded to an event E; I needed to count the events grouped by day, the differences between these counts, and the average of these differences.  This, unfortunately, didn't work, throwing up an interesting error message:

SELECT    t1.DateCreated, COUNT(t1.DateCreated),
   COUNT(t1.DateCreated) 
           - LAG(COUNT(t1.DateCreated), 1, NULL) 
           OVER ( ORDER BY t1.DateCreated ) [Difference],
(  SELECT  AVG(COUNT(t2.DateCreated))
   FROM    dbo.TestData t2
   WHERE   t2.DateCreated < t1.DateCreated 
        )  [Average_Difference]
FROM    dbo.TestData t1
GROUP BY   t1.DateCreated 


Msg 130, Level 15, State 1, Line 3
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.


This is due to the SELECT AVG(COUNT(t2.DateCreated)).  Fair enough - SQL Server won't allow an aggregate over an aggregate, and within a single group it would be pointless anyway, since AVG(COUNT(x)) == COUNT(x) for a single value.  However, not only have I defined a WHERE clause, I've included the whole thing as an extra column using the subquery.  While logically this should be fine, it's still a case of 'computer says no'.

I decided to get around this restriction by writing the query as follows, using a CTE for the outer query in place of the subquery, meaning I didn't have to calculate the AVG and COUNT in the same statement:

;WITH t2 (DateCreated, DateCount, Difference) AS (
    SELECT   t1.DateCreated, COUNT(*),
             COUNT(*) - LAG(COUNT(*), 1, NULL)
                 OVER ( ORDER BY t1.DateCreated ) [Difference]
    FROM     dbo.TestData t1
    GROUP BY t1.DateCreated )
SELECT   t3.DateCreated, t3.DateCount, t3.Difference,
         ( SELECT AVG(t2.Difference)
           FROM   t2
           WHERE  t2.DateCreated <= t3.DateCreated ) [Average_Difference]
FROM     t2 t3
ORDER BY t3.DateCreated ASC

For a small result set, this worked beautifully, returning the average difference in the fourth column.  The problem came when I tried to scale it up.  Result sets for sub-10,000 rows take a few seconds at most.  Running this query on a set of 320,000 rows (specified using a WHERE clause which I've omitted from the example above, for clarity) took around 10sec.  Scaling it up a bit, I ran this query on 10,000,000 rows (about a year's worth of data) to get the results back in about 1m 30sec.  And finally, I attempted to run this for three years of data, estimated about 30m rows.

Oh dear. 

Immediate, total devastation.  A quick look at Spotlight showed all 16 cores going flat-out at 100% CPU.  Alarms started going off.  A quick check of IOSphere, the FusionIO software, showed I had managed to achieve what 256 threads of SQLIO at full-tilt could not - complete saturation of FusionIO read-write capability (and bear in mind this can cope with about 24Gbps, around 500k IOPS read, 500k IOPS write - see here for more, http://www.fusionio.com/products/iodrive2-duo/).  I stopped the query immediately, and went for the autopsy.

The execution plan showed massive scalar computation activity: the nested reference to t2.Difference caused both COUNT(*) expressions in the CTE to be re-evaluated for each and every row returned.  That, plus the arithmetic on the same line, the cost of working with the CTE, and the subquery AVG calculation, created a massive amount of set-based load that smashed into the CPU schedulers like an iron fist.  

I would love to reproduce the execution plans here but shouldn't due to NDA-related reasons.  I've got my three years of average differences by using Excel to generate the fourth column instead of SQL.  And I'm sure (and if anyone's reading this, they may post) there's fifty different ways of refactoring this query.  My next read is going to be Itzik Ben-Gan's 'Microsoft SQL Server 2012 High Performance T-SQL Using Window Functions', so next time I can do it properly.
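
For what it's worth, here's the first refactor I'd try - a sketch only, which I haven't run at scale.  Since SQL Server 2012 supports frames on aggregate window functions, the correlated subquery can become a running AVG over the CTE, computed in a single pass:

;WITH t2 (DateCreated, DateCount, Difference) AS (
    SELECT   t1.DateCreated, COUNT(*),
             COUNT(*) - LAG(COUNT(*), 1, NULL)
                 OVER ( ORDER BY t1.DateCreated ) [Difference]
    FROM     dbo.TestData t1
    GROUP BY t1.DateCreated )
SELECT   DateCreated, DateCount, Difference,
         AVG(Difference) OVER ( ORDER BY DateCreated
                                ROWS BETWEEN UNBOUNDED PRECEDING
                                AND CURRENT ROW ) [Average_Difference]
FROM     t2
ORDER BY DateCreated ASC
-- Difference is an INT, so as in the original, AVG does integer averaging;
-- CAST to NUMERIC inside the AVG if you want decimal places.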

Lessons learned - don't test on production, no matter how tempting.  And don't nest aggregates.

Wednesday, May 15, 2013

From 0 to sysadmin in 30 seconds...

This is a quick tip for anyone put on the spot like I was today...

Picture the scene.  It's five to five, I'm packing up my laptop and finding my headphones.  I'm about to leave the office.  In a hurry, one of our engineers comes over and explains how the on-site supplier is having a problem accessing an obscure, unsupported SQL Server database.  They can't get access.

So I go over to have a look.  It's a SQL Server 2005 DB on the application server.  I've never seen it before.  The engineer explains how they have a username, but the password is lost.  SQL Server Management Studio isn't installed.  No-one knows how to retrieve or reset the password for the user they know of.

All faces turn to you.  What do you do?

In summary:

Start -> Run -> cmd
sqlcmd -Slocalhost -E
(if you're lucky, you'll get in, as 2005 has local admins as sysadmins by default).
If this doesn't work - sqlcmd -Slocalhost -E -A

You're in.  If you're not, get local admin access on the machine first.  Now...

create login me with password = 'Pass1234';
(add CHECK_POLICY = OFF at the end if the password policy complains)
go
exec sp_addsrvrolemember @loginame = 'me', @rolename = 'sysadmin';
go
exit
sqlcmd -Slocalhost -Ume -PPass1234

You're now sysadmin.
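
(If you want to double-check before going any further, this returns 1 for a sysadmin:)

select is_srvrolemember('sysadmin');
go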

alter login [someLogin] with password = 'new password';
(someLogin is the username they've supplied.  Choose a new password.)
go
exit

Job done.  Coat on, go home.