Match those types!

This is a recommendation I believe is worth repeating from time to time: make sure you match data types when you write TSQL code. Otherwise you will in most cases end up with an implicit data type conversion, and in the worst case that conversion is performed at the column side, not the literal side, of your query. What does that mean? Consider below:

WHERE Col = Val

Now, say that the types above don't match. Val might be a parameter (to a stored procedure, for instance), a variable or a written value (literal). In any case, when SQL Server needs to perform some operation (like the comparison here) involving values which aren't of the same type, one of the values has to be converted so it is of the same type as the other. Which one is determined by the rules for "Data Type Precedence".
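As a tiny illustration of precedence (the expression below is made up for this example and has nothing to do with the repro further down), you can ask SQL Server which type an expression ends up as:

--nvarchar has higher precedence than varchar, so the varchar value is
--converted and the expression as a whole becomes nvarchar
SELECT SQL_VARIANT_PROPERTY(CAST('1' AS varchar(10)) + CAST(N'2' AS nvarchar(10)), 'BaseType')
--Returns nvarchar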

What we don't want is a conversion at the column side. Why? I would argue that an implicit conversion in the first place often means I misunderstood something about the types in my system and am on my way to producing a bug in my code. But having a stricter language is not the topic for this particular post (check out this). My point here is that it is bad for performance reasons. Just yesterday and today I was involved in a thread on the MSDN forum. Here's the repro script from that thread (slightly adjusted by me):

USE tempdb
GO
IF OBJECT_ID('t') IS NOT NULL DROP TABLE t
GO
CREATE TABLE t
(
c1 int IDENTITY(10000001,1) NOT NULL
,c2 char(8) NULL
,c3 datetime NULL
)
GO
INSERT INTO t(c3)
SELECT TOP 3000000 '20080203'
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
UPDATE t SET c2 = CAST(c1 AS char(8))
CREATE UNIQUE CLUSTERED INDEX x ON t(c2)
GO

SELECT * FROM t WHERE c2 = N'10000009'
SELECT * FROM t WHERE c2 = N'10000008'

Don't bother too much about the first section. The relevant part is that we have a char(8) column with values which are unique (enforced by a unique index), on a table with three million rows.

Now, before we dig deeper: what is the problem with the above queries? The types do not match! The type in the table is char(8), but the literal is Unicode (the N before the string). Since nchar has higher precedence than char, the column side needs to be converted to match the value side. Not good. Try the code on SQL Server 2000 and you will see a disaster. For me, STATISTICS IO gave 10755 logical reads for both queries, with a Clustered Index Scan (which is the same as a table scan). Hover over the Clustered Index Scan icon and you will find "WHERE:(Convert([t].[c2])=[@1])". See how the conversion is done at the column side? This, my friend, means that SQL Server cannot seek the index to find the rows. Ouch! That's just because we were a bit sloppy... Compare to below:

SELECT * FROM t WHERE c2 = '10000009'
SELECT * FROM t WHERE c2 = '10000008'

You will find that for both of the above we get a Clustered Index Seek with only three logical reads. That is a good improvement.

OK, enough about SQL Server 2000, let's discuss 2005/2008. At first I couldn't see any difference between the two original queries (N'10000009' and N'10000008'). Both seemed to be instant, with the same execution plan. I even saved the execution plans as XML and used WinDiff to compare the XML files. No difference. But the poster said the "9" query was slower than the "8" query. I executed with results to text and held Ctrl-E pressed for a while, and now I could see a difference: the flicker was quicker for the "8" version than for the "9" version. I used Profiler and confirmed it. The "9" version consistently had a duration of about 90000 microseconds, where the "8" version had a duration of about 0. This was also reflected in CPU usage, as shown by Profiler. I hoped to use Extended Events to find something deeper about this, but that didn't give me anything. Perhaps I simply didn't capture the relevant events. X/Event experts are welcome to suggest event session configurations. 🙂

My guess is that the difference lies in the execution plan. You will see an index seek for both. That is good, since SQL Server nowadays can potentially use an index even if you end up with an implicit conversion at the column side. But you will see a few more operators in addition to the index seek. See that Constant Scan followed by a Compute Scalar? My guess is that the built-in internal function used inside the Compute Scalar operator, GetRangeThroughConvert(), is more expensive for N'10000009' than for N'10000008'. You can see this function if you use SET STATISTICS PROFILE ON. My guess is that the answer is the cost of this function. (Search the Internet for things like "Constant Scan" and "GetRangeThroughConvert" for other blog posts on this topic.)
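If you want to look for yourself, something along these lines (using the repro table from above) should show the Constant Scan and the Compute Scalar in the plan output, including the expression that contains GetRangeThroughConvert():

SET STATISTICS PROFILE ON
SELECT * FROM t WHERE c2 = N'10000009'
SET STATISTICS PROFILE OFF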

The moral of the story? Here we have a cost which can be relevant in some cases, but we will only see it if we happen to execute our query with the right values and are very, very observant. We can eliminate all this mess by matching the type of the literal to the type of the column in the first place!

Do you have Instant File Initialization?

You might ask yourself: What is Instant File Initialization and why does it matter?

What Instant File Initialization is:
Instant File Initialization allows SQL Server to allocate storage (space from disks) very, very quickly. As you probably know, when you delete files they are not physically removed from the disk; they are only marked as deleted in the file system allocation structures. This is why various undelete programs can recover deleted files. So imagine a process requiring disk space. The disk area given to the process by the file system and operating system could potentially contain secret information from deleted files. This is why the file system/OS insists on zeroing out the storage before the process can actually see it. (I'm not an OS/file system expert, so anyone is free to correct me here.) That is, unless the one allocating the storage has a special privilege.

When does the zeroing out of disk space occur?
Whenever SQL Server needs disk space, such as:

  • Create database
  • Add file to database
  • Restore (if the restore process includes database creation)
  • File growth (both manual and auto-grow)
  • Backup [edit 2011-08-26: Not sure how this got here, AFAIK shouldn’t be here]

Can we avoid the slow zeroing out space?
Yes, but only if you are on SQL Server 2005 or higher, and only for some operations: creation and allocation of space for database data files (mdf and ndf). The SQL Server service account needs to have the appropriate permission from the OS. To be more precise, it needs a privilege called SE_MANAGE_VOLUME_NAME. This is granted to Administrators by default. Do you run your SQL Server as an account that is a member of Administrators? I hope not. Did you grant this permission to the service account?

How do I grant this privilege?
This is easy. Add the SQL Server service account to the “Perform Volume Maintenance Tasks” security policy.
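If memory serves, the GUI path is: run secpol.msc (Local Security Policy), go to Local Policies, User Rights Assignment, open "Perform volume maintenance tasks" and add the service account there. Note that the SQL Server service needs to be restarted before the privilege is picked up.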

Does it matter?
You be the judge. Just to give you an idea, I created a database with data files of various sizes (keeping the log file at 1MB for all tests so it would influence the timings as little as possible), and timed it both with and without Instant File Initialization. I ran it on my desktop machine, which has a RAID0 of two 7200RPM disks:

Size    Without IFI    With IFI
1 GB    10.3 s         0.3 s
10 GB   128 s          1.3 s
50 GB   663 s          4.5 s

The difference is roughly a factor of 100!

When does it hurt?
Basically every time disk storage is allocated. But let us focus on the occasions where you can do anything about it, i.e., when you can have Instant File Initialization. Such occasions include:

  • Whenever a database is created. Space needs to be allocated for the data file(s).
  • Whenever a data file is manually expanded in size.
  • Whenever you have auto-grow for a data file. Note that potentially some poor user will now sit and wait for the auto-grow to complete.
  • When you start SQL Server. Why? Tempdb needs to be re-created.
  • When you perform a restore, if the destination database doesn't already exist with a matching database file structure.

How can I tell if I have Instant File Initialization?
I find it easiest to just create a database and time it, using some reasonable size for your data file, like 5GB. Actually, run two tests: one with a 5GB data file and a really small log file, and another with a very small data file and a 5GB log file. Remember that we never have Instant File Initialization for log files. For instance, run the code below and you will see (adjust the file path for the database files). You may need to adapt the code for file paths, possibly the database names, and the datetime handling if you are on something lower than SQL Server 2008:

DECLARE @t time(3) = SYSDATETIME()
CREATE DATABASE IFI_test_ld
ON PRIMARY
(NAME = N'IFI_test', FILENAME = N'C:\IFI_test\IFI_test_ld.mdf', SIZE = 5GB, FILEGROWTH = 100MB)
LOG ON
(NAME = N'IFI_test_log', FILENAME = N'C:\IFI_test\IFI_test_ld.ldf', SIZE = 1MB, FILEGROWTH = 10MB)
SELECT DATEDIFF(ms, @t, CAST(SYSDATETIME() AS time(3))) AS LargeDataFile
SET @t = SYSDATETIME()
CREATE DATABASE IFI_test_ll
ON PRIMARY
(NAME = N'IFI_test', FILENAME = N'C:\IFI_test\IFI_test_ll.mdf', SIZE = 3MB, FILEGROWTH = 100MB)
LOG ON
(NAME = N'IFI_test_log', FILENAME = N'C:\IFI_test\IFI_test_ll.ldf', SIZE = 5GB, FILEGROWTH = 10MB)
SELECT DATEDIFF(ms, @t, CAST(SYSDATETIME() AS time(3))) AS LargeLogFile

Are the numbers for the above two about the same? If yes, then you don't have Instant File Initialization. If the one with the large data file is much quicker, then you do have Instant File Initialization. And now you also know approximately how long it takes to allocate 1 GB worth of data and log file for your SQL Server.

John Samson blogged about an alternative way to check, involving trace flags.
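I haven't included his exact steps here, but the general idea (a sketch, and the database name below is just something I made up) is to turn on trace flags 3004 and 3605 so that file initialization is logged to the errorlog, create a throw-away database and then read the errorlog:

DBCC TRACEON(3004, 3605, -1)
CREATE DATABASE IFI_traceflag_test
GO
--Look for "Zeroing ..." messages in the errorlog. With Instant File Initialization
--you should only see zeroing for the ldf file, not for the mdf file.
EXEC sp_readerrorlog
GO
DBCC TRACEOFF(3004, 3605, -1)
DROP DATABASE IFI_traceflag_test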

I’m curious: Did you have Instant File Initialization?

Does the Resource Governor govern?

Two weeks ago we did the "SQL 2008 Summit" roadshow here in Sweden: 4 cities around Sweden in 4 days (http://www.expertzone.se/sql2k8/). It was a bit exhausting, but even more fun, being able to travel and spend some time with persons I wish I could meet more often (Kalen), others I meet regularly but only at the workplace (Roger, Patrik, Anna) and yet other persons I just met (George, Per).

One of my presentations was on Resource Governor (RG), and I had this super-simple demo meant to show CPU throttling. I classified connections into one of two Workload Groups based on login name. One group used a Resource Pool with max CPU at 10% and the other a Resource Pool with max CPU at 90%. Since I have two CPU cores, I started two execution loops for each login. An execution loop uses SQLCMD to log in using the appropriate login ID and execute a proc which loops and outputs a counter using RAISERROR with NOWAIT (so we see something happening in the console).
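I'm not including the actual demo files here, but the setup was along these lines (pool, group and login names below are made up for this sketch; the classifier function lives in master):

USE master
GO
CREATE RESOURCE POOL PoolLow WITH (MAX_CPU_PERCENT = 10)
CREATE RESOURCE POOL PoolHigh WITH (MAX_CPU_PERCENT = 90)
CREATE WORKLOAD GROUP GroupLow USING PoolLow
CREATE WORKLOAD GROUP GroupHigh USING PoolHigh
GO
--Classify connections to a workload group based on login name
CREATE FUNCTION dbo.RgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
RETURN CASE SUSER_SNAME()
WHEN 'LowCpuLogin' THEN N'GroupLow'
WHEN 'HighCpuLogin' THEN N'GroupHigh'
ELSE N'default'
END
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.RgClassifier)
ALTER RESOURCE GOVERNOR RECONFIGURE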

For two of my presentations it worked just as expected. For two presentations it didn't: the CPU usage looked very, very strange, nowhere near what we expected. So, during the final day, I managed to spend some time with Mikael Colliander from MS Sweden. First we couldn't reproduce the strange behavior, but after several restarts, re-boots etc. we could. We finally got to look at which scheduler each connection was using, and there was the answer. One connection (ending up in the pool with max 10% CPU) was alone on one scheduler, meaning alone on one CPU! The other three connections (one on max 10% CPU and two on max 90% CPU) were using the other CPU. So for the CPU where we had only one connection (belonging to the pool capped at 10% CPU) we simply had no contention, and that connection could use everything on that CPU since nobody else was assigned to it.
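For reference, the type of query we used to see which scheduler (and hence which CPU) each connection ended up on is roughly this (the login names are again just placeholders):

SELECT s.session_id, s.login_name, r.scheduler_id
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.login_name IN ('LowCpuLogin', 'HighCpuLogin')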

Now that I understand why this happened, it doesn't look that strange. But I think we need to be very careful when we monitor resource usage for our connections and are using Resource Governor. The more CPUs we have, the smaller the chance that we will see the (perhaps expected) distribution of load.

Execution plan re-use, sp_executesql and TSQL variables

Let me start by saying that the content of this post is not very advanced. If you have read and understood the excellent paper "Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005", https://technet.microsoft.com/en-us/library/cc966425.aspx, you already know what's below, and much more...

I was reading a thread in the open newsgroups today (.tools, posted by Mike) about a piece of advice that the SQL Server 2005 Database Engine Tuning Advisor apparently gave. It advises replacing sp_executesql usage with declaring TSQL variables and using those in the WHERE clause. Translated to the Adventureworks database, it advises that instead of 1) below, we should use 2) below.

1)
EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM Person.Person
WHERE LastName = @P1',
N'@P1 nvarchar(50)', 'Diaz'

2)
DECLARE @P1 NVARCHAR(50)
SET @P1 = 'Diaz'
SELECT FirstName, LastName, PersonType, Title
FROM Person.Person
WHERE LastName = @P1

Now, I could not reproduce this (make DTA give me the same advice, to replace 1) with 2)). Perhaps I misunderstood the poster in the group, or it is because I'm running the SQL Server 2008 DTA and engine, or I'm not looking in the right place for that advice, or my data isn't representative, or I'm running the DTA with some other settings, etc. But say that DTA does indeed give such advice: why would it do that? To be honest, I don't know. It can hardly have enough information to determine whether 1) or 2) is the best choice.

In short: say we have an index on the LastName column, and the name we look for can either be a very common name, like perhaps "Smith", or a not-so-common name, like "Karaszi". For the more common name, a table scan might be the best choice, whereas for the not-so-common name an index seek is probably the best choice. OK, a very simple example, but it serves well for this discussion.

Back to the difference between 1) and 2). There are potentially very important differences between the two:

For 1), SQL Server will determine a plan based on the contents of the parameter when the plan is created. I.e., it can determine selectivity based on that value and decide whether it is a good idea to use an index or not. The plan is then cached and can be re-used. But what if, for the first execution, we pass in something which is very selective, while our typical value is not very selective? Or the other way around? I.e., the plan for one case might not be optimal for some other case. This is where the new optimizer hints in 2005, like OPTIMIZE FOR and the RECOMPILE hints, come in. I will not go into details about these here, since I'm focusing on the differences between 1) and 2). See the white paper I mentioned, and of course Books Online, for more details.
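Just to show what such a hint can look like (I'm not saying either is the right choice here, and the N'Smith' value is only an example), here is 1) with OPTIMIZE FOR, and the same query with RECOMPILE:

--Build the plan as if @P1 were N'Diaz', regardless of what is actually passed in
EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM Person.Person
WHERE LastName = @P1
OPTION (OPTIMIZE FOR (@P1 = N''Diaz''))',
N'@P1 nvarchar(50)', N'Smith'

--Or compile a fresh plan for each execution, paying the compilation cost every time
EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM Person.Person
WHERE LastName = @P1
OPTION (RECOMPILE)',
N'@P1 nvarchar(50)', N'Diaz'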

For 2), SQL Server (or rather: the optimizer) has no knowledge of the contents of the variable when the plan is produced. So it cannot use the statistics histogram to determine selectivity. In the above case, it instead uses density (stored with the statistics, assuming such statistics exist for the column). Density is basically 1 / the number of unique values for the column(s). This might be a good representation for a typical last name, but perhaps not so good for a very common or a very uncommon last name. Now, in 2005, we have hints for these situations as well (RECOMPILE), but again, that is not the point here.

In order for DTA to give the best advice here, it would have to know about the distribution of the data in that column, and also see many executions of the query to determine whether "all" executions use a typical value (in which case sp_executesql might be better) or whether the stored density value is a good representation for "all" queries that are passed from the clients. I very much doubt that DTA has this level of sophistication. Basically, I don't know why it advises this. There might be other aspects, like "avoid dynamic SQL" (whether that even applies to this case is something we could argue about in another post), but DTA is about performance, not best practices.

The bottom line is that these things are not simple, and we should be very cautious with "rules of thumb".

Here's some TSQL code to demonstrate the differences between 1) and 2). As always, only execute it after you have read and understood it!

--Create a copy of the person table
--We will have lots of "Diaz" and very few "Gimmi"
IF OBJECT_ID('p') IS NOT NULL DROP TABLE p
GO
SELECT * INTO p
FROM Person.Person

CREATE INDEX x ON dbo.p(LastName)

--Create lots of Diaz
INSERT INTO p
SELECT BusinessEntityID + 30000, PersonType, NameStyle, Title, FirstName, MiddleName, N'Diaz', Suffix, EmailPromotion, AdditionalContactInfo, Demographics, rowguid, ModifiedDate
FROM p

--Make sure we have up-to-date statistics
UPDATE STATISTICS p WITH FULLSCAN

--Verify execution plan and I/O cost
--for table scan with low selectivity
--and index seek with high selectivity
SET STATISTICS IO ON

DBCC FREEPROCCACHE
SELECT * FROM p WHERE LastName = 'Diaz'
--20183 rows, table scan, 7612 pages

DBCC FREEPROCCACHE
SELECT * FROM p WHERE LastName = 'Gimmi'
--1 row, index seek, 3 pages

------------------------------------------------------------
--sp_execute alternative
------------------------------------------------------------

--Table scan will be used for both because of execution plan re-use
DBCC FREEPROCCACHE
EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1',
N'@P1 nvarchar(50)', 'Diaz'
--20183 rows, table scan, 7612 pages

EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1',
N'@P1 nvarchar(50)', 'Gimmi'
--1 row, table scan, 7612 pages

--Other way around
--Index search will be used for both because of execution plan re-use
DBCC FREEPROCCACHE
EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1',
N'@P1 nvarchar(50)', 'Gimmi'
--1 row, index seek, 3 pages

EXEC sp_executesql N'SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1',
N'@P1 nvarchar(50)', 'Diaz'
--20183 rows, index seek, 20291 pages

------------------------------------------------------------------
--Alternative using variable
------------------------------------------------------------------
DBCC FREEPROCCACHE
DECLARE @P1 NVARCHAR(50)
SET @P1 = 'Diaz'
SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1
GO
--20183 rows, index seek, 20291 pages

DBCC FREEPROCCACHE
DECLARE @P1 NVARCHAR(50)
SET @P1 = 'Gimmi'
SELECT FirstName, LastName, PersonType, Title
FROM p
WHERE LastName = @P1
GO
--1 row, index seek, 1 pages

--Same plan even though very different selectivity
--and emptying plan cache in between

--Estimated 33 rows for both above.
--See if that is drawn from statistics density?

--Formula for density: 1/#OfUniqueValuesInColumn
SELECT 1./COUNT(DISTINCT LastName) FROM p
--0.00082918739

--Does that match density in index statistics?
DBCC SHOW_STATISTICS('p', 'x')
--Yes

--How many rows in the table?
SELECT COUNT(*) FROM p
--39944

--So how many rows would we estimate based on density?
SELECT 0.00082918739 * 39944
--Yep, 33 rows.

--I.e, for the variable alternative, SQL Server has no
--knowledge of contents of those variables so it must use density instead.

--Clean up:
IF OBJECT_ID('p') IS NOT NULL DROP TABLE p

Are inserts quicker to heap or clustered tables?

Is it quicker and/or lower overhead to insert into a heap vs. a clustered table?
I don’t know. So I decided to do a test. Some background information first:

The test was inspired from a sidebar with Gert-Jan Strik in the open newsgroups. Basically I expressed that a heap doesn’t automatically carry lower overhead… just because it is a heap. Now, heaps vs. clustered tables is a huge topic with many aspects. I will not cover anything else here except inserts into a heap vs. a table which is clustered on an ever increasing key. No other indexes. There will be no fragmentation. I do not cover searches, covering etc. Only the pure insert aspect. OK? Good!

One might think that a heap has lower overhead because it is a … heap. But hang on for a second and think about what happens when you do an insert:

Heap:
SQL Server needs to find where the row should go. For this it uses one or more IAM pages for the heap, and it cross-references these with one or more PFS pages for the database file(s). IMO, there is potential for a noticeable overhead here. Even more so, with many users hammering the same table I can imagine blocking (waits) against the PFS and possibly also the IAM pages.

Clustered table:
Now, this is dead simple. SQL Server navigates the clustered index tree and finds where the row should go. Since this is an ever-increasing index key, each row will go to the end of the table (linked list).

The result:
So what is the conclusion? I did several executions of the code at the end of this post, with some variations. Basically there was no or very little difference with only one user, i.e., no contention on the GAM or PFS pages. This was pretty consistent for the three scenarios below:

  1. Insert with subselect. I.e., this inserts lots of rows in the same statement.
  2. Insert in a loop (one insert and one row per iteration), many rows in the same transaction.
  3. Insert in a loop, one row per transaction.

Now the difference between 2 and 3 is important.
With many transactions, we incur the overhead of a forced log write at commit *for each row*, i.e., much more overhead against the transaction log. And indeed, the timings between 2 and 3 for one of my executions (10000 rows) showed that 2 took on average 650 ms where the same number for 3 was 5600 ms. This is about 9 times longer!!! Now, this was more or less expected, but another important aspect is when we have several users. With many users, we might run into blocking on the PFS and IAM pages. Also, with several users it is meaningless to do it all in one transaction since we will block and essentially single-thread the code anyhow. I.e., the only relevant measure where we run many users is the loop construction where each row is its own transaction (3).
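If you want to see whether that kind of contention is going on while the parallel loops are running, a query along these lines (run from a separate connection) shows the page latch waits; PFS/GAM/IAM pages show up in resource_description as dbid:fileid:pagenumber:

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'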

There was indeed a noticeable difference when I executed several inserts in parallel and had each insert in its own transaction (clustered table vs. heap table).

Some numbers:
I did 4 repeated tests and calculated the average execution time for inserting 10000 rows per thread. With 6 parallel threads I had 22 seconds for a clustered table and 29 seconds for a heap table. With 10 threads I had 31 seconds for a clustered table and 42 seconds for a heap table.

I didn't find a performance difference of more than a couple of percent for batch inserts, when I single-threaded (only one thread pumping inserts), or when I had all the inserts in the loop as one transaction.

Now, I would need a lot more time to run exhaustive tests, but my interpretation is that with many users doing inserts, there is a noticeable overhead for the heap vs. clustering on an increasing key.

The code:
Note that for parallel executions, I recommend starting the DoTheInserts procedure using SQLCMD, a BAT file and START. As always, read the code carefully (so you understand it) and execute at your own risk.

--------------------------------------------
--Create the database etc.
--------------------------------------------
USE master SET NOCOUNT ON
GO
IF DB_ID('TestDb') IS NOT NULL DROP DATABASE TestDb
GO
--Make files large enough so that inserts don't cause autogrow
CREATE DATABASE TestDb
ON  PRIMARY
(NAME = 'TestDb', FILENAME = 'C:\TestDb.mdf', SIZE = 300MB, FILEGROWTH = 50MB)
LOG ON
(NAME = 'TestDb_log', FILENAME = 'C:\TestDb_log.ldf', SIZE = 200MB, FILEGROWTH = 100MB)
GO
--Full recovery to avoid effect of system caused log truncation
ALTER DATABASE TestDb SET RECOVERY FULL
BACKUP DATABASE TestDb TO DISK = 'nul'
USE TestDb

--Execution time log table
IF OBJECT_ID('TimeLogger') IS NOT NULL DROP TABLE TimeLogger
GO
CREATE TABLE TimeLogger
(
SomeId int identity
,spid int
,TableStructure varchar(10) CHECK (TableStructure IN ('heap', 'clustered'))
,InsertType varchar(20) CHECK (InsertType IN('one statement', 'loop'))
,ExecutionTimeMs int
)
GO

IF OBJECT_ID('RowsToInsert') IS NOT NULL DROP TABLE RowsToInsert
CREATE TABLE RowsToInsert(#rows int)
GO

--Support procedures
IF OBJECT_ID('CreateTables') IS NOT NULL DROP PROC CreateTables
GO
CREATE PROC CreateTables AS
IF OBJECT_ID('HeapLoop') IS NOT NULL DROP TABLE HeapLoop
CREATE TABLE HeapLoop(c1 int identity, c2 int DEFAULT 2, c3 datetime DEFAULT GETDATE(), c4 char(200) DEFAULT 'g')
IF OBJECT_ID('ClusteredLoop') IS NOT NULL DROP TABLE ClusteredLoop
CREATE TABLE ClusteredLoop(c1 int identity, c2 int DEFAULT 2, c3 datetime DEFAULT GETDATE(), c4 char(200) DEFAULT 'g')
CREATE CLUSTERED INDEX x ON ClusteredLoop(c1)
IF OBJECT_ID('HeapOneStatement') IS NOT NULL DROP TABLE HeapOneStatement
CREATE TABLE HeapOneStatement(c1 int identity, c2 int DEFAULT 2, c3 datetime DEFAULT GETDATE(), c4 char(200) DEFAULT 'g')
IF OBJECT_ID('ClusteredOneStatement') IS NOT NULL DROP TABLE ClusteredOneStatement
CREATE TABLE ClusteredOneStatement(c1 int identity, c2 int DEFAULT 2, c3 datetime DEFAULT GETDATE(), c4 char(200) DEFAULT 'g')
CREATE CLUSTERED INDEX x ON ClusteredOneStatement(c1)
GO

IF OBJECT_ID('TruncateTables') IS NOT NULL DROP PROC TruncateTables
GO
CREATE PROC TruncateTables AS
TRUNCATE TABLE HeapLoop
TRUNCATE TABLE ClusteredLoop
TRUNCATE TABLE HeapOneStatement
TRUNCATE TABLE ClusteredOneStatement
GO

IF OBJECT_ID('DoBefore') IS NOT NULL DROP PROC DoBefore
GO
CREATE PROC DoBefore AS
BACKUP LOG TestDb TO DISK = 'nul'
CHECKPOINT
GO

IF OBJECT_ID('iHeapLoop') IS NOT NULL DROP PROC iHeapLoop
GO
CREATE PROC iHeapLoop @rows int AS
DECLARE @i int = 1
WHILE @i <= @rows
BEGIN
INSERT INTO HeapLoop (c2) VALUES(2)
SET @i = @i + 1
END
GO

IF OBJECT_ID('iClusteredLoop') IS NOT NULL DROP PROC iClusteredLoop
GO
CREATE PROC iClusteredLoop @rows int AS
DECLARE @i int = 1
WHILE @i <= @rows
BEGIN
INSERT INTO ClusteredLoop (c2) VALUES(2)
SET @i = @i + 1
END
GO

IF OBJECT_ID('iHeapOneStatement') IS NOT NULL DROP PROC iHeapOneStatement
GO
CREATE PROC iHeapOneStatement @rows int AS
INSERT INTO HeapOneStatement (c2)
SELECT TOP(@rows) 2 FROM syscolumns a CROSS JOIN syscolumns b
GO

IF OBJECT_ID('iClusteredOneStatement') IS NOT NULL DROP PROC iClusteredOneStatement
GO
CREATE PROC iClusteredOneStatement @rows int AS
INSERT INTO ClusteredOneStatement (c2)
SELECT TOP(@rows) 2 FROM syscolumns a CROSS JOIN syscolumns b
GO

--Proc to do the inserts
IF OBJECT_ID('DoTheInserts') IS NOT NULL DROP PROC DoTheInserts
GO
CREATE PROC DoTheInserts
AS
DECLARE @dt datetime, @NumberOfRowsToInsert int
SET @NumberOfRowsToInsert = (SELECT #rows FROM RowsToInsert)
EXEC DoBefore --Batch allocation, heap:
SET @dt = GETDATE()
EXEC iHeapOneStatement @rows = @NumberOfRowsToInsert
INSERT INTO TimeLogger (spid, TableStructure, InsertType, ExecutionTimeMs)
VALUES(@@SPID, 'heap', 'one statement', DATEDIFF(ms, @dt, GETDATE()))

EXEC DoBefore --Batch allocation, clustered:
SET @dt = GETDATE()
EXEC iClusteredOneStatement @rows = @NumberOfRowsToInsert
INSERT INTO TimeLogger (spid, TableStructure, InsertType, ExecutionTimeMs)
VALUES(@@SPID, 'clustered', 'one statement', DATEDIFF(ms, @dt, GETDATE()))

EXEC DoBefore --Single allocations, heap:
SET @dt = GETDATE()
--BEGIN TRAN
EXEC iHeapLoop @rows = @NumberOfRowsToInsert
--COMMIT
INSERT INTO TimeLogger (spid, TableStructure, InsertType, ExecutionTimeMs)
VALUES(@@SPID, 'heap', 'loop', DATEDIFF(ms, @dt, GETDATE()))

EXEC DoBefore --Single allocations, clustered
SET @dt = GETDATE()
--BEGIN TRAN
EXEC iClusteredLoop @rows = @NumberOfRowsToInsert
--COMMIT
INSERT INTO TimeLogger (spid, TableStructure, InsertType, ExecutionTimeMs)
VALUES(@@SPID, 'clustered', 'loop', DATEDIFF(ms, @dt, GETDATE()))
GO

--Run the tests
EXEC CreateTables
TRUNCATE TABLE TimeLogger
TRUNCATE TABLE RowsToInsert INSERT INTO RowsToInsert VALUES(10000)

--<Below can be executed over several connections>
EXEC DoTheInserts
EXEC DoTheInserts
EXEC DoTheInserts
EXEC DoTheInserts
--</Below can be executed over several connections>

--How did we do?
SELECT COUNT(*) AS NumberOfExecutions, TableStructure, InsertType, AVG(ExecutionTimeMs) AS AvgMs
FROM TimeLogger WITH(NOLOCK)
GROUP BY TableStructure, InsertType
ORDER BY InsertType, TableStructure

--Verify that no fragmentation
SELECT
OBJECT_NAME(OBJECT_ID) AS objName
,index_type_desc
,avg_fragmentation_in_percent AS frag
,page_count AS #pages
,record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
WHERE OBJECT_NAME(OBJECT_ID) <> 'TimeLogger' AND index_level = 0

 

Finally got Kalen’s new Query Tuning book…

I know this book has been out for a little while now, but I didn’t get it until now. This one I’ve been looking forward to for a long time. A quick look in the book is very promising (as expected).

The full name of the book, btw, is “Inside Microsoft SQL Server 2005 Query Tuning and Optimization” from MS press. The title says it all, I guess. 🙂

Non-trusted constraints and performance

(See my part 1 article about non-trusted constraints in general: http://sqlblog.com/blogs/tibor_karaszi/archive/2008/01/12/non-trusted-constraints.aspx)

The optimizer is a pretty smart little animal. It can even use constraints to eliminate some data access or part of a query. That assumes, however, that the constraint is trusted. For example, for a check constraint:

USE Adventureworks
SET STATISTICS IO ON
--Also check the execution plan
SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail
SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail WHERE UnitPrice < 0

Note that the table isn't even accessed for the second query. SQL Server knows that there can be no rows in the table where UnitPrice < 0 because there is a constraint:

SELECT OBJECT_NAME(parent_object_id) AS table_name, name, definition
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('sales.salesorderdetail')
ORDER BY table_name

Notice the constraint CK_SalesOrderDetail_UnitPrice with the condition ([UnitPrice]>=(0.00)). But what if the constraint isn’t trusted?

ALTER TABLE Sales.SalesOrderDetail NOCHECK CONSTRAINT CK_SalesOrderDetail_UnitPrice
SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail WHERE UnitPrice < 0

Now we do have data access. And even if we enable the constraint, it will be non-trusted and SQL Server cannot know for sure that no row violates the condition:

ALTER TABLE Sales.SalesOrderDetail CHECK CONSTRAINT CK_SalesOrderDetail_UnitPrice
SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail WHERE UnitPrice < 0

We need to enable the constraint WITH CHECK to make sure it is trusted:

ALTER TABLE Sales.SalesOrderDetail WITH CHECK CHECK CONSTRAINT CK_SalesOrderDetail_UnitPrice
SELECT AVG(OrderQty) FROM Sales.SalesOrderDetail WHERE UnitPrice < 0

Performance can also be improved with foreign key constraints. For example, we have a foreign key which states that we cannot have an order detail row for an order which doesn’t exist in the orders table. Now, consider below (check out both I/O and execution plan):

SELECT sd.SalesOrderID, sd.CarrierTrackingNumber
FROM Sales.SalesOrderHeader AS s
INNER JOIN Sales.SalesOrderDetail AS sd
ON s.SalesOrderID = sd.SalesOrderID
WHERE sd.OrderQty > 20

We didn't return any columns from the SalesOrderHeader table, and since SQL Server knows that each row in the SalesOrderDetail table has a corresponding row in the SalesOrderHeader table, there's no need to access the SalesOrderHeader table at all. But if the constraint isn't trusted:

ALTER TABLE Sales.SalesOrderDetail NOCHECK CONSTRAINT FK_SalesOrderDetail_SalesOrderHeader_SalesOrderID

SELECT sd.SalesOrderID, sd.CarrierTrackingNumber
FROM Sales.SalesOrderHeader AS s
INNER JOIN Sales.SalesOrderDetail AS sd
ON s.SalesOrderID = sd.SalesOrderID
WHERE sd.OrderQty > 20

Notice that now the SalesOrderHeader table *is* accessed, because the constraint is disabled. And, as above, it isn't enough to just enable it; we need to enable it WITH CHECK to make it trusted again. OK, we might ask why we would even bother with the join if we only need columns from the referencing table in the first place. But consider this view:

CREATE VIEW myView AS
SELECT 
sd.SalesOrderID, s.RevisionNumber, s.DueDate, sd.CarrierTrackingNumber
FROM Sales.SalesOrderHeader AS s
INNER JOIN Sales.SalesOrderDetail AS sd
ON s.SalesOrderID = sd.SalesOrderID

The view accesses columns from both tables. But it is quite possible that we have users of this view who are only interested in columns from the SalesOrderDetail table:

SELECT SalesOrderID, CarrierTrackingNumber
FROM myView

If the constraint is not trusted, then both tables are accessed (with the join operation, of course). But if we make sure that the constraint is trusted, then only the SalesOrderDetail table is accessed and no join is performed:

ALTER TABLE Sales.SalesOrderDetail WITH CHECK CHECK CONSTRAINT FK_SalesOrderDetail_SalesOrderHeader_SalesOrderID

SELECT SalesOrderID, CarrierTrackingNumber
FROM myView

So, the bottom line is that you should be careful with non-trusted constraints: obviously because you might have data in the table which violates the constraint, but also for performance reasons. It is easy to check whether you have any non-trusted constraints in the database:

SELECT OBJECT_NAME(parent_object_id) AS table_name, name
FROM sys.check_constraints
WHERE is_not_trusted = 1
UNION ALL
SELECT OBJECT_NAME(parent_object_id) AS table_name, name
FROM sys.foreign_keys
WHERE is_not_trusted = 1
ORDER BY table_name

I should add that to me, the purpose of constraints is to make sure I have valid data. The performance aspect is just a bonus. But I admit that I'm pretty impressed by the optimizer in some cases. 🙂

 

Owner/schema qualifying object names

It isn't easy to remember all the details regarding owner/schema-qualifying object names. There are many variables involved, such as:

  • Version of SQL Server
  • Qualifying the owner when you call the proc (EXEC dbo.procname)
  • Qualifying object references inside the proc code
  • Qualifying object names if you aren’t using procedures
  • Whether the user is the same as the owner of the proc/table
  • Default schema for the user

So, I decided to give it a spin for different combinations and investigate both Profiler events and the number of plans in cache. I won't post all the details here; that would be too much to write down. You can use the scripts at the end to make your own findings. I did not investigate the differences regarding compile locks. Anyhow, here are my conclusions:

  • I could not produce any recompile (SP:Recompile or SQL:StmtRecompile) event for any combination.
  • I did find SP:CacheMiss events on 2000 when you execute a proc and don't qualify the proc name (for 2005 I always got those events). A subsequent SP:CacheHit will then follow.
  • For straight SQL (no procedures) I noticed that each user gets its own plan when you don't owner-qualify the table name. This makes sense. An interesting aspect on 2005 was that if you specify a default schema for the user (and two users have the same default schema), then the users will share the plan (basically the default schema becomes the "owner" of the plan). See the example right after this list.
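For reference, this is the kind of default schema assignment I mean (2005 syntax, using the users from the script below):

--Give both users the same default schema so their unqualified references resolve the same way
ALTER USER Kalle WITH DEFAULT_SCHEMA = dbo
ALTER USER Olle WITH DEFAULT_SCHEMA = dbo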

Below are the scripts I used:

--Login as sysadmin:
USE master
IF DB_ID('myTestDb') IS NOT NULL DROP DATABASE myTestDb
IF EXISTS(SELECT * FROM syslogins WHERE name = 'Kalle') EXEC sp_droplogin 'Kalle'
IF EXISTS(SELECT * FROM syslogins WHERE name = 'Olle') EXEC sp_droplogin 'Olle'
GO
EXEC sp_addlogin 'Kalle', '*hjk&6f' EXEC sp_addlogin 'Olle', '*hjk&6f'
CREATE DATABASE myTestDb
GO
USE myTestDb
EXEC sp_grantdbaccess 'Kalle' EXEC sp_grantdbaccess 'Olle'
GO
CREATE TABLE dbo.t(c1 int identity PRIMARY KEY, c2 char(30))
INSERT INTO dbo.t SELECT TOP 1000 'hello' FROM sysobjects a CROSS JOIN sysobjects b
CREATE INDEX x ON t(c1)
GO
CREATE PROC dbo.p AS SELECT c1 FROM t WHERE c1 = 34 AND c2 = 'Hello'
GO
CREATE PROC dbo.p_q AS SELECT c1 FROM dbo.t WHERE c1 = 34 AND c2 = 'Hello'
GO
GRANT EXECUTE ON dbo.p TO Kalle, Olle
GRANT EXECUTE ON dbo.p_q TO Kalle, Olle
GRANT SELECT ON t TO Kalle, Olle

--Number of plans in cache, run after executions of proc
SELECT OBJECT_NAME(objid), sqluid, *
FROM master..syscacheobjects
WHERE dbid = DB_ID()
AND
cacheobjtype = 'Compiled Plan'
AND sql NOT LIKE '%PSTATMAN%'

--Run this three times, logged in as sysadmin (dbo), Kalle and Olle
USE myTestDb
GO
EXEC dbo.p
GO
EXEC dbo.p_q
GO
EXEC p
GO
EXEC p_q
GO
SELECT c1 FROM t WHERE c1 = 34 AND c2 = 'Hello'
GO
SELECT c1 FROM dbo.t WHERE c1 = 34 AND c2 = 'Hello'
GO
USE master

Find table and index name for fragmented indexes

Got this question from a newsgroup today. The answer is pretty simple: just use the dynamic management view sys.dm_db_index_physical_stats. I'm posting this here mostly so I have somewhere to refer to when asked this question...

I prefer to have a helper function to get the index name:

CREATE FUNCTION dbo.index_name (@object_id int, @index_id int)
RETURNS sysname
AS
BEGIN
RETURN
(SELECT name FROM sys.indexes WHERE object_id = @object_id AND index_id = @index_id)
END;
GO

And then a simple query:

SELECT
OBJECT_NAME(object_id) AS tblName
,dbo.index_name(object_id, index_id) AS ixName
,avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL)
WHERE avg_fragmentation_in_percent > 20
AND index_type_desc IN('CLUSTERED INDEX', 'NONCLUSTERED INDEX')

Then you just adjust the search clause to your liking. One hint is to exclude indexes with few pages (using the page_count column), as in the example below.
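For instance, something like this (the 1000-page threshold is just an arbitrary example) ignores the small indexes:

SELECT
OBJECT_NAME(object_id) AS tblName
,dbo.index_name(object_id, index_id) AS ixName
,avg_fragmentation_in_percent
,page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL)
WHERE avg_fragmentation_in_percent > 20
AND page_count > 1000
AND index_type_desc IN('CLUSTERED INDEX', 'NONCLUSTERED INDEX')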