Match those types!

This is a recommendation I believe is worth repeating from time to time: make sure you match data types when you write T-SQL code. Otherwise, in most cases you end up with an implicit data type conversion, and in the worst case that conversion is performed at the column side of your query rather than at the literal side. What does that mean? Consider the following:

WHERE Col = Val

Now, say that the types above don’t match. Val might be a parameter (to a stored procedure, for instance), a variable or a written value (a literal). In any case, when SQL Server needs to perform some operation (like the comparison here) involving values which aren’t of the same type, one of the values has to be converted so that it is of the same type as the other. Which one is determined by the rules for “Data Type Precedence“.
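Here is a small sketch of how the precedence rules play out (the table and column names are made up for illustration, they are not from the forum thread below): nvarchar has higher precedence than char, so when the two meet it is the char column that gets converted, for every row.

-- Hypothetical example: OrderRef is char(10), the variable is nvarchar(10)
-- nvarchar has higher data type precedence than char, so SQL Server
-- converts the column rather than the variable before comparing
DECLARE @ref nvarchar(10)
SET @ref = N'ABC1234567'

SELECT *
FROM dbo.Orders              -- hypothetical table
WHERE OrderRef = @ref        -- the conversion ends up on the OrderRef side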

What we don’t want is a conversion at the column side. Why? I would argue that an implicit conversion in the first place in many cases means I have misunderstood something about the types in my system and am on my way to producing a bug in my code. But having a stricter language is not the topic for this particular post (check out this). My point here is that it is bad for performance reasons. Just yesterday and today I was involved in a thread on the MSDN forum. Here’s the repro script from that thread (slightly adjusted by me):

USE tempdb
GO
IF OBJECT_ID('t') IS NOT NULL DROP TABLE t
GO
CREATE TABLE t
(
c1 int IDENTITY(10000001,1) NOT NULL
,c2 char(8) NULL
,c3 datetime NULL
)
GO
INSERT INTO t(c3)
SELECT TOP 3000000 '20080203'
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
UPDATE t SET c2 = CAST(c1 AS char(8))
CREATE UNIQUE CLUSTERED INDEX x ON t(c2)
GO

SELECT * FROM t WHERE c2 = N'10000009'
SELECT * FROM t WHERE c2 = N'10000008'

Don’t bother too much about the first section. The relevant part here is that we have a char(8) column with values which are unique (enforced by a unique index), on a table with three million rows.

Now, before we dig deeper: what is the problem with the above queries? The types do not match! The type in the table is char(8), but the literal is Unicode (the N before the string). Since nchar has higher precedence than char, the column side has to be converted to the type of the value side. Not good. Try the code on SQL Server 2000 and you will see disaster. For me, STATISTICS IO gave 10755 logical reads for both queries, with a Clustered Index Scan (which is the same as a table scan). Hover over the Clustered Index Scan icon and you will find “WHERE:(Convert([t].[c2])=[@1])”. See how the conversion is done at the column side? This, my friend, means that SQL Server cannot seek an index to find the rows. Ouch! And that’s just because we were a bit sloppy… Compare to below:

SELECT * FROM t WHERE c2 = '10000009'
SELECT * FROM t WHERE c2 = '10000008'

For both of the above you will find a Clustered Index Seek with only three logical reads. That is a good improvement.
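If you want to see the difference in reads for yourself, a quick way (just a sketch against the repro table above) is to turn on STATISTICS IO and run a mismatched and a matched version side by side:

-- Compare logical reads for the mismatched (nchar literal) and
-- matched (char literal) predicates against the repro table t
SET STATISTICS IO ON

SELECT * FROM t WHERE c2 = N'10000009'   -- implicit conversion involved
SELECT * FROM t WHERE c2 = '10000009'    -- types match

SET STATISTICS IO OFF

On SQL Server 2000 the first query scans the clustered index while the second seeks it; on 2005/2008 both can seek, but the plans still differ, as discussed below.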

OK, enough about SQL Server 2000, let’s discuss 2008/2005. At first I couldn’t see any difference between the two original queries (N'10000009' and N'10000008'). Both seemed to be instant, and the execution plan was the same for both. I even saved the execution plans as XML and used WinDiff to compare the XML files. No difference. But the poster said the “9” query was slower than the “8” query. I executed with results to text and kept Ctrl-E pressed for a while, and now I could see a difference: the flicker was quicker for the “8” version than for the “9” version. I used Profiler and confirmed it. The “9” version consistently had a duration of about 90000 microseconds, whereas the “8” version had a duration of about 0. This was also reflected in CPU usage, as shown by Profiler. I hoped to use Extended Events to find something deeper about this, but that didn’t give me anything. Perhaps I simply didn’t capture the relevant events. X/Event experts are welcome to suggest event session configurations. 🙂
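For reference, the kind of session I would start from looks something like below. This is only a sketch (untested in this scenario; the event, action and target names are the standard ones shipped with SQL Server 2008), capturing completed statements with their text so that duration and CPU can be compared afterwards:

-- A possible starting point for an Extended Events session (sketch only):
-- capture completed statements with their text into a ring buffer
CREATE EVENT SESSION TypeMismatchTrace ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer
GO
ALTER EVENT SESSION TypeMismatchTrace ON SERVER STATE = START
GO

The captured data can then be read as XML from sys.dm_xe_session_targets, though as I said, this route didn’t give me anything conclusive.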

My guess is that the difference lies in the execution plan. You will see an index seek for both. That is good, since SQL Server nowadays can potentially use an index even if you end up with an implicit conversion at the column side. But you will also see a few more operators in addition to the index seek. See that Constant Scan followed by a Compute Scalar? My guess is that the built-in internal function used inside the Compute Scalar operator is more expensive for N'10000009' than for N'10000008'. You can see this function if you use SET STATISTICS PROFILE ON; look for the GetRangeThroughConvert() function. My guess is that the answer is the cost of this function. (Search the Internet for things like “Constant Scan” and “GetRangeThroughConvert” for other blog posts on this topic.)
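To look at this yourself (again just a sketch against the repro table above), run the mismatched query with STATISTICS PROFILE on and study the Argument column of the Constant Scan and Compute Scalar rows:

-- Show the plan as text rows; the Compute Scalar row exposes the internal
-- GetRangeThroughConvert() call used to compute the seek boundaries
SET STATISTICS PROFILE ON

SELECT * FROM t WHERE c2 = N'10000009'

SET STATISTICS PROFILE OFF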

The moral of the story? Here we have a cost which can be relevant in some cases, but we will only notice it if we happen to execute our query with the right values and are very, very observant. And we can eliminate all of this mess by using the right type against the column in the first place!
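As a final illustration of the moral (a hypothetical procedure for the repro table, not something from the thread), declare parameters and variables with the same type as the column they are compared to, and the whole issue goes away:

-- Hypothetical procedure: the parameter type matches the char(8) column,
-- so no conversion happens on either side and the index can simply be seeked
CREATE PROCEDURE GetRowByC2
    @c2 char(8)
AS
SELECT c1, c2, c3
FROM t
WHERE c2 = @c2
GO

EXEC GetRowByC2 @c2 = '10000009'

And if the value comes to you as nvarchar and you cannot change that, converting the value side explicitly (CAST(@value AS char(8))) at least keeps the conversion away from the column.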
