Index fragmentation, part 2

Does fragmentation hurt?

Last time, I said “probably not”. I’m glad I added that “probably” in there. This time, I will say possibly. See http://sqlblog.karaszi.com/does-index-fragmentation-matter/

In my last blog post, I wanted to focus on the sequential vs random I/O aspect and how that part should be more or less irrelevant with modern hardware. So I ran a test that did a full scan (following the linked list of an index) and checked whether I could notice any performance difference on my SSD. I couldn’t.

That isn’t the end of the story, it turns out. Another aspect is how the data is brought into memory. You might know that SQL Server can do “read-ahead”, meaning it does larger reads per I/O instead of single-page I/O. I.e., fewer but larger I/O operations. This sounds fine, but what happens when we have fragmentation?
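A quick way to see read-ahead in action, by the way, is SET STATISTICS IO, which reports read-ahead reads separately from regular physical reads. A minimal sketch, using the narrow_index table from the test code further down (the exact counts will vary):

```sql
--Observe read-ahead for a cold scan (test server only!)
DBCC DROPCLEANBUFFERS   --empty the cache first

SET STATISTICS IO ON
SELECT COUNT(*) FROM narrow_index
SET STATISTICS IO OFF
--The Messages output reports "physical reads" and "read-ahead reads" separately
```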

I have had this in the back of my mind for ages now. Since nobody else seemed to care about or discuss this topic, I had this feeling that I was barking up the wrong tree. But since I now have a test script, I can just tweak it a bit so it does less than a full scan, to see if fragmentation seems to make a difference for read-ahead. Spoiler alert: it sure can! (But possibly not because of read-ahead per se.)

My thinking was that read-ahead will read stuff into memory which it turns out it doesn’t need. Something like bringing an extent into memory, only to need 1 of those 8 pages, and then jumping somewhere else. I never had real confirmation whether this applies, or if read-ahead is smarter than that.

So, basically, I parameterized my query so I can play with the selectivity for each SELECT. I have two tables, or perhaps I should say two B-Tree indexes, to play with:

Name           Rows (millions)   Size (GB)
wide index     3.7               8.4
narrow index   20                1

So, what selectivity to choose? I did three tests:

  • 1000 rows per query.
  • 2 rows per query. For this I suspected no difference. After all, each query will just navigate the tree and find those 2 rows most probably on the same page.
  • 10000 rows per query.

This should be enough to see a pattern, I hope. We already know that for a full scan/seek we couldn’t see a difference. I’m measuring the following attributes:

  • Duration.
  • Physical reads.
  • Size of index in memory. I.e., how much memory was consumed by the query load.

I did not find any difference for a warm index, i.e., when the data is already in cache, so I didn’t clutter this post with those measurements. Feel free to run my test and play with it if you want to see for yourself. This is also why I did not measure logical reads. And CPU usage was so low it wasn’t measurable.

Again, the rebuild was done with a fillfactor chosen so that the indexes are the same size both when fragmented and when not fragmented. This is probably not how you do things, but it is obvious that fitting more rows on a page will make reads quicker, so we don’t need to investigate that aspect.

1000 rows per SELECT:

index_width    fragmented   duration_ms   physical_reads   mb_cache
narrow_index   n            951           8                0.64
narrow_index   y            1090          131              18.01
wide_index     n            5156          191              14.94
wide_index     y            5656          204              16.27

What stands out above is how much of the narrow table was brought into memory by the workload. The first few times I ran the test, I thought that something was wrong. I also checked whether I had physical I/O for the “warm” execution – no. The theory was that for the non-fragmented executions, data was evicted from cache for whatever reason, but there was no physical I/O for the warm execution either. I also did a run where I removed the wide index from the test, thinking that perhaps having it in there skewed the test data – but no.

I also investigated the buffer pool using sys.dm_os_buffer_descriptors, thinking that just a few MB of used cache surely couldn’t contain the data I was looking for. I fit 162 rows per data page when the index isn’t fragmented, and I had 72 such pages in cache. That gives me about 11,000 rows. In my workload, I did 10 iterations where each read 1000 rows, i.e., 10,000 rows in total. In other words, the data that I worked with did indeed fit in those few MB of memory. I also checked sys.dm_os_buffer_descriptors after an execution with only the fragmented narrow table, and now there were 17,000 pages in cache instead of 72.
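For reference, the buffer pool can be inspected per table with a query along these lines (it is the same join that the test script further down uses to compute mb_cache):

```sql
--How much of a given table/index is currently in the buffer pool?
SELECT
 COUNT(*) AS pages_in_cache
,CAST(COUNT(*) * 8.00 / 1024 AS decimal(9,2)) AS mb_in_cache
FROM sys.allocation_units AS a
  JOIN sys.dm_os_buffer_descriptors AS b
    ON a.allocation_unit_id = b.allocation_unit_id
  JOIN sys.partitions AS p
    ON a.container_id = p.hobt_id
WHERE p.object_id = OBJECT_ID('narrow_index')
  AND b.database_id = DB_ID()
```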

My conclusion is that fragmentation can cause lots and lots of unnecessary data to be brought into cache. If you have other thoughts or theories, let me know.

As for the other numbers: duration increases a bit with fragmentation, and so do physical reads. But not by much.

2 rows per SELECT:

index_width    fragmented   duration_ms   physical_reads   mb_cache
narrow_index   n            673           2                0.16
narrow_index   y            840           131              24.41
wide_index     n            941           2                0.19
wide_index     y            1421          17               1.58

The above doesn’t make sense at all. Why would a two (2) row SELECT bring so much data into cache when the index happens to be fragmented? I.e., why is read-ahead kicking in for a query with such high selectivity? So I decided to dig deeper. I extracted the query and ran it by itself, just a single execution, without any fancy dynamic SQL going on. I compared execution plans between the fragmented index and the non-fragmented index: both were identical, an index seek. The estimated number of rows was also pretty much on target (1.2 rows).

A single query reading a single row brings 20 data pages into cache when the table is fragmented (32 pages in total, including index pages). After defragmenting that index, the very same query brought 6 pages into cache. That is still more than expected, but less than when we had fragmentation. There is obviously more at play here… The funny thing is that STATISTICS IO doesn’t report any read-ahead, and trace flag 652 (disable read-ahead) doesn’t make any difference for the numbers in this paragraph. So maybe it isn’t read-ahead after all? But there is something fishy going on.
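For reference, this is how I toggle trace flag 652. The -1 makes it instance-wide, so this is nothing for a production server:

```sql
--Disable read-ahead for the whole instance (test server only!)
DBCC TRACEON(652, -1)

--...run the query/workload...

--Re-enable read-ahead
DBCC TRACEOFF(652, -1)
```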

10,000 rows per SELECT:

index_width    fragmented   duration_ms   physical_reads   mb_cache
narrow_index   n            2851          65               5.14
narrow_index   y            2251          58               120.04
wide_index     n            31371         1883             147.12
wide_index     y            34697         1908             149.26

Same pattern for the narrow index. Not much for me to add, except that here we got less I/O for the narrow index when the table was fragmented. I ran it a few times and got consistent results. I can’t explain this, so I just leave it for any of you to pick your guesses.

Disabling read-ahead

What if I disable read-ahead? I used trace flag 652 to do just that, and ran the workload with 1000 rows selectivity:

index_width    fragmented   duration_ms   physical_reads   mb_cache
narrow_index   n            1117          15               1.19
narrow_index   y            1723          221              20.65
wide_index     n            14333         204              15.95
wide_index     y            16143         213              16.83

Same pattern for memory usage. But now we see very different numbers for physical reads. It seems that the physical_reads value of the Extended Events sp_statement_completed event suffers from the same defect as Performance Monitor: it doesn’t include the reads done by the read-ahead thread. On the other hand, we can also see that read-ahead can be our friend. Look at the difference in duration.

Bottom line

It turns out that fragmentation can have a pretty significant impact. How much? As usual, it depends on your workload. If you scan entire indexes (see my earlier blog post), then the memory aspect doesn’t seem to differ, which also makes sense – unless you are so memory constrained that a page is removed from cache while the query is still running, and that page turns out to be needed again.

But when we are more selective than doing a full scan, fragmentation can make a difference for memory usage, evidently. My test is only one workload. And I only noticed a relevant difference for the narrow index. YMMV, of course.

The code

SET NOCOUNT ON


-----------------------------------------------------------------------------------
--Create table to keep track of buffer pool usage
USE StackOverflow

DROP TABLE IF EXISTS bp_usage

CREATE TABLE bp_usage (
 index_width varchar(30) NOT NULL
,fragmented varchar(20) NOT NULL
,mb_cache decimal(9,2) NOT NULL)


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Disable IAM order scan, so we know that SQL Server will follow the linked list
--See https://sqlperformance.com/2015/01/t-sql-queries/allocation-order-scans

EXEC sp_configure 'cursor threshold', 1000000
RECONFIGURE


-----------------------------------------------------------------------------------
--Drop and create event session to keep track of execution time
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'frag_test')
DROP EVENT SESSION frag_test ON SERVER

--Delete XE file, using xp_cmdshell (bad, I know)
EXEC sp_configure 'xp_cmdshell', 1 RECONFIGURE WITH OVERRIDE
EXEC xp_cmdshell 'DEL R:\frag_test*.xel', no_output 
EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE WITH OVERRIDE

CREATE EVENT SESSION frag_test ON SERVER 
ADD EVENT sqlserver.sp_statement_completed()
ADD TARGET package0.event_file(SET filename=N'R:\frag_test')
WITH (MAX_DISPATCH_LATENCY=2 SECONDS)
GO

ALTER EVENT SESSION frag_test ON SERVER STATE = START


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Setup section

--Grow the data and log files for StackOverflow database.
IF EXISTS(SELECT size * 8/(1024*1024), * FROM sys.database_files WHERE name = N'StackOverflow2010' AND size * 8/(1024*1024) < 20)
	ALTER DATABASE [StackOverflow] MODIFY FILE ( NAME = N'StackOverflow2010', SIZE = 20GB )
IF EXISTS(SELECT size * 8/(1024*1024), * FROM sys.database_files WHERE name = N'StackOverflow2010_log' AND size * 8/(1024*1024) < 10)
	ALTER DATABASE [StackOverflow] MODIFY FILE ( NAME = N'StackOverflow2010_log', SIZE = 10GB )
GO


--Create the table for the narrow index
RAISERROR('Setup section. About to create table with narrow index...', 10, 1) WITH NOWAIT

DROP TABLE IF EXISTS narrow_index

SELECT TOP(1000*1000*20) ROW_NUMBER() OVER( ORDER BY (SELECT NULL)) AS c1, CAST('Hello' AS char(8)) AS c2
INTO narrow_index
FROM sys.columns AS a, sys.columns AS b, sys.columns AS c

CREATE CLUSTERED INDEX x ON narrow_index(c1)

--Modify Posts table, so we condense cl ix and make it non-fragmented. So we (soon) can fragment it.
RAISERROR('Setup section. About to modify Posts table, so we condense cl ix and make it non-fragmented...', 10, 1) WITH NOWAIT
UPDATE Posts SET LastEditorDisplayName = ''
ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (FILLFACTOR = 100)
GO


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Investigate the data if you want

/*
--wide index
SELECT TOP(100) * FROM Posts
EXEC sp_indexinfo Posts				--Found on my web-site
EXEC sp_help 'Posts'

--narrow index
SELECT TOP(100) * FROM narrow_index
EXEC sp_indexinfo narrow_index		--Found on my web-site
EXEC sp_help 'narrow_index'
*/



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Create the proc that executes our SQL
RAISERROR('About to create the proc that executes our SQL...', 10, 1) WITH NOWAIT
GO
CREATE OR ALTER PROC run_the_sql
 @fragmented varchar(20)
,@rows_to_read int
,@range_iterations int
AS
DECLARE 
 @sql varchar(1000)
,@sql_condition varchar(1000)
,@bp_wide_mb decimal(9,2)
,@bp_narrow_mb decimal(9,2)
,@range_start int
,@range_iterations_counter int


--Empty cache
CHECKPOINT
DBCC DROPCLEANBUFFERS

--Cold cache
SET @range_iterations_counter = 1
SET @range_start = 1000
WHILE @range_iterations_counter <= @range_iterations
BEGIN
	SET @sql_condition = ' BETWEEN ' + CAST(@range_start AS varchar(10)) + ' AND ' + CAST(@range_start + @rows_to_read AS varchar(10))
	SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(OwnerUserId) AS [cl_ix_scan ' + @fragmented + '] FROM Posts WHERE Id ' + @sql_condition + ')'
	EXEC (@sql)
	SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(c1) AS [nc_ix_scan ' + @fragmented + '] FROM narrow_index WHERE c1 ' + @sql_condition + ')'
	EXEC (@sql)
	SET @range_start = @range_start + 100000
	SET @range_iterations_counter += 1
END

/*
--Warm cache (same as above, don't clear cache first)
SET @range_iterations_counter = 1
SET @range_start = 1000
WHILE @range_iterations_counter <= @range_iterations
BEGIN
	SET @sql_condition = ' BETWEEN ' + CAST(@range_start AS varchar(10)) + ' AND ' + CAST(@range_start + @rows_to_read AS varchar(10))
	SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(OwnerUserId) AS [cl_ix_scan ' + @fragmented + '] FROM Posts WHERE Id ' + @sql_condition + ')'
	EXEC (@sql)
	SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(c1) AS [nc_ix_scan ' + @fragmented + '] FROM narrow_index WHERE c1 ' + @sql_condition + ')'
	EXEC (@sql)
	SET @range_start = @range_start + 100000
	SET @range_iterations_counter += 1
END
*/


--Keep track of BP usage
SET @bp_wide_mb = 
(
			SELECT 
			CAST((COUNT(*) * 8.00) / 1024 AS DECIMAL(9,2)) AS MB 
			FROM  sys.allocation_units AS a  
			  JOIN sys.dm_os_buffer_descriptors AS b 
				ON a.allocation_unit_id = b.allocation_unit_id
			  JOIN sys.partitions AS p
				ON a.container_id = p.hobt_id
			WHERE p.object_id = OBJECT_ID('Posts')
			  AND b.database_id = DB_ID()
)

SET @bp_narrow_mb = 
(
			SELECT 
			CAST((COUNT(*) * 8.00) / 1024 AS DECIMAL(9,2)) AS MB 
			FROM  sys.allocation_units AS a  
			  JOIN sys.dm_os_buffer_descriptors AS b 
				ON a.allocation_unit_id = b.allocation_unit_id
			  JOIN sys.partitions AS p
				ON a.container_id = p.hobt_id
			WHERE p.object_id = OBJECT_ID('narrow_index')
			  AND b.database_id = DB_ID()
)

INSERT INTO bp_usage (index_width, fragmented, mb_cache)
VALUES
 ('wide_index', CASE WHEN @fragmented = 'high_frag_level' THEN 'y' ELSE 'n' END, @bp_wide_mb)
,('narrow_index', CASE WHEN @fragmented = 'high_frag_level' THEN 'y' ELSE 'n' END, @bp_narrow_mb)


--Note size of index and frag level, should be comparative between executions
SELECT OBJECT_NAME(object_id), index_type_desc, CAST(avg_fragmentation_in_percent AS decimal(5,1)) AS frag, page_count/1000 AS page_count_1000s
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') 
WHERE index_level = 0 AND alloc_unit_type_desc = 'IN_ROW_DATA' AND OBJECT_NAME(object_id) IN('Posts', 'narrow_index')
ORDER BY index_id
GO


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--1: Cause fragmentation in both indexes

--Fragment wide ix
RAISERROR('About to cause fragmentation in wide index...', 10, 1) WITH NOWAIT
UPDATE Posts SET LastEditorDisplayName = REPLICATE('x', 39)

--Fragment narrow ix
RAISERROR('About to cause fragmentation in narrow index...', 10, 1) WITH NOWAIT
UPDATE narrow_index SET c1 = c1 + 1 WHERE c1 % 100 = 0

--Run the queries
RAISERROR('About to run queries with high frag level...', 10, 1) WITH NOWAIT
EXEC run_the_sql @fragmented = 'high_frag_level', @rows_to_read = 1000, @range_iterations = 10


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--2: no frag in either index, fillfactor set to make same size as when fragmented

RAISERROR('About to eliminate fragmentation in wide index...', 10, 1) WITH NOWAIT
ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (FILLFACTOR = 60)

RAISERROR('About to eliminate fragmentation in narrow index...', 10, 1) WITH NOWAIT
ALTER INDEX x ON narrow_index REBUILD WITH (FILLFACTOR = 50)

--Run the queries
RAISERROR('About to run queries with low frag level...', 10, 1) WITH NOWAIT
EXEC run_the_sql @fragmented = 'low_frag_level', @rows_to_read = 1000, @range_iterations = 10


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Reset
EXEC sp_configure 'cursor threshold', -1
RECONFIGURE

--Stop trace
ALTER EVENT SESSION frag_test ON SERVER STATE = STOP

-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Work the trace data

--Extract into a temp table
DROP TABLE IF EXISTS myXeData
DROP TABLE IF EXISTS myXeData2
DROP TABLE IF EXISTS myXeData3

SELECT CAST(event_data AS XML) AS StatementData
INTO myXeData
FROM sys.fn_xe_file_target_read_file('R:\frag_test*.xel', NULL, NULL, NULL);

--SELECT * FROM myXeData;

--Use XQuery to transform XML to a table
WITH t AS(
SELECT 
 StatementData.value('(event/data[@name="duration"]/value)[1]','bigint') AS duration_microsec
,StatementData.value('(event/data[@name="cpu_time"]/value)[1]','bigint') AS cpu_microsec
,StatementData.value('(event/data[@name="physical_reads"]/value)[1]','bigint') AS physical_reads
,StatementData.value('(event/data[@name="logical_reads"]/value)[1]','bigint') AS logical_reads
,StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') AS statement_
FROM myXeData AS evts
WHERE StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') LIKE '%frag_level%'
),
t2 AS (
SELECT 
 CASE WHEN t.physical_reads = 0 THEN 'warm' ELSE 'cold' END AS cold_or_warm
,CASE WHEN t.statement_ LIKE '%cl_ix_scan_%' THEN 'wide_index' ELSE 'narrow_index' END AS index_width
,CASE WHEN t.statement_ LIKE '%low_frag_level%' THEN 'n' ELSE 'y' END AS fragmented
,duration_microsec
,cpu_microsec
,physical_reads
,logical_reads
FROM t)
SELECT *
INTO myXeData2
FROM t2;




-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Run below manually and investigate the output

--Raw data from the trace
SELECT * FROM myXeData2 ORDER BY cold_or_warm, index_width, fragmented
--Verify pretty consistent values in each quartet. 
--If not, then something special occurred (checkpoint, or something external to SQL) - delete that row.

--Get avg values into a new temp table
SELECT
-- t2.cold_or_warm
 t2.index_width
,t2.fragmented
,AVG(t2.duration_microsec) AS duration_microsec
--,AVG(t2.cpu_microsec) AS cpu_microsec
,AVG(t2.physical_reads) AS physical_reads
--,AVG(t2.logical_reads) AS logical_reads
INTO myXeData3
FROM myXeData2 AS t2
GROUP BY t2.index_width, t2.fragmented
ORDER BY index_width, fragmented

--Check it out
SELECT * FROM myXeData3 ORDER BY index_width, fragmented

--Memory usage
SELECT * FROM bp_usage

--Bottom line, join the two
SELECT p.index_width, p.fragmented, p.duration_microsec, p.physical_reads, b.mb_cache
FROM myXeData3 AS p INNER JOIN bp_usage AS b ON p.index_width = b.index_width AND p.fragmented = b.fragmented
ORDER BY index_width, fragmented


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Cleanup
/*
DROP TABLE IF EXISTS narrow_index
DROP TABLE IF EXISTS myXeData
DROP TABLE IF EXISTS myXeData2
DROP TABLE IF EXISTS myXeData3
DROP TABLE IF EXISTS bp_usage
*/

Possibly the worst analogy in the world – columnstore and ordering

This post is about a topic that everybody who uses columnstore should think about: ordering. The analogy I will be using is flawed in so many ways, but it helps to get my point across. I hope.

You now say to yourself that columnstore indexes aren’t sorted, so what am I yapping about? Hear me out.

Imagine that I’m going to organize my underpants. I don’t know if the word underpants is, sort of, reserved for female underwear, but I’ll use that term anyhow (I’m Swedish and don’t have all the nuances of the English language). Also imagine that I have a bunch of drawers, that I fit about 50 underpants in each drawer, and that I have some 1000 underpants. How should I do this? The drawer is the rowgroup – a real rowgroup holds up to about 1,000,000 rows, but it would be ridiculous to say I have 1,000,000 underpants in each drawer. The 1000 underpants are the rows in the table, and even 1000 underpants seems like a ridiculous number. Anyhow…

Now imagine that I’m very picky about which underpants I’m wearing. The type (boxer, briefs, string, and whatnot), color, size, material, etc. So how do I decide which underpants go into each drawer? I could just shove them in there without much thought about that process. I think that is how many people handle their col-store indexes.

Also imagine that we have a device that registers attributes about the underpants as we add them to a drawer. Let’s call this device u47. The u47 will label the drawer with some info on what underpants are in there, like the range of sizes (xxs to xxl). OK, strange perhaps, but say that my size varies a lot over time. And also the range of colors. And the range of models. You have now figured out that I like to classify things in ranges. I.e., this particular drawer has m-xl, white-green, string to briefs, and cotton to satin. This is the metadata that SQL Server has for each rowgroup/segment, as seen in sys.column_store_segments.
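If you want to look at those drawer labels for one of your own columnstore tables, sys.column_store_segments can be queried along these lines. The table name is a placeholder, and note that for string columns min_data_id/max_data_id are dictionary lookup IDs rather than the actual values:

```sql
--The "notes on each drawer": per-segment min/max metadata
SELECT s.column_id, s.segment_id, s.row_count, s.min_data_id, s.max_data_id
FROM sys.column_store_segments AS s
  JOIN sys.partitions AS p
    ON s.partition_id = p.partition_id
WHERE p.object_id = OBJECT_ID('dbo.my_cs_table')   --placeholder name
ORDER BY s.column_id, s.segment_id
```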

Let’s go back to how you populated your drawers. Did you just shove the underpants in there? Then you probably have close to the full range of all attribute values in each drawer. I.e., every drawer has underpants of all sizes, all colors, etc.

Now I want to wear all black underpants. Let’s say I’m a bit strange in that sense. Since there are black underpants in all drawers, it will take a while to find them all – I have to go through every drawer. Or I want all satin underpants. Or perhaps all medium-size black satin briefs. I still have to go through every drawer.

But what if I took the time to sort the underpants before adding them to the drawers? I can only sort them by one attribute, so I should pick the attribute that most often figures in my “what underpants should I use today?” decision process. You achieve this by having a row-clustered index on that column when building the col-store clustered index, and building the col-store index using WITH DROP_EXISTING. Also, instead of having my friends over, with each of us grabbing some underwear and adding it to our own drawer, I would do this myself. Yes, that means MAXDOP = 1. Without it, say you have 4 threads/friends, you get 4 drawers with xs-s, 4 drawers with m-l, etc. This is a lot of work, so whether or not it is worth it is up to you.
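Sketched in T-SQL, the sort-then-convert recipe might look something like this (table and column names are placeholders; the index name must match for DROP_EXISTING):

```sql
--1: Sort the data by the attribute you most often filter on
CREATE CLUSTERED INDEX cix ON dbo.my_fact_table(the_sort_column)

--2: Convert to clustered columnstore, keeping the sort order.
--   MAXDOP = 1 avoids each thread producing its own range of rowgroups.
CREATE CLUSTERED COLUMNSTORE INDEX cix ON dbo.my_fact_table
WITH (DROP_EXISTING = ON, MAXDOP = 1)
```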

For a non-clustered col-store index, a non-clustered row index covering the col-store index’s columns is enough when converting it from row to col-store (DROP_EXISTING).

The underpants inside a drawer aren’t sorted; this is a col-store index we’re talking about, remember? You might have some fancy vacuum tech that allows you to fit many underpants in each drawer (the various columnstore compression technologies) – something you probably ordered when you bought the u47, btw.

Now, imagine me walking from the shower to my dressing room, thinking about how to find the underpants of the day. I.e., the execution plan. “Today, I want to wear underpants based on color, size and material – model doesn’t matter.” I can’t say in advance which drawers to go through, since that information isn’t available at this stage. It is a run-time decision to do drawer elimination, based on the notes on each drawer (what ranges it contains for each attribute). I.e., rowgroup/segment elimination is a run-time decision.

The underwear will only be partitioned over the drawers in an effective manner for one attribute: the one I sorted them by before populating my drawers. If that is size, then one drawer has only xs, another has only s, etc. But the “xs drawer” has the full range of colors, materials, etc. There might be some correlation between attributes (like rubber underpants tending to be black), but that correlation isn’t usable for us.

How would row indexes fit in this analogy? You have a bunch of post-it notes with which you build a tree-like structure that you lay out on the floor. You use this tree (over size, for instance) to navigate to the ones you are interested in (medium size, for instance), and finally you reach the sub-section in the drawer. You can have several such trees, for various attributes, of course. But the smart vacuum tech you bought with the u47 isn’t available for these indexes. If you are very selective (remember the rubber pants?), then you can very quickly find that pair using the tree structure. But it is quicker to find all black underpants using the col-store index, because you have many of those and the vacuum tech allows you to fit so many in each drawer. And that vacuum tech also magically allows you to put on as many as some 900 pairs of underpants at a time. (Rumor has it that this year we will get the option to put on more than one pair at a time even without this magical vacuum tech. We’ll see.)

The bottom line? Think about how the data is sorted when building your col-store indexes, and consider rebuilding them if your loading/modification routines cause them to degrade over time. Oh, I should add that you shouldn’t mess about with col-store indexes without having Niko’s blog post series handy, and without having read Hugo’s series first.

Disclaimer 1: I don’t own any rubber underpants.
Disclaimer 2: If I did own any rubber underpants, I would still have disclaimer 1 in there…

Does index fragmentation matter?

Short answer: probably not.

Hold on! Make sure you also read my follow-up post on this topic, which has some interesting findings: http://sqlblog.karaszi.com/index-fragmentation-part-2/

Long answer: read on.

You know the story. Every week or so, we defragment the indexes. Many of us use Ola Hallengren’s great script for this, some use Maintenance Plans, and there are of course other alternatives as well. But are we just wasting time and effort? Quite probably we are. I’m going to start with some basics, then do some reasoning, and finally give you some numbers from a very simple test that I ran. The T-SQL code is available. If you give it a try, please let us know your findings for your environment by adding a comment. I will do some generalizations and simplifications, to avoid this post being 10 times longer.

 

What is index fragmentation?

By index here I mean b-tree indexes. You know, the regular type of index. I’m not referring to “odd” indexes such as columnstore, hash, geospatial, fulltext or xml indexes. For a b-tree index (from now on, I will just say “index”), the leaf level is stored according to the order of the index key. I.e., the leaf is sorted. As you modify data, you can get page splits. Imagine inserting a row “in the middle” when there’s no room on the page. SQL Server will split the page: it inserts a page from somewhere into the linked list, moves 50% of the rows to that new page, and then adds the row where it should be.

  • If you now follow the linked list, you jump back and forth. Aka external fragmentation.
  • The page split left us with two (nearly) half-full pages. Aka internal fragmentation.
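Both kinds show up in sys.dm_db_index_physical_stats: avg_fragmentation_in_percent for the external kind and avg_page_space_used_in_percent for the internal kind (the latter requires SAMPLED or DETAILED mode):

```sql
--Check both types of fragmentation for indexes in the current database
SELECT
 OBJECT_NAME(object_id) AS table_name
,index_id
,avg_fragmentation_in_percent    --external fragmentation
,avg_page_space_used_in_percent  --internal fragmentation (page fullness)
,page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED')
WHERE index_level = 0
```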

 

What do we do about it?

Duh, we defragment the indexes, of course! 🙂 The two commonly used methods are:

  • Rebuild the indexes. ALTER INDEX … REBUILD. This will create a new index and then remove the old index. This requires free space in the database file(s) about the same size as the index.
  • Reorganize the index. ALTER INDEX … REORGANIZE. Basically, walk the linked list, and every time you would have to go backward in the file to get to the next page, swap those two pages. It does a bit more than that (re-introducing the fillfactor), but essentially it swaps pages so that the list goes forward physically in the file.

While doing defrag, we typically leave some free space on each page (the fillfactor). Rebuild allows us to specify the fillfactor, while reorg re-applies whatever value was specified when the index was created.
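In T-SQL, using the index from my test code as an example:

```sql
--Rebuild: creates a new index; the fillfactor can be set explicitly
ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (FILLFACTOR = 80)

--Reorganize: shuffles pages in place; re-applies the index's existing fillfactor
ALTER INDEX PK_Posts__Id ON Posts REORGANIZE
```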

 

Why might fragmentation hurt us?

Several aspects here. Let us discuss them one by one:

Lots of empty space

Yes, a page split leaves us with two half-full pages. At that point in time. But as more rows are added, they will fit on that free space.

In the end, you will probably average about 75% fullness (75% being in between half-full and full). That is for indexes where the inserts (and possibly updates) happen “all over the place”.

But we also have ever-increasing indexes. Think columns such as “OrderDate”. Or identity/sequence columns, which are pretty often primary keys, which in turn are pretty often also the clustered index. For such an index, inserts always add rows to the end of the linked list, and SQL Server will just add pages to the end – not “splitting” any page. (Yeah, adding a page to the end of the linked list is technically called a page split in SQL Server lingo, but that says more about SQL Server’s terminology problems than anything else.) The point here is that increasing indexes will not have free space in them caused by inserts, since those go to the end.

Common values I’ve seen for fillfactor are between 70 and 80%.

  • For increasing indexes, that just makes the indexes bigger. And remember that it isn’t unlikely that the actual data is an increasing index (the clustered index). So, instead of having a 100 GB table, you now have a 120 GB table.
  • For other indexes, you probably get about the same size “naturally” (without doing defrag) as when doing a defrag (depending on your fillfactor, of course).

 

Pages are out of order

Ask your SAN people how much difference there is between sequential I/O and random I/O in your SAN. The answer to this question will tell you whether this point is relevant. For a single spinning disk, there is a big difference. With sequential I/O you don’t have the disk head movement. But as you introduce more disks in various RAID configs, add in a SAN in the mix and now have a bunch of disks and also other concurrent activity apart from your SQL Server in that SAN, the picture isn’t so clear anymore. Now add SSD or some similar technology to the mix. In the end, there might not be that much of a difference between random and sequential I/O on a modern storage solution.

Splitting a page takes time

Yup, it sure does. If you can defragment frequently enough, with a fillfactor chosen for each index so that its pages never split, then you avoid this. But how many of us do that? I.e., hand-craft the fillfactor for each index by observing the amount of fragmentation for each index since the last defrag, and repeat this week after week until we have tuned the “perfect” fillfactor for each index. Some of us might do that, but as a consultant, I can tell you that none of my clients have opted for me to do that type of work. In the end, I’m not that certain that we reduce page splits by doing our regular defrag, at least not to the extent that we hope for.

Splitting writes to the transaction log.

Yes, it does. But again, how much difference is there in reality, between defragmenting and not defragmenting, in the number of page splits occurring during business hours?

 

Why might doing defrag hurt us?

Of course there is another side of the coin! Defrag isn’t free. When people ask me whether to defrag, a short question back from me is “How much does it hurt?” If the answer is “Not at all”, then sure, go ahead! If the answer is “A lot!”, then we want to think twice.

It reduces concurrency because of locking

  • Rebuild has an ONLINE option available if you are on Enterprise Edition (EE), which makes the rebuild essentially online. But using ONLINE causes the rebuild to take longer than an offline rebuild.
  • If not on EE, then the whole table will be locked.
    • A shared lock if the rebuild is of a nonclustered index – this allows concurrent reads but not modifications.
    • An exclusive lock if the index is a clustered index – disallowing both reads and modifications.
  • Reorg will only take a brief lock on the two pages “where it currently is”. This makes reorg much more online-ly than rebuild on non-EE.
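On EE, the rebuild alternative then becomes something like:

```sql
--Essentially online rebuild (Enterprise Edition); takes longer than offline
ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (ONLINE = ON, FILLFACTOR = 80)
```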

 

It puts load on your system

Say you have a 200 GB database and rebuild everything. Now you shuffle 200 GB of data. This isn’t free. Even if you use smarter scripts that only defrag what is actually fragmented in the first place, defrag will probably shuffle quite a lot of data. Your VM/SAN/SQL Server/whatever probably has other things to do at night-time. We all fight over these precious night/weekend hours. And for systems that are expected to be online pretty much all the time, this is even more pressing.

You will need a larger LDF file

If you rebuild everything (blindly), then the ldf file needs to be the same size as the (sum of the) data file(s). A smart script perhaps only rebuilds 30% of the data, cutting this down to some 30% of the data size. (This doesn’t apply to the simple recovery model.)

The following log backup will be large

Again, if you rebuild 100 GB worth of indexes, then the subsequent log backup will be 100 GB in size. Etc. You get the picture by now. (Again, this doesn’t apply to the simple recovery model.)

 

Is there a bottom line?

I don’t know. Maybe there is. But as you already know, we need to throw a bunch of “it depends” into the mix. I believe that a lot of defragmenting is going on for a lot of SQL Servers just because “that is the way we always did it”. Our environments aren’t the same as they were 20 years ago.

 

Can we quantify this?

Now we are getting to the really hard part. Our environments aren’t the same: the hardware, the SAN, the data, the VMs, the load patterns. Etc.

Still, I was curious to see if I could measure any difference between a fragmented and a non-fragmented index, on an SSD. I used my Ultrabook, a 2-core proc with 4 logical cores, and an SSD disk. (I did not disable parallelism.) This is not how your production SQL Servers are set up, I hope. But I still wanted to test the SSD aspect, and I think that many of you are moving towards flash and tiered SANs, if you haven’t done that transformation already. Anyhow, at the end of this post, you’ll find the SQL I ran. In short:

  • I used the Stack Overflow database, available to download. The smallest one (roughly 10 GB of data). Here I used the Posts table, which has a clustered index on an identity column. This is my wide index. The table (i.e., the index) has 3.7 million rows and is about 8.5 GB in size (when I ran my tests).
  • I created my own table for the narrow index. I wanted to test both. This has 20 million rows, and is only about 1 GB in size.
  • I have a stored procedure that scans the whole index following the linked list (for both above, one query per each). No data is returned to the client.
  • I run the queries twice in the proc, so I can compare cold (physical I/O) and warm (no physical I/O).
  • I did this first with the indexes fragmented, running the proc 4 times.
  • I then did a defrag, and ran the proc 4 times again.
  • I selected the fillfactor so that we end up with the same index size as in the first run. This can be debated, but what I wanted to look at is the random vs sequential aspect.
  • I was prepared to delete any execution with weird values because of external things going on. That was not necessary in the end.
  • I averaged duration_ms, cpu_ms, physical_reads and logical_reads.
  • I captured the metrics using an Extended Events trace, which I post-processed using T-SQL and XQuery.

If you want to use the script at the end, you will need to go through it and adapt it to your environment. It will perhaps take 30 minutes to understand what it does and adapt the file name for the trace etc., and then some 30 minutes for the execution.

My findings

For my environment, I found no relevant difference between a fragmented index and a non-fragmented index when doing a scan over the whole index, from the first page in the linked list to the last. I’m not surprised, considering that I’m on SSD, but I still wanted to see this with my own eyes, in a sort of controlled environment.

Update: Magnus Ahlkvist did a test at a SQL Saturday event on a spinning disk. He did find a significant difference for the wider table. However, there was no noticeable difference for the narrow index (something worth diving deeper into…).

Here are the numbers:

cold_or_warm  index_width   fragmented  duration_ms  cpu_ms  physical_reads  logical_reads
cold          narrow_index  n                  1848    2543          123787         124471
cold          narrow_index  y                  1877    2550          124307         125571
cold          wide index    n                 15067    2758         1019433        1020312
cold          wide index    y                 15217    2722         1018283        1019295
warm          narrow_index  n                  1149    4104               0         124942
warm          narrow_index  y                  1105    4101               0         126517
warm          wide index    n                   475    1867               0        1028017
warm          wide index    y                   490    1945               0        1031488

References

Paul White: https://sqlperformance.com/2015/01/t-sql-queries/allocation-order-scans

The Code:

-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Setup trace section

SET NOCOUNT ON

IF EXISTS (SELECT * FROM sys.dm_xe_sessions WHERE name = 'frag_test')
 ALTER EVENT SESSION frag_test ON SERVER STATE = STOP
WAITFOR DELAY '00:00:02'

--Delete XE file, using xp_cmdshell (bad, I know)
EXEC sp_configure 'xp_cmdshell', 1 RECONFIGURE WITH OVERRIDE
EXEC xp_cmdshell 'DEL R:\frag_test*.xel', no_output 
EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE WITH OVERRIDE


IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'frag_test')
DROP EVENT SESSION frag_test ON SERVER


CREATE EVENT SESSION frag_test ON SERVER 
ADD EVENT sqlserver.sp_statement_completed()
ADD TARGET package0.event_file(SET filename=N'R:\frag_test')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=2 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO

ALTER EVENT SESSION frag_test ON SERVER STATE = START



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Disable IAM order scan, so we know that SQL Server will follow the linked list
--See https://sqlperformance.com/2015/01/t-sql-queries/allocation-order-scans

EXEC sp_configure 'cursor threshold', 1000000
RECONFIGURE


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Setup section

--You might want to grow the data and log files for StackOverflow database pre-running this test. Or run it twice.

USE StackOverflow

DROP TABLE IF EXISTS narrow_index

--Create the table for the narrow index
SELECT TOP(1000*1000*20) ROW_NUMBER() OVER( ORDER BY (SELECT NULL)) AS c1, CAST('Hello' AS char(8)) AS c2
INTO narrow_index
FROM sys.columns AS a, sys.columns AS b, sys.columns AS c

CREATE CLUSTERED INDEX x ON narrow_index(c1)

--Modify Posts table, so we condense cl ix and make it non-fragmented. So we (soon) can fragment it.
UPDATE Posts SET LastEditorDisplayName = ''
ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (FILLFACTOR = 100)
GO


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Investigate the data if you want

/*
--wide index
SELECT TOP(100) * FROM Posts
EXEC sp_indexinfo Posts				--Found on my web-site
EXEC sp_help 'Posts'

--narrow index
SELECT TOP(100) * FROM narrow_index
EXEC sp_indexinfo narrow_index		--Found on my web-site
EXEC sp_help 'narrow_index'
*/



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Create the proc that executes our SQL

CREATE OR ALTER PROC run_the_sql
 @fragmented varchar(20)
AS
DECLARE 
 @sql varchar(1000)

--Empty cache
CHECKPOINT
DBCC DROPCLEANBUFFERS


--Cold cache
SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(OwnerUserId) AS [cl_ix_scan ' + @fragmented + '] FROM Posts WHERE Id between 1000 AND 20000)'
EXEC (@sql)
SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(c1) AS [nc_ix_scan ' + @fragmented + '] FROM narrow_index WHERE c1 BETWEEN 50000 AND 60000)'
EXEC (@sql)

----Warm cache
SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(OwnerUserId) AS [cl_ix_scan ' + @fragmented + '] FROM Posts WHERE Id between 1000 AND 20000)'
EXEC (@sql)
SET @sql = 'DECLARE @a int SET @a = (SELECT COUNT_BIG(c1) AS [nc_ix_scan ' + @fragmented + '] FROM narrow_index WHERE c1 BETWEEN 50000 AND 60000)'
EXEC (@sql)

--Note size of index and frag level, should be comparative between executions
SELECT OBJECT_NAME(object_id), index_type_desc, CAST(avg_fragmentation_in_percent AS decimal(5,1)) AS frag, page_count/1000 AS page_count_1000s
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') 
WHERE index_level = 0 AND alloc_unit_type_desc = 'IN_ROW_DATA' AND OBJECT_NAME(object_id) IN('Posts', 'narrow_index')
ORDER BY index_id
GO



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--1: cause fragmentation in both indexes

--Fragment cl ix
UPDATE Posts SET LastEditorDisplayName = REPLICATE('x', 39)

--Fragment nx ix
UPDATE narrow_index SET c1 = c1 + 1 WHERE c1 % 100 = 0

--Run the queries
EXEC run_the_sql @fragmented = 'high_frag_level'
EXEC run_the_sql @fragmented = 'high_frag_level'
EXEC run_the_sql @fragmented = 'high_frag_level'
EXEC run_the_sql @fragmented = 'high_frag_level'




-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--2: no frag in either index, fillfactor set to make same size as when fragmented

ALTER INDEX PK_Posts__Id ON Posts REBUILD WITH (FILLFACTOR = 60)
ALTER INDEX x ON narrow_index REBUILD WITH (FILLFACTOR = 50)

--Run the queries
EXEC run_the_sql @fragmented = 'low_frag_level'
EXEC run_the_sql @fragmented = 'low_frag_level'
EXEC run_the_sql @fragmented = 'low_frag_level'
EXEC run_the_sql @fragmented = 'low_frag_level'



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Reset
EXEC sp_configure 'cursor threshold', -1
RECONFIGURE

--Stop trace
ALTER EVENT SESSION frag_test ON SERVER STATE = STOP

-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Work the trace data

--Extract into a temp table
DROP TABLE IF EXISTS myXeData
DROP TABLE IF EXISTS myXeData2

SELECT CAST(event_Data AS XML) AS StatementData
INTO myXeData
FROM sys.fn_xe_file_target_read_file('R:\frag_test*.xel', NULL, NULL, NULL);

--SELECT * FROM myXeData;

--Use XQuery to transform XML to a table
WITH t AS(
SELECT 
 StatementData.value('(event/data[@name="duration"]/value)[1]','bigint') / 1000 AS duration_ms
,StatementData.value('(event/data[@name="cpu_time"]/value)[1]','bigint') /1000 AS cpu_ms
,StatementData.value('(event/data[@name="physical_reads"]/value)[1]','bigint') AS physical_reads
,StatementData.value('(event/data[@name="logical_reads"]/value)[1]','bigint') AS logical_reads
,StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') AS statement_
FROM myXeData AS evts
WHERE StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') LIKE '%frag_level%'
),
t2 AS (
SELECT 
 CASE WHEN t.physical_reads = 0 THEN 'warm' ELSE 'cold' END AS cold_or_warm
,CASE WHEN t.statement_ LIKE '%cl_ix_scan_%' THEN 'wide index' ELSE 'narrow_index' END AS index_width
,CASE WHEN t.statement_ LIKE '%low_frag_level%' THEN 'n' ELSE 'y' END AS fragmented
,duration_ms
,cpu_ms
,physical_reads
,logical_reads
FROM t)
SELECT *
INTO myXeData2
FROM t2;

--Raw data from the trace
SELECT * FROM myXeData2 ORDER BY index_width, cold_or_warm, fragmented
--Verify pretty consistent values in each quartet. 
--If not, then something special occurred (checkpoint, or something external to SQL Server) - delete that row.

--Get avg values and compare them
SELECT
 t2.cold_or_warm
,t2.index_width
,t2.fragmented
,AVG(t2.duration_ms) AS duration_ms
,AVG(t2.cpu_ms) AS cpu_ms
,AVG(t2.physical_reads) AS physical_reads
,AVG(t2.logical_reads) AS logical_reads
FROM myXeData2 AS t2
GROUP BY t2.cold_or_warm, t2.index_width, t2.fragmented
ORDER BY cold_or_warm, index_width, fragmented



-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Cleanup
/*
DROP TABLE IF EXISTS narrow_index
DROP TABLE IF EXISTS myXeData
DROP TABLE IF EXISTS myXeData2
*/

Make sure you update the statistics

You might know that statistics can have a high impact on performance. As you add, remove and modify data, the statistics become more and more outdated.

Sure, there’s the database option that updates statistics automatically, but it is a bit … rough. For a table with 10,000,000 rows, you have to modify 2,000,000 rows (20%) prior to 2016, but only 100,000 rows as of 2016 with database compatibility level 130 (SQRT(1000 * @rows)). Also, when auto-update stats kicks in, it samples the underlying data, in order for it not to take too long, etc.
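You can see where the two formulas cross over with a quick calculation (the row counts are just examples):

```sql
--Rows that must be modified before auto-update stats kicks in:
--pre-2016 (20% of rows) vs 2016+ compat level (SQRT(1000 * rows))
SELECT rows_
      ,rows_ * 0.20 AS old_threshold
      ,SQRT(1000.0 * rows_) AS new_threshold
FROM (VALUES (CAST(10000 AS bigint)), (1000000), (10000000), (1000000000)) AS t(rows_)
--For 10,000,000 rows: 2,000,000 vs 100,000
```

For small tables the new formula is actually less aggressive, but for large tables it kicks in much earlier.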

Many of us use Ola Hallengren’s excellent maintenance solution. I’m sure that some of us have our own favorite modifications to the jobs that the installation script creates for us. My point here is that the installation script does not update statistics by default. Here’s what the job step in the “IndexOptimize – USER_DATABASES” job looks like on my SQL Server 2017 instance, all by default:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@LogToTable = 'Y'

The default value for the @UpdateStatistics parameter is NULL, which means “Do not perform statistics maintenance”. See this page for reference.

So, by using the defaults, you end up defragmenting your indexes (which might not give you that much nowadays, considering that there isn’t that much of a difference between random and sequential I/O on modern disk subsystems). For some of the indexes that you defragment, you will get new statistics as a by-product. Those are the indexes that you rebuild – a rebuild internally creates a new index and then drops the old one. But you likely have many indexes that don’t reach the 30% fragmentation threshold. And consider indexes over a key which is steadily increasing or decreasing: inserting new rows will not cause fragmentation, but the statistics will become out-of-date.

What to do? Easy: just add a new job in which you call the IndexOptimize procedure with options to update all statistics where at least one row has been modified. Ola even has such an example on his web-site, example D. Here it is; I just added a parameter to log the commands to the CommandLog table:

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y',
@LogToTable = 'Y'

Schedule the above as you wish. I prefer to do it early every morning if I can. But your circumstances, like database size, how the data is modified etc., will influence the frequency.

Another option is to modify the existing “IndexOptimize – USER_DATABASES” job and just add the parameter below. This will both defrag your indexes and update statistics.

@UpdateStatistics = 'ALL'

What about Maintenance Plans, you might think? Well, we all know that they aren’t that … smart – which is why we use scripts like Ola’s in the first place. Regarding statistics updates, they will update all stats, regardless of whether any rows at all have been modified since last time. It is a complete waste of resources to update stats if nothing has changed. Even sp_updatestats is smarter in this regard.
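If you are curious which statistics actually have modifications (and hence would be picked up by @OnlyModifiedStatistics = 'Y'), you can peek at sys.dm_db_stats_properties. A sketch, run in the database you are interested in:

```sql
--Statistics with at least one row modified since the last statistics update
SELECT OBJECT_NAME(s.object_id) AS table_name
      ,s.name AS stats_name
      ,sp.last_updated
      ,sp.rows
      ,sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC
```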

Note: This is in no way a criticism of the maintenance solution that Ola provides. He has no way of knowing our requirements in the job he creates. If the installation script could read our minds, I’m sure that it would schedule the jobs for us as well. OTOH, if Ola could create a T-SQL installation script that could read our minds, then he probably would do something else in the first place. 🙂

Rebuilding indexes online, wasting time and resources?

Many SQL Server installations do a regular index rebuild or reorg in order to defragment indexes. One can question how much we really gain by doing this when using modern SANs, SSDs, flash disks, etc. But that is not the topic for this blog post. What I want to raise with this blog is whether to use ONLINE when rebuilding indexes. Some background first:

  • With ONLINE = OFF (the default), you will be blocked from accessing the data while the index rebuild is being performed. Depending on the type of index, you either get a schema modification lock on the table, which prohibits any type of access, or a shared lock, which allows reads only.
  • With ONLINE = ON (requires Enterprise Edition), you get a very brief lock at the beginning and end of the operation. I will refer to the modes as ONLINE and OFFLINE from now on.

There are more details to what locks are acquired, depending on various factors. Check out https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql for more info. The point is that ONLINE = ON allows users to both read and modify data while the operation is on-going. That is a good thing, right? Well, it comes with a price…

The time it takes to rebuild the index can be substantially longer for ONLINE. Many of us have other things to do with the database during night-time and/or weekends. Yes, these are the typical windows in time where we try to fit things such as index rebuilds. Say that you do it night-time and it currently takes 4 hours. Wouldn’t it be nice if you could cut that time down to 1.5 hours? That would leave more time for imports, massaging of data, CHECKDB and other things you want to do. Sure, you can do it ONLINE, but it will slow down access during the rebuild. Also, the more data you modify during the rebuild, the more space you need in tempdb.

My point is not to do ONLINE unless you actually need it. Some systems don’t have to be “open” during night-time anyhow. Only you can tell what your requirements are, of course. To get some numbers, I decided to create a few indexes on a copy of the FactResellerSalesXL_PageCompressed table found in the AdventureworksDW2016 database, and I named that table FactResellerSalesXL. I created the copy without data compression and also without any columnstore index. And then I created a few non-clustered indexes. Here’s the index and space usage for the table, using my sp_indexinfo (with some columns omitted and others re-arranged):

Not a large table by any means, but it gives us something to play with. And it allowed me to do the rebuilds several times in order to rule out figures out of the ordinary. I tested this in both FULL and SIMPLE recovery. The numbers are averages after several executions, in minutes:seconds:

  • FULL recovery
    • ONLINE: 3:45
    • OFFLINE: 1:03
  • SIMPLE recovery
    • ONLINE: 1:40
    • OFFLINE: 0:31

 

I frequently use Ola Hallengren’s scripts for database maintenance. (We all do, right? :-)) The default for Ola’s solution (IndexOptimize) is to do the rebuild ONLINE if possible. Or, to be more specific, the defaults are:

  • < 5% fragmentation: do nothing
  • Fragmentation between 5 and 30 %: do REORGANIZE.
  • Fragmentation over 30%: do REBUILD with ONLINE = ON

I.e., when using Ola’s scripts you will rebuild your indexes ONLINE, assuming you are on Enterprise Edition (or similar). For Maintenance Plans (anybody remember those?), the checkbox for ONLINE is not checked by default.

Bottom line: If you have problems with index rebuilds taking a long time and you don’t need other people accessing the data while this is being done, consider doing it OFFLINE instead. Or perhaps not defragmenting in the first place? Or raise the bar to some 60-70% so you only defragment the indexes that are really fragmented in the first place.
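For instance, with Ola’s IndexOptimize you could raise the thresholds and skip ONLINE, something like the below (the threshold values are just examples – adapt them to your requirements):

```sql
EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLevel1 = 40,                     --default 5: below this, do nothing
@FragmentationLevel2 = 70,                     --default 30: between the levels, reorganize
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REBUILD_OFFLINE',  --rebuild OFFLINE instead of ONLINE
@LogToTable = 'Y'
```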

Free book about execution plans

Grant Fritchey has released the third edition of his book about execution plans. And it is freely available thanks to the cooperation with Redgate.

There’s a lot of good stuff in there. Introduction to execution plans, plan caching, reading plans, etc. and then of course perhaps the “meat of the book”: a discussion about the various operators that you can find in an execution plan. Sure, these are documented in Books Online, but far from the level of this book. Say you find a Hash Match in the plan. Is it good? Is it bad? Under what circumstances? And, exactly what does it really do? Things like that…

Get it here.

Managing tempdb

This post is all about the tempdb database. It is not about capacity planning, like size or number of data files – there is plenty of info about that out there. Instead, this is about managing it, in the sense of changing initial file size, adding a file, removing a file etc. SSMS isn’t obvious for some of these things for the tempdb database, and for some things SSMS is flat out wrong.

I tested all operations on SQL Server 2017 and SSMS 17.8.1.

 

Background

Tempdb is re-created at start-up. Or rather, re-initialized (for lack of a better word). SQL Server does not delete the files and create new ones; that would take ages for large files if you don’t have Instant File Initialization (which you never have for log files). So how does SQL Server know what files to create – the logical and physical names, size etc.? The answer is sys.master_files.

 

sys.master_files

This is the “template” information for the tempdb files. I will refer to sys.master_files as the template below (not to be confused with model, which is the template for other things, but not the file aspects of tempdb). Here you see how the tempdb files will be created/initialized at the next start-up. Don’t confuse this with tempdb.sys.database_files, which shows the current state of the tempdb files. The two queries below will show you the current state of the tempdb files and also the template state:

--Current
SELECT
'tempdb' AS db_name_
,file_id
,name
,physical_name
,size * 8/1024 AS size_MB
,type_desc
,CASE WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(3)) + ' %' ELSE CAST(growth * 8/1024 AS varchar(10)) + ' MB' END AS growth
,max_size * 8/1024 AS max_size_MB
FROM tempdb.sys.database_files
ORDER BY type, file_id

--Template
SELECT
DB_NAME(database_id) AS db_name_
,file_id
,name
,physical_name
,size * 8/1024 AS size_MB
,type_desc
,CASE WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(3)) + ' %' ELSE CAST(growth * 8/1024 AS varchar(10)) + ' MB' END AS growth
,max_size * 8/1024 AS max_size_MB
FROM master.sys.master_files
WHERE DB_NAME(database_id)  = 'tempdb'
ORDER BY db_name_, type, file_id

 

Increasing current and template file size to larger than what it currently is

This is pretty straight-forward, both using T-SQL directly and also using SSMS. Here’s what it looks like in SSMS:

(The word “Initial” for the header “Initial Size (MB)” is pure nonsense. What you see is the current file size, picked up from sys.database_files. I am happy to see that the word “Initial” is removed in SSMS 18.0.)

To increase the file size you just type the desired, larger, size for the tempdb file. The T-SQL command to do this is also very straightforward, for example:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', SIZE = 5GB )

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'temp2', SIZE = 5GB )

Note that SSMS will generate the file size in KB for the T-SQL command, but the T-SQL command accepts KB, MB, GB and even TB.

This operation increases the current file size as well as the template size.

 

Decreasing current file size

To reduce the current size, you can attempt a shrink operation, for instance using a T-SQL command such as the one below:

USE tempdb
DBCC SHRINKFILE(tempdev, 100)

The second parameter is the desired size in MB. This will change the current size as well as the template size. In my experience, you should be prepared for limited success when shrinking a tempdb file to reduce the current size. In many cases, you will have to re-start SQL Server so the file is created with the template size. Or hunt down whatever is using it and take it from there.

 

Changing template file size to smaller than what it currently is

One way is to change the size in the GUI to a smaller size; it will then try to make the current size smaller as well as change the template size. If you don’t want to perform the shrink operation, then specify a larger size than the current size, script the command to a T-SQL query window, and in there just specify whatever size you want. It will generate a command such as:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', SIZE = 100MB )

If the specified size is smaller than the current file size, then it will only change the template, not the current size.

 

Adding file

This is dead-easy. In SSMS, just press the “Add” button and specify the desired name, size etc. This will generate T-SQL such as:

ALTER DATABASE tempdb 
 ADD FILE ( NAME = N'tempdb3', FILENAME = N'R:\SqlData\a\tempdb_mssql_3.ndf' , SIZE = 5GB , FILEGROWTH = 200MB )

This will add it immediately and also to the template (both sys.database_files and sys.master_files).

 

Remove file

This is simple enough if the file is empty. In SSMS you select the file and press the “Remove” button.  It will generate T-SQL such as:

ALTER DATABASE tempdb 
 REMOVE FILE tempdb3

But if the file is currently in use, you will get an error message and nothing is changed: not the current state, nor the template.

You can always attempt to shrink first using the EMPTYFILE option:

USE tempdb
DBCC SHRINKFILE(tempdb3, EMPTYFILE)

If you’re lucky, then the file was emptied and you can now use the ALTER command with the REMOVE FILE option. If not, then you are back to where you started.

You can try to re-start SQL Server and see if that allows you to remove the file. If not, then perhaps use the “failsafe” startup switch: /f (see Erin Stellato’s blog about this: https://www.sqlskills.com/blogs/erin/remove-files-from-tempdb/). Not being connected from SSMS when executing the commands might also help.

 

Changing other file attributes

If you want to change things like autogrow or maxsize, then just use SSMS or T-SQL directly. Here’s an example T-SQL command to change both autogrow and maxsize:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', MAXSIZE = 500GB , FILEGROWTH = 500MB )

 

Moving a file

This operation requires a re-start. SSMS doesn’t allow you to do this, so use T-SQL directly, such as:

ALTER DATABASE tempdb 
 MODIFY FILE (NAME = tempdev, FILENAME = 'R:\SqlData\b\tempdb.mdf')

Warning: Henrik reminded me in the comments section to add a warning here: “Make sure that R:\SqlData\b is present and that SQL Server has sufficient rights to create files here!” He is of course correct. In the worst case you end up with SQL Server refusing to start if it cannot create tempdb where you have specified. In such a case you can try the -f startup switch, perhaps a topic for another blog post.

Now, this is a sneaky one. It changes both sys.database_files and sys.master_files. So, looking only at the catalog views, you will be fooled into thinking that the file was moved immediately. But that didn’t happen – check the disk. Fortunately, the command comes with a message such as:

The file "tempdev" has been modified in the system catalog.
The new path will be used the next time the database is started.

That makes it pretty obvious. Note that the old file will not be removed when you restart your SQL Server. Go delete that file, after verifying that the new file was indeed created (if the “old” one is still the current one, then you won’t be able to delete it as long as SQL Server is running).

 

Summary

Managing tempdb isn’t that hard. As soon as you understand that we have the current state (tempdb.sys.database_files) and the template (master.sys.master_files), it gets easier. And, yes, SSMS is a bit quirky for these operations. The reason, methinks, is that it uses the same GUI as for other databases, which don’t have the aspect of a “template”.

Did I miss any operation? Let me know and I might add it.

 

Bug in SSMS when using Extended Events

I’d like to share a bug that has been irritating me for a long time. It is very easy to handle, once you know what the problem is. I’ve seen it when I want to open an XE file target:

Here’s the relevant text in the message box, to facilitate searching:

The storage failed to initialize using the provided parameters. (Microsoft.SqlServer.XEventStorage)

Cannot view the function ‘fn_MSXe_read_event_stream’, because it does not exist or you do not have permission. (Microsoft SQL Server, Error: 15151)

The problem is that Object Explorer in SSMS tries to use that function in the current database, not the master database (where the function exists, so to speak). So the solution is to open Object Explorer making sure that its current database is the master database.

This bug has been reported on UserVoice, April 2016. Go vote for it if you want to add your voice: https://feedback.azure.com/forums/908035-sql-server/suggestions/32902330-cannot-view-the-function-fn-msxe-read-event-strea

 

Tips for getting started with Extended Events

I know, I know. New habits are hard to learn. Many of us have been using SQL Trace and the Profiler GUI for a very long time. And we know that we are supposed to move over to Extended Events (XE), but we postpone it for some later time. And then we give XE a try, and something doesn’t work as we want. So we go back to more familiar territory.

But XE has really grown on me over the last few years. I’d like to share the things that I initially didn’t like about XE, and how I overcame them. And also some of my other tips to make it easier to be productive with XE. I will deliberately minimize showing T-SQL and queries against the XE dynamic management views here. As you use XE more and more, you will probably use T-SQL to a higher degree. But this blog post is for those of you who want to “get into” XE, and I find a GUI is great as a starting point. Assuming the GUI is any good, that is. And I think the SSMS GUI is, for the most part.

Don’t use the ring buffer as target

This is possibly the single most important thing to remember. Some would point to the fact that the DMV where you read the data is limited to 4 MB as the reason, and I can appreciate that. But more important is that SSMS won’t parse the data for you. XE is XML based, meaning that when you look at the XE data you get XML. And SSMS does nothing to help you with the ring buffer. It will just show the XML as is!
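Just to illustrate the pain: if you do use the ring buffer, you end up reading and shredding the XML yourself, with something like the below (the session name is made up):

```sql
--Pull the raw ring buffer XML for a running session - SSMS shows you nothing nicer than this
SELECT CAST(t.target_data AS XML) AS ring_buffer_xml
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS t
 ON s.address = t.event_session_address
WHERE s.name = 'my_xe_session'
 AND t.target_name = 'ring_buffer'
--...and from here it is XQuery all the way to get anything readable
```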

 

 

 

To the left you have what SSMS show you for the ring buffer and to the right you see what you get when you click the hyperlinked XML: the raw XML.

Use the file as target

So the obvious target would then be to use a file instead:

Note that the wizard to create event session defaults to the ring buffer. So, if you use the wizard, you probably want to change that to a file instead.

Use the targets instead of “watch live data”

Sure, sometimes you want to see things rolling by, as they happen. But saving to a target and looking at that target instead of the live data is more powerful. See the section about sorting etc. below.

Show fields as columns

By default, SSMS only shows the name and timestamp fields. But all you have to do is right-click a field in the lower half and select to show it as a column.

Explore sorting, grouping and aggregations

For this you need to look at a trace target, not the live window (except for filtering), but you can display a target even while the trace is running. Then just explore, and find out that you can easily sort, group and aggregate the data:

Use SSMS to explore events, columns, etc

There are so many things you can capture with XE. But it can also be a bit daunting. On my SQL Server 2017 instance, I have 1511 events. Compare that to the 180 you have for the old trace engine! So, XE is definitely richer, but you probably won't browse through all 1500 events to find whichever could be interesting for the task at hand. So, create an event session using either the wizard or the "real GUI". Just type letters and the GUI will filter on those, as you type:
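If you are curious how many events your own instance exposes, you can count them yourself (a quick sketch; the number will vary by version and patch level):

```sql
-- Count the events exposed by this instance.
SELECT COUNT(*) AS number_of_events
FROM sys.dm_xe_objects
WHERE object_type = 'event';
```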

Don’t expect events etc to be documented

For the old trace engine, we could read about all events and columns in Books Online. The "documentation" for the XE objects is instead in the DMVs; for instance, sys.dm_xe_objects has a "description" column. You don't have to use these DMVs directly, though. Follow my advice above and SSMS will pick up those descriptions for you:
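If you do want to query the DMVs yourself, something like this sketch lists the events along with their descriptions:

```sql
-- Event names and descriptions, straight from the DMVs.
SELECT p.name AS package_name, o.name AS event_name, o.description
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p ON o.package_guid = p.guid
WHERE o.object_type = 'event'
ORDER BY p.name, o.name;
```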

Use the “XEvent Profiler” that was added in SSMS 17.3

MS has probably noticed that there is hesitation to pick up XE. In SSMS 17.3, they added the XEvent Profiler. This is just a quick way to start an XE session with a live data window. Nothing fancy, really, but having it in the "root" of SSMS makes it very snappy to kick off a trace quickly. And you can also customize it. See my earlier blog post on this.

Explore other targets

The wizard only allows you to select from two targets: the dreaded ring buffer and the file target. But there are other interesting targets. Like histogram, to for instance show how many SQL commands were submitted for each database. Or event_counter, which just counts how many of each event occurred; for instance how many attentions, deadlocks and lock escalations we have had. There are other targets as well!
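A histogram session bucketing statements per database could be sketched like this (the session name is an assumption; note that source_type = 1 means we bucket on an action rather than an event column):

```sql
-- Histogram target: count sql_statement_completed per database.
CREATE EVENT SESSION StmtsPerDb ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.database_id))
ADD TARGET package0.histogram
    (SET source = N'sqlserver.database_id',
         source_type = 1);  -- 1 = bucket on an action, 0 = on an event column
```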

Watch out for that latency

By default, you may wait up to 30 seconds after an event happens until it has been delivered to the target(s). This is fine for a trace that you run into a target and analyze later. But when you want to do something interactively, it feels like ages. I'm thinking of the "watch live data" functionality. The wizard doesn't let you specify a value for the MAX_DISPATCH_LATENCY parameter (of the CREATE EVENT SESSION command). So, use the "real GUI", or just script from the wizard and add that parameter. Of course, the shorter the latency, the higher the overhead. So, use with care.
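Adding the parameter can be as simple as this sketch (the session name is an assumption, and the session may need to be stopped before some options can be changed):

```sql
-- Lower the dispatch latency for interactive "watch live data" use.
-- Shorter latency = higher overhead, so use with care.
ALTER EVENT SESSION MyFileSession ON SERVER
WITH (MAX_DISPATCH_LATENCY = 3 SECONDS);
```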

Note that filters and actions are added per event

In most cases you can think of actions as "extra columns" you want to add. Anyhow, XE is flexible in many ways. For instance, you might want to filter one event on something, and some other event on something else. Or accept the overhead of collecting one action for one event but not for another. But the wizard doesn't work this way: the filters and actions you specify will be applied to all events in the trace. You can of course script from the wizard and modify that script. But the "real GUI" is more true to XE in the sense that you specify filters and actions for each event.
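In T-SQL, that per-event granularity looks something like this sketch (the event choices, filter value and file path are illustrative):

```sql
-- One event filtered on duration; another event collecting an action.
CREATE EVENT SESSION PerEventDemo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)               -- filter applies to this event only
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name))     -- action collected for this event only
ADD TARGET package0.event_file
    (SET filename = N'C:\Temp\PerEventDemo.xel');
```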

Don’t be afraid to use T-SQL

You can of course automate and customize enormously when you start using T-SQL to a higher degree. As usual, you can script things in SSMS that can serve as a great starting point. Below is an example of a query that “parses” the XML for a ring buffer trace into columns in the result set:

SELECT 
event_data.value('(@name)[1]', 'varchar(50)') AS event_name
,event_data.value('(@timestamp)[1]', 'datetime2(0)') AS time_stamp
,event_data.value('(data[@name="statement"]/value)[1]','nvarchar(50)') AS statement_
,event_data.value('(data[@name="duration"]/value)[1]','bigint') / 1000 AS duration_ms
,event_data.value('(data[@name="cpu_time"]/value)[1]','bigint') /1000 AS cpu_ms
,event_data.value('(data[@name="logical_reads"]/value)[1]','bigint') AS logical_reads
,event_data.value('(data[@name="physical_reads"]/value)[1]','bigint') AS physical_reads
,event_data.value('(data[@name="row_count"]/value)[1]','bigint') AS row_count
,event_data.value('(action[@name="database_name"]/value)[1]','nvarchar(128)') AS database_name_
FROM #myXeData AS evts
CROSS APPLY evts.StatementData.nodes('//RingBufferTarget/event') AS XEventData(event_data)
--ORDER BY time_stamp
ORDER BY duration_ms DESC

Note that I have saved the trace data into a temp table first. If you don't, those XQuery operations will take ages.
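Capturing the ring buffer XML into that temp table can look something like this (a sketch; the session name is an assumption, and the column name matches the query above):

```sql
-- Grab the ring buffer XML once, into a temp table, so that the XQuery
-- parsing doesn't operate on the DMV directly.
SELECT CAST(st.target_data AS xml) AS StatementData
INTO #myXeData
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS st
    ON s.address = st.event_session_address
WHERE s.name = N'MySession'
  AND st.target_name = N'ring_buffer';
```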

Etc

I feel I could go on and on. But I’ll stop here. I might add something at a later time, if I come to think of something obvious. Feel free to add your tips in the comments.

Should it be [ola] or [olahallengren]?

I’m (of course 😉 ) talking about the schema that Ola Hallengren plans to move his objects to. If you don’t get what I’m talking about, then visit Ola’s database maintenance solution site here.

Ola is soliciting feedback on the schema naming on GitHub now; go and express your opinion at: https://github.com/olahallengren/sql-server-maintenance-solution/issues/91.