Scalar functions and improvements in SQL Server 2019

Now that SQL Server 2019 is released, I want to discuss one of the more interesting improvements, at least from a performance perspective. I believe that this alone can result in greatly improved performance and reduced resource usage. For some installations, I should add.

Just a couple of quick notes:

  • This post is about scalar UDFs that do SELECT. A forthcoming post will be about scalar functions that don’t do SELECT.
  • Jump towards the end to see a table with the performance differences. That is why you are here, after all! Then you can come back and read the introduction paragraphs.

For a couple of decades, “we all” have known that scalar functions are bad. Really bad. To the extent that they are verboten in many installations. Why are they so bad? For two reasons:

  1. The function is called once per row. Imagine if you have a SELECT in there, and the function is called 100,000 times. You have now executed that SELECT 100,000 times.
  2. But also, the overhead of calling the function is significant. So, even if it does something very simple, like adding two numbers, the overhead of calling it many times can be huge.

Don’t believe me? Read on.

I don’t want to get too theoretical regarding how this works, digging into execution plans etc. But the big thing in SQL Server 2019 is that SQL Server can inline those functions for you! I won’t go into the prerequisites for this; check out the product documentation for those. You will also see that this requires database compatibility level 150 (2019) and that the database-scoped option for this isn’t turned off.
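
If you want to check whether SQL Server considers a particular function inlineable, a quick way (assuming you are on 2019) is to look at the is_inlineable column in sys.sql_modules. A minimal sketch, here with the function I create below plugged in:

SELECT OBJECT_NAME(m.object_id) AS function_name, m.is_inlineable
FROM sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID('dbo.ProdSalesAmt')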

I’m using the AdventureworksDW database, and two tables:

  • DimProduct, 606 rows, 6 MB.
  • FactResellerSalesXL, approx 11,000,000 rows. I have three versions of this.
    • Row-based clustered index. 2.48 GB.
    • Same as above but with page compression: 0.68 GB. I don’t use this table for my tests here.
    • Same as the first with a clustered columnstore index: 0.48 GB.

UDF with I/O

Let’s start with a UDF that performs I/O. It is dead-simple, it will just summarize the sales amount from the fact table for a certain product. Here’s the first version:

CREATE OR ALTER FUNCTION ProdSalesAmt(@ProductKey int)
RETURNS int
AS
BEGIN
DECLARE @ret int
SET @ret =
(
SELECT SUM(d.UnitPrice * d.OrderQuantity)
FROM dbo.FactResellerSalesXL AS d
WHERE d.ProductKey = @ProductKey
)
IF @ret IS NULL
SET @ret = 0
RETURN @ret
END
GO

You probably say that the function can be simplified, so here is such a simplified version. The idea is to get rid of all procedural code constructs:

CREATE OR ALTER FUNCTION ProdSalesAmt_Simple(@ProductKey int)
RETURNS int
AS
BEGIN
RETURN (
SELECT ISNULL(SUM(d.UnitPrice * d.OrderQuantity), 0)
FROM dbo.FactResellerSalesXL AS d
WHERE d.ProductKey = @ProductKey
)
END
GO

Some of you know the trick of not using a scalar UDF but an inline table-valued function instead. The calling query has to use CROSS APPLY (or OUTER APPLY), so the optimization isn’t transparent to the client code. Anyhow, here is such a version:

CREATE OR ALTER FUNCTION ProdSalesAmt_TVF(@ProductKey int)
RETURNS table
AS
RETURN (
SELECT ISNULL(SUM(d.UnitPrice * d.OrderQuantity), 0) AS SumSales
FROM dbo.FactResellerSalesXL AS d
WHERE d.ProductKey = @ProductKey
)
GO

I’m going to test with different compatibility levels for the database:

ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150

And I’m also going to test with and without inlining turned on.

ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING  = ON 
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING  = OFF 

Automatic inlining requires compatibility level 150 and also that inlining is turned on. I did not see any difference between turning off inlining one way or the other, so I will just report it as inlining turned on or off.
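
For completeness: there are also more granular ways to turn inlining off, which I didn’t use for these tests. You can disable it for a single function with the INLINE option, or for a single query with a USE HINT. Minimal sketches of both below (the _NoInline function name is just something I made up for the example):

--Disable inlining for one function, using the INLINE option
CREATE OR ALTER FUNCTION dbo.ProdSalesAmt_NoInline(@ProductKey int)
RETURNS int
WITH INLINE = OFF
AS
BEGIN
RETURN (
SELECT ISNULL(SUM(d.UnitPrice * d.OrderQuantity), 0)
FROM dbo.FactResellerSalesXL AS d
WHERE d.ProductKey = @ProductKey
)
END
GO

--Disable inlining for one query, using a hint
SELECT p.ProductKey, dbo.ProdSalesAmt_Simple(p.ProductKey) AS SumSales
FROM DimProduct AS p
OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'))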

But what if we didn’t use a function in the first place? Sure, functions are a neat programming construct. But neat programming constructs and performance don’t necessarily go hand in hand – we have learned that the hard way. So, I will also use a simple join as well as a correlated subquery.

So here are the queries I will use:

-- Q1 Basic scalar function
 SELECT p.ProductKey, p.EnglishProductName, dbo.ProdSalesAmt(p.ProductKey) AS SumSales
 FROM DimProduct AS p
 ORDER BY SumSales

-- Q2 Simplified scalar function
 SELECT p.ProductKey, p.EnglishProductName, dbo.ProdSalesAmt_Simple(p.ProductKey) AS SumSales
 FROM DimProduct AS p
 ORDER BY SumSales

-- Q3 Inline TVF instead of scalar function
 SELECT p.ProductKey, p.EnglishProductName, s.SumSales AS SumSales
 FROM DimProduct AS p
  OUTER APPLY dbo.ProdSalesAmt_TVF(p.ProductKey) AS s
 ORDER BY SumSales

-- Q4 Join instead of calling a function
 SELECT p.ProductKey, p.EnglishProductName, SUM(f.UnitPrice* f.OrderQuantity) AS SumSales
 FROM DimProduct AS p
  LEFT OUTER JOIN FactResellerSalesXL AS f ON f.ProductKey = p.ProductKey
 GROUP BY p.ProductKey, p.EnglishProductName
 ORDER BY SumSales

-- Q5 Correlated subquery
 SELECT p.ProductKey, p.EnglishProductName, 
 ( SELECT SUM(d.UnitPrice * d.OrderQuantity) 
   FROM dbo.FactResellerSalesXL AS d
   WHERE d.ProductKey = p.ProductKey ) AS SumSales
 FROM DimProduct AS p
 ORDER BY SumSales

-- Q6 Join using a table with a columnstore index
 SELECT p.ProductKey, p.EnglishProductName, SUM(f.UnitPrice* f.OrderQuantity) AS SumSales
 FROM DimProduct AS p
  LEFT OUTER JOIN FactResellerSalesXL_CCI AS f ON f.ProductKey = p.ProductKey
 GROUP BY p.ProductKey, p.EnglishProductName
 ORDER BY SumSales

I measured three things: the execution time as shown in SSMS, and duration and logical reads as captured with an Extended Events trace. I will limit duration and logical reads to three significant figures.
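
For reference, a minimal sketch of the kind of Extended Events session I mean, capturing duration and logical reads for completed batches (the session name and file path are just placeholders, not my exact trace definition):

CREATE EVENT SESSION udf_perf ON SERVER
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file(SET filename = N'C:\Temp\udf_perf')
GO
ALTER EVENT SESSION udf_perf ON SERVER STATE = START
--Run the queries, stop the session, then look at the duration and logical_reads fields in the file target
ALTER EVENT SESSION udf_perf ON SERVER STATE = STOP
GO

Here are the numbers: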

Query | Description    | Inlining | Time | Duration (ms) | Logical reads
Q1    | Basic UDF      | N        | 3:23 | 204,000       | 200,000,000
Q1    | Basic UDF      | Y        | 0:32 | 32,100        | 35,600,000
Q2    | Simplified UDF | Y        | 0:32 | 32,100        | 35,600,000
Q3    | Inline TVF     | -        | 0:32 | 33,200        | 35,600,000
Q4    | Join           | -        | 0:00 | 692           | 331,000
Q5    | Join corr sub  | -        | 0:00 | 708           | 331,000
Q6    | Join col-store | -        | 0:00 | 153           | 4,960

Conclusions

Let us focus on the time it takes, the “Time” column above. This is the end-user perspective, and is what matters in the end.

Without inlining, the end-user would have time to grab a cup of coffee and chat with a friend while waiting. This is the pre-2019 behavior, assuming you didn’t do any optimization.

With inlining, the user can swap to the email program and check for new mail while waiting. Prior to 2019 we had to do manual work to get from three and a half minutes to half a minute: rewriting the function and the SQL that called it. SQL Server 2019 basically does that for us! Note that I didn’t see any difference from simplifying the scalar function to only contain one SELECT.

But we could be smarter than that. For this simple example we could either do a join or a correlated subquery. It might not be that simple in all situations, but there can still be room for query optimizations, evidently. Going from half a minute to half a second is a big thing. Now the end-user will barely notice the execution time.

And imagine if the dev/dba created a columnstore index to support the query. Now we basically have instant result.

Also consider the resource consumption. I’m thinking about the logical reads column here. I didn’t measure CPU time for this test, but I would expect it to roughly follow the execution time and logical reads.

I will discuss a scalar UDF that doesn’t do SELECT (I/O) in a later blog post.

Disclaimers

You can do many more combinations than I did, for instance scalar inlining with a columnstore index, etc. My goal here was not to test all possible permutations. Please give the others a try if you want!

This was only one example. I’m sure there are examples where inlining doesn’t work that well, and perhaps also where the benefits are even greater. My aim here was to give it a try, with an example that was simple enough and something that I have seen in the wild (similar cases, that is).

Fragmentation, the final installment

Measuring things can be difficult. I have been painfully reminded of that with my attempts to measure whether fragmentation matters with new hardware. I created a test script in which I read data by seeking into the index (following the linked list of the leaf level of the index).

My thoughts were that “jumping back and forth” doesn’t matter with modern I/O subsystems. But perhaps read-ahead reads more (unnecessary) data into cache and less useful data remains in memory because of this?

In my first post, I came to the conclusion that it didn’t matter for execution time, but I noticed that lots of memory was brought into cache when the index was fragmented. There’s some background info in there: http://sqlblog.karaszi.com/does-index-fragmentation-matter/ .

In my second blog post, I focused on how much memory was brought into cache, and I found that when the index was fragmented, memory usage was 1000 times higher. Surely read-ahead cannot cause this!? http://sqlblog.karaszi.com/index-fragmentation-part-2/

Because of the absurd figures, I had to dig deeper. Paul White told me about a lesser-known optimization where SQL Server will do “large I/O” (my own term for this) if you have plenty of unused memory: instead of reading single pages, it reads whole extents. This makes sense, warming up the cache. https://blogs.msdn.microsoft.com/ialonso/2011/12/09/the-read-ahead-that-doesnt-count-as-read-ahead/

But still, that can’t account for 1000 times more memory usage for a fragmented index. I can live with 5-10 times, but not 1000 times. And then Simon Sabin pointed out the obvious: auto-update of statistics! My first set of SELECTs was done after updating a bunch of rows (so as to fragment the index), and that triggered auto-update statistics (bringing the whole index into memory). Mystery solved.

My findings

As I said, I found it extremely difficult to measure this. Things happen in the machine that can muddle the results. I did the test over and over again, looking at both details and averages. My conclusion is that with the same number of pages, you will see a pretty marginal effect of fragmentation. “Marginal” is in the eye of the beholder, though. More details further down. Please read my earlier posts, especially the first one, for background information.

What about page fullness?

To be honest, I find this to be a much more interesting aspect. I’d like to split this into two sub-sections: increasing and “random” indexes.

Increasing index

This can be an identity or ever-increasing datetime column (think, for instance, order_date). Such an index will not become fragmented from INSERTs, and assuming you don’t delete “every other row”, you will end up with a nice unfragmented index with a high degree of page fullness. If you rebuild it with a fillfactor of 100%, you have just wasted resources. If you rebuild it with a lower fillfactor, you waste space and cause SQL Server to read more data – causing worse performance.

Random index

By this I mean an index where the data distribution is all over the place, for instance an index on a LastName column. As you insert data, pages are split and you end up with a fillfactor of about 75% (which is in between a half-full and a full page). If you rebuild it with a similar fillfactor, you didn’t gain anything. If you rebuild it with a high fillfactor, then the following activity in your database will generate a high number of page splits. It seems reasonable to let it be, accepting a fillfactor of about 75% and a page split every now and then.
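
If you want to see the actual page fullness for an index, sys.dm_db_index_physical_stats reports avg_page_space_used_in_percent – but note that you need SAMPLED or DETAILED mode for that column. A minimal sketch (the table name is just an example):

SELECT
  OBJECT_NAME(s.object_id) AS table_name
 ,s.index_id
 ,s.avg_fragmentation_in_percent
 ,s.avg_page_space_used_in_percent   --page fullness
 ,s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'SAMPLED') AS s
WHERE s.index_level = 0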

But we also delete data!

Do you? Really? OK, let us say that you do. What is the delete pattern? And the degree of deletes? How does this spread over your particular index? Deleting old data over some ever-increasing index will just deallocate those pages. Deleting data that doesn’t match the index key (rows deleted are spread all over the index) will leave empty space. This will be re-used as you insert new rows. But if you delete a lot of data, then you will have a low degree of page fullness. IMO, this is a rather unusual situation and should warrant manually triggered index maintenance, in contrast to rebuilding indexes every week or so.
(And don’t get me started on heaps, but surely you don’t have those…? I once had a SELECT of all rows for a table with only 3000 rows, which took minutes. It had allocated more extents than there were rows in the table. After a rebuild it took, of course, a sub-second. Needless to say, this didn’t stay as a heap for long. And no, the table wasn’t wide, at all.)

Bottom line?

Being who I am, I don’t want to draw too firm conclusions from some tests and reasoning. But I must say that I doubt the value of regular index defrag – assuming you are on some modern I/O subsystem, of course. I think that folks focus on index defrag and forget about statistics. I like to provide the optimizer with as high-quality statistics as possible. Sure, an index rebuild will give you new statistics. But if you condition it based on fragmentation, then you end up not rebuilding a bunch of indexes and having stale statistics. Auto-update statistics is OK, but you have to modify a decent amount of data before it kicks in. And when it kicks in, the user waits for it – unless you do async update… But I digress. Statistics management is a topic of its own. My point is that you probably don’t want to lose track of statistics management because you keep staring at index fragmentation.

My tests – details

Check out this section if you are curious about how I did it, want to see the numbers, and perhaps even do tests of your own. I welcome constructive criticism, such as pointing out weaknesses in my test scripts, or perhaps different interpretations of the outcome!

The basic flow is:

  • Create a proc which does a bunch of reads to fill the buffer pool with data.
  • Create some other supporting stuff, like trace definition, trace control procedure etc.
  • Grow the data and log files for the Stackoverflow (10 GB) database.
  • Create table to hold my measures.
  • Create the narrow_index table, which has a narrow clustered index. A row is some 40 bytes (very rough calculation off the top of my head).
  • Create the wide_index table, which has a wider clustered index – a row is some 340 bytes.
  • Set max server memory to 1 GB.
  • Turn off auto-update statistics.
  • Create the proc that will do the tests.
  • Generate fragmentation in both indexes/tables.
  • The narrow table has 20,000,000 rows and is about 970 MB in size.
  • The wide table has 10,000,000 rows and is about 6.8 GB in size.
  • Run the proc that does the test with 1 row selectivity.
  • Run the proc that does the test with 10000 rows selectivity.
  • Run the proc that does the test with 100000 rows selectivity.
  • Each test runs 99 SELECTs, spread over the data (so as not to read the same rows).
  • Average the result, into a table named resx.
  • I did 4 such iterations, so I have the tables res1, res2, res3 and res4.
  • Check the numbers from these tables and see if they correspond reasonably. They did.
  • Average the results from these four tables to get the end results.

The procedure that executes the SELECTs and captures the measures has this basic flow:

  • Stop the (XE) trace if it is running.
  • Delete all old trace files.
  • Do a first SELECT to get rid of auto-stats (yeah, I disabled it, but just in case).
  • Empty the cache.
  • Read lots of other data into cache.
  • Start the trace.
  • Do the SELECT, in a loop, reading as many rows as requested (param to the proc), as many times as requested (another param to the proc). Don’t read the same data over and over again. This is done both for the narrow and wide table/index.
  • Capture memory usage for each table (buffer pool usage).
  • Get the trace raw data into a temp table.
  • Parse the trace data/above table using XQuery into the measures table.
  • I.e., we now have one row for each SELECT, with a bunch of measures.

The measures

  • Microseconds as captured with the Extended Event trace.
  • Microseconds as “wall clock time”. I.e., capture the datetime before the loop in a variable, calculate the elapsed microseconds after the loop, and divide by the number of executions. This value will then be the same for each execution in the loop. I want to have two different ways to calculate execution time, as a sanity check. In the end, they corresponded pretty well.
  • How much data of the table was brought into cache.
  • Physical reads. This is of lower interest, since read-ahead will muddle the result.
  • Logical reads. This is more interesting and can, among other things, be used to check that we read about the same number of pages for the fragmented and the non-fragmented index. Which we should – they have the same fillfactor!
  • CPU microseconds. I didn’t find this very interesting.

Outcome

I ignored the reading of 1 row; this shouldn’t differ, and the small difference I found I attribute to imperfections in the ability to capture exact numbers.

When reading 10,000 rows for the narrow table, about 75 pages, the fragmented index was about 30% slower.

When reading some 700-900 pages (100,000 rows for the narrow table and 10,000 rows for the wide table), the fragmented index was about 16-17% slower.

When reading about 9,000 pages (100,000 rows for the wide table), the fragmented index was about 15% slower.

Here are the averages, one row per execution (remember that I did 99 SELECTs in a loop, so this is the average of those):

Narrow index
Wide index

And here is the average of those averages

The code

------------------------------------------------------------------------------------------------
--Measure cost of fragmentation in terms of execution time and memory usage
--Tibor Karaszi, 2019
--Requires the Stackoverflow database. I was using the 10 GB version.
------------------------------------------------------------------------------------------------

SET NOCOUNT ON

------------------------------------------------------------------------------------------------
--Setup section
------------------------------------------------------------------------------------------------
USE StackOverflow
GO

--Proc to fill buffer pool, reads lots of data, takes a while to execute, especially if on spinning disk
CREATE OR ALTER PROC fill_bp
AS
DECLARE @a bigint
SET @a = (SELECT AVG(CAST(PostId AS bigint)) FROM Comments)
SET @a = (SELECT AVG(CAST(CommentCount AS bigint)) FROM Posts)
SET @a = (SELECT AVG(CAST(PostId AS bigint)) FROM Comments)
SET @a = (SELECT AVG(CAST(VoteTypeId AS bigint)) FROM Votes)
SET @a = (SELECT AVG(CAST(PostId AS bigint)) FROM Comments)
SET @a = (SELECT AVG(CAST(VoteTypeId AS bigint)) FROM Votes)
SET @a = (SELECT AVG(CAST(VoteTypeId AS bigint)) FROM Votes)
GO

--Disable IAM order scan, so we know that SQL Server will follow the linked list
--See https://sqlperformance.com/2015/01/t-sql-queries/allocation-order-scans
EXEC sp_configure 'cursor threshold', 1000000 RECONFIGURE
GO

--Proc to change trace status
CREATE OR ALTER PROC change_trace_status 
	 @state varchar(50) --start, stop, delete_all_files
	AS

	IF @state = 'start'
	 BEGIN
	   IF NOT EXISTS (SELECT * FROM sys.dm_xe_sessions WHERE name = 'frag_test')
		ALTER EVENT SESSION frag_test ON SERVER STATE = START
	 END

	IF @state = 'stop'
	BEGIN
	   IF EXISTS (SELECT * FROM sys.dm_xe_sessions WHERE name = 'frag_test')
		 ALTER EVENT SESSION frag_test ON SERVER STATE = STOP
	 END

	--Delete XE file, using xp_cmdshell (ugly, I know)
	IF @state = 'delete_all_files'
	BEGIN
	  EXEC sp_configure 'xp_cmdshell', 1 RECONFIGURE WITH OVERRIDE
	  EXEC xp_cmdshell 'DEL R:\frag_test*.xel', no_output 
	  EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE WITH OVERRIDE
	END

	/*
	EXEC change_trace_status @state = 'start'
	EXEC change_trace_status @state = 'stop'
	EXEC change_trace_status @state = 'delete_all_files'
	*/
GO


--Drop and create event session to keep track of execution time
EXEC change_trace_status @state = 'stop'
WAITFOR DELAY '00:00:01'
IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'frag_test')
 DROP EVENT SESSION frag_test ON SERVER
EXEC change_trace_status @state = 'delete_all_files'
CREATE EVENT SESSION frag_test ON SERVER 
	ADD EVENT sqlserver.sp_statement_completed()
	ADD TARGET package0.event_file(SET filename=N'R:\frag_test')
	WITH (MAX_DISPATCH_LATENCY=10 SECONDS)
GO


--Create proc to report progress
CREATE OR ALTER PROC #status 
 @msg varchar(200)
AS
RAISERROR(@msg, 10, 1) WITH NOWAIT
GO

--Grow the data and log files for StackOverflow database.
EXEC #status 'Grow the data and log files for StackOverflow database...'
IF EXISTS(	SELECT size * 8/(1024*1024), * 
			FROM sys.database_files 
			WHERE name = N'StackOverflow2010' AND size * 8/(1024*1024) < 30)
	ALTER DATABASE [StackOverflow] MODIFY FILE ( NAME = N'StackOverflow2010', SIZE = 30GB )
IF EXISTS(	SELECT size * 8/(1024*1024), * 
			FROM sys.database_files 
			WHERE name = N'StackOverflow2010_log' AND size * 8/(1024*1024) < 15)
	ALTER DATABASE [StackOverflow] MODIFY FILE ( NAME = N'StackOverflow2010_log', SIZE = 15GB )
GO


--Table to hold output 
DROP TABLE IF EXISTS measures

CREATE TABLE measures (
 id int identity(1,1) PRIMARY KEY NOT NULL
,rows_to_read int NOT NULL
,index_width varchar(20) NOT NULL
,fragmented varchar(2) NOT NULL
,usec_xe bigint NOT NULL
,usec_clock bigint NOT NULL
,cpu_microsec bigint NOT NULL
,physical_reads bigint NOT NULL
,logical_reads bigint NOT NULL
,mb_cache decimal(9,2) NOT NULL
);



--Create the table for the narrow index
EXEC #status 'Setup section. Create table with narrow index...'
DROP TABLE IF EXISTS narrow_index


--Adjust numbers. 20,000,000 rows means about 970 MB in the end
SELECT TOP(1000*1000*20) ROW_NUMBER() OVER( ORDER BY (SELECT NULL)) AS c1, CAST('Hello' AS varchar(80)) AS c2
INTO narrow_index
FROM sys.columns AS a, sys.columns AS b, sys.columns AS c


CREATE CLUSTERED INDEX x ON narrow_index(c1)


--Create the table for the wide index
EXEC #status 'Setup section. Create table with wide index...'
DROP TABLE IF EXISTS wide_index

--Adjust numbers. 10,000,000 rows means about 6.8 GB in the end
SELECT 
 TOP(1000*1000*10) ROW_NUMBER() OVER( ORDER BY (SELECT NULL)) AS c1
,CAST('Hi' AS char(80)) AS c2
,CAST('there' AS char(80)) AS c3
,CAST('I''m' AS char(80)) AS c4
,CAST('on' AS char(80)) AS c5
,CAST('my' AS varchar(200)) AS c6
INTO wide_index
FROM sys.columns AS a, sys.columns AS b, sys.columns AS c

CREATE CLUSTERED INDEX x ON wide_index(c1)
GO


------------------------------------------------------------------------------------------------
--Investigate the data if you want

/*
--wide index
SELECT TOP(100) * FROM wide_index
EXEC sp_indexinfo wide_index		--Found on my web-site
EXEC sp_help 'wide_index'

--narrow index
SELECT TOP(100) * FROM narrow_index
EXEC sp_indexinfo narrow_index		--Found on my web-site
EXEC sp_help 'narrow_index'
*/

--Execute this if you want to have a rather full BP, restricts memory to 1 GB
EXEC sp_configure 'max server memory', 1000 RECONFIGURE

--Turn off auto-update statistics
ALTER DATABASE Stackoverflow SET AUTO_UPDATE_STATISTICS OFF

------------------------------------------------------------------------------------------------
--/Setup section
------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------
--Create the proc that executes our SQL
EXEC #status 'Create the proc that executes our SQL...'
GO
CREATE OR ALTER PROC run_the_sql
 @fragmented varchar(20)
,@rows_to_read int
,@range_iterations int
,@fill_bp char(1)			--'y' or 'n'
,@report_frag char(1) = 'n'
AS
DECLARE 
 @sql varchar(1000)
,@sql_condition varchar(1000)
,@bp_wide_mb decimal(9,2)
,@bp_narrow_mb decimal(9,2)
,@range_start int
,@range_iterations_counter int
,@a int
,@start_time datetime2
,@exec_time_wide_index_usec bigint
,@exec_time_narrow_index_usec bigint

EXEC change_trace_status @state = 'stop'
EXEC change_trace_status @state = 'delete_all_files'

--Do a first execution to get autostats etc out of the way
DBCC FREEPROCCACHE
SET @range_iterations_counter = 1
SET @range_start = 1000
SET @a = (SELECT COUNT_BIG(c1) AS c1_count FROM wide_index WHERE c1 BETWEEN @range_start AND (@range_start + @rows_to_read - 1))
SET @a = (SELECT COUNT_BIG(c1) AS c1_count FROM narrow_index WHERE c1 BETWEEN @range_start AND (@range_start + @rows_to_read - 1))

--Empty data cache (buffer pool, BP)
CHECKPOINT
DBCC DROPCLEANBUFFERS

--Run proc to read stuff into BP if requested
IF @fill_bp = 'y'
  EXEC fill_bp

EXEC change_trace_status @state = 'start'

--Do the SELECTs, narrow index
SET @range_iterations_counter = 1
SET @range_start = 1000
SET @start_time = SYSDATETIME()
WHILE @range_iterations_counter <= @range_iterations
BEGIN
	SET @a = (SELECT COUNT_BIG(c1) AS c1_count FROM narrow_index WHERE c1 BETWEEN @range_start AND (@range_start + @rows_to_read - 1))
	SET @range_start = @range_start + 100000
	SET @range_iterations_counter += 1
END
SET @exec_time_narrow_index_usec = DATEDIFF_BIG(microsecond, @start_time, SYSDATETIME()) / @range_iterations

--Do the SELECTs, wide index
SET @range_iterations_counter = 1
SET @range_start = 1000
SET @start_time = SYSDATETIME()
WHILE @range_iterations_counter <= @range_iterations
BEGIN
	SET @a = (SELECT COUNT_BIG(c1) AS c1_count FROM wide_index WHERE c1 BETWEEN @range_start AND (@range_start + @rows_to_read - 1))
	SET @range_start = @range_start + 100000
	SET @range_iterations_counter += 1
END
SET @exec_time_wide_index_usec = DATEDIFF_BIG(microsecond, @start_time, SYSDATETIME()) / @range_iterations

EXEC change_trace_status @state = 'stop'

--Keep track of BP usage
SET @bp_wide_mb = 
(
			SELECT 
			CAST((COUNT(*) * 8.00) / 1024 AS DECIMAL(9,2)) AS MB 
			FROM  sys.allocation_units AS a  
			  JOIN sys.dm_os_buffer_descriptors AS b 
				ON a.allocation_unit_id = b.allocation_unit_id
			  JOIN sys.partitions AS p
				ON a.container_id = p.hobt_id
			WHERE p.object_id = OBJECT_ID('wide_index')
			  AND b.database_id = DB_ID()
)

SET @bp_narrow_mb = 
(
			SELECT 
			CAST((COUNT(*) * 8.00) / 1024 AS DECIMAL(9,2)) AS MB 
			FROM  sys.allocation_units AS a  
			  JOIN sys.dm_os_buffer_descriptors AS b 
				ON a.allocation_unit_id = b.allocation_unit_id
			  JOIN sys.partitions AS p
				ON a.container_id = p.hobt_id
			WHERE p.object_id = OBJECT_ID('narrow_index')
			  AND b.database_id = DB_ID()
)


--Wait for trace data to arrive in target
WAITFOR DELAY '00:00:10'

--Get the trace data into our table
SELECT CAST(event_data AS XML) AS StatementData
INTO #myRawXeData
FROM sys.fn_xe_file_target_read_file('R:\frag_test*.xel', NULL, NULL, NULL);

--Done with trace file, delete it
EXEC change_trace_status @state = 'delete_all_files';

--Transform raw trace data into our measures table
WITH t AS(
SELECT 
 StatementData.value('(event/data[@name="duration"]/value)[1]','bigint') AS duration_microsec
,StatementData.value('(event/data[@name="cpu_time"]/value)[1]','bigint') AS cpu_microsec
,StatementData.value('(event/data[@name="physical_reads"]/value)[1]','bigint') AS physical_reads
,StatementData.value('(event/data[@name="logical_reads"]/value)[1]','bigint') AS logical_reads
,StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') AS statement_
FROM #myRawXeData AS evts
WHERE StatementData.value('(event/data[@name="statement"]/value)[1]','nvarchar(500)') LIKE '%index WHERE c1%'
),
t2 AS (
SELECT 
 CASE WHEN t.statement_ LIKE '%wide_index%' THEN 'wide_index' ELSE 'narrow_index' END AS index_width
,CASE @fragmented WHEN 'high_frag_level' THEN 'y' ELSE 'n' END AS fragmented
,duration_microsec
,CASE WHEN t.statement_ LIKE '%wide_index%' THEN @exec_time_wide_index_usec ELSE @exec_time_narrow_index_usec END AS usec_clock
,cpu_microsec
,physical_reads
,logical_reads
,CASE WHEN t.statement_ LIKE '%wide_index%' THEN @bp_wide_mb ELSE @bp_narrow_mb END AS mb_cache
FROM t)
INSERT INTO measures(rows_to_read, index_width, fragmented, usec_xe, usec_clock, cpu_microsec, physical_reads, logical_reads, mb_cache)
SELECT @rows_to_read, index_width, fragmented, duration_microsec, usec_clock, cpu_microsec, physical_reads, logical_reads, mb_cache
FROM t2;

--Report fragmentation level, if requested
IF @report_frag = 'y'
	--Note size of index and frag level, should be comparative between executions
	SELECT
	  OBJECT_NAME(object_id) AS table_name
	 ,index_type_desc
	 ,CAST(avg_fragmentation_in_percent AS decimal(5,1)) AS frag_level
	 ,page_count/1000 AS page_count_1000s
	FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') 
	WHERE index_level = 0 AND alloc_unit_type_desc = 'IN_ROW_DATA' AND OBJECT_NAME(object_id) IN('narrow_index', 'wide_index')
	ORDER BY index_id
GO


------------------------------------------------------------------------------------------------
--1: Generate fragmentation in both indexes

--Fragment wide ix
EXEC #status 'Generate fragmentation in wide index...'
UPDATE wide_index SET c6 = REPLICATE('a', 200) WHERE c1 % 20 = 0
UPDATE STATISTICS wide_index WITH FULLSCAN

--Fragment narrow ix
EXEC #status 'Generate fragmentation in narrow index...'
UPDATE narrow_index SET c2 = REPLICATE('a', 20) WHERE c1 % 100 = 0
UPDATE STATISTICS narrow_index WITH FULLSCAN

--Run the queries
EXEC #status 'Run queries with high frag level...'
EXEC run_the_sql @fragmented = 'high_frag_level', @rows_to_read = 1,		@range_iterations = 99, @fill_bp = 'y'
EXEC run_the_sql @fragmented = 'high_frag_level', @rows_to_read = 10000,	@range_iterations = 99, @fill_bp = 'y'
EXEC run_the_sql @fragmented = 'high_frag_level', @rows_to_read = 100000,	@range_iterations = 99, @fill_bp = 'y'


-----------------------------------------------------------------------------------
--2: no frag in either index, fillfactor set to make same size as when fragmented

EXEC #status 'Eliminate fragmentation in wide index...'
ALTER INDEX x ON wide_index REBUILD WITH (FILLFACTOR = 48)

EXEC #status 'Eliminate fragmentation in narrow index...'
ALTER INDEX x ON narrow_index REBUILD WITH (FILLFACTOR = 50)

--Run the queries
EXEC #status 'Run queries with low frag level...'
EXEC run_the_sql @fragmented = 'low_frag_level', @rows_to_read = 1,		 @range_iterations = 99, @fill_bp = 'y'
EXEC run_the_sql @fragmented = 'low_frag_level', @rows_to_read = 10000,	 @range_iterations = 99, @fill_bp = 'y'
EXEC run_the_sql @fragmented = 'low_frag_level', @rows_to_read = 100000, @range_iterations = 99, @fill_bp = 'y'


-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Reset
EXEC sp_configure 'cursor threshold', -1
EXEC sp_configure 'max server memory', 2147483647
RECONFIGURE
GO

-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Run below manually and investigate the output

--Raw data from the trace
SELECT * FROM measures ORDER BY rows_to_read, index_width, fragmented

--Average the data.
--Note that I ran this script 4 times and used INTO here to create 4 tables. Res1, res2, res3, and res4.
SELECT
 m.rows_to_read
,m.index_width
,m.fragmented
,AVG(m.usec_xe) AS usec_xe
,AVG(m.usec_clock) AS usec_clock
,AVG(m.mb_cache) AS mb_cache
,AVG(m.physical_reads) AS physical_reads
,AVG(m.logical_reads) AS logical_reads
,AVG(m.cpu_microsec) AS cpu_microsec
-- INTO res4						--if you want to persist the results
FROM measures AS m
GROUP BY m.rows_to_read, m.index_width, m.fragmented
ORDER BY index_width, rows_to_read, fragmented;


--Check the details from the averages of each execution
WITH x AS(
SELECT 1 AS ord, * FROM res1
UNION ALL
SELECT 2 AS ord, * FROM res2
UNION ALL
SELECT 3 AS ord, * FROM res3
UNION ALL
SELECT 4 AS ord, * FROM res4
)
SELECT * FROM x
WHERE index_width = 'narrow_index'
ORDER BY index_width, rows_to_read, fragmented, ord;

--Average the averages
WITH x AS(
SELECT 1 AS ord, * FROM res1
UNION ALL
SELECT 2 AS ord, * FROM res2
UNION ALL
SELECT 3 AS ord, * FROM res3
UNION ALL
SELECT 4 AS ord, * FROM res4
)
SELECT
 m.rows_to_read
,m.index_width
,m.fragmented
,AVG(m.usec_xe) AS usec_xe
,AVG(m.usec_clock) AS usec_clock
,AVG(m.mb_cache) AS mb_cache
,AVG(m.physical_reads) AS physical_reads
,AVG(m.logical_reads) AS logical_reads
,AVG(m.cpu_microsec) AS cpu_microsec
 FROM x AS m
GROUP BY m.rows_to_read, m.index_width, m.fragmented
ORDER BY index_width, rows_to_read, fragmented;





-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
--Cleanup
/*
DROP TABLE IF EXISTS narrow_index
DROP TABLE IF EXISTS myXeData
DROP TABLE IF EXISTS myXeData2
DROP TABLE IF EXISTS bp_usage
*/

Looking for strange (in your SQL Server)

Do you know if you have weird stuff going on in your SQL Server? Perhaps you walk through the errorlog every now and then? (Do you, really? You should!).

But there are lots of things that don’t end up in the errorlog. For these kinds of things, I created what I refer to as my KMFDM trace. This is from a tune of theirs, “Looking for strange“.

I decided to have this as an article on my web-site. The reason is that I think of it as something that may evolve over time. Hopefully I’ll get feedback, with suggestions to refine the trace (adding filters, events, etc.).

Needless to say, I’m using Extended Events. Instead of capturing each event, which can get pretty noisy, I use the event_counter target. This just counts how many times each event has happened.
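
To illustrate the idea, here is a minimal sketch – not the actual trace from the article, just a couple of the events mentioned below hooked up to the event_counter target (the session name is a placeholder):

CREATE EVENT SESSION looking_for_strange ON SERVER
ADD EVENT sqlserver.lock_deadlock,
ADD EVENT sqlserver.lock_escalation
ADD TARGET package0.event_counter
GO
ALTER EVENT SESSION looking_for_strange ON SERVER STATE = START
GO
--Look at the counts so far (target_data is XML)
SELECT t.target_data
FROM sys.dm_xe_sessions AS s
 JOIN sys.dm_xe_session_targets AS t ON t.event_session_address = s.address
WHERE s.name = 'looking_for_strange' AND t.target_name = 'event_counter'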

You find the article here, with the details: https://karaszi.com/looking-for-strange

Here is some info from that article. In particular, I include below which events I added, and why:

General events

Attention. This fires when a client times out. The client might do a retry, so the end-user isn’t even aware that this is happening, except that the performance sucks. The time-out period is specified at the client side, in the database API. “Indicates that a cancel operation, client-interrupt request, or broken client connection has occurred. Be aware that cancel operations can also occur as the result of implementing data access driver time-outs.”

Auto_stats. If you have lots of auto-update statistics going on, then you might want to do something about that. The user submitting a query when auto-update occurs will notice a halt while the update happens (unless set to async, but then the query is run with inferior statistics). Perhaps you want to run manual updates more frequently? Or remove some table/index/stats from the auto-update handling? I had to work quite heavily with filtering for this to get rid of noise. “Occurs when index and column statistics are automatically updated. This event can be generated multiple times per statistics collection when the update is asynchronous.”

Database_file_size_change. The change can be for several reasons. A grow or shrink can be automatic or manual; this should capture all of them. You should be aware of manual size changes, and they shouldn’t be frequent. But frequent autogrow you might want to handle by manually growing the file or changing the increment. And frequent autoshrink is so bad that I doubt I have to explain any further. Hopefully you don’t have autoshrink turned on for any of your databases. If this happens a lot, consider adding a filter to only catch shrink operations and/or add a file target. “Occurs when any of the data or log files for a database changes size. Use this event to monitor changes to the size of database files.”

Database_started. You probably want to know if databases are created or attached frequently. Not to mention if you have autoclose turned on for any database and that causes a database to be started. “Database started.”

Lock_timeout_greater_than_0. If a programmer specified a time-out while waiting for a lock, then this can fire. Why not the event simply named “lock_timeout”? For two reasons: one is that I noticed it fires more than once for each occurrence. The other reason is that a developer might have deliberately set a timeout of 0, to immediately get an exception, and those you probably don’t want to catch. “Occurs when a request for a lock times out because of a blocking lock that is being held by a different resource. This event is identical to lock_timeout, but excludes cases in which the time-out value is 0, such as when lock probes are used. Time-outs with a duration of 0 do not necessarily indicate a blocking problem.”

Long_io_detected. You might have seen messages in the log that an I/O operation took longer than 15 seconds. This event fires when that happens. I didn’t include this in the performance section since 15 seconds is such a ridiculously long time for an I/O operation, and something is clearly not normal if this event fires. “Long IO detected”

Lock_deadlock. Hopefully this is obvious. If you have lots of deadlocks, then you want to do something about it. Perhaps run a trace and catch the xml_deadlock_report event, and then pass this on to the developers. “Occurs when an attempt to acquire a lock is canceled for the victim of a deadlock.”

Lock_escalation. When SQL Server escalates a lock to a higher level (table), it has first acquired lots of fine-grained locks, and then it uses the sledgehammer by acquiring a table-level lock. You don’t want this to happen too frequently; adding indexes or other measures can reduce lock escalations. “Occurs when a fine-grain lock is converted into a coarser-grain lock, such as when a row lock is converted into an object lock. Use this event to monitor how locks are being used on the server.”

I also wanted to have the plan_affecting_convert event, to capture when some conversion happens in a WHERE clause so that it makes it impossible to use an index seek. But SQL Server seems to include anything with a conversion here, including conversions in the column list. This makes the event, IMO, useless.

Performance related events

Check the trace definitions for these. Some might happen a lot. If you aren’t interested in these, then just skip them. Perhaps you can’t do anything about it in the first place?

Possibly the worst analogy in the world – columnstore and ordering

This post is about a topic that everybody who uses columnstore should think about, and that is ordering. The analogy I will be using is flawed in so many ways, but it helps to get my point across. I hope.

You now say to yourself that columnstore indexes aren’t sorted, so what am I yapping about? Hear me out.

Imagine that I will organize my underpants. I don’t know if the word underpants is, sort of, reserved for female underwear, but I’ll use that term anyhow (I’m Swedish and don’t have all nuances of the English language). Also imagine that I have a bunch of drawers. Say I fit about 50 underpants in each drawer and I have some 1000 underpants. How should I do this? The drawer is the rowgroup, but it would be ridiculous to say you have about 1,000,000 underpants in each drawer. The 1000 underpants is the number of rows in the table, and even 1000 underpants seems like a ridiculous number. Anyhow…

Now imagine that I’m very picky with what underpants I’m wearing: the type (boxer, briefs, string, and whatnot), color, size, material etc. So how do I decide which underpants go into each drawer? I could just shove them in there without much thought about that process. I think that is how many handle their col-store indexes.

Also imagine that we have a device that registers attributes about the underpants as we add them to the drawer. Let’s call this device u47. So, the u47 will label the drawer with some info on what briefs are in there, like the range of sizes (xxs to xxl). OK, strange perhaps, but say that my size varies a lot over time. And also the range of colors. And the range of models. You have now figured out that I like to classify things in ranges. I.e., this particular drawer has m-xl, white-green, string to briefs and cotton to satin. This would be the metadata that SQL Server has for each rowgroup/segment, as seen in sys.column_store_segments.
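
Here is a minimal sketch of how you can peek at that metadata (the table name is just an example; also note that for many data types min_data_id and max_data_id are encoded values rather than the raw column values):

SELECT
  s.column_id
 ,s.segment_id
 ,s.row_count
 ,s.min_data_id
 ,s.max_data_id
FROM sys.column_store_segments AS s
 JOIN sys.partitions AS p ON p.partition_id = s.partition_id
WHERE p.object_id = OBJECT_ID('dbo.MyFactTable')
ORDER BY s.column_id, s.segment_id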

Let’s go back to how you populated your drawers. Did you just shove the underpants in there? Then you probably have close to the full range of all attribute values in each drawer. I.e., every drawer has underpants of all sizes, all colors, etc.

Now I want to wear all black underpants. Let’s say I’m a bit strange in that sense. Since there are black underpants in all drawers, it will take a while to find them all; I have to go through each drawer. Or I want all satin underpants. Or perhaps all medium-size black satin briefs. I still have to go through each drawer.

But what if I took the time to sort the underpants before adding them to the drawers? I can only sort them by one attribute. I should pick the one attribute that I most often have in my “what underpants should I use today?” decision process. You achieve this by having a row-based clustered index on that column and building the col-store clustered index using WITH DROP_EXISTING. Also, instead of having my friends over and each of us grabbing some underwear and adding it to each person’s drawer, I would do this myself. Yes, that means MAXDOP = 1. Without it, say you have 4 threads/friends, then you have 4 drawers with xs-s, 4 drawers with m-l, etc. This is a lot of work, so whether or not it is worth it is up to you.
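
In T-SQL terms, a minimal sketch of that technique (table, column and index names are just examples):

--Sort the data by creating a row-based clustered index on the column you most often filter on
CREATE CLUSTERED INDEX cci_MyFactTable ON dbo.MyFactTable (SomeColumn)
GO
--Convert it to a clustered columnstore index; DROP_EXISTING keeps the sort order and MAXDOP = 1 keeps it from being interleaved
CREATE CLUSTERED COLUMNSTORE INDEX cci_MyFactTable ON dbo.MyFactTable
WITH (DROP_EXISTING = ON, MAXDOP = 1)
GO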

For a non-clustered col-store index, it would be enough with a row-based non-clustered index covering the relevant columns when converting it from row to col-store (DROP_EXISTING).

The underpants inside the drawer aren’t sorted; this is a col-store index we’re talking about, remember? You might have some fancy vacuum-tech that allows you to fit many underpants in each drawer (the various columnstore compression technologies), something you probably ordered when you bought the u47, btw.

Now, imagine me walking from the shower to my dressing room and thinking about how to find the underpants of today. I.e., the execution plan. “Today, I want to wear underpants based on color, size and material – model doesn’t matter”. I can’t say which drawers to go through, since that information isn’t available at this stage. It is a run-time decision to do drawer elimination based on the notes on each drawer (what ranges it contains for each attribute). I.e., rowgroup/segment elimination is a run-time decision.

The underwear will only be partitioned over the drawers in an effective manner for one attribute, the one I sorted them by before populating my drawers. If that is size, then one drawer has only xs, another has only s, etc. But the “xs drawer” has the full range of colors, materials etc. There might be some correlation between attributes (like rubber underpants tending to be black), but that correlation isn’t usable for us.

How would row indexes fit in this analogy? You have a bunch of post-it notes with which you build a tree-like structure that you lay out on the floor. You use this tree (over size, for instance) to navigate to the ones you are interested in (medium size, for instance), and finally you reach the sub-section in the drawer. You can have several such trees, for various attributes, of course. But this smart vacuum-tech you bought with the u47 isn’t available for these indexes. If you are very selective (remember the rubber pants?), then you can very quickly find that pair using the tree structure. But it is quicker to find all black underpants using the col-store index, because you have many of those and the vacuum-tech allows you to fit so many in each drawer. And that vacuum-tech also magically allows you to put on as many as some 900 pairs of underpants at a time. (Rumor has it that this year we will have the option to put on more than one pair at a time even without this magical vacuum tech. We’ll see.)

The bottom line? Think about how the data is sorted when building your col-store indexes, and consider rebuilding them if your loading/modification routines cause them to degrade over time. Oh, I should add that you shouldn’t mess about with col-store indexes without having Niko’s blog post series handy, and also having read Hugo’s series first.

Disclaimer 1: I don’t own any rubber underpants.
Disclaimer 2: If I did own any rubber underpants, I would still have disclaimer 1 in there…

Make sure you update the statistics

You might know that statistics can have a high impact on performance. As you add, remove and modify data, the statistics will become more and more outdated.

Sure, there’s the database option that updates statistics automatically, but it is a bit … rough. For a table with 10,000,000 rows, you have to modify 2,000,000 rows prior to 2016 (20%), but only 100,000 rows as of 2016 with database compatibility level 130 (SQRT(1000 * @rows)). Also, when auto-update stats kicks in, it will sample the underlying data, in order for it not to take too long, etc.
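
As a quick sanity check of those numbers, and a way to see how far the statistics for a table have drifted, something like the below can be handy (the table name is just an example):

--Old threshold (20%) vs new threshold (SQRT(1000 * rows)) for a 10,000,000 row table
SELECT 0.20 * 10000000 AS old_threshold, SQRT(1000.0 * 10000000) AS new_threshold
--old_threshold = 2,000,000 and new_threshold = 100,000

--How many modifications have been made since each statistics object was last updated?
SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats AS s
 CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable')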

Many of us use Ola Hallengren’s excellent maintenance solution. I’m sure that some of us have our own favorite modifications we do to the jobs that the installation script creates for us. My point here is that the installation script does not update statistics by default. Here’s how the job step in the “IndexOptimize – USER_DATABASES” job looks on my SQL Server 2017 instance, all by default:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@LogToTable = 'Y'

The default value for the @UpdateStatistics parameter is NULL, which means “Do not perform statistics maintenance”. See this page for reference.

So, by using the defaults, you end up defragmenting your indexes (which might not give you that much nowadays, considering that there isn’t that much of a difference between random and sequential I/O on modern disk subsystems). For some of the indexes that you defragment, you will get new statistics as a by-product. Those are the indexes that you rebuild – a rebuild internally creates a new index and then drops the old one. But you likely have many indexes that don’t reach the 30% fragmentation threshold. And consider indexes over a key that is steadily increasing or decreasing: inserting new rows will not cause fragmentation, but the statistics will become out-of-date.

What to do? Easy, just add a new job in which you call the IndexOptimize procedure with options to update all statistics where at least one row has been modified. Ola even has such an example on his web-site, Example D. Here it is; I just added a parameter to log the commands to the CommandLog table:

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y',
@LogToTable = 'Y'

Schedule the above as you wish. I prefer to do it early every morning if I can. But your circumstances, like database size, how the data is modified, etc., will influence the frequency.

Another option is to modify the existing “IndexOptimize – USER_DATABASES” job and just add the parameter below. This will both defrag your indexes and update statistics.

@UpdateStatistics = 'ALL'

What about Maintenance Plans, you might think? Well, we all know that they aren’t that … smart – which is why we use scripts like Ola’s in the first place. Regarding statistics updates, they will update all stats, regardless of whether any rows at all have been modified since last time. It is a complete waste of resources to update stats if nothing has changed. Even sp_updatestats is smarter in this regard.

Note: This is in no way a criticism of the maintenance solution that Ola provides. He has no way of knowing our requirements in the job he creates. If the installation script could read our minds, I’m sure that it would schedule the jobs for us as well. OTOH, if Ola could create a T-SQL installation script that could read our minds, then he probably would do something else in the first place. 🙂

Rebuilding indexes online, wasting time and resources?

Many SQL Server installations do regular index rebuilds or reorgs in order to defragment indexes. One can question how much we really gain by doing this when using modern SANs, SSDs, flash disks, etc. But that is not the topic for this blog post. What I want to raise here is whether to use ONLINE when rebuilding indexes. Some background first:

  • With ONLINE = OFF (the default), you will be blocked from accessing the data while the index rebuild is being performed. Depending on the type of index, you either get a schema modification lock on the table, which prohibits any type of access, or a shared lock, which allows reads only.
  • With ONLINE = ON (requires Enterprise Edition), you get a very brief lock at the beginning and end of the operation. I will refer to the modes as ONLINE and OFFLINE from now on.

There are more details about which locks are acquired depending on various factors; check out https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql for more info. The point is that ONLINE = ON allows users to both read and modify data while the operation is ongoing. That is a good thing, right? Well, it comes with a price…

The time it takes to rebuild the index can be substantially longer for ONLINE. Many of us have other things to do with the database during night-time and/or weekends. Yes, these are the typical windows in time where we try to fit things such as index rebuilds. Say that you do it at night and it currently takes 4 hours. Wouldn’t it be nice if you could cut that time down to 1.5 hours? That would leave more time for imports, massaging of data, CHECKDB and other things you want to do. Sure, you can do it ONLINE, but it will slow down access during the rebuild. Also, the more data you modify during the rebuild, the more space you need in tempdb.
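
For reference, the two variants look like this (a minimal sketch, using the table I describe below):

--Offline rebuild (the default): faster, but blocks access while it runs
ALTER INDEX ALL ON dbo.FactResellerSalesXL REBUILD WITH (ONLINE = OFF)

--Online rebuild: users can keep reading and modifying data, but it takes longer
ALTER INDEX ALL ON dbo.FactResellerSalesXL REBUILD WITH (ONLINE = ON)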

My point is to not do ONLINE unless you actually need it. Some systems don’t have to be “open” during night-time anyhow. Only you can tell what your requirements are, of course. To get some numbers, I decided to create a few indexes on a copy of the FactResellerSalesXL_PageCompressed table found in the AdventureworksDW2016 database, and I named that table FactResellerSalesXL. I created the copy without data compression and also without any columnstore index, and then I created a few non-clustered indexes. Here’s the index and space usage for the table, using my sp_indexinfo (with some columns omitted and others re-arranged):

Not a large table by any means, but it gives us something to play with. And it allowed me to do the rebuilds several times in order to rule out figures out of the ordinary. I tested this in both FULL and SIMPLE recovery. The numbers are averages of several executions, in minutes:seconds:

  • FULL recovery
    • ONLINE: 3:45
    • OFFLINE: 1:03
  • SIMPLE recovery
    • ONLINE: 1:40
    • OFFLINE: 0:31

 

I frequently use Ola Hallengren’s scripts for database maintenance. (We all do, right? :-)) The default for Ola’s solution (IndexOptimize) is to do the rebuild ONLINE if possible. Or, to be more specific, the defaults are:

  • < 5% fragmentation: do nothing
  • Fragmentation between 5 and 30 %: do REORGANIZE.
  • Fragmentation over 30%: do REBUILD with ONLINE = ON

I.e., when using Ola’s scripts you will rebuild your indexes ONLINE, assuming you are on Enterprise Edition (or similar). For Maintenance Plans (anybody remember those?), the checkbox for ONLINE is not checked by default.

Bottom line: If you have problems with index rebuilds taking a long time and you don’t need other people accessing the data while this is being done, consider doing it OFFLINE instead. Or perhaps not defragmenting in the first place? Or raise the bar to some 60-70% so you only defragment the indexes that are really fragmented in the first place.
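
If you use Ola’s IndexOptimize, that could look something like the sketch below – a sketch only, where I both raise the thresholds and ask for offline rebuilds; the parameter values are just examples, so check Ola’s documentation before using it:

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 50,
@FragmentationLevel2 = 70,
@LogToTable = 'Y'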

Free book about execution plans

Grant Fritchey has released the third edition of his book about execution plans. And it is freely available thanks to the cooperation with Redgate.

There’s a lot of good stuff in there. Introduction to execution plans, plan caching, reading plans, etc. and then of course perhaps the “meat of the book”: a discussion about the various operators that you can find in an execution plan. Sure, these are documented in Books Online, but far from the level of this book. Say you find a Hash Match in the plan. Is it good? Is it bad? Under what circumstances? And, exactly what does it really do? Things like that…

Get it here.

Explaining Activity Monitor

This post is not about how to use the Activity Monitor (AM) tool in SQL Server Management Studio (SSMS) – there are loads of such posts written already. Also, it is not about dissing AM – you will find such posts as well.

What I want to do is to explain the information in AM, for instance what time span the information covers. I see lots of confusion about this, and incorrect assumptions can be made because of that. This aspect is typically not mentioned in other blog posts on AM that I have seen.

The SQL Server documentation is very vague on the topic. In fact, the most recent documentation article explaining the information in AM that I found is for SQL Server 2008 R2. And that documentation is in many places vague, or even flat-out incorrect. For more recent versions of SQL Server, there’s not even an attempt to explain the information. (Please let me know if you find such official articles.)

I know that lots of people use the Activity Monitor tool in SQL Server, especially less experienced DBAs. The more experienced DBA often uses other tools like Adam Machanic‘s sp_whoisactive (see this and this), Brent Ozar’s sp_BlitzFirst, etc.

Say for instance that you had a massive amount of I/O for one of your databases for the last day, up until a minute ago. I.e., the I/O load for the database varies a bit, but on average it is very high. You look in AM, which shows this database as silent since you happened to have low I/O the last minute, and AM shows some other database as being the one with high load.

So, we need to think about the time dimension here. AM does a refresh at certain intervals. By default it is every 10 seconds, but you can change that by right-clicking somewhere in AM and changing it in the context menu. Keep this in mind. It is important. We will refer to it as the most recent refresh interval, or snapshot.

I’m using SQL Server Management Studio (SSMS) 2016, and SQL Server 2016. It is possible that other version combinations do other things. With the information in this blog post, you will be able to find and determine that for yourself. Please comment if you find important deviations, or perhaps just confirmations (like “SSMS 2012 does the same thing”).

I mainly used tracing to spy on the SQL submitted by AM.

The four top graphs:

 
“% Processor Time” is picked up directly from the OS (using WMI, I believe). Most probably a Performance Monitor counter in the end.


“Database I/O” is the sum of I/O for all database files performed since the last snapshot. This is fine since we intuitively understand that, because we have the trail of prior snapshot values displayed in the graph. The information comes from sys.dm_io_virtual_file_stats, doing a SUM over num_of_bytes_read + num_of_bytes_written, converted to MB.
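
Something along these lines, if you want to look at the raw numbers yourself – a sketch of the idea only, since these values are cumulative since the instance started and AM diffs them between snapshots:

SELECT
  DB_NAME(fs.database_id) AS database_name
 ,CAST(SUM(fs.num_of_bytes_read + fs.num_of_bytes_written) / (1024.0 * 1024.0) AS decimal(12,2)) AS total_mb
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
GROUP BY fs.database_id
ORDER BY total_mb DESC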


“Batch Requests/sec” is the number of batches we have submitted to our SQL Server since the last snapshot. Again, this is pretty intuitive since we have a trail of snapshot values in the graph. The information is from the performance counter “Batch Requests/sec”, picked up from sys.sysperfinfo (bad Microsoft, you should use sys.dm_os_performance_counters 🙂 ).
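
The counter is cumulative, so AM has to diff two samples to get a per-second figure. A sketch of the raw value (here from the DMV rather than from sys.sysperfinfo):

SELECT pc.cntr_value AS batch_requests_total
FROM sys.dm_os_performance_counters AS pc
WHERE pc.counter_name = 'Batch Requests/sec'
  AND pc.object_name LIKE '%SQL Statistics%'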


“Waiting Tasks” shows how many tasks are waiting for something (a lock to be released, for instance). This is not as straightforward as the others. The information comes from sys.dm_os_wait_stats UNION ALL with sys.dm_os_waiting_tasks.

The values are compared to those from the prior snapshot. However, a higher weight in that calculation will be given to the prior snapshot values if you have a short refresh interval. Say that you have a 1-second refresh interval. Then only a weight of 9% is from the current interval and 91% is from the prior interval. Since the current interval value will become the prior value for the next snapshot, a “trail” is kept back in time with a diminishing weight the longer back in time you go.

If you refresh every 10 seconds, then the current interval weight is 60% and the previous interval weight is 40%. It pretty quickly approaches 100% for the current snapshot the longer the refresh interval you are using. Hats off to Microsoft for so clearly documenting this in the temporary stored procedures that AM is using. It is in the source code; all you need to do is grab it in a trace and read it. The name of the procedure is #am_generate_waitstats, and it is created when you open the AM window in SSMS.
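
My reading of that logic boils down to an exponential moving average, something like the sketch below (the variable names are mine, not the ones used in #am_generate_waitstats):

DECLARE @weight decimal(4,2) = 0.60  --weight of the current snapshot, e.g. 60% at a 10-second refresh interval
DECLARE @current_value bigint = 500  --value measured in the current interval (made-up number)
DECLARE @prior_value bigint = 200    --the weighted value from the prior snapshot (made-up number)

SELECT @weight * @current_value + (1 - @weight) * @prior_value AS displayed_value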

Note that not all wait types are represented here. See the section below named ‘The “Resource Waits” pane’ for more information. The “Waiting Tasks” diagram and the “Resource Waits” pane share some T-SQL code.

The “Processes” pane
This is pretty straightforward, so I won't spend much time on it here. It shows information about the sessions you have at the moment the snapshot is produced. It uses a query joining DMVs such as sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_os_tasks, etc. Go grab the query in a trace and paste it into a query window if you want to dig into it.
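
A much-simplified sketch of the idea, just to show the shape of it (AM's own query is considerably larger):

-- My own simplified version; AM joins more DMVs, including sys.dm_os_tasks.
SELECT s.session_id, s.login_name, s.status, r.command, r.wait_type, r.blocking_session_id
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.is_user_process = 1;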

The “Resource Waits” pane
The idea here is to show where SQL Server is waiting, “wait stats”.

It uses the same procedure as the “Waiting Tasks” diagram, #am_generate_waitstats, to get the information. See the section on “Waiting Tasks” above to understand the time dimension for this. For simplicity, we can say that it shows only wait stats for the past 30-60 seconds. This is important. Imagine that you had loads of a certain wait type, but none during the last minute. This pane can then fool you into thinking that you didn't have any waits of that kind, just because you didn't have any for the past minute. Note, though, that the “Cumulative Wait Time” column is the sum of waits in the group since SQL Server was restarted or since we last cleared the wait stats (DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)).
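
If you want the cumulative picture instead, you can always go straight to the DMV. This is my own query, not something AM runs:

-- Top waits since startup (or since the counters were last cleared).
SELECT TOP (10) ws.wait_type, ws.waiting_tasks_count, ws.wait_time_ms
FROM sys.dm_os_wait_stats AS ws
ORDER BY ws.wait_time_ms DESC;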

In an attempt to be friendly, it groups and summarizes wait stats into various categories. That would be fine if there were some documentation about which individual wait types are in each group. Also, some wait types are ignored. One of the ignored wait types is CXPACKET; another is THREADPOOL.

AM creates a table named #am_wait_types when you open the AM window, which it populates with various wait types and the group each wait type belongs to. This table has a column named “ignore”. The two wait types I mentioned above have 1 in this “ignore” column. There are 35 rows that have 1 in the “ignore” column. To be fair, most are benign, but the two I immediately reacted to are the ones I mentioned above.

But hang on, how many rows do we have in this #am_wait_types table in total? The answer is 263. Are there more than 263 wait types in 2016? You bet! I did a SELECT from sys.dm_os_wait_stats and got 875 rows. So, 633 of the wait types in 2016 are not at all considered by AM. That of course begs the question: did I find any interesting wait types that aren't in #am_wait_types? I didn't go through them all, but from a quick glance, SOS_SCHEDULER_YIELD for instance caught my attention. If you want to go through them, then I highly recommend Paul Randal's wait types library. If you find anything that stands out, then please post a comment. The query below lists the wait types that AM doesn't consider:

SELECT ws.wait_type
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_type NOT IN(
    SELECT wt.wait_type FROM #am_wait_types AS wt WHERE wt.ignore = 0
)
ORDER BY ws.wait_type

The “Data File I/O” pane
This shows I/O activity per database file since the last snapshot. Again, you could have had lots of I/O for a database during the last day, but if it was silent for the past minute, then this pane can potentially mislead you.
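
Here is a per-file query of my own along the same lines. Unlike the pane, it shows cumulative values since startup rather than the delta since the last snapshot.

-- My own query, not AM's. Cumulative I/O per database file since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file_name,
       (vfs.num_of_bytes_read + vfs.num_of_bytes_written) / 1024.0 / 1024.0 AS io_mb
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
ORDER BY io_mb DESC;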

The “Recent Expensive Queries” pane
This shows the most expensive queries, based on the column you sort on, executed since the last snapshot. If you have, say, a 10-second snapshot interval, you will only see the queries executed during those 10 seconds. AM uses a procedure named #am_get_querystats to collect the information. There are a few things going on inside this procedure, but at the most basic level it uses sys.dm_exec_query_stats and sys.dm_exec_requests to get queries from the cache and currently executing queries. It then does some processing and stores the result in temp tables so we can later sort on different columns depending on which metric we are interested in. I suggest that you spend some time with the source code if you want to dig deeper.
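
The raw ingredients, heavily simplified, look something like this. This is my own query; #am_get_querystats does a lot more, for instance snapshotting, computing deltas and extracting the individual statement text.

-- My own simplification: expensive cached queries with their batch text.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time,
       qs.total_logical_reads,
       st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;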

The “Active Expensive Queries” pane
This is very straightforward. It executes a query which uses sys.dm_exec_requests joined to a few other DMVs.
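
A minimal sketch of the same idea (my own query, not the one AM submits):

-- Currently executing requests with their batch text, excluding this session.
SELECT r.session_id, r.status, r.cpu_time, r.logical_reads, r.total_elapsed_time, st.text AS batch_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id <> @@SPID;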

How to dig deeper
I thought about including snippets of AM's source code, the list of wait stats groupings, etc. here. But I decided against that. It would litter this post, and the code might differ between releases and builds of SSMS. So, if you are interested in digging deeper, fire up your favorite trace tool (Extended Events, Profiler, server-side trace, …) and catch the SQL submitted by AM.

When you open the AM window, it executes a few batches that create the procedures and tables it will later use at each refresh interval. Here are the ones that I found:

  • The table #am_wait_types, which contains the wait types that AM bothers about, along with the grouping and which of those are ignored. This is the one you want to investigate to see which wait types are ignored by AM; either by having 1 in the “ignore” column, or by not being in the table in the first place. Match against sys.dm_os_wait_stats.
  • The procedure #am_generate_waitstats which collects wait stats with some trail back in time, as explained above.
  • The table #am_dbfileio, in which file I/O stats are stored.
  • The tables #am_request_count and #am_fingerprint_stats_snapshots, used for query statistics.
  • The procedure #am_get_querystats, which collects and stores the query statistics.

At each refresh interval, you see 4 T-SQL batches submitted for the top 3 graphs that are T-SQL based (ignoring the “dead” graph that I have in SSMS 2016, and also ignoring “% Processor Time” since it is WMI-based).

If you have expanded the “Processes” pane, you also see a batch that collects that information at every refresh interval.

If you have expanded the “Resource waits” pane, you also see a batch that does a SELECT from the #am_resource_mon_snap table at every refresh interval, with grouping and SUM based on resource type.

If you have expanded the “Data File I/O” pane, you also see a batch that collects that information at every refresh interval.

If you have expanded the “Recent Expensive Queries” pane, you also see a batch that executes the #am_get_querystats procedure to collect that information. It is executed at refresh intervals, but not necessarily at every refresh interval. Check the source code for the procedure and you will see that SSMS executes this no more frequently than every 15 seconds.

If you have expanded the “Active Expensive Queries” pane, you also see a batch that executes a query to collect that information. It seems to be limited so that it doesn't execute more frequently than every 5 seconds (even with a shorter refresh interval).

The bottom line
As always, with an understanding of the data we see, we have a chance to turn it into information. The Activity Monitor certainly has its quirks, but if you do feel like using a GUI for these types of things, I hope that you are now better equipped to interpret what you are seeing. Personally, I find “Resource Waits”, “Data File I/O” and “Recent Expensive Queries” less useful because of how the time dimension is handled. As for expensive queries, IMO nothing beats the Query Store in SQL Server 2016.

Logging wait stats over time

We all know how valuable wait statistics can be when doing performance analysis. One thing I feel is missing in SQL Server is a trail of various measures, for instance wait stats. I really wish Microsoft would include something in this area, which could be used as a baseline. I recently found this, from the Tiger Team. It looks promising, but I haven't implemented it yet.

Anyhow, I have created a script that logs wait stats over time. I keep a trail of logged values which is denser, for instance once per minute, and also a trail which is coarser, for instance once per day.

The solution has a few tables, and then you create a couple of Agent jobs. The logging and clean-up parts are, methinks, pretty much done. The part that needs more work is the analysis of the information. I.e., expect to find queries and views added. Suggestions are of course very welcome!
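
If you just want a feel for the basic idea before grabbing the real thing, here is a minimal sketch. The table and column names are made up for illustration and don't necessarily match the actual script.

-- Sketch only; names are made up and do not necessarily match the real script.
CREATE TABLE dbo.WaitStatsLog
(
    log_time            datetime2(0) NOT NULL,
    wait_type           nvarchar(60) NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    CONSTRAINT PK_WaitStatsLog PRIMARY KEY (log_time, wait_type)
);
GO
-- Scheduled by an Agent job, for instance once per minute.
INSERT INTO dbo.WaitStatsLog (log_time, wait_type, wait_time_ms, waiting_tasks_count)
SELECT SYSDATETIME(), ws.wait_type, ws.wait_time_ms, ws.waiting_tasks_count
FROM sys.dm_os_wait_stats AS ws
WHERE ws.wait_time_ms > 0;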

You find the stuff here.

Log Page Life Expectancy over time

You often see Page Life Expectancy (PLE) referred to as an interesting performance monitor counter. And it can be! It indicates how long a page is expected to stay in cache, from the time it was brought into cache.

But just looking at a snapshot value doesn't say that much. It might be high, but only because you haven't had a high turnover of your pages for the past couple of hours. Or the other way around: you happen to look just after a very rare monster query. Furthermore, having a log can show you how PLE changes over time.

“The app was slow at two o’clock today.”

You look at the PLE log and you see how PLE dropped at that time, which indicates that something was executed that did lots of I/O. Or you don’t see PLE drop at all, and can assume that it probably was a blocking situation instead.

Some of you already have performance monitoring solutions that include PLE logging over time. For those of you who don't, I created a very simple solution. It involves a table, a scheduled Agent job and queries to read the data. That's it.
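
Here is a minimal sketch of the same idea, again with made-up names; see the link below for the real thing.

-- Sketch only; names are made up and do not necessarily match the real solution.
CREATE TABLE dbo.PleLog
(
    log_time    datetime2(0) NOT NULL PRIMARY KEY,
    ple_seconds bigint       NOT NULL
);
GO
-- Scheduled by an Agent job. The Buffer Manager object gives the instance-wide PLE.
INSERT INTO dbo.PleLog (log_time, ple_seconds)
SELECT SYSDATETIME(), pc.cntr_value
FROM sys.dm_os_performance_counters AS pc
WHERE pc.counter_name = 'Page life expectancy'
  AND pc.object_name LIKE '%Buffer Manager%';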

You find it at http://karaszi.com/log-page-life-expectancy-over-time .

Improvements and comments are of course appreciated. 🙂