Make sure you update the statistics

You probably know that statistics have a high impact on performance. As you add, remove and modify data, the statistics become more and more outdated.

Sure, there's the database option that updates statistics automatically, but it is a bit … rough. For a table with 10,000,000 rows, auto-update doesn't kick in until you have modified 2,000,000 rows on compatibility levels prior to 2016 (the old 20% threshold), or 100,000 rows as of compatibility level 2016 (SQRT(1000 * @rows)). Also, when auto-update stats does kick in, it samples the underlying data so that it doesn't take too long, etc.
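If you want to see how far behind your statistics actually are, you can query sys.dm_db_stats_properties and look at the modification counter. A minimal sketch, where the table name is just an example:

SELECT s.name AS stats_name
,sp.last_updated
,sp.rows
,sp.rows_sampled
,sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.FactResellerSalesXL');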

Many of us use Ola Hallengren's excellent maintenance solution. I'm sure that some of us have our own favorite modifications that we make to the jobs the installation script creates for us. My point here is that the installation script does not update statistics by default. Here's how the job step in the "IndexOptimize – USER_DATABASES" job looks on my SQL Server 2017 instance, all by default:

EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@LogToTable = 'Y'

The default value for the @UpdateStatistics parameter is NULL, which means "Do not perform statistics maintenance". See this page for reference.

So, by using the defaults, you end up defragmenting your indexes (which might not give you that much nowadays, considering that there isn't that much of a difference between random and sequential I/O on modern disk subsystems). For some of the indexes that you defragment, you will get new statistics as a by-product: those are the indexes that you rebuild, since a rebuild internally creates a new index and then drops the old one. But you likely have many indexes that never reach the 30% fragmentation threshold. And consider indexes over a key that is steadily increasing or decreasing: inserting new rows will not cause fragmentation, but the statistics will still become out-of-date.

What to do? Easy: just add a new job in which you call the IndexOptimize procedure with options to update all statistics where at least one row has been modified. Ola even has such an example on his web site, Example D. Here it is; I just added a parameter to log the commands to the CommandLog table:

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y',
@LogToTable = 'Y'

Schedule the above as you wish. I prefer to run it early every morning if I can, but your circumstances, such as database size and how the data is modified, will influence the frequency.
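If you prefer T-SQL to clicking around in SSMS for the scheduling part, something along the lines of below creates such a job. This is only a sketch: the job name, the schedule and the database where you installed Ola's procedures are examples, so adjust before using.

USE msdb;
GO
EXECUTE dbo.sp_add_job
@job_name = N'IndexOptimize - USER_DATABASES - statistics';
EXECUTE dbo.sp_add_jobstep
@job_name = N'IndexOptimize - USER_DATABASES - statistics',
@step_name = N'Update modified statistics',
@subsystem = N'TSQL',
@database_name = N'master', --or wherever you installed Ola's procedures
@command = N'EXECUTE dbo.IndexOptimize
@Databases = ''USER_DATABASES'',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@UpdateStatistics = ''ALL'',
@OnlyModifiedStatistics = ''Y'',
@LogToTable = ''Y''';
EXECUTE dbo.sp_add_jobschedule
@job_name = N'IndexOptimize - USER_DATABASES - statistics',
@name = N'Every morning',
@freq_type = 4, --daily
@freq_interval = 1,
@active_start_time = 40000; --04:00:00
EXECUTE dbo.sp_add_jobserver
@job_name = N'IndexOptimize - USER_DATABASES - statistics';
GO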

Another option is to modify the existing "IndexOptimize – USER_DATABASES" job and just add the parameter below. This will both defragment your indexes and update statistics.

@UpdateStatistics = 'ALL'

What about Maintenance Plans, you might think? Well, we all know that they aren't that … smart – which is why we use scripts like Ola's in the first place. Regarding statistics updates, they will update all statistics, regardless of whether any rows at all have been modified since last time. It is a complete waste of resources to update statistics when nothing has changed. Even sp_updatestats is smarter in this regard.

Note: This is in no way a criticism of the maintenance solution that Ola provides. He has no way of knowing our requirements in the job he creates. If the installation script could read our minds, I’m sure that it would schedule the jobs for us as well. OTOH, if Ola could create a T-SQL installation script that could read our minds, then he probably would do something else in the first place. 🙂

Rebuilding indexes online, wasting time and resources?

Many SQL Server installations do regular index rebuilds or reorganizations in order to defragment indexes. One can question how much we really gain by doing this when using modern SANs, SSDs, flash disks, etc., but that is not the topic for this blog post. What I want to raise here is whether to use ONLINE when rebuilding indexes. Some background first:

  • With ONLINE = OFF (the default), you will be blocked from accessing the data while the index rebuild is being performed. Depending on the type of index, you either get a schema modification lock on the table, which prohibits any type of access, or a shared lock, which allows reads only.
  • With ONLINE = ON (requires Enterprise Edition), you only get a very brief lock at the beginning and end of the operation. I will refer to the modes as ONLINE and OFFLINE from now on; see the example below.
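In plain T-SQL the two variants look like below (the index and table names are just examples):

ALTER INDEX ix_OrderDateKey ON dbo.FactResellerSalesXL REBUILD; --OFFLINE (the default), blocks access while running
ALTER INDEX ix_OrderDateKey ON dbo.FactResellerSalesXL REBUILD WITH (ONLINE = ON); --Enterprise Edition, brief locks at start and end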

There are more details to which locks are acquired, depending on various factors. Check out https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql for more info. The point is that ONLINE = ON allows users to both read and modify data while the operation is ongoing. That is a good thing, right? Well, it comes with a price…

The time it takes to rebuild the index can be substantially longer with ONLINE. Many of us have other things to do with the database during night-time and/or weekends – yes, these are the typical windows in which we try to fit things such as index rebuilds. Say that you do it at night and it currently takes 4 hours. Wouldn't it be nice if you could cut that time down to 1.5 hours? That would leave more time for imports, massaging of data, CHECKDB and other things you want to do. Sure, you can do it ONLINE, but it will slow down access during the rebuild. Also, the more data you modify during the rebuild, the more space you need in tempdb.

My point is to not use ONLINE unless you actually need it. Some systems don't have to be "open" during night-time anyhow. Only you can tell what your requirements are, of course. To get some numbers I created a few indexes on a copy of the FactResellerSalesXL_PageCompressed table found in the AdventureworksDW2016 database, and I named that copy FactResellerSalesXL. I created it without data compression and also without any columnstore index, and then added a few non-clustered indexes. Here's the index and space usage for the table, using my sp_indexinfo (with some columns omitted and others re-arranged):

Not a large table by any means, but it gives us something to play with, and it allowed me to do the rebuilds several times in order to rule out figures out of the ordinary. I tested this in both FULL and SIMPLE recovery. The numbers are averages over several executions, in minutes:seconds:

  • FULL recovery
    • ONLINE: 3:45
    • OFFLINE: 1:03
  • SIMPLE recovery
    • ONLINE: 1:40
    • OFFLINE: 0:31

 

I frequently use Ola Hallengren’s scripts for database maintenance. (We all do, right? :-)) The default for Ola’s solution (IndexOptimize) is to do the rebuild ONLINE if possible. Or, to be more specific, the defaults are:

  • < 5% fragmentation: do nothing
  • Fragmentation between 5 and 30 %: do REORGANIZE.
  • Fragmentation over 30%: do REBUILD with ONLINE = ON

I.e., when using Ola’s scripts you will rebuild your indexes ONLINE, assuming you are on Enterprise Edition (or similar). For Maintenance Plans (anybody remember those?), the checkbox for ONLINE is not checked by default.

Bottom line: if you have problems with index rebuilds taking a long time and you don't need other people accessing the data while this is being done, consider doing it OFFLINE instead. Or perhaps don't defragment in the first place? Or raise the threshold to some 60-70% so that you only defragment the indexes that are really fragmented.
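With Ola's IndexOptimize, that could look something like below. This is only a sketch – check his documentation for the exact parameter names and allowed values; the thresholds here are just examples:

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 30,
@FragmentationLevel2 = 70,
@LogToTable = 'Y'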

Managing the errorlog file

I frequently see recommendations to regularly run sp_cycle_errorlog, so that the errorlog doesn’t become huge. My main concern with that is that the errorlog contains valuable information.

When I do a health check on a SQL Server machine, I want a few months' worth of errorlog information available. I typically use my own scripts for this, available here. Almost every time I go through the errorlog, I find valuable information. Some things you address, like finding whatever it is that is attempting to log in every minute. Other things you might not have control over, but the information is valuable to have.

So, if you run sp_cycle_errorlog every day or week, you end up with only a week's worth, or a few weeks' worth, of errorlog information.

Suggestion 1: Increase the number of errorlog files.

You probably want more than 6 historic errorlog files. For instance, a client of mine told me that he was about to patch a server a few days before I was to visit that client. That patch procedure resulted in enough restarts of SQL Server that we ended up with only 4 days' worth of errorlog files. Yes, this client had the default of 6 historic errorlog files. I typically increase this to 15. You can do this by right-clicking the "SQL Server Logs" folder under "Management" in SSMS. If you want to use T-SQL, you can use xp_instance_regwrite, as in:

EXEC xp_instance_regwrite
N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'NumErrorLogs', REG_DWORD, 15;

Suggestion 2: Set a limit for the size of the errorlog file.

But what about the size? Say that we have crash dumps, for instance, or other things that start to happen very frequently. The good news is that as of SQL Server 2012, we can set a max size for the errorlog file. There is no GUI for this, so we have to manipulate the registry directly. Again, we can use xp_instance_regwrite. The below will limit the size to 30 MB:

EXEC xp_instance_regwrite
N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'ErrorLogSizeInKb', REG_DWORD, 30720;

With 15 files, you can patch your SQL Server machine without aging out all old errorlog files. And with a max size of 30 MB, you keep each file manageable in size, and the total size of errorlog files for the instance at 450 MB. Not enough to fill your disks, but enough to have historical information for when you are about to perform a health check on your SQL Server instance.
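If you want to verify the values you just wrote, you can read them back with xp_instance_regread. Like xp_instance_regwrite it is undocumented, so treat the below as a sketch:

DECLARE @NumErrorLogs int, @ErrorLogSizeInKb int;
EXEC xp_instance_regread
N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'NumErrorLogs', @NumErrorLogs OUTPUT;
EXEC xp_instance_regread
N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'ErrorLogSizeInKb', @ErrorLogSizeInKb OUTPUT;
SELECT @NumErrorLogs AS NumErrorLogs, @ErrorLogSizeInKb AS ErrorLogSizeInKb;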

References: this by Jan Kåre and this by Paul Randal.

Ola Hallengren's Maintenance Solution now supports mirrored backups

You probably know that you can mirror a backup to several destinations, assuming you are on a supported edition (Enterprise or Developer). This is not the same as striping; you can compare striping to RAID 0, and mirroring to RAID 1.

Ola now supports mirroring in his maintenance solution, found here. A couple of examples:

EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = 'C:\Backup',
@MirrorDirectory = 'D:\Backup',
@BackupType = 'FULL',
@CleanupTime = 24,
@MirrorCleanupTime = 48

EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = 'C:\Backup,D:\Backup',
@MirrorDirectory = 'E:\Backup,F:\Backup',
@BackupType = 'FULL',
@CleanupTime = 24,
@MirrorCleanupTime = 48

Note that if any of the destinations is unavailable, then the backup fails for all destinations – SQL Server does not produce the backup to the ones that are available. This has nothing to do with Ola's solution; it is just how MS decided to implement backup mirroring.
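Behind the scenes this maps to SQL Server's native MIRROR TO clause. A minimal sketch of that syntax, where the database name and paths are just examples:

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backup\AdventureWorks.bak'
MIRROR TO DISK = 'D:\Backup\AdventureWorks.bak'
WITH FORMAT; --if D:\Backup is unavailable, the whole backup fails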

 

How often do you rebuild your heaps?

Never? You are not alone. None of the maintenance solutions I use include functionality to rebuild a heap, and that includes Maintenance Plans and Ola Hallengren's maintenance solution.

“So why would you want to rebuild a heap, when it isn’t sorted in the first place?”, you ask. The answer is to get rid of Forwarding Pointers, and these can really hurt performance by adding lots more logical and physical reads, and random I/O. See for instance this from Kalen Delaney, this from Hugo Kornelis and this from the SQLCAT team.

SQL Server 2008 added the REBUILD clause to the ALTER TABLE command, and this is what I'm using in my procedure rebuild_heaps, which rebuilds all fragmented heaps on a SQL Server instance.
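To give you the idea, below is a simplified sketch of what such a rebuild boils down to: find the heaps that have forwarded records and issue ALTER TABLE … REBUILD for each. It is only an illustration (my procedure has more options), and note that scanning the whole database in DETAILED mode can take a while:

DECLARE @sql nvarchar(max);
DECLARE heaps CURSOR LOCAL FAST_FORWARD FOR
SELECT N'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
     + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id)) + N' REBUILD;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE ips.index_id = 0 --heaps only
AND ips.alloc_unit_type_desc = 'IN_ROW_DATA'
AND ips.forwarded_record_count > 0;
OPEN heaps;
FETCH NEXT FROM heaps INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
 EXEC sys.sp_executesql @sql;
 FETCH NEXT FROM heaps INTO @sql;
END;
CLOSE heaps;
DEALLOCATE heaps;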

You find the procedure here: http://karaszi.com/rebuild-all-fragmented-heaps.

Do you clean up your Database Mail log tables?

Database Mail has a couple of log tables in the msdb database. These can become large over time. I’ve seen MSDB databases over 1 GB in size, where normal size is less than 50 MB (heavy usage of old SSIS deployment model excluded).
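To get a feel for how much history you are carrying around, you can check the Database Mail views in msdb directly. A quick sketch:

SELECT COUNT(*) AS mail_items, MIN(sent_date) AS oldest_sent
FROM msdb.dbo.sysmail_allitems;

SELECT COUNT(*) AS log_rows, MIN(log_date) AS oldest_log
FROM msdb.dbo.sysmail_event_log;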

Unfortunately, Maintenance Plans do not have built-in functionality for this, nor does Ola Hallengren's excellent maintenance solution ( http://ola.hallengren.com/ ). All you have to do is schedule an agent job to be executed, say, every week, with one T-SQL job step containing:

DECLARE @DeleteOlder DATETIME

SET @DeleteOlder = DATEADD(MONTH, -1, CURRENT_TIMESTAMP)

EXECUTE msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @DeleteOlder

EXECUTE msdb.dbo.sysmail_delete_log_sp @logged_before = @DeleteOlder

The above removes mail history older than one month. Adjust to your liking by changing the values in the DATEADD function.

As always, remember to comment your job and to specify the appropriate database for the T-SQL job step (for documentation purposes; msdb in this case).

Wait random number of minutes

Why on earth would you want to do that, you ask? Say you have a job that is scheduled to start at the same time over a number of servers. This might be because you have a SQL Server master/target server environment (MSX/TSX), or because you quite simply script a job and execute that script on several servers. You probably want to spread the load on your SAN and virtual machine host a bit. This is the exact reason I use this procedure. I frequently use MSX servers, and I usually add a job step (executing this procedure) to wait a random number of minutes between 0 and 30.
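The core of it is just a WAITFOR DELAY with a randomized duration. A minimal sketch (my procedure wraps this up, for instance with a parameter for the maximum number of minutes):

DECLARE @minutes int = ABS(CHECKSUM(NEWID())) % 31 --0 to 30 minutes
DECLARE @delay char(8) = CONVERT(char(8), DATEADD(MINUTE, @minutes, '00:00:00'), 108)
WAITFOR DELAY @delay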

You find the procedure here.

Analyzing the errorlog

How often do you do this? Look over each message (type) in the errorlog file and determine whether it is something you want to act on. Sure, some (but not all) of you have a monitoring solution in place, but are you 100% confident that it really will notify you of all messages that you might find interesting? That there isn't even one little message hiding in there that you would find valuable knowing about? Or how about messages that you typically don't care about, but where knowing that they occur with high frequency can be valuable information?

So, this boils down to actually reading the errorlog file. Some of you probably already have scripts and tools that make this easier than just reading every single message from top to bottom. I wanted to share how I do it, and this is why I wrote my Analyze SQL Server logs article. Check it out. And feedback is always welcome!
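If you just want a quick look from T-SQL, you can also filter the current errorlog with xp_readerrorlog. It is undocumented, so the parameters may differ between versions, and the search string below is just an example:

--File 0 = current errorlog, log type 1 = SQL Server (2 = Agent), then up to two search strings
EXEC xp_readerrorlog 0, 1, N'Login failed'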

Will EMPTYFILE on primary ldf "doom" it somehow?

I just read a newsgroup question about whether doing SHRINKFILE with the EMPTYFILE option on the primary log file would somehow cause ill effects.

SHRINKFILE for an ldf file will not move any data (log records) around. For an ldf file, EMPTYFILE is basically a preparation step that tells the engine that you are about to remove this file (ALTER DATABASE … REMOVE FILE).

Now, the first (primary) log file is special and cannot be removed. So, what if we do an EMPTYFILE on the primary log file? Will we end up in some limbo state? I did a test and performed EMPTYFILE on the primary file. Nothing bad happened. I then did EMPTYFILE on the other log file and removed that file successfully. So it seems that this should not cause any havoc. Just pretend you never did that EMPTYFILE operation against the primary log file.

It isn't doable to create a repro script that reliably shows shrinking and removing of log files; it will require some engagement from you. The reason is that we never know from which file, and where in it, the next virtual log file will come. So, if you are about to run the script below, be prepared to read up on DBCC LOGINFO and other commands, understand what a VLF is, and accept that some operations may need to be done several times before what we expect will happen… And as always, use at your own risk.

--Drop and create database named x
SET NOCOUNT ON
USE master
IF DB_ID('x') IS NOT NULL DROP DATABASE x
GO
CREATE DATABASE [x] ON PRIMARY
 (NAME = N'x', FILENAME = N'C:\DemoDatabases\DbFiles\a\x.mdf', SIZE = 10MB, FILEGROWTH = 3MB)
LOG ON
 (NAME = N'x_log', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log.ldf', SIZE = 2MB, FILEGROWTH = 1MB)
,(NAME = N'x_log2', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log.ldf2', SIZE = 2MB, FILEGROWTH = 1MB)
GO

--Get the database out of "auto-truncate" mode.
ALTER DATABASE x SET RECOVERY FULL
BACKUP DATABASE x TO DISK = 'nul'

--Fill up the log some
USE x
CREATE TABLE t(c1 INT IDENTITY, c2 CHAR(300) DEFAULT 'a')
GO
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Empty log
BACKUP LOG x TO DISK = 'nul'

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--"Empty" primary log file
DBCC SHRINKFILE(2, EMPTYFILE)

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Fill up the log some
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

-- Do above several times and see
-- that 2 is still allocated from...

--Can we get rid of file 3?
BACKUP LOG x TO DISK = 'nul'
DBCC SHRINKFILE(3, EMPTYFILE)
--We might need to do above a few times
--until 3 is "clean" - no used VLFs

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

ALTER DATABASE x REMOVE FILE x_log2

--Might need to do some stuff to get rid of file physically
CHECKPOINT
BACKUP LOG x TO DISK = 'nul'
GO
SELECT * FROM sys.database_files

Heaps, forwarding pointers, ALTER TABLE REBUILD and non-clustered indexes

Let’s start with some background on forwarding pointers:

Forwarding pointers in heaps can be a mess to get rid of. A forwarding pointer happens when you modify a row so that the row doesn't fit on the same page anymore. SQL Server moves the row to a new page and leaves a forwarding pointer where the old row used to be. This means that non-clustered indexes are not affected by the moving of the row – they can still point to the old location. So the modification is less costly than if all non-clustered indexes needed to reflect the row move. But having forwarding pointers can be bad for performance when you read data. A perhaps less known fact is that a scan over a table needs to follow forwarding pointers – essentially "jumping back and forth" a lot if you have many forwarding pointers. That can be really bad for performance (if you have table scans, of course). So, how do we get rid of forwarding pointers? Well, we can shrink the database file, but that is a little like curing a headache by jumping into a big meat grinder. We can export all data and re-import it – not very practical.

Greg Linwood reminded me that in SQL Server 2008, we can do ALTER TABLE … REBUILD. Now, I knew about this option, and every time I mention it in class I've been thinking silently to myself "I need to test whether this is a way to get rid of fwd pointers". (You generally talk about ALTER TABLE … REBUILD when you talk about enabling compression on a heap.) So, doing a REBUILD of a table using ALTER TABLE sounds promising. Will it get rid of forwarding pointers? Will it also rebuild all non-clustered indexes?

Quick answer for those who don't care to read the T-SQL script:
ALTER TABLE … REBUILD will remove forwarding pointers, but for some strange reason it will also rebuild all non-clustered indexes on that table.

See the T-SQL code below. It was adapted from a challenge by Linchi Shea to produce a data loading script resulting in worst performance (where I immediately thought of forwarding pointers). See for instance http://sqlblog.com/blogs/linchi_shea/archive/2009/06/07/t-sql-exercise-simplest-data-loading-script-to-produce-worst-query-performance-june-7th-update.aspx.

USE tempdb
GO

IF EXISTS (SELECT * FROM sysobjects WHERE name = 'test')
 DROP TABLE test
GO
CREATE TABLE test (
 x INT NOT NULL
,x2 INT NOT NULL
,x3 CHAR(10) NOT NULL DEFAULT ('')
,x4 CHAR(10) NOT NULL DEFAULT ('')
)

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT

INSERT test (x, x2)
SELECT TOP(@rows)
 ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS r
,ROW_NUMBER() OVER (ORDER BY (SELECT 1)) % 10 AS s
FROM master..spt_values a CROSS JOIN master..spt_values b
GO
ALTER TABLE test ALTER COLUMN x3 CHAR(892)
ALTER TABLE test ALTER COLUMN x4 CHAR(100)
GO

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT
DELETE TOP(@rows - @toKeep)
FROM test WHERE x2 IN(2, 4, 6, 8)
GO

CREATE INDEX x1 ON test(x)
CREATE INDEX x2 ON test(x2)

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run (no nc indexes ): 387157 fwd records (out of 400000 rows), 96104 pages
--Second run (two nc indexes): 387157 fwd records (out of 400000 rows), 96105 pages

CHECKPOINT
DBCC DROPCLEANBUFFERS
DECLARE @t time = SYSDATETIME()
ALTER TABLE test REBUILD
SELECT DATEDIFF(ms, @t, CAST(SYSDATETIME() AS time))
--First run, no non-clustered indexes, three subsequent executions (ms): 19351, 20683, 20275
--Second run, with two non-clustered indexes, three subsequent executions (ms): 31803, 35065, 37511

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run, heap = "index" 0 (no nc indexes ): 0 fwd records, 50002 pages
--Second run, heap = "index" 0 (two nc indexes): 0 fwd records, 50003 pages
--But: avg_page_space_used_in_percent changes for the nc indexes after the rebuild.
--That I take as nc indexes are also rebuilt.