Managing tempdb

This post is all about the tempdb database. It is not about capacity planning, like size or number of data files – there is plenty of info about that out there. Instead, this is about managing it, in the sense of changing initial file size, adding a file, removing a file and so on. SSMS isn't obvious for some of these operations on the tempdb database, and for some of them SSMS is flat-out wrong.

I tested all operations on SQL Server 2017 and SSMS 17.8.1.

 

Background

Tempdb is re-created at start-up. Or, rather, re-initialized (for lack of a better word). SQL Server does not delete the files and create new ones; that would take ages for large files if you don't have Instant File Initialization (which you never have for log files). So how does SQL Server know what files to create: the logical and physical names, size and so on? The answer is sys.master_files.

 

sys.master_files

This is the "template" information for the tempdb files. I will refer to sys.master_files as the template below (not to be confused with model, which is the template for other things, but not for the file aspects of tempdb). Here you see how the tempdb files will be created/initialized at the next start-up. Don't confuse this with tempdb.sys.database_files, which shows the current state of the tempdb files. The two queries below show you the current state of the tempdb files as well as the template state:

--Current
SELECT
'tempdb' AS db_name_
,file_id
,name
,physical_name
,size * 8/1024 AS size_MB
,type_desc
,CASE WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(3)) + ' %' ELSE CAST(growth * 8/1024 AS varchar(10)) + ' MB' END AS growth
,max_size * 8/1024 AS max_size_MB
FROM tempdb.sys.database_files
ORDER BY type, file_id

--Template
SELECT
DB_NAME(database_id) AS db_name_
,file_id
,name
,physical_name
,size * 8/1024 AS size_MB
,type_desc
,CASE WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(3)) + ' %' ELSE CAST(growth * 8/1024 AS varchar(10)) + ' MB' END AS growth
,max_size * 8/1024 AS max_size_MB
FROM master.sys.master_files
WHERE DB_NAME(database_id)  = 'tempdb'
ORDER BY db_name_, type, file_id

 

Increasing current and template file size to larger than what it currently is

This is pretty straight-forward, both using T-SQL directly and also using SSMS. Here’s what it looks like in SSMS:

(The word “Initial” for the header “Initial Size (MB)” is pure nonsense. What you see is the current file size, picked up from sys.database_files. I am happy to see that the word “Initial” is removed in SSMS 18.0.)

To increase the file size you just type the desired, larger, size for the tempdb file. The T-SQL command to do this is also very straightforward, for example:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', SIZE = 5GB )

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'temp2', SIZE = 5GB )

Note that SSMS will generate the file size in KB for the T-SQL command, but the T-SQL command accepts KB, MB, GB and even TB.

This operation increases the current file size as well as the template size.

 

Decreasing current file size

To reduce the current size, you can attempt a shrink operation, for instance using a T-SQL command such as the one below:

USE tempdb
DBCC SHRINKFILE(tempdev, 100)

The second parameter is the desired size in MB. This will only change the current size, not the template. In my experience, you should be prepared for limited success when shrinking a tempdb file. In many cases, you will have to re-start SQL Server so the file is re-created with the template size. Or, hunt down whatever is using it and take it from there.
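
If you want to hunt for the culprit, the tempdb space usage DMVs are a good starting point. Below is a minimal sketch (my own habit, not the only way) that lists sessions by how much tempdb space they still hold; there is also sys.dm_db_task_space_usage for currently executing tasks:

--Which sessions hold tempdb space right now? (a minimal sketch)
SELECT
 su.session_id
,(su.user_objects_alloc_page_count - su.user_objects_dealloc_page_count) * 8/1024 AS user_objects_MB
,(su.internal_objects_alloc_page_count - su.internal_objects_dealloc_page_count) * 8/1024 AS internal_objects_MB
,es.login_name
,es.program_name
FROM sys.dm_db_session_space_usage AS su
INNER JOIN sys.dm_exec_sessions AS es ON es.session_id = su.session_id
WHERE su.database_id = 2 --tempdb
ORDER BY (su.user_objects_alloc_page_count - su.user_objects_dealloc_page_count)
       + (su.internal_objects_alloc_page_count - su.internal_objects_dealloc_page_count) DESC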

 

Changing template file size to smaller than what it currently is

This is a bit trickier using the GUI. If you type a size which is smaller than the current size in the GUI, it will generate a DBCC SHRINKFILE command, as discussed above, and that is not what we want to do here. So, what you can do is specify a larger size than the current size, script the command to a T-SQL query window and in there specify whatever size you want. It will generate a command such as:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', SIZE = 100MB )

If the specified size is smaller than the current file size, then it will only change the template, not the current size.
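
A quick way to convince yourself of this is to compare the template with the current state for that file. Something like below (using tempdev as the example file name); after the ALTER above, expect the template row to show 100 MB while the current row still shows the larger size:

--Template vs current size for one tempdb file (illustrative sketch)
SELECT 'current' AS source, name, size * 8/1024 AS size_MB
FROM tempdb.sys.database_files
WHERE name = 'tempdev'
UNION ALL
SELECT 'template', name, size * 8/1024
FROM master.sys.master_files
WHERE database_id = DB_ID('tempdb') AND name = 'tempdev'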

 

Adding file

This is dead-easy. In SSMS, just press the “Add” button and specify the desired name, size etc. This will generate T-SQL such as:

ALTER DATABASE tempdb 
 ADD FILE ( NAME = N'tempdb3', FILENAME = N'R:\SqlData\a\tempdb_mssql_3.ndf' , SIZE = 5GB , FILEGROWTH = 200MB )

This will add it immediately and also to the template (both sys.database_files and sys.master_files).

 

Remove file

This is simple enough if the file is empty. In SSMS you select the file and press the “Remove” button.  It will generate T-SQL such as:

ALTER DATABASE tempdb 
 REMOVE FILE tempdb3

But if the file is currently used, you will get an error message and nothing is changed: neither the current state nor the template.

You can always attempt to shrink first using the EMPTYFILE option:

USE tempdb
DBCC SHRINKFILE(tempdb3, EMPTYFILE)

If you’re lucky, then the file was emptied and you can now use the ALTER command with the REMOVE FILE option. If not, then you are back to where you started.

You can try to re-start SQL Server and see if that allows you to remove the file. If not, then perhaps use the "failsafe" startup switch: /f (see Erin Stellato's blog about this: https://www.sqlskills.com/blogs/erin/remove-files-from-tempdb/ ). Not being connected from SSMS when executing the commands might also help.

 

Changing other file attributes

If you want to change things like autogrow or maxsize, then just use SSMS or T-SQL directly. Here's an example T-SQL command that changes both autogrow and maxsize:

ALTER DATABASE tempdb 
 MODIFY FILE ( NAME = N'tempdev', MAXSIZE = 500GB , FILEGROWTH = 500MB )

 

Moving a file

This operation requires a re-start. SSMS doesn’t allow you to do this, so use T-SQL directly, such as:

ALTER DATABASE tempdb 
 MODIFY FILE (NAME = tempdev, FILENAME = 'R:\SqlData\b\tempdb.mdf')

Warning: Henrik reminded me in the comments section to add a warning here: "Make sure that R:\SqlData\b is present and that SQL Server has sufficient rights to create files here!" He is of course correct. In the worst case you end up with SQL Server refusing to start because it cannot create tempdb where you have specified. In such a case you can try the -f startup switch, perhaps a topic for another blog post.

Now, this is a sneaky one. It changes both sys.database_files and sys.master_files. So, looking only at the catalog views, you will be fooled into thinking that the file was moved immediately. But that didn't happen; check the disk. Fortunately, the command comes with a message such as:

The file "tempdev" has been modified in the system catalog.
The new path will be used the next time the database is started.

That makes it pretty obvious. Note that the old file will not be removed when you restart your SQL Server. Go delete that file, after verifying that the new file was indeed created (if the “old” one is still the current one, then you won’t be able to delete it as long as SQL Server is running).

 

Summary

Managing tempdb isn't that hard. As soon as you understand that there is both a current state (tempdb.sys.database_files) and a template (sys.master_files), things fall into place. And, yes, SSMS is a bit quirky for these operations. The reason, methinks, is that it uses the same GUI as for other databases, which don't have the aspect of a "template".

Did I miss any operation? Let me know and I might add it.

 

Should it be [ola] or [olahallengren]?

I'm (of course 😉 ) talking about the schema that Ola Hallengren plans to move his objects to. If you don't get what I'm talking about, then visit Ola's database maintenance solution site here.

Ola is soliciting feedback for the schema naming at Github now, go and express your opinion at: https://github.com/olahallengren/sql-server-maintenance-solution/issues/91.

We are now recommended to install cumulative updates

Microsoft just released a couple of CUs for SQL Server 2012. What is exciting is how Microsoft phrases its recommendation on whether we should install them or not. Below is from the KB of one of those recently released CUs:

Microsoft recommends ongoing, proactive installation of CUs as they become available:

  • SQL Server CUs are certified to the same levels as service packs and should be installed at the same level of confidence.
  • Historical data shows that a significant number of support cases involve an issue that has already been addressed in a released CU.
  • CUs may contain added value over and above hotfixes. This includes supportability, manageability, and reliability updates.

Now, that is a pretty significant change from what they used to say. In addition, acquiring the CU is much easier. You just go to MS Download, select whether you want 32 or 64 bit, and then download the bits immediately.

Check it out yourself, go to for instance https://www.microsoft.com/en-us/download/details.aspx?id=50731.

Or check out the KB for a new SQL Server CU: https://support.microsoft.com/en-us/kb/3120313 (see the "Notes for the update" section).

Managing the errorlog file

I frequently see recommendations to regularly run sp_cycle_errorlog, so that the errorlog doesn’t become huge. My main concern with that is that the errorlog contains valuable information.

When I do a health check on a SQL Server machine, I want a few months' worth of errorlog information available. I typically use my own scripts for this, available here. Almost every time I go through the errorlog, I find valuable information. Some things you address, like finding whatever it is that is attempting to log in every minute. Other things you might not have control over, but the information is valuable to have.
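
As a side note, you can search the errorlog from T-SQL as well, which is handy when hunting for things like those repeated login attempts. Below is a minimal sketch using xp_readerrorlog (undocumented, so the parameters may change between versions):

--Search the current errorlog for failed logins (xp_readerrorlog is undocumented)
EXEC master.dbo.xp_readerrorlog
 0                 --errorlog file number, 0 = current
,1                 --log type, 1 = SQL Server, 2 = SQL Server Agent
,N'Login failed'   --search string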

So, if you run sp_cycle_errorlog every day or week, you end up with only a week's worth, or a few weeks' worth, of errorlog information.

Suggestion 1: Increase the number of errorlog files.

You probably want more than 6 historic errorlog files. For instance, a client of mine told me that he was about to patch a server a few days before I was to visit that client. That patch procedure resulted in enough re-starts of SQL Server that we ended up with only 4 days' worth of errorlog files. Yes, this client had the default of 6 historic errorlog files. I typically increase this to 15. You can do this by right-clicking the "SQL Server Logs" folder under "Management" in SSMS. If you want to use T-SQL, you can use xp_instance_regwrite, as in:

EXEC xp_instance_regwrite
 N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'NumErrorLogs', REG_DWORD, 15;

Suggestion 2: Set a limit for the size of the errorlog file.

But what about the size? Say that we have crash dumps, for instance. Or other things that start to happen very frequently. The good news is that as of SQL Server 2012, we can set a max size for the errorlog file. There is no GUI for this, so we have to manipulate the registry directly. Again, we can use xp_instance_regwrite. The below will limit the size to 30 MB:

EXEC xp_instance_regwrite
 N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'ErrorLogSizeInKb', REG_DWORD, 30720;

With 15 files, you can patch your SQL Server machine without aging out all the old errorlog files. And with a max size of 30 MB, you keep each file manageable in size. And you keep the total size of errorlog files for that instance to 450 MB. Not enough to fill your disks, but enough to have historical information for when you are about to perform a health check on your SQL Server instance.
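
If you want to verify the values afterwards, you can read them back with xp_instance_regread (undocumented, just like its write counterpart). A sketch:

--Read back the errorlog settings (xp_instance_regread is undocumented)
DECLARE @NumErrorLogs int, @ErrorLogSizeInKb int

EXEC master.dbo.xp_instance_regread
 N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'NumErrorLogs'
,@NumErrorLogs OUTPUT

EXEC master.dbo.xp_instance_regread
 N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'ErrorLogSizeInKb'
,@ErrorLogSizeInKb OUTPUT

SELECT @NumErrorLogs AS NumErrorLogs, @ErrorLogSizeInKb AS ErrorLogSizeInKb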

References: this by Jan Kåre and this by Paul Randal.

Are we Borg?

Is it time to stop remembering things? For real, this time?

Today I needed to find two pieces of SQL Server related information. One was straightforward: I posted it on #sqlhelp on Twitter and a minute or so later I had the answer. The other was a bit more involved and I asked around on our MVP mail list – a couple of hours later I had a bunch of suggestions and tips. These are only examples; it could just as well have been some web forum, newsgroup, or other forum. We've all had cases like this, but it made me think and reflect.

Travel back in time, say 20 years or so. Computers were hard to use. Well, not computers per se, but rather the software made them hard to use. (Without software computers are easy to use – they don’t do anything!) We were lucky if the software came with a printed manual. Whatever the quality of that manual.

Then came electronic help with the software. That was a big step forward. And it even had index and search!

With the Internet, things really started to pick up. Sure, there were BBS before that, but not broadly used. With Internet, we were given e-mail. Mail lists. Newsgroups. Altavista, Google and Bing (sounds like a supergroup). Web-forums. Facebook. Twitter. Etc. And not only that, we carry the Internet in our pockets, wherever we are.

So, this is what hit me today. I used to keep tips and tricks in a register. Nuggets I picked up, here and there. The past few years, I do less and less of this. There are so many great minds out there, who also are willing to lend a helping hand. So, almost as soon as I realize I need some bit of information, that information is available. Sounds pretty much like the collective mind of the Borg to me. Perhaps not quite there yet, but give it a year or five.

But what worries me is: where does that leave us as human beings? Where is the difference between learning and remembering? Deducing versus asking and being given the information? I'm sure you can relate to this, at some level. For instance when we were forbidden to use pocket calculators at school, because they were too powerful. Or the other way around.

To put it bluntly: if I don't have to remember anything about SQL Server (the info is out there, right?), I won't be a very proficient SQL Server person. I'm sure we can formulate lots of ways to differentiate between learning/creating/deducing and remembering. And, don't get me wrong. I think the development is really interesting – and the community out there is way cool.

But, I just find the development a bit interesting… and worrying … and thrilling.

Why we never want to trick SQL Server into rebuilding a log file

“Never” is a strong word, so let me say that we really really want to avoid it, if at all humanly possible. In short, we will have a (potentially) broken database, both at the physical level and at the logical level. Read on.

Just to be clear, when I refer to a "log file" here, I'm talking about a transaction log file, an .ldf file. Every so often we see in forums how log files are "lost", "disappear" or similar. Now, SQL Server relies on the ldf file at startup to bring the database to a consistent state. This is known as "recovery", or "startup recovery". This is pretty well known; I for instance teach this at every intro-level admin course. So, what if

  • The ldf file isn’t there?
  • Or isn’t accessible to the db engine?
  • Or is broken somehow?
  • Or is from a different point in time than the data file? (You'd be surprised at all the things one sees over the years.)

Well, SQL Server will do the only reasonable thing: refuse us access to the database and produce an error message (in the eventlog etc).

What we see from time to time is attempts to "trick" SQL Server into re-creating an ldf file. So, why is this so bad? I will try to explain why. Let me first say that SQL Server doesn't do these things to be mean to us, or to prove a point. If SQL Server knows that the ldf file is not necessary for recovery (the database was "cleanly shut down"), then it can and will re-create a log file for us at startup. The problem is that those aren't the cases we see in the forums. The cases we see in the forums are those where this didn't happen: SQL Server relied on the ldf file in order to bring the database to a consistent state.

Enough beating around the bush, here is an example of why we don't want to trick SQL Server into forcefully re-creating a log file:

Say you have a transaction in which you add a row to the order header table (oh) and five rows to the order details table (od). Physically, each command is reflected in the tlog, the page is modified (but still cached) and at commit, the log records are written to the ldf file. There are lots written about these algorithms, but the concepts are pretty simple. For more details, read this.

Say that we forcefully stop SQL Server, delete the ldf file, start SQL Server, see that the database isn't accessible and somehow "trick" SQL Server into creating an ldf file for us. What is the problem? Why is this so bad? The problem is that you can have no confidence in the state of your data, both at the physical level and at the logical level. Let me start by explaining what I mean by the logical level and the problems at this level:

The logical level
By the logical level, I mean the user data: the rows in the tables, quite simply. We inserted one row in the oh table and five rows in the od table. These can be inserted into anything from two pages up to six pages (in reality more, since each table is likely to have indexes etc; I'll touch on this when we get to the physical level). Say that three of the order details rows have been written to disk, but not the order header row, and not the other two order details rows. This is just an example; you can pick and choose any combo you want. It can get just as bad as you can imagine! You might think that the pages are in the end written in the same sequence as we modified them. No, that is not how the cookie crumbles (read the article I posted a link to). You can probably imagine other scenarios, closer to your data. For instance, we will not do some certain modification to a table unless some other prior modification was also performed. In essence, rebuilding a log file leaves us with no confidence in the data. Are you prepared to go through your database and manually verify all the data in there? The majority of you will say "no", and it might not even be doable (data volumes, lack of information to verify against, etc). So, logical inconsistencies are bad. Really bad. We don't want this. Ok?

The physical level
This log is not only used for user data. It is also used for system data. Tables have indexes, where each row in a table is reflected by a row in each index. We have allocation pages stating what storage is used. We have IAM pages and PFS pages. We have linked lists. And whatnot. You probably realize that these structures also require that a modification is performed in full or not at all (an extent allocation will be reflected both in the GAM or SGAM page and in the extent data itself, etc). What do you think is used to protect this? Yes, you got it: the ldf file. Trick SQL Server into re-creating an ldf file and you have all sorts of physical inconsistencies. Actually, physical inconsistencies are a bit better than logical ones, since we do have a way to check for them. I'm of course talking about the mighty DBCC CHECKDB command, a command with lots of smarts (right, Paul?) to check that the db is consistent at the physical level. And what if it isn't? CHECKDB spits out errors. Sure, it has repair options, but those generally mean (unless you are the type of person who wins at lotteries) that the repair will rip out whatever makes the db inconsistent, resulting in data loss (and the repair has no knowledge of what you might consider logically consistent data).
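
For completeness, this is roughly what such a check looks like; 'YourDatabase' is of course a placeholder and the options are just my habit:

--Check database consistency (YourDatabase is a placeholder name)
DBCC CHECKDB('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS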

So, what to do?
I hope the answer is obvious. Don't get into this situation in the first place. Don't go deleting ldf files, for instance. Do take backups frequently enough so you don't end up in a situation like "I need to rebuild the log or I'm toast." If something strange happens, don't start mucking about with the system unless you know what you are doing. Hire a specialist, call MS support or so. If I had a brand new Ferrari, I wouldn't disassemble the engine in case I heard a weird noise from it. Heck, I barely pop the hood of my Beamer!

And no, I won’t get into what commands can be used to re-build the ldf file. I expect all readers of this to not get into a situation where it is needed. 🙂

(I understand one can encounter a machine with no backups where something happens to the ldf file, and similar scenarios. Of course I do. I feel really, really bad every time I read about such a situation, because there is no graceful way to handle it. That is why I have such a harsh tone above. I don't want this to happen to anybody. One would hope that this is obvious, but nothing ever is on the Net. So I've learned. So, please leave such unconstructive comments out of the discussions!)

Watch out for old stuff

No, I'm not referring to me; I'm referring to options, config values and such which should really have been removed from the product a long time ago. I was reading a recent blog post from Jonathan Kehayias and realized how much old stuff is still visible one way or the other in the product. There are of course commands which have more modern replacements (manage logins and users, attaching databases, changing the SID for a user etc), but keeping the old command around for a few versions is common sense. I'm thinking more of stuff where the functionality has been gone for ages, or even decades, so why keep the knob for it?

Referring to Jonathan's article, the LoginMode property in SMO and policies has four possible values.

  • "Unknown" maps to registry value 3 or higher, which is clearly rubbish (somebody manually changed it to something weird).
  • "Mixed" maps to value 2 in the registry. Known as "SQL Server and Windows", or "Mixed mode".
  • "Integrated" maps to value 1 in the registry. Known as "Windows only".
  • "Normal" maps to value 0 in the registry. But what is this, you think? There are only two values, right?

So, we have something strange exposed in SMO and policies, and it is even referred to as "normal". This is one example of heritage which should have been cleaned up a long, long time ago. We have to go all the way back to SQL Server 6.5 in order to explain this.

  • “Normal” (more commonly known as “Standard” back in the days) is more like what “Mixed” is today.
  • "Mixed" meant something slightly different back then compared to what it means today. We need to recall that back then, we didn't have Windows authentication the way it is done today. We could map Windows accounts to SQL Server accounts using a tool, and this mapping could be used to do what we today call a "Windows authentication" or "trusted connection". This setting meant that SQL Server could attempt a trusted connection based on the login name you specify, even if you didn't ask for a trusted connection in the connection string. Confusing, huh? No wonder they got rid of the distinction between "Normal" and "Mixed".

As of 7.0, the only registry values used are 1 (Integrated) and 2 (Mixed). So why does SMO still expose 0 (Normal)?
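
If you are curious what your own instance has, you can read the raw registry value (the one SMO interprets) and compare it with what the engine itself reports. A sketch, using the undocumented xp_instance_regread:

--Raw LoginMode registry value vs what the engine reports (sketch)
DECLARE @LoginMode int
EXEC master.dbo.xp_instance_regread
 N'HKEY_LOCAL_MACHINE'
,N'Software\Microsoft\MSSQLServer\MSSQLServer'
,N'LoginMode'
,@LoginMode OUTPUT

SELECT
 @LoginMode AS LoginMode_registry                                        --1 = Windows only, 2 = Mixed
,SERVERPROPERTY('IsIntegratedSecurityOnly') AS IsIntegratedSecurityOnly  --1 = Windows only, 0 = Mixed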

We have plenty of other such examples. Look in the registry and you can see mappings for _, # and $. These were used in the mapping tool, so you could map a Windows login, which has a backslash in the name (DOMAIN\login), and SQL Server would translate the backslash to some other character (_ by default) allowed in a SQL Server login name. Why do these still exist in the registry?
Or "Default login"? There is no such thing as a default login in SQL Server anymore (there is something like a guest *user*, but that is a different thing from a login).
Or how about setting the master database to the full recovery model? That is absurd: you still can't do log backups and it still behaves as if in simple recovery.

OK, enough of the rant. Back to meaningful work.

Do you remember Database Gateways?

I replied to a question in a forum today about intercepting and changing a SQL query, before it reaches SQL Server. I started typing a reply, and realized that… wow… “Does this take me on a trip down memory lane, or what!”. So, just for fun, here is what I wrote (slightly modified):

There used to be products on the market, called “Database Gateways”, something that looks like one type of DBMS but actually accepts the SQL queries and submits them to some other type of DBMS. Basically an “in between” piece of software. For instance I used to work with such a product, which looked like a SQL Server but submitted the queries to IBM’s mainframe DB2.

That market pretty much disappeared around when ODBC was born. Yes, we are talking ancient here, around 1992.

There was a leftover from this time period in SQL Server, called “Open Data Services”, which was basically an API allowing you to write something that looks like a SQL Server. There was even a sample app which allowed you to intercept the SQL queries and then re-submit them to a SQL Server.

This API was in turn transformed over time to what became known as “extended stored procedures”, which in turn was deprecated with SQL Server 2005 and the SQL CLR support.

Mirroring: what happens if the principal loses contact with both mirror and witness?

Imagine a database mirroring setup where you have two LANs with a WAN link in between. Let's call them site A and site B. Say that the principal is currently in site A, and both mirror and witness are in site B. I.e., we are running database mirroring with a witness, and assuming we are running safety FULL (synchronous), we have automatic failover.

Now, what really is a failover when it comes to mirroring? The simple answer is that the mirror will finish the recovery process (UNDO) and make the database available. So, what if the mirror and witness become isolated from the principal? In our setup, this would happen if site A is disconnected from site B. Well, the failover protocol is as follows:

If the mirror cannot contact the principal, it asks the witness whether it has contact with the principal. If the witness responds and says it doesn't have contact with the principal, then they consider this a quorum. They are in the majority and a failover occurs. I.e., the mirror will make the database available (and essentially becomes the principal).

But, think about the poor principal now. It is isolated and knows nothing about what happens at the other end. The only logical thing to do is to make the database inaccessible. If that didn’t happen, then we would have a case where both sites had the database available, and we wouldn’t want that!

This is nothing new, and it is clearly stated in Ron Talmage’s great whitepaper on mirroring: http://technet.microsoft.com/en-us/library/cc917680.aspx.

This is easy to test. Set up mirroring (you can even do it between three instances on the same OS). Make sure you are running safety FULL and have a witness. Now you check what Windows process ID the principal has:

SELECT SERVERPROPERTY('ProcessID')

Based on the above, you now use Task Manager to kill the *other* two SQL Servers. Make sure that SQL Server Agent isn't running for those two, or else Agent will start those SQL Servers again. Management Studio will now show the database as "In Recovery", and if you try to access it, for instance from a query window, you get the error below:

Msg 955, Level 14, State 1, Line 1
Database Adventureworks2008 is enabled for Database Mirroring,
but the database lacks quorum: the database cannot be opened.
Check the partner and witness connections if configured.
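
If you want to see what each partner thinks is going on during such a test, sys.database_mirroring is the place to look; for instance:

--Check mirroring role, state and safety level (run on each instance)
SELECT
 DB_NAME(database_id) AS db_name_
,mirroring_role_desc
,mirroring_state_desc
,mirroring_safety_level_desc
,mirroring_witness_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL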

Are log records removed from ldf file for rollbacks?

Seems like a simple enough question, right? This question (but more targeted, read on) was raised in an MCT forum. While the discussion was ongoing and I tried to come up with answers, I realized that this question is really several questions. First, what is a rollback? I can see three different types of rollbacks (there might be more, of course):

  1. Regular rollback, as in ROLLBACK TRAN (or lost/terminated connection)
  2. Rollback done by restore recovery. I.e., the end time of the backup included some transaction which wasn't committed, and you restore using RECOVERY, so SQL Server needs to roll back this transaction (UNDO).
  3. Rollback done by crash (restart) recovery. I.e. SQL Server was shut down with some open transaction.

I’m going to try to show whether log records are removed or still present for these three types of rollback situations. I will use the fn_dblog function. This isn’t documented or supported, but search the internet and you will find how to use it. The result isn’t documented either, of course, so we have to guess a bit what the various values mean…
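
For these tests, only a handful of the columns are interesting. Something like below is what I typically look at (the column names are what fn_dblog currently returns, but since it is undocumented they may change):

--The fn_dblog columns I look at in these tests
SELECT
 [Current LSN]
,Operation
,Context
,[Transaction ID]
,AllocUnitName
FROM fn_dblog(NULL, NULL)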

The TSQL script has some common parts (the same part executed repeatedly, once for each test):

  1. Create a database
  2. Make sure it is in full recovery
  3. Do a database backup
  4. Create a table (myTable)
  5. Insert a row into myTable. This last operation generates 5 log records for the myTable table: one for the PFS page, two for IAM pages, one format page for the heap and the last one is a LOP_INSERT_ROWS.
  6. Start a transaction
  7. Insert one more row into myTable. We now have one more log record for myTable (LOP_INSERT_ROWS). Looking at the transaction id for this last insert, we see two log records (one LOP_BEGIN_XACT and the LOP_INSERT_ROWS). Note that this transaction is now open!

Here is the above mentioned first part of the script:

USE master
IF DB_ID('x') IS NOT NULL DROP DATABASE x
GO
CREATE DATABASE x
GO
ALTER DATABASE x SET RECOVERY FULL
BACKUP DATABASE x
TO DISK = 'C:\x.bak' WITH INIT

USE x
CREATE TABLE myTable(c1 INT IDENTITY)
INSERT INTO myTable DEFAULT VALUES

SELECT *
FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--5 rows

BEGIN TRAN
INSERT INTO myTable DEFAULT VALUES

SELECT *
FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--6 rows

SELECT * FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = (SELECT TOP(1) [Transaction ID]
                          FROM fn_dblog(NULL, NULL)
                          WHERE AllocUnitName LIKE '%myTable%'
                          ORDER BY [Current LSN] DESC)
--2 rows

Now, on to the different cases: 

1. Regular rollback, as in ROLLBACK TRAN.
It seems reasonable to me that SQL Server will just add some "rollback transaction" log record here. So, let's try that (continuing on the first part above)… We now have 7 log records for myTable, with an added LOP_DELETE_ROWS which undoes the previous insert. And for our transaction ID, we have 4 rows, the added two rows being a LOP_DELETE_ROWS (compensation log record) and a LOP_ABORT_XACT.

--Regular ROLLBACK
ROLLBACK TRAN

SELECT *
FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--7 rows

SELECT * FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = (SELECT TOP(1) [Transaction ID]
                          FROM fn_dblog(NULL, NULL)
                          WHERE AllocUnitName LIKE '%myTable%'
                          ORDER BY [Current LSN] DESC)
--4 rows

2. Restore recovery
Now, what does this really mean? Restoring a transaction log backup is a good example. SQL Server reads log records from the transaction log backup file and writes them into the LDF file. This is what we call the "data copy" phase. Then SQL Server performs REDO (a.k.a. roll forward). And finally, SQL Server performs UNDO (roll back), assuming we don't do the restore using the NORECOVERY option. Restoring from a database backup isn't any different, except that the log records are of course read from the database backup file.

Here it seems likely that SQL Server will wipe more or less everything from the LDF file as soon as the database is restored and brought online. Why? The log in this newly restored database can't serve as a starting point for a restore operation for this database; you first need a database backup. So, no use hanging onto the log records either! Let's see if we can verify that:

--Restore recovery

--Do this from a different connection
BACKUP LOG x TO DISK = 'C:\x.bak'
--We now have an open transaction in the database!

--Perform RESTORE
ROLLBACK
USE master
RESTORE DATABASE x FROM DISK = 'C:\x.bak' WITH REPLACE, NORECOVERY
RESTORE LOG x FROM DISK = 'C:\x.bak' WITH FILE = 2, RECOVERY

--Rollback was done, and the database brought online.
--What log records do we have?
USE x

SELECT * FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--0 rows

--Use the transaction ID from the earlier SELECT
SELECT * FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '0000:00000203'
--0 rows

So, SQL Server will remove user-defined stuff from LDF file after restore recovery was performed. Makes sense.

3. Crash recovery (a.k.a. restart or startup recovery)
I couldn't really guess here. So, let's give it a spin immediately and see:

--Crash recovery

--Do below from a separate connection
SHUTDOWN WITH NOWAIT

--Start SQL Server again and examine the log records:
USE x

SELECT * FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--5 rows

--Use the transaction ID from the earlier SELECT
SELECT * FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '0000:00000204'
--0 rows

Hmm, so SQL Server removes the log records after the rollback has been performed. Makes sense.

But this got me thinking some more. How can this be done… physically? SQL Server would just "reverse" the head of the log a bit. But what if we have later transactions for other connections, which have been committed? SQL Server can't ignore those, of course. They need to be kept in the LDF file for subsequent log backups. OTOH, I doubt that SQL Server will somehow physically delete things "in the middle" of an ldf file. Time for yet another test:

--Crash recovery, part two

--Do below from a separate connection
--First something which creates more recent log records
USE x
CREATE TABLE y(c1 INT)
INSERT INTO y(c1) VALUES(1)
SHUTDOWN WITH NOWAIT

--Start SQL Server again and examine the log records:
USE x

SELECT * FROM fn_dblog(NULL, NULL) WHERE AllocUnitName LIKE '%myTable%'
--7 rows

--Use the transaction ID from the earlier SELECT
SELECT * FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = (SELECT TOP(1) [Transaction ID]
                          FROM fn_dblog(NULL, NULL)
                          WHERE AllocUnitName LIKE '%myTable%'
                          ORDER BY [Current LSN] DESC)
--4 rows

Now the log records for our rolled back transaction are still there! So, just as when we did a regular rollback, SQL Server inserted a LOP_DELETE_ROWS to reflect the undo of the INSERT, and then a LOP_ABORT_XACT.
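
If you want to see this explicitly, list the operations for that transaction; a sketch:

--List the operations for the rolled back transaction
SELECT [Current LSN], Operation, AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = (SELECT TOP(1) [Transaction ID]
                          FROM fn_dblog(NULL, NULL)
                          WHERE AllocUnitName LIKE '%myTable%'
                          ORDER BY [Current LSN] DESC)
ORDER BY [Current LSN]
--Expect LOP_BEGIN_XACT, LOP_INSERT_ROWS, LOP_DELETE_ROWS (the compensation record) and LOP_ABORT_XACT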

Conclusion
Isn’t it beautiful when it all makes sense? Here are my conclusions, whether log records are kept or removed from transaction log file (LDF) for various types of rollback scenarios:

  • Regular rollback. Log records are not removed. Compensation log records are logged, reflecting undo of the modifications, and then an LOP_ABORT_XACT is logged.
  • Restore recovery. Log records are removed.
  • Crash recovery. It depends. If you have a transaction which is at the very head of the log, then those log records can be removed. If there are other, subsequent committed transactions, then compensation log records are logged, reflecting the undo of the modifications, and then a LOP_ABORT_XACT is logged.