Do you perform log backup for the model database?

Are you stupid, you might think… But stop and think for a while. Model is no different from other databases. And by default it is in full recovery model. So as soon as you do your first database backup (you do back up your system databases, right?), the log for model will start filling up and autogrow. “But, hey, I don’t do any modifications in model!”, you probably say now. Fair, but other things happen in each database from time to time. Bottom line: the ldf file for model will start growing after a while. Perhaps not hugely, but I find it “un-neat” to have a model with a 3 MB mdf file and a 20 MB ldf file.

Personally I prefer to have model in simple recovery, since I feel that is a better default recovery model. An alternative is to regularly set model to simple recovery and back to full recovery (scheduled job), as sketched below.
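
Either approach is only a line or two of T-SQL. A minimal sketch (keep in mind that new databases inherit their recovery model from model):

--Option 1: keep model in simple recovery
ALTER DATABASE model SET RECOVERY SIMPLE

--Option 2: a scheduled job that toggles the recovery model,
--which truncates the log, and then goes back to full
ALTER DATABASE model SET RECOVERY SIMPLE
ALTER DATABASE model SET RECOVERY FULL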

Will EMPTYFILE on primary ldf "doom" it somehow?

I just read a newsgroup question asking whether doing SHRINKFILE with the EMPTYFILE option for the primary log file would somehow cause ill effects.

Shrinkfile for an ldf file will not move any data (log records). For an ldf file, EMPTYFILE is basically a preparation that tells the engine you are about to remove this file (ALTER DATABASE … REMOVE FILE).

Now, the first (primary) log file is special and cannot be removed. So, what if we do an EMPTYFILE on the primary log file. Will we end up in some limbo-state? I did a test and performed EMPTYFILE on the primary file. Nothing bad happened. I then did EMPTYFILE on the other log file and removed that file successfully. So it seems that this should not cause any havoc. Just pretend you never did that EMPTYFILE operation against the primary log file.

It isn’t doable to create a repro script which shows shrinking and removing of log files without some engagement from you. The reason is that we never know from what file and where the next virtual log file comes from. So, if you are about to run the script below, be prepared to read up on DBCC LOGINFO and other commands, understand what a VLF is; perhaps some operations need to be done several times before what we expect will happen… And as always, use at your own risk.

--Drop and create database named x
SET NOCOUNT ON
USE master
IF DB_ID('x') IS NOT NULL DROP DATABASE x
GO
CREATE DATABASE [x] ON PRIMARY
(NAME = N'x', FILENAME = N'C:\DemoDatabases\DbFiles\a\x.mdf', SIZE = 10MB, FILEGROWTH = 3MB)
LOG ON
(NAME = N'x_log', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log.ldf', SIZE = 2MB, FILEGROWTH = 1MB)
,(NAME = N'x_log2', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log.ldf2', SIZE = 2MB, FILEGROWTH = 1MB)
GO

--Get the database out of "auto-truncate" mode.
ALTER DATABASE x SET RECOVERY FULL
BACKUP DATABASE x TO DISK = 'nul'

--Fill up the log some
USE x
CREATE TABLE t(c1 INT IDENTITY, c2 CHAR(300) DEFAULT 'a')
GO
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Empty log
BACKUP LOG x TO DISK = 'nul'

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--"Empty" primary log file
DBCC SHRINKFILE(2, EMPTYFILE)

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Fill up the log some
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Do the above several times and see
--that file 2 is still being allocated from...

--Can we get rid of file 3?
BACKUP LOG x TO DISK = 'nul'
DBCC SHRINKFILE(3, EMPTYFILE)
--We might need to do the above a few times
--until file 3 is "clean" - no used VLFs

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

ALTER DATABASE x REMOVE FILE x_log2

--Might need to do some stuff to get rid of file physically
CHECKPOINT
BACKUP LOG x TO DISK = 'nul'
GO
SELECT * FROM sys.database_files

SQL Server Agent jobs and user contexts

In what user context does a job run? I recently found myself in a forum discussion and gave my stock reply, later realizing that I haven’t actually tested this for a long time (I used to demo this in class during 6.5 courses – when we actually had time for slight diversions). Let’s start with my assumptions:

  1. Job owned by sysadmin, TSQL jobsteps:
    Agent logs in to SQL Server using its own credentials (Windows authentication from the service account) and executes the TSQL commands – no impersonation performed.
  2. Job owned by sysadmin, other jobsteps:
    Agent starts a process using its service account – no impersonation performed.
  3. Job owned by non-sysadmin, TSQL jobstep:
    Agent logs in to SQL Server using its own credentials (Windows authentication from the service account), then uses EXECUTE AS LOGIN = '<job_owner_login>' to “impersonate” the login who owns the job. I.e., the TSQL commands will be executed in the security context of the job owner’s login (see the sketch just after this list).
  4. Job owned by non-sysadmin, other jobsteps:
    Agent starts a process using the service account as specified by the Agent proxy selected for this jobstep. (See Books Online “sp_add_proxy” and “CREATE CREDENTIAL” and this for more information about Agent Proxies and credentials in SQL Server.)
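
For case 3, you can see what this impersonation means by running something like the below yourself (a sketch: Agent uses WITH NO REVERT, but here we use a plain EXECUTE AS so that we can REVERT afterwards; "Kalle" is just an example login):

SELECT ORIGINAL_LOGIN() AS Original_login, SUSER_SNAME() AS Effective_login
EXECUTE AS LOGIN = N'Kalle'
SELECT ORIGINAL_LOGIN() AS Original_login, SUSER_SNAME() AS Effective_login
REVERT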

So, how to prove the above? Let’s use a combination of jobstep output files and a Profiler trace. The output files will be used to catch the login/user names, and the Profiler trace to verify what TSQL commands are submitted by Agent (for the TSQL jobstep). For the TSQL jobsteps, we execute:

SET NOCOUNT ON
SELECT
CAST(ORIGINAL_LOGIN() AS VARCHAR(20)) AS Original_login
,CAST(SUSER_SNAME() AS VARCHAR(20)) AS Effective_login
,CAST(USER_NAME() AS VARCHAR(20)) AS Db_user

To test the other jobstep types, we will use a CmdExec jobstep as an example (we are interested in what Windows account the process is started as) and run whoami.exe, which returns the Windows user name. Each jobstep is configured to output the result to a file.

  • I change my real machine name to “Machine” in this text.
  • I’m not in a domain.
  • I’m logged in interactively (to Windows) as Windows account “Tibor”.
  • I’m logged in to SQL Server using Windows authentication.
  • I’m sysadmin.
  • My service account for both Agent and SQL Server services is “SQLService”.

Here’s the result from having the job owned by me (being sysadmin):

Original_login       Effective_login      Db_user
-------------------- -------------------- --------------------
MACHINE\SqlService   MACHINE\SqlService   dbo

machine\sqlservice

As you can see, no impersonation was attempted. Steps 1 and 2 in the above list confirmed.

We are now going to test this for somebody who isn’t sysadmin:

  • Create a SQL Server login named “Kalle”.
  • Create a user in msdb for the above.
  • Assign Kalle permission to create jobs (SQLAgentUserRole role in msdb).
  • Create account “MySqlProxy” in Windows.
  • Create credential “c_MySqlProxy” in SQL Server for the above.
  • Create proxy “p_MySqlProxy” for the above, and allow login “Kalle” to use it for CmdExec jobsteps. (A T-SQL sketch of these steps follows below.)
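
For reference, the T-SQL for these steps could look something like below (a sketch: password and secret are placeholders, and the Windows account MySqlProxy must already exist):

USE master
CREATE LOGIN Kalle WITH PASSWORD = '<pwd>'
--The credential maps to the Windows account created above
CREATE CREDENTIAL c_MySqlProxy WITH IDENTITY = 'MACHINE\MySqlProxy', SECRET = '<pwd>'
GO
USE msdb
CREATE USER Kalle FOR LOGIN Kalle
EXEC sp_addrolemember @rolename = 'SQLAgentUserRole', @membername = 'Kalle'
--The proxy sits on top of the credential; enable it for CmdExec and allow Kalle to use it
EXEC sp_add_proxy @proxy_name = 'p_MySqlProxy', @credential_name = 'c_MySqlProxy'
EXEC sp_grant_proxy_to_subsystem @proxy_name = 'p_MySqlProxy', @subsystem_name = 'CmdExec'
EXEC sp_grant_login_to_proxy @proxy_name = 'p_MySqlProxy', @login_name = 'Kalle'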

Now we modify the job and set the owner to Kalle, and also specify for the CmdExec jobstep to use the proxy p_MySqlProxy. Also, since non-sysadmins can’t use output files (I didn’t know that), we instead configure each step’s output to go to a table. Execute the job. Now, finally, we can check each jobstep’s output for the result:

Original_login       Effective_login      Db_user
-------------------- -------------------- --------------------
MACHINE\SqlService   Kalle                guest

machine\mysqlproxy

The first section above confirms that Agent uses its own login, but then changes login context to Kalle (verified by the Profiler trace capturing the “EXECUTE AS LOGIN = N'Kalle' WITH NO REVERT” command). The jobstep was configured to run in the master database; since I didn’t add Kalle as a user to master, you see the user name guest.

The second section verifies that the CmdExec step started a process using the MySqlProxy Windows account, as specified by our SQL Server Agent proxy and SQL Server credential object.

(I will not reply to support questions here. For support, I recommend you visit a forum, for instance this. Other discussions, comments and corrections are of course very welcome!)

Finally, I scripted out the job (as owned by me – being sysadmin), if you want to re-create it and play with it. Use at your own risk, make appropriate modifications etc.

USE [msdb]
GO

/****** Object:  Job [CredTest]    Script Date: 09/19/2009 12:12:47 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 09/19/2009 12:12:47 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'CredTest',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'MACHINE\Tibor', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [A_TSQL]    Script Date: 09/19/2009 12:12:48 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'A_TSQL',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'SET NOCOUNT ON
SELECT
CAST(ORIGINAL_LOGIN() AS VARCHAR(20)) AS Original_login
,CAST(SUSER_SNAME() AS VARCHAR(20)) AS Effective_user
,CAST(USER_NAME() AS VARCHAR(20)) AS Db_user',
@database_name=N'master',
@flags=8
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [B_OS]    Script Date: 09/19/2009 12:12:48 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'B_OS',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'CmdExec',
@command=N'whoami.exe',
@flags=16
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

SQL Client config, 32 and 64 bit

Say you want to change the preferred netlib connection order. Or add a server alias.

You can do this using the “SQL Server Configuration Manager” program. But installing this on each client machine where you want to do the modification might not feel that attractive.

Another option is to use a tool like regmon while doing the config on a reference machine, sniff the registry modifications and then shoot these out to the client machines. This might be overkill, though.

Yet another option is to use the cliconfg.exe tool, which ships with Windows. This tool is already available on your machine. However, on a 64 bit machine, you need to consider whether the client app is a 32 or 64 bit app. The processor architecture for that app determines where in the registry the app (client libraries) will look. Below are my results from working on an x64 XP installation.
64 bit (native): HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\SuperSocketNetLib
32 bit (WOW): HKLM\SOFTWARE\WOW6432Node\Microsoft\MSSQLServer\Client\SuperSocketNetLib

Razvan Socol was kind enough to enlighten me, a while ago, of the fact that running the 64 bit version of cliconfg.exe will modify the “64 bit” registry entry (as per above), and vice versa for the 32 bit version of cliconfg.exe. Razvan mentioned that starting cliconfg.exe from Run will start the 64 bit version, and starting it from a 32 bit app (Explorer for instance – which I couldn’t find how to do, but I’m sure somebody will enlighten me) will start the 32 bit version.

The above made me wonder in what folder each file lives. Here are my findings (on the test machine I was using – a pretty clean XP x64 machine):

64 bit version of cliconfg.exe: C:\Windows\System32
32 bit version of cliconfg.exe: C:\Windows\SysWOW64

(And, before you ask, no, the above is not a typo. There is some logic behind this. 🙂 )

Heaps, forwarding pointers, ALTER TABLE REBUILD and non-clustered indexes

Let’s start with some background on forwarding pointers:

Forwarding pointers in heaps can be a mess to get rid of. A forwarding pointer happens when you modify a row so that the row doesn’t fit on the same page anymore. SQL Server moves the row to a new page and leaves a forwarding pointer where the old row used to be. This means that non-clustered indexes are not affected by the moving of the row – it can still point to the old location. So the modification is less costly than if all non-clustered indexes needed to reflect the row move. But having forwarding pointers can be bad for performance when you read data. A perhaps less known fact is that a scan over a table needs to follow forwarding pointers – essentially “jumping back and forth” a lot if you have many forwarding pointers. That can be really bad for performance (if you have table scans, of course). So, how do we get rid of forwarding pointers? Well, we can shrink the database file, but that is a little like curing a headache by jumping into a big meat grinder. We can export all data and re-import it – not very practical.
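
By the way, an easy way to see how bad it is for a given heap is the forwarded_record_count column in sys.dm_db_index_physical_stats (populated in SAMPLED and DETAILED modes; the table name below is just an example):

SELECT index_type_desc, page_count, record_count, forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('SomeHeap'), NULL, NULL, 'DETAILED')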

Greg Linwood reminded me that in SQL Server 2008, we can do ALTER TABLE … REBUILD. Now, I knew about this option, and every time I talk about it in class I’ve been thinking silently to myself “I need to test whether this is a way to get rid of fwd pointers”. (You generally talk about ALTER TABLE … REBUILD when you talk about enabling compression on a heap.) So, doing a REBUILD of a table using ALTER TABLE sounds promising. Will it get rid of forwarding pointers? Will it also rebuild all non-clustered indexes?

Quick answer for those who don’t care reading the TSQL script:
ALTER TABLE … REBUILD will remove forwarding pointers, but for some strange reason it will also rebuild all non-clustered indexes on that table.

See the TSQL code below. It was adapted from a challenge by Linchi Shea to produce a data loading script resulting in worst performance (where I immediately thought of forwarding pointers). See for instance http://sqlblog.com/blogs/linchi_shea/archive/2009/06/07/t-sql-exercise-simplest-data-loading-script-to-produce-worst-query-performance-june-7th-update.aspx.

USE tempdb
GO

IF EXISTS (SELECT * FROM sysobjects WHERE name = 'test')
DROP TABLE test
GO
CREATE TABLE test (
x INT NOT NULL
,x2 INT NOT NULL
,s CHAR(10) NOT NULL DEFAULT ('')
,t CHAR(10) NOT NULL DEFAULT ('')
)

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT

INSERT test (x, x2)
SELECT TOP(@rows)
ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS r
,ROW_NUMBER() OVER (ORDER BY (SELECT 1)) % 10 AS s
FROM master..spt_values a CROSS JOIN master..spt_values b
GO
ALTER TABLE test ALTER COLUMN s CHAR(892)
ALTER TABLE test ALTER COLUMN t CHAR(100)
GO

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT
DELETE TOP(@rows - @toKeep)
FROM test WHERE x2 IN(2, 4, 6, 8)
GO

CREATE INDEX x1 ON test(x)
CREATE INDEX x2 ON test(x2)

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run (no nc indexes ): 387157 fwd records (out of 400000 rows), 96104 pages
--Second run (two nc indexes): 387157 fwd records (out of 400000 rows), 96105 pages

CHECKPOINT
DBCC DROPCLEANBUFFERS
DECLARE @t time = SYSDATETIME()
ALTER TABLE test REBUILD
SELECT DATEDIFF(ms, @t, CAST(SYSDATETIME() AS time))
--First run, no non-clustered indexes, three subsequent executions (ms): 19351, 20683, 20275
--Second run, with two non-clustered indexes, three subsequent executions (ms): 31803, 35065, 37511

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run, heap = "index" 0 (no nc indexes ): 0 fwd records, 50002 pages
--Second run, heap = "index" 0 (two nc indexes): 0 fwd records, 50003 pages
--But: avg_page_space_used_in_percent changes for the nc indexes after the rebuild.
--That I take as nc indexes are also rebuilt.

Do maintenance plans require SSIS?

To many, this is a non-issue, since SSIS is installed anyhow. But not everybody installs SSIS. I, for instance, prefer to keep my production systems as clean as possible and only have what I really need (a principle which is harder and harder to live by as the years go by…). Also, not all editions of SQL Server come with SSIS.

I did a test some months ago on SQL Server 2005 with a recent service pack and also on SQL Server 2008. SQL Server 2008 did require SSIS (I tested both without and with SSIS installed), whereas 2005 sp2 didn’t. I recently learned from Phil Brammer in the MVP group that there has been progress. Here’s the run-down for when you don’t need SSIS installed to execute maintenance plans:

SQL Server 2005 requires sp2.

SQL Server 2008 requires CU3 or sp1 (see http://support.microsoft.com/kb/961126/). I didn’t test this combo (2008 sp1 without SSIS), but I have no reason to doubt that KB article.
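
To check what build you are on (so you know whether you meet those levels), you can for instance run:

SELECT SERVERPROPERTY('ProductVersion') AS version, SERVERPROPERTY('ProductLevel') AS sp_level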

Watch out for that autogrow bug

Under some circumstances, autogrow for database files can be set to some 12000 percent. I think this is limited to SQL Server 2005 and databases upgraded from SQL Server 2000 (I didn’t bother to search – feel free to comment if you know). So, if you have a reasonably sized database and autogrow kicks in, you can do the maths and realize that pretty soon you are out of disk space.
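
For a quick ad-hoc check across all databases, you can query sys.master_files (using the same threshold as the proc below):

SELECT DB_NAME(database_id) AS db, name, growth
FROM sys.master_files
WHERE is_percent_growth = 1 AND growth > 20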

I wrote a proc, which I schedule, that checks for out-of-bounds values in sys.database_files. The proc generates a bunch of messages it prints (handy if you have an Agent job with an output file) and also constructs an error message and does RAISERROR (handy if you have implemented alerting, for instance according to http://karaszi.com/agent-alerts-management-pack).

I prefer to schedule the below as an Agent job and use Agent alerts to notify me if we do have a db with autogrow out of whack. As always, don’t use code if you don’t understand it.

USE maint
GO

IF OBJECT_ID('check_autogrow_not_percent') IS NOT NULL DROP PROC check_autogrow_not_percent
GO

CREATE PROC check_autogrow_not_percent
AS
DECLARE
 @db sysname
,@sql NVARCHAR(2000)
,@file_logical_name sysname
,@file_phyname NVARCHAR(260)
,@growth VARCHAR(20)
,@did_exist bit
,@msg NVARCHAR(1800)
,@database_list NVARCHAR(1000)

SET @did_exist = CAST(0 AS bit)
SET @database_list = ''

--For each database
DECLARE dbs CURSOR FOR
 SELECT name FROM sys.databases
OPEN dbs
WHILE 1 = 1
BEGIN
  FETCH NEXT FROM dbs INTO @db
  IF @@FETCH_STATUS <> 0 BREAK

  SET @sql = 'DECLARE files CURSOR FOR
 SELECT CAST(growth AS varchar(20)), physical_name, name
 FROM ' + QUOTENAME(@db) + '.sys.database_files
 WHERE is_percent_growth = 1
 AND growth > 20'
  EXEC(@sql)
  OPEN files
  WHILE 1 = 1
  BEGIN
    FETCH NEXT FROM files INTO @growth, @file_phyname, @file_logical_name
    IF @@FETCH_STATUS <> 0 BREAK
    SET @did_exist = CAST(1 AS bit)
    SET @database_list = @database_list + '["' + @db + '": "' + @file_logical_name + '"]' + CHAR(13) + CHAR(10)
    SET @msg = 'Out-of-band autogrow in database "' + @db + '"'
    + ' with growth of ' + @growth
    + ', logical file name "' + @file_logical_name + '"'
    + ', physical file name "' + @file_phyname + '"'
    + '.'
    RAISERROR(@msg, 10, 1) WITH NOWAIT
  END
  CLOSE files
  DEALLOCATE files
END
CLOSE dbs
DEALLOCATE dbs
IF @did_exist = CAST(1 AS bit)
BEGIN
  SET @msg = 'Databases with out-of-control autogrow in databases: ' + CHAR(13) + CHAR(10) + @database_list
  RAISERROR(@msg, 16, 1) WITH LOG
END
GO
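
Executing it is then just a matter of (assuming the maint database from above):

EXEC maint.dbo.check_autogrow_not_percent

Since the final RAISERROR uses severity 16 WITH LOG, an Agent alert defined on severity 16 can pick it up.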

Spooky: What do you connect to?

I only recently discovered that SSMS will connect to different things. For instance, press the “New Query” button. What were you connected to? The answer is the same server as your “current” server. But what is the current server? It is the server where you happened to have focus when you pressed the “New Query” button. So, can you say whether you had focus in a query window, Object Explorer or Registered Servers?

This also applies when you double-click a .sql file. And it doesn’t stop there. Open the “Registered Servers” window. Now, click on a server group. Go to Explorer and double-click a .sql file. What were you connected to? Yes, all the servers in that group. Now, don’t get me wrong here; the ability to open the same query window against several servers can be a very useful thing. What I had no idea about until just now is how easily this can happen by mistake. Just by clicking the New Query button, or even double-clicking a .sql file. So – be aware…

(FYI: SSMS 2005 doesn’t seem to do this when clicking a file in Explorer, and the functionality to have a query window against several servers didn’t exist in SSMS 2005…)

Table restore and filegroups

The story usually goes something like:

Q – How can I restore only this table?
A – Put it on its own filegroup and you can do filegroup level backup and restore.

The problem with the above answer is that it most likely misses the point. We need to ask ourselves:
Why do you want to do a table level restore?

The answer to the question is very often that the table needs to be reverted to an earlier point in time, possibly because some accident happened, like deleting all rows in the table by mistake. (See my minimizing data loss when accidents happen article for a more general discussion.) So, why isn’t filegroup backup that usable for this scenario?

SQL Server will not let you into a database where different data is from different points in time!
(2005+, Enterprise Edition and Developer Edition, has online restore, which allows you into the database, but you won’t be able to access the restored data until you make it current – so it doesn’t really change the basic issue here.)

Now, think about the above. If we restore the filegroup backup containing the emptied table, but then need to restore all subsequent log backups up to “now”, what good did this song-and-dance act do us? No good at all (except for a learning experience, of course).
We can of course restore the primary filegroup and the one with the damaged data into a new temp database – to the desired earlier point in time – and then copy the relevant data from this temp database into the production database. But this operation is certainly not as straightforward as just restoring the filegroup backup into the production/source database.

Now, about having data from different points in time (regardless of how you achieve it): handle with care. Just think about the relationships and dependencies you have inside a database. Reverting some table to an earlier point in time can cause havoc for those dependencies.

I won’t get into details about how filegroup backups work, online restore, the PARTIAL option of the restore command etc. – you can read about all that in Books Online. The point of this blog post is to have somewhere I can point to when I see the “put-the-table-on-its-own-filegroup-and-backup-that-filegroup” recommendation.

As usual, I have a TSQL script to illustrate my points. If you happen to think that it *is* possible to restore part of the database to an earlier point in time into the production/source database – I’m all ears. You can post a comment here; I will be notified. Please use the below script as a template, and modify it so that we can execute and re-execute it.
The usual disclaimer applies: do not execute the below if you don’t understand what it is doing, etc.

 

--Drop and create the database
USE master
IF DB_ID('fgr') IS NOT NULL DROP DATABASE fgr
GO
--Three filegroups
CREATE DATABASE fgr ON PRIMARY
(NAME = N'fgr', FILENAME = 'C:\fgr.mdf'),
FILEGROUP fg1
(NAME = N'fg1', FILENAME = 'C:\fg1.ndf'),
FILEGROUP fg2
(NAME = N'fg2', FILENAME = 'C:\fg2.ndf')
LOG ON
(NAME = N'fgr_log', FILENAME = 'C:\fgr_log.ldf')
GO
ALTER DATABASE fgr SET RECOVERY FULL

--Base backup
BACKUP DATABASE fgr TO DISK = 'C:\fgr.bak' WITH INIT
GO

--One table on each filegroup
CREATE TABLE fgr..t_primary(c1 INT) ON "PRIMARY"
CREATE TABLE fgr..t_fg1(c1 INT) ON fg1
CREATE TABLE fgr..t_fg2(c1 INT) ON fg2

--Insert data into each table
INSERT INTO fgr..t_primary(c1) VALUES(1)
INSERT INTO fgr..t_fg1(c1) VALUES(1)
INSERT INTO fgr..t_fg2(c1) VALUES(1)

BACKUP LOG fgr TO DISK = 'c:\fgr.trn' WITH INIT --1

--Filegroup backup of fg2
BACKUP DATABASE fgr FILEGROUP 'fg2' TO DISK = 'C:\fgr_fg2.bak' WITH INIT

BACKUP LOG fgr TO DISK = 'c:\fgr.trn' WITH NOINIT --2

--Delete from t_fg2
--This is our accident, which we want to roll back!!!
DELETE FROM fgr..t_fg2

BACKUP LOG fgr TO DISK = 'c:\fgr.trn' WITH NOINIT --3

--Now, try to restore that filegroup to a previous point in time
RESTORE DATABASE fgr FILEGROUP 'fg2' FROM DISK = 'C:\fgr_fg2.bak'
GO

SELECT * FROM fgr..t_fg2 --error 8653
GO

--If we are on 2005+ and EE or Dev Ed, the restore can be online
--This means that rest of the database is accessible during the restore
INSERT INTO fgr..t_fg1(c1) VALUES(2)
SELECT * FROM fgr..t_fg1

--We must restore *all* log backups since that db backup
RESTORE LOG fgr FROM DISK = 'c:\fgr.trn' WITH FILE = 2 --2 out of 3
RESTORE LOG fgr FROM DISK = 'c:\fgr.trn' WITH FILE = 3 --3 out of 3
GO

SELECT * FROM fgr..t_fg2 --Success
--But we didn't get back the data from before the accidental DELETE!
GO

----------------------------------------------------------------------------
--What we can do is restore into a new database instead,
--to an earlier point in time.
--We need the PRIMARY filegroup and whatever more we want to access
----------------------------------------------------------------------------
IF DB_ID('fgr_tmp') IS NOT NULL DROP DATABASE fgr_tmp
GO
RESTORE DATABASE fgr_tmp FILEGROUP 'PRIMARY' FROM DISK = 'C:\fgr.bak'
WITH
MOVE 'fgr' TO 'C:\fgr_tmp.mdf'
,MOVE 'fg2' TO 'C:\fg2_tmp.ndf'
,MOVE 'fgr_log' TO 'C:\fgr_tmp_log.ldf'
,PARTIAL, NORECOVERY

RESTORE DATABASE fgr_tmp FILEGROUP 'fg2' FROM DISK = 'C:\fgr_fg2.bak'

RESTORE LOG fgr_tmp FROM DISK = 'c:\fgr.trn' WITH FILE = 1, NORECOVERY
RESTORE LOG fgr_tmp FROM DISK = 'c:\fgr.trn' WITH FILE = 2, RECOVERY

--Now the data in PRIMARY and fg2 is accessible
SELECT * FROM fgr_tmp..t_fg2

--We can use above to import to our production db:
INSERT INTO fgr..t_fg2(c1)
SELECT c1 FROM fgr_tmp..t_fg2

--And now the data is there again 🙂
SELECT * FROM fgr..t_fg2

Key count in sys.indexes

The old sysindexes table (as of 2005 implemented as a compatibility view) has a useful column named keycnt. This is supposed to give us the number of columns (keys) in the index. However, to make heads or tails of the numbers, we need to understand how a non-clustered index is constructed. For a heap, the pointer to a row is the physical file/page/row address (aka “rowid”). This is counted as a key in the keycnt column:

IF OBJECT_ID('t1') IS NOT NULL DROP TABLE t1
GO
CREATE TABLE T1 (c1 INT, c2 datetime, c3 VARCHAR(3))
CREATE INDEX ix_T1_c1 ON T1 (c1)
CREATE INDEX ix_T1_c1_c2 ON T1 (c1, c2)
CREATE INDEX ix_T1_c1_c2_c3 ON T1 (c1, c2, c3)
CREATE INDEX ix_T1_c2 ON T1 (c2)
SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')

For the index on column (c2), you see a keycnt of 2. This is the key in the index plus the rowid.

For a nonclustered index on a clustered table, the row locator is the clustering key. Note, though, that if the clustered index is not defined as unique (PK, UQ etc), then another “uniqueifier” key/column is added. Building on above example:

CREATE UNIQUE CLUSTERED INDEX ix_T1_c1 ON T1 (c1) WITH DROP_EXISTING
SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')
GO
CREATE CLUSTERED INDEX ix_T1_c1 ON T1 (c1) WITH DROP_EXISTING
SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')

Consider the (non-clustered) index on column c2. For the first one, where the table has a unique clustered index, we see a keycnt of 2: the column c2 plus the clustered key. But when we define the clustered index as non-unique, we see +1 for the keycnt column: the uniqueifier. The uniqueifier is 4 bytes, and only populated for rows which are duplicates of an existing clustered key (i.e. no extra cost if there are no duplicates).

But we want to stay away from compatibility views, right? Since we no longer have a key count column in sys.indexes, we need to grab that from sys.index_columns. This does not, however, include the internal columns, only the explicitly defined ones:

SELECT
 i.name
,i.index_id
,(SELECT COUNT(*)
  FROM sys.index_columns AS ic
  WHERE i.object_id = ic.object_id
  AND i.index_id = ic.index_id) AS keycnt
FROM sys.indexes AS i
WHERE object_id = OBJECT_ID('T1')
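
One thing to be aware of: sys.index_columns also contains included columns (INCLUDE), so for such indexes the count above covers more than just the keys. If you want keys only, you can filter on is_included_column (my variation, not part of the original query):

SELECT
 i.name
,i.index_id
,(SELECT COUNT(*)
  FROM sys.index_columns AS ic
  WHERE i.object_id = ic.object_id
  AND i.index_id = ic.index_id
  AND ic.is_included_column = 0) AS keycnt
FROM sys.indexes AS i
WHERE object_id = OBJECT_ID('T1')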