Be careful with constraints calling UDFs

You might just not get what you think. I would be surprised if this hasn’t been blogged about already, but if so, it is worth repeating. Here’s the deal (example from a forum, slightly re-worked):

I want the values in one column to be unique, assuming the value in another column is 1. Can I use a UDF for that?

On the surface, yes. You can write a UDF to which you pass the value that should be conditionally unique, and in that UDF check how many rows have this value AND otherColumn = 1. If more than 1 row, the function returns 0, else 1 (or something else to signal “OK” or “Not OK”). Now you can call this function in a CHECK constraint, something like CHECK(myFunction(uniqueCol) = 1). On the surface, this does its job, as long as you only INSERT into the table. But if you update a row and only set otherColumn from 0 to 1 for some row, then the check constraint will not be checked. The optimizer is smart enough to understand that the update doesn’t change anything we refer to in our CHECK constraint, so why bother checking the constraint? The end result is that the constraint doesn’t do what we want it to do. Use a trigger instead (or some other method). Here’s a repro:

USE tempdb
GO
IF OBJECT_ID('t') IS NOT NULL DROP TABLE t
IF OBJECT_ID('t_uq') IS NOT NULL DROP FUNCTION t_uq
GO

CREATE TABLE t(c0 INT, c1 NVARCHAR(50), c2 bit)
GO

CREATE FUNCTION t_uq(@c1 NVARCHAR(50))
RETURNS bit
AS
BEGIN
DECLARE @ret bit
IF (SELECT COUNT(*) FROM t WHERE c1 = @c1 AND c2 = 1) > 1
  SET @ret = 0
ELSE
  SET @ret = 1
RETURN @ret
END
GO

ALTER TABLE t ADD CONSTRAINT t_c CHECK(dbo.t_uq(c1) = 1)

INSERT INTO t(c0, c1, c2) VALUES(1, 'a', 0) --OK
INSERT INTO t(c0, c1, c2) VALUES(2, 'a', 0) --OK
INSERT INTO t(c0, c1, c2) VALUES(3, 'b', 1) --OK
INSERT INTO t(c0, c1, c2) VALUES(4, 'b', 1) --Fails

--So far so good, but watch now:

UPDATE t SET c2 = 1 WHERE c0 = 2
--No error, the constraint doesn't do its job!

--We have invalid data:
SELECT * FROM t
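
For completeness, here is one way to enforce the rule with a trigger instead. This is a minimal sketch (the trigger name is my own, and it re-checks the whole table rather than just the modified rows, so don't treat it as tuned for production):

CREATE TRIGGER t_uq_tri ON t AFTER INSERT, UPDATE
AS
BEGIN
  --Roll back if any c1 value occurs more than once among rows with c2 = 1
  IF EXISTS (SELECT c1 FROM t
             WHERE c2 = 1
             GROUP BY c1
             HAVING COUNT(*) > 1)
  BEGIN
    RAISERROR('Duplicate c1 values among rows with c2 = 1.', 16, 1)
    ROLLBACK TRANSACTION
  END
END

Unlike the CHECK constraint, the trigger fires for an UPDATE regardless of which columns were modified, so the UPDATE in the repro above would be rolled back.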

Do you perform log backup for the model database?

Are you stupid, you might think… But stop and think for a while. Model is no different from other databases, and by default it is in full recovery model. So as soon as you do your first database backup (you do back up your system databases, right?), the log for model will start filling up and autogrow. “But, hey, I don’t do any modifications in model!”, you probably say now. Fair enough, but other things happen in each database from time to time. Bottom line is that the ldf file for model will start growing after a while. Perhaps not hugely, but I find it “un-neat” to have a model with a 3 MB mdf file and a 20 MB ldf file.

Personally, I prefer to have model in simple recovery, since I feel that is a better default recovery model; see the sketch below. An alternative is to regularly set model to simple recovery and back to full recovery (scheduled job).
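
Either way amounts to little more than a one-liner (a minimal sketch of both options):

--Option 1: make simple recovery the default for new databases
ALTER DATABASE model SET RECOVERY SIMPLE

--Option 2: keep full recovery as the default, but truncate the log now and then (scheduled job)
ALTER DATABASE model SET RECOVERY SIMPLE
ALTER DATABASE model SET RECOVERY FULL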

Will EMPTYFILE on primary ldf "doom" it somehow?

I just read a newsgroup question about whether doing SHRINKFILE with the EMPTYFILE option on the primary log file would somehow cause ill effects.

Shrinkfile for an ldf file will not move any data (log records) or such. For an ldf file, it is basically a preparation telling the engine that you are about to remove this file (ALTER DATABASE … REMOVE FILE).
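
So for a non-primary log file, the typical sequence looks something like this (file id and file name as in the demo script further down):

USE x
DBCC SHRINKFILE(3, EMPTYFILE)
ALTER DATABASE x REMOVE FILE x_log2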

Now, the first (primary) log file is special and cannot be removed. So, what if we do an EMPTYFILE on the primary log file? Will we end up in some limbo state? I did a test and performed EMPTYFILE on the primary file. Nothing bad happened. I then did EMPTYFILE on the other log file and removed that file successfully. So it seems this should not cause any havoc. Just pretend you never did that EMPTYFILE operation against the primary log file.

It isn’t doable to create a repro script which shows shrinking and removal of log files without some engagement from you. The reason is that we never know from what file, and where in it, the next virtual log file comes from. So, if you are about to run the script below, be prepared to read up on DBCC LOGINFO and other commands, understand what a VLF is, and accept that some operations may need to be done several times before what we expect will happen… And as always, use at your own risk.

--Drop and create database named x
SET NOCOUNT ON
USE master
IF DB_ID('x') IS NOT NULL DROP DATABASE x
GO
CREATE DATABASE [x] ON PRIMARY
 (NAME = N'x', FILENAME = N'C:\DemoDatabases\DbFiles\a\x.mdf', SIZE = 10MB, FILEGROWTH = 3MB)
LOG ON
 (NAME = N'x_log', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log.ldf', SIZE = 2MB, FILEGROWTH = 1MB)
,(NAME = N'x_log2', FILENAME = N'C:\DemoDatabases\DbFiles\a\x_log2.ldf', SIZE = 2MB, FILEGROWTH = 1MB)
GO

--Get the database out of "auto-truncate" mode.
ALTER DATABASE x SET RECOVERY FULL
BACKUP DATABASE x TO DISK = 'nul'

--Fill up the log some
USE x
CREATE TABLE t(c1 INT IDENTITY, c2 CHAR(300) DEFAULT 'a')
GO
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Empty log
BACKUP LOG x TO DISK = 'nul'

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--"Empty" primary log file
DBCC SHRINKFILE(2, EMPTYFILE)

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

--Fill up the log some
INSERT INTO t DEFAULT VALUES
DELETE FROM t
GO 2000

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

-- Do above several times and see
-- that 2 is still allocated from...

--Can we get rid of file 3?
BACKUP LOG x TO DISK = 'nul'
DBCC SHRINKFILE(3, EMPTYFILE)
--We might need to do above a few times
--until 3 is "clean" - no used VLFs

--Investigate log
DBCC SQLPERF(logspace)
DBCC LOGINFO

ALTER DATABASE x REMOVE FILE x_log2

--Might need to do some stuff to get rid of file physically
CHECKPOINT
BACKUP LOG x TO DISK = 'nul'
GO
SELECT * FROM sys.database_files

SQL Server Agent jobs and user contexts

In what user context does a job run? I recently found myself in a forum discussion and gave my stock reply, later realizing that I hadn’t actually tested this for a long time (I used to demo this in class during 6.5 courses – when we actually had time for slight diversions). Let’s start with my assumptions:

  1. Job owned by sysadmin, TSQL jobsteps:
    Agent logs in to SQL Server using its own credentials (Windows authentication from the service account) and executes the TSQL commands – no impersonation performed.
  2. Job owned by sysadmin, other jobsteps:
    Agent starts a process using its service account – no impersonation performed.
  3. Job owned by non-sysadmin, TSQL jobstep:
    Agent logs in to SQL Server using its own credentials (Windows authentication from the service account), then uses EXECUTE AS LOGIN = ‘<job_owner_login>’ to “impersonate” the login who owns the job. I.e., the TSQL commands will be executed in the security context of the job owner’s login (see the snippet after this list).
  4. Job owned by non-sysadmin, other jobsteps:
    Agent starts a process using the service account as specified by the Agent proxy selected for this jobstep. (See Books Online “sp_add_proxy” and “CREATE CREDENTIAL” and this for more information about Agent Proxies and credentials in SQL Server.)
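
For reference, the impersonation in (3) is the same mechanism you can use yourself (hypothetical login name; Agent’s own variant uses WITH NO REVERT, as we will see below):

EXECUTE AS LOGIN = N'Kalle'
SELECT SUSER_SNAME() --now returns Kalle
REVERT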

So, how to prove the above? Let’s use a combination of jobstep output files and a Profiler trace. The output files will be used to catch the login/user names, and the Profiler trace to verify what TSQL commands are submitted by Agent (for the TSQL jobsteps). For the TSQL jobsteps, we execute:

SET NOCOUNT ON
SELECT
CAST(ORIGINAL_LOGIN() AS VARCHAR(20)) AS Original_login
,CAST(SUSER_SNAME() AS VARCHAR(20)) AS Effective_login
,CAST(USER_NAME() AS VARCHAR(20)) AS Db_user

To test the other jobstep types, we use a CmdExec jobstep as an example (we are interested in what Windows account the process will be started as) and run whoami.exe, which returns the Windows user name. Each jobstep is configured to write its result to an output file.

  • I change my real machine name to “Machine” in this text.
  • I’m not in a domain.
  • I’m logged in interactively (to Windows) as Windows account “Tibor”.
  • I’m logged in to SQL Server using Windows authentication.
  • I’m sysadmin.
  • My service account for both Agent and SQL Server services is “SQLService”.

Here’s the result from having the job owned by me (being sysadmin):

Original_login       Effective_login       Db_user
-------------------- -------------------- --------------------
MACHINE\SqlService   MACHINE\SqlService   dbo

machine\sqlservice

As you can see, no impersonation is performed. Steps 1 – 2 in the above list are confirmed.

We are now going to test this for somebody who isn’t sysadmin:

  • Create a SQL Server login named “Kalle”.
  • Create a user in msdb for the above login.
  • Give Kalle permission to create jobs (SQLAgentUserRole in msdb).
  • Create an account “MySqlProxy” in Windows.
  • Create a credential “c_MySqlProxy” in SQL Server for the above account.
  • Create a proxy “p_MySqlProxy” for the above credential, and allow login “Kalle” to use it for CmdExec jobsteps. (A sketch of this setup in TSQL follows below.)
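
Here is roughly that setup in TSQL, for those who want to reproduce it (the passwords are placeholders, and the Windows account itself must be created outside SQL Server):

USE master
CREATE LOGIN Kalle WITH PASSWORD = 'Pass!Word1' --placeholder password
--The credential maps to the Windows account the proxy will run as
CREATE CREDENTIAL c_MySqlProxy WITH IDENTITY = N'MACHINE\MySqlProxy', SECRET = N'ProxyPass!1'
GO
USE msdb
CREATE USER Kalle FOR LOGIN Kalle
EXEC dbo.sp_addrolemember N'SQLAgentUserRole', N'Kalle'
EXEC dbo.sp_add_proxy @proxy_name = N'p_MySqlProxy', @credential_name = N'c_MySqlProxy', @enabled = 1
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = N'p_MySqlProxy', @subsystem_id = 3 --3 = CmdExec
EXEC dbo.sp_grant_login_to_proxy @login_name = N'Kalle', @proxy_name = N'p_MySqlProxy'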

Now we modify the job and set the owner to Kalle, and also specify for the CmdExec jobstep to use the proxy p_MySqlProxy. Also, since non-sysadmins can’t use output files (I didn’t know that), we instead configure each step’s output to go to a table. Execute the job. Now, finally, we can check each jobstep’s output for the result:

Original_login       Effective_login       Db_user
-------------------- -------------------- --------------------
MACHINE\SqlService   Kalle                guest

machine\mysqlproxy

The first section above confirms that Agent uses its own login, but then changes login context to Kalle (verified by the Profiler trace capturing the “EXECUTE AS LOGIN = N’Kalle’ WITH NO REVERT” command). The jobstep was configured to run in the master database; since I didn’t add Kalle as a user to master, you see the user name guest.

The second section verifies that the CmdExec step started a process using the MySqlProxy Windows account, as specified by our SQL Server Agent proxy and SQL Server credential object.

(I will not reply to support questions here. For support, I recommend you visit a forum, for instance this. Other discussions, comments and corrections are of course very welcome!)

Finally, I scripted out the job (as owned by me – being sysadmin), if you want to re-create it and play with it. Use at your own risk, make appropriate modifications, etc.

USE [msdb]
GO

/****** Object:  Job [CredTest]    Script Date: 09/19/2009 12:12:47 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 09/19/2009 12:12:47 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'CredTest',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'MACHINE\Tibor', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [A_TSQL]    Script Date: 09/19/2009 12:12:48 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'A_TSQL',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'SET NOCOUNT ON
SELECT
CAST(ORIGINAL_LOGIN() AS VARCHAR(20)) AS Original_login
,CAST(SUSER_SNAME() AS VARCHAR(20)) AS Effective_user
,CAST(USER_NAME() AS VARCHAR(20)) AS Db_user',
@database_name=N'master',
@flags=8
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [B_OS]    Script Date: 09/19/2009 12:12:48 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'B_OS',
@step_id=2,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'CmdExec',
@command=N'whoami.exe',
@flags=16
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

SQL Client config, 32 and 64 bit

Say you want to change the preferred netlib connection order. Or add a server alias.

You can do this using the “SQL Server Configuration Manager” program. But installing this on each client machine where you want to do the modification might not feel that attractive.

Another option is to use a tool like regmon while doing the config on a reference machine, sniff the registry modifications and then shoot these out to the client machines. This might be overkill, though.

Yet another option is to use the cliconfg.exe tool, which ships with Windows, so it is already available on your machine. However, on a 64-bit machine, you need to consider whether the client app is a 32- or 64-bit app. The processor architecture of that app determines where in the registry the app (client libraries) will look. Below are my results from working on an x64 XP installation.
64 bit (native): HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\SuperSocketNetLib
32 bit (WOW): HKLM\SOFTWARE\WOW6432Node\Microsoft\MSSQLServer\Client\SuperSocketNetLib
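
Incidentally, if you want to verify from the server side which protocol a connection actually ended up using after such changes, you can check sys.dm_exec_connections (2005 and later):

--What protocol is my current connection using?
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID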

Razvan Socol was kind enough to enlighten me, a while ago, of the fact that running the 64-bit version of cliconfg.exe will modify the “64 bit” registry entry (as per above), and vice versa for the 32-bit version of cliconfg.exe. Razvan mentioned that starting cliconfg.exe from Run will start the 64-bit version, and starting it from a 32-bit app (Explorer for instance – which I couldn’t figure out how to do, but I’m sure somebody will enlighten me) will start the 32-bit version.

The above made me wonder in what folder each file lives. Here are my findings (on the test machine I was using – a pretty clean XP x64 machine):

64 bit version of cliconfg.exe: C:\Windows\System32
32 bit version of cliconfg.exe: C:\Windows\SysWOW64

(And, before you ask, no, above is not a typo. There is some logic behind this. 🙂 )

Heaps, forwarding pointers, ALTER TABLE REBUILD and non-clustered indexes

Let’s start with some background on forwarding pointers:

Forwarding pointers in heaps can be a mess to get rid of. A forwarding pointer happens when you modify a row so that the row doesn’t fit on the same page anymore. SQL Server moves the row to a new page and leaves a forwarding pointer where the old row used to be. This means that non-clustered indexes are not affected by the moving of the row – it can still point to the old location. So the modification is less costly than if all non-clustered indexes needed to reflect the row move. But having forwarding pointers can be bad for performance when you read data. A perhaps less known fact is that a scan over a table needs to follow forwarding pointers – essentially “jumping back and forth” a lot if you have many forwarding pointers. That can be really bad for performance (if you have table scans, of course). So, how do we get rid of forwarding pointers? Well, we can shrink the database file, but that is a little like curing a headache by jumping into a big meat grinder. We can export all data and re-import it – not very practical.
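
As an aside: to see whether a heap has forwarding pointers in the first place, you can look at forwarded_record_count in sys.dm_db_index_physical_stats, something like (table name is just an example):

--Forwarding pointer count for the heap (index_id 0) of table t
SELECT index_type_desc, forwarded_record_count, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('t'), 0, NULL, 'DETAILED')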

Greg Linwood reminded me that in SQL Server 2008, we can do ALTER TABLE … REBUILD. Now, I knew about this option, and every time I mention it in class I’ve been thinking silently to myself “I need to test whether this is a way to get rid of fwd pointers”. (You generally talk about ALTER TABLE … REBUILD when you talk about enabling compression on a heap.) So, doing a REBUILD of a table using ALTER TABLE sounds promising. Will it get rid of forwarding pointers? Will it also rebuild all non-clustered indexes?

Quick answer for those who don’t care to read the TSQL script:
ALTER TABLE … REBUILD will remove forwarding pointers, but for some strange reason it will also rebuild all non-clustered indexes on that table.

See the TSQL code below. It was adapted from a challenge by Linchi Shea to produce a data loading script resulting in worst performance (where I immediately thought of forwarding pointers). See for instance http://sqlblog.com/blogs/linchi_shea/archive/2009/06/07/t-sql-exercise-simplest-data-loading-script-to-produce-worst-query-performance-june-7th-update.aspx.

USE tempdb
GO

IF EXISTS (SELECT * FROM sysobjects WHERE name = 'test')
DROP TABLE test
GO
CREATE TABLE test (
 x INT NOT NULL
,x2 INT NOT NULL
,y CHAR(10) NOT NULL DEFAULT ('') --filler columns; the names y and z are reconstructions, the originals were lost
,z CHAR(10) NOT NULL DEFAULT ('')
)

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT

INSERT test (x, x2)
SELECT TOP(@rows)
 ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS r
,ROW_NUMBER() OVER (ORDER BY (SELECT 1)) % 10 AS s
FROM master..spt_values a CROSS JOIN master..spt_values b
GO
ALTER TABLE test ALTER COLUMN y CHAR(892)
ALTER TABLE test ALTER COLUMN z CHAR(100)
GO

DECLARE @rows INT = 666666, @toKeep INT = 400000, @diff INT
DELETE TOP(@rows - @toKeep)
FROM test WHERE x2 IN(2, 4, 6, 8)
GO

CREATE INDEX x1 ON test(x)
CREATE INDEX x2 ON test(x2)

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run (no nc indexes ): 387157 fwd records (out of 400000 rows), 96104 pages
--Second run (two nc indexes): 387157 fwd records (out of 400000 rows), 96105 pages

CHECKPOINT
DBCC DROPCLEANBUFFERS
DECLARE @t time = SYSDATETIME()
ALTER TABLE test REBUILD
SELECT DATEDIFF(ms, @t, CAST(SYSDATETIME() AS time))
--First run, no non-clustered indexes, three subsequent executions (ms): 19351, 20683, 20275
--Second run, with two non-clustered indexes, three subsequent executions (ms): 31803, 35065, 37511

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('test'), NULL, NULL, 'DETAILED')
--First  run, heap = "index" 0 (no nc indexes ): 0 fwd records, 50002 pages
--Second run, heap = "index" 0 (two nc indexes): 0 fwd records, 50003 pages
--But: avg_page_space_used_in_percent changes for the nc indexes after the rebuild.
--That I take as evidence that the nc indexes are also rebuilt.

Do maintenance plans require SSIS?

To many, this is a non-issue, since SSIS is installed anyhow. But not everybody installs SSIS. I, for instance, prefer to keep my production systems as clean as possible and only have what I really need (a principle which is harder and harder to live by as the years go by…). Also, not all editions of SQL Server come with SSIS.

I did a test some months ago for SQL Server 2005 with a recent sp and also on SQL Server 2008. SQL Server 2008 did require SSIS (I tested both without and with SSIS installed), whereas 2005 sp2 didn’t. I recently learned from Phil Brammer in the MVP group that there has been progress. Here’s the run-down for when you don’t need SSIS installed to execute maintenance plans:

SQL Server 2005 requires sp2.

SQL Server 2008 requires CU3 or sp1 (see http://support.microsoft.com/kb/961126/). I didn’t test this combo (2008 sp1 without SSIS), but I have no reason to doubt that KB article.

Watch out for that autogrow bug

Under some circumstances, autogrow for database files can be set to some 12000 percent. I think this is limited to SQL Server 2005 and for databases upgraded from SQL Server 2000 (I didn’t bother to search – feel free to comment if you know). So, if you have a reasonably sized database and autogrow kicks in, you can do the maths and realize that pretty soon you are out of disk space.

I wrote a proc which I schedule and which checks for out-of-bounds values in sys.database_files. The proc generates a bunch of messages that it prints (handy if you have an Agent job with an output file) and also constructs an error message and does RAISERROR (handy if you have implemented alerting, for instance according to http://karaszi.com/agent-alerts-management-pack).

I prefer to schedule the below as an Agent job and use Agent alerts to notify me if we do have a db with autogrow out of whack; a usage example follows after the code. As always, don’t use code if you don’t understand it.

USE maint
GO

IF OBJECT_ID('check_autogrow_not_percent') IS NOT NULL DROP PROC check_autogrow_not_percent
GO

CREATE PROC check_autogrow_not_percent
AS
DECLARE
 @db sysname
,@sql NVARCHAR(2000)
,@file_logical_name sysname
,@file_phyname NVARCHAR(260)
,@growth VARCHAR(20)
,@did_exist bit
,@msg NVARCHAR(1800)
,@database_list NVARCHAR(1000)

SET @did_exist = CAST(0 AS bit)
SET @database_list = ''

--For each database
DECLARE dbs CURSOR FOR
 SELECT name FROM sys.databases
OPEN dbs
WHILE 1 = 1
BEGIN
  FETCH NEXT FROM dbs INTO @db
  IF @@FETCH_STATUS <> 0 BREAK

  SET @sql = 'DECLARE files CURSOR FOR
 SELECT CAST(growth AS varchar(20)), physical_name, name
 FROM ' + QUOTENAME(@db) + '.sys.database_files
 WHERE is_percent_growth = 1
 AND growth > 20'
  EXEC(@sql)
  OPEN files
  WHILE 1 = 1
  BEGIN
    FETCH NEXT FROM files INTO @growth, @file_phyname, @file_logical_name
    IF @@FETCH_STATUS <> 0 BREAK
    SET @did_exist = CAST(1 AS bit)
    SET @database_list = @database_list + '["' + @db + '": "' + @file_logical_name + '"]' + CHAR(13) + CHAR(10)
    SET @msg = 'Out-of-band autogrow in database "' + @db + '"' +
    ' with growth of ' + @growth +
    ', logical file name "' + @file_logical_name + '"' +
    ', physical file name "' + @file_phyname + '"' +
    '.'
    RAISERROR(@msg, 10, 1) WITH NOWAIT
  END
  CLOSE files
  DEALLOCATE files
END
CLOSE dbs
DEALLOCATE dbs
IF @did_exist = CAST(1 AS bit)
BEGIN
  SET @msg = 'Databases with out-of-control autogrow in databases: ' + CHAR(13) + CHAR(10) + @database_list
  RAISERROR(@msg, 16, 1) WITH LOG
END
GO
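
A quick way to try it out (assumes you created the proc in a database named maint, as above):

EXEC maint.dbo.check_autogrow_not_percent
--Prints one message per offending file, and raises a severity 16 error
--(which an Agent alert can catch) if any file has percent-based autogrow above 20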

Spooky: What do you connect to?

I only recently discovered that SSMS can connect to different things than you might expect. For instance, press the “New Query” button. What were you connected to? The answer is the same server as your “current” server. But what is the current server? It is the server where you happened to have focus when you pressed the “New Query” button. So, can you say whether you had focus in a query window, Object Explorer or Registered Servers?

This also applies when you double-click a .sql file. And it doesn’t stop there. Open the “Registered Servers” window. Now, click on a server group. Go to Explorer and double-click a .sql file. What were you connected to? Yes, all the servers in that group. Now, don’t get me wrong here; the ability to open the same query window against several servers can be a very useful thing. What I had no idea about until just now is how easily this can happen by mistake – just by clicking the “New Query” button, or even double-clicking a .sql file. So – be aware…

(FYI: SSMS 2005 doesn’t seem to do this when clicking a file in Explorer, and the functionality to have a query window against several servers didn’t exist in SSMS 2005…)