How selective do we need to be for an index to be used?

You know the answer already: It depends. But I often see some percentage value quoted, and the point of this post is to show that no such percentage value exists.

To get the most out of this blog post, you should understand the basic structure of an index, i.e., what the b+ tree looks like. You should also understand the difference between a clustered and a non-clustered index. In essence, you should be able to visualize these structures, and searches through them, as you read the text. If you find that difficult, draw a few versions on a piece of paper and “navigate” through them by tracing with a pen or your finger. After a while, you will do this in your mind. For instance, check out the sections below.

I’m not saying that we shouldn’t consider selectivity when designing indexes – of course we should! Nor am I saying that one shouldn’t have some vague “feeling” for how much data will be returned when making such decisions. What I will show is that there is in reality no set percentage that the optimizer uses. The comment we usually see is something like:

“If we return more than 5% of the rows, then an index will not be used.”

Where did that 5% number come from? I can assure you that it is not some hard-wired number in the optimizer (except for an edge case; see below). The optimizer aims to run the query with as low a cost as possible. For the data-access part (think WHERE clause and its conditions), this is pretty much about reading as few pages as possible (few page accesses).

Just to cut down a bit on the threads that might follow these types of blog posts (“Hey, when I do this, your observations don’t match – your blog post is incorrect!”), let us first consider some special cases:

Clustered index
The clustered index *is* the data. If the search condition (SARG) is SEEKable, then SQL Server will obviously seek the clustered index rather than scan it. Anything else would be stupid. There can be *other* search conditions that are more efficient, but we are considering one search condition at a time.
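As a minimal sketch (the table and column names here are made up), a seekable predicate on the clustering key gives a Clustered Index Seek regardless of how wide the range is:

```sql
--Orders is assumed to be clustered on OrderID.
--The predicate is a seekable SARG, so the plan shows a
--Clustered Index Seek: navigate the b+ tree down to OrderID = 1000,
--then follow the leaf level forward until OrderID = 2000.
SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders
WHERE OrderID BETWEEN 1000 AND 2000
```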

Non-clustered index that covers the query
This is pretty much the same argument as above. Since all data needed by the query is in the index (it “covers the query”), not seeking it would be stupid. Again, there can be cheaper alternatives for some other search condition, but we are considering one condition at a time.
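An illustration of a covering index (names are made up): with both columns in the index leaf level, the query never needs to touch the table pages, so seeking the index is the obvious choice for this condition:

```sql
--OrderDate is the index key; CustomerID rides along in the
--leaf level via INCLUDE. The query below reads only index pages,
--no matter how many rows qualify.
CREATE INDEX ix_Orders_OrderDate
ON dbo.Orders (OrderDate) INCLUDE (CustomerID)

SELECT OrderDate, CustomerID
FROM dbo.Orders
WHERE OrderDate >= '20080101'
```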

The value is not known to the optimizer
This is what happens when you compare against a T-SQL variable: something like “colname = @v”. The optimizer has no knowledge of the contents of this variable. Where applicable (like “=”), it uses the density stored in the statistics for the index. Where density is not applicable (like “>”, “<”, “BETWEEN” etc.), the optimizer actually does use some hard-wired percentage value. This value can change between versions, so give it a spin if you want to know the value for your version/build number. Note that a variable is not the same thing as a parameter. SQL Server sniffs parameters (parameter sniffing). Read this for elaboration:
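A small sketch of the difference (table name hypothetical; the inequality percentage is version dependent, so verify on your own build):

```sql
DECLARE @v int
SET @v = 100

--The optimizer cannot see the value in @v at compile time.
--For "=" it estimates using the density stored in the statistics:
SELECT c1 FROM dbo.t WHERE c2 = @v

--For an inequality against a variable it falls back on a
--hard-wired percentage (commonly quoted as 30% of the rows,
--but check what your version actually estimates):
SELECT c1 FROM dbo.t WHERE c2 > @v
```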

The search expression is not seek-able
I hope you know this already, but just to point it out: in most cases, having a calculation on the column side will void the ability to seek the index. This should ideally be known to all T-SQL developers: never do calculations on the column side! So, things to avoid are the likes of “colname * 23 > 45” or “SOMEFUNCTION(colname) = 44”.
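For example (sketched, with made-up names), each non-seekable condition has a seekable rewrite where the column stands alone on one side:

```sql
--Not seekable: the column is wrapped in a calculation or function
SELECT c1 FROM dbo.t WHERE c2 * 23 > 45
SELECT c1 FROM dbo.t WHERE YEAR(OrderDate) = 2008

--Seekable rewrites: keep the column side "clean"
SELECT c1 FROM dbo.t WHERE c2 > 45 / 23.0
SELECT c1 FROM dbo.t
WHERE OrderDate >= '20080101' AND OrderDate < '20090101'
```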

Hopefully by now we all understand that there are always special cases and exceptions. The more of Kalen’s books you have read, the more you understand this. What we are discussing here is the typical situation. OK? Fine. So, “why is there no percentage value that the optimizer uses?”, you ask. Because the value will differ. In short, SQL Server wants to read as few pages as possible. In the simplest example, the alternative to an index seek is a table scan, so we will use this as the basis for our discussion. There can be other alternatives to the table scan (using some other index for some other condition), but that doesn’t change the principal “it depends” concept.

In essence, it is all about the alternative. As I said, our example will use a table scan as the alternative. A table scan (or clustered index scan, if it is a clustered table) means that SQL Server will look at every page and see which rows on each page satisfy the search condition.

My example has two different tables, both with 100,000 rows. Each table has an integer column with consecutively increasing unique values, which also has a non-clustered index. I will see how selective I need to be when searching on this column in order for an index seek to be done instead of a table scan. I.e., I will find this percentage cut-off value.

The fewrows table fits only one row per data page. This means 100,000 data pages. My tests show that the cut-off for fewrows is about 31,000 rows. I.e., we need to be more selective than 31% for the index to be used.

The manyrows table fits 384 rows per page. This means 260 pages. My tests show that the cut-off for manyrows is about 155 rows. I.e., we need to be more selective than 0.16% for the index to be used.

You might end up with different exact numbers, depending on what you have in the statistics, the build number of your SQL Server, etc. But what you will see is a similar pattern: a huge difference between the two.

It is all about the alternative
If I look at my indexes using sys.dm_db_index_physical_stats, I see that the non-clustered indexes on the int column are exactly the same for the two tables (same number of pages in the index, etc.). So, two indexes with the same characteristics have very different cut-off values. How can that be? It is because the alternative differs. The alternative in this example is a table scan. For the fewrows table, the alternative means reading 100,000 pages. But for the manyrows table, the alternative means reading only 260 pages. There can of course be other alternatives, like using some other index for some other search condition. This is, in the end, why we don’t have a set percentage value: it is all about the alternative!
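If you want to verify this yourself, a query along these lines (a sketch; adjust for your schema) shows the identical index sizes next to the very different data sizes:

```sql
--Leaf-level page counts for both tables: index_id 0 is the heap
--(the data pages), while the non-clustered index x has its own
--index_id. The two indexes come out the same size; the heaps
--differ by a factor of roughly 385.
SELECT OBJECT_NAME(s.object_id) AS tbl, s.index_id, s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS s
WHERE s.object_id IN (OBJECT_ID('manyrows'), OBJECT_ID('fewrows'))
AND s.index_level = 0
```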

The typical cases will of course fall somewhere between my more extreme examples. But my examples show that there is no set percentage value used by the optimizer. I showed that for my tests, the percentage value can be as low as 0.16% or as high as 31%. What matters is the alternative!


USE tempdb
GO

--Note: the CREATE TABLE and INSERT statements below are reconstructed;
--the exact definitions may have differed, but the point is the row sizes.

--manyrows: narrow rows, so many rows fit per page
CREATE TABLE manyrows(c1 int IDENTITY, c2 int NOT NULL)

--fewrows: a wide filler column, so only one row fits per page
CREATE TABLE fewrows(c1 int IDENTITY, c2 int NOT NULL, filler char(4500) NOT NULL DEFAULT 'x')

INSERT INTO manyrows (c2)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY a.object_id)
FROM sys.columns AS a CROSS JOIN sys.columns AS b

INSERT INTO fewrows (c2)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY a.object_id)
FROM sys.columns AS a CROSS JOIN sys.columns AS b

CREATE INDEX x ON manyrows (c2)
CREATE INDEX x ON fewrows (c2)

--Number of pages etc.:
EXEC sp_indexinfo 'manyrows'
-- Data: 265 pages (2.07 MB)
-- Index x: 187 pages (1.46 MB)

EXEC sp_indexinfo 'fewrows'
-- Data: 100385 pages (784 MB)
-- Index x: 187 pages (1.46 MB)

SELECT OBJECT_NAME(object_id) AS tbl, index_id, index_level, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')

--Run below with showplan:
SELECT c1, c2 FROM manyrows WHERE c2 BETWEEN 1 AND 155
--155 = ix search, 156 = ts
--Cut-off is 0.16%

SELECT c1, c2 FROM fewrows WHERE c2 BETWEEN 1 AND 31000
--31000 = ix search, 32000 = ts
--Cut-off is 31%

Adding a PK online?

I just read in a forum about a user who wants to replicate a table, but the table doesn’t have a PK. The table is pretty large, and having the table unavailable while adding the PK is undesirable. The table already has a clustered index, and there are other columns which are known to be unique (presence of unique indexes).

What I wanted to test is whether we can just add the PK constraint using the ONLINE option. The short answer is “yes”. Unfortunately, we can’t turn a unique index into a PK using some metadata-only operation. That would have been the easiest step. But we can add a unique constraint using the ONLINE option – there’s even an example of the syntax for this in BOL. We can then remove the pre-existing unique index, also using ONLINE. Since we are using ONLINE, we need to be on Enterprise or Developer Edition.

I wanted to test this, and below is my test script:

USE tempdb
GO

--Note: the CREATE TABLE and INSERT below are reconstructed; the point
--is a large table with a unique, NOT NULL column (c1) but no PK.
CREATE TABLE t(c1 int NOT NULL, c2 char(1) NOT NULL)

INSERT INTO t (c1, c2)
SELECT TOP (5000000) ROW_NUMBER() OVER (ORDER BY a.id), 't'
FROM syscolumns AS a
CROSS JOIN syscolumns AS b
CROSS JOIN syscolumns AS c

--Now try to add a PK "online"...:

--Add a nullable identity?
ALTER TABLE t ADD c3 int IDENTITY NULL
--Msg 8147, Level 16, State 1, Line 1
--Could not create IDENTITY attribute on nullable column 'c3', table 't'.

--Add a PK using ONLINE?
--Prepare a new connection with following INSERTs
--to verify it can run simultaneously:
--INSERT INTO t(c1, c2) VALUES(5000001, 't')
--INSERT INTO t(c1, c2) VALUES(5000001, 't')
--INSERT INTO t(c1, c2) VALUES(5000002, 't')
--INSERT INTO t(c1, c2) VALUES(5000003, 't')

--Above prepared? OK, execute below and jump to the
--other window to verify it is online:
ALTER TABLE t ADD CONSTRAINT PK_t PRIMARY KEY NONCLUSTERED (c1) WITH (ONLINE = ON)

--Verify the indexes using my own sp_indexinfo
EXEC sp_indexinfo 't'

Key count in sys.indexes

The old sysindexes table (as of 2005, implemented as a compatibility view) has a useful column named keycnt. This is supposed to give us the number of columns (keys) in the index. However, to make head or tail of the numbers, we need to understand how a non-clustered index is constructed. For a heap, the row locator is the physical file/page/row address (aka “rowid”). This is counted as a key in the keycnt column:

CREATE TABLE T1 (c1 INT, c2 datetime, c3 VARCHAR(3))
CREATE INDEX ix_T1_c1 ON T1 (c1)
CREATE INDEX ix_T1_c1_c2 ON T1 (c1, c2)
CREATE INDEX ix_T1_c1_c2_c3 ON T1 (c1, c2, c3)
CREATE INDEX ix_T1_c2 ON T1 (c2)

SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')

For the index on column (c2), you see a keycnt of 2. This is the key in the index plus the rowid.

For a nonclustered index on a clustered table, the row locator is the clustering key. Note, though, that if the clustered index is not defined as unique (PK, UQ etc), then another “uniqueifier” key/column is added. Building on above example:

--First with a unique clustered index:
CREATE UNIQUE CLUSTERED INDEX ixc_T1 ON T1 (c1)

SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')

--Then re-create the clustered index as non-unique:
CREATE CLUSTERED INDEX ixc_T1 ON T1 (c1) WITH (DROP_EXISTING = ON)

SELECT name, keycnt, indid, id
FROM sys.sysindexes
WHERE id = OBJECT_ID('T1')

Consider the (non-clustered) index on column c2. For the first one, where the table has a unique clustered index, we see a keycnt of 2: the column c2 plus the clustered key. But when we define the clustered index as non-unique, we see +1 for the keycnt column: the uniqueifier. The uniqueifier is 4 bytes, and only populated for rows which are duplicates of an existing clustered key (i.e., no extra cost if there are no duplicates).

But we want to stay away from compatibility views, right? Since we no longer have a key count column in sys.indexes, we need to grab that from sys.index_columns. This does not, however, include the internal columns, only the explicitly defined columns:

SELECT OBJECT_NAME(i.object_id) AS tblName
 ,i.name AS ixName
 ,(SELECT COUNT(*)
   FROM sys.index_columns AS ic
   WHERE i.object_id = ic.object_id
   AND i.index_id = ic.index_id) AS keycnt
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('T1')

Fed up with hunting physical index details?

I am. I find myself endlessly hunting for index information when working against the various SQL Servers I come in contact with. And, sure, the information is there. You just need to go and get it. This generally means that I start with sp_helpindex. Then some SELECT from sys.indexes. Then some more against sys.partitions and sys.allocation_units (we want space usage as well). And perhaps general usage stats (sys.dm_db_index_usage_stats). (I sometimes might even use the GUI (SSMS) reports and index dialog – but you might already know that I’m not much of a GUI person.)

The good news with all this is that I learn to use these catalog and dynamic management views. The bad news is that it is kind of … boring to do the same thing again and again.

This is why I wrote sp_indexinfo. You might have your own index information procedures (which you wrote yourself or found on the Internet). If not, you are welcome to use this one. I aim to improve it over time, so suggestions are welcome. Possible improvements include:

  1. Make it a function. Functions are nice since we can order the results, aggregate, and basically do whatever we want to when we SELECT from the function. But for this I need to find out how we install a user-defined global system function – there’s no supported way to do this. I’m not sure I want to go there…
  2. Return missing index information as well. For this we probably want two result sets, and only return missing index information when we target *one* table (no wildcards). If we do this, then a function is out, since a function can only return *one* result set.

If you care to give it a spin, please let me know. I just wrote the procedure, so I haven’t tested it much yet. If you do find bugs, please leave a comment and I will incorporate the fix into the source (let me know if you want to be acknowledged). Any comments are welcome.

You find the proc at:

Find table and index name for fragmented indexes

Got this question from a newsgroup today. The answer is pretty simple, just use the dynamic management view sys.dm_db_index_physical_stats. I’m posting this here mostly so I have somewhere to refer to when asked this question…

I prefer to have a helper function to get the index name:

CREATE FUNCTION dbo.index_name (@object_id int, @index_id int)
RETURNS sysname
AS
BEGIN
RETURN (SELECT name FROM sys.indexes WHERE object_id = @object_id AND index_id = @index_id)
END

And then a simple query:

SELECT OBJECT_NAME(object_id) AS tblName
 ,dbo.index_name(object_id, index_id) AS ixName
 ,avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL)
WHERE avg_fragmentation_in_percent > 20

Then you just adjust the search clause to your liking. One hint is to exclude indexes with few pages (see the page_count column).
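For instance, a variant of the query above with that hint applied (the thresholds here are just examples, not recommendations):

```sql
--Exclude small indexes: fragmentation in a handful of pages is noise.
SELECT OBJECT_NAME(s.object_id) AS tblName
 ,dbo.index_name(s.object_id, s.index_id) AS ixName
 ,s.avg_fragmentation_in_percent
 ,s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS s
WHERE s.avg_fragmentation_in_percent > 20
AND s.page_count > 500
```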