SQL Server : Concurrency

Locks

Locks are the way most relational databases serialise data changes, so people don’t overwrite each other’s changes. All databases try to hide the fact that they are applying changes serially, but ultimately only one change to a given piece of data ever happens at one time. If that were not the case, point in time recovery (PITR) would be impossible. Locks are necessary, and as such blocking will happen from time to time. Blocking is not inherently evil. Excessive blocking is.

Blocks

If two or more database sessions are trying to modify the same data, they will effectively form a queue. The first session locks the data, blocking the other session. Once the first session releases its lock, the second session is able to lock and modify the data. This is a normal locking situation, and although it ultimately affects performance and scalability, it doesn’t represent a problem as such. This is sometimes called a “live lock”.

A deadlock occurs when two or more sessions are waiting for data locked by each other, resulting in all the sessions being blocked. So session 1 has locked data in table A and is waiting for a lock on data in table B, while session 2 has a lock on the same data in table B and is waiting for a lock on the same data in table A. Both sessions are blocking each other and there is no way out, other than for one session to end. SQL Server kills one of the sessions (the deadlock victim) to break the deadlock.
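A minimal sketch of the situation, assuming two hypothetical tables TableA and TableB that each contain a row with id = 1. If two sessions interleave their statements in the order shown, each ends up waiting for the other:

```sql
-- Session 1:
BEGIN TRAN
UPDATE TableA SET val = 1 WHERE id = 1

-- Session 2:
BEGIN TRAN
UPDATE TableB SET val = 2 WHERE id = 1

-- Session 1 (blocked, waiting for session 2's lock on TableB):
UPDATE TableB SET val = 1 WHERE id = 1

-- Session 2 (blocked, waiting for session 1's lock on TableA - deadlock):
UPDATE TableA SET val = 2 WHERE id = 1
```

SQL Server's deadlock monitor detects the cycle and kills one of the sessions, which receives error 1205.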

Causes of Blocking

Most forms of excessive blocking are the result of bad database design and/or bad development.

Deadlocks are caused by poor application design. If all processes request the necessary locks in the same order, deadlocks become almost impossible.
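As a sketch, using two hypothetical tables TableA and TableB: if every process that touches both tables updates them in the same order, the worst case is ordinary blocking, with the second session queuing behind the first.

```sql
-- Every process locks TableA first, then TableB (hypothetical tables).
-- A second process running the same code simply waits; it cannot deadlock,
-- because it never holds a TableB lock while waiting for TableA.
BEGIN TRAN
UPDATE TableA SET val = val + 1 WHERE id = 1
UPDATE TableB SET val = val + 1 WHERE id = 1
COMMIT TRAN
```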

Other locking problems tend to be down to bad performance. If all operations are extremely quick and efficient, no one process is likely to block another process, or be blocked itself, for very long. If processes are inefficient, the chances of excessive blocking are worse. Poor database design, bad indexing strategies and badly written SQL result in badly performing SQL, which by its very nature is more likely to block resources unnecessarily.

Manual Locking

You should avoid manually acquiring locks if possible. Locks are automatically managed by the Lock Manager in SQL Server.

In the example below, the XLOCK table hint is used to take an exclusive lock on the data in a transaction. The data remains locked until the COMMIT TRAN or ROLLBACK command is issued.

BEGIN TRAN

SELECT * FROM MyTable WITH (XLOCK)
WHERE id = 1

-- Stays locked until COMMIT TRAN/ROLLBACK is issued.
COMMIT TRAN

If another session were to issue the following query it would be blocked until the previous session issued the COMMIT TRAN command.

SELECT * FROM MyTable
WHERE id = 1
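If you don’t want a session to wait on a lock indefinitely, you can set a session-level lock timeout in milliseconds. A sketch (SET LOCK_TIMEOUT is a standard T-SQL session setting; a timed-out lock request raises error 1222):

```sql
-- Wait at most 5 seconds for any lock; error 1222 is raised on timeout.
SET LOCK_TIMEOUT 5000

SELECT * FROM MyTable
WHERE id = 1
```

The default is -1, meaning wait forever, which is why blocked sessions appear to hang.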

Manually locking data is sometimes necessary, but you are responsible for understanding the implications.

Detecting Blocks

This query tells us what is blocked (SPID) and which session is blocking it (BLOCKED).

SELECT spid, blocked
FROM sys.sysprocesses
WHERE blocked > 0
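You can get the same information, plus the statement each session is running, from the dynamic management views. A sketch using sys.dm_exec_requests and sys.dm_exec_sql_text (both standard DMVs):

```sql
-- Show blocked requests, the session blocking them, and the SQL text.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id > 0
```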

We can see what the sessions are doing using the following command, where “N” is the session number (SPID or BLOCKED).

DBCC INPUTBUFFER(N)

The following query shows all active locks in the system.

SELECT *
FROM sys.dm_tran_locks

There are lots of additional locks at multiple levels, in addition to specific object or row locks. You will see why later.

Types of Locks

There are different types of locks (Shared, Update, Exclusive, Intent etc.) that can be applied at different levels (Row, Page, Table, File, Database etc.) in the database. They are described in this document.

To protect the data properly, there is often a hierarchy of locks. For example, to protect a row with exclusive access there may be an exclusive lock on the row, along with intent exclusive locks on the page and table, and a shared lock on the database. This allows the lock manager to prevent another session sneaking in and gaining a lock at a higher level that it shouldn’t have. The result of this lock hierarchy is that sys.dm_tran_locks can contain lots of locks, even for a simple and small transaction.
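You can see the hierarchy for yourself by locking a single row inside an open transaction and then querying sys.dm_tran_locks from the same session. A sketch, reusing the hypothetical MyTable from earlier (the DMV and its columns are standard):

```sql
BEGIN TRAN

SELECT * FROM MyTable WITH (XLOCK, ROWLOCK)
WHERE id = 1

-- Typically shows a shared DATABASE lock, intent locks on the
-- OBJECT and PAGE, and an exclusive KEY or RID lock for the row.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID

ROLLBACK
```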

Killing Processes

You kill a session using the KILL command, specifying the SPID of the session you want to kill.

KILL 150

Be careful about what you kill. Killing a session rolls back its open transaction, and for a large transaction the rollback can take longer than the original work did.

Dealing with Big Transactions

When you perform DML on a large number of rows, you are potentially blocking lots of sessions until the DML is complete. You can break large DML down into smaller batches. The following example performs a large delete in batches, committing the work and releasing the locks every 10,000 rows. The @@ROWCOUNT check stops the loop once there are no more matching rows to delete.

SET NOCOUNT ON

WHILE 1 = 1
BEGIN
DELETE TOP (10000)
FROM MyTable
WHERE id BETWEEN 1 AND 10000000

-- Exit the loop when the last batch deleted no rows.
IF @@ROWCOUNT = 0 BREAK
END

SET NOCOUNT OFF

For more information see: