SQL Server : Basic Restores

Simple Full Restore

Here is a simple example of restoring a full backup.

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'

If the database is already there, we use the REPLACE option to specify that we want the existing database to be replaced, and that we are happy to lose any updates applied since the backup was taken, even if they are still present on disk. In the following example we also use the STATS=10 option to get progress feedback every 10%.

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH REPLACE, STATS = 10

Backup-Based Move

The first thing you should do before any backup-based move operation is to check the files contained in the backup.

RESTORE FILELISTONLY
FROM DISK = 'C:\backups\MyDatabase.bak'

With the information this provides, we can define how we want to move the files using the MOVE option of the WITH clause. Each MOVE clause specifies the logical name of the file you want to move and the new destination for that file. You can move all the files, or just a subset of them. In the following case we restore and move both the data and log files.

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH 
MOVE 'MyDatabase' TO 'C:\NewLocation\MyDatabase\MyDatabase.mdf',
MOVE 'MyDatabaseLog' TO 'C:\NewLocation\MyDatabase\MyDatabaseLog.ldf',
REPLACE, STATS = 10

Restore to New Name

This is similar to the backup-based move shown above. This time we restore to a new database name and make sure all the files have a different path and/or name, so we don’t have any collisions. We don’t need the REPLACE option as there is nothing to replace here.

RESTORE DATABASE MyDatabase2
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH 
MOVE 'MyDatabase' TO 'C:\NewLocation\MyDatabase2\MyDatabase2.mdf',
MOVE 'MyDatabaseLog' TO 'C:\NewLocation\MyDatabase2\MyDatabaseLog2.ldf',
STATS = 10

If this were being done on a new server we wouldn’t need to rename the files to avoid collisions, but without the MOVE options they would be restored to their original paths, which often isn’t what you want.

Restore Logs

A restore using logs has to start with a full restore. We must make sure no uncommitted transactions are rolled back during the recovery, as leaving them in place is what allows us to continue the restore using the logs. We prevent this rollback using the NORECOVERY option.

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH 
STATS = 10, NORECOVERY

We can now restore the first log since the full backup. Once again we use NORECOVERY to allow us to continue.

RESTORE LOG MyDatabase
FROM DISK = 'C:\backups\MyDatabaseLog1.trn'
WITH 
STATS = 10, NORECOVERY

We can now restore the second log since the full backup. If we know this is the last log we want to restore we can use the RECOVERY option to complete the process. In this case we will use NORECOVERY again.

RESTORE LOG MyDatabase
FROM DISK = 'C:\backups\MyDatabaseLog2.trn'
WITH 
STATS = 10, NORECOVERY

If we didn’t use RECOVERY for the last restore, we can still finish the recovery process using the following command, which will roll back any uncommitted transactions.

RESTORE LOG MyDatabase
WITH RECOVERY

Until the final recovery has taken place you don’t have any access to the database.
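If you want to confirm this, a quick check of the database state (a minimal sketch, assuming the database name used above) will show "RESTORING" until the final recovery completes.

-- The state will show RESTORING until RECOVERY completes.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'MyDatabase'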

Restore Logs (Standby)

If you need read-only database access during the restore, to check the database contents for example, you can replace the NORECOVERY option with the STANDBY option.

-- Full backup.
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH 
STATS = 10, STANDBY = 'C:\backups\Restore.txt'

-- Log.
RESTORE LOG MyDatabase
FROM DISK = 'C:\backups\MyDatabaseLog1.trn'
WITH 
STATS = 10, STANDBY = 'C:\backups\Restore.txt'

What this does is roll back the uncommitted transactions, but store them in the standby file so they can be reapplied before the next restore operation. Between restore operations the database is left in read-only mode, allowing you to check its contents. As before, once you’ve restored the last log you can recover the database.

RESTORE LOG MyDatabase
WITH RECOVERY

Restore Differential Backups

Here we restore the full backup, the latest differential, then any additional log backups.

-- Full backup.
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.bak'
WITH 
STATS = 10, NORECOVERY

-- Latest differential.
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\backups\MyDatabase.diff'
WITH 
STATS = 10, NORECOVERY

-- 0-many logs, using standby.
RESTORE LOG MyDatabase
FROM DISK = 'C:\backups\MyDatabaseLog1.trn'
WITH 
STATS = 10, STANDBY = 'C:\backups\Restore.txt'

-- Check the read-only database to see if
-- this is the correct place to stop.

-- Complete the recovery.
RESTORE LOG MyDatabase
WITH RECOVERY


SQL Server : Basic Backups

Introduction

There are two things to consider when thinking about backups.

  • Recovery Point Objective : How much data can you afford to lose?
  • Recovery Time Objective : How long can the database be down while you do a recovery?

The closer these are to zero, the more money and effort you have to put into your backup processes.

Much of the functionality discussed here is available from the SQL Server Management Studio.

  • Right-click on the database.
  • Select Tasks > Back Up.
  • Enter the backup details and click the “OK” button.

The rest of the article will discuss backups from Transact-SQL.

Note. You should probably do most of your backups using a maintenance plan, which will be covered separately.

Full Backups

The minimum you can do to perform a backup is shown below. Each time you run it, it will append the new backup to the existing file, which isn’t very useful.

BACKUP DATABASE MyDatabase
TO DISK = 'C:\backups\MyDatabase.bak'

We can use flags to influence this behaviour. There is a complete list of possible flags here. For now, let’s look at these four.

  • INIT – Overwrites the existing backup. Without this the default is NOINIT, which appends the new backup to the existing location.
  • FORMAT – Writes a new media header. This invalidates any existing content in the location (file). Without this the default is NOFORMAT, which preserves the existing headers and therefore the existing backups too.
  • STATS – Used to show the progress of the backup. The default is 10%, STATS=10. This is only useful if you are running the backups interactively, but it doesn’t cause a problem with scripted backups.
  • COMPRESSION – Compress the backup. The opposite is NO_COMPRESSION.

The following backup command will use the same file each time, but will overwrite the contents with a clean backup.

BACKUP DATABASE MyDatabase
TO DISK = 'C:\backups\MyDatabase.bak'
WITH INIT, FORMAT, STATS=10, COMPRESSION

We can stripe backups across multiple files.

BACKUP DATABASE MyDatabase
TO DISK = 'C:\backups\MyDatabase1.bak',
   DISK = 'C:\backups\MyDatabase2.bak'
WITH INIT, FORMAT, STATS=10, COMPRESSION

Log Backups

The command for transaction log backups is similar to that for database backups. We don’t normally want to overwrite an existing log backup so we should use a unique name each time by using an incrementing counter or date-time.

BACKUP LOG MyDatabase
TO DISK = 'C:\backups\MyDatabaseLog1.trn'
WITH INIT, FORMAT, STATS=10, COMPRESSION
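If you are scripting log backups yourself rather than using a maintenance plan, the following sketch shows one way to build a date-based file name. The path and naming format are just assumptions; adjust them to suit.

-- Build a unique file name from the current date and time (yyyymmdd_hhmmss).
DECLARE @FileName NVARCHAR(260)
SET @FileName = N'C:\backups\MyDatabaseLog_'
              + REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 120), '-', ''), ' ', '_'), ':', '')
              + N'.trn'

BACKUP LOG MyDatabase
TO DISK = @FileName
WITH INIT, FORMAT, STATS=10, COMPRESSION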

Remember, if you use a maintenance plan it will take care of backup naming for you!

Differential Backups

A differential backup includes everything that has changed since the last full backup. This is cumulative, so if you do full backups on Sunday, the daily differential backup would gradually grow in size as it covers more days, like this.

Sun  Mon  Tue  Wed  Thu  Fri  Sat
 |----|
 |---------|
 |--------------|
 |-------------------|
 |------------------------|
 |-----------------------------|

A differential backup is only possible if a full backup has already been done. Recovery of a differential backup is only possible if both the full backup and the differential backups are present. You can still keep log backups to allow specific point-in-time recovery.

The backup command follows the same format as before, but you add the DIFFERENTIAL flag to the WITH clause. Remember, if you want to keep multiple differentials you will need to use a different file name each day.

BACKUP DATABASE MyDatabase
TO DISK = 'C:\backups\MyDatabaseDiff1.bak'
WITH INIT, FORMAT, STATS=10, COMPRESSION, DIFFERENTIAL

Filegroup Backups

Rather than backing up a whole database in one go, you can choose to back up individual filegroups. Ultimately you will need backups of all filegroups and all the transaction logs to make the database consistent again, but you might choose to reduce the frequency of backups of filegroups with relatively static data to improve backup speed.

BACKUP DATABASE MyDatabase
FILEGROUP = 'MyFileGroup1', FILEGROUP = 'MyFileGroup2'
TO DISK = 'C:\backups\MyDatabase-MyFileGroup.bak'
WITH INIT, FORMAT, STATS=10, COMPRESSION
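Restoring a filegroup backup follows the same pattern as a regular restore, naming the filegroup(s) you want restored. A minimal sketch, assuming the backup above, might look like this.

RESTORE DATABASE MyDatabase
FILEGROUP = 'MyFileGroup1', FILEGROUP = 'MyFileGroup2'
FROM DISK = 'C:\backups\MyDatabase-MyFileGroup.bak'
WITH NORECOVERY, STATS = 10

-- Restore any subsequent log backups, then recover the database.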

System Database Backups

Backing up system databases is essentially the same as backing up a user database.

BACKUP DATABASE master
TO DISK = 'C:\backups\master.bak'
WITH INIT, FORMAT

BACKUP DATABASE model
TO DISK = 'C:\backups\model.bak'
WITH INIT, FORMAT

BACKUP DATABASE msdb
TO DISK = 'C:\backups\msdb.bak'
WITH INIT, FORMAT

You can’t back up “tempdb” as it is recreated every time the instance is started.

Depending on your recovery model, you can backup logs for some of the system databases.

BACKUP LOG model
 TO DISK = 'C:\backups\model.trn'
 WITH INIT, FORMAT;

BACKUP LOG msdb
 TO DISK = 'C:\backups\msdb.trn'
 WITH INIT, FORMAT;

You can’t back up the logs for the master database, so either do full backups at intervals during the day, or rely on the fact that the master database doesn’t usually change frequently.

For more information see:

SQL Server : Auditing

Create a New Audit

The “Audit” object is really just a reference to where the audit file(s) will be located and how they will be managed. It doesn’t define what specific auditing will be done. That is controlled by the server-level and database-level audits explained later.

  • Expand the Management Studio tree for the server of interest. “Security > Audits”.
  • Right-click on “Audits” and select the “New Audit” option.
  • Set the “Audit name” to indicate what it is doing. In this case “ServerAudit”.
  • Decide what should happen on audit failure.
  • Pick the destination (File, Security Log, Application Log). I picked File.
  • If you pick “File”, enter the path and file handling details.
  • Click “OK”.
  • Right-click on the new audit and select the “Enable Audit” option. If you don’t do this you will not be able to write any audit information to the audit location.
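If you prefer T-SQL, a minimal sketch of the equivalent, assuming a file destination and the “ServerAudit” name used above, would look something like this.

USE master
GO

-- Create the audit, writing to a file, then enable it.
CREATE SERVER AUDIT [ServerAudit]
TO FILE (FILEPATH = 'C:\temp\')
GO

ALTER SERVER AUDIT [ServerAudit] WITH (STATE = ON)
GO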

Server-Level Audits

  • Expand the Management Studio tree for the server. “Security > Server Audit Specifications”.
  • Right-click on “Server Audit Specifications” and pick the “New Server Audit Specification” option.
  • Set the name. It’s easier if you make this match the Audit you created above.
  • Select the matching “Audit” from the drop-down.
  • Pick the audit action type(s) required.
  • Click “OK”.
  • Right-click on the new server audit specification and select the “Enable Server Audit Specification” option.
  • Right-click on the audit you defined previously (not the server audit specification) and select the “View Audit Logs” option.

You can find an explanation of all the Audit Action Groups and Actions here.
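The T-SQL equivalent is sketched below. The FAILED_LOGIN_GROUP action group is just an example; substitute the action group(s) you actually need.

CREATE SERVER AUDIT SPECIFICATION [ServerAuditSpec]
FOR SERVER AUDIT [ServerAudit]
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON)
GO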

Database-Level Audits

The process is similar to the server-level audits.

  • Create a new Audit as described previously.
  • Expand the specific database of interest in the tree. (Server-Name > Databases > Database-Name > Security > Database Audit Specifications).
  • Right-click on “Database Audit Specifications” and pick the “New Database Audit Specification” option.
  • Set the name. It’s easier if you make this match the Audit you created above.
  • Select the matching “Audit” from the drop-down.
  • Pick the audit action type(s) required. Make them as specific as possible by targeting (Database, Schema or Object) and clicking the “…” buttons to select specific qualifiers.
  • Click “OK”.
  • Right-click on the new database audit specification and select the “Enable Database Audit Specification” option.
  • Right-click on the audit you defined previously (not the database audit specification) and select the “View Audit Logs” option.

You can find an explanation of all the Audit Action Groups and Actions here.
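Again, a minimal T-SQL sketch of the equivalent, using a hypothetical table dbo.MyTable as the audited object, might look like this.

USE MyDatabase
GO

CREATE DATABASE AUDIT SPECIFICATION [DatabaseAuditSpec]
FOR SERVER AUDIT [ServerAudit]
ADD (SELECT ON OBJECT::dbo.MyTable BY [public])
WITH (STATE = ON)
GO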

Importing Audits

You can import audit files using the following statement, with the file pattern adjusted accordingly.

SELECT *
FROM sys.fn_get_audit_file (
         'c:\temp\ServerAudit*.sqlaudit',
         Default,
         Default
       )

The “Default” settings mean we will read all files matching the pattern. We can instead set an initial file and offset to limit the files we read. This is explained in the following document.

sys.fn_get_audit_file (Transact-SQL)

We can also create a table holding the audit data as follows.

SELECT * INTO MyAuditTable
FROM sys.fn_get_audit_file (
         'c:\temp\ServerAudit*.sqlaudit',
         Default,
         Default
        )

SELECT * FROM MyAuditTable

Policy-Based Management

  • Expand the tree. (Server-name > Management > Policy Management).
  • You will see Facets, Conditions and Policies. A facet represents a group of properties you can test. A condition is a test against a facet. A policy is a group of one or more conditions, and can be evaluated on a schedule, in response to events, or on demand.
  • Expand the “Facets” tree node.
  • Double-click on the “Database” facet.
  • Scroll down the list of properties and find AutoShrink. This is what we will test.
  • Right-click on “Conditions” and select the “New Condition” option.
  • Give it a reasonable name (AutoShrinkCondition).
  • Select the “Database” facet.
  • Enter the Field “@AutoShrink”, the operator “=” and the value “False”.
  • Click “OK”.
  • Right-click on “Policies” and select the “New Policy” option.
  • Give it a reasonable name (AutoShrinkPolicy).
  • Select the condition(s) you want in the policy.
  • Decide how it should be evaluated (on-demand, scheduled etc.).
  • Click “OK”.
  • For an on-demand policy, right-click on the policy and select the “Evaluate” option. The results will let you know if all databases have AutoShrink=FALSE or not.
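As a quick manual cross-check of the same setting, you can query sys.databases directly.

-- 1 means AutoShrink is enabled for that database.
SELECT name, is_auto_shrink_on
FROM sys.databases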


SQL Server : Troubleshooting

Here are some tools that can be useful for troubleshooting.

SQL Server Profiler

Start the profiler using the menu option “Tools > SQL Server Profiler”.

On the “General” tab.

  • Trace name : A name for the trace.
  • Template : Pick any of the predefined templates. “Blank” allows you to select exactly which events you care about, and there are a lot of them. “Standard” is a good starting point.
  • Check “Save to file” and specify a file on the machine running the profiler (your PC).
  • Check “Enable file rollover”.
  • If you want the trace to stop automatically, check and set the “Enable trace stop time”.

On the “Events Selection” tab, select the events you are interested in. Checking “Show all events” and “Show all columns” lets you see all the possible choices. When unchecked, they only show the events and columns where you have made a selection.

When you are happy with your selection, click the “Run” button. To manually stop the trace, click the stop button (red square) on the toolbar.

If you chose to save to file, you can open a previous trace file in the profiler and work through it. You can also amend the trace properties and it will amend the trace output on screen to filter out events you exclude.


Server-Side Trace (SST)

The SQL Profiler is a GUI wrapper over Server-Side Trace (SST), a collection of system stored procedures that expose the SQL Server trace information.

The easiest way to get to grips with this is to use the SQL Profiler as follows.

  • Start a profiler run.
  • Immediately stop it.
  • Export it as a “Script Trace Definition”. (File > Export > Script Trace Definition > For SQL Server 2005 – 2014).

The contents of the resulting file will be the SST commands to reproduce the SQL Profiler run you started. There are instructions in the script comments telling you how to modify this script to allow you to rerun specific traces.

You can see any traces that are present on the system using the following query.

SELECT * FROM sys.traces
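Traces are started and stopped with the sp_trace_setstatus procedure. A minimal sketch, assuming a hypothetical trace id of 2 taken from sys.traces, is shown below.

-- Stop the trace (status 0), then close it and delete its definition (status 2).
EXEC sp_trace_setstatus @traceid = 2, @status = 0
EXEC sp_trace_setstatus @traceid = 2, @status = 2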


Dynamic Management Views and Functions (DMVs and DMFs)

These are like Oracle’s dynamic performance (V$) views. They provide information about everything happening in the system. For example, there are DMVs and DMFs for session resource usage, index usage, OS performance and I/O performance, amongst other things.
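As a simple example, the following query (a minimal sketch) uses sys.dm_exec_query_stats to list the cached queries that have consumed the most CPU.

-- Top 10 cached query plans by total CPU time (microseconds).
SELECT TOP 10 total_worker_time, execution_count, total_elapsed_time, sql_handle
FROM sys.dm_exec_query_stats
ORDER BY total_worker_time DESC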


SQL Server : Concurrency

Locks

Locks are the way most relational databases serialise data changes, so people don’t overwrite each other’s changes. All databases try to hide the fact that they are applying changes serially, but ultimately only one change to a piece of data ever happens at a time. If that were not the case, point in time recovery (PITR) would be impossible. Locks are necessary, and as such blocking will happen from time to time. Blocking is not inherently evil. Excessive blocking is.

Blocks

If two or more database sessions are trying to modify the same data, they will effectively form a queue. The first session locks the data, blocking the other session. Once the first session releases its lock, the second session is able to lock and modify the data. This is a normal locking situation, and although it ultimately affects performance and scalability, it doesn’t represent a problem as such. This is sometimes called a “live lock”.

A deadlock occurs when two or more sessions are waiting for data locked by each other, leaving all the sessions blocked. For example, session 1 has locked data in table A and is waiting for a lock on data in table B, while session 2 has locked the same data in table B and is waiting for a lock on the same data in table A. Both sessions are blocking each other and there is no way out, other than for one session to end. SQL Server kills one of the sessions to break the deadlock.

Causes of Blocking

Most forms of excessive blocking are the result of bad database design and/or bad development.

Deadlocks are caused by poor application design. If all processes request the necessary locks in the same order, deadlocks become almost impossible.
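As an illustration, using hypothetical tables TableA and TableB, every process that needs both should acquire the locks in the same agreed order.

BEGIN TRAN

-- Every process updates TableA before TableB, so circular waits can't form.
UPDATE TableA SET col1 = 1 WHERE id = 1
UPDATE TableB SET col1 = 1 WHERE id = 1

COMMIT TRAN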

Other locking problems tend to be down to bad performance. If all operations are extremely quick and efficient, no one process is likely to block another process, or be blocked itself, for very long. If processes are inefficient, the chances of excessive blocking are worse. Poor database design, bad indexing strategies and badly written SQL result in badly performing SQL, which by its very nature is more likely to block resources unnecessarily.

Manual Locking

You should avoid manually acquiring locks if possible. Locks are automatically managed by the Lock Manager in SQL Server.

In the example below, the XLOCK table hint is used to take an exclusive lock on the data in a transaction. The data remains locked until the COMMIT TRAN or ROLLBACK command is issued.

BEGIN TRAN

SELECT * FROM MyTable WITH (XLOCK)
WHERE id = 1

-- Stays locked until COMMIT TRAN/ROLLBACK is issued.
COMMIT TRAN

If another session were to issue the following query it would be blocked until the previous session issued the COMMIT TRAN command.

SELECT * FROM MyTable
WHERE id = 1

Manually locking data is sometimes necessary, but you are responsible for understanding the implications.

Detecting Blocks

This query tells us what is blocked (SPID) and which session is blocking it (BLOCKED).

SELECT spid, blocked
FROM sys.sysprocesses
WHERE blocked > 0

We can see what the sessions are doing using the following command, where “N” is the session number (SPID or BLOCKED).

dbcc inputbuffer(n)
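Alternatively, the sys.dm_os_waiting_tasks DMV gives a more detailed view of the same information, including the wait type and duration.

-- Blocked sessions, who is blocking them, and how long they have waited.
SELECT session_id, blocking_session_id, wait_type, wait_duration_ms
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL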

The following query shows all active locks in the system.

SELECT *
FROM sys.dm_tran_locks

There are lots of additional locks at multiple levels, in addition to specific object or row locks. You will see why later.

Types of Locks

There are different types of locks (Shared, Update, Exclusive, Intent etc.) that can be applied at different levels (Row, Page, Table, File, Database etc.) in the database. They are described in this document.

To protect the data properly, there is often a hierarchy of locks. For example, to protect a row with exclusive access there may be an exclusive lock on the row, along with intent exclusive locks on the page and table, and a shared lock on the database. This allows the lock manager to prevent another session sneaking in and gaining a lock at a higher level that it shouldn’t have. The result of this lock hierarchy is that sys.dm_tran_locks can contain lots of locks, even for a simple and small transaction.
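You can see this hierarchy by summarising the current lock list, for example.

-- Count the current locks by level and mode.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode
ORDER BY resource_type, request_mode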

Killing Processes

You kill a session using the KILL command, specifying the SPID of the session you want to kill.

KILL 150

Be careful about what you kill. Killing sessions can be dangerous.

Dealing with Big Transactions

When you perform DML on a large number of rows, you potentially block lots of sessions until the DML is complete. You can break large DML down into smaller batches. The following example uses batch mode for a large delete, committing the work and releasing the locks every 10,000 rows.

SET NOCOUNT ON

WHILE 1=1
BEGIN
  DELETE TOP (10000)
  FROM MyTable
  WHERE id BETWEEN 1 AND 10000000

  -- Exit once there is nothing left to delete.
  IF @@ROWCOUNT = 0 BREAK
END

SET NOCOUNT OFF


SQL Server : Instance Configuration and Management

Recycling Logs

By default SQL Server retains six logs and only recycles a log when the instance is restarted. As a result, the logs can grow really big, wasting a lot of space and making them hard to open and read. You should probably consider recycling the logs on a regular basis and altering the number of logs retained.

In this example, we will recycle the logs every day and retain 30 days worth of logs.

  • Expand “Management”.
  • Right-click on the “SQL Server Logs” node and select the “Configure” menu option.
  • Check the option to limit the number of error log files and set the maximum to “30”.
  • Click the “OK” button.

Recycle the error log every day.

  • Expand “SQL Server Agent”.
  • Right-click on “Jobs”.
  • Select the “New Job…” menu option.
  • Set Name to “RecycleErrorLog” and Owner to “sa”.
  • Click on the “Steps” page and click the “New” button.
  • Set the “Step name” to “Step_1” and set the Command to “sp_cycle_errorlog”.
  • Click the “OK” button.
  • Click on the “Schedules” page and click the “New” button.
  • Set the Name to “DailyMidnight”.
  • Change “Occurs” to “Daily” and leave the default “Occurs once at” as “00:00:00”.
  • Click the “OK” button.
  • Click the “OK” button on the “New Job” dialog.
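The job step above just runs the sp_cycle_errorlog procedure, which you can also execute manually whenever you want to start a fresh log.

-- Close the current error log and start a new one.
EXEC sp_cycle_errorlog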


Patch Management

  • RTM – Release to Manufacturing : The initial release.
  • Hotfix : Specific fixes.
  • CU – Cumulative Update : Rollup hotfixes.
  • SP – Service Pack : A large number of patches combined together.

Check out the Update Center for Microsoft SQL Server.

Trace Flags

To set a trace flag at session level do the following, where “n” is the trace flag number.

dbcc traceon(n)

To alter the trace flag globally do the following. It will not persist past a restart of SQL Server.

dbcc traceon(n, -1)
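You can check which trace flags are currently enabled using DBCC TRACESTATUS, where the -1 argument lists the globally enabled flags.

dbcc tracestatus(-1)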

To make a persistent change to a trace flag do the following.

  • Open “SQL Server Configuration Manager”.
  • Double-click on the “SQL Server Service”.
  • Click the “Startup Parameters” tab.
  • Add a startup parameter “-T3226” and click the “Add” button.
  • Click the “Apply” button and click “OK” on the warning message.

The trace flag will be applied on subsequent restarts.


SQL Server : Transaction Logs

Recovery Models

The following recovery models are available.

  • Simple : No log backups. No point in time recovery.
  • Full : Must do log backups, or the transaction log will grow continually, as log space is only reused after a log backup.
  • Bulk Logged : Bulk operations will not be logged completely. Non-bulk operations will be logged.
    Must do log backups, but may not be possible to do PITR if bulk operations were performed.
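You can switch a database between these models using ALTER DATABASE, as in the following sketch.

-- Possible values: SIMPLE, FULL, BULK_LOGGED.
ALTER DATABASE MyDatabase SET RECOVERY FULL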

Managing Transaction Logs

Logs are written to sequentially, so there is no performance gain with striping.

  • In simple mode, as soon as a transaction is complete, the log space is marked as reusable.
  • In full mode, after the first full backup the log space will be kept until the next log backup. Before the first full backup, the database acts like pseudo-simple mode. It doesn’t act like true full mode until after the first full backup.

Monitor the log usage as follows.

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases

dbcc sqlperf(logspace)


SQL Server : Permission Chaining

If objects are in the same schema, granting access on one object implicitly gives access to the objects it depends on.

For example, if a procedure queries a table, granting EXECUTE on that procedure allows the caller to read the table through the procedure without an explicit SELECT grant, because the permission check on the underlying table is skipped. This only works because objects in the same schema share the same owner, so the ownership chain is unbroken.
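A minimal sketch of this, using hypothetical object names, is shown below.

-- Both objects are in the dbo schema, so they share an owner.
CREATE TABLE dbo.MyTable (
 id int
)
GO

CREATE PROCEDURE dbo.GetMyTable
AS
SELECT * FROM dbo.MyTable
GO

-- EXECUTE is enough; no explicit SELECT on dbo.MyTable is required.
GRANT EXECUTE ON dbo.GetMyTable TO [domain\username]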

SQL Server : Database-Level Security

Database Users

Users are based on logins (AD user, AD group or SQL Server login).

CREATE USER [domain\username] FOR LOGIN [domain\username] WITH DEFAULT_SCHEMA=[dbo]
GO


To run as a user (switch context), do the following.

EXECUTE AS USER = '[domain\username]'

-- Do something.

-- Switch back to normal context.
REVERT

Grant privileges to a user.

GRANT EXECUTE TO [domain\username]
GRANT EXECUTE ON object-name TO [domain\username]
GRANT SELECT, INSERT, UPDATE, DELETE ON schema.table-name TO [domain\username]

It’s better to use a database role, so the same privileges can easily be granted to others.

CREATE ROLE [MyRole]
GRANT SELECT, INSERT, UPDATE, DELETE ON schema.table-name TO [MyRole]

ALTER ROLE [MyRole]
ADD MEMBER [domain\username]
GO

Schemas

A schema is a collection of objects. Here are some examples of creating a schema.

CREATE SCHEMA [MySchema] AUTHORIZATION [domain\username]
CREATE SCHEMA [MySchema2]

Create an object in the schema.

CREATE TABLE MySchema.MyTable (
 id int
)

This gives a logical group of objects, which is good for organisation and security. Rather than granting permissions on each object, you can grant them at the schema level.

GRANT SELECT, INSERT, UPDATE, DELETE ON schema::MySchema TO [MyRole]

Without a scope qualifier, the default scope is “object”. You can alter this for a statement using a scope qualifier like “schema::”.


SQL Server : Server-Level Security

When created via the GUI, you might find it hard to distinguish between a login and a user.

  • Logins : Allow connections to the server and can have server permissions assigned to them.
  • Users  : Determine which databases regular users have access to, as well as database and object level privileges granted to them.

Logins

Here are some examples of creating logins.

-- AD User
CREATE LOGIN [domain\username] FROM WINDOWS

-- AD Group
CREATE LOGIN [domain\groupname] FROM WINDOWS

-- SQL Server user.
CREATE LOGIN MyUser WITH PASSWORD=N'MyPassword1' MUST_CHANGE,
 DEFAULT_DATABASE=MyDatabase,
 CHECK_EXPIRATION=ON,
 CHECK_POLICY=ON


Roles

Server Roles:

The sysadmin role is a super user. Be careful who you grant it to.

Check the components of a server role using the following command.

sp_srvrolepermission 'public'

Add a person to a server role.

ALTER SERVER ROLE sysadmin
ADD MEMBER [domain\username]
GO

Create a new server role and assign it to a user.

CREATE SERVER ROLE [MyRole]

ALTER SERVER ROLE [MyRole]
ADD MEMBER [domain\username]
GO


Granting Server-Level Permissions

View available server properties.

  • Right-click on the database server and select the “Properties” menu option.
  • Click on the permissions tab.
  • The list at the bottom shows the available server permissions.

These permissions can be granted to roles or logins.

  • Click on the Role or Login.
  • Check the permissions required.
  • Click the OK button.

The grant option allows the grantee to grant the permission to others.

Here’s how you would do it from T-SQL for the “VIEW SERVER STATE” property.

USE master
GO

GRANT VIEW SERVER STATE TO [MyRole]
GO

GRANT VIEW SERVER STATE TO [domain\username]
GO

GRANT VIEW SERVER STATE TO [MyRole] WITH GRANT OPTION AS [sa]
GO
