Patch SQL Server Always On Cluster

Assumptions

This post discusses the procedure for patching a two-node AlwaysOn cluster. The latest SQL Server updates can be found here.

It is assumed Node1 is initially the primary and Node2 is the secondary.

The same procedure can be used for Windows Update, Service Packs, Cumulative Updates and GDRs.

Prerequisite Checks

  • Perform a practice failover and failback to make sure failover works as expected.
    • Go to Node1.
    • Expand “AlwaysOn High Availability > Availability Groups”.
    • Right-click on the Availability Group and pick the “Failover” menu option.
    • Work through the wizard until the failover is complete.
    • Repeat on Node2 (the new primary) to fail back to Node1 (the original primary).
  • Check you have good backups.
  • Run DBCC CHECKDB on every database on the primary and check for errors (see the sketch below).
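
The integrity check can be run from T-SQL. A minimal sketch, assuming a database called "MyDatabase"; repeat it for each database on the primary.

-- NO_INFOMSGS suppresses the informational output, so anything returned is a real problem.
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;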

Patch Procedure

Node1 : Prevent automatic failover.

  • Expand “AlwaysOn High Availability > Availability Groups”
  • Right click on the Availability Group and pick the “Properties” menu option.
  • Set the failover mode for both nodes to “Manual”.
  • Click the “OK” button.
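
If you prefer T-SQL, the same change can be made from the primary with something like the following. The availability group name "MyAG" and the replica names are assumptions; substitute your own. The same commands with FAILOVER_MODE = AUTOMATIC reverse the change at the end of the procedure.

-- Run on the primary (Node1).
ALTER AVAILABILITY GROUP [MyAG]
  MODIFY REPLICA ON 'Node1' WITH (FAILOVER_MODE = MANUAL);

ALTER AVAILABILITY GROUP [MyAG]
  MODIFY REPLICA ON 'Node2' WITH (FAILOVER_MODE = MANUAL);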

Node2 :

  • Install the latest Service Pack and the latest Cumulative Update (CU) or GDR.
  • Reboot as required.

Node1 :

  • Check synchronization is OK. The synchronization state is listed against each database in the availability group (the query below shows one way to check it).
  • Failover to Node2, making Node1 the secondary.
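
A minimal T-SQL sketch of both steps, again assuming the availability group is called "MyAG". Note the failover command must be run on the secondary that is about to become the primary, in this case Node2.

-- Check the synchronization state of each database (run on the primary).
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       drs.synchronization_state_desc
FROM   sys.dm_hadr_database_replica_states drs
       JOIN sys.availability_replicas ar ON drs.replica_id = ar.replica_id
       JOIN sys.availability_groups ag ON drs.group_id = ag.group_id;

-- Perform the manual failover (run on the secondary).
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;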

Node1 :

  • Install the latest Service Pack and the latest Cumulative Update (CU) or GDR.
  • Reboot as required.

Node2 :

  • Check synchronization is OK. The synchronization state is listed against each database in the availability group.
  • Failover to Node1, making Node2 the secondary.

Node1 : Enable automatic failover.

  • Expand “AlwaysOn High Availability > Availability Groups”
  • Right click on the Availability Group and pick the “Properties” menu option.
  • Set the failover mode for both nodes to “Automatic”.
  • Click the “OK” button.

The procedure is now complete.

SQL Server : Maintenance Plans

Maintenance plans are probably the easiest way to configure database and transaction log backups, as well as schedule regular maintenance tasks. Maintenance plans can have multiple steps with different schedules, so you might decide to have one all-encompassing plan, or several distinct plans.

Basic Plans

Each system will need individual consideration, but as a starting point I might use the following plans.

System databases – full backups : As the name suggests, this is a nightly full backup of the system databases. It should include the “Back Up Database” step, and a “Maintenance Cleanup Task” step to remove old backups (“bak” files older than X days).

System databases – weekend processing : A single job to do some of the following, depending on the needs of the installation.

  • Check Database Integrity
  • Shrink Database (Never schedule this!)
  • Rebuild Indexes (or Reorganize Indexes)
  • Update Statistics
  • Clean Up History

User databases – full backups : Nightly full backups of all user databases. It should include the “Back Up Database (Full)” step, and a “Maintenance Cleanup Task” step to remove old backups (“bak” files older than X days).

User databases – TLOG backups : Backups of the transaction logs for the user databases. The schedule will depend on the activity of the databases and on any requirements, like log shipping, that determine the maximum acceptable lag time. Every 30 or 60 minutes is probably fine for most databases. It should include the “Back Up Database (Transaction Log)” step, and a “Maintenance Cleanup Task” step to remove old backups (“trn” files older than X days).

User databases – weekend processing : The same as the job for the system databases, but you are likely to have more variety here, unless all the databases on the server have a similar requirement.
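
Under the covers, the backup steps issue regular BACKUP commands. A minimal sketch of the equivalent T-SQL, with illustrative database names and paths; maintenance plans generate uniquely named files for each run.

-- Nightly full backup.
BACKUP DATABASE MyDatabase
  TO DISK = 'C:\Backup\MyDatabase_20230101.bak';

-- Transaction log backup, typically every 30-60 minutes.
BACKUP LOG MyDatabase
  TO DISK = 'C:\Backup\MyDatabase_20230101_0030.trn';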

Considerations

As I said previously, these are a basic starting point, not a cast-in-stone rule. Here are some things to consider that may affect which maintenance plans you need.

  • One size does not fit all. The plans listed above assume all user databases have a similar requirement. In practice this may not be the case. There is nothing to stop you having database-specific plans or a mix-and-match approach.
  • If user databases are not in FULL recovery mode, you don’t need to bother with the transaction log backups, but I would question why your databases are not in full recovery mode.
  • Some people like to put the model and msdb databases into FULL recovery mode, some are happy with SIMPLE mode. If you do use FULL mode, you may want a transaction log backup of those also.
  • If your user databases become really big, you might find nightly full backups impractical and have to switch to differentials (see the sketch after this list). There is nothing to stop you doing full backups of some user databases and differentials of others. It totally depends on the databases you have on a system.
  • The frequency of transaction log backups is related to the amount of data loss you are willing to accept. Also, if you are using log shipping, you may wish to back up more frequently to reduce potential data loss.
  • Some systems will get heavily fragmented indexes and need index rebuilds really frequently, while others are not affected. Doing them once a week as a starting point is probably OK, but as always you need to monitor and adjust as necessary.
  • A rule of thumb is almost always a lie, but as a rule of thumb, never shrink a database and definitely don’t schedule it! Check out the links on this post to see why.
  • Each database could have different requirements for how long you keep the backups. Adjust the plans accordingly.
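
For the differential approach mentioned above, a minimal sketch, again with illustrative names and paths. A differential backup contains everything changed since the last full backup.

-- Weekly full backup.
BACKUP DATABASE MyDatabase
  TO DISK = 'C:\Backup\MyDatabase_Full.bak';

-- Nightly differential backup.
BACKUP DATABASE MyDatabase
  TO DISK = 'C:\Backup\MyDatabase_Diff.bak'
  WITH DIFFERENTIAL;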

Maintenance Plans Don’t Show

If the maintenance plans won’t display, complaining about “Agent XPs” not being enabled, run the following from Management Studio.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE WITH OVERRIDE;
GO

SQL Server : PowerShell Introduction

This article will give you a feel for what PowerShell can do. There are lots of examples on the net, so you can probably perform many tasks by taking someone else’s examples and modifying them.

PowerShell has lots of commands (cmdlets). Open up PowerShell and type the following at the prompt to get a list of them.

get-command

Try a few of the “get-” commands to display information.

get-service
get-process

You can pipe commands together. In this example, we spool the list of processes out to a file.

get-process | out-file c:\test.txt

You can see the methods and properties of the objects returned by a command using the get-member (gm) command.

get-service | get-member
get-service | gm

You can alter the output using the format-table (ft) command, picking from the properties displayed by “get-member”. Use the “-auto” option to make the output more compact.

get-service | format-table Name, Status, DisplayName, CanStop -auto
get-service | ft Name, Status, DisplayName, CanStop -auto

You can limit the output using where-object (?).

get-service | where-object{$_.Name -eq "Spooler"}
get-service | ?{$_.Name -eq "Spooler"}

You can pipe many commands together to get the result you want. Here we list some specific information about a service called “Spooler”.

get-service | where-object{$_.Name -eq "Spooler"} | format-table Name, Status, DisplayName, CanStop -auto

When you know how to identify an object, you can call one of its methods. For example, find the service called “Spooler” and call the “Stop” and “Start” methods using the foreach-object (%) command.

# Check status
get-service | where-object{$_.Name -eq "Spooler"} | format-table Name, Status, DisplayName, CanStop -auto 
# Stop service
get-service | where-object{$_.Name -eq "Spooler"} | Foreach-object{$_.Stop()}
# Check status
get-service | where-object{$_.Name -eq "Spooler"} | format-table Name, Status, DisplayName, CanStop -auto 
# Start service
get-service | where-object{$_.Name -eq "Spooler"} | %{$_.Start()}
# Check status
get-service | where-object{$_.Name -eq "Spooler"} | format-table Name, Status, DisplayName, CanStop -auto

The get-psdrive command is really useful. You can list all the drives.

get-psdrive

Notice the registry is listed as a drive too. You can check the contents of the registry from PowerShell, using UNIX or DOS style commands for navigation.

dir hkcu:\software\oracle\virtualbox
ls hkcu:\software\oracle\virtualbox

You can interact directly with SQL Server from PowerShell if you import the relevant module.

import-module sqlps

ls sqlserver:\sql\myserver\default

You can use PowerShell variables so you don’t have to keep repeating paths etc.

$db = "sqlserver:\sql\its-n-sfarm-01\default"
cd $db\databases
ls

If you right-click on the database in Management Studio, you can pick the “Start PowerShell” menu option and you will be taken directly to that database in PowerShell. We can check the tables in the database.

cd tables
ls

As we’ve seen before, we can alter the output of the ls/dir command.

ls | gm

ls | ft name, rowcount -auto

We can get the scripts for the tables and push them to a file.

ls | %{$_.script()} | out-file c:\tables-script.sql

SQL Server : High Availability (HA) and Disaster Recovery (DR) Overview

What Are High Availability (HA) and Disaster Recovery (DR)?

The line between High Availability (HA) and Disaster Recovery (DR) can be a little blurred. Typically you might see them defined as follows.

  • High Availability (HA) : Keeping the system available at all times. This could be with zero downtime, or near zero downtime.
  • Disaster Recovery (DR) : Getting your system back online in the event of a catastrophic failure.

The reason it becomes blurred is because some of the tools available for HA may reduce the likelihood of a DR scenario occurring, and some of the DR tools may be helpful for HA.

The rest of this article will describe some of the features that may be useful for HA and DR. Remember, your worst case scenario is rebuilding everything using your OS and database backups.

Log Shipping

This uses the backup and recovery system to keep a copy of the database relatively up to date. Transaction log backups are shipped to a remote server, where they are applied to the remote database, keeping it up to date. There is some lag involved, as the logs have to be backed up, shipped and applied, so if your log backups happen once per hour, you could be an hour out of date on the remote site in a failure scenario.

There is no auto-failover, so you are responsible for making sure the remote box is available before having users connect.

Pointing clients to the new box can be problematic if they make direct references to it. The solution could be as simple as defining a DNS alias for the server, which you can redirect at will, or using a load balancer in front of the server.

The target database is unavailable while it is restoring in NORECOVERY mode. You could use STANDBY mode instead, making the database accessible read-only, but users have to be disconnected each time a log is applied.

Replication

Once set up, replication has less latency than log shipping as it uses internal mechanisms to replicate the data between servers. Unlike log shipping, the targets are usable, but you are still responsible for getting the clients connected to the correct database.

Mirroring

Mirroring has been deprecated in favour of AlwaysOn in later releases.

Mirroring relies on internal mechanisms to ship log information between servers, allowing zero latency and zero data loss. When a change is applied on the primary server, it is also applied to the failover partner, so there is no data loss. This last point is configurable. The mirroring can be synchronous, so you know the transaction has been mirrored before moving on. If it is configured as asynchronous, the performance of the primary will be better, but there is potential loss of data.

Client connections are managed automatically. The connection details include an automatic failover partner, so if a client can’t connect to the primary server, it automatically tries the failover partner.

It’s worth remembering there can only be one failover partner and that partner is unusable until you failover.

Always On and Availability Groups

Always On is the evolution of mirroring. There are some similarities, in that it uses internal mechanisms to keep the databases in sync, can provide zero latency and zero data loss, and has automatic failover and reconnection of clients.

The big differences are:

  • The partner (secondary) servers are usable for read-only access. That can be useful for reporting.
  • You can have multiple partner servers.
  • You can group databases into an availability group, so groups of databases will failover simultaneously.

Clustering

Clustering is where two or more servers use the same disks, but only one server can use the disks at any one time. In the event of a failure of the primary server, the second server will spin up and the database will be running again, because it is using the same disks as the failed server. It will effectively look like the primary server has been power cycled. This will protect you from server failures, but any problems with the shared disks will be a problem for both servers. So clustering is protecting the hardware, not the database.

Note. If you are running in a virtual environment, the virtualisation layer is probably protecting you at the server (VM) level, so you may be able to ignore clustering altogether.

You may well find AlwaysOn is more useful from a database perspective than clustering.

SQL Server : Exporting and Importing Data

Import and Export Wizard

In SQL Server Management Studio the SQL Server Integration Services (SSIS) functionality is exposed as the “Import and Export Wizard”. Right-click on the database and select the “Tasks > Import Data” or “Tasks > Export Data” option. They both result in the same wizard, but depending on which you pick, different fields may be defaulted.

To transfer some data do the following.

  • Right-click on the database.
  • Select the “Tasks > Export Data” menu option.
  • Click next on the first page, so you move to the “Choose a Data Source” page.
  • Choose a source. This could be another database using any available provider, including databases on the local or remote machines, or even different database engines. It could even be a text file. In this case we will pick another database on the same server.
  • Make sure the following settings are correct.
    • Data Source : SQL Server Native Client 11.0
    • Server name : Your server name.
    • Authentication : Use Windows authentication, or set the SQL Server authentication credentials.
    • Database : Select the database you want to export from.
  • Click the “Next” button.
  • Choose a destination. This could be another database using any available provider, including databases on the local or remote machines, or even different database engines. It could even be a text file. In this case we will pick another database on the same server.
    • Data Source : SQL Server Native Client 11.0
    • Server name : Your server name.
    • Authentication : Use Windows authentication, or set the SQL Server authentication credentials.
    • Database : Select the database you want to import into.
  • Click the “Next” button.
  • Decide if you want to copy data from multiple tables or views, or use a single query. The latter allows you to write complex queries, including joins of multiple tables, but it will result in a single object in the destination database. Here we will pick “Copy data from one or more tables or views”.
  • Click the “Next” button.
  • Choose the tables you want to copy. You can alter the destination schema using the “Edit Mappings” button, which also gives you some options for how to deal with existing objects. When you’ve completed your table selection and mappings, click the “Next” button.
  • Click the “Next” button to see the summary information.
  • If you are happy with the summary, click the “Finish” button.
  • Wait for the transfer to complete. You can see a summary of the operations and save a report if required.
  • Click the “Close” button.

Copy Database

You can copy a whole database using a wizard too.

  • Right-click on the database.
  • Select the “Tasks > Copy Database” menu option.
  • Select the source server and authentication method, then click the “Next” button.
  • Select the destination servers and authentication method, then click the “Next” button.
  • Decide on the transfer method. The detach and attach method is faster, but takes the source database offline. The SQL Management Object method is slower, but allows the database to remain online. Make your choice and click the “Next” button.
  • Select the database(s) and decide if this is a copy or move. Click the “Next” button.
  • Assign a destination database name, the location of the files and decide how to handle an existing database of the same name. Click the “Next” button.
  • Enter a package name, or accept the default one. Click the “Next” button.
  • Run immediately, or schedule for a later time. Click the “Next” button.
  • If you are happy with the summary information, click the “Finish” button.
  • If you chose to run immediately, wait for the operation to complete.
  • You can view or save a report. Once you are finished, click the “Close” button.

Bulk Insert

You can load data from a file using the SSIS wizard described above. You can also load data from a text file using the BULK INSERT command, as follows.

BULK INSERT MyTable
  FROM 'C:\MyData.txt'
  WITH (
          FIELDTERMINATOR = '|'
       )

Bulk Copy Program (BCP)

This utility can copy data from the database into a file, or from files into the database. You can get help using the “bcp /?” command from the command prompt or PowerShell prompt.

PS C:\> bcp /?
usage: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\bcp.exe {dbtable | query} {in | out | queryout | format} datafile
 [-m maxerrors] [-f formatfile] [-e errfile]
 [-F firstrow] [-L lastrow] [-b batchsize]
 [-n native type] [-c character type] [-w wide character type]
 [-N keep non-text native] [-V file format version] [-q quoted identifier]
 [-C code page specifier] [-t field terminator] [-r row terminator]
 [-i inputfile] [-o outfile] [-a packetsize]
 [-S server name] [-U username] [-P password]
 [-T trusted connection] [-v version] [-R regional enable]
 [-k keep null values] [-E keep identity values]
 [-h "load hints"] [-x generate xml format file]
 [-d database name]
PS C:\>

You can copy data out of the database using the following type of command, which uses a trusted connection (-T) to create a character file (-c) with a “|” delimiter (-t"|").

C:\>bcp MyDatabase.dbo.MyTable out c:\MyTable.txt -T -c -t"|"

You can import the same file with the following command. The parameters are similar, but we include a batch size (-b 10000).

C:\>bcp MyDatabase.dbo.MyTableAgain in c:\MyTable.txt -T -c -t"|" -b 10000

SQL Server : Database Mail

Enable Database Mail

You can check if Database Mail is enabled using the following command. The setting should be config_value=1 and run_value=1.

sp_configure 'Database Mail XPs'

If Database Mail is not enabled, you can enable it with the following commands.

sp_configure 'Database Mail XPs', 1
reconfigure

Profiles and Accounts

This can be done using the GUI.

  • Right-click on “Management > Database Mail”.
  • Select the “Configure Database Mail” menu option and click the “Next” button on the first screen.
  • Accept the first option “Set up Database Mail by performing the following tasks:” by clicking the “Next” button.
  • On the “New Profile” screen enter a suitable profile name, like “DBA”, and description.
  • Click the “Add” button, then the “New Account” button on the subsequent dialog.
  • Enter a suitable account name, like “DBA”, and SMTP details.
  • Click the “OK” button.
  • Back on the “New Profile” screen, click the “Next” button.
  • If you want to set a default profile, do it here, otherwise, select the profile required and click the “Next” button.
  • If you want to alter the system parameters, do it here, then click the “Next” button.
  • If you are happy with the settings, click the “Finish” button.

You can edit your settings by working through the screens from the “Configure Database Mail” menu option.

Testing Database Mail

You can test the settings as follows.

  • Right-click on “Management > Database Mail”.
  • Select the “Send Test E-Mail” menu option.
  • Pick the profile and enter the message details.
  • Click the “Send Test E-Mail” button.
  • When you are done, click the “Close” button.
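
You can also send a test message from T-SQL using the sp_send_dbmail procedure. A minimal sketch, assuming the “DBA” profile created above and an illustrative recipient address.

EXEC msdb.dbo.sp_send_dbmail
  @profile_name = 'DBA',
  @recipients   = 'dba@example.com',
  @subject      = 'Database Mail Test',
  @body         = 'This is a test message.';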

SQL Server : SQL Server Agent (Jobs)

Create New Job

Creating a new job should be self-explanatory. Here we will use the example of a job to recycle the error log. The typical process to create a new job is as follows.

  • Expand “SQL Server Agent”.
  • Right-click on “Jobs”.
  • Select the “New Job…” menu option.
  • Set “Name” to “RecycleErrorLog” and “Owner” to a Windows or SQL Server user, for example “sa”. The owner will determine what functionality is possible.
  • Assign a category if you like to use them. Defining your own categories is described below.
  • Click on the “Steps” page and click the “New” button.
  • Set the “Step name” to “Step_1” and set the Command to “sp_cycle_errorlog”. If you want any advanced actions, click the “Advanced” page and change the settings.
  • Click the “OK” button.
  • Click on the “Schedules” page and click the “New” button.
  • Set the Name to “DailyMidnight”.
  • Change “Occurs” to “Daily” and leave the default “Occurs once at” as “00:00:00”.
  • Click the “OK” button.
  • If you need any notifications, use the “Notifications” page to configure them.
  • Click the “OK” button on the “New Job” dialog.
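
The same job can be created from T-SQL using the agent procedures in the msdb database. A minimal sketch, matching the names used above.

USE msdb
GO

EXEC sp_add_job @job_name = N'RecycleErrorLog', @owner_login_name = N'sa';

EXEC sp_add_jobstep @job_name = N'RecycleErrorLog', @step_name = N'Step_1',
  @subsystem = N'TSQL', @command = N'EXEC sp_cycle_errorlog;';

EXEC sp_add_schedule @schedule_name = N'DailyMidnight',
  @freq_type = 4, @freq_interval = 1, @active_start_time = 0;

EXEC sp_attach_schedule @job_name = N'RecycleErrorLog', @schedule_name = N'DailyMidnight';

-- Target the local server, or the job will never run.
EXEC sp_add_jobserver @job_name = N'RecycleErrorLog';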

Monitor Jobs

The “Job Activity Monitor” allows you to track the status of previous and current job runs.

  • Expand “SQL Server Agent > Jobs”.
  • Right-click on “Job Activity Monitor”.
  • Select the “View Job Activity” menu option.
  • Click the “Refresh” button if you are tracking running jobs.

Schedules

SQL Server allows you to create defined schedules that can be used by several jobs.

  • Expand “SQL Server Agent”.
  • Right-click on “Jobs”.
  • Select the “Manage Schedules” menu option.
  • The screen lists all existing schedules, along with the jobs they are linked to. Clicking on the “Job in schedule” link gives you a list of the jobs for that schedule.
  • To create a new schedule, click the “New” button. The subsequent screen gives the standard schedule definition screen. Make sure you give it a reasonable name.

Categories

You can associate jobs with categories, and even define your own categories. This doesn’t alter their functionality or security. It’s just a way to group them.

  • Expand “SQL Server Agent”.
  • Right-click on “Jobs”.
  • Select the “Manage Categories” menu option.
  • You can “Add” new categories, or highlight an existing category and click “View Jobs” to see the jobs assigned to that category.

Operators

Operators allow you to define individuals, or groups of individuals using mailing lists, who require notifications.

  • Expand “SQL Server Agent”.
  • Right-click on “Operators”.
  • Select the “New Operator” menu option.
  • Enter the “Name” of the operator, along with their contact details, typically the email address.
  • Make sure the operator is enabled.
  • Click the “OK” button.
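
Operators can also be created from T-SQL. A minimal sketch, with an illustrative operator name and email address.

EXEC msdb.dbo.sp_add_operator
  @name          = N'DBA Team',
  @enabled       = 1,
  @email_address = N'dba@example.com';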

With the operator in place, you can assign the operator to the notification for a job.

  • Right-click on the job of interest.
  • Select the “Properties” menu option.
  • Click on the “Notifications” page.
  • Check the “E-mail” option and select the operator you just created. Select the condition associated with the notification, for example, “When the job fails”.
  • Click the “OK” button.

You need to make sure the agent is configured to send emails. This assumes Database Mail is already configured.

  • Right-click on the “SQL Server Agent”.
  • Select the “Properties” menu option.
  • Click the “Alert System” page.
  • Check the “Enable mail profile” option, set “Mail system” to “Database Mail” and select the “Mail profile” you have defined for this system.
  • There are additional mail settings you can alter, but they are not necessary.
  • Click the “OK” button.

Agent Properties

You will probably leave the defaults for most systems.

  • Right-click on the “SQL Server Agent”.
  • Select the “Properties” menu option.
  • On the “General” page, make sure the “Auto restart…” options are checked.
  • On the “History” page, you may want to check the “Remove agent history” option and specify an “older than”.
  • Click “OK” when you are finished.

SQL Server : Query Tuning (Basic Approach)

Basic Approach

Here are some things to think about.

  1. Identify Slow SQL : The quickest way to identify slow SQL is to use the SQL Server Profiler. You should focus on the queries having the biggest impact on the process you are investigating. For example, saving 1 second on a query that runs once a day is less important than saving 0.25 of a second on a query that runs 10,000 times a day.
  2. Tune One Query at a Time : Focus on a single query. Make adjustments and test the performance again. It’s possible that an improvement in that query will have a positive effect on others in the system, as there may be a general reduction in blocking. If possible, once one query is tuned, use the profiler again.
  3. Adding/Removing Indexes : You will often want to add or remove indexes to fix a problem. Although this may have a positive effect on your query, you have to consider the system as a whole. Altering indexes could result in a detrimental effect on other queries. You have to consider changes holistically.
  4. Change Over Time : As your data changes, your query performance may change. Don’t be surprised if execution plans change over time if your data changes drastically. That is why people usually try to avoid hints.
  5. Diminishing Returns : Be aware that something will always be “the slowest query”, but there is a point where spending any more time will be a waste of effort. You have to understand where tuning becomes “Compulsive Tuning Disorder”, a great phrase coined by Gaja Krishna Vaidyanatha.
  6. Communicate Benefits : People will make the same mistakes again and again. When you have identified an issue and fixed it, make sure other people understand the problem and the impact it was having. It might stop them making the same mistakes again.

Query Tuning in Batch Mode

You can run two or more queries at a time in the query window by highlighting them and clicking the execute button. This is called running them in batch mode. If you turn on the “Include Actual Execution Plan” option, explained here, you can compare the execution plans of multiple statements. This is how you can tune queries in batch mode. There are several approaches you can use.

  • Compare slightly different SQL statements against the same tables to see which is the most efficient.
  • Compare similar SQL statements against two or more copies of your tables with differing indexes, allowing you to compare the impact of the different indexing strategies.
  • Use the same tables, but force each query to use different indexes using hints, so you can compare the results. Try to avoid hints wherever possible, but they can be useful for this type of comparison (see the sketch below).
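
A minimal sketch of the hint-based approach, reusing the hypothetical dbo.MyTable and IX_MyTable_Name index used elsewhere in these articles. Highlight both statements, execute them with “Include Actual Execution Plan” enabled, then compare the cost of each query relative to the batch.

SELECT * FROM dbo.MyTable WHERE Name = 'Banana';

SELECT * FROM dbo.MyTable WITH (INDEX(IX_MyTable_Name)) WHERE Name = 'Banana';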

SQL Server : Execution Plans and Indexes

Scan vs. Seek

A scan involves looking at all the data in a table, what in Oracle we would call a Full Table Scan. A seek is where you use an index to access the data.

The usual rules apply. If you are expecting to visit most of the rows in a table, a scan is more efficient than a seek. If you are expecting to interact with a small number of rows a seek is more efficient than a scan.

Table Indexes

You can check if there are any indexes on a table using the following command.

USE MyDatabase
GO

sp_helpindex 'MyTable'

If there aren’t any indexes, you are not going to be able to use them.

Displaying Execution Plans

The toolbar associated with a query tab contains a number of buttons, which do the following.

  • Display Estimated Execution Plan : As the name suggests, this button displays the estimated execution plan for the current statement. The estimate will be based on the current statistics, so if the statistics are bad, the estimate may not match the actual execution plan.
  • Query Options : Allows you to alter the way the query is processed. There are lots of different options available, so a discussion of them is beyond the scope of this article.
  • IntelliSense Enabled : This is not a performance feature. It’s just used to control whether code completion and tooltips are shown within the query window.
  • Include Actual Execution Plan : When highlighted, running a query will result in the display of an “Execution Plan” tab containing the actual execution plan used by the query. The text at the top of this tab may also include a missing index report.
  • Include Client Statistics : When highlighted, running a query will produce an additional tab displaying the client statistics associated with the query.

The suggestions in the missing index report do not include any reference to existing indexes, so it may be better to add an extra column to an existing index, or alter the column order, rather than creating a new index. Regardless, it can provide helpful suggestions! To use the missing index report, simply right-click on it and select the “Missing Index Details” menu option. This will produce a commented-out script to create the index.

Bookmark Lookups (RID or Key)

If all the data required by the query is not present in the index, the index seek is used to find the relevant rows in the table, then each row is looked up to get the remaining data. This type of lookup is called a bookmark lookup, which can be based on a “RID Lookup” (row ID lookup) or a “Key Lookup”.

You may be able to reduce the number of bookmark lookups by adding columns to the index (a covering index), or including their data in the leaf blocks if they don’t form part of the lookup key. The following example creates a non-clustered index on the MyTable.Name column, but includes the Description and CreatedDate values in the leaf blocks, so you don’t need a bookmark lookup to retrieve them.

CREATE NONCLUSTERED INDEX IX_MyTable_Name  
ON dbo.MyTable(Name)  
INCLUDE (Description, CreatedDate);  
GO

You need to be careful about including too many columns in the index, as this will increase the overhead of index maintenance, as well as greatly increasing its size.

SET STATISTICS IO

You may find it useful to issue the following statement before issuing a query.

SET STATISTICS IO ON

This will display statistics in the “Messages” tab.

SQL Server : Index Fragmentation

Index Fragmentation

Fragmentation can happen within a page (internal) or between pages (external). SQL Server DBAs seem obsessed with rebuilding indexes.

You will probably detect and fix index fragmentation as part of your maintenance plan, but this article focuses on manually identifying and fixing index fragmentation.

Detecting Index Fragmentation

Note. Small indexes will often show large amounts of fragmentation, which might actually get worse after an index rebuild. Indexes with a page_count of less than ~1000 pages are unlikely to suffer from performance issues caused by fragmentation, and Microsoft recommend avoiding rebuilds of indexes smaller than 1000 pages (here). Keep this in mind when looking at the information below.

The following query gives you the statistics for all indexes in every database. One of the columns produced is called avg_fragmentation_in_percent, which gives the information we are looking for.

SELECT *
FROM   sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL)
ORDER BY avg_fragmentation_in_percent DESC

You can narrow this down to a specific database using the first parameter. We have also returned the object name associated with the object_id.

USE MyDatabase
GO

SELECT object_name(object_id) AS object_name, *
FROM   sys.dm_db_index_physical_stats(DB_ID('MyDatabase'), NULL, NULL, NULL, NULL)
ORDER BY avg_fragmentation_in_percent DESC

We can now see which indexes are suffering from fragmentation.

Fixing Index Fragmentation

The following commands can be used to fix index fragmentation for all indexes on a specified table. A rebuild removes both internal and external fragmentation.

USE MyDatabase
GO

ALTER INDEX ALL ON dbo.MyTable
  REBUILD WITH (ONLINE = ON)

The WITH option of “ONLINE = ON” means transactions can continue against the object while the processing happens. Without it, the table would be locked for the duration of the operation. Note that online index rebuilds are an Enterprise Edition feature.

A rebuild could result in the index being bigger, as the index is rebuilt using the fill factor. If the fill factor is the default (100%), no space is left for future changes, which could result in a page split on the next insert. If you specify a fill factor, leaving some space for future updates, you can reduce this fragmentation, but the index will be bigger. The necessity for a non-default fill factor depends on the nature of the work done against the table. The following example does a rebuild, specifying an 80% fill factor.

USE MyDatabase
GO

ALTER INDEX ALL ON dbo.MyTable
  REBUILD WITH (ONLINE = ON, FILLFACTOR = 80)

An alternative is to reorganize the index, which shuffles the pages, fixing only external fragmentation. A reorganize is always an online operation, so there is no ONLINE option to specify.

USE MyDatabase
GO

ALTER INDEX ALL ON dbo.MyTable
  REORGANIZE
