Friday, November 16, 2007

Import and Export SQL 2005 Maintenance plans

The way SQL 2005 produces maintenance plans is very different from SQL 2000. Instead of using the extended stored procedure xp_sqlmaint to execute the plan and the system tables of the msdb database to store it, SQL 2005 uses SSIS (SQL Server Integration Services) to create and store the maintenance plans.

To export an existing maintenance plan from a SQL Server 2005 instance

  • Click connect in the Object Explorer
  • Select "Integration Services"
  • Enter credentials for the server
  • Open the folder: "Stored Packages" -> "MSDB" -> "Maintenance Plans"
  • Right click the package you want to export
  • Select Export package
  • If you want to export the package to another server or edit it in BIDS (Business Intelligence Development Studio), select "File system" in the Package Placement dropdown menu.

To Import or copy an existing maintenance plan from one server to another

  • On the server you want the maintenance plan copied TO:
  • Click connect in the Object Explorer
  • Select "Integration Services"
  • Enter credentials for the server
  • Open the folder: "Stored Packages" -> "MSDB"
  • Right Click the folder "Maintenance Plans" and select "Import Package"
  • In the dropdown "Package Placement" select SQL Server
  • Enter the name of the SQL server where the source maintenance package exists.
  • Click the … beside the Package Placement box and select the package you want to import from the Maintenance Plans folder.
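The same copy can also be scripted with the dtutil command-line utility that ships with SQL Server 2005. A sketch – the server names and the plan name "DailyBackup" are placeholders:

```
REM Copy a maintenance plan package stored in msdb from one server to another
dtutil /SQL "Maintenance Plans\DailyBackup" /SOURCESERVER SourceServer /COPY SQL;"Maintenance Plans\DailyBackup" /DESTSERVER DestServer

REM Or export the package to the file system for editing in BIDS
dtutil /SQL "Maintenance Plans\DailyBackup" /SOURCESERVER SourceServer /COPY FILE;"C:\Packages\DailyBackup.dtsx"
```

This is handy when you have many plans to move and don't want to click through the Import/Export dialogs for each one.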

Friday, November 9, 2007

Moving system databases

When installing SQL Server 2000/2005/2008 many often forget where the system databases go on the filesystem. By default they are placed in C:\Program Files\Microsoft SQL Server\... – which we probably all can agree is not the optimal placement.

Note that moving the system databases requires SQL Server downtime. It doesn't take long though (depending on the size, of course).


Please test this recipe on a test server before trying it in production – failing at any step can invalidate or totally stop your SQL Server installation.


To move the system databases do as follows:

  1. Shut down the SQL Server services – all of them
  2. Start a command prompt (and keep it open)
  3. Start the SQL Server service in a nonstandard trace mode that skips recovery of all databases except master:
     NET START MSSQLSERVER /f /T3608 (or NET START MSSQL$InstanceName /f /T3608 if you have a named instance)
  4. Now modify the logical file placement by script:

Adapt the new file locations to match YOUR filesystem

ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'H:\SQLDataFiles\model.mdf')
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'G:\SQLLogFiles\modellog.ldf')



ALTER DATABASE master MODIFY FILE (NAME = master , FILENAME = 'H:\SQLDataFiles\master.mdf')
ALTER DATABASE master MODIFY FILE (NAME = mastlog, FILENAME = 'G:\SQLLogFiles\Mastlog.ldf')
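If you also need to move msdb and tempdb, the same pattern applies. A sketch – adapt the paths, and note that the logical file names below are the defaults (verify yours with sp_helpfile). tempdb is a special case: its files are recreated in the new location at the next service start, so no physical file copy is needed for it.

```sql
-- msdb (default logical names MSDBData / MSDBLog)
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'H:\SQLDataFiles\MSDBData.mdf')
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog,  FILENAME = 'G:\SQLLogFiles\MSDBLog.ldf')

-- tempdb (default logical names tempdev / templog)
-- the files are recreated in the new location when the service restarts
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'H:\SQLDataFiles\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'G:\SQLLogFiles\templog.ldf')
```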

As of SQL Server 2008 the mssqlsystemresource database cannot be moved! But in SQL Server 2005 its location must be the same as the master database.

For SQL 2005 use:

ALTER DATABASE mssqlsystemresource MODIFY FILE (NAME=data, FILENAME= 'H:\SQLDataFiles\mssqlsystemresource.mdf')
ALTER DATABASE mssqlsystemresource MODIFY FILE (NAME=log, FILENAME= 'H:\SQLDataFiles\mssqlsystemresource.ldf')

The MSSQLSYSTEMRESOURCE database file and log file must remain on the same drive !


  5. Move the files physically to the new location you have specified in the script
  6. Change the startup parameters of the SQL Server service in SQL Server Configuration Manager to match the new location of the master database. Eg: -dH:\SQLDataFiles\master.mdf;-lG:\SQLLogFiles\Mastlog.ldf (the -d parameter points to the master data file and -l to the master log file)
  7. Start the MSSQLSERVER service normally


Wednesday, September 26, 2007

Are we missing any? (indexes)

One of the really great things about SQL Server 2005 is the DMVs – Dynamic Management Views. These generally provide good insight into the server's current state and can be used to monitor the health of the server as well as tune or diagnose any problems.

A key issue in tuning any database is tuning the indexes. Not only maintaining those that are already there, but also finding out if we are missing any, or if any existing ones are not being used.

Finding the missing indexes can be done by using a group of DM views, the sys.dm_db_missing_index_* group. These views record information about index usage from the query optimizer. For each statement the optimizer generates an optimal plan, but if the indexes required by this plan do not exist, it generates another, less optimal plan for the statement. The missing index is then recorded in the sys.dm_db_missing_index_details view.

If the optimal index for the statement is present, the usage of this index is recorded in sys.dm_db_index_usage_stats, which can then be used to find and remove unused indexes.

An example of the use of the sys.dm_db_missing_index_details view:
Use AdventureWorks

Select Firstname, Lastname
from Person.Contact
Where Lastname='Smith'

Select *
from sys.dm_db_missing_index_details

Or, if you want the result for the missing index, column by column:

SELECT mig.*, statement AS table_name, column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id

Please note that this example also proves the point that the missing_index views are limited in their recordings. In this example the most optimal solution would be to also include the Firstname column in the index. But the missing_index views have a number of limitations and should therefore be used as a guide towards the tables and columns that are missing indexes – not as a tool for fine tuning.


Tuesday, September 25, 2007

Converting column values to comma delimited list (csv)

Once again I recently met the challenge of how to convert a list of rows to one comma delimited field (and NOT use cursors).

E.g. the rows 'Once', 'upon', 'a', 'time' converted to Once,upon,a,time

The solution, surprisingly enough, lies in the built-in SQL function COALESCE.

Declare @CSVField varchar(100)
Select @CSVField = COALESCE(@CSVField + ',','') + ColumnToCsv
From SomeTable

Select @CSVField


The COALESCE function returns the first non-NULL value in its list of expressions. On the first row @CSVField is still NULL, so @CSVField + ',' evaluates to NULL and the empty string is used; on every following row the accumulated string plus a comma is used, and the ColumnToCsv value is appended. This happens once for each row in the select resultset.
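A self-contained sketch of the technique – the table and the values are made up for illustration:

```sql
-- Hypothetical sample table and data
CREATE TABLE SomeTable (ColumnToCsv varchar(20))
INSERT INTO SomeTable VALUES ('Once')
INSERT INTO SomeTable VALUES ('upon')
INSERT INTO SomeTable VALUES ('a')
INSERT INTO SomeTable VALUES ('time')

DECLARE @CSVField varchar(100)

-- Each row appends its value; COALESCE supplies '' on the first row
SELECT @CSVField = COALESCE(@CSVField + ',', '') + ColumnToCsv
FROM SomeTable

SELECT @CSVField
```

Note that without an ORDER BY the row order – and thus the concatenation order – is not guaranteed by SQL Server.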

Friday, July 13, 2007

Help my log file is growing huge !

Database file size (.mdf): 250 MB
Log file size (.ldf): 10 GIGABYTES

Quick solution
Make a transaction log backup of your database. Shrink the log file. Make a new transaction log backup.

See BOL for the How to on your version of the SQL Server.
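The quick solution can be sketched in T-SQL like this – the database name MyDB, the logical log file name MyDB_Log and the backup paths are placeholders (check the logical name with sp_helpfile):

```sql
-- 1. Back up the transaction log so the inactive part can be reused
BACKUP LOG MyDB TO DISK = 'G:\Backup\MyDB_log.trn'

-- 2. Shrink the physical log file (target size in MB)
USE MyDB
DBCC SHRINKFILE (MyDB_Log, 500)

-- 3. Take a new transaction log backup to keep the log chain intact
BACKUP LOG MyDB TO DISK = 'G:\Backup\MyDB_log2.trn'
```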

Many administrators mistakenly think that using the full recovery model (which is the default) and then making a daily full backup is sufficient maintenance.

The problem is that the full backup does not flush the inactive transactions from the transaction log when the full backup finishes - this is the case both for SQL 2000 and SQL 2005.

There are different ways to solve the problem – the right one, in my opinion, is to back up the log file regularly, which flushes the log and solves the problem in the long run.

Long term solution
Schedule at least one daily transaction log backup – if you take only one, I would recommend the middle of the work-day. Eg. full backup at 23:00, T-log at 12:00. If the database has any important activity during the day (and most do) then you probably want to take a transaction log backup every 2, 3 or 4 hours during the day. Worst case, you can go back to the time of the last t-log backup. If you only take one full backup a day, worst case you need to go 24 hours back.

How many files in a filegroup

Because SQL Server opens one I/O thread for each file per filegroup, it gives the server higher read/write performance to have more than one file per filegroup.

This does not mean, though, that you should have as many files as possible! The optimal number of files per filegroup is between 0.25 and 1 per CPU. Meaning that if your server has 4 CPUs you should have between 1 and 4 files per filegroup.

Please note that when I write CPUs I actually mean physical CPU cores!

My experience is that you normally benefit the most by aligning the number of files with the number of CPUs. Eg. a server with 1 CPU with 2 physical cores should have 2 files per filegroup in general.

This is always true for TEMPDB where you should have 1 file per CPU.
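As a sketch, on a hypothetical 4-core server tempdb could be given three extra data files like this (the paths and sizes are placeholders):

```sql
-- tempdb starts with one data file (tempdev); add three more
-- so the number of data files matches the 4 CPU cores
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'H:\SQLDataFiles\tempdb2.ndf', SIZE = 100MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'H:\SQLDataFiles\tempdb3.ndf', SIZE = 100MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'H:\SQLDataFiles\tempdb4.ndf', SIZE = 100MB)
```

Keep the files the same size so SQL Server's proportional fill spreads the load evenly across them.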

Wednesday, July 11, 2007

Performance tuning counters for SQL Server

As an instructor I always tell my students to make a performance baseline on their SQL Server so that they have something to compare against, when (not if) the users complain about performance. But my students and customers often ask – which counters should I monitor then?

Well here is, to my experience, a good list to start with. Arguably you could narrow or widen the list – but for a start this list has in my experience proven to be sufficient.

Performance object             | Counter                  | Notes
-------------------------------|--------------------------|-----------------------------------------------
Memory                         | Available MBytes         | I know that many sites say 20 – but that was from the NT4 days with 128 MB RAM
SQL Server: Buffer Manager     | Buffer Cache Hit Ratio   |
SQL Server: Buffer Manager     | Page Life Expectancy     |
Physical Disk                  | Disk Reads/sec           | Per physical disk. No levels – just monitor
Physical Disk                  | Disk Writes/sec          | Per physical disk. No levels – just monitor
Physical Disk                  | % Disk time              | Per physical disk. No levels – just monitor
Physical Disk                  | Avg. Disk Queue Length   | Per physical disk. 2 times the number of spindles in the RAID set, e.g. RAID 1 with 2 disks = 4
Processor                      | % Processor Time         |
System                         | Processor Queue Length   |
Network Interface              | Bytes Received/sec       | No levels – just monitor
Network Interface              | Bytes Sent/sec           | No levels – just monitor
Network Interface              | Output Queue Length      | No levels – just monitor
SQL Server: General Statistics | User Connections         | No levels – just monitor
SQL Server: Memory Manager     | Connection Memory (KB)   | No levels – just monitor
SQL Server: Memory Manager     | Total Server Memory (KB) | No levels – just monitor


You don't have to monitor the counters all the time to create a good baseline; the warning and alert notes are included for those of you who want to implement the counters in your normal performance monitoring system, like MOM.

Move a database between servers - WITH the logins

Quite a few times I have been asked if moving a database from one server to another would be any problem.

Generally this is not a problem - although you can save yourself some trouble by moving the logins for the database first.
The How-To is:
1) Script the logins for the database on the source server by using the scripts listed below (Originally posted by Microsoft)
Remember to filter the logins the script generates – it scripts every login on the source server and you generally only want those with a reference to the database being moved.
2) After filtering the logins from the source server apply the needed logins on the destination server before attaching the database.
3) Detach the database on the source server, move it, attach it on the destination server
The detach/attach procedure can most easily be done from Management Studio by right-clicking the database name in the object explorer -> Select Task -> Select Detach. On the destination server right click the folder Databases -> Select Attach.
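The detach/attach step can also be done in T-SQL – a sketch with a placeholder database name and file paths:

```sql
-- On the source server: detach the database
EXEC sp_detach_db 'MyDB'

-- Move MyDB.mdf and MyDB_log.ldf to the destination server, then attach
-- (CREATE DATABASE ... FOR ATTACH is the SQL 2005 syntax; use sp_attach_db on SQL 2000)
CREATE DATABASE MyDB
ON (FILENAME = 'H:\SQLDataFiles\MyDB.mdf'),
   (FILENAME = 'G:\SQLLogFiles\MyDB_log.ldf')
FOR ATTACH
```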
The scripts that script the logins

The scripts were originally posted by Microsoft. They come in two variants – one for SQL 2000 and one for SQL 2005.
Both scripts consist of the same two stored procedures – sp_hexadecimal and sp_help_revlogin. sp_help_revlogin generates a new script in the results window with one line for each login on the server. Copy the text result from the results window, and filter it before applying it on the destination server.