Category Archives: Internals

SQL Server Configuration Manager in SQL Server Denali

While I was working with the SQL Server Configuration Manager in SQL Server 2005 mentioned in yesterday’s post, the new features in SQL Server Denali came to my mind. Here is why I liked the Configuration Manager in SQL Server Denali.

The first striking change is that the number of tabs has increased from 3 in SQL Server 2008 to 6. The options that were tightly stuffed inside those 3 tabs have been allocated their own space, which makes them easier to manage.

The feature which a DBA would appreciate is the Startup Parameters tab. In earlier versions of SQL Server, the values of the startup parameters had to be edited in-line, and a small typo there would not allow the service to start, as posted here. But in SQL Server Denali, a separate text box has been provided to Add/Update the startup parameters. This makes the DBA’s job relatively easy and reduces the chances of an erroneous entry in the startup parameters.

The other DBA-friendly feature is the ability to change the directory where dump files are generated. SQL Server dump files (*.dmp, *.mdmp) are generated when SQL Server encounters an issue like an Access Violation or when SQL Server crashes. These files contain information about what SQL Server was doing prior to the crash or error.

In earlier versions of SQL Server, dump files were by default created in the directory where the SQL Server Error Log was located. I had faced this issue on a SQL Server 2000 instance where the dump files were continuously getting generated on the C drive, and managing space on that drive was a challenge.

In SQL Server Denali, the SQL Server Configuration Manager includes the option to change the Dump directory (under Advanced tab).

I wish we had this feature in SQL Server 2000. My team members would agree with this, since they had a tough time managing the .mdmp files in late 2009.

Overall, the new SQL Server Configuration Manager is DBA friendly. Since we are still at CTP1, I am hoping for many more such good features by the time it reaches RTM.

Contained Databases in SQL Server Denali

Last Sunday I spent some time exploring the features of SQL Server Denali. One of the new features is Contained Databases. In short, a Contained Database is independent of the SQL Server instance: all the database metadata and user information is stored inside the database itself, and a user defined in this database need not have a corresponding login. This feature is very useful in non-Production environments where databases are moved across instances. No more orphaned users or syncing of logins across instances. This MSDN article has more details on Contained Databases.

Configuring Contained Databases Property

The Contained Databases feature is not enabled by default. It can be enabled through T-SQL as below.

sp_configure 'show advanced options',1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO

It can also be enabled by setting Enable Contained Databases to TRUE using the SSMS GUI, as in this screenshot.
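
To verify that the change has taken effect at the instance level, sys.configurations can be queried; a minimal sketch:

-- value_in_use should show 1 once RECONFIGURE has run
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'contained database authentication';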

Creating Contained Databases

After enabling this option, the next step is to create a Contained Database. While creating the database, changing the Containment type to Partial will make the database Contained. By default this option is set to None.

It can also be created using the below T-SQL command.

CREATE DATABASE [ContainedExample]
 CONTAINMENT = PARTIAL
 ON PRIMARY
( NAME = N'ContainedExample', FILENAME = N'C:\ContainedExample.mdf' )
 LOG ON
( NAME = N'ContainedExample_log', FILENAME = N'C:\ContainedExample_log.ldf')
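
Once created, the containment setting can be verified from sys.databases; a quick sketch, assuming the database name used above:

-- containment_desc should report PARTIAL for a Contained Database
SELECT name, containment_desc
FROM sys.databases
WHERE name = N'ContainedExample';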

Creating Users with Password

In this database, as in earlier versions of SQL Server, users associated with logins can be created. What is new is that for Contained Databases, a user with password which is not associated with any login can also be created. Like I mentioned earlier, the password is stored within the database itself.

The T-SQL command for creating a user with password is as below.

USE [ContainedExample]
GO
CREATE USER [username] WITH PASSWORD=N'Strong@Password'
GO
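
To confirm that the user is truly contained, sys.database_principals inside the database can be checked; users with password report an authentication type of DATABASE. A minimal sketch:

-- Contained users with password report authentication_type_desc = 'DATABASE'
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE authentication_type_desc = 'DATABASE';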

Connecting to SSMS

In SSMS, just entering the user name and password of this user is not enough to connect to the instance. Since this user does not have an associated login, SSMS raises the following error.

Error: 18456, Severity: 14, State: 5.
Login failed for user 'containeduser'. Reason: Could not find a login matching the name provided.

In order to connect through SSMS using the credentials of a user with password, the Contained Database name needs to be entered in the Connect to Database drop-down box.

“Un-Containing” the database

A Contained Database can be “un-contained” back into a normal database. The database property can be changed in SSMS or using a T-SQL command like the one below.

USE [master]
GO
ALTER DATABASE [ContainedExample] SET CONTAINMENT = NONE
GO

If the Contained Database has a user with password, this command fails with the below error.

Msg 33233, Level 16, State 1, Line 1
You can only create a user with a password in a contained database.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Before changing the Containment option, SQL Server seems to try to create a login within the database (most probably a user for the login executing the command; I was not able to track this down), hence the command fails. After dropping the user with password, I was able to change the Containment option to None. If the database does not have a user with password, the Containment option changes without any errors.
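
For example, with the user with password created earlier, the working sequence looks like this sketch:

USE [ContainedExample]
GO
-- Drop the user with password first; the containment change then succeeds
DROP USER [username];
GO
USE [master]
GO
ALTER DATABASE [ContainedExample] SET CONTAINMENT = NONE;
GO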

Converting a database to a Contained Database

A database can be converted to a Contained Database using both SSMS and T-SQL.

USE [master]
GO
ALTER DATABASE [databasename] SET CONTAINMENT = PARTIAL
GO

If this database has users associated with Windows logins, they are contained in the database without any extra effort. In order to contain the users which are associated with SQL Server logins, the sp_migrate_user_to_contained stored procedure needs to be executed. This converts the SQL Server logins to users with password. The following command converts a user associated with a SQL Server login named tlogin to a user with password.

USE ContainedDatabase
GO
sp_migrate_user_to_contained
@username = N'TLOGIN',
@rename = N'keep_name',
@disablelogin = N'do_not_disable_login' ;

Once this is done, the newly contained database is ready to go.

Conclusion

Contained Databases is a very useful feature. It reduces a lot of work for the DBA when shipping databases from one environment to another. But Security Auditors will need to update their auditing procedures due to the threats mentioned here.


CHECKDB running every minute? Check Database Properties

Sometime back I came across a question in the forums where the user was getting this message in the SQL Server Error Log every minute.

CHECKDB for database 'DBName' finished without errors on [date and time].
This is an informational message only; no user action is required.

Starting up database 'DBName'

He had not scheduled CHECKDB to run every minute and wanted to know what this message meant. A quick glance at the informational message clearly indicates that SQL Server is not reporting the results of DBCC CHECKDB; this message is reported in the Error Log whenever a database starts up. This is a feature available from SQL Server 2005 onwards, about which I had posted here. The third line in the above message confirms that the database is indeed starting up every minute.
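
To see how often the database is starting up, the Error Log can be searched from T-SQL using the undocumented xp_readerrorlog procedure; a hedged sketch (my assumption: first parameter is the log number, second selects the SQL Server error log, the remaining ones are search strings):

-- Search the current error log for startup messages for this database
EXEC master.dbo.xp_readerrorlog 0, 1, N'Starting up database', N'DBName';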

Why does the database start up every minute?

This is because the AutoClose property for that database is set to True.

With this property set to True, the database is closed when the last user connection disconnects. When a user connects back to the database, the database starts up again and the informational message is logged in the SQL Server Error Log. When a database starts up, resources are assigned to it, and when it is closed, those resources are released. The AutoClose option is useful for a database which is not frequently used, such as a database running on SQL Server Express Edition. But if this property is set to True on a busy OLTP database, it will have a negative impact on the performance of the instance.

I too had come across a few databases in my client’s environment where the AutoClose property was set to True. Since these databases were small in size and not of much importance, there was no impact. This property can be turned off using the Database Properties dialog in SSMS or using the following query.

ALTER DATABASE [DBName] SET AUTO_CLOSE OFF
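
To find every database on the instance with this property enabled, sys.databases can be checked; a minimal sketch:

-- List all databases that have AutoClose turned on
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;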

Does tempdb run out of space during DBCC CHECKDB?

Recently I read a forum post where the user was running DBCC CHECKDB against one of the biggest databases on an instance, but CHECKDB would terminate abruptly, reporting that tempdb had run out of space. Starting with SQL Server 2005, SQL Server makes use of Database Snapshots while running CHECKDB. The snapshot is created on the same drive where the data file is located. If there is not enough space for the snapshot to be created, CHECKDB will fail, reporting the error. I had come across this issue and blogged about it here.

But in this case, CHECKDB was failing because of space constraints on tempdb. Why does tempdb come into the picture while running CHECKDB? While running CHECKDB, SQL Server stores information related to the database in memory. If the database is huge or if there is not enough memory, SQL Server is forced to spill this information to tempdb. Hence tempdb grows, and if there is not enough space for tempdb to grow, CHECKDB will fail. This article on MSDN explains how to optimize DBCC CHECKDB performance.

DBCC CHECKDB has an option to estimate how much tempdb space it requires. This estimate can be obtained before running DBCC CHECKDB against a given database.

DBCC CHECKDB ('databasename') WITH ESTIMATEONLY

Here is the estimated space needed for tempdb to run CHECKDB against a 25 GB database.

This option helps to size tempdb properly before running DBCC CHECKDB on a big database.
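
That estimate can then be compared with the free space currently available in tempdb; one way to check it is a sketch using the 8 KB page counts exposed by a DMV:

-- Free space currently available in tempdb data files, in MB
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS FreeSpaceMB
FROM tempdb.sys.dm_db_file_space_usage;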

Whenever I hear DBCC CHECKDB, the first name that comes to my mind is Paul Randal (blog | twitter). His posts in the CHECKDB From Every Angle series are the best source of information available on the internet regarding DBCC CHECKDB.

Insufficient system resources exist to complete the requested service

In the next two weeks we have a Disaster Recovery activity for the client. As a part of it, the Backup Team was trying to copy some of the backup files from the Production server to the DR server. These backup files are several TBs in size. When they were trying to copy one of those files on a Windows Server 2003 box, they got an error.

Received Win32 RC 1450 (0x000005aa) from FileRead():
ReadFile '\\ServerName\i$\SQLBACKUPS\FileName.BAK'.
Insufficient system resources exist to complete the requested service.

Just because the file they were trying to copy had SQL in its path, they came to us to fix this issue. This has nothing to do with SQL Server; it is purely related to memory management on the server. On Windows servers, this situation arises if one tries to copy *huge* files in parallel over the network. I am no expert on Operating Systems, but I will try to explain the reasons for this with whatever knowledge I have on the topic.

When the system boots, the Memory Manager creates two pools, namely Paged and Non-paged. The sizes of these pools depend on the physical RAM available on the server. The Non-paged pool is used by the Windows kernel and drivers, as the memory in this pool cannot be paged out. The memory in the Paged Pool can be paged out. The maximum size of the Paged Pool differs depending on the Operating System, the RAM installed, and the architecture (32 or 64 bit). On a server running Windows Server 2003 (32 bit), the maximum size of the Paged Pool is 650 MB (2 GB for 64 bit).

Now coming back to the copy command: it also makes use of memory from the Paged Pool, and the amount it uses is directly related to the size of the file being copied. As per this article, for each MB of file being copied it utilizes 2 KB (1 KB each for source and destination) of Paged Pool memory. If PAE mode is enabled, the usage is 4 KB per MB.

On this server the Backup Team was trying to copy a file which was 150 GB in size, and the server runs Windows Server 2003 with PAE mode enabled. Copying this file therefore requires about 600 MB of Paged Pool memory (150 GB = 153,600 MB; 153,600 MB × 4 KB ≈ 600 MB), while the limit is 650 MB. If we know how much Paged Pool memory is already in use, it is easy to predict whether the copy will complete or not. Yet again, Process Explorer comes in handy. To see how much Paged Pool memory is in use, just launch Process Explorer and click View –> System Information.

As per this information, 221 MB of Paged Pool memory was already in use on that server. The copy needed 600 MB more, which would take the total above the Windows Server 2003 limit. Hence the “Insufficient system resources exist to complete the requested service” error, and the copy command failed! Some other process had already occupied that memory, and until it was released they would not be able to copy the file using the native copy or xcopy commands. What is the fix? Since this memory pool is sized during system boot, the option is to reboot.

I/O is frozen on database, No user action is required

I recently responded to one of the questions in which the user wanted to know what these messages in the SQL Server Error Log mean.

spid92,Unknown,I/O is frozen on database [databasename]. No user action is required.
spid92,Unknown,I/O was resumed on database [databasename]. No user action is required.

Even in my environment I have seen this message on some of the Data Warehouse instances. This message is logged in the Error Log whenever a backup application making use of the SQL Server Virtual Device Interface (VDI) tries to back up (with snapshot) the database or the drive on which the database files reside. Microsoft Backup (ntbackup.exe), Volume Shadow Copy (VSS), Data Protection Manager (DPM), and third-party tools like Symantec Business Continuance Volume (BCV) are some of the applications which cause this message to be logged in the SQL Server Error Log.

What do these messages mean? Let me explain with an example. Suppose ntbackup.exe is configured to take a backup of the D drive, and this drive holds data files belonging to a few databases on SQL Server. Since the data files are in use by SQL Server, copying them as-is would leave the files in the backup inconsistent. To ensure that the database files in the drive backup are consistent, the application internally issues a BACKUP DATABASE [databasename] WITH SNAPSHOT command against the database. When this command is issued, the I/O for that database is frozen and the backup application is informed to proceed with its operation. The I/O for the database remains frozen until the BACKUP WITH SNAPSHOT command completes and is resumed once it does. The corresponding messages are logged in the SQL Server Error Log.
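
These snapshot backups are recorded in MSDB just like regular backups, so they can be identified after the fact; a minimal sketch using the is_snapshot flag in backupset:

-- Backups issued as BACKUP ... WITH SNAPSHOT by VDI applications
SELECT database_name, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE is_snapshot = 1
ORDER BY backup_finish_date DESC;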

If the ntbackup runs during business hours, the users accessing the database will surely face some connectivity issues. This is one of the reasons why backups on production systems should not be taken during business hours. Like I mentioned earlier, I have seen these messages in my environment, but only during the regular downtime for backups.

Pruning backup history taking too long to complete? Read on

Just a while ago, a colleague from my previous employer posted a question on #sqlhelp to find out how backup-related records in the MSDB database get purged. I gave him this screenshot of the SQL Server Agent properties which control the Job History in MSDB.

I was about to tweet about the sp_delete_backuphistory stored procedure, which can be used to prune the backup/restore history in MSDB, but someone else did it before I could. This stored procedure reminded me of a question I had come across in one of the forums, where the user was trying to drop a database using SSMS and SSMS would just hang. After a few responses he informed us that he had selected the “Delete backup and restore history information….” option while dropping the database.

There were no processes blocking the session, yet the database would never get dropped. He had several thousand records of backup history for this database. The drop database session was progressing, but it was really slow because it was waiting for the “Delete backup and restore….” transaction to complete. You may be surprised to know the reason for this: there are no indexes or foreign keys on the Backup/Restore history tables! The sp_delete_backuphistory stored procedure deletes data from the following tables in MSDB (a sample invocation follows the list).

  • backupfile
  • backupfilegroup
  • backupmediafamily
  • backupmediaset
  • backupset
  • restorefile
  • restorefilegroup
  • restorehistory
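
A typical invocation of sp_delete_backuphistory passes a cutoff date, and everything older than that date is deleted from all eight tables; a minimal sketch (the 90-day retention is just an example):

USE msdb
GO
-- Purge backup/restore history older than 90 days
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(DAY, -90, GETDATE());
EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;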

Run this query on MSDB and you will be surprised to find that there are only 4 indexes present on those 8 tables.

SELECT
    t.name AS [TableName],
    ind.name AS [IndexName],
    col.name AS [ColumnName],
    ind.type_desc AS [IndexType]
FROM
    sys.indexes ind
INNER JOIN
    sys.index_columns ic ON
      ind.object_id = ic.object_id AND ind.index_id = ic.index_id
INNER JOIN
    sys.columns col ON
      ic.object_id = col.object_id AND ic.column_id = col.column_id
INNER JOIN
    sys.tables t ON
      ind.object_id = t.object_id
WHERE
    ind.is_unique = 0
    AND ind.is_unique_constraint = 0
    AND t.name IN
    (
        'backupfile',
        'backupfilegroup',
        'backupmediafamily',
        'backupmediaset',
        'backupset',
        'restorefile',
        'restorefilegroup',
        'restorehistory'
    )
ORDER BY
    t.name, ind.name

Since there are no proper indexes on these tables, the stored procedure obviously takes a long time to complete. What is the fix? Create indexes on the columns that are referenced by the sp_delete_backuphistory stored procedure. Geoff N. Hiten has written a good script which does exactly this job. The script can be downloaded from here. The user created the indexes on MSDB using this script and was in for a surprise: the backup history pruning completed quickly and the database got dropped in a few seconds.
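
Geoff’s script is the proper fix; purely as an illustration of the idea, supporting indexes on the join columns would look something like the sketch below (these column choices are my assumption, not his exact script):

USE msdb
GO
-- Illustrative only: indexes to support the joins the history pruning performs
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id ON dbo.backupset (media_set_id);
CREATE NONCLUSTERED INDEX IX_backupfile_backup_set_id ON dbo.backupfile (backup_set_id);
CREATE NONCLUSTERED INDEX IX_restorehistory_backup_set_id ON dbo.restorehistory (backup_set_id);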

If you are facing similar issues, please go ahead and create those indexes on the MSDB tables. Even if you are not affected by this yet, it doesn’t hurt to have these indexes in place. You never know when you will need them 😉