Defeating deadlocks with READ_COMMITTED_SNAPSHOT isolation

I was recently asked by a client to look into an issue they were having with a WCF web service. The application was generating a large number of errors, filling a 5MB log file every 5 minutes, and the performance of the underlying database was so bad that even a trivially simple query would take up to a minute and a half to return. Checking the error logs I could see a huge number of exceptions like:

Transaction (Process ID 112) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

First things first, I made sure that the server wasn’t short of physical resources: memory was sitting at less than 50% usage, processor at less than 10% and disk at less than 5%. So I fired up SQL Profiler to trace exactly which queries were causing the deadlocks and which queries were being sacrificed. If I hadn’t had direct access to the server I could have used trace flag 1222 to log the details of each deadlock for me, but fortunately I didn’t have to jump through that extra hoop: I could point SQL Profiler directly at the server and select the ‘Locks:Deadlock graph’ event:
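If you do end up going the trace flag route, turning on deadlock logging is a one-liner; a sketch (the -1 makes it server-wide rather than per-session, and the output lands in the SQL Server error log):

```sql
-- Log details of every deadlock to the SQL Server error log, for all sessions
DBCC TRACEON (1222, -1);

-- Check which trace flags are currently active
DBCC TRACESTATUS (-1);
```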

Trace properties

I could also have turned on ‘Lock:Deadlock’ and ‘Lock:Deadlock chain’, but the deadlock graph is great because it gives you a visual representation of the deadlock and lets you see the queries causing it. So I just had to wait for a minute and up popped the following:

Deadlock graph

As you can see the deadlocks were caused by a page lock in this case, and I could simply hover my mouse over either transaction (the two ellipses) to see the specific queries involved in the deadlock. The victim here was a SELECT and the winner an INSERT, both trying to access the same page of a specific table and each trying to lock it because NHibernate was configured to use the ReadCommitted IsolationLevel. Normally that’s exactly the isolation level I would recommend, but the problem for this application was that it was doing a large number of read transactions and they were getting blocked by some rather slow write transactions. This had a knock-on effect on other reads/writes, and eventually we ended up with locks all over the place plus the deadlocks and horrible performance I started this post with.

OK, so at this point we could modify the code to use a less strict isolation level for reads, redeploy it and, hooray, our issue would be fixed. But there are a couple of reasons I didn’t want to do that. Firstly, the web service code was fiendishly complicated and had no test coverage (I should point out at this point that I didn’t write it), so any changes to the code were inherently risky. Secondly, the magic of READ_COMMITTED_SNAPSHOT means that modifying the code is unnecessary. So what is ‘snapshot isolation’, I hear you cry? Well, the MSDN definition is:

Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction. The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.

Now you have probably realised that this isolation level has potential drawbacks (see this question on Stack Exchange for an excellent summary of them), but in this case the benefits far outweighed any potential risk and the application was capable of handling those risks anyway. By enabling READ_COMMITTED_SNAPSHOT at the database level we can make sure that, although the application’s connection to the database specifies the ReadCommitted IsolationLevel, it is achieved using a snapshot rather than a lock. We do that with the following SQL:



ALTER DATABASE database_with_deadlocks SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE database_with_deadlocks SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE database_with_deadlocks SET MULTI_USER

N.B. Putting the database into single user mode first stops the ALTER from being suspended behind other connections and never completing.

After making this one change there has been a dramatic improvement in performance: there are no more deadlock exceptions thrown and simple queries are back to completing in less than a second. Obviously READ_COMMITTED_SNAPSHOT is not a panacea and should be used with caution and with an awareness of its possible risks, but in the right situation it can be an incredibly powerful tool to have in your belt.
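If you want to confirm the change actually took effect (or check the setting on any other database), sys.databases exposes a flag for it; a quick sketch:

```sql
-- 1 means READ_COMMITTED_SNAPSHOT is on for the database
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'database_with_deadlocks';
```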



Awesome links of the week – Part 2


WebSockets let you have bi-directional communication between the browser and the server. SignalR lets you use WebSockets with ASP.NET, falls back gracefully if they’re not available and provides some other nifty RPC functionality.


Taking a second look at free fonts

Smashing Magazine are re-evaluating free fonts and seem to have nice things to say about some of them. They’ve also, very helpfully, selected some of the best available, and there are even a few more linked in the comments.


Unreal Engine 4 running in Firefox

Epic and Mozilla have put together a demo video of the upcoming Unreal Engine 4 running a demo inside Firefox with no plugins.



A great insight into some of the voodoo problems you’ll come across working with HTML and CSS, clearly explained and with concise solutions.



Awesome links of the week – Part 1


Like Bootstrap, but for building HTML5 apps: http://ionicframework.com/. Check out the getting started video then dive into the great quality docs.


Do you want to automatically generate TypeScript interfaces from your C# classes? So did I, and I was going to have to write a T4 template to do it; fortunately someone far more awesome than me has already done just that, open sourced it, created a NuGet package and put together a great web site.

A re-introduction to JavaScript

A re-introduction to JavaScript (JS Tutorial) from the lovely folks over at the Mozilla Developer Network is the kind of article I wish I’d read when I first started working with JavaScript. It covers everything you need to know if you are already, or are planning to start, programming in JavaScript. Read it, you won’t regret it.

Erik Johansson

Erik Johansson is a photographer who creates rather wonderful Photoshop montages.




JavaScript Documentation

I use JavaScript on a pretty much daily basis, and as I haven’t blogged about anything recently I thought I’d help promote the great (as in comprehensive) JavaScript docs that the Mozilla foundation provides. Right now this is being pushed via the http://promotejs.com domain, which will hopefully make it more likely that anyone searching for JS docs will easily find good quality, in-depth documentation.

The campaign aims to point people to the Mozilla Developer Centre for JavaScript, but I’d also always recommend Douglas Crockford’s JavaScript site. JavaScript’s a much maligned and misunderstood language, so hopefully between the two you’ll come to appreciate some of its many benefits. Enjoy :)

JavaScript JS Documentation: JS Array every, JavaScript Array every, JS Array .every, JavaScript Array .every


Seven essential Visual Studio 2010 keyboard shortcuts

Microsoft recently released some reference posters for the default key bindings in Visual Studio 2010 in PDF format. I’m constantly amazed by how few of these most developers seem to know, so I thought I’d list my favourites. I use C#, but the majority should work in VB, and all but Ctrl + Comma will work in VS 2008.

Ctrl + Full Stop (.)

Displays the available options on the Smart Tag menu. This is by far my favourite: Smart Tags allow you to rename properties/methods/classes throughout your solution, add a required using statement or even create a new class/property/method.

Ctrl + Comma

Displays the NavigateTo window, which gives you search-as-you-type support for files, types and members. Scott Gu has a great blog post on just how useful this is.

F12/Shift F12

F12 will go to the definition of the currently selected symbol. Shift F12 will find all references for the currently selected symbol.

Ctrl + K, C/Ctrl + K, U

Ctrl + K, C comments out all currently selected lines of text or the current line if no text is selected. Ctrl + K, U uncomments all currently selected lines of text or the current line if no text is selected. This works in .js, .cs, .aspx, .config and .xml files.

Ctrl + K, D

Formats the current document according to the indentation and code formatting settings specified on the Formatting pane under Tools | Options | Text Editor | C#. Instantly tidy up a poorly formatted code file!

Ctrl + M, O

Collapses all declarations down to an outline to give you a quick high-level overview of your code file.

Ctrl + M, M

Toggles the currently selected region, method, class or property between collapsed and expanded view.


Enable Service Broker taking forever

Today I had to enable Service Broker in SQL 2008 because when using a SqlCacheDependency I was getting the error:

The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported.  Please enable the Service Broker for this database if you wish to use notifications.

This should be pretty easy, just using the following command:

ALTER DATABASE DatabaseName SET ENABLE_BROKER
The problem I had was that it was taking forever to execute. It turns out that other processes were stopping the script from acquiring the exclusive lock it needs. The solution is pretty simple: a script to kill all the other processes using the database. Be aware that this really will kill all the other processes; if they’re doing something important that could be a very bad thing!

DECLARE @DatabaseName nvarchar(50)

SET @DatabaseName = N'DatabaseName' -- Specify the database we want to run the script against
DECLARE @SQL varchar(max)

-- Build a SQL script to kill all other processes
SELECT @SQL = COALESCE(@SQL,'') + 'Kill ' + Convert(varchar, SPId) + ';'
FROM MASTER..SysProcesses
WHERE DBId = DB_ID(@DatabaseName)
AND SPId <> @@SPId -- Make sure we don't kill our own process

SELECT @SQL  -- Write out the SQL so you can see what's happening
EXEC(@SQL) -- Kill all the other processes

-- Now we can enable the service broker instantly
ALTER DATABASE DatabaseName SET ENABLE_BROKER
And Service Broker’s enabled, time for tea.
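One last tip: if you’d rather not kill the processes yourself, SQL Server can roll back the other sessions as part of the ALTER. A sketch of the alternative (assuming you can tolerate those sessions’ open transactions being rolled back):

```sql
-- Rolls back other open transactions immediately instead of
-- waiting behind them for the exclusive lock
ALTER DATABASE DatabaseName SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE
```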

Some neat features of asp.net mvc 2

I’ve been upgrading an application to MVC 2 recently and I’m really liking a lot of the new features, so here are some of my favourites:

Model validation

Steve Sanderson’s xVal was great for painlessly adding client/server side validation to MVC 1. Obviously someone at Microsoft liked it, because now pretty much the same functionality is baked into MVC 2.
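As a sketch of what that baked-in validation looks like (the model and property names here are invented for illustration), you decorate your model with System.ComponentModel.DataAnnotations attributes and MVC 2 enforces them server side, populating ModelState automatically:

```csharp
using System.ComponentModel.DataAnnotations;

public class CustomerViewModel
{
    [Required(ErrorMessage = "Name is required")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(18, 120)]
    public int Age { get; set; }
}

// In the controller action:
// if (!ModelState.IsValid) return View(model);
```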

RequireHttps action filter

Want a single action, or all actions of a controller, to always use SSL? You used to have to code this up yourself, but now you can just add the RequireHttps attribute to your class or method and, if it isn’t already, the request will automatically be redirected to use SSL.
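For example (controller name hypothetical), applied at class level it covers every action:

```csharp
using System.Web.Mvc;

[RequireHttps] // every action in this controller redirects to https
public class AccountController : Controller
{
    public ActionResult LogOn()
    {
        return View();
    }
}
```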

Strongly typed Html Helpers

I really dislike magic strings, which meant that code like this:

<%= Html.TextBox("Name", Model.Name) %>

always felt wrong inside a view. Now you can use the strongly typed helper methods instead and do this:

<%= Html.TextBoxFor(m => m.Name) %>

No magic string, so much nicer, and naturally there are LabelFor, TextAreaFor etc. methods too.

Html.EditorFor() and Html.DisplayFor()

Even better than strongly typed helpers are editor/display templates, which let you create your own views for editing and displaying different object types and then render them using the Html.EditorFor and Html.DisplayFor methods. It’s difficult to get across how awesome this is without writing a whole post about it; fortunately someone’s already done that for me and I can’t recommend it enough.
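The short version, with a made-up model property: drop a partial named after the type into Views/Shared/EditorTemplates and EditorFor picks it up by convention:

```aspx
<%-- Views/Shared/EditorTemplates/DateTime.ascx --%>
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime>" %>
<%= Html.TextBox("", Model.ToShortDateString(), new { @class = "datePicker" }) %>

<%-- In a view, this renders the template above for a DateTime property: --%>
<%= Html.EditorFor(m => m.StartDate) %>
```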

Simpler Http verb attributes

To specify that a controller action only accepts a POST you used to have to use the following attribute:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(int id)

Now the attributes have been simplified so you can just do:

[HttpPost]
public ActionResult Edit(int id)


Not a massive change, but it’s so much tidier and easier to read.

SubSonic is out

I’ve been a big fan of SubSonic for a while now, as you can probably tell from my last post, and I’ve been helping out with the project a lot more over the last few months. Yesterday all the hard work paid off and we released the new version to the world; see Rob’s blog for more details of what’s included. There’s a mass of bug fixes, but there are also some things I’ve been working on behind the scenes to reduce the friction for everyone involved in the project and make it easier for future releases to be more regular and high quality.

Making testing less painful

The SubSonic core is pretty well covered with tests, but they were almost all integration tests. In fact, to run them you needed to install and configure three databases (SQL 2005, SQL 2008 and MySQL), and the tests then took over 6 minutes to run. So last month I dived in and reworked a large chunk of them into unit tests. The unit tests now run in 6 seconds and cover about 35% of the SubSonic core; not perfect, but it’s heading in the right direction and, combined with a continuous integration server, it makes it a lot easier to work on the code quickly and safely.

Continuous integration, like trust but with a blame button …

I really like having a continuous integration server that builds and tests code automatically whenever a change is checked into source control. Fortunately for us the nice people at teamcity.codebetter.com provide TeamCity server hosting for open source projects and they’re now hosting SubSonic. Right now we’re only running our unit tests, but the plan is to have all our integration tests run on check-in too. If you want to keep an eye on the SubSonic builds and tests you can check the RSS feed. Special thanks to Kyle Baley, who has been unswervingly patient and helpful getting everything set up.

What’s coming in 3.1

This release is a maintenance one: it fixes a bunch of bugs and sets the project up for the future. Work’s now starting on version 3.1, which is planned for release on the 22nd of May and should include the following features:

  • Oracle support
  • Medium Trust support
  • Automatic foreign keys for SimpleRepository
  • More/better/smarter attributes for SimpleRepository

Seven reasons you should try SubSonic

SubSonic is a query tool for .NET data access. If you haven’t tried it out then you really should, and here’s why:

It’s simpler than, for example, nHibernate

Don’t get me wrong, I like nHibernate a lot: it’s amazingly powerful, flexible and mature, and I’ve worked with it on many projects and admire it a great deal. But if you want to go from nothing to a working project with it, you’re going to have to do some serious work and spend some time understanding the nHibernate way.

You can get started with SubSonic and have data access up and running in half an hour and that includes watching the getting started video. In fact I can summarise that video in 7 steps:

  • Download SubSonic
  • Add a reference to SubSonic.Core to your project
  • Add a connectionString to your project’s config file
  • Modify the Settings.ttinclude file to specify your Namespace, ConnectionStringName and DatabaseName
  • Add the tt and ttinclude files to your project; if they don’t run automatically, click the ‘Transform All Templates’ button at the top of Solution Explorer
  • Start accessing your data from the generated classes
  • Grab a kabob (I have no idea what a kabob is), this step is optional but apparently Rob thinks they’re good.
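For reference, the connection string step looks something like this (the name and database here are invented; yours must match the ConnectionStringName you set in Settings.ttinclude):

```xml
<configuration>
  <connectionStrings>
    <!-- name must match ConnectionStringName in Settings.ttinclude -->
    <add name="NorthwindConnection"
         connectionString="Data Source=.;Initial Catalog=Northwind;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```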


Linq and lambda support

Want to query your data using Linq and lambda expressions? No problem, you get that out of the box.
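A quick sketch of the sort of thing I mean, using the ActiveRecord classes the templates generate (the Product class and its columns are hypothetical):

```csharp
using System.Linq;

// All() returns an IQueryable over the table, so standard Linq
// operators and lambdas compose into a single SQL query:
var cheapProducts = Product.All()
    .Where(p => p.UnitPrice < 10m)
    .OrderBy(p => p.ProductName)
    .ToList();
```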

You can target databases that aren’t SQL Server

OK, so you’re thinking what’s the point, I’ve got Linq2Sql if I want a simple data access method. Well, SubSonic also works with MySQL and SQLite, and there’s Oracle support almost ready to go.


Flexible code generation with T4

SubSonic is simple to get started with, but that doesn’t mean it’s inflexible: the classes that SubSonic creates for you are generated using T4. If you don’t like the way your classes are being generated by the standard templates you can dive in and start modifying them yourself. Grab a copy of Clarius’ T4 Generator Toolkit to get syntax highlighting and other editor goodness within Visual Studio.

Built in testing

So you’re writing unit tests that need some data, but you don’t want to build a database dependency into your unit tests. SubSonic’s got that covered: if you’re using the ActiveRecord templates you can simply specify a test database in your connection string.

<add name="MyDatabase" connectionString="Test" providerName="System.Data.SqlClient" />

You’ve now got an in-memory database that you can populate with whatever you need in your test setup, so you can do something like this:

[SetUp]
public void SetupTest()
{
    Person.Setup(new Person { Name = "John Smith", Age = 34 });
}

[Test]
public void TestIsOverThirty()
{
    // Arrange
    Person person = Person.FirstOrDefault(p => p.Name == "John Smith");

    // Act
    bool isOverThirty = PersonService.CheckIsOverThirty(person);

    // Assert
    Assert.IsTrue(isOverThirty);
}
No need to create a database, runs in memory and you can unit test your code painlessly, what’s not to like?


Code-first with SimpleRepository

Don’t want to build your database first and then generate classes? Well, you can go in the other direction. SubSonic’s SimpleRepository allows you to build your classes and have them automatically generate your database schema for you. If you just want to get on and build a really simple app this is a great way to go.

It’s not dying and there’s a lot of help out there

There’s a whole load of documentation on the SubSonic site covering everything from getting started to contributing to the project, including FAQs, quickstart videos, how to customise your templates and much, much more. There’s also a community of folks answering questions on stackoverflow.com, so if you get stuck you’ve got a quick way to get help and answers. Oh, and SubSonic’s not dying.

In conclusion, I honestly think SubSonic’s an amazing tool to have available. I know there are plenty of others out there, but why not give it a try? If you’ve got questions about it feel free to comment or send them straight to Stack Overflow.


Add a Google map to a webpage in less than seven easy steps

As you may already know, the latest version of the Google Maps API no longer requires an API key, so I thought I’d see how easy it is to add a quick map to a website. The answer turns out to be: very, very easy.

Add a reference to the API

So the first thing we need to do is reference the API. This is as easy as adding one JavaScript include:

<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false&region=GB"></script>

There are two querystring parameters I’m specifying:

  • sensor=false – This tells Google that I don’t have a GPS sensor that will be passing coordinates.
  • region=GB – This restricts lookups to the region I want to display a map for, meaning that when I look up an address it will be constrained to Great Britain.

Add some html elements

The body of this page is going to be very simple. We’re just going to show a textbox where you can enter a postcode, a button that you click to show the map for that postcode and the map itself.

<body style="margin:0px; padding:0px;" onload="initialize()">
<input id="address" type="textbox" value="L171AP">
<input type="button" value="Show map" onclick="showMap()">
<div id="mapCanvas" style="width:300px; height:300px"></div>

Initialise the map

To show our map we need a Map object and a Geocoder object. We’re going to use the geocoder to get the geolocation of our postcode, so first we create a new Geocoder instance. Next we specify a few options: a zoom level that specifies how far our map will be zoomed in, and a mapTypeId which will make our map show as a roadmap. Then we create a new Map object, passing in the element which will contain the map and the options we’ve specified. We call this function in the onload event of the page and we’ve got a Google map, though it’s not showing anything just yet.

var geocoder;
var map;
function initialize() {
  geocoder = new google.maps.Geocoder();
  var options = {
    zoom: 13,
    mapTypeId: google.maps.MapTypeId.ROADMAP
  };
  map = new google.maps.Map(document.getElementById("mapCanvas"), options);
}

Show the map

So now all we need to do is show the map for our postcode. We call the geocode function of our geocoder, passing in the value of our address text box and a callback function that will center our map on our postcode and add a marker to it.

function showMap() {
  var address = document.getElementById("address").value;
  geocoder.geocode({ 'address': address }, function(results, status) {
    if (status == google.maps.GeocoderStatus.OK) {
      map.setCenter(results[0].geometry.location);
      var marker = new google.maps.Marker({
        map: map,
        position: results[0].geometry.location
      });
    } else {
      alert("Geocode was not successful for the following reason: " + status);
    }
  });
}

That’s it

This was meant to be a seven step tutorial but Google have made working with their API so simple that there’s nothing else to do. The full code is below:

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="initial-scale=1.0, user-scalable=no"/>
<meta http-equiv="content-type" content="text/html; charset=UTF-8"/>
<title>Google Maps JavaScript API v3 Example: Geocoding Simple</title>
<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false&region=GB"></script>
<script type="text/javascript">
var geocoder;
var map;
function initialize() {
  geocoder = new google.maps.Geocoder();
  var myOptions = {
    zoom: 13,
    mapTypeId: google.maps.MapTypeId.ROADMAP
  };
  map = new google.maps.Map(document.getElementById("mapCanvas"), myOptions);
}
function showMap() {
  var address = document.getElementById("address").value;
  geocoder.geocode({ 'address': address }, function(results, status) {
    if (status == google.maps.GeocoderStatus.OK) {
      map.setCenter(results[0].geometry.location);
      var marker = new google.maps.Marker({
        map: map,
        position: results[0].geometry.location
      });
    } else {
      alert("Geocode was not successful for the following reason: " + status);
    }
  });
}
</script>
</head>
<body style="margin:0px; padding:0px;" onload="initialize()">
<input id="address" type="textbox" value="L171AP">
<input type="button" value="Show map" onclick="showMap()">
<div id="mapCanvas" style="width:300px; height:300px"></div>
</body>
</html>

I based all of this on one of the great samples at the Google API geocoding docs.