
WordPress and I had an altercation the other day.

I logged in to the management interface for my blog, and it was begging me to update to WordPress 4.2.2, and given the number of security vulnerabilities that have been in the news lately, I figured that was probably a good idea.

But this time, the automated, one-click update process failed me for the first time. I don't know precisely why, but it blew up mid-stream. Luckily the public portion of the site was still serving, but the admin was completely roasted. So I had to go through the painful process of doing a manual upgrade over FTP.

Luckily, I got everything working on Version 4.2.2, but I resolved at that point that 4.2.2 would be my last.

It's not that I'm a huge WordPress hater. When it works, it works well enough. But I absolutely disdain PHP, kind of like this guy, so I can't really go hack on it very well, because doing so would give me the itchies all over. The freedom I want to have with my blog makes hosted WordPress impossible, but I don't want to go through the hassle of self-hosting either. Plus I really, really want to leave the particular webhost I'm using (because reasons), and moving my blog is a great first step.

I recently joined Particular Software and we do everything on GitHub. No really, I mean everything. (Well OK, GitHub and Slack.) And while I'm no slouch at HTML, when I really want to write, nothing beats Markdown. So it makes sense to take advantage of that.

So, if you're reading this post, my blog is now run by Jekyll on GitHub Pages. How does that work? Glad you asked.

  1. Create a new GitHub repository named after your GitHub username: username.github.io.
  2. Set up a Jekyll site in that repository. (This is the tricky part.)
  3. Write your posts as Markdown files.
  4. Push your changes to GitHub.
  5. GitHub compiles the posts in your master branch into HTML pages and serves them up as static content.
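For example, each post is just a Markdown file with a bit of YAML front matter, dropped into the _posts folder with a date-prefixed file name (the file name and values below are illustrative):

```
---
layout: post
title: "Hello, Jekyll"
---

The post content, written in plain Markdown, goes here.
```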

Rather than do a whole bunch of work on #2, I decided to stand on the shoulders of Phil Haack, whose blog post on converting his own blog had originally informed me about Jekyll, and ~~take inspiration from~~ outright steal his Jekyll repository as a starting point. Luckily, he's OK with that. I did make some changes to make it my own.

The trickiest part turned out to be porting my content. There is a WordPress to Jekyll converter but it goes pretty crazy on you, and doesn't convert the WordPress HTML to Markdown. So I had to do a lot of work on my own to convert the HTML to Markdown with Pandoc and then clean up a lot of the mess afterwards.

But it's definitely worth it. Now I don't have to worry if my blog is down. That's GitHub's problem. I can edit my posts with Markdown using the same GitHub workflow I use every day. And it means I can accept pull requests on my blog! So if I make a mistake, please speak up and correct me!

Hopefully this will make it even easier for me to blog in the future.


For the last few years, I have tried to arrange my career around the following two principles:

  1. Surround myself with the smartest people I can find.
  2. Be prepared to do the thing that scares me a little bit.

And it's been working great. This led me to leave a full-time SaaS company to join ILM Professional Services a few years ago. As a software consultant, I was able to work on multiple projects for different companies, learn new technologies and different methodologies, and have a lot of fun doing it. But the best part about ILM is its people. Even though I would go out to work at client sites, ILM fosters a real sense of community and family, so I was never on an island.

I really can't say enough good things about ILM. If you're a software developer in the Minneapolis/St. Paul area, you love to code, and you have a thirst for knowledge, you should check them out. Tell them David sent you.

These principles of mine also led me to write my book, Learning NServiceBus, which is now in its second edition. Compared to the effort that goes into writing a book, you really don't get paid that much in terms of raw dollars, but it has been amazing for my career growth. Pretty soon I was speaking at conferences and teaching the official NServiceBus training course.

And that has all led to this moment.

It's sad to have to leave ILM, because they have become a family to me, but I am very excited to be joining Particular Software full-time on May 4. (That's right, May the Fourth!) It's pretty difficult to find someone smarter than Udi Dahan, and it will be a privilege to work for him. Add in all the other extremely smart and talented people in the company, and I just hope that I'll be able to keep up.

When I first moved away from home to go to college, my father told me, "Well son, this is going to be quite the adventure." Well, the adventure continues, and I'm excited to get started.


I'm excited to say that the second edition of my book, Learning NServiceBus, has now been published!

Learning NServiceBus Second Edition

The second edition of the book includes the following improvements over the first edition:

  • Completely updated to cover NServiceBus 5.0
  • All-new chapter on the Service Platform (ServiceControl, ServiceInsight, ServicePulse, and ServiceMatrix)
  • More diagrams (these were unfortunately sparse in the first edition)
  • Coverage of V5-specific features (Pipeline, Outbox)
  • Revised and expanded...everything

All told, there are roughly 44 additional pages (over the first edition) of just raw new content.

And perhaps best of all, the new edition includes a foreword from Udi Dahan himself, which tells the story of how NServiceBus got its start in the first place, tracing the history from his early days as a programmer to the point where this book has been published in its second edition. It's very humbling for me personally to have his endorsement on my work, and I am very thankful.

Also, as always, so many thanks to everyone at Particular Software, who were very helpful during the development of the book, and to my tech reviewers Daniel Marbach, Hadi Eskandari, Roy Cornelissen, and Prashant Brall, who made sure that you have the best possible content in your hands.

The book is available for purchase right now from the publisher in physical and eBook forms, and will be available via other channels (Amazon, Barnes & Noble, Safari Books Online, etc.) shortly. I hope of course that you buy it, but more importantly, that you find it useful.


Uncle Bob Martin is one of the true learned elders of our industry, one of those who signed the Agile Manifesto when I was still taking college courses. Recently, he wrote about (and has talked about) something that absolutely blew me away.

Uncle Bob correctly identifies that many in our industry are young (even too young) and that there is a relative lack of older (and one would hope, more experienced) software developers. Not because they are going away, but because of the exponential growth in the number of total software developers. Indeed, he estimates that the number of software developers doubles every five years.

This was not altogether surprising to me, until he pointed out that if the number of developers is doubling every five years, then at any given point in time half of all software developers on the planet have less than five years of experience.


Half of ALL software developers are fairly junior developers without a lot of experience under their belt. So to really help advance the field of software development, we should be finding better ways to train those scores of junior developers joining our ranks each and every year.
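Uncle Bob’s observation is just the arithmetic of doubling; a quick sketch (ignoring attrition and assuming steady exponential growth):

```python
# If the developer population doubles every 5 years, then in each 5-year
# period as many new developers join as existed before. So the newest
# cohort (those with < 5 years of experience) is always half the total.
total = 1.0
for _ in range(10):       # ten 5-year doubling periods
    newcomers = total     # doubling: new entrants equal the prior total
    total += newcomers

print(newcomers / total)  # → 0.5
```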

Luckily, I’ve been given the opportunity to do something about that.

This January, I won’t be going to code up some website for a new client. As part of a partnership between ILM, The Learning House, the Software Craftsmanship Guild, and Concordia University, I will be serving as an Adjunct Professor at Concordia teaching the .NET track of the Coding Bootcamp, an intensive 12-week course designed to take a student with an aptitude for computers and turn them into a well-trained junior developer ready to write code in the real world.

I can’t speak for Computer Science curricula around the country, but in my own college experience I felt that while I was taught the basics of a programming language (C++ at the time), I was not effectively prepared to be a software developer in the real world. Hopefully this has changed in the intervening years, but I found I had to learn a lot of those other skills on the job.

This is why I’m so excited about the Coding Bootcamp curriculum. (Look at the very bottom of the linked page.) It’s not just about learning C#. It doesn’t cover what was current five years ago. Students will be learning ASP.NET MVC 5, Web API, and SQL Server 2012. It’s not just Microsoft-centered; there’s some Dapper in there in addition to Entity Framework. Beyond just HTML and CSS, students will be introduced to jQuery and AngularJS. And they’ll learn about effective source control with Git–a skill I consider just as important as writing the code itself.

All told, what the students will learn has less in common with what I learned in college, and much more in common with the core areas on which we evaluate all potential ILM employees. Teaching students the exact skills they’ll actually need in a real-life job seems like such a crazy idea that it just might work.

I don’t remember a lot about my early days in elementary school, but I do remember that my fourth grade teacher Mrs. Dickerson gave me a book about programming in AppleSoft BASIC 1 that I read cover to cover. (The book pictured may be that book or perhaps a later edition of it. I was unable to find it in my parents' attic.) While other events certainly had an impact on my life’s direction, you could argue that she set me upon a path that led to my eventual career and where I am today. For that I am forever grateful to her, and I hope that I can do her proud.

I’m really looking forward to helping people the way Mrs. Dickerson helped me by sharing what I love to do. If the number of software developers really is doubling every five years, hopefully I can ensure that 10-15 of them at a time are at the very least well prepared to get started.

If you would like to attend the Coding Bootcamp you can apply at

Or, if you are looking for talented new developers, you can join the employer network.

  1. In yet another example of Atwood’s Law, AppleSoft BASIC now runs in JavaScript.


Recently I saw Evil Trout’s screencast Wrapping a jQuery plugin in an Ember.js component, and thought that it would be really valuable to show the exact same plugin implemented instead as an AngularJS directive.

I want to be clear that I don’t intend to start some sort of Angular vs. Ember flame war. I happen to believe, as Ben Lesh concludes in his excellent 6-part blog series comparing the two frameworks, that Angular and Ember are two paths up the same mountain, and that together they are pushing the state of web development forward.

It’s clear that I need better audio equipment if I intend to keep doing screencasts, but I think it’s passable. I hope you enjoy it!


Recently Romiko Derbynew was reading my book, Learning NServiceBus, and noticed a contradiction between the manuscript and the included source code:

In the Sample for DependencyInjection in the book, the code is:

public class ConfigureDependencyInjection : NServiceBus.IWantCustomInitialization

However, in the book it says

IWantCustomInitialization should be implemented only on the class that implements IConfigureThisEndpoint and allows you to perform customizations that are unique to that endpoint.

Of course I could wax philosophic about the tight deadline of book publishing, or how difficult it is to keep the sample code in sync with the manuscript, or how I probably wrote that part of the code and that part of the manuscript on different days on different pre-betas of NServiceBus 4.0, but what it comes down to at the end of the day is #FAIL!

So here’s the real scoop, or at least, updated information as I see it and would recommend now, circa NServiceBus 4.6.1.

The part of the book referenced comes from Chapter 5: Advanced Messaging, where I am discussing the various general extension points in the NServiceBus framework that you can hook into by implementing certain marker interfaces, and then at startup, NServiceBus finds all these classes via assembly scanning and executes them at the proper time.

The interfaces are nominally described in the order that they are executed, and so I described IWantCustomInitialization third, after IWantCustomLogging and IWantToRunBeforeConfiguration, and described it as shown above. (The quoted passage is the entire bullet point.)

Unfortunately, it isn’t that easy.

(The following bit of explanation references history that exists mainly in my mind and, therefore, may not be entirely accurate as I’m not willing to dig through years of Git history to prove it. I might get a detail or two wrong, but stay with me.)

IWantCustomInitialization and IWantCustomLogging are somewhat unique in the list because they have been around forever (I gauge “forever” as since I started with NServiceBus at Version 2.0) and in the meantime, all of the other interfaces were added on (at least as far as my memory serves) through the development of V3 and V4.

So in this before-time of long-long-ago, these two interfaces only worked when applied to the EndpointConfig (the class that implements IConfigureThisEndpoint) but the new ones can be on any class, and there can be multiple ones.

Except as it turns out, IWantCustomInitialization pulls double-duty. It will execute either on the EndpointConfig OR as a standalone class, but with one critical difference: Whether or not it exists on the EndpointConfig changes the order of execution with respect to the other extension point interfaces!

When implemented on a random class, an IWantCustomInitialization will run third, where I described in the book (after IWantToRunBeforeConfiguration, but before INeedInitialization) but if implemented on EndpointConfig, it will run second only to IWantCustomLogging, which always runs first because otherwise, you don’t have logging.

Confused yet? Here’s the definitive updated order:

  1. IWantCustomLogging (only executes on EndpointConfig)
  2. IWantCustomInitialization, implemented on EndpointConfig
  3. IWantToRunBeforeConfiguration
  4. IWantCustomInitialization, implemented on its own class
  5. INeedInitialization
  6. IWantToRunBeforeConfigurationIsFinalized
  7. IWantToRunWhenConfigurationIsComplete *
  8. IWantToRunWhenBusStartsAndStops

* One could argue that IWantToRunWhenConfigurationIsComplete should not be listed as a “general extension point” because it alone is located in the NServiceBus.Config namespace, not in the root NServiceBus namespace with all the others. This may have been an oversight that the NServiceBus developers weren’t willing to break SemVer for (which would require bumping to 5.0), or it may be intentional, but I personally see the value in having a near-the-end extension point with full access to the DI container.

So what would I recommend about IWantCustomInitialization now?

I view the duality of “runs at different times depending upon which class it’s implemented on” to be dangerous and I would rather avoid that, especially since INeedInitialization provides basically the exact same behavior at the exact same time. So I would respect history (and the text in the book) and say that IWantCustomInitialization should only be used on the EndpointConfig for endpoint-specific behaviors, or in a BaseEndpointConfig class you inherit real EndpointConfigs from.

So that would make the code sample from the book wrong, even though it technically works. I would use INeedInitialization for that class instead.
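In other words, something like this instead (a sketch against the NServiceBus 4.x API, where INeedInitialization exposes a void Init() method):

```csharp
public class ConfigureDependencyInjection : NServiceBus.INeedInitialization
{
    public void Init()
    {
        // Common customizations that should run for every endpoint go here.
    }
}
```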

By the way, this is a great time to mention a previously reported issue with the same section of the book. The text says that “INeedInitialization is the best place to perform common customizations such as setting a custom dependency injection container…” but as it turns out, the only place you can set up a custom DI container is from IWantCustomInitialization on the EndpointConfig.

And on a closing note, I would suggest that if you don’t get enough evidence that you are human and make mistakes by being a software developer, you should perhaps consider writing a book. ;-)


In my post Distributed System Monitoring Done Right, I mentioned in passing how ServicePulse doesn't ship with any built-in notification system for failed messages, but that you could easily build a system to send an email (or SMS, or carrier pigeon) to do so.

In this post I'll show you how.

First, create a new Class Library project called ErrorNotify, and turn it into an endpoint by including the NServiceBus.Host NuGet package.

Next, you need to reference the messages assembly that ServiceControl uses for its externally published events. It's called ServiceControl.Contracts and you can find it in your ServiceControl installation directory. For me that's located at:

C:\Program Files (x86)\Particular Software\ServiceControl\ServiceControl.Contracts.dll

Note that ServiceControl uses the JSON serializer internally, so if you subscribe to the failed message notifications, your endpoint will need to use the JSON serializer too. Even if you use a different serializer (like the default XML one) in the rest of your system, it doesn't matter because this error notifier endpoint is completely separate and decoupled from the rest of your system.

To set the serializer to JSON, modify your EndpointConfig.cs given to you by NuGet so that it implements IWantCustomInitialization:
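The result looks roughly like this (a sketch against the NServiceBus 4.x-era API; verify against your version):

```csharp
public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
    public void Init()
    {
        // ServiceControl publishes its events as JSON, so this endpoint
        // must deserialize them with the JSON serializer.
        Configure.Serialization.Json();
    }
}
```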

Next we need to write the actual code to subscribe to the MessageFailed event published by ServiceControl. I'm not going to show you how to build and send an email. That would be boring and silly, and I'm sure you can do it yourself. But it is important to point out that you can extract the FailedMessageId from the failed message details and craft a URL using ServiceInsight's URL scheme that will launch ServiceInsight and show you the offending message directly!
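A skeleton of such a handler might look like this (the FailedMessageId property comes from the ServiceControl.Contracts event; the host, port, and query-string format in the URL are illustrative, so check the ServiceInsight documentation for the exact scheme):

```csharp
public class MessageFailedHandler : IHandleMessages<MessageFailed>
{
    public void Handle(MessageFailed message)
    {
        // Craft a deep link that opens the failed message in ServiceInsight.
        var link = string.Format(
            "si://localhost:33333/api?search={0}", message.FailedMessageId);

        // Build and send the email (or SMS, or carrier pigeon) containing
        // the link here.
    }
}
```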

Lastly, we need to modify the App.config file to subscribe to messages from the Particular.ServiceControl service.
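Assuming the default ServiceControl queue name, the mapping looks something like this:

```xml
<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- Subscribe to all events in the ServiceControl.Contracts assembly -->
    <add Assembly="ServiceControl.Contracts" Endpoint="Particular.ServiceControl" />
  </MessageEndpointMappings>
</UnicastBusConfig>
```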

That's it! Once we deploy this code, we will get email notifications of failures complete with links to ServiceInsight so we can go figure out exactly what went wrong.


When I first started writing Learning NServiceBus, I was targeting Version 4.0 which, at that time, was still several months away from release. Writing about something that’s still very much in flux is definitely a challenge, and to some extent I was definitely learning as I went.

What really struck me during the writing process was how much easier people learning NServiceBus 4.0 were going to have it than I did when I learned NServiceBus 2.0. The developers at Particular Software (a name change from NServiceBus Ltd – a lot of people seem to think they were bought and this is not the case) are really obsessive about making a powerful framework as easy to use as possible, and I salute them for that.

I remember creating endpoints by hand. Create a new Class Library project. Reference the NServiceBus DLLs and NServiceBus.Host.exe. Build so that the EXE is copied to the bin directory. Go to Project Properties. Set the debugger to run the Host. Create an EndpointConfig class. Add an App.config. Enter a bunch of required XML configuration. OK that’s a lie. As I was once quoted during a live coding demo, “Don’t worry I have been doing this for years. You never write this yourself; you always copy it from somewhere else.” Not exactly a glowing recommendation right?

Then you start debugging and hope you didn’t screw anything up.

NServiceBus 3.x and 4.x changed all that. Now you just reference the NServiceBus.Host NuGet package and it sets all that stuff up for you. And if you need some bit of config, you can run a helpful PowerShell cmdlet from the Package Manager Console to generate it for you, along with XML comments describing what every knob and lever does.

NServiceBus 4.x is a fantastic platform to build distributed systems, but as of the release of NServiceBus 4.0 in July 2013, the big thing still missing was the ability to effectively debug a messaging system (let’s face it, gargantuan log files don’t count) and monitor a distributed system in production to make sure everything isn’t running off the rails.

Well that’s all about to change.

Don’t Build Your Own Monitoring Tools

For the first system I ever built on NServiceBus 2.x, I built my own monitoring and management tools because I had no other choice. I didn’t want to remote desktop into a server and launch Computer Management to view the Message Queues. Let’s face it, that tool is heinous enough when run locally. And I certainly didn’t want to remote desktop into the server to run ReturnToSourceQueue.exe, and have to potentially copy and paste a message id into a console window over remote desktop. No thank you!

So I built a tool called MsmqRemote that had a daemon process that I installed on every single server that hosted any NServiceBus queues. It was responsible for interacting with MSMQ and NServiceBus on each server. It had the capability to list queues, and get details about the messages in each queue, and return all of this information to a client application via a WCF service hosted over TCP. It could move and delete messages, all based on MSMQ code I had to write myself. It contained a copy of the relevant ReturnToSourceQueue code so that it could do that operation as well.

The client application was a WinForms monstrosity with four panes. First you selected a server which was populated from a config file, that told the application which WCF service URL to try to connect to. Then it would ask the server for a list of queues, which would appear in the second pane. After selecting a queue, it would ask for a list of messages, which would appear in the third pane, and finally, selecting a message would again go to the server to ask for message details and contents, and the XML representation of the message would appear in the fourth and final pane.

The tool suffered from the same problem that plagues many internal tools. It wasn’t refined or nice or even very usable. It was always the minimum necessary to get the job done, which meant that it was always pretty shitty. It didn’t always work quite right either; when a queue would fill up with a significant number of messages, everything would slow to a crawl. And sometimes the daemon process would just completely crap out because, as you’re probably already aware, WCF is such a joy to work with. (Sarcasm intentional.)

I don’t have any idea how many hours I ended up pouring into that tool, but what I do know for sure is that I wasn’t solving any business problems during that time. Meanwhile it was never the tool I wanted or really needed it to be, and addressing its shortcomings was always my lowest priority.

And MsmqRemote didn’t even begin to cover everything we needed to effectively monitor a production system. Endpoint health was a big concern. It wasn’t unheard of for an endpoint to appear healthy as far as the Process Manager was concerned, but to have stopped processing messages for whatever reason. I can think of one instance where, in retrospect, I’m sure my crappy code was to blame – a “command and control” sort of component implemented in an IWantToRunAtStartup that should have been a bunch of never-ending sagas instead. So my IT Manager would create a bunch of monitors in Microsoft SCOM (it may have been MOM at the time) based on queue sizes, performance counters, and all that sort of stuff. That was really his deal, not mine. But every once in a while we’d forget to register a new endpoint when it got deployed for the first time, so the first time it acted up or stalled, we’d have to deal with problems like a few million messages backed up in a queue with no warning.

What a pain! If only there was a company out there that understood how distributed systems worked that could make tools to address these issues!

The Service Platform

The whole reason that NServiceBus Ltd. changed its name to Particular Software is that they were developing products to meet these needs, making NServiceBus itself only part of the story.

NServiceBus is now joined by a bunch of friends:


ServiceControl is a specialized endpoint that lives on the same server as your centralized audit and error queues. It processes every message that arrives in the audit queue (in other words, a copy of every message that flows through your entire system) and stores the details in an embedded RavenDB database. It then discards those audit messages, because otherwise you’d be running out of disk space in a hurry. It also reads the messages off the error queue and similarly stores them in Raven, but keeps these messages around in a new queue called error.log, because you’ll more than likely want to send them back to their source queues after you fix the underlying problem.

All this information stored in the embedded RavenDB database is made available via a REST API. (Suck it WCF.) With this API you can build your own reports and tools if you like, but this provides the foundation from which the other Service Platform tools are built.


ServiceInsight is a WPF application that makes my little MsmqRemote look like it was written by a 3rd grader, but it extends much deeper than just showing message details and retrying errors. Because it feeds off the ServiceControl API, which is processing the audit messages from ALL your endpoints, it shows a holistic view of your entire distributed system.

When NServiceBus sends or publishes a message in the scope of a message handler, enough headers are added that by the time it gets to ServiceInsight, complete conversations can be stitched together and represented as graphical flows, where sending commands are represented as solid lines and published events are represented as dashed lines.

Check out the flow diagram in this screenshot.

ServiceInsight Flow Diagram

Notice how some of those messages have “policies” mentioned under the timestamp. Those are sagas, and show how the message flow integrates with sagas you write. This is because I’ve included the ServiceControl.Plugin.SagaAudit NuGet Package in my endpoints, which inserts itself into the pipeline to send saga auditing information to ServiceControl.

If you click on one of those, or on the Saga tab near the bottom, you’ll get this amazing visualization showing the saga’s state changes in vivid detail, like this screenshot zoomed to show only the saga flow:

ServiceInsight Saga Diagram

This is pure awesome, and something you’ll only ever have time to build on your own if either 1) you work for Particular, or 2) you work for a company that somehow isn’t concerned with making money. You’re also not going to get this level of tooling from MassTransit. You do, after all, get what you pay for.


Where ServiceInsight is the tool for a developer to debug a system, ServicePulse is the tool for my IT Manager and our other Ops friends to monitor our systems in production and make sure that everything is healthy.

All you need to do is deploy the ServiceControl.Plugin.Heartbeat NuGet package with your endpoint, and it will begin periodically sending heartbeat messages to ServiceControl. ServicePulse is a web application that will use this information, along with information about failed messages, and serve up a dashboard giving you near real-time updates on system health with all sorts of SignalR-powered goodness.

In addition, you can program your own custom checks to be tracked in ServicePulse. For instance, let’s say you needed to be sure a certain FTP server was up. You could program a custom check for that by including the ServiceControl.Plugin.CustomChecks NuGet package and creating a class that inherits PeriodicCheck.
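A sketch of such a check (constructor parameters and CheckResult members shown from memory; verify against the CustomChecks plugin):

```csharp
public class FtpServerCheck : PeriodicCheck
{
    public FtpServerCheck()
        : base("FTP server reachable", "Infrastructure", TimeSpan.FromMinutes(5))
    {
    }

    public override CheckResult PerformCheck()
    {
        // Replace with a real connectivity test against the FTP server.
        bool reachable = TryConnectToFtpServer();

        return reachable
            ? CheckResult.Pass
            : CheckResult.Failed("The FTP server did not respond.");
    }

    static bool TryConnectToFtpServer()
    {
        // ...actual FTP connection attempt goes here...
        return true;
    }
}
```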

This is what ServicePulse looks like moments after I stopped debugging in Visual Studio, causing the heartbeat messages to stop.

ServicePulse Screenshot

Yes, the Endpoints box bounces when there’s an issue. I guess it’s mad at me! I would show you more screenshots, but they’re full of a recent client’s name and I don’t love image editing that much, plus you should go try for yourself!

The one thing that is missing from ServicePulse, by necessity really, is a direct notification feature. You aren’t going to want your Ops people constantly staring at the ServicePulse website; you need some way for them to be notified when there’s an issue. Every company is going to want to do that differently, of course. Some will want a simple email notification, some will want an SMS, some will want integration with a HipChat bot, and of course some will want all of the above!

It’s convenient that ServiceControl is really just another endpoint. It has an events assembly ServiceControl.Contracts that contains events that you can subscribe to. Check out this sample MessageFailedHandler that shows how you could subscribe to the MessageFailed event and send a notification email.

In the future there will be additional tooling to connect ServicePulse with Microsoft SCOM and perhaps other monitoring suites as well.


This article is mostly about system monitoring, and ServiceMatrix is really not a monitoring tool, but it deserves a mention because it is also a part of the new Service Platform suite of tools.

ServiceMatrix is a Visual Studio plugin that makes it possible to build an NServiceBus system with graphical design tools, dragging and dropping to send messages from one endpoint to another, and that sort of thing. It really deserves an article all to itself.

I’ve been doing NServiceBus the hard way for quite some time, so it’s hard for me to wrap my head around doing it graphically. But the hard truth is that the NServiceBus code-by-hand demo I frequently give that takes about an hour to create manually can be done in about 5 minutes with ServiceMatrix. Five Minutes. Udi himself has stated that now that he’s gotten used to ServiceMatrix, he can’t envision creating NServiceBus solutions any other way.

Aside from creation speed and baking in NServiceBus design best practices, ServiceMatrix contains two features I really feel are game-changers.

First, whenever you debug your solution with ServiceMatrix, it will generate a debug session id that is shared with all your endpoints, and reported to ServiceControl via the ServiceControl.Plugin.DebugSession NuGet Package. It will then navigate to a URL starting with the si:// scheme, which is registered to ServiceInsight, so ServiceInsight will open up and show you details for just the messages volleyed around during the current debug session. This means, in many cases, you won’t need to painstakingly arrange all of your endpoint console windows just right so that you can see what’s going on, you’ll just look at the results in ServiceInsight.

Second, when you create an MVC website with ServiceMatrix, it will auto-scaffold a UI to test sending messages with fields to enter message property values. What a big time saver over creating temporary controllers just to test things out, and having to interact with them only from the query string!


When I think about the crummy tools I built in the past for NServiceBus monitoring, in comparison to the new tools in the Service Platform, it reminds me of the difference between my garage and my grandfather’s woodshop. My garage contains a bunch of the basics. Sure, I have a couple saws and screwdrivers and a hammer or two, but my grandfather has been retired for several years and in that time has been pursuing woodworking seriously as more than a hobby, so he’s got a dozen saws and the central vacuum system and all the little toys and jigs you need to really get some serious work done. Every time I need to use the table saw I have to back a car out and drag the saw out from the corner, but he doesn’t waste time with that because his whole workshop is set up and ready to go.

Just as I could accomplish so much more in my grandpa’s workshop than in my garage, I will be able to accomplish so much more with NServiceBus using the tools in the Service Platform. They’re exactly the tools I would have built myself (or better) if only I’d had the time.

But I didn’t have to.

development comments edit

As readers of this blog already know, NServiceBus offers a great framework for building distributed systems with publish/subscribe, automatic retries, long-running business processes, high performance, and scalability. It offers a fully pluggable transport mechanism so that it can be run over MSMQ, RabbitMQ, ActiveMQ, Windows Azure, or even use SQL Server as its queuing infrastructure. No matter which transport you choose, NServiceBus enables you to build a highly reliable system with minimal effort.

But who wants that?

Honestly the developers at Particular have gone a little bit overboard with how easy they have made it to build these robust distributed systems. This is why it’s such good news that, thanks to me, there is finally an NServiceBus transport available that supports RFC 1149: IP Datagrams over Avian Carriers.

That’s right, MSMQ, ActiveMQ, and all those existing transports? Their major failing is they’re all too reliable. There’s just no challenge in creating a system on such a reliable transport. You do that, and your system may be laden with undesirable side effects like running smoothly, and never losing data. As a result, you might get to head home from work on time and be forced to spend time with your family who loves you. You might never get phone calls waking you up in the middle of the night to deal with some sort of crisis. Can you imagine?

And worst of all, without the system crashing down every Monday, Tuesday, and every other Thursday, your boss may start to realize he doesn’t need someone as skilled as you to run it anymore, and may replace you with a couple college interns.

So what you need is a much less reliable transport, and that transport is NServiceBus.Rfc1149. You’re welcome.

So here’s how it works:

  • Messages are stored as text files on a flash drive.
  • The transport uses the first removable drive it can find with a “NServiceBus.Rfc1149” directory at the root level. It doesn’t create this for you. That would be too easy, and you’re not in this for easy.
  • A directory is created for each machine name.
  • Within the machine name directory, a directory is created for each queue.
  • Sent messages are placed in the appropriate directory for the destination machine and queue.
  • Each endpoint reads files from the appropriate queue directory.
  • In order for the messages to be received by the other machines, you must remove the flash drive from the current machine, attach it to the leg of your avian carrier (a domesticated rock pigeon, Columba livia, is recommended) and send the carrier to the physical location of the destination server. Every minute, the transport counts the messages bound for other machines and makes a recommendation for a destination server based on highest pending message count.
  • If no suitable flash drive can be found, it is impossible to send outgoing messages, so sending messages will fail silently. It’s more fun that way.
  • Because no messages can be sent if no flash drive is present, it's advisable to use multiple flash drives with multiple avian carriers. We call this scaling out.
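The mechanics above can be sketched in a few lines of C#. This is not the actual transport code (check the GitHub repository for that); it's a minimal illustration of the drive-discovery and directory conventions, with made-up method names:

```csharp
using System;
using System.IO;
using System.Linq;

// Illustrative sketch only, not the real NServiceBus.Rfc1149 internals.
public static class Rfc1149Sketch
{
    // Find the first removable drive with an NServiceBus.Rfc1149
    // directory at its root. The transport won't create it for you.
    public static string FindRfc1149Root()
    {
        return DriveInfo.GetDrives()
            .Where(d => d.DriveType == DriveType.Removable && d.IsReady)
            .Select(d => Path.Combine(d.RootDirectory.FullName, "NServiceBus.Rfc1149"))
            .FirstOrDefault(Directory.Exists);
    }

    // Sent messages land in <root>\<machine>\<queue>\<messageId>.txt.
    public static void Send(string root, string machine, string queue,
        string messageId, string body)
    {
        var queueDir = Path.Combine(root, machine, queue);
        Directory.CreateDirectory(queueDir);
        File.WriteAllText(Path.Combine(queueDir, messageId + ".txt"), body);
    }

    // Recommend the destination machine with the highest pending message count.
    public static string RecommendDestination(string root)
    {
        return Directory.EnumerateDirectories(root)
            .OrderByDescending(machineDir =>
                Directory.EnumerateFiles(machineDir, "*.txt", SearchOption.AllDirectories).Count())
            .Select(Path.GetFileName)
            .FirstOrDefault();
    }
}
```

Attaching the flash drive to the avian carrier has, so far, stubbornly resisted automation.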

Sound good? Here’s how to use it:

  • Clone the NServiceBus.Rfc1149 source from GitHub and build it yourself. Seriously, when it comes to development getting too easy, NuGet packages are half of the problem.
  • Reference the NServiceBus.Rfc1149 assembly in your endpoints.
  • Stock up on birdseed and newspaper for your avian carriers.
  • Configure the RFC 1149 transport with the following:
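In the NServiceBus V4 configuration API, that looks roughly like the following. The transport type name here is an assumption on my part; verify it against the source on GitHub:

```csharp
// Hedged sketch of a V4 endpoint configuration; check the
// NServiceBus.Rfc1149 repository for the actual transport type name.
Configure.With()
    .DefaultBuilder()
    .UseTransport<Rfc1149Transport>()
    .UnicastBus()
    .CreateBus()
    .Start();
```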

Then fire up your solution and enjoy the low latency and unreliability! Your job security should be ensured for years.

Yes, this is an April Fools post, but the transport really does work, after a fashion, and can be a useful exercise for understanding a little more about how NServiceBus works at its lower levels. Check out the source code on GitHub; it’s well-documented and should be instructive.

development, announcements comments edit

I will also be giving my Modeling Tricks My Relational Database Never Taught Me talk at the first ever RavenDB conference in Raleigh, North Carolina on April 8. Click on the conference banner below for details.

RavenConf 2014

development, announcements comments edit

This week I will be speaking at the Twin Cities .NET User Group:

Modeling Tricks My Relational Database Never Taught Me
Date: Thursday, April 6, 5:30 PM
Location: ILM Professional Services, 5221 Viking Drive, Edina, MN 55435

In this session we will explore several modeling scenarios from my own experience that can easily be achieved using RavenDB but are difficult (if not nearly impossible) to build using a classic relational database. The focus will be on helping those accustomed to SQL Server or other relational databases learn good document modeling skills by example, with a summary of document modeling guidelines at the end.

As always, my employer ILM Professional Services will be providing the pizza at the meeting.

TCDNUG has recently switched to using Meetup to register for these events, but apparently Meetup has become the victim of a very sophisticated DDoS attack and is temporarily unavailable. Seriously, who does that?

So if you are unable to actually register, please come anyway. We would love to see you there!

development comments edit

My publisher has allowed me to reprint my favorite part of my book, Learning NServiceBus, here on my blog. It is the introduction to Chapter 3, Preparing for Failure.

Why is it my favorite? Chapter 3 is the chapter that deals with how to be ready for the inevitable errors that will befall a system due to the fallacies of distributed computing, stupid user tricks, and plain outright buggy code. This is the part of NServiceBus that really grabbed me in the beginning and has never let go.

Plus, and I can’t stress this point enough, this is the part of the book about Batman.

Alfred: Why do we fall, sir? So that we can learn to pick ourselves up.
Bruce: You still haven't given up on me?
Alfred: Never.

-Batman Begins (Warner Bros., 2005)

I'm sure that many readers are familiar with this scene from Christopher Nolan's Batman reboot. In fact, if you're like me, you can't read the words without hearing them in your head delivered by Michael Caine's distinguished British accent.

At this point in the movie, Bruce and Alfred have narrowly escaped a blazing fire set by the Bad Guys that is burning Wayne Manor to the ground. Bruce had taken up the mantle of the Dark Knight to rid Gotham City of evil, and instead it seems as if evil has won, with the legacy of everything his family had built turning to ashes all around him.

It is at this moment of failure that Alfred insists he will never give up on Bruce. I don't want to spoil the movie if, by chance, you haven't seen it, but let's just say some bad guys get what's coming to them.

This quote has been on my mind for the past few months as my daughter has been learning to walk. Invariably she would fall, and I would think of Alfred. I realized that this short exchange between Alfred and Bruce is a fitting analogy for the design philosophy of NServiceBus.

Software fails. Software engineering is an imperfect discipline, and despite our best efforts, errors will happen. Some of us have surely felt like Bruce, when an unexpected error makes it seem as if the software we built is turning to so much ash around us.

But like Alfred, NServiceBus will not give up. If we apply the tools that NServiceBus gives us, even in the face of failure, we will not lose consistency, we will not lose data, we can correct the error, and we will make it through.

And then the bad guys will get what's coming to them.

In this chapter we will explore the tools that NServiceBus gives us to stare failure in the face and laugh.

In the rest of the chapter, you’ll learn about NServiceBus message handlers’ fault tolerance, error queues and message replay, automatic and second-level retries, express and expiring messages, message auditing, and integration with web services.

If you’d like to read more, you can download a preview of Chapter 1 from Packt, or buy the book from Packt, Amazon, Amazon UK, or Barnes & Noble.

Or, if you’d like to get formal NServiceBus training from me in person, you can attend the course I’ll be leading in December. Go to for details and to register.

development comments edit

Last night my blog passed the 100,000 all-time views mark, which feels like a pretty cool (if ultimately fairly meaningless) accomplishment. The lucky 100,000th visitor will receive… absolutely nothing, of course, mostly because I have no way to figure out who they were. But that person, and the 99,999 who came before, have my thanks and appreciation at the very least.

I started this blog in 2010 thinking that I would blog about software development and beer. I quickly learned that although I love craft beer, I had absolutely no ability to describe a beer’s flavor and aroma in any sort of meaningful terms. As far as I was concerned, it was either tasty or not tasty. So the blog became 100% software development and my beer hobby became solely about drinking and enjoying, and later brewing as well. (I currently have a Honey Bee Ale in the fermenter that will be ready to enjoy just before Christmas.)

What I can say is that this blog has done a tremendous amount to advance my career, so if you are a junior developer thinking about starting a blog, don’t think about it, just do it. You will become a better writer, and learn things better through the process of explaining them to others. Then one day, it may help land you a great job or opportunity that may not have been available to you otherwise.

OK that’s it, I promised myself this post would not become a long-winded “why blogging is awesome” post.

So thank you to everyone for sharing in this arbitrary milestone with me. I hope that if you have found yourself here you have found something worthwhile. If not, leave me a comment and maybe I can do something about it!

development comments edit

While of course buying my book is a great way to get started with NServiceBus, absolutely nothing beats formal in-person training. There is no substitute for learning directly from someone who has been there before, who can provide both the information you need and the background behind it, along with the ability to ask questions specific to your use case.

This is why I am so excited to be offering the formal 4-day NServiceBus training course. The course will be held on December 9-12 at ILM’s offices in Minneapolis, Minnesota. This is the official course with the official curriculum originally developed by Udi Dahan himself, but updated for all of the new features in NServiceBus V4.

Click here for all the details for the course, and to register to attend.

If you have any questions about the course, please contact me through the training site contact form or through the comments on this post and I’ll answer any question I can.

I’m looking forward to meeting some talented developers and hopefully throwing back a tasty beverage or two as well!

We’re also planning to hold more training courses on other topics soon, so be sure to watch my Twitter feed for updates.

development comments edit

If you really want to take control of your ASP.NET routes to create a RESTful or at least RESTish URL naming scheme, first you need to let go of default MVC convention-based routes (Controller/Action/{id}) and use something like AttributeRouting. (AR is now half-baked into MVC 5, unfortunately without some of the more useful things AttributeRouting has to offer, like the ~/routes.axd route debugging handler.)

In any case, once you use either method of getting attribute routes, you’re faced with how to integrate that data into your controllers. The first step is to put a RoutePrefix attribute on your controllers, but then to get at that data, you have to scrape the values out of the RouteData manually within your action method, or override OnActionExecuting to do the same, except dropping the values into instance variables.

Either way, it shouldn’t be that hard. This is something that should be handled at the infrastructure level, so let’s look at how to do that via an InjectRouteData ActionFilter attribute:
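A minimal sketch of such a filter follows. This is an illustration rather than production code: it assumes route values should be matched to public controller properties by name, and it skips type-conversion edge cases:

```csharp
using System;
using System.Web.Mvc;

// Copies matching route values onto same-named controller properties
// before the action method executes.
public class InjectRouteDataAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var controller = filterContext.Controller;
        var controllerType = controller.GetType();

        foreach (var routeValue in filterContext.RouteData.Values)
        {
            var property = controllerType.GetProperty(routeValue.Key);
            if (property != null && property.CanWrite && routeValue.Value != null)
            {
                property.SetValue(controller,
                    Convert.ChangeType(routeValue.Value, property.PropertyType), null);
            }
        }

        base.OnActionExecuting(filterContext);
    }
}
```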

Now that we have this ActionFilter in place, we can have these values injected directly into member variables or properties in our controller. This is a neat trick, but of course we could just let MVC do its thing and ask for the value in any controller by adding it as a parameter to the action method, right? Sure, but the injection gives us the ability to do things like this:
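For example, a hypothetical controller (Customer and LoadCustomer are stand-ins for your own domain type and data access, and the routing attributes are shown in AttributeRouting style) can expose a lazily loaded property that every action gets for free:

```csharp
[InjectRouteData]
[RoutePrefix("customers/{CustomerId}")]
public class CustomerController : Controller
{
    // Populated from the {CustomerId} route value by the filter.
    public int CustomerId { get; set; }

    private Customer customer;
    public Customer Customer
    {
        get { return customer ?? (customer = LoadCustomer(CustomerId)); }
    }

    [GET("orders")]
    public ActionResult Orders()
    {
        // No repetitive "get the customer by id" code in each action.
        return View(Customer.Orders);
    }
}
```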

This way you don’t have to have the awkward repetitive code at the beginning of each action method getting the thing by its id, you just have the thing already available.

We’re making the assumption here that every single action method within the controller would have need for the value being injected and that it’s not a waste of time to go to the data store and get it. Also, you have to be careful that values injected in this way use some level of caching (at least within the HTTP request) and don’t go back to the data store multiple times for the same thing. Of course if you’re using an OR/M with enough smarts (or even better, RavenDB) then this is taken care of for you.

What is really nice is to take this pattern as a starting point and extend it to be specific to your domain, so that various well-known objects are automatically injected into your controllers based on the inclusion of their Id. For instance, the injection filter could be smart enough to inject a User object when a userId is present in the routing data. This could then be reused for all the controllers in your application that need to know about a User.

development comments edit

Recently a colleague referred me to a video from Marius Gunderson’s session at JSConf EU 2013 entitled A comparison of the two-way binding in AngularJS, EmberJS and KnockoutJS. It’s an excellent watch, comparing and contrasting the two-way databinding capabilities of each product without descending into a flame war about which is better, and it only weighs in at 20 minutes long. Here’s the video itself.

Of course, everyone is trying to figure out what the best framework for creating single-page applications is right now. Knockout doesn’t qualify as a full framework, but it can be paired with Durandal to fill in the missing bits. RavenDB 3.0 is using Durandal with Knockout to power the new HTML5 RavenDB Studio, replacing the current Silverlight version. Ember is perhaps most notable for being used by Jeff Atwood and the rest of the Discourse team to remake forum software for the next decade. And of course Angular, created by Google, seems to be easily winning most popularity contests, as evidenced by Google Trends.

(Of course, it’s not such a hot time to be a Backbone fan, as Backbone seems to be in decline these days.)

Before watching this video, I had some misgivings about Angular. Robin Ward of the Discourse team has a great blog post touting Ember’s advantages over Angular, and some of these really resonate with me. Specific to this video is the fact that Angular does dirty-checking, sometimes re-evaluating an expression up to 10 times to make sure its value doesn’t drift. This seems ridiculously inefficient, and an invitation to poorly performing web applications.

Other frameworks handle this in different ways. Both Knockout and Ember have the ability to create computed properties, but this comes at the cost of complicated (and differing) APIs, whereas Angular’s dirty-checking allows the use of a plain old JavaScript object.

What this made me realize is that, at least as far as data-binding is concerned, Angular may be the most forward-thinking of all the frameworks. At some point in the future, hopefully browsers will start to support Object.observe (as far as I’m aware, currently only supported by Chrome Canary), and when this happens, Angular will be positioned to use it natively and skip the dirty checking. Other libraries and frameworks, even if they may be slightly more performant in those scenarios today, will be forever saddled with the complicated APIs necessary to get around this temporary shortcoming of JavaScript. Meanwhile, Angular could support browsers without Object.observe by falling back to its current dirty-checking methods. It would be like a free speed upgrade when those browsers become available.

In any case, two-way data-binding is only one aspect of the current SPA battles. A new framework could always be unveiled tomorrow that trumps all of them. Only time will tell.

What do you think? I’d love to hear your thoughts in the comments below.

development comments edit

I presented an introduction to Service Oriented Architecture at Twin Cities Code Camp over the weekend, and unfortunately hit a little bit of a snag during one of the demos. So while the audience got to see all the code that makes Publish/Subscribe work in NServiceBus, they didn't get to see it actually, you know, work. Such is the risk of a live demo. (Even if you test it the day before, which I did!)

This morning I was able to take a closer look at the error message and figure out what was going on. I'm currently starting a project using RavenDB, so the first thing I did was upgrade RavenDB to the latest stable version, 2.5 Build 2700.

Ummmmm....shouldn't have done that.

A license for NServiceBus (including the developer license you get for free from the NServiceBus website) covers your use of RavenDB to handle NServiceBus related storage, for subscriptions, sagas, timeouts, and a few other things that are handled by the NServiceBus framework. If you want to use Raven for your own application (which I would highly recommend) you need a separate license for that in production.

For development where Raven isn't really licensed anyway, I didn't think it would matter that much, but I was wrong.

Raven.Abstractions.Exceptions.OperationVetoedException: PUT vetoed by Raven.Database.Server.Security.Triggers.WindowsAuthPutTrigger because: Cannot setup Windows Authentication without a valid commercial license.

Oops! NServiceBus V4 uses Raven 2.0 and hasn't been tested with 2.5 yet. So what to do if you want to develop solutions with both simultaneously? First, allow NServiceBus to install RavenDB 2.0 on port 8080 as is the default. Then, if you want to develop with RavenDB 2.5 as well, install that to port 8081. In my opinion it's easier to set the nonstandard port on your DocumentStore's URL (which you have to set anyway) than to modify the NServiceBus conventions that (while they are overridable) expect to see RavenDB 2.0 living on port 8080.
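On the application side, the nonstandard port is just a property on your DocumentStore (the database name here is illustrative):

```csharp
// Your application's RavenDB 2.5 instance on port 8081;
// NServiceBus keeps its RavenDB 2.0 instance on the default 8080.
var documentStore = new DocumentStore
{
    Url = "http://localhost:8081",
    DefaultDatabase = "MyApplication"
};
documentStore.Initialize();
```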

development comments edit

I will be presenting an Introduction to Service Oriented Architecture, featuring NServiceBus, at the Twin Cities Code Camp 15 this weekend. The code camp is on the University of Minnesota campus so if you're in the area, I would highly encourage you to come out and see me as well as some other awesome presentations. I will be speaking at 12:45 in room 3-180. Personally I'm really looking forward to seeing Judah Himango, the brains behind the new HTML 5 version of the RavenDB Management Studio, speak about TypeScript. Of course the code camp is entirely free, and I think you can still register.

As a bonus, I'll be giving away a free hardcopy of my book, Learning NServiceBus, during my session!

I hope to see you there!

development comments edit

Well, Christopher Columbus was kind of an all-around poor example of a human being, but the upside is that my publisher is running a pretty awesome sale on all of their eBooks, including my book, Learning NServiceBus. Anyone who uses the promotional code COL50 at checkout will get 50% off any eBooks or videos. That's a pretty good deal. Aside from my book, I would also strongly recommend RavenDB 2.x Beginner's Guide by Khaled Tannir and RavenDB High Performance by Brian Ritchie.

Columbus Day Sale

The sale has been extended until Monday, October 21, so get your order in now!