Development

My parents always told me to say what you mean and mean what you say.  It’s good advice for life in general, but even more so in software engineering.

I’ve been updating our online store’s back-end jobs, which were still running on legacy code, to use our updated codebase in preparation for a database move that would have been impossible before.  At the same time, I’m trying to make the code more efficient and maintainable.

Today the new code threw an exception from the e-commerce provider: we were attempting to capture an amount greater than what was authorized on the card.
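The invariant at stake is simple to state in code.  Here’s a hedged sketch with made-up names (PaymentGuard and ClampCapture are illustrative, not our e-commerce provider’s actual API):

```csharp
using System;

public static class PaymentGuard
{
    // Never ask the provider to capture more than was authorized on the card.
    // (Illustrative names; not the e-commerce provider's actual API.)
    public static decimal ClampCapture(decimal authorizedAmount, decimal requestedCapture)
    {
        if (requestedCapture <= 0m)
            throw new ArgumentOutOfRangeException("requestedCapture");

        // Clamp rather than fail, so partial captures still go through.
        return Math.Min(requestedCapture, authorizedAmount);
    }
}
```

Clamping (or failing fast) before the capture call keeps the provider from ever seeing an over-authorization request.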

Read more →

Development, Technology

Last week in our development team meeting we had a discussion about QR Codes and whether or not there was a good reason to include QR reader functionality in future products.

For those unfamiliar, the short version (stolen from the Wikipedia article linked above):

A QR Code is a matrix code (or two-dimensional bar code) created by Japanese corporation Denso-Wave in 1994. The “QR” is derived from “Quick Response”, as the creator intended the code to allow its contents to be decoded at high speed.

My personal opinion, which I espoused during that meeting, was that QR codes are, for lack of a better word, stupid: they would never appear in anything mainstream, had no potential return on investment, and were generally a complete and utter waste of time, so we should go out of our way to avoid allocating any development time toward them.

Read more →

Development

In my previous post I attempted to build an NServiceBus Timeout Manager that used timers and events to send timeout messages back when they came due, instead of looping through the message queue with thread sleeps until each message is ready to send back, as the timeout manager included with the NServiceBus 2.0 RTM does.

That implementation stored timeouts that were set to expire “soon” in memory, and stored everything in a secondary MSMQ queue so that if the application failed, it could recover when it started back up.

I realized that a better implementation would be to separate out the storage using a provider pattern.  This way, the storage mechanism could be swapped out completely by providing a second class that implements ITimeoutStorageProvider.

I also refactored the MsmqTimeoutStorage class into a base class concerned with the in-memory timeout handling and a subclass that adds the actual storage implementation with MSMQ.
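To make the seam concrete, here is a sketch of the provider idea with guessed member names (the real ITimeoutStorageProvider surface may well differ):

```csharp
using System;
using System.Collections.Generic;

// Guessed shape of the storage seam; illustrative, not the actual interface.
public interface ITimeoutStorageProvider
{
    void Store(Guid sagaId, DateTime expiresUtc, string state);
    IEnumerable<Guid> RemoveExpired(DateTime nowUtc);
}

// Swapping storage means supplying another implementation of the interface;
// here, a minimal in-memory one in place of the MSMQ-backed version.
public class InMemoryTimeoutStorage : ITimeoutStorageProvider
{
    private readonly Dictionary<Guid, DateTime> timeouts = new Dictionary<Guid, DateTime>();

    public void Store(Guid sagaId, DateTime expiresUtc, string state)
    {
        timeouts[sagaId] = expiresUtc;
    }

    public IEnumerable<Guid> RemoveExpired(DateTime nowUtc)
    {
        var due = new List<Guid>();
        foreach (var pair in timeouts)
            if (pair.Value <= nowUtc) due.Add(pair.Key);
        foreach (var id in due) timeouts.Remove(id);
        return due;
    }
}
```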

Read more →

Development

Update 4/30/2010: I updated this code to use a provider implementation so that it is easy to switch out the MSMQ storage for database storage or any other storage medium.  Check it out!

Udi Dahan himself has stated that the TimeoutManager included with NServiceBus 2.0 is “not generically suitable for production” purposes, and since I needed a system that used a TimeoutManager for multiple sagas in production, I set about creating a better one.

For a complete discussion, check out the message thread Timeout Manager process causes heavy disk IO, or for a quicker read, here are some of the limitations of the TimeoutManager included with NServiceBus 2.0:

Read more →

Development

I’ve been working with NServiceBus, and I ran across a problem with System.Transactions.

Basically, I wanted to do the following:

public class RunAtStartup : IWantToRunAtStartup
{
    public IBus Bus { get; set; }

    public void Run()
    {
        // ...doesn't matter

        using (TransactionScope ts = new TransactionScope())
        {
            // Do some database inserts/updates
        }
    }

    public void Stop()
    {
    }
}
This code blew up when I tried to send the message to the bus.

The problem is that we still have SQL 2000 databases, which cannot enlist in System.Transactions transactions without the performance penalty of distributed transactions, so our database library uses a transaction adapter class that magically uses a SqlTransaction under the hood in a very lightweight manner. The error is the result of this library being unable to promote a local transaction to a distributed one, because MSMQ (used under the covers by NServiceBus) can only use distributed transactions.
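One possible workaround, assuming the local SQL work doesn’t actually need to share the ambient transaction: wrap it in a suppressed inner scope.  Inside TransactionScopeOption.Suppress, Transaction.Current is null, so nothing attempts enlistment or promotion.  A minimal demonstration (this is a sketch of the mechanism, not necessarily the fix described after the jump):

```csharp
using System;
using System.Transactions;

public static class SuppressDemo
{
    // Returns true when the inner suppressed scope sees no ambient transaction,
    // meaning local SqlTransaction work done there can't be promoted.
    public static bool InnerScopeIsSuppressed()
    {
        using (var ts = new TransactionScope())
        {
            using (var suppress = new TransactionScope(TransactionScopeOption.Suppress))
            {
                bool noAmbient = Transaction.Current == null;
                suppress.Complete();
                return noAmbient;
                // The outer scope is never completed here; in real code you
                // would call ts.Complete() after the bus send succeeds.
            }
        }
    }
}
```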

Read more →

Development

One of the most frustrating things to me as a software architect is evaluating a new technology for a new project and getting the feeling in my gut that, if properly applied, it could solve a lot of problems, if only I had the time to fully study it and do a lot of research and development. But I’m working on a project, that project has a deadline, and I know it won’t happen. So instead of being unhappy with my code when I find it again a year or two later (as should be the case, or you’re not learning), I’m going to be unhappy with it nearly immediately.

I’m facing a project where I need to build a highly scalable background processing service. Of course, it should be load balanced so that maintenance can be performed on the host servers, and moreover, an architecture that lets me scale by adding more instances on more servers (or even more instances in the cloud) will provide the ultimate in scalability potential.

This is easy with a web application. We have a load balancer, and it’s very good at what it does. A web application just listens for and then processes requests. If you take it off the load balancer it will get no requests. Easy.

This is a lot more convoluted for a back-end process. It doesn’t necessarily sit back and wait for requests; it does things of its own volition. This means that if more than one process is running, there needs to be some concept of who is in charge, with heartbeats between applications to know who else is alive and who is dead. And how are dead and alive defined? The fact that the server will respond to a request on port 80 will no longer cut it.

I’m currently evaluating NServiceBus and I have to say at first blush, it’s pretty impressive. I like how it essentially turns a console process into a bunch of handlers waiting for events - a lot like a web application actually, which would make it easier to load balance. If events get fed into distributors, and then multiple instances can retrieve work out of that work queue, then it makes it more straightforward - an instance will either be active and retrieving work, or it will be inactive and the other instances will make up for it, and more instances can be brought online to help scale out.

But that doesn’t completely eliminate the problem of having a master or lead process. This application, for example, needs to start its work on a schedule. If 3 processes are running, one needs to be in charge and say when to start processing a piece of work. After that one event, the NServiceBus message handling infrastructure can take care of the rest of the scaling.

Hopefully I’ll figure this out before it’s too late for this project. If so, I’ll come back and link to it here.

Development

We all have had it drilled into us how important it is to use a 301 redirect when changing a website’s address, so that the site’s page rank and other SEO goodies are preserved.  Since it’s so important, you’d think it would be easy, or at least straightforward, to accomplish.

I recently moved this blog from a subdirectory of another domain to its very own domain, and wanted to do just that, and found it anything but easy.

People with Apache web servers have long been able to perform 301 redirects in a fairly straightforward manner using .htaccess files.  IIS users, however, have not been so lucky.  Although I have a WordPress blog (because it’s the best out there), I am an ASP.NET developer and I need a Windows server playground to try stuff out in, so I have PHP 5 and .NET 3.5 running on Windows Server 2008.  So .htaccess files aren’t an option for me.

At first I tried redirecting purely with PHP inside the bounds of WordPress.  I tried multiple redirection plugins, none of which seemed to work.  They would redirect my site, but when I checked with Fiddler it was always a 302 redirect.  Unacceptable.
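For what it’s worth, IIS 7 on Server 2008 can issue permanent redirects from configuration alone, no .htaccess required.  A sketch of the old site’s web.config (the destination is illustrative, and this isn’t necessarily the route I ended up taking):

```xml
<!-- web.config for the OLD site: redirect everything permanently (301). -->
<configuration>
  <system.webServer>
    <httpRedirect enabled="true"
                  destination="http://www.example.com"
                  httpResponseStatus="Permanent"
                  exactDestination="false" />
  </system.webServer>
</configuration>
```

With exactDestination="false", IIS appends the requested path to the destination, so deep links survive the move.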

Read more →

Development

I strongly believe that code isn’t perfect.  Ever.  There’s always some improvement that can be made, but frequently I don’t have time to do it, or for whatever reason I can’t do it.  I want to be able to remember where these things are, so that someday when I have free time (insert laugh here) I can go back and do something about it.

A bug tracking system is good, but if it’s something that exists in multiple locations it can be hard to document where all those points are in a defect report.  Even thorough documentation can be hard to track down later if refactoring or re-engineering moves code around.

Visual Studio has had TODO comments since its first version, but they are of limited usefulness.  They show up in Visual Studio’s Task List window (select Task List from the View menu, then select Comments from the dropdown in the toolbar), but only if the file containing the comments is open.  What if I’m trying to track something across multiple classes and methods?

Prime example: we still have SQL Server 2000 databases.  We have a new SQL Server 2008 to migrate to, but with an ample amount of legacy code, the migration is anything but simple, and there are always more important projects with revenue attached.

In the meantime, we have features that would benefit from Common Table Expressions or Table-Valued Parameters, but we can’t use them because we’re still stuck on SQL 2000 for the time being.  I need to be able to mark those chunks of code so I can find them all again someday when the database migration is complete.

Enter reminder attributes.
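Here’s a minimal version of the idea, with illustrative names (not the actual implementation from this post): a custom attribute tags code that’s waiting on the migration, and because attributes live in metadata, the tagged sites can be found later by searching or by reflecting over assemblies, even if refactoring moves the code around.

```csharp
using System;

// Tags code that is waiting on some future event (e.g., the SQL 2000 -> 2008
// migration). Names are illustrative.
[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public sealed class ReminderAttribute : Attribute
{
    public string Message { get; private set; }

    public ReminderAttribute(string message)
    {
        Message = message;
    }
}

public static class Legacy
{
    [Reminder("Rewrite with a CTE once we are off SQL 2000.")]
    public static int CountDescendants() { return 0; }
}
```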

Read more →

Development

A friend of mine posted a political link on Facebook to an article on a site with a perfect implementation of how not to redirect to a mobile website.

I was using the Facebook app on my iPhone, so the redirect to a mobile site wasn’t unexpected.  However, when I arrived, the page had navigation and no story content.  I assumed I had reached the homepage (root) of the mobile site.  This is not helpful.  The page had a link to the full version of the site, but of course that went to the full site’s homepage.  I should not have to go to the full site and then search around to find the content that I was given a deep link for!

When I got back to a real computer, I found out that it was actually much worse.  Using an iPhone UserAgent in Firefox, the Facebook link redirected to the exact same path, but on the mobile domain instead of the main one.

The problem was that the directory structure on the two sites was not the same.  Redirecting to the same path could not hope to succeed because the same path didn’t exist on the mobile site, so all I got was navigation, leading me to believe that I’d been redirected to the mobile homepage.

So here are my thoughts on how to redirect to a mobile site without making your users leave quickly:

  • Keep the directory structures exactly the same between mobile and full versions of your site.  If you can serve both sites with the same plumbing, then all the better.  To give an ASP.NET example, use the same ASPX pages, but change the MasterPageFile during OnPreInit to switch out the template for a simpler version on your mobile site.
  • Include a link to the Full Version in your footer, but make sure that it’s to the full version of the same mobile page currently being viewed.
  • Set a session cookie to keep the user anchored to the full site if that’s where they’ve told you they want to be.
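The MasterPageFile swap from the first bullet can be sketched like this (BasePage, the crude user-agent sniff, and the master page names are all illustrative; in a real site BasePage would derive from System.Web.UI.Page and the swap would happen in an OnPreInit override, the last moment at which MasterPageFile can still be changed):

```csharp
using System;

public class BasePage // : System.Web.UI.Page in a real site
{
    public string MasterPageFile { get; set; }

    // Crude user-agent sniff for demonstration only.
    public static bool IsMobileBrowser(string userAgent)
    {
        if (string.IsNullOrEmpty(userAgent)) return false;
        userAgent = userAgent.ToLowerInvariant();
        return userAgent.Contains("iphone") || userAgent.Contains("android");
    }

    // Same ASPX page, two templates: only the master page changes,
    // so directory structures stay identical between the two sites.
    public void ChooseMasterPage(string userAgent)
    {
        MasterPageFile = IsMobileBrowser(userAgent)
            ? "~/Mobile.master"
            : "~/Site.master";
    }
}
```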

Always keep usability foremost in mind.  90% of people won’t go as far as I did to read your content.

Update March 7, 2011:

A year after I wrote this blog post, XKCD expressed the entirety of my thoughts in simple comic form:

XKCD: Server Attention Span - Licensed under Creative Commons Attribution-NonCommercial 2.5

Development

One Framework to rule them all…

I have a love-hate relationship with Windows Communication Foundation (WCF). I’ve been doing a lot of work with it lately and depending on the day, I think the acronym might stand for Way Cool Feature or Why is Configuration so Frustrating.

One of the most difficult aspects is that there are too many moving parts. Every solution to a problem requires six or seven different parts, each of which can have a core component and a configuration component and probably another couple components, all of which can inter-operate in a few dozen different ways.

Perhaps this is the ultimate drawback of a system that can do so many things: it’s really hard to come up with an elevator pitch that describes it succinctly.

So whenever I finally figure something out, if possible it’s nice to wrap all that confusement (ridiculous non-word used on purpose) up in something more sane and digestible.

So that was my goal in making a WcfPeerNode to encapsulate the power of the NetPeerTcpBinding to create a peer network of interconnected applications around a WCF service contract.

My goal was to interconnect different ASP.NET applications in a server farm, so that when a user performed an action against one server, resulting in a cache item being dropped and reloaded, all servers in the farm could drop the cache item in a coordinated fashion. This would enable longer cache times on seldom-changed data without sacrificing update speed, without going for a full-blown distributed cache like Memcached or Microsoft project code named Velocity. Sometimes I don’t want to deal with a distributed cache and its requirement that everything be serializable, I just want to be able to drop cache entries in all locations simultaneously!

Sadly, it looks like NetPeerTcpBinding doesn’t work in ASP.NET. Although the following code works just fine in a console application, when run in an ASP.NET website it generates the following error:

System event notifications are not supported under the current context. Server processes, for example, may not support global system event notifications.

Um, gee, thanks.

Read more →