A Tale of Five IoC Containers

In my spare time over the last week or so I added a very simple IoC container abstraction to ProjectExtensions.Azure.ServiceBus. Incidentally, if you are interested in building scalable applications that use a service bus, the ProjectExtensions library is a great lightweight choice whether your application lives on Azure or not. The Azure Service Bus API is fairly simple itself until you start dealing with tricky bits like transient faults and the need to poll intelligently to pick up new messages. The ProjectExtensions library takes care of all the nasty details so all you have to do is put messages on the bus and define classes to consume them. Although it does not have nearly all the features of NServiceBus, it doesn’t have all the complexity either. I should also note that the Azure Service Bus is easy to configure, very fast and extremely inexpensive. In many ways, it is even a better choice than MSMQ for applications that live outside Azure if you want your application to be easy to deploy and manage. Anyway, enough digression; I’m here to talk a little about the IoC container abstraction I put together for the ProjectExtensions Azure bus project.

Well, that’s not exactly true either. I’m not going to talk about the implementation. There’s nothing particularly interesting, complex or tricky about it. If you don’t believe me, go look at it on GitHub. I’ll wait. Boring, huh? Anyway, many open source projects implement a minimal application-specific IoC abstraction. On the service bus side, you can find IoC abstractions in both NServiceBus and Rhino Service Bus. I would guess MassTransit has one too. Does anyone use MassTransit anymore? Geez, I’m digressing again. What was my point?

Ahh, I remember now. It sure was easy to put together a simple IoC abstraction for ProjectExtensions.Azure.ServiceBus because, at their core, all five of the popular IoC containers I used (Autofac, Castle Windsor, Ninject, StructureMap and Unity) have similar capabilities, similar APIs and perfectly adequate performance. Certainly, they all have slightly different philosophies and, when you dig deep, significant differences in their APIs. I am simply not the right guy to get into those details here. I am fairly expert with Castle Windsor and Autofac, but I knew very little about the others until a couple of days ago. If you want an expert, in-depth analysis, along with some great insight into how and why to use DI/IoC, go get a copy of “Dependency Injection in .NET” from Manning Press. My only intent here is to point out that they all work, and they all work well for the basic use cases in ProjectExtensions.

So what does ProjectExtensions do with the IoC? Well, it’s as simple as the following:

/// <summary>
/// Generic IOC container interface
/// </summary>
public interface IAzureBusContainer {
    /// <summary>
    /// Resolve the registered component for type T.
    /// </summary>
    /// <typeparam name="T">The service type to resolve.</typeparam>
    /// <returns>The resolved instance.</returns>
    T Resolve<T>() where T : class;

    /// <summary>
    /// Resolve the registered component for the given type.
    /// </summary>
    /// <param name="t">The type to resolve.</param>
    /// <returns>The resolved instance.</returns>
    object Resolve(Type t);

    /// <summary>
    /// Register an implementation for a service type.
    /// </summary>
    /// <param name="serviceType">The service type.</param>
    /// <param name="implementationType">The implementation type.</param>
    /// <param name="perInstance">
    /// True creates an instance each time resolved.  
    /// False uses a singleton instance for the entire lifetime of the process.
    /// </param>
    void Register(Type serviceType, Type implementationType, bool perInstance = false);

    /// <summary>
    /// Registers the configuration instance with the bus if it is not already registered
    /// </summary>
    void RegisterConfiguration();

    /// <summary>
    /// Build the container if needed.
    /// </summary>
    void Build();

    /// <summary>
    /// Return true if the given type is registered with the container.
    /// </summary>
    /// <param name="type">The type to check.</param>
    /// <returns>True if the type is registered; otherwise false.</returns>
    bool IsRegistered(Type type);
}

Pretty simple, huh? Given the requirement to support any IoC container, this kind of interface works pretty well. It gives us everything we need to let library users leverage the container of their choice without making our lives too difficult. Because the interface is so small, it is also quite easy for library users to roll their own support for any container we don’t happen to support ourselves. If we had settled for supporting only one container, we could have had a far more elegant implementation. In fact, that’s what we had when we started with Autofac. However, consumers wanted to use the IoC of their choice with minimal fuss, and this interface makes that possible.
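
To make that concrete, here is a rough sketch of what an adapter for one container might look like, using Autofac as the example. This is not the project’s actual Autofac module; BusConfiguration is just a placeholder for the library’s configuration type, and a real adapter would need to match the library’s lifetime semantics exactly.

using System;
using System.Collections.Generic;
using Autofac;

public class AutofacAzureBusContainer : IAzureBusContainer {
    private readonly ContainerBuilder builder = new ContainerBuilder();
    private readonly HashSet<Type> registeredTypes = new HashSet<Type>();
    private IContainer container;

    // Resolution assumes Build() has already been called.
    public T Resolve<T>() where T : class {
        return container.Resolve<T>();
    }

    public object Resolve(Type t) {
        return container.Resolve(t);
    }

    public void Register(Type serviceType, Type implementationType, bool perInstance = false) {
        var registration = builder.RegisterType(implementationType).As(serviceType);
        if (perInstance) {
            registration.InstancePerDependency();
        }
        else {
            registration.SingleInstance();
        }
        registeredTypes.Add(serviceType);
    }

    public void RegisterConfiguration() {
        // BusConfiguration is a stand-in for the real configuration class.
        if (!IsRegistered(typeof(BusConfiguration))) {
            Register(typeof(BusConfiguration), typeof(BusConfiguration));
        }
    }

    public void Build() {
        // Build once; subsequent calls are no-ops.
        if (container == null) {
            container = builder.Build();
        }
    }

    public bool IsRegistered(Type type) {
        return registeredTypes.Contains(type) ||
            (container != null && container.IsRegistered(type));
    }
}

The adapters for the other supported containers follow the same shape: map perInstance to the container’s transient/singleton lifetimes, and do any explicit build step the first time Build is called.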

There is one little twist left and that’s disposal of per-instance components that happen to implement IDisposable. Castle Windsor, for example, will hold onto disposable components until they are released. There are a couple ways to solve this. For example, some of the containers have the concept of a subcontainer that releases disposables when it goes out of scope. However, I want to keep this implementation simple so a little more investigation is needed. I’ll post the solution I settle on next time.
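
To illustrate the subcontainer idea in isolation, here is a minimal sketch using Autofac’s lifetime scopes. It is only a demonstration of the general technique, not the fix the project ended up with:

using System;
using Autofac;

public class DisposableHandler : IDisposable {
    public void Handle(string message) { /* do the work */ }
    public void Dispose() { /* clean up */ }
}

public static class ScopedResolutionExample {
    public static void Run(string message) {
        var builder = new ContainerBuilder();
        builder.RegisterType<DisposableHandler>().InstancePerLifetimeScope();
        var container = builder.Build();

        // Resolving from a child scope means the disposable component is
        // disposed when the scope ends instead of living for the life of
        // the root container.
        using (var scope = container.BeginLifetimeScope()) {
            scope.Resolve<DisposableHandler>().Handle(message);
        }
    }
}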

Setting Index Options for IDictionary in RavenDB

This post has been updated to work with the latest stable build of RavenDB (Build 573).

In my opinion, RavenDB is the best NoSQL option for .NET applications. Some time ago, I recommended it to one of my clients and they are planning to use it in a major greenfield project involving the re-architecture of their customer and administrative web applications. They are currently working on a series of technical spikes/proofs of concept to better understand technical risks, put together budgets and demonstrate key application capabilities to the project stakeholders. As part of their effort, I’ve been working with them to put together some demonstrations around search and how it can be extended into driving the configuration of custom landing pages for various marketing campaigns.

One of the biggest issues they face is that their products have different configurable options and properties. These are typically contained in an IDictionary<string, string>.  Users need to be able to search on the various properties.  For example, a user might want to find all products that have a property named “color” with a value of “red”.   To make matters more interesting, many of the property values have synonyms that must be usable in search too.  For example, they might need a search on color=maroon to match products where color is red.   RavenDB can do this, but there are some implementation details that are not well documented. This article outlines the solution that worked for us.

The first part of the solution is fairly well documented in the RavenDB Google Group. Take, for example, my colleague’s original post. Given an object MyDocument with a property of type IDictionary<string, string> named Attributes, you create an index entry for each name/value pair as follows:

public class MyDocument_ByProperty : AbstractIndexCreationTask<MyDocument>
{
    public MyDocument_ByProperty()
    {
        Map = docs => from doc in docs
                      select new {
                          _ = from prop in doc.Attributes
                              select new Field(prop.Key, prop.Value,
                                  Field.Store.NO, Field.Index.ANALYZED)
                      };
    }
}
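
With an index like this in place, the attribute keys can be queried as ordinary index fields. Here is a rough usage sketch, not taken from the original discussion; documentStore is assumed to be an initialized DocumentStore and "color" is one of the dynamically generated fields from the map above.

using (var session = documentStore.OpenSession())
{
    var redProducts = session.Advanced
        .LuceneQuery<MyDocument, MyDocument_ByProperty>()
        .WhereEquals("color", "red")
        .ToList();
}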

Although this gave us the ability to search by key/value, it does not handle the synonym requirement. For our proof of concept, we adapted the synonym analyzer described on Code Project. Since RavenDB provides a way to set the analyzer for a field, it should have been easy to configure it to use our synonym analyzer for the various names and values. Unfortunately, the method shown in the documented examples and discussed in the group only allows you to set an analyzer via a LINQ expression. Since the fields in this index are the result of a projection, we could not use it to set the analyzer for the projected fields.

Based on Ayende’s suggestion in the post referenced above, I took a look at the RavenDB source thinking I needed to create a plugin or some other extension to make this possible. As it turned out, the capability was already present. All we had to do was override another method of the AbstractIndexCreationTask as follows:

public class MyDocument_ByProperty : AbstractIndexCreationTask<MyDocument>
{
    // The attribute keys that should use the synonym analyzer
    // ("color" is just an example value).
    private static readonly string[] propertyNames = { "color" };

    public MyDocument_ByProperty()
    {
        Map = docs => from doc in docs
                      select new {
                          _ = from prop in doc.Attributes
                              select new Field(prop.Key, prop.Value,
                                  Field.Store.NO, Field.Index.ANALYZED)
                      };
    }

    public override IndexDefinition CreateIndexDefinition()
    {
        var indexDefinition = base.CreateIndexDefinition();

        foreach (var propertyName in propertyNames)
        {
            indexDefinition.Analyzers.Add(propertyName,
                "Eleanor.Analyzers.SynonymAnalyzer, Eleanor.Analyzers");
        }

        return indexDefinition;
    }
}

This illustrates yet another reason why I am a big advocate of dual licensing, with source available, for commercial programming libraries and tools. The availability of the RavenDB source code made it possible for us to get the most out of the product. It also means that as long as the project goes forward we will buy some RavenDB licenses. That’s a win-win outcome, especially when you consider that without the source we might have been forced to go in another direction, which would have meant the loss of licensing revenue for the developers of RavenDB.

Blogging on JDF Tools and Techniques at the JDF Blog

My passion is building systems that tie together supply chains. For the last several years, I have focused my efforts on the commercial printing industry and the industry’s integration standard, JDF. As my company gets closer to releasing FluentJDF, an open source JDF library for .NET, I will be posting on JDF tools and techniques at the JDF Blog. I will continue to post here on general programming and entrepreneurship.

NServiceBus Fluent Interface is Not All That Fluent

I am only getting started with NServiceBus after having used Rhino ESB for some time.  Overall, I’m liking the functionality.  However, at least in the 2.5 release, configuration is a little sensitive and often doesn’t provide any useful information when things go wrong.  Take, for example, the following configuration for a web application:

NServiceBus.Configure.WithWeb()
    .XmlSerializer()
    .Log4Net()
    .CastleWindsorBuilder()
    .MsmqTransport()
        .IsTransactional(false)
        .PurgeOnStartup(false)
    .UnicastBus()
    .ImpersonateSender(false)
    .CreateBus()
    .Start();

When you put this in your Application_Start method, it throws a null reference exception in the NServiceBus configuration routine. As it turns out, the code was supposed to look like this instead:

NServiceBus.Configure.WithWeb()
    .Log4Net()
    .CastleWindsorBuilder()
    .XmlSerializer()
    .MsmqTransport()
        .IsTransactional(false)
        .PurgeOnStartup(false)
    .UnicastBus()
    .ImpersonateSender(false)
    .CreateBus()
    .Start();

Did you spot the difference?  The issue is you can’t tell it which serializer to use until after you tell it how to configure the container.  Seems kind of fragile if you ask me.  This certainly doesn’t make NServiceBus a bad library, but it does make it quite a bit harder to get started.   Anyway, thanks to this being open source I was able to debug into the offending routine and figure out what was going wrong.

Impressions of Fitnesse With .NET for Acceptance Testing

After using FitNesse for the last several months I can say the following:

  • The documentation is quite limited, especially when it comes to working with .NET. Lots of trial and error involved for any novice.
  • How lucky am I to have Mike Stockdale, the principal developer of FitSharp, working on the project to show the team a variety of useful tricks?  I don’t think we would have been successful with FitNesse without him.
  • Technical product owners are able to write and troubleshoot their own tests using the wiki once the right test fixtures are in place.  Very nice.
  • Our tests generate lots of XML that the product owners review from time to time, so I decided to add syntax highlighting via Google’s prettify JavaScript library. FitNesse uses Velocity templates, so it should have been easy to do. Although I was able to get syntax highlighting working on the test history page, Velocity is not used to generate the live test results, so I couldn’t get it working there. Bummer. I’ll have to find time to contribute a fix, given that the Velocity feature is no longer in active development.
  • Integrating FitNesse with TeamCity is easy as long as you don’t care about integrating the test counts. I wrote a little MSBuild step that takes care of this (a rough sketch follows this list). Note to self: document and release as open source to help others.
  • Integrating FitNesse with TeamCity’s built-in code coverage has proved impossible thanks to the tests running under Java.  Oh well.
  • Database setup and FitNesse add substantial overhead to the acceptance test suite, so it takes several minutes to run. This is not a big deal because our extensive unit test suite remains fast, partly because the integration/acceptance tests run under FitNesse instead.
  • I have looked at alternatives like SpecFlow but remain convinced that FitNesse is about the only automated acceptance testing tool that is approachable for non-programmers.  For example, although  most product owners can write Gherkin specs for SpecFlow,  I don’t think they could easily run and troubleshoot tests like they can with FitNesse.  Therefore, I will continue to use FitNesse for acceptance testing on future projects.
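
As promised above, here is a rough sketch of the kind of glue the TeamCity bullet refers to. It assumes the FitNesse suite was run with XML output (a finalCounts element containing right/wrong/ignores/exceptions counts), and the statistic keys are made up for illustration; the real MSBuild step may well look different.

using System;
using System.Xml.Linq;

public static class FitNesseTeamCityReporter
{
    // Reads a FitNesse results file (produced with format=xml) and reports
    // the summary counts to TeamCity as custom build statistics.
    public static void Report(string resultsPath)
    {
        var counts = XDocument.Load(resultsPath).Root.Element("finalCounts");

        WriteStatistic("fitnesseRight", counts.Element("right").Value);
        WriteStatistic("fitnesseWrong", counts.Element("wrong").Value);
        WriteStatistic("fitnesseIgnores", counts.Element("ignores").Value);
        WriteStatistic("fitnesseExceptions", counts.Element("exceptions").Value);
    }

    private static void WriteStatistic(string key, string value)
    {
        // TeamCity picks up service messages written to standard output.
        Console.WriteLine("##teamcity[buildStatisticValue key='{0}' value='{1}']", key, value);
    }
}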

Developer Station Revisited

I just built a developer station for the company with specs substantially identical to the one I built for personal use a few months ago: an i7-950 processor, 12GB RAM, 240GB SSD main drive, 1TB storage drive, and a Radeon HD5770 graphics card. I economized a bit by going with an 800 watt power supply, RAM not suitable for over-clocking and a cheaper case. The net result is a cost of less than $2,000 including a whole bunch of extras I didn’t have to buy for my personal computer, like three brand new 23″ flat screens, DVI cables, UPS, keyboard, mouse, webcam, etc. Apples to apples, it was about 33% less expensive than my last build. Amazing how fast computer prices fall.

Xoom Impressions

I took the plunge and bought the new Xoom Android 3.0 tablet four days ago and so far I am generally impressed. That’s not to say there aren’t substantial flaws, like applications that don’t know how to handle the large screen (e.g. Mint), applications that don’t yet match their iPad counterparts (e.g. Skype without video support) and applications promised but not yet released (e.g. LogMeIn Ignition and Flash). However, the good outweighs the bad. The tablet is fast and responsive, the built-in applications for web browsing, email and calendar are excellent and the screen is very good indeed.

I can certainly see myself traveling with the Xoom instead of a laptop as long as I don’t have to do heavy-duty development. For example, I was able to access a development environment hosted on Amazon EC2 via RDP to do a little test, fix and patch for a C# application over an average broadband connection without the benefit of a Bluetooth keyboard. Although I would not try this over the current 3G wireless, I fully expect Verizon’s 4G (upgrade available soon) to be fully up to the task.

On the negative side, quite a bit of the potential of the Xoom is untapped right now.  Besides 4G, early adopters will have to wait for Flash, support for the Micro SD slot and versions of popular applications that take full advantage of things like the front-facing camera and the large screen.  Although overall stability is good, I did experience problems with some popular applications such as Skype and Mint.

Developing for the Xoom has been a good experience so far. I use IntelliJ IDEA with the Android SDK, and developing against the device has been trouble-free. The emulator, on the other hand, is ridiculously slow even when running on my otherwise fast i7-950 desktop. For example, I experienced waits of up to three minutes when starting a simple hello world application on the emulator. I can’t quite understand why it has to be so much slower than the emulator for the phone form factors.

If you want to develop for Android tablets or hate big-brother Apple, you will be happy with the Xoom tablet as it exists right now. However, average users would probably be happier with an iPad 2. It is lighter and thinner, has more applications available and has a better UI. Although the Xoom has slightly better hardware, right now the software is a bit too rough around the edges to recommend an Android tablet over the iPad 2 for average users. I fully expect open source, hardware competition and Google to eventually trump the iEmpire, but for now Jobs and company still come out on top.

I wrote the original version of this post on the Xoom.  Unfortunately, the open source WordPress Android application chopped up several of the paragraphs and inserted block quotes seemingly at random.  I guess there is at least one more application in need of an upgrade.

Put Your Apps on the TopShelf

Many of my projects end up using a Windows service or three to host background processes. Over the years, I’ve developed a common-sense strategy of setting up a server class to contain the functionality, with start and stop methods. I then create minimal command-line and Windows service hosts that instantiate the server class and call start and stop when appropriate. This gives me a command-line server that can be conveniently started from the debugger and a Windows service application for use in the production environment. Of course, this also means using InstallUtil when it comes time to install the service.
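
Here is a rough sketch of that arrangement (the names are illustrative): the server class owns the real work, and the Windows service host is a thin wrapper that delegates to Start and Stop.

using System.ServiceProcess;

public class MyServer
{
    public void Start() { /* spin up background processing */ }
    public void Stop() { /* shut it down cleanly */ }
}

public class MyServerService : ServiceBase
{
    private readonly MyServer server = new MyServer();

    protected override void OnStart(string[] args) { server.Start(); }
    protected override void OnStop() { server.Stop(); }
}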

Today I stumbled across a much nicer solution in the open source TopShelf project. It lets me build a console application, in about ten lines of code, that hosts my server for development and provides command-line support for installing it as a Windows service, so InstallUtil is not required. Highly recommended!
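
For reference, here is roughly what that bootstrap looks like. Treat it as a sketch since the exact fluent API varies between Topshelf versions; MyServer is the kind of server class sketched above.

using Topshelf;

public class Program
{
    public static void Main(string[] args)
    {
        // Running the exe as-is hosts the server as a console application;
        // the Topshelf command-line arguments install or uninstall it as a
        // Windows service without InstallUtil.
        HostFactory.Run(x =>
        {
            x.Service<MyServer>(s =>
            {
                s.ConstructUsing(name => new MyServer());
                s.WhenStarted(server => server.Start());
                s.WhenStopped(server => server.Stop());
            });

            x.RunAsLocalSystem();
            x.SetServiceName("MyServer");
            x.SetDisplayName("My Server");
            x.SetDescription("Hosts the background server process.");
        });
    }
}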