Lissome Project

Just started working on the design for the project I discussed in my last post.  I decided to call it Lissome, which means, among other things, “nimble”.  Seems like a decent name for an agile task board.   The Balsamiq mockup can be found in the Lissome repository.

I decided to license it under the AGPL.  I realize this license is fussier than many other open source licenses about making source changes freely available, but that is precisely the point.

Building Something Twice (Just for Fun)

I’ve been playing with Node.js a bit lately and have been looking for something fun and useful to do with it.  Along the way, I’ve also developed a curiosity about how a well-written, non-trivial Node.js application might compare to an equally well-written .NET application in terms of features, quality and development time.  Today, I am starting a spare time project to satisfy my curiosity.

The idea is to develop an agile task board application with a highly interactive web front end and two different fully scalable backends: the first in Node.js and the second in .NET.  My goal is to make the backends as interchangeable as possible.  My initial guess is that the only difference in how the client will communicate with the backend is in the area of server notifications, which will probably use socket.io with the Node.js backend and SignalR with the .NET backend.  For simplicity, both backends will use the same hosted NoSQL and search facilities.  I'm leaning towards RavenDB hosted by RavenHQ, simply because I am familiar with it and because it includes powerful search capabilities.
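
To make the interchangeability goal a little more concrete, the kind of seam I have in mind on the .NET side looks something like the sketch below. The names are placeholders rather than a design commitment; the SignalR implementation would sit behind this interface, and the Node.js backend would expose an equivalent seam over socket.io.

// Sketch only: the backend raises board events through this seam and a
// transport-specific implementation (SignalR here, socket.io on the Node.js
// side) pushes them to connected browsers. All names are placeholders.
public interface ITaskBoardNotifier
{
    void TaskMoved(string boardId, string taskId, string fromColumn, string toColumn);
    void TaskUpdated(string boardId, string taskId);
}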

The whole thing will be open source and I’ll blog about it as I progress.  I’ll start by sketching a basic UI wireframe so I can put together an initial product backlog.  Wish me luck.

A Tale of Five IoC Containers

In my spare time over the last week or so I added a very simple IoC container abstraction to ProjectExtensions.Azure.ServiceBus. Incidentally, if you are interested in building scalable applications that use a service bus, the ProjectExtensions version is a great lightweight choice whether your application lives on Azure or not. The Azure Service Bus API is fairly simple itself until you start dealing with tricky bits like transient faults and the need to poll intelligently to pick up new messages. The ProjectExtensions library takes care of all the nasty details so all you have to do is put messages on the bus and define classes to consume them. Although it does not have nearly all the features of NServiceBus, it doesn’t have all the complexity either. I should also note that the Azure service bus is easy to configure, very fast and extremely inexpensive. In many ways, it is even a better choice than MSMQ for applications that live outside Azure if you want your application to be easy to deploy and manage. Anyway, enough digression; I’m here to talk a little about the IoC container abstraction I put together for the ProjectExtensions Azure bus project.
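
To give a flavor of what that looks like, here is a rough sketch of the publish-and-handle pattern. The type and method names below (IHandleMessages&lt;T&gt;, IReceivedMessage&lt;T&gt;, Bus.Publish) are written from memory and may not match the library's API exactly, so treat this as the shape of the thing rather than copy-and-paste code.

// Sketch of the ProjectExtensions.Azure.ServiceBus usage pattern; the handler
// interface and publish call are approximations, so check the project's readme
// for the exact API.
public class OrderPlaced
{
    public string OrderId { get; set; }
}

public class OrderPlacedHandler : IHandleMessages<OrderPlaced>   // assumed handler interface
{
    public void Handle(IReceivedMessage<OrderPlaced> received)   // assumed signature
    {
        // By the time this runs, polling, retries and transient faults have
        // already been dealt with by the library.
        System.Console.WriteLine("Order received: " + received.Message.OrderId);
    }
}

// Publishing, once the bus has been configured at application startup
// (assumed entry point):
// BusConfiguration.Instance.Bus.Publish(new OrderPlaced { OrderId = "42" }, null);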

Well, that’s not exactly true either. I’m not going to talk about the implementation. There’s nothing particularly interesting, complex or tricky about it. If you don’t believe me, go look at it on Github. I’ll wait. Boring huh? Anyway, many open source projects implement a minimal application-specific IoC abstraction. On the service bus side, you can find IoC abstractions in both NServiceBus and Rhino Service Bus. I would guess MassTransit has one too. Does anyone use MassTransit anymore? Geez, I’m digressing again. What was my point?

Ahh, I remember now. It sure was easy to put together a simple IoC abstraction for ProjectExtensions.Azure.ServiceBus because, at their core, all five popular IoC containers I used (Autofac, Castle Windsor, Ninject, StructureMap and Unity) have similar capabilities, similar APIs and perfectly adequate performance. Certainly, they all have slightly different philosophies and, when you dig deep, significant differences in their APIs. I am simply not the right guy to get into those details here; I am fairly expert with Castle Windsor and Autofac, but knew very little about the others until a couple of days ago. If you want an expert, in-depth analysis, along with some great insight into how and why to use DI/IoC, go get a copy of “Dependency Injection in .NET” from Manning. My only intent here is to point out that they all work, and they all work well for the basic use cases in ProjectExtensions.

So what does ProjectExtensions do with the IoC? Well, it’s as simple as the following:

/// <summary>
/// Generic IOC container interface
/// </summary>
public interface IAzureBusContainer {
    /// <summary>
    /// Resolve component type of T with optional arguments.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <returns></returns>
    T Resolve<T>() where T : class;

    /// <summary>
    /// Resolve component with optional arguments.
    /// </summary>
    /// <param name="t">The type to resolve</param>
    /// <returns></returns>
    object Resolve(Type t);

    /// <summary>
    /// Register an implementation for a service type.
    /// </summary>
    /// <param name="serviceType">The service type.</param>
    /// <param name="implementationType">The implementation type.</param>
    /// <param name="perInstance">
    /// True creates an instance each time resolved.  
    /// False uses a singleton instance for the entire lifetime of the process.
    /// </param>
    void Register(Type serviceType, Type implementationType, bool perInstance = false);

    /// <summary>
    /// Registers the configuration instance with the bus if it is not already registered
    /// </summary>
    void RegisterConfiguration();

    /// <summary>
    /// Build the container if needed.
    /// </summary>
    void Build();

    /// <summary>
    /// Return true if the given type is registered with the container.
    /// </summary>
    /// <param name="type"></param>
    /// <returns></returns>
    bool IsRegistered(Type type);
}

Pretty simple, huh? Given the requirement to support any IoC container, this kind of interface works well. It gives us everything we need to let library users plug in the container of their choice without making our lives too difficult. Because the interface is so small, it is also easy for users to roll their own adapter for any container we don't happen to support ourselves. Had we settled on supporting only one container, we could have produced a far more elegant implementation; in fact, that's what we had when we started with Autofac alone. However, consumers wanted to use the IoC container of their choice with minimal fuss, and this interface makes that possible.
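
To make that concrete, here is a rough sketch of what an adapter for one container can look like, using Autofac. It is a simplified illustration rather than the adapter that ships with the library; in particular, RegisterConfiguration is stubbed out and registrations made after Build is called are not handled.

using System;
using Autofac;

public class AutofacAzureBusContainer : IAzureBusContainer {
    private readonly ContainerBuilder builder = new ContainerBuilder();
    private IContainer container;

    public T Resolve<T>() where T : class {
        return container.Resolve<T>();
    }

    public object Resolve(Type t) {
        return container.Resolve(t);
    }

    public void Register(Type serviceType, Type implementationType, bool perInstance = false) {
        var registration = builder.RegisterType(implementationType).As(serviceType);
        if (perInstance) {
            registration.InstancePerDependency();   // new instance per resolve
        }
        else {
            registration.SingleInstance();          // one instance for the process lifetime
        }
    }

    public void RegisterConfiguration() {
        // The real adapter registers the library's configuration object here;
        // it is omitted from this sketch to keep the focus on the container.
    }

    public void Build() {
        if (container == null) {
            container = builder.Build();
        }
    }

    public bool IsRegistered(Type type) {
        return container != null && container.IsRegistered(type);
    }
}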

There is one little twist left and that’s disposal of per-instance components that happen to implement IDisposable. Castle Windsor, for example, will hold onto disposable components until they are released. There are a couple ways to solve this. For example, some of the containers have the concept of a subcontainer that releases disposables when it goes out of scope. However, I want to keep this implementation simple so a little more investigation is needed. I’ll post the solution I settle on next time.
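
In the meantime, for context, the subcontainer idea mentioned above looks roughly like the following in Autofac: components resolved from a child lifetime scope are disposed when the scope is. This is only a sketch of one option, not the solution I have settled on, and IMessageHandler is a made-up example type.

// Resolve per-message components from a child lifetime scope so that any
// IDisposable instances are released when the scope ends.
using (var scope = container.BeginLifetimeScope()) {
    var handler = scope.Resolve<IMessageHandler>();
    handler.Handle(message);
}   // anything disposable created inside the scope is disposed here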

Setting Index Options for IDictionary in RavenDB

This post has been updated to work with the latest stable build of RavenDB (Build 573).

In my opinion, RavenDB is the best NoSQL option for .NET applications. Some time ago, I recommended it to one of my clients and they are planning to use it in a major greenfield project involving the re-architecture of their customer and administrative web applications. They are currently working on a series of technical spikes/proofs of concept to better understand technical risks, put together budgets and demonstrate key application capabilities to the project stakeholders. As part of their effort, I’ve been working with them to put together some demonstrations around search and how it can be extended into driving the configuration of custom landing pages for various marketing campaigns.

One of the biggest issues they face is that their products have different configurable options and properties. These are typically contained in an IDictionary<string, string>.  Users need to be able to search on the various properties.  For example, a user might want to find all products that have a property named “color” with a value of “red”.   To make matters more interesting, many of the property values have synonyms that must be usable in search too.  For example, they might need a search on color=maroon to match products where color is red.   RavenDB can do this, but there are some implementation details that are not well documented. This article outlines the solution that worked for us.

The first part of the solution is fairly well documented in the RavenDB Google Group. Take, for example, my colleague’s original post.  Given an object MyDocument with a property of type IDictionary<string,string> named Attributes, you create an index entry for each name/value pair as follows:

public class MyDocument_ByProperty : AbstractIndexCreationTask<MyDocument>
{
    public MyDocument_ByProperty()
    {
        Map = docs => from doc in docs
                      select new
                      {
                          _ = from prop in doc.Attributes
                              select new Field(prop.Key, prop.Value, Field.Store.NO, Field.Index.ANALYZED)
                      };
    }
}

Although this gave us the ability to search by key/value, it did not handle the synonym requirement. For our proof of concept, we adapted the synonym analyzer described on Code Project. Since RavenDB provides a way to set the analyzer for a field, it should have been easy to configure it to use our synonym analyzer for the various name/value fields. Unfortunately, the method shown in the documented examples and discussed in the group only allows you to set an analyzer via a LINQ expression; since the fields in this index are the result of a projection, we could not use it to set the analyzer for the projected fields.

Based on Ayende’s suggestion in the post referenced above, I took a look at the RavenDB source thinking I needed to create a plugin or some other extension to make this possible. As it turned out, the capability was already present. All we had to do was override another method of the AbstractIndexCreationTask as follows:

public class MyDocument_ByProperty : AbstractIndexCreationTask<MyDocument>
{
    // The attribute keys that should be run through the synonym analyzer
    // (for example, "color").
    private static readonly string[] propertyNames = { "color" };

    public MyDocument_ByProperty()
    {
        Map = docs => from doc in docs
                      select new
                      {
                          _ = from prop in doc.Attributes
                              select new Field(prop.Key, prop.Value, Field.Store.NO, Field.Index.ANALYZED)
                      };
    }

    public override IndexDefinition CreateIndexDefinition()
    {
        var indexDefinition = base.CreateIndexDefinition();

        foreach (var propertyName in propertyNames)
        {
            indexDefinition.Analyzers.Add(propertyName,
                "Eleanor.Analyzers.SynonymAnalyzer, Eleanor.Analyzers");
        }

        return indexDefinition;
    }
}
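
With the analyzer in place, querying the dynamic fields is straightforward. Here is a rough usage example; it assumes the index has been deployed under its conventional name (MyDocument/ByProperty) and uses the Lucene query API available in this build of the client.

using (var session = documentStore.OpenSession())
{
    // "color" is one of the dynamic fields created by the index; with the
    // synonym analyzer wired up, a search for "maroon" can match documents
    // whose color attribute was stored as "red".
    var products = session.Advanced.LuceneQuery<MyDocument>("MyDocument/ByProperty")
        .WhereEquals("color", "maroon")
        .ToList();
}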

This illustrates yet another reason why I am a big advocate of dual-source licensing for commercial programming libraries and tools. The availability of RavenDB source code made it possible for us to get the most out of the product. It also means that, as long as the project goes forward, we will buy some RavenDB licenses. That's a win-win outcome, especially when you consider that without source we might have been forced to go in another direction, which would have meant the loss of licensing revenue for the developers of RavenDB.

Blogging on JDF Tools and Techniques at the JDF Blog

My passion is building systems that tie together supply chains.  For the last several years, I have focused my efforts on the commercial printing industry and the industry’s integration standard, JDF.  As my company gets closer to releasing FluentJDF, an open source JDF library for .NET, I will be posting on JDF tools and techniques at the JDF Blog.  I will continue to post here on general programming and entrepreneurship.

NServiceBus Fluent Interface is Not All That Fluent

I am only getting started with NServiceBus after having used Rhino ESB for some time.  Overall, I’m liking the functionality.  However, at least in the 2.5 release, configuration is a little sensitive and often doesn’t provide any useful information when things go wrong.  Take, for example, the following configuration for a web application:

NServiceBus.Configure.WithWeb()
    .XmlSerializer()
    .Log4Net()
    .CastleWindsorBuilder()
    .MsmqTransport()
        .IsTransactional(false)
        .PurgeOnStartup(false)
    .UnicastBus()
    .ImpersonateSender(false)
    .CreateBus()
    .Start();

When you put this in your Application_Start method, it throws a null reference exception in the NServiceBus configuration routine.  As it turns out, the code was supposed to look like this instead:

NServiceBus.Configure.WithWeb()
    .Log4Net()
    .CastleWindsorBuilder()
    .XmlSerializer()
    .MsmqTransport()
        .IsTransactional(false)
        .PurgeOnStartup(false)
    .UnicastBus()
    .ImpersonateSender(false)
    .CreateBus()
    .Start();

Did you spot the difference?  The issue is you can’t tell it which serializer to use until after you tell it how to configure the container.  Seems kind of fragile if you ask me.  This certainly doesn’t make NServiceBus a bad library, but it does make it quite a bit harder to get started.   Anyway, thanks to this being open source I was able to debug into the offending routine and figure out what was going wrong.

Impressions of Fitnesse With .NET for Acceptance Testing

After using FitNesse for the last several months I can say the following:

  • The documentation is quite limited, especially when it comes to working with .NET.  Lots of trial and error is involved for any novice.
  • How lucky am I to have Mike Stockdale, the principal developer of FitSharp, working on the project to show the team a variety of useful tricks?  I don’t think we would have been successful with FitNesse without him.
  • Technical product owners are able to write and troubleshoot their own tests using the wiki once the right test fixtures are in place.  Very nice.
  • Our tests generate lots of XML that the product owners review from time to time, so I decided to add syntax highlighting via Google’s prettify JavaScript library.  FitNesse uses Velocity templates, so it should have been easy to do.  Although I was able to get syntax highlighting working on the test history page, Velocity is not used to generate the live test results, so I couldn’t get it working there.  Bummer.  I’ll have to find time to contribute a fix, given that the Velocity-based rendering is no longer under active development.
  • Integrating FitNesse with TeamCity is easy as long as you don’t care about integrating the test counts.  I wrote a little MSBuild step that takes care of this (a rough sketch of the idea follows this list).  Note to self: document it and release it as open source to help others.
  • Integrating FitNesse with TeamCity’s built-in code coverage has proved impossible thanks to the tests running under Java.  Oh well.
  • Database setup and FitNesse itself add substantial overhead to the acceptance test suite, so it takes several minutes to run.  This is not a big deal, though; our extensive unit test suite stays fast partly because the slower integration/acceptance tests run under FitNesse instead.
  • I have looked at alternatives like SpecFlow but remain convinced that FitNesse is about the only automated acceptance testing tool that is approachable for non-programmers.  For example, although  most product owners can write Gherkin specs for SpecFlow,  I don’t think they could easily run and troubleshoot tests like they can with FitNesse.  Therefore, I will continue to use FitNesse for acceptance testing on future projects.
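
For what it’s worth, the idea behind the TeamCity test-count integration mentioned in the list above is simple enough to sketch. The actual step is an MSBuild task, but the same idea written as a small console program looks roughly like this; the results file name and the exact wording of the FitNesse summary line are assumptions, so adjust the regex to match your output.

using System;
using System.IO;
using System.Text.RegularExpressions;

class FitNesseCountsToTeamCity
{
    static void Main(string[] args)
    {
        // Read the saved FitNesse output and pull out the summary counts,
        // e.g. "12 right, 1 wrong, 0 ignored, 0 exceptions" (assumed format).
        var text = File.ReadAllText(args.Length > 0 ? args[0] : "fitnesse-results.txt");
        var match = Regex.Match(text, @"(\d+) right, (\d+) wrong, (\d+) ignored, (\d+) exceptions");
        if (!match.Success)
        {
            Console.WriteLine("##teamcity[buildStatus status='FAILURE' text='Could not parse FitNesse results']");
            return;
        }

        // TeamCity picks these service messages up from stdout and records
        // them as custom build statistics.
        Console.WriteLine("##teamcity[buildStatisticValue key='fitnesse.right' value='{0}']", match.Groups[1].Value);
        Console.WriteLine("##teamcity[buildStatisticValue key='fitnesse.wrong' value='{0}']", match.Groups[2].Value);
        Console.WriteLine("##teamcity[buildStatisticValue key='fitnesse.exceptions' value='{0}']", match.Groups[4].Value);

        if (int.Parse(match.Groups[2].Value) > 0 || int.Parse(match.Groups[4].Value) > 0)
        {
            Console.WriteLine("##teamcity[buildStatus status='FAILURE' text='FitNesse: {0} wrong, {1} exceptions']",
                match.Groups[2].Value, match.Groups[4].Value);
        }
    }
}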