First Impressions of Silverlight

I am in the process of putting together some demos for the shipping API my company has started marketing within the commercial print industry.  I decided to use Silverlight mainly to see if it really makes it easier to deliver a rich UI experience via a web browser on Mac and PC.  If I like it enough, I will probably use it for the administrative interface of our Connect Shipping product.  So far, I am moderately impressed.  Here are some random initial thoughts based on my experience to date:

  • It’s too bad I have to make a Silverlight-specific version of the API DLL.  I know it would be easy to wrap in a web service, but I want the demo app to call the API directly.
  • Creating my Silverlight API DLL is not as bad as I thought it would be, thanks to VS 2010 linked files and a couple of bits of conditionally compiled code specific to Silverlight (see the ApiLog sketch after this list).
  • Caliburn.Micro is a very nice lightweight open source MVVM library for Silverlight and WPF.  It works on Windows Phone 7 too.  The Soup to Nuts tutorial series is quite helpful.
  • Silverlight is sure sticky about the UI thread.  You absolutely have to do your work in the background and marshal any UI updates back through the Dispatcher.
  • Too bad I can’t use log4net.  Yes, I know about Clog.
  • Silverlight’s HttpWebRequest only supports asynchronous methods, which I find sort of annoying.  I realize I need to put this processing in the background, but I think wrapping a synchronous HttpWebRequest in a BackgroundWorker would be much simpler (see the CallApi sketch after this list).
  • The Silverlight testing framework is pretty decent as long as you do not try to use it with Chrome.  It has nice support for testing my asynchronous API.  Too bad the asynchronous tests are not at all compatible with standard MSTest.
  • Although the visual designer for XAML in VS 2010 is decent, it is often easier to hand-edit the XAML.
  • I need to display source code with syntax highlighting.  Jeff Wilcox has put together a nice syntax highlighting text block that does what I need.
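
For the curious, here is a minimal sketch of what I mean by conditionally compiled bits.  ApiLog is a made-up helper used purely for illustration, but the pattern is the real one: a single source file linked into both projects, with an #if SILVERLIGHT block papering over the differences (for example, the missing log4net).

// Shared source file, linked into both the regular .NET project and the
// Silverlight class library.  The SILVERLIGHT symbol is defined automatically
// in Silverlight projects, so the same file compiles in both places.
public static class ApiLog
{
#if SILVERLIGHT
    // No log4net on Silverlight, so fall back to Debug output.
    public static void Info(string message)
    {
        System.Diagnostics.Debug.WriteLine(message);
    }
#else
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger(typeof(ApiLog));

    public static void Info(string message)
    {
        Log.Info(message);
    }
#endif
}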
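
Here is also a rough sketch of the asynchronous pattern Silverlight forces on you for HttpWebRequest, including the hop back to the UI thread.  The CallApi helper and the idea of passing in a callback are mine; treat this as an illustration of the shape of the code, not a piece of the actual demo.

using System;
using System.IO;
using System.Net;
using System.Windows;

public static class RateDemo
{
    // Fetches a URL and hands the response body to onComplete on the UI thread.
    public static void CallApi(string url, Action<string> onComplete)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);

        // Silverlight only exposes the Begin/End pair; the callback below runs
        // on a background thread, not the UI thread.
        request.BeginGetResponse(ar =>
        {
            var req = (HttpWebRequest)ar.AsyncState;
            string body;
            using (var response = req.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                body = reader.ReadToEnd();
            }

            // Touching UI elements from this thread throws, so marshal the
            // result back before updating any controls.
            Deployment.Current.Dispatcher.BeginInvoke(() => onComplete(body));
        }, request);
    }
}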

Review of “Coders at Work” by Peter Seibel

As young as programming is, it seems that most of the practitioners in the field have little sense of the history of it. I’m not entirely sure why this is. Perhaps it is caused by the twin realities of rapidly evolving technologies and the relative youth of the folks that write code. No matter the reasons, it seems like nobody thinks very much about what happened in languages like Assembly, Lisp, Fortran and Smalltalk on platforms like early IBM mainframes, Altos and PDP-11s back in the dark ages before the dawn of the Internet age. It seems to me that it pays to understand where your field has come from. If you feel as I do about this, you should take the time to read “Coders at Work”, a collection of interviews with some of the all-time greatest programmers that most of us have never heard of.

Well, hopefully you’ve heard of some of them. Guys like Ken Thompson (co-creator of UNIX) and Donald Knuth (author of the masterwork “The Art of Computer Programming”) have set down bodies of work that demand attention. Some of them, like Brendan Eich (CTO at the Mozilla Foundation) and Joshua Bloch (Chief Java Architect at Google), might be well-known because of their current positions more than their past work. Others, like Fran Allen (2002 Turing Award winner), are probably unknown to 99% of the programmers on the planet. No matter their fame, every single one of them has fascinating things to say about where we’ve been and where we’re going as a profession.

I think what struck me the most as I read through the first-hand accounts was just how little computer these guys had to work with back in the early days. I thought I had it rough at the start of my career writing systems using HP3000 Basic and its two-character limit on variable names until I read about programming analog computers in the 50s and the early digital computers that supported as little as 200 words of memory. It makes you wonder what people will think of our mildly multi-core servers thirty years from now. It is also amazing how programming has remained just about as hard as it was back then. Sure, we have better tools now, but our users expect much more too.

Although this book does not offer the sweeping, dispassionate viewpoint of a true history, it provides the invaluable personal perspective of the people who made history happen. Reading this book is about the closest many of us will ever get to joggling punch cards and toggling switches to enter code. It’s definitely worthwhile reading for every professional coder.

Check out “Coders at Work” at Amazon.

Review of “Founders at Work”

By the time I finished shutting down my company, Objective Advantage, in March 2009, I was seriously burned out on the idea of entrepreneurship. Starting a company is difficult. Selling off the bits and shutting it down after 11+ years is tougher. When I sat down and considered my future career path, I really figured the only way forward was to go back and be an employee again – no more 70-hour work weeks, no more agonizing over the payroll and fewer sleepless nights. Sure, I’d lose some freedom and some financial upside, but the tradeoffs seemed like a no-brainer. I even turned down an opportunity to own a piece of a company that bought some of the IP from my former company and instead joined them as an employee.

Then I read “Founders at Work”, a collection of interviews with founders of famous technology companies. As I read through the stories, the few embers left over from the fire that had kept me self-employed for almost 20 years started to glow hot again. Each page felt like a personal conversation with one smart founder after another. My old heroes with their garage-startup VC war stories were all there — Steve Wozniak (Apple) and Dan Bricklin (creator of VisiCalc) were two of my personal favorites. There were also great stories from Web 2.0-style founders who self-funded their companies, like Joel Spolsky of Fog Creek. Every founder gave me a useful lesson. Every founder reminded me of something I liked about startups. By the end, the flame of entrepreneurship was burning hot in my gut once again.

How often do you find a book that inspires you? If you have ever considered starting a company or even joining a startup, this book is a must-read. Just be careful: the inspiration, as Thomas Edison famously said, is only 1% of what it takes.

See this book at Amazon.com.

NAnt Task to Synchronize Scripts Folder to Database Using RedGate SQL Compare API

Redgate’s SQL Compare is an indispensable tool for anyone doing development against Microsoft SQL Server.  Its ability to export the database structure to a scripts folder and then synchronize changes from a database back to the scripts folder is its greatest feature: it allows the database structure to be placed under source control with a high degree of granularity, since each database object becomes a script file.

Our build process includes a step to deploy a test/demo version of our web application to a server.  As part of that process, we need to update the database structure from the scripts folder.  Thanks to the SQL Compare API that was included with RedGate’s SQL Toolbelt, I was able to put together an NAnt task that does the job.  Here is the actual NAnt target from our build script:

<target name="updateDatabaseStructure">
    <schemaSync sourceScriptsPath="${project.dir}\db\OnpointConnect"
                destinationServerName="${sqlServerInstance}"
                destinationDbName="OnpointConnect"/>
</target>

You’ll notice the target uses a couple of properties to point at the location of the project root (${project.dir}) and the SQL Server instance name (${sqlServerInstance}).  Since the target does not specify a user name and password, the schemaSync task uses the Windows credentials of the user running the build to log on to the database.  In the case of our automated build, the user account that runs the build process has access to the database.  You can switch to SQL Server authentication by supplying the user name and password attributes described below.

The complete list of supported attributes is as follows:

  • sourceServerName: If synchronizing from a database, the name of the source database server.
  • sourceDbName: If synchronizing from a database, the name of the source database.
  • sourceDbUserName: If using SQL Server authentication, the name of the user in the source database.  Omit this attribute to use Windows authentication.
  • sourceDbPassword: If using SQL Server authentication, the password of the user in the source database.  Omit this attribute to use Windows authentication.
  • sourceScriptsPath: If synchronizing from a scripts folder, the path to the scripts folder.
  • destinationServerName: The name of the server to synchronize to.  REQUIRED.
  • destinationDbName: The name of the database to synchronize to.  REQUIRED.
  • destinationDbUserName: If using SQL Server authentication, the name of the user in the destination database.  Omit this attribute to use Windows authentication.
  • destinationDbPassword: If using SQL Server authentication, the password of the user in the destination database.  Omit this attribute to use Windows authentication.

You can download the source code from my GitHub SouthSideDevToys repository.

RedGate’s licensing will require you to purchase the SQL Comparison SDK after a demo period.  At this time, the product sells for $695, and you should be able to get a 15-day demo from RedGate.

The code is fairly self-explanatory.  I wrapped an NAnt task around a set of classes that work with the API.  The NAnt task assumes that the destination will always be a database, even though the underlying code is capable of synchronizing to a scripts folder.  If you have questions about the code, feel free to comment and I will respond with more information.
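
If you have not written an NAnt extension before, the wrapper itself is just a class derived from NAnt.Core.Task.  The sketch below shows the general shape; the attribute names match the list above, but I have trimmed it to the destination-side attributes and left a placeholder where the classes that drive the SQL Compare API would be called.

using NAnt.Core;
using NAnt.Core.Attributes;

// Rough shape of the schemaSync wrapper.  The real work happens in the
// helper classes that call the SQL Compare API; this class just exposes
// the attributes to the build script and hands the values off.
[TaskName("schemaSync")]
public class SchemaSyncTask : Task
{
    [TaskAttribute("sourceScriptsPath")]
    public string SourceScriptsPath { get; set; }

    [TaskAttribute("destinationServerName", Required = true)]
    public string DestinationServerName { get; set; }

    [TaskAttribute("destinationDbName", Required = true)]
    public string DestinationDbName { get; set; }

    [TaskAttribute("destinationDbUserName")]
    public string DestinationDbUserName { get; set; }

    [TaskAttribute("destinationDbPassword")]
    public string DestinationDbPassword { get; set; }

    protected override void ExecuteTask()
    {
        Log(Level.Info, "Synchronizing {0} on {1} from {2}",
            DestinationDbName, DestinationServerName, SourceScriptsPath);

        // ...hand the attribute values to the classes that drive the
        // SQL Compare API and perform the synchronization here...
    }
}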

RedGate just released Data Compare 8.0, which allows data comparisons and synchronization with scripts.  RedGate says that the next version of their API, due out at the end of August 2009 (UPDATE: it can be obtained by licensed customers through support), will include this capability as well.  Our database includes configuration data that we currently roll out via SQL scripts.  This process is a pain to maintain because each time the data changes we have to update our scripts and test them.  As soon as RedGate’s new API becomes available, I plan to change this process to use Data Compare via NAnt so that everything is automatic.  I’ll publish the NAnt task as soon as I have it working.

Check Your Assumptions at the Door

Every year around this time I start going to McDonald’s almost every day for lunch for one reason and one reason only: Monopoly. I’ve faithfully stuck game pieces to my little board every year in hopes of winning some valuable prize. I’m not greedy. I never dream about winning the big one. No, anything worth more than about $2 would be just fine with me. As you can guess, I’ve won plenty of fries and small Cokes over the years but never the elusive $2+.

Anyway, these last couple of years they’ve gone to a web-based component located at http://www.playatmcd.com. It’s a rather pretty, Flash-based thing that asks you to put in annoyingly long codes from each of your stamps. Once it has validated the code, it lets you roll the dice and moves your piece around the board. All in all, it provides a perfectly satisfying McDonald’s Monopoly experience. One particular feature of the game is that when you land on Community Chest or Chance you get Coke rewards. When you land on the winning square, a message pops up over the game board to inform you of your good fortune, and a follow-up email shows up in your inbox with the info on how to log on to the Coke rewards site to claim your points. I never gave the feature much thought. I’d land on the square, I’d get the popup and a few minutes later the email would show up. It all seemed to work perfectly well until yesterday when I moved my email from GoDaddy to Gmail.

It turns out the system isn’t very smart about the possibility of fast email. First, I rolled an 11, and while my piece was still moving Gmail alerted me about a new message from McDonald’s about a Coke reward. A little odd but not too bad. Clearly, the server sent the email at the same time it told the Flash client where to land my piece. Not perfect but probably unavoidable. The second thing that happened really bugged me. I rolled double threes and landed on a property. Just after landing, I got an email about another Coke reward, which did not make any sense since the game had told me nothing about winning one. My second and last roll (no doubles this time) landed me on Community Chest, where I finally won the Coke reward I had received the email about a couple of minutes before.

So what does this tell me? Well, when the site receives the code from my game piece it must calculate the square where I will land. It might take one roll or it might take a couple. Regardless of how many moves it will take, it sends the prize email at the same time it tells the Flash client what to do. Either the Flash client decides how to get me to the right place in one or more rolls, or the server sends information about the necessary rolls to the Flash client. I never noticed this flaw in the logic before because my old email didn’t show up nearly fast enough to expose it. My guess is the developers of the site either missed this or figured nobody would notice. Once you see this happen, the illusion that pressing the button to roll the dice means something is shattered, and the game is no longer much fun. It makes me wonder why they bothered to implement the animated game board at all. After all, they have a facility that allows you to enter a code and see what you get without the animation.

In the end, it all comes down to where bad assumptions can lead. The developers assumed I would not notice the pointlessness of rolling the dice. Given my reaction to what I saw, I guess somewhere in the primitive part of my brain I actually thought the dice roll mattered.

I leave you with my favorite version of the old saying about assumptions.

Estimating with FogBugz

I spent the weekend putting together a task list and estimates for Onpoint Connect. We use FogBugz for issue tracking, but I always found it a little underpowered for estimating; even though they added Evidence-Based Scheduling (EBS), there were a number of things missing that made it difficult to build a good estimate that could actually be discussed with other managers. Most notably, it was not possible to set up high-level tasks that contained a number of detailed sub-tasks. I am pleased to say that this has changed in FogBugz version 7. The subcase feature allows you to create more detailed (and estimatable) cases that roll up into their parent cases. I used this feature extensively to end up with a list of about 10 major cases to discuss with management, each containing the much more detailed and numerous cases for the developers to estimate. The EBS algorithm combines the estimate data with historical data on each developer’s estimation accuracy to figure out the probability of completion by certain dates. FogBugz will even track the estimated work remaining and probable completion dates as they change and show me the trends. Version 7 of FogBugz has a number of other improvements that impact scheduling, including dependencies between milestones and the ability to distribute staff resources across multiple projects.
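
As a rough illustration of the idea (not FogBugz’s actual implementation, which is more sophisticated), EBS can be thought of as a Monte Carlo simulation: each trial scales the remaining estimates by velocities (estimate divided by actual) sampled from a developer’s history, and the distribution of the totals gives you confidence levels for the ship date. The numbers below are made up.

using System;
using System.Collections.Generic;

// Toy Monte Carlo sketch of evidence-based scheduling: repeat many trials,
// each time scaling every remaining estimate by a randomly chosen historical
// velocity (estimate / actual), then read confidence levels off the results.
class EbsSketch
{
    static void Main()
    {
        double[] remainingEstimates = { 8, 16, 4, 24, 12 };          // hours
        double[] historicalVelocities = { 1.0, 0.8, 0.5, 1.2, 0.6 }; // estimate / actual

        var random = new Random();
        var totals = new List<double>();

        for (int trial = 0; trial < 10000; trial++)
        {
            double total = 0;
            foreach (double estimate in remainingEstimates)
            {
                // Assume past optimism or pessimism will repeat itself.
                double velocity = historicalVelocities[random.Next(historicalVelocities.Length)];
                total += estimate / velocity;
            }
            totals.Add(total);
        }

        totals.Sort();
        Console.WriteLine("50% confidence: {0:F0} hours", totals[totals.Count / 2]);
        Console.WriteLine("95% confidence: {0:F0} hours", totals[(int)(totals.Count * 0.95)]);
    }
}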

FogBugz continues to include well-designed features for handling support cases, support email and documentation. The new version has a plug-in architecture and available plug-ins that add features for agile development, time reporting, documentation and user stories. The bottom line is that I can now recommend version 7 of FogBugz as the ideal tool for issue tracking, scheduling and documentation for development teams using almost any methodology. It is available as a hosted solution ($25/user per month) or for installation ($199/user). Make sure to check it out.

Open Source and Unreasonable Expectations

I like Sharp Architecture. Anybody who’s glanced at this blog must have spotted this by now. I flirted with the .NET Entity Framework (too much hoo-ha in .NET 3.5, and .NET 4.0 not ready yet) and Telerik’s ORM (not used widely enough for my tastes) before settling on NHibernate. Once I did that, it was pretty easy to decide on using Sharp Architecture, since it provided a more or less complete architectural framework built around NHibernate. I started out with a pre-1.0 release and upgraded to 1.0 when it became available.

I don’t agree with every choice Sharp Architecture made. For example, I prefer the Spark view engine over the standard .NET ASPX engine. I also rather like the xVal client-side validation library. None of this was a problem since I had the source. I modified the original code generation templates to generate Spark views instead of ASPX views. While I was at it, I had them generate the little bit of xVal stuff needed and also gave them the ability to read object properties from my existing database tables. I also had to work through issues related to the new file handling mechanisms in the latest version of the T4 template engine.

Most recently, I wanted to upgrade to the latest version of Fluent NHibernate because it fixed some annoying bugs. I checked with the SharpArch group and discovered it wouldn’t be upgraded for a while yet. However, someone was kind enough to offer a set of steps for performing the upgrade. Perfect, I thought, and started to work through them. It turned out there were some breaking changes in Fluent that were not mentioned in the steps. No big deal. They got me 90% of the way there. I worked through the rest of the issues one by one, and within a couple of hours I had everything working. Thanks again, SharpArch community!

The next morning someone posted a question about upgrading to Fluent that I actually knew how to answer. I started to draft up a reply but quickly realized there was too much detail to stick in a post to the group. Instead, I made a long blog post and sent in a reply with a link to the blog. A few questions later I decided to post the upgraded SharpArch binaries as well. Before I knew it, I had spent more time helping others than I had spent upgrading my solution in the first place. Again, no problem. It was the least I could do to start to pay back all the thousands of hours of work that had been contributed by others to make SharpArch possible in the first place.

So what does this experience teach? Well, open source is a community effort. Don’t expect the community to jump in and solve your problems if you are unwilling or unable to take on some of the necessary work. It usually took a whole lot more effort than you realize to get the library to its present state, and community members will not always have the time or inclination to immediately do what you want them to do. If you are willing to roll up your sleeves, open source does give you the ability to do what you want when you want it, since you will always have access to the code. This is rarely the case with commercial products. Finally, when you do add some capability, contribute it back to the community to help make the library better for all.
