NAnt Task to Synchronize a Scripts Folder to a Database Using the RedGate SQL Compare API

RedGate’s SQL Compare is an indispensable tool for anyone doing development against Microsoft SQL Server. Its ability to export the database structure to a scripts folder and then synchronize changes from a database back to that folder is its greatest feature; it allows the database structure to be placed under source control with a high degree of granularity, since each database object becomes a script file.

Our build process includes a step to deploy a test/demo version of our web application to a server. As part of that process, we need to update the database structure from the scripts folder. Thanks to the SQL Compare API included with RedGate’s SQL Toolbelt, I was able to put together an NAnt task that does the job. Here is the actual NAnt target from our build script:

<target name="updateDatabaseStructure">
    <schemaSync sourceScriptsPath="${project.dir}\db\OnpointConnect"
                destinationServerName="${sqlServerInstance}"
                destinationDbName="OnpointConnect"/>
</target>

You’ll notice the target uses a couple of properties to point at the project root (${project.dir}) and the SQL Server instance name (${sqlServerInstance}). Since the task does not specify a user name and password, schemaSync uses the Windows credentials of the user running the build to log on to the database. In the case of our automated build, the account that runs the build process has access to the database. You can switch to a SQL Server login by providing the destinationDbUserName and destinationDbPassword attributes to schemaSync.
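
For example, if the build account had no Windows access to the destination server, the same target could pass SQL Server credentials instead. This is only a sketch: the ${sqlUser} and ${sqlPassword} properties are placeholders you would define elsewhere in your build script, not part of the original target.

<target name="updateDatabaseStructure">
    <!-- Same sync as above, but using SQL Server authentication.
         ${sqlUser} and ${sqlPassword} are assumed properties. -->
    <schemaSync sourceScriptsPath="${project.dir}\db\OnpointConnect"
                destinationServerName="${sqlServerInstance}"
                destinationDbName="OnpointConnect"
                destinationDbUserName="${sqlUser}"
                destinationDbPassword="${sqlPassword}"/>
</target>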

The complete list of supported attributes is as follows:

sourceServerName: If synchronizing from a database, the name of the source database server.
sourceDbName: If synchronizing from a database, the name of the source database.
sourceDbUserName: If using SQL Server authentication, the name of the user in the source database. Omit this attribute to use Windows authentication.
sourceDbPassword: If using SQL Server authentication, the password of the user in the source database. Omit this attribute to use Windows authentication.
sourceScriptsPath: If synchronizing from a scripts folder, the path to the scripts folder.
destinationServerName: The name of the server to synchronize to. REQUIRED.
destinationDbName: The name of the database to synchronize to. REQUIRED.
destinationDbUserName: If using SQL Server authentication, the name of the user in the destination database. Omit this attribute to use Windows authentication.
destinationDbPassword: If using SQL Server authentication, the password of the user in the destination database. Omit this attribute to use Windows authentication.
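
To illustrate the source-database attributes, here is a sketch of a target that synchronizes schema directly from one database to another. The "StagingServer" name is made up for the example:

<target name="syncFromStaging">
    <!-- Hypothetical database-to-database sync; both connections use
         Windows authentication since no user names or passwords are given. -->
    <schemaSync sourceServerName="StagingServer"
                sourceDbName="OnpointConnect"
                destinationServerName="${sqlServerInstance}"
                destinationDbName="OnpointConnect"/>
</target>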

You can download the source code from my SouthSideDevToys repository on GitHub.

RedGate’s licensing requires you to purchase the SQL Comparison SDK after a demo period. At the time of writing, the product sells for $695, and you should be able to get a 15-day demo from RedGate.

The code is fairly self-explanatory. I wrapped an NAnt task around a set of classes that work with the API. The NAnt task assumes that the destination will always be a database, even though the underlying code is capable of synchronizing to a scripts folder. If you have questions about the code, feel free to comment and I will respond with more information.
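
One detail worth mentioning: NAnt has to load the custom task assembly before the schemaSync element can be used. The standard <loadtasks> task handles this; the path below is only an assumption about where you keep the compiled DLL.

<loadtasks>
    <fileset>
        <!-- Assumed location of the compiled task assembly;
             adjust the path to match your own layout. -->
        <include name="${project.dir}\tools\SouthSideDevToys\*.dll"/>
    </fileset>
</loadtasks>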

RedGate just released Data Compare 8.0, which allows data comparisons and synchronization with scripts. RedGate says that the next version of their API, due out at the end of August 2009 (UPDATE: it can be obtained by licensed customers through support), will include this capability as well. Our database includes configuration data that we currently roll out via SQL scripts. This process is a pain to maintain because each time the data changes we have to update our scripts and test them. As soon as RedGate’s new API becomes available, I plan to change this process to use Data Compare via NAnt so that everything is automatic. I’ll publish the NAnt task as soon as I have it working.

Check Your Assumptions at the Door

Every year around this time I start going to McDonald’s almost every day for lunch for one reason and one reason only: Monopoly. I’ve faithfully stuck game pieces to my little board every year in hopes of winning some valuable prize. I’m not greedy. I never dream about winning the big one. No, anything worth more than about $2 would be just fine with me. As you can guess, I’ve won plenty of fries and small Cokes over the years, but never the elusive $2+.

Anyway, these last couple of years they’ve gone to a web-based component located at www.playatmcd.com. It’s a rather pretty, Flash-based thing that asks you to put in annoyingly long codes from each of your stamps. Once it has validated a code, it lets you roll the dice and moves your piece around the board. All in all, it provides a perfectly satisfying McDonald’s Monopoly experience. One particular feature of the game is that when you land on Community Chest or Chance you get Coke Rewards points. When you land on the winning square, a message pops up over the game board to inform you of your good fortune, and a follow-up email shows up in your inbox with instructions on how to log on to the Coke Rewards site to claim your points. I never gave the feature much thought. I’d land on the square, I’d get the popup, and a few minutes later the email would show up. It all seemed to work perfectly well until yesterday, when I moved my email from Go Daddy to Gmail.

It turns out the system isn’t very smart about the possibility of fast email. First, I rolled an 11, and while my piece was still moving, Gmail alerted me to a new message from McDonald’s about a Coke Rewards prize. A little odd, but not too bad. Clearly, the server sent the email at the same time it told the Flash client where to land my piece. Not perfect, but probably unavoidable. The second thing that happened really bugged me. I rolled double threes and landed on a property. Just after landing, I got an email about another Coke Rewards prize, which made no sense since the game had told me nothing about winning again. My second and last roll (no doubles this time) landed me on Community Chest, where I finally won the prize I had received the email about a couple of minutes before.

So what does this tell me? Well, when the site receives the code from my game piece, it must calculate the square where I will land. It might take one roll or it might take a couple. Regardless of how many moves it will take, it sends the prize email at the same time it tells the Flash client what to do. Either the client decides how to get me to the right square over one or more rolls, or the server sends it the rolls to make; either way, the client just adds some nice animation of rolling dice and shows the piece moving around the board. While that is going on, the email is making its way to my inbox, which now happens so fast that I see the email before the piece stops moving. I never noticed this flaw in the logic before because my old email didn’t arrive nearly fast enough to expose it. My guess is the developers of the site either missed this or figured nobody would notice. Once you see it happen, the illusion that pressing the button to roll the dice means something is shattered, and the game is no longer much fun. It makes me wonder why they bothered to implement the animated game board at all. After all, they have a facility that allows you to enter a code and see what you get without the animation.

In the end, it all comes down to where bad assumptions can lead. The developers assumed I would not notice the pointlessness of rolling the dice. Given my reaction to what I saw, I guess somewhere in the primitive part of my brain I actually thought the dice roll mattered.

I leave you with my favorite version of the old saying about assumptions.

Estimating with FogBugz

I spent the weekend putting together a task list and estimates for Onpoint Connect. We use FogBugz for issue tracking, but I always found it a little underpowered for estimating; even though they added Evidence-Based Scheduling (EBS), a number of things were missing that made it difficult to build a good estimate that could actually be discussed with other managers. Most notably, it was not possible to set up high-level tasks that contained a number of detailed subtasks. I am pleased to say that this has changed in FogBugz version 7. The subcase feature allows you to create more detailed (estimatable) cases that roll up into their parent cases. I used this feature extensively to end up with a list of about 10 major cases to discuss with management, each containing the much more detailed and numerous cases for the developers to estimate. The EBS algorithm combines the estimate data with historical data on each developer’s estimate accuracy to figure out the probability of completion by certain dates. FogBugz will even track the estimated work remaining and probable completion dates as they change and show me the trends. Version 7 of FogBugz has a number of other improvements that affect scheduling, including dependencies between milestones and the ability to distribute staff resources across multiple projects.

FogBugz continues to include well-designed features for handling support cases, support email, and documentation. The new version has a plug-in architecture, with plug-ins available that add features for agile development, time reporting, documentation, and user stories. The bottom line is that I can now recommend version 7 of FogBugz as an ideal tool for issue tracking, scheduling, and documentation for development teams using almost any methodology. It is available as a hosted solution ($25 per user per month) or for installation ($199 per user). Make sure to check it out.

Open Source and Unreasonable Expectations

I like Sharp Architecture. Anybody who’s glanced at this blog must have spotted that by now. I flirted with the .NET Entity Framework (too much hoo-ha in .NET 3.5, and .NET 4.0 not ready yet) and Telerik’s ORM (not used widely enough for my tastes) before settling on NHibernate. Once I did that, it was pretty easy to decide on Sharp Architecture, since it provided a more or less complete architectural framework built around NHibernate. I started out with a pre-1.0 version and upgraded to 1.0 when it became available.

I don’t agree with every choice Sharp Architecture made. For example, I prefer the Spark view engine over the standard ASPX engine, and I rather like the xVal client-side validation library. None of this was a problem, since I had the source. I modified the original code-generation templates to generate Spark views instead of ASPX views. While I was at it, I had them generate the little bit of xVal code needed and also gave them the ability to read object properties from my existing database tables. I also had to work through issues related to the new file-handling mechanisms in the latest version of the T4 template engine.

Most recently, I wanted to upgrade to the latest version of Fluent NHibernate because it fixed some annoying bugs. I checked with the SharpArch group and discovered it wouldn’t be upgraded for a while yet. However, someone was kind enough to offer a set of steps for performing the upgrade. Perfect, I thought, and started to work through them. It turned out there were some breaking changes in Fluent that were not mentioned in the steps. No big deal; they got me 90% of the way there. I worked through the remaining issues one by one, and within a couple of hours I had everything working. Thanks again, SharpArch community!

The next morning, someone posted a question about upgrading to Fluent that I actually knew how to answer. I started to draft a reply but quickly realized there was too much detail to stick in a post to the group. Instead, I wrote a long blog post and replied with a link to it. A few questions later, I decided to post the upgraded SharpArch binaries as well. Before I knew it, I had spent more time helping others than I had spent upgrading my solution in the first place. Again, no problem. It was the least I could do to start to pay back the thousands of hours of work that others had contributed to make SharpArch possible in the first place.

So what does this experience teach? Well, open source is a community effort. Don’t expect the community to jump in and solve your problems if you are unwilling or unable to take on some of the necessary work. It usually took a whole lot more effort than you realize to get the library to its present state, and community members will not always have the time or inclination to do what you want immediately. If you are willing to roll up your sleeves, open source gives you the ability to do what you want when you want it, since you will always have access to the code. This is rarely the case with commercial products. Finally, when you do add some capability, contribute it back to the community to help make the library better for everyone.

An Open Source Project to Watch

If you use .NET open source libraries in your development, you need to take a look at hornget, the Horn Package Management Project. Its initial goal is to provide a way to download and build popular .NET open source libraries, like Sharp Architecture and NHibernate, automatically resolving all necessary dependencies. The project is in its very early stages and very rough around the edges. However, when you consider the couple of hours it takes to do something like upgrading Sharp Architecture to Fluent NHibernate 1.0 (as I did recently), you can see why something like hornget would be quite useful. I certainly intend to keep my eye on this one and, if time permits, at least contribute some testing effort.
