Interview on DevOps

From time to time, I get the opportunity to talk to industry reporters about agile and DevOps. Today I was interviewed via email for the first time, and it turned out to be pretty interesting. Here are the questions and answers from that interview.

Please briefly describe how the company is using DevOps, including when it began, which DevOps tools and for which types of projects.

We see DevOps as a culture that encompasses people, practices, tools and philosophy. In that sense, it has become central to everything we do to develop, maintain and operate our e-commerce sites for Blinds.com, JustBlinds.com, AmericanBlinds.com and, of course, Home Depot custom window coverings. Infrastructure is code that evolves in concert with our other software components. DevOps happens inside our agile development teams and often draws in specialized resources from our operations group. It also happens inside our infrastructure group and often draws in developers. It’s part of our DNA.

The tools aspect of it is pretty standard stuff. We use Git and GitHub for source control; all our application and infrastructure code is there. Puppet helps us with rolling out and managing servers. Our backends are mostly .NET, so we use Octopus Deploy to help with rolling out our code. TeamCity is at the center of our development process and is used to expose deployments and tie them together with builds. Logs are mostly managed by Splunk, though we’ve played with an ELK stack for this as well. Nagios is used for infrastructure monitoring. NewRelic is our app monitoring tool, and we depend on it to alert us to problems with the user experience. All our alerts get fed into PagerDuty for escalation management. We’ve been experimenting with Consul for discovery and config, and we’re also experimenting with Docker. What’s holding us back there is .NET on Windows. Of course, that story is changing with .NET Core and Windows 2016 on the horizon, so we have high hopes for Docker as a next step.

What were the business drivers for deploying DevOps?

Agile drove our adoption of DevOps. Our adoption of agile was driven by our organization’s culture more than anything else. One of our key values is “experiment without fear of failure”. Another is “improve continuously”. Over the years, our whole IT process had gotten into that uncomfortable place where limited resources led to a difficult relationship with the rest of the business. They saw us as standing in the way of all the cool experimentation and improvement they wanted to do. Agile helped us break down the walls that had developed and form a true partnership for innovation. DevOps is a necessary part of the agile process. How can you innovate constantly if deployment requires an over-the-wall handoff and lots of manual intervention to get done? If operations and infrastructure are not intimately involved in the process, how can you support and manage it once it gets into production?

What benefits has the company seen from DevOps? 

DevOps enables agile, which allows us to continuously improve. It’s a big part of how we were able to deliver on all the promises of our new e-commerce platform, which led directly to the acquisition by Home Depot. It has allowed us to continue to innovate and thrive inside a Fortune 50 corporation and take on new challenges to help drive innovation outside of the custom window coverings business. DevOps is like oxygen for the agile process. Without it, it’s very possible that we would have ended up with “agile in name only”, where agile terminology is used but nothing really changes and the organization doesn’t see the kind of exponential increase in innovation that we’ve benefited from here.

Any challenges of deploying and using DevOps, and how were they addressed?

Our biggest challenges revolve around security and compliance, especially now that we are part of one of the largest retailers in the world. We’re still learning how to deal with all that when it comes to sharing responsibility for deployment and infrastructure between developers, infrastructure and operations engineers. We’re constantly tempted to solve these problems with handoffs and work hard to avoid that. Now that we have trust across all the impacted groups, it’s much easier to work through them and come up with ways to address compliance without undermining the velocity of innovation.

Rabbit Operations 0.9.0 Released

This version is a minor maintenance release that includes the ability to set a different expiration time for error messages to give you more time to analyze them and possibly replay them. It also includes an upgrade to the latest stable release of RavenDB.  Check out the project website for more information.

The next release, 0.10.0, will include some major improvements including a new GUI and the ability to view and analyze statistics collected from all messages.  You can check out the plan on our Trello board.

Rabbit Operations Version 0.8.0 Released

This is a minor release but it contains one significant goodie: The details dialog for errors now has a section that displays a nicely formatted stack dump. Here are the complete release notes:

  • Nicely formatted stack dump shown on details view of error message
  • Ability to view queue stats as a gauge
  • Improve performance of search screen especially when bringing back large sets of large messages
  • BUG FIX: Small memory leak in poller due to RavenDB profiler stats
  • BUG FIX: Retry to overcome RavenDB transients under high loads when there are more than 40 active queue pollers

Check out the Project Site for more details.

RabbitOperations Project Launched

An early preview release of my new open source project, RabbitOperations, is now available at https://github.com/SouthsideSoftware/RabbitOperations. The idea is to provide some tools for managing real-world applications that use RabbitMQ. It will support popular message buses like NServiceBus and Rebus with error replay, audit & error logging, sophisticated search capabilities and likely an integration with NewRelic to log stats about queue lengths etc. This very early release lacks a UI and is only suitable for experimentation and potential contributors.

ASP.NET MVC Attribute Supporting SSL Terminated at Amazon Elastic Load Balancer

In my last post, I described an ASP.NET Web Api RequireHttps attribute that supports SSL terminated at a load balancer like Amazon’s ELB.  Here’s a RequireHttps attribute for ASP.NET MVC with load balancer support:

using System;
using System.Web;
using System.Web.Mvc;

namespace MvcHelpers
{
    // Works like the built-in RequireHttpsAttribute but also treats requests
    // whose SSL was terminated at a load balancer as secure.
    public class RequireHttpsSupportsLBAttribute : RequireHttpsAttribute
    {
        public override void OnAuthorization(AuthorizationContext filterContext)
        {
            // Already secure, either directly or via the load balancer: nothing to do.
            if (filterContext.HttpContext.Request.IsSecureConnectionConsideringLoadBalancer()) return;

            base.OnAuthorization(filterContext);
        }
    }

    public static class HttpRequestBaseHelper
    {
        public static bool IsSecureConnectionConsideringLoadBalancer(this HttpRequestBase request)
        {
            return request.IsSecureConnection || LoadBalancerSecured(request);
        }

        // Load balancers that offload SSL inject an X-Forwarded-Proto header
        // carrying the protocol of the original request.
        public static bool LoadBalancerSecured(HttpRequestBase request)
        {
            return string.Equals(request.Headers["X-Forwarded-Proto"],
                "https",
                StringComparison.InvariantCultureIgnoreCase);
        }
    }
}
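
To use it, decorate a controller or action just as you would the built-in RequireHttps attribute. Here’s a minimal sketch; the AccountController and its Login action are hypothetical and only show where the attribute goes:

using System.Web.Mvc;

namespace MvcHelpers
{
    // Hypothetical controller used only to show the attribute in place.
    [RequireHttpsSupportsLB]
    public class AccountController : Controller
    {
        // Requests that are not secure, directly or via the load balancer,
        // get the standard RequireHttps handling from the base attribute.
        public ActionResult Login()
        {
            return View();
        }
    }
}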

ASP.NET Web API 2 RequireSsl Attribute With Support For Terminating SSL At Load Balancer

Most modern load balancers, including Amazon’s Elastic Load Balancer (ELB), allow you to configure them to handle SSL. Although they can forward the request to your web nodes using SSL, it is more efficient to offload the SSL processing to the load balancer and forward requests from there to your web servers using plain HTTP on port 80. Load balancers that support offloading SSL generally inject an “X-Forwarded-Proto” header into the request with the value “http” or “https” to indicate the protocol of the original request. This approach is quite secure, as the load balancer typically replaces any “X-Forwarded-Proto” header present in the original request. This is true for ELB.

You can use this header in ASP.NET Web API to make sure a request is secure. For example, here’s an attribute you can put on any controller or controller method to require SSL. It supports SSL terminated at the load balancer as well as plain old SSL straight to the server:

using System;
using System.Linq;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

namespace AspNetApiHelpers
{
    // Requires HTTPS on a Web API controller or action. A request counts as
    // secure if it arrived over HTTPS directly or if the load balancer
    // terminated SSL and forwarded it with X-Forwarded-Proto: https.
    public class RequireHttpsAttribute : AuthorizationFilterAttribute
    {
        public override void OnAuthorization(HttpActionContext actionContext)
        {
            if (actionContext.Request.RequestUri.Scheme != Uri.UriSchemeHttps && !IsForwardedSsl(actionContext))
            {
                // Reject insecure requests outright.
                actionContext.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden)
                {
                    ReasonPhrase = "HTTPS Required"
                };
            }
            else
            {
                base.OnAuthorization(actionContext);
            }
        }

        // True when a load balancer that offloaded SSL injected an
        // X-Forwarded-Proto header containing "https".
        private static bool IsForwardedSsl(HttpActionContext actionContext)
        {
            var xForwardedProto = actionContext.Request.Headers.FirstOrDefault(x => x.Key == "X-Forwarded-Proto");
            return xForwardedProto.Value != null &&
                xForwardedProto.Value.Any(x => string.Equals(x, "https", StringComparison.InvariantCultureIgnoreCase));
        }
    }
}
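
To use it, apply the attribute to any Web API controller or action. The controller below is a hypothetical sketch just to show the attribute in place:

using System.Collections.Generic;
using System.Web.Http;

namespace AspNetApiHelpers
{
    // Hypothetical controller used only to show the attribute in place.
    [RequireHttps]
    public class OrdersController : ApiController
    {
        // Returns 403 "HTTPS Required" unless the request is HTTPS, either
        // directly or as indicated by X-Forwarded-Proto.
        public IEnumerable<string> Get()
        {
            return new[] { "order-1", "order-2" };
        }
    }
}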