Continuous Deployment: Behind The Scenes of the Oxford Flood Network

Software deployment is hard to do well. When others’ businesses – and yours – rely on your software, you work hard to avoid problems, and to make them easy to fix when they do occur. Depending on your business requirements and your software architecture (e.g. monolith vs. microservices), you choose between infrequent large releases and many small ones.

The Oxford Flood Network software isn’t yet publicly available, as it’s currently an R&D project. Even so, we’ve been constantly receiving sensor data for months now, necessitating uninterrupted server uptime and immediate resolution of bugs. We’re constantly working on the code, so we need to keep deploying. What is the right methodology for us?

As an R&D team, our favoured process was “none”. Unstructured environments supposedly foster creativity, so we’ve jealously guarded our freedom from rigid processes. Improvised manual deployments have happened whenever we’ve had time and inclination. Should we challenge this orthodoxy?

The first Floodnet deployment was a sketchy proof-of-concept, and bugs arose daily. Manual deployment soon became tedious, so we found ourselves hacking out bugs with vim on the production server to save time. If our project were business-critical, we would rightly have been fired many times! We wondered if deploying the ‘proper’ way – checking changes into version control, awaiting automated testing, and deploying packages – could be automated, and made rapid enough for immediate bugfixes. I’d personally wanted to try so-called Continuous Deployment for ages, and here was a project which demanded it.

We ruthlessly streamlined the deployment process and scripted it for our Continuous Integration server. Our projects, if successful, are bound for our main Development teams, so we use the same tools as them: Subversion, TeamCity, and RPMs. It took a week or so to automate installation, building, testing, packaging, uploading to our EC2 instance, and deploying. Now we check in a change, and it is live about two minutes later.
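The steps above can be sketched as a single shell script of the kind a CI server runs on each check-in. This is a hypothetical illustration, not our actual build: the project name, revision number, and the tarball standing in for our real RPM packaging are all placeholders.

```shell
#!/bin/sh
# Hypothetical sketch of the automated pipeline: build, test, package, deploy.
# Names and paths are illustrative, not the real Floodnet build.
set -e                      # any failing step aborts the whole deploy

APP=floodnet                # hypothetical package name
REV=1234                    # on TeamCity this would be the Subversion revision

# 1. Build: stand-in for the real build step
mkdir -p build
printf 'hello from %s r%s\n' "$APP" "$REV" > build/MANIFEST

# 2. Test: stand-in for the automated test run; a failure stops everything
grep -q "$APP" build/MANIFEST

# 3. Package: a tarball stands in here for the team's RPMs
tar -czf "$APP-r$REV.tar.gz" -C build .

# 4. Upload and deploy: on the CI server this would be scp/ssh to the EC2 box
echo "built $APP-r$REV.tar.gz, ready to ship"
```

The key property is `set -e`: because every step runs in sequence and aborts on failure, nothing broken ever reaches step 4, which is what makes it safe to fire the script on every single check-in.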

This is brilliant.

We get immediate feedback on bugs by eyeballing the live version. Our test coverage is sparse, so we rely on that feedback; we chose this trade-off deliberately, so we can change our prototype quickly without regression tests making change harder (which is, by definition, what they do). Test-Driven Development assumes you know exactly what you’re building when you start – if we knew that, it wouldn’t be research!

Although our software isn’t yet available to public users, it is often used in presentations and demos. Demonstration-led UX design might justify a blog post on its own, but always having the state of the art available to show to third parties and to management has proven very helpful.


Live demo of @oxfloodnet to @ruskin147 at @Nominet registrar conference #regconf (Source: @adamhleach)

The main difficulties are unfinished features, changes to interfaces between modules, and database changes (SQL or NoSQL). For now we have one person working on each module, so we keep everything in sync by pausing the automation, or by deploying manually when needed. Branching would be a better approach, but it’s harder in Subversion than in Git. Not perfect, but good enough so far.

Now that we have a just-good-enough continuous deployment setup, it is easy to adopt it more widely. “Hello, World!” projects are trivial to build and package, so this is the best time to add them to the CI server. Keeping this build green throughout a project’s early life means that once it’s mature enough to start deploying, it takes moments to do so. Never again do we need to faff around with manual deployment, or graft an automated build onto an existing project.
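A day-one build for a brand-new project really can be this small – a hypothetical sketch, with a tarball standing in for the real RPM packaging and `hello-world` as a placeholder project name:

```shell
#!/bin/sh
# Hypothetical day-one CI build: there is no real code yet, so it just
# proves that the build, package, and run steps work and stay green.
set -e

PKG=hello-world             # placeholder project name

# "Build": emit the one and only artefact
mkdir -p dist
cat > dist/$PKG <<'EOF'
#!/bin/sh
echo "Hello, World!"
EOF
chmod +x dist/$PKG

# "Package": later, swapping this for a real rpmbuild call is the only change
tar -czf "$PKG-0.1.tar.gz" -C dist .

# "Smoke test": run the thing we just packaged
./dist/$PKG                 # prints "Hello, World!"
```

Because the script already has build, package, and smoke-test stages, growing it into a real pipeline is a series of one-line substitutions rather than a greenfield automation project.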

We thought Continuous Deployment was too much process for an R&D prototype project. We were wrong. Highly iterative deployment is perfect for highly iterative projects, and it’s well worth the price.

20th May 2015

#Oxford Flood Network