Improving our database deployment process for the Oxford Flood Network

1st July 2015


We talked in an earlier blog post about using Continuous Deployment for our Oxford Flood Network software development.

Some parts of a system are straightforward to deploy like this. Stateless components, for example the front-end code (running in a user’s browser), can be updated relatively easily.

For a user interface, it is of course especially helpful to be able to provide rapid feedback, but it would also be useful to provide rapid updates to the rest of the system. Server / API code can also be continuously deployed, with the restriction that the API remains constant, or at least maintains backwards compatibility. Breaking changes may occasionally be necessary, requiring a synchronised update.

But how can we handle the parts of the back-end code that use a relational database? It is hard to modify code which uses a database, because the database structure & data may also need to be modified. Continuous deployment works best when you can easily revert an unsuccessful change (for whatever reason), but changes to a database cannot be easily reverted – even restoring from a backup isn’t possible, as you’ll lose any database updates that were made in the period between the deployment and its reversion.

So what can we do? We want to:

  • make deployment easy to do, so we do it frequently
  • make deployment quick to do, so we minimise scheduled downtime
  • make deployment easy to do correctly, so we minimise the chance of errors as a result of the deployment

… but we want to keep this a lightweight/agile project, and we don’t have the luxury (or overhead) of a full beta environment, QA team etc.


We can use our Continuous Integration (CI) system to improve our manual process!

Let’s separately consider two aspects of state within a database:

  1. The state of the database structure itself
  2. The state of the data stored

The first thing we need is to detect database changes caused by code changes. We’re using a Java Persistence API provider for our database mapping, so the database schema is indirectly defined by the Java source code.
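The real entity classes aren’t shown in this post, so the names below are invented purely for illustration, but a JPA mapping along these lines is what indirectly defines a table – which means a seemingly small edit to a class like this can silently change the schema:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity for illustration only – not the actual Oxford Flood Network code.
// The @Entity/@Table/@Column annotations are what the JPA provider turns into DDL,
// so changing a field here changes the generated database schema.
@Entity
@Table(name = "SENSOR_READING")
public class SensorReading {

    @Id
    @Column(name = "ID")
    private long id;

    @Column(name = "SENSOR_ID", nullable = false)
    private String sensorId;

    @Column(name = "LEVEL_MM")
    private int levelMm;

    // getters and setters omitted for brevity
}
```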

CI target #1 – make database changes explicitly visible, by failing the CI build if the schema is changed without updating the reference DDL.

In our CI server, we run a task that takes the Java source, builds it and uses it to generate a DDL representation of the database. This is compared with a reference copy of the database schema, and the build is failed if there is a difference.
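The exact build task isn’t reproduced here, but the comparison step amounts to something like the following sketch (the file locations are assumptions, not our real paths):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Minimal sketch of the CI check: compare the DDL generated from the Java source
// against the checked-in reference copy, and fail the build if they differ.
public class SchemaDriftCheck {

    public static void main(String[] args) throws IOException {
        Path generated = Paths.get("build/generated-schema.ddl");          // produced from the Java source
        Path reference = Paths.get("src/main/sql/reference-schema.ddl");   // checked-in reference copy

        List<String> generatedLines = Files.readAllLines(generated, StandardCharsets.UTF_8);
        List<String> referenceLines = Files.readAllLines(reference, StandardCharsets.UTF_8);

        if (!generatedLines.equals(referenceLines)) {
            System.err.println("Schema drift: generated DDL does not match the reference DDL.");
            System.err.println("If the change is intentional, update the reference DDL (and add an incremental).");
            System.exit(1); // non-zero exit fails the CI build
        }
        System.out.println("Generated DDL matches the reference DDL.");
    }
}
```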


Fig. 1: CI checks DDL from schema generation against reference

Why does this help?

  • When making a change to the Java source that affects the database schema, the developer also has to check in a change to the reference DDL file. This means that not only is the developer aware that they have made this change, but it’s also an acknowledgement that they own the changes both to the Java source and to the DB schema.
  • On the other hand, sometimes you realise that you can rewrite the code so as not to require a DB schema change after all, and having a CI failure prompts you to ask that question.

CI target #2 – make incremental schema modifications, as a sequence of files that can be tested by CI

Starting from the initial schema (an empty database), we have a set of DDL files (numbered incrementally) that, when run sequentially, generate the full schema. So when you need to make a database change, you create a new file that implements the change you need, and add it at the end of the list. (It doesn’t matter exactly how you do this – we just number the files sequentially & a script runs them in order.)
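As a rough sketch of what such a runner might look like – the directory layout, file names and the in-memory H2 database below are assumptions, not our actual setup:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: apply numbered incremental DDL files (001-..., 002-..., ...) in order
// to an empty database, so the end result can be dumped and compared.
public class IncrementalRunner {

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("db/incrementals"); // assumed location of the numbered files

        // Lexicographic sort gives the intended order because the files are numbered.
        List<Path> scripts;
        try (Stream<Path> files = Files.list(dir)) {
            scripts = files.sorted().collect(Collectors.toList());
        }

        // An in-memory H2 database stands in for "an empty database" here.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:schema-check");
             Statement stmt = conn.createStatement()) {
            for (Path script : scripts) {
                String ddl = new String(Files.readAllBytes(script), StandardCharsets.UTF_8);
                for (String statement : ddl.split(";")) { // naive split; adequate for plain DDL
                    if (!statement.trim().isEmpty()) {
                        stmt.execute(statement);
                    }
                }
                System.out.println("Applied " + script.getFileName());
            }
        }
    }
}
```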

The continuous integration server runs the full set of incremental DDL files, and checks that they produce exactly the same schema as the reference database schema. If not, it fails the build.
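The comparison itself can be as simple as a normalised diff of the dumped schema against the reference – for example (again, file names are assumptions; a little normalisation stops whitespace differences in the dump from failing the build):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the final check: the schema built from the incrementals (already dumped
// to a file by the database's own tools) must match the reference DDL exactly.
public class IncrementalsMatchReference {

    private static List<String> normalise(String file) throws Exception {
        return Files.readAllLines(Paths.get(file), StandardCharsets.UTF_8).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws Exception {
        List<String> fromIncrementals = normalise("build/incremental-schema.ddl");
        List<String> reference = normalise("src/main/sql/reference-schema.ddl");

        if (!fromIncrementals.equals(reference)) {
            System.err.println("Incremental DDL files do not reproduce the reference schema.");
            System.exit(1); // fail the CI build
        }
    }
}
```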


Fig. 2: CI checks that incrementals match reference DDL

Why does this help?

  • it tests that the new incremental will indeed produce the schema that is required by the Java source
  • each incremental DDL file describes the schema change that will be applied. This may in turn require a datafix to modify the data in the production system: the existence of the incremental DDL file is a prompt that a datafix may be required, and having the change clearly specified makes it easier to write one.

How to generate the database incrementals?

We’re doing this manually at the moment, based on the changes to the reference DDL; I’m using a visual diff/merge tool (I like meld’s presentation) to make this a bit easier. We don’t have a large volume of changes to generate, and I’m keen that changes are manually reviewed. Another advantage of making manual changes is that we can do things like renaming a table / column while preserving content & integrity, whereas a naive automated tool would instead drop & recreate.

Are there tools that do this?

Yes, although I hadn’t been aware of them until I started writing this. They look rather heavyweight for our purposes right now, and there would be some overhead in learning how to use them. Creating our CI tests took less time to set up and debug than it took me to write this post – I’m probably more efficient at writing code than prose.

Background: Schema / DDL generation

How do we generate DDL from Java source code? The options are:

  • Get the JPA provider to generate DDL directly to a file
  • Get the JPA provider to populate the schema in an empty database, then dump the schema (using the database provider tools) to get a DDL file.

We’re using EclipseLink, a Java Persistence API provider, for our database code. We started off using EclipseLink’s ddl-generation option to generate a DDL script, and we added a command-line option to our application to trigger this. Unfortunately it stopped working when we changed to a different database back-end, but this turned out to be a blessing in disguise, because it forced us to use the database dump option, which then made the incremental test possible.
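For reference, triggering EclipseLink’s ddl-generation programmatically looks roughly like the sketch below. The persistence unit name and output paths are placeholders, and the exact behaviour depends on how the persistence unit is configured (EclipseLink still needs to know which database platform to generate DDL for):

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Sketch of driving EclipseLink's DDL generation from code rather than persistence.xml.
public class GenerateDdl {

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Generate CREATE statements for the mapped entities...
        props.put("eclipselink.ddl-generation", "create-tables");
        // ...and write them to a script file instead of applying them to the database.
        props.put("eclipselink.ddl-generation.output-mode", "sql-script");
        props.put("eclipselink.application-location", "build");
        props.put("eclipselink.create-ddl-jdbc-file-name", "generated-schema.ddl");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("flood-network", props); // placeholder unit name
        // Creating an EntityManager forces the session to initialise, which is
        // when the DDL script gets written.
        emf.createEntityManager().close();
        emf.close();
    }
}
```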

Future work

So far we’re only testing the changes to the database schema. I’d also like to add tests that run datafixes as required before & after each structural change.