Containerization as the necessary end point of deployment automation

August 3, 2016

It started on Twitter:

@thatcks: I'll say this for containers: containerizing all our services would make it easier to spin up test instances. Building new VMs gets old.

@beamflash: Is Docker's success partly because it greatly improved on the status quo of deployment automation, and not containerisation per se?

Bearing in mind that I'm an outsider here, my view is that greatly improving on the status quo of deployment automation requires something very closely akin to containerization. What you really want is a self-contained artifact that can be deployed somewhere, used, and then un-deployed again, with the host machine reverted to its original state so that it's ready for the next artifact to be deployed to it. It's very important that the un-deployment step be able to reliably and completely remove all traces of your artifact's presence, because this is what makes the host reusable. If rolling an artifact onto a host can contaminate the host in any meaningful way, you wind up needing to trash and rebuild the host after you're done with the deployment; otherwise you have potentially important divergences between a newly built host and a host that's been in use for a while, divergences that may affect how your deployment artifacts behave.

(This should be unsurprising, because it's the same advantage that package systems have.)
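As a concrete illustration, here is a minimal sketch of that roll-on, roll-off lifecycle driven from Python through the Docker CLI. The image name, container name, and port are hypothetical placeholders, not anything from a real setup:

import subprocess

IMAGE = "example.com/our-service:test"   # hypothetical self-contained artifact
NAME = "our-service-test"

def roll_on():
    # Deploy the artifact: everything it needs ships inside the image.
    subprocess.run(
        ["docker", "run", "--detach", "--name", NAME,
         "--publish", "8080:8080", IMAGE],
        check=True,
    )

def roll_off():
    # Un-deploy it: stopping and removing the container (and its volumes)
    # is supposed to leave the host in its original state, ready for the
    # next artifact.
    subprocess.run(["docker", "stop", NAME], check=True)
    subprocess.run(["docker", "rm", "--volumes", NAME], check=True)

if __name__ == "__main__":
    roll_on()
    # ... run tests against the service here ...
    roll_off()

The point of the sketch is that both halves of the lifecycle are single, mechanical operations; nothing about the host has to be remembered or cleaned up by hand afterwards.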

Current software is not really set up to behave this way. It generally assumes that it can (and will) spray bits of itself all over various parts of your filesystem hierarchy, doesn't keep exacting track of every single file it ever touches (even log files and various sorts of temporary files) so that it can remove them all again, and so on. It also generally exists in a web of dependencies with other packages that may not do this either. And in general, it's not self-contained; instead it's intrinsically entangled with the state of the host system simply because it's using various basic things from the host system (such as the C library, various shared system configuration files, and so on).

If you want roll-on, roll-off artifacts that won't leave traces behind and that aren't entangled in the current state of the host system, you must somehow create and enforce strong isolation; the deployment artifact must not be able to mess up the host, and it must not depend on very many specifics of the host's state. As far as I can see, any form of deployment automation that can do this is going to wind up looking a lot like containerization, although the exact details can (and will) vary.
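If you wanted to check the 'no traces left behind' property directly, one rough approach is to snapshot the parts of the host you consider to be host state before roll-on and again after roll-off, then compare. This is only a sketch, and the watched directories are a hypothetical stand-in for whatever you actually care about:

import os

WATCHED = ["/etc", "/var/lib"]   # hypothetical: directories that count as host state

def snapshot(dirs):
    # Record (size, mtime) for every file under the watched directories.
    state = {}
    for top in dirs:
        for root, _subdirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue
                state[path] = (st.st_size, st.st_mtime)
    return state

def divergences(before, after):
    # Anything added, removed, or modified is a trace the artifact left behind.
    added = set(after) - set(before)
    removed = set(before) - set(after)
    changed = {p for p in set(before) & set(after) if before[p] != after[p]}
    return added, removed, changed

# Usage: take one snapshot before deploying, another after un-deploying,
# and any divergence means the host is no longer in its original state.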

(See also A thought on containerization, isolation, and deployment, which sort of starts with CJ Silverio summarizing what you want.)
