Why I increasingly think we're unlikely to ever use Docker

July 26, 2015

Apart from my general qualms about containers in our environment, I have increasingly wound up thinking that Docker itself is not a good fit for us even if, for various reasons, we want some form of service containerization. The problem is access to data.

Our current solution for giving services access to their data is NFS filesystems from our central fileservers. Master DNS zone files, web pages and web apps for our administrative web server, core data files used by our mail gateway, you name it and it lives in an NFS filesystem. As far as I can tell from reading about Docker, this is rather against the Docker way. Instead, Docker seems to want you to wrap your persistent data up inside Docker data volumes.

It's my opinion that Docker data volumes would be a terrible option for us. They'd add an extra level of indirection for our data in one way or another, and it's not clear how they would allow access from different Docker hosts (if they allow it at all). Making changes and so on would get more difficult, and we make changes (sometimes automated ones) on a frequent basis. In theory, maybe we could use (or abuse) Docker features to either import 'local' filesystems (that are actually NFS mounts) into the containers or have the containers do NFS mounts inside themselves. In practice this clearly seems to be swimming upstream against the Docker current.
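(For concreteness, the two in-theory approaches would look roughly like this on the command line. This is only a sketch; the paths and image names are made up, since our actual mount points and services aren't Docker-ified.)

```shell
# Bind-mount an NFS-backed host directory into the container;
# /h/example is a made-up path where the host mounts the fileserver.
docker run -v /h/example/www:/var/www our-web-image

# Or give the container the privileges needed to do NFS mounts
# itself (its startup would then run the actual mount commands).
docker run --cap-add SYS_ADMIN our-mail-image
```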

It's my strong view that much software has ways that it expects to be used and ways that it doesn't expect to be used. Even if you can make software work in a situation it doesn't expect, it's generally not a good idea to do so; you're almost always buying yourself a whole bunch of future pain and heartburn if you go against what the software wants. The reality is that the software is not a good fit for your situation.

So: Docker does not appear to be a good fit for how we operate and as a result I don't think it's a good choice for us.

(In general the containerization stories I read seem to use some sort of core object store or key store as their source of truth and storage. Pushing things to Amazon's S3 is popular, for example. I'm not sure I've seen a 'containerization in a world of NFS' story.)

Comments on this page:

The Docker data volume support, where Docker creates and manages the data volume directories for you, is a bit half-baked. E.g., it's easy to end up with dangling data volumes taking up disk space without any way provided by the Docker tools to manage them. I don't think such data volumes are much used, and I wouldn't describe them as "the Docker way".

What is widely used is the ability to mount directories (or individual files) from the host filesystem into a container. Confusingly, the Docker documentation calls this "mounting a host directory as a data volume", but to me the term "data volume" seems odd in this context (i.e. where Docker does not manage the directory on the host filesystem). The option is the same for both approaches (-v), but the argument syntax is distinct when you explicitly supply the host directory to be mounted into the container. From inside the container, the effect is similar: there is a directory in the container's filesystem namespace that did not come from the container image, but rather refers to some directory elsewhere on the host filesystem.
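To spell out the syntax distinction (a sketch; the image name and host path are invented):

```shell
# Docker-managed data volume: only the container path is given,
# so Docker allocates and manages storage for it itself.
docker run -v /data some-image

# Host directory mounted into the container: the host path
# (/srv/zones here, invented) is supplied explicitly.
docker run -v /srv/zones:/data some-image
```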

It sounds like mounting bits of your NFS filesystems into containers should work fine for you. Other reasons may make Docker unattractive in your environment, but I don't think this is it.

By Miksa at 2015-07-26 05:59:19:

Just this week I came up with a use case for Docker at the university: computation servers for researchers.

Several research groups have powerful servers, and some of them are maintained by the IT department with RHEL or Ubuntu LTS OSes. The problem is that researchers need to run all sorts of unmentionable software which often isn't available from the standard repos or any repo at all, or the versions are outdated. Just taking a peek at what they have done to our pristine servers may make you nauseous.

Docker would seem like the easiest solution for this. Keep the host clean and let the researchers launch containers that suit their needs, for example recent Fedora, Debian, Bio-Linux, Scientific Linux or something else they need. The host will just provide them CPU, RAM and storage space. It would probably be quite a bit easier than KVM or Xen virtual machines.

But first we need to figure out a solution for the security issues. My understanding is that if you give users permission to launch containers, you basically give them root on the host. Either we need to have the sudo users in each group set up the containers, or we need to craft a launcher script that will only launch whitelisted container images with approved settings.
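Such a launcher script might start out something like this (a sketch only; the approved image list and the scratch path are invented, and a real version would need much more care):

```shell
#!/bin/sh
# Launch only whitelisted container images with fixed, approved
# options.  The images and paths here are examples, not real policy.
ALLOWED="fedora:22 ubuntu:14.04 scientificlinux/sl:7"

# Succeed only if $1 is exactly one of the approved images.
image_allowed() {
    for img in $ALLOWED; do
        if [ "$1" = "$img" ]; then
            return 0
        fi
    done
    return 1
}

# Refuse anything off the list; otherwise run with no extra
# privileges and only the user's own scratch space mounted in.
launch() {
    if ! image_allowed "$1"; then
        echo "image '$1' is not on the approved list" >&2
        return 1
    fi
    docker run --rm -it -v "/scratch/$USER:/scratch" "$1"
}
```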

By Paulo Almeida at 2015-07-28 05:58:43:

I haven't thoroughly explored containers yet, so my opinion may change, but things like this are why I'm currently more interested in LXC/LXD than Docker. I'm used to managing Linux machines, and I'm not sure the added complexity of learning the Docker way will be worth it for my needs.

By Lim at 2015-07-28 09:55:52:

do NFS mounts inside themselves

Rocket's pod concept fits here quite well. One process runs NFS and exposes the filesystem to the worker.

If one process dies, the pod gets shut down.
