My Linux container temptation: running other Linuxes

February 25, 2015

We use a very important piece of (commercial) software that is only supported on Ubuntu 10.04 and RHEL/CentOS 6, not anything later (and it definitely doesn't work on Ubuntu 12.04, we've tried that). It's currently on a 10.04 machine but 10.04 is going to go out of support quite soon. The obvious alternative is to build a RHEL 6 machine, except I don't really like RHEL 6 and it would be our sole RHEL 6 host (well, CentOS 6 host, same thing). All of this has led me to a temptation, namely Linux containers. Specifically, using Linux containers to run one Linux as the host operating system (such as Ubuntu 14.04) while providing a different Linux to this software.

(In theory Linux containers are sort of overkill and you could do most or all of what we need in a chroot install of CentOS 6. In practice it's probably easier and surer to set up an actual container.)
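(For the record, the chroot version would look something like the following sketch. I haven't actually built this; the centos-release RPM would have to be fetched from a CentOS mirror first, and the exact paths and package group are illustrative.)

```shell
# Sketch: bootstrap a CentOS 6 chroot on another Linux (run as root).
ROOT=/srv/centos6
mkdir -p $ROOT/var/lib/rpm
rpm --root $ROOT --initdb
# Install the centos-release package (downloaded beforehand from a
# CentOS mirror) to get the yum repository definitions in the chroot:
rpm --root $ROOT -ivh centos-release.rpm
# Then pull in a base system with yum's --installroot:
yum --installroot=$ROOT -y groupinstall Base
```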

Note that I specifically don't want something like Docker, because the Docker model of application containers doesn't fit how the software natively works; it expects an environment with cron and multiple processes and persistent log files it writes locally and so on and so forth. I just want to provide the program with the CentOS 6 environment it needs to not crash without having to install or actually administer a CentOS 6 machine more than a tiny bit.

Ubuntu 14.04 has explicit support for LXC with documentation and appears to support CentOS containers, so that's the obvious way to go for this. It's certainly a tempting idea; I could play with some interesting new technology while getting out of dealing with a Linux that I don't like.
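As a sketch of what this would look like (the container name here is made up, and the centos template's options may vary between LXC versions):

```shell
# Create and start a CentOS 6 container on Ubuntu 14.04 (run as root;
# needs network access to a CentOS mirror). 'app-c6' is an arbitrary name.
lxc-create -t centos -n app-c6 -- --release 6
lxc-start -n app-c6 -d        # boot the container in the background
lxc-attach -n app-c6          # get a root shell inside it
```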

On the other hand, is it a good idea? This is certainly a lot of work to go to in order to avoid most of running a CentOS 6 machine (I think we'd still need to watch for eg CentOS glibc security updates and apply them). Unless we make more use of containers later, it would also leave us with a unique and peculiar one-off system that'll require special steps to administer. And virtualization has failed here before.

(I'd feel more enthused about this if I thought we had additional good uses for containers, but I don't see any other ones right now.)

Comments on this page:

By pch at 2015-02-25 04:38:54:

I guess my first thought is how would the software vendor react if they needed to troubleshoot a problem with the software and found it running in a container? I've run into vendors in the past who refuse to support anything not exactly to their specification.

By Ewen McNeill at 2015-02-25 04:57:02:

In addition to the previous comment ("would it be supported", and "does that matter"), my main potential concern with that scenario is whether the Linux kernel in Ubuntu 14.04 would still work well with the RHEL/CentOS 6 userland (and/or your Important Application), especially if you're going to run a semi-full RHEL/CentOS 6 daemon set. Containers are basically "chroots on steroids" -- ie, with some UID/networking/resource specialisations too. So you're still running against the underlying (host) kernel.
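To make that concrete, you can sketch the "chroot plus namespaces" idea with unshare from a reasonably recent util-linux (flags vary between versions):

```shell
# Give a shell new PID and mount namespaces, but the same host kernel.
sudo unshare --pid --fork --mount-proc /bin/bash
# Inside: 'ps' shows only this shell and its children, while
# 'uname -r' still reports the host's kernel version -- which is
# exactly why a CentOS 6 userland in a container would be running
# on Ubuntu 14.04's kernel.
```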

AFAICT RHEL 6 comes with Linux 2.6.32 (so roughly like Ubuntu 10.04 LTS, give or take patches), whereas Ubuntu 14.04 comes with Linux 3.13 -- roughly 4 years newer. (If the Important Application only supports RHEL 6/Ubuntu 10.04 I guess it's possible they also only support the Linux 2.6.32 kernel....)

FWIW, Docker can be persuaded (abused?) to do all sorts of more-general-container things outside the core minimal application container approach -- including running a full init, daemons, etc, setup. You lose some of the benefits of Docker at that point (eg, redeploying later versions of lightweight application containers), but you do still get to use the Docker REST interface (and CLI based on that) to control things which is reasonably sane.
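For instance, a hypothetical sketch of such a "full system" image (the base image tag and package names are illustrative, not a tested recipe):

```dockerfile
# Run a real init as PID 1 so cron, syslog, sshd, etc. all work,
# at the cost of Docker's one-process-per-container model.
FROM centos:6
RUN yum -y install cronie rsyslog openssh-server
CMD ["/sbin/init"]
```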

That said, I've also heard good things about LXC as an interface; I've just never tried it myself.


Once upon a time we had a dependency on a commercial library that did not have a 64-bit version, and a dependency on a language feature which only worked on 64-bit machines. Luckily the two dependencies were in different executables which called each other via TCP; unfortunately, we didn't have spare machines on which to run separate 32-bit and 64-bit systems.

The answer, in that case, was to run 32-bit VMs on 64-bit hosts. KVM worked well for this. You still have 2x machines to administer, though.

Eventually the library vendor presented us with a 64-bit clean version and we decommissioned the 32-bit VMs.

The problem I have with containers is that they only appear to have one convincing use case: here's a fully supported server application in a container, if it breaks, treat it as a black box. I suppose that is appealing in some situations -- I could see running RT that way, for instance, or a wiki -- but mostly that's not what we want.

By cks at 2015-02-25 11:01:16:

I'm not too worried about support. To put it one way, if the vendor isn't supporting Ubuntu past 10.04 and RHEL past RHEL 6, I'm not sure we're going to get much support from them for this package in general.

(And if we do run into problems, we can always (re)build a real native machine.)
