2012-12-27
Why I somewhat irrationally have a distrust of ZFS on Linux
By now there are a number of ways to run ZFS on Linux (for example here as a native kernel module and here as a user-level component connected to the kernel with FUSE). All such efforts are separate projects with no prospect of that ever changing because ZFS's CDDL license is fundamentally incompatible with the Linux kernel's GPLv2 licensing.
(Theoretically Oracle could decide to relicense ZFS. The odds of that happening appear to be very low.)
I'm not really interested in running important production filesystems through FUSE, which leaves ZFS kernel modules. I'm not fundamentally opposed to running out-of-kernel modules in production and in fact our iSCSI server software is an example of it. Despite this, the idea of ZFS on Linux makes me fundamentally nervous precisely because of the out-of-kernel issue. Put simply, I worry about ZFS on Linux projects being able to attract people who are familiar enough with the Linux kernel to do a good job of the kind of deep integration that ZFS needs. Because of ZFS's licensing issues, you automatically lose all kernel hackers who care enough about kernel licensing issues or who feel relatively strongly about their work being included in the main kernel. You're left with kernel hackers who're willing to work on a license-incompatible project that will never be integrated, and my presumption is that there are not very many really good kernel hackers who meet this description.
(You can presumably get more such kernel hackers if you can pay them, but then that puts a ZFS on Linux project at the mercy of continuing funding.)
Initially I was going to be more pessimistic and strong about this than
I am now. What happened is that when I started to write this entry, I
pulled the git repo for ZFS on Linux and
started looking at 'git log'. Development is very active and it seems
to be done by people who know what they're doing. A lot depends on the
project lead (and thus funding from LLNL) and the project is not mature
yet, but those aren't risks that worry me; we'd almost certainly look at
the project only after it had stabilized. If it never stabilizes because
it loses development funding and developers, oh well so be it.
I'm not entirely happy with the situation, though. I still think that ZFS on Linux's perpetual outsider status makes me less confident about it than either an Illumos-derived distribution or FreeBSD, where ZFS is or seems to be an active part of the core development. It's possible that ZFS on Linux will prove to have sufficient extra advantages to overcome this, but I'm dubious.
Sidebar: ZFS on Linux ZFS version information
This is not mentioned in the zfsonlinux.org web pages, at least not that I could find. ZFS on Linux is at zpool version 28, which is the same as the current Illumos version (and also FreeBSD and OpenSolaris). Solaris 10 update 8 plus patches (what we're currently running) is at version 15. Oracle Solaris 11 seems to be at (their) version 32.
At this point, ZFS feature flags (see the comments here) do seem to be in Illumos (and FreeBSD?) but aren't yet in ZFS on Linux. However, the ZFS on Linux developers are working on this; given their pace of development, I suspect that support will appear relatively soon in an official release.
2012-12-18
Why I'm still using VMware
As I've mentioned before, in theory I should hate VMware because it involves more or less binary kernel modules and I've usually avoided that sort of thing like the plague. These days Linux has plenty of virtualization alternatives, ranging from somewhat more open to genuinely open source. Yet I still keep using VMware.
The short answer is that for me it remains the best of a bad lot of choices. Everything else is some combination of less convenient to interact with, more obnoxious (yes, I'm talking about VirtualBox and Oracle), or has more impact on my system configuration.
(I'm also not sure of the state of snapshotting and rolling back virtual disks, which is something that I use fairly frequently in VMware.)
There is no nice way to put this, so I'll put it plainly: convenience matters to me a lot and all of the open source software I've looked at is simply significantly less polished and directly usable than VMware is. Creating and adjusting VMs is easy in VMware's GUI and I'm especially happy with how well VMware handles VM (graphical) consoles. Everyone else seems to use VNC (or lately SPICE, which I have no experience with), but my interactions with VNC-based VM consoles have not been exactly inspiring. I'm especially dubious about using VNC for Windows guest consoles for various reasons.
(Yes, I prefer to interact with virtualization through a GUI. It's faster for what I do, partly because I can rapidly and directly select what VM I want to do something to, plus if I'm going to work with a VM's console I need a graphical window for it anyways.)
In theory I might be able to drive my primary Ubuntu test VM entirely from the command line and interact with it over SSH, but that's only one of my VMs. I'd like to have a system with both good command line support and a good GUI plus virtual console, but if I have to pick only one of those two I'm going to pick the latter.
By the way, I'm aware that my desires here are basically irrelevant to the open source virtualization people. In a way I agree with them in that they're almost certainly making what are the right technical choices in terms of how virtualization works, how virtualized networks fit into Linux, how VM consoles should interact with the rest of the system, and so on. I just don't care and I'm being selfish; I want the convenience that incestuous hackery delivers.
Sidebar: the two big features VMware is missing for me
The two features I would really like in VMware are an option to zero a
guest disk (which would just delete and recreate the
underlying files) and an option to boot a VM into the 'choose a boot
device' BIOS menu. Since a VMware VM only has a very narrow window in
which you can activate that boot device menu (perhaps a second or two at
most), the lack of the latter is especially irritating.
2012-12-10
Things that systemd gets right
On Twitter, I recently put forward the heretical opinion that systemd is actually a good thing (as I've written a bit about before). Now, systemd is not flawless or without worrisome tendencies and it has a number of features that I'm indifferent to, but I do think that it gets quite a lot of things right. Today I feel like trying to list them off (partly so that I have this in one place for future use).
(A disclaimer: this is from the perspective of someone who runs servers and thus doesn't really care about systemd features like minimizing boot time or not actually starting various sorts of programs until someone asks for them.)
To begin with, a terminology note. What systemd calls a unit is what we would otherwise call an init script (well, it's a superset of that, but we'll ignore that for now). I'll be using 'unit' and 'units' throughout this.
So, in no particular order:
- systemd has a strong separation between system-supplied units, which
go in /lib/systemd, and sysadmin-supplied units, which go in
/etc/systemd. This is very helpful for keeping track of the latter.
- you can override a system-supplied unit with a sysadmin-supplied one
without changing or removing the system-supplied one.
(Why, you would think that systemd was written by people who understood modern package management.)
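As a sketch of how this works in practice (the unit name foo.service is invented for illustration): systemd looks in /etc/systemd/system before /lib/systemd/system, so a same-named file under /etc simply shadows the packaged one:

```ini
# Hypothetical /etc/systemd/system/foo.service
# A file here shadows /lib/systemd/system/foo.service entirely;
# the packaged unit is left untouched on disk.
[Unit]
Description=Local override of the packaged foo service

[Service]
ExecStart=/usr/local/sbin/foo --our-options

[Install]
WantedBy=multi-user.target
```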
- what units are enabled in various states is stored in the filesystem
in a visible form, not locked up in a magic database somewhere.
- you can have units installed without being activated,
unlike Upstart.
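To illustrate both points with a hypothetical unit fragment: a unit's [Install] section declares where it hooks in, and enabling it is nothing more than creating a symlink that you can inspect with ordinary filesystem tools.

```ini
# Hypothetical fragment of /etc/systemd/system/foo.service.
# 'systemctl enable foo.service' creates the plain symlink
#   /etc/systemd/system/multi-user.target.wants/foo.service
# pointing back at the unit file. Without that symlink the unit
# is installed but not activated at boot; with it, it is.
[Install]
WantedBy=multi-user.target
```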
- systemd allows units to shim themselves into the startup order so that
they get started before some other unit; you do not have to alter
the other unit to enable this (unlike Upstart again).
(systemd is not perfect here; in the general case you can't reorder existing units without editing some of them. But you can do this by overriding the system-supplied unit with your own copy, per above.)
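For example, here is a hypothetical sketch of a unit that shims itself into the boot order ahead of local filesystems without any change to the units it runs before:

```ini
# Hypothetical /etc/systemd/system/prep-disks.service
# The ordering lives entirely in this unit; nothing that is part
# of local-fs.target has to be edited to make room for it.
[Unit]
Description=Prepare disks before local filesystems are mounted
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/prep-disks
```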
- systemd unit configuration files are easy to write and easy to read
(cf); they contain almost the minimal
information necessary with very little extraneous fluff. They do not
involve XML.
- systemd handles a lot of annoying infrastructure for you; for example,
you do not have to arrange to daemonize programs you run.
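Both of these points show up in a minimal unit (names invented for illustration): the whole file is a handful of self-explanatory lines, and the program is simply run in the foreground with systemd taking care of daemonization.

```ini
# Hypothetical /etc/systemd/system/report-server.service
[Unit]
Description=Local reporting daemon

[Service]
# The program just runs in the foreground; there is no double-fork,
# PID file, or setsid() boilerplate for it to carry around.
ExecStart=/usr/local/sbin/report-server --foreground

[Install]
WantedBy=multi-user.target
```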
- systemd starts and restarts services in a consistent and isolated
environment, not in whatever your
current environment is when you run the start and restart commands.
- systemd keeps track of what processes belong to a particular service,
so it can both list all the processes that are part of a service and
tell you what service a particular process is part of. This is a boon
to manageability.
- because it actively tracks unit status, conditional restarts are
not dangerous; it shares this behavior with
any competently implemented active init system.
(SysV init scripts are a passive system; Upstart, Solaris SMF, and systemd are all active ones.)
- during boot, systemd reports unit startups as they happen (and reports
if they succeeded or failed). You would think that this is a basic
feature that everyone has, but no; neither SMF nor Upstart does this.
- unit names are succinct (unlike SMF).
- it apparently does per-user fair share scheduling by default (but I haven't had a chance to run systemd in a situation where I could really see this in action).
In common with other active systems, systemd starts units in parallel when possible. I don't consider this a striking advantage, especially because other systems do it too.
(I may update this with additional things as they occur to me or as people mention them, since I've probably missed some.)
Sidebar: how I feel about the competition
The competition that I know of is SMF and Upstart. SMF is encrusted with complexity and dates from the days when people thought XML was a good idea; it is 'enterprisey' in a bad way. I consider it a step backwards from System V init scripts. Upstart is a flawed attempt and not bold enough; even ignoring the flaws, it isn't a significant enough improvement over SysV init scripts to be worth the pain of conversion.
(In other words, Upstart is an improvement but not a significant and worthwhile one.)