2007-01-11
Xen versus VMware for me
I've recently spent some time thinking about virtualization technology, in particular how I feel about Xen versus VMware. In theory the choice should be a slam dunk for me, because VMware requires evil binary kernel modules and I have usually avoided that sort of thing even when it was somewhat painful (e.g., I don't use ATI's binary X drivers). In practice, though, I am only distantly lukewarm on Xen and much more interested in VMware.
The simple summary of why is that Xen is a commitment, but VMware is a flirtation.
I call VMware a flirtation because it will run on an otherwise ordinary system (provided that I have enough kernel source around to compile its modules). Installing and using VMware is thus a reasonably casual thing, something I can do without particularly perturbing the system. I can even normally boot my system without VMware's kernel modules loaded, and only contaminate things if I actually need it this time around.
Xen's hypervisor approach requires a much bigger change to my system; I can't add Xen to a running system, I have to boot the Xen environment from the start. That requires a special Xen-enabled kernel, which requires special patches that I have to integrate into my kernel, which gives me nervous flashbacks. Being able to run stock kernels is valuable for me, and with Xen I give that up, so setting things up for Xen means a fairly significant commitment.
(If Xen ran on top of a stock kernel.org kernel, I would be much more interested, because it would be much less of a commitment.)
There are additional reasons for my tilt towards VMware. First, VMware has a more polished environment right now, with a GUI and various convenient bits and all. Second, VMware's approach is much more useful for what I do with virtualization than Xen's is, because VMware is oriented towards running unmodified guest OSes in a complete virtual PC environment.
The Xen hypervisor approach is, I think, clearly the right way of doing things like running services in isolated virtual machines. But I'm not doing that; instead I'm using virtualization for testing, where I really want to run the exact same kernel and environment that I will wind up running on the real hardware. (And thus it is a definite feature that VMware emulates a complete PC, hokey BIOS booting and all.)
An interesting Bourne shell limitation
Presented in illustrated form, on at least Solaris 8, FreeBSD, and OpenBSD:
$ exec 9>/dev/null
$ exec 10>/dev/null
exec: 10: not found
(And the shell exits.)
The genuine Bourne shell only allows redirection to (or from) single-digit file descriptors; if you give multiple digits, it instead gets parsed as 'exec 10 >/dev/null'.
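Put another way, the '10' becomes an ordinary argument (here, the name of a program for exec to run) and only the '>/dev/null' part is taken as a redirection; since there is no program called '10', the exec fails and takes the shell with it. You can see the same parse at work without losing your shell by using a harmless command instead of exec. This is a sketch of what I'd expect on those shells (/tmp/testout is just an illustrative filename):

$ echo hi 10>/tmp/testout
$ cat /tmp/testout
hi 10

The '10' winds up as an argument to echo instead of being treated as a file descriptor.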
(The limitation has been faithfully copied by at least some Bourne shell reimplementations and was retained in ksh, but bash has dispensed with it, so you won't see this behavior on (most) Linux machines.)
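For contrast, this is roughly what the same thing looks like in bash, which is happy with multi-digit file descriptors:

$ bash -c 'exec 10>/dev/null; echo still running'
still running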
This limitation is implicit in the V7 sh manpage, which says about redirection:
If one of the above is preceded by a digit then the file descriptor created is that specified by the digit (instead of the default 0 or 1).
In the grand tradition of reading Unix manpages closely, the use of the singular 'digit' means that multiple digits aren't allowed.
This wording was carried on into the Solaris sh manpage, but not the FreeBSD one. The FreeBSD situation is interesting, since I believe the FreeBSD shell was rewritten from scratch; clearly the people doing the rewrite knew about (and decided to preserve) the limitation, but then they didn't document it.
(One might argue that the FreeBSD manpage's usage of '[n]> file' implies that 'n' can only be a single digit, but I think that the ice is thin, if only because they use things like '[n1]<&n2'.)