How I use virtualization (and what for)
The high level view is that I use virtualization for testing things on my workstation, which I think of as the typical sysadmin use of local virtualization. In terms of my taxonomy of virtualization usage, there are three different cases with a number of common elements. The common elements are that I only run VMs for a short time (I power them up, use them for a bit, and then shut them down again) and that I manage them by hand. Beyond that the cases split apart:
- I have a small number of long-lived images that I use for desktop
testing, primarily virtualized Windows desktops. These need good
graphical console access.
(I wish you could legally and easily virtualize an OS X desktop, because it would make testing various things so much easier. Right now we have to resort to a small floating collection of old OS X machines, which in practice means that we don't routinely test our systems against OS X.)
- my test server VM has disposable images and needs to be like real
hardware (because it's usually the prototype for things that will
wind up on real hardware). Today my actual usage has a highly
variable setup and needs basic text-mode console access; however,
in practice the basic setup of a new image is extremely constant
(I could always start from one of six basic images) and it could
be reduced to copying in a pre-built starter image.
(See the sidebar for a longer discussion.)
- sometimes I wind up testing things other than yet another Ubuntu
based server. These have disposable images, need to be like real
hardware, have a highly variable setup, and need at least basic
console access.
(In other words, today they're no different from my main test server VM but they could be if I handled my test server VM in a more efficient way.)
The result of all this is that my first priority is convenience. I don't care all that much about things like performance (provided that it's adequate, which it should be), scalability, or lots of management features, but because I interact with the virtualization system for much of the time that I'm using VMs at all, I care about how easy that is. A convenient, easy-to-use system avoids putting friction in the way of my testing and thus encourages me to do it more; an awkward, annoying one would tempt me to skimp on testing because really, do I need to wrestle with the VM system quite that much? Surely things are good enough as they are.
(Also, I don't want to discount the time saved and the lower aggravation from a system that's pleasant and smooth to use.)
(I didn't put this into my initial taxonomy, but image snapshots are relatively important to me. It's often more convenient to snapshot a server build partway through the customization process and then rollback to that point than to reinstall from scratch for yet another test cycle.)
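This snapshot-and-rollback workflow can be sketched with qemu-img's internal qcow2 snapshots (assuming qcow2-format disk images; the image path here is invented for illustration, and most virtualization frontends expose the same feature through their own interfaces):

```shell
# Take a named snapshot partway through the server build (VM shut down
# for offline snapshots), then roll back to it for the next test cycle.
qemu-img snapshot -c half-built /virt/testvm/disk.img   # create snapshot 'half-built'
# ... finish customizing, run tests, break things ...
qemu-img snapshot -a half-built /virt/testvm/disk.img   # roll back to 'half-built'
qemu-img snapshot -l /virt/testvm/disk.img              # list existing snapshots
```

Rolling back this way is usually much faster than reinstalling, which is exactly the convenience argument above.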
PS: note that we don't currently have any production server virtualization. All of our virtualization right now is on people's desktops for testing.
Sidebar: Elaborating on my test server VM situation
The first step in bringing up any standard server here is to do a completely standardized Ubuntu basic install of 32-bit or 64-bit 8.04, 10.04, or 12.04 (almost always 12.04 right now, since it's the current LTS version). The result is specific to the server's IP address but is otherwise both fixed and independent of what the machine will be customized into, and once this basic install is done I do all of the remaining setup steps through ssh.
Currently I redo this basic install process from scratch almost every time around. But I don't actually need to do it this way. I could instead build six starter disk images (all with the same standard IP I use for the test server VM) and then just copy in the appropriate one every time I want to 'reinstall' my test server VM; this would cover almost everything that I do with it and save time and effort to boot.
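As a sketch of what that could look like (all names and paths here are invented for illustration; the real images would be full disk images, not empty files), 'reinstalling' the test VM becomes a single copy:

```shell
#!/bin/sh
# Hypothetical sketch: reset the test server VM by copying one of six
# pre-built starter disk images over its disk, instead of redoing the
# standard Ubuntu base install from scratch.
set -e

# reset_testvm RELEASE ARCH IMAGEDIR TARGET
# e.g. reset_testvm 12.04 64 /virt/starter-images /virt/testvm/disk.img
reset_testvm() {
    src="$3/ubuntu-$1-$2.img"
    [ -f "$src" ] || { echo "no starter image: $src" >&2; return 1; }
    cp "$src" "$4"
}

# Demonstration with throwaway placeholder files standing in for the
# six real starter images (8.04/10.04/12.04, 32-bit and 64-bit).
tmp=$(mktemp -d)
for rel in 8.04 10.04 12.04; do
    for arch in 32 64; do
        : > "$tmp/ubuntu-$rel-$arch.img"
    done
done
reset_testvm 12.04 64 "$tmp" "$tmp/testvm-disk.img"
echo "test VM disk reset from ubuntu-12.04-64 starter image"
```

Since all six starter images bake in the same standard test-VM IP address, everything after the copy can proceed over ssh exactly as it does today.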
(A small confession: this didn't occur to me until I was planning this entry and actively thinking about why I was going to say that my test VM had a highly variable setup.)