One way to break down how people use virtualization
Virtualization is such a general technology that even when people are using it for whole-machine virtualization, there are any number of different ways they can use it. Today I want to present a number of usage dimensions for this (when I started I thought it was going to be a simple four-quadrant breakdown, but the more I thought about it, the more dimensions showed up).
So, here are some axes that you can chart virtualization usage along:
- when you power up VMs, do you leave them running for a long time or
  only run them for relatively short periods and then shut them down?
- are running VMs totally headless, in need of only basic text-mode
interactions with the console, or do you need full console graphics
support (perhaps even with 3D acceleration)?
(The former is typical of long-running servers, the latter is typical of virtualized desktops.)
- regardless of how long they run when powered up, are your VM images
long-lived or do you use them and then throw them away?
(Long-running VMs are necessarily long-lived, but short-running ones can still be long-lived too.)
- is the basic setup for a new VM more or less constant (perhaps to
the extreme of always starting with a cloned image) or highly
variable? (Clearly there's a continuum.)
- is VM management going to be automated or will it be done by hand
at small scale?
- how much does interacting with a VM need to be like interacting with real hardware?
(Some of these axes will be more or less forced if you operate at large scale with a lot of VMs.)
One of the reasons that thinking about this can be important is that different virtualization systems have different strengths and weaknesses in these areas. Understanding your own usage means understanding what's important to you and thus which virtualization systems are a good fit for you and which ones are a terrible fit.
(For one extreme hypothetical example, if your goal is small scale virtualized desktops it would be a terrible idea to pick a package that's aimed at headless ephemeral servers, full of support for things like fast cloning and rollback, command line management, mass starts and shutdowns, and with console access as an afterthought.)
In praise of KVM-over-IP systems
We're in the process of migrating to a new generation of server hardware and it is, for me, a kind of sad moment. You see, while the new servers have generally better specifications, the old servers have one big thing that the new servers don't: a built-in KVM-over-IP system as part of their remote management capabilities.
On the surface, perhaps KVM-over-IP ought not to be a big deal; all it really saves you is an occasional trip down to the machine room and we're not supposed to be that lazy. But this is the wrong way to look at it. What KVM-over-IP really means is that you can (re)install servers while doing other things.
When installing servers requires a trip to wherever the server is, it's an interruption and it means that you basically have to drop everything else you're doing to trek down to the machine room and babysit the server (don't forget the right install media, either). Interruptions are a pain, unproductive time is a pain, and sitting in a noisy, cold machine room is a pain too, so there's an incentive to avoid the whole thing as much as possible. All of this is friction.
KVM-over-IP systems remove this friction. You don't need to stop working on anything else, you don't need to leave your office, and you don't need all of the minutiae of installs (media, keyboard and display and mouse, a chair if you're going to be there long, a piece of paper with vital details about the machine, etc etc). If your installs take twenty minutes with fifteen minutes of non-interaction where you're just waiting for it to finish, no problem; this is just like any other fifteen-minute process that sits in a window in the corner of your display until it's done. And if you need things during the install for some reason, you have the full resources of your usual working environment.
(A lot of this also applies to any other sort of testing or troubleshooting that needs console access to the machine. As long as you don't need to change hardware itself, you can do everything from your desk with your full working environment available to help; you don't have to stop everything else to relocate to the machine room so you can babysit the machine's console.)
Or in short, a KVM-over-IP system goes a long way towards making real servers just as convenient to deal with as virtualized ones on your desktop. And that's pretty convenient, when you get down to it.
PS: some people's answer to this is 'oh, I'll install the server in my office'. Given how noisy modern servers are, this is generally not going to be popular with people around you even if you can stand the noise yourself. It also doesn't help if the server is already set up down in the machine room and you're reinstalling it to, for example, repurpose it or upgrade its OS.
As a side note, where KVM-over-IP really shines is when you have several machines to (re)install at once, especially if you're trying to do it with as short a downtime as feasible. With KVM-over-IP, installing multiple machines just means a few more windows on your display.
Sidebar: remote power cycling and so on
I'm assuming that KVM-over-IP also includes 'remote media', where you can use ISO images on your desktop as (virtual) CD or DVD drives on the server. This seems to be a general feature these days.
I'm not including remote power cycling as a KVM-over-IP benefit because you can get that in a lot of ways. Particularly, most servers these days have a basic IPMI management interface that supports it.
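(As a concrete sketch of what that looks like, here's roughly how you'd do it from your desk with the standard ipmitool client; the BMC hostname and credentials here are made-up placeholder values, and whether Serial-over-LAN actually gets you a useful console depends on the server's BIOS and OS being configured to redirect the console to serial.)

```shell
# Query the current power state of a server via its BMC over the network.
# 'bmc.example.com', 'admin', and 'secret' are placeholder values.
ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power status

# Remotely power cycle the machine (hard off, then back on).
ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power cycle

# Many BMCs also offer a basic text console via IPMI Serial-over-LAN,
# which covers the 'basic text-mode interactions' case without full KVM:
ipmitool -I lanplus -H bmc.example.com -U admin -P secret sol activate
```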
KVM-over-IP is highly useful in certain circumstances, for example if a machine has problems, needs to have its console inspected, and everyone's at home or whatever. But I'm assuming that the answer to that one without a KVM-over-IP system is either a shrug or calling a taxi, and anyway that sort of thing is generally rare.