I used libvirt's 'virt-install' briefly and it worked nicely

August 22, 2024

My normal way of using libvirt-based virtual machines has been to create them in virt-manager's convenient GUI, use virt-viewer to access their consoles when necessary, and use virsh for basic operations like starting and stopping VMs and rolling VMs back to snapshots (which I make heavy use of). Then recently I wrote about why and how I keep around spare virtual machines, and wound up discovering virt-install, which is supposed to let you easily create (and install) virtual machines from the command line. My first experience with it went well, so now I'm going to write myself some notes.
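
For illustration, the virsh side of that routine is only a handful of commands; a minimal sketch, where the VM name and snapshot name are made up:

virsh start vmguest7
virsh snapshot-create-as vmguest7 fresh-install
# ... poke at the VM, then roll it back:
virsh snapshot-revert vmguest7 fresh-install
virsh shutdown vmguest7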

(I spun up a new virtual machine from scratch in order to poke at FreeBSD a bit.)

Due to having set up a number of VMs through virt-manager, I had already defined the network I wanted, as well as a libvirt storage pool where the disks for the new virt-install VM could go (you can list both with virsh, as sketched after the command). With those already existing, using virt-install was mostly a matter of a long list of arguments:

virt-install -n vmguest7 \
   --memory 8192 --vcpus 2 --cpu host \
   -c /virt/images/freebsd/FreeBSD-14.1-RELEASE-amd64-dvd1.iso \
   --osinfo freebsd14.0 \
   --disk size=20 --disk size=20 \
   -w network=netN-macvtap \
   --graphics spice --noautoconsole
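
If you don't remember which networks and storage pools are already defined, virsh will list them; a quick sketch, where 'default' is just an example pool name:

virsh net-list --all
virsh pool-list --all
virsh vol-list default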

(I think I should have used '--cpu host-passthrough' instead, because I think '--cpu host' caused virt-install to copy the host CPU features into the new VM instead of telling the new VM to just use whatever the host had.)
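
One way to see what '--cpu host' actually produced is to dump the VM's XML and look at its <cpu> element; a sketch:

virsh dumpxml vmguest7 | sed -n '/<cpu/,/<\/cpu>/p'

With '--cpu host-passthrough' I'd expect a <cpu mode='host-passthrough' ...> element there, instead of a copied-out model and feature list.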

The command created a VM with 8 GB of RAM (FreeBSD's minimum recommended amount for root on ZFS), two CPUs that are just like the host's, two 20 GByte disks, and the right sort of networking (using the already defined libvirt network), and it didn't try to start any sort of console, since I was ssh'd into the VM host. Once the VM was running, I used virt-viewer on my local machine to connect to its console and went through the standard FreeBSD installer, in order to gain experience with it and see how it would go when I later did this on physical hardware.
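
Connecting from my local machine means giving virt-viewer a libvirt URI for the remote host; a minimal sketch, where 'vmhost' stands in for the real VM host's name:

virt-viewer -c qemu+ssh://vmhost/system vmguest7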

This didn't create quite the same thing that I would normally get in virt-manager; for instance, the VM was created with the (virtual) 'i440FX' chipset instead of the Q35 chipset that I normally use and that may be better (this might be fixed with '--machine q35', or perhaps '--machine pc-q35-6.2'). The 'CDROM' it wound up with is an IDE one instead of a SATA one, although FreeBSD had no objections to it. None of the various differences seem particularly important, since the result worked and I'm only doing this for testing. The VM's new disks did get sensible file names, ie ones based on the VM's name.
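
You can check which machine type a VM actually wound up with by looking at the <type> element in its XML; a sketch:

virsh dumpxml vmguest7 | grep 'machine='

An i440FX VM will show something like machine='pc-i440fx-...', while a Q35 one shows machine='pc-q35-...'.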

(When the install finished and rebooted, the VM powered off, but this might have been a peculiarity in how I did things.)

Virt-install can create transient VMs with --transient, but as its documentation notes, the disks for these VMs aren't deleted after the VM itself is cleaned up. There are probably ways to combine virt-install with some additional tooling to get truly transient VMs, where even their disks are deleted afterward, but I haven't looked into that since it's not really a use case I'm interested in right now. If I'm spinning up a VM today, I want it to stick around for at least a bit.
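
If you did want the disks cleaned up too, one approach is to delete the volumes by hand after the transient VM goes away; a sketch, where the volume name and the 'default' pool are assumptions about what virt-install created:

virsh vol-list default
virsh vol-delete scratch7.qcow2 --pool default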

(I'm also not interested in virt-builder or the automatic install side of virt-install; to put it one way, I want virtual versions of our physical servers, and they're not installed through cloud-init or other completely automated ways. I do have a limited use for guestfish to automatically modify VM filesystems.)
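
As a concrete illustration of the guestfish side, here's a minimal sketch that writes a file into a (shut down) VM's filesystem; the disk path is made up:

guestfish --rw -a /virt/machines/vmguest7.qcow2 -i <<'EOF'
write /etc/motd "this is a test VM\n"
EOF

The '-i' has guestfish inspect the image and mount its filesystems before running the commands.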


Comments on this page:

By B. Stack at 2024-08-23 09:13:27:

A lot of virt-install guests tend to power off at that first reboot time, if they are not Red Hat-based guest OSes. Or maybe it was non-Linux ones. It's a semi-common problem, and not a huge deal because you can then just `virsh start <domainname>`.

By Ian Z aka nobrowser at 2024-08-23 16:11:45:

I ended up deciding (like I already had; I just forgot, maybe multiple times :) that I'll never use any VM backend other than qemu/kvm, so all the abstraction that libvirt provides just gets in my way. I configured the VMs I currently needed with a shell script using qemu-system -- which could be a one-liner or even an alias because it's just a single command, but I made it a multiline script for readability.

By Ruben Greg at 2024-08-25 09:26:01:

Thanks for these tips, especially for someone in a university environment. Have you tried https://linuxcontainers.org/incus/ based VMs (not just containers)?

I found snapshotting and live migration to other hosts a great way to work, especially for backups.

By cks at 2024-08-25 21:48:12:

I haven't tried incus because it's not something I currently have any use for. I use VMs only for testing (and containers not at all); we don't have any production need for them. For this limited use, the more straightforward it is to do things on top of stock Ubuntu and Fedora, the better. Both package libvirt and it basically just works (and the bits that require wrestling would require wrestling with anything because of how Linux does things).
