My virtualization setup so far with Linux's virt-manager and friends

April 20, 2022

Recently, I reached a tipping point of dissatisfaction with VMWare Workstation due to both performance problems and its non-handling of HiDPI for guest consoles. So as is sometimes my way, I completely changed my virtualization setup starting this Monday, switching over to virt-manager and friends, which is to say the stack of libvirt, QEMU, and KVM. For the most part the change has been relatively straightforward, although there have been learning experiences. Mostly I've been using my new setup through the virt-manager GUI (with some excursions into virt-viewer); I can do a lot with virsh commands, but virsh is a big program with a lot of things to remember. Possibly I will use virsh more in the future as I memorize selected sub-parts of it.

My primary use for virtualization requires the servers to have IPs on one of our (public) networks, one that my work desktop is also connected to. This has been a long-standing weak point of the official Linux virtualization solutions, and I dealt with it in the way that was suggested to me way back then: I added another network interface to my desktop, connected it to the relevant network, and dedicated it to being a bridge for the virtual machines (in what is known as a 'macvtap' bridge). This is well documented by libvirt and seems to be the default behavior if you tell libvirt to bridge to some interface. I set my bridged network up by copying the XML from this 2016 article, making appropriate changes for my network device names, and loading it with 'virsh net-define ...'. Possibly I could also have set it up inside virt-manager, but I didn't try. This works fine to let my desktop talk to guests, although the traffic makes a round trip through my office switch.
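
For concreteness, here's roughly what such a network definition looks like; the network name 'bridged' and the interface 'eno2' are stand-ins for my actual names:

    <!-- bridged.xml; 'bridged' and 'eno2' stand in for my real names -->
    <network>
      <name>bridged</name>
      <forward mode="bridge">
        <interface dev="eno2"/>
      </forward>
    </network>

    # then load, start, and autostart it:
    virsh net-define bridged.xml
    virsh net-start bridged
    virsh net-autostart bridged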

(Using a dedicated network interface to bridge the virtual machines onto the network is wasteful, but it means that I don't have to change any of my office workstation's complicated network setup, which has a maze of VLANs, iptables rules, policy-based routing, and a WireGuard tunnel.)

I also have an internal NAT'd network for some Fedora test desktops. Initially I used virt-manager's (and virsh's) 'NAT' connection type, but this had various bad side effects and I'm not sure it even fully worked in my complicated network environment. So I abandoned it in favour of a 'do it yourself' NAT setup. In virt-manager this is an "open network", which causes libvirt to not make any other networking changes, and then I have a script to set up a relatively standard collection of NAT rules. The one complication is that I can't run the script on boot, because libvirt only creates the 'virbr1' bridge involved on demand. Instead, I use a libvirt 'network' hook script, /etc/libvirt/hooks/network, to set everything up when the relevant virtual network has been started for the first time.
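
The hook script is basically a dispatcher; libvirt runs it with the network's name and an operation as arguments. A minimal sketch, where the network name 'nat' and the rules script's path are stand-ins for my actual ones:

    #!/bin/sh
    # /etc/libvirt/hooks/network: run by libvirtd as
    #   network <network-name> <operation> <sub-operation>
    # Run my NAT rules script once libvirt has created virbr1.
    if [ "$1" = "nat" ] && [ "$2" = "started" ]; then
        /usr/local/sbin/vm-nat-rules
    fi
    exit 0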

(In the future, I may supplement this with another bridged network. Having to DIY my own NAT is less convenient and magical than VMWare Workstation's built-in NAT, but I feel I understand it better and the whole thing is more under my control. For example, I've now made all NAT'd traffic come out of a specific IP used only for this purpose, just in case.)
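
As a sketch of what the rules script sets up (the guest subnet, the bridge name, and especially the outgoing IP here are made-up examples, not my real ones):

    # NAT guest traffic out via one dedicated address, using SNAT
    iptables -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 \
        -j SNAT --to-source 203.0.113.10
    # let guest traffic be forwarded out of and back into the bridge
    iptables -A FORWARD -i virbr1 -s 192.168.122.0/24 -j ACCEPT
    iptables -A FORWARD -o virbr1 -d 192.168.122.0/24 \
        -m state --state ESTABLISHED,RELATED -j ACCEPT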

By default, libvirt puts all of your virtual machine disk images into /var/lib/libvirt/images. To change this, I used virt-manager to define a suitable directory in my VM filesystem as a new 'storage pool', stopped the "default" pool, renamed it to "default-var", and renamed my new storage pool to "default" to make it, well, the default pool. Similarly, I renamed libvirt's "default" virtual network (which is its own NAT setup, the one that didn't work for me) to "default-nat" and made my own DIY NAT network setup be called "default". With the virtual networks, it was crucial to set the newly renamed "default-nat" network to not autostart on boot; otherwise, libvirt might helpfully insert its unwanted iptables rules and so on.
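
Virt-manager did all of this for me, but roughly the same shuffle can be done with virsh; since virsh has no 'rename' command, renaming a pool is dump the XML, edit the name, and redefine:

    virsh pool-dumpxml default > pool.xml
    virsh pool-destroy default            # stop the pool
    virsh pool-undefine default           # remove its definition
    # ... edit <name> in pool.xml to 'default-var', then:
    virsh pool-define pool.xml
    # and crucially for the networks, no autostart for the old NAT:
    virsh net-autostart --disable default-nat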

(My VM filesystem is on ZFS, but it uses the default ZFS recordsize of 128 KiB. It's somewhat tempting to make a special ZFS filesystem with a smaller recordsize and use it for libvirt images. With some directory shuffling and copying, I wouldn't even need to change the libvirt storage pool definition. I'm not certain how much this would improve my life; the ZFS pool the VM filesystem is in is on SSDs, and the whole thing seems to perform fine even with the default recordsize, despite disk image files effectively being databases.)
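
If I were to try it, the ZFS side would be a one-liner (the dataset name here is made up, and 64 KiB is just one plausible choice, matching what I believe is qcow2's default cluster size):

    # a hypothetical dedicated filesystem for disk image files
    zfs create -o recordsize=64K tank/vm/images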

To use virt-manager without various sorts of authentication hassles, I put myself in the 'libvirt' group. At first I thought that 'virsh' itself (the command line tool for dealing with libvirt) was still only usable by root, but just now I found this serverfault answer that explains that by default, virsh (run as me) isn't connecting to the system libvirt QEMU environment but instead to an empty per-user 'session' QEMU environment for my normal Unix login (while virt-manager was connecting to the system QEMU). To fix this, you need to define $LIBVIRT_DEFAULT_URI to be 'qemu:///system'. I've opted to do this with a personal cover script, so now commands like 'virsh list --all' and 'virsh start fedora-35' work fine for me as a regular user. Apparently there's also a libvirt configuration file I could use, ~/.config/libvirt/libvirt.conf; in some brief testing, this works too (including remotely over SSH, with the libvirt URI 'qemu+ssh://desktop.work/system').
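
The cover script is nothing special; a minimal version of it, along with the equivalent libvirt.conf setting, looks something like this:

    #!/bin/sh
    # personal 'virsh' cover script: force the system URI and hand
    # everything to the real virsh.
    LIBVIRT_DEFAULT_URI=qemu:///system
    export LIBVIRT_DEFAULT_URI
    exec /usr/bin/virsh "$@"

    # the configuration file alternative, in ~/.config/libvirt/libvirt.conf:
    uri_default = "qemu:///system"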

I converted a few VMWare disk images from my Fedora test VMs over to QEMU's native qcow2 format, but otherwise I (re)started all of my VMs from scratch, generally taking the opportunity to give them bigger disks than before (20 GB or even 30 GB doesn't go as far as it used to). Since I was restarting all of my VMs from scratch (even for ones where I was using converted disk images), I've mostly set them up using VirtIO disks and especially VirtIO video, which appears to work significantly better in a HiDPI environment than 'QXL' video.

(Your mileage with VirtIO video may vary between virt-manager and virt-viewer.)
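
The disk image conversion itself is a single qemu-img command (the file names here are examples, not my real ones):

    # convert a VMWare .vmdk disk image to qcow2
    qemu-img convert -f vmdk -O qcow2 fedora-test.vmdk fedora-test.qcow2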

Generally I've been setting up all of the VMs with 4 GB of RAM and 2 virtual CPUs; on my work desktop this leaves me with plenty of RAM (and CPU) even if I have a couple running at once, which I usually don't. Libvirt makes it easy to leave a VM running if I want to, but generally I shut them off once I'm done testing whatever I'm testing in them.

PS: The consoles of libvirt guest machines are normally exposed via SPICE, so you can in theory use any SPICE client to connect to them, such as Remmina. In practice, virt-manager and virt-viewer handle everything for you, including remote access. For local use, I could script something using 'virsh domdisplay <guest>', but making this work remotely over SSH would take some more work.
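
As a sketch, something like this ought to work locally ('fedora-35' is one of my guests; remote-viewer is virt-viewer's client program):

    # hand the guest's SPICE display URI to a SPICE client
    remote-viewer "$(virsh domdisplay fedora-35)"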
