2021-08-25
I'm turning off dnf-makecache on my Fedora machines
Back when I was in a situation where I wanted to use minimal bandwidth, one of the surprise bandwidth uses was from Fedora's dnf-makecache service and its associated systemd timer. This service runs 'dnf makecache --timer', which will, to quote the manpage:
Downloads and caches metadata for enabled repositories. Tries to avoid downloading whenever possible (e.g. when the local metadata hasn't expired yet or when the metadata timestamp hasn't changed).
However, if a DNF repository's metadata has changed, this will download the updated metadata. How much gets downloaded varies, but under some circumstances it can add up to a surprisingly large amount. The --timer option makes DNF skip this if you're on battery power, but you may not be.
(I'm not sure if 'dnf makecache' respects NetworkManager's setting to say that a connection is a "metered" connection, but even if it does, that's a heuristic that can be fooled.)
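(If you do want to try the metered-connection approach, you can inspect and set this through nmcli. A sketch, where "Wired connection 1" is a placeholder for whatever your connection is actually called:

    # check the current metered setting for a connection
    nmcli -f connection.metered connection show "Wired connection 1"

    # mark the connection as metered; nmcli also accepts 'no' and 'unknown'
    nmcli connection modify "Wired connection 1" connection.metered yes

)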
Unfortunately, Fedora's timer settings for dnf-makecache are what I would politely call quite aggressive. The timer is set to activate once every one to two hours and also ten minutes after boot. This is more or less perfect for delivering a bandwidth surprise when you don't want it.
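You can see this schedule for yourself with 'systemctl cat dnf-makecache.timer'. On my machines the [Timer] section looks roughly like this (trimmed; the exact settings may differ between Fedora releases):

    [Timer]
    # ten minutes after boot
    OnBootSec=10min
    # then an hour after the service last ran, plus a random
    # delay of up to another hour (hence 'one to two hours')
    OnUnitInactiveSec=1h
    RandomizedDelaySec=60m
    Unit=dnf-makecache.service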
All of this is a good reason to turn it off on my laptop, but that doesn't quite explain why I would want to turn it off on either my home desktop or my work desktop. There are two reasons for this. First, on my home desktop, leaving dnf-makecache enabled means that every so often I would have an unpredictable surge in bandwidth usage, one that hit the limit of my DSL line, possibly at an inconvenient time (such as during a work video conference). I can live without that sort of surprise.
Second and more broadly, the cache isn't actually useful to me in practice. I apply Fedora DNF updates by hand, and when I apply them I always want to get the very latest updates, so before I run 'dnf check-update' I always force a metadata check anyway (with, in theory, 'dnf clean expire-cache', although sometimes I forget and use the much more heavyweight 'dnf clean metadata'). Explicitly checking for and being prepared to apply updates is also the time when I'm willing to see my home bandwidth go to DNF, not other things.
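In concrete terms, my by-hand routine amounts to something like this (as root):

    # mark the cached metadata as expired so the next command
    # fetches fresh metadata; this is cheaper than 'dnf clean
    # metadata', which throws the cache away entirely
    dnf clean expire-cache

    # see what updates are pending, then apply them
    dnf check-update
    dnf upgrade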
Leaving dnf-makecache enabled on my work machine probably isn't doing any harm (it has plenty of Internet bandwidth), but it's one more piece of complexity and background (DNF) activity that could someday interfere with what I'm doing on the machine. Plus, I'm currently a bit irritated with dnf-makecache as a whole, so I'm inclined to get rid of it. Fedora machines already have enough magical things happening in the background (as do all modern Linux machines).
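Actually turning this off is straightforward; the thing to disable is the timer, not the service:

    # stop the timer now and keep it from starting again on boot
    systemctl disable --now dnf-makecache.timer

    # verify that it's no longer in the list of pending timers
    systemctl list-timers dnf-makecache.timer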
PS: I'm pretty certain I first got exposed to dnf-makecache's bandwidth usage (and NetworkManager metered connections) through this old Reddit note.
My interest in Intel's future discrete GPUs (and my likely disappointment)
As a Linux user who doesn't play games, I have modest needs in graphics that in theory should be easily met by integrated GPUs. My (future) standard is that I want to be able to drive two 4K displays at 60 Hz. But in practice, trying to use integrated GPUs for this significantly limits my choice of both CPUs and motherboards. AMD's current Ryzen line has generally limited integrated GPUs to lower performance CPUs (perhaps because of thermal limits), while it can be hard to find Intel CPU motherboards that support integrated graphics, especially if you want one that uses higher end chipsets in order to get other features (for example, two x4 capable NVMe drives and a decent number of SATA ports).
(All of this is an aspect of how I want a type of desktop PC that's generally skipped.)
In recent history, there have been effectively only three choices for x86 graphics under Linux: Intel, AMD, and NVIDIA. If you care about open source driver quality, Intel is generally first (with only a few stumbles), AMD is second, and NVIDIA is a very distant third. However, Intel historically didn't make discrete GPUs, and so if you needed a discrete GPU (as I did on my work Ryzen based desktop), people who cared about open source drivers were strongly steered to AMD. And so my work desktop has a basic AMD GPU of the time.
For some time now, Intel has been lurching toward offering discrete GPUs (ie, GPU cards) in addition to their integrated GPUs. Recently we even sort of got a date for when the first ones might be available, which is theoretically the first quarter of next year (from Anandtech). This sounds great, and it's just what I'd like to make building another PC easier. A solid Intel discrete GPU card that's well supported by open source drivers would open up my choice of CPUs and motherboards while hopefully having fewer issues than AMD GPUs (or at least creating more competition to encourage both companies).
The flaw in this lovely picture of the future is that what Intel is likely to offer is probably not what I want. In a perfectly reasonable decision, Intel is apparently talking about starting with high-performance GPU cards, which are also expensive, probably hot, and unlikely to be passively cooled. This isn't really what I want; my ideal discrete GPU is inexpensive, low-power, and passively cooled or at least basically silent. I don't need a powerful gaming or GPU computation GPU, and I doubt I have any software that could use one.
(Well, darktable might be able to use the GPU through OpenCL, if I started taking and processing photos again.)
Even if Intel only offers mid-range and higher GPU cards, I might still end up choosing one. This isn't because I think I'll need the GPU compute power (although maybe someday), but instead because I'm not sure I fully trust low end GPU cards any more. Plus, my impression is that mid-range GPU cards now pay more attention to being quiet at low usage levels, since people have realized that this is where they spend a lot of their time.