Improving my desktop's responsiveness with the cgroup V2 'cpu.idle' setting

December 12, 2024

I periodically build things on my Linux desktops that use all of the CPU that they can get; my two usual examples are Firefox and the Go toolchain (Firefox uses all the CPU during its long compile process, while the Go toolchain uses it during its self-tests). Doing this has historically made my desktop session less responsive, which I especially tended to notice in how fast dmenu popped up when I invoked it.

(Because dmenu normally pops up so fast, my reflex is to hit my special key for it and then basically immediately start typing, with the expectation that my typing will go to dmenu's text entry widget. When my desktop is lagging, this goes wrong in the obvious way; my early 'dmenu' keystrokes either vanish or go into whatever text-accepting X window currently has focus. Since I use dmenu a lot, this makes lag very noticeable.)

Recently I wrote about using systemd-run to limit something's memory usage, and in passing noted the CPUWeight= property, and especially how it had the special value 'idle', which was supposed to make the cgroup only get scheduled when there wasn't anything else going on. Later, it occurred to me that I don't really care how fast Firefox or Go rebuild, and I'd trade slower build times for a desktop that stayed responsive. So as an experiment I tried running some Go and Firefox builds with 'systemd-run --user --scope -p CPUWeight=idle ....'. This seems to work very well for me, to the point where I barely notice when Go or Firefox is building. At the same time, this doesn't seem to drastically slow down the build, at least in the usual state where I'm not using my desktop for anything else particularly intensive.

(To make life easier for myself, I've written a little 'runidle' script that just does the systemd-run thing for me.)

Systemd's 'CPUWeight=idle' option sets the cpu.idle interface file to the special value '1' in the cgroup v2 cpu controller. Although systemd-run is probably the most convenient way to arrange this, anything that sets cpu.idle to '1' should have the same effect.
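If you're curious whether something you're running actually landed in an idle cgroup, you can look at the cpu.idle file for your own cgroup. Here is a minimal sketch that does this, assuming a pure cgroup v2 system with the unified hierarchy mounted at /sys/fs/cgroup (the function name is mine, not anything standard):

```python
from pathlib import Path

def current_cpu_idle():
    """Return our cgroup's cpu.idle value ('0' or '1'), or None."""
    procfile = Path("/proc/self/cgroup")
    if not procfile.exists():
        return None  # not Linux, or /proc isn't mounted
    # On a pure cgroup v2 system this file has a single '0::<path>' line.
    for line in procfile.read_text().splitlines():
        if line.startswith("0::"):
            cg = Path("/sys/fs/cgroup") / line[3:].lstrip("/")
            idle = cg / "cpu.idle"
            # cpu.idle only exists in (non-root) cgroups where the cpu
            # controller is enabled; it reads '1' for idle scheduling
            # and '0' (the default) otherwise.
            if idle.exists():
                return idle.read_text().strip()
    return None

print(current_cpu_idle())
```

Run inside a 'systemd-run --user --scope -p CPUWeight=idle' scope, this should print '1'; in an ordinary session cgroup it prints '0' (or None where the cpu controller isn't enabled).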

Setting cpu.idle is documented as being the cgroup analog of setting all of the processes to the SCHED_IDLE scheduling policy (which is sort of described in sched(7)). Given this, I've also experimented briefly with directly running the whole set of build processes as SCHED_IDLE processes, using chrt. This is done with, for example, 'chrt -i 0 ./all.bash'. This appears to work basically as well, but I'd rather put the whole thing in its own cgroup that's set to an overall idle scheduling policy.
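The per-process version of this is also reachable from inside a program, since sched_setscheduler(2) is what chrt uses under the hood. A minimal sketch using Python's standard os module (the function wrapper is mine):

```python
import os

def go_idle():
    """Switch the calling process to SCHED_IDLE, like 'chrt -i 0 ...'."""
    # Moving yourself *into* SCHED_IDLE needs no privileges; moving back
    # out may, depending on RLIMIT_NICE (see sched(7)).
    os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))
    return os.sched_getscheduler(0) == os.SCHED_IDLE

# The policy is inherited across fork() and exec(), which is why wrapping
# just the top-level build command covers the whole process tree.
print(go_idle())
```

This inheritance is the same reason 'chrt -i 0 ./all.bash' covers everything the build spawns, although as noted I'd still rather have the whole thing in one cgroup with cpu.idle set.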


Comments on this page:

By twila at 2024-12-13 00:12:06:

Process execution on Linux has, historically, had enormous slowdowns on loaded systems. I haven't checked lately, but in the Linux 2.x days it could easily take a second or several to launch a trivial program that would normally take milliseconds. Even now, process-launching is sufficiently heavyweight that I can only launch 1000-2000 /bin/true processes per second from a simple "bash" loop (assuming little other load), whereas millions of syscalls could be done in the same time. I'm not sure if it's the fork() or exec*() that's slow.

Anyway, if you can get two existing programs to communicate without spawning anything—like, if you can script a window manager to catch the hotkey and send a message to an already-running dmenu, which will then connect to the display server and create its window—it can really pay off. Back when XMMS (version 1) was the popular player, I did that with whatever window manager I was using at the time (for its play/pause/etc. hotkeys), and the difference was night and day. I was always a bit baffled that so many programs were so willing to create and destroy sub-processes; maybe the authors just had better computers than I.

How does this differ in practice from just using a high niceness for these jobs?

By cks at 2024-12-14 23:24:54:

My answer was long enough to wind up in Cgroup V2's cpu.idle setting versus process niceness. The short version is that I trust SCHED_IDLE more, even if I can't necessarily observe a clear difference right now.

Written on 12 December 2024.


Last modified: Thu Dec 12 23:18:01 2024