My mixed feelings about 'swap on zram' for Linux

June 9, 2020

Recently I read about how Fedora will be enabling 'swap on zram', including for upgraded machines, in a future version of Fedora. I suspect that a similar change may some day come to Ubuntu as well, because it's an attractive feature from some perspectives. My feelings are a bit more mixed.

Zram is a dynamically sized compressed block device in RAM (ie a compressed ramdisk); 'swap on zram' is using a zram device as a swap device (or as your sole swap device). This effectively turns inactive RAM pages into compressed RAM in an indirect way while pacifying the kernel's traditional desire to have some swap space. The pitch for swap on zram is very nicely summarized on the Fedora page as 'swap is useful, except when it's slow'. Being in RAM, swap on zram is very fast; it's the fastest swap device you can have, faster than SSD or even NVMe.
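For the curious, here is a minimal sketch of setting one up by hand (as root); the device size and swap priority are illustrative, and distributions that enable this by default do it through their own tooling rather than commands like these:

```shell
# Load the zram module with a single device.
modprobe zram num_devices=1
# Set the device's maximum (uncompressed) size; zram only actually
# consumes RAM as pages are compressed into it.
echo 4G > /sys/block/zram0/disksize
# Format it as swap and enable it with a high priority so the kernel
# prefers it over any slower disk-based swap.
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```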

(This implies that how much of an advantage swap on zram is for your system depends partly on how fast your existing swap storage is. But RAM is still much faster than even NVMe.)

The drawback of swap on zram is that swapping a page out to it doesn't actually free all of that page's memory; instead, the usual estimate is that swapped data will generally compress to about half its previous size. This drawback is the source of my mixed feelings about swap on zram for my Fedora desktops and our Ubuntu servers.


On my Fedora desktops, I generally barely use any swap space, which means that swap on zram would be harmless. If I do temporarily use a surge of swap space, being able to get the contents back fast is probably good; Linux has generally had an irritating tendency to swap out things I wanted, like bits of my window manager's processes. Both my home machine and my work machine have 32 GB of RAM, and peak swap usage over the past 120 days has been under a gigabyte, so I'm barely going to notice the memory effects. As a result I'm likely to leave swap on zram in its default enabled state when Fedora gives it to me.
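(If you want to check how this looks on your own machine, the basic inspection commands are below; `sar -S` for historical swap usage assumes you have sysstat installed and collecting data, and how far back it goes depends on your sysstat retention settings.)

```shell
# Show current swap devices, their priorities, and how much of each is used.
swapon --show
# Show overall memory and swap usage right now.
free -h
# Show historical swap usage statistics, if sysstat is collecting them.
sar -S
```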

Unfortunately this is not the case for our Ubuntu LTS servers. Those of our Ubuntu servers that use much swap at all tend to eventually end up with their swap space full or mostly full of completely idle data that just sits there. Keeping even a compressed version of this data in RAM is not what we want; we really want it to be swapped out of memory entirely. Swap on zram would be a loss of RAM for us on our Ubuntu servers. As a result, if and when Ubuntu enables this by default, I expect us to turn it off again.

One way to put this is that swap on zram is faster than conventional swap but not as useful and effective for clearing RAM. Which of these is more important is not absolute but depends on your situation. If you're actively swapping, then speed matters (fast swap lowers the chances of swapping yourself to death). If you're instead pushing out idle or outright dormant memory in order to make room for more productive uses of that RAM, then clearing RAM matters most.


Comments on this page:

Most of the systems I work with are usually properly sized and we usually also account for regular memory spikes (peak traffic, etc.). But even then having some swap is certainly beneficial, and I would argue "must-have".

My approach with ZRAM has always been some kind of hybrid, where I would have x amount of ZRAM with higher priority than regular swap. So when the system started using swap, it would first go to this faster device, at the expense of a few CPU cycles.

I understand that you might still want to disable that option completely when you are 100% sure that swapped out pages will always be somewhat inactive, so you don't have thrashing, but on systems that spike memory and cause that swap to be active, ZRAM is very much welcome.

By George at 2020-06-09 16:12:49:

You may try using two swaps with different priorities, one for zram and a second for a normal block device. I have trouble reasoning about which should have the higher priority, though.
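The priority arrangement George describes can be expressed in /etc/fstab; the kernel fills higher-priority swap first, so with entries like these (device names are illustrative) zram absorbs swap traffic before the disk partition does:

```shell
# /etc/fstab fragment: two swap areas with explicit priorities.
# The kernel uses the highest-priority swap area until it is full,
# then falls through to the lower-priority one.
/dev/zram0  none  swap  defaults,pri=100  0 0
/dev/sda2   none  swap  defaults,pri=10   0 0
```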

By cks at 2020-06-09 17:04:04:

Ivan: my estimate (or guess) is that enough zram to do us much good in a swap storm would be too much zram eating memory normally and vice versa. A 512 MB or 1 GB zram won't absorb much, but a big zram will eat too much memory.

George: What I think we really want is a two stage swap out process. In the first stage memory moves from ready in RAM to compressed in RAM (to zram) and if it's unused for long enough it's then moved out of RAM to disk. I can't think of how to get something like that with relative priorities for two swap areas; one or the other is always at risk of being used at the wrong time or for the wrong things. However, this appears to be available as zswap.

(I'm not sure if zswap would do us any good, since I think our swap patterns are highly biased towards the sort of things that disk swap is good for.)
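For anyone who wants to experiment with the zswap approach mentioned above, it can be toggled through sysfs at runtime, assuming your kernel was built with zswap support; the pool percentage shown is just the current default, not a recommendation:

```shell
# Enable zswap: would-be swap pages are compressed into a RAM pool,
# and the least recently used ones are written out to the real swap
# device when the pool fills up.
echo 1 > /sys/module/zswap/parameters/enabled
# Cap the compressed pool at 20% of RAM.
echo 20 > /sys/module/zswap/parameters/max_pool_percent
# To make this persistent, the equivalent kernel boot parameters are:
#   zswap.enabled=1 zswap.max_pool_percent=20
```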

By Twirrim at 2020-06-11 12:14:09:

Back in the early-to-mid 90s, there was a company called Connectix that produced "RAM Doubler", which relied on transparent memory compression. It's always curious to see things come and go in cycles and take on slightly different approaches each time.

Probably the stronger similarity is to MagnaRAM from QEMM, https://en.wikipedia.org/wiki/QEMM#MagnaRAM

By Greg A. Woods at 2020-06-11 14:39:51:

I really don't know anything about ZRAM, and I'm not so sure I know anything about macOS/Darwin compressed RAM either. But I think macOS does automatically do more or less the "right" thing: it first compresses least recently used blocks and coalesces these compressed blocks together, and then, as memory pressure increases further, it starts moving the oldest of those regions of compressed blocks out to a swap device or file. Apparently the algorithms macOS uses are based on this paper:

https://www.usenix.org/legacy/publications/library/proceedings/usenix01/cfp/wilson/wilson_html/acc.html

My naive interpretation is that Linux ZRAM is inferior, but that Linux Zswap might be more comparable and more desirable.
