My uncertainty about swapping and swap sizing for SSDs and NVMe drives
The traditional reason to avoid configuring a lot of swap space on your servers, and to avoid using swap in general, was that lots of swap space made it much easier for your system to thrash itself into total overload. But that's wisdom (and painful experience) from back in the days of system-wide 'global' swapping and your swap being on spinning rust (ie, hard drives). Paging evicted memory back in (whether from swap or from its original files) is mostly random IO, and spinning rust had hard limits on how many IOPS it could sustain, which often had to be shared between swapping and regular IO. And with global swapping, any process could be victimized by having to page things back in, or have its regular IO delayed by swapping IO. In theory, things could be different today.
Modern SSDs and especially NVMe drives are much faster and support far more IOPS, especially for read IO (ie, paging things back in). Paging is still quite slow compared to simply accessing RAM, but it's not anywhere near as terrible as it used to be on spinning rust; various sources suggest that you might see page-in latencies of 100 microseconds or less on good NVMe drives, and perhaps only a few milliseconds on SSDs. Since modern SSDs and especially NVMe drives can reach and sustain very high random IO rates, this paging activity is also far less disruptive to whatever other IO other programs are doing.
(The figures I've seen for random access to RAM on modern machines are on the order of 100 nanoseconds. If we assume the total delay for a NVMe page-in is on the order of 100 microseconds (including kernel overheads), a page-in costs you around 1,000 RAM accesses. This is far better than it used to be, although it's not fast. Every additional microsecond of delay costs another 10 RAM accesses.)
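A quick back of the envelope calculation makes these ratios concrete. The latency numbers here are the same rough ballpark figures as above, not measurements from any particular hardware:

```python
# Rough cost of a page-in expressed in equivalent random RAM accesses.
# All latency figures are the ballpark numbers quoted in the text, not
# measurements of real hardware.
RAM_ACCESS_NS = 100            # ~100 nanoseconds per random RAM access
NVME_PAGEIN_NS = 100_000       # ~100 microseconds on a good NVMe drive
SSD_PAGEIN_NS = 1_000_000      # ~1 millisecond on a SATA SSD

def pagein_cost_in_ram_accesses(pagein_ns, ram_ns=RAM_ACCESS_NS):
    """How many RAM accesses one page-in is worth at these latencies."""
    return pagein_ns // ram_ns

print(pagein_cost_in_ram_accesses(NVME_PAGEIN_NS))  # 1000
print(pagein_cost_in_ram_accesses(SSD_PAGEIN_NS))   # 10000
```

This is where the '1,000 RAM accesses' and '10,000 RAM accesses' figures in this entry come from.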
Increasingly, systems also support 'local' swapping in addition to system-wide 'global' swapping, where different processes or groups of processes have different RAM limits and so one group can be pushed into swapping without affecting other groups. The affected group will still pay a real performance penalty for all of the paging it's doing, but other processes should be mostly unaffected; they shouldn't have their pages evicted from RAM any faster than they otherwise would be, so if they weren't paging before they shouldn't be paging afterward. And with SSDs and NVMe drives having high concurrent IO limits, the other processes shouldn't be particularly affected by the paging IO.
If you're using SSDs or NVMe drives with enough IO capacity (and low enough latency), even system-wide swap thrashing might not be as lethal as it used to be. If everything works well with 'local' swapping, a particular group of processes could be pushed into swap thrashing by their excessive memory usage without doing anything much to the rest of the system; of course they might not perform well and perhaps you'd rather have them terminated and restarted. If all of this works, perhaps these days systems should have a decent amount of swap, much more than the minimal swap space that we have tended to configure so far.
(All of this is more true on NVMe drives than SSDs, though, and all of our servers still use SSDs for their system drives.)
However, all of this is theoretical. I don't know if it actually works in practice, especially on SSDs (where even a one-millisecond delay for a page-in costs the same as 10,000 accesses to RAM, and one millisecond is probably fast for SSDs). System-wide swap thrashing on SSDs seems like a particularly bad case, and it's our most likely case on most servers. Per-user RAM limits seem like a better case for using a lot of swap, but even then we may not be doing people any real favours and they might be better off having the offending process just terminated.
(All of this was sparked by a Twitter thread.)
My experience with x2go is that it's okay but not compelling
Various people's tweets and comments on earlier entries pushed me into giving x2go a bit of a try, despite having low expectations because 'seamless windows' in X are challenging for remote desktop software. The results are somewhat mixed and my view so far is that x2go isn't compelling for me. To start with, what I want from any program like this is that it work like 'ssh -X' but perform faster. I specifically don't want to run a remote desktop; I want to run X programs remotely.
On the positive side, I was genuinely surprised by how much worked. X2go properly supported multiple programs opening multiple top-level X windows that work just like regular X windows, and it even arranged for their X windows on my local display to have the correct window title, X class and resource name, icon, properties, and so on. Cut and paste worked (at least in xterm). I could even suspend a session and then resume it, with all of the windows disappearing from my display and then reappearing later. In a lot of ways, the windows displayed for the remote programs acted like they were real windows created through 'ssh -X', which definitely helped the overall experience.
(My window manager cares about the X class and resource names, for example, because it treats windows from some programs specially, especially xterms. That all worked with x2go remote xterm windows.)
Performance felt somewhat better than 'ssh -X', and my monitoring suggests that x2go was using clearly less bandwidth on my DSL link. However, some of this performance was clearly achieved by skipping updates, which could leave things like the desktops of VMWare machines feeling jerky. More text-based things like GNU Emacs and xterm felt more like I was using them at work (or locally), although 'ssh -X' is already generally pretty good for them and I don't think there was much difference.
Unfortunately, x2go doesn't propagate your local X resource database into the remote X server that all of those remote X programs are actually creating windows on, nor does it have a setting to scale up windows by 2x. This meant that remote programs weren't scaled properly for my HiDPI display and came out tiny (scaling normally requires at least some X resource settings). I was able to fix this by manually copying my X resources over, but the need to do this (manually or with some sort of wrapper automation) makes x2go far less friendly.
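The manual copying can be scripted; a rough sketch of the wrapper automation, assuming passwordless ssh to the remote host and that you know the DISPLAY of x2go's hidden X server there (both assumptions on my part, and the display number will vary per session):

```python
# Sketch: dump the local X resource database with 'xrdb -query' and
# merge it into a remote X server's database. The hostname and the
# ':50' display here are illustrative; x2go's hidden X server gets a
# session-specific display number.
import subprocess

def merge_command(remote_host, remote_display):
    # The ssh command that feeds our local resources to the remote xrdb.
    return ["ssh", remote_host,
            "env DISPLAY=%s xrdb -merge -" % remote_display]

def copy_xresources(remote_host, remote_display=":50"):
    # Dump the local X resource database as text.
    resources = subprocess.run(["xrdb", "-query"], capture_output=True,
                               text=True, check=True).stdout
    # Merge it into the hidden remote server's database over ssh.
    subprocess.run(merge_command(remote_host, remote_display),
                   input=resources, text=True, check=True)

# copy_xresources("work-desktop", ":50")
```

This is only a sketch of the idea; in practice you would need to discover the session's actual display number somehow.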
As I expected, Firefox's X remote control doesn't work over x2go, because there's no 'Firefox' window on x2go's hidden X server (this may also affect the XSettings system). I think that the remote X windows are also oblivious to whether or not they've been iconified on the local X server. In another glitch, my VMWare machines couldn't change the X cursor, although regular remote windows could. And I was unable to find a way to make the overall x2go client window disappear or to automatically start a session; it appears to always require some GUI interaction.
(The x2goclient program is also very chatty to standard output.)
It's clear to me that x2go is not a good replacement for 'ssh -X' for short-term, disposable windows; there's simply too much fiddling around by comparison. Instead I would probably need to use x2go to establish a persistent launcher program, so that I wasn't constantly fiddling around in the GUI and re-establishing X resources and so on. Once the launcher was running under x2go, I could start additional remote X programs from it on demand with just a mouse click or whatever, which is much closer to what I'd like.
Given all of the effort required to build and use a reliable and non-annoying x2go environment, along with the limitations and glitches, I currently don't feel that I'll use x2go very much. It would be a different matter if x2go could scale up windows by 2x, because then it would be great for VMWare (where the consoles of virtual machines are unscaled and tiny, although the VMWare GUI elements can be scaled up).
(Also, I couldn't get it to work to an Ubuntu 20.04 server, only to my office Fedora 33 desktop.)