Wandering Thoughts

2019-08-07

What has to happen with Unix virtual memory when you have no swap space

Recently, Artem S. Tashkinov wrote on the Linux kernel mailing list about a Linux problem under memory pressure (via, and threaded here). The specific reproduction instructions involved having low RAM, turning off swap space, and then putting the system under load, and when that happened (emphasis mine):

Once you hit a situation when opening a new tab requires more RAM than is currently available, the system will stall hard. You will barely be able to move the mouse pointer. Your disk LED will be flashing incessantly (I'm not entirely sure why). [...]

I'm afraid I have bad news for the people snickering at Linux here; if you're running without swap space, you can probably get any Unix to behave this way under memory pressure. If you can't on your particular Unix, I'd actually say that your Unix is probably not letting you get full use out of your RAM.

To simplify a bit, we can divide pages of user memory up into anonymous pages and file-backed pages. File-backed pages are what they sound like; they come from some specific file on the filesystem that they can be written out to (if they're dirty) or read back in from. Anonymous pages are not backed by a file, so the only place they can be written out to and read back in from is swap space. Anonymous pages mostly come from dynamic memory allocations and from modifying the program's global variables and data; file-backed pages come mostly from mapping files into memory with mmap() and also, crucially, from the code and read-only data of the program.

(A file-backed page can turn into an anonymous page under some circumstances.)
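To make the distinction concrete, here is a small C sketch (mine, not from anywhere in particular; the file used and the sizes are arbitrary illustrations) of how a program winds up with each kind of page.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1 << 20;       /* 1 MiB, an arbitrary size */

    /* Anonymous pages: no backing file. With no swap space, once these
     * are written to they are stuck in RAM; malloc()'d memory and modified
     * global data ultimately look like this. */
    char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* File-backed pages: clean pages can always be evicted and read back
     * in from the file later, which is what thrashes when there is no swap.
     * The program's own code and shared libraries are mapped this way too. */
    int fd = open("/bin/ls", O_RDONLY);
    char *code = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

    if (anon == MAP_FAILED || code == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    anon[0] = 1;                /* this page now has to stay in RAM without swap */
    printf("first byte of /bin/ls: %#x\n", code[0]);

    /* (Writing to a MAP_PRIVATE file mapping gives you a private copy of
     * that page, which is one way a file-backed page turns anonymous.) */
    return 0;
}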

Under normal circumstances, when you have swap space and your system is under memory pressure a Unix kernel will balance evicting anonymous pages out to swap space and evicting file-backed pages back to their source file. However, when you have no swap space, the kernel cannot evict anonymous pages any more; they're stuck in RAM because there's nowhere else to put them. All the kernel can do to reclaim memory is to evict whatever file-backed pages there are, even if these pages are going to be needed again very soon and will just have to be read back in from the filesystem. If RAM keeps getting allocated for anonymous pages, there is less and less RAM left to hold whatever collection of file-backed pages your system needs to do anything useful and your system will spend more and more time thrashing around reading file-backed pages back in (with your disk LED blinking all of the time). Since one of the sources of file-backed pages is the executable code of all of your programs (and most of the shared libraries they use), it's quite possible to get into a situation where your programs can barely run without taking a page fault for another page of code.

(This frantic eviction of file-backed pages can happen even if you have anonymous pages that are being used only very infrequently and so would normally be immediately pushed out to swap space. With no swap space, anonymous pages are stuck in RAM no matter how infrequently they're touched; the only anonymous pages that can be discarded are ones that have never been written to and so are guaranteed to be all zero.)

In the old days, this usually was not very much of an issue because system RAM was generally large compared to the size of programs and thus the amount of file-backed pages that were likely to be in memory. That's no longer the case today; modern large programs such as Firefox and its shared libraries can have significant amounts of file-backed code and data pages (in addition to their often large use of dynamically allocated memory, ie anonymous pages).

In theory, this thrashing can happen in any Unix. To prevent it, your Unix has to decide to deliberately not allow you to allocate more anonymous pages after a certain point, even though it could evict file-backed pages to make room for them. Deciding when to cut your anonymous page allocations off is necessarily a heuristic, and so any Unix that tries to do it is sooner or later going to prevent you from using some of your RAM.

(This is different than the usual issue with overcommitting virtual memory address space because you're not asking for more memory than could theoretically be satisfied. The kernel has to guess how much file-backed memory programs will need in order to perform decently, and it has to do so at the time when you try to allocate anonymous memory since it can't take the memory back later.)

NoSwapConsequence written at 22:26:29

2019-08-05

dup(2) and shared file descriptors

In my entry on how sharing file descriptors with child processes is a clever Unix decision, I said:

This full sharing is probably easier to implement in the kernel than making an independent copy of the file descriptor (unless you also changed how dup() works). [...]

Currently, dup() specifically shares the file offset between the old file descriptor and the new duplicated version. This implies a shared file descriptor state within the kernel for at least file descriptors in the current process, and along with it some way to keep track of when the last reference to a particular shared state goes away (because only then can the kernel actually close the file and potentially trigger things like pending deletes).
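As a quick illustration of that shared offset (a sketch of mine, not from the entry; any readable file will do), reading through one descriptor moves the offset seen through its dup()'d twin:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[10];
    int fd = open("/etc/passwd", O_RDONLY);    /* any readable file will do */
    int fd2 = dup(fd);

    read(fd, buf, sizeof(buf));                /* advance the offset through fd */

    /* Both descriptors refer to one piece of kernel open-file state, so
     * the offset seen through fd2 has moved as well. */
    printf("offset via fd2: %ld\n", (long)lseek(fd2, 0, SEEK_CUR));   /* prints 10 */
    return 0;
}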

Once you have to have this shared descriptor state within a single process, it's relatively straightforward to extend this to multiple processes, especially in the kind of plain uniprocessor kernel environment that Unix had for a long time. Basically, instead of having a per-process data structure for shared file descriptor state, you have a single global one, and everyone manipulates entries in it. You need reference counting regardless of whether file descriptor state is shared within a process or across processes.

(Then each process has a mapping from file descriptor number to the shared state. In early Unixes, this was a small fixed size array, the u_ofile array in the user structure. Naturally, early Unixes also had a fixed size array for the actual file structures for open files, as seen in V7's c.c and param.h. You can see V7's shared file structure here.)
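In outline, the arrangement looks something like the following sketch (mine, not actual V7 code; the sizes and field names are only illustrative):

struct inode;                   /* the underlying file, details omitted */

/* One global, reference-counted table of open-file state for the system. */
struct file {
    int    f_count;             /* how many descriptors, in any process, share this */
    int    f_flag;              /* open mode and status flags */
    long   f_offset;            /* the shared file offset */
    struct inode *f_inode;
};

#define NFILE  100              /* fixed sizes, in the spirit of V7's param.h */
#define NOFILE 20

struct file file_table[NFILE];

/* Per-process: a small array mapping file descriptor numbers to entries
 * in the global table (V7's u_ofile in the user structure). */
struct file *u_ofile[NOFILE];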

PS: The other attraction of this in small kernel environments, as seen in the V7 implementation, is that if file descriptor state is shared across all processes, you need significantly fewer copies of the state for a given file that's passed to children a lot, as is common for standard input, standard output, and standard error.

DupAndSharedFileDescriptors written at 21:15:00

2019-08-03

Sharing file descriptors with child processes is a clever Unix decision

One of the things that happens when a Unix process clones itself and executes another program is that the first process's open file descriptors are shared across into the child (well, apart from the ones that are marked 'close on exec'). This is not just sharing that the new process has the same files or IO streams open, the way that it would have if it open()'d them independently; this shares the actual kernel-level file descriptors. This full sharing means that if one process changes the properties of file descriptors, those changes are experienced by the other processes as well.

(This inheritance of file descriptors sometimes has not entirely desirable consequences, as does the fact that file descriptor properties are shared. Running a program that leaves standard input set to O_NONBLOCK is often still a reliable way to get your shell to immediately exit after the program finishes. Many shells reset the TTY properties, but often don't think of O_NONBLOCK.)
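Here's a rough sketch (mine) of how a program creates that situation; because fd 0 is the shell's own standard input descriptor, the flag change outlives the program:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Turn on non-blocking reads on standard input.  This modifies the
     * shared kernel-level descriptor state, not a private copy. */
    int flags = fcntl(0, F_GETFL);
    fcntl(0, F_SETFL, flags | O_NONBLOCK);

    /* ... do whatever work wanted non-blocking input ... */

    /* A polite program restores the old flags before exiting; one that
     * doesn't leaves its parent shell reading a non-blocking stdin. */
    return 0;
}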

This full sharing is probably easier to implement in the kernel than making an independent copy of the file descriptor (unless you also changed how dup() works). But it has another important property that makes it a clever choice for Unix, which is that the file offset is part of what is shared and this means that the following subshell operation can work as intended:

(sed -e 10q -e 's/^/a: /'; sed -e 10q -e 's/^/b: /') <afile

(Let's magically assume that sed doesn't use buffered reads and so will read only exactly ten lines each time. This isn't true in practice.)

If the file offset wasn't shared between all children, it's not clear how this would work. You'd probably have to invent some sort of pipe-like file descriptor that either shared the file offset or was a buffer and didn't support seeking, and then have the shell use it (probably along with some other programs).

Sharing the file offset is also the natural way to handle multiple processes writing standard output (or standard error) to a file, as in the following example:

(program1; program2; program3) >afile

If the file offset wasn't shared, each process would start writing at the start of afile and they'd overwrite each other's results. Again, you'd need some pipe-like trick to make this work.

(Once you have O_APPEND, you can use it for this, but O_APPEND appears to postdate V7 Unix; it's not in the V7 open(2) manpage.)
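Here is a minimal sketch (mine; the file name is arbitrary) of how the shared offset makes this work even without O_APPEND: the child's write advances the offset that the parent then writes at.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) { perror("open"); return 1; }

    if (fork() == 0) {                       /* child stands in for 'program1' */
        write(fd, "from the child\n", 15);
        _exit(0);
    }
    wait(NULL);                              /* let the child go first */

    /* Because the offset lives in the shared descriptor state, this lands
     * after the child's output instead of overwriting it at offset 0. */
    write(fd, "from the parent\n", 16);
    return 0;
}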

PS: The implementation of shared file descriptors across processes in old Unixes is much simplified by the fact that they're uniprocessor environments, so the kernel has no need to worry about locking for updating file offsets (or much of anything else to do with them). Only one process can be in the kernel manipulating them at any given time.

SharedFileDescriptorsClever written at 19:39:04

2019-07-28

What I want out of my window manager

One answer to what I want out of my window manager is 'fvwm'. It's my current window manager and I'm not likely to switch to anything else because I'm perfectly satisfied with it. But that's not a good answer, because fvwm has a lot of features and I'm not using them all. As with everyone who uses a highly customizable thing, my important subset of fvwm is probably not quite the same as anyone else's important subset of it.

(I'm thinking about what I want out of my window manager because Wayland is coming someday, and that means I'm almost certainly going to need a new window manager at some time in, say, the next ten years.)

I can't tell for sure what's important to me, because I'm sort of a fish in water when it comes to fvwm and my fvwm configuration; I've been using it exclusively for so long that I'm not certain what I'd really miss if I moved and what's unusual. With that said, I think that the (somewhat) unusual features that I want go like this (on top of a straightforward 'floating layout' window manager):

  • Something like FvwmIconMan, which is central to how I manage terminal windows (which I tend to have a lot of).

  • The ability to iconify windows to icons on the root window and then place those icons in specific locations where they'll stay. I also want to be able to record the location of those icons and reposition them back, because I do that. Putting iconified windows in specific places is how I currently manage my plethora of Firefox windows, including keeping track of what I'm going to read soon. As usual, icons need to have both an icon and a little title string.

    (Perhaps I should figure out a better way to handle Firefox windows, one that involves less clutter. I have some thoughts there, although that's for another entry. But even with Firefox handled, there are various other windows I keep around in iconified form.)

  • Multiple virtual desktops or screens, with some sort of pager to show me a schematic view of what is on what screen or desktop and to let me switch between them by the mouse. I also need key bindings to flip around between screens. It has to be possible to easily move windows (in normal or iconified form) from screen to screen, including from the command line, and I should be able to set it so that some icons or windows are always present (ie, they float from screen to screen).

  • Window title bars that can be either present or absent, because some of my windows have them and some don't. I'd like the ability to customize what buttons a window titlebar has and what they do, but it's not really important; I could live with everything with a titlebar having a standard set.

  • User-defined menus that can be brought up with a wide variety of keys, because I have a lot of menus that are bound to a lot of different keys. My fvwm menus are one of my two major ways of launching programs, and I count on having a lot of different key bindings to make them accessible without having to go through multiple menu levels.

  • User-defined key bindings, including key bindings that still work when the keyboard focus is on a window. Key bindings need to be able both to invoke window manager functions (like raising and lowering windows) and to run user programs, especially dmenu.

  • User-defined bindings for mouse buttons, because I use a bunch of them.

  • Minimal or no clutter apart from things that I specifically want. I don't want the window manager insisting that certain interface elements must exist, such as a taskbar.

  • What fvwm calls 'focus follows mouse', where the keyboard focus is on the last window the mouse was in even if the mouse is then moved out to be over the root window. I don't want click to focus for various reasons and I now find strict mouse focus to be too limiting.

Fvwm allows me great power over customizing the fonts used, the exact width of window borders, and so on, but for the most part those details aren't something I care deeply about as long as the window manager does a competent job and makes good choices in general. It's convenient if the window manager has a command interface for testing and applying small configuration changes, like FvwmConsole; restarting the window manager when you have a lot of windows is kind of a pain.

(As you might guess from my priorities, my fvwm configuration file is almost entirely menu configurations, key and mouse button bindings, and making FvwmIconMan and fvwm's pager work right. I have in the past tried tricky things, but at this point I'm no longer really using any of them. All of my vaguely recent changes have been around keyboard bindings for things like moving windows and changing sound volume.)

PS: Command line interface and control of the window manager would be pretty handy. I may not use FvwmCommand very often, but I like that it's there. And I do use the Perl API for my hack.

Sidebar: Fvwm's virtual screens versus virtual desktops

Fvwm has both virtual screens and virtual desktops, and draws a distinction between them that is covered in the relevant section of its manpage. I use fvwm's virtual screens but not its desktops, and in practice I treat every virtual screen as a separate thing. It can sometimes be convenient that a window can spill over from virtual screen to virtual screen, since it often gives me a way of grabbing the corner of an extra-large window. On the other hand, it's also irritating when a window winds up protruding into another virtual screen.

All of this is leading up to saying that I wouldn't particularly object to a window manager that had only what fvwm would call virtual desktops, without virtual screens. This is good because I think that most modern window managers have adopted that model for their virtual things.

WindowManagerWants written at 00:39:19

2019-07-22

Why file and directory operations are synchronous in NFS

One of the things that unpleasantly surprises people about NFS every so often is that file and directory operations like creating a file, renaming it, or removing it are synchronous. This can make operations like unpacking a tar file or doing a VCS clone or checkout be startlingly slow, much slower than they are on a local filesystem. Even removing a directory tree can be drastically slower than it is locally.

(Anything that creates files also suffers from the issue that NFS clients normally force a flush to disk after they finish writing a file.)

In the original NFS, all writes were synchronous. This was quite simple but also quite slow, and for NFS v3, the protocol moved to a more complicated scheme for data writes, where the majority of data writes could be asynchronous but the client could force the server to flush them all to disk every so often. However, even in NFS v3 the protocol more or less requires that directory level operations are synchronous. You might wonder why.

One simple answer is that the Unix API provides no way to report delayed errors for file and directory operations. If you write() data, it is an accepted part of the Unix API that errors stemming from that write may not be reported until much later, such as when you close() the file. This includes not just 'IO error' type errors, but also problems such as 'out of space' or 'disk quota exceeded'; they may only appear and become definite when the system forces the data to be written out. However, there's no equivalent of close() for things like removing files or renaming them, or making directories; the Unix API assumes that these either succeed or fail on the spot.

(Of course, the Unix API doesn't necessarily promise that all errors are reported at close() and that close() flushes your data to disk. But at least close() explicitly provides the API a final opportunity to report that some errors happened somewhere, and thus allows it to not report all errors at write()s.)
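A small sketch (mine; the file and directory names are arbitrary) of the asymmetry: data written with write() gets a second chance to report problems at close(), but mkdir() has no later step where a delayed error could surface.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (write(fd, "hello\n", 6) < 0)
        perror("write");        /* may 'succeed' long before the data is on disk */
    if (close(fd) < 0)
        perror("close");        /* ENOSPC, EDQUOT, or EIO can show up here instead */

    if (mkdir("newdir", 0777) < 0)
        perror("mkdir");        /* either it happened or it didn't; there is no
                                   later call where a deferred error could appear */
    return 0;
}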

This lack in the Unix API means that it's pretty dangerous for a kernel to accept such operations without actually committing them; if something goes wrong, there's no way to report the problem (and often no process left to report them to). It's especially dangerous in a network filesystem, where the server may crash and reboot without programs on the client noticing (there's no Unix API for that either). It would be very disconcerting if you did a VCS checkout, started working, had everything stall for a few minutes (as the server crashed and came back), and then suddenly all of your checkout was different (because the server hadn't committed it).

You could imagine a network filesystem where the filesystem protocol itself said that file and directory operations were asynchronous until explicitly committed, like NFS v3 writes. But since the Unix API has no way to expose this to programs, the client kernel would just wind up making those file and directory operations synchronous again so that it could immediately report any and all errors when you did mkdir(), rename(), unlink(), or whatever. Nor could the client kernel really batch up a bunch of those operations and send them off to the network filesystem server as a single block; instead it would need to send them one by one just to get them registered and get an initial indication of success or failure (partly because programs often do inconvenient things like mkdir() a directory and then immediately start creating further things in it).

Given all of this, it's not surprising that neither the NFS protocol nor common NFS server implementations try to change the situation. With no support from the Unix API, NFS clients will pretty much always send NFS file and directory operations to the server as they happen and need an immediate reply. In order to avoid surprise client-visible rollbacks, NFS servers are then more or less obliged to commit these metadata changes as they come in, before they send back the replies. The net result is a series of synchronous operations; the client kernel has to send the NFS request and wait for the server reply before it returns from the system call, and the server has to commit before it sends out its reply.

(In the traditional Unix way, some kernels and some filesystems do accept file and metadata operations without committing them. This leads to problems. Generally, though, the kernel makes it so that your operations will only fail due to a crash or an actual disk write error, both of which are pretty uncommon, not due to other delayed issues like 'out of disk space' or 'permission denied (when I got around to checking)'.)

NFSSynchronousMetadata written at 21:10:08

2019-06-30

Understanding why 'root window' X under Wayland may matter

In a comment on my entry on how the X death watch has probably started, Christopher Barts said:

There is XWayland, and apparently it will support handling the root window, not just application windows: <link>

The missing piece of the puzzle is Xweston, which apparently allows you to run window managers for X on Wayland: <link>

When I read this, at first I was puzzled about how this could work and why this was interesting, because I couldn't imagine that an X window manager could manage Wayland windows (a view I've had for some time). Then I actually read the Xweston README and a little light turned on in my head. The important bit from it is this:

[...] Xweston should allow one to run Xwayland as a bare X server (i.e. taking over the root window) hosted by Weston.

What this would allow you to do is to use Wayland as the graphics driver for the X server. Provided that the Wayland protocol and API stay sufficiently static, this would mean that people wouldn't have to write or update graphics drivers for X, and that changes in the Linux (or Unix) graphics environment wouldn't require changes in X. Both of these would be handled purely by updates to Wayland compositors.

(Wayland would also serve as the input driver for X, and hopefully that side of things would also require fewer or no updates.)

At one level this is not too much different than how X sometimes works today. For instance, for Intel's graphics hardware the X server normally outsources most of this to OpenGL via glamour and modesetting (as I found out). However, specific X drivers can still be better than glamour and this still leaves the modesetting driver to keep up with any changes in kernel modesetting, DRM setup, and so on. Outsourcing all of that to Wayland gets X a stable interface that hopefully performs well in general, which obviously matters a bunch if there's no one really left to keep updating X's graphics and modesetting drivers (which seems to be part of the problem).

This doesn't deal with the other half of the problem of the X death watch, which is the quality of support for X in toolkits like GTK and programs like Firefox. Also, you'd (still) be restricted to X programs only; as far as I can see, you wouldn't want to try to run Wayland programs in this environment because they wouldn't integrate into your X session (and there might be other problems). If there start to be Wayland-only programs and toolkits, that could be a problem.

PS: Xweston hasn't seen any development for a long time, so I suspect that the project is dead. Probably a modern version would be built on top of wlroots in some way, since that seems to have become the Wayland compositor starting point of choice.

(A really energetic person could of course skip trying to turn wlroots into an X server backend and simply implement their favorite X window manager for Wayland using wlroots and then manage individual windows from X programs in parallel with windows from Wayland programs. Note that I don't know if all of the things that X window managers can do are possible in Wayland.)

(This is one of the entries that I write because I finally worked something out in my own head and I don't want to forget it later. Hopefully I worked it out correctly and I'm not missing something.)

WhyRootedXOnWayland written at 22:23:32

2019-06-26

The death watch for the X Window System (aka X11) has probably started

I was recently reading Christian F.K. Schaller's On the Road to Fedora Workstation 31 (via both Fedora Planet and Planet Gnome). In it, Schaller says in one section (about Gnome and their move to fully work on Wayland):

Once we are done with this we expect X.org to go into hard maintenance mode fairly quickly. The reality is that X.org is basically maintained by us and thus once we stop paying attention to it there is unlikely to be any major new releases coming out and there might even be some bitrot setting in over time. We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum, but let this be a friendly notice for everyone who rely the work we do maintaining the Linux graphics stack, get onto Wayland, that is where the future is.

I have no idea how true this is about X.org X server maintenance, either now or in the future, but I definitely think it's a sign that developers have started saying this. If Gnome developers feel that X.org is going to be in hard maintenance mode almost immediately, they're probably pretty likely to also put the Gnome code that deals with X into hard maintenance mode. And public Gnome statements about this (and public action or lack of it) provide implicit support for KDE and any other desktop to move in this direction if they want to (and probably create some pressure to do so). I've known that Wayland was the future for some time, but I would still like it to not arrive any time soon.

(I'm quite attached to my window manager, and it is very much X only. I am not holding my breath for anything very much like it, especially not as far as something like FvwmIconMan.)

Gnome's view especially matters here because of GTK, which is used as a foundation by a number of important desktop programs such as Firefox (but not Chrome, which apparently has its own toolkit system). If X support decays in GTK, a lot of programs will start being affected, and I don't know how receptive the Gnome developers would be to fixes if they consider X support to be in hard maintenance mode.

(But a lot of this is worries, rather than anything concrete.)

PS: I have no idea what non-Linux Unixes are going to do here, especially for NVidia hardware where driver support is already lacking and often at the mercy of NVidia's corporate priorities and indifference.

XDeathwatchStarts written at 23:25:36

2019-06-19

How Bash decides it's being invoked through sshd and sources your .bashrc

Under normal circumstances, Bash only sources your .bashrc when it's run as an interactive non-login shell; for example, this is what the Bash manual says about startup files. Well, it is most of what the manual says, because there is an important exception, which the Bash manual describes as 'Invoked by remote shell daemon':

Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, [...]

(You can tell how old this paragraph of the manual is because of how much prominence it gives to rshd. Also, note that this specific phrasing about standard input presages my discovery of when bash doesn't do this.)

As the result of recent events, I became interested in discovering exactly how Bash decides that it's being run in the form of 'ssh host command' and sources your .bashrc. There turn out to be two parts to this answer, but the summary is that if this is enabled at all, Bash will always source your .bashrc for non-interactive commands if you've logged in to a machine via SSH.

First, this feature may not even be enabled in your version of Bash, because it's a non-default configuration setting (and has been since Bash 2.05a, which is pretty old). Debian and thus Ubuntu turn this feature on, as does Fedora, but the FreeBSD machine I have access to doesn't in the version of Bash that's in its ports. Unsurprisingly, OmniOS doesn't seem to either. If you compile Bash yourself without manually changing the relevant bit of config-top.h, you'll get a version without this.

(Based on some digging, I think that Arch Linux also builds Bash without enabling this, since they don't seem to patch config-top.h. I will leave it to energetic people to check other Linuxes and other *BSDs.)

Second, how it works is actually very simple. In practice, a non-interactive Bash decides that it is being invoked by SSHD if either $SSH_CLIENT or $SSH2_CLIENT are defined in the environment. In a robotic sense this is perfectly correct, since OpenSSH's sshd puts $SSH_CLIENT in the environment when you do 'ssh host command'. In practice it is wrong, because OpenSSH sets $SSH_CLIENT all the time, including for logins. So if you use SSH to log in somewhere, $SSH_CLIENT will be set in your shell environment, and then any non-interactive Bash will decide that it should source ~/.bashrc. This includes, for example, the Bash that is run (as 'bash -c ...') to execute commands when you have a Makefile that has explicitly set 'SHELL=/bin/bash', as Makefiles that are created by the GNU autoconfigure system tend to do.

As a result, if you have ancient historical things in a .bashrc, for example clearing the screen on exit, then surprise, those things will happen for every command that make runs. This may not make you happy. For situations like Makefiles that explicitly set 'SHELL=/bin/bash', this can happen even if you don't use Bash as your login shell and haven't had anything to do with it for years.

(Of course it also happens if you have perfectly modern things there and expect that they won't get invoked for non-interactive shells, and you do use Bash as your login shell. But if you use Bash as your login shell, you're more likely to notice this issue, because routine ordinary activities like 'ssh host command' or 'rsync host:/something .' are more likely to fail, or at least do additional odd things.)

PS: This October 2001 comment in variables.c sort of suggests why support for this feature is now an opt-in thing.

PPS: If you want to see if your version of Bash has this enabled, the simple way to tell is to run strings on the binary and see if the embedded strings include 'SSH_CLIENT'. Eg:

; cat /etc/fedora-release
Fedora release 29 (Twenty Nine)
; strings -a /usr/bin/bash | fgrep SSH_CLIENT
SSH_CLIENT

So the Fedora 29 version does have this somewhat dubious feature enabled. Perhaps Debian and Fedora feel stuck with it due to very long-standing backwards compatibility, where people would be upset if Bash stopped doing this in some new Debian or Fedora release.

Sidebar: The actual code involved

The code for this can currently be found in run_startup_files in shell.c:

  /* get the rshd/sshd case out of the way first. */
  if (interactive_shell == 0 && no_rc == 0 && login_shell == 0 &&
      act_like_sh == 0 && command_execution_string)
    {
#ifdef SSH_SOURCE_BASHRC
      run_by_ssh = (find_variable ("SSH_CLIENT") != (SHELL_VAR *)0) ||
                   (find_variable ("SSH2_CLIENT") != (SHELL_VAR *)0);
#else
      run_by_ssh = 0;
#endif

[...]

Here we can see that the current Bash source code is entirely aware that no one uses rshd any more, among other things.

BashDetectRemoteInvocation written at 22:50:11

2019-06-02

I haven't customized my Vim setup and I'm not sure I should try to (yet)

I was recently reading At least one Vim trick you might not know (via). In passing, the article divides Vim users (and its tips) into purists, who deliberately use Vim with minimal configuration, and exobrains, who "stuff Vim full of plugins, functions, and homebrew mappings". All of this is to say that currently, as a Vim user I am a non-exobrain; I use Vim with minimal customization (although not none).

This is not because I am a deliberate purist. Instead, it's partly because I've so far perceived the universe of Vim customizations as a daunting and complex place that seems like too much work to explore when my Vim (in its current state) works well enough for me. Well, that's not entirely true. I'm also aware that I could improve my Vim experience with more knowledge and use of Vim's own built in features. Trying to add customizations to Vim when I haven't even mastered its relative basics doesn't seem like a smart idea, and it also seems like I'd make bad decisions about what to customize and how.

(Part of the dauntingness is that in my casual reading, there seem to be several different ways to manage and maintain Vim plugins. I don't know enough to pick the right one, or even evaluate which one is more popular or better.)

There are probably Vim customizations and plugins that could improve various aspects of my Vim experience. But finding them starts with the most difficult part, which is understanding what I actually want from my Vim experience and what sort of additions would clash with it. The way I've traditionally used Vim is that I treat it as a 'transparent' editor, one where my interest is in getting words (and sometimes code) down on the screen. In theory, a good change would be something that increases this transparency, that deals with some aspect of editing that currently breaks me out of the flow and makes me think about mechanics.

(I think that the most obvious candidate for this would be some sort of optional smart indentation for code and annoying things like YAML files. I don't want smart indentation all of the time, but putting the cursor in the right place by default is a great use of a computer, assuming that you can make it work well inside Vim's model.)

Of course the other advantage of mostly avoiding customizing my Vim experience is that it preserves a number of the advantages that make Vim a good sysadmin's editor. I edit files with Vim in a lot of different contexts, and it's useful if these all behave pretty much the same. And of course getting better at core Vim improves things for me in all of these environments, since core Vim is everywhere. Even if I someday start customizing my main personal Vim with extra things to make it nicer, focusing on core Vim until I think I have all of the basics I care about down is more generally useful right now.

(As an illustration of this, one little bit of core Vim that I've been finding more and more convenient as I remember it more is the Ctrl-A and Ctrl-X commands to increment and decrement numbers in the text. This is somewhat a peculiarity of our environment, but it comes up surprisingly often. And this works everywhere.)

PS: Emacs is not entirely simpler than Vim here as far as customization goes, but I have a longer history with customizing Emacs than I do with Vim. And it does seem like Emacs has its package ecology fairly nailed down, based on my investigations from a while back for code editing.

VimMinimalCustomization written at 00:23:58

2019-05-31

Some things about where icons for modern X applications come from

If you have a traditional window manager like fvwm, one of the things it can do is iconify X windows so that they turn into icons on the root window (which would often be called the 'desktop'). Even modern desktop environments that don't iconify programs to the root window (or their desktop) may have per-program icons for running programs in their dock or taskbar. If your window manager or desktop environment can do this, you might reasonably wonder where those icons come from by default.

Although I don't know how it was done in the early days of X, the modern standard for this is part of the Extended Window Manager Hints. In EWMH, applications give the window manager a number of possible icons, generally in different sizes, as ARGB bitmaps (instead of, say, SVG format). The window manager or desktop environment can then pick whichever icon size it likes best, taking into account things like the display resolution and so on, and display it however it wants to (in its original size or scaled up or down).

How this is communicated is through the only good interprocess communication method that X supplies, namely X properties. In the specific case of icons, the _NET_WM_ICON property is what is used, and xprop can display the size information and an ASCII art summary of what each icon looks like. It's also possible to use some additional magic to read out the raw data from _NET_WM_ICON in a useful format; see, for example, this Stackoverflow question and its answers.

(One reason to extract all of the different icon sizes for a program is if you want to force your window manager to use a different size of icon than it defaults to. Another is if you want to reuse the icon for another program, again often through window manager settings.)
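As an illustration of pulling this out yourself, here is a rough Xlib sketch (mine, with minimal error handling; the window ID would come from something like xwininfo, and you build with -lX11) that lists the icon sizes a window advertises in _NET_WM_ICON:

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s window-id\n", argv[0]);
        return 1;
    }
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    Window win = (Window)strtoul(argv[1], NULL, 0);
    Atom icon_atom = XInternAtom(dpy, "_NET_WM_ICON", False);

    Atom type;
    int format;
    unsigned long nitems, after;
    unsigned char *data = NULL;

    /* _NET_WM_ICON is an array of 32-bit CARDINALs: width, height, then
     * width*height ARGB pixels, repeated once per icon size. */
    if (XGetWindowProperty(dpy, win, icon_atom, 0, (1L << 20), False,
                           XA_CARDINAL, &type, &format, &nitems, &after,
                           &data) != Success || data == NULL) {
        fprintf(stderr, "no _NET_WM_ICON property\n");
        return 1;
    }

    unsigned long *vals = (unsigned long *)data;  /* format 32 is returned as longs */
    for (unsigned long i = 0; i + 1 < nitems; ) {
        unsigned long w = vals[i], h = vals[i + 1];
        printf("icon: %lux%lu\n", w, h);
        i += 2 + w * h;                           /* skip over the pixel data */
    }
    XFree(data);
    XCloseDisplay(dpy);
    return 0;
}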

X programs themselves have to get the data that they put into _NET_WM_ICON from somewhere. Some programs may have explicit PNGs (or whatever) on the filesystem that they read when they start (and thus that you can too), but others often build this into their program binary or compiled data files, which means that you have to go to the source code to pull the files out (and they may not be in a bitmap format like PNG; there are probably programs that start with a SVG and then render it to various sized PNGs).

(As a concrete example, as far as I know Firefox's official icons are in the 'defaultNN.png' files in browser/branding/official. Actual builds may not use all of the sizes available, or at least not put them into _NET_WM_ICON; on Fedora 29, for example, the official Fedora Firefox 66 only offers up to 32x32, which is tragically small on my HiDPI display.)

None of this is necessarily how a modern integrated desktop like Gnome or KDE handles icons for their own programs. There are probably toolkit-specific protocols involved, and I suspect that there is more support and encouragement for SVG icons than there is in EWMH (where there is none).

PS: All of this is going to change drastically in Wayland, since we obviously won't have X properties any more.

(This whole exploration was prompted by a recent question on the FVWM mailing list.)

ModernXAppIcons written at 00:50:28
