Wandering Thoughts


(Graphical) Unix has always had desktop environments

One of the stories that you could tell about the X Window System, and by extension graphical Unix, is that first came (simple) window managers and xterm, and only later did Unix develop desktop environments like GNOME and KDE. This is certainly more or less the development that happened on open source PC Unixes, and to a certain degree it's the experience many people had earlier on workstation Unix machines running X, but it's not actually historically accurate. In reality, Unix has had full scale desktop environments of various degrees of complexity more or less from the beginning of serious graphical Unix.

(This origin story of desktop environments on Unix is sufficiently attractive that I was about to use it in another entry until I paused to think a bit more.)

In the beginning, there was no X. Workstation vendors had to create their own GUI environments, and so of course they didn't just create a terminal emulator and window management; instead, they tended to create a whole integrated suite of GUI applications and an environment to run them in. One early example of this is Sun's Suntools/SunView, but SGI had one too and I believe most other Unix workstation vendors did as well (plus I believe CMU's Andrew had its own desktop-like environment). When X started to win out, these Unix vendors didn't abandon their existing GUI desktops to fall back to a much less well developed window manager and terminals experience; instead they re-implemented versions of their old desktop environments on top of X (such as Sun's OpenWindows) or created new ones such as the Common Desktop Environment (CDE).

Open source PC Unixes didn't follow this pattern because in the 1990s, there were few or no open source desktop environments. Since X window managers, xterm, and various other graphical programs were free software, that's what people had available to build their environments from, and that's what people did for a while. In this, the open source PC Unixes were recapitulating an earlier history of people running X on Unix vendor desktop workstations before the vendor itself supported X on their hardware and had built a desktop for it (and X itself started out this way, as initially distributed out of MIT and Project Athena).

(Some people then continued to use a basic X environment because they liked their version of it better than the Unix workstation vendor's desktop. Sometimes such a basic X environment ran faster, too, because the vendor had written a bunch of bloatware.)

DesktopsAlwaysThere written at 22:02:59


How NFS v3 servers and clients re-synchronize locks after reboots

NFS (v3) is usually described as 'stateless', by which we mean that the NFS clients hold all of the state and in theory all the server does is answer all of their requests one by one (the actual reality is more messy). However, NFS (v3) locks are obviously not stateless, in that the server and all of the NFS clients have to agree on what is and isn't locked (and by whom). This creates a need to re-synchronize this state if something unfortunate happens to either a NFS client or the NFS server, so you don't get stuck locks and other problems. The NFS v3 locking protocol opted to take a relatively brute force approach to the problem.

When a NFS v3 client boots up, it sends an 'I have just rebooted' notice to every NFS server it had locks from, or even perhaps might have had locks from. The NFS servers all react to this notice by releasing any NFS locks they believe the NFS client holds. In the traditional Unix model of locks, which NFS v3 more or less follows, locks are released no later than when the relevant processes exit, and on a reboot all processes have 'exited' (even if what really happened is that the NFS client lost power, locked up entirely, or had a kernel panic). As far as I know it's harmless for a NFS client to send this notice to a NFS server it doesn't actually have any locks from, so NFS clients can do very simple things to keep a persistent record of what NFS servers they locked things on.

Things are more complicated with NFS servers. When a NFS server boots or reboots, it sends out a special 'I have rebooted' message to all NFS clients that it gave locks to, which causes all of the NFS clients to re-acquire those locks from the NFS server. However, there's a complication, because nothing prevents NFS clients from asking for new locks, including locks on files that were theoretically already locked by another client that hasn't yet reclaimed them. To prevent this from happening, a NFS v3 server that has rebooted enters a special reclaim locking mode for what is called a grace period. When a NFS client is reclaiming a lock in response to a server's notice, it sets a special 'this is a reclaim' flag on its lock request. While the server is in reclaim lock mode during its grace period, it only accepts these special 'reclaim' lock requests; ordinary lock requests are told to try again later with a special result code that tells the NFS client that the server is in the reclaim grace period.
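As a toy illustration (everything here is invented for exposition; real NFS servers implement this inside lockd or the kernel, not in shell), the server's decision during the grace period amounts to:

```shell
# Toy sketch of an NFS v3 server's lock handling during its
# post-reboot grace period. All names are made up for illustration.

in_grace=1    # 1 while the server is within its reclaim grace period

handle_lock_request() {
    # $1 is "reclaim" if the client set the reclaim flag, "normal" otherwise
    if [ "$in_grace" -eq 1 ] && [ "$1" != "reclaim" ]; then
        # ordinary requests get a 'try again later' status
        echo "DENIED_GRACE_PERIOD"
    else
        echo "GRANTED"
    fi
}

handle_lock_request reclaim    # allowed: a client reclaiming its old lock
handle_lock_request normal     # refused until the grace period ends

in_grace=0                     # grace period over; back to normal locking
handle_lock_request normal
```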

(As with NFS client reboot notices, I believe it's harmless for an NFS server to send such notices to a client that doesn't think it holds locks from the server.)

These NFS client reclaim requests don't necessarily succeed, for various reasons (including two NFS clients thinking they both hold a lock on the same file). And I believe it's always possible for a NFS client to simply not have gotten the server's notification, so it has no idea it's supposed to start reclaiming locks and the locks it thinks it holds are, by default, invalid.

This notification process is actually a separate protocol from locking (which in NFS v3 is separate from the NFS protocol itself). Locking is the 'NLM' (Network Lock Manager) protocol; the bidirectional notification system is 'SM' or 'NSM' ((Network) Status Monitor).

In theory a NFS v3 server could allow you to force a re-synchronization of NFS lock state between server and clients at any time by flipping into reclaim mode, marking all of its locks as 'pending reclaim', sending out an 'I have rebooted' NSM notice, and then at the end of the grace period dropping any locks that hadn't been reclaimed by some client. This could even be reasonably non-intrusive. In practice I'm not sure any NFS server actually implements this; instead, I think they all treat server lock recovery as something that's only done on boot with no existing locks that have to be tracked and maybe dropped later.

NFS servers and clients typically store SM state somewhere on disk. You can read about Linux's normal approach in statd(8), and about FreeBSD's in rpc.statd(8). FreeBSD conveniently ships with the protocol definitions for NLM and (N)SM, which aren't too hard to read if you're interested.

NFSv3LockRecovery written at 22:55:28


Programming on Unix and automatic memory management

Due to the pervasive influence of C, Unix is commonly thought of as a programming environment of manual memory management; you program Unix in C and C requires manual memory management. Except that I maintain this is somewhat of an illusion. From the days of V7 Unix onward (if not earlier), a substantial amount of Unix programming has always been done in languages with automatic memory management. It's just that we called this programming "shell scripts", "awk", "make", and so on.

(Then later all of these programming environments got slammed together to create Perl. People don't necessarily like Perl, but I think it's pretty undeniable that it had a substantial presence in 1990s era Unix programming.)

This isn't just an aspect of general Unix development after all sorts of people got their hands on it (and created things like Perl). In Bell Labs Research Unix and then Plan 9, I think pretty much every new language (or little language) created for and on Unix was one with automatic memory management. One exception is Plan 9's Alef, but no less a person than Rob Pike has said that one reason for Alef's failure was its lack of automatic memory management.

(Another exception is C++, although that didn't quite come out of the core Research Unix environment. Obviously C++ has been highly successful.)

In short, programming with automatic memory management has been a part of Unix from Bell Labs onward. It's not some new intruder; it's a normal part of Unix, and the creators of Unix were clearly not philosophically opposed to automatic memory management under any circumstances.

PS: Some of the automatic memory management is forced in various ways; for example, a declarative language like "make" doesn't really have room for manual memory management. But I can somewhat imagine a version of awk that required you to manage some things by hand, and how difficult that is to imagine suggests why awk has its actual form.
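As a small concrete illustration, consider a classic awk idiom; the associative array and all the strings involved are grown and reclaimed entirely by awk itself, with nothing for the programmer to allocate or free:

```shell
# Count word frequencies with awk. The 'count' array grows on demand
# and awk manages all of its storage itself; there is no way (and no
# need) to malloc or free anything from within the awk program.
echo 'the quick fox and the lazy dog and the cat' |
    awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
         END { print count["the"], count["and"] }'
# prints: 3 2
```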

(This elaborates on a Fediverse post, and was sparked by this.)

UnixAndAutomaticMemoryManagement written at 22:56:17


The traditional workaround for stuck NFS(v3) locks

Up through NFS v3, advisory file locks over NFS were done through a separate protocol and set of systems, the "Network Lock Manager" (NLM) set of protocols (which I believe are best covered in File Locking over XNFS and Network Lock Manager Protocol). File locking is naturally a stateful system, where the server and the clients have to have the same state, but unfortunately the NFS v3 NLM protocol doesn't provide for any way for servers or clients to explicitly check that they agree on how things are. In theory this shouldn't happen; in practice, well, it does.

(NFS locking is designed to deal with server and client reboots, but either doing one or simulating it tends to be pretty disruptive.)

The most visible way for the server and clients to become de-synchronized is a stuck lock, where the server believes a client has a file locked but the client thinks it doesn't. A file with a stuck lock can never be (re-)locked by any programs trying to do so, and it will normally stay that way until the reboot of either or both of the server or the client the server thinks has the file locked. As a result, working around these stuck locks has been a concern of NFS server system administrators for a long time and people have come up with a traditional brute force solution.

(In informal conversation sysadmins may talk about 'clearing' a stuck lock this way, but we're not really doing that; we're working around it with brute force.)

The traditional brute force workaround is to carefully stop everything that would ordinarily try to touch the stuck file, copy it to a new file, and then if necessary rename the new file back to the old name. Often you then remove the old stuck file. Then you let whatever programs or systems were trying to lock things start up again so they can go back to using and locking the file. Often this can be as simple as:

; mv fileA fileA-stuck
; cp -a fileA-stuck fileA

(There are many variations of this 'copy and rename' process, depending on what you're worried about and how you want to proceed.)

This works because (NFS) file locks are almost invariably attached to the file's inode instead of its name. When you rename and copy the file, the new version has the same name and the same contents (well, we hope), but a different inode, one that the NFS server doesn't consider to be locked.

(When you delete the old file with its old inode, the server will generally drop the lock and even if it doesn't, you don't care any more; the file is inaccessible and no one will try to lock it by accident.)
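You can watch the inode change during this dance with `ls -i` (or `stat`). A self-contained demonstration on a scratch file (the file name is made up, and no NFS locking is actually involved here):

```shell
cd "$(mktemp -d)"                   # scratch directory for the demo
echo 'important data' > fileA
old_inode=$(ls -i fileA | awk '{print $1}')

mv fileA fileA-stuck                # renaming keeps the same inode
cp -a fileA-stuck fileA             # same name and contents, NEW inode
new_inode=$(ls -i fileA | awk '{print $1}')

# The name and contents match, but the inode (and so any NFS lock the
# server has attached to it) does not carry over to the copy.
[ "$old_inode" != "$new_inode" ] && echo 'fileA now has a fresh inode'
cmp -s fileA fileA-stuck && echo 'contents are identical'
```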

These days, most people don't deal with NFS and when they do it's with NFS v4, which has locking integrated into the core protocol (and as a result, I believe has more reliable locking). This brute force workaround for stuck NFS v3 file locks is drifting toward cursed knowledge, if it isn't already there.

(We still have NFS v3 fileservers, so every so often this is relevant to us.)

NFSLocksStuckWorkaround written at 21:42:34


There are two facets to dd usage

Recently I shared a modern Unix superstition on the Fediverse:

Is it superstition that I do 'dd if=... bs=<whatever> | cat >/dev/null' instead of just having 'of=/dev/null' because I'm cautious about some version of dd optimizing that command a little too much? Probably.

There are various things you could say about this, but thinking about it has made me realize that in practice, there are two facets to dd, what you could call two usage cases, and they're somewhat in conflict with each other.

The first facet is dd as a way to copy data around. If you view dd this way, it's fine if some combination of dd, your C library, and the kernel optimize how this data copying is done. For example, if dd is reading or writing a file to or from a network socket, in many cases it would be desirable to directly connect the file and the network socket inside the kernel so that you don't have to flow data through user level. If you're using dd to copy data, you generally don't care exactly how it happens, you just want the result.

(Dd traditionally has some odd behavior around block sizes, but many people using dd to copy data don't actually want this behavior or care about it.)

The second facet is dd as a way to cause specific IO to happen. If you view dd this way, it is absolutely not safe for the collective stack to optimize how the data is copied. You want dd to do exactly the IO that you asked for, and not change that. If you read from a file and write to /dev/null you don't want dd to connect the file and /dev/null in the kernel and then the kernel to optimize this to do no IO. Reading the file (or the disk) was the entire point.
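Spelled out concretely, the superstition quoted at the start of this entry is the difference between these two command lines (using /dev/zero as a stand-in input; a real use would name a disk or a file):

```shell
# dd as 'do exactly this IO': dd writes into a pipe, so its four 4 KB
# reads and writes can't plausibly be optimized away by anything.
dd if=/dev/zero bs=4k count=4 | cat > /dev/null

# dd as 'copy this data': the same nominal IO, but a sufficiently
# clever dd, C library, or kernel could in theory notice that the
# output is /dev/null and skip some or all of the work.
dd if=/dev/zero of=/dev/null bs=4k count=4
```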

My impression is that historically, dd originated in the first usage case; it was created around the time of V5 Unix (cf, also) in order to "convert and copy a file" in the words of the V6 dd manual page. System administrators later pressed it into use for the second facet, because it allowed for relatively precise control and it seemed like a safe command that was unlikely to choke on odd sources of input or output or do anything unpredictable with the data it read and wrote.

You can criticize this, but Unix didn't and still doesn't have a standard tool that's explicitly about performing certain IOs. Maybe it should have one, since dd can be awkward to use for highly-specific IO. Also, at the time that system administrators started assuming that dd would perform their IO as 'written', I don't think anyone expected the degree of cleverness that modern Unix utilities and kernels exhibit (cf this note about GNU coreutils cat and GNU grep apparently optimizing the case of their output being /dev/null for a long time).

DdTwoFacets written at 22:01:15


ANSI colours aren't consistent across X terminal programs

There is a long standing set of 'ANSI colour codes' in terminal emulators, including terminal programs for X. Here is a table of them, and fidian/ansi will provide you with a convenient Bash script that will show you what these colors look like in your terminal program. The latter is potentially relevant because, shockingly, no two X terminal programs I've tried render these ANSI colours exactly the same (among xterm, urxvt, Gnome Terminal, and konsole; xfce4-terminal may render the same as Gnome Terminal in some quick tests).
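The escape sequences themselves are the standard SGR codes 30 through 37 (plus a bold/bright attribute); what differs between terminal programs is only the actual colours those codes get mapped to. For example:

```shell
# Emit the eight basic ANSI foreground colours (SGR codes 30-37) and
# their 'bright' bold variants. What RGB values these produce on
# screen is entirely up to the terminal program interpreting them.
for code in 30 31 32 33 34 35 36 37; do
    printf '\033[%sm %s \033[0m' "$code" "$code"
done
printf '\n'
for code in 30 31 32 33 34 35 36 37; do
    printf '\033[1;%sm %s \033[0m' "$code" "$code"
done
printf '\n'
```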

I've traditionally been very against using colours in my terminals, because in my normal black on white choice, the colours programs chose often wind up looking like an angry fruit salad explosion. Given how glaringly annoying colours looked to me, I didn't really understand why other people liked them until, a few years ago, I noticed that the same colours in Gnome Terminal looked rather different and generally came across as less of an assault on my eyes. Using fidian/ansi and its --color-table option shows that I wasn't just imagining this; in side by side comparisons, Gnome Terminal seems to clearly shift colours (in a black on white setup) to less intense and more readable.

Beyond the colour shifts in Gnome Terminal, there are other interesting colour changes from what you might expect. For instance, in all terminal emulators, the result of rendering 'normal' white coloured text in a black on white terminal is not invisible white text, but a greyish colour that remains somewhat readable. There are also 'faint' versions of basic ANSI colours, and the interpretation of faint white text on a white background isn't necessarily what you'd expect and varies quite a bit between terminal programs (with urxvt seeming to ignore the faintness entirely for all colours).

With enough work I could probably find out what specific colours Gnome Terminal is using and adjust my xterm to use them, so I have less annoying colours on the rare occasions when I might need them. As a practical matter, I'm not that interested in having colours in my terminals; even most of Gnome Terminal's colours aren't all that appealing.

I'd like to put this forward as a reason for people to entirely avoid using colours in terminal programs and terminal environments, but I know that ship sailed years ago. Apparently other people have found or set up colour sets in their terminals that they like and find a lot more readable than I do with any setup that I've seen.

TerminalColoursNotTheSame written at 21:33:42


Special tcpdump filtering options for OpenBSD's pflog interface

One of the convenient things that OpenBSD's pf packet filtering system can do is log packets of interest, as covered in the Packet Filtering section of the pf.conf manual page. These packets are logged to a special pflog network interface (where a daemon will generally write them to disk). Since this is a network interface, you can monitor traffic on it with OpenBSD's version of tcpdump (or use tcpdump to read the log file).

As part of this, the OpenBSD tcpdump has some special additional filtering options that are useful for selecting interesting traffic on this pf logging interface. These are covered in pcap-filter; many or all of them can be found by searching for mentions of pf(4). Here are the most notable ones that I want to remember.

action <something>
Matches if PF blocked, passed, nat'd, or did whatever to a particular packet. Using 'action block' or 'action pass' can significantly reduce your confusion if you have a mixture of pass and block pf rules that log traffic, as we do. Because we have such a mix, I'm trying to condition myself to always use 'action block' as part of tcpdump'ing pflog0.

(For example, you might be passing and logging some traffic so that you can see how much of it you have.)

inbound or outbound
I believe that these have the same meaning as in pfctl -ss output. If you match in 'inbound' packets, you'll match only things logged by 'in' rules; if you match on 'outbound' packets, you'll match only things logged by 'out' rules. Or at least you'll match when packets are logged as they come in versus when they're logged as they get sent out.

on <interface>
This matches packets that came from a specific interface, regardless of what sort of rule caused them to be logged. With appropriate interface names, this may better correspond to what you think of as 'inbound' or 'outbound'.

rnr <number>
This matches a specific rule number, but at this point your life gets a little tricky because you have to find out the number of the rule you want. The easiest way to do this is to run 'pfctl -vv -s rules | grep @' and then find your rule or rules of interest. This also doesn't help you if the rule number has changed from when the packet was logged (for example because you've changed your pf.conf). You can use 'rulenum' as a synonym for this.

(At least things have gotten better here than they used to be in 2011.)

I believe that 'action block' is pretty safe, but if you want 'everything but blocked' you may want to just use 'not action block' rather than trying to figure out which other actions your specific configuration of rules needs you to use.

Our OpenBSD pflog0 interfaces appear to only log a relatively modest amount of packet data; it's often not enough to do things like completely reconstruct many DNS replies. I'm not sure how you increase the packet size for pflog0 itself, unless it's controlled by the '-s snaplen' argument of pflogd (which I initially read as controlling how much of the packet data from pflog0 would be saved to the log file).

OpenBSDPflogTcpdump written at 22:02:40


The size of a window is complicated in X (or can be)

A simple model of the size of windows and how they can be resized is that windows have a size in pixels and they can be resized pixel by pixel. Okay, you probably want to make it so that windows can have a minimum and a maximum size, because not everything can sensibly be made arbitrarily small or large (if you set the minimum and the maximum the same, your window is implicitly not resizable). However, windows in X can be more complicated than that, as I sort of mentioned in passing in yesterday's entry on implementing 'grow down' window placement in Fvwm.

At the level of the X protocol, windows have a size in pixels and that's it. However, X has long had a way for programs to tell the window manager that they should only be sized and resized in fixed pixel sized amounts, not resized to arbitrary pixels. You can look at this information with the xprop program; you want the WM_NORMAL_HINTS property, which is described in the Xlib programming manual section 14.1.7 and a section of the Inter-Client Communication Conventions Manual.

A major use of these quantized sizes is for terminal programs to tell the window manager that they should only be resized in units of whole characters. For example, an xterm window for me reports (among other things):

program specified size: 1309 by 796
program specified minimum size: 45 by 37
program specified resize increment: 16 by 33
program specified base size: 29 by 4

Here, 16x33 is roughly the character size, and the base size of 29x4 accounts for the scroll bar and some padding. If I take the program specified size (which is in pixels), subtract the base size, and divide by the resize increment, I get 80x24. Not coincidentally, if I started to resize this window, my window manager would report its size as '80x24', not '1309x796'.
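The arithmetic can be checked mechanically; here it is in shell, using the numbers reported by this particular xterm (yours will differ):

```shell
# Recover the character-cell size of the example xterm from its
# WM_NORMAL_HINTS values: (pixel size - base size) / resize increment.
width_px=1309 height_px=796    # program specified size (pixels)
base_w=29     base_h=4         # program specified base size
inc_w=16      inc_h=33         # program specified resize increment

echo "$(( (width_px - base_w) / inc_w ))x$(( (height_px - base_h) / inc_h ))"
# prints: 80x24
```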

These days, there's another surprising but potentially common place that you can encounter this quantization. Here is xprop output from Firefox:

program specified minimum size: 900 by 240
program specified maximum size: 16384 by 16384
program specified resize increment: 2 by 2
program specified base size: 900 by 240

I'm using a 4K display that Firefox considers 'HiDPI'. Firefox on HiDPI displays doubles the size of 'CSS pixels' so that people who do things like set '10 px' font sizes don't create microscopic text (cf; Firefox also generally scales up images and so on). When Firefox doubles the size of CSS pixels, it quite reasonably decides that it should be resized in 2x2 increments, rather than pixel by pixel.

(Chrome doesn't do this on my HiDPI display, and I started up Chrome for the first time in months just to check this.)

This has the interesting side effect that if I resize a Firefox window, the 'window size' my window manager reports is not at all the size in pixels of the actual window. A Firefox window that is physically 2466 pixels by 1614 pixels can be reported as being '783 by 687'; this comes from subtracting the base size from the pixel size, then dividing by two in each direction.

Other programs that are automatically HiDPI aware can also behave this way. Based on checking some GTK based applications (such as Liferea), I believe that all HiDPI aware GTK applications set their resize increment to 2x2 by default. Many laptop screens are HiDPI these days, so people using Linux desktops on them may have seen this happening without realizing. Alternately, desktop environments may either omit reporting window sizes during resize entirely or not scale them if the resize increment is too low. This is generally sensible behavior; most users don't care about the exact pixel size of random windows, whereas they may well care about the character size of terminal windows.

(Cinnamon on a non HiDPI display doesn't report a window size for Firefox when I resize it, but does report one for a terminal window.)

As a side note, xterm and some other programs allow you to specify their geometry in characters instead of in pixels, so that 'xterm -geometry 100x40' means a 100 character by 40 character xterm window, not a 100 pixel by 40 pixel one. I haven't looked to see if this is automatically handled in the X client libraries as part of setting the base size and resize increment, or whether it's handled separately.

PS: Since running terminal windows has always been important to X, I suspect that some version of this was in X10, if not even earlier versions of X. Resizing your terminal window in character units, not pixel units, is an obviously attractive thing.

XWindowSizeComplicated written at 22:22:08


Implementing 'grow down' window placement in Fvwm on X11

In days long ago, I used Twm as my window manager. Twm uses manual window placement for new windows; when a program wants to create a new window and it doesn't explicitly specify where the window should be on the screen, you move your mouse around to wherever you want the window to be (well, usually the top left of the window). Then you can click the left mouse button to place it there, or the right mouse button to invoke a special twm window placement feature. To quote the manual page:

[...] Clicking pointer Button3 (usually the right pointer button) will give the window its current position but attempt to make it long enough to touch the bottom of the screen.

I became addicted to this feature when I used twm, because it's so handy for making big (xterm) windows. You don't have to size or resize your default 80x24 xterm; instead you put the top wherever you want and then click the right button and bang, it automatically goes to the bottom of the screen. It can be used with other programs too, of course, although I don't usually want to resize them as much.

When I moved to FVWM as my window manager, I wanted to keep this feature despite Fvwm not natively supporting it. Over the years I've used a number of ways to get this in Fvwm, starting from patching the source code to much better, lower overhead ways. My current approach uses Fvwm's InitialMapCommand window style option, inspired by this Fvwm cookbook entry.

First we need a function that will do this 'grow down', using a conditional command:

DestroyFunc GrownDownFunc
AddToFunc GrownDownFunc
+ I windowid $0 (PlacedByButton3)   Resize bottomright keep -0p

'Resize bottomright' says we're specifying the position of the bottom right of the window; 'keep' says we want to keep the x (horizontal) position the same, and '-0p' says we want the y (vertical) to be at the bottom of the screen, or as close to it as window size quantization allows.

Then we need to invoke this function, passing it the window ID of the new window so it can work on the right window:

Style * InitialMapCommand GrownDownFunc $[w.id]

Because this approach uses styles instead of something more broad (such as FvwmEvent), I could limit this to terminal windows if I wanted to, or specifically exempt some windows. For example, if I never wanted to yank around the size of new Firefox windows this way, I could add:

Style Firefox InitialMapCommand Nop

So far I haven't bothered to exclude certain programs or specifically limit this to terminal windows, although in practice I almost entirely use it on terminals and editors such as sam.

(I sometimes use it on Emacs windows, but usually I also want to widen them beyond 80 columns. Which is actually an argument that I should change my default Emacs window size. It feels a little bit depressing to widen things out that way, but modern coding is what it is and it's not like I'm short on the horizontal screen space.)

FvwmGrowDownWindowPlacement written at 23:00:15


Things I ran into when moving from Fvwm2 to Fvwm3

I don't use a standard desktop environment like Gnome, KDE, XFCE, or Cinnamon on my home and work desktops (I do use Cinnamon on my work laptop because it was the easy way). Instead I use a custom X11 environment with FVWM as the window manager. Specifically, I used FVWM version 2 ('fvwm2'), until recently when I had to switch to fvwm3 (aka 'fvwm 3') because that was the easiest way to work around what was effectively an API change in libX11 1.8 (see this Fedora issue, also (via)). Fvwm 2 isn't being actively updated any more so it was unlikely to get a 'fix' for the libX11 API change, while Fvwm3 had been fixed as part of general updates to its code.

(The libX11 people would argue that they were merely enforcing an already documented API restriction, but Hyrum's Law applies.)

As I write this, the latest tagged ('released', sort of) version of fvwm3 is 1.0.6a. I'm using a custom build of the current git tip, because it contains some bug fixes that are important to me (#810, #811, and #813). A future 1.0.7 fvwm3 release will include those (and probably also a real fix for #693/#818). The current state of things is that fvwm3 works stably for me and is just the same as fvwm2, which is basically why I didn't try upgrading to fvwm3 before now. However, it took some changes to my old fvwm2 configuration to get there.

Much of what I had to change was updating to modern fvwm2 practice instead of multiple decades old things. What I needed was:

  • while fvwm2 supports directly setting colors and some color effects in various places as well as colorsets, fvwm3 only supports colorsets. For historical reasons my fvwm2 configuration wasn't using colorsets, so I had to update it to do so.

    Colorsets are mostly okay but they have some slightly rough edges. One of them is that one use of colorsets (the GreyedColorset in MenuStyle, used for greyed out menu entries) uses only the colorset's foreground color. So I wound up with a colorset that existed to specify exactly one color.

  • in fvwm2, you could specify that you wanted to move a window relative to the (Xinerama) screen it was on with a short form of 'Move screen w'. In fvwm3 this old syntax has been dropped and I needed to switch to 'Move screen $[w.screen]'.

  • the fvwm3 documentation currently says that FvwmPrompt is the intended modern replacement for FvwmConsole. Unfortunately FvwmPrompt currently has quoting issues that can make it unsuitable for this, per issue #662. In the current fvwm3 'configure', you must build without Go support in order to still build FvwmConsole (I patched my setup to build both FvwmConsole and FvwmPrompt).

    To use FvwmPrompt, you'll need to include starting the 'FvwmMFL' module in your StartFunction. If you already start 'FvwmCommandS', that's now the same thing (in fvwm3 it transparently starts FvwmMFL), but you might as well update to use the real module name in fvwm3.

  • fvwm2's Xinerama support allowed you to specify what Xinerama screen you wanted (fvwm) geometries to be relative to by using '@0', '@1', and so on. In fvwm3, you must use the (X)RandR screen names instead, such as '@DisplayPort-0'. Fvwm3 has a notation for the primary RandR screen ('@p') independent of how it's connected today, but not for additional RandR screens, which can only be named by their current physical connectors.

    (I use this at work to position one FvwmPager window on each display, and to place FvwmIconMan.)

If you're a fvwm(2) user who's tempted to try out this transition, you can make your life easier by adding entries to both your fvwm2 and fvwm3 configurations to restart the other window manager, as well as the current one. So these days I have two 'Restart ..' entries in one of my menus:

+ "Restart Fvwm3"	Restart
+ "Restart Fvwm2"	Restart fvwm2

This makes it much easier to switch back and forth so you can see how something worked in fvwm2, or take a break from trying to make some bit of your fvwm3 configuration work. I found that it took me a number of go-arounds with fvwm3 restarts before I had my configuration settled down.

If you don't use per-screen placement in a multi-screen environment, I believe it's technically possible to use the same configuration file for fvwm2 and fvwm3, since fvwm2 supports colorsets and the '$[w.screen]' syntax. I opted to have a separate fvwm3 configuration file so that I could edit it around without worrying about blowing up my working fvwm2 environment.

PS: I'm not sure if fvwm3 has had an official real release yet, or if it's still considered somewhat in development and not necessarily quite ready for a (very small) flood of fvwm2 users. The libX11 1.8 change probably isn't going to leave people with much of a choice, though.

Fvwm2ToFvwm3 written at 22:34:22
