Wandering Thoughts

2019-02-17

Why I like middle mouse button paste in xterm so much

In my entry about how touchpads are not mice, I mused that one of the things I should do on my laptop was ensure that I had a keyboard binding for paste, since a middle mouse button click is one of the harder multi-finger gestures to land on a touchpad. Kurt Mosiejczuk recently left a comment there where they said:

Shift-Insert is a keyboard equivalent for paste that is in default xterm (at least OpenBSD xterm, and putty on Windows too). I use that most of the time now as it seems less... trigger-happy than right click paste.

This sparked some thoughts, because I can't imagine giving up middle mouse paste if I have a real choice. I had earlier seen Shift-Insert mentioned in other commentary on my entry, so I've already tried a bit to use it on my laptop, and it hasn't really felt great even there; on my desktops, it's even less appealing (I tried Shift-Insert out there to confirm that it did work in my set of wacky X resources).
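(For reference, the binding involved is an xterm translation. A minimal sketch of setting it up explicitly through X resources, in case your own translations have overridden xterm's defaults, looks something like this:

XTerm*VT100.translations: #override \n\
    Shift <KeyPress> Insert: insert-selection(PRIMARY)

Stock xterm already ships with a Shift-Insert binding along these lines, so normally you don't need to add it yourself.)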

In thinking about it, I came to the obvious realization of why this is so. I like middle mouse button paste in normal usage because it's so convenient: almost all of the time my hand is already on the mouse. And the reason my hand is already on the mouse is that I've just used the mouse to shift focus to the window I want to paste into. Even on my laptop, my right hand is usually away from the keyboard as I move the mouse pointer on the touchpad, making Shift-Insert at least somewhat awkward.

(The exception that proves the rule for me is dmenu. Dmenu is completely keyboard driven and when I bring it up, Ctrl-Y to paste the current X selection is completely natural.)

I expect that people who use the keyboard to change window focus have a pretty different experience here, whether they're using a fully keyboard driven window manager or simply one where they use Alt-Tab (or the equivalent) to select through windows. My laptop's Cinnamon setup has support for Alt-Tab window switching, so perhaps I should try to use it more. On the other hand, making the text selection I'm copying is generally going to involve the mouse or touchpad, even on my laptop.

(I don't think I want to try keyboard window switching in my desktop fvwm setup for various reasons, including that I think you want to be using some version of 'click to focus' instead of mouse pointer based focus for this to really work out. Having the mouse pointer in the 'wrong' window for your focus policy seems like a recipe for future problems and unpleasant surprises. On top of that, X's handling of scroll wheels means that I often want the mouse pointer to be in the active window just so I can use my mouse's scroll wheel.)

PS: Even if it's possible to use keyboard commands to select things in xterm or other terminal emulators, I suspect that I don't want to bother trying it. I rather expect it would feel a lot like moving around and marking things in vi(m), with the added bonus of having to remember an entirely different set of keystrokes that wouldn't work in Firefox and other non-terminal contexts.

unix/MouseMovementAndPaste written at 22:46:50

Some notes on heatmaps and histograms in Prometheus and Grafana

On Mastodon (or if you prefer, the Fediverse), I mentioned:

I have now reached the logical end point of running Prometheus on my desktop, which is that I have installed Grafana so I can see heatmap graphs of my disk IO latency distributions generated from the Cloudflare eBPF exporter.

It's kind of neat once I got all the bits going.

This isn't my first go-around on heatmaps and histograms, but this time around I found new clever mistakes to make on top of my existing confusions. So it's time for some notes, in the hopes that they will make next time easier.

Grafana can make heatmaps out of at least two different sorts of Prometheus metrics, showing the distribution of numeric values over time (a value heatmap). The first sort, which is simpler and the default if you set up a heatmap panel, is gauges or gauge-like things, such as the number of currently active Apache processes or the amount of CPU usage over the past minute (which you would generate with rate() from the underlying counters). You could visualize these metrics in a conventional graph, but in many cases the graph would wiggle around madly and it would be hard to see much in it. Showing the same data in a heatmap may provide more useful and readable information.

When used this way, Grafana automatically works out the heatmap buckets to use from the data values and groups everything together, and it is all very magical. Grafana takes multiple samples for every bucket's time range, but not all that many samples, and there is no real way to control this. In particular, as the time range goes up, Grafana will sample your metric at steadily coarser resolution, even though it could use a finer resolution to get more detailed information for buckets. As a consequence, for gauges you almost certainly want to use avg_over_time or max_over_time instead of the raw metric.

(Using rate() on a counter already gives you this implicitly.)
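For instance, for a gauge such as node_exporter's node_load1, a heatmap query might look something like the following sketch, using Grafana's $__interval variable:

max_over_time(node_load1[$__interval])

(avg_over_time() is the same thing with averages instead of maximums.)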

The other sort of Grafana heatmap is made from Prometheus histogram metrics, which the Grafana documentation calls 'pre-bucketed data'. With these, you have to go to the Axes tab of the panel and set the "Data format" to "Time series buckets", and you also normally set the "Legend format" to '{{le}}' in the Metrics tab so that the Y axis can come out right. Failing to change the data format will give you very puzzling heatmaps, and it is not at all obvious what's wrong or how to fix it.

(It's a real pity that Grafana doesn't auto-detect that this is a Prometheus histogram metric and automatically switch the data format and so on for you. It would make things much more usable and friendly.)

Prometheus histogram metrics can be either counters or gauges. A histogram of the number of IMAP connections per user would be a gauge histogram, because it changes up and down as people log on and off. A histogram of disk IO latency is a counter histogram; it will normally only count up. You need to rate() or increase() counter histograms in order to get useful heatmap displays; gauge histograms can be used as-is, although you probably want to consider running them through avg_over_time or max_over_time.
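As a sketch, a heatmap query for a counter histogram generally has this shape (the metric name here is a made-up stand-in for your actual histogram):

sum(rate(disk_io_latency_seconds_bucket[$__interval])) by (le)

with the "Legend format" set to '{{le}}' and the "Data format" set to "Time series buckets", as above.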

(Prometheus's metric type information doesn't distinguish between these two sorts of histograms. If you're lucky, the HELP text for a particular histogram tells you which it is; if you're not, you get to deduce it from what the histogram is measuring and how it behaves over time.)

One easy mistake to make is to have your heatmap metric query in Grafana actually return more than one metric sequence. For instance, when I first set up a heatmap for my disk latency metrics, I didn't realize that they came in a 'read' and a 'write' version for each disk. The resulting combined heatmap was rather confusing, with all sorts of nonsensical bucket counts. In theory you can put such multiple metrics in the same heatmap by creating separate names in the legend format, for example '{{le}} {{operation}}', but in practice this gives you two (or more) heatmaps stacked on top of each other, which is not necessarily what you want. As far as I know, there's no way to combine or superimpose two metrics in the same heatmap. Sadly, this does result in an explosion of heatmaps for things like disk latency, so you probably want to use some Grafana dashboard variables to select what disk (or perhaps disks) you want heatmaps from.
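Concretely, that means narrowing the query down with label matches and a dashboard variable, something like this (again with made-up metric and label names, and a hypothetical '$disk' variable):

sum(rate(disk_io_latency_seconds_bucket{operation="read", device="$disk"}[$__interval])) by (le)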

It seems surprisingly hard to find a colour scheme for Grafana heatmaps that has a pleasant variation from common to uncommon values while still clearly showing that uncommon values are present. By default, Grafana seems to want to fade uncommon values out almost to invisibility, which is not what I want; I want uncommon values to stand out, because they are one of the important things I'm looking for with heatmaps and histograms in general. Perhaps this is a sign that Grafana heatmaps are not actually the best way of looking for unusual values in Prometheus histograms, although they are probably a good way of looking at the details once I know that some are present.

(I've also learned some hard lessons about hand-building histogram metrics for Prometheus. My overall advice there is to delegate the job to someone else's code if you have the choice, because it's really hard to get right if you're doing it all yourself.)

PS: For things like disk IO latency distributions, where the tail end is multiple seconds but involves fractions like '2.097152', it helps to explicitly set the Y axis decimals to '1' instead of leaving it on auto. This lets the Y axis labels take up less space so the buckets get more of it. For disk IO sizes, I even set the decimals to '0'. Grafana's obsession with extreme precision in this sort of thing is both impressive and irksome.

sysadmin/PrometheusGrafanaHeatmaps written at 00:47:08

2019-02-15

Accumulating a separated list in the Bourne shell

One of the things that comes up over and over again when formatting output is that you want to output a list of things with some separator between them but you don't want this separator to appear at the start or the end, or if there is only one item in the list. For instance, suppose that you are formatting URL parameters in a tiny little shell script and you may have one or more parameters. If you have more than one parameter, you need to separate them with '&'; if you have only one parameter, the web server may well be unhappy if you stick an '&' before or after it.

(Or not. Web servers are often very accepting of crazy things in URLs and URL parameters, but one shouldn't count on it. And it just looks irritating.)

The very brute force approach to this general problem in Bourne shells goes like this:

tot=""
for i in "$@"; do
  ....
  v="var-thing=$i"
  if [ -z "$tot" ]; then
    tot="$v"
  else
    tot="$tot&$v"
  fi
done

But this is five or six lines and involves some amount of repetition. It would be nice to do better, so when I had to deal with this recently I looked into the Dash manpage to see what shell substitutions or other clever things could help. With shell substitutions we can condense this a lot, but we can't get rid of all of the repetition:

tot="${tot:+$tot&}var-thing=$i"

It annoys me that tot is repeated in this. However, this is probably the best all-around option in normal Bourne shell.

Bash has arrays, but the manpage's documentation of them makes my head hurt, and using them results in Bash-specific scripts (or at least scripts specific to any shell with support for arrays). I'm also not sure if there's any simple way of doing a 'join' operation to glue the array elements together with a separator between them, which is the whole point of the exercise.

(But now I've read various web pages on Bash arrays so I feel like I know a little bit more about them. Also, on joining, see this Stackoverflow Q&A; it looks like there's no built-in support for it.)
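As a sketch of what can be done, here is one Bash-specific approach that exploits the fact that "${array[*]}" joins the array's elements with the first character of IFS; it only works for single-character separators:

args=()
for i in "$@"; do
  args+=("var-thing=$i")
done
# join with '&' in a subshell so we don't disturb the script's IFS
tot="$(IFS='&'; printf '%s' "${args[*]}")"

This works, but I'm not convinced it's any clearer than the '${var:+word}' version below.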

In the process of writing this entry, I realized that there is an option that exploits POSIX pattern substitution after generating our '$tot' to remove any unwanted prefix or suffix. Let me show you what I mean:

tot=""
for i in "$@"; do
  ...
  tot="$tot&var-thing=$i"
done
# remove leading '&':
tot="${tot#&}"

This feels a little bit unclean, since we're adding on a separator that we don't want and then removing it later. Among other things, that seems like it could invite accidents where at some point we forget to remove that leading separator. As a result, I think that the version using '${var:+word}' substitution is the best option, and it's what I'm going to stick with.

programming/BourneSeparatedList written at 23:12:33

2019-02-14

A pleasant surprise with a Thunderbolt 3 10G-T Ethernet adapter

Recently, I tweeted:

I probably shouldn't be surprised that a Thunderbolt 10G-T Ethernet adapter can do real bidirectional 10G on my Fedora laptop (a Dell XPS 13), but I'm still pleased.

(I am still sort of living in the USB 2 'if it plugs in, it's guaranteed to be slow' era.)

There are two parts to my pleasant surprise here. The first part is simply that a Thunderbolt 3 device really did work fast, as advertised, because I'm quite used to nominally high-speed external connection standards that do not deliver their rated speeds in practice for whatever reason (sometimes including that the makers of external devices cannot be bothered to engineer them to run at full speed). Having a Thunderbolt 3 device actually work feels novel, especially when I know that Thunderbolt 3 basically extends some PCIe lanes out over a cable.

(I know intellectually that PCIe can be extended off the motherboard and outside the machine, but it still feels like magic to actually see it in action.)

The second part of the surprise is that my garden variety vintage 2017 Dell XPS 13 laptop could actually drive 10G-T Ethernet at essentially full speed, and in both directions at once. I'm sure that some of this is in the Thunderbolt 3 10G-T adapter, but still; I'm not used to thinking of garden variety laptops as being that capable. It's certainly more than I was hoping for and means that the adapter is more useful than we expected for our purposes.

This experience has also sparked some thoughts about Thunderbolt 3 on desktops, because plugging this in to my laptop was a lot more pleasant an experience than opening up a desktop case to put a card in, which is what I'm going to need to do on my work desktop if I need to test a 10G thing with it someday. Unfortunately it's not clear to me if there even are general purpose PC Thunderbolt 3 PCIe cards today (ones that will go in any PCIe x4 slot on any motherboard), and if there are, it looks like they're moderately expensive. Perhaps in four or five years, my next desktop will have a Thunderbolt 3 port or two on the motherboard.

(We don't have enough 10G cards and they aren't cheap enough that I can leave one permanently in my desktop.)

PS: My home machine can apparently use some specific add-on Thunderbolt 3 cards, such as this Asus one, but my work desktop is an AMD Ryzen based machine and Ryzen systems seem to be out of luck right now. Even the addon cards are not inexpensive.

tech/Thunderbolt10GSurprise written at 23:07:03

2019-02-13

An unpleasant surprise with part of Apache's AllowOverride directive

Suppose, not entirely hypothetically, that you have a general directory hierarchy for your web server's document root, and you allow users to own and maintain subdirectories in it. In order to be friendly to users, you configure this hierarchy like the following:

Options SymLinksIfOwnerMatch
AllowOverride FileInfo AuthConfig Limit Options Indexes

This allows people to use .htaccess files in their subdirectories to do things like disable symlinks or enable automatic directory indexes (which you have turned off here by default in order to avoid unpleasant accidents, but which is inconvenient if people actually have a directory of stuff that they just want to expose).

Congratulations, you have just armed a gun pointed at your foot. Someday you may look at a random person's .htaccess in their subdirectory and discover:

Options +ExecCGI
AddHandler cgi-script .cgi

You see, as the fine documentation will explicitly tell you, the innocent looking 'AllowOverride Options' does exactly what it says on the can; it allows .htaccess files to turn on any option that the Options directive supports. Some of these options are harmless, such as 'Options Indexes', while others are probably things that you don't want people turning on on their own without talking to you first.

(People can also turn on the full 'Options +Includes', which also allows them to run programs through the '#exec' element, as covered in mod_include's documentation. For that matter, you may not want to allow them to turn on even the more modest IncludesNOEXEC.)

To deal with this, you need to restrict what Options people can control, something like:

AllowOverride [...] Options=Indexes,[...] [...]

The Options= list is not just the options that people can turn on, it is also the options that you let them turn off, for example if they don't want symlinks to work at all in their subdirectory hierarchy.
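So a more careful version of the original configuration might look something like this (the particular list of allowed options here is just an illustration, not a recommendation):

Options SymLinksIfOwnerMatch
AllowOverride FileInfo AuthConfig Limit Options=Indexes,SymLinksIfOwnerMatch,MultiViews Indexes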

(It's kind of a pity that Options is such a grab-bag assortment of things, but that's history for you.)

As an additional note, changing your 'AllowOverride Options' settings after the fact may be awkward, because any .htaccess file with a now-disallowed Options setting will cause the entire subdirectory hierarchy to become inaccessible. This may bias you toward very conservative initial settings, with narrow exemptions added later as people ask for them.

(Our web server is generously configured for historical reasons; it has been there for a long time and defaults were much looser in the past, so people made use of them. We would likely have a rather different setup if we were recreating the content and configuration today from scratch.)

web/ApacheAOSurprise written at 22:58:41

2019-02-12

Using grep with /dev/null, an old Unix trick

Every so often I will find myself writing a grep invocation like this:

find .... -exec grep <something> /dev/null '{}' '+'

The peculiar presence of /dev/null here is an old Unix trick that is designed to force grep to always print out file names, even if your find only matches one file, by always ensuring that grep has at least two file arguments. You can wind up wanting to do the same thing with a direct use of grep if you're not certain how many files your wildcard may match. For example:

grep <something> /dev/null */*AThing*

This particular trick is functionally obsolete because pretty much all modern mainstream versions of grep support a -H argument to do the same thing (as the inverse of the -h argument that always turns off file names). This is supported in GNU grep and the versions of grep found in FreeBSD, OpenBSD, NetBSD, and Illumos. To my surprise, -H is not in the latest Single Unix Specification grep, so if you care about strict POSIX portability, you still need to use the /dev/null trick.
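With a grep that has it, the modern equivalent of the find invocation above is simply:

find .... -exec grep -H <something> '{}' '+'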

(I am biased, but I am not sure why you would care about strict POSIX portability here. POSIX-only environments are increasingly perverse in practice (arguably they always were).)

If you stick to POSIX grep you also get to live without -h. My usual solution to that was cat:

cat <whatever> | grep <something>

This is not quite a pointless use of cat, but it is an irritating one.

For whatever reason I remember -h better than I do -H, so I still use the /dev/null trick every so often out of reflex. I may know that grep has a command line flag to do what I want, but it's easier to throw in a /dev/null than to pause to reread the manpage if I've once again forgotten the exact option.

unix/GrepDevNull written at 23:40:09

2019-02-11

Thinking about the merits of 'universal' URL structures

I am reasonably fond of my URLs here on Wandering Thoughts (although I've made a mistake or two in their design), but I have potentially made life more difficult for a future me in how I've designed them. The two difficulties I've given to a future self are that my URLs are bare pages, without any extension on the end of their name, and that displaying some important pages requires a query parameter.

The former is actually quite common out there on the Internet, as many people consider the .html (or .htm) to be ugly and unaesthetic. You can find lots and lots of things that leave off the .html, at this point perhaps more than leave it on. But it does have one drawback, which is that it makes it potentially harder to move your content around. If you use URLs that look like '/a/b/page', you need a web server environment that can serve those as text/html, either by running a server-side app (as I do with DWiki) or by suitable server configuration so that such extension-less files are text/html. Meanwhile, pretty much anything is going to serve a hierarchy of .html files correctly. In that sense, a .html on the end is what I'll call a universal URL structure.

What makes a URL structure universal is that in a pinch, pretty much any web server will do to serve a static version of your files. You don't need the ability to run things on the server and you don't need any power over the server configuration (and thus even if you have the power, you don't have to use it). Did your main web server explode? Well, you can quickly dump a static version of important pages on a secondary server somewhere, bring it up with minimal configuration work, and serve the same URLs. Whatever happens, the odds are good that you can find somewhere to host your content with the same URLs.

I think that right now there are only two such universal URL structures: plain pages with .html on the end, and directories (ie, structuring everything as '/a/b/page/'). The specific mechanisms of giving a directory an index page of some kind will vary, but probably most everything can actually do it.

On the other hand, at this point in the evolution of the web and the Internet in general it doesn't make sense to worry about this. Clever URLs without .html and so on are extremely common, so it seems very likely that you'll always be able to do this without too much work. Maybe one convenient place to publish your pages won't support it, but you'll be able to find another, or easily search for configuration recipes for the web server of your choice.

(For example, in doing some casual research for this entry I discovered that Github Pages lets you omit the .html on URLs for things that actually have them in the underlying repository. Github's server side handling of this automatically makes it all work. See this stackoverflow Q&A, and you can test it for yourself on your favorite Github Pages site, eg. I looked at Github Pages because I was thinking of it as an example of almost no effort hosting one might reach for in a pinch, and here it is already supporting what you'd need.)

PS: Having query parameters on your URLs will make your life harder here; you probably need either server-side access to something on the order of Apache's RewriteCond, or to add some JavaScript to all the relevant pages that looks for query parameters and either provides the right page content or at least redirects to a better URL.
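As a rough sketch of the Apache approach (with entirely made-up URLs and a made-up query parameter), a rewrite that maps a query-parameter URL onto a pre-rendered static page might look like:

RewriteEngine On
RewriteCond %{QUERY_STRING} ^showcomments$
RewriteRule ^/a/b/page$ /static/a/b/page-comments.html [L,QSD]

(The QSD flag, which drops the query string from the rewritten URL, needs Apache 2.4 or later.)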

(DWiki has decent reasons for using query parameters, but I feel like perhaps I should have tried harder or been cleverer.)

web/UniversalUrlStructures written at 23:00:50

2019-02-10

Open protocols can evolve fast if they're willing to break other people

A while back I read an entry from Pete Zaitcev, where he said, among other things:

I guess what really drives me mad about this is how Eugen [the author of Mastodon] uses his mindshare advanage to drive protocol extensions. All of Fediverse implementations generaly communicate freely with one another, but as Pleroma and Mastodon develop, they gradually leave Gnusocial behind in features. In particular, Eugen found a loophole in the protocol, which allows to attach pictures without using up the space in the message for the URL. When Gnusocial displays a message with attachment, it only displays the text, not the picture. [...]

When I read this, my immediate reaction was that this sounded familiar. And indeed it is, just in another guise.

Over the years, there have been any number of relatively open protocols for federated things that were used by more or less commercial organizations, such as XMPP and Signal's protocol. Over and over again, the companies running major nodes have wound up deciding to de-federate (Signal, for example). When this has happened, one of the stated reasons has been that being federated held back development (as covered in eg LWN's The perils of federated protocols, about Signal's decision to drop federation). At the time, I thought of this as being possible because what was involved was a company moving to a closed product, sometimes while doing much of the development work itself (as in Signal's case).

What Mastodon (and Pleroma) illustrate here is that this sort of thing can be done even in open protocols where some degree of federation is still being maintained. All it needs is for the people involved to be willing to break protocol compatibility with other implementations that aren't willing to follow along and keep up (either because of lack of time or because of disagreements over the direction the protocol is being dragged in). Of course this is easier when the people making the changes maintain the dominant implementations, but anyone can do it if they're willing to live with the consequences, primarily a slow tacit de-federation where messages may still go back and forth but are increasingly not useful for one or both sides.

Is this a good thing or not? I have no idea. On the one hand, Mastodon is moving the protocol in directions that are clearly useful to people; as Pete Zaitcev notes:

[...] But these days pictures are so prevalent, that it's pretty much impossible to live without receiving them. [...]

On the other hand things are clearly moving away from a universal federation of equals and an environment where the Fediverse and its protocols evolve through a broad process of consensus among many or all of the implementations. And there's the speed of evolution too; faster evolution privileges people who can spend more and more time on their implementation and people who can frequently update the version they're running (which may well require migration work and so on). A rapidly evolving Fediverse is one that requires ongoing attention from everyone involved, as opposed to a Fediverse where you can install an instance and then not worry about it for a while.

(This split is not unique to network protocols and federation. Consider the evolution of programming languages, for example; C++ moves at a much slower pace than things like Go and Swift because C++ cannot just be pushed along by one major party in the way those two can be by Google and Apple.)

tech/OpenProtocolsAndFastEvolution written at 20:05:28

2019-02-09

'Scanned' versus 'issued' numbers for ZFS scrubs (and resilvers)

Sufficiently recent versions of ZFS have new 'zpool status' output during scrubs and resilvers. The traditional old output looks like:

scan: scrub in progress since Sat Feb  9 18:30:40 2019
      125G scanned out of 1.74T at 1.34G/s, 0h20m to go
      0B repaired, 7.02% done

(As you can probably tell from the IO rate, this is a SSD-based pool.)

The new output adds an additional '<X> issued at <RATE>' note in the second line, and in fact you can get some very interesting output in it:

scan: scrub in progress since Sat Feb  9 18:36:33 2019
      215G scanned at 2.24G/s, 27.6G issued at 294M/s, 215G total
      0B repaired, 12.80% done, 0 days 00:10:54 to go

Or (with just the important line):

      271G scanned at 910M/s, 14.5G issued at 48.6M/s, 271G total

In both cases, this claims to have 'scanned' the entire pool but has only 'issued' a much smaller amount of IO. As it turns out, this is a glaring clue as to what is going on, which is that these are the new sequential scrubs in action. Sequential scrubs (and resilvers) split the non-sequential process of scanning the pool into two sides, scanning through metadata to figure out what IOs to issue and then, separately, issuing the IOs after they have been sorted into order (I am pulling this from this presentation, via). A longer discussion of this is in the comment at the start of ZFS on Linux's dsl_scan.c.

This split is what the new 'issued' number is telling you about. In sequential scrubs and resilvers, 'scanned' is how much metadata and data ZFS has been able to consider and queue up IO for, while 'issued' is how much IO has been actively queued to vdevs. Note that it is not physical IO; instead it is progress through what 'zpool list' reports as ALLOC space, as covered in my entry on ZFS scrub rates and speeds.

(All of these pools I'm showing output from use mirrored vdevs, so the actual physical IO is twice the 'issued' figures.)

As we can see from these examples, it is possible for ZFS to completely 'scan' your pool before issuing much IO. This is generally going to require that your pool is relatively small and also that you have a reasonable amount of memory, because ZFS limits how much memory it will use for all of those lists of not yet issued IOs that it is sorting into order. Once your pool is fully scanned, the reported scan rate will steadily decay, because it's computed based on the total time the scrub or resilver has been running, not the amount of time that ZFS took to hit 100% scanned.

(In the current ZFS on Linux code, this memory limit appears to be a per-pool one. On the one hand this means that you can scan several pools at once without one pool limiting the others. On the other hand, this means that scanning multiple pools at once may use more memory than you're expecting.)
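(On ZFS on Linux, this limit appears to be controlled through a module parameter; assuming your version has it, you can inspect it with something like:

cat /sys/module/zfs/parameters/zfs_scan_mem_lim_fact

I believe the value is a divisor of physical memory rather than an absolute size, but check the zfs-module-parameters manpage for your version.)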

Sequential scrubs and resilvers are in FreeBSD 12 and will appear in ZFS on Linux 0.8.0 whenever that is released (ZoL is currently at 0.8.0-rc3). It doesn't seem to be in Illumos yet, somewhat to my surprise.

(This entry was sparked by reading this question to the BSD Now program, via BSD Now 281, which I stumbled over due to my Referer links.)

solaris/ZFSScrubScannedVsIssued written at 19:21:15

2019-02-08

Making more use of keyboard control over window position and size

For a long time, I've done things like move and resize windows in my highly customized X environment through the mouse. Using the mouse for this is the historical default in X window managers and X window manager setups, and I have a whole set of fvwm mouse bindings to make this fast and convenient. I may have used keyboard bindings for things like raising and lowering windows and locking the screen, but not beyond that.

(I was familiar with the idea, since keyboard-driven tiling window managers are very much a thing in X circles. But I happen to like the mouse.)

Then I got a work laptop with a screen that was big enough to actually need window management and discovered that my mouse-driven window manipulation didn't really work well due to a touchpad not being a good mouse. This led to me exploring Cinnamon's keyboard bindings for re-positioning and re-sizing windows (once I discovered that they exist), and progressively using them more and more rather than shuffle windows around by hand on the touchpad.

(Unfortunately there doesn't seem to be a good list of Cinnamon's standard keybindings, or even anything obvious to tell you that they exist. I'm not sure how I stumbled over them.)

These keybindings have turned out to be quite pleasant to use and make for a surprisingly fluid experience. Cinnamon's basic positioning keybindings let you tile windows to the halves and quarters of the screen, which works out about right on my laptop, and of course it supports maximize. Recently I've also been using a keybinding for flipping to a new virtual desktop ('workspace' in Cinnamon terminology) and taking the current window with me, which is helpful for de-cluttering a screen that's getting too overgrown.

My laptop experience has pushed me into adding a certain number of similar keyboard bindings to my desktop fvwm setup. Most of these are pure positioning bindings that do things like move a window to the bottom left or bottom right corner of the screen, because this turns out to be an operation I want to do a fair bit on my desktop (partly so I can just randomly place a new window then snap it into position). I also created a keybinding that resets terminals and browser windows to my preferred size for them (along with a 'maximize window' keybinding), which has turned out to be very handy. Now that it's trivial to reset a window to the right size, I'm much more likely to temporarily resize windows to whatever is convenient for right now.
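For illustration, these aren't my actual bindings, just a sketch of the sort of fvwm configuration involved, using Mod4 as the modifier and arbitrary pixel sizes:

# snap the current window to the bottom left or bottom right corner
Key Left   A 4   Move 0p -0p
Key Right  A 4   Move -0p -0p
# reset to a preferred terminal size
Key Home   A 4   Resize 570p 780p

(Negative Move arguments in fvwm are measured from the right or bottom edge of the screen.)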

(A maximized Firefox on either my work or my home display is a little bit ridiculous, but sometimes I just want to hit things with a giant hammer and there are cases where it really is what I want.)

Clearly one of the things that's going on here is the usual issue of reducing friction, much as with creating keyboard controls for sound volume and other cases. Having specific, easily accessible controls for various sorts of operations makes it more likely that I will actually bother to do them, and when they're useful operations that's a good thing.

(It's not sufficient to have accessible controls, as I'm not using all of the fvwm keyboard bindings for this that I added. Some superficially attractive positioning and resizing operations turn out to be too infrequent for me to bother to remember and use the keybindings.)

sysadmin/KeyboardWindowControl written at 21:31:25
