2018-11-25
Firefox's middle-click behavior on HTML links on Linux
When I wrote about my unusual use for Firefox's Private Browsing mode, I lamented in an aside that you couldn't attach custom behavior to middle-clicking links with modifier keys held down, at least on Linux. This raised an obvious question: what are the various behaviors of middle-clicking links on Linux with different modifier keys held down?
So here they are, for posterity, as of Firefox 63 or so:
| Keys | Behavior |
|---|---|
| Middle click or Shift + middle click | Your default 'open link in' behavior, either a new tab or a new window. For me, a new window. |
| Ctrl + middle click | The alternate to your plain middle click behavior (so opening a new tab in the background for me). |
| Shift + Ctrl + middle click | Open the link in a new tab and then do the reverse of your 'when you open a link in a new tab, switch to it immediately' preference. |
If you have Firefox in its default preferences, where opening links in a new tab doesn't switch to it immediately, shift + ctrl + middle click will immediately switch to the new tab. If you have Firefox set to switch to new tabs immediately, shift + ctrl + middle click opens new tabs in the background.
Firefox on Linux appears to entirely ignore both Alt and Meta (aka Super) when handling middle clicks. It probably ignores other modifiers too, but I don't have any way of generating either CapsLock or NumLock in my X setup for testing. Note that your window manager setup may attach its own special meaning to Alt + middle clicks in windows (or Alt + the middle mouse button in general), preempting the click before it ever reaches Firefox; this was the case for me until I realized it and temporarily turned it off for testing.
You might also wonder about modifiers on left clicks on links. In general, it turns out that adding modifiers to a left click turns it into a middle click. There is one interesting exception, which is that Alt plus left click ignores the link and turns your click into a regular mouse click on text; this is convenient for double-clicking words in links, or single-clicking to select sub-word portions of things.
(Perhaps I knew this at one point but forgot it or demoted it to reflexive memory. There's a fair amount about basic Firefox usage that I don't really think about and don't know consciously any more.)
Sadly, I suspect that the Firefox people wouldn't be interested in letting extensions attach custom behavior to Alt + middle clicks on links (with or without other modifiers), or Meta + middle clicks. These are really the only two modifiers that could sensibly have their behavior altered or modified, but since they're already ignored, allowing extensions to interpret them might cause disruption to users who've gotten used to Firefox not caring about either when middle-clicking.
As a side note, Shift plus the scroll wheel buttons changes the scroll wheel from scrolling up and down to scrolling left and right. Ctrl plus the scroll wheel buttons is text zoom, which is probably well known (certainly I knew it). Alt plus the scroll wheel is 'go forward/back one page', which I didn't know. Adding Shift or Meta to any of these combinations reverts the scroll wheel to its default 'scroll up/down' behavior, and Meta plus the scroll wheel on its own also gives you the default behavior.
PS: Modifiers don't appear to change the behavior of right clicking at all; I always get the popup menu. The same is true if your mouse has physical rocker buttons, which Firefox automatically interprets as 'go forward one page' and 'go back one page'.
Update: There's a bunch of great additional information in the comments from James, including a useful piece of information about Shift plus right click. If you're interested in this stuff, you want to read them too.
How we monitor our Prometheus setup itself
On Mastodon, I said:
When you have a new alerting and monitoring system, 'who watches the watchmen' becomes an interesting and relevant question. Especially when the watchmen have a lot of separate components and moving parts.
If we had a lot of experience with Prometheus, we probably wouldn't worry about this; we'd be able to assume that everything was just going to work reliably. But we're very new with Prometheus, and so we get to worry about its reliability in general and also the possibility that we'll quietly break something in our configuration or how we're operating things (and we have, actually). So we need to monitor Prometheus itself. If Prometheus was a monolithic system, this would probably be relatively easy, but instead our overall Prometheus environment has a bunch of separate pieces, all of which can have things go wrong.
A lot of how we're monitoring for problems is probably basically standard in Prometheus deployments (or at least standard in simple ones, like ours). The first level of monitoring and alerts is things inside Prometheus:
- We alert on unresponsive host agents (ie, Prometheus node_exporter) as part of our general checking for and alerting on down hosts; this will catch when a configured machine doesn't have the agent installed or it hasn't been started. The one thing it won't catch is a production machine that hasn't been added to our Prometheus configuration at all. Unfortunately there's no good automated way in our environment to tell what is and isn't a production machine, so we're just going to have to rely on remembering to add machines to Prometheus when we put them into production.

  (This alert uses the Prometheus 'up' metric for our specific host agent job setting; there's a sketch of roughly what such rules might look like after this list.)

- We also alert if Prometheus can't talk to a number of other metrics sources it's specifically configured to pull from, such as Grafana, Pushgateway, the Blackbox agent itself, Alertmanager, and a couple of instances of an Apache metrics exporter. This is also based on the 'up' metric, excluding the ones for host agents and for all of our Blackbox checks (which generate 'up' metrics themselves; these can be distinguished from regular 'up' metrics because the Blackbox check ones have a non-empty 'probe' label).

- We publish some system-wide information for temperature sensor readings and global disk space usage for our NFS fileservers, so we have checks to make sure that this information is both present at all and not too old. The temperature sensor information is published through Pushgateway, so we leverage its 'push_time_seconds' metric for the check. The disk space usage information is published in a different way, so we rely on its own 'I was created at' metric.

- We publish various per-host information through the host agent's 'textfile' collector, where you put files of metrics you want to publish in a specific directory, so we check that these files aren't too stale through the 'node_textfile_mtime_seconds' metric. Because we update these files at varying intervals but don't want to have complex alerts here, we use a single measure for 'too old', and it's a quite conservative number.

  (This won't detect hosts that have never successfully published some particular piece of information at all, but I'm currently assuming this is not going to happen. Checking for it would probably be complicated, partly because we'd have to bake in knowledge about what things hosts should be publishing.)
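Since I promised a sketch: the following is roughly the shape such alert rules can take. It's a minimal illustration and not our actual configuration; the job name ('node'), the thresholds, and the alert names here are all assumptions I've made up for the example.

```yaml
# Sketch of Prometheus alerting rules along these lines.
# Job names, thresholds, and alert names are illustrative assumptions.
groups:
  - name: selfmonitor-sketch
    rules:
      # A host agent (node_exporter) that Prometheus can't scrape.
      - alert: HostAgentDown
        expr: up{job="node"} == 0
        for: 5m
        annotations:
          summary: "node_exporter on {{ $labels.instance }} is not responding"

      # Pushgateway-published data (eg temperature sensors) that has gone
      # stale; push_time_seconds records when a group was last pushed.
      - alert: PushedMetricsStale
        expr: time() - push_time_seconds > 3600
        for: 10m
        annotations:
          summary: "pushed metrics on {{ $labels.instance }} are over an hour old"

      # Textfile collector files that haven't been rewritten recently.
      - alert: TextfileMetricsStale
        expr: time() - node_textfile_mtime_seconds > 6 * 3600
        for: 10m
        annotations:
          summary: "textfile metrics {{ $labels.file }} on {{ $labels.instance }} are stale"
```

Real rules also need labels for Alertmanager routing and more thought about thresholds, but the expressions are the interesting part.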
All of these alerts require their own custom and somewhat ad-hoc rules. In general writing all of these checks feels like a bit of a slog; you have to think about what could go wrong, and then how you could check for it, and then write out the actual alert rule necessary. I was sort of tempted to skip writing the last two sets of alerts, but we've actually quietly broken both the global disk space usage and the per-host information publication at various times.
(In fact I found out that some hosts weren't updating some information
by testing my alert rule expression in Prometheus. I did a topk()
query on it and then went 'hey, some of these numbers are really
much larger than they should be'.)
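The sort of query I mean is along these lines; the specific metric here is just the textfile staleness one from earlier, picked for illustration:

```
topk(10, time() - node_textfile_mtime_seconds)
```

Anything that hasn't been updated in ages floats to the top with an implausibly large age (in seconds).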
This leaves checking Prometheus itself, and also a useful check on Alertmanager (because if Alertmanager is down, Prometheus can't send out the alert it detects). In some places the answer to this would be a second Prometheus instance that cross-checks the first and a pair of Alertmanagers that both of them talk to and that coordinate with each other through their gossip protocol. However, this is a bit complicated for us, so my current answer is to have a cron job that tries to ask Prometheus for the status of Alertmanager. If Prometheus answers and says Alertmanager is up, we conclude that we're fine; otherwise, we have a problem somewhere. The cron job currently runs on our central mail server so that it depends on the fewest other parts of our infrastructure still working.
(Mechanically this uses curl to make the query through Prometheus's HTTP API and then jq to extract things from the answer.)
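As a sketch of the mechanics, the heart of such a cron job can be quite small. The Prometheus URL and the 'alertmanager' job name here are made-up placeholders, not our actual setup:

```sh
#!/bin/sh
# Ask Prometheus whether it can currently scrape Alertmanager, using the
# HTTP query API plus jq. URL and job name are illustrative assumptions.
PROMETHEUS=http://prometheus.example.org:9090

status=$(curl -s -G --max-time 30 \
             --data-urlencode 'query=up{job="alertmanager"}' \
             "$PROMETHEUS/api/v1/query" |
         jq -r '.data.result[0].value[1]')

# If curl or jq failed, or Prometheus says Alertmanager is down, complain
# (here just to stdout, so cron mails it to us).
if [ "$status" != "1" ]; then
    echo "Prometheus or Alertmanager may be down (got: ${status:-no answer})"
fi
```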
We don't currently have any checks to make sure that Alertmanager can actually send alerts successfully. I'm not sure how we'd craft those, because I'm not sure Alertmanager exposes the necessary metrics. Probably we should try to write some alerts in Prometheus and then have a cron job that queries Prometheus to see if the alerts are currently active.
(Alertmanager exposes a count of successful and failed deliveries for the various delivery methods, such as 'email', but you can't find out when the last successful or failed notification was for one, or whether specific receivers succeeded or failed in some or all of their notifications. There's also no metrics exposed for potential problems like 'template expansion failure', which can happen if you have an error somewhere in one of your templates. If the error is in a rarely used conditional portion of a template, you might not trip over it for a while.)
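If we do go down that road, the cron job side is straightforward, because Prometheus exposes its currently active alerts through the HTTP API at /api/v1/alerts. A minimal sketch, with a made-up alert name and URL:

```sh
# Is our hypothetical 'AlertmanagerCanary' alert currently known to
# Prometheus, and in what state? Empty output means it isn't active.
curl -s "http://prometheus.example.org:9090/api/v1/alerts" |
    jq -r '.data.alerts[] | select(.labels.alertname == "AlertmanagerCanary") | .state'
```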