Wandering Thoughts

2020-07-24

My varied types of Firefox windows

When I wrote about what I want out of my window manager, I mentioned that I have a plethora of Firefox windows, generally iconified to my desktop. This may sound like things are out of control (and in a way they are), but there is some method to my madness. I actually have a number of different sorts of Firefox windows.

In no particular order, I have Firefox windows for:

  • things that I'm actively working on or using. These are the windows that are most likely to be open, instead of iconified; I tend to iconify them only if I'm running out of space.

  • things that I'm relatively actively reading. I'm one of those people who gets distracted or just doesn't want to read one thing for too long (and sometimes I get interrupted), so I tend to have several things that I'm reading through at any given time.

  • things that I refresh and look at on a regular basis, either temporarily or on an ongoing basis. Now that I'm writing this, I've realized that I should shift some of these to my browser start page, because that's part of what it's there for.

  • things that I'm holding around as references for other things I'm doing. In theory these aren't permanent; in practice, sometimes the other thing falls down my priority list and its reference windows wind up sitting around for a long time.

    A related category is web pages I'm going to mention in email, a Wandering Thoughts entry, or something like that; here I have the web page still around as a way of both keeping its URL and reminding me of it.

  • things that I have aspirations of reading (or getting to) but in practice I'm not going to get to any time soon, or perhaps ever. This includes things that I've stopped being that interested in (but I can't admit it to myself and close the window), and things that I feel I should be interested in but, well, I'll read them later, someday.

Sometimes these windows have multiple tabs; this is especially common for references and things I'm actively working on.

I can't pile all of these different types of windows together in one clump (such as a bunch of tabs in a single window), even sorted by title, because I need to keep what type of window they are straight. Right now I keep track of that primarily based on where each window is iconified on my screen; some areas and some arrangements are for one purpose, other arrangements and areas are for others.

Some of these (the 'aspirations of reading' windows) could be dealt with better if I had a good way to archive a window, and to list and track my archived windows. This would probably take a Firefox addon; the ideal one would archive the entire window state (what Firefox currently saves in its session store, which is used to restore all your windows when you restart Firefox) and be able to completely return it to life, as if I'd never closed that window and all its tabs.

(Right now I have a little local HTML file where I sort of do this by hand. You can guess how often this happens, and it just has URLs and titles (and the date when I put them there), so it's less convenient than 'just give me the window back'.)
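
As an illustration of the session store angle, here's a rough Python sketch (not something I actually run) that pulls window and tab titles plus URLs out of Firefox's session store. It assumes the usual sessionstore-backups/recovery.jsonlz4 file in your profile and the 'mozLz40' LZ4 framing Firefox currently uses (both are implementation details Mozilla could change), and it needs the third-party lz4 package. It only gets you a listing, not the 'bring the whole window back' part, but it would at least automate my hand-maintained HTML file.

#!/usr/bin/env python3
# Rough sketch: list window and tab titles plus URLs from Firefox's session
# store. Assumes the recovery.jsonlz4 file and the 'mozLz40\0' + LZ4 block
# framing that Firefox currently uses; needs the third-party 'lz4' package.
import json
import sys

import lz4.block

def load_session(path):
    with open(path, "rb") as f:
        if f.read(8) != b"mozLz40\0":
            raise ValueError("not a mozLz4 session store file")
        return json.loads(lz4.block.decompress(f.read()))

def dump_windows(session):
    for wnum, window in enumerate(session.get("windows", []), 1):
        print("Window %d:" % wnum)
        for tab in window.get("tabs", []):
            entries = tab.get("entries", [])
            if not entries:
                continue
            # A tab's 'index' points at the history entry it is showing.
            idx = min(tab.get("index", len(entries)), len(entries)) - 1
            cur = entries[idx]
            print("  %s\n    %s" % (cur.get("title", "(no title)"), cur.get("url")))

if __name__ == "__main__":
    dump_windows(load_session(sys.argv[1]))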

Some method to group and then de-group specific Firefox windows on demand would also help, because then I could have a group for each sort of thing and put the windows I'm not actively looking at right now into the appropriate group. I'm not sure if I'd want this to be the same 'archive' mechanism as for things I don't expect to look at for some time, because that would probably put these other web pages a bit too far out of my mind. That probably means it's not something that should be done by Firefox but instead by my window manager somehow.

(It's quite possible that there are some good Firefox addons for dealing with this sort of thing. I haven't looked into the area very much, or even really thought about what might be possible to do inside Firefox.)

FirefoxMyVariedWindows written at 23:04:41

2020-07-07

I now think that blog 'per day' pages with articles are a mistake

Back in 2005 when I wrote DWiki, the engine that is used for Wandering Thoughts, there was an accepted standard structure for blogs that people followed, me included. For instance, it was accepted convention that the front page of your blog showed a number of the most recent articles, and you could page backward to older ones. Part of this structure was the idea that you would have a page for each day and that page would show the article or articles written that day (if any). When I put together DWiki's URL structure for blog-like areas, I followed this, and to this day Wandering Thoughts has these per-day pages.

I now think that these per-day pages are not the right thing to do on the modern web (for most blogs), for three reasons. The first reason is that they don't particularly help real blog usability, especially getting people to explore your blog after they land on a page. Most people make at most one post a day, so exploring day by day doesn't really get you anything more than 'next entry' and 'previous entry' links within an entry would (and if those links carry the destination's title, they probably give you more information than a date does).

The second reason is that because they duplicate content from your actual articles, they confuse search engine based navigation. Perhaps the search engine will know that the actual entry is the canonical version and present that in preference to the per-day page where the entry also appears, but perhaps not. And if you do have two entries in one day, putting both of their texts on one page risks disappointing someone who is searching for a combination of terms where one term appears only in one entry and the other term only in the second.

The third and weakest reason is a consequence of how on the modern web, everything gets visited. Per-day pages are additional pages in your blog and web crawlers will visit them, driving up your blog's resource consumption in the process. These days my feelings are that you generally want to minimize the number of pages in your blog, not maximize them, something I've written about more in The drawback of having a dynamic site with lots of URLs on today's web. But this is not a very strong reason, if you have a reasonably efficient blog and you serve per-day pages that don't have the full article text.

I can't drop per-day pages here on Wandering Thoughts, because I know that people have links to them and I want those links to keep working as much as possible. The simple thing to do is to stop putting full entries on per-day pages, and instead just put in their titles and links to them (just as I already do on per-month and per-year pages); this at least gets rid of the duplication of entry text and makes it far more likely that search engine based navigation will deliver people to the actual entry. The more elaborate thing would be to automatically serve an HTTP redirect to the entry for any per-day page that has only a single entry.

(For relatively obvious reasons you'd want to make this a temporary redirect.)
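
To make the redirect idea concrete, here's a rough Python sketch of the logic, not DWiki's actual code; it assumes the day's entries come from some hypothetical lookup as (title, url) pairs.

# A rough illustration, not DWiki's actual code: given a day's entries as
# (title, url) pairs from some hypothetical lookup, either redirect to the
# single entry or emit a plain titles-and-links listing.
import html

def day_page_response(entries):
    """Return (status, headers, body) for a per-day page."""
    if len(entries) == 1:
        # A temporary (302) redirect, so the per-day URL isn't permanently
        # bound to this entry if a second entry is ever added to the day.
        return ("302 Found", [("Location", entries[0][1])], b"")
    items = "\n".join('<li><a href="%s">%s</a></li>'
                      % (html.escape(url, quote=True), html.escape(title))
                      for title, url in entries)
    body = ("<ul>\n%s\n</ul>\n" % items).encode("utf-8")
    return ("200 OK", [("Content-Type", "text/html; charset=utf-8")], body)

The important part is just the branch on whether the day has exactly one entry; everything else is whatever the blog engine already does to emit a page or a redirect.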

There's a bit of me that's sad about this shift in blog design and web usage; the per-day, per-month, and per-year organization had a pleasant regularity and intuitive appeal. But I think its time has passed. More and more, we're all tending toward the kind of minimal URL structure typical of static sites, even when we have dynamic sites and so could have all the different URL structures and ways of accessing our pages that we could ask for.

BlogDroppingPerDayPages written at 00:09:25

2020-06-23

Sometimes it takes other people to show you some of your site's design flaws

Recently, I wrote an entry about people's efficiency expectations for generics in Go, and also wound up having a little discussion in the comments of that entry. The r/golang reddit linked to my entry, so I read the discussion there, and one thing I noticed was that one of the people commenting probably didn't realize that the entry and my comment on it had been written by the same person.

My first reaction was uncharitable, but then I put myself in that commentator's shoes and had a rather more humbling reaction. Looking at it from the outside, right now there's no particularly obvious sign in how I display comments here that the 'cks' who left a comment is in fact the author of the entry. There are contextual clues (for example 'cks' appears in several places around Wandering Thoughts, including the URL and my Fediverse link), but there's nothing that says it clearly. Even my name is not directly visible on the comment; I hide it behind an <abbr> element with a title, which is not obvious at the best of times and is probably completely invisible on mobile browsers, something I didn't know until yesterday.

(Because I'm likely to change how comments are displayed, right now the comment authorship for me looks like 'By cks at ...'. The 'cks' is the <abbr>, if it doesn't show in your browser.)

Obviously I should do something about this specific flaw in how DWiki (the wiki engine underlying this blog) displays comments written by myself, although I haven't decided exactly how it should look. But this is also a useful general lesson in how flaws in our own designs can linger until someone points them out, and also in how the flaws may not be pointed out in an obvious and explicit way. Any time you wind up thinking 'how could someone not see that?' about some aspect of your website, you should probably step back and make a serious attempt at figuring out why. There may be a good reason.

(This can be extended to more than websites. Over time, I've learned that when people miss something or misunderstand what I've written here, often I haven't quite written what I thought I did. I've assumed too much background, or I haven't written out what was obvious in my head, or I've cut some corners. It all looked good to me in reading it over before posting, because I knew what I was talking about, but other people don't. I've seen similar issues come up when I put together Grafana dashboards for our monitoring setup; I knew what they were saying and how to read them, but my co-workers didn't and so couldn't follow the morass.)

PeopleShowYouSiteFlaws written at 23:35:14

2020-06-22

Today I learned that HTML <abbr> may not do much on mobile browsers

For some time, I've been using HTML <abbr> elements with title attributes in my writing here on Wandering Thoughts. Sometimes I use it purely to provide a friendly expansion of abbreviations, like a TLS CA or MITM; sometimes the expansion acquires some additional commentary, such as the mention of <abbr> itself in this entry, and sometimes I use it for little asides. In a couple of contexts I use it to provide additional information; for example, any of my comments here (currently) say that they are 'by cks', where the <abbr> is used to add my name.

Today I had a reason to look at some of my pages that are using <abbr> in a mobile browser, specifically the iOS mobile browser. That was when I learned that iOS Safari doesn't render <abbr> in any visible way, which is fairly reasonable because there's no real way to interact with it; on desktops, an <abbr>'s title is shown when you hover the mouse over it, but on mobile there's no hover. This is a bit surprising because both MDN's <abbr> page and CanIUse currently say that it's fully supported on mobile browsers.

Once I started doing Internet searches, it became clear that this is a long-standing issue and unlikely to change (because of the hover problem). There are various workarounds with both CSS and JavaScript, but I'm not certain I like any of them, especially with how I've historically used <abbr> here; some of my <abbr> usage would look very out of place if displayed inline in some way. Given that a decent amount of browsing comes from mobile these days, this is likely going to cause me to rethink how I use <abbr> here on Wandering Thoughts and use it a lot less, if at all. Probably a lot more terms will wind up as actual links to explanations of them, which is not necessarily a bad change overall.
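
If I do go through with that, one small aid would be an inventory of where <abbr> currently shows up and what its titles say. Here's a quick sketch with Python's standard html.parser, run over saved copies of generated pages (whatever HTML files you give it on the command line); it's an illustration, not a tool I actually have.

# Sketch: list every <abbr> and its title in some saved HTML files, to get a
# feel for which uses would survive as plain links. Standard library only;
# give it the files to scan on the command line.
import sys
from html.parser import HTMLParser

class AbbrFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_abbr = False
        self.title = ""
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "abbr":
            self.in_abbr = True
            self.title = dict(attrs).get("title", "")
            self.text = []

    def handle_data(self, data):
        if self.in_abbr:
            self.text.append(data)

    def handle_endtag(self, tag):
        if tag == "abbr" and self.in_abbr:
            print("%r: title=%r" % ("".join(self.text).strip(), self.title))
            self.in_abbr = False

for fname in sys.argv[1:]:
    with open(fname, encoding="utf-8") as f:
        AbbrFinder().feed(f.read())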

This is a useful lesson to me that the web, and especially the mobile web, is an ongoing learning experience. Things that I think I know should be tested every so often, and I should look at my own sites in a mobile browser more often.

(As part of this, I should find out if there's a not too annoying and difficult way to look at and interact with my sites from an Android browser, despite not having any Android devices myself.)

HTMLAbbrAndMobileBrowsers written at 23:16:50

2020-05-22

Mixed feelings about Firefox Addons' new non-Recommended extensions warning

I don't look at addons on addons.mozilla.org very often, so I didn't know until now that Mozilla has started showing a warning on the page for many addons, such as Textern (currently), to the effect, well, let me just quote what I see now (more or less):

[! icon] This is not monitored for security through Mozilla's Recommended Extensions program. Make sure you trust it before installing.
Learn more

(Textern is among the Firefox addons that I use.)

This has apparently been going on since at least the start of March, per this report, or even further back (reddit), so I'm late to the party here.

On the one hand, I can see why Mozilla is doing this. Even in their more limited WebExtensions form, Firefox addons can do a great deal of damage to the security and privacy of the people who use them, and Mozilla doesn't have the people (or the interest) to audit them all or keep a close eye on what they're doing. Firefox addons aren't quite the prominent target that Chrome addons are, but things like the "Stylish" explosion demonstrate that people are happy to target Firefox too. What happened with Stylish also fairly convincingly demonstrates that requiring people to approve addon permissions isn't useful in practice, for various reasons.

On the other hand, this is inevitably going to lead to two bad outcomes. First, some number of people will be scared away from perfectly fine addons that simply aren't popular enough for Mozilla to bring them into the Recommended Extensions program. The second order consequence is that getting people to use a better version of an existing addon has implicitly gotten harder if the existing addon is a 'Recommended Extension'; yours may be better, but it also has a potentially scary warning on it.

(Arguably this is the correct outcome from a security perspective; yours may be better, but it's not necessarily enough better to make up for the increased risk of it not being more carefully watched.)

Second, some number of people will now be trained to ignore another security related warning because in practice it's useless noise to them. I think that this is especially likely if they've been directly steered to an addon by a recommendation or plug from somewhere else, and aren't just searching around on AMO. If you're searching on AMO for an addon that does X, the warning may steer you to one addon over another or sell you on the idea that the risk is too high. If you've come to AMO to install specific addon Y because it sounds interesting, well, the warning is mostly noise; it is a 'do you want to do this thing you want to do' question, except it's not even a question.

(And we know how those questions get answered; people almost always say 'yes I actually do want to do the thing I want to do'.)

Unfortunately I think this is a case where there is no good answer. Mozilla can't feasibly audit everything, they can't restrict AMO to only Recommended Extensions, and they likely feel that they can't just do nothing because of the harms to people who use Firefox Addons, especially people who don't already understand the risks that addons present.

FirefoxAddonsNewWarning written at 23:50:33

2020-05-13

The modern HTTPS world has no place for old web servers

When I ran into Firefox's interstitial warning for old TLS versions, it wasn't where I expected, and where it happened gave me some tangled feelings. I had expected to first run into this on some ancient appliance or IPMI web interface (both of which are famous for this sort of thing). Instead, it was on the website of an active person that had been mentioned in a recent comment here on Wandering Thoughts. On the one hand, this is a situation where they could have kept their web server up to date. On the other hand, this demonstrates (and brings home) that the modern HTTPS web actively requires you to keep your web server up to date in a way that the HTTP web didn't. In the era of HTTP, you could have set up a web server in 2000 and it could still be running today, working perfectly well (even if it didn't support the very latest shiny thing). This doesn't work for HTTPS, not today and not in the future.

In practice there are a lot of things that have to be maintained on an HTTPS server. First, you have to renew TLS certificates, or automate their renewal (in practice you've probably had to change how you get TLS certificates several times). Even with automated renewals, Let's Encrypt has changed their protocol once already, deprecating old clients and thus old configurations, and will probably do that again someday. And now you have to keep reasonably up to date with web server software, TLS libraries, and TLS configurations on an ongoing basis, because I doubt that the deprecation of everything before TLS 1.2 will be the last such deprecation.

I can't help but feel that there is something lost with this. The HTTPS web probably won't be a place where you can preserve old web servers, for example, the way the HTTP web is. Today, if you have working hardware, you could run an HTTP web server from an old SGI Irix workstation or even a DEC Ultrix machine, and every browser would probably be happy to speak HTTP 1.0 or the like to it, even though the server software probably hasn't been updated since the 1990s. That's not going to be possible on the HTTPS web, no matter how meticulously you maintain old environments.

Another, more relevant side of this is that it's not going to be possible for people with web servers to just let them sit. The more the HTTPS world changes and requires you to change, the more your HTTPS web server requires ongoing work. If you ignore it and skip that work, what happens to your website is the interstitial warning that I experienced, and eventually your site will stop being accepted by browsers at all. I expect that this is going to drive more people into the arms of large operations (like GitHub Pages or Cloudflare) that will look after all of that for them, and a little bit more of the indie 'anyone can do this' spirit of the old web will fade away.

(At the same time this is necessary to keep HTTPS secure, and HTTPS itself is necessary for the usual reasons. But let's not pretend that nothing is being lost in this shift.)

HTTPSNoOldServers written at 00:25:58

2020-04-25

Some notes on Firefox's interstitial warning for old TLS versions

Firefox, along with all the other browsers, is trying to move away from supporting older TLS versions, which means anything before TLS 1.2. In Firefox, the minimum acceptable TLS version is controlled by the about:config preference security.tls.version.min; in released versions of Firefox this is still '1' (for TLS 1.0), while in non-release versions it's '3' (for TLS 1.2). If you're using a non-release version and you visit some websites, you'll get a 'Secure Connection Failed' interstitial warning that's clear enough if you're a technical person.

The bottom of the warning text says:

This website might not support the TLS 1.2 protocol, which is the minimum version supported by Firefox. Enabling TLS 1.0 and TLS 1.1 might allow this connection to succeed.

TLS 1.0 and TLS 1.1 will be permanently disabled in a future release.

It then offers you a big blue 'Enable TLS 1.0 and 1.1' button. If you pick this, you're not enabling TLS 1.0 and 1.1 on a one-time basis or just for the specific website (the way you are with 'accept this certificate' overrides); you're permanently enabling it in Firefox preferences. Specifically, you're setting the security.tls.version.enable-deprecated preference to 'true' (from the default 'false').

As far as I've been able to see, the state of this '(permanently) enable deprecated TLS versions' setting is not exposed in the Preferences GUI, making it invisible unless you know the trick (and even know to look). Perhaps when Mozilla raises the normal minimum TLS version in a Firefox release, they will expose something in Preferences (or perhaps they'll change to doing something with per-site overrides, as they do for TLS certificates). In the meantime, if you want to find out about websites using older TLS versions through your normal browsing, you'll need to remember to reset this preference every time you need to use that big blue button to get a site to work.

(You might be doing this in Nightly or Beta, although probably you should avoid Nightly, or you might be doing this in a released version where you've changed security.tls.version.min yourself.)
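
If you'd rather check a site from outside the browser instead of toggling the preference back and forth, a quick probe with Python's ssl module will tell you whether a site can manage TLS 1.2 or better. This is a minimal sketch, assuming Python 3.7+ (for ssl.TLSVersion); note that certificate problems will also show up as failures here, not just old TLS versions.

# Minimal sketch: check whether an HTTPS site can do TLS 1.2 or better, which
# is roughly what a current Firefox demands by default. Assumes Python 3.7+
# for ssl.TLSVersion; certificate problems also show up as failures here.
import socket
import ssl
import sys

def supports_tls12(host, port=443, timeout=10):
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return True, tls.version()
    except (ssl.SSLError, OSError) as exc:
        return False, str(exc)

if __name__ == "__main__":
    ok, detail = supports_tls12(sys.argv[1])
    print("ok:" if ok else "failed:", detail)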

FirefoxOldTLSWarning written at 00:05:20

2020-04-12

The appeal of doing exact string comparisons with Apache's RewriteCond

I use Apache's RewriteCond a fair bit under various circumstances, especially here on Wandering Thoughts where I use it in .htaccess to block undesirable things (cf). The default RewriteCond action is to perform a regular expression match, and generally this is what I want; for instance, many web spiders have user agents that include their version number, and that number changes over time. However, recently I was reminded of the power and utility of doing exact string matches in some circumstances.

Suppose, not hypothetically, that you have some bad web spiders that crawl your site with a constant bogus HTTP Referer of:

http://www.google.co.uk/url?sa=t&source=web&cd=1

Or another web spider might crawl with an unusual and fixed user-agent of:

Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36

I could use regular expressions to match and block these, but that's at least annoying because both of these strings have various special regular expression characters that I'd have to carefully escape. So instead we can use RewriteCond's '=' option to do an exact string comparison. The one slightly tricky bit is that you want to enclose the entire thing in "'s, that is:

RewriteCond %{HTTP_REFERER} "=http://www.google.co.uk/url?sa=t&source=web&cd=1" [NC]

(The '[NC]' is perhaps overkill, especially as the spider probably never varies the case. But it's a reflex.)

As you can see, instances of '=' in the string don't have to be escaped. If the string I wanted to match (exactly) on had quotes in it, I'd have to look up how to escape them in Apache.

Now that I've looked up this RewriteCond option and gotten it working for me, I'm probably going to make more use of it. Various bad web spiders (and other software) have pretty consistent and unique signatures in various headers, and matching on those generally beats playing whack-a-mole with their IP address ranges.

(This probably isn't very useful outside of blocking bad people, although I suppose it could be used to rewrite only certain exact URLs while allowing others to fall through, or the reverse.)

ApacheRewriteCondExactMatch written at 22:40:50

2020-03-16

How Firefox could support automatically using local DNS over HTTPS servers

On the surface, one of the challenges for Firefox automatically using different DNS over HTTPS servers is that Firefox considers your ISP to be a threat. This means that Firefox doesn't want to just use your local DNS over HTTPS server any more than it wants to just use your normal local DNS server. Firefox's use of DNS over HTTPS is explicitly to avoid surveillance from various parties, including the local network, so to do this it needs to go straight to a trusted (public) DNS over HTTPS server.

But there is a leak in this security model, in the form of Firefox's canary domain for disabling its automatic DNS over HTTPS. Any local network can already tell Firefox to disable DNS over HTTPS, defeating this anti-snooping measure. This is necessary because Firefox can't reliably detect when DNS over HTTPS to a public DNS server won't work properly for the local network, so networks with special name resolution setups need some way to signal this to Firefox.

(As a practical matter, Firefox not supporting a way to disable its automatic DNS over HTTPS to public DNS servers would result in a certain amount of the remaining Firefox users dropping it, because it didn't work reliably in their network. So Mozilla's hand is forced on this, even though it allows ISPs to step in and snoop on people again.)
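
From the network's side, the canary mechanism is just an ordinary DNS lookup that Firefox makes. As a sketch of what that looks like (assuming Mozilla's canary domain is still use-application-dns.net), you can ask your local resolver about it with a few lines of Python; a failed lookup is the 'disable automatic DoH' signal.

# Sketch: see what the local resolver says about Firefox's DoH canary domain,
# assuming it's still use-application-dns.net. A network that wants Firefox
# to turn off automatic DoH answers NXDOMAIN (or no addresses) for it, which
# shows up here as a failed lookup.
import socket

CANARY = "use-application-dns.net"

def canary_status():
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(CANARY, None)}
    except socket.gaierror:
        return "resolver signals: disable automatic DoH"
    return "canary resolves (%s): automatic DoH allowed" % ", ".join(sorted(addrs))

if __name__ == "__main__":
    print(canary_status())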

Since Firefox already supports disabling automatic DNS over HTTPS entirely through a network doing magic tricks with the canary domain, it could also support a way of using the canary domain to signal that Firefox should use a local DNS over HTTPS server. This is no worse than turning off DoH entirely (in both cases your DNS queries are going to the network operator), and has some advantages such as potentially enabling encrypted SNI.

(Firefox's threat model might say that it can't enable ESNI with an untrusted local DNS over HTTPS server that was picked up automatically.)

FirefoxLocalDNSOverHTTPS written at 00:53:28

2020-03-13

Sensible heuristics for when to use DNS over HTTPS can't work for us

If Firefox is using DNS over HTTPS in general, it has various heuristics for whether or not to use it to resolve any particular name; for instance, right now it doesn't use DNS over HTTPS for any domain in your DNS suffixes (this happens even if you explicitly turned on DNS over HTTPS, which disables checking for the canary domain). Presumably other browsers will also have their own set of heuristics when they implement DNS over HTTPS, and at some point the set of heuristics that various browsers use may even be documented.

(This DNS over HTTPS bypass is intended to deal with two cases: where the name you're looking up doesn't exist in public DNS, and where the name resolves to a different IP address internally than it does in public DNS.)

Almost six months ago I wrote a cautiously optimistic entry about Firefox, DNS over HTTPS, and us, where I hoped that Firefox's heuristics would be able to deal with our split horizon DNS setup where some names resolve to different IPs internally than they do externally. Unfortunately, I've changed my mind (based on experience and experimentation since then); I now believe that no sensible set of heuristics can cover all of our cases, and so anyone using DNS over HTTPS (with external resolvers) will sooner or later be unable to connect to some of the websites run by people in the department.

The fundamental issue that sinks the entire thing here is that people sometimes want to host websites on their machines here but give them names that aren't under our university domains (for various good reasons, such as a regular yearly conference that just happens to be hosted here this time). We do know what these domains are, because we have to set up the split DNS for them, but it's definitely not appropriate to add them to our DNS suffixes, and they have different public DNS servers than our internal resolvers.

(In some cases they have public DNS servers that aren't even associated with us, and we carefully shadow bits of their domain internally to make it work. We prefer to host their DNS, though.)

I can't think of any sensible heuristic that could detect this situation, especially if you don't want to leak information about the person's DNS lookups to the local resolver. You could detect connection failures and try a non DNS over HTTPS name lookup, but that leaks data in various circumstances and even if it works there's a long connection delay the first time around.
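
To show how the disagreement looks from outside a browser, here's a rough Python sketch that compares what the local resolver says about a name with what a public DNS over HTTPS resolver says; it uses Google Public DNS's JSON API as one example endpoint and the third-party requests package. Names where the two answers differ are the ones that DNS over HTTPS users would resolve 'wrong'.

# Rough sketch: compare the local resolver's answer for a name against a
# public DNS over HTTPS resolver's answer, to spot split-horizon names.
# Uses Google Public DNS's JSON API as one example endpoint and the
# third-party 'requests' package; any public DoH JSON interface would do.
import socket
import sys

import requests

def local_addrs(name):
    try:
        return {ai[4][0] for ai in socket.getaddrinfo(name, None, socket.AF_INET)}
    except socket.gaierror:
        return set()

def public_addrs(name):
    resp = requests.get("https://dns.google/resolve",
                        params={"name": name, "type": "A"}, timeout=10)
    resp.raise_for_status()
    # In the JSON answer, type 1 records are A records.
    return {a["data"] for a in resp.json().get("Answer", []) if a.get("type") == 1}

if __name__ == "__main__":
    name = sys.argv[1]
    here, there = local_addrs(name), public_addrs(name)
    if here != there:
        print("%s: split horizon (local %s vs public %s)"
              % (name, sorted(here), sorted(there)))
    else:
        print("%s: local and public answers agree" % name)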

So I think we're going to always have the 'disable DNS over HTTPS' canary domain signal in our local DNS servers, and we'll hope that someday this signal is respected even for Firefox users who have explicitly turned on DNS over HTTPS (unless they explicitly picked a new option of 'always use DoH even if the local network signals that it shouldn't be used and might give you broken results'). This makes me a little bit sad; I'd like heuristics that magically worked, so we could let people use DNS over HTTPS and hide things from us that we don't want to know anyway.

DNSOverHTTPSHeuristicsAndUs written at 01:22:57


