Wandering Thoughts archives

2020-06-23

Sometimes it takes other people to show you some of your site's design flaws

Recently, I wrote an entry about people's efficiency expectations for generics in Go, and also wound up having a little discussion in the comments of that entry. The r/golang reddit linked to my entry, so I read it, and one thing I noticed was that one of the people commenting probably didn't realize that the entry and my comment on it had been written by the same person.

My first reaction was uncharitable, but then I put myself in that commentator's shoes and had a rather more humbling reaction. Looking at it from the outside, right now there's no particularly obvious sign in how I display comments here that the 'cks' who left a comment is in fact the author of the entry. There are contextual clues (for example 'cks' appears in several places around Wandering Thoughts, including the URL and my Fediverse link), but there's nothing that says it clearly. Even my name is not directly visible on the comment; I hide it behind an <abbr> element with a title, which is not obvious at the best of times and is probably completely invisible on mobile browsers, something I didn't know until yesterday.

(Because I'm likely to change how comments are displayed, right now the comment authorship for me looks like 'By cks at ...'. The 'cks' is the <abbr>, if it doesn't show in your browser.)
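To make the problem concrete, the markup involved looks roughly like this (a hand-written sketch rather than DWiki's literal output; the class name and the exact title text are stand-ins):

```html
<!-- Sketch of how an entry author's comment is attributed today.
     The expansion lives only in the title attribute, which desktop
     browsers reveal on hover and mobile browsers may never show. -->
<div class="comment">
  By <abbr title="Chris Siebenmann">cks</abbr> at 2020-06-23 23:35:14:
  ...
</div>
```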

Obviously I should do something about this specific flaw in how DWiki (the wiki engine underlying this blog) displays comments written by myself, although I haven't decided exactly how it should look. But this is also a useful general lesson in how flaws in our own designs can linger until someone points them out, and also on how the flaws may not be pointed out in an obvious and explicit way. Any time you wind up thinking 'how could someone not see that?' about some aspect of your website, you should probably step back and take a serious attempt at figuring out why. There may be a good reason.

(This can be extended to more than websites. Over time, I've learned that when people miss something or misunderstand what I've written here, often I haven't quite written what I thought I did. I've assumed too much background, or I haven't written out what was obvious in my head, or I've cut some corners. It all looked good to me in reading it over before posting, because I knew what I was talking about, but other people don't. I've seen similar issues come up when I put together Grafana dashboards for our monitoring setup; I knew what they were saying and how to read them, but my co-workers didn't and so couldn't follow the morass.)

PeopleShowYouSiteFlaws written at 23:35:14

2020-06-22

Today I learned that HTML <abbr> may not do much on mobile browsers

For some time, I've been using HTML <abbr> elements with title attributes in my writing here on Wandering Thoughts. Sometimes I use it purely to provide a friendly expansion of abbreviations, like a TLS CA or MITM; sometimes the expansion acquires some additional commentary, such as the mention of <abbr> itself in this entry, and sometimes I use it for little asides. In a couple of contexts I use it to provide additional information; for example, any of my comments here (currently) say that they are 'by cks', where the <abbr> is used to add my name.

Today I had a reason to look at some of my pages that are using <abbr> in a mobile browser, specifically the iOS mobile browser. That was when I learned that iOS Safari doesn't render <abbr> in any visible way, which is fairly reasonable because there's no real way to interact with it; on desktops, an <abbr>'s title is shown when you hover the mouse over it, but on mobile there's no hover. This is a bit surprising because both MDN's <abbr> page and CanIUse currently say that it's fully supported on mobile browsers.

Once I started doing Internet searches, it became clear that this is a long-standing issue and unlikely to change (because of the hover problem). There are various workarounds with both CSS and JavaScript, but I'm not certain I like any of them, especially with how I've historically used <abbr> here; some of my <abbr> usage would look very out of place if displayed inline in some way. Given that a decent amount of browsing comes from mobile these days, this is likely going to cause me to rethink how I use <abbr> here on Wandering Thoughts, and probably to use it a lot less, if at all. Probably a lot more terms will wind up as actual links to explanations of them, which is not necessarily a bad change overall.
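For the record, one of the CSS workarounds looks roughly like this sketch, which shows an <abbr>'s title inline only on devices that can't hover:

```css
/* On devices with no hover (most mobile browsers), append an
   <abbr>'s title in parentheses after the abbreviated text. */
@media (hover: none) {
  abbr[title]::after {
    content: " (" attr(title) ")";
  }
}
```

Since some of my <abbr> usage would look quite strange rendered this way, I'm not sold on it.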

This is a useful lesson to me that the web, and especially the mobile web, is an ongoing learning experience. Things that I think I know should be tested every so often, and I should look at my own sites in a mobile browser more often.

(As part of this, I should find out if there's a not too annoying and difficult way to look at and interact with my sites from an Android browser, despite not having any Android devices myself.)

HTMLAbbrAndMobileBrowsers written at 23:16:50

2020-05-22

Mixed feelings about Firefox Addons' new non-Recommended extensions warning

I don't look at addons on addons.mozilla.org very often, so I didn't know until now that Mozilla has started showing a warning on the page for many addons, such as Textern (currently), to the effect, well, let me just quote what I see now (more or less):

[! icon] This is not monitored for security through Mozilla's Recommended Extensions program. Make sure you trust it before installing.
Learn more

(Textern is among the Firefox addons that I use.)

This has apparently been going on since at least the start of March, per this report, or even further back (reddit), so I'm late to the party here.

On the one hand, I can see why Mozilla is doing this. Even in their more limited WebExtensions form, Firefox addons can do a great deal of damage to the security and privacy of the people who use them, and Mozilla doesn't have the people (or the interest) to audit them all or keep a close eye on what they're doing. Firefox addons aren't quite the prominent target that Chrome addons are, but things like the "Stylish" explosion demonstrate that people are happy to target Firefox too. What happened with Stylish also fairly convincingly demonstrates that requiring people to approve addon permissions isn't useful in practice, for various reasons.

On the other hand, this is inevitably going to lead to two bad outcomes. First, some number of people will be scared away from perfectly fine addons that simply aren't popular enough for Mozilla to bring them into the Recommended Extensions program. The second-order consequence is that getting people to use a better version of an existing addon has implicitly gotten harder if the existing addon is a 'Recommended Extension'; yours may be better, but it also has a potentially scary warning on it.

(Arguably this is the correct outcome from a security perspective; yours may be better, but it's not necessarily enough better to make up for the increased risk of it not being more carefully watched.)

Second, some number of people will now be trained to ignore another security related warning because in practice it's useless noise to them. I think that this is especially likely if they've been directly steered to an addon by a recommendation or plug from somewhere else, and aren't just searching around on AMO. If you're searching on AMO for an addon that does X, the warning may steer you to one addon over another or sell you on the idea that the risk is too high. If you've come to AMO to install specific addon Y because it sounds interesting, well, the warning is mostly noise; it is a 'do you want to do this thing you want to do' question, except it's not even a question.

(And we know how those questions get answered; people almost always say 'yes I actually do want to do the thing I want to do'.)

Unfortunately I think this is a case where there is no good answer. Mozilla can't feasibly audit everything, they can't restrict AMO to only Recommended Extensions, and they likely feel that they can't just do nothing because of the harms to people who use Firefox Addons, especially people who don't already understand the risks that addons present.

FirefoxAddonsNewWarning written at 23:50:33

2020-05-13

The modern HTTPS world has no place for old web servers

When I ran into Firefox's interstitial warning for old TLS versions, it wasn't where I expected, and where it happened gave me some tangled feelings. I had expected to first run into this on some ancient appliance or IPMI web interface (both of which are famous for this sort of thing). Instead, it was on the website of an active person that had been mentioned in a recent comment here on Wandering Thoughts. On the one hand, this is a situation where they could have kept their web server up to date. On the other hand, this demonstrates (and brings home) that the modern HTTPS web actively requires you to keep your web server up to date in a way that the HTTP web didn't. In the era of HTTP, you could have set up a web server in 2000 and it could still be running today, working perfectly well (even if it didn't support the very latest shiny thing). This doesn't work for HTTPS, not today and not in the future.

In practice there are a lot of things that have to be maintained on a HTTPS server. First, you have to renew TLS certificates, or automate it (in practice you've probably had to change how you get TLS certificates several times). Even with automated renewals, Let's Encrypt has changed their protocol once already, deprecating old clients and thus old configurations, and will probably do that again someday. And now you have to keep reasonably up to date with web server software, TLS libraries, and TLS configurations on an ongoing basis, because I doubt that the deprecation of everything before TLS 1.2 will be the last such deprecation.

I can't help but feel that there is something lost with this. The HTTPS web probably won't be a place where you can preserve old web servers, for example, the way the HTTP web is. Today if you have operating hardware you could run a HTTP web server from an old SGI Irix workstation or even a DEC Ultrix machine, and every browser would probably be happy to speak HTTP 1.0 or the like to it, even though the server software probably hasn't been updated since the 1990s. That's not going to be possible on the HTTPS web, no matter how meticulously you maintain old environments.

Another, more relevant side of this is that it's not going to be possible for people with web servers to just let them sit. The more the HTTPS world changes and requires you to change, the more your HTTPS web server requires ongoing work. If you ignore it and skip that work, what happens to your website is the interstitial warning that I experienced and eventually it will stop being accepted by browsers at all. I expect that this is going to drive more people into the arms of large operations (like Github Pages or Cloudflare) that will look after all of that for them, and a little bit more of the indie 'anyone can do this' spirit of the old web will fade away.

(At the same time this is necessary to keep HTTPS secure, and HTTPS itself is necessary for the usual reasons. But let's not pretend that nothing is being lost in this shift.)

HTTPSNoOldServers written at 00:25:58

2020-04-25

Some notes on Firefox's interstitial warning for old TLS versions

Firefox, along with all other browsers, is trying to move away from supporting older TLS versions, which means anything before TLS 1.2. In Firefox, the minimum acceptable TLS version is controlled by the about:config preference security.tls.version.min; in released versions of Firefox this is still '1' (for TLS 1.0), while in non-release versions it's '3' (for TLS 1.2). If you're using a non-release version and you visit some websites, you'll get a 'Secure Connection Failed' interstitial warning that's clear enough if you're a technical person.

The bottom of the warning text says:

This website might not support the TLS 1.2 protocol, which is the minimum version supported by Firefox. Enabling TLS 1.0 and TLS 1.1 might allow this connection to succeed.

TLS 1.0 and TLS 1.1 will be permanently disabled in a future release.

It then offers you a big blue 'Enable TLS 1.0 and 1.1' button. If you pick this, you're not enabling TLS 1.0 and 1.1 on a one-time basis or just for the specific website (the way you are with 'accept this certificate' overrides); you're permanently enabling it in Firefox preferences. Specifically, you're setting the security.tls.version.enable-deprecated preference to 'true' (from the default 'false').

As far as I've been able to see, the state of this '(permanently) enable deprecated TLS versions' setting is not exposed in the Preferences GUI, making its state invisible unless you know the trick (and even know to look). Perhaps when Mozilla raises the normal minimum TLS version in a Firefox release, they will expose something in Preferences (or perhaps they'll change to do something with per-site overrides, as they do for TLS certificates). In the mean time, if you want to find out about websites using older TLS versions through your normal browsing, you'll need to remember to reset this preference every time you need to use that big blue button to get a site to work.

(You might be doing this in Nightly or Beta, although probably you should avoid Nightly, or you might be doing this in a released version where you've changed security.tls.version.min yourself.)

FirefoxOldTLSWarning written at 00:05:20

2020-04-12

The appeal of doing exact string comparisons with Apache's RewriteCond

I use Apache's RewriteCond a fair bit under various circumstances, especially here on Wandering Thoughts where I use it in .htaccess to block undesirable things (cf). The default RewriteCond action is to perform a regular expression match, and generally this is what I want; for instance, many web spiders have user agents that include their version number, and that number changes over time. However, recently I was reminded of the power and utility of doing exact string matches in some circumstances.

Suppose, not hypothetically, that you have some bad web spiders that crawl your site with a constant bogus HTTP Referer of:

http://www.google.co.uk/url?sa=t&source=web&cd=1

Or another web spider might crawl with an unusual and fixed user-agent of:

Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36

I could use regular expressions to match and block these, but that's at least annoying because both of these strings have various special regular expression characters that I'd have to carefully escape. So instead we can use RewriteCond's '=' option to do an exact string comparison. The one slightly tricky bit is that you want to enclose the entire thing in "'s, that is:

RewriteCond %{HTTP_REFERER} "=http://www.google.co.uk/url?sa=t&source=web&cd=1" [NC]

(The '[NC]' is perhaps overkill, especially as the spider probably never varies the case. But it's a reflex.)

As you can see, instances of '=' in the string don't have to be escaped. If the string I wanted to match (exactly) on had quotes in it, I'd have to look up how to escape them in Apache.
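To illustrate the escaping annoyance outside of Apache, here's a quick Python sketch (Python's re module rather than Apache's PCRE, but the special characters at issue are the same):

```python
import re

referer = "http://www.google.co.uk/url?sa=t&source=web&cd=1"

# Used as a regular expression, the '?' turns 'url' into 'ur' plus an
# optional 'l', so the pattern fails to match its own literal text.
assert re.fullmatch(referer, referer) is None

# Escaping every special character restores a literal match; this is
# the moral equivalent of RewriteCond's '=' exact string comparison.
assert re.fullmatch(re.escape(referer), referer) is not None
```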

Now that I've looked up this RewriteCond option and gotten it working for me, I'm probably going to make more use of it. Various bad web spiders (and other software) have pretty consistent and unique signatures in various headers, and matching on those generally beats playing whack-a-mole with their IP address ranges.
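Put together, a fuller .htaccess sketch that blocks on both of the exact matches from earlier might look like this (the [OR] chains the two conditions, and the RewriteRule returns a 403 for anything that matched):

```apache
RewriteEngine On
# Spider with the constant bogus Referer (exact string match).
RewriteCond %{HTTP_REFERER} "=http://www.google.co.uk/url?sa=t&source=web&cd=1" [NC,OR]
# Spider with the fixed, unusual user-agent (exact string match).
RewriteCond %{HTTP_USER_AGENT} "=Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36"
# Refuse anything that matched with a 403 Forbidden.
RewriteRule ^ - [F]
```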

(This probably isn't very useful outside of blocking bad people, although I suppose it could be used to rewrite only certain exact URLs while allowing others to fall through, or the reverse.)

ApacheRewriteCondExactMatch written at 22:40:50

2020-03-16

How Firefox could support automatically using local DNS over HTTPS servers

On the surface, one of the challenges for Firefox automatically using different DNS over HTTPS servers is that Firefox considers your ISP to be a threat. This means that Firefox doesn't want to just use your local DNS over HTTPS server any more than it wants to just use your normal local DNS server. Firefox's use of DNS over HTTPS is explicitly to avoid surveillance from various parties, including the local network, so to do this it needs to go straight to a trusted (public) DNS over HTTPS server.

But there is a leak in this security model, in the form of Firefox's canary domain for disabling its automatic DNS over HTTPS. Any local network can already tell Firefox to disable DNS over HTTPS, defeating this anti-snooping measure. This is necessary because Firefox can't reliably detect when DNS over HTTPS to a public DNS server won't work properly for the local network, so networks with special name resolution setups need some way to signal this to Firefox.

(As a practical matter, Firefox not supporting a way to disable its automatic DNS over HTTPS to public DNS servers would result in a certain amount of the remaining Firefox users dropping it, because it didn't work reliably in their network. So Mozilla's hand is forced on this, even though it allows ISPs to step in and snoop on people again.)

Since Firefox already supports disabling automatic DNS over HTTPS entirely through a network doing magic tricks with the canary domain, it could also support a way of using the canary domain to signal that Firefox should use a local DNS over HTTPS server. This is no worse than turning off DoH entirely (in both cases your DNS queries are going to the network operator), and has some advantages such as potentially enabling encrypted SNI.

(Firefox's threat model might say that it can't enable ESNI with an untrusted local DNS over HTTPS server that was picked up automatically.)

FirefoxLocalDNSOverHTTPS written at 00:53:28

2020-03-13

Sensible heuristics for when to use DNS over HTTPS can't work for us

If Firefox is using DNS over HTTPS in general, it has various heuristics for whether or not to use it to resolve any particular name; for instance, right now it doesn't use DNS over HTTPS for any domain in your DNS suffixes (this happens even if you explicitly turned on DNS over HTTPS, which disables checking for the canary domain). Presumably other browsers will also have their own set of heuristics when they implement DNS over HTTPS, and at some point the set of heuristics that various browsers use may even be documented.

(This DNS over HTTPS bypass is intended to deal with two cases; where the name you're looking up doesn't exist in public DNS, and where the name has a different IP address.)

Almost six months ago I wrote a cautiously optimistic entry about Firefox, DNS over HTTPS, and us, where I hoped that Firefox's heuristics would be able to deal with our split horizon DNS setup where some names resolve to different IPs internally than they do externally. Unfortunately, I've changed my mind (based on experience and experimentation since then); I now believe that no sensible set of heuristics can cover all of our cases, and so anyone using DNS over HTTPS (with external resolvers) will sooner or later be unable to connect to some of the websites run by people in the department.

The fundamental issue that sinks the entire thing here is that people sometimes want to host websites on their machines here but give them names not under our university domains (for various good reasons, such as a regular yearly conference that just happens to be hosted here this time). We do know what these domains are, because we have to set up the split DNS for them, but it's definitely not appropriate to add them to our DNS suffixes and they have different public DNS servers than our internal resolvers.

(In some cases they have public DNS servers that aren't even associated with us, and we carefully shadow bits of their domain internally to make it work. We prefer to host their DNS, though.)

I can't think of any sensible heuristic that could detect this situation, especially if you don't want to leak information about the person's DNS lookups to the local resolver. You could detect connection failures and try a non DNS over HTTPS name lookup, but that leaks data in various circumstances and even if it works there's a long connection delay the first time around.

So I think we're going to always have the 'disable DNS over HTTPS' canary domain signal in our local DNS servers, and we'll hope that someday this signal is respected even for Firefox users who have explicitly turned on DNS over HTTPS (unless they explicitly picked a new option of 'always use DoH even if the local network signals that it shouldn't be used and might give you broken results'). This makes me a little bit sad; I'd like heuristics that magically worked, so we could let people use DNS over HTTPS and hide things from us that we don't want to know anyway.

DNSOverHTTPSHeuristicsAndUs written at 01:22:57

2020-03-11

Some notes on the state of DNS over HTTPS in Firefox (as of March 2020)

Recently, we decided to add the magic marker that's used to explicitly disable DNS over HTTPS to our local DNS resolvers as a precaution against various things. Being sensible people, we then attempted to verify that we'd gotten it right, by explicitly enabling DNS over HTTPS in a sysadmin's test Firefox and then trying things with and without the canary domain. This failed and left us very puzzled, and it was only through a lucky bit of happenstance that we kind of discovered what seems to be going on (although what's going on is documented if you pay attention). So here are some notes.
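As a concrete example of what the 'magic marker' looks like, in Unbound it's a local-zone entry for the canary domain (a sketch; other resolvers have their own syntax for the same thing):

```
server:
    # Return NXDOMAIN for Firefox's DoH canary domain, which signals
    # that automatic DNS over HTTPS should be disabled on this network.
    local-zone: "use-application-dns.net" always_nxdomain
```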

First and most importantly, the canary domain is ignored if you've explicitly enabled DNS over HTTPS. We found this out from a tweet by Jan Schaumann (via), who filed a Mozilla bug over this behavior. The result of the bug was to cause Mozilla to update their documentation to mention this, both here (in a bit that's easy to miss in passing) and here (where it's made more obvious). The Mozilla bug (bug #1614751) contains a way of manipulating about:config settings to pretend that Firefox enrolled you in DoH so that you can test that you properly added the canary domain to your DNS resolver, in comment #1.

(It's possible that Mozilla will someday be persuaded to disable DoH when the canary domain is present no matter what the user asked for. In the mean time, people who have explicitly turned on DoH won't be able to connect to some of the web servers that we host due to our split horizon DNS.)

To make your life more confusing when you're testing, Firefox never uses DNS over HTTPS for domains in your DNS suffix list (which can come from DHCP or be explicitly configured, for example in /etc/resolv.conf on Unix systems). This can mean that either you need to manipulate your host settings to scrub out your usual DNS suffix list or you need a split horizon hostname that is not under them. Fortunately we were able to find some eventually, which allowed us to see that Firefox was still looking them up with DoH despite the canary domain being theoretically configured.

While testing Firefox, you can look at the state of its DNS stuff in about:networking. The 'DNS' tab will show you what Firefox thinks are your DNS suffixes and what names it has recently resolved, with or without DNS over HTTPS (the 'TRR' column, which is true if DoH was used). You can also directly do address lookups with the 'DNS Lookup' tab; addresses looked up here show up in the 'DNS' tab, so you can see if they were resolved with DNS over HTTPS (if the IP address isn't a sufficient sign).

I believe that Mozilla no longer documents any specific claims about what Firefox will do to detect names and situations where DNS over HTTPS doesn't work. Empirically, using an internal top level domain (we use '.sandbox') appears to result in Firefox not using DoH for the lookup, but I don't know if this happens because Firefox knows that this TLD doesn't exist or because it does a DoH lookup, fails to find the name, and retries through the local resolver.

(I can think of ways to find out, but they require more work than I want to bother with and anyway, Mozilla is likely to change all of these behaviors over time.)

FirefoxDNSOverHTTPSNotes written at 02:03:14

2020-03-09

Logging out of HTTP Basic Authentication in Firefox

We make significant use of HTTP Basic Authentication in our local web servers, because if you use Apache it's a nice simple way of putting arbitrary things behind password protection. It's not the most user-friendly thing these days and it's probably not what you want if you also need to handle things like user registration and password resets, but in our environment all of those are handled separately. However, it does have one little drawback, which is logging out.

Normal user web authentication schemes are pretty much all implemented with browser cookies and often a backend session database. This means that 'logging out' is a matter of removing the cookies, marking the session as logged out, or both at once. Logging you out is a straightforward and unobtrusive thing for a website to do (and it can even do so passively), and even if a website doesn't support logging out you can do it yourself by scrubbing the site's cookies (and Firefox is making it increasingly easy to do that).

There is no equivalent of this for HTTP Basic Authentication. The browser magically remembers the authentication information for as long as it's running, there's no way for the website to gracefully signal to the browser that the information should be discarded, and Firefox doesn't expose any convenient controls for it to the user (Firefox doesn't even seem to consider HTTP Basic Authentication to be 'cookies and site data' that it will let you clear). Traditionally the only way to 'log out' from HTTP Basic Authentication was to quit out of your entire browser session, which is a bit obtrusive in these days of tons of windows, tabs, and established sessions with other websites.

Recently I learned that you can do better, although it's a bit obtrusive and not particularly user-friendly. The magic trick is that you can overwrite Firefox's remembered HTTP Basic Authentication user and password with a new, invalid pair by using a URL with embedded credentials. If you're currently authenticated to https://example.com/app, you can destroy that and effectively log out by trying to access 'https://bad:bad@example.com/app'. The drawback is that you'll get an authentication challenge popup that you have to dismiss.

(Chrome apparently no longer supports specifying credentials in URLs this way, so this trick won't work in it. Hopefully Firefox is not going to go the same way, at least not before it adds some sort of UI to let you discard HTTP Basic Authentication credentials yourself. MDN does describe this as deprecated in the MDN page on HTTP authentication, so it may be going away someday even in Firefox.)

You can definitely enter such an URL by hand (or modify the existing URL of the page in the URL bar to insert the 'bad:bad@' credentials bit) and it works. I believe that Firefox will still support links on web pages that have credentials embedded in them, so you could put a 'log out by trying to use this' link on your index page, but I haven't tested it. You'd only want to do this on websites aimed at technical users, because following such a link will provoke a HTTP Basic Authentication challenge that you have to cancel out of.
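Such a link would be a sketch along these lines, reusing the hypothetical protected area from earlier ('bad:bad' is an arbitrary invalid user and password pair):

```html
<!-- Following this overwrites the browser's remembered Basic Auth
     credentials with invalid ones; the user must then cancel the
     resulting authentication popup. -->
<a href="https://bad:bad@example.com/app">Log out (cancel the popup)</a>
```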

PS: You can apparently clear this information through 'Clear Recent History' by invoking that and then carefully selecting only 'Active Logins'. Since clearing any degree of history is an alarming thing for me and a mistake could be very bad (I keep my browser history forever), I'm not fond of this interface and don't expect to ever use it. People who are less attached to their browser history than I am (and so are more tolerant of slips than I am) may like it more.

FirefoxLogoutBasicAuth written at 23:35:57

