Wandering Thoughts


Mixed feelings about Firefox Addons' new non-Recommended extensions warning

I don't look at addons on addons.mozilla.org very often, so I didn't know until now that Mozilla has started showing a warning on the page for many addons, such as Textern (currently), to the effect, well, let me just quote what I see now (more or less):

[! icon] This is not monitored for security through Mozilla's Recommended Extensions program. Make sure you trust it before installing.
Learn more

(Textern is among the Firefox addons that I use.)

This has apparently been going on since at least the start of March, per this report, or even further back (reddit), so I'm late to the party here.

On the one hand, I can see why Mozilla is doing this. Even in their more limited WebExtensions form, Firefox addons can do a great deal of damage to the security and privacy of the people who use them, and Mozilla doesn't have the people (or the interest) to audit them all or keep a close eye on what they're doing. Firefox addons aren't quite the prominent target that Chrome addons are, but things like the "Stylish" explosion demonstrate that people are happy to target Firefox too. What happened with Stylish also fairly convincingly demonstrates that requiring people to approve addon permissions isn't useful in practice, for various reasons.

On the other hand, this is inevitably going to lead to two bad outcomes. First, some number of people will be scared away from perfectly fine addons that simply aren't popular enough for Mozilla to bring them into the Recommended Extensions program. The second order consequence is that getting people to use a better version of an existing addon has implicitly gotten harder if the existing addon is a 'Recommended Extension'; yours may be better, but it also has a potentially scary warning on it.

(Arguably this is the correct outcome from a security perspective; yours may be better, but it's not necessarily enough better to make up for the increased risk of it not being more carefully watched.)

Second, some number of people will now be trained to ignore another security related warning because in practice it's useless noise to them. I think that this is especially likely if they've been directly steered to an addon by a recommendation or plug from somewhere else, and aren't just searching around on AMO. If you're searching on AMO for an addon that does X, the warning may steer you to one addon over another or sell you on the idea that the risk is too high. If you've come to AMO to install specific addon Y because it sounds interesting, well, the warning is mostly noise; it is a 'do you want to do this thing you want to do' question, except it's not even a question.

(And we know how those questions get answered; people almost always say 'yes I actually do want to do the thing I want to do'.)

Unfortunately I think this is a case where there is no good answer. Mozilla can't feasibly audit everything, they can't restrict AMO to only Recommended Extensions, and they likely feel that they can't just do nothing because of the harms to people who use Firefox Addons, especially people who don't already understand the risks that addons present.

FirefoxAddonsNewWarning written at 23:50:33


The modern HTTPS world has no place for old web servers

When I ran into Firefox's interstitial warning for old TLS versions, it wasn't where I expected, and where it happened gave me some tangled feelings. I had expected to first run into this on some ancient appliance or IPMI web interface (both of which are famous for this sort of thing). Instead, it was on the website of an active person that had been mentioned in a recent comment here on Wandering Thoughts. On the one hand, this is a situation where they could have kept their web server up to date. On the other hand, this demonstrates (and brings home) that the modern HTTPS web actively requires you to keep your web server up to date in a way that the HTTP web didn't. In the era of HTTP, you could have set up a web server in 2000 and it could still be running today, working perfectly well (even if it didn't support the very latest shiny thing). This doesn't work for HTTPS, not today and not in the future.

In practice there are a lot of things that have to be maintained on a HTTPS server. First, you have to renew TLS certificates, or automate it (in practice you've probably had to change how you get TLS certificates several times). Even with automated renewals, Let's Encrypt has changed their protocol once already, deprecating old clients and thus old configurations, and will probably do that again someday. And now you have to keep reasonably up to date with web server software, TLS libraries, and TLS configurations on an ongoing basis, because I doubt that the deprecation of everything before TLS 1.2 will be the last such deprecation.
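(Certificate expiry, at least, is the kind of thing you can watch mechanically. Here's a Python sketch of checking how long a server's certificate has left; the date parsing assumes the notAfter format that Python's ssl module reports, and the function names are mine:)

```python
import datetime
import socket
import ssl

def days_until_expiry(notafter: str) -> int:
    # getpeercert() reports notAfter as e.g. 'Jun  1 12:00:00 2021 GMT'.
    expires = datetime.datetime.strptime(notafter, "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).days

def server_cert_notafter(host: str, port: int = 443) -> str:
    # Connect with normal certificate verification and read the
    # peer certificate's expiry field.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```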

I can't help but feel that there is something lost with this. The HTTPS web probably won't be a place where you can preserve old web servers, for example, the way the HTTP web is. Today if you have operating hardware you could run a HTTP web server from an old SGI Irix workstation or even a DEC Ultrix machine, and every browser would probably be happy to speak HTTP 1.0 or the like to it, even though the server software probably hasn't been updated since the 1990s. That's not going to be possible on the HTTPS web, no matter how meticulously you maintain old environments.

Another, more relevant side of this is that it's not going to be possible for people with web servers to just let them sit. The more the HTTPS world changes and requires you to change, the more your HTTPS web server requires ongoing work. If you ignore it and skip that work, what happens to your website is the interstitial warning that I experienced and eventually it will stop being accepted by browsers at all. I expect that this is going to drive more people into the arms of large operations (like Github Pages or Cloudflare) that will look after all of that for them, and a little bit more of the indie 'anyone can do this' spirit of the old web will fade away.

(At the same time this is necessary to keep HTTPS secure, and HTTPS itself is necessary for the usual reasons. But let's not pretend that nothing is being lost in this shift.)

HTTPSNoOldServers written at 00:25:58


Some notes on Firefox's interstitial warning for old TLS versions

Firefox, along with all other browsers, is trying to move away from supporting older TLS versions, which means anything before TLS 1.2. In Firefox, the minimum acceptable TLS version is controlled by the about:config preference security.tls.version.min; in released versions of Firefox this is still '1' (for TLS 1.0), while in non-release versions it's '3' (for TLS 1.2). If you're using a non-release version and you visit some websites, you'll get a 'Secure Connection Failed' interstitial warning that's clear enough if you're a technical person.

The bottom of the warning text says:

This website might not support the TLS 1.2 protocol, which is the minimum version supported by Firefox. Enabling TLS 1.0 and TLS 1.1 might allow this connection to succeed.

TLS 1.0 and TLS 1.1 will be permanently disabled in a future release.

It then offers you a big blue 'Enable TLS 1.0 and 1.1' button. If you pick this, you're not enabling TLS 1.0 and 1.1 on a one-time basis or just for the specific website (the way you are with 'accept this certificate' overrides); you're permanently enabling it in Firefox preferences. Specifically, you're setting the security.tls.version.enable-deprecated preference to 'true' (from the default 'false').
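(If you hit the big blue button by accident, you can put things back in about:config, or pin the values in a user.js file. A sketch of the latter, using the preference names mentioned above; '3' corresponds to TLS 1.2:)

```
// user.js sketch: require TLS 1.2 as the minimum and keep the
// deprecated-TLS escape hatch off. Both preference names are
// visible in about:config.
user_pref("security.tls.version.min", 3);
user_pref("security.tls.version.enable-deprecated", false);
```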

As far as I've been able to see, the state of this '(permanently) enable deprecated TLS versions' setting is not exposed in the Preferences GUI, making its state invisible unless you know the trick (and even know to look). Perhaps when Mozilla raises the normal minimum TLS version in a Firefox release, they will expose something in Preferences (or perhaps they'll change to do something with per-site overrides, as they do for TLS certificates). In the mean time, if you want to find out about websites using older TLS versions through your normal browsing, you'll need to remember to reset this preference every time you need to use that big blue button to get a site to work.

(You might be doing this in Nightly or Beta, although probably you should avoid Nightly, or you might be doing this in a released version where you've changed security.tls.version.min yourself.)

FirefoxOldTLSWarning written at 00:05:20


The appeal of doing exact string comparisons with Apache's RewriteCond

I use Apache's RewriteCond a fair bit under various circumstances, especially here on Wandering Thoughts where I use it in .htaccess to block undesirable things (cf). The default RewriteCond action is to perform a regular expression match, and generally this is what I want; for instance, many web spiders have user agents that include their version number, and that number changes over time. However, recently I was reminded of the power and utility of doing exact string matches in some circumstances.

Suppose, not hypothetically, that you have some bad web spiders that crawl your site with a constant bogus HTTP Referer of:

http://www.google.co.uk/url?sa=t&source=web&cd=1
Or another web spider might crawl with an unusual and fixed user-agent of:

Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36

I could use regular expressions to match and block these, but that's at least annoying because both of these strings have various special regular expression characters that I'd have to carefully escape. So instead we can use RewriteCond's '=' option to do an exact string comparison. The one slightly tricky bit is that you want to enclose the entire thing in "'s, that is:

RewriteCond %{HTTP_REFERER} "=http://www.google.co.uk/url?sa=t&source=web&cd=1" [NC]

(The '[NC]' is perhaps overkill, especially as the spider probably never varies the case. But it's a reflex.)

As you can see, instances of '=' in the string don't have to be escaped. If the string I wanted to match (exactly) on had quotes in it, I'd have to look up how to escape them in Apache.
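(To see just how annoying the regex alternative would be, here's a Python sketch of what escaping the referer string involves. Python's re.escape is a stand-in for doing it by hand; Apache's PCRE treats the same characters as special:)

```python
import re

# The exact referer string from the RewriteCond above.
referer = "http://www.google.co.uk/url?sa=t&source=web&cd=1"

# re.escape() marks every character that a regular expression
# version of the match would need escaped ('.', '?', and so on).
print(re.escape(referer))
```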

Now that I've looked up this RewriteCond option and gotten it working for me, I'm probably going to make more use of it. Various bad web spiders (and other software) have pretty consistent and unique signatures in various headers, and matching on those generally beats playing whack-a-mole with their IP address ranges.
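(As an illustration, here is a sketch of what a complete .htaccess block might look like for the fixed user-agent above. The [F] returns a 403; this is my guess at a reasonable arrangement, not a drop-in config:)

```
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "=Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36"
RewriteRule ^ - [F]
```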

(This probably isn't very useful outside of blocking bad people, although I suppose it could be used to rewrite only certain exact URLs while allowing others to fall through, or the reverse.)

ApacheRewriteCondExactMatch written at 22:40:50


How Firefox could support automatically using local DNS over HTTPS servers

On the surface, one of the challenges for Firefox automatically using different DNS over HTTPS servers is that Firefox considers your ISP to be a threat. This means that Firefox doesn't want to just use your local DNS over HTTPS server any more than it wants to just use your normal local DNS server. Firefox's use of DNS over HTTPS is explicitly to avoid surveillance from various parties, including the local network, so to do this it needs to go straight to a trusted (public) DNS over HTTPS server.

But there is a leak in this security model, in the form of Firefox's canary domain for disabling its automatic DNS over HTTPS. Any local network can already tell Firefox to disable DNS over HTTPS, defeating this anti-snooping measure. This is necessary because Firefox can't reliably detect when DNS over HTTPS to a public DNS server won't work properly for the local network, so networks with special name resolution setups need some way to signal this to Firefox.
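(For example, in Unbound the canary signal is a one-line local zone. This assumes the canary name is still use-application-dns.net and that answering NXDOMAIN for it is still what Firefox checks for:)

```
server:
    # Answer NXDOMAIN for Mozilla's DoH canary domain, which tells
    # Firefox to turn off its automatic DNS over HTTPS on this network.
    local-zone: "use-application-dns.net." always_nxdomain
```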

(As a practical matter, Firefox not supporting a way to disable its automatic DNS over HTTPS to public DNS servers would result in a certain amount of the remaining Firefox users dropping it, because it didn't work reliably in their network. So Mozilla's hand is forced on this, even though it allows ISPs to step in and snoop on people again.)

Since Firefox already supports disabling automatic DNS over HTTPS entirely through a network doing magic tricks with the canary domain, it could also support a way of using the canary domain to signal that Firefox should use a local DNS over HTTPS server. This is no worse than turning off DoH entirely (in both cases your DNS queries are going to the network operator), and has some advantages such as potentially enabling encrypted SNI.

(Firefox's threat model might say that it can't enable ESNI with an untrusted local DNS over HTTPS server that was picked up automatically.)

FirefoxLocalDNSOverHTTPS written at 00:53:28


Sensible heuristics for when to use DNS over HTTPS can't work for us

If Firefox is using DNS over HTTPS in general, it has various heuristics for whether or not to use it to resolve any particular name; for instance, right now it doesn't use DNS over HTTPS for any domain in your DNS suffixes (this happens even if you explicitly turned on DNS over HTTPS, which disables checking for the canary domain). Presumably other browsers will also have their own set of heuristics when they implement DNS over HTTPS, and at some point the set of heuristics that various browsers use may even be documented.

(This DNS over HTTPS bypass is intended to deal with two cases; where the name you're looking up doesn't exist in public DNS, and where the name has a different IP address.)
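The DNS suffix check itself is simple to state; here's a Python sketch of the obvious version (the function name and the example domains are mine, and Firefox's actual TRR logic may differ in details):

```python
def under_dns_suffixes(name: str, suffixes: list[str]) -> bool:
    # True if 'name' equals one of the DNS suffixes or falls under one,
    # in which case a browser would skip DoH and use the local resolver.
    name = name.rstrip(".").lower()
    for suffix in suffixes:
        suffix = suffix.rstrip(".").lower()
        if name == suffix or name.endswith("." + suffix):
            return True
    return False
```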

Almost six months ago I wrote a cautiously optimistic entry about Firefox, DNS over HTTPS, and us, where I hoped that Firefox's heuristics would be able to deal with our split horizon DNS setup where some names resolve to different IPs internally than they do externally. Unfortunately, I've changed my mind (based on experience and experimentation since then); I now believe that no sensible set of heuristics can cover all of our cases, and so anyone using DNS over HTTPS (with external resolvers) will sooner or later be unable to connect to some of the websites run by people in the department.

The fundamental issue that sinks the entire thing here is that people sometimes want to host websites on their machines here but give them names not under our university domains (for various good reasons, such as a regular yearly conference that just happens to be hosted here this time). We do know what these domains are, because we have to set up the split DNS for them, but it's definitely not appropriate to add them to our DNS suffixes and they have different public DNS servers than our internal resolvers.

(In some cases they have public DNS servers that aren't even associated with us, and we carefully shadow bits of their domain internally to make it work. We prefer to host their DNS, though.)

I can't think of any sensible heuristic that could detect this situation, especially if you don't want to leak information about the person's DNS lookups to the local resolver. You could detect connection failures and try a non DNS over HTTPS name lookup, but that leaks data in various circumstances and even if it works there's a long connection delay the first time around.

So I think we're going to always have the 'disable DNS over HTTPS' canary domain signal in our local DNS servers, and we'll hope that someday this signal is respected even for Firefox users who have explicitly turned on DNS over HTTPS (unless they explicitly picked a new option of 'always use DoH even if the local network signals that it shouldn't be used and might give you broken results'). This makes me a little bit sad; I'd like heuristics that magically worked, so we could let people use DNS over HTTPS and hide things from us that we don't want to know anyway.

DNSOverHTTPSHeuristicsAndUs written at 01:22:57


Some notes on the state of DNS over HTTPS in Firefox (as of March 2020)

Recently, we decided to add the magic marker that's used to explicitly disable DNS over HTTPS to our local DNS resolvers as a precaution against various things. Being sensible people, we then attempted to verify that we'd gotten it right, by explicitly enabling DNS over HTTPS in a sysadmin's test Firefox and then trying things with and without the canary domain. This failed and left us very puzzled, and it was only through a lucky bit of happenstance that we kind of discovered what seems to be going on (although what's going on is documented if you pay attention). So here are some notes.

First and most importantly, the canary domain is ignored if you've explicitly enabled DNS over HTTPS. We found this out from a tweet by Jan Schaumann (via), who filed a Mozilla bug over this behavior. The result of the bug was to cause Mozilla to update their documentation to mention this, both here (in a bit that's easy to miss in passing) and here (where it's made more obvious). The Mozilla bug (bug #1614751) contains, in comment #1, a way of manipulating about:config settings to pretend that Firefox enrolled you in DoH, so that you can test that you properly added the canary domain to your DNS resolver.

(It's possible that Mozilla will someday be persuaded to disable DoH when the canary domain is present no matter what the user asked for. In the mean time, people who have explicitly turned on DoH won't be able to connect to some of the web servers that we host due to our split horizon DNS.)

To make your life more confusing when you're testing, Firefox never uses DNS over HTTPS for domains in your DNS suffix list (which can come from DHCP or be explicitly configured, for example in /etc/resolv.conf on Unix systems). This can mean that either you need to manipulate your host settings to scrub out your usual DNS suffix list or you need a split horizon hostname that is not under them. Fortunately we were able to find some eventually, which allowed us to see that Firefox was still looking them up with DoH despite the canary domain being theoretically configured.

While testing Firefox, you can look at the state of its DNS stuff in about:networking. The 'DNS' tab will show you what Firefox thinks are your DNS suffixes and what names it has recently resolved, with or without DNS over HTTPS (the 'TRR' column, which is true if DoH was used). You can also directly do address lookups with the 'DNS Lookup' tab; addresses looked up here show up in the 'DNS' tab, so you can see if they were resolved with DNS over HTTPS (if the IP address isn't a sufficient sign).

I believe that Mozilla no longer documents any specific claims about what Firefox will do to detect names and situations where DNS over HTTPS doesn't work. Empirically, using an internal top level domain (we use '.sandbox') appears to result in Firefox not using DoH for the lookup, but I don't know if this happens because Firefox knows that this TLD doesn't exist or because it does a DoH lookup, fails to find the name, and retries through the local resolver.

(I can think of ways to find out, but they require more work than I want to bother with and anyway, Mozilla is likely to change all of these behaviors over time.)

FirefoxDNSOverHTTPSNotes written at 02:03:14


Logging out of HTTP Basic Authentication in Firefox

We make significant use of HTTP Basic Authentication in our local web servers, because if you use Apache it's a nice simple way of putting arbitrary things behind password protection. It's not the most user-friendly thing these days and it's probably not what you want if you also need to handle things like user registration and password resets, but in our environment all of those are handled separately. However, it does have one little drawback, which is logging out.

Normal user web authentication schemes are pretty much all implemented with browser cookies and often a backend session database. This means that 'logging out' is a matter of removing the cookies, marking the session as logged out, or both at once. Logging you out is a straightforward and unobtrusive thing for a website to do (and it can even do so passively), and even if a website doesn't support logging out you can do it yourself by scrubbing the site's cookies (and Firefox is making it increasingly easy to do that).

There is no equivalent of this for HTTP Basic Authentication. The browser magically remembers the authentication information for as long as it's running, there's no way for the website to gracefully signal to the browser that the information should be discarded, and Firefox doesn't expose any convenient controls for it to the user (Firefox doesn't even seem to consider HTTP Basic Authentication to be 'cookies and site data' that it will let you clear). Traditionally the only way to 'log out' from HTTP Basic Authentication was to quit out of your entire browser session, which is a bit obtrusive in these days of tons of windows, tabs, and established sessions with other websites.

Recently I learned that you can do better, although it's a bit obtrusive and not particularly user-friendly. The magic trick is that you can overwrite Firefox's remembered HTTP Basic Authentication user and password with a new, invalid pair by using a URL with embedded credentials. If you're currently authenticated to https://example.com/app, you can destroy that and effectively log out by trying to access 'https://bad:bad@example.com/app'. The drawback is that you'll get an authentication challenge popup that you have to dismiss.
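(The URL surgery involved is mechanical enough to script. Here's a Python sketch; example.com/app is a stand-in for your own protected URL, and logout_url is a name I made up:)

```python
from urllib.parse import urlsplit, urlunsplit

def logout_url(url: str, user: str = "bad", password: str = "bad") -> str:
    # Rewrite a URL to carry a bogus user:password pair, which
    # overwrites Firefox's remembered Basic Auth credentials.
    parts = urlsplit(url)
    # Drop any credentials already embedded in the URL.
    host = parts.netloc.rsplit("@", 1)[-1]
    netloc = f"{user}:{password}@{host}"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

print(logout_url("https://example.com/app"))
```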

(Chrome apparently no longer supports specifying credentials in URLs this way, so this trick won't work in it. Hopefully Firefox is not going to go the same way, at least not before it adds some sort of UI to let you discard HTTP Basic Authentication credentials yourself. MDN does describe this as deprecated in the MDN page on HTTP authentication, so it may be going away someday even in Firefox.)

You can definitely enter such an URL by hand (or modify the existing URL of the page in the URL bar to insert the 'bad:bad@' credentials bit) and it works. I believe that Firefox will still support links on web pages that have credentials embedded in them, so you could put a 'log out by trying to use this' link on your index page, but I haven't tested it. You'd only want to do this on websites aimed at technical users, because following such a link will provoke a HTTP Basic Authentication challenge that you have to cancel out of.

PS: You can apparently clear this information through 'Clear Recent History' by invoking that and then carefully selecting only 'Active Logins'. Since clearing any degree of history is an alarming thing for me and a mistake could be very bad (I keep my browser history forever), I'm not fond of this interface and don't expect to ever use it. People who are less attached to their browser history than I am (and so are more tolerant of slips than I am) may like it more.

FirefoxLogoutBasicAuth written at 23:35:57


The browsers are probably running the TLS show now

The news of the time interval is that Apple is limiting TLS certificate lifetimes to 398 days for certificates issued from September 1st onward (also, also). This effectively bypasses the CA/Browser Forum, where Google put forward a ballot on this in 2019 but couldn't get it passed (also). Specifically, it was voted down by a number of CAs; some CAs voted in favor, as did all browser vendors. Now Apple has decided to demonstrate who has actual power in this ecosystem and has simply put their foot down. What CAs want and how they voted is now irrelevant.

(Since Apple has led the way on this and all browser vendors want to do this, I expect Chrome, Firefox, and probably Microsoft Edge to follow before the end of the year.)

I wouldn't be surprised if other developments in TLS start happening this way (and if it was Apple driving them, because Apple is in some ways in the best political position to do this). At the same time it's worth noting that this is a change from how things used to be (as far as I know). Up until now, browser vendors have generally been fairly careful to build consensus and push CAs relatively lightly. If browser vendors are now going to be more aggressive about simply forcing CAs to do things, who knows what happens next.

At the same time, shortening the acceptable certificate validity period is the easiest change to force, because everyone can already issue and get shorter-lived certificates. The only way for a CA to not 'comply' with Apple's new policy would be to insist on issuing only long-lived certificates to customers against the wishes of the customers, and that's a great way to have the customers pack up and go to someone else. This is fundamentally different from a policy change that would require CAs to actively change their behavior, where the CAs could just refuse to do anything and basically dare the browser vendors to de-trust them all. On the third hand, Google more or less did force a behavior change by increasingly insisting on Certificate Transparency. Maybe we'll see more of that.

(And in a world with Let's Encrypt, most everyone has an alternative option to commercial CAs. At least right now, it seems unlikely that a browser vendor would try to force a change that LE objected to, partly because LE is now such a dominant CA. Just like browsers, LE is sort of in a position to put its foot down.)

BrowsersRunningTLSNow written at 00:25:31


The drawback of having a dynamic site with a lot of URLs on today's web

Wandering Thoughts, this blog, is dynamically generated from an underlying set of entries (more or less). As is common with dynamically generated sites, which often have a different URL structure than static sites, it has a lot of automatically generated URLs that provide various ways of viewing and accessing the underlying entries, and of course it creates links to those URLs in various places. Once upon a time this was generally fine and I didn't think much about it. These days my attitude has changed and I'm increasingly thinking about how to reduce the number of these automatically generated links (and perhaps to remove some of the URLs themselves).

The issue is that on the modern web, everything gets visited (even things behind links flagged as nofollow, although I wish otherwise). This includes automatically generated pages that are either pointless duplication or actively useless, for example because they don't have any actual content. If you have a highly dynamic web site that generates a lot of these URLs and links to them, sooner or later you'll waste time generating and serving these pages to robots that don't really care.

All of that sounds nicely abstract, so let me be concrete. Every entry on Wandering Thoughts can potentially have comments, and long ago I decided that I should provide syndication feeds for comments (as well as entries) and then put in links to the relevant syndication feeds for any page at the bottom of it (pages that are directories have syndication feeds for entries and comments). Plenty of my entries don't have any comments and so the comment syndication feed for them is empty, and even for entries that do have comments the feed is almost entirely static (since new comments are rare, especially on old entries). But because there are links to all of these comment syndication feeds, they get periodically fetched by crawlers.

(To put concrete numbers on this, in the past ten days over 2300 different entries here have had their comment syndication feeds retrieved, a number of them repeatedly. Some of the fetching is from web spiders that admit it, some of it may be from people clicking on links out of curiosity, and some of it is certainly from people and software that are cloaking their real activity. Over ten days, this is not gigantic.)

Once upon a time I would have had no qualms about exposing all of these comment syndication feed links as the right thing to do even in the face of this. These days I'm not so sure. I'm not going to make the feeds themselves go away, but I am considering not putting in the links in at least some cases (for example, if there are no comments on the entry). That would at least reduce how many visible links Wandering Thoughts generates, and somewhat reduce the pointless crawling and indexing that people are doing.

(I've actually been quietly reducing the number of syndication feeds that Wandering Thoughts exposes for some time, but the previous cases have been easy ones. Interested parties can see the 'atomfeed-virt-only-*' settings in DWiki's configuration file.)

ManyURLsModernDrawback written at 22:59:33
