Wandering Thoughts

2017-12-13

Our Apache file serving problem on our general purpose web server

One of the servers we run for our department is an old-fashioned general purpose web server that hosts things like people's home pages and the web pages for (some) research groups. In terms of content, we have a mix of static files, old-fashioned CGIs (run through suexec), and reverse proxies to user-run web servers. One of the things people here do with this web server is use it to share research data files and datasets, generally through their personal home page because that's the easy way to go. Some of these files are pretty large.

When you share data, people download it; sometimes a lot of people, because sometimes computer scientists share hot research results. This is no problem from a bandwidth perspective; we (the department and the university) have lots of bandwidth (it's not like the old days) and we'd love to see it used. However, some number of the people asking for this data are on relatively slow connections, and some of these data files are large. When you combine these two, you get very slow downloads and thus client HTTP connections that stick around for quite a long time.

(Since 6am this morning, we've seen 27 requests that took more than an hour to complete, 265 that took more than ten minutes, and over 7,500 that took more than a minute.)
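
I don't describe here how those numbers were produced. As a minimal sketch of the sort of counting involved, assuming an access log whose LogFormat has been extended so that the request duration in seconds (Apache's %T) is the last field on each line (an assumption for illustration, not necessarily our actual log format):

    #!/usr/bin/env python3
    # Count slow requests from an Apache access log on stdin, assuming the
    # request duration in seconds (%T) is the last whitespace-separated field.
    import sys

    over_minute = over_ten = over_hour = 0
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        try:
            secs = int(fields[-1])
        except ValueError:
            continue
        if secs > 3600:
            over_hour += 1
        if secs > 600:
            over_ten += 1
        if secs > 60:
            over_minute += 1

    print(f"{over_hour} requests took more than an hour, {over_ten} more than "
          f"ten minutes, and {over_minute} more than a minute")

(The counts are cumulative; a request that took over an hour also counts in the ten-minute and one-minute totals, which is how the figures above should be read.)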

For historical reasons we're using the 'prefork' Apache MPM, and perhaps you now see the problem. Each low-bandwidth client that's downloading a big file occupies a whole worker process for what is a very long time (by web server standards). We feel we can only configure so many worker processes, mostly because each of them eats a certain amount of the machine's finite memory, and we've repeatedly had all our worker processes eaten up by these slow clients, locking out requests for all other URLs for a while. The clients come and go, for reasons we're not certain of; perhaps someone is posting a link somewhere, or maybe a classroom of people are being directed to download some sample data or the like. It's honestly kind of mysterious to us.

(In theory we could also worry about how many worker processes we allow because each worker process could someday be a CGI that's running at the same time as other CGIs, and if we run too many CGIs at once the web server explodes. In practice we've already configured so many worker processes in an attempt to keep some request slots open during these 'slow clients, popular file' situations that our web server would likely explode if even half of the current worker processes were running CGIs at once.)
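
To put rough numbers on the worker process ceiling, the relevant prefork knobs look something like the following. The figures are invented for illustration rather than being our real configuration; the point is that MaxRequestWorkers is a hard cap on simultaneous requests of any kind, multi-hour downloads to slow clients included.

    # illustrative prefork settings; the numbers are made up
    <IfModule mpm_prefork_module>
        StartServers            10
        MinSpareServers         10
        MaxSpareServers         50
        ServerLimit             400
        # every in-flight request, including a very slow download,
        # occupies one whole process until it finishes
        MaxRequestWorkers       400
        MaxConnectionsPerChild  10000
    </IfModule>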

Right now we're resorting to mod_qos to try to limit access to currently popular things, but this isn't ideal for several reasons. What we really want is a hybrid web serving model, where just pushing files out to clients is done with a lightweight, highly scalable method that's basically free, while Apache continues to handle CGIs in something like the traditional model. Ideally we could even turn down the 'CGI workers' count, now that they don't have to also be 'file workers'.
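
For the curious, what we're doing with mod_qos is roughly the following sort of thing; the path and the limit here are invented for illustration. QS_LocRequestLimitMatch caps the number of concurrent requests whose URL matches a regular expression, which at least keeps one popular file from eating every worker process.

    # illustrative mod_qos limit; the URL pattern and number are made up
    <IfModule qos_module>
        # at most 20 concurrent requests for this (hypothetical) dataset area
        QS_LocRequestLimitMatch "^/~someuser/datasets/.*" 20
    </IfModule>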

Changing web servers away from Apache isn't an option and neither is splitting the static files off to another server entirely. Based on my reading so far, trying to switch to the event MPM looks like our most promising option; in fact in theory the event MPM sounds very close to our ideal setup. I'm not certain how it interacts with CGIs, though; the Apache documentation suggests that we might need or want to switch to mod_cgid, and that's going to require testing (the documentation claims it's basically a drop-in replacement, but I'm not sure I trust that).

(Setting suitable configuration parameters for a thread-based MPM is going to be a new and somewhat exciting area for us, too. It seems likely that ThreadsPerChild is the important tuning knob, but I have no idea what the tradeoffs are. Perhaps we should take the default Ubuntu 16.04 settings for everything except MaxRequestWorkers and perhaps AsyncRequestWorkerFactor, which we might want to tune up if we expect lots of waiting connections.)
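
To make that concrete, a hypothetical event MPM configuration might look like the following. The numbers are guesses for illustration, not tested settings, and the switch from mod_cgi to mod_cgid is exactly the part we'd have to verify ourselves.

    # illustrative event MPM settings; the numbers are guesses, not advice
    # (on Ubuntu, roughly: a2dismod mpm_prefork cgi; a2enmod mpm_event cgid)
    <IfModule mpm_event_module>
        ThreadsPerChild           25
        # total simultaneous request-handling threads across all children
        MaxRequestWorkers         400
        # roughly, how many extra waiting connections each child will
        # accept on top of its threads
        AsyncRequestWorkerFactor  4
    </IfModule>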

ApacheFileServingOurProblem written at 23:42:29

2017-12-10

Let's Encrypt and a TLS monoculture

Make no mistake, Let's Encrypt is great and I love them. I probably wouldn't currently have TLS certificates on my personal websites without them (since the free options have mostly dried up), and we've switched over to them at work, primarily because of the automation. However, there's something that I worry about from time to time with Let's Encrypt, and that's how their success may create something of a TLS monoculture.

In general it's clear that Let's Encrypt accounts for a large and steadily growing number of TLS certificates out there in the wild. Some recent reports I could find suggest that it may now be the largest single CA, at 37% of observed certificates (eg nettrack.info). Let's Encrypt's plans for 2018 call for doubling their active certificates and unique domains, and if this comes to pass their dominance is only going to grow. Some of this, as with us, will come from Let's Encrypt displacing certificates from other CAs on existing HTTPS sites, but probably LE hopes for a lot of it to come from more and more sites adopting HTTPS (with LE certificates).

This increasing concentration of TLS certificates from a single source has two obvious effects. The first effect is that it makes Let's Encrypt itself an increasingly crucial piece of the overall HTTPS infrastructure. If Let's Encrypt ever has problems, it will affect a whole lot of sites, and if it ever has security issues, it seems very likely that browsers will be even less prepared than usual to do much about it. That Let's Encrypt certificates only last for 90 days also seems likely to magnify any operational issues or scaling problems, since it increases the certificate issuance rate required to support any given number of active certificates.
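
A bit of rough arithmetic makes the issuance rate point concrete. The certificate count below is a made-up round number, and I'm assuming 90-day certificates get renewed at around the 60-day mark, which is roughly when clients like certbot try to renew:

    issuance rate ≈ active certificates / renewal interval
    1,000,000 certs renewed every ~60 days:   1,000,000 / 60  ≈ 16,700 issuances a day
    1,000,000 one-year certs renewed yearly:  1,000,000 / 365 ≈  2,700 issuances a day

That's roughly six times the issuance (and validation) work for the same number of active certificates.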

(As far as security goes, fortunately increasingly mandatory certificate transparency makes it harder for an attacker to hide security exploits against a CA.)

Beyond security issues, though, this implies that any Let's Encrypt policies on who can or can't get TLS certificates (and under what circumstances) may have significant and disproportionate impact. Let's Encrypt is currently fairly unrestricted there, as far as I know, but this may not be under their control under all circumstances; for example, legal judgements might force them to restrict or block issuance of certificates to some groups, network areas, or countries.

The second effect is that HTTPS TLS certificate practices are likely to increasingly become dominated and defined by whatever Let's Encrypt does (and doesn't do). When LE issues the majority of the active certificates in the world, your code and systems had better accept their practices and their certificates. If LE certificates include some field, you'd better be able to handle it; if they don't, you're not going to be able to require it. Of course, this gives Let's Encrypt political influence over TLS standards and operational practices, and this means that persuading Let's Encrypt about something is valuable and thus likely to be something people pursue. None of this is surprising; it's always been the case that a dominant vendor creates a de facto standard.

(The effects of Let's Encrypt on client TLS code are fortunately limited because there are plenty of extremely important HTTPS websites that are very unlikely to switch over to Let's Encrypt certificates. Google (including Youtube), Microsoft, Facebook, Twitter, Amazon, Apple, etc, are all major web destinations and all of them are likely to keep using non-LE certificates.)

LetsEncryptMonoculture written at 22:11:09

2017-12-08

Some thoughts on what StartCom's shutdown means in general

I wrote a couple of weeks ago about StartCom giving up its Certificate Authority business, and then I was reminded of it more recently when they sent my StartSSL contact address an email message about it. Perhaps unsurprisingly, that email was grumpier than their public mozilla.dev.security.policy message; I believe it was similar to what they posted on their own website (I saved it, but I can't be bothered to look at it now). Partly as a result of this, I've been thinking about what StartCom's shutdown means about the current state of the CA world.

Once upon a time, owning a CA certificate was a license to print money unless you completely fumbled it. Given that StartCom was willing to completely give up on what was once a valuable asset, it seems clear that those days are over now, but I think they're over from two sides at once. On the income side, free certificates from Let's Encrypt and other sources seem to be taking an increasingly large chunk out of everyone else's business. There are still people who pay for basic TLS certificates, but it's increasingly hard to see why. Or at least the number of such people is going to keep shrinking.

(Well, one reason is if automatic provisioning is such a pain that you're willing to throw money at certificates that last a year or more. But sooner or later people and software are going to get over that.)

However, I think that's not the only issue. It seems very likely that it's increasingly costly to operate a CA in a way that browsers like, with sufficient security, business processes, adherence to various standards, and so on. It's clear that CAs used to be able to get away with a lot of sloppy behaviors and casual practices, because we've seen some of those surface in, for example, mis-issued test certificates for real domains. That doesn't fly any more, so running a CA requires more work and more costs, especially if something goes badly wrong and you have to pass a strong audit to get back into people's good graces.

(In StartCom's case, I suspect that one reason their CA certificate became effectively worthless is that getting it re-accepted by Chrome and Mozilla would have required about as much work as starting from scratch with a new certificate and business. Starting from scratch might even be easier, since you wouldn't be tainted by StartCom's past. Thus I suspect StartCom couldn't find any buyers for their CA business and certificates.)

Both of these factors seem very likely to get worse. Free TLS certificates will only pick up momentum from here (Let's Encrypt is going to offer wildcard certificates soon, for example), and browsers are cranking up the restrictions on CAs. Chrome is especially moving forward, with future requirements such as Certificate Transparency for all TLS certificates.

(It seems likely that part of the expense of running a modern commercial CA is having people on staff who can participate usefully in places like the CA/Browser forum, because as a CA you clearly have to care about what gets decided in those places.)

StartComShutdownThoughts written at 00:08:40

2017-11-19

StartCom gives up on its Certificate Authority business

The big news recently in the web browser SSL world is that StartCom has officially given up on being a CA because, as I put it on Twitter, no one trusts them any more. As far as I know, this makes StartCom the first CA to go out of business merely because of dubious practices and shady business dealings (ie, quietly selling themselves to WoSign), instead of total security failures (DigiNotar) or utter incompetence (ipsCA). I consider this a great thing for the overall health and security of the browser CA ecology, because it shows that browsers are now not afraid to use their teeth.

StartCom is far from unique in having dubious practices (although they may have had the most occurrences). All sorts of CAs have them (or have had them), and in the past those CAs have generally skated by with only minor objections from the browsers; no one seemed ready to actually drop a CA over these practices, perhaps partly because of the collective action problem here. As a result, CAs had very little incentive to not be a bit sloppy and dubious. Are your customers prepared to spend enough money on SHA1 certificates even though you shouldn't issue them any more? Well, perhaps you can find a way around that. And so on.

The good news is that those days are over now, and StartCom going out of business (apparently along with WoSign) shows the consequences of ignoring that. At least Mozilla and Chrome are demonstrably willing to remove CAs for mere sloppy behavior and dubious practices, even if they're still moving slowly on it (IE and Safari are more opaque here). Tighter CA standards benefit web security in the obvious way, and reducing the number of CAs and trusted CA certificates out there is one way to deal with the core security problem of TLS on the web. Unsurprisingly, I'm in favour of this. In practice we put a huge amount of trust in CAs, so I think that CAs need to be held to a high standard and punished when they fail or are sloppy.

(The next CA on the chopping block is Symantec, perhaps even after DigiCert buys their assets; Mozilla is somewhat dubious.)

Sidebar: My personal view of StartCom

In the past, I got free certificates through StartCom (in the form of StartSSL). Part of StartSSL's business model was giving away basic certificates for free and then charging for revocation, which looked reasonably fair until Heartbleed happened. After Heartbleed, the nice thing for StartCom to do would have been to waive the fee to revoke and re-issue current certificates; this would have neatly dealt with the dilemma of practical reactions to the possible private key compromise. You can probably guess what happened next; StartCom declined to do so. Even in the light of Heartbleed, they stuck to their 'pay us money' policy. As a result, I'm confident that lots of people did not revoke certificates and probably a decent number did not even roll them over (since that would have required paying another CA for new certificates).

From that point on, I disliked StartCom/StartSSL. When Let's Encrypt provided an alternate source of free TLS certificates, I was quite happy to leave them behind. Looking back now, it's clear to me that StartCom didn't actually care very much about TLS and web security; they cared mostly or entirely about making money, and if their policies caused real TLS security issues (such as people staying with potentially exposed certificate keys), well, tough luck. They could get away with it, so they did it.

StartComGivesUp written at 23:01:12

2017-11-02

I think Certificate Transparency is better for the web than HTTP Key Pinning

Recently, the Chrome developers announced Intent to Deprecate and Remove: Public Key Pinning (via). The unkind way to describe HTTP Public Key Pinning is that it's a great way to blow your foot off, or in our situation have well-meaning people blow it off for us. The kind way to describe HPKP is that it's intended to be a way to lessen the damage if a Certificate Authority (mis-)issues a certificate for your domain to someone else; in the right situation you can prevent that certificate from being successfully used (if browsers are willing to play along). Doubts about HPKP have been out there for some time, so this Chrome development isn't exactly surprising; even past enthusiasts have given up. Although it's not an exact replacement, today's way of dealing with the problem of (mis-)issued certificates is to increasingly insist on Certificate Transparency; you can read one view of the shift here (via).
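
For reference, HPKP works by having a site send a Public-Key-Pins response header, something like the following (one logical header line, wrapped here for readability, with placeholder hashes). A browser that has seen and remembered this header will refuse to connect for max-age seconds if a later certificate chain contains no key matching one of the pins, which is exactly the mechanism that lets well-meaning people lock your visitors out.

    Public-Key-Pins: pin-sha256="<base64 hash of the current key>";
                     pin-sha256="<base64 hash of a backup key>";
                     max-age=5184000; includeSubDomains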

I only have a relative outsider's view of all of this, but I've wound up feeling that Certificate Transparency is better overall for the web and for the TLS ecology than HPKP is for reasons that go beyond just the difficulty and dangers of using HPKP. One way to put the issue is that HPKP is fundamentally private and reactive, while CT logs are public and thus potentially proactive. The drawback of HPKP is that it only triggers when a bad TLS certificate is presented to a client that has pinning active; until that time, no one knows that the certificate exists, and even after that time the client may be the only one that knows (the attacker may block the client's attempt to report the HPKP violation, for example). By contrast, CT logs are public and published more or less immediately, which lets anyone monitor them for bad TLS certificates.
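
As a small illustration of 'anyone can monitor them', here is a minimal sketch that asks crt.sh, one public search front end over the CT logs, what certificates it has seen for a domain. This uses crt.sh's unofficial JSON output, which could change at any time, and example.org is obviously a stand-in domain.

    #!/usr/bin/env python3
    # Minimal sketch: list certificates that the CT logs (via crt.sh) have
    # recorded for a domain. crt.sh's JSON interface is unofficial.
    import requests

    DOMAIN = "example.org"   # stand-in; use your own domain

    resp = requests.get("https://crt.sh/",
                        params={"q": "%." + DOMAIN, "output": "json"},
                        timeout=60)
    resp.raise_for_status()
    for cert in resp.json():
        # name_value may hold several names separated by newlines
        names = cert.get("name_value", "").replace("\n", ", ")
        print(cert.get("not_before"), cert.get("issuer_name"), names)

In real monitoring you would run something like this periodically and alert on issuers or names you don't expect to see.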

This matters because in general, there are two sorts of bad TLS certificates: ones that were issued through mistakes, and ones that were issued by thoroughly compromising a CA. Mistakenly issued certificates will be logged to CT logs, and so they can be detected in advance before they start doing harm to clients that don't have pinning active (or to clients that allow people to override the pinning). Certificates from fully compromised CAs will normally be kept out of CT logs precisely to avoid advance detection, but the more that browsers insist on certificates being in CT logs (and that's coming), the less that works. Also, mistakenly issued certificates seem to be far more common so far, with outright CA compromises relatively rare.

If HPKP is active in a client and the pins are carefully chosen so they're beyond an attacker's reach, HPKP may keep that client more secure than CT logs can. However, HPKP does nothing to help clients that don't have the pinning already active, and it doesn't necessarily let website operators know that there are bad certificates out there. CT logs provide somewhat less security to a highly secure client but create far higher global visibility about bad TLS certificates. This global visibility is better overall for the web; it's going to do more to increase the risks and costs of attacks with fraudulently obtained TLS certificates, for example.

If you're facing an attacker who's willing to burn their access to a CA for an immediate and relatively moderate-duration attack against visitors to your website, then CT logs are clearly worse than HPKP; CT logs won't protect you against this, whereas HPKP would protect the visitors who already have your pins active. However, CT logs will at least let you know that an attack is probably happening (and you might be able to limit the damage somehow). And most websites are not dealing with attackers who're willing to expend that sort of resources. Plus, of course, most websites would never have deployed HPKP in the first place, so for them a world with CT logs is a net win.

(CT logs can't stop attacks in advance because TLS certificate revocation remains pretty much broken. In practice, the only fix for a serious attack is for browsers to rush out new versions that explicitly distrust things, and for that update to propagate to users. This takes time, and that time window is when the attackers can operate.)

PS: In theory a world with both CT logs and HPKP is a better place than a world with just CT logs. In practice, browser people have a finite amount of effort that they can devote to TLS security. Spending some of that effort on a low-usage, low-payoff feature like HPKP means that other, probably more valuable things won't get worked on. Plus, the mere existence of a feature creates complexity and surprising interactions.

CertTransOverHTTPKeyPinning written at 01:28:57

2017-10-07

I'm trying out smooth scrolling in my browsers

When I started out using web browsers, there was no such thing as browser smooth scrolling; graphics performance was sufficiently poor that the idea would have been absurd. When you tapped the spacebar, the browser page jumped, and that was that. When graphics sped up and browsers started taking advantage of it, not only was I very used to my jump scrolling but I was running on Linux (and on old hardware), so the smooth scrolling I got did not exactly feel very nice to me. The upshot was that I immediately turned it off in Firefox, and ever since then I've carried that forward (and explicitly turned it off in Chrome as well, and so on).
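
For anyone who wants to flip the same switches, the settings involved are, as far as I know, the following; the Chrome flag may or may not be present depending on your platform and version.

    general.smoothScroll              # Firefox, in about:config; false turns it off
    chrome://flags/#smooth-scrolling  # Chrome's toggle, at least on Linux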

I've recently reversed that, switching over to letting my browsers use smooth scrolling. Although my reversal here was sudden, it's the culmination of a number of things. The first is that I made some font setup changes that produced a cascade of appearance changes in my browsers, so I was fiddling with my browser setup anyway. The font change itself was part of reconsidering long term habits that maybe weren't actually the right choice, and on top of that I read Pavel Fatin's Scrolling with pleasure, especially this bit:

What is smooth scrolling good for — isn’t it just “bells and whistles”? Nothing of the kind! Human visual system is adapted to deal with real-world objects, which are moving continuously. Abrupt image transitions place a burden on the visual system, and increase user’s cognitive load (because it takes additional time and effort to consciously match the before and after pictures).

A number of studies have demonstrated measurable benefits from smooth scrolling, [...]

On the one hand, I didn't feel like I had additional cognitive load because of my jump scrolling; if anything, it felt like jump scroll was easier than smooth scroll. On the other hand, people (myself included) are just terrible at introspection; my feelings could be completely wrong (and probably were), especially since I was so acclimatized to jump scrolling and smooth scrolling was new and strange.

Finally, in practice I've been doing 'tap the spacebar' whole page scrolling less and less for some time. Increasingly I scroll only a little bit at a time anyway, using a scroll wheel or a gesture. That made the change to smooth scrolling less important and also suggested to me that maybe there was something to the idea of a more continuous, less jumpy scroll, since I seemed to prefer something like it.

At this point I've been using browser smooth scrolling for more than a month. I'm not sure if it's a huge change, and it certainly doesn't feel as big of a change as my new fonts. In some quick experiments, it's clear that web pages scroll slower with smooth scrolling turned on, but at the same time that's also clearly deliberate; jump scroll is basically instant, while smooth scrolling has to use some time to actually be smooth. Switching to jump scrolling for the test felt disorienting and made it hard to keep track of where things were on the page, so at the least I've become used to how to work with smooth scrolling and I've fallen out of practice with jump scrolling on web pages.

On the whole I don't regret the change so far and I can even believe that it's quietly good for me. I expect that I'll stick with it.

(I admit that one reason I was willing to make the switch was my expectation that sooner or later both Firefox and Chrome were just going to take away the option of jump scrolling. Even if I wind up in the same place in the end, I'd rather jump early than be pushed late.)

TryingSmoothScrolling written at 01:05:44

2017-10-04

My new worry about Firefox 56 and the addons that I care about

In light of Firefox 57 and the state of old addons, where my old addons don't work in Firefox Nightly (or didn't half a month ago), my plan is to use Firefox 56 for as long as possible. By 'as long as possible', I mean well beyond when Firefox 56 is officially supported; I hope to keep using it until it actively breaks or is too alarmingly insecure for me. Using an actual released version of Firefox instead of a development version is an odd feeling, but now that it's out, I'm seeing a trend in my addons that is giving me a new thing to worry about. Namely, a number of them are switching over to being WebExtensions addons as preparation for Firefox 57.

This switch would be nothing to worry about if Firefox's WebExtensions API was complete (and Firefox 56 had the complete API), but one of the problems with this whole switch is exactly that Firefox's WE API is far from complete. There are already any number of 'we need feature X' bug reports from various extensions, and I'm sure that more are coming as people attempt to migrate more things to WebExtensions so that they'll survive the transition to Firefox 57. Unless things go terribly wrong, this means that future versions of Firefox are going to pick up more and more WebExtensions APIs that aren't in Firefox 56, and addon authors are going to start using those APIs.

In short, it seems quite likely that sticking with Firefox 56 is also going to mean sticking with older versions of addons. Possibly I'll have to manually curate and freeze my addons to pre-WebExtensions versions in order to get fully working addons (especially ones that don't leak memory, which is once again a problem with my Firefox setup, probably because recent updates to various addons have problems).

(Fortunately Mozilla generally or always makes older versions of addons installable on addons.mozilla.org if you poke the right things. But I'm not looking forward to bisecting addon versions to find ones that work right and don't leak memory too fast.)

The optimistic view of the current situation with Firefox addons is that the WebExtensions versions of popular addons are basically beta at this point because of relatively low use. With Firefox 56 released, people moving more aggressively to be ready for Firefox 57, and the (much) greater use of WebExtensions addons, the quality of WE addons will go up fairly rapidly even with the current Firefox WebExtensions APIs. This could give me stable and hopefully non-leaking addons before people move on and addons become incompatible with Firefox 56.

Firefox56AddonWorry written at 02:03:06

2017-09-15

Firefox 57 and the state of old pre-WebExtensions addons

Back in June I wrote an entry about the current state of pre-WebExtensions addons and what I thought was the likely future of them. Based on Mozilla's wiki page on this, which said that legacy extensions could still be loaded on Nightly builds with a preference, I said I expected that they would keep working well into the future. As far as I can currently tell, I was wrong about that and my optimistic belief was completely off.

As of right now given the ongoing state of Firefox Nightly, it appears that Mozilla has abandoned practical compatibility with many old addons. With the preference set, Mozilla may allow Nightly to load them, but Mozilla won't necessarily keep them working; in fact, available evidence suggests that Mozilla has been cheerfully breaking old extension APIs left and right. The only old APIs that we can probably count on continuing to work are the APIs that Mozilla needs for their own use (the 'signed by Mozilla internally' status of legacy extensions in the wiki page), and it's clear that these APIs are not sufficient for many addons.

In particular, all of my addons remain in various states between broken and completely non-functional on Nightly. The situation has not been improving since the preference change landed; if anything, my addons have been getting worse in more recent Nightly versions. Since this has persisted for weeks and weeks, I have to assume that Mozilla no longer cares about general legacy extensions on Nightly and they're either landing code improvements that have no support for them or are actively removing the code that supports legacy extension APIs. Or both. I can't blame Mozilla for this, since they've been saying for years now that they wanted to get rid of the old APIs and the old APIs were holding back Firefox development.

One implication of this is that Mozilla is now fairly strongly committed to their Firefox 57 extension plans, come what may. With legacy extensions broken in practice, Mozilla cannot simply re-flip the preference the other way to back out of the transition and return legacy extensions to operation. Nor do I think they have time to fix the code should they decide they want to. If I'm reading the Firefox release calendar correctly, we're about one or two weeks from Firefox Nightly transmuting into the Firefox 57 beta version, and then six weeks more until Firefox 57 is actually released.

The personal implication for me is that I've now reached the last Nightly version I can use, although it happened perhaps a month later than I thought it would way back when. Now that I look at the dates of things, I think my current Firefox 'Nightly' versions are actually before Firefox 56 branched off, so I should probably switch over to Firefox 56 and freeze on it. That way I'll at least get security fixes until November or so.

Firefox57OldAddonsState written at 01:54:58

2017-09-08

My view of the problem with Extended Validation TLS certificates

In a conversation on Twitter, I said:

EV isn't exactly a scam, but it is trying to (profitably) solve a problem that we don't actually know how to solve (and have failed at).

The main problem that plain old TLS certificates solve is making sure that you're talking to the real facebook.com instead of an imposter or a man in the middle. This is why they've been rebranded as 'Domain Validation (DV)' certificates; they validate the domain. DV certificates do this fairly well and fairly successfully; while there are ways to attack them, it's increasingly expensive and risky, and for various reasons the number of people hitting warnings and overriding them is probably going down.

The problem that Extended Validation TLS certificates are attempting to solve is that domain validation is not really sufficient by itself. You usually don't really care that you're talking to google.ca or amazon.com, you care that you're talking to Google or Amazon. In general people care about who (or what) they're connecting to, not what domain name it uses today for some reason.

(Mere domain validation also has issues like IDN homographs and domains called yourfacebooklogin.com.)

Unfortunately for EV certificates, this is a hard problem with multiple issues and we don't know how to solve it. In fact our entire history of trying to inform or teach people about web site security has been an abject failure. To the extent that we've had any meaningful success at all, it's primarily come about not through presenting information to people but by having the browser take away foot-guns and be more militant about not letting you do things.

There is no evidence that EV certificates as currently implemented in browsers do anything effective to solve this problem, and as Troy Hunt has written up there's significant anecdotal evidence that they do nothing at all. Nor are there any good ideas or proposals on the horizon to improve the situation so that EV certificates even come close to tackling the problem in the context where it matters.

Right now and for the foreseeable future, what EV certificates deliver is math, not security. As math they provide you with what they claim to provide you, which makes them not exactly a scam but also not exactly useful. I'm sure the CAs would like for EV certificates to solve the problem they're nominally aimed at, but in the meantime the CAs are happy to take your money in exchange for some hand-curated bits.

Sidebar: Some general issues with what EV certificates are trying to do

First, as far as we know people don't think of who they're talking to in any conveniently legible form, like corporate ownership. We know what we mean by 'Facebook', 'Google', 'Amazon', and so on, but it can be very hard to map this to specific concrete things in the world. See Troy Hunt's saga for one example of translating a theoretically simple 'who this is' concept into something that was more or less legible to a certificate authority and came out more or less right and more or less understandable.

Second, we don't know how to present our theoretical 'who this site is' information to people in a way that they will actually notice, understand, and be able to use. Previous and current attempts to present this information in the browser in a form that people even notice, much less understand, have been abject failures.

Finally, we don't even know how to get people to consider this issue in the first place. You see, I cheated in my description of the problem, because in reality people don't think about who they're connecting to most of the time. If it looks like Facebook and your browser isn't complaining, well, it's probably Facebook and you'll proceed. This is how people enter their Facebook credentials into 'yourfacebooklogin.com' (and we can't blame them for doing so, for any number of reasons).

(The final issue is a variant of the fundamental email phish problem.)

EVCertificateProblem written at 02:49:38

2017-08-25

The probable coming explosion of Firefox 57

Let's start with what I tweeted:

I don't think Mozilla understands how angry plenty of people are going to be when Firefox 57 is released and breaks many of their addons.

If you follow Firefox news, you already know what this is about; Firefox 57 is when Mozilla is going to turn off all old extensions, ones that are not WebExtensions (this is apparently different from being ready for Firefox Electrolysis, although all WebExtensions are okay with Electrolysis, or maybe there is something confusing going on).

Firefox Nightly builds recently switched over to disabling old extensions. Nominally this is just flipping the extensions.legacy.enabled preference from true to false and so I could flip it back, as I expected earlier this year. In practice all of my extensions immediately fell over, along with Nightly itself (it now hangs on startup if old extensions are enabled). Even the extensions that I thought were ready don't work now. Nightly is as Nightly does, and it was kind of broken for me in various ways even before this, but I consider this a pretty bad omen in general for the state of WebExtensions. According to Mozilla's release calendar we are less than three months away from the release of Firefox 57, and nothing works. Nor was there any sort of attempt at a graceful transition when I fired up the first version of Nightly with legacy addons turned off. I don't think I even got a warning.
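
For reference, flipping the preference back is a one-line change in about:config or in a profile's user.js, something like the line below; it just doesn't help much when the underlying extension support itself is breaking.

    // user.js form of the preference; about:config works just as well
    user_pref("extensions.legacy.enabled", true);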

(I don't know if this is because Firefox has no support for a graceful transition or because there are no WebExtensions versions of any of my addons, not even, say, NoScript, but neither option is a good sign.)

I'm pretty sure that Firefox has a great many current users who pay absolutely no attention to Firefox stuff due to complete lack of interest. These people are not up on news about WebExtensions and other such things, so they don't know about this change coming in Firefox 57. A certain number of these people use extensions extensively enough that their collection of addons effectively gives them a different browser. As far as I can tell, these people are going to fire up their auto-updating Firefox install some time after November 13th and experience what is to them a completely different and much more stripped-down browser with what seems to be very little warning.

(Fedora is on Firefox 55 right now, and several of the extensions I have enabled in my testing Firefox aren't even Electrolysis compatible, much less are WebExtensions. I'm getting no warning nags about this on browser startup or any other time. Perhaps this is different on other platforms, but I suspect not.)

When these users get their browser yanked out from underneath them, they are going to be angry. And rightfully so; yanking people's browser out from underneath them is an extremely rude thing to do. Based on their behavior so far, I don't think Mozilla really gets this. The other interpretation is that Mozilla doesn't care if some number of people wind up angry, which I think is also a bad mistake. Mozilla seems to be so intent on going full speed ahead to WebExtensions and damn the torpedoes that they're ignoring a violent derailment coming up ahead of them.

By the way, one of the problems here is that the Firefox version of the WebExtensions API is incomplete in that any number of current extensions can't be (fully) implemented in it. You can track some of the depressing state of affairs by looking at arewewebextensionsyet. Remember, we're theoretically less than three months away from permanently turning off many of these extensions in their current state.

Firefox57ComingExplosion written at 21:59:11
