Wandering Thoughts

2018-08-14

Our problem with HTTPS and user-created content

We have a departmental web server where people can host their personal pages, pages for their research groups, and so on, including user-run web servers behind reverse proxies. In other words, this web server has a lot of content, created by a lot of people, and essentially none of it is under our control. These days, this presents us with a bit of a problem.

Our departmental web server supports HTTPS (and has for years). Recent browser developments are clearly pushing websites from HTTP to HTTPS, even if perhaps not as much as has been heralded, and so it would be good if we were to actively switch over. But, well, there's an obvious problem for us, and the name of that problem is mixed content. A not insignificant number of pages on our web server refer to resources like CSS stylesheets using explicit HTTP URLs (either local ones or external ones), and so would and do break if loaded over HTTPS, where browsers generally block mixed content.

We are obviously not going to break user web pages just because the Internet would now kind of like to see us using HTTPS instead of HTTP; if we even proposed doing that, the users would get very angry at us. Nor is it feasible to get users to audit and change all of their pages to eliminate mixed content problems (and from the perspective of many users, it would be make-work). The somewhat unfortunate conclusion is that we will never be able to do a general HTTP to HTTPS upgrade on our departmental web server, including things like setting HSTS. Some of the web server's content will always be in the long tail that never migrates to HTTPS, and it will continue to be served over plain HTTP for years to come.

(Yes, CSP has upgrade-insecure-requests, but that only reliably helps for local resources; external references get upgraded too, and they still break if the external site doesn't serve HTTPS.)
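If we ever wanted a rough idea of how big the mixed content problem actually is, hard-coded http:// resource references are easy enough to scan for. Here's a minimal sketch in Python; the directory argument and the set of tags it checks are just illustrative assumptions:

    #!/usr/bin/env python3
    # Rough scan for hard-coded http:// resource references in HTML files.
    # A sketch only; the directory argument and the tag/attribute list
    # are illustrative assumptions, not an exhaustive check.
    import sys
    from pathlib import Path
    from html.parser import HTMLParser

    # Tags whose references become mixed content over HTTPS. Plain <a>
    # links are fine, so they aren't listed.
    RESOURCE_ATTRS = {'script': 'src', 'img': 'src', 'iframe': 'src',
                      'link': 'href', 'source': 'src', 'embed': 'src'}

    class MixedContentFinder(HTMLParser):
        def __init__(self, fname):
            super().__init__()
            self.fname = fname

        def handle_starttag(self, tag, attrs):
            want = RESOURCE_ATTRS.get(tag)
            for name, value in attrs:
                if name == want and value and value.startswith('http://'):
                    print(f"{self.fname}: <{tag}> loads {value}")

    for path in Path(sys.argv[1]).rglob('*.html'):
        MixedContentFinder(path).feed(path.read_text(errors='replace'))

(Something this crude misses url() references inside CSS and the like, which is part of why a real audit is more work than it looks.)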

Probably this issue is confronting anyone with significant amounts of user-created content, especially in situations where people wrote raw HTML, CSS, and so on. I suspect that a lot of these sites will stay HTTPS-optional for a long time to come.

(Our users can use a .htaccess to force HTTP to HTTPS redirection for their own content, although I don't expect very many people to ever do that. I have set this up for my pages, partly just to make sure that it worked properly, but I'm not exactly a typical person here.)

(This elaborates on an old tweet of mine, and I covered the 'visual noise' bit in this entry.)

HTTPSUserContentProblem written at 00:15:40

2018-08-05

Some more notes on Firefox 63 and perhaps later media autoplay settings

A few months ago I wrote some notes on Firefox's current media autoplay settings, which described the then state of affairs of Firefox Nightly and Firefox (and I then followed it up by discovering that Firefox needed some degree of autoplay support). It's perhaps not surprising that Mozilla has kept evolving this (given the general publicity around how people are unhappy about auto-playing videos), so things have changed around a bit in Firefox Nightly and thus presumably in Firefox 63 and later (until they change again). The current state of affairs has fixed some of my irritations but not all of them, and Mozilla has now made things more accessible in general.

First, Firefox now exposes a preference for media autoplay, and can have per-site settings for it. You find this in Preferences → Privacy & Security, down in the 'Permissions' section, in a new setting that is (in English) 'For websites that autoplay sound'. Your options are allow, ask (or 'prompt'), and deny, and you also have an 'Exceptions' list. This preference corresponds to the media.autoplay.default setting. If set to 'ask', every time you visit a new site that wants to autoplay something Firefox will pop up a little note about it, and then remember whatever you answer. If you block a site (or block all sites), you can still start video autoplay by hand.

(The old media.autoplay.enabled setting is now no longer used by anything. Also, it turns out that surfacing the preferences stuff is controlled by the media.autoplay.ask-permission setting, which Firefox Nightly defaults to true. It's possible that this means that some of this won't be visible and enabled in Firefox 63, but you can turn it on manually.)

As before, Firefox defaults to auto-playing silent or muted video. However, you can now control this through a setting, media.autoplay.allow-muted. Because this is an un-exposed setting, the 'can I play this' questions from Firefox always refer to 'media with sound' even if the video in question is silent or muted. This can be a little bit confusing. Unlike before, Firefox now always allows autoplay for bare video content such as directly linked .mp4s (these appear to be called 'video documents'), regardless of your settings (although they seem to start playing only when you switch to their tab). This is hard-coded in IsMediaElementAllowedToPlay() in AutoplayPolicy.cpp. Since I don't like this behavior, I hope that Mozilla adds a setting that allows us to control it, as they have for silent and muted video.

(Conveniently, the code in AutoplayPolicy.cpp now has log messages to record why something was allowed to autoplay; these are very helpful in understanding the code itself. One of the interesting new cases this exposes is that apparently addons can always autoplay things on their own internal pages.)

There is also now a separate check and setting for audio. Based on the code, setting media.autoplay.block-webaudio to true will more or less completely block web audio, possibly without any user override at all. Probably you don't want to do this unless you never, ever want your Firefox to play web audio under any circumstances at all.

The new setting media.autoplay.block-event.enabled appears to control whether JavaScript gets some sort of 'your autoplay is blocked' error when the autoplay is blocked by your settings. Firefox defaults it to off and I would leave it that way; presumably Mozilla knows what they're doing here.

So, the short version of what you want to do on Firefox Nightly and likely Firefox 63 if you want as little autoplay as possible is now:

  • go to Preferences → Privacy & Security and the Permissions area, and set 'For websites that autoplay sound' to 'Don't Autoplay'. Equivalently, set media.autoplay.default to 1, but I'd use the Preferences version just to be slightly safer.
  • in about:config, set media.autoplay.allow-muted to false.

Other relevant settings are now already at the values that you want. In particular, media.autoplay.enabled.user-gestures-needed now defaults to true, and probably will get removed some day in the future.
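If you want to double check what your profile has actually wound up with, preferences that have been changed from their defaults get written to prefs.js in your profile directory as user_pref() lines. A little sketch that pulls out the autoplay-related ones (pass it the path to your prefs.js; the list of pref names is just the ones discussed here):

    # Report the autoplay-related preferences that are set in a Firefox
    # prefs.js (only preferences changed from their defaults appear there).
    import re
    import sys

    AUTOPLAY_PREFS = {"media.autoplay.default", "media.autoplay.allow-muted",
                      "media.autoplay.ask-permission",
                      "media.autoplay.block-webaudio",
                      "media.autoplay.enabled.user-gestures-needed"}

    pat = re.compile(r'user_pref\("([^"]+)",\s*(.+)\);')
    with open(sys.argv[1]) as f:   # eg ~/.mozilla/firefox/<profile>/prefs.js
        for line in f:
            m = pat.match(line.strip())
            if m and m.group(1) in AUTOPLAY_PREFS:
                print(m.group(1), "=", m.group(2))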

FirefoxMediaAutoplaySettingsII written at 23:08:19

2018-07-24

I doubt Chrome's new 'not secure' warning about HTTP sites will change much (at least right away)

In theory today (July 24th) is the start of an HTTP apocalypse, because Google has launched Chrome 68 and Chrome 68 labels all HTTP sites as 'not secure'. More exactly, it adds a 'not secure' label to the URL bar (or omnibox, if you prefer that term). It's possible that Firefox will follow now that Chrome has led the way on this, and in any case Chrome apparently has about 60% of the browser market, so its decision here affects a lot of people. However, I don't think this is going to be as big a deal as you might expect (and as some people fear), for three interlinked reasons.

The first reason is the same fundamental issue as the one affecting EV certificates, which is that all this is doing (right now) is changing the URL bar a little bit. We have pretty good proof (from EV certificates among other things) that very few people pay much attention to the URL bar, and the 'not secure' label is even less prominent than the EV certificate indicator was (EV certificates at least used a different colour). It seems fairly unlikely that people will even notice the change, which is an obvious prerequisite for them caring.

The second reason is that people mostly don't care about this. When people go to a website, it's because they want to see the website, and they really don't care about anything in the way (as we have seen in the past when browsers let people easily override TLS certificate warnings). There aren't likely to be very many people who will change their behavior because they're suddenly being warned (a very little bit) that their connection is 'not secure'. Without the users visibly caring, many sites will not have much extra motivation to change.

(They'll have some extra motivation; the 'not secure' is a nudge. But it's not a really strong nudge, at least not now.)

The third reason is that plenty of sites are going to remain HTTP (and thus 'Not secure') for a great deal of time to come. For many people, this will make the 'not secure' label a routine thing that they see all the time, and routine things rapidly lose any power they might once have had. If even a tenth of your web browsing is 'not secure' and nothing particularly bad happens, you're likely to conclude that the 'not secure' warning is unimportant and something you can freely ignore. This feeds into the other two reasons; unimportant things get ignored, and if you are one site in a crowd of many, why go to much work to change (especially if no one seems to care).

I understand why Google and other people are enthused about this and I think it's a positive step forward to an all-HTTPS world. But in my opinion the 'not secure' label is only the tip of the iceberg as far as its importance goes and we shouldn't expect that label to do much on its own. I suspect that the long run importance of this will be how it changes the attitudes of web developers and website operators, not any changes in user behavior.

(To put it one way, the 'not secure' label is the surface sign of an increasingly broad consensus view that HTTP needs to go away (for good reasons). That we have gotten far enough along in this view that the Chrome developers can make this change without facing a big backlash is the big thing, not the label itself.)

HTTPInsecureDoubts written at 23:10:40

2018-07-06

I'm seeing occasional mysterious POST requests without Content-Types

Sometimes I go out of my way to turn over rocks in the web server logs for Wandering Thoughts, but other times my log monitoring turns them over for me. The latter is how I know that Wandering Thoughts has been seeing periodic bursts of unusual POST requests that don't appear to have a Content-Type. I saw another such burst today, so I'm going to write this one up.

Today's burst is six requests from a single IP (86.139.145.21), POST'ing to a single entry between 12:55:12 and 12:56:08. In fact there were two bursts of three POSTs each, one burst at 12:55:12 and 12:55:13 and the second at 12:56:08. DWiki's logging says that all of them lacked a Content-Type, but it didn't record any other details. This specific IP address made no other requests today, or even in the past nine days. On July 2nd, it was nine POSTs to this entry from 59.46.77.82 in three bursts of three, at 21:36:20, 21:42:2[12], and 21:53:35. Both IPs used a very generic User-Agent that I believe is simply the current Chrome on Windows 10.

In all of the cases so far, the POSTs are made directly to the URL of a Wandering Thoughts entry, not to, say, the 'write a comment' page. This is noteworthy because I don't have any forms or other links that do POST submissions to entry URLs; all references to entry URLs are plain links and thus everyone should be using GET requests. Anything that's deciding to make these POST requests is making them up, either by mistake or through some maliciousness.

(In the past I've seen zero length POSTs with a valid HTML form content-type, which I believe were also for regular entry URLs although past me didn't write that explicitly in the entry.)

There's a part of me that wants to augment DWiki's logging to record, say, the claimed Content-Length for these POST requests so I can see if they claim to have content or if they're 0-length. Probably this is going further in turning over rocks than I want to, unless I'm going to go all the way to logging the actual POST body to try to see what these people are up to.
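For what it's worth, if I did add that logging, a WSGI-style middleware is roughly the shape it would take. This is only a sketch with made-up names, not how DWiki's request handling or logging actually works:

    # A sketch of WSGI middleware that logs POSTs lacking a Content-Type,
    # including their claimed Content-Length. The logger name and the set
    # of fields recorded are made up for illustration.
    import logging

    log = logging.getLogger("oddpost")

    class LogOddPosts:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            if environ.get("REQUEST_METHOD") == "POST" and \
               not environ.get("CONTENT_TYPE"):
                log.warning("POST without Content-Type: %s from %s, "
                            "Content-Length=%s",
                            environ.get("PATH_INFO", ""),
                            environ.get("REMOTE_ADDR", "?"),
                            environ.get("CONTENT_LENGTH", "(none)"))
            return self.app(environ, start_response)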

(Apparently POSTs without a Content-Type are technically legal and you're supposed to interpret the contents as the generic application/octet-stream (unless you want to attempt to guess by inspecting the data, which you don't). See eg here, pointing to the HTTP 1.1 specification. However, all of my POST forms properly specify the content-type the browser should use, so this shouldn't be happening even for proper POST requests to valid POST URLs.)

PS: Apache probably accepts POSTs with no Content-Type to static, GET-only resources because Apache will accept pretty much anything you throw at it. DWiki is more cautious, although that's basically become a mistake.

POSTWithoutContentType written at 01:35:10

2018-07-01

Understanding the first imperative of a commercial Certificate Authority

A lot of things about how the CA business operates and what CAs do are puzzling from the outside, and may even lead people to wonder how exactly a CA could ever do some particular crazy thing. I've come to feel that we can understand a lot by understanding that the first imperative of a commercial CA is to sell TLS certificates, no matter what that requires.

(This is different from the CA's first job of having its root certificates included in all of the browsers, which these days absolutely must include iOS and Android.)

There are well-intentioned people at many commercial CAs who care about the overall security and health of the TLS ecosystem, and some of them hold some degree of power in their respective organizations. But they cannot change the overall nature of the beast that is a commercial CA, because being commercial means that they must make a profit somehow and that means selling certificates (and in order to grow, they must sell more certificates or more expensive certificates or both).

One important consequence of this is that commercial CAs are fairly highly motivated to push the edges of trust and security, especially today (given Let's Encrypt's increasing domination). Sure, their good employees have pushed back and will push back to the extent that they can, but that can only go so far. As we've seen over and over with email spam, sooner or later the people on the side of money win those arguments, and the only real limit is the increased willingness of browsers to kick CAs to the curb. So we shouldn't be at all surprised when CAs do bad stuff, especially now. One extremely cynical view of this dynamic is that commercial CAs don't really want to securely validate things; they want to find some excuse to take your money and give you some magic bits. If they can make that excuse be secure, that's great, but it's not the most important thing.

(Although I can't find the details now, I believe there was a CA that was accepting emailed 'scans' of 'official documents' from would-be customers as proof of control of domains. This seems obviously crazy from the outside.)

Another cynical way to look at the current situation is that a commercial CA's only remaining natural market is people who can't use Let's Encrypt certificates. Sometimes this will be people who can't deal with short duration certificates, but at least some of the time it's going to be people who can't pass LE's checks for some reason, probably a good reason. Commercial CAs are quite motivated to find some way to give them a certificate anyway.

(Commercial CAs also have a legacy market in people who either haven't heard of Let's Encrypt or don't understand it, but that market is going to shrink over time. We can probably expect commercial CAs to work hard with FUD to keep these people ignorant and in the fold.)

Next, no commercial CA is going to propose or support anything that cuts its own throat, no matter how good for security it would be, and while there are some motives for supporting measures that wind up increasing your operational costs (if this is somehow a benefit to you over your competition), there are limits (and CAs may be hitting them). Commercial CAs are also likely to try to persuade browsers to do things that help out EV certificates, and they're probably going to do a lot of that persuasion in public in order to try for greater pressure.

This shades into another obvious but sad consequence, which is that commercial CAs have a great motive for encouraging ignorance, superstition, and FUD, especially over things like EV certificates (see Troy Hunt tearing apart some recent CA marketing FUD, for example). If people with money don't understand that they can just get a DV TLS certificate from Let's Encrypt and it's just as good as an EV cert (see also), you have a chance to sell them your version of this commodity.

One conclusion I draw from this is that CAs are likely to refuse to drop the maximum certificate validity period down very low, because relatively long duration certificates are one area where they have something that Let's Encrypt doesn't.

(I've probably said some variant of this in past entries, but I haven't written it up as a full entry. For various reasons I feel like doing it today.)

Immediate post-publication update: See Digicert withdrawing from the CA Security Council and the HN comments on it, especially this discussion of the background of the CASC and so on.

CAFirstImperative written at 22:57:12

2018-06-14

Clearing cached HTTP redirections or HSTS status in Firefox

As far as I know, all browsers cache (or memorize) HTTP redirections, especially (allegedly) permanent ones, including ones that push you from one site to another. Browsers all also remember the HSTS status for websites, and in fact this is the entire point of HSTS. This is great in theory, but sometimes it goes wrong in practice (as I've noted before). For example, someone believes that they have a properly set up general HTTPS configuration for a bunch of sites, so they wire up an automatic permanent redirection for all of them, and then it turns out that their TLS certificates aren't set up right, so they turn the HTTPS redirection off and go back to serving the sites over HTTP. In the meantime, you've visited one site and your Firefox has a death grip on the HTTP to HTTPS redirection, which very definitely doesn't work.

Such a cached but broken HTTP to HTTPS redirection recently happened to me in my main Firefox instance, so I set out on an expedition to find out how to fix it. The usual Internet advice on this unfortunately has the side effect of completely clearing your history of visited URLs for the site, which isn't something that I'm willing to do; my browser history is forever. Fortunately there's a different way to do it, which I found in this superuser.com answer. The steps I'm going to use in the future are:

  • get yourself a new, blank tab (although any source of a link to the site will work, such as my home page).
  • call up the developer tools Network tab, for example with Ctrl-Shift-E or Tools → Web Developer → Network.
  • tick the 'Disable Cache' tickbox.
  • enter the URL for the site into the URL bar (or otherwise go to the URL). This should give you an unredirected result, or at least force Firefox to actually go out to the web server and get another redirection, and as a side effect it appears to clear Firefox's memory of the old redirection.
  • turn the cache back on by unticking the 'Disable Cache' tickbox.

When I did this, it seemed necessary to refresh or force-refresh the page a few times with the cache disabled before it really took and flushed out the cached HTTP redirect.
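While sorting this out, it can also help to confirm what the web server itself currently returns for the URL, independent of anything Firefox has cached. A quick sketch using Python's http.client, which never follows redirects on its own (the host and path are placeholders):

    # Ask the web server directly what it returns for a URL. http.client
    # does not follow redirects, so a 301/302 here means the server is
    # still redirecting; a 200 means the redirect you see is Firefox's.
    # The host and path are placeholders.
    import http.client

    conn = http.client.HTTPConnection("www.example.org", timeout=10)
    conn.request("GET", "/somepage/")
    resp = conn.getresponse()
    print(resp.status, resp.reason, resp.getheader("Location"))
    conn.close()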

(Apparently you can also do this by clearing only the cache through the History menu, see for example this answer. I didn't use this for various reasons, but it does appear to work. This presumably has the side effect of clearing all of your cache, for everything, but this may be tolerable.)

While I was trying to solve this issue I also ran across some pages on how to delete a memorized Firefox HSTS entry (without deleting your entire history for the site). The easiest way to do this is to shut down Firefox, find your profile directory, and then edit the file SiteSecurityServiceState.txt that's in it. This is a text file with a straightforward one site per line format; find the problem site in question and just delete the entry.

(People with more understanding of the format of each line might be able to de-HSTS a site's entry, but I'm lazy.)
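If you'd rather not hand-edit the file, the one-entry-per-line format makes this easy to script. Here's a minimal sketch; it assumes that each line starts with the host name (possibly followed by something like ':HSTS'), so check your own file first, keep the backup it makes, and make sure Firefox isn't running:

    # Drop one host's entry from Firefox's SiteSecurityServiceState.txt.
    # Quit Firefox first. The assumption that each line's first field is
    # 'host' or 'host:...' should be checked against your own file.
    import shutil
    import sys

    def drop_hsts_entry(statefile, host):
        shutil.copy(statefile, statefile + ".bak")    # keep a backup
        with open(statefile) as f:
            lines = f.readlines()
        def keeps(line):
            if not line.strip():
                return True
            first = line.split(None, 1)[0]
            return first.split(':')[0] != host
        with open(statefile, "w") as f:
            f.writelines(line for line in lines if keeps(line))

    if __name__ == "__main__":
        drop_hsts_entry(sys.argv[1], sys.argv[2])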

PS: As more and more sites use HSTS, I suspect that Mozilla is going to wind up changing how Firefox stores HSTS information away from the current text file approach. Hopefully they'll provide some way for an advanced user to force-forget HSTS entries for a host.

PPS: Sadly, I don't expect Firefox to ever provide the APIs that an addon would need to do this, especially for HSTS. Browsers probably really don't want to give addons any way of overriding a site's HSTS settings, and it certainly seems like a dangerous idea to me. The days when we could extend unreserved trust to browser addons are long over; the approach today is to cautiously give them only a very limited amount of power.

FirefoxClearRedirectsHSTS written at 00:57:44

2018-06-11

A website's design shows its actual priorities

I'll start with my tweet:

Reddit's new design makes it clear that Reddit is now fundamentally a discussion forum, not the aggregator of interesting links and things to read that it started out as and used to be. So it goes.

To explain this, I need to show a screenshot of the new and the old design. Let's start with the new design:

[Screenshot: new Reddit r/golang]

Ignoring all of the white space for now, look at the nice big title. Do you expect that to link to the actual article? Surprise, it doesn't; it takes you to the Reddit discussion of the article. The actual link to the article itself is the blue text in a much smaller font, and you can see here that it's truncated when displayed (despite how much extra space there is to show it in full). This design mostly wants you to click on the big prominent title, not the small, hard to hit blue thing.

(It turns out that you can make this more compact, but that doesn't make the links any bigger or more obvious.)

About all you can say about the prominence of the actual links in this design is that they're in blue, but modern web design is such that I'm not sure people these days assume that blue is a link instead of, well, just blue for some reason.

Compare this to the old design:

[Screenshot: old Reddit r/golang]

Here the prominent titles, which are the things that are both the most obvious and the easiest to click on, actually link to the article. They're also the standard blue colour, which might actually be read as links in this less ornate design. This design also has a sidebar of useful links that go to places outside of Reddit, a sidebar that's not present in the new design no matter how wide you make your browser window.

(The old design also shows visited links in a different colour, as Reddit always has, unlike the new design.)

It's pretty clear to me that the old design intended people to click on the links to articles, taking you away from Reddit; you might then return back to read the Reddit comments. The new design intends for you to click on the links to the Reddit discussions; even on the individual discussion page for a link, the link itself is no more prominent than here. As it is, posts to r/golang and elsewhere are often simply on-Reddit questions or notes; with the new design, I expect that to happen more and more.

(I'm not terribly surprised by the new Reddit design, for the record, because it's very much like the mobile version of their website, which has long made the discussion page prominent and easy to hit and the actual links small and hard to hit. On mobile this is especially frustrating because you don't have a mouse and so hitting small targets is much harder. Perhaps the experience is slightly better in their app, but I won't be installing Reddit's app.)

I don't know if Reddit has been clear about their priorities for their new design; perhaps they have been. But it hardly matters since regardless of what they may have said, their actual goals and priorities show quite clearly in the end result. Design is very revealing that way.

Of course sometimes what it reveals is that you have no idea what your priorities are, so you're just randomly throwing things out and perhaps choosing based on what looks good. But competent design starts with goals and with the designer asking what's important and what's not. Even without that, decisions about things like relative font size almost always involve thinking about what's more and less important, because that's part of both how and why you decide.

(So yes, font size by itself sends a message. Other elements do too, even if they may be chosen through unconsidered superstition. But Reddit is a sufficiently big site and a total redesign of it is a sufficiently big thing that I doubt anything was done without being carefully considered.)

SiteDesignShowsPriorities written at 22:41:51

2018-06-03

Why I believe that HTTPS-only JavaScript APIs make sense

One of the things going on with browsers today is that they're moving to make more JavaScript APIs (especially new APIs) available only to pages and resources loaded over HTTPS. One commonly believed reason for browsers to do this is that it helps drive adoption of HTTPS in general by providing both an incentive and a goad to site operators to move to HTTPS. If you want access to the latest attractive browser APIs for your JavaScript, you have to eat your vegetables and move to HTTPS. While I expect that browser people see this as one motivation for these restrictions, I think there are good reasons for 'must be served over HTTPS' restrictions beyond that. The easiest way to see this is to talk about site permissions.

Increasingly, JavaScript APIs are subject to user permission checks. If a website wants to use a location API to know where you are, or use your camera or microphone, or give you notifications, or various other things, the browser will ask you if you want to authorize this and then remember your choice. Beyond these special permissions, JavaScript from the same origin has additional access to cookies, data and so on that's not available to JavaScript from other origins (for obvious and good reasons; you wouldn't want my site to be able to fish out your cookies for Google).
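As an aside, the 'origin' here is essentially the (scheme, host, port) triple, which means the HTTP and HTTPS versions of a site are different origins to start with. A tiny illustration (the URLs are just the example names used below):

    # An origin is essentially the (scheme, host, port) triple, so the
    # HTTP and HTTPS versions of the same host are different origins.
    from urllib.parse import urlsplit

    def origin(url):
        p = urlsplit(url)
        port = p.port or {'http': 80, 'https': 443}.get(p.scheme)
        return (p.scheme, p.hostname, port)

    print(origin("http://mygood.org/page"))    # ('http', 'mygood.org', 80)
    print(origin("https://mygood.org/page"))   # ('https', 'mygood.org', 443)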

The problem with these origin permissions is that if you give special permissions to a HTTP site, say, mygood.org, you are not really giving out the permissions that you think you are. Sure, you're allowing mygood.org to do these things, but you're also allowing anyone between you and mygood.org to do these things, because all of these people can inject JavaScript into pages as they fly by or simply redirect your connection to their portal or whatever. Your ISP can do this (and some have), the local coffee shop's wifi can, and so on. Everyone who can do this gets to act with whatever permissions you've given mygood.org. With sufficient cleverness they don't even have to wait until you visit the site itself; they can inject a redirection or an iframe that visits mygood.org for you.

In order for such site and origin permissions to really mean what they seem to mean, you need to be sure that you're only getting content from the origin; only then are you actually trusting only the origin, not the origin plus some extra places most people won't be aware of. The only general way we have to ensure this is HTTPS. HTTPS makes sure that what you think is coming from the origin actually is. As a result, it's entirely sensible to restrict such special JavaScript APIs to pages served over HTTPS, along with any API that you think might need per-site restrictions in the future.

(This is probably obvious to everyone who's deep in this stuff already, but it only occurred to me recently.)

WhyHTTPSOnlyAPIs written at 03:12:58

2018-05-31

What is the long term future for Extended Validation TLS certificates?

One of the things I wonder about with Extended Validation TLS certificates is what things will look like for them in the long term, say five to ten years from now. I don't think things will look like they do today, because as far as I can see EV certificates are in an unstable situation; in practice they're invisible and so don't provide any real benefits. Commercial Certificate Authorities certainly very much want EV certificates to catch on and become more important, but so far it hasn't happened and it's quite possible that things could go the other way.

So here are some futures that I see for EV certificates, covering a range of possibilities:

  • EV certificates become essentially a superstition that lingers on as 'best practices' among large corporations for whom both the cost and the bureaucracy are not particularly a factor in their choices. These organizations are unlikely to go with CAs like Let's Encrypt anyway, so while they're paying for some TLS certificates they might as well pay a bit more, submit some more paperwork, and get something that makes a minor difference in browsers.

  • EV certificates will become quietly irrelevant and die off. CAs won't be able to do enough EV certificate business to make it worth sustaining the business units involved, so they'll quietly exit the unprofitable business.

  • Browsers will become convinced that EV certificates provide no extra value (and if anything they just confuse users in practice) and will remove the current UI, making EV certificates effectively valueless and killing almost all of the business. Browsers hold all the cards here and at least Mozilla has openly refused to commit to any particular UI for EV certificates. See, for example, Ryan Hurst's "Positive Trust Indicators and SSL", which also dumps some rain on EV certificates.

    One thing that could tip the browser balance here is scandals over CAs improperly issuing (or not issuing) EV certificates. If EV certificates don't seem routinely worth extra trust, it becomes more likely that browsers will stop giving them any extra trust indicators.

  • CAs will persuade browser vendors to make some new browser features (in JavaScript, DOM and host APIs, CSS, etc) conditional on the site having an EV certificate, on the grounds that such sites are 'extra trustworthy'. I don't think this is likely to happen, but I'm sure CAs would like it to since it would add clear extra value to EV certificates and browsers are making APIs conditional on HTTPS.

    (A 'must be HTTPS' API restriction has a good reason for existing, one that doesn't apply to EV certificates specifically, but that's another entry.)

  • CAs will persuade some other organization to make some security standard require or strongly incentivize EV certificates; the obvious candidate is PCI DSS, which already has some TLS requirements. This would probably be easier than getting browsers to require EV certificates for things and it would also be a much stronger driver of EV certificate sales. I'm sure the CAs would love this and I suspect that at least some companies affected by PCI DSS wouldn't care too much either way. However, some CA moves on EV certificates might harm this.

    (On the other hand, some large ones would probably care a lot because they already have robust TLS certificate handling that would have to be completely upended to deal with the requirements of EV certificates. For instance, Amazon is not using an EV certificate today.)

On balance, the first outcome seems most likely to me at the moment, but I'm sure that CAs are working to try to create something more like the last two, since EV certificates are probably their best hope for making much money in the future.

(I also wonder what the Certificate Authority landscape will look like in five to ten years, but I have fewer useful thoughts on that apart from a hope that Let's Encrypt is not the only general-use CA left. I like Let's Encrypt, but I think that a TLS CA monoculture would be pretty dangerous.)

EVCertificatesEndgame written at 23:35:58

Extended Validation TLS certificates are basically invisible

Extended Validation TLS certificates are in theory special TLS certificates that are supposed to give users higher assurances about the website that they're visiting; Certificate Authorities certainly charge more for them (and generally do more verification). There are some fundamental problems with this idea, but there's also a very concrete practical problem, namely that EV certificates are effectively invisible.

Today, the only thing the presence or absence of an EV certificate does is that it changes the UI of the browser URL bar a little bit. Quick, how often do you pay any attention to your browser URL bar when you visit a site or follow a link? I pay so little attention to it that I didn't even notice that my setups of Firefox seem to have stopped showing the EV certificate UI entirely (and not because I turned much of it off in my main Firefox).

(It turns out that the magic thing that does this in Firefox is turning off OCSP revocation checks. I generally have OCSP turned off because it's caused problems for me. It's possible that websites using OCSP stapling will still show the EV UI in Firefox, but I don't have any to check. By the way, if you experiment with this you may need a browser restart to get the OCSP preference setting to really apply.)

This matters because if EV certificates are effectively invisible, it's not at all clear why you should bother going through the hassle of getting them and, more importantly for CAs, why you should pay (extra) for them. If almost no one can even notice if your website uses a fancy EV certificate, having a fancy EV certificate is doing you almost no good.

(This is an especially important question for commercial CAs, since Let's Encrypt is busy eating their business in regular 'Domain Validated' TLS certificates. It certainly appears that the future price of almost any basic DV certificate is going to be $0, which doesn't leave much room for the 'commercial' part of running a commercial CA.)

The current invisibility of EV certificates is not exactly a new issue or news, but I feel like doing my part to make it better known. There's a great deal of superstition running around the TLS ecosystem, partly because most people rightfully don't pay much attention to the details, and the idea that EV certificates are clearly better is part of it.

(EV certificates involve more validation and more work by the CA, at least right now. You can say that this intrinsically makes them better or you can take a pragmatic view that an improvement that's invisible is in practice nonexistent. I have no strong opinion either way, and I'll admit that if you offered me EV certificates with no extra hassle or cost, sure, I'd take them. Would I willingly pay extra for them or give up our current automation? No.)

EVCertificatesInvisible written at 00:01:22
