Our problem with HTTPS and user-created content

August 14, 2018

We have a departmental web server where people can host their personal pages, pages for their research groups, and so on, including user-run web servers behind reverse proxies. In other words, this web server has a lot of content, created by a lot of people, and essentially none of it is under our control. These days, in one sense this presents us with a bit of a problem.

Our departmental web server supports HTTPS (and has for years). Recent browser developments are clearly pushing websites from HTTP to HTTPS, even if perhaps not as much as has been heralded, so it would be good for us to actively switch over. But, well, there's an obvious problem for us, and the name of that problem is mixed content. A not insignificant number of pages on our web server refer to resources like CSS stylesheets with explicit HTTP URLs (either local ones or external ones), and so they would and do break when loaded over HTTPS, since browsers generally block mixed content.
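
To make this concrete, here is a hypothetical fragment of the sort of page I mean (the host names and paths are made up for illustration). Served over HTTPS, a modern browser will block both stylesheet loads and the page will render unstyled:

    <head>
      <!-- Explicit http:// URLs; over HTTPS these become blocked
           mixed content, whether local or external. -->
      <link rel="stylesheet" href="http://www.example.edu/~someone/style.css">
      <link rel="stylesheet" href="http://external-site.example.org/theme.css">
    </head>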

We are obviously not going to break user web pages just because the Internet would now kind of like to see us using HTTPS instead of HTTP; if we even proposed doing that, the users would get very angry at us. Nor is it feasible to get users to audit and change all of their pages to eliminate mixed content problems (and from the perspectives of many users, it would be make-work). The somewhat unfortunate conclusion is that we will never be able to do a general HTTP to HTTPS upgrade on our departmental web server, including things like setting HSTS. Some of the web server's content will always be in the long tail of content that will never migrate to HTTPS and will continue to be HTTP content for years to come.
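
(To make the HSTS point concrete: turning it on means serving a response header like the one below over HTTPS. Once a browser has seen it, the browser will insist on HTTPS for the whole site until the max-age expires, which is exactly the commitment we can't make. The one-year value here is just an illustration.)

    Strict-Transport-Security: max-age=31536000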

(Yes, CSP has upgrade-insecure-requests, and it does rewrite external URLs as well as local ones, but the upgraded request only works if the external host actually supports HTTPS, which we can't count on; in practice it only reliably helps for local resources.)
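
(For the record, the directive is delivered as a CSP response header, like so:)

    Content-Security-Policy: upgrade-insecure-requests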

This issue probably confronts anyone with significant amounts of user-created content, especially in situations where people wrote raw HTML, CSS, and so on. I suspect that a lot of these sites will stay HTTPS-optional for a long time to come.

(Our users can use a .htaccess to force HTTP to HTTPS redirection for their own content, although I don't expect very many people to ever do that. I have set this up for my pages, partly just to make sure that it worked properly, but I'm not exactly a typical person here.)
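
(A minimal sketch of such a .htaccess, assuming mod_rewrite is enabled and permitted in .htaccess files; this is an illustration, not our exact setup:)

    # Redirect any plain HTTP request to the HTTPS version of the same URL.
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]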

(This elaborates on an old tweet of mine, and I covered the 'visual noise' bit in this entry.)


Comments on this page:

By Jukka at 2018-08-14 05:04:23:

But mixed-content delivery is often a real security risk. While you certainly do not want external (cross-origin) JavaScript content to be delivered via plain HTTP, you can do pretty nasty stuff already with modern CSS (provided that you are in a MitM position, which is not that difficult nowadays in university networks and the like). HSTS is a nice thing, but it won't solve the root problems with bad/legacy web designs (such as those typically used by universities worldwide...).

By Nick at 2018-08-14 05:58:49:

Let's summarise the modern web.

a) Browser vendors provide insecure products ("nasty ... modern CSS", etc etc).

b) So web browsers provide a variety of ways to attack the user.

c) In order to correct (b) we are supposed to take advice from the people responsible for (a).

By Jukka at 2018-08-14 12:19:40:

Nick: I don't claim to understand the big picture, but surely it must be a little more complicated than this.

Though, there is a grain of truth here: "browser vendors" (namely Google today, and Microsoft et al. yesterday) are indeed behind much of the risky functionality. They are also the ones running the show at the W3C and whatnot. In this sense, I applaud Google et al. for putting effort into patching (with CSP, HSTS, etc.) the mess they're partially responsible for.

This said, I think the fundamental problem is that the Web never had any kind of a security model to begin with. You could probably find a good quote about this from Berners-Lee or the like.

In some ways I'm disappointed that the HTTP "Upgrade" header (RFC 2817) never took off:

      GET http://example.bank.com/acct_stat.html?749394889300 HTTP/1.1
      Host: example.bank.com
      Upgrade: TLS/1.2
      Connection: Upgrade

      HTTP/1.1 101 Switching Protocols
      Upgrade: TLS/1.2, HTTP/1.1
      Connection: Upgrade

While opportunistic encryption isn't perfect, it's better than pure plain-text for many situations.

By Martin at 2018-08-26 17:20:25:

As the reverse proxy is under your control, you could always inject JS that forces everything to HTTPS...
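
A minimal sketch of what such injected JS might look like (purely illustrative; it assumes the external hosts actually support HTTPS, and it comes too late to help with <script src="http://..."> tags):

    // Rewrite hardcoded http:// stylesheet and image URLs to https://
    // so they are no longer blocked as mixed content. Re-setting the
    // attribute makes the browser fetch the https:// version.
    document.addEventListener("DOMContentLoaded", function () {
      document.querySelectorAll('link[rel="stylesheet"], img').forEach(function (el) {
        var attr = el.tagName === "IMG" ? "src" : "href";
        var url = el.getAttribute(attr);
        if (url && url.indexOf("http://") === 0) {
          el.setAttribute(attr, "https://" + url.slice("http://".length));
        }
      });
    });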

Also, I expect that at some point in the future browsers will (for non-local connections) simply treat http:// as https:// and start breaking things that are not reachable via HTTPS. It would be the most secure thing to do, and actually the easiest way for ops to fix their users' stuff...
