Wandering Thoughts archives

2018-02-02

Some practical tradeoffs involved in using HTTPS instead of HTTP

Today I tweeted:

I'm a little bit disappointed that the ASUS UEFI BIOS fetches BIOS updates over HTTP instead of HTTPS (I'm sure they're signed, though). But thinking more about it, it's probably a sensible decision on ASUS's part; there are a lot of potential issues with HTTPS in the field.

Let's start with the background. I hadn't been paying attention to BIOSes for current hardware until recently, and it turns out that modern UEFI BIOSes are startlingly smart. In particular, they're smart enough to fetch their own BIOS updates over the Internet; all you have to do is tell them how to connect to the network (DHCP, static IP address, etc). For various reasons I prefer to update the BIOS this way, and when I did such an update today, I was able to confirm that the entire update process used only HTTP. At first I was reflexively disappointed, but the more I thought about the practical side, the more I felt that Asus made the right choice.

The problem with fetching things over HTTPS is that it exposes a number of issues, including:

  • HTTPS interception middleware that requires you to load its root certificate. Should the BIOS try to handle this?

  • Certificate validation in general, which has been called "the most dangerous code in the world". This has unusual challenges in a BIOS environment (consider the issue of knowing what time and date it is; there's a small sketch of this after the list) and may be more or less impossible to do really well as a result.

    (Let's ignore certificate revocation, seeing as everyone does.)

  • Long term changes in the set of CA roots that you want to use. BIOSes may live for many years and be updated infrequently, so Asus might find itself with a very limited selection of CAs (and hosting providers) whose certificates are still accepted by half-decade or decade-old BIOSes that it wants to keep supporting updates from.

    (Asus can update the CA roots as part of a BIOS update, but not everyone does BIOS updates very often.)

  • Similarly, long-term evolution in the set of TLS ciphers that are supported and acceptable. We're seeing a version of this as (Open)SSH evolves and drops support for old key exchange methods that are the only methods implemented by some old clients. TLS ciphers have sometimes turned over fairly drastically as weaknesses were found.

    (More generally, TLS has sometimes needed changes to deal with attacks.)
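
To make the time-and-date issue concrete, here's a rough sketch (in Python, using the third-party cryptography package, version 42 or later) of the basic validity window check that certificate validation has to make. In a real TLS handshake the library does this for you; the point is just that the check is meaningless without a trustworthy clock:

    import datetime
    import ssl

    from cryptography import x509

    def validity_window(host, port=443):
        # Grab the server's leaf certificate without validating it,
        # then inspect its validity period directly.
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        return cert.not_valid_before_utc, cert.not_valid_after_utc

    not_before, not_after = validity_window("example.com")
    # A firmware environment with an unset or badly drifted clock gets
    # 'now' wrong, and then even a perfectly good certificate fails.
    now = datetime.datetime.now(datetime.timezone.utc)
    if not (not_before <= now <= not_after):
        print("certificate rejected: outside its validity window")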

Right now you also have the issue of a transition away from older versions of TLS towards TLS 1.3. If you're looking forward five or ten years (which is a not unreasonable lifetime for some hardware), you might expect that the future world will be basically TLS 1.3 or even TLS 1.4 only by the end of that time. Obviously a BIOS you ship today can't support TLS 1.3 because the protocol isn't finalized yet.
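
As an illustration of how a client's TLS version range gets frozen when it ships, here's a small Python sketch using the standard ssl module. A client built this way can never negotiate anything newer than what its library knew about, so a future server that drops those versions simply becomes unreachable:

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # frozen at ship time

    with socket.create_connection(("example.com", 443)) as sock:
        try:
            with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
                print("negotiated", tls.version())
        except ssl.SSLError as exc:
            # Against a hypothetical TLS 1.3-only server, the handshake
            # just fails; this is the position an old BIOS would be in.
            print("handshake failed:", exc)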

Including a TLS client also opens up more attack surface against the BIOS, although this feels like a pretty obscure issue to me (it'd be an interesting way to get some code running behind an organization's firewall, but people don't run BIOS updates all that often).

Since people can usually download the BIOS updates by hand from an OS, Asus could have decided to accept the potential problems and failures. But this would likely create a worse user experience where in-BIOS update attempts failed for frustrating reasons, and I can't blame Asus for deciding that they didn't want to deal with the extra user experience risks. If the BIOS updates themselves are signed (and I assume that they are in these circumstances), the extra security risks of using HTTP instead of HTTPS are modest and Asus gets to avoid a whole other set of risks.
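
The general 'fetch over plain HTTP, then verify a signature with a key baked into the firmware' pattern is simple enough to sketch. I don't know the details of Asus's actual signing scheme, so the URL, file names, and the choice of RSA with SHA-256 below are all illustrative assumptions (this uses Python's third-party cryptography package):

    import urllib.request

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    # Hypothetical update server; the transport doesn't need to be trusted.
    IMAGE_URL = "http://updates.example.com/bios_update.cap"

    # The vendor's public key ships inside the firmware itself.
    with open("vendor_pubkey.pem", "rb") as f:
        pubkey = load_pem_public_key(f.read())

    image = urllib.request.urlopen(IMAGE_URL).read()
    sig = urllib.request.urlopen(IMAGE_URL + ".sig").read()

    try:
        pubkey.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        raise SystemExit("bad signature; refusing to flash")
    print("signature OK; image is trustworthy even though it came over HTTP")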

(Some of those risks are reputational ones, where people get to create security conference presentations about 'Asus fumbles TLS certificate verification' or the like. If nothing else, many fewer people will try to break BIOS update signature verification than will poke your TLS stack to see if they can MITM it or find other issues.)

PS: There is one situation where not being able to update your BIOS without an OS is a problem, namely when you need to apply a BIOS update to be able to install your OS in the first place (and you don't have a second machine). But this is hopefully pretty rare.

tech/BIOSViaHTTPSensible written at 23:41:58

X's network transparency was basically free at the time

I recently wrote an entry about how X's network transparency has wound up mostly being a failure for various reasons. However, there is an important flipside to that story: X's network transparency was almost free at the time and in the context in which it was created. Unlike the situation today, in the beginning X did not have to give up lots of performance or other things in order to get network transparency.

X originated in the mid-1980s and was explicitly created to be portable across various Unixes, especially BSD-derived ones (because those were what universities were mostly using at the time). In the mid-to-late 1980s, Unix had very few IPC methods, especially portable ones. In particular, BSD systems did not have shared memory (that was part of 'System V IPC', named for the obvious reason). BSD had TCP and Unix sockets, some System V machines had TCP (and you could likely assume that more would get it), and in general your safest bet was to assume some sort of abstract stream protocol and then allow for switchable concrete backends. Unsurprisingly, this is exactly what X did; the core protocol is defined as a bidirectional stream of bytes over an abstracted channel.

(And the concrete implementation of $DISPLAY has always let you specify the transport mechanism, as well as allowing your local system to pick the best mechanism it has.)
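
As a rough illustration of that transport selection, here's a minimal Python sketch of how a client might turn a $DISPLAY value into a connection. Real Xlib handles more transports, plus authentication, but the core choice looks like this:

    import os
    import socket

    def connect_to_display(display=None):
        # Parse a $DISPLAY value like "host:0" or ":0".
        display = display or os.environ["DISPLAY"]
        host, _, rest = display.partition(":")
        dpy_num = int(rest.split(".")[0])
        if host in ("", "unix"):
            # Local display: the Unix socket X servers conventionally create.
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(f"/tmp/.X11-unix/X{dpy_num}")
        else:
            # Remote display: the same byte stream, over TCP to port 6000+N.
            sock = socket.create_connection((host, 6000 + dpy_num))
        return sock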

Once you've decided that your protocol has to run over abstracted streams, it's not that much more work to make it network transparent (TCP provides streams, after all). X could have undermined this by leaving the byte order of the stream unspecified or by requiring the server and the client to have access to some shared files (eg for fonts), but I don't think either shortcut would have been a particularly big win. I'm sure that it took some extra effort and care to make X work across TCP from a different machine, but I don't think it took very much.
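
As it happens, X settled the byte order question in the simplest way possible: the very first byte a client sends declares which byte order it will use, and the server adapts. Here's a small Python sketch of the fixed part of the X11 connection setup request:

    import struct
    import sys

    # The first setup byte is 0x42 ('B') for big-endian clients or
    # 0x6C ('l') for little-endian ones; the server adapts to it.
    order = b"B" if sys.byteorder == "big" else b"l"
    endian = ">" if sys.byteorder == "big" else "<"

    # Then a pad byte, protocol version 11.0, and the lengths of the
    # (empty, in this sketch) authorization name and data, plus padding.
    setup = order + struct.pack(endian + "xHHHH2x", 11, 0, 0, 0)
    print(setup.hex())  # 12 bytes in total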

(At the same time, my explanation here is probably a bit ahistorical. X's initial development seems relatively strongly tied to sometimes having clients on different machines than the display, which is not unreasonable for the era. But it doesn't hurt to get a feature that you want anyway for a low cost.)

I believe it's important here that X was intended to be portable across different Unixes. If you don't care about portability and can get changes made to your Unix, you can do better (for example, you can add some sort of shared memory or process-to-process virtual memory transfer). I'm not sure how the 1980s versions of SunView worked, but I believe they were very SunOS-dependent. Wikipedia says SunView was partly implemented in the kernel, which is certainly one way to both share memory and speed things up.

PS: Sharing memory through mmap() and friends was years in the future at this point and required significant changes when it arrived.

unix/XFreeNetworkTransparency written at 01:12:50

