Chrome may start restricting requests to private networks

November 15, 2021

Chrome (and apparently Microsoft Edge) is likely to add new restrictions on web pages talking to private network addresses (in a surprisingly broad sense). The reference for this is Feature: Restrict "private network requests" for subresources from public websites to secure contexts (via), which describes the first steps. The first step Chrome is taking is that such "private network requests" may only be made from a public context that is secure, i.e. from an HTTPS website instead of an HTTP one.

(As far as I can tell, an HTTP website on a private network will currently still be able to make requests to other private network addresses, although maybe not to localhost.)

However, Chrome's initial steps are not the contemplated full version. The full specification is Private Network Access, and it goes much further than Chrome does. Although I'm not fully up on my web specification reading, section 3.1, Integration with Fetch, seems to say that even explicit user navigation (clicking on a link) would be covered by this and could be blocked under some circumstances if it went from a public context to a more private one.
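As I understand the draft, it divides network targets into three "IP address spaces" (public, private, and local) and treats a fetch as a "private network request" whenever the target is less public than the requesting context. Here's a rough sketch of that classification based on my reading of the draft; it uses Python's `ipaddress` heuristics as a stand-in for the spec's exact address tables, so don't treat it as authoritative:

```python
import ipaddress

def address_space(ip: str) -> str:
    """Roughly classify an IP into the draft's 'local', 'private', or
    'public' address spaces (approximated via ipaddress heuristics)."""
    addr = ipaddress.ip_address(ip)
    if addr.is_loopback:
        return "local"
    if addr.is_private or addr.is_link_local:
        return "private"
    return "public"

# Ordering from most public (0) to least public (2).
ORDER = {"public": 0, "private": 1, "local": 2}

def is_private_network_request(initiator_ip: str, target_ip: str) -> bool:
    """Per my reading of the draft: a request counts as a 'private network
    request' when the target's address space is less public than the
    initiator's, e.g. public -> private or private -> localhost."""
    return ORDER[address_space(target_ip)] > ORDER[address_space(initiator_ip)]
```

Under this model, a public website fetching from a RFC 1918 address, or a private-network page talking to localhost, both count as private network requests and become candidates for blocking.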

A browser change like this is a potential problem for us because of our network setup. We have a 'split horizon' DNS setup, where the same DNS name resolves to different IP addresses depending on whether you're inside or outside our network perimeter. We also have a number of public websites that actually live in private IP address space but are NAT'd to public IPs by our external firewall. These public websites are linked to from other places and may come up in Internet search results, but if you're inside our network perimeter and look up their names, you get private IP addresses, and you have to use those addresses to talk to them. To the Private Network Access specification, a person following a link to one of these websites (or a public website loading some resource from one) looks just like the kind of thing that should be blocked.
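To make the problem concrete, here's a toy model of a split-horizon resolver; all of the names, addresses, and network ranges are made up for illustration, not our actual setup:

```python
import ipaddress

# Hypothetical zone data: the same name maps to a private IP for internal
# clients and to the firewall's NAT'd public IP for everyone else.
SPLIT_ZONE = {
    "www.example.org": {
        "internal": "10.20.30.40",   # what clients inside the perimeter get
        "external": "198.51.100.40", # the NAT'd public address
    },
}

# Networks considered 'inside the perimeter' (illustrative).
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def resolve(name: str, client_ip: str) -> str:
    """Return the answer a split-horizon resolver would give this client."""
    client = ipaddress.ip_address(client_ip)
    view = "internal" if any(client in net for net in INTERNAL_NETS) else "external"
    return SPLIT_ZONE[name][view]
```

An internal client following an external link to `www.example.org` resolves it to `10.20.30.40`, so from the browser's point of view a public page is navigating to a private address, which is exactly what Private Network Access wants to restrict.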

(I wrote about this a few years ago when I maintained that browsers couldn't feasibly stop web pages from talking to private networks. Browsers seem more willing to break things these days. Our setup also causes us problems with DNS over HTTPS, but it would be very challenging to change for reasons beyond the scope of this entry.)

While the specification has some suggestions, I'm not sure that it would allow our specific situation even if we could get all of the people with web servers that are possibly affected by this to make changes to them. I'm also not entirely convinced that the changes necessary would be completely secure; we might have to leave things more open than they really should be. If nothing else, this appears to be another Chrome decision that's going to force people to read a bunch of things, understand them, and change their webservers.
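One of the specification's suggestions is a CORS-style preflight: the browser sends an `Access-Control-Request-Private-Network: true` header, and the target server must answer with `Access-Control-Allow-Private-Network: true` to opt in. A minimal sketch of what an affected web server would have to compute, assuming the draft's header names survive into the final specification:

```python
def pna_preflight_response(request_headers: dict) -> dict:
    """Build the preflight response headers a server on a private IP would
    need in order to opt in to Private Network Access requests. Header
    names are taken from the draft specification and may change."""
    headers = {}
    origin = request_headers.get("Origin")
    if origin:
        # Echoing the origin back is one common CORS pattern; a cautious
        # server would check it against an allowlist first.
        headers["Access-Control-Allow-Origin"] = origin
    if request_headers.get("Access-Control-Request-Private-Network") == "true":
        headers["Access-Control-Allow-Private-Network"] = "true"
    return headers
```

This is the kind of change every affected internal web server would need, which is part of why I expect this to force a lot of people to read, understand, and reconfigure things.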

(But perhaps the risk of CSRF attacks against insecure devices on private networks or localhost is severe enough to justify such a change. I see only one side of this whole issue, not both of them.)


Comments on this page:

By Walex at 2021-11-16 06:27:05:

«Browsers seem more willing to break things these days»

The main aim of many browsers is increasingly to act as dedicated clients for near "walled garden" major web sites with "curated" content, pursuing a heavily centralized model of the WWW. I doubt that access to random sites containing "uncurated" content that may be "misinformation" is considered equally important. Most browsers, including Firefox, are designed around the interests of major publishers, not those of users.

As to the latter point, a major example: if browsers were designed in the interests of users, they would accept all cookies but only send them to specific sites when users allowed it, instead of the reverse, which is far more fragile and encourages users to accept (and therefore send) all cookies.

By Alex at 2021-11-16 21:20:11:

You know what I'm going to say... IPv6! This is another area that it helps you with, on top of not needing to do split DNS or NAT, since you can just run everything on global IP space.

You can put v4-only servers behind a reverse proxy if their owners refuse to get v6 working on them, and similar v4-only clients can be given a dual-stack proxy. Anybody that refuses to use one or the other will just go over the existing v4 network, and will have to figure their own fixes out for the browser-sourced breakage.

