Why we have public websites on private IPs (internally)
In yesterday's entry about how Chrome may start restricting requests to private networks, I mentioned that we have various public websites that are actually on private IPs, as far as people inside our network perimeter are concerned. You might wonder why. The too-short answer is that we don't have enough public IPs to go around, but the longer answer is that it's because of how our internal networks are organized.
As a computer science department, we have a bunch of separate research groups and professors. Many of them have their own machines and their own network needs, so in our network layout we put them on separate subnets, what we call "sandbox" subnets (and then we have some more for things like random laptops). Because we don't have anywhere near enough public IP address space, these subnets use RFC 1918 private IP address space.
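For illustration, RFC 1918 reserves three blocks for private use: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. Python's standard `ipaddress` module can tell you whether an address falls in reserved private space (the addresses below are made-up examples, not our real subnets):

```python
import ipaddress

# ipaddress flags RFC 1918 addresses (among other reserved space)
# as "private". The first three are in the RFC 1918 blocks; the
# last is ordinary globally routable space.
for ip in ["10.1.2.3", "172.16.10.40", "192.168.1.1", "128.100.1.1"]:
    addr = ipaddress.ip_address(ip)
    print(ip, addr.is_private)
```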
Various people and research groups need or want to run public websites on their own machines. Generally they want these machines to be in their own subnets; sometimes it's actively required for various reasons. This means we can't physically put all of these servers on one subnet (with public IPs). We do have to assign them public IPs so they're reachable from the world, but then we have to somehow translate requests sent to those public IPs into requests to the private IPs. We've opted to do this with NAT on our external firewall.
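As a sketch of what this sort of translation looks like (the addresses and syntax here are invented for illustration, in the style of OpenBSD's PF; this is not our actual configuration), a destination-rewriting rule on the external firewall might be:

```
# Hypothetical PF-style rule: inbound web traffic to the public IP
# (192.0.2.40, a documentation address) has its destination rewritten
# to the server's private RFC 1918 address (10.10.1.40).
pass in on $ext_if inet proto tcp from any to 192.0.2.40 \
    port { 80 443 } rdr-to 10.10.1.40
```

Note that a rule like this rewrites only the destination address; the client's source IP reaches the web server unchanged, which matters to us (as covered in the sidebar).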
We use NAT instead of anything else, such as reverse proxies, for a variety of reasons. Some people are dealing with sensitive data that should go directly to their carefully secured server (naturally they use HTTPS). Some people are doing odd things and we don't want to worry about the potential impact of their traffic on shared servers, or for that matter any performance restrictions that a shared server in the path might create. Some people are using their own web server specifically so they don't have to get whatever web software they're running working behind our existing reverse proxy system. With NAT done by the external firewall, we can make their web servers public with as much performance and as little impact on everyone as possible (and with a minimum of our server resources used).
However, it does mean that there is no feasible way for people inside our network perimeter to talk to the public IPs. The public IPs exist only for external traffic transiting inward through our external firewall, and internal traffic doesn't do that. In fact, in a reasonably common case the browser is on the same internal subnet as the web server, because it belongs to a developer working on the site.
Sidebar: IPv6 was not and is not the solution
This network design and this requirement for public machines with private IPs predates usable IPv6. It works today, which leaves us with very little to gain from blowing it up and moving to IPv6, especially since it will have to keep on working for IPv4 for the foreseeable future. It's also pretty much a required feature that the source IPs of people talking to the web servers don't get changed.
(For the foreseeable future there will also be internal IPv4 only servers and clients.)