2024-06-15
We don't know what's happening on our networks
In some organizations, a foundational principle of network security (both internal and external) is that you should know about everything that's happening on your network. No program, no network service, no system should be accepting or sending unknown network traffic, and you should be able to completely inventory your expected traffic patterns. In some environments this will include not just protocol-level knowledge but also things like what DNS names should be looked up. This detailed knowledge is obviously great for network security and for detecting intrusions; unexpected network traffic can be used to trigger investigations and maybe alerts.
(I suspect that this is often an aspirational goal that is not necessarily achieved.)
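To make that contrast concrete, here's a minimal sketch of what inventory-driven alerting might look like: checking observed DNS queries against an inventory of expected names and flagging everything else. Everything in it is hypothetical for illustration; the log format (one 'client queried-name' pair per line), the file names, and the allowlist are invented, not anything real:

    # Hypothetical sketch of inventory-driven DNS alerting.
    # Assumes a made-up query log with one "client qname" pair per line
    # and a made-up allowlist file with one expected name per line.

    def load_expected(path):
        """Load the inventory of expected DNS names, normalized."""
        with open(path) as f:
            return {line.strip().lower().rstrip(".") for line in f if line.strip()}

    def report_unexpected(inventory_path, log_path):
        expected = load_expected(inventory_path)
        with open(log_path) as log:
            for line in log:
                try:
                    client, qname = line.split()
                except ValueError:
                    continue  # skip malformed lines
                if qname.lower().rstrip(".") not in expected:
                    # A real system might raise an alert or open a ticket;
                    # here we just report the anomaly.
                    print(f"unexpected lookup: {client} asked for {qname}")

    if __name__ == "__main__":
        report_unexpected("expected-dns-names.txt", "dns-queries.log")

Even this toy version presumes that you can write down 'expected' in the first place, which is exactly what we can't do.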
This is completely impossible in our (network) environment, which I can broadly describe as providing general networking to the research side of a university department. There are two aspects to this. The first aspect is that our general network environment contains plenty of desktops, laptops, phones, and other such devices on various pieces of our network, many of them personally owned. These devices are often running random software that phones home to random places at random times, generating all sorts of outbound traffic (and no doubt pulling in some amount of inbound traffic in the process). Often the device's owner has no idea that this traffic is happening, never mind where it's going, since modern software feels free to talk to whatever it wants without telling you (and of course, the details change all the time).
The second aspect is that we don't quiz people here on what they're doing or demand that they tell us what they're up to before they do it. More broadly, our environment isn't organized as neatly contained 'services' that can be inventoried before they're deployed, given security reviews, and so on. Instead, we provide an environment to people and they're free to use it as they like to get their (research) work done. If their work or their software needs to talk to something and our firewalls allow it, they can just do it without having to slow down to talk to us. So even for servers (either ours or ones run by people here), we can't predict the network traffic, because it depends on what people are doing with them.
(If our firewalls don't allow some needed traffic, we'll generally change that once we know about the issue. In practice our outbound firewalls are relatively porous, so a lot of internally-initiated activity will just work.)
But all of this leads to a broad issue, which is that in a university environment, it's not our business what people are doing, on the network or otherwise. If you want an analogy, we are in effect an ISP with some additional services, like printing (still surprisingly popular), (inbound) network security, email, web hosting, and general purpose computation. To have good knowledge of what's happening on our networks, we'd have to be gatekeepers or panopticon observers (or both), and we are neither.
(In addition, many of the people using our environment are not employees of the university.)
Fundamentally we don't operate a tightly controlled network environment. Trying to operate as if we did (or should) to any significant degree would be a great way to cause all sorts of problems and get in the way of people doing a wide variety of reasonable things.