Why we have several hundred NFS filesystems in our environment
It may strike some people as unusual or extreme that we have 340-odd NFS mounts on our machines, as I mentioned in my entry on how systemd can cause an unmount storm during shutdown. There are several levels to why we have that many NFS filesystems, and why they're all mounted all of the time. First, I'll dispose of a side reason, which is that we don't like conventional automounters (i.e., any system that materializes (NFS) mounts 'on demand'). That means that all of our potential NFS filesystems are always mounted, which shifts the question to why we have so many NFS filesystems in the first place.
The starting point is that we have on the order of 2600 accounts, six ZFS fileservers, 38 TB of used space, and quite a number of ZFS pools (because of how people get space). This naturally spreads out data across multiple filesystems, and on top of this, our backup system operates on whole filesystems and there's a limit to how large we want one 'backup object' to ever be. Obviously if we limit the maximum size of filesystems, we get more of them. We also encourage people to not put all of their data in their home directory, so people and groups often have separate workdir filesystems that they use as work areas, for shared data, and so on. For various security reasons, our web server also requires people to put all of the data they intend to expose to it on specially designated workdir filesystems, instead of their home directory.
(Among other practical issues, it's much easier to safely expose a portion of a completely separate filesystem to other people or the web than something under your home directory. This extends to any situation where we want or need different NFS export permissions for two things; they have to be in separate filesystems, even if that means a new filesystem for one of them.)
ZFS filesystems are also the primary way we implement space restrictions and space guarantees for people. If you want something to use at most X amount of space, or to always have X amount of space available to it, it has to be in a separate filesystem (and then we set the appropriate ZFS properties to guarantee these things). Since both of these are popular with the people who ultimately call the shots on how space is used (because they paid for it), this has led to a certain number of additional ZFS filesystems and thus NFS mounts.
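(In ZFS terms, the properties involved are quota and reservation, or their refquota and refreservation variants. As a sketch of what this looks like, with entirely made-up pool and filesystem names and sizes:)

```shell
# Cap a group's workdir filesystem at 500 GB (hypothetical names and sizes):
zfs set quota=500G tank/workdir/groupa

# Guarantee that a home directory filesystem always has 200 GB available:
zfs set reservation=200G tank/homes/h281

# Inspect the resulting property settings:
zfs get quota,reservation tank/workdir/groupa tank/homes/h281
```

(These commands have to run on the fileserver itself, against real pools, so they're illustrative only.)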
This points to the larger scale reason that we have so many NFS filesystems, which is that filesystems are natural namespaces and having plenty of namespaces is useful. Since ZFS filesystems are basically free, any time people want to separate things it's natural to make a new (ZFS and NFS) filesystem to do so. There's a certain amount of overhead and so we try not to go too far with this (we're unlikely to ever support each user in their own filesystem), but it's quite a good thing to not have to squeeze everything into a limited number of filesystems. Ever since we moved to ZFS, our approach has been that if we're in doubt, we make a new filesystem. Our local tools are built to deal with this (for instance, automatically distributing new accounts across multiple potential home directory filesystems based on their current usage).
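(The account-distribution idea at the end can be sketched very simply: given the current usage of each candidate home directory filesystem, put the new account on the least-used one. All names and numbers below are hypothetical; our actual tools are more involved than this.)

```python
# Hypothetical sketch: choose the least-used candidate home directory
# filesystem for a new account. The mapping of filesystem to usage
# would come from something like 'df' or ZFS accounting in practice.

def pick_home_filesystem(usage_by_fs):
    """Given a mapping of filesystem -> bytes currently used,
    return the filesystem with the least current usage."""
    return min(usage_by_fs, key=usage_by_fs.get)

# Made-up example usage figures:
usage = {
    "/h/281": 900 * 10**9,
    "/h/282": 400 * 10**9,
    "/h/283": 650 * 10**9,
}
print(pick_home_filesystem(usage))  # -> /h/282
```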