Why our Solaris fileservers still use the automounter
In theory, there's no need for your fileservers to use the automounter.
Even if you make the filesystems visible under their normal names as
well as in /export, you could simply mount them all explicitly.
However, doing so would significantly complicate our environment,
because it turns out that the automounter takes care of a number of
issues for us.
Of course, one big reason to use the automounter is administrative convenience; you can use the same set of automounter maps on your fileservers as you do everywhere else, instead of having to build special ones and keep the same information in at least two different places. (Whether fileservers should ever have NFS mounts is another question, but our local environment forces the issue and requires it.)
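To make the convenience concrete, here is a sketch of the sort of shared indirect map that can be used unchanged on fileservers and clients alike; the map name, user, fileserver name, and paths are all made up for illustration:

```
# /etc/auto_master line: automount home directories under /h
/h      auto_home

# auto_home entry (hypothetical user 'cks' on hypothetical server 'fs-srv1'):
cks     -rw     fs-srv1:/export/h/cks
```

The point is that nothing in the map itself cares whether the machine reading it is a client or the fileserver that actually holds /export/h/cks.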
We have an additional complication, because for failover our NFS fileservers are virtual. This means that which physical machine owns a filesystem and thus needs to loopback mount it can change on the fly. It turns out that the Solaris automounter is smart enough to deal with this without any configuration changes and without us doing anything extra at failover time.
(I assume that it is noticing that the IP address it is trying to NFS mount from is one of the IP addresses associated with the current machine, even though the host name of the NFS server is not the machine's hostname.)
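As a sketch of that assumed mechanism (this is my guess at what the automounter does, not its actual code, and `server_is_local` is a made-up name), the check amounts to comparing the NFS server's addresses against the machine's own:

```python
import socket

# Sketch of the check I assume the Solaris automounter makes: if the
# NFS server's address is one of this machine's own addresses, the
# filesystem can be loopback mounted locally instead of NFS mounted,
# regardless of what the server's hostname is.

def server_is_local(server_name):
    server_ips = {info[4][0] for info in socket.getaddrinfo(server_name, None)}
    try:
        local_ips = {info[4][0]
                     for info in socket.getaddrinfo(socket.gethostname(), None)}
    except socket.gaierror:
        local_ips = set()
    # Loopback addresses always count as local.
    local_ips.update({"127.0.0.1", "::1"})
    return bool(server_ips & local_ips)

print(server_is_local("localhost"))  # True: localhost is a loopback address
```

Because the comparison is done per mount attempt, a virtual fileserver's IP migrating to a different physical machine is picked up automatically.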
Using static mounts in this situation isn't impossible, but it'd be another thing we'd have to build into the failover startup stuff. The automounter conveniently handles it all for us.
(This entry was sparked by a comment on a recent entry.)