How Solaris 10's mountd works
Due to security and complexity issues, Unix systems vary somewhat in exactly how they handle the server side of doing NFS mounts. I've recently been digging in this area, and this is what I've learned about how it works in Solaris 10.
The server side of mounting things from a Solaris 10 fileserver goes more or less like this:
- a client does the usual SUNRPC dance with portmapper and then sends an RPC mount request to mountd.
- if the filesystem is not exported at all or if the options in the mount request are not acceptable at all, mountd denies the request. mountd checks to see if the client has appropriate permissions. This will probably include resolving the client's IP address to a hostname and may include netgroup lookups. This process looks only at ro= and rw= permissions, and thus will only do 'is host in netgroup' lookups for netgroups mentioned there (a sketch of this sort of check follows the list).
- if the client passes, mountd looks up the NFS filehandle of the root of what the client asked for and sends off an RPC reply, saying 'your mount request is approved and here is the NFS filehandle of the root of it'. You'll notice that mountd has not told the kernel about the client having access rights for the filesystem.
- at some time after the client kernel accepts the mount, it will perform its first NFS request to the fileserver. (Often this happens immediately.)
- if the fileserver kernel does not have information about whether IP <X> is allowed to access filesystem <Y> in its authorization cache, it upcalls to mountd to check. mountd goes through permissions checking again, with slightly different code; this time it also looks at any root= option and thus will do netgroup lookups for those netgroups too. mountd replies to the kernel's upcall (we hope) with the permissions the client IP should have, which may be 'none'. The Solaris kernel puts this information in its authorization cache.
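To make the netgroup checks concrete, here is a minimal sketch (using the standard innetgr() call) of how an rw=/ro= access-list check reduces to 'is host in netgroup' lookups. The helper function and its arguments are hypothetical and much simplified compared to what getclientsflavors_new() and check_client() actually do:

    #include <netdb.h>      /* innetgr() */
    #include <strings.h>    /* strcasecmp() */

    /* Hypothetical helper, not the actual mountd code: is 'hostname'
     * granted access by an rw= or ro= list whose entries may be plain
     * hostnames or netgroup names?  Each entry is simply tried as a
     * hostname first and then as a netgroup; real mountd also deals
     * with security flavors, domains, networks, and so on. */
    static int
    is_client_allowed(const char *hostname, const char *const *list, int nents)
    {
        for (int i = 0; i < nents; i++) {
            if (strcasecmp(list[i], hostname) == 0)
                return 1;
            /* The 'is host in netgroup' lookup: this is the potentially
             * expensive step that mountd caches for 60 seconds. */
            if (innetgr(list[i], hostname, NULL, NULL))
                return 1;
        }
        return 0;
    }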
The mount daemon has a limit on how many simultaneous RPC mount requests it can be processing; this is 16 by default. There are limits of some sort on kernel upcalls as well, I believe including a timeout on how long the kernel will wait for any given upcall to finish before giving up, but I don't know what they are or how to find them in the OpenSolaris code.
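As a generic illustration of what a cap on simultaneous requests means (this is not how mountd actually enforces its limit of 16):

    #include <semaphore.h>

    static sem_t mount_slots;

    void
    init_limit(void)
    {
        sem_init(&mount_slots, 0, 16);  /* at most 16 concurrent requests */
    }

    void
    handle_mount_request(void)
    {
        sem_wait(&mount_slots);         /* block if 16 are already running */
        /* ... process the RPC mount request ... */
        sem_post(&mount_slots);
    }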
Because this process involves doing the permissions checks twice (and checks multiple NFS export options), it may involve a bunch of duplicate netgroup lookups. Since netgroup lookups may be expensive, mountd caches the result of all 'is host <X> in netgroup <Z>' checks for 60 seconds, including negative results. This mountd cache is especially relevant for us given our custom NFS mount authorization.
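As an illustration of the sort of cache involved, here is a minimal sketch of a 60-second 'is host in netgroup' cache that also remembers negative answers. The names, the fixed-size table, and the linear scan are all mine; the real code lives in netgroup.c:cache_check():

    #include <string.h>
    #include <time.h>

    #define NG_TTL    60        /* seconds an answer stays valid */
    #define NG_SLOTS  256

    struct ng_entry {
        char    host[256];
        char    netgroup[256];
        int     member;         /* cached answer, 0 or 1 */
        time_t  when;           /* when the answer was cached */
        int     valid;
    };

    static struct ng_entry ng_cache[NG_SLOTS];

    /* Returns 1 on a fresh cache hit (answer in *member), 0 on a miss;
     * on a miss the caller does the real innetgr() lookup and stores it. */
    static int
    ng_cache_lookup(const char *host, const char *netgroup, int *member)
    {
        time_t now = time(NULL);

        for (int i = 0; i < NG_SLOTS; i++) {
            struct ng_entry *e = &ng_cache[i];
            if (e->valid && now - e->when < NG_TTL &&
                strcmp(e->host, host) == 0 &&
                strcmp(e->netgroup, netgroup) == 0) {
                *member = e->member;    /* may be a cached negative answer */
                return 1;
            }
        }
        return 0;
    }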
(The combination of the kernel authorization cache with no timeout and this mountd netgroup lookup cache means that if you use netgroups for NFS access control, a single lookup failure (for whatever reason) may have wide-ranging effects if it happens at the wrong time. A glitch or two during a revalidation storm could give you a whole lot of basically permanent negative entries, as we've seen but not previously fully understood.)
Where to find OpenSolaris code for all this
I'm going to quote paths relative to usr/src, which is the (relative) directory where OpenSolaris puts all code in its repository.
The mountd source is in cmd/fs.d/nfs/mountd. Inside mountd:
- the RPC mount handling code is in mountd.c:mount(). It checks NFS mount permissions as a side effect of calling the helpfully named getclientsflavors_new() or getclientsflavors_old() functions.
- the kernel upcalls are handled by nfsauth.c:nfsauth_access(), which calls mountd.c:check_client() to do the actual permission checking.
- the netgroup cache handling is done in netgroup.c:cache_check(), which is called from netgroup_check().
The kernel side of the upcall handling is in uts/common/fs/nfs, as mentioned earlier. The actual upcalling and cache management happens in nfs_auth.c:nfsauth_cache_get(), using Solaris doors as the IPC mechanism between mountd and the kernel.
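Doors are a Solaris-specific RPC-like IPC mechanism: a server process exports a function that other processes (or the kernel) can call directly. As a rough sketch of the shape of the user-level side, here is a minimal door server; the request and reply layout and the door path are made up for illustration and are not the real nfsauth protocol:

    #include <sys/types.h>
    #include <door.h>
    #include <fcntl.h>
    #include <stropts.h>    /* fattach() */
    #include <unistd.h>

    /* Made-up request/reply layout, for illustration only. */
    struct auth_req  { char client[256]; char path[1024]; };
    struct auth_resp { int access; };       /* e.g. none / ro / rw */

    static void
    auth_door_server(void *cookie, char *argp, size_t arg_size,
        door_desc_t *dp, uint_t n_desc)
    {
        /* argp points at the request; a real server would look at the
         * client and path in it and decide what access to grant. */
        struct auth_resp resp;
        resp.access = 0;        /* this sketch always answers 'no access' */

        /* Reply to the caller; for mountd, the caller is the kernel. */
        door_return((char *)&resp, sizeof (resp), NULL, 0);
    }

    int
    main(void)
    {
        const char *door_path = "/var/run/example_auth_door";  /* arbitrary */
        int dfd = door_create(auth_door_server, NULL, 0);
        int fd = open(door_path, O_CREAT | O_RDWR, 0644);

        close(fd);
        fattach(dfd, door_path);    /* make the door visible at door_path */

        pause();    /* door calls are serviced by library-managed threads */
        return 0;
    }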