NFS filehandles from Linux NFS servers can be client specific

March 16, 2023

Under normal circumstances, we assume that NFS servers give out the same NFS filehandle for a given file (or directory, and so on) to every NFS client. On Linux, this is not necessarily the case, although it usually is.

To illustrate this, I'm going to get a second filehandle for the same NFS export as I did in my entry on NFS filesystem IDs, using /proc/fs/nfsd/filehandle (cf nfsd(7)):

>>> # request format: '<client or domain> <export path> <max fh size>'
>>> f = open("filehandle", mode="r+")
>>> f.write("128.100.X.X /w/435 128\n")
>>> r = f.read(); print(r)
\x 01 00 01 00 efbeadde

This is not what we got for /w/435's filehandle in the earlier entry, which was '\x 01 00 06 00 7341d08c 00034ae1 00000000 00000000' (embedding the normal kernel NFS server 'uuid' of the filesystem).

The structure of this block of hex comes from fs/nfsd/nfsfh.h. This is a version 1 filehandle with an ignored '0' auth type byte, a type 1 fsid, and a 'FILEID_ROOT' (0) fileid type, followed by an odd-looking rest of the data. If we look at /proc/net/rpc/nfsd.fh/content we can see another version of this:

#domain fsidtype fsid [path]
@nfs_ssh 6 0x8cd04173e14a03000000000000000000 /w/435
128.100.X.X 1 0xdeadbeef /w/435

The actual fsid type is a clue as to what's going on here; it is 'FSID_NUM', meaning a four-byte, user-specified identifier, which is what you set with the fsid= option in exports(5). In this case the user-specified identifier is 0xdeadbeef (decimal 3735928559, or -559038737 when printed as a signed number in /proc/fs/nfsd/export), encoded in the filehandle in the host's little-endian byte order, which is why it shows up as 'efbeadde'.
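
To make the encoding concrete, here's a small decoding sketch. This is my own illustration, not kernel code; decode_fh is a made-up name, and the field layout (version byte, auth byte, fsid type byte, fileid type byte, then fsid data) is my reading of the structure described above from fs/nfsd/nfsfh.h:

import struct

# Parse the space-separated hex dump that /proc/fs/nfsd/filehandle
# hands back, eg '\x 01 00 01 00 efbeadde'.
def decode_fh(text):
    words = text.split()
    assert words[0] == "\\x"      # the dump starts with a literal '\x'
    raw = bytes.fromhex("".join(words[1:]))
    version, auth, fsid_type, fileid_type = raw[:4]
    return version, auth, fsid_type, fileid_type, raw[4:]

ver, auth, fsid_type, fileid_type, fsid_data = decode_fh("\x 01 00 01 00 efbeadde".replace("\x", "\\x", 0) if False else "\\x 01 00 01 00 efbeadde".replace("", "", 0) and decode_fh)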

The ultimate cause of this is Linux's NFS export permissions model. In many NFS servers, export settings are attached to the export point, such as /w/435, and those settings include which clients have access and so on. In Linux, export settings are instead attached to the combination of an export point and a client (or a group of clients, such as a netgroup). This creates a natural model for giving different clients different sets of permissions and attributes, but it also means that all export attributes are per-client, including ones such as fsid=. And since the filesystem ID is necessarily part of the NFS filehandle, NFS filehandles as a whole can be different between different clients.
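
Here's a hypothetical /etc/exports line to illustrate; the specific options are my own invention (and I'm assuming your exportfs accepts a full 32-bit fsid= value, which matches what we saw in /proc/fs/nfsd/export). Each client or netgroup gets its own parenthesized option list, and nothing stops those lists from having different fsid= settings:

# the netgroup gets the default uuid-based filesystem ID, while the
# single host gets an explicit fsid=, so the two see different
# filehandles for the same files:
/w/435  @nfs_ssh(rw,no_subtree_check)  128.100.X.X(rw,no_subtree_check,fsid=3735928559)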

It's probably not very sensible to give different clients a different filesystem identifier for the same NFS export. But it's technically allowed, and the Linux kernel NFS server will play along if you do this. I haven't tested what happens if you give the NFS server back the 'wrong' filehandle (ie, if a @nfs_ssh machine gives the kernel a filehandle issued for 128.100.X.X).

(There are some operational reasons to accept such wrong filehandles, for example if 128.100.X.X is initially not part of @nfs_ssh but then gets added to it. On the other hand, not accepting the wrong version of a filehandle is arguably more secure if you have specifically set different filesystem IDs for different clients.)

PS: To make /proc/fs/nfsd/filehandle work, the relevant client (or class of clients) has to have already mounted the filesystem, or perhaps there's some other way to push the necessary information from mountd into the kernel (cf how mountd and export handle NFS permissions and how to see and flush the kernel's NFS server authentication cache).
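
(For what it's worth, one of the kernel-side caches involved can be peeked at directly; this is my illustration, not from those entries, and entries appear here only once a client has talked to the NFS server and mountd has answered for it:

>>> # the kernel's IP -> auth domain cache
>>> print(open("/proc/net/rpc/auth.unix.ip/content").read())

See the linked entries for the full story on these caches.)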


Comments on this page:

    It's probably not very sensible to give different clients a different filesystem identifier for the same NFS export. But it's technically allowed, and the Linux kernel NFS server will play along if you do this.

Since it is about server-side control of the FSID, this particular way for filehandles to be client-specific would seem to be irrelevant to your original motivating use case. That means that so far, your exploration hasn't turned up anything that would show explicit control of the FSID to be insufficient to avoid stale filehandles during (non-forklift upgrade) server migrations – correct?

By cks at 2023-03-18 18:04:34:

I believe so. However, if you can supply a uuid instead of just an fsid (which is somewhat implied although not explicitly stated), you might be able to look up the current automatically-generated uuid and then set it manually. I haven't actually tried this, though I do have test systems available and it would be an interesting experiment to do.

In our situation we would set the uuid for everyone (to the already established value from the old system), so having different UUIDs for different clients would be unnecessary.
