The three levels of read-only NFS mounts
It's sometimes useful to understand that there are three ways that an NFS-mounted filesystem can be 'read-only'. Let's call them three levels:
- You can mount the NFS filesystem read-only on the client. The client
kernel will then enforce this, disallowing write actions and so on.
These days this is mostly handled in high-level VFS code, since it's
common behavior across filesystems.
As with all remote filesystems, this read-only status is purely local to your client machine. Your machine doesn't get to order the NFS server not to make any changes on the filesystem (that would be laughable) so the NFS server is perfectly entitled to allow the filesystem to change underneath you and to have other clients mount it read-write (and write to it). If NFS is working right, you will see those changes at some point.
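As a sketch of this first level, a client-side read-only NFS mount might look like the following (the server name and paths here are hypothetical):

```shell
# Mount the export read-only on the client; the client kernel
# will then reject write attempts with EROFS ('Read-only file system')
# without ever sending them to the server.
mount -t nfs -o ro nfsserver:/export/data /mnt/data

# The equivalent /etc/fstab line:
#   nfsserver:/export/data  /mnt/data  nfs  ro  0 0
```

Because this is purely a client-side restriction, nothing here stops the server or other clients from changing the filesystem underneath you.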
- The server can export the NFS filesystem read-only (either to you
or just in general). The NFS server code will then disallow all
write actions that clients send it, returning an appropriate 'read
only filesystem' error to errant clients (if any). Even if the
filesystem is exported read-only to all clients, it's still valid
for it to be changed locally on the NFS server.
(As far as I know, whether or not the NFS export is read-only is invisible to the client. It's purely something internal to the server and can even change on the fly.)
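On a Linux NFS server, this second level is controlled in /etc/exports; these lines are a hypothetical illustration, not a recommended configuration:

```shell
# /etc/exports: export read-only to everyone:
/export/data  *(ro,sync)

# Or read-write to one trusted client and read-only to the rest:
#   /export/data  trusted.example.org(rw,sync) *(ro,sync)
```

On Linux, 'exportfs -ra' re-reads this file on the fly, which is one way the export's read-only status can change without clients being told.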
- On the server you can mount the exported filesystem read-only (or
otherwise set it that way). On competent NFS servers this disallows
all writes to the filesystem, regardless of whether they're NFS
or local and regardless of whether the filesystem was exported
read-only by the NFS server.
(On competent NFS servers, all NFS server operations on the exported filesystem go through the VFS et al and so have the standard handling of read-only mounts applied to them automatically.)
These can certainly be stacked on top of each other (a read-only server filesystem, NFS exported as read-only and mounted as read-only on clients) but they don't have to be. For instance you can NFS export filesystems as read-only but mount them read-write on clients (we do this here for complex reasons).
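All three levels stacked together might be sketched like this on a Linux server and client (device names and paths are made up for illustration):

```shell
# On the server: mount the filesystem itself read-only ...
mount -o ro /dev/sdb1 /export/data
# ... and export it read-only via /etc/exports:
#   /export/data  *(ro,sync)

# On a client: mount the export read-only as well:
mount -t nfs -o ro nfsserver:/export/data /mnt/data

# Or mix levels: with a read-only export mounted read-write on the
# client, the client kernel accepts write attempts but the server
# rejects them, so writes fail with a 'read-only filesystem' error
# that comes back over the wire instead of from the local kernel.
mount -t nfs -o rw nfsserver:/export/data /mnt/data
```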
Now let's talk about atime and atime updates. In NFS, atime updates are the responsibility of the server, not the clients. More specifically they are generally the responsibility of the underlying server filesystem code or VFS, not specifically the NFS server code, and as such they can happen when you read data through a read-only NFS mount or even a read-only NFS export. The NFS client asks to read data, the NFS server code makes a general VFS 'get me data' call, and as a side effect of this the VFS or the filesystem updates the atime (if atime updates are enabled at all).
(This implies that not all client reads necessarily update the server atime, because a client may satisfy a read from its own file cache instead of going to the server.)
If you think about it this is actually a feature. If you have atime enabled on a read-write filesystem mount, you have told the (server) kernel that you want to know when people read data from the filesystem and lo, this is exactly what you are getting. The read-only NFS export is just to tell the NFS server that it should not allow people to do 'write' VFS operations.
(Since you can export the same filesystem read-write to some clients and read-only to others, suppressing atime updates on read-only NFS exports could also produce odd effects. Read a file from client A and the atime updates, read the file from client B and it doesn't. And all because you didn't trust client B enough to let it actually make (filesystem level) changes to your valuable filesystem.)
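A consequence of all this is that if you want to suppress those atime updates, the knob is on the server's own mount of the filesystem, not in the NFS export. A sketch, with the usual caveat that mount options vary by system:

```shell
# On the server: 'relatime' is the common Linux default;
# 'noatime' suppresses atime updates entirely, including
# ones triggered by NFS client reads.
mount -o remount,noatime /export/data
```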
Sidebar: NFS exporting of read-only filesystems
You might think that the NFS export process should notice when it's
exporting a read-only filesystem as theoretically read-write and
silently change it to read-only for you. One of the problems with this
is that on many systems it's possible to switch filesystems back and
forth between read-only and read-write status through various mechanisms
(including 'mount -o remount'). In practice you might as well let the NFS server
accept the write operations and have the VFS then reject them; the
outcome is the same while the system is simpler and behaves better in
the face of various things happening.