Safely updating files that are read over NFS
To elaborate on an old entry a bit, let's suppose that you have important system data files that you expose to all of your machines via NFS, and that you need to re-generate and update them every so often.
When you regenerate files locally, you need to make sure that there's always a version of the file present, and that the file's always complete. When you add NFS there's a third, subtle requirement: you cannot immediately remove the old version because a process on another machine might be reading it, and NFS will happily yank the file out from under the process on the other machine.
(Normally, Unix doesn't completely remove a file until all processes drop their references to it, so a local process that's reading the old version of the file will keep being able to do so. NFS breaks this, because the NFS server has no knowledge of what files are open on the clients.)
The NFS issue means that things like plain rsync are not safe ways to update files, since they do the equivalent of writing a temporary file and then mv'ing it into place, which removes the old version of the file on the spot. You need to preserve the old version of the file under some other name, as in the recipe from the old entry.
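The recipe can be sketched as a short shell sequence (the file names here are illustrative; the key points are that the old version is hard-linked to a backup name before the rename, and that the new version only appears under the live name via an atomic rename):

```shell
set -e
cd "$(mktemp -d)"                  # work in a scratch directory for this sketch
echo "old contents" > data        # stand-in for the current live file

echo "new contents" > data.new    # 1. generate the new version under a temporary name
ln -f data data.old               # 2. hard-link the old version to a backup name,
                                  #    so its inode survives the replacement and NFS
                                  #    clients that have it open can keep reading it
mv data.new data                  # 3. rename() the new version into place atomically;
                                  #    readers see either the old file or the new one,
                                  #    never a missing or partial file
```

Because the rename is atomic and the old inode is still reachable via data.old, there is never a moment when the file is absent, incomplete, or yanked out from under a remote reader.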
(Even rsync -b is not safe; rsync does not take the steps necessary to make sure that a version of your file is always present, so there is a brief moment when programs could see no file at all.)
(As a corollary, the obvious way of directly rewriting the file in place with shell redirection is very dangerous. Any problems will probably leave you with a truncated file, and even without a problem a process that tries to read the file while you're rewriting it will see an incomplete or empty version.)
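To make the hazard concrete, here is a small demonstration (using hypothetical file names) of why in-place redirection is dangerous: the shell truncates the file the moment the redirection is set up, before the generating command has produced any output, so a failure leaves you with nothing.

```shell
cd "$(mktemp -d)"                  # scratch directory for this sketch
echo "important data" > file       # the file we care about

# The dangerous pattern is 'generate > file'.  The '> file' truncation
# happens first; if the generator then dies early (simulated here with
# ':', which writes nothing), the original contents are already gone,
# and any concurrent reader saw an empty or partial file in the interim.
: > file

[ -s file ] || echo "file is now empty"
```

This is exactly why the new version must be written under a temporary name and renamed into place only once it is complete.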