How NFS is unreliable for file-based locking
You hear a lot about how NFS is unreliable for file-based locking, but you rarely hear how and why, and knowing the details helps in understanding what can go wrong. The fundamental source of NFS's unreliability here is what I'll call the replay issue.
The communication between NFS clients and NFS servers is unreliable; both requests and replies can be dropped. Only the client worries about this, and it uses a simple approach: if it didn't get an answer to its request, it resends it.
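The client's approach can be sketched as a simple resend loop; this is a hypothetical illustration over an abstract lossy channel, not real NFS client code (the function and parameter names are my own invention):

```python
def send_with_retry(channel_send, channel_recv, request, max_tries=5):
    """Resend a request until a reply arrives: the NFS client's basic
    strategy when it can't tell whether its request or the server's
    reply was lost."""
    for _ in range(max_tries):
        channel_send(request)
        reply = channel_recv()
        if reply is not None:
            return reply
        # No reply within the timeout; assume something was lost
        # and resend the same request.
    raise TimeoutError("no reply after %d tries" % max_tries)
```

The crucial detail is that the client cannot distinguish "my request was lost" from "the server's reply was lost"; it resends in both cases, and the second case is what causes trouble.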
If what was lost was the client's request, there's no problem. But if what was lost was the server's reply to the client's request, there are two potential problems. First, because some NFS operations are not idempotent, an operation that got a successful result the first time will get a failure when retried. Second, an operation that is retried might be acting on a different version of an object than it thinks it is, because someone has modified the object in the meantime.
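The first problem is easy to demonstrate with file removal, a classic non-idempotent operation. Here's a sketch of a naive server-side remove handler (a made-up stand-in for a real NFS REMOVE handler) being hit by a retried duplicate:

```python
import os
import tempfile

def handle_remove(path):
    """A naive server-side remove handler with no memory of
    past requests (hypothetical sketch)."""
    try:
        os.unlink(path)
        return "OK"
    except FileNotFoundError:
        return "ERROR: no such file"

# Simulate a lost reply: the server processes the same request twice.
fd, path = tempfile.mkstemp()
os.close(fd)
first = handle_remove(path)    # the original request: succeeds
second = handle_remove(path)   # the client's retry: fails
```

The file really was removed, but because the success reply was lost, all the client ever sees is the error from the retry.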
This is not a new or obscure issue; it was recognized quite early on in NFS's life. The general workaround is to add a request/reply cache to the NFS server, so that the NFS server can recognize when it gets a duplicate request and just send out another copy of the original reply. Since the cache has a finite size this isn't a sure cure, but in practice it works pretty well.
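The duplicate request cache can be sketched like this; real NFS servers key their cache on things like the client's address and the RPC transaction ID, but this hypothetical version just takes an opaque key:

```python
from collections import OrderedDict

class ReplyCache:
    """A bounded duplicate-request cache (a sketch of the idea, not
    a real NFS server's implementation)."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self.cache = OrderedDict()

    def handle(self, key, compute_reply):
        if key in self.cache:
            # Duplicate request: resend the original reply instead
            # of re-executing the (possibly non-idempotent) operation.
            return self.cache[key]
        reply = compute_reply()
        self.cache[key] = reply
        if len(self.cache) > self.maxsize:
            # Finite cache: evict the oldest entry, which is why
            # this is not a sure cure.
            self.cache.popitem(last=False)
        return reply
```

A retried request whose key is still in the cache gets the saved reply; one whose entry has already been evicted falls through to re-execution, which is exactly the failure mode the text describes.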
(NFS over TCP also helps, because the TCP layer makes things reliable unless you abort and reopen the TCP connection itself.)