A portability gotcha with accept()
Today's excitement came from trying to get a local network daemon frontend up and running on a FreeBSD machine. It would spawn programs when you connected to it, but they always exited immediately after printing their greeting banner; it was as if they were seeing an immediate end of file.
FreeBSD's ktrace revealed that when the spawned programs went to read from standard input, they got an EAGAIN result. Fortunately I've stubbed my toe on this one before: this is the exact symptom of your standard input being set nonblocking on you. But how was this happening?
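If you want to check for this state yourself, a fcntl() F_GETFL call will tell you. Here's a minimal sketch, assuming a POSIX system; this is a hypothetical diagnostic, not code from my frontend:

    /* Report whether standard input has O_NONBLOCK set. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        if (flags == -1) {
            perror("fcntl(F_GETFL)");
            return 1;
        }
        if (flags & O_NONBLOCK)
            printf("stdin is nonblocking; reads can fail with EAGAIN\n");
        else
            printf("stdin is blocking, as programs normally expect\n");
        return 0;
    }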
(The amusing way to hit this is to have a program that sets its normal standard input to nonblocking and forgets to reset it back when it exits, and then run it from your shell. Your shell usually exits immediately, possibly quietly disappearing a terminal window with it.)
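For illustration, here is roughly what such a careless program does, as a minimal sketch. Because O_NONBLOCK lives on the open file description for your terminal, which the shell shares, the shell inherits the damage when this program exits (whether your shell actually dies depends on the shell):

    /* The footgun in miniature: make our (shared) standard input
       nonblocking and exit without restoring the old flags. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        if (flags != -1)
            fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
        /* A polite program would restore 'flags' before exiting;
           this one deliberately doesn't. */
        return 0;
    }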
It turns out that on FreeBSD, accept() copies the properties of the server socket to the newly created socket; these properties include whether or not the socket is nonblocking. On Linux, this doesn't happen; the new socket is created with default properties. My server sockets were set nonblocking, just in case, so FreeBSD set up the new connections as nonblocking, which my frontend cheerfully passed to the newly spawned programs as their standard file descriptors.

Whoops.
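The obvious defense is to stop assuming anything about what accept() hands back and explicitly clear O_NONBLOCK on each new connection before passing it on. A sketch of the idea (hypothetical code, not my frontend's actual fix):

    /* Accept a connection and force it to be blocking, regardless
       of what the new socket inherited from the server socket.
       Returns the new fd, or -1 on error. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int accept_blocking(int server_fd)
    {
        int fd = accept(server_fd, NULL, NULL);
        if (fd == -1)
            return -1;
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1 || fcntl(fd, F_SETFL, flags & ~O_NONBLOCK) == -1) {
            close(fd);
            return -1;
        }
        return fd;
    }

On Linux the F_SETFL is effectively a no-op; on FreeBSD it undoes the inherited nonblocking state. Either way it costs two extra system calls per connection, which is cheap insurance.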
FreeBSD's behavior turns out to be the traditional *BSD behavior, dating back to at least 4.2 BSD, where sockets and accept() first appeared (we happen to have 4.2 BSD source online, since we're university packrats). Linux's behavior seems to be more compliant with the Single Unix Specification, based on its accept(2) manpage, although the ice may be thin.
(This is also covered in Dan Bernstein's Unix portability notes web page.)
Ironically, while Googling about this I found a linux-kernel thread about it here that I'm pretty sure I read at the time, back in 2000. (The thread has a reasonably good discussion of the whole issue.)