The weird effects of Firefox's remote control on Unix
I hope that Firefox's remote control feature is reasonably well known, but I suspect most people don't know about the surprising and relatively weird effects it has in the Unix version of Firefox; certainly it surprised my co-workers when they stumbled across it.
(Remote control is the feature where if you already have a running Firefox and try to start another one, it will just open a new browser window (or tab) in the first one.)
The root of the surprising behavior comes from how Firefox's remote control works. Rather than use a conventional IPC mechanism (Unix domain sockets, for example), instances of Firefox communicate with each other through X properties. Because X properties live on the X server instead of on a particular machine (unlike Unix domain sockets), Firefox remote control follows your X display; a Firefox on a machine that you've ssh'd in to can remote control your local Firefox (and vice versa).
(This is what surprised my co-workers; they expected it to be a per-machine and per-user thing.)
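To illustrate (a sketch; the hostname is made up, and your ssh setup needs X forwarding enabled):

    # local machine: your usual Firefox session is already running
    ssh -X apps.example.com
    # on the remote machine, DISPLAY now points back at your local
    # X server, so this does not start a new browser there:
    firefox http://example.com/
    # instead the page opens as a new window or tab in your local
    # Firefox, which the remote firefox found through X properties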
The default Firefox setup on Unix is quite insistent on using remote control if at all possible, to the point where it is impossible to start two separate copies of Firefox on the same X display, even from different machines. This can periodically be annoying, for example if you really need to do some particular browsing from a specific machine but don't want to shut down your regular Firefox session.
(It can also be puzzling if you don't realize what's going on; you might find that downloaded files aren't where they're supposed to be, or that some machine's web-based control interface just doesn't seem to be responding.)
Fortunately this behavior is all in the firefox wrapper shell script, which you can modify to get around the issue; see the function that sets the ALREADY_RUNNING variable and where that variable gets used. Note that having more than one Firefox running will make any remote control stuff you do potentially confusing, since you don't know which one will get the remote command.
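For what it's worth, versions of Firefox that have the -no-remote option offer a less drastic way to get a genuinely separate instance (a sketch; the 'scratch' profile name is made up):

    # start a second Firefox that neither looks for nor answers
    # remote control requests; it needs its own profile, since two
    # running instances can't share one
    firefox -no-remote -P scratch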
umount -f forces IO errors
Here is something that I had to find the moderately hard way:
When used on an NFS filesystem, Linux's umount -f will force any outstanding IO operations to fail with an error, whether or not the unmount succeeds.
This can be both good and bad, but on the whole I think it's mostly bad.
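Concretely, the effect looks something like this (an illustrative sketch; the mount point is made up and the exact messages vary between versions):

    # a process is busy reading a file under /nfs/data
    umount -f /nfs/data
    umount: /nfs/data: device is busy
    # the unmount itself fails, but the reading process gets back
    # an EIO ('Input/output error') for its outstanding read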
If the NFS server has died completely and the outstanding IO can never succeed, you do want things to abort now instead of hanging around. In this case, umount -f's behavior is what you want, although it doesn't really go far enough; ideally it would force the filesystem to unmount no matter what.
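The closest Linux gets to that is a lazy unmount, which at least detaches the filesystem from the namespace right away (a sketch, with a made-up mount point):

    # detach /nfs/data from the filesystem tree immediately; the
    # real cleanup happens once nothing is using it any more
    umount -l /nfs/data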
umount -f is also commonly used to try to unmount unused NFS filesystems when the NFS server has crashed temporarily and is being recovered. (Traditionally a basic umount will hang if it cannot talk to the NFS server; this may have changed in Linux by now, but if it has, the manpages haven't caught up, and so I suspect that the sysadmin habit of reaching for umount -f will persist.)
If the filesystem is actually in use, you want the unmount attempt to quietly fail. Instead what you get is artificial, unnecessary IO errors for IO that would complete in due time. If you are lucky, programs merely complain loudly to users, alarming them, and manage to recover somehow; otherwise, programs malfunction and your users may lose data.
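If you want to hedge your bets before forcing matters, one approach is to check whether anything is actually using the filesystem first (a sketch; note that fuser may itself hang if the server is truly dead):

    # list the processes with files open on this filesystem; if
    # there are any, a forced unmount will hurt them
    fuser -m /nfs/data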
I believe that umount -f is far more often used in the second case than in the first case, and thus that it causes more problems than it solves.