Wandering Thoughts archives

2010-10-31

How I run two Firefoxes at once and still have remote control

Given how Firefox remote control works on Unix, you might wonder how I routinely run two separate copies of Firefox and still have my remote control environment work reliably enough that I haven't pitched one of the copies out the window long ago.

In theory, the answer is Firefox's -no-remote argument. In practice this doesn't work quite the way you want it to. The problem is that a Firefox instance run with -no-remote will still register itself as a valid target for remote control, alongside your main browser. You can start your second Firefox without problems, but from then onwards your remote control attempts may go to it, not to your main browser where you (probably) want them.

So I cheat. One of my customizations is that I changed the source code to rename the X properties that my main browser uses for remote control, precisely so that I could run it alongside a normal Firefox without the two clashing.

This is obviously only an option for people who compile Firefox from source, but you don't have to go that far; you can just edit the binary. There are two binary files that you need to edit, mozilla-xremote-client and libxul.so, and in each you need to change the following strings:

_MOZILLA_VERSION
_MOZILLA_LOCK
_MOZILLA_COMMAND
_MOZILLA_RESPONSE
_MOZILLA_USER
_MOZILLA_PROFILE
_MOZILLA_PROGRAM
_MOZILLA_COMMANDLINE

It's easy to find the right place to change, because the strings are all embedded together in the binary. Obviously their lengths need to stay the same; my traditional approach is to change the _MOZILLA bit to _MEZILLA.

(I use GNU Emacs in overwrite mode for all of my binary editing needs, but there are probably better alternatives.)
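One such alternative is scripting the edit. Here's a minimal Python sketch of that approach; it rewrites every _MOZILLA-prefixed string in place without changing file sizes, and it assumes you have backups and that nothing else in the binaries matches _MOZILLA that you want left alone.

    #!/usr/bin/env python
    # Patch _MOZILLA -> _MEZILLA in place, preserving file length.
    # Run against copies of mozilla-xremote-client and libxul.so.
    import sys

    def rebrand(path, old=b"_MOZILLA", new=b"_MEZILLA"):
        assert len(old) == len(new), "lengths must stay the same"
        with open(path, "rb") as f:
            data = f.read()
        with open(path, "wb") as f:
            f.write(data.replace(old, new))

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            rebrand(path)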

If you prefer not to make binary edits to Firefox, it turns out that there's another way to do it; Firefox's remote control code checks that the program, the user, and the profile all match before sending a command off to a remote control target. Thus the simple way to avoid all of the remote control problems is to run your testing Firefox with a non-default profile name.

(Manipulating the user name is possible and perhaps easier; you can just set $LOGNAME in your cover script. However, I don't know if this has other effects on Firefox or on things that you may start from inside it.)
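To illustrate the profile approach concretely, here's a minimal Python sketch of a cover script; the 'testing' profile name is one I made up, and the $LOGNAME trick is left commented out because of the unknown side effects just mentioned.

    #!/usr/bin/env python
    # Launch a second, testing Firefox under a non-default profile so
    # that remote control commands aimed at the default profile won't
    # match it. '-no-remote' stops this instance from forwarding its
    # command line to the already-running main browser at startup.
    import os, subprocess

    env = dict(os.environ)
    # The riskier alternative: pretend to be a different user.
    # env["LOGNAME"] = env.get("LOGNAME", "me") + "-test"

    subprocess.call(["firefox", "-no-remote", "-P", "testing"], env=env)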

TwoFirefoxRemoteControl written at 20:50:27

2010-10-22

My theory on Unix's one chance to have a standard GUI

In an earlier entry I discussed my view on why Unix vendors never got together and created a standard GUI the way they created POSIX. It's my view that the Unix world had basically one chance to create a standard GUI but fumbled it through vendor greed, albeit sadly understandable greed.

Let's start with a question: why does Unix have a standard graphical environment? Because it does; for at least two decades now, X Windows has been the ubiquitous Unix way to do graphics (although the appearance and behavior put on top of it have been highly varied). And it's not because X had no competitors; quite the contrary, it had a lot. Most of the leading Unix workstation vendors had their own competing graphical systems (which the vendors typically liked more than X, to boot).

What made X Windows successful anyway was that it was free and widely available for various Unix workstations. That it was free and available to users gave it ubiquity, especially in the early days before vendors liked it very much; that it was free to vendors made it ultimately cheaper for them to build on it (and contribute to its development) than to build everything themselves.

The difference between X Windows (which became a standard) and Unix GUIs (which didn't) is that no one ever set up an equivalent of the X Consortium that gave away a GUI for free. All of the various Unix GUIs were restricted in various ways, and none was as widely and freely available as X itself. Motif licensing was especially egregious and counterproductive, with the OSF doing everything it could to make as much money as possible; my memory is that even the Motif runtime libraries were often an extra-cost Unix vendor add-on. This both reduced the development of local software that used Motif (places that didn't have a Motif runtime license used other toolkits) and significantly limited the usefulness of free software that used Motif.

(LessTif was a fairly big deal back in the days when Motif still mattered much.)

So my view is that the one way for there to have been a standard Unix GUI was for Motif to have been donated to the X Consortium, royalty-free. It might not have been a great GUI (although the Unix competition wasn't either), but it would have been ubiquitous and the ongoing development cost advantage alone would have made it hard for any single vendor to compete (GUIs are expensive to build).

(Somewhat to my surprise, in the course of researching this I discovered that OSF Motif actually is an official standard. That the API is standardized doesn't help in practice, because the expensive bit is the implementation.)

UnixStandardGUIChance written at 01:26:30

2010-10-11

The Unix directory problem and the history of directories

Back in the beginning, Unix was a surprisingly simple, straightforward, and limited operating system. Many things were implemented in pretty much the most straightforward way possible, including the filesystem and especially including directories.

In V7, directories were very simple. Directory entries were 16-byte structures, with 2 bytes for the inode number and 14 for the file name. A directory was just a linear array of some number of these structures (stored in a series of disk blocks like a regular file); unused entries had an inode number of zero. BSD complicated things, but only a bit; when it added long file names in its 'Fast File System' modifications, it changed things only enough to introduce variable-length filenames instead of fixed-length ones (the idea of using fixed-size 256-byte structures when most filenames are much shorter was clearly absurd). Various reimplementations of these same basic concepts, especially Linux's ext2 filesystem, have basically kept the same approach.
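To make the V7 layout concrete, here's a small Python sketch that decodes a raw directory image into (inode, name) pairs. The little-endian byte order is my assumption (it matches the PDP-11); everything else follows the 2-plus-14 byte layout above.

    import struct

    # One V7 directory entry: 2-byte inode number, 14-byte name field.
    DIRENT = struct.Struct("<H14s")

    def read_dir(data):
        """Yield (inode, name) for the used entries in raw directory data."""
        for off in range(0, len(data), DIRENT.size):
            ino, raw = DIRENT.unpack_from(data, off)
            if ino != 0:                    # inode 0 marks an unused slot
                yield ino, raw.rstrip(b"\0")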

The problem with directories in all of the variants of the basic V7 filesystem design is that at heart they are linear arrays, and thus suffer from all of the general drawbacks of (unsorted) linear arrays. To find a file that's present in a directory, the kernel must scan an average of half of the directory; to verify that a file is not present (including before creating it), the kernel must scan all of the directory. Because we're dealing with disks, such scans may need to do disk IO, with the attendant delays.

(One effect of this is that large directories hurt much more on a busy system than on a quiet one. On a quiet one with only a few large directories, their data blocks may well stay cached in RAM; on a busy one, that's much less likely due to the higher cache pressure. You can easily reach an overload point that kills performance.)
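Here's a toy Python illustration of those scan costs, with an in-memory list standing in for the directory's data blocks (and thus minus the disk IO that makes the real thing hurt):

    def lookup(entries, name):
        # On average we touch half the entries to find a name, and all
        # of them to prove a name absent, which is exactly what file
        # creation has to do first.
        for ino, ename in entries:
            if ename == name:
                return ino
        return None

    entries = [(2, b"."), (2, b".."), (27, b"passwd")]
    lookup(entries, b"passwd")   # scans to the third entry
    lookup(entries, b"shadow")   # scans all three and fails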

For the relatively small systems V7 was designed for, this was not a serious issue (or if it was, it was a tolerable one); disks were small, systems were modest, and users were sensible. But as Unix grew, the drawbacks of linear searches that involve disk IO became more and more readily apparent, and Unix vendors started trying to do something about it.

Unfortunately, doing better is complicated (for reasons that deserve another entry all by themselves), and so progress on this for existing filesystems and filesystem designs has generally been slow. Partly this is because the problem has usually not been urgent; most systems are still relatively small, and Unix users have learned not to use really big directories because the performance is so bad.

UnixLinearDirectories written at 01:32:00

