2010-09-27
Why there is no POSIX standard for a Unix GUI
The other day I was reading A Tale of Two Standards, where Jeremy Allison mentions in passing:
Interestingly enough the SUS doesn't cover such things as the graphical user interface (GUI's) elements, as the history of Unix as primarily a server operating system meant that GUI's were never given the importance needed for Unix to become a desktop system.
I don't think that this is the real reason that there is no POSIX or equivalent standard for Unix graphical user interfaces. Because I am a jaundiced and cynical person, I think that the real reason GUIs were not standardized is that they could not be, because they were being viciously fought over by the vendors all through the period when Unix standards were being created.
POSIX is effectively a common core of Unix, ie what everyone could more or less agree on and was basically already doing. Where it wasn't that, it was in an area (such as regular expressions or threading) where no one was really doing anything and so there were going to be no active losers. Where POSIX required that vendors change, the changes were generally small (and often could be contained off in a corner that didn't disturb the rest of the system).
This did not describe GUIs in the Unix standardization period, not in the least. There was no common core; indeed, for a long time there was not even agreement that the underlying graphical system would be X Windows. Standardizing a GUI would have meant either inventing one from scratch or picking one vendor's GUI to basically win; this would screw either all vendors or all but one vendor (or group, in the case of Motif).
Compounding the problem was licensing. GUIs are big and complex, which means that they are expensive to develop, which means that everyone who had one wanted money to let other people use it (well, their code for it). This significantly increases the stakes involved in the standardization of one; if you win, you probably get to collect a bunch of licensing money, and if you lose, as a practical matter you probably have to pay a rival a bunch of money. Is it any wonder that no vendor was interested in losing?
(This is the political perspective. There was also a technical perspective, which I would characterize as saying that GUI standardization was vastly premature at the time because we didn't know enough to make a good, enduring GUI standard, in either appearance or API. It may even be premature now, given that all of the de facto standard GUIs today are still evolving and changing.)
2010-09-25
How Usenet used to be a filesystem stress test
Once upon a time, there was Usenet.
Wait, that's not far enough back. Once upon a time, Usenet software
used the simplest, most straightforward way to store articles. Each
newsgroup was a separate directory in the obvious directory hierarchy
(so rec.arts.anime.misc was rec/arts/anime/misc under the news
spool root directory) and each article was a file in that directory.
Cross-posted articles were hardlinked between all of the newsgroup
directories.
(Given that hardlinks can't cross filesystem boundaries, you may notice an assumption here. Yes, this caused problems in the not too long run.)
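To make the layout concrete, here is a minimal sketch in Python of how such a server might file an article and hardlink its cross-posts. This is only an illustration; the spool root, function names, and article numbering are made up (real servers took article numbers from their active file and were written in C):

    import os

    SPOOL = "/var/spool/news"      # hypothetical spool root

    def group_dir(newsgroup):
        # rec.arts.anime.misc -> /var/spool/news/rec/arts/anime/misc
        return os.path.join(SPOOL, *newsgroup.split("."))

    def store_article(text, postings):
        # postings is a list of (newsgroup, article number) pairs; this
        # sketch assumes the group directories already exist.
        first_group, first_num = postings[0]
        first_path = os.path.join(group_dir(first_group), str(first_num))
        with open(first_path, "w") as f:
            f.write(text)
        # A cross-posted article is the same file hardlinked into each
        # additional newsgroup's directory, which only works if the
        # entire spool lives on a single filesystem.
        for group, num in postings[1:]:
            os.link(first_path, os.path.join(group_dir(group), str(num)))
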
Once Usenet started having much volume, this design turned Usenet spool filesystems into a marvelous (or hideous) worst case stress test for filesystem code:
- active newsgroups might have tens of thousands of articles, which
meant tens of thousands of entries in a single directory. At the
time when this started happening, all filesystems used linear
searches through directory data when looking up names.
(I believe but am not completely sure that Usenet was a major driving force behind the initial work on non-linear directory lookups.)
- file creates were usually randomly distributed around these directories, partly because servers generally made no attempt to batch articles from one newsgroup together when they propagated things around.
- file deletes were semi-random; articles might expire earlier or
later than other articles in the same newsgroup for various reasons.
(The first Usenet software did truly random file deletes; later software at least ordered the article deletions based on what directory they were in.)
- for a long time, the files were quite small (Usenet spools
often needed the inode to data ratio adjusted to create more inodes).
Once alt.binaries got active, the size distribution was extremely
lumpy; a bunch of small files, a lot of very large ones, and very
little in the middle.
- Usenet was effectively write-mostly random IO (at many sites, most
Usenet articles were never read except by the system). Even when
read IO was 'sequential' in some sense, as someone read through a
bunch of articles in a single newsgroup, it wasn't at the simple
OS level because of the small separate files.
(Just to trip filesystems up, there were some large files that were read sequentially.)
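If you wanted to recreate this workload today, a crude stress generator along the following lines would do it. This is only a sketch; the directory count, file sizes, and expiry policy are invented numbers for illustration, not measurements from any real spool:

    import os
    import random

    SPOOL = "/tmp/fake-spool"                       # made-up test location
    GROUPS = ["group%04d" % i for i in range(500)]  # stand-ins for newsgroups

    def stress(iterations=100000, max_live=20000):
        for g in GROUPS:
            os.makedirs(os.path.join(SPOOL, g), exist_ok=True)
        counters = dict.fromkeys(GROUPS, 0)
        live = []                         # existing articles, oldest first
        for _ in range(iterations):
            # creates land in effectively random directories
            g = random.choice(GROUPS)
            counters[g] += 1
            path = os.path.join(SPOOL, g, str(counters[g]))
            # mostly small articles, with the occasional huge binary post
            size = 2000 if random.random() < 0.9 else 2000000
            with open(path, "wb") as f:
                f.write(b"x" * size)
            live.append(path)
            # expiry removes older articles in a semi-random order
            if len(live) > max_live:
                os.remove(live.pop(random.randrange(len(live) // 2)))

    if __name__ == "__main__":
        stress()
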
Really, Usenet spools had it all, especially once the alt hierarchy got rolling. Now you may have a better understanding of why I said earlier that an old-style Usenet filesystem would be a ZFS scrub worst case.
(And it is not surprising that the traditional Usenet spool format was eventually replaced by a more optimized storage format in INN.)