Wandering Thoughts

2018-04-18

The sensible way to use Bourne shell 'here documents' in pipelines

I was recently considering a shell script where I might want to feed a Bourne shell 'here document' to a shell pipeline. This is certainly possible and years ago I wrote an entry on the rules for combining things with here documents, where I carefully wrote down how to do this and the general rule involved. This time around, I realized that I wanted to use a much simpler and more straightforward approach, one that is obviously correct and is going to be clear to everyone. Namely, putting the production of the here document in a subshell.

(
cat <<EOF
your here document goes here
with as much as you want.
EOF
) | sed | whatever

This is not as neat and nominally elegant as taking advantage of the full power of the Bourne shell's arcane rules, and it's probably not as efficient (in at least some sh implementations, you may get an extra process), but I've come around to feeling that that doesn't matter. This may be the brute force solution, but what matters is that I can look at this code and immediately follow it, and I'm going to be able to do that in six months or a year when I come back to the script.

(Here documents are already kind of confusing as it stands without adding extra strangeness.)

Of course you can put multiple things inside the (...) subshell, such as several here documents that you output only conditionally (or chunks of always present static text mixed with text you have to make more decisions about). If you want to process the entire text you produce in some way, you might well generate it all inside the subshell for convenience.
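
For instance, here's a hedged sketch of that conditional case (the $HOST and $URGENT variables and the trailing sed command are made-up illustrations, not something from a real script of mine):

# Everything emitted inside the (...) goes through sed as one stream.
(
cat <<EOF
This part of the message is always present and mentions $HOST.
EOF
if [ "$URGENT" = yes ]; then
  cat <<EOF

This extra paragraph only shows up when things are urgent.
EOF
fi
) | sed 's/^/> /'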

Perhaps you're wondering why you'd want to run a here document through a pipe to something. The case that frequently comes up for me is that I want to generate some text with variable substitution, but I also want the result to flow nicely with sensible line lengths, and the expansions will have varying lengths. Here, the natural way out is to use fmt:

(
cat <<EOF
My message to $NAME goes here.
It concerns $HOST, where $PROG
died unexpectedly.
EOF
) | fmt

Using fmt reflows the text regardless of how long the variables expand out to. Depending on the text I'm generating, I may be fine with reflowing all of it (which means that I can put all of the text inside the subshell), or I may have some fixed formatting that I don't want passed through fmt (so I have to have a mix of fmt'd subshells and regular text).
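
As a rough sketch of that mixed case (the wording, $HOST, and the df command are just illustrative), the variable-length paragraphs go through fmt while the fixed-format chunk is printed directly:

(
cat <<EOF
The following filesystems on $HOST are getting rather full and will
need attention soon.
EOF
) | fmt

# Fixed-format output that must not be reflowed, so it skips fmt.
df -h /var /tmp

(
cat <<EOF
This report was generated automatically on $HOST; you know where the
complaints department is.
EOF
) | fmt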

Having written that out, I've just come to the obvious realization that for simple cases I can just directly use fmt with a here document:

fmt <<EOF
My message to $NAME goes here.
It concerns $HOST, where $PROG
died unexpectedly.
EOF

This doesn't work well if there are some paragraphs that I want to include only some of the time, though; then I should still be using a subshell.

(For whatever reason I apparently have a little blind spot about using here documents as direct input to programs, although there's no reason for it.)

SaneHereDocumentsPipelines written at 23:05:30

2018-04-16

Some notes and issues from trying out urxvt as an xterm replacement

I've been using xterm for a very long time, but I'm also aware that it's not a perfect terminal emulator (especially in today's Unicode world, my hacks notwithstanding). Years ago I wrote up what I wanted added to xterm, and the recommendation I've received over the years (both on that entry and elsewhere) is for urxvt (aka rxvt-unicode). I've experimented with urxvt off and on, but recently, for various reasons, I've been trying a bit more seriously to use it regularly and to evaluate it as a real alternative to xterm for me.

One of my crucial needs in an xterm replacement is an equivalent of xterm's ziconbeep feature, which I use to see when an iconified xterm has new output. Fortunately that need was met a long time ago through a urxvt Perl extension written by Leah Neukirchen; you can get the extension itself here. In my version I took out the audible bell. Without this, urxvt wouldn't be a particularly viable option for me, so I'm glad that it exists.

Urxvt's big draw as an xterm replacement is that it will reflow lines as you widen and narrow it. However, for a long time this didn't seem to work for me, or didn't seem to work reliably. Back in September I finally discovered that the issue is that urxvt only reflows lines after a resize if it has already scrolled text in the window. This is the case both for resizing wider and for resizing narrower, which can be especially annoying (since resizing wider can sometimes 'un-scroll' a window). This is something that I can sort of work around; these days I often make it a point to start out my urxvt windows in their basic 80x24 size, dump out the output that I'll want, and only then resize them to read the long lines. This mostly works but it's kind of irritating.

(I'm not sure if this is a urxvt bug or a deliberate design decision. Perhaps I should try reporting it to find out.)

Another difference is that xterm has relatively complicated behavior on double-clicks for what it considers to be separate 'words'; you can read the full details in the manpage's section on character classes. Urxvt has somewhat simpler behavior based on delimiter characters, and its default set of delimiters makes it select bigger 'words' than xterm does. For instance, a standard urxvt setup will consider all of a full path to be one word, because / is not a delimiter character (neither is :, so all of your $PATH is one word as far as urxvt is concerned). I'm highly accustomed to xterm's behavior and I prefer smaller words here, because it's much easier to widen a selection than it is to narrow it. You can customize some of this behavior with urxvt's cutchars resource (see the urxvt manpage). Currently I'm using:

! requires magic quoting for reasons.
URxvt*cutchars:   "\\`\"'&()*,;<=>?@[]^{|}.#%+!/:-"

This improves the situation in urxvt but isn't perfect; in practice I see various glitches, generally when several of these delimiters happen in a row (eg given 'a...', a double-click in urxvt may select up to the entire thing). Since I'm using the default selection Perl extension, possibly I could improve things by writing some complicated regular expressions (or replace the selection extension entirely with a more controllable version where I understand exactly what it's doing). If I want to exactly duplicate xterm's behavior, a Perl extension is probably the only way to achieve it.

(I'm not entirely allergic to writing Perl extensions for urxvt, but it's been a long time since I wrote Perl and I'm not familiar with the urxvt extensions API, so at a minimum it's going to be a pain.)

Given these issues I'm not throwing myself into a complete replacement of my xterm usage with urxvt, but I am reaching for it reasonably frequently and I've taken steps to make it easier to use in my environment. This involves both making it as conveniently accessible as xterm and teaching various bits of my window manager configuration and scripting that urxvt is a terminal window and should be treated like xterm.

This whole thing has been an interesting experience overall. It's taught me both how much I'm attuned to very specific xterm behaviors and how deeply xterm has become embedded into my overall X environment.

UrxvtNotes written at 00:26:59

2018-03-04

The value locked up in the Unix API makes it pretty durable

Every so often someone proposes or muses about replacing Unix with something more modern and better, or is surprised when new surface OSes (such as ChromeOS) are based on Unix (often Linux, although not always). One reason that this keeps happening and that some form of Unix is probably going to be with us for decades to come is that there is a huge amount of value locked up in the Unix API, and in more ways than are perhaps obvious.

The obvious way that a great deal of value is locked up in the Unix API is the kernels themselves. Whether you look at Linux, FreeBSD, OpenBSD, or even one of the remaining commercial Unixes, all of their kernels represent decades of developer effort. Some of this effort is in the drivers, many of which you could do without in an OS written from scratch for relatively specific hardware, but a decent amount of the effort is in core systems like physical and virtual memory management, process handling, interprocess communication, filesystems and block level IO handling, modern networking, and so on.

However, this is just the tip of the iceberg. The bigger value of the Unix API is in everything that runs on top of it. This comes in at least two parts. The first part is all of the user level components involved in booting and running Unix and everything that supports them, especially if you include the core of a graphical environment (such as some form of display server). The second part is all of the stuff that you run on your Unix as its real purpose for existing, whether this is Apache (or some other web server), a database engine, your own custom programs (possibly written in Python or Ruby or whatever), and so on. It's also the support programs for this, which blur the lines between the 'system' and being productive with it: a mailer, a nice shell, an IMAP server, and so on. Then you can add an extra layer of programs used to monitor and diagnose the system, and another set of programs if you develop on it or even just edit files. And if you want to use the system as a graphical desktop, there is an additional stack of components and programs that all use aspects of the Unix API either directly or indirectly.

All of these programs represent decades or perhaps centuries of accumulated developer effort. Throwing away the Unix API in favour of something else means either doing without these programs, rewriting your own versions from scratch, or porting them and everything they depend on to your new API. Very few people can afford to even think about this, much less undertake it for a large scale environment such as a desktop. Even server environments are relatively complex and multi-layered in practice.

(Worse, some of the Unix API is implicit instead of being explicitly visible in things like system calls. Many programs will expect a 'Unix' to handle process scheduling, memory management, TCP networking, and a number of other things in pretty much the same way that current Unixes do. If your new non-Unix has the necessary system calls but behaves significantly differently here, programs may run but not perform very well, or even malfunction.)

Also, remember that the practical Unix API is a lot more than system calls. Something like Apache or Firefox pretty much requires a large amount of the broad Unix API, not just the core system calls and C library, and as a result you can't get them up on your new system just by implementing a relatively small and confined compatibility layer. (That's been tried in the past and pretty much failed in practice, and is one reason why people almost never write programs to strict POSIX and nothing more.)

(This elaborates on a tweet of mine that has some additional concrete things that you'd be reimplementing in your non-Unix.)

UnixAPIDurableValue written at 18:51:44

The practical Unix API is more than system calls (or POSIX)

What is the 'Unix API'? Some people would be tempted to say that this is straightforward; depending on your perspective it's either the relatively standard set of core Unix system calls or the system calls and library functions required by POSIX. This answer is not wrong at one level, but in practice it is not a useful one.

As people have found out in the past, the real Unix API is the whole collection of behaviors and environments that Unix programs assume. It isn't just POSIX library calls; it's also the shell and standard utilities and files that are in known locations and standard capabilities and various other things. A 'Unix' without a useful $HOME environment variable and /tmp may be specification compliant (I haven't checked POSIX) but it's not useful, in that many programs that people want generally won't run on it.

In practice the Unix API is the entire Unix environment. What constitutes the 'Unix' environment instead of the environment specific to a particular flavour of Unix is an ever-evolving topic. Once upon a time mmap() was not part of the Unix environment (cf); today it absolutely is. I'm pretty certain that once upon a time the -o flag to egrep was effectively Linux specific (as it relied on egrep being GNU grep); today it's much closer to being part of Unix, as many Unixes either have GNU grep as egrep or have added support for -o. And so it goes, with the overall Unix API moving forward through de facto evolution.
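
As a small, hedged illustration of leaning on this de facto API (the details below are made up, not from the entry): the fragment assumes a meaningful $HOME, a writable /tmp, and an egrep that accepts -o, all of which come from the de facto Unix environment rather than a strictly minimal reading of the standards.

#!/bin/sh
# Illustrative only: quietly relies on the broad de facto Unix
# environment -- a usable $HOME, a writable /tmp, and an egrep
# with the (originally GNU) -o flag that prints only the matches.
scratch="/tmp/words.$$"
egrep -o '[A-Za-z][A-Za-z]*' "$HOME/.profile" >"$scratch"
sort -u "$scratch"
rm -f "$scratch"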

Unless you intend for your program to be narrowly and specifically portable to POSIX or an even more minimal standard, it is not a bug for it to rely on portions of the broader, de facto Unix API. It's not even necessarily a bug to rely on APIs that are only there on some Unixes (for example Linux and FreeBSD), although it may limit how widely your program spreads. Even somewhat narrow API choices are not necessarily bugs; you may have decided to be limited in your portability or to at least require some common things to be available.

(The Go build process requires Bash on Unix, for example, although it doesn't require that /bin/sh is Bash.)

PS: This is a broader sense of the 'Unix API' (and a different usage) than I used when I wrote about whether the C runtime and library was a legitimate part of the Unix API. The broad Unix API is and always has been layered, and things like Go are deliberately implementing their own API on top of one of the lower layers. In a way, my earlier entry was partly about how separate the layers of the broad Unix API have to be; for example, can you implement a compatible and fully capable Bourne shell using only public Unix kernel APIs, or at most public C library APIs?

(Many people would say that a system where you could not do this was not really 'Unix', even if it complied with POSIX standards.)

UnixAPIMoreThanSyscalls written at 01:03:15

2018-02-18

Memories of MGR

I recently got into a discussion of MGR on Twitter (via), which definitely brings back memories. MGR is an early Unix windowing system, originally dating from somewhere between 1987 and 1989 (depending on whether you count from the Usenix presentation, when people got to hear about it, or from its posting to comp.sources.unix, when people could get their hands on it). If you know the dates for Unix windowing systems you know that this overlaps with X (both X10 and then X11), which is part of what makes MGR special and nostalgic and what gave it its peculiar appeal at the time.

MGR was small and straightforward at a time when that was not what other Unix window systems were (I'd say it was slipping away with X10 and X11, but let's be honest, Sunview was not small or straightforward either). Given that it was partially inspired by the Blit and had a certain amount of resemblance to it, MGR was also about as close as most people could come to the kind of graphical environment that the Bell Labs people were building in Research Unix.

(You could in theory get a DMD 5620, but in reality most people had far more access to Unix workstations that you could run MGR on than they did to a 5620.)

On a practical level, you could use MGR without having to set up a complicated environment with a lot of moving parts (or compile a big system). This generally made it easy to experiment with (on hardware it supported) and to keep it around as an alternative for people to try out or even use seriously. My impression is that this got a lot of people to at least dabble with MGR and use it for a while.

Part of MGR being small and straightforward was that it also felt like something that was by and for ordinary mortals, not the high peaks of X. It ran well on ordinary machines (even small machines) and it was small enough that you could understand how it worked and how to do things in it. It also had an appealingly simple model of how programs interacted with it; you basically treated it like a funny terminal, where you could draw graphics and do other things by sending escape sequences. As mentioned in this MGR information page, this made it network transparent by default.

MGR was not a perfect window system and in many ways it was a quite limited one. But it worked well in the 'all the world's a terminal' world of the late 1980s and early 1990s, when almost all of what you did even with X was run xterms, and it was often much faster and more minimal than the (fancier) alternatives (like X), especially on basic hardware.

Thinking of MGR brings back nostalgic memories of a simpler time in Unix's history, when things were smaller and more primitive but also bright and shiny and new and exciting in a way that's no longer the case (now they're routine and Unix is everywhere). My nostalgic side would love a version of MGR that ran in an X window, just so I could start it up again and play around with it, but at the same time I'd never use it seriously. Its day in the sun has passed. But it did have a day in the sun, once upon a time, and I remember those days fondly (even if I'm not doing well about explaining why).

(We shouldn't get too nostalgic about the old days. The hardware and software we have today is generally much better and more appealing.)

MGRMemories written at 02:00:03

2018-02-02

X's network transparency was basically free at the time

I recently wrote an entry about how X's network transparency has wound up mostly being a failure for various reasons. However, there is an important flipside to the story of X's network transparency, and that is that X's network transparency was almost free at the time and in the context in which it was created. Unlike the situation today, in the beginning X did not have to give up lots of performance or other things in order to get network transparency.

X originated in the mid 1980s and it was explicitly created to be portable across various Unixes, especially BSD-derived ones (because those were what universities were mostly using at that time). In the mid to late 1980s, Unix had very few IPC methods, especially portable ones. In particular, BSD systems did not have shared memory (it was called 'System V IPC' for the obvious reasons). BSD had TCP and Unix sockets, some System V machines had TCP (and you could likely assume that more would get it), and in general your safest bet was to assume some sort of abstract stream protocol and then allow for switchable concrete backends. Unsurprisingly, this is exactly what X did; the core protocol is defined as a bidirectional stream of bytes over an abstracted channel.

(And the concrete implementation of $DISPLAY has always let you specify the transport mechanism, as well as allowing your local system to pick the best mechanism it has.)
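
For example, with the usual $DISPLAY forms (the remote host name here is made up for illustration):

DISPLAY=:0 xterm &                    # local display, best local transport
DISPLAY=unix:0 xterm &                # explicitly the local Unix domain socket
DISPLAY=apps.example.com:0 xterm &    # TCP to display 0 on a remote machine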

Once you've decided that your protocol has to run over abstracted streams, it's not that much more work to make it network transparent (TCP provides streams, after all). X could have refused to make the byte order of the stream clear or required the server and the client to have access to some shared files (eg for fonts), but I don't think either would have been a particularly big win. I'm sure that it took some extra effort and care to make X work across TCP from a different machine, but I don't think it took very much.

(At the same time, my explanation here is probably a bit ahistorical. X's initial development seems relatively strongly tied to sometimes having clients on different machines than the display, which is not unreasonable for the era. But it doesn't hurt to get a feature that you want anyway for a low cost.)

I believe it's important here that X was intended to be portable across different Unixes. If you don't care about portability and can get changes made to your Unix, you can do better (for example, you can add some sort of shared memory or process to process virtual memory transfer). I'm not sure how the 1980s versions of SunView worked, but I believe they were very SunOS dependent. Wikipedia says SunView was partly implemented in the kernel, which is certainly one way to both share memory and speed things up.

PS: Sharing memory through mmap() and friends was years in the future at this point and required significant changes when it arrived.

XFreeNetworkTransparency written at 01:12:50

2018-01-26

X's network transparency has wound up mostly being a failure

I was recently reading Mark Dominus's entry about some X keyboard problems, in which he said in passing (quoting himself):

I have been wondering for years if X's vaunted network transparency was as big a failure as it seemed: an interesting idea, worth trying out, but one that eventually turned out to be more trouble than it was worth. [...]

My first reaction was to bristle, because I use X's network transparency all of the time at work. I have several programs to make it work very smoothly, and some core portions of my environment would be basically impossible without it. But there's a big qualification on my use of X's network transparency, namely that it's essentially all for text. When I occasionally go outside of this all-text environment of xterms and emacs and so on, it doesn't go as well.

X's network transparency was not designed as 'it will run xterm well'; originally it was meant to be something that would let you run almost everything remotely, providing a full environment. Even apart from the practical issues covered in Daniel Stone's slide presentation, it's clear that it's been years since X could deliver a real first class environment over the network. You cannot operate with X over the network in the same way that you do locally. Trying to do so is painful and involves many things that either don't work at all or perform so badly that you don't want to use them.

In my view, there are two things that did in general X network transparency. The first is that networks turned out to not be fast enough even for ordinary things that people wanted to do, at least not the way that X used them. The obvious case is web browsers; once the web moved to lots of images and worse, video, that was pretty much it, especially with 24-bit colour.

(It's obviously not impossible to deliver video across the network with good performance, since YouTube and everyone else does it. But their video is highly encoded in specialized formats, not handled by any sort of general 'send successive images to the display' system.)

The second is that the communication facilities that X provided were too narrow and limited. This forced people to go outside of them in order to do all sorts of things, starting with audio and moving on to things like DBus and other ways of coordinating environments, handling sophisticated configuration systems, modern fonts, and so on. When people designed these additional communication protocols, the result generally wasn't something that could be used over the network (especially not without a bunch of setup work that you had to do in addition to remote X). Basic X clients that use X properties for everything may be genuinely network transparent, but there are very few of those left these days.

(Not even xterm is any more, at least if you use XFT fonts. XFT fonts are rendered in the client, and so different hosts may have different renderings of the same thing, cf.)

What remains of X's network transparency is still useful to some of us, but it's only a shadow of what the original design aimed for. I don't think it was a mistake for X to specifically design it in (to the extent that they did, which is less than you might think), and it did help X out pragmatically in the days of X terminals, but that's mostly it.

(I continue to think that remote display protocols are useful in general, but I'm in an unusual situation. Most people only ever interact with remote machines with either text mode SSH or a browser talking to a web server on the remote machine.)

PS: The X protocol issues with synchronous requests that Daniel Stone talks about don't help the situation, but I think that even with those edges sanded off X's network transparency wouldn't be a success. Arguably X's protocol model committed a lesser version of part of the NeWS mistake.

XNetworkTransparencyFailure written at 01:55:10

2018-01-16

You could say that Linux is AT&T's fault

Recently on Twitter, I gave in to temptation. It went like this:

@thatcks: Blog post: Linux's glibc monoculture is not a bad thing (tl;dr: it's not a forced monoculture, it's mostly people naturally not needlessly duplicating effort)

@tux0r: Linux is duplicate work (ref.: BSD) and they still don't stop making new ones. :(

@oclsc: But their license isn't restrictive enough to be free! We HAVE to build our own wheel!

@thatcks: I believe you can direct your ire here to AT&T, given the origins and early history of Linux. (Or I suppose you could criticize the x86 BSDs.)

My tweet deserves some elaboration (and it turns out to be a bit exaggerated, because I mis-remembered the timing).

If you're looking at how we have multiple free Unixes today, with some descended from 4.x BSD and one written from scratch, it's tempting and easy to say that the people who created Linux should have redirected their efforts to helping develop the 4.x BSDs. Setting aside the licensing issues, this view is ahistorical, because Linux was pretty much there first. If you want to argue that someone was duplicating work, you have a decent claim that it's the BSDs who should have thrown their development effort in with Linux instead of vice versa. And beyond that, there's a decent case to be made that Linux's rise is ultimately AT&T's fault.

The short version of the history is that at the start of the 1990s, it became clear that you could make x86 PCs into acceptable inexpensive Unix machines. However, you needed a Unix OS in order to make this work, and there was no good inexpensive (or free) option in 1991. So, famously, Linus Torvalds wrote his own Unix kernel in mid 1991. This predated the initial releases of 386BSD, which came in 1992. Since 386BSD came from the 4.3BSD Net/2 release, it's likely that it was more functional than the initial versions of Linux. If things had proceeded unimpeded, perhaps it would have taken the lead from Linux and become the clear winner.

Unfortunately this is where AT&T comes in. At the same time as 386BSD was coming out, BSDI, a commercial company, was selling their own Unix derived from 4.3BSD Net/2 without having a license from AT&T (on the grounds that Net/2 didn't contain any code with AT&T copyrights). BSDI was in fact being somewhat cheeky about it; their 1-800 sales number was '1-800-ITS-UNIX', for example. So AT&T sued them, later extending the lawsuit to UCB itself over the distribution of Net/2. Since the lawsuit alleged that 4.3BSD Net/2 contained AT&T proprietary code, it cast an obvious cloud over everything derived from Net/2, 386BSD included.

The lawsuit was famous (and infamous) in the Unix community at the time, and there was real uncertainty over how it would be resolved for several crucial years. The Wikipedia page is careful to note that 386BSD was never a party to the lawsuit, but I'm pretty sure this was only because AT&T didn't feel the need to drag them in. Had AT&T won, I have no doubt that there would have been some cease & desist letters going to 386BSD and that would have been that.

(While Dr Dobb's Journal published 386BSD Release 1.0 in 1994, they did so after the lawsuit was settled.)

I don't know for sure if the AT&T lawsuit deterred people from working on 386BSD and tilted them toward working on Linux (and putting together various early distributions). There were a number of things going on at the time beyond the lawsuit, including politics in 386BSD itself (see eg the FreeBSD early history). Perhaps 386BSD would have lost out to Linux even without the shadow of the lawsuit looming over it, simply because it was just enough behind Linux's development and excitement. But I do think that you can say AT&T caused Linux and have a decent case.

(AT&T didn't literally cause Linux to be written, because the lawsuit was only filed in 1992, after Torvalds had written the first version of his kernel. You can imagine what-if scenarios about an earlier release of Net/2, but given the very early history of Linux I'm not sure it would have made much of a difference.)

LinuxIsATTsFault written at 00:07:10

2017-12-31

Is the C runtime and library a legitimate part of the Unix API?

One of the knocks against Go is, to quote from Debugging an evil Go runtime bug (partly via):

Go also happens to have a (rather insane, in my opinion) policy of reinventing its own standard library, so it does not use any of the standard Linux glibc code to call vDSO, but rather rolls its own calls (and syscalls too).

Ordinary non-C languages on Unixes generally implement a great many low level operations by calling into the standard C library. This starts with things like making system calls, but also includes operations such as getaddrinfo(3). Go doesn't do this; it implements as much as possible itself, going straight down to direct system calls in assembly language. Occasionally there are problems that ensue.

A few Unixes explicitly say that the standard C library is the stable API and point of interface with the system; one example is Solaris (and now Illumos). Although they don't casually change the low level system call implementation, as far as I know Illumos officially reserves the right to change all of their actual system calls around, breaking any user space code that isn't dynamically linked to libc. If your code breaks, it's your fault; Illumos told you that dynamic linking to libc is the official API.

Other Unixes simply do this tacitly and by accretion. For example, on any Unix using nsswitch.conf, it's very difficult to always get the same results for operations like getaddrinfo() without going through the standard C library, because these may use arbitrary and strange dynamically loaded modules that are accessed through libc and require various random libc APIs to work. This points out one of the problems here; once you start (indirectly) calling random bits of the libc API, they may quite reasonably make assumptions about the runtime environment that they're operating in. How to set up a limited standard C library runtime environment is generally not documented; instead the official view is generally 'let the standard C library runtime code start your main() function'.
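
As a concrete, hedged illustration of that indirection on a glibc system (getent is the glibc utility that exercises these same libc lookup paths):

# Both lookups below run through the NSS machinery configured in
# /etc/nsswitch.conf, which dynamically loads whatever backend
# modules are listed there -- all inside libc.
grep '^hosts:' /etc/nsswitch.conf
getent hosts localhost
getent passwd root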

I'm not at all sure that all of this requirement and entanglement with the standard C library and its implicit runtime environment is a good thing. The standard C library's runtime environment is designed for C, and it generally contains a tangled skein of assumptions about how things work. Forcing all other languages to fit themselves into these undocumented constraints is clearly confining, and the standard C library generally isn't designed to be a transparent API; in fact, at least GNU libc deliberately manipulates what it does under the hood to be more useful to C programs. Whether these manipulations are useful or desired for your non-C language is an open question, but the GNU libc people aren't necessarily going to even document them.

(Marcan's story shows that the standard C library behavior would have been a problem for any language environment that attempted to use minimal stacks while calling into 'libc', here in the form of a kernel vDSO that's designed to be called through libc. This also shows another aspect of the problem, in that as far as I know how much stack space you must provide when calling the standard C library is generally not documented. It's just assumed that you will have 'enough', whatever that is. C code will; people who are trying to roll their own coroutines and thread environment, maybe not.)

This implicit assumption has a long history in Unix. Many Unixes have only really documented their system calls in the form of the standard C library interface to them, quietly eliding the distinction between the kernel API to user space and the standard C library API to C programs. If you're lucky, you can dig up some documentation on how to make raw system calls and what things those raw system calls return in unusual cases like pipe(2). I don't think very many Unixes have ever tried to explicitly and fully document the kernel API separately from the standard C library API, especially once you get into cases like ioctl() (where there are often C macros and #defines that are used to form some of the arguments, which are of course only 'documented' in the C header files).

UnixAPIAndCRuntime written at 17:24:55

2017-12-25

There were Unix machines with real DirectColor graphics hardware

As I mentioned yesterday, one of the questions I wound up with was whether there ever was any Unix graphics hardware that actually used X11's unusual DirectColor mode. Contrary to what you might expect from its name, DirectColor is an indirect color mode, but one where the red, green, and blue parts of a pixel's colour value index separate color maps.

The short version of the answer is yes. Based on picking through the X11R6 source code, there were at least two different makes of Unix machines that had hardware support for DirectColor visuals. The first is (some) Apple hardware that ran A/UX. Quoting from xc/programs/Xserver/hw/MacII/README:

These Apple X11 drivers support 1, 8, and 24 bit deep screens on the Macintosh hardware running A/UX. Multiple screens of any size and both depths are accommodated. The 24 bit default visual is DirectColor, and there is significant color flash when shifting between TrueColor and DirectColor visuals in 24 bit mode.

Based on a casual perusal of Wikipedia, it appears that some Quadra and Centris series models supported 24-bit colour and thus DirectColor.

(Support for DirectColor on Apple A/UX appears to have also been in X11R5, released in September of 1991, but the README wasn't there so I can't be sure.)

The other case is HP's HCRX-24 and CRX-24 graphics hardware (and also), which were used on their PA-RISC workstations, apparently the '700' and 9000 series. The Xhp manpage says:

24 PLANE SUPPORT FOR HCRX24 AND CRX24

This Xhp X11 sample server supports two modes for the HCRX24 and CRX24 display hardware: 8 plane and 24 plane, with 8 plane being the default. [...]

[...]

In depth 24 mode, the default visual type is DirectColor.

This support seems to have appeared first in X11R6, released in June of 1994. HP probably added it to HP/UX's version of the X server before then, of course.

It's possible that some other Unix workstations had graphics hardware that directly supported DirectColor, but if so they didn't document it as clearly as these two cases and I can't pick it out from the various uses of DirectColor in the X11R6 server source code.

(Since X11R6 dates from 1994 and PCs were starting to get used for Unix by that point, this includes various sorts of PC graphics hardware that X11R6 had drivers for.)

There seems to be support for emulating DirectColor visuals on other classes of underlying graphics hardware, and some things seem to invoke it. I don't know enough about X11 programming to understand the server code involved; it's several layers removed from what little I do know.

I admit that I was hoping that looking at the X server code could give me more definitive answers than it turned out to, but that's life in a large code base. It's possible that there's later Unix graphics hardware that supports DirectColor, but my patience for picking through X11 releases is limited (although I did quickly peek at X11R6.4, from 1998, and didn't spot anything). People with more energy than me can pick through the x.org history page and the old releases archive to do their own investigation.

(The intel driver manpage suggests that the i810 and i815 integrated Intel graphics chipsets had hardware support for DirectColor, but that this support was removed in the i830M and onward. I would assume Intel decided it wasn't used enough to justify chipset support.)

PS: Note that later releases of X11 start dropping support for some older hardware; for example, the 'macII' support disappeared in X11R6.1. For what it's worth, the release notes up through X11R6 don't mention removing support for any graphics hardware; however, I haven't checked through all X11 releases from X11R2 through X11R4 to see if DirectColor hardware appeared briefly in one of them and then disappeared before X11R5.

DirectColorHardware written at 02:05:47
