Memories of MGR
I recently got into a discussion of MGR on Twitter (via), which definitely brings back memories. MGR is an early Unix windowing system, originally dating from 1987 to 1989 (depending on whether you date it from its Usenix presentation, when people got to hear about it, or from its appearance on comp.sources.unix, when people could get their hands on it). If you know the dates for Unix windowing systems, you know that this overlaps with X (both X10 and then X11), which is part of what makes MGR special and nostalgic and what gave it its peculiar appeal at the time.
MGR was small and straightforward at a time when that was not what other Unix window systems were (I'd say it was slipping away with X10 and X11, but let's be honest, SunView was not small or straightforward either). Given that it was partially inspired by the Blit and had a certain amount of resemblance to it, MGR was also about as close as most people could come to the kind of graphical environment that the Bell Labs people were building in Research Unix.
(You could in theory get a DMD 5620, but in reality most people had far more access to Unix workstations that you could run MGR on than they did to a 5620.)
On a practical level, you could use MGR without having to set up a complicated environment with a lot of moving parts (or compile a big system). This generally made it easy to experiment with (on hardware it supported) and to keep it around as an alternative for people to try out or even use seriously. My impression is that this got a lot of people to at least dabble with MGR and use it for a while.
Part of MGR being small and straightforward was that it also felt like something that was by and for ordinary mortals, not the high peaks of X. It ran well on ordinary machines (even small machines) and it was small enough that you could understand how it worked and how to do things in it. It also had an appealingly simple model of how programs interacted with it; you basically treated it like a funny terminal, where you could draw graphics and do other things by sending escape sequences. As mentioned in this MGR information page, this made it network transparent by default.
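The 'funny terminal' model is worth a sketch. The escape sequence format below is invented for illustration (real MGR commands differ), but it shows why the design was network transparent essentially for free: the client's entire interface to the window system is a single byte stream, which works identically over a pty, a pipe, or a network connection.

```python
import io

# A hypothetical MGR-style client. The escape format here is made up;
# the point is that graphics commands and ordinary text share one byte
# stream, so any transport that carries bytes carries the client.
ESC = "\x1b"

def draw_line(out, x1, y1, x2, y2):
    # Hypothetical "draw a line" command: ESC, arguments, command letter.
    out.write(f"{ESC}{x1},{y1},{x2},{y2}L")

def demo(out):
    out.write("ordinary text is passed through unchanged\n")
    draw_line(out, 0, 0, 100, 100)

buf = io.StringIO()   # stands in for stdout, a pty, or a remote link
demo(buf)
```

Because nothing in the client cares what is on the other end of the stream, running the same program remotely requires no changes at all.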
MGR was not a perfect window system and in many ways it was a quite limited one. But it worked well in the 'all the world's a terminal' world of the late 1980s and early 1990s, when almost all of what you did even with X was run xterms, and it was often much faster and more minimal than the (fancier) alternatives (like X), especially on basic hardware.
Thinking of MGR brings back nostalgic memories of a simpler time in Unix's history, when things were smaller and more primitive but also bright and shiny and new and exciting in a way that's no longer the case (now they're routine and Unix is everywhere). My nostalgic side would love a version of MGR that ran in an X window, just so I could start it up again and play around with it, but at the same time I'd never use it seriously. Its day in the sun has passed. But it did have a day in the sun, once upon a time, and I remember those days fondly (even if I'm not doing a great job of explaining why).
(We shouldn't get too nostalgic about the old days. The hardware and software we have today is generally much better and more appealing.)
X's network transparency was basically free at the time
I recently wrote an entry about how X's network transparency has wound up mostly being a failure for various reasons. However, there is an important flipside to the story of X's network transparency, and that is that X's network transparency was almost free at the time and in the context it was created. Unlike the situation today, in the beginning X did not have to give up lots of performance or other things in order to get network transparency.
X originated in the mid 1980s and it was explicitly created to be portable across various Unixes, especially BSD-derived ones (because those were what universities were mostly using at that time). In the mid to late 1980s, Unix had very few IPC methods, especially portable ones. In particular, BSD systems did not have shared memory (it was called 'System V IPC' for the obvious reasons). BSD had TCP and Unix sockets, some System V machines had TCP (and you could likely assume that more would get it), and in general your safest bet was to assume some sort of abstract stream protocol and then allow for switchable concrete backends. Unsurprisingly, this is exactly what X did; the core protocol is defined as a bidirectional stream of bytes over an abstracted channel.
(And the concrete implementation of $DISPLAY has always let you specify the transport mechanism, as well as allowing your local system to pick the best mechanism it has.)
Once you've decided that your protocol has to run over abstracted streams, it's not that much more work to make it network transparent (TCP provides streams, after all). X could have refused to make the byte order of the stream clear or required the server and the client to have access to some shared files (eg for fonts), but I don't think either would have been a particularly big win. I'm sure that it took some extra effort and care to make X work across TCP from a different machine, but I don't think it took very much.
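As a concrete illustration, here is a sketch of the conventional way an X client maps $DISPLAY onto a transport: an empty host (or 'unix') means the local Unix domain socket /tmp/.X11-unix/X<n>, and anything else means TCP to port 6000 plus the display number. Real Xlib handles more forms than this, so treat it as a simplified model, not the actual implementation.

```python
# Simplified model of turning a $DISPLAY value like "host:10.0" or ":0"
# into a concrete transport endpoint, following X's usual conventions.
def display_endpoint(display):
    host, _, rest = display.rpartition(":")
    number = int(rest.split(".")[0])  # drop any ".screen" suffix
    if host in ("", "unix"):
        # Local display: a Unix domain socket.
        return ("unix", f"/tmp/.X11-unix/X{number}")
    # Remote display: TCP, one port per display number.
    return ("tcp", host, 6000 + number)
```

Everything above the connection setup only cares that the result is a bidirectional byte stream, which is exactly the property that made TCP support nearly free.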
(At the same time, my explanation here is probably a bit ahistorical. X's initial development seems relatively strongly tied to sometimes having clients on different machines than the display, which is not unreasonable for the era. But it doesn't hurt to get a feature that you want anyway for a low cost.)
I believe it's important here that X was intended to be portable across different Unixes. If you don't care about portability and can get changes made to your Unix, you can do better (for example, you can add some sort of shared memory or process to process virtual memory transfer). I'm not sure how the 1980s versions of SunView worked, but I believe they were very SunOS dependent. Wikipedia says SunView was partly implemented in the kernel, which is certainly one way to both share memory and speed things up.
PS: Sharing memory through mmap() and friends was years in the future at this point and required significant changes when it arrived.
X's network transparency has wound up mostly being a failure
I was recently reading Mark Dominus's entry about some X keyboard problems, in which he said in passing (quoting himself):
I have been wondering for years if X's vaunted network transparency was as big a failure as it seemed: an interesting idea, worth trying out, but one that eventually turned out to be more trouble than it was worth. [...]
My first reaction was to bristle, because I use X's network transparency all of the time at work. I have several programs to make it work very smoothly, and some core portions of my environment would be basically impossible without it. But there's a big qualification on my use of X's network transparency, namely that it's essentially all for text. When I occasionally go outside of this all-text environment of emacs and so on, it doesn't go as well.
X's network transparency was not designed as 'it will run well'; originally it was to be something that should let you run almost everything remotely, providing a full environment. Even apart from the practical issues covered in Daniel Stone's slides, it's clear that it's been years since X could deliver a real first class environment over the network. You cannot operate with X over the network in the same way that you do locally. Trying to do so is painful and involves many things that either don't work at all or perform so badly that you don't want to use them.
In my view, there are two things that did in X's network transparency in general. The first is that networks turned out not to be fast enough even for ordinary things that people wanted to do, at least not the way that X used them. The obvious case is web browsers; once the web moved to lots of images and worse, video, that was pretty much it, especially with 24-bit colour.
(It's obviously not impossible to deliver video across the network with good performance, since YouTube and everyone else does it. But their video is highly encoded in specialized formats, not handled by any sort of general 'send successive images to the display' system.)
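Back-of-the-envelope arithmetic (with an assumed frame size and rate; pick your own numbers) shows how hopeless raw image pushes were:

```python
# Uncompressed video as X would send it: successive full images.
width, height = 640, 480      # a modest video window
bytes_per_pixel = 3           # 24-bit colour
fps = 24

bytes_per_second = width * height * bytes_per_pixel * fps
bits_per_second = bytes_per_second * 8

# Roughly 22 MB/s, or about 177 Mbit/s: far beyond the 10 Mbit/s
# Ethernet of the era, and still uncomfortable at 100 Mbit/s.
```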
The second is that the communication facilities that X provided were too narrow and limited. This forced people to go outside of them in order to do all sorts of things, starting with audio and moving on to things like DBus and other ways of coordinating environments, handling sophisticated configuration systems, modern fonts, and so on. When people designed these additional communication protocols, the result generally wasn't something that could be used over the network (especially not without a bunch of setup work that you had to do in addition to remote X). Basic X clients that use X properties for everything may be genuinely network transparent, but there are very few of those left these days.
(I'm not sure that even xterm is any more, at least if you use XFT fonts; XFT fonts are rendered in the client, and so different hosts may have different renderings of the same thing.)
What remains of X's network transparency is still useful to some of us, but it's only a shadow of what the original design aimed for. I don't think it was a mistake for X to specifically design it in (to the extent that they did, which is less than you might think), and it did help X out pragmatically in the days of X terminals, but that's mostly it.
(I continue to think that remote display protocols are useful in general, but I'm in an unusual situation. Most people only ever interact with remote machines with either text mode SSH or a browser talking to a web server on the remote machine.)
PS: The X protocol issues with synchronous requests that Daniel Stone talks about don't help the situation, but I think that even with those edges sanded off X's network transparency wouldn't be a success. Arguably X's protocol model committed a lesser version of part of the NeWS mistake.
You could say that Linux is AT&T's fault
Recently on Twitter, I gave in to temptation. It went like this:
@tux0r: Linux is duplicate work (ref.: BSD) and they still don't stop making new ones. :(
@oclsc: But their license isn't restrictive enough to be free! We HAVE to build our own wheel!
@thatcks: I believe you can direct your ire here to AT&T, given the origins and early history of Linux. (Or I suppose you could criticize the x86 BSDs.)
My tweet deserves some elaboration (and it turns out to be somewhat exaggerated, because I mis-remembered the timing).
If you're looking at how we have multiple free Unixes today, with some descended from 4.x BSD and one written from scratch, it's tempting and easy to say that the people who created Linux should have redirected their efforts to helping develop the 4.x BSDs. Setting aside the licensing issues, this view is ahistorical, because Linux was pretty much there first. If you want to argue that someone was duplicating work, you have a decent claim that it's the BSDs who should have thrown their development effort in with Linux instead of vice versa. And beyond that, there's a decent case to be made that Linux's rise is ultimately AT&T's fault.
The short version of the history is that at the start of the 1990s, it became clear that you could make x86 PCs into acceptable inexpensive Unix machines. However, you needed a Unix OS in order to make this work, and there was no good inexpensive (or free) option in 1991. So, famously, Linus Torvalds wrote his own Unix kernel in mid 1991. This predated the initial releases of 386BSD, which came in 1992. Since 386BSD came from the 4.3BSD Net/2 release it's likely that it was more functional than the initial versions of Linux. If things had proceeded unimpeded, perhaps it would have taken the lead from Linux and become the clear winner.
Unfortunately this is where AT&T comes in. At the same time as 386BSD was coming out, BSDI, a commercial company, was selling their own Unix derived from 4.3BSD Net/2 without having a license from AT&T (on the grounds that Net/2 didn't contain any code with AT&T copyrights). BSDI was in fact being somewhat cheeky about it; their 1-800 sales number was '1-800-ITS-UNIX', for example. So AT&T sued them, later extending the lawsuit to UCB itself over the distribution of Net/2. Since the lawsuit alleged that 4.3BSD Net/2 contained AT&T proprietary code, it cast an obvious cloud over everything derived from Net/2, 386BSD included.
The lawsuit was famous (and infamous) in the Unix community at the time, and there was real uncertainty over how it would be resolved for several crucial years. The Wikipedia page is careful to note that 386BSD was never a party to the lawsuit, but I'm pretty sure this was only because AT&T didn't feel the need to drag them in. Had AT&T won, I have no doubt that there would have been some cease & desist letters going to 386BSD and that would have been that.
(While Dr Dobb's Journal published 386BSD Release 1.0 in 1994, they did so after the lawsuit was settled.)
I don't know for sure if the AT&T lawsuit deterred people from working on 386BSD and tilted them toward working on Linux (and putting together various early distributions). There were a number of things going on at the time beyond the lawsuit, including politics in 386BSD itself (see eg the FreeBSD early history). Perhaps 386BSD would have lost out to Linux even without the shadow of the lawsuit looming over it, simply because it was just enough behind Linux's development and excitement. But I do think that you can say AT&T caused Linux and have a decent case.
(AT&T didn't literally cause Linux to be written, because the lawsuit was only filed in 1992, after Torvalds had written the first version of his kernel. You can imagine what-if scenarios about an earlier release of Net/2, but given the very early history of Linux I'm not sure it would have made much of a difference.)
Is the C runtime and library a legitimate part of the Unix API?
Marcan, in the course of a debugging story that involved Go and the Linux vDSO:
Go also happens to have a (rather insane, in my opinion) policy of reinventing its own standard library, so it does not use any of the standard Linux glibc code to call vDSO, but rather rolls its own calls (and syscalls too).
Ordinary non-C languages on Unixes generally implement a great many low level operations by calling into the standard C library. This starts with things like making system calls, but also includes operations such as getaddrinfo(3). Go doesn't do this; it implements as much as possible itself, going straight down to direct system calls in assembly language. Occasionally there are problems as a result.
A few Unixes explicitly say that the standard C library is the stable API and point of interface with the system; one example is Solaris (and now Illumos). Although they don't casually change the low level system call implementation, as far as I know Illumos officially reserves the right to change all of their actual system calls around, breaking any user space code that isn't dynamically linked to libc. If your code breaks, it's your fault; Illumos told you that dynamic linking to libc is the official API.
Other Unixes simply do this tacitly and by accretion. For example, on any Unix using nsswitch.conf, it's very difficult to always get the same results for operations like hostname lookups without going through the standard C library, because these may use arbitrary and strange dynamically loaded modules that are accessed through libc and require various random libc APIs to work. This points out one of the problems here; once you start (indirectly) calling random bits of the libc API, they may quite reasonably make assumptions about the runtime environment that they're operating in. How to set up a limited standard C library runtime environment is generally not documented; instead the official view is generally 'let the standard C library runtime code start your program and set everything up'.
I'm not at all sure that all of this requirement and entanglement with the standard C library and its implicit runtime environment is a good thing. The standard C library's runtime environment is designed for C, and it generally contains a tangled skein of assumptions about how things work. Forcing all other languages to fit themselves into these undocumented constraints is clearly confining, and the standard C library generally isn't designed to be a transparent API; in fact, at least GNU libc deliberately manipulates what it does under the hood to be more useful to C programs. Whether these manipulations are useful or desired for your non-C language is an open question, but the GNU libc people aren't necessarily going to even document them.
(Marcan's story shows that the standard C library behavior would have been a problem for any language environment that attempted to use minimal stacks while calling into 'libc', here in the form of a kernel vDSO that's designed to be called through libc. This also shows another aspect of the problem, in that as far as I know how much stack space you must provide when calling the standard C library is generally not documented. It's just assumed that you will have 'enough', whatever that is. C code will; people who are trying to roll their own coroutines and thread environment, maybe not.)
This implicit assumption has a long history in Unix. Many Unixes have only really documented their system calls in the form of the standard C library interface to them, quietly eliding the distinction between the kernel API to user space and the standard C library API to C programs. If you're lucky, you can dig up some documentation on how to make raw system calls and what things those raw system calls return in unusual cases.
I don't think very many Unixes have ever tried to explicitly and fully document the kernel API separately from the standard C library API, especially once you get into cases like ioctl() (where there are often C macros and #defines that are used to form some of the arguments, which are of course only 'documented' in the C header files).
There were Unix machines with real DirectColor graphics hardware
As I mentioned yesterday, one of the questions I wound up with was whether there ever was any Unix graphics hardware that actually used X11's unusual DirectColor mode. Contrary to what you might expect from its name, DirectColor is an indirect color mode, but one where the red, green, and blue parts of a pixel's colour value index separate color maps.
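In other words, a DirectColor pixel is three indexes packed into one value. A sketch, assuming the usual 8-bits-per-channel layout of a 24-bit visual and made-up colormap contents:

```python
# Three independent colormaps, one per channel; the contents here are
# invented purely for illustration.
red_map   = list(range(256))                 # identity ramp
green_map = [255 - i for i in range(256)]    # inverted ramp
blue_map  = [(2 * i) % 256 for i in range(256)]

def direct_color_to_rgb(pixel):
    # Each 8-bit field of the pixel is an *index* into its own map,
    # not a colour value in its own right.
    r = red_map[(pixel >> 16) & 0xFF]
    g = green_map[(pixel >> 8) & 0xFF]
    b = blue_map[pixel & 0xFF]
    return (r, g, b)
```

With identity ramps in all three maps this behaves exactly like TrueColor, which is part of why the two visual types are easy to confuse.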
The short version of the answer is yes. Based on picking through the X11R6 source code, there were at least two different makes of Unix machines that had hardware support for DirectColor visuals. The first is (some) Apple hardware that ran A/UX. Quoting from xc/programs/Xserver/hw/MacII/README:
These Apple X11 drivers support 1, 8, and 24 bit deep screens on the Macintosh hardware running A/UX. Multiple screens of any size and both depths are accommodated. The 24 bit default visual is DirectColor, and there is significant color flash when shifting between TrueColor and DirectColor visuals in 24 bit mode.
Based on a casual perusal of Wikipedia, it appears that some Quadra and Centris series models supported 24-bit colour and thus DirectColor.
(Support for DirectColor on Apple A/UX appears to have also been in X11R5, released in September of 1991, but the README wasn't there so I can't be sure.)
The second is HP's HCRX24 and CRX24 graphics hardware, quoting from the Xhp server's notes:
24 PLANE SUPPORT FOR HCRX24 AND CRX24
This Xhp X11 sample server supports two modes for the HCRX24 and CRX24 display hardware: 8 plane and 24 plane, with 8 plane being the default. [...]
In depth 24 mode, the default visual type is DirectColor.
This support seems to have appeared first in X11R6, released in June of 1994. HP probably added it to HP-UX's version of the X server before then, of course.
It's possible that some other Unix workstations had graphics hardware that directly supported DirectColor, but if so they didn't document it as clearly as these two cases and I can't pick it out from the various uses of DirectColor in the X11R6 server source code.
(Since X11R6 dates from 1994 and PCs were starting to get used for Unix by that point, this includes various sorts of PC graphics hardware that X11R6 had drivers for.)
There seems to be support for emulating DirectColor visuals on other classes of underlying graphics hardware, and some things seem to invoke it. I don't know enough about X11 programming to understand the server code involved; it's several layers removed from what little I do know.
I admit that I was hoping that looking at the X server code could give me more definitive answers than it turned out to, but that's life in a large code base. It's possible that some later Unix graphics hardware supports DirectColor, but my patience for picking through X11 releases is limited (although I did quickly peek at X11R6.4, from 1998, and didn't spot anything). People with more energy than me can pick through the x.org history page and the old releases archive to do their own investigation.
(The intel driver manpage suggests that the i810 and i815 integrated Intel graphics chipsets had hardware support for DirectColor, but that this support was removed in the i830M and onward. I would assume Intel decided it wasn't used enough to justify chipset support.)
PS: Note that later releases of X11 start dropping support for some older hardware; for example, the 'macII' support disappeared in X11R6.1. For what it's worth, the release notes up through X11R6 don't mention removing support for any graphics hardware; however, I haven't checked through all X11 releases from X11R2 through X11R4 to see if DirectColor hardware appeared briefly in one of them and then disappeared before X11R5.
The interestingly different display colour models of X10 and X11
A while back I wrote about what X11's TrueColor means. In comments, Aristotle Pagaltzis and I wound up getting into the fun world of X11 visual types, especially the DirectColor visual type, which has an indexed colormap like PseudoColor but with the R, G, and B channels indexed separately. As a result of this I wound up with two questions: where did DirectColor come from, and was there ever any hardware that actually had real DirectColor support? Today I'm mostly going to tackle the first question.
(If you look at xdpyinfo output on your modern 24-bit TrueColor display you'll almost certainly find a whole collection of 24-bit DirectColor visuals. I'm pretty sure that these are implemented in software, not hardware, although I could be wrong.)
Old versions of both X10 and X11 are available through x.org's archive, so we can go digging. The obvious place to start is investigating the last version of X10, X10 R4, and what its colour models were. Well, colour model, as it turns out; X10 apparently had only a version of what X11 calls PseudoColor. However, this version is somewhat different. Quoting from doc/Xlib/ch08a.t in the tarball:
The red, green and blue values are scaled between 0 and 65535; that is `on full' in a color is a value of 65535 independent of the number of bit planes of the display. Half brightness in a color would be a value of 32767, and off of 0. This representation gives uniform results for color values across displays with different number of bit planes.
I was going to say that this is different from X11, but it's actually not. At the protocol level X11 continued to use this scaled spectrum of colour values, although it will tell you how many bits there actually are in each RGB value.
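One plausible way a server can map between the protocol's 16-bit-per-channel scale and the hardware's actual depth is sketched below; real servers may round differently, so this is a model rather than the implementation.

```python
# The X protocol always speaks 0..65535 per channel; the server maps
# that onto however many bits the hardware really has, and back.
def to_hardware(value16, bits):
    # Keep the top bits of the 16-bit value.
    return value16 >> (16 - bits)

def to_protocol(hw_value, bits):
    # Scale so full hardware intensity maps back to exactly 65535.
    return hw_value * 65535 // ((1 << bits) - 1)
```

The round trip is what gives the 'uniform results across displays' property the X10 documentation describes: full brightness is 65535 everywhere, whether the hardware has 4 bits per channel or 8.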
X10 has no specific colormaps for greyscale displays, but notes:
On a multiplane display with a black and white monitor (greyscale, but not color), these values may be combined or not combined to determine the brightness on the screen.
Given the mention of a black & white monitor here, X10 is probably contemplating a situation where people connected a b&w monitor to a machine with colour graphics. Such a monitor might be driven from only a single colour channel, for example green; this was easy back in the days when each of R, G, and B might use a separate physical connector (usually BNC; this persisted into the VGA era).
The first version of X11 was of course X11 R1. It has the same six visual types that modern X11 (nominally) has: StaticGray, StaticColor, TrueColor, GrayScale, PseudoColor, and DirectColor. For GrayScale, the protocol specification now notes:
GrayScale is treated the same as PseudoColor, except which primary drives the screen is undefined, so the client should always store the same value for red, green, and blue in colormaps.
(The protocol specification also notes that 'StaticGray with a two-entry colormap can be thought of as "monochrome"'.)
According to the X.org history page, X10R4 was released around the end of 1986 and X11R1 followed in September of 1987. One of the purposes of the X10 to X11 shift was to do a more hardware neutral and thus generalized redesign (see eg Wikipedia), so it's not really surprising to me that one part of that was taking a very simple and relatively limited colour model and generalizing it wildly. The X11R1 Xlib documentation already has the breakdown diagram in X11 visual types, and I suspect that the orthogonality on display was deliberate, even if at the time no one was sure if the capability would ever appear in actual hardware. Since X11 was defining a protocol for the long term, being general made a lot of sense; better to over-generalize when creating the protocol and never use some capabilities than to under-generalize and have to figure out how to extend it later.
(As it turns out, X didn't get this perfect since all colours are in RGB. Modern hardware often also supports things like YUV colour.)
X11 R1 doesn't seem to contain code for any hardware that supports DirectColor. While hardware code under server/ mentions it, it's always to report errors if you try to actually use DirectColor visuals. It's possible that the device independent code was capable of faking DirectColor on PseudoColor or TrueColor visuals, but I'm not sure.
(Investigation of other versions of X11 will be deferred to another entry, since this one is already long enough.)
X11 PseudoColor displays could have multiple hardware colormaps
When I talked about PseudoColor displays and window managers, I described things assuming that there was only a single hardware colormap. However, if you read the X11 documentation you'll run across tantalizing things like:
Most workstations have only one hardware look-up table for colors, so only one application colormap can be installed at a given time.
'Most' is not all, and indeed this is the case; there were Unix workstations with PseudoColor displays that had multiple hardware colormaps. As it happens I once used such a machine, my SGI R5K Indy. As a sysadmin machine we bought the version with SGI's entry level 8-bit XL graphics, but that was still advanced enough that it had multiple hardware colormaps instead of the single colormap that I was used to from my earlier machines.
When I was using the Indy I didn't really notice the multiple hardware colormaps, which is not too surprising (people rapidly stop noticing things that don't happen, like your display flashing as colormaps have to be swapped around), but in retrospect I think they enabled some things that I didn't think twice about at the time. I believe my Indy was the first time I used pictures as desktop backgrounds, and looking at the 1996 desktop picture in the appendix of this entry, that picture is full colour and not too badly dithered.
(As it happens I still have the source image for this desktop background and it's a JPEG with a reasonably large color range. Some of the dithering is in the original, probably as an artifact of it being scanned from an artbook in the early 1990s.)
In general, I think that having multiple hardware colormaps basically worked the way you'd expect. Any one program (well, window) couldn't have lots of colors, so JPEG images and so on still had to be approximated, but having a bunch of programs on the screen at once was no problem (even with the window manager's colors thrown in). I used that Indy through the era when websites started getting excessively colourful, so its multiple hardware colormaps likely got a good workout from Netscape windows.
(In 1996, Mozilla was well in the future.)
At the time and for years afterward, I didn't really think about how this was implemented in the hardware. Today, it makes me wonder, because X is normally what I'll call a software compositing display system, where the X server assembles all pixels from all windows into a single RAM area and has the graphics hardware display that (instead of telling the graphics hardware to composite together multiple separate bits and pieces). This makes perfect sense for a PseudoColor display when there's only one hardware colormap, but when you have multiple hardware colormaps, how does the display hardware know which pixel is associated with which hardware colormap? Perhaps there was a separate additional mapping buffer with two or three bits per pixel that specified the hardware colormap to use.
(Such a mapping buffer would be mostly static, as it only needs to change if a window with its own colormap is moved, added, or removed, and it wouldn't take up too much memory by 1996 standards.)
The fun of X11 PseudoColor displays and window managers
Yesterday, I described how X11's PseudoColor is an indirect colormap, where the 'colors' you assigned to pixels were actually indexes into a colormap that gave the real RGB colour values. In the common implementation (an 8-bit 'colour' index into a 24-bit colormap), you could choose colours out of 16 million of them, but you could only have 256 different ones in a colormap. This limitation creates an obvious question: on a Unix system with a bunch of different programs running, how do you decide on which 256 different colours you get? What happens when two programs want different sets of them (perhaps you have two different image display programs trying to display two different images at the same time)?
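A sketch of the squeeze, assuming a single shared colormap and a crude nearest-colour fallback (real clients and servers were more sophisticated about allocation than this):

```python
# A shared 256-entry PseudoColor colormap: pixel values are 8-bit
# indexes into it, so the whole screen gets at most 256 distinct
# colours no matter how many programs want more.
class Colormap:
    def __init__(self, size=256):
        self.size = size
        self.entries = []          # list of (r, g, b) tuples

    def alloc(self, rgb):
        if rgb in self.entries:               # share an existing entry
            return self.entries.index(rgb)
        if len(self.entries) < self.size:     # room for a new colour
            self.entries.append(rgb)
            return len(self.entries) - 1
        # Colormap full: settle for the closest colour already there.
        def dist(i):
            return sum((a - b) ** 2 for a, b in zip(self.entries[i], rgb))
        return min(range(len(self.entries)), key=dist)
```

Once one image display program fills the map, a second program either shares entries like this (and displays approximate colours) or brings its own colormap, which leads to the window-manager-mediated colormap switching discussed below.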
Since X's nominal motto is 'mechanism, not policy', the X server and protocol do not have an answer for you. In fact they aggressively provide a non-answer, because the X protocol allows for every PseudoColor window to have its own colormap that the program behind the window populates with whatever colours it wants. Programs can inherit colormaps, including from the display (technically the root window, but that's close enough because the root window is centrally managed), so you can build some sort of outside mechanism so everyone uses the same colormap and coordinates it, but programs are also free to go their own way.
Whenever you have a distributed problem in X that needs some sort of central coordination, the normal answer is 'the window manager handles it'. PseudoColor colormaps are no exception, and so there is an entire X program to window manager communication protocol about colormap handling, as part of the ICCCM; the basic idea is that programs tell the window manager 'this window needs this colormap', and then the window manager switches the X server to the particular colormap whenever it feels like it. Usually this is whenever the window is the active window, because normally the user wants the active window to be the one that has correct colors.
(In X terminology, this is called 'installing' the colormap.)
The visual result of the window manager switching the colormap to one with completely different colors is that other windows go technicolour and get displayed with false and bizarre colors. The resulting flashing as you moved back and forth between programs, changed images in an image display program, or started and then quit colour-intensive programs was quite distinctive and memorable. There's nothing like it in a modern X environment, where things are far more visually stable.
The window manager generally had its own colormap (usually associated with the root window) because the window manager generally needed some colours for window borders and decorations, its menus, and so on. This colormap was basically guaranteed to always have black and white color values, so programs that only needed them could just inherit this colormap. In fact there was also a whole protocol for creating and managing standard (shared) colormaps, with a number of standard colormaps defined; you could use one of these standard colormaps if your program just needed some colors and wasn't picky about the exact shades. A minimal case of this was if your program only used black and white; as it happens, this describes many programs in a normal X system (especially in the days of PseudoColor displays), such as xclock, Emacs and other GUI text editors, and so on. All of these programs could use the normal default colormap, which was important to avoid colours changing all of the time as you switched windows.
(For much of X's life, monochrome X displays were still very much a thing, so programs tended to only use colour if they really needed to. Today color displays are pervasive so even programs that only really have a foreground and a background colour will let you set those to any colour you want, instead of locking you to black and white.)
One of the consequences of PseudoColor displays for window managers was that (colour) gradients were generally considered a bad idea, because they could easily eat up a lot of colormap entries. Window managers in the PseudoColor era were biased towards simple and minimal colour schemes, ideally using and reusing only a handful of colours. When TrueColor displays became the dominant thing in X, there was an explosion of window managers using and switching over to colour gradients in things like window title bars and decorations; not necessarily because it made sense, but because they now could. I think that has fortunately now died down and people are back to simpler colour schemes.
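The arithmetic behind the gradient problem is simple: every distinct shade in a gradient needs its own colormap entry, and the whole screen only has 256 to share. A quick sketch (the gradient function here is just illustrative linear interpolation):

```python
# Why gradients were costly on PseudoColor displays: each distinct
# shade consumes one of the screen's 256 shared colormap entries.

def gradient(start, end, steps):
    """Linearly interpolate RGB colours from start to end (illustrative)."""
    return [
        tuple(s + (e - s) * i // (steps - 1) for s, e in zip(start, end))
        for i in range(steps)
    ]

# A 200-pixel-wide title bar shaded from dark blue to light blue.
shades = gradient((0, 0, 64), (160, 200, 255), 200)
distinct = set(shades)
print(len(distinct), "colormap entries for a single title bar, out of 256 total")
```

One smoothly shaded title bar could thus eat most of the colormap by itself, leaving almost nothing for actual programs.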
What X11's TrueColor means (with some history)
If you've been around X11 long enough and peered under the hood a bit, you may have run across mentions of 'truecolor'. If you've also read through the manual pages for window managers with a sufficiently long history, such as fvwm, you may also have run across mentions of 'colormaps'. Perhaps you're wondering what the background of these oddities is.
Today, pixels are represented with one byte (8 bits) for each RGB color component, and perhaps another byte for the transparency level ('alpha'), partly because that makes each pixel 32 bits (4 bytes) and computers like 32-bit things much better than they like 24 bit (3 byte) things. However, this takes up a certain amount of memory. For instance, a simple 1024 by 768 display with 24 bits per pixel takes up just over 2 megabytes of RAM. Today 2 MB of RAM is hardly worth thinking about, but in the late 1980s and early 1990s it was a different matter entirely. Back then an entire workstation might have only 16 MB of RAM, and that RAM wasn't cheap; adding another 2 MB for the framebuffer would drive the price up even more. At the same time, people wanted color displays instead of black and white and were certainly willing to pay a certain amount extra for Unix workstations that had them.
If three bytes per pixel is too much RAM, there are at least two straightforward options. The first is to shrink how many bits you give to each color component; instead of 8-bit colour, you might do 5-bit color, packing a pixel into two bytes. The problem is that the more memory you save, the fewer colors and especially shades of gray you have. At 5-bit colour you're down to 32 shades of gray and only 32,768 different possible colors, and you've only saved a third of your framebuffer memory. The second is to do the traditional computer science thing by adding a layer of indirection. Instead of each pixel directly specifying its colour, it specifies an index into a colormap, which maps to the actual RGB color. The most common choice here is to use a byte for each pixel and thus to have a 256-entry colormap, with '24 bit' colour (ie, 8-bit RGB color components). The colormap itself requires less than a kilobyte of RAM, your 1024 by 768 screen only needs a more tolerable (and affordable) 768 KB of RAM, and you can still have your choice out of 16 million colors; it's just that you can only have 256 different colors at once.
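The memory arithmetic above is easy to check for the 1024 by 768 example:

```python
# Back-of-the-envelope framebuffer sizes for a 1024x768 display,
# as discussed above.

width, height = 1024, 768
pixels = width * height

direct_24bit = pixels * 3    # 3 bytes per pixel, direct colour
indexed_8bit = pixels * 1    # 1 byte per pixel, an index into the colormap
colormap = 256 * 3           # 256 entries of 8-bit R, G, and B

print(f"24-bit direct: {direct_24bit / 2**20:.2f} MB")  # 2.25 MB
print(f"8-bit indexed: {indexed_8bit / 2**10:.0f} KB")  # 768 KB
print(f"colormap:      {colormap} bytes")               # 768 bytes, under 1 KB
```

The indirection cuts the framebuffer to a third of its direct-colour size, at the cost of only 768 bytes for the colormap itself.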
(Well, sort of, but that's another entry.)
This 256-color indirect color mode is what was used for all affordable colour Unix workstations in the 1980s and most of the 1990s. In X11 terminology it's called a PseudoColor display, presumably because the pixel 'colour' values were not actually colors but instead were indexes into the colormap, which had to be maintained and managed separately. However, if you had a lot of money, you could buy a Unix workstation with a high(er) end graphics system that had the better type of color framebuffer, where every pixel directly specified its RGB color. In X11 terminology, this direct mapping from pixels to their colors is a TrueColor display (presumably because the pixel values are their true color).
(My memory is that truecolor systems were often called 24-bit color and pseudocolor systems were called 8-bit color. Depending on your perspective this isn't technically correct, but in practice everyone reading descriptions of Unix workstations at the time understood what both meant.)
Directly mapped 'truecolor' color graphics supplanted indirect pseudocolor graphics sometime in the late 1990s, with the growth of PCs (and the steady drop in RAM prices, which made two extra bytes per pixel increasingly affordable). It's probably been at least 15 years since you could find a pseudocolor graphics system on (then) decent current hardware; these days, 'truecolor' is basically the only colour model. Still, the terminology lingers on in X11, ultimately because X11 is at its heart a very old system and is still backward compatible to those days (at least in theory).
(I suspect that Wayland does away with all of the various options X11 has here and only supports the directly mapped truecolor model (probably with at least RGB and YUV). That would certainly be the sane approach.)
PS: It's true that in the late 1990s, you could still find Sun and perhaps SGI selling workstations with pseudocolor displays. This wasn't a good thing and contributed to the downfall of dedicated Unix workstations. At that point, decent PCs were definitely using truecolor 24-bit displays, which was part of what made PCs more attractive and most of the dedicated Unix workstations so embarrassing.
(Yes, I'm still grumpy at Sun about its pathetic 1999-era 'workstations'.)