X's network transparency has wound up mostly being a failure
I was recently reading Mark Dominus's entry about some X keyboard problems, in which he said in passing (quoting himself):
I have been wondering for years if X's vaunted network transparency was as big a failure as it seemed: an interesting idea, worth trying out, but one that eventually turned out to be more trouble than it was worth. [...]
My first reaction was to bristle, because I use X's network transparency all of the time at work. I have several programs to make it work very smoothly, and some core portions of my environment would be basically impossible without it. But there's a big qualification on my use of X's network transparency, namely that it's essentially all for text. When I occasionally go outside of this all-text environment of emacs and so on, it doesn't go as well.
X's network transparency was not designed with the narrow goal of making a few things run acceptably; originally it was meant to let you run almost everything remotely, providing a full environment. Even apart from the practical issues covered in Daniel Stone's slides, it's clear that it's been years since X could deliver a real first class environment over the network. You cannot operate with X over the network in the same way that you do locally. Trying to do so is painful and involves many things that either don't work at all or perform so badly that you don't want to use them.
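What remains workable today is mostly the text-focused case described above, which in practice means X11 forwarding over SSH. As a minimal sketch (the host name is a placeholder, not anything from this entry):

```shell
# Forward X11 over SSH. The -X flag asks ssh to set up X11 forwarding;
# the remote sshd creates a proxy DISPLAY (something like localhost:10.0)
# and tunnels X protocol traffic back over the encrypted SSH connection.
ssh -X apps.example.com

# On the remote host, DISPLAY is already set by sshd, so X clients
# started there appear on your local screen:
echo "$DISPLAY"
xterm &
```

Text-oriented clients like xterm and emacs remain usable this way because they send relatively little X traffic; it is image-heavy and video-heavy clients, as discussed below, that fall over.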
In my view, there are two things that did in X network transparency in general. The first is that networks turned out to not be fast enough even for ordinary things that people wanted to do, at least not the way that X used them. The obvious case is web browsers; once the web moved to lots of images and, worse, video, that was pretty much it, especially with 24-bit colour.
(It's obviously not impossible to deliver video across the network with good performance, since YouTube and everyone else does it. But their video is highly encoded in specialized formats, not handled by any sort of general 'send successive images to the display' system.)
The second is that the communication facilities that X provided were too narrow and limited. This forced people to go outside of them in order to do all sorts of things, starting with audio and moving on to things like DBus and other ways of coordinating environments, handling sophisticated configuration systems, modern fonts, and so on. When people designed these additional communication protocols, the result generally wasn't something that could be used over the network (especially not without a bunch of setup work that you had to do in addition to remote X). Basic X clients that use X properties for everything may be genuinely network transparent, but there are very few of those left these days.
(I'm not sure that even xterm is fully network transparent any more, at least if you use XFT fonts. XFT fonts are rendered in the client, and so different hosts may have different renderings of the same thing, cf.)
What remains of X's network transparency is still useful to some of us, but it's only a shadow of what the original design aimed for. I don't think it was a mistake for X to specifically design it in (to the extent that they did, which is less than you might think), and it did help X out pragmatically in the days of X terminals, but that's mostly it.
(I continue to think that remote display protocols are useful in general, but I'm in an unusual situation. Most people only ever interact with remote machines with either text mode SSH or a browser talking to a web server on the remote machine.)
PS: The X protocol issues with synchronous requests that Daniel Stone talks about don't help the situation, but I think that even with those edges sanded off X's network transparency wouldn't be a success. Arguably X's protocol model committed a lesser version of part of the NeWS mistake.
You could say that Linux is AT&T's fault
Recently on Twitter, I gave in to temptation. It went like this:
@tux0r: Linux is duplicate work (ref.: BSD) and they still don't stop making new ones. :(
@oclsc: But their license isn't restrictive enough to be free! We HAVE to build our own wheel!
@thatcks: I believe you can direct your ire here to AT&T, given the origins and early history of Linux. (Or I suppose you could criticize the x86 BSDs.)
My tweet deserves some elaboration (and it turns out to be somewhat exaggerated, because I mis-remembered the timing).
If you're looking at how we have multiple free Unixes today, with some descended from 4.x BSD and one written from scratch, it's tempting and easy to say that the people who created Linux should have redirected their efforts to helping develop the 4.x BSDs. Setting aside the licensing issues, this view is ahistorical, because Linux was pretty much there first. If you want to argue that someone was duplicating work, you have a decent claim that it's the BSDs who should have thrown their development effort in with Linux instead of vice versa. And beyond that, there's a decent case to be made that Linux's rise is ultimately AT&T's fault.
The short version of the history is that at the start of the 1990s, it became clear that you could make x86 PCs into acceptable inexpensive Unix machines. However, you needed a Unix OS in order to make this work, and there was no good inexpensive (or free) option in 1991. So, famously, Linus Torvalds wrote his own Unix kernel in mid 1991. This predated the initial releases of 386BSD, which came in 1992. Since 386BSD came from the 4.3BSD Net/2 release, it's likely that it was more functional than the initial versions of Linux. If things had proceeded unimpeded, perhaps it would have taken the lead from Linux and become the clear winner.
Unfortunately this is where AT&T comes in. At the same time as 386BSD was coming out, BSDI, a commercial company, was selling their own Unix derived from 4.3BSD Net/2 without having a license from AT&T (on the grounds that Net/2 didn't contain any code with AT&T copyrights). BSDI was in fact being somewhat cheeky about it; their 1-800 sales number was '1-800-ITS-UNIX', for example. So AT&T sued them, later extending the lawsuit to UCB itself over the distribution of Net/2. Since the lawsuit alleged that 4.3BSD Net/2 contained AT&T proprietary code, it cast an obvious cloud over everything derived from Net/2, 386BSD included.
The lawsuit was famous (and infamous) in the Unix community at the time, and there was real uncertainty over how it would be resolved for several crucial years. The Wikipedia page is careful to note that 386BSD was never a party to the lawsuit, but I'm pretty sure this was only because AT&T didn't feel the need to drag them in. Had AT&T won, I have no doubt that there would have been some cease & desist letters going to 386BSD and that would have been that.
(While Dr Dobb's Journal published 386BSD Release 1.0 in 1994, they did so after the lawsuit was settled.)
I don't know for sure if the AT&T lawsuit deterred people from working on 386BSD and tilted them toward working on Linux (and putting together various early distributions). There were a number of things going on at the time beyond the lawsuit, including politics in 386BSD itself (see eg the FreeBSD early history). Perhaps 386BSD would have lost out to Linux even without the shadow of the lawsuit looming over it, simply because it was just enough behind Linux's development and excitement. But I do think that you can say AT&T caused Linux and have a decent case.
(AT&T didn't literally cause Linux to be written, because the lawsuit was only filed in 1992, after Torvalds had written the first version of his kernel. You can imagine what-if scenarios about an earlier release of Net/2, but given the very early history of Linux I'm not sure it would have made much of a difference.)