The "personal computer" model scales better than the "terminal" model

June 30, 2025

In an aside in a recent entry, I said that one reason X terminals faded away is that what I called the "personal computer" model of computing had some pragmatic advantages over the "terminal" model. One of them is that, broadly, the personal computer model scales better, even though at any given point in time it may be more expensive or less capable. But first, let me define my terms. What I mean by the "personal computer" model is one where computing resources are distributed, where everyone is given a computer of some sort and is expected to do much of their work with that computer. What I mean by the "terminal" model is one where most computing is done on shared machines, and the devices people have are used simply to access those shared machines.

The terminal model has the advantage that the devices you give each individual person can be cheaper, since they don't need to do as much. It has the potential disadvantage that you need some number of big shared machines for everyone to do their work on, and those machines are often expensive. However, historically there have been times when those big shared servers (plus their terminals) were less expensive than getting everyone their own sufficiently capable computer. So at any fixed point in time, for a fixed set of capacity needs, the "terminal" model may win on cost.

The problem with the terminal model is those big shared resources, which become an expensive choke point. If you want to add some more terminals, you need to also budget for more server capacity. If some of your people turn out to need more power than you initially expected, you're going to need more server capacity. And so on. The problem is that your server capacity generally has to be bought in big, expensive units and increments, a problem that has come up before.

The personal computer model is potentially more expensive up front, but it's much easier to scale because you buy computing capacity in much smaller units. If you get more people, you get each of them a personal computer. If some of your people need more power, you get them (and just them) more capable, more expensive personal computers. If you're a bit short of budget for hardware updates, you can have some people use their current personal computers for longer. In general, you're free to vary things at a very fine-grained level, down to individual people.

(Of course you may still have some shared resources, like backups and perhaps shared disk space, but there are relatively fine-grained solutions for those too.)
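
As a toy illustration of the scaling difference, here is a small sketch in Python. All of the prices and capacities in it are hypothetical numbers made up purely to show the shape of the curves, not real hardware costs: the point is that the terminal model's total cost jumps in big steps every time you need another shared server, while the personal computer model's cost only ever climbs one machine at a time.

    import math

    # All numbers below are hypothetical, chosen only to illustrate the shape
    # of the two cost curves, not to reflect real hardware prices.
    TERMINAL_COST = 300        # a cheap per-person terminal
    SERVER_COST = 40_000       # one big shared server
    USERS_PER_SERVER = 50      # how many people one shared server can support
    PC_COST = 1_500            # a capable personal computer per person

    def terminal_model_cost(users):
        # Terminals for everyone, plus enough big servers to cover them all.
        servers = math.ceil(users / USERS_PER_SERVER)
        return users * TERMINAL_COST + servers * SERVER_COST

    def pc_model_cost(users):
        # One personal computer per person; cost grows in small increments.
        return users * PC_COST

    for users in (40, 50, 51, 100, 101):
        print(f"{users:4d} people: terminals+servers ${terminal_model_cost(users):>9,}"
              f"  PCs ${pc_model_cost(users):>9,}")

Under these made-up numbers the terminal model is cheaper at 50 people, but the 51st person forces a second big server purchase and the picture flips; the personal computer column never has that kind of jump.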

PS: I don't know if big compute is cheaper than a bunch of small compute today, given that we've run into various limits in scaling up CPU performance, power, heat, and so on. There are "cloud desktop" offerings from various providers, but I'm not sure these win on hardware economics alone. Plus, today you'd need something to act as the "terminal", and that thing is likely to be a capable computer itself, not the modern equivalent of an X terminal.
