What I think about why graphics cards keep being successful
Graphics cards are the single most pervasive and successful sort of hardware accelerator in the computer world; they are a shining exception to the general rule that hardware acceleration has not worked out well. Given my views on that, I'm interested in figuring out why graphics cards are such an exception.
Here's my current thinking on why graphics cards work, in point form (and in no particular order):
- avid users (ie, gamers) are constrained by both CPU and graphics performance during operation, so work moved to the graphics card frees up the CPU for everything else.
- avid users will pay significant amounts of money for graphics cards,
and will do so on a regular basis.
- there is essentially no maximum useful performance limit; so far,
people and programs can always use more graphics power.
- GPUs have found various ways of going significantly faster than the
CPU, ways that the CPU currently cannot match, including:
- significant parts of the problem they're addressing are naturally (and often embarrassingly) parallel; this makes it relatively simple to speed things up by just throwing more circuitry at the problem.
- they have almost always used high speed memory interfaces (or highly parallel ones), getting around the memory speed performance limit.
- while GPUs have problems with the costs of having the CPU actually
talk to them, they have found a number of ways to amortize that
overhead and work around it.
(For example, these days you rarely do individual graphics operations one by one; instead you batch them up and do them in bulk.)
- GPU vendors are successful enough to spend a lot of money on hardware design.
- GPU vendors iterate products rapidly, often faster than CPU vendors.
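To illustrate the 'embarrassingly parallel' point above: many per-pixel operations depend only on that pixel's own input, so the work can be split across any number of execution units with no coordination at all. Here's a minimal sketch in Python (the `shade` function and the pixel values are made up for illustration; a real GPU runs thousands of such units in hardware):

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # A per-pixel operation: the output depends only on this pixel's
    # own input value, not on any neighbouring pixel.
    return min(255, pixel * 2)

image = list(range(16))  # a stand-in for a row of pixel values

# Serial version: one pixel at a time.
serial = [shade(p) for p in image]

# Split across workers. Because no pixel depends on any other, dividing
# the work this way is always safe; this independence is what makes the
# problem embarrassingly parallel, and why a GPU can go faster simply by
# adding more execution units.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, image))

assert serial == parallel
```

The interesting property is that nothing in the parallel version needed locks or ordering; the speedup comes purely from having more hands doing independent work.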
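The batching point can be shown with a toy cost model: if every submission to the graphics card pays a fixed CPU-side overhead, doing operations one by one pays that overhead N times, while batching pays it once. A sketch with made-up numbers (the cost constants here are purely illustrative, not measured from any real driver):

```python
SUBMIT_OVERHEAD = 100  # fixed CPU cost of one driver call (made-up units)
PER_OP_COST = 1        # cost of encoding one graphics operation

def submit_one_by_one(n_ops):
    # Each operation is its own driver call, so every operation pays
    # the full submission overhead.
    return n_ops * (SUBMIT_OVERHEAD + PER_OP_COST)

def submit_batched(n_ops):
    # All operations go into one batch; the submission overhead is
    # paid only once and amortized over every operation in the batch.
    return SUBMIT_OVERHEAD + n_ops * PER_OP_COST

print(submit_one_by_one(1000))  # 101000
print(submit_batched(1000))     # 1100
```

The bigger the batch, the closer the per-operation cost gets to `PER_OP_COST` alone, which is why modern graphics APIs push you so hard toward submitting work in bulk.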
I think that many of these reasons can be inverted to explain why hardware acceleration is a hard problem, but that's another entry.