Wandering Thoughts archives

2013-05-04

What I see as RISC's big bets

At the time, performance-oriented RISC was presented as the obviously correct next step for systems to take. Even today I can casually say that x86 won against RISC mostly because Intel spent more money, and have people nod along with it. But I'm not sure that this is really the case, because I think you can make an argument that the whole idea behind (performance) RISC rested on some big bets.

As I see them, the main big bets were:

  1. CPU speeds would continue to be the constraint on system performance. This one is obvious; the only part of the system that a fast RISC improved was the CPU itself and that only mattered if the CPU was the limiting factor.

  2. Compilers could statically extract a lot of parallelism and scheduling opportunities, because this is what lets classic RISC designs omit the complex circuitry for dynamic out-of-order instruction scheduling.

    (Itanium is an extreme example of this assumption, if you consider it a RISC.)

  3. People would recompile their programs frequently for new generations of CPU chips. This is a consequence of static scheduling, since newer CPUs have different scheduling opportunities than older ones. If you can't make this assumption, new CPUs either run old code unimpressively or need to do dynamic instruction scheduling for old code. Running old code unimpressively (if people care about old code) does not sell new CPUs.
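The second and third bets can be made concrete with a toy simulation. This is purely illustrative (all the instruction names and latencies are made up; it models no real ISA or compiler): a statically scheduled instruction order bakes in one latency model, and that same fixed order stalls on a later CPU generation with different latencies unless the code is recompiled or the hardware reschedules it dynamically.

```python
# Toy in-order pipeline: one instruction issues per cycle, and the
# pipeline stalls until every operand is ready. Instruction names and
# latency numbers are invented for illustration only.

def run_in_order(order, deps, latency):
    """Return (completion cycle, stall cycles) for a fixed static schedule."""
    done_at = {}            # instruction -> cycle its result is ready
    cycle = 0
    stalls = 0
    for instr in order:
        # wait until all of this instruction's operands are available
        ready = max((done_at[d] for d in deps.get(instr, [])), default=0)
        if ready > cycle:
            stalls += ready - cycle
            cycle = ready
        done_at[instr] = cycle + latency[instr]
        cycle += 1          # in-order issue: one instruction per cycle
    return max(done_at.values()), stalls

# A schedule a compiler might emit for an old CPU: hoist the load early
# and cover its latency with independent work before the dependent add.
order = ["load", "indep1", "indep2", "add"]
deps = {"add": ["load"]}

old_cpu = {"load": 2, "indep1": 1, "indep2": 1, "add": 1}
new_cpu = {"load": 6, "indep1": 1, "indep2": 1, "add": 1}  # slower memory

print(run_in_order(order, deps, old_cpu))  # (4, 0): latency fully hidden
print(run_in_order(order, deps, new_cpu))  # (7, 3): same code now stalls
```

On the old latency model the two independent instructions fully hide the load, exactly as scheduled. On the new model the identical instruction order stalls for three cycles, which is precisely the gap that either a recompile (bet 3) or out-of-order hardware would try to fill.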

The third bet proved false in practice for all sorts of pragmatic reasons. My impression is that the second bet also wound up being false, with dynamic out-of-order instruction scheduling able to extract significantly more parallelism than static compiler analysis could. My memory is that together these two factors pushed later-generation RISC CPU designs to include more and more complex instruction scheduling, diluting their advantages over (theoretically) more complex CISC designs.

(I'm honestly not sure how the first bet turned out for fast RISC over its era (up to the early 2000s, say). CPUs weren't anywhere near fast enough in those days, but my impression is that as CPU speed ramped up, memory bandwidth and latency issues increasingly became a constraint as well. This limited the payoff from pure CPU improvements.)

I don't see the RISC emphasis on 64 bit support as being a bet so much as an attempt to create a competitive advantage. (I may be underestimating how much work it took AMD to add 64 bit support to the x86 architecture.)

Update: I'm wrong about some of this. See the first comment for a discussion, especially about the out-of-order issue.

tech/RISCBigBets written at 03:02:46
