More on CISC (well, x86) versus RISC
May 21, 2012
Sometimes I am sadly slow at replying to comments. So I am hoisting my reply to a comment on my first entry on why x86 won against RISC into an entry.
From --M's comment:
While this is true as far as I know, it doesn't mean that RISC won in the end. Not really, and not as RISC was understood back in the early 1990s.
Roughly speaking, we can say that there are two aspects of RISC, RISC as an internal CPU implementation technology and RISC as an Instruction Set Architecture (ISA) design approach. Of course these were initially tied together; it was your use of a RISC ISA that made it possible to create a simple CPU for that ISA using RISC implementation technology. A more complex ISA would not have allowed you to build a RISC CPU for it. The early 1990s view of 'RISC' called for both aspects, with a RISC ISA implemented by a RISC CPU.
However, as nothings noted in his comment, all sorts of CPU designers started stealing implementation ideas and technology from RISC CPUs pretty much the moment they were created (well, when people started writing about the ideas). In a sense this should be expected; from one view, RISC CPU cores that run code translated from a different ISA are simply the logical extension of microcode. Since the microcode ISA is not user-visible, CPU designers are free to change it completely from CPU generation to CPU generation and thus free to adopt good ideas for microcode architecture wherever they can find them.
RISC as a CPU implementation technology is alive and well in the x86; it 'won' in that sense. But that sense is relatively meaningless for most purposes, because we don't really care how CPUs go fast. What people really cared about in the early 1990s was RISC ISAs, and (1992 style) RISC ISAs unambiguously lost. CPUs implementing the x86 ISA were pushed to performance and price levels that could not be matched by CPUs implementing RISC ISAs and as a result RISC ISAs have basically died out.
Sidebar: why no one ever transitioned away from x86 CPUs
nothings in his comment:
I've sort of written about this before. My short answer is that making such a transition away from x86 would have required a new CPU that delivered an actual tangible benefit to users over x86 CPUs and (due partly to AMD) there was no such alternate CPU available. Every CPU with better performance than x86 had significant drawbacks such as much worse price/performance ratios.
Fundamentally, Apple successfully transitioned from PowerPC to x86 because there was a benefit to users. x86 CPUs could do things that PowerPC CPUs could not; they ran faster, they ran cooler, and so on. As a secondary reason, Apple was able to execute the transition because they could simply stop making future, better PowerPC machines that ran Mac OS; if users wanted to upgrade a machine to get more power, they had to go to x86. Windows has never had this option; pretty much as long as faster x86 CPUs appear, people can build machines that use them and run Windows.