Chris's Wiki :: blog/tech/Whyx86WonVsRISC Commentshttps://utcc.utoronto.ca/~cks/space/blog/tech/Whyx86WonVsRISC?atomcommentsDWiki2012-05-21T04:05:25ZRecent comments in Chris's Wiki :: blog/tech/Whyx86WonVsRISC.By Chris Siebenmann on /blog/tech/Whyx86WonVsRISCtag:CSpace:blog/tech/Whyx86WonVsRISC:76228dc63de5334bac0acd44807a614eb1266666Chris Siebenmann<div class="wikitext"><p>Department of belated replies: I put my commentary into an entry,
<a href="https://utcc.utoronto.ca/~cks/space/blog/tech/Whyx86WonVsRISCII">Whyx86WonVsRISCII</a>.</p>
</div>2012-05-21T04:05:25ZBy nothings on /blog/tech/Whyx86WonVsRISCtag:CSpace:blog/tech/Whyx86WonVsRISC:c7954ca311b3d5424b49cd456a944e0b0e72c13cnothings<div class="wikitext"><p>I see your two reasons as really being linked by another (debatably) "simpler" reason:</p>
<p>Intel had deep pockets and a massive x86 userbase. That meant they were able to outspend everyone else on chip process; Intel chips were generally a generation (or two?) ahead of the RISC processors, which meant they had many more transistors for the same-sized chip (at about the same cost of goods).</p>
<p>That meant Intel could burn transistors on the CISC-to-RISC translation without having to make any visible sacrifices, and still have more transistors left over. (It did cost them extra cycles to perform that translation, but because it's hardware, those cycles could execute in parallel most of the time.) This made it easy to compete with the fastest RISC chips (the DEC Alpha, I guess). And it's not like CISC-to-RISC was unknown in 1992; the first pipelined implementation of the VAX was the MicroVAX Rigel chip released in 1989 <a href="http://en.wikipedia.org/wiki/Rigel_%28microprocessor%29">http://en.wikipedia.org/wiki/Rigel_%28microprocessor%29</a>, which was essentially doing the same CISC-to-RISC translation that Intel would later undertake with the Pentium Pro.</p>
<p>Over time, as CPUs spent more and more transistors on making single-threaded code fast (e.g. with out-of-order execution), the CISC-to-RISC translation unit also became a tiny, nearly invisible part of the chip (in terms of transistors), and yet Intel still had a process advantage. (Now that we are going multicore, that's no longer true; the amount of chip resources spent on CISC-to-RISC grows with the number of cores.)</p>
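<p>(As an illustrative aside, the load/op/store splitting that such a translation unit performs can be sketched in a few lines of Python. This is a toy model, not how any real decoder works; the instruction tuples and micro-op names are invented for the example.)</p>

```python
# Toy sketch of CISC-to-RISC translation: a CISC-style instruction with a
# memory operand gets split into simple load/execute/store micro-ops, while
# a register-only instruction passes through as a single micro-op.
# Instruction format and micro-op names are made up for illustration.

def decode_to_uops(instr):
    """Split one (op, dst, src) instruction tuple into micro-ops."""
    op, dst, src = instr
    if dst.startswith("["):                     # memory destination: read-modify-write
        addr = dst.strip("[]")
        return [
            ("LOAD", "tmp", addr),              # tmp <- mem[addr]
            (op, "tmp", src),                   # tmp <- tmp OP src
            ("STORE", addr, "tmp"),             # mem[addr] <- tmp
        ]
    return [(op, dst, src)]                     # register-only: already RISC-like

# An x86-style `add [rbx], rax` becomes three simple micro-ops:
print(decode_to_uops(("ADD", "[rbx]", "rax")))
# while `add rax, rcx` stays a single micro-op:
print(decode_to_uops(("ADD", "rax", "rcx")))
```

<p>(The point of the sketch is only that the back end sees uniform, simple operations regardless of how baroque the front-end encoding is, which is why the translation unit could shrink to near-invisibility as the rest of the chip grew.)</p>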
<p>Of course, it's true Intel also had a process advantage for Itanium, so the story is a little more complex, and probably biased towards 'desire for x86 compatibility'. (Although the Mac certainly was able to switch processors, so I'm not sure it would have been <em>impossible</em> for Windows to do so; I think it may also just have been a bad design in Itanium--something Intel was certainly capable of, witness the Pentium 4.)</p>
</div>2012-04-18T21:28:01ZFrom 94.194.182.118 on /blog/tech/Whyx86WonVsRISCtag:CSpace:blog/tech/Whyx86WonVsRISC:fbf363346e168080f7193696d833c9c30241d2e5From 94.194.182.118<div class="wikitext"><p>It's my understanding that modern CPUs don't execute x86 machine code directly; they're actually RISC processors that execute a program which runs and optimises x86 instructions.</p>
<p>I'm not sure where I heard this (at best it's a simplification of the truth), but it's not unprecedented. Take PyPy, for example: it's written in a subset of Python that runs and optimises standard Python, yet it outperforms CPython. It might sound like witchcraft, but it's easier to optimise this way than by dealing with (relatively) low-level code.</p>
<p>--M</p>
</div>2012-04-15T18:06:07Z