How AMD killed the Itanium

July 15, 2005

I've been telling people versions of this story for a while, so I figure I might as well write it down for posterity (or at least entertainment).

When Intel started Itanium development work in the mid 1990s, it had a straightforward story: the x86 architecture was about to run into a performance ceiling because of all the ugly warts it had inherited from its predecessors (a very limited supply of registers, a highly irregular instruction format, and so on). To get the future performance that customers wanted, a modern, wart-free RISC-oid architecture was needed: the IA-64.

This was no different from the stories all the other CPU vendors were telling at the same time. Unlike those CPU vendors, Intel realized something important: very few people buy new CPUs to run only new software. Even in the mid 1990s, most people were using Intel x86 CPUs to run their programs, so that was where the big CPU dollars were.

So Intel promised that there would be a magic x86 part glued on the side of the first few generations of Itaniums that would run all of your existing important programs. Because very few people are ever interested in upgrading to a computer that runs their existing programs slower, Intel needed this magic x86 part to run at least as fast as their real x86 chips.

Intel could get away with this for two reasons. First, x86 chips were relatively simple compared to the design that Intel was planning, so it should be easy to glue the core of one on the side of the new CPU. Second, Intel could make the x86 performance story come true simply by moving most of their CPU design manpower (and money) to the IA-64.

Then AMD showed up to ruin Intel's party by competing directly with them. It didn't matter that AMD didn't have faster CPUs to start with; AMD's existence meant that if Intel left them alone, AMD would surpass Intel and kill Intel's main revenue source. Intel had to crank up x86 performance to make sure that didn't happen. This probably had three effects:

  • people got diverted from IA-64 CPU design back to x86 CPU design;
  • because x86 got faster, Itanium had to get faster too;
  • the only way to make x86 faster was to make it more complicated, which made it harder to integrate a current-generation x86 into Itanium.

Naturally, the schedule for delivering a faster, more complicated Itanium slipped. Which kept making the problem worse, especially when making x86 chips go really fast started to require serious amounts of design talent. Instead of designing one high-performance CPU and doing a small amount of work on another CPU architecture, Intel was trapped in an increasingly vicious race to design two vastly different high-performance CPUs at the same time, and one of them had to be backwards compatible with the other.

It's no wonder the Itanium shipped years late, with disappointing performance and very disappointing x86 compatibility performance. (And heat issues, which didn't help at all.)

With AMD's recent x86-64 64-bit extension of the x86 architecture, Intel couldn't even claim that Itanium was your only choice if you needed 64-bit memory space and suchlike. Intel's capitulation to making its own almost 100% compatible x86 64-bit extension was inevitable, but probably the final stake in Itanium's heart. (And likely a very bitter pill for Intel to swallow.)

And that's how AMD killed the Itanium.


Comments on this page:

By Sam James at 2023-02-04 07:55:18:

What's your perspective on how VLIW fits into this? It's been often said that one of the reasons Itanium died a death is that it required a huge amount of compiler work that either never happened or wasn't good enough in order to deliver superior performance.

Indeed, to this day, the ia64 kernel and GCC port are both in a state of disrepair, and are among the most lacking in maintenance help (and the most beset by dire problems) even compared to other niche platforms.

By cks at 2023-02-04 14:31:31:

My outsider's view is that VLIW as a whole was an extreme bet on static scheduling and statically discoverable parallelism. This bet wasn't uncommon in early RISC (although not universal, there's an informative comment about it on my entry on what I see about RISC's big bets), but it turned out to be a bad bet. For whatever reason, in practice people were never able to extract enough static parallelism from ordinary source code, and the dynamic scheduling of out-of-order execution worked out much better. Since Itanium had bet on VLIW, it was hobbled from the beginning, and the whole effort involved probably didn't help in the design's overall goal of going fast.

It's possible that without VLIW, Intel would have been able to make Itanium fast enough (and soon enough) even with the x86 sidecar, although it doesn't feel likely to me. I suspect that without the x86 sidecar, a VLIW Itanium would have been dead too; VLIW is clearly harder to make go fast than other approaches, and a non-sidecar Itanium would have been competing purely against other RISCs for a much smaller and slower-growing market.

(With or without the x86 sidecar, I think all RISCs were dead in practice once AMD showed up to crank up x86 performance, because of the weight of x86 code.)

By kodos at 2023-02-06 23:48:09:

I was there.

A few things are missed:

  • HP partnership

  • HP designers ... and _

  • Who else was competing in that market? How did they do?

Finally: was Itanium(tm) really a failure?
