There have been genuinely interesting books about the tortuous path that new IT hardware follows to market: Tracy Kidder's 1981 landmark The Soul of a New Machine, for example, superbly balances the human element against the complexities and limitations of new technology. Sad to say, Itanium Rising gives the genre a different and much less impressive aura.
Although this newly published Prentice-Hall title does shed some light on the decisions that went into Intel's daring move to the IA-64 architecture, it also casts confusing shadows with self-serving distortions and even contradictory statements about what it is that makes Itanium (allegedly) destined to succeed.
Authors Jim Carlson and Jerry Huck are both affiliated with Hewlett-Packard, which has made a big bet on Itanium as the successor to its own PA-RISC. Their desire to justify HP's decision comes across as a desperate “food fight” strategy, throwing every possible buzzword and argument at the reader in the hope that something will stick. In the process, they commit far too many convenient simplifications and make outrageously false comparisons between Itanium and alternative architectures.
By the time we get to page 5, for example, we're already being subjected to a sales pitch that makes no logical connection between Itanium's risks and potential enterprise rewards. Offering the scenario of a failed input/output card in a server, the authors assert that “On a typical Itanium-based system, you can pull out the card while the system is still running and plug another one in…You don't have to shut the machine off.” This is probably true—we could argue about whether enough Itanium machines have yet shipped to make the word “typical” meaningful—but this benefit is in no way due to the choice of microprocessor.
The authors can't seem to keep track of their own strategic views. On page 12, they predict, “the Itanium processor is the chip platform that will enable HP and Intel to stand out as a [sic] technology pioneers in an industry that is rapidly falling victim to mass commoditization.” On page 13, however, we learn that, “Because it is based on an industry standard architecture backed by volume manufacturing, Itanium-based systems can decisively claim the ability to run a computing environment at a significantly lower cost than a comparable RISC-based system.”
The sloppy syntax and careless editing of these excerpts are unfortunately common throughout the book: on page 51, for example, we read that Intel co-founder Gordon Moore said that the future of processor performance improvement depends on “the two size factors: bigger dice and finer dimensions.” After momentarily wondering if Moore had been using a gambling metaphor, I realized that an editor had almost surely inserted “dice” for what Moore habitually refers to as “dies.”
What's most offensive, though, are not errors of form but travesties of content. Trying to distinguish the Itanium from all that has come before, the authors start waving their hands, saying that “All CISC and RISC machines are based on the model of a Von Neumann engine…where the processing must appear to be done sequentially.” But the basic model of the Von Neumann machine, with a processor using storage registers for immediate data and a random access memory for storing other data and instructions, applies to the Itanium as well.
Claiming that Itanium sets a new standard for floating-point mathematics, the authors assert that “a 64-bit wide number that an Itanium-based system can handle obviously holds many more levels of precision than a 32-bit number used by RISC machines.” Perhaps that's why 80-bit floating-point registers have been used for two decades, for example in Intel's own 8087 co-processor that many AutoCAD users plugged into their first-generation IBM PCs.
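The precision hierarchy the authors muddle is easy to check for yourself. The following minimal C sketch (my illustration, not the book's) reports the guaranteed decimal digits and machine epsilon of each standard floating type; it assumes a conventional toolchain where long double maps to the 80-bit x87 extended format that dates back to the 8087, though other targets may map it differently:

```c
#include <float.h>
#include <stdio.h>

/* Report the precision of each C floating type.  A smaller machine
   epsilon means more significand bits, i.e. more precision.  On many
   x86 compilers, long double is the 80-bit extended format introduced
   with Intel's own 8087 co-processor. */
void print_precision(void)
{
    printf("32-bit float:       %2d decimal digits (epsilon %e)\n",
           FLT_DIG, (double)FLT_EPSILON);
    printf("64-bit double:      %2d decimal digits (epsilon %e)\n",
           DBL_DIG, DBL_EPSILON);
    printf("80-bit long double: %2d decimal digits (epsilon %Le)\n",
           LDBL_DIG, LDBL_EPSILON);
}
```

A 64-bit double carries roughly 15 decimal digits against a float's 6, so the precision gain is real; the point is that it was never unique to Itanium.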
There is a core proposition that distinguishes the Itanium from its competitors and its predecessors. Intel has gambled that the best way to get ahead of the complexity of on-chip instruction-scheduling hardware, which seeks out opportunities for parallelism on the fly, is to find those opportunities when a program is compiled and place parallelism in the code ahead of time: hence the Intel moniker EPIC, for “explicitly parallel instruction computing,” that defines the genuine difference between Itanium and what has come before.
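To make that distinction concrete, consider this minimal C sketch (my example, not drawn from the book). The three assignments have no mutual data dependences, so an EPIC compiler can mark them, at compile time, as a single explicitly parallel instruction group; an out-of-order RISC or CISC core must rediscover the same independence in scheduling hardware at run time:

```c
/* Three operations with no mutual data dependences.  An EPIC
   compiler encodes this independence explicitly in the emitted
   instruction stream; conventional out-of-order hardware finds
   it on the fly instead. */
long schedule_example(long a, long b, long c, long d)
{
    long x = a + b;     /* independent of y and z               */
    long y = c * d;     /* independent of x and z               */
    long z = a - d;     /* independent of x and y               */
    return x + y + z;   /* depends on all three: a serial point */
}
```

The final sum is the kind of dependence boundary that IA-64 code marks with a "stop," telling the processor where explicit parallelism ends.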
In asserting EPIC's advantages, though, the authors over-reach. They spend all of Chapter 4 telling us that Itanium is needed to meet the demands of “utility computing” as a service delivered through heterogeneous, highly distributed systems. But they tell us in Chapter 15 that Itanium's advantage over other architectures comes from the EPIC compiler's ability to “make global optimizations” as it runs through the entire stream of code that performs a task. When the code that performs a high-level task resides on many different systems and is created and maintained by many different parties, however, any advantage based on global analysis seems suspect; the pre-Itanium alternative of seeking parallelism opportunities on the fly seems intuitively more likely to pay off in such an environment.
Publishers warn aspiring authors that a book has to be thick enough for its spine to display a title that can be seen by prospective buyers as they scan a bookstore's shelves. Itanium Rising shows the symptoms of a worthwhile vendor white paper being bulked up to 198-page book length, including 26 pages of prefatory material, definitions of terms, and index.
My recommendation: if you want an Itanium sales pitch, don't buy a $30 book; let a sales rep buy you a $30 lunch that you're sure to find more substantial.
- Title: Itanium Rising: Breaking Through Moore's Second Law of Computing Power
- Authors: Jim Carlson, Jerry Huck
- Publisher: Prentice-Hall
- Length: 204 pages
- Price: $29.99