It's not an easy time to be either Intel or AMD. The launch of their 64-bit server processors puts the spotlight on the almost trivial difference between their strategies, but the companies are really just placing different bets in the same casino of complex and costly designs. It's a gaming hall that both chip makers were forced to enter by the relentless pressures of semiconductor fabrication progress.
Given the option, we wonder if both companies would rather just spend a quiet evening at home knitting memory chips and mobile processors. But that choice is not on the table, at least not at the margins that both companies need to support their present business models and future growth ambitions. Instead, both companies hope that the mass-market economics of high-volume processor production will draw resource-limited server manufacturers—and cost-conscious enterprise buyers—into their 64-bit game.
Server processing power is clearly a buyer's market. Intel has been almost embarrassed by its own success in continuing to squeeze higher clock rates, if not proportional performance gains, from the aging Pentium architecture and its Xeon server-optimized configurations.
The question, though, is how much opportunity remains to analyze an ever-more-rapid flow of Pentium-style instructions, on the fly, in search of chances to transform them into entwined (but not entangled) streams of concurrent operations. Two different answers to that question define the difference between Itanium and Opteron as the next logical step for server CPUs.
Intel is betting that on-chip instruction-scheduling hardware, which emerged on x86 chips in the mid-1990s to inject new life into 1980s code, is nearing its limit—that its added cost, in terms of design complexity and share of on-chip resources, will soon exceed its contribution.
Each successive generation of the Pentium processor has shown a smaller percentage increase in the number of instructions per clock cycle, and the Pentium 4 actually suffered a decline. Thread-level parallelism, such as Intel's Hyper-Threading, offers perhaps another few tens of percentage points of performance gain, but it increases the workload at the edge of the chip, where congestion is already a problem.