Test Leaves Sour Taste
If there's ever been a personal computer that everyone's gotta get, it's the Apple Power Mac G5. Even x86 fanatics have to admit that the G5 has everything they could possibly want in a personal computer. The G5 is a screamer, with a 1GHz front-side bus and an IBM 64-bit Power4-derived processor, and it's as good-looking as it is fast.
Unfortunately, it appears that Apple has stolen a particularly tainted page from traditional PC vendors. Once, Steve Jobs marketed the Mac as the tool for individualists (think the "1984" Super Bowl commercial), then for people who had some intellectual capacity (the "Think Different" campaign) and eventually as a computer that simply worked without headaches and aggravation (the "Switch" campaign). Now Apple is taking on the PC industry with "benchmarketing."
It's right out of 1992, when PC vendors pumped up their efforts to cheat on benchmarks to claim a meaningless performance edge over a competitor. Sad to say, those benchmark shenanigans worked to a degree. For example, all the popular video card companies that remain today at one time exploited loopholes in benchmark design.
In the past, Apple took the high road, either avoiding benchmarks or supporting third-party ones only in a limited fashion. This time, however, Apple is testing by comparing the G5 with two Dells: one a PC with the fastest Intel P4 and the other an Intel Xeon-based, dual-processor workstation. Although the G5 is based on a 64-bit processor designed by IBM, Apple did not benchmark against a comparable 64-bit Intel Itanium-based workstation.
The test, audited and run by VeriTest (note that VeriTest purchased eTesting Labs, a former Ziff Davis Media company, last year), was sponsored by Apple, which should raise some doubts regarding objectivity. More doubts can be stirred by looking at the test methodology (www.veritest.com/clients/reports/apple/apple_performance.pdf). Whenever a 64-bit platform is compared with a 32-bit platform with a different architecture, there will always be a question about what actually was tested: the capabilities of the platform or those of the compiler.
In this case, Apple chose to use GCC, an open-source compiler (gcc.gnu.org). GCC is widely used, and it supports multiple platforms, so it is a good choice for doing an Apple-versus-Intel benchmark. However, Intel's own compiler is better optimized for the P4 and Xeon architectures. In addition, Apple chose to do odd things, such as turning off hyperthreading on the Intel systems.
It's quite possible that leaving hyperthreading enabled would actually slow down the Intel systems on this benchmark. That's because hyperthreading, which makes a single processor act as if it were two processors, typically chokes on optimized benchmarks. But since hyperthreading is designed to make real-world applications run faster, what does that say about the relevance of the benchmarks to the work that people actually do?
Apple's effort to raise benchmarks to prominence notwithstanding, they don't matter anymore, unless you're a gamer or have a specific application you need to test. For example, I'd love to benchmark Avid's video editing on a G5 versus a P4.
In case you haven't noticed, though, processor performance improves every time the wind blows. And it blows a lot these days. In the early 1990s, Intel processors went from 60MHz to 66MHz to 90MHz over the course of two years. Nowadays, Intel cranks up performance a full gigahertz every year. In three months, the comparison will be invalidated, since Apple and IBM don't improve performance as frequently as Intel does.
More important, corporations aren't looking to spend $2,000 on a really fast word processor. Besides, desktop performance (except for overclocking, which is valued by gamers but which VeriTest didn't test either) pales in importance compared with networking performance.
Don't get me wrong. I definitely want a G5. Everybody does, or should.
This new Mac is the fastest Mac ever, better designed and better looking than the cardboard cutouts from the margin-hungry PC players. But Apple is wrong to obscure a good story with a benchmark test that, at best, will be irrelevant three months from now.
John Taschek is at firstname.lastname@example.org.