Glitch in Chip Set Blights Intel's Nocona Rollout

Although Intel's 64-bit "Nocona" deployment brought announcements from several OEMs, a bug that manifests only under "theoretical" conditions was found in its supporting chip sets.

Server OEMs this week helped complete the second phase of Intel's 64-bit "Nocona" deployment, although the launch was marred by a bug in the supporting chip sets.

Intel first launched the Nocona chip in June as a workstation processor. Nocona was the first Intel chip to use the company's 64-bit extensions, dubbed "EM64T" technology. In the past week, vendors including Dell Inc., Hewlett-Packard Co. and IBM have rolled out new multiprocessor servers based on the chip.

Read more here about the companies' efforts to revamp their low-end servers.

Intel also announced the E7520 and E7320 chip sets, also known as the "Lindenhurst" and "Lindenhurst Value Segment (VS)" chip sets.

The momentum of the Nocona-Lindenhurst shift to 64-bit computing was dampened somewhat by Microsoft's disclosure last week that it has pushed back its own 64-bit operating systems to early 2005. But Intel executives have maintained that the 64-bit extensions are useful primarily because they free server OEMs from the 4GB address-space constraint of 32-bit systems.
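That 4GB ceiling follows directly from the width of a 32-bit address. A quick calculation (a back-of-envelope sketch, not an Intel figure) shows where the limit comes from and what full 64-bit addressing would open up:

```python
# An n-bit pointer can address 2**n distinct bytes.
GB = 2**30

addr_32 = 2**32  # bytes addressable with 32-bit pointers
addr_64 = 2**64  # bytes addressable with full 64-bit pointers

print(addr_32 // GB)        # 4 -> the 4GB constraint of 32-bit systems
print(addr_64 // addr_32)   # 4294967296 -> factor gained by 64-bit addressing
```

In practice, shipping 64-bit processors implement fewer than 64 physical address bits, but the 4GB wall of flat 32-bit addressing is gone either way.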

Intel executives also offered a more immediate selling point: the Nocona-Lindenhurst combination can cut TCO (total cost of ownership) by reducing power consumption. The Lindenhurst chip set uses DDR2 memory, which can consume as much as 40 percent less power than ordinary DDR1 chips.

The Nocona chip also includes a feature called DBS (demand-based switching) that uses a process similar to Intel's SpeedStep power-management technology to cut the processor's power draw from a peak of 103 watts down to 73 watts.

The impact of power on a server farm is a double expense: as power is consumed, waste heat is generated that must be removed. "The impact on TCO is a little bit complex, but there is absolutely a correlation toward reducing the power consumption," said Ajay Malhotra, general manager of enterprise marketing and planning for Intel's enterprise platforms group, based in Santa Clara, Calif. "That means less money spent on power, and less spent on cooling—the fans spin slower, too," reducing noise, he said.

Intel executives also said the initial revisions of the E7520 and E7320 chip sets contained a "root flaw" that affected the use of add-on cards. The flaw manifested itself only under theoretical conditions, executives said, and will be fixed through a revised stepping, or version, of the chip.

"All complex, multimillion-transistor products can have errata, and Intel has had a rigorous process for dispositioning customer sightings for many years," Malhotra said. "These manufacturer-reported sightings can sometimes lead to either a spurious issue or the root cause of an erratum, i.e., behavior contrary to product specifications. Intel regularly publishes spec updates containing errata details."

"There has been an erratum identified in the Intel E7520 and Intel E7320 chip sets, which affects PCI Express slot functionality for add-in cards," Malhotra said. "We duplicated the reported issue and root-caused it in our validation labs, and we provided implications and workaround details to our customers.

"Under certain, synthetic, laboratory-simulated circumstances, the erratum can cause the memory controller and the I/O bridge to hang the system and lock up. This issue has not been seen in commercially available software. The issue has been root-caused, and we are planning to fix the issue in silicon with the fixed version available in early Q4."

The issue manifested itself when the chip sets received 10 to 15 times the number of interrupts they would see under normal conditions, Intel spokesman Michael Houlihan added.

Check out eWEEK.com's Infrastructure Center for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.