SAN FRANCISCO-Intel is running an internal pilot program to show that workstations can be clustered to give businesses and institutions access to HPC levels of compute power that they normally would not have.
The idea is to give what is becoming known as the “missing middle,” those businesses and researchers whose workloads demand high compute power but who lack the money or access to HPC (high-performance computing) environments, the capabilities they need for their work.
“In each segment of the marketplace … we’re seeing a trend for the need for high-performance computing,” John Hengeveld, director of technical compute marketing for Intel’s Data Center Group, said in an interview here at the Intel Developer Forum.
The pilot is being run by silicon design teams inside Intel, said Shesha Krishnapura, senior principal engineer for Intel’s IT Engineering Group.
Krishnapura said that Intel runs more than 100,000 servers, 60,000 of which are used for silicon design by about 20,000 silicon design engineers. Most of those 60,000 are multicore systems.
The design teams are located at multiple sites around the world, and not every site has access to a local data center, he said. The problems that arise are ones of latency and space, Krishnapura said. As the workloads grow, more pressure is put on the data centers, and with the kind of work the designers do, network latency of 10 milliseconds or more can have a significant negative impact on their work.
When the servers in the data center start hitting capacity, the natural response is to build more space, he said. And the latency is a constant worry.
So for the past six months, Intel has been working with the design team on a concept officials are calling CCC, or Cubicle Clustered Computing. Traditionally, engineers use high-end laptops to access back-end blade servers, Krishnapura said. In the CCC pilot, workstations are configured to the same specifications as those servers. The workstations are placed in each cubicle and secured so there can be no physical access.
The engineers then tap the compute power of the locally housed workstations rather than servers in data centers farther away.
“The network latency [issue] is gone, because [the workstation] is local,” Krishnapura said.
At the same time, the workstations can be combined into a clustered environment, giving the engineers the compute power they need. The data storage is still done in the data centers, for security reasons, but the compute power is in the workstations. The systems support the IPMI management specification and can be managed remotely.
In addition, space in the data center is saved. For example, rather than housing 48 blade servers in a rack, the 48 workstations are distributed throughout the office.
Hengeveld said such an environment could be a boon for businesses in a host of areas, including financial services, oil and gas, and fluid dynamics, that are seeing a growing need for access to large amounts of compute power but may not have the means to get it.
Having that local compute power also will be important as the industry continues its move to exascale computing. Few people will have access to the first exascale systems, but once such systems are more widely deployed, with 40 or 50 in operation, businesses and researchers will need local compute power to crunch the data they get from those systems, he said.
The Intel pilot involves about 200 engineers in five sites around the world, Krishnapura said. Hengeveld said the pilot has worked well, and now Intel is looking to get the concept into the industry.
“We’re talking about this as a viable idea for the missing middle,” Hengeveld said.