IBM, Google and Mellanox Technologies are among the founders of a new industry consortium that promises to boost server performance by as much as 10 times at a time when emerging workloads such as data analytics and artificial intelligence are rapidly increasing the demands on systems.
The OpenCAPI Consortium is developing an open standard based on IBM’s Coherent Accelerator Processor Interface (CAPI) technology that will be used to connect CPUs, graphics cards, networking, memory, storage and other components within a system, a job now done by such technologies as PCIe 3.0. The new OpenCAPI standard, however, will offer data speeds of up to 150 GB/s, or about 10 times faster than PCIe 3.0, according to consortium members.
OpenCAPI will enable tighter integration of those components, letting server makers design systems that put compute power closer to the data and remove bottlenecks, improving performance.
Other founding members of the consortium are system makers Hewlett Packard Enterprise (HPE) and Dell EMC, chip makers Advanced Micro Devices, Nvidia and Xilinx, and memory maker Micron. Notably absent is Intel, the dominant player in the server chip market, which is going its own way on interconnect technologies.
Two key drivers for the new open standard are the rise of artificial intelligence (AI) and machine learning, which are forcing systems to collect and analyze massive amounts of information, and the need for accelerators like GPUs and field-programmable gate arrays (FPGAs) to work with CPUs to process the data, Brad McCredie, IBM Fellow and vice president of Power development, told eWEEK.
“These two processes are challenging,” McCredie said, adding that the industry will have to change the way it approaches processing and computing to address the emerging demands.
OpenCAPI is the latest open-standards effort focused on the challenges of processing and moving data, not only within servers but also between systems, as computational workloads grow due not only to AI and data analytics but also to the proliferation of mobile devices, the increased use of video, the internet of things (IoT) and cloud computing.
In May, many of the same industry players, including IBM, ARM, AMD and Xilinx, announced an alliance to create a single data center interconnect fabric that will enable chips and accelerators from different vendors to communicate without the need for complex programming. The proponents of the Cache Coherent Interconnect for Accelerators (CCIX) said the new standard will make servers more efficient at running emerging workloads.
Earlier this week, several of those same companies launched the Gen-Z Consortium, which will develop a flexible, high-performance, low-latency memory-semantic fabric that will enable systems to move and access large amounts of data more easily and quickly, with products using the new Gen-Z interconnect expected by 2018.
IBM’s McCredie said the OpenCAPI group is working with the Gen-Z Consortium, whose efforts are complementary. Essentially, OpenCAPI focuses on moving data within a system, while Gen-Z addresses interconnects between systems, though its standard can also be used to link components within servers.