Hewlett Packard Enterprise has been working in its labs for more than two years on an ambitious project called The Machine, a bold move to re-imagine data center architecture by putting memory, rather than processors, at the center of the system.
The Palo Alto, Calif.-based IT provider on Nov. 28 at its Discover London conference successfully demonstrated Memory-Driven Computing, a concept that puts memory at the center of the computing platform to realize performance and efficiency gains not possible today.
SAP, Oracle and others have used in-memory designs to power databases, but until now no one had demonstrated the concept running an entire server.
The Machine was announced two years ago, and Martin Fink, the company's CTO and head of Hewlett Packard Labs, has been the key driver behind the system's development and roadmap. Fink, however, will retire at the end of this year after 30 years with the company; the Labs organization is being folded into HPE's Enterprise Group, which is headed by Executive Vice President and General Manager Antonio Neri.
According to the company, tying the Labs more closely to HPE's enterprise business will make it easier to move some of the new technologies being developed for The Machine, including silicon photonics, custom processors, a new operating system and advanced memristor memory, into other systems.
Growing Number of Connected Devices Requires New Computing Models
Gartner predicts that by 2020 the number of connected devices will reach 20.8 billion, generating an unprecedented volume of data that is growing faster than existing computing architectures can process, store, manage and secure it.
“We have achieved a major milestone with The Machine research project—one of the largest and most complex research projects in our company’s history,” Neri said. “With this prototype, we have demonstrated the potential of Memory-Driven Computing and also opened the door to immediate innovation. Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies.”
The proof-of-concept prototype, which was brought online in October, shows the fundamental building blocks of the new architecture working together, just as they had been designed by researchers at HPE and its research arm, Hewlett Packard Labs. HPE has demonstrated:
–compute nodes accessing a shared pool of Fabric-Attached Memory;
–an optimized Linux-based operating system (OS) running on a customized system on a chip (SoC);
–photonics/optical communication links, including the new X1 photonics module, online and operational; and
–new software programming tools designed to take advantage of abundant persistent memory (a rough sketch of this programming model appears below).
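HPE has not published the prototype's programming interfaces in detail, but the basic idea of fabric-attached, byte-addressable memory can be illustrated with ordinary POSIX calls. The C sketch below is an assumption-laden illustration, not HPE's actual API: it supposes the pool is exposed to Linux as a hypothetical device file, /dev/fam0, and simply maps it so that shared data is reached with loads and stores rather than block I/O.

```c
/* Illustrative sketch only; not HPE's actual interface.
 * Assumes a fabric-attached memory pool exposed as a hypothetical
 * device file (/dev/fam0) that each compute node can map directly. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FAM_PATH "/dev/fam0"   /* hypothetical pool device */
#define FAM_SIZE (1UL << 30)   /* map 1 GiB for illustration */

int main(void)
{
    int fd = open(FAM_PATH, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* After mmap, ordinary pointers address the shared pool directly;
     * there is no read()/write() copy through a storage stack. */
    uint8_t *pool = mmap(NULL, FAM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* A record stored here is addressable by any other compute node
     * that maps the same region of the pool. */
    const char msg[] = "written by node 0";
    memcpy(pool, msg, sizeof msg);

    munmap(pool, FAM_SIZE);
    close(fd);
    return 0;
}
```

The point of the model is that data is addressed rather than copied; coordination between nodes (locking, ownership, crash consistency) would have to be layered on top by the new programming tools.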
During the design phase of the prototype, simulations predicted that this architecture would outperform current computing by multiple orders of magnitude. The company said it has run the new software programming tools on existing products, demonstrating execution speeds improved by as much as 8,000 times on a variety of workloads. HPE said it expects similar results as it expands the prototype's capacity with more nodes and memory.
In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing. Exascale is a developing area of High Performance Computing (HPC) that aims to create computers several orders of magnitude more powerful than any system currently online.
HPE claims its Memory-Driven Computing architecture scales from tiny IoT devices to the exascale, making it a foundation for a wide range of emerging high-performance computing and data-intensive workloads, including big data analytics.
Memory-Driven Computing and Commercialization
HPE said it is commercializing the technologies developed under The Machine research project into new and existing products. These technologies currently fall into four categories:
Non-Volatile Memory (NVM): HPE continues its work to bring true, byte-addressable NVM to market and plans to introduce it as soon as 2018/2019. Using technologies from The Machine project, the company developed HPE Persistent Memory—a step on the path to byte-addressable non-volatile memory, which aims to approach the performance of DRAM while offering the capacity and persistence of traditional storage. The company launched HPE Persistent Memory in the HPE ProLiant DL360 and DL380 Gen9 servers.
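What makes byte-addressable NVM different from DRAM for a programmer is durability: a store that is still sitting in a CPU cache is not yet persistent. The fragment below is a minimal sketch of the flush-and-fence discipline this implies on x86, using standard compiler intrinsics; it is a generic illustration, not HPE's product API, and real persistent-memory libraries wrap this pattern in higher-level transactional interfaces.

```c
/* Generic illustration of persisting a write to byte-addressable NVM
 * on x86; assumes `dst` points into a persistent-memory mapping and
 * that the platform flushes cache lines to the persistence domain. */
#include <immintrin.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE 64

static void persist_copy(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);

    /* Flush every cache line the write touched... */
    uintptr_t line = (uintptr_t)dst & ~(uintptr_t)(CACHE_LINE - 1);
    while (line < (uintptr_t)dst + len) {
        _mm_clflush((const void *)line);
        line += CACHE_LINE;
    }

    /* ...then fence so the flushes complete before any later store
     * (for example, a "record valid" flag) is issued. */
    _mm_sfence();
}
```

Without the flush and fence, a power loss could leave a "valid" flag persistent while the record it guards is not; ordering writes into the persistence domain is the central new concern such programming tools exist to manage.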
Fabric (including Photonics): Building on its photonics research, HPE has taken steps to future-proof products; for example, HPE Synergy systems available next year will be able to accept the photonics/optics technologies now in advanced development. Looking beyond that, HPE plans to integrate photonics into additional product lines, including its storage portfolio, as soon as 2018/2019. The company also plans to bring fabric-attached memory to market using the high-performance interconnect protocol being developed under the recently announced Gen-Z Consortium, which HPE has joined.
Ecosystem enablement: Much work has already been done to build software for future memory-driven systems. HPE launched a Hortonworks/Spark collaboration this year to bring software built for Memory-Driven Computing to market. In June, the company also began releasing code packages on GitHub to familiarize developers with programming on the new memory-driven architecture. The company plans to bring this code to existing systems within the next year and to build next-generation analytics and applications for new systems as soon as 2018/2019.
Security: With this prototype, HPE demonstrated new, secure memory interconnects in line with its vision to embed security throughout the entire hardware and software stack. HPE plans to further this work with new hardware security features in the next year, followed by new software security features over the next three years. Beginning in 2020, the company plans to bring these solutions together with additional security technologies currently in the research phase.