On Wednesday, Sept. 13, at 11 a.m. PDT/2 p.m. EDT/7 p.m. GMT, @eWEEKNews will host its 59th monthly #eWEEKChat. The topic will be, "Memory-Based Computing: Future of the Data Center." It will be moderated by Chris Preimesberger, eWEEK's editor of features and analysis.
Some quick facts:
Topic: “Memory-Based Computing: Future of the Data Center”
Date/time: Sept. 13, 11 a.m. PDT/2 p.m. EDT/7 p.m. GMT
Tweetchat hashtag: Use #eWEEKChat to follow and participate, though a real-time chat room link is easier and more efficient.
"Memory-Based Computing: Future of the Data Center?"
Eleven months ago, Hewlett Packard Enterprise pulled the wraps off an intriguing new project called The Machine, a bold move to re-imagine data center architecture by putting memory, rather than processors, at the center of the system.
This was the newest iteration of something called Memory-Driven Computing, a concept that puts memory at the center of the computing platform to realize performance and efficiency gains not possible today.
SAP, Oracle and others have used this approach to power databases, but no one had demonstrated it running an entire server--until HPE's breakthrough.
Ostensibly, memory-driven computing will enable workloads to be completed much faster, allowing use cases such as artificial intelligence, big-data analytics, genome processing, oil and gas exploration, weather prediction and others to become run-of-the-mill workloads.
Growing Number of Connected Devices Requiring New Computing Models
Gartner predicts that by 2020, the number of connected devices will reach 20.8 billion and generate an unprecedented volume of data, which is growing at a faster rate than the ability to process, store, manage and secure it with existing computing architectures. Thus, in-memory processing appears to be a viable solution for many of these tasks via exascale computing.
HPE’s Memory-Driven Computing prototype includes:
--compute nodes accessing a shared pool of Fabric-Attached Memory;
--an optimized Linux-based operating system (OS) running on a customized System on a Chip (SOC);
--photonics/optical communication links, including the new X1 photonics module, online and operational; and
--new software programming tools designed to take advantage of abundant persistent memory.
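To make the last item more concrete: programming tools for abundant persistent memory generally let applications treat durable storage as ordinary byte-addressable memory rather than going through file or database APIs. The sketch below emulates that model in Python by memory-mapping a file and reading and writing records in place; the file name and record layout are invented for illustration, and HPE's actual Fabric-Attached Memory toolchain works differently.

```python
import mmap
import os
import struct

# Illustrative sketch only: a file stands in for a pool of
# byte-addressable persistent memory (not HPE's actual tools).
PATH = "pool.bin"
POOL_SIZE = 4096  # size of the emulated memory pool in bytes

# Create a backing file to act as the persistent-memory pool.
with open(PATH, "wb") as f:
    f.truncate(POOL_SIZE)

with open(PATH, "r+b") as f:
    pool = mmap.mmap(f.fileno(), POOL_SIZE)
    # Write a 64-bit integer directly into "memory" at offset 0;
    # no serialization layer or storage API in the data path.
    struct.pack_into("<q", pool, 0, 8000)
    pool.flush()  # roughly analogous to persisting a cache line
    # Read the value back through the same byte-addressable view.
    (value,) = struct.unpack_from("<q", pool, 0)
    pool.close()

os.remove(PATH)
print(value)  # 8000
```

The point of the model is that the data survives a process restart without any explicit save step, which is why new programming tools are needed: conventional software assumes memory is volatile and storage is slow.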
During the design phase of the prototype, simulations predicted this architecture would outperform current computing by multiple orders of magnitude. The company said it has run its new software programming tools on existing products, demonstrating execution speeds improved by up to 8,000 times on a variety of workloads.
In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing. Exascale is a developing area of High Performance Computing (HPC) that aims to create computers several orders of magnitude more powerful than any system currently online.
HPE claims its Memory-Driven Computing architecture is scalable, from tiny IoT devices to the exascale, making it an ideal foundation for a wide range of emerging high-performance compute and data-intensive workloads, including big-data analytics.
Some of the questions we’ll pose on Sept. 13 are:
How viable is in-memory computing for the average data center?
Is it just for the big players, or will any business be able to use it?
Will cloud be the most optimal way to access in-memory computing?
What are the drawbacks to in-memory computing?
Join us Sept. 13 at 11 a.m. Pacific/2 p.m. Eastern/7 p.m. GMT for an hour. Chances are good that you'll learn something valuable.