Storage Class Memory Paradigm Still Elusive, but NVMe Is Here to Stay

NEWS ANALYSIS: Despite all the talk surrounding Storage Class Memory, NVMe-based storage systems are supplying enough speed and throughput to handle the most demanding compute jobs.

Flash Memory Summit

SANTA CLARA, Calif.—There is a lot of agreement over the latest storage technologies that will drive adoption of flash in the enterprise. At the Flash Memory Summit here this week, every major vendor made an announcement based around non-volatile memory express (NVMe) or NVMe over Fabrics (NVMeF).

These protocols will enable flash storage arrays to run faster both locally and across networks, and help serve new applications that will produce real-time big data analytics.

But there's far less agreement on what the next big thing will be. What was being talked about, as has been the case for the past few years, is a hybrid memory-and-storage paradigm known as Storage Class Memory (SCM) or Memory-Centric Computing.

These next-generation technologies promise to be denser than current DRAM, and faster, more durable and potentially cheaper on a per-gigabyte basis than NAND flash.

There are a few contenders, most notably Intel's 3D XPoint (pronounced "cross point"), which was announced two years ago. The first XPoint products came out this year in the form of Intel's Optane solid-state drives (SSDs).

The Holy Grail

Storage Class Memory has become a bit of a Holy Grail for storage and systems gurus, who envision a new computing paradigm in which data lives in fast, DRAM-like storage, and memory—rather than the CPU and compute functions—becomes the center of the computer, according to Western Digital Chief Technology Officer Martin Fink.

"We want to reduce the increase in data flows in all types of data centers, and get the data to a place and leave it there and then compute and process where it is," Fink said during his keynote here.

It's interesting to see how much Fink's new job at Western Digital resembles his old role at Hewlett Packard Enterprise, where he was CTO and director of HP Labs. It was at HP Labs that he led HPE's mythical "Machine" initiative, a "memory-centric computing" system designed around a special kind of memory called memristors and fast photonic interconnects.

Memristor technology, also commonly referred to as resistive random access memory (ReRAM), is indeed one of the leading candidates for SCM, along with XPoint. But ReRAM still faces technical and economic barriers to mass production and use.

Meanwhile, HPE carries on with its Machine project, but in a much different form than originally conceived back in 2014. In November, HPE said it had "demonstrated Memory-Driven Computing," which the company developed as part of The Machine research project.

But at his Flash Memory Summit keynote, Siamak Nazari, HPE fellow and 3PAR chief architect, spoke at length about HPE's SCM vision without once mentioning The Machine. So regardless of whether vendors are talking about Storage Class Memory or Memory-Centric Computing, what "it" is remains unclear.

The New Paradigm

The next generation will require the industry to come together and agree on what, exactly, it is building. WD's Fink and other experts here said the new paradigm will evolve around use cases and workloads: task-specific, engineered machines built with real-time analytics in mind.

“We will support all types [of] big data wherever it lives and support the architecture that will enable that,” Fink said. “Our industry cannot be constrained by proprietary interfaces. It’s critical for the industry to innovate and accelerate forward.”

Until a new paradigm becomes more tangible, the new NVMe-based products coming out will do a lot toward enabling fast data processing at scale, especially those that support the new NVMeF specification. Coupled with software-defined storage controllers, NVMeF will enable users to pool flash storage drives and treat them as if they are one big local flash drive.

In servers, such functionality will give administrators more control over their systems. “Server storage is usually overprovisioned,” explained Bob Doud, senior director of marketing for Mellanox, which announced its BlueField system on a chip (SOC) reference design that integrates NVMeF along with a programmable ARM-based CPU. “Whereas, if you rack a box with a drive [array] and disaggregate compute using NVMeF, you can run your storage up to 80 percent capacity before you need to add another box for scale-out.”
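A rough back-of-the-envelope sketch makes Doud's point concrete. The drive size and the 40 percent local-utilization figure below are assumptions for illustration only; the 80 percent ceiling is the figure he cited for a disaggregated NVMeF pool.

```python
# Hypothetical capacity math for disaggregating server flash over NVMe-oF.
# DRIVE_TB and LOCAL_UTIL are illustrative assumptions, not figures from
# the article; POOLED_UTIL is the 80% ceiling quoted by Mellanox's Bob Doud.

SERVERS = 10
DRIVE_TB = 4.0        # assumed raw capacity of each locally attached drive
LOCAL_UTIL = 0.40     # assumed average utilization when each server overprovisions
POOLED_UTIL = 0.80    # utilization achievable on a shared NVMe-oF pool

data_tb = SERVERS * DRIVE_TB * LOCAL_UTIL   # data actually stored today
local_capacity = SERVERS * DRIVE_TB         # capacity bought, one drive per server
pooled_capacity = data_tb / POOLED_UTIL     # capacity needed if pooled at 80%

print(f"data stored:          {data_tb:.0f} TB")      # 16 TB
print(f"local provisioning:   {local_capacity:.0f} TB")  # 40 TB
print(f"pooled over NVMe-oF:  {pooled_capacity:.0f} TB") # 20 TB
```

With these assumed numbers, moving from per-server drives at 40 percent utilization to a shared pool run at 80 percent halves the flash capacity that has to be purchased before the next scale-out box is needed.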

It may not be nirvana, but today’s NVMe-based flash storage systems will supply enough speed and throughput to handle the most demanding compute jobs, including real-time big data analytics and artificial intelligence computations—good enough that SCM models may not be worth the wait.

Scot Petersen is a technology analyst at Ziff Brothers Investments, a private investment firm. He has an extensive background in the technology field. Prior to joining Ziff Brothers, Scot was the editorial director, Business Applications & Architecture, at TechTarget. Before that, he was the director, Editorial Operations, at Ziff Davis Enterprise. While at Ziff Davis Media, he was a writer and editor at eWEEK. No investment advice is offered in his blog. All duties are disclaimed. Scot works for a private investment firm, which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.
