Eight Reasons Why PCIe 4.0 is Important for Flash Storage

eWEEK DATA POINTS: Both flash storage and GPU manufacturers are impressed with the next generation of PCI Express (PCIe 4.0), soon to go live in data centers globally.

For a decade and a half, PCIe connectivity hardware has delivered improved computing performance despite a smaller footprint on motherboards. The PCIe serial bus has helped usher in new advancements in computing applications, from increased capacity drives for in-memory processing to larger-capacity scratch disks and graphics processing units (GPUs) for 3D video and graphics processing.

PCIe (Peripheral Component Interconnect Express) is an interface standard for connecting high-speed components. Every desktop PC motherboard has a number of PCIe slots that can be used to add GPUs (aka video cards or graphics cards), RAID cards, WiFi cards or SSD (solid-state drive) add-on cards.

PCIe-driven hardware continues to push the future of computing. With every new generation of PCIe connectivity, we see both an increase in transfer speeds and the number of available lanes for simultaneous data delivery—allowing for larger volumes of data to be transferred and used in short order.

This is why both flash storage and GPU manufacturers are impressed with the next generation of PCI Express (PCIe 4.0), soon to go live in data centers globally.

In this eWEEK Data Points article, Jeremy Werner, Senior VP and GM of the SSD business unit for KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.), outlines the reasons why the PCIe 4.0 interface is important for flash storage. The new specification enables devices (such as SSDs, GPUs and NICs) to deliver I/O twice as fast as Gen3, particularly benefiting data-intensive, computational and emerging applications.

Here are eight reasons that make this interface compelling for flash storage:

Data Point No. 1: Twice the performance

The latest PCIe 4.0 revision can move data at approximately 2 gigabytes per second (GB/s) per lane (versus 1 GB/s per lane with PCIe Gen3), doubling performance and delivering 4-lane bandwidth of nearly 8 GB/s. Twice the performance per lane can address demanding workloads such as machine learning (ML), NoSQL databases, and containerized or virtualized cloud computing. It can also reduce lane congestion, simplifying systems and reducing cost and power consumption. In general, users get more performance out of a server and consume less energy for a given amount of work.
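
The per-lane figures above can be derived from the published link rates. A minimal sketch, assuming the standard parameters of the two generations (8 GT/s for Gen3, 16 GT/s for Gen4, both with 128b/130b encoding):

```python
# Sketch: deriving the per-lane and 4-lane bandwidth figures quoted above.
# PCIe Gen3 and Gen4 both use 128b/130b line encoding; raw signaling rates
# are 8 GT/s (Gen3) and 16 GT/s (Gen4) per lane.

def lane_bandwidth_gbps(raw_gt_per_s: float) -> float:
    """Usable bandwidth per lane in GB/s, before protocol overhead."""
    return raw_gt_per_s * (128 / 130) / 8  # encoding efficiency, bits -> bytes

gen3 = lane_bandwidth_gbps(8)   # ~0.985 GB/s per lane
gen4 = lane_bandwidth_gbps(16)  # ~1.969 GB/s per lane

print(f"Gen3 x1: {gen3:.3f} GB/s, Gen4 x1: {gen4:.3f} GB/s")
print(f"Gen4 x4: {gen4 * 4:.2f} GB/s")  # ~7.88 GB/s, i.e. 'nearly 8 GB/s'
```

Real-world throughput is somewhat lower once packet headers and flow control are accounted for, which is why marketing figures round to "about 2 GB/s per lane."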

Data Point No. 2: Choices in SSD class

PCIe 4.0 SSDs are driven by the Non-Volatile Memory Express (NVMe) specification, which enables tiers of performance covering enterprise, data center and client devices. Enterprise and data center NVMe SSDs reside in servers and storage systems. Enterprise-class SSDs are designed to run 24/7 for five years without downtime, provide the highest quality and reliability, and deliver the highest performance of any SSD class. Data center NVMe SSDs are a strong replacement for SATA SSDs and are designed for scale-out and hyperscale environments where read performance, quality of service (QoS) and power efficiency are the key metrics. Client NVMe SSDs are typically smaller and lower in capacity, and are designed for consumer devices ranging from notebook computers to VR headsets, delivering fast transfer rates, high resistance to shock and vibration, and long battery life at the lowest cost.

Data Point No. 3: Driven by separate standards development consortia

The PCIe 4.0 specification is developed by the PCI-SIG standards group. SSDs based on the PCIe 4.0 interface comply with a set of instructions and commands known as the NVMe protocol, developed by NVM Express, Inc. As separate development efforts, each evolves independently, enabling focused and innovative capabilities that will propel PCIe NVMe SSDs forward. The PCIe specification focuses on the physical interface and driving bandwidth improvements, while the NVMe standards focus on the command set and management.

Data Point No. 4: Support for U.3 tri-mode infrastructures

Some companies are launching PCIe 4.0 NVMe SSDs with the emerging SFF-TA-1001-compliant interface (also known as U.3) to connect with tri-mode backplane infrastructures that combine SAS, SATA and PCIe interfaces into one backplane, managed by an SFF-TA-1005 Universal Backplane Management (UBM)-compliant system. SAS/SATA SSDs and HDDs, and PCIe 4.0 NVMe SSDs, can be mixed and matched within a UBM-enabled backplane, which generally supports 2.5-inch U.3-compliant drives.

The ability to add, replace or interchange SSDs within one universal tri-mode backplane configuration increases customer flexibility and simplifies storage deployments, while providing a migration path between SATA, SAS and PCIe/NVMe storage media, all while protecting the initial storage investment. U.3-compliant SSDs generally maintain backwards-compatibility with 2.5-inch U.2 legacy NVMe-only slots.

Data Point No. 5: Catalyst for disaggregated NVMe-oF deployments

The ability to disaggregate and pool computing, storage and network resources independently, and to provision the right amount of each for every application workload, is quickly becoming standard practice for both private and public cloud data centers. This requires moving from a direct-attached storage architecture to a disaggregated shared storage model, where NVMe over Fabrics (NVMe-oF) is rapidly becoming the network protocol of choice. NVMe-oF shared storage is convenient and efficient, and delivers the performance customers are used to. PCIe 4.0 SSDs can be pooled and accessed at low latencies for host sharing, further enabling disaggregation in cloud architectures.

Data Point No. 6: Support for ‘DRAM-less’ client applications

Some client NVMe SSDs now utilize the host memory buffer (HMB) feature to maintain high performance without integrated DRAM (dynamic random-access memory). Instead, HMB uses a portion of host memory to manage the SSD's flash memory, delivering performance similar to that of SSDs with onboard DRAM. This DRAM-less design yields cost and power savings in very thin client form factors. PCIe 4.0 NVMe SSDs that utilize the HMB feature can access host DRAM even faster, and may use fewer PCIe lanes to deliver the required performance for laptops, tablets and other mobile devices. The end result is a better mobile user experience, longer battery life and lower cost compared to previous client NVMe SSD generations.

Data Point No. 7: The future of PCIe 4.0 SSDs begins today

This year, enterprise and data center NVMe SSDs are projected to overtake SATA and SAS SSDs as the majority of drives deployed in data centers worldwide. The combined segment (in units) is expected to grow from 42.8% in 2019 to 75% by the end of 2021 and 91% by the end of 2023, with PCIe 4.0 SSDs leading the way. On the client side, NVMe SSDs reached a 53% majority in 2019, and growing, versus a shrinking 47% share for SATA SSDs. It is now clear that most companies will be deploying PCIe 4.0 NVMe SSDs in their next data center build-out. These SSDs will also be offered with more capacity, power, security and form factor options than any other SSD interface, making them an easy fit for diverse storage needs, now and in the future.

Data Point No. 8: Product availability in 2020

PCIe 4.0 adoption has begun with client SSDs in the DIY gamer desktop segment. Mass adoption will occur in late 2020 and 2021 as next-generation enterprise and data center servers ship with PCIe 4.0 network interface cards (NICs), host bus adapters (HBAs) and SSDs from leading vendors. Workstations and next-generation high-performance gaming systems and consoles will follow, then storage systems (such as all-flash arrays), and notebook and desktop PCs.

If you have a suggestion for an eWEEK Data Points article, email [email protected].