This blog is part 2 of a four-part series that explains how Nonvolatile Memory Express (NVMe), NVMe over Fabrics (NVMe-oF), and new storage-class memory (SCM) are changing the game for data centers. For a deeper dive, download the white paper New Frontiers in Solid-State Storage.


The previous blog outlined the storage trends in media types, transports, and applications. Now we dive a little deeper into the transport technologies. We start with NVMe, a standard that describes a protocol for fast access to storage devices over some transport, such as the ubiquitous Peripheral Component Interconnect Express (PCIe). For simplicity, many people say NVMe instead of NVMe over PCIe when talking about device interfaces, but we will make a distinction as needed.


Every storage protocol runs over a transport, the "pipe" that carries it. In that sense, NVMe over PCIe replaces the SCSI protocol over SAS (or other storage interconnects), offering huge improvements in throughput, input/output operations per second (IOPS), and latency. In the consumer space, NVMe is already being used (for example, in laptops, tablets, and cell phones) to provide faster direct-attached access.


Inside NVMe

To understand why NVMe gets so much press, we need to go back to when storage was based solely on rotational media. That's what SCSI was designed for, and it has served us well for many years. But storage has changed a lot, and parts of a protocol built around the requirements of hard disk drives (HDDs) are no longer sufficient. Today, even over the fastest 12Gbps SAS interfaces, the SCSI software stack can bottleneck modern solid-state drives (SSDs), because it offers only a single command queue with limited depth. The SAS interface also uses a relatively inefficient circuit-switched protocol that increases latency as the system scales. As SSDs became faster, the bottleneck moved from the storage devices to the CPU, so CPU efficiency became more important.


This change motivated the standards work (in which NetApp was an active, leading member) to create a new scalable protocol designed specifically for fast solid-state media and modern multicore CPUs. Enter NVMe, with streamlined submission/completion mechanisms, a massive number of I/O queues per device (up to 64K queues, each up to 64K commands deep), and far greater queue depths than SCSI allows. This approach yields better throughput, lower CPU overhead, and considerably lower I/O latency. NVMe typically runs over PCIe, the bus that comes directly off modern CPUs, so an SSD connects to the processor over multiple PCIe lanes, each currently delivering about 1GBps of bandwidth. Other NVMe characteristics include end-to-end data protection, priority mechanisms, power management controls, and more.
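
To make the submission/completion model concrete, here is a minimal sketch of an NVMe-style queue pair in C. It is an illustration only, not the spec's layout: the field choices, sizes, and the submit_read() helper are simplified assumptions, and a real driver would map the queues and doorbell registers over PCIe.

```c
/* Minimal sketch of an NVMe-style submission/completion queue pair.
 * Illustration only: real NVMe uses 64-byte submission entries and
 * 16-byte completion entries; these structs are trimmed-down stand-ins. */
#include <stdint.h>
#include <string.h>

#define QUEUE_DEPTH 1024            /* the spec allows up to 64K entries */

struct nvme_sqe {                   /* simplified submission queue entry */
    uint8_t  opcode;                /* 0x02 = read, 0x01 = write         */
    uint16_t command_id;            /* echoed back in the completion     */
    uint64_t lba;                   /* starting logical block address    */
    uint16_t num_blocks;
    uint64_t data_addr;             /* host memory buffer (simplified)   */
};

struct nvme_cqe {                   /* simplified completion queue entry */
    uint16_t command_id;
    uint16_t status;
};

struct queue_pair {                 /* one pair per CPU core: no locking */
    struct nvme_sqe sq[QUEUE_DEPTH];
    struct nvme_cqe cq[QUEUE_DEPTH];
    uint16_t sq_tail;               /* host produces commands here       */
    uint16_t cq_head;               /* host consumes completions here    */
    volatile uint32_t *sq_doorbell; /* device register: "new work ready" */
};

/* Submit a read: fill the next submission slot, then ring the doorbell.
 * The doorbell write is a single MMIO store telling the SSD where the
 * tail now is; the device later posts a matching entry to the CQ. */
static void submit_read(struct queue_pair *qp, uint64_t lba,
                        uint16_t blocks, uint64_t buf, uint16_t cid)
{
    struct nvme_sqe *sqe = &qp->sq[qp->sq_tail];
    memset(sqe, 0, sizeof(*sqe));
    sqe->opcode     = 0x02;
    sqe->command_id = cid;
    sqe->lba        = lba;
    sqe->num_blocks = blocks;
    sqe->data_addr  = buf;

    qp->sq_tail = (qp->sq_tail + 1) % QUEUE_DEPTH;
    *qp->sq_doorbell = qp->sq_tail;
}
```

The design point to notice is that each CPU core can own its own queue pair, so cores submit I/O in parallel without a shared lock, which is exactly the serialization that SCSI's single command queue imposed.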


What to Expect from NVMe

For NVMe devices, three popular form factors are add-in card (AIC), M.2, and U.2 (a 2.5″ SSD form factor). NetApp has been shipping AIC- and M.2-based back-end NVMe devices in some of its FAS controllers for caching with NetApp® Flash Cache™ intelligent caching. Indeed, a recent count indicates that over 6PB of NVMe capacity has already shipped. Of course, NetApp will roll out other form factors as they become feasible for enterprise storage. As new solutions take advantage of NVMe, they'll allow the storage OS to connect efficiently to SSDs at massive scale, enabling more flexible, data-centric cluster architectures. Capabilities that used to be reserved for the most advanced high-performance computing clusters will become possible in any enterprise data center.


NVMe over Fabrics

The second context for NVMe is as a front-end protocol connecting servers to storage systems. NVMe-oF takes many of the relevant NVMe capabilities and extends them over a remote direct memory access (RDMA) or Fibre Channel fabric. It provides a host-side interface into storage systems and the ability to scale out to large numbers of NVMe devices, even over distances.
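
Conceptually, NVMe-oF keeps the NVMe command intact and changes only how it travels: instead of a local PCIe ring and doorbell, the command crosses the fabric inside a "command capsule." The C sketch below illustrates the idea under simplifying assumptions; the field layout and the transport_send() stand-in are hypothetical, not the spec's wire format.

```c
/* Rough sketch of the NVMe-oF idea: the same 64-byte NVMe command is
 * carried in a "command capsule" over an RDMA or FC-NVMe connection
 * rather than a local PCIe ring. Layout simplified for illustration. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct nvmeof_capsule {
    uint8_t  sqe[64];    /* the unmodified 64-byte NVMe command         */
    uint32_t data_len;   /* 0 when the payload moves by RDMA read/write */
    uint8_t  data[];     /* optional in-capsule data for small writes   */
};

/* transport_send() is a hypothetical stand-in for the fabric layer,
 * e.g., an RDMA send verb or an FC-NVMe information unit. */
int transport_send(const void *buf, size_t len);

/* Wrap a command (and optionally a small payload) in a capsule and
 * hand it to the fabric; the target's response capsule plays the role
 * that the PCIe completion queue entry plays locally. */
static int send_command(const uint8_t sqe[64],
                        const void *payload, uint32_t payload_len)
{
    struct nvmeof_capsule *cap = malloc(sizeof(*cap) + payload_len);
    if (!cap)
        return -1;

    memcpy(cap->sqe, sqe, 64);
    cap->data_len = payload_len;
    if (payload_len)
        memcpy(cap->data, payload, payload_len);

    int rc = transport_send(cap, sizeof(*cap) + payload_len);
    free(cap);
    return rc;
}
```

Because queue pairs map onto fabric connections, the parallel, lock-free submission model that NVMe provides locally carries over to remote storage.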


NVMe-oF supports Fibre Channel fabrics (FC-NVMe) and RDMA fabrics (InfiniBand, RoCE, and iWARP) at speeds up to 100Gbps today, and standards are in the works for 400Gbps and 800Gbps. These high-throughput, low-latency fabrics bring large-scale persistent storage effectively closer to the CPUs while reaping all the benefits of shared network storage and the associated data management.


As datasets for real-time analytics grow in size, the big question for enterprise data centers is when they can get better speeds and lower latencies at scale. The answer is very soon, thanks to NVMe-oF. That's why we believe that, at least in the short and medium term, NVMe-oF will be a much bigger deal than NVMe drives alone.


With NVMe-oF, systems can access remote storage devices at incredibly fast speeds across the storage fabric. This ability significantly reduces the performance gap between local and remote storage access. In a 100GbE fabric with NVMe-oF, for example, the entire storage fabric adds just a few microseconds of latency.
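
A rough back-of-the-envelope calculation shows why (assuming a 4KiB I/O and ignoring protocol overhead): the wire time alone at 100Gbps is

4,096 bytes × 8 = 32,768 bits
32,768 bits ÷ 100Gbps ≈ 0.33µs

Even after adding RDMA processing and a switch traversal or two, the fabric's contribution stays in the single-digit microsecond range, small next to the tens of microseconds a NAND read itself takes.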


What to Expect from NVMe-oF

You can expect significant, tangible cost savings in your data center infrastructure from NVMe-oF, because it drives down CPU overhead in the I/O stack and allows external storage arrays to connect much more efficiently. By offloading some of the network processing to hardware in these fabrics, we can reduce CPU usage further. Current applications such as business intelligence, analytics, and data warehousing will run much more efficiently, allowing for better utilization of both server compute and storage resources. Moreover, emerging applications with more stringent service-level objectives will now be feasible in enterprise data centers.


Looking Forward

In the future, you’ll be able to use front-end NVMe-oF in NetApp all-flash arrays to gain higher throughput at lower latencies while using less CPU at the server and in the storage system. We’re moving quickly with NVMe and especially NVMe-oF innovations to deliver immediate improvements to customers in their real-world storage environments. At the same time, we are always aware that our customers still demand enterprise-caliber reliability and no-compromise data management, even with the latest technologies. Striking that balance is what makes NetApp a trusted partner to so many of our customers and keeps ONTAP® the world’s #1 open networked branded storage OS,* with a 25-year track record of industry innovation delivered with enterprise quality.


Visit our booth at Flash Memory Summit, August 8–10, 2017, and attend our keynote, “Creating the Fabric of a New Generation of Enterprise Apps,” on Thursday, August 10, 11:30 a.m. to noon.


More Information

Explore the implications of these innovations in the other three blog posts in this series.

*Source: IDC, Worldwide Quarterly Enterprise Storage Systems Tracker – 2016Q4, March 2, 2017.

Ravi Kavuri

Ravi Kavuri is the VP of Engineering for ONTAP at NetApp, leading the team responsible for ONTAP data management, including the file system, high-availability, RAID, and performance teams. Ravi joined NetApp in 2007 as the CTO for an emerging products group. Before joining NetApp, Ravi was a Distinguished Engineer at Sun Microsystems, working on scalable storage systems and long-term archival storage. He joined Sun Microsystems as part of the acquisition of StorageTek, where he was a StorageTek Fellow working on file systems, long-term data retention, and storage networking. He holds a BS in Electronics and Telecommunications and an MS in Software Engineering.
