
Data storage technologies have advanced quickly over the past few decades, yet the pace of innovation today seems faster than ever.


In the market for external storage systems, hard disks ruled for a long, long time, but once flash came along it quickly took over. First came the hybrid systems (hard disks and flash together), then the all-flash systems, but both were constrained by the underlying storage transfer protocols. With the arrival of NVMe over Fibre Channel (NVMe/FC), that constraint has just been removed.


To understand this technology shift, and why it’s so important to IT leaders, keep in mind that both of today’s main networking protocols—Fibre Channel and Ethernet—use the small computer system interface (SCSI) command set for the storage protocol. SCSI was developed for mechanical storage media (and a variety of other devices) in the 1970s, and although it can handle the data flow to and from disk drives, it can’t keep pace with flash storage.


In particular, early hard disk drives suffered from substantial individual device latency, which was one of the prime motivations for the development of RAID controllers. And indeed, when the access time of your devices is measured in milliseconds, the latency of the network is fairly inconsequential. But as you decrease the latency of the end points the network becomes more visible. Plus, the latency of the software stack also becomes critical.
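That shift in the latency budget is easy to see with rough numbers. The sketch below uses illustrative, order-of-magnitude figures (not measurements) to show how the network's share of total I/O latency grows once the device itself gets fast:

```python
# Illustrative latency budget: device + network + software stack.
# All figures are rough assumptions for illustration, not measurements.

def network_share(device_us, network_us, stack_us):
    """Fraction of total I/O latency spent in the network."""
    total = device_us + network_us + stack_us
    return network_us / total

# Spinning disk: ~5 ms of seek/rotate dominates everything else.
hdd = network_share(device_us=5000, network_us=20, stack_us=30)

# Flash SSD: ~100 us device latency; the same network is now visible.
ssd = network_share(device_us=100, network_us=20, stack_us=30)

print(f"Network share of latency, HDD: {hdd:.1%}")  # well under 1%
print(f"Network share of latency, SSD: {ssd:.1%}")  # roughly 13%
```

With a mechanical drive, the fabric is noise; with flash, it is a double-digit slice of every I/O, and the software stack is a comparable slice right behind it.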


The result is that the speed gains possible from SSDs are now approaching a limit, because of bottlenecks in other parts of the data pathway. That’s why, for the first time in more than 30 years, the data storage industry is moving to a new, much faster standard for communicating between a host server and external storage systems – NVMe over Fibre Channel. This leverages the incredible performance gains in Fibre Channel, and the central role of the data fabric.


Also, it’s important to distinguish NVMe/FC from simply attaching NVMe devices on the back end of an existing SCSI controller. Sure, faster devices will yield some performance gains. But this is like putting a race car engine in the family sedan: it will be a faster family car, but it won’t be a race car.

When Should You Transition to NVMe/FC?

The transition to NVMe/FC won’t occur overnight, as large organizations have infrastructure in play that will be running for some time. Also, experienced IT pros know that it will take time for their current apps to be updated to take advantage of NVMe technology. In particular, one of the big advantages of NVMe is that it supports 64K command queues, each with a queue depth of 64K commands, compared with iSCSI, which has a single command queue of up to 64 commands.
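The difference those queue limits make can be put in concrete terms with Little’s Law (throughput = outstanding commands ÷ latency). The numbers below are illustrative assumptions, not benchmarks:

```python
# Theoretical IOPS ceiling from outstanding commands, via Little's Law:
# throughput = concurrency / latency. Illustrative assumptions only.

def max_iops(outstanding_cmds, latency_s):
    return outstanding_cmds / latency_s

LATENCY = 100e-6  # assume 100 microseconds per I/O

# SCSI-era transport: one queue, up to ~64 outstanding commands.
scsi_iops = max_iops(64, LATENCY)

# NVMe: up to 64K queues x 64K commands each. Hosts use far fewer
# in practice, but even one deep queue moves the ceiling enormously.
nvme_one_queue = max_iops(64 * 1024, LATENCY)

print(f"SCSI ceiling:       {scsi_iops:,.0f} IOPS")
print(f"NVMe, single queue: {nvme_one_queue:,.0f} IOPS")
```

No real system hits these ceilings, but the gap shows why software rewritten for NVMe-style parallelism has so much headroom to exploit.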


Software developers will need to modify their code to leverage these new capabilities, which will happen in parallel with the build-out of the NVMe infrastructure. It’s similar to the success of the streaming media market, which needed the underlying infrastructure to first provide the performance and bandwidth. Once that was in place the applications and market morphed into things that could provide that streaming service and generate a booming revenue market.


In my experience, the software people will always figure out how to consume all of the performance exposed to them in hardware. But that’s because software is the competitive edge. Bigger, better and deeper app engagement with customers is the best long-term strategy, and that’s what NVMe is going to enable.

NetApp and NVMe/FC

The recent NetApp® announcement of ONTAP® 9.4, and the simultaneous availability of a release candidate, marks the beginning of the trend toward NVMe/FC. For the AFF A300, A700, and A700s, this software update allows the front end of the all-flash array to support the NVMe software stack. This means that servers with Gen 6 HBAs and NVMe driver support can speak native NVMe to the array.


NetApp also announced that the A800 will ship with “end to end NVMe support.” This means that not only will there be NVMe devices inside the array, but they will no longer be translated over SAS to a SCSI controller. And because this runs over NVMe over Fibre Channel, it can coexist without disruption with the SCSI over Fibre Channel traffic already running in NetApp customer environments on Gen 5 and Gen 6 Fibre Channel switch infrastructure. Application migration can be handled easily without touching a cable, as was demonstrated at the NetApp Insight conferences in both Las Vegas and Berlin last year. This is massive.


Brocade and NetApp worked with Demartek, an independent analyst organization, to test and validate NVMe/FC performance. You can download the full test report here.

NVMe/FC Provides a Foundation for the Future

The advent of this technology provides the foundation for an entirely new generation of performance applications. The scope of what those applications will bring to customers is going to continue to expand over time. But imagine an application being able to decompose a large model and stream it across parallel queues to, say, an NVIDIA GPU with many cores. Parallelizing the analysis can drastically shorten the time to completion. Financial analysis, pharmaceutical modeling, and virtual reality entertainment, to name a few.
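As a toy illustration of that decompose-and-stream pattern, the sketch below splits a dataset into chunks and farms them out to parallel workers. Plain Python threads stand in for hardware queues and GPU cores; the function names are hypothetical and the “analysis” is a placeholder:

```python
# Toy sketch: decompose a large dataset into chunks and process them
# in parallel -- a stand-in for streaming chunks over parallel NVMe
# queues to many GPU cores. Names and workload are illustrative.
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk):
    # Placeholder "analysis": sum the chunk. A real workload would
    # hand each chunk to a GPU kernel or analytics routine.
    return sum(chunk)

def parallel_analyze(data, n_chunks=8):
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(analyze_chunk, chunks))

print(parallel_analyze(list(range(1_000_000))))  # same result as a serial pass
```

The answer is identical to a serial pass; what changes is that each chunk can be in flight at once, which is exactly the behavior deep, parallel NVMe queues let storage feed.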


As with any house, you can build an amazing structure given an excellent foundation. The same is true of IT. I believe that IT has become the new utility. It is the power company, the water utility. It must not fail, because so much of our lives runs through it. Whether it’s work, finance, health care, or entertainment, what part of our daily lives is not touched by it? And the scope of the application base continues to expand.


As somebody who spends way too much time traveling for work, I am certainly guilty of becoming more and more enamored of what the application base is bringing us and of the ability to manage aspects of my life through these applications. And with the NVMe over Fibre Channel announcement for ONTAP 9.4 and the forward view to the A800, it seems clear that NetApp is setting the foundation for the next generation of application development. I confess I find it hard to wait for what the future is going to bring.

AJ Casamento

As a 39-year veteran of the IT industry, AJ has worked in a wide variety of market segments. He started his career with Digital Equipment Corporation in various engineering and product development groups, and worked for Hewlett Packard, SUN Microsystems, IBM, and Avnet before coming to Brocade. In more than 20 years as a Solutioneer at Brocade, AJ has spent a great deal of time helping customers understand various technologies and their implications. Spending most of the year on the road, AJ works with Brocade customers and partners on architectural decisions and implementations that affect their businesses.
