The NetApp® EF570 all-flash array and E5700 hybrid array introduced in September are the first NVMe-oF (NVMe over Fabrics) enterprise-class systems on the market. Both systems support an optional add-on InfiniBand™ host interface card that can run NVMe-oF at up to EDR (Enhanced Data Rate) 100Gb/s, and both were recently demonstrated at NetApp Insight in Las Vegas and at the Splunk Conference in Washington, DC. If you're going to Insight Berlin the week of November 13, there are classes and demos where you can learn more about NVMe.


NVMe has become the industry-standard interface for PCIe SSDs, with a streamlined protocol and command set that requires fewer clock cycles per I/O. NVMe supports up to 64K queues and up to 64K commands per queue, which makes it far more efficient than legacy interfaces such as SAS and SATA.
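To make the queue model concrete, here is a toy Python sketch of NVMe's paired submission/completion queues. This is illustrative only, not E-Series or driver code; the class and method names are invented, and the queue count and depth are scaled way down from the 64K x 64K limits the spec allows.

```python
from collections import deque

class NvmeQueuePair:
    """Toy model of one NVMe submission/completion queue pair."""

    def __init__(self, depth):
        self.depth = depth            # max outstanding commands (spec allows up to 64K)
        self.submission = deque()     # host posts commands here
        self.completion = deque()     # device posts completions here

    def submit(self, cmd):
        if len(self.submission) >= self.depth:
            raise RuntimeError("submission queue full")
        self.submission.append(cmd)

    def process(self):
        # The device consumes submission entries and posts a completion for each.
        while self.submission:
            self.completion.append(("done", self.submission.popleft()))

# One queue pair per CPU core: each core submits I/O without taking a shared
# lock, unlike the single request queue of legacy SCSI stacks.
queues = [NvmeQueuePair(depth=4) for _ in range(4)]
for core, qp in enumerate(queues):
    qp.submit(f"read-core{core}")
for qp in queues:
    qp.process()
```

The point of the sketch is the shape, not the numbers: because every core owns its own queue pair, commands never funnel through one contended queue, which is where much of NVMe's efficiency over SAS and SATA comes from.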


Although most implementations on the market today focus on adding NVMe drives to the back end of the storage system while keeping the host-facing front end SCSI based, the NetApp EF570 and E5700 focus on the front end, with an NVMe over InfiniBand host interface card. Why? NVMe-oF reduces latency between the host server and the storage system even further. This is critical for latency-sensitive data analytics workloads, where data access response times directly affect the business.


The optional 4-port 100Gb NVMe over InfiniBand host interface card connects hosts to the front end of the EF570 and E5700, while the back end remains SCSI based with SAS drives (see the following figure).

Why is the NetApp E/EF-Series implementation done over InfiniBand? There are several good reasons:

  • InfiniBand has RDMA (Remote Direct Memory Access) built in, which results in extremely low latencies.
  • Like NVMe, RDMA communication is based on queuing mechanisms, making it a natural fit for extending NVMe over a fabric with little added latency.
  • The E/EF-Series has a long history of supporting other SCSI-based protocols over InfiniBand.
  • The same hardware on the EF570 and E5700 that runs iSER (iSCSI Extensions for RDMA) or SRP (SCSI RDMA Protocol) can run NVMe-oF (but not at the same time).
  • All three protocols can coexist on the same fabric and even on the same InfiniBand host channel adapter (HCA) port on the host side.
  • All the InfiniBand components in the fabric (NetApp EF570, E5700, switches, HCAs) can negotiate their speed down as needed to connect to legacy components.
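The queue-to-queue fit mentioned above can be sketched in a few lines of toy Python: NVMe command capsules ride on an RDMA queue pair's send queue, and the target drains that queue and answers with response capsules. Everything here is a hypothetical simplification for illustration; real NVMe-oF transports use RDMA verbs, not Python objects.

```python
from collections import deque

class RdmaQueuePair:
    """Toy model of an RDMA queue pair carrying NVMe-oF capsules."""

    def __init__(self):
        self.send_q = deque()  # host posts NVMe command capsules here

    def post_send(self, capsule):
        self.send_q.append(capsule)

class ToyTarget:
    """Stand-in for the storage target; drains capsules, returns responses."""

    def execute(self, qp):
        responses = []
        while qp.send_q:
            cmd = qp.send_q.popleft()
            # Each command capsule gets a matching response capsule,
            # correlated by command identifier (cid); status 0 = success.
            responses.append({"cid": cmd["cid"], "status": 0})
        return responses

qp = RdmaQueuePair()
qp.post_send({"cid": 1, "op": "read"})
qp.post_send({"cid": 2, "op": "write"})
done = ToyTarget().execute(qp)
```

Because both sides already think in terms of posted queues of work, extending NVMe's queue pairs across an RDMA fabric adds very little protocol translation, which is why the latency overhead stays low.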

If you’re looking for the storage industry’s all-flash price/performance leader, a 2U array that can deliver up to 21GBps of bandwidth, 1 million sustained IOPS, sub-100-microsecond latency, and support for 100Gb NVMe over InfiniBand, the EF570 is the solution for you.

Mike McNamara

Mike McNamara is a senior leader of product and solution marketing at NetApp with 25 years of data management and data storage marketing experience. Before joining NetApp over 10 years ago, Mike worked at Adaptec, EMC and HP. Mike was a key team leader driving the launch of the industry’s first cloud-connected AI/ML solution (NetApp), unified scale-out and hybrid cloud storage system and software (NetApp), iSCSI and SAS storage system and software (Adaptec), and Fibre Channel storage system (EMC CLARiiON). In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he is a member of the Ethernet Technology Summit Conference Advisory Board, a member of the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events. Mike also published a book through FriesenPress titled "Scale-Out Storage - The Next Frontier in Enterprise Data Management", and was listed as a top 50 B2B product marketer to watch by Kapos.
