Enterprise IT architects often rely on SANs (storage area networks) to run their most business-critical workloads like Oracle, Microsoft SQL Server, and SAP HANA. Their success is measured against the rigorous service level agreements (SLAs) they’ve promised their line-of-business customers.


To reliably build out new enterprise data storage systems, seasoned IT pros have learned to use specific industry benchmarks to guide their purchasing decisions: benchmarks they know reflect real-world workloads and operating environments. In this business, it isn’t acceptable to tell an IT leader that “your mileage may vary”. Instead, we have to ensure that for specific common data center configurations, any variance is small and predictable.


It’s like shopping for a car now that vendor-neutral third-party websites such as Edmunds.com have emerged as trusted arbiters, serving up unbiased data comparing different makes and models. For buyers, it’s particularly valuable to know that all testing is done in accordance with a methodology that is rigorous, relevant, public, and reproducible.


That’s why storage architects are amazed by the new SPC-1™ results for the NetApp AFF A800, which achieved 2,401,171 SPC-1 IOPS™ with a 0.590 millisecond average response time. This makes the A800 the top-performing enterprise all-flash array among the industry’s top five storage providers, with the fastest overall response time.


Given that many enterprise storage buyers rely on SPC-1 results, it’s important to understand what they actually measure, and how. The Storage Performance Council (http://www.spcresults.org/) is a vendor-neutral standards body whose benchmarks are designed to be a good proxy for real-world operating conditions, rather than non-standard metrics that look good but don’t prove anything useful. Its SPC-1 benchmark uses a workload generator that tests a storage system with a combination of sequential and random reads and writes, mimicking the I/O behavior of latency-sensitive enterprise database and transaction processing workloads.
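In spirit, that workload generator’s job can be sketched in a few lines of Python. This is a toy illustration, not the actual SPC-1 tool, and the read/write and sequential/random ratios below are assumptions chosen for illustration, not SPC-1’s published mix.

```python
import random

def generate_io_requests(n_requests, capacity_blocks=1_000_000, seed=42):
    """Yield (operation, block_address) pairs mixing random and sequential I/O.

    Toy sketch of an SPC-1-style mixed workload; the ratios are illustrative
    assumptions, not the benchmark's actual specification.
    """
    rng = random.Random(seed)
    next_seq_block = 0          # cursor for the sequential streams
    requests = []
    for _ in range(n_requests):
        r = rng.random()
        if r < 0.40:            # assumed 40% random reads
            requests.append(("read", rng.randrange(capacity_blocks)))
        elif r < 0.70:          # assumed 30% random writes
            requests.append(("write", rng.randrange(capacity_blocks)))
        elif r < 0.85:          # assumed 15% sequential reads
            requests.append(("read", next_seq_block))
            next_seq_block = (next_seq_block + 1) % capacity_blocks
        else:                   # assumed 15% sequential writes
            requests.append(("write", next_seq_block))
            next_seq_block = (next_seq_block + 1) % capacity_blocks
    return requests

ops = generate_io_requests(10)
```

The point of such a mix is that neither a pure-sequential nor a pure-random stream resembles a real OLTP system, which interleaves both.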


The SPC-1 benchmark has changed as enterprise data storage has evolved from hard disks to solid state drives (SSDs), so it’s important to distinguish between earlier SPC-1 v1 results (created prior to January 2017) and the latest SPC-1 v3 results. For instance, v1 results were based on writing and reading the same data pattern over and over again, which works fine for hard drives but would produce totally unrealistic results with SSDs and deduplication (dedupe).
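To see why a repeated data pattern breaks down on deduplicating flash systems, consider a toy content-addressed store. This is a sketch for illustration only, not ONTAP’s actual dedupe implementation: every repeat of the same block collapses to a single physical copy, so a v1-style test would barely exercise the media.

```python
import hashlib

class DedupeStore:
    """Toy content-addressed block store (illustrative, not ONTAP's design)."""

    def __init__(self):
        self.blocks = {}          # fingerprint -> block contents
        self.logical_writes = 0

    def write(self, block: bytes):
        self.logical_writes += 1
        fp = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(fp, block)   # store physically only if unseen

    @property
    def physical_blocks(self):
        return len(self.blocks)

store = DedupeStore()
for _ in range(1000):
    store.write(b"same repeated pattern" * 200)   # v1-style identical writes
```

A thousand logical writes of the same pattern land on one physical block, so the measured write path bears little resemblance to a real workload’s.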


With flash drives, test results can vary significantly depending on whether data reduction features (compression and deduplication) are enabled. The newer SPC-1 v3 benchmark can be run either way, which provides buyers with more realistic data for today’s operating environments, at the expense of making it impossible to accurately compare v1 and v3 test results.


Similarly, the SPC also provides price-performance metrics, which previously were quoted as dollars per SPC-1 IOPS ($/SPC-1 IOPS). Starting with SPC-1 v3.6 (March 2018), this metric changed to dollars per thousand IOPS ($/1,000 IOPS, or KIOPS), to better reflect the millions of IOPS that today’s arrays deliver.
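The unit change is simple arithmetic. The figures below are hypothetical round numbers chosen for illustration, not the A800’s audited SPC-1 pricing:

```python
# Hypothetical price-performance arithmetic (illustrative numbers only,
# not the audited A800 SPC-1 submission).
total_system_price = 3_000_000.00    # assumed total tested-system price, USD
spc1_iops = 2_400_000                # assumed measured SPC-1 IOPS

# Pre-v3.6 style: dollars per single IOPS
price_per_iops = total_system_price / spc1_iops            # 1.25

# v3.6 style: dollars per thousand IOPS (KIOPS)
price_per_kiops = total_system_price / (spc1_iops / 1000)  # 1250.0
```

Quoting $1,250 per KIOPS rather than $1.25 per IOPS keeps the figure in a readable range now that top-end arrays deliver millions of IOPS.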


The SPC-1 tests of the NetApp AFF A800 were conducted with compression and dedupe enabled, just as they would be under real-world conditions. The AFF A800 is an end-to-end NVMe-based all-flash array that delivers high performance, density, scalability, security, and network connectivity in a 4U chassis. It combines low-latency NVMe SSDs with available NVMe over Fibre Channel (NVMe/FC) host connectivity, enabling extremely low latency and massive throughput in a scaled-out cluster configuration.


The test system was configured as a cluster of six high-availability pairs of A800 nodes (12 nodes total), with each pair comprising 1TB of cache, 8 Fibre Channel ports (32 Gbps), and 24 1.92TB NVMe SSDs. The arrays were connected via QLogic 16Gb host bus adapters and Brocade 16Gb FC switches. (This configuration did not include NVMe/FC network connectivity, which will be the subject of a future benchmark test.)


Many other aspects of the test protocol ensure its relevance to real-world data centers:

  • Each of the host systems included two 16Gb Fibre Channel connections to the network, and each storage node had four FC connections to the network.
  • Data protection for all logical volumes was set to RAID-DP, a double-parity implementation similar to RAID 6 that prevents data loss when up to two drives fail; RAID-DP is a standard feature of the ONTAP® operating system.
  • The test result’s supporting files and the parameter tuning options used during the test are fully documented in the report, which is submitted to the SPC; the SPC then audits the methodology and results before certifying them. The same configuration, methodology, and results are also publicly available, making it easy for decision-makers to compare apples to apples.
Figure 1 – SPC-1 benchmark response time (in milliseconds) and throughput (in IOPS) for the NetApp AFF A800.

As shown in Figure 1, the SPC-1 results confirm that the A800 combines massive throughput with astonishingly low latency. In fact, industry expert Chris Mellor of The Register reported that “NetApp’s system delivered the most performance (IOPS) at both the lowest response time and lowest $/GB than any of the other vendors in the top 10 results.”


Every enterprise today needs high-performance data storage systems to run its business-critical applications, but it takes substantial skill, lab time, and money to independently gather the metrics needed to accurately assess a system’s performance. That’s why third-party performance benchmarks, when properly designed and implemented, are so valuable to enterprises of all sizes.


The complete SPC-1 benchmark “full disclosure report” for the A800 is available for free download.

Mike Kieran

Mike Kieran is a Technical Marketing Engineer at NetApp, responsible for SAN and business applications. He has more than 20 years of experience as a technology strategist and communicator, focusing on the concerns of enterprise data center buyers. Mike’s work centers on creating messaging programs, go-to-market materials, and technical content. He writes and edits technical reports, white papers, presentations, customer stories, and blog posts.

Mike was previously at Nimble Storage, Gigya, Socialtext, and E-Color, where he managed teams responsible for creating outstanding customer satisfaction programs and compelling B2B tech marketing content. Mike studied physics and astronomy at the University of Toronto, and is the author of four books and 150+ magazine features, primarily on digital imaging technology. Many evenings you’ll find him in his woodshop, quietly building heirloom-quality hardwood toys and furniture.