[Image: 15.36 TB SSD, via Samsung Newsroom]

Sometimes it’s tough to discern the value of new technologies when they first enter the market. But thankfully, with the new high-capacity solid-state drives (SSDs), it’s pretty clear they can deliver immediate customer value. That makes it even more confusing that some vendors within the storage industry are expressing alarm and uncertainty about the new 15TB SSDs.

 

The case for SSDs has been compelling from the start, even for some of the more expensive ones. This is particularly true for space-constrained data centers, mixed SAN and NAS environments, and applications that require both low latency and high capacity. As SSDs of all sizes have continued to become more affordable, more and more types of customers can consider them. And many of these customers are managing workloads that are ideally suited to the performance profile of high-capacity SSDs, such as business-critical applications where latency spikes visible at the application layer must be avoided.

 

At NetApp, when we saw large-capacity SSDs coming, we moved to support them in our ONTAP operating system. We were the first to ship storage systems with this capability, but I hasten to add that we didn’t build in support for high-capacity SSDs simply to be the first to do so. We did it because we have customers who need it now, and many more who will need and want it very soon.

 

So how do you measure the impact of high-capacity drives on your business? Important factors include space efficiency, performance gains, and cost of ownership. In all three categories, the case for these SSDs keeps getting better. But to be thorough, let’s dig into each one.

 

From a space-efficiency standpoint, you can’t beat the new high-capacity all-flash arrays, which give you up to 321.3TB of raw storage in a single 2U form factor. That means a single 2U system using 15.3TB drives can provide more than 1PB (1,000TB) of effective capacity. To achieve the same with even the highest-density SFF hard disk drives would require 52U of rack space (a standard rack is 42U) and 18 times as much power. Along with the space reduction, the savings in power and cooling are huge. In data centers where every square foot or square meter counts, the gains here are considerable. So NetApp all-flash arrays that use high-capacity SSDs can now address customer use cases where this kind of relief has historically been impractical.
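To make the density arithmetic concrete, here is a minimal sketch using the figures quoted above. The data-reduction ratio and the HDD capacity per 2U shelf are not stated in this post, so the values below are assumptions chosen only to roughly reproduce the 1PB-in-2U and 52U comparisons.

```python
# Back-of-the-envelope math for the density figures quoted above.
# The data-reduction ratio and per-shelf HDD capacity are illustrative
# assumptions, chosen to land near the post's stated 1PB / 52U comparison.

RAW_SSD_TB_PER_2U = 321.3   # raw flash capacity quoted for a 2U system
DATA_REDUCTION = 3.2        # assumed dedupe/compression/compaction ratio
HDD_TB_PER_2U = 1.6 * 24    # assumed ~1.6TB usable per SFF HDD, 24 drives per 2U

effective_tb = RAW_SSD_TB_PER_2U * DATA_REDUCTION
print(f"Effective flash capacity in 2U: {effective_tb:,.0f} TB")            # ~1,028 TB

hdd_rack_units = 2 * effective_tb / HDD_TB_PER_2U
print(f"SFF HDD rack space for the same capacity: ~{hdd_rack_units:.0f}U")  # ~54U
```

The exact reduction ratio depends on the workload, but the 2U-versus-multiple-racks conclusion follows directly from this kind of calculation.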

 

When they first appeared, all-flash arrays were used to replace hard disk systems for workloads requiring high performance. Those hard drive systems delivered performance density of about 100 to 200 IOPS per terabyte, whereas NetApp all-flash arrays with high-capacity SSDs can deliver 1,650 IOPS per terabyte, roughly 8x to 16x that figure. Now, with significant drops in the cost per gigabyte of SSDs and further improvements in deduplication, compression, and compaction technologies, our all-flash arrays can replace traditional disk arrays even where performance isn’t the driving factor. So customers can get the performance and efficiency of all-flash arrays along with the cost-of-ownership advantages. Of course, for applications that require the highest performance, customers can still choose smaller SSDs.
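For readers who want to see how performance density is derived, here is a minimal sketch. The per-terabyte figures match the ones cited above, while the absolute system IOPS and capacities are hypothetical values chosen purely to illustrate the ratio.

```python
# Performance density = total system IOPS divided by usable capacity in TB.
# The absolute system numbers below are hypothetical; only the resulting
# IOPS-per-terabyte figures correspond to the values cited in the post.

def iops_per_tb(system_iops: float, capacity_tb: float) -> float:
    """Return performance density in IOPS per terabyte."""
    return system_iops / capacity_tb

# Hypothetical HDD array: 30,000 IOPS across 200TB of capacity.
print(iops_per_tb(30_000, 200))            # 150.0 -> in the 100-200 IOPS/TB range

# Hypothetical high-capacity-SSD array: 530,000 IOPS across 321.3TB raw.
print(round(iops_per_tb(530_000, 321.3)))  # ~1650 IOPS/TB, as cited above
```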

 

Earlier I alluded to some storage vendors expressing alarm and uncertainty about high-capacity SSDs. This might have to do with theirs not being on the market yet. Not a biggie: they’ll catch up. The fact remains, however, that when they do, it will be with an expensive silo product that doesn’t play well across your infrastructure.

 

Meanwhile, you can experience the advantages of high-capacity SSDs in your environment now with NetApp. For more information on how easy it is to get NetApp all-flash arrays for your business, including free trials, check out our Flash 3-4-5 promotion today.


Mark Bregman

When Mark Bregman joined NetApp in September 2015, he brought to the company more than 30 years of technology experience and a passion for the process of innovation. He has held C-level and management roles for global firms including Symantec and IBM. Just prior to NetApp, Mark was CTO of SkywriterRX, Inc., an early-stage start-up using machine learning and natural language processing to analyze books. Before that, he held senior positions at Neustar, Symantec, Veritas, AirMedia, and IBM. He began his career at IBM’s Thomas J. Watson Research Center.
 
As NetApp SVP and CTO, Mark leads the company’s portfolio strategy and innovation agenda in support of the Data Fabric, NetApp’s vision for the future of data management. His responsibilities include evaluating where the biggest technical opportunities and risks are and helping to further develop and nurture the NetApp culture of innovation within the engineering team.
 
Mark is dedicated to addressing the underrepresentation of women in the fields of computer science and engineering. He has served as executive sponsor and an engaged member of the Women in Technology programs at all of his previous places of employment. Since 2009, he has served as a director of the Anita Borg Institute for Women and Technology. He also serves on the boards of the Bay Area Science and Innovation Consortium, ShoreTel, Inc., and SkywriterRX, Inc. He is a former member of the Naval Research Advisory Committee, a member of the American Physical Society and a senior member of IEEE. Mark holds a PhD, an MA, and an MPhil in physics from Columbia University and a BA in physics from Harvard College.