Thanks to VMware’s recent $1.26 billion purchase of Software-Defined-Networking (SDN) leader Nicira, and its new marketing push around the Software-Defined-Data-Center, everyone is running around trying to attach themselves to Software-Defined-Anything (SDx). This is as true for the storage market as it is for any other segment of the technology ecosystem. It is a safe bet that a lot of storage companies, both old and new, are scurrying around trying to figure out how to maneuver “Software-Defined” into their messaging.

This whole SDx concept is built on the idea that all virtualized data center resources (e.g. server, storage, networking, security) can be defined in software. These resources are then abstracted into a higher-level control plane, where they are dynamically provisioned out in support of different applications and/or services. This is called Software-Defined because we are at least two layers removed from the physical hardware at this point, and all management, orchestration and provisioning of these services has to be done in software.


As it relates to storage, Software-Defined-Storage (SDS) is enabled by lower-level storage systems abstracting their physical resources into software in as dynamic, flexible and granular a manner as possible. These virtualized storage resources are then presented up to a control plane as “software-defined” services. The consumption and manipulation of these storage services is done through an orchestration layer like VMware, CloudStack or OpenStack. The quality and breadth of these services are highly dependent on the virtualization and automation capabilities of the underlying hardware. More precisely, the control plane’s effectiveness depends on the virtualized resources presented to it from the layers below. Without the granular abstraction of physical storage resources, and APIs to define, flex and apply policy to these resources dynamically, the control plane is limited in the services it can provision out to virtual machines or applications.


As you can see from the description above, SDS is a combination of virtualization, abstraction and control. A storage system by itself is not SDS. Storage is a supporting element for anyone looking to manage their infrastructure within the “Software-Defined” framework. There will be a lot of vendors trying to muddy the waters between Software-Only storage and Software-Defined Storage. No matter what anyone tries to tell you, they are not the same thing. Software-Only storage still requires hardware; the fact that it is sold as software-only is more of a go-to-market and packaging decision than a technology decision. SDS, meanwhile, is a higher-level framework for the orchestration, provisioning and consumption of storage.


In a storage system properly architected to support SDS, all of the management of system resources is done through software. These resources are then presented up to the control plane, in a fine-grained fashion, via REST APIs. These APIs enable the control plane to provision storage services precisely to the unique needs of the applications running above it. The APIs effectively relinquish the management of these resources to the control plane, which can then carve them up and flex them as required. This is the way it should be. This communication layer is essential to supporting Software-Defined-Storage.
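As an illustration only (the response shape and field names here are assumptions for the sake of the sketch, not any specific product’s API), the fine-grained resource view such a system might present up to the control plane could look something like this:

```python
import json

# Hypothetical sketch of the abstracted resource view an SDS-capable storage
# system might present to a control plane over REST. Note that performance
# (IOPS) is exposed as a provisionable resource alongside capacity, and that
# policy knobs are advertised so the control plane knows what it can flex.
resource_view = {
    "capacityGB": {"total": 50000, "provisioned": 32000},
    "performanceIOPS": {"total": 200000, "provisioned": 145000},
    "policies": ["minIOPS", "maxIOPS", "burstIOPS"],
}

# The control plane consumes this as JSON and decides how to carve up the
# remaining capacity and performance for the applications above it.
available_iops = (resource_view["performanceIOPS"]["total"]
                  - resource_view["performanceIOPS"]["provisioned"])
print(json.dumps(resource_view))
```

The key point of the sketch is that performance appears as a first-class, countable resource, not just a side effect of how many spindles sit behind a volume.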


In the year ahead a lot of vendors will be quick to claim they are “software-defined storage”. However, software-defined storage is NOT a storage system concept. No single product, system or platform makes up SDS, but that won’t prevent a lot of people from telling you otherwise. To quickly get to the signal in the forthcoming SDS marketing storm, here are a few questions to ask:


When your vendor claims they are Software-Defined-Storage ask them how they virtualize the underlying hardware and present it up to the control plane.

  • Ask them if they can abstract and provision not only storage capacity but also performance.
  • When they claim they can, ask if it is possible to make an API call to the system for a 100GB volume with 1,000 IOPS. Then ask them if they can dynamically adjust this policy on the fly through software.
  • Ask them if they have a complete API that allows automation of all storage services so that higher-level orchestration layers can fully exploit the benefits of SDS.
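The second question above can be made concrete with a small sketch. The endpoint paths and field names below are hypothetical, not a real vendor’s API; the point is the shape of the two interactions: one call that specifies both capacity and performance, and a second that changes the policy on a live volume purely in software.

```python
import json

def create_volume_call(size_gb, max_iops):
    # Hypothetical request: capacity AND performance in one provisioning call.
    return ("POST", "/volumes",
            json.dumps({"sizeGB": size_gb, "qos": {"maxIOPS": max_iops}}))

def adjust_qos_call(volume_id, new_max_iops):
    # Same volume, new policy: adjusted on the fly, no reprovisioning
    # or data migration, just a software change the control plane can make.
    return ("PATCH", "/volumes/%s/qos" % volume_id,
            json.dumps({"maxIOPS": new_max_iops}))

# The 100GB / 1,000 IOPS request from the question above:
method, path, body = create_volume_call(100, 1000)
```

If a vendor can only express the first call in terms of capacity, or can only change performance by migrating data to different hardware, the “software-defined” claim deserves scrutiny.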

The “Software-Defined” movement has the chance to be a major leap forward in how infrastructure resources are provisioned, managed and automated. But a lot of pieces of the infrastructure need to come together to make the vision of a Software-Defined-Data-Center anything close to reality. As it relates to storage, in the coming year don’t be fooled by vendors’ quick claims of Software-Defined-Storage. Using the questions above, dig beyond the marketing smokescreen to understand what those claims really mean. You might be surprised at what you actually find.



Dave Wright

Dave Wright, SolidFire CEO and founder, left Stanford in 1998 to help start GameSpy Industries, a leader in online videogame media, technology, and software. GameSpy merged with IGN Entertainment in 2004 and Dave served as Chief Architect for IGN and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005. In 2007 Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008 and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009 Dave left Rackspace to start SolidFire.
