At VMworld 2015, the theme “Ready for Any” aims to empower organizations to develop, deploy, and consume IT. But when you deliver virtual environments in your organization, a common threat to success is that your virtualization boat is likely still anchored by legacy storage. That may be why your initiatives never seem to go quite as well as you planned.


Before I get into the Software Defined Data Center (SDDC) story, here’s my obligatory push for you to join us at VMworld 2015 (Booth #929) for a host of parties, sessions, and great free stuff. Details are at the bottom of this post.


Now, onto the SDDC, and how now is the time for Software Defined Storage (SDS) to be in the limelight of this evolving story.

Let’s rewind to get SDDC context

On May 8th, 2012 Steve Herrod, then CTO of VMware, blogged for the first time about the Software Defined Data Center (SDDC). For the VMware administrator, the ability to manage hardware resources through a set of software capabilities would greatly increase the scope of their operational command.


The promise of the SDDC is its ability to manage all your hardware resources and granularly define resource requirements to match your application needs. Fast forward three years, and that promise still falls short. At the compute layer, you are likely finding some success with that methodology. However, that is often not the case for the underlying storage you rely on to support your virtual machines.


Storage has arguably been the biggest struggle. While VMware has developed a number of technologies such as VASA and VAAI to integrate with external storage, the problem lies with the legacy storage architectures themselves and their lack of performance controls. Simply put, capacity and performance come coupled together; it’s hardware-defined.

What’s holding us back and where do today’s choices fall short?



As long as IT contends with legacy storage platforms that can neither abstract performance from capacity nor expose array capabilities through APIs to automation platforms, Software Defined Storage (SDS) falls short. Managing multiple levels of performance means calculating tiers of hardware, and often performance can only be guaranteed by constructing silos to isolate applications from each other.


The capabilities you need from SDS are critical to realizing the SDDC vision:

  • Performance pooled separately from capacity
  • Software controls that let you dial up performance or dial it down
  • Ability to add in small increments, non-disruptively, and scale out
  • Data services for efficiencies like dedupe, snapshots, replication
  • Fully exposed APIs for automation controls and to unlock IT consumption

SDDC success with SolidFire

Building the most agile SDDC for your VMware environments is within your reach today. SolidFire’s scale-out architecture allows you to size your storage platform to your exact needs. Once the platform is up and running, each volume you create for your VMs is associated with defined performance and capacity values.


Transforming your environment to support the move from silos to the SDDC is not an unmanageable task. As you adopt a SolidFire cluster, you’ll immediately notice that the base platform lets you set true Quality of Service (QoS) parameters on each volume, independently of its capacity. Here’s a progression you might realize with SolidFire for your VM environment:


SDS step 1 – Control performance: For the first time, you can set minimum, maximum, and burst IOPS for each VM volume alongside your capacity provisioning.
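To make that concrete, here is a minimal sketch of what setting per-volume QoS can look like programmatically. It builds (but does not send) a JSON-RPC request body in the shape of the SolidFire Element API’s ModifyVolume call; the exact field names and the volume ID used here are illustrative, so check your cluster’s API documentation before relying on them.

```python
import json

def modify_volume_qos_payload(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC request body that sets QoS on one volume.

    Method and parameter names follow the SolidFire Element API's
    ModifyVolume call as an assumption; verify against your API version.
    """
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed performance floor
                "maxIOPS": max_iops,      # sustained performance ceiling
                "burstIOPS": burst_iops,  # short-term burst ceiling
            },
        },
        "id": 1,
    }

# Hypothetical volume 42: guarantee 1,000 IOPS, cap at 5,000, burst to 8,000.
payload = modify_volume_qos_payload(42, min_iops=1000, max_iops=5000, burst_iops=8000)
print(json.dumps(payload, indent=2))
```

The key point the sketch shows: performance is a per-volume software setting, adjusted independently of how much capacity the volume holds.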


SDS step 2 – Add other workloads: Smash those silos! This means:

  • Mix your workloads – Microsoft SQL Server, Oracle, ERP, MySQL, MongoDB, and all of your other apps DO NOT need separate storage! QoS enables software control to deliver performance for thousands of those apps without contention.
  • VDI storage for free – Mixing those workloads lets you add VDI without adding cost. This is enabled by the extreme efficiency the SolidFire platform sees on VDI workloads, combined with QoS to deliver a great user experience.


SDS step 3 – Leverage the vCenter Plug-in: Virtualization and storage administrators can automate many tasks from a single familiar interface, the vSphere web client.


SDS step 4 – Extend into vSphere integrations:

  • SolidFire has API integration to fully complement VMware’s Storage I/O Control, resulting in SIOC with guaranteed performance (QoS)
  • Utilize VMware’s vStorage APIs for Array Integration (VAAI) to offload block zeroing, space reclamation, XCOPY, and ATS from your server hosts
  • Extend site-to-site DR capabilities with VMware’s Site Recovery Manager leveraging SolidFire’s Storage Replication Adapter (SRA)
  • … And soon you’ll be investigating how to extend into the future of storage with VMware Virtual Volumes (VVols)


SDS step 5 – Start your automation journey to SDDC:

  • Leverage PowerShell tools to deploy storage without touching the UI, and execute multi-step tasks efficiently at scale
  • Start creating storage policies with Storage Policy-Based Management (SPBM) to get scale and self-service automation flowing within IT
  • As your sophistication grows, utilize vRealize Orchestrator and vRealize Automation to enable self-service IT consumption and full-scale automation for storage, VM, and application builders.
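The automation journey above starts with scripted, policy-driven provisioning. As an illustrative sketch (not the PowerShell tools themselves), here is how tier-based volume creation might look using requests shaped like the Element API’s CreateVolume method; the tier names, account ID, and sizes are all hypothetical.

```python
# Illustrative QoS tiers; real values would come from your own policies.
TIERS = {
    "gold":   {"minIOPS": 5000, "maxIOPS": 15000, "burstIOPS": 20000},
    "silver": {"minIOPS": 1000, "maxIOPS": 5000,  "burstIOPS": 8000},
}

def create_volume_request(name, account_id, size_gb, tier):
    """Build a CreateVolume-style JSON-RPC body for a given QoS tier.

    Field names follow the SolidFire Element API as an assumption;
    verify them against your cluster's API reference.
    """
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 10**9,  # API expects bytes
            "enable512e": True,
            "qos": TIERS[tier],
        },
        "id": 1,
    }

# One scripted pass provisions mixed workloads side by side, no silos:
batch = [
    create_volume_request("sql-data", account_id=1, size_gb=500, tier="gold"),
    create_volume_request("dev-vms",  account_id=1, size_gb=200, tier="silver"),
]
for req in batch:
    print(req["params"]["name"], req["params"]["qos"]["minIOPS"])
```

The same tier map can later back SPBM policies or vRealize Orchestrator workflows, so the policy definitions you script today carry forward into self-service automation.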


Resulting effects and business value

With SolidFire liberating storage, you’ll realize faster deployments, agile scaling, predictable performance, automation, self-service options, and operational efficiency.


THIS is the Software Defined Storage the SDDC has been asking for since 2012!


All hands on deck for SDDC at VMworld!

Join us to get more information on how we deliver these results in many ways at VMworld 2015:

  1. Stop by Booth #929 to hear about our Great Feats and more about why SolidFire received the highest score for overall use case in Gartner’s Critical Capabilities for Solid-State Arrays* — Again.
  2. Join us at our Pursuit of Hoppiness party Monday night
  3. Learn from and dialogue with our CEO Dave Wright about flash in the Next Generation Data Center on Monday
  4. Discover other sessions on VVols, automation, and EUC both at Moscone and in the SolidFire suite

For more information go to

Keith Norbie

At NetApp, Keith drives Strategic Alliances in partnership with the business units and currently leads VMware, Data Protection (Veeam, Commvault, Rubrik), and SAN/Brocade. The role spans strategy development, advising and collaborating with product managers, incubating new offerings, cross-functional solution development, and executive interlocks. All of this drives net-new GTM revenue to NetApp via partners in key areas like Private and Hybrid Multi-Cloud, EUC (VDI), Modernized Data Protection, and Next Gen SAN for Enterprise Apps.

Keith joined NetApp in February 2016 with the acquisition of SolidFire and previously spent 20 years in the channel as an executive, including a successful acquisition built from a startup. He brings a passion for delivering results through clarity, focus (less is more), relationships, intense curiosity, and seeking signal from noise.
