So far in our series of blog posts on the challenges of deploying Block Storage as a Service, we have discussed why it’s difficult to get both good performance and high efficiency with primary storage in the cloud. Because of these issues, service providers are often left with a sprawling, underutilized storage infrastructure that is extremely difficult to manage at scale.

Where does this management challenge come from? It stems primarily from a disconnect between how traditional enterprise storage systems have been managed and how cloud providers want to build and manage their infrastructure. In the enterprise, expensive and complex storage equipment is looked after by experienced storage administrators. Given the cost of the equipment and the value of the data being stored, having a well-trained human configure and manage the storage on a daily basis makes a lot of sense. Traditional storage companies have built their management systems around the demands of these administrators and, as a result, have created complex, feature-rich administration tools.

The problem is that this model doesn’t scale. In a large-scale cloud, where you are growing quickly, deploying new storage on a weekly or even daily basis, and adding customers 24 hours a day, hiring an army of storage administrators to set up, configure, provision, manage, and troubleshoot that storage is not a viable option. The efficiencies of the cloud do not come from armies of administrators; they come from management through automation. Service providers don’t want to administer their storage; they want to automate it.

An illustration I like to use is to compare the deployment of compute capacity with the deployment of storage. Most cloud providers are extremely efficient at deploying new compute capacity. Automated server configuration, deployment, and management tools allow new racks of servers to be plugged into the network and immediately added to the pool of available compute capacity, all without an administrator ever logging in or configuring a single thing. How can you do that with storage today? Setting up a new storage array is a complex and time-consuming process. Provisioning new storage or adding capacity must be done carefully to avoid disruption and to ensure that security and data isolation are preserved. Automated alerting and reporting happen primarily through proprietary vendor tools or complex integrations. Any automation capabilities or APIs tend to be afterthoughts that cover only a small portion of the system’s functionality.

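To make the compute side of that comparison concrete, here is a minimal sketch, in Python, of what hands-off server onboarding can look like from the provider’s tooling. Everything in it is an assumption made for illustration: the provision.example.com endpoint, the payload fields, and the add_compute_node helper are hypothetical, not any particular vendor’s API.

```python
import requests

# Hypothetical provisioning service and payload fields -- illustrative
# only, not any particular vendor's API.
PROVISIONING_API = "https://provision.example.com/api/nodes"

def add_compute_node(mac_address: str) -> str:
    """Register a newly racked server with the provisioning service.

    From this single call onward, PXE boot, OS install, configuration,
    and addition to the capacity pool are driven entirely by automation;
    no administrator ever logs in to the machine.
    """
    resp = requests.post(
        PROVISIONING_API,
        json={"mac": mac_address, "role": "compute", "pool": "general"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["node_id"]

# Rack-and-stack tooling can call this for every new server it discovers:
node_id = add_compute_node("00:1a:2b:3c:4d:5e")
print("Node", node_id, "is joining the compute pool")
```

Try to write the storage equivalent of that function against a traditional array, and you end up scripting around management consoles and half-covered APIs instead.
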
What service providers are really looking for is a storage system designed with automation in mind from the start, with APIs that are comprehensive yet easy to integrate, and with management capabilities that a machine can consume just as easily as a human can. Only then is storage truly ready for cloud scale.

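And here, under the same caveat, is a sketch of what that kind of machine-consumable storage API could look like when automation is the primary interface rather than an afterthought. The storage.example.com endpoint, the credentials, and the request fields are all hypothetical; the point is the shape of the integration, not the specific calls.

```python
import requests

# Hypothetical storage API endpoint, credentials, and fields --
# illustrative only, not an actual product's interface.
STORAGE_API = "https://storage.example.com/api/v1"
AUTH = ("svc-provisioner", "s3cret")

def provision_volume(account_id: int, name: str, size_gb: int) -> dict:
    """Create a volume for a tenant with a single API call.

    A signup or billing workflow can invoke this directly, any hour of
    the day, with no storage administrator in the loop.
    """
    resp = requests.post(
        f"{STORAGE_API}/volumes",
        auth=AUTH,
        json={"accountID": account_id, "name": name, "sizeGB": size_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a volume ID and an iSCSI target to attach

# Provision boot storage for a newly signed-up customer:
volume = provision_volume(account_id=1001, name="cust-1001-boot", size_gb=100)
print("Created volume", volume["volumeID"])
```

When every operation the system supports (provisioning, reporting, alerting) is reachable this same way, storage can be folded into the same automation that already manages the compute pool.
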
Performance, efficiency, and management are just three of the challenges facing cloud providers who want to deploy primary block storage at scale. SolidFire was built from the ground up to address these challenges and many others. Soon, we will be telling you just how we do that. I can’t wait!

Dave Wright

Dave Wright, SolidFire CEO and founder, left Stanford in 1998 to help start GameSpy Industries, a leader in online videogame media, technology, and software. GameSpy merged with IGN Entertainment in 2004, where Dave served as Chief Architect and led technology integration with FIM / MySpace after IGN was acquired by NewsCorp in 2005. In 2007, Dave founded Jungle Disk, a pioneer and early leader in cloud-based storage and backup solutions for consumers and businesses. Jungle Disk was acquired by leading cloud provider Rackspace in 2008, and Dave worked closely with the Rackspace Cloud division to build a cloud platform supporting tens of thousands of customers. In December 2009, Dave left Rackspace to start SolidFire.