So you paid for the airfare, booked the outrageously overpriced hotel next to the park, and even shelled out extra for the VIP tickets. Yet with all that money spent …

You still have to wait in long queues to get on the rides!

Sounds familiar, doesn’t it?

In many ways, you as a service provider are in the same position when it comes to your storage. You paid for an overprovisioned array at the outset, bought into claims that you’d have so many IOPS you’d never see a performance problem, and were dazzled by the lightning-fast response times and super-low latency figures quoted.

It all looked good until you began scaling your operations and consolidating workloads onto the array. What was amazing with only a workload or two becomes a real pain at scale: applications steal performance (IOPS) from one another, latency soars, and the trouble tickets start rolling in.

“OK, no problem,” you think, remembering that the helpful salesperson told you not to worry, because their storage array’s quality of service (QoS) would protect your workload performance.

This is very much like the VIP pass at your favorite theme park. You may move faster than those without one, but you still have to wait in line.

Sadly, the same situation confronts you with your storage array. The QoS capabilities that were supposed to protect you from interfering applications don’t seem to work. This is when you realize that not all QoS techniques are created equal (see my previous blog post, Demystifying Cloud Service Provider Quality of Service), and many simply break down at large scale.

The situation is not all doom and gloom, though. There is a solution to the problem, one that eliminates application interference and guarantees performance by enforcing three dimensions of control: maximum, minimum, and burst performance levels (see my blog post on 3-Dimensional QoS). The benefit is that when one application “spikes,” others aren’t affected, and storage-based performance trouble tickets become almost a thing of the past. We asked ESG to verify this, and they determined that the three-dimensional QoS of NetApp® SolidFire®, which doesn’t rely on rate limiting, eliminates up to 94% of all storage-based trouble tickets.
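To make those three dimensions concrete, here’s a minimal sketch of how per-volume QoS settings can be applied through the SolidFire Element JSON-RPC API. The cluster address, credentials, volume ID, IOPS values, and the API version in the URL are all illustrative assumptions; check the Element API reference for your cluster before using anything like this.

```python
import requests

# Minimal sketch: set a volume's three QoS dimensions (min/max/burst IOPS)
# via the SolidFire Element JSON-RPC API. Every address, credential, and ID
# below is an illustrative placeholder, not a real value.
MVIP = "https://cluster.example.com/json-rpc/10.0"  # management VIP; API version assumed
AUTH = ("admin", "password")                        # cluster admin credentials (placeholder)

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,  # the volume to protect (placeholder)
        "qos": {
            "minIOPS": 1000,    # guaranteed floor, held even when neighbors spike
            "maxIOPS": 5000,    # sustained ceiling for this workload
            "burstIOPS": 8000,  # short-term headroom above the max for spiky I/O
        },
    },
    "id": 1,
}

# verify=False only because many clusters use self-signed certificates;
# use a proper CA bundle in production.
resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False, timeout=30)
resp.raise_for_status()
print(resp.json())
```

With all three dimensions set per volume, the array can hold every tenant to its own floor and ceiling simultaneously, rather than simply throttling the noisiest neighbor after the damage is done.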

The knock-on benefit is that with fewer tickets caused by storage and interfering workloads, any remaining issues can be dealt with much faster. You go to the front of the queue, so to speak.

The biggest benefit of all, though, goes to your customers. Fewer performance issues, especially at scale, mean a predictable, repeatable user experience; happier customers; and ultimately greater revenue down the line.

Given that dissatisfaction is one of the biggest drivers of customer churn in the service provider industry, why risk losing full control of the workloads you host and upsetting your most valuable asset: your customers?

To learn how SolidFire QoS improves performance control over other QoS methodologies, read my blog post Demystifying Cloud Service Provider Quality of Service. You’ll see how SolidFire QoS goes above and beyond the rest and keeps the long lines at the theme park, not in your trouble-ticket queue.

Simon Wheeler

Simon is the Senior Product and Segment Marketing Manager at SolidFire, where he helps service providers position, market, and generate demand for their NetApp SolidFire-based storage services. He has 25 years of product marketing and management experience across multiple industries.