Will the true QoS please stand up?

This is the third post in a series on how the next generation service provider is differentiating and growing in a highly competitive market. The first post, on next generation hosting, can be found here; the second, on scale-out architecture, can be found here.


In the realm of data storage, the term “Quality of Service” (or “QoS”) has been widely promoted by industry vendors as the way to unlock key benefits for the service provider, yet in practice its implementations have often not lived up to the marketing hype.


With many flavors of QoS in the market today, no consistent definition of what QoS really is, and performance claims that rely heavily on implementation “small print,” cloud service providers have become understandably skeptical about vendor claims surrounding this potentially game-changing feature.


So what is driving the need for QoS?


For the service provider running a traditional or legacy storage architecture, virtual workloads are run on discrete pools of storage. This is done to protect the performance and availability of each individual application on a single storage array in a multi-application environment.


While this approach can provide capacity on demand, its downfall lies in allocating performance resources efficiently, as it is not built to support the individual capacity and performance requirements of collective workloads. As a result, more storage is purchased than is really needed, which drives efficiency down and costs up.


Recognizing this problem, storage vendors bolted a number of different QoS methodologies onto their arrays to try to reduce the efficiency risks associated with multi-tenant, multi-application environments, with varying degrees of success.


These initial implementations of QoS typically fell into one or more of the following methodologies:

QoS format: No QoS at all
Methodology: “We provide enough IOPS for any workload”
Things to think about / implications:
  • No guarantees for multiple-workload environments
  • No protection from “noisy neighbor” applications

QoS format: Storage tiering
Methodology: Combine different storage types to create different performance and capacity tiers; a predictive algorithm determines which data is “hot” vs. “cold”
Things to think about / implications:
  • Workload performance varies greatly as the algorithm moves data between storage types (fast to slow, etc.)
  • “Noisy neighbor” applications steal performance
  • No control over individual applications

QoS format: Prioritization
Methodology: Rank applications into tiers such as “mission critical,” “moderate,” and “low”
Things to think about / implications:
  • Service providers often don’t know what applications are being deployed
  • No control over whether any single application gets the performance it needs
  • Performance is based on arbitrary levels
  • “Noisy neighbors” get worse if prioritized as “mission critical”

QoS format: Rate limiting
Methodology: Apply hard I/O limits to individual application performance
Things to think about / implications:
  • Limits the amount of performance a “noisy neighbor” application can access
  • No minimum performance guarantee
  • High-performance/bursty applications can become capped and incur undesired latency

In recent years, legacy storage architectures have trended toward rate limiting as the way to implement QoS, which can be effective with adequate application planning. The challenge for the service provider, though, is that as the volume and diversity of customers and applications increase, so does the need for robust control of performance. Limiting maximums alone just isn’t enough when operating at scale.
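
To see concretely why a maximum-only limit falls short, consider a token bucket, the mechanism most IOPS caps are built on. The sketch below is a generic illustration, not any vendor’s implementation, and the class name and figures are hypothetical: it can keep a noisy neighbor under its ceiling, but nothing in it guarantees any other workload a minimum level of performance.

```python
import time


class IopsCap:
    """Generic token-bucket IOPS limiter (an illustration, not any vendor's code).

    Tokens refill at max_iops per second and each I/O spends one token, so
    throughput can never exceed the cap for long. Note what is missing:
    there is no floor, so a workload under this cap can still be starved
    when the array itself runs out of performance.
    """

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last_refill = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never above the cap.
        self.tokens = min(float(self.max_iops),
                          self.tokens + (now - self.last_refill) * self.max_iops)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # I/O may proceed
        return False      # Over the cap: queue or reject, adding latency


# A noisy neighbor capped at 5,000 IOPS can no longer monopolize the array,
# but a critical workload capped at 2,000 IOPS still has no guaranteed minimum.
noisy_neighbor = IopsCap(max_iops=5000)
critical_app = IopsCap(max_iops=2000)
print(noisy_neighbor.allow_io(), critical_app.allow_io())
```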


If we now contrast this with a service provider built on a web-scale next generation data center (wNGDC), the situation is fundamentally different when it comes to Quality of Service. In the wNGDC, QoS has been integrated into the storage architecture from the ground up. It allows complete control of applications and volumes at a granular level and enables a single-tenant, single-application customer experience within a multi-tenant, multi-application environment.


Does this sound like a pipe dream?


Enter “guaranteed QoS.”

Guaranteed QoS is the methodology that allows the SolidFire storage architecture to enforce hard performance controls at a granular level, guaranteeing a set amount of storage resources to each and every application.


Applications are assigned guaranteed amounts of IOPS that are respected regardless of any other application activity, capacity level, or I/O pattern. This makes guaranteed QoS fundamentally different from anything else out there.


This level of performance control, where “noisy neighbor” effects are completely eliminated, can only be achieved when three dimensions of performance are defined and enforced at a granular level.

These three dimensions are listed below, followed by a brief configuration sketch:

  1. Maximum IOPS (common to rate limiting)

  2. Minimum IOPS (guaranteed minimum performance at all times)

  3. Burst IOPS (short-term extensions to max IOPS levels that reduce latency risks)
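
To make this concrete, here is a minimal sketch of what setting all three dimensions on a single volume might look like. It assumes the SolidFire Element JSON-RPC API and its ModifyVolume method with a qos parameter; the cluster address, credentials, API version, volume ID, and IOPS figures are placeholders for illustration, not recommendations.

```python
import requests

# Placeholder cluster management address and Element API version.
ELEMENT_API = "https://mvip.example.com/json-rpc/10.0"

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,              # hypothetical volume
        "qos": {
            "minIOPS": 1000,         # guaranteed floor, honored at all times
            "maxIOPS": 5000,         # sustained ceiling (the classic rate limit)
            "burstIOPS": 10000,      # short-term headroom above maxIOPS
        },
    },
    "id": 1,
}

# Placeholder credentials; verify=False only because this is a lab-style sketch.
response = requests.post(ELEMENT_API, json=payload,
                         auth=("admin", "password"), verify=False)
response.raise_for_status()
print(response.json())
```

The key dimension is minIOPS: the maximum and burst values behave like a rate limit with headroom, while the minimum is what turns the setting into a guarantee the workload can count on even when the cluster is busy.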

Think about it logically: if you don’t have robust control over each and every one of your applications, you can’t really ensure the Quality of Service your customers are actually getting.


This is a risky place to be as a service provider — and a risk that you don’t have to take with SolidFire.


As service providers adapt to the constantly changing multi-tenant, multi-application landscape, predictable Quality of Service is an absolute must for differentiating from the competition. In NetApp SolidFire’s latest thought-leadership white paper, “A Service Provider’s Perspective: Designing the Next Generation Data Center,” we address this topic in detail and offer service providers guidance across the entire data center on enabling business transformation and market leadership. Click here to download this essential guide.


Simon Wheeler

Simon is the Senior Product and Segment Marketing Manager at SolidFire, where he helps service providers position, market, and generate demand for their NetApp SolidFire-based storage services. He has 25 years of product marketing and management experience in multiple industries.