Without comprehensive control of storage performance, how can any sense of I/O quality be delivered? Are most QoS implementations just delivering different versions of poor quality? In this post I’ll walk through the top three types of QoS being offered by other storage vendors today. Spoiler alert: they’re broken, and each one delivers an incomplete service.
Check out the top 3 QoS technologies … and how they will all fail you.
Rate limiting | Is just a speed limit enough?
The analogy: You wouldn’t buy a car without knowing it can at least go the speed limit. Likewise, you shouldn’t put your application on storage that cannot provide a minimum amount of performance.
Rate limiting is nearsighted: it assumes there will always be enough performance for a limit to even matter. Storage workloads are bursty. A volume often needs to go beyond its rate-limited cap … what then? And when the system is starved, a cap does nothing to protect you. Unless there is a controlled guarantee of storage I/O performance, rate limiting is meaningless.
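The gap is easy to make concrete. Below is a minimal, hypothetical sketch of pure rate limiting (the class and field names are illustrative, not any vendor’s API): a cap can only subtract performance, never guarantee it.

```python
# Hypothetical sketch of pure rate limiting; names are illustrative,
# not any vendor's API.

class RateLimitedVolume:
    """A per-volume IOPS cap that guarantees nothing below the cap."""

    def __init__(self, cap_iops):
        self.cap_iops = cap_iops

    def allowed_iops(self, requested_iops, available_iops):
        # A cap can only subtract performance; it can never add any.
        return min(requested_iops, self.cap_iops, available_iops)

vol = RateLimitedVolume(cap_iops=1000)

# Burst: the app needs 3000 IOPS and the array has plenty of headroom,
# but the cap still throttles it to 1000.
print(vol.allowed_iops(requested_iops=3000, available_iops=10_000))  # 1000

# Contention: the app asks for 800, well under its cap, but a starved
# array can only deliver 50. The "limit" guarantees nothing.
print(vol.allowed_iops(requested_iops=800, available_iops=50))  # 50
```

In both directions the cap fails: it punishes legitimate bursts when headroom exists, and it provides no floor when headroom disappears.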
Prioritization | What’s 60% of zero?
The analogy: Say you received a job offer that paid you and a co-worker on a 60:40 split each month, where you get 60% and your coworker gets 40%. That could be reasonable … or it could be terrible. You certainly wouldn’t take it without knowing that your 60% is 60% of a reasonable number.
Prioritization only gives relative performance; it does not guarantee performance. You will just end up with different versions of bad.
Host-level prioritization makes things worse: the host increases queue depth to squeeze out more performance, with no awareness of the storage system’s health. The guarantee must come from the storage itself. Legacy storage systems react to deeper queues with higher latency, which is exactly the opposite of the desired result. And if the highly prioritized app is the one causing the problem in the first place, prioritization multiplies the issue.
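The 60%-of-zero problem takes only a few lines to show. This is a hypothetical sketch of share-based prioritization, not any real scheduler: shares divide whatever total the array can deliver right now, so a high priority is only as good as that total.

```python
# Hypothetical sketch of share-based prioritization; not any real
# scheduler's API.

def prioritized_iops(shares, total_iops):
    """Split the array's current throughput by relative priority weight."""
    weight_sum = sum(shares.values())
    return {app: total_iops * weight / weight_sum
            for app, weight in shares.items()}

shares = {"app_a": 60, "app_b": 40}

# Healthy array: 60% of 100,000 IOPS is a perfectly good number.
print(prioritized_iops(shares, total_iops=100_000))  # app_a gets 60000.0

# Overloaded array: app_a still "wins" its 60% share, of almost nothing.
print(prioritized_iops(shares, total_iops=500))  # app_a gets 300.0
```

Nothing in the model bounds `total_iops` from below, which is the whole point: relative shares rank the losers but guarantee nothing.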
Tiering | Getting what you need — eight hours too late
The analogy: Like jumping from line to line in the grocery store. By the time you get to the shorter line, the original line starts moving faster. So you switch back, only to find that a third line seems to be moving faster still. Tiering keeps you constantly hopping around rather than getting the performance you need.
Delivering performance consistently and on time is key to a successful cloud deployment. Burning cycles to calculate how much performance is needed where, and then physically migrating data to a media tier that “might” provide better performance, is a serious mess.
In some cases this process not only takes eight hours to make a decision, but then blindly moves data based on capacity, hoping for better performance. Moving data across tiers is expensive, and after all that effort you may not even get good performance. Relying on media type (SSD vs. 15K vs. 7.2K RPM) with no awareness of current load and available performance does not scale and will consistently fail applications.
Tiering adds load to the very system whose load you’re trying to correct. It also ignores the possibility that the storage network, rather than the storage itself, is the bottleneck. Tiering is always doing extra back-end work at exactly the moment you need the system focused on front-end performance.
What does work?
Any form of QoS that uses the word “quality” but does not have control doesn’t deserve to be called quality of service.
Don’t leave your application wanting with blind attempts to limit, prioritize and migrate data to achieve performance. The ability to set a minimum alongside maximum and burst capabilities creates full-spectrum QoS instead of the desultory solutions discussed above.
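For contrast, here is a hedged sketch of what min/max/burst settings could look like. The field names, the credit cap, and the credit accounting are assumptions for illustration, not any vendor’s implementation: the minimum is a floor the array must reserve, the maximum is a sustained ceiling, and bursts above the maximum spend credits banked while running below it.

```python
# Hypothetical sketch of min/max/burst QoS; names and the 10,000-credit
# cap are illustrative assumptions, not any vendor's actual API.

class QosVolume:
    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # guaranteed floor
        self.max_iops = max_iops      # sustained ceiling
        self.burst_iops = burst_iops  # short-term ceiling
        self.credits = 0              # headroom banked while below max

    def allowance(self, requested_iops):
        """Per-interval allowance: bursting above max spends banked credits."""
        if requested_iops > self.max_iops and self.credits > 0:
            allowed = min(requested_iops, self.burst_iops,
                          self.max_iops + self.credits)
            self.credits -= allowed - self.max_iops
            return allowed
        allowed = min(requested_iops, self.max_iops)
        # Bank unused headroom (capped) for future bursts.
        self.credits = min(self.credits + (self.max_iops - allowed), 10_000)
        return allowed

def mins_are_honest(volumes, array_capacity_iops):
    # A minimum is only a guarantee if the sum of all minimums never
    # exceeds what the array can actually deliver.
    return sum(v.min_iops for v in volumes) <= array_capacity_iops

vol = QosVolume(min_iops=500, max_iops=1000, burst_iops=2000)
print(vol.allowance(400))   # below max: 400 allowed, 600 credits banked
print(vol.allowance(1800))  # burst: 1600 allowed (max + 600 credits)
```

The admission check is the piece the three approaches above all lack: by refusing to overcommit the sum of minimums, the system can actually honor the floor instead of merely capping, ranking, or shuffling.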
There can be no quality without control! Learn everything you need to know about storage QoS in our Definitive Guide to Guaranteeing Storage Performance.