In my previous blog, I talked about a new product from NetApp, NetApp® Service Level Manager, that makes it significantly easier to manage and scale your storage and improve operations with predictable performance and cost.
In this blog, let’s look at a typical use case example to help you understand how it works.
First, let’s get started with a set of one-time tasks for a storage admin or IT architect. Let’s say that:
- You have a mid-size or large enterprise IT environment
- You have many arrays representing different price-performance and efficiency points: SAS, SATA, All-Flash, etc.
- The storage you deploy for your typical application workloads can be categorized into storage service levels that look something like this:
  - "Value" class: high-capacity applications that are not especially performance critical, e.g. email, web content, file shares, backup targets
  - "Performance" class: database and virtualized applications
  - "Extreme Performance" class: latency-sensitive applications with the greatest impact to the business
Here we are viewing an example of the Extreme Performance service level, which states that it is validated for SAP and IBM Watson workloads.
You may require more than three classes (five or six are common), but we don't often see many more than that.
For each storage service level class, the architect or administrator specifies service level objectives (SLOs) in terms of maximum latency permitted, peak IOPS allowed, and typical IOPS expected, as shown below (these happen to be the predefined, default storage service level classes that ship with the product):
| Service Level Class | Value | Performance | Extreme Performance |
| --- | --- | --- | --- |
| Typical applications | High capacity applications: email, web content, file shares, backup targets | Database and virtualized applications | Latency sensitive applications with greatest impact to the business |
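For illustration, a storage service level class can be thought of as a small policy record holding its SLOs. This is only a sketch; the latency and IOPS figures below are hypothetical placeholders, not the actual defaults that ship with NSLM.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceLevelClass:
    name: str
    max_latency_ms: float        # maximum latency permitted (SLO)
    peak_iops_per_tb: float      # peak IOPS allowed, per TB provisioned
    expected_iops_per_tb: float  # typical IOPS expected, per TB provisioned

# Hypothetical placeholder values -- not NSLM's shipped defaults.
VALUE = ServiceLevelClass("Value", 17.0, 512, 128)
PERFORMANCE = ServiceLevelClass("Performance", 2.0, 4096, 2048)
EXTREME_PERFORMANCE = ServiceLevelClass("Extreme Performance", 1.0, 12288, 6144)
```

Whatever the concrete numbers, the ordering is what matters: each step up in class tightens the latency ceiling and raises the IOPS/TB allowance.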
Note that the Peak IOPS and Expected IOPS are expressed in terms of the amount of I/O that an application requires from its storage. Since any given application might need a little capacity or a lot, the SLO policy needs to be expressed in relative terms that scale with the amount of storage you are requesting.
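That relative scaling is simple arithmetic: multiply the per-TB SLO values by the capacity being requested. A minimal sketch, using hypothetical per-TB numbers rather than NSLM's actual defaults:

```python
def scaled_limits(peak_iops_per_tb: float, expected_iops_per_tb: float,
                  capacity_tb: float) -> tuple:
    """Return (peak_iops, expected_iops) for a volume of the given size."""
    return (peak_iops_per_tb * capacity_tb,
            expected_iops_per_tb * capacity_tb)

# A 4 TB request in a class rated at 2048 peak / 128 expected IOPS per TB:
peak, expected = scaled_limits(2048, 128, 4)
# peak = 8192.0, expected = 512.0
```

Because the limits are derived from capacity, growing the volume automatically yields a proportionally larger IOPS budget.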
With NSLM, the IOPS/TB values are used in several ways:
- Selecting the best storage medium. For workloads that need higher performance, you typically expect a higher number of IOPS per TB of capacity. Hence, in the default storage service level class definitions described above, the “Extreme Performance” class has a higher number of IOPS/TB than the “Value” class. NSLM uses this value to help assure that the storage media it selects can meet the needs of the workload.
- Assuring each workload gets what it needs…but not more. The Peak IOPS/TB value is used to set a maximum limit on the number of IOPS delivered to the workload, to keep it from consuming more than you intend and negatively impacting the performance of other workloads running on the same storage. As your capacity grows, that limit will likely need to grow as well to provide a consistent experience to your application users. NSLM handles this by reapplying the IOPS/TB value to the maximum limit as TBs are added, for instance through ONTAP's Auto-Grow feature.
- Determining how many workloads can be consolidated on the same storage. When determining where to provision the storage being requested, NSLM looks at both the Peak IOPS value requested and the Expected IOPS value (described above). By looking at these two values together, along with maximum limits that grow as capacity is consumed or TBs are added, NSLM can determine the best location for provisioning the storage. This assures that the system can meet all the expected IOPS, both in aggregate and in peak situations.
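The consolidation decision above can be sketched as a simple placement check: a location is viable only if it can absorb the workload's expected IOPS at steady state and still leave headroom for its peak bursts. The class, field names, and tie-breaking rule here are illustrative assumptions, not NSLM's internal model.

```python
from dataclasses import dataclass

@dataclass
class Aggregate:
    name: str
    iops_capability: float          # total IOPS the media can sustain
    committed_expected_iops: float  # sum of expected IOPS already placed here

def can_place(agg: Aggregate, expected_iops: float, peak_iops: float) -> bool:
    # Steady state: all expected IOPS, including the new workload's, must fit.
    steady_ok = agg.committed_expected_iops + expected_iops <= agg.iops_capability
    # Burst: the new workload's peak must fit alongside the existing expected load.
    burst_ok = agg.committed_expected_iops + peak_iops <= agg.iops_capability
    return steady_ok and burst_ok

def best_location(aggs, expected_iops, peak_iops):
    """Pick the least-loaded aggregate that passes both checks, or None."""
    viable = [a for a in aggs if can_place(a, expected_iops, peak_iops)]
    return min(viable, key=lambda a: a.committed_expected_iops, default=None)
```

For example, a nearly full aggregate may satisfy the steady-state check yet fail the burst check, which is exactly why both values are weighed together.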
In my next blog, I will cover the automatic discovery and mapping capabilities of NetApp Service Level Manager.