To meet SLAs, all workloads on a storage system must behave predictably. Predictability can prevent performance bottlenecks and “bully workloads.” To achieve predictability, you need to implement adaptive quality of service (QoS).


Suppose there are 10 workloads running on the same storage system, and most of them are I/O intensive. At some point, only a few of them will perform well, while the others suffer from performance bottlenecks. You can avoid this situation if the storage system has enough performance headroom. But when you have a bully workload (also called a "noisy neighbor") in your environment, it can consume all the available IOPS, preventing you from meeting your SLAs for the other workloads. This situation creates business challenges, especially if you are a service provider; it can also increase your operational expenditures.


The solution to this problem is storage quality of service (QoS). You use storage QoS to limit the throughput to workloads and to monitor workload performance. You can reactively limit workloads to address performance problems, and you can proactively limit workloads to prevent performance problems. By managing workload performance spikes through storage QoS, you can mitigate risks around meeting your performance objectives.


NetApp® Service Level Manager (SLM) helps decrease the cost and complexity of implementing QoS. Instead of manually managing individual QoS settings for hundreds or thousands of volumes, you can use SLM, which also acts as a QoS policy manager. SLM automates QoS management at the volume level by translating storage service level policies into QoS settings for individual volumes. Any workload configured or managed through SLM is QoS managed by default.


To automatically achieve well-behaved workloads and predictable performance, you need to configure workloads through SLM. This blog post shows how SLM makes it simple and straightforward to set and enable adaptive QoS on a workload.

Implementing Adaptive QoS with SLM

Implementing adaptive QoS on a workload involves selecting the right policy, and then attaching that policy to the workload.


NetApp Service Level Manager has three predefined storage service levels or policies. Any workload that is configured and managed through SLM is automatically attached to a policy and managed through the service level objectives (SLOs) of that policy.


You can view service level definitions from the Manage Storage Service Levels pane in SLM.


To see the definition of the service level objectives, click the storage service level. You’ll see the peak IOPS and expected IOPS parameter details.


Peak IOPS/TB is the parameter that defines the QoS limit. In this example, the Value service level has Peak IOPS/TB set to 512. This parameter sets the throttle: any workload defined with this service level can burst up to 512 IOPS per terabyte of used capacity. If a workload demands a higher or lower QoS limit, you can also create a custom storage service level with the desired peak IOPS/TB value. The SLA is defined by expected IOPS/TB, the minimum IOPS/TB the workload should receive throughout its lifetime.


Now that QoS is set, how do you enable adaptive QoS?


SLM has an automated compliance engine that continuously polls the volume size. When SLM detects an increase or decrease in used capacity, the compliance engine kicks in and adjusts the QoS values accordingly. In simple terms, with the Value storage service level, if the volume's used capacity is 1TB, the max IOPS limit is set to 512. When the used capacity grows to 2TB, SLM automatically adjusts the max IOPS limit to 1024.
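The adjustment described above is a simple multiplication of used capacity by the service level's peak IOPS/TB. A minimal sketch of that calculation (the function name and integer rounding are illustrative assumptions; SLM performs this adjustment internally):

```python
# Sketch of the adaptive QoS calculation described above.
# SLM does this internally; this function is only an illustration.

def adaptive_max_iops(used_capacity_tb: float, peak_iops_per_tb: int) -> int:
    """Return the max IOPS limit for a workload's current used capacity."""
    return int(used_capacity_tb * peak_iops_per_tb)

# Value service level: Peak IOPS/TB = 512
print(adaptive_max_iops(1, 512))  # 512
print(adaptive_max_iops(2, 512))  # 1024
```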

The graph shown here is an example of a 2TB thin-provisioned workload with the Value service level. When a workload is first configured, the max IOPS value equals the expected IOPS value. Then, as the used capacity of the workload changes, the QoS value is adjusted automatically according to the peak IOPS/TB definition.


After you choose the storage service level, you can provision the required workload.


The following example shows a sample cURL command that uses the SLM REST API to provision a LUN and enable adaptive QoS on it:
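As a rough illustration of what such a provisioning call assembles, here is a hedged Python sketch. The host and endpoint path are assumptions, not the actual SLM API; the payload keys mirror field names that appear in the GET response body shown later in this post. Refer to the linked blog post below for the authoritative provisioning workflow.

```python
# Hypothetical sketch of provisioning a LUN with a storage service level
# through a REST API. BASE_URL and the /luns path are assumptions; the
# payload keys mirror fields from the GET response shown later in this post.
import json
import urllib.request

BASE_URL = "https://slm.example.com/api"  # assumed host, for illustration only

def build_provision_request(name, size_bytes, service_level_key, svm_key):
    """Construct a POST request for a new LUN (illustrative only)."""
    payload = {
        "name": name,
        "size": size_bytes,
        "storage_service_level_key": service_level_key,
        "storage_vm_key": svm_key,
    }
    return urllib.request.Request(
        f"{BASE_URL}/luns",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request(
    "Slo_lun", 1099511627776,
    "46844bf5-705b-4c5d-8b50-163e7c8e3788",
    "ba6bd68a-f4d9-34da-bf7c-0bc2f2c463cb",
)
```

Attaching the storage service level key in the request is what causes SLM to apply that policy's QoS settings to the new workload.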

To learn more about how to provision a LUN with the desired storage service level through SLM, refer to the blog post Provision Storage Like a Service Provider with NetApp Service Level Manager.


After you provision the workload through SLM, you may need to find the current QoS limit for that workload. To do so, run a GET operation on the workload through the API; in the response body, look for the max_iops parameter.


The following code shows a sample GET cURL command for a LUN:

curl -X GET --header 'Accept: application/json' ''

And here is a sample response body for that command:


{
  "status": {
    "code": "SUCCESS"
  },
  "result": {
    "total_records": 1,
    "records": [
      {
        "storage_service_level_key": "46844bf5-705b-4c5d-8b50-163e7c8e3788",
        "storage_vm_key": "ba6bd68a-f4d9-34da-bf7c-0bc2f2c463cb",
        "used": 0,
        "size": 1099511627776,
        "space_efficiency_saved_size": 0,
        "serial_number": "806uu+INDs57",
        "storage_pool_key": "1135eded-e2cf-3239-90ed-ee3686dc2572",
        "max_iops": 128,
        "is_read_only": null,
        "storage_platform_resource_key": "5f852b37-3a23-11e6-945d-00a0989b486a:type=lun,uuid=de79d5ed-357e-4a7d-ac00-e9372ccb8860",
        "operational_state": "online",
        "is_clone": false,
        "parent_lun_key": null,
        "name": "Slo_lun",
        "budgeted_capacity": 0,
        "budgeted_iops": 0,
        "measured_io_density": null,
        "storage_platform_type": "Ontap",
        "host_usage": "image",
        "storage_compliance_state": null,
        "protection_compliance_state": null,
        "storage_platform_resource_type": "Lun",
        "created_timestamp": 1514974633864,
        "last_modified_timestamp": 1514974633864,
        "key": "751edb04-02cf-37fd-94ce-088e225ffc26"
      }
    ]
  }
}

SLM automatically takes care of setting the QoS limit value in ONTAP for each workload and adapts the QoS value accordingly.
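For scripted checks, the max_iops value can be pulled out of a response like the one above. A minimal Python sketch, using an abbreviated copy of that response body as a literal for illustration:

```python
# Extract the current adaptive QoS limit (max_iops) from an SLM GET response.
import json

# Abbreviated copy of the response body shown above; only the fields used here.
response_text = """
{
  "status": {"code": "SUCCESS"},
  "result": {
    "total_records": 1,
    "records": [
      {"name": "Slo_lun", "max_iops": 128, "size": 1099511627776}
    ]
  }
}
"""

def current_max_iops(body: str) -> int:
    """Return max_iops from the first record of a GET response body."""
    data = json.loads(body)
    if data["status"]["code"] != "SUCCESS":
        raise RuntimeError("request failed")
    return data["result"]["records"][0]["max_iops"]

print(current_max_iops(response_text))  # 128
```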


In a storage environment, the most common root cause of performance problems is overdelivery. By limiting overdelivery, you can control costs and meet minimum performance expectations without incident. NetApp Service Level Manager automatically takes care of overdelivery problems. When you provision workloads through SLM, you can easily achieve an environment free of bully workloads.


For more details, NetApp customers can refer to the NetApp Service Level Manager Installation and Setup Guide on the NetApp Support site. If you have questions, feel free to contact the NSLM team at

Priya Munshi

Priya is a Solution Architect in the Manageability Products and Solutions group at NetApp. She is responsible for developing solutions and helping customers implement NetApp Service Level Manager. She has more than a decade of experience with storage, servers, caching, data protection, backup and recovery, and automation. She works closely with customers to help them migrate from traditional IT to an "as a service" model.
