Storage service design is a popular topic around NetApp IT these days because of the huge impact it has had on our IT operations. We’ve written a series of blogs on our three-tier service catalog, which offers storage as a service that is predictive and easily consumable. We use a data-driven model that looks at performance and capacity together to identify, manage, and deliver the appropriate level of storage service.
One topic that we haven’t touched on in the storage service model is managing risk. Can a risk-based governance process be incorporated into service levels? Yes, it can. In this blog, I’ll describe how storage service levels are one of the best ways to control IT risk.
Aligning Business and IT
In order for business risks to drive service levels, we first must understand exactly how applications and infrastructure relate to the business. Yet the traditional business process/application views proved problematic. Instead, NetApp adopted a business capability and IT service model. Under this model, we group interdependent business processes into a business capability.
Then we group an ecosystem of applications into an IT service. Mapping an IT service to a business capability bridges the gap between the two groups (see graphic).
Service-Level Defined Risk
When we look at business capabilities from the risk perspective, we evaluate operational resiliency (OR), which covers local production storage and data integrity, and disaster recovery (DR), which covers alternate storage and data integrity. Some of the questions we asked were: If a business capability goes down, how severe would the impact be? How long could the business group function without access to its data or applications?
Understanding when impact severities shift from minor to severe allows us to establish IT service requirements that are qualitative, quantitative, and consistent across the entire enterprise. We want to make sure they are not subject to an individual application or team’s point of view.
This is also where storage service levels and risk management come together. For example, we may find that for a specific business capability, the business can afford to lose no more than one hour of data in order to operate (its recovery point objective) and can tolerate at most four hours for disaster recovery (its recovery time objective). We then assign the application ecosystem to a storage service level that supports this service level agreement (SLA).
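The assignment step can be sketched in code. The tier names and thresholds below are hypothetical, not NetApp IT's actual catalog; the idea is simply to pick the least expensive service level whose recovery point objective (RPO) and recovery time objective (RTO) both meet the business requirement.

```python
# Each tier states the maximum data loss (RPO) and recovery time (RTO),
# in hours, that it guarantees. Listed cheapest first.
SERVICE_LEVELS = [
    ("Tier 3", 24, 72),
    ("Tier 2", 4, 24),
    ("Tier 1", 1, 4),
]

def assign_service_level(rpo_hours: float, rto_hours: float) -> str:
    """Return the cheapest tier whose guarantees meet both objectives."""
    for tier, tier_rpo, tier_rto in SERVICE_LEVELS:
        if tier_rpo <= rpo_hours and tier_rto <= rto_hours:
            return tier
    raise ValueError("No service level meets these objectives")

# The example from the text: at most 1 hour of data loss, 4 hours for DR.
print(assign_service_level(1, 4))  # → Tier 1
```

Because the inputs come from the business capability's risk assessment rather than from IT policy, the same rule produces a consistent service-level assignment across every ecosystem.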
Linking Risk and Performance to Service Levels
There are many benefits to this approach. The business now has the power to prioritize both the performance and the risk of its services through a storage service level. Operational SLAs are based on business needs and no longer dictated by IT, as they were in the past.
More importantly, the business and IT have moved conversations from hardware and application failure to the risks in supporting business capabilities through IT services. We have a shared understanding of what will happen to a business capability in the event of a data disruption. We also have a consistent risk approach across all the applications in that IT service’s ecosystem.
Not only does this approach create greater management efficiencies, but it also ensures a consistent and structured approach to risk that is tied to business requirements, not to IT policies. This, in turn, forms the foundation for good risk management practices in IT service delivery.
For more on NetApp IT and storage service levels, consult the following resources:
- Blog: The Importance of IO Density in Delivering Storage as a Service (Part 1)
- Blog: The Role of QoS in Delivering Storage as a Service (Part 2)
- TechONTAP Podcast: Storage Service Design (Episode 18)
- TechONTAP Podcast: Storage Service Design – Data Protection (Episode 19)
- TechONTAP Podcast: The Return of Storage Service Design (Episode 33)
- TechONTAP Podcast: Data Governance & Operational Point Objectives (Episode 49)
This was published as part of the NetApp-on-NetApp blog series which features advice from NetApp IT subject matter experts who share their real-world experiences using NetApp’s industry-leading storage solutions to improve IT service delivery.