This post is part 1 of a three-part series that explains how infrastructure analytics can be used to improve IT service delivery and reduce costs in a hybrid cloud environment. For a deeper dive, download the white paper “Data Insights and Control for the Service Provider Business Model.”


Whatever your role in IT today, you are a service provider to someone, whether you realize it or not. If you’re a storage specialist, you’re a service provider to the VM administrators. If you’re a VM administrator, you’re a service provider to the application team. And if you’re part of the application team, you’re a service provider to the business. That business might be your own company or another company, but you’re providing a service, and you have SLAs you are responsible for delivering.


In the previous era of IT, it was easier. IT teams worked behind the scenes and set their own agenda. You could do your day job without getting challenged too much. Now, with cloud alternatives becoming pervasive, the onus is on the entire IT team to demonstrate that it is providing the services the business needs and delivering value for both on-premises and cloud-based workloads.


To do that, you need analytics tools that go deeper than the management tools you relied on in the past. In this series of blog posts, I’m going to look at the true value of infrastructure analytics and explain how you can use the data insights and control provided by analytics to deliver meaningful automation, control costs, and rationalize your company’s cloud usage.

Benefits of Infrastructure Analytics vs. Traditional SRM

The first thing you need to realize is that there’s a big difference between infrastructure analytics and storage resource management (SRM) tools you might have used in the past. SRM tools essentially provide you with a big spreadsheet with a bunch of speeds and feeds and IOPS. These tools provide raw data—dozens of columns and thousands of rows of it—but they deliver little in the way of value. Analytics turn all of that data into information that can be used to make intelligent decisions and increase the level of control you maintain over your environment.


You should always keep in mind that the information produced by analytics must be relevant to the person consuming it. For example, I would never go to a CFO and say, “If you install this tool, you can eliminate the need for 500TB of existing storage infrastructure.” The relevant information for a CFO is how much money the company will save after all the costs have been added up: capital costs, power/space/cooling, management costs, and so on, as illustrated below:


Sample cost summary
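The kind of roll-up behind a summary like this can be sketched in a few lines. Note that every figure below is an illustrative assumption for the sake of the example, not NetApp pricing or actual OCI output:

```python
# Hypothetical cost model: all per-TB figures are illustrative assumptions,
# not NetApp pricing or numbers produced by OCI.
def annual_storage_cost(capacity_tb: float,
                        capex_per_tb: float = 500.0,        # purchase cost per TB
                        amortization_years: int = 5,
                        power_cooling_per_tb: float = 60.0,  # per TB per year
                        mgmt_per_tb: float = 90.0) -> float: # per TB per year
    """Total yearly cost of owning `capacity_tb` of storage."""
    capex_yearly = capacity_tb * capex_per_tb / amortization_years
    opex_yearly = capacity_tb * (power_cooling_per_tb + mgmt_per_tb)
    return capex_yearly + opex_yearly

# Estimated annual savings from eliminating 500 TB of existing infrastructure:
print(f"${annual_storage_cost(500):,.0f}")  # $125,000
```

The point is not the arithmetic, which is trivial, but that the CFO sees one dollar figure rather than a capacity number.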


A problem with many analytics tools today is that they claim to do everything. Unfortunately, they often do it all badly. This is where NetApp® OnCommand® Insight (OCI) software is different. Rather than trying to do everything, OCI is designed to be fully open so it can integrate with other tools to provide greater insight. OCI is complementary to ITIL, asset management, orchestration, and business reporting tools. It provides a single source of truth for your entire IT infrastructure based on technical properties, configuration, and performance information.


A great example of this is OCI integration with ServiceNow. OCI knows all about the infrastructure, but doesn’t know anything about the business. ServiceNow knows everything about the business, but not a whole lot about the infrastructure. Putting the two together makes perfect sense. Knowing that a storage array is growing 100% year on year doesn’t add much value, but knowing which business units are consuming those resources and how much they are consuming has immense value. OCI makes deeper insights like this possible, while enabling more meaningful automation.
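Conceptually, the join looks like the sketch below. The field names and figures are hypothetical stand-ins, not actual OCI or ServiceNow schemas; in practice OCI surfaces this through its integration rather than hand-rolled code:

```python
# Illustrative join of infrastructure data (the kind of thing OCI reports)
# with business ownership data (the kind of thing ServiceNow holds).
# All names and numbers here are hypothetical.
oci_volumes = [
    {"volume": "vol_finance_01", "array": "array-a", "used_tb": 40},
    {"volume": "vol_hr_01",      "array": "array-a", "used_tb": 5},
    {"volume": "vol_finance_02", "array": "array-b", "used_tb": 25},
]
servicenow_owners = {
    "vol_finance_01": "Finance",
    "vol_finance_02": "Finance",
    "vol_hr_01":      "HR",
}

def usage_by_business_unit(volumes, owners):
    """Roll volume-level capacity up to the business units consuming it."""
    totals = {}
    for vol in volumes:
        bu = owners.get(vol["volume"], "Unassigned")
        totals[bu] = totals.get(bu, 0) + vol["used_tb"]
    return totals

print(usage_by_business_unit(oci_volumes, servicenow_owners))
# {'Finance': 65, 'HR': 5}
```

Neither dataset answers the chargeback question on its own; combined, they tell you exactly who is driving that 100% year-on-year growth.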

Paving the Road to Process-Based Automation

Many storage vendors provide tools to automate repetitive tasks such as LUN creation. However, in the larger scheme of things, task-based automation provides limited value. For example, if you automate the creation of a LUN, it might save a single administrator only a few minutes of time.


The real value lies in process-based automation. Before the LUN is created, all sorts of questions should be asked and answered. For example, if an internal customer goes to a storage admin and says, “I want a platinum LUN” and the admin simply fulfills the request, it usually results in a waste of resources. Instead, you want to compare the resources needed for the planned workload to the resources consumed by similar workloads that are already in use. You might then conclude that a less costly gold LUN is more appropriate.
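That comparison step can be sketched as a simple tier recommendation based on what similar workloads actually consume. The tier names and IOPS ceilings below are illustrative assumptions, not OCI defaults:

```python
# Hypothetical service tiers and IOPS ceilings, for illustration only.
TIER_MAX_IOPS = {"bronze": 1_000, "silver": 5_000,
                 "gold": 20_000, "platinum": 100_000}

def recommend_tier(similar_workload_iops):
    """Pick the cheapest tier whose IOPS ceiling covers observed peak demand."""
    peak = max(similar_workload_iops)
    for tier in ("bronze", "silver", "gold", "platinum"):
        if peak <= TIER_MAX_IOPS[tier]:
            return tier
    return "platinum"

# A "platinum" request whose peer workloads only ever peak around 12,000 IOPS:
print(recommend_tier([8_500, 11_200, 12_000]))  # gold
```

The customer asked for platinum; the observed data says gold is sufficient, and the difference is pure savings.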


Next you need to consider additional questions, such as:

  • Which arrays are suitable for a gold LUN?
  • Does that array have enough capacity?
  • Is the array going to run out of capacity or go out of service in the next six months?
  • Is the storage local to the compute resources needed by this workload?
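Each of those questions is effectively a filter over the candidate arrays. A minimal sketch, assuming a hypothetical data model rather than real OCI queries:

```python
# Each provisioning question above becomes one filter condition.
# The array records and field names are hypothetical, for illustration;
# a real implementation would query OCI rather than an in-memory list.
arrays = [
    {"name": "array-a", "tier": "gold", "free_tb": 120,
     "months_until_full": 14, "months_until_eos": 36, "site": "dc-east"},
    {"name": "array-b", "tier": "gold", "free_tb": 8,
     "months_until_full": 3, "months_until_eos": 4, "site": "dc-east"},
]

def candidate_arrays(arrays, tier, needed_tb, site, horizon_months=6):
    """Return arrays that pass all four provisioning checks."""
    return [a["name"] for a in arrays
            if a["tier"] == tier                         # suitable for a gold LUN?
            and a["free_tb"] >= needed_tb                # enough capacity today?
            and a["months_until_full"] > horizon_months  # capacity runway?
            and a["months_until_eos"] > horizon_months   # not retiring soon?
            and a["site"] == site]                       # local to the compute?

print(candidate_arrays(arrays, "gold", 20, "dc-east"))  # ['array-a']
```

In a real workflow, the answer feeds directly into the orchestration tool that performs the provisioning, which is what makes the automation process-based rather than task-based.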

OCI’s ability to answer these types of questions means that you can potentially extend your automation efforts and control the entire end-to-end provisioning process, including storage, networks, and virtual machines. Instead of saving one storage administrator a few minutes of time, you can save hours or days of effort on the part of many administrators: storage administrators, virtual machine administrators, capacity managers, and others. Many current OCI customers are working to elevate process-based automation to this level.

Turn Your Data into Information

In this post, I’ve provided an overview of how OCI analytics can turn data insights into actionable information. That information becomes extremely useful for decision support and for facilitating process-based automation that streamlines IT processes both on premises and in the cloud. In my next post, I’ll look at some ways that analytics can help you get IT costs under control.

More Information

Discover how NetApp customers are benefiting from infrastructure analytics in these blog posts and by attending the NetApp Insight conference:

• Part 2: How to Uncover New Savings with Infrastructure Analytics
• Part 3: How to Determine Which Workloads Belong in the Cloud
• NetApp Insight 2017: OCI Breakout Sessions

Joshua Moore

Joshua is a Principal Technologist within the Cloud Analytics team at NetApp, and has spent many years in the field helping clients achieve their business goals with NetApp technology. He has a service provider background; prior to his tenure at NetApp, he held architecture and service strategy roles within a global systems integrator.

His primary focus is on identifying where and how Cloud Analytics can help organizations better meet their service level objectives, cost constraints and business goals in the near term, and more effectively realize their hybrid cloud strategy in the long term.
