With digital transformation, data is emerging as the biggest asset of any organization, reshaping business models in ways that were unimaginable just a few years ago. Every company is becoming a data company. Although this transformation creates unprecedented possibilities for business, those possibilities can be realized only if businesses are able to extract value from their data. Digital transformation is disrupting data management, and the organizations that adapt can build efficiencies, enhance value, and create new revenue streams.

Hybrid cloud is the new standard architecture for data management and for accelerating business outcomes. To capitalize on these possibilities, enterprises today need systems so robust that performance and capacity can be taken for granted.

Even as this shift to data-centricity becomes the norm, IT is under constant pressure to reduce operational costs. These transformations are also shifting the personas that operate the infrastructure: instead of the point specialists of the last decade, IT generalists with broad skill sets now manage the full infrastructure stack. With the number of specialists dwindling, nobody has time to fine-tune storage system performance anymore.

Data management, meanwhile, can be time consuming and complex. These demands call for data centers that operate in a cloudlike manner, storage management that is proactive and predictable, and optimal use of compute and network resources. The emergence of IT as a service (ITaaS) and the cost models of the cloud are driving ever-higher efficiency expectations. System speeds keep growing, while the resources available to performance-tune applications keep shrinking.

Businesses need the operational agility to accommodate new applications as fast as teams demand them.

The writing on the wall is there for all of us to see: the way we have been managing data infrastructure was perfect for a world that no longer exists. Today, IT needs to deliver the right set of data services for its customers.

How do we lead the data revolution with fewer resources, more generalists, faster deployments, bigger-than-ever workloads, and simpler operations?

The answer lies in simplifying the way technology is managed: letting the technology manage the complexity for you. That is the biggest key to improving the user experience. So we built a product that embodies the best practices of our process, with a keen focus on ease of use, standardization, and control of overprovisioning: NetApp® Service Level Manager.

NetApp Service Level Manager (SLM) is built for designing data management services that deliver predictable performance for any workload. It enables storage consumption based on defined service levels, with performance guarantees enforced, providing a robust foundation on the storage side for building a cloudlike infrastructure.

SLM provides a framework for NetApp customers seeking greater resource utilization and lower operational expenses for their storage service offerings. It lets you perform storage operations more easily and at scale by using policies based on service-level objectives (SLOs), and it dramatically reduces the total cost of ownership of storage infrastructure by lowering operational costs and increasing storage utilization.
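To make the idea of SLO-based policy concrete, here is a minimal sketch of a service-level catalog and a lookup that matches a workload's requirements to a class. The class names and all figures are illustrative assumptions for this sketch, not SLM's actual published service-level definitions.

```python
# Hypothetical service-level catalog: each class maps to an SLO.
# Names and numbers are illustrative, not NetApp's published values.
SERVICE_LEVELS = {
    "value":       {"peak_iops_per_tb": 512,   "max_latency_ms": 17},
    "performance": {"peak_iops_per_tb": 4096,  "max_latency_ms": 2},
    "extreme":     {"peak_iops_per_tb": 12288, "max_latency_ms": 1},
}

def pick_service_level(required_iops_per_tb, required_latency_ms):
    """Return the first (cheapest) service level whose SLO satisfies the workload."""
    for name, slo in SERVICE_LEVELS.items():  # ordered cheapest first
        if (slo["peak_iops_per_tb"] >= required_iops_per_tb
                and slo["max_latency_ms"] <= required_latency_ms):
            return name
    return None  # no defined class meets the requirement
```

For example, a workload needing 2,000 IOPS/TB at under 5 ms latency would land in the hypothetical "performance" class, because the "value" class cannot supply the IOPS.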

 

SLM allows IT to achieve multiple storage and data management goals simultaneously, without armies of people creating and managing customized IT infrastructure. It also optimizes the placement of workloads, using artificial intelligence and machine learning (AI/ML) to adapt as performance service levels change. SLM uses the intelligence built into both the product and NetApp ONTAP® software to ensure that the defined SLOs can be met for each storage service level class. Service providers and managers of internal clouds can offer guaranteed performance, with the confidence of delivering on it. SLM offers the unique ability to simply assign a service level to a workload and let the system manage it. All of this functionality is exposed through REST APIs, so anyone automating at the data center level can integrate easily. In short, SLM drives agility, bringing new services online in a timely manner and delivering predictable performance at the right cost points.

This is as simple as it can get!
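Because SLM exposes its functionality through REST APIs, data center automation can drive it programmatically. The sketch below builds such a request with Python's standard library; the host, endpoint path, and payload field are hypothetical placeholders for illustration, so consult the SLM API documentation for the real resource names.

```python
import json
from urllib import request

# Hypothetical SLM host and API path -- placeholders, not the documented API.
BASE_URL = "https://slm.example.com/api/1.0"

def build_assign_request(volume_key, service_level_name):
    """Build (but do not send) a PUT assigning a service level to a volume."""
    body = json.dumps({"service_level": service_level_name}).encode()
    return request.Request(
        url=f"{BASE_URL}/volumes/{volume_key}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_assign_request("vol-42", "performance")
# request.urlopen(req) would send it to a live endpoint.
```

The point of the sketch is the shape of the workflow: assigning a workload to a service level is a single declarative call, which is what makes SLM straightforward to fold into existing data center automation.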

 

I believe that NetApp Service Level Manager embodies our consulting and solution delivery best practices in the simplest way possible. Every enterprise looking to accelerate data transformation in the easiest, most cost-efficient manner should put NetApp Service Level Manager at the top of its list.

For more information about NetApp Service Level Manager 1.2RC1, see the Release Notes.

Deepak Visweswaraiah

Deepak Visweswaraiah is the Managing Director for NetApp India and the Senior Vice President, Quality and Manageability Group (QMG). He is the site leader for NetApp's Global Centre of Excellence (GCoE), located in Bangalore. As the Senior VP of QMG, he is responsible for delivering on the value of NetApp's manageability products globally, giving users greater insight into managing the power of data. This charter also extends to ensuring that all NetApp products are inherently secure. A technology veteran with 30 years of experience, including 18 years in the high-tech industry in the United States, Deepak was previously the senior director and general manager of the Infrastructure Management Group at EMC India's CoE. Before joining EMC, he held a similar role as a senior director at i2 Technologies. Deepak holds an M.S. in Engineering from the University of Texas and an MBA from the University of Phoenix.
