Integrated infrastructure, part 1

In my last post, I explored why hybrid cloud management is becoming the essential norm.

But, to be honest, business leaders don’t spend much time thinking about management, or infrastructure, or anything else about IT beyond its ability to thrill users. And thrilling users is harder than ever.

Competitive innovation is a constant. Last year’s leading capability is this year’s has-been. High latency is a deal killer. Users have less patience with subpar service delivery, and now that brand loyalty is a thing of the past, your users will leave—and your competitors will win.

If you’re feeling a little stressed and disheartened by this, we can’t blame you. But the good news is that we have strong ideas about how to help you thrill users.

The problem: IT fragmentation

We see several pervasive challenges:

  • The average enterprise uses 1,295 cloud services to serve emerging requirements.
  • Leading organizations are shifting toward microservices to facilitate agility, continuous improvement, and accelerated value.
  • Serving users means getting really good at edge computing.

These challenges reflect a fundamental problem that we call IT fragmentation. Resources, services, and data are scattered across environments, which leads to inefficient utilization, management that borders on impossible, and cost overruns that are hard to rein in.

That’s a big problem—but also a big opportunity. We know that organizations need distributed architectures, from core to cloud to edge, to address customer requirements. But distributed architectures can lead to chaos unless they’re managed and optimized.

This series of blog posts will help you explore the idea of seamless, integrated infrastructure. We’ll look at how getting your infrastructure under one umbrella is the way to deliver the right capabilities to the right users, in the right place, at the right time, for the right cost.

We aren’t saying that there’s such a thing as a fully integrated infrastructure across multiple clouds, on-premises environments, and edge—yet. We’re just exploring a path that’s guided by a vision of integrated infrastructure. In this vision, data and services move freely between platforms, clouds, and environments without losing visibility, giving up control, or letting costs spiral. And it’s possible to take the first steps toward this integrated vision by adopting foundational technologies that serve your organization’s needs for years to come.

Of course, seeing as we’re NetApp®, we think one of those foundational technologies should be storage.

Fully integrated infrastructure and your data storage strategy

The thing is, the days of storage being a capacity play are gone. Your storage has to have enough capacity for your data, but it also has to provide dozens of other capabilities to satisfy your requirements. Capabilities like data protection, resilience, analytics, and performance optimization are all table stakes.

But there’s another capability that’s easy to overlook if you’re accustomed to an on-premises data center or a single cloud—it’s cross-platform compatibility.

Here’s a simple question: How many APIs do you want for your storage? The right answer is that you need consistent APIs across all platforms and places, protocols and drive types, vendors and technologies. But most organizations aren’t there yet. Their on-premises storage doesn’t use the same APIs as their cloud storage. Their block storage doesn’t use the same APIs as their file storage.
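To make the idea concrete, here’s a minimal sketch of what a single API surface across heterogeneous storage might look like. This is purely illustrative—the class and method names (`StorageBackend`, `create_volume`, and so on) are hypothetical, not drawn from any real storage SDK:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One interface, regardless of where the volume actually lives."""

    @abstractmethod
    def create_volume(self, name: str, size_gib: int) -> str:
        """Provision a volume and return its identifier."""

class OnPremBlockBackend(StorageBackend):
    def create_volume(self, name: str, size_gib: int) -> str:
        # In a real system, this would call an on-premises array's API.
        return f"onprem://{name}?size={size_gib}"

class CloudFileBackend(StorageBackend):
    def create_volume(self, name: str, size_gib: int) -> str:
        # In a real system, this would call a cloud provider's file-storage API.
        return f"cloud://{name}?size={size_gib}"

def provision_everywhere(backends, name, size_gib):
    # The calling code is identical no matter which backend it talks to --
    # that's the payoff of cross-platform compatibility.
    return [b.create_volume(name, size_gib) for b in backends]

volumes = provision_everywhere(
    [OnPremBlockBackend(), CloudFileBackend()], "analytics", 500
)
print(volumes)
```

Without a common interface like this, every environment needs its own code path, its own error handling, and its own operational tooling—exactly the fragmentation described above.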

In a world of microservices, agility is a key to success. Any obstacles drive up costs and complexity and make innovation difficult. To avoid these problems, a forward-looking data storage strategy ought to consider cross-platform compatibility.

Edge computing challenges and possibilities

Cutting complexity makes sense for the core and the cloud, but how does it work for edge computing? After all, as Gartner points out, most edge environments are entirely customized, not standardized. And it’s hard to fully integrate mature infrastructure with first- and second-generation infrastructure like edge computing that’s in a constant state of evolution and innovation.

But here’s the thing. Organizations need to integrate the edge. Gartner believes that, by 2022, more than 50% of enterprise data will be created and processed outside the data center or cloud. That data has real value, and it can’t sit in an isolated pool built with different tools, techniques, and technologies. Edge datasets need to be governed and analyzed just like any source of data in the enterprise. These infrastructures should be as integrated as possible so that edge data can be available to emerging services, whether they live in the edge, the core, or the cloud.

That integration is challenging, because many edge environments are built with solutions that are unlike anything in the data center or cloud. And many of their capabilities are immature or still works in progress: operations, security and compliance, orchestration, and optimization all run autonomously, disconnected from the rest of the enterprise. These challenges have mostly been worked out in the core and the cloud, but they haven’t been fully solved at the edge.

So, from a storage perspective, what are the right priorities at the edge?

First, versatility matters. It would be nice to have a single platform that can serve multiple needs, such as being able to handle both block and file. It should support capacity requirements and offer essential services for insight and resilience without compromise.

At the same time, there’s a need for simplicity. Some data center platforms are overfeatured for the edge. Smart vendors provide edge storage that can run on smaller devices or in a container, built with a subset of core or cloud storage functionality.

Finally, compatibility matters. Storage should be built for the edge, yet compatible with cloud and core; the edge shouldn’t be cut off from the rest of the enterprise. Ideally, edge storage should be provisioned and managed seamlessly and make data migration to core or cloud easy. And it should use the same APIs as core or cloud storage to simplify development efforts and reduce the risk of error.

Conclusion

Be sure to check back for part 2 of this series. We’re going to dive into the specifics of managing a standardized data architecture that includes the edge—and how to prepare for the coming data explosion. Think security, management, and availability.

Meg Matich

Meg Matich is a tech and culture blogger and a contributor to NetApp's cloud blog series.
