As the senior IT manager leading the engineering team that designs and deploys data storage solutions at NetApp, I know two things:
- Cloud, DevOps, *aaS (anything as a service) and growing infrastructure technologies such as Kubernetes are changing the way we deliver IT and deploy modern applications.
- Despite the number of folks who unwittingly or purposely trivialize it, storage infrastructure is still needed to deliver these services.
The challenge for teams like mine is defining how we deliver storage for these new environments and how the roles of storage engineers and administrators will change as a result. Much has been said about this shift, but many are still struggling with the transition and what it means. That kind of change makes some people nervous.
Let’s be clear. This new world of technology does not spell the end for traditional storage engineers. It’s not something to fear, nor does it need to be an insurmountable challenge.
We just need to be prepared and to adjust appropriately. When it comes to storage infrastructure, my team will continue to have similar concerns and worries to those we have today, such as performance, capacity, and security. These won’t change. What is changing, however, are the consumers of our storage services, the nature of their consumption, and the new skills needed to serve them.
The Future of Storage Consumption
Let’s talk about this new type of consumer and new consumption patterns. Many IT shops like ours are focused on improving the speed of infrastructure deployment and maximizing the value of automation to empower rapid and constant application deployment. To do this, we are leveraging hybrid multicloud environments managed through Kubernetes and DevOps practices using microservices-based architectures running in containers. With these advances comes a change in who we consider to be the “storage consumer.”
From my perspective, the goal is no longer to hand-hold an application architect, DBA, or project manager, or to manually configure and hand over custom storage volumes, LUNs, or arrays. Instead, we must get out of the way and provide integration points and automation that let system processes consume storage services on demand. The new storage consumer is the automated platform responsible for the continuous integration/continuous delivery (CI/CD) processes that deploy new applications and application components.
The automated platform (or orchestrator) needs the access, controls, and automated endpoints to self-serve the storage infrastructure that it needs to consume. In this world, storage engineers should be concerned with building the standards, rules, and automation used by the platform to consume storage. They should not be the entity provisioning individual storage resources, such as volumes.
For example, if we are talking about a NetApp® ONTAP® environment, the engineer would still be responsible for defining the physical aspects of the cluster, the way that storage virtual machines (SVMs) are configured, and how the system is connected into the network. The orchestrator, on the other hand, should have the freedom to talk directly to the SVM and to provision all the volumes it needs. As storage engineers, we need to empower the automated platform (our customer) to consume storage resources rapidly and dynamically.
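To make that division of labor concrete, here is a minimal sketch of what the orchestrator's side of the contract might look like: a self-service request reduced to building the body of a volume-create call against the ONTAP REST API. The endpoint host, SVM name, and snapshot policy below are illustrative assumptions, not values from our environment.

```python
import json

def build_volume_request(name: str, size_bytes: int, svm: str = "svm_apps") -> str:
    """Build a JSON body for an ONTAP REST volume-create call.

    The orchestrator supplies only a name and a size; everything else is
    pre-approved policy baked in by the storage engineers. The SVM name
    "svm_apps" and the snapshot policy are hypothetical examples.
    """
    body = {
        "name": name,
        "size": size_bytes,
        "svm": {"name": svm},
        # Engineer-defined guardrail applied to every self-service volume:
        "snapshot_policy": {"name": "default"},
    }
    return json.dumps(body)

# A CI/CD pipeline asking for a 10 GiB volume for build artifacts:
payload = build_volume_request("ci_artifacts_vol", 10 * 1024**3)
print(payload)
```

The point of the sketch is the shape of the interface: the consumer never touches aggregates, networking, or SVM configuration, because those were settled by the engineer up front.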
New Skills for the Age of Automation
This brings me back to my second point. Automation is vital to the modern delivery of storage services. As our customer evolves, so must we. Building, maintaining, and operating a dynamic storage infrastructure that is programmatically accessed by self-service processes requires storage engineers to develop new technical skills. Configuration policies should be defined in code and applied to the systems automatically. Additionally, storage engineers are likely to find themselves creating some of the code used by the orchestrator (the customer) to provision storage components.
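As a minimal sketch of what "policy defined in code" can mean in practice: instead of a wiki page saying "volumes get a 5% snapshot reserve and must stay under 1 TiB," the rule lives in a function that the automation calls on every request. The limits and default names here are hypothetical team standards, not NetApp defaults.

```python
MAX_VOLUME_BYTES = 1 * 1024**4  # 1 TiB cap: a hypothetical team standard

def apply_volume_policy(request: dict) -> dict:
    """Validate a self-service volume request and fill in team defaults."""
    if request["size"] > MAX_VOLUME_BYTES:
        raise ValueError(f"{request['name']}: size exceeds the 1 TiB standard")
    # Defaults the consumer never has to think about:
    request.setdefault("snapshot_reserve_percent", 5)
    request.setdefault("qos_policy", "standard")
    return request

# The orchestrator sends a bare request; policy code fills in the rest:
vol = apply_volume_policy({"name": "app_db", "size": 200 * 1024**3})
print(vol["qos_policy"])  # standard
```

Because the policy is code, it can be version-controlled, reviewed, and tested like any other software artifact, which is exactly the shift this role change asks of us.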
As engineers, we need to adopt new skills that build on top of our storage knowledge. It’s important to build skills and familiarity with automation technologies like Ansible and Python, and to understand new cloud infrastructure technologies like Kubernetes. An understanding of how these technologies enable, accelerate, and interact with storage infrastructure is key. All standards need to be defined in code and hosted in repositories like Git instead of KB articles or Wiki pages.
Existing storage skills will remain relevant, but to succeed at delivering infrastructure as code, we need to focus on this new “consumer” and evolve our skillsets. My advice to all storage admins: learn some coding, learn to manage your infrastructure with code, and become familiar with the automation features of data management platforms such as NetApp® ONTAP®, the Ansible modules for NetApp ONTAP, NetApp Element® software, and NetApp E-Series systems, as well as with provisioning Docker volumes by using NetApp Trident.
The NetApp on NetApp blogs feature advice from NetApp IT subject matter experts who share their real experiences in using industry-leading NetApp data management solutions to support business goals. To learn more, visit www.NetAppIT.com.