We hope you enjoyed our Intel Builders webinar, “Enterprise Applications and Persistent Storage with Docker Containers.” As we continue our work with data center builders migrating to next generation tools and technologies, we’d like to share some of the things we have been working on in the container ecosystem.

 

We have observed a clear trend emerging: enterprises trying to figure out how much of their application portfolio can be containerized. This inevitably leads to an exploration of Docker and other options in the ecosystem. In the webinar, we introduced Docker fundamentals and architecture, elaborated on the types of persistent data in containerized environments, and discussed NetApp’s approach to container data persistence with the NetApp Docker Volume Plug-in (nDVP).

 

We had some interesting questions from the audience but couldn’t quite get to all of them, so we’ll address them here.

 

Is the NetApp Docker Volume Plugin supported by NetApp Support?
Currently nDVP has community support, available via http://community.netapp.com and http://netapp.io/slack.

 

Does nDVP work with Docker Datacenter?
Yes. Each host in the Swarm cluster needs to have nDVP installed and configured identically. This enables nDVP to provision volumes and keep them accessible as containers move between hosts in the cluster.
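
As a rough illustration (the config path, backend values, and the registered driver name below are placeholders drawn from nDVP’s documentation and will differ per environment), every Swarm node carries the same configuration and starts the plug-in the same way, so any node can create or attach the same named volumes:

```
# Run on every node in the Swarm cluster so the configuration is identical.
# /etc/netappdvp/config.json is the documented default location; all values
# below are placeholders for an ONTAP NAS backend -- adjust for your setup.
sudo mkdir -p /etc/netappdvp
sudo tee /etc/netappdvp/config.json > /dev/null <<'EOF'
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.10",
    "dataLIF": "10.0.0.11",
    "svm": "svm_docker",
    "aggregate": "aggr1",
    "username": "vsadmin",
    "password": "secret"
}
EOF

# Start the plug-in; "netapp" is the driver name we assume it registers with.
sudo netappdvp --config=/etc/netappdvp/config.json &

# From any node, the same named volume can now be created and mounted.
docker volume create -d netapp --name appdata
```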

 

Does the nDVP do replication?
No, nDVP does not manage replication for storage objects. This is something that needs to be configured externally.

 

What would be the benefit of putting a database in a container?
Quickly creating test and development instances of a database in containers, leveraging clones of the production data set, can drastically reduce the time needed for dev and QA testing of applications. Additionally, having the database in a container means that the application as a whole can be defined using a framework like Docker Compose, enabling developers to quickly create, and recreate, the entire application environment on demand.
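
As a minimal sketch (the images, service names, and the "netapp" volume driver name are illustrative assumptions), a Compose file can declare the database’s data volume on the nDVP driver so the whole stack, persistent storage included, can be brought up and recreated on demand:

```
# Hypothetical two-tier application: only the database service mounts a
# persistent, nDVP-backed volume; the front end stays stateless.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    image: nginx:alpine                  # stateless front end, no volume
    ports:
      - "8080:80"
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data  # persistent database files
volumes:
  dbdata:
    driver: netapp                       # assumed nDVP driver name
EOF

# Create (or later recreate) the entire application environment in one step.
docker-compose up -d
```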

 

Does every container need a volume from the nDVP?
No. Most microservice-based applications have only one, or a small number of, component services that provide persistence to the rest of the application. Those are the only containers that need a persistent volume backed by an enterprise storage array managed through nDVP.

 

What is the relationship between a Docker container and Kubernetes?
Kubernetes is a container orchestrator: it deploys the multiple containers that together make up an application across the hosts in a Kubernetes cluster. Docker Engine is the execution engine that runs each container; it does not, by itself, orchestrate the deployment of an application.

What happens when you need to move applications across zones? For example, if an application runs in a data center on the west coast and you want to migrate it to a data center on the east coast — is there an intelligent way to achieve that?
nDVP does not provide this intelligence; migrating an application between sites must be handled at the application layer. What nDVP does is enable seamless consumption of persistent storage from the portfolio of NetApp storage products, including ONTAP, SolidFire, and E-Series, by application developers, owners, and administrators. This means an application team can get the storage they need for a container instance when and where they need it, without waiting on the infrastructure team to provision and allocate capacity.
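
For example (a hedged sketch; the "netapp" driver name and the -o size option follow nDVP’s documented usage but can vary with how the plug-in was registered and which backend is configured), an application owner can provision and consume capacity directly from the Docker CLI, with no storage ticket required:

```
# Self-service provisioning: request a new volume of a given size from the
# configured NetApp backend, then hand it straight to a container.
docker volume create -d netapp --name app1_data -o size=10g
docker run --rm -v app1_data:/data alpine sh -c 'echo hello > /data/hello && cat /data/hello'
```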

 

We highly encourage you to read our white paper, coauthored with Intel and Docker, titled How Intel and NetApp Bring Enterprise-Grade Storage to the Docker Ecosystem, which was the foundation for this webinar and blog series. It covers the basics of building a container infrastructure architecture that can host a variety of common enterprise applications.

 

We invite you to join the conversation and participate in our open source activities at thePub, NetApp’s developer community, and to join our Slack channel to speak directly with the team.

Andrew Sullivan

Andrew has worked in the information technology industry for over 10 years, with a rich history in database development, DevOps, and virtualization. He is currently focused on storage and virtualization automation, and on driving simplicity into everyday workflows.