Photo credit: Exam / Alberto G. / CC

Pop quiz, SAT analogy-style:


server virtualization : virtual machine :: operating system virtualization : ____?____


The college-entry-exam-anxiety portion of this blog is now over, and if you answered “container,” congratulations! You’ve hit on one of the hottest trends in IT. A late 2015 report from Datadog, a SaaS-based monitoring and analytics platform for IT infrastructure, operations, and development teams, indicated that container usage was up 5x in just one year.


Containers are facilitating rapid, agile development like never before. But questions still persist on container basics, namely:

  1. How do they differ from virtual machines?
  2. If containers are, by their nature, transitory and disposable, how can you use them alongside persistent storage?
  3. How do they complement existing virtualization and/or orchestration solutions?

In part one of a three-part blog series, we’ll cover the first question: how virtual machines (VMs) are different from containers.


What are virtual machines (VMs)?

As server processing power and capacity increased, bare metal applications couldn’t take advantage of the new abundance of resources. Thus VMs were born: software running on top of a physical server to emulate a particular hardware system. A hypervisor, or virtual machine monitor, is the software, firmware, or hardware that creates and runs VMs. It sits between the OS and the hardware and is what makes it possible to virtualize the server.


Within each virtual machine runs a unique operating system. VMs with different operating systems can run on the same physical server – a Unix VM can sit alongside a Linux-based VM, and so on. Each VM has its own binaries/libraries and the application(s) it services, and a VM can be many gigabytes in size.


Server virtualization provided a variety of benefits, one of the biggest being the ability to consolidate applications onto a single system. Gone were the days of one application per server; virtualization ushered in cost savings through a reduced footprint, faster server provisioning, and improved disaster recovery (because the DR site hardware no longer had to mirror the primary data center). Development also benefitted from this physical consolidation, because greater utilization on larger, faster servers freed up now-idle machines to be repurposed for QA, development, or lab gear.


What are containers?

Operating system (OS) virtualization has grown in popularity over the last decade as a means to enable software to run predictably and well when moved from one server environment to another. Containers provide a way to run these isolated systems on a single server/host OS.


Containers sit on top of a physical server and its host OS, e.g. Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. The shared components are read-only, and each container gets its own writable layer through a unique mount. This makes containers exceptionally “light” – containers are only megabytes in size and take just seconds to start, versus minutes for a VM.
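This layered, mostly read-only structure is easy to see in a container image definition. As a minimal sketch (the base image, package, and `app.sh` script here are illustrative, not from a real project), a Dockerfile builds a stack of read-only layers that all containers started from the image share, while each running container writes to its own thin layer on top:

```dockerfile
# Each instruction below produces a read-only image layer.
# Containers started from this image share those layers;
# any writes land in a per-container writable layer on top.
FROM alpine:3.18                      # shared base layer: a minimal Linux userland
RUN apk add --no-cache curl           # layer adding extra binaries/libraries
COPY app.sh /usr/local/bin/app.sh     # layer holding the (hypothetical) application
CMD ["/usr/local/bin/app.sh"]         # metadata only; adds no filesystem layer
```

Because the base and library layers are shared and immutable, ten containers from this image cost roughly one copy of the image on disk plus ten small writable layers – which is a big part of why containers stay megabytes-light.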


The benefits of containers often derive from their speed and lightweight nature; many more containers than traditional VMs can be packed onto a single server. Containers are “shareable” and can be used across a variety of public and private cloud deployments, accelerating dev and test by quickly packaging applications along with their dependencies. Additionally, containers reduce management overhead: because they share a common operating system, only a single operating system needs care and feeding (bug fixes, patches, etc.). This is similar to what we experience with hypervisor hosts – fewer management points, but a larger fault domain. Also, because of the shared kernel, you cannot run a container with a guest operating system that differs from the host OS – no Windows containers sitting on a Linux-based host.



VMs and containers differ on quite a few dimensions, but the primary distinction is that containers virtualize the OS so that multiple workloads can run on a single OS instance, whereas VMs virtualize the hardware so that multiple OS instances can run on it. Containers’ speed, agility, and portability make them yet another tool to help streamline software development.


In our next All About Containers blog post, we’ll cover the container use cases that are strong fits for persistent storage.


In the meantime, check out SolidFire’s newly released Docker plug-in on GitHub, and view our Docker plug-in demo below.


Kelly Boeckman