Containerization & Virtualization

Containers and virtual machines both allow you to abstract the workload from the underlying hardware, but there are important differences between the two approaches that need to be taken into account.

Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of loading an application onto a virtual machine, as the application can be run on any suitable physical machine without any worries about dependencies.
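As a minimal sketch of that encapsulation, a Dockerfile declares everything the application needs inside the image, so the container runs unchanged on any host with a container engine (the base image, file names, and app here are illustrative placeholders, not from any particular project):

```dockerfile
# Illustrative only: image names and files are placeholders.
FROM python:3.12-slim      # base image supplies the runtime and system libraries
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependencies live inside the image
COPY app.py .
CMD ["python", "app.py"]   # the container runs just this one process, not a full OS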

A virtual machine mimics a complete server. In a typical virtualized server, each VM “guest” includes a complete operating system along with any drivers, binaries or libraries, and then the actual application. Each VM then runs atop a hypervisor, which itself runs on a host operating system and in turn operates the physical server hardware. It’s a tried-and-true approach, but it’s also easy to see how each iteration of the guest operating system and supporting binaries can cause duplication between VMs; it wastes precious server memory, which limits the number of VMs that each server can support.

Containerization has gained recent prominence with the open-source Docker project. Docker containers are designed to run on everything from physical computers to virtual machines, bare-metal servers, OpenStack cloud clusters, public cloud instances and more.


Containerization is actually just another approach to virtualization. Where it differs, though, is in dispensing with the conventional virtual machine (VM) layer, which is quite a radical departure from the way we’ve thought of cloud computing up until now.


In a classic infrastructure-as-a-service (IaaS) architecture — think Amazon Web Services, Microsoft Azure, or a VMware-based private cloud — you distribute your computing on virtual machines that run on, but are not tied to, physical servers. Over time, cloud datacenter operators have become very good at automating the provisioning of those VMs to meet demand in a highly elastic fashion — one of the core attributes of cloud computing.

Advantages over VMs

The trouble with VMs is that they're an evolution from an original state in which every workload had its own physical server, and all the baggage that brings with it makes them very wasteful. Each VM runs a full copy of the operating system along with the various libraries required to host an application. The duplication leads to a lot of memory, bandwidth and storage being used up unnecessarily.

Docker may have been the first to bring attention to containerization, but it’s no longer the only container system option. CoreOS recently released a streamlined alternative to Docker called Rocket.

And Canonical, developers of the Ubuntu Linux-based operating system, has announced the LXD containerization engine for Ubuntu, which will also be integrated with OpenStack.

Microsoft is working on its own containerization technology called Drawbridge, which will likely be featured in Windows Server and Azure in the future. And Spoon is another Windows alternative that will enable containerized applications to run on any Windows machine that has Spoon installed, regardless of the underlying infrastructure.


A technology like Docker is instrumental in automating the principles of DevOps. Its predefined library images are operationally pretested, allowing developers to just go ahead and deploy. Chris Swan, CTO of CohesiveFT, says that this encourages the practice of rapid testing and 'fast failing' while iterating.


The other shoe to drop when it comes to containers is security. They don't offer much of it natively, which is a huge problem for a public cloud environment.

“Containers have critical limitations in areas like OS support, visibility, risk mitigation, administration, and orchestration. This is especially true for the newer brands of containerization which do not (yet) have a significant management and security ecosystem, in contrast to more mature solutions like Solaris containers,” said Andi Mann, vice president and a member of the Office of the CTO at CA.

The problem is that containers share the same hooks into the kernel, said Hightower. "That's a problem because if there are any vulnerabilities in that kernel in doing multitenancy, and if I know of an exploit the others don't know about, I have a way to get into your containers. Containers have not yet demonstrated that they can deliver the same secure boundaries that a VM still has."
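The shared-kernel point can be seen directly on any Linux host with Docker installed (an illustrative sketch; it assumes Docker is present and uses the public `alpine` image):

```shell
# Containers share the host's kernel; a VM guest boots its own.
uname -r                           # kernel version on the host
docker run --rm alpine uname -r    # the same kernel version inside the container
```

Because both commands report the same kernel, a kernel exploit reachable from inside one container threatens every container on that host, whereas a VM escape must additionally break out of the hypervisor.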

Hightower said someone demonstrated, just a few months ago, how to escape a Docker container and gain full access to the host server, so no one is ready to say you can get rid of virtual machines just yet, especially in a public cloud environment like Amazon or Google.

“You don’t want to have untrusted actors on the same server. A single organization can still get benefit, but a cloud provider might not want to spin their whole business on containers with no VMs to isolate their business,” he said.

The second area of weakness for containers is that they are not yet proven at scale. "Containers face many challenges to scale. It's one thing to do a Web app but it's another thing to do a multitenant, complex enterprise app with a lot of data of interest," said Lyman.

But that can be turned into a positive as well. "Containers are a really good match for microservices, where you chop up the app into chunks for different teams, so everyone works on their specialty area," said Hightower. "Containers are good for that use case."
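That microservices fit can be sketched with a Compose file that gives each team's service its own container (service names and images below are hypothetical, chosen only to illustrate the one-container-per-service pattern):

```yaml
# Illustrative docker-compose.yml: one container per microservice.
services:
  web:                          # front-end team's service
    image: example/web:latest
    ports: ["8080:80"]
    depends_on: [api]
  api:                          # API team's service
    image: example/api:latest
    environment:
      DB_HOST: db
  db:                           # data team's service
    image: postgres:16
```

Each service is built, versioned and deployed independently, which is exactly the team-level decoupling Hightower describes.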

Co-Existence with Virtualization

Because they have different strengths, weaknesses and functions, virtual machines and containers should not be viewed as competitors but as complementary.

"Containers and VMs are destined to be close companions in the cloud of clouds. Just as one cloud is not enough, so too one virtualization technology is not enough. Each technology provides a different response to different use cases, and in many cases the two work together to solve those challenges," said Mann.

“Containers are especially good for early development, for example, because the speed of manual provisioning/deprovisioning greatly outweighs the improved manageability of a virtual machine in an environment where everything is new and rapidly changing,” he added.

The two are very much complementary, Hightower adds. "Now you need fewer VMs and probably can go back to a bare metal server with no virtualization. If you are good at VMs, you can use containers for everything."


Containers have drawn interest not only from startups like Docker, CoreOS and Shippable but also from big names: Google with Kubernetes, IBM supporting Docker on its Bluemix PaaS, Amazon with container support on Elastic Compute Cloud (EC2), Red Hat with its OpenShift PaaS, and HP and Microsoft with efforts of their own.

Containerization is an emerging technology with compelling use cases, and like any new technology it has its limitations. But with continued development it has a lot of potential, and it could see even wider adoption once those limitations are lifted or reduced to a sizable extent.
