Contrary to popular perception, containers have been around for quite some time. Containerization technology first emerged as chroot, introduced in Version 7 UNIX in 1979. Its next iteration came in 2000 as Jails, part of the FreeBSD operating system. In 2008, containers arrived on Linux and finally captured the popular imagination with Docker in 2013.
Docker made it easier for developers to package code, libraries, environment variables, and configuration files into a self-contained container image. With Docker, containers could be easily spun up and pushed into production environments. Kubernetes went a step further, making it possible to deploy and manage containers at scale.
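To make that packaging concrete, here is a minimal Dockerfile sketch; the `app.py` entry point, `requirements.txt`, and `my-service` image name are hypothetical examples, not from any specific project:

```dockerfile
# Base image supplies the OS userland and the Python runtime
FROM python:3.11-slim

# Copy the dependency list and install libraries into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# Environment variables and the start command are baked into the image
ENV APP_ENV=production
CMD ["python", "app.py"]
```

Once built with `docker build -t my-service .`, the same self-contained image can be spun up locally or pushed to a registry and run in production with `docker run my-service`.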
But what's the big deal with containers, you ask? To answer that, we have to pivot to another technology: virtualization. Containers and virtualization share the same goals. Both reflect the relentless march of data center infrastructure towards hyper-scalability, workload isolation, portability, and optimized resource usage.
Virtualization one-upped legacy bare metal infrastructure by introducing virtual machines (VMs). VMs are abstracted from the underlying physical hardware by a software layer called a hypervisor. Whereas a legacy bare metal server typically supported only a single application, virtualization allowed multiple VMs, each running isolated processes or applications, to run on a single bare metal server while sharing the underlying resources.
Before virtualization came along, most hardware saw only a 5 to 15% utilization rate. With virtualization, workload density improved dramatically, driving infrastructure utilization rates up. Stacking VMs onto bare metal servers also made the infrastructure more scalable; new VMs could be easily provisioned on top of the existing infrastructure to respond to increased demand.
Workload portability also improved, if only within the proprietary virtual machine formats of each virtualization vendor. Service providers remained locked into those formats, and migrating a VM meant migrating its entire operating system along with it.
VMs, however, still have a sizable resource footprint. Each VM on virtualized hardware has to run its own operating system, which in turn consumes RAM, storage, and CPU. Across hundreds of VMs in multiple virtualized environments, this overhead adds up to a considerable drain: resources that could otherwise be allocated to applications are spent simply because every VM runs its own operating system.
Containers address both these challenges. Containerization technologies like Docker introduced operating system (OS) level abstraction, enabling multiple containers to share the same underlying OS. When used on top of virtualized environments, containers improve VM workload density, since multiple containers can be packed onto the same VM.
Containers can also be deployed directly onto bare metal servers without a hypervisor. Doing so frees up the resources that would otherwise be allocated to individual VM OS instances in a virtualized environment. In other words, containerized environments on bare metal can support the same workloads with fewer resources than a purely virtualized environment. The trade-off is giving up the benefits that come with virtualization and having infrastructure as software.
Additionally, freeing workloads from OS dependencies makes them extremely portable. For DevOps teams, this means increased flexibility in where their applications can run. Containerized workloads are no longer locked into proprietary VM formats and can be moved seamlessly across local or cloud hosts.