Contrary to popular perception, containers have been around for quite some time. Containerization technology first emerged as chroot, a system call introduced in Version 7 UNIX in 1979. Its next iteration came in 2000 as jails, part of the FreeBSD operating system. In 2008, containers arrived on Linux and finally captured the popular imagination as Docker in 2013.
Docker made it easier for developers to package code, libraries, environment variables, and configuration files into a self-contained container image. With Docker, containers could be easily spun up and pushed into production environments. Kubernetes went a step further, making it possible to deploy and manage containers at scale.
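To make that packaging concrete, here is a minimal sketch of an image definition. The base image, file names, and application entry point are all illustrative assumptions, not taken from any particular project:

```dockerfile
# Hypothetical example: package code, dependencies, environment variables,
# and configuration into a single self-contained image.
FROM python:3.11-slim
WORKDIR /app

# Dependency manifest (illustrative file name)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code and configuration files
COPY . .

# Environment variables baked into the image
ENV APP_ENV=production

# Process to run when a container starts
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image carries everything the application needs, which is what lets it be spun up unchanged in any environment that can run containers.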
But what's the big deal with containers, you ask? To answer that, we have to pivot to another technology: virtualization. Containers and virtualization share the same goals. Both reflect the relentless march of data center infrastructure toward scalability, workload isolation and portability, and optimized resource usage.
Virtualization one-upped legacy bare metal infrastructure by introducing virtual machines (VMs). VMs are abstracted from the underlying physical hardware by a software layer called a hypervisor. Whereas a legacy bare metal server typically supported only a single application, virtualization allowed multiple VMs, each running isolated processes or applications, to share a single bare metal server and its underlying resources.
Before virtualization came along, most hardware saw only a 5 to 15% utilization rate. With virtualization, workload density improved dramatically, driving up infrastructure utilization rates. Stacking VMs onto bare metal servers also made the infrastructure more scalable; new VMs could be easily provisioned on top of the existing infrastructure to respond to increased demand.
Workload portability also improved, though only within each vendor's proprietary virtual machine format. Portability therefore remained a problem: VMs locked service providers into a vendor's VM format, and migrating a VM meant migrating its entire guest OS along with it.
VMs, however, still have a pretty big resource footprint. Each VM on virtualized hardware has to run its own operating system, which in turn requires RAM, storage, and CPU. Across hundreds of VMs in multiple virtualized environments, this overhead adds up to a considerable drain: resources that could otherwise be allocated to applications are consumed simply because every VM runs its own operating system.
Containers address both of these challenges. Containerization technologies like Docker introduced operating system (OS) abstraction, enabling multiple containers to share the same underlying OS. When used on top of virtualized environments, containers improve VM workload density, since multiple containers can be packed onto the same VM.
Containers can also be deployed directly onto bare metal servers without a hypervisor. Doing so frees up the resources that would otherwise be allocated to individual guest OS instances in a virtualized environment. In other words, containerized environments on bare metal can support the same workloads with fewer resources than a purely virtualized environment. The trade-off is giving up the benefits that come with virtualization and managing infrastructure as software.
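The savings described above come down to simple arithmetic. The sketch below uses a hypothetical figure of roughly 1 GB of RAM per OS instance (an assumption for illustration, not a measured benchmark) to compare OS overhead for 100 workloads run as individual VMs versus as containers sharing one host OS:

```python
# Back-of-envelope comparison of memory spent on operating systems alone.
# The per-OS figure is an illustrative assumption, not a benchmark.

def os_overhead_gb(workloads: int, per_os_gb: float, shared_os: bool) -> float:
    """Total memory consumed by operating systems.

    VMs: every workload carries its own guest OS (workloads * per_os_gb).
    Containers on bare metal: one host OS is shared by all workloads.
    """
    return per_os_gb if shared_os else workloads * per_os_gb

# 100 workloads, assuming ~1 GB of RAM per OS instance (hypothetical)
vm_overhead = os_overhead_gb(100, 1.0, shared_os=False)        # 100 guest OSes
container_overhead = os_overhead_gb(100, 1.0, shared_os=True)  # 1 shared host OS
print(f"Memory freed for applications: {vm_overhead - container_overhead:.0f} GB")
```

The exact numbers will vary with the guest OS and workload, but the shape of the saving is the same: OS overhead grows linearly with VM count, while a shared OS keeps it constant.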
Additionally, freeing workloads from OS dependencies makes them extremely portable. For DevOps teams, this means increased flexibility in where their applications can run. Containerized workloads are no longer locked into proprietary VM formats and can move seamlessly across local and cloud hosts.
Request a quick 20-minute demo to learn how best to leverage cloud VMs for your Kubernetes clusters and achieve infrastructure cost savings of up to 30%.