Contrary to popular perception, containers have been around for quite some time. Containerization technology first emerged as chroot, a system call introduced in Version 7 UNIX in 1979. Its next iteration arrived in 2000 as Jails, part of the FreeBSD operating system. In 2008, containers came to Linux, and they finally captured the popular imagination as Docker in 2013.
Docker made it easier for developers to package code, libraries, environment variables, and configuration files into a self-contained container image. With Docker, containers could be easily spun up and pushed into production environments. Kubernetes went a step further, making it possible to deploy and manage containers at scale.
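To make this concrete, here is a minimal Dockerfile sketch of what "packaging code, libraries, environment variables, and configuration into an image" looks like in practice. The application files (`app.py`, `requirements.txt`) and the base image choice are illustrative assumptions, not from the original article:

```dockerfile
# Base image supplies the OS libraries and language runtime
FROM python:3.11-slim

# Environment variables baked into the image
ENV APP_ENV=production

# Copy dependency manifest first and install libraries
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and configuration files
COPY . .

# Command executed when a container is started from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` turns this recipe into a self-contained image, and `docker run myapp` spins up a container from it on any host with a Docker runtime.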
But what's the big deal with containers, you ask? To answer that, we have to pivot to another technology: virtualization. Containers and virtualization share the same goals. Both reflect the relentless march of data center and infrastructure technology towards hyper-scalability, workload isolation and portability, and optimized resource usage.
Virtualization one-upped legacy bare metal infrastructure through the introduction of virtual machines (VMs). VMs are abstracted from the underlying physical hardware by a software layer called a hypervisor. Whereas a legacy bare metal server could only support a single application, virtualization allowed multiple VMs, each running isolated processes or applications, to run on a single bare metal server and share the underlying resources.
Before virtualization came along, most hardware only saw a 5 to 15% utilization rate. With virtualization, workload density improved dramatically, driving up infrastructure utilization rates. Stacking VMs onto bare metal servers also made the infrastructure more scalable; new VMs could be easily provisioned on top of the existing infrastructure to respond to increased demand.
Workload portability also improved, though only within each vendor's proprietary virtual machine format. This remained a problem: VMs locked service providers into those vendor formats, and migrating a VM meant migrating the entire OS along with it.
VMs, however, still have a pretty big resource footprint. Each VM on virtualized hardware has to run its own operating system, which in turn requires RAM, storage, and CPU resources. These overheads add up across hundreds of VMs in multiple virtualized environments and can become a considerable drain. Resources that could otherwise be allocated to applications are consumed simply because every VM runs its own operating system.
Containers address both these challenges. Containerization technologies like Docker introduced operating system (OS) abstraction, enabling multiple containers to share the same underlying OS kernel. When used on top of virtualized environments, containers improve VM workload density, since multiple containers can be packed onto the same VM.
Containers can also be deployed directly onto bare metal servers without a hypervisor. Doing so frees up the resources that would otherwise be allocated to individual VM OS instances in a virtualized environment. In practice, this means containerized environments on bare metal can support the same workloads with fewer resources than a purely virtualized environment. It does, however, give up the associated benefits of virtualization and of having infrastructure as software.
Additionally, freeing workloads from OS dependencies makes them extremely portable. For DevOps teams, this means increased flexibility in where their applications can run. Containerized workloads are no longer locked into proprietary VM formats and can be moved seamlessly across multiple local or cloud hosts.