Over the last couple of years, Kubernetes has emerged as the leading container orchestration tool. Despite its steep learning curve, companies are jumping on the Kubernetes bandwagon, making it one of the fastest-growing open source projects of all time.
As companies transition to production environments on Kubernetes, leveraging multiple cloud providers and on-premise infrastructures, one of the first challenges they encounter is how to accurately allocate Kubernetes costs to different teams, services, applications and departments.
If you have a production-grade Kubernetes environment up and running and haven't heard from Finance yet, trust me, you will.
As you scale your Kubernetes environment and your deployments grow larger, you’ll need to understand the costs they generate. Container environments like Kubernetes are hard to optimize because it is difficult to see what exactly is happening inside each container. Additionally, companies are spending much more on adopting container technologies, which feeds into overall IT spend. Case in point: the number of companies spending $500,000 annually on their container environments increased by 27% compared to last year.
Management wants transparency into these costs and how they are allocated to different services, teams, applications and departments. Accurately allocating these costs is a major requirement in itself; it is also the first step toward ensuring that the underlying infrastructure is being used efficiently.
Why is Kubernetes cost allocation not easy?
The core problem with Kubernetes cost allocation and chargeback efforts is the lack of visibility into how Kubernetes cluster resource usage is related to the cost of the underlying cloud or on-premise infrastructure Kubernetes is running on top of.
Kubernetes' shared resources model pools the entire underlying compute infrastructure into overarching entities called clusters. Clusters, in turn, are made up of individual units of memory and CPU (compute) called nodes. A node can be anything from a virtual machine running on a public cloud provider to a physical machine running in your own data center. By abstracting away this complexity, and adding another layer of OS abstraction on top, Kubernetes drives efficiency and agility for development and operations teams.
This shared resources model also means that multiple teams and services utilize resources from the same underlying infrastructure. This makes it difficult for IT managers and Finance departments to accurately allocate Kubernetes costs to different services, teams, departments and applications.
The fact that Kubernetes is agnostic to the underlying infrastructure and can run across any number of cloud providers and/or physical infrastructure also means that your containers can pop up anywhere across nodes in your cluster. Keeping track of where nodes and clusters are actually running is essential to cost allocation and chargeback efforts.
Questions about the overall cost footprint of Kubernetes clusters can have easy answers. Overall Kubernetes costs can be easily correlated to public cloud provider bills (doing this for on-premises infrastructure is not so straightforward). However, a quick look under the hood reveals that gaining visibility into cost drivers and allocating cluster costs to specific teams, applications, services or departments is quite difficult.
Can I use Kubernetes Namespaces to allocate costs?
Kubernetes namespaces are a way of dividing a Kubernetes cluster into several virtual clusters. These virtual clusters can then be allocated to specific teams, services or applications. Cluster resources can also be allocated to namespaces using resource quotas.
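As a minimal sketch of this approach, the manifest below creates a namespace for a single team and attaches a ResourceQuota that caps the compute it can consume. The namespace name (team-a) and the quota values are illustrative assumptions, not recommendations:

```yaml
# Hypothetical namespace for one team; the name is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# ResourceQuota capping the aggregate CPU and memory the namespace
# can request and consume; adjust the values to your environment.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Once applied (for example with `kubectl apply -f quota.yaml`), running `kubectl describe resourcequota team-a-quota -n team-a` shows the namespace's current usage against the quota, which gives a rough per-team view of cluster resource consumption.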
Kubernetes namespaces are a good starting point for cost allocation. However, it is important to note that namespaces only provide virtual partitioning of your resources. Getting fine-grained visibility into the cluster resource usage of different namespaces, and into how that usage translates into underlying infrastructure costs, requires dedicated tools.
The last couple of years have seen a mass migration of enterprises to microservices architectures and container technologies. Current container monitoring and optimization solutions leave much to be desired. At Replex, we are dedicated to putting you back in charge of your IT infrastructure usage. Replex gives you detailed insights and granular control over your container infrastructure cost, governance and optimization. We also have dedicated Kubernetes reporting which helps you manage and allocate costs as well as optimize Kubernetes resource usage.
Would you like to dig deeper and understand how different teams or applications are driving your costs? Request a quick 20 minute demo to see how you can seamlessly allocate Kubernetes costs while saving up to 30% on infrastructure costs using Replex.io.