Containerization (when done right) has the potential to make DevOps teams more productive and the CI/CD pipeline more seamless. For one, it allows DevOps teams to spend more time developing and shipping new features rather than debugging code to fit a specific environment. It also frees teams from managing infrastructure, letting them concentrate on managing the applications themselves.
With Kubernetes these benefits are magnified many times over. Kubernetes makes it easier to manage and deploy containers at scale, further reducing release cycles.
In bringing all these benefits, however, containers and Kubernetes do introduce an additional layer of complexity into enterprise tech stacks. This is bad news for IT managers, since it often means a loss of visibility and control over costs.
From the standpoint of IT managers looking for visibility and control over costs, containers are still plagued by old-world problems:
How many resources are individual teams and applications consuming?
How much does this resource usage cost?
How can I allocate these costs to individual applications and teams?
It is often difficult to figure out how much of the infrastructure individual teams or applications consume. This is made harder by the fact that production clusters share resources among many different teams and applications. All of this can turn Kubernetes environments into a black hole for IT managers, with little cost visibility and no way to allocate costs to individual teams, applications or services.
Cost allocation and chargeback are essential features of enterprise tech stacks. And since Kubernetes is on track to become an integral part of that stack, not being able to allocate or charge back costs could be a major spanner in the works.
Additionally, organizations typically have much more granular resource consumption and cost visibility requirements across any number of custom groupings. Allocating costs and getting a consistent view of resource usage without a dedicated solution is often a non-starter.
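As a rough illustration of the allocation problem, here is a minimal sketch of chargeback by resource requests: each team's share of a node's cost is proportional to the CPU its pods request on that node. The team names, node price and request figures are hypothetical, and real allocation would also need to account for memory, storage and shared services.

```python
# Minimal sketch: allocate a node's hourly cost to teams in proportion to
# the CPU their pods request on it. All numbers below are hypothetical.

NODE_COST_PER_HOUR = 0.20   # assumed on-demand price of a 4-vCPU VM
NODE_CPU_CAPACITY = 4.0     # vCPUs available on the node

# CPU requests (in vCPUs) of pods scheduled on the node, grouped by team
requests_by_team = {
    "payments": 1.5,
    "search": 1.0,
    "batch-jobs": 0.5,
}

total_requested = sum(requests_by_team.values())   # 3.0 vCPUs requested
idle_cpu = NODE_CPU_CAPACITY - total_requested     # 1.0 vCPU nobody "owns"

for team, cpu in requests_by_team.items():
    share = cpu / NODE_CPU_CAPACITY
    print(f"{team}: ${NODE_COST_PER_HOUR * share:.3f}/h ({share:.0%} of node)")

# The unrequested capacity is cost that cannot be charged back to anyone
print(f"unallocated (idle): ${NODE_COST_PER_HOUR * idle_cpu / NODE_CPU_CAPACITY:.3f}/h")
```

Even in this toy version, a quarter of the node's cost ends up unallocated, which is exactly the kind of gap a dedicated cost-visibility solution has to surface.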
Resource utilization is another concern that IT managers have to deal with:
What is the average resource utilization of my infrastructure?
How do I ensure optimum infrastructure utilization?
Is there any way I can optimize resource usage to reduce infrastructure costs?
Resource utilization is the ratio of the resources consumed by all the pods running in a cluster to the total resources available on the cluster. According to RightScale, cloud resources see only a 35% utilization rate on average. This holds true in the Kubernetes context too, which means that roughly 65% of VM costs are attributable to idle resources.
Ensuring optimal resource utilization of the underlying infrastructure is essential to Kubernetes cost optimization efforts. On public cloud, this translates into ensuring that the underlying virtual machines (AWS EC2 instances, DigitalOcean Droplets) are utilized efficiently.
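The utilization ratio defined above can be sketched in a few lines. The capacity, usage and billing figures here are hypothetical, chosen to match the 35% average utilization mentioned above:

```python
# Sketch: cluster utilization = resources consumed by pods / resources available.
# All figures are hypothetical illustrations, not measured data.

cluster_cpu_capacity = 32.0   # total vCPUs across all nodes in the cluster
cpu_used_by_pods = 11.2       # vCPUs actually consumed by running pods
monthly_vm_bill = 1000.0      # assumed total VM cost for the cluster

utilization = cpu_used_by_pods / cluster_cpu_capacity
idle_cost = monthly_vm_bill * (1 - utilization)

print(f"utilization: {utilization:.0%}")               # 35%, the RightScale average
print(f"cost of idle capacity: ${idle_cost:.2f}/month")
```

At 35% utilization, $650 of a $1,000 monthly bill is paying for capacity that sits idle, which is why utilization is the first lever in any Kubernetes cost optimization effort.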
When launching pods, developers, too, have to answer hard questions:
Have I allocated the correct amount of resources to my container or pods?
Is there enough resource headroom to ensure my pods will not run out of resources during traffic spikes?
Or, are my containers bloated, with much higher resources allocated than required?
Once pods go into production, resource utilization concerns rear their heads:
How much of the requested resources are my pods consuming and what is the average utilization?
Can I do a better job of balancing resource requests and limits?
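One way to start answering these questions is to compare each pod's actual usage against its request and flag the outliers. The pod names, thresholds and numbers in this sketch are hypothetical; in practice the usage figures would come from a metrics pipeline such as the Kubernetes metrics API:

```python
# Sketch: flag pods that are bloated (usage far below request) or running
# with too little headroom (usage close to request). Pod names, usage
# figures and thresholds below are hypothetical.

pods = [
    # (name, cpu_request in vCPUs, average cpu_usage in vCPUs)
    ("frontend-7d4f", 2.0, 0.3),
    ("api-5c9b", 1.0, 0.9),
    ("worker-f21a", 0.5, 0.25),
]

for name, request, usage in pods:
    ratio = usage / request
    if ratio < 0.4:
        verdict = "bloated: consider lowering the request"
    elif ratio > 0.8:
        verdict = "little headroom: may be throttled during traffic spikes"
    else:
        verdict = "reasonably sized"
    print(f"{name}: {ratio:.0%} of request used -> {verdict}")
```

Rules of thumb like these only go so far: the right thresholds depend on how bursty each workload is, which is what makes request/limit tuning a continuous job rather than a one-off exercise.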
Extending this to the cloud introduces even more challenges. Kubernetes can be deployed on any one of a number of public cloud providers, as well as in hybrid or multi-cloud configurations. Cloud VMs also come in many shapes, sizes and billing profiles.
Matching resource requirements to the hundreds of possible cloud and VM combinations, and forecasting resource demands across all of these scenarios while ensuring the lowest cost, quickly grows beyond what DevOps teams can manage by hand.
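In its simplest form, the matching problem is "find the cheapest VM type that fits the workload". The instance names, sizes and prices below are illustrative, not real price-list data, and a real solver would also weigh bin-packing across nodes, spot pricing and forecast demand:

```python
# Sketch: pick the cheapest VM type that satisfies a workload's CPU and
# memory needs. Instance names, sizes and prices are hypothetical.

instance_types = [
    # (name, vCPUs, memory in GiB, price in $/hour)
    ("small", 2, 4, 0.05),
    ("medium", 4, 16, 0.10),
    ("large", 8, 32, 0.20),
]

def cheapest_fit(cpu_needed, mem_needed):
    """Return the name of the cheapest instance type that fits, or None."""
    candidates = [
        (price, name)
        for name, cpu, mem, price in instance_types
        if cpu >= cpu_needed and mem >= mem_needed
    ]
    return min(candidates)[1] if candidates else None

print(cheapest_fit(3, 8))     # prints "medium": small is too small, large costs more
print(cheapest_fit(16, 64))   # prints "None": nothing in the catalog fits
```

With hundreds of real instance types per provider, and workloads whose demands shift over time, this one-dimensional version balloons into exactly the forecasting and optimization problem described above.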
Replex.io answers all these questions and more. Our dedicated Kubernetes solution has been built from the ground up to address the twin challenges of cost visibility and optimization for enterprise Kubernetes deployments. It comprises three modules: Collect, Optimize and Report. Together they provide granular visibility into resource consumption, utilization and costs across all infrastructure classes. The Optimize module flags inefficient resource utilization and unallocated resources and improves resource utilization. On average, we see an improvement of up to 30% in resource utilization and a corresponding 30% decrease in costs.
Interested in learning how to optimize your Kubernetes deployments?