[Kubernetes, Cloud, FinOps, Cloud Native]

Introduction to FinOps for Kubernetes: Challenges and Best Practices - Part III

Part 3 of our Introduction to FinOps for Kubernetes: Challenges and Best Practices article series, which outlines a comprehensive list of best practices aimed at implementing FinOps processes for cloud native Kubernetes environments.

Hasham Haider

July 12, 2021

3 minute read

This is part 3 of our Introduction to FinOps for Kubernetes: Challenges and Best Practices article series. In Part 1 we outlined some of the core challenges associated with implementing FinOps processes for cloud native Kubernetes environments. Part 2 outlined real world FinOps best practices that could be employed in cloud native Kubernetes environments.

This instalment extends the best practices list outlined in Part 2 and identifies how best to overcome the challenges that Kubernetes throws up in a FinOps context.

Ensure Pod Tracking for Accurate Cost Allocation

Kubernetes clusters are resource pools created by multiple nodes sharing their resources. Each node type, especially in Kubernetes environments deployed on public cloud, has a different price tag. Nodes host pods, which are ephemeral: the average lifespan of a container is one day. Pods can also be rescheduled across node types, or across cloud provider zones in multi-AZ clusters.

All of these factors make pod tracking essential for accurately allocating the costs of that pod and by extension of the Kubernetes environment to teams, business units or any other custom grouping. 
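As a sketch of the metadata such tracking typically relies on, the pod below carries team and cost-center labels that a cost allocation tool can group by. The label keys and values here are illustrative assumptions, not a Kubernetes standard:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-api          # placeholder name
  labels:
    team: payments           # illustrative cost-allocation labels
    cost-center: cc-1234
spec:
  containers:
    - name: billing-api
      image: nginx:1.21      # placeholder image
```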

A best practice in this context is for engineering to implement node affinity. Node affinity constrains a pod so that it only runs on a particular node (or set of nodes). nodeSelector, a field in the pod spec, is the simplest recommended method of doing this.
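A minimal sketch of a nodeSelector, assuming a cloud provider that populates the well-known node.kubernetes.io/instance-type label (older clusters expose beta.kubernetes.io/instance-type instead); the pod name, image and instance type are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: analytics-worker     # placeholder name
spec:
  nodeSelector:
    # pin the pod to a specific (and specifically priced) node type
    node.kubernetes.io/instance-type: m5.xlarge
  containers:
    - name: worker
      image: busybox:1.35    # placeholder image
      command: ["sleep", "3600"]
```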

To automate pod tracking and reduce engineering overhead and manual effort, FinOps teams can also opt to use tools like Replex, which make Kubernetes pod tracking and cost allocation seamless. 

Factor in Costs for Stateful Applications

Stateful applications need to store user session state. In the context of Kubernetes, this entails forwarding each client session to the same container instance. Stateful application costs are usually relevant in cases where organisations need to allocate costs to clients of that application. Since clients use a specific instance of the application, accurately measuring the costs of that instance (a pod or group of pods) is essential for accurate cost allocation.

Stateful applications also usually have some sort of a database attached to persist data.

In the context of stateful applications, a best practice is for engineering to select session affinity based on the client's IP address by setting service.spec.sessionAffinity to "ClientIP". Kubernetes admins can also set the maximum time during which traffic is forwarded to the same pod (the sticky time) by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds to the appropriate value.
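These two settings can be sketched in a Service manifest as follows; the service name, selector and ports are placeholders, while 10800 seconds (3 hours) is the Kubernetes default sticky time:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: stateful-frontend    # placeholder name
spec:
  selector:
    app: stateful-frontend   # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP  # route a given client IP to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # sticky time (the default: 3 hours)
```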

Another best practice, in the context of databases attached to stateful applications, is to decouple the database engine from the data itself. The database engine should be containerised, while the data is kept on the container host.

This can be done using host volumes:

$ docker run -d -e POSTGRES_PASSWORD=mysecretpassword -v /var/myapp/data:/var/lib/postgresql/data postgres

Calculate Custom Rates and Amortizations

As the cloud has matured so has its billing model, with multiple discount options available. These include RIs (reserved instances), negotiated custom rates, CUDs (committed use discounts) and spot instances. Most enterprises with a significant cloud footprint leverage at least one of these options. Some of these also end up serving as the underlying infrastructure layer upon which Kubernetes environments are deployed.

A best practice when implementing a cloud native FinOps framework is to factor these discounts into the overall costs of the Kubernetes environment. This ensures that teams and business units have an accurate view of their Kubernetes costs.

Replex with its cloud provider billing integration makes this process seamless with minimal manual effort required. Custom discounts and enterprise agreements can be factored into overall Kubernetes costs with just a few simple steps. 

Create Showback and Chargebacks

Showback and chargeback differ slightly based on whether IT departments actually charge teams/business units for IT resources consumed (chargeback), or simply calculate the cost of resources consumed without charging teams/business units (showback). Both are important in a cloud native FinOps context. 

Most of the best practices outlined earlier enable chargeback/showback for Kubernetes environments. These include proper tagging/labelling, equitably allocating shared resources and factoring in custom rates. Since node variants have different price tags, accurate cost allocation also requires pod tracking to identify which node pods are deployed on.

With Replex, chargeback/showback is as easy as connecting your Kubernetes clusters. Once connected, Replex aggregates billing data, cluster topology information and cluster metrics to provide comprehensive chargeback/showback.

Interested in learning more about the FinOps framework? 

Download our detailed guide to Cloud FinOps for FinOps teams, executives, DevOps, engineering, finance and procurement.

Author

Hasham Haider

Fan of all things cloud, containers and micro-services!

Want to Dig Deeper and Understand How Different Teams or Applications are Driving Your Costs?

Request a quick 20 minute demo to see how you can seamlessly allocate Kubernetes costs while saving up to 30% on infrastructure costs using Replex.

Contact Us