Kubernetes labels are powerful tools. Labels allow DevOps to identify, group and manage Kubernetes objects. They also support bulk operations on Kubernetes objects using the label selector. I recently wrote about Kubernetes labels and best practices for working with them in production environments.
Despite being extremely useful inside the Kubernetes ecosystem, Kubernetes labels have limited applicability outside of it. For one, they cannot easily be integrated with public cloud providers' tagging tools. This matters because it forces DevOps teams to maintain two separate sets of labels for Kubernetes deployments: one for the public cloud and one for Kubernetes itself.
It also means that DevOps teams have to spend extra time and effort keeping the two tagging mechanisms consistent, and it undermines any cost analysis or allocation effort before it can get off the ground.
In today's blog we will take a look at a specific tagging mechanism in Kops called cloud labels. Cloud labels provide cross-domain functionality: they map to AWS tags, which makes things like cost analysis and allocation possible. But before we do that, let's first take a quick look at tagging in general and why it is important.
Labelling or tagging simply means attaching meaningful metadata to your cloud resources. Labels are key/value pairs. A good example is an environment label with the key "environment" and a value of "dev", "staging" or "production".
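In Kubernetes, the same idea shows up as object labels. A minimal sketch (the deployment name "payments-api" is hypothetical):

```shell
# Attach labels to an existing deployment
kubectl label deployment payments-api environment=production team=payments

# Use a label selector to list only the matching objects
kubectl get pods -l environment=production
```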
Labelling or tagging has become an important piece of the DevOps toolkit. Most public cloud providers provide a tagging mechanism out of the box.
Labels are extremely useful tools for identifying, grouping and managing cloud resources. If we take the example of AWS EC2 instances, tagging allows DevOps to quickly identify a specific instance as being a part of a certain environment, belonging to a certain team or running a specific application.
Multiple EC2 instances can also be filtered or grouped together based on tags e.g. filtering all EC2 instances belonging to the Dev team. Labels or tags can also be used for governance and access control.
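With the AWS CLI, that kind of tag-based filtering looks like this (the "Team" tag key and "Dev" value are assumptions from the example above):

```shell
# List the instance IDs of all EC2 instances tagged Team=Dev
aws ec2 describe-instances \
  --filters "Name=tag:Team,Values=Dev" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text
```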
Cost reporting, analysis and allocation also become possible with a well thought out and consistent tagging policy. Costs can be grouped together into cost centers or allocated to individual teams, applications or any number of organizational or deployment cross-sections. As with Kubernetes labels, AWS tags also support bulk operations on EC2 instances e.g. scheduled shutdowns and backups.
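A scheduled shutdown, for example, can be sketched as a tag-driven bulk operation (the "Environment=Dev" tag is a hypothetical choice):

```shell
# Stop every running instance tagged Environment=Dev
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=Dev" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text)
[ -n "$ids" ] && aws ec2 stop-instances --instance-ids $ids
```

Run from a cron job or a scheduled Lambda, this turns a consistent tagging policy directly into cost savings.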
Before we get into Cloud labels, it might be useful to take a look at Kops itself.
Kops does for Kubernetes clusters what Kubernetes did for containers: it makes them easy to create, manage and operate.
Here is how the GitHub page defines Kops:
"Kops helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line".
Kops has official support for AWS, with GCE in beta. Beyond these, Kops can spin up Kubernetes clusters on several other cloud providers, with varying levels of functionality.
Cloud labels are a Kops feature that lets us attach key/value tags to the AWS resources Kops spins up. The great thing about cloud labels is that they map directly to AWS tags. This means that when we apply a cloud label to a Kubernetes cluster in Kops, it automatically shows up as an AWS tag on the EC2 instances and auto scaling groups that are created as part of the cluster.
Here is an example of a "create cluster" Kops command with the cloud labels "Stack" and "Team".
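A command along these lines (the cluster name, state store bucket, zone and label values are all hypothetical):

```shell
# Create a cluster and tag its AWS resources via --cloud-labels
kops create cluster \
  --name=k8s.example.com \
  --state=s3://my-kops-state-store \
  --zones=us-east-1a \
  --node-count=2 \
  --cloud-labels="Stack=production,Team=platform" \
  --yes
```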
Once the cluster comes up, you can head over to the "Instances" view in your AWS dashboard and click on individual instances to see the tags.
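The same tags can also be verified from the command line, assuming the "Stack" and "Team" labels from the example above:

```shell
# Show the Stack and Team tags on all instances in the region
aws ec2 describe-tags \
  --filters "Name=resource-type,Values=instance" \
            "Name=key,Values=Stack,Team" \
  --output table
```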
Once we have attached cloud labels to our Kubernetes clusters, we can use AWS Cost Explorer and the billing tools to analyze and allocate Kubernetes costs. Before we can do this, however, we need to activate both the AWS-generated and the user-defined cost allocation tags, and configure the monthly cost allocation report in the AWS billing console.
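After the tags have been activated as cost allocation tags, spend can also be grouped by tag through the Cost Explorer API. A sketch (the date range is a placeholder; "KubernetesCluster" is a tag Kops applies to cluster resources):

```shell
# Monthly unblended cost, grouped by the KubernetesCluster tag
aws ce get-cost-and-usage \
  --time-period Start=2019-01-01,End=2019-07-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=TAG,Key=KubernetesCluster
```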
Besides this, we need to develop a more expansive tagging policy, with cloud labels for cost centers, application names and cluster names, among others. For production Kubernetes environments, the tagging policy also has to be fine-tuned to account for cross-cluster resource usage.
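Labels can also be added to an already-running cluster; a sketch, again with hypothetical cluster and bucket names:

```shell
# Cloud labels live under spec.cloudLabels in the cluster spec; after editing,
# push the change and roll the nodes so the new tags propagate to AWS
kops edit cluster k8s.example.com --state=s3://my-kops-state-store
kops update cluster k8s.example.com --state=s3://my-kops-state-store --yes
kops rolling-update cluster k8s.example.com --state=s3://my-kops-state-store --yes
```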
Following are some screenshots of the cost analysis reports that are generated:
The first report shows the costs for all Kubernetes clusters running on AWS over a period of six months.
The second report breaks down Kubernetes costs by cluster over the same six-month period.
We cover all this and more in our Kubernetes cost analysis and allocation white paper. In the white paper we generate granular cluster-level cost analysis reports with the AWS Cost Explorer tool. We also use the cost allocation tool to allocate Kubernetes costs across a wide range of organizational and deployment cross-sections, including teams, applications, stacks and clusters.
Want to learn more? Download the Kubernetes cost analysis and allocation ebook for AWS.