Implementing a comprehensive, well-thought-out tagging policy for Kubernetes cluster resources is the first step towards cost allocation, reporting, and optimization. In this blog post we will learn how to tag AWS EC2 instances, EBS volumes, auto scaling groups, and ENIs using cloud label tags.
Hasham Haider
January 29, 2019
14 minute read
Tagging or labelling cloud resources is an important piece of the enterprise governance puzzle. Tagging allows organizations to stay on top of their cloud game and keep track of their cloud resources for a variety of governance purposes. These include (but are not limited to) cost allocation, reporting and optimization.
For IT managers and DevOps teams, tags pack even more functionality, allowing them to automate operations like provisioning infrastructure or back-ups and to put security policies in place.
Public cloud providers encourage an extensive, well-thought-out tagging policy and provide tools to make the process seamless.
However, a crucial ingredient to having a comprehensive tagging regime is visibility into the underlying infrastructure layer. As containers and Kubernetes see more adoption, this layer is increasingly hidden from DevOps. Cluster deployment management tools like Kops obscure it even further.
These tools also bring their own tagging mechanisms to the table on top of the ones already provided by cloud providers. So how do you make these disparate tagging mechanisms talk to each other?
In this blog post we will take a deep dive into tagging AWS resources provisioned for Kubernetes clusters by Kops. We will be tagging multiple AWS resources including AWS EC2 instances, EBS volumes, ENIs, and auto scaling groups. So let’s get into it.
On AWS, Kubernetes nodes correspond to AWS instances. We will start off by seeing how to tag AWS instances that are spun up as part of a Kubernetes cluster.
We can tag AWS instances at cluster creation time by defining cloud label tags in Kops. This is what the create cluster command looks like in Kops with three cloud label tags:
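A minimal sketch of that command is below; the cluster name, state store, zone, and tag values are placeholders, so substitute your own. For the rest of this post we will assume the state store is exported via the KOPS_STATE_STORE environment variable.

```bash
kops create cluster \
  --name=k8s.example.com \
  --state=s3://kops-state-store \
  --zones=eu-west-1a \
  --node-count=4 \
  --node-size=t2.small \
  --master-size=m3.medium \
  --cloud-labels="ApplicationName=MyApp,Team=Backend,Stack=Production"
```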
The above command will create a cluster configuration with four AWS t2.small instances.
We can then spin up our cluster using:
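Assuming the placeholder cluster name from above:

```bash
kops update cluster k8s.example.com --yes
```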
Kops will create a default instance group called nodes for this cluster. All four t2.small instances will be placed in this instance group and will be tagged with the ApplicationName, Team and Stack tags. We can see these cloud label tags on the AWS EC2 console.
Kops will also spin up a master instance group with 1 m3.medium master node, with the same cloud label tags. You can see these cloud label tags in the screenshot below:
The corresponding auto scaling groups on AWS will also be tagged with the cloud label tags we defined.
Here are the cloud label tags for the auto scaling group with the worker nodes:
And the master node auto scaling group:
We can also add additional cloud label tags to our instances after the cluster has been spun up, using the Kops edit command:
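With our placeholder cluster name, that is:

```bash
kops edit cluster k8s.example.com
```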
This will open up the cluster configuration file. Simply add additional tags under spec:
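Kops reads cluster-wide tags from the cloudLabels field of the spec; the CostCenter tag below is just an example of a new tag to add:

```yaml
spec:
  cloudLabels:
    ApplicationName: MyApp
    Team: Backend
    Stack: Production
    CostCenter: "1234"
```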
Now update your cluster to add the new cloud label tags:
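This is the same update command as before:

```bash
kops update cluster k8s.example.com --yes
```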
We might also have to perform a rolling update of the cluster:
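Again with the placeholder name:

```bash
kops rolling-update cluster k8s.example.com --yes
```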
Rolling update performs validation of the cluster, and deems the cluster valid as long as all required nodes and pods are operational. If you get a message saying "No rolling-update required", you might have to force one:
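Forcing is done via the --force flag:

```bash
kops rolling-update cluster k8s.example.com --force --yes
```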
Doing this will add the new cloud label tag to both the AWS instances as well as the auto scaling groups:
We can also edit both the master and worker instance group configuration files to add new cloud label tags.
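For the worker instance group (which Kops names nodes by default), and assuming our placeholder cluster name:

```bash
kops edit ig nodes --name k8s.example.com
```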
This will open up the configuration file for the instance group nodes.
Now add your additional tags under spec:
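Instance group specs support the same cloudLabels field; the Environment tag below is just an example:

```yaml
spec:
  cloudLabels:
    Environment: Staging
```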
Next, update the cluster to apply the tags, followed by a rolling update; if Kops reports that no rolling update is required, force one:
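With our placeholder cluster name, that sequence looks like this:

```bash
kops update cluster k8s.example.com --yes
kops rolling-update cluster k8s.example.com --yes
# if Kops reports "No rolling-update required":
kops rolling-update cluster k8s.example.com --force --yes
```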
Doing this will add the new cloud label tags to the AWS instances that are part of that instance group:
We can also apply the same tags to the instance group for the master node via the kops edit command.
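The master instance group name includes the availability zone, so it will look something like the command below; kops get ig will list the exact name for your cluster:

```bash
kops edit ig master-eu-west-1a --name k8s.example.com
```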
Cloud label tags can also be added to EC2 instances while creating new instance groups. To do this, we first create a yaml file with our instance group configuration, including the cloud label tags we want to attach to the new instances.
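Here is a sketch of such a configuration file; the group name, zone, and machine type are assumptions chosen to match the two t2.small instances created below:

```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: new-ig
  labels:
    kops.k8s.io/cluster: k8s.example.com
spec:
  role: Node
  machineType: t2.small
  minSize: 2
  maxSize: 2
  subnets:
  - eu-west-1a
  cloudLabels:
    ApplicationName: MyApp
    Team: Backend
    Stack: Production
```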
Save the configuration file as new_ig.yaml. Next run the following command to create the instance group configuration in Kops:
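The kops create command accepts the file directly:

```bash
kops create -f new_ig.yaml
```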
Now we need to update our cluster to spin up the new instance group, again followed by a rolling update (forced, if Kops reports that none is required):
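This is the same sequence as before:

```bash
kops update cluster k8s.example.com --yes
kops rolling-update cluster k8s.example.com --yes
# if Kops reports "No rolling-update required":
kops rolling-update cluster k8s.example.com --force --yes
```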
This will spin up a new instance group within our cluster with 2 t2.small AWS instances. Both of these instances will have the cloud label tags attached to them.
The new AWS auto scaling group will also have the cloud label tags attached.
Let’s now move on to EBS volumes. As of now there is no way to attach cloud label tags to EBS volumes directly via Kops. Adding cloud label tags to clusters or instance groups via Kops does not automatically tag the attached EBS volumes.
However, there is a pretty handy workaround authored by Mike Lapidakis that uses AWS lambda to copy cloud label tags from the AWS instances to the attached EBS volumes. Using AWS Lambda might result in charges, so keep that in mind before following the steps below.
To do this we first need to create two policies: one for AWS Lambda and one for EC2 tags.
On your AWS console, go to IAM > Policies > Create Policy, click on JSON, and copy the following code into the editor.
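A minimal sketch of the EC2 tagging policy, granting just the describe and tagging actions the Lambda function below needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeTags",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```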
Click on Review policy, name and describe your policy, and then click on Create policy.
Now create the Lambda policy by following the same steps with the following JSON code:
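A minimal sketch that lets the function write its logs to CloudWatch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```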
Next we need to create an IAM role and attach both of these policies to it. On your AWS console, go to IAM > Roles > Create New Role. Click on AWS Service > Lambda and then click on Next: Permissions.
In the filter field, search for the two policies we created earlier and select them. Now click on Next: Tags, optionally attach a tag to the role, click on Review, name and describe the role, and then click on Create role.
Next go to your AWS Lambda console, click on Author from scratch, choose Python 3.6 as the runtime, and under Role click on "Choose an existing role", then select the role we created earlier. Next click on Create function.
Scroll down to the code editor section and copy the following code into it:
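A sketch of such a function is below; the region and the tag keys are assumptions you will need to adjust, as noted after the code:

```python
import boto3

# Assumptions -- change the region and tag keys to match your setup
REGION = 'eu-west-1'
TAG_KEYS = ['ApplicationName', 'Team', 'Stack']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=REGION)

    # Walk over every instance in the region
    for reservation in ec2.describe_instances()['Reservations']:
        for instance in reservation['Instances']:
            # Collect the cloud label tags we want to copy
            tags = [t for t in instance.get('Tags', [])
                    if t['Key'] in TAG_KEYS]
            if not tags:
                continue
            # Find the EBS volumes attached to this instance
            volume_ids = [m['Ebs']['VolumeId']
                          for m in instance.get('BlockDeviceMappings', [])
                          if 'Ebs' in m]
            # Apply the instance tags to the attached volumes
            if volume_ids:
                ec2.create_tags(Resources=volume_ids, Tags=tags)
    return 'Copied instance tags to attached EBS volumes'
```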
The above function copies the cloud label tags we attached to our EC2 instances and applies them to the attached EBS volumes. Make sure you change the region name and cloud label tags to the ones matching your setup.
Next save your function and click on Test.
You should see a success message on your Lambda function dashboard. Now go back to the Elastic Block Store section of the EC2 dashboard and click on one of the volumes. You will be able to see the cloud label tags we defined in our Lambda function attached to the EBS volumes.
These are the same cloud label tags we attached to our Kubernetes cluster EC2 instances, spun up using Kops.
Similar to EBS volumes, the AWS network interfaces (ENIs) that are spun up by Kubernetes as part of our cluster are not tagged with cloud label tags.
Here again, we can use AWS Lambda: we need a similar function that copies the cloud label tags from our EC2 Kubernetes cluster instances and attaches them to the ENIs. Here is what the code for that function will look like:
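This is a sketch along the same lines as the EBS function, again with the region and tag keys as assumptions:

```python
import boto3

# Assumptions -- change the region and tag keys to match your setup
REGION = 'eu-west-1'
TAG_KEYS = ['ApplicationName', 'Team', 'Stack']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=REGION)

    for reservation in ec2.describe_instances()['Reservations']:
        for instance in reservation['Instances']:
            tags = [t for t in instance.get('Tags', [])
                    if t['Key'] in TAG_KEYS]
            if not tags:
                continue
            # Find the ENIs attached to this instance
            eni_ids = [eni['NetworkInterfaceId']
                       for eni in instance.get('NetworkInterfaces', [])]
            # Apply the instance tags to the attached network interfaces
            if eni_ids:
                ec2.create_tags(Resources=eni_ids, Tags=tags)
    return 'Copied instance tags to attached ENIs'
```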
Saving and running this Lambda function will copy the cloud label tags from our instances to our ENIs. Now if we check the Network Interfaces section of our EC2 dashboard, we will be able to see the attached tags.
In this blog post we covered the tagging process for AWS EC2 instances, EBS volumes, ENIs, and auto scaling groups. A notable omission is AWS ELBs: since ELBs for clusters spun up using Kops are provisioned by Kubernetes itself, it is not yet possible to attach cloud label tags to them directly via Kops.
Tagging AWS resources, however, is only half the story. Cost allocation and governance for larger deployments spanning multiple infrastructure variants require a much more extensive tagging regime with a certain degree of automation.
Replex’s dedicated Kubernetes solution allows you to do granular cost allocation and chargeback for your Kubernetes clusters. It provides detailed cost insights based on both Kubernetes and infrastructure (cloud) groupings, as well as custom organizational groupings like teams or applications. The optimize module also provides actionable intelligence on how to rightsize instances and reduce costs based on resource usage and utilization metrics.
Interested in learning how to allocate Kubernetes costs to individual teams and applications and make cost savings of up to 30% using Replex.io?
Author
Fan of all things cloud, containers and micro-services!