
How to Tag Kubernetes Cluster Resources on AWS using Kops

Implementing a comprehensive and well-thought-out tagging policy for Kubernetes cluster resources is the first step towards cost allocation, reporting and optimization. In this blog post we will learn how to tag AWS EC2 instances, EBS volumes, auto scaling groups and ENIs using cloud label tags.

Hasham Haider

January 29, 2019

14 minute read

Tagging or labelling cloud resources is an important piece of the enterprise governance puzzle. Tagging allows organizations to stay on top of their cloud game and keep track of their cloud resources for a variety of governance purposes. These include (but are not limited to) cost allocation, reporting and optimization.

For IT managers or DevOps teams tags pack even more functionality, allowing them to automate operations like provisioning infrastructure or back-ups and put security policies in place.

Public cloud providers encourage an extensive and well thought out tagging policy and provide tools to make this process seamless.

However, a crucial ingredient to having a comprehensive tagging regime is visibility into the underlying infrastructure layer. As containers and Kubernetes see more adoption, this layer is increasingly hidden from DevOps. Cluster deployment management tools like Kops obscure it even further.

These tools also bring their own tagging mechanisms to the table, on top of the ones already provided by cloud providers. So how do you make these disparate tagging mechanisms talk to each other?


In this blog post we will take a deep dive into tagging AWS resources provisioned for Kubernetes clusters by Kops. We will be tagging multiple AWS resources including AWS EC2 instances, EBS volumes, ENIs, and auto scaling groups. So let’s get into it.

Tagging Kubernetes nodes/EC2 instances using Kops Cloud labels

On AWS, Kubernetes nodes correspond to EC2 instances. We will start off by seeing how to tag AWS instances that are spun up as part of a Kubernetes cluster.

We can tag AWS instances at cluster creation time by defining cloud label tags in Kops. This is what the create cluster command looks like in Kops with three cloud label tags:

kops create cluster \
--node-count=4 \
--node-size=t2.small \
--cloud=aws \
--zones=eu-central-1a \
--name=mycluster1.k8s.local \
--cloud-labels="Stack=Prod,ApplicationName=Frontend,Team=Kube"

The above command will create a cluster configuration with four AWS t2.small instances.

We can then spin up our cluster using:

kops update cluster mycluster1.k8s.local --yes

Kops will create a default instance group called nodes for this cluster. All four t2.small instances will be placed in this instance group and will be tagged with the ApplicationName, Team and Stack tags. We can see these cloud label tags on the AWS EC2 console.

[Screenshot: cloud label tags on the worker EC2 instances]
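The same tags can also be checked from the command line. Here is a quick sketch using the AWS CLI (assuming it is configured for the cluster’s account and region), filtering instances on one of our cloud labels:

# List instance IDs and tags for all instances carrying the Team=Kube label
aws ec2 describe-instances \
  --region eu-central-1 \
  --filters "Name=tag:Team,Values=Kube" \
  --query "Reservations[].Instances[].{ID:InstanceId,Tags:Tags}"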

Kops will also spin up a master instance group with 1 m3.medium master node, with the same cloud label tags. You can see these cloud label tags in the screenshot below:

[Screenshot: cloud label tags on the master EC2 instance]

The corresponding autoscaling groups on AWS will also be tagged with the cloud label tags we defined.

Here are the cloud label tags for the auto scaling group with the worker nodes:

[Screenshot: cloud label tags on the worker node auto scaling group]

And the master node auto scaling group:

[Screenshot: cloud label tags on the master node auto scaling group]
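The auto scaling group tags can be listed from the command line as well. A minimal sketch, again assuming a configured AWS CLI:

# List all auto scaling group tags carrying one of our cloud label keys
aws autoscaling describe-tags \
  --region eu-central-1 \
  --filters "Name=key,Values=Team,Stack,ApplicationName"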

We can also add additional cloud label tags to our instances after the cluster has been spun up, using the Kops edit command:

kops edit cluster mycluster1.k8s.local

This will open up the cluster configuration file. Simply add the additional tags under spec:

spec:
  cloudLabels:
    Owner: hasham

Now update your cluster to add the new cloud label tags:

kops update cluster --name mycluster1.k8s.local --yes

We might also have to do a rolling update of the cluster for the new tags to take effect:

kops rolling-update cluster mycluster1.k8s.local --yes

Rolling update performs validation of the cluster and deems it valid as long as all required nodes and pods are operational. If you get a “No rolling-update required” message, you might have to force one:

kops rolling-update cluster mycluster1.k8s.local --yes --force

Doing this will add the new cloud label tag to both the AWS instances as well as the auto scaling groups:

[Screenshot: the new Owner cloud label tag on the AWS instances]
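To confirm the new label survived the rolling update, we can also query for it directly (a quick check via the AWS CLI):

# List all EC2 resources tagged with the new Owner key
aws ec2 describe-tags \
  --region eu-central-1 \
  --filters "Name=key,Values=Owner"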

We can also edit both the master and worker instance group configuration files to add new cloud label tags.

kops edit ig nodes

This will open up the configuration file for the instance group nodes.

Now add your additional tags under spec:

spec:
  cloudLabels:
    Owner: replex

Next update the cluster to apply the tags:

kops update cluster --name mycluster1.k8s.local --yes

And:

kops rolling-update cluster mycluster1.k8s.local --yes

Or, to force a rolling update:

kops rolling-update cluster mycluster1.k8s.local --yes --force

Doing this will add the new cloud label tags to the AWS instances that are part of that instance group:

[Screenshot: the new cloud label tag on the instance group's AWS instances]

We can also apply the same tags to the instance group for the master node via the kops edit command.
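Kops names master instance groups after their availability zone, so for our single-zone cluster the edit command should look something like the following (verify the exact instance group name with kops get ig first):

kops edit ig master-eu-central-1a --name mycluster1.k8s.local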

Cloud label tags can also be added to EC2 instances while creating new instance groups. To do this, we first create a yaml file with our instance group configuration, including the cloud label tags we want to attach to the new instances.

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: mycluster1.k8s.local
  name: more_nodes
spec:
  cloudLabels:
    Stack: Prod
    ApplicationName: Frontend
    Team: Kube
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: t2.small
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: more_nodes
  role: Node
  subnets:
  - eu-central-1a

Save the configuration file as new_ig.yaml. Next run the following command to create the instance group configuration in Kops:

kops create -f new_ig.yaml
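Before updating the cluster we can verify that the new instance group has been registered:

# The more_nodes instance group should show up alongside nodes and the master
kops get instancegroups --name mycluster1.k8s.local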

Now we need to update our cluster to spin up the new instance group:

kops update cluster mycluster1.k8s.local --yes

and:

kops rolling-update cluster mycluster1.k8s.local --yes

Or, if no rolling update is deemed required, force one:

kops rolling-update cluster mycluster1.k8s.local --yes --force

This will spin up a new instance group within our cluster with two t2.small AWS instances. Both of these instances will have the cloud label tags attached to them.

[Screenshot: cloud label tags on the new instance group's instances]

The new AWS auto scaling group will also have the cloud label tags attached.

[Screenshot: cloud label tags on the new auto scaling group]

Tagging AWS EBS volumes using Kops Cloud labels

Let’s now move on to EBS volumes. As of now there is no way to attach cloud label tags to EBS volumes directly via Kops. Adding cloud label tags to clusters or instance groups via Kops does not automatically tag the attached EBS volumes.

However, there is a pretty handy workaround authored by Mike Lapidakis that uses AWS Lambda to copy cloud label tags from the AWS instances to the attached EBS volumes. Using AWS Lambda might result in charges, so keep that in mind before following the steps below.

To do this we first need to create two IAM policies: one granting the EC2 and tagging permissions the function needs, and one allowing Lambda to write logs.

On your AWS console, go to IAM > Policies > Create Policy, click on JSON and copy the following code into the editor:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "ec2:CreateTags",
                "tag:GetResources",
                "tag:GetTagKeys",
                "tag:GetTagValues",
                "tag:AddResourceTags",
                "tag:RemoveResourceTags"
            ],
            "Resource": "*"
        }
    ]
}

Click on Review policy, give your policy a name and description and then click on Create policy.

Now create the Lambda logging policy by following the same steps with the following JSON code:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}

Next we need to create an IAM role and attach both of these policies to it. On your AWS console, go to IAM > Roles > Create role. Click on AWS service > Lambda and then click on Next: Permissions.

In the filter field search for the two policies we created earlier and select them. Now click on Next: Tags, optionally attach tags to the role, click on Next: Review, give the role a name and description and then click on Create role.

Next go to your AWS Lambda console and click on Create function. Choose Author from scratch, select Python 3.6 as the runtime, click on “Choose an existing role” under Role and select the role we created earlier. Then click on Create function.

[Screenshot: creating the Lambda function]

Scroll down to the code editor section and copy the following code into it:

import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.ERROR)

# Make sure the region matches your cluster's region
ec2 = boto3.resource('ec2', region_name="eu-central-1")

# Set this to True if you don't want the function to perform any actions
debugMode = False

def lambda_handler(event, context):
    base = ec2.instances.all()
    for instance in base:
        # Tag the attached EBS volumes with the instance's cloud label tags
        for vol in instance.volumes.all():
            if debugMode:
                print("[DEBUG] " + str(vol))
                tag_cleanup(instance, vol.attachments[0]['Device'])
            else:
                tag = vol.create_tags(Tags=tag_cleanup(instance, vol.attachments[0]['Device']))
                print("[INFO]: " + str(tag))

# ------------- Functions ------------------
def tag_cleanup(instance, detail):
    tempTags = []
    v = {}
    for t in instance.tags:
        # Pull the Name tag and append the device name to it
        if t['Key'] == 'Name':
            v['Value'] = t['Value'] + " - " + str(detail)
            v['Key'] = 'Name'
            tempTags.append(v)
        # Copy over the cloud label tags that should be written to the volume
        elif t['Key'] == 'Team':
            print("[INFO]: Team Tag " + str(t))
            tempTags.append(t)
        elif t['Key'] == 'Stack':
            print("[INFO]: Stack Tag " + str(t))
            tempTags.append(t)
        elif t['Key'] == 'ApplicationName':
            print("[INFO]: ApplicationName Tag " + str(t))
            tempTags.append(t)
        else:
            print("[INFO]: Skip Tag - " + str(t))
    print("[INFO] " + str(tempTags))
    return tempTags

The above function copies the cloud label tags we attached to our EC2 instances and applies them to the attached EBS volumes. Make sure you change the region name and cloud label tags to the ones matching your setup.

Next save your function and click on Test (the function ignores the event payload, so any test event will do).

You should see a success message on your Lambda function dashboard. Now go back to the Elastic Block Store section of the EC2 dashboard and click on one of the volumes. You will be able to see the cloud label tags we defined in our Lambda function attached to the EBS volumes.

[Screenshot: cloud label tags on an EBS volume]
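The same can be verified from the command line by filtering volumes on one of the copied tags:

# List volume IDs and tags for all volumes carrying the Team=Kube label
aws ec2 describe-volumes \
  --region eu-central-1 \
  --filters "Name=tag:Team,Values=Kube" \
  --query "Volumes[].{ID:VolumeId,Tags:Tags}"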

These are the same cloud label tags we attached to our Kubernetes cluster EC2 instances, spun up using Kops.
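Keep in mind that the function only tags volumes that exist when it runs, so volumes created later will stay untagged until the next run. One way to keep tags in sync is to trigger the function on a schedule, for example with a CloudWatch Events rule. A rough sketch (the rule name is ours and the ARN is a placeholder; substitute your own function’s ARN):

# Create a rule that fires every hour (hypothetical rule name)
aws events put-rule --name tag-k8s-volumes-hourly --schedule-expression "rate(1 hour)"
# Point the rule at the Lambda function (replace the ARN with your own)
aws events put-targets --rule tag-k8s-volumes-hourly --targets "Id"="1","Arn"="arn:aws:lambda:eu-central-1:<account-id>:function:<function-name>"

You will also need to allow CloudWatch Events to invoke the function, either via aws lambda add-permission or by adding the rule as a trigger in the Lambda console.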

Tagging AWS Network Interfaces using Kops Cloud labels

Similar to EBS volumes, the AWS network interfaces (ENIs) that are spun up as part of our cluster are not tagged with cloud label tags.

Here again, we can use an AWS Lambda function that copies the cloud label tags from our Kubernetes cluster’s EC2 instances and attaches them to the ENIs. Here is what the code for that function looks like:

import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.ERROR)

# Make sure the region matches your cluster's region
ec2 = boto3.resource('ec2', region_name="eu-central-1")

# Set this to True if you don't want the function to perform any actions
debugMode = False

def lambda_handler(event, context):
    base = ec2.instances.all()
    for instance in base:
        # Tag the attached network interfaces with the instance's cloud label tags
        for eni in instance.network_interfaces:
            if debugMode:
                print("[DEBUG] " + str(eni))
                tag_cleanup(instance, "eth" + str(eni.attachment['DeviceIndex']))
            else:
                tag = eni.create_tags(Tags=tag_cleanup(instance, "eth" + str(eni.attachment['DeviceIndex'])))
                print("[INFO]: " + str(tag))

# ------------- Functions ------------------
def tag_cleanup(instance, detail):
    tempTags = []
    v = {}
    for t in instance.tags:
        # Pull the Name tag and append the device name to it
        if t['Key'] == 'Name':
            v['Value'] = t['Value'] + " - " + str(detail)
            v['Key'] = 'Name'
            tempTags.append(v)
        # Copy over the cloud label tags that should be written to the ENI
        elif t['Key'] == 'Team':
            print("[INFO]: Team Tag " + str(t))
            tempTags.append(t)
        elif t['Key'] == 'Stack':
            print("[INFO]: Stack Tag " + str(t))
            tempTags.append(t)
        elif t['Key'] == 'ApplicationName':
            print("[INFO]: ApplicationName Tag " + str(t))
            tempTags.append(t)
        else:
            print("[INFO]: Skip Tag - " + str(t))
    print("[INFO] " + str(tempTags))
    return tempTags

Saving and running this Lambda function will copy the cloud label tags from our instances to our ENIs. Now if we check the Network Interfaces section of our EC2 dashboard, we will be able to see the attached tags:

[Screenshot: cloud label tags on a network interface]
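As before, this can be double-checked from the command line, since ENIs support the same tag filters as instances and volumes:

# List network interface IDs and tags for all ENIs carrying the Team=Kube label
aws ec2 describe-network-interfaces \
  --region eu-central-1 \
  --filters "Name=tag:Team,Values=Kube" \
  --query "NetworkInterfaces[].{ID:NetworkInterfaceId,Tags:TagSet}"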

Conclusion

In this blog post we covered the tagging process for AWS EC2 instances, EBS volumes, ENIs and auto scaling groups. A notable omission is AWS ELBs. Since ELBs for clusters spun up using Kops are provisioned by Kubernetes itself, it is not yet possible to attach cloud label tags to them directly via Kops.

Tagging AWS resources, however, is only half the story. Cost allocation and governance for larger deployments spanning multiple infrastructure variants requires a much more extensive tagging regime with a certain degree of automation.

Replex’s dedicated Kubernetes solution allows you to do granular cost allocation and chargeback for your Kubernetes clusters. It provides detailed cost insights based on both Kubernetes and infrastructure (cloud) groupings as well as custom organizational groupings like teams or applications. The optimize module also provides actionable intelligence on how to rightsize instances and reduce costs based on resource usage and utilization metrics.

Interested in learning how to allocate Kubernetes costs to individual teams and applications and make cost savings of up to 30% using Replex.io?

Request a Demo!
