[Kubernetes, Kubernetes Labels, Kubernetes Best Practices]

9 (additional) Best Practices for Working with Kubernetes Labels and Label Selectors

This is the second instalment in our Kubernetes Labels and Label Selectors Best Practices blog series. In this article, we dive deeper into best practices for Label Selectors in the context of Kubernetes Controllers.

Hasham Haider

February 22, 2019

4 minute read

We recently wrote about 9 best practices for working with Kubernetes labels. This article will expand on the earlier post by looking at best practices for both labels and label selectors, in the context of Kubernetes controllers.

But first a quick word about Kubernetes labels and label selectors.

Kubernetes labels and label selectors are powerful tools and together allow you to identify, organize, select and operate on Kubernetes objects in bulk.

Kubernetes labels are key/value pairs that can be attached to Kubernetes objects. Labels are meant to specify identifying attributes of Kubernetes objects.

Label selectors are exactly what their name says. They allow you to select Kubernetes objects based on their labels and operate on them as a group.
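As a minimal sketch, labels are attached under the `metadata` field of an object's config. The label keys and values below (`app`, `environment`) are illustrative conventions, not required names:

```yaml
# A pod carrying two key/value labels.
# A selector such as "environment = production" would match this pod.
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  labels:
    app: frontend
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```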

One way to use label selectors is to use them to identify the pods that Kubernetes controllers manage. These controllers include ReplicaSets, ReplicationControllers, Deployments, StatefulSets and more.

Always specify label selector

We cannot overstate the importance of labelling Kubernetes objects. When creating controllers, always ensure you specify labels for the controller itself. This can be done in the .metadata.labels field of the config. A best practice is to give controllers the same labels as the pod template (.spec.template.metadata.labels).

If you don't explicitly specify labels for the controller, Kubernetes will use the pod template labels as the default labels for the controller itself. The pod selector will also default to the pod template labels if unspecified. This behaviour has changed in API version apps/v1 (available in Kubernetes 1.8 and later), in which the label selector and controller labels no longer default to the pod template labels.

When using API version apps/v1 (Kubernetes 1.8 or later), always specify the label selector and controller labels explicitly.
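A minimal sketch of a Deployment under apps/v1 with all three specified explicitly: the controller labels (.metadata.labels), the label selector (.spec.selector) and the pod template labels (.spec.template.metadata.labels). The names and label values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:            # controller labels, set explicitly
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:     # must match the pod template labels below
      app: frontend
  template:
    metadata:
      labels:        # pod template labels
        app: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```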

Don’t forget to specify labels for the pod template

Pod templates are required for most Kubernetes controllers including ReplicaSets, ReplicationControllers, Deployments, DaemonSets and StatefulSets. Pod templates are used by other Kubernetes objects and controllers to create pods.

Every pod template (.spec.template) itself requires appropriate labels to be specified. Labels can be specified in the .spec.template.metadata.labels field of the config.

Ensure pod template label is the same as label selector

Label selectors are used to identify, group and select Kubernetes objects. Kubernetes controllers use label selectors (.spec.selector) to ensure that the desired number of pods with the same label key and value as the selector are operational.

When spinning up Kubernetes controllers like ReplicaSets, Deployments or StatefulSets, always ensure that the label selector is the same as the pod template label. If the label selector and pod template labels differ, the configuration will be rejected by the Kubernetes API.
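For instance, the following illustrative fragment would be rejected by the API server, because the selector does not match the pod template labels:

```yaml
# Invalid: the selector does not match the template labels,
# so the API server rejects the config with an error along the
# lines of "selector does not match template labels".
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: backend   # mismatch: should be "frontend"
```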

Avoid label overlap among label selectors and pod labels

Once you have created a controller with a certain label selector, you should avoid creating other controllers or pods which match that selector. If you do create other pods with the same labels, either directly or via other controllers, they will cause contention between the controllers and prevent them from functioning properly.

Do not specify pod selector for a Job

When creating a Job controller, the pod selector field is optional and should not be specified for most use-cases. Kubernetes automatically assigns the Job a pod selector that does not overlap with the label selectors or labels of other controllers or pods.

If your use-case requires you to specify a pod selector for a Job controller, make sure it does not overlap with the label selectors of other Jobs or controllers. The Job's pod selector should also not match the labels of unrelated pods.
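A minimal Job sketch with no .spec.selector, letting Kubernetes generate a unique, non-overlapping selector (the name and workload are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-calc
spec:
  # No .spec.selector specified: Kubernetes automatically assigns
  # a unique pod selector that does not overlap with the selectors
  # or labels of other controllers or pods.
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
```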

Do not change or update the pod-template-hash label for Deployments

The pod-template-hash label is added by the Deployment controller to every ReplicaSet it creates or manages. It ensures that these ReplicaSets do not overlap. Changing or updating this label is therefore not recommended.

Avoid label selector updates

Label selector updates are not recommended and should be avoided. However, if you do add new selectors, make sure that you also add them as corresponding labels in the pod template. Failing to update the pod template labels will result in a validation error.

When you change the key of a label selector you should also make the corresponding change in the pod template label key.

This behaviour has, however, changed in API version apps/v1, in which label selectors are immutable and can no longer be updated or changed.

Manipulate label selectors for debugging

Label selectors can be used as a quick and easy way to debug pods. This can be done by removing the labels that match the label selector of the relevant controller from a pod. This will result in the pod being orphaned and a new pod being spun up by the controller to replace it. The isolated pod can then be analyzed safely without impacting the production environment.
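As a sketch, assuming a ReplicaSet that selects on `app=frontend` and a pod with the hypothetical name `frontend-abc12`, overwriting the matching label detaches the pod from the controller:

```shell
# Overwrite the selector-matching label so the controller no longer owns the pod.
# The ReplicaSet then spins up a replacement pod automatically.
kubectl label pod frontend-abc12 app=debug --overwrite

# The orphaned pod can now be inspected in isolation.
kubectl describe pod frontend-abc12
```

Because the replacement pod keeps serving traffic, the orphaned copy can be debugged at leisure and deleted afterwards.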

Manage multiple releases with labels and controllers

Labels are also a handy tool to manage and operate multiple releases simultaneously. This is especially useful for canary deployments.

For example, a service that targets pods with the “frontend” and “production” labels can be made to target a canary release by setting up a separate ReplicaSet for the canary pod and assigning the labels “frontend”, “production” and “canary” to it. The stable pods will also be part of a separate ReplicaSet and assigned the labels “frontend”, “production” and “stable”. This way the service will target both canary and stable releases.
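One sketch of this setup, with illustrative names and label keys: the Service selects only on the shared labels, so it targets pods from both ReplicaSets, while the canary ReplicaSet adds its own `track` label:

```yaml
# Service targets both tracks by selecting only the shared labels.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
    environment: production   # no "track" key: matches canary and stable pods
  ports:
  - port: 80
---
# Canary ReplicaSet: its pods carry the extra "track: canary" label.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
      environment: production
      track: canary
  template:
    metadata:
      labels:
        app: frontend
        environment: production
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.26   # canary image version (illustrative)
```

The stable ReplicaSet would be identical except for `track: stable` and the stable image version.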

In this blog post we dived into best practices for label selectors in the context of Kubernetes controllers. We have also outlined best practices for Kubernetes labels in an earlier blog post which complement the ones outlined here. DevOps and IT managers are best placed to develop conventions around Kubernetes labels and ensure consistent organization-wide use. Integrating these best practices into development and operations toolkits will save a lot of production pain down the line.

Ready for Kubernetes in Production? Download the Kubernetes Production Readiness and Best Practices Checklist.


