In this instalment of our Kubernetes best practices series we review the concepts of Kubernetes tenants and multi-tenancy, identify the challenges that have to be overcome and outline best practices for DevOps and cluster admins operating multi-tenant Kubernetes clusters.
Hasham Haider
April 20, 2020
7 minute read
Multi-tenancy is a hot topic these days in the cloud native Kubernetes world. We experienced this increased interest first hand at KubeCon Barcelona back in May last year and more recently at KubeCon San Diego. Between them, the two KubeCons had a total of 12 sessions and talks dedicated to multi-tenant Kubernetes.
Multi-tenancy is a hard nut to crack in the context of Kubernetes. This is borne out by the results of a survey we conducted, in which 74% of respondents indicated that they use separate clusters for individual teams, projects and environments. One reason for this is the security and resource isolation challenges that come with multi-tenant Kubernetes clusters.
In this article we will briefly review the concepts of Kubernetes tenants and multi-tenancy, identify the challenges that have to be overcome and outline best practices for DevOps and cluster admins operating multi-tenant Kubernetes clusters.
The Kubernetes multi-tenancy SIG defines a tenant as a group of Kubernetes users that has access to a subset of cluster resources (compute, storage, networking, control plane and API resources), together with resource limits and quotas governing the use of those resources. Resource limits and quotas lay out tenant boundaries. These boundaries extend to the control plane, allowing for grouping of the resources owned by a tenant, limited access or visibility to resources outside the tenant's domain, and tenant authentication.
Multi-tenant Kubernetes clusters are ones that share cluster resources among tenants. Based on the definition provided earlier, tenants can be anything from groups of users to distributed teams, customers, applications, departments or projects. Examples include multiple distributed DevOps teams deploying applications to the same cluster, staging and production environments residing on the same cluster, Kubernetes clusters being shared among multiple end-users or customers, and multiple applications or workloads sharing a cluster's resources.
There are two multi-tenancy models in Kubernetes: Soft and Hard multi-tenancy.
Soft multi-tenancy trusts tenants to be good actors and assumes them to be non-malicious. It focuses on minimising accidents and managing the fallout when they do happen.
Hard multi-tenancy assumes tenants to be malicious and therefore advocates zero trust between them. Clusters are configured in a way that isolates tenant resources and prevents access to other tenants' resources.
Let’s move on now to multi-tenancy best practices in the context of Kubernetes. In the section below we will outline best practices for DevOps and cluster administrators operating multi-tenant Kubernetes clusters.
A starter best practice in the context of Kubernetes multi-tenancy is to categorize namespaces into groups. Three such namespace groups are recommended:
System namespaces: These namespaces should be reserved exclusively for system pods.
Service namespaces: These namespaces should run services or applications that need to be accessed by services in other namespaces.
Tenant Namespaces: Tenant namespaces should be spun up to run applications that do not need to be accessed from other namespaces in the cluster.
Once namespaces are categorized, a best practice is to create separate namespaces for individual tenants: for example, a separate namespace for each DevOps team, or individual namespaces housing the dev and staging environments.
Tenants can also own more than one namespace. Cluster admins do, however, need to ensure that only the members of that specific team have access to the namespace and that only they can perform operations on the Kubernetes objects in it.
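As a minimal sketch of the namespace-per-tenant pattern, the manifest below creates a namespace for a hypothetical team-a tenant; the name and label are illustrative assumptions, not prescribed values:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Illustrative tenant label; makes it easy to target this namespace
    # later with network policies, quotas and other tenant-wide controls
    tenant: team-a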
Another best practice is to create a hierarchy of cluster personas scoped to varying levels of permissions and the operations they can perform. The Kubernetes multi-tenancy SIG outlines four such personas:
Cluster admin: Full read/write privileges for all resources in the cluster including those owned by tenants
Cluster view: Read privileges for all resources in the cluster including those owned by tenants
Tenant admin: Permissions to create new tenants, read/write resources scoped to that tenant, update or delete created tenants, no privileges to read/write resources scoped to other tenants/namespaces
Tenant user: Read/write privileges for all resources scoped to that tenant
Teams can pick and choose between these personas based on the size and requirements of their application stack, team composition, and other organisational factors.
Enabling RBAC is another best practice in the context of Kubernetes multi-tenancy. Once enabled, cluster administrators can create the previously mentioned personas using the four API objects provided by RBAC: Roles, ClusterRoles, RoleBindings and ClusterRoleBindings. Disabling ABAC (Attribute Based Access Control) and static file based access control is also recommended.
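As a sketch of how the tenant user persona could be expressed with RBAC, the manifests below grant a hypothetical group team-a-users read/write access to common namespaced resources in the team-a namespace; the group name, namespace and resource list are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-user
  namespace: team-a
rules:
# Read/write access to common workload resources, scoped to this namespace only
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "services", "configmaps", "deployments", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-user-binding
  namespace: team-a
subjects:
# Hypothetical group of tenant users, e.g. as asserted by your identity provider
- kind: Group
  name: team-a-users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-user
  apiGroup: rbac.authorization.k8s.io

Because the binding is a RoleBinding rather than a ClusterRoleBinding, the permissions stop at the namespace boundary, which matches the tenant user scope described above.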
Network policies allow cluster admins to govern how groups of pods can communicate. A best practice is for admins to isolate tenant namespaces using the network policy resource; a sample policy is sketched below. When choosing a networking plugin, cluster admins should also ensure that it supports Kubernetes network policies.
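A minimal sketch of such an isolation policy, assuming a tenant namespace named team-a: the empty podSelector selects every pod in the namespace, and the single ingress rule only admits traffic from pods in that same namespace, cutting off traffic from other tenants:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: team-a
spec:
  # Applies to all pods in the team-a namespace
  podSelector: {}
  ingress:
  # Allow ingress only from pods in the same namespace
  - from:
    - podSelector: {}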
Admission controllers allow cluster admins to govern and control how Kubernetes clusters are used. Admission controllers can mutate incoming API requests or reject them outright.
A best practice for cluster administrators operating multi-tenant Kubernetes clusters is to enable the PodSecurityPolicy admission controller. PodSecurityPolicy kicks in on pod creation and modification to ensure that the new pod specification meets a defined set of conditions. A sample PodSecurityPolicy that cluster admins can adapt is sketched below.
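A minimal sketch of a restrictive policy; the name tenant-restricted and the exact restrictions are illustrative assumptions, not a one-size-fits-all baseline:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant-restricted
spec:
  # Disallow privileged containers and privilege escalation
  privileged: false
  allowPrivilegeEscalation: false
  # Keep tenant pods out of the host's network and process namespaces
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Whitelist of allowed volume types
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim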
Besides PodSecurityPolicy, cluster admins should also enable admission controllers such as ResourceQuota and LimitRanger, which enforce the namespace resource quotas and limits discussed next.
To ensure resources are not wasted and avoid disproportionate resource usage across tenants, a best practice is to implement Kubernetes namespace resource quotas. Resource quotas allow for control of total resource usage (CPU, memory, storage) for the entire namespace scoped to a single tenant. Cluster admins should also ensure that tenants cannot create, update, patch or delete resource quotas.
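A sketch of such a quota, again assuming a tenant namespace named team-a; the actual limits below are illustrative and should be sized to each tenant's needs:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    # Aggregate CPU and memory across all pods in the namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    # Total storage requested by all PersistentVolumeClaims
    requests.storage: 50Gi
    pods: "20"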
Another best practice in the context of Kubernetes multi-tenancy is to ensure that tenants do not have access to non-namespaced resources. Non-namespaced resources do not belong to any specific namespace; they are scoped to the cluster as a whole. Cluster admins should ensure that tenants cannot view, edit, create or delete cluster-scoped resources such as Node, ClusterRole and ClusterRoleBinding.
To see a list of all non-namespaced resources:
kubectl --kubeconfig=<cluster_admin_kubeconfig> api-resources --namespaced=false
To see whether tenants can perform operations on resources:
kubectl --kubeconfig=<tenant_kubeconfig> auth can-i <verb> <resource>
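For example, with a hypothetical tenant kubeconfig file tenant-a.kubeconfig, a properly confined tenant should not be able to list nodes:

kubectl --kubeconfig=tenant-a.kubeconfig auth can-i list nodes
# Expected output: no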
Continuing from the previous best practice, cluster admins should also ensure that tenants cannot access namespaced resources belonging to other tenants. Namespaced resources are ones that are scoped to a specific namespace. Cluster admins should therefore ensure that tenants cannot view, edit, create or delete namespaced resources belonging to other tenants.
To see a list of all namespaced resources belonging to a tenant:
kubectl --kubeconfig=<tenant_kubeconfig> api-resources --namespaced=true
To see whether tenants can perform operations on namespaced resources belonging to other tenants:
kubectl --kubeconfig=<tenant_kubeconfig> -n <namespace_name> <verb> <resource>
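For example, assuming two hypothetical tenant namespaces team-a and team-b, a request by tenant A against tenant B's namespace should be rejected:

kubectl --kubeconfig=tenant-a.kubeconfig -n team-b get pods
# Expected output: Error from server (Forbidden) ...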
There is a subset of namespaced resources that should not be accessible to tenants of that namespace. These include the default network policy, namespace resource quotas and role bindings. Any changes to these resources can impact other tenants and their ability to consume or access resources. A best practice therefore is for cluster admins to ensure that this type of resource cannot be modified by tenants.
To view a list of resources managed by the cluster admin (assuming admin-managed resources carry a label, represented by the placeholders below):
kubectl --kubeconfig=<cluster_admin_kubeconfig> -n <namespace_name> get all -l <label_key>=<label_value>
To verify that the resource cannot be modified by the namespace tenant (this requires labelling resources managed by the cluster admin):
kubectl --dry-run=true --kubeconfig=<tenant_kubeconfig> -n <namespace_name> annotate <resource> <resource_name> key1=value1
# Expected: the request should be denied (Forbidden)
Tenants can use host volumes or directories to access shared data or escalate privileges. A best practice therefore is for cluster admins to prevent tenants from creating pods with hostPath volumes. Cluster admins can do this natively using Kubernetes PodSecurityPolicy; one example of a policy that prevents mounting host paths is sketched below.
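A minimal sketch: the policy simply omits hostPath from the whitelist of allowed volume types, so pods requesting hostPath volumes are rejected (the policy name is illustrative):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-host-path
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # hostPath is deliberately absent from this whitelist
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim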
The Kubernetes multi-tenancy SIG provides an e2e (end-to-end) test tool that can be used to validate multi-tenant Kubernetes clusters. The e2e tool validates clusters against a set of recommended multi-tenancy benchmarks.
To run the tool do the following:
git clone https://github.com/kubernetes-sigs/multi-tenancy.git
cd multi-tenancy/benchmarks
Edit the config file (config.yaml) with your cluster configuration.
Run tests:
go test ./e2e/tests
Or with a path to the config file:
go test ./e2e/tests -config <path_to_config.yaml>
Want to learn more? Download the Complete Best Practices Checklist with Checks, Recipes and Best Practices for Resource Management, Security, Scalability and Monitoring for Production-Ready Kubernetes.