Let's start by stating the obvious: cloud native is not about the cloud. Counterintuitive as that may sound, cloud native practices are not limited by the underlying infrastructure and can be adopted across any number of public, private, hybrid or on-premises infrastructures.
The same is true of cloud native tools, which can be deployed across "traditional" infrastructure technologies. These technologies, however, do need to emulate the cloud delivery model, which we will get into later. In other words, rather than being the cornerstone of cloud native, the cloud is only one of a number of tools that comprise the cloud native landscape.
Let’s also answer the why of cloud native. Cloud native, like most other technologies in the IT landscape, is driven by the demands for speed and agility placed on the modern software delivery life cycle. Modern applications need to evolve quickly to meet changing customer demands or gain a competitive advantage with innovative new features. Cloud native practices and tools allow organisations to do just that.
Cloud native is a set of tools and practices that allows organisations to build, deploy and operate software applications more frequently, predictably and reliably. Any organisation that builds and operates software applications using cloud native tools, and adopts cloud native practices to do so, can be said to be cloud native, irrespective of the underlying infrastructure.
The distinction between tools and practices is an important one, yet it is mostly ignored when describing a cloud native architecture. Cloud native tools are the specific technology pieces that go into the cloud native puzzle; practices refer to the underlying cultural dynamics of working with those tools. We will cover cloud native cultural practices in more detail later in the article.
Cloud native is a continuously evolving concept as new tools are developed and new practices take root in today's fast moving IT landscape. However, we can identify a baseline of tools and practices that are common to most cloud native architectures. In the next few sections we will identify and describe these tools and practices.
The order in which these concepts are presented does not reflect their relative importance or the order in which they should be implemented. Cloud native, like consciousness in the human brain, is an emergent quality of organisations that implement a specific set of tools and adopt the cultural practices that go along with them.
The cloud or more precisely the cloud delivery model of provisioning, consuming and managing resources is an essential part of cloud native architectures. Cloud native tools and cultural practices can only be adopted in an architecture that supports the on-demand dynamic provisioning of storage, compute and networking resources. This can be either a public cloud provider or an in-house private cloud solution that provides a cloud-like delivery model to IT teams.
Containers are the lifeblood of cloud native applications. Individual containers package application code and all the resources required to run that code into a discrete unit of software. Containerisation makes applications more manageable, allowing other technology pieces of the cloud native landscape to chip in and provide new and innovative solutions for application design, scalability, security and reliability.
Containerised applications are much more portable than their VM based counterparts and use the underlying resources more efficiently. They also have a much lower management and operational overhead.
Since containers are platform agnostic, they result in fewer integration issues and reduce testing and debugging in the developer workflow. The ease with which containers can be created, destroyed and updated leads to an overall acceleration in speed to market for new application features, allowing organisations to keep up with changing customer demands.
Microservices architecture is a way of architecting applications as a collection of services. It breaks applications down into easily manageable 'microservices', each performing a specific business function, owned by an individual team and communicating with other related microservices.
Microservices architecture fits in nicely with the cloud native cultural practice of self-contained, agile, autonomous teams with the knowledge and skills to develop, test, deploy and operate each microservice.
Breaking up applications in this way allows application components to be developed, deployed, managed and operated independently. This has major implications for developer productivity and deployment speed of applications. The loosely coupled nature of microservices applications also means that production issues in any one microservice do not lead to application wide outages. This makes it easier to contain production issues as well as respond and recover quickly.
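The shape of a microservices application can be sketched with nothing but the standard library: each service owns one business function and talks to its peers only over the network. The service names ("inventory", "orders"), ports and response fields below are hypothetical, chosen purely for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    # Owns a single business function: reporting stock levels.
    STOCK = {"widget": 5}

    def do_GET(self):
        body = json.dumps(self.STOCK).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

class OrdersHandler(BaseHTTPRequestHandler):
    # Performs its own function, but queries the inventory service over
    # HTTP instead of sharing code or a database with it.
    def do_GET(self):
        with urllib.request.urlopen("http://localhost:8081") as resp:
            stock = json.load(resp)
        result = {"can_order_widget": stock.get("widget", 0) > 0}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def serve(port, handler):
    """Start a service on its own port in a background thread."""
    server = ThreadingHTTPServer(("localhost", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    inventory = serve(8081, InventoryHandler)
    orders = serve(8082, OrdersHandler)
    with urllib.request.urlopen("http://localhost:8082") as resp:
        print(json.load(resp))  # {'can_order_widget': True}
    orders.shutdown(); orders.server_close()
    inventory.shutdown(); inventory.server_close()
```

Because the two services only share an HTTP contract, either one can be rewritten, redeployed or scaled without touching the other — the property the surrounding text describes.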
Breaking applications down into a collection of loosely coupled microservices leads to an increase in the volume of service to service communication. Most cloud native applications comprise hundreds of microservices, communicating in complex webs.
Service meshes manage this complex web of service to service communication at scale and make it secure, fast and reliable. Istio, Linkerd and Consul are prominent examples from the cloud native landscape. Service meshes work by decoupling communication protocols from application code and abstracting them into an infrastructure layer atop TCP/IP. This reduces the overhead for developers, who can concentrate on building new features rather than managing communications.
Continuous integration and delivery (CICD) can refer both to a set of practices and to the tools that support those practices, aimed at accelerating software development cycles and making them more robust and reliable. CICD tools automate crucial stages of the software release cycle. They also encourage a culture of shared responsibility: making small incremental changes to application code, continuously integrating and testing those changes, and using version control. CICD practices also extend to the delivery and deployment stages by ensuring new features are production ready once they pass the automated integration and testing phase.
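The staged, fail-fast model that CICD tools automate can be reduced to a few lines. This is a toy sketch, not any particular tool's engine; the stage names and checks are hypothetical.

```python
def run_pipeline(stages):
    """Run named stage functions in order; stop at the first failure."""
    results = []
    for name, stage in stages:
        try:
            stage()
            results.append((name, "passed"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # fail fast: later stages never run on a broken build
    return results

# Hypothetical stand-ins for real pipeline stages.
def build():      pass                # e.g. compile code, build an image
def unit_tests(): assert 1 + 1 == 2  # e.g. run the test suite
def deploy():     pass                # e.g. roll out to a staging cluster

if __name__ == "__main__":
    print(run_pipeline([("build", build),
                        ("test", unit_tests),
                        ("deploy", deploy)]))
```

The key property is that `deploy` is only ever reached by a commit that passed every earlier stage — which is what "production ready at all times" means in practice.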
Modern enterprise applications span multiple containerised microservices deployed across a number of public and private cloud hosts. Deploying, operating and scaling the fleets of containerised microservices making up these applications, while ensuring high availability is no easy task. This is where container orchestrators like Kubernetes shine.
Kubernetes makes it easier to provision, deploy and scale fleets of containerised microservices applications. It handles most of the mechanics of placing containers on hosts, load balancing across hosts as well as removing and re-spawning containers under the hood. Network and storage abstractions coupled with standardised resource and configuration definitions add an additional layer of portability on top of containerisation.
All of this makes Kubernetes an indispensable cog in cloud native environments. Kubernetes by itself does not qualify as cloud native, however no environment can truly be cloud native without some sort of orchestration engine at its heart.
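The mechanics described above — placing, removing and re-spawning containers to match a declared state — all rest on one reconciliation idea. The toy sketch below only handles replica counts (real Kubernetes runs many controllers over rich object types), and the app names are hypothetical.

```python
def reconcile(desired, actual):
    """Return the actions that move `actual` replica counts to `desired`.

    `desired` and `actual` map app name -> number of running replicas.
    """
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if want > have:
            actions.append(("start", app, want - have))   # scale up / re-spawn
        elif want < have:
            actions.append(("stop", app, have - want))    # scale down
    for app in actual:
        if app not in desired:
            actions.append(("stop", app, actual[app]))    # undeclared workloads are removed
    return actions
```

Run continuously against observed cluster state, a loop like this is what lets an orchestrator heal failures "under the hood": a crashed container simply shows up as `actual` falling below `desired`.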
Culture is a nebulous concept. While most agree that it exists and influences behaviour and practices within organisations it is not easy to pin down.
The same is true of the cultural component of cloud native systems and architectures. Cloud native culture is often confused with cloud native tools. While cloud native tools are relatively easy to integrate into pre-existing workflows for building and releasing software products, culture is hard to adopt or even define.
For the purposes of this article, we define culture as a collection of practices centered around the way organisations build, release and operate software products. Culture is the way these organisations build products, not the tools they use to build them.
A recent survey by the Replex team of IT practitioners at KubeCon Barcelona identifies cultural change as the biggest obstacle to cloud native adoption. It comes out ahead of complexity, planning and deployment and the lack of internal interest in terms of relative difficulty. In this section we will identify some of these practices that epitomise cloud native culture. Let’s start with the most obvious candidate, DevOps.
Even though DevOps predates cloud native it is widely considered an essential component of cloud native systems or at the very least an essential on ramp to cloud native culture. DevOps breaks down the silos that traditional dev and ops teams operated in by encouraging and facilitating open two way communication and collaboration. This leads to better team integration and promises accelerated software delivery and innovation.
SRE, an iteration of DevOps developed internally by Google, takes this one step further by taking a software centric view of operations. It encourages traditional developers to internalize operations skills including networking, system administration and automation.
The way in which cloud native applications are architected requires the formation of close knit cross-functional teams responsible for individual components (microservices) of applications. These teams have end to end responsibility for developing, testing, deploying and operating these components and therefore need to internalize a broad set of skills.
Better alignment with rapidly changing customer demands and the drive to gain competitive advantages with new features requires organisations to adopt a culture of frequent, small, rapid releases. CICD practices such as building in automation, shared responsibility and being production ready at all times are also crucial components of cloud native culture.
Now that we have wrapped our heads around the concept of a cloud native architecture let’s take another stab at defining a cloud native application.
A cloud native application is a set of multiple loosely coupled containerised microservices, deployed using an orchestration engine like Kubernetes with the cloud or a cloud like delivery model as an underlying layer.
Cloud native applications are not static entities however and evolve continuously in response to the external environment. This is where the wider constellation of supporting tools comes into play, which need to be adopted to enable teams to develop, test and deploy code more frequently and reliably.
All of this is supported by an underlying cultural layer which advocates the removal of silos in the software life cycle. One way to accomplish this is to create small, agile independent teams with end to end responsibility for developing, testing, deploying and operating applications. These cross functional teams comprising developers, DevOps and SREs would also be responsible for integrating tools, building in automation, monitoring, self-healing, managing performance and ensuring high availability.
So what distinguishes cloud native applications from traditional ones? In this section we will briefly review some features of cloud native applications that give them an edge over traditional applications.
Cloud native applications are inherently scalable. Some of this can be attributed to the built-in scalability of the underlying technologies that support them. Take Kubernetes, for example: it can scale both applications and the underlying infrastructure based on a number of business, application and server metrics. The same holds true for the cloud, which has built-in scalability mechanisms for most services.
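To make the Kubernetes example concrete, its Horizontal Pod Autoscaler documentation describes the core scaling rule as `desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)`. The sketch below is a simplified form of that rule — the real controller additionally applies tolerances and stabilisation windows.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Simplified Kubernetes HPA rule: scale replicas in proportion to
    how far the observed metric sits from its target."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# e.g. 4 pods averaging 90% CPU against a 60% target scale out to 6 pods;
# 6 pods averaging 30% against the same target scale back in to 3.
```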
As opposed to monitoring, observability is a property of a system. A system is observable if its current state, and in turn its performance, can be inferred from its outputs alone. Support for traces, metrics, events and logs in most of the underlying technology means that cloud native applications are highly observable.
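What "inferable from outputs" looks like in practice is structured, correlated telemetry: every log line carries a trace id so events from different services can be joined after the fact. The field names and services below are hypothetical, and real systems would use a library such as OpenTelemetry rather than this hand-rolled sketch.

```python
import json
import time
import uuid

def log_event(trace_id, service, event, **fields):
    """Emit one structured log record; the shared trace_id is what lets
    an observability backend stitch records into a request's story."""
    record = {"ts": time.time(), "trace_id": trace_id,
              "service": service, "event": event, **fields}
    print(json.dumps(record))
    return record

if __name__ == "__main__":
    trace = str(uuid.uuid4())
    # Two services, one request: same trace id in both records.
    log_event(trace, "checkout", "request_received", path="/pay")
    log_event(trace, "payments", "charge_created", latency_ms=42)
```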
Reliability and resilience are further features of cloud native applications. Because they are composed of multiple loosely coupled services, application wide shutdowns are rare. Since problems are contained, disaster recovery is also relatively easy. The disaster recovery mechanisms of the underlying technology, and the cloud native cultural practices of version control and using git as a single source of truth, add to this ease.
Manageability is another aspect made easier by the loosely coupled nature of cloud native applications. New features can be easily integrated and are immediately deployable using a number of deployment techniques. Individual application components can be updated without taking the entire application offline.
Immutability is a cultural aspect of cloud native architectures and plays an important role in how these applications are managed. Immutability is the practice of replacing containers instead of updating them in place. This practice is increasingly taking hold across cloud native environments and is cited by the CNCF as a defining feature of cloud native architectures and applications.
Declarative APIs are another feature the CNCF cites. Declarative APIs concentrate on outcomes rather than explicitly mapping out a set of actions. Cloud native applications are usually integrated using lightweight APIs such as representational state transfer (REST), Google's open source remote procedure call framework (gRPC) or NATS.
The CNCF cloud native landscape is a collection of open source and third party tools under the umbrella of the cloud native computing foundation. These tools cover most aspects of cloud native environments and are aimed at helping companies build end-to-end technology stacks for developing, deploying, operating and monitoring cloud native applications.
In the next section we will review some of these cloud native tools targeted towards specific aspects of cloud native environments. Let’s start with developer tools.
Kubernetes, while making it easier to deploy and operate containerized applications, has also introduced a number of abstractions into the application development workflow. These new abstractions include but are not limited to pods, nodes, namespaces and deployments. Developers need to familiarize themselves with these new abstractions and incorporate them into already existing development workflows.
The following tools allow developers to do just that by decluttering the development pipeline for Kubernetes based cloud native applications and reducing management overhead for developers.
Draft aids developers by providing two main features: Draft create, which automatically spins up the artifacts needed to run Kubernetes applications, and Draft up, which builds container images from code and deploys them to a Kubernetes cluster.
Skaffold allows developers to iterate on application code locally, build container images and deploy them to local or remote clusters, as well as providing a minimal CICD pipeline.
Telepresence accelerates application development by allowing developers to develop services locally, connect those services to remote clusters and automatically trigger updates whenever changes occur locally.
Okteto allows developers to spin up development environments in remote Kubernetes clusters, detects local code changes and synchronizes them to remote dev environments. By doing this it enables developers to work with their favorite tools locally, accelerates application development and reduces integration issues.
CICD tools aim to accelerate application development and delivery as well as reduce integration and production issues. Most CICD tools predate cloud native architectures and, as such, are not aligned with the specific requirements of these architectures. Some CICD providers have, however, developed variants targeted towards such architectures. Besides these, completely new CICD tools built from the ground up for cloud native architectures have also cropped up. In the next section we will review some of these tools.
Jenkins X allows developers to build CICD pipelines without having to know the internals of Kubernetes or keep up with the ever-growing list of its functionalities. It has a number of nifty built-in features including baseline CICD pipelines that incorporate DevOps and GitOps best practices, team and preview environments and feedback on issues and pull requests.
Gitlab is another feature rich CICD platform which integrates into the larger Gitlab suite of tools. One interesting feature is Auto DevOps, which spins up predefined CICD pipelines whenever new projects are created. Deploy boards are another, helping DevOps monitor the health and status of CI environments running on Kubernetes.
Argo is built from the ground up for Kubernetes based cloud native applications and leverages Kubernetes CRDs to implement CICD pipelines. This allows pipelines to be managed using native Kubernetes tools like kubectl and also means they integrate more broadly with other Kubernetes services. Argo monitors live applications and compares them to the version kept under version control. In the event of any divergence it automatically triggers a syncing mechanism.
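The comparison at the heart of a GitOps tool like Argo is a structural diff between the manifest stored in git and the live object in the cluster. The toy sketch below handles nested dicts only (real tools diff full Kubernetes objects and then sync the drifted fields); the manifest shape is hypothetical.

```python
def drift(desired, live, path=""):
    """Return (path, description) pairs where the live state has
    diverged from the desired state kept in git."""
    diffs = []
    for key in desired.keys() | live.keys():
        p = f"{path}/{key}"
        if key not in live:
            diffs.append((p, "missing in cluster"))
        elif key not in desired:
            diffs.append((p, "not in git"))
        elif isinstance(desired[key], dict) and isinstance(live[key], dict):
            diffs.extend(drift(desired[key], live[key], p))   # recurse into nested fields
        elif desired[key] != live[key]:
            diffs.append((p, f"{live[key]!r} != {desired[key]!r}"))
    return diffs

if __name__ == "__main__":
    in_git  = {"spec": {"replicas": 3, "image": "web:v2"}}
    in_live = {"spec": {"replicas": 2, "image": "web:v2"}}
    print(drift(in_git, in_live))
```

An empty result means the cluster matches git; anything else is the signal that triggers (or, in manual mode, merely reports) a sync.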
GoCD can be easily installed in Kubernetes clusters using a Helm chart which spins up a GoCD server and elastic agents as pods. CD pipelines can then be defined as either json or yaml files. GoCD also allows DevOps to import sample pipelines to get up and running quickly and configure them with native Kubernetes artefacts like secrets, service accounts and API tokens.
Cloud native applications are usually split into functional pieces called microservices that communicate with each other to perform higher order application functions. This leads to an increase in the volume of communication between application components and a correspondingly larger networking footprint. Cloud VMs, containers and Kubernetes, which are essential underlying technologies supporting cloud native applications, have their own networking requirements.
Cloud native networks tools make it easier to manage networking for cloud native applications. In the next section we will review some of these tools featured in the CNCF cloud native landscape.
Weave Net is a virtual layer 2 network that connects containers on the same host or across multiple hosts. As opposed to most other cloud native network tools, it does not require an external data store, significantly reducing the overhead for developers and DevOps managing these networks. Weave Net uses VXLan encapsulation, encrypts traffic using either IPsec ESP or NaCl, supports partially connected networks and automatically forwards traffic via the fastest path between two hosts. Besides this it also supports Kubernetes network policies, service discovery and load balancing.
Calico functions without encapsulation as a layer 3 network as well as with encapsulation using either VXlan or IP-In-IP. It can also dynamically switch between the two based on whether traffic traverses a subnet boundary or stays within it. Calico can use both etcd as well as the Kubernetes API datastore and supports TLS encryption for communication between etcd and Calico components. It natively integrates into managed Kubernetes services from most public cloud providers, and supports both unicast and anycast IP. It also provides a native network policy resource that expands on the Kubernetes network policy feature set.
Flannel deploys a virtual layer 2 overlay network that spans the entire cluster and can use both VXlan and host-gw as a backend. UDP can also be used, however it is only recommended for debugging. Similar to most other cloud native networks it uses etcd as a datastore, and can be run using IPsec or Wireguard backends to encrypt traffic. Unlike most other cloud native networks, however, Flannel does not support the Kubernetes network policy resource.
Cilium is an open source cloud native network built on top of BPF. BPF enables it to perform filtering at the kernel level as well as support highly scalable load balancing for traffic between containers and to external services. Cilium supports both VXlan and Geneve encapsulation, can be configured to use either etcd or Consul as data stores and also supports IPsec encryption. It extends the Kubernetes network policy resource to add support for layer 7 policy enforcement on ingress and egress for HTTP and Kafka, as well as egress support for CIDRs.
Contiv is an open source cloud native network from Cisco that supports multiple operational modes including L2, L3, overlay and ACI. It uses etcd as a datastore and has its own built-in network policy resource that replaces the vanilla Kubernetes network policy resource. The built-in policy resource supports both bandwidth policies, which allow users to control the overall resource use of a group of containers, and isolation policies, which allow them to control access to a group of containers. Contiv supports overlapping IPs across hosts and enables multi-tenancy. It uses the DNS protocol for service discovery, does not require queries to external data stores for IP or port information, has built-in support for service load balancing and allows admins to manage users, authorization and LDAP.
In the previous section we outlined multiple cloud native network tools. At first sight these tools seem to have a lot in common with service meshes. There are some important differences however. Managing inter-service communications at the scale required by today’s microservices based enterprise applications quickly becomes infeasible with existing networking tools. Securing, monitoring and orchestrating these communications as well as implementing observability paradigms like tracing and logging add additional complexity.
Service meshes operate alongside cloud native network tools and extend their feature-set by adding security, orchestration, tracing and logging for inter-service communication. Service mesh tools create an abstraction layer on top of microservices allowing DevOps to manage, orchestrate, monitor, observe and secure the communications between those services.
Service meshes are composed of two main components: the control plane and the data plane. In Kubernetes environments the data plane is usually a proxy like envoy deployed alongside a microservice as a side-car container. Proxies handle all traffic to and from the microservice based on policies configured in the control plane.
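The division of labour between the two planes can be sketched in a few lines: the control plane pushes a policy down to each proxy, and the proxy applies it to every call so that application code never has to. This is a toy stand-in for a real data plane like envoy; the policy fields (`retries`, `backoff_s`) are hypothetical.

```python
import time

class Proxy:
    """Toy data-plane proxy: applies a retry policy configured in the
    control plane to every upstream call it forwards."""

    def __init__(self, policy):
        self.policy = policy  # pushed down from the control plane

    def call(self, upstream, request):
        last_error = None
        for _attempt in range(self.policy["retries"] + 1):
            try:
                return upstream(request)
            except ConnectionError as exc:
                last_error = exc                      # transient failure:
                time.sleep(self.policy["backoff_s"])  # back off, then retry
        raise last_error  # retries exhausted; surface the failure
```

Because retries, timeouts and encryption live in the proxy, operators can change them cluster-wide from the control plane without redeploying a single service — the decoupling the surrounding text describes.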
Istio is one of the most popular service mesh tools in the CNCF cloud native landscape. It is very well integrated into Kubernetes, both in standalone Kubernetes environments as well as managed Kubernetes offerings from major cloud providers. Istio uses an extended version of the envoy proxy and deploys it alongside each microservice pod in Kubernetes environments. It has a broad feature set allowing DevOps to configure and create policies for circuit breakers, timeouts, retries, A/B testing, canary rollouts and staged rollouts. Security features include support for mTLS encryption, authentication and authorization as well as certificate management. DevOps can also monitor service metrics including ones for latency and traffic and gain access to distributed traces and logs.
Similar to Istio, Consul from Hashicorp uses envoy as a proxy and can be easily installed in cloud native Kubernetes environments as well as managed Kubernetes offerings. Consul works by injecting a Connect sidecar (running the envoy proxy) alongside each pod in the cluster. Consul’s L7 traffic management feature supports A/B testing, Blue/Green deployments, circuit breaking, fault injection and bespoke policies to manage ingress and egress traffic. Services can be registered manually or automatically (using container orchestrators) with a dedicated registry that keeps track of all running services and their health. ACLs allow DevOps to manage authentication and authorization and secure inter-service communication. It also supports mTLS encryption and provides multiple certificate management tools including a built-in CA system. Metrics are captured for all envoy proxies in a Prometheus time series, which can then be graphed in Grafana. Distributed traces and logs are also supported as part of the observability feature-set.
Kuma is an open source, platform agnostic service mesh from Kong that operates equally well across multiple platforms including Kubernetes, VMs and bare metal. It uses the envoy proxy and stores all of its state and configuration in the Kubernetes API server. Kuma injects an instance of the kuma-dp sidecar container alongside each service pod. Kuma-dp in turn invokes the envoy proxy and connects to the Kuma control plane, which can be used to create and configure the service mesh. Once installed, DevOps can configure routing rules for Blue/Green deployments and canary releases as well as manage communication dependencies between services. Inter-service traffic is encrypted using mTLS, which can also be used for AuthN/Z. It also provides both a native certificate authority and support for multiple third party ones. DevOps can collect metrics across all data planes using Prometheus and graph them using pre-built Grafana dashboards. They can also configure policies for health checks, distributed tracing and logs.