
A Comprehensive Guide to Cloud Native Application Architecture: Tools, Culture and Features

In this guide to Cloud Native Application Architecture we outline the tools, cultural practices and concepts that come together to create a cloud native organisation. We also outline the features of cloud native applications that give them an edge over traditional monolithic applications.

Hasham Haider

February 3, 2020

8 minute read

Let's start by stating the obvious: cloud native is not about the cloud. Counterintuitive as that may sound, cloud native practices are not limited by the underlying infrastructure and can be adopted across any number of public, private, hybrid or on-premises infrastructures.

The same is true of cloud native tools, which can be deployed across "traditional" infrastructure technologies. These technologies, however, do need to emulate the cloud delivery model, which we will get into later. This essentially means that instead of being a cornerstone of cloud native, the cloud is only one of a number of tools that make up the cloud native landscape.

Let's also answer the why of cloud native. Cloud native, like most other technologies in the IT landscape, is driven by the demands for speed and agility placed on the modern software delivery life cycle. Modern applications need to evolve quickly to meet changing customer demands or gain a competitive advantage with innovative new features. Cloud native practices and tools allow organisations to do just that.

What is Cloud Native?

Cloud native is a set of tools and practices that allows organisations to build, deploy and operate software applications more frequently, predictably and reliably. Any organisation that builds and operates software applications using cloud native tools, and adopts cloud native practices to do so, can be said to be cloud native irrespective of the underlying infrastructure.

The distinction between tools and practices is an important one that is mostly ignored when describing cloud native architectures. Cloud native tools are the specific technology pieces that go into the cloud native puzzle; practices refer to the underlying cultural dynamics of working with those tools. We will cover cloud native cultural practices in more detail later on in the article.

Cloud native is a continuously evolving concept as new tools are developed and new practices take root in today's fast-moving IT landscape. However, we can identify a baseline of tools and practices that are common to most cloud native architectures. In the next few sections we will identify and describe these tools and practices.

The order in which these concepts are presented does not reflect their relative importance or the order in which they should be implemented. Cloud native, like consciousness in the human brain, is an emergent quality of organisations that implement a specific set of tools and adopt the cultural practices that go along with them.

Cloud Native Tools

Cloud Delivery Model

The cloud, or more precisely the cloud delivery model of provisioning, consuming and managing resources, is an essential part of cloud native architectures. Cloud native tools and cultural practices can only be adopted in an architecture that supports the on-demand, dynamic provisioning of storage, compute and networking resources. This can be either a public cloud provider or an in-house private cloud solution that provides a cloud-like delivery model to IT teams.

Containers

Containers are the lifeblood of cloud native applications. Individual containers package application code and all the resources required to run that code into a discrete unit of software. Containerisation makes applications more manageable, allowing other technology pieces of the cloud native landscape to chip in and provide new and innovative solutions for application design, scalability, security and reliability.

Containerised applications are much more portable than their VM-based counterparts and use the underlying resources more efficiently. They also have a much lower management and operational overhead.

Since containers are platform agnostic, they lead to fewer integration issues and less testing and debugging in the developer workflow. The ease with which containers can be created, destroyed and updated leads to an overall acceleration in speed to market for new application features, allowing organisations to keep up with changing customer demands.
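
To make this concrete, below is a minimal sketch, in Go, of the kind of discrete, self-contained unit of software that typically gets packaged into a container image. The service name, port and endpoint are hypothetical; reading configuration from the environment is what lets the same image run unchanged on a laptop, a VM or any cloud.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Configuration comes from the environment, so the same binary (and the
	// container image built from it) runs unchanged on any host.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	// A health endpoint of the kind orchestrators and load balancers probe.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```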

Microservices

Microservices architecture is a way of architecting applications as a collection of services. It breaks applications down into easily manageable 'microservices', each performing a specific business function, owned by an individual team and communicating with other related microservices.

Microservices architecture fits in nicely with the cloud native cultural practice of self-contained, agile, autonomous teams that have the knowledge and skills to develop, test, deploy and operate each microservice.

Breaking up applications in this way allows application components to be developed, deployed, managed and operated independently. This has major implications for developer productivity and the speed at which applications can be deployed. The loosely coupled nature of microservices applications also means that production issues in any one microservice do not lead to application-wide outages. This makes it easier to contain production issues as well as to respond and recover quickly.
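
As an illustration, here is a hedged sketch of what one such microservice might look like in Go. The "pricing" service, its endpoint and the hard-coded response are all hypothetical; the point is that a single team owns one narrowly scoped business capability and exposes it over a simple API, so it can be developed and redeployed without touching the rest of the application.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type priceResponse struct {
	SKU      string  `json:"sku"`
	Currency string  `json:"currency"`
	Amount   float64 `json:"amount"`
}

// priceHandler implements the single business function this service owns.
func priceHandler(w http.ResponseWriter, r *http.Request) {
	sku := r.URL.Query().Get("sku")
	if sku == "" {
		http.Error(w, "missing sku", http.StatusBadRequest)
		return
	}
	// A real service would consult its own datastore; a fixed value keeps the
	// sketch self-contained.
	resp := priceResponse{SKU: sku, Currency: "EUR", Amount: 42.00}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/price", priceHandler)
	log.Println("pricing service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```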

Service Mesh

Breaking applications down into a collection of loosely coupled microservices leads to an increase in the volume of service-to-service communication. Most cloud native applications comprise hundreds of microservices communicating in complex webs.

Service meshes manage this complex web of service-to-service communication at scale and make it secure, fast and reliable. Istio, Linkerd and Consul are prominent examples from the cloud native landscape. Service meshes work by decoupling communication concerns from application code and abstracting them into an infrastructure layer atop TCP/IP. This reduces the overhead for developers, who can concentrate on building new features rather than managing communications.
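
To see what that buys developers, consider the per-request timeouts, retries and backoff that would otherwise have to be written into every service. The hedged Go sketch below shows that plumbing living in application code; with a service mesh the same policy is declared once at the infrastructure layer and enforced by sidecar proxies instead. The pricing URL is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchWithRetries is exactly the kind of communication logic a service mesh
// lifts out of application code: per-attempt timeouts, retries and backoff.
func fetchWithRetries(ctx context.Context, url string) ([]byte, error) {
	client := &http.Client{Timeout: 2 * time.Second} // per-attempt timeout

	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err != nil {
			lastErr = err
		} else if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			lastErr = fmt.Errorf("attempt %d: unexpected status %d", attempt, resp.StatusCode)
		} else {
			defer resp.Body.Close()
			return io.ReadAll(resp.Body)
		}
		time.Sleep(time.Duration(attempt) * 200 * time.Millisecond) // simple backoff
	}
	return nil, lastErr
}

func main() {
	// Hypothetical in-cluster address of a neighbouring microservice.
	body, err := fetchWithRetries(context.Background(), "http://pricing.internal/price?sku=ABC-123")
	if err != nil {
		fmt.Println("pricing service unavailable:", err)
		return
	}
	fmt.Println(string(body))
}
```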

Continuous Integration and Delivery

Continuous integration and delivery (CICD) can refer both to a set of practices and to the tools that support those practices, aimed at accelerating software development cycles and making them more robust and reliable. CICD tools automate crucial stages of the software release cycle. They also encourage a culture of shared responsibility, small incremental changes to application code, continuous integration and testing of those changes, and the use of version control. CICD practices also extend to the delivery and deployment stages by ensuring new features are production ready once they have passed the automated integration and testing phases.
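
As a small, hedged illustration of the "continuously integrating and testing" part, here is the kind of fast unit test a CICD pipeline would run on every commit (saved as pricing_test.go and executed with go test). The package, function and figures are hypothetical, and addVAT is defined in the test file only to keep the sketch self-contained.

```go
package pricing

import "testing"

// addVAT returns the gross amount in cents for a net amount in cents and a
// VAT rate expressed in basis points (e.g. 1900 = 19%).
func addVAT(netCents, rateBasisPoints int) int {
	return netCents + netCents*rateBasisPoints/10000
}

func TestAddVAT(t *testing.T) {
	cases := []struct {
		name            string
		net, rate, want int
	}{
		{"standard rate", 10000, 1900, 11900},
		{"zero rate", 10000, 0, 10000},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := addVAT(c.net, c.rate); got != c.want {
				t.Errorf("addVAT(%d, %d) = %d, want %d", c.net, c.rate, got, c.want)
			}
		})
	}
}
```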

Orchestration

Modern enterprise applications span multiple containerised microservices deployed across a number of public and private cloud hosts. Deploying, operating and scaling the fleets of containerised microservices making up these applications, while ensuring high availability, is no easy task. This is where container orchestrators like Kubernetes shine.

Kubernetes makes it easier to provision, deploy and scale fleets of containerised microservices. It handles most of the mechanics of placing containers on hosts, load balancing across hosts, and removing and re-spawning containers under the hood. Network and storage abstractions, coupled with standardised resource and configuration definitions, add an additional layer of portability on top of containerisation.
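
As a hedged sketch of what those standardised resource definitions look like when driven from Go, the snippet below creates a Deployment using the official client-go library (assuming a reasonably recent version and a kubeconfig in the default location). The name, image and replica count are hypothetical; the key idea is that we declare a desired state and Kubernetes handles placement, re-spawning and scaling to maintain it.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Connect to the cluster using the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Declare the desired state: three replicas of a hypothetical image.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "pricing"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "pricing"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "pricing"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "pricing", Image: "registry.example.com/pricing:1.0.0"},
					},
				},
			},
		},
	}

	created, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %s\n", created.Name)
}
```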

All of this makes Kubernetes an indispensable cog in cloud native environments. Kubernetes by itself does not make an environment cloud native; however, no environment can truly be cloud native without some sort of orchestration engine at its heart.

Cloud Native Culture

Culture is a nebulous concept. While most agree that it exists and influences behaviour and practices within organisations, it is not easy to pin down.

The same is true of the cultural component of cloud native systems and architectures. Cloud native culture is often confused with cloud native tools. While cloud native tools are relatively easy to implement and integrate into pre-existing workflows for building and releasing software products, culture is hard to adopt or even define.

For the purposes of this article, we define culture as a collection of practices centered on the way organisations build, release and operate software products. Culture is the way these organisations build products, not the tools they use to build them.

A recent survey of IT practitioners conducted by the Replex team at KubeCon Barcelona identifies cultural change as the biggest obstacle to cloud native adoption, ranking it ahead of complexity, planning and deployment, and lack of internal interest in terms of relative difficulty. In this section we will identify some of the practices that epitomise cloud native culture. Let's start with the most obvious candidate: DevOps.

Even though DevOps predates cloud native, it is widely considered an essential component of cloud native systems, or at the very least an essential on-ramp to cloud native culture. DevOps breaks down the silos that traditional dev and ops teams operated in by encouraging and facilitating open, two-way communication and collaboration. This leads to better team integration and promises accelerated software delivery and innovation.

SRE, an iteration of DevOps developed internally by Google, takes this one step further by taking a software-centric view of operations. It encourages traditional developers to internalize operations skills including networking, system administration and automation.

The way in which cloud native applications are architected requires the formation of close-knit, cross-functional teams responsible for individual components (microservices) of applications. These teams have end-to-end responsibility for developing, testing, deploying and operating these components and therefore need to internalize a broad set of skills.

Better alignment with rapidly changing customer demands and the drive to gain a competitive advantage with new features require organisations to adopt a culture of frequent, small, rapid releases. CICD practices such as building in automation, sharing responsibility and being production ready at all times are also crucial components of cloud native culture.

Cloud Native Applications

Now that we have wrapped our heads around the concept of a cloud native architecture let’s take another stab at defining a cloud native application.

A cloud native application is a collection of loosely coupled, containerised microservices, deployed using an orchestration engine like Kubernetes, with the cloud or a cloud-like delivery model as the underlying layer.

Cloud native applications are not static entities, however; they evolve continuously in response to the external environment. This is where the wider constellation of supporting tools comes into play: tools that enable teams to develop, test and deploy code more frequently and reliably.

All of this is supported by an underlying cultural layer which advocates the removal of silos in the software life cycle. One way to accomplish this is to create small, agile independent teams with end to end responsibility for developing, testing, deploying and operating applications. These cross functional teams comprising developers, DevOps and SREs would also be responsible for integrating tools, building in automation, monitoring, self-healing, managing performance and ensuring high availability.

Cloud Native Application Features

So what distinguishes cloud native applications from traditional ones? In this section we will briefly review some features of cloud native applications that give them an edge over traditional applications.

Scalable

Cloud native applications are inherently scalable. Some of this can be attributed to the built-in scalability of the underlying technologies that support them. Take Kubernetes, for example: it can scale both applications and the underlying infrastructure based on a number of business, application and server metrics. The same holds true for the cloud, which has built-in scalability mechanisms for most services.
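
As a hedged sketch of that built-in scalability, the snippet below asks Kubernetes to keep a hypothetical "pricing" deployment between 2 and 10 replicas based on CPU utilisation, using a HorizontalPodAutoscaler created through client-go (assuming a recent version and the autoscaling/v2 API).

```go
package main

import (
	"context"
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Desired behaviour: scale the "pricing" deployment between 2 and 10
	// replicas, targeting roughly 70% average CPU utilisation.
	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "pricing"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "pricing",
			},
			MinReplicas: int32Ptr(2),
			MaxReplicas: 10,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: int32Ptr(70),
					},
				},
			}},
		},
	}

	created, err := clientset.AutoscalingV2().HorizontalPodAutoscalers("default").
		Create(context.TODO(), hpa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created autoscaler %s\n", created.Name)
}
```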

Observable

As opposed to monitoring, observability is a property of a system. Systems are observable if their current state, and in turn their performance, can be inferred from their outputs alone. Support for traces, metrics, events and logs in most of the underlying technologies means that cloud native applications are highly observable.
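
As an example of the metrics part, instrumenting a Go service with Prometheus takes only a few lines using the official client library (github.com/prometheus/client_golang); the counter name and endpoints below are hypothetical.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal counts handled requests, labelled by path.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "pricing_requests_total",
		Help: "Total number of requests handled by the pricing service.",
	},
	[]string{"path"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/price", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues("/price").Inc()
		fmt.Fprintln(w, "ok")
	})

	// Expose the metrics endpoint for a Prometheus server to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```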

Resilient and Manageable

Reliability and resilience are two more features of cloud native applications. The fact that they are composed of multiple loosely coupled services means that application-wide outages are rare. Since problems are contained, disaster recovery is also relatively easy. The disaster recovery mechanisms of the underlying technology, together with the cloud native cultural practices of version control and using Git as a single source of truth, add to the ease of recovery.

Manageability is another aspect made easier by the loosely coupled nature of cloud native applications. New features can be easily integrated and are immediately deployable using a number of deployment techniques. Individual application components can be updated without turning the lights off across the entire application.

Immutable and API Driven

Immutability is a cultural aspect of cloud native architectures and plays an important role in how these applications are managed. It is the practice of replacing application components instead of updating them in place. This practice is increasingly taking hold across cloud native environments and is cited by the CNCF as a defining feature of cloud native architectures and applications.

Declarative APIs are another feature the CNCF cites. Declarative APIs concentrate on outcomes rather than explicitly mapping out a set of actions. Cloud native applications are usually integrated using lightweight APIs such as representational state transfer (REST), Google's open source remote procedure call framework (gRPC) or NATS.

Want to dig deeper? Download the Complete CIOs Guide to Kubernetes:

Download Guide

Author

Hasham Haider

Fan of all things cloud, containers and micro-services!

Want to Dig Deeper and Understand How Different Teams or Applications are Driving Your Costs?

Request a quick 20 minute demo to see how you can seamlessly allocate Kubernetes costs while saving up to 30% on infrastructure costs using Replex.

Schedule a Meeting