
Kubernetes in the Age of Edge Computing

As Kubernetes becomes a vital part of the DevOps toolkit and moves into the enterprise, how can it help manage and accelerate IoT edge computing use cases?

Hasham Haider


January 14, 2019

3 minute read

Having conquered centralized infrastructure of all shapes and sizes, from cloud to on-premise and everything in between, Kubernetes is on the move to the edge.

The edge is also where infrastructure is headed. All of the major cloud providers, from AWS and GCP to Azure and IBM, offer managed IoT edge services.

At its core, the idea of edge computing is simple. Edge nodes are essentially miniature versions of full-scale on-premise or cloud data centers: they incorporate the compute, storage and networking capabilities of regular data centers to varying degrees, and sit closer to edge devices.

The computing era has seen regular centralization and decentralization cycles, driven by changing market requirements. It went from a central, room-sized computer that no one could afford, to distributed compute devices (desktops and laptops) and data centers, and back again to a centralized cloud.

Edge computing is simply the next iteration in this cycle, where the cloud is chopped up into little pieces and distributed geographically. And there is a reason for it.

Why edge computing?

Edge computing is driven in large part by the proliferation of smart edge devices. An edge device is anything that collects data, from a smart sensor deployed by a giant manufacturing business to your toothbrush. These devices are projected to cross 50 billion by the end of 2020.

As their numbers grow, the richness and volume of the data these devices collect is growing too. Both factors give edge devices an ever-larger bandwidth footprint, and the internet has only so much bandwidth to go around. By bringing compute closer to the devices, edge computing frees the network from the pressure of this increased data.

Proponents of edge computing also point to the latency benefits that accrue as compute moves to the edge. Since data no longer has to traverse the internet to be processed, insights can be gained faster, fuelling business innovation.

What does Kubernetes have to do with it?

The edge has always been enticing. But it's also intimidating. Multiple challenges ranging from connectivity and scalability to security and reliability need to be solved before it sees widespread adoption.

The edge might be coming, but that doesn’t mean the cloud is going anywhere. Nor are on-premise data centers. Seen from the point of view of IT managers, the edge adds another layer to enterprise infrastructure on top of the already existing on-premise and cloud infrastructure. It has to be managed and brought under the umbrella of the enterprise’s current resource management/scheduling systems.


Additionally, edge infrastructure is likely to end up running on all kinds of physical compute hardware, from Raspberry Pis to regular Intel processors.

Since Kubernetes is infrastructure agnostic, it is a great candidate for managing this diverse set of hardware. Edge infrastructure is by definition resource constrained, and Kubernetes and containers have the potential to use those constrained resources efficiently, extracting every last bit of performance from them.
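To make this concrete, Kubernetes can steer workloads onto the right class of hardware using node labels. The sketch below is a hypothetical example (the pod name and image are placeholders): it pins a small workload to ARM nodes, such as Raspberry Pis, via the well-known `kubernetes.io/arch` label (`beta.kubernetes.io/arch` on older clusters), and sets modest resource requests suited to constrained edge nodes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-sensor-reader            # hypothetical workload name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64         # schedule only onto ARM nodes (e.g. Raspberry Pis)
  containers:
    - name: reader
      image: example.com/sensor-reader:latest   # placeholder image
      resources:
        requests:                     # small requests fit resource-constrained edge hardware
          cpu: "100m"
          memory: "64Mi"
        limits:
          cpu: "250m"
          memory: "128Mi"
```

The same mechanism works in reverse: heavier workloads can be kept off the small boards by selecting `amd64` nodes instead.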

All of this makes Kubernetes the perfect resource scheduler for the entire enterprise resource pool, from the cloud to edge infrastructure.

But what about edge devices? Here too, Kubernetes brings its native scalability features to the table, allowing edge devices to be managed at scale. Microsoft recently explored one such application with its Azure IoT Hub and Virtual Kubelet projects.

The Virtual Kubelet is a virtual Kubernetes node that allows almost anything, from a third-party resource scheduler to a VM, to masquerade as a Kubernetes node. Doing this for Azure IoT Hub allows all the edge devices that are part of that hub to be managed as a single Kubernetes node.

The Virtual Kubelet thus extends the Kubernetes cluster to include both the cloud deployment and the edge devices, allowing both to be managed through a single pane of glass.
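From the scheduler's point of view, a Virtual Kubelet node looks like any other node, so targeting it uses ordinary Kubernetes primitives. Virtual Kubelet nodes are typically tainted so that regular workloads do not land on them by accident; a pod destined for the virtual node needs a matching toleration and a node selector. The sketch below is an assumption-laden illustration: the pod name and image are placeholders, and the exact label and taint values depend on the Virtual Kubelet provider in use.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-telemetry                 # hypothetical workload name
spec:
  nodeSelector:
    type: virtual-kubelet              # label commonly applied to Virtual Kubelet nodes
  tolerations:
    - key: virtual-kubelet.io/provider # Virtual Kubelet nodes are usually tainted with this key
      operator: Exists
      effect: NoSchedule
  containers:
    - name: telemetry
      image: example.com/telemetry:latest   # placeholder image
```

Without the toleration, the scheduler would respect the node's taint and keep the pod on regular nodes; with it, the pod can run on the virtual node and, through it, on the edge devices behind the hub.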

Where does Replex come in?

Replex is the central analytics and optimization solution for the modern infrastructure stack, from cloud to bare-metal and from containers to serverless. As enterprises invest in edge devices and start leveraging edge infrastructure to support those devices, infrastructure sprawl increases. Inventorying all of this infrastructure and maintaining a consistent single-pane-of-glass view across all infrastructure variants becomes essential.

Replex also provides granular usage, cost and utilization metrics for the enterprise infrastructure stack. We drive cost visibility, optimization and granular financial reporting for our customers across all infrastructure variants. Usage and utilization metrics help IT managers optimize their infrastructure footprint and save costs, and they are even more important in an edge scenario, where decisions about deploying and maintaining edge infrastructure need to take actual cost and utilization into account.


Author

Hasham Haider

Fan of all things cloud, containers and micro-services!
