Having conquered centralized infrastructure of all shapes and sizes, from cloud to on-premise and everything in between, Kubernetes is on the move to the edge.
The edge is also where infrastructure is headed. All of the major cloud providers, from AWS and GCP to Azure and IBM, offer managed IoT edge services.
At its core, the idea of edge computing is very simple. Edge nodes are essentially miniature versions of full-scale on-premise or cloud data centers: they incorporate the compute, storage and networking capabilities of regular data centers to varying degrees and sit closer to edge devices.
The computing era has seen regular centralization and decentralization cycles, driven by changing market requirements. It went from a central room sized computer which no one could afford to distributed compute devices (desktops and laptops) and data centers and back again to a centralized cloud.
Edge computing is simply the next iteration in this cycle, where the cloud is chopped up into little pieces and distributed geographically. And there is a reason for it.
Edge computing is driven in large part by the proliferation of smart edge devices. An edge device is anything that collects data, from a smart sensor deployed by a giant manufacturing business to your toothbrush. These devices are projected to number more than 50 billion by the end of 2020.
Along with the increase in numbers, the richness and volume of the data these devices collect are also growing. Both factors give edge devices an ever-larger bandwidth footprint, and the internet has only so much bandwidth to go around. By bringing compute nearer to these devices, edge computing frees the network from the pressure of this increased data.
Proponents of edge computing also point to the latency benefits that accrue as compute moves to the edge. Since data no longer has to traverse the internet to be processed, insights can be gained faster, fuelling business innovation.
The edge has always been enticing. But it's also intimidating. Multiple challenges ranging from connectivity and scalability to security and reliability need to be solved before it sees widespread adoption.
The edge might be coming, but that doesn’t mean the cloud is going anywhere. Nor are on-premise data centers. Seen from the point of view of IT managers, the edge adds another layer to enterprise infrastructure on top of the already existing on-premise and cloud infrastructure. It has to be managed and brought under the umbrella of the enterprise’s current resource management/scheduling systems.
Additionally, edge infrastructure is likely to end up running on all kinds of physical compute hardware, from Raspberry Pis to regular Intel processors.
Since Kubernetes is infrastructure agnostic, it is a great candidate for managing this diverse set of hardware. Edge infrastructure is by definition a resource-constrained environment, and Kubernetes and containers have the potential to use it efficiently, extracting every last bit of juice from the available hardware.
All of this makes Kubernetes the perfect resource scheduler for the entire enterprise resource pool, from the cloud to edge infrastructure.
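As a sketch of how scheduling across such mixed hardware could look in practice, a workload can be pinned to ARM-based edge nodes (such as Raspberry Pis) using the standard `kubernetes.io/arch` node label that the kubelet sets on every node. The Deployment name, image and resource figures below are illustrative, not taken from any real deployment:

```yaml
# Hypothetical edge workload pinned to ARM nodes (e.g. Raspberry Pis).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-sensor-collector          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-sensor-collector
  template:
    metadata:
      labels:
        app: edge-sensor-collector
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64      # schedule only onto ARM edge hardware
      containers:
      - name: collector
        image: example.com/sensor-collector:latest   # placeholder image
        resources:
          requests:                    # small requests suit constrained edge nodes
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 250m
            memory: 128Mi
```

Workloads without the selector remain free to land on regular Intel nodes, so a single cluster and scheduler can span both classes of hardware.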
But what about edge devices? Here too, Kubernetes brings its native scalability features to the table, allowing edge devices to be managed at scale. Microsoft recently explored one such application with its Azure IoT Hub and Virtual Kubelet projects.
The virtual kubelet is a virtual Kubernetes node that allows almost anything, from a third-party resource scheduler to a VM, to masquerade as a Kubernetes node. Doing this for the Azure IoT Hub allows all the edge devices that are part of that hub to be managed as a single Kubernetes node.
The virtual kubelet extends the Kubernetes cluster to include both the cloud deployment as well as the edge devices, allowing both to be managed through a single pane of glass.
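A pod can be steered to such a virtual node in the usual Kubernetes way: a node selector targeting the virtual node plus a toleration for the taint the virtual kubelet registers itself with. The label and taint keys below follow common Virtual Kubelet conventions but vary by provider, and the pod name and image are placeholders:

```yaml
# Hypothetical pod scheduled onto a virtual kubelet node backing an IoT hub.
apiVersion: v1
kind: Pod
metadata:
  name: iot-edge-workload              # illustrative name
spec:
  nodeSelector:
    type: virtual-kubelet              # label on the virtual node (provider-dependent)
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists                   # tolerate the taint the virtual kubelet sets
  containers:
  - name: edge-module
    image: example.com/edge-module:latest   # placeholder image
```

From the cluster's point of view this is just another pod on another node, which is what makes the single-pane-of-glass management possible.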
Replex is a central analytics and optimization solution for the modern infrastructure stack, from cloud to bare metal and from containers to serverless. As enterprises invest in edge devices and start leveraging edge infrastructure to support them, infrastructure sprawl increases. Inventorying all of this infrastructure and maintaining a consistent single-pane-of-glass view across all infrastructure variants becomes essential.
In addition, Replex provides granular usage, cost and utilization metrics for the enterprise infrastructure stack, driving cost visibility, optimization and granular financial reporting for our customers across all infrastructure variants. These metrics help IT managers optimize their infrastructure footprint and save costs, and they are even more important in an edge scenario, where decisions about deploying and maintaining edge infrastructure need to take actual cost and utilization into account.