Kubernetes is all the rage these days. Everyone from service providers to enterprises and developer teams wants to dip their toes in. This accelerating adoption has triggered a race of sorts between public cloud providers to offer managed Kubernetes services. DigitalOcean announced early access for theirs in May this year. Right now, it’s in limited availability, with general availability and a rollout across all regions scheduled for Q1 2019.
We sat down with Daniel Levy (Head of Customer Success) and Fabian Barajas (Solutions Engineer) to get the inside scoop on DigitalOcean Kubernetes and what they hope to achieve with it.
DigitalOcean has been developer-focused from the get-go. Simplicity and ease of use are central tenets of their cloud service. Here is what Moisey Uretsky (co-founder) had to say in response to a question about discounts for reserved droplets:
“We aren't going to go the route of reserved or non-reserved instances because we want to keep our pricing as straightforward and simple as possible...we want to provide the best price possible to all of our customers without them needing to break out calculators and spreadsheets to figure out how much their bill will be at the end of the month.”
This philosophy of keeping things simple trickles down into naming conventions too. DigitalOcean is calling their managed Kubernetes offering – surprise – DigitalOcean Kubernetes. This is a departure from the acronyms other cloud providers use: AWS EKS, Google Cloud GKE and Azure AKS, for example.
So why launch a managed Kubernetes service in the first place? Daniel Levy puts it this way:
“The idea behind offering a managed Kubernetes platform is to allow development teams to do what they love: build great applications, rather than worrying about provisioning and operating the underlying infrastructure.”
DigitalOcean is already among the top six platforms for Kubernetes deployments, so offering a managed service to make developers' lives easier makes perfect sense.
Fabian calls it a natural progression for DigitalOcean as they evolve and beef up their cloud services with additional features and technologies. The pace of new feature launches has been frantic, with load balancers, cloud firewalls, Spaces, CPU-optimized droplets and a new dashboard going live. The managed Kubernetes offering continues this trend.
Fabian also points to the steep learning curve and management overhead that are part of the Kubernetes experience: “Most developers would rather containerize their applications and push their code out without having to worry about continuous updates and managing availability and scalability for each cluster.”
So, how do you get started? Well, that’s pretty straightforward too. Clicking on Clusters in the Create menu takes you to a Create a Cluster page. Once there, you can choose the Kubernetes version you want to deploy as well as the DigitalOcean region you want to spin your cluster up in. Then create a node pool for your cluster. The default node pool has a set of three worker nodes (this can be lowered depending on your cost constraints). You then add your SSH keys and tags, choose a name for your cluster and click Create Cluster.
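For teams that prefer automation over the control panel, the same steps can be scripted against the DigitalOcean API. The sketch below just builds the JSON payload for a cluster-creation request; the region, version and node-size slugs are illustrative assumptions, and you would send the payload to the API with your own token.

```python
import json

# Illustrative payload for creating a Kubernetes cluster via the
# DigitalOcean API. The region, version and size slugs below are
# assumptions; check the API for the slugs available to your account.
def build_cluster_payload(name, region="nyc1",
                          version="1.13.1-do.1",
                          node_size="s-2vcpu-2gb",
                          node_count=3):
    return {
        "name": name,
        "region": region,
        "version": version,
        "tags": ["staging"],           # optional tags, as in the UI
        "node_pools": [{
            "size": node_size,         # droplet slug for each worker
            "count": node_count,       # default pool: three workers
            "name": f"{name}-default-pool",
        }],
    }

payload = build_cluster_payload("demo-cluster")
print(json.dumps(payload, indent=2))
```

The payload mirrors the choices made in the UI: a name, a region, a Kubernetes version and a node pool of three workers.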
Load balancers, block storage, firewalls and ingress controllers can also be deployed automatically as you spin up a cluster. This makes cluster deployments even easier and further reduces management overhead. Spaces, though, still have to be created manually.
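In practice this provisioning is driven by ordinary Kubernetes objects: a Service of type LoadBalancer causes a DigitalOcean load balancer to be created, and a PersistentVolumeClaim provisions block storage. The sketch below builds both manifests as plain dictionaries; the field values are illustrative, and `do-block-storage` is assumed to be the cluster's default StorageClass.

```python
# Minimal manifests that trigger automatic provisioning on a
# DigitalOcean Kubernetes cluster. A sketch only: field values are
# illustrative, and "do-block-storage" is an assumed StorageClass name.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",    # provisions a DO load balancer
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "web-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "do-block-storage",  # provisions block storage
        "resources": {"requests": {"storage": "5Gi"}},
    },
}

print(service["spec"]["type"], pvc["spec"]["storageClassName"])
```

Applying manifests like these with kubectl is all it takes; the cloud primitives appear in your account without any manual setup.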
Once the cluster is up and running, the control plane, including the kube-apiserver, etcd and the kube-scheduler, is managed by DigitalOcean. Worker nodes are provisioned in sets of three for high availability and are placed in customer accounts. Both the control plane and worker nodes are continuously monitored, enabling clusters to self-heal and trigger pod rescheduling.
DigitalOcean regions are made up of single data centers, so worker nodes do end up in the same data center. The same is true of master nodes, since they are located in the same region as worker nodes. However, droplets get distributed over multiple hypervisors which could be on different racks.
Pricing is simple too. That isn't just because DigitalOcean Kubernetes doesn't charge for master node management; GCP and Azure do the same. The real advantage of DigitalOcean Kubernetes pricing is that worker nodes leverage droplets, which have a simple pricing structure of their own. Other cloud primitives like load balancers, block storage, Spaces, cloud firewalls and floating IPs are also charged at regular DigitalOcean prices.
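To illustrate how simple the arithmetic gets, the sketch below estimates a monthly bill from flat per-droplet pricing. The prices used are placeholders for illustration, not actual DigitalOcean rates.

```python
# Back-of-the-envelope monthly cost for a cluster. All prices are
# illustrative placeholders, not actual DigitalOcean rates.
DROPLET_PRICE = {           # USD per month, assumed values
    "s-2vcpu-2gb": 15.0,
    "s-4vcpu-8gb": 40.0,
}
LOAD_BALANCER_PRICE = 10.0  # assumed flat monthly price

def monthly_cost(node_size, node_count, load_balancers=0):
    # Master nodes are managed free of charge, so only workers and
    # extra primitives (here: load balancers) contribute to the bill.
    return (DROPLET_PRICE[node_size] * node_count
            + LOAD_BALANCER_PRICE * load_balancers)

print(monthly_cost("s-2vcpu-2gb", 3, load_balancers=1))  # → 55.0
```

Node count times droplet price, plus any extra primitives: no reserved-instance calculators or spreadsheets required.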
In the pipeline are features like automatic updates (which, of course, you have to agree to first) and leveraging droplet tags to match pod resource requirements with the correct droplet variant, e.g. running video encoding software on a compute-optimized droplet.
Autoscaling is also in the works; both for the cloud service itself and the managed Kubernetes offering. Autoscaling will automatically add or remove droplets from clusters as application resource requirements change.
DigitalOcean Kubernetes is a simple and cost-effective solution for running managed Kubernetes clusters. It takes over the often time-consuming process of provisioning, managing and monitoring the underlying compute, storage and networking infrastructure for the Kubernetes control plane. This allows DevOps teams to spend more time developing applications rather than managing the infrastructure. You can request access to DigitalOcean Kubernetes here.