
The Ultimate Kubernetes Cost Guide: Adding Persistent Volumes to the Mix

The second installment in our Kubernetes cost comparison guide for AWS, GCE, Azure and Digital Ocean. In this blog post we add persistent volume costs to our earlier comparison for a broader cost estimate.

Hasham Haider

January 14, 2019

9 minute read

This is the second installment in our blog series comparing Kubernetes costs across public cloud providers. In the first installment we compared the cost for running a 100 core/400 GB Kubernetes cluster across AWS, GCP, Microsoft Azure and Digital Ocean. You can check out the complete Kubernetes cost comparison here.

However, no Kubernetes cost comparison can be complete without including the costs for persistent volumes. Persistent volumes are storage pools provisioned specifically for Kubernetes clusters and can be attached to individual pods. They are distinct from the storage volumes attached to regular cloud instances, although in most cases they are supported by the very same storage infrastructure.

In this blog post, we are going to add a persistent storage component to our Kubernetes cluster. We will choose a baseline persistent storage requirement and will compare the costs for it across all four public cloud providers. We will then add these to our earlier cost comparison to get a more holistic cost estimate.

But before we get started, a word about persistent volumes and why we need them: Persistent volumes are necessitated by the way the Kubernetes scheduler works. The Kubernetes scheduler is a controller that assigns pods to Kubernetes nodes. These can be newly created pods or ones that are being rescheduled because of pod shutdowns.

The scheduler can also shift around already existing pods according to its own internal logic, once new pods are added. This means that pods can end up on a completely different node than the one they were originally scheduled on. Since pods get shifted around, they are no longer able to access the data saved onto traditional storage volumes attached to cloud provider instances.

To get around this, Kubernetes uses persistent volumes. Persistent volumes are abstracted from the underlying storage infrastructure. This could be a block storage device in the case of a cloud provider.

Users can then use persistent volumes claims to request some amount of that storage to be attached to their Kubernetes pods. Attaching persistent volumes to pods via persistent volume claims allows data to persist across pod restarts and crashes.
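As a minimal sketch, a persistent volume claim might look like the following (the name, storage class behavior and requested size are illustrative, not taken from the cluster in this post):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce         # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 100Gi        # amount of persistent storage requested
```

A pod then mounts this storage by referencing the claim under `spec.volumes` via `persistentVolumeClaim.claimName`; the data survives pod restarts and rescheduling because the volume's lifecycle is decoupled from the pod's.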

Alright, now that we have that out of the way, let’s get to the cost comparison itself. We will assume a 4000 GB persistent storage requirement for our 100 core/400 GB Kubernetes cluster and compare costs across all four public cloud providers.


AWS

On AWS, Kubernetes currently supports Elastic Block Store (EBS) volumes as persistent storage for clusters. EBS volumes can be provisioned in ReadWriteOnce access mode, which means that they can only be mounted on a single node at a time.

EBS volumes are automatically replicated inside an availability zone for durability and disaster recovery. They come in two main flavors: traditional HDD backed volumes and SSD backed volumes.

Both are further subdivided into two categories each: throughput optimized (st1) and cold (sc1) for HDD backed volumes, and provisioned IOPS (io1) and general purpose (gp2) for SSD backed volumes. These storage classes differ in the maximum IOPS per volume and the maximum throughput per volume, and also have different pricing structures.

For the purposes of our cluster, we will provision a 4000 GB EBS volume with the gp2 storage class. The gp2 storage class has a maximum IOPS/volume of 10,000 and a maximum throughput/volume of 160 MiB/s.
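As a rough sketch of why our volume hits that ceiling: at the time of writing, gp2 baseline IOPS scaled at 3 IOPS per GB, with a floor of 100 and a cap of 10,000 (the per-GB rate and floor are assumptions based on AWS's published gp2 behavior, not stated in this post):

```python
def gp2_baseline_iops(size_gb: int) -> int:
    """Approximate gp2 baseline IOPS: 3 IOPS/GB, min 100, max 10,000."""
    return min(max(3 * size_gb, 100), 10_000)

# Our 4000 GB volume would earn 12,000 IOPS at 3 IOPS/GB,
# so it hits the 10,000 IOPS cap.
print(gp2_baseline_iops(4000))  # 10000
```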

The monthly cost for a 4000 GB EBS volume comes to $400. Since we based our earlier cost comparison on a one year run time for our cluster, the yearly persistent storage cost for our cluster equals $4,800.



GCE

On GCE, persistent disks can be used as persistent volumes for Kubernetes clusters. GCE persistent disks support both ReadWriteOnce and ReadOnlyMany access modes. This means they can be attached to a single node in read-write mode, or to multiple nodes in read-only mode.

Like AWS EBS volumes they also come in both HDD backed and SSD backed flavors and are further subdivided into categories which provide multiple performance-price variations.

A comparable storage option to AWS gp2 on GCE is zonal SSD persistent disks. Zonal SSD persistent disk data is distributed across multiple physical disks for redundancy. One thing to note about GCE persistent disks is that the maximum allowed IOPS and throughput scale with the size of the volume and the number of vCPUs per instance.

With a 4000 GB zonal SSD persistent disk we get a maximum of 60,000 read IOPS and 30,000 write IOPS. It also gives us a maximum sustained throughput of 1200 MB/s for reads and 400 MB/s for writes.
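The size-based scaling can be sketched as follows (the 30 IOPS/GB rate is an assumption drawn from GCP's published persistent disk performance scaling; the 60,000/30,000 caps are the per-instance limits quoted above):

```python
def zonal_ssd_pd_iops(size_gb: int) -> tuple[int, int]:
    """Approximate zonal SSD persistent disk IOPS limits.

    Assumes 30 read and 30 write IOPS per GB, capped at the
    per-instance limits quoted in the post (60,000 / 30,000).
    Returns (max_read_iops, max_write_iops).
    """
    return min(30 * size_gb, 60_000), min(30 * size_gb, 30_000)

reads, writes = zonal_ssd_pd_iops(4000)
print(reads, writes)  # 60000 30000
```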

A 4000 GB SSD backed zonal persistent disk will cost us $748/month, so the total yearly persistent storage cost for our cluster equals $8,976.


Azure

On Azure, Kubernetes supports both Azure Files and Azure managed disks as persistent storage for clusters. Azure Files supports all three access modes: ReadWriteOnce, ReadOnlyMany and ReadWriteMany. Azure managed disks, on the other hand, support only the ReadWriteOnce access mode.

Like AWS and GCE block storage, Azure managed disks can be backed by either HDD or SSD. SSD backed disks come in two flavors: standard and premium. Azure maintains 3 replicas of data on premium managed disks inside the same region, which ensures high availability and durability of data.

The maximum IOPS and throughput of premium SSD managed disks also scale with the size of the disk. A 4000 GiB premium SSD disk will get us a maximum of 7,500 IOPS and a maximum throughput of 250 MB/s.

Azure offers a number of fixed size managed disks, and billed capacity is rounded up to the nearest disk size. In our case, the disk size will be rounded up to 4,096 GiB, which will cost us $495.57/month or $5,946.84/year.
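The rounding behavior can be sketched like this (the tier sizes below are assumed from Azure's premium disk lineup at the time; only the 4,096 GiB price is quoted in this post):

```python
import bisect

# Assumed fixed premium managed disk sizes in GiB (P4 through P50 tiers).
DISK_SIZES_GIB = [32, 64, 128, 256, 512, 1024, 2048, 4096]
# Monthly price for the 4,096 GiB tier, as quoted above.
PRICE_PER_MONTH = {4096: 495.57}

def billed_disk_size(requested_gib: int) -> int:
    """Round a requested size up to the nearest fixed disk size."""
    return DISK_SIZES_GIB[bisect.bisect_left(DISK_SIZES_GIB, requested_gib)]

size = billed_disk_size(4000)
print(f"{size} GiB at ${PRICE_PER_MONTH[size]:.2f}/month")
```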

Digital Ocean

Digital Ocean’s native block storage solution can also be used with Kubernetes as persistent storage for clusters. Block storage volumes are backed by SSD and support ReadWriteOnce access mode. Volumes are also replicated across multiple racks to ensure high availability and durability.

Maximum IOPS and throughput values for volumes depend on the droplet type, and volumes also support burst modes. Standard droplets support a baseline maximum of 5,000 IOPS and 200 MB/s throughput, and can burst up to 7,500 IOPS and 300 MB/s throughput.

A 4000 GB persistent volume for our Kubernetes cluster using Digital Ocean block storage will cost us $400/month, or a total of $4,800/year.

Now that we have the persistent storage costs from all four public cloud providers, we can collect them into a table:

4000 GB persistent volumes

| Storage option | Max IOPS | Max throughput | Cost/month | Cost/year |
| --- | --- | --- | --- | --- |
| AWS SSD backed gp2 | 10,000 | 160 MiB/s | $400 | $4,800 |
| GCE zonal SSD persistent disks | 60,000 reads / 30,000 writes | 1200 MB/s reads / 400 MB/s writes | $748 | $8,976 |
| Azure Premium SSD managed disks | 7,500 | 250 MB/s | $495.57 | $5,946.84 |
| Digital Ocean SSD block storage | 5,000 | 200 MB/s | $400 | $4,800 |
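The yearly figures are simply the monthly prices quoted above projected over the cluster's one-year run time; a quick sanity check:

```python
# Monthly persistent storage prices for 4000 GB volumes, as quoted above.
monthly_cost = {
    "AWS gp2": 400.00,
    "GCE zonal SSD PD": 748.00,
    "Azure Premium SSD": 495.57,
    "Digital Ocean": 400.00,
}

# Project each monthly price over a one-year run time.
yearly_cost = {name: round(12 * price, 2) for name, price in monthly_cost.items()}
for name, cost in yearly_cost.items():
    print(f"{name}: ${cost:,.2f}/year")
```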

Now let's add these persistent storage costs to our earlier cost comparison. Digital Ocean has also published the costs for their managed Kubernetes offering, since our last blog post, so we will include those in the cost comparison table too. 

100 core/400 GB Kubernetes cluster with 4000 GB persistent storage

| | AWS | GCP | Azure | Digital Ocean |
| --- | --- | --- | --- | --- |
| Direct Deployment (on-demand instances) | | | | |
| Direct Deployment (70% reserved instances) | | | | |
| Managed Kubernetes (EKS, GKE, AKS, DO - on-demand instances) | | | | |
| Managed Kubernetes (EKS, GKE, AKS, DO - 70% reserved instances) | | | | |
To conclude, GCE has the highest persistent storage cost for our cluster. It does however give us much higher IOPS and throughput as compared to the other options. AWS and Digital Ocean tie for the lowest persistent storage cost.

In terms of overall cluster costs, AWS still leads the pack, in spite of having the lowest persistent storage cost. GCP and Azure costs are closely matched when we use 1-year reserved instances for our cluster. It's important to note here that Azure requires all reserved instance costs to be paid upfront. When using on-demand instances, Azure costs far outstrip GCE. Digital Ocean gives us the lowest overall cost for running our Kubernetes cluster.

Kubernetes workloads can differ to a great degree in terms of the CPU, RAM and persistent storage size they require. Enterprise workloads in general also require a much broader feature set that can only be provided by one of the bigger public cloud providers. Both cost optimization and workload feature requirements need to be stitched into a larger enterprise strategy when choosing an optimized infrastructure footprint.

Request a quick 20 minute demo to learn how best to leverage cloud resources for your Kubernetes clusters and make infrastructure cost savings of up to 30% using Replex.io.

Hasham Haider

Fan of all things cloud, containers and micro-services!
