[Kubernetes, Cloud Native]

Kubernetes and Cloud Native Application Checklist: Cloud Native Network Tools

Part three of our Kubernetes and cloud native application checklist dives into cloud native networking tools. We also compare them based on the type of network encapsulation used and their degree of support for network policy, encryption, service discovery and load balancing.

Hasham Haider

March 5, 2020

7 minute read

This is part three of our ongoing series exploring tools and practices that make it easier to manage and operate cloud native applications. In part one we reviewed cloud native development tools aimed at de-cluttering the development workflow and making it more seamless. Part two outlined CI/CD tools and evaluated them based on how well they integrate into Kubernetes-based cloud native environments.

In this installment we will outline and evaluate networking tools aimed at Kubernetes-based cloud native environments. Before we do that, however, let's start off with a quick word about cloud native networks.

Networking in Kubernetes-based cloud native environments

Modern cloud native applications are composed of fleets of loosely coupled, containerized microservices communicating in complex mesh networks. Cloud VMs, which have a networking overhead of their own, serve as an underlying layer. Add to this the various abstractions Kubernetes introduces, and the communication requirements they entail, and the network quickly becomes complex.

There are four elements of Kubernetes based cloud native networks that need to be considered:

  • Communication between containers
  • Communication between pods and services (see the Service sketch after this list)
  • Communication between services and external sources
  • Communication between pods
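
To make the second and third items concrete, here is a minimal Service sketch (the name, labels and ports are hypothetical, not taken from any particular setup): other pods reach the selected pods through the Service's stable cluster IP and DNS name, and changing the type to NodePort or LoadBalancer exposes the same pods to external sources.

```yaml
# Hypothetical example: pods labelled app: web exposed behind a stable Service.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP      # NodePort or LoadBalancer would expose it externally
  selector:
    app: web           # forwards traffic to pods carrying this label
  ports:
    - port: 80         # port other pods connect to via the Service name
      targetPort: 8080 # port the containers actually listen on
```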

There are many different ways these communications can be configured in the Kubernetes networking model. Kubernetes, however, places the following restrictions on every implementation:

  • Pods must be able to communicate with all other pods, whether on the same node or on other nodes, without NAT
  • System pods/agents (e.g. the kubelet) must be able to communicate with all pods on the node they are running on
  • Pods in the host network of a node must also be able to communicate with all pods running on all other nodes without NAT

Kubernetes assigns each pod its own network namespace and allots it a unique IP address. Containers running inside a pod share the pod's network stack/namespace and can reach each other via localhost. Individual containers do, however, need to coordinate port usage to avoid port conflicts.
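
A minimal sketch of what this looks like in practice (pod name, images and tags are illustrative): both containers below share the pod's network namespace, so the sidecar reaches nginx on localhost, and the two must listen on different ports.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: web
      image: nginx:1.25            # listens on port 80 inside the shared namespace
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:8.5.0
      # Reaches the nginx container over the shared network namespace via localhost.
      # If this container also tried to bind port 80 it would conflict with nginx.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 5; done"]
```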

When using host network mode, pods share the network namespace of the node they run on and can communicate with all pods on all other nodes.
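
For completeness, a minimal sketch of a pod running in host network mode (name and image are illustrative): with hostNetwork set, the pod uses the node's IP address, and ClusterFirstWithHostNet keeps cluster DNS resolution working for it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-network-demo
spec:
  hostNetwork: true                    # share the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet   # keep resolving cluster-internal names
  containers:
    - name: app
      image: nginx:1.25                # now binds port 80 directly on the node
```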

CNI and network plugins are important concepts in the context of cloud native networks. CNI (the Container Network Interface) is a specification and a collection of libraries for writing network plugins. It is agnostic to the container runtime and orchestrator and works equally well with Kubernetes, Mesos or Cloud Foundry.

CNI-based network plugins are in turn responsible for creating and configuring network interfaces for containers and for provisioning IP addresses. Together, CNI and network plugins de-couple application code from network configuration. This reduces the overhead for developers, who no longer have to worry about baking network specifications and configurations into application code.

Let's now move on to an evaluation of different network plugins for Kubernetes. 

Weave Net

Weave Net has the highest rating in the cloud native network section of the CNCF landscape. It works by creating a virtual layer 2 network that connects Docker containers on the same host or across multiple hosts, cloud environments or data centers.

As opposed to other cloud native networking solutions, Weave Net does not require an external cluster data store. This makes it easier to configure, manage and operate, without the overhead of managing an external cluster store. It also helps avoid issues with starting and stopping containers during network connectivity problems.

Built-in mechanisms like fast datapath also reduce latency and improve throughput between hosts. Fast datapath chooses the fastest path to forward traffic between two hosts, irrespective of where those hosts are located. Depending on the quality of network connections, Weave Net can also fall back to a slower packet forwarding mechanism called sleeve.

Encryption is also taken care of. Fast datapath encrypts traffic using IPsec ESP and is controlled via the IP transformation framework (XFRM). For the sleeve overlay, both control plane traffic (TCP) and data plane traffic (UDP) are encrypted using NaCl.

Another feature is support for partially connected networks. Weave Net connects all hosts in a mesh network and requires only a single connection to establish full connectivity with other network segments. Mesh networks, however, can lead to an explosion in the number of connections and can become hard to manage and operate.

Weave Net also uses VXLAN encapsulation, which can make it slightly slower than cloud native networks that use native routing. Additionally, overlay networks like VXLAN are relatively harder to debug.

Other notable features include support for service discovery, load balancing and Kubernetes network policies. The built-in network policy controller supports both ingress and egress traffic and allows or blocks traffic based on the policy spec.
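
Since Weave Net enforces the standard Kubernetes NetworkPolicy resource, a policy like the following minimal sketch (labels and port are hypothetical) would be picked up by its policy controller: ingress to backend pods is only allowed from frontend pods on TCP 8080, and everything else is blocked.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend          # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only traffic from these pods is allowed
      ports:
        - protocol: TCP
          port: 8080
```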

Calico

Calico is more flexible than other cloud native networks when it comes to traffic encapsulation. It can run in overlay/encapsulated mode, using either VXLAN or IP-in-IP encapsulation, or without encapsulation as a pure layer 3 network. It can also dynamically switch between the two for specific use cases, e.g. using encapsulation for traffic traversing a subnet boundary and no encapsulation for traffic within a subnet. Calico allows fine-grained control over communication between containers, virtual machines, and bare metal hosts.
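
This switching behavior is typically configured on Calico's IPPool resource. A minimal sketch, with the pool name and CIDR as assumptions:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet   # encapsulate only when traffic crosses a subnet boundary
  vxlanMode: Never        # VXLAN could be chosen here instead of IP-in-IP
  natOutgoing: true       # SNAT traffic leaving the pool towards external networks
```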

As with most cloud native networks, Calico uses etcd as a data store to hold network configuration and to ensure that it can reliably build the network. Alternatively, it can also use the Kubernetes API datastore, albeit with some functional restrictions.

Calico natively integrates with managed Kubernetes services from most of the public cloud providers, including Amazon, Microsoft, Google, and IBM. It also supports self-managed Kubernetes environments on these cloud providers.

Communication between etcd and Calico components can be encrypted using TLS. Calico also recommends configuring TLS for communication between Calico components and kube-apiserver. 

Calico supports both unicast and anycast IP connectivity, with support for multicast IP in the works. In addition, it fully supports the Kubernetes network policy API. In fact, Calico provides a native network policy resource with an extended feature set for users requiring more granular policies. A complete list of Calico policy features can be found in the Calico documentation.
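
As an illustration of that extended feature set, here is a minimal sketch of Calico's own NetworkPolicy resource (namespace, labels and port are hypothetical); unlike the vanilla Kubernetes resource it supports explicit actions and label-selector expressions:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: production
spec:
  selector: app == 'database'      # pods the policy applies to
  types:
    - Ingress
  ingress:
    - action: Allow                # Calico also supports Deny, Log and Pass actions
      protocol: TCP
      source:
        selector: app == 'backend'
      destination:
        ports:
          - 5432
```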

Flannel

Flannel is an open source cloud native network from CoreOS and has the second highest rating in the cloud native network section of the CNCF landscape. The default network model is a virtual layer 2 overlay network that spans the entire cluster and uses VXLAN encapsulation. Each host is allocated a subnet, from which IP addresses are allocated to individual containers.

Advanced users with higher performance requirements can also opt to use host-gw as a backend. A UDP backend is available as well; however, it is only recommended for debugging.
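
The backend is selected in Flannel's net-conf.json, which in a typical kube-flannel deployment is stored in a ConfigMap. A minimal sketch (the pod network CIDR is an example and the namespace varies between manifest versions):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system   # newer manifests use a dedicated kube-flannel namespace
data:
  # Changing "Type" to "host-gw" selects the direct-routing backend instead of VXLAN.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```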

As with most other cloud native network tools, Flannel uses etcd to store configuration data and subnet assignments. To encrypt traffic, Flannel can also be run with either the IPsec or WireGuard backends. Flannel does not yet support Kubernetes network policies.

Overall, Flannel has a smaller feature set compared to the other cloud native networks in this list; however, it is easier to install and configure.

Cilium

Cilium is another open source cloud native network with seamless Kubernetes integration. It operates on multiple layers, including layer 2/3 for networking and layer 7 for the application. Cilium is built on BPF, a Linux kernel technology for filtering packets. BPF allows Cilium to perform filtering at the kernel level rather than the application level.

BPF also enables it to perform highly scalable load balancing for traffic between containers and to external services. 

At its most basic, Cilium deploys a flat layer 3 network that spans multiple clusters and connects application containers. Individual hosts can allocate IP addresses to application containers without coordination between hosts.

Cilium supports both an overlay network, with built-in support for VXLAN and Geneve encapsulation, as well as native routing that leverages the regular routing table of the Linux host. The overlay network mode connects hosts in a mesh network and also supports other encapsulation formats, including all the formats supported by Linux. The mesh network can also be extended to multiple clusters.

Cilium supports both etcd and Consul as data stores for network configuration and also supports IPsec for encryption. It also supports and builds on the native Kubernetes network policy resource using Kubernetes CRDs. The policy CRD extends vanilla Kubernetes network policies with support for layer 7 policy enforcement on ingress and egress for HTTP and Kafka. The policy resource also adds egress support for CIDRs, which enables secure access to external services.
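
A minimal sketch of such a CiliumNetworkPolicy (labels, port and path are hypothetical) showing the layer 7 extension: only HTTP GET requests to /public on port 80, coming from frontend pods, are allowed to reach the backend pods.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend           # pods the policy applies to
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend    # only these pods may connect
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: GET  # layer 7 rule: other methods and paths are denied
                path: /public
```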

Contiv

Contiv is an open source cloud native network from Cisco. It supports multiple operational modes, including L2, L3, overlay and ACI. Contiv runs two components in Kubernetes clusters: netplugin on cluster worker hosts and netmaster on master hosts. As with most other cloud native networks, it leverages etcd as a cluster data store.

Contiv does not support the native Kubernetes network policy resource and instead has its own built-in network policy feature. The built-in policy feature supports two types of policies: bandwidth policies, which allow users to control the overall network resource use of a group of containers, and isolation policies, which allow them to control access to a group of containers.

IP addresses are allocated to each container and are not bound to application groups or microservices. This allows multi-tenant support and overlapping IPs across tenants. Service discovery with Contiv uses the DNS protocol and does not require queries to external data stores for IP or port information. Containers become reachable as soon as they are connected to an endpoint group (EPG). Contiv also has built-in support for service load balancing and allows admins to manage users, authorization and LDAP.

Hasham Haider

Author

Fan of all things cloud, containers and micro-services!
