The basics of Kubernetes networking

How does networking in a Kubernetes cluster work?

Kamil Lelonek - Software Engineer

--

When you deploy your application to a Kubernetes cluster, you usually want to have it accessible from the outside. For real-world production applications, one of the important questions to ask is how to get external traffic into your container.

This is where the Kubernetes networking model comes in: on the one hand it is complex and advanced, but on the other it is reasonable and well-documented enough to help us understand and implement it.

Terminology

For a more detailed glossary, you may want to visit my recent Kubernetes article where I explained all the details necessary to understand the core elements of the platform:

Otherwise, let me describe some of these concepts briefly here.

In Kubernetes, a Pod is the most basic deployable unit within a cluster. A Pod runs one or more Containers. Zero or more Pods run on a Node. Therefore, it is crucial to understand that a Pod is not actually an equivalent to a single Container but is a collection of Containers and a Node represents a physical or a virtual machine in a cluster.

You can assign arbitrary key-value pairs called Labels to any resource. Kubernetes uses Labels to group multiple related Pods into a logical unit called a Service. A Service has a stable IP address, ports, and provides load balancing among the set of Pods whose Labels match all the Labels you define in the label selector when you create the Service.
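As a sketch (the names here are illustrative), a Service groups Pods by listing the Labels it requires in its selector:

```yaml
# Hypothetical Service grouping Pods by Labels
apiVersion: v1
kind: Service
metadata:
  name: backend            # illustrative name
spec:
  selector:                # every listed Label must match a Pod's Labels
    app: api
    tier: backend
  ports:
    - port: 80             # stable port on the Service's ClusterIP
      targetPort: 8080     # port the matching Pods listen on
```

Only Pods labeled with both `app: api` and `tier: backend` become members of this Service.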

NodePort

A ClusterIP is an internally reachable IP for the Kubernetes cluster and all Services within it. For a NodePort Service, a ClusterIP is created first, and then all traffic is load balanced over a specified port. The NodePort itself is just an iptables rule that forwards traffic on a port to the ClusterIP.

The port setting exposes the Service on the specified port internally within the cluster. The request is forwarded to one of the Pods on the TCP port specified by the targetPort field. Note that a Service can map an incoming port to any targetPort, but the application needs to be listening for network requests on that port for the Service to work.

By default, the targetPort will be set to the same value as the port field and TCP is the default protocol for services.

NodePort is a configuration setting you declare in a Service's YAML: set the service spec's type to NodePort. Kubernetes will then allocate a specific port on each Node for that Service, and any request to your cluster on that port gets forwarded to the Service:
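A sketch of such a Service, using the port and label from the example below (note that 4444 lies outside the default NodePort range of 30000–32767, so this value assumes a customized service-node-port-range on the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-node-port
spec:
  type: NodePort
  selector:
    app: api              # only Pods with this label become members
  ports:
    - protocol: TCP       # TCP is the default protocol
      port: 4444          # Service port inside the cluster
      targetPort: 4444    # container port traffic is forwarded to
      nodePort: 4444      # port opened on every Node; assumes the
                          # default 30000-32767 range was widened
```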

For instance, suppose the IP address of one of the cluster nodes is 200.0.100.2. Then, for the example above, a client calls the Service at 200.0.100.2:4444. The request is forwarded to one of the member Pods on TCP port 4444 as well. The member Pod must have a container listening on port 4444 and must carry the app: api label.
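Such a member Pod could be sketched like this (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-pod-1
  labels:
    app: api                  # must match the Service's label selector
spec:
  containers:
    - name: api
      image: example/api:1.0  # hypothetical image
      ports:
        - containerPort: 4444 # the targetPort the Service forwards to
```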

As you can imagine, a Service also provides load balancing. Clients call a single, stable IP address, and their requests are balanced across the Pods that are members of the Service.


Ingress

In Kubernetes, an Ingress is an object that allows access to your services from outside a cluster. It is not a type of Service; it's a completely independent resource. It sits in front of multiple services and acts as a reverse proxy and the single entry point to your cluster, routing requests to different services.

You configure access by creating a collection of rules that define which inbound connections reach which services. It lets you consolidate your routing rules to a single resource, and gives you powerful options for configuring these rules.

Configuring an Ingress is quite easy. The following example shows one linked to a Service:
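A minimal Ingress sketch routing a host to a backing Service (the host and service names follow the example in this article; the networking.k8s.io/v1 API is assumed, while older clusters use extensions/v1beta1 with a different backend syntax):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress       # hypothetical name
spec:
  rules:
    - host: kamil.lelonek.me  # requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-node-port  # ...are routed to this Service
                port:
                  number: 4444
```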

One of the important decisions when deploying an Ingress controller is picking a particular implementation. The most basic one is the NGINX Ingress Controller, where NGINX takes on the role of a reverse proxy while also handling SSL termination. Google Cloud's Kubernetes Engine deploys a Google Cloud controller by default, which responds to Ingress resources and provisions Google Cloud load balancers. If your cloud provider natively handles Ingress, you need to specify the annotation kubernetes.io/ingress.class: "nginx" when you would like the ingress-nginx controller to claim the resource instead.
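The annotation goes in the Ingress metadata, for example (a metadata fragment only):

```yaml
metadata:
  name: example-ingress        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"  # claim the ingress-nginx controller
```

Note that on newer Kubernetes versions this annotation is deprecated in favor of the spec.ingressClassName field.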

When you create the Ingress, the controller creates and configures an HTTP(S) load balancer according to the information in the Ingress and the associated Services. Also, the load balancer is given a stable IP address that you can associate with a domain name.

In the example above, you have associated a load balancer’s IP address with the domain name kamil.lelonek.me. When a client sends a request to kamil.lelonek.me, the request is routed to a Kubernetes Service named example-node-port on port 4444. An Ingress object must be associated with one or more Service objects, each of which is associated with a set of Pods.

If you are interested in the Deployment definition, you will find it in my previous article:

Notice once again that an Ingress is a completely independent resource from a Service. You declare, create, and destroy it separately from your services.


Summary

Hopefully, one of the most advanced concepts in Kubernetes is now clearer to you. You should be able to expose your application to the world and route all external traffic straight to your server.
