Projects:Kubernetes
According to Wikipedia, Kubernetes (commonly stylized as k8s) is an open-source container orchestration system for automating application deployment, scaling, and management. I've heard a lot about it, and it seems to solve some problems I run into from time to time, so I'd like to get to know it better. While doing so, I wanted to make a write-up of what I found out.
This page is not intended as a tutorial, but it will link to various tutorials I found useful. This way, I hope it can serve as a reference that helps someone setting up Kubernetes learn about it more quickly.
Disclaimer: I'm a beginner in the Kubernetes area, so this write-up will contain errors. Please correct errors where you find them, and if possible, write some explanation! The "I" in this article is Sjors, but parts are possibly written by other people.
Problem
We start with a Linux desktop, running some version of Arch Linux. Someone wants to run an application on it for which only binaries are available, but those binaries were compiled on Ubuntu. Now, system libraries differ between Arch Linux and Ubuntu, and while it's possible to create binaries that run independently of system libraries (called "static binaries"), in this particular situation let's assume the binaries aren't static, but you still want to run them on your Arch installation.
You can get a second machine and run Ubuntu on it, or similarly you could use a Virtual Machine. But, there's a simpler and more efficient solution: Docker allows you to install, within your Linux distribution (the "host"), another Linux distribution (let's call it "guest" for now). The host and the guest may be completely different – technically, only the kernel of the host is used, and of course the host and guest must have compatible processor architectures. You run the guest environment within Docker (a "Docker Container") and run that binary in it.
This is how easy that is:
sjors@somebox:~$ lsb_release -a
LSB Version:    1.4
Distributor ID: Arch
Description:    Arch Linux
Release:        rolling
Codename:       n/a
sjors@somebox:~$ docker run -ti ubuntu:bionic bash
root@a5b210d251c2:/# apt-get update && apt-get -y install lsb-release
[....]
root@a5b210d251c2:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
The -ti flag makes this an interactive container that can run bash; just as easily, you can run detached containers running webservers, and use the integrated port-forwarding features to make them accessible to the outside.
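For example, the following starts a detached container from a webserver image and publishes its port 80 on port 8080 of the host (a sketch; the nginx image and the port numbers are just illustrative):

sjors@somebox:~$ docker run -d --name web -p 8080:80 nginx
sjors@somebox:~$ curl http://localhost:8080/     # served by the container
sjors@somebox:~$ docker rm -f web                # stop and remove it again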
Now, Docker has some problems of its own:
- You start Docker containers by accessing the Docker daemon; the daemon runs containers as root and allows you to start a container with a bind-mount of "/". Basically, having access to the Docker daemon means you have root on the system, yet you need that access to do anything at all. It's all-or-nothing.
- When your Docker machine goes down, all containers on it are gone. You'll have to restart them manually, use boot scripts, or rely on Docker's restart policies to bring them back up on that same machine; nothing will reschedule them elsewhere.
- When you want to run more Docker containers than fit on one machine, there's no horizontal scaling mechanism built-in.
- When you want to run a service multiple times, e.g. for redundancy, you need to schedule them manually multiple times and roll your own method of load-balancing them.
Kubernetes provides a solution for these problems, while otherwise looking very much like Docker. In fact, when you're familiar with Docker some of the commands below will also be very familiar to you.
Concepts
In this section, I'll explain some of Kubernetes' concepts quickly, and add links if you want to know more.
- Container: Like with Docker, this is one 'guest environment' in which you can run anything. Usually, Kubernetes containers are, in fact, Docker containers.
- Pod: The basic unit you actually schedule in Kubernetes. Usually, a Pod contains one Container, but a Pod can consist of multiple Containers which can be a very useful feature. More on that later. A Pod isn't durable: when anything causes a Pod to stop (e.g. the machine it runs on has a power outage), it won't be restarted automatically.
- Deployment: An indication of the "desired state" of the cluster in terms of the Pods you always expect to have. The Kubernetes system will always try to match what it's actually running to these Deployments. Basically, a Deployment is the way to start a Pod and ensure it stays running. (A minimal example follows after this list.)
- Job: An indication that you want to run some command to completion. When the cluster has a Job, it will keep re-creating its Pods until a given number of them complete successfully. Basically, a Job is the way to start a Pod and ensure it finishes once.
- Volume: As in Docker, changes to containers are temporary and will be gone when the container stops. If you want to keep those changes after a restart, like in Docker, you make a Volume. They are also useful to share data between containers. In Kubernetes, Volumes are kept over restarts of Containers, but not over restarts of Pods, unless they are Persistent. More on that later.
- Service: When your Pod contains some application, such as a webserver, you can make its TCP port available as a Service so that people (inside or outside the cluster) can connect to it. For an application you want to run redundantly, multiple Pods can be started; you'll configure them to share the same Service. This way, when you connect to the Service, you'll get one of the running Pods behind it. Instant redundancy!
- Namespace: Kubernetes supports running multiple "virtual clusters" on the infrastructure of one "physical cluster". Those virtual clusters are called "namespaces", and you can restrict access to certain namespaces. Normally, you're only working with the "default" namespace.
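To make this a little more concrete, here is a sketch of what a Deployment plus a Service look like as YAML manifests (the names and image are just illustrative, and applying such files is covered later on this page):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-fun-webserver
spec:
  replicas: 2                  # desired state: always keep two Pods running
  selector:
    matchLabels:
      app: my-fun-webserver
  template:                    # template for the Pods this Deployment manages
    metadata:
      labels:
        app: my-fun-webserver
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-fun-webserver
spec:
  selector:
    app: my-fun-webserver      # connections to the Service go to one of these Pods
  ports:
  - port: 80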
Those are some concepts that allow you to use a Kubernetes cluster. In this guide, we'll also be setting up the infrastructure behind that:
- Node: A machine that actually runs the Containers. Can be bare-metal or a virtual machine, or even an embedded IoT device. A Node runs a process called "Kubelet" which interacts with the Docker daemon (usually) to set everything up, but normally you never communicate directly with it.
- Control Plane: A set of applications (API server, etcd, controller manager, scheduler...) that make sure the cluster is "healthy". For example, it starts Pods when you request it to, but when a Node that was running Pods for some Deployment goes down, it also restarts those Pods elsewhere. These control-plane applications themselves also run in Pods, in a separate namespace called "kube-system".
- Master node: Otherwise a normal Node, but it runs the Control Plane applications. By default, a Master node will only run Pods for these applications, but you can configure it to allow normal Pods too. There can be multiple Master nodes, for redundancy of the cluster.
Understanding networking
Networking within a Kubernetes cluster isn't difficult, but it requires some specific explanation. Applications within Kubernetes containers need to be able to access each other, regardless of whether they run on the same Node or not. To make things harder, Nodes may be running behind firewalls or in different subnets. Luckily, Kubernetes has some mechanisms that make it very robust against such setups.
When setting up a Kubernetes cluster, there are two important internal IP ranges throughout the cluster:
- The Pod network range. This internal range is automatically split over Nodes, and Pods get individual addresses from it.
- For example, you can set this to 10.123.0.0/16; the master node will likely get 10.123.0.0/24 and the second Node you add after that gets 10.123.1.0/24 and so on. A Pod running on this second node may have 10.123.1.55 as an IP address. (If the Pod has multiple containers, all of them will have the same IP address.)
- The service network range. When you register a Service, such as "my-fun-webserver", it automatically gets an IP address within this range. An application called the 'kube-proxy', running automatically on every Node, then takes care that any communication with this IP address is forwarded to one of the actual Pods providing that service (by configuring iptables). Fun fact: the Kubernetes API server registers itself as a service and is always available at the first host address of the range you chose.
- For example, your service network range may be 10.96.0.0/16; the Kubernetes API service makes itself available at 10.96.0.1. When you communicate with this IP address, the communication is automatically translated (by iptables) to be sent to the IP address of one of the Pods actually backing that Service, e.g. 10.123.1.55.
It's important that these ranges don't overlap, and they also both shouldn't overlap with any relevant IP ranges within your existing network! The Kubernetes folks suggest you use something within 10.0.0.0/8 if your local network range is within 192.168.0.0/16 and vice-versa.
Since there is no one-size-fits-all solution to networking between Nodes, Kubernetes delegates this to dedicated networking plugins, which implement the CNI (Container Network Interface) standard. There are a number of such plugins, but a friend of mine who is experienced with running Kubernetes clusters explained why you should probably go for Weave:
- It uses the vxlan standard, which tunnels traffic between nodes automatically: when a packet should travel from one Node to another, a small vxlan/UDP/IP header is prepended to it so it is sent to the right Node, where the header is taken off and routing continues.
- This method allows it to cross almost all difficult network setups as long as you can have one UDP port forwarded between the machines.
- Weave is smart enough to figure out the most efficient way to use vxlan given your Linux kernel version.
- It's also pretty simple: just a single Go binary.
Kubernetes takes care that the pod network range and service network range are not only usable within pods, but also on the nodes themselves. So, using the example values above, `https://10.96.0.1/` will reach the Kubernetes API server both from within pods and from the nodes, and this stays highly available if you have multiple masters, which is pretty convenient.
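A quick way to check this from a node or a pod, once your cluster is up, is simply talking to that address (a sketch using the example value above; depending on your RBAC settings the API server may answer with version information or with an "unauthorized"/"forbidden" error, but either way the virtual Service IP is clearly reachable):

$ curl -k https://10.96.0.1/version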
Some more important features of Kubernetes networking:
- A Kubernetes cluster automatically runs a "CoreDNS" pod, which provides DNS to all other pods. It forwards requests outside the cluster to an upstream DNS server, but most importantly, provides an internal `cluster.local` DNS zone that you can use to look up other pods or services. For example, `kubernetes.default.svc.cluster.local` resolves to 10.96.0.1, as above. (In that hostname, 'kubernetes' is the service name and 'default' is the namespace; see the example after this list.)
- When a pod is listening on some TCP port, you don't need a Service to reach it externally:
kubectl port-forward pod/foobarbaz 8080:80
forwards local port 8080 to port 80 of a pod called 'foobarbaz'. For this to work, your kubectl can run on any machine with credentials to access the API server; it doesn't need to be part of the cluster.
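To see the cluster DNS in action, you can resolve that name from inside any pod. A sketch, using a throwaway Ubuntu pod like in the examples further down (the pod name 'dnstest' is arbitrary, and the address you get back depends on your service range):

$ kubectl run --restart=Never -ti --image=ubuntu:bionic dnstest
root@dnstest:/# apt-get update && apt-get -y install dnsutils
[....]
root@dnstest:/# host kubernetes.default.svc.cluster.local
kubernetes.default.svc.cluster.local has address 10.96.0.1
root@dnstest:/# exit
$ kubectl delete pod dnstest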
Setting it all up
With that all behind us, let's start setting up our first cluster!
For a useful cluster, you'll need at least one machine to be the master, but of course we'll use at least two so we can call it an actual cluster. The master node is strongly recommended to have at least two CPUs and 2 GB of RAM. The other nodes can be a bit smaller.
For my tests, I've used four machines:
- Kubetest1 - 1 CPU, 1 GB RAM - 145.131.6.177 - Ubuntu 16.04
- Kubetest2 - 1 CPU, 1 GB RAM - 145.131.8.75 - Ubuntu 16.04
- Kubetest3 - 1 CPU, 1 GB RAM - 145.131.5.151 - Ubuntu 16.04
- Kubetest4 - 2 CPU, 2 GB RAM - 145.131.6.179 - Ubuntu 16.04
(I initially wanted a three-machine cluster, and forced the master on a 1 CPU/1 GB RAM machine, but processes were being killed because of going out-of-memory. So, kubetest4 became a bit bigger and will be my master.)
Now, the machines in your cluster should be able to access each other on all ports used by Kubernetes (see this list here). Important note: the machines should also be able to access themselves on their external IP address! I was initially having problems because of this, where pods on a worker node could reach the master normally, but pods on the master node couldn't. It was because the external IP address of the master was used in some communications, which was ironically impossible just on the master itself.
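As a rough sketch of what that can look like with ufw on Ubuntu (based on the port list from the kubeadm documentation plus the ports Weave needs; treat this as an assumption and adapt it to your own firewall):

# On the master node:
ufw allow 6443/tcp          # Kubernetes API server
ufw allow 2379:2380/tcp     # etcd
ufw allow 10250/tcp         # kubelet API
ufw allow 10251/tcp         # kube-scheduler
ufw allow 10252/tcp         # kube-controller-manager
# On the worker nodes:
ufw allow 10250/tcp         # kubelet API
ufw allow 30000:32767/tcp   # NodePort services
# On all nodes, for the Weave pod network:
ufw allow 6783/tcp
ufw allow 6783:6784/udp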
So, once you have your machines up, and you know they can reach each other, we'll start installation.
- First, we choose our Pod network range and Service range. See the previous section. I used 10.107.0.0/16 for the Pod network range and 10.16.0.0/16 for the service range.
- Optionally, we disable swap on the machine, as Kubernetes prefers to run without it. To do this, check /proc/swaps for your swap locations and run swapoff on them, then remove the entries from /etc/fstab. Or, leave them enabled and later disable the check in Kubernetes.
- We install the Docker daemon. On Ubuntu, sudo apt-get install docker.io does it.
- We install the basic Kubernetes tools: kubeadm, kubelet and kubectl. Here's a nice guide for it.
- We use kubeadm to create our cluster. Just follow this guide, until the "Installing a pod network add-on" step.
  - I used: kubeadm init --pod-network-cidr "10.107.0.0/16" --service-cidr "10.16.0.0/16"
  - After this, kubectl get nodes should already respond with your master node, and kubectl get pods --all-namespaces should mention the kube-system pods that make up the Control Plane as discussed above!
- We install a pod network add-on. As discussed before, we use Weave, but YMMV. See this guide on Weave setup, with an important note: if you don't use Kubernetes' default Pod network CIDR, pay attention to the exact kubectl apply step.
  - I used: kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.107.0.0/16"
  - After this, kubectl get pods --all-namespaces should show some additional pods. Most importantly, it should now show that CoreDNS is running.
- Now, we can join our other nodes. kubeadm init will show a kubeadm join command at the end of its output, which you can run on the other nodes (a sketch of what it looks like follows after this list). If you don't have that output anymore, see this page on how to recreate a valid join command.
  - I also copied /etc/kubernetes/admin.conf from the master to the other nodes (root-readable only) and set KUBECONFIG=/etc/kubernetes/admin.conf, so I could run kubectl on them. But that's up to you!
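The join command will look roughly like this (a sketch; the token and hash are specific to your cluster, and 145.131.6.179 is just my master's address):

kubeadm join 145.131.6.179:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>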
Hopefully, after understanding all the concepts discussed earlier, this process was a matter of mere minutes!
Your node list should be complete now:
root@kubetest4:~# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
kubetest1   Ready    <none>   2d2h   v1.13.4
kubetest2   Ready    <none>   2d2h   v1.13.4
kubetest3   Ready    <none>   2d5h   v1.13.4
kubetest4   Ready    master   2d5h   v1.13.4
By the way, you can run kubectl commands from your own machine as well by copying /etc/kubernetes/admin.conf from the master to your $HOME/.kube/config, possibly after some firewall configuration to allow the communication.
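For example (a sketch, assuming you can SSH to the master as root and the API server port, 6443, is reachable from your machine):

$ mkdir -p ~/.kube
$ scp root@145.131.6.179:/etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get nodes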
TO DO: It should be possible to generate user credentials instead of taking admin credentials. But how?
Creating some basic pods
You'll see your cluster has no pods in the default namespace:
$ kubectl get pods
No resources found.
The simplest thing we can do, now that we have a cluster, is run a random image (which is a Docker image, in this case ubuntu:bionic, pulled from Docker Hub) with a random command, like before:
$ kubectl run --restart=Never -ti --image=ubuntu:bionic bash
If you don't see a command prompt, try pressing enter.
root@bash:/# apt-get update && apt-get -y install lsb-release
[....]
root@bash:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
Indeed, your pod will be shown now:
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
bash   1/1     Running   0          56s
In the command above, we gave --restart=Never. There are three possible values for this option:
- Never: When the pod exits, it will not be recreated. If this is given, kubectl will just create a Pod.
- OnFailure: When the pod exits with failure, it will be recreated, otherwise not. In other words, kubectl will start a Job to create the Pod (see the sketch after this list). (If that doesn't make sense, quickly check up on Jobs in the Concepts section above!)
- Always: When the pod exits, it will be recreated. You guessed it: in this case, kubectl will create a Deployment. (If you didn't guess it, re-check the Concepts section!)
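For example, the following is a sketch (using the kubectl version from this write-up, 1.13) that creates a Job whose Pod runs a single command to completion, retrying it on failure; the name 'datejob' is arbitrary:

$ kubectl run --restart=OnFailure --image=ubuntu:bionic datejob -- date
$ kubectl get jobs          # should list 'datejob' and show whether it completed
$ kubectl delete job datejob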
The default is --restart=Always, so you'll see the container is recreated, like this:
$ kubectl run -ti --image=ubuntu:bionic bash
If you don't see a command prompt, try pressing enter.
root@bash-58654c7f4b-9bhcq:/# touch foobarbaz
root@bash-58654c7f4b-9bhcq:/# exit
Session ended, resume using 'kubectl attach bash-58654c7f4b-9bhcq -c bash -i -t' command when the pod is running
[...wait for a bit until the pod comes back up...]
$ kubectl attach bash-58654c7f4b-9bhcq -c bash -i -t
If you don't see a command prompt, try pressing enter.
root@bash-58654c7f4b-9bhcq:/# ls foobarbaz
ls: cannot access 'foobarbaz': No such file or directory
As you can see, the container restarted, as /foobarbaz did not exist anymore when re-attaching after the exit. Any state in the filesystem of the container/pod will be gone upon restart.
If you tried this, you can check and remove the deployment like this:
$ kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
bash   1/1     1            1           3s
$ kubectl delete deployment bash
deployment.extensions "bash" deleted
Storage using Volumes
So let's try adding a Volume to our pod, to see if we can make some changes persistent. Kubernetes supports many types of volumes; in this case we use emptyDir, which is just a locally stored directory on the node's disk (initially empty).
There is no command-line parameter to kubectl run to add volumes. Internally, kubectl run translates your command line to a JSON request to the Kubernetes API server; we'd have to add any additional fields directly into that JSON. This can be done with the --overrides flag, but at this point, it is probably easier to switch to sending those requests ourselves. We can use JSON for this too, but many users use YAML for this, so we will too.
The command above, kubectl run --restart=Never -ti --image=ubuntu:bionic bash, translates to the following YAML:
apiVersion: v1
kind: Pod
metadata:
  name: bash
spec:
  containers:
  - name: bash
    image: ubuntu:bionic
    stdin: true
    stdinOnce: true
    tty: true
As you can see, this is a Pod named "bash" with one container, also called "bash", running ubuntu:bionic.
To create this Pod and attach to it, we write the code above to bash.yaml and run these commands:
$ kubectl create -f bash.yaml
pod/bash created
$ kubectl attach -ti bash -c bash
If you don't see a command prompt, try pressing enter.
root@bash:/# exit 0
exit
$ kubectl delete pod bash
Now, we will recreate the pod with an emptyDir volume mounted at /foo.
$ cat bash.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bash
spec:
  volumes:
  - name: testing-volume
    emptyDir: {}
  containers:
  - name: bash
    image: ubuntu:bionic
    stdin: true
    stdinOnce: true
    tty: true
    volumeMounts:
    - mountPath: /foo
      name: testing-volume
$ kubectl create -f bash.yaml
pod/bash created
$ kubectl attach -ti bash -c bash
If you don't see a command prompt, try pressing enter.
root@bash:/# mount | grep foo
/dev/mapper/ubuntu--vg-root on /foo type ext4 (rw,relatime,errors=remount-ro,data=ordered)
root@bash:/# exit 0
exit
$ kubectl delete pod bash
Of course, this volume isn't persistent just yet; restarts of the pod will cause it to be recreated (the Volume has the "lifetime of the Pod") so it actually doesn't serve our purpose.
For this, we need to create a PersistentVolume, which exists outside our Pod but is linked to it. (Here's another tutorial for this if you want.)
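A sketch of the direction this takes (untested here; it assumes the cluster can actually provide storage for the claim, for example through a default StorageClass, which is exactly the open question below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testing-claim
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: bash
spec:
  volumes:
  - name: testing-volume
    persistentVolumeClaim:
      claimName: testing-claim   # refers to the claim above
  containers:
  - name: bash
    image: ubuntu:bionic
    stdin: true
    stdinOnce: true
    tty: true
    volumeMounts:
    - mountPath: /foo
      name: testing-volume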
To do: "HostPath" persistent volumes are only supported on single-node clusters, but our cluster has multiple nodes. What's the easiest way to get persistent storage?
Creating your own pods
To do: Set up Docker Registry. Push images to it. Start them in a pod.
To do
- Play with deployments.
- Play with deployments with redundancy.
- Play with volumes, including persistent volumes.
- Play with services.
- TCP ports ingressing.
- Play with kubectl port-forward.
- Kubectl set image for rolling release?
- Kubernetes Dashboard
- Attempt Kubernetes upgrade from 1.13 to 1.14
- Try getting information on a pod from inside it using the Kubernetes API
- Play with Statefulset / Daemonset