Projects:Kubernetes

It's important that these ranges don't overlap with each other, and that neither overlaps with any relevant IP range in your existing network! The Kubernetes folks suggest you use something within 10.0.0.0/8 if your local network range is within 192.168.0.0/16, and vice versa.
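Before settling on ranges, it doesn't hurt to check which ranges your nodes already use. A minimal sketch, assuming Linux nodes with the <code>iproute2</code> tools installed:

<pre>
# List the IP ranges this machine already routes to; your chosen Pod
# and Service CIDRs must not collide with any of these.
ip route show

# Show the addresses configured on the node itself, in brief form.
ip -brief addr show
</pre>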


Since there is no one-size-fits-all solution to networking between Nodes, Kubernetes delegates it to dedicated networking plugins that implement the Container Network Interface (CNI). There are a number of such plugins, but a friend of mine with experience running Kubernetes clusters explained why you should probably go for Weave:
* It uses the VXLAN standard. VXLAN tunnels traffic between nodes automatically: when a packet needs to travel from one Node to another, a small UDP/IP header is prepended to it so it gets sent to the right node, where the header is stripped off and routing continues.
* This lets it cross almost all difficult network setups, as long as you can have one UDP port forwarded between the machines.
* Weave is smart enough to figure out the most efficient way to use VXLAN given your Linux kernel version.
* It's also pretty simple: just a single Go binary.
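One practical consequence: if there is any firewall between your machines, Weave's traffic has to be let through. As a hedged sketch using <code>ufw</code> (the ports below are Weave Net's documented defaults; double-check them against the Weave documentation for your version):

<pre>
# Weave Net control traffic between peers (TCP 6783)
sudo ufw allow 6783/tcp

# Weave Net data path, including the VXLAN tunnel (UDP 6783-6784)
sudo ufw allow 6783:6784/udp
</pre>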


= Setting it all up =


With that all behind us, let's start setting up our first cluster!
For a useful cluster you'll need at least one machine to be the master, but of course we'll use at least two, so we can call it an actual cluster. It's strongly recommended that the master node has at least two CPUs and 2 GB of RAM; the other nodes can be a bit smaller.
For my tests, I've used four machines:
* Kubetest1 - 1 CPU, 1 GB RAM - 145.131.6.177 - Ubuntu 16.04
* Kubetest2 - 1 CPU, 1 GB RAM - 145.131.8.75  - Ubuntu 16.04
* Kubetest3 - 1 CPU, 1 GB RAM - 145.131.5.151 - Ubuntu 16.04
* Kubetest4 - 2 CPU, 2 GB RAM - 145.131.6.179 - Ubuntu 16.04
(I initially wanted a three-machine cluster and forced the master onto a 1 CPU/1 GB RAM machine, but processes kept being killed because the machine ran out of memory. So, kubetest4 became a bit bigger and will be my master.)
Now, the machines in your cluster should be able to access each other on all ports used by Kubernetes (see [https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports this list here]). '''Important note''': the machines should also be able to reach themselves on their external IP address! I initially had problems because of this: pods on a worker node could reach the master normally, but pods on the master node couldn't. This was because the master's external IP address was used in some communications, which, ironically, was impossible only on the master itself.
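A quick way to verify reachability is to probe the relevant ports with <code>netcat</code>. A minimal sketch, using this cluster's master IP and two ports from the required-ports list linked above (once the services are actually running; before that, a fast "connection refused" at least tells you the packet got through, while a timeout suggests a firewall is in the way):

<pre>
# From every node -- including the master itself! -- probe the
# API server port on the master's external IP:
nc -vz 145.131.6.179 6443

# Similarly, probe the kubelet port on a worker node:
nc -vz 145.131.6.177 10250
</pre>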
So, once you have your machines up, and you know they can reach each other, we'll start installation.
* First, we choose our Pod network range and Service range (see the previous section). I used 10.107.0.0/16 for the network range and 10.16.0.0/16 for the service range.
* Optionally, we disable swap on the machines, as Kubernetes prefers to run without it. To do this, check <code>/proc/swaps</code> for your swap locations and run <code>swapoff</code> on them, then remove the entries from <code>/etc/fstab</code> (see the sketch after this list). Or, leave swap enabled and later skip the corresponding preflight check (e.g. with <code>kubeadm</code>'s <code>--ignore-preflight-errors=Swap</code>).
* We install the Docker daemon. On Ubuntu, <code>sudo apt-get install docker.io</code> does it.
* We install the basic Kubernetes tools: <code>kubeadm</code>, <code>kubelet</code> and <code>kubectl</code>. [https://kubernetes.io/docs/setup/independent/install-kubeadm/ Here's a nice guide for it].
* We use <code>kubeadm</code> to create our cluster. [https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ Just follow this guide], until the "Installing a pod network add-on" step.
* I used: <code>kubeadm init --pod-network-cidr "10.107.0.0/16" --service-cidr "10.16.0.0/16"</code>
* After this, <code>kubectl get nodes</code> should already respond with your master node, and <code>kubectl get pods --all-namespaces</code> should mention the <code>kube-system</code> pods that make up the Control Plane as discussed above!
* We install a pod network add-on. As discussed before, we use Weave, but YMMV. See [https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ this guide on Weave setup], with an important note: if you don't use Kubernetes' default Pod network CIDR, pay attention to the exact <code>kubectl apply</code> step.
* I used: <code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.107.0.0/16"</code>
* After this, <code>kubectl get pods --all-namespaces</code> should show some additional pods. Most importantly, it should now show that CoreDNS is running.
* Now, we can join our other nodes. <code>kubeadm init</code> will show a <code>kubeadm join</code> command at the end of its output, which you can run on the other nodes. If you don't have that output anymore, see [https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#token-based-discovery-with-ca-pinning this page on how to recreate a valid join command].
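For the optional swap step above, the whole dance looks roughly like this. A sketch, assuming the swap entries in <code>/etc/fstab</code> are regular lines containing the word <code>swap</code>; double-check the file afterwards:

<pre>
# See which swap devices/files are currently active
cat /proc/swaps

# Turn all of them off for the running system
sudo swapoff -a

# Comment out the swap entries so they don't come back after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
</pre>

And if you've lost the <code>kubeadm join</code> output, recent <code>kubeadm</code> versions can regenerate it for you on the master in one go:

<pre>
# Creates a fresh bootstrap token and prints a ready-to-run command
# of the form:
#   kubeadm join <master-ip>:6443 --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hash>
sudo kubeadm token create --print-join-command
</pre>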
