As seen before, Kubernetes supports [https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes many volume types], some of which are naturally persistent because they store the data on an external service.


The quickest way to set up persistent storage is to set up an NFS server. (Remember that in production, you'll want something redundant, such as Ceph, Gluster, or clustered NFS.)


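As a minimal sketch (not from the original article): on a Debian/Ubuntu machine, an NFS export can be set up like this. The export path and subnet are assumptions; adjust them to your own network.

<pre>
$ sudo apt-get install nfs-kernel-server
$ sudo mkdir -p /srv/nfs/kube
# path and allowed subnet below are assumptions
$ echo "/srv/nfs/kube 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
$ sudo exportfs -ra
</pre>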
* <code>NodePort</code> is a service type indicating that the application should be externally accessible using a "service port" on all Nodes. The service port will be allocated by Kubernetes (you can choose it, but that's not recommended) and distributed to all nodes, so that a connection to any node on the service port will automatically end up on one of its running Pods. A NodePort service also automatically gets a ClusterIP, so you can use that, too. (A minimal example follows after this list.)
* <code>LoadBalancer</code> is a service type indicating that the application should be externally accessible using a provided load balancer. By default, this works like the <code>NodePort</code>, but on specific cloud providers you'll also get an allocated external IP address, on which the desired port listens and forwards to one of the running Pods. I'm running this on my own cluster, not one hosted by a cloud provider, so I won't create a <code>LoadBalancer</code> service. If you'd like to, [https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer this page] explains how they work.
** There is a controller that allows you to use <code>LoadBalancer</code> services on your own bare-metal cluster that doesn't run on a cloud provider. It's called [https://metallb.universe.tf/ MetalLB] and it works by having a pool of external routable IP addresses to choose from; when allocating from that pool, it starts announcing that IP address via either ARP or BGP onto an arbitrary Node, so that traffic to that IP ends up there. If the Node goes down, MetalLB elects a new leader node and re-announces the IP there, so that the service is moved.
* <code>ExternalName</code> doesn't actually set up any forwarding, but allows you to register an internal name that forwards to a given name in DNS elsewhere. This allows migration to/from Kubernetes.
* Not a service type, but if your service uses HTTP, you can use Ingress instead of Service to make your service externally accessible. More on that later.
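
As an illustration of the <code>NodePort</code> type mentioned above, a minimal Service definition could look like this. This is a sketch, not from the original article; it assumes a Deployment whose Pods carry the label <code>app: nginx</code>.

<pre>
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx        # assumed Pod label
  ports:
    - protocol: TCP
      port: 80        # the Service (ClusterIP) port
      targetPort: 80  # the container port
</pre>

After <code>kubectl apply</code>, <code>kubectl get service nginx-nodeport</code> shows the allocated node port (in the 30000-32767 range by default).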


Sure enough, if you visit your external IP on port 80, you should see the same page served by Nginx appear! As described before, you can have a similar approach without having a fixed Node to connect to: the controller [https://metallb.universe.tf/ MetalLB] chooses a node, then uses ARP or BGP to announce an IP address on it. But this sort of setup only works in a controlled subnet, so I can't try it on this cluster.
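
For reference, a layer-2 MetalLB configuration (at the time of writing) is just a ConfigMap in the <code>metallb-system</code> namespace. This is a sketch with an assumed address pool, so don't take the range literally:

<pre>
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed pool of routable addresses
</pre>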


== A summary so far ==
** Master nodes are nothing special, except they (also) run Pods that together form the Kubernetes Control Plane
* Pods, the basic unit of scheduling; they run on Nodes and consist of at least one Container running an actual Docker Image
** Pods have an IP address within the Pod networking range
* Deployments, which are a way to tell Kubernetes to always have some type of Pod running
* Jobs, which are a way to tell Kubernetes to keep running some type of Pod until it finishes successfully
* Services, which are a way to make some application in Pods accessible over TCP (inside and/or outside the cluster)
** Services can have a 'virtual' IP address within the Service networking range, they can have a NodePort all Nodes listen on, and/or they can have an external IP statically or dynamically provided by a LoadBalancer.
* Volumes, which provide various kinds of storage to Pods
** Persistent Volumes are provided by the cluster administrator to allow storage
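
A quick way to see most of these objects on your own cluster; the comma syntax asks <code>kubectl</code> for several resource types at once:

<pre>
$ kubectl get nodes,pods,deployments,jobs,services,pv,pvc
</pre>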
== Accessing a Deployment using an Ingress ==


[https://kubernetes.io/docs/concepts/services-networking/ingress/ Ingresses] are like Services, but for HTTP only. This specialisation allows adding a number of additional features, such as having multiple applications behind one URL or hostname (e.g. micro-services), SSL termination and splitting load between different versions of the same service (canarying).


Ingress is currently in beta (v1beta1), meaning that the feature is well-tested and will continue to exist, but details may change. Consider this before using it in production.
Like LoadBalancer Services, creating an Ingress does not immediately change anything in the cluster. You need to have an Ingress Controller for anything to change in the cluster after you create an Ingress. There are many [https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/ Ingress Controller plugins] to choose from; I will try [https://github.com/containous/traefik Traefik] since it supports Let's Encrypt out of the box. (Some cloud providers may provide an Ingress Controller out of the box.)


First of all, we set up Traefik. For this, we'll need to create some resource types we haven't seen before: service accounts, cluster roles and role bindings, and config maps. Bear with me for a bit:
 
<pre>
$ cat traefik-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
$ kubectl apply -f traefik-account.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
</pre>
 
Now we create a ConfigMap for Traefik's configuration:
 
<pre>
$ cat traefik-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-configmap
  namespace: kube-system
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    insecureSkipVerify = true
 
    [entryPoints]
      [entryPoints.http]
        address = ":80"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
      [entryPoints.admin]
        address = ":8080"
 
    [kubernetes]
      [kubernetes.ingressEndpoint]
        publishedService = "kube-system/traefik-ingress-service-external"
 
    [api]
    entryPoint = "admin"
$ kubectl apply -f traefik-configmap.yaml
configmap/traefik-configmap created
</pre>
 
That being done, we now start the Traefik deployment:
 
<pre>
$ cat traefik.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: traefik-ingress
    namespace: kube-system
    labels:
        k8s-app: traefik-ingress-lb
spec:
    replicas: 1
    selector:
        matchLabels:
            k8s-app: traefik-ingress-lb
    template:
        metadata:
            labels:
                k8s-app: traefik-ingress-lb
                name: traefik-ingress-lb
        spec:
            volumes:
            - name: traefik-configmap
              configMap:
                name: traefik-configmap
            serviceAccountName: traefik-ingress-controller
            terminationGracePeriodSeconds: 60
            containers:
            - image: traefik
              name: traefik-ingress-lb
              ports:
              - name: web
                containerPort: 80
              - name: https
                containerPort: 443
              - name: admin
                containerPort: 8080
              volumeMounts:
              - mountPath: "/config"
                name: "traefik-configmap"
              args:
              - --loglevel=INFO
              - --configfile=/config/traefik.toml
$ kubectl apply -f traefik.yaml
deployment.extensions/traefik-ingress created
</pre>
 
What did this do?
* We created the service account and privileges Traefik needs to find Ingresses, Services and Endpoints.
* We created a ConfigMap, a type of Volume whose contents are stored in the cluster itself; it's commonly used to supply configuration files inside Pods. This ConfigMap causes Traefik to listen on ports 80, 443 and 8080.
* Then, we created a Deployment that runs the Traefik image with the given configmap and service account.
* Note that you won't find these deployments and pods using the normal <code>kubectl get pods</code> (etc) commands unless you give <code>-n kube-system</code> to select the kube-system namespace.
 
You should see a <code>traefik-ingress-...</code> pod with status <code>Running</code> when you run <code>kubectl get pods -n kube-system</code>; if that's not the case, you should stop here and investigate what's wrong.
 
To use Traefik, we'll configure two things:
* External connections end up at it
* It reads the hostname and path of requests, and sends them onwards to the correct Service
 
The first thing we've already discussed before: it requires setting up a LoadBalancer Service if you're running on a cloud provider; if you're not, like me, you can set up a ClusterIP Service with an ExternalIP, and the single-point-of-failure side-note applies here as well. (Note that we expose only ports 80 and 443, not 8080, which is Traefik's administrator port.)
 
<pre>
$ cat traefik-service-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service-external
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 443
      name: https
  externalIPs:
  - "145.131.8.75"
</pre>
 
The <code>externalIPs</code> mentioned here should be the external IP of one of your Nodes. At this point you can also create a record in DNS to point to this IP address if you want; I created <code>kubetest.sjorsgielen.nl IN A 145.131.8.75</code>.
 
Having this set up should cause <code>http://kubetest.sjorsgielen.nl/</code> to end up within Traefik. It will give a "404 page not found" result, as Traefik doesn't know about any Ingresses yet to forward your request to.
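
You can verify this from the command line; Traefik's default 404 looks like this (the hostname is mine, substitute your own):

<pre>
$ curl http://kubetest.sjorsgielen.nl/
404 page not found
</pre>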
 
You can check the Traefik dashboard to see that it's up. Currently, we'll need a port-forward for that:
 
<pre>
$ kubectl port-forward -n kube-system deployment/traefik-ingress 8080:8080
Forwarding from [::1]:8080 -> 8080
Forwarding from 127.0.0.1:8080 -> 8080
</pre>
 
Now, visit <code>http://localhost:8080/</code> and you should see the Traefik dashboard. It will show no frontends and no backends, as we haven't created any Ingresses yet for it to route. So let's create one for our Nginx service:
 
<pre>
$ cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    name: nginx-ingress
    annotations:
        traefik.frontend.rule.type: PathPrefixStrip
spec:
    rules:
    - host: kubetest.sjorsgielen.nl
      http:
        paths:
        - path: /nginx
          backend:
            serviceName: nginx-service
            servicePort: 80
$ kubectl apply -f ingress.yaml
ingress.extensions/nginx-ingress created
</pre>
 
So what does this mean?
 
* It's an Ingress type, meaning it's a message to the cluster/Traefik that we want to have a Service externally accessible over HTTP.
* The service will be reachable on the Host <code>kubetest.sjorsgielen.nl</code> -- this acts like a virtual host (<code>VirtualHost</code>) in Apache, where different hostnames can serve different content.
* The request Path must begin with <code>/nginx</code>; the <code>traefik.frontend.rule.type: PathPrefixStrip</code> annotation will cause the <code>/nginx</code> prefix to be stripped off before the request is forwarded.
* The requests will be forwarded to the <code>nginx-service</code> service on port 80.
 
In other words, http://kubetest.sjorsgielen.nl/nginx/index.html will be forwarded to http://nginx-service/index.html. And indeed, it shows the same Nginx page again! If you go to the Traefik dashboard again, you'll see that the frontend and backend have appeared, and the Health tab now shows the average response time.
 
Now, you could replace your port-forward to the Traefik dashboard with a Service and an Ingress so you can make it externally accessible on your hostname (or a different one) as well. I'll leave that as an exercise for you!
 
== Let's encrypt this ==
 
There's one very nice feature of Traefik I didn't want you to miss out on. It of course supports TLS, and it can automatically get your certificates through any ACME provider such as Let's Encrypt.
 
For this, we change our ConfigMap to include an <code>[acme]</code> section and also to auto-forward all HTTP requests to HTTPS:
 
<pre>
$ cat traefik-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-configmap
  namespace: kube-system
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    insecureSkipVerify = true
 
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
      [entryPoints.admin]
        address = ":8080"
 
    [acme]
    email = 'your e-mail address'
    storage = "acme.json"
    caServer = "https://acme-v01.api.letsencrypt.org/directory"
    entryPoint = "https"
    onDemand = true
      [acme.httpChallenge]
      entryPoint = "http"
 
    [kubernetes]
      [kubernetes.ingressEndpoint]
        publishedService = "kube-system/traefik-ingress-service-external"
 
    [api]
    entryPoint = "admin"
$ kubectl apply -f traefik-configmap.yaml
configmap/traefik-configmap configured
</pre>
 
Now, unfortunately, changing a ConfigMap doesn't automatically update the Pods that use it. Instead, we can delete our Pod, and the Deployment will recreate it with the new configuration:
 
<pre>
$ kubectl get pods -n kube-system | grep traefik
traefik-ingress-6dcd896c78-7w2k6      1/1    Running  0          8d
$ kubectl delete pod traefik-ingress-6dcd896c78-7w2k6 -n kube-system
$ kubectl get pods -n kube-system | grep traefik
traefik-ingress-6dcd896c78-8gl9t      1/1    Running  0          15s
</pre>
 
Traefik will start requesting a TLS certificate when the first TLS request comes in. It may take a minute for the Let's Encrypt challenge to resolve, but after that, you should be able to access your hostname via HTTPS and it should present a valid certificate. In my case, https://kubetest.sjorsgielen.nl/nginx gives the same working page! Also, since we've configured the HTTP redirect, http://kubetest.sjorsgielen.nl/nginx just forwards there. Hassle-free TLS, done!
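
To double-check the redirect and the certificate from the command line (again, the hostname is mine; substitute your own):

<pre>
$ curl -sI http://kubetest.sjorsgielen.nl/nginx
# expect a redirect with "Location: https://kubetest.sjorsgielen.nl/nginx"
$ curl -sI https://kubetest.sjorsgielen.nl/nginx
# expect a 200 response, served with the new certificate
</pre>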


= Creating your own images =


So far, we've usually set up the standard container <code>ubuntu:bionic</code>. It's pulled from the Docker Hub at https://hub.docker.com/_/ubuntu. Docker Hub is a central registry for images; in the same way, you can pull many other images from there, such as the minimal Linux image <code>alpine</code> or the image running in our Traefik pod, <code>traefik</code>.
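
For example, pulling and running <code>alpine</code> locally looks like this (assuming you have Docker installed):

<pre>
$ docker pull alpine
$ docker run --rm alpine cat /etc/os-release
</pre>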


But, if we want to run our own Docker images inside Kubernetes, it will need to be able to pull them as well. This can be done by uploading our images to Docker Hub, but for our own experimentation, let's set up our own registry and plug Kubernetes into it.


To begin with, the registry will need storage for its images. True to our earlier experiments, we start by creating a persistent volume claim. (I'll assume there's a persistent volume to fulfill it; if not, check above how to create one yourself.)
 
<pre>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-files
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
</pre>
 
The registry deployment:
 
<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  selector:
    matchLabels:
      app: registry
  replicas: 1
  template:
    metadata:
      labels:
        app: registry
    spec:
      volumes:
      - name: registrystorage
        persistentVolumeClaim:
          claimName: registry-files
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        volumeMounts:
        - mountPath: /var/lib/registry
          name: registrystorage
</pre>
 
And a Service + Ingress to make it accessible on a new hostname. I found that Docker doesn't support accessing a registry with a path prefix, so we have to give it its own hostname. Luckily, with Traefik, it's easy to route; you'll only have to add a record in DNS.
 
<pre>
apiVersion: v1
kind: Service
metadata:
  name: registry-service
spec:
  selector:
    app: registry
  ports:
    - name: registry
      port: 5000
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry-ingress
spec:
  rules:
  - host: kuberegistry.sjorsgielen.nl
    http:
      paths:
      - path: /
        backend:
          serviceName: registry-service
          servicePort: 5000
</pre>
 
After a minute, as before, https://kuberegistry.sjorsgielen.nl/v2/ (replace with your own hostname) should return 200 OK with a page content of "{}".
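
A quick check from the command line (substitute your own hostname):

<pre>
$ curl https://kuberegistry.sjorsgielen.nl/v2/
{}
</pre>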
 
To test whether it's working, let's take the Ubuntu Docker image and push it onto our registry, as per [https://docs.docker.com/registry/ more or less these instructions]. Here, it's important that the registry is reachable over HTTPS, as Docker will only allow unencrypted HTTP communication with localhost! (Although you could get around this with a <code>kubectl port-forward</code>.)
 
<pre>
$ docker pull ubuntu
$ docker image tag ubuntu kuberegistry.sjorsgielen.nl/myubuntu
$ docker push kuberegistry.sjorsgielen.nl/myubuntu
[...] Retrying in 10 seconds
</pre>
 
That seems to fail. As before, we can figure out the root cause by getting the logs of the Registry pod:
 
<pre>
$ kubectl logs registry-6bf4dbcfb-9csf5
[...]
time="2019-03-28T21:44:04.465658668Z" level=error msg="response completed with error" err.code=unknown err.detail="filesystem: mkdir /var/lib/registry/docker: permission denied" err.message="unknown error" go.version=go1.11.2 http.request.host=kuberegistry.sjorsgielen.nl http.request.id=c00f2785-30b0-469d-bcff-70a12c0f604b http.request.method=POST http.request.remoteaddr=10.107.160.0 http.request.uri="/v2/myubuntu/blobs/uploads/" http.request.useragent="docker/18.06.1-ce go/go1.10.4 git-commit/e68fc7a kernel/4.4.0-112-generic os/linux arch/amd64 UpstreamClient(Docker-Client/18.06.1-ce \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=125.482304ms http.response.status=500 http.response.written=164 vars.name=myubuntu
</pre>
 
A "permission denied" error in "mkdir /var/lib/registry/docker". Now, we may not know the PersistentVolume behind whatever is mounted in the registry, but we can quickly find out by checking <code>kubectl describe deployment registry</code>, <code>kubectl get pvc</code> and <code>kubectl describe pv registry-storage</code>. In my case, it's because root squashing is enabled on my NFS mount and the directory is being accessed by root, therefore by an anonymous uid/gid, which doesn't have rights in the directory. It's easily fixed and now the push works:
 
<pre>
$ docker push kuberegistry.sjorsgielen.nl/myubuntu
The push refers to repository [kuberegistry.sjorsgielen.nl/myubuntu]
b57c79f4a9f3: Pushed
d60e01b37e74: Pushed
e45cfbc98a50: Pushed
762d8e1a6054: Pushed
latest: digest: sha256:f2557f94cac1cc4509d0483cb6e302da841ecd6f82eb2e91dc7ba6cfd0c580ab size: 1150
</pre>
 
Now, let's make our own Docker image, push it, and start it in a Pod!
 
Here's an example Dockerfile that runs a tiny Perl-based webserver which always responds with a friendly greeting:
 
<pre>
$ cat Dockerfile
FROM ubuntu:bionic
 
RUN apt-get update \
&& apt-get install -y libmojolicious-perl \
&& rm -rf /var/lib/apt/lists/*
 
# Normally, you'd use COPY here, but I wanted to keep this in one file
RUN echo "#!/usr/bin/env perl"                      >>/app.pl \
&& echo "use Mojolicious::Lite;"                    >>/app.pl \
&& echo "get '/' => sub {"                          >>/app.pl \
&& echo "  shift->render(text => 'Hello World!'); " >>/app.pl \
&& echo "};"                                        >>/app.pl \
&& echo "app->start;"                              >>/app.pl \
&& chmod +x /app.pl
 
EXPOSE 3000
CMD ["/app.pl", "daemon", "-l"]
$ docker build -t kuberegistry.sjorsgielen.nl/helloworld:latest .
$ docker push kuberegistry.sjorsgielen.nl/helloworld:latest
</pre>
 
At this point, you should be able to write a Deployment, Service and Ingress for this application, using the examples above. <code>kubectl apply</code> should then start the Pod, Traefik should route the service, and whatever host/path you configured should quickly be reachable and respond with "Hello World".
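
As a starting sketch, here's a matching Deployment and Service; the names and labels are my own choice, not from the original:

<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: kuberegistry.sjorsgielen.nl/helloworld:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  selector:
    app: helloworld
  ports:
    - name: http
      port: 3000
      protocol: TCP
</pre>

Add an Ingress like the nginx one above, pointing at <code>helloworld-service</code> on port 3000, and the app is live. We've created our own image and run it on our own cluster!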


= To do =


* Kubectl set image for rolling release?
* Kubernetes Dashboard
* Attempt Kubernetes upgrade from 1.13 to 1.14
** https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/
** First, do a normal apt upgrade (the kubernetes packages are held and will not be modified)
** Then, unhold the kubeadm package on the master, upgrade it to the right version, then re-hold it
*** This only worked for me after unholding the kubelet and upgrading it as well.
** On the master, "kubeadm upgrade plan", then "kubeadm upgrade apply v1.14.x"
** Upgrade CNI controller by re-running the same <code>kubectl apply</code> as earlier
** Unhold the kubelet and kubectl packages on the master, upgrade them and re-hold them, then restart the kubelet
** For each worker, unhold the kubeadm package, upgrade it, rehold it; cordon (drain) the node; upgrade the node config; install the new kubelet version and restart it; uncordon the node.
*** Here too, this only worked for me after unholding the kubelet and upgrading it as well.
* Try getting information on a pod from inside it using the Kubernetes API
** https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod
** <code>wget --ca-certificate=/run/secrets/kubernetes.io/serviceaccount/ca.crt -qO- https://kubernetes.default.svc.cluster.local/api/</code>
** Doesn't need using the Kubernetes API, can be done using env vars: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
* Play with native cronjobs
* Play with Statefulset / Daemonset
** Allow K8s API communication from a pod, but only to receive information about itself
** Basically: Make it impossible to root a node even with "broad" privileges on the Kubernetes API server
** https://kubernetes.io/docs/concepts/policy/pod-security-policy/
* Limiting pods in memory, CPU, I/O
* Limiting pods in network communication
[[Categorie:Projects]]
