Projects:Kubernetes

3,164 bytes added, 16 Mar 2019
$ kubectl apply -f nginx.yaml
</pre>
A note about <code>kubectl apply</code> ("declarative management") versus <code>kubectl create</code> ("imperative management"): in this case, <code>apply</code> and <code>create</code> would do the same thing, since the Deployment described in <code>nginx.yaml</code> doesn't exist yet. However, if you changed <code>nginx.yaml</code> and ran <code>kubectl create</code> again, you'd get an error. "Imperative management" (create, delete, replace) means you tell kubectl which action to take, while "declarative management" means you tell kubectl what the state of the cluster should be, and it performs the correct action for you. Both are fine in a production context; from now on, this page will use <code>apply</code> where possible, since that seems to be the community consensus in tutorials.
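For reference, here is a minimal sketch of what <code>nginx.yaml</code> could look like. It is reconstructed, not the actual file: the replica count and Deployment name match the <code>kubectl get</code> output below, and the NFS server (145.131.6.179), export path (/persistent/html) and volume name (websource) come from the mount error shown later, but the labels, image tag and container mount path are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx                # label is an assumption
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx            # image/tag is an assumption
        volumeMounts:
        - name: websource
          mountPath: /usr/share/nginx/html   # mount path is an assumption
      volumes:
      - name: websource
        nfs:
          server: 145.131.6.179
          path: /persistent/html
```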
Let's check if the deployment has been created and the pods as well:
<pre>
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/2     2            1           11h
$ kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-58b6c946d5-fnqr6   1/1     Running             0          49s
nginx-deployment-58b6c946d5-p2nlm   0/1     ContainerCreating   0          49s
</pre>
One pod has been created, the other one is still in <code>ContainerCreating</code> state. Let's check why...
<pre>
$ kubectl describe pod nginx-deployment-58b6c946d5-p2nlm
[....]
  Warning  FailedMount  15s  kubelet, kubetest1  (combined from similar events): MountVolume.SetUp failed for volume "websource" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fac670f9-47d7-11e9-a977-001dd8b7660c/volumes/kubernetes.io~nfs/websource --scope -- mount -t nfs 145.131.6.179:/persistent/html /var/lib/kubelet/pods/fac670f9-47d7-11e9-a977-001dd8b7660c/volumes/kubernetes.io~nfs/websource
Output: Running scope as unit run-r368fd7089b0a46139882e708a89f8926.scope.
mount: wrong fs type, bad option, bad superblock on 145.131.6.179:/persistent/html,
      missing codepage or helper program, or other error
      (for several filesystems (e.g. nfs, cifs) you might
      need a /sbin/mount.<type> helper program)
      In some cases useful info is found in syslog - try
      dmesg | tail or so.
</pre>
In other words, the pod is failing to start because the volume <code>websource</code> (our NFS mount) cannot be mounted. You can see from the output that this is happening on the kubelet for <code>kubetest1</code>, and that the error comes from <code>mount -t nfs 145.131.6.179:/persistent/html ...</code>. The error message is right: we need an NFS mount helper tool that isn't installed on kubetest1. I run <code>apt-get install nfs-common</code> on it, and sure enough, the pod is soon running:
<pre>
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-58b6c946d5-fnqr6   1/1     Running   0          6m42s
nginx-deployment-58b6c946d5-p2nlm   1/1     Running   0          6m42s
</pre>
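Since the scheduler may place one of these pods on any node, every node in the cluster needs the NFS helper, not just kubetest1. A quick hedged sketch of a check you could run on each node (assuming Debian/Ubuntu, where the helper comes from the <code>nfs-common</code> package):

```shell
# kubelet shells out to "mount -t nfs", which needs the mount.nfs
# helper (provided by the nfs-common package on Debian/Ubuntu).
if command -v mount.nfs >/dev/null 2>&1; then
  echo "mount.nfs present"
else
  echo "mount.nfs missing - run: apt-get install nfs-common"
fi
```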


