Score:0

Pods from kube-system CrashLoopBackOff


EDIT: I have done the exact same steps on Ubuntu Server 20.04 and it works fine...

I created a new Kubernetes cluster on Ubuntu Server 22.04, but I am having several issues. Pods from kube-system keep going up and down. I checked the logs but cannot find the cause.

kubectl get po -A

kubectl describe po calico-kube-controllers-7bdbfc669-kdts2 -n kube-system

Sometimes I cannot use kubectl at all; I think it's because the kube-apiserver pod is down:

connection refused
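When kubectl itself is refused, the static control-plane containers can still be inspected directly on the node through the container runtime. A minimal sketch with containerd's crictl (the container ID is a placeholder to fill in from the first command's output):

sudo crictl ps -a | grep kube-apiserver
sudo crictl logs --tail=50 <container-id>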

rbo@ubuntuserver:~$ kubectl get po -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS         AGE
kube-system   calico-kube-controllers-7bdbfc669-kdts2   1/1     Running            7 (6m13s ago)    16m
kube-system   calico-node-jz5xb                         1/1     Running            7 (7m9s ago)     16m
kube-system   coredns-787d4945fb-l4bf5                  1/1     Running            6 (5m59s ago)    5h26m
kube-system   coredns-787d4945fb-nt8lh                  1/1     Running            4 (93s ago)      5h26m
kube-system   etcd-ubuntuserver                         1/1     Running            16 (6m40s ago)   5h26m
kube-system   kube-apiserver-ubuntuserver               1/1     Running            15 (4m29s ago)   5h26m
kube-system   kube-controller-manager-ubuntuserver      0/1     CrashLoopBackOff   17 (2m21s ago)   5h25m
kube-system   kube-proxy-lc5nm                          0/1     CrashLoopBackOff   15 (44s ago)     5h26m
kube-system   kube-scheduler-ubuntuserver               1/1     Running            17 (5m40s ago)   5h25m
rbo@ubuntuserver:~$ journalctl -n 30
Dec 13 21:17:12 ubuntuserver kubelet[662]: E1213 21:17:12.362890     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:13 ubuntuserver kubelet[662]: I1213 21:17:13.366892     662 scope.go:115] "RemoveContainer" containerID="4621fe31f41ed1c053e77f495ed215271c4dd12b080405c533dc84e1185680d4"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.368653147Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:17,}"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.379867865Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:17,} returns container id \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.380737267Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6-runc.Jtkdkn.mount: Deactivated successfully.
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.450275779Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\" returns successfully"
Dec 13 21:17:14 ubuntuserver kubelet[662]: I1213 21:17:14.329726     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:14 ubuntuserver kubelet[662]: E1213 21:17:14.330141     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.VJwxXb.mount: Deactivated successfully.
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.SXK3sR.mount: Deactivated successfully.
Dec 13 21:17:19 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.KeqqBx.mount: Deactivated successfully.
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083816298Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083907197Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083949736Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084731070Z" level=info msg="RemovePodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084757189Z" level=info msg="Forcibly stopping sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084814886Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.088007063Z" level=info msg="RemovePodSandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.349409     662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.349754     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.366261     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.366602     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:28 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.UWyAsT.mount: Deactivated successfully.
Dec 13 21:17:29 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.X1afEe.mount: Deactivated successfully.
Dec 13 21:17:36 ubuntuserver kubelet[662]: I1213 21:17:36.366771     662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:36 ubuntuserver kubelet[662]: E1213 21:17:36.367009     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:39 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.yA3w8Y.mount: Deactivated successfully.
Dec 13 21:17:40 ubuntuserver kubelet[662]: I1213 21:17:40.362760     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:40 ubuntuserver kubelet[662]: E1213 21:17:40.363111     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
lines 1-30/30 (END)
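The kubelet messages above only show the back-off itself; to see why kube-proxy and kube-controller-manager actually exit, the logs of the previously crashed container instances are more telling. A minimal sketch (kubectl only answers while the apiserver is up; the journalctl line works regardless):

kubectl logs -n kube-system kube-proxy-lc5nm --previous
kubectl logs -n kube-system kube-controller-manager-ubuntuserver --previous
journalctl -u kubelet --no-pager | tail -n 100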
Score:0

I was using kubeadm and containerd. A kubeadm reset, including manual deletion of $HOME/.kube/config and /etc/cni/net.d, seemed to fix it for me.

Reset kubeadm.

$ sudo kubeadm reset

Remove this config file now, or overwrite it later when prompted.

$ sudo rm $HOME/.kube/config

Clear the CNI configuration (I think this was the key part for me). Use this at your own risk (I have little knowledge of what is in there, as I'm fairly new); it wouldn't hurt to back up those files first, as sketched below.
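A possible backup before deleting (the .bak path is only an illustration):

$ sudo cp -a /etc/cni/net.d /etc/cni/net.d.bak

Then clear the directory: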

$ sudo rm -rf /etc/cni/net.d

kubeadm init should now initialize a working cluster whose kube-system pods no longer run into CrashLoopBackOff 'spontaneously'.

$ sudo kubeadm init
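After a fresh init, the admin kubeconfig removed earlier has to be recreated and the CNI plugin (Calico, in the question) reinstalled. The kubeconfig commands below are the standard ones printed by kubeadm init itself; the calico.yaml path is only a placeholder for whatever CNI manifest you use:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f calico.yaml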
Score:0

Following this tutorial, it now works fine on 22.04. I had skipped the kernel configuration and the containerd configuration, which are not mandatory on 20.04.

Tutorial: install Kubernetes on Ubuntu Server 22.04
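For reference, the kernel and containerd configuration this answer refers to typically looks like the sketch below, taken from the standard kubeadm prerequisites rather than from that exact tutorial; on 22.04 the SystemdCgroup setting for containerd is a common culprit when control-plane pods restart in a loop, so adapt as needed:

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
$ sudo sysctl --system

$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd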


