I'm building a Kubernetes cluster on virtual machines running Ubuntu 18.04, managed by Vagrant. I've successfully joined the first worker node to the cluster, but the node reports NotReady and the calico-node pod on it fails to initialize.
Running kubectl get nodes on the control plane node (c1-cp1) yields:
NAME       STATUS     ROLES           AGE    VERSION
c1-cp1     Ready      control-plane   2d2h   v1.24.3
c1-node1   NotReady   <none>          152m   v1.24.3
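If a wider view of the nodes would help, I can also attach the output of the following, run from the control plane node (it adds the internal IP, OS image, and container runtime columns):

kubectl get nodes -o wide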
Here are the possibly relevant events on the c1-node1 node:
Type      Reason                Age   From      Message
----      ------                ----  ----      -------
Warning   InvalidDiskCapacity   65m   kubelet   invalid capacity 0 on image filesystem
Warning   Rebooted              65m   kubelet   Node c1-node1 has been rebooted, boot id: 038b3801-8add-431d-968d-f95c5972855e
Normal    NodeNotReady          65m   kubelet   Node c1-node1 status is now: NodeNotReady
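These are only the events that looked possibly relevant; if the full node description (conditions, capacity, taints, allocated resources) would help, I can attach the complete output of:

kubectl describe node c1-node1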
Here are the pods on this node:
NAMESPACE     NAME                READY   STATUS                  RESTARTS      AGE    IP          NODE       NOMINATED NODE   READINESS GATES
kube-system   calico-node-hshzj   0/1     Init:CrashLoopBackOff   8 (4m ago)    109m   10.0.2.15   c1-node1   <none>           <none>
kube-system   kube-proxy-8zk2q    1/1     Running                 1 (19m ago)   153m   10.0.2.15   c1-node1   <none>           <none>
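For reference, a pod listing filtered to this node can be reproduced with something like:

kubectl get pods -A -o wide --field-selector spec.nodeName=c1-node1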
Here are the events for the failing calico-node-hshzj pod:
Type      Reason           Age                    From                Message
----      ------           ----                   ----                -------
Normal    Scheduled        161m                   default-scheduler   Successfully assigned kube-system/calico-node-hshzj to c1-node1
Normal    Pulled           161m                   kubelet             Container image "docker.io/calico/cni:v3.24.5" already present on machine
Normal    Created          161m                   kubelet             Created container upgrade-ipam
Normal    Started          161m                   kubelet             Started container upgrade-ipam
Normal    Pulled           157m (x5 over 161m)    kubelet             Container image "docker.io/calico/cni:v3.24.5" already present on machine
Normal    Created          157m (x5 over 161m)    kubelet             Created container install-cni
Normal    Started          157m (x5 over 161m)    kubelet             Started container install-cni
Warning   BackOff          156m (x12 over 160m)   kubelet             Back-off restarting failed container
Normal    SandboxChanged   70m                    kubelet             Pod sandbox changed, it will be killed and re-created.
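If logs from the failing init container would help, I can attach those as well (install-cni appears to be the container that keeps restarting, based on the events above):

kubectl describe pod calico-node-hshzj -n kube-system
kubectl logs calico-node-hshzj -n kube-system -c install-cni --previous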
What could be causing this pod to fail to initialize?