The stage: a control-plane machine running Kubernetes 1.24.3 on bare-metal Ubuntu 22.04, installed with kubeadm; there is also one worker node. The whole setup worked like a charm for four months until some unknown, silent kaboom yesterday (I don't rule out a sudden hardware issue).
The problem: port 6443 shows up in netstat for the first few minutes after the control-plane machine starts up, then disappears. Even while the port is open, the apiserver is unresponsive: every connection attempt is reset by the peer. In other words, something is seriously wrong on the kube-apiserver side, but I can't figure out what it's unhappy with.
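For the record, the port check looks roughly like this (I'm using ss here, but netstat paints the same picture; localhost stands in for whatever address the apiserver is bound to on your setup):

```shell
# While the listener is still up (first few minutes after boot),
# the port is present; -tlnp = TCP listeners with process names:
ss -tlnp | grep 6443 || echo "(no listener on 6443)"

# Even while it's listed, any request to the apiserver fails
# immediately with "connection reset by peer":
curl -k https://localhost:6443/healthz || echo "(connection failed)"

# A few minutes later the listener is gone and the first command
# prints "(no listener on 6443)".
```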
I checked the obvious things: the IP address hasn't changed, there is enough disk space, and the Kubernetes certificates are not expired. So now I need to get at the kube-apiserver logs somehow.
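Concretely, those checks were along these lines (the kubeadm certs check-expiration subcommand is stable since v1.19, so it applies to 1.24; the openssl line assumes the default kubeadm PKI path):

```shell
# The node's IP address hasn't changed:
ip -4 addr show || ifconfig

# Enough free disk space on the root filesystem:
df -h /

# Certificate expiry as reported by kubeadm (v1.19+):
kubeadm certs check-expiration || echo "(kubeadm not in PATH here)"

# Double-check the apiserver serving cert directly:
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt \
    || echo "(cert not found at the default path)"
```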
As for the logs, the official page says:
On systemd-based systems, you may need to use journalctl instead of
examining log files.
But... which component should I run journalctl for? If I run it for the kubelet (journalctl -u kubelet), I see hardly any apiserver-related logs apart from "can't connect to :6443". And there is no service named kube-apiserver or anything like it when I run plain systemctl... There are also no relevant logs in /var/log/ (not surprising on a systemd-based system, but I checked nevertheless).
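For completeness, these are the places I looked (the grep patterns are just my own filters, nothing official):

```shell
# Kubelet logs: mostly "can't connect to :6443" noise,
# no actual apiserver output:
journalctl -u kubelet --no-pager | grep -i 6443 | tail -n 20 \
    || echo "(no matches)"

# No dedicated kube-apiserver unit is registered with systemd:
systemctl list-units --type=service | grep -i kube || echo "(nothing)"

# And nothing apiserver-shaped directly under /var/log/:
ls /var/log/ | grep -i kube || echo "(nothing)"
```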
So: is there a way to get at the apiserver's logs, or is there some gotcha I'm missing? I'd appreciate any help with this!