I have added a new node to my k8s cluster, but I found that some pods scheduled on this node cannot show logs, like this:
$ kubectl logs -n xxxx xxxxx-6d5bdd7d6f-5ps6k
Unable to connect to the server: EOF
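For context, I can confirm which node the affected pod landed on and whether that node reports Ready (the pod name below is the same placeholder as above):
$ kubectl get pod -n xxxx xxxxx-6d5bdd7d6f-5ps6k -o wide
$ kubectl get nodes -o wide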
Using Lens gives an error like this:
Failed to load logs: request to http://127.0.0.1:49271/api-kube/api/v1/namespaces/xxxxxxx/pods/xxxx34-27736483--1-hxjpv/log?tailLines=500&timestamps=true&container=xxxxxx&previous=false failed, reason: socket hang up
Reason: undefined (ECONNRESET)
I believe there is some problem with this node. When I use port-forwarding:
$ kubectl port-forward -n argocd svc/argocd-notifications-controller-metrics 9001:9001
error: error upgrading connection: error dialing backend: dial tcp 10.0.6.20:10250: i/o timeout
I think the internal IP 10.0.6.20 is wrong.
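To verify that, I can check which InternalIP the node registered with the API server and whether the kubelet port is reachable from a control-plane node (worker4 is the node that owns 10.0.6.20 in the output further down; nc is assumed to be available on the control-plane node):
$ kubectl get node worker4 -o wide
$ kubectl describe node worker4 | grep -A 4 'Addresses:'
# run from a control-plane node: can the kubelet port be reached at all?
$ nc -vz 10.0.6.20 10250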
All kube-proxy pods show Running in kubectl:
$ kubectl get pods -o wide -n kube-system | grep kube-proxy
kube-proxy-7pg9d 1/1 Running 1 (2d20h ago) 29d 10.0.6.20 worker4
kube-proxy-cqh2c 1/1 Running 1 (15d ago) 29d 10.0.6.3 worker3
kube-proxy-lp4cd 1/1 Running 0 29d 10.0.6.1 worker1
kube-proxy-r6bgw 1/1 Running 0 29d 10.0.6.2 worker2
However, when I run crictl pods on each node and look for these pods:
# crictl pods | grep kube-proxy
ceef94b060e56 2 days ago Ready kube-proxy-7pg9d kube-system 1 (default)
418bd5b46c2b9 4 weeks ago NotReady kube-proxy-7pg9d kube-system 0 (default)
the sandboxes for kube-proxy-7pg9d on worker4 show as both Ready and NotReady.
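On that node itself I can dig a bit further (assuming the kubelet runs as a systemd service, as in a kubeadm install; the sandbox ID is the NotReady one from the output above):
# crictl ps | grep kube-proxy        # running containers for the pod
# crictl inspectp 418bd5b46c2b9      # details of the NotReady sandbox
# systemctl status kubelet           # is the kubelet itself healthy?
# journalctl -u kubelet -n 100 --no-pager
# ss -tlnp | grep 10250              # is the kubelet listening on 10250?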
I am using Calico as the CNI, with kube-proxy in IPVS mode.
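For the CNI/proxy side, these are the checks I plan to run (calicoctl and ipvsadm may not be installed everywhere, and the kube-proxy ConfigMap name assumes a kubeadm cluster):
$ kubectl get pods -n kube-system -o wide | grep calico-node
$ kubectl get configmap -n kube-system kube-proxy -o yaml | grep 'mode:'
# on the affected node:
# calicoctl node status
# ipvsadm -Ln | head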
How can I fix this?