Check the possible solutions below:
First, review the kubelet logs with journalctl -xeu kubelet and check whether the output contains the error: while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory
If so, enable and restart the cri-dockerd service and then start the kubelet:
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
sudo systemctl start kubelet
This may work for you; please go through the GitHub link for more info.
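Before restarting anything, you can confirm that the socket is actually missing. A minimal sketch (the check_sock helper is my own name, not a standard tool; the socket path comes from the error message above):

```shell
#!/bin/sh
# Report whether a container-runtime socket exists at a given path.
check_sock() {
    if [ -S "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# The path kubelet complained about in the journalctl output:
check_sock /var/run/cri-dockerd.sock
```

If this prints "missing", restarting cri-dockerd as above should recreate the socket.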
1) The KUBECONFIG environment variable is probably not set.
export KUBECONFIG=/etc/kubernetes/admin.conf
or
export KUBECONFIG=$HOME/.kube/config
2) The user's $HOME directory has no .kube/config file.
If you don't have a .kube directory or a config file, create one:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, you can export the KUBECONFIG variable like this:
export KUBECONFIG=$HOME/.kube/config
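To see which file kubectl will actually read, you can mirror its documented lookup order (the KUBECONFIG variable first, then $HOME/.kube/config). A sketch; the which_kubeconfig helper is my own, and it ignores the fact that KUBECONFIG may hold a colon-separated list:

```shell
#!/bin/sh
# Print the kubeconfig path kubectl will use, mirroring its default lookup:
# the KUBECONFIG environment variable first, then $HOME/.kube/config.
which_kubeconfig() {
    if [ -n "$KUBECONFIG" ]; then
        echo "$KUBECONFIG"
    elif [ -f "$HOME/.kube/config" ]; then
        echo "$HOME/.kube/config"
    else
        echo "none"
    fi
}

which_kubeconfig
```

If this prints "none", that matches the symptom in points 1) and 2) above.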
3) The server/port configured in the config file above is wrong.
Is it the same as the IP/hostname of the master server? If not, did you copy the file from the master? You might want to fix that.
By the way, you can get the hostname by issuing the hostname command on your CLI, or the IP with ifconfig.
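One quick way to compare the two is to pull the server: line out of the kubeconfig and print it next to this machine's addresses. A sketch assuming the usual single-cluster layout of admin.conf (hostname -I is Linux-specific, so a plain hostname fallback is included):

```shell
#!/bin/sh
# Show the API server host the kubeconfig points at, next to this host's IPs.
CONF="${KUBECONFIG:-$HOME/.kube/config}"

# e.g. "server: https://192.168.211.40:6443" -> "192.168.211.40"
api_host=$(grep 'server:' "$CONF" 2>/dev/null | sed -E 's|.*https?://([^:/]+).*|\1|')

echo "kubeconfig server: ${api_host:-<not found>}"
echo "local addresses:   $(hostname -I 2>/dev/null || hostname)"
```

If the two lines disagree on the master's address, edit the server: entry in the config (or re-copy admin.conf from the master).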
4) The Docker service may be down, hence the kube-apiserver pod isn't running:
sudo systemctl start docker
sudo systemctl start kubelet
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5) The kubelet service may be down. This may be because swap is enabled:
- sudo -i
- swapoff -a
- exit
- strace -eopenat kubectl version (optional: traces the openat syscalls, showing which config files kubectl tries to open)
Then you can run kubectl get nodes again.
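You can confirm swap really is off before restarting the kubelet. A Linux-only sketch that reads /proc/swaps (anything beyond the header line means swap is still enabled):

```shell
#!/bin/sh
# Swap is disabled when /proc/swaps contains nothing beyond its header line.
entries=$(tail -n +2 /proc/swaps 2>/dev/null | wc -l)

if [ "$entries" -eq 0 ]; then
    echo "swap is off"
else
    echo "swap is still on ($entries entries); run: sudo swapoff -a"
fi
```

Note that swapoff -a only lasts until the next reboot; to make it permanent, also comment out the swap line in /etc/fstab.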
6) Another possible cause is disk space:
Check with df -h. In one reported case, no overlay or shm filesystems (mounted on /var/lib/docker…) were listed until the free disk space was increased.
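The check above can be scripted. A sketch: the 85% threshold is an arbitrary example of my own, not a kubelet default, it only looks at the root filesystem, and df --output requires GNU df:

```shell
#!/bin/sh
# Warn when the root filesystem (which often backs /var/lib/docker) is nearly full.
THRESHOLD=85  # arbitrary example value, not a kubelet setting

used=$(df --output=pcent / | tail -n 1 | tr -d ' %')

if [ "$used" -ge "$THRESHOLD" ]; then
    echo "low disk space: ${used}% used on /"
else
    echo "disk ok: ${used}% used on /"
fi
```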
7) Follow a process similar to the one below to resolve your issue:
master
kubeadm reset
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40 --kubernetes-version=v1.18.0
(kubeadm init prints a join command like the following; run it on each worker node:)
kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
--discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
node1
$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
--discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8
node2
$ kubeadm reset
$ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
--discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8
master
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 5m18s v1.18.6
node1 Ready <none> 81s v1.18.6
node2 Ready <none> 43s v1.18.6
$ scp /root/.kube/config [email protected]:/root/.kube/config
$ scp /root/.kube/config [email protected]:/root/.kube/config
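After the joins, the kubectl get nodes check above can be turned into a quick pass/fail. A sketch; the awk filter is my own and simply counts rows whose STATUS column is not Ready:

```shell
#!/bin/sh
# Count nodes whose STATUS column (field 2) is anything other than "Ready".
not_ready=$(kubectl get nodes --no-headers 2>/dev/null \
    | awk '$2 != "Ready" { n++ } END { print n + 0 }')

echo "${not_ready:-0} node(s) not Ready"
```

With the sample output above (master, node1, node2 all Ready) this would report 0 nodes not Ready.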
8) If you still have the issue, please try this:
iptables may cause issues after you reboot your instance.
sudo su
iptables -P INPUT ACCEPT
iptables -F
For more information, also refer to the document describing steps to troubleshoot kubectl errors, and check the similar Stack Overflow question.