Score:0

Kubeadm 1.24 with containerd: kubeadm init fails (CentOS 7)


I am trying to install a single-node cluster on CentOS 7, with kubeadm 1.24 and containerd. I followed the installation steps, ran containerd config default > /etc/containerd/config.toml, and set SystemdCgroup = true.
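For reference, the SystemdCgroup switch sits under the runc runtime options in /etc/containerd/config.toml; the relevant excerpt (everything else left as generated by containerd config default) looks like this:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true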

But kubeadm init fails at:

[root@master-node .kube]# kubeadm init
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
        [WARNING HTTPProxy]: Connection to "https://10.XXXXXXXX" uses proxy "http://proxy-XXXXXXXXX.com:8080/". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.XXXXXXXX" uses proxy "http://proxy-XXXXXXXXX.com:8080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 10.XXXXXXXX]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [10.XXXXXX 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.XXXXXXX 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

systemctl status kubelet shows: Active: active (running)

and the logs from journalctl -xeu kubelet:

mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.715751    8685 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reas
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.809523    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.910121    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.010996    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.111729    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.185461    8685 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.3
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.212834    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.313367    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.413857    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: I0520 17:07:06.433963    8685 kubelet_node_status.go:70] "Attempting to register node" node="master-node"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.434313    8685 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.
mai 20 17:07:06 master-node kubelet[8685]: W0520 17:07:06.451759    8685 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDr
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.451831    8685 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSID
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.514443    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573293    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573328    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573353    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573412    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574220    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574254    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574279    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574321    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.615512    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.716168    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.816764    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"

And /var/log/messages contains a lot of:

May 22 12:50:00 master-node kubelet: E0522 12:50:00.616324   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
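(As far as I understand, this particular message is expected as long as no CNI plugin has been deployed yet; a quick way to check whether any CNI configuration exists on the node:

ls /etc/cni/net.d/

The directory normally stays empty until a CNI such as Calico or Flannel is installed.)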

and

[root@master-node .kube]# systemctl status containerd

● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/containerd.service.d
           └─http_proxy.conf
   Active: active (running) since dim. 2022-05-22 12:28:59 CEST; 22min ago
     Docs: https://containerd.io
 Main PID: 18416 (containerd)
    Tasks: 111
   Memory: 414.6M
   CGroup: /system.slice/containerd.service
           ├─18416 /usr/bin/containerd
           ├─19025 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id c7bc656d43ab9b01e546e4fd4ad88634807c836c4e86622cd0506a0b2216c89a -address /run/container...
           ├─19035 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id b9097bd741e5b87042b4592d26b46cce5f14a24e609e03c91282a438c2dcd7f8 -address /run/container...
           ├─19047 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 979ac32bd88c094dae25964159066202bab919ca2aea4299827807c0829c3fa2 -address /run/container...
           ├─19083 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id a6bcd2c83034531d9907defce5eda846dbdfcf474cbfe0eba7464bb670d5b73d -address /run/container...
           ├─kubepods-burstable-pod07444178f947cc274160582c2d92fd91.slice:cri-containerd:27b2a5932689d1d62fa03024b9b9542e24bc5fda8d5088cbeecf72f66afd4251
           │ └─19266 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-ad...
           ├─kubepods-burstable-pod817561003fea443230cdbdc318133c3d.slice:cri-containerd:c5c8abc23cb256e2b7f01e767ea18ba6b78f851b68f594349cb6449e2c2c2409
           │ └─19259 kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/contro...
           ├─kubepods-burstable-pod68dc7c99c505d2f1495ca6aaa1fe2ba6.slice:cri-containerd:231b0ecd5ad9e49e2276770f235a753b4bac36d0888ef0d1cb24af56e89fa23e
           │ └─19246 etcd --advertise-client-urls=https://10.32.67.20:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var...
           ├─kubepods-burstable-podc5c33a178f011135df400feb1027e3a5.slice:cri-containerd:9cf36107d9881a5204f01bdc6a45a097a3130ae5c3a237b02dfa03978b21dc42
           │ └─19233 kube-apiserver --advertise-address=10.32.67.20 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca...
           ├─kubepods-burstable-pod817561003fea443230cdbdc318133c3d.slice:cri-containerd:a6bcd2c83034531d9907defce5eda846dbdfcf474cbfe0eba7464bb670d5b73d
           │ └─19140 /pause
           ├─kubepods-burstable-pod07444178f947cc274160582c2d92fd91.slice:cri-containerd:c7bc656d43ab9b01e546e4fd4ad88634807c836c4e86622cd0506a0b2216c89a
           │ └─19133 /pause
           ├─kubepods-burstable-pod68dc7c99c505d2f1495ca6aaa1fe2ba6.slice:cri-containerd:b9097bd741e5b87042b4592d26b46cce5f14a24e609e03c91282a438c2dcd7f8
           │ └─19124 /pause
           └─kubepods-burstable-podc5c33a178f011135df400feb1027e3a5.slice:cri-containerd:979ac32bd88c094dae25964159066202bab919ca2aea4299827807c0829c3fa2
             └─19117 /pause

mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.146209618+02:00" level=info msg="StartContainer for \"231b0ecd5ad9e49e2276770f23...9fa23e\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.151240012+02:00" level=info msg="CreateContainer within sandbox \"c7bc656d43ab9b01e546e4f...
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.151540207+02:00" level=info msg="StartContainer for \"27b2a5932689d1d62fa03024b9...fd4251\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.164666904+02:00" level=info msg="CreateContainer within sandbox \"a6bcd2c83034531d9907def...
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.166282219+02:00" level=info msg="StartContainer for \"c5c8abc23cb256e2b7f01e767e...2c2409\""
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.277928704+02:00" level=info msg="StartContainer for \"9cf36107d9881a5204f01bdc6a...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.288703134+02:00" level=info msg="StartContainer for \"c5c8abc23cb256e2b7f01e767e...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.290631867+02:00" level=info msg="StartContainer for \"231b0ecd5ad9e49e2276770f23...essfully"
mai 22 12:45:56 master-node containerd[18416]: time="2022-05-22T12:45:56.293864738+02:00" level=info msg="StartContainer for \"27b2a5932689d1d62fa03024b9...essfully"
mai 22 12:46:55 master-node containerd[18416]: time="2022-05-22T12:46:55.476960835+02:00" level=error msg="ContainerStatus for \"58ef67cb3c64c5032bf0dac6f1913e53e...
Hint: Some lines were ellipsized, use -l to show in full.

[root@master-node .kube]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since dim. 2022-05-22 12:45:55 CEST; 6min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 18961 (kubelet)
    Tasks: 16
   Memory: 44.2M
   CGroup: /system.slice/kubelet.service
           └─18961 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kube...

mai 22 12:51:25 master-node kubelet[18961]: E0522 12:51:25.632732   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:30 master-node kubelet[18961]: E0522 12:51:30.633996   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:35 master-node kubelet[18961]: E0522 12:51:35.634586   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:40 master-node kubelet[18961]: E0522 12:51:40.635415   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:45 master-node kubelet[18961]: E0522 12:51:45.636621   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:50 master-node kubelet[18961]: E0522 12:51:50.637966   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:51:55 master-node kubelet[18961]: E0522 12:51:55.639255   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:00 master-node kubelet[18961]: E0522 12:52:00.640514   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:05 master-node kubelet[18961]: E0522 12:52:05.641452   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
mai 22 12:52:10 master-node kubelet[18961]: E0522 12:52:10.642237   18961 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkRe...itialized"
Hint: Some lines were ellipsized, use -l to show in full.

and

[root@master-node yum.repos.d]# rpm -qa|grep containerd
containerd.io-1.6.4-3.1.el7.x86_64

[root@master-node yum.repos.d]# rpm -qa |grep kube
kubeadm-1.24.0-0.x86_64
kubectl-1.24.0-0.x86_64
kubelet-1.24.0-0.x86_64
kubernetes-cni-0.8.7-0.x86_64

I also tried to install Calico:

[root@master-node .kube]# kubectl apply -f calico.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
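(That kubectl error is a separate issue: kubectl simply has no kubeconfig pointing at the cluster. After a successful kubeadm init one would normally run something like:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

or export KUBECONFIG=/etc/kubernetes/admin.conf, neither of which helps while the control plane itself is not up.)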

and

[root@master-node ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_KUBEADM_ARGS=--node-ip=10.XXXXXX --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --cgroup-driver=systemd
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

I can't figure out:

[edit: I answer my own questions below]

  • whether, because of containerd, I have to run kubeadm init with a --config file? Answer: => [NO] (a sketch of such a config follows below, just for reference)
  • whether I have to install a CNI like Calico first? Answer: => [NO, kubeadm init can complete without one]
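(Not needed in the end, but for reference: if one did want to pass a config file to pin the CRI socket and the systemd cgroup driver explicitly, a minimal kubeadm-config.yaml for 1.24 would look roughly like this:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

used with kubeadm init --config kubeadm-config.yaml.)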

[edit] The same installation works fine with Google DNS and no company proxy.

Ralph commented:
I ran into the exact same problem with Debian 11 and tried to solve this issue for 3 days, until I switched from containerd to CRI-O. With CRI-O it works like a charm. No idea why containerd causes so much trouble. See also here: https://ralph.blog.imixs.com/2022/11/21/switching-from-containerd-to-cir-o/
Score:0

I followed these steps: https://computingforgeeks.com/install-kubernetes-cluster-on-centos-with-kubeadm/ successfully on a home computer with the same OS:

  • a VM with CentOS 7 minimal,
  • with containerd,
  • kubeadm 1.24.

The only differences are no company proxy and no company DNS, so I guess the problem is around the proxy and DNS.

The kubeadm init was OK, and the master node is up.

The only adaptation I made was in /etc/yum.repos.d/kubernetes.repo, setting repo_gpgcheck=0.

Now I need to find out why it is not working with the company proxy.
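My current lead (untested yet): the preflight warnings say the Pod and Service IP ranges must be listed as exceptions in the proxy configuration, both for containerd and for the shell running kubeadm. A sketch of what I plan to try in the existing drop-in /etc/systemd/system/containerd.service.d/http_proxy.conf (the proxy host is the masked one from above; the CIDRs are the default Service range plus a placeholder Pod range that has to match the CNI):

[Service]
Environment="HTTP_PROXY=http://proxy-XXXXXXXXX.com:8080/"
Environment="HTTPS_PROXY=http://proxy-XXXXXXXXX.com:8080/"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.cluster.local"

then systemctl daemon-reload && systemctl restart containerd, and the same NO_PROXY exported in the shell before running kubeadm init again.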

Score:0

Make sure that containerd is working before you run kubeadm. If you have nerdctl, try:

nerdctl run -it --rm gcr.io/google-samples/env-show:1.1

Problems? Maybe the CRI integration isn't configured. Try:

containerd config default > /etc/containerd/config.toml 
systemctl restart containerd

That should help you sort it, but you might need to provide more debug info.
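If nerdctl is not available, crictl (from the cri-tools package, usually installed together with kubeadm) can serve as a similar sanity check, for example:

crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a

The first command shows whether containerd's CRI plugin is up and dumps its effective config (including the cgroup settings); the second lists containers, as in the hint kubeadm prints.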

Yes, containerd config default > /etc/containerd/config.toml was done, and I tried both with and without setting SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] in /etc/containerd/config.toml. I can't install nerdctl with the local repo I have right now. Can you confirm that I don't need to install Docker if I have containerd, and whether kubeadm init should work even if I have not installed a network plugin?