k8s master node stuck in NotReady

I am trying to set up a single-node k8s cluster, but the node is stuck in NotReady.

This is what I get when I run kubectl describe on the node:

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 13 May 2022 16:48:19 +0200   Fri, 13 May 2022 16:48:19 +0200   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Fri, 13 May 2022 18:05:31 +0200   Fri, 13 May 2022 16:38:24 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 13 May 2022 18:05:31 +0200   Fri, 13 May 2022 16:38:24 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 13 May 2022 18:05:31 +0200   Fri, 13 May 2022 16:38:24 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Fri, 13 May 2022 18:05:31 +0200   Fri, 13 May 2022 16:38:24 +0200   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.50.186
  Hostname:    intel-nuc
...
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-intel-nuc                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         87m
  kube-system                 kube-apiserver-intel-nuc             250m (3%)     0 (0%)      0 (0%)           0 (0%)         87m
  kube-system                 kube-controller-manager-intel-nuc    200m (2%)     0 (0%)      0 (0%)           0 (0%)         87m
  kube-system                 kube-flannel-ds-f4mz7                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      78m
  kube-system                 kube-proxy-gjbjn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87m
  kube-system                 kube-scheduler-intel-nuc             100m (1%)     0 (0%)      0 (0%)           0 (0%)         88m

I can't find any errors on the node. I've installed the Flannel network plugin and changed its CIDR setting to 10.244.0.0/24; other than that, the manifest is exactly what is on master in the Flannel git repo.
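For reference, the only edit to kube-flannel.yml was the Network value in the ConfigMap's net-conf.json; the vxlan backend is the upstream default, which matches the "Found network config - Backend type: vxlan" line in the flannel log below. Roughly:

net-conf.json: |
  {
    "Network": "10.244.0.0/24",
    "Backend": {
      "Type": "vxlan"
    }
  }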

kubectl -n kube-system logs kube-flannel-ds-f4mz7
I0513 14:48:18.130988       1 main.go:205] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0513 14:48:18.131094       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0513 14:48:18.427877       1 kube.go:120] Waiting 10m0s for node controller to sync
I0513 14:48:18.428024       1 kube.go:378] Starting kube subnet manager
I0513 14:48:19.428219       1 kube.go:127] Node controller sync successful
I0513 14:48:19.428249       1 main.go:225] Created subnet manager: Kubernetes Subnet Manager - intel-nuc
I0513 14:48:19.428258       1 main.go:228] Installing signal handlers
I0513 14:48:19.428558       1 main.go:454] Found network config - Backend type: vxlan
I0513 14:48:19.428616       1 match.go:189] Determining IP address of default interface
I0513 14:48:19.429528       1 match.go:242] Using interface with name wlp0s20f3 and address 192.168.50.186
I0513 14:48:19.429580       1 match.go:264] Defaulting external address to interface address (192.168.50.186)
I0513 14:48:19.429699       1 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0513 14:48:19.430968       1 device.go:82] VXLAN device already exists
I0513 14:48:19.431213       1 device.go:90] Returning existing device
I0513 14:48:19.431823       1 kube.go:339] Setting NodeNetworkUnavailable
I0513 14:48:19.629092       1 main.go:332] Setting up masking rules
I0513 14:48:19.929866       1 main.go:353] Changing default FORWARD chain policy to ACCEPT
I0513 14:48:19.929994       1 main.go:366] Wrote subnet file to /run/flannel/subnet.env
I0513 14:48:19.930018       1 main.go:370] Running backend.
I0513 14:48:19.930040       1 main.go:391] Waiting for all goroutines to exit
I0513 14:48:19.930071       1 vxlan_network.go:61] watching for new subnet leases
I0513 14:48:19.932511       1 iptables.go:231] Some iptables rules are missing; deleting and recreating rules
I0513 14:48:19.932528       1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j RETURN
I0513 14:48:20.027801       1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/24 ! -d 224.0.0.0/4 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0513 14:48:20.028018       1 iptables.go:231] Some iptables rules are missing; deleting and recreating rules
I0513 14:48:20.028037       1 iptables.go:255] Deleting iptables rule: -s 10.244.0.0/24 -m comment --comment flanneld forward -j ACCEPT
I0513 14:48:20.030548       1 iptables.go:255] Deleting iptables rule: ! -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j RETURN
I0513 14:48:20.127958       1 iptables.go:255] Deleting iptables rule: -d 10.244.0.0/24 -m comment --comment flanneld forward -j ACCEPT
I0513 14:48:20.128885       1 iptables.go:255] Deleting iptables rule: ! -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0513 14:48:20.131400       1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/24 -m comment --comment flanneld forward -j ACCEPT
I0513 14:48:20.228048       1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j RETURN
I0513 14:48:20.233896       1 iptables.go:243] Adding iptables rule: -s 10.244.0.0/24 ! -d 224.0.0.0/4 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0513 14:48:20.333838       1 iptables.go:243] Adding iptables rule: ! -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j RETURN
I0513 14:48:20.432009       1 iptables.go:243] Adding iptables rule: ! -s 10.244.0.0/24 -d 10.244.0.0/24 -m comment --comment flanneld masq -j MASQUERADE --random-fully
I0513 14:48:20.530266       1 iptables.go:243] Adding iptables rule: -d 10.244.0.0/24 -m comment --comment flanneld forward -j ACCEPT

I might not really know what I am reading in the flannel logs, but from what I understand there are no real issues there. And the CNI config file seems to be correct:

[munhunger@intel-nuc net.d]$ ls
10-flannel.conflist
[munhunger@intel-nuc net.d]$ cat 10-flannel.conflist 
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
[munhunger@intel-nuc net.d]$ pwd
/etc/cni/net.d
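
The flannel log above also says it wrote /run/flannel/subnet.env. Based on the config, cat /run/flannel/subnet.env should show something like the following (values inferred from the settings above, not pasted from the machine; FLANNEL_SUBNET and FLANNEL_MTU in particular are illustrative):

FLANNEL_NETWORK=10.244.0.0/24
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true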

Am I missing something obvious? Why isn't my k8s node becoming Ready?

Comments:

tuskiomi: also getting this

Rajesh Dutta: 1) Please check whether you have flannel in /opt/cni/bin. 2) What is the output of $KUBELET_NETWORK_ARGS? Is the kubelet started with the flannel config? You can check the kubelet startup arguments.
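For example (assuming a systemd-managed kubelet; adjust to your install), checks along these lines:

ls /opt/cni/bin                       # the flannel CNI binary should be listed here
ps aux | grep kubelet                 # inspect the kubelet startup arguments
journalctl -u kubelet | grep -i cni   # look for CNI-related errors from the kubelet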