Score:2

Kubernetes 1.21.3: the recommended value for "clusterCIDR" in "KubeProxyConfiguration"


I am trying to join a new node to an existing v1.21.3 cluster with the Calico CNI. The join command gives a clusterCIDR warning.

How can I fix this subnet warning message?

# kubeadm join master-vip:8443 --token xxx --discovery-token-ca-cert-hash sha256:xxxx
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0809 14:10:27.717696   75868 utils.go:69] The recommended value for "clusterCIDR" in "KubeProxyConfiguration" is: 10.201.0.0/16; the provided value is: 10.203.0.0/16
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

Update:

I was using 10.201.0.0/16 during the cluster setup and later changed it to 10.203.0.0/16. I am not sure where it is still picking up the 10.201.0.0/16 value.

Here are the current subnet values:

# sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr
    - --cluster-cidr=10.203.0.0/16

# kubectl cluster-info dump | grep cluster-cidr
                            "--cluster-cidr=10.203.0.0/16",
                            "--cluster-cidr=10.203.0.0/16",
                            "--cluster-cidr=10.203.0.0/16",

Steps to update the pod CIDR from 10.201.0.0/16 to 10.203.0.0/16:

  1. Updated the kubeadm-config ConfigMap using kubectl -n kube-system edit cm kubeadm-config:

podSubnet: 10.203.0.0/16

  2. Updated kube-controller-manager and restarted it:

sed -i 's/10.201.0.0/10.203.0.0/' /etc/kubernetes/manifests/kube-controller-manager.yaml

After updating the IP, all configs show the subnet as 10.203.0.0, but pods are still created in the `10.201.0.0` subnet.

# kubectl get cm kube-proxy -n kube-system -o yaml |grep -i clusterCIDR
    clusterCIDR: 10.203.0.0/16
# kubectl get no -o yaml |grep -i podcidr
    podCIDR: 10.203.0.0/24
    podCIDRs:
    podCIDR: 10.203.1.0/24
    podCIDRs:
    podCIDR: 10.203.2.0/24
    podCIDRs:
    podCIDR: 10.203.3.0/24
    podCIDRs:
    podCIDR: 10.203.5.0/24
    podCIDRs:
    podCIDR: 10.203.4.0/24
    podCIDRs:
    podCIDR: 10.203.6.0/24
    podCIDRs:
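To see where the old subnet might still be referenced, every likely config location can be searched for it. This is a hedged sketch: the lines that need cluster access are commented out, and only the grep pattern itself is run on a sample line.

```shell
# Places where the old 10.201.0.0/16 value could still be lurking
# (run on a control-plane node; these need kubectl/cluster access):
# grep -rn '10.201.0.0' /etc/kubernetes/
# kubectl -n kube-system get cm kubeadm-config -o yaml | grep -n '10\.201'
# kubectl -n kube-system get cm kube-proxy -o yaml | grep -n '10\.201'

# The grep pattern itself, demonstrated on a sample config line:
printf 'clusterCIDR: 10.201.0.0/16\n' | grep -n '10\.201'
```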
Mikołaj Głodziak
Could you add the following information to the question: the IP addresses of the main controller and of the worker node that you are trying to attach, and the output of the commands `sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr` and `kubectl cluster-info dump | grep cluster-cidr` (both from the main controller)? Did you somehow edit the running configuration on the main controller? Which network did you use in the `kubeadm` command - `sudo kubeadm init --pod-network-cidr={network}`?
sfgroups
@MikołajGłodziak added inline response to the post.
Mikołaj Głodziak
Could you please describe (tutorial, steps) how you changed your `clusterCIDR` network from `10.201.0.0/16` to `10.203.0.0/16`? I am trying to replicate your issue.
Score:2

I managed to replicate your issue and got the same error. A few other configuration files need to be updated as well.

To fully change the pod and node IP pool, you need to update the podCIDR and clusterCIDR values in a few configuration files:

  • update the ConfigMap kubeadm-config - you did it already

  • update file /etc/kubernetes/manifests/kube-controller-manager.yaml - you did it already

  • update node(s) definition with proper podCIDR value and re-add them to the cluster

  • update ConfigMap kube-proxy in kube-system namespace

  • add new IP pool in Calico CNI and delete the old one, recreate the deployments

Update node(s) definition:

  1. Get node(s) name(s): kubectl get no - in my case it's controller
  2. Save definition(s) to file: kubectl get no controller -o yaml > file.yaml
  3. Edit file.yaml -> update podCIDR and podCIDRs values with your new IP range, in your case 10.203.0.0
  4. Delete old and apply new node definition: kubectl delete no controller && kubectl apply -f file.yaml

Please note you need to do those steps for every node in your cluster.
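The per-node steps above can be sketched as a loop. This is only a sketch under assumptions: `rewrite_cidr` is a helper introduced here (not part of kubeadm), and the kubectl lines are commented out because they need access to the cluster.

```shell
# Hypothetical helper: rewrite the 10.201.x.y/24 podCIDR values in a saved
# node manifest to the corresponding 10.203.x.y/24 range.
rewrite_cidr() {
  sed 's|10\.201\.\([0-9.]*/24\)|10.203.\1|g'
}

# For every node: dump the definition, rewrite it, delete and re-apply
# (commented out - requires kubectl access to the cluster):
# for node in $(kubectl get no -o name); do
#   kubectl get "$node" -o yaml | rewrite_cidr > "/tmp/${node##*/}.yaml"
#   kubectl delete "$node" && kubectl apply -f "/tmp/${node##*/}.yaml"
# done

# Sanity check of the substitution on a sample manifest line:
printf 'podCIDR: 10.201.3.0/24\n' | rewrite_cidr
```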

Update ConfigMap kube-proxy in kube-system namespace

  1. Get current configuration of kube-proxy: kubectl get cm kube-proxy -n kube-system -o yaml > kube-proxy.yaml
  2. Edit kube-proxy.yaml -> update the clusterCIDR value with your new IP range, in your case 10.203.0.0
  3. Delete old and apply new kube-proxy ConfigMap: kubectl delete cm kube-proxy -n kube-system && kubectl apply -f kube-proxy.yaml
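Alternatively, the same edit can be done in one pass with sed piped back into kubectl, followed by a restart of the kube-proxy DaemonSet so the running pods pick up the new value. This is a hedged sketch: the kubectl lines assume cluster access and are commented out; only the substitution itself is demonstrated on a sample line.

```shell
# Rewrite clusterCIDR in the live ConfigMap and restart kube-proxy
# (commented out - needs kubectl access):
# kubectl -n kube-system get cm kube-proxy -o yaml \
#   | sed 's|clusterCIDR: 10.201.0.0/16|clusterCIDR: 10.203.0.0/16|' \
#   | kubectl apply -f -
# kubectl -n kube-system rollout restart daemonset kube-proxy

# The substitution itself, checked against a sample line from the ConfigMap:
printf '    clusterCIDR: 10.201.0.0/16\n' \
  | sed 's|clusterCIDR: 10.201.0.0/16|clusterCIDR: 10.203.0.0/16|'
```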

Add new IP pool in Calico and delete the old one:

  1. Download the Calico binary and make it executable:

    sudo curl -L -o /usr/local/bin/calicoctl "https://github.com/projectcalico/calicoctl/releases/download/v3.20.0/calicoctl"
    sudo chmod +x /usr/local/bin/calicoctl
    
  2. Add new IP pool:

    calicoctl create -f -<<EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: my-new-pool
    spec:
      cidr: 10.203.0.0/16
      ipipMode: Always
      natOutgoing: true
    EOF
    

    Check if there is new IP pool: calicoctl get ippool -o wide

  3. Get the configuration to disable old IP pool -> calicoctl get ippool -o yaml > pool.yaml

  4. Edit the configuration -> add `disabled: true` for `default-ipv4-ippool` in pool.yaml:

    apiVersion: projectcalico.org/v3
    items:
    - apiVersion: projectcalico.org/v3
      kind: IPPool
      metadata:
        creationTimestamp: "2021-08-12T07:50:24Z"
        name: default-ipv4-ippool
        resourceVersion: "666"
      spec:
        blockSize: 26
        cidr: 10.201.0.0/16
        ipipMode: Always
        natOutgoing: true
        nodeSelector: all()
        vxlanMode: Never
        disabled: true
    
  5. Apply the new configuration: calicoctl apply -f pool.yaml

    Expected output of the calicoctl get ippool -o wide command:

    NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
    default-ipv4-ippool   10.201.0.0/16   true   Always     Never       true       all()      
    my-new-pool           10.203.0.0/16   true   Always     Never       false      all()      
    
  6. Re-create the pods that are in the 10.201.0.0 network (in every namespace, including kube-system): just delete them and they should be re-created instantly in the new IP pool range, for example:

    kubectl delete pod calico-kube-controllers-58497c65d5-rgdwl -n kube-system
    kubectl delete pods coredns-78fcd69978-xcz88  -n kube-system
    kubectl delete pod nginx-deployment-66b6c48dd5-5n6nw
    etc..
    

    You can also delete and apply deployments.
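To find every pod that still holds an address in the old range, the `-o wide` output can be filtered on the IP column. A hedged sketch: the kubectl line is commented out (it needs cluster access), and the awk filter is demonstrated on a sample output line.

```shell
# List pods whose IP is still in 10.201.0.0/16 (needs kubectl access):
# kubectl get pods -A -o wide --no-headers \
#   | awk '$7 ~ /^10\.201\./ {print $1, $2}'

# The filter on a sample "kubectl get pods -A -o wide" line
# (column 7 is the pod IP):
printf 'kube-system  coredns-78fcd69978-xcz88  1/1  Running  0  10m  10.201.22.206  node1\n' \
  | awk '$7 ~ /^10\.201\./ {print $1, $2}'
```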

After applying those steps, there is no warning about the clusterCIDR value when adding a new node, and new pods are created in the proper IP pool range.

sfgroups
Updated to the `10.203.0.0` value, but the pod is still created with a `10.201.0.0` address, like this: `coredns-85d9df8444-8dpql 1/1 Running 0 10m 10.201.22.206`
sfgroups
Calico has its own IP address range for pods - do we need to change this as well to match the podCIDR? `calicoctl get ippool -o wide NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR default-ipv4-ippool 10.201.0.0/16 true Always Never false all()`
Mikolaj S.
Good point, my mistake. I edited my answer to show how to fix it. Please check it and let me know if it works.
sfgroups
I see the steps, it's crazy - the pod is still created in the `10.201.0.0` subnet. Here is the Calico pool: `# calicoctl get ippool -o wide NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR default-ipv4-ippool 10.201.0.0/16 true Always Never false all() my-new-pool 10.203.0.0/16 true Always Never true all()`
Mikolaj S.
From the output that you sent, it seems you disabled the `10.203.0.0` subnet instead of `10.201.0.0` ;)
Mikolaj S.
I added the expected output to my answer.
sfgroups
Good catch, I overlooked the output. Thanks for the help.