I am creating a dual-stack Kubernetes cluster with kubeadm and installing Calico. I am using the configuration file below for kubeadm:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
nodeRegistration:
  criSocket: "unix:///var/run/containerd/containerd.sock"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.27.1
controlPlaneEndpoint: "{{ cp_endpoint }}:6443"
networking:
  serviceSubnet: "10.96.0.0/16,2a12:f840:42:1::/112"
  podSubnet: "10.244.0.0/14,2a12:f840:1:1::/56"
  dnsDomain: "cluster.local"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
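For reference, I initialise the cluster with this file roughly as follows (the file name is just what I use locally, and {{ cp_endpoint }} is already substituted at that point):
sudo kubeadm init --config kubeadm-config.yaml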
The cluster starts up and the node is marked as ready. However, the CoreDNS and Calico Kube Controllers pods never become ready. See the output from kubectl get pods -A -o wide below:
NAMESPACE         NAME                                       READY   STATUS    RESTARTS        AGE     IP                                   NODE            NOMINATED NODE   READINESS GATES
calico-system     calico-kube-controllers-789dc4c76b-tw2gp   0/1     Running   5 (2m18s ago)   7m20s   2a12:f840:1:9d:d490:4faf:378d:fd03   ip-10-0-1-114   <none>           <none>
calico-system     calico-node-dvkcg                          1/1     Running   0               7m20s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
calico-system     calico-typha-7578549c55-wlk6f              1/1     Running   0               7m20s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
calico-system     csi-node-driver-vwz2h                      2/2     Running   0               7m20s   2a12:f840:1:9d:d490:4faf:378d:fd00   ip-10-0-1-114   <none>           <none>
kube-system       coredns-5d78c9869d-fwc5g                   0/1     Running   0               7m27s   2a12:f840:1:9d:d490:4faf:378d:fd01   ip-10-0-1-114   <none>           <none>
kube-system       coredns-5d78c9869d-r98d6                   0/1     Running   0               7m27s   2a12:f840:1:9d:d490:4faf:378d:fd02   ip-10-0-1-114   <none>           <none>
kube-system       etcd-ip-10-0-1-114                         1/1     Running   0               7m42s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
kube-system       kube-apiserver-ip-10-0-1-114               1/1     Running   0               7m42s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
kube-system       kube-controller-manager-ip-10-0-1-114      1/1     Running   0               7m43s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
kube-system       kube-proxy-hlq74                           1/1     Running   0               7m27s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
kube-system       kube-scheduler-ip-10-0-1-114               1/1     Running   0               7m42s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
tigera-operator   tigera-operator-549d4f9bdb-c2c8m           1/1     Running   0               7m27s   10.0.1.114                           ip-10-0-1-114   <none>           <none>
When inspecting the logs for the CoreDNS pods, I get the below error:
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] plugin/ready: Still waiting on: "kubernetes"
A similar error is seen in the Calico Kube Controllers pod's logs:
2023-06-15 10:17:18.315 [ERROR][1] client.go 290: Error getting cluster information config ClusterInformation="default" error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: network is unreachable
2023-06-15 10:17:18.315 [INFO][1] main.go 138: Failed to initialize datastore error=Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: network is unreachable
These pods receive IPv6 addresses, yet when I inspect the core services (e.g. the default kubernetes API service, kube-dns, etc.), they all have IPv4 ClusterIPs only. My assumption is that in a dual-stack cluster both the pods and the Services for these core components should be dual stack, but I don't know which configuration options enable this beyond those I have already set.
The output of kubectl describe service kubernetes seems to back this up:
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.1
IPs:               10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.0.2.167:6443
Session Affinity:  None
Events:            <none>
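For comparison, what I expected to see on a Service in a dual-stack cluster is roughly the following spec fragment (the IPv6 ClusterIP here is made up from my service subnet, purely for illustration):
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  clusterIP: 10.96.0.1
  clusterIPs:
  - 10.96.0.1
  - 2a12:f840:42:1::1   # hypothetical secondary ClusterIP from my IPv6 service subnet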
I attempted to add the below to the ClusterConfiguration, but it made no difference:
controllerManager:
  extraArgs:
    cluster-cidr: "10.244.0.0/14,2a12:f840:1:1::/56"
    service-cluster-ip-range: "10.96.0.0/16,2a12:f840:42:1::/112"
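In case it helps, this is how I would verify whether those flags actually ended up on the static pod manifests (pod names taken from the pod listing above):
kubectl -n kube-system get pod kube-controller-manager-ip-10-0-1-114 -o yaml | grep -E 'cluster-cidr|service-cluster-ip-range'
kubectl -n kube-system get pod kube-apiserver-ip-10-0-1-114 -o yaml | grep service-cluster-ip-range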
I also attempted to register the node IP addresses directly in the configuration using the below:
kubeletExtraArgs:
  node-ip: 10.0.2.167,2a05:d01c:345:dc03:aeb6:5cde:e434:1c34
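For clarity, this snippet sits under nodeRegistration in the InitConfiguration, i.e. the relevant part of the file looks roughly like this:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.0.2.167,2a05:d01c:345:dc03:aeb6:5cde:e434:1c34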
This did list both IP addresses in the node description, but it had no effect on connectivity:
Addresses:
  InternalIP:  10.0.2.167
  InternalIP:  2a05:d01c:345:dc03:aeb6:5cde:e434:1c34
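Finally, in case it is relevant, this is how I would check which pod CIDRs the node has actually been allocated and which IP families each Service ended up with:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
kubectl get svc -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,FAMILIES:.spec.ipFamilies,CLUSTER-IPS:.spec.clusterIPs'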