I've created a cluster using kubeadm and everything works fine up until the point where I try to set up firewall rules that only allow traffic from 10.0.0.0/8 in. This somehow prevents my VPN pod (WireGuard) from connecting to the Kubernetes kube-dns from my local machine. There is some black magic going on here that is above my pay grade.
If I allow all traffic from all the public IPs within the cluster, then everything works fine again. So my best guess is that kube-proxy only uses those public IPs and somehow proxies connections over them. The solution would be to force kubelet and kube-proxy to only use the instances' private IPs (10.0.0.0/8).
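To give an idea of the policy I mean, this is a simplified host-level iptables equivalent, not my exact rule set:

```
# Simplified illustration of the inbound policy – not my exact rules.
# Keep established connections and loopback, allow the private range, drop the rest.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT
iptables -P INPUT DROP
```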
I've tried:
- Setting kubelet's --node-ip (see here). The kubelet process then runs with that option and sets an annotation on the node (e.g. alpha.kubernetes.io/provided-node-ip: 10.0.3.1), but the IP of the kube-proxy pod does not change (even after a restart). Someone said that Kubernetes takes the IP reported by the cloud provider (see here) if one is provided, but my cloud provider doesn't do that (see here). I've also run kubeadm reset and let the node rejoin, just to make sure. The first sketch after this list shows the exact change.
- Setting bindAddress in the kube-proxy ConfigMap also has no effect. I set a node's IP there and restarted its kube-proxy to test this; no effect. I haven't found a way to configure this on the node's disk itself, since kube-proxy only mounts the kube-proxy ConfigMap as a volume. The second sketch after this list shows what I edited.
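For reference, this is roughly how I set the kubelet flag (the path assumes a standard kubeadm install on Debian/Ubuntu; 10.0.3.1 stands for the node's private IP):

```
# /etc/default/kubelet – sourced by kubeadm's systemd drop-in
# (path assumed from a standard Debian/Ubuntu install);
# 10.0.3.1 is the node's private IP
KUBELET_EXTRA_ARGS=--node-ip=10.0.3.1
```

After `systemctl daemon-reload && systemctl restart kubelet` the flag shows up in the kubelet process and the annotation appears on the node, but the kube-proxy pod keeps its public IP.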
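And this is how I tested the kube-proxy side (the ConfigMap name, the config.conf key, and the k8s-app=kube-proxy label are what a kubeadm-deployed kube-proxy uses; the IP is again the node's private one):

```
# Edit the shared ConfigMap; under the config.conf key I set:
#
#   apiVersion: kubeproxy.config.k8s.io/v1alpha1
#   kind: KubeProxyConfiguration
#   bindAddress: 10.0.3.1
#
kubectl -n kube-system edit configmap kube-proxy

# Recreate the kube-proxy pod(s) so they pick up the new config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```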
Might be of interest:
- Kubernetes version: 1.24
- Cloud provider: Hetzner
Thanks for any help!