I am running a Kubernetes cluster composed of three nodes: the control plane node runs inside an Azure VM, and the two worker nodes run on two separate bare-metal servers. I set up the cluster with kubeadm, and both worker nodes joined without issues. I installed Weave Net as the CNI, and spawning pods and creating deployments works fine. I ran into issues when I tried to set up the NGINX ingress controller for external access. When applying this manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
I get the following error:

Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.98.59.50:443: connect: no route to host
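For reference, this is roughly how the admission service can be inspected to confirm it exists and has backing endpoints (standard kubectl commands; the service, namespace, and pod names are the defaults from the ingress-nginx manifests and may differ in other installs):

    # Check the admission webhook service and whether it has endpoints
    kubectl -n ingress-nginx get svc ingress-nginx-controller-admission
    kubectl -n ingress-nginx get endpoints ingress-nginx-controller-admission

    # Verify the controller pod itself is running and ready
    kubectl -n ingress-nginx get pods -o wide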
While investigating the issue, I also realized that whenever I try to curl any cluster IP address (e.g., curl 10.32.0.9), I get this error:

curl: (7) Failed to connect to 10.32.0.9 port 80 after 3056 ms: No route to host
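One check that should help narrow this down is running the same curl from a throwaway pod, to separate "overlay network broken everywhere" from "only node-to-pod routing broken" (a minimal sketch; the image choice is arbitrary and the IP is the one from the error above):

    # Curl the same IP from inside the cluster instead of from a node
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -m 5 http://10.32.0.9

    # Check that the Weave and kube-proxy pods are healthy on every node
    kubectl -n kube-system get pods -o wide | grep -E 'weave|kube-proxy'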
After some googling, it seems the issue is firewall-related. I enabled all inbound and outbound ports in the Azure portal, but the issue still persists.
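Since the two workers are bare-metal machines, the Azure rules only cover the control plane VM, so the hosts' own firewalls would have to be checked as well. For reference, these are the ports Weave Net's documentation requires to be open between nodes, plus the kubelet port kubeadm needs, shown as a sketch with ufw (an assumption; the servers might use firewalld or raw iptables instead):

    # On each node (ufw assumed; adapt to firewalld/iptables as needed)
    sudo ufw allow 6783/tcp        # Weave Net control plane
    sudo ufw allow 6783:6784/udp   # Weave Net data plane
    sudo ufw allow 10250/tcp       # kubelet API
    sudo ufw status verbose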