After successfully setting up a highly available Kubernetes cluster using kubeadm, I'm not able to access the Kubernetes Dashboard web UI.
First, let me explain the current cluster topology: within my local network there are three bare-metal servers, each hosting both a master and a worker node. One of those machines also runs an nginx load balancer.
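For context, the control plane was bootstrapped roughly like this; the exact flags are from memory, so treat it as a sketch rather than the literal commands I ran:

# on the first master, pointing the control-plane endpoint at the nginx load balancer
sudo kubeadm init --control-plane-endpoint "load-balancer:6443" --upload-certs

# on the other two masters, using the join command printed by kubeadm init
sudo kubeadm join load-balancer:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>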
After the setup, I copied the cluster config file to my local working machine (a MacBook Pro) and ran kubectl cluster-info; everything seems to be working fine:
Kubernetes control plane is running at https://load-balancer:6443
CoreDNS is running at https://load-balancer:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
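For completeness, copying the config and checking the cluster amounted to roughly the following (the admin.conf path is the kubeadm default; the exact scp source is from memory):

# fetch the admin kubeconfig from the first master and use it locally
scp root@<master-1>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl cluster-info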
Finally, I deployed the Kubernetes Dashboard according to the official docs, but I'm not able to access the dashboard UI from my working machine (which is obviously connected to the same network), and I can't figure out why.
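The deployment and access went through the recommended manifest and kubectl proxy, roughly like this (the Dashboard version in the URL is from memory and may not match exactly):

# deploy the recommended Dashboard manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

# proxy the API server to localhost:8001
kubectl proxy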
When requesting http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ I always get the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.40.0.1:8443: connect: no route to host",
  "reason": "ServiceUnavailable",
  "code": 503
}
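The 10.40.0.1:8443 address looks like the dashboard pod endpoint the API server is trying to proxy to; these are the standard kubectl commands I'd use to map that IP back to a pod and node (nothing cluster-specific assumed):

# show the dashboard pod with its IP and the node it runs on
kubectl -n kubernetes-dashboard get pods -o wide

# show the service and the endpoint it resolves to
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard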
What's even stranger: a few minutes ago the UI actually worked, but I couldn't sign in using the token, and now it's unreachable again.
Any suggestions as to what the problem could be?