Kubernetes pod cannot resolve a domain name when running on specific nodes

We have an on-premises Kubernetes cluster running on nodes with hostnames node1.mycompany.local through node7.mycompany.local. We also have a database server at node16.mycompany.local, outside the Kubernetes cluster.

When a pod runs on node4 or node7, it cannot resolve the database's domain name and fails. If I move the pod to any node other than node4 or node7, it connects to the database and runs without a problem.

When I SSH into any of the nodes in the cluster, I can ping the database server by hostname without any problems.
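For what it's worth, this is roughly how I check resolution from inside a pod as well (the pod name and namespace below are placeholders, and it assumes the container image ships nslookup):

    kubectl exec -it some-pod -n some-namespace -- cat /etc/resolv.conf
    kubectl exec -it some-pod -n some-namespace -- nslookup node16.mycompany.local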

When running a Docker container directly, without Kubernetes, we pass extra hostname-to-IP mappings for the container to resolve (roughly as in the sketch below), but I don't know how Kubernetes handles this: I couldn't find any config that specifies the IPs of the external nodes.
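For reference, the plain-Docker invocation looks something like this (the IP and image name are placeholders):

    docker run --add-host node16.mycompany.local:10.0.0.16 my-app-image

From what I can tell, the closest Kubernetes equivalent would be the hostAliases field in the pod spec, but I couldn't find it (or anything similar) in our manifests. A minimal sketch, again with a placeholder IP, would be:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      hostAliases:
      - ip: "10.0.0.16"
        hostnames:
        - "node16.mycompany.local"
      containers:
      - name: my-app
        image: my-app-image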

My Kubernetes version is:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0"...
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5+coreos.0",...

What can cause this problem?

p10l: Is there a reason why you are using such an outdated version of K8s?

uylmz (OP): There isn't a technical reason. The people knowledgeable about this stuff left the company, and we aren't comfortable enough with it to take the risk of upgrading.