Expose Consul DNS on one K8s cluster and use it as a stub domain in CoreDNS of another K8s cluster

I have two production clusters, say K81 and K82. K81 runs all the infrastructure microservices, and K82 hosts the microservices of the production website, which need the infrastructure services running on K81. Consul is one of the infrastructure services running on K81.

What I am trying to do is expose the Consul DNS service on one Kubernetes cluster (K81) to the other (K82), so that pods on K82 can resolve the names of services (on K81) discovered by the Consul sync catalog on K81.

What have I tried so far?

I installed the Consul servers, clients, sync catalog, and Consul DNS on K81 using the official Helm chart. All services are up and running fine, and Consul discovers all services running on K81 through its sync catalog.
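For reference, the install is roughly the sketch below (a minimal sketch, not my exact values file; the keys shown are standard options of the official hashicorp/consul chart, but your values will differ):

# values.yaml (illustrative)
global:
  name: consul
  datacenter: dc1
server:
  replicas: 3
client:
  enabled: true
connectInject:
  enabled: true     # deploys the connect-injector service seen below
dns:
  enabled: true     # deploys the consul-consul-dns ClusterIP service
syncCatalog:
  enabled: true     # syncs Kubernetes services into the Consul catalog

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -n consul -f values.yaml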

By default, the Consul DNS service installed by the Helm chart is a ClusterIP service on K81, and K82 cannot reach a ClusterIP of K81. So I created an additional Service for Consul DNS on K81 that exposes it externally through an AWS NLB.
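The extra Service looks roughly like the sketch below (the selector labels are an assumption; copy the actual selector from the existing consul-consul-dns Service, since the labels vary between chart versions):

apiVersion: v1
kind: Service
metadata:
  name: consul-consul-dns-nlb
  namespace: consul
  annotations:
    # ask the AWS cloud provider for an NLB instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # keep the NLB internal, reachable only over the peered VPCs
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: consul        # assumption: mirror whatever consul-consul-dns selects
    hasDNS: "true"
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      protocol: UDP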

Consul service status on K81:

( K81 ) Ξ ~ → kubectl -n consul get services
NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP                                       PORT(S)                                                                   AGE
consul                           ExternalName   <none>           consul.service.consul                             <none>                                                                    177d
consul-consul-connect-injector   ClusterIP      100.70.195.176   <none>                                            443/TCP                                                                   283d
consul-consul-dns                ClusterIP      100.69.74.119    <none>                                            53/TCP,53/UDP                                                             283d
consul-consul-dns-nlb            LoadBalancer   100.64.241.12    a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com   53:31938/UDP                                                              3d
consul-consul-dns-nodeport       NodePort       100.66.123.72    <none>                                            53:30025/TCP                                                              2d22h
consul-consul-server             ClusterIP      None             <none>                                            8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   283d
consul-consul-ui                 ClusterIP      100.67.85.29     <none>                                            80/TCP

Consul services resolve fine on K81:

( K81 ) Ξ ~ → kubectl exec busybox -- nslookup consul.service.consul
Server:    100.64.0.10
Address 1: 100.64.0.10 kube-dns.kube-system.svc.cluster.local

Name:      consul.service.consul
Address 1: 100.114.120.71 consul-consul-server-1.consul-consul-server.consul.svc.cluster.local
Address 2: 100.117.61.75 consul-consul-server-2.consul-consul-server.consul.svc.cluster.local
Address 3: 100.118.198.81 consul-consul-server-0.consul-consul-server.consul.svc.cluster.local


( K81 ) Ξ ~ → kubectl exec busybox -- nslookup redis-master-redis.service.consul
Server:    100.64.0.10
Address 1: 100.64.0.10 kube-dns.kube-system.svc.cluster.local

Name:      redis-master-redis.service.consul
Address 1: 100.118.239.9 redis-master-0.redis-headless.redis.svc.cluster.local

Now, for testing, I picked one of the private IPs (172.26.1.149) behind the Consul DNS private load balancer a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com of K81 and used it as a stub domain in the CoreDNS configuration on K82:

$ host  a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com
a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com has address 172.26.1.149
a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com has address 172.26.2.180
a576e1851eb4abcdxyz.elb.eu-west-1.amazonaws.com has address 172.26.3.38

I tried the following block in the CoreDNS ConfigMap, following the reference:

consul.local:53 {
    errors
    cache 30
    forward . 172.26.1.149
}
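For context, here is roughly how that block sits in the full CoreDNS ConfigMap on K82 (a sketch: the default .:53 server block is the stock one and may differ from yours, and listing all three NLB IPs as forward upstreams for redundancy is my optional variation):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    consul.local:53 {
        errors
        cache 30
        forward . 172.26.1.149 172.26.2.180 172.26.3.38
    }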

Note: I have already peered the VPCs of the K81 and K82 clusters, and as a test I checked connectivity between the private IPs of the worker nodes of both clusters. They reach each other fine, and so do the NLB private IPs.
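To separate network issues from CoreDNS configuration issues, one quick check is to query the NLB IP directly from a pod on K82, bypassing CoreDNS entirely (nslookup takes the DNS server as its second argument):

( K82 ) Ξ ~ → kubectl exec busybox -- nslookup consul.service.consul 172.26.1.149

If this resolves, the NLB path works and the problem is in the CoreDNS stub-domain configuration; if it times out, the problem is on the network side.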

But I am unable to get any pod on K82 to resolve any service running on K81:

( K82 ) Ξ ~ → kubectl exec busybox -- nslookup consul.service.consul
Server:         100.64.0.10
Address:        100.64.0.10#53

** server can't find consul.service.consul: NXDOMAIN

command terminated with exit code 1

( K82 ) Ξ ~ → kubectl exec busybox -- nslookup redis-master-redis.service.cons
Server:         100.64.0.10
Address:        100.64.0.10#53

** server can't find redis-master-redis.service.cons: NXDOMAIN

command terminated with exit code 1
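For completeness, these are the checks I would run on K82 after editing the ConfigMap (sketched commands, assuming the standard CoreDNS deployment in kube-system):

# confirm the edited Corefile is what CoreDNS actually has
kubectl -n kube-system get configmap coredns -o yaml

# CoreDNS only picks up Corefile changes automatically when the reload
# plugin is enabled; otherwise restart the pods
kubectl -n kube-system rollout restart deployment coredns

# watch CoreDNS logs for config errors
kubectl -n kube-system logs -l k8s-app=kube-dns -f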

I have been struggling with this for the past week now. Should it work this way, or am I missing something here?

Please correct me if I am approaching the setup in the wrong way.

Apologies for the long text.
