Score:0

Unable to reach host machine from pod in Kubernetes cluster

cn flag
sb9

I have a one-master, one-worker Kubernetes cluster set up with kubeadm on a Fedora Linux KVM virtualization environment, with pod CIDR range 10.244.0.0/16 and flannel as the network plugin.

Master node: hostname fedkubemaster, IP address 192.168.122.161
Worker node: hostname fedkubenode, IP address 192.168.122.27
(NOTE: my host FQDNs are not DNS resolvable)

$ kubectl get nodes -o wide
NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                KERNEL-VERSION            CONTAINER-RUNTIME
fedkubemaster   Ready    control-plane,master   2d20h   v1.23.3   192.168.122.161   <none>        Fedora Linux 35 (Workstation Edition)   5.15.16-200.fc35.x86_64   docker://20.10.12
fedkubenode     Ready    <none>                 2d6h    v1.23.3   192.168.122.27    <none>        Fedora Linux 35 (Workstation Edition)   5.15.16-200.fc35.x86_64   docker://20.10.12

Here are the routing tables from the master node and the worker node:

[admin@fedkubemaster ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    100    0        0 enp1s0
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-25b1faebd814
192.168.122.0   0.0.0.0         255.255.255.0   U     100    0        0 enp1s0
[admin@fedkubenode ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    100    0        0 enp1s0
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.122.0   0.0.0.0         255.255.255.0   U     100    0        0 enp1s0
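
These routes are programmed by flannel: each node leases a /24 from the 10.244.0.0/16 pod CIDR (10.244.0.0/24 on the master, 10.244.1.0/24 on the worker), the local subnet is reached via the cni0 bridge, and the remote node's subnet is routed over the flannel.1 VXLAN interface. With a default flannel install, the node's lease can be inspected in /run/flannel/subnet.env; on the master it should look roughly like this (illustrative values, inferred from the routes and the pod MTU shown below):

$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true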

I am using this dnsutils pod YAML definition for testing connectivity to my host machines:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
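
To reproduce the tests below, the manifest can be applied and a shell opened inside the pod (assuming the manifest is saved as dnsutils.yaml; the file name is mine):

$ kubectl apply -f dnsutils.yaml
$ kubectl exec -it dnsutils -- /bin/sh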

Here is the ip addr and ip route show output from within the dnsutils pod:

root@dnsutils:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 7a:50:37:bc:4b:45 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.2/24 brd 10.244.1.255 scope global eth0
       valid_lft forever preferred_lft forever
root@dnsutils:/# 
root@dnsutils:/# ip route show
default via 10.244.1.1 dev eth0 
10.244.0.0/16 via 10.244.1.1 dev eth0 
10.244.1.0/24 dev eth0  proto kernel  scope link  src 10.244.1.2 

nslookup and ping of my host machines' FQDNs fail to resolve. I then tried pinging their respective IP addresses: the ping to the master node fails with "Packet filtered" (the ICMP "communication administratively filtered" error, which usually indicates a firewall REJECT rule somewhere on the path), while the worker node responds normally:

root@dnsutils:/# nslookup fedkubemaster
;; connection timed out; no servers could be reached

root@dnsutils:/# nslookup fedkubenode  
;; connection timed out; no servers could be reached
root@dnsutils:/# ping fedkubemaster
ping: unknown host fedkubemaster
root@dnsutils:/# ping fedkubenode  
ping: unknown host fedkubenode
root@dnsutils:/# ping 192.168.122.161
PING 192.168.122.161 (192.168.122.161) 56(84) bytes of data.
From 10.244.1.1 icmp_seq=1 Packet filtered
From 10.244.1.1 icmp_seq=2 Packet filtered
^C
--- 192.168.122.161 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1013ms

root@dnsutils:/# ping 192.168.122.27 
PING 192.168.122.27 (192.168.122.27) 56(84) bytes of data.
64 bytes from 192.168.122.27: icmp_seq=1 ttl=64 time=0.286 ms
64 bytes from 192.168.122.27: icmp_seq=2 ttl=64 time=0.145 ms

The issue is that I want my host machines' FQDNs to be resolvable from within the pod, but I can't work out how to fix it. It appears there is no route for resolving my host FQDNs from within the pod, which is also reflected in the CoreDNS logs. Here is the error:

[admin@fedkubemaster networkutils]$ kubectl logs -f coredns-64897985d-8skq2 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[ERROR] plugin/errors: 2 2603559064493035223.1593267795798361043. HINFO: read udp 10.244.0.2:38440->192.168.122.1:53: read: no route to host
[ERROR] plugin/errors: 2 2603559064493035223.1593267795798361043. HINFO: read udp 10.244.0.2:34275->192.168.122.1:53: read: no route to host
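
"no route to host" here and "Packet filtered" above both suggest packets being actively rejected rather than silently dropped, which on Fedora usually points at firewalld. A quick way to check what the firewall is doing (standard firewall-cmd invocations, run on each node) would be:

[admin@fedkubemaster ~]$ sudo firewall-cmd --state
[admin@fedkubemaster ~]$ sudo firewall-cmd --list-all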

I am trying to figure out whether there is any way to add the required route to the pods by default, but I am not familiar enough with pod networking to fix it myself.
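
Side note: since the host FQDNs are not DNS resolvable, fixing the route alone would not make nslookup succeed. One Kubernetes-native way to make fixed names resolvable inside a single pod is the hostAliases field, which injects static entries into the pod's /etc/hosts. A minimal sketch of the pod spec above, using this cluster's addresses:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  hostAliases:
  - ip: "192.168.122.161"
    hostnames:
    - "fedkubemaster"
  - ip: "192.168.122.27"
    hostnames:
    - "fedkubenode"
  # containers and restartPolicy unchanged from the manifest above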

Please suggest. Let me know if any other details are required.

Thanks, Sudhir

Score:0
cn flag
sb9

I was able to resolve my issue by temporarily disabling the firewalld service on both the master and the worker.

[admin@fedkubemaster ~]$ sudo systemctl stop firewalld.service
[admin@fedkubemaster ~]$ sudo systemctl disable firewalld.service

[admin@fedkubenode ~]$ sudo systemctl stop firewalld.service
[admin@fedkubenode ~]$ sudo systemctl disable firewalld.service

But what I still need to understand is why this issue occurred even though all the required ports listed in the Kubernetes documentation were open.
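
A likely explanation (an assumption, not verified on this cluster): the port list in the Kubernetes documentation covers the control-plane and kubelet components (6443, 2379-2380, 10250, and so on), but not the CNI plugin's own traffic. Flannel's default VXLAN backend encapsulates pod-to-pod traffic in UDP on port 8472, and firewalld by default also rejects forwarded and masqueraded pod-to-host traffic. A less drastic fix than disabling firewalld entirely would be to allow that traffic on both nodes, roughly:

$ sudo firewall-cmd --permanent --add-port=8472/udp
$ sudo firewall-cmd --permanent --add-masquerade
$ sudo firewall-cmd --reload

With those rules in place, the pod-to-master ping and CoreDNS's upstream queries to 192.168.122.1:53 should no longer be rejected.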
