Score:0

Kubernetes pods running on the same node can't reach each other through a Service

I am stuck trying to call from one pod to another through a Service when both pods are on the same node.

To explain:

  • Node1 - service1a (pod1A), service1b (pod1B)
  • Node2 - service2a (pod2A)

When I test connectivity (commands sketched after this list):

  • PING pod1A -> pod1B OK
  • PING pod1A -> pod2A OK
  • CURL pod1A -> service2A OK
  • CURL pod1A -> service1B TIMEOUT
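
For reference, this is roughly how the checks above were run from inside pod1A (a sketch; the pod and service names are the renamed ones from below, and the tools available inside the container may differ):

kubectl -n dev exec -it pod1A-database-596f76d8b5-6bdqv -- sh
# inside the pod:
ping 10.244.2.63                  # pod1B (same node)      -> OK
ping 10.244.3.104                 # pod2A (other node)     -> OK
curl -v http://service2a:8080     # service on other node  -> OK
curl -v telnet://service1b:5432   # service on same node   -> timeout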

I have spent several days changing parts of the configuration and searching for the same issue on the internet, but no luck.

The only similar issue I found was here: https://stackoverflow.com/questions/64073696/pods-running-on-the-same-node-cant-access-to-each-other-through-service , but that one was solved by moving from IPVS to iptables. I am already using iptables.
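
To double-check which proxy mode kube-proxy actually runs with (a sketch; the exact log wording depends on the kube-proxy version):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"   # "" means the default, iptables
kubectl -n kube-system logs kube-proxy-wxwd7 | grep -i proxier           # e.g. "Using iptables Proxier"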

PODS (names were changed):

NAMESPACE              NAME                                            READY   STATUS    RESTARTS       AGE    IP              NODE          NOMINATED NODE   READINESS GATES
dev                    pod/pod1B-5d595bf69-8fgsm                       1/1     Running   0              16h    10.244.2.63     kube-node2    <none>           <none>
dev                    pod/pod1A-database-596f76d8b5-6bdqv             1/1     Running   0              16h    10.244.2.65     kube-node2    <none>           <none>
dev                    pod/pod2A-dbb8fd4d-xv54n                        1/1     Running   1              15h    10.244.3.104    kube-node3    <none>           <none>
kube-system            pod/coredns-6d4b75cb6d-6b2cn                    1/1     Running   4 (50d ago)    292d   10.244.0.10     kube-master   <none>           <none>
kube-system            pod/coredns-6d4b75cb6d-6m7q2                    1/1     Running   4 (50d ago)    292d   10.244.0.11     kube-master   <none>           <none>
kube-system            pod/etcd-kube-master                            1/1     Running   2 (50d ago)    50d    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/kube-apiserver-kube-master                  1/1     Running   2 (50d ago)    50d    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/kube-controller-manager-kube-master         1/1     Running   1 (50d ago)    50d    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/kube-flannel-ds-bwkjg                       1/1     Running   0              62s    172.31.45.210   kube-node3    <none>           <none>
kube-system            pod/kube-flannel-ds-g9v9m                       1/1     Running   0              66s    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/kube-flannel-ds-hljj5                       1/1     Running   0              30s    172.31.42.77    kube-node2    <none>           <none>
kube-system            pod/kube-flannel-ds-k4zfw                       1/1     Running   0              65s    172.31.43.77    kube-node1    <none>           <none>
kube-system            pod/kube-proxy-68k5n                            1/1     Running   0              35m    172.31.45.210   kube-node3    <none>           <none>
kube-system            pod/kube-proxy-lb6s9                            1/1     Running   0              35m    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/kube-proxy-vggwk                            1/1     Running   0              35m    172.31.43.77    kube-node1    <none>           <none>
kube-system            pod/kube-proxy-wxwd7                            1/1     Running   0              34m    172.31.42.77    kube-node2    <none>           <none>
kube-system            pod/kube-scheduler-kube-master                  1/1     Running   1 (50d ago)    50d    172.31.42.90    kube-master   <none>           <none>
kube-system            pod/metrics-server-55d58b59c9-569p5             1/1     Running   0              15h    10.244.2.69     kube-node2    <none>           <none>
kubernetes-dashboard   pod/dashboard-metrics-scraper-8c47d4b5d-2vxfj   1/1     Running   0              16h    10.244.2.64     kube-node2    <none>           <none>

SERVICES (names were changed):

NAMESPACE              NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                           AGE    SELECTOR
default                service/kubernetes                  ClusterIP      10.96.0.1        <none>         443/TCP                                           292d   <none>
dev                    service/pod1B(service1b)            ClusterIP      10.102.52.69     <none>         5432/TCP                                          42h    app=database
dev                    service/pod2A(service2a)            ClusterIP      10.105.208.135   <none>         8080/TCP                                          42h    app=keycloak
dev                    service/pod1A(service1a)            ClusterIP      10.111.140.245   <none>         5432/TCP                                          42h    app=keycloak-database
kube-system            service/kube-dns                    ClusterIP      10.96.0.10       <none>         53/UDP,53/TCP,9153/TCP                            292d   k8s-app=kube-dns
kube-system            service/metrics-server              ClusterIP      10.111.227.187   <none>         443/TCP                                           285d   k8s-app=metrics-server
kubernetes-dashboard   service/dashboard-metrics-scraper   ClusterIP      10.110.143.2     <none>         8000/TCP                                          247d   k8s-app=dashboard-metrics-scraper

kube-proxy runs in iptables mode and the services are of type ClusterIP (I also have 2-3 NodePort services, but I don't think that is the problem).

kube-proxy DaemonSet configuration (ConfigMap):

apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.10.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://172.31.42.90:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:ebcfa3923c1228031a5b824f2edca518edc4bd49fd07cedeffa371084cba342b
  creationTimestamp: "2022-07-03T19:28:14Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "40174591"
  uid: cfadfa22-ed43-4a3f-9897-25f605ebb8b9

Flannel DaemonSet configuration (ConfigMap):

apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"cni-conf.json":"{\n  \"name\": \"cbr0\",\n  \"cniVersion\": \"0.3.1\",\n  \"plugins\": [\n    {\n      \"type\": \"flannel\",\n      \"delegate\": {\n        \"hairpinMode\": true,\n        \"isDe$  creationTimestamp: "2022-07-03T20:34:32Z"
  labels:
    app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-system
  resourceVersion: "40178136"
  uid: ccb81719-7013-4f0b-8c20-ab9d5c3d5f8e
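
Since flannel is involved, a common sanity check when only same-node service traffic fails is to compare each node's bridge and vxlan interfaces against the subnet flannel allocated (a sketch, run on the node itself; interface names can differ between setups):

cat /run/flannel/subnet.env     # FLANNEL_NETWORK / FLANNEL_SUBNET for this node
ip -4 addr show cni0            # the bridge address should sit inside FLANNEL_SUBNET
ip -4 addr show flannel.1       # the vxlan interface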

iptables NAT rules (kube-system and dev):

iptables -L -t nat | grep kube-system
KUBE-MARK-MASQ  all  --  ip-10-244-2-69.us-east-2.compute.internal  anywhere             /* kube-system/metrics-server:https */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/metrics-server:https */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-11.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:dns-tcp */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/kube-dns:dns-tcp */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-10.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:metrics */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/kube-dns:metrics */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-10.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:dns-tcp */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/kube-dns:dns-tcp */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-10.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:dns */
DNAT       udp  --  anywhere             anywhere             /* kube-system/kube-dns:dns */ udp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-11.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:dns */
DNAT       udp  --  anywhere             anywhere             /* kube-system/kube-dns:dns */ udp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-0-11.us-east-2.compute.internal  anywhere             /* kube-system/kube-dns:metrics */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/kube-dns:metrics */ tcp DNAT [unsupported revision]
KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  anywhere             ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  anywhere             ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SVC-Z4ANX4WAEWEBLCTM  tcp  --  anywhere             ip-10-111-227-187.us-east-2.compute.internal  /* kube-system/metrics-server:https cluster IP */ tcp dpt:https
KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  anywhere             ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SEP-OP4AXEAS4OXHBEQX  all  --  anywhere             anywhere             /* kube-system/kube-dns:dns-tcp -> 10.244.0.10:53 */ statistic mode random probability 0.50000000000
KUBE-SEP-A7YQ4MY4TZII3JTK  all  --  anywhere             anywhere             /* kube-system/kube-dns:dns-tcp -> 10.244.0.11:53 */
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SEP-HJ7EWOW62IX6GL6R  all  --  anywhere             anywhere             /* kube-system/kube-dns:metrics -> 10.244.0.10:9153 */ statistic mode random probability 0.50000000000
KUBE-SEP-ZJHOSXJEKQGYJUBC  all  --  anywhere             anywhere             /* kube-system/kube-dns:metrics -> 10.244.0.11:9153 */
KUBE-MARK-MASQ  udp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-96-0-10.us-east-2.compute.internal  /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SEP-R7EMXN5TTQQVP4UW  all  --  anywhere             anywhere             /* kube-system/kube-dns:dns -> 10.244.0.10:53 */ statistic mode random probability 0.50000000000
KUBE-SEP-VR6VIIG2A6524KLY  all  --  anywhere             anywhere             /* kube-system/kube-dns:dns -> 10.244.0.11:53 */
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-111-227-187.us-east-2.compute.internal  /* kube-system/metrics-server:https cluster IP */ tcp dpt:https
KUBE-SEP-6BOUBB2FEQTN2GDB  all  --  anywhere             anywhere             /* kube-system/metrics-server:https -> 10.244.2.69:4443 */
iptables -L -t nat | grep dev
KUBE-MARK-MASQ  all  --  ip-10-244-3-104.us-east-2.compute.internal  anywhere             /* dev/pod2A:pod2A */
DNAT       tcp  --  anywhere             anywhere             /* dev/pod2A:pod2A */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-2-63.us-east-2.compute.internal  anywhere             /* dev/pod1B:pod1B */
DNAT       tcp  --  anywhere             anywhere             /* dev/pod1B:pod1B */ tcp DNAT [unsupported revision]
KUBE-MARK-MASQ  all  --  ip-10-244-2-65.us-east-2.compute.internal  anywhere             /* dev/pod1A:pod1A */
DNAT       tcp  --  anywhere             anywhere             /* dev/pod1A:pod1A */ tcp DNAT [unsupported revision]
KUBE-SVC-MI7BJVF4L3EWWCLA  tcp  --  anywhere             ip-10-105-208-135.us-east-2.compute.internal  /* dev/pod2A:pod2A cluster IP */ tcp dpt:http-alt
KUBE-SVC-S2FASJERAWCYNV26  tcp  --  anywhere             ip-10-111-140-245.us-east-2.compute.internal  /* dev/pod1A:pod1A cluster IP */ tcp dpt:postgresql
KUBE-SVC-5JHIIG3NJGZTIC4I  tcp  --  anywhere             ip-10-102-52-69.us-east-2.compute.internal  /* dev/pod1B:pod1B cluster IP */ tcp dpt:postgresql
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-102-52-69.us-east-2.compute.internal  /* dev/pod1B:pod1B cluster IP */ tcp dpt:postgresql
KUBE-SEP-FOQDGOYPAUSJGXYE  all  --  anywhere             anywhere             /* dev/pod1B:pod1B -> 10.244.2.63:5432 */
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-105-208-135.us-east-2.compute.internal  /* dev/pod2A:pod2A cluster IP */ tcp dpt:http-alt
KUBE-SEP-AWG5CHYLOHV7OCEH  all  --  anywhere             anywhere             /* dev/pod2A:pod2A -> 10.244.3.104:8080 */
KUBE-MARK-MASQ  tcp  -- !ip-10-244-0-0.us-east-2.compute.internal/16  ip-10-111-140-245.us-east-2.compute.internal  /* dev/pod1A:pod1A cluster IP */ tcp dpt:postgresql
KUBE-SEP-YDMKERDDJRZDWWVM  all  --  anywhere             anywhere             /* dev/pod1A:pod1A -> 10.244.2.65:5432 */
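
To see whether the service1b DNAT chain is hit at all when the curl from pod1A times out, the packet counters on kube-node2 can help (a sketch; the chain names are taken from the output above):

iptables -t nat -L KUBE-SVC-5JHIIG3NJGZTIC4I -v -n   # service1b (pod1B) cluster IP chain
iptables -t nat -L KUBE-SEP-FOQDGOYPAUSJGXYE -v -n   # its endpoint chain (10.244.2.63:5432)
# repeat the curl from pod1A and check whether the pkts counters increase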

Can anyone help me figure out why I am unable to connect from a pod to a service on the same node?

Michal: Looks like there were several issues at once. 1. Different versions of kubelet (1.24.3, 1.24.5, ...) and the server (1.26.x). 2. After upgrading everything to the same 1.26.9, a change on kube-proxy was also needed ([thx github](https://github.com/k0sproject/k0s/issues/2849)).

You mark a question as "solved" by selecting an answer as correct. Write your own answer if you've solved your problem. Alternatively, you can delete the question if you don't think it's worth it to write up the answer.
Score:1

Solved, but maybe someone else will find the solution here. The issue was actually several problems at once.

  1. We had different versions of the server, client, and kubelets on the workers (1.24.3, 1.24.5, ... even 1.26.x).
  2. The masqueradeAll setting on kube-proxy was not correct for version 1.26.

We upgraded the control plane and all nodes to the same version, in our case with kubeadm, going 1.24.x -> 1.25.9 -> 1.26.4, and also upgraded the Ubuntu OS (apt upgrade).
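
Roughly, each minor-version step looked like this (a sketch assuming kubeadm with the apt packages; the exact package versions and repository depend on your setup):

# control-plane node, one minor version at a time
apt-get update && apt-get install -y kubeadm=1.25.9-00
kubeadm upgrade plan
kubeadm upgrade apply v1.25.9
apt-get install -y kubelet=1.25.9-00 kubectl=1.25.9-00
systemctl daemon-reload && systemctl restart kubelet

# each worker node
kubectl drain <node> --ignore-daemonsets
apt-get install -y kubeadm=1.25.9-00
kubeadm upgrade node
apt-get install -y kubelet=1.25.9-00 kubectl=1.25.9-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>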

After that, the cluster became stable again and all nodes were correctly attached (kubectl get nodes).

The final change was needed because of the 1.25 -> 1.26 upgrade; see mweissdigchg's answer in the GitHub discussion: "... experienced the same issues after upgrading from v1.25 to v1.26.

... turned out that kube-proxy iptables configuration masqueradeAll: true rendered our services unreachable from pods on other nodes. It seems like the default configuration changed from masqueradeAll: false to masqueradeAll: true ..."
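
For anyone hitting the same thing, the change itself is just an edit to the kube-proxy ConfigMap followed by a kube-proxy restart (a sketch; per the quoted issue, masqueradeAll: false is what worked there, so check what your config currently contains):

kubectl -n kube-system edit configmap kube-proxy
# in config.conf, under the iptables section, set the value explicitly, e.g.:
#   iptables:
#     masqueradeAll: false
kubectl -n kube-system rollout restart daemonset kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy   # label as used by a kubeadm install; wait for the new pods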
