Score:0

How to allow a TCP service (not HTTP) on a custom port inside Kubernetes


I have a container running an OPC server on port 4840. I am trying to configure my microk8s cluster to allow my OPC client to connect to port 4840. Here are my deployment and service:

(No namespace is defined here, but the manifests are deployed through Azure Pipelines, and that is where the namespace is set; the namespace for both the deployment and the service is "jawcrusher".)
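For context, the pipeline effectively runs something like the following (the file names here are just illustrative):

kubectl apply -f deployment.yml -f service.yml --namespace jawcrusher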

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
        - name: jawcrusher-config
          configMap:
            name: jawcrusher-config
      containers:
      - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
        name: jawcrusher
        ports:
          - containerPort: 4840
        volumeMounts:
          - name: jawcrusher-config
            mountPath: "/jawcrusher/config/config.yml"
            subPath: "config.yml"
      imagePullSecrets:
        - name: acrsecret

service.yml

apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
  - name: 4840-4840
    port: 4840
    protocol: TCP
    targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}
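Note that inside the cluster this ClusterIP service should already be reachable under the standard Kubernetes DNS name, so a sanity check from another pod could target:

jawcrusher-service.jawcrusher.svc.cluster.local:4840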

So now I want to tell microk8s to serve my OPC server on port 4840 externally. For example, if the DNS name of the server is microk8s.xxxx.internal, I would like to connect my OPC client to microk8s.xxxx.internal:4840.

I have followed this example: https://microk8s.io/docs/addon-ingress (scroll down a bit for the TCP part).

It says to update the TCP configuration for the ingress; this is how it looks after I updated it:

nginx-ingress-tcp-microk8s-conf:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  ......
data:
  '4840': jawcrusher/jawcrusher-service:4840
binaryData: {}
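For reference, this configmap can be edited in place with something like the following; the data entry maps external port 4840 to port 4840 of jawcrusher-service in the jawcrusher namespace:

microk8s kubectl edit configmap nginx-ingress-tcp-microk8s-conf -n ingress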

It also says to expose the port on the ingress controller. This is what the daemonset looks like after adding the new port:

nginx-ingress-microk8s-controller:

spec:
      containers:
        - name: nginx-ingress-microk8s
          image: registry.k8s.io/ingress-nginx/controller:v1.2.0
          args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
            - >-
              --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
            - >-
              --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
            - '--ingress-class=public'
            - ' '
            - '--publish-status-address=127.0.0.1'
          ports:
            - name: http
              hostPort: 80
              containerPort: 80
              protocol: TCP
            - name: https
              hostPort: 443
              containerPort: 443
              protocol: TCP
            - name: health
              hostPort: 10254
              containerPort: 10254
              protocol: TCP
            ####THIS IS WHAT I ADDED####
            - name: jawcrusher
              hostPort: 4840
              containerPort: 4840
              protocol: TCP
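The daemonset can be edited in place the same way as the configmap, e.g.:

microk8s kubectl edit daemonset nginx-ingress-microk8s-controller -n ingress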

After I updated the daemonset it restarted all the pods. The port seems to be open; if I run this command, it outputs:

Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840

ComputerName     : microk8s.xxxx.internal
RemoteAddress    : 10.161.64.124
RemotePort       : 4840
InterfaceAlias   : Ethernet 2
SourceAddress    : 10.53.226.55
TcpTestSucceeded : True

Before I made the changes it said TcpTestSucceeded : False.

But the OPC client cannot connect. It just says: Could not connect to server: BadCommunicationError.

I see an error message in the ingress daemonset pod's logs when I try to connect to the server with my OPC client:

2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111: Connection refused) while connecting to upstream, client: 10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0

10.53.225.232 is the client machine's IP address (where the OPC client runs) and 10.1.98.125 is the IP address of the pod running the OPC server.

So it seems the ingress has understood that external port 4840 should be proxied/forwarded to my service, which in turn points to the OPC server pod. But the connection to the upstream pod is refused, and I don't understand why.

I don't want to use a port-forward solution, but I tried it just to see if it would work:

kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'

This allows me to connect my OPC client to the server on port 5000. But I don't want to use port-forward; I want a more permanent solution.

Does anyone see if I made a mistake somewhere, or know how to do this in microk8s?

Update 1: This is what netstat outputs inside the pod:

root@jawcrusher-7787464c5-8w5ww:/jawcrusher# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:4840          0.0.0.0:*               LISTEN      1/python
tcp6       0      0 ::1:4840                :::*                    LISTEN      1/python
Comments:

Answerer: Hi Viktor Eriksson, welcome to S.F. Any time I see "${complex Ingress setup} doesn't work but `kubectl port-forward` does", it smells like the container is listening on 127.0.0.1, whereas all Pods must listen on 0.0.0.0. Can you reach :4840 on the Service (or the Pod's IP) from another Pod within the cluster?

Viktor Eriksson: I thought that if port-forward worked it meant that the container was set up correctly. No, at the moment I don't have that possibility. Is there a command I can run to check the bound interface? I am a Linux noob, but I ran this command; does it tell you anything useful? I updated the question with the netstat output.

Viktor Eriksson: Omg, I think it works now. I changed the server URL from localhost to 0.0.0.0 and it seems to work. Thank you so much!

Viktor Eriksson: Create a real answer so I can accept it :)

Answerer: I'm always happy when it's something simple :) I hope you enjoy your stay in Server Fault!
Score:1

In almost any circumstance where an Ingress or Service setup doesn't work but running kubectl port-forward does, the cause is that the container is listening on 127.0.0.1, whereas all Pods must listen on 0.0.0.0 to be reachable. The reason kubectl port-forward works every time is that it uses a copy of socat spawned in the same network namespace as the Pod's sandbox container, so its connectivity is not representative of the cluster's connectivity to the Pod.
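The netstat output in the question confirms it: the Python process is listening on 127.0.0.1:4840 (and ::1) only. As a minimal sketch, assuming the server is built on python-opcua (the question only shows that it is a Python process; any OPC UA stack has an equivalent endpoint setting), the fix is to bind the endpoint to 0.0.0.0 instead of localhost:

# Hypothetical python-opcua server; the endpoint URL is the part that matters.
import time
from opcua import Server

server = Server()
# Binding to 0.0.0.0 accepts connections on all of the pod's interfaces;
# "localhost" restricts the listener to loopback inside the pod.
server.set_endpoint("opc.tcp://0.0.0.0:4840/")
server.start()
try:
    while True:
        time.sleep(1)  # serve until the pod is stopped
finally:
    server.stop()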

A quick test is to kubectl exec into any other Pod in the cluster (or to ssh/ssm into a Node) and issue a curl (or wget) against the Pod's IP address to see whether it is reachable from elsewhere in the cluster. Since the Kubernetes design mandates that all Pods can connect to each other (modulo NetworkPolicy), it's a cheap test to find out whether anyone can reach that container, independent of any Ingress or Service configuration.
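For a raw TCP service like OPC UA, curl won't complete a request even when the port is reachable, so a plain TCP connect test is the better probe. Two illustrative examples (the pod name is a placeholder, and nc must exist in the target image; busybox ships a telnet applet):

kubectl exec -it <some-other-pod> -- nc -vz 10.1.98.125 4840
kubectl run tcp-test --rm -it --image=busybox --restart=Never -- telnet 10.1.98.125 4840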
