
How to set up port-forwarding in MicroK8S across a cluster (ideally without a manifest)?

I am learning K8S using MicroK8S. I have a three-node cluster, each node having 16G of RAM. The cluster has entered HA mode automatically. The cluster sits on my home LAN.

Here are my nodes:

name      IP              colour  role
arran     192.168.50.251  yellow  leader
nikka     192.168.50.74   blue    worker
yamazaki  192.168.50.135  green   worker

Set-up

I have a web app running on a pod in the cluster. It responds on port 9090. Here is how I got it running.

I have an image on a development laptop that I turn into a tarball:

docker save k8s-workload > k8s-workload.docker.tar

I then send that tarball to the leader of the cluster:

scp k8s-workload.docker.tar 192.168.50.251:/home/myuser/

I then sideload this image into all nodes on the cluster:

root@arran:/home/myuser# microk8s images import < k8s-workload.docker.tar
Pushing OCI images to 192.168.50.251:25000
Pushing OCI images to 192.168.50.135:25000
Pushing OCI images to 192.168.50.74:25000

I then verify the MIME type and the checksum of the image, on every node, as I had some problems with that:

root@arran:/home/myuser# microk8s ctr images list | grep workload
docker.io/library/k8s-workload:latest   application/vnd.docker.distribution.manifest.v2+json    sha256:725b...582b 103.5 MiB linux/amd64

Finally I run the workload, ensuring that K8S does not try to pull an image (pulling is unnecessary here, but the default policy is to try anyway):

root@arran:/home/myuser# microk8s kubectl run k8s-workload --image=k8s-workload --image-pull-policy='Never' --port=9090
pod/k8s-workload created

I then confirm that this was successful, from the leader node:

root@arran:/home/myuser# microk8s kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
k8s-workload   1/1     Running   0          83m   10.1.134.216   yamazaki   <none>           <none>

Running the app

In order to access the web app from my development laptop, I start by exposing the app on just one node. The pod is running on the Yamazaki node, so initially I run this from that node:

root@yamazaki:/home/myuser# microk8s kubectl port-forward pod/k8s-workload 9090 --address='0.0.0.0'
Forwarding from 0.0.0.0:9090 -> 9090

This works fine.
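For example, from a LAN machine (the URL path here is an assumption; substitute whatever your app actually serves):

```shell
# Hit the forwarded port on the node running the port-forward
# (IP taken from the node table above)
curl http://192.168.50.135:9090/
```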

Problem

I would like to access the app by making a request to any node in the cluster, and not just this one. Currently the app only runs on one node and I would like it to work even if I make a web request to another node.

I know that K8S has the internal networking to do what I want. For example, if I run the port-forward command on Arran (and kill the same on Yamazaki) then the app will still work, even though the pod is running on Yamazaki only. But I can still only access the app from one IP (Arran, where the port forwarder is running).

Of course, I could do what I want by running the port-forwarder in an SSH session on every node. But I'd like to run something that survives after all SSH sessions are killed.

Ideally I would like to do this with a console command, but I wonder if I will need a YAML manifest for this. From my research so far, I think I need a ClusterIP.
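For what it's worth, a service can be created from the console without writing a manifest, using kubectl expose. This is only a sketch of what such a command might look like; it relies on the run=k8s-workload label that kubectl run attaches to the pod, and the service name is made up here:

```shell
# Create a NodePort service for the existing pod, no YAML needed.
# --type=NodePort makes every node listen on an allocated high port
# (30000-32767 by default) and route traffic to the pod.
microk8s kubectl expose pod k8s-workload \
  --type=NodePort \
  --port=9090 \
  --target-port=9090 \
  --name=k8s-workload-svc
```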


Update/research 1

I'm reading the K8S manual page on bare-metal load balancing. Part of the page recommended that the user apply a 646-line config file, which would be pretty counterproductive for a learning scenario.

This sample seems more sensible, but it is not clear from it how the load balancer is being instructed to run on all hosts.

Update/research 2

I also found this resource specifically for MicroK8S, which recommends an ingress addon. Unfortunately this requires my workload to be set up as a Service, and for now I only have a Pod, so I think this is out.

Answer:

Let's use a NodePort service. It is an abstract way to expose an application running on a set of Pods as a network service: it routes incoming traffic on a port of each node to your Service.

We can define it like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
  selector:
    run: k8s-workload

Then we apply the manifest, and it should work:

microk8s kubectl apply -f my-service.yaml
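To check that the service has picked up the pod, one can inspect its endpoints and then try the node port from a LAN machine (node IPs are from the question; the URL path is an assumption):

```shell
# The service should list the pod's IP under ENDPOINTS;
# an empty list would mean the selector matched nothing.
microk8s kubectl get endpoints my-service

# Any node's LAN IP should now answer on the nodePort
curl http://192.168.50.251:30090/
```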

halfer:
This looks good, thank you. I've tried it - but I think there is a jigsaw puzzle piece missing. I've requested the web page at the LAN IP address of my three nodes, with the 9090 port in each case, and all connections are refused (I've temporarily taken down my firewall on my laptop). I've checked all cluster nodes with netstat - nothing is listening on port 9090.
halfer:
I have a suspicion that it is listening on the wrong IP address - I want it to listen to `192.168.50.*`, but the service description mentions IPs of the form `10.152.183.*` (I assume that is a K8S virtual network). I wonder if I can specify that it is to listen to `0.0.0.0`, which seems to work fine for the port forwarding script.
halfer:
Incidentally, what is the significance of the 30090 port?
halfer:
This may be the puzzle piece: I'm trying to add `externalIPs: [192.168.50.251]` under the `spec` key, and it seems to have made a difference to the `service` configuration. But Netstat still reports that 9090 is not bound anywhere in the host.
halfer:
Ooh! I think my Netstat is faulty. With the addition of the `externalIPs` we now have lift-off - I can now access the web app from a LAN machine. Interestingly I have specified just the leader IP here (Arran), which now responds to both 9090 and 30090; the other two only respond to 30090. It looks like 30090 is a proxy port that will direct network traffic to the leader (as that is where the NodePort is), and then K8S redirects the traffic again to Yamazaki (as that is where the workload is). Phew!
halfer:
(For the historical record: I used [this question](https://stackoverflow.com/a/57406008) to give me some syntax notes on `externalIPs`).
halfer:
Ooh, I think I might need to add an amendment to one of my prior comments. If I access the non-workload nodes via HTTP (Arran and Nikka) then the app sees the `REMOTE_ADDR` in the `10.1.*` virtual address space; if I access the workload node via HTTP (Yamazaki) then the app sees a `REMOTE_ADDR` of the LAN IP, `192.168.50.135`. So it looks like it is smart enough to route it directly, rather than hopping to the external IP as an intermediate step.
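As an aside, this source-address behaviour is standard for NodePort: traffic arriving at a node that does not host the pod is forwarded, and source-NATted, on its way to the node that does. If preserving the client IP mattered, Kubernetes offers `externalTrafficPolicy: Local`, at the cost of non-pod nodes no longer answering on the node port. A sketch, not something tested in this thread:

```shell
# Preserve the client source IP: only nodes actually running the pod
# will serve the nodePort, but the app will see the real LAN address.
microk8s kubectl patch service my-service \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```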
Saxtheowl:
When you tried to access your service on port 9090 it did not work, because that port is not exposed outside your cluster; when you tried port 30090 it worked, because that is the nodePort you set in your service manifest.
halfer:
Righto, thanks! I need to make another amendment - the `externalIPs` key is not necessary. I thought I'd tried `30090` without the external IPs, but maybe not - that key only serves to expose the `9090` port in addition. I have removed the `externalIPs`, and now 9090 no longer responds on the cluster leader, but `30090` works on any cluster member. This is what I was after.