How to debug why an Ingress controller in MicroK8s is pointing to the wrong service?

I have built a little three-node Kubernetes cluster at home, for learning purposes. Each node has 16 GB of RAM and runs Ubuntu Server and MicroK8s. I have set up a leader (arran) and two followers (nikka and yamazaki).

root@arran:/home/me# microk8s kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
arran      Ready    <none>   5d3h    v1.26.4   192.168.50.251   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
nikka      Ready    <none>   4d14h   v1.26.4   192.168.50.74    <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
yamazaki   Ready    <none>   3d16h   v1.26.4   192.168.50.135   <none>        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15

Here is the status of the cluster, with the ingress and dashboard addons manually enabled. You can see it has switched into high-availability (HA) mode:

root@arran:/home/me# microk8s status
microk8s is running
high-availability: yes
  datastore master nodes: 192.168.50.251:19001 192.168.50.74:19001 192.168.50.135:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dns                  # (core) CoreDNS
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation

Here are my running pods, which come from my manifest (see later):

root@arran:/home/me# microk8s kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS      AGE    IP             NODE       NOMINATED NODE   READINESS GATES
hello-world-app                    1/1     Running   1 (14h ago)   47h    10.1.134.199   yamazaki   <none>           <none>
my-pod                             1/1     Running   2 (14h ago)   5d1h   10.1.150.208   arran      <none>           <none>

Here are the services at present:

root@arran:/home/me# microk8s kubectl get services -o wide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE     SELECTOR
kubernetes            ClusterIP   10.152.183.1     <none>        443/TCP         5d3h    <none>
nginx-service         NodePort    10.152.183.120   <none>        80:30000/TCP    2d12h   app.kubernetes.io/name=hello-world-app
hello-world-service   NodePort    10.152.183.205   <none>        80:30032/TCP    47h     app.kubernetes.io/name=hello-world-app
dashboard-service     NodePort    10.152.183.237   <none>        443:32589/TCP   47h     app.kubernetes.io/name=kubernetes
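
One thing I notice: nginx-service and hello-world-service share the same selector, so they presumably resolve to the same pod, and I am not certain the dashboard-service selector matches any pod at all. Listing the endpoints should show which pod IPs actually back each service:

microk8s kubectl get endpoints -o wide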

I suspect the problem is in the manifest, which I have assembled in a somewhat copy-and-paste fashion from the Kubernetes and MicroK8s documentation:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world-app
  labels:
    app.kubernetes.io/name: hello-world-app
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
          protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app.kubernetes.io/name: hello-world-app
  ports:
  - port: 80
    targetPort: 80
  type: NodePort

---

# Not sure this will work - do we need a NodePort to the dashboard?
apiVersion: v1
kind: Service
metadata:
  name: dashboard-service
spec:
  selector:
    app.kubernetes.io/name: kubernetes
  ports:
  - port: 443
    targetPort: 443
  type: NodePort

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443

Now, I have a "hello world" app: I have given it a NodePort service and exposed that via the ingress addon, and it is available at http://192.168.50.251/ (port 80). However, when I try to do the same for the Kubernetes dashboard, by adding a service and an Ingress route (port 443), https://192.168.50.251/ serves "hello world" and not the dashboard as I intend.
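
One possible cause I can see: both Ingress resources claim path / with no host field, so the controller presumably has to pick a single backend for /. If that is the issue, splitting the rules by host might look like this (a sketch; dashboard.local is a made-up hostname that I would map to 192.168.50.251 in /etc/hosts, and the backend-protocol annotation is my assumption about how to tell the NGINX controller to speak HTTPS to the dashboard backend):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    # The dashboard serves HTTPS itself, so the controller must proxy with TLS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: dashboard.local   # only requests for this hostname hit the dashboard
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dashboard-service
            port:
              number: 443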

The single-file manifest has been applied in full with `microk8s kubectl apply -f manifest.yml`.
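
To see what the controller has actually resolved, I believe these are the relevant commands (describe lists each rule's backend and its endpoints; the MicroK8s ingress addon runs, as far as I know, as the nginx-ingress-microk8s-controller daemonset in the ingress namespace):

microk8s kubectl describe ingress http-ingress dashboard-ingress
microk8s kubectl logs -n ingress daemonset/nginx-ingress-microk8s-controller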

What can I try next?

Comment from halfer: I have been advised that NodePort services expose a port on the nodes directly rather than acting as a cluster-wide entry point, and that I am unlikely to need them here. So I have commented out those two sections in the manifest and re-applied it.

Comment from halfer: Also, I think the backend service for the dashboard Ingress should not be `kubernetes`; it should be `dashboard-service`. I have applied that change too, and it has not changed anything.
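
For clarity, the corrected backend stanza in dashboard-ingress reads:

        backend:
          service:
            name: dashboard-service   # was: kubernetes
            port:
              number: 443
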
Answer:

I have solved this in a very different way, one that does not need a manifest at all: MicroK8s ships helper commands for exactly this purpose. I needed two sessions on the K8s leader server.

In the first session, I ran this:

root@arran:/home/me# microk8s dashboard-proxy
Checking if Dashboard is running.
Infer repository core for addon dashboard
Waiting for Dashboard to come up.
Trying to get token from microk8s-dashboard-token
Waiting for secret token (attempt 0)
Dashboard will be available at https://127.0.0.1:10443
Use the following token to login:
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJ1US1DZEVmUjM2ZWZZcjg5UTh5eXdQUFpLYnNpMVV1YWZPM0o2ZEEtQlUifQ.eyJpc3JiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJtaWNyb2s4cy1kYXNoYm9hcmQtdG9rZW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA1M2Y3ZThhLTFiNWUtNDFkZi1hMmI0LWFlNzY3M2ZlZmMwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpkZWZhdWx0In0.EL1IfT8lh1gT7VKYHrWzZlNLhxP8kWKzZPdxzi7IL2Il9zL4Pg3ZMI5YmCv5s-IrVIKmUfFGsHu4G30gcqmg0FdkBhPkBSOvmfnB77mGjCMGSaIToHIySI_9HBB3Ea3i91bx_n9TJC3DVIKtEVdLx3p73_ygQBUmZ0QUs4MUf1mAIBkL7ltq58y9CUr88nuLWnQ2oUiIdtRpnz4Tw2V8Bin5rWQj2af_PeVKGfxBJBTsmmUADdby8vjZ-GTWCTcCZ3IEbLTx9jsWsf9qb2KYohnCfXBJPx8WbGw8Hkyvm3DjrjtzfZyiW4rPLTD7v8Oo0GimUrpBm6hZWmTd8rixQg

And in the second session I ran this:

root@arran:/home/me# microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
Forwarding from 0.0.0.0:8080 -> 8443

The Kubernetes dashboard is then exposed on the leader host at https://192.168.50.251:8080/. From there, paste in the lengthy token from the first session to sign in. (The forward reports 8080 -> 8443 because the service's port 443 targets port 8443, where the dashboard container actually listens.)
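
Note that kubectl port-forward only lasts as long as the terminal session. If I wanted something persistent, one option (my own assumption, not something the MicroK8s docs prescribe) would be to switch the bundled dashboard service to a NodePort and use whatever port it is assigned:

microk8s kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
microk8s kubectl -n kube-system get service kubernetes-dashboard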
