I have built a little three-node Kubernetes cluster at home, for learning purposes. Each node has 16GB of RAM and runs Ubuntu Server and MicroK8s. I have set up a leader (arran) and two followers (nikka and yamazaki).
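For context, the followers were joined in the usual MicroK8s way (a sketch; the real join command includes a one-time token that add-node prints):

root@arran:/home/me# microk8s add-node
# ...prints a join command containing a one-time token, along the lines of:
root@nikka:/home/me# microk8s join 192.168.50.251:25000/&lt;token&gt;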
root@arran:/home/me# microk8s kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
arran      Ready    &lt;none&gt;   5d3h    v1.26.4   192.168.50.251   &lt;none&gt;        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
nikka      Ready    &lt;none&gt;   4d14h   v1.26.4   192.168.50.74    &lt;none&gt;        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
yamazaki   Ready    &lt;none&gt;   3d16h   v1.26.4   192.168.50.135   &lt;none&gt;        Ubuntu 22.04.2 LTS   5.15.0-71-generic   containerd://1.6.15
Here is the status of the cluster, with ingress and dashboard manually enabled. You can see it has switched into HA mode:
root@arran:/home/me# microk8s status
microk8s is running
high-availability: yes
datastore master nodes: 192.168.50.251:19001 192.168.50.74:19001 192.168.50.135:19001
datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dns                  # (core) CoreDNS
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
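For completeness, the two non-default add-ons were enabled with the standard commands:

root@arran:/home/me# microk8s enable ingress
root@arran:/home/me# microk8s enable dashboard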
Here are my running pods, which were created from my manifest (shown below):
root@arran:/home/me# microk8s kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS      AGE    IP             NODE       NOMINATED NODE   READINESS GATES
hello-world-app   1/1     Running   1 (14h ago)   47h    10.1.134.199   yamazaki   &lt;none&gt;           &lt;none&gt;
my-pod            1/1     Running   2 (14h ago)   5d1h   10.1.150.208   arran      &lt;none&gt;           &lt;none&gt;
Here are the services at present:
root@arran:/home/me# microk8s kubectl get services -o wide
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE     SELECTOR
kubernetes            ClusterIP   10.152.183.1     &lt;none&gt;        443/TCP         5d3h    &lt;none&gt;
nginx-service         NodePort    10.152.183.120   &lt;none&gt;        80:30000/TCP    2d12h   app.kubernetes.io/name=hello-world-app
hello-world-service   NodePort    10.152.183.205   &lt;none&gt;        80:30032/TCP    47h     app.kubernetes.io/name=hello-world-app
dashboard-service     NodePort    10.152.183.237   &lt;none&gt;        443:32589/TCP   47h     app.kubernetes.io/name=kubernetes
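To narrow things down, the NodePort services can be probed directly, bypassing the ingress entirely (a sketch using the ports from the table above; -k skips certificate verification, since the dashboard serves a self-signed cert):

root@arran:/home/me# curl http://192.168.50.251:30032/
root@arran:/home/me# curl -k https://192.168.50.251:32589/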
I suspect the problem is in the manifest, which I have built in a bit of a copy-and-paste fashion from the Kubernetes and MicroK8s manuals:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world-app
  labels:
    app.kubernetes.io/name: hello-world-app
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app.kubernetes.io/name: hello-world-app
  ports:
    - port: 80
      targetPort: 80
  type: NodePort
---
# Not sure this will work - do we need a NodePort to the dashboard?
apiVersion: v1
kind: Service
metadata:
  name: dashboard-service
spec:
  selector:
    app.kubernetes.io/name: kubernetes
  ports:
    - port: 443
      targetPort: 443
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes
                port:
                  number: 443
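In case it is useful, the ingress objects this creates can be inspected as follows; I can post the output if it would help:

root@arran:/home/me# microk8s kubectl get ingress
root@arran:/home/me# microk8s kubectl describe ingress dashboard-ingress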
To recap: I have a "hello world" app, I have given it a NodePort, and I have exposed that using the Ingress add-on. It is now available at http://192.168.50.251/ (port 80). I have tried to do the same for the Kubernetes dashboard, by adding a NodePort service and an ingress route (port 443), but https://192.168.50.251/ points to "hello world" and not the dashboard as I intended.
The single-file manifest has been applied in full with microk8s kubectl apply -f manifest.yml.

What can I try next?