First of all, I think there is no nfs addon in MicroK8s. On my Ubuntu 22.04.1 server, the available addons are listed below:
$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
rbac # (core) Role-Based Access Control for authorisation
disabled:
cert-manager # (core) Cloud native certificate management
community # (core) The community addons repository
gpu # (core) Automatic enablement of Nvidia CUDA
host-access # (core) Allow Pods connecting to Host services smoothly
hostpath-storage # (core) Storage class; allocates storage from host directory
kube-ovn # (core) An advanced network fabric for Kubernetes
mayastor # (core) OpenEBS MayaStor
metallb # (core) Loadbalancer for your Kubernetes cluster
observability # (core) A lightweight observability stack for logs, traces and metrics
prometheus # (core) Prometheus operator for monitoring and logging
registry # (core) Private image registry exposed on localhost:32000
storage # (core) Alias to hostpath-storage add-on, deprecated
For the NFS implementation, you need to have an NFS server configured and to create a storage driver (CSI driver plus StorageClass) on the cluster.
Requirements
You should already have an NFS server outside the cluster. You can use the host itself, or spawn another VM, and configure the NFS server as follows:
# Assuming you use Ubuntu VM
# Install the NFS kernel server
sudo apt-get install nfs-kernel-server
# Create a directory to be used for NFS
sudo mkdir -p /srv/nfs
sudo chown nobody:nogroup /srv/nfs
sudo chmod 0777 /srv/nfs
Then edit the /etc/exports file. Make sure that the IP addresses of all your MicroK8s nodes are able to mount this share. For example, to allow all IP addresses in the 10.0.0.0/24 subnet:
sudo mv /etc/exports /etc/exports.bak
echo '/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee /etc/exports
Finally, restart the NFS server:
sudo systemctl restart nfs-kernel-server
Please adjust these values to match your own configuration.
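Optionally, you can sanity-check that the export is visible from one of your MicroK8s nodes. This is just a quick verification sketch; it assumes the nfs-common package, which provides showmount:
# On a MicroK8s node (showmount comes from the nfs-common package)
sudo apt-get install nfs-common
showmount -e <nfs-server-ip>   # should list /srv/nfs and the allowed subnet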
Installation steps on the cluster:
1. Install the CSI driver for NFS
We will use the upstream NFS CSI driver. First, we will deploy the NFS provisioner using the official Helm chart.
Enable the Helm3 addon (if not already enabled) and add the repository for the NFS CSI driver:
microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
Then, install the Helm chart under the kube-system namespace with:
microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
--namespace kube-system \
--set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
After deploying the Helm chart, wait for the CSI controller and node pods to come up using the following kubectl command …
microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system
… which, once successful, will produce output similar to:
pod/csi-nfs-controller-7bd5678cbc-nc6l2 condition met
pod/csi-nfs-node-lsn6n condition met
At this point, you should also be able to list the available CSI drivers in your Kubernetes cluster …
microk8s kubectl get csidrivers
… and see nfs.csi.k8s.io in the list:
NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE
nfs.csi.k8s.io false false false <unset> false Persistent 23h
2. Create a StorageClass for NFS
Next, we will need to create a Kubernetes Storage Class that uses the nfs.csi.k8s.io CSI driver. Assuming you have configured an NFS share /srv/nfs and the address of your NFS server is 10.0.0.1, create the following file:
# sc-nfs.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.1
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
Note: the last line of the above YAML indicates a specific version of NFS. This should match the version of the NFS server being used. If you are using an existing service, please check which version it uses and adjust accordingly.
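If you are not sure which versions your server supports, one way to check (on the NFS server, assuming the kernel NFS server set up above) is:
# On the NFS server: list the protocol versions the kernel NFS server advertises
sudo cat /proc/fs/nfsd/versions
# Example output: +3 +4 +4.1 +4.2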
Then apply it on your MicroK8s cluster with:
microk8s kubectl apply -f - < sc-nfs.yaml
3. Create a new PVC
The final step is to create a new PersistentVolumeClaim using the nfs-csi storage class. This is as simple as specifying storageClassName: nfs-csi in the PVC definition, for example:
# pvc-nfs.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
Then create the PVC with:
microk8s kubectl apply -f - < pvc-nfs.yaml
If everything has been configured correctly, you should be able to check the PVC…
microk8s kubectl describe pvc/test-pvc
… and see that a volume was provisioned successfully:
Name: test-pvc
Namespace: default
StorageClass: nfs-csi
Status: Bound
Volume: pvc-0d7e0c27-a6d6-4b64-9451-3209f98d6472
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: <none>
Events: <none>
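If you also want to confirm this on the NFS server itself, the CSI provisioner normally creates one subdirectory per volume under the exported path, named after the PV, so a listing there should show something like (the exact name will differ in your setup):
# On the NFS server
ls /srv/nfs
# pvc-0d7e0c27-a6d6-4b64-9451-3209f98d6472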
That’s it! You can now use this PVC to run stateful workloads on your MicroK8s cluster.
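For example, here is a minimal test pod that mounts the PVC; the pod name, image and mount path are just placeholders of my own, not anything MicroK8s-specific:
# pod-nfs-test.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
Apply it the same way as the other manifests:
microk8s kubectl apply -f - < pod-nfs-test.yaml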
Whenever you need another PVC, just repeat step 3.