I have the Ganesha NFS server provisioner installed in my cluster as a StatefulSet, following the documentation. I run it with just a mounted block-storage PVC.
I deploy a separate NFS server per namespace, and I want to limit which pods in that namespace have access to the NFS server. I created the following ingress network policy, which allows access to the NFS server (labeled name=nfs-server) from pods in the same namespace carrying the label app-name=my-app (all ports):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    name: allow-ingress-nfs
  name: allow-ingress-nfs
  namespace: "my-namespace"
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app-name: my-app
  podSelector:
    matchLabels:
      name: nfs-server
  policyTypes:
  - Ingress
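For reference, both selectors in the policy can be checked against the live pods like this (a sketch; assumes kubectl access to the cluster, labels taken from the manifests below):

```shell
# Pods the policy's podSelector targets (the NFS server)
kubectl get pods -n my-namespace -l name=nfs-server

# Pods the ingress "from" rule admits (the clients)
kubectl get pods -n my-namespace -l app-name=my-app
```

Both commands return the expected pods, so the labels themselves match.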
This is the NFS server pod (from STS):
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: nfs-server
  name: nfs-server-0
  namespace: "my-namespace"
spec:
  containers:
  - args:
    - -provisioner=my-namespace-nfs
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: SERVICE_NAME
      value: nfs-server
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8
    name: nfs-server
    volumeMounts:
    - mountPath: /export
      name: export
  volumes:
  - name: export
    persistentVolumeClaim:
      claimName: nfs-export
I try to mount a PVC provisioned by this NFS server from the following pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app-name: my-app
    name: nginx
  name: nginx-74c5d976b7-g782p
  namespace: "my-namespace"
spec:
  containers:
  - image: my/image
    imagePullPolicy: Always
    name: nginx
    volumeMounts:
    - mountPath: /mnt/files
      name: files
      readOnly: true
  volumes:
  - name: files
    persistentVolumeClaim:
      claimName: files
This is the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: files
  namespace: "my-namespace"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: my-namespace-nfs
  volumeMode: Filesystem
  volumeName: pvc-uuid
This is the NFS storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-namespace-nfs
mountOptions:
- vers=4.1
provisioner: my-namespace-nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
However, pods get stuck in ContainerCreating with the following error:
Mounting command: mount
Mounting arguments: -t nfs -o vers=4.1 10.245.245.191:/export/pvc-name /var/lib/kubelet/pods/id/volumes/kubernetes.io~nfs/pvc-name
Output: mount.nfs: Connection timed out
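To narrow down where the traffic is being dropped, reachability of the NFS port can be probed from a pod that the policy admits (a sketch; the server IP is the one from the mount error above, and the throwaway busybox probe pod is hypothetical):

```shell
# Run a throwaway pod carrying the label the policy's "from" rule admits,
# and probe the NFS server's TCP port 2049 from inside it
kubectl run nfs-probe -n my-namespace --rm -it --restart=Never \
  --labels=app-name=my-app --image=busybox -- \
  nc -zv -w 5 10.245.245.191 2049
```

If such a probe succeeds while the kubelet's mount still times out, that would suggest the mount traffic does not originate from a pod matching the selector.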
I've observed the same behavior on DigitalOcean Kubernetes 1.27 (which uses Cilium) with dedicated (non-shared-CPU) nodes, and on GKE 1.26.
If I delete all network policies, the issue disappears completely.
Not sure if this affects anything, but I do not run Ganesha in privileged mode, and I limit it by memory/CPU.