If each pod in your StatefulSet requires its own PV, you should create the StatefulSet with a volumeClaimTemplates
section, as described in the documentation:
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods, or StatefulSet, are deleted. This must be done manually.
So if you create a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  replicas: 3
  # serviceName and selector are required for an apps/v1 StatefulSet
  serviceName: example
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: whoami
          image: docker.io/containous/whoami:latest
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            # mount the per-pod volume (the mount path is arbitrary)
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Then you'll end up with three pods:
$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
example-0   1/1     Running   0          78s
example-1   1/1     Running   0          73s
example-2   1/1     Running   0          68s
And three PVCs:
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-example-0   Bound    pvc-b31c564d-65fb-43ef-af5d-64faeaae6897   1Gi        RWO            standard       53s
data-example-1   Bound    pvc-28f83158-d10d-450b-a723-954fc113eb92   1Gi        RWO            standard      48s
data-example-2   Bound    pvc-55ae7f94-7ea9-4690-9e27-01e8a09cc3da   1Gi        RWO            standard      43s
Each claim is bound to a specific pod: pod example-N always gets claim data-example-N, and that association is stable even if the pod is deleted and rescheduled onto another node.
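You can check the pod-to-claim association yourself. This is a quick sketch against the example above (it assumes the pod names and the volume name data from that manifest, and a working cluster context):

```shell
# For each pod, print which PVC its "data" volume is bound to
for pod in example-0 example-1 example-2; do
  kubectl get pod "$pod" \
    -o jsonpath='{.metadata.name} -> {.spec.volumes[?(@.name=="data")].persistentVolumeClaim.claimName}{"\n"}'
done

# The claims outlive the StatefulSet, so cleanup is a manual step:
kubectl delete pvc data-example-0 data-example-1 data-example-2
```

The volume name in each pod's spec matches the name of the volumeClaimTemplates entry, which is why the jsonpath filter looks for a volume named data.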