Score: 1

Can't rebuild deployment with PersistentVolumeClaim


I want to create a MongoDB deployment with a PersistentVolumeClaim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume
              mountPath: /data/db
      volumes:
        - name: auth-mongo-volume
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo-pod-label
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

full code
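
Note that the containers above declare no resource requests, which is why the Autopilot build log below warns about injected defaults. A hedged sketch of declaring them explicitly (the values are placeholders, not recommendations):

# Fragment of the Deployment's pod template with explicit requests
spec:
  template:
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          resources:
            requests:
              cpu: 250m      # placeholder; size for your workload
              memory: 512Mi  # placeholder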

First build after creating the Autopilot cluster:

Starting deploy...
 - Warning: Autopilot set default resource requests for Deployment default/auth-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/auth-depl created
 - service/auth-srv created
 - persistentvolumeclaim/auth-mongo-pvc created
 - Warning: Autopilot set default resource requests for Deployment default/auth-mongo-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/auth-mongo-depl created
 - service/auth-mongo-srv created
 - Warning: Autopilot set default resource requests for Deployment default/react-client-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/react-client-depl created
 - service/react-client-srv created
 - ingress.networking.k8s.io/ingress-service created
Waiting for deployments to stabilize...
 - deployment/auth-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-depl-77fd8b57f5-vk8cf: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-mongo-depl-7d967468f6-rkh79: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/react-client-depl-68dcb844f6-b8fm9: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/auth-depl: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-depl-77fd8b57f5-vk8cf: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/react-client-depl-68dcb844f6-b8fm9: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl is ready. [2/3 deployment(s) still pending]
 - deployment/auth-depl is ready. [1/3 deployment(s) still pending]
 - deployment/auth-mongo-depl is ready.
Deployments stabilized in 2 minutes 31.285 seconds

(Screenshot: mongo pod)

If I click on "Rebuild" in Google Cloud Build, I get:

Starting deploy...
 - deployment.apps/auth-depl configured
 - service/auth-srv configured
 - persistentvolumeclaim/auth-mongo-pvc unchanged
 - deployment.apps/auth-mongo-depl configured
 - service/auth-mongo-srv configured
 - deployment.apps/react-client-depl configured
 - service/react-client-srv configured
 - ingress.networking.k8s.io/ingress-service unchanged
Waiting for deployments to stabilize...
 - deployment/auth-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/auth-depl-666fdb5c64-cqnf6: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/auth-mongo-depl-958db4cd5-db5pr: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/react-client-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/react-client-depl-54998f6c5b-wswz7: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: Unschedulable: 0/1 nodes available: 1 node is not ready
    - pod/auth-mongo-depl-958db4cd5-db5pr: Unschedulable: 0/1 nodes available: 1 node is not ready
 - deployment/auth-depl is ready. [2/3 deployment(s) still pending]
 - deployment/react-client-depl is ready. [1/3 deployment(s) still pending]
1/3 deployment(s) failed
ERROR
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold:v2.2.0" failed: step exited with non-zero status: 1

(Screenshots: second pod is stuck; pod events)

I am not sure why it can't scale up: "Node scale up in zones us-central1-f associated with this pod failed: GCE quota exceeded. Pod is at risk of not being scheduled." How can I be exceeding a quota with such a simple deployment? Oddly, the pod events do show "Scheduled" after "FailedAttachVolume".
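
(To see which quota is being hit, the region's quotas can be listed with gcloud; this is a standard command, though the specific quota isn't identifiable from the log above alone:)

gcloud compute regions describe us-central1
# prints each quota metric with its current usage and limit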

Should I change my skaffold.yml file so that it doesn't even attempt to redeploy the database (see the sketch below)? I am not familiar with taints; should I be changing a setting so that the pod is scheduled onto the same node again?
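
One way to do that, as a minimal sketch: list only the stateless manifests in skaffold.yml and apply the database manifest once by hand. The apiVersion and the k8s/ file paths here are assumptions, not taken from the project:

apiVersion: skaffold/v4beta4  # assumption: match this to your skaffold release
kind: Config
manifests:
  rawYaml:
    - k8s/auth-depl.yaml          # hypothetical paths
    - k8s/react-client-depl.yaml
    - k8s/ingress.yaml
    # k8s/auth-mongo.yaml deliberately left out; apply it once with kubectl
deploy:
  kubectl: {}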

I did try ReadWriteMany in a previous cluster, but it didn't work.

(Screenshot: pod events)

Answer (Score: 0)

I ended up using a StatefulSet, since one is required anyway for horizontal scaling.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-set
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  serviceName: auth-mongo-srv
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume
              mountPath: /data/db
      volumes:
        - name: auth-mongo-volume
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo-pod-label
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

I think this works because a StatefulSet replaces a pod in place: the old pod is deleted before its successor is created, so the ReadWriteOnce volume is detached before the new pod tries to attach it. With a Deployment's default rolling update, the new pod comes up while the old one is still running, and both reference the same PVC, hence the FailedAttachVolume. Note that as written, all replicas would still share the single auth-mongo-pvc; giving each pod its own PVC requires volumeClaimTemplates (see the sketch below).
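
A hedged sketch of that variant: drop the standalone PersistentVolumeClaim and the volumes: entry, and let the StatefulSet stamp out one claim per replica (field values copied from the manifests above):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-set
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  serviceName: auth-mongo-srv
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume   # matches the claim template below
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Mi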
