Score:-1

How can I release previously allocated resources of a deleted pod?


I already had 3 Cassandra nodes/pods running. I deleted them and tried to re-create them on the same Kind cluster, using the same YAML file as below, but the pods are stuck in Pending status:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command: 
              - /bin/sh
              - -c
              - nodetool drain
        env:
          - name: MAX_HEAP_SIZE
            value: 512M
          - name: HEAP_NEWSIZE
            value: 100M
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"
          - name: CASSANDRA_CLUSTER_NAME
            value: "K8Demo"
          - name: CASSANDRA_DC
            value: "DC1-K8Demo"
          - name: CASSANDRA_RACK
            value: "Rack1-K8Demo"
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra

From what I found searching online, I believe the problem is a lack of resources, and I suspect this is happening because the resources previously allocated to the deleted nodes/pods are still occupied. But I don't know how to release them.

I tried `kubectl top nodes`:

NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
kind-control-plane   205m         2%     1046Mi          6%        
kind-worker          171m         2%     2612Mi          16%   

It seems everything is fine?

Maybe the problem is with hard disk allocation, which I don't know how to check?
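For the disk side, a rough way to check (a sketch, assuming the default Kind node names shown in the `kubectl top nodes` output above; Kind nodes are Docker containers, so `docker exec` works against them):

    # List PersistentVolumes and claims -- a StatefulSet's PVCs survive pod deletion
    kubectl get pv,pvc

    # Check disk usage inside a Kind worker node container
    docker exec kind-worker df -h /var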

larsks: What sort of diagnostics have you performed so far? Are there any events in the namespace that seem relevant?
best_of_man: @larsks I can't try `kubectl logs cassandra-0` or `kubectl describe pod cassandra-0` because it says there is no pod with such a name.
best_of_man: @larsks I don't know what other kinds of diagnostics to try. Please let me know what else I can do.
Score:1
If a pod is in a Pending state, this is typically the result of a lack of resources. First check the events of the pod to find out why it is pending, using the following command:

kubectl describe pod <pod-name>

These events will give insight into why the pod is pending. A common reason for a pod going into a Pending state is a lack of memory or storage: you may have exhausted the resources available on the nodes. One way to get those exhausted resources back is to clean up the nodes by deleting unwanted pods and deployments.
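Since in this case the pods never get created at all, the events live on the StatefulSet and in the namespace rather than on a pod. A sketch of commands to find the reason and to release leftover storage (the `app=cassandra` label comes from the manifest in the question; note that deleting PVCs destroys the data they hold):

    # Events for the namespace, most recent last
    kubectl get events --sort-by=.metadata.creationTimestamp

    # The StatefulSet's own status and events
    kubectl describe statefulset cassandra

    # PVCs created by volumeClaimTemplates are NOT deleted with the StatefulSet;
    # they keep their storage reserved until removed explicitly
    kubectl get pvc -l app=cassandra
    kubectl delete pvc -l app=cassandra   # WARNING: deletes the Cassandra data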

This official document contains information on debugging pods in Kubernetes.

This document helps with debugging StatefulSets in k8s.

If you need an example of deploying Cassandra in k8s, this official k8s document will help.

best_of_man: Unfortunately `kubectl describe pod <pod-name>` doesn't work; it says there is no such pod. I think that's because the pod hasn't been created yet while it is in Pending state. But I agree that there is a resource problem; I just don't know how to release all the previously allocated resources. I even tried to delete and re-create my `Kind` cluster, but it seems it didn't help!
best_of_man: I edited my question, maybe it helps.
Dharani Dhar Golladasari: Can you confirm your StatefulSet is deployed by executing `kubectl get statefulset`?
best_of_man: Yes, I see `NAME READY AGE cassandra 0/3 43h`