
Amazon EKS: Moving pods from one node group to another

I currently have a Managed Node Group serving my EKS cluster and have added another:

  • NodeGroup1 [current - 20 GB EC2 disk]
  • NodeGroup2 [new - 80 GB EC2 disk]

I'd like to migrate my current pods from NodeGroup1 to NodeGroup2 to give the worker nodes more disk space, and then stop using NodeGroup1.

I created the new node group like this:

eksctl create nodegroup --cluster prod --name NodeGroup2 --node-type t3.medium --nodes 2 --nodes-min 0 --nodes-max 4 --enable-ssm --managed --node-volume-size 80

I have a pod disruption budget set up, but at the moment I can tolerate downtime. As long as everything makes it over from one managed node group to the new one, I'm happy.

Can I simply run eksctl delete nodegroup NodeGroup1 and have it move everything over to the second node group?
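To be concrete, I mean running something like this (going from memory, so the exact flags may need checking):

eksctl delete nodegroup --cluster prod --name NodeGroup1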

Here's what my node groups, deployments and pods look like:

~$ eksctl get nodegroups --cluster prod -o json
[
    {
        "StackName": "",
        "Cluster": "prod",
        "Name": "NodeGroup2",
        "Status": "ACTIVE",
        "MaxSize": 4,
        "MinSize": 0,
        "DesiredCapacity": 2,
        "InstanceType": "t3.medium",
        "ImageID": "AL2_x86_64",
        "CreationTime": "2021-11-07T04:15:49.595Z",
        "NodeInstanceRoleARN": "arn:aws:iam::redacted:role/eks-node-group",
        "AutoScalingGroupName": "eks-...1d",
        "Version": "1.20"
    },
    {
        "StackName": "",
        "Cluster": "prod",
        "Name": "NodeGroup1",
        "Status": "ACTIVE",
        "MaxSize": 4,
        "MinSize": 0,
        "DesiredCapacity": 2,
        "InstanceType": "t3.medium",
        "ImageID": "AL2_x86_64",
        "CreationTime": "2021-05-25T06:52:25.437Z",
        "NodeInstanceRoleARN": "arn:aws:iam::redacted:role/eks-node-group",
        "AutoScalingGroupName": "eks-...01",
        "Version": "1.20"
    }
]
~$ kubectl get pods -A    

NAMESPACE              NAME                                            READY   STATUS    RESTARTS   AGE
default                grafana-agent                                   1/1     Running   0          2d23h
default                promtail-2a23                                   1/1     Running   0          3d
default                promtail-2vg2                                   1/1     Running   0          3d
default                prod-application-34                             1/1     Running   0          3d
default                prod-applicationworker-6l                       1/1     Running   0          3d
kube-system            aws-load-balancer-controller                    1/1     Running   0          2d23h
kube-system            aws-node-5rzk3                                  1/1     Running   0          3d
kube-system            aws-node-keljg                                  1/1     Running   0          3d
kube-system            cluster-autoscaler                              1/1     Running   0          2d23h
kube-system            coredns-3254s                                   1/1     Running   0          3d
kube-system            coredns-48grd                                   1/1     Running   0          2d23h
kube-system            kube-proxy-6vx89                                1/1     Running   0          3d
kube-system            kube-proxy-rqb23                                1/1     Running   0          3d
kube-system            metrics-server                                  1/1     Running   0          2d23h
kubernetes-dashboard   dashboard-metrics-scraper                       1/1     Running   0          2d23h
kubernetes-dashboard   kubernetes-dashboard                            1/1     Running   0          2d23h

~$ kubectl get deployments -A
NAMESPACE              NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
default                grafana-agent                  1/1     1            1           5d23h
default                prod-application               1/1     1            1           160d
default                prod-applicationworker         1/1     1            1           37d
kube-system            aws-load-balancer-controller   1/1     1            1           166d
kube-system            cluster-autoscaler             1/1     1            1           166d
kube-system            coredns                        2/2     2            2           167d
kube-system            metrics-server                 1/1     1            1           166d
kubernetes-dashboard   dashboard-metrics-scraper      1/1     1            1           165d
kubernetes-dashboard   kubernetes-dashboard           1/1     1            1           165d

If there's a way I can temporarily cordon a whole node group and then drain it so everything lands on the other node group (before I delete the first), that would be amazing.
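In case it helps to show what I mean, this is roughly the sequence I'm imagining (untested sketch; it assumes the eks.amazonaws.com/nodegroup label that EKS puts on managed node group nodes, and that NodeGroup2 has room for everything):

# Cordon every NodeGroup1 node so evicted pods can only land on NodeGroup2
kubectl cordon -l eks.amazonaws.com/nodegroup=NodeGroup1

# Drain the NodeGroup1 nodes one at a time; DaemonSet pods (aws-node, kube-proxy, promtail) are skipped
for node in $(kubectl get nodes -l eks.amazonaws.com/nodegroup=NodeGroup1 -o name); do
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

My thinking with cordoning everything first is that evicted pods can't just hop onto the other NodeGroup1 node mid-drain.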

Sorry if this is a simple question, but I've ended up reading so many documents that slightly contradict each other.

Cheers, Mike.

moonkotte
Based on some research, EKS managed node groups support [taints](https://docs.aws.amazon.com/eks/latest/userguide/node-taints-managed-node-groups.html), so setting a taint on `NodeGroup1` with the `NoExecute` effect should move all pods to the other node group. More details on [taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
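For example, something along these lines should apply the taint through the EKS API (sketch only; the decommission key/value is arbitrary, and the shorthand syntax is worth checking against the AWS CLI docs):

aws eks update-nodegroup-config \
  --cluster-name prod \
  --nodegroup-name NodeGroup1 \
  --taints 'addOrUpdateTaints=[{key=decommission,value=true,effect=NO_EXECUTE}]'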
