
AWS EC2 resource utilization


Wondering how everyone else looks at resource utilization on AWS EC2 instances. For example, I'm trying to 'right-size' many of our over-provisioned instances to the correct instance type/performance specifications. In doing so, I've been sizing down to instance types that may sit at 80%+ memory utilization for a given workload.

The idea is that resource utilization means something different on cloud instances than on bare-metal hardware or on-prem virtualization/hypervisors. With bare-metal or on-prem servers, admins usually look at resource utilization and want to see memory below 50% or so; in other words, 80%+ memory utilization looks like a bad thing, and an admin might then allocate MORE memory to bring the baseline utilization down. The reasoning is that during times of increased traffic or workload demand, the server can handle the load without slowing down or showing noticeable performance loss.

However, when working with cloud instances you're paying for resources on-demand rather than buying physical hardware upfront, so theoretically you'd WANT your baseline to use most of your memory allocation to optimize cost savings for the instance type/size. Additionally, the resources are burstable, so even at times when you hit 100% utilization you won't see a performance drop or hangup. High memory utilization is therefore an ideal situation for sizing workloads that typically use a consistent amount of memory; in other words, a workload that sits between 80% and 95% memory utilization 95% of the time.

I want to know whether anyone thinks this reasoning is flawed. Do admins running EC2 instances still try to keep memory utilization low? To me that sounds like an antiquated habit that no longer applies. As long as your CPU stays below the allocated baseline percentage so you accrue credits, and your memory stays below 100% the majority of the time, that seems like the ideal setup to minimize instance cost while maintaining the same performance. Thoughts?
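As an aside, EC2 doesn't publish memory metrics to CloudWatch out of the box, so checking that 80-95% baseline means running the CloudWatch agent on the instance. A minimal sketch of pulling those numbers with boto3, assuming the agent is publishing mem_used_percent to its default CWAgent namespace (the instance ID is a placeholder):

```python
# Sketch: fetch two weeks of memory utilization for one instance.
# Assumes the CloudWatch agent is installed and publishing
# mem_used_percent to the default "CWAgent" namespace.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,  # hourly datapoints
    Statistics=["Average", "Maximum"],
)

# If the hourly Maximum rarely approaches 100%, the size is a good fit.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"],
          f"avg {point['Average']:.1f}%",
          f"max {point['Maximum']:.1f}%")
```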

Wilson Hauck
Additional DB information request, please: RAM size, number of cores, and any SSD or NVMe devices on the MySQL host server? Post TEXT data on justpaste.it and share the links. From your SSH root login, text results of:

A) SELECT COUNT(*), sum(data_length), sum(index_length), sum(data_free) FROM information_schema.tables;
B) SHOW GLOBAL STATUS; (after a minimum of 24 hours UPTIME)
C) SHOW GLOBAL VARIABLES;
D) SHOW FULL PROCESSLIST;
E) STATUS; (not SHOW STATUS, just STATUS)
G) SHOW ENGINE INNODB STATUS;

for server workload tuning analysis to provide performance-improving suggestions.

I'd look at the recommendations from Compute Optimizer. It should give you a couple of options to adjust the size of your EC2 instances based on the observed workload.
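If you want to pull those recommendations programmatically, here's a minimal boto3 sketch, assuming the account has already opted in to Compute Optimizer:

```python
# Sketch: list Compute Optimizer right-sizing findings for EC2.
# Assumes the account is opted in and has enough metric history.
import boto3

optimizer = boto3.client("compute-optimizer")

resp = optimizer.get_ec2_instance_recommendations()

for rec in resp["instanceRecommendations"]:
    print(rec["instanceArn"])
    print("  current:", rec["currentInstanceType"],
          "finding:", rec["finding"])  # e.g. OVER_PROVISIONED
    for option in rec["recommendationOptions"]:
        print("  option:", option["instanceType"], "rank:", option["rank"])
```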

Tim

I generally agree with what you have written. You want fairly high CPU and memory utilization, and once you get close to the limits, auto-scaling to add capacity is usually the best approach. You do want to leave some free RAM for things like disk caching, particularly since EBS is network-attached disk.

The T series instances can burst CPU to a degree, though their baseline is a percentage of the server's CPU capacity. T3 Unlimited means you can simply pay for a higher proportion of CPU, but you can't get more cores or RAM. Other instance types don't have burst capacity as far as I'm aware. It's fairly simple to resize instances, though it means a few minutes of downtime; putting them behind a load balancer can help eliminate that downtime if they're web servers.
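For reference, a resize is just a stop/modify/start cycle. A hedged boto3 sketch (the instance ID and target type are placeholders, and the instance must be EBS-backed so it survives the stop):

```python
# Sketch: resize an EBS-backed instance by stopping it, changing
# the instance type, and starting it again. IDs/types are placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.small"},
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```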

I have a t3a.nano that is over-provisioned. It's running five low-volume WordPress websites with Nginx, MySQL, and PHP, a Dropbox-clone type tool, a password manager in Docker, and other bits and pieces. That's 5% of a CPU core, 0.5GB RAM, 0.5GB swap, and 12GB disk. Currently it has 38MB of RAM free and 183MB as cache, so in effect it's using 291MB of RAM. That's the smallest server AWS sells, other than maybe an ARM/Graviton instance. If there were a pico instance with 256MB RAM I might give it a go!
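The arithmetic there (512MB total minus 38MB free minus 183MB cache = 291MB effectively used) matters for right-sizing, because cache is reclaimable and shouldn't count against you. A small sketch of the same calculation on any Linux box, reading /proc/meminfo directly:

```python
# Sketch: compute "effectively used" memory the way the numbers above
# do: total minus free minus page cache, parsed from /proc/meminfo.
def meminfo_mb() -> dict:
    """Parse /proc/meminfo into a dict of values in MB."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0]) / 1024  # kB -> MB
    return values

m = meminfo_mb()
used = m["MemTotal"] - m["MemFree"] - m["Cached"]
print(f"total {m['MemTotal']:.0f}MB, free {m['MemFree']:.0f}MB, "
      f"cache {m['Cached']:.0f}MB, effectively used {used:.0f}MB")
```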
