Score:0

400% CPU usage on Linux server


Recently we moved our full-stack website to a new server (a Vultr instance). Since the move we have been constantly seeing 400% CPU usage.

It has now been five days of sustained 400% CPU usage.

Debugging the issue led us to articles about VACUUM / autovacuum, which PostgreSQL uses to reclaim storage occupied by dead tuples, and which is recommended to be kept ON.
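
A non-invasive way to confirm whether the load is actually coming from autovacuum workers is to look at pg_stat_activity. A minimal sketch, assuming both the container and the database superuser are named "postgres" (ours may differ):

    # Autovacuum workers show up with a query column that starts with "autovacuum:"
    docker exec -it postgres psql -U postgres -c \
      "SELECT pid, state, query_start, query FROM pg_stat_activity WHERE query LIKE 'autovacuum:%';"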

We don't want to terminate the process, as our site is live and we don't want to break anything.

Here's more detail on what we did for the server migration.

We cloned and deployed the frontend application using nginx. For the backend, we made a dump of the database (about 25 MB) on the old server and restored it on the new server. We restored the data twice: the first time as a trial run, and the second time during the actual move. Before importing the data we dropped the existing database. We use the official Postgres Docker image for our database.
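
For reference, the dump and restore were done roughly along these lines; the database name, user and container name below are placeholders, and the exact commands may have differed slightly:

    # On the old server
    pg_dump -U postgres mydb > dbdump.sql

    # On the new server: drop the existing database, recreate it, and restore the dump
    docker exec -i postgres psql -U postgres -c "DROP DATABASE IF EXISTS mydb;"
    docker exec -i postgres psql -U postgres -c "CREATE DATABASE mydb;"
    docker exec -i postgres psql -U postgres mydb < dbdump.sql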

Here's the output of the htop command on the server (screenshot: processes showing high CPU usage).
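
To double-check that the load really comes from the Postgres container rather than something else on the host, something like this can be used (the container name "postgres" is a placeholder):

    docker stats --no-stream      # per-container CPU and memory usage
    docker top postgres           # processes running inside the container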

We don't understand what we should do to reduce the server's CPU load.

NOTE: We are running the default configuration of the Postgres Docker image.
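
For what it's worth, the autovacuum settings that this default configuration gives can be inspected without changing anything, for example (container and user names are again assumptions):

    docker exec -it postgres psql -U postgres -c \
      "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"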

Device info

OS: Ubuntu 22.04.1 LTS

RAM: 8 GB

Processor: 4x AMD EPYC-Rome Processor

Vultr, CPU Optimized - 4 vCPU, 8192 MB RAM, 75 GB NVMe, 6.00 TB Transfer

Welcome to AskUbuntu! Unfortunately there is not enough information here to pinpoint a cause. Could you [edit] your question to include: (0) the version of Ubuntu you're running, (1) the hardware specs of the server (is it bare metal? virtual? AWS? etc.), and (2) the size of the PostgreSQL database? With this, it may be possible to offer some areas to investigate.
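
(For reference, this information can be gathered with commands along these lines; the database and container names are placeholders:)

    lsb_release -a    # Ubuntu version
    lscpu             # CPU model and core count
    free -h           # installed RAM
    docker exec -it postgres psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('mydb'));"
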
Sneh Jain: Sure. Thank you.
Matigo: Just to confirm, you're using a c6a.xlarge instance? Is there any additional EBS volume attached aside from the primary where the OS is installed? What other operations is the server doing? The most common reason for this sort of issue I've seen (when using EC2 instances) is I/O contention.
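
(A quick way to check for I/O contention on the host is iostat/vmstat; iostat is part of the sysstat package, which may need to be installed first:)

    sudo apt install sysstat    # provides iostat, if not already present
    iostat -x 1 5               # per-device utilization and await times
    vmstat 1 5                  # high values in the "wa" column indicate I/O wait
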
Sneh Jain: There's no EBS volume; we are storing files on the instance itself. For the database we are using the official Postgres image with the default configuration. I just confirmed the instance details with my client and have updated them in the question. Please check again.
Sneh Jain: @ArturMeinild Done. Thanks!
Score:1

I was able to fix it by restarting the Postgres Docker container. Now the server is running normally.
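
For reference, the restart itself is a single command (the container name "postgres" here is a placeholder):

    docker restart postgres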

Artur Meinild: Sometimes the solution is simple - most likely some kind of I/O congestion, like Matigo suggested.