Score:9

How can I limit the CPU and RAM usage for a process?


I'm on an Ubuntu VPS with SFTP and a console. I need a specific process to use only 60% of my CPU and 2048 MB of RAM.

I also need another process to use only 30% of the CPU and 1024 MB of RAM.

How can I limit the CPU and RAM usage of a process?

muru
Does this answer your question? [How to limit resource usage for a given process?](https://askubuntu.com/questions/1045076/how-to-limit-resource-usage-for-a-given-process)
Score:13

Be warned: Here there be dragons.

When you start down the path of controlling the resources of applications / processes / threads to this extent, you open a veritable Pandora's box of problems when it comes time to debug an issue that your rate limiting did not take into account.

That said, if you believe that you know what you're doing, there are three options available to you: nice, cpulimit, and control groups (Cgroups).

Here is a TL;DR for these three methods:

Nice

nice {process}

This is a very simple way to prioritise a task and is quite effective for "one off" uses, such as reducing the priority of a long-running, computationally-expensive task that should use more of the CPU when the machine is not being used by other tasks (or people).
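For instance, a minimal sketch of both forms (the PID below is purely illustrative):

```shell
# Start a command at a lower priority (niceness 10; a higher niceness means
# "nicer" to other tasks, i.e. lower scheduling priority).
nice -n 10 sh -c 'echo "running at niceness $(nice)"'

# Lower the priority of an already-running process by PID (illustrative PID;
# raising priority back above 0 requires root):
#   renice -n 10 -p 1881
```

Note that unprivileged users can only lower a process's priority (raise its niceness), never raise it.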

CPU Limit

cpulimit -l 60 {process}

If your server's performance suffers (i.e. stalls) when CPU usage exceeds a certain amount, then cpulimit can help reduce the pressure on the system. It does this by sending SIGSTOP and SIGCONT signals to pause and resume the process at intervals, keeping it under a defined ceiling. cpulimit does not change the nice value of the process; instead it monitors and controls the real-world CPU usage.
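You can approximate this pause-and-resume mechanism by hand with kill, to see the idea (the duty cycle here is arbitrary; cpulimit tunes it to hit your target percentage):

```shell
# Approximate cpulimit's mechanism: alternately pause (SIGSTOP) and
# resume (SIGCONT) a CPU-hungry process to cap its effective CPU share.
yes > /dev/null &        # a busy-looping background job
pid=$!
for i in 1 2 3; do
    kill -STOP "$pid"    # process stops consuming CPU
    sleep 0.1
    kill -CONT "$pid"    # process runs again
    sleep 0.1
done
kill "$pid"              # clean up the demo job
```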

You will find that cpulimit is useful when you want to ensure that a process doesn't use more than a certain portion of the CPU, which your question alludes to, but a disadvantage is that the process cannot use all of the available CPU time when the system is idle (which nice allows).

CGroups

sudo cgcreate -g cpu:/restrained
sudo cgset -r cpu.shares=768 restrained
sudo cgexec -g cpu:restrained {process}

Cgroups — control groups — are a feature built into the Linux kernel that enables you to control how resources should be allocated. With Cgroups you can specify how much CPU, memory, bandwidth, or combinations of these resources can be used by the processes that are assigned to a group.

A key advantage of Cgroups over nice or cpulimit is that the limits are applied to a set of processes; not just one. nice and cpulimit are also limited to restricting the CPU usage of a process, whereas Cgroups can limit other process resources.

If you go down the rabbit-hole of Cgroups then you can hyper-optimise a system for a specific set of tasks.
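The same approach extends to memory. A minimal sketch using the same libcgroup tools (this assumes cgroup v1; on a cgroup-v2-only system the knob is memory.max rather than memory.limit_in_bytes):

```shell
# Create a group with a memory controller and cap it at 2048 MB (value in bytes),
# then run the process inside it.
sudo cgcreate -g memory:/restrained
sudo cgset -r memory.limit_in_bytes=2147483648 restrained
sudo cgexec -g memory:restrained {process}
```

If the processes in the group exceed the cap, the kernel's OOM killer is invoked against them rather than against the rest of the system.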

And what about limiting RAM usage? Can you give an example of doing that via Cgroups?
Score:10

A heads up: if you don't want to give the process a hard limit, just a priority, look up the nice command. This answer will assume you want a hard limit.

Limiting CPU Usage

This wonderful answer to a different question explains it pretty well.

Install cpulimit

sudo apt-get install cpulimit

It provides different methods of limiting the CPU usage of a process foo to, say, 20%:

  • By its process name: sudo cpulimit -e foo -l 20

  • By its absolute path name: sudo cpulimit -P /usr/bin/foo -l 20

  • By its PID:

  1. Find the PID of the process: pidof foo. (say, it outputs 1881)
  2. sudo cpulimit -p 1881 -l 20

Limiting Memory Usage

For more options, see this post on how to limit RAM usage.

For example, to limit process 12345 to roughly 2048 MB, you could use the prlimit command (the --as limit caps the address space, in bytes):

$ prlimit --pid 12345 --as=2048000000
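prlimit can also launch a command with the limit already in place, rather than adjusting a running process (./myprogram below is a stand-in for your binary):

```shell
# Start a new process with its address space capped at ~2 GB (value in bytes).
# Allocations beyond the cap will fail with ENOMEM.
prlimit --as=2048000000 ./myprogram
```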
MatijaNalis
Do note that `limiting address space` is **not the same** as `limiting RAM usage`. Calling it `limiting virtual memory (RAM+swap)` is just a _little_ better, but still incorrect. One can easily have a program which requires 3 GB of address space while using only 100 MB of RAM and 0 MB of swap, for example (see `mmap(2)` & friends, sparse arrays, etc.).
cocomac
@MatijaNalis If you know of a better way, then post an answer. Or, edit one of the existing answers.
Peter Cordes
@MatijaNalis: Does the traditional `ulimit -m <memory>` (in KiB) work? It's supposed to limit resident set size, according to `ulimit --help`. (There's a separate `ulimit -v <virtual memory>`, also in KiB.) For CPU time there's also the traditional `ulimit -t <seconds>` setting, which I assume just has the kernel kill a process that exceeds it.
MatijaNalis
@PeterCordes unfortunately I don't think so (although ResidentSetSize is exactly what @newalvaro9 wants to limit) - not in any non-archaic kernels, anyway. `strace bash` reveals that `ulimit -m` calls `prlimit64(0, RLIMIT_RSS, ...` and man page `prlimit64(2)` says (among other things) that `RLIMIT_RSS [....] limit has effect only in Linux 2.4.x, x < 30`. So unless that documentation is wrong in my Debian Bullseye, it won't help (one might write a test program to check it, but...)
Peter Cordes
@MatijaNalis: That makes sense. With memory overcommit on by default, it would be hard to make it useful. Allocation of physical pages happens too late for mmap / malloc to return NULL, so the only options would be swapping out other pages to make room (creating swap thrashing when one process was out of RSS) or killing it, neither of which are particularly desirable. And presumably doing the accounting would suck for performance on a multi-core system for multi-threaded processes. But it also makes sense that the `ulimit -m` shell option can't go away, nor can the API / ABI fields.
Peter Cordes
It might make sense for a Linux build of `bash` to print `(no effect in Linux after 2.4.30)` as part of the help output though, so other people don't waste their time on it, if there's basically zero chance that a future Linux version would make it do anything again.
Score:0

After looking for answers to this question for hours, I found a one-liner that works out of the box on Ubuntu:

Process #1:

systemd-run --scope -p CPUQuota=60% -p MemoryMax=2048M -p MemoryHigh=1940M --user [yourcommand1]

Process #2:

systemd-run --scope -p CPUQuota=30% -p MemoryMax=1024M -p MemoryHigh=970M --user [yourcommand2]

This command uses Cgroups under the hood but abstracts its complexity for you.
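To check that the limits took effect, you can ask systemd for the properties of the transient scope it created (the unit name below is an assumption; systemd-run prints the actual name, e.g. run-u123.scope, when it starts):

```shell
# Substitute the unit name that systemd-run printed to inspect the
# limits that were applied to the scope:
systemctl --user show run-u123.scope -p CPUQuotaPerSecUSec -p MemoryMax -p MemoryHigh
```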

Note: MemoryMax is a hard cap, so we also use the MemoryHigh parameter (here arbitrarily set at 95% of MemoryMax) to handle memory limits more gracefully, as stated in the MemoryMax description from the link below:

If memory usage cannot be contained under the limit, out-of-memory killer is invoked inside the unit. It is recommended to use MemoryHigh= as the main control mechanism and use MemoryMax= as the last line of defense.

More info can be found in the systemd.resource-control documentation
