On Linux, you can pin a process to a CPU with `taskset` (see `man taskset`; see also https://unix.stackexchange.com/questions/425065/linux-how-to-know-which-processes-are-pinned-to-which-core for finding out which processes are pinned to which core).
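For example (the command and PID below are placeholders):

```shell
# Launch a command pinned to CPU 0:
taskset -c 0 /usr/bin/some_command

# Change the affinity of an already-running process (PID 1234 is an example)
# so it may run on CPUs 0 and 1:
taskset -cp 0,1 1234

# Show which CPUs a process is currently allowed to run on:
taskset -cp 1234
```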
You can also use `cpulimit` (see `man cpulimit`), which throttles a process by repeatedly pausing and resuming it with SIGSTOP/SIGCONT; a limit of 100 is the equivalent of one full CPU.
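A sketch of what that throttling does under the hood, using plain signals on a throwaway `sleep` process (`cpulimit` simply automates this stop/continue cycle at high frequency):

```shell
# Example workload running in the background:
sleep 30 &
pid=$!

kill -STOP "$pid"          # process is now in state "T" (stopped)
ps -o stat= -p "$pid"      # shows the stopped state

kill -CONT "$pid"          # running again
kill "$pid"                # clean up the example

# With cpulimit itself, the equivalent would be e.g.:
#   cpulimit -l 50 -p "$pid"    # hold the process near 50% of one CPU
```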
To use either of these, you will have to find out which process is launched for the user, and prepend the command above to it. I'm not familiar with Virtualmin, but since it is open source, I suspect you can find where it launches these processes fairly easily by grepping the source code.
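A starting point for that search (the path is a guess at a typical Virtualmin/Webmin install location; adjust it to wherever your copy lives):

```shell
# Find places in the Perl sources that spawn processes:
grep -rn --include='*.pl' -E 'exec|system|fork' /usr/share/webmin/
```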
The `cpu` entry in `limits.conf` is CPU time in minutes, as shown in the TIME column of `top`, so it only caps how much total CPU time a process may consume; it says nothing about which CPU it runs on or what share of a CPU it gets.
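For example, a limits.conf entry like the following (the username is a placeholder) caps total CPU time rather than pinning anything:

```
# /etc/security/limits.conf
# <domain>  <type>  <item>  <value>   (cpu is in minutes of CPU time)
alice       hard    cpu     10
```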
The "core" value in limits.conf (see `man limits.conf`) refers to the maximum size of a core dump file, a dump of a process's memory that you can enable for debugging when the process crashes.
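The per-shell equivalent of that limit is `ulimit -c`, a quick sketch:

```shell
# Disable core dump files in the current shell (size limit of 0):
ulimit -c 0
ulimit -c       # shows the current limit: 0

# A limits.conf line with the same effect for every user would be:
#   *  hard  core  0
```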
The background to this is that Linux was designed to do exactly the opposite, spread tasks across CPUs, and it's exceedingly good at that. About the only good reason to restrict tasks to one CPU on a multi-core machine is when you need perfectly predictable timing, or to deal with timing attacks.
Other options are to restrict the OS itself to certain cores via kernel parameters set in the bootloader, to emulate a single-CPU machine in a VM, or to run containers pinned to certain CPUs.
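For instance (the kernel parameter goes on the kernel command line in your bootloader configuration; the container flag assumes Docker is your runtime):

```
# Kernel command line: remove CPUs 2-3 from the general scheduler, so that
# only explicitly pinned tasks (taskset, cpusets) ever run there:
#   isolcpus=2,3

# Run a container restricted to those CPUs:
#   docker run --cpuset-cpus="2,3" some-image
```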