My home server runs on aged hardware (a Core i5-3450, software RAID1 on SATA disks) and often struggles when I run performance-intensive tasks such as a compile job alongside the "normal" services in the background (DNS, web, DHCP, mail, etc.).
Recently I nearly trashed my system: a compile job climbed to almost 100% CPU, the I/O wait skyrocketed as a consequence, and basically no other process could get any resources anymore.
I read about cgroup v2 and gave it a try: I set the GRUB kernel option systemd.unified_cgroup_hierarchy=1 and rebooted. I created a new cgroup, activated the I/O and CPU controllers, and threw my compiler's PID into its cgroup.procs, which successfully put the job into the "background": it ran at only 5% CPU and no longer interfered with the rest of the system. The compile job took nearly forever (several days), but I didn't mind.
However, multiple times per day my cpu.max setting suddenly vanished and the system got out of equilibrium again. After reading more about cgroups and systemd, I believe that happened because I meddled with controls that belong to systemd, and systemd simply reset my custom settings back to the default (cpu.max == 'max 100000'), which then started trashing my system again.
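For reference, the manual steps I took looked roughly like this, reconstructed from memory (the group name "compile" and the $COMPILER_PID variable are just placeholders I'm using here):

sudo mkdir /sys/fs/cgroup/compile
echo "+cpu +io" | sudo tee /sys/fs/cgroup/cgroup.subtree_control
echo "5000 100000" | sudo tee /sys/fs/cgroup/compile/cpu.max
echo $COMPILER_PID | sudo tee /sys/fs/cgroup/compile/cgroup.procs

(A cpu.max of "5000 100000" means 5000µs of CPU time per 100000µs period, i.e. 5%.)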
So I read about how to do it properly.
First I set Delegate=true in my user@.service unit file and rebooted.
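In case it matters, this is the kind of snippet I mean (a drop-in under /etc/systemd/system/user@.service.d/ would be the usual place; the name delegate.conf is arbitrary):

# /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=true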
Then I tried spawning a new "job" by running
systemd-run --user -p CPUQuota=5% stress-ng --matrix 0 -t 10m
which worked: total CPU usage was capped at 5%, i.e. about 5%/4 for each of the four stress-ng workers (one per core)!
I can see a new directory below /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service
which contains my setting:
cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/run-raef937da699b484b80f1bf03bc049f7a.service/cpu.max
5000 100000
and cgroup.procs
contains the corresponding PID of stress-ng. Success!
Now I might want to change that setting, either because it's too low or for whatever other reason. May I just write another value directly to cpu.max, or am I supposed to use a systemd tool for that? (Which one?)
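My naive guess would be something like the following, using the transient unit name from above (the 10% is just an example value; I don't know if this is the intended interface):

systemctl --user set-property run-raef937da699b484b80f1bf03bc049f7a.service CPUQuota=10%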
My other problem is: if I don't know beforehand that a command I am going to run might have a bad performance impact on my system, how can I constrain it after the fact, without killing it and re-running it under systemd-run?
Assume I just run "stress-ng" in my shell (without systemd-run); I can see its PID as a member of the normal session cgroup:
cat /proc/742779/cgroup
0::/user.slice/user-1000.slice/session-2232.scope
But that cgroup is managed by systemd, so I am not supposed to (nor able to, with normal user rights) write into its cpu.max, io.max, etc., correct?
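For illustration, a direct write like this is exactly what I mean (and what fails for me as a normal user):

echo "5000 100000" > /sys/fs/cgroup/user.slice/user-1000.slice/session-2232.scope/cpu.max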
What ways of limiting do I have for already-running processes that sit in the session-xxx.scope instead of under my user@1000.service? Can I "take ownership" of PID 742779 and somehow transfer it into my user@1000.service subtree?
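Naively I would imagine something along these lines (the sub-cgroup name "limited" is made up by me; I have no idea whether systemd tolerates this, or whether the cgroup v2 delegation rules even allow moving a PID in from outside my delegated subtree):

CG=/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service
mkdir $CG/limited
echo "+cpu" > $CG/cgroup.subtree_control
echo "5000 100000" > $CG/limited/cpu.max
echo 742779 > $CG/limited/cgroup.procs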
I know there are cgcreate, cgclassify, etc., but those are cgroup v1 only, correct? (At least they give me
cgcreate: libcgroup initialization failed: Cgroup is not mounted
as a result.)