Score:2

Systemd stops user manager and kills all user processes

mm flag

I have many podman containers running under a user. The processes running in them are resource intensive at times (CPU and memory).

Until recently we didn't have any problems. But after an unavoidable software update to one of the programs running inside the containers, the containers have started dying daily, all at the same time. I doubled the available RAM, which helped temporarily, but the problem is back.

I found the following lines in /var/log/syslog, which always appear right before the shutdown:

Jul 24 17:01:26 xxx1 systemd[1]: session-5.scope: Deactivated successfully.
Jul 24 17:01:26 xxx1 systemd[1]: session-5.scope: Consumed 9.924s CPU time.
Jul 24 17:01:36 xxx1 systemd[1]: Stopping User Manager for UID 1000...

There's a CPU usage spike shortly before this, because the containers all run a scheduled task at the same time.

I haven't changed any systemd settings from the Ubuntu 22.04 LTS defaults, and in /etc/systemd/system.conf, DefaultCPUAccounting is set to no.

I suspect some other limit could be causing the shutdown (e.g. the number of tasks), but I can't find any information in the logs about what prompted the stopping of the user manager.

How can I find the reason for the stop?

Score:5
fr flag

From your logs, I would say you have the cause and effect reversed: the user sessions stop first; then, after a 10-second timer, systemd-logind stops the user manager as "unneeded" (which is standard behavior unless "linger" mode has been enabled for that user).

Start by doing loginctl enable-linger <user> to disable the automatic GC of the user service manager; that's something you should have enabled anyway whenever you want to have "permanent" user services. (If you remember you had it enabled before, I'd first look in /var/lib/systemd/linger to check whether the flag file is still there – something might have removed it.)
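
A minimal sketch of that check/fix, assuming the affected user is the UID 1000 one from your logs (substitute the real username):

loginctl enable-linger <user>          # creates the linger flag for that user
ls /var/lib/systemd/linger             # the flag file should now be there
loginctl show-user <user> -p Linger    # should report Linger=yes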

If this helps, proceed with trying to figure out why the user sessions are stopping – it depends on what was holding them open before (a local console login? an SSH session?).
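
One way to see what kind of sessions exist and what closes them, assuming the standard journald/logind setup (the session ID, e.g. 5, comes from your own logs):

loginctl list-sessions            # sessions currently open, and for which users
loginctl session-status 5         # details of one session: the service that opened it, leader process
journalctl -u systemd-logind -b   # logind's log: when sessions were opened and removed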


DefaultCPUAccounting is not relevant to the problem; cgroup-based CPU accounting can at most throttle processes but (unlike ulimit) will not outright kill them. The "Consumed xxx CPU time" message is just informative.

BenVida avatar
mm flag
Thank you very much! Linger was not enabled, so I've corrected it now. Somehow it worked just fine for months. I'll check on the process that starts the containers; it's launched from a cron job. I'll see the results by tomorrow and give an update.
user1686 avatar
fr flag
Yeah, that's not a good way to start user services. It sounds like you previously had, e.g., cron set up to open a PAM session, with the cron job holding it open more or less by accident... With linger enabled, you can just add them to `default.target` if you want them to start on boot.
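A rough sketch of what that could look like (unit name, path and container names are made up, not anything from this thread):

# ~/.config/systemd/user/my-containers.service
[Unit]
Description=Start my podman containers

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/podman start container1 container2

[Install]
WantedBy=default.target

# then, as that user:
systemctl --user daemon-reload
systemctl --user enable --now my-containers.service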
Will avatar
it flag
@BenVida In most cases, the cleanest (and most robust) solution on a systemd-managed distribution is to ditch the cron job and create a dedicated systemd `.service` file along with an accompanying `.timer` file.
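For illustration only – the unit names, schedule and command below are placeholders, adjust to the real task:

# ~/.config/systemd/user/container-task.service
[Unit]
Description=Scheduled task inside the container

[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec mycontainer /usr/local/bin/run-task.sh

# ~/.config/systemd/user/container-task.timer
[Unit]
Description=Run the container task daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# enable the timer, not the service:
systemctl --user enable --now container-task.timer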
BenVida avatar
mm flag
Thank you for all the guidance. I enabled linger and added the Podman socket to systemd based on this: https://docs.podman.io/en/latest/markdown/podman-system-service.1.html I figured out that the podman system service was started by me logging in and ran with the default 5-second inactivity timeout. So when the containers had something heavy going on, there was no activity on the podman service for more than 5 seconds and it shut itself down. With the changes above, the containers stayed running even when the CPU spike came.
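For reference, the setup from that page boils down to roughly these commands (the socket unit ships with podman; --time=0 disables the inactivity timeout if you run the service directly):

systemctl --user enable --now podman.socket   # socket activation: the service starts on demand
podman system service --time=0                # alternatively, run it in the foreground with no timeout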
Togor avatar
in flag
Thanks, this solution really helped me with the same issue.