Why does the OOM killer run although there is enough memory?

On one of the worker nodes of the Kubernetes cluster, a situation occurs periodically (several times a day) where the OOM killer is triggered and kills the "manager" process. I assume it is because of the cgroup configuration. How do I configure it properly?
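
For context, I assume the limit comes from the pod spec, roughly like this (a sketch with placeholder names and an assumed request; the only value I can infer is the 512Mi limit, which corresponds to the 524288kB cgroup limit in the dmesg output below):

apiVersion: v1
kind: Pod
metadata:
  name: manager                        # placeholder
spec:
  containers:
  - name: manager
    image: example.com/manager:latest  # placeholder
    resources:
      requests:
        memory: "256Mi"                # assumed; requests < limits puts the pod in the Burstable QoS class
      limits:
        memory: "512Mi"                # becomes memory.limit_in_bytes (524288kB) of the container's cgroup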

Can you explain this line from the output below:

[Wed Mar  1 09:39:07 2023] memory: usage 524288kB, limit 524288kB, failcnt 574916

Is this the RES (RSS) memory threshold of the "manager" process that invoked the OOM killer?

[Wed Mar  1 09:39:07 2023] manager invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=985
[Wed Mar  1 09:39:07 2023] manager cpuset=cri-containerd-781fb27a383d1309c5395ab1db20b5d6dbad6159be3e6d95d7cabd0c4e57eafd.scope mems_allowed=0
[Wed Mar  1 09:39:07 2023] CPU: 2 PID: 20605 Comm: manager Tainted: G        W        4.15.0-197-generic #208-Ubuntu
[Wed Mar  1 09:39:07 2023] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
[Wed Mar  1 09:39:07 2023] Call Trace:
[Wed Mar  1 09:39:07 2023]  dump_stack+0x6d/0x8b
[Wed Mar  1 09:39:07 2023]  dump_header+0x71/0x282
[Wed Mar  1 09:39:07 2023]  oom_kill_process+0x21f/0x420
[Wed Mar  1 09:39:07 2023]  out_of_memory+0x116/0x4e0
[Wed Mar  1 09:39:07 2023]  mem_cgroup_out_of_memory+0xbb/0xd0
[Wed Mar  1 09:39:07 2023]  mem_cgroup_oom_synchronize+0x2e8/0x320
[Wed Mar  1 09:39:07 2023]  ? mem_cgroup_css_reset+0xe0/0xe0
[Wed Mar  1 09:39:07 2023]  pagefault_out_of_memory+0x13/0x60
[Wed Mar  1 09:39:07 2023]  mm_fault_error+0x90/0x180
[Wed Mar  1 09:39:07 2023]  __do_page_fault+0x46b/0x4b0
[Wed Mar  1 09:39:07 2023]  ? __audit_syscall_exit+0x236/0x2b0
[Wed Mar  1 09:39:07 2023]  do_page_fault+0x2e/0xe0
[Wed Mar  1 09:39:07 2023]  ? page_fault+0x2f/0x50
[Wed Mar  1 09:39:07 2023]  page_fault+0x45/0x50
[Wed Mar  1 09:39:07 2023] RIP: 0033:0x46543c
[Wed Mar  1 09:39:07 2023] RSP: 002b:000000c00c6593e0 EFLAGS: 00010202
[Wed Mar  1 09:39:07 2023] RAX: 0000000000000000 RBX: 0000000000006000 RCX: 0000000000040000
[Wed Mar  1 09:39:07 2023] RDX: 0000000000000000 RSI: 0000000000040000 RDI: 000000c01afaa000
[Wed Mar  1 09:39:07 2023] RBP: 000000c00c659408 R08: 0000000000000070 R09: 0000000000000000
[Wed Mar  1 09:39:07 2023] R10: 000000c00010de60 R11: 0000000000000001 R12: 000000c00010de70
[Wed Mar  1 09:39:07 2023] R13: 0000000000000000 R14: 000000c000503a00 R15: 00000000029fb820
[Wed Mar  1 09:39:07 2023] Task in /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc8e382_bd5b_4164_b430_03fdb8a55cf9.slice/cri-containerd-781fb27a383d1309c5395ab1db20b5d6dbad6159be3e6d95d7cabd0c4e57eafd.scope killed as a result of limit of /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc8e382_bd5b_4164_b430_03fdb8a55cf9.slice/cri-containerd-781fb27a383d1309c5395ab1db20b5d6dbad6159be3e6d95d7cabd0c4e57eafd.scope
[Wed Mar  1 09:39:07 2023] memory: usage 524288kB, limit 524288kB, failcnt 574916
[Wed Mar  1 09:39:07 2023] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[Wed Mar  1 09:39:07 2023] kmem: usage 3556kB, limit 9007199254740988kB, failcnt 0
[Wed Mar  1 09:39:07 2023] Memory cgroup stats for /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc8e382_bd5b_4164_b430_03fdb8a55cf9.slice/cri-containerd-781fb27a383d1309c5395ab1db20b5d6dbad6159be3e6d95d7cabd0c4e57eafd.scope: cache:0KB rss:520588KB rss_huge:221184KB shmem:0KB mapped_file:0KB dirty:0KB writeback:660KB inactive_anon:0KB active_anon:520632KB inactive_file:92KB active_file:8KB unevictable:0KB
[Wed Mar  1 09:39:07 2023] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[Wed Mar  1 09:39:07 2023] [20576] 65532 20576   288460   110217  1085440        0           985 manager
[Wed Mar  1 09:39:07 2023] Memory cgroup out of memory: Kill process 20576 (manager) score 1827 or sacrifice child
[Wed Mar  1 09:39:07 2023] Killed process 20576 (manager) total-vm:1153840kB, anon-rss:434640kB, file-rss:6228kB, shmem-rss:0kB
[Wed Mar  1 09:39:08 2023] oom_reaper: reaped process 20576 (manager), now anon-rss:0kB, file-rss:0kB, shmem-r
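
If I read mm/oom_kill.c in 4.15 correctly, the reported score 1827 follows from the numbers in the task dump above (my own arithmetic, so please correct me if it is off):

totalpages = cgroup limit = 524288 kB / 4 kB per page  = 131072 pages
points     = rss + swapents + pgtables_bytes / 4096    = 110217 + 0 + 265   = 110482 pages
adj        = oom_score_adj * (totalpages / 1000)       = 985 * 131          = 129035 pages
score      = (points + adj) * 1000 / totalpages        = 239517000 / 131072 ≈ 1827

So my reading is that 524288kB is the memory limit of the container's cgroup (all pages charged to it), not a per-process RSS threshold, and the high oom_score_adj=985 is what pushes "manager" to the top of the kill list. Is that correct?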