Score:2

OOM keeps killing VirtualBox

ma flag

For the past few weeks I've had a serious problem: the Out Of Memory killer on my Ubuntu host keeps killing my VirtualBox session (a Win10 instance). I've assigned only 3 GB to Win10 and there are 16 GB on my host, plus as much swap [CORRECTION: see (*) below]. I don't even need to do anything in Windows for this to happen: I can just leave the logon screen up without logging in, and after a few minutes the VM gets reaped.

My guess is that this has nothing to do with VBox itself; it just gets reaped because it's the largest memory consumer.

But what is going on? As soon as used physical memory hits 15 GB, boom: kswapd0 thrashes for 10 minutes and VBox gets killed. The swap hardly gets used at all (it stays below 1 GB of use, according to System Monitor).

Here's what dmesg has to say:

NetworkManager invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
CPU: 7 PID: 1415 Comm: NetworkManager Tainted: G        W  OE     5.19.0-26-generic #27-Ubuntu
Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.14.1 12/18/2021
Call Trace:
<TASK>
show_stack+0x4e/0x61
dump_stack_lvl+0x4a/0x6d
dump_stack+0x10/0x18
dump_header+0x53/0x246
oom_kill_process.cold+0xb/0x10
out_of_memory+0x101/0x2f0
__alloc_pages_may_oom+0x112/0x1e0
__alloc_pages_slowpath.constprop.0+0x4ac/0x9b0
__alloc_pages+0x31d/0x350
alloc_pages+0x90/0x1c0
folio_alloc+0x1d/0x60
filemap_alloc_folio+0x8e/0xb0
__filemap_get_folio+0x1c7/0x3c0
filemap_fault+0x144/0x910
__do_fault+0x39/0x120
do_read_fault+0xf5/0x170
do_fault+0xa6/0x300
handle_pte_fault+0x117/0x240
__handle_mm_fault+0x696/0x740
handle_mm_fault+0xba/0x2a0
do_user_addr_fault+0x1c1/0x680
exc_page_fault+0x80/0x1b0
asm_exc_page_fault+0x27/0x30
RIP: 0033:0x562d5e907c45
Code: Unable to access opcode bytes at RIP 0x562d5e907c1b.
RSP: 002b:00007ffc2b269db0 EFLAGS: 00010286
RAX: 00000000ffffffff RBX: 0000562d60a9dbc0 RCX: 00000000000000ff
RDX: 0000000000143bb5 RSI: 0000562d60a9dc58 RDI: 0000562d60a9d898
RBP: 0000562d60a9d800 R08: 0000000000000187 R09: 0000000000000002
R10: 00007ffc2b269df0 R11: 48164b1643c927fa R12: 0000562d60a9d800
R13: 00007ffc2b269fb8 R14: 00007ffc2b269fc0 R15: 0000562d60a60d90
</TASK>
Mem-Info:
active_anon:488243 inactive_anon:2424656 isolated_anon:0
            active_file:1388 inactive_file:1294 isolated_file:90
            unevictable:38011 dirty:0 writeback:0
            slab_reclaimable:33372 slab_unreclaimable:72134
            mapped:1152675 shmem:677800 pagetables:28551 bounce:0
            kernel_misc_reclaimable:0
            free:66993 free_pcp:3217 free_cma:0
Node 0 active_anon:1952972kB inactive_anon:9698624kB active_file:5552kB inactive_file:5176kB unevictable:152044kB isolated(anon):0kB isolated(file):360kB mapped:4610700kB dirty:0kB writeback:0kB shmem:2711200kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:36912kB pagetables:114204kB all_unreclaimable? no
Node 0 DMA free:13312kB boost:0kB min:64kB low:80kB high:96kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1330 15590 15590 15590
Node 0 DMA32 free:62528kB boost:0kB min:5764kB low:7204kB high:8644kB reserved_highatomic:0KB active_anon:25924kB inactive_anon:614516kB active_file:380kB inactive_file:60kB unevictable:16416kB writepending:0kB present:1547020kB managed:1480888kB mlocked:2024kB bounce:0kB free_pcp:2332kB local_pcp:480kB free_cma:0kB
lowmem_reserve[]: 0 0 14259 14259 14259
Node 0 Normal free:192132kB boost:138936kB min:200688kB low:216124kB high:231560kB reserved_highatomic:2048KB active_anon:1927048kB inactive_anon:9084108kB active_file:5672kB inactive_file:5532kB unevictable:135628kB writepending:0kB present:14934016kB managed:14609576kB mlocked:19264kB bounce:0kB free_pcp:10644kB local_pcp:1396kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0 0
Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 2*2048kB (UM) 2*4096kB (M) = 13312kB
Node 0 DMA32: 208*4kB (UE) 164*8kB (UME) 98*16kB (UME) 56*32kB (UME) 52*64kB (UE) 77*128kB (UME) 129*256kB (UE) 21*512kB (ME) 0*1024kB 0*2048kB 0*4096kB = 62464kB
Node 0 Normal: 8168*4kB (UEH) 7514*8kB (UEH) 4380*16kB (UEH) 852*32kB (UEH) 3*64kB (H) 4*128kB (H) 0*256kB 2*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 191856kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
719398 total pagecache pages
37844 pages in swap cache
Swap cache stats: add 583122, delete 545193, find 135102/172662
Free swap  = 0kB
Total swap = 999420kB
4124257 pages RAM
0 pages HighMem/MovableOnly
97801 pages reserved
0 pages hwpoisoned
Tasks state (memory values in pages):
[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[    747]     0   747    76984      561   638976       42          -250 systemd-journal
[    803]     0   803     7075     1146    73728      140         -1000 systemd-udevd
...
[  97317] 10705 97317  1932577   924002  9097216        0           200 VirtualBoxVM
[  97621] 10705 97621   673626    17766  1384448        0           100 Isolated Web Co
[  97875] 10705 97875  2543969   418177  6213632        0           200 java
[  98221] 10705 98221    60478     2021   212992        0           200 ion.clangd.main
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-10705.slice/user@10705.service/app.slice/app-virtualbox-f53ab59e62d342c192d9fe7f637f3855.scope,task=VirtualBoxVM,pid=97317,uid=10705
Out of memory: Killed process 97317 (VirtualBoxVM) total-vm:7730308kB, anon-rss:338076kB, file-rss:3293128kB, shmem-rss:64804kB, UID:10705 pgtables:8884kB oom_score_adj:200

(*) No, that was wrong:

$ swapon --show
NAME      TYPE      SIZE   USED PRIO
/dev/dm-2 partition 976M 963,4M   -2

It starts to make sense now: I just use too much memory for other things and I don't have enough swap. And since the swap is inside an encrypted LVM, I have no idea how to increase its size with gparted or other tools.

guiverc
cn flag
Please provide Ubuntu release details. Which OOM killer are you talking about? `systemd-oomd` was introduced with 22.04, so is that what you are asking about, or a different one from earlier releases? systemd-oomd can be disabled (questions/answers on how to do that already exist on this site).
dargaud
ma flag
How would I know? `ps` shows `oom_reaper` running in the background. I'm running Kubuntu 22.10 with kernel 5.19.0-26-generic.
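
For what it's worth, the dmesg trace above comes from the kernel's own OOM killer (it logs "Out of memory: Killed process ..." and "global_oom"), not from systemd-oomd. A rough way to tell the two apart, assuming a release that ships systemd-oomd (22.04 or later):

# Is the userspace OOM daemon active at all?
$ systemctl is-active systemd-oomd.service

# Kernel OOM kills end up in the kernel log:
$ journalctl -k | grep -i "killed process"

# systemd-oomd kills are logged under its own unit:
$ journalctl -u systemd-oomd.service

If systemd-oomd were the culprit, it could be disabled with `sudo systemctl disable --now systemd-oomd.service`; here, though, the log shows the kernel reacting to genuine memory exhaustion (Free swap = 0kB).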
Score:0
ma flag

I seem to have solved this by extending my swap LV. This can be done on a live system with a few commands (swapoff / lvresize / mkswap / swapon); see the sketch below.

Here's a good rundown: https://www.thegeekdiary.com/how-to-extend-and-reduce-swap-space-on-lvm2-logical-volume/
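
For reference, a rough sketch of that sequence (the VG/LV names `vgkubuntu`/`swap_1` below are only placeholders, check the real ones with `sudo lvs`, and growing the LV requires free extents in the volume group, see `sudo vgs`):

# Turn the current swap off (needs enough free RAM to absorb whatever is swapped out)
$ sudo swapoff /dev/dm-2

# Grow the swap logical volume, here by 7 GiB
$ sudo lvresize -L +7G /dev/vgkubuntu/swap_1

# Re-create the swap signature on the enlarged LV and re-enable it
$ sudo mkswap /dev/vgkubuntu/swap_1
$ sudo swapon /dev/vgkubuntu/swap_1

# Verify
$ swapon --show

Note that mkswap assigns a new UUID unless you pass the old one with `mkswap -U <uuid>`, so if /etc/fstab or /etc/crypttab references the swap by UUID, it may need updating afterwards.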
