Score:5

High memory usage that cannot be traced to a service or application


On Ubuntu 22.04 I am experiencing a strange issue with memory. I can't seem to find out why my memory usage is as high as it is. When I boot my laptop, memory usage is as expected, but over time something seems to eat up my memory, and the system does not seem to know what it is being used for.

After the laptop has been running for a couple of days and been in sleep overnight, free -m reports the following:

               total        used        free      shared  buff/cache   available
Mem:           14812        7329        2683        1810        4799        5348
Swap:           2047         416        1631

All applications have been closed, and according to this tool the accumulated memory used by all applications is 1.7 GiB. So what is using the remaining ~5.5 GiB (7.2 - 1.7)?
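
A rough command-line cross-check of that figure (summing resident set sizes, which if anything over-counts, since shared pages are counted once per process) would be something like:

ps -eo rss= | awk '{ sum += $1 } END { printf "%.1f GiB\n", sum / 1024 / 1024 }'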

Am I missing something?

[htop screenshot showing memory usage and running processes]

File buffers are generally the answer here. The memory will be released back to applications in the event there is pressure for resources. Otherwise, recently accessed and/or commonly used files will often be kept (partially) in memory for faster access later.
dhojgaard:
Thank you for your comment. I don't think you are right here. Look at the htop screenshot: in the mem-usage bar the blue color represents buffers, and there are almost none. Loads of cache though, and you are right that cache will be freed when necessary, but an almost idle system should not be using 7+ GiB of memory.
user10489:
Looks like your gnome shell has a memory leak. But most of that is virtual, not resident.
dhojgaard:
@user10489 I don't think virtual memory is relevant in this case. Virtual memory is not real memory.
Hi-Angel:
Does running `sudo smem -c "name user pid vss pss rss swap" | awk '{ total_pss += $5; print }; END { print "PSS: " total_pss }'` print a much lower PSS value than "used" as well?
user10489:
@dhojgaard: I agree, that was my point.
dhojgaard:
@Hi-Angel yes, right now it is pretty extreme. used is: 12071480, and PSS is: 57554694. I can only trace 6.2G of the 12.8G of memory used. This is extremely strange.
dhojgaard:
@ArturMeinild Yes, I am using ZFS, but max_arc_size is 512M, as you can see in my htop screenshot. I had to restart my laptop due to running out of memory, so memory has not gone up too much yet. My ps output is here: https://pastebin.com/raw/aYHy376E
Artur Meinild:
It looks like Brave browser is consuming huge amounts of memory. Could you please update your post when the situation worsens with new screenshots and output, so they all show the status at the same moment? Thanks.
dhojgaard:
@ArturMeinild Yeah, I know - I have too many open Grafana dashboards. That sucks up memory easily, but again, that is traceable :) I will update the post when the situation is bad again. It can take some time; it piles up over days and several suspends.
dhojgaard:
@ArturMeinild OK, so memory has piled up a bit more. I have closed most applications to make it easier to trace. See the ps output here: https://pastebin.com/raw/hYBCySfN I can trace 2.2 GB of RAM, but I am using more than 5 GB according to free -m.
Artur Meinild:
Linux rule of thumb: It's either a process or a file(system).
Hi-Angel:
I would report a kernel bug. Though you might have a hard time doing so due to non-upstream modules such as ZFS *(well, you can report to Launchpad, but that's rarely useful; it's always better to make sure a problem reproduces in the upstream version of a project, then report the bug there)*.
Artur Meinild:
@Hi-Angel don't you think it's a good idea to find out if it's ZFS or a ramdisk before filing a bug report?
Hi-Angel:
@ArturMeinild sure. Kernel upstream likely would not accept a bug report involving out-of-tree modules, so if the OP wanted to report it, they would have to reproduce it without ZFS anyway.
Artur Meinild:
@dhojgaard did you read my answer, and did you check if any of the possible causes apply? Or maybe you can think of other filesystem-related causes applicable to you, and I'll happily add them to the answer in any case.
Hi-Angel:
Can you also show the output of `cat /proc/meminfo`?
dhojgaard:
@Hi-Angel sure, it's here: https://pastebin.com/raw/NmgF4frY And the data should be good. Right now I have 4-5 GB of memory used that cannot be traced.
Hi-Angel:
Hmm, well, I see 6.4G being used in AnonPages. That's mapped memory, [usually that's basically what processes consist of](https://unix.stackexchange.com/a/677020/59928). But in your case, IIUC, the actual usage by processes is much lower? Well, if you want to dig deeper, I think you could ask on a kernel mailing list how to break down what these AnonPages belong to. Most likely it's ZFS, as others mentioned *(remember it's an out-of-tree module, so it may do weird things; not to say it doesn't warrant a bug report though)*, but yeah, it would be nice to pin that down for sure.
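
(For a quick look at just the fields being compared here, something like `grep -E '^(MemTotal|MemAvailable|Buffers|Cached|Shmem|AnonPages|Slab):' /proc/meminfo` keeps the output manageable.)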
Artur Meinild:
Does `sudo slabtop` reveal anything? I know this is late, but I just became aware of this tool.
Hi-Angel:
@ArturMeinild judging by `man free`, slab memory is included in the `buff/cache` column, and since the "leak" is in `used` instead, it can't be in slabs. So I doubt `slabtop` would be useful in this case.
Score:8

This answer is written in an attempt to troubleshoot possible causes of "unaccounted" memory usage, which may go beyond what the OP is experiencing.

Background

In Linux, memory usage can largely be attributed to two elements:

  • Usage by processes/threads
  • Usage by a filesystem

In the case of this question, the OP can't account for all used memory by processes/threads. Hence, it's very likely that the remaining memory is used by the kernel for filesystem operations.

Probable causes

I'm currently aware of 3 main reasons for excessive filesystem memory usage:

  1. ZFS ARC cache (or a similar filesystem cache that registers as "normal" memory usage)
  2. Ramdisks (tmpfs and similar filesystems)
  3. Native filesystem compression combined with heavy disk I/O (so that compression/recompression happens constantly, with ongoing memory consumption as a result)

To troubleshoot these 3 scenarios, I would recommend the following:

1. ZFS ARC cache

Run this command to get detailed information on ZFS ARC cache:

arcstat -a 

The value reported as size will show up as normal used memory.

From your post, it's obvious that 0.5 GB is used here.
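
If arcstat isn't installed, the same figure can also be read straight from the ARC statistics exposed by the OpenZFS kernel module (paths assumed from Ubuntu's zfs packages), for example:

awk '/^size / { printf "ARC size: %.1f MiB\n", $3 / 1048576 }' /proc/spl/kstat/zfs/arcstats
cat /sys/module/zfs/parameters/zfs_arc_max

The second command shows the configured ARC ceiling, where 0 means the module's built-in default.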

2. Ramdisks

Run this command to get information about any current tmpfs filesystem ramdisks:

df -hl -t tmpfs

Any amount reported as used will show up as normal used memory.

Ubuntu (and many other Linux variants) has a default ramdisk under /dev/shm. Applications may use this space, which you can easily check with:

ls -ahl /dev/shm

In addition, tmpfs usage is also shown under shared in the free output, so from your post it's obvious that 1.8 GB is used here.
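
To get a single total across all tmpfs mounts, GNU df's --total flag can help, for example:

df -h --total -t tmpfs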

3. FS compression

Check for disk I/O with this command (iostat is part of the sysstat package):

iostat

Combine this with knowledge about enabled filesystem compression for the relevant disks. For ZFS, run this command to get compression properties for all Zpools and datasets:

zfs get compression

If you have high disk I/O combined with native compression enabled, this can result in excess memory being reported as normal used memory.
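
As a rough sketch, you could watch per-device I/O for a short window and list the compression settings side by side (the pool name rpool below is just a placeholder):

iostat -dx 5 3
zfs get -r compression,compressratio rpool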

dhojgaard:
Thank you for your answer. I cannot accept it as a correct answer this time as I cannot verify that ZFS is actually using the memory.
Artur Meinild:
@dhojgaard, well, in your case it seems at least 2.3 GB in the original post is accounted for. You have 0.5 GB ARC cache and 1.8 GB shared memory (most likely `tmpfs`), and the rest could probably be ZFS compression. So it seems all 3 elements from my answer are coming into play for you.
Score:6

Are you using compression on your filesystem or a copy-on-write filesystem like BTRFS or ZFS?

It's possible that the excessive memory usage is due to the kernel trying to (re)compress files in real time, or copying/versioning each write when big files change often.

Memory pressure becomes especially noticeable if you have large files like images for virtual machines where files change often and are sized in gigabytes instead of megabytes.

You can look into disabling compression at the filesystem/subvolume/volume level or for individual files, and/or disabling COW (copy-on-write).

BTRFS:
Removing compression for particular files is supported with the btrfs command:

btrfs property set <file> compression none

To remove COW (copy-on-write) for a particular folder:

chattr -R +C /directory/of/your/vm_images/
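
To confirm the attribute took effect, lsattr should show the C flag on the directory (path reused from the example above):

lsattr -d /directory/of/your/vm_images/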

ZFS:
I don't believe you can remove compression on individual files. Disabling compression on a volume will only affect future writes, so you will need to copy each file individually for existing files to have compression disabled.

zfs set compression=off name_of_zfs_vol
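
To check how much of the existing data is still stored compressed afterwards, the read-only compressratio property can be queried; a value above 1.00x means old compressed blocks remain:

zfs get compressratio name_of_zfs_vol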
Artur Meinild:
This is a really plausible explanation.
Artur Meinild:
You are correct, ZFS compression is configured for the entire volume and datasets.
Hi-Angel:
If a compression buffer isn't being accounted for in the `buff/cache` column of `free`, then I'd believe that's a bug, either in the kernel or in the filesystem module.
dhojgaard:
Thank you for your answer. I cannot accept it as a correct answer this time as I cannot verify that ZFS is actually using the memory.
dhojgaard:
However, I can confirm that I am using ZFS with compression together with LXD containers, so your explanation is quite plausible.