Some thinking and judgement is required on your part to understand your environment and do capacity planning. What does it mean to the organization that this host is performing well? Hint: users care about whether "it's slow", not about memory or CPU utilization. How far can you push utilization and still keep an adequate safety margin?
You appear to be using glances, a Python resource monitor built on psutil. At first glance (ha) it has decent UX, sane data sources, and it tells you about alerts, nice. The default memory alert thresholds are 50%, 70%, and 90%, which to me is fairly conservative: they escalate from "more than enough" through "concerning" to "heavy memory pressure". Check whether that makes sense in your environment and configure different thresholds if necessary.
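If you do change them, glances reads thresholds from its config file (typically ~/.config/glances/glances.conf or /etc/glances/glances.conf). A sketch with made-up numbers; the section and key names below match current glances versions, but check the docs for yours:

```ini
[mem]
# defaults are careful=50, warning=70, critical=90; the values below are examples only
careful=60
warning=80
critical=95
```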
But percent of what memory metric? On Linux, glances defers to psutil, and psutil calculates the percentage as total minus available, divided by total. That is a reasonable thing to do: caches and other easily reclaimable memory are excluded from the ratio. There are legacy fallback calculations for old kernels, though, so how this is measured can vary.
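You can see the calculation directly; a minimal sketch using psutil (the same library glances uses):

```python
import psutil

vm = psutil.virtual_memory()
# On modern Linux kernels psutil derives the percentage from MemAvailable:
# percent ~= (total - available) / total * 100
derived = (vm.total - vm.available) / vm.total * 100
print(f"reported: {vm.percent:.1f}%  derived: {derived:.1f}%")
print(f"total: {vm.total >> 20} MiB  available: {vm.available >> 20} MiB")
```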
During these "high" memory consumption alerts, collect raw /proc/meminfo
output and analyze it. It is possible for memory consumption to exist outside the address space of process. Including shared memory segments, or kernel data structures.
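One way to capture that evidence automatically; a rough sketch, not a finished daemon, where the threshold and output directory are placeholders you should align with your alert levels:

```python
import shutil
import time

import psutil

THRESHOLD = 70.0      # hypothetical; match your glances "warning" level
OUT_DIR = "/var/tmp"  # somewhere persistent enough to analyze later

while True:
    if psutil.virtual_memory().percent >= THRESHOLD:
        # keep a timestamped raw copy of /proc/meminfo for later comparison
        shutil.copy("/proc/meminfo", f"{OUT_DIR}/meminfo.{int(time.time())}")
    time.sleep(60)
```

When you diff the snapshots, fields like Shmem, Slab, SUnreclaim, and HugePages_Total are the usual suspects for memory that no single process owns.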
This host is a systemd system that runs Docker and a few other things. Get memory use per control group by running systemd-cgtop --order=memory and docker stats.
Per-group stats are often easier to understand than accounting for the many individual processes on the system. Containers may also still exist, and hold memory in their cgroups, even though most of their processes have stopped.
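If you want the same per-cgroup numbers without the interactive view, here is a rough sketch that walks a cgroup v2 (unified) hierarchy; it assumes the hierarchy is mounted at /sys/fs/cgroup, and on cgroup v1 the file is memory.usage_in_bytes instead of memory.current:

```python
import os

def cgroup_memory(root="/sys/fs/cgroup"):
    """Return bytes of memory charged to each cgroup under root."""
    usage = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        if "memory.current" in filenames:
            try:
                with open(os.path.join(dirpath, "memory.current")) as f:
                    usage[os.path.relpath(dirpath, root)] = int(f.read())
            except OSError:
                pass  # a cgroup can disappear while we walk
    return usage

# Show the ten largest cgroups, roughly what systemd-cgtop --order=memory reports.
for path, current in sorted(cgroup_memory().items(), key=lambda kv: -kv[1])[:10]:
    print(f"{current / 2**20:10.1f} MiB  {path}")
```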