I'm currently running into an issue where one of our Proxmox VMs, running Debian 11, suffers from very sudden CPU overloads. This already happened last week and again today. When it happens, the server is completely unresponsive; we can't even access it through the Proxmox console, as it won't accept any input. This is what the CPU graph (average) looks like:

Memory, network, and disk usage don't show any sudden spikes when this happens; it's only the CPU maxing out. The VM has two virtual cores, so I suppose the problem lies with a single application saturating one core.
The VM is used as a staging environment for several customer projects. There are several applications running, including PostgreSQL, Node.js, and PHP. We have a New Relic agent running on the machine and have checked the process history:

As you can see, some Node.js application seems to be the culprit, but the affected process doesn't show any details. Now the question is: how do we diagnose this? There are multiple Node.js apps running through PM2 on the machine, and since we can't access the Proxmox console or SSH in during the overload, we're unable to check the PM2 process list at the moment it happens. We have also checked various logs in /var/log but were unable to find anything related to this.
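For context, one stopgap we're considering is logging a process snapshot to disk every minute, so that after the next forced reset we can at least see which PM2 app was eating the CPU. A rough sketch of what we have in mind (the file paths, the schedule, and the "deploy" user are just assumptions, not our actual setup):

```
# Hypothetical /etc/cron.d/cpu-snapshot: every minute, append a timestamped list of the
# top CPU consumers so the evidence survives a hard reset. Note the escaped \% -- in
# crontab lines a bare % is treated as a newline.
* * * * * root   { date; ps aux --sort=-\%cpu | head -n 15; } >> /var/log/cpu-snapshot.log 2>&1

# Same idea for the PM2 process table; "deploy" is a placeholder for whichever user runs PM2,
# and pm2 may need its full path since cron's PATH is minimal.
* * * * * deploy { date; pm2 jlist; } >> "$HOME/pm2-snapshot.log" 2>&1
```

We haven't deployed this yet, so we'd also welcome better approaches to capturing the culprit.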
Any ideas?