What the Linux kernel calls a PID (a task) is not strictly what ps or top calls a PID. Each kernel task also carries a thread group ID (TGID) identifying the "heavyweight" process. Heavy in the sense that in a multi-threaded program, multiple kernel PIDs (one per thread) share a TGID and an address space. This is why a single java process can appear to use more than 100% of a CPU in some performance monitoring tools.
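To see the mapping for yourself, here is a minimal sketch in Python, assuming a Linux /proc filesystem: every entry under /proc/&lt;pid&gt;/task is a kernel task (thread), and the Tgid field in /proc/&lt;pid&gt;/status names the thread group it belongs to.

```python
#!/usr/bin/env python3
"""Sketch: map a kernel PID (thread) to its thread group ID via /proc."""

import os
import sys

def tgid_of(pid: int) -> int:
    """Read the Tgid field from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("Tgid:"):
                return int(line.split()[1])
    raise ValueError(f"no Tgid found for pid {pid}")

def threads_of(tgid: int) -> list[int]:
    """List every kernel task (thread) belonging to this thread group."""
    return sorted(int(t) for t in os.listdir(f"/proc/{tgid}/task"))

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    tgid = tgid_of(pid)
    print(f"pid {pid} belongs to tgid {tgid}")
    print(f"threads in that group: {threads_of(tgid)}")
```

Run it against a busy JVM's PID and the second line is where the "one process, many kernel PIDs" effect shows up.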
"invoked oom-killer" header lines at the start show the unlucky task on CPU, and the stack up to that point. This may not be the task to "blame" for the OOM, and it also might not be killed if sysctl oom_kill_allocating_task is not set. But it probably just did a memory allocation.
"Tasks state" list, if enabled via sysctl:
Dumps the current memory state of all eligible tasks. Tasks not in
the same memcg, not in the same cpuset, or bound to a disjoint set of
mempolicy nodes are not shown.
In other words, this is a best-effort list of the processes on the system that could be killed. Note that "tgid" is one of the columns, to assist with tracking down multi-threaded thread groups. With cgroups in use, such as when systemd contains units, this can be a much shorter list than the entire system.
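As a rough userspace analogue of that dump, here is a sketch that walks /proc and prints each task's pid, tgid, RSS, and oom_score_adj. Unlike the kernel, it does not apply the memcg, cpuset, or mempolicy filtering described above; it only shows where the same numbers live.

```python
#!/usr/bin/env python3
"""Sketch: approximate the OOM killer's task dump from /proc."""

import os

def read_status(pid: str) -> dict:
    """Parse /proc/<pid>/status into a dict of field name -> value."""
    fields = {}
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                key, _, value = line.partition(":")
                fields[key] = value.strip()
    except OSError:
        pass  # task exited while we were reading it
    return fields

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

print(f"{'pid':>7} {'tgid':>7} {'rss':>12} {'oom_score_adj':>14}  name")
for pid in filter(str.isdigit, os.listdir("/proc")):
    st = read_status(pid)
    if "VmRSS" not in st:
        continue  # kernel threads have no userspace memory to dump
    try:
        adj = read_file(f"/proc/{pid}/oom_score_adj")
    except OSError:
        adj = "?"
    print(f"{pid:>7} {st.get('Tgid', '?'):>7} {st.get('VmRSS', '?'):>12} "
          f"{adj:>14}  {st.get('Name', '?')}")
```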
The kernel takes a very basic guess at each task's "badness", based primarily on the ratio of the task's memory pages to total system memory pages. Any "Killed process" message shows the details of the victim task, forcibly terminated via SIGKILL. That signal terminates the entire thread group.
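You do not have to re-derive the heuristic: the kernel exposes its current per-task estimate as /proc/&lt;pid&gt;/oom_score. A sketch that prints the top candidates by that score, with their RSS for context:

```python
"""Sketch: list the tasks the kernel currently scores highest for OOM."""

import os

def oom_score(pid: str) -> int:
    with open(f"/proc/{pid}/oom_score") as f:
        return int(f.read())

def rss_kib(pid: str) -> int:
    # /proc/<pid>/statm is in pages; the second field is resident set size
    with open(f"/proc/{pid}/statm") as f:
        pages = int(f.read().split()[1])
    return pages * os.sysconf("SC_PAGE_SIZE") // 1024

scores = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
        scores.append((oom_score(pid), rss_kib(pid), pid, name))
    except OSError:
        continue  # task exited or is inaccessible

for score, rss, pid, name in sorted(scores, reverse=True)[:10]:
    print(f"oom_score={score:<5} rss={rss:>10} KiB  pid={pid:<7} {name}")
```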
None of these tasks are proven to be a "culprit". This is merely what the kernel can easily show you: what was just on CPU, some more tasks with their TGIDs for your convenience, and that killing something with a relatively large number of pages might save the system.
Realize that running out of memory is a dire situation. The system is considering crashing programs and possibly causing data loss. There is not much room to be clever.
If anything, your effort and cleverness are better spent on capacity planning. Find out how these services are memory-constrained, whether by a service manager or by a container. Observe memory consumption, per cgroup and system-wide. Come up with a memory sizing formula: so many GB for the services, a bit for the kernel and administrative processes, and a few percent margin for safety. Adjust until you no longer get OOM kills.
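The arithmetic can be as plain as the sketch below. Every number and unit name in it is a hypothetical placeholder; substitute your own service limits and reserves.

```python
"""Sketch of the memory sizing arithmetic, with made-up numbers."""

GIB = 1024 ** 3

service_limits = {           # hypothetical per-service memory limits
    "app.service":   8 * GIB,
    "db.service":   16 * GIB,
    "cache.service": 4 * GIB,
}
kernel_and_admin = 2 * GIB   # assumed reserve for kernel, sshd, monitoring, etc.
safety_margin = 0.10         # assumed 10% headroom

needed = sum(service_limits.values()) + kernel_and_admin
recommended = needed * (1 + safety_margin)
print(f"services: {sum(service_limits.values()) / GIB:.1f} GiB")
print(f"plus kernel/admin reserve: {needed / GIB:.1f} GiB")
print(f"recommended RAM with {safety_margin:.0%} margin: {recommended / GIB:.1f} GiB")
```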