You can set CPU requests and limits for your containers.
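For example, a container spec along these lines (the name, image, and values are placeholders) sets a modest CPU request and a more generous limit, which leaves headroom for bursting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                    # illustrative name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0    # placeholder image
    resources:
      requests:
        cpu: "500m"      # what the scheduler reserves for steady state
        memory: "1Gi"
      limits:
        cpu: "2"         # generous ceiling so the JVM can burst at startup
        memory: "1Gi"
```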
Once you set these, the kubelet and the container runtime work together to enforce the CPU limits, even if those limits are generous. Alongside that, you can reserve resources for Kubernetes itself so that the workload doesn't put the overall node at risk.
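One way to make that reservation is through the kubelet's configuration; the sketch below assumes you manage nodes with a KubeletConfiguration file (the equivalent kubelet flags work too), and the amounts are placeholders to size for your own nodes:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Reserve capacity for the kubelet and other Kubernetes daemons
kubeReserved:
  cpu: "500m"
  memory: "500Mi"
# Reserve capacity for the operating system itself
systemReserved:
  cpu: "500m"
  memory: "500Mi"
```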
However you configure them, it is ultimately the Linux kernel, via cgroups, that enforces the limits and shares the available resources fairly between containers.
If you have DaemonSets in your cluster, make sure that these too have resource requests and limits. You could consider running the DaemonSets in the Guaranteed QoS class, so that their resources are ring-fenced.
See Configure Quality of Service for Pods.
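As a rough sketch (the names and image are illustrative), a DaemonSet whose containers set requests equal to limits for every resource lands in the Guaranteed QoS class:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                          # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example/agent:1.0   # placeholder image
        resources:
          # requests == limits for every resource, so the Pod is Guaranteed
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "100m"
            memory: "128Mi"
```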
Taken together, these measures should protect your nodes from the workload whilst still allowing the app Pods to burst into available CPU during startup.
If you find there are still issues, there is an extra step you can take: delay each startup by a random amount. You can do that without any app changes, by running a custom init container before the main app container starts. That random delay helps to avoid thundering-herd issues, where every JVM starts up with the same resource access pattern at the same time.
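A minimal sketch of that idea, assuming a small shell image such as busybox is acceptable in your cluster (the image, names, and the 30-second range are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  initContainers:
  - name: random-delay
    image: busybox:1.36        # any small image with a shell and awk works
    command:
    - sh
    - -c
    # Sleep for a pseudo-random number of seconds (0 to 29) before the app starts
    - sleep $(awk 'BEGIN { srand(); print int(rand() * 30) }')
  containers:
  - name: app
    image: registry.example/app:1.0    # placeholder image
```

Because each Pod picks its delay independently, the CPU-hungry JVM startup phases spread out instead of all hitting the node at once.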