I am testing out K8s runtime classes, and have successfully fired up some pods using containerd & gVisor.
To do this, I changed /etc/containerd/config.toml to the following, then restarted the containerd service:
disabled_plugins = ["restart"]
[plugins.linux]
  shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
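(For completeness, the restart and a quick sanity check on a systemd-based node looked roughly like the below; the crictl check is optional and its output format varies by containerd version.)

sudo systemctl restart containerd
# optional: the CRI config dump should now mention the runsc handler
sudo crictl info | grep -i runsc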
This removes the default_runtime_name = "runc" setting that was present in the default config in /etc/containerd/config.toml (which was originally generated using containerd config default before the cluster was built with kubeadm).
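For reference, the removed setting can be located in the generated default config like this (the output shown in the comment is indicative only; the surrounding section names differ between containerd versions):

containerd config default | grep -n default_runtime_name
# prints something like:
#   default_runtime_name = "runc"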
Then I created a runtime class that uses runsc, and referenced it in my pod manifest with runtimeClassName: gvisor
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
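I applied it and checked it was registered with something like the below (the file name is just what I used locally):

kubectl apply -f gvisor-runtimeclass.yaml
kubectl get runtimeclass gvisor
# NAME     HANDLER   AGE
# gvisor   runsc     ...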
and then finally fired up a pod that uses the new runtime class:
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
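To double-check that this pod really ran under gVisor I used the dmesg trick from the gVisor docs: inside a runsc sandbox, dmesg prints gVisor's own boot messages rather than the host kernel's (exact output will differ).

kubectl apply -f gvisor-pod.yaml
kubectl exec gvisor-pod -- dmesg | head -n 3
# first line is something like: [ 0.000000] Starting gVisor...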
But of course if I then do a normal kubectl run pod1 --image nginx (i.e. without specifying runtimeClassName: gvisor in a manifest to make it use my new runtime class), it still starts up fine using the runc shim.
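(I verified this by checking that the pod spec has no runtime class set and by looking at the shim processes on the node; the shim process name varies with containerd version and runtime type, so treat the grep as a rough check.)

kubectl get pod pod1 -o jsonpath='{.spec.runtimeClassName}'   # prints nothing, no runtime class set
ps aux | grep containerd-shim                                  # pod1's sandbox is handled by the runc shim, not runsc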
As per the docs:
If no runtimeClassName is specified, the default RuntimeHandler will be used,
which is equivalent to the behaviour when the RuntimeClass feature is disabled.
My question is: without default_runtime_name = "runc" in the containerd config file, how does kubelet/containerd still know to use runc when a custom runtime class/handler is not specified in the pod manifest? I.e., where is this default RuntimeHandler configured?