If you need access to the underlying Nodes of your Kubernetes cluster but don't have direct access to them (usually because Kubernetes is hosted elsewhere), you can use the following DaemonSet to create privileged Pods that you can log in to with kubectl exec. Each Pod shares the Node's network, process, and IPC namespaces, and mounts the Node's entire filesystem under /node-fs. To get a Node console that behaves just as if you had SSHd in, log in to the Pod and run chroot /node-fs. It is inadvisable to leave this running, but it helps when you genuinely need Node access. Because it is a DaemonSet, it starts one of these Pods on every Node.
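Once the DaemonSet below is deployed, a session might look like the following. These commands assume a working kubectl context, and the Pod name shown is hypothetical; use whatever name kubectl reports in your cluster:

```shell
# List the privileged Pods and see which Node each one landed on
kubectl get pods -l mydaemon=privpod -o wide

# Open an interactive shell in the Pod on the Node you care about
# (privpod-x7k2m is an illustrative Pod name, not a real one)
kubectl exec -it privpod-x7k2m -- /bin/sh

# Inside the Pod, switch into the Node's filesystem so the session
# looks like an SSH login on the Node itself
chroot /node-fs
```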
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: privpod
spec:
  selector:
    matchLabels:
      mydaemon: privpod
  template:
    metadata:
      labels:
        mydaemon: privpod
    spec:
      hostNetwork: true
      hostPID: true
      hostIPC: true
      containers:
      - name: privcontainer
        image: johnnyb61820/network-toolkit
        securityContext:
          privileged: true
        command:
        - tail
        - "-f"
        - /dev/null
        volumeMounts:
        - name: nodefs
          mountPath: /node-fs
        - name: devfs
          mountPath: /dev
      volumes:
      - name: nodefs
        hostPath:
          path: /
      - name: devfs
        hostPath:
          path: /dev
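To try this out, you might save the manifest to a file and apply it, then delete the DaemonSet as soon as you are finished, since it should not be left running. The filename here is an assumption; name the file whatever you like:

```shell
# Create the DaemonSet (one privileged Pod per Node)
kubectl apply -f privpod.yaml

# Confirm that a Pod is scheduled and ready on every Node
kubectl get daemonset privpod

# Remove it when you are done
kubectl delete daemonset privpod
```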
This is from Appendix C.13 of Cloud Native Applications with Docker and Kubernetes. I've found it especially useful when I need to deal with physical drives or something similar. It's not something you should leave running, but it helps when you're in a pinch.