I have written infrastructure code that provisions AWS EC2 instances to serve as nodes where my containers will run, so by default each of these instances has Docker installed. This part is working fine.
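For context, the Docker setup in my user data looks roughly like the sketch below (assuming Amazon Linux 2; the exact packages and commands would differ on another distro):

```bash
#!/bin/bash
# EC2 user data sketch: install and start Docker on Amazon Linux 2
yum update -y
amazon-linux-extras install -y docker  # Docker package for Amazon Linux 2
systemctl enable --now docker          # start Docker now and on every boot
usermod -aG docker ec2-user            # let ec2-user run docker without sudo
```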
Eventually I would like to have all these containers managed by Kubernetes, preferably EKS. I don't have any experience with EKS yet, but I think it would require some Kubernetes agent or client running on the instances I'm building. I'm trying to figure out what those components are.
I'm looking at the documentation here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
And it states the following:
> You will install these packages on all of your machines:
>
> - **kubeadm**: the command to bootstrap the cluster.
> - **kubelet**: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
> - **kubectl**: the command line util to talk to your cluster.
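For reference, the Debian/Ubuntu install steps that page gives are along these lines (the pinned minor version here is just an example):

```bash
# Add the upstream Kubernetes apt repository (v1.30 pinned as an example)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the three packages and hold them at the current version
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```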
Those instructions don't seem right for my case. From reading various sources, I was under the impression that only kubelet and kube-proxy needed to be installed on the worker nodes/instances. Can anyone confirm? If so, how would I install just the required components on these instances so they can be added as nodes? Is that something I should script myself, or does EKS handle that sort of thing?
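For example, is the right approach to launch the nodes from the Amazon EKS-optimized AMI, which (as I understand it) already ships kubelet and a bootstrap script, and just call that script from user data? A sketch of what I mean, with the cluster name as a placeholder:

```bash
#!/bin/bash
# EC2 user data sketch for a node launched from an Amazon EKS-optimized AMI.
# The AMI already bundles kubelet and the bootstrap script below; kube-proxy
# is deployed by EKS as a DaemonSet rather than installed as a host package.
/etc/eks/bootstrap.sh my-eks-cluster  # "my-eks-cluster" is a placeholder name
```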
The other part of this question is...
Is it good practice to add, as nodes, existing instances that would run containers but that also currently have other applications and services running directly on the host?
Thanks much.