Background
We use Ansible/AWX running in Kubernetes (awx-operator) to manage workstations with dynamic hostnames. Authentication is handled by Red Hat IDM (FreeIPA) and the same credentials are used to log in to each workstation.
The problem
Hostnames don't always update immediately, and sometimes Ansible connects to the wrong host (e.g. DNS resolves `test1.domain.local` -> `172.1.1.10`, but that IP now actually belongs to `test2.domain.local`). Because the same credentials work to log in to `test2.domain.local`, Ansible doesn't know it's connected to the wrong host and proceeds on its merry way, wreaking all sorts of havoc.
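To make the failure mode concrete, a pre-task like the following sketch would at least catch the mix-up, assuming inventory names match the machines' short hostnames (which they do in my case):

```yaml
# Sketch: fail fast if the machine Ansible reached is not the one
# named in the inventory (assumes inventory names are the hosts'
# short hostnames).
- name: Verify we reached the intended host
  ansible.builtin.assert:
    that:
      - ansible_hostname == inventory_hostname_short
    fail_msg: >-
      Reached {{ ansible_hostname }} but expected
      {{ inventory_hostname_short }}
```

This only runs after facts are gathered, though, so it's a belt-and-braces check rather than a real fix.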
Workaround/attempted solutions
Note: I do plan to address the underlying issue of changing hostnames (e.g. DHCP reservations or static IPs), but I would also like to solve the problem of ansible happily connecting to the wrong host, and I would think there would be an easy solution out there.
I've implemented a workaround using a static `known_hosts` file (generated from IDM using `sss_ssh_knownhostsproxy`) and passed it through to the EE using a volume mount. This works, but it's not great: there is no way to dynamically update it as hosts are added/removed/rebuilt in IDM.
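For regenerating that file, I've been considering something along these lines (a sketch only; it assumes the `ipa` CLI with a valid Kerberos ticket, and that `ipa host-find --raw` / `ipa host-show --raw` print attribute lines named `fqdn:` and `ipasshpubkey:`, which are the FreeIPA LDAP attribute names):

```shell
# Sketch: rebuild known_hosts from the host entries stored in IDM.
# Assumptions: `ipa` CLI is available, a valid Kerberos ticket exists,
# and --raw output uses "fqdn:" / "ipasshpubkey:" attribute lines.

# Turn one host's `ipa host-show --raw` output (stdin) into known_hosts lines.
to_known_hosts() {
  awk -v host="$1" '$1 == "ipasshpubkey:" { $1 = ""; sub(/^ /, ""); print host " " $0 }'
}

regen_known_hosts() {
  : > known_hosts.new
  for h in $(ipa host-find --raw | awk '$1 == "fqdn:" { print $2 }'); do
    ipa host-show "$h" --raw | to_known_hosts "$h" >> known_hosts.new
  done
  mv known_hosts.new known_hosts  # swap in the new file for the mount
}
```

But that still leaves the question of how to get the refreshed file into the EE, which is why I'd rather have the lookup happen live.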
Ideally I would just use `sss_ssh_knownhostsproxy` from inside the EE (as a `ProxyCommand` in `ssh_config`, just like on the host), but this appears to rely on the host being enrolled in IDM itself (I'm not really sure here, but running `sss_ssh_knownhostsproxy` from inside the container does not work).
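For reference, the host-side wiring I mean is the standard sssd ssh integration in `ssh_config`; this works on an enrolled client, but inside the EE it fails, presumably because there is no running sssd joined to the domain:

```
Host *
    ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
    GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts
```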
I also tried a Kerberos principal in IDM with a keytab in the container, but this appears to open a whole other can of worms: files required from the host, plus the dynamic nature of the container's hostname. Not to mention I have no idea whether simply having a valid Kerberos ticket would resolve the issue, or whether there's yet another can of worms after this one.
Questions
- Do others validate host keys when connecting with Ansible? There does not appear to be much documentation on enabling `StrictHostKeyChecking` in Ansible; everything I see advises disabling it, but I would think that would be bad practice.
- How do others handle service auth inside k8s? I'm a Kubernetes newbie so any best practices would be helpful.
- How does `sss_ssh_knownhostsproxy` work? What files/configuration specifically does it rely on?
- How would you handle my issue? Am I on the right track with `StrictHostKeyChecking` and `sss_ssh_knownhostsproxy`, or am I missing something else obvious?
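For context on the first question, this is roughly the configuration I mean (a sketch; `/runner` is the default working directory inside an AWX EE, and the `known_hosts` path would need to match wherever the file is mounted):

```ini
[defaults]
host_key_checking = True

[ssh_connection]
ssh_args = -o StrictHostKeyChecking=yes -o UserKnownHostsFile=/runner/known_hosts
```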