I have three EC2 instances in a single VPC and subnet.
Each instance has an Elastic IP and a Route 53 domain name pointing to it. That domain name is also set as the hostname in Amazon Linux 2. I can use these hostnames when browsing directly to the web applications or connecting via SSH, and the terminal identifies the machines by this name as well, i.e. ec2-user@domain-name.
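For reference, each Route 53 entry is just a plain A record on the instance's Elastic IP. Expressed as a boto3 call it would be roughly the following (zone ID, record name, and address are placeholders, not my real values):

```python
import boto3

# Sketch only: upserts an A record pointing a Route 53 name at an Elastic IP.
# The hosted zone ID, record name, and IP below are placeholders.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app1.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)
```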
Each is running a separate web app of a distributed platform. They need to be reachable from the internet (they are), and they need to be reachable from each other, which is where the problem lies.
I would like to configure the web apps to reach each other using their DNS names, but when they try to communicate between instances I get No route to host <dns>/<elastic ip>:port.
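To take the web apps themselves out of the equation, the failure can be reproduced with a bare TCP connection from one instance to another's DNS name. A minimal sketch, with the hostname and port as placeholders for my actual values:

```python
import socket

# Placeholder name/port standing in for one of the real instances and its app port.
HOST = "app2.example.com"
PORT = 8080

try:
    # Plain TCP connect, the same path the web apps take when calling each other.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        print("connected to", sock.getpeername())
except OSError as err:
    # From instance to instance this fails (e.g. "[Errno 113] No route to host"),
    # while the same connection from my own machine over the internet succeeds.
    print("connection failed:", err)
```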
So I figured the route table on the VPC needed to know that these Elastic IPs are associated with specific instances in the VPC. I added routes with each EIP as the destination and the corresponding instance as the target, but now attempted connections between the servers just time out.
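Each route I added was roughly the equivalent of this boto3 call, repeated once per instance (route table ID, Elastic IP, and instance ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Sketch of the route I added, one per instance; all IDs and addresses are placeholders.
# Destination is that instance's Elastic IP as a /32, target is the instance itself.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="203.0.113.10/32",
    InstanceId="i-0123456789abcdef0",
)
```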
I'm clearly missing something, but short of taking a full course on AWS networking (I'm getting there as time allows), most of the material I have found stops at making a single web server publicly accessible and then jumps straight to VPC peering.
I'm just trying to get these instances to behave as if the Route 53 name were the proper FQDN, so that the name is how each server is referenced regardless of where I am connecting from.