I have cloned an existing Ubuntu image from our AWS server, upgraded it to 20.04, and am now trying to configure its connection to the outside world. I've been running into a lot of issues on the frontend, where I get 401 authentication failures every time I try to log in via the web interface.
I did some digging and a lot of googling, and I found that the server was trying to resolve to the old server, instead of the new one.
Old Server IP = 172.9.8.7
New Server IP = 172.1.2.3
When I run route -n on my new server, I see the following:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.9.8.7       0.0.0.0         UG    0      0        0 eth0
172.9.8.7       0.0.0.0         255.255.240.0   U     0      0        0 eth0
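In case it helps, this is roughly how I've been cross-checking the same information with iproute2; the interface name eth0 comes from the output above, and the gateway address in the last command is only a hypothetical placeholder:

$ ip route show                                          # iproute2 equivalent of route -n
$ ip addr show dev eth0                                  # addresses assigned to eth0
$ sudo ip route replace default via 172.1.2.1 dev eth0   # hypothetical new gateway; not persistent across reboots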
So I did some googling on a few forums and found that /etc/resolv.conf was pointing to the wrong file. I changed it to point to /run/systemd/resolve/resolv.conf, which still contained the old server IP, so I edited that file to use the new one.
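For reference, this is roughly how I checked which resolv.conf is actually in use; a stock 20.04 install with systemd-resolved is assumed here:

$ ls -l /etc/resolv.conf                  # normally a symlink on 20.04
$ resolvectl status                       # DNS servers systemd-resolved is actually using
$ cat /run/systemd/resolve/resolv.conf    # the file I edited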
I then restarted the resolvconf service and saw this:
$ sudo systemctl restart resolvconf.service
sudo: unable to resolve host ip-172-1-2-3: Temporary failure in name resolution
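In case it's relevant, here is roughly how I checked the hostname mapping that sudo complains about (standard file paths assumed):

$ hostname           # the name sudo is trying to resolve, e.g. ip-172-1-2-3
$ cat /etc/hosts     # should map that hostname to 127.0.1.1 or the local IP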
Running ifconfig shows the correct server IP. ufw is also disabled, and iptables is empty, since this mirrors the configuration of the live server.
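These are roughly the commands I used to verify that, in case I misread something:

$ sudo ufw status verbose    # firewall state
$ sudo iptables -L -n -v     # full rule listing with numeric addresses
$ ip addr show               # same information as ifconfig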
What am I missing here? I can't seem to figure out why the server's own local IP and hostname are failing to resolve.
Server is running Ubuntu 20.04.