Short version: in a site-to-site VPN setup with Strongswan on both sides, how to route particular traffic via the VPN tunnel?
Long version:
We have two Linux servers (Ubuntu 20.04) in AWS, both with Strongswan installed, and a VPN tunnel has been established between them:
IP 172.31.0.151                         IP 10.0.0.14
Server 1 <------- VPN tunnel -------> Server 2
As expected, they can ping each other, and tcpdump shows the correct private IP addresses for the ping traffic.
The content of /etc/ipsec.conf on Server 2 (Server 1's ipsec.conf is almost identical, with the left/right values swapped):
config setup

conn %default
    ikelifetime=28800s
    lifetime=3600s
    keyingtries=%forever
    keyexchange=ikev2
    authby=secret
    mobike=no

conn vpn-test
    left=10.0.0.14
    leftsubnet=10.0.0.0/24
    leftid=18.999.999.999
    leftsourceip=10.0.0.14
    right=18.888.888.888
    rightsubnet=172.31.0.0/16,2.2.2.2/32
    auto=start
    type=tunnel
    ike=aes256-sha1-modp1024!
    esp=aes256-sha1!
    dpddelay=30s
    dpdtimeout=120s
    dpdaction=restart
Goal: from Server 2, we need to reach a "dummy" IP address on Server 1's side. For example, we want to ping 2.2.2.2 and have the ping request go over the VPN tunnel, instead of leaving Server 2's actual network interface and going out to the internet.
With 2.2.2.2/32 added to both configurations (leftsubnet on Server 1, rightsubnet on Server 2), it still doesn't work.
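One thing worth checking first is whether the 2.2.2.2/32 selector actually reached the kernel. With this kind of Strongswan setup, the decision to encrypt is made by XFRM policies after the normal route lookup, not by the routing table itself. A sketch of the checks (standard iproute2 commands; the exact output depends on the host):

```shell
# A plain route lookup still points at the default gateway. With a
# policy-based VPN this is expected even when the tunnel works, because
# the encrypt decision is made by XFRM policies after routing.
ip route get 2.2.2.2

# List the IPsec policies Strongswan installed. With the ipsec.conf
# above, an entry with "dst 2.2.2.2/32" should appear here if the
# selector was loaded.
ip xfrm policy
```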
Method 1 we tried: ip route add
Strongswan does not create a VPN network interface; an ip a command shows only the two default network interfaces on Ubuntu:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0e:ac:45:e9:76:60 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.14/28 brd 10.0.0.15 scope global dynamic ens5
       valid_lft 3576sec preferred_lft 3576sec
    inet6 fe80::cac:45ff:fee9:7660/64 scope link
       valid_lft forever preferred_lft forever
So using the ip route add command to route this traffic will not work here, since there is no tunnel interface to route through.
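A note on why there is no interface: Strongswan in this configuration is a policy-based VPN. Instead of a tunnel device, the charon daemon installs XFRM policies plus, by default, routes in its own routing table 220 (controlled by the charon.install_routes and charon.routing_table options in strongswan.conf), which is why neither ip a nor the main routing table changes. A sketch of where to look, assuming default settings:

```shell
# Strongswan's routes live in its own table, not the main one
ip route show table 220

# routing decisions consult that table through policy-routing rules,
# so no extra network interface is needed
ip rule show
```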
Method 2: iptables
The other method we tried is using iptables to DNAT: when the destination is 2.2.2.2, rewrite it to Server 1's IP address, 172.31.0.151. However, DNAT changes the destination, so on Server 1 we observe packets with a destination of 172.31.0.151 instead of 2.2.2.2, and we are unable to NAT them back accordingly. So iptables doesn't solve this problem either.
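For reference, the DNAT attempt was along these lines (a reconstruction in iptables-save format, not the exact rule set we used):

```
*nat
# Rewrite the destination before routing: 2.2.2.2 becomes Server 1's
# private address, so the packet matches the 172.31.0.0/16 selector and
# enters the tunnel -- but the original destination 2.2.2.2 is lost,
# which is exactly the problem described above.
-A OUTPUT -d 2.2.2.2/32 -j DNAT --to-destination 172.31.0.151
COMMIT
```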
Being new to this field, I don't know whether the fix belongs in the Strongswan configuration, in Linux routing, or somewhere else. How should we approach this?
Thank you for your time.