Score:2

Site-to-site WireGuard with Docker: routing problems


Disclaimer: reposted from Stack Overflow: https://stackoverflow.com/questions/67917278/site2site-wireguard-with-docker-routing-problems

I am trying to have two containers, each running on a Raspberry Pi, act as a site-to-site VPN between Network 1 and Network 2.

With the setup below, I can reach the other network from within each container:

  • from Docker container 1 I can ping the address 192.168.1.1
  • from Docker container 2 I can ping the address 192.168.10.1

But if I try to ping 192.168.1.1 from the System 1 host (192.168.10.100), the ping fails (see the diagram below for what I am trying to do).

I understand I have to add a static route on the System 1 host (192.168.10.100) to direct traffic for 192.168.1.0/24 through the WireGuard container (172.17.0.5), so I run:

$ ip route add 192.168.1.0/24 via 172.17.0.5
$ ip route
default via 192.168.10.1 dev eth0 proto dhcp src 192.168.10.100 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.18.0.0/16 dev br-e19a4f1b7646 proto kernel scope link src 172.18.0.1 linkdown 
172.19.0.0/16 dev br-19684dacea29 proto kernel scope link src 172.19.0.1 
172.20.0.0/16 dev br-446863cf7cef proto kernel scope link src 172.20.0.1 
172.21.0.0/16 dev br-6800ed9b4dd6 proto kernel scope link src 172.21.0.1 linkdown 
172.22.0.0/16 dev br-8f8f439a7a28 proto kernel scope link src 172.22.0.1 linkdown 
192.168.1.0/24 via 172.17.0.5 dev docker0 
192.168.10.0/24 dev eth0 proto kernel scope link src 192.168.10.100 
192.168.10.1 dev eth0 proto dhcp scope link src 192.168.10.100 metric 100 

but the ping to 192.168.1.1 still fails.
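Before digging deeper, it is worth confirming the containers are allowed to forward packets at all; a quick check, assuming a shell inside each container:

    # inside each WireGuard container: must print 1,
    # otherwise the container will not route between wg0 and eth0
    cat /proc/sys/net/ipv4/ip_forward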

Running tcpdump on container 2, I see that some packets are indeed reaching the container:

root@936de7c0d7eb:/# tcpdump -n -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:11:19.885845 IP [publicIPsystem1].56200 > 172.17.0.6.56100: UDP, length 128
10:11:30.440764 IP 172.17.0.6.56100 > [publicIPsystem1].56200: UDP, length 32
10:11:35.480625 ARP, Request who-has 172.17.0.1 tell 172.17.0.6, length 28
10:11:35.480755 ARP, Reply 172.17.0.1 is-at 02:42:24:e5:ac:38, length 28

So I guess it is not a routing problem on System 1.

Can anyone tell me how to diagnose this further?


EDIT 1:
I have done the following tests:

  1. ran 'tcpdump -ni any' on container 2
  2. sent a ping from System 1 (from the host system): 'ping -c 1 192.168.1.1'.
    On container 2 tcpdump records the following:
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
    15:04:47.495066 IP [publicIPsystem1].56200 > 172.17.0.3.56100: UDP, length 128
    15:04:58.120761 IP 172.17.0.3.56100 > [publicIPsystem1].56200: UDP, length 32
  3. sent a ping from container 1 (from within the container): 'ping -c 1 192.168.1.1'.
    On container 2 tcpdump records the following:
# tcpdump -ni any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:05:48.120717 IP [publicIPsystem1].56200 > 172.17.0.3.56100: UDP, length 128
15:05:48.120871 IP 10.13.18.2 > 192.168.1.1: ICMP echo request, id 747, seq 1, length 64
15:05:48.120963 IP 172.17.0.3 > 192.168.1.1: ICMP echo request, id 747, seq 1, length 64
15:05:48.121955 IP 192.168.1.1 > 172.17.0.3: ICMP echo reply, id 747, seq 1, length 64
15:05:48.122054 IP 192.168.1.1 > 10.13.18.2: ICMP echo reply, id 747, seq 1, length 64
15:05:48.122246 IP 172.17.0.3.56100 > [publicIPsystem1].56200: UDP, length 128
15:05:53.160617 ARP, Request who-has 172.17.0.1 tell 172.17.0.3, length 28
15:05:53.160636 ARP, Request who-has 172.17.0.3 tell 172.17.0.1, length 28
15:05:53.160745 ARP, Reply 172.17.0.3 is-at 02:42:ac:11:00:03, length 28
15:05:53.160738 ARP, Reply 172.17.0.1 is-at 02:42:24:e5:ac:38, length 28
15:05:58.672032 IP [publicIPsystem1].56200 > 172.17.0.3.56100: UDP, length 32

So it seems that container 2 treats the packets differently depending on something I am currently missing. Could it be an iptables problem?
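One way to test that hypothesis is to watch the packet counters on container 2 while repeating the two pings; a minimal check, assuming a shell inside the container:

    # inside container 2: per-rule packet counters increment on each match
    iptables -L FORWARD -n -v
    iptables -t nat -L POSTROUTING -n -v

If the FORWARD counters move for the container-originated ping but not for the host-originated one, the packet is being discarded before it reaches the forwarding path (for example inside WireGuard itself).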


[Diagram: site-to-site topology; the addressing is summarized in the table below.]

                          Site 1             Site 2
  network IP range        192.168.10.0/24    192.168.1.0/24
  host system address     192.168.10.100     192.168.1.100
  bridge docker0 range    172.17.0.0/16      172.17.0.0/16
  container address       172.17.0.5         172.17.0.6

System 1 - wg0.conf

[Interface]
Address = 10.13.18.2
PrivateKey = *privatekey*
ListenPort = 56200
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = *publickey*
Endpoint = *system2address*:56100
AllowedIPs = 10.13.18.1/32, 192.168.1.0/24

System 2 - wg0.conf

[Interface]
Address = 10.13.18.1
ListenPort = 56100
PrivateKey = *privatekey*
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# peer_casaleuven
PublicKey = *publickey*
AllowedIPs = 10.13.18.2/32, 192.168.10.0/24
Endpoint = *system1address*:56200
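For reference, a container of this kind is typically started along these lines; a minimal sketch, assuming the linuxserver/wireguard image that comes up in the comments below (the config path and published port are illustrative):

    # System 1: NET_ADMIN is required to create wg0 and edit routes/iptables;
    # SYS_MODULE only if the host has not already loaded the wireguard module
    docker run -d --name=wireguard \
      --cap-add=NET_ADMIN \
      --cap-add=SYS_MODULE \
      --sysctl net.ipv4.conf.all.src_valid_mark=1 \
      -v /path/to/config:/config \
      -p 56200:56200/udp \
      linuxserver/wireguard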
jabbson:
When you ping 192.168.1.1 from System 1, do you see packets arriving at container 1 and leaving it that you can correlate with the inbound packets you see on container 2 (because what you see could just be keepalives or something)? Does WireGuard have any logs you can look at? What does the routing table on container 1 look like?
OP:
Yes, I have run 'tcpdump -ni any' on both containers at the same time and waited a couple of seconds to verify that no other packets were being recorded. Then I ran 'ping -c 1 192.168.1.1' to send just one ping from System 1, and I saw the packets passing in both tcpdumps.
OP:
Added another test in the EDIT 1 section.
A.B:
Note: I hope the container's `eth0` is unrelated, because there should be no NAT anywhere. So I don't understand why this rule exists in `PostUp`: `iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE`.
OP:
The rule iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE was added by default by the linuxserver/wireguard image in the initial configuration. If I remove it from both PostUp and PostDown in both containers, then I am not able to ping anything anymore.
A.B:
OK. Anyway, this doesn't change anything about the answer I wrote below. Did you try it?
OP:
Yes, I tried it and it works!!! Thank you so much! I wonder, though, why that rule is so fundamental to making it work. If I run iptables --list I don't see any NAT table.
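A note on that last point: `iptables --list` only inspects the filter table, so the MASQUERADE rule never shows up there; the nat table has to be requested explicitly:

    iptables -t nat -L POSTROUTING -n -v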
Score:1
A.B

This looks like a routing issue.

192.168.1.0/24 via 172.17.0.5 dev docker0

This route was not hinted with a preferred source address, so naturally the host chooses the closest matching address: 172.17.0.1, since it is the primary address on docker0. 172.17.0.1 is not in the peer's WireGuard AllowedIPs list (nor should it have to be), so the packet will be rejected by WireGuard. Even if it weren't rejected, there would still be a routing issue on the peer, because the two separate LANs use the same IP address block for their Docker bridges (172.17.0.0/16).
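A quick way to confirm which source address the kernel picks is to ask it directly; given the routing table shown in the question, on System 1 this should report something like:

    $ ip route get 192.168.1.1
    192.168.1.1 via 172.17.0.5 dev docker0 src 172.17.0.1

After the ip route replace below, the same command should report src 192.168.10.100 instead.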

Try this:

  • System 1

    ip route replace 192.168.1.0/24 via 172.17.0.5 dev docker0 src 192.168.10.100
    
  • System 2

    ip route replace 192.168.10.0/24 via 172.17.0.6 dev docker0 src 192.168.1.100
    

Note that before this adjustment, the problem affected only the two Docker host systems, not the rest of the LANs: other LAN hosts already send traffic with their own LAN source addresses, which are covered by the peers' AllowedIPs.

A.B:
Re-explaining what I wrote in the answer: the source address being selected was wrong. This route has src hinted so that the correct source address from the LAN is used.