
MTU problems with VXLAN (vxnet) over WireGuard and a Linux bridge


My setup: several Proxmox hosts, each with eth0 connected to the LAN via a bridge (vmbr0) in Proxmox (the default setup).

There is another "dummy" bridge device (vmbr100) for internal traffic between guests.

The first guest on each host is a "router" VM (Debian Bullseye), connected to vmbr0 (host) via ens18 (guest) and to vmbr100 via ens19.

On the router we also have a simple vmbr100 Linux bridge.

The routers are also connected to each other via WireGuard over the LAN (wg0), and we set up a VXLAN (unicast) over these WireGuard connections. This works fine so far: I can ping and SSH into all routers from the routers via the wg or VXLAN IPs.

This VXLAN is then bridged with ens19 via the vmbr100 bridge, so that other guests I start and attach to vmbr100 on the host are automatically in the same network as guests on the other Proxmox hosts, even if those hosts are in different datacenters.

eth0 (LAN) has an MTU of 1500, because the whole network and Proxmox use it.

wg0 has the default MTU of 1420 (80 bytes of overhead below the LAN MTU).

All other (VXLAN-connected) devices have an MTU of 9000.
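For reference, the layout above could be expressed with `ip link` roughly as follows (interface names taken from the question; this is a sketch of the described state, not verified commands from the actual setup):

```shell
# MTUs as described in the question (run on each router VM / host).
ip link set dev eth0    mtu 1500   # LAN uplink, network-wide MTU
ip link set dev wg0     mtu 1420   # WireGuard default (1500 - 80)
ip link set dev vxnet   mtu 9000   # VXLAN device riding on wg0
ip link set dev vmbr100 mtu 9000   # internal bridge
ip link set dev ens19   mtu 9000   # router VM leg on vmbr100
```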

My problem so far: as mentioned, I can SSH and run all sorts of iperf tests between the routers, but when I connect a new guest to that VXLAN bridge (on the host), ping works and DHCP/DNS work, yet for iperf or SSH I need to lower the MTU on those guests to 1350 (which accounts for the WireGuard + VXLAN traffic overhead) before the guests can talk to each other via TCP.
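The 1350 figure is consistent with stacking the usual encapsulation overheads; a quick sketch of the arithmetic (the 80-byte WireGuard and 70-byte VXLAN figures assume the worst-case IPv6 underlay/outer headers):

```shell
LAN_MTU=1500
WG_OVERHEAD=80                          # 40 IPv6 + 8 UDP + 32 WireGuard (worst case)
WG_MTU=$((LAN_MTU - WG_OVERHEAD))       # 1420, WireGuard's default MTU
VXLAN_OVERHEAD=70                       # 14 inner Ethernet + 40 IPv6 + 8 UDP + 8 VXLAN
GUEST_MTU=$((WG_MTU - VXLAN_OVERHEAD))  # 1350, the MTU the guests needed
echo "$WG_MTU $GUEST_MTU"
```

With an IPv4 outer header the VXLAN overhead drops to 50 bytes (so 1370 would also fit); 1350 is simply safe either way.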

The firewall (iptables/ebtables) is, of course, disabled so far.

Can you shed some light on where I need to tweak or research further? Why is traffic between the routers fine (using the wg IP or the dummy bridge IP), traffic from a guest to the host bridge also fine, but traffic between guests attached to vmbr100, connected via VXLAN over WireGuard, needs a lower MTU?

shodanshok:
Try to manually set the `wg` interface to 1500 bytes MTU. Does it change anything?
cernoel:
Yep, I didn't know beforehand that I can also set the MTU higher than the MTU of the link WireGuard runs over. There is some fragmentation happening at the WireGuard level now, but traffic is low and it doesn't concern me.

Thanks for rubberducking my problem.

Since WireGuard uses a default MTU of 1420 on the wg0 interface, all transmissions below that size went through easily. I had not tried `ping -s [packetsize]` before; this command helped me find the bug.
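As a sketch, a don't-fragment ping with an explicit payload size exposes exactly this kind of encapsulation problem (the address is hypothetical; payload size is MTU minus 28 bytes for the IPv4 and ICMP headers):

```shell
# Probe the effective path MTU between two guests with the DF bit set.
ping -M do -s 1472 10.0.100.2   # 1500-byte packets: dropped on the VXLAN-over-WireGuard path
ping -M do -s 1322 10.0.100.2   # 1350-byte packets: should get replies
```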

The host bridge interface has an MTU of 9000, so I sized the wg0 MTU down to 8500 and set the MTU of all other interfaces (vmbr100, vxnet, ens19) to 8000. Now all guests and hosts are able to talk to each other (since guests default to an MTU of 1500).
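The fix described above would look roughly like this (a sketch with the interface names from the question; WireGuard now fragments its UDP carrier packets across the 1500-byte LAN, which the answer accepts as harmless for low-volume traffic):

```shell
ip link set dev wg0     mtu 8500   # above the LAN link MTU; the UDP carrier gets fragmented
ip link set dev vmbr100 mtu 8000   # 500 bytes below wg0, ample room for VXLAN overhead
ip link set dev vxnet   mtu 8000
ip link set dev ens19   mtu 8000
# Guests keep their default MTU of 1500 and fit comfortably inside 8000.
```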


