My Setup:
Several Proxmox hosts, each with one eth0 connected to the LAN via a bridge (vmbr0) in Proxmox (the default setup).
There is a second "dummy" bridge device (vmbr100) for internal traffic between guests.
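The vmbr100 bridge on the host has no physical port attached; it is defined in /etc/network/interfaces like any other Proxmox bridge and is roughly equivalent to this (just a sketch to illustrate):

    # internal guest bridge on the Proxmox host, no uplink port
    ip link add vmbr100 type bridge
    ip link set vmbr100 up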
As the first guest we have a "router" VM on each host (Debian Bullseye), which is connected to vmbr0 (host) on ens18 (guest) and to vmbr100 on ens19.
On the router we also have a simple Linux bridge, likewise named vmbr100.
The routers are also connected to each other via WireGuard over the LAN (wg0), and we set up a VXLAN (unicast, no multicast) over these WireGuard connections. This works fine so far: I can ping and SSH into all routers from the routers via the wg or VXLAN IPs.
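The WireGuard part is a plain point-to-point setup; the VXLAN on top of it looks roughly like this (a sketch; the VNI 100, port 4789 and the wg addresses 10.10.0.x are placeholders, not necessarily my real values):

    # unicast VXLAN over the WireGuard tunnel (on router A, wg address 10.10.0.1)
    ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.10.0.1 dev wg0
    ip link set vxlan100 up

    # one static flood entry per remote router (here: router B at 10.10.0.2)
    bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 10.10.0.2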
This VXLAN is then bridged with ens19 via the vmbr100 bridge, so that any other guest I start up and attach to vmbr100 on the host is automatically in the same network as the guests on the other Proxmox hosts, even if those hosts are in different datacenters.
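On the router, the bridging itself is just this (again a sketch, interface names as in my description above):

    # router-side bridge that glues the VXLAN to the guest-facing NIC
    ip link add vmbr100 type bridge
    ip link set vxlan100 master vmbr100
    ip link set ens19 master vmbr100
    ip link set vmbr100 up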
eth0 (LAN) has an MTU of 1500, because the whole network and Proxmox use that.
wg0 has the default MTU of 1420 (80 bytes of overhead below the LAN MTU).
All other (VXLAN-connected) devices have an MTU of 9000.
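For reference, this is what the relevant MTUs look like on a router when I check them (vxlan100 is the placeholder name from above):

    ip link show ens18    | grep -o 'mtu [0-9]*'   # 1500 (LAN side)
    ip link show wg0      | grep -o 'mtu [0-9]*'   # 1420
    ip link show vxlan100 | grep -o 'mtu [0-9]*'   # 9000
    ip link show vmbr100  | grep -o 'mtu [0-9]*'   # 9000
    ip link show ens19    | grep -o 'mtu [0-9]*'   # 9000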
My problem so far: as mentioned, I can do SSH and all sorts of iperf tests between the routers, but when I connect a new guest to that VXLAN bridge (vmbr100 on the host), ping works and DHCP/DNS work, yet for iperf or SSH I have to lower the MTU on those guests to 1350 (which matches the WireGuard + VXLAN overhead) before the guests can talk to each other via TCP.
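Roughly, the symptom between two guests on vmbr100 looks like this (addresses and the guest interface name are placeholders):

    ping 10.100.0.2                 # fine
    iperf3 -c 10.100.0.2            # stalls, barely any throughput
    ssh 10.100.0.2                  # hangs

    # after lowering the guest MTU, TCP works again
    ip link set dev ens18 mtu 1350
    iperf3 -c 10.100.0.2            # now runs normally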
The firewall (iptables/ebtables) is of course disabled so far.
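(Checked roughly like this on the hosts, routers and guests:)

    pve-firewall status    # disabled on the hosts
    iptables -S            # only default ACCEPT policies
    ebtables -L            # empty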
Can you shed some light on where I need to tweak things or start researching further? Why is traffic between the routers fine (using the wg IP or the dummy bridge IP), and traffic from a guest to the host's bridge also fine, but traffic between guests attached to vmbr100 and connected via VXLAN over WireGuard needs a lower MTU?