
LXC host losing internet access when bridging lxdbr0 with ens3


My LXC host loses access to the internet when I bridge ens3 into lxdbr0. The LXC guests also fail to reach the WAN while the bridge is in effect.

My VPS hosting provider supplies a NIC, ens3, with a static IP 123.456.789.012 assigned via DHCP. I'm trying to set lxdbr0 to use DHCP as well so that containers can be assigned an IP. I'm using netplan on Ubuntu 22.04.

# cat /etc/netplan/*
network:
  version: 2
  ethernets:
    ens3:
        accept-ra: false
        addresses:
        - 1234:1234:123:1234::6475/56
        dhcp4: true
        match:
            macaddress: fa:16:3e:16:b6:35
        mtu: 1500
        nameservers:
            addresses:
            - 205.168.44.66
            search: []
        routes:
        -   to: ::/0
            via: 1234:1234:123:1234::1
        set-name: ens3

  bridges:
    lxdbr0:
        dhcp4: true
        interfaces:
        - ens3


# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: cPool
    type: disk
name: default
used_by:
- /1.0/instances/template

Oddly, the file mixes IPv4 and IPv6, but I'm planning on using IPv4 only.

After matching the MAC address for the bridge in netplan, internet access keeps working for the host, just not for the containers yet.

# cat /etc/netplan/ovsbr0-config.yaml
network:
    version: 2
    renderer: networkd
    ethernets:
        ens3:
            dhcp4: false

    bridges:
        ovsbr0:
            dhcp4: true
            dhcp6: true
            interfaces:
            - ens3
            accept-ra: false
            addresses:
            - [static_ipv4/24]
            - [static_ipv6/56]
#            match:
            macaddress: fa:16:3e:16:b8:35
#            set-name: ens3
            mtu: 1500
            nameservers:
                addresses:
                - 123.456.78.99
                search: []
            routes:
            -   to: ::/0
                via: ipv6_gw_address::1 

So now I'm trying the following config. As soon as I run 'netplan apply' with the child bridge defined, I lose my connection to the host. lxdbr0 is used to get a private subnet IP range.

# cat /etc/netplan/ovsbr0-lxdbr0.yaml
network:
    version: 2
    renderer: networkd
    ethernets:
        ens3:
            dhcp4: false

    bridges:
        ovsbr0:
            dhcp4: true
            dhcp6: true
            interfaces:
            - ens3
            accept-ra: false
            addresses:
            - [static_ipv4/24]
            - [static_ipv6/56]
#            match:
            macaddress: fa:16:3e:16:b8:35
#            set-name: ens3
            mtu: 1500
            nameservers:
                addresses:
                - 123.456.78.99
                search: []
            routes:
            -   to: ::/0
                via: ipv6_gw_address::1

        lxdbr0:
            dhcp4: true
            dhcp6: true
            interfaces:
            - ovsbr0
            addresses:
            - 10.0.23.1/24
            nameservers:
                addresses:
                - 123.456.78.99
                search: []

On the LXD side, containers fail to get a DHCP-assigned IPv4 address whether lxdbr0 is managed or unmanaged. As an example, lxd init creates lxdbr0 configured to use both IPv4 and IPv6, yet when a container starts it only gets an IPv6 address, never an IPv4 one. Looking further, a route is added for the lxdbr0 network and lxc network list does show an IPv4 subnet, but the containers aren't assigned any address from it (a quick DHCP check is sketched after the outputs below).

# lxc list
+----------+---------+------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  | IPV4 |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+------+-----------------------------------------------+-----------+-----------+
| template | RUNNING |      | fd42:649e:38c9:81f9:216:3eff:fe24:8b17 (eth0) | CONTAINER | 0         |
+----------+---------+------+-----------------------------------------------+-----------+-----------+

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         167.114.169.1   0.0.0.0         UG    100    0        0 ovsbr0
10.20.20.0      0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
167.114.169.0   0.0.0.0         255.255.255.0   U     0      0        0 ovsbr0
167.114.169.1   0.0.0.0         255.255.255.255 UH    100    0        0 ovsbr0
213.186.38.99   167.114.169.1   255.255.255.255 UGH   100    0        0 ovsbr0

# lxc network list
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
|  NAME  |   TYPE   | MANAGED |     IPV4      |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| ens3   | physical | NO      |               |                           |             | 0       |         |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| lxdbr0 | bridge   | YES     | 10.20.20.1/24 | fd42:649e:38c9:81f9::1/64 |             | 1       | CREATED |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| ovsbr0 | bridge   | NO      |               |                           |             | 0       |         |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
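
One way to check whether LXD's DHCP service is even reachable on the bridge is to look for its dnsmasq listener and watch for DHCP traffic. This is a generic diagnostic sketch, assuming the managed bridge is named lxdbr0 as in the table above:

# confirm the bridge config includes ipv4.address and ipv4.dhcp
lxc network show lxdbr0

# confirm a dnsmasq instance is listening for DHCP (UDP 67) on the host
sudo ss -ulpn | grep ':67'

# watch for DHCP requests and replies while restarting a container
sudo tcpdump -ni lxdbr0 udp port 67 or udp port 68

If the container's DHCP requests show up but no reply ever comes back, a host firewall dropping DHCP on lxdbr0 is a likely culprit.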

I found the exact same issue reported here: lxc container no outgoing traffic. The only thing is that its content appears to be outdated today. I verified that the host can ping the containers and that the containers can ping each other, but there is no outgoing connection. I've also noted that running 'ip route' inside the containers produces no output; a quick test for that is sketched below. Any idea how to resolve this issue? Thanks
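
Since 'ip route' in the containers prints nothing, one quick way to separate a DHCP failure from a routing or firewall failure is to hand a container an address and a default route manually. A rough sketch, using the 10.20.20.0/24 subnet and the template container from the outputs above (10.20.20.50 is an arbitrary free address):

lxc exec template -- ip addr add 10.20.20.50/24 dev eth0
lxc exec template -- ip route add default via 10.20.20.1
lxc exec template -- ping -c 3 10.20.20.1

If the gateway answers but WAN destinations still don't, the problem is NAT/forwarding or a firewall on the host rather than DHCP itself.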

Comment from user535733:
Often what happens is that the bridge presents a different (virtual) MAC address to the DHCP server, so the bridge gets assigned a different IP address, and you lose connectivity with your host and containers as they all get new IP addresses unknown to you. That's normal for your setup. Check your VPS dashboard or ask VPS support for the new bridge address (which is the same as your new host address).
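
A minimal netplan sketch of that workaround: pin the physical NIC's real MAC on the bridge so the VPS's DHCP server hands the bridge the same lease the NIC used to get. fa:16:3e:16:b6:35 is taken from the first config above, and br0 is just an illustrative name:

network:
    version: 2
    ethernets:
        ens3:
            dhcp4: false
    bridges:
        br0:
            interfaces:
            - ens3
            # reuse the physical NIC's MAC so the DHCP lease follows the bridge
            macaddress: fa:16:3e:16:b6:35
            dhcp4: true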
Answer from the original poster:

The issue is now resolved with the help of the LXD documentation page "How to configure your firewall".

I needed to adjust my ufw rules and make sure LXD's own firewall was not interfering:

sudo ufw allow in on <network_bridge>
sudo ufw route allow in on <network_bridge>
sudo ufw route allow out on <network_bridge>

lxc network set <network_bridge> ipv6.firewall false
lxc network set <network_bridge> ipv4.firewall false

In my case:

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0

lxc network set lxdbr0 ipv6.firewall false
lxc network set lxdbr0 ipv4.firewall false

My containers now receive IPv4 addresses dynamically, and both the host and the containers have access to the internet on a single WAN IP.
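
To double-check that the change took effect, the ufw rules and LXD's firewall keys can be inspected directly. These are standard ufw and lxc commands, with lxdbr0 as above:

# the allow/route rules for lxdbr0 should be listed here
sudo ufw status verbose

# both keys should now report false
lxc network get lxdbr0 ipv4.firewall
lxc network get lxdbr0 ipv6.firewall

# containers should now show an address in the IPV4 column
lxc list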
