
How to configure LXD network to host bridge?


I have LXD instances attached to an LXD network, and in each of those instances I've configured Netplan with static IPs.

The LXD docs say, "Network ACLs can be assigned directly to the NIC of an instance or to a network. When assigned to a network, the ACL applies to all NICs connected to the network."

The latter is what I am trying to accomplish. I've assigned ACLs to a network so all my instances can access the internet, but they cannot. I believe the problem is that each instance is configured with static IPs in Netplan and I've not correctly connected the LXD network to the host, so the instances cannot reach the internet.
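For reference, the `inet-access` ACL was created and assigned to the network roughly like this (the egress rule shown here is illustrative, not a copy of my exact rules):

```shell
# Create the ACL (name matches the one in `lxc network show mylan` below)
lxc network acl create inet-access

# Illustrative rule: allow all egress traffic from connected instances
lxc network acl rule add inet-access egress action=allow

# Assign the ACL to the network so it applies to every connected NIC
lxc network set mylan security.acls=inet-access
```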

I've tried creating a bridge and connecting the LXD network to it, but I also have a VPN connection on this machine, and each time I lose all connectivity.

Here's what I've done so far.

$ lxc network show mylan
config:
  ipv4.address: none
  ipv4.firewall: "false"
  ipv6.address: none
  ipv6.firewall: "false"
  security.acls: inet-access
  security.acls.default.egress.action: allow
  security.acls.default.ingress.action: allow
description: ""
name: mylan
type: bridge
used_by:
- /1.0/instances/h1 
- /1.0/instances/h2
managed: true
status: Created
locations:
- none

Because my host has a firewall, I turned off the LXD firewall and added some default actions:

$ lxc network set mylan ipv6.firewall=false
$ lxc network set mylan ipv4.firewall=false
$ lxc network set mylan security.acls.default.ingress.action=allow        
$ lxc network set mylan security.acls.default.egress.action=allow  

Also, since I have Docker on my host, I turned off the host firewall to see whether things would work without Docker interfering, but no success.
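Docker is known to set the iptables FORWARD chain policy to DROP, which can silently block bridged traffic even when the LXD firewall is off. One way I could check for this (a diagnostic sketch, not a confirmed fix for my setup):

```shell
# Show the FORWARD policy; "-P FORWARD DROP" indicates Docker is blocking forwarding
sudo iptables -S FORWARD | head -1

# The LXD docs' suggested workaround: explicitly accept traffic for the
# LXD bridge in the DOCKER-USER chain, which Docker evaluates first
sudo iptables -I DOCKER-USER -i mylan -j ACCEPT
sudo iptables -I DOCKER-USER -o mylan -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```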

Also, I have NetworkManager running on the host machine (Ubuntu 20.04). Can anyone help with a proper bridge setup that gives the LXD instances internet access just by connecting the LXD network (not via each individual instance)?
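For context, the bridge attempt that kills my connectivity was set up through NetworkManager, roughly equivalent to these nmcli commands (this mirrors the `br0` and `bridge-slave-mylan` keyfiles shown in the update further down; the exact commands I ran may have differed slightly):

```shell
# Create the bridge itself (autoconnect disabled while testing)
nmcli connection add type bridge ifname br0 con-name br0 \
    connection.autoconnect no

# Enslave the LXD-managed interface "mylan" to the bridge
nmcli connection add type bridge-slave ifname mylan master br0 \
    con-name bridge-slave-mylan

# Bring the bridge up
nmcli connection up br0
```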

Update:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 98:e7:43:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.100/24 brd 192.168.40.255 scope global dynamic noprefixroute eno1
   valid_lft 46670sec preferred_lft 46670sec
    inet6 fe80::965d:xxxx:xxxx:xxxx/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
3: wlp110s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 04:ed:33:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
   valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
   valid_lft forever preferred_lft forever
    inet6 fe80::42:xxxx:xxxx:xxxx/64 scope link
   valid_lft forever preferred_lft forever
7: mylan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether 00:16:3e:xx:xx:xx brd ff:ff:ff:ff:ff:ff
8: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 10.118.252.1/24 scope global lxdbr0
   valid_lft forever preferred_lft forever
    inet6 fd42:xxxx:xxxx:xxxx::1/64 scope global
   valid_lft forever preferred_lft forever
9: mpbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 10.38.226.1/24 scope global mpbr0
   valid_lft forever preferred_lft forever
   inet6 fd42:da70:xxxx:xxxx::1/64 scope global
   valid_lft forever preferred_lft forever
10: mylan2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:xx:xx:xx brd ff:ff:ff:ff:ff:ff
12: veth2b088099@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mylan state UP group default qlen 1000
    link/ether 2e:ba:7a:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
16: vethea6e922e@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mylan state UP group default qlen 1000
    link/ether 02:13:27:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 1
18: vethbff8e68a@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mylan2 state UP group default qlen 1000
    link/ether 0e:02:4c:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 2
21: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
   valid_lft forever preferred_lft forever
22: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:xx:xx:xx brd ff:ff:ff:ff:ff:ff
23: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 172.41.52.53/22 brd 172.41.55.255 scope global tun0
   valid_lft forever preferred_lft forever
    inet6 fe80::bdef:xxxx:xxxx:xxxx/64 scope link stable-privacy
   valid_lft forever preferred_lft forever    

Update:

$ sudo cat /etc/NetworkManager/system-connections/*

[connection]
id=br0
uuid=09221b2b-010a-b2cc-9c91-xxxxxxxxxxxx
type=bridge
autoconnect=false
interface-name=br0
permissions=
timestamp=1681164658

[bridge]
forward-delay=4

[ipv4]
address1=192.168.40.200/24
dns-search=
method=auto

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore

[proxy]
[connection]
id=bridge-slave-mylan
uuid=52ac1323-245e-4525-9f9c-xxxxxxxxxxxx
type=ethernet
interface-name=mylan
master=br0
permissions=
slave-type=bridge

[ethernet]
mac-address-blacklist=

[ipv4]
dns-search=
method=auto

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

[proxy]
[connection]
id=Wired connection 1
uuid=756b154c-016a-3d5f-9fb2-xxxxxxxxxxxx
type=ethernet
autoconnect-priority=-999
permissions=
timestamp=1660596751

[ethernet]
mac-address=98:E7:43:xx:xx:xx
mac-address-blacklist=

[ipv4]
dns=8.8.8.8;4.2.2.2;
dns-search=
method=auto

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
ip6-privacy=0
method=auto

[proxy]
Ender: @user535733 - added update for `ip a`.

Ender: @user535733 - I really hope I cleaned that file up correctly. I had a ton of wifi APs that made it quite long, so I removed those that didn't seem to matter.

user535733: Your `br0` has autoconnect=false. Is that what you want?

Ender: Not yet sure. I'd like to be able to turn this connectivity off and on: off when working strictly between instances on the LXD network, and on when I need to load new software onto the instances from the internet.
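A sketch of how that toggling might work with NetworkManager, assuming the `br0` connection above ends up being the instances' uplink:

```shell
# Enable internet access for the instances
nmcli connection up br0

# Cut the instances off again; traffic between instances on mylan keeps working
nmcli connection down br0
```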