Score:0

Can't get LXC bridge to work


I can't get a network bridge to work with an up-to-date Arch Linux host. I am aware that the LXC defaults are read at container creation time (I created a new container after changing the network settings).

(base) [r0b3@toshi ~]$ sudo lxc-start -n container32 --logfile aaaaxxxxxs.txt --logpriority DEBUG
lxc-start: container32: lxccontainer.c: wait_on_daemonized_start: 868 Received container state "ABORTING" instead of "RUNNING"
lxc-start: container32: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: container32: tools/lxc_start.c: main: 311 To get more details, run the container in foreground mode
lxc-start: container32: tools/lxc_start.c: main: 313 Additional information can be obtained by setting the --logfile and --logpriority options

The last content of the log is:

lxc-start container32 20210620114326.855 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_setup_limits_legacy:2749 - Invalid argument - Ignoring legacy cgroup limits on pure cgroup2 system
lxc-start container32 20210620114326.856 INFO     cgfsng - cgroups/cgfsng.c:cgfsng_setup_limits:2857 - Limits for the unified cgroup hierarchy have been setup
lxc-start container32 20210620114326.862 ERROR    network - network.c:netdev_configure_server_veth:659 - Operation not supported - Failed to create veth pair "vethotXiCD" and "vethMRVTzD"
lxc-start container32 20210620114326.862 ERROR    network - network.c:lxc_create_network_priv:3418 - Operation not supported - Failed to create network device
lxc-start container32 20210620114326.862 ERROR    start - start.c:lxc_spawn:1844 - Failed to create the network
lxc-start container32 20210620114326.862 DEBUG    network - network.c:lxc_delete_network:4180 - Deleted network devices
lxc-start container32 20210620114326.862 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state "ABORTING" instead of "RUNNING"
lxc-start container32 20210620114326.862 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start container32 20210620114326.862 ERROR    lxc_start - tools/lxc_start.c:main:311 - To get more details, run the container in foreground mode
lxc-start container32 20210620114326.862 ERROR    lxc_start - tools/lxc_start.c:main:313 - Additional information can be obtained by setting the --logfile and --logpriority options
lxc-start container32 20210620114326.862 ERROR    start - start.c:__lxc_start:2073 - Failed to spawn container "container32"
lxc-start container32 20210620114326.862 WARN     start - start.c:lxc_abort:1016 - No such process - Failed to send SIGKILL via pidfd 20 for process 128228
lxc-start container32 20210620114326.863 INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "container32", config section "lxc"

Troubleshooting so far:

The container configuration, `cat /etc/lxc/default.conf`, gives:

#lxc.net.0.type = none

#lxc.net.0.type = veth
##lxc.net.0.link = lxcbr0
#lxc.net.0.link = br1
##lxc.net.0.flags = up
#lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
#lxc.net.0.name = eth0



lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = lxcbr0
lxc.net.0.name = eth0
lxc.net.0.hwaddr = 00:16:3e:f9:d3:03
lxc.net.0.mtu = 1500

The bridge seems to be up; `systemctl status --lines=0 --no-pager lxc.service lxc-net.service` gives:

● lxc.service - LXC Container Initialization and Autoboot Code
     Loaded: loaded (/usr/lib/systemd/system/lxc.service; disabled; vendor preset: disabled)
     Active: active (exited) since Sun 2021-06-20 13:42:03 CEST; 12min ago
       Docs: man:lxc-autostart
             man:lxc
    Process: 128157 ExecStartPre=/usr/lib/lxc/lxc-apparmor-load (code=exited, status=0/SUCCESS)
    Process: 128158 ExecStart=/usr/lib/lxc/lxc-containers start (code=exited, status=0/SUCCESS)
   Main PID: 128158 (code=exited, status=0/SUCCESS)
        CPU: 24ms

● lxc-net.service - LXC network bridge setup
     Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; enabled; vendor preset: disabled)
     Active: active (exited) since Sun 2021-06-20 13:42:00 CEST; 12min ago
       Docs: man:lxc
    Process: 128126 ExecStart=/usr/lib/lxc/lxc-net start (code=exited, status=0/SUCCESS)
   Main PID: 128126 (code=exited, status=0/SUCCESS)
      Tasks: 1 (limit: 9421)
     Memory: 1.1M
        CPU: 38ms
     CGroup: /system.slice/lxc-net.service
             └─128150 dnsmasq --conf-file=/dev/null -u dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range …

`ip a` gives:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 7c:05:07:ff:2e:14 brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:fd:52:cf:c9:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.153/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp3s0
       valid_lft 544603sec preferred_lft 469003sec
    inet6 2a02:810a:8cc0:5310:26fd:52ff:fecf:c986/64 scope global dynamic mngtmpaddr 
       valid_lft 86399sec preferred_lft 43199sec
    inet6 fe80::26fd:52ff:fecf:c986/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:09:84:64:c2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
23: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever

`journalctl -u lxc-net.service` gives:

Jun 20 13:42:00 toshi systemd[1]: Starting LXC network bridge setup...
Jun 20 13:42:00 toshi dnsmasq[128150]: started, version 2.85, cache size 150
Jun 20 13:42:00 toshi dnsmasq[128150]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth cryptohash DNSSEC >
Jun 20 13:42:00 toshi systemd[1]: Finished LXC network bridge setup.
Jun 20 13:42:00 toshi dnsmasq-dhcp[128150]: DHCP, IP range 10.0.3.2 -- 10.0.3.254, lease time 1h
Jun 20 13:42:00 toshi dnsmasq-dhcp[128150]: DHCP, sockets exclusively bound to interface lxcbr0
Jun 20 13:42:00 toshi dnsmasq[128150]: reading /etc/resolv.conf
Jun 20 13:42:00 toshi dnsmasq[128150]: using nameserver 192.168.0.1#53
Jun 20 13:42:00 toshi dnsmasq[128150]: read /etc/hosts - 3 addresses
A.B
The relevant problem is this line: `lxc-start container32 20210620114326.862 ERROR network - network.c:netdev_configure_server_veth:659 - Operation not supported - Failed to create veth pair "vethotXiCD" and "vethMRVTzD"`. Are you running a custom kernel? Did you forget to enable CONFIG_VETH in its configuration? I would guess Docker doesn't work either, does it?
A.B
Can you run successfully as root this command? `ip link add name vethA type veth peer name vethB` (check the interfaces exist with `ip link show type veth`)
OP
`[root@toshi ~]# ip link add name vethA type veth peer name vethB` gives `Error: Unknown device type.`
OP
I didn't compile the kernel; I am using the standard Arch Linux kernel.
A.B
Anyway, the command above should have worked but didn't: you have a kernel support problem, and that is the cause of your troubles. What about `grep CONFIG_VETH /boot/config*`? And `uname -r`? Maybe you're not really running Arch but some sort of VPS (like OpenVZ) that prevents Arch's kernel from being used. Please install https://aur.archlinux.org/packages/virt-what/ and run the command. What's the result?
OP
`uname -r` gives `5.12.11-arch1-1`. I am on a real machine, because I can lick it here. /boot/config* doesn't even exist!
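(Side note: the stock Arch kernel does not install a /boot/config* file; assuming CONFIG_IKCONFIG_PROC is enabled, as it is in the stock Arch kernel, the running kernel's configuration can be inspected through /proc/config.gz instead:)

```shell
# Inspect the running kernel's config; assumes CONFIG_IKCONFIG_PROC is enabled
zgrep CONFIG_VETH /proc/config.gz
# "CONFIG_VETH=m" would mean veth is available as a loadable module
```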
Score:1

The problem was that the "virtual ethernet" functionality requires the veth driver. This driver is not compiled into the kernel but is available as a loadable module.

Loading the driver manually with `sudo modprobe veth` did the trick.
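For completeness, a sketch of the full fix, including a verification step and making the module load at boot (the modules-load.d mechanism assumes systemd, which Arch uses):

```shell
# Load the veth driver into the running kernel (one-off fix)
sudo modprobe veth

# Verify that veth pairs can now be created
sudo ip link add name vethA type veth peer name vethB
ip link show type veth            # should now list vethA and vethB
sudo ip link delete vethA         # cleanup; deleting one end removes its peer

# Have systemd-modules-load pull in veth at every boot
echo veth | sudo tee /etc/modules-load.d/veth.conf
```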

A.B
It should be autoloaded; I didn't think about the fact that it wouldn't be. Something in your setup is still preventing that, but glad you found a workaround.