Installing DevStack stable/xena on Ubuntu 20.04 fails (arping br-ex error)

I'm having issues installing DevStack stable/xena on a clean Ubuntu 20.04 cloud image (a VM created by virt-install with two interfaces: ens3 (192.168.122.36, on the host's virbr0) as the public interface, and ens4 (192.168.100.100, on the host's virbr1) for management).

The DevStack installation proceeds almost to the end but exits on an arping check on br-ex. Tail of stack.sh.log:

2022-02-02 21:52:21.085 | +lib/neutron-legacy:_move_neutron_addresses_route:671  IP_BRD='192.168.122.36/24 brd 192.168.122.255'
2022-02-02 21:52:21.093 | +lib/neutron-legacy:_move_neutron_addresses_route:673  '[' 192.168.122.1 '!=' '' ']'
2022-02-02 21:52:21.101 | +lib/neutron-legacy:_move_neutron_addresses_route:674  ADD_DEFAULT_ROUTE='sudo ip -f inet r replace default via 192.168.122.1 dev br-ex'
2022-02-02 21:52:21.109 | +lib/neutron-legacy:_move_neutron_addresses_route:677  [[ True == \T\r\u\e ]]
2022-02-02 21:52:21.118 | +lib/neutron-legacy:_move_neutron_addresses_route:678  ADD_OVS_PORT='sudo ovs-vsctl --may-exist add-port br-ex ens3'
2022-02-02 21:52:21.127 | +lib/neutron-legacy:_move_neutron_addresses_route:681  [[ False == \T\r\u\e ]]
2022-02-02 21:52:21.135 | +lib/neutron-legacy:_move_neutron_addresses_route:685  [[ 192.168.122.36/24 brd 192.168.122.255 != '' ]]
2022-02-02 21:52:21.143 | +lib/neutron-legacy:_move_neutron_addresses_route:686  IP_DEL='sudo ip addr del 192.168.122.36/24 brd 192.168.122.255 dev ens3'
2022-02-02 21:52:21.153 | +lib/neutron-legacy:_move_neutron_addresses_route:687  IP_REPLACE='sudo ip addr replace 192.168.122.36/24 brd 192.168.122.255 dev br-ex'
2022-02-02 21:52:21.164 | +lib/neutron-legacy:_move_neutron_addresses_route:688  IP_UP='sudo ip link set br-ex up'
2022-02-02 21:52:21.172 | +lib/neutron-legacy:_move_neutron_addresses_route:689  [[ inet == \i\n\e\t ]]
2022-02-02 21:52:21.183 | ++lib/neutron-legacy:_move_neutron_addresses_route:690  awk '{ print $1; exit }'
2022-02-02 21:52:21.183 | ++lib/neutron-legacy:_move_neutron_addresses_route:690  echo 192.168.122.36/24 brd 192.168.122.255
2022-02-02 21:52:21.185 | ++lib/neutron-legacy:_move_neutron_addresses_route:690  grep -o -E '(.*)/'
2022-02-02 21:52:21.192 | ++lib/neutron-legacy:_move_neutron_addresses_route:690  cut -d / -f1
2022-02-02 21:52:21.202 | +lib/neutron-legacy:_move_neutron_addresses_route:690  IP=192.168.122.36
2022-02-02 21:52:21.210 | +lib/neutron-legacy:_move_neutron_addresses_route:691  ARP_CMD='sudo arping -A -c 3 -w 5 -I br-ex 192.168.122.36 '
2022-02-02 21:52:21.217 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo ip addr del 192.168.122.36/24 brd 192.168.122.255 dev ens3
2022-02-02 21:52:21.236 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo ip addr replace 192.168.122.36/24 brd 192.168.122.255 dev br-ex
2022-02-02 21:52:21.252 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo ip link set br-ex up
2022-02-02 21:52:21.272 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo ovs-vsctl --may-exist add-port br-ex ens3
2022-02-02 21:52:21.293 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo ip -f inet r replace default via 192.168.122.1 dev br-ex
2022-02-02 21:52:21.309 | +lib/neutron-legacy:_move_neutron_addresses_route:697  sudo arping -A -c 3 -w 5 -I br-ex 192.168.122.36
2022-02-02 21:52:24.316 | ARPING 192.168.122.36 from 192.168.122.36 br-ex
2022-02-02 21:52:24.316 | Sent 3 probes (3 broadcast(s))
2022-02-02 21:52:24.316 | Received 0 response(s)
2022-02-02 21:52:24.341 | +lib/neutron-legacy:_move_neutron_addresses_route:1  exit_trap
2022-02-02 21:52:24.349 | +./stack.sh:exit_trap:521                  local r=1
2022-02-02 21:52:24.359 | ++./stack.sh:exit_trap:522                  jobs -p
2022-02-02 21:52:24.367 | +./stack.sh:exit_trap:522                  jobs=
2022-02-02 21:52:24.376 | +./stack.sh:exit_trap:525                  [[ -n '' ]]
2022-02-02 21:52:24.384 | +./stack.sh:exit_trap:531                  '[' -f /tmp/tmp.WDApXUJF5c ']'
2022-02-02 21:52:24.394 | +./stack.sh:exit_trap:532                  rm /tmp/tmp.WDApXUJF5c
2022-02-02 21:52:24.407 | +./stack.sh:exit_trap:536                  kill_spinner
2022-02-02 21:52:24.422 | +./stack.sh:kill_spinner:431               '[' '!' -z '' ']'
2022-02-02 21:52:24.430 | +./stack.sh:exit_trap:538                  [[ 1 -ne 0 ]]
2022-02-02 21:52:24.441 | +./stack.sh:exit_trap:539                  echo 'Error on exit'
2022-02-02 21:52:24.441 | Error on exit
2022-02-02 21:52:24.447 | +./stack.sh:exit_trap:541                  type -p generate-subunit
2022-02-02 21:52:24.455 | +./stack.sh:exit_trap:542                  generate-subunit 1643837814 930 fail
2022-02-02 21:52:24.807 | +./stack.sh:exit_trap:544                  [[ -z /opt/stack/logs ]]
2022-02-02 21:52:24.814 | +./stack.sh:exit_trap:547                  /usr/bin/python3.8 /home/stack/devstack/tools/worlddump.py -d /opt/stack/logs
2022-02-02 21:52:25.437 | +./stack.sh:exit_trap:556                  exit 1
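After a failed run the address has already been moved to br-ex, so the failing check can be replayed by hand to see whether it is arping's exit status, rather than connectivity, that kills stack.sh. A sketch using the same command as ARP_CMD in the log above:

```shell
# Replay the exact gratuitous-ARP check from the log and capture its exit code.
# With -A, arping announces its own address; receiving 0 replies is normal for
# an unused address, so a non-zero exit code here is suspicious.
sudo arping -A -c 3 -w 5 -I br-ex 192.168.122.36
rc=$?
echo "arping exit code: $rc"
```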

The networking parts of my local.conf:

HOST_IP=192.168.122.36
SERVICE_HOST=192.168.122.36
#HOST_IPV6=2001:db8::7

## Neutron options
Q_USE_SECGROUP=True
#FLOATING_RANGE="192.168.122.0/24"
#Q_FLOATING_ALLOCATION_POOL=start=192.168.122.240,end=192.168.122.254
FLOATING_RANGE=192.168.122.224/27
IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
PUBLIC_NETWORK_GATEWAY="192.168.122.1"
PUBLIC_INTERFACE=ens3

# try LinuxBridge as ovs gives arping error for br-ex on $HOST_IP
#Q_USE_PROVIDERNET_FOR_PUBLIC=True
#Q_AGENT=linuxbridge
#LB_PHYSICAL_INTERFACE=ens3
#PUBLIC_PHYSICAL_NETWORK=default
#LB_INTERFACE_MAPPINGS=default:ens3

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

(I tried LinuxBridge as a workaround for OVS, but it still wants br-ex defined, and I get worse errors: neutron won't start.)

My network configuration at the end seems OK:

stack@devstackxena:~/devstack$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ed:c1:f2 brd ff:ff:ff:ff:ff:ff
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:28:b9:e7 brd ff:ff:ff:ff:ff:ff
27: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4a:4d:4c:08:59:d1 brd ff:ff:ff:ff:ff:ff
28: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7a:0c:6d:1c:17:49 brd ff:ff:ff:ff:ff:ff
29: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether c6:1e:0f:72:91:4c brd ff:ff:ff:ff:ff:ff
stack@devstackxena:~/devstack$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 52:54:00:ed:c1:f2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:feed:c1f2/64 scope link 
       valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:28:b9:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 2001:db8:ca2:3:5054:ff:fe28:b9e7/64 scope global dynamic mngtmpaddr 
       valid_lft 3269sec preferred_lft 3269sec
    inet6 fe80::5054:ff:fe28:b9e7/64 scope link 
       valid_lft forever preferred_lft forever
27: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:4d:4c:08:59:d1 brd ff:ff:ff:ff:ff:ff
28: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 7a:0c:6d:1c:17:49 brd ff:ff:ff:ff:ff:ff
29: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether c6:1e:0f:72:91:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.36/24 brd 192.168.122.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::c41e:fff:fe72:914c/64 scope link 
       valid_lft forever preferred_lft forever

OVS config:

stack@devstackxena:~/devstack$ sudo ovs-vsctl show
2448b59c-19b3-4043-ab1f-c3bbc0e66102
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port ens3
            Interface ens3
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.13.3"
stack@devstackxena:~/devstack$ ip route
default via 192.168.122.1 dev br-ex 
192.168.100.0/24 dev ens4 proto kernel scope link src 192.168.100.100 
192.168.122.0/24 dev br-ex proto kernel scope link src 192.168.122.36 

tcpdump of ARP traffic on br-ex:

stack@devstackxena:~$ sudo tcpdump -i br-ex -n icmp or arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-ex, link-type EN10MB (Ethernet), capture size 262144 bytes
12:09:52.568287 ARP, Request who-has 192.168.122.37 tell 192.168.122.1, length 28
12:09:53.592292 ARP, Request who-has 192.168.122.37 tell 192.168.122.1, length 28
12:10:18.892563 ARP, Request who-has 192.168.122.36 (ff:ff:ff:ff:ff:ff) tell 192.168.122.36, length 28
12:10:19.892728 ARP, Request who-has 192.168.122.36 (ff:ff:ff:ff:ff:ff) tell 192.168.122.36, length 28
12:10:20.892764 ARP, Request who-has 192.168.122.36 (ff:ff:ff:ff:ff:ff) tell 192.168.122.36, length 28
12:10:48.145743 IP 192.168.122.1 > 192.168.122.36: ICMP echo request, id 33715, seq 0, length 28
12:10:48.145850 IP 192.168.122.36 > 192.168.122.1: ICMP echo reply, id 33715, seq 0, length 28

The arping probes for 192.168.122.36 show up in the capture, but no replies are received (that's my interpretation of why DevStack exits).

I'm not sure where the additional 192.168.122.37 comes from; that IP doesn't exist in the VM (and there are no other VMs on the host). I assume an interface can arping its own IP, since that's what the DevStack script is attempting?

Thanks for any pointers or ideas.


I just encountered the same issue and tracked it down to this bug: https://github.com/iputils/iputils/issues/247. So either manually install a recent version of iputils-arping, or e.g. create a wrapper script that corrects the exit code.
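A minimal sketch of the wrapper approach: move the real binary aside and install a script under the original name that forwards all arguments but always exits 0. The install path varies, so it is looked up rather than hard-coded:

```shell
# Locate the installed arping (path varies by distro; resolved via command -v).
ARPING="$(command -v arping)"
sudo mv "$ARPING" "${ARPING}.real"

# Install a wrapper under the original name that ignores the buggy exit code
# (working around https://github.com/iputils/iputils/issues/247).
sudo tee "$ARPING" <<EOF >/dev/null
#!/bin/sh
${ARPING}.real "\$@"
exit 0
EOF
sudo chmod +x "$ARPING"
```

Note this masks all arping failures, so it should be reverted after stack.sh completes.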

AndyW: Thanks for your input.
AndyW: Thanks very much for your help. How do I get the latest iputils? On Ubuntu 20.04 (standard install), the current status is:

arping -V
arping from iputils s20190709

Then I tried:

git clone https://github.com/iputils/iputils.git
sudo apt install gcc make meson ninja-build xsltproc libcap-dev
cd iputils
./configure --options
make
sudo make install

After that I still have:

arping -V
arping from iputils s20190709

(no change!) For your second point, I'm unsure how to add a wrapper, since stack.sh uses an exit_trap routine; if I put the arping command in a function with an exit 0 at the end, that doesn't avoid the trap.
Answer author: Hi Andy, strange; I don't know what iputils master looks like, but I guess you could check out a later tag and build that. You can see the releases and their corresponding issues here: https://github.com/iputils/iputils/releases. As for the workaround: I renamed the executable and created a bash script with the original name; the script wraps the original executable and returns 0 instead. I hope that explains the workaround in more detail.
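One likely reason the rebuild in the comments had no effect: iputils builds with meson/ninja and ships no ./configure script, so the `./configure --options; make; sudo make install` sequence would not have produced a new binary. A hedged sketch (the tag name is an assumption; pick whichever release the issue thread says contains the fix):

```shell
# iputils has no ./configure; it builds with meson + ninja.
sudo apt install -y meson ninja-build gcc libcap-dev
git clone https://github.com/iputils/iputils.git
cd iputils
git checkout 20211215            # hypothetical tag: check the releases page for one containing the fix
meson setup builddir
ninja -C builddir
sudo ninja -C builddir install   # typically installs under /usr/local
hash -r                          # forget the shell's cached path to the old arping
arping -V                        # confirm the reported version actually changed
```

If `arping -V` still reports s20190709, the old binary likely precedes the new one in PATH; compare `command -v arping` against the install prefix.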