
OpenStack VMs on different compute nodes cannot communicate over the network


I have deployed OpenStack via charms, following the documentation at https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/ . The deployment runs on top of a vCloud system and looks healthy, with no apparent issues:

$ juju status
Model      Controller       Cloud/Region      Version  SLA          Timestamp
openstack  maas-controller  maas-one/default  2.9.43   unsupported  15:29:01+03:00

App                       Version  Status  Scale  Charm                   Channel        Rev  Exposed  Message
ceph-mon                  17.2.5   active      3  ceph-mon                quincy/stable  170  no       Unit is ready and clustered
ceph-osd                  17.2.5   active      4  ceph-osd                quincy/stable  559  no       Unit is ready (2 OSD)
ceph-radosgw              17.2.5   active      1  ceph-radosgw            quincy/stable  548  no       Unit is ready
cinder                    22.0.0   active      1  cinder                  2023.1/stable  625  no       Unit is ready
cinder-ceph               22.0.0   active      1  cinder-ceph             2023.1/stable  524  no       Unit is ready
cinder-mysql-router       8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
dashboard-mysql-router    8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
glance                    26.0.0   active      1  glance                  2023.1/stable  572  no       Unit is ready
glance-mysql-router       8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
keystone                  23.0.0   active      1  keystone                2023.1/stable  645  no       Application Ready
keystone-mysql-router     8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
mysql-innodb-cluster      8.0.33   active      3  mysql-innodb-cluster    8.0/stable      56  no       Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
ncc-mysql-router          8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
neutron-api               22.0.0   active      1  neutron-api             2023.1/stable  552  no       Unit is ready
neutron-api-mysql-router  8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
neutron-api-plugin-ovn    22.0.0   active      1  neutron-api-plugin-ovn  2023.1/stable   73  no       Unit is ready
nova-cloud-controller     27.0.0   active      1  nova-cloud-controller   2023.1/stable  665  no       PO: Unit is ready
nova-compute              27.0.0   active      3  nova-compute            2023.1/stable  662  no       Unit is ready
openstack-dashboard       23.1.0   active      1  openstack-dashboard     2023.1/stable  578  no       Unit is ready
ovn-central               23.03.0  active      3  ovn-central             23.03/stable    99  no       Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-chassis               23.03.0  active      3  ovn-chassis             23.03/stable   134  no       Unit is ready
placement                 9.0.0    active      1  placement               2023.1/stable   87  no       Unit is ready
placement-mysql-router    8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready
rabbitmq-server           3.9.13   active      1  rabbitmq-server         3.9/stable     177  no       Unit is ready
vault                     1.8.8    active      1  vault                   1.8/stable     108  no       Unit is ready (active: true, mlock: disabled)
vault-mysql-router        8.0.33   active      1  mysql-router            8.0/stable      35  no       Unit is ready

Unit                           Workload  Agent  Machine  Public address  Ports               Message
ceph-mon/0*                    active    idle   0/lxd/3  172.30.171.102                      Unit is ready and clustered
ceph-mon/1                     active    idle   1/lxd/3  172.30.171.38                       Unit is ready and clustered
ceph-mon/2                     active    idle   2/lxd/4  172.30.171.41                       Unit is ready and clustered
ceph-osd/0*                    active    idle   0        172.30.171.108                      Unit is ready (2 OSD)
ceph-osd/1                     active    idle   1        172.30.171.109                      Unit is ready (2 OSD)
ceph-osd/2                     active    idle   2        172.30.171.111                      Unit is ready (2 OSD)
ceph-osd/3                     active    idle   3        172.30.171.112                      Unit is ready (2 OSD)
ceph-radosgw/0*                active    idle   0/lxd/4  172.30.171.104  80/tcp              Unit is ready
cinder/0*                      active    idle   1/lxd/4  172.30.171.36   8776/tcp            Unit is ready
  cinder-ceph/0*               active    idle            172.30.171.36                       Unit is ready
  cinder-mysql-router/0*       active    idle            172.30.171.36                       Unit is ready
glance/0*                      active    idle   3/lxd/3  172.30.171.33   9292/tcp            Unit is ready
  glance-mysql-router/0*       active    idle            172.30.171.33                       Unit is ready
keystone/0*                    active    idle   0/lxd/2  172.30.171.30   5000/tcp            Unit is ready
  keystone-mysql-router/0*     active    idle            172.30.171.30                       Unit is ready
mysql-innodb-cluster/0         active    idle   0/lxd/0  172.30.171.101                      Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2*        active    idle   2/lxd/0  172.30.171.40                       Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/3         active    idle   1/lxd/5  172.30.171.45                       Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0*                 active    idle   1/lxd/2  172.30.171.43   9696/tcp            Unit is ready
  neutron-api-mysql-router/0*  active    idle            172.30.171.43                       Unit is ready
  neutron-api-plugin-ovn/0*    active    idle            172.30.171.43                       Unit is ready
nova-cloud-controller/0*       active    idle   3/lxd/1  172.30.171.32   8774/tcp,8775/tcp   PO: Unit is ready
  ncc-mysql-router/0*          active    idle            172.30.171.32                       Unit is ready
nova-compute/0                 active    idle   1        172.30.171.109                      Unit is ready
  ovn-chassis/1                active    idle            172.30.171.109                      Unit is ready
nova-compute/1                 active    idle   2        172.30.171.111                      Unit is ready
  ovn-chassis/2                active    idle            172.30.171.111                      Unit is ready
nova-compute/2*                active    idle   3        172.30.171.112                      Unit is ready
  ovn-chassis/0*               active    idle            172.30.171.112                      Unit is ready
openstack-dashboard/0*         active    idle   2/lxd/3  172.30.171.39   80/tcp,443/tcp      Unit is ready
  dashboard-mysql-router/0*    active    idle            172.30.171.39                       Unit is ready
ovn-central/0*                 active    idle   0/lxd/1  172.30.171.103  6641/tcp,6642/tcp   Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1                  active    idle   1/lxd/1  172.30.171.37   6641/tcp,6642/tcp   Unit is ready
ovn-central/2                  active    idle   2/lxd/1  172.30.171.42   6641/tcp,6642/tcp   Unit is ready
placement/0*                   active    idle   3/lxd/2  172.30.171.31   8778/tcp            Unit is ready
  placement-mysql-router/0*    active    idle            172.30.171.31                       Unit is ready
rabbitmq-server/0*             active    idle   2/lxd/2  172.30.171.44   5672/tcp,15672/tcp  Unit is ready
vault/0*                       active    idle   3/lxd/0  172.30.171.34   8200/tcp            Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*        active    idle            172.30.171.34                       Unit is ready

Machine  State    Address         Inst id              Series  AZ       Message
0        started  172.30.171.108  cloud1               jammy   default  Deployed
0/lxd/0  started  172.30.171.101  juju-02f86d-0-lxd-0  jammy   default  Container started
0/lxd/1  started  172.30.171.103  juju-02f86d-0-lxd-1  jammy   default  Container started
0/lxd/2  started  172.30.171.30   juju-02f86d-0-lxd-2  jammy   default  Container started
0/lxd/3  started  172.30.171.102  juju-02f86d-0-lxd-3  jammy   default  Container started
0/lxd/4  started  172.30.171.104  juju-02f86d-0-lxd-4  jammy   default  Container started
1        started  172.30.171.109  cloud2               jammy   default  Deployed
1/lxd/1  started  172.30.171.37   juju-02f86d-1-lxd-1  jammy   default  Container started
1/lxd/2  started  172.30.171.43   juju-02f86d-1-lxd-2  jammy   default  Container started
1/lxd/3  started  172.30.171.38   juju-02f86d-1-lxd-3  jammy   default  Container started
1/lxd/4  started  172.30.171.36   juju-02f86d-1-lxd-4  jammy   default  Container started
1/lxd/5  started  172.30.171.45   juju-02f86d-1-lxd-5  jammy   default  Container started
2        started  172.30.171.111  cloud4               jammy   default  Deployed
2/lxd/0  started  172.30.171.40   juju-02f86d-2-lxd-0  jammy   default  Container started
2/lxd/1  started  172.30.171.42   juju-02f86d-2-lxd-1  jammy   default  Container started
2/lxd/2  started  172.30.171.44   juju-02f86d-2-lxd-2  jammy   default  Container started
2/lxd/3  started  172.30.171.39   juju-02f86d-2-lxd-3  jammy   default  Container started
2/lxd/4  started  172.30.171.41   juju-02f86d-2-lxd-4  jammy   default  Container started
3        started  172.30.171.112  cloud3               jammy   default  Deployed
3/lxd/0  started  172.30.171.34   juju-02f86d-3-lxd-0  jammy   default  Container started
3/lxd/1  started  172.30.171.32   juju-02f86d-3-lxd-1  jammy   default  Container started
3/lxd/2  started  172.30.171.31   juju-02f86d-3-lxd-2  jammy   default  Container started
3/lxd/3  started  172.30.171.33   juju-02f86d-3-lxd-3  jammy   default  Container started

When I create VMs on different compute nodes, they cannot communicate over TCP or UDP; only ICMP works. If the VMs are on the same compute node, network communication is fine.

Take the following setup: VM1 (192.168.0.79) is hosted on Compute1, and VM2 (192.168.0.39) is hosted on Compute2.

If I connect to the console of VM1 and ping VM2, it works fine. If I try to reach any other port, only about one packet per minute gets through. I installed an apache2 web server on VM1 and ran tcpdump on the network interfaces of all four elements: VM1, VM2, Compute1, and Compute2. I noticed the following behavior:

I used the following command from VM2 to probe the web server on VM1 (-z: scan without sending data, -v: verbose, -w3: 3-second connect timeout):

while true; do nc -zvw3 192.168.0.79 80; done

The first packet or two receive a response; after that I see only TCP retransmissions.

[Wireshark screenshot of the packet capture]
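
For reference, the captures were roughly of this form (interface names are placeholders for my environment; OVN carries the inter-hypervisor traffic as Geneve on UDP port 6081):

# On Compute1/Compute2: the encapsulated tunnel traffic on the underlay NIC
# (ens192 is a placeholder for the compute's physical interface)
$ sudo tcpdump -ni ens192 udp port 6081

# On Compute1: the decapsulated traffic on the instance's tap interface
# (the tap name is a placeholder; it is derived from the Neutron port ID)
$ sudo tcpdump -ni tapXXXXXXXX-XX tcp port 80

# Inside VM1/VM2: the HTTP traffic itself
$ sudo tcpdump -ni eth0 tcp port 80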

I suspect there is some OVN configuration blocking the flow between the compute interface and the VM interface, but I don't know how to investigate this.
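
From the OVN documentation, it looks like the logical pipeline can be simulated with ovn-trace on an ovn-central unit, and the actual OpenFlow processing with ovs-appctl ofproto/trace on a compute node. I am not sure about the exact invocation on a charmed deployment; the logical switch name ("private"), port names, MAC addresses, and the tap OpenFlow port number below are placeholders from my setup:

# Confirm all three chassis are registered in the southbound database
$ juju ssh ovn-central/0 sudo ovn-sbctl show

# Simulate how the logical pipeline handles a TCP/80 packet from VM2 to VM1
$ juju ssh ovn-central/0 sudo ovn-trace private \
    'inport == "vm2-port" && eth.src == fa:16:3e:00:00:02 && eth.dst == fa:16:3e:00:00:01 && ip4.src == 192.168.0.39 && ip4.dst == 192.168.0.79 && tcp && tcp.dst == 80'

# On the destination compute: trace the same packet through the real
# OpenFlow tables of the integration bridge (find the tap's OpenFlow
# port number with 'ovs-ofctl show br-int')
$ juju ssh nova-compute/0 sudo ovs-appctl ofproto/trace br-int \
    'tcp,in_port=<tap-ofport>,dl_src=fa:16:3e:00:00:02,dl_dst=fa:16:3e:00:00:01,nw_src=192.168.0.39,nw_dst=192.168.0.79,tp_dst=80'

If ovn-trace shows the packet being delivered to the output port while ofproto/trace on the destination chassis drops it, that would suggest the problem is in the chassis-level flows rather than the logical configuration.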

Has anyone encountered this before? Or can someone give me a hand with investigating the OVN system?

Thank you very much for your support,
Alex
