Connecting two VMs running in KVM

Short: Get two VMs running on the same hypervisor to talk to each other.

I have two VMs running under KVM and I am trying to manage them via Cockpit.

The hypervisor and the VMs are running Ubuntu 20.04.

The VMs are configured to use br0 (192.168.1.248/24), which is attached to enp2s0.
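For reference, on Ubuntu 20.04 a bridge like this is usually defined with netplan. A minimal sketch of such a setup (the file name and gateway are hypothetical; the address and interface names are from the post):

# /etc/netplan/01-br0.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    enp2s0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp2s0]
      addresses: [192.168.1.248/24]
      gateway4: 192.168.1.1   # assumed LAN gateway
      dhcp4: no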

They get their own IPs on the local LAN, e.g. 192.168.1.152 and 192.168.1.220.

These can be seen and pinged from the local LAN. They however cannot ping or see each other.
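Two quick checks that help narrow this down, run on the host (br0 and the VM addresses are the ones above):

# Watch whether one VM's pings to the other ever reach the bridge
sudo tcpdump -ni br0 icmp and host 192.168.1.152

# Confirm the bridge has learned both VMs' MAC addresses
bridge fdb show br br0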

How can I connect them?

Detail:

They don't necessarily have to reach each other via 192.168.*; a virtual network on the KVM host would also work, as long as they remain accessible on the LAN. I've tried creating a virtual network via the Cockpit interface.

Tried: creating a virtual network via the VM interface, but the VMs do not show the virtual interface in ifconfig - just lo and enp1s0.
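As a cross-check, a NIC on a libvirt virtual network can also be attached from the command line; a sketch, assuming a guest named vm1 and libvirt's default network (both names hypothetical):

# Hot-plug a virtio NIC on the 'default' network into guest 'vm1',
# and persist it in the domain XML as well (--config plus --live)
sudo virsh attach-interface --domain vm1 --type network \
    --source default --model virtio --config --live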

Update:

Not sure if this is related (I am also blocked by this in the Cockpit interface when trying to create virtual networks):

sudo systemctl status libvirtd
    ● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-08-29 13:46:24 PDT; 6h ago
TriggeredBy: ● libvirtd.socket
             ● libvirtd-admin.socket
             ● libvirtd-ro.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 1068 (libvirtd)
      Tasks: 20 (limit: 32768)
     Memory: 32.0M
     CGroup: /system.slice/libvirtd.service
             ├─  1068 /usr/sbin/libvirtd
             ├─ 52826 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/virtual0.conf --leasefile-ro --dhcp-script=/usr/lib/libv>
             └─182682 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/virtual1.conf --leasefile-ro --dhcp-script=/usr/lib/libv>

Aug 29 19:45:25 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:25 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:32 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:33 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:33 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:33 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:39 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:40 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:40 dio libvirtd[1068]: internal error: No more available PCI slots
Aug 29 19:45:40 dio libvirtd[1068]: internal error: No more available PCI slots
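The "No more available PCI slots" error usually means the guest machine type has no free PCI(e) slots left for new devices. If the guests use the q35 machine type, one common remedy (a sketch, not verified against these domains) is to pre-allocate extra root ports in the domain XML via virsh edit:

<!-- each pcie-root-port adds one hot-pluggable PCIe slot -->
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>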

Discussion:

Output from running these on the VMs and the host:

sudo ebtables-save
*filter
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
sudo nft list ruleset
table bridge filter {
        chain INPUT {
                type filter hook input priority filter; policy accept;
        }

        chain FORWARD {
                type filter hook forward priority filter; policy accept;
        }

        chain OUTPUT {
                type filter hook output priority filter; policy accept;
        }
}
ip -d link

Output for br0:

4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4c:cc:6a:06:f3:8b brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.4c:cc:6a:6:f3:8b designated_root 8000.4c:cc:6a:6:f3:8b root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  280.62 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 64000 gso_max_segs 64

Solution: This turned out to be Docker running in parallel on the same system. Make sure you're not running Docker when installing KVM - it will block all communication between virtual hosts.
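For anyone hitting the same issue: Docker sets the iptables FORWARD chain policy to DROP, and when the br_netfilter module is loaded, bridged frames are also run through iptables, so guest-to-guest traffic on br0 gets dropped. A sketch of how to confirm and work around it (standard iptables/sysctl invocations; test before persisting anything):

# Is the FORWARD policy DROP? (Docker sets this)
sudo iptables -S FORWARD | head -1

# Are bridged frames being passed through iptables?
# (the sysctl only exists while br_netfilter is loaded)
sysctl net.bridge.bridge-nf-call-iptables

# Option 1: explicitly allow traffic that stays on br0
sudo iptables -I FORWARD -i br0 -o br0 -j ACCEPT

# Option 2: stop filtering bridged frames entirely (affects all bridges)
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0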

Tom Yan: Sounds like you or libvirt have configured ebtables/nftables to drop certain L2 forwarded traffic (not from enp2s0 AND not to enp2s0). Or it could be some STP-related configuration.
Tom Yan: Check with `ebtables-save` AND `nft list ruleset` to rule out the former.
enko: @TomYan I've edited the post with the output info. Thanks.
Nikita Kipriyanov: `ip -d link` on the host might also shed some light (it will show bridge status, including STP status).
enko: I've tried both enabling and disabling STP on br0 (see the commands below); no change.
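For reference, STP on a Linux bridge can be toggled with iproute2 (br0 as in the post):

# Enable STP on br0
sudo ip link set dev br0 type bridge stp_state 1

# Disable it again
sudo ip link set dev br0 type bridge stp_state 0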