Score:2

Connect (create an Ethernet link) between two VMs using their assigned SR-IOV Virtual Functions (VFs)


I have two KVM virtual machines on RHEL9. I partitioned an SR-IOV-capable physical NIC so that it exposes virtual NICs (Virtual Functions), and assigned one VF to each VM.
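For reference, a minimal sketch of what that host-side setup typically looks like; the PF name `ens1f0`, the VM name `vm1`, and the PCI address are placeholders for whatever `ip link` and `lspci` report on your system:

```sh
# Create two VFs on the physical NIC (PF name is a placeholder)
echo 2 > /sys/class/net/ens1f0/device/sriov_numvfs

# Find the VFs' PCI addresses
lspci | grep -i "virtual function"

# Attach one VF to a VM as a hostdev (PCI address below is an example)
cat > vf1.xml <<'EOF'
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x3b' slot='0x02' function='0x0'/>
  </source>
</interface>
EOF
virsh attach-device vm1 vf1.xml --persistent
```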

The virtual machines now see the VFs as Ethernet interfaces. The question is: how do I make a connection (a basic Ethernet link) between these virtual machines using these VFs?

What I want is VM1:vf_et1 <<---->> vf_et2:VM2. Basically, a connection/link like when you connect two routers with an Ethernet cable in the physical world.

Once I have this link, I'd configure IP addresses on the interfaces and form a BGP session between the two VMs over it.
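For concreteness, a minimal sketch of that addressing/BGP step inside one VM, assuming FRR is installed; the addresses and ASNs are hypothetical, and VM2 would mirror the config:

```sh
# Inside VM1; VM2 mirrors this with 10.0.0.2/30 and AS 65002.
# Interface name, addresses, and ASNs are all placeholders.
ip addr add 10.0.0.1/30 dev vf_et1
ip link set vf_et1 up

# Configure the BGP session via FRR's vtysh
vtysh -c 'configure terminal' \
      -c 'router bgp 65001' \
      -c 'neighbor 10.0.0.2 remote-as 65002'
```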

Here is an image diagram ( https://i.stack.imgur.com/iAGDt.jpg ) showing the Ethernet Link I want to create.

NOTE: The decision to use SR-IOV is to take the hypervisor out of the VM-to-VM data path. The Phase-1 design is 4 servers and 6 routers (10 VMs, all on a single KVM host). Phase 2 roughly doubles these numbers. 99% of data traffic will be VM-to-VM, and having the hypervisor and/or host CPU in that path is going to get ugly sooner or later. See image link for Phase 1: https://i.stack.imgur.com/FEkj2.jpg

Peter Zhabin:
Why are you trying to use hardware-sharing functionality (which is what SR-IOV is in the first place) to connect two VMs on the same host? An appropriate way would be to bypass any physical hardware that imposes performance limits and connect these VMs internally with a virtual network. That aside, both VMs in your case would appear as separate MACs (that you have assigned them) on the switch your host is connected to, so if they are in the same VLAN they could talk to each other like servers on the same segment.
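For concreteness, a minimal sketch of the same-VLAN setup Peter describes, configured from the KVM host; the PF name, VF indices, and VLAN ID are examples:

```sh
# Put both VFs in the same VLAN; depending on the NIC, VF-to-VF traffic
# is then forwarded by the NIC's embedded switch or hairpinned by the
# upstream switch.
ip link set dev ens1f0 vf 0 vlan 100
ip link set dev ens1f0 vf 1 vlan 100

# Verify the per-VF MAC/VLAN settings
ip link show dev ens1f0
```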
Mo Fatty:
@PeterZhabin using SR-IOV because for VM-to-VM packet transfers I do **not** want the hypervisor getting involved. This gives me a huge performance advantage. At the end of the day I'll have about 10 routers and 4 servers passing data around, and the hypervisor getting involved will hinder performance.
Peter Zhabin:
Well, VM-to-VM on the same host using virtio connected to a host bridge will likely get you above 40Gbps on any modern hardware. Should you need SR-IOV to use physical hardware, it's up to the upstream switch to connect these hosts: an SR-IOV VF is just a virtual PCIe device, pretty much like the old-fashioned HP Flex (which is also an SR-IOV implementation) that relies on the Flex Enet Module to provide interconnect for these cards.
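A minimal sketch of the virtio/host-bridge alternative Peter suggests; the bridge and network names are examples:

```sh
# On the KVM host: create a Linux bridge for VM-internal traffic
ip link add br0 type bridge
ip link set br0 up

# Expose it as a libvirt network so VMs can attach virtio interfaces to it
cat > br0-net.xml <<'EOF'
<network>
  <name>vm-internal</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
EOF
virsh net-define br0-net.xml
virsh net-start vm-internal
```

An iperf3 run between the two VMs is a quick way to check whether this path actually meets the throughput target.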
Mo Fatty:
@PeterZhabin I edited my post; see the "NOTE" section. Say I could get 40Gbps using virtio: what is the impact of having the CPU/hypervisor in the data-plane path? Maybe not much with only 10 VMs, but what if I scale up to 24 VMs? Any data-plane traffic that has to hit the CPU first will eventually run into trouble as it scales. That is my concern. I'd love for VM-to-VM data-plane traffic to **not** leave the physical NIC, if possible. If it has to leave the pNIC, then I wonder what I can do to make these connections with the above concerns in mind.
djdomi:
If I understand correctly, we're talking about KVM; the Proxmox wiki explains it, I think, a bit more clearly here: https://pve.proxmox.com/wiki/PCI(e)_Passthrough#:~:text=SR%2DIOV%20(Single%2DRoot,latency%20than%20software%20virtualized%20devices.