On an ESXi guest running Ubuntu 18.04, I have two NICs, each connected to a different vSwitch, and each vSwitch has its own separate 10Gb uplink.
I created a bonded interface from these two links and tested both the balance-rr and balance-alb modes.
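For reference, the bond is built roughly like this (a sketch in plain iproute2 commands rather than my actual configuration; on Ubuntu 18.04 the equivalent is normally expressed in netplan, and the IP address below is just a placeholder):

# modprobe bonding
# ip link add bond0 type bond mode balance-rr miimon 100   # or: mode balance-alb
# ip link set ens160 down && ip link set ens160 master bond0
# ip link set ens192 down && ip link set ens192 master bond0
# ip link set bond0 up
# ip addr add 192.0.2.10/24 dev bond0                      # placeholder address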
When I test the bandwidth, the bonded interface does not exceed the 10Gb limit of a single link (it tops out at around 9.7Gbps):
bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
  \       iface                    Rx                    Tx                 Total
  ==============================================================================
             lo:            0.00  b/s            0.00  b/s            0.00  b/s
         ens160:            3.82 kb/s            5.30 Gb/s            5.30 Gb/s
         ens192:           15.33 kb/s            4.35 Gb/s            4.35 Gb/s
          bond0:           19.16 kb/s            9.64 Gb/s            9.64 Gb/s
  ------------------------------------------------------------------------------
          total:           38.31 kb/s           19.28 Gb/s           19.29 Gb/s
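The figures above are from bwm-ng while the test was running. For anyone reproducing it, a multi-stream sender along these lines should be representative (a hypothetical iperf3 invocation, not the exact command I used; the server address is a placeholder):

# iperf3 -c 192.0.2.20 -P 8 -t 30   # 8 parallel TCP streams to a placeholder server

Multiple parallel streams mostly matter for balance-alb, which keeps a given peer on one slave at a time; balance-rr stripes packets across both slaves even for a single flow.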
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: ens192: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: ens192
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens192
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0

Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0
I have already tested the same configuration without ESXi (Ubuntu on bare metal) and got an aggregated bandwidth of around 16Gbps on the bond0 interface.
Also, with a single NIC on an ESXi guest, I can saturate the link and get around 10Gbps.
Is there a limit on the ESXi vSwitch or on the guest that prevents the bond from exceeding 10Gbps?