Score:0

Two bonded 10Gb NICs on an ESXi guest OS not using full bandwidth

cn flag

On an ESXi guest machine running Ubuntu 18.04, two NICs are connected to two different vSwitches, each of which has its own 10Gb uplink.

I created a bonded interface from these two links and tried both the balance-rr and balance-alb modes. When testing the bandwidth, the bonded interface does not exceed the 10Gb limit of a single link (around 9.7Gbps).

 bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
  input: /proc/net/dev type: rate
  \         iface                   Rx                   Tx                Total
  ==============================================================================
               lo:           0.00  b/s            0.00  b/s            0.00  b/s
           ens160:           3.82 kb/s            5.30 Gb/s            5.30 Gb/s
           ens192:          15.33 kb/s            4.35 Gb/s            4.35 Gb/s
            bond0:          19.16 kb/s            9.64 Gb/s            9.64 Gb/s
  ---------------------------------------------
            total:          38.31 kb/s           19.28 Gb/s           19.29 Gb/s

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: ens192: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: ens192
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens192
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0

Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0
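
For reference, on Ubuntu 18.04 a bond like the one above is usually defined through netplan. The following is only a minimal sketch, not the configuration from the question: the interface names match the output above, but the address is a placeholder and balance-rr is just one of the two modes that were tried.

    network:
      version: 2
      renderer: networkd
      ethernets:
        ens160: {}
        ens192: {}
      bonds:
        bond0:
          interfaces: [ens160, ens192]
          addresses: [192.168.0.10/24]    # placeholder address
          parameters:
            mode: balance-rr              # balance-alb was the other mode tested
            mii-monitor-interval: 100     # matches the 100 ms MII polling shown above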

I already tested the same configuration without ESXi (Ubuntu on a bare-metal machine) and got an aggregated bandwidth of around 16Gbps on the bond0 interface. Also, with a single NIC on an ESXi guest, I can saturate the link and get around 10Gbps.
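
The question does not say which tool produced these numbers; a common way to generate this kind of load is a multi-stream iperf3 run, roughly like the following (the server address is a placeholder):

    # on the remote endpoint
    iperf3 -s

    # on the bonded guest: 8 parallel TCP streams for 30 seconds
    iperf3 -c <server-ip> -P 8 -t 30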

Is there any limit imposed by the ESXi vSwitch or the guest machine?

br flag
How are the vSwitches configured? Are you using jumbo frames?
br flag
Also have a read of this: https://unix.stackexchange.com/questions/469346/link-aggregation-bonding-for-bandwidth-does-not-work-when-link-aggregation-gro
raitech avatar
cn flag
The vSwitch configuration was left at defaults: MTU 1500, route based on IP hash. I tested the config with one NIC and I can get the full 10Gb bandwidth.
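
If it helps to cross-check from the host side, the MTU and teaming policy of a standard vSwitch can be read with esxcli (vSwitch0 is only an example name):

    # list standard vSwitches with their MTU and uplinks
    esxcli network vswitch standard list

    # show the load-balancing / failover policy of a specific vSwitch
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
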
Score:1
eu flag

https://www.vmware.com/pdf/10GigE_performance.pdf

This article may help. The practical limit is 9.3Gbps for packets that use the standard MTU size because of protocol overheads.
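
As a rough sanity check on that figure: with a 1500-byte MTU, a full TCP segment carries about 1448 bytes of payload (1500 minus 20 bytes IP header, 20 bytes TCP header and 12 bytes of TCP timestamp options), while the frame occupies about 1538 bytes on the wire (1500 + 14 Ethernet header + 4 FCS + 8 preamble + 12 inter-frame gap). That gives a theoretical ceiling of roughly 10 x 1448 / 1538 = about 9.4 Gbps per 10GbE link before any other overheads.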

djdomi avatar
za flag
In general, though, it does not solve the issue the author describes, as the same applies to Linux; I have a similar situation, and a link-only answer does not solve it.