Only getting 25Gb/s instead of 100Gb/s from a Mellanox switch running Cumulus Linux

I have a Mellanox 100Gb/s switch (running Cumulus Linux 4.1) that I use to connect multiple servers, each with a Mellanox ConnectX-5 100Gb/s card. The servers connect to the switch via DAC cables. The links come up, but I only get 25Gb/s of port speed.

I checked the switch, and it appears that each QSFP switch port is operating as four individual 25Gb/s ports instead, as the table below shows. The servers are connected to ports swp1 through swp8.

cumulus@cumulus:mgmt:~$ net show interface all
State  Name     Spd   MTU    Mode       LLDP                         Summary
-----  -------  ----  -----  ---------  ---------------------------  ------------------
UP     lo       N/A   65536  Loopback                                IP: 127.0.0.1/8
       lo                                                            IP: ::1/128
UP     eth0     100M  1500   Mgmt       SomeOtherSwitch (24)         Master: mgmt(UP)
       eth0                                                          IP: 172.20.72.5/24
UP     swp1s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp1s1   N/A   9216   Default                                 
DN     swp1s2   N/A   9216   Default
DN     swp1s3   N/A   9216   Default
UP     swp2s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp2s1   N/A   9216   Default
DN     swp2s2   N/A   9216   Default
DN     swp2s3   N/A   9216   Default
UP     swp3s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp3s1   N/A   9216   Default
DN     swp3s2   N/A   9216   Default
DN     swp3s3   N/A   9216   Default
UP     swp4s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp4s1   N/A   9216   Default
DN     swp4s2   N/A   9216   Default
DN     swp4s3   N/A   9216   Default
UP     swp5s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp5s1   N/A   9216   Default
DN     swp5s2   N/A   9216   Default
DN     swp5s3   N/A   9216   Default
UP     swp6s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp6s1   N/A   9216   Default
DN     swp6s2   N/A   9216   Default
DN     swp6s3   N/A   9216   Default
UP     swp7s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp7s1   N/A   9216   Default
DN     swp7s2   N/A   9216   Default
DN     swp7s3   N/A   9216   Default
UP     swp8s0   25G   9216   Trunk/L2                                Master: bridge(UP)
DN     swp8s1   N/A   9216   Default
DN     swp8s2   N/A   9216   Default
DN     swp8s3   N/A   9216   Default

According to ethtool, the servers do support the desired 100Gb/s link modes via their ConnectX cards:

Settings for enp175s0f0:
        Supported ports: [ Backplane ]
        Supported link modes:   1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseKX/Full
                                10000baseKR/Full
                                40000baseKR4/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                40000baseLR4/Full
                                25000baseCR/Full
                                25000baseKR/Full
                                25000baseSR/Full
                                50000baseCR2/Full
                                50000baseKR2/Full
                                100000baseKR4/Full
                                100000baseSR4/Full
                                100000baseCR4/Full
                                100000baseLR4_ER4/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  Not reported
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 25000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000004 (4)
                               link
        Link detected: yes

Did I miss something when setting this up? I tried setting the link speed to 100000 manually, but it made no difference.
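
For reference, the manual attempt was along the lines of the following (a rough sketch; the hostname prompt and interface name are just from my setup), and ethtool still reported 25000Mb/s afterwards:

user@server:~$ sudo ethtool -s enp175s0f0 speed 100000 duplex full autoneg off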

Answering my own question: I found references to /etc/cumulus/ports.conf, and it turns out the ports in question were indeed configured to operate as 4x25G breakout ports.

I edited the config file to put those ports at 100G; the configuration below works for me:

cumulus@cumulus:mgmt:~$ cat /etc/cumulus/ports.conf
# ports.conf --
#
#        This file controls port speed, aggregation and subdivision.
#
# For example, the QSFP28 ports can be split into multiple interfaces. This
# file sets the number of interfaces per port and the speed of those interfaces.
#
# You must reload switchd for changes to take effect.
#
# mlnx,x86_MSN2100 has:
#     16 QSFP28 ports numbered 1-16
#         These ports are configurable as 40G, 50G, 2x50G, or 100G; or a subset
#         of them can be split into 4x25G or 4x10G.
#

# QSFP28 ports
#
# <port label>    = [40G|50G|100G]
#   or when split = [2x50G|4x10G|4x25G]
1=100G
2=100G
3=100G
4=100G
5=100G
6=100G
7=100G
8=100G
9=100G
10=100G
11=100G
12=100G
13=100G
14=100G
15=100G
16=4x10G
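
As the header of ports.conf notes, switchd must be reloaded for the change to take effect. A minimal sketch of the remaining steps on Cumulus Linux 4.1 (note that restarting switchd briefly interrupts forwarding on all ports):

cumulus@cumulus:mgmt:~$ sudo systemctl restart switchd.service
cumulus@cumulus:mgmt:~$ net show interface

After the restart, the breakout sub-interfaces (swp1s0 through swp8s3) are gone and the ports show up as plain swp1 through swp8; the servers now link at 100G.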