
TC: link sharing for ingress traffic not working


I am trying to set up ingress link sharing with tc and an HTB qdisc. I created two macvlan subinterfaces (call them mgmt and data) under the parent physical interface enp8s0f0, whose link speed is 1000 Mbit/s:

   enp8s0f0   | -- mgmt(f6:cb:f6:4d:28:df)
              | -- data(32:b2:ee:5c:1b:0d)
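
For reference, the macvlan subinterfaces were created roughly as follows (a minimal sketch; the bridge mode shown here is an assumption and the MAC addresses are auto-assigned):

# sketch of the macvlan setup; mode bridge is an assumption
sudo ip link add link enp8s0f0 name mgmt type macvlan mode bridge
sudo ip link add link enp8s0f0 name data type macvlan mode bridge
sudo ip link set dev mgmt up
sudo ip link set dev data up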

I then made the following configuration based on an IFB device and tc. My intention is to guarantee 100 Mbit/s of inbound bandwidth for mgmt. The setup redirects all inbound traffic from enp8s0f0 to the IFB device enp8s0f0-ing and applies egress shaping there based on the destination MAC address.

sudo ip link add enp8s0f0-ing type ifb
sudo ip link set dev enp8s0f0-ing up qlen 1000
sudo tc qdisc add dev enp8s0f0 handle ffff: ingress
sudo tc filter add dev enp8s0f0 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev enp8s0f0-ing

sudo tc qdisc add dev enp8s0f0-ing root handle 1: htb default 20 
sudo tc class add dev enp8s0f0-ing parent 1: classid 1:1 htb rate 1000mbit ceil 1000mbit
sudo tc class add dev enp8s0f0-ing parent 1:1 classid 1:10 htb rate 100mbit ceil 1000mbit
sudo tc class add dev enp8s0f0-ing parent 1:1 classid 1:20 htb rate 900mbit ceil 1000mbit
sudo tc filter add dev enp8s0f0-ing parent 1:0 prio 1 u32 match ether dst f6:cb:f6:4d:28:df classid 1:10

But it is not working, although I can see the traffic going to the respective classes: when the destination MAC is f6:cb:f6:4d:28:df it goes to 1:10, and for any other destination MAC it goes to 1:20. It seems that only the link sharing on the IFB device is not working. (I also tried the same setup on the physical interface enp8s0f0 for egress traffic; there it works fine, apart from the accuracy of the link-sharing ratio.)
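
One way to exercise both classes at the same time is roughly the following (a sketch: iperf3, the port numbers, and the addresses 192.168.1.10 on mgmt and 192.168.1.20 on data are assumptions, not part of the setup above):

# on this host: one iperf3 server bound to each subinterface address
iperf3 -s -B 192.168.1.10 -p 5201 &
iperf3 -s -B 192.168.1.20 -p 5202 &

# on a remote sender: saturate both destinations simultaneously for 30 s
iperf3 -c 192.168.1.10 -p 5201 -t 30 &
iperf3 -c 192.168.1.20 -p 5202 -t 30 &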

ip -d link show enp8s0f0-ing
57: enp8s0f0-ing: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether be:c4:c2:87:d2:76 brd ff:ff:ff:ff:ff:ff promiscuity 0
    ifb addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

tc -d -s qdisc show dev enp8s0f0-ing
qdisc htb 1: root refcnt 2 r2q 10 default 0x20 direct_packets_stat 132 ver 3.17 direct_qlen 1000
 Sent 5615288632 bytes 9516010 pkt (dropped 0, overlimits 335112 requeues 0)
 backlog 0b 0p requeues 0

tc -d -s class show dev enp8s0f0-ing
class htb 1:10 parent 1:1 prio 0 quantum 200000 rate 100Mbit ceil 1Gbit linklayer ethernet burst 1600b/1 mpu 0b cburst 1375b/1 mpu 0b level 0
 Sent 1918459729 bytes 1329974 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 lended: 114945 borrowed: 131292 giants: 0
 tokens: 1917 ctokens: 178

class htb 1:1 root rate 1Gbit ceil 1Gbit linklayer ethernet burst 1375b/1 mpu 0b cburst 1375b/1 mpu 0b level 7
 Sent 5620271293 bytes 9534707 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 lended: 164070 borrowed: 0 giants: 0
 tokens: 178 ctokens: 178

class htb 1:20 parent 1:1 prio 0 quantum 200000 rate 900Mbit ceil 1Gbit linklayer ethernet burst 1462b/1 mpu 0b cburst 1375b/1 mpu 0b level 0
 Sent 3701811564 bytes 8204733 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 lended: 6680553 borrowed: 32778 giants: 0
 tokens: 209 ctokens: 179
Comment from setenforce 1:
What is not working exactly? Did you try to send max bandwidth on both classes at the same time, to be sure 1:10 is not just borrowing? Also, you could start with ceil at 100Mbit on 1:10 and 900Mbit on 1:20, just to be sure the rate is working as you want.
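
For example, the ceilings can be tightened in place with something like this (a sketch restating the suggestion above, reusing the same class structure):

sudo tc class change dev enp8s0f0-ing parent 1:1 classid 1:10 htb rate 100mbit ceil 100mbit
sudo tc class change dev enp8s0f0-ing parent 1:1 classid 1:20 htb rate 900mbit ceil 900mbit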