I have a fairly powerful Linux server, a Dell PowerEdge R6515 with 64 cores (AMD EPYC). It also has a dedicated 10GbE PCIe NIC:
lspci | grep 10G
41:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
41:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
We use it as a strongSwan VPN server. The problem is that we see quite a lot of interface discards at ~500 Mbit/s of encrypted RX traffic: out of roughly 30k packets the interface drops around 30, i.e. about 0.1% of packets. That is not critical, but there is room to improve.
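For reference, this is how I track the drop ratio over time. A small sketch that reads the standard sysfs counters for an interface (the function name `rx_drop_pct` is just for illustration; `p3p1` is the NIC from this host):

```shell
# rx_drop_pct IFACE -> print the lifetime RX drop percentage from sysfs counters
rx_drop_pct() {
    stats="/sys/class/net/$1/statistics"
    rx=$(cat "$stats/rx_packets")
    drop=$(cat "$stats/rx_dropped")
    awk -v ifname="$1" -v rx="$rx" -v drop="$drop" 'BEGIN {
        pct = (rx > 0) ? 100 * drop / rx : 0
        printf "%s: %s dropped of %s received (%.4f%%)\n", ifname, drop, rx, pct
    }'
}

# The NIC from this post; substitute any interface present on the system.
rx_drop_pct p3p1
```

Against the counters below (1342892 dropped of 2637785997 received) this works out to roughly 0.05% over the lifetime of the interface, so the ~0.1% figure above reflects the busier periods.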
The only network-related tuning so far is the tuned-adm profile, which is set to network-throughput.
ifconfig p3p1
p3p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::faf2:1eff:fed9:dc80 prefixlen 64 scopeid 0x20<link>
ether f8:f2:1e:d9:dc:80 txqueuelen 1000 (Ethernet)
RX packets 2637785997 bytes 1724447946355 (1.5 TiB)
RX errors 0 **dropped 1342892** overruns 0 frame 0
TX packets 2943486813 bytes 1844888609689 (1.6 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
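One thing I plan to look at is per-queue drop counters and the RX ring size, since bursty traffic overflowing small rings is a common source of exactly this kind of discard. A sketch of the checks (the 4096 maximum is what I would expect for the i40e driver behind the X710, but it should be confirmed against the "pre-set maximums" that `ethtool -g` reports on this box):

```shell
# Per-queue/driver statistics: look for non-zero drop or no-buffer counters.
ethtool -S p3p1 | grep -iE 'drop|discard|no_buf'

# Current and maximum RX/TX ring sizes.
ethtool -g p3p1

# Grow the rings toward the hardware maximum (do not exceed the
# "pre-set maximums" shown above; 4096 is typical for i40e).
ethtool -G p3p1 rx 4096 tx 4096
```

Larger rings trade a bit of latency and memory for burst tolerance, which seems like an acceptable trade-off for a VPN gateway.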
I'm using CentOS 7.9.
Also, the softirq count sits constantly at around 130k. Of course the load does not balance equally between the cores, which is why at other sites we use the PowerEdge R340, which has just 12 cores and performs better both on discards and on interrupt distribution.
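To quantify how uneven the distribution is, the per-CPU counters in /proc can be inspected directly (grepping for `i40e` is an assumption about how the X710's queue vectors are named in /proc/interrupts; the name may differ):

```shell
# Interrupt counts for the NIC's RX/TX queue vectors (i40e driver assumed).
grep -i 'i40e' /proc/interrupts || echo "no i40e vectors found; driver name may differ"

# Per-CPU NET_RX softirq counts: a few very large columns means a few
# cores are doing nearly all of the receive processing.
grep 'NET_RX' /proc/softirqs
```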
I think this behavior is related to the large core count. Is there anything that could be improved? I see there are lots of tuning possibilities, but in my experience almost every tweak comes with some drawback.
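One direction that seems directly tied to the core count: drivers typically allocate one RSS queue per CPU by default, so 64 queues here, while an IPsec gateway's ESP traffic hashes onto very few of them anyway (a single SA is one flow). A sketch of reducing the queue count so interrupts land on fewer, dedicated cores (16 is an arbitrary example, not a recommendation):

```shell
# Show how many combined RX/TX queues the NIC currently uses.
ethtool -l p3p1

# Reduce the number of combined queues (16 chosen only as an example).
ethtool -L p3p1 combined 16

# The remaining vectors can then be spread by irqbalance, or pinned
# manually via /proc/irq/<N>/smp_affinity_list.
```

This is the kind of change I would test carefully first, since fewer queues also caps the parallelism available to non-VPN traffic.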