I have an 8x18TB RAID10 mdadm volume on an Ubuntu server. When I connect from Windows over iSCSI I get 220 MB/s read and 375 MB/s write. I guess it is an iSCSI protocol bottleneck and I could not improve it, which leaves me with multipath.
I have 2 NICs on both the server and the client. I tried many things but could not figure out how to use multipath (MPIO) or multiple sessions (MCS) in the Windows iSCSI Initiator interface. I tried connecting sessions as
192.168.122.100 --> 192.168.122.110
192.168.123.100 --> 192.168.123.110
I see both portals in the portal list and add the favorite target with MPIO enabled, but when I try MCS it says the target is already connected. I also tried
192.168.122.100 --> 192.168.122.110
                |--> 192.168.122.111
I could not get it to work. What adjustments should I make on the server side? I am clueless at the moment. By the way, the client is Windows 10 Pro 20H2, so I guess there are no edition restrictions.
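For reference, here is the kind of server-side change I understand MPIO needs, sketched with LIO/targetcli (an assumption on my part that this is the right approach; the IQN below is just a placeholder for mine, and other target stacks would need the equivalent): replace the default wildcard portal with one portal per NIC IP in the same TPG.

    sudo targetcli
    # inside the targetcli shell; the IQN is a placeholder
    cd /iscsi/iqn.2003-01.org.linux-iscsi.storage:avid/tpg1/portals
    delete 0.0.0.0 3260              # drop the default wildcard portal
    create 192.168.122.110 3260      # portal on NIC 1
    create 192.168.123.110 3260      # portal on NIC 2
    cd /
    saveconfig

The idea, as I understand it, is that the Windows initiator then logs in once per portal, each session bound to the matching local NIC, and MPIO round-robins across the two sessions.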
Note: we are trying to make this work smoothly with Avid Media Composer, since it wants to see actual disks. If I cannot get the performance I need from iSCSI, I plan to switch to nbd. Any good alternatives to iSCSI, or other suggestions that work over 10GbE Ethernet and a switch?
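If I do end up testing nbd, the export I would sketch looks roughly like this (assuming the standard nbd-server package; the array path /dev/md0 and the export name are placeholders):

    # /etc/nbd-server/config
    [generic]
    # one section per export; "avid" is a made-up name
    [avid]
        exportname = /dev/md0
        readonly = false

As far as I can tell Windows has no built-in NBD client, though, so that route would need a third-party driver on the client side (something like Ceph's wnbd, maybe).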
Edit: I stumbled upon CNA cards and the RDMA/iSER solution; gonna try it ...
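If the target side is LIO/targetcli, iSER apparently just gets toggled per portal, assuming the CNAs have working RDMA drivers (again, the IQN is a placeholder):

    sudo targetcli
    cd /iscsi/iqn.2003-01.org.linux-iscsi.storage:avid/tpg1/portals/192.168.122.110:3260
    enable_iser true                 # switch this portal from plain TCP to iSER/RDMA
    cd /
    saveconfig

The catch, as far as I know, is that the built-in Microsoft initiator does not speak iSER, so the Windows side would need a vendor-supplied initiator.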
Suggestions are still welcome.