OK... So, the answer to my original posted question was buried in a Windows Registry setting.
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\ConnectionCountPerRdmaNetworkInterface
The key is not normally present in the registry, and the default value if not specified is 2, but it can be set from 1 to 16, according to this article from Microsoft:
https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/
Per the article, it's more or less a "look but don't touch" kind of setting. But, curiously enough, there's a very similar setting in the same location...
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\ConnectionCountPerRssNetworkInterface
It too is suggested to not be altered. But, despite that, it is exposed through PowerShell, where it can be both viewed and changed via Get/Set-SmbClientConfiguration. Why they chose to expose one but not the other is a bit of a mystery.
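For reference, this is roughly how I was checking the two values. It's just a sketch; the RDMA key usually doesn't exist until you create it yourself, and given the "look but don't touch" warning, the write at the end (shown here with an example value of 4) is something to double-check on your own system before actually running it.

    # View the RSS connection count that IS exposed through PowerShell
    Get-SmbClientConfiguration | Select-Object ConnectionCountPerRssNetworkInterface

    # The RDMA counterpart is registry-only. Read it if present (it usually isn't):
    $params = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters'
    Get-ItemProperty -Path $params -Name ConnectionCountPerRdmaNetworkInterface -ErrorAction SilentlyContinue

    # Create/set it (valid range 1-16, default is 2 when absent) -- only if you know you need to:
    New-ItemProperty -Path $params -Name ConnectionCountPerRdmaNetworkInterface `
        -PropertyType DWord -Value 4 -Force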
EDIT + Complete Answer:
System Info For Reference:
CPU: AMD EPYC 7313P
OS: Windows Server 2022
NIC: Intel E810-XXVDA2
Drivers: Intel ProSet 27.6.1
Traffic Tests:
Windows File Copy (transferring 100+ randomly generated 1GB data files) - Ironically, this had the best and most consistent results.
DiskSpd (with the help of Microsoft's Test-Rdma.ps1) - I tweaked the script's output to show KB, MB, GB, etc. to be more human readable (see the small helper sketch after this list).
iPerf3 - Settings were fickle and results were generally inconsistent between runs for some reason.
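The unit tweak I mentioned for the DiskSpd/Test-Rdma.ps1 output was nothing fancy; it was something along these lines (the function name and thresholds are just my own convention, not anything from the Microsoft script):

    # Convert a raw byte count into a human-readable string (KB/MB/GB/TB)
    function Format-ByteSize {
        param([Parameter(Mandatory)][double]$Bytes)
        $units = 'B','KB','MB','GB','TB'
        $i = 0
        while ($Bytes -ge 1024 -and $i -lt $units.Count - 1) {
            $Bytes /= 1024
            $i++
        }
        '{0:N2} {1}' -f $Bytes, $units[$i]
    }

    # Example: Format-ByteSize 2861943296   ->   "2.67 GB"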
Corrections:
In the original post I mentioned that the number of connections was 4 before but 2 after putting the second SFP28 connection in service. Well... that was somewhat incorrect in hindsight. What was allowing the 4 vs 2 is that when there was only one connection I had not yet enabled RDMA on the NIC. Without RDMA, SMB Multichannel will use RSS, and there is a different setting for that... which, conveniently enough, has a default value of 4 (see the registry entries I mentioned in my posted answer). So, as far as it switching from 4 to 2... that was my mistake.
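If you want to see this for yourself, these are the built-in cmdlets I leaned on to tell whether Multichannel was actually running over RDMA or falling back to RSS (default output shown; nothing exotic):

    # Is RDMA enabled on the NIC(s)?
    Get-NetAdapterRdma

    # Does the SMB client see the interfaces as RSS-capable and/or RDMA-capable?
    Get-SmbClientNetworkInterface

    # What connections has SMB Multichannel actually established, and over what?
    Get-SmbMultichannelConnection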
Ultimate Solution:
After trawling through a ton of websites and ripping down and re-creating the connections multiple times over, I finally figured out what was going on. I ultimately solved it by changing two other settings (and not just the registry entry I was looking for).
For starters, many suggestions from big players in the space (Microsoft, Intel, nVidia, Lenovo, etc.) said to disable standard RX/TX flow control. Well... that was a bad recommendation for my particular use case. Re-enabling flow control helped quite a bit. The reason standard flow control is usually turned off is to instead use Priority-based Flow Control (PFC) with Data Center Bridging (DCB). But... for that to work properly, your switches also need to support PFC. While a direct connection between servers has no problem with this, the switch that will sit between them once deployed does not support DCB... so I wanted to proceed without using this feature. Also, PFC is not strictly needed when using iWARP (it helps performance... but isn't a make-or-break requirement). So, from my testing, if you can't use PFC with DCB, you should at least fall back to standard RX/TX flow control... which is something I should have keyed in on sooner.
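For what it's worth, this is roughly how I flipped flow control back on from PowerShell. The adapter name here is just a placeholder, and the "Flow Control" / "Rx & Tx Enabled" display strings are what the Intel ProSet driver shows on my box; they can differ between drivers, so check what Get-NetAdapterAdvancedProperty reports first:

    # See what the driver currently exposes and what the values are called
    Get-NetAdapterAdvancedProperty -Name "Ethernet 3" -DisplayName "Flow Control"

    # Re-enable standard RX/TX flow control (used here instead of PFC/DCB)
    Set-NetAdapterAdvancedProperty -Name "Ethernet 3" `
        -DisplayName "Flow Control" -DisplayValue "Rx & Tx Enabled"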
Another suggestion from the powers that be was to change the setting for Interrupt Moderation Rate, or Interrupt Throttle Rate (ITR). The default for this is "Adaptive", but it was suggested to set it to disabled or off. Changing this setting is a balancing act: you trade improved speed and lower latency for higher CPU usage, since the CPU has to respond to more interrupt requests from the NIC. Since I was hitting a wall on CPU, I changed the setting from Off to Low.
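Same idea as the flow control change above, just a different advanced property (again, the "Interrupt Moderation Rate" display name, the "Low" value, and the adapter name are what my Intel driver/system shows; your driver's strings may differ):

    # Back off from "Off" to "Low" to reduce the interrupt load on the CPU
    Set-NetAdapterAdvancedProperty -Name "Ethernet 3" `
        -DisplayName "Interrupt Moderation Rate" -DisplayValue "Low"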
Anyway... those were the key settings I had to change to ultimately solve my particular CPU pinning problem. Hopefully, if anyone else stumbles across this post with the same problem, this will help nudge them closer to a solution.