
Network distance effect on SSH transfer speeds


I have two identical servers running identical software. One is in the same datacenter as the source; the other is 100 ms+ away.

When I run a bandwidth-heavy piped SSH transfer to the server that's just a hop away, I can easily saturate the connection. To the far server I get maybe 10 Mbps (about 1/100th of the speed).
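
For concreteness, the transfer is shaped roughly like this (host and paths are placeholders; the real source is a single stream I can't split):

    # Sketch of the pipeline: mbuffer feeds a single ssh stream,
    # and another mbuffer drains it on the receiving side.
    mbuffer -i /src/dataset | ssh target-server 'mbuffer -o /dest/dataset'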

iperf shows the far server manages a good 300 Mbps on a single stream and can saturate the connection when using multiple parallel connections.
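
That combination points at a flow-control window cap rather than raw bandwidth: a single stream can never exceed window / RTT. A back-of-the-envelope check, assuming an effective in-flight window of about 128 KiB (an assumption, not something I've measured):

    # Throughput ceiling = window / RTT, independent of link capacity.
    # ~128 KiB in flight over a 100 ms round trip:
    echo "scale=1; 128 * 1024 * 8 / 0.100 / 1000000" | bc
    # prints ~10.4 (Mbit/s), roughly what I see to the far server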

I know there is an unmaintained fork of SSH that attempts to optimise transfer speeds, and I know about TCP tuning. I also know I could use netcat, but I need encryption. I'm already using mbuffer (as in the sketch above), and I cannot split the source or pipe it over multiple concurrent connections.

Is there anything I can do here? I've battled this problem for years, and being unable to fix it across time, platforms, and hardware baffles me.

This sounds like a nice problem to solve, but I cannot believe it hasn't already been done.

Maybe you were just searching for the wrong keywords. Since SSH runs over TCP, you should look into your OS's TCP parameters, such as window size, packet size, and so on. The network elements between the two servers also affect which parameter values give the best results.
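
For example, on Linux you could raise the maximum TCP buffer sizes so the window can cover the bandwidth-delay product. 300 Mbit/s at 100 ms RTT needs roughly 3.75 MB in flight, so 16 MB leaves headroom (the values below are illustrative; tune them to your own path):

    # Illustrative Linux sysctls: allow TCP windows up to 16 MB.
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"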

