Score:3

Test network transfer speeds with rsync from a server with limited storage

dj flag

I want to test transfer speeds from server A to server B, but server A has limited disk space (<50 GB), and because network speeds within the same data center are fast, transferring 50 GB may complete too quickly to serve as a benchmark. Server B has essentially unlimited disk space.

Is there a way to transfer a "file" from server A (e.g. as a stream of data that is too large to fit on server A, something like 1 TB) so that I can repeatedly test the network transfer speeds to server B?


One option would be to transfer from server B to server A and write the file directly to /dev/null, although I haven't tested this, and I would prefer server A to B because server A only has ssh keys while server B allows ssh via password.
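For example, I imagine something like this (untested; user and serverB are placeholders), reading zeros on A and discarding them on B so neither side touches disk:

dd if=/dev/zero bs=1M count=10240 | ssh user@serverB 'cat > /dev/null'

Here 10240 blocks of 1 MiB give a 10 GiB test, and dd prints the throughput when it finishes.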

Romeo Ninov avatar
in flag
What about using other protocols like `ssh` or `ftp`?
philshem avatar
dj flag
@rome yes, how would you suggest doing that with ssh? ftp wouldn't work unless I set up a server myself
Romeo Ninov avatar
in flag
Check this Q/A: https://superuser.com/questions/1517412/trying-to-pipe-an-image-to-ssh-stdin-input
MonkeyZeus avatar
in flag
As long as you keep in mind that lots of small files will always take more time than a single large file of the same size, whatever tests you run should give you sufficient data even at a 10 GB transfer size.
Score:4
ca flag

To test the raw network bandwidth, you can use iperf (or iperf3). Something as simple as this works:

  • on the server: iperf -s
  • on the client: iperf -c <server_ip>
  • then, reverse the server and client roles
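As a slightly fuller sketch (values are arbitrary; -t sets the test duration in seconds and -P the number of parallel streams):

# on server B
iperf -s
# on server A
iperf -c <server_ip> -t 30 -P 4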

To test the rsync transfer rate, which is often limited by SSH encrypt/decrypt speed, you can create a sparse source file and sync it to the remote side. A sparse file is a file whose nominal size is larger than its actually allocated size. You can create one via the truncate command.

For example, truncate --size=1T src.img will create a file with a nominal size of 1 TB that actually allocates 0 bytes (or at most a single 512-byte/4K block, depending on the filesystem), i.e.:

root@localhost:~# du -hs --apparent-size src.img
1.0T    src.img

root@localhost:~# du -hs src.img
0       src.img

root@localhost:~# stat src.img
  File: src.img
  Size: 1099511627776   Blocks: 0          IO Block: 4096   regular file

When such a file is read and transferred, it expands to its full nominal size unless it is handled by a sparse-file-aware copy utility and/or a compression program (such as gzip). rsync can handle such files efficiently when using the -S (--sparse) and -z (--compress) options, but if you leave them out, it will happily transfer and write the entire 1 TB of data.

Note: -z compresses the zeros away during transfer, while -S writes them as holes in the destination file.
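Putting it together, a minimal sketch of both variants (user, serverB, and the destination path are placeholders, assuming key-based ssh from A to B):

# benchmark run: reads and sends the full 1 TB of zeros
rsync --progress src.img user@serverB:/data/
# comparison run: -S -z collapses the zeros, so it finishes almost immediately
rsync -S -z --progress src.img user@serverB:/data/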

Score:1
cn flag

I would use a network bandwidth measurement tool such as iperf3 instead. You can run the server instance on either host and decide from the client which direction's bandwidth you want to measure.

On one of the hosts, run it as the server:

iperf3 --server

By default it listens on port 5201/tcp, so that port needs to be allowed through from the other host.
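For example, with ufw or firewalld (assuming one of them manages the firewall on the server host):

# ufw
sudo ufw allow 5201/tcp
# firewalld (runtime rule only)
sudo firewall-cmd --add-port=5201/tcp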

On the other machine, to test upstream bandwidth over TCP in its simplest form, run:

iperf3 --client serveraddr

where serveraddr is the IP address (or hostname/FQDN) of the machine running the iperf3 server.

To test downstream bandwidth to the client, run:

iperf3 --client serveraddr --reverse

Both directions:

iperf3 --client serveraddr --bidir

iperf3 has loads of options to tweak how much data should be transferred, which protocol and ports to use, etc., and you can "daemonize" it on the server side so it always listens for client connections in the background ... man iperf3 is your friend :-)
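For instance, a minimal sketch using the --daemon and --logfile options (the log path is just an example):

iperf3 --server --daemon --logfile /var/log/iperf3.log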

iperf3 is open source; it's available in most distributions and also has binaries to download for e.g. Windows and macOS.

I find it very reliable (at least on Linux) and I use it all the time, not only to measure throughput but also to see where we have packet loss, etc.

Here's the home page: https://software.es.net/iperf/
