I'm producing data (100 GB files) that is then copied to a server over NFS v4.2, on a 10 Gb/s network. On the server, the files are stored on many XFS-formatted HDDs (one copy per target drive).
When the copy tasks are running:
- Memory usage on the client is huge (it can exceed 64 GB; the copy takes as much memory as it can get).
- But almost no RAM is used on the server.
I would like to reduce the memory usage on the clients, since they are continuously producing data and the copies slow them down. The server, by contrast, is mostly unused.
My guess is that because the server's HDDs are slow, the client buffers as much data as it can to make the copy less blocking. I cannot change the hardware setup.
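If it helps to frame the problem: my understanding is that this buffering is the client kernel's dirty page cache. A minimal sketch of how to inspect and cap it, with purely illustrative values (run on the client, as root):

grep -E 'Dirty|Writeback' /proc/meminfo          # dirty data currently held in RAM
sysctl -w vm.dirty_bytes=4294967296              # illustrative: cap dirty cache at ~4 GiB
sysctl -w vm.dirty_background_bytes=1073741824   # illustrative: start background flush at ~1 GiB

Capping this would bound client memory, but writes then block earlier, which is why I'd rather shift the caching to the server.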
Is there any way to force the server to cache more data?
I would prefer to prioritize using server memory rather than client memory.
The NFS configuration (client /etc/fstab entry):
10.0.3.1:/ /mnt/field nfs nfsvers=4.2,noatime,nodiratime,_netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10 0 0
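For comparison, a hypothetical variant of this mount with the sync option, which forces write-through and should keep the client's dirty cache small, at the cost of throughput:

10.0.3.1:/ /mnt/field nfs nfsvers=4.2,sync,noatime,nodiratime,_netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10 0 0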
/etc/exports:
/mnt 10.0.0.0/16(rw,async,fsid=0,no_subtree_check,crossmnt)
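As far as I understand, the async option above already lets the server acknowledge writes from RAM before they hit the disks; the options actually in effect can be verified on the server with:

exportfs -v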
NIC configuration on the server:
MTU 9000
ring buffers: TX 512, RX 1024
NIC configuration on the client:
MTU 9000
ring buffers: TX 1024, RX 512
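For completeness, a sketch of how these NIC settings are applied (assuming the interface is named eth0; substitute the real name):

ip link set eth0 mtu 9000
ethtool -G eth0 tx 1024 rx 512   # client side; the server uses tx 512 rx 1024
ethtool -g eth0                  # check current and maximum ring sizes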
Edit:
As requested, /proc/meminfo:
[/proc/meminfo output: client on the left, server on the right]
A Monitorix graph of the memory usage on this client:
Network usage:
Note: The client uses a big tmpfs (100 GB) for its computations. I think this tmpfs is never subtracted from the available memory count.
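One way to sanity-check that: tmpfs pages are accounted as Shmem and included in Cached in /proc/meminfo, so any "available" figure derived from free + buffers + cached overcounts by the tmpfs size:

grep -E '^(MemFree|Buffers|Cached|Shmem)' /proc/meminfo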
Edit 2:
The correlation between network and memory usage is more obvious on the other client (I should have started with that one). That client doesn't use any tmpfs.