Currently, I'm managing a backup service for multiple remote servers. Backups are written through rsync, and every backup has its own file container mounted as a loop device. The main backup partition is an 8T XFS-formatted volume, and the loop devices are between 100G and 600G, formatted either ext2 or ext4. So, this is the Matryoshka-like solution, simplified:
> /dev/vdb1 xfs 8,0T /mnt/backups
> /dev/loop1 ext2 100G /mnt/srv1
> /dev/loop2 ext2 200G /mnt/srv2
> /dev/vdb1 on /mnt/backups
> /mnt/backups/srv1.ext2 on /mnt/srv1
> /mnt/backups/srv2.ext2 on /mnt/srv2
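For reference, each container is created and mounted roughly like this (a sketch; the paths and sizes are illustrative, not my exact commands):

```shell
# create a 100G container file and format it (ext2, no journal);
# -F is needed because the target is a regular file, not a block device
fallocate -l 100G /mnt/backups/srv1.ext2
mkfs.ext2 -F /mnt/backups/srv1.ext2

# mount it via a loop device so rsync can write into it
mkdir -p /mnt/srv1
mount -o loop /mnt/backups/srv1.ext2 /mnt/srv1
```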
The main problem is the read/write speeds: they are very slow. Also, sometimes everything hangs and eats up all my CPU and RAM, and I can see that the loop devices are causing it.
Lately, I've started switching the containers from ext4 to ext2 because I thought I didn't really need the journaling, hoping it would improve the speeds. I've also been switching from sparse files to non-sparse files, hoping it would lower the CPU/RAM usage. But the problem persists, and sometimes it renders the system unresponsive.
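For what it's worth, this is the difference between the two container styles I've tried (a minimal sketch; the file name is just an example):

```shell
# sparse: the file claims 200G but allocates blocks lazily on first write,
# which can fragment badly under a loop-device workload
truncate -s 200G srv2.ext2

# non-sparse: preallocate all 200G of blocks up front
fallocate -l 200G srv2.ext2
```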
Therefore, I'm looking for a better solution with faster read/write speeds. It's also important to be able to quickly see the disk space every profile uses (I can simply use `df` for now; `du` would be too slow). The separation the loop devices give is nice from a security standpoint, but that could also be solved using rsync over ssh instead, so it's not a requirement.
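The quick usage check I mean looks like this, assuming each profile stays mounted (mount points as in the listing above; `--output` is GNU `df`):

```shell
# one line per container, instantly, straight from the filesystem metadata
df -h --output=target,size,used,pcent /mnt/srv1 /mnt/srv2
```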
I've been thinking about shrinking the main XFS partition and making the file containers real ext4 partitions, but that would bring huge amounts of downtime whenever the first partition needs to be resized. I've also been thinking about using squashfs, because then I could simply read the file size to get the disk usage, but I have no experience with it.
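From what I've read, the squashfs idea would look something like this (paths illustrative) — though note that squashfs images are read-only, so rsync couldn't update one in place:

```shell
# pack an existing tree into a compressed, read-only image
mksquashfs /mnt/srv1 /mnt/backups/srv1.sqsh

# disk usage is then just the image's file size
stat -c %s /mnt/backups/srv1.sqsh
```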
Does anybody have ideas for a better solution to this?