
Bad performance on multiple loop devices used as file containers


Currently, I'm managing a backup service for multiple remote servers. Backups are written through rsync, and every backup has its own file container mounted as a loop device. The main backup partition is an 8T XFS-formatted volume, and the loop devices are between 100G and 600G, formatted as either ext2 or ext4. So, this is the Matryoshka-like solution, simplified:

df -Th
> /dev/vdb1    xfs   8,0T   /mnt/backups
> /dev/loop1   ext2  100G   /mnt/srv1
> /dev/loop2   ext2  200G   /mnt/srv2

mount
> /dev/vdb1 on /mnt/backups
> /mnt/backups/srv1.ext2 on /mnt/srv1
> /mnt/backups/srv2.ext2 on /mnt/srv2

ls -R /mnt/backups
> /mnt/backups
> └─/mnt/backups/srv1.ext2
> └─/mnt/backups/srv2.ext2

The main problem is the read/write speeds: they are very slow. Also, sometimes everything hangs and eats up all my CPU and RAM. I can see that the loop devices are causing this.

Lately, I've started switching the containers from ext4 to ext2, because I thought I didn't really need the journaling and hoped it would improve the speeds. I've also been switching from sparse files to non-sparse files, hoping it would lower the CPU/RAM usage. But the problem persists, and sometimes it renders the system unresponsive.
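
For reference, this is roughly how such a container is created and mounted; the file name, mount point and size are the ones from my setup above, so treat the exact commands as a sketch:

# sparse container: blocks are only allocated as data is written
truncate -s 100G /mnt/backups/srv1.ext2
# non-sparse container: all blocks are allocated up front
fallocate -l 100G /mnt/backups/srv1.ext2

# format the container file and mount it through a loop device
mkfs.ext2 -F /mnt/backups/srv1.ext2
mount -o loop /mnt/backups/srv1.ext2 /mnt/srv1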

Therefore, I'm looking for a better solution with faster r/w speeds. It's also important to be able to quickly see the disk space every profile uses (I can simply use df for now; du would be too slow). The separation the loop devices provide is nice from a security standpoint, but that could also be achieved with rsync over ssh instead, so it's not a requirement.
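
To illustrate the disk-usage requirement: with the loop devices, per-profile usage comes straight from the filesystem metadata via df, while du has to walk every backed-up file (paths as in the listing above):

df -h /mnt/srv1 /mnt/srv2    # instant: reads filesystem metadata
du -sh /mnt/srv1             # slow: walks every file in the backup tree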

I've been thinking about shrinking the main XFS partition and making the file containers real ext4 partitions, but that would bring huge amounts of downtime whenever the first partition needs to be resized. I've also been thinking about using virt-make-fs or squashfs, because I could simply read the file size to get the disk usage, but I have no experience with those.

Does anybody have any ideas for a better solution to this?


Answering my own question here; perhaps it will be helpful for others.

I've found that XFS has the xfs_quota utility, with which you can set up projects that monitor disk usage for any given folder.

First, the XFS partition must be (re)mounted with the prjquota flag enabled: mount -o prjquota /dev/vdb1 /mnt/backups. Optionally, this flag can be added to /etc/fstab to ensure it's mounted properly on reboot.
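
A minimal /etc/fstab entry for this, assuming the device and mount point from my setup, looks like:

# /etc/fstab
/dev/vdb1  /mnt/backups  xfs  defaults,prjquota  0  0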

Then, we set up the project:

echo "srv1:50" > /etc/projid
echo "50:/mnt/backups/srv1" > /etc/projects

mkdir /mnt/backups/srv1
xfs_quota -x -c 'project -s srv1' /mnt/backups
xfs_quota -x -c 'limit -p bsoft=100G bhard=110G srv1' /mnt/backups

This sets up the project 'srv1' with id '50', creates /mnt/backups/srv1 where the project lives, and gives it a soft limit of 100G and a hard limit of 110G. From now on, XFS will monitor all files added to this folder and measure their usage.
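
The same steps are repeated for every other container; note the >> so the existing entries aren't overwritten. A sketch for srv2 (the id 51 and the 200G/210G limits are just values that fit my example):

echo "srv2:51" >> /etc/projid
echo "51:/mnt/backups/srv2" >> /etc/projects

mkdir /mnt/backups/srv2
xfs_quota -x -c 'project -s srv2' /mnt/backups
xfs_quota -x -c 'limit -p bsoft=200G bhard=210G srv2' /mnt/backups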

To see usage, use:

xfs_quota -x -c report
xfs_quota -x -c 'report -h'
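
The mount point can also be passed to restrict the report to the backup filesystem; the output then looks roughly like this (the values are illustrative):

xfs_quota -x -c 'report -h' /mnt/backups
> Project quota on /mnt/backups (/dev/vdb1)
>                         Blocks
> Project ID   Used   Soft   Hard  Warn/Grace
> ---------- ----------------------------------
> srv1        42.1G   100G   110G   00 [------]
> srv2          87G   200G   210G   00 [------]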

Read/write speeds look the same as writing to a normal folder without xfs_quota set up.
