Here's my current backup strategy. I have a bunch of servers (let's say 200) that each run a periodic borg backup script (deployed via this ansible role: https://github.com/mad-ady/ansible-role-borgbackup), backing up to a central backup server over SSH.
Each server has its own SSH user, its own home dir, and its own repo on the remote backup server. And backups work fine (with regard to compression/deduplication)! :)
Now, I'd like the ability to periodically take a snapshot of just "today's backups" and put it on different storage for redundancy. I'd like to avoid copying the whole repo (e.g. running rsync), because the local backup server has a pretty long retention policy (e.g. keeping the last 6 months' worth of daily backups), while the offsite server will have limited storage/bandwidth. Some of the backed-up data deduplicates nicely, while other data is pretty dynamic and deduplicates poorly.
So, my question is: is there a way to transfer a specific backup snapshot from one borg repo to a different borg repo? I can iterate through each repo and list today's backups just fine (here's an example):
_etc-20210615-0105 Tue, 2021-06-15 01:05:04 [d382788354e3e5a535fd570cea2fe741e1c69734c14d43ec6e59f856dadab83c]
_var_www_html-20210615-0105 Tue, 2021-06-15 01:05:24 [e46d6ad7313e158172a46f0908bf3b16f450d06d2c410a0c612fe0453f18871e]
_var_spool_cron-20210615-0105 Tue, 2021-06-15 01:05:48 [0f50160706d97754a3a6620456f350d477aa8797d9a43a1de0f8bfcb06531f79]
mysqldump-20210615-0105 Tue, 2021-06-15 01:05:53 [258f604bf4d1c2196c3e1df14878c78e56560b2c9c839a2e666efa27ddd6ac8d]
... but how would I transfer them to a different repo (ideally without restoring them to a temporary folder and re-archiving them)?
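For reference, the listing above comes from something like the following; the repo path is a placeholder, and in practice I loop over every server's repo:

```shell
# Placeholder repo path -- each server has its own repo dir on the backup host.
REPO=/backup/server01/repo
TODAY=$(date +%Y%m%d)

# borg 1.1+ supports -a/--glob-archives to match archive names,
# so "*-YYYYMMDD-*" picks out just today's archives.
borg list --glob-archives "*-${TODAY}-*" "$REPO"
```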
One way I can think of is to mount each archive and then run borg create on it. That would avoid extra disk usage, but it would mean decompressing and recompressing the data...
Thanks!