I'm reaching out to you in the hope that some of you can provide me with fresh ideas for a current problem I'm working on. I'm planning to set up a redundant data center (including websites, mail services and databases). Here are the key points:
I have two data centers connected by a 1 Gbps link, with IPs routed via BGP. One data center is active while the other serves as a passive standby. The basic prerequisites for this are already in place.
However, I'm facing the challenge of keeping the data consistent and up to date on both sides. Currently, I'm using Virtuozzo Hybrid Server with Virtuozzo Storage (SDS), which provides three-fold redundant network storage. The ploop images (container disk files), ranging in size from a few GB to several TB, are stored on these systems.
Currently, I back up the ploops by taking snapshots and then transferring them to geographically separate storage using BackupPC. This happens once a day.
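Sketched very roughly, my daily cycle looks like the following (a minimal sketch only: the container IDs and paths are placeholders, I'm assuming `prlctl snapshot` as the snapshot command, and I've substituted plain rsync for BackupPC to keep it self-contained):

```python
#!/usr/bin/env python3
"""Daily snapshot-and-transfer cycle (illustrative sketch)."""
import subprocess

CONTAINERS = ["101", "102"]                   # hypothetical container IDs
REMOTE = "backup@dc2.example.net:/backups/"   # hypothetical off-site target

for ct in CONTAINERS:
    # Take a consistent point-in-time snapshot of the container.
    subprocess.run(["prlctl", "snapshot", ct], check=True)

    # Copy the ploop image directory to the secondary site.
    # --inplace lets rsync patch the existing multi-TB file on the
    # target instead of rewriting it from scratch.
    src = f"/vz/private/{ct}/root.hdd/"
    subprocess.run(
        ["rsync", "-a", "--inplace", "--partial", src, f"{REMOTE}{ct}/"],
        check=True,
    )
```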
In the event of a disaster, it would be desirable to have data that is less than 24 hours old. Unfortunately, given the 1 Gbps link between the data centers, true live replication is not feasible.
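To put numbers on that: at 1 Gbps the link moves at most about 125 MB/s raw, realistically closer to 100 MB/s. A quick back-of-the-envelope calculation (the data volumes below are illustrative assumptions, not measurements):

```python
# Rough transfer times over the 1 Gbps inter-DC link.
usable = 1_000_000_000 * 0.8 / 8     # ~80% efficiency -> ~100 MB/s

for tib in (0.1, 1.0, 4.0):          # hypothetical change volumes in TiB
    hours = tib * 1024**4 / usable / 3600
    print(f"{tib:>4} TiB -> {hours:5.1f} h")
```

So continuously replicating full multi-TB images is out, but shipping only the daily delta could plausibly fit in the window.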
Are there approaches that would work better and faster than rsyncing these sometimes multi-TB files between the data centers? Copying only the changed deltas would be ideal, but since it's the Virtuozzo File System, there is no built-in mechanism for that I can utilize, as far as I know.
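To illustrate the "copy only the change delta" idea independently of any Virtuozzo mechanism: one could hash fixed-size blocks of an image on both sides and ship only the blocks whose digests differ. A minimal local sketch (block size and paths are arbitrary assumptions; in practice the digests would be computed on each side, and only digests plus changed blocks would cross the link):

```python
import hashlib

BLOCK = 4 * 1024 * 1024  # 4 MiB block size, an arbitrary choice

def block_digests(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            digests.append(hashlib.sha256(chunk).digest())
    return digests

def changed_blocks(src_path, dst_digests):
    """Yield (offset, data) for each block of src that differs from dst."""
    with open(src_path, "rb") as f:
        idx = 0
        while chunk := f.read(BLOCK):
            if idx >= len(dst_digests) or \
               hashlib.sha256(chunk).digest() != dst_digests[idx]:
                yield idx * BLOCK, chunk
            idx += 1

# Usage sketch (hypothetical paths): compute digests on the passive
# side, then write only the differing blocks into the remote copy.
# old = block_digests("/backup/101/root.hds")
# for offset, data in changed_blocks("/vz/private/101/root.hdd/root.hds", old):
#     ...  # seek to `offset` in the remote file and write `data`
```

This is roughly the idea behind rsync's delta algorithm (which uses rolling checksums rather than fixed blocks); whether something ploop-aware exists is exactly my question.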
Perhaps some of you have interesting ideas in this regard? I would greatly appreciate your insights.
Thank you very much,
Andreas