I have two locations, A and B, and need to replicate one special directory between them using Windows DFS Replication in the following way: all writes into that directory at A need to be pulled by B, and all deletions in that directory at B need to be pushed back to A. In effect, B pulls all content from A, and the directory gets emptied at both A and B at some point.
The important thing is that the directory is the file system interface of some special app. That app deliberately runs on host B only, while host A creates the data for it. DFS is used to transfer that data somewhat reliably from A to B.
Because it's a file system interface, it needs to follow a convention so that the app at B knows when it can access all the files. The convention is simple: before any of the data is created, a special lock file is created, and once all the data has been written, the lock file is removed. Removing the lock file signals that the app on host B is free to process the data however it wants. Compared to the other data transferred, that lock file is of course really, really small, while the other files might in theory be hundreds of MiB in size. So, for this to work reliably with DFS, the order of file system operations would need to be preserved during replication.
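To make the convention concrete, here is a minimal sketch of both sides of the interface as described above. The file names and function names are hypothetical, purely for illustration:

```python
import os
import tempfile

LOCK_NAME = "transfer.lock"  # hypothetical name for the lock file

def write_batch(directory, files):
    """Writer side (host A): create the lock file first, write all
    data files, then remove the lock file as the final operation."""
    lock_path = os.path.join(directory, LOCK_NAME)
    # 1. Create the lock file before any data is written.
    open(lock_path, "w").close()
    # 2. Write all data files while the lock is in place.
    for name, data in files.items():
        with open(os.path.join(directory, name), "wb") as f:
            f.write(data)
    # 3. Remove the lock file: the batch is now complete.
    os.remove(lock_path)

def batch_ready(directory):
    """Reader side (host B): the batch may be processed only once
    the lock file is gone and data files are present."""
    entries = os.listdir(directory)
    return LOCK_NAME not in entries and len(entries) > 0

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        write_batch(d, {"a.bin": b"x" * 1024, "b.bin": b"y" * 2048})
        print(batch_ready(d))  # True once the writer has finished
```

This only works across DFS if the lock-file creation replicates before the data files and its deletion replicates after them, which is exactly the ordering question below.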
From what I've read so far about DFS, it may transfer files out of order:
Does DFS Replication replicate files in chronological order?
No. Files may be replicated out of order.
OTOH, it already tracks files by unique IDs, and the ID of the lock file would be lower than those of the other files, since the lock file is created first.
What happens if I rename a file?
DFS Replication renames the file on all other members of the replication group during the next replication. Files are tracked using a unique ID, so renaming a file and moving the file within the replica has no effect on the ability of DFS Replication to replicate a file.
Additionally, there seem to be some settings regarding concurrent downloads, which could be lowered to 1 in the worst case. But that by itself wouldn't guarantee the strict order of file operations I need: the deletion of the lock file could simply be replicated between two of the large files.
How are simultaneous replications handled?
There is one update manager per replicated folder. Update managers work independently of one another.
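For comparison, one way to make the interface convention independent of replication order would be to replace the lock file with a completion marker written last, listing every file and its size, so the reader can verify the batch regardless of arrival order. This is not a DFS feature, just a hedged sketch with hypothetical names:

```python
import json
import os

MANIFEST_NAME = "manifest.json"  # hypothetical completion marker, written last

def write_batch_with_manifest(directory, files):
    """Writer side (host A): write all data files first, then a manifest
    listing every file and its size as the final operation."""
    for name, data in files.items():
        with open(os.path.join(directory, name), "wb") as f:
            f.write(data)
    manifest = {name: len(data) for name, data in files.items()}
    with open(os.path.join(directory, MANIFEST_NAME), "w") as f:
        json.dump(manifest, f)

def batch_complete(directory):
    """Reader side (host B): process only when the manifest exists and
    every listed file is fully present, whatever order things arrived in."""
    manifest_path = os.path.join(directory, MANIFEST_NAME)
    if not os.path.exists(manifest_path):
        return False
    with open(manifest_path) as f:
        manifest = json.load(f)
    return all(
        os.path.exists(os.path.join(directory, name))
        and os.path.getsize(os.path.join(directory, name)) == size
        for name, size in manifest.items()
    )
```

That said, the app's convention is fixed as described above, which is why strict ordering in DFS itself is the actual question.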
It would be great if DFS had some mode of operation that used the Windows Change Journal and replayed the events in the order they occurred in the source folder. That should guarantee that DFS creates and deletes the lock file as the first and last operations, exactly as was done in the source folder.
So, is there any way to make file operations strictly ordered in DFS Replication, so that it can be used with file-system-based interfaces?
Thanks!