I would like to use Syncthing to replicate 40 TB of data across regions.
Around 100 GB of data changes daily, spread across all nodes.
I also run an HTTP service for downloads and uploads. When an upload happens, the plan is to call the Syncthing REST API with the path of the file that changed.
No rescan intervals and no inotify watching are configured, since I already know which files changed and on which node.
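For reference, here is a minimal sketch of that upload hook, using Syncthing's documented `POST /rest/db/scan` endpoint with a `sub` parameter so only the changed path is scanned. The address, folder ID ("data"), and API-key environment variable are my placeholders, not anything from a real config:

```python
import os
import requests

SYNCTHING_URL = "http://localhost:8384"    # assumption: local GUI/API address
API_KEY = os.environ["SYNCTHING_API_KEY"]  # assumption: key kept in the environment
FOLDER_ID = "data"                         # assumption: hypothetical folder ID

def scan_uploaded_file(rel_path: str) -> None:
    """Ask Syncthing to (re)scan a single path relative to the folder root."""
    resp = requests.post(
        f"{SYNCTHING_URL}/rest/db/scan",
        params={"folder": FOLDER_ID, "sub": rel_path},
        headers={"X-API-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()

# e.g. called from the HTTP upload handler:
# scan_uploaded_file("uploads/2024/report.bin")
```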
Problems I have seen so far:
- when I start/upgrade/restart Syncthing, the initial scan takes so long that I cannot trigger scans for new files in the meantime. The only workaround I see is to record the changed paths in a separate file and scan them once the initial scan finishes (see the sketch after this list)
- when I upgrade, the index database balloons to 300+ GB; before the restart it is around 60 GB. I am not sure if this is because I restarted all nodes at the same time
- while Syncthing is syncing files on a node, I cannot trigger scans for new files on that node
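A hedged sketch of the queue-and-flush workaround from the first bullet: buffer changed paths in a file, and flush them once `GET /rest/db/status` reports the folder's `state` as idle. It reuses the placeholders from the previous sketch; the queue file path and the poll interval are likewise assumptions, not anything Syncthing prescribes:

```python
import os
import time
import requests

SYNCTHING_URL = "http://localhost:8384"
API_KEY = os.environ["SYNCTHING_API_KEY"]
FOLDER_ID = "data"
QUEUE_FILE = "/var/lib/uploader/pending-scans.txt"  # hypothetical queue file

def folder_state() -> str:
    """Return the folder's current state, e.g. "scanning", "syncing", "idle"."""
    resp = requests.get(
        f"{SYNCTHING_URL}/rest/db/status",
        params={"folder": FOLDER_ID},
        headers={"X-API-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["state"]

def flush_queue() -> None:
    """Wait until the folder is idle, then request a scan for every queued path."""
    while folder_state() != "idle":
        time.sleep(10)  # assumption: 10 s poll interval
    with open(QUEUE_FILE) as f:
        for rel_path in filter(None, map(str.strip, f)):
            requests.post(
                f"{SYNCTHING_URL}/rest/db/scan",
                params={"folder": FOLDER_ID, "sub": rel_path},
                headers={"X-API-Key": API_KEY},
                timeout=30,
            ).raise_for_status()
    open(QUEUE_FILE, "w").close()  # truncate the queue once flushed
```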
I’m writing to get your opinions, because I’m starting to think Syncthing might not be the best solution for this workload.