Syncing large files - observation

Hi there,

I’m incorporating Syncthing more and more into our workflow, but while testing I came across this issue:

I have a folder with large files (30 files of 1 GB each). Whenever I move or rename that folder, Syncthing eventually syncs everything correctly, but while doing so it first re-writes the data into the new folder and then removes the old files. This is less of a problem for local syncing, but over the internet it is a terrible waste of bandwidth and time.

I then tested the following: pause syncing for the folder on both machines, rename the folder on one machine, resume syncing on that machine, wait until the scan is finished, then resume syncing on the second machine. Boom, instant rename (or move) of the folder.
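
For anyone who wants to script this, here is a rough sketch of the same pause/rename/rescan/resume dance against Syncthing's REST API. It assumes a recent Syncthing that exposes the config endpoint (`PATCH /rest/config/folders/{id}`) plus `POST /rest/db/scan` and `GET /rest/db/status`; the addresses, API keys, folder ID, and paths below are all placeholders, not a tested tool.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

const folderID = "your-folder-id" // placeholder

type machine struct{ baseURL, apiKey string }

func (m machine) call(method, path string, body []byte) (*http.Response, error) {
	req, err := http.NewRequest(method, m.baseURL+path, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-API-Key", m.apiKey)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode >= 300 {
		resp.Body.Close()
		return nil, fmt.Errorf("%s %s: %s", method, path, resp.Status)
	}
	return resp, nil
}

// setPaused flips the folder's "paused" flag via the config REST API.
func (m machine) setPaused(paused bool) error {
	body := []byte(fmt.Sprintf(`{"paused": %v}`, paused))
	resp, err := m.call("PATCH", "/rest/config/folders/"+folderID, body)
	if resp != nil {
		resp.Body.Close()
	}
	return err
}

// waitIdle polls the folder status until scanning has finished.
func (m machine) waitIdle() error {
	for {
		resp, err := m.call("GET", "/rest/db/status?folder="+folderID, nil)
		if err != nil {
			return err
		}
		var st struct {
			State string `json:"state"`
		}
		err = json.NewDecoder(resp.Body).Decode(&st)
		resp.Body.Close()
		if err != nil {
			return err
		}
		if st.State == "idle" {
			return nil
		}
		time.Sleep(time.Second)
	}
}

func main() {
	local := machine{"http://localhost:8384", "LOCAL-API-KEY"}
	remote := machine{"http://remote-host:8384", "REMOTE-API-KEY"}

	must := func(err error) {
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	must(local.setPaused(true)) // 1. pause the folder on both machines
	must(remote.setPaused(true))
	must(os.Rename("data/old-name", "data/new-name")) // 2. rename locally
	must(local.setPaused(false))                      // 3. resume locally
	resp, err := local.call("POST", "/rest/db/scan?folder="+folderID, nil)
	must(err)
	resp.Body.Close()
	must(local.waitIdle())        // 4. wait for the scan to finish
	must(remote.setPaused(false)) // 5. only then resume the remote side
}
```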

It would be nice if Syncthing had this workflow built in. In other words, when there is a large amount of data, finish the scan first before sending changes to the second machine.

Or is there an option that does that?

Thanks and kind regards,
Hans

Currently, no. Changes are sent in batches: additions first, then removes. A large file is large enough to constitute a batch on its own, so when a new large file is detected it is scanned and that update is sent to the other side. The other side sees it as a copy and starts copying. (Not transferring, so there should be no waste of network bandwidth, unless you mean local I/O bandwidth.)
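
To make the batching behaviour concrete, here is a minimal sketch (not Syncthing's actual code) of how additions can flush in their own batches while deletes are queued until the end; the threshold and function names are made up for illustration.

```go
package main

import "fmt"

type fileUpdate struct {
	name    string
	size    int64
	deleted bool
}

const batchMaxBytes = 250 << 20 // hypothetical flush threshold: 250 MiB

func sendBatch(batch []fileUpdate) {
	fmt.Printf("sending batch of %d update(s)\n", len(batch))
}

// sendUpdates flushes additions as soon as the accumulated size crosses
// the threshold; deletes are queued and sent afterwards. A single 1 GB
// file therefore crosses the threshold alone and is sent immediately,
// long before its matching delete.
func sendUpdates(updates []fileUpdate) {
	var batch []fileUpdate
	var batchBytes int64
	var deletes []fileUpdate

	for _, u := range updates {
		if u.deleted {
			deletes = append(deletes, u) // deletes wait until the end
			continue
		}
		batch = append(batch, u)
		batchBytes += u.size
		if batchBytes >= batchMaxBytes {
			sendBatch(batch)
			batch, batchBytes = nil, 0
		}
	}
	if len(batch) > 0 {
		sendBatch(batch)
	}
	sendBatch(deletes) // removes go out last, in their own batch
}

func main() {
	sendUpdates([]fileUpdate{
		{name: "new/big.bin", size: 1 << 30},
		{name: "old/big.bin", deleted: true},
	})
}
```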

If you pause the folder so that update sending is disabled, the addition and the delete are sent at about the same time, so the change becomes visible to the other side as a rename instead of a copy.
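
And a corresponding sketch of the receiving side: if an add and a delete with the same block list land in the same batch, the data can simply be moved on disk; if the add arrives alone, the receiver falls back to copying blocks. All names here are illustrative, not Syncthing internals.

```go
package main

import "fmt"

type change struct {
	name    string
	deleted bool
	blocks  string // stand-in for the file's block hash list
}

// findRenameSource returns a deleted entry whose blocks match, if any.
func findRenameSource(batch []change, blocks string) (string, bool) {
	for _, d := range batch {
		if d.deleted && d.blocks == blocks {
			return d.name, true
		}
	}
	return "", false
}

func applyBatch(batch []change) {
	for _, c := range batch {
		if c.deleted {
			continue // handled as a rename source or a plain delete
		}
		if src, ok := findRenameSource(batch, c.blocks); ok {
			fmt.Printf("rename %s -> %s (reuse data on disk)\n", src, c.name)
		} else {
			fmt.Printf("copy blocks into %s\n", c.name)
		}
	}
}

func main() {
	// Add and delete arriving together: detected as a rename.
	applyBatch([]change{
		{name: "new/big.bin", blocks: "abc123"},
		{name: "old/big.bin", deleted: true, blocks: "abc123"},
	})
	// Add arriving alone (delete still pending): falls back to a copy.
	applyBatch([]change{
		{name: "new/big.bin", blocks: "abc123"},
	})
}
```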
