Apologies for bringing up an old issue.
The only way we’ve found around this is to version all data into .stversions for 24 hours and re-add that .stversions folder as a Syncthing folder, so that files can be pulled back into the correct place when detected. If we don’t do this, then as soon as a user moves a directory of files, the directory tends to get deleted and re-downloaded (it’s roughly 50/50 whether this happens or the folder is moved/copied). This is of course really IO intensive, because when Syncthing versions a file it then has to be rescanned, and for our use case the folder may be 100GB.
Basically, what I’m wondering is: is there any way for Syncthing to pull files back from the .stversions folder when it realises that it already has the file (e.g. from a tree rename) without the extra IO? That is, when it versions a file it already knows the hashes, so it shouldn’t need to rescan it.
This is similar to how Resilio handles renames, I believe:
> There are several scenarios possible but here is the ideal scenario:
>
> - You rename file example.txt to example2.txt on your peer (peer A).
> - On the remote peer B, Sync detects that example.txt is missing on peer A and moves the file into the Archive.
> - After some time, Sync on peer B detects that example2.txt appeared on peer A. Sync checks if there is a file with the same hash in the Archive folder. If there is such a file, Sync puts it back with a new name.
>
> This flow avoids unnecessary re-transmission of the data and saves bandwidth. You need to have the Archive option enabled, otherwise files will be re-synced again.
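The quoted flow boils down to a hash lookup against the archive before transferring anything. Here is a minimal sketch of the idea in Python; the function names, the whole-file SHA-256 hash, and the linear scan of the archive are all my own assumptions for illustration, not how Syncthing or Resilio actually index their archives (Syncthing, for instance, hashes per block rather than per file):

```python
import hashlib
import os
import shutil


def file_hash(path, chunk_size=65536):
    """Hash a file's contents so it can be matched against archived copies."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def restore_from_archive(archive_dir, wanted_hash, dest_path):
    """Before downloading a 'new' file, check whether a file with the same
    hash already sits in the archive; if so, move it into place instead.
    Returns True on an archive hit, False if the file must be transferred."""
    for root, _dirs, files in os.walk(archive_dir):
        for name in files:
            candidate = os.path.join(root, name)
            if file_hash(candidate) == wanted_hash:
                os.makedirs(os.path.dirname(dest_path), exist_ok=True)
                shutil.move(candidate, dest_path)
                return True
    return False
```

The point of the sketch is that the expensive step (hashing) already happened when the file was versioned, so a real implementation would consult the stored hash database rather than rehashing the archive on every lookup.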
I would happily fund any development in this area.