That’s possible to script using our API:
https://docs.syncthing.net/rest/db-scan-post.html
It will rescan full subdirectories as well.
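For example, a rescan of a single subdirectory can be triggered with a small shell helper (a sketch; the API key and folder ID are placeholders, found in the Syncthing GUI under Actions > Settings):

```shell
# Hedged sketch: trigger a rescan of one subdirectory via the REST
# endpoint linked above. SYNCTHING_API_KEY and the folder ID are
# placeholders -- substitute your own values.
scan_folder() {
  local folder="$1" sub="$2"
  curl -s -X POST -H "X-API-Key: ${SYNCTHING_API_KEY}" \
    "http://localhost:8384/rest/db/scan?folder=${folder}&sub=${sub}"
}

# Example (hypothetical folder ID; sub paths containing spaces need
# URL-encoding):
#   scan_folder abcde-12345 Season01
```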
Oh, fantastic, thanks!
Works like a charm, thanks again!
Strangely, I needed to give the folder ID; just using “default” produced a “no such folder” error, even though the default folder is my sole folder.
Going to rename stuff!
Sadly, this trick does not always work!
I did rename one season of files, used the trick with the TMP folder to rename and move in 2 stages, resyncing in between, and it worked!
But with the next season, it did not work:
I renamed into the TMP folder, resynced, and all seemed fine. Then I moved the files out of this folder and resynced again. But this time, about 8 GB of files were shown as out of sync and started to upload!
Strangely, the list of out-of-sync items contained the TMP folder, so somehow it tried to sync those? Even if the files were already moved out of this folder again.
That is somewhat irritating.
Maybe the remote side was not yet finished with renaming, even though “Up to date” was shown?!?
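For what it’s worth, the two-stage trick can be scripted so the rescans are triggered automatically. A hedged sketch (the folder ID, paths, and API key are all placeholders, and you still have to wait manually until every device shows “Up to Date” between the two stages):

```shell
# Sketch of the two-stage rename: move into a TMP dir inside the shared
# folder, rescan, wait for remotes to catch up, then move to the final
# name and rescan again. All names here are placeholders.
rename_via_tmp() {
  local src="$1" tmp="$2" dst="$3" folder="$4"
  mv "$src" "$tmp"      # stage 1: rename into the TMP directory
  curl -s -X POST -H "X-API-Key: ${SYNCTHING_API_KEY}" \
    "http://localhost:8384/rest/db/scan?folder=${folder}"
  # Pause here until all devices show "Up to Date":
  read -p "Press Enter once all devices are Up to Date... "
  mv "$tmp" "$dst"      # stage 2: move to the final location
  curl -s -X POST -H "X-API-Key: ${SYNCTHING_API_KEY}" \
    "http://localhost:8384/rest/db/scan?folder=${folder}"
}
```

The manual pause is the weak point: as noted above, “Up to Date” may be shown before the remote side has actually finished processing the rename.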
I find syncthing does not always do a rename. I have found usually syncthing will “copy from another location” and then delete the original.
If you’re using some kind of versioning, you can tell because the original is moved to the trash. So you have one copy in the trash with the old name and another copy where it should be. If the copy happens first, then no data transfer has to happen. But occasionally the delete happens first, and then there is nothing to copy from, so the download happens.
Anyway, these scripts are all band-aids; none of them addresses the real problem, which is that renames are not handled correctly.
I think within a folder, assuming no name collisions, renames should work. Across folders, this was never guaranteed, and works by chance of which folder sends updates first.
If they do not, someone should write down a step by step guide how to reproduce that (down to the level of detail of ‘create 7 files named this and that with that content’).
Saying that it happens for “you and your files” doesn’t help us reproduce the issue to understand why it happens.
I was also bothered by the need to re-download Jellyfin content after it was renamed. So I have an idea (not yet tested): turn on the “ignore delete” setting for the receiving device before renaming, and turn it off again when synchronization is complete. I think this would keep a local copy on the receiving device, so synchronization only needs some temporary disk capacity instead of network transmission.
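Equally untested, but the toggling itself could be automated via the config REST API. A hedged sketch, assuming a recent Syncthing version that supports PATCHing individual folder configs (the folder ID and API key are placeholders):

```shell
# Hedged sketch: flip a folder's ignoreDelete flag via the config REST
# API (PATCH /rest/config/folders/<id> in recent Syncthing versions).
# Folder ID and API key are placeholders.
set_ignore_delete() {
  local folder="$1" value="$2"   # value: true or false
  curl -s -X PATCH \
    -H "X-API-Key: ${SYNCTHING_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "{\"ignoreDelete\": ${value}}" \
    "http://localhost:8384/rest/config/folders/${folder}"
}

# Before renaming:  set_ignore_delete my-folder-id true
# After syncing:    set_ignore_delete my-folder-id false
```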
It seems, things get worse not better with Syncthing 2:
- Rolling hash detection of moving data is no longer supported as this effectively never helped. Instead, scanning and syncing is faster and more efficient without it.
The current rename handling DID help, even if it has its problems. But no renaming anymore?
SURE???
That would render Syncthing nearly unusable for me. I constantly need to rename things and certainly cannot re-upload everything!
Really, we need multiple threads of the same bullshit? This has nothing to do with that.
Why the tone?
Hash detection of moving data sounded EXACTLY like renamed files and folders.
If that is not what’s meant, the upcoming change could use clearer wording to make its meaning obvious!
It is VERY easy to misunderstand right now …
I think the annoyance comes from posting the same thing in multiple threads at the same time, which adds nothing to the discussion but creates useless pings for people.
The weak hash is/was a feature to detect “slightly” changed files, where data was inserted (or removed) “in the middle” of the file. Syncthing then detects that all the file blocks after the inserted data are just shifted from original, and doesn’t have to re-transmit them.
While this sounds like a good idea in theory, real usage data collected by the project for many years has shown that this feature saves less than 2% of data transfer. Thus the decision was made to remove it (as there is a cost associated with this, both maintenance and performance).
The changelog could be reworded to use “shifted” instead of “moving”. Again, it deals with files where data was inserted/removed in specific ways, not about copying or renaming files to some other place.
Continuing the discussion from Renaming files and folders:
Here’s my 2c:
Alice renames / moves a bunch of /some/path/files to /some/other/path/files (not picked as a move by watching)
Bob gets the info that /some/path/files are deleted. Here’s the twist: instead of deleting them directly, Bob moves them temporarily to some .stcache folder (keeping all the hash blocks these files contain).
Next, Alice re-hashes the “new” moved files in /some/other/path/files and asks Bob whether he has these blocks to put into /some/other/path/files. Bob already has the files in his .stcache, hence tells Alice “I’ve got these, don’t send the whole thing”.
Once Alice (the node that initiated the sync move that is delete/copy) is completely synced, Bob can discard anything that remained in its .stcache.
This way, rescanning & rehashing on Alice can’t be avoided, but it’s necessary and it’s done quickly (locally). The cool thing is that nothing but hashes gets transferred via the (slow) network.
The only downside is the rehashing on Alice plus some temporary space overhead on Bob for deleted files (the .stcache) for a short while, which is so much better than resending and then rewriting GBytes of data all over again.
It’s like saying “Linux Desktop is only 2%, lets kill it”.
Maybe only 10% of users actually use it, and only for 20% of the data they send over. But this “exception” is crucial for them. A VeraCrypt container or .vhd of hundreds of GBytes will have to be re-sent and re-copied in its entirety, just because they mounted it and changed a readme file! This sucks. This feature is what made Dropbox so much better than OneDrive, Google Drive and the rest.
It might be an exception to the rule, but it’s exception handling that makes products exceptional. I will never upgrade to v2.x until this is re-added, and I will be actively looking to replace SyncThing altogether.
We always do partial & delta updates, the removal you’re talking about has nothing to do with this.
Thanks for your reply. It mentions above “…a feature to detect “slightly” changed files, where data was inserted (or removed) “in the middle” of the file. Syncthing then detects that all the file blocks after the inserted data are just shifted from original, and doesn’t have to re-transmit them….”
How is that different from partial delta updates? I understood it to be the same thing; can you please explain? Will small “middle of the file” changes still work by sending only the changed blocks instead of the whole 100 GB file?
Also, can you or someone have a look at the simple .stcache “recycle bin” strategy for renaming files I’m proposing, how sound it looks and how probable it is to get implemented?
The removed functionality was about shifting data. In a hard disk image file, data in one place will usually not get moved around because you changed some content in an unrelated location. Editing a small text file for example will touch one storage block and maybe use another one in addition. But all the other blocks stay where they are. Like a tape cassette where some part is overwritten with a new recording.
The previous optimization concerned files which were reassembled by taking a span from the beginning, inserting some new data, then appending the rest of the original file. Like cutting the tape and gluing in a new piece to make it longer. This is an uncommon scenario with your mentioned use case of virtual hard disk images.
Ah great, thank you for the clarification, I hope it is useful for whoever stumbles upon this!
What about the renaming/recycling .stcache strategy - any ideas about this?
I know some people have in fact added their .stversions as a separate Syncthing folder (not shared) so the blocks can be re-used from there. But that is still not a rename operation, but a local copy will be made in the best case. I don’t know if Syncthing uses any OS specific file duplication system calls as an optimization. But in the general case, assume it’s a new empty file being created and then each block copied from the existing one, sequentially.
Yeah, I’ve read about the workaround with `.stversions`; that’s how I thought of this strategy, which could be more seamless.
It could also be improved to avoid wearing out our SSDs with unnecessary new file writes, if whole files were just moved intact using my strategy above.
It’s open source. Write the feature yourself? Or hire someone to do so?
Sure, I wish I had the resources, but I can’t right now.
For now I created an issue, if someone wants to pick it up Move / Rename Folders & Files Strategy to Avoid Re-transmision and File Recreation. · Issue #10482 · syncthing/syncthing · GitHub
Thank you Andre @acolomb for your insights and help!