What is better (faster)? Many files in one huge shared folder, or sharing each subfolder?

Hello there, thank you for reading and hopefully answering.

If I have many files, is it quicker for me (in terms of indexing and syncing time) and easier for Syncthing if I share the complete hard drive as one shared folder?

Or is it better to share each big subfolder on that drive separately?

It's 1.5 million files, maybe 5 TB in total.

Thank you in advance! (One side is a Synology, the other an ARM Raspberry Pi 4.)

I don’t think there should be much of a performance difference either way; it’s more about what’s convenient administratively and whether things should be independent or not.

Okay, thank you for that quick reply!

For me it would only be better to share them separately because I could pause some folders and keep them from being scanned. That way I can cut the long scan process into smaller pieces.

If you do not see any other difference or advantage, then you can close this thread. Thank you, calmh!

How does Syncthing behave in the following case?

When I move a file from one folder to another, is it moved quickly on the receiving side? Or is it deleted from one folder and synced over/transferred from the sender into the new folder on the receiver?

Depends on the scan/sync timing. If the “new” file is processed before the delete, the file is copied locally on the other device without network transfer. If the delete is processed first, the file has to be sent over the network again.

@Alex: When, on the sender side, a file is moved from one folder to another, what does the receiver side do? Does it recognise that a file was moved, and move it too? Or does it see one file missing in one folder and one additional file in another folder, so it deletes one file and syncs over the other, without recognising that it's the same file?

The answer is still “it depends”: on the timing of the scans in those folders.

It doesn’t have any logic for “file x has been moved from folder A to folder B”.

It has logic for “file x has appeared in folder B; hey look, it has a lot of content in common with this other file in folder A, which I will just copy instead of downloading”.

This is assuming the notification of the deletion in folder A hasn’t been received yet. Deletions are usually delayed by 10s, so if that is enough to copy the file, you should be good, maybe.


There are some optimisations to detect create+delete and do a rename, too, but as mentioned they are not 100% guaranteed to kick in, especially not if a very large number of files are renamed (such as a directory high up in the hierarchy).

This has been discussed and explained numerous times. I’m sure there are search results for it.
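A minimal sketch of the block-reuse idea described above (this is not Syncthing's actual implementation; the fixed block size, function names, and index structure are simplified assumptions): the receiver knows the hashes of the blocks it wants, and any block whose hash is already present in some local file can be copied locally instead of downloaded.

```python
import hashlib

# Illustrative fixed block size; real Syncthing uses variable block sizes.
BLOCK_SIZE = 128 * 1024

def block_hashes(data: bytes) -> list:
    """Split data into fixed-size blocks and hash each one."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def assemble(wanted_hashes, local_index, download):
    """Build a file's content: copy blocks found locally, download the rest.

    local_index maps block hash -> block bytes already on disk;
    download(hash) stands in for a network fetch (hypothetical).
    """
    out = bytearray()
    copied = fetched = 0
    for h in wanted_hashes:
        if h in local_index:       # same content already exists locally
            out += local_index[h]  # local copy, no network transfer
            copied += 1
        else:
            out += download(h)     # fall back to the network
            fetched += 1
    return bytes(out), copied, fetched
```

If the deletion in folder A has already been processed, those blocks are no longer in the local index and every block falls into the download branch, which is exactly the timing dependence described in the posts above.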


One possibly interesting upgrade which may help: if deleted files retained in the trash can or versioning folder were still in the index with their block hashes, this would substantially increase the probability (perhaps to 100% if any kind of versioning were in use) that blocks with matching hashes were available. Syncthing could then copy the matching blocks from the trash.

Anyway, I have no idea how easy or hard this kind of upgrade would be, or whether it is in fact already being done. (I did search and couldn’t come up with any info on this bit.)

I’ve heard of people actually forcing the use of versioned files as a local-copy source. Just configure a new folder within Syncthing, not shared to any other device, pointing at the other’s versions folder. Any file known to Syncthing can be used to copy identical blocks from, even across shared folder boundaries.

Not sure it changes the timing problem though, as the versioned file still needs to be scanned before its blocks can be re-used. And when that happens in relation to the deletion being picked up is probably non-deterministic.
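The workaround described above can be set up in the web GUI (add a folder, share it with no other device) or directly in config.xml. A hedged sketch of what such an entry might look like; the id, label, and path are made up, and the exact attribute set varies between Syncthing versions, so compare against an existing entry in your own config.xml rather than copying this verbatim:

```xml
<!-- Unshared folder pointing at another folder's versions directory.
     Blocks of any file scanned here become available for local copying. -->
<folder id="versions-scan" label="Versions (local only)"
        path="/data/photos/.stversions" type="sendreceive"
        rescanIntervalS="3600">
    <!-- Deliberately no <device> entries for remote devices:
         the folder is never shared, only indexed. -->
</folder>
```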

Yeah, it would be good if this were automatic. I renamed a 25 GB folder yesterday and only about 1 GB of data was retransferred; the other 24 GB or so was copied locally. With a 2.5 Gbit local network the extra transfer isn't a big deal, but I was remote, connecting over a WAN.

That has its own drawbacks though, mainly increasing the space needed for the database, a price not every user might want to pay with just default settings + versioning.

Yeah. Perhaps it can be optional.

Or perhaps those blocks could be purged from the database after a period of time, say one hour or one day. (I mean purging the hashes from the DB, not purging the versioned files.)

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.