272GB folder with 65 items (8MB) out of sync - why does scanning take priority?

I’ve got a fairly large folder (272GB) and I’ve noticed that when something is changed on a remote, Syncthing immediately recognizes the change, but then waits for the (insanely long) folder scan to finish before syncing it. Is there a technical reason why Syncthing would prioritize a full scan over the seemingly rational option of looking specifically at the 65 out-of-sync items (8MB total) first and trying to sync them? I’ve been waiting a good 45 minutes for this scan to finish on a fast i7 Mac with 32GB of RAM. Logs show it’s doing nothing but scanning.

bumping this – I use Syncthing extensively and am interested in understanding its operation a little more clearly, as it doesn’t make much sense to me that scanning would (appear to) take priority over syncing.

Scanning has to finish first, because the file may very well already be present locally (in which case there is no need to download it again).

While I do understand that, it feels like this is perhaps an oversight, or simply something that hasn’t come up as a problem with smaller shared folders.

Respectfully, a quick “what’s the hash of this file?” check that could jump the queue of the literally millions of files being scanned and hashed seems like a huge UX improvement over “let’s finish scanning these millions of files and THEN get to the thing we know at least one remote definitely thinks has changed”.

Syncthing won’t interrupt a scan to start syncing, but it also doesn’t start a “full scan” as you say except on the configured interval.

Gotcha - thank you for the clarification. I have the scan interval set to “really long” (something like once a week) just to catch anything that might have been missed by the filesystem watcher, but it seems like every time I’m having an issue with syncing it’s because a massive scan is in progress. :slight_smile:
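For reference, both of these are per-folder settings in Syncthing’s config.xml (also editable in the GUI under the folder’s Advanced tab). A sketch with a one-week rescan interval (604800 seconds) and the watcher enabled — the folder id, label, and path are placeholders:

```xml
<folder id="abcde-12345" label="Big Folder" path="/Users/me/Sync" rescanIntervalS="604800" fsWatcherEnabled="true">
    <!-- other folder settings omitted -->
</folder>
```

With the watcher on, the periodic scan is only a safety net for changes the watcher missed, so a long interval is reasonable — the trade-off being exactly what’s described above when the big scan does run.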

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.