Improving performance by limiting concurrent I/O

Specific to my case: it seems CrashPlan stores its data in data files of (almost) 4 GB. So each time the backup has added something, the last file needs a full re-hash. Additionally it also does ‘pruning’, meaning it throws away blocks of files (or versions) that are no longer needed, freeing up space for the next backup and again causing many more of those 4 GB files to be touched. I have no statistics on this, but my impression is that Syncthing will quite often find many of those 4 GB files, spread across all the directories belonging to the local machines, marked for re-hashing. The ‘offsite’ directories are only updated by Syncthing itself, so I agree that just a metadata scan should probably suffice there.
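
For what it’s worth, here is a minimal sketch of how I understand the knobs for keeping that scanning/hashing load down in Syncthing’s config.xml. The folder id and path are just placeholders for my setup, and I haven’t verified this end to end, so treat it as an assumption rather than a tested recipe:

```xml
<configuration>
  <!-- hypothetical folder entry for the CrashPlan archive directory -->
  <folder id="crashplan-local" path="/data/crashplan" type="sendreceive" rescanIntervalS="3600">
    <!-- number of hashing routines for this folder; 0 = auto, 1 keeps disk I/O modest -->
    <hashers>1</hashers>
  </folder>
  <options>
    <!-- how many folders may scan/sync at the same time; 1 serialises them -->
    <maxFolderConcurrency>1</maxFolderConcurrency>
    <!-- run Syncthing at lowered process/IO priority (default is already true) -->
    <setLowPriority>true</setLowPriority>
  </options>
</configuration>
```

If maxFolderConcurrency works the way I think it does, that alone should stop several of those 4 GB re-hashes from hammering the disk at the same time.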

PS: by ‘on startup’ I meant ‘after not having run for a couple of days’. I’ve installed Syncthing as a service, but for some reason it sometimes stops, and I’ve been too lazy so far to really figure out why and what to do about it (instinctively I’m blaming the wrapper). Whenever I think of it I simply check and, if needed, start the service again, and that’s when I notice the described behaviour.