I have some scheduled DB dumps that take quite a long time to complete. I noticed that Syncthing slows down the process because it constantly rescans the file, only to find it has changed again. The backup storage is a slow rotational HDD, so Syncthing slows the overall process down significantly, taking roughly 50% of the I/O bandwidth. I tried playing with Advanced Settings / Folder fsWatcherDelayS, but it did not help.
Is there a way to make Syncthing's scanner ignore a file that was updated less than N seconds ago (optionally, only if it is larger than X)? This would improve overall system performance by eliminating useless rescans.
Not really, no. I'd recommend having the backup process write to a temp file, then move it to the final destination: run_db_backup > db.backup.tmp && mv db.backup.tmp db.backup. Then ignore *.tmp and you get rid of both the unnecessary scanning activity and the risk of syncing partially complete backups.
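To make that concrete, here is a minimal sketch of such a backup script. The directory, file names, and run_db_backup function are all illustrative stand-ins (a real script would call pg_dump, mysqldump, or similar); the point is only the write-to-temp-then-rename pattern:

```shell
#!/bin/sh
# Sketch: write the dump to a *.tmp file (which Syncthing ignores),
# then rename it so Syncthing only ever sees a complete backup.
set -eu

BACKUP_DIR="$(mktemp -d)"          # stand-in for the real Syncthing folder
TMP="$BACKUP_DIR/db.backup.tmp"    # matched by the *.tmp ignore pattern
FINAL="$BACKUP_DIR/db.backup"

# Hypothetical dump command; replace with the real one.
run_db_backup() { echo "dump data"; }

run_db_backup > "$TMP"
mv "$TMP" "$FINAL"   # rename is atomic on the same filesystem
```

The matching .stignore entry in the folder root would simply be a line containing *.tmp. Because mv within one filesystem is a rename, Syncthing never observes a half-written db.backup, so there is nothing to rescan until the backup is actually finished.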
I see it rather the opposite way. Having long-running jobs write incomplete data to temp files and rename them when done is always good practice: it works with all kinds of sync and backup tools and has no real downsides. Implementing workarounds in Syncthing would add complexity (extra config knobs to tune, more code, more things to test) that is useful only in specific cases.