Failed items because of regularly changing files cause constant Rescans/Resyncs

First of all thanks for providing such a great product, I really love Syncthing! :slightly_smiling:

One of the use cases for which I use Syncthing is to sync my home directory from my work PC, which is turned on only every now and then, to my home server, which runs all the time. This home directory sync is then backed up to a remote server. This setup works fairly well; however, there is one somewhat annoying issue:

As is quite common in a home directory, many of the files change quite frequently. Syncthing detects this and generally reports after each syncing phase that it is Out of Sync because of some Failed Items. (Most of the time these items are files in my Firefox configuration directory, such as log files, but they are hard to exclude in general because they are spread across various subdirectories.) Because of these failed items, Syncthing seems to skip the configured Rescan Interval of 1800 s and starts a new scan after only a minute. As this new scan/sync phase again ends with failed items, Syncthing is essentially scanning/syncing constantly and thus burns a lot of CPU cycles. To give some numbers: over 22 h the Syncthing processes accumulate about 7 h of CPU time. I suppose this is not intended behavior, is it?

For the short term, I would be interested to know whether there is a way to tell the Puller to wait longer after a “failed phase”, as that would fix at least the symptoms of the issue. For the long term, I wonder what generic solution would be appropriate…

What do you think? Thanks,

Tobias

Well, ideally you’d exclude those files.
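
For example, a `.stignore` file in the folder root can exclude them via Syncthing’s ignore patterns. This is only a sketch — the exact paths are assumptions based on a typical Firefox profile layout, so adjust them to whatever actually shows up as failed on your system:

```
// Hypothetical patterns for a typical Firefox profile; adjust to your failed items.
// (?d) marks the match as deletable so it never blocks directory deletion.
(?d).mozilla/firefox/*/lock
(?d).mozilla/firefox/*/*.sqlite-wal
(?d).mozilla/firefox/*/*.sqlite-shm
(?d).cache
```

The `*` wildcard covers the randomly named profile subdirectory, which is why these files are otherwise tedious to exclude one by one.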

Some files (such as memory-mapped SQLite files, which Firefox uses) can never be synced, as their mtime is not updated.

In advanced config there is a PullerPauseS setting which you can adjust.
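
For reference, that setting lives per folder in `config.xml` (or under Actions → Advanced in the GUI). A minimal sketch — the folder id, path, and value below are assumptions, not your actual config; the value is the pause in seconds between pull retries:

```xml
<folder id="home" path="...">
    <!-- Hypothetical example: pause 600 s between failed pull iterations
         instead of retrying almost immediately -->
    <pullerPauseS>600</pullerPauseS>
</folder>
```

Raising it would reduce the CPU churn from the retry loop, though excluding the problem files is still the cleaner fix.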

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.