Make puller more patient with locked files

I’m asking for a nice-to-have feature to reduce log flooding and increase the puller’s patience in a particular business-as-usual condition.

Symptom

23 lines of log per minute, consisting of 2 lines repeated 11 times plus 1 more line:

INFO: Puller (folder "foldername", file "filename"): pull: peers who had this file went away, or the file has changed while syncing. will retry later
INFO: Puller: final: peers who had this file went away, or the file has changed while syncing. will retry later
INFO: Folder "Pdocs" isn't making progress. Pausing puller for 1m0s.

Cause

A file at the source with an exclusive lock.

A very old Windows program my wife uses closes and re-opens its main file after every write. The file is therefore picked up as changed and then immediately locked again.

Eventually, maybe a few days later, she closes the program and the file is synced.

Amelioration

  1. Combine the “Puller (folder …)” and “Puller: final” messages
  2. Produce the message(s) only once for the 11 iterations
  3. After (say) 5 minutes, increase the pause to (say) 15 minutes (or increase it progressively: 1, 5, 10, 20, 60 minutes)

Any one of these would help reduce the verbosity and wasted processing.

This is not an error condition and, in our case, can last several days.

Other Considerations

I don’t know what effect increasing the puller delay would have on the syncing of other files. It might be better to transfer the file to a patient puller and later to a _more patient puller_ so as not to impact other files. Another possibility is to have a dedicated patient puller for just the problem file.

So just to clarify, the puller is currently infinitely patient as it will hopelessly retry and retry forever. However it produces a vast amount of complaints in the process, which I guess is the problem?

I guess there could be some sort of back off when it’s failed to pull a file two thousand times and it’s quite likely to fail the two thousand and first time as well…

Related: https://github.com/syncthing/syncthing/issues/2922 as this is a special case of the file being updated without Syncthing scanning it properly (because it can’t, as it’s locked).

That’s it - it’s the complaints I’m concerned about.

To defend my use of the word “patient”: the puller is infinitely persistent, but it isn’t infinitely patient, as it complains.

Seeing the message that the folder isn’t making progress makes me worry that other files may be queued behind this one and won’t sync until it does. I hope this isn’t the case.

On the Raspberry Pi, I used to pipe syncthing’s output through grep to remove the messages for the problem file. Now I don’t bother. Instead I schedule a nightly stop of syncthing and a reboot.


:smiley: Okay!

No, everything else is handled. What happens is that it attempts to sync all the files that need syncing, then retries the failed files, then retries again, up to ten times. After that it complains that it’s not making progress, since apparently everything that could be done without failing has been done.
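The behaviour described above could be pictured roughly as follows. This is a simplified sketch under my own naming, not the real puller code:

```go
package main

import "fmt"

// pullFolder sketches the retry behaviour described above: attempt every
// needed file, then keep retrying only the failures, for up to maxTries
// rounds. Whatever still fails after that is what triggers the
// "not making progress" pause. Names and structure are illustrative.
func pullFolder(needed []string, pull func(string) bool, maxTries int) []string {
	failed := needed
	for try := 0; try < maxTries && len(failed) > 0; try++ {
		var stillFailed []string
		for _, f := range failed {
			if !pull(f) {
				stillFailed = append(stillFailed, f)
			}
		}
		failed = stillFailed
	}
	return failed // files that never succeeded; the puller pauses and logs
}

func main() {
	locked := map[string]bool{"main.dat": true} // exclusively locked file
	pull := func(f string) bool { return !locked[f] }
	fmt.Println(pullFolder([]string{"a.txt", "main.dat"}, pull, 10))
	// only the locked file remains failed after all ten rounds;
	// a.txt synced on the first attempt
}
```

The point is that the locked file doesn’t block the others: each round retries only what failed, so everything else completes on its first attempt.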
