This is a continuation of the discussion in https://github.com/syncthing/syncthing/issues/4953
As I understand it, the file watcher batches edits into 10-second periods and deletes into 60-second periods.
However, it seems that if a file is deleted and then recreated shortly afterwards, the whole edit is delayed by 60 seconds. Is this intentional? This is essentially the same thing as clobbering a file, so I would expect the delay to be the same as for clobbering.
Yes, it is intentional.
I don’t know what clobbering means; maybe it’s relevant to know that the FS watcher/aggregation/delay part does not know anything about file contents.
I am currently struggling to come up with a concrete example where the policy actually helps, but the general idea is: existing files can be delayed up to 6x the wait time if modified multiple times, and new files can be reconstructed from existing files on the remote, so anything that was deleted (which includes delete&create) is delayed by 6x the wait time too. I can construct a scenario, but it is not very convincing: rename file A to B, create a different file A, and keep modifying the new file B without changing its content much (or at all). Then if A wasn’t delayed, B wouldn’t be reconstructed from A on the remote (as A was already replaced).
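To make that concrete, here is a toy model of the rule just described. This is only a sketch of the behaviour, not Syncthing’s actual watch aggregator; all names here are made up:

```go
package main

import (
	"fmt"
	"time"
)

// Toy model of the aggregation rule described above, NOT Syncthing's
// actual implementation: a single modification gets the base delay,
// while removals (including delete-then-recreate) and repeatedly
// modified files get 6x the base delay.

const baseDelay = 10 * time.Second

type event struct {
	path    string
	removed bool // true for delete events
}

type pending struct {
	removed bool // sticky: once a path was removed, it stays in this bucket
	changes int  // how many events were coalesced for this path
}

func delayFor(p pending) time.Duration {
	if p.removed || p.changes > 1 {
		return 6 * baseDelay
	}
	return baseDelay
}

func main() {
	seen := map[string]pending{}
	for _, ev := range []event{
		{"playlist.m3u", false}, // single edit: dispatched after 10s
		{"notes.txt", true},     // delete...
		{"notes.txt", false},    // ...then recreate: still treated as a removal, 60s
	} {
		p := seen[ev.path]
		p.removed = p.removed || ev.removed
		p.changes++
		seen[ev.path] = p
		fmt.Printf("%-14s -> %v\n", ev.path, delayFor(p))
	}
}
```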
However, I think it’s fine to err on the safe side, so unless someone has some convincing arguments against it, I’d keep the current behaviour.
IMO, the current behavior (10-second batching for most stuff, 6x that for deletions or multiple modifications) is sensible for most use cases, but it would be kind of nice to be able to tweak both values on a per-folder basis. A couple of examples:
I have a folder with a bunch of playlists, and when I edit those, I do so pretty slowly and would much rather batch all my changes into one update. For that folder, it would be nice to be able to bump both values up significantly.
On the other side of things, I have a small handful of folders where changes are very infrequent and consist of single writes, but I want them propagated as fast as possible, even if it means that data may not be reused as much as it otherwise would be. In this case, it would be wonderful to drop the batching down to 1 second and remove the multiplier.
Well, it’s up to 6x the wait time: the multiplied delay is capped at 1 min, i.e. repeated changes and deletes aren’t delayed any further if the “normal” delay is already >= 1 min. So I think that should cover the first point.
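A quick sketch of that cap, again illustrative rather than Syncthing’s actual code, assuming the rule works out to max(normal delay, min(6x, 1 min)) as described above:

```go
package main

import (
	"fmt"
	"time"
)

// removalDelay sketches the rule above: removals and repeated changes
// wait up to 6x the configured delay, the multiplied delay is capped
// at one minute, and the result is never shorter than the normal delay.
func removalDelay(base time.Duration) time.Duration {
	d := 6 * base
	if d > time.Minute {
		d = time.Minute // cap: no more than 1 min of multiplied delay
	}
	if d < base {
		d = base // a large "normal" delay is not reduced by the cap
	}
	return d
}

func main() {
	fmt.Println(removalDelay(1 * time.Second))  // 6s: the fastest setting
	fmt.Println(removalDelay(10 * time.Second)) // 1m0s: the default
	fmt.Println(removalDelay(2 * time.Minute))  // 2m0s: already past the cap, no extra delay
}
```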
For the second point: the fastest currently possible setting is a 1 s delay, i.e. a 6 s delay for continuously changing or deleted files. Personally, this still feels plenty fast, and making that second delay configurable as well seems like too much complexity for the gain. Is the difference relevant in your case?