Discussing FS watcher delays (#4953)

This is a continuation of the discussion in https://github.com/syncthing/syncthing/issues/4953


As I understand it, the file watcher batches edits into 10-second periods and deletes into 60-second periods.

However, it seems that if a file is deleted and then recreated shortly after, the whole edit is delayed by 60 seconds. Is this intentional? This is essentially the same thing as clobbering a file, so I would expect the delay to be the same as for clobbering.


Yes, it is intentional.

I don’t know what clobbering means; maybe it’s relevant to know that the FS watcher/aggregation/delay part does not know about file contents.

I am currently struggling to come up with a concrete example where the policy actually helps, but the general idea is: existing files can be delayed up to 6x the wait time if modified multiple times. New files can be reconstructed from existing files on the remote, so anything that was deleted (i.e. also delete & create) is delayed by 6x the wait time too. I mean, I can come up with a scenario, but it is not very convincing: rename file A to B, create a different file A, and keep modifying the new file B without changing its content much or at all. Then if A wasn’t delayed, B wouldn’t be reconstructed from A on the remote (as A was already replaced).

However, I think it’s fine to err on the safe side, so unless someone has some convincing arguments against it, I’d keep the current behaviour.


IMO, the current behavior (10 second batching for most stuff, 6x that for deletion or multiple modifications) is sensible for most use cases, but it would be kind of nice to be able to tweak both values on a per-folder basis. As a couple of examples:

  • I have a folder where I have a bunch of playlists, and when I edit those, I do so pretty slowly, and would much rather batch all the changes I make into one update, so for that folder, it would be nice to be able to bump up both values significantly.

  • On the other side of things, I do have a small handful of things where changes are very infrequent and consist of single writes, but I want them propagated as fast as possible, even if it means that data may not be reused as much as it otherwise would. In this case, it would be wonderful to just drop the batching down to 1 second, and remove the multiplier.

Well, it’s up to 6x the wait time, capped at 1 min, i.e. repeated changes and deletes aren’t delayed additionally if the “normal” delay is already >= 1 min. So I think that should cover the first point.

For the second point: the fastest current possibility is indeed a 1s delay, and a 6s delay for continuously changing or deleted files. Personally this still feels plenty fast, and making the second delay configurable seems like too much complexity for the gain. Is the difference relevant in your case?


For the first point, that does cover it pretty well, and it looks like I just misunderstood how that works. Being able to just make it a flat 60s delay might be nice for that particular use case, but isn’t really essential.

As far as the second point, 1 second is fast enough, though the potential six second delay might be too slow for some other people’s usage (not mine, but mine is obviously not the only use case).

Given this, it sounds like the ideal case might be to:

  • Allow per-folder configuration of the batching delay (if this is already possible, I don’t see it in the docs or from looking at the regular syncthing configuration).
  • Allow the user to turn off the extended delay (IOW, give them the option to just delay up to however long they have configured, and not defer again if something changes before the delay is up).
  • Possibly change the maximum delay to just be one minute, instead of 6 times the batch delay (this would make it somewhat clearer what’s going on, and make the behaviour a bit more predictable for people who are new to this).

Clobbering means overwriting a file with a different one.

I did not realise that any file modified more than once gets delayed extra. I can see the behaviour’s utility, but it feels like a kludge and is not intuitive at all, especially migrating from syncthing-gtk’s watcher, which does not delay anything at all.

I can’t think of a use case where changing the extended delay separately would be useful, but I think this behaviour could at least be made clearer: Files that are currently being delayed could be shown in the UI, and the delay should be editable in the main folder settings, not only advanced settings.

This is a UX train crash, displaying some internal queueing mechanisms in the UI.

Would your parents care about that? Why do you care about that?

I am also somewhat against adding yet more knobs. Someone would have to give a really good reason, i.e. the existing implementation breaking something spectacularly. I know people feel bad when they have the feeling that they don’t have full control over the behaviour, but I don’t see a reason why they should, either.

As far as I understand, clobbering does not involve a delete; it involves a rename on top, which I am not sure how we handle.


I care because I was so confused by it that I honestly thought it didn’t work right and opened a github issue about it…

I meant showing them in the “out of sync items” list with everything else, it doesn’t need to say anything about delays, just some confirmation that, yes, syncthing has taken notice and will sync this file.

But it has nothing to do with being out of sync: out-of-sync items are files we need to fetch from the remote side, which has nothing to do with local queuing.

The queue is not even per file, but potentially per directory, and gets collapsed as more and more directories appear, etc.

Right. Well, I don’t have a solution, but again, I do think it’s confusing.

Just to be clear: a file modified more than once gets delayed until it is not modified anymore or the maximum delay is reached (which is 6x the normal delay, capped at 1 min).

Everything related to FS watching happens on the device where the change occurs, so the other device that is going to be out of sync and pull the file does not know about it until the delay has finished -> no indication is possible.

I can see how it’s confusing that on the remote a new file is created quickly and then there is a longish wait time until an old one is removed. I just don’t see a good way around it.

I think another problem is that when you are synchronizing two devices right next to each other, there is a significant, unnecessary wait until your changes are actually transferred. You have to wait up to 60 seconds until your change is visible on the other device, with no indication why. With the scanner this was unavoidable, but now I don’t see why we should keep this bad behaviour.

It’s a tradeoff. Think of e.g. rendering a movie: without the delay, Syncthing will rehash it every 10s; with the delay, only every 60s. Admittedly that’s still bad and such work should ideally not be done in a synced dir in the first place, but who thinks about such things.

I am now implementing a mechanism that will speed things up in most cases:
If there are no “creations” being delayed, “removals” don’t get an additional delay either (i.e. only the normal 10s instead of 1 min).

I believe this will make the case where a delete is actually delayed so unlikely that no one will notice. Essentially it only happens if there are continuous changes or a whole lot of simultaneous ones, and in that scenario I don’t think anyone will notice any delay patterns.
