There are many cases in which delayed sync would be useful. It's not a complete replacement for a good backup solution, but it could help. Would it be possible to add an option to delay applying received changes (by up to 20-40 hours) when they arrive?
There is the fsWatcherDelayS option: Syncthing Configuration — Syncthing documentation (found under Actions > Advanced for each folder), if you're using the filesystem watcher. Otherwise, just set your rescan interval appropriately.
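For reference, both knobs live on the folder entry in config.xml. This is an illustrative fragment (the id, label, path, and values here are made up, and you should double-check attribute names against your own config):

```xml
<folder id="default" label="Default Folder" path="/home/user/Sync"
        type="sendreceive" rescanIntervalS="3600"
        fsWatcherEnabled="true" fsWatcherDelayS="10">
</folder>
```

fsWatcherDelayS only controls how long the watcher waits to aggregate local changes before scanning; it does not delay anything on the receiving side.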
In my understanding, this feature only delays the local filesystem watcher, adding a delay before local changes are finalized. I'm looking for an option to delay the start of applying a file change on the receiving side, at a particular node.
As an example: a file is irreversibly damaged on a particular node, the damage is replicated, but its application is postponed on the node that receives it.
Such a thing does not yet exist in Syncthing. But of course it’s possible to implement, would probably just involve checking a timestamp of the change and backing off from the “puller” activity until it’s old enough. If you really need this feature for good reasons, I think a PR adding it as an option could be considered for acceptance. Just don’t expect anyone else to do it for you, as I think this is quite a niche case.
There is no built-in setting that would allow doing something like that, but you could set up some kind of a script that would simply pause and unpause the folder in question periodically, if you really need this functionality.
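As a sketch of that script-based approach: recent Syncthing versions let you toggle a folder's `paused` flag by PATCHing `/rest/config/folders/<id>` on the REST API. The base URL, API key, and folder ID below are placeholders you'd substitute from your own setup; verify the endpoint against your Syncthing version before relying on it.

```python
import json
import urllib.request

def build_pause_request(base_url, api_key, folder_id, paused):
    """Build a PATCH request setting a folder's 'paused' flag.

    Assumes the /rest/config/folders/<id> endpoint of recent
    Syncthing versions; check this against your own instance.
    """
    url = f"{base_url}/rest/config/folders/{folder_id}"
    body = json.dumps({"paused": paused}).encode()
    req = urllib.request.Request(url, data=body, method="PATCH")
    req.add_header("X-API-Key", api_key)
    req.add_header("Content-Type", "application/json")
    return req

# To actually send it (e.g. from a daily cron job that unpauses
# the folder, waits for sync to finish, then pauses it again):
#     urllib.request.urlopen(build_pause_request(
#         "http://127.0.0.1:8384", "YOUR_API_KEY", "default", False))
```

Scheduling two such calls from cron (unpause, then pause again a while later) gives you the "only sync once per day" behaviour without any changes to Syncthing itself.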
However, is file versioning not enough to protect against such damage?
@acolomb I think the long-requested functionality not to scan Receive Only folders on startup may be needed for this to be reliable though, as otherwise Syncthing will still scan the folder in question (and probably pull changes while doing so?), e.g. after performing auto-update and such. Unless you’re thinking about making this completely separate, with the “backed off” puller activity saved in the database or config.
Sorry, I don’t see how that’s related?
The OP was talking about receiving changes from remote, which are stored to the index first. I assumed those have a timestamp of some sort. So when giving work to the puller routines, I’d just check if the timestamp of that remote change is older than the configured threshold (compared to the current wall-clock time), and postpone the item until the next run.
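In rough pseudocode terms (Python here purely for illustration; `should_defer`, the timestamp field, and the threshold name are all made up, not actual Syncthing internals), the check I have in mind could look like this:

```python
import time

# Hypothetical delay threshold, e.g. 24 hours; not a real Syncthing option.
PULL_DELAY_SECONDS = 24 * 3600

def should_defer(change_timestamp, now=None, delay=PULL_DELAY_SECONDS):
    """Return True if a remote change is still too fresh to pull.

    change_timestamp: wall-clock time (in seconds) at which the
    remote change was recorded in the index.
    """
    if now is None:
        now = time.time()
    return (now - change_timestamp) < delay

def partition_work(items, now):
    """Split queued items into ones to pull now vs. ones to postpone
    until the next puller run."""
    pull_now = [i for i in items if not should_defer(i["ts"], now)]
    postponed = [i for i in items if should_defer(i["ts"], now)]
    return pull_now, postponed
```

Postponed items simply stay in the queue and get re-checked on the next run, so nothing extra needs to be persisted.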
No, I don’t think it should be saved anywhere separately. And I repeat, there is probably no developer thinking about “making” any of this a reality, as the general need still needs to be demonstrated.
Yeah, I just got confused. I actually wanted similar functionality in the past, so that I could store a synchronised copy of the data and then only update it with a delay, e.g. once per day. However, this could be easily replicated by simply pausing and unpausing the folder, so I didn't find the functionality that important.
I mean, run rsync on a cron? Not sure why people insist on using syncthing in scenarios like these. Sounds like all of what is being discussed is one way.
Connectivity even under extreme circumstances (e.g. between computers located in different parts of the world behind NAT with no public IPs, etc.) would be the main advantage over any other, “normal” backup tool.
That said, I’d just set synchronisation in Syncthing normally first, and then use a dedicated program to make backup copies locally. Obviously, this will require more disk space, but still.