Replying here as it’s the newer thread, but note the quote^ from the other similar post. That quote describes my dilemma exactly; as I understand it, they’ve stated the problem well. My server has a robust file system (btrfs 3-copy ~raid in my case) and backup scheme, but the devices that require full write access (the group’s laptops, and for some shares phones) simply don’t support that file-system-level redundancy & repair.
It’s not clear to me how to address this in the general case, when some devices with write access to a share don’t have the same level of robustness. Some user interaction / hoops seem required, given that timestamps are not always meaningful.
A few thoughts I’ve had. I’d agree with shifting the burden to the user rather than to Syncthing if at all possible, but some support from Syncthing does seem unavoidable:
- a [new] third folder type: create/delete only. All files are read-only, so a checksum failure can always be interpreted as “not modified, therefore corrupt”. This works close to perfectly for large use cases such as media backup. On the rare occasions when edits are needed, it’s simple enough to create new files and delete the “modified” ones.
- file-level read-only. The user has to mark a file writable. I’m not proposing “checkout”, just changing attributes to make the file globally writable, as a synced value. Sure, checkout might be nice, but it would be a much more complicated feature, I would think? This change couples nicely with my variation of “archive” below.
- files in a share become “archive” (== read-only) after some time window. If Syncthing used a file’s read-only attribute to ascertain that any checksum failure is corruption, then this could actually be a batch job on the server (chmod’ing old files to read-only; see the sketch after this list), and Syncthing’s responsibility would be limited?
- (Linux respects the read-only bit, afaict; iirc Windows has ways for the user to bypass it? I’d be fine with that limitation.)
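
For what it’s worth, the server-side batch job in the third option could be something like this (a minimal sketch, assuming a POSIX file system; the share path and the 90-day window are made-up placeholders, not anything Syncthing defines):

```python
#!/usr/bin/env python3
"""Archive old files by marking them read-only.

Minimal sketch: clears the write bits on any file in the share
that hasn't been modified within the cutoff window.
"""
import stat
import time
from pathlib import Path

SHARE_ROOT = Path("/srv/syncthing/media")   # hypothetical share path
ARCHIVE_AFTER = 90 * 24 * 3600              # ~90 days, in seconds

cutoff = time.time() - ARCHIVE_AFTER

for path in SHARE_ROOT.rglob("*"):
    if not path.is_file():
        continue
    st = path.stat()
    perms = stat.S_IMODE(st.st_mode)
    # Skip files modified within the window, or already read-only.
    if st.st_mtime > cutoff or not (perms & stat.S_IWUSR):
        continue
    # Drop all write bits (owner/group/other), keep everything else.
    path.chmod(perms & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```

The batch job only sets the attribute, of course; Syncthing would still have to treat a checksum mismatch on a read-only file as corruption (restore from another device) rather than as a local edit to propagate.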
As I consider the risk to older but critical data, I’m growing very concerned about this.