How to protect against one node deleting all files in the entire cluster?

Let’s say nodes A, B and C share a folder via Syncthing. Node A adds massive data, which gets copied to B and C. One day C suddenly deletes all the files in the folder. A’s folder is “Send Only”, so it retains the huge dataset. However, B accepts all changes, so I assume it will apply C’s deletions and delete the files on its side too, is that correct? The next day A clicks “Override Changes”, and the files have to be transferred to B all over again.

If my understanding above is correct, then the question: is there any way for B to avoid re-downloading the entire dataset in the above scenario? E.g., a way to accept only A’s changes, but reject everyone else’s?

The recently added “Receive Only” folders are, I guess, a step in this direction. But setting “Receive Only” is up to the user running each node. If node C did not set “Receive Only”, it is still able to delete everyone’s files (except A’s).

Sorry if this was already discussed, or if I am missing something obvious. I would greatly appreciate any pointers!

Thanks!

No, there is no way to do that sensibly. There is an “ignore deletes” flag, but it works on a per-folder basis, so deletions from both A and C would be ignored.
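
For reference, a minimal sketch of flipping that flag via the REST API; the address, API key, and folder ID below are placeholders, and the endpoints are the classic `/rest/system/config` pair, so adjust for your version:

```python
import requests

API = "http://localhost:8384"              # default GUI/REST address
HEADERS = {"X-API-Key": "YOUR_API_KEY"}    # placeholder; see GUI settings

# Fetch the full config, flip ignoreDelete on one folder, write it back.
cfg = requests.get(f"{API}/rest/system/config", headers=HEADERS).json()
for folder in cfg["folders"]:
    if folder["id"] == "my-folder-id":     # hypothetical folder ID
        folder["ignoreDelete"] = True
requests.post(f"{API}/rest/system/config", headers=HEADERS, json=cfg)
```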

Yeah, I guessed so. (Thanks for the reply!) I now wonder how such protection could possibly work, and whether it is even possible. (By the way, I think this is an important problem even in the single-user case. E.g., one of the machines may be compromised, lost, messed with by kids, etc.; 1001 possibilities, any of which sends the entire cluster into a long re-download.)

Idea 1 - using the “ignore deletes” flag. Deletes are done by an additional script, which gets the list of files to delete from the master node. Problem: it protects only against deleting entire files; someone may instead delete just the contents and leave empty files behind.
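
A rough sketch of what I mean, assuming the master publishes a plain list of relative paths (one per line) somewhere the other nodes can fetch it; the host, paths, and list format here are all made up:

```python
#!/usr/bin/env python3
"""Idea 1 sketch: apply only the master's deletions locally.

All nodes run with "ignore deletes" on, so Syncthing itself never
deletes anything. This script removes local files that are absent
from the master's published file list.
"""
import os
import urllib.request

FOLDER = "/data/shared"                                 # local synced folder
MASTER_LIST_URL = "http://master.example/filelist.txt"  # hypothetical

with urllib.request.urlopen(MASTER_LIST_URL) as resp:
    keep = set(resp.read().decode("utf-8").splitlines())

for root, _dirs, files in os.walk(FOLDER):
    for name in files:
        rel = os.path.relpath(os.path.join(root, name), FOLDER)
        if rel.startswith(".st"):   # skip Syncthing's internal files
            continue
        if rel not in keep:
            print("deleting", rel)
            os.remove(os.path.join(FOLDER, rel))
```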

Idea 2 - “spent fuel rods tank”. How about this: when a block is deleted (any block, whether from deleting or modifying a file), Syncthing would store it in a special folder named e.g. “undelete”, where such blocks would sit for a set time, say a week. If they become needed within this time, they get reused. Otherwise, after a week (each rescan could check this), they are finally deleted for good. So if the master node “overrides changes”, there is no need to re-download everything. Problems: needs implementing; uses more disk space.
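
Not how Syncthing works today, of course, but a toy model of the idea, with made-up names:

```python
import time
from typing import Dict, Optional, Tuple

RETENTION_SECONDS = 7 * 24 * 3600   # keep "spent" blocks for a week

class SpentFuelTank:
    """Toy model of Idea 2: park blocks of deleted or modified files
    for a grace period so they can be reused instead of re-downloaded."""

    def __init__(self) -> None:
        self._tank: Dict[str, Tuple[bytes, float]] = {}  # hash -> (data, deposited at)

    def deposit(self, block_hash: str, data: bytes) -> None:
        """Called whenever a block stops being referenced by any file."""
        self._tank[block_hash] = (data, time.time())

    def reuse(self, block_hash: str) -> Optional[bytes]:
        """If a needed block is still in the tank, pull it back out."""
        entry = self._tank.pop(block_hash, None)
        return entry[0] if entry else None

    def purge(self) -> None:
        """Run on each rescan: drop blocks older than the retention window."""
        cutoff = time.time() - RETENTION_SECONDS
        self._tank = {h: e for h, e in self._tank.items() if e[1] >= cutoff}
```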

Idea 3 - some option like “Connect only to nodes with Receive Only folders” (with the possibility to specify exceptions for nodes that are allowed to send changes). Problem: theoretically, a malicious client could be made that pretends to be “Receive Only” but in fact sends changes.

Maybe something along these lines is possible? (Idea 2 seems workable to me.)

You can enable versioning and create an unshared folder pointing at the versions folder, so it gets scanned and indexed; those blocks will then be used for block reuse.
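
A hedged sketch of setting this up via the REST API rather than the GUI; the folder IDs, path, and API key are placeholders, and the minimal folder object relies on Syncthing filling in defaults:

```python
import requests

API = "http://localhost:8384"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}   # placeholder

cfg = requests.get(f"{API}/rest/system/config", headers=HEADERS).json()

# 1. Turn on trashcan versioning for the shared folder, so files deleted
#    by a remote node land under .stversions instead of vanishing.
for folder in cfg["folders"]:
    if folder["id"] == "shared-folder":              # hypothetical ID
        folder["versioning"] = {"type": "trashcan",
                                "params": {"cleanoutDays": "7"}}

# 2. Add a second, unshared folder over the versions directory, so its
#    contents get scanned and indexed and the blocks can be reused.
cfg["folders"].append({
    "id": "versions-index",                          # hypothetical ID
    "path": "/data/shared/.stversions",              # versions dir of the above
    "devices": [],                                   # shared with no other device
})

requests.post(f"{API}/rest/system/config", headers=HEADERS, json=cfg)
```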

Anything else requires development, which would have to be done by you.

I always thought this was standard functionality, because the file was scanned and indexed before being moved. Before reusing, rehashing would make sense… thinking about it, monitoring for deletions would also be necessary… and so on. I understand why it is not active by default. :) An implementation (if @Kirr would consider doing it) as a per-folder option might make sense in a few cases.

Thank you, @AudriusButkevicius, for pointing out that this is not implemented. It explains a few situations.

This looks like a solution! Thank you!

I did not realize that blocks can be reused across folder boundaries, but thinking about it, it makes sense.
