stuck syncing at 90%

Hey guys,

I have 2 nodes syncing 2 folders between them. One of the nodes got stuck at 90% while I was testing ignoreDelete, trying to set up a backup resilient to ransomware.

Here is my setup:

  • Node1 - v0.12.20 on slackware64
      • folder1 - all default settings
      • folder2 - all default settings + ignoreDelete set
  • Node2 - v0.12.20 on Windows XP 32-bit (running in QEMU)
      • folder1 - all default settings
      • folder2 - all default settings
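
For anyone reproducing this: ignoreDelete is a per-folder option. As a hypothetical sketch (the path and device ID below are placeholders, other elements are omitted, and the exact element layout varies by Syncthing version), a folder entry in config.xml with the option enabled might look like:

```xml
<!-- Sketch only: placeholder path and device ID, most elements omitted.
     Exact layout varies by Syncthing version. -->
<folder id="folder2" path="/srv/sync/folder2">
    <device id="NODE2-DEVICE-ID"></device>
    <ignoreDelete>true</ignoreDelete>
</folder>
```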

Here is what I did:

  1. copied a file to folder2 on Node2
  2. it replicated to folder2 on Node1
  3. zipped the file in folder2 on Node2 and deleted the original (simulating ransomware)
  4. the zipped file replicated to folder2 on Node1, but the original file did not disappear from folder2 on Node1 (that was the desired outcome)
  5. deleted the zipped file from folder2 on Node1
  6. it disappeared from folder2 on Node2 (also the desired outcome)

Now, the web GUI on Node1 says all nodes are synced, but the web GUI on Node2 says Node1 is “Syncing (90%)”… I tried restarting Syncthing on both nodes; it didn’t help.

Can you please help me figure out what is wrong with the web GUI on Node2?

Thanks

This is expected, then. That device is ignoring updates, so it will never be up to date from the other device’s point of view. This is even documented for the setting in question:

In this state, Bob is fully up to date from his own point of view, as is Alice from her own point of view. However from the point of view of Bob, who ignored the delete entry from Alice, Alice is now out of date because she is missing the file that was deleted.

I read that as well, but this is the confusing part: Node1 (Bob) is the one that has ignoreDelete set, so it should see Node2 (Alice) as out of sync, but it is the other way around: Alice is the one who sees Bob stuck at 90%.

No…

  • Node1 has ignoreDelete set
  • Node2 sends updates about deleted files
  • Node1 ignores those updates
  • Node2 sees that Node1 hasn’t applied the updates
  • Node2 thinks Node1 is out of date
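
The chain above can be sketched as a toy model (assumed behavior for illustration, not Syncthing's actual code): each device judges a peer's completion by comparing the peer's announced index entries against its own, and an ignored delete never matches.

```python
# Toy model (assumed behavior, not Syncthing's actual code) of how each
# side judges the other's completion from announced index entries.

def completion(local_index, peer_index):
    """Percentage of entries in our index that the peer holds at the
    same version (an ignored delete record never matches)."""
    if not local_index:
        return 100.0
    in_sync = sum(1 for name, record in local_index.items()
                  if peer_index.get(name) == record)
    return 100.0 * in_sync / len(local_index)

# Node2 (Alice) deleted "doc.txt" and recorded the delete at version 2.
alice = {"doc.txt": ("deleted", 2), "other.txt": ("file", 1)}

# Node1 (Bob) has ignoreDelete set: he dropped the delete record and
# still holds his old entry for "doc.txt".
bob = {"doc.txt": ("file", 1), "other.txt": ("file", 1)}

# Alice compares Bob's index against hers: he never applied the delete,
# so she shows him as syncing at less than 100%.
print(completion(alice, bob))  # 50.0
```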

What you say makes sense to me, and it matches what I’m seeing, but it is not what I understood from the documentation about ignoreDelete.

Assume two devices, “Alice” and “Bob”, are sharing a folder. Bob has set ignoreDelete.
----snip----
However from the point of view of Bob, who ignored the delete entry from Alice, Alice is now out of date because she is missing the file that was deleted.

That too. They’ll probably both look out of date to each other in the end, actually. You’re right that this could be clarified.

Yes, the documentation is a bit confusing, at least on this topic.

I tested further with renames on both nodes, and Node1 not only kept the original file but also got the “new” file (same file contents, but a different file name).

This is what I hoped for, as it guarantees that if something like CryptoLocker hits my server, I’ll have the original files (plus the encrypted files) on my backup server.

In all tests, Node1 (Bob) thinks Node2 (Alice) is in sync, but Node2 (Alice) thinks Node1 (Bob) is syncing at nn%.

Thanks for clarifying for me, and for considering a documentation clarification :slight_smile:

Not due to the ignoreDelete option, though, if the files are modified in place; those changes would still be synced. Renames, however, are seen as a create + delete, resulting in the behavior you see above.
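
To make the create + delete point concrete, here is a small sketch (assumed behavior for illustration, not Syncthing's code) where a rename arrives as two index updates and ignoreDelete drops the second one:

```python
# Sketch: a rename travels as a create for the new name plus a delete
# for the old one; with ignore_delete the delete half is dropped.

def apply_updates(index, updates, ignore_delete=False):
    for name, action in updates:
        if action == "delete":
            if ignore_delete:
                continue              # drop the delete record entirely
            index.pop(name, None)     # otherwise remove the old name
        else:
            index[name] = action      # create/update: store the file
    return index

rename = [("new.txt", "create"), ("old.txt", "delete")]

# A normal device ends up with only the new name...
normal = apply_updates({"old.txt": "create"}, list(rename))
print(sorted(normal))  # ['new.txt']

# ...while a device with ignoreDelete keeps both names, which is the
# "original file plus the renamed copy" behavior described above.
backup = apply_updates({"old.txt": "create"}, list(rename),
                       ignore_delete=True)
print(sorted(backup))  # ['new.txt', 'old.txt']
```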

Made a change to the docs to reflect @elfo74’s experience.

Bob ignores the delete and sees Alice as up to date.
Alice deletes the file and sees Bob as out of sync.

This also becomes out of date. Bob ignores the delete, so sees no announce at all for the deleted file. Hence he thinks Alice is missing the file entirely.

This is what I initially had in the doc, but not what @elfo74 said occurred. I am on my phone at the moment, so I can’t test for myself.

I assumed Alice must announce the file as deleted in the index exchange, which Bob ignores. Unless Bob changes the file, he shouldn’t try to push it back to Alice, so he must track Alice as up to date until the file changes again.

I was also wondering whether this would break once the change to remove deleted files from the index (after all devices are aware of the delete) is merged. If so, would we see deleted files synced back to devices once every member of the cluster has removed the file from its index?

No. Since Alice doesn’t (apparently) announce the file, she doesn’t have it at all, hence cannot be up to date. Remember, we don’t currently keep track of what devices have announced previously, as the index is sent in full at connection start.
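
Since the full index replaces whatever we previously knew about the peer, a reconnect is what makes this visible. A rough sketch of that assumed behavior (again, not Syncthing's actual code):

```python
# Sketch (assumed, not actual Syncthing code): at connection start the
# peer's full index is received, and a device with ignoreDelete drops
# every delete record from that stream.

def receive_full_index(full_index, ignore_delete=False):
    """Rebuild our picture of the peer from their full index."""
    return {name: record for name, record in full_index.items()
            if not (ignore_delete and record == "deleted")}

# Alice's full index after she deleted doc.txt:
alice_full = {"doc.txt": "deleted", "other.txt": "ok"}

# Bob (ignoreDelete) rebuilds his view of Alice on reconnect; the
# delete record is dropped, so from his side Alice now appears to be
# missing the file entirely:
bob_view_of_alice = receive_full_index(alice_full, ignore_delete=True)
print(sorted(bob_view_of_alice))  # ['other.txt']
```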

We don’t do this (as you probably know, but for clarity for any other readers). But if we did, this probably wouldn’t happen in this scenario, as the device with ignoreDelete will never appear to be aware of the delete, since it doesn’t accept that update in the first place.


Thanks for the clarification. I will make an edit to the docs when I get a chance tomorrow if no one does it first.


I’m going off-topic from the discussion, but I think that a program designed for 2-way sync (like Syncthing) is not the right tool for this use case. I recommend something designed for append-only backups.


Hi @calmh

I tried setting ignoreDelete on a test folder on my work laptop, and the GUI does show devices which deleted the file as up to date.

This seems to persist in the GUI until Bob restarts: neither refreshing without cache nor manual or automatic scans caused Alice to display as out of sync. Once Bob’s Syncthing instance restarted, Alice appeared out of sync.

Yep. I’ll leave it as an exercise for the reader to figure out why it behaves like that. :slight_smile:

I’m going off-topic from the discussion, but I think that a program designed for 2-way sync (like Syncthing) is not the right tool for this use case. I recommend something designed for append-only backups.

I’d argue that options like file versioning and ignoreDelete are very well suited to that kind of use case.

Regardless, I’m just trying it out and, if it all goes well, I’ll implement it as an addition to the 2 other backup systems in place.

Storage is cheap(ish) now and data can be priceless, so it would be almost criminal not to have redundant backups.

Sure, but it means you are only a configuration error away from compromising your backups. You will need to protect your configuration file very carefully.
