Incomplete global state when trying to sync a new Receive Only folder with pre-existing files

I have encountered this problem quite a few times recently in my real setup. I still haven’t been able to reproduce this in my test environment though, so I have no exact steps to reproduce or anything like that yet.

This is what the folder state looks like on Device A. The folder is shared between a few devices, with Device B among them.

[screenshot: folder state on Device A]

Now, after sharing the folder with Device B, something breaks.

Please ignore the “Waiting to Sync” state, as nothing changes after it does “sync”. The problem here is that the global state is incomplete and doesn’t match the actual global state on Device A (and on the other devices) at all. Trying to revert local changes deletes the local files but still doesn’t fix the broken global state: the folder syncs, but the result is a very incomplete sync that follows the already incomplete global state. Pausing/unpausing the folder and/or restarting Syncthing itself also changes nothing.

This only seems to happen if there are pre-existing files in the destination. If there are no files and we start from scratch, the folder seems to sync 100% properly. Removing and re-adding the folder doesn’t help either; it still ends up in this state eventually.

Do you have any idea what the problem might be? I will keep trying to reproduce this in my test setup, but for now I am kind of stuck.

Both devices run Syncthing v1.14.0.


I could start to speculate, but it would be the same few options I speculate about on almost all of your reports. Here it would be missing indexes, and no clue why they are missing.
Not complaining about the reports, just expressing that I am in the dark about what is happening; otherwise I’d be experimenting to reproduce and fix it. Information that might help narrow it down:

- Anything in the logs (and I mean anything: items failing to pull intermittently, index ID mismatches, … basically everything that isn’t business as usual).
- Does pausing and resuming the folder change anything?
- Does pausing and resuming the connection (remote device) do anything?
- Does resetting the delta index (`-reset-deltas`) help?
- If there are identifiable items involved, `/rest/db/file` info is always good.


I have tried to reproduce the issue with DEBUG logs enabled, but for some reason the folder has been syncing properly now :sweat_smile:. I paused everything except the two involved devices and the one folder, and then tried to re-add it from scratch as receive only on the second device, but this time the files synced properly.

The only issue experienced while doing the testing was the local additions problem discussed in Folder stuck in sync and non-matching local and global states, for which I do have proper logs now. However, the log files are massive (400 MB and 800 MB respectively) and require anonymisation, so I’m not sure when or if I will be able to do something about them.
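As an aside, a rough first pass at that anonymisation can be scripted. This is only a sketch under my own assumptions about what needs redacting (device IDs and home-directory user names); the file names in `anonymise < syncthing.log` are placeholders, and folder contents mentioned in the log would still need separate treatment.

```shell
# Redact Syncthing device IDs (8 dash-separated groups of 7 base32
# characters, A-Z and 2-7) and home-directory user names from log text.
anonymise() {
  sed -E \
    -e 's/[A-Z2-7]{7}(-[A-Z2-7]{7}){7}/DEVICE-ID-REDACTED/g' \
    -e 's|/home/[^/ "]+|/home/USER|g'
}

# Typical usage (file names are placeholders):
# anonymise < syncthing.log > syncthing-anon.log

# Quick demonstration on a made-up log line:
echo "ABCDEFG-HIJKLMN-OPQRSTU-VWXYZ23-4567ABC-DEFGHIJ-KLMNOPQ-RSTUVWX scanning /home/alice/Sync" | anonymise
# -> DEVICE-ID-REDACTED scanning /home/USER/Sync
```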

Maybe a little bit off topic, but I recently discovered https://lnav.org/ as a great tool for analysing (even very large) log files, which might come in handy for your use case here.

For that problem (local-addition directories with un-ignore patterns going away on scan), no logs are required; it’s clear what is happening. Fixing it is somewhat cumbersome and likely comes with a performance hit while syncing. On the other hand, it only affects certain ignore patterns and is resolved by a scan on the remote, i.e. it resolves itself quickly in a default setup. Basically, it’s just not high priority for me. A GitHub issue doesn’t exist yet, right? It would be good to have it documented somewhere.

I don’t have a clear reproducer though (with steps, etc.). The problem doesn’t seem to occur when testing with a few files in a clean setup, but it always happens with real folders whose files are constantly changing.

Should I still create an issue? It will be a little vague, and will probably mostly refer back to this forum thread.

In that case, no; I thought the repro was there. I’ll try myself and will open an issue if I succeed.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.