Out of sync issue

I suggest you post screenshots from both sides displaying the issue.

The screenshots are included below. I’ve masked some of the details, which I don’t think should be a problem. The last file received is not one of the failed files. The NAS is used to create incremental backups of the synced files.

The NAS seems to retry syncing the failed files periodically. The line with failed items disappears while syncing, then reappears, and the “Out of Sync” status returns.

I would disable delta indexes on the PC, then restart Syncthing and let it come back into sync before enabling delta indexes again.

Sorry, I meant the PC not the NAS. Updated original post.

I don’t think it’s to do with delta indexes. If you rescan on the NAS and the issue does not go away, I suggest touching the files.

I don’t know how to disable the delta indexes, but what I did was change the configuration version from 16 to 15. It sent over a lot more data, so I think it did at least reset the delta indexes. I also did a “rescan all” on the NAS (since the rescan button wasn’t shown for the folder in question).
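
For reference, the edit was in config.xml, made with Syncthing stopped. I’m only guessing from the extra traffic that lowering the version actually resets the delta indexes:

```xml
<!-- was version="16"; Syncthing migrates the config back up on the next start -->
<configuration version="15">
    <!-- ... folders, devices and options left untouched ... -->
</configuration>
```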

The time stamps of the files should remain the same, so I do not want to touch them. I guess I’ll try deleting the index on the PC instead to see if that helps…

Could it be related to this ticket? https://github.com/syncthing/syncthing/pull/2113

How do you even disable delta indexes? I never found an option for that. And how do you delete them? Thanks if anyone can help!

You can set disableTempIndexes per folder in the advanced settings.

That’s something else.

You probably need to drop the database to drop the delta indexes and let it rescan, but as I said, touching the files should fix the issue you are seeing.

I know you have suggested that before, but as I already said twice in this thread, I do not wish to alter the time stamps of the files. Changing the time stamps might cause confusion later, when we no longer know that they were changed or why. It is also not the most practical way to chase down out-of-sync files, especially if the number of affected files were larger.

In my opinion this should be regarded as a bug in Syncthing. Maybe the node which detects the issue should ask the sender to recalculate the hash? Or perhaps nodes could verify hashes while sending?

The link below describes dropping delta indexes by changing the configuration version; it seemed to work, but did not solve my out-of-sync problem.

There are already a bunch of tickets relating to this, yet the fix is not as easy as you think. If your software deliberately sets timestamps back after changing content, there is no way for Syncthing to know that the file has changed, so it’s not always our fault.

Asking to rehash the file can potentially DoS the provider if the file you are trying to download is constantly changing (log files…).

I found out that you can repair the “95%” issue (bug? workaround?) by enabling the disableTempIndexes option per folder, but it does not work every time :/

This shouldn’t have any effect on that; I guess it’s more likely luck.

I understand that (although in my case I guess something else went wrong), so I thought adding rehash requests, or rehashing while sending, might be interesting alternatives.

Apparently file hashes are already checked when receiving files, so I guess in terms of CPU load it would not be a big issue to do it while sending as well? It could do the rehash only if the existing hash is older than a certain threshold.
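
To illustrate, here is a minimal Go sketch of the kind of send-side check I mean (not Syncthing’s actual code; the offset, size and stored hash would come from the index, and only the use of SHA-256 block hashes matches what Syncthing really does):

```go
package sketch

import (
	"bytes"
	"crypto/sha256"
	"io"
	"os"
)

// verifyBlock re-reads one block of a file and compares its SHA-256
// hash against the hash we have on record for it. It returns false
// when the data on disk no longer matches what was advertised to peers.
func verifyBlock(path string, offset, size int64, storedHash []byte) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	buf := make([]byte, size)
	n, err := f.ReadAt(buf, offset)
	if err != nil && err != io.EOF {
		return false, err
	}
	sum := sha256.Sum256(buf[:n])
	return bytes.Equal(sum[:], storedHash), nil
}
```

If the check fails, the sender would know its own index entry is stale, instead of the receiver failing at 95% forever.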

Another alternative might be a force-rehash button, since the rescan button seems to skip hashing based on file time stamps.

Perhaps a DoS could be circumvented by throttling the rehash requests. For instance, only rehash after three failed attempts to send an unchanged file, and if the issue persists after a rehash, disallow another rehash within a certain time period. In general I think sending a file will take longer than rehashing it, so it might not be viable for a DoS attack anyway.
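
A rough sketch of the throttle I have in mind (the type, names and thresholds are all made up by me; nothing like this exists in Syncthing today):

```go
package sketch

import (
	"sync"
	"time"
)

// rehashThrottle implements the rule proposed above: a file becomes
// eligible for a rehash only after three failed send attempts, and at
// most once per cooldown window.
type rehashThrottle struct {
	mu       sync.Mutex
	failures map[string]int       // failed send attempts per file
	lastRun  map[string]time.Time // when each file was last rehashed
	cooldown time.Duration
}

func newRehashThrottle(cooldown time.Duration) *rehashThrottle {
	return &rehashThrottle{
		failures: make(map[string]int),
		lastRun:  make(map[string]time.Time),
		cooldown: cooldown,
	}
}

// noteFailure records one failed send attempt and reports whether a
// rehash of the file is allowed right now.
func (t *rehashThrottle) noteFailure(file string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()

	t.failures[file]++
	if t.failures[file] < 3 {
		return false
	}
	if time.Since(t.lastRun[file]) < t.cooldown {
		return false
	}
	t.failures[file] = 0
	t.lastRun[file] = time.Now()
	return true
}
```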

Alternatively, it could show a button to rehash the files in question, similar to the override-changes button.

Lastly, (at least in my case) I trust the devices that I share my files with. If one were to DoS other nodes maliciously, then we’d remove that node.

It could be a big burden on the CPU: if you add a file and you have thousands of peers, they could all ask you to rehash the same data over and over again, at the same time limiting the throughput. Having to cache what you have checked and what you haven’t is just as racy, and takes up RAM as well.

The alternative already exists, and it’s touching the files. You can touch them, rescan and set the dates back to where they were.
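
In code terms the whole dance is roughly this (just a sketch; the rescan in between would be the folder’s Rescan button, or I believe a POST to the REST API’s /rest/db/scan endpoint):

```go
package sketch

import (
	"os"
	"time"
)

// touchAndRestore bumps a file's modification time so the next rescan
// rehashes it, triggers a rescan, then puts the original timestamp
// back and rescans once more so the old date is recorded again.
// Note that the access time is not preserved in this sketch.
func touchAndRestore(path string, rescan func() error) error {
	info, err := os.Stat(path)
	if err != nil {
		return err
	}
	original := info.ModTime()

	// Bump mtime to "now" so the scanner sees the file as changed.
	now := time.Now()
	if err := os.Chtimes(path, now, now); err != nil {
		return err
	}
	if err := rescan(); err != nil {
		return err
	}

	// Restore the original timestamp and rescan again.
	if err := os.Chtimes(path, original, original); err != nil {
		return err
	}
	return rescan()
}
```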

In the current situation those thousands of peers (how many people have thousands of peers?) keep retrying to sync the files over and over, generating a lot of traffic. But I get the point that you are not interested in my opinion or in investigating solutions, so I’ll leave this discussion at that.

https://data.syncthing.net says there is a cluster with 145 devices.

I am fine with discussing the solution, but nothing you have proposed so far is viable.

Yes, we could add a force-rescan-everything option, but that’s the equivalent of removing and re-adding the folder, which I think is a sufficient workaround.

As for forcing a rescan of specific files, that becomes a UI problem, as we don’t have a directory-browser UI.

If you wish to solve the problem, feel free to make a pull request, and we can discuss the implications and pitfalls of the approaches if any.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.