Device stays out of Sync - v1.12.1 - continued


I think I’m encountering “Device stays out of sync until restart” after updating from v1.12.0 to v1.12.1 too.

It first appeared after updating two nodes (A and B) that share a folder from v1.11.1 to v1.12.0. Immediately after the upgrade everything was fine and fully in sync. After restarting Syncthing v1.12.0 on node B, the status went from “Up to Date” to “Out of Sync (89%, 263 GiB)”, although a file-level comparison showed the data was still identical to node A. I then waited for v1.12.1 in the hope that issue #7122 would fix the problem and updated to it this morning. It is still out of sync with the same percentage. I guess the bug is not fully fixed!?

What diagnostics (database snapshot, traces, logs, etc.) do you need to look into this? The normal Syncthing log doesn’t show any errors.

I’ve already tried restarting Syncthing on both node A and node B, but the “Out of Sync” status is not resolved.

Thanks for your help.

Kind regards, Catfriend1


node A: (screenshot)

node B: (screenshot)

The out-of-sync items dialog (3,160 items) shows files that have not been changed for a long time (e.g. “archived installers”).

Both nodes show a log like:

2021-01-06 09:23:37 My ID: xxx
2021-01-06 09:23:38 Single thread SHA256 performance is 375 MB/s using crypto/sha256 (369 MB/s using minio/sha256-simd).
2021-01-06 09:23:38 Hashing performance is 316.61 MB/s
2021-01-06 09:23:38 Overall send rate is unlimited, receive rate is unlimited
2021-01-06 09:23:38 Ready to synchronize "xxx" (xxx) (receiveonly)
2021-01-06 09:23:38 GUI and API listening on [::]:PORT
2021-01-06 09:23:38 Access the GUI via the following URL:
2021-01-06 09:23:38 Ready to synchronize "Dokumentation" (xxx) (receiveonly)
2021-01-06 09:23:38 My name is "node A"
2021-01-06 09:23:38 Device xxx
2021-01-06 09:23:38 ...
2021-01-06 09:23:38 Device xxx
2021-01-06 09:23:38 Device xxx
2021-01-06 09:23:38 Device xxx
2021-01-06 09:23:38 Syncthing should not run as a privileged or system user. Please consider using a normal user account.
2021-01-06 09:23:38 Ready to synchronize "Installationsquellen" (xxx) (receiveonly)
2021-01-06 09:23:38 TCP listener (…) starting
2021-01-06 09:23:38 Ready to synchronize "Archiv" (xxx) (sendonly)
2021-01-06 09:23:38 Ready to synchronize "Images" (xxx) (sendonly)
2021-01-06 09:23:38 Ready to synchronize "Temp" (xxx) (sendonly)

… and completed initial scan lines …

@Alex @Andy Are your problems resolved by the v1.12.1 update? I’d like to know, to narrow down whether I’ve encountered “this” bug or “another” one.

Then what you are seeing is likely not #7122 but what’s fixed in … (there were a few fixes based on @Alex’s reports in “Device stays out of sync until restart”).

That does not get resolved by a restart. Meaning: the fix in v1.12.1 prevents the problem from occurring, but if it has already occurred, it won’t be resolved. You’ll have to run with -reset-deltas on node A for that. In hindsight, it would have been good to do that automatically for everyone on upgrade to v1.12.1.
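For reference, a sketch of how that is typically done on the command line. The service name and the way Syncthing is managed depend on your setup, so treat the stop/start commands as placeholders:

```shell
# Stop the running instance first (example assumes a systemd user service;
# adjust to however Syncthing is started on your machine).
systemctl --user stop syncthing.service

# Start Syncthing once with -reset-deltas. This resets the delta index
# exchange state, so devices send each other full indexes on the next
# connection instead of only incremental changes.
syncthing -reset-deltas

# The flag does not need to persist; afterwards, restart normally.
```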


I did use -reset-deltas once with the fixed version (some nightly that had the relevant fixes), just to be sure, which matches what @imsodin wrote. No issues since then.

Maybe do that automatically in the next version, just to be sure, because there may be cases of this on devices that people rarely look at?


I’m assuming that nothing has changed between v1.12.1-rc.1 and the final v1.12.1. Since the changeover a few weeks ago, everything has been running smoothly, including the two Android apps, Syncthing and Syncthing-Fork.

What gave me very big problems yesterday was switching on another Windows computer that had last been active in September 2020 and still had v1.10.0-rc.3 installed. Only one folder is connected there, but that folder is shared with two other Windows computers and three Synology NAS devices. The problem was not so much the version as reconciling the accumulated changes with the current state, so I had to intervene manually.

Since not everything was properly synchronized after that procedure, I ran -reset-deltas on the other Windows computer. Everything on that computer was then up to date, but the problem had shifted to other devices. Since -reset-deltas unfortunately only works for me on the Windows computers, not on the Synology devices, I could not run it on all devices at the same time.

In the end, I had to rebuild a folder completely and reconnect it to all devices; then it worked again.

What I mean by that is that a working setup can be thrown off balance by such influences, even if nothing is actually changed. That was the case for me too.

In addition, functions such as -reset-deltas should be triggerable from the GUI, so that they work reliably and on all platforms. That would actually be a feature request.
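On platforms like Synology, where editing the startup command is awkward, the REST API can at least trigger a database reset remotely. A sketch with placeholder address, API key, and folder ID; note that as far as I know this performs a full index reset for the folder (the -reset-database equivalent, which is more drastic than -reset-deltas, and causes a restart and rescan):

```shell
# Placeholders: take the API key from GUI > Actions > Settings,
# and the folder ID from the folder's settings dialog.
API_KEY="your-api-key"

# Reset the database for a single folder via the REST API.
curl -X POST -H "X-API-Key: $API_KEY" \
  "http://127.0.0.1:8384/rest/system/reset?folder=abcde-fghij"
```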


Thanks! That was blazing fast support, excellent :-). The problem is now resolved.



I had to run it on node B, but still, thank you!

Be aware that you will have to repeat this process regularly. Where I have a lot of files, directories, and Syncthing folders, I’m now finding that rather than resetting the deltas, I remove the folder, let Syncthing prompt me to re-add the deleted folder, point it back at the synced files, and let it rescan.

It’s messy, but I’m at a point where I can’t spend days with drives grinding away comparing both ends; this way I at least narrow it down to a set of folders.

I have a theory that the cause is case insensitivity in the database. It manifests most where a file or directory on the sending end has been renamed and/or had its case changed; then the database goes way out of sync.


Same problem here: every time I add a folder, some items stay out of sync. After each new folder I have to check and run -reset-deltas, which is not very user friendly. More than five years already and still no solution? Is there an alternative to Syncthing? I don’t know; I will look.

If you’ve had the problem for five years, why are you bringing it up only now? It is not entirely fair to talk about Syncthing in such a way; others don’t seem to have the problem.

It would be good to learn something about your system, the environment, the other devices, etc.; then help can be provided if necessary. Or what do you mean?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.