Remote stays on "Up to date" but is really "Out of sync" when looking at its instance directly

Hello everyone!

I have a problem with my setup and am not quite sure how to tackle it. I’m using the official Syncthing Docker image on my Synology NAS (until I switch, probably to OMV, soon) and am syncing it with a Windows 10 laptop running SyncTrayzor.

Encrypted folders on the Synology have the limitation that filenames can only be 143 bytes long. Since Syncthing adds the ‘.syncthing.’ prefix and ‘.tmp’ suffix to temporary files, the effective limit drops to 128 bytes.
When I then created a file with a 129-byte name on the Windows laptop, I expected SyncTrayzor to show an error for the remote NAS, but it stayed at “Up to date”.
Looking at the Syncthing web UI of the NAS, it showed what I expected: the folder is “Out of sync” with one item failing because the filename is too long.
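For reference, here is a quick sketch of the byte arithmetic (assuming UTF-8 filenames; the 143-byte limit and the temporary-file prefix/suffix are as described above):

```python
# Check whether a filename still fits the Synology encrypted-folder
# limit once Syncthing's temporary ".syncthing." prefix and ".tmp"
# suffix are added. The 143-byte limit is Synology's, per the post.
SYNOLOGY_LIMIT = 143
TEMP_OVERHEAD = len(".syncthing.") + len(".tmp")  # 15 bytes
EFFECTIVE_LIMIT = SYNOLOGY_LIMIT - TEMP_OVERHEAD  # 128 bytes

def fits(name: str) -> bool:
    # The limit counts UTF-8 bytes, not characters, so non-ASCII
    # names hit it sooner than their character count suggests.
    return len(name.encode("utf-8")) <= EFFECTIVE_LIMIT

print(fits("a" * 128))  # True: exactly at the effective limit
print(fits("a" * 129))  # False: one byte too long
```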

Can someone help me debug why this out-of-sync state is not reflected when looking at the NAS under the Remote Devices on the laptop? What debugging facility could be the right one to look into?

I wanted more robustness to the file syncing, which is why I’m substituting Synology Drive for Syncthing.
I had hoped that, as soon as the local instance detects a change, all remote instances would be shown as “Out of sync” until they explicitly responded that they’re in sync again. That is, Syncthing shouldn’t assume that the other devices are in sync until they say otherwise; instead, it should assume that all other devices are out of sync whenever the local state has changed since the last sync, until they confirm that they are in sync with the new state.
Is there a configuration error on my side or can this behaviour be somehow realised?
The filename-length problem is not a big issue since I know about it, but if something else were to cause items not to be synced, I would always have to check on the web UI of the NAS whether it is really up to date and couldn’t rely on what the local instance says.

If important, some info about my configuration so far:

  • SyncTrayzor 1.1.28 with Syncthing 1.16.1 64-bit on the laptop
  • Syncthing 1.16.1 64-bit on the NAS; Dockerfile 698fbc765438665def5a2939aab5e852b38512fc413a8344a1d49a4a1aa6d48b (one behind latest, I think); started using Docker Compose; network mode: host
  • NAT traversal, global discovery, and relaying all disabled
  • static address in both directions

Unfortunately I have to disappoint you: my experience has shown that it’s more likely the other way around. Drive is the successor to Cloud Station and is not so much better in terms of process reliability that you can let it run unsupervised. In particular, when the amount of data to be managed is large, Drive is not reliable, quite apart from the disadvantageous server-client concept.

For this reason alone, Syncthing is in a different league. I have several Synologys, all of which also work with Syncthing. But I don’t have any encryption, so unfortunately I can’t say anything about it.

Since SyncTrayzor is apparently no longer being maintained, I wouldn’t use it anymore, but that’s my personal view. I can suggest the MSI version of Syncthing for Windows, which installs itself and also sets up a service.

Thanks for your reply!
My English skills failed me: I meant to say I’m substituting Syncthing for Synology Drive :sweat_smile:
I’m very disappointed by the reliability of Drive; that’s why I’m changing to Syncthing. I’ll probably miss some of Drive’s user management, which Syncthing, as a P2P application, doesn’t have. But maybe I’ll run a parallel ownCloud or Seafile instance for that.

I didn’t know that SyncTrayzor was no longer maintained - when searching for a Windows client, I considered SyncTrayzor, Syncthing-GTK, and syncthingtray.
SyncTrayzor had recent commits (<1 month) at the time (now it’s at 3 months) and by far the most watchers; it was also listed as a Windows client at Community Contributions — Syncthing v1 documentation, so I thought I’d give it the first try.
However, it seemed to me that SyncTrayzor simply runs Syncthing ‘as-is’ under the hood, only making it possible to run it as a service and adding some eye candy and system notifications.

If the behaviour as I described it (peers being shown as out of sync until they confirm they’re back in sync) is the intended behaviour, I’ll give some other Windows clients a try and see whether that resolves the problem.

SyncTrayzor and Syncthing Tray are both actively maintained. You can find all the activity and recent releases in their respective GitHub repositories.

Syncthing-GTK is the one that is unmaintained though.


In the described case, the expected outcome is that the folder is up to date on Windows, but the Synology remote device is shown as syncing in the Windows web UI. If that’s not the case, something is indeed amiss there.

I don’t expect any wrapper programs to handle that themselves. They all provide an additional layer of UI/UX around the Syncthing client, which does all the syncing and provides the sync state.

Ah good, that’s also the impression I had!

Yeah, I got confused by the repo at first since there is still activity, but the last release was in 2019 and, if I remember correctly, there was no plan to switch to Python 3 when I took a look at it.

That’s already good to hear and also what I expected from how the synchronization process is described in the docs!

That’s also the impression that I had of how it works.
Would you suggest trying a different wrapper to see whether the issue persists nonetheless? Or do you have any advice on which debugging facility could give me clues about why the correct state isn’t reflected?

I don’t think the wrappers would change the result; the basic engine is the same. I personally recommend that you either download the ZIP from the homepage under Windows or use the MSI. But that’s a matter of taste.

As mentioned - wrapper definitely doesn’t matter for this. I like Synctrayzor on my windowses :slight_smile:

Screenshots to start off are very helpful, as there might be clues to spot and it makes it easier to grasp the general setup. Then, to debug, if you can get into the same situation again, run syncthing cli debug *folder-id* *file* (replacing the starred values) on both sides and post the output. If you encounter an error, first run syncthing cli config gui debugging set true. Also use --home as appropriate if your config isn’t at the default location.

I was just happily screenshotting and running the debug command (did you miss the ‘file’ subcommand after ‘cli debug’?), but after running the debug command on the laptop, I think the problem dawned on me (making the screenshots and logs obsolete).

I had been using dummy files to test how a filename that is too long for one peer is handled. Dummy files with no content, which, as I recalled, is probably important to the comparison algorithm, since such a file has no blocks.
And, look at that: after putting a single byte into my dummy files, the laptop’s UI shows “Syncing (99%, 1 B)” until I give the file a shorter name.
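To make the test concrete, this is roughly what my dummy-file experiment boils down to (a sketch; the scratch directory stands in for the actual synced folder, and “no blocks to exchange” is my interpretation of why the empty file goes unnoticed):

```python
import os
import tempfile

# Recreate the two test cases: a zero-byte file whose 129-byte name
# exceeds the NAS's effective 128-byte limit, and the same file with
# one byte of content. Only the latter made the laptop's UI leave
# "Up to date" in my setup.
sync_dir = tempfile.mkdtemp()  # stand-in for the real synced folder
long_name = "a" * 129  # 129 ASCII characters == 129 bytes

path = os.path.join(sync_dir, long_name)
open(path, "wb").close()  # zero bytes -> presumably no data blocks
print(os.path.getsize(path))  # 0

with open(path, "wb") as f:  # same file, now with content
    f.write(b"x")  # a single byte -> at least one block to transfer
print(os.path.getsize(path))  # 1
```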

Side question: Is it possible to publish the remote’s error (file + ‘filename too long’) to the other client instead of simply showing ‘Syncing’ with no progress made? I didn’t find a topic on that.

However, I think this behaviour could be problematic: I remember some programs that use empty files with certain names as tags. So, in the fairly unlikely border case that one of these files isn’t synced because of its filename, the other peer won’t realise that the new empty file (or a filename change of an empty file) wasn’t successfully propagated. Only by looking at every peer’s own UI could one be sure that these empty files are in sync.

Indeed, sorry.

Are you saying it shows up to date if there are empty files that are not synced on the remote? That wouldn’t be good; we should show something like 99%, 0 bytes then.

That’s not possible. This indication is entirely based on the knowledge of the local client; there’s no info about failures exchanged.

Pity, but not a big problem!

Also, Syncthing Tray already supports switching between instances at the press of a button; maybe I’ll ask the maintainer whether he’d consider implementing some logic to compute a remote’s state with additional data from the remote itself. But that feature alone already makes it quite comfortable to just switch instances for deeper insight into what’s going on when a sync stalls.
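If I go down that road, polling the remote’s own REST API for failed items might look roughly like this (a sketch assuming the documented /rest/folder/errors endpoint; host, API key, and folder ID are placeholders):

```python
import json
import urllib.request

def failed_items(payload: dict) -> list[tuple[str, str]]:
    # Extract (path, error) pairs from a /rest/folder/errors response,
    # tolerating a missing or null "errors" field.
    return [(e["path"], e["error"]) for e in payload.get("errors") or []]

def fetch_folder_errors(host: str, api_key: str, folder_id: str) -> dict:
    # Query the remote instance directly instead of relying on what
    # the local client shows. The API key is found in the remote's GUI
    # settings; authentication uses the X-API-Key header.
    req = urllib.request.Request(
        f"http://{host}/rest/folder/errors?folder={folder_id}",
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (placeholders for my NAS):
# for path, err in failed_items(fetch_folder_errors("nas:8384", "KEY", "folder-id")):
#     print(path, "->", err)
```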

Yeah, that’s what is happening, at least in my setup. I have some other Unix machines and VMs; if helpful, I can confirm whether it’s reproducible there as well.
With the two machines right now I can reliably reproduce the behaviour: with any kind of zero-byte file that isn’t synced due to the filename-length limitation in my setup, the peer that has said file will show “Up to date” for the remote, even though the file wasn’t synced. So my laptop, which has the problematic file, shows “Up to date” for itself and the remote DiskStation, while the DiskStation shows “Out of sync” and the filename-length error for itself and, naturally, “Up to date” for the remote laptop.

As soon as I add a byte to the file, the “Out of sync” of the diskstation is correctly reflected on the laptop, then showing “Syncing (99%, 1 B)”.

I’ll create a GitHub issue so this can be tracked, if I can reproduce it with my other machines.

No need to repro first; I can confirm that from looking at the code. A GitHub issue would be appreciated.

Issue is created.

Thanks for your and everyone else’s help in tracking down what’s going on!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.