Why aren't "Out of sync" items syncing?

I’m confused that Syncthing knows it’s out of sync, the other machine is online, yet both the folder and remote machine report “Up to Date”. This is happening for two folders I have; in one, I’ve added a couple hundred files (pictures), and in the other I’ve deleted six files. Please reference the attached screenshot.

I assume this is some wrinkle in how Syncthing decides when to take action, and I’m just not understanding. Could somebody explain it to me?




Looks like a bug to me. I also had something like this some time ago, so maybe that bug still exists (or another one with the same result). Is it still there after restarting Syncthing?

Do you have an ignore file on one side?

Thanks for the replies!

I have not created any ignore files. Just create directory, point Syncthing at directory, put files in directory.

I know I’ve explicitly restarted Syncthing on one end, when I noticed that this was happening. I haven’t explicitly restarted it on the other end, but I’m 95% sure that it’s been restarted when I closed that laptop and brought it back up again. I haven’t yet removed the part of the startup that opens the web browser, and I see new tabs pointing at Syncthing pretty frequently, indicating that it has been restarted.

If I can supply anything more that’s useful, like different screenshots, config files, or log files, please let me know! Thank you!


Certainly looks like you’re missing a bunch of files and they ought to be transferred. It’s a bug for sure, but I’m not sure what it is…

I think I have an idea of why this occurs. I will investigate and get back to you and Jakob. But I have found several ways it can happen, and the issues relate to protocol messaging, version detection, filesystem model serialization, file caching, and prefetching, to name a few.

Most of these problems are not specific to Syncthing – they are generalized computer science problems.

The main question is this: even if we perfect everything, there are going to be conflicts across the network. The key question revolves around “Conflict Detection and Resolution” – how do we detect and handle problems?

(1) Local Master, Remote Copier: the remote runs into a sudden problem, like its NFS save directory going temporarily offline. An error occurs (a file cannot be copied or deleted, or doesn’t exist)… Maybe Syncthing copies the file again to the now-empty mount point. Syncthing prints the error but keeps running. When the disconnected mount point comes back online, we don’t know what was copied fully and what was not.

The Local Master can now go out of sync for a number of reasons – mainly, it thinks a file was not copied when it actually was, or the disk itself goes offline.

For example: consider some file blocks written to the disk (in the eyes of a Syncthing slave), but file writes can take up to a minute to actually reach the platter. This is a problem when dealing with the disk cache. So say the OS fails to flush the kernel cache: the blocks are updated only in the cache, not on the disk.

Syncthing then sends a ‘success’ message saying it updated the blocks it received.

struct IndexMessage {
    string Folder<>;
    FileInfo Files<>;
}

struct FileInfo {
    string Name<>;
    unsigned int Flags;
    hyper Modified;
    unsigned hyper Version;
    unsigned hyper LocalVer;
    BlockInfo Blocks<>;
}

Now we are ‘out of sync’, at least temporarily. Syncthing may or may not resolve this example on its own… But it gives you an idea of the problems.

In this case the solution is not to ignore it. I think the solution is to update the protocol to change the way errors are handled. For example, if there’s an error during writing on the remote side (disk space full), then that copy of the program should report it.

The user should be able to click the ‘Out of Sync’ message and see what file needs to be repaired and select how to proceed.

We try fairly carefully to make sure what we write ends up on disk, and that the hashes match. Obviously, things at lower layers may lie and break – but that is more up to the user and OS to fix. If they care enough to use, for example, ZFS and disks that don’t lie about cache flushes, things are indeed fairly safely on disk by the time the OS claims they are. But you’re right, nasty stuff happens all the way through the stack.

The problem above is something much simpler, though. His Syncthing instance is obviously well aware of the files being out of sync; it’s just not downloading them for whatever reason.

2 posts were split to a new topic: Master remains out of sync despite override