Surely it should. I can’t offhand think of why this would happen, other than perhaps with receive-only folders (where the file might have been deleted in reality, but we’re prevented by policy from letting anyone know that fact).
That looks exactly as expected, including the availability null bit. I am at a loss as to what’s happening. However, you did just post in the case sensitivity topic: are the two devices Windows and Mac by chance, and if so, are there any case differences? I can’t remember this kind of symptom with case problems, but anything is possible there…
The cluster is 2 x Synology NAS units and a whole bunch of Macs (which is where my case sensitivity issue comes in: case-only renames get done on the Macs, which pull in duplicate copies on the NASes).
Here, the originating file is on one of the Macs, and I’m seeing the pull failure on the NAS units; however, I’m also seeing the same pull failure on at least one other Mac in the cluster - presumably more than one, but I haven’t examined the others yet.
There are 39 files, spread across 3 subfolders of this shared folder, which are failing to sync. I’ve examined the file paths for one of these files across four different machines, and they all match as expected. The folders containing the failed files also hold a large number of other files, all of which have synced successfully.
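In case it helps anyone else checking the same thing: a quick way to list names that differ only by case is to fold-sort the full file list and print the case-insensitive duplicates. The path below is just an example; substitute your shared folder’s root.

```shell
# Print one representative of each group of paths that differ only by case.
# "/volume1/Shared" is an example path -- substitute the shared folder root.
find /volume1/Shared -print | sort -f | uniq -di
```

An empty output means no case-only collisions under that root.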
But of the two Macs and two NASes I’ve looked at, three of the machines have the old version of the file present.
Apologies for bumping, but if there aren’t any thoughts about where it would be useful to look and diagnose further, I’ll touch the out-of-sync files to prompt a rescan of them - hopefully that’ll recognise their validity and allow the sync to complete.
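If I go that route, the plan would be roughly the following (the file path, folder ID, GUI address and API key below are all placeholders for my setup):

```shell
# Sketch only: path, folder ID and API key are placeholders.
# 1) Bump the mtime of a stuck file so the next scan re-hashes it:
touch "/volume1/Shared/subfolder/stuck-file.ext"
# 2) Ask Syncthing to rescan the folder via its REST API:
APIKEY="your-api-key-here"   # from the GUI: Actions > Settings, or config.xml
curl -X POST -H "X-API-Key: $APIKEY" \
  "http://localhost:8384/rest/db/scan?folder=folder-id"
```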
Is that the best course of action, or is there anything else I can do to help diagnose the underlying cause?
Hi - if you are hitting limits on the number of open files, it can cause files to be “unopenable”, which leads to all kinds of odd stuff happening. Over the last update or two of Syncthing, I have noticed “too many open files” errors in the logs for both the Linux and Mac computers in my cluster, which causes all manner of weird behaviour. Are you seeing any “too many open files” errors in either your Syncthing logs or system logs?
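Something like the following can help confirm it. The log locations are only examples and vary a lot between platforms (Synology, macOS launchd, systemd all log differently), so adjust to wherever your Syncthing output ends up.

```shell
# Current per-process open-file soft limit for this shell:
ulimit -Sn
# Look for EMFILE-style errors; these log paths are examples only:
grep -i "too many open files" /var/log/syslog /var/log/messages 2>/dev/null
```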
OK, so with the REST excerpt you posted above, it looks like Syncthing picks up on the change and sends that info to everyone, but they fail to request the data. You have also manually verified that the case of the names matches. Please shout if any of this is wrong. The next step would be to find out what exactly fails, i.e. enable model debug logging on the machine that has the updated file, then pause/unpause the folder in question on another device which does not have the updated file. There should be line(s) like model@... REQ(in): *err* - *deviceid*: *folderid* / *filename* o=... s=....
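If the GUI isn’t handy, the model debug facility can also be toggled over the REST API. The address and API key below are assumptions for a default local setup, and how you tail the log depends on how Syncthing is run on that machine.

```shell
# Enable the "model" debug facility on the device that has the updated file.
APIKEY="your-api-key-here"   # from the GUI: Actions > Settings, or config.xml
curl -X POST -H "X-API-Key: $APIKEY" \
  "http://localhost:8384/rest/system/debug?enable=model"
# Then pause/unpause the folder on another device and watch for the
# failing request lines (log path is a placeholder):
grep "REQ(in)" /path/to/syncthing.log
```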
And is said setting true or false (if the folder is new enough, it’s true by default)? I am not saying that’s the reason; it’s just one parameter that’s important when trying to find the reason, given the error message.
Variable Size Blocks is off on - to my knowledge - all devices connected to this folder. All the devices were set up months ago, before VSB was a GUI option, so I’d be very surprised if it were enabled anywhere.
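One way to double-check on each device, rather than relying on memory, is to dump the folder configuration over the REST API and look for block-related settings. The endpoint below exists on recent Syncthing versions, but the exact field name for this setting may differ between versions, so treat the grep pattern as a starting point.

```shell
# Dump all folder configs and show any block-size-related fields.
APIKEY="your-api-key-here"   # placeholder -- use the device's real API key
curl -s -H "X-API-Key: $APIKEY" "http://localhost:8384/rest/config/folders" \
  | grep -i "block"
```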