Hello, I tried to find the same issue in the forums but came up short.
My original setup is one Linux server and the rest Windows desktops. Recently I switched my desktop from Windows to Linux, and that desktop has a secondary NTFS drive called “Storage”. All of the data on this “Storage” drive is synced using Syncthing.
I left my “Storage” drive as NTFS with all of the data on it. I set up Syncthing on my Linux desktop and tried to sync with the Linux server. The problem is that it wants to redownload almost all of the files. Everything was in sync when the machine ran Windows, with exactly the same data. When I boot into Linux, it tries to redownload everything from the Linux server. I have verified that the data is identical on both sides.
You don’t specifically mention having set “ignore permissions”. You’ll have to do that on at least the NTFS side, since NTFS doesn’t store the permission bits Syncthing expects.
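For reference, this is a per-folder option (“Ignore Permissions” under the folder’s advanced settings in the GUI). In `config.xml` it is the `ignorePerms` attribute on the folder element; the `id`, `label`, and `path` below are placeholders, not your actual values:

```xml
<!-- Placeholder id/label/path; match these to your existing folder entry. -->
<folder id="storage" label="Storage" path="/mnt/storage" type="sendreceive"
        ignorePerms="true">
    <!-- ...rest of the folder configuration unchanged... -->
</folder>
```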
Is it actually downloading anything (visible in the out-of-sync list or in the transfer rates)? I’d expect this to be the usual initial “syncing” (which nowadays should be “sync-preparing” only) when adding a new device which already has the data.
Based on Arneko’s response, I reused the same initial config and database files from my Windows machine. That helped, but it didn’t stop Syncthing from reporting that it was out of sync by a large number of files.
I decided to let it go for 20 minutes, and when I came back that one folder had normalized. It couldn’t have downloaded that much in that time (my internet is slow). I believe it is downloading the files, but I don’t think it is downloading all of them. Here is an example of what it’s doing.
As of right now, I am going through the folders one by one and enabling them. Most of them are in sync, but for the ones with large files I had to intervene manually. For one 60 GB file, I renamed it on both sides and it still didn’t want to resync it.
You are probably seeing a known side effect of the introduction of variable block sizes: files that existed before the change were hashed with a small block size. Now if you add a new device where the data already exists locally, that device scans the files with a larger block size. It therefore gets different blocks and doesn’t consider the files equal to what the remote has, even though the file contents are identical. What then happens is what you see in the out-of-sync view: it “syncs” the files, but as the local file already has all the data, it just copies them locally. Which is obviously still annoying, as it rewrites all the files. To work around this, you’d have to remove and re-add (or touch) all the large files on one of the “old” devices.
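Purely as an illustration, the “touch” variant of that workaround might look like this on one of the old devices. The `FOLDER` path and the `256M` size cutoff are placeholders, not anything Syncthing prescribes; adjust both to your setup:

```shell
#!/bin/sh
# Sketch only: FOLDER and the 256M cutoff are placeholders.
FOLDER="${FOLDER:-/path/to/Storage}"

if [ -d "$FOLDER" ]; then
    # Bump the mtime of every large file so Syncthing rescans (and
    # rehashes) it with the current block-size logic on the next scan.
    find "$FOLDER" -type f -size +256M -exec touch {} +
else
    echo "set FOLDER to the Syncthing folder path on one of the old devices" >&2
fi
```

Touching only updates the modification time; on the next rescan Syncthing rehashes those files and announces the new block list to the other devices, without the data itself being retransferred.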