New Linux desktop wants to sync existing files

Hello, I tried to find the same issue in the forums but came up short.

My original setup is one Linux server, with the rest being Windows desktops. Recently I switched my desktop from Windows to Linux; that desktop has a secondary NTFS drive called “Storage”, and all of the data on it is synced using Syncthing.

I left the “Storage” drive as NTFS with all of the data on it, set up Syncthing on my Linux desktop, and tried to sync with the Linux server. The problem is that it wants to redownload almost all of the files. When this was a Windows machine it was in sync, and the data is exactly the same; when I boot into Linux, it tries to redownload everything from the Linux server. I have verified that everything is identical.

I am on v1.7.1 on both machines.

Here is an example of a file that wants to re-sync:

On the Linux desktop (Docker): (screenshot)

On the Linux server (Docker on Unraid): (screenshot)

They seem to be identical, though.

Is there anything else that I can check? I’d like to figure this out; Syncthing has been working great for me for the past couple of years.

I can’t figure out why it wants to re-sync so many files. The entire folder is 75 GiB, and I think it wants to re-sync 73 GiB of it.

You don’t specifically mention having set ignore permissions. You’ll have to do that on at least the NTFS side, since NTFS doesn’t store the expected permission bits.
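For reference, the switch is the “Ignore Permissions” checkbox under the folder’s advanced settings in the GUI, which corresponds to the `ignorePerms` attribute on the folder element in `config.xml`. A minimal excerpt as a sketch (the folder id, label and path here are made up for illustration, and the attribute layout is from memory of the v1.x config format):

```xml
<folder id="storage" label="Storage" path="/data/Storage"
        type="sendreceive" rescanIntervalS="3600" ignorePerms="true">
    <!-- ... remaining folder settings unchanged ... -->
</folder>
```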

Thank you, and to confirm: I use ignore permissions on all my shares.

Is it actually downloading anything (visible in the out-of-sync list or the transfer rates)? I’d expect this to be the usual initial “sync” (which should now be sync-preparing only) when adding a new device that already has the data.

Based on a response from Arneko, what I did was reuse the same initial config and database files from my Windows machine. That helped, but it didn’t stop it from saying it was out of sync by a large number of files.

I decided to let it go for 20 minutes, and when I came back it had normalized for that one folder. It couldn’t have downloaded that much in that time (my internet is slow). I believe it is downloading the files, but I don’t think it is downloading all of each file. Here is an example of what it’s doing.

Right now, I am going through the folders one by one and enabling them. Most of them are in sync, but for the ones with large files I had to intervene manually. For one 60 GB file, I renamed it on both sides, and after that it didn’t want to re-sync it.

You are probably seeing a known side effect of introducing variable block sizes: files that existed before will have been hashed with a small block size. When you add a new device where the data already exists locally, it scans the files with a larger block size. It therefore gets different block hashes and doesn’t consider the files equal to what the remote has (even though the file contents are identical). What then happens is what you see in the out-of-sync view: it “syncs” the files, but as the local file already has all the data, it just copies it locally. That is obviously still annoying, as it rewrites all the files. To work around this, you’d have to remove and re-add (or touch) all the large files on one of the “old” devices.
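To make the mismatch concrete, here is a small self-contained Go sketch of the block-size selection described above. The constant values and the general shape match what the documentation describes (128 KiB minimum, 16 MiB maximum, aiming for roughly 2000 blocks per file), but the names and exact code are my approximation, not a copy of Syncthing’s source:

```go
package main

import "fmt"

// Approximate constants for the variable-block-size scheme
// (per the documentation; names are my own, not from the source).
const (
	MinBlockSize         = 128 << 10 // 128 KiB: smallest size, and the old fixed size
	MaxBlockSize         = 16 << 20  // 16 MiB: upper cap
	DesiredPerFileBlocks = 2000      // aim for at most ~2000 blocks per file
)

// BlockSize picks the block size for a file of the given length:
// start at 128 KiB and double until the file fits in roughly 2000
// blocks or the 16 MiB cap is reached.
func BlockSize(fileSize int64) int {
	blockSize := MinBlockSize
	for blockSize < MaxBlockSize && fileSize >= DesiredPerFileBlocks*int64(blockSize) {
		blockSize *= 2
	}
	return blockSize
}

func main() {
	// The 60 GB file from above: hashed with fixed 128 KiB blocks on the
	// old instance, but a fresh scan on the new device picks the 16 MiB
	// cap, so no block hashes match even though the bytes are identical.
	size := int64(60) << 30
	fmt.Printf("old block size: %d KiB, new block size: %d KiB\n",
		MinBlockSize>>10, BlockSize(size)>>10)
}
```

So the “old” device advertises 128 KiB block hashes while the new device computes 16 MiB ones; nothing matches, and everything shows as out of sync even though the data is equal. That’s why touching (or removing and re-adding) the large files on one of the old devices fixes it: they get rehashed with the new block size.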

Yes, the Syncthing Docker instance on my Linux server (Unraid) is an old one, from before the v1 stable release.

I wonder if it would have been better to wipe that “old” instance and rescan/rebuild all folders.

Either way, it took some time, but I am all in sync now. Thank you all for your input and help.

