Syncing everything even if it's the same

I installed Syncthing on a client web server, added a folder there, and set it up as Send Only. I also installed Syncthing on my development machine, then added the development machine as a device on the server and shared the folder with it.

On my development machine I accepted the new device and then the new folder, setting it as Receive Only. I already had a copy of the web server's files from a year ago, so I pointed the shared folder at that existing copy.

When the scan completed, it showed 164k files on the web server and 166k files on my development machine. I expected it to just go through and transfer the new image files that had been added, because the core code has not changed. The development machine has more files only because many have since been deleted from the web server; the roughly 10k genuinely new files I expected to sync are all in one specific image folder.

However, the list of files to sync includes a lot of the base website code files, which have not changed. It also includes a lot of the old image files that I know are identical. I copied some aside manually before they were updated and compared them: same size, same content, only the date/time differs. This is the case for almost all of the 160k files, and I don't want it to re-copy data that is already there and the same.

My understanding was that Syncthing used block hashes to evaluate files, so it would ignore files that have a different date/time but the same content.
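To illustrate what I mean by block comparison, here is a minimal sketch (not Syncthing's actual code; `copyA.bin` and `copyB.bin` are hypothetical paths, 128 KiB is the block size Syncthing uses for small files, and real Syncthing scales the block size up for large ones). Two files with identical content but different modification times produce identical block hash lists:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const blockSize = 128 * 1024 // 128 KiB, Syncthing's block size for small files

// blockHashes returns the SHA-256 hash of each fixed-size block of a file.
func blockHashes(path string) ([][sha256.Size]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var hashes [][sha256.Size]byte
	buf := make([]byte, blockSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			hashes = append(hashes, sha256.Sum256(buf[:n]))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return nil, err
		}
	}
	return hashes, nil
}

func main() {
	a, errA := blockHashes("copyA.bin") // hypothetical: two copies of the
	b, errB := blockHashes("copyB.bin") // same file with different mtimes
	if errA != nil || errB != nil {
		panic("could not read test files")
	}
	same := len(a) == len(b)
	if same {
		for i := range a {
			if a[i] != b[i] {
				same = false
				break
			}
		}
	}
	fmt.Println("identical content:", same) // true: the mtime never enters the hash
}
```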

Is there something I have missed in the settings? Or is it just listing the files as different, checking them, and then not copying the data?

Thanks

That is enough for Syncthing to consider them different. Most likely Syncthing is not downloading anything; it is just processing the files' metadata one by one and showing them as out of sync because the metadata differs.
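To make the distinction concrete, here is a rough sketch with made-up types (not Syncthing's actual internals): a metadata difference is enough to list a file as out of sync, but a separate block comparison decides which data, if any, actually has to be downloaded.

```go
package main

import "fmt"

// fileInfo is a hypothetical stand-in for the per-file metadata tracked.
type fileInfo struct {
	ModTime     int64    // seconds since epoch
	Permissions uint32
	BlockHashes []string // one hash per block
}

// outOfSync reports whether the remote version must replace the local one.
// Any metadata difference counts, even with identical content.
func outOfSync(local, remote fileInfo) bool {
	return local.ModTime != remote.ModTime ||
		local.Permissions != remote.Permissions ||
		!equalBlocks(local.BlockHashes, remote.BlockHashes)
}

// neededBlocks lists the block indexes whose data must actually be pulled.
func neededBlocks(local, remote fileInfo) []int {
	var need []int
	for i, h := range remote.BlockHashes {
		if i >= len(local.BlockHashes) || local.BlockHashes[i] != h {
			need = append(need, i)
		}
	}
	return need
}

func equalBlocks(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	local := fileInfo{ModTime: 1600000000, Permissions: 0644, BlockHashes: []string{"aa", "bb"}}
	remote := local
	remote.ModTime = 1700000000 // only the timestamp changed

	fmt.Println("out of sync:", outOfSync(local, remote))          // true: listed in the UI
	fmt.Println("blocks to download:", neededBlocks(local, remote)) // []: no data transfer
}
```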

So does that mean it's not transferring the actual file, just changing the date on the development machine?

The only actual difference I saw between the two systems was the last-accessed date; the creation date and modification date were the same.

This was a small test case, but the real use case will involve terabytes of data and some very large files. The reason I tried Syncthing was specifically to avoid the date/time problem and use the hash as the comparison, so that it wouldn't do anything with a file unless it had to. If I go through and 'touch' all the files locally, it will process the whole thing again, which I wanted to avoid with millions of files, because nothing has changed in the file content and the hashes are unchanged. That's a lot of traffic generated for no gain.

I guess this is not a valid use case?

You can click on the out-of-sync items in the UI and check what it's actually doing.

Touching a file creates a new version of it. The new version is broadcast to the other devices, and based on that new version they will just rewrite the file locally (for atomicity) and update the metadata.

You have to understand that this does more than rsync, so it can't just check some metadata, declare the files the same, and do nothing. We have to update the version record we now have, announce it to the other devices, and so on.
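Roughly, the version bookkeeping looks like this (a simplified sketch with hypothetical names, not Syncthing's real data structures): touching a file bumps this device's counter in the file's version vector, and that strictly newer version is what gets announced, even though no block data changed.

```go
package main

import "fmt"

// versionVector maps device ID -> change counter, a simplified stand-in
// for the per-file version vectors Syncthing keeps.
type versionVector map[string]uint64

// touch records a local change: bump our counter so every other device
// sees a strictly newer version and applies the update.
func touch(v versionVector, deviceID string) {
	v[deviceID]++
}

func main() {
	v := versionVector{"dev-A": 3, "dev-B": 7}
	fmt.Println("before touch:", v)

	touch(v, "dev-A") // e.g. `touch file` on device A changes only the mtime
	fmt.Println("after touch: ", v)
	// Device B receives {dev-A: 4, dev-B: 7}, sees it is newer than its own
	// copy, finds all block hashes unchanged, and only rewrites the file
	// locally and updates the metadata -- no block data crosses the wire.
}
```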

I think there should be a prominent notice in the out-of-sync modal, with a link to the relevant docs, saying that these files are not necessarily transferred. Too often users (understandably) equate out-of-sync items with data transfer.
