Why does Syncthing resend the complete file every time?

Hi, I’m a newbie with Syncthing and a former Dropbox user. I have a problem. We have many nodes with PostgreSQL databases, and I want a daily backup of each installation. We do a full backup every day using pg_dump; the file size is approximately 2 GB. In the past we used Dropbox: the first backup took a day, but later backups took just 20-30 minutes. I think Dropbox takes into account only the differences and sends just those chunks.

Now we have installed Syncthing, and we see that every time it sends the whole file (the full 2 GB), so the backup takes hours again.

Maybe it is because pg_dump regenerates the whole file and Syncthing gets confused? If that is the reason, what is the best way to “show” Syncthing the file just after it is generated?

Please let me know, this is very critical for us.

Best regards.

It shouldn’t, unless the content is genuinely different due to the sort order of the dumps, etc. While it’s downloading the file, you can click the Out of Sync link in the web UI to get a breakdown of what it’s actually doing.

Thanks Audrius. I doubt the content is genuinely different, because Dropbox uploads it very quickly. If Dropbox handles the same file so much faster than Syncthing, I think the problem is something else.

Maybe, as I said, the problem is that the new file is generated from zero bytes? I will test by generating a brand-new file and, after it is generated, copying it over the old one with Syncthing stopped, and I’ll share the results.

Best regards.

If you are overwriting the files and a rescan happens in between, then yes, it’s possible that we delete data and then try to sync it back.

Alternatively, make sure a scan doesn’t happen as the file is being overwritten.
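One common way to guarantee that is to dump to a temporary name and then rename it into place, so a rescan can never catch a half-written file. A minimal sketch, assuming a POSIX filesystem; the paths, database name, and ignore pattern are placeholders:

```python
import os
import subprocess

SYNCED = "/backups"                      # hypothetical Syncthing folder
tmp = os.path.join(SYNCED, "db.sql.part")
final = os.path.join(SYNCED, "db.sql")

# Write the dump under a temporary name first; add "*.part" to the
# folder's .stignore so Syncthing never picks up the partial file.
with open(tmp, "wb") as out:
    subprocess.run(["pg_dump", "mydb"], stdout=out, check=True)

# Atomic as long as tmp and final are on the same filesystem: the
# final name only ever appears fully written.
os.replace(tmp, final)
```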

To test this, I guess you can keep a copy of the old file under a different name.

Yet the copy under a different name will only work if there was no shift in the file’s content.

I think the best way to test this is to sync the file, shut Syncthing down, overwrite the file with new content, and restart Syncthing.
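Scripted, that test could look roughly like this (a sketch; it assumes Syncthing runs as a systemd user service, and the path and database name are placeholders, so adapt the stop/start commands to however you run it):

```python
import subprocess

# Stop Syncthing so no scan can observe the file mid-overwrite.
subprocess.run(["systemctl", "--user", "stop", "syncthing"], check=True)

# Regenerate the dump in place while Syncthing is down.
with open("/backups/db.sql", "wb") as out:       # hypothetical path
    subprocess.run(["pg_dump", "mydb"], stdout=out, check=True)

# Start Syncthing again and watch how much it actually transfers.
subprocess.run(["systemctl", "--user", "start", "syncthing"], check=True)
```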

Because essentially what happens now is: it probably notices a file with only a few bytes available as you start producing the dump, sends that info to the remote device, which truncates its copy to match those few bytes, and then, as the file gets populated, by definition it has to transfer everything.
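To see why a byte shift matters here, a toy illustration of fixed-block hashing (not Syncthing’s actual code; as I understand it, Syncthing hashes 128 KiB blocks with SHA-256, with larger blocks for big files in newer versions). A single byte inserted at the front shifts every block boundary, so no old block hash can be reused:

```python
import hashlib

def block_hashes(data: bytes, block: int = 8) -> list[str]:
    # Hash fixed-size blocks; real block sizes are far larger.
    return [hashlib.sha256(data[i:i + block]).hexdigest()[:8]
            for i in range(0, len(data), block)]

old = b"AAAAAAAABBBBBBBBCCCCCCCC"   # three 8-byte blocks
new = b"X" + old                    # one byte prepended shifts everything

old_h = block_hashes(old)
new_h = block_hashes(new)
reused = sum(h in old_h for h in new_h)
print(reused)  # -> 0: no block survives the shift, all data is resent
```

Conversely, if most of the file keeps the same bytes at the same offsets between dumps, those block hashes still match and only the changed blocks go over the wire.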

OK, I can confirm it doesn’t work as expected; I don’t know if the cause is the Syncthing engine or pg_dump’s byte ordering.

I stopped Syncthing, ran pg_dump, and updated the file. Syncthing sent the whole file.

Now I’m testing with the text file format to rule out any suspicion of a byte-shift problem. The drawback is that the same pg_dump in the optimized format is 1.5 GB vs. 10 GB in the text format.

BTW, doesn’t Syncthing use compression? It seems like it is sending every byte without compressing first.

Best regards.

How do you know it’s sending the whole file?

Syncthing does compress, but compressing compressed data does nothing, and if pg_dump produces compressed data, it’s very likely there is no data overlap.
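For what it’s worth, pg_dump’s custom format (-Fc) is compressed by default, which would explain the 1.5 GB vs. 10 GB difference. If you want to keep the compact format but give block-level reuse a chance, you can turn off pg_dump’s own compression; a sketch, where the database name and output path are placeholders:

```python
import subprocess

# Uncompressed custom-format dump, so consecutive dumps can still
# share identical byte ranges that block hashing can match.
with open("/backups/db.dump", "wb") as out:
    subprocess.run(["pg_dump", "-Fc", "--compress=0", "mydb"],
                   stdout=out, check=True)
```

Even uncompressed, the custom format’s internal ordering can still shift between runs, so the plain-text format may remain the most delta-friendly.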

Compression is enabled for metadata by default, but not for file data. Enable it for everything if you like, in the device editor.
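If you’d rather script it than use the device editor, the same setting is reachable over the REST API. A sketch, assuming Syncthing 1.12 or newer (which exposes the /rest/config endpoints); the API key is a placeholder:

```python
import requests

BASE = "http://localhost:8384/rest/config/devices"
HEADERS = {"X-API-Key": "REPLACE-WITH-YOUR-KEY"}

# Fetch all configured devices, switch compression from the default
# ("metadata") to "always", and write the list back.
devices = requests.get(BASE, headers=HEADERS).json()
for dev in devices:
    dev["compression"] = "always"
requests.put(BASE, headers=HEADERS, json=devices).raise_for_status()
```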

OK, just to report back.

I changed pg_dump to plain text. The result was an output file of 10 GB vs. 1.5 GB.

The first sync took a day. The second day it took only 35 minutes.

Conclusion:

Syncthing does the job as expected, updating only the changes in text files. That was without stopping sync during the pg_dump process. To be clear, pg_dump runs and generates the whole file in 10 minutes, with Syncthing running.

After that, sending the changes took only 35 minutes. Very good for me.

Thanks for your support. Best regards.

One more question: what will happen if I enable versioning on that file?

Will Syncthing save and broadcast only the CHANGES from the original file?

No, it will save a copy of the whole file.
