I am having a little problem with Syncthing.
My setup is the following.
2× Windows Server 2012 R2
connected via OpenVPN, ~12 Mbit/s
I use Syncthing to sync backups made with Veeam Endpoint Backup. Veeam does incremental backups: each day a few gigabytes go into a new incremental file, and the oldest incremental is merged into the first (full) backup file.
The new file poses no problem and syncs reasonably fast. The trouble is with the large first backup file, which sits at around 200 GB and is only getting bigger over time.
Syncing this ~200 GB file takes around 15 hours, even though it is mostly the same file with only a few gigabytes of changes.
When I watch the process I can see Syncthing reading the file on my destination server at only about 7 MB/s, although the file resides on a RAID array capable of 300 MB/s+.
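(For anyone following along: my understanding is that Syncthing splits files into fixed-size blocks, hashes each one, and reuses unchanged blocks locally, which is why the whole old file gets re-read even for a small change. A minimal Python sketch of that block-diffing idea — hypothetical helper names, not Syncthing's actual code:)

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # 128 KiB, the classic fixed block size

def block_hashes(data: bytes, block_size: int = BLOCK_SIZE):
    """Hash every fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(old: bytes, new: bytes, block_size: int = BLOCK_SIZE):
    """Indices of blocks whose hash differs between old and new contents."""
    old_h, new_h = block_hashes(old, block_size), block_hashes(new, block_size)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# Example: 4 blocks, only block 2 modified
old = bytes(4 * BLOCK_SIZE)
new = bytearray(old)
new[2 * BLOCK_SIZE] = 0xFF
print(changed_blocks(old, bytes(new)))  # -> [2]
```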
I looked through the tweakable parameters and only found "Set Low Priority", which I disabled, but to no avail. Is there something else one can tweak to speed this up?
Might the upcoming "Support variable sized blocks" feature be a solution? https://github.com/syncthing/syncthing/issues/4807
Thanks and any help is much appreciated.
Try the .48 RC and enable large blocks. It’s still going to suck because every sync creates a temporary copy of the file, but it might suck slightly less.
Thanks for the fast reply.
Is the release date of 5 June still valid?
If so, I will wait a week, unless there are no big bugs left to fix in RC4.
Still valid as of now, and after two (soon three) weeks of testing I don't expect (!= guarantee) problems to still occur. However, the large-block option is not enabled by default, so even after the RCs it is a very new and thus not widely tested feature; there may still be problems that haven't surfaced yet.
OK I will take a leap of faith and try RC4. Can I switch back to stable after official release? Hope to provide any data soon how it went.
It does work. There was once an issue with changing back through the web UI, but I believe it's sorted out. It's in any case possible to change back "manually" through config.xml or by replacing the binary. If you use a package manager, all of that obviously doesn't apply, but I don't think that's the case here.
Yet I am sure your slowness problem comes from somewhere else, such as hashing speed, available memory, disk write speed/latency, or the network connection, as 12 Mbit/s is very low.
Also make sure to set copiers=1 on the folder (advanced config).
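(For reference, folder options from the advanced config end up as child elements of the folder in config.xml; the element name below matches what I see in my config, but double-check against your version — the id and path are just examples:)

```xml
<folder id="backups" path="D:\Backups" type="sendreceive">
    <!-- one copier thread avoids parallel copies thrashing a spinning disk -->
    <copiers>1</copiers>
</folder>
```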
Copying 128 KiB blocks in random order performs terribly on spinning disks. Copying multi-megabyte blocks in random order will be a lot faster, although copying is still copying.
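To put rough numbers on that (simple arithmetic, assuming a 200 GiB file and a 16 MiB maximum for large blocks):

```python
# Rough block counts for a ~200 GiB file (size taken from this thread)
file_size = 200 * 1024**3              # 200 GiB in bytes
small = file_size // (128 * 1024)      # classic 128 KiB blocks
large = file_size // (16 * 1024**2)    # 16 MiB large blocks (assumed max)
print(small, large)  # -> 1638400 12800
```

That's ~1.6 million potential random seeks versus ~13 thousand, which is why large blocks help so much on spinning disks.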
For filesystems that support it (like BTRFS), does Syncthing use a reflink copy that is then updated inplace, or does it create an empty file that is then filled with existing and remote contents?
The latter; there are no btrfs/XFS-specific hacks/optimizations.
I doubt it. The machines are a Xeon E3-1230 with 32 GB RAM (destination) and a Xeon E3-1225 v5 with 40 GB RAM (source), both with plenty of RAM free (usually >8 GB).
When syncing is happening the connection is saturated and about 1.3 MB/s gets transferred, but as I said, most of the file remains unchanged and there are only about 3-6 GB to transfer each night.
The source has RAID 1 and the destination RAID 5 with battery-backed cache.
Duly noted. I updated to RC4 and enabled Large blocks on both sides. Tomorrow morning I will report back.
IANP (I am no programmer), but what about creating a file containing only the changes, plus an index of where to insert them once all the data has been transmitted? But I am probably talking out of my butt.
Syncthing makes atomic changes to files by replacing the old copy with the new one. If it were to write to the existing file in place, the operation would take longer to complete and data could more easily be lost through simultaneous file access.
Also, people get annoyed when files are locked for no apparent reason or become corrupted because ST was shut down in the middle of writing to it.
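The copy-then-rename approach described above can be sketched like this (a generic simplification, not Syncthing's actual code — it writes a temporary file next to the target and then atomically swaps it in):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write to a temp file in the same directory, then atomically
    replace the target, so readers never see a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # make sure data hits the disk
        os.replace(tmp, path)          # atomic on POSIX and modern Windows
    except BaseException:
        os.unlink(tmp)                 # clean up the temp file on failure
        raise

atomic_write("demo.txt", b"new contents")
print(open("demo.txt", "rb").read())  # -> b'new contents'
```

The price of this safety is exactly what was mentioned earlier in the thread: every sync rewrites the whole file into a temporary copy, even for a small change.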
I switched to RC4 yesterday, activated large blocks on both sides and set copiers to 1.
Soon after syncing started, the transfer showed strange behaviour. I let it run for four hours in case it smoothed out, but it stayed the same.
On the source machine, reading from the changed file was very slow (max. 1.2 MB/s). Transfers happened in bursts.
I then tried disabling large blocks, and in subsequent attempts restored all parameters to their defaults, but the transfer pattern did not change.
For the time being it is tremendously slower; after 10 hours I sit at 32% synced.
I will revert to the last stable version and do some more testing in-house on a 1 Gbit/s network to rule out any bottlenecks concerning link speed and latency.
Any ideas why the sync performance changed so drastically?
Not really. But note that changing the block size setting doesn't do much by itself; it only kicks in at the next actual local change/rehash, and the subsequent transfer will be a full one due to the changed block size.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.