My use case for Syncthing is to sync files between two machines geographically several thousand miles apart. In my experience this results in high latency, and the only way I've found to overcome it is parallel transfers. I noticed the block sync protocol uses a TCP connection, which is greatly affected by this latency. Will Syncthing create multiple TCP connections between two individual machines and transfer blocks in parallel over them? If so, is it configurable, or will it attempt to auto-adjust based on latency to maximize throughput?
Thanks in advance!
No. It uses a single TCP connection, but can request multiple blocks from multiple peers (or the same peer) at a time, meaning we don't need to wait for a response before making another request, though we are still subject to TCP's restrictions.
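To illustrate why pipelining helps even on one connection, here is a toy simulation (not Syncthing's actual code; the 100 ms round-trip time is an assumed figure):

```python
import asyncio
import time

RTT = 0.1  # assumed round-trip time per block request (100 ms)

async def request_block(i):
    await asyncio.sleep(RTT)  # stand-in for one network round trip
    return i

async def serial(n):
    # wait for each response before sending the next request
    return [await request_block(i) for i in range(n)]

async def pipelined(n):
    # keep many requests in flight on the same "connection"
    return await asyncio.gather(*(request_block(i) for i in range(n)))

t0 = time.perf_counter()
asyncio.run(serial(10))
serial_s = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(pipelined(10))
pipelined_s = time.perf_counter() - t0

print(f"serial: {serial_s:.2f}s, pipelined: {pipelined_s:.2f}s")
```

Ten serial requests pay ten round trips; ten pipelined ones pay roughly one. This hides per-request latency, but the total throughput of the single connection is still bounded by the TCP window, which pipelining alone does not fix.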
OK, thanks. Although multiple blocks may be requested, and I'm sure that helps some, it won't solve most of the latency issue. As a workaround, can you run Syncthing multiple times on the same machine, pointing at the same disk location?
No you can’t do that.
I suggest you introduce more peers into the cluster; this way you'll have more peers to pull from, and therefore will utilize bandwidth better.
I mean, what do you expect it to do?
This is not HTTP, where we cannot make more requests before receiving a response, hence I am not sure how much benefit there would be from multiple connections, given that we can achieve with one socket everything that takes HTTP multiple sockets to achieve… What other improvements do you envision?
Thanks for the suggestion. It isn’t cost effective for my use case to attempt to host more servers. I simply want a fast way to sync between two machines in which network latency is overcome with parallel channels for data transfer. Would this feature be a reasonable request?
FWIW I do know parallel connections host to host will overcome the latency I am experiencing. Unfortunately, the software I have that can sync multiple parts in parallel and is secure is fragile and takes forever to start syncing new files. However, once it starts I am able to utilize 80% of my throughput capability on my slowest connection. Any other secure solution I’ve tried that uses one peer to peer connection only utilizes about 25% due to latency and dropped packets. I won’t even tell you how miserable using rsync tunneled through ssh was.
The logic to sync from multiple endpoints is basically the same as syncing from multiple connections from the same source, so perhaps I’ll tinker with syncthing’s code and see if I can implement it.
I have the same problem as you, and multiple TCP connections in Syncthing would probably solve this issue for me as well. I believe so because right now I am successfully working around the problem by running multiple Syncthing daemons on the same machine. It takes n times the space (one copy per instance) and is really hopeless to manage, but it syncs the files at lightning speed. With only one-to-one Syncthing I get 500 kB/s–1 MB/s between these locations (8,000 kilometers apart), even though both locations have a 1 Gbit connection and are connected to their ISPs directly via fibre. Each Syncthing instance I start adds to the speed, but again, it's terrible to manage and uses n times the resources on the machine, when all I really want is for blocks to be spread across multiple TCP connections.
tldr: If you manage to implement this you’re my hero.
Sorry about the off-topic, but: may I ask what you are using today? Even if it's slow and cumbersome, it might be a lot better than spawning multiple Syncthings for me.
Same issue here. Many of the files I sync come from a single source in Europe, and the peers are in the US. Multiple Syncthing processes running on the same machine multiplies throughput, and the same goes for FTP (using a multi-segment client that establishes multiple connections to the source). If Syncthing could give one process multiple connections, similar to those FTP clients, that would be very awesome.
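For what it's worth, the core of what such multi-segment clients do is just splitting a file's byte range across connections. A toy sketch of that splitting step (nothing Syncthing- or FTP-specific; the function name is made up):

```python
def split_segments(size, n):
    """Divide the byte range [0, size) into n contiguous segments."""
    base, rem = divmod(size, n)
    ranges, start = [], 0
    for i in range(n):
        # spread the remainder over the first `rem` segments
        end = start + base + (1 if i < rem else 0)
        ranges.append((start, end))
        start = end
    return ranges

print(split_segments(100, 3))  # → [(0, 34), (34, 67), (67, 100)]
```

Each segment would then be fetched on its own connection, so each connection's TCP window only has to cover its own share of the bandwidth-delay product.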
Same problem here. For Syncthing to be usable for me, it will have to provide multiple connections, downloading a user-configurable number of blocks/segments at once.
For now, the only solution for those of us in this situation is to use an FTP client like lftp, writing a cron job script using the mirror command. This is a shame, since the GUI and sync features are nice; it's just too slow.
The issue you are referring to usually only applies to high-latency, high-loss links, so please, let's not jump on the bandwagon just because you think something is slower than you expect. For low-latency links, simply increasing the number of pullers in the advanced config will probably get speeds close to your CPU hashing performance.
I’m syncing from a server in North America to one in Australia, so I am dealing with a high latency connection. I tried increasing the number of pullers but did not find that to give me much better results, and Syncthing still consistently performed at half the speed of lftp using segmented transfers over SFTP.
Thank you, however, for encouraging me to get off the bandwagon. I could have gotten a splinter, and there was a lot of excrement.
Transfer performance is not one of the main priorities for Syncthing, situations where TCP doesn’t perform well are fairly unusual, and multiplexing over multiple TCP connections between two devices is probably never going to happen. I say this not to be mean but to keep expectations in check. Sorry.
(Tweaking pullers and so on shouldn’t be necessary either.)
That's not entirely true; we've seen people gain performance benefits from tweaking pullers on Gbit LANs.
I was suggesting we should probably double the default, as windows on the local networks can be stupidly big, hence we’re not filling up the pipe.
As for multiple tcp connections, I wouldn’t say never, as it’s trivial to plug in a custom transport if someone feels like doing that.
Yet last I looked, I couldn’t find a suitable framing library in some other language to get inspired by.
I guess you could make something stupid like open N connections, send packets round-robin, re-establish connections as they break, and slap KCP on top for ordering and reliability in the face of connection losses.
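A toy version of just the round-robin part of that idea, with no networking, KCP, or reconnection logic (the function name is made up):

```python
def round_robin(blocks, n_conns):
    """Assign blocks to n_conns connection "lanes" in round-robin order."""
    lanes = [[] for _ in range(n_conns)]
    for i, block in enumerate(blocks):
        lanes[i % n_conns].append(block)
    return lanes

# Seven blocks spread across three hypothetical connections:
print(round_robin(list(range(7)), 3))  # → [[0, 3, 6], [1, 4], [2, 5]]
```

The hard parts this sketch omits are exactly the ones mentioned above: blocks arrive out of order and connections drop, which is why something like KCP would be needed on top to restore ordering and reliability.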
If the issue really is about TCP-level latency, one can increase the TCP window size.
On Linux, it can be changed via sysctl (which writes to the special files under /proc/sys):
For instance, a connection with 400 ms of latency transmitting at 100 Mbit/s would require a 5 MB window, which you would set up like this:
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.rmem_max=6291456
sysctl -w net.core.wmem_max=6291456
sysctl -w net.ipv4.tcp_rmem="4096 5242880 6291456"
sysctl -w net.ipv4.tcp_wmem="4096 5242880 6291456"
sudo as appropriate. Those settings are temporary. Set them on both systems, then restart Syncthing on both systems and see if it helps. If it helps, add them to your sysctl configuration so they persist across reboots.
There probably is an equivalent on Windows, but I don’t have one at hand to test.
Keep in mind it will only help if it really is the latency causing trouble. There are many other possible bottlenecks in a full syncthing setup.
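The 5 MB figure above is just the bandwidth-delay product of the link; the arithmetic can be checked like this:

```python
def required_window_bytes(bandwidth_bps, rtt_s):
    # Bandwidth-delay product: bytes that must be in flight
    # to keep the pipe full over one round trip.
    return int(bandwidth_bps * rtt_s / 8)

# 100 Mbit/s link with a 400 ms round trip:
print(required_window_bytes(100_000_000, 0.4))  # → 5000000 bytes ≈ 5 MB
```

Any window smaller than this caps throughput at roughly window size divided by round-trip time, regardless of how fast the link itself is.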
I'm in the same situation as you, Alek: 100–300 ms WAN latency depending on the country. I've tested Resilio in the past, and the result is that it's much faster.