My use case for Syncthing is to sync files between two machines that are geographically several thousand miles apart. In my experience this results in high latency, and the only way I've found to overcome it is with parallel transfers. I noticed the block sync protocol runs over a TCP connection, which is affected greatly by this latency. Will Syncthing create multiple TCP connections between two individual machines and transfer blocks in parallel over them? If so, is it configurable, or will it attempt to auto-adjust based on latency to maximize throughput?
No. It uses a single TCP connection, but it can request multiple blocks from multiple peers (or the same peer) at a time. That means we don't need to wait for a response before making the next request, though we are still subject to TCP's limitations.
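The difference between strict request/response and pipelined requests over one connection can be sketched with a toy timing model (the RTT, per-block transmit time, and block count below are made-up illustrative numbers, not Syncthing measurements):

```python
# Toy model: time to fetch n blocks over one connection.
# serial    = wait a full round trip for every block
# pipelined = pay one round trip up front, then keep the link busy
rtt = 0.1        # round-trip time in seconds (assumed)
t_block = 0.01   # time to transmit one block (assumed)
n = 100          # number of blocks (assumed)

serial = n * (rtt + t_block)
pipelined = rtt + n * t_block

print(f"serial:    {serial:.1f} s")    # 11.0 s
print(f"pipelined: {pipelined:.1f} s") # 1.1 s
```

In practice the pipelined case is still bounded by TCP's congestion window and loss recovery, which is why a single lossy high-latency stream can stay slow even with many requests in flight.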
OK, thanks. Although requesting multiple blocks at once surely helps some, it won't solve most of the latency issue. As a workaround, can you run Syncthing multiple times on the same machine, pointing at the same disk location?
I suggest you introduce more peers into the cluster; that way you'll have more peers to pull from and will utilize your bandwidth better.
I mean, what do you expect it to do?
This is not HTTP, where we cannot make more requests before receiving a response, so I am not sure how much benefit there would be from multiple connections, given that we can achieve with one socket what takes HTTP multiple sockets to achieve… What other improvements do you envision?
Thanks for the suggestion. It isn’t cost effective for my use case to attempt to host more servers. I simply want a fast way to sync between two machines in which network latency is overcome with parallel channels for data transfer. Would this feature be a reasonable request?
FWIW I do know parallel connections host to host will overcome the latency I am experiencing. Unfortunately, the software I have that can sync multiple parts in parallel and is secure is fragile and takes forever to start syncing new files. However, once it starts I am able to utilize 80% of my throughput capability on my slowest connection. Any other secure solution I’ve tried that uses one peer to peer connection only utilizes about 25% due to latency and dropped packets. I won’t even tell you how miserable using rsync tunneled through ssh was.
The logic to sync from multiple endpoints is basically the same as syncing from multiple connections from the same source, so perhaps I’ll tinker with syncthing’s code and see if I can implement it.
I have the same problem as you, and multiple TCP connections in Syncthing would probably solve this issue for me as well. I believe so because I am currently working around the problem by running multiple Syncthing daemons on the same machine. It takes n times the space (one copy per instance) and is really hopeless to manage, but it syncs the files at lightning speed. With only one-to-one Syncthing I get 500 kB/s to 1 MB/s between these locations (8000 kilometers apart), even though both locations have a 1 Gbit connection and are connected to their ISPs directly via fibre. By running multiple Syncthing instances I add to the speed with each instance I start, but again, it's terrible to manage and uses n times the resources on the machine, when all I really want is for blocks to be spread across multiple TCP connections.
tldr: If you manage to implement this you’re my hero.
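For what it's worth, the numbers above are roughly what you'd expect from a single TCP stream limited by its bandwidth-delay product. A back-of-envelope check (the ~100 ms RTT and 64 KiB effective window are assumptions, not measured values):

```python
# Single-stream TCP throughput is bounded by window / RTT.
rtt_s = 0.100             # ~100 ms round trip over ~8000 km of fibre (assumed)
window = 64 * 1024        # 64 KiB effective window (assumed)

max_bps = window / rtt_s  # bytes per second
print(f"{max_bps / 1024:.0f} KiB/s")  # 640 KiB/s -- inside the observed 500 kB/s to 1 MB/s range

# n parallel connections scale the aggregate limit roughly linearly:
for n in (2, 4, 8):
    print(f"{n} connections: {n * max_bps / 1024:.0f} KiB/s")
```

Window scaling can raise the per-stream limit, but packet loss on a long path keeps collapsing the congestion window, which is exactly where parallel streams (or parallel instances) help.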
Sorry about the off-topic, but: may I ask what you are using today? Even if it's slow and cumbersome, it might be a lot better than spawning multiple Syncthing instances for me.
Same issue here. Many of the files I sync are from a single source in Europe and the peers are in the US. Multiple Syncthing processes running on the same machine multiplies throughput, and the same goes for FTP (using a multi-segment client that establishes multiple connections to the source). If Syncthing could provide multiple connections from one process, similar to those FTP clients, that would be very awesome.
Same problem here. For Syncthing to be usable for me it will have to provide multiple connections, downloading a user-configurable amount of blocks/segments at once.
Only solution for those of us in this situation for now is to use an FTP client like lftp, writing a cronjob script using the mirror command. This is a shame, since the GUI and sync features are nice, it’s just too slow.
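For reference, such a cronjob boils down to a single crontab entry (the host, paths, parallelism, and 15-minute schedule here are placeholders):

```
# crontab entry: mirror a remote tree every 15 minutes with 8 parallel transfers
*/15 * * * * lftp -e "mirror --parallel=8 /remote/dir /local/dir; quit" sftp://user@example.com
```

lftp's `mirror --use-pget-n=N` option additionally splits individual large files into N segments, which is what helps most on a high-latency link.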
The issue you are referring to usually only applies to high-latency, high-loss links, so please, let's not jump on the bandwagon just because you think something is slower than you expect. For low-latency links, simply increasing the number of pullers in the advanced config will probably get your speeds close to your CPU's hashing performance.
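On the Syncthing versions current at the time, that was a per-folder setting in config.xml; roughly like the fragment below (the element name and value are from memory and may differ between releases; check the advanced folder settings in the GUI for yours):

```xml
<folder id="default" path="/data/default">
    <!-- number of concurrent block pullers; 0 means the built-in default -->
    <pullers>64</pullers>
</folder>
```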
I’m syncing from a server in North America to one in Australia, so I am dealing with a high latency connection. I tried increasing the number of pullers but did not find that to give me much better results, and Syncthing still consistently performed at half the speed of lftp using segmented transfers over SFTP.
Thank you, however, for encouraging me to get off the band wagon. I could have got a splinter and there was a lot of excrement.
Transfer performance is not one of the main priorities for Syncthing, situations where TCP doesn’t perform well are fairly unusual, and multiplexing over multiple TCP connections between two devices is probably never going to happen. I say this not to be mean but to keep expectations in check. Sorry.
(Tweaking pullers and so on shouldn’t be necessary either.)
Syncthing still supports and prefers the usual TCP+TLS connection, though; QUIC is primarily used for otherwise difficult NAT traversals. You can manually configure QUIC connections for your links.
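For example, forcing QUIC to a specific device means replacing its `dynamic` address with an explicit `quic://` one in config.xml or the GUI (the device ID, host, and port below are placeholders):

```xml
<device id="DEVICE-ID-HERE" name="remote">
    <!-- placeholder address; 22000 is Syncthing's default listen port -->
    <address>quic://203.0.113.10:22000</address>
</device>
```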
The performance of QUIC vs TCP varies in different scenarios, and there is no guarantee that one is always better than the other in a given scenario. You will have to test it if you want reliable numbers for your use case.