I’ve been banging my head against the wall trying to improve Syncthing WAN performance. Testing with iperf3, a single TCP connection maxes out at ~120 Mbits/sec, which matches what Syncthing reports. I’ve run these tests against datacenter SSDs, so I/O isn’t the problem. WAN latency is ~140 ms.
iperf3 -c wan_host -R -P 1
Connecting to host wan_host, port 5201
Reverse mode, remote host wan_host is sending
[ 5] local lan_host port 39572 connected to wan_host port 5201
[ ID] Interval Transfer Bitrate
....
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.12 sec 143 MBytes 118 Mbits/sec 14 sender
[ 5] 0.00-10.00 sec 139 MBytes 117 Mbits/sec receiver
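As a sanity check, those numbers are consistent with a per-flow window cap: sustaining a given rate over ~140 ms needs rate × RTT bytes in flight. Back-of-the-envelope arithmetic (my own numbers, nothing measured on the path itself):
# bytes in flight = rate (bit/s) * RTT (s) / 8
awk 'BEGIN {
  rtt = 0.140                                   # seconds
  printf "needed for 500 Mbit/s: %.1f MB\n", 500e6 * rtt / 8 / 1e6
  printf "implied by 120 Mbit/s: %.1f MB\n", 120e6 * rtt / 8 / 1e6
}'
# needed for 500 Mbit/s: 8.8 MB
# implied by 120 Mbit/s: 2.1 MB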
With 8 TCP streams, I can max out my ISP’s connection (500 Mbits/sec).
iperf3 -c wan_host -R -P 8
Connecting to host wan_host, port 5201
Reverse mode, remote host wan_host is sending
[ 5] local 192.168.0.135 port 39648 connected to wan_host port 5201
[ 7] local 192.168.0.135 port 39650 connected to wan_host port 5201
[ 9] local 192.168.0.135 port 39652 connected to wan_host port 5201
[ 11] local 192.168.0.135 port 39654 connected to wan_host port 5201
[ 13] local 192.168.0.135 port 39656 connected to wan_host port 5201
[ 15] local 192.168.0.135 port 39658 connected to wan_host port 5201
[ 17] local 192.168.0.135 port 39660 connected to wan_host port 5201
[ 19] local 192.168.0.135 port 39662 connected to wan_host port 5201
....
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.12 sec 90.3 MBytes 74.9 Mbits/sec 24 sender
[ 5] 0.00-10.00 sec 87.1 MBytes 73.0 Mbits/sec receiver
[ 7] 0.00-10.12 sec 81.6 MBytes 67.6 Mbits/sec 52 sender
[ 7] 0.00-10.00 sec 78.6 MBytes 65.9 Mbits/sec receiver
[ 9] 0.00-10.12 sec 102 MBytes 84.5 Mbits/sec 110 sender
[ 9] 0.00-10.00 sec 98.3 MBytes 82.5 Mbits/sec receiver
[ 11] 0.00-10.12 sec 80.9 MBytes 67.1 Mbits/sec 38 sender
[ 11] 0.00-10.00 sec 78.1 MBytes 65.5 Mbits/sec receiver
[ 13] 0.00-10.12 sec 56.5 MBytes 46.8 Mbits/sec 32 sender
[ 13] 0.00-10.00 sec 54.0 MBytes 45.3 Mbits/sec receiver
[ 15] 0.00-10.12 sec 94.7 MBytes 78.5 Mbits/sec 53 sender
[ 15] 0.00-10.00 sec 91.2 MBytes 76.5 Mbits/sec receiver
[ 17] 0.00-10.12 sec 93.7 MBytes 77.6 Mbits/sec 56 sender
[ 17] 0.00-10.00 sec 90.2 MBytes 75.7 Mbits/sec receiver
[ 19] 0.00-10.12 sec 78.6 MBytes 65.2 Mbits/sec 54 sender
[ 19] 0.00-10.00 sec 75.9 MBytes 63.7 Mbits/sec receiver
[SUM] 0.00-10.12 sec 678 MBytes 562 Mbits/sec 419 sender
[SUM] 0.00-10.00 sec 653 MBytes 548 Mbits/sec receiver
Everything on the TCP stack appears to be operating correctly: window sizes grow to the point where my theoretical maximum bandwidth should be reachable. But per-flow throughput seems to be throttled somewhere between my two ISPs, so a single stream cannot get past that ~120 Mbits/sec.
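In case anyone wants to reproduce the window check, the usual knobs on Linux look like this (the sysctl names are standard; the values are just illustrative examples comfortably above the ~8.8 MB BDP, not a recommendation):
# TCP autotuning limits (min / default / max, in bytes):
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# If the max were below the BDP, raising it would look like:
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
# iperf3 can also pin the socket buffer explicitly to rule this out:
iperf3 -c wan_host -R -P 1 -w 16M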
I’ve read all the concerns about supporting multiple streams in Syncthing, but the single-stream design doesn’t really help in cases like this. I understand both sides of that debate, so I tried running multiple Syncthing instances (different databases, same files, 1 receiver, N senders) to increase the number of pull streams and therefore throughput, but the sender instances appear to poison each other.
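For the record, my multi-instance attempt looked roughly like this (paths and port numbers are arbitrary choices of mine, sketched rather than a recommended layout):
# Each instance gets its own home directory (config + database) plus
# its own GUI port, while pointing at the same underlying files:
syncthing --home=/var/lib/syncthing-a --gui-address=127.0.0.1:8385 &
syncthing --home=/var/lib/syncthing-b --gui-address=127.0.0.1:8386 &
# In each instance's config.xml, a distinct sync protocol port:
#   <listenAddress>tcp://0.0.0.0:22001</listenAddress>   (instance a)
#   <listenAddress>tcp://0.0.0.0:22002</listenAddress>   (instance b)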
I’m hoping I’m just deploying the multiple instances wrong. Or is there another solution other Syncthing users have tried?