Improving WAN Sync Performance

I’ve been banging my head against the wall trying to improve Syncthing WAN performance. Testing with iperf3, a single TCP connection maxes out at ~120 Mbits/sec, which matches what Syncthing reports. I’ve done these tests with datacenter SSDs, so I/O isn’t the problem. WAN latency is around 140 ms.

iperf3 -c wan_host -R -P 1
Connecting to host wan_host, port 5201
Reverse mode, remote host wan_host is sending
[  5] local lan_host port 39572 connected to wan_host port 5201
[ ID] Interval           Transfer     Bitrate
....
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.12  sec   143 MBytes   118 Mbits/sec   14             sender
[  5]   0.00-10.00  sec   139 MBytes   117 Mbits/sec                  receiver

With 8 TCP streams, I can max out my ISP’s connection (500 Mbits/sec).

iperf3 -c wan_host -R -P 8
Connecting to host wan_host, port 5201
Reverse mode, remote host wan_host is sending
[  5] local 192.168.0.135 port 39648 connected to wan_host port 5201
[  7] local 192.168.0.135 port 39650 connected to wan_host port 5201
[  9] local 192.168.0.135 port 39652 connected to wan_host port 5201
[ 11] local 192.168.0.135 port 39654 connected to wan_host port 5201
[ 13] local 192.168.0.135 port 39656 connected to wan_host port 5201
[ 15] local 192.168.0.135 port 39658 connected to wan_host port 5201
[ 17] local 192.168.0.135 port 39660 connected to wan_host port 5201
[ 19] local 192.168.0.135 port 39662 connected to wan_host port 5201
....
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.12  sec  90.3 MBytes  74.9 Mbits/sec   24             sender
[  5]   0.00-10.00  sec  87.1 MBytes  73.0 Mbits/sec                  receiver
[  7]   0.00-10.12  sec  81.6 MBytes  67.6 Mbits/sec   52             sender
[  7]   0.00-10.00  sec  78.6 MBytes  65.9 Mbits/sec                  receiver
[  9]   0.00-10.12  sec   102 MBytes  84.5 Mbits/sec  110             sender
[  9]   0.00-10.00  sec  98.3 MBytes  82.5 Mbits/sec                  receiver
[ 11]   0.00-10.12  sec  80.9 MBytes  67.1 Mbits/sec   38             sender
[ 11]   0.00-10.00  sec  78.1 MBytes  65.5 Mbits/sec                  receiver
[ 13]   0.00-10.12  sec  56.5 MBytes  46.8 Mbits/sec   32             sender
[ 13]   0.00-10.00  sec  54.0 MBytes  45.3 Mbits/sec                  receiver
[ 15]   0.00-10.12  sec  94.7 MBytes  78.5 Mbits/sec   53             sender
[ 15]   0.00-10.00  sec  91.2 MBytes  76.5 Mbits/sec                  receiver
[ 17]   0.00-10.12  sec  93.7 MBytes  77.6 Mbits/sec   56             sender
[ 17]   0.00-10.00  sec  90.2 MBytes  75.7 Mbits/sec                  receiver
[ 19]   0.00-10.12  sec  78.6 MBytes  65.2 Mbits/sec   54             sender
[ 19]   0.00-10.00  sec  75.9 MBytes  63.7 Mbits/sec                  receiver
[SUM]   0.00-10.12  sec   678 MBytes   562 Mbits/sec  419             sender
[SUM]   0.00-10.00  sec   653 MBytes   548 Mbits/sec                  receiver

Everything on the TCP stack appears to be operating correctly: window sizes grow to the point where my theoretical maximum bandwidth should be reachable. But due to a limitation somewhere along the path (it appears to be throttled somewhere between my two ISPs), single-stream performance cannot get past that ~120 Mbits/sec.
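
To put numbers on that: the bandwidth-delay product at this RTT is how much data a single connection has to keep in flight. A quick back-of-the-envelope calculation, using only the figures already quoted above (nothing here is measured, it’s just arithmetic):

# BDP = rate * RTT: bytes that must be in flight to sustain a given rate over 140 ms
awk 'BEGIN {
  rtt = 0.140                      # seconds of WAN latency
  n = split("120 500", rate)       # Mbit/s: observed single-stream cap and the ISP line rate
  for (i = 1; i <= n; i++) {
    bdp = rate[i] * 1e6 / 8 * rtt  # bytes
    printf "%s Mbit/s over 140 ms RTT needs ~%.1f MiB in flight\n", rate[i], bdp / 1048576
  }
}'

So filling the pipe with one stream needs roughly 8–9 MiB in flight; since the endpoints’ windows do grow that large, the ~120 Mbits/sec cap really does look like mid-path throttling rather than an endpoint limit.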

I’ve read all the concerns about supporting multiple streams in Syncthing, but that doesn’t really help in cases like this. I understand both sides of the argument, so I’ve tried running multiple Syncthing instances (different databases, same files, one receiver, N senders) to increase the number of pull streams, and therefore performance, but the different send instances appear to start poisoning each other.
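
For reference, this is roughly how I’m launching the extra send-only instances: each one gets its own home directory (config + database) and GUI address while pointing at the same folder on disk. Paths and ports below are placeholders, and the flags are as of recent Syncthing versions:

# Each extra sender: a separate --home keeps its config and database apart.
# The sync listen address (default tcp://0.0.0.0:22000) also has to differ
# per instance, which is set in each instance's own config.
syncthing serve --home="$HOME/.config/syncthing-sender2" --gui-address=127.0.0.1:8385 &
syncthing serve --home="$HOME/.config/syncthing-sender3" --gui-address=127.0.0.1:8386 &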

I’m hoping I’m just deploying multiple instances wrong? Maybe there’s another solution other Syncthing users have tried?

Not sure. There is no way to get Syncthing to use multiple TCP connections unless you find some sort of VPN that does that.

I don’t think it’s logical from an ISP’s perspective to limit a single connection but not multiple, so I’d still hope that this comes down to TCP tuning. Presumably there are tools to help diagnose that.
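
For example, on Linux something like the following shows whether a transfer is buffer-limited or path-limited (wan_host is the same placeholder as in the iperf3 runs above):

# Maximum socket buffer sizes; they need to exceed the bandwidth-delay product (~9 MB here)
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max
# Congestion control algorithm in use (cubic vs bbr behave very differently on lossy 140 ms paths)
sysctl net.ipv4.tcp_congestion_control
# Per-connection cwnd, rtt and retransmit counters for live connections to the peer
ss -ti dst wan_host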

It might make sense if the ISPs are load balancing between routers/routes; many small TCP streams are easier to handle than a few larger ones, and they’re more common. I’m 90% sure the throttling is happening somewhere in Europe. I’ve tried setting up relays (a Syncthing relay and VPN hops) in the US without success.

Is there a theoretical way of getting multiple read-only/send-only instances to cooperate? The classical networking solution to slow single streams is having the application handle more streams. That said, to my knowledge, the only non-HTTP applications/protocols that can use multiple TCP connections for a single file transfer are SMB/Samba and Resilio Sync. It’s definitely a hard problem with niche uses.

Does your ISP limit UDP? You could tunnel your Syncthing traffic via WireGuard.

There is nothing in Syncthing to make instances cooperate. The best I can suggest is to manage different subtrees with different instances.

My naive observation is that the limit is per protocol/port/address tuple, so the only real solution seems to be some type of split-TCP tunnel, which doesn’t really exist.

That said, I do have WireGuard connecting each site, and the speed is about the same.

Oh well, thanks.

Unfortunately, it’s a few large files that I’m trying to optimize (or rather, lots of large files, but only a few need to sync at any given time). I am sub-treeing the small files, though that’s more for management overhead than anything. :laughing:

Hmmm, you could try doing some round-robin destination-port switching using iptables. You’d need to map a set of ports back to WireGuard on the other end for this to work.

Edit: for WireGuard

Ooh, that’s an idea: fan out a single WireGuard flow across multiple destination ports, and blindly reverse it on the other side?

For each packet destined to port 51820:
-> 1st of every 3 packets: redirect to port 51821
-> 2nd of every 3 packets: redirect to port 51822
-> 3rd of every 3 packets: redirect to port 51823
For each packet arriving on 51821, 51822, or 51823:
-> redirect to port 51820

That feels like it would work…
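
If I get around to trying it, I imagine the rules would look something like this, using iptables’ statistic match (an untested sketch; 51820 is the WireGuard port, 51821–51823 are the fan-out ports from the sketch above):

# Sender side (on the host running WireGuard; use PREROUTING instead on a router in the path).
# The rules are evaluated in order: 1 in 3 packets hits the first rule, 1 in 2 of the
# remainder hits the second, and whatever is left falls through to the third.
iptables -t nat -A OUTPUT -p udp --dport 51820 -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination :51821
iptables -t nat -A OUTPUT -p udp --dport 51820 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination :51822
iptables -t nat -A OUTPUT -p udp --dport 51820 -j DNAT --to-destination :51823

# Receiver side: fold everything back onto the real WireGuard listen port.
iptables -t nat -A PREROUTING -p udp --dport 51821:51823 -j REDIRECT --to-ports 51820

One caveat I’m unsure about: NAT decisions are cached per conntrack entry, so a single long-lived WireGuard flow may end up pinned to whichever port its first packet drew rather than being split packet by packet.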

I am working on a couple of k8s clusters, but I’m sure VyOS could be hacked with some custom iptables rules.
