How well does Syncthing handle high latency in late 2018?

I have a gigabit connection to a server on a different continent, so the latency is somewhere up around 300-400ms. I'm currently relying on LFTP for my file synchronisation as it is the only tool that manages to get anywhere near saturating my link. I'm transferring files that are generally quite large (10GB+). I think what I've got is the classic "long fat pipe" problem.
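For context, the "long fat pipe" problem comes down to the bandwidth-delay product (BDP): the amount of data that must be in flight, and therefore covered by the TCP window, to keep the link full. A quick back-of-the-envelope calculation for the figures above (1 Gbit/s, ~350 ms RTT; the variable names here are just for illustration):

```shell
bandwidth_bits=1000000000    # 1 Gbit/s
rtt_ms=350                   # ~350 ms round-trip time

# BDP in bytes = (bandwidth in bits/s / 8) * RTT in seconds
bdp_bytes=$(( bandwidth_bits / 8 * rtt_ms / 1000 ))
echo "$bdp_bytes"            # 43750000 bytes, i.e. ~44 MB of window needed
```

So unless both ends allow TCP buffers of roughly that size, a single connection cannot saturate the link regardless of the application on top.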

I see that the developers were toying with UDT and even merged KCP support. However, I see that KCP has since been removed.

Does this mean that I should look elsewhere if I’m hoping to saturate a gigabit link?

Thanks in advance!

Syncthing uses TCP. If your TCP stack can reach the required rates, Syncthing might. If it can't, Syncthing can't.

And TCP is perfectly capable of filling high-latency links; you just need to tweak your TCP stack.
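On Linux, that tweaking usually means raising the kernel's maximum socket buffer sizes so TCP autotuning can grow the window to cover the bandwidth-delay product. A minimal sketch (the 64 MB values are illustrative, sized for a ~44 MB BDP; run as root, and note these are standard Linux sysctls, not anything Syncthing-specific):

```shell
# Allow sockets to request buffers up to 64 MB
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
# min / default / max for TCP's autotuned receive and send buffers
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
```

To make the change persistent, the same key=value lines would go in a file under /etc/sysctl.d/. Both endpoints need this, since the sender's window and the receiver's advertised window each cap throughput independently.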

TCP may be able to accommodate high latency with some tuning, but does the average Syncthing user know how to optimize TCP parameters? That seems like a high expectation. I suspect most won't even know where to start.


Further to this, it is dependent on the TCP tuning that is permitted on both hosts. While I have full control over my local host and can tune TCP accordingly, I do not have that sort of control over a cloud server. Even with root access, under various levels of virtualisation it’s not always possible to modify all network stack parameters.

My hope was that this would be handled seamlessly at a layer higher than the transport layer so I could use Syncthing with an LFN. If it’s not, then that’s fine and my question is answered, but it does mean that unfortunately I’ll need to form my own solution.


We use plain TCP, which I guess answers your question.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.