I have a gigabit connection to a server on a different continent, so the latency is around 300–400 ms. I'm currently relying on LFTP for my file synchronisation, as it is the only tool I've found that gets anywhere near saturating the link. The files I'm transferring are generally quite large (10 GB+). I think what I've got is the classic "long fat pipe" problem.
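For context, the bandwidth-delay product on a link like this is substantial. A rough back-of-envelope (assuming a 350 ms RTT, in the middle of my range):

```shell
# Bandwidth-delay product = link speed (bytes/s) * RTT (s).
# At 1 Gbit/s and 350 ms, roughly 44 MB of data must be in flight
# at all times to keep the pipe full -- far beyond default TCP windows.
bdp_bytes=$((1000000000 / 8 * 350 / 1000))
echo "$bdp_bytes"   # prints 43750000
```

That ~44 MB figure is why untuned single-stream TCP stalls well short of line rate here.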
I see that the developers were toying with UDT and even merged KCP support, but KCP has since been removed.
Does this mean that I should look elsewhere if I’m hoping to saturate a gigabit link?
TCP may be able to accommodate high latency with some tuning, but does the average Syncthing user know how to optimize TCP parameters? That seems like a high expectation; I suspect most won't even know where to start.
Further to this, any tuning depends on what both hosts permit. While I have full control over my local host and can tune TCP accordingly, I don't have that sort of control over a cloud server: even with root access, under various levels of virtualisation it's not always possible to modify every network stack parameter.
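For anyone who does have full control of both ends, the kind of tuning I mean looks roughly like this on Linux. The buffer values are sized for a ~44 MB bandwidth-delay product and are a starting point under my assumptions, not a recommendation:

```shell
# Raise the socket buffer ceilings so TCP autotuning can grow the
# window past the bandwidth-delay product (~44 MB for my link).
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
# min / default / max autotuning bounds for TCP receive and send buffers
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
# BBR tends to cope better with high-latency paths than the default
# CUBIC; requires a kernel with the tcp_bbr module available.
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

And this is exactly my point: it needs root, a cooperative kernel, and tuning on *both* ends, which I can't guarantee on the cloud side.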
My hope was that this would be handled seamlessly at a layer above the transport layer, so I could use Syncthing over an LFN (long fat network). If it's not, then that's fine and my question is answered, but it does unfortunately mean I'll need to roll my own solution.