Hello,
I discovered Syncthing recently and found that it would perfectly match my needs. I have a very low latency connection between two endpoints and would like to synchronize files between them. TCP is not ideal because of the latency, and QUIC seems to be perfect. However, because this connection relies on a very specific device, I cannot choose the MTU (or, to be precise, it has to be smaller than 1024 bytes).
For debugging purposes I set the MTU to 1500 (which will not work with the actual device) and did a capture with Wireshark. In the screenshot below you can see some QUIC traffic with packets of length 1308.
I then ran the exact same setup with an MTU of 1024, and you can see that there were no QUIC packets at all (I included the ICMP packets to show that the interface itself is working).
When I use TCP, Syncthing works fine (but in real operation there will be trouble because of the high latency). So my question is: is there a way to change the MTU when using QUIC? It feels like Syncthing keeps trying to send packets that are too large for the interface MTU.
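For context, my understanding (please correct me if I’m wrong) is that Syncthing’s QUIC support is built on the quic-go library, which by default probes for larger packets via path MTU discovery. The sketch below is plain quic-go, not a Syncthing setting, and the address is just a placeholder; it only illustrates where such a knob lives at the library level. Note also that RFC 9000 assumes the path can carry UDP payloads of at least 1200 bytes, which may already rule out an MTU below 1024 for any standards-compliant QUIC stack.

```go
// Minimal sketch using quic-go directly (the library Syncthing builds on).
// This is NOT a Syncthing option; it only shows the library-level knob.
// API shown roughly as of quic-go v0.4x -- adjust for your version.
package main

import (
	"context"
	"crypto/tls"
	"log"

	"github.com/quic-go/quic-go"
)

func main() {
	tlsConf := &tls.Config{
		InsecureSkipVerify: true,             // demo only
		NextProtos:         []string{"demo"}, // quic-go requires an ALPN protocol
	}
	quicConf := &quic.Config{
		// Stop probing for bigger packets; quic-go then sticks to its
		// conservative initial packet size, which is still at or above
		// the 1200-byte minimum mandated by RFC 9000.
		DisablePathMTUDiscovery: true,
	}

	// Placeholder address, using Syncthing's default port number.
	conn, err := quic.DialAddr(context.Background(), "198.51.100.10:22000", tlsConf, quicConf)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.CloseWithError(0, "done")
	log.Println("connected:", conn.RemoteAddr())
}
```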
This doesn’t seem to make sense to me: every benchmark I’ve seen indicates that QUIC only outperforms TCP on very high-latency or lossy networks. The lower your latency, the more TCP wins against QUIC (thanks to the more optimized in-kernel TCP implementations, often by upwards of 30%).
You haven’t explained much about your use case, but I suspect you’d be pleased or displeased with Syncthing’s performance based on that, not based on network latency.
Syncthing (I’m paraphrasing what I’ve read here) is about security and reliability far more than speed. Personally, I find that the performance is excellent for my (very simple) use case.
As a side note, I’ve yet to find a situation where QUIC (the Syncthing implementation thereof) performs better than TCP. I don’t have any “really bad” networks at hand, but I’ve tried to provoke it by simulating high latency and packet loss, and TCP still comes out on top. If someone has a setup or simulated parameters where QUIC actually wins, I’d be interested in seeing it, because there are optimisations we could do for multiple streams over QUIC. It’s just that they’ve been pointless whenever I’ve tried.
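For reference, the kind of simulation I mean is Linux netem via tc; roughly something like the sketch below, where the interface name and the delay/loss values are just placeholders for whatever you want to test (needs root, throwaway test box recommended):

```go
// Rough sketch of applying and removing a netem impairment around a test.
// Equivalent to running:
//   tc qdisc add dev eth0 root netem delay 300ms loss 1%
//   ... run the transfer test ...
//   tc qdisc del dev eth0 root
package main

import (
	"log"
	"os/exec"
)

func tc(args ...string) {
	out, err := exec.Command("tc", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("tc %v: %v\n%s", args, err, out)
	}
}

func main() {
	iface := "eth0" // placeholder interface name

	// Add 300 ms of delay and 1% packet loss on outgoing traffic.
	tc("qdisc", "add", "dev", iface, "root", "netem", "delay", "300ms", "loss", "1%")

	// ... run the Syncthing transfer test here ...

	// Remove the impairment again.
	tc("qdisc", "del", "dev", iface, "root")
}
```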
We have disabled TCP and use QUIC exclusively on our long, high-latency links. We have one link between two servers 9400 miles apart that typically pings around 300 ms.
QUIC is substantially faster in this case. We are also using multiple connections and get faster transfers this way.
We get about 1-1.5 Mbit/s for larger files with 8 connections over QUIC. It varies quite wildly, as this link is something like 25 hops. We believe the link is basically saturated (but it’s kind of hard to tell, and I didn’t run iperf3 to try to confirm).
With TCP it was 100-150 kbit/s.
Both endpoints are Windows systems.
These numbers are from memory; we did this comparison not entirely scientifically, right after the multiple-connections feature was released a while back. So the numbers may be a little fuzzy.
You’ve kind of got me interested in repeating this experiment. If I can find a bit of time to shut down the other transfers, I may be able to do a better controlled test. I’ll have to think about the best way to do it.
It might well be that QUIC has a sweet spot on high-latency, low-bandwidth links. I was mostly testing high-bandwidth but lossy or high-latency setups, and there I think TCP wins through its lack of (programming) overhead. At such comparatively low bandwidths the CPU usage and overhead will be negligible, so I could see other aspects of QUIC making it win.