I’m syncing large datasets (~500 GB) between my home office (Ubuntu 20.04) and a remote server (CentOS 8) over a high-latency network (~150 ms ping, 100 Mbps bandwidth). Syncthing works well overall, but the initial sync is slow and I see occasional stalls.
I’ve tried:
- Increasing scan intervals – helped reduce CPU usage.
- Compression and relaying – mixed results, since compression increases CPU load.
- TCP vs. QUIC – QUIC seems to handle the latency better, but isn’t always stable (see the snippet after this list for how I’m switching protocols).
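For reference, here’s roughly how I’m pinning the protocol via the listen and device addresses in config.xml (the hostname, port, and device ID below are placeholders):

```xml
<!-- config.xml excerpts; swap quic:// for tcp:// to test plain TCP -->
<options>
  <!-- listen on QUIC only instead of the multi-protocol "default" set -->
  <listenAddress>quic://0.0.0.0:22000</listenAddress>
</options>
<device id="REMOTE-DEVICE-ID">
  <!-- pin the remote address so the connection can't fall back to another transport -->
  <address>quic://server.example.com:22000</address>
</device>
```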
Has anyone optimized Syncthing for high-latency setups? Would tweaking the MTU, using a specific TCP congestion control, or other settings help? Any best practices would be appreciated!
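On the congestion-control side, this is the kind of experiment I have in mind for the Ubuntu end – a sketch only, not something I’ve validated on this link yet:

```
# /etc/sysctl.d/99-tcp-tuning.conf – switch to BBR with the fq qdisc
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Apply with `sudo sysctl --system` and verify with `sysctl net.ipv4.tcp_congestion_control`. For the MTU question, `ping -M do -s 1472 server.example.com` (hostname again a placeholder) checks whether a full 1500-byte frame survives the path, since 1472 bytes of payload plus 28 bytes of ICMP/IP headers comes to exactly 1500.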
I want to make it clear that I’m coming at this from a network engineering perspective, not from any knowledge of Syncthing’s internals or advanced configuration options.
Relaying is not something I would suggest unless there were a much lower-latency connection between the relay you’re using and each site running Syncthing, which seems like a very unlikely corner case.
How bad is the performance hit for compression?
What instability have you noticed with QUIC? Based on your post, I’m guessing you’re familiar with Wireshark and similar tools; any diagnosis there?
Consider experimenting with maxFolderConcurrency. I’d probably start with a value of 1 and see how things go from there. This should help with CPU load, perhaps offsetting the additional load from compression.
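If it helps, the setting lives in the advanced options – something like this in config.xml, also reachable via Actions > Advanced > Options in the GUI:

```xml
<options>
  <!-- sync one folder at a time; the default of 0 means as many as there are CPU cores -->
  <maxFolderConcurrency>1</maxFolderConcurrency>
</options>
```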