Hi, I’ve been a long-time Syncthing user and have donated in the past to keep development going. I’m wondering how to optimize my setup. In my experience so far, Syncthing works far better than rsync or other solutions for the number of files and volume of data I work with daily.
I sync a number of servers to one central server with 250 TB of disk. It’s all been working well for a year on the default config for 1.0.1.
The central server (hereafter known as z) is mostly idle and otherwise doing fine. All the servers are spread around the world, but they’re on full gigabit connections and I routinely average 860 Mbps between them.
A basic diagram: a --sendonly--> z, b --sendonly--> z, c --sendonly--> z. Or, said another way, (a…y) --sendonly--> z.
Some data size samples:
a is syncing 22,649,455 files in 42,560,953 directories totaling 2.79 TB.
b is syncing 22,342,732 files in 42,481,303 directories totaling 28.4 TB.
c is syncing 60,058 files in 46,980 directories totaling 34 GB.
d…n servers are more like c.
a and c sync fine without issue and are generally at 99%-100% anytime I check them.
However, b is constantly lagging behind. A full scan can take days to complete, and sometimes it syncs only a few TB and then appears to go back to scanning. I’ve played with the Full Scan Interval a few times, but no setting seems to matter.
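For reference, the folder entry for b in config.xml looks roughly like this — the folder ID, label, path, and device ID below are placeholders, not my real values, and I’ve trimmed the other child elements:

```xml
<!-- rescanIntervalS is the Full Scan Interval I've been adjusting;
     fsWatcherEnabled hands change detection to the OS watcher
     between full scans. -->
<folder id="b-data" label="b-data" path="/srv/data" type="sendonly"
        rescanIntervalS="86400"
        fsWatcherEnabled="true" fsWatcherDelayS="10">
    <device id="Z-DEVICE-ID"></device>
</folder>
```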
I’ve read these forums for months and gone through the various config and advanced config options. Nothing really jumps out as an obvious improvement, especially given that a and c are working just fine. Servers a and b have nearly identical hardware.
Added note: most of the servers are Ubuntu 18.x; z and some others run FreeBSD 11 or 12. In this case, b is Ubuntu 18.10 and z is FreeBSD 11. Both OSes are fully patched.
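One data point I can add from b, in case change detection is part of the problem: as I understand it, Syncthing’s filesystem watcher on Linux needs one inotify watch per directory, and with ~42M directories the common default limit of 8192 is nowhere near enough. This is just the read-only check I ran, nothing Syncthing-specific:

```shell
# Current per-user inotify watch limit on this box (Ubuntu).
# The watcher takes one watch per watched directory, so this
# would need to exceed the folder's directory count.
cat /proc/sys/fs/inotify/max_user_watches
```

Raising it is a sysctl (fs.inotify.max_user_watches), but with b’s directory count I’m not sure any sane value covers it, which is partly why I’m asking.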
Any obvious pointers?