I’m running syncthing on two machines: an Ubuntu 16.04 VM and a MacBook. There’s about 5 GB to sync across 110k files. It’s been going for hours now and is 8% complete. rsyncing the same data takes a couple of minutes at most; I didn’t time it exactly, but it’s orders of magnitude faster.
Neither machine is starved for CPU (single-digit % on the target Mac, about 30% on the Linux machine) or disk I/O (mostly zero on both sides), yet network I/O is ~100 bytes per second. The sync seems to progress at about one file per second.
FWIW, calling /rest/db/status?folder=my-folder-id takes ~4 seconds every time. The GUI is closed on both sides. I’m using syncthing v0.14.49.
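For reference, this is roughly how I’m timing that call (a minimal Go sketch; the address, folder ID and API key are placeholders for my actual values):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Placeholder address, folder ID and API key; substitute your own.
	url := "http://127.0.0.1:8384/rest/db/status?folder=my-folder-id"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-API-Key", "XXXXXXXX") // from Settings -> General -> API Key

	start := time.Now()
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%v, %d bytes, HTTP %d\n", time.Since(start), len(body), resp.StatusCode)
}
```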
Anything I can do to find the bottleneck? (perf top on the Linux side shows the Go GC and LevelDB having lots of fun, but they’re still far from saturating a single core, let alone all 12 threads.)
Edit: the hosts see each other over local IPs, without a relay.
No, there are about 12k directories (which I could also watch being synced one by one before the file sync started).
Can I speed up the DB somehow? For example, I could live without fsync() after every write if that makes the sync fast: if the DB ever gets corrupted, I can just nuke it and resync. Though not really, if syncing 5 GB takes two days.
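To illustrate the trade-off I have in mind, here is a minimal goleveldb sketch (not syncthing’s actual code; the path and keys are made up). A synced write forces an fsync per Put, an unsynced one leaves flushing to the OS, which is much faster but can lose recent writes on a crash:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	// Hypothetical path, purely for illustration.
	db, err := leveldb.OpenFile("/tmp/example-index.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Sync: true -> fsync after the write; Sync: false -> no fsync.
	synced := &opt.WriteOptions{Sync: true}
	unsynced := &opt.WriteOptions{Sync: false}

	if err := db.Put([]byte("key-durable"), []byte("value"), synced); err != nil {
		log.Fatal(err)
	}
	if err := db.Put([]byte("key-fast"), []byte("value"), unsynced); err != nil {
		log.Fatal(err)
	}
}
```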