Very slow network sync speed & Syncthing index DB overload (16-HDD node)

Hello. I need suggestions on how to make Syncthing sync faster. I have a server (receive-only) with 16 HDDs that syncs data from 20 remote clients. The server has a 10 Gbps network link and a modern CPU. The clients (send-only) run different OSes and sit in different countries; some are located in the same datacenter as the server. Most clients have a 1 Gbps network and are connected directly to the WAN, and Syncthing uses TCP connections only. Nothing runs on the server besides Syncthing and an SSH server (near idle). The latest stable versions of Go and Syncthing are used, and all clients run the same Syncthing version. (Note: for now I use Syncthing purely as a whole-data sync solution; I disabled the filesystem watcher and set the full rescan interval to 3 months, because I cannot even finish the initial sync — it is extremely slow.)
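For context, the per-folder settings described above correspond to roughly the following Syncthing folder attributes (the folder id and path here are placeholders; 7776000 s is about 3 months):

```xml
<folder id="client01-data" path="/mnt/hdd01/client01"
        type="receiveonly"
        rescanIntervalS="7776000"
        fsWatcherEnabled="false">
</folder>
```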

Problem: network sync speed is very slow. I have not been able to finish the initial sync for about a month. The rate often sits at 0 bps; sometimes it is 4 Mbps, sometimes 20 Mbps. Very rarely I see 100, 200 or 300 Mbps, and only for a few seconds.

The network is a shared 10 Gbps port; iperf shows 7 Gbit/s over a single connection. The hard drives are not overloaded. All drives are enterprise datacenter SATA III (6 Gbps) HDDs, and each client writes to its own dedicated HDD on the server, so one client cannot stall the sync for the others and the disk load runs in parallel. The CPU is not overloaded either: 32 cores are available to Syncthing (shared with system processes). RAM is loaded up to the defined limit: GOMEMLIMIT=10737418240, GOGC=off.
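For reference, assuming Syncthing runs under systemd (the unit name and drop-in path are examples), the runtime limits above can be set with a drop-in like:

```ini
# /etc/systemd/system/syncthing@.service.d/limits.conf (example path)
[Service]
Environment=GOMEMLIMIT=10737418240   # 10 GiB soft memory limit for the Go runtime
Environment=GOGC=off                 # disable ratio-based GC; collection is driven by GOMEMLIMIT
```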

CPU and RAM profiling attached.

The only limit being hit is RAM. It is capped at 10240 MB, and the GC runs to keep usage at that limit.

Syncthing index DB size: 29 GB (index-v0.14.0.db)

Total local data on the Syncthing server: 16,357,304 folders, 3,451,993 files, ~2.42 TiB.

Estimated total the server must hold for all clients after the initial sync: 50M folders, 100M files, ~100 TB of data.

Currently connected clients: 4 (the others are disabled to speed up the sync of these four).

The problem should not be on the client side: everything tunable there has been tuned, the clients are all different, and it is implausible that every single client has the same problem regardless of OS, country and so on.

Settings as follows:

<configuration version="37">
    <folder EACH SAME SETTINGS>
        <minDiskFree unit="GB">0</minDiskFree>
    </folder>
    <device EACH SAME SETTINGS>
        <minHomeDiskFree unit="%">1</minHomeDiskFree>
    </device>
</configuration>

Known bottleneck: the index database disk is overloaded. The index file index-v0.14.0.db sits on a separate disk dedicated to the index (used for nothing else), but even so it does not perform well. atop screenshot attached.

Need help to:

  1. Increase network sync speed toward the NIC limits (or at least to 100 Mbps per client, which is a reasonable starting point).
  2. Offload the Syncthing index database so that Syncthing is no longer HDD-bound.

syncthing-heap-linux-amd64-v1.23.4-205409.pprof (550.4 KB)
syncthing-cpu-linux-amd64-v1.23.4-205335.pprof (37.9 KB)

Hard to say without looking deep into the details – you’re certainly at the edge of the performance envelope here. If nothing else, make sure the database is on an SSD and not on a spinning disk.
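Assuming the index lives in the default config directory, relocating it to an SSD is just a move plus a symlink, done while Syncthing is stopped. A sketch on throwaway paths, since the real config dir and SSD mount point vary:

```shell
# Demo on temporary directories; substitute your real Syncthing config dir
# (e.g. ~/.config/syncthing) and a directory on the SSD, with Syncthing stopped.
conf=$(mktemp -d)   # stands in for the Syncthing config dir
ssd=$(mktemp -d)    # stands in for the SSD mount point
mkdir "$conf/index-v0.14.0.db"   # the index is a LevelDB directory

mv "$conf/index-v0.14.0.db" "$ssd/"
ln -s "$ssd/index-v0.14.0.db" "$conf/index-v0.14.0.db"

ls -ld "$conf/index-v0.14.0.db"  # now a symlink into the SSD path
```

On the next start, Syncthing follows the symlink and all index I/O lands on the SSD.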

A few questions…

  • With a 32-core CPU and only Syncthing + an OpenSSH server beyond the usual complement of system tools, what's the reasoning behind limiting Go to 10 GB of memory? Does the server have less than 16 GB of RAM?
  • It’s unusual for a mainboard to have 16 onboard SATA connectors. Is there an HBA or something else providing the connections?
  • What make/model are the HDDs?
  • There are 16 HDDs but 20 remote clients, yet you said that “[…] each client uses its own HDD inside Syncthing server (HDD is dedicated for a client) […]”. How do 20 clients map onto 16 disks?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.