Syncthing not using all available resources

I know this probably wouldn’t be ideal for everyone, but when we run Syncthing as a service on our server we want it to use all available resources when it needs them, i.e. when performing transfers. At the moment the WebGUI shows CPU utilisation on a per-core basis, and the highest I have ever seen it reach is 189%.

We have 6 cores:

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
stepping : 4
microcode : 0x427
cpu MHz : 2100.000
cache size : 15360 KB
physical id : 0
siblings : 6
core id : 0
cpu cores : 6
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm ida arat pln pts dtherm
bogomips : 4200.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:

You are probably hitting the IO cap, hence the CPU left unused.

What’s the IO cap? Can it be removed or increased?

It’s all the waiting for disk reads, network transfers and stuff like that.

You could push the limits by getting faster hard drives (SSD) and a faster network connection.

It’s on a 1Gb connection, so that isn’t the issue; the fastest I’ve seen transfers go is ~150Mbps. If the bottleneck were the hard disk, the following output would show it:

# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 2744 MB in  3.00 seconds = 914.66 MB/sec

At the moment it isn’t much of an issue but as the folders get bigger I can see this being a problem.

So, what’s it doing that you would like to go faster? Scanning, sending, receiving? To/from how many other devices?

If I had to guess, for transfers the most likely bottleneck is the per-device (per-connection) TLS encryption and compression. That is very hard indeed to parallelise for a single connection.
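To get a feel for what a single core can push through encryption, here is a rough, hand-rolled sketch (not Syncthing code) that measures single-threaded AES-GCM throughput in Go. The function name and the 1 MiB chunk size are my own choices for illustration; the fixed nonce is fine for a benchmark but would never be acceptable in real crypto code.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"time"
)

// encryptThroughputMBps encrypts totalMB megabytes with AES-128-GCM on a
// single goroutine and returns the achieved throughput in MB/s.
func encryptThroughputMBps(totalMB int) float64 {
	key := make([]byte, 16)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, gcm.NonceSize()) // fixed nonce: benchmark only!
	buf := make([]byte, 1<<20)             // encrypt in 1 MiB chunks

	start := time.Now()
	for i := 0; i < totalMB; i++ {
		gcm.Seal(nil, nonce, buf, nil)
	}
	return float64(totalMB) / time.Since(start).Seconds()
}

func main() {
	fmt.Printf("single-core AES-GCM: %.0f MB/s\n", encryptThroughputMBps(64))
}
```

If the number this prints is comfortably above ~19 MB/s (150 Mbps), raw encryption alone isn't your ceiling and the overhead is more likely in the combination of TLS record handling, compression and hashing per connection.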

As an example, the other day I added about 6 devices to a 1.5GB share. They were all syncing at the same time, including my laptop at home on a 152Mb connection, and the monitored outgoing speed went no higher than 150Mbps, as reported both by the Syncthing WebGUI and by our firewall.

There were also BT Infinity connections, which go up to 80Mbps, syncing at the same time.

How can I determine if the encryption is the bottleneck?

Profiling. If this is on Linux, there are some threads about that here, or Google generic golang profiling. I’m on my phone right now, so it’s painful to find and copy the relevant references. :slight_smile:

I would expect you to be able to use one core per connection for the encryption etc., with 150 Mbps being in the ballpark of what I’ve seen max per connection.

Note that you have the same potential bottleneck on the other side, of course, so slowest common bottleneck wins.