Network performance and CPU utilization

Hello,

I’ve been testing Syncthing as a replacement for BTSync, and so far I really appreciate the software. However, I have a few questions regarding network performance. My network is composed of:

  • a desktop, CPU Intel Core i3 (LAN)
  • own laptop, CPU Intel Core2Duo (LAN)
  • work laptop, CPU Intel Core i5 (LAN)
  • a netbook, do not remember CPU but not very powerful (LAN or WAN)
  • a dedicated server on an external host

All of them are running Ubuntu.

My main share is composed of ~ 39 Go of data, ~ 30,000 files.

  1. Every time a peer connects to another one, it seems that the full index of files is transferred between the peers. In my case, that is about 100 Mo transferred. On the local network this is not a big issue, but the connection with the dedicated server is more problematic. If I turn on the 3 PCs on my local network at the same time, that makes 300 Mo to transfer at 400 ko/s. Still OK, but I can clearly see the issue coming for larger repositories (especially since Syncthing often requires a restart when the configuration is changed). Is there a reason to transfer the whole file index? Would it be possible to transfer, for example, a hash of the index and compare it with the local copy? That way the whole index would only need to be downloaded when it has actually changed.

  2. The CPU usage of Syncthing is quite high, actually much higher than with BTSync or a simple SFTP transfer. This seems to result in lower transfer rates. For example:

  • desktop - own laptop: 8 Mo/s for Syncthing (CPU of the laptop at 100 %), 33 Mo/s for BTSync or SFTP (which is the HDD speed limit)
  • desktop - netbook: 2.5 Mo/s for Syncthing (CPU of the netbook at 100 %), 7 Mo/s for BTSync
  • desktop - work laptop: 15 Mo/s for Syncthing (CPU of the desktop close to 100 %), 45 Mo/s for BTSync

Do you have an idea why Syncthing uses so many more resources? Could it be caused by the compression? By encryption that is not optimized? BTSync uses 128-bit AES, and SFTP uses Blowfish or AES. I see that Syncthing also uses 128-bit AES, so I would expect a similar CPU load. From what I see in main.go, you use a standard Go implementation, right?

Thanks!

I think it’s planned to do this a bit more intelligently; currently it’s just the easiest way to exchange indexes. This is also one thing that I don’t really like, because my upload is really slow and the index exchange takes me several minutes.

We compress, encrypt and hash the files, and everything combined gives a high CPU load. On a LAN, disabling compression usually helps a bit.

As for your transfer speeds, I have no idea what you mean; I’ve never heard of ko, Mo, etc. What does the “o” mean?

I’ll give it a try, thanks.

The “o” stands for octet. Well, the core of my message is that Syncthing seems to be about 3 times slower than BTSync or SFTP, while using more resources. I guess the Go implementation of the encryption algorithm might be the reason…

Yeah, persistent indexes are something that is missing. What we really need to do is version the indexes, and then only transfer them when the version has changed (or perhaps only the parts of the index that changed). @calmh might have a better idea of when and if this is coming.
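
Very roughly, I mean something like the sketch below. This is purely illustrative, not actual Syncthing code; all the names here (fileEntry, indexVersion, …) are made up:

package main

import (
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// fileEntry is a made-up stand-in for an index entry.
type fileEntry struct {
    Name    string
    Size    int64
    ModTime int64
}

// indexVersion computes a fingerprint over the whole index; if two peers
// already hold the same fingerprint, the full index doesn't need resending.
func indexVersion(files []fileEntry) [sha256.Size]byte {
    h := sha256.New()
    var buf [8]byte
    for _, f := range files {
        h.Write([]byte(f.Name))
        binary.BigEndian.PutUint64(buf[:], uint64(f.Size))
        h.Write(buf[:])
        binary.BigEndian.PutUint64(buf[:], uint64(f.ModTime))
        h.Write(buf[:])
    }
    var v [sha256.Size]byte
    copy(v[:], h.Sum(nil))
    return v
}

func main() {
    local := indexVersion([]fileEntry{{Name: "a.txt", Size: 42, ModTime: 1400000000}})
    remote := indexVersion([]fileEntry{{Name: "a.txt", Size: 42, ModTime: 1400000000}})
    // Only when the fingerprints differ would the full index be exchanged.
    fmt.Println("need full index exchange:", local != remote)
}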

As for the CPU load, we can only guess, but if you were to provide some profiling we could tell you more. I personally don’t have a repo big enough to hit these problems, so I cannot suggest anything. You might be right that Go’s TLS is just not brilliant.

OK, I’ll try the profiling. Is it STCPUPROFILE=yes ./syncthing? By the way, to reproduce the behavior it is not necessary to have a big repo; just be sure to have 2 machines connected on a high-speed LAN, preferably 1 Gbps. Just add a big file (something like ~1 Go), and you will have time to monitor the speed.
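
In case it helps with reproducing this, here is roughly how such a test file can be generated with a few lines of Go (random data so compression can’t cheat; the file name and size are just examples):

package main

import (
    "crypto/rand"
    "io"
    "log"
    "os"
)

func main() {
    // Write 1 GiB of random data so compression can't shrink the transfer.
    f, err := os.Create("testfile-1g.bin") // example path
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if _, err := io.CopyN(f, rand.Reader, 1<<30); err != nil {
        log.Fatal(err)
    }
}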

Here is a profile file from transferring 2.5 Go between my desktop and my own laptop: cpu-3847.pprof (511.9 KB)

I had a quick look at the results of the profiler, and it seems clear that most of the resources are used for the encryption:

(pprof) top
Total: 5388 samples
    1426  26.5%  26.5%     1426  26.5% crypto/aes.encryptBlockGo
    1101  20.4%  46.9%     1101  20.4% crypto/cipher.(*gcm).mul
     299   5.5%  52.4%      299   5.5% github.com/bkaradzic/go-lz4.(*encoder).writeLiterals
     267   5.0%  57.4%      272   5.0% runtime.MSpan_Sweep
     228   4.2%  61.6%      365   6.8% io.ReadAtLeast
     201   3.7%  65.4%      500   9.3% github.com/calmh/xdr.(*Reader).ReadUint32
     198   3.7%  69.0%      804  14.9% github.com/calmh/xdr.(*Reader).ReadBytesMaxInto
     170   3.2%  72.2%      170   3.2% scanblock
     126   2.3%  74.5%      243   4.5% runtime.makeslice
     116   2.2%  76.7%      443   8.2% github.com/bkaradzic/go-lz4.Encode

On the other hand, that could be expected. It would probably make more sense to compare the performance of the Go libraries against other implementations, such as C/C++.
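
As a rough first data point, something along these lines could measure Go’s AES-128-GCM throughput on its own, to put next to e.g. an OpenSSL benchmark. This is only a sketch; the block size and iteration count are arbitrary, and reusing the nonce is acceptable here only because it is a throughput test, not real encryption:

package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "fmt"
    "time"
)

func main() {
    key := make([]byte, 16) // AES-128, the key size discussed above
    if _, err := rand.Read(key); err != nil {
        panic(err)
    }

    block, err := aes.NewCipher(key)
    if err != nil {
        panic(err)
    }
    gcm, err := cipher.NewGCM(block)
    if err != nil {
        panic(err)
    }

    nonce := make([]byte, gcm.NonceSize())
    in := make([]byte, 128<<10)                    // 128 KiB per Seal call
    out := make([]byte, 0, len(in)+gcm.Overhead()) // reused output buffer
    const rounds = 8192                            // ~1 GiB in total

    start := time.Now()
    for i := 0; i < rounds; i++ {
        gcm.Seal(out[:0], nonce, in, nil)
    }
    elapsed := time.Since(start)

    mib := float64(len(in)) * rounds / (1 << 20)
    fmt.Printf("AES-128-GCM encrypt: %.1f MiB/s\n", mib/elapsed.Seconds())
}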

Can Syncthing use AES-NI for the encryption? That could bring a huge benefit, especially on smaller systems like the AMD A1.

We don’t want to use C/C++ libraries, as that would most likely kill cross-compiling. As for AES-NI, we are just using what Go provides, which does not utilize AES-NI. Plus, AES-NI doesn’t seem to exist on older ARMs.

I am aware of that, I was just thinking that such a benchmark would be interesting to have.

OK, I googled a bit. Go should use AES-NI on AMD64 if it’s available.
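
For anyone who wants to check their own machine, a snippet like this should show whether the CPU advertises the relevant instructions (assuming the golang.org/x/sys/cpu package is available in your Go setup):

package main

import (
    "fmt"
    "runtime"

    "golang.org/x/sys/cpu"
)

func main() {
    // On amd64, Go's crypto/aes uses the AES-NI instructions (and PCLMULQDQ
    // for the GCM multiplication) when the CPU advertises them.
    fmt.Println("arch:     ", runtime.GOARCH)
    fmt.Println("AES-NI:   ", cpu.X86.HasAES)
    fmt.Println("PCLMULQDQ:", cpu.X86.HasPCLMULQDQ)
}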

Maybe a stupid question, but is it possible to disable encryption on the LAN? Or simply build Syncthing with encryption disabled, so I can perform some tests?

You can probably customize the build to rip out TLS, but it’s not as easy as changing a variable, as we identify a node by the TLS certificate it presents. I guess you could rip out TLS and compile two versions, one for each end, with the device ID hard-coded in.

Also, you can change the cipher suite, which is a single-variable change, but I guess any cipher still adds some overhead.
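
For illustration, this is the kind of single-variable change meant here, in plain crypto/tls terms; it is not Syncthing’s actual configuration, and the suites listed are arbitrary examples (RC4 is weak by today’s standards and only stands in for a cheaper, non-AES suite):

package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    // The preferred suites are just a single slice in the TLS config.
    cfg := &tls.Config{
        CipherSuites: []uint16{
            tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,          // cheaper, non-AES (insecure today)
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,   // AES-128-GCM fallback
        },
    }
    fmt.Printf("%d cipher suites configured\n", len(cfg.CipherSuites))
}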

@calmh @AudriusButkevicius would you like to tell us if and when you want to implement improved indexing?

I personally have no plans.