Does the CPU load depend on the size of the shared folder?

I have two shared folders; each has a tmp folder in its root containing only one file. I use plain Syncthing plus syncthing-inotify to tell Syncthing which folder to rescan.
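For context, syncthing-inotify triggers these targeted rescans through Syncthing's REST API (`POST /rest/db/scan` with a `folder` and `sub` parameter). A minimal sketch of that call, with placeholder folder id, path, and API key:

```python
# Sketch of the subdirectory-scan request syncthing-inotify issues.
# "my-folder", the sub path and the API key are placeholders for my setup.
from urllib.parse import urlencode

def scan_url(base, folder_id, sub_path):
    """Build the /rest/db/scan URL that limits the scan to one subpath."""
    query = urlencode({"folder": folder_id, "sub": sub_path})
    return f"{base}/rest/db/scan?{query}"

url = scan_url("http://localhost:8384", "my-folder", "tmp/changed-file.txt")
print(url)

# Sending this as a POST with the X-API-Key header asks Syncthing to
# rescan only that subpath instead of walking the whole share:
# import urllib.request
# req = urllib.request.Request(url, method="POST",
#                              headers={"X-API-Key": "YOUR_API_KEY"})
# urllib.request.urlopen(req)
```

So in theory only the changed subpath should be scanned, which is why the load described below surprises me.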

For each of these shares I change only that single file in the otherwise empty tmp folder. The resulting CPU load is very different between the two shares, and it correlates with the size of the shared folder.

A shared folder with only a couple of files causes no noticeable CPU load.

A shared folder with 700k+ files totalling 1 TB produces a very distinctive CPU load profile: first, 2 of my 4 cores run at 100% for about 20 seconds; then there is no CPU load for 10 seconds; then 1 core runs at 100% for 30+ seconds. After that the CPU load drops back to zero.

As you can imagine, changing and syncing a few small files every minute deep inside a big share (even one without many nested folders to scan) continuously puts a serious load on the CPU.

So my question is: what can I do to reduce this load? Is there any setting that prevents Syncthing from doing all this (useless?) work?
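The only related knob I am aware of is the periodic full-rescan interval; since syncthing-inotify already detects changes, raising it should at least avoid frequent full walks of the big share. A sketch of what I think that looks like in config.xml (the folder id and path are placeholders, and I am assuming `rescanIntervalS` is the right attribute):

```xml
<!-- Sketch, not my actual config: raise the periodic full rescan to
     once a day, relying on syncthing-inotify for change detection.
     "big-share" and the path are placeholders. -->
<folder id="big-share" path="/data/big-share" rescanIntervalS="86400">
</folder>
```

But as far as I can tell this does not explain, or help with, the load from the targeted subdirectory rescans themselves.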

There was a bug related to this that will be fixed in the next version, see: