Newbie question re: CPU utilization at 'rest' and recommended maximum folder sizes.

Hello,

New user here. First, thank you to the developers. I have been using BTSync for over a year and now want to move away from it; Syncthing looks like the best candidate for a replacement.

My question:

For testing, I installed Syncthing on four computers: two dedicated servers (Ubuntu), one Linux laptop, and one Windows laptop. Everything seems set up nicely, and all the computers can see each other and communicate directly.

My concern is with CPU utilization at rest.

For example, I tested with one folder of media files, approx. 30 GB. As expected, indexing took quite a while (I left the machines running all night), but it seems to have completed successfully. Even so, there are frequent CPU spikes: in both the Syncthing interface and top/htop, CPU usage apparently goes above 200% for a few seconds, every few seconds.
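In case it helps, this is roughly how I've been sampling the spikes outside of top/htop. A minimal sketch using Python's psutil; matching the process by the name "syncthing" is an assumption, so adjust it if your process is named differently:

```python
# Sample Syncthing's CPU usage once per second to quantify the spikes.
# Assumes the process name contains "syncthing"; adjust if yours differs.
import psutil

procs = [p for p in psutil.process_iter(attrs=["name"])
         if p.info["name"] and "syncthing" in p.info["name"].lower()]

for _ in range(30):  # watch for ~30 seconds
    for p in procs:
        # cpu_percent(interval=1) blocks for 1 s and can exceed 100%
        # on multi-core machines, matching what top/htop report.
        print(f"pid {p.pid}: {p.cpu_percent(interval=1):.1f}% CPU")
```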

I would have expected Syncthing to settle down after indexing, but this does not seem to be happening. I have no ignore patterns defined, the rescan interval is set to 86400 s (1 day) on all machines, and RAM utilization is 51.5 MiB.
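To make sure that interval is really what the running instance is using, I checked it over the REST API. A sketch of what I did, assuming the default GUI address localhost:8384; the API key value is a placeholder for the one shown in the GUI settings:

```python
# Confirm the rescan interval Syncthing is actually using, via its REST API.
import requests

API_KEY = "your-api-key-here"  # placeholder; copy yours from the GUI settings

resp = requests.get("http://localhost:8384/rest/system/config",
                    headers={"X-API-Key": API_KEY})
resp.raise_for_status()

for folder in resp.json()["folders"]:
    print(folder["id"], "rescanIntervalS =", folder["rescanIntervalS"])
```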

Perhaps it’s my ignorance of what is happening under the hood. Is 30 GB too much for a sync folder? Is there a recommended limit?

Assuming everything is indexed, what could be causing the CPU spikes?

Kind thanks in advance for any guidance, stim

The GUI has an impact on CPU load, as it has to periodically traverse the database to update the state shown in the UI. See whether not using the GUI helps.
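If you still want to keep an eye on the folder without leaving the GUI open, you can poll the REST API instead. A minimal sketch, assuming the default listen address; the API key and the folder ID ("media" here) are placeholders for your own values:

```python
# Check a folder's state from the command line instead of keeping the GUI open.
import requests

API_KEY = "your-api-key-here"  # placeholder
FOLDER_ID = "media"            # placeholder folder ID

resp = requests.get("http://localhost:8384/rest/db/status",
                    params={"folder": FOLDER_ID},
                    headers={"X-API-Key": API_KEY})
resp.raise_for_status()

status = resp.json()
print("state:", status["state"])          # e.g. "idle", "scanning", "syncing"
print("local files:", status["localFiles"])
```

A one-off request like this is much cheaper than the GUI's continuous polling, so it shouldn't reintroduce the load you're seeing.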

Hi and thanks,

Wow, that does make a big difference.

Still seeing the occasional spike, but I will continue to observe and see what happens. It could simply be that my hardware isn’t up to the task of syncing large folders. I will adjust my strategy accordingly.

If I see anything, I will report back. For now, I'm sending the anonymous usage stats.

Thanks again for your great work and generosity.

Kind regards, stim