heavy load on folder scan

For the past few days, folder scans have been taking far too long, and memory and CPU usage have grown (over 80%). Even though it is a slow device, it has always worked fine and folder scans have always completed in a reasonable time. The server syncs about 600,000 files in 38 folders (~500 GB). I tried downgrading to version 1.2.2, but the issue persists. As it is, the server is unusable. Could the conversion to the ‘large’ database be the cause? There has been no change in Syncthing’s configuration in the last few weeks, just the upgrade to version 1.3.0. Current version is v1.2.2, Linux (32-bit). Any idea or suggestion for investigating the issue would be appreciated. Thanks in advance.

Without any logs - preferably those from the first start-up after this happened - nobody can really help. Are these initial scans (i.e. hashing), or have those already been performed? How many folders are scanning simultaneously?

You say the scans used to take a “reasonable” time, but don’t say what that was, or how long they take now.

Since you are running a 32-bit build, you may also be affected by the database tuning setting. Our great devs have already implemented a fix for the next version; until then, you need to set it manually to “small” after updating to v1.3.0.
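
For anyone who prefers to change this outside the GUI, here is a minimal sketch that edits config.xml directly. It assumes the default Linux config path and that the option is stored as a `databaseTuning` element under `<options>`; stop Syncthing before editing the file.

```python
#!/usr/bin/env python3
"""Sketch: set Syncthing's database tuning to "small" by editing
config.xml directly. Assumes the default Linux config location and
that Syncthing is stopped while the file is edited."""
import xml.etree.ElementTree as ET
from pathlib import Path

CONFIG = Path.home() / ".config" / "syncthing" / "config.xml"

tree = ET.parse(CONFIG)
options = tree.getroot().find("options")

# <databaseTuning> may be absent if the default ("auto") was never
# overridden; create the element in that case.
tuning = options.find("databaseTuning")
if tuning is None:
    tuning = ET.SubElement(options, "databaseTuning")
tuning.text = "small"

tree.write(CONFIG, encoding="utf-8", xml_declaration=True)
print(f"databaseTuning set to 'small' in {CONFIG}")
```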


By “reasonable” I meant about 2 minutes for a folder with roughly 19,000 files, versus 40 minutes now, and that is with just one folder scanning. With several folders scanning at the same time, the server would be unusable. Anyway, I upgraded again to v1.3.0, set Database Tuning to ‘small’ and restarted. Scan times are now acceptable again. Furthermore, I set the rescan interval to 1 day (it was 1 hour) for all folders, to limit the chance of simultaneous rescans.
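
In case it helps anyone with many folders, here is a minimal sketch along the same lines that applies the one-day interval everywhere at once, assuming the interval is stored as the `rescanIntervalS` attribute on each `<folder>` element in config.xml:

```python
#!/usr/bin/env python3
"""Sketch: set the rescan interval to one day (86400 s) for every
folder in config.xml. Same assumptions as before: default Linux
config path, Syncthing stopped while editing."""
import xml.etree.ElementTree as ET
from pathlib import Path

CONFIG = Path.home() / ".config" / "syncthing" / "config.xml"

tree = ET.parse(CONFIG)
for folder in tree.getroot().findall("folder"):
    # The interval is stored as the rescanIntervalS attribute.
    folder.set("rescanIntervalS", "86400")
    print(f"{folder.get('label') or folder.get('id')}: rescanIntervalS=86400")

tree.write(CONFIG, encoding="utf-8", xml_declaration=True)
```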


Thanks for your suggestions, which I followed (see my message above). I also have more than 100,000 .ldb files in the database folder. I don’t know whether a -reset-database would help to improve performance.
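
For anyone who wants to check their own database, this is roughly how the files can be counted; the index folder name (index-v0.14.0.db) is what my v1.x install uses, so adjust if yours differs:

```python
#!/usr/bin/env python3
"""Sketch: count the .ldb files in Syncthing's LevelDB index folder
and report their total size. The folder name below matches a v1.x
install on Linux and may differ on other setups."""
from pathlib import Path

DB_DIR = Path.home() / ".config" / "syncthing" / "index-v0.14.0.db"

ldb_files = list(DB_DIR.glob("*.ldb"))
total = sum(f.stat().st_size for f in ldb_files)
print(f"{len(ldb_files)} .ldb files, {total / 2**30:.2f} GiB total")
```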
