Excessive RAM usage during initial scan


I currently have Syncthing 1.5.0 (ARM) installed on a Raspberry Pi 1 B+ with 512 MB of RAM, and I am trying to sync about 800k files, roughly 1 TB in size. It has been running for 5 days and Syncthing is already using 1.7 GB+ of RAM. The web UI is extremely slow (it takes tens of minutes just to load some basic info), so I can't see what it is doing. The other clients can't connect to it because of I/O timeout errors. CPU usage is only 10-20%, probably because it is waiting on I/O and swap.

Is it normal for Syncthing to use that much RAM during scanning or syncing? I have had it running for half a year now, and at some point it had (slowly) managed to get about 700 GB of files in sync, but things broke in the last 2 months or so (I have added about 270 GB of data since), and now it doesn't look like it is working at all, even after leaving it alone for weeks. By the looks of things, it managed to sync 3 smaller folders and is struggling with a large one containing 900 GB of data. I have database tuning set to small (the database is about 500 MB in size) and I have changed all folders to random sync order.

For 1 TB of data across 800k files, probably yes; it can reach that while scanning.

You could probably grab a heap profile to check where the memory is spent. The docs explain how.
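If you haven't pulled one before, the usual approach (per the Syncthing profiling docs; the exact filename below is just the one posted later in this thread) is roughly:

```
# Restart Syncthing with heap profiling enabled; it writes
# syncthing-heap-<os>-<arch>-<version>-<timestamp>.pprof files to the
# working directory as heap usage grows.
STHEAPPROFILE=1 syncthing

# Later, inspect a written profile (requires a Go toolchain):
go tool pprof -top syncthing-heap-linux-arm-v1.5.0-192715.pprof
```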

I recommend setting Max Folder Concurrency = 1, which helps a lot during the initial scan. It cut my "startup" scan time (~2 million files) from ~3 hours down to 1 hour.
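If you prefer editing config.xml directly (with Syncthing stopped) rather than using the GUI, the setting lives under `<options>`; treat this as a sketch and verify the option name against your version (it was added around Syncthing 1.4):

```xml
<options>
    <maxFolderConcurrency>1</maxFolderConcurrency>
</options>
```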

For this phase you could use the REST API to get just the information you need:
Fetch the folder list from /rest/system/config, then loop through the folders and query /rest/db/status?folder=<folderid> for each.
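As a minimal Python sketch of that loop (assuming the default GUI address localhost:8384; the API key from Settings > General goes in the X-API-Key header):

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE_URL = "http://localhost:8384"  # assumed default GUI address

def folder_status_urls(config, base_url=BASE_URL):
    """Build one /rest/db/status URL per folder listed in a
    /rest/system/config response."""
    return [
        f"{base_url}/rest/db/status?" + urlencode({"folder": f["id"]})
        for f in config.get("folders", [])
    ]

def fetch_json(url, api_key):
    """GET a REST endpoint, authenticating via the X-API-Key header."""
    req = urllib.request.Request(url, headers={"X-API-Key": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative config shaped like a /rest/system/config response:
sample = {"folders": [{"id": "photos"}, {"id": "docs"}]}
print(folder_status_urls(sample))
```

In real use you would call `fetch_json(BASE_URL + "/rest/system/config", api_key)` first, pass the result to `folder_status_urls`, and then `fetch_json` each status URL.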

Shouldn't it default to 1 on a single-core CPU? I tried changing it but gave up waiting.

Here is my heap profile syncthing-heap-linux-arm-v1.5.0-192715.pprof (279.4 KB)

Syncing, database operations, and, oddly, TLS handshakes. Nothing that stands out as broken to me.

Is it possible to somehow reduce RAM usage?

Also, it now throws "no connected device has the required version of this file" errors, but it can't connect to the other clients because of TLS handshake timeouts. Is it possible to increase the timeout interval somehow?

I think it’s already at something absurd like 10 seconds.

Sorry, but I think you’ll have to get adequate hardware for the size of the data you are handling.

I’m guessing probably half of it is due to the GUI. Closing that might help.

Please check https://forum.syncthing.net/t/optimising-syncthing-for-low-end-hardware/14885.

I must say though that the memory gains may not be great. In my case, I have two devices with the same folders synced. One of them has been optimised for low RAM, while the other uses the default settings. The RAM usage difference between them is barely noticeable, i.e. maybe 5-10 MB at best (70-75 MB on Device A vs 80 MB on Device B).

Thank you for the suggestions. If nothing major can be changed in Syncthing and things are working as intended, I guess I'll just wait another week (or a few) to see if it finishes :grinning:

I have also tried changing some swap settings and now things seem to work a bit faster.

Also, should I expect the RAM usage to climb back down after the syncing finishes?

I think memory usage is high due to database operations, so both scanning and syncing might cause it to spike.

It had just restarted with the following message before it could finish scanning the folder:

Jun 13 00:27:21 rpi1b syncthing[14548]: [BPYMT] INFO: Paused state detected, possibly woke up from standby. Restarting in 1m0s.

Is it possible to somehow disable pause detection?

It’s somewhere in advanced config.
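For reference, the relevant toggle should be restartOnWakeup (Actions > Advanced > Options in the GUI). In config.xml it would look like this (a sketch; confirm the option is present in your version):

```xml
<options>
    <restartOnWakeup>false</restartOnWakeup>
</options>
```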

Wow, I didn't expect it to make much difference, but after applying everything it managed to finish scanning in about 7 hours (whereas before it would crash after several days) and is now syncing everything. The other devices no longer time out, and it is using about 800 MB vs 1.7 GB before.

