High RAM utilization when transferring a single large file

Recently I noticed very high RAM usage (as reported by the GUI) when transferring just a single large file; it can grow high enough to make the system unusable. Adding a 1.4 GB file used roughly 1.4 GB of RAM while syncing and crashed my Raspberry Pi 3B (Syncthing 1.3.0). This was the first time I noticed such high RAM usage; I had transferred much larger files before without any problems (not sure with which Syncthing versions). On my other systems there is high RAM usage too when transferring large files (especially when sending, where RAM usage is roughly the size of the file to transfer), but maybe that's just because there is enough memory available anyway. The Raspberry Pi 3B only has 1 GB of RAM and 1 GB of swap, so now I'm a bit afraid of transferring large files again…

Is this normal / to be expected in the current version of Syncthing? Or is it maybe a wrong configuration? I started with the 0.14.?? defaults and haven't changed the configuration since then, except for relocating some folders. The similar problems I found in the forum were related to a large number of files, but in my case there was no scan or other sync going on, and 40,000 files / 125 GB shouldn't be too much anyway…

Higher RAM usage when syncing larger amounts of data is expected, but version 1.3.0 introduced large database tuning, which increases overall RAM usage if you have a large database (> 200 MiB currently - check the size of Syncthing's config directory, or look for a "large database tuning" message early in the log).

You can go into the Advanced Configuration and set Options -> Database Tuning to small. See if that solves your issue.
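For reference, the same change can be made by editing Syncthing's config.xml directly while Syncthing is stopped. A sketch of the relevant fragment (surrounding configuration elements omitted; the option name matches recent Syncthing versions):

```xml
<configuration>
  <options>
    <!-- Force the small-database tuning profile instead of "auto";
         this reduces database cache sizes at some performance cost. -->
    <databaseTuning>small</databaseTuning>
  </options>
</configuration>
```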

That does not sound normal. You should collect a memory profile and post it here.

Are you running a 32bit OS?

OS is the current 32 bit Raspbian, Syncthing version: v1.3.0, Linux (ARM)

Database size is ~260 MB. I changed Database Tuning from auto to small and restarted Syncthing. Initial memory usage after the scan was 320 MB. When sending a 1.3 GB file, usage slowly went up to 1.15 GB. 1-2 minutes after the transfer had finished, usage started to go down, and after 5 minutes it settled around 385 MB. So it might be an improvement, but not much.

How do I collect a memory profile? A quick search was not really conclusive for me…

https://docs.syncthing.net/users/profiling.html
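In short (a sketch based on the linked docs; file names are examples): Syncthing writes periodic heap profiles when started with the STHEAPPROFILE environment variable set, and the resulting .pprof files can be inspected with Go's pprof tool:

```shell
# Start Syncthing with heap profiling enabled; it periodically writes
# syncthing-heap-<os>-<arch>-<version>-<time>.pprof files to the
# current directory.
STHEAPPROFILE=1 syncthing

# Later, inspect a captured profile (requires a Go toolchain);
# "top" shows the functions retaining the most memory.
go tool pprof -top syncthing-heap-linux-arm-v1.3.0-163410.pprof
```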

You should definitely keep db tuning set to small (or update to 1.3.1-rc.2, where this is the default) - there's a bug with 32-bit and the large db settings.

Here are 2 memory profiles, the first before and the second while sending a large 1.4 GB file.

This time I noticed that several times during the transfer RAM usage dropped to 850-900 MB and then slowly went back up (to max. 1.08 GB), so the system remained usable all the time. Could this be an effect of the database tuning setting?

syncthing-heap-linux-arm-v1.3.0-163410.pprof (46.6 KB) syncthing-heap-linux-arm-v1.3.0-165833.pprof (53.0 KB)

This looks like the large db settings are still in effect (the memdb is taking 64 MB instead of 8 MB) - double-check that.

Do you have many devices pulling that file? There's a limit on how much any single device can pull at a time, but not on the total (and sending data requires roughly as much memory as is being sent). Also check that device setting (maxRequestKiB), just in case it is set to unlimited (negative).
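For reference, maxRequestKiB is a per-device setting in config.xml. A sketch (the device ID and name here are placeholders):

```xml
<device id="ABCD123-…" name="example-device">
  <!-- Cap on outstanding request data from this device, in KiB.
       0 means the internal default; a negative value means unlimited. -->
  <maxRequestKiB>0</maxRequestKiB>
</device>
```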

Oh, you are right… somehow the small setting didn't survive the restart.
Now it is back to the ~100 MB of RAM it usually uses when doing nothing, and (currently) no more than 440 MB while sending a large file.
syncthing-heap-linux-arm-v1.3.0-182231.pprof (33.0 KB)

There is/was only one other device connected, the folder is shared with 3 other devices.

I am currently seeing this problem again.
Now on 1.3.1, RAM utilization when uploading a large file (1.6 GB) is again more than 1 GB. This makes the whole system very unresponsive; it is permanently swapping, since it only has 1 GB of RAM. No memory profile this time, as enabling the debug setting did not work until I restarted Syncthing.

Maybe this high RAM usage doesn't occur all the time, since between my last post here and now I didn't notice any problems or slowdowns of the system despite uploading large files multiple times.

Database tuning is still small, and Max Request KiB is set to 0, which seems to be the default value.

We found a bug that should be fixed on master. You can try building from master, or download the latest build from build.syncthing.net


The 1.3.2 RC, to be released tomorrow, has the fix for this. You can download it today:

https://build.syncthing.net/viewLog.html?buildId=49350&buildTypeId=Syncthing_BuildLinuxCross&tab=artifacts


So good news, thank you!

