Sharp increase in memory usage in reporting (expected)

The memory usage graph in usage reporting shows a sharp increase lately, likely coinciding with v1.3.0. The increase is significant: from ~80 MB to ~110 MB. My instance reports 353 MB used, which feels like more than before (unfortunately I don’t know for sure). Looking at a heap profile accounting for 140 MB in total, ~130 MB of that is used by goleveldb. My assumption is that the large blocks change is the reason for this memory increase. Having written this, I had another look at the PR in question, and given the increased cache and write buffer this seems sensible. So I actually don’t have a question anymore - I still think it’s worth posting, as it’s nice that the effect was well captured by usage reporting :slight_smile:

PS: The corrected block reporting is also already showing in the transfer graph.

My syncthing instances used < 100 MiB before 1.3.0 and stabilized at about 800 MiB after the upgrade (initially even higher, but it went down a bit after a few hours). Large database tuning was in use, since my database is slightly larger than the threshold for small/large. Later, I manually configured database tuning to ‘small’, which brought memory usage back down to the previous ~80 MiB.
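For reference, the tuning knob I changed lives in the `<options>` section of Syncthing’s config.xml. A sketch, assuming (from memory, so double-check against the docs) that the element is spelled `databaseTuning` with valid values `auto`/`small`/`large`:

```xml
<configuration>
    <options>
        <!-- auto (the default) picks small or large based on database size;
             forcing small restores the pre-1.3.0 memory profile -->
        <databaseTuning>small</databaseTuning>
    </options>
</configuration>
```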

Are you running a 32 bit build?

No, I’m running a mix of Windows and Linux - all amd64 builds - plus a few android devices which should be AArch64.

Could you remove the “small” configuration and capture a heap profile with the high mem usage?

You know the demonstration effect? That’s precisely what I’m experiencing, because now I can no longer reproduce the 800 MiB values I definitely remember seeing in the GUI. Maybe that was just the initial switch or some other db madness? The highest I get now is 350 MiB - still an increase by a factor of 3.5, but not nearly as high as before.

Profile with tuning set to “auto”:

syncthing-heap-linux-amd64-v1.3.0-212328.pprof (36.7 KB)

Profile with tuning set to “small”:

syncthing-heap-linux-amd64-v1.3.0-213003.pprof (35.3 KB)

(Both profiles are from the same Linux machine. On Windows the usage is even lower with “small”: ~70-80 MiB, while Linux is just under 100 MiB. On “auto/large” they’re both approximately the same, at about 350 MiB.)

The increase by a factor of 3.5 is probably normal and to be expected with the new tuning options, so I wasn’t worried about that. Performance with “small” has been totally fine so far, so I just went with that. My database really isn’t that big - just over 202 MiB.

I think that’s roughly the expected memory difference.

It might be that we want phones and/or autodetected low-RAM machines to always use the small tuning, regardless of db size… Like we do for 32-bit builds now.
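A rough sketch of how such an autodetection could look. `totalRAM` would come from the platform (e.g. /proc/meminfo on Linux), and both thresholds below are made-up placeholder numbers, not anything Syncthing actually uses:

```go
package main

import "fmt"

// chooseTuning is a hypothetical heuristic: prefer the small tuning on
// low-RAM machines regardless of database size, otherwise fall back to
// a size-based auto selection. The 1 GiB and 200 MiB thresholds are
// placeholders, not Syncthing's real values.
func chooseTuning(dbSize, totalRAM uint64) string {
	const (
		lowRAMThreshold  = 1 << 30   // 1 GiB: treat as a low-RAM machine
		largeDBThreshold = 200 << 20 // 200 MiB: small/large cutover
	)
	if totalRAM > 0 && totalRAM < lowRAMThreshold {
		return "small"
	}
	if dbSize >= largeDBThreshold {
		return "large"
	}
	return "small"
}

func main() {
	// A database just over the cutover, on a 512 MiB machine: stay small.
	fmt.Println(chooseTuning(202<<20, 512<<20))
	// The same database on an 8 GiB machine: go large.
	fmt.Println(chooseTuning(202<<20, 8<<30))
}
```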

Or maybe yet another split, such that table sizes are increased but not the write buffers/cache?
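To make that split concrete: goleveldb’s options keep the compaction table size separate from the write buffer and block cache, so in principle the table size could grow for large databases while the RAM-heavy buffers stay at their small-tuning values. A sketch using a local struct that mirrors the relevant `opt.Options` field names; the concrete byte values are illustrative placeholders, not Syncthing’s actual settings:

```go
package main

import "fmt"

// dbOptions mirrors the goleveldb opt.Options fields relevant here.
type dbOptions struct {
	BlockCacheCapacity  int // read cache, a big RAM consumer
	WriteBuffer         int // memtable size, held in RAM until flushed
	CompactionTableSize int // on-disk SSTable target size
}

const MiB = 1 << 20

// splitTuning keeps the RAM-heavy knobs at small-tuning values while
// still increasing the table size. All numbers are placeholders.
func splitTuning() dbOptions {
	return dbOptions{
		BlockCacheCapacity:  8 * MiB,  // unchanged from small
		WriteBuffer:         4 * MiB,  // unchanged from small
		CompactionTableSize: 16 * MiB, // increased: fewer, bigger tables
	}
}

func main() {
	o := splitTuning()
	fmt.Printf("cache=%dMiB writeBuffer=%dMiB tableSize=%dMiB\n",
		o.BlockCacheCapacity/MiB, o.WriteBuffer/MiB, o.CompactionTableSize/MiB)
}
```

The point of the split is that larger tables mainly cost disk I/O during compaction, whereas the write buffer and block cache are what show up as resident memory in the profiles above.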