I actually had a similar problem with the 32-bit dev build (from another ticket) on my Windows machine this morning. There was a popup from Windows saying Syncthing had a memory problem. I didn't think about it more at the time, but Syncthing was using almost 1 GB of RAM.
Two questions:
Could you add a proper error message in that case, so it's easy to understand?
Is there any way to work around this? The machine is 32-bit with low RAM and I can't change that. Could you maybe optimize / reduce the memory usage?
If it dies due to a failure to allocate, there is a very verbose panic message; I guess it just wasn't captured by systemd, or you didn't see it. If it gets killed by Linux due to OOM, there is no chance to emit a message, but the dmesg log will have a message from the kernel about it.
Generally speaking it's fairly optimized as is. The known exceptions are very large files (which will soon be handled better with variable block size) and large folders where a lot has changed (because all of the changed files get queued).
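To put rough numbers on the large-file case: the 32-byte SHA-256 hash per block and the fixed 128 KiB block size are how it works today; the per-block overhead and the 16 MiB top end for the variable block size are rough assumptions on my part, just for illustration.

    package main

    import "fmt"

    // Back-of-the-envelope size of the block list for one large file.
    func main() {
        const (
            fileSize   int64 = 100 << 30 // a 100 GiB file
            smallBlock int64 = 128 << 10 // current fixed block size
            largeBlock int64 = 16 << 20  // assumed top end of variable block size
            perBlock   int64 = 32 + 48   // SHA-256 hash plus assumed struct overhead
        )
        small := fileSize / smallBlock
        large := fileSize / largeBlock
        fmt.Printf("128 KiB blocks: %d entries, ~%d MiB of metadata\n", small, small*perBlock>>20)
        fmt.Printf("16 MiB blocks:  %d entries, ~%d KiB of metadata\n", large, large*perBlock>>10)
    }

That works out to roughly 800k block entries and tens of MiB of metadata for a single 100 GiB file today, dropping to a few hundred KiB with the larger blocks.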
[998745.207835] Out of memory: Kill process 21850 (syncthing) score 826 or sacrifice child
[998745.207975] Killed process 21850 (syncthing) total-vm:1340936kB, anon-rss:856940kB, file-rss:0kB, shmem-rss:0k
Disables the cache that is used to calculate scan progress, so your scans won't show progress, but they won't consume as much memory either.
There are a few other settings that could reduce memory usage, but I suspect it shouldn't be this high to start with. Can you try redownloading the binary to make sure it isn't corrupt?
I don't think that could or should be a problem. Has that ever actually happened?
I use apt-get to install it. How would it get corrupted without being completely broken?
Would it make sense to use a disk-spilling queue in both the scanner and the puller? The scanner is a bit more problematic, as we store a FileInfo (without blocks, though), not just a file name. It seems pretty doable to use an adjusted version of the index sorter for this. It's not the nicest solution given the disk I/O discussions going on, but it's better than OOM crashes, and short of walking the folders twice I don't see any way to provide progress updates without storing the list of files to be processed somewhere in the meantime. I'd definitely propose some heuristic criterion for spilling rather than a fixed maximum size, so that systems which can take the memory spike don't spill at all.
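Roughly the kind of thing I mean, as a minimal sketch (not actual Syncthing code and not the index sorter): entries stay in memory until a size heuristic trips, after which further entries go to a temporary file as length-prefixed records and are read back in FIFO order. The []byte entries and the fixed byte threshold are placeholders for marshalled FileInfos and whatever heuristic we'd actually use.

    package main

    // Sketch of a FIFO queue that spills to disk past a size threshold.
    import (
        "encoding/binary"
        "fmt"
        "os"
    )

    type spillQueue struct {
        mem      [][]byte // entries accepted before spilling started
        memBytes int
        limit    int      // heuristic spill threshold in bytes
        file     *os.File // spill file, created lazily
        readOff  int64    // offset of the next spilled record to read
        writeOff int64    // offset to append the next spilled record at
    }

    func (q *spillQueue) Push(rec []byte) error {
        if q.file == nil && q.memBytes+len(rec) <= q.limit {
            q.mem = append(q.mem, rec)
            q.memBytes += len(rec)
            return nil
        }
        if q.file == nil {
            f, err := os.CreateTemp("", "queue-spill-")
            if err != nil {
                return err
            }
            q.file = f
        }
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(rec)))
        if _, err := q.file.WriteAt(append(hdr[:], rec...), q.writeOff); err != nil {
            return err
        }
        q.writeOff += int64(4 + len(rec))
        return nil
    }

    // Pop returns entries in FIFO order: the in-memory part first, then the
    // spilled part. ok is false once the queue is empty.
    func (q *spillQueue) Pop() (rec []byte, ok bool, err error) {
        if len(q.mem) > 0 {
            rec, q.mem = q.mem[0], q.mem[1:]
            return rec, true, nil
        }
        if q.file == nil || q.readOff >= q.writeOff {
            return nil, false, nil
        }
        var hdr [4]byte
        if _, err := q.file.ReadAt(hdr[:], q.readOff); err != nil {
            return nil, false, err
        }
        n := int64(binary.BigEndian.Uint32(hdr[:]))
        rec = make([]byte, n)
        if _, err := q.file.ReadAt(rec, q.readOff+4); err != nil {
            return nil, false, err
        }
        q.readOff += 4 + n
        return rec, true, nil
    }

    func main() {
        q := &spillQueue{limit: 32} // tiny threshold so the example actually spills
        for i := 0; i < 8; i++ {
            q.Push([]byte(fmt.Sprintf("file-%d", i)))
        }
        for {
            rec, ok, _ := q.Pop()
            if !ok {
                break
            }
            fmt.Println(string(rec))
        }
        if q.file != nil { // clean up the spill file
            q.file.Close()
            os.Remove(q.file.Name())
        }
    }

The nice part of a heuristic threshold is that a box with plenty of RAM never touches the disk at all, while a constrained one trades some extra I/O for not getting OOM-killed.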
I had to move Syncthing from my NAS box (4 GB) to the main PC (16 GB) to get it working. And now, of course, watching for changes does not work (on CIFS mounts).
We could do two scans: one to figure out the amount of data that needs to be hashed, and a second one to actually do it. We might run into a discrepancy between the two passes and would need to handle that, of course.
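As a rough sketch of that, with a plain filepath.Walk standing in for the real scanner; the discrepancy handling here is nothing smarter than clamping the percentage when the second pass sees more data than the first:

    package main

    // Two passes over a folder: the first sums up how much data would need
    // hashing, the second does the hashing and reports progress against
    // that total.
    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    func totalBytes(root string) (total int64, err error) {
        err = filepath.Walk(root, func(_ string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if info.Mode().IsRegular() {
                total += info.Size()
            }
            return nil
        })
        return total, err
    }

    func hashWithProgress(root string, total int64) error {
        var done int64
        return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
            if err != nil || !info.Mode().IsRegular() {
                return err
            }
            f, err := os.Open(path)
            if err != nil {
                return err
            }
            defer f.Close()
            n, err := io.Copy(sha256.New(), f) // stand-in for the real block hashing
            if err != nil {
                return err
            }
            done += n
            pct := 100 * float64(done) / float64(total)
            if pct > 100 { // the folder changed between the passes; clamp
                pct = 100
            }
            fmt.Printf("%5.1f%%  %s\n", pct, path)
            return nil
        })
    }

    func main() {
        root := "."
        total, err := totalBytes(root)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if total == 0 {
            total = 1 // avoid dividing by zero for an empty folder
        }
        if err := hashWithProgress(root, total); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }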
Disk-spilling for this purpose does not sound worth it to me.