The default is the number of CPU cores. This was decided a while back, based on the (then reasonable) assumption that hashing is usually CPU bound. However, if you have a fast CPU and slow storage (say, a desktop computer with an i7 and a single 4 TB disk or something), the opposite applies. Also, while all current file systems and operating systems do some sort of read-ahead (or so I think), Windows seems to dislike parallel I/O more than most… So perhaps the default should be revisited here.
Two things here: Syncthing is written in Go, which is a memory safe and garbage collected language, while BTSync is, I think, written in C or C++. This means we will always be at a disadvantage in terms of overhead - generally using about twice the memory we would otherwise need. On the other hand, we don't get buffer overflows or memory leaks (not saying they do either, just that this is the language tradeoff), and it's a generally nicer environment for the developer.
Syncthing’s memory usage peaks at startup, when pulling changes, and when loading the GUI, due to how we currently need to walk the database to present the various folder summaries and so on. The database layer uses a sometimes unfortunate amount of RAM, which is something I’m working on. Go’s garbage collector releases unused memory back to the OS after about 5-10 minutes. So “idle” may mean something different to Syncthing than it does to you.
All that said, Syncthing should not use 800 MB when idling, and ideally not even at peak, for your workload, so I can see how this would be a bit surprising and off-putting to you.
In addition to the amount of data, the number of files and number of (connected) peers may influence the usage, but my closest comparable setup looks like this right now:
“Idle” on that box is ~58 MB; loading the GUI pulls it up to the 112 MB visible above, and leaving it open, it will shrink back down after 5-10 minutes. Note also that the large folder on that box has a very long rescan interval, so it idles more than it would with the default settings. In my case the data just changes very infrequently and I don’t mind the lag from the rescan, but using something like syncthing-inotify would have the same effect while still giving you close to instantaneous sync of changes.
So in short, there are some things to tweak for sure. I’m not super happy with the memory consumption myself, and that’s something we’re working on, always.