I’ve recently started using Syncthing to sync between a Synology NAS with 2x Seagate IronWolf 12 TB HDDs configured as RAID-1 and a Raspberry Pi 4B with 2x Seagate Backup Plus 5 TB portable drives configured as a single striped volume using LVM. There are six shared folders with a total of 6.15 TB in over 333,000 files.
I use the stable Syncthing Debian repo and just upgraded both to 1.9.0. They both started doing lots of scanning and syncing, with the Synology NAS doing an excessive amount of disk grinding and actually going a lot slower than the Raspberry Pi. It turned out that was due to the system running out of physical memory and swapping a lot. The syncthing process seemed to be using nearly 2 GB of memory. Is that to be expected given the amount of data I’m sharing?
I am not an expert, but the RAM usage seems pretty normal to me. For comparison, mine is using ~500 MB of RAM for ~300 GB of data (667,623 files). It also probably depends on the number of folders, files, and connected devices.
This is with the 64-bit version, as the 32-bit one will probably use less (but also may crash if the RAM usage gets too high).
1.9.0 has increased memory usage for folders with many files, due to case sensitivity checks. We’re bringing that down a bit in 1.10.0. You might also set the case sensitivity option, if your file systems are all in fact case sensitive.
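For anyone hunting for that option: I believe it's the advanced folder setting `caseSensitiveFS`, which in `config.xml` sits as a child element of the folder entry. A rough sketch of what that looks like — the folder id, label, and path here are made-up placeholders, and the exact element placement is from memory, so check your own config:

```xml
<!-- Assumed config.xml fragment; folder id/label/path are hypothetical.
     Setting caseSensitiveFS to true tells Syncthing to skip its
     case-insensitivity handling (and the extra memory it costs) for
     this folder. Only do this if every filesystem sharing the folder,
     on every device, is actually case sensitive. -->
<folder id="abcd-efgh" label="Example" path="/data/example" type="sendreceive">
    <caseSensitiveFS>true</caseSensitiveFS>
</folder>
```

This can also be toggled per folder from the GUI under the advanced folder settings, rather than editing the file by hand.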