Continuing to love the Syncthing lifestyle, etc etc etc - thanks all for a wonderful tool!
I’m just looking at a rather silly setup I’ve got: it runs macOS on an (Intel) 2013 Mac Pro, with about 20 Syncthing shared folders defined.
Most of those folders live on a RAID 0 array of 11 SSDs, connected via Thunderbolt. Total Syncthing data size is about 30 TB, in 670,000 subfolders, across 5.1 million files.
Everything generally works lovely. Sync happens. Everyone’s happy.
However, when I need to restart the system, the scan time for these folders seems very high.
For example, Syncthing has now been running for about 4 hours; 6 of the shared folders are still doing their scan since Syncthing started. To pick one of these folders as an example: it’s 1.7 TB, with 13,000 subfolders and 120,000 files - it’s 36% of the way through this scan, and says it has about 6h remaining.
The last completed scan on this folder was earlier this morning, and no data will have changed in the folder since then. Syncthing is currently consuming about 230% CPU.
Does Syncthing rehash all the data during non-initial scans? I assumed it was comparing sizes and timestamps to determine whether files need re-hashing - but the level of activity going on at the moment suggests it’s much more than a few million filesystem lookups…
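For reference, here’s the kind of check I assumed was happening during a non-initial scan - a rough Python sketch of metadata-based change detection (size + mtime against a last-scan record), not Syncthing’s actual internals; all the names here are mine:

```python
import os
from pathlib import Path
from typing import Optional

def needs_rehash(path: Path, record: Optional[dict]) -> bool:
    """Re-hash only if the file is new, or its size/mtime differ from the record."""
    st = path.stat()
    if record is None:  # never seen before
        return True
    return st.st_size != record["size"] or st.st_mtime_ns != record["mtime_ns"]

def scan(root: Path, db: dict) -> list:
    """Walk the tree; return only files whose metadata changed, updating db."""
    changed = []
    for p in root.rglob("*"):
        if p.is_file():
            key = str(p.relative_to(root))
            if needs_rehash(p, db.get(key)):
                changed.append(p)
                st = p.stat()
                db[key] = {"size": st.st_size, "mtime_ns": st.st_mtime_ns}
    return changed
```

If that’s roughly what happens, a repeat scan over an unchanged folder should be a few million `stat()` calls and very little else - which doesn’t square with hours of runtime at 230% CPU.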