I don’t think tweaking settings will help you much. There are a couple of things going on here.
The hasher is parallel and concurrent for multiple files, but any given file is hashed on a single thread. With your six-core / twelve-thread CPU and the way Syncthing (and Windows) show CPU usage, you should expect to see about 1/12 ≈ 8.3% CPU usage while it’s doing this. 27 minutes works out to 75 * 1024 / 27 / 60 ≈ 47 MB/s, which is a tad on the slow side. I would expect your CPU to manage about 150 MB/s or so of hashing, so it’s off by a factor of three or so. The rest is probably overhead, read latency, etc.; hard to say. We’re counting on the filesystem doing readahead for us; otherwise the read-hash-read-hash cycle incurs a couple of milliseconds of latency per block.
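To make the read-hash cycle concrete, here’s a minimal sketch of hashing one file block by block on a single goroutine. The 128 KiB block size and SHA-256 are illustrative assumptions, not Syncthing’s actual scanner code; the point is that each read must finish before the next block can be hashed, so per-block read latency adds up unless the filesystem does readahead.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const blockSize = 128 << 10 // 128 KiB, for illustration only

// hashBlocks hashes a file one block at a time on a single thread.
func hashBlocks(path string) ([][]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var hashes [][]byte
	buf := make([]byte, blockSize)
	for {
		// Each iteration waits for the read to complete before hashing.
		// Without readahead, that latency is paid for every single block.
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			sum := sha256.Sum256(buf[:n])
			hashes = append(hashes, sum[:])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return nil, err
		}
	}
	return hashes, nil
}

func main() {
	hashes, err := hashBlocks(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d blocks hashed\n", len(hashes))
}
```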
Syncing the file when it has changed goes roughly like this on the receiving side (there’s a code sketch after the list):
- Create a temporary file
- Read the previous version of the file and compute weak hashes of the blocks there.
- For each block in the file that ought to be the same, copy it while also hashing it and verifying the hash. This uses the weak hashes to find blocks in the old version of the file, and otherwise falls back to a database lookup to find matching blocks in other local files.
- Pull the blocks we didn’t find locally from some other device, hash them, and write them.
- Rename the temporary to the final name.
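Here’s a compressed sketch of that flow, with hypothetical helpers (`findBlockLocally`, `pullBlockFromPeer`) standing in for the real block map, weak-hash matching, and network layer. It shows the shape of the algorithm, not Syncthing’s actual implementation:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"os"
)

type blockInfo struct {
	offset int64
	hash   []byte // expected SHA-256 of the block in the new version
}

// findBlockLocally would consult the weak hashes of the old file and the
// local block database; here it always reports a miss. (Hypothetical.)
func findBlockLocally(hash []byte) ([]byte, bool) { return nil, false }

// pullBlockFromPeer would request the block from another device; here it
// simply fails. (Hypothetical.)
func pullBlockFromPeer(b blockInfo) ([]byte, error) {
	return nil, errors.New("no peer in this sketch")
}

func syncFile(finalPath string, blocks []blockInfo) error {
	// 1. Create a temporary file.
	tmp, err := os.CreateTemp(".", ".tmp-")
	if err != nil {
		return err
	}
	defer tmp.Close() // no-op if already closed below

	for _, b := range blocks {
		// 2-3. Try to copy the block from local data (the old version of
		// the file, or another local file with a matching block).
		data, ok := findBlockLocally(b.hash)
		if !ok {
			// 4. Fall back to pulling the block from another device.
			if data, err = pullBlockFromPeer(b); err != nil {
				return err
			}
		}
		// Every block is hashed and verified before it's written.
		sum := sha256.Sum256(data)
		if !bytes.Equal(sum[:], b.hash) {
			return errors.New("block hash mismatch")
		}
		if _, err := tmp.WriteAt(data, b.offset); err != nil {
			return err
		}
	}

	if err := tmp.Close(); err != nil {
		return err
	}
	// 5. Rename the temporary file to the final name.
	return os.Rename(tmp.Name(), finalPath)
}

func main() {
	// With no blocks to transfer, this just creates an empty file.
	_ = syncFile("example.out", nil)
}
```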
During the copy phase you’ll see essentially no data flowing, just a few Kbps of index updates. The original file will be read twice: once for weak hashing and once when copying. For a file this size that’s on the order of 150 GB of reads plus 75 GB of writes, and it’s far too large to fit in the disk cache, so this causes a lot of disk access. Copying blocks within the same disk also causes quite a lot of seeking, so it’s not a very efficient thing to do for files as large as this.
TL;DR: Large files can be painful.
To find out what it’s doing, if you think the slowness is CPU or memory allocation related, grab a profile and I’ll help interpret it.