Actually, the current default is the number of CPU cores divided by the number of configured folders. So, if you have four CPU cores and one folder, you get four parallel hashers for that folder. Two folders get two hashers each. Three or more folders get one hasher each.
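In code, that default amounts to something like the following minimal sketch (the function name is mine and the exact rounding is an assumption; this is not Syncthing's actual implementation):

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultHashers sketches the current default: CPU cores divided by the
// number of configured folders, with a floor of one hasher per folder.
// Hypothetical helper for illustration, not the real Syncthing code.
func defaultHashers(numFolders int) int {
	if h := runtime.NumCPU() / numFolders; h > 0 {
		return h
	}
	return 1
}

func main() {
	// On a four-core machine: 1 folder => 4 hashers, 2 => 2, 3+ => 1 each.
	for folders := 1; folders <= 4; folders++ {
		fmt.Printf("%d folder(s): %d hasher(s) each\n", folders, defaultHashers(folders))
	}
}
```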
The median user has three folders configured. We don’t know how many CPU cores they have, although I suspect four is a good guess for current consumer hardware, and more than four is probably unusual.
So in effect we’re usually running with a default of one hasher per folder.
The same median user has a CPU that can hash ~120 MB/s on a single core. That’s probably less than what a single modern SATA disk can deliver in sequential reads, so one hasher alone won’t saturate the disk. But then the OS really needs to help out with readahead if we’re going to be able to take advantage of more parallelization, since parallel hashers mean multiple concurrent read streams…
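As a back-of-the-envelope check, assuming the ~120 MB/s single-core figure above and some typical SATA sequential-read rates (the disk numbers here are rough assumptions, not measurements):

```go
package main

import "fmt"

func main() {
	const hashMBps = 120.0 // assumed single-core hash rate from above
	// Rough sequential-read rates, for illustration only.
	diskMBps := map[string]float64{"SATA HDD": 180, "SATA SSD": 550}
	for name, d := range diskMBps {
		// How many parallel hashers one disk could keep busy, if the
		// OS readahead keeps the reads effectively sequential.
		fmt.Printf("%s: feeds ~%.1f hashers\n", name, d/hashMBps)
	}
}
```

By that math a spinning disk feeds about one and a half hashers and a SATA SSD four or five, which is roughly where the readahead question starts to matter.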
Perhaps we should adjust the defaults depending on the OS (sketched in code after this list):
Windows and Mac => we assume a laptop/desktop-class machine with an interactive user, so we don’t want to load things too heavily => default to one hasher.
Linux on ARM => as above, optimize for crap hardware.
Everything else => as we do currently.
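In Go that proposal could look roughly like this, keyed off runtime.GOOS and runtime.GOARCH (the structure and names are mine, a sketch rather than an actual patch):

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// proposedDefaultHashers sketches the per-OS defaults suggested above.
// Hypothetical, for illustration only.
func proposedDefaultHashers(numFolders int) int {
	switch {
	case runtime.GOOS == "windows" || runtime.GOOS == "darwin":
		// Interactive desktop/laptop: keep the background load light.
		return 1
	case runtime.GOOS == "linux" && strings.HasPrefix(runtime.GOARCH, "arm"):
		// Likely low-end hardware: same treatment.
		return 1
	default:
		// Everything else keeps the current behavior: cores divided by
		// folders, with a floor of one hasher per folder.
		if h := runtime.NumCPU() / numFolders; h > 0 {
			return h
		}
		return 1
	}
}

func main() {
	fmt.Printf("default hashers per folder here: %d\n", proposedDefaultHashers(3))
}
```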