I’m not sure if anything can be done to help this in the future, or if that is just the way it is.
I’m running Syncthing on my server, which is the focal point for several devices and holds an enormous amount of data in various shares — probably 10 TB across easily half a million files.
This really wasn’t much of a problem before, because I had added the large shares a few at a time and let the indexes build before adding more.
Then I installed the 0.11 beta, which forces a full re-index, and well… WHAM… all of those shares, many on the same drive set, all at once.
Ugh, the server is just crushed with disk IO. Processor and RAM I can live with (i7 and 16 GB, no problem there), but I feel a forced re-index of THIS MUCH STUFF should really be throttled somehow.
I’m probably going to be forced to remove some shares and add them back one at a time, as even up-to-date shares are hitting timeouts when trying to update because Syncthing is so busy dealing with it all.