This is not directly related to v1.9.0; I just saw it happen again after the last release - sorry I didn't report it earlier.
Yesterday my server (Linux, dual core, 64-bit, sufficient RAM) with 12 devices / 20 folders / 2 million files (700 GB) was not responding. The database and system are on an SSD, the files on an HDD. After many tries I managed to get in via SSH and found that Syncthing was the cause. The GUI loaded really slowly and every folder had status "scanning".
I restarted Syncthing and paused all devices, and the scan (I use stacked scanning, so 1 folder at a time) took only 45 minutes, which is typical for my system.
The only difference was: the night before I had copied in new files that the rescan had not yet picked up. In the morning, after the new release, the new files were detected and transferred. It seems that simultaneously updating the database, scanning, transferring, (partially) exchanging indexes, etc. is too much, and the tasks drag each other down.
startup scan time: 45 minutes
startup scan time with transfers: 3+ hours, plus heavy I/O
It would be great if Syncthing did not transfer files during the (initial) scan. This would help slower systems a lot, and I think faster systems could benefit from it too.