I have a Syncthing setup consisting of several Android, Windows and Linux devices.
One of the Linux devices had a hard-disk issue and was restored from backup. The backup contained all the files in the shared folders, but not the Syncthing index. I therefore re-generated the index on the restored system with syncthing --reset-database.
After this, I observed several files being transferred from the restored device (Linux) to the other devices (Windows and Linux). These files already existed on all the devices and had not been touched in years.
Luckily, I had file versioning enabled on one of the devices, so I compared a newly transferred file with the copy backed up in .stversions. Strangely enough, they are identical:
Syncthing changed to dynamically sized blocks a few years ago, so large files scanned today get larger blocks (256 KiB, 1 MiB, etc., depending on the file size). Old files that haven’t changed in a long time will still have the old standard block size (128 KiB). Files with different block sizes look different to Syncthing, even if the contents are in fact the same, so wiping the database and rescanning created “new” files with “new” contents.
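To make that concrete, here is a minimal sketch of the idea. The constants and heuristic are assumptions loosely based on the Syncthing docs (power-of-two block sizes from 128 KiB up to 16 MiB, chosen so a file ends up with roughly 2000 blocks at most); the real scanner differs in detail:

```python
import hashlib

# Assumed constants: power-of-two block sizes from 128 KiB (the legacy
# fixed size) up to 16 MiB, targeting roughly 2000 blocks per file.
MIN_BLOCK = 128 * 1024
MAX_BLOCK = 16 * 1024 * 1024
TARGET_BLOCKS = 2000

def block_size(file_size: int) -> int:
    """Pick a power-of-two block size aiming for ~TARGET_BLOCKS blocks."""
    size = MIN_BLOCK
    while size < MAX_BLOCK and file_size / size > TARGET_BLOCKS:
        size *= 2
    return size

def block_hashes(data: bytes, size: int) -> list[str]:
    """Hash data in fixed-size blocks, as a scanner would."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

# Identical content hashed with legacy 128 KiB blocks versus the dynamic
# size picked for a 1 GiB file yields different block lists, so the two
# sides see "different" files despite byte-identical contents.
data = b"x" * (1024 * 1024)                      # 1 MiB of content
legacy = block_hashes(data, MIN_BLOCK)           # 8 blocks of 128 KiB
dynamic = block_hashes(data, block_size(2**30))  # 1 MiB blocks for a 1 GiB file
print(len(legacy), len(dynamic), legacy == dynamic)  # 8 1 False
```

Since the exchanged file metadata describes blocks by size and hash, two block lists that chunk the same bytes differently never match, which is why the rescan looked like all-new data.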
It’s reassuring to understand what’s happened, but also a bit worrying: if this had happened over a low bandwidth connection it could have delayed sync for a long time, and if it had affected more files all the pointless old versions could have used up a lot of space.
Is there a reason why Syncthing does not re-index with the new, dynamic block size when block sizes disagree (instead of immediately re-downloading each block)?
No reason, it’s just not implemented. In most cases this is a non-issue, since both sides agree on what block size to use until the file grows 4x or so. Only legacy setups can be hit by this when resetting the database, as happened here.
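A sketch of why both sides normally agree, assuming the sizing heuristic described in the Syncthing docs (power-of-two block sizes starting at 128 KiB, chosen to keep a file at roughly 2000 blocks; the constants here are assumptions): each side derives the block size from the file size alone, so the computed size only changes once the file grows past a doubling threshold.

```python
# Assumed heuristic: power-of-two block sizes from 128 KiB up to 16 MiB,
# targeting roughly 2000 blocks per file.
MIN_BLOCK, MAX_BLOCK, TARGET_BLOCKS = 128 * 1024, 16 * 1024 * 1024, 2000

def block_size(file_size: int) -> int:
    size = MIN_BLOCK
    while size < MAX_BLOCK and file_size / size > TARGET_BLOCKS:
        size *= 2
    return size

MIB = 1024 * 1024
print(block_size(200 * MIB) // 1024)  # 128 -- under ~250 MiB the smallest size fits
print(block_size(800 * MIB) // 1024)  # 512 -- 4x growth crosses two doubling thresholds
```

Because every device runs the same computation on the same file size, mismatches only appear when one side still carries block lists computed under the old fixed-size scheme, as after this database reset.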