File Integrity Verification


This feature request arose from this thread:

The idea is that the user has the option to manually trigger a checksum based file verification process, in case the user suspects anything went wrong with the local data.

After pressing the new “Verify all files” button in the UI, Syncthing would:

  • warn the user that any changes to files during that process may be lost
  • hash every file and check it against the checksum in the index
  • re-download all files where there is a checksum mismatch
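The three steps above could look roughly like this in code. This is only a minimal sketch, assuming a hypothetical manifest that maps relative paths to Syncthing-style per-block SHA-256 hashes (128 KiB is Syncthing's traditional default block size); the re-download step is left out:

```python
import hashlib
from pathlib import Path

BLOCK_SIZE = 128 * 1024  # Syncthing's traditional default block size

def block_hashes(path: Path) -> list[str]:
    """SHA-256 over each fixed-size block, mirroring per-block index hashes."""
    hashes = []
    with path.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def verify(manifest: dict[str, list[str]], root: Path) -> list[str]:
    """Return relative paths whose on-disk hashes differ from the manifest."""
    return [
        rel for rel, expected in manifest.items()
        if block_hashes(root / rel) != expected
    ]
```

Files flagged by `verify` would then have to be re-requested from other devices; that part is omitted here.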

Read-only nodes could run this check periodically via an API call in a cron job, which would then also report broken and fixed files. This could also mitigate bit rot on primitive filesystems.

Issue #1315 on GitHub already discusses this, and the idea was dismissed with two main points:

  • (A) This is something the filesystem should handle.
  • (B) There is no way to detect if changes were made by the user and therefore should be synced, or if the data should be corrected.

Also, Audrius Butkevicius’ comment from the thread where this came up:

I don’t think verify feature makes sense. If the file changed we’d know (mtime and size changes), if it hasn’t changed, what’s the point of verifying? What are we trying to catch here? Bad drives? That’s not really syncthing’s problem.

I would like to comment on the two points made on GitHub, as well as on Audrius’ comment.

First, the easy one:
(B) Because this would be a manual feature, the user has to trigger it and can be reminded not to change any files during the process.

Now the tricky one:
(A) Yes, you are right. Theoretically, the filesystem is responsible for the integrity of the files. There are really great filesystems out there that can totally handle all these problems.

But let’s look at this from a practical perspective for a moment. For most users, Syncthing is a kind of DIY thing: people who don’t trust cloud providers and their proprietary software with all their data, tech-savvy people who use Syncthing in a personal home network, or just office colleagues who want to sync data.

All of these people have no idea about all these fancy filesystems that would solve integrity problems. They are way too complex and time-consuming to set up and maintain.

My hypothesis is that the majority of Syncthing users do not use self-healing filesystems, and therefore the argument “that this is the filesystem’s job” does not hold true in practice.

I’d like to suggest that the maintainers evaluate this feature carefully and give it consideration. I urge you to decide what is best for the project and listen to the community - and that may also mean denying this feature request. I am not trying to push you into anything; I am just contributing an idea.


Corrupted files
(Simon) #2

Unless I am mistaken, that can already be done; it’s just not a single API call:

  1. Reset the database for the folder in question
  2. Reset the folder to the global state with the revert endpoint (or the button in the GUI)

Now 1 isn’t available in the web UI, but I’d argue it shouldn’t be in general, as it’s not something that should be done unless there is a very good reason, and it can’t be until the linked issue is solved. However, I think it would be a sensible request to extend the -reset-database command line option to optionally take a parameter specifying the folder(s) to be reset.
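For reference, the two steps can be scripted against the REST API. A sketch, assuming the default GUI address, a placeholder API key, and a folder with the ID `default`:

```python
import urllib.request
from urllib.error import URLError
from urllib.parse import urlencode

API = "http://localhost:8384/rest"  # default GUI address (assumption)
KEY = "abc123"                      # placeholder; use the key from the GUI settings
FOLDER = "default"                  # placeholder folder ID

def call(method: str, path: str, **params) -> str:
    """Build and best-effort send a REST call; return the URL for inspection."""
    query = urlencode(params)
    url = f"{API}{path}" + (f"?{query}" if query else "")
    req = urllib.request.Request(url, method=method, headers={"X-API-Key": KEY})
    try:
        urllib.request.urlopen(req, timeout=5)
    except (URLError, OSError):
        pass  # no live instance running; the sketch still records the call
    return url

# 1. Reset the database for just this folder (this restarts Syncthing).
reset_url = call("POST", "/system/reset", folder=FOLDER)

# 2. Once the folder has been rescanned, revert local changes to the
#    global state (the revert endpoint applies to receive-only folders).
revert_url = call("POST", "/db/revert", folder=FOLDER)
```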


(Jakob Borg) #3

The cumbersome way is available in the GUI: remove the folder and add it back (retaining the folder ID etc.).


(Dr Schnagels) #4

I really love this idea. It would add more confidence knowing that data is bit-identically synced to one or more devices, and it would reveal or even prevent bit rot on storage without ECC protection against bit flips. ZFS-style added data security would be super convenient. Maybe you could add a per-folder setting: “Full rescan & rehash after 30 days”. Or, for data on an internet server: a per-device full rescan and rehash every 90 days. It’s something you could even manage remotely with Arigi. Rehash and rescan everything, wait x hours, and you can sleep better =).


(Simon) #5

I actually see the value of this more and more the longer I think about it. You would first need to ensure that the folder is fully up to date, then stop any index exchange/syncing during the process. Then, by scanning first (picking up modifications that change a file’s modtime and/or size, but not bit rot and the like) and afterwards rehashing and overwriting any detected changes with data from remotes, you could ensure the integrity of your data. However, it’s also a fairly complex and potentially data-destroying process (I am sure there is software out there that changes file contents without changing the modtime, and e.g. does a file modified through a hardlink get its modtime updated?), which I don’t want a user to do unless they really understand what they are doing. And compared to understanding and assessing that risk, doing (scripting) a few REST calls takes less time.
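The preparation steps described here (check completion, stop syncing, scan first) map onto existing REST endpoints. A sketch, assuming the default GUI address and a placeholder API key - the full rehash itself has no endpoint today:

```python
import urllib.request
from urllib.error import URLError
from urllib.parse import urlencode

# Assumed defaults: local GUI address, placeholder API key, folder ID "default".
API = "http://localhost:8384/rest"
KEY = "abc123"

def call(method: str, path: str, **params) -> str:
    """Build and best-effort send a REST call; return the URL for inspection."""
    query = urlencode(params)
    url = f"{API}{path}" + (f"?{query}" if query else "")
    req = urllib.request.Request(url, method=method, headers={"X-API-Key": KEY})
    try:
        urllib.request.urlopen(req, timeout=5)
    except (URLError, OSError):
        pass  # no live instance running; the sketch still records the call
    return url

urls = [
    # 1. Confirm the folder is fully up to date (completion should be 100).
    call("GET", "/db/completion", folder="default"),
    # 2. Pause all devices to stop index exchange and syncing meanwhile.
    call("POST", "/system/pause"),
    # 3. Scan first, so ordinary modtime/size changes are picked up as edits.
    call("POST", "/db/scan", folder="default"),
    # (A full rehash that ignores modtime/size would go here; no endpoint exists.)
    # 4. Resume normal operation.
    call("POST", "/system/resume"),
]
```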


(J King) #6

Yes. Modification time is a property of the file, not the link.
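This can be checked directly. A small Python sketch (paths are throwaway temp files):

```python
import os
import tempfile
import time

# Create a file and a hard link to it, then modify the file through the link.
d = tempfile.mkdtemp()
orig = os.path.join(d, "original")
link = os.path.join(d, "hardlink")

with open(orig, "w") as f:
    f.write("first")
os.link(orig, link)

before = os.stat(orig).st_mtime_ns
time.sleep(1.1)  # generous gap, in case the filesystem has coarse timestamps
with open(link, "w") as f:  # write through the hard link only
    f.write("second")
after = os.stat(orig).st_mtime_ns

# Both names refer to the same inode, so both report the updated mtime.
assert after > before
assert os.stat(orig).st_mtime_ns == os.stat(link).st_mtime_ns
```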