Corrupted files

Previously posted on GitHub: #5629

I know this will be a pretty vague bug report, but I still wanted to file it: issues like this make me want to move away from Syncthing, even though I really like it.

I run Syncthing on about a dozen devices and move a lot of important data around. A couple of weeks ago I noticed that a PDF file I had created two years ago was corrupted.

Then I noticed another corrupt PDF file this week, on another device, in another folder shared by another group of devices. This file was only 1-2 months old.

With this second occurrence of a corrupted file in a seemingly unrelated environment I became suspicious.

In both cases, the files were much smaller than they should be - as if only a small first part of the file was present, and the rest missing.

I, of course, cannot reproduce the problem, as I don’t know when it occurred. The second time this happened, there luckily was a sync-conflict copy of the file in question that was intact.

As far as I understand, Syncthing writes files to a temporary location and then moves the file into place when finished. And as these files have been in the repo for some time and were perfectly fine, I wonder how this could have happened.

I am sorry that I can’t be more specific about this, as this is all the information I have.

What worries me is that there could be more corrupted files - but I don’t know how to find them efficiently, short of going through all my files and trying to open them.

I’d really love to have Syncthing with integrity checks, but sadly this feature request (#1315) has already been denied.

Version Information

Syncthing v0.14.51, Linux (first occurrence)
Syncthing v0.14.51, macOS (second occurrence)

How do you know the files are corrupt? If you have versioning, can you compare a corrupt and a non-corrupt version to see where the differences are?

Do you only sync PDF files?

Generally speaking I doubt that Syncthing corrupts data by itself. If it did, with any regularity like you’re seeing, we’d be lynched by thousands of users seeing the same thing. We also try very hard to avoid just this.

I’d like to know if you have used a pdf editor or similar tool.

I wonder why the conflict files were created. In my experience that’s often due to editing a file, which itself just leads to Catfriend1’s question.

Thanks for all the replies.
I will do some more investigating with your input on Monday and will report back then.

I also have some other cool ideas how to possibly improve this situation in the future :wink: - will share later.

How do you know the files are corrupt?

Well, I tried to open them.

On further inspection, one of the “corrupted” files was actually valid, but its contents were of another file type entirely. That is really weird - and I feel a bit dumb about not having done a better analysis before. This was most probably caused by something outside of Syncthing.

But! I could find two other files that really were broken. At the receiving end these files had the exact same number of bytes, but were filled with zeros instead of the real data. This does sound like a hiccup in Syncthing.
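Files like these can at least be hunted for mechanically. A rough sketch of such a scan (the `scan_zeros` name and the approach are mine, assuming a bash userland with the usual `find`/`tr` tools):

```shell
#!/bin/bash
# Rough scan for non-empty regular files whose entire content is NUL bytes,
# under the directory given as $1 (defaults to the current directory).
scan_zeros() {
    find "${1:-.}" -type f -size +0c -print0 |
    while IFS= read -r -d '' f; do
        # tr strips every NUL byte; if nothing remains, the file was zeros only
        if [ "$(tr -d '\0' < "$f" | wc -c)" -eq 0 ]; then
            printf 'all zeros: %s\n' "$f"
        fi
    done
}
```

Run it as e.g. `scan_zeros ~/Sync`; it reads every file once, so on a big folder it takes about as long as the hashing run anyway.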

Do you only sync PDF files?


Generally speaking I doubt that Syncthing corrupts data by itself.

No, me neither. That is why I was so surprised to find multiple corrupted files in such a short time frame on different systems.

I’d like to know if you have used a pdf editor or similar tool.

No, all files mentioned were created in a single action and never edited. They were downloaded or scanned and then saved as is.

I am now hashing all the files of one of the bigger folders I sync and will report back after having compared to the other devices. I use `find . -type f -exec shasum -a 256 {} \; > shasum256_hostname.txt 2>/dev/null` to hash the files.
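Comparing those listings between two devices works best after sorting both by file path, so identical trees line up; a sketch of such a comparison (the `compare_hashes` name is mine, assuming bash for the process substitution):

```shell
#!/bin/bash
# Compare two shasum listings (as produced by the find command above) after
# sorting both by file path (the second field onwards).
# Prints the differing lines; no output means the two devices agree.
compare_hashes() {
    diff <(sort -k 2 "$1") <(sort -k 2 "$2")
}
```

Usage would be e.g. `compare_hashes shasum256_hostA.txt shasum256_hostB.txt`, with the hostnames standing in for whatever files the per-device runs produced.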

One great feature for manual remediation and verification would be a “Verify all files” button in the UI, that:

  • warns the user that any changes to files during that process may be lost
  • hashes every file and checks it against the hash stored in the index
  • re-downloads the file if there is a checksum mismatch

Read-only nodes could execute this function periodically via an API call in a cron job, which would also report broken and fixed files. This could even mitigate bit rot on filesystems without built-in checksumming.

I don’t think a verify feature makes sense. If the file changed, we’d know (mtime and size change); if it hasn’t changed, what’s the point of verifying? What are we trying to catch here? Bad drives? That’s not really Syncthing’s problem.

I moved the feature discussion here: File Integrity Verification

In this thread I really want to dig deeper into the things I am experiencing. Let me elaborate on the specifics of one of these occasions:

One of the affected folders is shared with an external company that receives PDFs from us via Syncthing for further processing. We just drop a PDF into the synced folder and do not touch it afterwards.

The company then raised this issue with the corrupted file and sent it back via email.

The file had the correct name and size. This information must have been transmitted via Syncthing, as there was no other means of communication for this. But the file was only filled with zeros.

Is it possible that the program that initially placed the PDF in the folder first allocated the needed size on the filesystem (with zeros), that Syncthing picked up the new file before it was actually written to disk and synced a file full of zeros, and that it then did not pick up the actual write afterwards?

Yes, though it’s a bit unlikely. The watch-for-changes feature has a small delay, 10s by default. That means at least 10s pass between receiving the change event from the filesystem and hashing the file, so the program would have to take more than that time between first creating the file and actually writing the contents to disk.

To see whether the file changed at any point (and if so, where), you can query the db/file endpoint.
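For reference, such a query against the local REST API might look like this (the API key, listen address, folder ID, and file path below are placeholders for your own values):

```shell
# Ask the local Syncthing instance for a file's database record; the response
# includes the version vector, modification time, and size as Syncthing sees them.
curl -s -H "X-API-Key: $SYNCTHING_APIKEY" \
    "http://localhost:8384/rest/db/file?folder=default&file=path/to/document.pdf"
```

The API key can be found under Actions → Settings → GUI.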

If it is a single device on your side, that sounds like a good use case for the send-only setting. Thus you can eliminate the possibility that (or detect if) the problem occurs on the external company’s side.

Also, if the program uses mmap to write out the files (which is possible and sometimes faster), flushing mmapped pages does not update the mtime, so we might have detected the zeroed out memory at the start, sent that, and never picked up the change to the file when the content was written, as mtime or size was not updated.

There is no way to work around this, as mmapped files essentially behave like persistent memory as opposed to regular files, and do not uphold certain preconditions Syncthing relies on.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.