Syncthing 1.4.2 weird deletion behaviour.

In a nutshell, it’d appear that Syncthing, in its infinite wisdom, assumes that if you re-introduce a file with the same name as a previously deleted file, it’s a “mistake” and helpfully deletes the file, regardless of the files’ differing sizes, dates, etc. The latest example is a video file from my tuner box’s on-board storage getting spontaneously deleted, because last week I deleted the initial attempted recording (same filename, but about half the size, due to an EPG snafu).

Sooo, is there a way to either implement or enable file fingerprinting for deleted files so that this doesn’t reoccur, or am I SOL and gotta rethink and re-implement my personal synced storage setup?

That should be impossible, i.e. you don’t need to enable or implement anything; you need to find out what weird thing is going on, and then we/you fix it. Given that you are not the only user, but the only one reporting such an issue, I admit I am biased to expect that something else is going on. To find out what’s happening regardless, you need to enable the model and scanner debug facilities, reproduce the weirdness, and then share the logs.


Ok, duly enabled; I’ll test it out with a freshly recorded episode tonight and post the logs as soon as this reoccurs. The annoying part is that this is literally the only folder without some form of version history enabled (for the sake of testing, I won’t change that for now).


Quick update: there’s definitely something funky going on, because Syncthing nuked yet another .ts file with the same name as something deleted months ago. Unfortunately, it’d appear that the debug flags didn’t stay enabled after restarting Syncthing, so nothing pertinent showed up in the logs between

[4W7BJ] 13:42:47 INFO: Completed initial scan of sendreceive folder “Default Folder” (default)

and the Latest Change notice in the web GUI two seconds later:

Latest Change Deleted .ts <FileName>.ts @ 2020-05-01 13:42:49

I’ll leave Syncthing running with the 2 debug flags enabled, and hopefully later today I’ll be able to re-test with another fresh recording off the tuner box. FWIW, the last 2 times I noticed this sort of aberrant behaviour were right after the initial folder scan following a fresh start (or restart).

Ah, and since I appear to have neglected to mention it - all machines are running Windows 10, with the synced storage on non-system, GPT-partitioned, unencrypted NTFS drives (in case that’s relevant).

You should enable the flags by setting environment variables, not by enabling them in the UI; the log view in the UI only keeps roughly the last 300 lines, so you should redirect Syncthing’s output to a proper file.
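For reference, a minimal sketch of what that could look like in a POSIX shell (on Windows 10, as in this setup, the rough equivalent would be `set STTRACE=model,scanner` in cmd or `$env:STTRACE = 'model,scanner'` in PowerShell; the log path is just an example):

```shell
# Enable the model and scanner debug facilities via the STTRACE
# environment variable, which Syncthing reads at startup.
export STTRACE=model,scanner

# Redirect stdout and stderr to a file so the output survives beyond the
# UI's ~300-line buffer (starting the daemon is commented out here):
# syncthing > "$HOME/syncthing.log" 2>&1

echo "$STTRACE"
```

Set this way, the flags also survive restarts, unlike toggling them in the web GUI.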


Could it be that the PVR is deleting the .ts recording because its own database thinks it needs to be deleted? Or is it a case of the recording still being on the PVR, but the ST copy getting deleted?


The latter - same drive, different partition/folders, and as soon as I manually move the newly recorded .ts to the scratch folder, the file disappears into the ether :smiley: Luckily, I’ve already got these episodes DVR’d, so I can experiment without worrying about data loss :wink:

Edit: I just checked (just in case) the NUC-based HTPC (recording via a SiliconDust tuner) to see whether it’s somehow starved for storage, or whether Plex is allowed to delete files for any reason, but it’s all fine → 12 terabytes are free and available, and Plex can’t nuke a file for any reason (either automatically or per some pre-approved criteria).

@Audrius - given that I’m observing this sort of behaviour in real time, the 300-line limit really shouldn’t be an issue. I’ll probably set those debug flags if tonight’s episode doesn’t trigger the weird behaviour, because my spidey sense is telling me that this may be happening only when ST gets started.

Aaaand it happened again, but this time the model and scanner debug flags “caught” the deletion (I’ll copy the relevant log snippets here, with the more complete log attached as ST Deletion Bug.txt (36.0 KB)):

2020-05-05 10:59:11 to hash: <Filename>.ts File{Name:"\.ts", Sequence:0, Permissions:0644, ModTime:2020-05-05 01:09:56.1094371 +0300 EEST, Version:{[{4W7BJT3 1}]}, Length:6919791012, Deleted:false, Invalid:false, LocalFlags:0x0, NoPermissions:false, BlockSize:4194304, Blocks:[]}

2020-05-05 10:59:12 real to hash: <Filename>.ts

2020-05-05 11:00:41 sendreceive/default@0xc00ca26000 Deleting file Subfolder<Filename>.ts

Specific conditions when this happened the last time around: straight after a restart (after upgrading to 1.5.0), the file was “fresh” from the DVR’s folder (manually moved), and after a rescan it got auto-nuked. From what I could see, at that point ST wasn’t connected to any of the other devices it’s installed on. The exact same file with any difference in the name gets through without a hitch. BTW, if I were a betting man, I’d wager that for some reason, during the initial re-initialization of ST, in some rare cases it only checks against the name of a file, as opposed to matching it against a known file hash.

Edit: Filename/Folder has been redacted here, for the sake of brevity, but it should be easy to spot the original ones in the log.

Yeah, I think I can see the sequence of events that would cause this. It requires that the database was wiped on 4W7BJT3 at some point?

  • A file exists
  • It gets deleted, other devices know about this
  • The database gets wiped for whatever reason
  • We get an entry for the deleted file from another device, and we don’t act on it because we don’t have the file nor any historical record of it so there is nothing to delete
  • The file gets resurrected, and we treat it as a new never-seen-before file
  • Because it’s never seen before it gets assigned a new, blank version
  • But the delete entry from another device had a higher version number, so we delete the file
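The last two steps come down to how version vectors compare. A minimal, illustrative sketch in Python (not Syncthing’s actual code; the device IDs and counter values are made up) of why the surviving delete entry beats the resurrected file’s fresh, blank-started version:

```python
# A version vector is a mapping of device ID -> change counter.
# An entry "wins" if it is greater in at least one component and
# not smaller in any; if each side is greater somewhere, it's a conflict.

def compare(a: dict, b: dict) -> str:
    devices = set(a) | set(b)
    a_greater = any(a.get(d, 0) > b.get(d, 0) for d in devices)
    b_greater = any(b.get(d, 0) > a.get(d, 0) for d in devices)
    if a_greater and b_greater:
        return "conflict"
    if a_greater:
        return "a wins"
    if b_greater:
        return "b wins"
    return "equal"

# Delete record received from another device, made before the DB wipe
# (hypothetical counter values):
remote_delete = {"4W7BJT3": 2}

# The resurrected file, scanned as a never-seen-before file after the
# wipe, starts its counter from scratch:
local_new = {"4W7BJT3": 1}

print(compare(remote_delete, local_new))  # "a wins" -> the delete is applied
```

Under this comparison the stale delete looks strictly newer than the re-introduced file, so the sync logic dutifully deletes it, exactly matching the observed behaviour.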

It’s clearly a bug, regardless.


Well, this is unfortunate, because I was hoping it was just a blunder on my end that could be easily fixed :slight_smile: . The explanation is definitely plausible, though, because the storage was completely overhauled between the initial file deletion and the current issues, with all synced content being recreated from the other connected ST machines. Oh well, for the time being I’ll mitigate this by enabling file versioning on all folders (the important stuff is already backed up and in send-only mode) and just wait for a more permanent solution down the line.

I just hope that I’m the first one to be bitten by that bug, because it does have the potential to do damage that’s not readily apparent to unwary users (quite coincidentally, the same users who treat ST as a substitute for a proper backup solution).

There are other data loss scenarios when the database has been lost or wiped, but this is the first time I’ve heard about this specific one.


Regardless of this being a bug, and hopefully a fixable one, this just exemplifies again how everyone should be more careful recommending database resets - they should really be treated as a last-resort measure.


I agree; the problem is that this can apparently happen even when you don’t actually touch the database and just go for a routine storage upgrade (which makes this bug a bit annoying to pin down), because if it only occurred after a DB reset or other tinkering, that would at least have pointed at a potential fault.

At any rate, with a decent deployment it’s nothing unrecoverable; the problem is the people who assume that ST is a backup solution in itself (despite the FAQ explicitly saying that it’s not for that use case) and raise a stink when something goes bad XD

I think this should fix it, in practice, for most people:


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.