Syncthing deleted 67000 files, how do I restore?


I have no frigging idea why ST decided to do a massive deletion, this is beyond fucked up to me. Why? Because I did nothing to cause this. I am guessing that I will be blamed, or my hardware will be blamed for it; regardless, I am in deep doodoo now.

Both nodes are Linux (one is 32-bit, the other 64-bit). They both run the latest versions.

Now I have the .stversions folder with 67000 files with ~ tildes in the names. How do I restore all those files back to their original names? I do not know how to remove that portion from the file names. I am hoping that whoever came up with that naming convention had an idea about how to rename files back to their original state.



Since you're on Linux, you can do a batch rename using standard tools, something like:


find /yourpath/ -type f -regextype posix-awk -regex '.*~[0-9]{8}-[0-9]{6}.*' -print0 |
while IFS= read -r -d '' file; do
  mv -- "$file" "${file::-16}"
done

The script is based on the naming pattern used by staggered versioning; I don't know if other versioning modes create the same filenames in .stversions. The script above recursively searches the given path and its sub-directories for files named somename~12345678-123456 (the numbers represent date-time), and cuts off the last 16 characters of the name (the ~ and everything after it).

I'd suggest copying everything to some temp directory and running the script on the copy first, just to make sure it works properly and doesn't interfere with ST.
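If you want to verify the renames before touching anything, a dry-run variant can just print the planned moves. This is a sketch assuming the same suffix-style `name~YYYYMMDD-HHMMSS` tag as above; `/yourpath/` is a placeholder for your copy of .stversions:

```shell
# Dry run: print each planned rename instead of executing it.
# Assumes the suffix-style "name~YYYYMMDD-HHMMSS" pattern described above;
# /yourpath/ is an example placeholder.
find /yourpath/ -type f -regextype posix-awk \
    -regex '.*~[0-9]{8}-[0-9]{6}.*' -print0 |
while IFS= read -r -d '' file; do
  printf '%s -> %s\n' "$file" "${file::-16}"
done
```

Once the printed list looks right, swap the printf line for `mv -- "$file" "${file::-16}"` to actually rename.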

I had a similar issue of ST wiping out ~1TB of data some time ago. It was on a Windows machine; the shared folder with 1TB of data resided on an external HDD, and for some reason the USB connection dropped while the OS still reported it as connected: in the file browser you'd see the disk as connected, but only previously read files would be shown. The rest of the files were not shown until the disk was reconnected.

So ST would see the .stfolder file, assume from that that everything was OK, and delete stuff from the other nodes to match the currently visible state of this HDD. Then, when the HDD was reconnected, it wiped stuff from it to match the other nodes.


Thanks, I will try it.

I wonder why ST does not have some kind of brakes: wiping half of the data on a drive with many thousands of files is probably not what the user wants. Maybe it should wait for confirmation in the browser before going ahead with such deletes; maybe anything over 10% deletions in a repo that has thousands of files should be alarming to Syncthing.


Because it would be an annoyance for people who do that intentionally.


No it would not. Which one is more annoying? ST deleting thousands of files in a frenzy, or ST telling you, dude, I am about to delete 80% of your share, is that OK with you?

If I had had multiple versions of these files I would have been …ed, because it is much harder to figure out what to restore when you have multiple versions. How am I supposed to look at some 70000 files with multiple versions and restore the proper ones? I lucked out because I caught this before it was too late, and I had only a single version per file.

And this kind of situation does not happen often; I have been using ST for years and this is the only time I have really needed such a feature.


Well, Syncthing doesn't go about deleting files unless the files were gone locally, so I think fixing the cause is better than putting users in rooms with soft walls and asking them to double-confirm everything as a band-aid.


Let's presume that I want to fix the issue so it does not empty out my share. That entails foreknowledge of future hardware, software, network, etc. errors/issues, which is impossible to have ahead of time. How can the user know of a future issue that will purge his share like that?

If I had known that ST was going to wipe most of my share, I naturally would have done something to prevent it. I want to fix the issue so it does not happen at all, for sure.

Two-way syncing is dangerous. There's no free lunch.

My customers and I have several use cases where a large quantity, or all, of the files in shared folders are deleted in one go. They are also expected to sync without any sort of manual intervention, because that is what Syncthing is meant to do. If a bug caused this, it certainly needs fixing, but I do not want Syncthing to require constant babysitting to function.


That's the thing with any syncing software - it syncs stuff. The same would happen with Dropbox as well (I have witnessed several instances where one person fat-fingers something and then a team of 10 people lose their files). I agree with Audrius here. The expectation with background/pseudo-realtime/automated sync is that it'll sync. That's it. Without bothering the user with prompts each time the delta crosses some threshold.

That being said, I don't see any problem with having some non-default option that would enable such thresholds. But someone would have to be willing to invest in this feature.

I'm empathetic to your issue, and as I mentioned, I have faced similar issues as well. We humans are bad at anticipating risk factors. So always keep versioning on, and using some extra backup software won't hurt either :wink:

It is not syncing though, as you all claim; this is not syncing, this is just purging. It is purging because the deletions did not come from the user; ST somehow decided by itself that it needed to purge the share. It is not the result of the user's action. If that were the case, then yes, ST should do that. I am saying that what ST synced is not the result of the user's action.

Yes it is. Syncthing does not replicate user actions (whatever those would be) on different systems; it replicates files - that's all it does. So unless the entire folder (the .stfolder file in the root) disappears, Syncthing does what it is intended to do: it syncs. If syncing means purging most of the data, it does that happily - and it should. Whether it was a user deleting without thinking, another piece of software cleaning up, a hardware error, … really doesn't matter. Just do your backups and you are safe from these eventualities. This applies to anything, and even more so when more systems/people are involved due to syncing.


Use BTRFS with snapper, get your hourly snapshots, profit. You never know when you are going to delete the wrong file by mistake.

The versioning schemes with renames are for keeping multiple copies of synced files, perhaps for restoring an old version at some future time. They are not meant for, or convenient for, a full restore.

The “trash can” scheme is the opposite. It only keeps the last version, without any renaming. Restoring an entire structure is a simple copy.
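As a sketch (the paths here are example placeholders, not your actual share), restoring a whole tree from a trash-can-style .stversions is just a recursive copy back into the folder:

```shell
# Trash-can versioning keeps original file names, so a full restore
# is a plain recursive copy; /data/shared is an example placeholder.
cp -a /data/shared/.stversions/. /data/shared/
```

`cp -a` preserves modes and timestamps; the trailing `/.` on the source copies the contents of .stversions rather than the directory itself.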

Neither is suitable as a backup as you cannot restore to a given point in time.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.