Problems with renamed directory

While Syncthing 0.14 was running on my main PC and laptop (within the LAN), I started renaming subfolders of a synced folder. Each subfolder contains a large amount of photos and videos; in total, about 300 GB.

I changed the naming scheme twice, so I ended up renaming the subfolders twice. Now, while syncing is still happening, I notice the 960 GB SSD inside my laptop is suddenly over 750 GB full (it was at 350 GB before).

During the sync I could clearly see folders with both the original name and the new name on my laptop, so everything was being copied over. When syncing finished, the folders with the old name were gone.

More proof: on the laptop, .stversions contained hundreds of GB of “deleted” folders and their contents, even though the files were merely renamed on the server side.

Conclusion: Syncthing did not detect the renames and simply copied the entire contents over, ending up with double the amount of data (or triple in some cases), which the storage on the receiving end needs to be ready for. This means a huge amount of unnecessary disk I/O and (temporary) storage consumption.

This is on Ubuntu Budgie 20.10, BTRFS filesystem on laptop and MergerFS pool of BTRFS drives on server.

edit: another downside of deleting the files in the renamed folder (moving them to .stversions) and uploading a new version of each file to the folder with the new name: although the file is identical to the deleted file, you now store the same file twice. This is why my laptop SSD was filling up.

Is this by design/expected behaviour of Syncthing?

I just did a clean test on the laptop: I renamed 10 folders in a subfolder /sync/2005/. Then on the workstation, /sync/.stversions now contains 10 folders with the old names.

I am using trash can file versioning. Syncthing doesn’t recognize the difference between a rename and a delete, so it uses the trash can… doubling the amount of data (and not using reflinks… so no benefit from the btrfs filesystem).
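For context on the reflink point: this is a rough sketch (not anything Syncthing actually does) of what a reflink-aware copy could look like on Linux. The `FICLONE` ioctl and the fallback logic are my assumptions; on filesystems without reflink support (e.g. ext4) it simply falls back to a plain byte copy.

```python
# Hypothetical sketch: clone a file via the Linux FICLONE ioctl so the copy
# shares extents with the original on btrfs/XFS, falling back to a normal
# copy on filesystems that don't support reflinks.
import fcntl
import shutil

FICLONE = 0x40049409  # Linux ioctl request: clone extents from src fd to dst fd


def reflink_copy(src: str, dst: str) -> bool:
    """Try a reflink clone; fall back to a byte copy. Returns True if cloned."""
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        try:
            fcntl.ioctl(fdst.fileno(), FICLONE, fsrc.fileno())
            return True
        except OSError:
            pass  # e.g. EOPNOTSUPP: filesystem cannot reflink
    shutil.copyfile(src, dst)  # plain data copy as fallback
    return False
```

On btrfs the clone shares extents with the original, so a versioned copy made this way would consume essentially no extra space.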

0.14 is 3-4 years old at this point, I don’t think it’s worth spending time debugging this. I suggest you upgrade and see if that improves things.

That was a typo, I am on v1.14; on the laptop it is installed via the Syncthing repository, on the workstation it runs in Docker with the official image.

Also, renaming is not a special operation in Syncthing: it’s a create followed by a delete. Hence renames are still versioned, so having things in the versioning directory is not unexpected.

OK, so if this is expected behaviour, then it makes total sense. I assumed Syncthing would be able to recognise a rename action, but I can imagine that is difficult.

It can, in most cases, recognise a rename, but the decision to version is a conscious one: if both sides are not operated by the same person, you’d see the file disappear, and it would be unexpected if the “deleted” (from the user’s perspective) file did not get versioned.

Ah ok with that example it actually makes a lot of sense.

Also, renames are detected at the file level, not the folder level, so the fact that both directories existed throughout the operation doesn’t really say much: we simply pre-create the directories for every file we’re about to operate on.

I just came across this article explaining how Resilio handles rename operations:

To quote:

There are several scenarios possible but here is the ideal scenario:

  1. You rename file example.txt to example2.txt on your peer (peer A).
  2. On the remote peer B Sync detects that example.txt file is missing on peer A and moves the file into the Archive.
  3. After some time, Sync on peer B detects that example2.txt file appeared on peer A. Sync checks if there is a file with the same hash in the Archive folder. If there is such a file Sync puts it back with a new name.

This flow allows avoiding unnecessary re-transmission of the data and saves bandwidth. Thus, you need to have the Archive option enabled, otherwise files will be re-synced again.
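The archive-lookup step in the quote can be sketched roughly like this; the function names and on-disk layout are invented for illustration and don’t correspond to Resilio’s actual implementation:

```python
# Toy sketch of archive-based rename recovery: before downloading a "new"
# file, look for an archived file with the same content hash and move it
# back under the new name instead of transferring the data again.
import hashlib
import os
import shutil


def file_hash(path: str) -> str:
    """Hash a file's content in chunks (SHA-256 used here for illustration)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def restore_from_archive(archive_dir: str, wanted_hash: str, dest: str) -> bool:
    """If an archived file matches wanted_hash, move it to dest instead of
    re-downloading. Returns True when the rename was recovered locally."""
    for name in os.listdir(archive_dir):
        candidate = os.path.join(archive_dir, name)
        if os.path.isfile(candidate) and file_hash(candidate) == wanted_hash:
            shutil.move(candidate, dest)
            return True
    return False  # not in the archive: the data must be transferred
```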

Would this method perhaps qualify as feature suggestion for Syncthing?

It’s similar to how it works already, minus the “archive” part, which implies files are not deleted when you delete them (for some amount of time, which doesn’t seem to be specified).

We rely on inotify or a scan to detect both the deletion and the creation of a file. If we find a pair with a matching hash, we send them across next to each other in terms of “updates”, so that the other side can infer that it’s a rename.
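A minimal sketch of that pairing idea, with invented names and simplified to a single hash per file (this is an illustration of the concept, not Syncthing’s actual code):

```python
# Given the deleted and created paths from one scan (each mapped to its
# content hash), match them by hash so a delete+create of identical data
# can be reported as a rename.
def pair_renames(deleted: dict[str, str], created: dict[str, str]):
    """Returns (renames, real_deletes, real_creates); renames is a list of
    (old_path, new_path) pairs whose content hashes matched."""
    by_hash: dict[str, list[str]] = {}
    for path, h in deleted.items():
        by_hash.setdefault(h, []).append(path)

    renames, real_creates = [], {}
    for path, h in created.items():
        if by_hash.get(h):
            renames.append((by_hash[h].pop(), path))  # matched: infer a rename
        else:
            real_creates[path] = h  # genuinely new content

    # anything left unmatched really was deleted
    real_deletes = {p: h for h, ps in by_hash.items() for p in ps}
    return renames, real_deletes, real_creates
```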

OK that means the behaviour I am seeing should not happen… How can I debug why this doesn’t work as described?

I think I already covered above why it happens?

Otherwise please be more specific what you don’t expect to happen.

Sorry, this is confusing. I described what happened: I renamed a folder (a subfolder of a Syncthing sync folder) with 350 GB of data on one peer. Instead of this rename action being recognised, Syncthing thought I had deleted the folder and created a new one.

This resulted in the folder with the “old” name ending up in my Syncthing trashcan and 350GB of data being pumped over to my laptop.

This folder is only shared between two devices: a desktop running Ubuntu 20.10 and a laptop running Ubuntu 20.10. Both devices are on, and the desktop is always on. Both have a single Ubuntu user account named Asterix (UID/GID 1000) with similar permissions.

Both use BTRFS filesystem.

The way I interpret your feedback, this should not have happened; Syncthing can recognise renames. I also cannot imagine this behaviour is normal, as lots of people would face issues. I do understand there can be cases where a decision to version is made, but it is unclear to me why it happened in this case. I simply renamed a folder.

I have stopped using Syncthing since then because this behaviour really scared me; if I or my parents did this again, the laptop could fill up completely overnight.

I am just wondering, if I want to investigate further (basically doing a similar action on a smaller folder as a test), what to look for.

What you described, the folder ending up versioned, is expected, and I explained why.

I am not sure what you mean by “pumped”, but I would expect no data to be transferred over the network; the operation would still take time, though, as it has to copy each file for the purposes of versioning.