Does Syncthing support mv?

If I do mv on a big file in a cluster, renaming it to a new name, will the file be resynced as a new file, or will Syncthing recognize that the file was renamed? And if I rename a directory with a lot of files, what should happen then? Will all the files be transferred again?

Syncthing treats files as a series of blocks. When synchronizing a file, Syncthing will get a list of all of the blocks in that file. If the block is already available on the local system (as part of an existing file), the block will not be transferred over the network.
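
To make that concrete, here is a rough Go sketch of the block hashing (illustrative only, not Syncthing’s actual code; the 128 KiB block size matches Syncthing’s classic default, and the function name is made up):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const blockSize = 128 << 10 // 128 KiB, Syncthing’s classic block size

// hashBlocks splits a file into fixed-size blocks and returns the
// SHA-256 hash of each block. Two files that share content share
// block hashes, which is what makes local reuse possible.
func hashBlocks(path string) ([][sha256.Size]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var hashes [][sha256.Size]byte
	buf := make([]byte, blockSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			hashes = append(hashes, sha256.Sum256(buf[:n]))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return nil, err
		}
	}
	return hashes, nil
}

func main() {
	hashes, err := hashBlocks(os.Args[1])
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d blocks\n", len(hashes))
	for i, h := range hashes {
		fmt.Printf("block %3d: %x\n", i, h[:8])
	}
}
```

A renamed file produces exactly the same list of hashes, so every block can be copied from the old local file instead of being fetched from the network.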

Yes, that much I know. But what you are saying is a no, then: Syncthing does not support mv, but will instead resync all files (while reusing already-known blocks). In the latter example, renaming a huge directory, I guess that will be a lot of work?

I’m asking because I did a mv on a very large directory, and one day later my very old NAS is still not in sync, stuck at 99% CPU usage.

It depends how you define “resync”. No, it will not copy those files over the network. Yes, it will have to do a small amount of work for each file to detect that it is a rename, but I don’t see how that could be avoided.

The system where the rename/mv is performed could detect this, given that the inode stays exactly the same. However, this is perhaps not a feature to prioritize in Syncthing, since it is a rather rare use case and things work as they are.
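
For illustration, something like this could read the inode number on a Unix system (a hypothetical sketch, not anything in Syncthing; it won’t even compile on Windows, which already hints at the portability problem):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// inode returns the inode number of a file. A mv within the same
// filesystem keeps the inode, so comparing inodes before and after
// is one way the local machine could detect a rename.
func inode(path string) (uint64, error) {
	info, err := os.Stat(path)
	if err != nil {
		return 0, err
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return 0, fmt.Errorf("no inode information for %s", path)
	}
	return st.Ino, nil
}

func main() {
	ino, err := inode(os.Args[1])
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: inode %d\n", os.Args[1], ino)
}
```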

Don’t forget that inodes are meaningless in a cross platform world.

When indexes arrive, we prioritize additions before deletions while processing them. When we need to perform an addition, we check whether there is a matching deletion and, if so, perform a rename shortcut. You can see these by enabling the STTRACE=model environment variable.
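
Roughly, the idea looks like this (a simplified sketch with made-up types and names, not the actual model code):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// FileInfo is a simplified stand-in for an index entry: a name plus a
// hash identifying its full block list. (Made-up types, not the real
// index structures.)
type FileInfo struct {
	Name       string
	BlocksHash []byte
	Deleted    bool
}

// applyIndex processes additions before deletions. If an addition’s
// block list matches a pending deletion’s, the file is renamed on
// disk instead of being downloaded: the “rename shortcut”.
func applyIndex(updates []FileInfo) {
	var adds, dels []FileInfo
	for _, f := range updates {
		if f.Deleted {
			dels = append(dels, f)
		} else {
			adds = append(adds, f)
		}
	}
	consumed := make(map[int]bool)
	for _, add := range adds {
		renamed := false
		for i, del := range dels {
			if !consumed[i] && bytes.Equal(add.BlocksHash, del.BlocksHash) {
				// Same content vanishing under one name and appearing
				// under another: treat it as a rename.
				if err := os.Rename(del.Name, add.Name); err == nil {
					consumed[i] = true
					renamed = true
				}
				break
			}
		}
		if !renamed {
			fmt.Println("would pull blocks for", add.Name)
		}
	}
	// Deletions not consumed by a rename are real deletions.
	for i, del := range dels {
		if !consumed[i] {
			os.Remove(del.Name)
		}
	}
}

func main() {
	// Demo: “old.txt” is deleted and “new.txt” is added with the same
	// block hash, so the file is renamed rather than re-downloaded.
	os.WriteFile("old.txt", []byte("same content"), 0o644)
	h := []byte{0x01} // pretend hash of the block list
	applyIndex([]FileInfo{
		{Name: "new.txt", BlocksHash: h},
		{Name: "old.txt", BlocksHash: h, Deleted: true},
	})
	if _, err := os.Stat("new.txt"); err == nil {
		fmt.Println("rename shortcut applied: old.txt -> new.txt")
	}
}
```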

But this means that you do support mv, in contrast to what canton7 wrote? If there is a matching delete, then the file is merely renamed?

If I understand you right, this should also work for a directory rename, since all files in the old directory are “deleted” and added in the new directory?

Yes.

Now a bit of real-world experience… No, renaming a directory does not lead to any reuse. On the other nodes, all data under the directory’s former name gets deleted, and a huge resync of all the data follows. What you can do is shut down Syncthing, rename all the directories on all nodes manually, and run Syncthing again; that worked very well for me. This experience dates to around August 2016, the time of this conversation. If Aundrius was describing the abilities of a newer minor version, cool; it might work as he described with the current version of Syncthing.

Renaming between folders (the Syncthing concept) can have the effect you describe. Renames within a folder should stay renames, with no extra transfer.

That is not true. If you rename FolderA to FolderB (the OS concept), it’s possible you’ll get removal index changes before additions, at which point, when the additions arrive, the files are already gone. This is due to filesystem walking being done in alphabetical order.

We don’t walk like that, though. We first walk for new files, then scan for deletes. So barring very unfortunate timing (doing the mv at exactly the wrong point during a scan), we should always pick up new files and directories before the deletes, and index sending is now ordered based on that as well.
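
Roughly, the ordering works like this (a simplified sketch; the real scanner compares against its database and file metadata rather than a plain map):

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// scan works in two phases: first it walks the filesystem and reports
// files that exist (new or changed entries come first), then it
// reports known files that were not seen, i.e. deletions. This
// ordering lets a receiver see the “add” half of a rename before the
// “delete” half.
func scan(root string, known map[string]bool) (adds, dels []string, err error) {
	seen := make(map[string]bool)

	// Phase 1: walk for files that exist on disk.
	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() {
			seen[path] = true
			adds = append(adds, path)
		}
		return nil
	})
	if err != nil {
		return nil, nil, err
	}

	// Phase 2: anything known but not seen is a delete.
	for path := range known {
		if !seen[path] {
			dels = append(dels, path)
		}
	}
	return adds, dels, nil
}

func main() {
	os.MkdirAll("data", 0o755)
	os.WriteFile(filepath.Join("data", "new-name.txt"), []byte("hello"), 0o644)
	known := map[string]bool{filepath.Join("data", "old-name.txt"): true}

	adds, dels, err := scan("data", known)
	if err != nil {
		panic(err)
	}
	fmt.Println("additions:", adds) // the new name shows up first
	fmt.Println("deletions:", dels) // the old name follows
}
```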

There may be caveats around this in relation to filesystem notifications… I’m not sure how those are ordered.

Well, all I know is that a simple rename of a directory inside one shared folder on one node caused a resync of a few GBs over a rather slow line. I didn’t mind when it was a matter of dozens of MBs, but it happened more than once this way. What happens when you change just the upper/lower case in a name, without otherwise changing it, is even funnier (again, a previous minor version and a Linux/Windows Syncthing cluster): the end result was a deleted file in every case I tried.

Since then, when I need to reorder things, I don’t try it. I just shut down all Syncthing instances, make the changes everywhere, and restart them with all devices paused until all are done scanning. That works flawlessly and is definitely worth the potential lost time. But that’s the only issue I’ve ever had with Syncthing; you’re doing a great job!

This is known and only expected to work reliably in a Unix-only cluster (OS X probably excluded), because all device filesystems have to be case-sensitive.