Interesting article I wanted to share about our lord saviour copy_file_range:
Lovely system call, in concept and implementation.
Nice read; the linked article about backporting fixes and the (missing) QA is also worth a look: Kernel quality control, or the lack thereof [LWN.net]
Could be used for this?
In fact we use it to create versions when available and configured to do so.
What is it you’re doing that you keep bringing up reusing data from versions? In most scenarios I can imagine this doesn’t seem terribly useful.
Not the person you asked, but in my case this comes into play when renaming/moving very large folders. The original files usually get deleted on the remote device quicker than the new (i.e. renamed/moved) folder finishes scanning, and reusing versions prevents downloading them again.
That might be the case. Of course people can set up their systems as they like, but separate folders are really separate. To me this case is similar to two SMB or NFS shares from the same server, or separate ZFS volumes, or separate file systems in general. Moving files between them on a client will generally entail downloading and uploading, not a simple server-side move. If this is a concern, you're better off architecting things so that your routine operations don't involve moves across share / folder boundaries.
I used to have fewer, bigger folders in the past, but it didn't really work out for me. I need more fine-grained control over what to share with whom, and that was impossible then. I also run some software that processes tons of files, deleting and recreating them on each run. Syncthing often syncs the deletions faster than the next batch of files is created (mostly exactly the same files as the previously deleted ones), which forces their redownload on other devices.
I have fixed this in Windows now by using the External Versioning to send all deletions to the Recycle Bin, while also adding the Recycle Bin folder itself to Syncthing (as unshared). This way, it reuses data from it instead of downloading everything again.
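For anyone wanting to replicate this: Syncthing's external versioning runs a user-supplied command with `%FOLDER_PATH%` and `%FILE_PATH%` placeholders. A sketch of a Windows setup along these lines (the script name and the PowerShell recycle call are my assumptions, not details from the post):

```shell
:: Configured as the external versioning command in the Syncthing folder settings:
::   cmd /c "C:\scripts\recycle.cmd" "%FOLDER_PATH%" "%FILE_PATH%"
::
:: recycle.cmd: send the deleted file to the Recycle Bin instead of removing it.
:: Microsoft.VisualBasic's FileSystem.DeleteFile supports SendToRecycleBin.
powershell -NoProfile -Command "Add-Type -AssemblyName Microsoft.VisualBasic; [Microsoft.VisualBasic.FileIO.FileSystem]::DeleteFile((Join-Path '%~1' '%~2'), 'OnlyErrorDialogs', 'SendToRecycleBin')"
```

The Recycle Bin location itself would then be added as a separate, unshared Syncthing folder so its blocks are available for reuse.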