.stignore System Questions

Hi. Recently, someone posted a suggestion that the .stignore system be able to ignore/include files based on modification time, or in other words based on the age of the file. At the same time, I was thinking about ways to adaptively limit the amount of data kept on local “client” machines (treating, say, a NAS as the “server”, even though Syncthing is peer-to-peer). Initially I was thinking of resurrecting the SyncthingFUSE project to create a cached virtual filesystem, but after seeing the other suggestion, I thought it might be easier to look into a melding of these two ideas: for a given directory, rather than the most recently used files existing on the client, the newest files would exist on the client, up to a disk usage limit set by the user. I thought maybe the .stignore system could be used for this, so I downloaded the source code.
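To make that concrete, here’s a rough sketch of the selection I have in mind (nothing to do with Syncthing’s actual internals; the file names and the budget are made up). It just sorts by modification time and keeps the newest files until a disk budget is used up; sorting by last access time instead would give you a most-recently-used variant:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// fileInfo is a stand-in for whatever metadata the real index provides.
type fileInfo struct {
	Name    string
	Size    int64
	ModTime time.Time
}

// pickNewest decides which files should live on the client: newest first,
// stopping once the user-specified disk budget would be exceeded.
func pickNewest(files []fileInfo, budget int64) (keep, exclude []fileInfo) {
	sorted := append([]fileInfo(nil), files...)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].ModTime.After(sorted[j].ModTime)
	})
	var used int64
	for _, f := range sorted {
		if used+f.Size <= budget {
			keep = append(keep, f)
			used += f.Size
		} else {
			exclude = append(exclude, f)
		}
	}
	return keep, exclude
}

func main() {
	now := time.Now()
	files := []fileInfo{
		{"new.raw", 4 << 20, now},
		{"older.raw", 3 << 20, now.Add(-24 * time.Hour)},
		{"ancient.raw", 5 << 20, now.Add(-30 * 24 * time.Hour)},
	}
	keep, exclude := pickNewest(files, 8<<20) // 8 MiB budget, made up
	fmt.Println("keep:", keep)
	fmt.Println("exclude:", exclude)
}
```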

But first I decided to see how the .stignore system currently works, and from experimentation I found out a few things:

  1. If I have a file that is synced, and then add it to the local .stignore file, the file does stop syncing, but it also remains on the client, which I wasn’t expecting. If I then delete the file on the local machine, everything seems fine. But if I then remove that filename from .stignore, the delete suddenly becomes recognized and gets propagated to the other machine. This seems a bit dangerous. I initially expected the behavior to be more like Dropbox, where when you exclude a directory (using selective sync) it immediately disappears from the local system and is treated as if it had never been synced in the first place, so you don’t propagate deletes back to the other systems if you re-add it later.

  2. Given this behavior, in order to implement something like what I was thinking, it seems that the behavior of the .stignore system would have to be changed significantly, so that files are removed from the local system as soon as they are excluded and are treated as if they had never been synced, so they can be re-included without that being interpreted as a delete (see the sketch after this list). This makes me think that it might be too big of a change, and perhaps I should consider going back and attempting to resurrect SyncthingFUSE.
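To be clear about the semantics I’d want, here’s a toy model (this is not Syncthing’s real code or data structures, just an illustration): excluding a file removes it from disk and from the local index, so re-including it later can only look like a fresh download, never like a delete.

```go
package main

import "fmt"

// toyFolder is a made-up model of one synced folder; it is not how
// Syncthing actually tracks state, it just illustrates the semantics.
type toyFolder struct {
	remote map[string]string // what the other device has (name -> content)
	local  map[string]string // what exists on disk on this machine
	index  map[string]bool   // what this machine believes it is tracking
}

// exclude removes the file locally AND forgets it was ever tracked, so a
// later re-include cannot be misread as a local delete.
func (f *toyFolder) exclude(name string) {
	delete(f.local, name)
	delete(f.index, name)
}

// include starts tracking the file again; since the index has no memory of
// it, the only possible action is to fetch it from the remote side again.
func (f *toyFolder) include(name string) {
	if content, ok := f.remote[name]; ok {
		f.local[name] = content
		f.index[name] = true
	}
}

func main() {
	f := &toyFolder{
		remote: map[string]string{"big.iso": "data"},
		local:  map[string]string{"big.iso": "data"},
		index:  map[string]bool{"big.iso": true},
	}
	f.exclude("big.iso") // gone locally and forgotten; the remote copy is untouched
	f.include("big.iso") // comes back as a fresh download, not as a delete
	fmt.Println(f.local, f.remote)
}
```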

-D

Not sure what the question is, but yes, what you suggested is a big change. It’s probably easier to write something new that acts like a cache that speaks the same protocol.

SyncthingFUSE isn’t really a cache in the way you’re expecting. The fact that it has a cache is just a side effect.

Thanks for the reply. Sorry about the poor wording. I think I started out planning to ask some questions, but it ended up being more of a description of things. If there’s any question, it might be “Is this the right way to go about this?”, which might have “no” as an answer.

I haven’t really looked at the SyncthingFUSE code yet. I’ve done some coding in the past, but I’ve never used Go or Git, so I’ve been getting familiar with those. Also, SyncthingFUSE is so out of date that it probably doesn’t work with current versions of Syncthing, so it will take time to figure out.

What I would try to achieve is something similar to what pCloud does in their proprietary FUSE driver. It starts out as just a view of what’s on the server (as most FUSE filesystems are), but then downloads (and subsequently syncs) files on demand, keeping them on the local system up to a user-specified disk space limit. The files are kept in a cache which persists after the FUSE filesystem is unmounted.
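In code, what I imagine looks roughly like the following (this is only a sketch of the idea; the cache directory, the budget, and the fetch function are all placeholders, not anything pCloud or Syncthing actually exposes): a lookup checks the on-disk cache first, downloads on a miss, and then evicts the oldest entries until the cache fits the user’s disk space limit again. Because the cache lives in a normal directory, it survives unmounting.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// cache keeps on-demand copies of remote files under dir, never using more
// than maxBytes of disk. fetch is a placeholder for however the real thing
// would pull file contents from the remote side.
type cache struct {
	dir      string
	maxBytes int64
	fetch    func(name string) ([]byte, error)
}

// Open returns the local path for name, downloading it on first access and
// then trimming the cache back under the disk budget.
func (c *cache) Open(name string) (string, error) {
	path := filepath.Join(c.dir, name)
	if _, err := os.Stat(path); err == nil {
		return path, nil // cache hit: already on disk, survives remounts
	}
	data, err := c.fetch(name)
	if err != nil {
		return "", err
	}
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return "", err
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return "", err
	}
	c.evict()
	return path, nil
}

// evict removes the oldest files until the total size fits maxBytes.
// (A real version would track last access; mtime is a stand-in here.)
func (c *cache) evict() {
	type entry struct {
		path  string
		size  int64
		mtime int64
	}
	var entries []entry
	var total int64
	filepath.Walk(c.dir, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return nil
		}
		entries = append(entries, entry{p, info.Size(), info.ModTime().Unix()})
		total += info.Size()
		return nil
	})
	sort.Slice(entries, func(i, j int) bool { return entries[i].mtime < entries[j].mtime })
	for _, e := range entries {
		if total <= c.maxBytes {
			break
		}
		os.Remove(e.path)
		total -= e.size
	}
}

func main() {
	c := &cache{
		dir:      "./stcache", // made-up cache location
		maxBytes: 1 << 30,     // 1 GiB budget, also made up
		fetch: func(name string) ([]byte, error) {
			// placeholder: the real version would ask the remote device for the data
			return []byte("contents of " + name), nil
		},
	}
	if p, err := c.Open("Photos/2024/img_0001.jpg"); err == nil {
		fmt.Println("cached at", p)
	}
}
```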

I think taking SyncthingFUSE and making it work the way you want, or using it as a starting point to build what you want, is easier than trying to bolt something on top of Syncthing.

SyncthingFUSE sounds almost like what you described, but I don’t understand the purpose of these systems in general; they are flaky, slow, and don’t work half of the time.

I kind of agree. I’ve used FUSE drivers for things like Google Drive and they are clunky for sure. The pCloud one is the first one I’ve used that actually seems to work smoothly, though I don’t have thousands of files on there yet.

Also, I’m still not sure a FUSE filesystem is the best approach. The ultimate goal, for me, is something that provides an adaptive subset of remote files (up to a specified maximum disk space) being synced to the local machine while still letting you list all of the files. The adaptive mechanism could be most recently used, or maybe newest; you could probably make arguments for either.

In a sense, it would be an evolution of, or maybe an adjunct to, the .stignore system, which is static and doesn’t allow you to list the directories/files that aren’t synced.
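For example (again just a sketch with made-up types), a listing could come from the full index while flagging which entries are actually present locally:

```go
package main

import "fmt"

// indexEntry is a made-up stand-in for one file in the folder's global index.
type indexEntry struct {
	Name  string
	Size  int64
	Local bool // true if the file currently exists on this machine
}

// listAll prints every file the folder knows about, marking the ones that
// are remote-only, which is exactly what static ignore patterns can't show.
func listAll(index []indexEntry) {
	for _, e := range index {
		marker := " (remote only)"
		if e.Local {
			marker = ""
		}
		fmt.Printf("%10d  %s%s\n", e.Size, e.Name, marker)
	}
}

func main() {
	listAll([]indexEntry{
		{Name: "Photos/2025/img_0001.jpg", Size: 4 << 20, Local: true},
		{Name: "Photos/2019/img_0042.jpg", Size: 3 << 20, Local: false},
	})
}
```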