Is it required to let the Syncthing client scan?

Hey all! I am new here and new to Syncthing in general. I did a quick Google search and didn't seem to find anything on this; as you'd expect, it's hard to come up with search terms to narrow this down…

A bit of a weird use case for some background: I am an amateur photographer and share the passion with my father, whose house I frequent. I built him a great photo editing workstation, so all of my Lightroom catalogs and images are on that machine. Problem is, they also need to be on my computer at my apartment… Recently I built a FreeNAS box and am running Syncthing in a Windows VM (a jail didn't work, long story, Windows permissions issues with chmod, although if there is a solution to this PLEASE LET ME KNOW). I am using one of its folders simply as an always-on sync target for both workstations to sync to. The files will never be directly edited on the FreeNAS box. They are edited on the local workstations' SSDs, synced with my FreeNAS dataset, and then Syncthing pushes the changes to keep both systems mirrored.

Question is, since my VM is scanning 120k+ files with a total size of over 1 TB, it's a bit arduous to say the least… (I wish the jail option had worked; SMB-sharing this is pretty bad.) But depending on the answer to this question, a lot of CPU cycles might be saved!

Since no files will ever be directly edited in the FreeNAS directory, and changes will only ever be pushed from another Syncthing client, do I actually have to let the VM's instance scan the dataset, or will Syncthing inherently know the file structure based on the incoming syncs? I hope this is clear, but in short I am trying to avoid the need to routinely scan the dataset from the VM's instance. I am thinking that since the data is only ever changed by an externally pushed sync, the VM's instance will get all of the updates it needs as it builds the synced file structure. Am I correct in this thinking?

I know I could test this theory myself in a roundabout way, but I am just trying to understand how the software works a bit better.


Yes, it needs to scan them. Otherwise it cannot determine what is different compared to another device.

Your use case doesn't sound weird at all, that's a pretty common setup, except that I didn't really understand the point of the Windows VM on the FreeNAS box: if you can set that up, you surely have the means to run the Syncthing binary directly on FreeNAS? A quick Google search suggests there is even an integration of Syncthing for FreeNAS (syncthing-plugin).


The first scan, yes, but once it has determined what its directory looks like, doesn't it "relearn" the structure as it writes new data?

There is a way to run Syncthing in a jail in FreeNAS, which is much more elegant. The problem is, it looks like the way Syncthing handles writing files involves chmod, and FreeNAS explicitly denies the use of chmod on a Windows-permissioned folder. Since I run a Windows-exclusive network, I have all my folders set to Windows permissions rather than Unix permissions, which chmod would require. Thus… the Windows VM, which is horrible because it has to do all of its syncing over a virtual network adapter instead of a mounted filesystem like it would have had in the jail.

There is an ignore permissions option.

Not sure what you mean, but depending on platform support it might use a filesystem watcher or rescan every 60 seconds, whereas subsequent rescans do not involve rehashing most of the time.

Data written by Syncthing is not hashed again afterwards by reading it, no. It was hashed when written.

I did see the ignore permissions option, but considering I can't chmod a file even as root in the shell, I didn't bother to check whether Syncthing could… After a bit of reading, it looks like there is a potential issue if you chmod a file that is currently open. I am by no means well versed enough in this field to be more specific; I am still very new to Unix.

Right, so, if it's hashed when written, and the only changes that come in are from writes performed by Syncthing itself, would it still require a rescan? This instance of Syncthing will never have a situation where a file is changed by a user. The only changes that will occur will be from Syncthing itself.

Since both workstations are not always on, this system, which is always on, just acts as a way to sync between the two workstations.

I’m not sure what you’re after really since you are asking about not requiring something which is essentially a no-op. If there are no changes there is nothing to rescan. If something ever does change and you don’t rescan there will be trouble. If you’re worried about the occasional check of file timestamps by all means increase the scan interval or enable the filesystem watcher. I run with 86400 (one day) on boxes where the files are “guaranteed” to “never” change locally.
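As a sketch of that suggestion: both the rescan interval and the filesystem watcher are per-folder settings in `config.xml` (again, the folder ID and path here are hypothetical), and both can be changed from the web GUI as well. The watcher requires a reasonably recent Syncthing version:

```xml
<!-- Rescan once a day; let the filesystem watcher pick up changes in between -->
<folder id="photos" path="C:\Sync\Photos" rescanIntervalS="86400" fsWatcherEnabled="true">
</folder>
```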

The files will never be changed by anything outside of Syncthing. The dataset is not even user accessible, since it's just a location on the server where Syncthing can receive updates from workstation 1 to later push to workstation 2 once client 2 is turned on.

Workflow is:

Workstation 1: import pictures, edit them in Lightroom.

Syncthing on workstation 1 scans the "photo library" folder and syncs changes to the FreeNAS VM. Workstation 1 is then turned off (it's only on over the weekend).

A few days later when I get back to my apartment during the week, Workstation 2 will be turned on, and will receive updates from freenas VM.

Workstation 2: edit some more pix in Lightroom, and sync those changes back to the FreeNAS VM. Next weekend, when Workstation 1 is on again, those changes will flow back to it.

But I guess I should follow your advice and set it to scan once a day. No real reason not to, I suppose.