Best practice: many/large folders

Hi,

Quick question: I want to share one huge (100 GB) folder (“Album”, say) from my “laptop” with one server (“cloud”), and parts of that folder (“Album/2019” and “Album/misc”, say) with a different machine (“desktop”).

To make things easy, I added each subfolder separately (Album/2000, Album/2001, etc.), which comes to more than 20 folders, and shared each manually. Later I realised that I could have shared Album with cloud but only Album/2019 with desktop (is it really so?).

So my first question: is it really so? My second question: performance-wise, is it better to split things up as I did (into 1–10 GB folders), or to make a single large folder (100 GB, with some shared subfolders)? Or perhaps it doesn’t matter at all?

Thanks!

It is so. And I don’t think it matters much, but generally fewer folders means less overhead, so I’d expect it to be slightly lighter on resources.
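
For illustration, a sketch of how the relevant entries in config.xml could look for that kind of setup (folder IDs, paths and device IDs below are placeholders; normally you’d set this up through the GUI rather than by editing the file):

```xml
<!-- Sketch only: IDs, paths and device IDs are placeholders. -->
<!-- "Album" as one folder, shared with the cloud device... -->
<folder id="album" label="Album" path="/home/user/Album" type="sendreceive">
    <device id="CLOUD-DEVICE-ID"></device>
</folder>
<!-- ...and "Album/2019" as a separate folder, shared with the desktop device. -->
<folder id="album-2019" label="Album 2019" path="/home/user/Album/2019" type="sendreceive">
    <device id="DESKTOP-DEVICE-ID"></device>
</folder>
```

The two folder definitions are independent as far as Syncthing is concerned; it doesn’t treat the nested one specially.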

Thank you, this is helpful.

On the other hand, something rather obvious that I missed regarding performance: if the selectively shared subdirectories are large, it’s clearly less performant to share both them and the parent dir, as the subdirs get scanned twice.

I see, this is good to know. But perhaps I can make the full rescan interval of the large folder very long, so that most of the time only the subfolders (which can indeed be quite large) will be scanned.
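
For example (a sketch only; the path, device ID and the one-day interval below are made up), I imagine raising the folder’s rescanIntervalS while keeping the filesystem watcher enabled:

```xml
<!-- Sketch: path, device ID and interval are example values. -->
<!-- Full rescan once a day; the filesystem watcher picks up changes in between. -->
<folder id="album" label="Album" path="/home/user/Album" type="sendreceive"
        rescanIntervalS="86400" fsWatcherEnabled="true">
    <device id="CLOUD-DEVICE-ID"></device>
</folder>
```

The same settings are also exposed per folder in the GUI’s advanced options.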

And I should stop using “scan” as a general-purpose term: scanning, as in checking for changes by going over all files, is no longer a headache in normal setups (i.e. not folders on network storage or the like), as it happens infrequently thanks to filesystem watching, which detects changes immediately. What I was referring to is the hashing: when a change is detected, Syncthing needs to generate cryptographic hashes of the data. With the parent dir shared as well, this happens twice, and there’s no way around it.

Also: a larger database, and more reads from and writes to it.