Syncthing Distribution Points and Method of Operation

Is there a concept of a node as a distribution point for other nodes?

i.e. I have a read-only folder sending to a read/write folder on another instance, only so that it can distribute the files and changes from the original read-only folder to a number of target destinations.

In other words, I have a hub-and-spoke configuration where my main file share is a spoke, sending files and changes to the hub (my read/write folder above), which in turn sends changes to the remaining spokes (all write-only). Importantly, no user changes are made to files on my hub instance of Syncthing (it exists to store files as a backup, and… distribute them), nor are any user changes made to files on my write-only spoke instances.

Is there an idea or concept of distribution points? Or perhaps a better question is… how does the Syncthing “cluster” work? In my (or any) configuration, does the cluster rely on read/write file activity (i.e. File System Watchers) and periodic scanning to replicate changes among the cluster, OR does the system replicate changes by communicating file changes via the internal DB and distributing them accordingly?

In other words… if I make my hub “distribution point” a read-only folder and/or disable periodic scanning and the File System Watcher, will my spoke files still make it to the numerous destination spokes? Is my hub watching and scanning unnecessarily because file changes are communicated via DB updates and Syncthing cluster communication, etc. (given I make NO changes to files on the hub itself)?

In other, other words… is the cluster intelligently aware of file activity on the cluster as a whole, OR does it simply rely on the correct wiring/configuration of folders among cluster members, plus FS watchers and scanning?

Thank you!

Changes propagate without needing to be scanned/discovered on your central hub device. It can be receive-only with scanning disabled.
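For example, here is a sketch of configuring the hub’s copy of the folder that way through the REST API (the address, API key, and folder ID below are placeholders, not your actual values):

```python
# Sketch: make the hub's folder receive-only, turn off the filesystem
# watcher, and disable periodic scanning via Syncthing's REST config API.
# The address, API key and folder ID are placeholders for illustration.
import requests

API = "http://localhost:8384"
HEADERS = {"X-API-Key": "MY-API-KEY"}

resp = requests.patch(
    f"{API}/rest/config/folders/shared-folder",
    headers=HEADERS,
    json={
        "type": "receiveonly",      # the hub never originates changes
        "fsWatcherEnabled": False,  # no filesystem watcher on the hub
        "rescanIntervalS": 0,       # 0 disables the periodic rescan
    },
)
resp.raise_for_status()
```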

Yes, it will just work; you can do this exactly as you say. If the spokes happen to have individual connectivity, they will communicate directly with each other, but in any case, if the “hub” (as you call it) is always reachable by all machines, it will act as a relay of sorts, and with a proper versioning setup it can also serve as a rudimentary backup.
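For the “rudimentary backup” part, a sketch of enabling simple versioning on that same folder (same placeholder address, API key, and folder ID as above; the keep count is arbitrary):

```python
# Sketch: keep a handful of old copies of replaced/deleted files on the hub.
import requests

API = "http://localhost:8384"
HEADERS = {"X-API-Key": "MY-API-KEY"}

requests.patch(
    f"{API}/rest/config/folders/shared-folder",
    headers=HEADERS,
    json={"versioning": {"type": "simple", "params": {"keep": "10"}}},
).raise_for_status()
```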

This was exactly the answer I was hoping for.

However, if I set my hub folder to Receive-Only and No Scanning, file changes are propagated to one of my other attached Receive-Only spokes, but not the other.

All nodes are running 1.24.0

Both spoke Nodes are configured identically, except for

  • “Device Name”
  • “API Key”
  • “Sync Protocol Listen Addresses”
  • “Incoming/Outgoing Rate Limit”

Both spoke Receive-Only Folders are configured identically, except for:

  • “Folder Path”
  • Some File Versioning Specifics

What log files should I be focusing on?

I think the first thing to investigate is what is “out of sync”. Check the UI on the machine that isn’t getting the updates and confirm that it is actually reaching one of the other machines that does have all the data and is “up to date”. Check the out-of-sync items in the UI and see if you can get any idea why they are out of sync. Screenshots may help us figure out what’s going on. It should work as described above, so something isn’t right.
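If it’s easier to check from the command line, here is a sketch of those same checks against the REST API of the spoke that isn’t getting updates (address, API key, and folder ID are placeholders):

```python
# Sketch: run against the spoke that isn't receiving updates. Shows which
# devices it is actually connected to and which items it still "needs"
# (i.e. is out of sync on) for the folder.
import requests

API = "http://localhost:8384"
HEADERS = {"X-API-Key": "MY-API-KEY"}
FOLDER = "shared-folder"

# Connection status per remote device.
conns = requests.get(f"{API}/rest/system/connections", headers=HEADERS).json()
for device_id, info in conns["connections"].items():
    print(device_id, "connected" if info["connected"] else "NOT connected")

# Items this spoke still needs for the folder.
need = requests.get(
    f"{API}/rest/db/need", headers=HEADERS, params={"folder": FOLDER}
).json()
for item in need.get("progress", []) + need.get("queued", []) + need.get("rest", []):
    print("needs:", item["name"])
```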


Agreed. Numerous files don’t seem to be updating on this spoke, regardless of the changes mentioned.

Is there a way to reset the spoke DB and force a full rescan? Might that help?

What screenshots would be most helpful?

The spoke’s Syncthing UI is showing the cluster is up to date when changes occur, but I don’t see new files, etc…

Disregard, seems completely self-inflicted … ID10T error on my part :confused: Thank you!
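For reference, forcing a full rescan, or erasing a single folder’s index so it gets rebuilt, can both be done through the REST API. A sketch (address, API key, and folder ID are placeholders; the reset call may restart Syncthing):

```python
# Sketch: trigger an immediate rescan of a folder, or (heavier) erase the
# index data for just that folder so Syncthing rebuilds it from scratch.
import requests

API = "http://localhost:8384"
HEADERS = {"X-API-Key": "MY-API-KEY"}
FOLDER = "shared-folder"

# Force an immediate rescan of the folder.
requests.post(
    f"{API}/rest/db/scan", headers=HEADERS, params={"folder": FOLDER}
).raise_for_status()

# Heavier option: drop this folder's index database and let it rebuild.
requests.post(
    f"{API}/rest/system/reset", headers=HEADERS, params={"folder": FOLDER}
).raise_for_status()
```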


I think disabling the watchers and periodic scanning fixed a major performance issue for me.

Previously, when a file would be updated, folders with many files (400k?) would seem to thrash and be marked “scanning” for a while. Now, with watchers and scanners disabled on my “hub,” if an update happens, the UI immediately jumps back to “Up To Date.”

Could the cluster “relay” behavior AND leaving the watchers/scanners enabled on my “hub” node be interfering with each other? (Or at least causing a lot more work for Syncthing, as it synchronizes one way and then detects, via the watcher/scan, what is basically the same file change?)

Not sure; it depends a bit on your hardware. As far as scanning goes, I don’t think folders need frequent rescanning, so I set a rescan interval of 86400 seconds, i.e. once per day. Just in case some change gets missed by the watcher, a daily scan will help prevent unexpected conflicts.
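Setting that daily safety-net interval via the REST API looks something like this (same placeholder address, API key, and folder ID as in the earlier sketches):

```python
# Sketch: daily safety-net rescan (86400 seconds) on a folder.
import requests

requests.patch(
    "http://localhost:8384/rest/config/folders/shared-folder",
    headers={"X-API-Key": "MY-API-KEY"},
    json={"rescanIntervalS": 86400},
).raise_for_status()
```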

Not sure what other people’s feelings are but this is what I do.