I have syncthing deployed on approximately 30 servers at the moment to keep a common ‘deployment’ folder up-to-date. Each of those 30 servers is at its own separate office. They are connected via VPN tunnels, and they only sync on the private network.
The system works well, but I’ve noticed something about syncing new files that appears to be inefficient.
When I drop a file into our deployment folder at the main site, the sync takes about the same amount of time to complete at every site. Download speeds are unrestricted, but upload speeds are restricted to 2 Mbit/s.
The amount of time it takes appears to be the amount of time it would take for me to manually copy the file to all 29 other locations at a rate of ~2 Mbit/s.
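Just to put rough numbers on that observation (the 100 MB file size here is hypothetical, purely for illustration):

```python
# Back-of-envelope: how long the master alone takes to push one file to 29
# peers at a 2 Mbit/s upload cap. Numbers are illustrative, not measured.
file_size_mb = 100          # hypothetical 100 MB file
upload_mbit_s = 2           # per-site upload cap
peers = 29

one_copy_s = file_size_mb * 8 / upload_mbit_s   # seconds to upload one copy
total_s = one_copy_s * peers                    # master uploads every copy itself
print(round(one_copy_s), round(total_s / 60))   # -> 400 193 (seconds, minutes)
```

So even a modest 100 MB file ties up the master for over three hours if it has to upload every byte 29 times, which matches what I'm seeing.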
It seems like syncthing takes a chunk of the file and then feeds it to each of the other 29 servers. Then it takes the next chunk and feeds it…then the next, until all 29 servers have received a complete copy of the file from the main server. This means that when the main site distributes a file, it does all the work of uploading and distributing by itself.
Maybe my non-scientific observations are wrong.
If not though, wouldn’t it be more efficient to chunk up the file and send it ‘round robin’ and let nodes download from each other?
Chunk 1 goes to Server 1, Chunk 2 goes to Server 2, etc…until at least one complete copy of the file exists on the network…and then the master can continue with Chunk 2 to Server 1, Chunk 3 to Server 2, etc…
The moment a server receives a chunk, other servers can download it from them instead of the master.
Bringing a new location online would obviously be faster and more efficient too: the new location connects to all 29 existing servers and receives chunks as fast as they can be sent.
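Here's a toy, round-based model of the two strategies, just to sketch the idea (this is purely illustrative and not how Syncthing is actually implemented). Each "round", every node that holds data may upload one chunk to one other node, modelling a per-site upload cap:

```python
# Toy simulation: master-only distribution vs. peers re-sharing chunks.
# One upload slot per node per round models a per-site upload cap.

def rounds_source_only(n_peers, n_chunks):
    """Master uploads every chunk to every peer itself, one chunk per round."""
    return n_peers * n_chunks

def rounds_swarm(n_peers, n_chunks):
    """Peers re-share chunks they already hold, roughly 'rarest first'."""
    have = [set() for _ in range(n_peers)]   # chunks held by each peer
    master = set(range(n_chunks))            # master holds the whole file
    rounds = 0
    while any(len(h) < n_chunks for h in have):
        rounds += 1
        uploads = []
        busy = set()                         # peers already receiving this round
        for sender in [master] + have:       # each node gets one upload slot
            for i, h in enumerate(have):
                if i in busy or h is sender:
                    continue
                missing = sender - h
                if missing:
                    # send the chunk held by the fewest peers (rarest first)
                    chunk = min(missing, key=lambda c: sum(c in x for x in have))
                    uploads.append((i, chunk))
                    busy.add(i)
                    break
        for i, chunk in uploads:             # apply transfers after the round
            have[i].add(chunk)
    return rounds

print(rounds_source_only(29, 10), rounds_swarm(29, 10))
```

With 29 peers and a 10-chunk file, the master-only approach costs 290 upload slots on the master, while the swarm model finishes in far fewer rounds because every peer that holds a chunk immediately becomes another uploader.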
Anyway, this isn't really a support issue or a feature request. Just curious whether I'm right about how the sync works.