Syncthing Architecture and number of active connections

Hello team,

I have a couple of different questions:

  1. Previously I had one master with 76 GB of data and 380 servers added to it; most of them were already synchronized, and a few were still syncing. On top of that, I have a daily file transfer from the Syncthing master to 16 servers. I experienced a lot of delay in transfer times: it used to take 5-9 hours to transfer 12-20 MB files. Recently I shut down 323 servers, so now only 57 active servers connect to the master, and I no longer see any delay; files are transferred within the expected time, e.g. less than 1 minute for a 12 MB file and 11-15 minutes for a 400 MB file. When I had many connections, I also used to see connection breakages between clients and the master, but now I don't see such errors. Could someone please explain what makes the difference here?

  2. How exactly should the architecture be built to get a robust, quick, and reliable solution? One master with n clients attached to it (in my case, 1200 clients to be added to the master)? Or do I need to set it up so that I have one master of masters -> multiple masters (say 100 clients per master) -> clients?


Please search the forum for topology; this has mostly been answered before.

Thanks for the response, Audrius. The forum says the better way to go is a snowflake topology, but nowhere have I found a setup with more than 1000 servers; the maximum I could see in the forum is 100. I am not sure, but somehow I feel a snowflake topology might slow down transfer speeds. Correct me if I'm wrong.

If you are looking for real-time file availability across thousands of machines, you are looking at the wrong solution; you are probably better off with a shared file system (be it NFS, or S3+FUSE if you can accept the latency in exchange for scalability).

https://data.syncthing.net/ says there is someone with 2.2k devices, yet I have no idea what their topology looks like. We target the "I run Syncthing at home between my N computers" use case (where N is in single digits), so it's not something we test for.

I have a test setup with 400 clients connected to one central point. A 50 MB file added centrally gets synced to the 400 clients at about 2 Gb/s; it takes a minute or so for everyone to be in sync. The limitation here (in my setup) is mostly that the 400 test clients are running on the same (beefy) VM, so I/O per client is limited.

So it can work perfectly fine, but the defaults are not great for this use case. You will need to turn off temporary indexes and progress updates, and reduce the number of pullers on the clients.
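For illustration, here is a minimal sketch of applying those tweaks through Syncthing's REST API, assuming a local instance on 127.0.0.1:8384, a Syncthing version with the `/rest/config` endpoints (1.12+), and placeholder values for the API key and folder ID. The field names (`progressUpdateIntervalS`, `disableTempIndexes`, `pullerMaxPendingKiB`) and the chosen limit are assumptions to be checked against your version's documentation, not a definitive recipe.

```python
# Sketch: apply the suggested tuning via Syncthing's REST API.
# Adjust API_KEY and FOLDER_ID for your own setup.
import requests

API = "http://127.0.0.1:8384/rest"
API_KEY = "your-api-key-here"   # GUI -> Actions -> Settings -> API Key
FOLDER_ID = "default"           # the shared folder to tune
HEADERS = {"X-API-Key": API_KEY}

# Fetch the full configuration as JSON.
config = requests.get(f"{API}/config", headers=HEADERS).json()

# Disable progress updates (global option); -1 turns them off entirely.
config["options"]["progressUpdateIntervalS"] = -1

for folder in config["folders"]:
    if folder["id"] == FOLDER_ID:
        # Stop exchanging temporary indexes for partially downloaded files.
        folder["disableTempIndexes"] = True
        # Limit outstanding pull requests (roughly "reduce pullers");
        # the value is in KiB and is an illustrative assumption.
        folder["pullerMaxPendingKiB"] = 8192

# Write the modified configuration back.
resp = requests.put(f"{API}/config", headers=HEADERS, json=config)
resp.raise_for_status()
```

The same options should also be reachable in the GUI under the advanced configuration, or by editing config.xml directly, if you prefer not to script it.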


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.