Syncthing capabilities and requirements with 1000+ users

Hey,

Our cluster will look like this: two or more (number currently unknown) send-only servers that hold all the shared folders, plus 1000+ receive-only clients/customers who each have access to only a few folders. The customers will use our app, which runs Syncthing in the background. They probably won't connect and start their downloads at the same time (though in the beginning we expect a higher number of concurrent downloads). In the long term we expect fewer than a hundred customers to be connected at the same time. The number of shared folders is somewhere between one and two hundred. Each shared folder should be around 100 GB and contain fewer than a hundred files.

Hopefully you guys can help us answer a few questions:

  1. Is Syncthing able to cope with a cluster of this many nodes? If you have personal experience, would you share it?
  2. What kind of hardware (RAM, CPU) should we use to serve this many clients without performance issues?
  3. Are there any issues we should expect with a cluster of this many nodes?
  4. Any advice for the Syncthing configuration?

Thank you!

Syncthing is bidirectional sync, and receive-only is only a local folder type; someone might modify the config, turn it into send-receive, and send trash to other people (if you are thinking of a mesh network). If not, and you are just sending some files from some device to many other devices, then Syncthing is overkill, as the server will have to store everyone's state of what they have, etc., when you don't really care. In that case a CDN or some sort of home-grown solution is probably a more sensible approach.

On the customer side, Syncthing will be configured to see only the servers, so the "clients" are not connected to each other in any way. And since all the folders on the server will be configured as send-only, nothing happens if someone tries to be tricky (because the server won't process any changes from the customers). The current solution is FTP based and has the same problem a CDN would have: the files are huge, and if the download stops just before finishing for whatever reason, the users will be really mad (understandably), especially if their subscription is limited/pay-as-you-go.
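To make the "all server folders are send-only" part concrete, here is a minimal sketch that audits folder types on a distribution server through Syncthing's REST API. It assumes a recent Syncthing version with the config endpoints (`/rest/config/folders`); the address and API key are placeholders, not anything from this thread.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// folder mirrors only the fields we need from /rest/config/folders.
type folder struct {
	ID    string `json:"id"`
	Label string `json:"label"`
	Type  string `json:"type"` // "sendreceive", "sendonly" or "receiveonly"
}

func main() {
	// Placeholder address and API key -- adjust for your own server.
	req, err := http.NewRequest("GET", "http://127.0.0.1:8384/rest/config/folders", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("X-API-Key", "YOUR-API-KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var folders []folder
	if err := json.NewDecoder(resp.Body).Decode(&folders); err != nil {
		log.Fatal(err)
	}

	// Flag any folder on the distribution server that is not send-only.
	for _, f := range folders {
		if f.Type != "sendonly" {
			fmt.Printf("WARNING: folder %q (%s) is %q, expected sendonly\n", f.Label, f.ID, f.Type)
		}
	}
}
```

Something like this could run periodically so a misconfigured folder is caught before a client can push changes back.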

Using Syncthing as a one-way, centralised distribution mechanism will not scale well, as that's not what it's built for. Yes, you can probably make it work, but resource usage will be unreasonable for what you are trying to do, as there is a large cost per client.

Yeah, we know this is not a perfect solution, but every solution we investigated comes with a trade-off, and for the features Syncthing offers the relatively large cost is a good deal.

Larger-scale clusters have been discussed before; here's one thread I found quickly: Scalability Question

Generally the scaling factor is the number of devices connected. 1.4.0 has an improvement that should greatly lessen the database growth per added remote device. However, other per-device overhead is still there. For that reason the usual advice is some kind of topology where a number of central nodes are interconnected and divide up the large number of clients (e.g. 5 central nodes, each connecting to 1000 separate clients, with the central nodes connected to each other).
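Purely as an illustration of that topology advice (Syncthing does not do this assignment for you), here is a small sketch that deterministically maps client device IDs onto a fixed set of hypothetical hub servers by hashing the ID, so each client only ever needs to be introduced to one hub. The hub names and device IDs are placeholders.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hubs is a hypothetical list of central Syncthing nodes; in the topology
// described above they would all be interconnected and share the folders.
var hubs = []string{"hub-1.example.com", "hub-2.example.com", "hub-3.example.com"}

// hubFor maps a client device ID to one hub, deterministically, so the
// assignment stays stable as long as the hub list does not change.
func hubFor(deviceID string) string {
	h := fnv.New32a()
	h.Write([]byte(deviceID))
	return hubs[int(h.Sum32())%len(hubs)]
}

func main() {
	// Example client device IDs (placeholders, not real Syncthing IDs).
	clients := []string{"DEVICE-AAAA", "DEVICE-BBBB", "DEVICE-CCCC"}
	for _, c := range clients {
		fmt.Printf("%s -> %s\n", c, hubFor(c))
	}
}
```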

Also https://www.kastelo.net/arigi/ might be of use to you (it's by @calmh's company; he is the founder and maintainer of Syncthing).
