I need to make a single folder containing several files, each a few tens of MB in size, available to several hundred Linux workstations connected behind high-latency, low-bandwidth mobile links (a 38 MB file took 25 minutes to transfer over rsync). No writable access is needed, read-only is sufficient. No internet access, just a closed network. No need for file history versioning.
Can Syncthing handle this use case reliably? Assuming, of course, that the “origin” point has sufficient uplink bandwidth to serve them all simultaneously. I could set up an rsync+cron-based bash script solution, but I’d rather use something already invented and tested.
This sounds like something Syncthing could handle to me. And handle very reliably.
I’m not, however, convinced that Syncthing would be faster than rsync for your use case. Given the network constraints, I’d suggest that you experiment with setting compression to “all data” and testing speed.
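For reference, compression is a per-device setting. A minimal sketch of what the relevant entry in config.xml (typically under ~/.config/syncthing/ on Linux) might look like, with the device ID and name invented for illustration; “always” corresponds to “All Data” in the GUI’s device dialog:

```xml
<!-- Hypothetical device entry on a workstation, pointing at the origin. -->
<!-- compression="always" compresses all data; the default "metadata"    -->
<!-- only compresses metadata messages.                                  -->
<device id="AAAAAAA-...-AAAAAAA" name="origin" compression="always">
    <address>dynamic</address>
</device>
```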
Forgot to mention that the file contents will never change, so rsync’s advantage of transferring only the differing blocks won’t apply here. Every now and then a whole new file gets introduced into the folder and older ones are deleted.
Given the number of devices and the overall simplicity of the requirements, you’re better off using rsync + cron (it doesn’t sound like even a script is necessary).
Configuring and maintaining Syncthing on hundreds of devices is a lot more work compared to setting up a cron job (which could even upgrade itself by downloading an updated crontab from your “origin” server).
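A minimal sketch of what that could look like on each workstation; the host name, paths, and schedule are all placeholders to adapt:

```bash
# Hypothetical crontab on each workstation (host and paths are placeholders).
# Pull the shared folder every hour; -z compresses over the slow link,
# --partial resumes interrupted transfers, --delete mirrors removals.
0 * * * *  rsync -az --partial --delete origin:/srv/shared/ /opt/shared/

# Optionally refresh this crontab itself from the origin once a day, so the
# schedule can be changed centrally without touching each device.
30 3 * * * rsync -az origin:/srv/crontabs/workstation.cron /tmp/workstation.cron && crontab /tmp/workstation.cron
```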
Syncthing devices do a status check on their peers every 5 minutes, so each remote workstation will be connecting over its mobile link to your “origin” server at least 288 times per day.
Agreed, and I’ve also observed that they ping the discovery server as well.
I’ll have to dig through the docs to see whether these intervals can be increased. I don’t mind the pings themselves, but setting them to something like once an hour would reduce the discovery server’s log flow.
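If the goal is mainly to cut traffic to the discovery server rather than to tune the ping interval itself, one option (a sketch, assuming the origin has a fixed address on the closed network) is to give each workstation a static address for the origin device and disable discovery entirely in config.xml:

```xml
<!-- Hypothetical workstation config.xml fragment: connect directly to the -->
<!-- origin instead of asking a discovery server where it is.              -->
<device id="AAAAAAA-...-AAAAAAA" name="origin">
    <address>tcp://origin.example.lan:22000</address>
</device>
<options>
    <globalAnnounceEnabled>false</globalAnnounceEnabled>
    <localAnnounceEnabled>false</localAnnounceEnabled>
</options>
```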