Pressing the limits? What do you all think about 6,000,000 files and 4TB of data, using Syncthing on FreeNAS to REPLICATE Windows Server shares of millions of files weekly, merely for the ease of snapshotted backups, ransomware-"protected" by the FreeNAS service?
In other words, what are the upper limits of what Syncthing has done, in practice?
There's one person here on the forum who I can recall discussing a large setup; otherwise I think the large deployments are generally corporate and internal, and seldom discussed publicly.
Generally I would say a lot of data is not necessarily a big issue. The scaling constraints are more often number of devices (tens good, hundreds require care, thousands bad) and churn (does all your 4TB of data change all the time? this is tricky).
You are referencing 9-year-old information, from a time when Syncthing wasn't even a year old. Based on old issues, Syncthing didn't even have a database back then (it has had one for almost 9 years now). You shouldn't trust any information from that time as still being accurate.
It is possible, but depending on your resources and I/O patterns it may or may not work satisfactorily for you (or not at all, like if you're trying to do this on a Raspberry Pi…).
This was ALL I could find… (and believe me, I looked everywhere…) We were using Synology Drive, but there was a cap of 500,000 files…
But if 9 years ago it was able to sync about 1,000,000 files (with no database), I assume Syncthing will not be the problem…
But you are right about the speed of the system. (Raspberry Pi) … It runs on a Synology, and during my testing I noticed the speed difference. The two recent Windows machines did a large sync in about 15 hours, where the Synology took about 2 days…
Question: are you familiar with what could go wrong?
I can imagine SLOW syncing… Or is it possible that syncing will not work while I believe it does? Or do I get some errors in the GUI or something? Then I know what to keep watching for…
Maybe as a side note, you could install some system monitoring if possible (e.g. a Telegraf/InfluxDB/Grafana stack). Or monitor by hand whether you blow up your memory and disk I/O with this huge deployment.
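For monitoring by hand, a rough sketch: Syncthing exposes a REST API (by default on `127.0.0.1:8384`, authenticated with the API key from the GUI settings), and `GET /rest/system/status` reports memory figures from the Go runtime. The field names (`sys`, `alloc`) and the 2 GiB threshold below are assumptions for illustration; check what your version actually returns before relying on them.

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:8384/rest/system/status"  # default GUI address
API_KEY = "your-api-key-here"  # copy from Actions > Settings in the GUI

MEM_LIMIT_BYTES = 2 * 1024**3  # arbitrary example threshold: 2 GiB


def check_memory(status: dict, limit: int = MEM_LIMIT_BYTES) -> bool:
    """Return True if the reported memory use exceeds the limit.

    'sys' is (assumed to be) the memory obtained from the OS and
    'alloc' the bytes currently allocated by the Go runtime.
    """
    return status.get("sys", 0) > limit


def fetch_status() -> dict:
    """Fetch /rest/system/status from a locally running Syncthing."""
    req = urllib.request.Request(API_URL, headers={"X-API-Key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example usage against a live instance (needs Syncthing running locally):
#   status = fetch_status()
#   if check_memory(status):
#       print(f"memory high: {status['sys'] / 1024**2:.0f} MiB")
```

You could run something like this from cron during the initial full scan and sync, which is when memory and disk I/O pressure will be at their worst.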