Hey all, I’m currently sharing a 500 GB Google Drive folder with ~800 users, and it’s growing all the time (both the size and the user count). For anyone who cares, it’s a folder of digital assets (textures, 3D models, etc.) for visual effects artists and game designers.
I’m wondering if Syncthing running permanently on a VPS could be a feasible replacement. I’ve only ever used it with a handful of devices at a time (syncing between my desktop and laptop, and some work projects between colleagues in the past).
The 800 users should only have read-only access to the folder, which I understand can be approximated by setting the folder type to “Send Only” on the VPS, but I don’t really know how this plays out in practice if some users accidentally edit or move files.
I’d also like to be able to upload files from my own computer to the VPS’s master (“Send Only”) folder, basically giving myself (and the VPS) write access while all other users are read-only. I don’t know if that’s possible, but one possible workaround would be to sync a separate private folder with the VPS and run a script to mirror it to the public one.
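For what it’s worth, that mirroring step could be as simple as an `rsync --archive --delete` cron job on the VPS. A rough Python equivalent is sketched below — the paths are hypothetical, and it naively re-copies everything on each run, so treat it as an illustration rather than something production-ready:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: make dst an exact copy of src, deleting extras in dst."""
    dst.mkdir(parents=True, exist_ok=True)
    keep = {p.relative_to(src) for p in src.rglob("*")}
    # Delete stale entries, deepest paths first so files go before their parent dirs.
    for p in sorted(dst.rglob("*"), key=lambda q: len(q.parts), reverse=True):
        if p.relative_to(dst) not in keep:
            if p.is_dir():
                shutil.rmtree(p)
            else:
                p.unlink()
    # Copy everything over (re-copies unchanged files too; rsync would be smarter).
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Hypothetical paths — the private folder synced from my desktop, and the
# public Send Only folder shared with the 800 users:
# mirror(Path("/srv/sync/private-master"), Path("/srv/sync/public-assets"))
```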
Then there’s the experience for the users - since Syncthing is P2P tech, I’m guessing that when a new user/device joins the cluster, they’ll be downloading the files from multiple other users, not just the VPS. Some users might not want to upload files to other users. They could set a connection speed limit, or just close the client, but if they do that regularly it defeats the purpose of syncing the folder in the first place.
Maybe it’s better to use something like Nextcloud for this, but I’m just wondering if it can be made to work.
It probably won’t scale as well as you’d hope, and the user experience will probably be annoying. I’d recommend something more like Nextcloud (without having tried it) or, at minimum, some sort of management layer on top of Syncthing for adding/removing devices etc. And you’ll probably need more than one VPS for the scaling.
You could add the folder twice on the VPS: once for you with read/write, and once for the others as Send Only.
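If I’m reading that right, it would mean two `<folder>` entries in the VPS’s config.xml pointing at the same path, with different folder IDs and types. A fragment along these lines (IDs, labels, path, and device IDs are all placeholders — Syncthing will also warn about the overlapping paths):

```xml
<folder id="assets-rw" label="Assets (admin)" path="/srv/sync/assets" type="sendreceive">
    <device id="MY-DESKTOP-DEVICE-ID"></device>
</folder>
<folder id="assets-ro" label="Assets (users)" path="/srv/sync/assets" type="sendonly">
    <device id="SOME-USER-DEVICE-ID"></device>
</folder>
```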
The users will only share with each other if they add each other’s devices to Syncthing, or if they set the VPS as an introducer.
Since every user can set their copy of the folder to whatever mode they want, they could set it to read/write. This wouldn’t change the data on your VPS, but every other user that user is connected to would get the changes they make.
As Syncthing holds the file list for every connected device in RAM, you would probably need a very, very, very, … big server to sync 500 GiB to 800 devices.
Not all of it is necessarily kept in RAM, but some things do behave badly with a lot of clients. Consider a large file that is changed and subsequently announced to 800 clients. All of them will need to download the file and make a number of block requests. By default each client will request up to 64 MiB of data, which needs to be buffered and encrypted and whatnot, so you can easily end up with 800 * 64 MiB * ~2.5 ≈ 125 GiB of concurrent block data in RAM. There are some controls in various places for things like this, and you can tune it, but it’s not all trivial and smooth.
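Spelling out that back-of-the-envelope calculation (the 64 MiB per-client figure and the ~2.5 overhead multiplier both come straight from the post above; the overhead factor is a rough guess, not a measured constant):

```python
clients = 800
in_flight_mib = 64   # default outstanding request data per client (from the post)
overhead = 2.5       # rough multiplier for buffering/encryption copies (from the post)

total_mib = clients * in_flight_mib * overhead
print(f"{total_mib / 1024:.0f} GiB")  # -> 125 GiB of concurrent block data
```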
If you can architect it like a multi-level tree or such, so that each device gets data from “above” and shares it with 20 or so clients “below”, then you’re probably perfectly fine. But then it’s no longer an easy “just hang 800 clients off the server” sort of setup.
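As a quick sanity check on the shape of such a tree (the fan-out of 20 is the figure suggested above; the rest is just arithmetic):

```python
import math

clients = 800
fanout = 20   # clients each device serves directly, per the suggestion above

# Devices needed in the tier directly above the 800 leaf clients:
mid_tier = math.ceil(clients / fanout)         # 40 relay devices
# Tree depth below the root server (e.g. 2 -> 40 -> 800 is one valid shape):
depth = math.ceil(math.log(clients, fanout))   # 3 levels
print(mid_tier, depth)
```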
Thanks guys, seems like it could be made to work the way I want, but something like Nextcloud is definitely a better solution.