I want to use Syncthing to sync 800 Linux clients.
I'm starting this discussion to see if anyone could help me a bit, or warn me if that's not possible.
My idea is: I have a master client; I stop Syncthing on it, modify the system as I want, and restart Syncthing, and the modification is pushed to the 800 machines.
The first questions I see:
Could I authenticate new clients without any graphical action through the web UI?
Or is there any way to have only one master config and ONE client config, the same one on all 800 clients? This would allow me to deploy the master system to a new machine without scripting any config modification or key generation on the new client.
Can I sync multiple dirs, or one top dir while excluding others (sync “/”, excluding “/dev /proc /sys /tmp /run”, etc.)?
What hardware configuration is needed on the master client to support 800 nodes (RAM, CPU)?
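For the exclusion part, I imagine something like this in a hypothetical .stignore for a folder rooted at “/” (assuming Syncthing's ignore-pattern syntax; the list of paths is just an example):

```
// Hypothetical .stignore for a folder rooted at "/" — sketch only
/dev
/proc
/sys
/tmp
/run
```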
If you have any ideas to help, thank you in advance.
The cleanest way here is to set the master as “introducer” and then add your new clients there. The other existing devices will then accept the new one.
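For reference, the introducer flag is set on the device entry in the client's config.xml (the device ID and name here are placeholders):

```xml
<!-- Fragment of a client's config.xml; ID and name are placeholders -->
<device id="DEVICE-ID-OF-MASTER" name="master" introducer="true">
    <address>dynamic</address>
</device>
```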
This is probably a bad idea. Syncthing does not synchronize file ownership. You probably want to select more specific directories to sync.
[quote=“eoli3n, post:1, topic:6629”]
What hardware configuration is needed on the master client to support 800 nodes (RAM, CPU)?[/quote]
No idea. This will depend more on the amount of data being synced, but for sure there is a memory hit to keeping track of 800 other devices. We normally recommend not doing that, but grouping the devices in smaller clusters instead, like a tree.
Oh… so I will not be able to use Syncthing. The idea was to use it to deploy and maintain my pool by syncing it, instead of deploying with a tool like Clonezilla or Chef / Puppet / Ansible.
As I have a single master client configuration deployed on different hardware, I thought it was a good idea to just modify a single client and automatically deploy the modifications over the network through BitTorrent.
I realize that the problem is not only with Syncthing, but with the BitTorrent protocol.
Syncthing doesn’t use the BitTorrent protocol. I am not sure why there is no support for file ownership, beyond no one wanting it enough to implement it, but Syncthing is made to be cross-compatible, and managing ownership adds to the difficulty of maintaining that.
Windows and Linux/Unix differ here, so it would be difficult to impossible to synchronize file ownership between those operating systems.
File ownership is maintained by the filesystem using a user and group ID. When you want to synchronize these IDs, you have to ensure that these IDs exist on each device and are consistent.
You introduce severe security/privacy issues when you have inconsistent user/group IDs among your devices. Imagine what happens when you assign the wrong user to your files (e.g. Bob is ID 1000 on device A, Alice is ID 1000 on device B, Bob is ID 1000 on device C).
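To make the hazard concrete, here's a small Python sketch (the passwd tables are invented for illustration) showing how blindly preserving numeric IDs misassigns ownership:

```python
# Sketch: the same numeric UID names different people on different machines.
# The passwd tables below are invented for illustration.

passwd_a = {1000: "bob"}    # device A: UID 1000 is Bob
passwd_b = {1000: "alice"}  # device B: UID 1000 is Alice

def owner_after_numeric_sync(uid: int, dest_passwd: dict) -> str:
    """Who ends up owning a file if we copy the numeric UID verbatim."""
    return dest_passwd.get(uid, f"<unowned uid {uid}>")

# A file owned by Bob (UID 1000) on device A...
uid_on_a = 1000
# ...lands owned by Alice on device B:
print(owner_after_numeric_sync(uid_on_a, passwd_b))  # alice
```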
This is IMO out of scope anyway. Syncthing is designed to run as a specific user who syncs his own files.
I have to say, putting Syncthing in control of / is rather terrifying to me.
Consider what will happen when you make a change on the “root” system and the changes are being propagated by Syncthing. During the entire duration of the sync, each system’s state is completely unpredictable and inconsistent.
Even if you took each system “out of production” during the sync in order to limit the damage, it’s possible that during the sync, the systems would become totally broken and never work again afterwards.
Syncthing is based on the operating-system agnostic Block Exchange Protocol. It really isn’t a good idea to mutate running systems block by block (or even file by file).
If you are interested in a better way of provisioning systems than chef / puppet / ansible, please look into GuixSD or NixOS.
Not entirely true: you could simply sync numeric IDs without caring whether they exist or not (see rsync --numeric-ids). Or you could do a name → ID lookup, with fallback to the numeric ID (rsync's default behavior).
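A rough Python sketch of that name-first, numeric-fallback resolution (both passwd maps are invented for illustration):

```python
# Sketch of rsync's default ownership mapping: look the owner up by name on
# the destination, and fall back to the numeric ID if the name is unknown.
# Both passwd maps are invented for illustration.

src_passwd = {1000: "bob", 1001: "carol"}          # source: uid -> name
dst_passwd_by_name = {"bob": 2000, "eve": 1000}    # destination: name -> uid

def map_uid(src_uid: int) -> int:
    name = src_passwd.get(src_uid)
    if name is not None and name in dst_passwd_by_name:
        return dst_passwd_by_name[name]  # the name exists on the destination
    return src_uid                       # fallback: keep the numeric ID

print(map_uid(1000))  # 2000 -> "bob" exists on the destination
print(map_uid(1001))  # 1001 -> "carol" doesn't, keep the number
```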
Still, what frightens me most is the need to run Syncthing as root to be able to chown / chgrp.
Which may introduce security/privacy problems, right. I do not recommend using this option at all. There was some discussion about this on the Arch Linux pacman-dev mailing list, which I cannot find atm. The problem is that you end up with files that have no existing user or group assigned. Once you create a new user (or group) with the same ID as your leftover files, the freshly created user (or group) is allowed to read/modify them. Maybe your package manager creates such a new user when you install some package, or… just be creative.
That’s the reason you would have to carefully ensure that everything is consistent with regard to the user:group IDs.
Running arbitrary services as root is also a situation which everybody should try to avoid. Maybe some cases make sense in a container, but I am not that deep into the container world…
I completely agree with you, and I actually only use it for some very specific backup scenarios, where the backup location is owned by and restricted to root. So yes, of course you need to be aware of the implications. But you can end up in similar situations with left-over files of removed users and newly created users as well. Windows' SID concept is better suited for things like that than a 16/32-bit integer.
And no, I would not recommend that syncthing implements this as well.