The Clone Army of Pi

Hi, I am looking for a P2P sync solution on a LAN between several cloned Raspberry Pis used as media players. The goal is to sync the media folder across the RPis inside my LAN. Syncthing seems to be a good match for the job.

But I am facing two problems:

  1. My RPi systems are regularly cloned, so I will end up with the same device ID on each one, which seems problematic. I could force the Pi to recalculate a new ID on first boot, but I was wondering: how is the Device ID calculated? Are there other system files involved? If so, will I still end up with duplicate IDs (since the systems are cloned)? What is the best way to force a new Device ID calculation on an existing Syncthing node?

EDIT → OK, I guess I just have to delete key.pem and cert.pem.
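A minimal first-boot sketch of that idea, assuming the default Linux config location ~/.config/syncthing (the path is an assumption; adjust for your image) and that Syncthing is stopped when this runs:

```python
#!/usr/bin/env python3
"""First-boot hook sketch: force Syncthing to regenerate its device ID
by deleting its key pair. The Device ID is derived from the TLS
certificate, so removing key.pem and cert.pem makes Syncthing create
a fresh identity on next start. Run only while Syncthing is stopped."""
from pathlib import Path


def reset_identity(config_dir: Path) -> list[str]:
    """Delete key.pem and cert.pem from the Syncthing config dir.

    Returns the names of the files that were actually removed, so the
    caller can tell whether this boot was the first one after cloning.
    """
    removed = []
    for name in ("key.pem", "cert.pem"):
        f = config_dir / name
        if f.exists():
            f.unlink()
            removed.append(name)
    return removed


# Example (assumed default path): reset_identity(Path.home() / ".config" / "syncthing")
```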

  2. I will use 30+ devices on my LAN, and the number will change a lot: new devices will come and go, especially if I regenerate the device ID every time a new SD card is cloned, as mentioned in 1). So it looks nightmarish to try to maintain every association manually! How could I maintain the associations between all devices in my LAN cluster without copying all the IDs between each other manually?

The cluster will be on a closed LAN only, for non-sensitive data, so I don’t actually care much about security. I would be happy with an “open sync” topology, as in “you are welcome to sync with us, no matter who you are”. → Is there a way to “wildcard” or auto-add device IDs, so that any device on the LAN will be allowed to sync automatically? → Or maybe a group ID that can be cloned with my images?

EDIT2 → I have a fixed machine at a fixed IP (let’s say 10.0.0.1) which will run Syncthing. If every cloned RPi on my LAN has 10.0.0.1 as an Introducer, will they be able to sync with each other too?

EDIT3 → the introducer trick does not solve the need to add each new RPi to the 10.0.0.1 device list. Is there a way to allow any new device on the Introducer side without manual intervention?

Any idea to achieve an easy-to-deploy army of clones would be much appreciated!

I am also willing to put my hands in the code if necessary; any clue as to where to look would be very helpful: I am new to this project :wink:

Thanks! Best regards, Thomas

I think you answered all of your own questions except the one about automatically adding new devices. There’s no built-in functionality for that, and I’m unconvinced we want that functionality. Typical solutions include using a config management system like Chef or Puppet to generate the config file with all device IDs, or using something that talks to the API to do the same.

Generally speaking, devices being recloned and changing device ID frequently doesn’t sound like something that will be fun to maintain and debug.

Also, every device that existed might leave residue in the database, just so that you are aware.

Hi, OK, thanks for your answers.

So if I add a new device, it will need a new device ID.

In my case, a new device entering the cluster will be pre-set with 10.0.0.1 (my master node) as introducer.

But I need to add the new device to the introducer’s device list.

Using the API, this would look like:

  1. GET /rest/system/config from master node (the introducer)
  2. Add the new device to the “devices” list received
  3. POST /rest/system/config back with the new config to the master node
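The three steps above can be sketched against the REST API; a minimal Python sketch, assuming the master’s GUI address, API key, and the device names used below are placeholders for your own setup:

```python
#!/usr/bin/env python3
"""Sketch: register a newly cloned RPi with the master node (the
introducer) via Syncthing's REST API. MASTER and API_KEY are
placeholders; the API key comes from the master's GUI settings."""
import json
import urllib.request

MASTER = "http://10.0.0.1:8384"
API_KEY = "REPLACE_WITH_MASTER_API_KEY"  # placeholder


def add_device(config: dict, device_id: str, name: str) -> dict:
    """Step 2: append the new device to the 'devices' list, unless the
    master already knows it (so re-running is harmless)."""
    if not any(d["deviceID"] == device_id for d in config.get("devices", [])):
        config.setdefault("devices", []).append(
            {"deviceID": device_id, "name": name, "addresses": ["dynamic"]}
        )
    return config


def register(device_id: str, name: str) -> None:
    """Steps 1 and 3: fetch the config, patch it, post it back."""
    headers = {"X-API-Key": API_KEY}
    # Step 1: GET the current config from the master node.
    req = urllib.request.Request(MASTER + "/rest/system/config", headers=headers)
    with urllib.request.urlopen(req) as resp:
        config = json.load(resp)
    # Step 2: add the new device to the received "devices" list.
    config = add_device(config, device_id, name)
    # Step 3: POST the modified config back to the master node.
    req = urllib.request.Request(
        MASTER + "/rest/system/config",
        data=json.dumps(config).encode(),
        headers={**headers, "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()
```

Each clone could call `register()` with its own freshly generated device ID on first boot; because `add_device` checks for duplicates, re-running the hook is safe.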

Does that sound right to you?

Thanks! Best,

Thomas

Yep

I’m curious as to why you’re not streaming all of the media data from a central server? Is it to do with cost, or something else I’m missing?

Hi Vincent, I have to sync the media locally on the RPis for various reasons:

  • Media will be played in sync (delta < 10 ms), which is way easier if the files are already on disk locally on every device. I implemented a way to sync clocks and commands, but I could neither build nor find a proper solution to sync streaming playback.

  • It’s a WiFi LAN with 30+ moving devices: bandwidth can’t be guaranteed at all times on every device. So preemptive media distribution is safer than “distribute on demand”.

For this preemptive media distribution, since the network has multiple antennas, a P2P torrent-like sync system seems more suited than a central media server, which would funnel all traffic through the main server and its antenna and globally slow down the media distribution.

The other solution I was looking at was to run rsync between each device and a media server on startup: it’s a simple and effective solution in my case, but with less “intelligence” than what Syncthing seems to offer.

I have another question for the Syncthing experts:

  • If I have a central introducer with several clients connected via this central device, what happens if this main introducer goes offline? Will the other devices continue to talk to each other?

Best, Thomas

Yes, absolutely. The introducer basically does just that: it introduces the devices that share the same folders to each other. Once they know each other, there’s no need for the introducer to be online for them to keep in sync.

Hi, thanks for your answers.

I have two other questions:

  1. The system used on this cluster of Raspberry Pis is read-only, but has a /data partition with read-write capability. Which parts of Syncthing must be moved (or symlinked) to this read-write partition? (I guess the log / database, but could you specify which folders, if it’s not too complicated?)

  2. The network has no Internet connection, so the time can’t be updated, and since the RPi has no RTC, it loses track of time on each reboot. We installed a working fake-hwclock, which ensures that we do not go back in time (the clock resumes where the RPi left off at the previous runtime, but it does not account for the time the RPi was off). That means the clock will never go backwards, BUT each device will have a very different time. Is that a problem for Syncthing?

Thanks !

Best Regards,

  1. Most of it; anything that can be overridden with the -home flag.
  2. It’s not ideal; you’ll probably get conflicts if they ever send their changes across. We don’t rely on clocks being in sync between devices, but we do require a sensible flow of time on the same device.
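For the read-only system, one option is `syncthing -home=/data/syncthing`; another is to symlink the default home into the read-write partition. A minimal sketch of the symlink approach, where both paths are assumptions for this example:

```python
#!/usr/bin/env python3
"""Sketch: keep Syncthing's writable state (config, certs, database)
on the read-write /data partition by symlinking the default home
there. Alternative: just start with `syncthing -home=/data/syncthing`.
The paths used here are assumptions; adapt them to your image."""
from pathlib import Path


def link_home(default_home: Path, rw_home: Path) -> Path:
    """Make default_home a symlink to rw_home so everything Syncthing
    writes lands on the read-write partition. Refuses to clobber an
    existing real directory (that data should be migrated first)."""
    rw_home.mkdir(parents=True, exist_ok=True)
    if default_home.is_symlink() and default_home.resolve() == rw_home.resolve():
        return default_home  # already linked; nothing to do
    if default_home.exists() or default_home.is_symlink():
        raise FileExistsError(f"{default_home} exists; migrate it first")
    default_home.parent.mkdir(parents=True, exist_ok=True)
    default_home.symlink_to(rw_home, target_is_directory=True)
    return default_home


# Example (assumed paths):
# link_home(Path.home() / ".config" / "syncthing", Path("/data/syncthing"))
```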

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.