I’m trying to build a fleet of these. It takes a really long time to scan, and syncing is sloooooow (12,000 items totalling 65GB, so kind of expected). But they seem to work. My whole cluster got out of whack and I had to wipe and start over (all indications are that it was self-inflicted).
These are the textbook use case for “receive only” folders. They typically have no users. I’m using them as tiny web servers (content needs to be pushed out, but is not generated on the device).
I’ve been using this for a few months and it works well too.
The only difference is that I’ve attached an external HD, but I had to use a “Y” cable to power it externally from the same power source used for the board… unfortunately, those tiny boards can only supply limited current on their USB ports.
(almost the same scope: a receive-only device to back up a remote folder)
Yeah, I’ve found the most reliable way is to use 128GB MicroSD cards. USB flash drives can work too, but many of them are not engineered for 24/7 operation and overheat.
I’d posit that this one is the smallest Syncthing device (at least per GB; it has 128GB onboard).
As I have 64GB on SD and 128GB on USB, I have 50% more storage. The USB drive is less than 50% of the volume of the encased Pi, so you aren’t beating me on your measure.
[edit] Mind you, if you stuck in a 128GB flash drive…
OK, how about 2x128GB
We can call it a tie if you like, since I have four of those flash drives and the Pi0 in the back has four USB ports, though I’m not sure it can power that many.
(also the smallest Syncthing cluster: all three of those are syncing the same data)
It’s not so much the amount of data, more how much it changes.
As you can see from my third post, when not doing very much it uses less than 1% CPU. This is a lot better than a couple of years ago; it’s been tweaked over time to make it more efficient.
If Syncthing has real work to do, it maxes out the CPU, especially when hashing and at startup.
So, providing you start with pre-copied data and don’t change 20% of it daily, then it’s worth a go.
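Since hashing is what pegs the CPU, a rough single-core SHA-256 throughput check gives a feel for how long scans might take on a given board. This is just ordinary Python hashing as a proxy, not Syncthing’s own startup benchmark (Syncthing hashes file data block by block with SHA-256):

```python
# Minimal single-core SHA-256 throughput check: a rough proxy for
# how fast this hardware could hash files during a scan.
import hashlib
import time

block = b"\0" * (1 << 20)   # feed the hash in 1MiB chunks
duration = 2.0              # run for roughly two seconds

h = hashlib.sha256()
hashed = 0
start = time.monotonic()
while time.monotonic() - start < duration:
    h.update(block)
    hashed += len(block)
elapsed = time.monotonic() - start

print(f"~{hashed / (1 << 20) / elapsed:.0f} MiB/s single-core SHA-256")
```

Dividing your folder size by whatever rate this reports gives a lower bound on scan time for that board, which is why pre-hashed, rarely-changing data is so much cheaper to keep in sync.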
My cluster is syncing two folders: one with 12,000 small files at 8.7GB, and one with 750 files at 65GB.
Given its performance I would not expect 2+ TB to work well, but it should work. The initial sync takes about 48 hours; thereafter, it seems to keep up fine. I’ve tested adding and removing a couple of GBs with no issues.
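As a quick sanity check on those numbers (73.7GB total across the two folders, synced in roughly 48 hours), the effective initial-sync throughput works out to well under 1MB/s:

```python
# Back-of-envelope initial-sync throughput from the figures above
# (pure arithmetic, not a measurement).
total_gb = 8.7 + 65          # both folders combined
hours = 48                   # reported initial sync time

throughput_mb_s = total_gb * 1024 / (hours * 3600)
print(f"~{throughput_mb_s:.2f} MB/s effective")  # ~0.44 MB/s
```

Extrapolating linearly, 2TB at that rate would take on the order of eight weeks for the first sync, which matches the “should work, but not well” expectation.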
Pre-copied data got me into a world of hurt with sync conflicts (I really need a receive-only option). Since I rebuilt the cluster with all data populated by syncing, it has been working fine.
Thank you both for your input. You are correct: it’s not how much data is synced… it’s how much the synced data changes.
I have almost 2TB in a one-way sync, and I can confirm that the scan takes a good amount of time and uses a reasonable amount of CPU while it runs. The initial sync is brutal for large volumes of data, but I’m confident that’s limited entirely by the network connection used for that first sync.
The Zsun card reader running LEDE is able to run Syncthing and transfers at around 64KiB/sec. I have it running with a 64GB microSD card, and it uses 0.72W when idle (Wi-Fi connected) and around 1W when transferring data over Wi-Fi.
It can do around 3.5-4MB/sec writes over an SMB share.
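Putting those two rates side by side (figures from above; a hypothetical worst case of filling the whole 64GB card):

```python
# Time to fill a 64GB card at the two reported rates
# (numbers from the posts above; pure arithmetic, not a measurement).
card_kib = 64 * 1024 * 1024                 # 64GB in KiB

syncthing_days = card_kib / 64 / 86400      # ~64KiB/s via Syncthing
smb_hours = card_kib / (4 * 1024) / 3600    # ~4MB/s via SMB

print(f"Syncthing: ~{syncthing_days:.1f} days, SMB: ~{smb_hours:.1f} hours")
# → Syncthing: ~12.1 days, SMB: ~4.6 hours
```

So at ~64KiB/s the Zsun really only suits small or slowly-changing folders; large datasets would take weeks to arrive over Syncthing alone.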