Algorithms in use and low-performance nodes

We have given up trying to construct a Raspberry Pi 2-based network. It seems that one user's data volumes proved too much - the CPU usage stayed high but progress appeared to stop. I can provide evidence if needed, but my question is this:

Are the algorithms in use such that lower performance devices will ultimately reach a synchronised state or are they sufficiently non-linear in processing requirement against folder size to eventually deadlock?

If the answer is the latter, then I suspect my project is ultimately doomed to fail. It would also suggest that there is a statable minimum CPU power requirement, possibly expressed as a function of anticipated folder size.

Have I missed something?

I have run a single ~18 GB share on a Raspberry Pi 1. It can take a long time, because the crypto in use runs purely on the CPU and is not hardware-accelerated, and it can put a high load on the system.

I'm not sure about the exact numbers, but I think it is roughly 1 MByte of RAM per GByte of shared data. The CPU is only loaded while a sync is in progress. If there are many users (as on a hosted sync server), you should consider a more powerful server with more RAM.
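For a rough feel of that rule of thumb (it is only the estimate quoted above, not an official figure), a back-of-envelope calculation might look like this, with a made-up baseline for the process itself:

```go
package main

import "fmt"

// Back-of-envelope RAM estimate based on the rule of thumb quoted above
// (~1 MB of RAM per GB of shared data). Both the ratio and the fixed
// baseline are assumptions, not official Syncthing figures.
func main() {
	const (
		baselineMB = 50.0 // assumed idle footprint of the process
		mbPerGB    = 1.0  // quoted rule of thumb
	)
	for _, shareGB := range []float64{18, 100, 500} {
		fmt.Printf("~%3.0f GB shared -> roughly %4.0f MB of RAM\n",
			shareGB, baselineMB+mbPerGB*shareGB)
	}
}
```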

I'm also not sure whether hardware-accelerated crypto (Intel AES-NI on a modern CPU) is still enabled in Syncthing; it reduces the CPU time considerably.
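If you want to check whether the CPU in question exposes hardware AES at all (independently of whether Syncthing takes advantage of it), a small probe using the golang.org/x/sys/cpu package can tell you; the package and its fields are real, but treating the result as a proxy for Syncthing's actual crypto speed is my assumption:

```go
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/cpu"
)

// Reports whether the CPU advertises hardware AES support. On a Raspberry
// Pi 1 or 2 (ARMv6/ARMv7) this is normally false, which is why the crypto
// there runs entirely in software.
func main() {
	switch runtime.GOARCH {
	case "amd64", "386":
		fmt.Println("AES-NI available:", cpu.X86.HasAES)
	case "arm64":
		fmt.Println("ARMv8 AES instructions available:", cpu.ARM64.HasAES)
	case "arm":
		fmt.Println("ARM AES instructions available:", cpu.ARM.HasAES)
	default:
		fmt.Println("no AES capability flag known here for", runtime.GOARCH)
	}
}
```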

Thanks for this - the Pi made it a cheap, low-energy solution in principle. But I fear we may be short of the CPU performance needed to make it viable - Andreus (forgive my spelling!) was pretty scathing about Pi performance.

If you want a concrete answer you have to ask a short concrete question.

Are the algorithms in use such that lower performance devices will ultimately reach a synchronised state or are they sufficiently non-linear in processing requirement against folder size to eventually deadlock?

I would say it is quite linear in the size of your synced folders (and in the number of files - lots of small files take longer than a few big ones). I have almost 100 GB on a Pi 1 and it works; I have never seen any deadlocks due to it being too slow, only some when there was a real bug :smiley:
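To make that scaling claim concrete, here is a toy cost model; the constants are invented purely for illustration and have no relation to real Syncthing throughput, but they show why many small files cost more than one big file of the same total size:

```go
package main

import "fmt"

// Toy model of the point above: total effort grows roughly linearly with
// both bytes and file count, so a folder of many small files "costs" more
// than one big file of the same size. The constants are made up.
func main() {
	const (
		perGBSeconds   = 120.0 // assumed hashing/transfer cost per GB
		perFileSeconds = 0.5   // assumed fixed per-file overhead (scan, index)
	)

	scenarios := []struct {
		name  string
		gb    float64
		files float64
	}{
		{"one 20 GB file", 20, 1},
		{"20 GB in 200k small files", 20, 200000},
	}

	for _, s := range scenarios {
		est := perGBSeconds*s.gb + perFileSeconds*s.files
		fmt.Printf("%-28s -> ~%.0f s (toy estimate)\n", s.name, est)
	}
}
```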

If you add a lot of stuff at once you may run out of RAM (at least someone did in the thread "Huge RAM usage (2GB+) on Synology DS1513+"), but otherwise I don't see any problems.

Many thanks - I don't think that the Pi, Syncthing in its present state, and my level of expertise and commitment are the right solution to our problem, so we must look elsewhere.

Ben

You're taking a very theoretical approach to this. The amount of work to be done depends only on the files to sync and the number of devices connected. As such, yes, it should at some point reach an in-sync state regardless of how slow your devices are.

Memory usage for a few operations is, however, proportional to the amount of not-in-sync data, so that might be a hump you need to get over initially.
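A concrete (and heavily simplified) illustration of that initial hump: since memory grows with the amount of out-of-sync data, one way to keep it bounded on a small device is to feed the initial data into the synced folder in limited batches rather than all at once. The paths, batch size and fixed pause below are invented for the sketch and are not Syncthing requirements or APIs:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Moves files from a staging area into the synced folder in size-bounded
// batches, pausing between batches so the amount of not-yet-synced data
// (and hence the memory hump) stays small. All names and numbers here are
// assumptions for illustration only.
func main() {
	const (
		staging   = "/data/staging"   // hypothetical: files not yet shared
		synced    = "/data/syncthing" // hypothetical: the Syncthing folder
		batchSize = 2 << 30           // ~2 GB of new out-of-sync data per batch
		pause     = 30 * time.Minute  // crude stand-in for "wait until in sync"
	)

	var batch int64
	err := filepath.Walk(staging, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		dest := filepath.Join(synced, path[len(staging):])
		if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
			return err
		}
		if err := os.Rename(path, dest); err != nil { // same filesystem assumed
			return err
		}
		if batch += info.Size(); batch >= batchSize {
			fmt.Println("batch moved, pausing so the node can catch up")
			time.Sleep(pause)
			batch = 0
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "staged move failed:", err)
	}
}
```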

Yes, sorry about the theoretical approach. A background in systems architecture and design has taught me to try and understand the boundaries of software designs and algorithms, such that one can stay away from the limit cases!

@morrisman68, since you're using a Pi 2, you could try building Syncthing with GOARM=7 set as an environment variable. That could speed up the runtime a little. Note that this setting is not supported on the Raspberry Pi A, A+, B, B+ or Zero. If you're using a Pi 2 (rev 1.2) in 64-bit mode or a Pi 3, you can increase this value further to GOARM=8 [1]. This setting allows the Go compiler to use a newer instruction set tailored to your device.

[1] https://en.wikipedia.org/wiki/Raspberry_Pi
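For reference, a cross-compile along those lines might look like the commands below. The output names are placeholders, and the official Syncthing build script may differ; this just shows where the GOARM knob goes. Note that the Go toolchain itself only documents GOARM values 5, 6 and 7, so for ARMv8 on a 64-bit OS you would switch to GOARCH=arm64 rather than set GOARM=8:

```sh
# From a checkout of the Syncthing source tree, cross-compile the main
# binary for a Raspberry Pi 2 (32-bit ARMv7):
GOOS=linux GOARCH=arm GOARM=7 go build -o syncthing-armv7 ./cmd/syncthing

# For a Pi 2 rev 1.2 or Pi 3 running a 64-bit OS, GOARM does not apply;
# target arm64 instead:
GOOS=linux GOARCH=arm64 go build -o syncthing-arm64 ./cmd/syncthing
```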

Many thanks. I may try this.

I am still interested in hearing what the design and boundary cases for the chosen algorithms are, if anyone wishes to share them.

Ben