Syncthing on RaspberryPi

Can you move the indexes with v0.11? I might be missing something basic. Cheers

It should be fine, you just need to make sure you generate the index while having the same certificates (device ID) on both machines, and before you add any other devices.

root@raspi:/etc/syncthing# ./syncthing > sync.log

just gives

root@raspi:/etc/syncthing# cat sync.log
[monitor] 22:13:47 INFO: Starting syncthing
[MVWYV] 22:14:04 INFO: syncthing v0.11.0 (go1.4.2 linux-arm default) unknown-user@syncthing-builder 2015-04-22 00:37:18 UTC
[MVWYV] 22:14:04 INFO: My ID: XXXX
[MVWYV] 22:14:06 INFO: Starting web GUI on https://0.0.0.0:8080/

That’s all. Can’t troubleshoot with this info :wink:

edit: OK, now I get more info / slow performance on the RPi was the problem.

Wait for “Finished scanning folder XYZ”; at that point it should be usable.

Is it as simple as copying and pasting the device ID from the slow armv7 machine to the config.xml on the powerful amd64 machine?

You probably don’t need to fiddle with the keys and device IDs, actually… Just stop syncthing, copy over the index.v0.11.0.db directory, and start again. You must make sure that the contents of the folders on both sides are identical at this point, since that is what you are claiming by copying the index. Any missing files will be seen as deletes, and so on.

:exclamation: I haven’t actually tested this myself. Good luck!

This worked just fine for me. Just another data point…

rsync and btsync work far better than Syncthing regarding the hashing algorithm. It just takes too much CPU. rsync checks the files in minutes… Syncthing takes days for the same work.

I believe this is because rsync uses the Adler-32 rolling checksum while Syncthing uses the SHA-256 cryptographic hash function, which is much more CPU expensive. Check the FAQ:

I can’t see the advantage of SHA-256 for hashing? The traffic between two nodes is encrypted… as far as I understand it, hashing is not relevant for security?

Please read the docs on how the protocol works. We ask for content based on hashes, hence we rely on hash collisions being infeasible, which is what cryptographic hashes provide.

I’ve been letting Syncthing run on my Pi for about a week now. It’s got ~150GB of data to index. Some large files and many small ones. I expected it would take a long time, but I wonder: if I kill the process or the system crashes, do I lose all the work up to that point? Or is the index saved as it progresses?

How long do you think ~150GB of data would take to index on the Pi?

My Pi has been running at 99% CPU for the entire week :smile:

No :smile:

You will need about 2 weeks I guess

edit:

just found that there is a parameter for Windows called “/low”. Is there a similar option for Linux?

You can start syncthing with `nice -n <niceness> syncthing`, where `<niceness>` is any positive number; the higher the number, the lower its priority. To change a running process, check out renice.

Personally, on my RPi, I have a repo worth 400 GB; it took me “just” about 3-4 days to initialize, I think.

I had 50 GB and this alone took a week (RPi B+)

I still have severe problems on the RPi. You told me I have to wait for the message:

INFO: Completed initial scan (rw) of folder xxxxxx

This has happened for all my folders. Nevertheless, I have been waiting for hours for some small files to be synced. There is no network traffic from syncthing. The CPU is always near 100% due to syncthing. How can this be? All folders scanned completely, but still near 100% CPU load and no sign of the files syncing?

@theincogtion It is probably still indexing your files. How much data is it? It also helps if you close the web GUI, because this will speed up the other tasks we are doing.

All in all I have 80GB. The web GUI is closed. I just open it temporarily.

I’ve been using it fine on an RPi 2. I tried it first on an RPi B+, and unfortunately it seems the hardware is not up to the task. Something you could try is to run syncthing and htop and take a look at how much memory syncthing is using; if it’s too much you’ll have problems… there is a new command-line option to limit memory usage, IIRC.

100MB still free