Syncthing process just dies after some time on Raspberry Pi B

Hey guys!

I’m trying to run Syncthing on a headless Raspberry Pi B (first version, with 256MB RAM) and I’m kinda failing… :sweat_smile:

I’m running the latest (and greatest) Raspbian and have enabled syncthing (and syncthing-inotify) as services. After verifying from other connected devices that Syncthing on the Pi is dead, systemctl status syncthing shows:

Process: 1618 ExecStart=/usr/bin/syncthing -no-browser -no-restart -logflags=0 (code=killed, signal=PIPE)

Syncthing-inotify suffers the same fate. systemctl status syncthing-inotify shows:

Process: 1619 ExecStart=/usr/bin/syncthing-inotify -logflags=0 (code=killed, signal=PIPE)

I found someone had a similar problem 2 years ago, but couldn’t find a solution.

Does anyone have an idea what’s going on here? Are there any logs I should look at? Is there a debug mode? Anything?

Cheers!

Maybe out-of-memory kills? Check the systemd logs using journalctl, and maybe dmesg.
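
Something along these lines should show it (assuming the unit is really called syncthing, as in the status output above):

journalctl -u syncthing -b | grep -i -E 'killed|oom'
dmesg | grep -i 'out of memory'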

@calmh, seems you are right! Looking at journalctl I found the out-of-memory kill for syncthing.

Are there any possible tweaks so that I can keep the same shares on my good old Raspi? Is the memory footprint associated mostly with the number of files or with the share size?

For example, in my case I currently have 3 shares, totalling 8k files and 300GB of data. If I tarred each of the shares, ending up with only 3 files and the same volume of data, would things be smoother?

Set the progress update interval (advanced settings or config.xml) to -1 to save some memory on the initial scan. Add swap.
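
In config.xml that setting lives under the options element; if I remember the option name correctly it is progressUpdateIntervalS, roughly like this (trimmed sketch, not a full config):

<options>
    <progressUpdateIntervalS>-1</progressUpdateIntervalS>
</options>

Restart Syncthing afterwards so the change is picked up.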

Thanks! Will do and post back results. :slight_smile:

So I changed the progress update interval in config.xml and tried adding 50MB of zram, as increasing swap on the SD card would trash it quite fast. No success… :sweat:

I will try moving the swap file to my USB-attached HDD, where it can grow and be rewritten freely (alas, slowly).
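
Roughly what I plan to do (mount point and size are just placeholders):

sudo dd if=/dev/zero of=/mnt/hdd/swapfile bs=1M count=1024
sudo chmod 600 /mnt/hdd/swapfile
sudo mkswap /mnt/hdd/swapfile
sudo swapon /mnt/hdd/swapfile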

Does it make progress while it runs? I had an out-of-memory crash on a Raspberry Pi with 512MB once during some heavy operation (initial scan or something, I don’t remember); after a restart (daemontools restarted it) it just continued and finished after some hours. During normal operation, with not too many files changing all the time, 256MB should be fine.

No sync progress; the Raspi just stalls for a couple of hours during which I’m unable to even ssh into it. I removed the largest folder and everything went smoothly. I’ll try tarring some of the folders with the largest number of files before sharing them (they are just backups anyway) and see if the memory use stays within some acceptable value. gzip has an --rsyncable option, and hopefully that can help me send incremental changes efficiently.
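
Something like this is what I have in mind (path and archive name are just examples, and it assumes the Raspbian gzip build carries the --rsyncable patch):

tar -cf - /path/to/backup-folder | gzip --rsyncable > backup-folder.tar.gz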

From my experience: the smaller the file(s), the better. The total number of files does not matter much (restarts and GUI updates take longer, but that’s not vital for me as long as sync is stable and quick).

So try not to tar things. I’ve been successfully syncing folders with ~64,000 mp3 files from a radio telescope with no problem on a RasPi B (first gen), even without inotify (though I’ve set the rescan interval to 432,000 seconds).
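
For reference, that rescan interval is a per-folder setting; in config.xml it should look roughly like this (folder id and path are placeholders, and the other folder settings are omitted):

<folder id="mp3s" path="/data/mp3s" rescanIntervalS="432000">
    (devices and other folder settings go here)
</folder>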

I was never able to sync a folder with around 400 files of (self-made) “movies” from the same telescope, each of them >700MB.
While building the index (“Scanning”), the memory got gobbled up until the process was killed - or it killed the RasPi. Either way - no way.
It also took days (literally) to finally fail. So I’ve since switched to different machines for that.

Addendum: As a rule of thumb, 1 TB of data needs about 100 MB of RAM - if files are <10MB (with the occasional odd file being bigger or smaller thrown in).
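
Applied to the numbers from earlier in this thread (a rough estimate on my part, using that same rule of thumb):

300 GB × (100 MB RAM per 1 TB) ≈ 30 MB of RAM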

That is true only after the initial scan has finished.

During the initial scan, lots and lots of RAM is needed. Albeit super slow, this could be provided with swap space, which only makes sense if not too many of the files are modified after the initial sync. Otherwise Syncthing would perpetually be scanning itself to death.

Your statement is actually false; I think Syncthing will work much better with fewer, larger files.

There were some old issues with keeping too much stuff in RAM during scanning. Nowadays I think the only hard requirement is that the block list for a single file must fit in RAM (times three or four or something for copies, serialization, GC overhead, etc.). We used to keep statically sized batches of files in RAM, which could be a lot if there were many large files.
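
Back-of-the-envelope, assuming the old fixed 128 KiB block size and roughly 50 bytes per block entry (32-byte hash plus some metadata) - both figures are my own assumptions:

700 MB / 128 KiB ≈ 5,600 blocks per file
5,600 × ~50 bytes ≈ 280 KB per block list
280 KB × 4 (copies, serialization, GC) ≈ ~1 MB per file in flight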

No, my statement is absolutely correct. As I said, this is my experience.
I assumed you’d say something different, which is why I prefixed the whole text with this in bold.

@sync-noob: You’ve tried syncing the large number of small files. Why not tar them up, try again, and let us know? Maybe what AudriusButkevicius says works for you.
