Syncthing on RAMDISK/mergerfs to avoid constant HDD/Flash writes?

Hi Community,

I’m starting another attempt to move from Resilio Sync to Syncthing. Resilio works really well, but I’m not sure whether it is still alive or has been abandoned: there has been no update for quite a while and the forum is dead. Furthermore, I prefer open-source software and its great community, and would therefore rather use Syncthing.

Anyway, there has been, and still is, a big showstopper for Syncthing: it constantly writes to its database even when no sync is taking place. The main culprits are the last-seen timestamp, which is written every single minute, and the connection duration, which is written whenever another instance disconnects. I’m not sure whether Syncthing needs this precise tracking, but I don’t care about it and wonder whether it would be possible to disable it or significantly increase the interval.

The undesired impact of these constant writes is that:

  • if the database resided on a HDD, it never spins down
  • if the database is located on some flash storage, it wears out the media

Both are things I absolutely don’t want, so I’ve been considering a workaround, though I’m not sure whether it would work. This would be my daring attempt:

  • Create a mergerfs from /hdd + /ramdisk mounted at /syncthing-db and configure this mergerfs to create new files on /ramdisk
  • Run Syncthing, which will then create all new database files (.ldb and .log) on /ramdisk
  • Cyclically (with a period longer, of course, than the HDD’s spin-down timeout):
  • – (if the HDD is already spinning): back up /ramdisk to /ramdisk_backup_on_hdd
  • – (if /ramdisk usage exceeds a limit): stop Syncthing, move /ramdisk/* to /hdd/, and restart Syncthing. It will compact the merge-tree .log into one or more .ldb files and create a new merge-tree .log to write to.
  • After Syncthing shutdown, move /ramdisk/* to /hdd/

Note: For the initial sync I would of course keep everything on the HDD as /ramdisk would fill up much too quickly.
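For the record, the first two steps could look roughly like this. A minimal sketch only: the mount points are made up, and it assumes mergerfs’s “ff” (first found) create policy, which places new files on the first branch listed.

```shell
# Example mount points, not real paths. The "ff" (first found) create
# policy makes mergerfs place new files on the first branch listed,
# i.e. the ramdisk.
mkdir -p /mnt/ramdisk /mnt/hdd/syncthing-db /syncthing-db
mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk
mergerfs -o category.create=ff /mnt/ramdisk:/mnt/hdd/syncthing-db /syncthing-db

# Later, with Syncthing stopped, flush the RAM contents down to the HDD:
# rsync -a --remove-source-files /mnt/ramdisk/ /mnt/hdd/syncthing-db/
```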

Would that concept work, or is there a better way? Of course there are more details to consider - especially what happens if the machine crashes. A sudden power outage is quite unlikely, as 3 machines are connected to a UPS and 1 is a laptop with a battery.


Are we talking about SD cards (Raspberry Pi) or SSDs? If it’s an SSD, don’t bother.


Unless you have something listening to folder summary events (e.g. open web UI), the last connection time shouldn’t be written.

Sounds complicated, but the basic idea of having the db in RAM is definitely possible: I use anything-sync-daemon to load the db into RAM when Syncthing starts (it also manages backups/crash recovery and the like). Obviously that isn’t ideal if your main concern is minimal writing (mine was performance, but I was on a rotating disk when I set this up - maybe it’s pointless nowadays).

Why? I like explanations/arguments :wink:

SSDs have a very high write limit. E.g. the Samsung 860 Evo 1 TB warranty covers 1200 TBW (terabytes written). That’s a consumer-grade SSD, and that is only what’s covered by the warranty; the SSD might endure 2-3 times that. So we’re talking about at least a petabyte of writes before that drive dies.
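A quick back-of-the-envelope check, assuming (purely as a guess, not a measurement) that about 4 KiB actually reaches the flash per minute:

```shell
#!/bin/sh
# Rough estimate: time to exhaust a 1200 TBW rating if ~4 KiB
# hits the flash every minute (assumed figure, not measured).
per_day=$((4096 * 60 * 24))                   # bytes written per day
tbw=$((1200 * 1024 * 1024 * 1024 * 1024))     # 1200 TiB in bytes
echo "$((tbw / per_day / 365)) years"         # far beyond any drive's lifetime
```

Even with generous write amplification on top of that, the one-write-per-minute pattern is nowhere near a wear problem for an SSD.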


I wouldn’t really give Samsung as the primary example, since their SSDs are basically the best on the market (and usually more expensive than the rest). However, even the cheapest disks sold today will have no issues with writing a tiny amount of data every minute anyway.

I would also add that the power consumption of SSDs is much lower than that of HDDs, so the difference in that respect is negligible too.

Thanks for your responses! In fact, I’m only talking about cheap consumer-grade storage like SD cards or even USB pen drives, and about HDDs. For an SSD I wouldn’t care - my 7-year-old, not very expensive Crucial M500 with 7 TBW tells me that it still has 95% of its lifetime left. But both my always-on machines (1x Raspberry Pi, 1x Intel Atom mini PC) have no SSD, and therefore with Resilio I keep the database on the data HDD. With Syncthing, the HDD never spins down.

@imsodin The lastSeen time is written each minute as long as at least one other Syncthing instance is online. This is a short fatrace capture with a fully synced folder:

07:40:17.585888 syncthing(5345): W /test_syncthing/prog/syncthing-linux-amd64-v1.14.0/index-v0.14.0.db/000018.log
07:41:17.587394 syncthing(5345): W /test_syncthing/prog/syncthing-linux-amd64-v1.14.0/index-v0.14.0.db/000018.log
07:42:17.593881 syncthing(5345): W /test_syncthing/prog/syncthing-linux-amd64-v1.14.0/index-v0.14.0.db/000018.log

I’m wondering whether this behavior can be changed or otherwise the workaround could work. The anything-sync-daemon sounds very interesting and could be the perfect solution to my problem (basically I expect it to do what I wanted to write my own script for…). I will definitely try this out!


You could use a different filesystem like F2FS (Flash-Friendly File System), which is designed for flash storage without wear leveling.

But I agree that limiting unnecessary writes is a good thing.

@calmh what would happen if we keep this only in memory and don’t persist it?

Again, it shouldn’t happen unless the web UI is open or something else is listening for folder events.

The simple solution here would be to load and update the info in memory and store to db on shutdown.

That explains why we need this information at runtime but not why we need to persist it?

I didn’t bother to explain why it is needed at all. I don’t know any reason besides it being useful information, which is enough, isn’t it? And it stays useful when e.g. restarting a laptop, hence persisting it.

Just checked it: the one-minute periodic lastSeen write happens even if no Web UI is open. I will check out anything-sync-daemon (which seems to be just a bash script) this weekend, as it looks very promising!


Hi,

I have now tested the mechanisms of anything-sync-daemon. In fact, I did not test the script itself but simulated its main tasks (tmpfs, overlayfs, sync) manually on the command line to see if/how it works. To sum it up, it seems to be a viable workaround, although I would probably write my own script that does some things differently (zram instead of tmpfs, maybe mergerfs instead of overlayfs, …we’ll see). So thanks a lot for this recommendation!
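In case anyone wants to reproduce the core mechanism by hand, this is roughly what I simulated. The paths are examples, not anything-sync-daemon’s actual defaults or code:

```shell
#!/bin/sh
# Sketch of the "database in RAM with sync-back" mechanism.
# Paths are hypothetical examples.
DB=/home/user/.config/syncthing/index-v0.14.0.db
RAM=/dev/shm/syncthing-db

rsync -a "$DB/" "$DB.bak/"      # crash-recovery copy on disk
mkdir -p "$RAM"
rsync -a "$DB/" "$RAM/"         # stage the database in RAM
mount --bind "$RAM" "$DB"       # Syncthing now writes to RAM only

# ...periodically and on shutdown, flush the RAM state back:
# rsync -a --delete "$RAM/" "$DB.bak/"
# umount "$DB" && rsync -a --delete "$RAM/" "$DB/"
```

If the machine crashes between flushes, you lose at most the database changes since the last sync-back, and Syncthing rescans/rebuilds from there.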

Nevertheless: This is all just a workaround for handling the symptoms. Is there any chance that there will be a way to deal with the root cause and get rid of these permanent database writes?

IMHO your use case is valid, as Syncthing is also meant to run on Raspberry Pis and NAS systems, and those writes are good for neither. I’d open an issue.

If someone wants to optimize this to only do the writes in question on connection changes and shutdown that sounds fine. However I don’t think avoiding one write per minute is a use case, and I think the corresponding “it will wear out my disk” worries are overblown. To let spinning disks spin down you’ll probably also be chasing periodic reads for quite a while…

What do I have to do in order to have this addressed (I know that it won’t be top priority)?

You may want to open a feature request on GitHub to have a record somewhere. However, unless someone decides to work on this feature/improvement specifically, there likely won’t be any changes in this respect.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.