Event Persistence

Hey guys

I use the events to trigger some additional logic, but if my Syncthing instance goes down, the events stored in memory go with it, and anything I missed is now lost forever.

Is there any existing way that I can parse historic events? If not, how do I add a feature request?

Kind regards

There isn’t. I’m also somewhat skeptical that it would be a good idea to attempt to persist events, and even if we did, there would need to be a limit on the number of events stored. That means there would still be situations where you lose events, so you’ll need to be prepared to handle that regardless…

Thanks Jakob.

I only have a few events that are really of importance, so the volume should not be too crazy.

I’ll write a small script that listens for events and persists them in a redis queue and postgres db.

I would then use that queue and db for any applications dependent on event information.

I’ll maybe publish a Docker image to Docker Hub that does this, for anyone interested in the same thing. Will post a link to the image here.
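For anyone following along, a minimal sketch of the storage side of such a listener: it filters for the wanted event types and persists them, with sqlite3 standing in for the Postgres/Redis side so the example is self-contained (swap in psycopg2 / redis-py for the real stack). The event fields (`id`, `type`, `time`, `data`) match what `/rest/events` returns; the table layout and the event types in `WANTED` are just assumptions.

```python
import json
import sqlite3

# Event types worth keeping; everything else is dropped.
# (Assumption: only a handful of types matter, as described above.)
WANTED = {"ItemFinished", "FolderCompletion"}

def wanted(event, types=WANTED):
    """Return True if this event should be persisted."""
    return event.get("type") in types

def store(conn, events):
    """Persist the interesting events; sqlite3 stands in for Postgres."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(id INTEGER PRIMARY KEY, type TEXT, time TEXT, data TEXT)"
    )
    rows = [
        (e["id"], e["type"], e.get("time"), json.dumps(e.get("data")))
        for e in events
        if wanted(e)
    ]
    # Event ids are unique, so re-delivered events are deduplicated.
    conn.executemany("INSERT OR IGNORE INTO events VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)
```

Each batch fetched from `/rest/events` goes through `store`; dependent applications then read from the table (or queue) instead of Syncthing’s in-memory buffer.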

Just to summarise

Long-term event persistence is not part of the Syncthing roadmap, and alternative measures should be taken to persist events if that is a requirement.


Hi @calmh

I’d like to implement a Pull Request for this. I’m thinking of adding a new config parameter to specify DB connection details, as well as a list of event types that should be stored.

Quick one: where in the code are the config reader and the event creation and storage handler?

This would really help me get a head start

I’m not sure this is the right approach.

I think writing a side application external to syncthing that polls for events and persists them is probably more suitable.

I considered that. I have an application polling now every 6 seconds, but it seems the API returns a max of 1000 events (which doesn’t seem to be configurable).

I have around 20 devices reading and writing to multiple folders so I quickly lose events from the API.

Hence I need a different way to get those events, and am considering long-term persistence directly in Syncthing rather than an API polling script.

You don’t need to poll every 6 seconds; you should poll continuously, in a loop with no sleeping.

If there are no events, the call will “hang” for some time until there are new events or until it times out, effectively sleeping for you.
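In a Python sketch, the loop looks like this. The `/rest/events?since=` endpoint and the `X-API-Key` header are Syncthing’s real events API; `handle`, the URLs, and the client-side timeout are placeholders:

```python
import json
import urllib.request

def next_since(events, since):
    """Advance the cursor to the highest event id seen so far."""
    return max([e["id"] for e in events] + [since])

def follow(base_url, api_key, handle, since=0):
    """Poll /rest/events in a tight loop, with no sleeping: each call
    blocks server-side until new events arrive or its timeout elapses."""
    while True:
        req = urllib.request.Request(
            f"{base_url}/rest/events?since={since}",
            headers={"X-API-Key": api_key},
        )
        try:
            with urllib.request.urlopen(req, timeout=120) as resp:
                events = json.load(resp)
        except OSError:
            continue  # timed out or transient network error: poll again
        for event in events:
            handle(event)
        since = next_since(events, since)
```

Carrying the highest `id` forward as `since` is what prevents re-reading old events; losing that cursor (e.g. on restart) is exactly the gap discussed above.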

Honestly I don’t think we want database connectivity code in syncthing.

Ah okay so long polling is the solution. I’ll try it out.

I understand why you don’t want to include it: it would complicate the source code and introduce issues unrelated to Syncthing’s primary function.

The alternative I would consider, if long polling didn’t work for whatever reason (event buffer overflows during the few milliseconds between two calls), would be to have a pure streaming endpoint instead. That is, you GET a URL and the response is an infinite stream of newline-separated events.
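Such an endpoint would also be trivial to consume. A sketch, assuming one JSON event per line (the endpoint itself is hypothetical):

```python
import json

def read_ndjson(lines, handle):
    """Feed each non-empty line of a newline-delimited JSON stream
    to the handler as it arrives; blank lines are skipped."""
    for line in lines:
        line = line.strip()
        if line:
            handle(json.loads(line))
```

The same function works directly on an HTTP response object, since those iterate line by line.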

Would it be possible to push events, e.g. via WebSocket?

Sure, but that requires extra work. I am also not sure we can use that in the UI, as people front Syncthing with various ancient versions of Apache that don’t support WebSockets. My idea was that long term we should have a gRPC API, where events could be a stream.

WebSocket support isn’t really cutting edge; e.g. it has been supported by Apache since 2.4, which is part of Debian oldstable and Ubuntu 16.04.

Right, but if the UI suddenly switched to use it, and people did not adjust their http server configs (because why would you, if you are just updating an application), I assume we’d be breaking a lot of people.

Which is IMHO acceptable if the UI shows a proper warning with a link to the documentation with an updated guide for proxies.

And we don’t need to break anything. Add the websocket in version x with proper detection, warnings, documentation and require it in version y. That should give users plenty of time to adapt their stack.

There’s no reason to do that for the GUI though.

To be honest, I’d rather we added gRPC than WebSockets.
