Pre- and post-process scripting?

Hi

Not sure if it is possible, but I am wondering if we could have pre- and post-process system scripting (via bash, or whatever the system offers) for things like compression, encryption, etc.

I realize that complicates how ST needs to check files, but it might be a nice feature to have.

I was thinking of having some hook/scripting system, yes. What would your use cases be? The hooks I’ve had in mind so far were things like:

  • “pre-sync”, which fires before a file is replaced with a new version, so that a script has a chance to, for example, make a copy of it or move it somewhere. This could be used to implement custom versioning schemes.

  • “post-sync”, which fires when a file has been replaced. Could be used to notify some external system that a file has appeared, or to deploy it to a web server or similar (see the sketch below).
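For illustration, a post-sync hook along those lines might look roughly like this. None of this exists yet, so the calling convention (file path as the first argument) and the paths are pure assumptions:

```bash
#!/bin/sh
# Hypothetical "post-sync" hook: assume Syncthing would call it with
# the path of the file that was just replaced. The argument
# convention is an assumption -- no such hook exists yet.
FILE="$1"

SYNC_ROOT="/home/user/Sync"   # assumed synced folder root
DEPLOY_ROOT="/var/www/html"   # assumed web server document root

# Deploy the changed file, preserving its path relative to the
# synced folder root.
REL="${FILE#$SYNC_ROOT/}"
mkdir -p "$DEPLOY_ROOT/$(dirname "$REL")"
cp -- "$FILE" "$DEPLOY_ROOT/$REL"
```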

Those sound good to me. I was initially thinking of encryption on a remote repo (i.e. encrypt the file after it is downloaded).

If you change the file, syncthing will pick that up and sync it back. But you mean read it and encrypt it to another destination?

Yeah, the remote could back up to some other place and the host could delete it afterwards, but your initial suggestions are more useful.

Sorry for bumping an old topic, but I am interested in this as well. Until recently I used Burp backup on my laptop, and with Burp I would use a pre-script before starting the backup which checked whether I was connected to the SSID of my 3G mifi and returned a non-zero exit code if so, causing Burp to abandon the backup. This was because I didn’t want to max out my data allowance when connected over 3G.
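For reference, that kind of pre-script is only a few lines; something like this, where the SSID and the use of iwgetid are assumptions about the setup:

```bash
#!/bin/sh
# Exit non-zero when connected to the 3G mifi's SSID, so the
# calling backup tool skips the run. SSID is an assumption.
MIFI_SSID="my-3g-mifi"

CURRENT=$(iwgetid -r 2>/dev/null)   # prints the currently associated SSID
if [ "$CURRENT" = "$MIFI_SSID" ]; then
    echo "Connected to metered network '$CURRENT', skipping backup." >&2
    exit 1
fi
exit 0
```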

Similarly, I am now looking for some way to prevent syncthing from transferring data while I’m connected to my 3G mifi, but I can’t really come up with any trick, either using the syncthing config itself or some external software.

I will also be installing syncthing soon on my Jolla phone - and again I would be looking at some hook/script to prevent syncthing from syncing when connected over 3G/4G - and only allow wifi syncing.

I guess I could get the network manager to modify the firewall and block the syncthing ports every time I connect over 3G. Slightly messy, but it should work. Hmm…
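Roughly, a NetworkManager dispatcher script along those lines might look like this; the connection name is an assumption, and 22000/TCP is Syncthing’s default sync port:

```bash
#!/bin/sh
# Sketch of the firewall idea: placed in
# /etc/NetworkManager/dispatcher.d/, NetworkManager calls it with
# the interface ($1) and action ($2); CONNECTION_ID is set by NM.
ACTION="$2"
MIFI_CONNECTION="3g-mifi"   # assumed NM connection name

case "$ACTION" in
  up)
    if [ "$CONNECTION_ID" = "$MIFI_CONNECTION" ]; then
        # Block Syncthing's sync port (22000/TCP by default).
        iptables -A OUTPUT -p tcp --dport 22000 -j REJECT
    fi
    ;;
  down)
    # Remove the rule again; ignore the error if it isn't there.
    iptables -D OUTPUT -p tcp --dport 22000 -j REJECT 2>/dev/null || true
    ;;
esac
```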

I think this particular problem is best solved by starting/stopping Syncthing as needed. Or pausing it, once that’s implemented.
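For the stop part, you don’t have to kill the process; Syncthing’s REST API has a shutdown endpoint (the exact path may differ between versions, and the API key comes from your own config):

```bash
# Ask the running Syncthing to shut down cleanly via its REST API.
# API key and GUI address are taken from Syncthing's own settings.
curl -X POST -H "X-API-Key: your-api-key" \
     http://localhost:8384/rest/system/shutdown
```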

What makes things slightly messier in this regard is the fact that syncthing normally runs as the user who is logged in. A stop/restart script might, at best, be able to restart syncthing as a specific user, which is inflexible. What if different people use the same machine at different times? Maybe the stop/restart script could find a way to remember which user syncthing was running as when it was stopped, and restart it as the same user. But then we end up building a significant amount of complexity for running syncthing outside of syncthing itself.

Essentially what I’m thinking about is that syncthing might be used on a machine shared by multiple users at different times, not just in a single-user scenario. That is why a way for syncthing to suspend syncing all by itself, based on some test, criterion, or exit code from a script, would save the above mess and would be more elegant and robust. If such a thing is possible, of course.

I’m not sure how much of the above makes sense?!

I would actually like to use this type of hook to trigger post-processing of transferred TV shows and movies. I am using Filebot to sort and rename these files into separate Movies and TV Shows folders in a different location. Any progress?

I am also interested in this feature. I would use it to locally move files to their final directory after they have been downloaded, so that the synced folder is kept empty. This way the source device would not need to keep a copy of the files that have already been transferred.

For that, though, you just need to see if there are any files and, if so, move them away. Use an inotify listener, or just check every ten seconds or however often you like. The files are stored as temp files (.syncthing.somefile.tmp and so on) until they’re done, so just avoid those temp files and you’re good.
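A minimal polling version of that could look like this, assuming the paths:

```bash
#!/bin/sh
# Every ten seconds, move completed files out of the synced folder,
# skipping Syncthing's in-progress temp files. Paths are assumptions.
SYNC_DIR="/home/user/Sync/inbox"
DEST_DIR="/home/user/final"

while true; do
    # Temp files look like .syncthing.<name>.tmp until complete.
    find "$SYNC_DIR" -type f ! -name '.syncthing.*.tmp' \
        -exec mv -- {} "$DEST_DIR/" \;
    sleep 10
done
```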

I’ll do it this way then. Thanks!

Another possibility that would probably help most of us in need of this feature would be an option to have syncthing move files to a destination folder upon completion of each transfer session. Say syncthing detects a change in a folder at one location; at the other location the transfer starts, and once all transferring is complete, however many files that may be, the files are moved to the designated folder. For me, either this or the post-transfer script would accomplish the same goal.

I use syncthing to transfer media folders to be post-processed once they arrive on my local machine. The problem right now is that the services doing the post-processing start before the sync is complete, and most of them have issues when the sync isn’t finished by the time they run.

Just write a small script that finds all files that don’t have ~syncthing~ in their names, and moves them somewhere else.

Yeah, but the key is that this happens after a whole series of files has finished transferring. It seems the only way to do this at the moment would be to have a service running that periodically checks whether all transfers are finished. I’d rather wait until something is implemented that either notifies the post-processor that the transfer is complete or, as I mentioned, has syncthing move the files when all transferring of the specified files is complete. Unless I’m missing a different possibility?

There is the events API you can listen on, which can tell you what’s happening. I’m not saying this wouldn’t be a cool feature; I’m suggesting a workaround while it doesn’t exist.
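For example, a small curl + jq long-poller against /rest/events; the API key, address, post-processing command, and the idea of keying off a StateChanged-to-idle event are assumptions about what fits here (the exact endpoint may also vary by version):

```bash
#!/bin/sh
# Rough sketch: long-poll the Syncthing events API and kick off
# post-processing when a folder goes back to "idle".
API_KEY="your-api-key"
URL="http://localhost:8384/rest/events"

LAST_ID=0
while true; do
    # The call blocks until new events arrive (long polling).
    EVENTS=$(curl -s -H "X-API-Key: $API_KEY" "$URL?since=$LAST_ID") || continue
    NEW_ID=$(echo "$EVENTS" | jq '.[-1].id // empty')
    [ -n "$NEW_ID" ] && LAST_ID=$NEW_ID

    # A StateChanged event with "to": "idle" means the folder has
    # finished syncing for now.
    echo "$EVENTS" | jq -c '.[] | select(.type == "StateChanged" and .data.to == "idle")' |
    while read -r EV; do
        FOLDER=$(echo "$EV" | jq -r '.data.folder')
        echo "Folder $FOLDER went idle, post-processing..."
        # /path/to/postprocess.sh "$FOLDER"   # hypothetical hook
    done
done
```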

This isn’t really a well-defined concept. Update notifications from other devices are sent in chunks. If five thousand files are changed, we might get the info in chunks of five hundred, start pulling files when we know about a thousand of them, complete those, then notice that there are four thousand more files to pull and handle them, and so on.

It sounds to me like you might be better off with an rsync script, as your use case also sounds unidirectional (what with the moving files away, etc.)?
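Something as simple as this would cover the move-away behaviour, since rsync can delete source files once they have transferred (paths are placeholders):

```bash
# One-way transfer that removes each file from the source after it
# has been copied, giving the "move it away" behaviour.
rsync -av --remove-source-files user@source:/srv/incoming/ /srv/media/
```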

I understand. Thanks for walking through it with me.

For what it’s worth, there’s a proposed change by @lkwg82 that would, among other things, give scripts more power in this area. Essentially it means the versioner script, which now only gets called on to dispose of old files, would instead get the responsibility of replacing the old file with the new version we’ve created. Among the options available to that script, it could:

  • just rename the new file over the old, which is essentially what we do internally;
  • archive the old file somewhere, then do the rename;
  • do the rename, then do a git commit;
  • do the rename, then copy the file to a staging area for pickup by someone else interested in seeing changed files;
  • etc.

I think that would be an improvement in this area.
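For concreteness, an archiving versioner under that proposal might look roughly like this; the calling convention (old path, then the completed temp path) is my assumption, since the change is still only proposed:

```bash
#!/bin/sh
# Sketch of a versioner that archives the old version, then does
# the rename. Arguments are assumed: $1 = existing file, $2 = the
# completed temp file Syncthing wants to move into place.
OLD="$1"
NEW="$2"

ARCHIVE_DIR="$(dirname "$OLD")/.stversions"
mkdir -p "$ARCHIVE_DIR"

# Keep a timestamped copy of the old version...
if [ -f "$OLD" ]; then
    mv -- "$OLD" "$ARCHIVE_DIR/$(basename "$OLD").$(date +%Y%m%d-%H%M%S)"
fi
# ...then do the rename, essentially what Syncthing does internally.
mv -- "$NEW" "$OLD"
```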