Feature request: restart / upgrade on mtime-change of syncthing executable

Would it be possible to enhance the update/upgrade mechanism in syncthing/main.go to also trigger when the executable changes its mtime?

Something like this Perl-ish pseudocode:


#!/usr/bin/perl
use strict;
use warnings;

# Remember the executable's mtime at startup.
my $started_mt = (stat($0))[9];

while (1) {
    # If the file on disk has a newer mtime, restart by re-executing ourselves.
    my $mt = (stat($0))[9];
    if ( $mt != $started_mt ) {
        exec $0 or die "exec $0 failed: $!";
    }
    # [...] the usual main-loop work goes here [...]
    sleep 60;
}

As if the update/upgrade routines had returned “you are old, update!”.

Currently a running syncthing instance only upgrades when releaseURL tells it to, when the user restarts it or triggers the upgrade manually, or via /rest/.

But when I have a /usr/bin/syncthing process running and then update the executable on disk, the running process doesn’t know about it. It continues to run the ‘old’ version.

So checking the mtime of something like osext.Executable() once in a while would do the trick.

This should work on every OS as long as the filesystem returns a valid modification time.
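For illustration, a minimal Go sketch of what I mean, polling the executable’s mtime. os.Executable is the standard-library successor of osext.Executable; the one-minute interval, the watchExecutable name and the onChange hook are just placeholders, nothing that exists in syncthing today:

package main

import (
    "log"
    "os"
    "time"
)

// watchExecutable polls the running binary's mtime and calls onChange
// once the file on disk has been replaced.
func watchExecutable(interval time.Duration, onChange func()) error {
    // os.Executable is the stdlib equivalent of osext.Executable.
    exe, err := os.Executable()
    if err != nil {
        return err
    }
    info, err := os.Stat(exe)
    if err != nil {
        return err
    }
    startedMt := info.ModTime()

    for range time.Tick(interval) {
        info, err := os.Stat(exe)
        if err != nil {
            continue // the binary may be missing mid-upgrade; try again later
        }
        if !info.ModTime().Equal(startedMt) {
            onChange()
            return nil
        }
    }
    return nil
}

func main() {
    // In syncthing this would run next to the real main loop; here it is the whole program.
    err := watchExecutable(time.Minute, func() {
        log.Println("executable changed on disk, restart wanted")
    })
    if err != nil {
        log.Fatal(err)
    }
}

Whether onChange then triggers the existing restart machinery or only logs a warning could be the configurable part.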

Why don’t you pkill syncthing when you upgrade the executable on disk? This is what our Debian post-upgrade script does, for example.

In my case, I have about 250 machines where “somewhere” “someone” might be running syncthing. We have master machines which overnight copy their system to the others, so all machines differ only by IP address (almost).

We’re updating (in short) by cd / && tar xf diffs.tar, but any package manager would do the same: extract a new binary and store it at the well-known location.

So I don’t have any clue whether syncthing is running, who is running it, which version, and whether it’s their own compile or /usr/bin/syncthing. A pkill /usr/bin/syncthing would work, but why not let syncthing decide itself?

We have a lot of services/daemons running which check whether they have been updated/modified (doesn’t work with bash scripts :wink:), and this is a very nice feature to have.

Syncthing already has the restart feature built in, but more or less on steroids (download new version, checksum, unpack, etc.). A simple stat($0)[9] check would be a lot easier to handle for development, debugging, server-farm deployment and so on.

Not by default, but as an option.

Would it break something? If a kill doesn’t harm a running syncthing, where could the restart() call go? main.go, or deeper in upgrade and friends?

I don’t think it would harm anything; it’s just not a behavior I’ve seen before. I’m used to things auto-updating, or being updated by a package manager. Overwriting a bunch of the system and not restarting stuff seems fraught…

You could do the check somewhere in main, like the auto-upgrade check that is there today. The auto-upgrade thing does the upgrade and then exits with a code that indicates “I want a restart”. You could do the same, just without doing the actual upgrade.
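Very roughly, and only as a sketch: exitCodeRestart below is a stand-in for whatever exit code the monitor process actually treats as “restart me” (the real constant lives in main.go and may well differ), and the one-minute interval is arbitrary.

package main

import (
    "log"
    "os"
    "time"
)

// Hypothetical value; the real restart exit code is defined in main.go.
const exitCodeRestart = 3

func main() {
    exe, err := os.Executable()
    if err != nil {
        log.Fatal(err)
    }
    info, err := os.Stat(exe)
    if err != nil {
        log.Fatal(err)
    }
    startedMt := info.ModTime()

    for range time.Tick(time.Minute) {
        info, err := os.Stat(exe)
        if err != nil {
            continue
        }
        if info.ModTime().After(startedMt) {
            // Skip download/verify/install: the binary was already replaced
            // on disk, so just ask the monitor for a restart.
            log.Println("binary on disk changed; exiting for restart")
            os.Exit(exitCodeRestart)
        }
    }
}

The real check would of course sit next to the existing upgrade check rather than in its own main().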

Out of curiosity, which are these? I’m always more likely to accept something if there is prior art out there to show it’s praxis.

Here’s an example of a management daemon running on all our systems:

I have a Perl script watching /var/log/messages on our SSH gateway, which decides who gets an iptables DROP. Since it has all the bad IPs in its ‘DATA’ segment, I just edit it, save, and it restarts:

# $mtime holds the script's mtime from startup, $SLEEPTIME the retry interval.
my $mtime     = (stat($0))[9];
my $SLEEPTIME = 60;

while (1) {
    my $nt = (stat($0))[9];
    if ( $nt != $mtime ) {
        # Only restart if the edited script still compiles.
        $_ = `perl -c $0 2>&1`;
        if (!$?) {
            last;    # leave the loop and re-exec below
        }
        print "### SYNTAX ERROR: $0\n$_\n";
        sleep $SLEEPTIME;
    }
    # [...] tail /var/log/messages and update the iptables DROPs [...]
}

# The file changed and still compiles: restart.
exec $0 or die "exec $0 failed: $!";
This is as stable as it can get and hard to break. I don’t know of other software that also has the option to restart this way, so I might get a patent on it :sunglasses: Well, I hereby declare this method Public Domain.

Anyway, I’m open to having this method questioned. I currently see fewer points of failure (time(), stat(), if-then-exec) than in the other methods like signal catching, D-Bus, REST and so on. Pretty old-school, but hey, it works. :sunglasses:

Ah, yeah, see, custom stuff you’ve written yourself to fit into your own process doesn’t really count for making this seem like a mainstream thing. :slight_smile:

I’m not necessarily saying we can’t do this, just that if nobody else is doing it then odds are it’s a weird and unexpected thing and the threshold for including it is higher.

I think the set of people who don’t let Syncthing auto-upgrade, yet run it with the monitor process and with permission to restart itself, is pretty small. Most people are either going to be running it standalone, or under the supervision of a package manager and something like systemd. This proposal isn’t useful in either of those situations, I think.

Please don’t get me wrong. It’s not meant to be a feature “I” need in “my” setup.

Users with auto-upgrade disabled usually live in one of the following situations:

  • Download latest archive for your OS

  • Windows: extract the executable by overwriting the running one. Scheduler, autostart or manual start doesn’t matter: they have to open the browser and manually force the running instance to restart.

  • Linux: either start syncthing manually or put it in some preferred autostart location, handled by their window/startup manager. Same as above, after overwriting the old executable you have to open a browser and force a restart.

  • Linux/systemd: Replace binary, open the browser -> restart.

  • Development: go run build.go && cp bin/syncthing ~/bin/syncthing or just go run build.go when bin/syncthing is currently running.

  • Workstations/server farms with central deployment tools: update the executable, deploy the file via rsync, puppet, ansible, scp, Dameware, SoftwareCenter, whatever. You’ll need a special and complicated mechanism to restart running instances.

  • Mac OS X: Download, extract, move to /Users/USERNAME/bin/syncthing. Open browser, restart.

But all of the above have one thing in common: the executable file has changed its modification time. That is the absolute minimum precondition for restarting syncthing to pick up the upgrade. New binary, let’s use it.

I’ve got an uneasy feeling about this. I often separate upgrade time and restart time into distinct events. Applications may get upgraded at various points, but the application(s) will only be restarted when we’re good and ready.


I have a different view. About half of the user base run with automatic upgrades. Of the ones that don’t, the big categories are Android (upgrade & restart managed by wrapper), Debian packages (upgrade & restart managed by APT), homebrew and other package managers (upgrade & restart managed by the package manager) and other wrappers.

The people using puppet, ansible, etc. are probably handling it just fine. It’s not complicated. For example, in ansible, I manage Caddy like this:

- hosts: ...
  tasks:
    - name: Caddy stuff
      ...
      notify:
        - restart caddy

  handlers:
    - name: restart caddy
      command: pkill caddy

That’s it.

I think people managing their upgrades manually and just by replacing the binary are few and far between.

It may be the minimum precondition, but it’s also not necessarily the complete precondition. We’re sort of lucky that we’re distributing a static binary that has few dependencies, but otherwise you’re deep into issues when you also want to upgrade .so files, change the config and fix the runner script as part of the upgrade. A surprise restart can ruin everything.

So all in all, I think it solves a niche problem, may introduce problems of its own, and surprise more people than it would delight.

I understand, but my idea is exactly the same as running syncthing with auto-upgrade: it reads releaseURL, downloads, extracts and restarts.

My suggestion just skips download, extract and install, which has already been done by the user.

I fully understand the fear of getting into trouble with ‘surprise restarts’; I definitely like my sync databases to be handled like raw eggs in upgrade situations.

Anyway, it’s just an idea; not implementing it is not a problem.

Thanks for the discussion, I’m convinced to skip this feature request.

Thing is, I don’t think that’s true. I think users with auto-upgrade disabled usually use their system’s package manager to update.

In Windows you can’t overwrite a running file.

No, systemctl restart <stuff>.

Exactly, go run build.go and kill it when you’re done.

I’m not sure I’d call it “special and complicated”? Syncthing should really be running under the supervision of some sort of service manager in this case (like systemd), in which case you’d get the service manager to restart Syncthing. That’s a single command, not a “special and complicated” process, and will be the same as any other service.

brew services restart syncthing, I believe? Maybe I’m out of date.

I think the general misconception is that most people who have disabled auto-updates are 1) not using a service manager, and 2) doing updates by manually downloading and replacing the Syncthing binary by hand. I don’t think either is correct.

@canton7 Thanks, Antony, for taking the time and going into detail. You are totally right about Windows not allowing a running executable to be overwritten. This is the knockout argument against my suggested feature; I should have tested this before.

With “special and complicated” I was thinking about finding processes, sending the right signals to the right PIDs and so on. “Kill it” is not that simple when I have several processes from different users running: namespaces, containers, and I might even miss a process due to its blocked I/O state, swapping or system overload.

BTW, I tried the systemd/systemctl --user route and failed miserably because I was not able to read the journal (ACLs disabled on /, a known unfixed systemd issue since 2016). Adding the user to a journal group so it can read all journals isn’t the right solution.

Anyways, thanks for your input, you seeded new ideas 8-)))