Environment:
Syncthing v1.19.2 running on Debian 5.10 in a guest CT under a Proxmox 7 host, with all shared folders located on a CIFS share mounted on the guest via fstab.
Problem:
After a reboot of either the guest CT or the host server, all Syncthing folders show “Stopped” and never start/scan as expected. Syncthing is brought back online via Actions > Restart in the web GUI, which starts the rescan of each folder.
I would like to either have this restart action performed automatically at reboot, or address the underlying cause.
When I bind-mount the storage into the CT instead of using a CIFS mount via fstab, this behavior does not occur, so I believe it is related to CIFS mount timing, but I'm not sure. Does anybody else notice this behavior?
calmh - I seem to remember catching this on several occasions after a power outage, before installing a UPS. I always caught it and did Actions > Restart to get things back up immediately, never waiting around… I’m going to reboot the CT now and give it a couple of hours to test your suggestion.
This appears to give CIFS time to mount before restarting the user’s Syncthing service. No more “Stopped” folders in the Syncthing web GUI after a reboot, and no user action required after a reboot/power cycle.
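For reference, a delayed-restart workaround like the one described above could look roughly like this in root’s crontab; the 60-second delay is only an illustrative assumption, and the unit name should match your own service:

```shell
# root crontab (crontab -e): restart the per-user Syncthing service
# some time after boot, giving the CIFS mount time to come up.
@reboot sleep 60 && systemctl restart syncthing@[username].service
```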
Not Audrius, but you should probably start with the basic output that Syncthing logs when started from the command line. You’re running Linux and probably starting Syncthing in the background, so I’d suggest simply adding -logflags=3 -logfile=<path-to-a-file> to the command that launches it; then you can upload the logfile here once you’ve encountered the aforementioned issues.
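If Syncthing runs as a systemd service, those flags can go into a drop-in override rather than an interactive command line. A sketch, assuming a typical packaged unit; copy the ExecStart= line from your own unit file and append the flags, and note that the log path here is just an example:

```ini
# sudo systemctl edit syncthing@[username].service
[Service]
# The first, empty ExecStart= clears the packaged command.
ExecStart=
ExecStart=/usr/bin/syncthing -no-browser -no-restart -logflags=3 -logfile=/var/log/syncthing.log
```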
and it launches at boot. So I stopped the service with
sudo systemctl stop syncthing@[username].service
and then manually launched the application by simply entering
syncthing
and the folders never indicate “Stopped”. Interestingly enough, every time I ^C and relaunch in this manner, I never experience the “Stopped” folders issue; everything works perfectly. So the “Stopped” folders issue only occurs when Syncthing is launched as a system service…
You can probably add After directives to the relevant cifs mount targets for the Syncthing service to solve it a bit more elegantly than a cron timer with a restart.
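To spell that out: a drop-in override keeps the change separate from the packaged unit, and systemd also offers RequiresMountsFor=, which derives both the ordering and the dependency on the mount unit from a plain path, so the escaped mount unit name isn’t needed at all. A sketch using the mount point from later in this thread (sudo systemctl edit syncthing@[username].service creates the drop-in):

```ini
# /etc/systemd/system/syncthing@[username].service.d/override.conf
[Unit]
RequiresMountsFor=/mnt/syncthing-deb
```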
I’d guess it’s probably similar to the situation where you run Syncthing at system start while using encryption. The folders also start out “stopped” then, because the storage isn’t accessible yet at that point.
Actually, I think I’ve experienced a very similar issue with BitLocker on Windows previously. In my case, the folders also stayed stuck in the “stopped” state for hours until manual intervention. If there’s a mechanism that should make Syncthing re-detect the folders automatically, it definitely didn’t trigger for me at the time.
imsodin - Thank you for your suggestion. I learned something very valuable today from your pointer toward a more elegant solution with the After= directive. My solution:
Determine the name of the systemd generated mount unit with:
systemctl list-units --type=mount
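Alternatively, systemd-escape can compute the unit name directly from the path (systemd-escape -p --suffix=mount /mnt/syncthing-deb). The escaping rule itself is simple: the leading slash is dropped, literal dashes become \x2d, and the remaining slashes become dashes. The following one-liner mimics it for simple paths like this one (illustration only; use systemd-escape for the real thing):

```shell
# Mimic systemd's path escaping for simple paths: drop the leading
# slash, escape literal dashes as \x2d, then turn slashes into dashes.
printf '%s\n' /mnt/syncthing-deb | sed -e 's|^/||' -e 's|-|\\x2d|g' -e 's|/|-|g'
# → mnt-syncthing\x2ddeb
```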
For my mount point, it is shown as:
mnt-syncthing\x2ddeb.mount loaded active mounted /mnt/syncthing-deb
Now knowing that the mount unit for my mount point /mnt/syncthing-deb is “mnt-syncthing\x2ddeb.mount”, I looked for my Syncthing user service in systemd:
cd /etc/systemd/system/multi-user.target.wants
ls
cron.service postfix.service rsyslog.service syncthing@[username].service
networking.service remote-fs.target ssh.service
cp syncthing@[username].service syncthing@[username].service.bak
nano syncthing@[username].service
I then appended the unit name of my mount to the end of the After= directive, changing:
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target
StartLimitIntervalSec=60
StartLimitBurst=4
to
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target mnt-syncthing\x2ddeb.mount
StartLimitIntervalSec=60
StartLimitBurst=4
I then commented out my root cron timer, rebooted, and now there are no more “Stopped” folders, because the Syncthing service waits for my mount point to become active first!
Thank you for pointing me in the right direction. Awesome!