Folders show “Stopped” after a reboot with a CIFS-mounted default folder

Environment: Syncthing v1.19.2 running on Debian 5.10 in a guest CT under a Proxmox 7 host, with all folders located on a CIFS share that is mounted as the default folder on the guest via fstab.
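
For reference, my fstab entry looks roughly like this (the server, share name, and credentials file below are placeholders; the mount point is the real one):

//nas.example.lan/share  /mnt/syncthing-deb  cifs  credentials=/root/.smbcredentials,_netdev  0  0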

Problem: After a reboot of either the guest CT or the host server, all Syncthing folders show “Stopped” and never start/scan as expected. Syncthing is brought back online with Actions > Restart from within the web GUI, which starts the rescan of each folder.

I would like to either have this restart action performed automatically at reboot, or address the underlying cause.

When I bind mount the storage into the CT instead of using CIFS mounted via fstab, this behavior does not occur, so I believe it is related to CIFS mount timing, but I’m not sure. Does anybody else see this behavior?

Sounds like the file systems aren’t available when Syncthing starts. It should retry and revive them at some later time.

calmh - I seem to remember catching this on several occasions after a power outage, before I installed a UPS. I normally caught it and did Actions > Restart to get things back up immediately, never waiting around… I’m going to reboot the CT now and give it a couple of hours to test your suggestion.

3+ hours after the reboot, all folders still show “Stopped”. I wouldn’t expect it to take that long to initiate a rescan.

To work around this at reboot, I added this to root’s crontab:

@reboot sleep 10 && systemctl restart syncthing@[username].service

This appears to give CIFS time to mount before restarting the user’s Syncthing service. No more “Stopped” folders in the Syncthing web GUI after a reboot, and no user action required after a reboot/power cycle.

You should check the logs/provide them here.

Audrius - I would be happy to. Which logs?

Not Audrius, but you should probably start with the basic output that Syncthing logs when started from the command line. You’re running Linux and probably starting Syncthing in the background, so I’d suggest simply adding -logflags=3 -logfile=<path-to-a-file> to the command that launches it; then you can upload the logfile here once you’ve encountered the aforementioned issue.
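
For example (the logfile path here is just an example):

syncthing -logflags=3 -logfile=/var/log/syncthing.log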

tomasz86 - Thank you for your reply. I do not launch Syncthing manually, from a script, or with cron. After installation, I only ran:

sudo systemctl enable syncthing@[username].service
sudo systemctl start syncthing@[username].service

and it launches at boot. So I stopped the service with

sudo systemctl stop syncthing@[username].service

and then manually launched the application by simply entering

syncthing

and the folders never indicate “Stopped”. Interestingly enough, every time I ^C and relaunch in this manner, I never experience the “Stopped” folders issue; everything works perfectly. So the “Stopped” folders issue only occurs when Syncthing is launched as a systemd service…

You can probably add After= directives for the relevant CIFS mount units to the Syncthing service to solve it a bit more elegantly than a cron job with a delayed restart.

I’d guess it’s probably similar to the situation where you run Syncthing at system start while using disk encryption. The folders also start as “stopped” then, because the storage isn’t accessible yet at that point.

Actually, I think I’ve experienced a very similar issue with BitLocker on Windows previously. In my case, the folders also kept being stuck in the “stopped” state for hours until manual intervention. If there’s a mechanism that should make Syncthing re-detect the folders automatically, it definitely didn’t trigger for me at that time.

imsodin - Thank you for your suggestion. I learned something very valuable today while working out a more elegant solution with the After= directive. My solution:

Determine the name of the systemd-generated mount unit with:

systemctl list-units --type=mount

For my mount point, it is shown as:

mnt-syncthing\x2ddeb.mount loaded active mounted /mnt/syncthing-deb
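
(The \x2d is just systemd’s escape for the hyphen in the path; if in doubt, systemd-escape can generate the unit name:)

systemd-escape -p --suffix=mount /mnt/syncthing-deb
mnt-syncthing\x2ddeb.mount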

Now knowing that my mount unit is “mnt-syncthing\x2ddeb.mount” for my mount point /mnt/syncthing-deb, I looked for my Syncthing user service in systemd:

cd /etc/systemd/system/multi-user.target.wants
ls
cron.service        postfix.service   rsyslog.service  syncthing@[username].service
networking.service  remote-fs.target  ssh.service

cp syncthing@[username].service syncthing@[username].service.bak

nano syncthing@[username].service

I then appended the unit name of my mount to the end of the After= directive, changing:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target
StartLimitIntervalSec=60
StartLimitBurst=4

to

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target mnt-syncthing\x2ddeb.mount
StartLimitIntervalSec=60
StartLimitBurst=4

I then commented out my root cron entry, rebooted, and now there are no more “Stopped” folders, because the Syncthing service waits for my mount point to become active first!
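
Note that a reboot picks up the edited unit file automatically; to apply the change without rebooting, systemd needs a reload first:

sudo systemctl daemon-reload
sudo systemctl restart syncthing@[username].service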

Thank you for pointing me in the right direction. Awesome!

Now use systemctl edit --full and you can avoid nano and have something that is update-proof :wink:
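
That would look something like:

sudo systemctl edit --full syncthing@[username].service

which copies the packaged unit into /etc/systemd/system/ for editing, so a package update won’t overwrite the change.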

bt90 - Even better. Thank you! I was wondering if this would break with an update. Appreciate the tip.
