I have a Synology NAS running Syncthing via the Docker container. In general it works great. I have a process where I transfer images from a camera via FTP to a folder that's being watched by Syncthing. This all works fine, except that the Syncthing instance gets an FS watcher event after the first image is transferred, and the directory is scanned. The initial few files transfer fine. Then, as more images are stored via the FTP server, the FS watcher seems not to fire any more events, so only the first 20-30 images are transferred. I have to go into the Web UI and manually trigger a scan (or wait for the scheduled scan) for the remaining files to be picked up. There's no file integrity problem: the manual scan works fine and there's no apparent risk of data loss.
But why don't the subsequent files trigger additional scans until all the files are transferred? Anyone know? Is there a setting I screwed up somewhere that may be affecting this behavior?
Filesystem is BTRFS
There aren’t a ton of files. In fact, the folder probably has 10 subfolders.
The camera actually creates an empty folder and puts files into it. In this case, the camera ended up pushing 1400 files into one folder and 1400 files into another. In both cases, Syncthing picked up the initial changes and sent the first few files from the first folder, but ignored the rest of the files that were put there… Then the camera created the second subfolder and started putting files into it, and Syncthing sent a few files from the second folder too…
So a few files from each folder are picked up…
I have a ton of watches allocated — half a million or something ridiculous. I also don't get any warnings in the UI about not having enough watchers, which I did see before I raised the limit to that ridiculous number.
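In case it's useful for anyone looking at the same thing: since the container shares the host kernel, the inotify limits that matter are the host's, not the container's. Besides max_user_watches, there are also fs.inotify.max_user_instances and fs.inotify.max_queued_events — if the event queue overflows, the kernel silently drops events, which would look a lot like what I'm seeing. A minimal sketch for checking them on the NAS (standard Linux proc paths; I'm assuming DSM doesn't relocate them):

```shell
# Current inotify limits (per real UID; the Docker container uses the
# host kernel's limits, so check these on the NAS itself).
cat /proc/sys/fs/inotify/max_user_watches    # how many paths can be watched
cat /proc/sys/fs/inotify/max_user_instances  # how many inotify handles per user
cat /proc/sys/fs/inotify/max_queued_events   # queue depth before events are dropped

# Raise a limit temporarily (as root); add the same setting to
# /etc/sysctl.conf (or a Synology boot task) to persist across reboots.
# sysctl -w fs.inotify.max_queued_events=65536
```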