Syncthing stops downloading while doing an intensive disk task

Hi All

I’m a new user and have ~20 sites all connecting to a single server across the internet. Several of the sites are in sync, and others are still downloading. Source folders are between 100GB and 2TB in size, with very large full backup files, as well as daily incrementals for the last 30 days. Source files do not change.

I’ve seen some strange behavior when leaving the UI open in Chrome for extended periods, so I’ve been closing it recently.

Today, while checking Task Manager, I noticed that Syncthing had completely stopped downloading from all of my sites that are not yet in sync. One folder was being written to at 80 MB/s for several hours by the syncthing process at low priority, while the System process was writing to the same folder at about 5 MB/s at normal priority. In addition, connecting to localhost with the browser timed out. After waiting a few hours, I restarted the system in frustration, as it seemed to be processing forever. This is not a disk I/O issue, as I have download targets on 4 different disks, and 3 of them were idle.

Is this a legitimate use case? Am I doing something wrong? Is it normal to process large files for a very long time, interrupting the download of other files?

It’s not clear what the actual question is. Syncthing usually downloads a single file at a time. If Syncthing was not responsive, it’s most likely bottlenecked by I/O, but check other things like RAM, CPU, and network as well.

Here is a screenshot of what happens an hour or two after restarting Syncthing. Some folders have been fully scanned and are downloading from remote sites. Some folders are very large and will take a while longer to finish scanning. After each folder is scanned, it begins downloading, as you can see in the network traffic area.
The problem was that when I came in this morning, nothing was downloading, and one client had been WRITING data at 80 MB/s for several hours. One process was System and one was syncthing, as I mentioned previously.
OS: Windows Server 2012 R2, 2.93 GHz, 12 CPUs, 32 GB ECC RAM, stable system.

I came in this morning and the problem is back. The system is not downloading anything while it writes this file. Is this normal?

A little more info: the file being worked on is 2.12 TB in size. Do you have a size limit? The folder the file is in shows the space as allocated, but the file is not visible. Eventually it stopped writing and scanned my whole folder list, and downloads are resuming.
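
(For a rough sense of scale, here’s a back-of-the-envelope sketch. The 128 KiB standard block size and the 16 MiB large-block maximum are Syncthing’s documented values; the arithmetic below is just illustrative, not how Syncthing actually picks block sizes for a given file.)

```go
package main

import "fmt"

func main() {
	const fileSize = 2.12e12 // ~2.12 TB, the file from this post

	// Standard block size vs. the "large blocks" maximum.
	const smallBlock = 128 * 1024       // 128 KiB
	const largeBlock = 16 * 1024 * 1024 // 16 MiB

	fmt.Printf("128 KiB blocks: ~%.1f million\n", fileSize/smallBlock/1e6)
	fmt.Printf("16 MiB blocks:  ~%.0f thousand\n", fileSize/largeBlock/1e3)
}
```

Every block has to be hashed and tracked, so the block count is a big part of why a file this size takes so long to scan and assemble, even with large blocks enabled.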

It’s OK not to download stuff if we find that data locally; you end up with just reads and writes, no downloads. The file is assembled in a hidden temporary file and only renamed into place once it’s complete, which is why you see the space allocated but no visible file.
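
A minimal sketch of what that reuse looks like, in Go since that’s what Syncthing is written in. This is illustrative only, not Syncthing’s actual code; the types and the localBlockIndex lookup are invented for the example:

```go
package main

// Block describes one chunk of a wanted file.
type Block struct {
	Hash   string
	Offset int64
	Size   int
}

// localBlockIndex maps a block hash to a local file that already
// contains that block (hypothetical stand-in for the local database).
var localBlockIndex map[string]string

func fetchBlock(b Block, dst []byte)            { /* network request, elided */ }
func readLocal(path string, b Block, dst []byte) { /* local disk read, elided */ }

// assembleFile fills in the temp file for one wanted file. Blocks found
// locally are copied from disk; only missing blocks go over the network.
func assembleFile(blocks []Block, write func(b Block, data []byte)) {
	buf := make([]byte, 16*1024*1024)
	for _, b := range blocks {
		data := buf[:b.Size]
		if src, ok := localBlockIndex[b.Hash]; ok {
			readLocal(src, b, data) // reuse: copy from a local file
		} else {
			fetchBlock(b, data) // miss: request from a remote device
		}
		write(b, data)
	}
}

func main() {
	assembleFile(nil, func(b Block, data []byte) {})
}
```

When every block of a new file is already present in some local file (say, an incremental sharing blocks with an earlier full backup), that loop is pure disk reads and writes and produces no download traffic at all.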

Yeah, that sorta makes sense. It’s just weird, as I still have about 10 folders with data left to download, and all the transfers cut off while it writes data to this one extremely large file.

I’ve set my full scan interval to 86400 seconds (1 day) on each folder, and enabled large blocks on the folder on both sides. Is there some timer that makes the system rescan all the folders at the same time? I’d like to stagger this full rescan of all folders if I can…
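
In case it helps anyone else reading: one way to stagger the full rescans is to give each folder a slightly different rescanIntervalS in config.xml. The attribute name is real; the folder IDs, paths, and offsets below are made up, and the excerpt omits the folder elements’ other attributes:

```xml
<!-- config.xml excerpt (illustrative): offset each folder's full-rescan
     interval so they don't all come due at the same time. -->
<folder id="site01-backups" path="D:\Backups\Site01" rescanIntervalS="86400">
</folder>
<folder id="site02-backups" path="E:\Backups\Site02" rescanIntervalS="90000">
</folder>
<folder id="site03-backups" path="F:\Backups\Site03" rescanIntervalS="93600">
</folder>
```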

Any other suggestions appreciated.

Rescan times are not exact; up to 20% of delay is randomly added or removed between scans. Anyway, you can run Syncthing with debug logging, or just grab stack traces while it’s running, to see where it’s spending its time. I suspect your disks are simply slow and Syncthing will not download more data until it can write out what it already has; your disks/OS also seem to do unfair I/O scheduling, starving some threads of I/O and essentially preventing transfers.
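
A minimal sketch of that jitter, assuming a simple uniform ±20% offset (the 20% figure is from this reply; the exact mechanism inside Syncthing may differ):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRescanDelay returns the configured interval plus or minus up to
// 20%, so folders configured with the same rescanIntervalS drift apart
// instead of all rescanning at the same moment.
func nextRescanDelay(interval time.Duration) time.Duration {
	jitter := rand.Float64()*0.4 - 0.2 // uniform in [-0.2, +0.2)
	return time.Duration(float64(interval) * (1 + jitter))
}

func main() {
	day := 86400 * time.Second // the 1-day interval from the posts above
	for i := 0; i < 3; i++ {
		fmt.Println(nextRescanDelay(day))
	}
}
```

Note that after a restart all folders are rescanned up front regardless, which matches the post-restart behavior described earlier in the thread.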
