Transfer Slow After Period of Inactivity

Just getting started with Syncthing to sync folders between my Seedbox and local media server. Initially everything looked great: changes were picked up quickly on both sides, transfer rates were fantastic, and resource usage was fairly low. I’ve started noticing a trend, however: after a period of inactivity (several hours without any syncs), my next sync is very slow, around 3 Mbps, whereas I normally see 100+ Mbps. The odd thing is that if I pause and then immediately resume the sync, everything jumps right back up to full bandwidth and stays that way for all subsequent operations, until the next period of inactivity.

I’m not sure which parts of my config are relevant for troubleshooting. Relays were enabled initially, but I’ve since turned them off and set up a direct connection using hostnames. The relevant ports are properly NAT’d through the local router and I have no local QoS or anything like that turned on. The local media server is always busy, so it isn’t resuming from idle or a low-power state. I’m wondering if it’s something on the remote side, but I’m at a bit of a loss as to where to look. Any thoughts or help would be appreciated!

This is interesting, and the fact that pausing and unpausing fixes it is even more intriguing. It could be somewhere along the network, for example long-running connections getting throttled or deprioritized, or it could be in our code, something like the rate limiters. Are you running with rate limits?
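
For context, most rate limiters are some flavor of token bucket. Here is a generic Python sketch (illustrative only, not Syncthing's actual Go implementation): tokens refill while idle, so after hours of inactivity a correct limiter should allow a brief burst, not a slowdown, which is part of what makes this odd.

```python
import time

class TokenBucket:
    """Generic token-bucket limiter (illustrative only)."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate              # tokens (e.g. bytes) added per second
        self.burst = burst            # bucket capacity
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def take(self, n: float) -> bool:
        now = time.monotonic()
        # Refill for the elapsed time, capped at the burst size. After
        # hours of idle the bucket is simply full, so the first transfer
        # should be *faster* (a burst), never slower.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```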

I am, actually: 20 MB/s on the local pull side. I also ran a test overnight, with a script that deleted and recreated a small text file in the local pull folder every 15 minutes as a kind of keep-alive. No difference this morning: the first transfer from the remote was capped at almost exactly 400 KB/s, and after pausing/unpausing it shot up to 20 MB/s (see screenshot). The next thing I was going to try was adding some code to my extractor script on the remote side to create a small “wakeup” file in the push folder before dumping the real payload. I’ll see how that goes. Are you thinking, though, that my rate limit could be the culprit?
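
The keep-alive was essentially this (the path is a placeholder for my actual pull folder):

```python
import pathlib
import time

# Hypothetical path; the real one points at my local pull folder.
KEEPALIVE = pathlib.Path("/srv/media/pull/keepalive.txt")

while True:
    if KEEPALIVE.exists():
        KEEPALIVE.unlink()                   # delete the previous marker
    KEEPALIVE.write_text(str(time.time()))  # recreate it so a change syncs
    time.sleep(15 * 60)                     # repeat every 15 minutes
```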

Try disabling that and see if it gets better.

If it doesn’t, the next thing I’d suggest is stripping out the rate limiters altogether and seeing what happens.
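
For reference, the global limits live under `<options>` in config.xml as `maxSendKbps` and `maxRecvKbps` (0 means unlimited). A quick way to double-check what’s actually configured (the config path varies by platform, so treat it as a placeholder):

```python
import xml.etree.ElementTree as ET

# Typical Linux location; adjust for your platform/install.
CONFIG = "/home/user/.config/syncthing/config.xml"

opts = ET.parse(CONFIG).getroot().find("options")
for tag in ("maxSendKbps", "maxRecvKbps"):
    el = opts.find(tag)
    print(tag, "=", el.text if el is not None else "(unset)", "(0 = unlimited)")
```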

Do you see a rise in any other resource usage, like memory, CPU, or I/O?

Not on the local end - I didn’t check the remote at the time. I’ll check both next time it happens. If my wakeup trick doesn’t work then I’ll try removing the rate limiters altogether and see how that goes.

Looks like a small wakeup file does the job. My extractor script now drops a 1 KB file with a random name into the sync folder before dumping the rest of the payload, and the pull folder is set up to pull the smallest files first. I ran a couple of test jobs over the last 24 hours and all of them were delivered at full speed.
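
The addition to the extractor is roughly this (folder path and filename scheme are just examples):

```python
import os
import pathlib
import uuid

def drop_wakeup(sync_folder: str) -> None:
    """Write a 1 KB random-named file; with the pull side ordered
    smallest-first, it ramps the connection up ahead of the payload."""
    path = pathlib.Path(sync_folder) / f"wakeup-{uuid.uuid4().hex}.bin"
    path.write_bytes(os.urandom(1024))

# In the extractor, before moving the real payload in:
drop_wakeup("/srv/media/push")
```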

That doesn’t really help us understand the cause, however.
