Recurrent Read timeout

Hello, I have a basic Syncthing setup like this:

  • Server A, knows B but not its address
  • Client B, knows A by its IP address

Both local and global discovery are disabled, but I doubt that’s relevant here.

Now, A and B see each other, and Syncthing even works to some extent, in that it can synchronize files. However, about 30 s after the devices connect, the up/down rates drop to 0 and, about two minutes later, they get disconnected.

Logs show a read timeout on both sides:

Apr 28 08:25:12 hostname syncthing[10544]: [KFBPQ] INFO: Connection to BBBBBBB… closed: read timeout

Synchronization does happen, but only a few hundred kilobytes per connection before it drops. Everything gets synced eventually, but it takes forever.

I am struggling to find the cause of this issue. It started a few days ago on 0.12.21, after several weeks of flawless operation. I tried upgrading to 0.12.22 (from the Syncthing deb repository), with no change.

I also know the link is not very good: I see about 15% packet loss. Other programs do not have issues with it, though (I transferred several GB over sftp without a single disconnect a few days ago). The firewall is not dropping the connection either (I can still see it in the state table long after the transfer stalls).

Any clue? Either to fix the issue or to get a better understanding of what’s going on?

15% packet loss is rather extreme. The usual pain threshold is around 0.1% to 1%, at which point normal web browsing already becomes unbearable as everything stalls. I’m guessing this is simply the connection stalling, so that Syncthing doesn’t receive any data within the timeout period, which is five minutes. “No data” from the Syncthing perspective is a little stricter than from a pure TCP perspective, as it needs to receive at least a full TLS frame before it can get anything out of the connection.
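As a rough illustration of that mechanism, here is a minimal Go sketch (not Syncthing’s actual code; the buffer size, timeout constant, and function name are assumptions) of a read loop with a deadline over a TLS connection. The deadline only gets pushed back when a complete record can be decrypted and returned, so a lossy link that never delivers a whole record in time still ends in a read timeout even though the TCP connection looks alive:

    package sketch

    import (
        "crypto/tls"
        "errors"
        "log"
        "net"
        "time"
    )

    // readTimeout mirrors the five-minute limit mentioned above; the exact
    // handling around it here is an assumption for illustration only.
    const readTimeout = 5 * time.Minute

    // readLoop reads from a TLS connection, extending the deadline after each
    // successful read. Because conn is a *tls.Conn, Read only returns data once
    // a complete TLS record has arrived and been decrypted -- partial records
    // trickling in over a lossy link do not count as progress, so the deadline
    // can fire even while the underlying TCP connection is still established.
    func readLoop(conn *tls.Conn) error {
        buf := make([]byte, 64*1024)
        for {
            if err := conn.SetReadDeadline(time.Now().Add(readTimeout)); err != nil {
                return err
            }
            n, err := conn.Read(buf)
            var nerr net.Error
            if errors.As(err, &nerr) && nerr.Timeout() {
                // The situation behind "closed: read timeout": no complete
                // record made it through within the window.
                return errors.New("read timeout")
            }
            if err != nil {
                return err
            }
            log.Printf("received %d bytes", n)
        }
    }

The point being: from TCP’s perspective the connection may still be exchanging the odd segment, but from Syncthing’s perspective nothing usable has arrived, so the five-minute clock runs out.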

A packet trace might confirm what’s going on. A tcpdump -s0 -w syncthing.pcap -i $theInterface port $serverPort on the server, where $serverPort is the port Syncthing listens on, uploaded somewhere, perhaps…

Sorry for not answering sooner, especially since your prompt answer was really helpful.

I know 15% is extreme, but unfortunately that’s what you get in some places with wireless. I have since moved elsewhere, so unfortunately I cannot capture a dump in such an extreme environment.

On the other hand, in a dozen other places with varying connection quality, I was able to strongly correlate the issue with a high packet loss rate.

I wonder why HTTP traffic does not seem much affected, though. My guess is that packet loss happens upstream but not downstream, so once the request makes it through, there is no issue with the download. That’s plausible, given that the transmit power of the laptop is much lower than that of the access point.

So far I have only been syncing new files from my side. Next time I run into a very bad connection, I will try putting files on the server to have them downloaded, and see if that works better than the other direction.

Anyway, thanks for your fast answer. The bit about the full TLS frame was quite helpful in understanding how packet loss affects the issue.

Don’t forget that HTTP requests are mostly one-off and complete within a short timespan, whereas Syncthing is a continuous stream.
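To illustrate the contrast (a sketch only; the timeout value and function name are assumptions, not anything from Syncthing): a one-off HTTP request is a single bounded exchange, so moderate packet loss merely slows it down, while a long-lived stream like the read loop sketched earlier has to keep delivering complete records for the whole session to stay under its idle timeout.

    package sketch

    import (
        "io"
        "net/http"
        "time"
    )

    // fetchOnce performs a single, short-lived HTTP exchange. TCP
    // retransmissions absorb moderate packet loss during the few seconds the
    // transfer lasts; once the body is read, the exchange is over. A continuous
    // Syncthing connection, by contrast, must keep receiving complete TLS
    // records for as long as the devices stay connected, or the idle read
    // timeout eventually fires.
    func fetchOnce(url string) ([]byte, error) {
        client := &http.Client{Timeout: 30 * time.Second} // assumed overall deadline
        resp, err := client.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }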