I just experienced a somewhat uncomfortable situation. I wanted to sync an Android tablet on WLAN (eduroam, indoor network) in Berlin with a workstation connected via a VPN tunnel whose exit point is in Munich, about 800 km away. So data packets from the PC here pop out in Munich, while both clients sit 30 cm apart. Some people want to see the world burn; that's OK.
So far so good; that is an evil setup, I know. After waiting several minutes (10-20), rescans, restarts, whatever we tried, the two machines suddenly registered each other. Everything took much, much longer than expected, but we finally got the sync started. Then, however, we saw extremely slow rates, around 10 kB/s and lower.
I hunted it down to the relay server we had magically been connected to: for whatever reason, the Munich side had ended up on a relay server in .cz with 99 clients, more than 500 sessions, and a bandwidth limit of 150 kB/s.
For me, fixing this was no problem at all: I took the relay IP, went to the relays web page, found the relay server I was connected to, and could confirm that it was the culprit. Reconfiguring both clients to use our own 10 GbE relay server was no problem either; I now get about 20-40 MB/s.
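For anyone who hits the same problem: a rough sketch of what that reconfiguration can look like in `config.xml` (the same thing can be set in the GUI under Settings → Connections → Listen Addresses). The relay host name below is a placeholder, and port 22067 is only the usual strelaysrv default:

```xml
<configuration>
  <options>
    <!-- Keep the direct TCP/QUIC listeners, but replace the dynamic
         relay pool with one explicit relay of your own.
         relay.example.org is a placeholder host name. -->
    <listenAddress>tcp://0.0.0.0:22000</listenAddress>
    <listenAddress>quic://0.0.0.0:22000</listenAddress>
    <listenAddress>relay://relay.example.org:22067</listenAddress>
  </options>
</configuration>
```

With an explicit `relay://` address the client no longer consults the public pool at all, which is exactly what you want when you run your own relay.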
But I'm thinking about people who just experience unresponsiveness, slow rates, and connection problems with the defaults after being redirected to a bandwidth-limited strelaysrv.
How is it even possible that a 150 kB/s relay ends up with hundreds of clients?
I see that it's nice to have a lot of relays; that's why we provide one as well. But I fear that slowdowns like the one above make some people think twice before using Syncthing: "Yeah, I tried it. Nice. But too slow."
Maybe we could have a look at the relay selection algorithm for the default setup?
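To make the suggestion concrete, here is a minimal sketch (not Syncthing's actual algorithm, and the relay records, field names, and numbers are made up for illustration) of a selection that weights relays by advertised per-client headroom, so a throttled, crowded relay is still eligible but rarely chosen:

```python
import random

# Hypothetical relay records; assume each relay advertises a rate limit
# (0 = unlimited) and its current client count.
relays = [
    {"url": "relay://relay-a.example:22067", "rate_kbps": 150,    "clients": 99},
    {"url": "relay://relay-b.example:22067", "rate_kbps": 0,      "clients": 12},
    {"url": "relay://relay-c.example:22067", "rate_kbps": 10_000, "clients": 40},
]

def headroom(relay):
    # Rough per-client headroom: the bandwidth cap divided by the number
    # of clients already sharing it. Treat 0 (unlimited) as a large cap.
    cap = relay["rate_kbps"] or 100_000
    return cap / max(relay["clients"], 1)

def pick_relay(relays, rng=random):
    # Weighted random choice: a 150 kB/s relay with 99 clients gets a
    # tiny weight instead of the same chance as an idle 10 GbE one.
    weights = [headroom(r) for r in relays]
    return rng.choices(relays, weights=weights, k=1)[0]
```

Something along these lines would still spread load across the pool, but make the "hundreds of clients on a 150 kB/s relay" situation statistically unlikely.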