System is Slow on Windows? Look Here!

I decided to post this in case it would help anyone who is also struggling out there using Syncthing with Windows machines.

TL;DR - You may need to manually increase the maximum UDP read/write buffer sizes on Windows, because quic-go (used by Syncthing) might not increase them automatically. There's a post on GitHub about increasing them for Linux, but not for Windows. You can find information about increasing the buffers for Windows here.

In my case, I have about 30 machines that must run Windows due to software limitations. Syncthing keeps about 50GB of data from one folder synced between them, which works great 90% of the time. The other 10% is when an update occurs: the machines take 2-3 days to recover, with very small transfers trickling through the whole time.

After researching for a bit, I looked into the QUIC library that Syncthing uses, quic-go. That project has a page about UDP buffer sizes and how they impact the protocol.

Unfortunately, it didn’t mention how to fix this on Windows, but I eventually found one post and one document specifying that you can do it by adding two DWORD registry values under HKLM:\SYSTEM\CurrentControlSet\Services\AFD\Parameters:

DefaultSendWindow - 0x17D7840
DefaultReceiveWindow - 0x17D7840

0x17D7840 = 25,000,000 bytes = 25 MB
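For anyone who prefers importing the values rather than clicking through regedit, a .reg fragment like the following should do it (same path and values as above; apply at your own risk, and note that AFD parameter changes may require a reboot to take effect):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
"DefaultSendWindow"=dword:017d7840
"DefaultReceiveWindow"=dword:017d7840
```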

Once I added the registry keys and restarted Syncthing…the difference was massive! A node was syncing 50GB of data in less than 10 minutes, when previously it would take several hours to sync 5GB! Considering how my Syncthing “service” is set up, quic-go probably doesn’t have enough permissions to modify the buffers, so it is forced to work with the default, which is 8192 bytes, i.e. 8 KiB.

Hopefully, this helps someone, it helped me!

Note: Unfortunately, I had to delete some of my links because new users are limited to 2, but I kept the most important ones.


Very nice! You should open an issue in the quic-go GitHub project. This is IMHO valuable information which should be part of their wiki.


Why aren’t you using TCP?

I am using both. I had noticed that some nodes will swap from using TCP to using QUIC instead, which also happened to be the slow ones.

Admittedly, I haven’t looked into why it’s switching to QUIC in the first place when it could just use TCP. I think that should be the next step in my investigation.

That being said, I’ve also noticed performance increases in the nodes that are consistently using TCP.

You can just force it to use TCP (by disabling the QUIC listeners, i.e. adjusting the listen addresses), and that would also solve your problem, potentially in a better way.
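For reference, this can be done per device under Settings → Connections → Sync Protocol Listen Addresses (or in config.xml). Replacing the default with explicit tcp:// entries means no QUIC listener is started; a sketch of the relevant config fragment (the relay entry is optional, keep it only if you use relays):

```xml
<options>
    <listenAddress>tcp://0.0.0.0:22000</listenAddress>
    <listenAddress>dynamic+https://relays.syncthing.net/endpoint</listenAddress>
</options>
```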


Thanks AudriusButkevicius, I’ll test that out next. If it doesn’t maintain the TCP connection, there’s likely something in my network that I’ll need to analyze.

tl;dr you’re basically enforcing huge send/receive buffers. The fact that this improves throughput is IMHO an indicator that you’re suffering from high packet loss. Your TCP connections getting dropped might also be a result of this.

Thanks bt90! That’s a good follow-up post on quic-go’s GitHub. :sweat_smile: Haha, admittedly, I was a bit overzealous with increasing it to 25MB, and had a bit of a “go for broke” moment.

Initially, I had started with 2MB after reading the setup for the LCM Project: https://lcm-proj.github.io/lcm/content/multicast-setup.html#kernel-udp-receive-buffer-sizing

Unfortunately, I couldn’t include it in the original post since I’m limited on the links I can post.

The next step for me would be to decrease the buffers and figure out the ideal/sensible buffer size for my machines. I think I will need to do some research here. If I come up with something, I’d be happy to share my findings here in case it helps others.
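As a starting point for that sizing exercise, a common rule of thumb is the bandwidth-delay product: the buffer should hold roughly as many bytes as can be "in flight" on the link. A tiny sketch (the helper name and the example link numbers are mine, not from anyone's actual setup):

```python
def udp_buffer_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight at full link utilisation.

    bandwidth_mbps: link speed in megabits per second
    rtt_ms: round-trip time in milliseconds
    """
    bandwidth_bytes_per_s = bandwidth_mbps * 1_000_000 / 8
    return int(bandwidth_bytes_per_s * (rtt_ms / 1000))

# Example: a 1 Gbit/s LAN with 2 ms RTT
print(udp_buffer_bytes(1000, 2))  # 250000 bytes, i.e. ~256 KiB would cover it
```

By that math, even 2MB is generous for a low-latency LAN; 25MB only starts to make sense on fast long-distance links. Lossy links may still benefit from extra headroom, though.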
