Very slow transfer speed even on direct connection

Hello. I know there are countless threads on this forum about this; here's yet one more, as nothing I've tried has fixed my issue. The dedicated server is running Windows 10 with Syncthing 1.4.2; the machine at home is running Linux, also with Syncthing 1.4.2. I've been attempting to troubleshoot this for quite some time. I've got 7 TB of content on a dedicated server with an Intel Core i7-2600, three hard drives in RAID 0, and 32 GB of RAM.

I'm attempting to transfer that over to my local machine here at home. The server is, in theory, on a gigabit link, yet I can't seem to get more than 600 KiB/s out of it. I have a 50 Mbit/s download speed, the machines connect to each other directly, and I have forwarded port 22000, even though the log clearly shows that UPnP is working. There is a bit of latency between the two machines since they are a fair distance apart, but every other protocol saturates this connection, though admittedly FTP does need a few more parallel connections than I would usually use.
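For reference, a quick bandwidth-delay-product calculation shows roughly what a single TCP connection with a limited window can carry over this kind of latency. The 64 KiB window used here is purely an illustrative assumption, not a measured value from this setup:

```python
# Rough bandwidth-delay-product estimate: with a fixed TCP window, a
# single connection can carry at most window_size / rtt bytes per second.
# The 64 KiB window is an illustrative assumption, not a measured value.

def max_throughput_kib_s(window_kib: float, rtt_ms: float) -> float:
    """Upper bound on single-connection throughput in KiB/s."""
    return window_kib / (rtt_ms / 1000.0)

for rtt in (60, 130):
    print(f"RTT {rtt} ms -> at most {max_throughput_kib_s(64, rtt):.0f} KiB/s")
```

With a window that small, 60–130 ms of round-trip time already caps a single connection in the high hundreds of KiB/s, which is in the same ballpark as the speeds described above; with TCP window scaling the real limit is usually higher.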

I have tried increasing the pullers slightly, but that doesn't seem to have any effect. The files range from a few KB to over 500 MB in size, but most of them are much larger than a few KB, mainly 100+ MB.

Closing the web UI makes absolutely no difference.

So far, Syncthing appears to be my dream, except for this, and the fact that, for some odd reason, the latest version no longer shows CPU and RAM usage, which I found very nice to have on hand. Speaking of RAM, Syncthing is now using over 600 MB both on the server and at home. Hopefully that goes down once syncing is complete (if it ever is).

Does anyone have any suggestions?

There are various advanced settings you can tweak, but what sort of latency are we talking about here?

Hello. Latency is anywhere from 60 ms to 130 ms. Thanks.

I don't think 60 ms of latency could cause this on its own.

What's most likely happening is that the small files sync slowly because of the per-file overhead: fsync'ing every file most likely takes longer than downloading the file itself.
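A rough way to see that per-file fsync cost (a minimal sketch; the file counts and sizes are made up, and absolute timings will vary wildly by disk) is to write the same total amount of data as many small fsync'ed files versus one large file:

```python
import os
import tempfile
import time

def timed_writes(num_files: int, file_size: int) -> float:
    """Write num_files files of file_size bytes each, fsync'ing each
    one, and return the total elapsed time in seconds."""
    payload = b"x" * file_size
    start = time.monotonic()
    with tempfile.TemporaryDirectory() as tmp:
        for i in range(num_files):
            path = os.path.join(tmp, f"file{i}")
            with open(path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())  # force the data to disk per file
    return time.monotonic() - start

# Same total bytes, very different fsync counts.
many_small = timed_writes(num_files=200, file_size=4 * 1024)
one_large = timed_writes(num_files=1, file_size=200 * 4 * 1024)
print(f"200 x 4 KiB, fsync each: {many_small:.3f} s")
print(f"1 x 800 KiB, one fsync:  {one_large:.3f} s")
```

On spinning disks the many-small-files case is typically far slower, which is the overhead being described here.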

As a test, you could set up a new folder, generate a single large file with random data, and see what the performance looks like.
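One way to generate such a test file (a sketch; the file name and size are arbitrary, and random data is used so compression can't skew the result):

```python
import os

def make_random_file(path: str, size_mib: int) -> int:
    """Write size_mib MiB of random data to path, 1 MiB at a time,
    and return the resulting file size in bytes."""
    chunk = 1024 * 1024
    with open(path, "wb") as f:
        for _ in range(size_mib):
            f.write(os.urandom(chunk))  # incompressible random data
    return os.path.getsize(path)

written = make_random_file("synctest.bin", 16)  # 16 MiB for a quick test
print(f"wrote {written} bytes")
```

Drop the resulting file into the test folder and watch the transfer rate; scale the size up once the small run behaves.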

Would changing the folder settings so that larger files are downloaded first help? Edit: I have tried syncing a large (3 GB) file. It doesn't seem to make a difference; the speed remains the same.

Just so we're sure we're not missing something in your setup, could you please post screenshots of the web UI from both sides?

I've found that download speed is directly related to how busy Syncthing is. If it's doing lots of scanning of folders, or if there are very large files that take time to analyse, this can bring a good speed down to its knees.

Even more so if you are using slow drives, such as USB drives, as the process will take longer.

It's a matter of finding a balance between what needs to be processed and what needs to be synced. When a folder drops from, say, 17 Mbps to 4 Mbps, I pause the folders that are not making significant progress, e.g. the ones that just show 'scanning'. If one says 'scanning (80%)' I let it continue, since that task is nearly complete. Sometimes that can mean 20 folders are paused. This brings the download back up to 17 Mbps: less work, faster speed.

In theory concurrency should be your friend, but Syncthing only moves on once a folder is up to date. If you are downloading, say, a 1 TB file, that could take a week to retrieve, which holds up progress, so I tend to keep mine at -1 unless I know the bigger stuff is already up to date.
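Assuming the setting being described here is Syncthing's maxFolderConcurrency advanced option, it lives in the options section of config.xml (this fragment is just a sketch of where the knob sits, not a complete config):

```xml
<configuration>
  <options>
    <!-- 0 = default (roughly one syncing folder per CPU core);
         a negative value removes the limit entirely -->
    <maxFolderConcurrency>-1</maxFolderConcurrency>
  </options>
</configuration>
```

The same value can also be changed from the web UI under Actions > Advanced, without editing the file by hand.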


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.