I’ve checked various sources and forum posts, but I can’t figure out why the transfer speed on my LAN connection is stuck at around 40–50 Mbps when sending data from one of my machines.
I am running Syncthing on TrueNAS Scale and on a Windows machine. Both are connected via a 10 Gbps connection that also delivers such speeds when copying files directly via SMB.
Syncthing on my Windows machine runs via SyncTrayzor; Syncthing on the NAS runs via the TrueCharts Docker image.
Both installations show the connection type as TCP LAN.
CPU/RAM/drive/network usage is low on both machines: on Windows around 1% CPU and 820 MB RAM (50 GB free); on TrueNAS (data via netdata) the container hovers around 10% of its allowed CPU usage and 20% of its allowed RAM usage.
The storage paths in the Docker container are mounted as host paths, with CPU resources set to 4000m and RAM to 8Gi.
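In case you want to verify that the 4000m / 8Gi limits actually reach the container, you can read them from the cgroup files inside it. A minimal sketch, assuming cgroup v2 (on cgroup v1 the files are named differently, as noted in the comments):

```shell
# Show the CPU and memory limits the container actually received.
# Assumption: run this inside the Syncthing container. These are the
# cgroup v2 paths; on cgroup v1 look at cpu.cfs_quota_us and
# memory.limit_in_bytes instead.
cat /sys/fs/cgroup/cpu.max 2>/dev/null || echo "cpu.max not found (cgroup v1?)"
cat /sys/fs/cgroup/memory.max 2>/dev/null || echo "memory.max not found (cgroup v1?)"
```

With 4000m and 8Gi applied, cpu.max should read something like `400000 100000` (4 CPUs of quota per 100 ms period) and memory.max should be 8589934592.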
When I send data from Windows to TrueNAS via Syncthing, I get full performance, as fast as the LAN connection allows. When syncing data from TrueNAS to Windows, the speed is abysmal. So the Syncthing installation on my NAS seems to be at fault. But why?
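One thing worth ruling out is an asymmetric raw TCP path, independent of Syncthing. A sketch using iperf3, assuming it is installed on both ends and running as a server on the NAS (`iperf3 -s`); the address is a placeholder:

```shell
NAS=192.168.1.10   # placeholder: replace with your NAS's address

if command -v iperf3 >/dev/null 2>&1; then
    # Client -> NAS direction (matches the fast Windows -> TrueNAS sync)
    iperf3 -c "$NAS" || echo "could not reach $NAS"
    # NAS -> client direction; -R reverses the stream (matches the slow sync)
    iperf3 -c "$NAS" -R || echo "could not reach $NAS"
else
    echo "iperf3 not installed"
fi
```

If the reverse direction is also slow at the plain TCP level, the bottleneck sits below Syncthing (NIC offload settings, driver, cabling or switch), not in the container.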
I’m out of ideas at this point what to look at next.
Does anyone have an idea/suggestion?
Does this happen when sending a single large file or when sending multiple small files? If it’s the latter, then the slower speed is probably normal.
The folder consists of 20,237 files totaling 3.88 TiB, so an average of roughly 200 MiB per file.
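Sanity-checking that average with plain arithmetic (no assumptions beyond the counts above):

```shell
# 3.88 TiB spread over 20,237 files, expressed in MiB per file.
awk 'BEGIN { printf "%.0f MiB per file on average\n", 3.88 * 1024 * 1024 / 20237 }'
# prints: 201 MiB per file on average
```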
What kind of storage is the Syncthing database (usually ~/.config/syncthing) located on? Slow storage (read: spinning hard disks) can significantly impact syncing speed. For files as large as yours I wouldn’t expect a big impact after the initial index exchange, but it’s still worth checking.
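A quick way to check both the database location and the rough speed of the disk it sits on; this is a sketch, where `--paths` assumes a reasonably recent Syncthing binary and `TARGET` is a stand-in you should point at the disk that actually holds the database:

```shell
# Where does Syncthing keep its database? (--paths prints the locations
# on recent Syncthing versions; adjust if yours differs.)
command -v syncthing >/dev/null 2>&1 && syncthing --paths || echo "syncthing not in PATH"

# Rough sequential write/read check. TARGET is an assumption: point it
# at a directory on the same disk as the database.
TARGET=/tmp
dd if=/dev/zero of="$TARGET/st-disk-test" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
dd if="$TARGET/st-disk-test" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TARGET/st-disk-test"
```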
Is it? Wouldn’t the receiver have to do more work? Database and file writes should be costly.
You could try excluding Syncthing’s database folder from Defender to rule out the AV going rogue.
Since my last post here the behaviour has changed drastically.
I am not sure what exactly caused it, because I changed quite a few things around my setup: I switched from a SATA-to-PCIe card to a SAS card, introduced a new dataset layout separating SMB access from container access, put my Docker container on a separate user ID and managed file access via NFS ACLs, and updated to TrueNAS Scale Bluefin. But I do suspect the recent TrueNAS Scale update played a part in it.
As per the changelog (https://www.truenas.com/blog/truenas-scale-bluefin-is-released-into-the-wild/), TrueNAS Scale now supports OverlayFS, which apparently greatly reduces overhead when running container workloads.
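In case you want to confirm which storage driver is actually in play, the Docker CLI reports it directly (a sketch; on the TrueCharts/k3s side the equivalent would be containerd's snapshotter, so this only applies where the `docker` CLI talks to the daemon running your container):

```shell
# Print the active storage driver; "overlay2" is the OverlayFS-based one.
if command -v docker >/dev/null 2>&1; then
    docker info --format '{{.Driver}}' 2>/dev/null || echo "docker daemon not reachable"
else
    echo "docker CLI not available here"
fi
```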