I just migrated over to syncthing to see how it compares to btsync - liking it so far!
I’m wondering two things:

1. Is it possible to force a sync to happen over a local/internal network, and if so, how do I check that this is the case? Right now I have a folder uploading to my NAS, but the upload speed is around 10-15MB/s, which is very slow for my gigabit network.
2. How does Syncthing decide whether to use the local network or to route publicly over the internet? My concern is my bandwidth, given I have a monthly cap.

My desktop computer will always be at home syncing to my local NAS, so ideally I’d just transfer locally over my network to save on bandwidth. However, my laptop tends to travel with me. When I’m at home I’d like my laptop to sync over the local network, but when I’m at work I’d like it to still sync over the internet. Is it possible to achieve something like this?
My main concern is bandwidth usage; I want a setup where usage is minimized with little fuss.
If local discovery is enabled, or you enter the IP:port of the other device in the device settings instead of "dynamic", then Syncthing will only transfer locally.
You can check this by expanding the device on the right side of the web interface. If it shows a local IP, then it is syncing locally.
I set up the sync between my desktop and server with each other's local IP in the device settings, so they can only sync locally, and I see the same low speed as you.
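For the fixed-address case, the setting ends up in the device entry in config.xml. A sketch (the device ID, name, and 192.168.1.10 address are made-up examples; 22000 is the default sync port):

```xml
<device id="DEVICE-ID-OF-THE-NAS" name="nas">
    <!-- "dynamic" means: find the device via discovery; -->
    <!-- a fixed tcp://ip:port pins the connection to that address. -->
    <address>tcp://192.168.1.10:22000</address>
</device>
```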
thanks, sounds easy
If I have both local and global discovery turned on on my laptop, will it favour global discovery when I'm away from the LAN and automatically switch to local discovery when I'm at home (using dynamic instead of a fixed IP)?
Yes, it works that way with my Android devices. On my home WiFi it uses local discovery and shows the local IP address, and on foreign WiFi it connects through global discovery.
The network is most likely not the bottleneck. We do a lot of hashing, crypto and other shuffling that probably means you’re CPU limited to that rate.
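To get a feel for how much the hashing alone costs, here is a rough single-core sketch, assuming SHA-256 over 128 KiB blocks (the size Syncthing splits files into); the resulting number varies a lot by CPU:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

// hashMiBPerSec hashes totalMiB of zeros in 128 KiB blocks and
// returns the single-core SHA-256 throughput in MiB/s.
func hashMiBPerSec(totalMiB int) float64 {
	block := make([]byte, 128*1024)
	start := time.Now()
	for i := 0; i < totalMiB*8; i++ { // 8 blocks of 128 KiB per MiB
		sha256.Sum256(block)
	}
	return float64(totalMiB) / time.Since(start).Seconds()
}

func main() {
	fmt.Printf("single-core SHA-256: %.0f MiB/s\n", hashMiBPerSec(256))
}
```

Hashing is only one of the per-block costs (there is also crypto on the wire and message serialization), so the actual sync rate sits well below this figure.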
I just ran a test with a 2.1GiB file (MP4).
I closed both web interfaces* and watched the folder listing at 10s intervals. The new file went from 13MB to 1.1GiB, and from 1.1GiB to 2.1GiB, in 10s each, while the CPU usage on both devices was not very high.
So it transferred at about the maximum of my Gbit LAN.
After that I did another test with a 10GiB file (MKV), which was only transferred at about 10-15MiB/s while Syncthing's CPU usage was at 350% (all 4 cores at 80-90% each) on the sending side and at ~90% on the receiving side.
I ran each test twice with same results.
I also tested 2×1.3GiB files (MKV), which caused a CPU usage of ~150% at the sender and transferred at ~40MiB/s.
My sending device is a Core i5 4590S (M2 SSD) and the receiving device is a Core i3 3220T (WD RED HDD, RAID1).
The transfer rate and CPU usage vary with the content and the amount to be transferred.
Why is that?
*It is known that the transfer speed takes a hit when the web interface is open, even though neither CPU nor RAM is really the limit.
This is my theory:
When transferring larger files we get throttled by the garbage collector (because more data travels, we need to allocate more buffers to read into, and the garbage collector has to do more work to keep us within sane memory limits, hence it kicks in more often, stopping everything else that is running).
Also, it’s possible that parts of the MKV you were sending were already at the destination (in some form; perhaps parts of it were the same as some other file), hence there were very few actual transfers, as Syncthing was just rebuilding the file from what it already had around.
The 2.1GiB MP4 file was already in another shared folder, so the test is more or less invalid.
I repeated the test and saw that the complete file was blue (copied from original).
I used another 1.8GiB MP4 which wasn't in any other shared folder, and the speed and CPU usage were comparable to the test with the two 1.3GiB MKV files.
Additionally I tested a 4.4GiB MP4 file, which was transferred at a little over 30MiB/s while Syncthing used about 300-330% CPU on the sending device.
The more there is to transfer, the slower and more CPU-intensive it gets.
I did a few more tests with smaller files and the fastest transfer speed I could reliably measure was about 40MiB/s with a little less than 100% CPU usage.
This is counterintuitive to me (the cost per block shouldn't increase just because there are more blocks to transfer), so it's something that should be investigated…
The profile says otherwise: when sending a 5GB file, we spend 14% of the time in heap scans, 10% in crypto, and 30% in serializing, whereas with a 1GB file we spend 50% in crypto, 4% in heap scans, and 20% in serializing.
Just for fun, how does
GOGC=400 syncthing perform?
I repeated the test with a 10GiB MKV file with GOGC=400.
The speed was better at ~25MB/s, and the CPU usage of the sending Syncthing was higher than before, mostly at 385-390%.
Lots of various numbers in your post, but I guess that’s about twice what you saw with the default GC setting?
Yes, almost. With a 10GiB MKV file I get:
Default GC: ~15MiB/s, ~350% CPU
GOGC=400: ~25MiB/s, ~385% CPU
We’ve merged a change to set the default to GOGC=100 (previously 25).