About 180 folders and 180 files, about 30 MB each. 1 device connected.
Are you using the arm6 file? And Raspbian?
Tested pulling from my Pi: got an average of 5 Mbps (measured 161 s to sync 100 MiB). CPU profile: cpu-29075.pprof.tar.xz (21.8 KB). Edit: I'm using armv6 and Raspbian. Edit 2: is compression deactivated on both sides? I don't know what happens when only one side has it active and the other not. Does activated compression mean you compress what you send and still receive uncompressed if the other side has it deactivated?
Yes Raspbian, and the binary should be ARMv6 (not sure any more).
And right, I have compression disabled on both sides, that might actually be it (I mostly have pictures, music etc. so it probably wouldn't help much).
Are you using pulse version 0.10.1? I have compression disabled on both sides.
Alex and Nutomic, how are you measuring the speed? Pulse is telling me that I'm downloading at 5+ Mbps BUT the System Monitor says something else. Look at my screenshot… Strange…
I could not post until now; the forum said I should wait 18 hours before I could post again ?!?
Yeah v0.10.1.
I checked the speed from the web GUI, not sure why it's different. @calmh?
I checked the transfer time using a 100 MB file; the System Monitor is 100% correct…
System Monitor shows 964 kB/s (bytes), which is ~7.7 Mb/s (bits). Perhaps it's including network overhead, or something else is happening in parallel?
I mixed up the B (byte) and b (bit).
So I guess we can conclude that we all run at around 2-5 Mbps (0.25-0.625 MBps) on our RPi.
BTSync runs at 2-3 MBps (bytes), so the question still remains: why is Pulse so much slower?
Alex managed to make a profile, maybe you can see something in that file. We are using the same system and download at the same slow speed.
@Alex which version of syncthing is the profiling against?
Looks like v0.10.1/armv6 (or at least I'm guessing at that, and the profile seems to make sense when interpreted in that context). Nothing here stands out as abnormal:
(pprof) top10
Total: 4273 samples
520 12.2% 12.2% 534 12.5% code.google.com/p/snappy-go/snappy.Encode
359 8.4% 20.6% 359 8.4% bytes.Compare
291 6.8% 27.4% 294 6.9% crypto/cipher.(*gcm).mul
228 5.3% 32.7% 228 5.3% crypto/aes.encryptBlockGo
180 4.2% 36.9% 180 4.2% crypto/cipher.safeXORBytes
168 3.9% 40.9% 316 7.4% runtime.mallocgc
160 3.7% 44.6% 166 3.9% runtime.MSpan_Sweep
149 3.5% 48.1% 149 3.5% runtime.memmove
91 2.1% 50.2% 91 2.1% scanblock
73 1.7% 51.9% 73 1.7% hash/crc32.update
Number one is snappy, the compression for the on-disk database. Number two is a generic byte slice comparer, but looking closer at where it's called from in this profile points to database writes. Numbers 3-5 are the TLS encryption, numbers 6-9 are GC.
So, in order 1) database writes, 2) encryption, 3) GC.
The stuff that youāre syncing, is it lots and lots of small files?
Looking at it another way (where did the calls come from, basically), we see something different that points in the same direction:
(pprof) top10 -cum
Total: 4273 samples
1 0.0% 0.0% 3888 91.0% runtime.gosched0
0 0.0% 0.0% 1351 31.6% github.com/syncthing/syncthing/internal/protocol.(*rawConnection).readerLoop
0 0.0% 0.0% 1055 24.7% github.com/syndtr/goleveldb/leveldb.(*DB).compactionTransact
0 0.0% 0.0% 999 23.4% github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction
0 0.0% 0.0% 999 23.4% github.com/syndtr/goleveldb/leveldb.(*DB).tableAutoCompaction
0 0.0% 0.0% 999 23.4% github.com/syndtr/goleveldb/leveldb.(*DB).tableCompaction
42   1.0%   1.0%      999  23.4% github.com/syndtr/goleveldb/leveldb.func·018
8 0.2% 1.2% 907 21.2% github.com/syndtr/goleveldb/leveldb.(*DB).get
1 0.0% 1.2% 812 19.0% github.com/syndtr/goleveldb/leveldb.(*version).get
9 0.2% 1.4% 809 18.9% github.com/syndtr/goleveldb/leveldb.(*version).walkOverlapping
Gosched0 is the parent of everything, so we can disregard that. Then 30% of the time is spent in readerLoop and the stuff that it calls. Looking closer at that we have:
The profile continues down to database writes. So, incoming index updates, which end up as database updates, which are expensive. Lots of files being synced?
Yeah, @calmh is correct about the version, and it was one file with ~100 MB; then I killed it again. I could run the profiling a bit longer if you are interested.
Well. The profile is probably showing the initial index exchange then, not the actual syncing. Do you see syncthing using 100% CPU during the actual syncing stage?
Yeah, it's usually at 100% CPU when syncing. So I should do a profile during normal operation, not just at startup? Let the 100 MB sync and then exit again?
Yes, you should definitely profile the transfer process, because that's where the issue lies. Only by profiling during the transfer can the main causes of CPU usage at that time be found, and only then can they hopefully be reduced and the transfer speeds increased.
Another possibility I see is that BTSync may split up single files into several parts and transfer them from different hosts. I'm not sure if that could be related, but with 100% CPU, I doubt it.
I tested BTSync between my RPi and my laptop, so the better transfer speed is not because of more hosts.
This is exactly what syncthing/pulse does.
As I said, it's best you provide some profiling information during the transfer, to be able to pinpoint why it is being slow. The CPU use implies there is a bottleneck in TLS or in hashing (which shouldn't be happening on download).
Yes, please profile the transfer process. For some reason profiling is not working on my rpi.
I get a weird result: only a web interface and one folder to sync (the device is connected to WiFi, but isn't seen as connected), but the CPU is still at 80-100%. Even at startup it takes a lot of time to bring up the web interface; then it seems fast to process keys and such.
Doing a profile results in an empty file; its permissions are -rw-r--r--, but even forcing 777 while profiling does nothing.
Even disabling the web interface does not change the CPU usage.
The OS is Arch Linux, from the community repository… now that I look at it, it seems there is only i686 and x64…