I set up Syncthing a few days ago to do a one-way synchronization from a Windows 10 folder to an Android 10 device (internal memory). The intention is to update about 5 GB of data on the Android device regularly (about twice a month). This is done by deleting all files in the Syncthing folder on the Windows 10 PC and copying the new data there, which is then synchronized to the Android device, replacing the old data. This worked very well the first time, with a transfer rate of 5-6 MB/s. But now, when running it again, the transfer rate is usually between 10 and 200 kB/s, and sometimes there is no transfer at all. Only for a few minutes did I see a transfer of about 5 MB/s, which then dropped back to about 10 kB/s. Now there is no transfer at all most of the time, although there are still 33,000 files to be synced, and from time to time I see a transfer of about 400 kB/s for a few seconds. What could be going wrong? I don't think I changed anything since the initial run.
Hi,
Do you use 2.4 GHz or 5 GHz Wi-Fi? If my phone drops down to 2.4 GHz, the rate drops significantly. On 5 GHz I get between 20 and 50 MB/s out of my hardware, but the phone's battery drains insanely fast.
For what it’s worth, depending on timing this will result in two very different scenarios. In one case the deletes have time to happen on the destination before the copy happens, and then all your files need to be transferred again. In the other case the deletes don’t have time to propagate, and the destination just sees changed blocks instead. The result is much less transferred data (depending on how your data looks) and also a visibly lower transfer rate, since a lot of the work is done by reusing data already on the destination.
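If you want to lean on that block reuse, one option is to overwrite the old data in place and only remove what is gone afterwards, instead of wiping the folder first. A rough sketch of such a mirror step (just an illustration, not something from Syncthing; the paths are placeholders):

```go
package main

import (
	"io"
	"os"
	"path/filepath"
)

// mirror overwrites dst with the contents of src in place and afterwards
// removes files that no longer exist in src. Nothing gets deleted up front,
// so Syncthing can rescan the overwritten files and reuse unchanged blocks
// on the destination instead of re-downloading everything.
func mirror(src, dst string) error {
	// Pass 1: copy everything, overwriting existing files.
	err := filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		target := filepath.Join(dst, rel)
		if info.IsDir() {
			return os.MkdirAll(target, 0o755)
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(target) // truncates and rewrites in place
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
	if err != nil {
		return err
	}
	// Pass 2: remove files that disappeared from src. Directories (including
	// Syncthing's .stfolder marker) are left alone.
	return filepath.Walk(dst, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			if info.Name() == ".stfolder" {
				return filepath.SkipDir
			}
			return nil
		}
		rel, _ := filepath.Rel(dst, path)
		if _, err := os.Stat(filepath.Join(src, rel)); os.IsNotExist(err) {
			return os.Remove(path)
		}
		return nil
	})
}

func main() {
	// Placeholder paths; adjust to your actual data and Syncthing folder.
	if err := mirror(`D:\new-data`, `D:\SyncthingFolder`); err != nil {
		panic(err)
	}
}
```

Ideally you would even hold back the second pass until Syncthing has had a chance to rescan the overwritten files.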
As all of the files that have to be synced have been regenerated, i.e. all of them have changed, I see no advantage in just replacing them instead of first deleting everything and then adding the new files. Especially since some files no longer exist at all, this is easier to handle than overwriting the old versions and then searching for no-longer-needed files to delete.
In my experience it is usually the former, unfortunately. Unless we are talking about very small changes, what happens is that Syncthing is unable to scan the new files in time, and once it does, the files have already long been deleted on the destination. Then all of them need to be re-downloaded from the source again. This has been the case on my systems almost always when the number of files exceeds 1,000 or so, and that is on quite fast hardware. On slow hardware there is basically no hope here.
Because of that, I have personally been considering changing shortDelayMultiplicator from 6 to 30, and longDelayS to 300, which will ultimately delay deletions by 5 minutes. This way, Syncthing should at least have enough time to scan the newly replaced files before propagating deletions further. It would actually be really nice to have these configurable, but right now the only way to do this seems to be to modify the source code directly.
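For illustration, the change would be something like this (I am writing the declarations down from memory, so the exact file and surrounding code are an assumption; only the names and values come from what I described above):

```go
// Hypothetical reconstruction of the relevant constants in the Syncthing
// source; the file location and declaration style are assumptions.
const (
	shortDelayMultiplicator = 30  // default is 6
	longDelayS              = 300 // effectively delays deletions by ~5 minutes
)
```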
There is also non-trivial overhead per file. If you are downloading large files, the speed will be fast; for tons of small files, we spend 2-3 seconds fsyncing a file that was downloaded in 100 ms, so the rate becomes near zero.
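To get a feeling for how much of that time is the fsync on your particular storage, a small throwaway test like the following (plain Go, nothing Syncthing-specific; file count and size are arbitrary) writes a bunch of small files once with and once without a per-file sync:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// writeFiles writes n small files into dir and returns how long it took.
// With syncEach set, every file is flushed to stable storage individually,
// which is where the per-file overhead mentioned above comes from.
func writeFiles(dir string, n int, syncEach bool) time.Duration {
	data := make([]byte, 4<<10) // 4 KiB per file
	start := time.Now()
	for i := 0; i < n; i++ {
		f, err := os.Create(filepath.Join(dir, fmt.Sprintf("f%05d", i)))
		if err != nil {
			panic(err)
		}
		if _, err := f.Write(data); err != nil {
			panic(err)
		}
		if syncEach {
			if err := f.Sync(); err != nil {
				panic(err)
			}
		}
		f.Close()
	}
	return time.Since(start)
}

func main() {
	d1, _ := os.MkdirTemp("", "nosync")
	d2, _ := os.MkdirTemp("", "sync")
	defer os.RemoveAll(d1)
	defer os.RemoveAll(d2)

	fmt.Println("without fsync:", writeFiles(d1, 500, false))
	fmt.Println("with fsync:   ", writeFiles(d2, 500, true))
}
```

On slow or flash storage the synced run is usually dramatically slower, which lines up with the near-zero rates you see with many small files.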