Incremental sync - system only at 30% - how to max it out?

So I'm syncing my laptop to my Synology NAS, including, among other things, some 50 GB single files (virtual machines). Now the NAS CPU is only at 30% while it hashes the existing file (or whatever it is doing to find the missing pieces). The laptop is at 0%, and no data is being transferred. When I say CPU I really mean overall utilization; right now the NAS is at 27% CPU and 4% IO wait. So I'm wondering: how can I demand a bit more juice from the system? When an actual network sync is in progress, the system performs quite well, getting utilization to 80-90%. I tried changing hashers from 0 to 4, but it made no difference. Any ideas?

Sorry, I don't understand the issue. You're unhappy that it isn't using 100% of the CPU? Why do you think that would be better?

Well, I think that if it takes 3 hours to sync a 50 GB file at 30% CPU (that works out to less than 5 MB/s of effective throughput), shouldn't it speed up to, say, 1 hour at 100%?

Sync speed also depends on link speed, disk speed and the data set, not only the CPU. Also, the bottleneck can be on either side, not only the receiving one.

You can increase the number of pullers in the config and see if that helps. Also, make sure you don’t have rate limits.
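
For reference, a rough sketch of where those knobs live: in the versions I've used they are per-folder settings in config.xml (hashers is the one you already changed), but the exact element names and sensible values may differ on your version, so treat this purely as an illustration.

```xml
<!-- inside the relevant <folder> element of config.xml; the folder id and path are placeholders -->
<folder id="vms" path="/volume1/backup/vms">
    <pullers>32</pullers>  <!-- concurrent block requests -->
    <hashers>4</hashers>   <!-- concurrent hashing routines -->
</folder>
```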

Maybe I wasn't clear enough. This is only during the incremental sync of, say, a 50 GB file. 99% of the file is reused and only about 1% of changes are pulled from the other device. But during that process of reusing the existing file my NAS only gets to about 30%. It jumps between 25% CPU / 5% IO wait and 5% CPU / 25% IO wait, but never more; I never see, say, 25% CPU / 70% IO wait. During this process the other device sits idle, doing nothing, basically waiting until the 99% of blocks are reused, and then it sends the remaining 1%. So I'm wondering why the system is not using its whole potential, and whether there is a way I can squeeze a bit more out of it.

And you are saying syncing that 1% takes 3 hours?

You can post STTRACE=model logs; they will explain what it's doing.
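
If it helps, something like this will capture that log to a file (assuming you start Syncthing from a shell on the NAS; adjust the binary path for the Synology package):

```shell
# enable model-level debug output and keep it for posting
STTRACE=model ./syncthing > sync.txt 2>&1
```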

Yes, that's exactly what I'm saying. I'm running a Synology NAS 415+ (64-bit Intel Atom C2538, quad core 2.4 GHz, 2 GB RAM). I sync my virtual machines from the laptop to the NAS as a backup. When a VM is opened it only changes a little, so when it syncs back, the laptop doesn't really send much data; most of the time the NAS is doing its thing, hashing and checking whether existing blocks can be reused. That can also be seen in the GUI progress bar: nearly everything is reused and only a little is downloaded. But that process of checking blocks takes very long. Just checking the logs, it took 2:30 hours to sync a 50 GB file, keeping in mind that 99% of it was reused. And during this time of reusing blocks, the NAS is not using its full power for some reason. My understanding is that reusing a block is a CPU-expensive thing, so I'd expect the CPU to be maxed out during this process. Anyway, I will post logs, let's see where the problem is :)

Another interesting thing would be to run the profilers, which I think is also explained in -help.

Just a screenshot from the initial index exchange. Right now the scan of the changed VM file is about to finish, and then it will start syncing. Just to say, startup looks good to me, the system is reasonably utilized…

And now it’s syncing…

Nothing is being downloaded, it's just reusing existing blocks… and system usage looks like this. Log attached as well. I will keep logging overnight until the whole file is synced and can post the full log then. sync.txt (5.6 MB)

And also a screenshot from the GUI showing that blocks are just being reused:

And maybe one more thing… I'm not too sure how to monitor HDD usage; here is one more screenshot. I'm not sure whether ~120 IOPS is the maximum that standard SATA disks can do, and whether the system might in fact already be maxed out even though IO wait only shows 25%?
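
(A sketch of one way to watch the disks from the command line while the sync runs, assuming iostat from the sysstat package can be installed on DSM, which may not be the case out of the box:)

```shell
# per-device throughput, IOPS and %util, refreshed every 5 seconds
iostat -x 5
```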

So I think it’s because you are using inotify, and what’s happening is that the file is changing faster than we can sync, hence we are constantly out of sync.

Actually, you are right, it does seem to take a while to copy stuff. I guess it's worth raising a bug, but I suspect it will be specific to the hardware.

I'm not sure about this. I'm running Syncthing installed as a Synology package. I haven't installed inotify, unless it is somehow part of the package deal? :slight_smile:

[quote="AudriusButkevicius, post:13, topic:6722"]
it's worth raising a bug
[/quote]

If there are any more tests I can do, I'm more than happy to do so.

I don't think there is much to test. The right thing to do is probably to benchmark the copy code and figure out where most of the time is spent.
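
In the meantime, here is a tiny standalone sketch (explicitly not Syncthing's actual copy code; the 128 KiB block size and the sequential access pattern are assumptions) that copies a file block by block, just to get a rough upper bound on how fast the NAS handles this kind of read-then-write pattern:

```go
// copybench: copy a file 128 KiB at a time into a temp file and report throughput.
// This only approximates the block-reuse path; real reuse may also seek and hash.
package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

const blockSize = 128 << 10 // assumed block size

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: copybench <existing-file>")
		return
	}
	src, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer src.Close()

	dst, err := os.CreateTemp(".", "copybench-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(dst.Name()) // clean up the temp copy afterwards
	defer dst.Close()

	buf := make([]byte, blockSize)
	var copied int64
	start := time.Now()
	for {
		n, rerr := src.Read(buf)
		if n > 0 {
			if _, werr := dst.Write(buf[:n]); werr != nil {
				panic(werr)
			}
			copied += int64(n)
		}
		if rerr == io.EOF {
			break
		}
		if rerr != nil {
			panic(rerr)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("copied %d MiB in %s (%.1f MiB/s)\n",
		copied>>20, elapsed, float64(copied)/(1<<20)/elapsed.Seconds())
}
```

If even this plain copy is slow on the box, the reuse phase can't be faster than that.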

Actually, you could set Syncthing up in such a way that the first thing it does after starting is deal with this reuse scenario you are talking about, then run it with STCPUPROFILE=1, and provide the exact version and architecture you are using, together with the pprof file created in Syncthing's home dir, for me to look at.
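
Roughly like this, assuming you start it from a shell in the package's directory (exact paths on DSM will differ):

```shell
# start with CPU profiling enabled, let it work through the reuse-heavy sync,
# then stop it cleanly from the GUI
STCPUPROFILE=1 ./syncthing
```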

@AudriusButkevicius, just so I'm clear on exactly what to do, please confirm whether this is the correct setup.

  1. Stop Syncthing on the NAS.
  2. Laptop: change only the single 50 GB file by opening/closing the virtual machine.
  3. Laptop: rescan the folder containing this file and wait until it is completely re-indexed.
  4. NAS: run Syncthing as STCPUPROFILE=1 ./syncthing.
  5. Wait for the sync to complete (index exchange + sync = approx. 3 hours).
  6. Stop the NAS Syncthing correctly from the menu and post the log file.

Correct?

Sounds correct. I am not sure where Syncthing puts the profile; it might be in the cwd or in its own home dir.
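
A quick way to check both candidate locations once it has been stopped (the config path below is the usual Linux default and just a guess for the Synology package):

```shell
# look for the profile in the working directory and in the default config dir
find . ~/.config/syncthing -maxdepth 1 -name 'cpu-*.pprof' 2>/dev/null
```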

So here it comes :) cpu-31520.pprof.tar.bz2 (949.2 KB)

Still need the exact version, OS, arch and so on.
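
For completeness, the quickest way to collect those on the NAS is probably something like this (flag spelling may vary slightly between versions):

```shell
./syncthing -version   # prints the Syncthing version and the OS/arch it was built for
uname -a               # kernel and machine details of the NAS itself
```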