Poor performance on big file

Hey everyone, to start off, I am really amazed by Syncthing. It does a great job and is easy to use. I use it to keep a big (around 20 GB, still growing) media collection synced (music, movies etc.) and to sync a VM image. I am a big fan of using Linux, but I have to use a VM for some proprietary Windows software. Since it’s quite a hassle to keep Windows updated and my apps configured, I would like to use the same image on both my desktop and my laptop. I hoped to be able to use Syncthing for that, but I ran into poor performance.

The first issue is the scan time. A scan of the VM image takes about 27 minutes. It’s a single file, exactly 75 GB in size. My desktop is an old workstation: 64 GB of RAM and 12 Xeon E5-2640 cores, and the CPU usage by Syncthing is very low. Why is it that slow? The disk the VM image sits on is used only for the image and reads at about 180 MB/s for big files like the image. Copying the file from that disk to a faster one takes only about 7 minutes. And the scan happens on a regular basis; of course I could widen the scan interval, but that takes away from the idea of easy syncing.

The second issue is the sync speed. I connect my laptop to the workstation directly via Gbit LAN. I have already verified that this connection reaches around 110 MB/s, and I disconnected the laptop’s WiFi to make sure the LAN is used. Still, with Syncthing I only get a few KB/s of throughput when syncing this file. Interestingly, the throughput goes up to my LAN’s maximum when syncing other folders, like my media folder. It is quite inconsistent there, but that may be caused by the overhead of many little files (like MP3s).

Is there a chance to get Syncthing working in a comfortable way for my purpose?

I use v0.14.18, Linux (64 bit) on Manjaro.

Best regards, wucke

Where is the syncthing database / your home folder located? Syncthing probably needs to access the DB “extensively” while scanning a 75 GiB file. What does the hashing speed test on Syncthing startup say?

For the sync speed: that’s probably because most of what Syncthing does here is check and see that a block is already there and doesn’t need to be transferred.

Also, in the advanced settings for that folder, try increasing the number of hashers to 64 and the number of pullers to 128 or even more.

I don’t think tweaking settings will help you much. There’s a couple of things going on here.

The hasher is parallel and concurrent for multiple files, but any given file is hashed on a single thread. With your six core / twelve thread CPU and the way Syncthing (and Windows) shows CPU usage, you should expect to see about 1/12 = 8.3% CPU usage while it’s doing this. 27 minutes is 75*1024/27/60 = 47 MB/s, which is a tad on the slow side. I would expect your CPU to do about 150 MB/s or so of hashing, so it’s a factor of three or so off. The rest is probably overhead, read latency, etc.; hard to say. We’re counting on the filesystem doing readahead for us, otherwise the cycle of read-hash-read-hash-etc. incurs a couple of milliseconds of latency for each block.
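
For intuition, here is a minimal, purely illustrative Go sketch of what that single-threaded per-file loop looks like (this is not the actual scanner code; the file name and the 128 KiB block size are just assumptions for the example):

// Hypothetical sketch of per-file block hashing: one file, one thread.
// Throughput is bounded by one core plus the read latency of each block.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"time"
)

const blockSize = 128 * 1024 // illustrative block size

func main() {
	f, err := os.Open("largefile") // illustrative file name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	buf := make([]byte, blockSize)
	var total int64
	start := time.Now()
	for {
		n, err := io.ReadFull(f, buf) // read one block...
		if n > 0 {
			_ = sha256.Sum256(buf[:n]) // ...then hash it; a real scanner would store the hash
			total += int64(n)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		} else if err != nil {
			panic(err)
		}
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("hashed %d MiB at %.1f MB/s\n", total>>20, float64(total)/1e6/secs)
}

Since each block is hashed right after it is read, reading and hashing of a single file don’t overlap unless the filesystem reads ahead for us.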

Syncing the file when it has changed goes like this, on the receiving side:

  • Create a temporary file
  • Read the previous version of the file and compute weak hashes of the blocks there.
  • For each block in the file that ought to be the same, copy it while also hashing it and verifying the hash. The weak hashes are used to find blocks in the old version of the file; otherwise a database lookup finds matching blocks in other files locally.
  • Pull the blocks that we didn’t find locally from some other device, hash them, and write them.
  • Rename the temporary to the final name.

During the copy phase you’ll see essentially no data flowing, just a few Kbps of index updates. The original file will be read twice: once for weak hashing, and once when copying. It’s too large to fit in the disk cache, so this will cause a lot of disk access. Copying the blocks within the same disk will also cause quite a lot of seeks and so on, so it’s not a super efficient thing to do for files as large as this.
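
To picture the copy phase, here is a much simplified, hypothetical Go sketch of the idea; the block size, the names and the pull callback are made up, adler32 stands in for the weak hash, and the real code uses a rolling weak hash so matches can be found at any offset, not only on block boundaries:

// Index the old file's blocks by a cheap "weak" hash, then reuse any block
// whose strong (SHA-256) hash verifies; pull the rest from another device.
// Not Syncthing's actual code; purely illustrative.
package main

import (
	"crypto/sha256"
	"fmt"
	"hash/adler32"
)

const blockSize = 128 * 1024

type block struct {
	weak   uint32   // cheap hash, finds candidate matches
	strong [32]byte // SHA-256, verifies the candidate
	data   []byte
}

// indexOldFile is the "read the previous version and weak-hash it" step.
func indexOldFile(old []byte) map[uint32][]block {
	idx := make(map[uint32][]block)
	for off := 0; off < len(old); off += blockSize {
		end := off + blockSize
		if end > len(old) {
			end = len(old)
		}
		b := block{
			weak:   adler32.Checksum(old[off:end]),
			strong: sha256.Sum256(old[off:end]),
			data:   old[off:end],
		}
		idx[b.weak] = append(idx[b.weak], b)
	}
	return idx
}

// reuseOrPull is the "copy locally if possible, otherwise pull" step.
func reuseOrPull(wantWeak uint32, wantStrong [32]byte, idx map[uint32][]block, pull func() []byte) []byte {
	for _, cand := range idx[wantWeak] {
		if cand.strong == wantStrong { // verify before reusing local data
			return cand.data
		}
	}
	return pull() // not found locally: fetch the block from another device
}

func main() {
	old := []byte("previous version of the file")
	idx := indexOldFile(old)
	// Pretend the new file's first block is unchanged: it is found and reused locally.
	got := reuseOrPull(adler32.Checksum(old), sha256.Sum256(old), idx, func() []byte { return nil })
	fmt.Printf("reused %d bytes without transferring them\n", len(got))
}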

TL;DR: Large files can be painful.

To find out what it’s doing, if you think the slowness is CPU or memory allocation related, grab a profile. I’ll help interpret it.


First of all: thanks for the fast and detailed answer. This is great! @ wweich (I cannot mention more than 2 people per post :slight_smile: ) The Syncthing DB should be located on my system SSD, which is a 120 GB Samsung EVO 840; I don’t think that is the bottleneck. I should have mentioned this… How do I have to start Syncthing to see that hashing speed test? It normally runs as a systemd service at boot.

@AudriusButkevicius I did so. Sadly it did not have any impact on performance or CPU usage; system load is still below 10%. Probably because only one thread is used per file?

@calmh Thanks, your answer is really detailed. If I got it right, performance would be a lot better if I had the file split into a couple of hundred files, right? If so, would it be a solution to divide big files into chunks which are treated like single files by Syncthing? I will have a look at whether there is a way of splitting the file and merging it again for VM access via pipelines, but that’s more of a workaround. EDIT: Syncthing’s internal CPU usage is jumping between 8.3% and 8.4%, so you were exactly right with your assumption.

PS: I just saw that the percentage of the file scan jumps, e.g. between 23% and 29%, at about 1 Hz. What does that mean?

Best regards, wucke

It’s printed to stdout, so the systemd logs should have it. Otherwise, Actions -> Settings, Preview next to Usage Reporting. It’s the field called sha256Perf, unit is MB/s.

Yes, you would gain concurrency by doing that. Whether it would be significantly faster or not I don’t know; there’s still 75 gigs of data to read and hash multiple times and write once, and we’d lose some performance to seeking (at least on spinning disks). Doing it inside Syncthing would be Complicated for little gain.
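
To illustrate the concurrency gain, a hypothetical Go sketch that hashes several chunk files in parallel; the chunk names are made up and this is not how Syncthing itself works, it just shows why more files let more cores participate:

// One goroutine per chunk file, analogous to one hasher per file:
// with several chunks, several cores can hash at the same time.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"sync"
)

func main() {
	// Hypothetical chunk files produced by splitting the big image.
	chunks := []string{"image.part0", "image.part1", "image.part2", "image.part3"}

	sums := make([][32]byte, len(chunks))
	var wg sync.WaitGroup
	for i, name := range chunks {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			f, err := os.Open(name)
			if err != nil {
				return // a real implementation would report the error
			}
			defer f.Close()
			h := sha256.New()
			if _, err := io.Copy(h, f); err != nil { // stream the chunk through the hash
				return
			}
			copy(sums[i][:], h.Sum(nil))
		}(i, name)
	}
	wg.Wait()

	for i, s := range sums {
		fmt.Printf("%s: %x\n", chunks[i], s)
	}
}

All the chunks still come off the same disk, though, so on a spinning disk the seeks between them would eat part of the gain.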

If you mean 23% -> 29% -> 23%, that sounds very odd and interesting. Broken, even. If you just mean that it updates now and then with a whole chunk of percentage points at a time, that’s because the updates are sent periodically.

@calmh

If you mean 23% -> 29% -> 23%, that sounds very odd and interesting. Broken, even. If you just mean that it updates now and then with a whole chunk of percentage points at a time, that’s because the updates are sent periodically.

To clarify, it says Scanning while I write this, and it jumps forward and backward about once per second for each phase. So one second it says 81%, the next 90%, then 82%, then 90%, while the numbers slowly increase, since the whole scan takes 27 minutes.

That’s unexpected, and intuitively it sounds to me like the file is being scanned twice in parallel, which is “impossible” but would, on the other hand, explain the unexpected performance problem. Are you using syncthing-inotify? Can you stop it from systemd and run it manually with the -verbose flag? That will give you the corresponding status updates on the console and might clarify what’s going on…

Scan happening twice somehow?


@AudriusButkevicius a folder restart during scanning might cause this perhaps? We stop the scan, but only in between files if I remember correctly.

(This would be a good use for the new Context thing which could be passed all the way out to the scanner and be used for a quick abort.)

Potentially.

I have some information for you. The jumping stopped when the upper percentage reached 100%; now the lower one is slowly increasing to 100% too. Next, my hashing speed, @wweich:

INFO: Single thread hash performance is 155 MB/s using minio/sha256-simd (115 MB/s using crypto/sha256).

And here is the CPU profile, recorded while the strange percentage-hopping thing happened: https://air.wucke13.de/f/fd2cd45260/

EDIT: No, I do not use syncthing-inotify. I use the syncthing-bin package from the AUR, which should be your latest stable release.

Just for reference. I just tested it with my VM image.

23 GiB vdi file, Core i5-4590S (“sha256Perf”: 328.44), VM and ST DB on System SSD (960 EVO NVMe). Scan time: 7:20 (no inotify, manually initiated scan)

75GiB in 27 min would be (more or less) my speed as well.


Thanks for that reference. Your hashing speed is a lot better, probably because your single-core performance is a lot better. That’s why I need to have 12 cores :smiley:

The log output is even better than the report preview:

INFO: Single thread hash performance is 382 MB/s using minio/sha256-simd (367 MB/s using crypto/sha256).

And still, my scan isn’t really faster than yours.

@wucke13 so now it’s idle? If you touch the file to cause it to be rescanned, while not doing anything else to cause a config change etc., do you see the same effect?

I just booted the VM and added a text file on the desktop. As far as I can see, nothing happened; the last scan was before my change. Can I upload the verbose log without having to think about privacy issues, or is there something I should mask first?

EDIT: It is not scanning, but my laptop is getting the sync at roughly 1 kbit/s up and down.

There is nothing more sensitive than your IPs, folder names, and device IDs (which may reveal your IPs).

I’m testing it now as well just to compare. I see that the MB/s rate in the verbose log is broken, but the timing will be correct anyway at least…

Seriously though, having Syncthing continuously scanning and syncing a huge VM image while it’s running has to be enormously energy consuming and generally painful. Don’t do that :no_good:

[KK6N2] 14:39:44 INFO: Single thread hash performance is 340 MB/s using crypto/sha256 (338 MB/s using minio/sha256-simd).
[KK6N2] 14:39:45 VERBOSE: Startup complete
[KK6N2] 14:39:46 VERBOSE: Scanning folder "default", 0% done (0.0 MB/s)
...
[KK6N2] 14:50:22 VERBOSE: Scanning folder "default", 99% done (0.0 MB/s)
[KK6N2] 14:50:25 INFO: Completed initial scan (rw) of "Default Folder" (default)
[KK6N2] 14:50:25 VERBOSE: Folder "default" is now idle

jb@unu:~ $ ls -l Sync/
-rw-r--r--  1 jb  staff  32768000000 Jan  3 14:39 largefile

32768/((50*60+25)-(39*60+46)) = 51.28 MB/s so about the same as you. MacBook, SSD.

Okay. Here is my log: https://air.wucke13.de/f/e3d3bed0e1/ . Interestingly, the sync to the laptop seemed to be stuck at 42%, with almost no data flow. Considering the delta sync, I would expect the sync (without the long scan time) to be no slower than copying the file via sftp over the same connection from one machine to the other, but it is indeed slower. What additional information, from which situations, can I contribute? I am willing to support you with all the information you could need.

PS: You said

Doing it inside Syncthing would be Complicated for little gain.

While I understand that it indeed is not trivial to do so, I am sure that the gain would be high. I use an OSS cloud service (Seafile…) for my documents, but since my VPS has just a fraction of my local storage, Syncthing is a perfect solution for keeping big folders in sync that I do not need to access from everywhere via a web interface. So good performance on big files and big folders is really what this use case needs (and big folders with loads of little files already work extremely well). I think I’m not the only one who wants to use Syncthing to sync folders and files that exceed the storage quotas of other services (like Mega, Dropbox and so on).

Best regards, wucke

The receiving side does a bunch of work, but yeah, I’d hope it wouldn’t be slower than transferring the whole file. But there’s something interesting going on here. This is for your initial numbers:

This is my (now fixed) rate output:

[VCDL5] 15:02:19 VERBOSE: Scanning folder "default", 3% done (47.9 MiB/s)
[VCDL5] 15:02:21 VERBOSE: Scanning folder "default", 3% done (47.8 MiB/s)

With a much faster (per thread) CPU and SSD, I should not be getting the same rate. Something else is bottlenecking. I’ll look into it.
