I am scanning around 130TB of data, all the files are large (around 1-70GB each), and it’s been going for about 8 days now but still has 17 days left.
I am only getting 20-25MB/s reads from the drives. I have tried tweaking the pullers/hashers/etc. Processor usage is only 15% (Xeon E3 1270v2). Syncthing is running locally on the NAS in a Docker container (no network limits), and I can copy files from the drives at 160-180MB/s.
So why is it going so slow?
The log says:
[6XION] 11:52:49 INFO: Single thread SHA256 performance is 312 MB/s using minio/sha256-simd (211 MB/s using crypto/sha256).
[6XION] 11:52:50 INFO: Hashing performance with weak hash is 269.53 MB/s
[6XION] 11:52:50 INFO: Hashing performance without weak hash is 307.56 MB/s
[6XION] 11:52:51 INFO: Weak hash enabled, as it has an acceptable performance impact.
Is your syncthing database on a network share or some dodgy storage? Check IO usage, see if any drive is maxed out.
Database is on a practically empty 1TB 840 EVO SSD. I am showing only 9 KB/s writes to it currently.
I am using Seagate Archive drives for the actual data, which are prone to slow writes under certain situations but they should be fine for reads. Benchmark shows them going 150-220MB/s depending on where it reads from the drive.
I have seen it randomly go up to 160MB/s for a few files, but it’s rare, and then it quickly goes back down to ~20MB/s. I tried scanning a few files on a brand new 12TB Seagate Barracuda Pro, and it went only ~55MB/s.
I also tried disabling low priority mode, but saw no difference.
As I said, check iotop while syncthing is scanning to see if the drive is busy.
Speed does not reflect the queue length, as spinning disks have to seek around, so queue length/utilization is the metric you want to look at.
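If iotop is hard to get hold of, a rough look at queue depth and busy time is possible straight from /proc/diskstats. A sketch assuming Linux (the device-name patterns are illustrative; the field numbers come from the kernel's diskstats format):

```shell
# /proc/diskstats: field 3 is the device name, field 12 is I/Os currently
# in flight (queue depth), field 13 is total ms spent doing I/O since boot.
awk 'BEGIN { print "dev in-flight busy-ms" }
     $3 ~ /^(sd[a-z]+|vd[a-z]+|nvme[0-9]+n[0-9]+)$/ {
         printf "%s %s %s\n", $3, $12, $13
     }' /proc/diskstats
```

Sample it twice a second apart: if busy-ms grows by close to 1000 per second, the disk is saturated regardless of how low the MB/s figure looks.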
By running the command, given you’re on Linux and have it installed.
I am pretty sure there are plenty of guides on the internet showing how to check disk utilization/busyness.
No idea what I'm looking at here, but here are the results.
This doesn’t look like iotop, and the data is all jumbled, hard to read.
iostat -x 1
iostat can’t be installed on unRAID as far as I know. Here’s a better screenshot of iotop.
To me it looks like syncthing is trying to run 2 processes on the same drives at the same time. I am only scanning 2 shares right now, but I see 4 processes?
The other processes are a totally separate controller/drive running a torrent client. I tried stopping all of that and it made no difference in Syncthing's speed.
This is showing threads, not processes.
Syncthing has two processes, but one of them does nothing.
This shows just speed, not wait time or queue length, so it's not immensely useful.
You need to get something that shows those.
What does top show for iowait?
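For a quick look without an interactive top session, the same number can be read from /proc/stat (a sketch, assuming Linux: the sixth field on the aggregate "cpu" line is cumulative iowait, which top turns into its %wa figure):

```shell
# The aggregate "cpu" line in /proc/stat lists user, nice, system, idle,
# iowait, ... times in jiffies since boot; field 6 is iowait.
awk '/^cpu /{ printf "iowait jiffies since boot: %s\n", $6 }' /proc/stat
```

A steadily climbing iowait while the CPU sits at 15% would point at the disks, not the hashing, as the bottleneck.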
I managed to find a package that included iostat. I marked the 2 drives syncthing is currently reading from.
By default, on Linux, Syncthing will use as many hashing threads as there are CPU cores, so four is not unexpected. This may be causing unnecessary seeking on your hard disks; the disks are clearly doing what they can.
You can set the number of hashers to 1 to reduce the number of hashing threads to one per folder.
How many hashers are you running with? Ideal number in your case is probably 1 (or 2 if it’s a raid of 2 drives).
Does the speed change when you set that to 1?
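For reference, hashers is a per-folder setting in Syncthing's config.xml; a sketch with a placeholder folder id and path:

```xml
<folder id="data" label="Data" path="/mnt/user/data" type="sendreceive">
    <!-- One hasher thread per folder keeps each spinning disk reading
         sequentially instead of seeking between concurrent readers. -->
    <hashers>1</hashers>
</folder>
```

The same option is also exposed in the web GUI's advanced configuration.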
Also, it seems Syncthing's DB is on the same drive as the data, which is not ideal, as there seem to be writes happening; whether it's the DB or the log, I don't know.
I guess that explains a lot. I recently upgraded from 2 cores to 4 cores in an attempt to fix this, and it only made matters worse. It was set to the default (0). Problem solved! 120MB/s now.
The Syncthing database is on an SSD separate from the data. It might just look the same because that's how unRAID maps the Docker container.
Thanks so much, only 3 days now!
I know it's a little off-topic, but could you please share your user story/use case with us (in a separate thread)? Would be really nice to know how you use Syncthing in a 130TB environment.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.