Disk Usage

I have 2 drives on my main development PC: an SSD for the system (C:) and 1TB of old fashioned spinning rust for holding things like database backups that are big but don’t need performance (on D:). Syncthing is therefore mostly working on the D: drive. Backups happen overnight, so the first thing that happens when I switch the machine on in the morning is that there are a lot of big files to sync down. And SyncThing maxes out the disk access whilst doing so:

If I stop SyncThing, then disk access on D: goes back down to 0%, so it’s definitely SyncThing and not some other process.

I don’t believe that my internet connection is fast enough to saturate my hard drive bandwidth - even if it is an old and slow one.

This wouldn’t be an issue, except that games are installed on D:\ as well, so I can’t dick around with Hearthstone whilst waiting for code to compile. When SyncThing is doing its sync, Hearthstone won’t get past the login screen.

Obviously the easiest fix would be to move my Hearthstone install to C:, but if you want a solid reproducible example where SyncThing is being really inefficient with disk access, I have one for you.

would you mind posting a screenie of Resmon.exe (Resource Monitor)? Similar to this one: reinstalled the OS, best way to migrate Syncthing? But with the bottom panel (Storage) opened as well?

Will do. What I can tell you is that if I pause all the folders and then restart just one, it maxes out the disk for a bit, and then recovers in a sensible amount of time.

(all the folders at once can take hours to sort themselves out, doing just one is ~30 seconds).

Oh I see the other tab. Just a mo.

Big queue.

All folders are scanned at the same time and, as a consequence, they block each other I/O-wise. I see this problem during automatic upgrades on a small linux box.

It would be great if Syncthing only scans one folder at a time (as suggested here) during startup as non-SSD drives are a lot faster this way (a few minutes vs. hours)
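The one-folder-at-a-time idea can be sketched in a few lines. This is just an illustrative Python toy (the folder names and the semaphore approach are my own invention for the example), not how Syncthing's actual Go scanner works:

```python
import threading

# Serialise folder scans with a semaphore so only one folder
# hammers the disk at a time. Folder names are made up.
scan_lock = threading.Semaphore(1)  # allow a single concurrent scan

def scan_folder(name, results):
    with scan_lock:  # wait until no other scan is running
        # a real scan would walk the tree and hash changed files here
        results.append(name)

folders = ["Backups", "Photos", "Code"]
results = []
threads = [threading.Thread(target=scan_folder, args=(f, results)) for f in folders]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # all folders scanned, but never two at once
```

On a spinning disk this trades a bit of wall-clock parallelism for sequential access patterns, which is usually a big net win.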


I just stopped SqlServer to check whether that was exacerbating the problem and it doesn’t seem to make a difference:

Ah. That explains the issue then. +1 for “scan the folders one at a time and have the job done in minutes” rather than “scan all the folders at once and have them tripping over themselves for hours”.

while I agree with @uok, there might be more going on in this case than meets the eye. Those read & write speeds are wayyy too low. I would expect up to 5x faster. And the drive staying pegged at 100% is usually not normal either. 99.9% is ok, 100% is not good.

Have you monitored the disk’s health recently? Any S.M.A.R.T. info to be had?

It’s all green:

though I seem to recall that I had cause to look at the S.M.A.R.T. report in the last 6 months, so perhaps there was something else about it that looked dodgy…

Hard drives are cheap enough that it’s not worth wasting more time on this until I try swapping it out.

Plus this: https://hackernoon.com/applying-medical-statistics-to-the-backblaze-hard-drive-stats-36227cfd5372 which I read recently doesn’t dispose me towards trusting Seagate drives…

indeed, nothing wrong hardware-wise. If you want to dig deeper… I would:

Pause all the folders, and in the morning (when you say there’s the most work to be done by SyncThing), unpause a folder, wait until it is Up To Date, unpause another folder, etcetera. All the while keeping an eye on Resmon / Taskmgr.

I already know that that works: that’s how I have been getting it unstuck every day.

But Uok’s issue on top of an aging / failing HDD might mean that something that should be a minor problem has become a major one through the combination of hardware and software. I’ll swap the HDD and report back.

The only thing unusual in SMART is a single (1) reported command time-out. Attributes 197 & 198 are the most indicative ones re imminent failure, but they are 0. I wouldn’t expect the drive to fail suddenly.
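For anyone wanting to check this themselves, attributes 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable) can be pulled out of `smartctl -A`-style output. A hedged sketch; the sample text below is fabricated for illustration, and real smartctl output varies by drive and version:

```python
# Fabricated sample in the usual `smartctl -A` attribute-table shape.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       1
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

def raw_values(text, ids=(197, 198)):
    """Return {attribute_id: raw_value} for the requested attribute IDs."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit() and int(parts[0]) in ids:
            out[int(parts[0])] = int(parts[-1])  # RAW_VALUE is the last column
    return out

vals = raw_values(SAMPLE)
print(vals)  # {197: 0, 198: 0} -> no pending or uncorrectable sectors
```

Non-zero raw values on either attribute would be a reason to pull the drive sooner rather than later.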

Even though Seagate is :poop:


A scan of folders that are unchanged should not cause many issues.

It’s possible that syncthing is reconstructing files it’s supposed to download from other files (instead of hitting the network). That should be visible in the out of sync dialog showing where the data is coming from.

In this case it doesn’t look like a scan is causing this, but a sync (looking at the capture of the UI).

Here’s the Out Of Sync dialog:

Is it something to do with being half way through a very big file when paused?

As you can see, for the most part syncthing is avoiding downloading data and reconstructing the file locally, so it’s not hitting the network, hence maxing out on disk usage.

yep, it’s reusing (=copying) existing blocks and this causes further slowdown (on non-SSD drives) - especially if multiple folders are doing that. The hash speed on my linux box is 100MB/s or more, but simultaneous scans sometimes bring it down to single digits.
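The block-reuse idea described above can be illustrated with a toy: index the blocks you already have by hash, then only fetch the blocks whose hashes you can’t find locally. This is a simplified Python sketch (tiny block size, made-up data, hypothetical `fetch_remote` callback), not Syncthing’s real protocol:

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; Syncthing uses much larger blocks

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def hash_index(data):
    # map block hash -> block bytes for everything we already have locally
    return {hashlib.sha256(b).hexdigest(): b for b in blocks(data)}

def reconstruct(wanted_hashes, local_index, fetch_remote):
    # reuse local blocks where the hash matches; only download the rest
    out, downloaded = b"", 0
    for h in wanted_hashes:
        if h in local_index:
            out += local_index[h]   # local copy: disk I/O, no network
        else:
            out += fetch_remote(h)  # miss: network transfer
            downloaded += 1
    return out, downloaded

local = b"abcdefgh"   # data we already have on disk
target = b"abcdWXYZ"  # new version of the file
index = hash_index(local)
wanted = [hashlib.sha256(b).hexdigest() for b in blocks(target)]
remote = {hashlib.sha256(b).hexdigest(): b for b in blocks(target)}

rebuilt, fetched = reconstruct(wanted, index, lambda h: remote[h])
print(rebuilt == target, fetched)  # True 1 -- only one block over the network
```

The copying of reused blocks is exactly why the network stays quiet while the disk is pegged: every reused block is a read plus a write on the same spindle.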

Maybe the devs can implement a single-folder mode, where only one folder is scanned/synced at a time. It may be a while until everybody can afford 1TB+ SSDs :slight_smile:

There was an attempt by someone, if you feel like taking over, knock yourself out.