Syncthing Stuck at 100% CPU While Scanning/Preparing to Sync

Hi,

I had Syncthing disabled for a day while migrating my TrueNAS OS from a bare-metal host to a Proxmox VM. Once everything was set up and I re-enabled Syncthing, I noticed the TrueNAS VM’s CPU was pegged at 100% for an unusually long period.

Running `top` in the terminal showed Syncthing consuming 100% CPU. In the Syncthing GUI, one of my two shared folders was stuck in “scanning” and “preparing to sync” status. Expanding the folder revealed it was “out of sync” by about 500 MiB, processing very slowly.

I left it running overnight (8+ hours), but it was still hovering around 450 MiB out of sync with no meaningful progress and CPU still at 100%.

This is the first time I’ve encountered this behavior. The out-of-sync files are legitimate changes, but the scanning/sync preparation is taking far longer than I’ve ever seen before.

The good news: since this is a Proxmox VM, the 100% CPU usage isn’t overheating physical hardware. Still, I’d like to understand what’s causing the extreme slowness and high CPU load, and how to resolve or work around it.

Any insights or suggestions?

Thanks in advance.

Take a support bundle and we can figure out what it’s doing.

Creating a Support Bundle — Kastelo Docs documentation.

Skip the “Enable Debugging” step if using 2.0.0+.
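If the GUI is sluggish because of the CPU load, the bundle can also be fetched over the REST API. A sketch, assuming the default GUI address and port; substitute your own API key (found under Actions > Settings, or in `config.xml`):

```shell
# Fetch a support bundle via the REST API (address/port may differ on your setup)
curl -H "X-API-Key: YOUR_API_KEY" \
     -o support-bundle.zip \
     http://localhost:8384/rest/debug/support
```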

Just for the record, it is safe in terms of Syncthing (i.e. no-one can access your device using anything from the support bundle); however, the logs and the config do include device names, folder names, file paths, etc., so it depends on whether you consider those private or not.

Not really, unless you send the file via a private message, but then only that person will have access to it. In this specific case, you probably only need to upload the *.pprof files from the bundle, and those, as far as I’m aware, don’t contain any private information.
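If you want to check what the profiles contain before uploading, one way is to inspect them locally with `go tool pprof` (assuming Go is installed; the filename below is an example, as the actual names inside the bundle vary):

```shell
# List the bundle contents, then show the top CPU consumers in the profile
unzip -l support-bundle.zip
go tool pprof -top syncthing-cpu-profile.pprof
```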

Your build has build user root@buildkitsandbox (searching the forum for that turns up multiple reports of problems), and uses modernc-sqlite as its database engine (which is not as efficient as mattn-sqlite).

I switched to an official dnf repo install of Syncthing, and it’s running into the same issue.

The profile shows it scanning, spending most of the time grabbing items from the database, and also serving an API request from the GUI that involves a bigger database query. I suppose it’s possible the dnf build has the same issue, whatever build that might be.

Are there any solutions for my problem?

Is it the local Linux machine that’s having the problem or the NAS?

If the latter, then your comment about the dnf repo is somewhat confusing to me, as the implication (at least to me) is that you did this on the device with the problem. :sweat_smile:

Also, based on the filenames I’m seeing in the folder, and the folder label, this looks like a home folder that may contain the Syncthing database. That must be ignored, or it will most likely go into a loop: scan → update database → see the database change → scan again, and so on.
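A minimal `.stignore` sketch for that case (the paths below are examples of common locations for the Syncthing config/database on Linux; check where your installation actually keeps them):

```
// Ignore Syncthing's own config/database if it lives under this folder
// (paths are examples; adjust to your installation)
/.config/syncthing
/.local/state/syncthing
```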

If you’re seeing the same on 1.30 as on 2.0, it’s certainly not a database thing.


What item is out of sync?

When I click the “1 items” link, it shows nothing.

If you only sync these five folders and ignore the rest, and that rest comprises a lot of folders and files, then I think it would be more efficient to just add those five folders to Syncthing separately and not deal with ignore patterns at all. Ignored files are still included in the database (see https://forum.syncthing.net/t/solved-large-index-size-with-nothing-synced-but-a-lot-of-ignores/14284), and the larger the database, the slower accessing it becomes, which may have some impact here, especially considering that the hardware is weak.
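For reference, the keep-only-a-few-folders style of ignore file usually looks something like this (folder names below are placeholders). Everything matched by the final `*` is still tracked in the database even though it is never synced, which is why adding the folders separately can be lighter:

```
// Keep only these folders, ignore everything else (names are examples)
!/Documents
!/Pictures
*
```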

I considered that approach when I first set up Syncthing, but frankly it feels too messy organizationally for my liking.

Also, that still wouldn’t explain the root issue here. It worked fine for years; then I switched TrueNAS from a bare-metal host OS to a VM on Proxmox, and this suddenly started occurring. Not saying it has anything to do with that, but the timing is interesting.