v2.0.9 to v2.0.10 upgrade fails with errors

A problem with storage issues is that they’re often subtle and gradual. A while back I had a drive that seemed to be working just fine, but it wasn’t until one file failed checksum verification that I was prompted to check every file – 30,000+ of them, and ~30% were corrupted, all without any filesystem errors. The corruption was caused by media errors that didn’t trigger any alerts until I ran a bad block scan that put high I/O load on the drive.
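A periodic checksum sweep like the one described can be sketched with standard tools. A minimal sketch, with hypothetical paths (`/tmp/demo` stands in for the real data directory):

```shell
# Build a checksum manifest once, then re-verify it later to catch
# silent corruption that the filesystem never reports.
rm -rf /tmp/demo && mkdir -p /tmp/demo
echo "important data" > /tmp/demo/file.txt

# Record a checksum for every file under the directory.
(cd /tmp/demo && find . -type f ! -name manifest.sha256 -exec sha256sum {} + > manifest.sha256)

# Later: verify. Any corrupted file is reported as FAILED.
(cd /tmp/demo && sha256sum -c manifest.sha256)
```

Re-running the verify step on a schedule is what catches gradual media failure early.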

Syncthing 1.x uses LevelDB, which doesn’t have the same level of integrity checking as SQLite, so corruption can go unnoticed.
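For reference, SQLite can verify its own structures on demand. A minimal sketch, assuming the `sqlite3` CLI is installed (the throwaway path is illustrative; on a real Syncthing v2 install you would point it at the database file under the index directory instead):

```shell
# Create a throwaway SQLite database, then ask SQLite to verify its
# internal structures with PRAGMA integrity_check.
db=/tmp/demo.db
rm -f "$db"
sqlite3 "$db" "CREATE TABLE t(x INTEGER); INSERT INTO t VALUES(1);"
sqlite3 "$db" "PRAGMA integrity_check;"   # prints "ok" on a healthy database
```

On a corrupted file, the same pragma prints a list of problems instead of "ok", which is exactly the kind of early warning LevelDB didn’t give.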

The quality and types of S.M.A.R.T. metrics vary a lot between drive models, so a drive that’s still failing due to media errors can get an overall “Pass”. It’d be helpful to see the results.

Clarity on the pool and RAID configuration would also be very helpful, e.g. is it pure Btrfs mirroring between the pair of NVMe drives, or an Unraid array with Btrfs on top?

What does the following command say?

btrfs filesystem show

Googling “unraid sqlite corruption” suggests this is a common problem with that combination. I have a feeling about which side of it the problem is likely on, which would give me pause when considering some of the Unraid filesystem solutions.


SMART Test Results

Drive 1
Self-test status: No self-test in progress
Num  Test_Description  Status                       Power_on_Hours  Failing_LBA  NSID Seg SCT Code
 0   Extended          Completed without error               25024            -     -   -   -    -
 1   Short             Completed without error               25023            -     -   -   -    -
Drive 2
Self-test status: No self-test in progress
Num  Test_Description  Status                       Power_on_Hours  Failing_LBA  NSID Seg SCT Code
 0   Extended          Completed without error               24978            -     -   -   -    -
 1   Short             Completed without error               24977            -     -   -   -    -

Here is the result:

root@unraid:~# btrfs filesystem show
Label: none  uuid: <id>
        Total devices 2 FS bytes used 280.07GiB
        devid    1 size 465.76GiB used 297.03GiB path /dev/nvme0n1p1
        devid    2 size 465.76GiB used 297.03GiB path /dev/nvme1n1p1

Here is the status of the manual scrub I ran:

root@unraid:~# sudo btrfs scrub status /mnt/cache
UUID:             <id>
Scrub started:    Sun Oct 19 07:42:42 2025
Status:           finished
Duration:         0:05:12
Total to scrub:   560.16GiB
Rate:             1.79GiB/s
Error summary:    no errors found

Thank you for the directions. Hopefully we can figure out what exactly is happening.

Thanks, but the table of metrics right above the self-test status is much more useful info.

Okay, so a plain Btrfs mirror (RAID 1) between the pair of NVMe…

… and the sizes shown between the two btrfs outputs line up.

Btrfs says that every block written matches its respective checksum, which helps rule out media errors.

One potential issue is that you’re using Unraid’s temporary cache volume for Syncthing. Is that also where you’re keeping Syncthing’s configuration and database?

If it is, it’s probably not good because Unraid has a background process that moves files from /mnt/cache to the main storage array. While Syncthing is chugging along, SQLite creates temporary files and write-ahead logs. Unraid’s cache cleaner shouldn’t move open files, but there’s always a chance that a file is closed just long enough for the cleaner to grab it.

You didn’t mention anything else about your NAS hardware, so another good thing to do is run a stress test on the RAM (especially if it isn’t ECC).


That sounds terrifying. Absolutely do not put the database on some sort of fused storage that moves files between disks and whatnot.


Why not?

Because it won’t work and it’ll trash your database.


I wanted to report back some additional findings and more context that I’ve learned about unRaid. Hopefully it helps my situation and others.


First, regarding the mover: My cache drive does not use the Mover (as unRaid calls it). The Mover moves files from the cache to the array. All of my application data (appdata) lives on the cache drive and stays there.

For anyone using unRaid: you do not want your appdata to live on the array. It should be cache-only and not use the Mover.

Second, regarding the “filesystem shenanigans”: @gadget mentioned this above regarding how unRaid’s FUSE system works.

I didn’t realize that you can just switch your container’s mappings from /mnt/user/ to /mnt/cache/. It will bypass the additional FUSE mapping and reference the BTRFS pool directly.
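As an illustration of what that remapping might look like (the image name and paths are examples, not a verified Unraid template), in a compose-style definition the change amounts to swapping the host side of the volume mapping:

```yaml
# Before: the path goes through Unraid's FUSE-backed user share
#   - /mnt/user/appdata/syncthing:/config
# After: the path references the Btrfs cache pool directly
services:
  syncthing:
    image: lscr.io/linuxserver/syncthing   # example image
    volumes:
      - /mnt/cache/appdata/syncthing:/config
```

The container sees the same files either way; only the host-side path (and thus the FUSE layer) changes.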

I will try this and see what happens.


Any news on the issue?

I have been using Linuxserver’s Syncthing docker on Unraid until it silently stopped working 2 months ago, with the WebUI no longer being available. I am considering simply uninstalling it and setting it up again, maybe with binhex’s Syncthing image instead, which seems less used (I’d rather not manage original images manually) – but from the previous posts, it seems that this might not make a difference…

The container’s data is stored at “/mnt/user/appdata” – a user share on a ZFS pool (not Btrfs in this case) of mirrored NVMe drives, named “Cache” (as is Unraid’s default). There is no secondary storage, and consequently no Mover (only a ZFS snapshot and a backup task; no reversions so far).

Syncthing has been a wonderful and reliable solution to my use cases so far :slight_smile: Let me know if I (with rather basic skills) can provide any information that helps identify the culprit.

I ended up stopping the image, blowing away the database files, and updating it.

It automatically starts after the update, and recreates the database, which takes forever, but since then, it’s worked fine on my end.

My app data is on cache, which is a Btrfs pool of SATA SSDs, and I personally have never seen, with any version, any sort of corruption of the kind some folks seem to think we should be having on Unraid.

From my end, the main problems I’ve had were in the migration to the new database schema. On two occasions, I had a similar experience: the web UI not starting, going into the logs and seeing massive errors, eventually giving up and blowing away the database and rebuilding it – and each time, it worked fine… These days there haven’t been any super significant updates to Syncthing, so it seems stable.

BTW I’m using the regular syncthing, not the binhex version.


Thanks a lot! I will stop the container, locate the SQLite files (in /mnt/appdata/syncthing, I suppose), delete them, check for updates, restart - and trust that a rebuild solves it.

The database is at

/mnt/user/appdata/syncthing/index-v2

on my system. Anything matching index* can be deleted – if there are others there, they are from earlier versions of Syncthing, and since you want to rebuild the indexes, you presumably don’t want any of the others anymore either.

YMMV, don’t blame me if something goes wrong, but this worked for me. Stop the container, delete the index directories, and update; that will start the container again. Confirm in the log that it’s running, and then you can see the web UI again. It’ll take a while to build the indexes from scratch… I have a lot of data, many TB, so it was multiple days…
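The delete step can be sketched like this – the paths below are stand-ins, not the real share, and on a real system the container must be stopped first:

```shell
# Stand-in for /mnt/user/appdata/syncthing on a real system.
appdata=/tmp/appdata/syncthing
rm -rf "$appdata"
mkdir -p "$appdata/index-v2" "$appdata/index-v0.14.0.db"   # simulate new and old index dirs
touch "$appdata/config.xml"                                # configuration must survive

rm -rf "$appdata"/index*   # removes every index* directory, leaves config.xml alone
ls "$appdata"              # only config.xml remains
```

The glob catches both the current index-v2 directory and any leftovers from earlier versions, while the config file stays in place.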


Thank you - I also figured this out and just deactivated (renamed) the index-v2 directory, started the container, the rescan started, and all went fine. Syncthing is up and running again :slight_smile:

Now let’s hope that this db mismatch was a singularity in time :smile:

Thank you for all the kind support!

I can confirm that mapping the volume for Syncthing’s config to Unraid’s cache drive directly, using /mnt/cache/ instead of /mnt/user/, is working.

I was able to upgrade from v1 to v2 and it has been running for 3-4 days now. Previously, the database usually became corrupt within 1-2 days.

Happy Thanksgiving everyone!

Yes, that is what happened to me, too.

Should “/mnt/user/” not work? Configuring “/mnt/cache” sounds unusual…

Configuring “/mnt/cache” did not solve the problem for long. Same problem again – how do I prevent the database from corruption for good?

So, just a data point, but I personally have always run Syncthing on Unraid using just regular /mnt/user mount points.

My config and database are in /mnt/user/appdata which is on cache (btrfs)

All the problems I had involving the database were connected to the specific upgrades I mentioned. To summarize – and this is just a wild guess – one of the updates forced a rework of the database that in my case failed due to lack of memory. Another update seemed to expect some sort of database rework to have occurred, and yet in my case it either hadn’t been reworked properly or the rework was somehow skipped. In each case, I stopped the docker, blew away the index, started the docker, waited a couple of days – I’m synchronizing something like 10 TB, perhaps a bit more – and then everything was fine after that.

I believe you have some other issue, probably a bad drive, or bad cable, or failing power supply. Those are the most likely candidates. Maybe bad RAM.

I’d start out by powering down, and reseating every cable and the RAM. I suppose you could look at the SMART data for your drives, too.

Oh, I suppose overheating is another possibility.

BTW I’m running the linuxserver Syncthing, no modifications other than adding paths.

In short, I personally don’t believe there’s any problem running Syncthing on Unraid with the application data, including database, on cache with btrfs file system. Never had any problems until the 2.0.x series with the database schema changes, and other than having to rebuild the index twice, I’ve had no issues.


Thank you for all the hints. So far, my Unraid NAS was running pretty smoothly, but you never know…

But I think I found the culprit (otherwise, I might try your suggestions, although that looks like major maintenance):

One Android device was still running the original Android app (“syncthing” mobile, not “syncthing-fork”). It had somehow slipped my attention that the app had not been updated for a long time, and that the project had been abandoned and superseded by “syncthing-fork”. I noticed because I integrated another mobile device into my setup, and there was only syncthing-fork. It took me a while to research and conclude that, on the older device too, I needed to delete the old syncthing and install syncthing-fork instead.

This was clearly my neglect – but if my assumption is correct and the desktop sync device is self-corrupting its DB when syncing with a deprecated Android app on the other end, this would be a critical error that should be handled safely (e.g. refuse to sync if the remote device version is too old / not flagged as using a compatible db schema).

I will observe whether the deprecated app was the culprit (and then file a GitHub issue for this).

I found this thread while researching the malformed db disk image others have reported using Unraid. In my case, nothing I tried would restart the service. I eventually noticed the Docker configuration in Unraid was assigning ‘appdata’ to ‘/mnt/user/appdata/cache/’. When I pointed it to ‘/mnt/user/appdata/syncthing/’ everything spun up as expected and the db started migrating.

I have no idea if this was just me or not, but if that info can help someone else, it was worth posting.

I have recently started having this issue as well. At first it was unclear why, and while I’m still investigating, it seems that the Appdata Backup plugin by Robin Kluth for Unraid may be what is causing it. I noticed that every malformed database coincided with the time Appdata Backup had its scheduled run (in the middle of the night). The Syncthing container would fail to restart properly after being backed up, reporting a malformed database. This is despite the plugin stopping the container, backing up, and then restarting it afterwards.

For now I have excluded Syncthing from Appdata Backup to see if that fixes the issue. If it turns out to be the root cause, I will try to just exclude the database/index folder from the backup process. I suspect a lot of Unraid users have this plugin installed, so I thought it relevant to post this here. Has anyone else noticed this issue being caused by the Appdata Backup plugin?