v2.0.9 to v2.0.10 upgrade fails with errors

Hi, I’m on the linuxserver Docker on Unraid.

I rebuilt my Docker container today to v2.0.10 (it had been running v2.0.9 just fine), and when I restarted, the WebUI never came up.

The logs have these errors over and over:

```
2025-09-25 14:11:37 ERR Error opening database (error="openbase (PRAGMA optimize = 0x10002): database disk image is malformed (11)" log.pkg=main)
2025-09-25 14:11:38 INF syncthing v2.0.10 "Hafnium Hornet" (go1.24.7 linux-amd64) root@buildkitsandbox 2025-09-24 09:18:20 UTC [modernc-sqlite, noupgrade] (log.pkg=main)
2025-09-25 14:11:38 ERR Error opening database (error="openbase (PRAGMA optimize = 0x10002): database disk image is malformed (11)" log.pkg=main)
2025-09-25 14:11:39 INF syncthing v2.0.10 "Hafnium Hornet" (go1.24.7 linux-amd64) root@buildkitsandbox 2025-09-24 09:18:20 UTC [modernc-sqlite, noupgrade] (log.pkg=main)
```

In the past when I've updated Syncthing, the start of the log on the new version's first run shows some language about migration – presumably because the database layout has changed somewhat. I didn't see anything like that on this particular run.

Please help. I had some trouble in an earlier migration – that version was borked by memory-management problems, later fixed – and because of that I had to blow away my v2 database, restart the system, and let it rebuild. I have enough data that the process takes over 36 hours. Not fun. I'm hoping you guys simply forgot to put some flags in there to update the database properly.

Oh, and I've tried restarting the server as well; that didn't help, and I didn't expect it would.

Thanks in advance.

Just a side comment, but you've mentioned scanning taking a lot of time, so you may want to know that the third-party build of Syncthing you're using is about 20% slower than the official builds. It may also use more memory.

Yep. That was discussed in the past, when I had the blown db migration due to out of memory errors, and I believe that someone filed a request with the folks over there to use a different C compiler.

That said, speed isn't the issue at the moment – or more accurately, I don't want it to be the issue. I've been following the dot releases from 2.0.4 onwards, and each has migrated just fine, but 2.0.9 to 2.0.10 just loops on these errors forever.

That doesn’t look recoverable :frowning:

Ugh. I’ve used Syncthing for many years now. It worked perfectly and I never thought about it, honestly, until the 2.0.x upgrades, which for me have been a nightmare of lost productivity.

I guess I’ll blow away the database and let the system regenerate – it takes 36+ hours because of the amount of data I have, and the disk activity is so intense the server can’t be used for any other purpose.
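(For anyone finding this thread later, the reset is roughly the following – a sketch only; the container name and paths are from my Unraid setup, so adjust to yours.)

```
# Stop the container so nothing holds the database open.
docker stop syncthing

# Move the v2 index aside rather than deleting it outright, in case
# it's wanted for debugging later. index-v2 lives in the config volume.
mv /mnt/user/appdata/syncthing/index-v2 /mnt/user/appdata/syncthing/index-v2.bad

# On restart, Syncthing rebuilds the index by rehashing everything.
docker start syncthing
```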

It doesn’t make me feel very good that I’ve never had to do this before and have had to do it twice in the last month or so.

It also surprised me to come here to the forum and not see other reports of the same problem. From my end, it was all as simple as could be – shut down the 2.0.9 docker, update the docker, start up the 2.0.10 docker – just like any other upgrade – except Syncthing never came up, and the logs show the errors above.

The linuxserver docker build isn’t entirely optimal, unfortunately. Unless you need something specific only they provide, I suggest using our image instead.


Yeah, the issue is that the linuxserver version is well integrated with Unraid and prompts me to update it periodically. Doing a manual install of your image results in something that is harder to maintain over time.

Regardless, it doesn't matter; I gave up. I blew away my database again and am rebuilding it again. There's clearly something wrong in the 2.0.9 to 2.0.10 migration code, but I don't have time to wait for other people to report the same problem and for the Syncthing devs to fix it.

It’s fine, just computer time and users inconvenienced.

There has only been one instance of a similar error reported on the forum (see https://forum.syncthing.net/t/syncthing-disk-image-is-malformed-error/25262), and it used the very same build of Syncthing as yours, so at this point I think there is also a chance that there is simply something wrong with the Syncthing build provided by LinuxServer.

Certainly that’s a possibility. They build a lot of packages and everything is automated, something could have gone awry.

The total rebuild has been running for a few hours and a couple of folders are done, so whatever happened must have been fairly weird, like a single file not getting updated in the pull.

Just an FYI.

My server will finish the rebuild by tomorrow morning, so I'm not touching it at the moment, but the linuxserver Syncthing Docker on the Unraid console shows that there's an update available.

Of course, you guys haven't released anything newer than v2.0.10, and the last time I updated, all those error logs showed v2.0.10 as the version, so this is an interesting development. It makes me think the linuxserver Docker build was messed up and they've released an update to fix the problem.

Honestly my plan at the moment is to not touch anything for a while. Having this system down for multiple days is something I can’t repeat for a while, especially given that I had to do a total rebuild a month ago when going to 2.0.x.

Anyhow, figured I'd mention it.

I have the same symptom with the ARM build (`syncthing v2.0.10 "Hafnium Hornet" (go1.25.1 linux-arm64) builder@github.syncthing.net 2025-09-23 12:46:31 UTC`), so it's something global, I think.

I can't find the culprit file, even with strace.

Running `for i in *.db; do echo "$i"; sqlite3 "$i" 'PRAGMA optimize = 0x10002;'; done` in the `index-v2` folder works (all db-wal and db-shm files are removed).
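(A related quick check, in case it helps others: `PRAGMA integrity_check` asks SQLite to verify the whole file. A sketch from the same `index-v2` folder, assuming sqlite3 is available on the host and Syncthing is stopped:)

```
# Ask SQLite to verify each index database; it prints "ok" for a
# healthy database, or a list of corruption errors otherwise.
for i in *.db; do
  echo "$i"
  sqlite3 "$i" 'PRAGMA integrity_check;'
done
```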

Removing all `folder.*` databases is a solution, but it will hash everything again :frowning: And… I've moved only `main.db`, and it has just destroyed all my `folder.*` databases :smiley: :sob:
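(Lesson learned: snapshot the whole `index-v2` folder before touching anything, since `main.db` and the `folder.*` databases apparently have to stay consistent with each other. A minimal sketch, run from the config folder with Syncthing stopped:)

```
# Copy the entire index, preserving attributes, before experimenting.
cp -a index-v2 "index-v2.backup-$(date +%Y%m%d)"
```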

@dugwood that's awful. I've personally given up on updating Syncthing for now. I deleted my database and let v2.0.10 rehash for a day and a half. I am just going to leave the configuration alone and not update it until I get a sense from this thread – or from other details on other threads – that the problem has been found and fixed.

I'm sorry you are having to deal with the same issue that I had, but it is very helpful to the devs that you ran into it using their own Docker image, as they had implied that the problem was specific to the linuxserver Docker – obviously that's not the case, given your experience.


It's been 5 days, and my 1 TB of data isn't fully scanned… I'll revert to 1.3; I think that's the only good bet for now on slow hardware, even with the slow-hardware settings.

I've created a light GUI to see the progress and to view all my Syncthing servers; that's working great (it uses the Syncthing API).
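(If anyone wants to build something similar: the scan state is exposed over the REST API. A minimal sketch – the address, API key, and folder ID are placeholders, replace them with your own:)

```
# Query one folder's status; the "state" field reads "scanning" while
# a rescan is running. The API key is shown in the GUI settings.
curl -s -H "X-API-Key: YOUR_API_KEY" \
  "http://localhost:8384/rest/db/status?folder=YOUR_FOLDER_ID"
```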

Clearly the load from connected devices is part of the issue, as pausing 20 devices so they can't connect makes the load go down a lot. That speeds up the scanning, but since it takes ages to scan, if I end up hitting the same PRAGMA bug again, it's a no-go.
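(Pausing in bulk can also be done over the API rather than clicking through the GUI – a sketch, with the same caveats about the key and address being placeholders:)

```
# Pause one device by ID; omit the device parameter to pause all devices.
curl -s -X POST -H "X-API-Key: YOUR_API_KEY" \
  "http://localhost:8384/rest/system/pause?device=DEVICE_ID"
```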

As you said, @rramstad, we can't allow more than 1 day of downtime. I don't know your usage, but mine is a third backup of my clients' data. It needs to be in sync (+/- 4h is okay).

Just wanted to add that I have now hit the database disk image malformed issue twice.

Unfortunately, I don't know which version I upgraded to initially; I am guessing it was from v1.30.0 to v2.0.9. After I found the issues, I now specify my Docker version tags, and I tried again with a new migration a few weeks back. The migration went fine and I could access the UI. I thought all was good, but sometime after that it failed again with a database disk image malformed error. I did not find it until today (Syncthing used to be very hands-off).
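(For anyone wondering what "specify my Docker version tags" means in practice: pull an explicit tag instead of latest, so an unattended update can't move the container to a new release mid-migration. A sketch – the exact tag names are an assumption, check the registry for what's actually offered:)

```
# Pin an explicit release instead of :latest.
docker pull lscr.io/linuxserver/syncthing:v2.0.10   # tag name is illustrative
# Or, for the official image:
docker pull syncthing/syncthing:2.0.10              # tag name is illustrative
```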

Today, I am using the steps here to re-migrate the old database – Problems after upgrade to v2.0.2 - #4 by cvanelli – and am now trying to go directly to v2.0.10 with another migration from v1.

I would primarily suspect filesystem shenanigans. SQLite expects that when the filesystem is told to flush data to disk and reports back that it's happened, the data is actually on disk. If there are multiple layers of virtualisation or caching, or filesystems that lie, things may end up unhappy. Unraid specifically seems like the kind of funny business that could cause issues.

All of the above…

From a user's POV, `/mnt/user` looks like one big volume, but underneath is Unraid's homebrew FUSE-based union filesystem called "shfs" (I think the "sh" might be short for "shared") that spreads files across an array of drives and cache.
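(You can see which layer actually backs a path from a shell on the host – a quick check; the appdata paths are assumptions, adjust to your shares:)

```
# /mnt/user paths go through the shfs FUSE layer, while /mnt/cache and
# /mnt/diskN are the underlying filesystems directly; the Type column
# shows the difference.
df -T /mnt/user/appdata/syncthing
df -T /mnt/cache/appdata/syncthing
```

If the database does live behind shfs, mapping the container's config volume to a direct disk path instead of a `/mnt/user` path is one way to take the FUSE layer out of the loop.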