I have a Syncthing instance running in a Docker container on a NAS (to act as the always-online ‘central repository’ for all my other devices). I’ve been running this setup for several years. I don’t directly access its UI very often (last time was maybe a year ago), but today I logged in to add another device…and found that its entire config had disappeared. No connected devices, no folders, nothing - it was like a freshly installed default config.
It was definitely online & syncing as of like 30 minutes ago, so it seems like the config just nuked itself right around when I actually logged in. If I look at the config.xml file, sure enough, its contents are default, & the file shows a modification timestamp right around when I connected to the UI.
How on earth could that happen? I literally just accessed its URL, and…no more config.
I do have a weekly backed-up config.xml file from about 5 days ago, which I assume is safe to restore? And if I restore that config.xml, what should I do about its index-v0.14.0.db (some of the files in the db folder have more recent modified timestamps than the nuked config.xml, so presumably they are also now bad)? Should I delete the db folder (aka will it regenerate)? Or should I delete the db and the shared files themselves on the NAS (so it can pull them all from the devices)? Basically, in addition to restoring config.xml, what else do I need to do to ensure there aren’t a million conflicts upon restarting the Docker container on first sync?
You haven’t provided any details on the type of NAS, but is it by any chance https://forum.syncthing.net/t/synocommunity-syncthing-package-update-1-19-0-24-for-synology? If yes, then you should be able to find some instructions and tips on what to do next in that topic.
For the record, the config wipe issue reported there was caused by a faulty installer specific to the NAS. It’s not something that was done by Syncthing itself.
As for the backup restoration, what I personally always like to do is to set all folders to receive only first, then wait for the synchronisation to finish, then reset “local additions” and restore their original folder type. This way you can re-use the database too, so there’s no need to re-index everything from scratch.
You haven’t provided any details on the type of NAS
I mentioned that it’s running in a Docker container (Docker containers are platform-agnostic). The container was set up with docker-compose and has an explicitly specified image version, so it cannot auto-update. Syncthing’s config is in an external mapped folder.
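For reference, the arrangement I’m describing is roughly this kind of compose file (the image tag, host path, and ports here are just illustrative placeholders, not my actual values):

```yaml
services:
  syncthing:
    image: syncthing/syncthing:1.27.6   # explicitly pinned tag, so no silent auto-updates
    volumes:
      - /volume1/docker/syncthing/config:/var/syncthing   # config lives outside the container
    ports:
      - "8384:8384"     # web UI
      - "22000:22000"   # sync protocol
    restart: unless-stopped
```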
It looks like Synology packages store their config in some internal filesystem (@appdata), which is what caused their issue. So yeah - similar outcome, but it seems not actually related.
Yeah, so it’s difficult to say what exactly might have been the culprit unless the related information has been recorded in the logs (both in Syncthing and perhaps in the OS/Docker).
I’d still proceed with the backup restoration as described above.
- Restore your old config.xml file while Syncthing isn’t running.
- To be extra careful, start Syncthing with the --paused flag. This will pause all folders and devices. Alternatively, you can pause the problematic device in Syncthing on the other devices too.
- Change all folder types to “Receive Only”.
- Unpause everything and wait for the sync to finish.
- Reset local additions when needed.
- Change the folder types back as you wish.
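If you prefer doing this from the command line, the steps above can be sketched against Syncthing’s REST API. The API key, URL, and folder ID below are placeholders (the real key is in the &lt;apikey&gt; element of config.xml), and this obviously needs the daemon running:

```shell
# Placeholders: fill in your own API key and folder ID
API="http://localhost:8384/rest"; KEY="<api-key>"; FOLDER="<folder-id>"

# 1. While Syncthing is stopped, put the backed-up config.xml in place,
#    then start it with everything paused:  syncthing --paused

# 2. Make a folder receive-only and unpause it (repeat per folder)
curl -X PATCH -H "X-API-Key: $KEY" \
     -d '{"type": "receiveonly", "paused": false}' "$API/config/folders/$FOLDER"

# 3. Resume the paused devices and let everything sync
curl -X POST -H "X-API-Key: $KEY" "$API/system/resume"

# 4. Once in sync, revert local additions on the receive-only folder…
curl -X POST -H "X-API-Key: $KEY" "$API/db/revert?folder=$FOLDER"

# 5. …then restore the original folder type
curl -X PATCH -H "X-API-Key: $KEY" \
     -d '{"type": "sendreceive"}' "$API/config/folders/$FOLDER"
```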
Thanks for the reply.
Hmm…weird. So after pausing the NAS on all other devices, restoring config.xml & db on the NAS from backup, & restarting Syncthing on the NAS…now every folder says
folder marker missing (this indicates potential data loss, search docs/forum to get information about how to proceed).
I don’t really mind waiting for it to resync everything, just want to get this back up & running with minimal effort. Is it generally ideal to restore config.xml and db, or just config.xml and regenerate db? I’m also fine with deleting all the files on the NAS & having it resync from the other devices. Just want to be out of this mess (which I still have absolutely no idea how it occurred)…
Nevermind, deleting the db & just restoring the config file seems to be working
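For anyone landing here later, the fix that worked boils down to roughly this (the paths and container name are examples and depend on your own volume mapping):

```shell
docker-compose stop syncthing
cd /volume1/docker/syncthing/config          # host side of the mapped config dir
mv index-v0.14.0.db index-v0.14.0.db.bak     # drop the index; Syncthing rebuilds it on start
cp /path/to/backup/config.xml config.xml     # restore the known-good config
docker-compose start syncthing
```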