Reset Database - Now Asking For Deleted Files

Hello!

I had an unresponsive NAS in my sync cluster which I had to force power-cycle, and that damaged the Syncthing database. As the database was rather large (IIRC about 90GB of DB files…), rebuilding it was going to take forever - so I reset the database to allow it to rescan and build anew.

Since doing so, the NAS has reported a number of files which are Out of Sync in a couple of folders, and that no available devices have the required files.

The folders in question have completed their initial scan - but not all folders on the NAS have done so yet. When I briefly queried the API, it looked like one example out-of-sync file had been deleted elsewhere in the cluster - but the NAS doesn’t seem to be aware of this. (Unfortunately I didn’t save the API output, so I can’t provide concrete examples right this second…)
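For reference, the sort of query I ran goes against the /rest/db/file endpoint - roughly like this, with the device’s API key in $APIKEY and a placeholder folder ID and path rather than the real ones:

# Ask one device what it knows about a single file (availability, global and local records)
curl -s -H "X-API-Key: $APIKEY" \
  "http://localhost:8384/rest/db/file?folder=default&file=file/path/here"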

Would resetting the deltas be a sensible thing to do at this point?
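(By resetting the deltas I mean restarting Syncthing with the delta index reset option, which forces a full index exchange with the other devices - something along these lines, though the exact invocation depends on the Syncthing version and how the package launches it:)

# Force a full index exchange with all devices on the next start-up
syncthing serve --reset-deltas   # current CLI style
syncthing -reset-deltas          # older single-dash style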

Many thanks, as always, for your help and for this amazing application! Really looking forward to the SQLite update (though not to migrating a 90GB database to it…)

My guess is it hasn’t received the delete update yet because it’s still busy doing … stuff?

Maybe get on the 2.0 train while you’re rehashing everything anyway. The final release is likely going to be very, very similar to the current RC.

Thanks @calmh! I’ve waited over a week since resetting the DB, so I would have expected all the updates to have come in by now… I’ll query the API again and provide something more concrete shortly.

Ooooh yes, that’s a good shout. I know you can’t provide any cast-iron guarantees, and of course I won’t hold you to anything, but I’m guessing the fact that it’s an RC now suggests you think it’s ready for production data?

Just checked - it’s about three weeks since resetting the DB - and a good many of the machines in the cluster are powered on 24/7, and are not CPU-constrained.

Here’s the API reply for one of the errant files, from a working node in the cluster:

{
  "availability": [
    {
      "id": "SEPDJWL-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "UD7AGR2-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "QMA6DXE-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "NN3I4DJ-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "SDP4AYM-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "4WQIRNR-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "6FLFQJH-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "62NVCLP-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "NZZLVXM-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "K45JSVV-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "PGDSJCF-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "4353CXZ-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "YD56AL6-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "N64VTSG-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "7FC5MIZ-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "DPBBQIZ-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    },
    {
      "id": "OFWIR4X-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
      "fromTemporary": false
    }
  ],
  "global": {
    "blocksHash": null,
    "deleted": true,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2025-04-30T17:00:03.267472863+01:00",
    "modifiedBy": "K45JSVV",
    "mustRescan": false,
    "name": "file/path/here",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "Unix": null,
      "Windows": null,
      "Linux": null,
      "Darwin": null,
      "FreeBSD": null,
      "NetBSD": null
    },
    "sequence": 412704,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "DPBBQIZ:1744277721",
      "K45JSVV:1746028803"
    ]
  },
  "local": {
    "blocksHash": null,
    "deleted": true,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2025-04-30T17:00:03.267472863+01:00",
    "modifiedBy": "K45JSVV",
    "mustRescan": false,
    "name": "file/path/here",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "Unix": null,
      "Windows": null,
      "Linux": null,
      "Darwin": null,
      "FreeBSD": null,
      "NetBSD": null
    },
    "sequence": 223102,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "DPBBQIZ:1744277721",
      "K45JSVV:1746028803"
    ]
  },
  "mtime": {
    "err": null,
    "value": {
      "real": "0001-01-01T00:00:00Z",
      "virtual": "0001-01-01T00:00:00Z"
    }
  }
}

…and from this wonky NAS, which is connected to all the same nodes…

{
  "availability": null,
  "global": {
    "blocksHash": "Z5p+F1fJPkoZzkAEYxtAUQxAMICxVXMegp1dCIhoJTI=",
    "deleted": false,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2025-02-20T09:53:13.267472863Z",
    "modifiedBy": "PGDSJCF",
    "mustRescan": false,
    "name": "file/path/here",
    "noPermissions": true,
    "numBlocks": 67,
    "platform": {
      "Unix": null,
      "Windows": null,
      "Linux": null,
      "Darwin": null,
      "FreeBSD": null,
      "NetBSD": null
    },
    "sequence": 88786,
    "size": 8712796,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "PGDSJCF:1740045304"
    ]
  },
  "local": {
    "blocksHash": null,
    "deleted": false,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "1970-01-01T01:00:00+01:00",
    "modifiedBy": "",
    "mustRescan": false,
    "name": "",
    "noPermissions": false,
    "numBlocks": 0,
    "permissions": "0",
    "platform": {
      "Unix": null,
      "Windows": null,
      "Linux": null,
      "Darwin": null,
      "FreeBSD": null,
      "NetBSD": null
    },
    "sequence": 0,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": []
  },
  "mtime": {
    "err": null,
    "value": {
      "real": "0001-01-01T00:00:00Z",
      "virtual": "0001-01-01T00:00:00Z"
    }
  }
}

I’m not aware of anything further that needs fixing. Unless someone testing it reports a new blocking problem, the current RC is it.

I just upped the ante by inflicting the latest RC on all Debian users following the candidate channel…
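(For anyone on Debian/Ubuntu wanting to follow along, the candidate channel is the normal apt repository with the candidate component instead of stable - roughly as below, with the keyring path depending on how the repo was originally added:)

# Point apt at the Syncthing candidate channel, then update
echo "deb [signed-by=/usr/share/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing candidate" \
  | sudo tee /etc/apt/sources.list.d/syncthing.list
sudo apt update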

Interesting - so it sees an old (?) version from PGDSJCF which is in conflict with the newer, deleted version, and the old one wins. But the same PGDSJCF is listed in the other output as having the deleted file. Strange. :man_shrugging:

Thanks - good to know! I’ll have a look at the implications for modifying the launch script on the package this Synology uses, and consider moving across.

I understand. Yes - strange… Is there a relevant API call for querying the errant NAS about what it knows about other versions of that file? Or might a query to PGDSJCF yield any further information?

If not, I’m happy to perform some level of reset and move on - but if there’s any useful insight which can be gleaned, I’m happy to poke further.

Ok - a couple of data point tangents here, then an update:

This installation is using the SynoCommunity Synology package - and I’m glad to report that updating the Syncthing binary from within the Syncthing UI worked perfectly! It looks like the package is already using the long-form options, so there weren’t any launch issues around this.
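(By long-form options I mean the launch script already calls the binary with double-dash flags, which I understand is what 2.x expects - roughly along these lines, where $ST_HOME stands in for the package’s real config path:)

# Double-dash option style that survives the 2.x command-line changes ($ST_HOME is a placeholder)
syncthing serve --home="$ST_HOME" --no-browser --logfile="$ST_HOME/syncthing.log"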

As this device hadn’t completely rebuilt the database, it was still ‘only’ 30GB - but the migration worked smoothly:

[start] 2025/06/10 14:08:39 INFO: Migration complete, 8703987 files and 212964k blocks in 67h21m49s

Resource usage during the migration was typically about 60% CPU and 20% RAM (16GB installed). Each folder’s migration got slower as it worked through - but I’m guessing that’s related to inserting records into an increasingly large database.

I’ve only unpaused a couple of folders so far, and it does largely appear to have fixed the original issue. However, I’m now having the same problem as in Global and local state mismatch (v2) on each of these folders: the folder says it’s Up To Date, but the Global State is inflated compared with what it should be.
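For reference, the same inflated Global State shows up in the folder status counts from the API, with globalFiles/globalBytes larger than localFiles/localBytes even though the folder reports nothing as needed - e.g. (folder ID is again a placeholder):

# Compare globalFiles/globalBytes with localFiles/localBytes and needFiles for one folder
curl -s -H "X-API-Key: $APIKEY" "http://localhost:8384/rest/db/status?folder=default"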

It looks like you’ve already identified this regression - but I don’t believe the steps that led to this example are the same as in that other thread. (Certainly, these folders have been syncing fine for some years.)

Please let me know if it would be helpful to send you the database - but I appreciate that, as the database was clearly questionable before the upgrade, this may not be very useful!

This is currently expected, I think, while there are files that have been ignored. A fix is in the works.

Fabulous - thank you!