Deleted Folders Keep Re-appearing

Hello:

I have a problem with some folders inside a couple of Shared Folders refusing to stay deleted.

I have a ~20-machine-strong sync cluster (two Synology NAS units, the rest running macOS - mostly up-to-date Syncthing versions) and - generally speaking - everything syncs beautifully, thankyouallsoverymuchindeed.

However, I have a few folders which refuse to stay deleted: if I delete them on one of my nodes, they re-appear a few seconds later. Looking at the logs for this event, I see this happening:

(TL;DR version: deletions are recognised, then index updates come in for the deleted folders, resulting in folders being re-created.)

I’ve tried pausing all Remote Devices, deleting one of the miscreant folders, then progressively unpausing the Remote Devices one by one. Doing so, it seems there are about six devices which insist these folders should not be deleted… Nothing special about these devices - and (as far as I can see so far) they have the same Ignore Patterns configured as the other devices in the cluster.
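
(In case it helps anyone reproduce this: the same pause/resume dance can also be scripted against the REST API. The sketch below is only that - a sketch - with the GUI address and API key as placeholders; it assumes the standard /rest/system/pause and /rest/system/resume endpoints and the /rest/config/devices list:)

# Rough sketch only: pause all remote devices, then resume them one at a
# time so the device that re-introduces the deleted folder can be spotted.
# The address and API key are placeholders for your own instance.
import json
import time
import urllib.request

API = "http://127.0.0.1:8384"   # assumed local GUI/REST address
API_KEY = "your-api-key-here"   # assumed; shown under Actions > Settings > GUI

def call(method, path):
    req = urllib.request.Request(API + path, method=method,
                                 headers={"X-API-Key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

my_id = json.loads(call("GET", "/rest/system/status"))["myID"]
devices = [d for d in json.loads(call("GET", "/rest/config/devices"))
           if d["deviceID"] != my_id]

# Pause every remote device, then delete the offending folder on disk.
for d in devices:
    call("POST", "/rest/system/pause?device=" + d["deviceID"])

# Resume one device at a time and watch whether the folder comes back
# before moving on to the next one.
for d in devices:
    call("POST", "/rest/system/resume?device=" + d["deviceID"])
    print("resumed", d.get("name") or d["deviceID"])
    time.sleep(30)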

I’ve looked at the setup on a couple of these devices: their Global State was much higher than it should be, and there were a lot of Failed Items, corresponding to items inside the folders I’m trying to delete.

Consequently I’ve reset the database on those couple of devices - that initially seemed to work - but I’ve since found that they’re again contributing to the reinstatement of the deleted folders.

I’ve tried renaming the top level of one of the should-be-deleted folders - but the originally-named folder tree soon appeared alongside it.

What’s going to be the best way to clear these zombies once and for all?

Many thanks!

Are any of the devices running Syncthing v1.22.0? If yes, this is a bug and you should definitely upgrade to v1.22.1 (or later if you decide to run on the RC channel).


Thanks @tomasz86 - of the devices currently online, I can’t see any which are running v1.22.0. Some are running older versions, some v1.22.1…

Hmm, that’s strange then. The older ones, I wouldn’t be concerned about (although you may still want to upgrade to v1.22.1 just to eliminate possible culprits). I had the exact same problem in https://forum.syncthing.net/t/keep-getting-conflicts-generated-on-android-device-for-files-modified-only-on-a-desktop-pc/19060 but it was fixed after upgrading to v1.22.1.

There is still the other issue of conflicted copies on Android that has only been fixed very recently and isn’t yet available in Syncthing stable, but it doesn’t sound like that’s related to your problem.


Conflicts can also cause resurrections. So even 1.22.1 might not solve it, though the problems there seem to only manifest on Android - and you didn’t mention Android being involved; is it?


Yeah - definitely no Android machines present here. And as far as I can tell, there aren’t any conflicts at play - unless you count existing vs. deleted as a conflict…

Very similar situation here but with files too. To keep the scenario simple, I powered down all devices except a Windows 10 laptop and a Raspberry Pi, both running Syncthing v1.22.1. Observing the two systems side-by-side:

  1. I created a folder on the laptop and pasted a file into it. Within moments, the folder and file appeared on the Raspberry Pi as expected.
  2. I deleted the file on the laptop. The file did not disappear on the Raspberry Pi, and a few moments later it reappeared on the laptop.
  3. I repeated these steps and then the file deletion was successfully synced from the laptop to the Raspberry Pi and the file stopped reappearing on the laptop.

This has been happening ever since I created this setup, over a year ago. I can reproduce it almost without fail. I tried these steps several times, and at some point a .syncthing.file.tmp appeared on the Raspberry Pi which stayed there permanently but was never synced to the laptop.

A side-effect is that if I rename file1 to file2 in the laptop folder, a few moments later file1 reappears in the laptop folder (i.e. I get both file1 and file2 in that folder), presumably because a rename works as copy+delete, with the deletion not syncing to the Raspberry Pi as described above. Another possible side-effect is that I get lots of conflicts.

My Syncthing settings are the defaults, so I don’t think I broke anything.

Logs with model debug logging when this happens would help (maybe db is needed too - I don’t think so, but I can’t check at the moment).
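
(For reference, the model facility can be ticked in the GUI under Actions > Logs > Debugging Facilities, or switched on with the STTRACE=model environment variable. As a rough sketch, it can also be toggled via the REST API - the address and API key below are placeholders:)

# Minimal sketch: enable, then later disable, the "model" debug facility
# over the REST API. Address and API key are placeholders.
import urllib.request

API = "http://127.0.0.1:8384"   # assumed local GUI/REST address
API_KEY = "your-api-key-here"   # assumed

def post(path):
    req = urllib.request.Request(API + path, method="POST",
                                 headers={"X-API-Key": API_KEY})
    urllib.request.urlopen(req).read()

post("/rest/system/debug?enable=model")    # start verbose model logging
# ...reproduce the delete/reappear cycle and save the log...
post("/rest/system/debug?disable=model")   # turn it off again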

That’s exactly what I mean: it doesn’t look like a conflict, as one side is a deletion, but it still is one, and the file always wins. The conflict issue meant that on sync, the local item got “bumped” in version, thus creating a conflict. Trying to repro with the latest RC would also be insightful, as it fixes that (if it’s the same cause).

Ah yes - I forgot ‘the file always wins’ (and quite rightly too!).

Well, this morning, deleting the errant folders on a different node in the cluster seems to have worked - but not all nodes are online yet, so I don’t yet know whether they’ll come back again…

They came back again… I found another node on the cluster with the Global State too high and a bunch of Failed Items entries for files which have been deleted. Resetting the database on this node seems to have fixed it for now, but there may be more nodes that need this doing.

At present I’m chasing another shared folder on another node which has a Global State too high and a bunch of Failed Items entries for files which have been deleted, causing the creation of zombie folders on other nodes. For this node, I’m reluctant to reset the database though - it’s a large, slow NAS, and it will take forever to rescan.

Is there any elegant way of clearing these erroneous Failed Items, or am I going to have to touch them, then delete them all?

Here’s a screenshot of a correct node on the cluster:

…And here’s a screenshot of the faulty node:

[screenshot]

…And an excerpt of the Failed Items:

(For the record, the Ignore Patterns on each machine are identical.)

I’m also intrigued that, on the faulty node, Local Items + Failed Items != Global Items… :thinking:

Logs with model debug logging when this happens would help (maybe db is needed too - I don’t think so, but I can’t check at the moment).

Not sure if you were addressing me, or whether I should create a new topic. I don’t mean to hijack @Moisie’s topic. Anyway, I enabled extra logging as instructed and repeated the steps I described, saving and deleting 44.pdf in my_folder/test. The file reappeared. I am attaching the log.

log.txt (19.2 KB)

In the logs, the local device deletes the file, then device PPMH2ZO re-adds it, and from the version vectors a conflict is quite likely (PPMH2ZO newly appears in the vector). The latest RC has a bunch of issues resolved around this, so the next debugging step here would be to try to reproduce with that.

The “no connected device has the required version” error usually happens when there is an old, rarely connected and/or inhomogeneously added (some devices do share, some don’t) device. You could use syncthing cli debug file on one of those items to see which device is supposed to have that file. Remove that device and re-add it, and the error should be gone (if there are multiple devices that have that file, do the same for all of them). Or of course just establish a connection, or keep them removed, if either better applies to your situation.
As for the overall discrepancy - I don’t know. I’d always first try --reset-deltas for such issues around state, just because it’s cheaper and does rebuild the state.


Hi Simon:

Many thanks for the helpful reply - much appreciated.

I don’t know of any such devices that would hold these files in this cluster… but I will continue to poke about on all the machines…

Not sure about that option yet - will investigate. For the record, I just checked the API on the machine with the screenshots above for info on one of the Failed Items - it returned "availability": null - presumably that means it doesn’t know of any machine in the cluster with the file?
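
(For reference, this is roughly how I queried it - the folder ID and file path are the redacted placeholders used elsewhere in this thread, and the address and API key are placeholders too:)

# Rough sketch of a query against GET /rest/db/file, which returns the
# global/local versions and availability for a single item.
# All identifiers below are placeholders.
import json
import urllib.parse
import urllib.request

API = "http://127.0.0.1:8384"   # assumed local GUI/REST address
API_KEY = "your-api-key-here"   # assumed

params = urllib.parse.urlencode({
    "folder": "y9qce-abcde",                        # redacted folder ID
    "file": "Folder 1/Folder 2/Folder 3/Filename",  # redacted path
})
req = urllib.request.Request(API + "/rest/db/file?" + params,
                             headers={"X-API-Key": API_KEY})
info = json.loads(urllib.request.urlopen(req).read())

print(info.get("availability"))  # None here: no device announces the file
print(info["global"]["version"], info["local"]["version"])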

I’ve tried doing this, but it’s a little complicated because it’s on a Synology NAS, and it appears you can no longer run a login session as the user account which runs Syncthing. I’ve tried just modifying the launch script temporarily - but I can’t see any obvious sign that it’s happened…

Is there any indication in the log file that --reset-deltas has been applied?

Hello:

A few updates on this:

Yes there is - [ABCDE] 2022/11/14 08:51:51 INFO: Reinitializing delta index IDs…

Instead, I ran the command as root, and then chowned the log file and database files back to sc-syncthing:syncthing afterwards.

So, now having run --reset-deltas and let the folder re-scan, I’m still faced with much the same situation:

[screenshot]

  • The folder now spends ~10 seconds Preparing to Sync, then briefly flips to Syncing, then immediately back to Preparing to Sync
  • I still see that Local Items + Failed Items != Global Items
  • In the log, every ~10 seconds, I see:
[ABCDE] 2022/11/14 10:52:13 INFO: Puller (folder "Folder Name Here" (y9qce-abcde), item "Folder 1/Folder 2/Folder 3/Folder 4/Folder 5/File 1"): syncing: no connected device has the required version of this file
{...repeated many times for different files...}
[ABCDE] 2022/11/14 10:52:13 INFO: "Folder Name Here" (y9qce-abcde): Failed to sync 525 items
[ABCDE] 2022/11/14 10:52:13 INFO: Folder "Folder Name Here" (y9qce-abcde) isn't making sync progress - retrying in 1m11s.

Hello again:

Hmmm - this is getting further and further detached from reality now:

[screenshot]

Would removing the folder from this device and re-adding it clear its entries in the local DB, then repopulate them from the cluster?

Your screenshot by itself is nothing strange, so I’m assuming you’re referring to the files failing due to “no connected device has that version”, which is the problem you should troubleshoot. You could show the output (syncthing cli debug file, as mentioned above) for such a file if you want possible hints. I don’t think any database gymnastics will change that situation in the long term, since items that are needed but not available have by definition been announced by some other device.


Hi @calmh

Thanks for the reply - very much appreciated!

Yeah - oops, sorry - I see now that that screenshot’s not very useful in isolation…

Here’s the output from that command:

root@My-NAS:/volume1/@appstore/syncthing/bin# ./syncthing cli --home /volume1/@appdata/syncthing debug file y9qce-abcde "Folder 1/Folder 2/Folder 3/Filename"
{
  "availability": [
    "DDR3EZU-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG"
  ],
  "global": {
    "deleted": false,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2021-11-01T21:32:56.873846783Z",
    "modifiedBy": "ZLTQRJ2",
    "mustRescan": false,
    "name": "Folder 1/Folder 2/Folder 3/Filename",
    "noPermissions": true,
    "numBlocks": 1,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 4526674,
    "size": 20268,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "ZLTQRJ2:1645025081"
    ]
  },
  "globalVersions": "{{Version:{[{ZLTQRJ2 1645025081}]}, Deleted:false, Devices:{DDR3EZU}, Invalid:{}}, {Version:{[{M45F6VX 1657116515} {N64VTSG 1649061634}]}, Deleted:true, Devices:{7777777, LM7BDG6, SDP4AYM, BJ6O5JO, SEPDJWL, N64VTSG}, Invalid:{}}, {Version:{[{N64VTSG 1649061634}]}, Deleted:false, Devices:{MNZUIRB}, Invalid:{}}, {Version:{[{ZLTQRJ2 1645025081}]}, Deleted:false, Devices:{}, Invalid:{DXKCD2K}}}",
  "local": {
    "deleted": true,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2022-07-06T15:08:35.873846783+01:00",
    "modifiedBy": "M45F6VX",
    "mustRescan": false,
    "name": "Folder 1/Folder 2/Folder 3/Filename",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 7161868,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "M45F6VX:1657116515",
      "N64VTSG:1649061634"
    ]
  },
  "mtime": {
    "err": null,
    "value": {
      "real": "0001-01-01T00:00:00Z",
      "virtual": "0001-01-01T00:00:00Z"
    }
  }
}


By contrast, here’s the same output, but from a ‘working’ node (it’s connected to most, if not all, of the same Remote Devices):

root@My-NAS-2:/volume1/@appstore/syncthing/bin# ./syncthing cli --home /volume1/@appdata/syncthing debug file y9qce-abcde "Folder 1/Folder 2/Folder 3/Filename"
{
  "availability": [
    "LM7BDG6-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "N64VTSG-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "7777777-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "SDP4AYM-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "SEPDJWL-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "M45F6VX-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG"
  ],
  "global": {
    "deleted": true,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2022-07-06T15:08:35.873846783+01:00",
    "modifiedBy": "M45F6VX",
    "mustRescan": false,
    "name": "Folder 1/Folder 2/Folder 3/Filename",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 6009717,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "M45F6VX:1657116515",
      "N64VTSG:1649061634"
    ]
  },
  "globalVersions": "{{Version:{[{M45F6VX 1657116515} {N64VTSG 1649061634}]}, Deleted:true, Devices:{LM7BDG6, N64VTSG, 7777777, SDP4AYM, SEPDJWL, M45F6VX}, Invalid:{}}, {Version:{[{ZLTQRJ2 1645025081}]}, Deleted:false, Devices:{DDR3EZU}, Invalid:{}}}",
  "local": {
    "deleted": true,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2022-07-06T15:08:35.873846783+01:00",
    "modifiedBy": "M45F6VX",
    "mustRescan": false,
    "name": "Folder 1/Folder 2/Folder 3/Filename",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 1061613,
    "size": 0,
    "type": "FILE_INFO_TYPE_FILE",
    "version": [
      "M45F6VX:1657116515",
      "N64VTSG:1649061634"
    ]
  },
  "mtime": {
    "err": null,
    "value": {
      "real": "0001-01-01T00:00:00Z",
      "virtual": "0001-01-01T00:00:00Z"
    }
  }
}


Does that shed any light?

Well, in the top output the (global) file exists, was last changed by ZLTQRJ2, and is available from DDR3EZU. In the bottom output, the file has been deleted, was never changed by ZLTQRJ2, and DDR3EZU is nowhere to be seen. These two variants of the file appear entirely disconnected from each other, sharing no history. Perhaps you are running a network with devices not talking to each other and files being modified in multiple places?
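
(To make the “sharing no history” point concrete: the version entries above are version vectors, and two vectors conflict when neither has seen everything the other has. Below is a generic sketch of that comparison, using the vectors from the two outputs - illustrative logic only, not Syncthing’s actual code:)

# Generic version-vector comparison (illustrative, not Syncthing's code).
# A vector dominates another only if it has at least the same counter for
# every device; if neither dominates, the versions are concurrent, i.e. the
# two histories are disconnected and a conflict results.
def compare(a, b):
    a_covers_b = all(a.get(dev, 0) >= cnt for dev, cnt in b.items())
    b_covers_a = all(b.get(dev, 0) >= cnt for dev, cnt in a.items())
    if a_covers_b and b_covers_a:
        return "equal"
    if a_covers_b:
        return "first is newer"
    if b_covers_a:
        return "second is newer"
    return "concurrent (disconnected histories / conflict)"

# Vectors taken from the two debug outputs above:
faulty_global  = {"ZLTQRJ2": 1645025081}                         # file exists
working_global = {"M45F6VX": 1657116515, "N64VTSG": 1649061634}  # deleted

print(compare(faulty_global, working_global))
# -> concurrent (disconnected histories / conflict)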


What Jakob said. And in reference to the earlier suggestion of removing (and, if appropriate, re-adding) devices to see if that clears the problem: you’d need to do that with DDR3EZU. However, that only works if you haven’t been connected to it in a long time or that device is otherwise “broken” (has a skewed view of the global state). If everything is connected in principle, as you say, it looks like you have a more complicated connectivity/sync problem, as suggested by Jakob.
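
(If you do end up going the remove/re-add route for DDR3EZU, the removal can also be done via the config REST API - a sketch only, with the address, API key and the redacted device ID as placeholders; re-adding is easiest from the GUI afterwards:)

# Rough sketch: remove a device from this node's configuration via the
# REST config API. Address, API key and device ID are placeholders.
import urllib.request

API = "http://127.0.0.1:8384"   # assumed local GUI/REST address
API_KEY = "your-api-key-here"   # assumed
DEVICE_ID = "DDR3EZU-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG"

req = urllib.request.Request(API + "/rest/config/devices/" + DEVICE_ID,
                             method="DELETE",
                             headers={"X-API-Key": API_KEY})
urllib.request.urlopen(req)   # removes the device from this node's config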


Hi @calmh, hi @imsodin

Thanks for your thoughts - very much appreciated.

Hmmm - that’s certainly not the intention. All devices in the network are connected to many other devices in the network, with all of them connecting to both of these nodes.

(With apologies for not following your suggestions earlier!) I’ve dug into this a bit further, and it’s possible that DDR3EZU has a skewed view of the world: this is an older machine which, in theory, has been replaced - but I see that it’s still been connecting to the network… I shall investigate!

Many thanks for your expert input - very much appreciated!
