Deleted Folders Keep Re-appearing

Hello:

I’ve removed DDR3EZU - the device which wasn’t able to get into sync - from the system, and it’s now looking much happier - thanks for the pointers!

I’ll remove it at source as soon as I’m able to. Looking at ways to thoroughly ensure it’s expunged from the network (and not lurking on some device which I can’t get access to immediately), I’m thinking:

  • Remove all Shared Folders from DDR3EZU, whilst it’s connected to the network;
  • Allow it some time to send its removal updates(?);
  • Then completely remove the Syncthing installation from this device;
  • Then delete it from all other devices in the network that I can get my hands on (see the sketch below).
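
For that last step, where a node is reachable over the network but not in person, I could probably script it against each node’s REST API. A rough sketch, assuming GUI access on the default port - the API key, host and full device ID below are placeholders, not values from this thread:

API_KEY="<api-key-from-gui-settings>"
HOST="http://localhost:8384"
# The full 56-character device ID; it’s shown truncated to DDR3EZU in this thread.
DEVICE_ID="<full-device-id>"
# Drop DDR3EZU from this node’s configuration:
curl -X DELETE -H "X-API-Key: $API_KEY" "$HOST/rest/config/devices/$DEVICE_ID"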

Is that sensible, or am I over-worrying about it?

Thanks!


The desired result is that DDR3EZU is gone from Syncthing’s point of view, correct? Then just remove that device from all other devices and uninstall Syncthing on DDR3EZU - the order doesn’t matter. I am probably missing something here.


That’s right - it’s just that I won’t be able to access all nodes quickly to remove it and get to a stable state - so I was thinking that if I can remove the Shared Folders from DDR3EZU itself, then it will effectively remove itself from all nodes - unless I’m misunderstanding…
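
If it’s scriptable, I’d guess something like this against DDR3EZU’s own REST API would unshare everything (the key and host are placeholders as above; the folder IDs come from its own config):

# List the folder IDs configured on this device, then drop each one from its config:
curl -s -H "X-API-Key: $API_KEY" "$HOST/rest/config/folders" | jq -r '.[].id'
curl -X DELETE -H "X-API-Key: $API_KEY" "$HOST/rest/config/folders/<folder-id>"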


Ah - I’ve just managed to get access to DDR3EZU and started deleting folders. It looks like nothing is transmitted to the network to indicate that this device is no longer syncing these folders, so I guess there’s no point in pre-emptively deleting them… I’ll do so anyway, as I guess it can’t hurt! 😉


The other devices should show some indication that DDR3EZU is not sharing the folder anymore. I can’t remember whether they will also drop all information from it or not (likely not - only once it’s removed/unshared locally).


Unfortunately I can’t get to any of those other devices at the moment - but they won’t be able to get to DDR3EZU any more - I’ve completely wiped its config.

So - sorry to keep going on, but there are a few variations on a theme here - back to the zombie folders reappearing:

I’m on one of my nodes (BJ6O5J). I’ve paused all Remote Devices except one (7FC5MI) - though that node is itself connected to many other nodes in the network.

I delete a subfolder which is misbehaving, and then allow the Shared Folder to rescan - at which point the subfolder reappears. The subfolder contains only other subfolders - there are no files inside it.
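
Each test cycle looks roughly like this - the data path and subfolder name here are examples, not the real ones, and the folder ID is the one from the debug output below:

# Delete the misbehaving subfolder on disk:
rm -rf "/volume1/data/Folder 1/zombie-subdir"
# Ask Syncthing to rescan just that path, rather than waiting for the watcher:
curl -X POST -H "X-API-Key: $API_KEY" "$HOST/rest/db/scan?folder=xns9m-abcde&sub=zombie-subdir"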

The Shared Folder status is Up to Date; the Ignore Patterns are just my standard set which I use across the entire cluster.

Here’s a screenshot:

[screenshot: Shared Folder status in the web GUI]

And the output from the CLI debug command:

root@My-NAS:/volume1/@appstore/syncthing/bin# ./syncthing cli --home /volume1/@appdata/syncthing debug file xns9m-abcde "Folder 1"
{
  "availability": [
    "7FC5MIZ-AAAAAAA-BBBBBBB-CCCCCCC-DDDDDDD-EEEEEEE-FFFFFFF-GGGGGGG",
    "7777777-777777N-7777777-777777N-7777777-777777N-7777777-77777Q4"
  ],
  "global": {
    "deleted": false,
    "ignored": false,
    "inodeChange": "1970-01-01T01:00:00+01:00",
    "invalid": false,
    "localFlags": 0,
    "modified": "2022-11-16T15:44:33.1742879Z",
    "modifiedBy": "7FC5MIZ",
    "mustRescan": false,
    "name": "Folder 1",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 14506,
    "size": 128,
    "type": "FILE_INFO_TYPE_DIRECTORY",
    "version": [
      "BJ6O5JO:1668612898",
      "LM7BDG6:1668607556",
      "M45F6VX:1668612025",
      "NN3I4DJ:1668612338",
      "N64VTSG:1668611722",
      "SDP4AYM:1668612224",
      "6FLFQJH:1668506862",
      "7FC5MIZ:1668613534"
    ]
  },
  "globalVersions": "{{Version:{[{BJ6O5JO 1668612898} {LM7BDG6 1668607556} {M45F6VX 1668612025} {NN3I4DJ 1668612338} {N64VTSG 1668611722} {SDP4AYM 1668612224} {6FLFQJH 1668506862} {7FC5MIZ 1668613534}]}, Deleted:false, Devices:{7FC5MIZ, 7777777}, Invalid:{}}, {Version:{[{BJ6O5JO 1668612010} {LM7BDG6 1668607556} {M45F6VX 1668612025} {N64VTSG 1668611722} {SDP4AYM 1668611936} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{K45JSVV, XVMTD4U, N64VTSG, SDP4AYM, 2WR6R63, 6FLFQJH, YD56AL6, NN3I4DJ, OFWIR4X, LM7BDG6, UD7AGR2}, Invalid:{}}, {Version:{[{BJ6O5JO 1668611870} {LM7BDG6 1668607556} {N64VTSG 1668611722} {SDP4AYM 1668611936} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{ERKQLI4}, Invalid:{}}, {Version:{[{BJ6O5JO 1668611642} {LM7BDG6 1668607556} {N64VTSG 1668611722} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{SEPDJWL}, Invalid:{}}, {Version:{[{BJ6O5JO 1668607516} {LM7BDG6 1668607556} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{M45F6VX}, Invalid:{}}, {Version:{[{BJ6O5JO 1668595001} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{7ENN7MV}, Invalid:{}}, {Version:{[{BJ6O5JO 1668507441} {6FLFQJH 1668506862}]}, Deleted:false, Devices:{QMA6DXE, OSGBXTT}, Invalid:{}}, {Version:{[{ERKQLI4 1665407954} {GQAWTUG 1666008424} {OFWIR4X 1663679735} {SDP4AYM 1666630017} {UD7AGR2 1664791210} {XVMTD4U 1666606164} {ZLTQRJ2 1663338230} {2WR6R63 1666948180}]}, Deleted:false, Devices:{ZLTQRJ2}, Invalid:{}}}",
  "local": {
    "deleted": false,
    "ignored": false,
    "inodeChange": "2022-11-16T15:44:37.002498717Z",
    "invalid": false,
    "localFlags": 0,
    "modified": "2022-11-16T15:44:33.1742879Z",
    "modifiedBy": "7FC5MIZ",
    "mustRescan": false,
    "name": "Folder 1",
    "noPermissions": true,
    "numBlocks": 0,
    "platform": {
      "darwin": null,
      "freebsd": null,
      "linux": null,
      "netbsd": null,
      "unix": null,
      "windows": null
    },
    "sequence": 157741,
    "size": 128,
    "type": "FILE_INFO_TYPE_DIRECTORY",
    "version": [
      "BJ6O5JO:1668612898",
      "LM7BDG6:1668607556",
      "M45F6VX:1668612025",
      "NN3I4DJ:1668612338",
      "N64VTSG:1668611722",
      "SDP4AYM:1668612224",
      "6FLFQJH:1668506862",
      "7FC5MIZ:1668613534"
    ]
  },
  "mtime": {
    "err": null,
    "value": {
      "real": "0001-01-01T00:00:00Z",
      "virtual": "0001-01-01T00:00:00Z"
    }
  }
}

I’ve already been around (I think) all the devices referred to here; at the time, none of them were waiting for files inside this folder.

What can I do to convince these folders to stay deleted?!

You don’t have scanning/syncing of ownership or xattrs enabled, do you? Even if you don’t, it might be a combination of issues in v1.22.1 that causes it: lib/protocol: Ignore inode time when xattr&ownership is ignored (fixes #8654) by imsodin · Pull Request #8655 · syncthing/syncthing · GitHub and Xattrs not properly applied to directories, causes rescan/resync loop · Issue #8657 · syncthing/syncthing · GitHub. If that were the case, only upgrading to the RCs (or, later, v1.22.2) would fix it. I’m not sure though - really just guessing here.

You could temporarily set the folder to send-only and delete the directories. Then you’ll at least get to see who re-introduces the deleted items.
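
Roughly, via the REST config API - a sketch with the folder ID from your debug output and the usual placeholder key/host:

# Make the folder send-only, so remote changes are held back instead of pulled:
curl -X PATCH -H "X-API-Key: $API_KEY" -d '{"type": "sendonly"}' "$HOST/rest/config/folders/xns9m-abcde"
# ...delete the directories, watch which device pushes them back, then revert:
curl -X PATCH -H "X-API-Key: $API_KEY" -d '{"type": "sendreceive"}' "$HOST/rest/config/folders/xns9m-abcde"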


Hi @imsodin

Thanks - yet again - for your input!

No - I haven’t enabled that on any of the nodes in this cluster yet.

Ah - interesting… I shall investigate!


So I’ve been continuing the deletion of DDR3EZU from devices in the cluster - and on each device I remove it from, the Failed Items for one of my folders goes away. So far so good.

I haven’t yet managed to delete this device from all my nodes - so I’m guessing its presence is still causing these zombie subfolders to keep resurrecting themselves.

Hmmm - how would I get to see who is the root cause of re-introducing the deletions? I can’t see anything in the UI to indicate this, and looking at the logs (with the model debug facility enabled), I see about 14 index updates come in after deleting the subfolder. I know that I’ve removed DDR3EZU from 95% of these devices - I’m guessing I’m going to need to do 100% of them before I achieve success…?

The same syncthing cli debug file ... command. Ideally run it once when the item is deleted and again once it’s resurrected, to get a clean diff - but just afterwards should work too (the availability list will show it to some extent; the version vector is more precise about the original origin, but not always easy to decipher).
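
For a concrete example, with the same command you ran earlier:

# Snapshot the entry right after deleting it, and again once it has come back:
./syncthing cli --home /volume1/@appdata/syncthing debug file xns9m-abcde "Folder 1" > deleted.json
# ...wait for the resurrection, then:
./syncthing cli --home /volume1/@appdata/syncthing debug file xns9m-abcde "Folder 1" > resurrected.json
# The availability list and version vectors in the diff point at who re-introduced it:
diff deleted.json resurrected.json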


Thanks @imsodin

Well - I’m delirious to report that the zombie apocalypse seems to have abated now!

I found another node which still had DDR3EZU configured as a Remote Device - so I removed that - and another node which hadn’t been online for a while, which I brought online.

Since attending to these, my canaries have remained deleted for three days now - woohoo! I’m hopeful this is now resolved - but I’m continuing to keep an eye on it, just in case other nodes coming online reintroduce the problems.

Many thanks - as always - to @imsodin and @calmh for their invaluable advice and support.

