file names lingering forever

In some situations it seems a file name may linger forever, even though the file does not exist anymore. The image below exemplifies such a case. None of the named files exist anymore; they were removed long ago. Yet they still appear as pending sync. How can I clean this up?

Perhaps there is a file I can edit and remove the references?

There is no such thing.

Apart from resetting Syncthing entirely, you can try to find out why exactly they are there. Then, if it's a problem with your setup, you'll be able to fix it directly; otherwise we can try to identify and fix a potential bug in Syncthing. To start with that, describe your setup and those files (e.g. where are those screenshots from).

Ok. My GUI is in Portuguese, so my translation back into English may not correspond precisely to what you see on your screen. That said, I see those lingering file names on the web interface when I check the machine. An example is in the image below. Here, "Lenovo" is one of the machines my other machines have to synchronize with. You see the message "Fora de sincronia" (out of sync, ... 3 items, ~128 B). If I click on "3 items..." I see the names of files that do not exist anymore. They existed once, but were deleted, in some cases several months ago (in the example, on 2019-10-07).

You see, "NovoBin/Telegram.old" was deleted on 2019-10-07. It existed on every connected machine, but has been removed from all of them. Nevertheless, Syncthing still reports it on the machine "quarto" with size "0 B".

I find no way to remove this.

So the "Documentos" folder on "Lenovo" shows as up to date, right? Do the numbers for the global and local state match on both Lenovo and the hub? If you are interested in debugging this, get the output of https://docs.syncthing.net/rest/db-file-get.html for one of the files on both sides. Otherwise you can start Syncthing with the -reset-deltas option on the hub (the device that shows the out-of-sync items on remote devices); that might "clean it up".
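For reference, roughly what both suggestions look like (the API key, folder ID and file path are placeholders to fill in from your own setup):

# query the database record for one file; run this on both devices and compare
curl -H "X-API-Key: <api key from the GUI settings>" \
  'http://localhost:8384/rest/db/file?folder=<folder ID>&file=<path relative to the folder root>'

# or start Syncthing on the hub with the delta indexes reset, so indexes are exchanged from scratch
syncthing -reset-deltas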

I've been hitting this issue a bunch, and I think the repro process is (a shell sketch of the local-node side follows the list):

  • create a bunch of files
  • sync the index with a remote node, but disconnect before syncing the files (so that the remote node knows "these files should exist" but it doesn't have a copy of them)
  • add the files to .stignore on the local node
  • delete the files locally (which doesn't update the index because they're ignored?)
  • now your network collectively has the information "these files should exist" but they don't exist on any of the nodes
  • optionally remove the local .stignore file if you want the local node to report out-of-sync as well as the remote nodes
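A rough sketch of the local-node side, using the same path as in the REST output below (the disconnect of the remote has to happen by hand in between):

# inside the shared folder on the local node
mkdir -p rust_test/target/debug
touch rust_test/target/debug/.cargo-lock   # create the files; let only the index reach the remote
# (disconnect the remote node here, before the file data itself transfers)
echo '*/target/*' >> .stignore             # then ignore the files locally
rm -rf rust_test/target                    # and delete them while they are ignored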

REST info for one of these files, which is marked "local node created this file" in the index and "local node does not have this file" in reality.

I've tried marking these files as (?d) in the ignore file and then deleting their parent directory; and I've tried marking the local repo as send-only and then pressing "override changes"; but I can't figure out any way to get these non-existent files out of the index :frowning:

$ curl -H "X-API-Key: ..." 'localhost:8384/rest/db/file?folder=...&file=rust_test/target/debug/.cargo-lock'
{
  "availability": null,
  "global": {
    "deleted": false,
    "ignored": false,
    "invalid": false,
    "localFlags": 0,
    "modified": "2019-02-27T00:30:11.065920388Z",
    "modifiedBy": "DJZ6O3S",
    "mustRescan": false,
    "name": "guessing_game/target/debug/.cargo-lock",
    "noPermissions": false,
    "numBlocks": 1,
    "permissions": "0644",
    "sequence": 13380,
    "size": 0,
    "type": "FILE",
    "version": [
      "DJZ6O3S:1"
    ]
  },
  "local": {
    "deleted": true,
    "ignored": false,
    "invalid": false,
    "localFlags": 0,
    "modified": "2019-02-27T00:30:11.065920388Z",
    "modifiedBy": "DJZ6O3S",
    "mustRescan": false,
    "name": "guessing_game/target/debug/.cargo-lock",
    "noPermissions": false,
    "numBlocks": 0,
    "permissions": "0",
    "sequence": 721067,
    "size": 0,
    "type": "FILE",
    "version": []
  }
}

This is sort of expected: a file was announced and is now nowhere to be found.

I would have expected the device that announced and then ignored the file to announce it as ignored, so that no one needs it anymore. Will check it out once I'm at a computer again.

I don't think that bumps the version, as then any one device ignoring the file would mean it is no longer needed by everyone.

I get how this is expected from an implementation-detail point of view, but as an end-user it's pretty weird. Either way, now that I'm in this situation, any suggestions for how to get out of it?

(I guess I could remove the folder from all my hosts and then re-add it with a new folder ID, which seems a bit brute-force, but it should work...)


I tried to reproduce and failed. What I did:

  1. Make sure files can't be synced to device R (permissions).

  2. Create file on device L. Device R gets out of sync.

  3. Delete the file and add it to the ignore patterns on L.

Result: File isn't out of sync on R anymore. And if I optionally remove the ignore patterns on L, it isn't out of sync either.

What should happen (and happens for me):

When a file that previously existed is deleted and ignored, it gets marked as ignored and thus invalid, without bumping the version. That ignored file info gets sent to the remotes and overwrites the previous valid file info -> no one needs it anymore.
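If you want to check that state directly, the /rest/db/file call from above should show it on the device that ignores the file (a sketch; the jq filter just picks out the relevant fields):

# the local record for the deleted-and-ignored file should be marked
# ignored/invalid; that is the record the remotes receive
curl -s -H "X-API-Key: ..." \
  'http://localhost:8384/rest/db/file?folder=<folder ID>&file=<path>' | jq '.local | {deleted, ignored, invalid, version}'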

So now we need to figure out what happened differently in your case (@shish).

I did this on every machine. It helped in part: some of the lingering files disappeared. Nevertheless, there is one that still holds its position. See the image below.

Machines A70, Lenovo and Quarto are pending update with a single 128 B file. The culprit's name is .lock. As per the picture below, it is supposed to exist on the machine Dell, in the directory Joplin. Now, this directory has existed on every machine ever since. Nevertheless, it is supposed to be ignored, because its name is in the .stignore file on every machine. This is so because Joplin does its thing on each local machine and I don't want a ".lock" directory from one machine to spill over to the other machines. I suppose that would lead Joplin astray.

On the other hand, I see that the dates and times are different on different machines, which means .stignore seems to be working. So I should see no .lock file as a pending update (or so I guess). Nevertheless, this old fella stays alive like a ghost that is nowhere to be found.
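For reference, the kind of entry I mean (a sketch; the exact pattern depends on where Joplin keeps its lock data):

# appended to .stignore in the Joplin folder on each machine;
# the (?d) prefix lets Syncthing delete the ignored item if it blocks deletion of its parent
cat >> .stignore <<'EOF'
(?d).lock
EOF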

As I said, running Syncthing with the -reset-deltas option did not solve the problem (even though some of the lingering files did disappear).

-reset-deltas didn't make any difference for me :frowning:

UPDATE: The local repo is now Up to Date rather than Out of Sync - but the remote repo is still stuck at 77% (And the remote node claims that it is up to date)

I tried to reproduce and failed

Ack D: I still haven't figured out a bulletproof repro process either - I'm just pretty certain it's something along those lines, because every time I hit the issue, the ghost files are files which are supposed to be ignored on all nodes but temporarily weren't ignored on one node (because that node was recently reinstalled and I forgot to add #include .stglobalignore to my .stignore file)
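For context, the intended setup on every node is roughly this (a sketch; .stglobalignore is just an ordinary synced file that the real .stignore includes):

# per-node .stignore only pulls in the shared, synced pattern file
echo '#include .stglobalignore' >> .stignore
# shared patterns live in .stglobalignore, e.g. the Rust build output
echo '*/target/*' >> .stglobalignore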

Is there any more debug info I can give that would help figure out what's happening?

Re-synced via command line dark magic :smiley:

vim .stglobalignore  # remove */target/*
# list what the remote device still thinks it needs
curl 'http://127.0.0.1:8384/rest/db/remoteneed?device=...&folder=...&page=1&perpage=100000' -H 'X-CSRF-Token-...: ...' > out-of-sync
# recreate the parent directories of every needed path
cat out-of-sync | jq '.files[].name' | grep -oE '.*/' | sed 's/"//' | sort | uniq | xargs mkdir -p
# recreate the ghost files as empty files so they can sync and then be deleted
cat out-of-sync | jq '.files[].name' | xargs touch
rm -rf */target  # delete them again, for real this time
vim .stglobalignore  # re-add */target/*

For me, pretty dark!

I ran those lines here, but all I got in the out-of-sync file was the message "CSRF Error", whose meaning I can't grasp. I found a reference here but it did not help.

So, I still have those ghost files and also a command that gives me an error...

Ah, the "..."s in the command need to be replaced with values from your own install, found by intercepting network traffic >_<
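An easier route than intercepting traffic is probably the API key from the GUI settings, as used in the /rest/db/file call earlier in the thread (a sketch with placeholder IDs):

# same remoteneed query, authenticated with the API key instead of the CSRF token
curl -H "X-API-Key: <api key>" \
  'http://127.0.0.1:8384/rest/db/remoteneed?device=<device ID>&folder=<folder ID>&page=1&perpage=100000' > out-of-sync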

Also, it seems that just touching the files locally doesn't always result in "here's a new local version to override the ghost one" - sometimes the ghost has version=6 and local has version=1, so now I'm doing:

cat oos | jq '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo1 > {}'
cat oos | jq '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo2 > {}'
cat oos | jq '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo3 > {}'
...

Until EVENTUALLY my local file version numbers are larger than the ghost version, and then I can delete them, and the delete will propagate to all nodes.
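Something like this loop would probably do the same in one pass (untested sketch; the /rest/db/scan call is there so each rewrite is picked up as a separate version bump):

# rewrite every out-of-sync file a few times, rescanning in between,
# until the local version vector overtakes the ghost's
for i in 1 2 3 4 5 6 7; do
  cat oos | jq -r '.files[].name' | xargs -I '{}' /bin/sh -c "echo moo$i > '{}'"
  curl -X POST -H "X-API-Key: ..." "http://127.0.0.1:8384/rest/db/scan?folder=..."
  sleep 5
done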
