file names lingering forever

In some situations, it seems some file names may linger forever, even though the files do not exist anymore. The image below exemplifies a case: none of the named files exist anymore. They were removed long ago, but they still appear as pending sync. How can I clean this up?

Perhaps there is a file I can edit and remove the references?

There is no such thing.

Apart from resetting Syncthing entirely, you can try to find out why exactly they are there. Then, if it’s a problem with your setup, you’ll be able to fix it directly; otherwise, we can try to identify and fix a potential bug in Syncthing. To start with, describe your setup and those files (e.g. where those screenshots are from).

Ok. My GUI is in Portuguese, so my translation back into English may not correspond precisely to what you see on your screen. That aside, I see those lingering file names in the web interface when I check the machine. I bring an example in the image below. Here, “Lenovo” is one of the machines my other machines have to synchronize with. You see the message “Fora de sincronia” (out of sync, … 3 items, ~128 B). If I click on “3 items…”, I see the names of the files that do not exist anymore. They existed once but were deleted, in some cases, as below, several months ago (in the example, on 2019-10-07).

You see, “NovoBin/Telegram.old” was deleted on 2019-10-07. It existed on every connected machine, but has been removed from all of them. Nevertheless, Syncthing still reports it on the machine “quarto” with size “0 B”.

I find no way to remove this.

So the “Documentos” folder on “Lenovo” shows as up-to-date, right? Do the numbers for the global and local state match on both Lenovo and the hub? If you are interested in debugging this, get the output of https://docs.syncthing.net/rest/db-file-get.html for one of the files on both sides. Otherwise, you can start Syncthing with the -reset-deltas option on the hub (the device that shows the out-of-sync items on remote devices); that might “clean it up”.
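(For reference, a sketch of that check; the folder ID, file path, and API key are placeholders, and the sample response is made up and trimmed to the fields that matter:)

```shell
# Fetch the database record for one file on each device. Placeholders:
#   curl -s -H "X-API-Key: YOUR_KEY" \
#     'http://localhost:8384/rest/db/file?folder=FOLDER_ID&file=some/file' > record.json
#
# For illustration, a trimmed, made-up sample response:
cat > record.json <<'EOF'
{
  "global": {"deleted": false, "version": ["DJZ6O3S:1"]},
  "local":  {"deleted": true,  "version": []}
}
EOF

# Compare the two sections: a live "global" entry whose version the
# "local" side never recorded is exactly a file that will show as
# out of sync even though it no longer exists on disk.
jq '{globalDeleted: .global.deleted, globalVersion: .global.version,
    localDeleted: .local.deleted, localVersion: .local.version}' record.json
```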

I’ve been hitting this issue a bunch, and I think the repro process is:

  • create a bunch of files
  • sync the index with a remote node, but disconnect before syncing the files (so that the remote node knows “these files should exist” but it doesn’t have a copy of them)
  • add the files to .stignore on the local node
  • delete the files locally (which doesn’t update the index because they’re ignored?)
  • now your network collectively has the information “these files should exist” but they don’t exist on any of the nodes
  • optionally remove the local .stignore file if you want the local node to report out-of-sync as well as the remote nodes
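The steps above, sketched as shell commands (the folder path and ignore pattern are made up; step 2, the disconnect, happens in the GUI and isn’t scriptable here):

```shell
SYNC=./Sync   # made-up path standing in for a synced folder

# 1. Create a bunch of files inside the synced folder.
mkdir -p "$SYNC/rust_test/target/debug"
touch "$SYNC/rust_test/target/debug/.cargo-lock"

# 2. Let the index reach the remote node, then disconnect/pause the
#    remote in the GUI before the file data itself transfers.

# 3. Add the files to .stignore on the local node.
echo '*/target/*' >> "$SYNC/.stignore"

# 4. Delete them locally. The deletion isn't announced because the
#    path is now ignored, so remotes keep believing the files exist.
rm -rf "$SYNC/rust_test/target"
```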

REST info for one of these files, which is marked “local node created this file” in the index and “local node does not have this file” in reality.

I’ve tried marking these files as (?d) in the ignore file and then deleting their parent directory; and I’ve tried marking the local repo as send-only and then pressing “override changes”; but I can’t figure out any way to get these non-existent files out of the index :frowning:

$ curl -H "X-API-Key: ..." 'localhost:8384/rest/db/file?folder=...&file=rust_test/target/debug/.cargo-lock'
{
  "availability": null,
  "global": {
    "deleted": false,
    "ignored": false,
    "invalid": false,
    "localFlags": 0,
    "modified": "2019-02-27T00:30:11.065920388Z",
    "modifiedBy": "DJZ6O3S",
    "mustRescan": false,
    "name": "guessing_game/target/debug/.cargo-lock",
    "noPermissions": false,
    "numBlocks": 1,
    "permissions": "0644",
    "sequence": 13380,
    "size": 0,
    "type": "FILE",
    "version": [
      "DJZ6O3S:1"
    ]
  },
  "local": {
    "deleted": true,
    "ignored": false,
    "invalid": false,
    "localFlags": 0,
    "modified": "2019-02-27T00:30:11.065920388Z",
    "modifiedBy": "DJZ6O3S",
    "mustRescan": false,
    "name": "guessing_game/target/debug/.cargo-lock",
    "noPermissions": false,
    "numBlocks": 0,
    "permissions": "0",
    "sequence": 721067,
    "size": 0,
    "type": "FILE",
    "version": []
  }
}

This is sort of expected: a file is announced and is now nowhere to be found.

I would have expected the device that announced and then ignored the file to re-announce it as ignored, so that no one needs it anymore. Will check it out once I’m at a computer again.

I don’t think that bumps the version, as then any one device ignoring a file would mean the file is no longer needed by everyone.

I get how this is expected from an implementation-detail point of view, but as an end-user it’s pretty weird. Either way, now that I’m in this situation, any suggestions for how to get out of it?

(I guess I could remove the folder from all my hosts and then re-add it with a new folder ID, which seems a bit brute-force, but it should work…)

I tried to reproduce and failed. What I did:

  1. Make sure files can’t be synced to device R (permissions).

  2. Create file on device L. Device R gets out of sync.

  3. Delete and ignore the file on L.

Result: File isn’t out of sync on R anymore. And if I optionally remove ignore patterns on L, it isn’t out of sync either.

What should happen (and happens for me):

When a file that previously existed is deleted and ignored, it gets marked as ignored and thus invalid, without bumping the version. That ignored file info gets sent to remotes and overwrites the previous valid file info → no one needs it anymore.

So now we need to figure out what happened differently in your case (@shish).

I did this on every machine. It helped in part: some of the lingering files disappeared. Nevertheless, there is one that still holds its position. See the image below.

Machines A70, Lenovo and Quarto are pending an update with a single 128 B file. The culprit’s name is .lock. As per the picture below, it is supposed to exist on the machine Dell, in the directory Joplin. Now, this is a directory that has existed on every machine since. Nevertheless, it is supposed to be ignored, because its name is in the .stignore file on every machine. This is so because Joplin does its trick on each local machine, and I don’t want a “.lock” directory from one machine to spill over into the other machines. I suppose that would lead Joplin astray.

On the other hand, I see that dates and times are different on different machines, which means .stignore seems to be working. So I should see no .lock file as a pending update (or so I guess). Nevertheless, this old fella stays alive like a ghost that is nowhere to be found.

As I said, running Syncthing with the -reset-deltas option did not solve the problem (even though some of the lingering files did disappear).

-reset-deltas didn’t make any difference for me :frowning:

UPDATE: The local repo is now “Up to Date” rather than “Out of Sync”, but the remote repo is still stuck at 77% (and the remote node claims that it is up to date).

I tried to reproduce and failed

Ack D: I still haven’t figured out a bulletproof repro process either. I’m just pretty certain it’s something along those lines, because every time I hit the issue, the ghost files are files which are supposed to be ignored on all nodes but temporarily weren’t ignored on one node (because that node was recently reinstalled and I forgot to add #include .stglobalignore to my .stignore file).
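(For anyone else reading: the include line in question goes in .stignore, along these lines — file names here match my setup, the comments are just illustration:)

```
// contents of .stignore on each node -- pulls the shared patterns
// from .stglobalignore, which is itself a file synced between nodes
#include .stglobalignore
```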

Is there any more debug info I can give that would help figure out what’s happening?

Re-synced via command line dark magic :smiley:

vim .stglobalignore  # remove */target/*
curl 'http://127.0.0.1:8384/rest/db/remoteneed?device=...&folder=...&page=1&perpage=100000' -H 'X-CSRF-Token-...: ...' > out-of-sync
jq -r '.files[].name' out-of-sync | grep -oE '.*/' | sort -u | xargs mkdir -p
jq -r '.files[].name' out-of-sync | xargs touch
rm -rf */target
vim .stglobalignore  # re-add */target/*

For me, pretty dark!

I ran those lines here, but all I got in the out-of-sync file was this message: “CSRF Error”, whose meaning I can’t grasp. I found a reference here, but it did not help.

So, I still have those ghost files and also a command that gives me an error…

Ah, the “…”s in the command need to be replaced with values from your own install, found by intercepting network traffic >_<

Also, it seems that just touching the files locally doesn’t always result in “here’s a new local version to override the ghost one”. Sometimes the ghost has version=6 and local has version=1, so now I’m doing:

cat oos | jq -r '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo1 > "{}"'
cat oos | jq -r '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo2 > "{}"'
cat oos | jq -r '.files[].name' | xargs -I '{}' /bin/sh -c 'echo moo3 > "{}"'
...

Until EVENTUALLY my local file version numbers are larger than the ghost versions; then I can delete the files, and the delete will propagate to all nodes.
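As a sanity check for that loop: the version entries from /rest/db/file look like “DEVICE-ID:counter”, and for the same device the higher counter wins, so you can compare the two counters directly (the device ID and numbers below are made up):

```shell
# Made-up version entries in Syncthing's "DEVICE-ID:counter" form.
ghost='DJZ6O3S:6'   # version the ghost entry still carries in the index
mine='DJZ6O3S:7'    # local version after repeatedly rewriting the file

# Strip everything up to the colon to get the counters, then compare.
g=${ghost##*:}
m=${mine##*:}

if [ "$m" -gt "$g" ]; then
  echo "local version wins: a delete will now propagate"
else
  echo "ghost still wins: keep bumping the local file"
fi
```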

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.