Remove failed items from sync database?

I use a Raspberry Pi as a backup server, but it is a bit on the slow side (as many have noted here on the forum).

Sometimes it claims it has files it doesn’t, resulting in Failed Items (no such file or directory) on my other machines in the network.

The files are created by workstations in the network and then moved shortly afterwards to other locations. What I think happens is that the Pi receives notice that there are new files in one location, which it adds to its database, but it never actually receives the files before they are moved. Yet it still thinks it has the files somehow…

Can I tell the Pi to forget these files without having to rescan the whole share, which takes ages? (Edit: rescanning doesn’t help either…)

You can try setting a faster rescan interval on the workstation where the files are moved frequently, or use inotify, which informs Syncthing about file changes immediately.
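If you prefer scripting it over clicking through the GUI, a rough sketch using the documented /rest/system/config read/write endpoints would look something like this (the API key, address and folder ID are placeholders, and a restart of the instance may be needed before the new interval takes effect):

import requests

API_KEY = "your-api-key"        # placeholder; shown under Settings -> API Key in the web GUI
BASE = "http://127.0.0.1:8384"  # the workstation's GUI/REST address
FOLDER_ID = "myfolder"          # hypothetical folder ID as configured in Syncthing

headers = {"X-API-Key": API_KEY}

# Read the whole config, lower the rescan interval for one folder, write it back.
cfg = requests.get(BASE + "/rest/system/config", headers=headers).json()
for folder in cfg["folders"]:
    if folder["id"] == FOLDER_ID:
        folder["rescanIntervalS"] = 30  # seconds between rescans
requests.post(BASE + "/rest/system/config", headers=headers, json=cfg)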

I know it is off-topic (shame on me), but please don’t forget that Syncthing is not a backup application: http://docs.syncthing.net/users/faq.html#is-syncthing-my-ideal-backup-application

I think I use inotify (on the Ubuntu workstation at least). I’ve raised the ping times as suggested here on the forum to help with the Raspberry’s responsiveness. It seems like the Pi rescanned a lot unnecessarily with lower intervals.

It’s more of a Syncthing share node than a backup… that’s just how I think of it. I actually have an offsite Raspberry that rsyncs from this one as a proper backup.


Now that those false file references are there, how do I get rid of them?

Ideally it should fix itself after a rescan. If it doesn’t, the best bet is to shut down all Syncthing instances (at the same time) and then restart them.
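If you want to script the “stop everything at once” step, one option is the REST shutdown endpoint; a minimal sketch, assuming the addresses and API keys below are placeholders for your own instances:

import requests

# Hypothetical list of (REST address, API key) pairs, one per Syncthing instance.
instances = [
    ("http://127.0.0.1:8384", "workstation-api-key"),
    ("http://192.168.0.32:8384", "raspberry-api-key"),
]

for base, key in instances:
    # /rest/system/shutdown asks that instance to exit cleanly.
    requests.post(base + "/rest/system/shutdown", headers={"X-API-Key": key})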

Also, if you are willing to help us debug this, it would be useful if you could provide output from the following REST call from both ends for one of the files in one of the folders that is stuck:

http://docs.syncthing.net/rest/db-file-get.html

Usually invoked like this:

http://127.0.0.1:8384/rest/db/file?folder=<name of folder>&file=<path to file which has issues, relative to folder root path>
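For reference, the same call from a script (a minimal sketch; the API key and the folder/file values are placeholders, and the REST API needs the X-API-Key header for authentication):

import requests

API_KEY = "your-api-key"       # placeholder; set/read it under Settings in the web GUI
BASE = "http://127.0.0.1:8384"

params = {
    "folder": "folder-id",                          # the Syncthing folder ID, not a filesystem path
    "file": "path/to/file/relative/to/folder/root", # the problem file, relative to the folder root
}
resp = requests.get(BASE + "/rest/db/file", params=params, headers={"X-API-Key": API_KEY})
print(resp.json())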

I have the same problem – I’m using SyncTrayzor on all my systems. For me it usually happens when I copy pictures from my camera (sometimes around 500 to 1000 at a time). As I’m going through the photos I delete the ones I don’t need – that’s when some of the other systems start complaining. I have one server located over a VPN, so it doesn’t get the files as fast as the systems on my network. It will give me errors saying it can’t find the file to copy over. The only way I can fix it is to temporarily make my desktop a master and override the errors. Running a rescan on all systems doesn’t seem to work.

I’ve learned to copy all the files from my camera and let all my systems sync completely before I go through and delete/organize – that usually helps. Next time it happens I’ll try to help debug.

Did this and got the same response for both server and workstation.

{"availability":null,"global":{"flags":"0","localVersion":0,"modified":"1970-01-01T01:00:00+01:00","name":"","numBlocks":0,"size":0,"version":[]},"local":{"flags":"0","localVersion":0,"modified":"1970-01-01T01:00:00+01:00","name":"","numBlocks":0,"size":0,"version":[]}}

Not sure if I got the command right. This is the path reported by Syncthing webGUI:

1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png

and this is what I tried:

192.168.0.32:8384/rest/db/file?folder=/1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/&file=test_0001.png

Right?

No that’s not right. It should be:

?folder=nameofsyncthingfolder&file=path/to/file/relative/to/syncthing/folder/root.jpg
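So, taking the path reported earlier in the thread, the call would look roughly like this (FOLDERID stands for whatever that share is called in Syncthing’s folder configuration, which isn’t shown in this thread):

http://192.168.0.32:8384/rest/db/file?folder=FOLDERID&file=1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png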

Ah, that went better, thank you…

The files I was having problems with yesterday are no longer an issue; they disappeared from the Failed list. I am getting complaints about other files now. These files DO exist on the server, as 0-byte files. Not sure when they got back there or how. They were deleted and weren’t there a little while ago. They can’t be synced to the workstation though.

workstation:

{"availability":["ID-OF-RASPBERRY-SERVER"],"global":{"flags":"0664","localVersion":174915,"modified":"2015-07-02T11:34:01+02:00","name":"1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png","numBlocks":1,"size":0,"version":["103624838204970834:2","10707944453534682816:1"]},"local":{"flags":"010664","localVersion":535033,"modified":"2015-07-02T11:34:01+02:00","name":"1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png","numBlocks":0,"size":128,"version":["103624838204970834:2"]}}

server:

{"availability":[],"global":{"flags":"0664","localVersion":174915,"modified":"2015-07-02T11:34:01+02:00","name":"1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png","numBlocks":1,"size":0,"version":["103624838204970834:2","10707944453534682816:1"]},"local":{"flags":"0664","localVersion":174915,"modified":"2015-07-02T11:34:01+02:00","name":"1505_THISPROJECT/005_RENDERS_O_COMPS/02_COMPED_SEQ/testanim_pfut/test_0001.png","numBlocks":1,"size":0,"version":["103624838204970834:2","10707944453534682816:1"]}}

I will have to come back to this issue when I get some new problem files.

And what’s the error message you are getting, and on which device?

Failed on workstation.

And the actual error message?

OK, it happened again. Two machines: workstation and Raspberry Pi server.

I rendered animation frames to one location on my workstation, then moved them to another location, also on the workstation.

Now the workstation complains about “no such file or directory”, so the server got a report about the files but didn’t receive them before they were moved. It does advertise itself as having them though.

The files do not exist on either the workstation or the server.

The REST call on both workstation and server gives:

{"availability":[],"global":{"flags":"0","localVersion":0,"modified":"1970-01-01T01:00:00+01:00","name":"","numBlocks":0,"size":0,"version":[]},"local":{"flags":"0","localVersion":0,"modified":"1970-01-01T01:00:00+01:00","name":"","numBlocks":0,"size":0,"version":[]}}

For some reason, whenever I tried to read the network shares on the server from my workstation, all file browsers just hung. When I ssh’d into the server I could read the folders I was sharing, but whenever I tried to cd into the Syncthing folder the terminal also froze. I had to re-ssh into the server and kill the syncthing process, after which I could cd into the Syncthing folder. File browsers on the workstation still hang. I have tried to remount the shares. Will now try rebooting the machines. Will be back with results in a moment.

This is very strange. Did you rescan on both ends? It’s very unlikely they think files are there if they actually are not. Try using find path/to/dir as an alternative to ls or cd to see what’s happening.

Also, do you have a reproducible test case? I think the files being on a network share could contribute.

This happens to me also. I’m not using a Pi, but I tend to move large files quickly, before they sync, and I usually get a failed sync.

And yes, the workaround is to wait until the files have synced and then move them around, but sometimes we forget.

Yes, I rescanned on both ends.

A third machine, a laptop, also reports the same files as missing. If I shut down the Raspberry server, both the workstation and the laptop report Up to Date with no missing files. When I start the Raspberry again, the workstation and laptop go Out of Sync with those same files Failed.

What I did exactly was render some animation from an animation program; the currently Failed files are a folder containing 237 PNG files totalling 6.2 MB, so not very large. By default the animation is rendered to the same folder as the work file, so after the render finished I moved the files to my /RENDERED_SEQUENCES folder. I did this just as the render finished, so the Raspberry hadn’t had time to sync it properly.

Here is where it all fails. The Raspberry thinks it has the files but it doesn’t.

An interesting thing is that the failed files are reported as 0 B (Out of Sync 237 items, ~0 B).
