Ah. That’s a bug; a file changing while it’s being synced is annoying, but it should not be considered a stop-the-repo kind of error, for sure.
Edit: Actually no, looking at the code that should not happen. Are there more, more serious, errors earlier in the log?
Yes, there were also permission errors.
I have fixed the permissions and added the chrome and zeitgeist folders to the ignore list. Now I have no more issues with synchronization.
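For anyone hitting the same thing: Syncthing reads ignore patterns from a `.stignore` file in the repo root, one pattern per line. A sketch of what such a file might look like (the exact paths are assumptions; they depend on where the chrome and zeitgeist folders actually live under the repo root):

```
// .stignore in the repo root — paths below are hypothetical examples
.config/google-chrome
.local/share/zeitgeist
```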
I finally found some time to test the new version and it seems to work.
Now only the file causing the problem will stay unsynced in the first run.
However, the repo will still be stopped, and Syncthing will not try to sync files added or changed after it has been stopped; Syncthing needs to be restarted. Is this the intended behaviour?
It would be nice if just the file(s) causing the error were suspended from syncing instead of stopping the entire repo.
Are there cases in which it is useful to stop the repo entirely?
It’s hard to tell the difference between the two cases. We stop when we get fatal errors from the operating system, preventing us from creating files or writing to them. Syncthing can’t know the root cause of an “invalid argument” response, and it differs from OS to OS. The cause could be that the file system is broken, in which case no file will ever work. Syncthing tries to work around this by only stopping the repo if all files it tried to sync failed.
But due to the nature of things, this is where we end up: we start with 100 files, 10 of which fail. We succeed in syncing the other 90 and retry the 10 — all of them fail. Would an eleventh file succeed at this point?
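The heuristic described above can be sketched roughly like this. To be clear, this is a minimal illustration, not the actual Syncthing code; `syncFile`, `pullFolder`, and the `failing` map are all hypothetical names invented for the example:

```go
package main

import (
	"errors"
	"fmt"
)

// syncFile simulates syncing one file. In this sketch, files listed in the
// failing map always return an error, standing in for a fatal OS error.
func syncFile(name string, failing map[string]bool) error {
	if failing[name] {
		return errors.New("invalid argument")
	}
	return nil
}

// pullFolder mirrors the heuristic from the post: attempt every file, retry
// only the failures, and report the repo as stopped only when the entire
// retry pass fails — i.e. every file it tried to sync failed.
func pullFolder(files []string, failing map[string]bool) (synced, failed []string, stopped bool) {
	for _, f := range files {
		if err := syncFile(f, failing); err != nil {
			failed = append(failed, f)
		} else {
			synced = append(synced, f)
		}
	}
	if len(failed) == 0 {
		return synced, nil, false
	}
	// Retry pass: this time the work list contains only the failures,
	// so "all files failed" is now trivially true if none recover.
	var stillFailed []string
	for _, f := range failed {
		if err := syncFile(f, failing); err != nil {
			stillFailed = append(stillFailed, f)
		} else {
			synced = append(synced, f)
		}
	}
	stopped = len(stillFailed) == len(failed)
	return synced, stillFailed, stopped
}

func main() {
	files := []string{"a", "b", "c"}
	failing := map[string]bool{"c": true} // one persistently broken file
	synced, failed, stopped := pullFolder(files, failing)
	fmt.Println(len(synced), len(failed), stopped) // 2 1 true
}
```

This makes the complaint concrete: on the first pass 90% of the files succeed, but the retry pass sees only the broken files, so the "all files failed" condition fires and the repo stops even though the filesystem is mostly healthy.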
So what is the workaround for this issue? I keep getting this, and it is hard to figure out which node is causing it; providing names is not enough, and the debugging output is not very concise, especially if one is on the go when hitting this issue. I have a couple of nodes attached and I do not know where or why it is stopping.
It would be nice if ST just ignored the errors but reported them. Fixing this can be tedious, considering that people can be in all kinds of situations and may not be able to access all the nodes.
We could, but once we’re getting persistent errors from the filesystem layer, things are getting dangerous. Apparently we can’t create or write to files as we should. Are we sure the system is healthy enough for reads to be OK? If not, we could be corrupting data by accepting changes from the local system that are not intentional.
To be sure, there are some easy cases that we should be able to discount, and there should be a better mechanism to figure out what’s going on than looking in the logs, but just ignoring errors is a dangerous path to walk.
That is fine. I understand the complications of managing files across devices, but we have to remember that the whole point of having a syncer is to have files synced. If the app stops in the middle of syncing for obscure reasons, that is no good.
Presume for a second that you are in China on the go and your server is in Europe. You have important files to sync back and forth. ST decides to stop for some reason that might not have anything to do with the files you need to be syncing. And let’s say you have no way of accessing your servers to figure out what is going on. I think this kind of scenario is plausible. I personally think that ST should strongly inform the user, but stopping sync should be the last resort.
I have a similar problem. The reason the full repo stopped was that some files (maybe only one) had to be retransferred. But the index was not even fully transmitted at the time the repo was stopped.
I restarted and then, at 22:34:17, I saw the message: "All files… Stopping Repo “Test2”…
But it took several minutes more until the full index was transferred and the repo switched from Syncing (blue) to Stopped (red).
Should I open an issue for this? 0.9.17, Linux node.