Not really sure what's going on. Lots of disconnects and half syncs

So I have five servers all sharing directories.

Hosts:

- ARL-FS (FreeNAS, local network)
- ARL-01 (Windows 7 Ultimate x64, local network)
- ARL-Laptop (Fedora 23, local network)
- Phone (Android, remote and local)
- server3 (CentOS 7, remote)

Folders:

- sources: ARL-FS, ARL-01, ARL-Laptop, server3
- Pictures: ARL-FS, Phone
- pictures-backup: server3, ARL-FS

With all of that set up, they all keep randomly disconnecting from each other.

ARL-FS reports it’s syncing with ARL-01 even though I’m unable to access the web GUI on ARL-01. ARL-FS also reports server3 as disconnected, yet it still gets share requests from it. server3 reports ARL-01 and ARL-FS as disconnected, but its copy of sources shows as up to date even though it’s missing 20GB+ of data.

If I restart the client on ARL-01, it’ll respond for a bit but eventually stops accepting web GUI connections. It also seems to keep losing its folders.

ARL-FS estimates my local state to be 39.9GiB but it’s still scanning as well. Am I just pushing too much data for this to work? Or should I let one server scan everything, then bring each server online as they fully sync?

Check whether the process is actually running when you try to connect.
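On the Linux/FreeNAS boxes, that check can be sketched like this (on Windows, Task Manager or `tasklist` answers the same question):

```shell
# Is the Syncthing process actually alive before we blame the web GUI?
if pgrep -x syncthing >/dev/null 2>&1; then
    echo "syncthing is running"
else
    echo "syncthing is not running"
fi
```

If the process is down, the web GUI failing is a symptom, not the fault itself.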

Showing “up to date” makes perfect sense when you are not connected to anyone: we are unaware of what the remote state is, hence, as far as the local node knows, it is up to date.

The fact that it’s still scanning doesn’t mean much without knowing how long it has been scanning for. Scanning 40GB doesn’t sound like too much, yet the number of files also plays a role, as does the hardware involved, which is not clear at this point.

Also, just a tip for the future when looking for help: giving things long, verbose names doesn’t help, especially when half of them share the same prefix. It’s much easier to read “A is connected to B, B sees C”, etc., than to try and adapt to everyone’s naming scheme with every support request.

I figured I’d keep the names as they are, since they’re short, in case I had to post the logs for assistance as well, but thanks for the tip.

And yes, the process is running on all of the servers.

There are tons of files: 2,465,296 items in the sources folder alone.

If I should wait for it to finish scanning before letting them sync, that’s possible to do.

Well, it will try to sync as it’s scanning, but a large number of items means a large index, and a large index means a lot of overhead maintaining it, hence you might see all sorts of interesting phenomena along the way.

Might be worth trying to diagnose between just two hosts, ideally with Syncthing turned off on the rest. There are debugging options which increase the log verbosity and may shed some light on what’s going on.
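One way to get that extra verbosity: Syncthing enables per-facility debug logging via the STTRACE environment variable ("model" and "connections" are real facility names; the home path and log filename below are assumptions, adjust them to your own setup). A minimal sketch:

```shell
# "model" traces folder/index activity; "connections" traces connect and
# disconnect events -- useful for the random-disconnect symptom here.
export STTRACE=model,connections

if command -v syncthing >/dev/null 2>&1; then
    # Restart Syncthing with tracing active, capturing the log to a file.
    # The -home path is an assumption; use whatever your instance runs with.
    syncthing -home "$HOME/.config/syncthing" > syncthing-debug.log 2>&1 &
else
    echo "syncthing binary not on PATH"
fi
```

The same facilities can also be toggled from the web GUI when it is reachable.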

Ref: web UI problems. Check the log files and firewall settings, make sure the web UI is listening on the port you’re trying to connect to, and try increasing verbosity.
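For the port check specifically, the GUI listen address lives in the `<address>` element under `<gui>` in config.xml. A sketch against a throwaway stand-in file (on a real machine, point CONFIG at the config.xml in Syncthing’s home directory instead):

```shell
# Stand-in config.xml with the default GUI address, just to show the check.
CONFIG=config.xml
cat > "$CONFIG" <<'EOF'
<configuration>
    <gui enabled="true" tls="false">
        <address>127.0.0.1:8384</address>
    </gui>
</configuration>
EOF

# Extract the GUI listen address Syncthing is actually configured to use.
grep -o '<address>[^<]*</address>' "$CONFIG"

# On the real box, confirm something answers on that port, bypassing the
# browser entirely, e.g.:
#   curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8384/
```

If the configured address and the one you’re browsing to don’t match, that alone explains a “dead” GUI.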

Ref: losing folders. This is certainly alarming, as the shared folders are just entries in the config.xml file. Maybe take a copy of that file while the folders “exist” and compare it to a copy from when they don’t. Possible disk/filesystem errors? It could also be that Syncthing isn’t able to save its config file, so it loses settings when it’s restarted; that’s easy to test. Again, treat this as a separate fault and diagnose it independently.
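The snapshot-and-compare idea could look like this; the file below is a throwaway stand-in with hypothetical folder entries, purely to show the workflow (on the real hosts, CONFIG would be the config.xml in Syncthing’s home directory):

```shell
# Stand-in config.xml: each shared folder is a <folder id="..."> element.
CONFIG=config.xml
cat > "$CONFIG" <<'EOF'
<configuration>
    <folder id="sources" path="/data/sources"></folder>
    <folder id="pictures" path="/data/pictures"></folder>
</configuration>
EOF

# Snapshot while the folders are still listed...
cp "$CONFIG" config-known-good.xml

# ...simulate a folder entry vanishing, then compare the two copies.
# (diff exits non-zero when the files differ, hence the || true.)
grep -v 'id="pictures"' config-known-good.xml > "$CONFIG"
diff config-known-good.xml "$CONFIG" || true

# A write-permission problem would also explain settings lost on restart:
[ -w "$CONFIG" ] && echo "config is writable" || echo "config is NOT writable"
```

If the diff shows `<folder>` entries disappearing between restarts, the fault is in saving the config, not in syncing.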

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.