Hello, I have been following the project for a long time, and now I think it is the right moment to start using it.
The topology I want to use has four servers. I will not use relays, automatic discovery, etc. Everything will be static: IP addresses, ports, and so on. The only significant change I made is fsWatcherDelayS=“2” on every server. The synced folder is 300GB and growing, mostly small files of around 1-20MB each.
The folder is mounted via NFS on both “Server 1” and “Server 2” from “Fileserver 1”. This will be the case 99% of the time. Both servers will read from and write to this folder.
If “Fileserver 1” goes offline, the NFS share will be remounted automatically on “Server 1” and/or “Server 2” from “Fileserver 2”. If “Fileserver 2” goes down too, then “Server 1” and “Server 2” will start reading and writing to their local synced folders. When one of the Fileservers is back online, the folder will be remounted from it.
Both “Server 1” and “Server 2” will have monitoring scripts that use Syncthing API calls to check the current status of every server. The remount scripts will use the results from these checks for the final decision. That is the theory, at least.
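Roughly, the remount decision I have in mind looks like this (only a sketch in Python; the hostnames, export path, mount point and the local_folder_synced flag are placeholders for what my real scripts will use):

```python
import subprocess

# All of these are placeholders for this sketch; the real scripts will use
# our actual hostnames, export path and mount point.
FILESERVERS = ["fileserver1.example.local", "fileserver2.example.local"]
EXPORT_PATH = "/export/shared"
MOUNT_POINT = "/mnt/shared"


def host_is_up(host: str) -> bool:
    """Treat a fileserver as alive if it answers a single ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def mount_nfs(host: str) -> bool:
    """Try to mount the shared folder from the given fileserver."""
    result = subprocess.run(
        ["mount", "-t", "nfs", f"{host}:{EXPORT_PATH}", MOUNT_POINT]
    )
    return result.returncode == 0


def choose_storage(local_folder_synced: bool) -> str:
    """Decide where Server 1 / Server 2 should read and write.

    Prefer Fileserver 1, fall back to Fileserver 2, and only fall back to
    the local Syncthing folder when both fileservers are unreachable and
    the Syncthing API check says the local copy is up to date.
    """
    for host in FILESERVERS:
        if host_is_up(host) and mount_nfs(host):
            return f"nfs:{host}"
    if local_folder_synced:
        return "local"
    raise RuntimeError("no fileserver reachable and local folder not in sync")
```

Unmounting a stale mount and actually switching the working directory are left out of the sketch; that part is handled separately.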
So far I have not had any problems in the test environment, but I would be pleased if someone could share thoughts or advice on this kind of topology.
My questions are:
What potential problems may I have with this topology and logic?
Any best practices to follow for this kind of sync?
About the API checks: for now I think “/rest/db/completion” (“state” and “needBytes”) will give enough info about the current state of the cluster. Is there anything else I can check to be sure that the folder is synced?
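For reference, this is the kind of check I am experimenting with (a minimal sketch; the address, API key and folder ID are placeholders, and I am reading “state” from “/rest/db/status” and “needBytes” from “/rest/db/completion”):

```python
import requests

# Placeholders for this sketch; each server uses its own address,
# API key and folder ID in the real check.
SYNCTHING_URL = "http://127.0.0.1:8384"
API_KEY = "changeme"
FOLDER_ID = "shared-folder"
HEADERS = {"X-API-Key": API_KEY}


def folder_is_synced() -> bool:
    """Return True when the folder is idle and has no bytes left to fetch."""
    status = requests.get(
        f"{SYNCTHING_URL}/rest/db/status",
        params={"folder": FOLDER_ID},
        headers=HEADERS,
        timeout=5,
    ).json()

    completion = requests.get(
        f"{SYNCTHING_URL}/rest/db/completion",
        params={"folder": FOLDER_ID},
        headers=HEADERS,
        timeout=5,
    ).json()

    return status.get("state") == "idle" and completion.get("needBytes", 1) == 0


if __name__ == "__main__":
    print("in sync" if folder_is_synced() else "not in sync")
```

The remount scripts would only fall back to the local folders when this check reports “in sync” on the server in question.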
I don’t understand your combination of Syncthing and NFS, but I can say that best practices don’t include folders on NFS. If you can run Syncthing where the files actually reside, that’s better.
Hello, thank you for the answer.
I need the NFS because I need a shared folder for “Server 1” and “Server 2”. What problems can occur with Syncthing and NFS shared folders?
Syncthing will be on all four servers; I do not understand what you mean by “If you can run Syncthing where the files actually reside”.
The Syncthing instances on “Server 1” and “Server 2” are “deep cold” backups, used only if both Fileservers are gone, which is highly unlikely (they are in two different datacenters).
You should not point multiple instances of Syncthing at the same underlying folders, because you will have hell with the two instances stepping on each other’s toes.
You should not point Syncthing at network-mounted storage either, as quite a few features we use are not available on network filesystems, so you might end up with abysmal performance.
Hello, maybe I was not clear enough about the structure.
The folder which is NFS-mounted on “Server 1” and “Server 2” from “Fileserver 1” is not the folder that will be shared with the Syncthing cluster on “Server 1” and “Server 2”. Every Syncthing node will “work” with its own local directory.
I’m starting stress tests of this configuration and will give you updates.
Hi. My setup is similar to what you’re doing. I have three servers in three different locations. Each server syncs to each of the other servers.
This means that each server has what all the other servers have. No need for shares between the servers.
I have other computers on the network, so I set up SMB shares on the servers so that these other computers can access the files. In this case the shares act as the storage: no files are kept on the computers themselves, which keeps the workstations lean.
As calmh mentions, there shouldn’t be a need for both shares and Syncthing, unless the shares exist because the servers on the left don’t have the space that the fileservers on the right have.