I see this question asked pretty often and it is covered in the FAQ, but I would like to make it a bit more specific, to understand whether Syncthing fits my use case or whether it is not advisable.
I have a huge directory which I would like to mirror (The Backup further in the text) to two or three remote servers, one of which is on my LAN and the others at other locations I have access to. The backup process is infrequent, once a week, with data only ever being added. Basically, it is a sort of archive for our team's work projects, which are also synchronized via git or Syncthing, or both (more on that later). The projects contain file types such as HTML, CSS, JS, PSD, Sketch, images and other office-type files. The weekly addition is expected to be within 10 GiB in size, and The Backup is cleaned up annually by moving about half of its contents to a non-synchronized server with SSH access (The Archive further in the text), so that we keep only this year's and last year's projects.
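To make the annual cleanup concrete, here is a minimal sketch (in Python) of what I have in mind for moving old projects from The Backup to The Archive over SSH. The host name, the paths and the one-directory-per-year layout are assumptions for illustration, not my actual setup, and it relies on rsync being available on both ends:

```python
#!/usr/bin/env python3
"""Rough sketch of the annual cleanup: move old projects from The Backup
to The Archive over SSH. Paths, host and the year-per-directory layout
are made up for illustration only."""

import subprocess
from datetime import date
from pathlib import Path

BACKUP_ROOT = Path("/srv/backup")            # hypothetical local root of The Backup
ARCHIVE_DEST = "archive-host:/srv/archive/"  # hypothetical SSH destination (The Archive)
KEEP_YEARS = {date.today().year, date.today().year - 1}

for year_dir in sorted(BACKUP_ROOT.iterdir()):
    # Assumes projects are grouped into top-level directories named by year.
    if not year_dir.is_dir() or not year_dir.name.isdigit():
        continue
    if int(year_dir.name) in KEEP_YEARS:
        continue  # keep this year's and last year's projects in The Backup

    # Copy to The Archive over SSH; rsync deletes each source file once it
    # has been transferred successfully.
    subprocess.run(
        ["rsync", "-a", "--remove-source-files",
         f"{year_dir}/", f"{ARCHIVE_DEST}{year_dir.name}/"],
        check=True,
    )
    # rsync --remove-source-files leaves empty directories behind; clean them up.
    subprocess.run(
        ["find", str(year_dir), "-type", "d", "-empty", "-delete"],
        check=True,
    )
```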
The size of The Backup directory is expected to be less than 100 GB, but the number of files is unpredictable. With all that said, I have these questions:
- Is this design encouraged by Syncthing's architecture, or is it just a specific use case I have to test on my own? There's an "Is Syncthing my ideal backup application?" section in the FAQ, but it barely answers this question. It looks like a really useful scenario to me, since I want the directory to be synced as I change it, preferably automatically. That seems especially useful with remote hosts, which are harder to manage.
- Does the frequency of file changes make any difference to performance? I would like to use my old home server (Pi-like performance: Arch Linux on an Intel Atom 230, 2 × 1.6 GHz, with 1 GB of RAM, of which only about 150 MB is currently in use). In my tests, a 1 GB directory with frequent changes works very well. Will it work similarly well at 100 GB with less frequent changes, i.e. just 1 GB added once in a while to the root of the directory?
- Will the number of writes to the disk be significant in that case? Having The Backup directory synced is very useful, but if the number of writes is significant (which I assume wears disks out a bit quicker, am I correct?), then I should redesign the architecture to move directories from The Backup to The Archive more often, since the latter just lives on a remote server.
- The git syncing question is a bit off-topic from my general question, but I'll put it here as well. Some of my projects use git and some do not. If using git with Syncthing is strongly discouraged, I will have to redesign my directory layout so that Syncthing handles only the git-free projects, which would complicate my system significantly. I tested Syncthing and git with basic markdown notes and haven't found any issues so far, but that's not a proper test of this system (a rough verification sketch follows after this list). If I am the only one working in those git directories, would it still be discouraged to use git? Would it be discouraged if I work from only one machine, back up the directories with git in the aforementioned way, with the Folder Type setting set to Receive Only on the remotes, and push changes only from the original machine? If I understand the algorithm correctly, in that setup there is no way a remote Syncthing copy can mess up the working git directory on my original machine. I found a thread with a very similar question, Sync Large Directory with Raspberry Pi3 (Deadlock?), but it's almost 3 years old, so things could have changed a little.
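Regarding a more thorough test of git over Syncthing, this is roughly what I would run on a receive-only copy after a sync cycle: `git fsck` in every repository, to catch any corruption introduced during syncing. The root path is hypothetical; it's just a sanity-check sketch under my assumptions, not anything prescribed by the Syncthing documentation:

```python
#!/usr/bin/env python3
"""Sketch of a more thorough git-over-Syncthing check than my markdown-notes test:
run `git fsck` in every repository inside the receive-only copy after a sync.
The root path is hypothetical."""

import subprocess
from pathlib import Path

RECEIVE_ONLY_COPY = Path("/srv/backup")  # hypothetical root of the receive-only folder

for git_dir in RECEIVE_ONLY_COPY.rglob(".git"):
    repo = git_dir.parent
    # `git fsck --full` verifies the connectivity and validity of all objects,
    # so corruption introduced while syncing the repository would show up here.
    result = subprocess.run(
        ["git", "-C", str(repo), "fsck", "--full"],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "BROKEN"
    print(f"{status}\t{repo}")
    if result.returncode != 0:
        print(result.stderr)
```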
I believe it shouldn't make a difference, but for this use case The Backup is accessed from macOS and Linux machines only, and The Archive runs on a Linux machine, which may later be switched to FreeBSD.
P.S. The forum forced me to remove most of the links because I'm a new user, so I kept just two and dropped the links to the documentation mentioned in the post.