Is it more efficient to set up one or two servers or a massive web of computers?

In short: I have files I want to synchronize between several computers, and I intend to leave one (or two) of them on 24/7. Does it require fewer system resources to sync only with the always-on machine(s)? Alternatively, I could configure them all to talk to each other, but it seems this giant web would require more work to check whether the files are in sync across all the computers. It also seems like the full web is the optimum way to get LAN sync (is this true)?

I have and use several computers and I started using Syncthing to keep files related to settings (and bash scripts) in sync between them. I intend to always have one computer running at home, at school, and at work to act as something like a server. I am wondering whether it would be a good idea to sync only with those 3 always-on machines rather than with every single computer (3 laptops, 7 desktops, some of them dual boot too). Syncing everything in a giant web seems to take a while, but perhaps someone knows for sure and I need not guess.

Yes, connecting to fewer nodes will be less resource intensive, but conflicts will be more likely to happen, so it's a trade-off.

You could have everything syncing with everything, which would make conflicts less likely and transfer speeds faster, but would require more resources.
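
To make the resource side of that trade-off concrete, here is a rough back-of-the-envelope comparison of how many device-to-device connections each topology maintains. The device counts are just assumptions based on the machines you listed; the point is only that a full mesh grows roughly quadratically while a hub-and-spoke layout grows roughly linearly.

```python
# Rough illustration of connection counts per topology (device counts assumed).

def full_mesh_connections(n):
    # Every device connects to every other device: n choose 2.
    return n * (n - 1) // 2

def hub_and_spoke_connections(n_devices, n_hubs):
    # Each non-hub device connects only to the hubs; the hubs also connect to each other.
    spokes = (n_devices - n_hubs) * n_hubs
    hub_links = n_hubs * (n_hubs - 1) // 2
    return spokes + hub_links

devices = 10  # e.g. 3 laptops + 7 desktops (assumed)
hubs = 3      # the always-on machines at home, school and work

print("full mesh:    ", full_mesh_connections(devices))              # 45 connections
print("hub and spoke:", hub_and_spoke_connections(devices, hubs))    # 24 connections
```

Each connection carries its own index exchange and TLS session, so fewer connections means less overhead per device, but as noted above, updates then have to hop through a hub and concurrent edits are more likely to conflict.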

Not sure if there is another question here that this does not answer.

What about speed: will it be faster to sync from a workstation to a server, and then on to another workstation? It seems incredibly slow having everything synced with everything (even leaving it running overnight, files do not get copied over).

I don't know, it depends on link speeds, but files being available on multiple devices means they can be fetched from multiple devices at once.

I am noticing incredibly slow sync speeds between two computers directly wired to the same ethernet hub. For example, 11 hours ago I added ~200 MB of files to a directory I am syncing and they still have not transferred. Using rsync, this would transfer in well under a minute.

I thought this speed hit must be because I had set up every computer to sync with every other one and it is struggling to propagate updates. I am not really sure of the cause, though.

It depends on the machines and the number of files.

If one of them is some NAS with a calculator-grade CPU, it could be that crypto is just killing it.

Also, many small files means there is a lot of metadata associated with the files, which can take a while to process. In that case you might want to increase the number of copiers and pullers in the advanced folder config on the receiving side (there is a sketch below showing one way to check and change these).

If it's a small number of large files then this does sound unusual; double-check that you don't have rate limiting enabled.
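
If you would rather check those settings without clicking through the GUI, here is a rough Python sketch against Syncthing's REST API. It assumes the /rest/system/config endpoint and the "copiers"/"pullers" folder keys and "maxSendKbps"/"maxRecvKbps" options keys that the config used around this time; endpoint and key names may differ in your version, so treat it as a starting point rather than a recipe.

```python
# Sketch: inspect (and optionally bump) copiers/pullers and rate limits via the REST API.
# Assumptions: the GUI listens on 127.0.0.1:8384, the API key comes from the GUI settings
# page, and the config is exposed at /rest/system/config with "copiers", "pullers",
# "maxSendKbps" and "maxRecvKbps" keys -- verify these against your Syncthing version.
import json
import urllib.request

API_KEY = "YOUR-API-KEY"   # placeholder: GUI -> Settings -> API Key
BASE = "http://127.0.0.1:8384"

def api(path, data=None):
    req = urllib.request.Request(BASE + path, data=data,
                                 headers={"X-API-Key": API_KEY})
    if data is not None:
        req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

cfg = api("/rest/system/config")

# Rate limits of 0 mean "unlimited"; anything else would throttle transfers.
print("maxSendKbps:", cfg["options"]["maxSendKbps"])
print("maxRecvKbps:", cfg["options"]["maxRecvKbps"])

for folder in cfg["folders"]:
    print(folder["id"], "copiers:", folder.get("copiers"), "pullers:", folder.get("pullers"))
    # Example tweak (uncomment to apply): more parallel workers on the receiving side.
    # folder["copiers"] = 2
    # folder["pullers"] = 32

# Uncomment to push the modified config back (may need a restart to take effect):
# api("/rest/system/config", data=json.dumps(cfg).encode())
```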

I do not have rate limiting enabled. The machines are two standard desktops (one an ~8-year-old Core 2 Duo, the other a ~4-year-old AMD Phenom II) and the files were 19 MP3 files.

I saw another post where you mention that having the web UI open slows down the transfer rate, so I closed it and the sync seems to have completed (I had opened it to verify the application was running and to monitor progress). Is there a better way to monitor the status of the synchronization?

No, the web UI is the right tool. Check for a CPU, RAM, or I/O bottleneck, which could be the cause.
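
That said, if you want something lighter than keeping a browser tab open, here is a minimal sketch that polls progress through the same REST API the web UI talks to. The folder and device IDs are placeholders, the API key comes from the GUI settings, and I am assuming the /rest/db/completion endpoint here, so check it against the API docs for your version.

```python
# Sketch: poll sync progress from the REST API instead of keeping the web UI open.
# Assumptions: default GUI address 127.0.0.1:8384, an API key from the GUI settings,
# and the /rest/db/completion endpoint reporting per-folder/per-device completion.
# FOLDER_ID and DEVICE_ID below are placeholders.
import json
import time
import urllib.parse
import urllib.request

API_KEY = "YOUR-API-KEY"
BASE = "http://127.0.0.1:8384"
FOLDER_ID = "default"
DEVICE_ID = "REMOTE-DEVICE-ID"

def completion(folder, device):
    query = urllib.parse.urlencode({"folder": folder, "device": device})
    req = urllib.request.Request(BASE + "/rest/db/completion?" + query,
                                 headers={"X-API-Key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Print the completion percentage every 30 seconds until the remote device is in sync.
while True:
    status = completion(FOLDER_ID, DEVICE_ID)
    print("completion: %.1f%%" % status["completion"])
    if status["completion"] >= 100:
        break
    time.sleep(30)
```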