Synchronization is too slow with file counts exceeding 1,250,000

On a LAN, we have 4 servers, evenly distributed across two data centers, currently holding over 900 GB of data (1,250,000+ files, 1,000+ folders, network speed of 50 Mb/s+, mostly images). These files need to be synchronized in real time to designated directories on all 4 servers, with each file synchronized within 5 seconds. Currently we find that the work done before file transfer even starts takes a significant amount of time, sometimes exceeding 30 minutes without synchronization completing.

I’m not convinced that Syncthing is the right solution for your needs — under 5 seconds is a pretty ambitious goal.

In my small Syncthing cluster, I will sometimes see a single file get synced in under 5 seconds, but that’s a single small file going across a gigabit link that’s typically loaded under 1%. The speed of sync is going to be dependent on a number of factors, including your filesystem watcher settings. Have you experimented with changing the Fs Watcher Delay? The default is 10 seconds. You can find it under Actions / Advanced / Folders, and it’s set per Folder there.
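Besides the GUI, the delay can be changed over Syncthing’s REST config API. This is a minimal sketch, assuming a local instance on the default port `8384`, a placeholder API key, and the folder ID `default` — the JSON field for this setting is `fsWatcherDelayS`:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own API key and folder ID.
API_KEY = "abc123"
BASE = "http://localhost:8384"
FOLDER_ID = "default"

def watcher_delay_patch(folder_id: str, delay_s: int) -> urllib.request.Request:
    """Build a PATCH request that lowers fsWatcherDelayS for one folder."""
    body = json.dumps({"fsWatcherDelayS": delay_s}).encode()
    return urllib.request.Request(
        f"{BASE}/rest/config/folders/{folder_id}",
        data=body,
        method="PATCH",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    )

req = watcher_delay_patch(FOLDER_ID, 1)
# urllib.request.urlopen(req)  # uncomment to apply against a running instance
```

Note that a very low delay also means more frequent, smaller sync batches, which may or may not help depending on how files arrive.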


5 seconds is tough.

1.2M files means any kind of full rescan will take a long time, and no syncs will happen while it runs, even if you rely mostly on the fs watcher for quicker updates.

Consider your specific use case and think about whether all of those files need to be constantly synced, or whether you can make a small transfer directory that’s shared across the 4 machines and have scripts on each side move or copy files into the path where the other 1.2M files are stored.
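The “drain the transfer directory” side of that idea can be sketched as below. The directory names and layout are hypothetical; the point is just to move new files out of the small synced folder into the big tree, preserving relative paths, so Syncthing only ever watches the small folder:

```python
import shutil
import tempfile
from pathlib import Path

def drain_transfer_dir(transfer_dir: Path, dest_root: Path) -> list[Path]:
    """Move every file out of the small synced transfer directory into the
    large destination tree, preserving relative paths. Returns moved paths."""
    moved = []
    for src in sorted(p for p in transfer_dir.rglob("*") if p.is_file()):
        rel = src.relative_to(transfer_dir)
        dst = dest_root / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
        moved.append(rel)
    return moved

# Demo with temporary directories standing in for the real paths:
with tempfile.TemporaryDirectory() as tmp:
    transfer = Path(tmp) / "transfer"   # small folder Syncthing syncs
    dest = Path(tmp) / "images"         # large 1.2M-file tree, not synced
    (transfer / "2024").mkdir(parents=True)
    (transfer / "2024" / "cat.jpg").write_bytes(b"...")
    moved = drain_transfer_dir(transfer, dest)
    arrived = (dest / "2024" / "cat.jpg").exists()
    print(moved, arrived)
```

In practice you’d run this periodically (or from an inotify watcher) on each of the 4 machines, so the folder Syncthing scans stays tiny regardless of the total data size.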

Yes, currently we use self-developed tools to scan attachment records and do targeted file transfers between the servers. A more optimal solution might be shared storage, such as Ceph.

I guess if you’re considering Ceph as an alternative, I really don’t understand the use case. I’m using Ceph as a redundant object store/filesystem, with Syncthing on top of CephFS to sync multiple systems with the Ceph file system.

If Ceph works for you (which requires a reliable, always-on, high-speed link between all of your systems) and the files are small, why not use a central server and access the files remotely?

Anyway, as I said, I don’t understand the use case.