After internally testing ST for about 2 yrs on a variety of smaller implementations (100-200 GB), I'm trusting it now. So I'm looking to find out which setup would offer better performance for this use case.
I've run into a use case that Resilio can't handle: 2.76 TB, 1.2 million files, 400k folders across 10 Folders per server. Resilio's DB now crashes; it's simply too large. It was working well up to about 2 TB, with bidirectional syncing within 30-60 seconds. Enter ST. If I want that sort of performance, do you think the filesystem watcher in base Syncthing, SyncTrayzor, or GTK is better, or is it all the same watcher with just a different GUI? The ST doc site doesn't really elaborate on that.
If Resilio gets stuck and stops working, there is a performance problem somewhere: hardware, network, etc. I run almost this scope on different devices with Resilio and it has been running perfectly for a long time.
But on one of my servers I noticed that Resilio achieved that speed and reliability through enormous resource use across the entire hardware chain. I switched everything on this device over to Syncthing and it has been going really well since then. Nothing stalls anymore, the device is less loaded, and response times are basically just as fast.
I've been running Resilio (formerly BitTorrent Sync) for a long time (maybe 6-8 yrs; I'm getting older and don't recall) and until I went over 2 TB it was good. I tried a few things and worked with support, but it didn't help. I don't care about Resilio anymore. Let's focus on ST.
How are you running ST (base/SyncTrayzor/GTK)?
Are you running a bidirectional setup? I plan to.
FYI: I have ST in a test phase with the above setup and it takes about 1-2 min per change. Not bad, but not as good as Resilio. I want faster sync.
Just like me… I think it's a shame what is happening at Resilio. It worked great for years and now they're letting it languish. It doesn't matter to me anymore, but I do see Syncthing as an alternative. So, more about that:
I have several Synology servers and Windows 10 computers and run Resilio and Syncthing in parallel on all devices, except the Android devices, on which the Resilio app freezes on newer Android versions. So that Resilio and Syncthing can work in the same directories, I put together a set of ignore patterns for the temporary exchange files of Resilio and Syncthing and added them to both ignore lists ("IgnoreList" for Resilio, ".stignore" for Syncthing). One server also runs with Syncthing only.
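For reference, a sketch of what such mutual ignore lists could look like. The exact patterns below are my own guess at the typical metadata and temp-file artifacts of each tool, not the actual lists from this setup, so verify against what each program really creates in your folders:

```
# .stignore (Syncthing side) - hide Resilio's metadata and temp files
.sync
*.!sync

# IgnoreList (Resilio side) - hide Syncthing's metadata and temp files
.stfolder
.stversions
.stignore
.syncthing.*.tmp
~syncthing~*.tmp
```

The key point is symmetry: each tool must ignore the other's working files, or they will endlessly re-sync each other's temporary artifacts.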
This also gives me a good comparison of speeds, and my experience is that there are no major differences in latency between Resilio and Syncthing. Both are roughly the same, with a slight advantage for Resilio, but I consider that immaterial; it is mostly in the seconds range. Sometimes Syncthing is even faster. I think it's best if you run some tests of your own.
Now to your questions:
On all Windows computers I start Syncthing via the Task Scheduler and access it normally via the browser. I don't use any of the tray tools etc.
On my Syncthing installations I have 6 of 48 peers set to "Send Only". The 42 bidirectional ones don't cause any problems; one of the six others causes a bit of grief, as there are always a few files that don't get balanced, and I don't know why.
To get an idea of the timespans involved it helps to know how syncing works in Syncthing. For that I recommend reading https://docs.syncthing.net/users/syncing.html. The third paragraph of the "Scanning" section covers the intentional delay when watching for changes (configurable to a degree).
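As a toy illustration of that delay (a simplified model of my own, not Syncthing's actual code): changes seen by the watcher are accumulated and flushed together once the configured delay (fsWatcherDelayS, default 10 s) has passed since the first event, so a burst of edits triggers one scan instead of many.

```python
def aggregate(events, delay=10):
    """Group (timestamp, path) change events into batches: a batch is
    flushed once `delay` seconds have passed since its first event."""
    batches, current, start = [], [], None
    for t, path in sorted(events):
        if start is None:
            start = t
        if t - start >= delay:
            batches.append(current)   # flush the accumulated batch
            current, start = [], t    # start a new batch at this event
        current.append(path)
    if current:
        batches.append(current)
    return batches

# Three quick edits followed by one later edit -> two scans instead of four
print(aggregate([(0, "a.txt"), (2, "b.txt"), (3, "a.txt"), (15, "c.txt")]))
# -> [['a.txt', 'b.txt', 'a.txt'], ['c.txt']]
```

This is why a flurry of saves to the same document shows up on the other side as one sync, roughly ten seconds after the first save.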
Nowadays the wrappers do not influence change detection (at least they have no reason to, as Syncthing does it on its own). They are mainly an extended UI. So it all depends on your use case. I run it as a service that I almost never look at, so no UI wrapper.
Resilio also uses a delay, defined in the file "FileDelayConfig", and its default value is likewise 10 seconds. By default the delay for all predefined file types is 10 seconds, and you can modify it per type. By the way, GoodSync uses this feature too, to avoid conflicts and problems. I think it should be handled with care.
The delay makes sense, since it lets you set how long to wait before certain types of files are synchronized to other devices. This is helpful if you are modifying and syncing a file at the same time, for example when editing an Office document located in a synced folder. In that case, a bigger delay prevents the Office and sync processes from conflicting with each other.
Good to know the GUI wrappers make no difference in speed. I'll switch to a service + browser setup (so I don't have to worry about restarts with no login).
The delta can vary from 10 MB to 1 GB between hourly syncs.
I've left the scan interval at the default 1 hour. I'm nervous about lowering it, since…
The issue is that while the hourly scan is happening, transfers pause for about 5-6 minutes vs the normal 1-2 minutes. I wish there were a way to speed this up. The VM has 5 cores and 32 GB RAM (339 MB used by ST; why not cache the index DB in RAM?), and Perfmon shows 1 MB/s of activity and disk queues of 0.01. ST just isn't hammering the server.
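One knob that might be worth experimenting with (whether it helps depends on where the scan time actually goes) is the per-folder hashing parallelism in Syncthing's config.xml. A sketch of what that folder entry could look like, with illustrative id/path values of my own:

```
<folder id="data" label="Data" path="D:\Data" rescanIntervalS="3600">
    <!-- parallel hashing routines for this folder; 0 lets Syncthing decide -->
    <hashers>4</hashers>
</folder>
```

If the scan is I/O-bound rather than CPU-bound, raising this may not move the needle, so measure before and after.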
I read the “Syncing” section before I originally posted.
fsWatcherDelayS = 10 sec (the default)
What I mean is the following:
When a ST folder isn't currently doing a "Rescan", the system detects and syncs changes within 1-2 minutes.
When a ST folder is in the "Rescan" process (and the UI shows "Scanning"), the system takes around 5-6 min to detect a change and replicate it.
I'm debating increasing the "Rescan" interval to 6-8 hrs and relying on the filesystem watcher. Feedback?
The documentation mentions: "Even with watcher enabled it is advised to keep regular full scans enabled, as it is possible that some changes aren't picked up by it". When is it possible that "some changes aren't picked up"? I'd like to know those use cases.
If your filesystem scan takes that long, I believe extending the rescan interval is sensible (I just don't do it myself because I have no incentive to change the defaults, as they just work for me, aka laziness).
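If you go that route, it comes down to a couple of folder settings in config.xml (the values here are illustrative: 21600 s = 6 h, and the id/path are placeholders of mine):

```
<folder id="data" label="Data" path="D:\Data"
        rescanIntervalS="21600" fsWatcherEnabled="true" fsWatcherDelayS="10">
</folder>
```

The same settings are also exposed per folder in the web GUI under Edit → Advanced, which avoids hand-editing the XML.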
Filesystem notifications are backed by different backends on different systems. The upstream notification library is a really nice piece of software, but it does have a few corner-case limitations here and there, so it's prudent to have a fallback. I believe that on Linux, unless you run into the system limit, watching should work just fine.
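On Linux the relevant system limit is the inotify watch count: one watch is needed per directory, so a tree with ~400k folders blows well past common defaults. A sketch of checking and raising it (the sysctl key is standard; pick a value that comfortably exceeds your directory count):

```shell
# Current limit (often 8192 or 65536 by default, distro-dependent)
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running system
sudo sysctl -w fs.inotify.max_user_watches=1048576

# Persist across reboots
echo "fs.inotify.max_user_watches=1048576" | sudo tee -a /etc/sysctl.conf
```

If the limit is hit, Syncthing typically falls back to periodic scanning for the affected folder rather than failing outright.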
I don't know whether it's documented (I hope it is): in the case of deletions or repeated changes to the same file, the delay is increased to 6x the configured delay, capped at 1 min.
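In other words (a trivial sketch of the rule just described; the function name is mine, and the 6x multiplier and 60 s cap come from the behaviour stated above):

```python
def effective_delay(configured_delay_s, escalated=False):
    """Effective watcher delay: deletions or repeated changes to the same
    file escalate to 6x the configured delay, capped at 60 seconds."""
    if not escalated:
        return configured_delay_s
    return min(6 * configured_delay_s, 60)

print(effective_delay(10))                  # normal change -> 10
print(effective_delay(10, escalated=True))  # deletion/repeat -> 60 (6*10, at the cap)
print(effective_delay(5, escalated=True))   # -> 30
```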
I wish I knew what the limitations were. I'm planning to go to production on a very large file server where all changes are made via SMB shares (hundreds of files open). Are the limitations around that? Thanks!
You can watch a local drive; you cannot watch a share. So if the SMB (or similar) shares are backed by a local drive on the machine running Syncthing, you are watching a local drive and it works. But if the drive letter is a network mapping, watching will fail, as it is not a local drive.