So … I’m trying to use this software to replicate data, and it seems to work. But …
The data set is approx. 2.5M files, growing by about 1TB per year. I have around 5 years of data, but currently I’m experimenting with only the current and previous year, which are still changing: approx. 2.6M files and just over 1TB in total. 3 nodes: 1 send-only, 2 receive-only.
So I decided to try 1.4.0-rc8. One node updated fine; the second one crashed in the middle of the DB conversion, effectively doubling the data in one shared folder. It’s in the scanning state now, and I have no idea what the outcome will be. That node is the source, so I just shut down all the destinations to avoid anything bad happening.
What I’ve observed in just a month of usage …

A. For some reason a receive-only node can somehow mark a local folder (just the folder, not the files in it) as locally changed, raising the “local changes” flag: 7 folders, 0 bytes. That’s annoying and impossible to clear.
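(For reference: receive-only folders are supposed to be clearable via the “Revert Local Changes” button, or the equivalent REST call, per the docs. A sketch of the REST variant; the API key and the folder ID `data-2020` are placeholders, not my real ones. It doesn’t seem to help with these phantom 0-byte folder entries.)

```shell
# Ask Syncthing to revert local changes on a receive-only folder.
# Replace <api-key> and the folder ID with your own values.
curl -X POST -H "X-API-Key: <api-key>" \
  "http://localhost:8384/rest/db/revert?folder=data-2020"
```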
B. Global database. The original design doesn’t seem to have assumed this kind of scale. It would be great to have more buttons for managing the DB, like a complete purge, ideally one per shared folder.
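(The closest thing I’m aware of is the all-or-nothing reset from the command line, which drops the whole index database and forces a full rescan; a per-folder variant is exactly what’s missing. Sketch, assuming a standard installation:)

```shell
# Drop the entire index database; Syncthing rebuilds it by rescanning
# everything on the next start. There is no per-folder purge.
syncthing -reset-database
```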
C. An option to define where the config, DB, and logs are located; the default of hooking them to the user profile is not really a good option. (Or maybe I just don’t know about it.)
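(If I read the docs correctly, this can at least be overridden at startup rather than in the GUI; a sketch, with example paths:)

```shell
# Keep config and database out of the user profile by pointing
# Syncthing at a dedicated directory (paths are examples).
syncthing -home=/srv/syncthing -logfile=/var/log/syncthing.log
```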
UPD: scanning finished … the file count is wrongly doubled. So what now, erase the whole DB and config and start from the beginning ??? (Technically not a problem, just time-consuming.)