2M Files ...

So … I’m trying to use this software to replicate data … it seems to work. But …

The data is roughly 2.5M files and 1TB per year. I have about 5 years of it, but currently I’m experimenting with the current and previous year, which keeps changing: roughly 2.6M files and over 1TB in total. Three nodes: one send only, two receive only.

So I decided to try 1.4.0-rc.8. One node updated fine; a second one crashed in the middle of the DB conversion, effectively doubling the data in one shared folder. It is in the scanning state now, and I have no idea what the outcome will be. That node is the source, so I shut down all the destinations to avoid anything bad happening.

What I’ve observed in just a month of usage: A. For some reason, a receive-only node can somehow mark a local folder (just the folder, not the files in it) as locally changed, raising the “local changes” flag: 7 folders, 0 bytes. That is annoying, and it is not possible to clear it.

B. Global database. It seems the original design did not assume this scale. It would be great to have more buttons to manage the DB, like a complete purge, ideally per shared folder.

C. An option to define where the config, DB and logs are located; hooking them to the user profile by default is not really a good option. (Or maybe I just don’t know how.)

UPD: scanning finished … the number of files was wrongly doubled. So what now? Erase the whole DB and config and start from the beginning??? (Technically not a problem, just time-consuming.)

So … I renamed the DB folder and started from the beginning. (I saved the DB and logs, if anyone is interested.)
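The “rename the DB folder” reset above can be sketched like this. The directory here is a throwaway stand-in for Syncthing’s real config directory, and the index folder name `index-v0.14.0.db` is an assumption based on 1.x-era layouts — check what is actually in your config directory before moving anything.

```shell
# Stand-in for the real Syncthing config directory.
config_dir="$(mktemp -d)"
mkdir "$config_dir/index-v0.14.0.db"        # pretend index database

# Move the index DB aside instead of deleting it, so it can be restored.
mv "$config_dir/index-v0.14.0.db" "$config_dir/index-v0.14.0.db.bak"
ls "$config_dir"
```

On the next start, Syncthing rebuilds the index from scratch by rescanning, which is what “started from the beginning” amounts to.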

C) Check the -home command line flag.
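For point C, a minimal sketch of the `-home` flag: it points Syncthing at a single directory for config, keys and index DB, and `-logfile` can send the log there too. The temporary directory is only a stand-in here (the `echo` prints the command instead of actually launching the daemon); use any path you control.

```shell
# Stand-in for the directory you want everything to live in.
SYNC_HOME="$(mktemp -d)"

# Print the invocation that keeps config, DB and log out of the user profile.
echo syncthing -home="$SYNC_HOME" -logfile="$SYNC_HOME/syncthing.log"
```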

All the rest I don’t understand; please post screenshots and/or the simple steps you took, what happened, and how that differs from what you expected.

This seems like a recent defect in metadata tracking. Cosmetic to a certain extent, but odd. And of course I’m not seeing it on any of my devices…

After switching to v1.4.0-rc.x, I also saw this effect with several peers. Since nothing else helped, I always deleted the affected peer on one side. The other instance (or instances) then offers the connection again; after confirming, the database for that peer is rebuilt. I worked through each peer individually that way. With one peer I even had this effect on two instances, which I determined by comparing with the file manager and resolved in the same way.

Right, I see the bug.

FWIW it’s a cosmetic issue that’ll resolve on its own eventually, or when running once with STRECHECKDBEVERY=0 after upgrading to -rc.9.
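The one-off run described above could look like this; the variable name is taken from the post, and the `echo` stands in for actually launching the daemon:

```shell
# Force the full database check on the next (one-off) run.
export STRECHECKDBEVERY=0

# Print the effective invocation rather than starting syncthing here.
echo "STRECHECKDBEVERY=$STRECHECKDBEVERY syncthing"
```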


Another couple of questions. Question 1:

node1 (send only) --> node2 (receive only)
node1 status: red, “Out of Sync”, 8 files
node2 status: syncing, 8 files
All stuck forever …

I don’t quite understand why a send-only folder gets stuck sending files. I assumed it acts as a master (send only) syncing to the others (receive only), overwriting even newer files with the older version. (Or not?) Does a master/slave principle exist in this implementation?

Question 2: a receive-only folder on node2 indicates local additions, but in reality nothing has touched it except Syncthing itself; it is a closed system. How does that appear? Is it a bug? How do I get rid of it? The “Revert Local Changes” button does nothing …

Not really. “Send only” just means do not apply changes from anyone else, “receive only” means do not send changes to anyone else. In both cases there can be changes that are tolerated and not overwritten unless the relevant buttons are pressed.

Especially combining the two can be confusing, as you get two places where changes are filtered and two places where you might need to hit override & revert. I would normally recommend pairing either of the “… only” modes only with the regular “send and receive” mode.

Might be that metadata that Syncthing sets is not properly preserved. Time stamps, permissions, that sort of thing. You would need to look at the details, and perhaps use the ignore permissions and modtime window options if the filesystem does funky stuff.
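As a config fragment, the two options mentioned above might look roughly like this in `config.xml`. This is a sketch: the folder id and path are placeholders, and the option names (`ignorePerms` as a folder attribute, `modTimeWindowS` as a child element) are assumptions based on current Syncthing configs, not verified against 1.4.

```xml
<!-- Hypothetical folder entry; id and path are placeholders. -->
<folder id="data" path="/data" type="sendonly" ignorePerms="true">
    <!-- Treat modification times within 2 s as equal,
         for filesystems with coarse timestamps (e.g. FAT). -->
    <modTimeWindowS>2</modTimeWindowS>
</folder>
```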

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.