I have 3 computers, A, B, C.
A is the master, running Ubuntu 14.04; files are loaded onto it, kept for a week, and then deleted. Files range from 500 MB to 5 GB in size.
B and C (both Windows) sync these files, and a separate script runs on each to move the files out to a permanent location (so the copies aren’t deleted when A deletes the originals).
B is on a faster internet connection and operates as expected: syncs and moves.
C is on a slower connection but seems to accumulate a large number of .tmp files that never finish downloading. I suspect this is because B moves its finished copy out while C is still downloading. Could this be confusing C and stopping its download? The source file is still available on A.
Here is the log snippet:
[HRQL3] 23:25:38 INFO: Puller (folder "xxxxxxxxx", file "test\test.bin"): pull: peers who had this file went away, or the file has changed while syncing. will retry later
[HRQL3] 23:25:38 INFO: Puller: final: peers who had this file went away, or the file has changed while syncing. will retry later
Checking the UI on C, it says Listeners: 2/2 and Syncing (28%) from A, but there is no download activity. The local folder says “Up to Date”.
If B moves the file out of the directory, that’s considered a modification and is propagated back to A and C, asking them to delete the file. Tmp files are kept around for a day in case the file reappears.
Setting A as master doesn’t mean “all other nodes must keep the files exactly as I have them, no matter what other nodes say” but rather “I don’t care; if other nodes change something, I will keep my copy as it is”.
So B deleting the files will tell C that they are deleted. Since A doesn’t override that (which would cause B to download them again), C will act on the last information it receives (the update from B).
Thanks for the info, guys. That makes sense.
So the only way for this to work is to separate B and C, and establish only A-B and A-C relationships, correct?
Just trying to understand the workings of Syncthing here:
It looks as if C is trying to get some files from B, doesn’t it?
Maybe because B is behind a faster connection, so it helps C by sharing the (parts of the) files it already has. C would not have to download them from A.
So instead of splitting into A-B and A-C relationships, the mesh of A-B-C could be kept if
B did not move/delete its copy right away. The copies would be deleted eventually anyway, when A deletes the originals.
So if space is not a constraint but bandwidth and/or speed is, then the mesh would be the way to go. Correct?
@maelcum Yes, my original setup was to permit bandwidth sharing across all 3 computers to speed up the sync for both B and C.
And yes, the mesh setup could be kept if B did not move its copy out, but my scripts are all designed to fire immediately upon seeing a completed file, to preserve their own copies before A deletes the originals.
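For anyone curious, a mover script along those lines could be sketched roughly as below. This is a minimal illustration, not my actual script; the folder paths are placeholders, and the `.syncthing.`/`~syncthing~` temp-name patterns are assumptions that may vary by Syncthing version.

```python
import shutil
from pathlib import Path


def is_temp(p: Path) -> bool:
    """Guess whether a file is an in-progress Syncthing download.

    Assumption: Syncthing names partial downloads with a temp prefix
    (or a .tmp suffix); the exact pattern depends on the version.
    """
    return p.name.startswith((".syncthing.", "~syncthing~")) or p.suffix == ".tmp"


def move_completed(sync_dir: Path, archive_dir: Path) -> list[Path]:
    """Move every completed file out of sync_dir into archive_dir,
    preserving the relative folder layout. Returns the new paths."""
    moved = []
    for p in sorted(sync_dir.rglob("*")):
        if p.is_file() and not is_temp(p):
            dest = archive_dir / p.relative_to(sync_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            # This move is exactly what gets propagated to the other
            # nodes as a deletion, which is what bites C in this setup.
            shutil.move(str(p), str(dest))
            moved.append(dest)
    return moved
```

The move at the end is the step that causes the trouble described above: to the cluster it looks like a delete.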
Finally, yes: if you do not plan to delete files during the initial sync, the mesh will give you the best performance.
Thanks for the clarification!
I don’t suppose you could get away with hard links instead of the move?
This is what I do with completed torrents:
Create a hard link in a folder named completed, then rename and move as required. This way I can continue to seed without jeopardising my folder structure and naming conventions.
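In Python that idea is a one-liner around `os.link`: create a second directory entry for the same data instead of moving it, so the synced copy stays in place for the other nodes. A minimal sketch (hypothetical paths; hard links require source and destination on the same volume, NTFS included):

```python
import os
from pathlib import Path


def archive_via_hardlink(src: Path, archive_dir: Path) -> Path:
    """Link a finished file into archive_dir instead of moving it.

    The synced copy keeps existing, so Syncthing sees no change and
    keeps serving the file to slower peers.
    """
    archive_dir.mkdir(parents=True, exist_ok=True)
    dest = archive_dir / src.name
    os.link(src, dest)  # both paths must be on the same filesystem
    return dest
```

When A later deletes the original and that delete propagates, only the synced directory entry disappears; the archived link still points at the data.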
You probably could use hard links. It’s not the end of the world for me to separate the B and C connections, but I might play around with it academically.
Thanks for the suggestion!