The script should make Syncthing revert local changes in your folder of choice. The folder is defined in the third line by its ID.
If I were you and wanted to use the script, I’d make sure to test it thoroughly with test data first, and only apply it to the actual Syncthing data once you’re 100% certain that everything works as intended. As always, making a full backup is recommended before any such experiments.
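For reference, a revert like the script above performs can be driven through Syncthing’s REST API: `POST /rest/db/revert` reverts local changes in a receive-only folder. Here is a minimal Python sketch; the GUI address, folder ID, and API key below are placeholders you’d substitute with your own values:

```python
import urllib.request

def build_revert_request(base_url: str, folder_id: str, api_key: str):
    """Build (but do not send) the REST call that tells Syncthing to
    revert local changes in a receive-only folder.  The endpoint is
    POST /rest/db/revert?folder=<id>; the API key goes in X-API-Key."""
    url = f"{base_url}/rest/db/revert?folder={folder_id}"
    req = urllib.request.Request(url, method="POST")
    req.add_header("X-API-Key", api_key)
    return req

# Placeholder values -- use your own GUI address, folder ID, and API key.
req = build_revert_request("http://localhost:8384", "abcde-fghij", "YOUR_API_KEY")
# urllib.request.urlopen(req)  # uncomment to actually perform the revert
```

Note the send itself is left commented out, so you can inspect the request before pointing it at real data.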
Yeah, I hear you, I do, but tell that to the chain of: Microsoft, Google, Ubuntu, SMB, Syncthing itself, someone sneezing on a hard drive so the whole chain suddenly stops, etc. Those of us who kept requesting this feature did not arrive at this need for a robust one-directional master-slave shadow-copy-via-torrent feature because we were lazy…
Whatever… I’ll try to find time to figure out and maintain my own Syncthing fork. I just want to finally protect my data in a way that actually works, because as I elaborated earlier, there is literally nothing else out there that does this as well as Syncthing.
A conflict means the master cannot properly overwrite the slave file. And please read and really understand the plight instead of being reductionist and oversimplifying the other person’s more complex, multifaceted text.
Let me simplify it myself here: a master-slave feature means the master overwrites the slave no matter what (and the slaves can continue to keep their local changes if they want, through the standard .stversions procedures, thus not interfering with the “don’t lose data” mantra of Syncthing).
And I was here looking for some progressive or understanding attitude, not for 2-second examples of why nothing should change. I/we have been repeatedly clear enough, and will not respond any longer unless I can see posts really thinking about where ppl like me might be coming from.
I understand what you want, which is some sort of continuous rsync --delete for backup reasons.
However, that is not what Syncthing does or aims to do; it’s not a backup application, so it’s simply not the right application for what you want to do. Yes, you can pretend to use it for backups, but it’s not meant for that. And we have no interest in being in that space; there are already plenty of players there.
Perhaps restic or some other tool meant for actual backups would be better.
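For anyone unfamiliar with the `rsync --delete` comparison above: it makes the destination an exact copy of the source, removing anything extra on the destination side. A toy Python sketch of those same semantics (directory names are made up; type changes between file and directory are not handled), just to make the behaviour concrete:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: make dst an exact copy of src, deleting anything
    in dst that src does not have -- roughly the semantics of
    `rsync -a --delete src/ dst/`."""
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Remove anything on the destination side that the source lacks.
    for p in dst.iterdir():
        if p.name not in src_names:
            if p.is_dir():
                shutil.rmtree(p)
            else:
                p.unlink()
    # Copy/overwrite everything from the source side.
    for p in src.iterdir():
        target = dst / p.name
        if p.is_dir():
            mirror(p, target)
        else:
            shutil.copy2(p, target)
```

Run repeatedly (e.g. from cron), this gives the “continuous one-way overwrite” behaviour being asked for, though without versioning or conflict handling of any kind.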
Even if you still insist that syncing + versioning != backup, it’s a transfer tool either way: people use Syncthing to sync their backups for redundancy – which is why you also need a one-way master-slave mode, so the transfer can be reliable.
You keep saying “just use these other tools” – you realize everyone who came to Syncthing got here after beating themselves bloody against the brick walls of everything else that exists under the open source sun, right? Yes, you can come up with a simplistic, handwavy scenario for why you could use rsync + FTP or whatever – but you can’t, actually.
Haha, to add to the frustrated rant – restic.net – it’s a nebulous command-line tool; their website & GitHub don’t even have the word “features” on them – they don’t say how it works, just that it’s “backups done right” – yeah, trust, nice UX mate, people are definitely going to pick up your product if they can’t even see what features it has. Sorry for the cynicism, but you see the overarching point here…
Syncthing just works. Across any ISP, router, OS, major filesystem, user skill level, etc. Some obscure command-line tool with a manual doesn’t just work, and no doubt has roadblocks…
I still haven’t heard anyone explain why they can’t just prevent writes to a directory they want to prevent writes to, or point to any other mainstream software which has this “remove a file as soon as I save it” “feature” that is so critical.
Nobody seriously objects to using Syncthing as part of a backup strategy. I do that all the time, to a filesystem that then does snapshots. And to which no other user has write permissions, so there are never spurious edits there. Syncthing is the least of your problems if you have random people making edits to your backups, after all.
As long as the argument sounds like a version of “I’m too lazy to set up proper permissions, you should add a dangerous workaround in Syncthing instead” it’s not a serious argument.
Honestly excuse me but to my brain it looks as if you haven’t read the OP and the comment/thread I linked in the OP…
Furthermore, just because you can’t imagine x, doesn’t mean you should pull against the wave of users.
We weren’t asking for “remove a file” as a feature. We were asking for an option to treat one peer as the master, and to make sure that the master can push anything, in any circumstance, to the peer drive. The peers can keep their local versions as .stversions if they want.
Why? Because my grandma might open a file on the external HDD because she temporarily wanted to see something, and then that file will be stuck in a conflict while I’m relying on it to be autonomous. Is that enough reason? Or maybe there’s some program running on that machine that goes through the files and touches them in a way that Syncthing thinks an edit has happened (maybe an antivirus, or something that edits metadata, or a media server, or some permissions get changed, and Syncthing sees that as an edit). Or maybe some magic happens across 3 different OSes and SMB drives.
The argument is more like “I need enough feature flexibility to make sure that an average person can set this up in whatever circumstance and it just works”. Don’t confuse that with “lazy”. Not everyone can set up a read-only drive (lack of knowledge, the scenario at hand, work-laptop restrictions, etc.). And not everyone is a computer scientist.
And to be clear, I’m not angry that “you aren’t doing my bidding right now”; I’d be perfectly happy if you just understood the point of view behind the approach.
I read both your original post and that thread now, and I’m not seeing a justification beyond someone just wanting it because they think it’s the solution.
Apart from everything else, you keep talking about this “stuck” thing. That doesn’t happen. If grandma changed the file, it changed. Fine, it’s no longer the same as it should be, and I understand you’d want that change immediately undone. Nonetheless, next time the file is changed on the other side it will get synced again. It’s not permanently stuck. (Or if it is, that’s a bug and we should fix it regardless of anything else here.)
So you want Syncthing to get into an infinite loop fighting with some other program, downloading the file again every time the media server adds metadata… I would say that’s a horrible solution to the problem; the correct solution is to simply not grant the media server write access to the files, or perhaps to configure the media server not to do that.
Yeah, I’m not a big believer in magic, and as long as the argument is “Syncthing should do this because there’s some magic happening on my system that I don’t understand and I can’t be bothered to figure out” … yeah, no.
I understand your point about not everyone being a computer scientist etc, but still – they can spend some time Googling a real solution to their problem, or we can provide a shitty bandaid that’s also dangerous. I’m not seeing anything here that moves the needle towards the shitty bandaid part of the spectrum. And yeah, I suspect that you understand what I’m saying and disagree, and I understand what you’re saying and disagree.
This is indeed the case (just tested) but with a caveat — a conflict file is created, which by itself still causes Syncthing to show the “Revert Local Additions” button. Maybe a good idea/recommendation to disable conflicts in Receive Only folders?
The exact same thing applies there: We can’t assume that conflict file locally has no value. You can get that behaviour already though by configuring the folder to not create conflicts, it’s just not default.
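For the record, the folder option being referred to here is `maxConflicts`; setting it to 0 disables conflict copies for that folder. In `config.xml` that looks roughly like this (the folder ID and path are placeholders):

```
<folder id="abcde-fghij" path="/data/backup" type="receiveonly">
    <maxConflicts>0</maxConflicts>
</folder>
```

The same option can also be changed per folder under Advanced settings in the GUI.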
Yeah, I meant something that could be suggested when the problem comes up on the forum, or even something to add to the FAQ, as this particular case of one-way synchronisation seems to be quite popular (regardless of all the different opinions about it).
A conflict file is created and causes confusion about which is the main file in this setup. Leave a system unattended, and many conflicts from many files pile up over time, with no automatic way of cleaning up (e.g. they’re not handled by versioning cleanup).
But more importantly: if grandma or a weekly media-server scan touches a file after I also touch that file, and then both machines go online, Syncthing will create a conflict file and leave the local file as the “main file”. I can’t set it up such that mine is treated as the truth.
calmh is just shiftlessly coming up with edge cases in favor of handwaving away the issues/needs I raise. (E.g. “infinite loop fighting with some other program” – what other program runs that often on all your files? It would happen once a week, not once per CPU clock – “infinite loop”… jeez… A similar “infinite loop” is also what happens when 2 users take turns editing the same file, you intractable frustrator.)
“You can get that behaviour already though by configuring the folder to not create conflicts, it’s just not default.”
If folder A is configured not to create conflicts and is set to “receive only”, and a file is edited locally there; and folder B, which is synced with A and set to “send only”, edits the same file after A’s edit; then when A and B go online, there is a conflict. So what happens then? Does syncing just not happen?
We just need a setup where I can ensure that a transfer occurs, no matter what, from my machine to my backup machine, and that the transferred copy is treated as the main file. It’s a simple need that common people have.
If you tried to gather evidence, you’d see it behaves just like you want: of course a valid remote change in conflict with a local change in a receive-only folder will always win. As everyone’s telling you, it will not block syncing, and it will get you the file you want.
I did try things… I came here after getting conflicts. I was getting asked to resolve each one and pick which one to keep. Other threads have been made about this too.
“A valid remote change in conflict with a local change in a receive-only local folder will always win” - This sounds wrong / unintuitive to me. But if it is true, then disabling conflicts for the local receive only folder, solves the question in the OP, and all the previous threads. So why did we have to argue for 20 pages (and other threads) to get to this point? (would be nice to mention it in a faq, behind as many warnings as you want)
Then what were you talking about not wanting to implement the feature, if the feature was already implemented? “I get conflicts, therefore my automation is blocked” was my observation…
Ok so to clear up confusion:
UX-wise, I don’t think common users ever understood that a local modification in a receive-only folder behaves differently from one in a non-receive-only folder (that there is this “invalid” state; btw, the wording “invalid file” would lead me to think that the file is broken or wasn’t received properly).
Great. This again is confusing UX-wise / I would never have figured that out. I would have thought instead that files don’t come in if there is this mysterious forever-conflict file blocking the way.
Ok. “Newer wins over older” is concerning though, but, as you guys said “newer wins over older, unless it’s a receive only folder, in which case the older remote always wins” – Is this correct?
I would have intuited that if you disable conflicts, then the local file never changes, but you’re saying that it gets deleted and replaced with the file that wins (the remote file in case it’s a receive only folder). Awesome.
So thanks a lot for clarifying; there’s no way an average user would have had a clue about even half of that behaviour. Being the best P2P sync tool means your UX has to deal with many, many different kinds of users.