Write Only - One Way Sync

I would like to use Syncthing to sync my backups off-site as part of my backup strategy, in a way that mitigates against a ransomware attack. I believe a one-way, write-only sync would achieve this. Here’s my scenario:

I have a fast fibre connection between work and home and have permission to use the bandwidth of both. I have two identical 2TB USB 3.0 external drives, one attached to my home system and one to my work system. Briefly, I bring the two external drives together, perform a full system drive image using Macrium Reflect, and save that file to both drives in order to minimise bandwidth on the initial sync, as we’re talking about over 1TB of data. From that point on I take the remote drive to work, where I would like it to stay.

My home system backs itself up to the Syncthing folder every two days, adding a new file (an incremental disk image), which is then uploaded to the backup drive at work.

I want to do my best to mitigate against a ransomware attack. Let’s say my local computer becomes maliciously encrypted against my will, along with the attached backup hard drive; what I don’t want is for those malicious modifications to be synchronised to work unknowingly.

I have tried experimenting with ignoreDelete, Master Folders and readOnly settings in Syncthing, but I haven’t achieved exactly what I want. I want any new files to be synced Home > Work, but I don’t want the Home system to be able to modify existing files on the Work system.

Does anyone know of a way I could set up Syncthing to achieve that? I feel so close, but just not quite there! I think a feature like this would be a valuable setting for Syncthing to have; I know a few of my colleagues would be interested in using it for this use case if it works.

Thank you for your time and help.

This, generally speaking, is “master” mode on the source side.

This, however, is not currently possible. Sorry.

However, rsync --ignore-existing may be exactly what you’re looking for: rsync is inherently one-way, and that option stops it updating files that already exist on the receiver.

Thanks for your time Jakob and I appreciate the straight answer even if it might not be the one I want to hear!

I have to do a little more experimentation, but I think I might be able to achieve what I’m after, albeit not completely automated: I might have to hit the “Override Changes” button manually every now and then. Perhaps that manual intervention would be better, because I would only want to trigger the sync from the work side if I knew home was clean/OK.

Your suggestion of rsync does sound like it would work, thank you; however, my home and work systems are both Windows 7, unfortunately.

Ah, Windows… Perhaps some periodic thing to tweak the permissions on the receiving side, removing Syncthing’s ability to change the files? Or if there is a permissions combo on Windows that allows creation and rename but not delete (then the files cannot be overwritten by Syncthing, although it’ll expend energy trying)?
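On Windows that permissions tweak could be scripted with icacls, e.g. from a scheduled task. This is only a hypothetical sketch, not a tested recipe: the path and account name are assumptions, and you would want to verify which rights Syncthing actually needs before relying on it.

```bat
:: Deny the delete (DE) and write-data (WD) rights on the existing backup
:: files for the account Syncthing runs as. New files can still be created
:: in the folder; the existing ones can no longer be overwritten or removed.
icacls "E:\Backups\*" /deny "backup-user:(DE,WD)"

:: To undo the denial before a deliberate, trusted sync:
icacls "E:\Backups\*" /remove:d "backup-user"
```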

Interesting suggestion, thanks Jakob. In a previous attempt using other software I did something a little similar: using scheduled tasks on both systems, I ejected the device (in software) with a batch script. This in essence made the physical device disappear from the computer, non-navigable, so I assume malicious ransomware would not find it to encrypt.

Once a week a similar scheduled task/script would automatically mount the drive again and then sync. I could do something similar, or get a scheduled task to tweak permissions as you suggest; that may be a more lightweight and elegant solution. Both systems are fairly decent spec, so I’m not too concerned about them expending energy trying; I can observe that, and backups are important to me.

Out of interest, do you ever see this becoming a feature of Syncthing one day? I’m not expecting it anytime soon; I know you probably have enough on your plate already!

Nothing is impossible, but this is the first time in the project’s lifetime that it’s come up, and syncing existing files is very close to the core purpose of Syncthing… So it’s rather niche, and not something we’re likely to work on. On the other hand, it’s a bit like the ignoreDeletes option: it doesn’t hurt much if you don’t use it. So I would probably accept an implementation if someone coded it up.


If you normally never change files (only add new), you can use file versioning. As files will never get changed, there will normally be no version and no additional space required.

If the image files on your home device get corrupted, the uncorrupted ones will be moved to the .stversions folder on the work device before being replaced with the corrupted ones.


It is very close to the ignoreDeletes option; I’m surprised no one has asked for this before, thanks for letting me know. I wish I could code! I’m trying to learn but struggling, so I won’t be submitting any code for review anytime soon! :facepalm:

Interesting concept wweich, I hadn’t considered using the file versioning option in Syncthing. You mentioned no additional space being required, but additional space will be needed every two days when a new incremental backup image is created. Each new file will probably be around 2-4GB; I don’t know how this might play into using file versioning.

PS. Where is the best place for me to crowd-fund a massive supply of Coffee + Cakes to tempt #Syncthing Devs into implementing this feature into code? :yum:

File versioning kicks in when Syncthing is going to replace (change) an existing file. As long as you only add files and don’t change existing ones, the versions folder will stay empty.

But you mentioned that you get an “Override Changes” button on your home device. This means there was a change on the work device. Hitting that button will probably place the “changed” file(s) in the .stversions folder.

If, on the other hand, you changed files at home, the old versions would be placed in the .stversions folder at work, which would result in the 2TB HDD at work filling up much quicker than the HDD at home.

I think you may have come up with a solution. The initial full backup image is huge (>1.5TB), but it is split across numerous 100GB files. Say one of those 100GB files was maliciously changed on the home system and that change replicated to my off-site (work) location: if Syncthing moved the original file to the versions folder before writing the new malicious copy, then a) it would keep my original, good copy, and b) the drive could fill up quickly. That would be fine: if a malicious event caused the drive to fill and stop working, it would not matter, as long as the data is intact.

I will have to do some experimenting with file versioning, I have not used it before but will try, thank you for the suggestion.

If I understand your point, the work computer is considered safe and you don’t want Syncthing to delete files there. Setting file versioning there to “staggered” with maximum age “never delete” will achieve just that. The home computer will request that the work computer delete files, but instead they will be versioned and NEVER deleted by Syncthing, and you could easily recover them from the .stversions folder in case of malicious usage of the home computer. I do something like that: on my PC I have staggered versioning enabled with never delete set, and on my phones I have just basic photo folder syncing. So if I need free space on a phone I can delete the oldest photos (the newest may not be synced yet) without much worry, because they will be kept versioned on the PC.
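For reference, in Syncthing’s config.xml that setup would look something like the fragment below (the folder id and path are placeholders); in the GUI it corresponds to choosing Staggered File Versioning and setting Maximum Age to 0:

```xml
<folder id="backups" path="D:\Backups">
    <versioning type="staggered">
        <!-- maxAge is in seconds; 0 means versioned files are kept forever -->
        <param key="maxAge" val="0"/>
    </versioning>
</folder>
```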

Simple versioning would be the better choice here, I think. Staggered versioning does not version every change; if for some reason the malicious code changed the files more than once, I don’t know whether the original version would stay or one of the first corrupted versions.

I suggest using simple versioning with a high number of versions.
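In config.xml terms (again, the folder id and path are placeholders), simple versioning with a high keep count would look like:

```xml
<folder id="backups" path="D:\Backups">
    <versioning type="simple">
        <!-- keep up to 999 old versions of each file in .stversions -->
        <param key="keep" val="999"/>
    </versioning>
</folder>
```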

That might be correct if a time interval is set, but the versioning docs state: “Note: Set to 0 to keep versions forever.” So “never delete” should not delete any versioned files.

That setting is for maximum age, not to circumvent the staggering.

Thank you wweich & kisolre, your conversation has been useful to me and has made me consider different file versioning approaches. I definitely think this is the feature to fulfil my need, in a slightly hacky but perfectly acceptable way.

I think I am going to opt for simple file versioning with something ridiculously high, like keeping 999 versions. Bear in mind that my full backup images will be a series of around 12-15 files, each 100GB, followed by incremental images of around 2-4GB trickling in every couple of days, and the hard disk is 2TB. Worst case, if malicious code on the home PC modified/encrypted the backup images, not just once but multiple times, and those changes replicated to the remote (work) location, then my original image files would still be safe in the .stversions folder, and the drive would probably fill up quickly as Syncthing created duplicate versions of the 100GB files. It filling up and ceasing to create further versions of huge files is probably a good thing, and it will obviously never reach 999 iterations of 100GB disk-image files.

I have done some testing with file versioning, and actually tested by purposely corrupting smaller sample data; everything so far is working as one would hope.

Thank you for your time and input all, if anyone has any further suggestions I’d be glad to hear them.


I too was looking at this problem and decided to try staggered file versioning to solve it. However, I wish there was a way for Syncthing to just add files but never delete them.

I have an OS X 10.4 Tiger digital photo frame. I occasionally add photos to this machine. I do not want any of the old ones to end up deleted, but I would love to be able to add to the SMB share through Syncthing. Unfortunately the 10.4.11 PPC Mac will not run 32-bit Syncthing, so I am stuck with adding files through Samba with staggered versioning.

One thing I have noticed is that Syncthing will delete files when you have a 2-to-1 sync setup, as the two sharing machines will sometimes fight and delete each other’s files if I hit the Override Changes button. The files end up in the .stversions folder, but it’s still annoying that it happens at all; what’s worse, if the files are only local to that machine they are wiped out.

I have the two shares as master and the one backup drive as a normal folder, and the logic should be that they just add files and never delete, unless two files are the same and then versioning kicks in. Instead, files get deleted and end up in the .stversions folder, and I have to pull them out and resync. I think this is really a bug; the logic just does not make sense. Local files should be ignored, and deletes should not occur unless approved. If anyone has any ideas, I’m curious whether I’m doing something wrong, but from what I understand I’ve got Syncthing set up right.

I suggest you read the docs on advanced configuration.


Since the Anniversary Update of Windows 10 it is possible to install “Ubuntu on Windows”, although it’s still beta. I tried rsync and it worked great.