Backup server with two external, alternating drives

Hi, my current backup system involves automatic Samba copy jobs to my backup server. The backups are then stored on an external USB drive. After a week, I swap the USB drive with another drive (which is stored off-site). I basically have two identical backup drives that alternate every week.

Now, I wanted to switch to Syncthing. However, I don’t know how to make my current workflow work with my two hard drives.

I can let Syncthing back up everything to the first external drive. But after I swap out the drive, all the existing files will be gone (or at least a week old).

Is it possible to still use Syncthing with my current workflow?

I suspect it’s better to just keep doing what you’re already doing if it works.

Syncthing does not like all files disappearing. You can work around that, but I question whether it’s worth it.

Why do you want to use Syncthing?

As far as I understand, you only ever want to copy to your backup drive, and weekly is enough, i.e. I don’t see any need for bi-directional, continuous, decentralized/p2p sync across almost any network (just the main characteristics of Syncthing off the top of my head - no claim to correctness or completeness).

Also, Syncthing is not a backup solution. If something deletes all the data on your source, Syncthing will happily delete it on your backup too (it’s designed to do that). Think of Syncthing as the thing that transfers your files to a place; if you want to archive or back up the data, you need another tool on top of the transfer Syncthing provides.

My current system is somewhat flawed and annoying, because the server tries to copy the files from my clients via Samba. However, because of changing/conflicting IP settings (LAN and WLAN adapters), this is somewhat messy. Oftentimes the clients can’t be reached.

The thing I like about Syncthing is its robustness in terms of contacting the server. Also, the versioning is a great addition. Currently, I just copy the folders into “Monday”, “Tuesday”, … subfolders, one for each day of the week, as fake versioning.
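Roughly, that fake versioning boils down to something like this (shown here with rsync rather than the actual Samba copy job, and all paths are just examples):

    # Copy today's backup into a weekday-named folder, overwriting
    # last week's copy of the same day.
    DAY=$(date +%A)                     # e.g. "Monday"
    mkdir -p "/mnt/backup/$DAY"
    rsync -a --delete /srv/data/ "/mnt/backup/$DAY/"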

That sounds exactly like my on-site backup solution: sync data to my backup-/home-server with Syncthing and do daily snapshots from there (using a homegrown rsync solution or borgbackup, depending on the type of data). For that, Syncthing is indeed perfect. And the server also serves as an always-on node for the Syncthing cluster.
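The snapshot step itself is nothing fancy; with borgbackup it amounts to something like this (repository and source paths are placeholders, not my actual setup):

    # Daily snapshot of the Syncthing-synced data into a borg repo.
    borg create --stats \
        /mnt/backup/borg-repo::'daily-{now:%Y-%m-%d}' \
        /srv/sync
    # Thin out old snapshots so the drive does not fill up.
    borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg-repo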

Except that I don’t alternate drives for that: I copy to offsite drives. I see two approaches:

  1. Leave one drive permanently attached, let Syncthing sync to it and create daily snapshots, then copy to the secondary drive every week.

  2. Install separate Syncthing instances on the two drives. Then when you swap the drive, it will have all the state from a week ago and sync back up to the current state without creating a mess (the two drives must not share anything from Syncthing like keys/config/db) - see the sketch below.
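For option 2, a minimal sketch of what I mean (assuming each instance keeps its Syncthing home directory on its own drive; paths are examples):

    # One Syncthing instance per drive; each has its own
    # keys/config/db in a home directory on that drive.
    syncthing --home=/mnt/drive1/syncthing-home   # while drive1 is attached
    # ...and after the weekly swap:
    syncthing --home=/mnt/drive2/syncthing-home   # while drive2 is attached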


I have a somewhat similar backup scheme, with two separate offsite backup drives. However, I gather your backup drives are mounted continuously while in use, and you swap them every week. I don’t; I mount mine aperiodically and use rsync to update them. Between these manual backups, I have a nightly cron job which does a network backup of the files that have changed since the last mounted offsite backup.

I see why you want your procedure, but if your currently mounted drive goes bad, you will lose everything since the last swap. Here’s a suggestion, which I do not claim is workable for your situation: set up Syncthing to keep an offsite network system up to date. Once a week, mount one of your offsite drives and rsync it up to date.
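That weekly refresh can be a one-liner along these lines (source and mount point are placeholders):

    # Bring the freshly mounted offsite drive in line with the
    # Syncthing-maintained copy; --delete mirrors removals as well.
    rsync -aH --delete /srv/backup/ /mnt/offsite-drive/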

The drawback is all the network traffic needed to keep things in sync, and only you can tell whether that is a problem. The advantage is that you always have a very recent backup. Your offsite backups change their nature, but in a more or less neutral way.

Thank you for all your help.

I went with the solution of having two separate Syncthing docker containers, each mounting a different USB drive.

I plug in drive1, then start container1 and let it sync.

After a week, I stop the container, then unplug the drive. Then I plug in drive2 and start container2.

This seems to be the best/cleanest solution for my setup.
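For the record, the setup looks roughly like this (assuming the official syncthing/syncthing image, which keeps its configuration under /var/syncthing/config; container names and host paths are mine):

    # Container 1: drive1's data plus its own config dir on the host.
    docker run -d --name st-drive1 \
        -v /mnt/drive1/data:/var/syncthing/data \
        -v /opt/st-drive1-config:/var/syncthing/config \
        -p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp \
        syncthing/syncthing

    # Container 2 is created the same way for drive2 (with container 1
    # stopped first, since they share the same host ports).
    docker run -d --name st-drive2 \
        -v /mnt/drive2/data:/var/syncthing/data \
        -v /opt/st-drive2-config:/var/syncthing/config \
        -p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp \
        syncthing/syncthing

Since only one container runs at a time, both can use the same host ports; after this initial creation, the weekly swap is just docker stop st-drive1 followed by docker start st-drive2 (and vice versa).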

Just one more question: Is it possible to have the Syncthing config path of both containers point to the same folder on my host machine? Or will that cause chaos?

Or is it better to have a separate config folder for each instance?

TL;DR: Mayhem alert - don’t point both at the same folder. Yes to separate config folders:

They mustn’t share a db, as they don’t share the indexed data. And the db is in the config dir. Neither is it a good idea to make others believe it’s the same device. That may work, because indexes will be reset, but maybe not, and even if it does, it’s wasteful. And your setup is so static, what’s the point anyway.
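If in doubt, you can check that the two instances really are distinct devices (container names as in your sketch above; flag spelling may vary between Syncthing versions, and each container must be running when you query it):

    # Each config dir holds its own key pair, so the IDs must differ.
    docker exec st-drive1 syncthing --device-id --home=/var/syncthing/config
    docker exec st-drive2 syncthing --device-id --home=/var/syncthing/config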

And just out of curiosity: Why docker?

Why not? I can easily deploy (multiple instances of) it, it doesn’t clutter up my system, and I can easily update it (automatically).

It also gives me an agnostic way of managing all of my services. For example, I can let the services listen on their default ports and just map them to any desired port on the host machine.
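For example (port and name are just illustrations), a container can keep Syncthing’s default GUI port while the host exposes a different one:

    # Inside the container, Syncthing serves its GUI on 8384 as usual;
    # the host publishes it as 28384. Remapping the GUI port is
    # harmless, unlike remapping the sync port (see below).
    docker run -d --name st-demo -p 28384:8384 syncthing/syncthing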

Are there any downsides to using containers (other than the tiny performance overhead)?


Volumes used by docker are transient, and only alive during the lifetime of the container unless otherwise specified. If you misconfigure stuff, you might end up with sad consequences.

There is a whole lot of magical networking involved in docker, which is likely to break local discovery.

Mapping the protocol ports to different numbers is not something you should do either, as you’ll likely break global discovery as well.

I suspect Syncthing in docker does not use the usual automatic upgrade mechanism, so it is most likely going to lag in terms of releases and in terms of rollout.

I am not saying don’t use docker, but I am saying it has caveats which you are likely to hit just by using docker, making it an inferior solution.

I’m not using docker volumes. I use bind mounts to write directly to the host drive.

Yeah, I noticed local discovery not working correctly. But I was planning to turn off local/global discovery anyway, since the backup server (which runs Syncthing in a container) has a static IP.
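With discovery off, the clients just need the server’s fixed address instead of “dynamic” in their device settings; in config.xml that ends up looking like this (device ID, name and IP are examples):

    <device id="EXAMPLE-DEVICE-ID" name="backup-server">
        <address>tcp://192.168.1.50:22000</address>
    </device>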

You mean, if I (e.g.) map 22000 to 22001, that breaks global discovery, but when I change the Syncthing config to listen on port 22001, that doesn’t break it?

If so, then shouldn’t port forwarding through my router also break discovery? And how do I run two separate instances of Syncthing (at the same time) without remapping ports?

We set up the port forwarding using UPnP, so we know what external port we got from the router and what to advertise. If you just map port 22001 to 22000 in docker, we in Syncthing have no idea that we should advertise 22001 instead of 22000.

If you set up port forwarding manually, then your internal port should always match the external port, otherwise stuff will not work. It’s even worse if you use docker, as then all 3 ports (container, host and router) have to line up.
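Concretely, for a second instance next to a default one, everything would move to 22001 end to end (ports and the container name are examples):

    # 1. In instance 2's Syncthing settings, set the sync listen
    #    address to tcp://0.0.0.0:22001 (the default is 22000).
    # 2. Publish that same port unchanged in docker:
    docker run -d --name st-second \
        -p 22001:22001/tcp -p 22001:22001/udp \
        -p 28384:8384 \
        syncthing/syncthing
    # 3. On the router, forward external port 22001 to host port 22001.
    # Container, host and router all agree on 22001, so the advertised
    # address matches reality.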


That’s good to know. Thank you.
