I'd like to present my little project, which was enabled by Syncthing.
Hardware:
- 3 PCs [Windows]
- 1 VPS [Ubuntu]
- 1 Raspberry Pi with HDD [Raspbian]

Software:
- 3 PCs [Syncthing]
- 1 VPS [rsync / Syncthing]
- 1 Raspberry Pi [rsync]
The VPS has a 500 GB HDD and costs 8 €/month. Syncthing runs on this server, and the 3 PCs are connected to it.
To make sure there is no data loss due to a bug in Syncthing, I use rsync on the VPS to make a shadow copy of all data into another folder on the VPS, without deleting files.
All data on the VPS is encrypted with eCryptfs.
On the RPi, rsync fetches the encrypted data from the VPS and stores it locally on my HDD.
Pros:
- independent of Syncthing bugs
- independent of the VPS
- data is accessible online
- several backups (HDD / PCs / VPS)
- real-time backup

Cons:
- a lot of work
- while the server is running, the files are unencrypted
Where did you rent the VPS from? All the VPSes I came across had very little storage (probably aimed at website hosting).
You left out a critical part of the configuration: do you have Syncthing’s file versioning feature enabled on the backup devices? If not, this scheme is only sufficient to recover from the total loss of a device, or the accidental deletion of a file. It is not sufficient to recover from the case where you accidentally overwrite the contents of a file – in that case the change will be synced everywhere and the previous version lost before you can do anything about it.
Also bear in mind that some forms of disk failure could trick Syncthing into thinking that you've deleted all of your files, so it will happily delete all of your files from the other devices as well…
Versioning is enabled on all devices.
This is also why I am using rsync: if Syncthing deletes everything, I still have two backups, one on the RPi and another on the server. Both are made with rsync, and both are encrypted.
I use rsync on the VPS to make a shadow copy of all data into another folder on the VPS, without deleting files.
Hi, nice setup. Since you are considering this as your backup solution, how do you perform regular data checks on the backup machines?
(It happened to me several times that a backup was corrupted, especially back in the days of floppy disks.)
Q2: Among photo groups there is a saying to apply the 1-2-3 approach to files (1 original photo, 2 kinds of backup solutions / 2 geographical places, 3 copies [original, backup 1, backup 2]).
The reason for 2 backup solutions is to avoid being wiped out by the failure of a single method.
How would you design it?
No, I do not perform data checks (I don't even know how to do so).
I am using rsync and Syncthing file versioning, so if one file gets corrupted, I should still have several versions of it that are OK.
rsync and Syncthing are my 2 backup solutions.
I have at least 4 copies (original machine / vserver / rsync backup on the vserver / local rsync copy on my RPi).
With the vserver I made sure that my backups are in at least 2 locations.
For data checks you could run rsync with the `-c` / `--checksum` option at regular intervals, e.g. once a week or month. From the rsync man page:
This changes the way rsync checks if the files have been changed
and are in need of a transfer. Without this option, rsync uses
a "quick check" that (by default) checks if each file’s size and
time of last modification match between the sender and receiver.
This option changes this to compare a 128-bit checksum for each
file that has a matching size. Generating the checksums means
that both sides will expend a lot of disk I/O reading all the
data in the files in the transfer (and this is prior to any
reading that will be done to transfer changed files), so this
can slow things down significantly.
You could probably also use this together with the `--dry-run` option to verify files between the Syncthing endpoints, but you would essentially need to disallow any activity on the source PCs during the check, to avoid "false alarms".
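For a pure verification pass that reports mismatches without changing anything, the dry-run variant might look like this (paths again placeholders):

```shell
# --dry-run: modify nothing; --itemize-changes: print one line per
# file that would be transferred. Empty output means the copies match.
rsync -a --checksum --dry-run --itemize-changes /home/user/sync/ /home/user/shadow/
```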