Thankfully not that crazy. While we may have a large number of files stored, we only get about 600 or so files every five minutes. I wish I knew enough about writing config files for lsyncd to write one that uses default.rsyncssh but simply ignores delete events.
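For reference, my rough guess at what such a config might look like, pieced together from lsyncd's `sync{}` examples (untested; the paths and host below are just placeholders):

```
-- Hypothetical lsyncd config: default.rsyncssh, but with delete = false
-- so deletions in /src/ are never propagated to the target.
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}

sync {
    default.rsyncssh,
    source    = "/src/",
    host      = "user@ServerB",
    targetdir = "/dest/",
    delete    = false,  -- ignore delete events; only creations/changes are synced
}
```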
Given the available information, it sounds like a simple cron job set to run every 5 minutes might work just fine:
```
*/5 * * * * rsync.sh
```
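One caveat: cron runs with a minimal environment, so in practice the entry usually needs the absolute path to an executable script. A sketch, assuming a hypothetical location of `/usr/local/bin/rsync.sh` and an optional log file:

```
*/5 * * * * /usr/local/bin/rsync.sh >> /var/log/rsync-sync.log 2>&1
```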
But if a fixed sync schedule isn’t desirable, incron can be used. Create a text file, e.g. `incrontab.conf`, with the following line:
```
/src/ IN_CREATE,loopable=true rsync.sh
```
The rule above tells incron to…

- … monitor the directory `/src/`.
- … ignore any file system events except create (i.e. new file, new directory, new symlink, etc.).
- … not run more than one instance of `rsync.sh` at a time, even if there are new events, to prevent a stampede/storm of sync connections to the target server(s) if for some reason new files are created before an active sync has finished.
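To put that rule into effect, the file then has to be installed as the user's incron table. If I'm reading the incrontab usage right, it accepts a file argument much like crontab does (this assumes incron is installed and, if `/etc/incron.allow` exists, that the user is listed in it):

```
# Import the table from the file, then list it to confirm it was loaded
incrontab incrontab.conf
incrontab -l
```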
Then for the `rsync.sh` script:
```
#!/bin/sh
# Mirror /src to ServerB; only if the transfer succeeds, prune local files older than 60 minutes
rsync --recursive --times --inplace --partial /src user@ServerB:/dest/ && find /src/ -type f -mmin +60 -delete
```
The script above has rsync mirror the contents of `/src/` to the destination server, then, only if rsync is successful, prune files in `/src/` that are older than 60 minutes (or, if storage space is very limited, set it to 5 minutes).
- Unless the script is running with `root` privileges, there’s no need to waste resources trying to sync user/group ownership.
- `--inplace` skips rsync’s default of creating a temporary file → rename, resulting in less disk I/O. Ideal in situations where new files are added rather than existing files being updated.
- If ServerB is offline, `rsync.sh` will just spit out an error without deleting any files. The next time `rsync.sh` runs without error, all files that haven’t yet been synced will be.
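Before handing the script to incron (or cron), it may be worth a quick manual test. rsync's `--dry-run` flag reports what would be transferred without copying or deleting anything (same placeholder paths as above):

```
# Make the script executable, then preview the transfer without changing anything
chmod +x rsync.sh
rsync --dry-run --recursive --times --inplace --partial /src user@ServerB:/dest/
```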