Abstract: This feature can be useful when you just want to store files in a warehouse and don’t want to keep a copy on your working device.
I have set up Syncthing nodes on two of my devices: a MacBook Pro (running Mac OS X) and a Banana Pi (running Linux).
I usually import a batch of photos from my camera to the Mac and do some adjustments; however, my Mac has only 80 GB of storage, so I must move those photos elsewhere once I finish editing.
I use Syncthing for this job: first I move the edited photos to the repo “EditedPhotos”, and Syncthing pushes them to the Banana Pi.
Once the photos are synced across the two nodes, all files in “EditedPhotos” on the Banana Pi are moved to another folder, and “EditedPhotos” on my Mac becomes empty after a while, freeing some space.
The workflow, in short:
On Mac: I adjust some photos that I won’t need again soon
On Mac: I put all of them in the repo “EditedPhotos”
On Mac: Syncthing pushes those photos to the Banana Pi
On Banana Pi: I manually move those photos from “EditedPhotos” to their final destination
On Mac: Syncthing detects the deletions and empties “EditedPhotos”
On Mac: the Mac has space to import new photos
I wish the workflow could be further simplified to:
On Mac: I adjust some photos that I won’t need again soon
On Mac: I put all of them in the repo “EditedPhotos”
On Mac: Syncthing pushes those photos to the Banana Pi
On Banana Pi: since the repo “EditedPhotos” is set as a black hole, Syncthing moves everything in “EditedPhotos” to another folder (a.k.a. the white hole)
On Mac: Syncthing detects the deletions and empties “EditedPhotos” on my Mac
On Mac: the Mac has space to import new photos
I know this feature can be achieved in various ways, and I’ve implemented one using inotifywait(1) on Linux (posted below). Maybe others want this “black hole repo” option too?
#!/bin/bash
# Warning: this script does no error checking, and expects no two files to have the same name
# Warning: only files can be put into the black hole; folders are not handled properly
BLACK_HOLE="/mnt/hdd2/black_hole"
WHITE_HOLE="/mnt/hdd2/white_hole"

# Syncthing downloads to a temporary file and renames it when the transfer is
# complete, so watching for moved_to only reacts to fully synced files.
inotifywait -mq --format '%f' -e moved_to "$BLACK_HOLE" |
while IFS= read -r file
do
    mv "$BLACK_HOLE/$file" "$WHITE_HOLE/$file"
    echo "moved $BLACK_HOLE/$file to $WHITE_HOLE/$file"
done
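As a side note (not part of the original post): to keep a watcher like the one above running across reboots on a systemd-based distro, one option is a small unit file. The script path and unit name here are placeholders:

```ini
[Unit]
Description=Black hole repo watcher
After=network.target

[Service]
ExecStart=/usr/local/bin/black_hole.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```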
To me this sounds like a very specific use case, and one that your script actually handles fairly well as is? I don’t think syncthing needs to be everything for everyone, as long as it can be used as a brick to build whatever the final use case is.
Well, I agree this is a very specific case and should be done by an external process. I will try to create a separate cross-platform black hole repo implementation using howeyc/fsnotify and post it here when it is done.
This is the exact opposite of the Unix philosophy. I don’t know what @calmh thinks, but in my opinion Syncthing already does syncing and versioning; adding this as well would be too much.
This is really too specific a case. The more general suggestion here, “Pre and Post process scripting?”, would make this a one-line script called at “post-sync”, so it’s better to implement only that.
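To illustrate the point: if such a post-sync hook existed, the whole black hole feature would collapse to a single mv. The directory names are examples only; this sketch creates them as temp dirs so it can actually run.

```shell
#!/bin/sh
# Hypothetical body of a "post-sync" hook: empty the black hole into the
# white hole. Real paths would be configured; temp dirs stand in here.
BLACK_HOLE=$(mktemp -d)
WHITE_HOLE=$(mktemp -d)
touch "$BLACK_HOLE/IMG_0042.jpg"   # pretend a file was just synced in

# the actual one-liner the hook would run:
mv "$BLACK_HOLE"/* "$WHITE_HOLE"/
```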
Ugh, please, NO. FTP has been obsolete since 1999, not only because it’s totally insecure, but because it’s also very poorly designed and quite complicated to handle behind NATs/firewalls/etc.
This particular case can be handled perfectly well by an rsync script anyway, which is portable to any system. Just rsync the files to the destination and delete the local copy afterwards. You can have it scan the directory for files every N minutes and use a .lock file to avoid multiple instances if you want to cron this.