Black hole repo?

Abstract: This feature can be useful when you just want to store files in a warehouse and don't want to leave a copy on your working device.

I have set up Syncthing nodes on two of my devices: a MacBook Pro (running Mac OS X) and a Banana Pi (running Linux).

I usually import a bunch of photos from my camera to the Mac and do some adjustments. However, my Mac has only 80 GB of storage, so I must move those photos somewhere else once I have finished editing. I use Syncthing for this job: first I move the edited photos to the repo "EditedPhotos" and Syncthing pushes them to the Banana Pi. Once the photos are synced across the two nodes, all files in "EditedPhotos" on the Banana Pi are moved to another folder, and "EditedPhotos" on my Mac becomes empty after a while, freeing some space.

The workflow, in short:

  1. On Mac: I adjust some photos on my Mac that I won't need again soon
  2. On Mac: I put all of them in the repo "EditedPhotos"
  3. On Mac: Syncthing pushes those photos to the Banana Pi
  4. On Banana Pi: I manually move those photos from "EditedPhotos" to their final destination
  5. On Mac: Syncthing detects the deletions and empties "EditedPhotos"
  6. On Mac: the Mac has space to import new photos

I wish the workflow could be further simplified to:

  1. On Mac: I adjust some photos on my Mac that I won't need again soon
  2. On Mac: I put all of them in the repo "EditedPhotos"
  3. On Mac: Syncthing pushes those photos to the Banana Pi
  4. On Banana Pi: Since the repo "EditedPhotos" is set as a black hole, Syncthing moves everything in "EditedPhotos" to another folder (a.k.a. the white hole)
  5. On Mac: Syncthing detects the deletions and empties "EditedPhotos" on my Mac
  6. On Mac: the Mac has space to import new photos

I know this feature can be done in various ways, and I’ve implemented one using inotifywait(1) on Linux (posted below). Maybe others want this “black hole repo” option too?

#!/bin/bash
# Warning: this script does no error checking, and expects no two files to have the same name
# Warning: only files can be put into the black hole; folders are not handled properly

BLACK_HOLE="/mnt/hdd2/black_hole"
WHITE_HOLE="/mnt/hdd2/white_hole"

# Watch for files moved into the black hole (Syncthing renames its temporary file into
# place once a transfer completes) and immediately move each one to the white hole.
inotifywait -mq --timefmt '%Y/%m/%d %H:%M:%S' --format '%f' -e moved_to "$BLACK_HOLE" |
while IFS= read -r file; do
    mv "$BLACK_HOLE/$file" "$WHITE_HOLE/$file"
    echo "moved $BLACK_HOLE/$file to $WHITE_HOLE/$file"
done
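
To keep the watcher running unattended on the Banana Pi, one simple option (just a sketch, assuming the script above is saved as /usr/local/bin/black_hole.sh, made executable, and started from root's crontab) is a cron @reboot entry:

# crontab -e (root): start the watcher at boot and log what it moves
@reboot /usr/local/bin/black_hole.sh >> /var/log/black_hole.log 2>&1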

To me this sounds like a very specific use case, and one that your script actually handles fairly well as is? I don’t think syncthing needs to be everything for everyone, as long as it can be used as a brick to build whatever the final use case is.

Well I agree this is a very specific case and should be done using an external process. I will try to create a separate cross-platform black hole repo implementation using howeyc/fsnotify and post here when it is done.


Can't you just use FTP or the like? Syncthing is meant to keep folders synced, not to send files.

I do use Syncthing to sync files between my devices; this is a special use case in one of my many repos.

In fact it is still doing syncing jobs: sync the new files from the Mac to the warehouse, then sync the file deletion notifications back to the Mac.

SFTP or other solutions are indeed more suitable for this case, but if you can do many things with one tool, why bother setting up another service?

This is the exact opposite of the Unix philosophy. I don't know what @calmh thinks, but in my opinion Syncthing already does syncing and versioning; adding this as well would be too much.

This is really too specific a case. The more general suggestion here, Pre and Post process scripting?, should make this a one-line script when called at "post-sync", so better to only implement that :wink:
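
If such a hook existed, the whole black hole could indeed collapse into something like this (purely hypothetical: Syncthing has no post-sync hook today, and the idea that it would pass the synced folder's path as the first argument is my own assumption):

#!/bin/bash
# Hypothetical post-sync hook: "$1" is assumed to be the folder that just finished syncing.
# Move everything that arrived into the white hole.
mv "$1"/* /mnt/hdd2/white_hole/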

If you could sync directly to your final destination, it would be easier.

You copy, you sync, you delete, and your deletions are not repeated on the Banana Pi.

Ugh, please, NO. FTP has been obsolete since 1999, not only because it's totally insecure, but also because it's very poorly designed and quite complicated to handle behind NATs/firewalls/etc.

This particular case can be handled perfectly well by an rsync script anyway, which is portable to pretty much any system. Just rsync the files to the destination and delete the local copy afterwards. You can have it scan the directory for files every N minutes, with a .lock file to avoid multiple instances, if you want to cron this.
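
A sketch of that rsync approach (the paths, remote host, and script name are made up; flock(1) stands in for the .lock file here, assuming util-linux is available):

#!/bin/bash
# Push everything in the local "black hole" to the warehouse, letting rsync delete
# each local copy once it has been transferred. Intended to be run from cron.
SRC="$HOME/EditedPhotos/"
DEST="banana-pi:/mnt/hdd2/white_hole/"
LOCK="/tmp/edited_photos.lock"

# Take a non-blocking lock so overlapping cron runs exit immediately.
exec 9>"$LOCK" || exit 1
flock -n 9 || exit 0

# --remove-source-files deletes a local file only after it has been transferred
rsync -a --remove-source-files "$SRC" "$DEST"

A crontab line such as */10 * * * * /usr/local/bin/push_photos.sh would then run it every ten minutes.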

With "FTP or the like" I meant protocols that transfer files: SFTP, SCP, WebDAV, whatever you want. I don't think anyone sane uses FTP.

Wow, really? From my observation, though, everything has worked as expected over the last two weeks.

Yes, the above-mentioned behavior is a planned feature, not a bug.