bindfs compatibility

Hi there,

For security reasons I run syncthing on a Debian server under a dedicated user “syncthing”, so that syncthing can only modify files owned by the user “syncthing”.

Now I want to sync a folder that is owned by another user (“scanner”). Therefore, I added the following line to my /etc/fstab file to bindfs-mount the folder into syncthing’s working directory, so that the “syncthing” user is able to access and modify those files:

bindfs#/data/scans /data/syncthing/Scans fuse map=scanner/syncthing:@scanner/@syncthing,force-group=syncthing,defaults 0 3

In general, this setup works, but when a new file gets added to the folder or an existing one is modified, syncthing does not pick this up immediately. Instead, changes are only synced when the periodic scan runs. Of course, I have enabled the “Watch for Changes” checkbox in the WebUI.

Combining syncthing with bindfs is mentioned in some other topics (e.g. What owncloud/nextcloud thinks about syncthing - #15 by 3v1n0 and Permission denied (backing up docker mounted volumes) - #4 by gadget), but it seems nobody has experienced the same problem (or they just do not want or need immediate synchronization).

Here is my question: Is it possible to have immediate synchronization with bindfs? Can anyone suggest an alternative solution to my problem?

bindfs doesn’t send change notifications, so periodic scans are the only way to pick up changes.


Which version of the Linux kernel and bindfs?

What type of filesystem is it?

Are /data/scans and /data/syncthing/Scans on the same filesystem?

# uname -a
Linux *snip* 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux
# dpkg -l | grep bindfs
ii  bindfs                          1.14.7-1                                amd64        mirrors or overlays a local directory with altered permissions

ext4

Yes :smile:

Okay, so that combination above is new enough to work with inotify…

What are your settings for the following kernel parameters?

/proc/sys/fs/inotify/max_queued_events
/proc/sys/fs/inotify/max_user_instances
/proc/sys/fs/inotify/max_user_watches

How many files and subdirectories are being watched by Syncthing under the /data/scans directory?

That’s good; ext4 works fine with inotify.

:grinning:… I had to ask because I’ve seen forum posts where it turns out the user is mounting a remote filesystem, sometimes with more than one level of indirection (e.g. Google Drive → rclone → SMB… and perhaps even bindfs just because they can).

It’s not impossible for filesystem notifications to work across network shares, but only under specific conditions.

On a related note, since you’re running a recent kernel, it’s better to use the updated syntax for /etc/fstab.

This older syntax might be removed in the future…

bindfs#/data/scans /data/syncthing/Scans fuse map=scanner/syncthing:@scanner/@syncthing,force-group=syncthing,defaults 0 3

… in favor of this syntax:

/data/scans /data/syncthing/Scans fuse.bindfs map=scanner/syncthing:@scanner/@syncthing,force-group=syncthing,defaults 0 0

Instead of 3, I specified 0 for the 6th field (fsck at boot time), because the 1st field isn’t a block device (e.g. /dev/sda1), so there’s no benefit to running fsck on it.
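By the way, after editing /etc/fstab, the updated entry can be tested without a reboot by remounting the path. Just a sketch (stopping Syncthing first avoids a busy mount; on systemd-based systems a systemctl daemon-reload may also be needed so the generated mount unit picks up the change):

# umount /data/syncthing/Scans
# mount /data/syncthing/Scans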

# cat /proc/sys/fs/inotify/max_{queued_events,user_instances,user_watches}
16384
128
121770

Just 3 subdirectories and about 20 files.

Thanks for the heads-up. Fixed it :smile:

Have you noticed that @calmh already answered that bindfs is not compatible with syncthing? I’m now thinking about adding syncthing to the scanner group and just replacing the bindfs mount with a symlink :thinking:

The first two are typical defaults, and the last one is fine for most normal use cases.

That’s well under the limits set for inotify on your system, unless there are a lot of filesystem events for /data/scans.

For example, an email server, web server or source code repo could overload the 16,384 events queue, even with a relatively small number of files.

Is there a backup program or something else besides Syncthing running under the same user “syncthing” that also relies on inotify?

inotify’s user_watches limit is per-user, so the total number of watches requested by Syncthing (across all Syncthing folders), plus any other programs that are requesting watches, must be equal to or less than 121,770.
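For reference, if the watch limit ever did become the bottleneck, it could be raised at runtime and made persistent via sysctl. Just a sketch, and the value below is an arbitrary example rather than a recommendation for your setup:

# sysctl fs.inotify.max_user_watches=524288
# echo 'fs.inotify.max_user_watches = 524288' > /etc/sysctl.d/90-inotify.conf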

Yes, and it’s not just Syncthing, but also other programs that use inotify with a bindfs+FUSE filesystem. However, as with a lot of things in Linux, what was once unsupported can later (and often does) become a regular feature. :nerd_face:

To avoid having to update software settings every time the storage configuration changes, I use bind mounts to mask the real storage paths.

I’ve got a NAS running Fedora 38 (Linux 6.5.5 kernel + FUSE 2.9.9 + bindfs 1.17.4). For extra security, I also have SELinux enabled in enforcing mode.

Syncthing (similar to your setup) is running as syncthing:syncthing. However, almost all of the subdirectories and files on the storage volumes are owned by my login (gadget:gadget).

So to simplify management of storage and permissions, I use regular bind mounts and bindfs to selectively map storage paths to /srv:

/srv
├── apps
├── backups
├── media
├── mirrors
├── Syncthing
└── users

For example, /srv/backups is really a bind mount of /media/WD-1234567890/backups (the mount directory WD-1234567890 is the serial number of my Western Digital HDD). If the drive needs to be replaced, I can update the bind mount without having to touch the backup software settings. I can also move the real backups directory to a different drive without breaking user software settings.
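As a sketch using the example paths above, the corresponding plain bind mount entry in /etc/fstab would look something like this:

/media/WD-1234567890/backups /srv/backups none bind 0 0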

As you can see above, I’ve got a /srv/Syncthing directory. It’s a fuse.bindfs mount because the upstream directory it refers to is owned by root:root. So Syncthing, as user syncthing:syncthing, is writing to a directory with the following permissions:

drwxr-xr-x. 1 root      root       26 Oct  7 14:09 Syncthing

The Syncthing folder above syncs with my Ubuntu 22.04 LTS laptop.

Syncthing on the NAS is set to watch for changes and do a full rescan every 3600 seconds. I’ve got 11,000+ files in 2,300+ subdirectories, and on average, new changes on the NAS start transferring to my laptop in roughly 11-12 seconds (well under the 3600 second rescan interval).

That’s certainly a viable option. gadget on my NAS also belongs to the syncthing group so that I can access files being managed by Syncthing.

I could have done my setup entirely with user and group permissions, but bindfs offered a way to not have to relabel files and directories (e.g. when I temporarily move an external drive to another system that doesn’t share the same UID:GID mappings).


Nope, and other folders on the same filesystem sync immediately. Hence, I do not think the problem is caused by a misconfiguration of the kernel parameters.

In general I agree, but is this also true for bindfs’ support for inotify? The bindfs homepage states under “Known Issues”: “inotify events are not triggered (#7)”. And the linked bug report is still open.

Do you use bind mounts or bindfs mounts? I understand that bind and bindfs are two separate things.

The setup you describe sounds very similar to what I want to achieve.


About the inotify stuff: I’m not aware of any FUSE filesystem that has inotify support (or more precisely, fsnotify, the in-kernel subsystem). If there is one, I would love to hear about it (with links to source where possible).

There have been many talks about adding it over the years, but I don’t see any activity that has led to actual kernel changes. In-kernel filesystems (non-FUSE) may have fsnotify/inotify support, but afaik not even NFS does?


PS: I have to expand on the above, because as it’s currently written it is likely to be confusing.

For work-related reasons I have been reading some Linux kernel source code recently, especially regarding filesystem implementations. My (high-level) understanding of the fsnotify subsystem is as follows (admittedly, I’ve only glanced over it so I may have missed things).

The kernel’s primary (or only?) trigger for fsnotify is the Linux VFS. The VFS is a filesystem abstraction that unifies actions for all filesystems. All filesystems, both in-kernel and FUSE, go through the VFS.

For example, if a file is renamed, the kernel calls “vfs_rename” (simplified), which then delegates the actual rename to the underlying FS - extX, NFS, BTRFS, XFS, FUSE or whatever. However, it also triggers an fsnotify event. Therefore, all filesystems get automatic inotify support, since it’s handled by the VFS, not by the filesystem itself.

But: This only works for operations that go through the VFS. For example, on NFS a change made by a remote machine won’t go through the VFS, since it wasn’t an action requested by a local user (via a syscall). Therefore, no fsnotify event is generated, unless the filesystem manually calls into fsnotify.

For bindfs, this should mean that local inotify events work - change a file within the bindfs mount itself, and you should see an inotify event. However, if you change a file “outside” of the bindfs mount, there won’t be an event, even if the change affects a file visible within the bindfs mount. The same principle should apply to other filesystems.
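If anyone wants to check this on their own system, inotifywait (from the inotify-tools package) makes it easy to observe. A rough sketch using the paths from this thread:

# inotifywait -m /data/syncthing/Scans &
# touch /data/syncthing/Scans/via-mount.txt
# touch /data/scans/underneath.txt

If the behaviour described above holds, the first touch should produce events on the watched mount point, while the second one (made “underneath” the bindfs mount) should stay silent.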


Yup, that particular issue report really has more to do with NFS than bindfs. There have been discussions about NFS relaying change notifications, but I’m not sure the protocol can support it without a major overhaul.

For SMB, Microsoft did extend it to support remote change notifications (2.2.35 SMB2 CHANGE_NOTIFY Request), but I’m not aware of fusesmb or the in-kernel CIFS module supporting it.

Both. Most are simple bind mounts because there’s quite a bit of overhead with FUSE mounts, but I also use bindfs at home and at work.

At work, I have to manage storage and access for in-house staff and contractors. For basic setups, the usual user and group permissions are fine. A combination that’s not easily done with users and groups can be handled by an ACL or SELinux, but I’ve found it’s often overkill when bindfs can be used instead.

As @Nummer378 already described in detail, bindfs supports the inotify subsystem, but if you’re adding files “underneath” the bindfs mount – e.g. to /data/scans while Syncthing is looking at the mount point /data/syncthing/Scans – then filesystem watches won’t work (a regular bind mount is fine in the same situation).

I assumed that you were giving Syncthing read/write access to /data/syncthing/Scans and that’s where all changes were occurring. Does Syncthing need both read and write access to /data/scans?

Yes, /data/syncthing/Scans is the folder that syncthing has read and write access to. But there is a worker process running on the server that puts files into /data/scans which are then not immediately picked up by syncthing.

To be more precise: I have a document scanner on my network that uploads scanned documents via SFTP onto the server. On the server is a daemon running as the system user “scanner”, which removes blank pages, OCRs the scanned documents etc. and then stores them inside /data/scans, the folder that is owned by the “scanner” user. I now want to leverage syncthing so that the scanned and OCRed documents are immediately synced to my other devices, like my phone. So I can just put e.g. an invoice into the scanner, grab my phone and upload the invoice to my tax consultants’ online portal. Or forward a letter I received via e-mail to somebody else, etc.

Based on the description above, it sounds like Syncthing doesn’t require write access to /data/syncthing/Scans, unless your phone is pushing documents back to the server.

(Off hand, I know of more than one way to accomplish what you’re looking to do, with or without bindfs, once I have a handle on what the workflow parameters are.)

Indeed, the phone does not push files to the server, but it does delete/rename/move them out of the Scans folder (otherwise the Scans folder would fill up after a while).

So depending on the security requirements, it’s possible that it could be as simple as a chmod 777 /data/scans, since the scan collator and Syncthing are both allowed to make changes. Of course, it depends on who else has remote login access to the server and what other applications are involved.

The advantage of bindfs is that it avoids directly changing the permissions of files and directories, or relying on supplemental user groups, while still conveniently doing something such as granting access to another single user.

Unless I’m mistaken, the goal sounds like it is to give user “syncthing” – and only user “syncthing” – read/write access to /data/scans, for which an all-or-nothing ‘chmod 777 /data/scans’ would be too expansive.

Adding user syncthing to the scanner group would work, but it still requires a chmod 770 /data/scans – plus it also means that Syncthing would potentially have access to other files owned by the scanner group outside of /data/scans. So still a pretty large fence.
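For completeness, that group-based variant would look roughly like this (just a sketch; Syncthing would also have to be restarted so the new group membership takes effect):

# usermod -aG scanner syncthing
# chmod 770 /data/scans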

Since ext4 supports ACLs, and assuming it’s not turned off…

setfacl -m u:syncthing:rwx /data/scans

The command above adds user syncthing to the access control list for the directory /data/scans, granting read/write/execute access (execute is required to enter the directory and reach its entries; read is what allows listing them). The ACL entry supplements the basic chmod/chown permissions for that one user.
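To verify the result, getfacl shows the extended entries. And since the scanner daemon keeps creating new files inside the directory, a default ACL (the -d variant below) would make newly created files inherit the same access (that last part is an assumption based on the workflow described earlier, not something the setfacl command above requires):

# getfacl /data/scans
# setfacl -d -m u:syncthing:rwx /data/scans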

A couple of caveats:

  • It might not be immediately obvious what’s causing a permissions issue since ACLs aren’t visible via the usual ls command (use getfacl).

  • Also, not all file transfer programs preserve ACLs, even within the same filesystem (e.g. rsync only copies them when its --acls option is used).

(I haven’t used stock Debian in a while so I don’t know if the “acl” package is installed by default, but if not, it’s in Debian’s repo.)

Another option is SELinux, but that’s a whole other can of worms. :wink:

