Switch from Raspberry Pi to Proxmox LXC, different behavior regarding file ownership

Hi everyone,

I recently migrated my Syncthing setup from a Raspberry Pi to a Proxmox LXC, and I’ve run into an issue with file permissions. I’m hoping you can help me with this.

Old Setup:
Raspberry Pi with OpenMediaVault and Syncthing installed. Synchronization between:

  • Folder on my laptop (Debian)
  • Folder on my Android smartphone
  • Target folder: an OMV share

I never had any issues with permissions when accessing files on the share from my laptop. There were no special permission settings configured; only the user “pi” existed on the Raspberry Pi, and user “peter” on the laptop.

New Setup:
Syncthing runs in an LXC container on Proxmox, into which I mount folders from an OMV VM. Now all files created by Syncthing have the following permissions, which confuses me when accessing the folder from my laptop:

  • Owner: (Mapped)User of the Syncthing container
  • Group: users with read-only permissions
  • Others: read-only permissions

My Question: Is there a reason why the permissions behave differently in the new setup? How can I configure it so that the permissions work as they did in the old setup? I appreciate any help and suggestions. Thank you in advance!

Old:

  • (Debian) laptop
  • (Android) smartphone
  • (Debian) OpenMediaVault + Syncthing server on a RPi

New:

  • (Debian) laptop
  • (Android) smartphone
  • (Debian) OpenMediaVault server (KVM?) VM
  • (Linux) Syncthing server LXC VM

That LXC VM hosting Syncthing is effectively another device on your network, complete with its own OS, network interfaces, etc.

If it’s a network mount (e.g. NFS, SMB, WebDAV, sshfs,…), it’s not an ideal setup for Syncthing and other similar network sync applications.

Syncthing will have to rely on regular full scans to detect changes because OS-level filesystem notifications don’t pass through the network share connection (see: Understanding Synchronization).
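
If a network mount ends up being unavoidable, the usual mitigation is to shorten the full-scan interval. A rough sketch of the relevant folder attributes in Syncthing’s config.xml (the folder id, path, and values here are only illustrative):

<folder id="pixel4a" label="Pixel4a" path="/mnt/Pixel4a" type="sendreceive"
        rescanIntervalS="300" fsWatcherEnabled="false">
    ...
</folder>

The same two settings can also be changed per folder in the GUI under Edit → Advanced.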

Syncthing needs to somehow access the independent OMV server – usually using some kind of network filesystem – which then introduces new layers of access permissions on both sides of the connection.

Need a lot more details about how the Syncthing host accesses files on the OMV host, how your laptop accesses files from OMV, etc.

Hey, thanks for your patience! Sorry for not being perfectly clear earlier. Let me break it down a bit better: In my old setup on the Raspberry Pi, I was directly using the subfolders of OMV’s export folder for Syncthing. It was pretty straightforward.

In my new setup with Proxmox, I set up the folders I used earlier in my Raspberry Pi setup as NFS shares and mount them into the LXC container where Syncthing runs.

I also mount those same NFS shares on my laptop. This way, I can easily access things like photos from my phone.

All users are in group 100 (users) and should therefore have read and write permissions. Strangely, the permission for group 100 (users) changes to read-only after syncing. These are the permissions for an example file:

On the laptop:
-rw-rw---- 1 linesquarecube users 669815 Aug 28 20:04 testfile

On the NFS share, as created by Syncthing:
-rw-r--r-- 1 linesquarecube nogroup 669815 Aug 28 20:04 testfile

Even though the users on the different machines have the same name, the UID/GID is somehow different. I know that LXC users on Proxmox are mapped to different UIDs/GIDs on the host, but I was expecting that this wouldn’t apply in my case since Syncthing is running as user linesquarecube.

Anyway, thank you very much for your help! :slight_smile:

Yes, in the old setup all files were local, so there was just one set of users and permission bits.

When posting directory listings, it’s actually more helpful to see the output of ls -ln, so that the numerical IDs are shown instead of the translated user and group names.

The output above isn’t all that unusual. It often means that the GID didn’t line up. The umask in effect on the Syncthing side will also impact the permission bits.

User and group names are just window dressing to make the IDs easier for us to recall, so it’s better to focus on the UID/GID.

In terms of basic file security, a user on one machine with UID 1000 is effectively treated as the same user on a different machine that has the same UID, so it really doesn’t matter what the names are.
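
A quick way to check is to compare the id output on each machine (hypothetical output shown):

$ id
uid=1000(peter) gid=1000(peter) groups=1000(peter),100(users)

If the numeric UID matches across machines, NFS treats them as the same user, regardless of the names.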

If you don’t mind, post the contents of /etc/exports (it’s fine to redact names, but be consistent and leave the permissions related details intact).

Also do the ls -ln directory listing so that the UID/GID is shown instead.
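
For example, the difference looks like this (numeric IDs illustrative):

$ ls -l testfile
-rw-r--r-- 1 linesquarecube nogroup 669815 Aug 28 20:04 testfile
$ ls -ln testfile
-rw-r--r-- 1 1002 65534 669815 Aug 28 20:04 testfile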

First of all, thank you very much for your help!

I guess you mean the /etc/exports from OMV, right? As everything is only available locally, I don’t see any security risk in posting the content.

# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
/export/Bilder 192.168.188.101/32(fsid=311b5c9c-3eb3-4952-99c2-355d886a45f3,rw,anongid=100,insecure,rw,subtree_check) 192.168.188.156/32(fsid=33321873-91ef-42cd-8680-8c88de689a71,ro,anongid=100,insecure,ro,subtree_check)
/export/Dokumente 192.168.188.101/32(fsid=ee829be4-a557-4ff0-bb18-7960da2e60f1,rw,anongid=100,insecure,rw,subtree_check) 192.168.188.2/32(fsid=be641583-0531-4b15-aa84-1ed0b3f3a70e,rw,anongid=110000,anonuid=110000,insecure,rw,subtree_chec>
/export/Pixel4a 192.168.188.2/32(fsid=3b3daa08-1b4c-42dc-ac6b-3b1f2321bec9,rw,anongid=110000,anonuid=110000,insecure,rw,subtree_check) 192.168.188.101/32(fsid=6b29532c-4c5c-4c23-85c2-c0f6608bce49,rw,anongid=100,insecure,rw,subtree_check)
/export 192.168.188.101/32(ro,fsid=0,root_squash,subtree_check)
/export 192.168.188.2/32(ro,fsid=0,root_squash,subtree_check)
/export 192.168.188.101/32(ro,fsid=0,root_squash,subtree_check)

Here is the ls -ln listing across different locations:

  1. In the Syncthing LXC:
-rw-r--r-- 1 1002 65534 10 Sep 3 23:16 testfile
  2. On the NFS share mounted on the laptop:
-rw-r--r-- 1 101002 100 10 3. Sep 23:16 testfile
  3. In the OMV VM where the USB drive is mounted:
-rw-r--r-- 1 101002 100 10 3. Sep 23:16 testfile

I’ve already noticed the discrepancies between the UIDs/GIDs. When I started with Syncthing in the LXC, I experimented a bit based on some suggestions from the Proxmox forum because Syncthing didn’t have write permissions on the mounted NFS share.

From what I’ve gathered in the documentation, it is possible to map UIDs/GIDs in an LXC directly to the same UIDs/GIDs on the host, avoiding the remapping to IDs that start with 100000. Would this maybe solve the problem?

Thank you so much for taking the time to invest in my problem and for your efforts to help me :blush:

Yes! Sorry about that. I should have been more specific. :grinning:

So reviewing what we know based on what’s already been posted…

Viewpoint from each device …

NFS mount in Syncthing LXC VM:

-rw-r--r-- 1 linesquarecube nogroup 669815 Aug 28 20:04 testfile

-rw-r--r-- 1 1002 65534 10 Sep 3 23:16 testfile

Linux laptop:

-rw-rw---- 1 linesquarecube users 669815 Aug 28 20:04 testfile

-rw-r--r-- 1 101002 100 10 3. Sep 23:16 testfile

As you already know, in order to avoid collisions, user and group IDs inside an LXC container are mapped to IDs that are offset by 100000 on the host side (on the host, /etc/subuid and /etc/subgid set the subordinate IDs).

So the output above looks as expected with the files created on the NFS mount by Syncthing in the LXC container having an offset UID of 101002 (100000 + 1002) from the viewpoint of the PVE host.
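
For reference, on a stock PVE host the subordinate ranges typically look like this (defaults; your values may differ):

$ cat /etc/subuid
root:100000:65536
$ cat /etc/subgid
root:100000:65536

i.e. container IDs 0-65535 get shifted to host IDs 100000-165535.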

It was once common for regular user accounts to start at UID:GID 100:100, but as the number of network services grew and it became better security practice not to run every daemon as root, the default starting point for regular user accounts shifted to 1000:1000. The reason I mention this little bit of UID:GID history is that it helps interpret the mix of UIDs and GIDs above.

Since your laptop happens to have a “users” group with ID 100, ls -l didn’t end up showing just a number when listing the details for testfile.

At the same time, inside the Syncthing LXC VM, files created under the NFS mount get assigned 1002:1002 (assuming the primary GID is 1002).

The other clue is Syncthing LXC VM’s view of the NFS mount where the UID:GID for testfile is 1002:65534. It means that the OMV VM didn’t have a match for GID 1002, so NFS defaults to assigning it 65534 which is almost always the group named “nogroup” or “nobody”.
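
You can confirm what 65534 resolves to on the OMV side with getent; on Debian-based systems the typical entries are:

$ getent passwd 65534
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
$ getent group 65534
nogroup:x:65534: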

Inside the OMV VM, on the USB drive shared via NFS:

-rw-r--r-- 1 101002 100 10 3. Sep 23:16 testfile

The output above aligns with what the laptop sees, and is not unexpected given what /etc/exports says.

/etc/exports …

Because NFS allows bunching together all of the access control entries for an exported filesystem on a single line, or breaking them up into separate lines, I’ve split the lines and reordered them to make things easier to read, compare, and follow, but otherwise it’s verbatim.

I’m also ignoring the lines that export read-only shares since if the device mounting it cannot change anything, file ownership is OMV’s responsibility.

First, it could have been due to the copy-and-paste, but the original “/export/Dokumente” line is incomplete. It’s missing at least 2 characters and the > shouldn’t be there at the end of the line.

So looking only at the read/write NFS shares:

/export/Bilder          192.168.188.101/32(fsid=311b5c9c-3eb3-4952-99c2-355d886a45f3,rw,anongid=100,insecure,rw,subtree_check)
/export/Dokumente       192.168.188.101/32(fsid=ee829be4-a557-4ff0-bb18-7960da2e60f1,rw,anongid=100,insecure,rw,subtree_check)
/export/Dokumente       192.168.188.2/32(fsid=be641583-0531-4b15-aa84-1ed0b3f3a70e,rw,anongid=110000,anonuid=110000,insecure,rw,subtree_check)
/export/Pixel4a         192.168.188.101/32(fsid=6b29532c-4c5c-4c23-85c2-c0f6608bce49,rw,anongid=100,insecure,rw,subtree_check)
/export/Pixel4a         192.168.188.2/32(fsid=3b3daa08-1b4c-42dc-ac6b-3b1f2321bec9,rw,anongid=110000,anonuid=110000,insecure,rw,subtree_check)

To be honest, the UID/GID assignments above are a bit of a mess. :crazy_face:

It’s not currently clear why the anonymous UID/GID is set to 110000 in some instances (it doesn’t match Syncthing’s subordinate UID/GID), while in other instances only the anonymous GID is set to 100 (which equates to “users” on the laptop and perhaps also in OMV).

In OMV, for /export/Dokumente, NFS is told to assign all newly created files and directories GID 100 because the anongid=100 option is declared. The same applies where anonuid=110000 is used: it assigns UID 110000 to all files.

(The “anonuid” and “anongid” options basically override whatever ownership is being sent over the wire.)

Also for /export/Dokumente, hosts 192.168.188.101 and 192.168.188.2 have different anonuid and anongid declarations – that kind of mix almost guarantees permissions issues.

Since it doesn’t sound like you’re actively maintaining a uniform set of UID:GID across devices either manually or via some type of “yellow pages”, and it also doesn’t sound like you’ve got multiple users that require protected file space, my recommendation is to choose a pair of UID:GID for all NFS shares (e.g. setting anonuid and anongid to nobody:nogroup).

Because NFS is what ultimately reads and writes the files, and nfsd on Linux runs with root privileges, it has no access issues. In OMV, everything on that USB drive will be owned by a single user regardless of whether it’s Syncthing in your LXC VM or you on your laptop doing the reading/writing.

And if your USB drive is formatted with a filesystem that doesn’t support Linux file attributes (e.g. NTFS), this approach is an even better fit.

Yes, it’s possible to change the UID/GID offset, but it’s generally not a great idea unless you only have one LXC container or expect to only have very few users.

For security reasons, a lot of network services run under non-privileged, dedicated users, so the chances of a conflict aren’t as low as it might seem.
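
For completeness, a direct mapping is done with lxc.idmap entries in the container’s /etc/pve/lxc/<CTID>.conf. A sketch for passing container UID/GID 1002 straight through (hypothetical values; the host’s /etc/subuid and /etc/subgid would also each need a root:1002:1 entry):

# container IDs 0-1001 -> host IDs 100000-101001
lxc.idmap: u 0 100000 1002
lxc.idmap: g 0 100000 1002
# container ID 1002 -> host ID 1002 (the pass-through)
lxc.idmap: u 1002 1002 1
lxc.idmap: g 1002 1002 1
# container IDs 1003-65535 -> host IDs 101003-165535
lxc.idmap: u 1003 101003 64533
lxc.idmap: g 1003 101003 64533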

Sorry for my late reply and thank you VERY much for your detailed replies. You are really enlightening me with your answers.

Absolutely right—this is a copy-and-paste issue that I did not recognize. Your interpretation that only two characters are missing is correct.

Thank you for pointing that out. I seem to have misunderstood or misinterpreted that. My understanding was that I needed to specify the UID/GID of the user in OMV for each client where I’m hosting the NFS share. After initial problems with write permissions in the Syncthing LXC, I didn’t make any further adjustments in OMV, as the problem was already solved by inheriting the folder permissions.

The client ending with .101 is my laptop, and the client ending with .2 is the Proxmox host. Since the UID/GID from the LXC is added to 100000, I thought I needed to specify the UID/GID of the user mapped to the Proxmox host here. But as you can see from my reply to the quote above, thanks to your explanation, I’ve already understood that this isn’t correct :blush:

Your assumption is correct - since I had no problems with my previous setup using the Raspberry Pi, I never thought about unified users.
(However, based on my current understanding, this was simply because the user ‘pi’ that ran OMV and Syncthing, as well as the user on my laptop, ‘coincidentally’ had the same UID/GID.)

It’s also true that I don’t need separate access areas, as there are no other users on my local network. Would setting the UID/GID to nobody:nogroup ensure that all folders and files are not assigned to any user or group, and thus ‘all’ users, regardless of their UID/GID, would have access? Would it also be possible to assign the UID/GID mapped on the Proxmox host to users on different devices and VMs? Or would the effort be too big and difficult to maintain?

Thanks a ton for clearing things up! I really appreciate your help and it’s made a big difference in my understanding.

Thanks for this hint – I will stick to your suggestions in your first answer :blush:

It can be any choice of UID:GID, but nobody:nogroup is convenient because it’s almost always the same UID:GID on every Linux distro.

With anonuid and anongid set to 65534, NFS overrides the ownership of all files, effectively making the network shares open-access to all users and devices (that are authorized to connect). It doesn’t matter which user creates a file/directory, because it’ll be owned by 65534:65534.

It’s possible, and it’s what NIS is designed for, but if you’re the only user, it’s probably not worth the effort.

Because mounting a filesystem requires root-level privileges, and for security reasons an LXC VM is most often unprivileged, an NFS share cannot be directly mounted inside the LXC VM (i.e. something like mount -t nfs server:/share /mnt/share doesn’t work inside the VM).
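
The usual workaround is to mount the NFS share on the PVE host and pass it into the container as a bind mount. A sketch (the paths and <omv-ip> are placeholders):

# on the Proxmox host:
mount -t nfs <omv-ip>:/export/Pixel4a /mnt/omv/Pixel4a

# then in /etc/pve/lxc/<CTID>.conf:
mp0: /mnt/omv/Pixel4a,mp=/mnt/Pixel4a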

And as you know, with an LXC VM, the internal users in the “guest” OS are mapped to users on the “host” OS side (e.g. 1001 → 101001). So you’d have to create a user with UID:GID 101001:101001 on every device for the files/folders to appear with the correct user:group names.

But the downside of doing that is that the NFS share(s) will end up with files/directories owned by a mix of users with different UIDs/GIDs, taking you back to the original read/write permission issues.

So bypassing it by making everything always owned by a single user:group avoids the tangle of permissions that isn’t necessary for your current use case.

You’re welcome. :grinning:

Thanks again for the explanation :blush:. I really think just using UID:GID nobody:nogroup is the easiest and most convenient option for my use case.

Unfortunately, I still have some issues. I’ve set anongid=65534 and anonuid=65534 in OMV and created some test files to be synced. Somehow the files are still owned by the host-mapped user who is running Syncthing in the LXC, and not by the defined nobody:nogroup :confused:

-rw-r--r-- 1 101002 100        9 10. Sep 21:15  test1234

Did I miss something?

Thanks for your help again - hope we can fix this before Christmas :smiley:

What’s /etc/exports look like after the most recent changes?

For this year? :wink:

The content is as follows after the change:

/export/Bilder 192.168.188.101/32(fsid=311b5c9c-3eb3-4952-99c2-355d886a45f3,rw,anongid=100,insecure,rw,subtree_check) 
/export/Dokumente 192.168.188.101/32(fsid=ee829be4-a557-4ff0-bb18-7960da2e60f1,rw,anongid=100,insecure,rw,subtree_check) 192.168.188.2/32(fsid=be641583-0531-4b15-aa84-1ed0b3f3a70e,rw,anongid=65534,anonuid=65534,insecure,rw,subtree_check)
/export/Pixel4a 192.168.188.2/32(fsid=3b3daa08-1b4c-42dc-ac6b-3b1f2321bec9,rw,anongid=65534,anonuid=65534,insecure,rw,subtree_check) 192.168.188.101/32(fsid=6b29532c-4c5c-4c23-85c2-c0f6608bce49,rw,anongid=100,insecure,rw,subtree_check)
/export 192.168.188.101/32(ro,fsid=0,root_squash,subtree_check)
/export 192.168.188.2/32(ro,fsid=0,root_squash,subtree_check)
/export 192.168.188.101/32(ro,fsid=0,root_squash,subtree_check)

I realized that I accidentally cut off the lines starting with /export 192.168.188.xxx last time.

For now, I have started with the share to which Syncthing writes when syncing pictures from my phone, namely /export/Pixel4a/. I also found that I’m unable to mount a share on my laptop after switching from anongid=100 to anongid=65534, anonuid=65534.

This is the output of ls -ln for the last two pictures that have been synced:

-rw-r--r-- 1 101002 100   2129657 13. Sep 19:28 PXL_20240913_172806932.jpg
-rw-r--r-- 1 101002 100   2225414 13. Sep 19:30 PXL_20240913_173018532.jpg

Well, that is on my wish list! :smile:

The export parameters including anongid and anonuid don’t restrict access to a NFS share, so no matter what they’re set to, mounting will still work (even if you ultimately cannot access the contents) as long as the host declaration permits it, e.g. 192.168.188.101/32() in your latest /etc/exports.

/export/Pixel4a 192.168.188.2/32(fsid=3b3daa08-1b4c-42dc-ac6b-3b1f2321bec9,rw,anongid=65534,anonuid=65534,insecure,rw,subtree_check)

Need to add the all_squash parameter so that when a client sends a UID:GID with a request, the NFS server maps them to anonuid and anongid.
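
So the entry would become something like this (a sketch with the fsid shortened; since OMV auto-generates /etc/exports, make the change through its UI and re-export, e.g. with exportfs -ra):

/export/Pixel4a 192.168.188.2/32(fsid=...,rw,all_squash,anonuid=65534,anongid=65534,insecure,subtree_check)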

It seems Christmas has come early this year! :laughing:

I wasn’t aware of using all_squash in combination with anongid and anonuid. If I’ve understood correctly, specifying anongid/anonuid without all_squash is pretty much pointless.

But now for the result of my newly acquired knowledge…

As an example, the line for the NFS share of my phone’s pictures in /etc/exports now looks like this:

/export/Pixel4a 192.168.188.2/32(fsid=3b3daa08-1b4c-42dc-ac6b-3b1f2321bec9,rw,all_squash,anongid=1000,anonuid=1000,insecure,rw,subtree_check) 192.168.188.101/32(fsid=6b29532c-4c5c-4c23-85c2-c0f6608bce49,rw,all_squash,anongid=1000,anonuid=1000,insecure,rw,subtree_check)

I’ve decided to go with the UID/GID of my laptop’s user → 1000.

This configuration yields the following permissions for an example file:

-rw-r--r-- 1   1000 100   2317657 17. Sep 21:14 PXL_20240917_191358196.jpg

This is exactly what I wanted to achieve! :star_struck:

Just to confirm again that I’ve understood everything correctly:

  • all_squash causes all accesses to the NFS share to be mapped to a default anonymous user with uid=65534 / gid=65534.
  • If I use anonuid and anongid in addition to all_squash, the mapping is not to the default user but to the uid / gid that I specify.

Thanks to you @gadget, I’ve reclaimed my digital throne and am once again the master of my files!

I’d really like to say thank you for your help, and I truly appreciate the friendly and open communication. People like you make it enjoyable for newcomers to learn without being discouraged by overly complicated or snarky responses. This kind of support isn’t something to be taken for granted, and I want to express my heartfelt thanks once again for that! :heart:

It’s Christmas in September! There’s absolutely zero chance that I’d pass for Santa Claus, but maybe a taller elf? :smiley:

all_squash controls how a UID:GID from a client to the server is used, while anonuid and anongid control what UID:GID the server sends to the client for the anonymous user and also what UID:GID the server uses when overriding the client.

If, for example, all files for an NFS share on the server are owned by the anonymous user with UID:GID 65534:65534, the NFS client also happens to have the same UID:GID pair for its anonymous user, and Syncthing is running as the anonymous user, then all_squash wouldn’t be necessary, but anonuid and anongid would be if the server and client need to be aligned on who the anonymous user is.

Correct.

(It’s possible that the anonymous user is something other than 65534:65534, but pretty unlikely.)

Correct.

King of your Digital Domain once again… :grin:

Thanks! Glad I could help. :nerd_face:

Hopefully others who are trying the same setup with Syncthing + LXC + NFS will find this useful; even where Syncthing isn’t involved, it’s applicable to other applications, e.g. Plex/Kodi, Duplicacy, etc.
