These are the permissions of my /dev/input/event* files:

crw-rw---- 1 root input 13, 64 Mar 21 09:02 /dev/input/event0
crw-rw---- 1 root input 13, 65 Mar 21 09:02 /dev/input/event1
crw-rw---- 1 root input 13, 66 Mar 21 09:02 /dev/input/event2
crw-rw---- 1 root input 13, 67 Mar 21 09:02 /dev/input/event3
crw-rw---- 1 root input 13, 68 Mar 21 09:02 /dev/input/event4
...

As you can see there is no + after the permissions, which means no special ACL entries; I also confirmed this with getfacl. I'm not part of the input group either, and I'm not running X11 as root: I just type startx in the console after logging in as my user. So my question is: how and where does udev give X11 input drivers (e.g. xf86-input-libinput) permission to open those files without ACLs? If I want to open the /dev/input/event* files I have to use sudo or be in the input group, yet rootless X11 opens them with no problem.

Here is a minimal C program to demonstrate the issue. If keybit_limit is set to anything below 578, the X11 drivers will have permission to read the corresponding /dev/input/event* node and a device with the given name will show up in xinput output. Anything higher, up to KEY_CNT, causes permission errors in the Xorg log and xinput won't show the new device (although you can still see the device with sudo evtest). The permissions and group of the /dev/input/event* node are exactly the same in both situations, but in the KEY_CNT scenario X11 can't read it and the 1 2 3 keys are not registered in Xorg.

#include <stdio.h>
#include <sys/ioctl.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <linux/uinput.h>

#define ERROR(format, ...) { \
    fprintf(stderr, "\x1b[31merror: " format "\x1b[0m\n", ##__VA_ARGS__); \
    return 1; \
}

#define SEND_EVENT(ev_type, ev_code, ev_value) { \
    uev.type = ev_type; \
    uev.code = ev_code; \
    uev.value = ev_value; \
    write(ufd, &uev, sizeof(uev)); \
}

int main(int argc, char *argv[])
{
    if (argc < 2)
        ERROR("needs uinput device name!");

    int ufd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (ufd < 0)
        ERROR("could not open '/dev/uinput'");

    ioctl(ufd, UI_SET_EVBIT, EV_KEY);

    int keybit_limit;
    /* X11 will recognize this device for me */
    // keybit_limit = 577;
    /* but anything above that will cause permission denied errors
       in the xorg log and xinput will not show the device */
    keybit_limit = KEY_CNT;
    for (int i = 0; i < keybit_limit; i++) {
        if (ioctl(ufd, UI_SET_KEYBIT, i) < 0)
            ERROR("cannot set uinput keybit: %d", i);
    }

    struct uinput_setup usetup;
    memset(&usetup, 0, sizeof(usetup));
    usetup.id.bustype = BUS_USB;
    strcpy(usetup.name, argv[1]);
    if (ioctl(ufd, UI_DEV_SETUP, &usetup) < 0)
        ERROR("cannot set up uinput device");
    if (ioctl(ufd, UI_DEV_CREATE) < 0)
        ERROR("cannot create uinput device");

    struct input_event uev;
    uev.time.tv_sec = 0;
    uev.time.tv_usec = 0;
    sleep(1);

    /* press 1 2 3 */
    SEND_EVENT(EV_KEY, KEY_1, 1);
    SEND_EVENT(EV_KEY, KEY_2, 1);
    SEND_EVENT(EV_KEY, KEY_3, 1);
    SEND_EVENT(EV_SYN, SYN_REPORT, 0);

    /* release 1 2 3 */
    SEND_EVENT(EV_KEY, KEY_1, 0);
    SEND_EVENT(EV_KEY, KEY_2, 0);
    SEND_EVENT(EV_KEY, KEY_3, 0);
    SEND_EVENT(EV_SYN, SYN_REPORT, 0);

    /* give you time to check xinput */
    sleep(300);

    ioctl(ufd, UI_DEV_DESTROY);
    close(ufd);
    return 0;
}

Here are the permission errors in the ~/.local/share/xorg/Xorg.0.log file when keybit_limit = KEY_CNT and the uinput device name passed to the program is "MYDEVICE":

[ 28717.931] (II) config/udev: Adding input device MYDEVICE (/dev/input/event24)
[ 28717.931] (**) MYDEVICE: Applying InputClass "libinput pointer catchall"
[ 28717.931] (**) MYDEVICE: Applying InputClass "libinput keyboard catchall"
[ 28717.931] (**) MYDEVICE: Applying InputClass "system-keyboard"
[ 28717.931] (II) Using input driver 'libinput' for 'MYDEVICE'
[ 28717.933] (EE) systemd-logind: failed to take device /dev/input/event24: No such device
[ 28717.933] (**) MYDEVICE: always reports core events
[ 28717.933] (**) Option "Device" "/dev/input/event24"
[ 28717.933] (EE) xf86OpenSerial: Cannot open device /dev/input/event24
        Permission denied.
[ 28717.933] (II) event24: opening input device '/dev/input/event24' failed (Permission denied).
[ 28717.933] (II) event24 - failed to create input device '/dev/input/event24'.
[ 28717.933] (EE) libinput: MYDEVICE: Failed to create a device for /dev/input/event24
[ 28717.933] (EE) PreInit returned 2 for "MYDEVICE"
[ 28717.933] (II) UnloadModule: "libinput"

I have tested both the evdev and libinput X11 drivers with an xorg.conf.d file and both behave the same. If I put myself in the input group or use the uaccess tag in a udev rule for the device, then the X11 drivers can read it. This suggests that in the <578 scenario the device is opened as root, but in the KEY_CNT scenario it is opened as the user. Why is that? And which process is doing the opening?
Xorg does not open the device nodes directly – it makes D-Bus IPC calls to the systemd-logind service, which opens the device nodes on behalf of the caller (after checking things like which user is "logged in" at the foreground tty) and forwards the file descriptors to Xorg, using the fd-passing capability of D-Bus (built on the SCM_RIGHTS feature of Unix sockets). See org.freedesktop.login1(5) for the relevant TakeDevice() D-Bus API.

This method allows systemd-logind to proactively revoke access to input devices when the foreground tty is switched away to another user's session (in contrast, setting and removing ACLs would allow one user's programs to simply hold a file descriptor open and continue reading input even while another user's session is in the foreground). For non-systemd distributions, seatd provides a functionally similar API (libseat_open_device), but Xorg does not support it yet, relying instead on the setuid Xorg.wrap wrapper.

(The revocation mechanism is the EVIOCREVOKE ioctl, specific to input devices. There is something similar for DRM devices, but so far there is no equivalent for e.g. audio or camera devices; sound servers such as PipeWire listen for logind's "session switch" signals and cooperate by closing and reopening the devices.)
How/Where does udev give permission to X11 input drivers to open /dev/input/event* files without also giving access to the logged-in user?
I'm struggling with file and directory permissions. ls -l is telling me something that test -w contradicts.

$ ls -l
total 1792
-rw-r--r-- 1 root www-data     168 Jan 29 23:53 CODE_OF_CONDUCT.md
-rw-r--r-- 1 root www-data   19421 Jan 29 23:53 COPYING
-rw-r--r-- 1 root www-data   14547 Jan 29 23:53 CREDITS
-rw-r--r-- 1 root www-data      95 Jan 29 23:53 FAQ
-rw-r--r-- 1 root www-data 1414049 Jan 29 23:53 HISTORY
-rw-r--r-- 1 root www-data    3638 Jan 29 23:53 INSTALL
-rw-r--r-- 1 root www-data    5273 Jan 29 23:54 LocalSettings.php
-rw-r--r-- 1 root www-data    1530 Jan 29 23:53 README.md
-rw-r--r-- 1 root www-data   36717 Jan 29 23:53 RELEASE-NOTES-1.39
-rw-r--r-- 1 root www-data     199 Jan 29 23:53 SECURITY
-rw-r--r-- 1 root www-data    4371 Jan 29 23:53 UPGRADE
-rw-r--r-- 1 root www-data    4496 Jan 29 23:53 api.php
-rw-r--r-- 1 root www-data  156078 Jan 29 23:53 autoload.php
drwxr-xr-x 3 root www-data    4096 Jan 29 23:53 images

$ sudo -u www-data test -r INSTALL; echo "$?"
0
$ sudo -u www-data test -w INSTALL; echo "$?"
1
$ sudo -u www-data test -x INSTALL; echo "$?"
1

www-data is a member of www-data:

$ groups www-data
www-data : www-data

apache seems to agree with ls, as it can't upload to images/. What am I missing?

Operating System: Ubuntu 22.04.1 LTS
Kernel: Linux 5.15.0-58-generic
test -w works correctly (your title says test -r but the question is about test -w, so I assume that was a typo). If you look at man test or man [ you can see:

-w FILE
       FILE exists and write access is granted

In your case, with these permissions:

-rw-r--r-- 1 root www-data 3638 Jan 29 23:53 INSTALL

the root user (first rw-), members of the group www-data (second r--) and everyone else (third r--) all have read access, as long as they can also reach the directory; so test -r returns 0. However, the www-data group has neither write nor execute permission, and for that reason evaluating those permissions for the www-data user yields 1 (false):

$ sudo -u www-data test -w INSTALL; echo "$?"
1
$ sudo -u www-data test -x INSTALL; echo "$?"
1

About "apache seems to agree with ls as it can't upload to images/": that's consistent too, because the images directory gives no write access to members of the group www-data (use sudo chmod g+w to grant it). The user running the apache process also needs search access (x) on all the directories leading up to that directory.
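You can reproduce the class-by-class breakdown on a scratch file without juggling sudo; a quick sketch (the 644/664 modes are just the values from this question):

```shell
# Create a scratch file with the same bits as INSTALL (rw-r--r--):
f=$(mktemp)
chmod 644 "$f"
stat -c '%A %a' "$f"      # -rw-r--r-- 644: owner rw, group r, others r

# The fix suggested for the images directory, applied here to the file:
chmod g+w "$f"
stat -c '%a' "$f"         # 664: the group now has write access
rm -f "$f"
```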
test -r contradicts ls -l
I understand that chmod u+w means: give the user/owner (u) write permission (w, which is equivalent to the number 2). So the new permissions of the file after running the chmod command above would be, in octal:

2XY

where 2 (write) is the new owner permission, and XY stands for the group/other permissions that weren't modified. Is that correct?
No, it actually adds the 2 to the permissions the owner already had. So if the owner originally had only read (4) and execute (1) permissions, after running chmod u+w the owner's permissions would be 1+2+4=7 instead of 5. If the owner had only read permission, after the chmod command they would have 4+2=6 instead of just 4.

By the way, if the command were chmod u=w (an equals sign instead of a plus sign), then you would be correct, and the owner's permissions would change to 2 (write only). That's the difference between + and = in chmod: the first adds to the existing permissions, the second replaces them. If you want to set all three classes (u, g and o) at once, use the octal form, e.g. chmod 200, which gives the owner write only and clears everything else.
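The difference between + and = is easy to verify on a scratch file:

```shell
f=$(mktemp)
chmod 500 "$f"            # r-x------ : owner has 4+1=5
chmod u+w "$f"            # '+' adds write: 5+2=7
stat -c '%a' "$f"         # 700
chmod u=w "$f"            # '=' replaces the owner bits entirely
stat -c '%a' "$f"         # 200
rm -f "$f"
```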
Does "chmod u+w" mean give the user (owner) writing permissions ("2XY" in octal)?
I'm trying to create a segregated workspace for multiple groups; each group member should only be able to read, write and view their associated shared folder. I've created two user groups, groupATeam and groupBTeam, to handle the permissions of users, and I've assigned group permissions to the relevant project folders groupA and groupB.

# Check project folder permissions
admin@computer:/folder/data$ ls -al /folder/data | grep groupA
drwsrws--x 2 root groupATeam 4096 Jun 24 11:56 groupA
admin@computer:/folder/data$ ls -al /folder/data | grep groupB
drwsrws--- 2 root groupBTeam 4096 Jun 24 11:38 groupB

As the admin user, who is in both groups, I can access both folders and read and write without issue.

# Check groups
admin@computer:/folder/data$ getent group groupATeam
groupATeam:x:1009:worker_3,worker_4,admin
admin@computer:/folder/data$ getent group groupBTeam
groupBTeam:x:1008:worker_1,worker_2,admin

# Check admin can access and write to groupA folder
admin@computer:/folder/data$ cd groupA/
admin@computer:/folder/data/groupA$ ls
test_file.txt
admin@computer:/folder/data/groupA$ cd ..

# Check admin can access groupB folder
admin@computer:/folder/data$ cd groupB/
admin@computer:/folder/data/groupB$ ls
test_file.txt

People in groupATeam also seem to have the correct permissions, being able to access, read and write to their folder but not groupB's folder.

# Worker 3 is part of groupATeam and therefore should only be able to interact with the groupA folder, not groupB
worker_3@computer:~$ cd /folder/data/groupA/
worker_3@computer:/folder/data/groupA$ touch test_file101.txt
worker_3@computer:/folder/data/groupA$ ls
test_file.txt  test_file101.txt
worker_3@computer:/folder/data/groupA$ vim test_file.txt

# Check a non-member can't access the restricted groupB folder
worker_3@computer:~$ cd /folder/data/groupB/
bash: cd: /folder/data/groupB/: Permission denied
# This is the correct behaviour I'm looking for

The issue seems to be with users of groupBTeam:

# Worker 1 is part of groupBTeam and therefore should only be able to interact with the groupB folder, not groupA
worker_1@computer:/folder/data$ cd groupB/
worker_1@computer:/folder/data/groupB$ ls
test_file.txt
worker_1@computer:/folder/data/groupB$ touch test_file101.txt
worker_1@computer:/folder/data/groupB$ ls
test_file.txt  test_file101.txt
worker_1@computer:~$ cd /folder/data/groupA/    # This shouldn't work
worker_1@computer:/folder/data/groupA$ ls
ls: cannot open directory '.': Permission denied
worker_1@computer:/folder/data/groupA$ cd ..
# Incorrect behaviour: worker_1 can enter the groupA folder even though they aren't part of that group

Members of groupBTeam can access the groupA folder, which isn't the desired behaviour. Can anyone explain why I'm not getting the expected behaviour and how I can rectify it? For reference, I followed these steps to set up the groups and folder permissions: https://www.tutorialspoint.com/how-to-create-a-shared-directory-for-all-users-in-linux
You have the execute bit set for others on the groupA directory:

drwsrws--x 2 root groupATeam 4096 Jun 24 11:56 groupA

That allows everyone to traverse the directory regardless of group membership. Notice that there are no bits set for others on the groupB directory, which is why members of groupATeam can't access it:

drwsrws--- 2 root groupBTeam 4096 Jun 24 11:38 groupB

To get what you want, remove the execute bit for others from the groupA directory with either of the following commands:

chmod 2770 /path/to/groupA
chmod o-x /path/to/groupA

Neither the users in groupBTeam nor anyone else will then be able to access it. If you want the change to affect everything inside the directory, including files:

chmod -R 2770 /path/to/groupA
chmod -R o-x /path/to/groupA
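The problem state and the fix can be reproduced on a scratch directory (no extra groups needed just to see the bits change):

```shell
d=$(mktemp -d)
chmod 2771 "$d"           # like groupA: setgid, rwx for owner/group, x for others
stat -c '%A' "$d"         # drwxrws--x: the trailing x lets anyone traverse
chmod o-x "$d"            # the fix (equivalent to chmod 2770 here)
stat -c '%A %a' "$d"      # drwxrws--- 2770
rm -rf "$d"
```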
How to set up up group specific folders in linux
I have created a directory as follows:

root@host [~]# mkdir mydir
root@host [~]# ls -ld mydir
drwxrwxr-x 2 test test 4096 Mar 2 19:36 mydir
root@host [~]#

Then I changed the group ownership of the directory using the chgrp command, and also added the setgid permission:

root@host [~]# chgrp test2 mydir/
root@host [~]# chmod g+s mydir
root@host [~]# ls -ld mydir/
drwxrwsr-x 2 test test2 4096 Mar 2 19:36 mydir/
root@host [~]#

Now, when I create a file under the directory, I see the file is missing execute permission:

root@host [~]# touch mydir/myfile3
root@host [~]# ls -l mydir/myfile3
-rw-rw-r-- 1 test test2 0 Mar 2 19:59 mydir/myfile3
root@host [~]#

My understanding was that the file should get exactly the same permissions as the parent directory, and the parent directory has rwx permissions, so the file should also get rwx.
The sgid bit on directories causes files created inside them to be owned by the group owning the directory; that's all. You can see this in your example: myfile3 is owned by group test2. The sgid bit does not determine the permission bits of files created inside the directory: those come from the mode requested by the creating program (0666 for touch) masked by your umask, which is why a new regular file never comes out executable. Again, see Understanding UNIX permissions and file types for details.
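A quick sketch of why the directory's rwx never reaches the file (the 2775/022 values are just an illustration):

```shell
# New files get (requested mode) AND NOT (umask), never the directory's mode:
d=$(mktemp -d)
chmod 2775 "$d"           # setgid directory with rwx for owner and group
umask 022
touch "$d/newfile"        # touch requests 0666; 0666 & ~022 = 0644
stat -c '%a' "$d/newfile" # 644: no execute bit, regardless of the directory's rwx
rm -rf "$d"
```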
Setgid on a directory is not giving execute permission on the file within directory
I really tried to find a solution before posting here, but I couldn't find any. I'm trying to allow a specific user to run apt update and apt upgrade without entering his password, so I ran sudo visudo and edited the line for this user:

user ALL=(ALL) NOPASSWD: /usr/bin/apt update /usr/bin/apt upgrade

sudo -l gives me the following output:

User user may run the following commands on machine:
    (ALL : ALL) ALL
    (ALL) NOPASSWD: /usr/bin/apt update /usr/bin/apt upgrade

What am I missing? Just allowing ALL is not an option. The order should be fine. Any help much appreciated.
Separate commands need to be separated by a comma in the sudoers file, e.g.

user ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade
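You can also syntax-check such a line before installing it: visudo -cf parses a candidate file without touching the real sudoers (guarded here in case visudo isn't installed on the machine where you test):

```shell
f=$(mktemp)
printf '%s\n' 'user ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade' > "$f"
if command -v visudo >/dev/null 2>&1; then
    visudo -cf "$f"       # exits 0 if the file parses cleanly
else
    echo "visudo not available; skipping check"
fi
rm -f "$f"
```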
sudo for specific commands without password doesn't work
I was having a look at how sudo grants work across parent/child processes, but I'm confused. If I open a terminal (emulator, in a graphical environment) and execute:

$ sudo bash -c "sudo -v"   # I'm asked for the password
$ sudo -v                  # I'm not asked for the password

it looks as if, when a child process is granted sudo permissions (through a subshell), the parent (the current shell) is granted them as well. However, if I open a new tab in the same terminal emulator and execute:

$ sudo -v

now I am asked for the password. Since both shells/tabs are children of the same terminal emulator process, the sudo grants must be based on something different from (or additional to) the simple parent/child relationship. I've checked the sudo man page, but it doesn't seem to contain this specific information. How precisely are permissions granted in relation to parent/child processes?
"I can see that if a child process is granted sudo permissions (through a subshell), the parent (the current shell) is granted them as well."

No. The child doesn't affect the parent. The second sudo requires no password simply because you ran it within the timestamp timeout after the first one (5 minutes by default; some distributions raise it to 15). The sudo in the other tab runs on a different terminal, and by default sudo keeps a separate cached-credentials record per tty (the tty_tickets behaviour, timestamp_type=tty in the sudoers(5) manual). That record is empty, therefore a password is required.
How are sudo permissions granted across parent/children processes?
I am trying to change the ownership of a directory to a certain group. I ran getent group and I can see the group I'm interested in there: sudo:x:27. Now I am executing chown in the following way:

sudo chown -R sudo /PATH/TO/DIR

and I get:

id 'sudo': no such user

I also tried sudo chown -R 27 /PATH/TO/DIR. That command actually executes, but then when I try, for example, to mkdir in the directory I should now own, I get permission denied (and I did check with ls -l that the ownership changed). What am I doing wrong?
chown takes the user and group as user:group; a bare name or number changes the user, which is why chown sudo fails (there is no user called sudo) and chown 27 set the owner to UID 27 rather than the group. If you only want to change the group:

sudo chown -R :sudo /PATH/TO/DIR

(or equivalently sudo chgrp -R sudo /PATH/TO/DIR). For group members to create files inside, the directory also needs group write and search permission, e.g. chmod g+wx.
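The leading-colon form can be tried without root by targeting a group you already belong to (your primary group here stands in for sudo, which would need root to assign):

```shell
d=$(mktemp -d)
g=$(id -gn)               # our own primary group, as a safe example target
chown -R ":$g" "$d"       # leading colon: change the group, leave the owner
chmod g+w "$d"            # the group also needs write (and x) to create files inside
stat -c '%G' "$d"
rm -rf "$d"
```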
Changing ownership of a directory
I thought I understood how permissions work in Linux, until I became aware of this situation. I have a subfolder, oddball, that contains files with the following permissions:

ls -aRl /mnt/oddball
/mnt/oddball:
total 120
drwx------ 2 bendipa bendipa  4096 Aug 27 00:16 .
drwxr-xr-x 6 root    root     4096 Jul  1 15:51 ..
-rw-rw-r-- 1 bendipa bendipa    12 Aug 27 00:16 Adoc.txt
-rwx------ 1 bendipa bendipa 12655 Aug 26 18:16 .IntBnkDet.doc.pgp
-rwx------ 1 bendipa bendipa 14550 Aug 26 19:04 .PersonalDetail.odt.pgp
-rwx------ 1 bendipa bendipa 76357 Aug 15 15:43 .StatePensionGateway.doc.pgp

The problem is that every time I make a new file within /mnt/oddball it takes the permissions shown for Adoc.txt, whereas I assumed files would take permissions from their parent folder; in this case oddball's, shown first in the listing. I note that the parent folder of oddball, /mnt, is root-owned and shows permissions different from oddball's, but I would not expect those to have any effect on oddball's files. Of course it's easy enough to change the permissions of new files in /mnt/oddball in the terminal, but having to do so each time a file is created is a bit tedious. Or is this a necessity in Linux?
Permissions are not inherited from the parent folder. On a normal filesystem, new files are created with mode 0666 (rw-rw-rw-), modified by the umask value (inverted). For example:

$ umask 0
$ touch foo
$ ls -l foo
-rw-rw-rw- 1 sweh sweh 0 Aug 26 21:14 foo
$ umask 022
$ touch bar
$ ls -l bar
-rw-r--r-- 1 sweh sweh 0 Aug 26 21:14 bar
$ umask 0222
$ echo hello > baz
$ ls -l baz
-r--r--r-- 1 sweh sweh 6 Aug 26 21:15 baz

We can see that the umask value determines the permissions the new file is created with. To explain the numbers: r=4, w=2, x=1. Each file has permissions for "owner", "group" and "world". If you look at the number in octal you can break it down; e.g. a permission of 0123 would mean:

owner = 1 ==> --x
group = 2 ==> -w-
world = 3 ==> -wx

And we can see this:

$ chmod 0123 foo
$ ls -l foo
---x-w--wx 1 sweh sweh 0 Aug 26 21:27 foo

The umask value determines the bits to remove. To determine the actual creation mode, you take the 0666 and do a bitwise AND with the negation of the umask. So if the umask is 0022 then the negation is 0755, and 0666 AND 0755 is 0644, which leads to the rw-r--r-- permission we saw earlier.

However, there are some complications. The first is that to get to a file you need permissions along the whole directory path. So, in your example, even though Adoc.txt has world read permission, no one else can see the file because the directory blocks them from getting that far. Effective permissions therefore depend on the permissions of the whole directory tree, as well as the permissions on the file itself. For example:

$ sudo ls -al X
total 8
drwx------ 2 root root 4096 Aug 26 21:19 .
drwxr-xr-x 3 root root 4096 Aug 26 21:18 ..
-rw-r--r-- 1 root root    6 Aug 26 21:19 y

The permissions on y say everyone can read it, but if I try...

$ cat X/y
cat: X/y: Permission denied

That's because the directory permissions block me. You need x permission on a directory to reach files inside it.

$ sudo chmod a+x X
$ ls -ld X
drwx--x--x 2 root root 4096 Aug 26 21:19 X/
$ cat X/y
hello

Another complication is if you use a non-native filesystem (e.g. NTFS or SMB). The mount flags can override the Unix permissions, because the original filesystem doesn't understand Unix permissions. But that's likely not the case in your question.
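The umask behaviour described above is easy to verify yourself in a scratch directory:

```shell
# umask in action: same touch, different resulting modes
d=$(mktemp -d); cd "$d"
umask 022
touch foo
stat -c '%a' foo          # 644 = 666 AND NOT 022
umask 077
touch bar
stat -c '%a' bar          # 600: all group and world bits masked out
cd /; rm -rf "$d"
```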
Permissions Query
To run some tests, I want to revoke write permissions on a folder. This is a minimal example:

$ mkdir test
$ chmod a-w test
$ touch test/test || printf '%s\n' "write permissions successfully revoked"
touch: cannot touch 'test/test': Permission denied
write permissions successfully revoked

However, if I run it under fakeroot, this doesn't work:

$ fakeroot sh
# mkdir test
# chmod a-w test
# touch test/test || printf '%s\n' "write permissions successfully revoked"
# ls test
test

For an explanation of why this doesn't work see, for example, this question: Issue with changing permissions. My question is: how can I get around this? Can I temporarily disable fakeroot for the chmod command? Or can I make fakeroot permission changes permanent anyway?
fakeroot uses an LD_PRELOAD hack to load a small shared library which overrides some library functions such as chmod(2), chown(2), etc. See the ld.so(8) manpage for details (there are also a lot of examples on this site). Just run a command with the LD_PRELOAD environment variable unset or set to an empty string, and that library will not be loaded:

$ fakeroot sh
$ chown root:root fo.c
$ LD_PRELOAD= chown root:root fo.c
chown: changing ownership of 'fo.c': Operation not permitted
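A compact way to see the preload effect, guarded in case fakeroot isn't installed: inside fakeroot, getuid() is wrapped and reports 0, while clearing LD_PRELOAD for a single command reveals the real UID again.

```shell
if command -v fakeroot >/dev/null 2>&1; then
    fakeroot sh -c 'echo "faked uid: $(id -u)"; echo "real uid: $(LD_PRELOAD= id -u)"'
else
    echo "fakeroot not installed; skipping"
fi
```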
How to temporarily leave fakeroot environment
I'm having a problem with permissions on Ubuntu. I have a user appuser on my system; it can run node, npm, etc. But now I need to install the net-tools package (this one: https://zoomadmin.com/HowToInstall/UbuntuPackage/net-tools). The problem is that I installed it with sudo apt-get update -y as the root user, since I do not have that privilege as appuser. When I run, for example, the command arp as the root user, it works fine. But when I run it as appuser it won't work, even after I add appuser to sudoers. Running arp as appuser gives:

bash: arp: command not found
If you have installed net-tools (as root), you should be able to run arp as a non-privileged user either by specifying its absolute path, e.g.

/usr/sbin/arp

or by ensuring that the directory is in the PATH for that user; e.g. in the appropriate initialization file (.profile or the equivalent for your shell/environment), add a line such as:

PATH="$PATH:/usr/sbin"
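The PATH mechanism itself can be demonstrated with a scratch directory and a made-up command name (myarp is hypothetical; in the real case the directory would be /usr/sbin):

```shell
d=$(mktemp -d)
printf '#!/bin/sh\necho hello from myarp\n' > "$d/myarp"
chmod +x "$d/myarp"
myarp 2>/dev/null || echo "not found: directory not on PATH yet"
"$d/myarp"                # the absolute path always works
PATH="$PATH:$d"
myarp                     # now found through PATH lookup
rm -rf "$d"
```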
How can I run a program as another user
I am working on Linux Mint, and I am not an expert in permission commands. My directory is /var/www/html/themeexplorer/destiniy/. After running the command

sudo find . -type d -exec chmod 755 {} \;

I found the permissions were not affected, according to ll. Why is this happening?
I'm not sure what you were expecting, but chmod 755 on a directory gives drwxr-xr-x. If ll already showed drwxr-xr-x before you ran the command, the directories were already at 755 and nothing visible will change.
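You can confirm what 755 looks like on a scratch directory:

```shell
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%A %a' "$d"      # drwxr-xr-x 755: if it already looked like this, nothing changes
rm -rf "$d"
```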
Permission command is not working
As I understand it, denying a user read permission on a directory should prevent them from listing its contents, while giving them execute permission allows them to access its contents, but only via a path to something that exists. What prevents a user with such permissions from blindly attempting to access various paths inside the directory, to get a list of what's in it anyway? And if they can do that, is there really a point in e.g. setting 711 permissions on a home folder to protect its contents while allowing access to things like SSH keys? (I've read people advise this.)
Yes, you can brute-force such a directory. Unix was originally created in a very co-operative environment, so a set of permissions that said "don't browse here" would have been respected. If the users of your machine don't have that sort of culture then (assuming you can't change your users) don't create directories with only execute permission if you want to keep the contents secret. Likewise, don't make files readable and then hope people will be unable to guess the names.

Of course, to brute-force a big directory with very long filenames will take a long time. Each component can be 254 characters long, chosen from a set of 254 characters (it can't contain \0 or /, but any other 8-bit pattern is OK), so about 10^610 possible filenames. There are roughly 10^80 atoms in the known universe.
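A small sketch of the probing itself: with traverse (x) but no read (r) permission, listing fails, yet a correctly guessed name still resolves (names here are made up for the demonstration).

```shell
d=$(mktemp -d)
echo top-secret > "$d/guessme.txt"
chmod 311 "$d"            # d-wx--x--x: write and traverse, but no read bit
ls "$d" >/dev/null 2>&1 && echo "listing allowed (perhaps running as root)" \
                        || echo "listing denied"
cat "$d/guessme.txt"      # prints top-secret: traversal plus a lucky guess suffice
chmod 700 "$d"; rm -rf "$d"
```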
Can I list a directory with `-wx` permissions by brute force?
My question is very simple: Is there any way to make a text file that no users can read or write to, including root?
There is no way to completely stop root from reading or modifying a file or directory. root can do those things even without any permissions, and ACLs can be modified and removed by root. Even if permissions did stop root from reading or modifying the file, root could just change the permissions or ownership. The file could be put on an NFS export with root squash, but root could still su to a user with permissions and read it that way; the same applies to other protocols. Even SELinux can simply be disabled by root. That's the point of having a superuser.

Rather than attempting in vain to stop root from accessing or reading files, the best thing to do is not to put data that you don't want read on that particular system (or to store it only in encrypted form, so that root sees nothing but ciphertext). This isn't entirely possible in an enterprise environment, because the system engineers/system admins who have access to everything would still be able to get to it, but it's the only way to do it for everyone else who has root.
Is there any way to make a text file that no users can read?
I know that the root user can read a file even if the access permissions are all set to 0, but I don't understand the write and execute permissions specifically. Can the superuser write to and execute a file whose permissions are 000?
Root can write to the file just as it can read it: being root trumps the read and write bits. Execution is a different story, though. If a file has no execute bit set at all, the kernel does not consider it executable, and even root cannot execute it. However, once it's marked executable, it doesn't have to be readable for root to execute it, even if it's a script; a regular user, by contrast, also needs read permission to run a script, because the interpreter has to read the file.
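A sketch of the execute case (assuming the temporary directory isn't mounted noexec):

```shell
# Execute permission is special: even root can't execve() a mode-000 file.
f=$(mktemp)
printf '#!/bin/sh\necho ran\n' > "$f"
chmod 000 "$f"
"$f" 2>/dev/null || echo "execute refused"   # EACCES for root and non-root alike
chmod 700 "$f"
"$f"                                         # prints: ran
rm -f "$f"
```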
Can superuser write a file having 000 access permissions?
I currently have Ubuntu installed on one partition and my personal files (Pictures, Documents, etc.) on a second partition. I would like to install KDE Neon in the partition containing Ubuntu, while keeping the personal files partition. I have yet to install Neon, but I have used a bootable USB. The problem I've run into is that I don't know how to transfer ownership of files from an account on one installation to an account on another. If I were transferring files between accounts on the same OS, I would just use chown and be done with it, but I don't know how to do that across OSes. I realize that I could set the permissions so that others have read access and then copy all of my files using the Neon account, but that would take hours due to how many files I have. I would rather use chown or something similar.
"I don't know how to transfer ownership of files from an account on one installation to an account on another."

Files are not owned by a username; they are owned by a UID. The mapping between username and UID is usually managed in the user database file /etc/passwd. Here's an example snippet:

root:x:0:0:root:/root:/bin/bash
tom:x:1000:1000:Tom Pearce,,,:/home/tom:/bin/bash
bill:x:1001:1001:Bill Brewer,,,:/home/bill:/bin/bash
jan:x:1002:1002:Jan Stewer,,,:/home/jan:/bin/bash
peter:x:1003:1003:Peter Gurney,,,:/home/peter:/bin/bash

When you run ls -l, the UID/GID owners of each file are translated using this database into the corresponding names. You can see the actual numeric IDs with ls -ln.

So, to "transfer" ownership of files you have a couple of choices:

1. Make sure that the mapping of name to UID/GID is the same on both systems. No chown/chgrp is required in this case, because the file ownerships map to the same set of names on both systems.

2. Find out the original UID/GID and the target UID/GID and change every affected file one by one. This isn't quite as simple as it sounds, because you have to be careful not to change a file to a UID/GID pair that will later be changed once again. Typically, you would chown/chgrp each file to a temporary range of UIDs that isn't used anywhere on either system, and then change them from that set to the final set:

# Example to change file UIDs from 1000 to 1010
find / -mount -user 1000 -exec chown 61010 {} +

# Later, when you've moved all the file ownerships out of the 1xxx range
find / -mount -user 61010 -exec chown 1010 {} +
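You can see the numeric-UID matching that those find commands rely on with a harmless scratch run (using your own UID instead of 1000):

```shell
# find -user matches on the numeric UID, which is what files actually store:
d=$(mktemp -d)
touch "$d/a" "$d/b"
find "$d" -user "$(id -u)" | wc -l     # 3: the directory itself plus both files
rm -rf "$d"
```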
How Do I Transfer Ownership of Files Between Distros?
I read in the Linux Command Line and Shell Scripting Bible by Richard Blum and Christine Bresnahan that:

"The umask value is subtracted from the full permission set for an object. The full permission for a file is mode 666, but for a directory it's 777."

So if a file can't have read, write and execute permissions at once, does a user have to change the file every time he needs to perform an action? What should a programmer do if he needs to test his code? And what about other file types? Thanks in advance.
"The umask value is subtracted from the full permission set for an object."

This isn't true. Or at least it's inaccurate, simplified to a common case.

First, the base value is not "the full set of permissions", but whatever the process creating the file passes as the file permissions. Granted, for regular files this is usually 0666: the idea being that (say) a text editor shouldn't decide what the permissions of a file should be; the user should be allowed to decide, via the umask. But a process creating a file doesn't need to use 0666. For private files (think SSH keys), 0600 would be used, so that regardless of the umask the file is never accessible to anyone but the owner. For executable files, 0777 could be used so that the resulting file is executable. For directories, 0777 is standard, since the x bit is practically as necessary as the r bit for general use; for general data files it isn't, which is why the common case is 0666 for files and 0777 for directories. The base permissions could of course be something else, but those are the likely common cases.

Second, the umask value is not subtracted, but masked out. Subtraction implies a carry from one bit position to the next, and subtracting e.g. 0007 from 0666 would result in 0657; that's not how the umask works, and it would not be useful.

Note that the umask is only applied when a file is created; the Linux man page also calls it the "file mode creation mask". After that, chmod() can be used to change the permissions without being limited by the umask.

"So if a file can't have read, write, execute permissions at once,"

Sure it can. It's just not useful for non-executable files.
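The mask-versus-subtraction point can be checked directly (umask 027 is chosen because it shows the difference: masking gives 640, octal subtraction would give 637):

```shell
# Mask, not subtraction: with umask 027 a touch'd file comes out 640, not 666-027=637
d=$(mktemp -d); cd "$d"
umask 027
touch f                   # base 0666, masked: 0666 & ~0027 = 0640
stat -c '%a' f            # 640
mkdir sub                 # base 0777, masked: 0777 & ~0027 = 0750
stat -c '%a' sub          # 750
cd /; rm -rf "$d"
```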
Why is the full permission mode for files different from that for directories?
1,476,395,110,000
I have a file owned by user1:group1. It has permission 770, deliberately, so that other users in group1 can collaborate on it. When I open the file as user2 (who is in group1), I can edit it and save changes as expected, but when I save those changes the file ownership is changed to user2:user2. The closest I found to the problem from my Google search was this question: prevent group ownership change on file save, which seemed to just say “put up with it”, but that was five years ago. Surely it can't still be the case that collaboration isn't possible within Linux desktop environments, so what am I doing wrong?
If you set the "set group id" (SGID) bit on a directory, the files created in the directory inherit the group id of the directory, instead of the primary group id of the creating user. New subdirectories also get the SGID bit set automatically, so you don't need to do it manually; existing subdirectories must be changed manually, though.
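A minimal sketch of setting it up (the directory name is made up, and the chgrp step assumes a group1 group you belong to):

```shell
mkdir shared
chgrp group1 shared      # the collaborating group from the question
chmod 2775 shared        # the leading 2 is the SGID bit
stat -c %a shared        # 2775
touch shared/report.txt  # inherits group1 instead of the creator's primary group
```

With this in place, files saved by user2 in the directory keep group1 as their group owner, so the group collaboration survives saves.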
How do I stop file permissions changing when I save a document in another user account?
1,476,395,110,000
It seems that umask and nohup don't work correctly together. I did this:

$ umask 022
$ nohup java -jar blah.jar &
[1] 12345
nohup: ignoring input and appending output to `nohup.out'
$ ls -l nohup.out
-rw-------. 1 juser juser 41242 Jun 27 11:07 nohup.out

Any idea why? Is it nohup that forces 600, ignoring umask settings? How can I get nohup.out to be created with 644 permissions instead of 600? OK, I can do a chmod 644 nohup.out, but I'd prefer a "clean" approach. My shell is bash, my OS is CentOS 7.
GNU’s implementation of nohup ignores umask:

Any nohup.out or $HOME/nohup.out file created by nohup is made readable and writable only to the user, regardless of the current umask settings.

This only applies when nohup itself creates the file, so you can create nohup.out with the appropriate permissions before running nohup, and nohup will append to it without changing the permissions:

umask 022
touch nohup.out
nohup java -jar blah.jar &

or even use a redirection to get the shell to create the file for you:

umask 022
> nohup.out
nohup java -jar blah.jar &
umask doesn't work with nohup
1,476,395,110,000
So I was playing a little with permissions in my system and then I noticed there is no permission specified for sending the file somewhere else. I tried, as a simple user, the following command:

mail -a //etc/shadow [email protected]

I was satisfied to get a Permission Denied message, but it's still not clear what permissions are required in order to send a file. I mean, I use the mail command for the mail protocol, but what about other commands or other protocols? By the way, the permissions for the shadow file were:

-rw-r----- 1 root shadow 1759 Oct 23 2017 shadow
There isn't one, because "sending" a file isn't really a filesystem-level operation. What the mail command does, is that it opens the file for reading, reads the data, and sends (writes) it over the network socket (probably encoded in the case of email, not that it matters). Similarly, an FTP client, scp, or any other would do the same, they'd read the file as usual. You don't have read access to /etc/shadow, so mail running with your user id cannot open it for reading. Linux does have the sendfile() system call, which directly copies data between two file descriptors, but that's basically the same as calling read() on the one and write() on the other fd, except that it happens within the kernel so there's less system call overhead. It, too, requires the source to be opened for reading.
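You can reproduce the effect with any file you own (names made up); note that root bypasses this check:

```shell
touch secret.txt
chmod 000 secret.txt     # nobody (except root) may open it
stat -c %a secret.txt    # 0
cat secret.txt           # fails with "Permission denied" for a normal user,
                         # exactly as mail's attempt to read /etc/shadow did
```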
Which permissions do I need in order to send someone else / root files?
1,535,694,643,000
I'm trying to fix my YARN problem. When a task is submitted to YARN it creates a new directory with all needed settings and scripts. After the task finishes, the directory is removed. My task is failing after 0-2 seconds, so the files are removed so fast I cannot save them. I also don't know the exact name of the file before running the task (there is an auto-incrementing counter), but I know the parent directory and I could try to guess the exact path. I would like to protect or copy these files (the whole directory) in some way. I guess I could set up cron running every second and copying the parent directory (I don't know if the size of the files would exceed cp's capabilities). I probably could also change (every second) every file in the directory into read-only mode (but that could interfere with YARN processes and create new problems). I can't change permissions on the directory - YARN wouldn't be able to create the files in the first place. Is there a better solution? (The YARN detail is not important, but maybe someone knows how to use some YARN features unknown to me.)
Several of your ideas won't work. Cron works only at a one-minute interval. Making files read-only doesn't prevent deletion. Making the directory read-only prevents deletion, but also creation. On the other hand, no size of a file exceeds cp's capabilities. Your best solution is to find a way to disable the deletion of the temporary files. If that doesn't work, the best way is not to copy, but to link the files. If you know the directory where the files will be created, use this in a second shell:

while true; do ln sourcedir/* targetdir &> /dev/null; done

You have to terminate this after you are done. sourcedir and targetdir must be on the same file system. This will create a hard link of every file. When it runs again, the target exists, so it would display an error message, hence the redirect to /dev/null. Unless the files in the source are removed very quickly, you should have all your files in the target directory.

Edit

For a limited number of nested directories, use

ln sourcedir/* sourcedir/*/* sourcedir/*/*/* targetdir &> /dev/null

For an arbitrary level of nesting, use find:

find sourcedir -type f -exec ln --target-directory targetdir {} +
Prevent removal of not yet created files
1,535,694,643,000
I, as a normal user (not root), am trying to create a file in a directory as below:

touch a/b/c/d/test.log

However I get an error: permission denied. I know that I should chmod the directory, but I don't know what a proper permission is, because I don't want to make it 777. Also, should I chmod each directory like this:

sudo chmod ??? a
cd a
sudo chmod ??? b
cd b
sudo chmod ??? c
...

Or should I simply do:

sudo chmod ??? a/b/c/d

If I need to read and write test.log after creating it, what should I use to replace ??? above?
You need a minimum of 'execute' permissions on directories a/, a/b/, and a/b/c/, with 'write' permission on directory a/b/c/d/. The execute permission allows you to traverse the directory to get to the next one, you don't need actual write permissions on the intermediate directories. You can use the recursive option to chmod to grant execute permissions: sudo chmod -R +x a Do keep in mind that this command will affect all files and all directories within a/, not just the directories you mention.
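A sketch of the minimal modes described above (directory names from the question; run as the directory owner):

```shell
mkdir -p a/b/c/d
chmod u=rwx,go=x a a/b a/b/c   # 711: 'x' alone lets you traverse these
chmod u=rwx,go=rx a/b/c/d      # 755: write access only where the file lives
touch a/b/c/d/test.log
chmod 644 a/b/c/d/test.log     # readable and writable by the owner
```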
What is the proper permission to a directory while creating a file in it
1,535,694,643,000
I am setting the group of a process when I launch it by doing the following: sudo -g offline "/home/natral/apps/some-app/bin/app.sh" %f after the process is running how can I verify the name of the user and group it is running as? I checked ps aux and this would tell me the user but not the group. Then I tried ps -eo uid,gid,args and managed to find the GID but how can I verify that the GID is indeed the group "offline"?
You can use user and group in place of uid and gid to have ps show you the group and user names instead of the numbers. And of course, if you have the process id, you don't need to browse the whole list ps -e gives you, but could just use something like this $ ps -o pid,user,group,args -p "$pid" or if you don't have the PID, pgrep could find it for you: $ ps -o pid,user,group,args -p $(pgrep -f app.sh) But I do suspect sudo would give an error if it couldn't set the group id to the one you want.
Verify group of process was set correctly when launched
1,535,694,643,000
I'm getting this error when trying to start a custom systemd service.

netrender-slave.service: Failed at step EXEC spawning /usr/local/bin/netrender-slave.sh: Permission denied

Here's /etc/systemd/system/netrender-slave.service:

[Unit]
Description=Blender netrender slave manager

[Service]
ExecStart=/usr/local/bin/netrender-slave.sh start
ExecStop=/usr/local/bin/netrender-slave.sh stop
ExecReload=/usr/local/bin/netrender-slave.sh reload
Type=simple

[Install]
WantedBy=multi-user.target

In this question, the problem was permissions on the script, but netrender-slave.sh seems ok:

~# ls -al /usr/local/bin
total 16
drwxr-xr-x 2 root root 4096 Dec 4 11:30 .
drwxr-xr-x 10 root root 4096 Apr 20 2016 ..
-rwxr-xr-x 1 root root 816 Dec 4 11:30 netrender-slave.sh

In this question the problem was insufficient privileges in one of the directories, but the directories leading to /usr/local/bin all appear similar to this:

drwxr-xr-x 2 root root 4096 Dec 4 11:30 .
drwxr-xr-x 10 root root 4096 Apr 20 2016 ..
...

However, in the comments of that same question this is offered:

the ls output did not show a trailing . after the UGO permissions drwxr-xr-x - GNU ls uses a . character to indicate a file with an SELinux security context, but no other alternate access method. A file with any other combination of alternate access methods is marked with a + character.

I don't understand how to check if this is my problem.
This is a bad way to do this, worthy of the systemd House of Horror. You might think that your only problem is the lack of an interpreter on the script file. It is not. Your larger problem, which you are not seeing, is the wrapping of a van Smoorenburg rc script, complete with wholly unnecessary Poor Man's service management, inside a service unit. This ends up with the wrong process as the dæmon, and does not manage things properly. Do not do things that way at all. You should tell its developers that its -b option is confusingly documented.

[Unit]
Description=Blender netrender slave manager
Documentation=https://unix.stackexchange.com/a/408848/5132

[Service]
Type=simple
WorkingDirectory=/mnt/my-data
User=ec2-user
Environment=FLAGS="simple_slave_eiptarget.blend --addons netrender -a -noaudio -nojoystick"
ExecStart=/mnt/my-data/blender-2.73a-linux-glibc211-x86_64/blender -b $FLAGS --enable-autoexec

[Install]
WantedBy=multi-user.target

Further reading

Jonathan de Boyne Pollard (2015). The systemd House of Horror. Frequently Given Answers.
Jonathan de Boyne Pollard (2001). Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers.
Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers.
systemd custom service: Failed at step EXEC spawning ... Permission denied
1,535,694,643,000
I have been hacking a Linux system (in an attempt to get BlueTooth working, although this is not relevant). There are directories structured as below /var/lib/bluetooth/ ├── B8:27:EB:8E:A8:4D │   ├── 00:12:A1:12:09:51 │   │   └── info │   ├── 34:88:5D:70:53:44 │   │   └── info │   ├── cache │   │   ├── 00:12:A1:12:09:51 │   │   ├── 34:88:5D:70:53:44 │   │   ├── E4:CE:8F:03:00:6D │   │   └── F8:77:B8:AD:BC:AC │   └── settings I have been trying to manipulate these, but Command Completion does not work on any of the names containing :. Is this normal, and is there any way I can get Command Completion to work. Typing these cryptic names in full is tedious, and error prone. sudo ls /var/lib/bluetooth/B8:27:EB:8E:A8:4D/34:88 shows no completion sudo ls /var/lib/bluetooth/B8:27:EB:8E:A8:4D/34:88:5D:70:53:44 is OK Using wildcards e.g. 00* doesn't seem to work either.
sudo ls

If you are having to use sudo in order to gain access to the directory in order to list it, what makes you think that your shell can list it? This isn't a problem with command-line completion, wildcards, colons, quotation marks, or the version of your shell. It's a very simple permissions problem. You do not have the access rights to list that directory. Thus your shell, running as your account, cannot. Since it cannot list the directory, it cannot complete names within it or expand wildcards.
Command Completion does not work on names containing `:`
1,535,694,643,000
Here is my file:

srw-rw---- 1 nfsen nfsen 0 mai 16 10:51 nfsen.comm

and I want to remove the s. I tried something like chmod 0660 nfsen.comm but it didn't work. Any ideas?
You can't remove that s with chmod or anything else because it is not a permission. The first character in each line in ls -l output indicates the type of file: - for regular file, s for socket, d for directory, c(haracter) or b(lock) device, and so on. You can't change the type of a file after it has been created.
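You can see the type characters for yourself in a scratch directory (file names made up):

```shell
touch plain; mkdir dir; ln -s plain link; mkfifo pipe
ls -ld plain | cut -c1   # -
ls -ld dir   | cut -c1   # d
ls -ld link  | cut -c1   # l
ls -ld pipe  | cut -c1   # p
```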
remove s from permission
1,535,694,643,000
I log on to my server as userA, this user has a bash shell, everything works fine with it. Then, for the purposes of a program, I've had to do sudo adduser --system --home=/home/userB --group userB; this user is apparently passwordless, judging by the contents of /etc/passwd and /etc/shadow: $ grep userB /etc/passwd userB:x:Z08:WW9::/home/userB:/bin/false $ sudo grep userB /etc/shadow userB:*:16XXX:0:YYYYY:7::: Also, there is no /home/userB/.profile, nor any /home/userB/.bash* files in the userB home directory. Now, while I'm logged in as userA, I'd like to run commands as userB, in particular inspect the $PATH that userB sees. So I've tried to edit via EDITOR=/usr/bin/nano sudo visudo, and add either of the userA lines: ... # User privilege specification root ALL=(ALL:ALL) ALL #userA ALL=(userB) NOPASSWD: /bin/bash userA ALL = (userB) NOPASSWD: ALL ... ... then save the file, logout from remote shell, re-login back as userA. Then I try running: $ sudo -iu userB; echo $? 1 $ sudo -S -u userB -i /bin/bash -l -c 'echo $HOME'; echo $? 1 $ sudo -i -u userB echo \$HOME; echo $? 1 ... and clearly, nothing works - and there is no error either. Then I thought I'd strace one of these commands, and indeed I got an error: $ strace sudo -iu userB ... write(2, "sudo: effective uid is not 0, is"..., 140sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? ) = 140 exit_group(1) = ? +++ exited with 1 +++ However, nosuid is not a problem on this root partition, I guess: $ mount | grep '/ ' /dev/sdaX on / type ext4 (rw,errors=remount-ro) So now I really have no idea what to do. Is it possible at all to have userA in this case run commands (e.g. print the $HOME environment variable) as userB - and if so, how can I get it to work?
From the sudo manpage:

-i, --login
Run the shell specified by the target user's password database entry as a login shell.

Your userB has /bin/false as the shell, so that's the command that is run.

% /bin/false ; echo $?
1

So to fix this you need to change the shell of userB to /bin/bash (or /bin/sh or whatever you prefer), or don't use the -i flag to sudo. Do you need a login shell?
Run commands as another passwordless user - sudo fails?
1,535,694,643,000
I can change the permissions of a file by: chmod 600 ~/.ssh/authorized_keys How can I retrieve the permissions of a file in the same format which is 600?
stat(1) can show many file-associated attributes by specifying special format strings to its -c option. In your case, use

stat -c '%a' ~/.ssh/authorized_keys

to receive the same file mode in octal, 600. See its manual page for a full list of supported format modifiers.
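Several format modifiers can be combined in one call; for example (the output shown is illustrative):

```shell
stat -c '%a %A %U:%G' ~/.ssh/authorized_keys
# e.g.: 600 -rw------- alice:alice
```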
How can I retrieve the permissions of a file in a certain format? [duplicate]
1,535,694,643,000
Normally when I make a directory with mkdir the permissions I expect are 751 or 755. However for some reason when new files are created, even in a users home directory, they are set to 700. What controls the default permissions on new files and what kind of configuration change led to this happening?
As @Tejas mentioned, you need to understand umask and its values for changing the default permissions. I recommend you read this article so you'll understand how to use it properly. In addition, you should know that it's not permanent, so after rebooting your system the umask value you've set will be gone. To set it in a permanent way, you need to write a new umask value in your shell’s configuration file (~/.bashrc which is executed for interactive non-login shells, or ~/.bash_profile which is executed for login shells). Good Luck
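For example, appending the following to ~/.bashrc (path assumed; use ~/.bash_profile for login shells) restores the usual defaults:

```shell
umask 022    # new files are created 644, new directories 755
```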
Why are group permissions missing on new directories?
1,535,694,643,000
I was attempting to give any regular user access to two folders that reside a few branches down in my home folder. I thought I was in the correct directory and typed out sudo chmod 666 *, but accidentally issued this in my own user account's home directory (had multiple terminal windows open at the time). No big deal, I thought, since this would effectively just give every user read and write permissions in my home folder. I'll just use chown to switch everything back to just my account. However, now some folders have become binary files and nothing within my home directory can be opened. I get errors telling me I do not have permission to access the files, and if I try to read file permission values the GNOME 3 "Files" application says: The permissions of [file in question] cannot be determined. I've tried using chown [user] * to restore ownership to the proper accounts but it doesn't seem to have any effect. Also, I thought chmod 666 would give any user read and write access, so I don't understand how this problem arose. Any suggestions? I'm using Arch Linux with GNOME 3.
Many operations on directories require execute (search) permission in addition to read permission. chmod 666 clears the x bits, causing strange failures of ls and other basic stuff. Reasonable default permissions might be 644 for files and 755 for directories.
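One blunt way to repair a home directory after such a mishap (this assumes nothing under $HOME should be executable; adjust for scripts and binaries before running it):

```shell
find "$HOME" -type d -exec chmod 755 {} +
find "$HOME" -type f -exec chmod 644 {} +
```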
Issued chmod 666 * in home directory, permissions problems resulted with all files
1,535,694,643,000
Basically I didn't encrypt my Linux (Debian) partition when installing it. However I found a way to encrypt at least the home folder, so in theory I could put everything there which I want to make sure isn't accessible should my computer be stolen, etc. However there are certain folders (/etc, /bin, /var, ...) which for sure can't be moved to the home folder to encrypt them as well. Given that with my user (!= "root") I have root access (but upon inputting the password for each su(do)), I was thinking to limit the access to sensitive files/folders by allowing only my main user (even without su(do)) to access them, only after logging in to KDE, and at the same time making sure no other part of the system would be broken. Is it possible? (perhaps with chmod etc.) Also, I'm not sure which folders are supposed to contain app data and other sensitive information 'by default'?
Permissions only protect against other users of the system, who access the system through normal software means. They are totally useless against someone who has access to the hardware; only encryption can protect against that. Setting permissions is like writing “secret” on an envelope; it works if the operating system is the only entity that's manipulating the envelope, but not if the attacker has stolen the envelope.

Places where sensitive files are likely to end up include mainly: the swap area; /tmp (if it isn't in RAM); /var/tmp (but few programs write there); email and printer (and possibly other) spools under /var/spool; possibly some information in /etc, such as Wifi passwords. It's difficult to be sure to cover all the sensitive files; to be safe you'd need to encrypt all of /var.

Furthermore there's no simple way (nor even a mildly advanced way) to put system files under ecryptfs: ecryptfs is fundamentally oriented towards encrypting a single user's files. While it isn't mathematically impossible, I wouldn't recommend attempting it unless you know what to do when it breaks (and it will break); it's one of these if-you-need-to-ask-then-don't-do-it things. I wouldn't do it; I'd encrypt the whole system instead.

You can encrypt the system after installation, but it isn't easy. The basic idea is to boot from rescue media, shrink the existing partition to span less than the whole disk, move it to the end of the disk, create an encrypted container in the now-free beginning of the disk, create an LVM volume on it, create a filesystem, move the files from the plaintext area, resize the plaintext volume to be small (~200 MB), move /boot to the plaintext volume, extend the encrypted container and the LVM volume to the whole disk, make some swap space again, enlarge the filesystem, mount the plaintext volume under /boot, and reinstall Grub and regenerate the initramfs.
(I hope I didn't forget anything…) If you're concerned about sensitive files outside your home, reinstalling would probably be easier.

An easier compromise would be to make /home an encrypted volume, rather than encrypting your home as a directory tree. (This may also slightly improve performance.) The basic idea is:

1. Boot from rescue media.
2. Shrink the existing filesystem and containing partition (ext2resize, parted).
3. Create a dmcrypt volume in the freed space (cryptsetup).
4. Make two LVM volumes on the dmcrypt volume (pvcreate, vgcreate, lvcreate × 2), one for swap (mkswap) and one for /home (mkfs).
5. Mount the volume for /home and move /home there.
6. Update /etc/fstab and /etc/crypttab.

Now you can move other sensitive directories (e.g. printer spool) to /home, and make a symbolic link to follow them. To protect the sensitive files that you just erased, wipe the free space on the plaintext volume.
What are the right permissions to set to make files unreadable unless it's my main user accessing them?
1,535,694,643,000
Say I'm implementing a programming language which has an interactive mode, and that interactive mode reads some ~/.foo_rc file in the user's home directory. The file contains code in that language which can be used to customize some preferences. The language isn't sandboxed when reading this file; the file can do "anything". Should I bother doing a permission check on the file? Like: $ foo -i Not reading ~/.foo_rc because it is world-writable, you goof! P.S. you don't even own it; someone else put it there. > _ I'm looking at the Bash source and it doesn't bother with permission checks for ~/.bash_profile (other than that it exists and is readable, preconditions for doing anything with it at all). [Update] After considering thrig's answer, I implemented the following check on the file: If the file is not owned by the effective user ID of the caller, then it is not secure. If the file is writable to others, then it is not secure. If the file is writable to the owning group, then it is not secure if the group includes users other than caller. (I.e. the group must either be empty, or have the caller as its one and only member). Otherwise it is deemed secure. Note that the group check makes no assumptions about any correspondence between numeric user ID's and group ID's, or their names; it is based on checking that the name of the user is listed as the sole member. (A note is added to the documentation for the function which performs this check that it's subject to a time-of-check to time-of-use race condition. After the check is applied, an innocent superuser can extend the group with additional members, who may be malicious, and modify the file by the time it is accessed.)
Reasonable and prudent, provided there are clear warnings on what file is failing, and why, so the user can fix the permissions issue. bash probably dates from a more trusting (and prank-ridden) day. Note that user files can legitimately be group writable, if the site has a policy of each user going into a group that only that user is in, otherwise not. (Parent directory checks may also be prudent, to detect chmod 777 $HOME goofs.)
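A shell sketch of the owner and write-bit checks (it does not implement the group-membership refinement from the update, and the rc file name is hypothetical):

```shell
rc="$HOME/.foo_rc"
# secure only if we own it and neither group nor others can write it
if [ -n "$(find "$rc" -maxdepth 0 -user "$(id -un)" ! -perm /022 2>/dev/null)" ]; then
    echo "reading $rc"
else
    echo "refusing to read $rc" >&2
fi
```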
Permission check on profile file in home directory: should it be done?
1,535,694,643,000
I want to verify this: is it the responsibility of the file system?
The kernel does it, when your program uses the open system call.
What software is responsible to check permission of files to give access or not to users?
1,535,694,643,000
What is the relationship between chmod and sudo on an executable file for a user? Are the cases that "a user needs sudo to run an executable" the same as the cases that "chmod hasn't set the execution mode bit for the user"? Are the cases that "a user doesn't need sudo to run an executable" the same as the cases that "chmod has set the execution mode bit for the user"? More specifically, For an executable file, If chmod doesn't set its execution permission for a user, must that user run the executable with sudo or su? if chmod sets its execution permission for a user, does that mean that the user can run the executable without sudo or su? How do you make an executable runnable only with sudo or su by a given user? Conversely, if a user can run an executable only with sudo or su, does that mean chmod hasn't set execution permission of the executable file for the user?
First the terminology. chmod is a program (and a system call) which allows changing the permission bits of a file in a filesystem. sudo is a special program that allows running other programs with different credentials (typically with elevated privileges, most usually those of the root user). su is similar but less (read "not") configurable than sudo - most importantly it authenticates users based on knowledge of the root password (which is security-wise rather appalling). The executable bit says whether the contents of a file may be loaded into memory and run (it doesn't say anything about whether it makes sense - you can set the executable bit of a JPEG image and watch it fail spectacularly when you try to run it).

Now for the questions:

1. The permissions are evaluated once the executable is being loaded. In the case of su and sudo this happens with the effective IDs (user and group - the credentials used in privilege evaluation - see the credentials(7) man page) of the target user. Hence if the target user is allowed to execute the file, it is executed.

2. As mentioned above: when the executable bit is set for the effective UID or GID, then it can be executed. Otherwise not.

3. Generally, you don't. If you want, you can mark it as executable only for certain IDs and then prepare the sudo configuration so that it allows certain users to run that binary with the credentials of one of those that have executable rights on the file.

4. No. It usually does not make much sense to prevent users from running programs that require special privileges - programs should handle the lack of those (gracefully if possible). Some programs even have only some functionality that doesn't require special rights but offer more when run with special privileges - one example is route: unprivileged users may use it to display kernel routing tables, while administrators can also change those.
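For point 3, a sketch of that sudo configuration (the user, path, and program names are all made up; edit sudoers only with visudo):

```
# first make the program executable by root only:
#   chmod 700 /usr/local/bin/privtool
# then in /etc/sudoers.d/privtool:
alice ALL=(root) /usr/local/bin/privtool
```

With this, alice cannot run privtool directly (no execute bit for her), but sudo privtool works because sudo runs it with root's credentials.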
relation between chmod and sudo on an executable file
1,535,694,643,000
I'd like to give execute permissions on a path recursively so I can read a file in the directory. The chmod binary that I'm using on Android only supports the octal/numeric notation though. Normally I'd do chmod -R a+x /this/file/is/here/filename.txt but don't know how I'd do this numerically. I've Googled but haven't found an answer to this. Looking at at the question "Is it possible to represent the +X symbolic permission with an octal value?", I don't think this is possible but what would the easiest recommended way.
You cannot do that using octal notation. Octal notation only allows you to set all mode bits, while +x adds to the existing mode bits. If you have find on the system, you can write a script that uses find to search for directories with a specific combination of mode bits with -perm, and change only those to the pattern including the execute bits. @Wally's solution will not work, because directories A and B:

-rwxrwxr-- A
-rwxr-xr-- B

will both become rwxr-xr-x with his solution, but with chmod a+x:

-rwxrwxr-x A
-rwxr-xr-x B
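A sketch of that find approach, covering just two plausible starting modes (add one find line per mode that actually occurs in your tree):

```shell
find . -type d -perm 664 -exec chmod 775 {} +   # rw-rw-r-- -> rwxrwxr-x
find . -type d -perm 644 -exec chmod 755 {} +   # rw-r--r-- -> rwxr-xr-x
```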
How can I give a permssion to all users for a path using the octal notation?
1,535,694,643,000
I allow my non-root user the ability to view my nginx access.log. I upgraded from CentOS 6.4 to 6.5 and now my user cannot view this file but the permissions still look correct. I removed and re-added my user to the nginx group.... Am I missing something obvious? Bleh, the user is nginx and the group is adm, right? That is probably what I'm missing..... Just needed to type it out.

[09:46 AM] brian web> ll
-rw-r-----. 1 nginx adm 6393 Dec 4 09:23 access.log
-rw-r-----. 1 nginx adm 0 Dec 4 03:12 error.log
[09:46 AM] brian web> groups brian
brian : wheel brian nginx
[09:47 AM] brian web> cat access.log
cat: access.log: Permission denied
User brian is an nginx group member, but the nginx group does not have any permission on your access log file. Add brian to the adm group.
Upgraded from CentOS 6.4 -> 6.5 and now permissions do not seem to work properly?
1,535,694,643,000
I'm struggling with the SGID permission. Given the following situation:

----rws--- 1 simon simon 233 nov 24 13:52 hosts

Why can't a user open/edit this file?
The SGID bit will not help you edit a file if you don't belong to the group owning it. The SGID bit on files is mainly useful for running a script with the file's group as your EGID (effective group ID). To open the file, you still need read permissions to it (i.e. either it should be world-readable or you should be a member of its owning group). Similarly, to edit it, you need write permissions to it.
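Starting from the mode in the question (2070, i.e. ----rws---), one fix is simply to grant read access more widely:

```shell
touch hosts
chmod 2070 hosts   # ----rws---, as in the question
chmod o+r hosts    # now anyone can open it; members of group simon already could
stat -c %a hosts   # 2074
```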
Setgid won't work
1,535,694,643,000
Can someone please explain, with an example, the file permission mechanism in Linux and other Unix-like systems? What are the nine bits for? Why do we have a group ID for a user as well as for a file? Are these two related?
The ownership and access permissions basically work together. Ownership tells the system who can access the file; the file permissions say how.

Ownership splits access into three groups: user (a single user owning the file), group (of users), others (the rest of the world). The permissions are:

r - reading is allowed
w - writing is allowed
x - executing is allowed

For directories the meaning is slightly different: x allows you to enter a directory, while r allows listing its contents (and w lets you update it) - that means that if you know the exact file name you don't need read permissions on the directory it resides in, x is enough. You need r on the file though.

Then there is one additional bit triplet: setuid, setgid, sticky. The first two cause (on an executable file) the program to be run as the user/group owning the file (depending on which of the two bits is set). The sticky bit is implementation dependent. For executables it used to mean that the program code should be cached in swap to speed up loading it next time. For a directory it prevents unprivileged users removing a file if they do not own it, even if they had the rights to do so otherwise - this is why it is usually set on world-writeable directories like /tmp.

In addition to this, many filesystems support additional access control lists (ACLs) which allow finer-grained access control. These are accessible with getfacl/setfacl rather than with chmod.

As a side note, a similar permission system is usually implemented for memory (RAM) with page granularity. The main aim is to adhere to the "W^X" principle: either you can write to the memory or you can execute it, but not both at the same time. While generally a good idea, it doesn't work for interpreted just-in-time compiled code - e.g. Java, because the interpreter needs to compile/optimize the generated code (i.e. to write the page) and then execute it, often incrementally (and changing the permissions every time wouldn't make much sense).
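A small demonstration of the user/group/other triplets (names made up):

```shell
mkdir docs; touch docs/notes.txt
chmod 750 docs           # owner: rwx, group: r-x, others: nothing
chmod 640 docs/notes.txt # owner: rw-, group: r--, others: nothing
ls -ld docs docs/notes.txt
```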
File permission mechanism in Unix like systems
1,535,694,643,000
I can't login to my linux machine. It's giving me the following error: /usr/libexec/gconf-sanity-check-2 exited with status 256 I removed all the files in the /tmp directory after this issue happened. What is the cause of this error?
Try running a live Ubuntu CD, mount the root partition, and change the permissions of the /tmp directory to 1777 (world-writable with the sticky bit). Then logging in will work again.
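For reference, the repair from a live session might look like this sketch; the partition name /dev/sda2 is an assumption - substitute your actual root partition:

```shell
# from the live CD/USB session
sudo mount /dev/sda2 /mnt      # your root partition here
sudo chmod 1777 /mnt/tmp       # rwxrwxrwt: world-writable + sticky bit
sudo umount /mnt
```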
Unable to login to linux machine
1,535,694,643,000
I'm an Ubuntu user and I'd like to change default permissions for downloaded files. Currentely all downloaded files are automatically saved with "-rw-r--r--" permissions (umask 0022). I'd like to add "+x". How to do that?
You would have to edit the source code of the programs performing the downloading as files are created by default as 0666 modified by the current umask. From the fopen(3) man page: Any created files will have mode S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH (0666), as modified by the process’s umask value (see umask(2)).
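You can watch the umask trim the 0666 creation mode from a shell; a quick sketch:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
umask 022
: > "$tmp/a"                 # created as 0666 & ~022
stat -c %a "$tmp/a"          # → 644
umask 002
: > "$tmp/b"                 # created as 0666 & ~002
stat -c %a "$tmp/b"          # → 664
rm -r "$tmp"
```

The umask can only clear bits from the 0666 the program asks for - it can never add x - which is why no umask setting will give downloaded files execute permission.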
+x permission for files in directory
1,535,694,643,000
While attempting to create a program that reads some configuration before launching programs as a normal user and then as the root user, I noticed this odd behavior. I can't seem to find mention of it anywhere else. Normal filesystems use the effective UID/GID for access checks, but it looks like FUSE seem to check all three of the effective, real, and saved(!!) UID/GID for access. I had initially just dropped the effective uid so that I could recover it later, but this kept me getting permissions errors until I realized what was going on. Why is this this case? Why does FUSE care about the saved uid/gid? (I'm aware I can set allow_root on FUSE and avoid this, that isn't what this question is about) Example C code to demonstrate: #define _GNU_SOURCE #include <stdio.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #define measure() getresuid(&ruid, &euid, &suid); getresgid(&rgid, &egid, &sgid); printf("UID: %4d, %4d, %4d. GID: %4d, %4d, %4d \t\t", ruid, euid, suid, rgid, egid, sgid); fflush(stdout) #define set(r,e,s) if (setresuid(0,0,0 ) != 0) return 1; if (setresgid(r,e,s ) != 0) return 1; if (setresuid(r, e, s) != 0) return 1; #define attempt(r,e,s) set(r,e,s); measure(); test(argv[1]) void test(char* arg) { struct stat sb; if (stat(arg, &sb) == -1) perror("fail"); else printf("Success\n"); } int main(int argc, char *argv[]) { uid_t ruid, euid, suid; gid_t rgid, egid, sgid; measure(); printf("\n\n"); attempt(1000,0,0); // Expect: Fail. Actual: Fail attempt(0, 1000,0); // Expect: ok. Actual: Fail attempt(0, 0, 1000); // Expect: Fail. Actual: Fail attempt(1000,1000,0); // Expect: ok. Actual: Fail attempt(1000,0,1000); // Expect: Fail. Actual: Fail attempt(0,1000,1000); // Expect: ok. Actual: Fail attempt(1000,1000,1000); // Expect: ok. Actual: ok return 0; } Output: $ sshfs some-other-machine:/ /tmp/testit # I think any FUSE filesystem should "work" $ gcc test.c -o test $ sudo ./test /tmp/testit UID: 0, 0, 0. GID: 0, 0, 0 UID: 1000, 0, 0. 
GID: 1000, 0, 0 fail: Permission denied UID: 0, 1000, 0. GID: 0, 1000, 0 fail: Permission denied UID: 0, 0, 1000. GID: 0, 0, 1000 fail: Permission denied UID: 1000, 1000, 0. GID: 1000, 1000, 0 fail: Permission denied UID: 1000, 0, 1000. GID: 1000, 0, 1000 fail: Permission denied UID: 0, 1000, 1000. GID: 0, 1000, 1000 fail: Permission denied UID: 1000, 1000, 1000. GID: 1000, 1000, 1000 Success $
As you have noticed, without the allow_root/allow_other options, other processes are not allowed to access the filesystem. This is not meant to protect your filesystem, but to protect the other processes. For this reason, if the accessing process has a shred of another identity, the access can't be allowed. That's the relevant code in the kernel for this behavior (fs/fuse/dir.c):

/*
 * Calling into a user-controlled filesystem gives the filesystem
 * daemon ptrace-like capabilities over the current process.  This
 * means, that the filesystem daemon is able to record the exact
 * filesystem operations performed, and can also control the behavior
 * of the requester process in otherwise impossible ways.  For example
 * it can delay the operation for arbitrary length of time allowing
 * DoS against the requester.
 *
 * For this reason only those processes can call into the filesystem,
 * for which the owner of the mount has ptrace privilege.  This
 * excludes processes started by other users, suid or sgid processes.
 */
int fuse_allow_current_process(struct fuse_conn *fc)
{
	const struct cred *cred;

	if (fc->allow_other)
		return current_in_userns(fc->user_ns);

	cred = current_cred();
	if (uid_eq(cred->euid, fc->user_id) &&
	    uid_eq(cred->suid, fc->user_id) &&
	    uid_eq(cred->uid, fc->user_id) &&
	    gid_eq(cred->egid, fc->group_id) &&
	    gid_eq(cred->sgid, fc->group_id) &&
	    gid_eq(cred->gid, fc->group_id))
		return 1;

	return 0;
}
FUSE filesystems look at saved UID/GID?
1,535,694,643,000
I've been at this all day and think I finally figured it out, but want to make sure before I put it into production. I'm changing my server to allow the apache:apache user write permission on a few directories. I'm the only user jeff:jeff on the server. My directory structure looks something like this: /home/jeff/www/ 0755 jeff:jeff /home/jeff/www/example1.com/ 0755 jeff:jeff /home/jeff/www/example2.com/ 0755 jeff:jeff /home/jeff/www/example2.com/uploads/ 0755 apache:apache The problem is: I run chown apache:apache uploads/ to allow apache write access. Whenever I want to edit a file in uploads/ via sftp, I have to chown it back to jeff:jeff, then reverse when I'm done. My preliminary solution is: Add apache user to jeff group Give jeff group write permission on uploads/ dir via manual chmod 775 Force apache user to create any new files + folders + subfolders as apache:jeff. Requires setgid 2775 on uploads/ dir Force apache user to create any new files + folders + subfolders with umask 002 = 775 via systemd I'm only about 50% sure I've got all this right. Does it sound okay? Is there a better way? Did I miss anything? With Jim's help, here is the final solution I used: For my reference. # usermod --append --groups apache jeff > Relogin all sessions # chown -R apache:apache www/example.com/uploads/ # find www/example.com/uploads/ -type d -exec chmod 775 {} \; # find www/example.com/uploads/ -type f -exec chmod 664 {} \; # systemctl edit --full php.service ----------- [Service] UMask=0002 ----------- # systemctl daemon-reload # systemctl restart php WordPress users will want to add this to their wp-config.php: define('FS_CHMOD_DIR', 0775); define('FS_CHMOD_FILE', 0664);
Yes, that's the general idea, and you're fairly close, but I would suggest that rather than adding the apache user to the jeff group, it would be slightly more secure and perhaps a tad more convenient (and extensible, if that's important) to do it the other way 'round: Take apache out of the jeff group. Instead, add jeff to the apache group. For starters, this prevents apache from having read access to all the jeff group files. chown the uploads/ directory to be apache:apache and chmod 775 so that anyone in the apache group can create/delete files there. Have apache create files using umask 2 so that anything apache creates is group-writeable. This way: A) You no longer have to chown files before you can edit them; all the upload files are owned by group apache, are group-writable, and you already belong to the apache group. B) You have the ability to easily add other users to the apache group and they, too, will be able to edit uploaded files.
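The setgid + umask mechanics above can be sketched with plain permissions (no apache/jeff accounts involved; only the mode behavior is shown):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkdir "$tmp/uploads"
chmod 2775 "$tmp/uploads"              # rwxrwsr-x: setgid on the directory
stat -c %a "$tmp/uploads"              # → 2775
( umask 002; : > "$tmp/uploads/upload.txt" )
stat -c %a "$tmp/uploads/upload.txt"   # → 664: group-writable
rm -r "$tmp"
```

With a real apache group in place, the setgid bit would additionally make upload.txt inherit the directory's group rather than the creating process's primary group.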
Converting server to allow Apache write access to certain directories, does this solution look right?
1,535,694,643,000
I have 3 specific user accounts (less than 10 anyway), for all files & folders under a specific /data or /home directory, I want to change just the group ownership of all occurring files/folders of those specific users. I don't know where everywhere might be (besides /home and /data) so I want to do a <what?> -R on /. Is there a way to do that? The existing group is named XYZ and I want to change all files & folders that are owned by ron.XYZ to ron.users. How can that be done?
Use find to identify the target files, and then apply the change of group to those. find / -user ron -group XYZ -print -exec chgrp users {} + You can omit the -print if you aren't worried about seeing which files are being changed. You can also (temporarily) omit the -exec … + if you first want to see which files would be affected before changing them. You can extend the match to all three user accounts at once: find / \( -user ron -o -user alice -o -user bob \) -group XYZ -print -exec chgrp users {} + Note that you will descend into /proc (and other pseudo filesystems). You can safely ignore errors about changing ownerships there. You can use -prune to omit such filesystems: find / \( -path /proc -o -path /dev -o -path /sys \) -prune -o …as above…
change group ownership specifying from to for entire file system
1,535,694,643,000
I encountered an interesting (at least to me - a newbie in Linux) scenario today. I connected my drone to my Linux PC. Drone configuration software (Betaflight) couldn't connect to the drone. A quick google search solved the problem. Basically when I connect a drone with an USB cable a file is created: /dev/ttyUSB0 and it has 660 permissions. The owner of the file is root, the group of the file is uucp. So the simple solution was to chmod this file to 666. However that raised some questions in my head. I've just added r and w permissions to this file to everybody, which seems excessive. Alternatively I think I could have added the Betaflight to uucp group which seems stupid, because it's an important group or ran it with sudo which seems even worse. What's the proper way to handle this? Logically it would make sense to add a rule specifically for Betaflight to have rw access to this particular file. I'm just curious what's the "Linux way". I don't want to give a random app an excessive access to my OS, neither do I want to give any app/user access to a particular file.
Welcome Andrzej to Unix&Linux,

If you want to be minimal about the permissions you give, I'd suggest you:

1. create a dedicated group (let's name it drone): addgroup drone
2. change your /dev/ttyUSB0 to the group drone: chgrp drone /dev/ttyUSB0
3. change your drone configuration software (let's call it /usr/bin/foo) to the group drone and enable the setgid bit: chgrp drone /usr/bin/foo; chmod g+s /usr/bin/foo
4. optionally disallow others to execute the configuration software and give dedicated user(s) (let's call it user1) permission to execute it with file ACLs: chmod o-x /usr/bin/foo; setfacl -m u:user1:r-x /usr/bin/foo

You may replace 3 and 4 with using sudo to allow the dedicated user(s) to run the configuration software as group drone. You would add in your /etc/sudoers:

user1 ALL=(:drone) /usr/bin/foo

And then use it as user1:

sudo -g drone foo
What's the correct approach when an app needs access to a particular file?
1,535,694,643,000
On Debian 11, mounting an external disk changes permissions on the mount point directory. The disk is formatted as ext4. Is this normal behavior? Create directory: user@debian:/media$ mkdir external2 mkdir: cannot create directory ‘external2’: Permission denied user@debian:/media$ sudo mkdir external2 Check ownership: user@debian:/media$ ls -la total 20 drwxr-xr-x 5 root root 4096 Aug 17 20:19 . drwxr-xr-x 18 root root 4096 Jul 9 21:11 .. drwxr-xr-x 2 root root 4096 May 4 2021 cdrom drwxr-xr-x 2 root root 4096 Aug 12 2021 external drwxr-xr-x 2 root root 4096 Aug 17 20:19 external2 Change ownership to user: user@debian:/media$ chown -R user:user /media/external2 chown: changing ownership of '/media/external2': Operation not permitted user@debian:/media$ sudo chown -R user:user /media/external2 And check ownership: user@debian:/media$ ls -la total 20 drwxr-xr-x 5 root root 4096 Aug 17 20:19 . drwxr-xr-x 18 root root 4096 Jul 9 21:11 .. drwxr-xr-x 2 root root 4096 May 4 2021 cdrom drwxr-xr-x 2 root root 4096 Aug 12 2021 external drwxr-xr-x 2 user user 4096 Aug 17 20:19 external2 Mount hard disk: user@debian:/media$ sudo mount /dev/sdb /media/external2 Check ownership: user@debian:/media$ ls -la total 20 drwxr-xr-x 5 root root 4096 Aug 17 20:19 . drwxr-xr-x 18 root root 4096 Jul 9 21:11 .. drwxr-xr-x 2 root root 4096 May 4 2021 cdrom drwxr-xr-x 2 root root 4096 Aug 12 2021 external drwxr-xr-x 2 root root 4096 Aug 12 2021 external2 EDIT: for the sake of transparency, I'm leaving this question, but I have another one which may be related: Strange rsync behavior deleting already transferred files
Yes, this is normal and expected behavior on all mount points on all Unix-like systems. When you are mounting the external hard disk, the root directory of the external HD's filesystem is placed on top of the mount point directory, and the mount point directory is "hidden under" the new filesystem. The root directory of external HD's filesystem has its own ownership and permissions, and once you have mounted the external disk, those are the permissions you'll see - because you no longer see the directory you used as the mount point, but the root directory of the filesystem you mounted onto it. When you unmount the other filesystem, you will again see the original permissions of the mount point directory, as the root directory of the filesystem that was covering up the mount-point directory is unmounted. You may have expected the ownership to reflect the identity of the user that mounted the filesystem, but that is the exception, not the rule: it can only happen when mounting a filesystem with no support for Unix-style ownerships and permissions (like the filesystem types of the FAT family), and the mount point has been pre-arranged to allow regular users to use the mount command without sudo. Mounting a filesystem has significant security implications, and so normally only root (or someone with unlimited sudo access) will be able to use the full unrestricted forms of the mount command. But it is possible to allow non-root users to use mount without sudo at pre-arranged mount points only. To do that, the system administrator must first write an /etc/fstab entry that includes one of the mount options user, users or owner. 
The differences are:

users: any user can mount, and any user can unmount
user: any user can mount, but only root or the user who mounted the filesystem will be able to unmount it
owner: like user, but with the added requirement that the user must own the device node they're trying to mount

(This would allow the mounting of hot-plugged removable drives for locally logged-in users only, in systems that are configured to grant the ownership of hot-plugged devices to the user that is logged in locally. Many distributions, including Debian, prefer to not do it this way, but use a separate removable-media mount helper like udisks2 instead. See man udisks.)

To use such a pre-arranged mount point, the non-root user must use the mount command in its short form, i.e. specifying only the device to mount or the mount point directory, not both. mount will then look up the rest of the details from /etc/fstab, enforcing the mount options and other parameters specified by the system administrator.

If you ever need to access files or sub-directories that have been hidden by another filesystem being mounted on top of them, it is possible by making a bind mount of the parent filesystem (using mount --bind, not mount --rbind) and accessing the mount-point directory through the bind mount. Example using your setup. Start with the external disk unmounted:

sudo touch /media/external2/This_will_be_hidden_by_external_disk
ls /media/external2       # now you see it
sudo mount /dev/sdb /media/external2
ls /media/external2       # now you don't!
sudo mount --bind / /mnt
ls /mnt/media/external2   # here you will see it again!
sudo umount /mnt          # to clean up the bind mount

If a mount point directory is not empty, systemd-based systems will issue a warning when mounting filesystems according to /etc/fstab at boot time.
Mounting an external disk changes permissions on mount point directory
1,535,694,643,000
How can I remove below file? srwxrwxrwx 1 patroh root 0 Aug 8 16:11 0= The user patroh is myself. The rm command won't work - it doesn't give any error when I execute rm 0. I am not sure how I created this file?
The s at the start of the line in ls -l's output identifies that as a unix-domain socket. The = at the end is a type indicator for sockets, one that ls -F adds. So the file itself is called just 0. Unix sockets are a particular method of interprocess communication that mostly acts like real network sockets but have names in the filesystem, which allows the usual filesystem access controls to apply to the sockets. That "file" you have there is one such name. The socket pseudo-files tend to linger around (uselessly) after the process that opened them has exited, unless something takes care to remove them. But they can be removed like any file. (Well, on Linux, at least.) E.g. with nc creating a unix socket and rm removing it: $ nc -U -l socket & [1] 22480 $ ls -l total 0 srwxr-x--- 1 ilkkachu ilkkachu 0 Aug 10 00:45 socket= $ rm socket $ ls -l total 0 $ kill %1 If rm doesn't give an error, it should mean it was able to remove the file. Of course, that wouldn't stop the file from being recreated afterwards.
how to remove file 0= file which has srw permission
1,535,694,643,000
Rather than run as root, my Duplicati container runs its own UID/GID [using the Linuxserver.io image]. This is great from a file creation and process running perspective to minimise confusion and unnecessary privilege. However, this has an unintended side effect on accessing source files for backup locally on the same filesystem/OS; it now does not have access to all files by default. This is also great except... ...files are not created equally. They are created from processes (also running in containers from both custom Dockerfile and public registries) that create files and directories through many arbitrary, distinct file (permission) configurations. In a perfect world the Duplicati UID would simply be in all groups used by all all the source file creation processes. But some processes, containers, etc, use strange or uncontrollable UMASK, default file creation modes, or even some files are intentionally not readably permissioned beyond the user owner. So on to my question: how can I continue to run Duplicati within the container as its own distinct user, but allow it to act as root in the (local) filesystem to allow it to backup all files? Obviously there's a route where I can repermission or chown the files before each run, but this potentially breaks certain applications that only run when certain permissions are present, or it breaks other security best practice. EDIT 2022-08-09 17:58 (UTC+1): Thanks to @telcoM I've created a custom-cont-init.d script (as provided by the Linuxserver.io container I'm working with): apt update && apt install -y libcap2-bin && apt clean setcap cap_dac_override=+ep /usr/bin/mono-sgen I can now see the appropriate cap_dac_override capability lit up on the process using getpcaps: root@dc42a0e3e0d7:/# ps auxnww USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 0 1 0.0 0.0 200 28 ? Ss Aug08 0:00 /package/admin/s6/command/s6-svscan -d4 -- /run/service 0 16 0.0 0.0 204 16 ? 
S Aug08 0:00 s6-supervise s6-linux-init-shutdownd 0 18 0.0 0.0 196 4 ? Ss Aug08 0:00 /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B 0 27 0.0 0.0 204 20 ? S Aug08 0:00 s6-supervise s6rc-oneshot-runner 0 28 0.0 0.0 204 20 ? S Aug08 0:00 s6-supervise s6rc-fdholder 0 35 0.0 0.0 180 4 ? Ss Aug08 0:00 /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. -- 0 471 0.0 0.0 204 20 ? S Aug08 0:00 s6-supervise duplicati 20031 473 0.0 0.1 146324 14756 ? Ssl Aug08 0:00 mono Duplicati.Server.exe --webservice-interface=any --server-datafolder=/config --webservice-allowed-hostnames=* 20031 481 17.6 2.1 2273276 175044 ? Sl Aug08 249:35 /usr/bin/mono-sgen /app/duplicati/Duplicati.Server.exe --webservice-interface=any --server-datafolder=/config --webservice-allowed-hostnames=* 0 501 0.0 0.0 6872 492 pts/0 Ss+ Aug08 0:00 /bin/bash 0 1278 0.0 0.0 7040 3556 pts/1 Ss 17:33 0:00 /bin/bash 0 1315 0.0 0.0 8468 2796 pts/1 R+ 17:52 0:00 ps auxnww root@dc42a0e3e0d7:/# cat /config/custom-cont-init.d/ 21-extra-group-id 31-setcap-dac-override root@dc42a0e3e0d7:/# cat /config/custom-cont-init.d/31-setcap-dac-override apt update && apt install -y libcap2-bin && apt clean setcap cap_dac_override=+ep /usr/bin/mono-sgen root@dc42a0e3e0d7:/# getpcaps 471 471: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap+ep root@dc42a0e3e0d7:/# getpcaps 473 473: = cap_dac_override+ep root@dc42a0e3e0d7:/# getpcaps 481 481: = cap_dac_override+ep And whilst my first smaller backup test worked without any of the previous filesystem permission errors, I'm still getting them for the much larger/slower backup. 
Is there anything else I'm missing to get this to work as I'd hope? EDIT 2022-08-30 13:09 (UTC+1): The accepted answer likely works, just not for me. I'm running this container on a Docker Swarm: The cap_add and cap_drop options are ignored when deploying a stack in swarm mode which comes from the Docker Compose reference docs.
That might be best solved by using Linux capabilities (see man 7 capabilities). For backup jobs, CAP_DAC_READ_SEARCH might suffice to allow it to read (and thus backup) everything that exists in any part of the filesystem namespace it can see. For restore jobs, you might need CAP_DAC_OVERRIDE to be able to write anywhere, plus CAP_CHOWN, CAP_FOWNER and CAP_FSETID to be able to restore any ownerships, permissions and ACLs. Docker has facilities that allow you to configure your containers with capabilities: things like this is what those facilities are for.
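With plain docker-compose (note the OP's later edit: cap_add is ignored when deploying a stack in swarm mode), granting those capabilities might look like this fragment; the service and image names are illustrative assumptions:

```yaml
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati   # assumed image name
    cap_add:
      - DAC_READ_SEARCH   # read everything (backup jobs)
      - DAC_OVERRIDE      # write anywhere (restore jobs)
      - CHOWN             # restore ownerships
      - FOWNER            # restore permissions/ACLs
      - FSETID
```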
How can I ensure my backup user has access to all files created by arbitrary processes with arbitrary UMASK, users, groups, and permissions?
1,535,694,643,000
I've had multiple issues with the configuration of the current user. Under a new user account these are solved. Now I want to move (not copy, because of disk space) all data from the old user to the new user. I also want to selectively move some of the unproblematic application setting files over to the new user. To do this, How can I make the entire user folder of the old user read and writable for the new user?
Change the ownership of the home directory using chown to the new user. chown -R newuser:newuser /home/olduser
Migrate to a new user home
1,535,694,643,000
I've stumbled upon a script which included the following two commands: chown -R some-user:0 /some/dir chmod -R g+w /some/dir Specifically, this is from the Dockerfile of the nginx-unprivileged Docker image. Is there any reason why one would add group-write permission to files owned by the root group anyway? Is there any situation in which this might make a difference, considering there are no other users in the root group?
First, the question should not be "are any users in the root group" but "will there ever be any users in the root group". Users (real users and system users) or running processes could end up in the root group in any of these ways:

(obviously) if they are listed in the root group in /etc/group
if they are listed with the root group's GID (0) in /etc/passwd column 4
if the executable is sgid root
if some system service (like systemd or sudo) starts a process in the root group
probably something else I've forgotten

If any of these ever occurs, then it matters whether files are writable by the root group.
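The first two membership routes can be checked directly on any system; a small sketch that only reads the system databases:

```shell
#!/bin/sh
# Who is in group root via /etc/group (supplementary membership)
getent group root
# Who has GID 0 as their primary group, via /etc/passwd column 4
awk -F: '$4 == 0 { print $1 }' /etc/passwd
```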
Do group permissions matter for files owned by the root group?
1,535,694,643,000
Is it possible to create a directory that its owner can't delete? Let's say I have directory bar owned by user foo, and I'd like to create a subdirectory bar/baz, also owned by foo, such that: foo can create and remove files and directories in bar/baz as normal foo can create and remove files in bar as normal foo can remove most directories in bar as normal foo (or any other non-superuser) CANNOT remove the directory bar/baz The reason I'd like to do this is because I'd like to set up bar/baz as a BTRFS subvolume (to exclude it from snapshots), and if foo can remove it and recreate it using mkdir, then it would not be a subvolume anymore.
I can think of at least two ways to prevent an owner from deleting a directory.

First, a directory can't be deleted if it isn't empty, so put something in it the owner can't delete:

a directory they don't own
a file (owner doesn't matter) that is immutable

Second, mount something on the directory.

In the first case, they'd still be able to rename the directory. But if something is mounted on it (which is what you want anyway), they can't do anything to it. Now if they can unmount what is on it...
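The "non-empty directory" half of this is easy to demonstrate without root; a sketch:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/bar/baz"
: > "$tmp/bar/baz/keep"           # stand-in for the undeletable item
rmdir "$tmp/bar/baz" 2>/dev/null \
  || echo "rmdir refused: directory not empty"
rm -r "$tmp"
```

In the real setup the keep file would be immutable (chattr +i) or owned by root, so foo couldn't empty the directory first; rm -rf would then fail the same way.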
A directory that is owned by some non-superuser, but can't be deleted by them
1,645,538,743,000
Suppose I'm creating a file or directory with some name and mode argument using a system call and that operations fails with EEXIST. Assuming I know my current umask, euid, and egid, how can I tell if that existing file/directory has permissions equivalent to what the system call would have created had the operation succeeded. AFAIK, for classical permissions, the answer would be true iff .st_gid == egid && .st_uid == euid && (.st_mode & 07777) == (RequestedMode & 07777 & ~CurrentUmask) and the found/expected filetypes match. How could this be extended to a system with access control lists?
To extend this to ACLs, you’d call acl_get_file with the path in which you’re creating the file, and ACL_TYPE_DEFAULT to request the default ACL on the directory. If there is one, that’s the ACL that would be applied by default to the file you tried to create. You’d then use acl_get_file on the existing file, with ACL_TYPE_ACCESS, to retrieve the actual ACL on the file. I don’t think there’s an ACL function for comparing ACLs, so that’s left as an exercise for the reader.
Comparing permissions of an existing file/directory with those of what would be created
1,645,538,743,000
I noticed an unusually large number of SSH attempts on my server, so I disabled root login for SSH, created a new sudo user, and confirmed I was able to log in without issues and elevate myself to the sudo user and root. I've since discovered that Plesk is unable to load the data from the server; the file permissions show as 10000:psaserv and even after updating it to root:psaserv the server's files are still unavailable. How can I undo this? Is there a trick to setting the correct file permissions for Plesk? I used the Plesk file privileges repair tool but it wanted to change it to alex:psaserv, which is the user I created as a sudo user, but it does not make a difference. Please let me know if any logs or command output is needed to help resolve this. Since the issue started I restored the root user and can SSH as root without issue; still, the above continues. Environment is CentOS 7.
You say file permissions, but what you actually changed was ownership. If the Plesk repair tool didn't work, try setting the default permissions back:

# find /var/www/vhosts/example.com/httpdocs/ -type f -exec chmod 644 {} \;
# find /var/www/vhosts/example.com/httpdocs/ -type d -exec chmod 755 {} \;

Obviously, change example.com to correspond to your directory.
Broke file permissions and can no longer load webpages on server
1,645,538,743,000
If group and user's settings are group name: group1 gid: 2000 user name: user1 uid: 2000 Some directory's permission is Directory: /application Owner: user1 Group: group1 When change the gid and uid to 2001, is there any permission issue for the directory?
Filesystem stores the UID and GID of the owner, not the name, so if you change your UID to 2001 you will no longer be owner of that directory, owner will still be the (now non-existing) user with UID 2000.
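You can see this on any file: the inode stores numbers, and names are only resolved at display time. A quick sketch:

```shell
#!/bin/sh
set -e
f=$(mktemp)
stat -c '%u:%g' "$f"   # numeric UID:GID, as stored in the inode
stat -c '%U:%G' "$f"   # names, looked up from /etc/passwd and /etc/group
ls -ln "$f"            # -n shows the raw numbers instead of names
rm "$f"
```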
Is there any file/directory permission effect if change uid or gid on Linux?
1,645,538,743,000
I configured a basic samba shire to share media files over SMB on my local network without credentials (i.e., as a SMB guest) /etc/samba/smb.conf [media] Comment = Media directory Path = /mnt/media Browseable = yes Writeable = Yes create mask = 0666 directory mask = 0777 Public = yes When I create a directory called example using SMB on Windows, the directory structure looks like this ls -alh total 28K drwxrwxrwt 4 root root 4.0K Oct 21 13:44 ./ drwxr-xr-x 3 root root 4.0K Oct 20 13:33 ../ drwxrwxrwx 2 nobody nogroup 4.0K Oct 21 13:44 example/ drwx------ 2 root root 16K Oct 20 13:36 lost+found/ lsattr --------------e----- ./example When I try to delete the directory from the system using a standard user account, I get an error message. rmdir: failed to remove 'example': Operation not permitted Yet, I can delete the folder from using SMB on Windows. What is happening here, and how can I allow any local unix user to delete or modify files created by a guest over SMB?
The t (sticky) flag in the parent directory's permissions means that an entry in it can be deleted only by the entry's own owner, the directory's owner, or root. Samba appears to be configured to provide guest access as the account nobody. You aren't nobody, so you don't have the rights to delete the directory. I do not recommend creating files and directories at the top level of the mount. Leave that for lost+found and one data directory, and share that data directory rather than the mountpoint.

# Remove global write permission (and the sticky bit) from the mountpoint
chmod go-w,-t /mnt/media
# Create your files and directories in here
mkdir -m777 /mnt/media/data

Now fix up the Samba data path:

[media]
comment = Media directory
path = /mnt/media/data
browseable = yes
read only = no
guest ok = yes
force directory mode = 0777
force create mode = 0666
Why can't a user delete a directory owned by nobody?
1,645,538,743,000
Directory permissions: d---rwx--- 2 root wheel - 512 Aug 5 15:43 Test/ File permsisions: ----rwx--- 1 chambi wheel - 33 Aug 5 15:42 loop.py* Both the directory and file only allow those in group wheel to wrx. In this case however, it seems to be all within group wheel other than the owner of the file "chambi". Why is this the case? This issue can be overcome if I create the file as root but I want users within this group to be able to make files under their own names and be able to edit, execute etc.
File permissions aren't cumulative; the most specific matching class applies. The kernel picks exactly one class - owner first, then group, then other - and checks only that class's bits, so with 070 the owner gets no access even though they are also a member of the group. If you want the owner of a file as well as group members to be able to access it, the appropriate permission is 770.
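A sketch of the 070 situation: stat confirms the mode, and as a regular (non-root) user the final cat fails even though the group class has rwx:

```shell
#!/bin/sh
f=$(mktemp)
chmod 070 "$f"            # owner: ---, group: rwx, other: ---
stat -c %a "$f"           # → 70
cat "$f" 2>/dev/null \
  || echo "owner denied"  # owner class matches first and has no bits
                          # (root bypasses this check)
chmod 600 "$f"
rm "$f"
```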
chmod 070 allows all but owner of file to wrx
1,645,538,743,000
I'm interested in protecting a parent directory (e.g., test in the example below) by being removed (i.e., accidentally, without explicit root priviledges), while at the same time I need to be able to write inside it (as a non-root user). For instance, suppose that we have a parent directory test with the following sub-directories: test/ |-- dir_1 `-- dir_2 Let's suppose that root is the owner and the group is some_group: drwxr-xr-x 4 root some_group 4096 Apr 16 13:38 . drwxr-xr-x 3 some_user some_user 4096 Apr 16 13:36 .. drwxr-xr-x 2 root some_group 4096 Apr 16 13:38 dir_1 drwxr-xr-x 2 root some_group 4096 Apr 16 13:38 dir_2 Now, I want a user who is member of some_group not to be able to remove dir_1 and dir_2, which I believe I've guaranteed by setting them to 755, but at the same time I need this user to be able to write inside test, but without being able to delete the whole test directory. Is there any way of doing this? Thank you.
You need to realize that in Unix file systems, you don't need write permission on the target to delete an entry, just on the containing directory. If you want to protect the directory test, set the permissions on the parent of test. If you want to allow users to create and delete their own files, set the permissions on test to 1777. The leading 1 is the sticky bit: users can then create files in test and delete their own, but cannot remove files owned by other users.
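A minimal sketch of the 1777 arrangement (run the writes as different users to see the sticky bit itself in action):

```shell
#!/bin/sh
set -e
parent=$(mktemp -d)        # plays the role of the parent of 'test'
mkdir "$parent/test"
chmod 755 "$parent"        # group/others cannot remove 'test' itself
chmod 1777 "$parent/test"  # anyone may create files inside; the sticky
                           # bit stops users deleting each other's files
stat -c %a "$parent/test"  # → 1777
rm -r "$parent"
```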
Permissions of parent directory (being able to write on it, but not deleting it)
1,645,538,743,000
I'm using this commands to make DIRECTORY undeletable: sudo chmod 000 DIRECTORY sudo chattr +i DIRECTORY But I can delete it using this commands: sudo chattr -i DIRECTORY sudo rm -rf DIRECTORY How can I make DIRECTORY really undeletable?
As long as you have access to the root user, who can write to your partition, you cannot make anything undeletable: root can still cat /dev/zero > /dev/partition and destroy all the data. Ways to mitigate that: Read-only media: CD/DVD/BD-Ray (tends to degrade over time; most off-the-shelf optical disks become unreadable sooner or later unless you store them in a very special environment) Media with a read-only switch (can also die over time) Paper (yes, you can print something valuable and it becomes sort of undeletable) No root access And everything can still be physically destroyed.
How to make a directory really undeletable?
1,645,538,743,000
Issue: I have an FFmpeg command that I've been running for months now to stream video into the /dev/shm directory. It had been working fine until relatively recently (e.g. within a week); now it throws a permission issue. The command: ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 /dev/shm/manifest.mpd This is not the exact command (pared down for brevity), however the outcome is the same: libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast X Error: GLXBadContext Request Major code 151 (GLX) Request Minor code 6 () Error Serial #57 Current Serial #56 ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04) configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib libavutil 56. 51.100 / 56. 51.100 libavcodec 58. 91.100 / 58. 91.100 libavformat 58. 45.100 / 58. 45.100 libavdevice 58. 10.100 / 58. 10.100 libavfilter 7. 85.100 / 7. 85.100 libswscale 5. 7.100 / 5. 7.100 libswresample 3. 7.100 / 3. 7.100 libpostproc 55. 7.100 / 55.
7.100 Input #0, video4linux2,v4l2, from '/dev/video2': Duration: N/A, start: 1900.558740, bitrate: 147456 kb/s Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264)) Press [q] to stop, [?] for help [libx264 @ 0x55b15d8912c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 [libx264 @ 0x55b15d8912c0] profile High 4:2:2, level 3.0, 4:2:2 8-bit [libx264 @ 0x55b15d8912c0] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 [dash @ 0x55b15d88f600] No bit rate set for stream 0 [dash @ 0x55b15d88f600] Opening '/dev/shm/init-stream0.m4s' for writing Could not write header for output file #0 (incorrect codec parameters ?): Permission denied Error initializing output stream 0:0 -- Conversion failed! (tl;dr: Could not write header for output file #0 (incorrect codec parameters ?): Permission denied) For contrast, this version of the command (writing to the home directory) works fine (/tmp/ also works): ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 ~/manifest.mpd As mentioned above, the strange thing is that I have not (knowingly) changed permissions on anything or altered the application; it seemingly just stopped working (although, not ruling out that I caused it). 
The last time I remember it working was probably a week ago (~March 20th, 2021). What I tried: Running ffmpeg as sudo (sudo ffmpeg...) Result: sudo: ffmpeg: command not found. This hasn't been necessary in the past, and it had the same output as before. sudo sysctl fs.protected_regular=0 Result: No change. Ran the ffmpeg ... command as su Result: No change chmod +777 /dev/shm Result: No change (ls -tls reveals that the directory is indeed rwxrwxrwt) chown'd both root:root and my username on /dev/shm Result: No change. touch /dev/shm/test.txt and sudo touch /dev/shm/test.txt Result: The file is created without issue. I've exhausted everything I could think of relating to permissions to get it to work. The Question What do I need to do to get FFmpeg to write files to /dev/shm? Ideally, figuring out why this happened in the first place. If anyone has any ideas for commands I should run to help diagnose this issue, feel free to add a comment. System Info: Kernel: 4.19.0-14-amd64 Distro: Debian FFmpeg: version n4.3.1 (Was installed using Snapd, if it matters.) == Solution == jsbilling's solution of using snap.<snapname>.* unfortunately did not work; however, in the linked forum thread there was a post which basically got around the issue of writing to /dev/shm by mounting a directory in home ~/stmp and writing the ffmpeg output there: $ mkdir ~/stmp/ $ sudo mount --bind /dev/shm/streaming_front/ ~/stmp/ ... $ ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 ./stmp/manifest.mpd Not an ideal solution, but a working one.
If you are using snaps, this forum post indicates there are specific patterns that are allowed for files in /dev/shm: /dev/shm/snap.<snapname>.* Another forum member suggested this hack, although it is basically a security bypass: $ mkdir /dev/shm/shared $ mkdir ~/shmdir $ sudo mount --bind /dev/shm/shared ~/shmdir $ touch ~/shmdir/foo $ ls /dev/shm/shared/ foo
FFmpeg cannot write file to /dev/shm: Permission Denied
1,645,538,743,000
Objective: Creating a folder on the root, chown to group and add users to group - but users get too wide permissions! Consider the following: # as root # we need a user group groupadd team1 # we need a shared folder mkdir /project1 chown root:team1 /project1 chmod 770 /project1 # we need users - and they get set pw elsewhere :) for i in bob tina jim joy; do useradd $i; done # we add them to the project group 'team1' that gives access to the shared folder usermod -aG team1 [username] What is puzzling is that user jim can create a file in /project1 and user joy can open, change and save the file in vim or try to delete the file, which will be executed after confirmation that this is the intent. Question: Is this to be considered correct behaviour? Shouldn't chmod 770 /project1 be limited to permissions on the folder itself, but not as it appears: recursively to files within said folder?
This is normal behaviour. 770 permissions on a directory allow the directory’s owner and any member of the directory’s group to read, write and search the directory. This means that any member of the group can delete files in the directory and create new files, independently of the permissions and ownership of the files themselves. This is what you’re seeing; whatever permissions jim sets on a file, joy can delete it and replace it with another, which is what vim does. There are additional permissions you can set on directories, in the standard Unix permissions model. The first useful one here is the sticky bit, which restricts deletions: files can only be deleted by their owner, the directory’s owner, or root. chmod +t /project1 would set this up, and then joy wouldn’t be able to delete jim’s files. The second useful permission is the sgid bit, which causes the directory’s group to be applied to newly-created files in the directory: chmod g+s /project1 To combine both, run chmod 3770 /project1 See Understanding UNIX permissions and file types for details.
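To check the combined result, a minimal sketch in a scratch directory (the path is invented):

```shell
mkdir -p /tmp/project1_demo
chmod 3770 /tmp/project1_demo       # 2000 (sgid) + 1000 (sticky) + 770
stat -c '%a %A' /tmp/project1_demo  # -> 3770 drwxrws--T
```

The s in the group triad shows the sgid bit; the capital T shows the sticky bit on a directory whose "others" class has no execute permission.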
Setting permissions on files and folder - is folder perms inheritance always implicit?
1,645,538,743,000
systemctl status lighttpd ● lighttpd.service - Lightning Fast Webserver With Light System Requirements Loaded: loaded (/usr/lib/systemd/system/lighttpd.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Thu 2020-09-24 15:56:39 EDT; 2s ago Process: 6152 ExecStart=/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf (code=exited, status=255) Main PID: 6152 (code=exited, status=255) Sep 24 15:56:39 js.dc.local systemd[1]: Started Lightning Fast Webserver With Light System Requirements. Sep 24 15:56:39 js.dc.local lighttpd[6152]: 2020-09-24 15:56:39: (server.c.752) opening errorlog '/var/log/lighttpd/error.log' failed: Permission denied Sep 24 15:56:39 js.dc.local lighttpd[6152]: 2020-09-24 15:56:39: (server.c.1485) Opening errorlog failed. Going down. Sep 24 15:56:39 js.dc.local systemd[1]: lighttpd.service: Main process exited, code=exited, status=255/n/a Sep 24 15:56:39 js.dc.local systemd[1]: lighttpd.service: Failed with result 'exit-code'. dir permissions are as follows: ]# ls -la /var/log/lighttpd/ total 4 drw-rw-rw- 2 lighttpd lighttpd 41 Sep 24 15:54 . drwxr-xr-x. 8 root root 4096 Sep 24 14:49 .. -rw-rw-rw- 1 lighttpd lighttpd 0 Sep 24 15:00 access.log -rw-rw-rw- 1 lighttpd lighttpd 0 Sep 24 15:54 error.log I've removed and recreated the file. There is no selinux enabled. Not sure what else to try.
Confirm the permissions of /var/log and /var/log/lighttpd. The output you show indicates that /var/log/lighttpd is drw-rw-rw-: without the execute (search) bit, lighttpd cannot open files inside it, even though the files themselves are writable. It should probably be drwxr-xr-x.
lighttpd won't start even with the right permissions
1,645,538,743,000
I have some strange behavior I don't understand. I'm just trying to list some files in a directory: sudo find /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root produces: /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/ /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/trustdb.gpg /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/private-keys-v1.d /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.extra /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/pubring.kbx~ /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.browser /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/pubring.kbx /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/S.gpg-agent.ssh So I know the .gnupg directory exists, and has files in it. sudo ls -la /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root produces: total 12 drwx------ 3 root root 4096 Sep 21 14:54 . drwxr-xr-x 3 root root 4096 Aug 24 18:30 .. drwxr-xr-x 3 root root 4096 Sep 21 14:54 .gnupg So the directory itself has rwx permissions. But the command sudo ls -la /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/* gives: ls: cannot access '/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*': No such file or directory I've checked the path over and over and can't see anything wrong. I have rwx permissions and root level access. What else could stop me from listing this directory? My ultimate goal is to do a chmod 600 /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*, which also fails. But for now I'd settle for ls. Edit: It just hit me. Does this have to do with file globbing? Does the * expand before sudo, and therefore without root access?
As you noted correctly, the expansion order is the root of your problem. The step relevant for filename globs is "filename expansion". Although it is rather late in the order of expansions (see here e.g.), it is performed before the command is actually invoked. This means that e.g. ls * in a directory containing file1.txt, file2.txt and file3.txt is actually called as ls file1.txt file2.txt file3.txt. The problem now is that the calling user doesn't have the necessary permissions to enter the .gnupg directory. Hence, the expansion of the * filename glob will fail. In such cases, unless specific shell options are set, the * will remain literal on the command line, so that the shell would try to perform the ls command on a file literally called *. Hence the error message cannot access '/home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/*' As you see, it tries to access a file called * inside /home/vsts/work/_temp/tmp.Q8K2bSeNVV/root/home/root/.gnupg/. What you can do instead is write a shell script that does the relevant operations, which you then call as root using sudo. You would need to place the script under a path accessible from everywhere, such as /usr/local/bin.
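The mechanism is easy to reproduce with a pattern that cannot match anything — the shell passes the pattern through literally, which is exactly the * shown in the error message. As an alternative to a separate script, a root shell can be made to do the expansion itself (the chmod path below is shortened for illustration):

```shell
# An unmatched glob stays literal in the default shell configuration:
mkdir -p emptydir
echo emptydir/*   # no match, so the shell keeps the pattern: prints "emptydir/*"

# Workaround sketch: let root's shell expand the glob instead of yours:
# sudo sh -c 'chmod 600 /home/vsts/.../root/.gnupg/*'
```

Because sh runs under sudo, the glob is expanded with root's permissions, after the directory can actually be entered.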
ls shows "no such file", but why? [duplicate]
1,645,538,743,000
I have to write a script that runs at startup (this is a part that I don't know how to do yet) that erases the temporary files of the current user, and I have the "Permission denied" error. The errors look like this: '/tmp/systemd-private-long number-colord.service-LTsv8G' : permission denied ; '/tmp/systemd-private-long number-systemd-timesyncd.service-PxhNq0' : permission denied ; '/tmp/systemd-private-long number-rtkit-daemon.service-KQN6zN' : permission denied This is my code so far: TMPFILE=$(mktemp)|| exit 1 find /tmp -type f -user $USER -exec rm -f {} \; If I run ls -l after I've created TMPFILE I get: total 4 -rwxrwxrwx 1 bristena bristena 530 may 25 10:51 sh.01 If I do cd /tmp and then run ls -l I get total 12 drwx------ 3 root root 4096 may 25 10:14 systemd-private-long number-colord.service-LTsv8G drwx------ 3 root root 4096 may 25 10:14 systemd-private-long number-rtkit-daemon.service-KQN6zN drwx------ 3 root root 4096 may 25 10:14 systemd-private-long number-systemd-timesyncd.service-PxhNq0 -rw------- 1 bristena bristena 0 may 25 10:51 tmp.j0rvQtmz7G From what I've seen online I can also use trap "rm -f TMPFILE; exit", but I don't know how to integrate the current user requirement. All of your help would be deeply appreciated
In order to avoid the error messages of find /tmp -type f -user $USER -exec rm -f {} \; you can either redirect them find /tmp -type f -user $USER -exec rm -f {} \; 2>/dev/null or prevent find from running into that problem: find /tmp \( -type d \( -executable -o -prune \) \) -o -type f -user $USER -exec rm -f {} \;
find permission denied while erasing a temporary file of a certain user
1,645,538,743,000
I just installed elementary OS dual booted with Windows 10. After the installation, I saw that the / root directory has every file and folder locked. I can't rename, delete or create anything there. The home directory is fine. I ran the sudo chown -R $USER: / command from the internet to get permissions, and all folders were unlocked. But now after reboot I can't log in. It just doesn't show the login screen. It shows a black screen with a blinking _. I want to ask the following - Are the root directory folders locked by default? If they are, do I need to unlock them for some reason in future for installing some software or any other reason? If I need them unlocked for some purposes, how should I do it correctly? I pressed Ctrl+Alt+F1 on the black screen and I can log in as my username, but couldn't do much. Can I reverse the process to log in again? If yes, how? I can see my Windows disks under other locations. All files can be read but are locked, which is fine and I want them to be like that. But is there any possibility that those files can be deleted by me by running some commands? I am kind of new to Linux. I don't understand, when I am the only user, why root has the permissions. And if root has the permissions, and I am the only one using the OS, how do I do things which only root has the permission to do? I tried googling this thing but couldn't understand it properly.
You broke it. The root directory and most things under it (not your home directory) are owned by the system, and are, as you say, locked (for good reason). Even when you know what you are doing, it will rarely be necessary to change anything there. Very roughly, the system is split into two parts: the operating system, and user data. Don't try to change the operating system part. If you need to use sudo, it is to change the operating system part; don't do it (except to install new packages: sudo apt install ...). There are some executable files (setuid programs) that, when run, run as the user owning the file. They need to do this to allow privileged operations (such as logging in, and changing to the new user). The easiest thing to do is to re-install. After re-installing, avoid use of sudo except to install new packages. Learn how the system works before trying to fix it.
Can't login after chown command [duplicate]
1,645,538,743,000
I have developed a project/service which will give me an installable file for a Linux machine. The service should always run in the background while the machine is on (a Linux background process). I just want to know: what is the extension of that file? (A Windows service has the .exe extension, for example.) Do we need admin access on the Linux machine to install that file? I will probably use the unit file below [Unit] Description=Dotnet Core Demo service [Service] ExecStart=/bin/dotnet/dotnet Service.Sample.dll WorkingDirectory=/etc/SampleService/ User=dotnetuser Group=dotnetuser Restart=on-failure SyslogIdentifier=dotnet-sample-service PrivateTmp=true [Install] WantedBy=multi-user.target
The code that starts with [Unit] is a systemd service file. When you distribute your package, you need to use means specific to the package system, be it dpkg, RPM or something else, to put the service file into the appropriate directory (most likely /usr/lib/systemd/system) and have it autostarted according to the [Install] section (systemctl enable). Refer to the documentation of systemd and of the package system in question. The extension of the service file must be .service.
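For example, assuming the unit file shown in the question is saved as dotnet-sample.service (the file name is an assumption), a manual installation — outside of any package — would look roughly like this; all steps require root:

```shell
# Copy the unit into systemd's unit directory (path varies by distribution):
sudo cp dotnet-sample.service /usr/lib/systemd/system/
# Make systemd re-read its unit files:
sudo systemctl daemon-reload
# Enable autostart per the [Install] section and start the service now:
sudo systemctl enable --now dotnet-sample.service
```

A package would do the copy itself and run the enable step from its post-install script.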
What is extension of Linux long running background service/executable file? [closed]
1,645,538,743,000
Changed the permissions on a file, bsj: /media/cwh/BA70-05FE/swdev$ ls -al ~/work/sw/swdev/ total 12 drwxrwxr-x 2 cwh cwh 4096 Feb 28 22:21 . drwxrwxr-x 4 cwh cwh 4096 Feb 28 22:21 .. -rw-r--r-- 1 cwh cwh 4048 Feb 28 22:21 bsj /media/cwh/BA70-05FE/swdev$ chmod +x ~/work/sw/swdev/bsj /media/cwh/BA70-05FE/swdev$ ls -al ~/work/sw/swdev/ total 12 drwxrwxr-x 2 cwh cwh 4096 Feb 28 22:21 . drwxrwxr-x 4 cwh cwh 4096 Feb 28 22:21 .. -rwxr-xr-x 1 cwh cwh 4048 Feb 28 22:21 bsj Tried same command on a file on an SD card: /media/cwh/BA70-05FE/swdev$ ls -al total 96 drwxr-xr-x 2 cwh cwh 32768 Feb 28 22:17 . drwxr-xr-x 4 cwh cwh 32768 Dec 31 1969 .. -rw-r--r-- 1 cwh cwh 4048 Feb 28 22:17 bsj /media/cwh/BA70-05FE/swdev$ chmod +x bsj /media/cwh/BA70-05FE/swdev$ ls -al total 96 drwxr-xr-x 2 cwh cwh 32768 Feb 28 22:17 . drwxr-xr-x 4 cwh cwh 32768 Dec 31 1969 .. -rw-r--r-- 1 cwh cwh 4048 Feb 28 22:17 bsj Appears to have no effect.
Looks like you are using a non-Unix file system (perhaps FAT32) on the SD card. Unix permissions do not work on those.
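If the files need to appear executable anyway, the usual workaround is to choose the mode at mount time — a sketch; the device and mount point are illustrative:

```shell
# FAT stores no per-file mode bits, so one mode is synthesized for the whole
# filesystem when it is mounted:
sudo mount -t vfat -o uid=$(id -u),gid=$(id -g),fmask=0022,dmask=0022 /dev/mmcblk0p1 /mnt/sd
# fmask=0022 makes every file -rwxr-xr-x; chmod on individual files still has no effect.
```

To keep real per-file permissions, reformat the card with a Unix filesystem such as ext4 (at the cost of Windows compatibility).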
How to change permissions on a file on SD card
1,645,538,743,000
I am confused as to what this question is asking me to do. For context, hello is a c++ file. "Use chmod command again to make hello an executable, but not readable and not writeable for all users." Using an online chmod calculator, my best guess would be chmod 001 hello, which is executable by the public, but not readable nor writeable by the public. Is this correct?
First of all allow me to explain the basics of chmod. Chmod is a Unix command that allows you to set permissions that determine who can access the file, and how they can access it. You can set these permissions for 3 different categories. The owner of the file (User) The members of the group that owns that file (Group) Everyone else (Others) There are two ways to modify the permissions: 1) By using symbolic characters The permissions are separated into 3 categories: a. Read b. Write c. eXecute You can set the permissions in the following way: Let's imagine a file called file.sh. We want to set the permissions so that the user can read, write, execute the group can read, write the others can read All we have to do is run chmod u=rwx,g=rw,o=r file.sh Or perhaps we want to make it executable to everyone, so we run chmod +x file.sh and if we want the opposite of the above command we can do chmod -x file.sh 2) By using octal numbers The other way is by using octal numbers, each of which represents the permissions for the user, group, and others, in that order. 4 stands for "read" 2 stands for "write" 1 stands for "execute" 0 stands for "no permissions." By adding those numbers we can easily set the individual permissions. So if we take the previous example that would mean chmod 764 file.sh 7 is the result of permissions 4+2+1, 6 is 4+2+0 and 4 is 4+0+0 You can view more information by running man chmod Back to your problem. Although your question is unclear I would say you should use chmod 711 hello Which means you (the owner) have full permissions, your group can only execute and the same goes for everyone else. or (depending on how you interpret the words "all users") chmod 771 hello Which means you (the owner) have full permissions, the same goes for your group, but everyone else can only execute.
Now I should mention that you could use something like chmod 001 hello or chmod 111 hello but I see no point in doing something like this, unless it's a compiled program or something. But still...
chmod 001 or 111? Unix Permissions for executable files question [duplicate]
1,645,538,743,000
I am trying to set up a RaspberryPi as a Plex Media Server. The server is set up and running. I can access it via the web interface. I want my media files in the directory /mnt/sda/will/plex. These are the permissions of that directory: pi@raspberrypi:/mnt/sda/will $ sudo ls -lstr total 8 4 drwxrwxrwx 3 will root 4096 Dec 28 16:53 plex 4 drwxrwxrwx 2 will will 4096 Dec 28 16:53 'test folder' My Plex server is using user will: pi@raspberrypi:/mnt/sda/will $ sudo nano /etc/default/plexmediaserver with line export PLEX_MEDIA_SERVER_USER=will. Neither folder plex nor test folder can be seen by Plex: no media added to directory /mnt/sda/will/ or either sub-directory is picked up by Plex. This has all the hallmarks of a permissions issue but I can't see at all where they are incorrect. It might be worth noting that I can access the folders via a networked drive using user will.
Take a look at the permissions on the folders at or above /mnt/sda/will. All of those folders need to have r and x permission on them. 'x' means something different on folders than on files. You could probably solve the problem by running the following commands: sudo chmod a+rX /mnt sudo chmod a+rX /mnt/sda sudo chmod a+rX /mnt/sda/will When running chmod +X (the X being capitalized) it will only apply the x permission to folders (and to files that already have an execute bit), which makes it very useful when used recursively.
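The x vs X distinction can be demonstrated without touching /mnt at all — a small sketch with invented names:

```shell
mkdir -p xdemo/sub && touch xdemo/plain
chmod 700 xdemo/sub             # start the directory with no group/other access
chmod 600 xdemo/plain           # start the file with no execute bit anywhere
chmod -R a+rX xdemo             # capital X: adds x only to dirs (and already-executable files)
stat -c '%A %n' xdemo/sub xdemo/plain
# -> drwxr-xr-x xdemo/sub
# -> -rw-r--r-- xdemo/plain
```

The directory becomes traversable by everyone, while the plain file gains read access but no execute bit.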
Plex Media Server cannot see sub folders.. Permissions issue?
1,645,538,743,000
I installed PhpStorm on Linux Mint 19.2. When I try to open the project, the editor doesn't show me the /var/www directory. What can I do? I don't have this problem with VSCode. For the LAMP installation I used: (LAMP Installation) sudo chgrp www-data /var/www sudo chmod 775 /var/www sudo chmod g+s /var/www usermod -a -G www-data username sudo chown username /var/www/
I installed PhpStorm with Software Manager, so the problem was there. After deleting the app and reinstalling it with snap, the problem was gone. sudo snap install phpstorm --classic
PhpStorm don't show /var/www on opening
1,645,538,743,000
I have more than 50 files and I need to find the ones that have: No r permission for group No w permission for group No x permission for group w or r permission for others I tried the command find <directory> -perm /102 but it's showing the files with w and r permission for group
-perm /102 will simply match files which have any of those bits set, as described in the manpage. To achieve what you want, you need two -perm predicates; one that excludes your "no" permissions, and one which includes your "yes" permission: find ... \! -perm /070 -perm /006
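A quick sanity check of those two predicates, with invented file names:

```shell
mkdir -p permdemo && cd permdemo
touch secret shared listed
chmod 700 secret   # group ---, others --- : fails the "r or w for others" part
chmod 664 shared   # group rw-            : rejected by \! -perm /070
chmod 604 listed   # group ---, others r--: matches both predicates
find . -maxdepth 1 -type f \! -perm /070 -perm /006
# -> ./listed
```

`\! -perm /070` keeps only files with none of the group rwx bits; `-perm /006` then requires at least one of the r/w bits for others.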
How To Find Files Based On their Permissions In Linux
1,645,538,743,000
I download some files through FileZilla and all of the files in subdirectories have this "???" owner/group/permissions display: -????????? ? ? ? ? ? file_a.txt -????????? ? ? ? ? ? file_b.txt -????????? ? ? ? ? ? file_c.txt This is when viewed as "user_a", but when viewed as root they are correctly identified as "user_a:user_a". I tried to chown -R <owner>:<group> path/ but permissions still look correct as root and still look like "???" as "user_a". I tried copying the folder and fixing the permissions but it's still messed up. How can I fix this?
You didn't show the permissions of the directory containing those files, but it's likely you're missing the access (x) bit from the directory permissions. Without it, you can't call stat() on files, and thus can't find out their sizes, permissions, owners, etc. Example: $ mkdir dir; touch dir/foo.txt; chmod -x dir; ls -l dir ls: cannot access 'dir/foo.txt': Permission denied total 0 -????????? ? ? ? ? ? foo.txt Make sure you have the x bit set on the directories. You could add it for the owner for all directories in the subtree with something like this: find . -type d -exec chmod u+x {} + See: Execute vs Read bit. How do directory permissions in Linux work?
Files downloaded from FileZilla have "-????????? ? ? ? ?" permissions and I can't chown them with root
1,645,538,743,000
==================================================== ==================================================== CHECK_BEGIN: DO_CRON ==================================================== ==================================================== [FILE]: CRON.ALLOW -rw-------. 1 root root 0 Sep 1 2017 /etc/cron.allow ==================================================== ==================================================== ==================================================== [FILE]: CRON.DENY -rw------- 1 root root 0 May 5 2018 /etc/cron.deny ==================================================== ==================================================== Checking permissions on /var/spool/cron drwx------. 2 root root 4096 May 5 2018 /var/spool/cron ==================================================== How do I interpret the above output? The root user can always use cron, regardless of the usernames listed in the access control files? If the file cron.allow exists, only users listed in it are allowed to use cron, and the cron.deny file is ignored? Therefore in this case, only the users listed within /etc/cron.allow are allowed access to the cron daemon.
If the cron.allow file exists, it lists the users that may use cron. If it does not exist, the cron.deny file is checked. If the cron.deny file exists, it lists the users that may not use cron. This file is not consulted if the cron.allow file exists. If all users are denied the use of cron (as in your case, since the cron.allow file exists, and is empty), only root is able to use cron. This is the same that would happen if neither file existed. The most common configuration is to have an empty cron.deny file and no cron.allow file. This would allow everyone the use of cron. This also applies to at.deny and at.allow for using at to schedule commands.
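The decision order can be sketched as a small shell function — an illustration of the rules above, not crond's actual source; the file paths are parameters so it can be tried against copies of the files:

```shell
# may_use_cron ALLOW_FILE DENY_FILE USER -> exit status 0 if USER may use cron
may_use_cron() {
  allow=$1 deny=$2 user=$3
  [ "$user" = root ] && return 0    # root is never restricted
  if [ -f "$allow" ]; then
    grep -qx "$user" "$allow"       # allow file exists: user must be listed
  elif [ -f "$deny" ]; then
    ! grep -qx "$user" "$deny"      # deny file exists: user must not be listed
  else
    return 1                        # neither file: only root (common default)
  fi
}
```

With an empty cron.allow, as in the output above, the function denies every non-root user.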
Interpreting cron output
1,645,538,743,000
My sd card in my usb card reader will not allow me to add files while in ext4. I checked permissions and it's in root. I'm hoping if I change the permissions to non-root, it will let me add files. sudo chmod 777 filename = I don't know the file name; I put in the random numbers/ letters assigned to it, but get error: no such file or directory. Same with chown. When I put the usb card-reader into the computer, the comp automatically gives the ext media a name, such as lj4l5jlj069ofjrkle5kg05. in a terminal: whoami@server:/media/whoami/t9gjkg-tji-gjgj-gogjf-gjgu-i94k4-k5k $ sudo chmod 777 t9gjkg-tji-gjgj-gogjf-gjgu-i94k4-k5k "no such file or directory"
If your working directory is /media/whoami/t9gjkg-tji-gjgj-gogjf-gjgu-i94k4-k5k, then directory t9gjkg-tji-gjgj-gogjf-gjgu-i94k4-k5k is not found, because you're already in it. You can then use sudo chmod 777 . to change the permissions. Or simply use the absolute path to execute the command. This way is independent of the working directory: sudo chmod 777 /media/whoami/t9gjkg-tji-gjgj-gogjf-gjgu-i94k4-k5k
How can I change the permissions on a usb flash drive?
1,645,538,743,000
I have a directory that has the SGID bit set, so ls displays it as drwxr-sr-x, and it is owned by a normal user. I have a file that is owned by root in that directory, with permissions 644. The question is: can I make that file become owned by the user who owns the directory?
You have read permission on the file, and write permission on the directory. Therefore you can make a copy, remove the original, and rename the copy to the original name. The copy will be owned by you, and (thanks to the SGID bit) will inherit the directory's group.
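Concretely, assuming the file is called file.txt (an invented name) — a sketch; the first line only stands in for the root-owned file so the snippet is self-contained:

```shell
# Stand-in for the root-owned file (in the real scenario it already exists,
# owned by root with mode 644):
echo 'contents' > file.txt

cp file.txt file.txt.new   # allowed: mode 644 lets you read it; the copy is owned by you
rm -f file.txt             # allowed: deletion needs write permission on the directory, not the file
mv file.txt.new file.txt   # same name, same contents, now owned by you
```

Without -f, rm would ask for confirmation before removing the write-protected original, as the question notes.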
Ownership of file(s) in directory with SGID bit set
1,645,538,743,000
I am using Linux on a VMware VM on Windows and added a second HDD device today. However, each file or directory that is created on the new device gets executable permissions which are not removable (even using root rights). When using ls, the folders are highlighted in green, which I don't want. I guess this is more related to the VM rather than Linux. Changing the file permission (removing the executable permission) with sudo chmod -x myFile will not give any change. Does anybody know why each added file ends up with the executable flag and how one could remove it? Any answer is highly appreciated. Thank you in advance, Tobi
It sounds like you're trying to use an NTFS (or FAT) file system on Linux and are struggling with file permissions. By default those filesystems do not support Unix-style permissions, but you can define a default one in the mount options sudo mount -t ntfs -o rw,auto,user,fmask=0133,dmask=0022 /dev/drive /mnt/point This will assign 644 to all the files and 755 to all the directories on the mounted drive. There is, however, a way to enable Unix permissions on an NTFS drive by adding permissions to the mount options, though you need to define a user mapping in order to use it. See man ntfs-3g for more details. FAT doesn't support this in any way, though.
Cannot change file/folder permission on Linux running on VMware
1,645,538,743,000
drwxrwxrwx 2 user1 user1 4096 Jun 21 11:25 . drwxr-xr-x 16 user1 user1 4096 Jun 21 11:25 .. -rw-r--r-- 1 user1 user1 15 Jun 21 11:25 access.txt The file access.txt is owned by user user1 but the directory has open access to the world (777). If I log in as user2, I can delete access.txt even though user2 does not have write permission on it. So do directory permissions take precedence over file permissions? Perhaps that's not the best way to describe it, but I'm just looking for a basic explanation here.
Unlinking access.txt from the directory is not a change to access.txt, but a change to the directory, so user2's write permission on the directory is what is relevant. The write permission on the file would be of interest if user2 wanted to modify the file, rather than the containing directory.
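You can see the same principle without a second user: a file with no permissions at all can still be removed, because unlinking only consults the containing directory (rm -f suppresses the courtesy prompt for write-protected files):

```shell
mkdir -p demo
touch demo/access.txt
chmod 000 demo/access.txt   # no read, write or execute on the file itself
rm -f demo/access.txt       # succeeds: we have write+execute on demo/
```

The file's own mode bits never entered into it; only the permissions on demo/ mattered.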
Do Linux directory permissions overrule file permissions? [duplicate]
1,645,538,743,000
A few days ago I installed Elasticsearch. While diagnosing some problems with my Elasticsearch setup, I checked the log files located in /var/log/elasticsearch and tailed them, but apparently the directory's owner is elasticsearch and the group is elasticsearch, so I worked around it by switching to root and running tail there (not an elegant way). Now I would like to write a simple script to tail those files (I'm using tmux). Does anybody have a suggestion for overcoming the privileges issue in a script? I'd prefer to keep the ownership as the elasticsearch user.
Considering 'Stuart' is the user running your script, you can:

- make Stuart a member of the elasticsearch group (provided group members can actually access these log files)

OR

- make the directory + logs readable by Stuart. This implies:
  - setting the execution bit on /var/log/elasticsearch so that Stuart can enter it
  - setting the read bit on /var/log/elasticsearch/whatever.log so that Stuart can actually read it

OR

- define sudo privileges (but this sounds overkill)
How to operate on a directory with different owner from a script?
1,645,538,743,000
Question I would like to launch a Docker container which runs a process which may or may not clone itself. Is it possible to set up a user with normal + clone_newuts permission so that I do not have a login user to my container which is a superuser? Initially this question was labelled "CLONE_NEWUTS permission only", which is incorrect. @sourcejedi has answered in good faith below and improved my understanding considerably. EDIT-1 I've found where the su flags are held: /usr/include/linux/sched.h. I expect the answer to be something along the lines of monkey-patching a specific user's permissions on container create. I'm going to go along that route for now and see where it takes me. EDIT-2 I found where the user/file capabilities can be set. I see I have a lot of reading to do, but I think that a specific permission (capability) can be given to a file (which will be executable in this case). From the capabilities manpage: Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled. Capabilities are a per-thread attribute. So, it should be possible to apply a specific capability (whose flag is found in the above header) to either a file or a user. Which one, I'm not sure yet, but it's getting pretty fun and deep to find this out. EDIT-3 As @sourcejedi has pointed out, I have misinterpreted my needs. The necessary information is in man limits.conf, in which one may run a process at a specified user level, in this case root. EDIT-4 Unfortunately the route I have taken from EDIT-3 is also incorrect. This allows a specified user to run all processes at the specified user level. @sourcejedi is still correct; however, I will open a new question asking what I really need. Link: Run a single process that can fork and clone as a non-root-user
There is not a capability to specifically allow calling clone() with the CLONE_NEWUTS flag. CLONE_NEWUTS is used to create and enter a new "UTS namespace". All of the namespace types require CAP_SYS_ADMIN to create, with one exception: the upstream Linux kernel allows unprivileged users to create and enter a new user namespace. When you create a user namespace, you can allow yourself root / full capabilities inside that namespace, including CAP_SYS_ADMIN. If your system supports this, you can see it with unshare -r. It opens a root shell in a new user namespace. The intended method for unprivileged users to use namespaces was inside a new user namespace. However, some Linux distributions configure the kernel to disallow this feature. CAP_SYS_ADMIN is used as a catchall for anything without a more specific capability. It is far too powerful. You should assume it can be used to take over other programs and hence gain any other capability.[1] The default configuration of the Docker daemon does not place any container inside a new user namespace. As a result, you should assume that explicitly granting CAP_SYS_ADMIN to a docker container allows it to escape the container with full privileges.[1] If unprivileged users could create all the types of namespace directly, it would have raised issues where namespaces could be used to confuse a setuid program into performing privileged actions that it was not supposed to. Cross-reference: "Why does unsharing mount namespace require CAP_SYS_ADMIN?" The other option is to use a helper executable with setuid/capabilities which only allows a specific task. Like how sudo can be configured to allow running specific privileged commands only. This is the approach taken by bubblewrap, which is used by FlatPak. The bubblewrap README also provides some references about the security concerns which caused Linux distributions to restrict user namespaces.
I think this story overlaps with the reasons that "Docker in Docker" is not really supported / is not possible without disabling important security features in the main Docker daemon. Although it is not quite the same. [1] For example, CAP_SYS_ADMIN is the capability used to mount block filesystems, which kernel developers consider are not possible to reliably secure against malicious FS images. Inside a new user namespace, CAP_SYS_ADMIN does not allow you to mount block filesystems. But if you created a new mount namespace as well - e.g. unshare -rm - CAP_SYS_ADMIN will allow you to create bind mounts, mount the proc filesystem, and in kernel 4.18 or above you can mount FUSE filesystems. Docker also uses LSM-based security - SELinux or AppArmor - on systems where those are available. It's possible these layers could restrict CAP_SYS_ADMIN in some ways. This is much more obscure than Docker's other security layers. If you relied on the detailed workings of specific LSMs, that seems to defeat one of the points of building a convenient portable Docker container.
Allow a non-super-user run process to fork and clone itself (probably duplicate)
1,645,538,743,000
I am managing some folders on our server. The server runs: REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.6" I have created a directory in /srv/ and named it "test". I have added another directory called "archived" inside "test". I have created a project_managers user group and added some users to this group. project_managers have read/write/execute permission for /srv/test/ recursively. I want the project_managers to be able to move files from one directory to another, but not to be able to delete the files entirely from the server. For example, I want project_managers to have permission to move /srv/test/my_code.py to /srv/test/archived/ but not to have permission to delete /srv/test/my_code.py. I have done:

sudo chattr -R +i /srv/test/

It prevents both deleting and moving files. Is it possible to make files undeletable but moveable?
To answer your question: a move and a delete are pretty similar, so I think you need a different strategy. I would put the directory under version control, not only because it appears to be code. The advantage of a version control system is that you have file history. You could simply run a scheduled task, say every hour, that checks if changes were made, using, say, git, and commits the changes to version control. That way you can allow the people to move stuff, delete stuff, it does not matter what ... it is still in version control and retrieval is only a command away. The other thing with version control systems like git is that you have a backup mechanism integrated. You can pull any changes over to one or more other systems, and when I say changes, it is in fact complete file history ... including any change ever made. With text files, the overhead in storage is minimal; it only ever stores the changes that were made. The only thing the others are not allowed to touch is the version control system itself. If you use git, it will create a .git directory which contains all the important information ... you could also use other systems like Mercurial. If you really want to capture all changes as they are made, you could try gitwatch, which can watch a git repository, including all subfolders and files therein, and commit any change into git. Better even, gitwatch can send the changes as they happen to a git on another server (full backup, full history).
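A minimal sketch of the scheduled-snapshot idea, using a throwaway repository (in practice you would point this at /srv/test and run the commented cron snippet; the committer identity here is purely illustrative):

```shell
repo=$(mktemp -d)                      # stand-in for /srv/test
cd "$repo"
git init -q
git config user.email "snapshot@localhost"   # hypothetical identity for the demo
git config user.name  "Snapshot Bot"
echo 'print("hi")' > my_code.py

# the part you would schedule, e.g. hourly via cron:
# commit only when something actually changed
git add -A
git diff --cached --quiet || git commit -q -m "auto snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"

git rev-list --count HEAD              # the snapshot commit was recorded
```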
How to make files undeletable but moveable in linux server?
1,548,199,275,000
I am getting a "FastCGI sent in stderr: "Unable to open primary script: /var/www/mediawiki/index.php (No such file or directory)"" error when I enter my wiki address in a browser bar. Here is my PHP-FPM www.conf file:

[www]
user = nginx
group = nginx
listen = /var/run/php/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
;chroot =
;chdir =
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
php_value[soap.wsdl_cache_dir] = /var/lib/php/wsdlcache
php_value[opcache.file_cache] = /var/lib/php/opcache

Here is my nginx conf.d file:

# HTTP requests will be redirected to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name wiki.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS Configuration
server {
    listen 443 ssl;
    listen [::]:443;
    server_name wiki.example.com;

    root /var/www/mediawiki;
    index index.php;
    autoindex off;

    # SSL Certification Configuration
    ssl_certificate /etc/letsencrypt/live/wiki.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wiki.example.com/privkey.pem;

    client_max_body_size 5m;
    client_body_timeout 60;

    location / {
        try_files $uri $uri/ @rewrite;
    }

    location @rewrite {
        rewrite ^/(.*)$ /index.php?title=$1&$args;
    }

    location ^~ /maintenance/ {
        return 403;
    }

    # PHP-FPM Configuration NGINX
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        try_files $uri /index.php;
        expires max;
        log_not_found off;
    }

    location = /_.gif {
        expires max;
        empty_gif;
    }

    location ^~ ^/(cache|includes|maintenance|languages|serialized|tests|images/deleted)/ {
        deny all;
    }

    location ^~ ^/(bin|docs|extensions|includes|maintenance|mw-config|resources|serialized|tests)/ {
        internal;
    }

    # Security for 'image' directory
    location ~* images/.*.(html|htm|shtml|php)$ {
        allow all;
        types { }
        default_type text/plain;
    }

    # Security for 'image' directory
    location ^~ /images/ {
        allow all;
        try_files $uri /index.php;
    }
}

I feel like it is a permissions issue, or the php-fpm daemon is looking in a redundant file path or something. I tried passing an absolute path to FPM via the nginx conf.d file by doing:

fastcgi_param SCRIPT_FILENAME /var/www/mediawiki/index.php;

to no avail. So I know I'm pointing it in the right direction, but it still gives me the same error, which makes me believe I have a permissions issue. I've also tried setenforce 0, but this also doesn't work. I've chmod 777 the entire directory up to and including the index.php file.

Some background: I wanted to install a MediaWiki extension which required a newer version of PHP (7.0+), and I was running 5.4 since it came with the base install of CentOS 7. I wasn't familiar with how to update PHP, so I accidentally yum remove php*, installed php73 from remi, removed that, re-installed PHP 5.4, and finally figured out I could yum update with remi-php71.repo enabled to update my base packages. However, I lost my .conf and php.ini files in this process.

Edit: /var/log/nginx/error.log when I go to my website in a browser:

2019/01/22 16:58:19 [error] 10876#0: *1 FastCGI sent in stderr: "Unable to open primary script: /var/www/mediawiki/index.php (No such file or directory)" while reading response header from upstream, client: 10.11.190.1, server: wiki.example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php-fpm/php-fpm.sock:", host: "wiki.example.com"

/var/log/php-fpm/www-access.log:

- 22/Jan/2019:16:58:19 -0700 "GET /index.php" 404 /var/www/mediawiki/index.php /var/www/mediawiki /index.php /index.php
I fixed it, thanks to Christopher for pointing me in the right direction with his query about cgi.fix_pathinfo=0 in php.ini. His specific question did not fix the issue, but I continued to play around with settings near cgi.fix_pathinfo=0 in php.ini and was able to get a different error message with NO error from PHP-FPM complaining about not being able to open /var/www/mediawiki/index.php. This problem took me a solid 5 days to resolve. I REALLY appreciate the help, Christopher! I ended up commenting out the following lines in php.ini:

;cgi.fix_pathinfo=0
;user_dir=/var/www/mediawiki    (this is the one that changed the error message)

Once I changed that, I got an InvalidArgumentException, which was due to me not installing php-mysqlnd when I upgraded from 5.4 to 7.1. Once I installed that, bam, the wiki is back up and running. I feel like running around the building five times. Thanks again to Christopher for pointing me in the right direction! Jordan
PHP-FPM: 'No such file or directory' error from nginx/error.log. Path or permissions issue? [closed]
1,548,199,275,000
Hi, I have a directory /home/nvs-upload/media/ImageFtp/ and the ImageFtp directory has many subdirectories. The issue I have is that when a job runs and adds a new subdirectory, it creates the new directory with permission 755 and the files inside with 644. I would like new directories and files to automatically get permission 777, no matter the user or group. What command do I have to run to do this?
You can set directories to 777 and files to 666, respectively, by defining default values as ACL entries: setfacl -m default:u::rwx -m default:g::rwx -m default:o::rwx /home/nvs-upload/media/ImageFtp (use the -R option to recursively also apply this to already existing files and directories). Here we set the default values for the user, group and others to rwx. Please note that you: a) have to have the drive mounted with ACLs enabled (on standard UNIX FSs these are mostly activated by default these days) and b) you cannot make files executable by default as explained here and the discussions linked there.
Permissions for files and subdirectories inherited permanently for all users
1,548,199,275,000
It is known that most Linux systems have some sort of file permissions. But what is responsible for defining them? The operating system, the filesystem, or something else? Firstly, I thought that it is the filesystem (ext3, NTFS etc.). This is suggested by this Wikipedia article, as it uses phrases like "file system permissions". But surprisingly, the article also mentions that: Unix-like and otherwise POSIX-compliant systems, including Linux-based systems and all macOS versions, have a simple system for managing individual file permissions, which in this article are called "traditional Unix permissions". And that suggests that permissions are a thing managed by the operating system (at least on POSIX-compliant systems, whatever that might exactly mean). This is also suggested by this linfo article on file permissions. What is more, this Red Hat documentation on ACLs says that: The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. which would suggest that ACLs – that is, a kind of file permissions – are somehow defined in the Linux kernel. And I am confused about that.
Briefly: Let's cover the traditional permissions first. In a filesystem like ext2 and the successors, and also in the original Unix filesystem, there's a structure called an inode. It consists of a number of bytes that describe properties of a file, like where it is, how large it is, etc. The bytes that represent permissions have bits set that correspond to the permissions for the owner, the group, and the rest of the world. You can see this in ls -l, where the lowest bits directly correspond to the rwxr-xr-x etc. you see (so that would be the bit pattern 111101101). You can also see it in commands like chmod where you use this binary number in octal (so the groups of three bits each correspond to one digit). The permission bytes are interpreted by the file system driver in the kernel (basically, the kernel uses some C data structure that matches the inode data structure). So in that sense you can both say "the permissions are managed by the kernel" and "the permissions are stored in the file system". ACLs work similarly, except they are more flexible, and they use a more difficult representation, and a more difficult kernel API.
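The correspondence between the permission bits and the octal digits is easy to check with stat (GNU coreutils assumed; the file is a throwaway):

```shell
# Each octal digit encodes three permission bits (r=4, w=2, x=1)
f=$(mktemp)
chmod 754 "$f"           # 111 101 100  ->  rwx r-x r--
stat -c '%a %A' "$f"     # prints: 754 -rwxr-xr--
rm -f "$f"
```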
What is responsible for file permissions in a linux system?
1,548,199,275,000
I want to use one of Drupal's syslog modules but place the log in the user's home directory so there are no permissions issues when the user wants to view or analyze the file. Is there a way a syslog configuration can be set up that way?
You can add a file to route wanted messages to a given user directory. E.g. create /etc/rsyslog.d/00-meuh.conf with

if ($msg contains "testing") then {
    action(type="omfile" file="/home/meuh/logs/meuh-rsyslog" sync="on" fileCreateMode="0644" fileOwner="meuh")
    stop
}

then restart rsyslog and send a suitable message with

$ sudo systemctl restart rsyslog
$ logger 'meuh testing new logfile'

The file is created:

$ ls -l /home/meuh/logs/meuh-rsyslog
-rw------- 1 meuh root 50 Sep 23 17:10 /home/meuh/logs/meuh-rsyslog
$ cat /home/meuh/logs/meuh-rsyslog
Sep 23 17:10:22 home meuh: meuh testing new logfile
How can a syslog configuration be set to place a log in the users home directory?
1,548,199,275,000
My user account, samjaques, belongs to a group sams. I have two folders, both in the sams group. Folder 1 is owned by root, Folder 2 by samjaques. Both have permissions set as ---rwx---. From the terminal (running as samjaques and sams), I can only open Folder 1 but not Folder 2 (Folder 2 gives Permission denied). My guess is that the system is checking permissions of the user, then the group, then other, and denies permission if the user is denied without checking the group. Is this the expected behaviour, and is there a reason for it? In general, is it pointless/bad practice to have group permissions higher than user permissions?
Yes, if the EUID of the accessing process matches the owning user, only the user permissions are checked. If not, but the process's GIDs match the owning group, then the group permissions are checked. Otherwise the "other" permissions are used. The ball stops at the first identity that matches. It doesn't make much sense for the user to have less access than the group, since usually the owning user could just change the permissions and give themselves whatever access they like. (barring stuff like SELinux etc.) But in the case of group vs others, it can sort of make sense: you can deny access to a particular group, while allowing it to everyone else. E.g. for a file owned by someuser:somegroup, with permissions rw----r--, members of somegroup can't access it, but anyone not a member of somegroup can read the file.
Permissions Based on Lowest Level
1,548,199,275,000
I am using the Raspberry Pi 3. I want to modify /etc/security/limits.conf (to increase the limit of open files), but when I try to save the file after modification, it gives the error [Can't open file to write].
Since the file limits.conf is only writable as the root user, you must launch nano as root using sudo. For example: $ sudo nano /etc/security/limits.conf
Can't open file to write
1,548,199,275,000
In Solaris 9 (5.9) I fail to mkdir as user builder; the user is in the group defined as owner for that path.

bash-2.05$ groups builder
other root sys
bash-2.05$

and this is the file structure:

bash-2.05$ ls -la / | grep opt
lrwxrwxrwx 1 root other 16 Apr 14 2008 opt -> /export/home/opt
bash-2.05$
bash-2.05$ ls -la /export/home/ | grep opt
drwxr-xr-x 13 root other 512 Jan 24 11:49 opt
bash-2.05$

builder belongs to the other group, so why does it fail to mkdir in /opt?

bash-2.05$ pwd
/opt
bash-2.05$ mkdir dire
mkdir: Failed to make directory "dire"; Permission denied
bash-2.05$
The other group does not have write permissions in that directory. Write permissions are needed to create directory entries, such as files and subdirectories. To give the other group write permissions, as root do chmod g+w /export/home/opt
Solaris 9 Fail to mkdir - no permission
1,548,199,275,000
Browsing through fs/nfs/... but this wasn't obvious to me, so: if I try to write while being "above quota", to a file that doesn't belong to me, will I get EPERM or ENOSPC? Another way to phrase this is: for an inode write, which comes first, the check for permissions, or the check for quota?
You can only write to a file after you have opened it, and when you open it the permission checks are done. In theory one might argue that, for a request for a read-write file descriptor, the quota state might be checked; but as you need write access to truncate a file, and quota should never prevent space from being freed, I assume this is not the case. Thus, due to the order of open() and write(), the permission check should always come first.
Which comes first, for a write/create: permissions check or quota check?
1,548,199,275,000
I was following someone's answer and they said to:

user@User-pc ~ $ cd /var/lib/apt
user@User-pc /var/lib/apt $ sudo mv lists lists.old
user@User-pc /var/lib/apt $ sudo mkdir -p lists/partial

What commands can I do to completely undo these operations? Thanks!
cd /var/lib/apt

Removing the new lists/partial structure:

sudo rmdir lists/partial
sudo rmdir lists

These commands will complain if the directories are not empty. If that happens, you have likely done something else that you are not showing. There should now be no thing with the name lists in the current directory. Moving back lists.old to its former name:

sudo mv lists.old lists

This is assuming that there was no directory called lists.old to start with. If there was, then the lists thing is located inside that directory and has to be moved out of it:

sudo mv lists.old/lists ./
I need to reverse some file changes I just made
1,548,199,275,000
I'm trying to make my Linux system secure, so I'm considering whether I can remove the w permission from all binary files, such as ls, pwd etc. For now they are all -rwxr-xr-x root root; can I remove w for root as owner?
Executable files shouldn't need 'w' permission to run - that is the purpose of the 'x' permission. But, I don't think what you're trying to do is going to work. If someone can gain root access to your system, then they will have the power to do anything, regardless of whether the file owner has 'w' permission or not. The root user always has 'rw' access to all files on the system. There may be some things you can do to protect your data though: https://superuser.com/questions/698404/how-can-i-prevent-access-to-my-home-directory-from-another-root-user
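A quick sketch of the first point, using a throwaway script: removing every write bit does not stop execution.

```shell
# A script with no write bit anywhere still runs; only the x bit
# matters for execution (throwaway file under /tmp)
s=$(mktemp)
printf '#!/bin/sh\necho ran\n' > "$s"
chmod 555 "$s"           # r-xr-xr-x: nobody has w, not even the owner
"$s"                     # prints: ran
rm -f "$s"
```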
Do executable binary files need w permission
1,548,199,275,000
I have a simple script that I'm using to sync a test environment for a couple of developers. It doesn't need to be any more complex than just taking a mysql dump, checking the hash over SSH, and then, if changed, moving the dump to the new environment and undumping it. I've rewritten it to obscure sensitive information, but here is the gist of the script up to the point where I'm having the issue:

#!/bin/bash
mysqldump -h localhost testDB > dbPath/testdb.sql
hash1=$(md5sum dbPath/testdb.sql) | awk '{print $1}'
echo $hash1

When executing the script:

sudo ./testScript.sh

I see the mysqldump created with the permissions -rw-r--r-- 1 root root, which seems correct to me. However, as the script continues to the md5 hash, I get this:

./testScript.sh: line 5: dbPath/testdb.sql: Permission denied

When I execute the md5sum command from the shell (not in the script), it works fine, even from my normal user without using sudo. When I change into root and execute the command from the shell, it works correctly. When I run the script in any capacity (from my user account, sudo from my user account, or from root directly), I get the permission denied error on the md5sum line. I would think that somewhere my user account's permissions were bleeding over instead of root's permissions being used, except for the fact that the script cannot be executed by root from root's shell without also getting a permission denied error, and as far as I can tell, root shouldn't have permission denied to anything. As a test, I threw a whoami before and after the md5sum command and both commands output root as the user when executing with sudo or with root. Root and my user account clearly both have permission to execute md5sum on the file; how is the fact that this is running from a script making a change to my or root's abilities to execute commands or manipulate the file? Environment is RHEL 6.
The underlying problem is the parse error on this line: hash1=$(md5sum dbPath/testdb.sql) | awk '{print $1}' which should almost certainly be hash1=$(md5sum dbPath/testdb.sql | awk '{print $1}') You can check for errors like this at https://shellcheck.net
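The difference is easy to demonstrate. In the broken form the assignment is the left-hand side of a pipeline, so it runs in a subshell and the variable is never set in the parent shell at all (md5sum from GNU coreutils assumed; the file is a throwaway):

```shell
f=$(mktemp)
echo hello > "$f"

# broken: the assignment runs in a pipeline subshell, so the parent's
# $broken stays empty; awk gets no input either (the assignment prints nothing)
broken=$(md5sum "$f") | awk '{print $1}'
echo "broken=[$broken]"                    # broken=[]

# fixed: pipe INSIDE the command substitution
fixed=$(md5sum "$f" | awk '{print $1}')
echo "fixed=[$fixed]"                      # the bare 32-character hash
rm -f "$f"
```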
Root gets permission denied when executing commands from script but not from shell
1,548,199,275,000
I've got a samba shared directory which previously was setup normally but I noticed not being able to connect to it anymore. Turns out that permissions were reset to root and whenever I try to change it either with nautilus or with chown sambauser:sambashare directory it instantly resets permissions to root:root What's happening here and how do I change it? The directory path is /sharing/ and the permissions I want to set are sambauser:sambashare, it is also the samba server having this problem, not a samba client. My only guess is that it might be due to the root-filesystem that the directory is inside, but that's only a guess.
The problem was in the way I set up the automount options. I simply added uid and gid to my configuration and it worked again. It used root because the directory was inside the root filesystem and nothing else was provided. Telling the configuration which user and group to use made it accessible by said user or group. Always nice to solve my own problems after enough digging.
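For reference, a sketch of what such an entry can look like. This applies to filesystems without native Unix ownership (e.g. vfat or NTFS), where uid=/gid= set the apparent owner of everything on the mount; the UUID and numeric ids below are placeholders, not values from the question.

```
# /etc/fstab -- illustrative only; 1001/1002 stand for the ids of
# sambauser/sambashare, and the UUID is a placeholder
UUID=XXXX-XXXX  /sharing  ntfs-3g  uid=1001,gid=1002,umask=002  0  0
```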
Permission automatically resets to root after using chown
1,548,199,275,000
In my Fedora I have some additional HDD with partition mounted as /media/dilnix/data witch contains the most of my huge files sorted in folders like "Music", "Downloads", "Video" etc. Those folders are targets for my symlinks in home folder. Like /home/dilnix/@Video to /media/dilnix/data/Video /home/dilnix/@Downloads to /media/dilnix/data/Downloads etc. My last 2 entries of fstab are following: UUID=355ba039-6126-4c36-ba6a-8ff4f2ee79e8 /media/dilnix/data ext4 defaults,noatime,user 1 2 UUID=24dd893c-07dd-4f52-85c5-066773f74c0f /home ext4 defaults,noatime 1 2 The problem is when I trying to run some application or script from "Downloads" folder (and from deeper) I getting error like following: bash: ./mktool: permission denied Permissions of files for example script I have used: [dilnix@localhost mktool-master]$ ll -Z загалом 36 drwx------. 3 dilnix dilnix unconfined_u:object_r:user_home_t:s0 4096 чер 8 2015 . drwxrwxr-x. 3 dilnix dilnix unconfined_u:object_r:user_home_t:s0 4096 січ 16 11:38 .. -rwxr-xr-x. 1 dilnix dilnix unconfined_u:object_r:user_home_t:s0 18448 чер 8 2015 mktool -rw-rw-r--. 1 dilnix dilnix unconfined_u:object_r:user_home_t:s0 612 чер 8 2015 README.md drwx------. 2 dilnix dilnix unconfined_u:object_r:user_home_t:s0 4096 чер 8 2015 tools [dilnix@localhost mktool-master]$ getfacl mktool # file: mktool # owner: dilnix # group: dilnix user::rwx group::r-x other::r-x What the thing that I missed in my configuration to make my additional folders work as part of my home?? I tried to temporary disable SELinux, but it's not a reason because of error continue to appear.
From man mount, the user mount option implies noexec: user Allow an ordinary user to mount the filesystem. The name of the mounting user is written to the mtab file (or to the private libmount file in /run/mount on systems without a regular mtab) so that this same user can unmount the filesystem again. This option implies the options noexec, nosuid, and nodev (unless overridden by subsequent options, as in the option line user,exec,dev,suid). So you could remove the user option, or change the mount options to something like defaults,noatime,user,exec,suid.
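Applied to the data partition's fstab line from the question (UUID copied from there), the fix is just appending the overriding options after user, since later options win:

```
UUID=355ba039-6126-4c36-ba6a-8ff4f2ee79e8 /media/dilnix/data ext4 defaults,noatime,user,exec,suid 1 2
```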
Why permission denied from folder that is a symlink of a home's subfolder?
1,548,199,275,000
Permissions:

ls -al file
-rwxrwxr-x 1 root wheel

User group:

groups
wheel

If I do this:

sed -i'' -e '/Marker/i\'$'\n''text string'$'\n' file

I get an error:

sed: ../file: Permission denied

But at the same time I can read, write and execute this file, as shown in the permissions. Why is sed not working? I'm using the same user and the same file. Okay, the owner is root, but I have read and write permissions.

uname -a
FreeBSD srv 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
So the problem was with the parent path where the file is located. I think (I don't know exactly how sed works) sed tried to create a clone file to add the text string and couldn't because of the 755 path permissions. 775 solved the problem.
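That guess matches how -i actually works: sed writes a sibling temporary file and renames it over the original, which requires write permission on the containing directory. The rename shows up as a change of inode (GNU sed/stat and throwaway paths assumed here; on FreeBSD the stat flag would be -f %i):

```shell
d=$(mktemp -d)
echo 'Marker' > "$d/file"
before=$(stat -c %i "$d/file")               # inode before the edit
sed -i -e 's/Marker/text string/' "$d/file"
after=$(stat -c %i "$d/file")
echo "$before -> $after"                     # different inodes: sed made a new
cat "$d/file"                                # file and renamed it over the old one
rm -rf "$d"
```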
FreeBSD sed error - I have permissions, but "Permission denied"
1,548,199,275,000
I have come across a weird error with permissions with an external hard drive I attached to my server. I wanted to enable Transmission to download torrents to a folder on it, but discovered it was unable to create directories due to permission errors. I tested it myself and verified that the daemon, running as user transmission, can't create directories in a folder it owns with 755 permissions. I thought it might be some weird inode shenanigans, but an fsck came back clean and everything looks normal. matoro@matoro-server ~ $ ls -i /run/media/matoro/drive-data total 40 43253761 drwxr-xr-x 5 matoro matoro 4096 Apr 11 2017 backup 11796481 drwxr-xr-x 3 matoro matoro 4096 Oct 28 22:40 iso 37568568 drwxr-xr-x 2 matoro matoro 4096 Apr 23 2017 pending 42336296 drwxr-xr-x 3 matoro matoro 4096 Oct 25 01:26 podcasts 38141969 drwxr-xr-x 39 matoro matoro 12288 Sep 18 22:05 reading 37519377 drwxr-xr-x 3 transmission transmission 4096 Oct 30 17:10 seeding 37490784 drwxr-xr-x 4 matoro matoro 4096 Oct 30 17:09 videos 42336292 drwxr-xr-x 3 matoro matoro 4096 Oct 25 01:23 youtube matoro@matoro-server ~ $ ls -ia /run/media/matoro/drive-data/seeding total 912160 37519377 drwxr-xr-x 3 transmission transmission 4096 Oct 30 17:10 . 2 drwxr-xr-x 11 matoro matoro 4096 Nov 3 14:56 .. 37584902 drwxr-xr-x 3 transmission transmission 4096 Aug 10 2016 'some directory' 37488367 -rw-r--r-- 1 transmission transmission 430297088 Aug 14 2016 some_file matoro@matoro-server ~ $ sudo -u transmission mkdir -v /run/media/matoro/drive-data/seeding/test mkdir: cannot create directory ‘/run/media/matoro/drive-data/seeding/test’: Permission denied Here are the relevant mount options: /dev/sdc3 on /run/media/matoro/drive-data type ext4 (rw,nosuid,nodev,noexec,noatime,data=ordered,uhelper=udisks2) What could be causing this? Could it have something to do with ACLs?
The mkdir command must traverse the directory structure to find the existing directory /run/media/matoro/drive-data/seeding and then add an entry to it. The required permissions are: x permission on / x permission on /run x permission on /run/media x permission on /run/media/matoro x permission on /run/media/matoro/drive-data w and x permission on /run/media/matoro/drive-data/seeding (and of course they all must be directories, and the one you're creating must not already exist) I bet you're missing one of these (probably #4 or #5) If the process already had /run/media/matoro/drive-data/seeding as its current directory (which can happen if the ancestor directory permissions change after you enter the directory, or if the process switches uid) then it could mkdir test and succeed with only permission #6 (w and x on the current directory) while mkdir /run/media/matoro/drive-data/seeding/test would require all of the x permissions, even though it refers to the same location. When you use absolute paths, or relative paths with multiple components, there is an x permission check on every ancestor directory that you mention.
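util-linux's namei is handy for checking exactly this: it prints the mode of every path component, so an ancestor missing the x bit stands out immediately. Demonstrated on a throwaway tree; the real path to inspect would be /run/media/matoro/drive-data/seeding:

```shell
d=$(mktemp -d)
mkdir -p "$d/drive-data/seeding"
namei -l "$d/drive-data/seeding"   # one line per component: mode, owner, group
rm -rf "$d"
```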
Permissions issue on ext4 external drive
1,548,199,275,000
I used to copy my system/config files from the LVM-based /dev/mapper/devuan--vg-root + /dev/mapper/devuan--vg-home partitions onto external storage, and moved them to a fresh btrfs partition after installing Windows. Unfortunately, I forgot to copy the rest from the LVM-based /dev/mapper/devuan--vg-var + /dev/mapper/devuan--vg-tmp partitions.

Problems

After I set up and installed GRUB, I was able to boot the system, but the system stops loading at the following error message:

fsck from util-linux 2.27.1
/bin/fsck.btrfs: /dev/mapper/devuan--vg-home does not exist
/bin/fsck.btrfs: /dev/mapper/devuan--vg-tmp does not exist
/bin/fsck.btrfs: /dev/mapper/devuan--vg-var does not exist
fsck exited with status code 8

I got other errors as well, due to the missing /var/* folders for some services such as cron and exim4; at that point, I managed to create them manually, as well as copying the required files of the /var/lib/dpkg/* and /var/cache/dpkg/* folders from a Xubuntu live CD.

The only solution I found for the fsck errors is to touch /fastboot, but this only lasts until the next boot (i.e. it is not a permanent solution).

Questions

1. How can I permanently disable the check of the LVM partitions on boot (I mean uninstall LVM completely)?
2. What tool would you suggest for backing up and restoring the system and user data from LVM partitions more efficiently in the future?
You've failed to restore the file ownerships and permissions for the OS files. I'm quite impressed the system boots and allows root to log in.

If you took good backups you should be able to wipe and restore properly. Otherwise you'll need to reinstall from scratch and then restore the files from your home directory. It should theoretically be possible to reapply the deb packages you already have installed, but without the package database in /var that's next to impossible.

To answer the specific questions you've added:

1. Reinstall or restore from a known-good backup. You don't have the backup, so that leaves you only one choice.
2. You have no database of installed packages, so you would have to pick off the LVM tools (programs, libraries, configuration files) one by one. See #1.
3. There are many options. Here are a few:
   - rsnapshot and its dependency rsync
   - tar
   - duplicity and duplicati
   - Veeam Agent - Free (commercial software but zero financial cost). I use this professionally and at home. I am not affiliated with Veeam.

Next time I'd suggest you install local tools under either /usr/local/ (for example /usr/local/bin/wais) or /opt. You can then copy them off to a new system trivially.
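Whichever tool you pick, verify that it actually preserves ownerships and permission bits before trusting it with a full restore. A quick self-contained check of tar's -p flag (the paths are illustrative; add --xattrs and --acls if you rely on those):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
printf 'secret\n' > "$tmp/src/key"
chmod 640 "$tmp/src/key"                      # a non-default mode to test with

tar -C "$tmp/src" -cpf "$tmp/backup.tar" .    # -p: record the permission bits
mkdir "$tmp/restore"
tar -C "$tmp/restore" -xpf "$tmp/backup.tar"  # -p on extract too (default for root)

orig_mode=$(stat -c '%a' "$tmp/src/key")
rest_mode=$(stat -c '%a' "$tmp/restore/key")
echo "original=$orig_mode restored=$rest_mode"
rm -rf "$tmp"
```

If the two modes differ after a round trip like this, the backup would have left you in exactly the broken-permissions state described above.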
Debian: I'm unable to log in + can't boot properly after system backup and restore
1,548,199,275,000
I'm running Debian 9.1 with KDE and used BackInTime (which uses rsync) to back up my files. Now I would like to create a changelog / diff of changes to permissions of files. I'd like to know this for two reasons: security, and manually restoring permissions if there is no built-in way to do so. How can this be done?
You can use:

rsync -ani source destination

If there are file permission changes, you will see output like:

rsync -ani 1 new/
.f...p..... 1

The flags identify what changed about the file: f stands for (regular) file, and p stands for permission changes.
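A self-contained way to see this end to end (guarded in case rsync isn't installed; the directory names are made up):

```shell
if command -v rsync >/dev/null 2>&1; then
    tmp=$(mktemp -d)
    mkdir "$tmp/src" "$tmp/dst"
    echo data > "$tmp/src/1"
    chmod 644 "$tmp/src/1"
    rsync -a "$tmp/src/" "$tmp/dst/"       # the "backup"
    chmod 600 "$tmp/src/1"                 # change only the permissions
    change=$(rsync -ani "$tmp/src/" "$tmp/dst/" | grep '^\.f')
    rm -rf "$tmp"
else
    change='.f...p..... 1'                 # what rsync would report here
fi
echo "$change"
```

Only the p position of the itemize string is set, because nothing but the mode differs between source and destination.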
Is it possible to view changes of permissions by comparing files to a backup?
1,548,199,275,000
Example: in Debian, if a user wants to have access to journalctl without using root credentials, he must be added to the systemd-journal group. /bin/journalctl is owned by user root and group root, so how does this work? How does the systemd-journal group have access, and how can these permissions be edited? I am not talking about permissions on files and folders, but maybe it comes down to that.
A program runs as the user and group(s) that invoke it. The ownership of the program executable file is irrelevant. (The exception is if the executable file has the setuid or setgid bit set, but this only concerns a few programs which run with elevated privileges, of which journalctl is not one.)

Anybody can run /bin/journalctl, just like anybody can run /bin/ls. However, not everybody can run /bin/journalctl usefully: you need to have access to the files that journalctl accesses, just like running ls somedirectory requires the permission to access somedirectory. In the case of journalctl, the relevant files are under /var/log/journal. See Where is “journalctl” data stored? for more details.

You should not change the permissions of any of the files involved. Since you don't know exactly what you're doing, you're likely to break something. If you want to give a user read access to the logs, add them to the systemd-journal group; that's what it's for.
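The point that the executable's ownership is irrelevant can be checked directly with any root-owned binary; here /bin/ls stands in for /bin/journalctl (GNU stat assumed):

```shell
owner=$(stat -c '%U' /bin/ls)     # who owns the executable file
perms=$(stat -c '%A' /bin/ls)     # its mode string; no 's' means no setuid/setgid
runner=$(id -un)                  # the user any ls process actually runs as
echo "owned by $owner ($perms), but runs as $runner"
```

journalctl is the same: the binary is root-owned but not setgid, so access control happens entirely at the journal files under /var/log/journal.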
How does a group have access to something?
1,548,199,275,000
I had installed CentOS as a guest operating system in VirtualBox. I have mostly been experienced with Ubuntu, and CentOS, though similar, has some differences. I was trying to mount a Windows shared folder named vmshare-windows. For this I first tried to create a folder under /mnt named vmshare where I could mount the vmshare-windows folder. But when I invoke mkdir to do this I get the following error:

mkdir: cannot create directory ‘vmshare’: Permission denied

As my user was already added to the wheel group, I could sudo and create the folder. Now when I try to run the mount command without sudo, I get the same permission denied errors. I then checked that the user and group of the created vmshare folder are both root, so I have to sudo again to mount.

Now the issue is that whenever I have to modify anything in the mounted folder I have to sudo, which is defeating the whole purpose of my user, who basically should have administration privileges. So I then changed the account type of my logged-in user to Administrator and restarted my system. However, without sudo I still cannot mount or modify anything in the mounted folder. I then added my logged-in user to the root group, but the result is still the same: I must use sudo.

So the question is: what do I have to do to make sure I can mkdir/mount/unmount/modify inside the /mnt folder without resorting to sudo each and every time?
Use uid and gid options for mount: mount -t vboxsf -o gid=33,uid=33 vmshare-windows /mnt/vmshare-windows
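If you want the share mounted automatically with the right ownership, a hypothetical /etc/fstab entry could look like the following; the uid/gid values, mount point, and share name here are examples only, so substitute the output of id -u and id -g for your own account:

```
# /etc/fstab -- illustrative vboxsf entry, adjust uid/gid to your user
vmshare-windows  /mnt/vmshare  vboxsf  uid=1000,gid=1000,nofail  0  0
```

With the files owned by your uid/gid, no sudo is needed to modify them afterwards.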
User and Group permissions for mnt folder and files access in CentOS 7
1,548,199,275,000
I encountered a strange Linux cron job permission error on Amazon Linux and couldn't find related information on the web.

1. First I create a cron job as a regular user:

[newuser@node1 home]$ crontab -e
no crontab for newuser - using an empty one
crontab: installing new crontab

2. Then I try to read the cron job file: permission denied.

[newuser@node1 home]$ cat /var/spool/cron/newuser
cat: /var/spool/cron/newuser: Permission denied

It is strange, as newuser is the owner of that file, so why "permission denied"?

Logged in as root:

[root@node1 home]# cd /var/spool/cron
[root@node1 cron]# ls -l
-rw------- 1 newuser newuser 47 Aug 3 08:28 newuser
[root@node1 cron]# cat /var/spool/cron/newuser
1 1 1 * * /usr/bin/php /tmp/scheduleJob.php
[root@node1 spool]# ll -d /var/spool/cron
drwx------ 2 root root 4096 Aug 3 08:28 /var/spool/cron
On Linux systems, to access a file you also need permission to traverse all the directories in its path (the execute bit in Unix permissions). In your case, /var/spool/cron has permissions rwx------ and is owned by root; therefore you cannot traverse into the directory as any user other than root, and you get a "Permission denied" error when trying to access contents within, even for a file you own.
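You can reproduce this as an ordinary user with a directory you own: being the owner of the file doesn't help once the containing directory loses its x bit (root bypasses this, so run as a normal user):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/spool"
echo '1 1 1 * * /usr/bin/true' > "$tmp/spool/newuser"
chmod 600 "$tmp/spool/newuser"        # we own it and may read it...
chmod 600 "$tmp/spool"                # ...but the directory now lacks x

if cat "$tmp/spool/newuser" >/dev/null 2>&1; then
    readable=yes                      # root ignores the missing search bit
else
    readable=no                       # expected for any regular user
fi
echo "readable=$readable"

chmod 700 "$tmp/spool"                # restore x so the tree can be removed
rm -rf "$tmp"
```

This mirrors /var/spool/cron exactly: the crontab file is 600 and owned by newuser, but the 700 root-owned directory above it blocks the lookup.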
Linux cron job file permission
1,548,199,275,000
A btrfs disk is mounted like this:

/dev/sdb /mnt/disk1 btrfs noexec,nofail,defaults,compress-force=lzo 0 0

disk1 is shared via CIFS with 640 permissions. I can't launch any application/script because of the permissions and the noexec mount option. But when I map this share in Windows, I can change permissions (right-click on the file -> Properties -> Security tab) and add executable permission, and that is all right because I am the owner of the file being changed. What I can't understand is why, from then on, I can launch an exe file (the Windows application will launch) that lives on a noexec btrfs filesystem?

Debian 9 with btrfs-progs 4.7
The noexec flag only applies to the OS which is using that fstab entry to mount the relevant partition. Windows does not use fstab and indeed doesn't care about such flags.
Debian filesystem permissions
1,548,199,275,000
The preconditions are somewhat complex, so here is the context:

- There is a program that is launched through a shell script;
- The shell script launches the program through a .jar file;
- I want this program to store its cache in a /var/opt/ subfolder, say /var/opt/program/;
- The cache content should not be accessible directly, only through the program;
- The program should be available to all local users of the system.

The main idea I came up with is to create a group that has all the required permissions to read and edit the content of the cache folder. I don't want to add users to the group manually, so I was looking for alternative options. If I understand it right, the setgid bit should suit me perfectly. My understanding is as follows:

- A folder with the setgid bit set forces all its content to have the same owner group as the folder, and forces all its subfolders to follow the same rules. In the meantime, it doesn't provide any extra permissions to those not included in the folder's owner group; that is, a folder with mask drwxrws--x does not allow others to edit and read its content.
- A program with the setgid bit set always runs on behalf of the owner group, allowing users to perform actions respecting the group permissions.

My steps were as follows:

1. I created a group mygroup;
2. I created a folder /var/opt/program/ and set its owner group to mygroup;
3. I set the setgid bit on the folder;
4. I set the program shell script's owner group to mygroup;
5. I set the setgid bit on the shell script.

The problem is that the program is not able to create and edit files in the cache folder when launched by a user without root permissions. Any advice will be appreciated.

UPD Environment: OS: Ubuntu 16.04.2 LTS
"A folder with the setgid bit set forces all its content to have the same owner group as the folder"

Yep. Newly created files get the group of the directory; the group of a file can of course be changed afterwards. (We call them directories, not folders.)

"A program with the setgid bit set always runs on behalf of the owner group"

Yep. Though you don't need setgid on the directory if you're only creating files through a setgid binary. A setgid program gets the group in question as its primary group, so any files it creates are owned by that group by default (and not by the calling user's group).

"I set the setgid bit on the shell script"

This is the part that doesn't work. Most systems don't respect setuid and setgid bits on interpreted scripts, since that easily leads to a number of security issues. What you need to do is either write a C program wrapper that executes the script and make the wrapper setgid, or (preferably) use something like sudo to allow your users to run the script with the rights of another group. (sudo already deals with things like cleaning up environment variables that might be problematic.) In both cases, make sure the script and its interpreter are in directories that the users can't modify.

For sudo, the required configuration (in /etc/sudoers) would be something like this:

username ALL=(:privgroup) /path/to/script

That would allow user username to run /path/to/script as the group privgroup. You could use %groupname instead of username to allow all members of groupname to run the script. The users will need to run the script using

sudo -g privgroup /path/to/script

because sudo by default tries to run the named command as root, and we didn't allow that. But you can write a wrapper for that command.
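The directory half of the scheme does work and is easy to verify. A small sketch; it picks whatever group the calling user already belongs to as a stand-in for mygroup:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/cache"
grp=$(id -Gn | awk '{print $NF}')   # any group we belong to, standing in for mygroup
chgrp "$grp" "$tmp/cache"
chmod 2770 "$tmp/cache"             # drwxrws---: the s is the setgid bit

touch "$tmp/cache/entry"            # a new file created inside the directory
dir_group=$(stat -c '%G' "$tmp/cache")
file_group=$(stat -c '%G' "$tmp/cache/entry")
echo "dir=$dir_group file=$file_group"
rm -rf "$tmp"
```

The new file inherits the directory's group regardless of the creator's primary group; it is only the setgid-on-the-script step that silently does nothing.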
How to set setgid bit for a program to allow it change its group folder?