date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,456,433,541,000 |
Every time I do git pull or git reset, git resets changes to permissions and ownership I made. See for yourself:
#!/usr/bin/env bash
rm -rf 1 2
mkdir 1
cd 1
git init
echo 1 > 1 && git add 1 && git ci -m 1
git clone . ../2
cd $_
chmod 0640 1
chgrp http 1
cd ../1
echo 12 > 1 && git ci -am 2
cd ../2
stat 1
git pull
stat 1
The output:
$ ./1.sh 2>/dev/null | grep -F 'Access: ('
Access: (0640/-rw-r-----) Uid: ( 1000/ yuri) Gid: ( 33/ http)
Access: (0664/-rw-rw-r--) Uid: ( 1000/ yuri) Gid: ( 1000/ yuri)
Is there a way to work around it?
I want to make some files/directories accessible for writing by the web server.
|
This sounds like the account you're running as has its default group set to yuri. You can confirm this like so:
$ id -a
uid=1000(saml) gid=1000(saml) groups=1000(saml),10(wheel),989(wireshark)
The UID of your account is the first field, uid=1000(saml), the default group is gid=1000(saml), and any secondary groups follow.
NOTE: If you want the git clone to have specific ownership, then you have at least 2 options.
Option #1
Set a parent directory with the permissions as you want like so:
$ mkdir topdir
$ chgrp http topdir
$ chmod g+s topdir
$ cd topdir
$ git clone ....
This forces topdir to apply the group http to any child files and directories created underneath it. This works by and large, but can lead to problems: if you move existing files into this git clone workspace, those files will not have their group changed by the setgid bit.
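The setgid mechanism can be sketched without root by reusing your own primary group (the directory names and the group here are stand-ins for the actual topdir/http setup):

```shell
# A minimal sketch of Option #1: children created under a setgid
# directory inherit its group. We use our own primary group since
# an "http" group and root access may not be available here.
cd "$(mktemp -d)"
grp=$(id -gn)              # our primary group stands in for "http"
mkdir topdir
chgrp "$grp" topdir
chmod g+s topdir           # setgid bit: children inherit the group
mkdir topdir/child         # a new subdirectory...
touch topdir/child/file    # ...and a file created beneath it
ls -ld topdir topdir/child # both show "s" in the group execute slot
```

New subdirectories also inherit the setgid bit itself, which is what keeps the scheme working several levels deep.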
Option #2
Prior to doing work, change your default group to http like so:
$ newgrp http
$ git clone ...
This method will force any new files created to have their group set to http instead of your normal default group of yuri, but this will only work so long as you remember to do a newgrp prior to working in this workspace.
Other options
If neither of these seem acceptable you can try using ACLs instead on the git workspace directory. These are discussed in multiple Q&A's on this site, such as in this Q&A titled: Getting new files to inherit group permissions on Linux.
| Is there a way to prevent git from changing permissions and ownership on pull? |
1,456,433,541,000 |
I have a shell (PHP) script that operates on the target file this way:
inspects whether the file and directory are writable with PHP's is_writable() (I don't think this is the problem)
does in-place file edit with sed command:
grep -q "$search" "$passwd_file" && { sed -i "s|$search|$replace|" "$passwd_file"; printf "Password changed!\n"; } || printf "Password not changed!\n"
As a result everything else is correct, but the file that was myuser:www-data ends up as myuser:myuser.
Does sed change the file's group ownership, as it seems to, and how do I avoid that, if possible?
|
There is a little problem with sed's in-place editing mode -i. sed creates a temporary file in the same directory called sedy08qMA, where y08qMA is a randomly generated string. That file is filled with the modified contents of the original file. After the operation, sed removes the original file and renames the temporary file to the original filename. So it's not a true in-place edit: it creates a new file, with the permissions of the calling user and a new inode number. That behavior is usually harmless, but, for instance, hard links get broken.
However, if you want true in-place editing, you should use ed. It reads commands from stdin and edits the file directly, without a temporary file (the work is done in ed's memory buffer). A common practice is to use printf to generate the command list:
printf "%s\n" '1,$s/search/replace/g' wq | ed -s file
The printf command produces output as follows:
1,$s/search/replace/g
wq
Those two lines are ed commands. The first one searches for the string search and replaces it with replace. The second one writes (w) the changes to the file and quits (q). -s suppresses diagnostic output.
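The hard-link breakage mentioned above is easy to reproduce. This sketch (filenames invented) shows sed -i allocating a new inode and leaving an existing hard link behind with the old contents:

```shell
# Demonstrate that sed -i replaces the file rather than editing it:
# the inode changes and a hard link keeps the old data.
cd "$(mktemp -d)"
printf 'secret\n' > passwd_file
ln passwd_file passwd_link          # hard link: same inode, same data
stat -c %i passwd_file passwd_link  # identical inode numbers
sed -i 's/secret/changed/' passwd_file
stat -c %i passwd_file passwd_link  # now they differ
cat passwd_link                     # still prints "secret"
```

An ed-based edit of the same file would leave both names pointing at the same, updated inode.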
| Sed with inplace editing changes group ownership of file |
1,456,433,541,000 |
This is the first time su was required for me.
I read an article about changing the value in /sys/devices/virtual/backlight/acpi_video0/brightness to alter my laptop's screen brightness.
I first noticed that when I would $ sudo echo 10 > brightness I would get permission denied.
I switched to root using su and # echo 10 > brightness changed my brightness almost instantly.
The last weird thing happened when I tried # echo 20 > brightness (the max_brightness file holds the value 15) and I got a write error.
Could someone explain this difference between sudo and su to me? Understanding the write error would be an added bonus. Any help, pointers, and/or links would be much appreciated.
|
Redirection does not work that way: the > redirection is performed by your own shell, running as you, before sudo even starts, so opening brightness fails with permission denied. (The later write error as root is a separate matter: 20 exceeds the limit in max_brightness, so the kernel rejected the value.) Do it with tee:
echo 20 | sudo tee /sys/devices/virtual/backlight/acpi_video0/brightness
or by invoking the command in a separate privileged shell:
sudo bash -c "echo 20 > /sys/devices/virtual/backlight/acpi_video0/brightness"
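The tee approach works because tee itself opens the file, so under sudo the open happens with root privileges. A sketch of the mechanism on an ordinary file (no root involved):

```shell
# tee opens the named file itself and copies stdin into it; when
# prefixed with sudo, that open would run with root privileges.
cd "$(mktemp -d)"
echo 20 | tee brightness >/dev/null
cat brightness    # the value landed in the file
```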
| sudo permission denied but su grants permission |
1,456,433,541,000 |
Is there an ability to not allow sudoers view your folder (only you and root can access folder)?
|
Assuming that by “sudoers” you mean people who are allowed to run commands as root with the sudo prefix, because they are mentioned in the sudoers file through a line like bob ALL=(ALL) ALL, then these people are root. What defines being root isn't knowing the password of the root account, it's having access to the root account through whatever means.
You cannot protect your data from root. By definition, the root user can do everything. Permissions wouldn't help since root can change or bypass the permissions. Encryption wouldn't help since root can subvert the program doing the decryption.
If you don't trust someone, don't give them root access on a machine where you store your data. If you don't trust someone who has root access on a machine, don't store your data on it.
If a user needs root access for some specific purpose such as comfortably administering an application, installing packages, etc., then give them their own hardware, or give them their own virtual machine. Let them be root in the VM but not on the host.
| Protect folder from sudoers |
1,456,433,541,000 |
I know you can determine the owner of directory by doing:
ls -ld ~/foo | awk '{ print $3 }'
You could then compare it to the current user by doing this:
if [[ $(ls -ld ~/foo | awk '{ print $3 }') == "$USER" ]] # or $(id -u -n ) instead of $USER
then
echo "You are the owner"
else
echo "You are NOT the owner"
fi
But you can have permissions to write without being the owner. How do you determine this?
|
I suppose
if [ -w ~/foo ]; then ....
should do what you want.
Also, stat -c %U ~/foo is a better way to obtain the owner than parsing ls output.
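Putting both together, a small sketch (the directory name is arbitrary):

```shell
# Check write permission with -w and report the owner with stat.
cd "$(mktemp -d)"
mkdir foo
if [ -w foo ]; then
    echo "you can write to foo"
fi
echo "foo is owned by $(stat -c %U foo)"
```

Note that -w tests effective permission, so it also covers the group-write and ACL cases that ownership parsing misses.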
| Determine if the current user has write permission in a directory |
1,456,433,541,000 |
Is there any way I can list files by typing a command in the shell which lists all the file names, folder names and their permissions in CentOS?
|
Have a look at tree; you may have to install it first. By default tree does not show permissions; to show permissions next to each filename, run
tree -p
which will recursively list all files and directories within the current directory, including their permissions.
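If tree isn't available and you'd rather not install it, GNU find can produce a similar recursive listing (the -printf directives below are GNU extensions):

```shell
# Recursive listing showing symbolic mode, owner, group and path.
cd "$(mktemp -d)"
mkdir -p sub
touch sub/file
find . -printf '%M %u %g %p\n'
```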
| Recursively list files with file names, folder names and permission |
1,456,433,541,000 |
I have done this:
sudo chown -R myname /usr/
and now I can't use the sudo command because of this error:
sudo: must be setuid root
And as I read this means that the owner of this file /usr/bin/sudo is not the root. It's my user now because of the chown on the /usr folder.
On many forums and blogs people suggest to do this as root:
# chown root:root /usr/bin/sudo
# chmod 4111 /usr/bin/sudo
...but the problem with this is that I need to log in as root, which I can't do: if I type su in the terminal the password is rejected (I'm trying the password I set for my own user):
$ su
Password:
su: Authentication failure
So can I get back the sudo command?
Edit: My Ubuntu is under Paralells on my Mac OS X.
|
If you have a similar system that you can use as a guide to see what the correct ownership for all of the files is, then you can boot into rescue mode, drop to a root shell, and manually restore the correct ownership to all of the files in /usr.
The quickest way may be to reinstall your OS or restore from backup.
In Ubuntu and similar distributions, there is no root password by default (the account is disabled), which is why you can't su.
| How to get back sudo on Ubuntu? |
1,456,433,541,000 |
In the sudoers file, you can have either of the following lines
modernNeo ALL=(ALL:ALL) ALL
modernNeo ALL=(ALL) ALL
I looked at the following answers on here to understand this
Sudoers file, enable NOPASSWD for user, all commands
What is the difference between root ALL=(ALL:ALL) ALL and root ALL=(ALL) ALL?
Effect of (ALL:ALL) in sudoers?
What does "ALL ALL=(ALL) ALL" mean in sudoers?
Question 1
If I understand correctly from those above answers:
(ALL:ALL) means that you can run the command as any user and any group
(ALL) means that you can run the command as any user but your group remains the same [it remains your own group] - regardless of the user you become when you use sudo with ALL for the third entry?
Question 2
But with (ALL:ALL)
If you can run it as any group, how does sudo decide what group you run the command as if you don't specify it on the commandline using -g?
does it first try to run it as your own group and then go through a list of all the groups on your machine before finding the group that allows you to run the command?
Where does it get the list of groups from and what is the order of the groups on that list?
Or does it just revert to using root for user and/or group when your preference for what user and/or group you want to become isn't specified? If that is the case, why do (ALL:ALL) when you can do (root:root) ?
Question 3
Furthermore, in this Ubuntu Forums post, with regards to the following lines
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
They say that
Users in the admin group may become root. Users in the sudo group can only use the sudo command. For instance, they could not sudo su
(ALL:ALL) refers to (user:group) that sudo will use. It can be specified with -u and -g when you run sudo. If you don't specify anything it will run as root:root, which is the default. That's how most end up using it anyway.
That confuses me; they are stating that if you can take on any group when running a command, then you are unable to become root?
|
A line like:
smith ALL=(ALL) ALL
will allow the user smith to use sudo to run, on any host (the first ALL), as any user (the second ALL, the one inside parentheses), any command (the last ALL). This command will be allowed by sudo:
smith@site ~ $ sudo -u root -g root bash
But this won't:
smith@site ~ $ sudo -u root -g smith bash
as the permissions for ANY group have not been declared.
This, however:
smith ALL=(ALL:ALL) ALL
will allow this command to be executed (assuming user tom and group sawyer exist):
smith@site ~ $ sudo -u tom -g sawyer bash
tom@site ~ $ id
uid=1023(tom) gid=1087(sawyer) groups=1047(tom),1092(sawyer)
Having said that:
Q1
(ALL:ALL) means that you can run the command as any user and any group
Yes
(ALL) means that you can run the command as any user …
Yes
… but your group remains the same [it remains your own group]
No, the only group allowed is root.
Q2
how does sudo decide what group you run the command as if you don't specify it on the commandline using -g?
It defaults to root.
does it first try to run it as your own group and then go through a list of all the groups on your machine before finding the group that allows you to run the command?
No.
Where does it get the list of groups from and what is the order of the groups on that list?
There is no list to use.
As stated above, it simply falls back to the default, root, when only a user list such as (ALL) is given, or to the named group when (user:group) is used. Simple rules, simple actions.
Or does it just revert to using root for user and/or group when your preference for what user and/or group you want to become isn't specified?
Yes.
If that is the case, why do (ALL:ALL) when you can do (root:root) ?
Because with (ALL:ALL) you can do:
sudo -u tom -g sawyer id
But with (root:root) you can only do:
sudo -u root -g root id
and nothing else (user- and group-wise).
Q3
For these lines:
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
Users in the admin group may become root.
Yes, users in the admin group (%) could become ANY user (including root) (because of the (ALL)) but only the root group.
Users in the sudo group can only use the sudo command.
That is incorrect. The users in the sudo group (%) could execute any command (the last ALL).
Users in the sudo group (%) could become any user (the (ALL:) part) and any group (the (:ALL) part) AND may execute any command (the last ALL) (not only sudo, which is specifically incorrect).
For instance, they could not sudo su
No, they could do sudo su or sudo ls or sudo anycommand.
(ALL:ALL) refers to (user:group) that sudo will use.
It can be specified with -u and -g when you run sudo.
They are correct here. The command sudo -u tom -g sawyer ls is correct and valid.
If you don't specify anything it will run as root:root, which is the default.
They are correct here as well: the command sudo ls will be executed with root:root credentials (i.e., privileges).
That's how most end up using it anyway.
Correct: most sudo invocations don't specify either a user or a group, so the default root:root is indeed "how most end up using it anyway".
That confuses me... they are stating that if you can take on any group when running a command, …
Yes, they state that with (ALL:ALL) the sudo command could take any user or group.
And:
… then you are unable to become root?
No, that interpretation is incorrect.
With (ALL:ALL) ALL, sudo is allowed to become any user, with any group, to execute any command, even as root.
With (ALL) ALL, sudo loses the ability to set the group, but can still set the user and run any command (sudo su - is allowed).
| Trying to understand the difference between "modernNeo ALL=(ALL:ALL) ALL" and "modernNeo ALL=(ALL) ALL" in the sudoers file |
1,456,433,541,000 |
I tried "chattr +i DIRNAME", it's great, but I cannot create files in the DIR after chattr. What else are there to prevent users from deleting a Directory?
[root@HOST ~] mkdir test
[root@HOST ~] chattr +i test
[root@HOST ~] cd test
[root@HOST ~/test] touch sth
touch: cannot touch `sth': Permission denied
[root@HOST ~/test] cd ..
[root@HOST ~] chattr -i test
[root@HOST ~] cd test
[root@HOST ~/test] touch sth
[root@HOST ~/test]
UPDATE: only preventing deletion of the directory itself matters; files inside it must still be deletable.
|
So what is wrong with a simple chown/chmod?:
cd /tmp
mkdir question
sudo chown root:root question
[sudo] password for user:
chmod 777 ./question
touch sth
rm sth
cd ..
rm question -rf
rm: cannot remove `question': Operation not permitted
OK, let me tell you what is wrong with this: every user has all access to every file in the question directory due to the 777 permissions. It is better to
create a new group groupadd question
mkdir question
chown root:question ./question
chmod 770 ./question
add the users that must have access to the files to the new group: usermod -aG question user (note the -a; without it, -G replaces the user's existing supplementary groups)
The important trick here is that the directory has a different owner than any of the users that will try to delete it.
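The trick works because deleting a directory entry requires write permission on the parent, not on the entry itself. That rule can be demonstrated without root (directory names are invented; note the refusal only applies to non-root users):

```shell
# Removing outer/inner requires write permission on outer.
cd "$(mktemp -d)"
mkdir -p outer/inner
chmod 555 outer                         # parent: read + search, no write
rmdir outer/inner 2>/dev/null \
    && echo "removed" || echo "refused" # refused for a non-root user
chmod 755 outer                         # give write back to the parent
rmdir outer/inner 2>/dev/null || true   # now it goes away
```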
| How to prevent users from deleting a directory? |
1,456,433,541,000 |
I have 3 users A,B and C inside a group 'admin'. I have another user 'D' in whose home directory, there is a project folder. I have made D as the owner of that folder and assigned 'admin' as the group using chgrp. Group and owners have all the permissions, but still A,B or C are unable to access the folder. I have two question :
Is it even possible for other users to access anything in another user's directory
Giving rights to a group only makes the users in that group have access to files that are outside any user's home directory ?
Edit : Here is how I had set the owner and group of the project
sudo chown -R D project
sudo chgrp -R admin project
I got an error while trying to get into the project folder within D's home directory (while being logged in as A)
cd /home/D/project
-bash: cd: /home/D/project: Permission denied
Here is the output of ls -la command :
drwxrwsr-x 7 D admin 4096 Nov 18 13:06 project
Here is the description of the group admin :
getent group admin
admin_users:x:501:A,B,C
Also note that group admin is not listed when I type groups as user D, but it was visible when I used cut -d: -f1 /etc/group. The user I am referring to as D is actually ec2-user (the default Fedora user on Amazon servers)
Ultimately, I'm setting up a git repository on a server. I have created the repo in D's home directory, but wish A, B and C to have access to it too (and clone them)
|
Some points that seem to be necessary (though I freely admit that I am no expert in these matters), and that were not covered in RobertL's otherwise admirably thorough answer.
Make sure that the other users have actually logged into group admin:
A$ newgrp admin
Since the users are already in the group, I think you will not need to set a group password. If you do:
A$ sudo chgpasswd
admin:secret (typed into stdin)
Make sure that D's home directory is in group admin and is group-searchable.
D$ chgrp admin ~
D$ chmod g+x ~
D$ ls -lad ~
drwx--x--- 3 D admin 4096 Nov 24 19:25 /home/D
The directory needs to be searchable to allow users to enter it or its subdirectory project. It doesn't need to be group-readable, so D's own filenames are still private. To let the other users get to project easily, have them create symbolic links to it; otherwise, they'll have to type the whole path each time (autocompletion won't work because the shell can't read the directory's contents, it can only traverse it).
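The distinction between searchable and readable can be sketched with a single user and owner permissions alone (names invented; root would bypass these checks):

```shell
# An execute-only directory can be traversed to known names,
# but not listed.
cd "$(mktemp -d)"
mkdir private
echo hello > private/known
chmod 111 private                      # search only, even for the owner
cat private/known                      # reaching a known name works
ls private 2>/dev/null || echo "listing refused"
chmod 700 private                      # restore for cleanup
```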
| Can users in a group access a file that is in another user's home directory? |
1,456,433,541,000 |
I'm logged in remotely over SSH with X forwarding to a machine running Ubuntu 10.04 (lucid). Most X11 applications (e.g. xterm, gnome-terminal) work fine. But Evince does not start. It seems unable to read ~/.Xauthority, even though the file exists, and is evidently readable (it has the right permissions and other applications read it just fine).
$ evince
X11 connection rejected because of wrong authentication.
Cannot parse arguments: Cannot open display:
$ echo DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY
DISPLAY=localhost:10.0 XAUTHORITY=
$ strace evince
…
access("/home/gilles/.Xauthority", R_OK) = 0
open("/home/gilles/.Xauthority", O_RDONLY) = -1 EACCES (Permission denied)
…
$ ls -l ~/.Xauthority
-rw------- 1 gilles gilles 496 Jul 5 13:34 /home/gilles/.Xauthority
What's so special about Evince that it can't read ~/.Xauthority? How can I make it start?
|
TL,DR: it's Apparmor's fault, and due to my home directory being outside /home.
Under a default installation of Ubuntu 10.04, the apparmor package is pulled in as an indirect Recommends-level dependency of the ubuntu-standard package. The system logs (/var/log/syslog) show that Apparmor is rejecting Evince's attempt to read ~/.Xauthority:
Jul 5 17:58:31 darkstar kernel: [15994724.481599] type=1503 audit(1341503911.542:168): operation="open" pid=9806 parent=9805 profile="/usr/bin/evince" requested_mask="r::" denied_mask="r::" fsuid=1001 ouid=1001 name="/elsewhere/home/gilles/.Xauthority"
The default Evince configuration for Apparmor (in /etc/apparmor.d/usr.bin.evince) is very permissive: it allows arbitrary reads and writes under all home directories. However, my home directory on this machine is a symbolic link to a non-standard location which is not listed in the default Apparmor configuration. Access is allowed under /home, but the real location of my home directory is /elsewhere/home/gilles, so access is denied.
Other applications that might be affected by this issue include:
Firefox, but its profile is disabled by default (by the presence of a symbolic link /etc/apparmor.d/disable/usr.bin.firefox -> /etc/apparmor.d/usr.bin.firefox).
CUPS PDF printing; I haven't tested, but I expect it to fail writing to ~/PDF.
My fix was to edit /etc/apparmor.d/tunables/home.d/local and add the line
@{HOMEDIRS}+=/elsewhere/home/
to have the non-standard location of home directories recognized (note that the final / is important; see the comments in /etc/apparmor.d/tunables/home.d/ubuntu), then run /etc/init.d/apparmor reload to update the Apparmor settings.
If you don't have administrator privileges and the system administrator is unresponsive, you can copy the evince binary to a different location such as ~/bin, and it won't be covered by the Apparmor policy (so you'll be able to start it, but will not be afforded the very limited extra security that Apparmor provides).
This issue has been reported as Ubuntu bug #447292. The resolution handles the case when some users have their home directory as listed in /etc/passwd outside /home, but not cases such as mine where /home/gilles is a symbolic link.
| Evince fails to start because it cannot read .Xauthority |
1,456,433,541,000 |
I am just testing out a new Ubuntu (Vivid 15.04) install on Vagrant, and getting problems with mysql and logging to a custom location.
In /var/log/syslog I get
/usr/bin/mysqld_safe: cannot create /var/log/mysqld.log: Permission denied
If I ls -l /var I get
drwxrwxr-x 10 root syslog 4096 Jun 8 19:52 log
If I look in /var/log the file doesn't exist
I thought I had temporarily disabled apparmor to isolate whether it or something else was causing the problem, but I'm not sure it's actually disabled (edit: I think it may still be enabled, so I'm not sure whether this is an apparmor issue or simple permissions).
If I try manually creating the file as the mysql user I get denied as well (I temporarily gave that user a shell to test; I will remove it afterwards).
touch /var/log/mysql.log
touch: cannot touch ‘/var/log/mysql.log’: Permission denied
If I look at another running server (CentOS) it has the permissions above (and writes as the mysql user), so I'm wondering how mysql normally gets permission to write to the /var/log directory, and how I can get it to write to that folder during normal operation?
Here is my apparmor profile for mysql
/usr/sbin/mysqld {
#include
#include
#include
#include
#include
capability dac_override,
capability sys_resource,
capability setgid,
capability setuid,
network tcp,
/etc/hosts.allow r,
/etc/hosts.deny r,
/etc/mysql/** r,
/usr/lib/mysql/plugin/ r,
/usr/lib/mysql/plugin/*.so* mr,
/usr/sbin/mysqld mr,
/usr/share/mysql/** r,
/var/log/mysqld.log rw,
/var/log/mysqld.err rw,
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
/var/log/mysql/ r,
/var/log/mysql/* rw,
/var/run/mysqld/mysqld.pid rw,
/var/run/mysqld/mysqld.sock w,
/run/mysqld/mysqld.pid rw,
/run/mysqld/mysqld.sock w,
/sys/devices/system/cpu/ r,
/var/log/mysqld.log rw,
# Site-specific additions and overrides. See local/README for details.
#include
}
I also added the above file to the apparmor.d/disable directory.
Note: I added the line /var/log/mysqld.log rw, myself; it wasn't originally there, and the issue is the same (after doing an apparmor reload).
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/tcpdump
0 profiles are in complain mode.
1 processes have profiles defined.
1 processes are in enforce mode.
/sbin/dhclient (565)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 systemd[1]: Starting MySQL Community Server...
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: 150608 20:33:33 mysqld_safe Logging to '/var/log/mysqld.log'.
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: chmod: cannot access ‘/var/log/mysqld.log’: No such file or directory
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: 150608 20:33:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: /usr/bin/mysqld_safe: 126: /usr/bin/mysqld_safe: cannot create /var/log/mysqld.log: Permission denied
|
It seems to me that most people create a directory named mysql inside /var/log and change the owner of that directory to the mysql user:
sudo mkdir /var/log/mysql
sudo chown mysql:mysql /var/log/mysql
That should do it. Be sure to update the server's logging location and restart it. After you've tested re-enable mysql's apparmor profile.
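To point the server at the new directory, the log location is usually set in the MySQL configuration; the file path and option name below are common defaults, so verify them against your installation:

```ini
# e.g. in /etc/mysql/my.cnf (or a drop-in file under /etc/mysql/conf.d/)
[mysqld]
log_error = /var/log/mysql/error.log
```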
| Permission denied writing to mysql log |
1,456,433,541,000 |
I have access to a remote Linux machine where every time I create a symbolic link, it is created by default with the following permissions: lrwxrwxrwx
If I try to change the permissions of the symbolic link (i.e. not the path that it points to) using for example:
chmod g-w my_symbolic_link
chmod runs correctly (no error message is printed) but when I check the permissions again, they are still the same (lrwxrwxrwx).
I am waiting to hear from the machine administrator, but I was wondering if this is normal behavior, or if it is something specific to the box.
|
It's normal behavior. What happens can vary depending on the operating system (Solaris at least used to change the link permissions); but since a symlink isn't a normal file, the permissions don't actually get used for anything. (File permissions are part of the file's inode, so the symlink can't affect them.)
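This is easy to see in a scratch directory (filenames invented): chmod applied through a symlink alters the target, while the link itself keeps its lrwxrwxrwx mode:

```shell
# chmod dereferences symlinks: the target's mode changes,
# the link's own mode does not.
cd "$(mktemp -d)"
touch target
ln -s target my_symbolic_link
chmod 640 my_symbolic_link     # applied to target, not to the link
ls -l my_symbolic_link target
```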
| Symbolic link permissions don't change with chmod |
1,456,433,541,000 |
There's a situation I don't quite understand.
I have this directory, where the group 'webadmin' has rwx rights :
$ ls -la
total 8
drwxrwxr-x 2 root webadmin 4096 Aug 27 12:17 .
⋮ ⋮
I am in the group webadmin :
$ groups eino
eino : eino sudo webadmin
however, I can't create any file in the directory :
$ touch test.txt
touch: cannot touch 'test.txt': Permission denied
How come? Shouldn't the rwx permissions give me right to do it?
|
Log out and log back in. You probably added the group during your current session.
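You can confirm the mismatch before logging out: querying the group database shows the new membership, while id -Gn with no argument reports what your current session actually has:

```shell
# Compare this session's groups with the database entry
# for the current user.
echo "session : $(id -Gn)"
echo "database: $(id -Gn "$(id -un)")"
```

Until you start a new login session (or use newgrp), the session line will be missing the freshly added group.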
| Why can't I create a file in a directory where I have group write access? [duplicate] |
1,456,433,541,000 |
I have files created in my home directory with only user read permission (r-- --- ---). I want to copy these files to another directory, /etc/test/, which has directory permissions of 744 (rwx r-- r--). I need the file I am copying to inherit the permissions of the directory it is copied into, because so far when I copy it, the file's permissions stay the same (r-- --- ---). I have tried the setfacl command, but it did not work. Please help.
PS. I can't just chmod -R /etc/test/ because many files will be copied into this directory over time and I don't want to run chmod every time a file is copied over.
|
Permissions are generally not propagated by the directory that files are being copied into; rather, new permissions are controlled by the user's umask. However, copying a file from one location to another is a bit of a special case where the user's umask is essentially ignored and the existing permissions on the file are preserved. Understanding this concept is the key to getting what you want.
So to copy a file but "drop" its current permissions you can tell cp to "not preserve" using the --no-preserve=all switch.
Example
Say I have the following file like you.
$ mkdir -m 744 somedir
$ touch afile
$ chmod 400 afile
$ ll
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
And as you've confirmed if we just blindly copy it using cp we get this:
$ cp afile somedir/
$ ls -l somedir/
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
Now let's repeat this but this time tell cp to "drop permissions":
$ rm -f somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-rw-r--. 1 saml saml 0 Feb 14 15:21 afile
So the copied file now has its permissions set to 664, where did it get those?
$ umask
0002
If I changed my umask to something else we can repeat this test a 3rd time and see the effects that umask has on the un-preserved cp:
$ umask 037
$ rm somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-r-----. 1 saml saml 0 Feb 14 15:29 afile
Notice the permissions are no longer 664, but are 640? That was dictated by the umask. It was telling any commands that create a file to disable the lower 5 bits in the permissions ... these guys: (----wxrwx).
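The whole sequence can be condensed into one run (GNU cp is assumed for --no-preserve; the 644 result follows from 666 masked by umask 022):

```shell
# With --no-preserve=all the copy's mode comes from the umask,
# not from the source file.
cd "$(mktemp -d)"
umask 022
mkdir -m 744 somedir
touch afile
chmod 400 afile
cp --no-preserve=all afile somedir/
stat -c %a somedir/afile    # 644 = 666 & ~022
```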
| File inheriting permission of directory it is copied in? |
1,456,433,541,000 |
I want to create specific SFTP user which will have permissions to only read all folders and subfolders in /var/www/vhosts. Any help on this ?
|
Unix systems provide the chroot command which allows you to reset the / of the user to some directory in the filesystem hierarchy, where they cannot access "higher-up" files and directories.
However, in your case it would be appropriate to provide a virtual chroot implemented by the remote shell service: sftp can easily be configured to restrict a local user to a specific subset of the filesystem.
Hence, in your case, you want to chroot, say, user foo into the /var/www/vhosts/ directory.
You can set a chroot directory for your user to confine them to the subdirectory /var/www/vhosts/ like so in /etc/ssh/sshd_config:
Create user foo with password
sudo useradd foo
sudo passwd foo
Create for SFTP only group
$ sudo groupadd sftp_users
Add to a user foo for SFTP only group
$ sudo usermod -G sftp_users foo
Change the owner, so the directory permissions grant the user read-only access
sudo chown root:root /var/www/vhosts/
Add permission
sudo chmod 755 /var/www/vhosts/
Edit /etc/ssh/sshd_config
sudo vi /etc/ssh/sshd_config
Comment out and add a line like below
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp
Add at the last
Match Group sftp_users
X11Forwarding no
AllowTcpForwarding no
ChrootDirectory /var/www/vhosts/
ForceCommand internal-sftp
(NOTE : Match blocks need to be at the END of the sshd_config file.)
Restart ssh service
sudo service ssh restart
With this configuration you can sftp into the folder and get files, but cannot put or delete them.
To have sftp land in the right folder, edit /etc/passwd and change the line for user foo to look like this:
$ sudo vi /etc/passwd
..
foo:x:1001:1001::/var/www/vhosts/:
..
This will change user foo home folder to your sftp server folder.
| Read only permission to sftp user in specific directory |
1,456,433,541,000 |
When SCP'ing to my Fedora server, a user keeps getting errors about not being able to modify file timestamps ("set time: operation not permitted"). The user is not the owner of the file, but we cannot chown files to this user for security reasons. The user can sudo, but since this is happening via an SCP/FTP client, there's no way to do that either. And finally, we don't want to have to give this user root access, just to allow him to use a synchronization like rsync or WinSCP that needs to set timestamps.
The user is part of a group with full rw permissions on all relevant files and dirs. Any thoughts on how to grant user permission to touch -t these specific files without chowning them to him?
Further Info This all has to do with enabling PHP development in a single-developer scenario (i.e., without SCM). I'm trying to work with Eclipse or NetBeans on a local copy of the PHP-based (WordPress) site, while allowing the user to "instantly" preview his changes on the development server. The user will be working remotely. So far, all attempts at automatic synchronization have failed; even WinSCP in "watch folder" mode, where it monitors a local folder and attempts to upload any changes to the remote directory, errors out because it always tries to set the date/timestamp.
The user does have sudo access, but I have been told that it's really not a good idea to work under 'root', so I have been unwilling to just log in as root to do this work. Besides, it ought not to be necessary. I would want some other, non-superuser to be able to do the same thing - using their account information, establish an FTP connection and be able to work remotely via sync. So the solution needs to work for someone without root access.
What staggers me is how much difficulty I'm having. All these programs (NetBeans, Eclipse, WinSCP) are designed to allow synchronization, and they all try to write the timestamp. So it must be possible. WinSCP has the option to turn off "set timestamp", but this option becomes unavailable (always "on") when you select monitor/synchronize folder. So it's got to be something that is fairly standard.
Given that I'm a complete idiot when it comes to Linux, and I'm the dev "server admin" I can only assume it's something idiotic that I'm doing or that I have (mis)configured.
Summary In a nutshell, I want any users that have group r/w access to a directory, to be able to change the timestamp on files in that directory via SCP.
|
Why it doesn't work
When you attempt to change the modification time of a file with touch, or more generally with the underlying system call utime, there are two cases.
You are attempting to set the file's modification time to a specific time. This requires that you are the owner of the file. (Technically speaking, the process's effective user ID must be the owner of the file.²)
You are attempting to set the file's modification time to the current time. This works if and only if you have permission to write to the file. The reason for this exception is that you could achieve the same effect anyway by overwriting an existing byte of the file with the same value¹.
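Both cases can be exercised as the file's owner (a sketch with a throwaway file; the "Operation not permitted" failure of case 1 only appears when you are not the owner). The `stat -c` format flags are GNU coreutils:

```shell
# Sketch of the two utime cases, on a file we own
f=$(mktemp)
touch -t 202001011200 "$f"          # case 1: explicit time; allowed because we own the file
stat -c %y "$f" | cut -d' ' -f1     # 2020-01-01
touch "$f"                          # case 2: "now"; needs only write permission on the file
rm -f "$f"
```

Run as a non-owner with only group write access, the first `touch` would fail and the second would succeed, which is exactly the SCP symptom described in the question.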
Why this typically doesn't matter
When you copy files with ftp, scp, rsync, etc., the copy creates a new file that's owned by whoever did the copy. So the copier has the permission to set the file's times.
With rsync, you won't be able to set the time of existing directories: they'll be set to the time when a file was last synchronized in them. In most cases, this doesn't matter. You can tell rsync not to bother with directory times by passing --omit-dir-times (-O).
With version control systems, revision dates are stored inside files; the metadata on the files is mostly irrelevant.
Solutions
This all has to do with enabling PHP development in a single-developer scenario (ie: without SCM).
Ok, stop right there. Just because there's a single developer doesn't mean you shouldn't use SCM. You should be using SCM. Have the developer check in a file, and give him a way to press a “deploy” button to check out the files from SCM into the live directory.
There is absolutely no technical reason why you shouldn't be using SCM, but there may be a human reason. If the person working on these files styles himself “developer”, he should be using SCM. But if this is a non-technical person pushing documents in, SCM might be too complicated. So go on pushing the files over FTP or SSH. There are three ways this can work.
Do you really need to synchronize times? As indicated above, rsync has an option to not synchronize times. scp doesn't copy them unless you tell it to (with -p). I don't know WinSCP, but it probably has a similar option.
Continue doing what you're doing, just ignore messages about times. The files are still being copied. This isn't a good option, because ignoring errors is always risky. But it is technically possible.
If you need flexibility in populating the files owned by the apache user, then the usual approach would be to allow the user SSH access as apache. The easy approach is to have the user create an SSH private key and add the corresponding public key to ~apache/.ssh/authorized_keys. This means the user will be able to run arbitrary commands as the apache user. Since you're ok with giving the user sudo rights anyway, it doesn't matter in your case. It's possible, but not so easy, to put more restrictions (you need a separate user database entry with a different name, the same user ID, a restricted shell and a chroot jail; details in a separate question, though this may already be covered on this site or on Server Fault).
¹ Or, for an empty file, write a byte and then truncate.
² Barring additional complications, but none that I know of applies here.
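For what it's worth, the "deploy button" suggested above can be as small as a single checkout from a bare repository. A sketch, assuming git and with made-up paths (/srv/site.git, /var/www/site):

```shell
# Check the committed tree out into the live directory, overwriting local changes
git --git-dir=/srv/site.git --work-tree=/var/www/site checkout -f
```

The developer pushes to the bare repository over SSH; the deploy step runs the checkout, so the live directory's file ownership and timestamps never need to be synchronized at all.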
| User can't touch -t |
1,456,433,541,000 |
I am trying to create a file in a directory but I am getting:
`touch`: cannot touch ‘test’: Permission denied
Here are my commands:
[user@xxx api]$ ls -l
total 184
...
drwxrwxr-x 2 root root 4096 2016-04-12 14:38 public
..
[user@xxx api]$ cd ./public
[user@xxx public]$ touch test
touch: cannot touch ‘test’: Permission denied
|
You can't edit the contents of the public directory if you don't have write and execute access.
You indicate you are attempting to create a new file. If the test file doesn't already exist in public, touch will attempt to create a new file. It cannot do this without the write and execute permissions over the parent directory. Execute is required to traverse the directory; write is required to add the inode entry for the new file. Apparently, you don't have one or both of these permissions.
If the test file does already exist in public, touch will, by default, update the modification time of the file. Only write access to the file is required for this, as the modification date/time is stored in the file's inode. If the file already exists, you will need to inspect the file's permissions using a command like ls -l public/test to determine if you have write access.
The permissions bitmask on the directory, rwxrwxr-x, means:
the root user, i.e. the owner of the directory, has write privileges to the directory as indicated by the first rwx block. This user can also read the directory (the r bit) and traverse it to access its contents (the x bit).
members of the root group, i.e. the group on the directory, who are not themselves the root user, also have similar privileges to read, write and traverse the directory as indicated by the second rwx block
All other users only have read and execute rights, as indicated by the last r-x block. As noted, for directories, execute permissions allow you to traverse that directory and access its contents. See this question for more clarity on this.
How do I get permissions?
You will need to talk to your system administrator (which might be you!) to do one of the following:
Make you the owner of the public/ directory using a command like chown user public/. This will be suitable if you are the only user who will need to have edit rights.
Create a new group with a suitable name, perhaps publiceditors, and set this as the group on the public/ directory using a command like chgrp publiceditors public/. Ensure you and any other users who require the ability to modify the directory are listed as members of the group. This approach works where multiple users need edit capability.
Make your user account a member of the root group (not something I would recommend).
Provide you with access to log in or masquerade as root, such as with sudo or su with the root password
Change the rights on the directory to grant all users write permissions, using a command like chmod o+w public. Be aware that this gives everyone on the box the ability to edit and delete files in the public directory.* You may not want this!
*In the absence of other access control enforcement, such as mandatory access control in the kernel.
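The effect of the chmod o+w option can be checked with stat (a sketch on a throwaway directory; `stat -c %a` is GNU coreutils):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/public"
chmod 775 "$tmp/public"          # rwxrwxr-x, as in the question
stat -c %a "$tmp/public"         # 775
chmod o+w "$tmp/public"          # grant everyone write
stat -c %a "$tmp/public"         # 777
```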
What do read, write and execute permissions mean in the context of a directory?
Assuming you're on a Linux box, on a directory, a read permission bit allows you to read the directory listing. The write permission bit allows you to update the directory listing, which is required for creating a file*, editing the name of a file, unlinking (deleting) a file. The execute bit allows you to traverse the directory, access its files etc. More information on Linux directory permissions.
* Actually, you're linking a file into the directory. Most times you will do this at the point of file creation, but there are more complex examples. For example, making a hard link to a file which originally existed elsewhere in the file system will require write access to the target directory of the link, despite the fact you're not creating a new file.
Why write access to the directory?
You need to be able to write to the directory to add a reference to the relevant inode for the file you are adding.
| touch: cannot touch ‘test’: Permission denied |
1,456,433,541,000 |
It seems that Linux supports changing the owner of a symbolic link (i.e. lchown) but changing the mode/permission of a symbolic link (i.e. lchmod) is not supported. As far as I can see this is in accordance with POSIX. However, I do not understand why one would support either one of these operations but not both. What is the motivation behind this?
|
Linux, like most Unix-like systems (Apple OS/X being one of the rare exceptions), ignores permissions on symlinks, for instance when resolving their targets.
However ownership of symlinks, like other files, is relevant when it comes to the permission to rename or unlink their entries in directories that have the t bit set, such as /tmp.
To be able to remove or rename a file (symlink or not) in /tmp, you need to be the owner of the file. That's one reason one might want to change the ownership of a symlink (to grant or remove permission to unlink/rename it).
$ ln -s / /tmp/x
$ rm /tmp/x
# OK removed
$ ln -s / /tmp/x
$ sudo chown -h nobody /tmp/x
$ rm /tmp/x
rm: cannot remove ‘/tmp/x’: Operation not permitted
Also, as mentioned by Mark Plotnick in his now deleted answer, backup and archive applications need lchown() to restore symlinks to their original owners. Another option would be to switch euid and egid before creating the symlink, but that would not be efficient and would complicate rights management on the directory the symlink is extracted into.
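You can see the asymmetry directly from the shell on Linux: chown has an -h flag for operating on the link itself, but chmod on a symlink simply follows it (a sketch on throwaway files; `stat -c` is GNU coreutils, and on Linux a symlink's own mode is always 777):

```shell
tmp=$(mktemp -d)
touch "$tmp/target"
ln -s target "$tmp/link"
chmod 600 "$tmp/link"        # no lchmod: this follows the link...
stat -c %a "$tmp/target"     # ...and changes the target: 600
stat -c %a "$tmp/link"       # the link's own mode is untouched: 777
```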
| Why do Linux/POSIX have lchown but not lchmod? |
1,456,433,541,000 |
Logging out and logging back in do not fix this problem. Rebooting does not fix this problem.
NOTE: Please do not mark this question as a duplicate, I'm aware of this question: I added a user to a group, but group permissions on files still have no effect and have tried these things multiple times.
I've added my user account to some groups using usermod -aG, however I am not able to access any resources associated with that group. I can fix this using sudo su - $USER, however this only lasts for my current terminal session.
Example terminal session:
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik)
$ echo $USER
erik
$ sudo su - $USER
[sudo] password for erik:
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik),27(sudo),100(users),999(docker),1001(rvm)
I have to do this every time I open a terminal. Rebooting does not fix this.
Is there any way I can get all the groups assigned to me when I login?
Distro: Ubuntu 16.04 x64
User Catscrash has reported the same issue with Ubuntu 18.04
This seems to only happen when I open a terminal in my desktop. If I ssh to the machine or switch to a TTY (ctrl+alt+F2 for example), then I get the correct groups.
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik)
$ ssh localhost
erik@localhost's password:
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
----------------------------------------------------------------
Ubuntu 16.04.2 LTS built 2017-04-20
----------------------------------------------------------------
Last login: Thu Jul 26 09:18:38 2018 from ::1
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik),27(sudo),100(users),999(docker),1001(rvm)
$
Currently using XFCE as my desktop environment. I also have KDE installed.
I've tried both possible values for starting the terminal as a login shell - neither makes any difference.
I've tried disabling apparmor via sudo systemctl disable apparmor and rebooted. After reboot, since I have docker installed, it still loads apparmor with the docker profile. So additionally I tried disabling docker: sudo systemctl disable docker and then rebooting.
After this, the output of apparmor_status is:
$ sudo apparmor_status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
so, that means it's loaded but not doing anything?
Either way, it doesn't resolve the issue. I still cannot access any resource that I should have group permissions for.
Any other ideas?
|
Update2:
This seems to be a lightdm / kwallet bug, see here: https://bugs.launchpad.net/lightdm/+bug/1781418 and here: https://bugzilla.redhat.com/show_bug.cgi?id=1581495
Commenting out
auth optional pam_kwallet.so
auth optional pam_kwallet5.so
to
#auth optional pam_kwallet.so
#auth optional pam_kwallet5.so
in /etc/pam.d/lightdm - as suggested in the link above, solves the problem for now.
Update:
This seems to be an issue with lightdm. Switching to GDM solved the issue temporarily for me. Still don't know what's wrong with lightdm though.
I have exactly the same issue (Ubuntu 18.04). I don't have a solution yet, but I noticed, that everything is correct when I log in via ssh or a text console, but not when I open a terminal emulator on my desktop environment. Is this the same for you? Maybe it has something to do with some pam-files?
Also weird:
correct would be:
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm),6(disk),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),132(vboxusers),136(libvirtd)
wrong is
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash)
now I can do
catscrash@catscrash-desktop ~ % newgrp adm
catscrash@catscrash-desktop ~ % newgrp catscrash
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm)
catscrash@catscrash-desktop ~ % newgrp sudo
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=27(sudo) Gruppen=27(sudo),4(adm),1000(catscrash)
catscrash@catscrash-desktop ~ % newgrp catscrash
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm),27(sudo)
catscrash@catscrash-desktop ~ %
So it definitely knows about my groups, as I couldn't do that with groups I'm not in, and once I changed the primary group and back, those groups appear... weird!
I also noticed, that this only happens in KDE / plasmashell, is this the same for you? When logging in via gnome shell everything works fine.
| Missing groups at each startup |
1,284,435,541,000 |
I'm trying to understand permissions better, so I'm doing some "exercises". Here's a sequence of commands that I'm using with their respective output:
$ umask
0022
$ touch file1
$ ls -l file1
-rw-r--r-- 1 user group 0 Mar 16 12:55 file1
$ mkdir dir1
$ ls -ld dir1
drwxr-xr-x 2 user group 4096 Mar 16 12:55 dir1
That makes sense because we know that the default file permissions are 666 (rw-rw-rw-) and the default directory permissions are 777 (rwxrwxrwx).
If I subtract the umask value from these default permissions I have
666-022=644, rw-r--r--, for the file1, so it's coherent with the previous output;
777-022=755, rwxr-xr-x, for the dir1, also coherent.
But if I change the umask from 022 to 021 it isn't any more.
Here is the example for the file:
$ umask 0021
$ touch file2
$ ls -l file2
-rw-r--rw- user group 0 Mar 16 13:33 file2
-rw-r--rw- is 646 but it should be 666-021=645. So it doesn't work according to the previous computation.
Here is the example for the directory:
$ mkdir dir2
$ ls -ld dir2
drwxr-xrw- 2 user group 4096 Mar 16 13:35 dir2
drwxr-xrw- is 756, 777-021=756. So in this case the result is coherent with the previous computation.
I've read the man but I haven't found anything about this behaviour.
Can somebody explain why?
EXPLANATION
As pointed out in the answers: umask's value is not mathematically subtracted from default directory and file's permissions.
The operation effectively involved is a combination of AND (&) and NOT (!) boolean operators. Given:
R = resulting permissions
D = default permissions
U = current umask
R = D & !U
For example:
666 & !0053:    110 110 110
             &  111 010 100    (!000 101 011)
             =  110 010 100  =  624  =  rw--w-r--

777 & !0022:    111 111 111
             &  111 101 101    (!000 010 010)
             =  111 101 101  =  755  =  rwxr-xr-x
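The same computation can be checked with shell arithmetic, which understands octal literals (leading 0) and the bitwise AND (&) and NOT (~) operators:

```shell
printf '%o\n' $(( 0666 & ~0053 ))   # 624
printf '%o\n' $(( 0777 & ~0022 ))   # 755
```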
TIP
An easy way to quickly know the resulting permissions (at least it helped me) is to think that we can use just 3 decimal values:
r = 100 = 4
w = 010 = 2
x = 001 = 1
Permissions will be a combination of these 3 values.
" " is used to indicate that the relative permission is not given.
666 = (4+2+" ") (4+2+" ") (4+2+" ") = rw- rw- rw-
So if my current umask is 0053 I know I'm removing read and execute (4+1) permission from group and write and execute (2+1) from other, resulting in
(4+2+" ") (" "+2+" ") (4+" "+" ") = 624 = rw--w-r--
(group and other already lacked execute permission)
|
umask is a mask, it’s not a subtracted value. Thus:
mode 666, mask 022: the result is 666 & ~022, i.e. 666 & 755, which is 644;
mode 666, mask 021: the result is 666 & ~021, i.e. 666 & 756, which is 646.
Think of the bits involved. 6 in a mode means bits 1 and 2 are set, read and write. 2 in a mask masks bit 1, the write bit. 1 in a mask masks bit 0, the execute bit.
Another way to represent this is to look at the permissions in text form. 666 is rw-rw-rw-; 022 is ----w--w-; 021 is ----w---x. The mask drops its set bits from the mode, so rw-rw-rw- masked by ----w--w- becomes rw-r--r--, masked by ----w---x becomes rw-r--rw-.
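You can watch the mask in action in a throwaway directory (a sketch; the behavior is the same for any user, and `stat -c %a` is GNU coreutils):

```shell
tmp=$(mktemp -d) && cd "$tmp"
umask 0021
touch file2
stat -c %a file2    # 646 = 666 & ~021
mkdir dir2
stat -c %a dir2     # 756 = 777 & ~021
```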
| Why do some umask values not take effect? |
1,284,435,541,000 |
I have a directory on an nfs mount, which on the server is at /home/myname/.rubies
Root cannot access this directory:
[mitchell.usher@server ~]$ stat /home/mitchell.usher/.rubies
File: `/home/mitchell.usher/.rubies'
Size: 4096 Blocks: 8 IO Block: 32768 directory
Device: 15h/21d Inode: 245910 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 970/mitchell.usher) Gid: ( 100/ users)
Access: 2016-08-22 15:06:15.000000000 +0000
Modify: 2016-08-22 14:55:00.000000000 +0000
Change: 2016-08-22 14:55:00.000000000 +0000
[mitchell.usher@server ~]$ sudo !!
sudo stat /home/mitchell.usher/.rubies
stat: cannot stat `/home/mitchell.usher/.rubies': Permission denied
I am attempting to copy something from within that directory to /opt which only root has access to:
[mitchell.usher@server ~]$ cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot create directory `/opt/ruby-2.1.3': Permission denied
[mitchell.usher@server ~]$ sudo !!
sudo cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot stat `.rubies/ruby-2.1.3/': Permission denied
Obviously I can do the following (and is what I've done for the time being):
[mitchell.usher@server ~]$ cp -r .rubies/ruby-2.1.3/ /tmp/
[mitchell.usher@server ~]$ sudo cp -r /tmp/ruby-2.1.3/ /opt/
Is there any way to do this that wouldn't involve copying it as an intermediary step or changing permissions?
|
You can use tar as a buffer process
cd .rubies
tar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - )
The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt.
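Stripped of sudo, the pipe can be tried end to end with throwaway directories (a sketch; the sudo on the second tar only matters when the destination is root-owned):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/ruby-2.1.3/bin"
echo fake > "$src/ruby-2.1.3/bin/ruby"
# left tar reads as the invoking user; right tar writes into $dst
( cd "$src" && tar cf - ruby-2.1.3 ) | ( cd "$dst" && tar xf - )
ls "$dst/ruby-2.1.3/bin"            # ruby
```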
| How to copy a directory which root can't access to a directory that only root can access? |
1,284,435,541,000 |
I'm trying to determine which group(s) a running child process has inherited. I want to find all groups the process is in given its uid. Is there a way to determine this via the /proc filesystem?
|
The list of groups is given under Groups in /proc/<pid>/status; for example,
$ grep '^Groups' /proc/$$/status
Groups: 4 24 27 30 46 110 115 116 1000
The primary group is given under Gid:
$ grep '^Gid' /proc/$$/status
Gid: 1000 1000 1000 1000
ps is also capable of showing the groups of a process, as the other answers indicate.
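A small awk sketch pulls both fields for an arbitrary pid (here the current shell; the Groups line may be empty for a process with no supplementary groups):

```shell
pid=$$
awk '/^Gid:/    { print "primary gid:", $2 }
     /^Groups:/ { $1 = "supplementary:"; print }' "/proc/$pid/status"
```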
| Determine which group(s) a running process is in? |
1,284,435,541,000 |
I have a folder udp_folder2
d------r-T 41 root root 4096 Apr 26 21:17 udp_folder2
when I'm with user other than root, I can't cp -r it into a new folder
it says: Permission denied
why? and how can I copy it with a user other than root
|
Well,
That would be because, the way your current permissions are set, no user other than root can access that directory. (Root doesn't follow the same rules.)
You would need to either change the owner of the directory (chown), OR add the other user to the group 'root' and chmod it so the group can read and execute on the directory, OR allow everyone else to read and execute on the directory.
So, a quick fix would be:
chmod -R o+rwx udp_folder2
That will give everyone the ability to read, write and execute on that directory.
Also... if you're attempting to copy 'udp_folder2' into the same directory that it is located now, you'll need the 'w' permission on that directory as well. For example:
/foo/udp_folder2 - you'll need 'w' on /foo to copy that directory in /foo
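A quick check of what the fix does, on a throwaway directory with a similarly locked-down mode (a sketch; mode 1004 approximates the d------r-T above — other gets only read, and the uppercase T is the sticky bit without execute):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/udp_folder2"
chmod 1004 "$tmp/udp_folder2"        # other: r--, plus the sticky bit
stat -c %a "$tmp/udp_folder2"        # 1004
chmod -R o+rwx "$tmp/udp_folder2"    # the quick fix from above
stat -c %a "$tmp/udp_folder2"        # 1007
```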
I'd suggest learning linux file permissions:
Linux File Permission Tutorial
| `cp` permission denied when copy a file owned by `root` |
1,284,435,541,000 |
Is there any way to retrieve UID/GID of running process?
Currently, I know only way of looking it up in htop. But I don't want to depend on third-party tool, prefer to use builtin unix commands.
Could you suggest a few useful oneliners?
This didn't satisfy my curiousity:
How to programmatically retrieve the GID of a running process
top shows only user but not the group.
|
$ stat -c "%u %g" /proc/$pid/
1000 1000
or
$ egrep "^(U|G)id" /proc/$pid/status
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
or with only bash builtins:
$ while read -r line;do [ "${line:1:2}" = "id" ] && echo $line;done < /proc/17359/status
Pid: 17359
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
| How could one determine UID/GID of running process |
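To turn the numeric ids into names, getent can be combined with either form (a sketch, using the current shell's pid):

```shell
pid=$$
uid=$(stat -c %u "/proc/$pid/")
gid=$(stat -c %g "/proc/$pid/")
getent passwd "$uid" | cut -d: -f1   # user name
getent group  "$gid" | cut -d: -f1   # group name
```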
1,284,435,541,000 |
We have an Ubuntu 12.04/apache server and a directory in the "/var/www/foo" and root permission.
Something is repeatedly changes the permission of this directory.
Question: How can we investigate, what is changing the permission?
|
You could investigate using auditing to find this. In ubuntu the package is called auditd.
Use that command to start a investigation if a file or folder:
auditctl -w /var/www/foo -p a
-w means watch the file/folder
-p a means watch for changes in file attributes
Now start tail -f /var/log/audit/audit.log. When the attributes change you will see something like this in the log file:
type=SYSCALL msg=audit(1429279282.410:59): arch=c000003e syscall=268 success=yes exit=0
a0=ffffffffffffff9c a1=23f20f0 a2=1c0 a3=7fff90dd96e0 items=1 ppid=26951 pid=32041
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts5
ses=4294967295 comm="chmod" exe="/bin/chmod"
type=CWD msg=audit(1429279282.410:59): cwd="/root"
type=PATH msg=audit(1429279282.410:59): item=0 name="/var/www/foo" inode=18284 dev=00:13
mode=040700 ouid=0 ogid=0 rdev=00:00
I executed chmod 700 /var/www/foo to trigger it.
In the 1st line, you see
which executable did it: exe="/bin/chmod"
the pid of the process: pid=32041
You could also find out which user it was: uid=0, root in my case.
In the 3rd line, you see the changed mode: mode=040700
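To keep the watch across reboots, the same rule (plus a search key, so you can later filter with ausearch -k instead of tailing the log) can be stored as an audit rules file. The filename below is hypothetical and the path assumes the Debian/Ubuntu auditd layout:

```
# /etc/audit/rules.d/www-foo.rules (hypothetical name)
-w /var/www/foo -p a -k www-foo-perms
```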
| How to investigate what is modifying a directories permission on Linux? |
1,284,435,541,000 |
My goal is to allow all users who are members of the "team" group to edit (r/w) the same set of remote files -- normal work collaboration -- using a local mount point. I have tried NFS and SSHFS using ACLs without success yet. Here I am trying to get SSHFS working by making the umask correct (which, in theory, should solve the problems I'm experiencing).
Updated description of problem:
user1, user2, and user3 all log into the same client computer. All are members of group "team". The client computer mounts a share via SSHFS. Client and server run Arch Linux (updated a couple days ago). The Client runs KDE desktop. The SSHFS mount is done via user3@sshfsrv with option allow_other.
On the server, the shared directory has permissions user3 (owner) rwx and group (team) rwx, while other have r-x permissions. The gid sticky bit is set with chmod g+s. We removed all ACLs for the umask-focused configuration.
First problem:
user2 scans a document with XSane (a Gnome app) and attempts to save it in Shared1 directory, which is part of the SSHFS mount point. The save operation fails due to permissions. A 0 byte file is written. The permissions on that file are owner (user3) rw and group (team) read only (and other none). user2 can save the scanned document to their home directory.
The terminal works as expected:
In a terminal, user2 can touch a document in the Shared1 directory and the permissions are:
-rw-rw---- 1 user3 team 6 Sep 23 19:41 deleteme6.txt
We get the correct g+rw permissions. Note that ownership is user3 while this is user2 creating the file. In /etc/fstab, the mount is specified as:
user3@sshfsrv:/home/common /home/common fuse.sshfs x-systemd.automount,_netdev,user,follow_symlinks,identityfile=/home/user3/.ssh/id_rsa,allow_other,default_permissions 0 0
In the terminal, and with a text editor (Kate in KDE), the users can collaborate on files that were created in Shared1 as expected. Any user in group "team" can create and save a file in Shared1 via nano text editor, and any other user in the group can edit / update it.
Second problem:
As a temporary workaround I tested saving the scanned images to user2's home directory, then moving them to the Shared1 directory using Dolphin File manager. Permissions errors prevent this, and sometimes it crashes Dolphin.
I can show the same result by moving text files in the terminal:
[user2@client2 Shared1]$ echo user2 > /home/user2/MoveMe/deleteme7.txt
[user2@client2 Shared1]$ mv /home/user2/MoveMe/deleteme7.txt .
mv: preserving times for './deleteme7.txt': Operation not permitted
mv: preserving permissions for ‘./deleteme7.txt’: Operation not permitted
The two errors above appear to be key to understanding the problem. If I change the mount specification to use user2@sshfsrv those errors go away for user2 but then user1 and user3 experience them. The only user that doesn't have the problem is the one used in the mount specification. (I had expected the allow_other mount option would prevent this, but it doesn't. Also using root in the mount specification doesn't seem to help.)
Removing the mount option default_permissions eliminates these errors, but it also eliminates all permissions checking. Any user in any group can read and write files in Shared1, which does not meet our requirements.
sftp-server umask setting:
As sebasth says below, when sftp-server is used, the umask in /etc/profile or ~/.bashrc isn't used. I found that the following specification in /etc/ssh/sshd_config is a good solution for setting the umask:
Subsystem sftp internal-sftp -u 0006
I do not want to use the umask mount option for sshfs (in /etc/fstab) as that does not give the desired behavior.
Unfortunately, the above "-u" flag, while required, doesn't (yet) fully resolve my problem as described above.
New Update:
I have enabled pam_umask, but that alone doesn't resolve the issue. The above "-u" option is still required and I do not see that pam_umask adds anything additional that helps resolve this issue. Here are the configs currently used:
/etc/pam.d/system-login
session optional pam_umask.so
/etc/login.defs
UMASK 006
The Shared1 directory has these permissions, as shown from the server side. The gid sticky bit is set with chmod g+s. We removed all ACLs. All files within this directory have g+rw permissions.
drwxrwsr-x 1 user3 team 7996 Sep 23 18:54 .
# cat /etc/group
team:x:50:user1,user2,user3
Both client and server are running OpenSSH_7.5p1, OpenSSL 1.1.0f dated 25 May 2017. This looks like the latest version.
On the server, systemctl status sshd shows Main PID: 4853 (sshd). The main proc status shows a umask of 022. However, I will provide the process info for the sftp subsystem further below, which shows the correct umask of 006.
# cat /proc/4853/status
Name: sshd
Umask: 0022
State: S (sleeping)
Tgid: 4853
Ngid: 0
Pid: 4853
PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 64
Groups:
NStgid: 4853
NSpid: 4853
NSpgid: 4853
NSsid: 4853
VmPeak: 47028 kB
VmSize: 47028 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5644 kB
VmRSS: 5644 kB
RssAnon: 692 kB
RssFile: 4952 kB
RssShmem: 0 kB
VmData: 752 kB
VmStk: 132 kB
VmExe: 744 kB
VmLib: 6260 kB
VmPTE: 120 kB
VmPMD: 16 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 1
SigQ: 0/62965
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000001000
SigCgt: 0000000180014005
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 25
nonvoluntary_ctxt_switches: 2
We need to look at the sftp-server process for this client. It shows the expected umask of 006. I'm not sure if the GID is correct. 1002 is the GID for the user3 group. The directory specifies team group (GID 50) rwx.
# ps ax | grep sftp*
5112 ? Ss 0:00 sshd: user3@internal-sftp
# cat /proc/5112/status
Name: sshd
Umask: 0006
State: S (sleeping)
Tgid: 5112
Ngid: 0
Pid: 5112
PPid: 5111
TracerPid: 0
Uid: 1002 1002 1002 1002
Gid: 1002 1002 1002 1002
FDSize: 64
Groups: 47 48 49 50 51 52 1002
NStgid: 5112
NSpid: 5112
NSpgid: 5112
NSsid: 5112
VmPeak: 85280 kB
VmSize: 85276 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 3640 kB
VmRSS: 3640 kB
RssAnon: 980 kB
RssFile: 2660 kB
RssShmem: 0 kB
VmData: 1008 kB
VmStk: 132 kB
VmExe: 744 kB
VmLib: 7352 kB
VmPTE: 184 kB
VmPMD: 12 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 1
SigQ: 0/62965
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000000
SigCgt: 0000000180010000
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 8
nonvoluntary_ctxt_switches: 0
Original Question - can probably skip this after the above updates
I am sharing the Shared1 directory from the SSHFS file server to various client machines. All machines use Arch Linux and BTRFS. pwck and grpck report no errors on both client and server.
My goal is to allow all users in the team group to have rw permissions in the Shared1 directory. For unknown reasons, I am not able to achieve this goal. Some group members are experiencing permission denied errors (on write), as I will show below.
What am I overlooking? (I have checked all the related questions on unix.stackexchange.com and I still did not resolve this issue.)
Server:
[user2@sshfsrv Shared1]$ cat /etc/profile
umask 006
[user2@sshfsrv Syncd]$ whoami
user2
[user2@sshfsrv Syncd]$ groups
team user2
[user2@sshfsrv Syncd]$ cat /etc/fuse.conf
user_allow_other
[root2@sshfsrv Syncd]# cat /proc/18940/status
Name: sshd
Umask: 0022
Note below that the setgid bit (chmod g+s) is initially set:
[user1@sshfsrv Syncd]$ ls -la
total 0
drwxrws--x 1 user1 limited 170 Aug 29 09:47 .
drwxrwxr-x 1 user1 limited 10 Jul 9 14:10 ..
drwxrwsr-x 1 user2 team 7892 Sep 22 17:21 Shared1
[root@sshfsrv Syncd]# getfacl Shared1/
# file: Shared1/
# owner: user2
# group: team
# flags: -s-
user::rwx
group::rwx
other::r-x
[user2@sshfsrv Shared1]$ umask -S
u=rwx,g=rx,o=x
[user2@sshfsrv Shared1]$ sudo chmod g+w .
[user2@sshfsrv Shared1]$ umask -S
u=rwx,g=rx,o=x
NOTE: Even after the above step, there are still no group write permissions.
[user2@sshfsrv Shared1]$ touch deleteme2.txt
[user2@sshfsrv Shared1]$ echo deleteme > deleteme2.txt
[user2@sshfsrv Shared1]$ cat deleteme2.txt
deleteme
[user2@sshfsrv Shared1]$ ls -la deleteme2.txt
-rw-r----- 1 user2 team 9 Sep 22 17:55 deleteme2.txt
[user2@sshfsrv Shared1]$ getfacl .
# file: .
# owner: user2
# group: team
# flags: -s-
user::rwx
group::rwx
other::r-x
[root@sshfsrv Syncd]# chmod g-s Shared1/
[root@sshfsrv Syncd]# ls -la
drwxrwxr-x 1 user2 team 7944 Sep 22 17:54 Shared1
Client
[user2@client2 Shared1]$ cat /etc/fstab
user3@sshfsrv:/home/common /home/common fuse.sshfs x-systemd.automount,_netdev,user,follow_symlinks,identityfile=/home/user3/.ssh/id_rsa,allow_other,default_permissions 0 0
[user2@client2 Shared1]$ cat /etc/profile
umask 006
[user2@client2 Shared1]$ cat /etc/fuse.conf
user_allow_other
[user2@client2 Shared1]$ groups
team user2
[user2@client2 Shared1]$ echo deleteme > deleteme2.txt
bash: deleteme2.txt: Permission denied
[user2@client2 Shared1]$ touch deleteme3.txt
touch: setting times of 'deleteme3.txt': Permission denied
[user2@client2 Shared1]$ ls -la
total 19520
drwxrwsr-x 1 user2 team 7918 Sep 22 17:51 .
drwxrws--x 1 user1 limited 170 Aug 29 09:47 ..
-rw-r----- 1 user3 team 0 Sep 22 17:51 deleteme3.txt
|
The general solution is to add the following line to /etc/ssh/sshd_config on Arch Linux:
Subsystem sftp internal-sftp -u 0002
However, the gotcha for me was that users of group "team" had a ForceCommand defined in that same config file. For these users, the ForceCommand was overriding the specification listed above.
The solution was to add the same "-u" flag to the ForceCommand:
Match Group team
ForceCommand internal-sftp -u 0002
Then run:
systemctl restart sshd.service
It is important to note that using the sshfs mount option umask is not recommended. It did not produce the desired behavior for me.
References:
"The umask option for sshfs goes down to the underlying fuse layer where it's handled wrongly. afaict the advice is to avoid it." – Ralph Rönnquist, Jun 17 '16, commenting on Understanding sshfs and umask
https://jeff.robbins.ws/articles/setting-the-umask-for-sftp-transactions
https://unix.stackexchange.com/a/289278/15010
EDIT:
while this solution works on the command line and with some desktop apps (e.g., KDE's Kate text editor), it does not work correctly with many desktop applications (including KDE's Dolphin file manager, XSane, etc.). So this turned out not to be a good overall solution.
| Proper way to set the umask for SFTP transactions? |
1,284,435,541,000 |
What is the command with which you can directly view the permission bits of a directory?
|
There's a couple ways. stat is used to show information about files and directories, so it's probably the best way. It takes a format parameter to control what it outputs; %a will show the octal values for the permissions, while %A will show the human-readable form:
$ stat -c %a /
755
$ stat -c %A /
drwxr-xr-x
$ stat -c %a /tmp
1777
$ stat -c %A /tmp
drwxrwxrwt
Another (probably more common) way is to use ls. -l will make it use the long listing format (whose first entry is the human-readable form of the permissions), and -d will make it show the entry for the specified directory instead of its contents:
$ ls -ld /
drwxr-xr-x 22 root root 4.0K Apr 28 20:32 /
$ ls -ld /tmp
drwxrwxrwt 7 root root 12K Sep 25 22:31 /tmp
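If you want both forms at once, stat's format string can combine them; a quick sketch:

```shell
# Both the human-readable and octal modes, plus the file name, in one call:
stat -c '%A %a %n' /tmp
```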
| how to view a directory's permission |
1,284,435,541,000 |
I am running this command:
$ sudo tar xvzf nexus-latest-bundle.tar.gz
The extracted files belong to an unknown (1001) user:
drwxr-xr-x 8 1001 1001 4096 Dec 16 18:37 nexus-2.12.0-01
drwxr-xr-x 3 1001 1001 4096 Dec 16 18:47 sonatype-work
Shouldn't it be root the owner under a normal configuration?
I am working on a Linux installation replicated from an AWS AMI.
|
When extracting files as root, tar will use the original ownership. You can override that using the --no-same-owner option (alternatively, -o).
Your tar file referred to a user/group that does not exist on the system where you extracted it.
If you extract files as yourself (a non-privileged user), you can only create files owned by yourself.
The GNU tar manual says:
--same-owner
When extracting an archive, tar will attempt to preserve the owner specified in the tar archive with this option present. This is the default behavior for the superuser; this option has an effect only for ordinary users. See section Handling File Attributes.
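A self-contained sketch of the behaviour (throwaway temp paths stand in for the Nexus bundle):

```shell
# Build a tiny archive, then extract it with --no-same-owner so the
# extracted files belong to the invoking user, not the archived UID.
set -e
work=$(mktemp -d)
cd "$work"
mkdir src && echo hello > src/file.txt
tar czf bundle.tar.gz src
rm -r src
tar --no-same-owner -xzf bundle.tar.gz
ls -ln src/file.txt    # UID/GID are those of the current user
```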
| sudo tar changes extracted files ownership to unknown user |
1,284,435,541,000 |
When I apply default ACL in a directory I see default:mask or just mask in the following two scenario.
Scenario 1
-bash-4.2$ ls -ld test/
drwxr-x---. 2 test test 4096 Oct 15 19:12 test/
-bash-4.2$ setfacl -d -m u:arif:rwx test/
-bash-4.2$ getfacl --omit-header test
user::rwx
group::r-x
other::---
default:user::rwx
default:user:arif:rwx
default:group::r-x
default:mask::rwx
default:other::---
Scenario 2
-bash-4.2$ ls -dl dir/
drwxr-x---. 2 test test 4096 Oct 15 18:17 dir/
-bash-4.2$ getfacl dir
# file: dir
# owner: test
# group: test
user::rwx
group::r-x
other::---
-bash-4.2$ setfacl -m user:arif:rwx dir
-bash-4.2$ getfacl --omit-header dir
user::rwx
user:arif:rwx
group::r-x
mask::rwx
other::---
So what is the purpose of mask here?
|
What
This 3-bit ACL system has its roots in TRUSIX. Other ACL systems, such as the NFS4-style ones in FreeBSD, MacOS, AIX, Illumos, and Solaris, work differently and this concept of a mask access control entry is not present.
The mask is, as the name says, a mask that is applied to mask out permissions granted by access control entries for users and groups. It is the maximum permission that may be granted by any access control entry, other than by a file owner or an "other" entry. Its 3 bits are anded with the 3 bits of these other entries.
So, for example, if a user is granted rw- by an access control entry, but the mask is r--, the user will only actually have r-- access. Conversely, if a user is only granted --x by an access control entry, a mask of rwx does not grant extra permissions and the user has just --x access.
The default mask on a parent directory is the mask setting that is applied to things that are created within it. It is a form of inheritance.
Why
It's a shame that IEEE 1003.1e never became a standard and was withdrawn in 1998. In practice, nineteen years on, it's a standard that a wide range of operating systems — from Linux through FreeBSD to Solaris (alongside the NFS4-style ACLs in the latter cases) — actually implement.
IEEE 1003.1e working draft #17 makes for interesting reading, and I recommend it. In appendix B § 23.3 the working group provides a detailed, eight page, rationale for the somewhat complex way that POSIX ACLs work with respect to the old S_IRWXG group permission flags. (It's worth noting that the TRUSIX people provided much the same analysis ten years earlier.) This covers the raison d'être for the mask, which I will only précis here.
Traditional Unix applications expect to be able to deny all access to a file, named pipe, device, or directory with chmod(…,000). In the presence of ACLs, this only turns off all user and group permissions if there is a mask and the old S_IRWXG maps to it. Without this, setting the old file permissions to 000 wouldn't affect any non-owner user or group entries and other users would, surprisingly, still have access to the object. Temporarily changing a file's permission bits to no access with chmod 000 and then changing them back again was an old file locking mechanism, used before Unixes gained advisory locking mechanisms, that — as you can see — people still use even in the 21st century. (Advisory locking has been easily usable from scripts with portable well-known tools such as setlock since the late 1990s.)
Traditional Unix scripts expect to be able to run chmod go-rwx and end up with only the object's owner able to access the object. Again, this doesn't work unless there is a mask and the old S_IRWXG permissions map to it, because otherwise that chmod command wouldn't turn off any non-owner user or group access control entries, leading to users other than the owner and non-owning groups retaining access to something that is expected to be accessible only to the owner. And again — as you can see — this sort of chmod command was still the received wisdom twelve years later. The rationale still holds.
Other approaches without a mask mechanism have flaws.
An alternative system where the permission bits were otherwise separate from and anded with the ACLs would require file permission flags to be rwxrwxrwx in most cases, which would confuse the heck out of the many Unix applications that complain when they see what they think to be world-writable stuff.
An alternative system where the permission bits were otherwise separate from and ored with the ACLs would have the chmod(…,000) problem mentioned before.
Hence an ACL system with a mask.
Further reading
Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
Portable Applications Standards Committee of the IEEE Computer Society (October 1997).
Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17.
Winfried Trümper (1999-02-28). Summary about Posix.1e
https://unix.stackexchange.com/a/406545/5132
https://unix.stackexchange.com/a/235284/5132
How can I grant owning group permissions when POSIX ACLs are applied?
Performing atomic write operations in a file in bash
| What is the exact purpose of `mask` in file system ACL? |
1,284,435,541,000 |
I created a directory called folder and took away execute permission.
$ mkdir folder
$ touch folder/innerFile
$ mkdir folder/innerFolder
$ chmod -x folder
Now if I do
$ ls folder
it outputs a list of files, but when I do
$ ls -l folder
I get
ls: innerFile: Permission denied
ls: innerFolder: Permission denied
Why is that?
|
ls -l on a folder tries to stat its contents, whereas ls doesn't:
$ strace ls folder -l
...
lstat("folder/innerFolder", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
getxattr("folder/innerFolder", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available)
getxattr("folder/innerFolder", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available)
lstat("folder/innerFile", {st_mode=S_IFDIR|0755, st_size=40, ...}) = 0
getxattr("folder/innerFile", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available)
getxattr("folder/innerFile", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available)
...
That's why you get a "permission denied" with ls -l and not with ls.
| What is the difference between 'ls' and 'ls -l' when I don't have execute permission on that directory? |
1,284,435,541,000 |
I always wondered why rsync tries to transfer a file to a remote location where it has read/execute permissions for the target dir, but no write permissions to create the actual destination file. This can be simulated even locally when trying to copy a file as a regular user to /, rsync will transfer the whole file (also taking rather long for large files) and finally fails with
rsync: mkstemp "/.myTargetFile" failed: Permission denied (13)
So it already seems to fail at startup when trying to create the temporary file (the dot-file) during transfer. Why doesn't it notice this and aborts early instead of trying to copy the whole file without having any write permissions?
And where does it copy the file to if it can't create the temporary file? I can't see any memory increase of the rsync processes and also no corresponding file in /tmp. Seems like it directly discards the data at the destination but still keeps on with transferring.
|
This seems to be a shortcoming of the current rsync protocol as explained in the bug tracker. The rsync protocol can't determine beforehand if it has write permissions at the target. Instead it just sends and checks for success or failure afterwards.
| rsync and write permissions at the target |
1,284,435,541,000 |
When I try to start my WM using startx, I am unable to because the permission of something called /dev/fb0 are restricted.
From home/user/.local/share/xorg/Xorg.0.log:
[ 198.569] (--) controlling tty is VT number 1, auto-enabling KeepTty
[ 198.569] (II) Loading sub module "fbdevhw"
[ 198.569] (II) LoadModule: "fbdevhw"
[ 198.569] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 198.570] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 198.570] compiled for 1.16.0, module version = 0.0.2
[ 198.570] ABI class: X.Org Video Driver, version 18.0
[ 198.570] (EE) open /dev/fb0: Permission denied
[ 198.570] (WW) Falling back to old probe method for fbdev
[ 198.570] (II) Loading sub module "fbdevhw"
[ 198.570] (II) LoadModule: "fbdevhw"
[ 198.570] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 198.570] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 198.570] compiled for 1.16.0, module version = 0.0.2
[ 198.570] ABI class: X.Org Video Driver, version 18.0
[ 198.571] (EE) open /dev/fb0: Permission denied
Now of course I can change it using chmod, but I shouldn't have to do that every time I reboot the computer, so it seems like something is wrong / I haven't set up something properly.
What should I do to fix this?
|
Gilles is correct; this is due to the changes in xorg-server 1.16 which were announced on the Arch News.
To work around the permissions issue, you can use a Xorg.wrap config file to pass root rights, using:
needs_root_rights = yes
See man Xorg.wrap for the details.
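On current systems the wrapper reads its settings from /etc/X11/Xwrapper.config (path per man Xorg.wrap); a minimal file would look like this:

```
# /etc/X11/Xwrapper.config
needs_root_rights = yes
```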
You could also try using xf86-video-modesetting instead of xf86-video-fbdev until the fbdev driver is updated.
| startx cannot open /dev/fb0: Permission denied |
1,284,435,541,000 |
I am trying to start a node.js application as a low-permissions user. All the files I know of are owned by the correct user and have permissions set reasonably well. I'm trying to use a script file to do this. I invoke the script with this command
sudo su - nodejs ./start-apps.sh
The shell script runs this command to start the app
cd "/home/nodejs/my-app"
npm start
npm start is documented here. It basically pulls the command to use out of the package.json file, which in our app looks like this:
// snip
"scripts": {
"start": "node-dev app"
},
And it spits out the error:
> [email protected] start /home/nodejs/my-app
> node-dev app
sh: 1: node-dev: Permission denied
npm ERR! [email protected] start: `node-dev app`
npm ERR! Exit status 126
That sh seems to be saying that it's reporting errors from the shell command. I don't think the problem is accessing the npm command itself, because if it were, the permission denied would be raised before any output from the npm command. But just to rule it out, here are the permissions for the npm command itself:
$ sudo find / ! \( -type d \) -name npm -exec ls -lah {} \;
-rwxr-xr-x 1 root root 274 Nov 12 20:22 /usr/local/src/node-v0.10.22/deps/npm/bin/npm
-rwxr-xr-x 1 root root 274 Nov 12 20:22 /usr/local/lib/node_modules/npm/bin/npm
lrwxrwxrwx 1 root root 38 Jan 14 07:49 /usr/local/bin/npm -> ../lib/node_modules/npm/bin/npm-cli.js
It looks like everyone should be able to execute it.
The permissions for node-dev look like this:
$ sudo find / ! \( -type d \) -name node-dev -exec ls -lah {} \;
-rwxr-xr-x 1 nodejs nodejs 193 Mar 3 2013 /home/nodejs/.npm/node-dev/2.1.4/package/bin/node-dev
-rw-r--r-- 1 nodejs nodejs 193 Mar 3 2013 /home/nodejs/spicoli-authorization/node_modules/node-dev/bin/node-dev
lrwxrwxrwx 1 root root 24 Jan 14 07:50 /home/nodejs/spicoli-authorization/node_modules/.bin/node-dev -> ../node-dev/bin/node-dev
I've already tried chowning the link to nodejs:nodejs, but the script experiences the same error.
Is there some file permissions problem I'm not seeing with the binary files? Or is this an npm/node-dev specific error?
|
The second node-dev is not executable, and the symlink points to that. Although the symlink is executable (symlinks are always 777), it is the mode of the file it points to that counts; note that calling chmod on the link actually changes the mode of the file it points to (symlink permissions never change).
So perhaps you need to add the executable bit for everyone:
chmod 755 /home/nodejs/spicoli-authorization/node_modules/.bin/node-dev
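The symlink behaviour can be verified directly (throwaway temp paths):

```shell
# chmod on a symlink changes the mode of the target, not of the link.
set -e
d=$(mktemp -d)
touch "$d/real"
chmod 644 "$d/real"
ln -s real "$d/link"
chmod 755 "$d/link"       # operates on $d/real
stat -c '%a' "$d/real"    # prints 755: the target changed
stat -c '%a' "$d/link"    # prints 777: the link itself never changes
```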
| Why is permission denied for npm start using node-dev? |
1,284,435,541,000 |
MacOS Mojave has extended the effects of SIP into the home directories of users. By default, access is denied to many directories in a user’s home directory. A few examples of these directories follow.
~/Library/Messages
~/Library/Mail
~/Library/Safari
[… etc.]
In order to access these directories from a terminal, the terminal application must be defined in System Preferences > Security & Privacy > Privacy > Full Disk Access. The configuration works, except for the following directory on my system. The same behavior may exist for other data in containers - not sure.
~/Library/Containers/com.apple.mail/Data/DataVaults
The intriguing behavior is easy to reproduce. The directory isn't even visible.
cd ~/Library/Containers/com.apple.mail/Data
ls
ls: DataVaults: Operation not permitted
I use rsync to mirror my home directory to an external hard drive; but, I can no longer do so because rsync complains, "IO error encountered -- skipping file deletion," which breaks the mirroring effect. I do not find any documentation on this issue. Apple support have no idea. Why is this directory special, and how can we gain access to it without disabling SIP?
Results of Further Investigation with SIP Disabled
According to System Information, the Mojave upgrade was performed on 24 September 2018. The directory was also created on the same day. My user owns the directory, and the staff group is the group owner. Its permissions are 0700. It has extended attributes as indicated by the @ symbol. No ACLs. No flags.
xattr -l ~/Library/Containers/com.apple.mail/Data/DataVaults
com.apple.quarantine: 0082;00000000;Mail;
com.apple.rootless: Mail
ls -lO DataVaults
(no result; exit 0)
After disabling SIP, deleting the directory, and reenabling SIP, the directory reappears with the same permissions as soon as Mail is opened. Mail (Version 12.0 (3445.100.39)) has no plugins.
Results from a Fresh Installation on Oct 16 2018
The directory does not exist after formatting and reinstalling. I still have no clue how it was ever there to start.
Results from an upgrade on March 29, 2019
The directory has reappeared coinciding with the upgrade to Mojave 10.14.4 (18E226) and/or Mail Version 12.4 (3445.104.8).
|
The DataVaults directory has to do with entitlements. Access is prevented unless the owner of the entitlement grants the access. The entitlements for Mail.app can be listed as follows and provides an XML plist.
codesign -d --entitlements - /Applications/Mail.app/
At this time, the only remaining method to acquire access to the directory is to turn off SIP. In regard to my rsync issue, I opted to keep SIP turned on and used the rsync --exclude option to ignore the DataVaults directory, which, by the way, is devoid of content.
From a comment the blog at Eclectic Light Company, offering more clues:
/var/folders/t9/[long ID]/C/com.apple.QuickLook.thumbnailcache” is
a DataVault, which is a new type of privacy container that Apple
introduced sometime around 10.13.4. These files/folders are identified
by the “UF_DATAVAULT” file flag. These are implemented via SIP (not
technically sandboxing, but the same gist). Applications need an
entitlement to make or access specific data vaults, or even to stat() a
DataVault folder.
These devices are worth some deeper investigation. Apple doesn’t (and
apparently has no plans to) issue these entitlements to third-parties.
Consider the implications of that – Apple is creating a platform where
only data created in Apple applications gets the highest level of
security.
Also consider that you (the user) can’t see what’s in these DataVaults
without turning off SIP. It’s hard to tell what Apple is keeping in
these, but some of them are a bit alarming. Here are just a few known
data vaults:
~/Library/VoiceTrigger/SAT
~/Library/Containers/com.apple.mail/Data/DataVaults
/private/var/folders/0z/fs4vdwmx6g31n69qt5v5ff580000gn/0/com.apple.nsurlsessiond
That first one apparently has “Siri Audio Transcripts” – everything
you’ve ever uttered to Siri on your Mac.
I did not find a flag on ~/Library/Containers/com.apple.mail/Data/DataVaults, and a clean installation of Mojave caused the directory not to appear again since.
A summary overview of access controls was also published.
| macOS Mojave Directory Permissions |
1,284,435,541,000 |
On Debian there is the common problem, that you try to plug an ntfs formatted USB harddrive and then can't write to it as a regular user to it because the directory belongs to root.
A while ago I read that this can be fixed by adding the uid=1000,gid=1000 mount options (or whatever your uid and gid are). This does solve the problem, but seems a bit nasty to me: on a multi-user system the drive always belongs to the same user, not to the user who mounted it or is logged in.
From my time with Ubuntu I remember that this wasn't a problem and you could mount NTFS drives (with GNOME) and they were writable by the user who mounted them. So it seems that GNOME is able to mount the drive with permissions given to the logged in user.
However now I'm using KDE on Debian jessie and I'm wondering if I can configure my computer to mount the drives with the permissions of the user who has the active X-session.
|
It does seem like you have modified your fstab to mount this device manually, or perhaps you have made entries in autofs. Debian/Ubuntu has not required manually configured mounts for a long time now, even for non-GUI environments.
Generally, you would not need to do manual mounts, and all the various desktop environments handle this seamlessly, often in slightly different ways (as long as the device is not set up in fstab).
Some use udev/pmount, with consolekit/policykit helping with authorization. The first thing to do is to remove the manual mount settings you have and reboot. That alone will most likely fix your automounting challenges, so try that first, and report back if you still have problems.
Also, if you have been tinkering with the permissions of the ntfs-3g or lowntfs-3g binaries, revert them to their default ownership and permissions. In short, revert to default settings and reboot, and you should be fine.
The solution Kiwy is suggesting (and others) would only become necessary if you still have issues after reverting to default settings.
| USB drive permissions when automounting |
1,284,435,541,000 |
I have been volunteered by my boss to be the sysadmin for our production redhat server.
He asked me to tighten the security to avoid mishaps like the rm -f * that occurred not long ago.
Right now, we have 53 users sudo-ing into the machine and it is an audit nightmare. I am wondering if it possible to allow user access only on specific days of the week.
For example, can I have user 'Joe' allowed to login on Tuesdays and Thursdays ONLY and 'Jane' only on Sundays? Can etc/sudoers be customized to allow this?
Is there a better way instead of using sudoers?
|
sudo does its authentication through PAM, like pretty much everything else on a Linux box.
So you should be able to use pam_time.so to do this.
By default on Debian at least, that module isn't enabled. You need to add a line that looks like this:
account requisite pam_time.so
to either /etc/pam.d/sudo to enable for only sudo or to /etc/pam.d/common-account (after the pam-auth-update block) to enable for all programs on the system.
Then edit /etc/security/time.conf to set your restrictions. The service name should be sudo. For example, to allow Fred to use sudo only between 3pm and 5pm on Friday:
sudo;*;fred;Fr1500-1700
(NOTE: I have not tested this.)
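Applied to the question's example (Joe on Tuesdays and Thursdays, Jane on Sundays), the time.conf entries would look something like this (equally untested):

```
# /etc/security/time.conf — restrict sudo by day of week
sudo;*;joe;TuTh0000-2400
sudo;*;jane;Su0000-2400
```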
edit: To be clear, I agree with the other answer and the various commenters, you appear to have too many people running too many commands as root, and you really need to fix that. And of course if they can become root, they can edit the pam config...
| Sudoers for one day per week? |
1,284,435,541,000 |
I ran some commands without completely understanding them while trying to get screen brightness working and now I'm stuck with a nasty symlink in '/sys/class/backlight/asus_laptop' that I am trying to get rid of.
I have tried
sudo rm /sys/class/backlight/asus_laptop
sudo rm '/sys/class/backlight/asus_laptop'
su root
rm /sys/class/backlight/asus_laptop
sudo rm /sys/class/backlight/asus_laptop
Going right into directory and typing rm asus_laptop, changing ownership and using Thunar to try to remove it.
I get
rm: cannot remove '/sys/class/backlight/asus_laptop': Operation not permitted
Same goes for unlink, rmdir doesn't work, and Thunar fails.
The permissions on it are lrwxrwxrwx
How can I remove it?
|
The sysfs file system, typically mounted on /sys, is, just like the /proc file system, not a typical file system but a so-called pseudo file system. It's actually populated by the kernel and you can't delete files directly.
So, if the ASUS laptop support isn’t appropriate for you, then you have to ask the kernel to remove it. To do so, remove the corresponding module:
sudo rmmod asus-laptop
That will remove the relevant /sys entry.
| Debian: cannot remove symlink in /sys/: operation not permitted |
1,284,435,541,000 |
I know we can use below format to redirect the screen output to a file:
$ your_program > /tmp/output.txt
However when I used below command, it says "-bash: /home/user/errors.txt: Permission denied"
sudo tail /var/log/apache2/error.log > ~/errors.txt
May I know how to make this output works? The ~/errors.txt doesn't exist. Do I need to create this txt file first before I use the redirect command?
|
The sudo doesn't apply to the redirection: the shell performs that part with your own privileges. I don't know why you can't write to your home; maybe the file already belongs to root?
sudo tail /var/log/apache2/error.log | sudo tee ~/errors.txt
Maybe you need a different user behind the pipe. For sure, you don't need a preexisting file.
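Another common approach is to let a root shell perform the redirection itself. A generic sketch, demonstrated on a throwaway file in place of the Apache log:

```shell
# With sudo, the redirection inside the quotes is executed by the sh
# that sudo starts, so it runs with root's privileges:
#   sudo sh -c 'tail /var/log/apache2/error.log > ~/errors.txt'
# (note that ~ then expands to root's home, not yours)
# Demonstrated here without sudo:
set -e
log=$(mktemp)
printf 'line1\nline2\n' > "$log"
sh -c "tail \"$log\" > /tmp/errors.txt"
cat /tmp/errors.txt
```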
| tail program output to file in Linux |
1,284,435,541,000 |
How can I find all files I can not write to?
Would be good if it takes standard permissions and acls into account.
Is there an "easy" way or do I have to parse the permissions myself?
|
Try
find . ! -writable
the command find returns a list of files, -writable filters only the ones you have write permission to, and the ! inverts the filter.
You can add -type f if you want to ignore the directories and other 'special files'.
| How can I find all files I do not have write access to in specific folder? |
1,284,435,541,000 |
I have my custom Vim files in ~/.vim and settings in ~/.vimrc. However, sometimes I have to edit some files in /etc and such.
If I start Vim like this:
$ sudo vim /etc/rc.conf
I lose my config since Vim uses its default one. So: how can I run Vim with root privileges to edit files without losing my user's settings (which are in my home directory)?
I have tried:
$ su username -c "vim /usr/lib/python2.7/setuptools/dist.py"
but Bash gives me Permission denied. However, the above command works for example for: /etc/acpi/handler.sh. Why is that?
Note: username is not root.
|
Instead of sudo vim /etc/rc.conf use sudoedit /etc/rc.conf or sudo -e /etc/rc.conf. You may need to set the EDITOR environment variable to vim. This will run vim itself as the normal user, using your normal configuration, on a copy of the file which it will copy back when you exit.
| Start Vim as my user with root privileges |
1,284,435,541,000 |
I have written a "Hello, World!" C file myCFile.c on an x86 embedded board on the Debian OS.
#include <stdio.h>
int main()
{
printf("hello\n")
}
I compile the program: gcc myCFile.c
However,
tester@localhost:~/test$ ./a.out
-bash: ./a.out: Permission denied
tester@localhost:~/pravin$ ls -lrt
total 44
-rwxrwxrwx 1 tester test 54 Sep 7 07:33 myCFile.c
-rwxrwxrwx 1 tester test 16608 Sep 7 07:33 a.out
However, if I copy a.out to /run/user/1000, I can execute it.
tester@localhost:/run/user/1000$ ls
a.out bus gnupg systemd
Also, if I compile the C file as the root user and execute it, it works:
root@localhost:~# gcc myCFile.c
root@localhost:~# ./a.out
hello
root@localhost:~#
Is it something related to the NOEXEC flag?
My /etc/fstab file:
# Begin /etc/fstab
/dev/root        /          ext4      defaults             0 0
proc             /proc      proc      nosuid,noexec,nodev  0 0
sysfs            /sys       sysfs     nosuid,noexec,nodev  0 0
devpts           /dev/pts   devpts    gid=5,mode=620       0 0
tmpfs            /run       tmpfs     defaults,size=1500M  0 0
devtmpfs         /dev       devtmpfs  mode=0755,nosuid     0 0
# End /etc/fstab
LABEL=persistent /persistent ext4 defaults,data=journal,noatime,nosuid,nodev,noexec 0 2
/persistent/home /home none defaults,bind 0 0
/persistent/tmp /tmp none defaults,bind 0 0
|
Is it something related NOEXEC flag?
Yes; presumably /home is mounted noexec, which means you can't run binaries there. /run/user/1000 works because it's on a different file system, as is /root (root's home directory).
In your case,
mount -o remount,exec /persistent
should allow you to execute files in your home directory.
| Can not execute "Hello, World!" C program with user other than 'root' |
1,284,435,541,000 |
A file in a ls -l listing has permissions such as:
-rw-r-----+
How do I find the extended Access Control List (ACL) permissions denoted by the +?
|
The names getfacl and setfacl as in Tom Hale's answer are semi-conventional and are derived from the original TRUSIX names getacl and setacl for these utilities.
However, on several operating systems one simply uses the usual ls and chmod tools, which have been extended to handle ACLs; and one operating system has its own different set of commands.
The original TRUSIX scheme of POSIX-style ACLs has three permission flags in an access control list entry.
Later NFS4-style schemes divide up permissions in a more fine grained manner into between 11 and 17 permission flags.
https://superuser.com/a/384500/38062
Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
Portable Applications Standards Committee of the IEEE Computer Society (October 1997).
Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17.
S. Shepler, M. Eisler, D. Noveck (January 2010). "ACE Access Mask". Network File System (NFS) Version 4 Minor Version 1 Protocol. RFC 5661. IETF.
On OpenBSD and NetBSD
This situation does not arise.
OpenBSD and NetBSD both lack any ACL mechanisms.
NetBSD implements the system calls in a FreeBSD compatibility layer, but they only return an error.
OpenBSD simply doesn't have ACLs at all.
On Linux-based operating systems
Use getfacl as in Tom Hale's answer, or getrichacl.
Setting ACLs is done with setfacl or setrichacl.
Linux (a kernel, remember) has two forms of ACL.
It supports both the original TRUSIX scheme of POSIX-style ACLs and (since 2015, but stuck in "experimental" status for a long time because there aren't enough maintainers available to review the VFS layer in Linux) a NFS4-style scheme.
There are several implementations of standard commands on Linux-based operating systems, from toybox through BusyBox to GNU coreutils.
But in all cases chmod does not handle ACLs, and ls at most only indicates their overall presence or absence.
This is unlike Solaris, Illumos, or MacOS.
Nor is there one tool for getting, or setting, ACLs.
setfacl and getfacl handle TRUSIX ACLs, whilst one has to use setrichacl and getrichacl for NFS4-style ACLs.
This is unlike FreeBSD.
Rob Landley. "chmod". toybox Manual.
On FreeBSD
Use getfacl as in Tom Hale's answer. Setting ACLs is done with setfacl.
FreeBSD has two forms of ACL.
One has POSIX-style entries like the original TRUSIX model; the other has NFS4-style entries, with 14 permissions flags.
Unlike on Solaris, Illumos, and MacOS, on FreeBSD chmod does not handle ACLs, and ls only indicates their overall presence or absence.
But there is a single tool each for getting and setting ACLs, unlike Linux-based operating systems.
The getfacl and setfacl commands on FreeBSD handle both forms of ACL.
They have several extensions beyond TRUSIX for the NFS4-style, such as the -v option to getfacl that prints NFS4-style access controls in a long form with words, rather than as a list of single-letter codes.
Robert N. M. Watson (2009-09-14). getfacl. FreeBSD General Commands Manual. FreeBSD.
On MacOS
There are no getfacl and setfacl commands on MacOS.
MacOS is like Solaris and Illumos.
MacOS only supports NFS4-style access controls, with ACL entries divided up into 17 individual permission flags.
Apple rolled ACL functionality into existing commands.
Use the -e option to ls to view ACLs.
Use the -a/+a/=a and related options to chmod to set them.
ls. BSD General Commands Manual. 2002-05-19. Apple corporation.
On AIX
There are no getfacl and setfacl commands on AIX.
IBM uses its own command names.
AIX supports both POSIX-style (which IBM names "AIXC") and NFS4-style ACLs.
Use the aclget command to get ACLs.
Use the aclset command to set them.
Use the acledit command to edit them with a text editor.
Use the aclconvert command to convert POSIX-style to NFS4-style.
"Access Control List Management". IBM AIX V7.1 documentation. IBM.
On Illumos and Solaris
There are no getfacl and setfacl commands on Illumos and Solaris.
Solaris and Illumos are like MacOS.
Illumos and Solaris support both POSIX-style and NFS4-style ACLs.
Sun rolled ACL functionality into existing commands.
Use the -v or -V option to ls to view ACLs.
Use the A prefix for symbolic modes in the chmod command to set them.
ls. User Commands. 2014-11-24. Illumos Project.
chmod. User Commands. 2014-11-24. Illumos Project.
ls. Oracle Solaris 11 Information Library. 2011. Oracle.
On Cygwin
Use getfacl as in Tom Hale's answer.
Setting ACLs is done with setfacl.
Windows NT itself has an ACL scheme that is roughly NFS4-style with a set of drctpoxfew standard-and-specific permissions flags, albeit with a larger set of security principals and a generic-rights mechanism that maps a POSIX-style set of three flags onto its standard-and-specific-rights permissions system.
Cygwin presents this as a wacky admixture of a Solaris-like ACL API, the ID mapping mechanism from Microsoft's second POSIX subsystem for Windows NT (née Interix), and a Linux-like set of command-line tools that only recognize POSIX-style permissions.
getfacl. Cygwin Utilities. Cygnus.
| View extended ACL for a file with '+' in ls -l output |
1,284,435,541,000 |
I'd like to give a user or group full root access to the /bin/date command, including any manner of arguments as defined in the man pages.
I understand /etc/sudoers expects explicit command definitions, so currently I have the following:
%some_group ALL=(root) NOPASSWD: /bin/date
e.g. I want any user in some_group to be able to run:
sudo /bin/date +%T "12:34:56"
Plus countless other combinations of arguments.
I guess my question is, is there a way to use regex or safe (emphasis on safe!) wildcards to achieve the finer granularity?
UPDATE
I managed to achieve something similar with Cmnd_Alias. Why this doesn't work without one is a mystery; I haven't had time to read further on the inner workings of sudo and sudoers.
Your syntax wasn't quite accepted, but I managed to achieve what I needed with the following:
Cmnd_Alias DATE=/bin/date
%some_group ALL=(root) NOPASSWD: DATE
This did exactly what I needed as a member of the defined group.
|
You can accomplish this with sudo by adding a rule to your /etc/sudoers file.
As root or using sudo from your normal user account:
$ sudo visudo
This will open up the /etc/sudoers file in vi/vim. Once opened, add these lines:
Cmnd_Alias NAMEOFTHIS=/bin/date
%users ALL=(root) NOPASSWD: NAMEOFTHIS
Where "users" is a unix group that all the users are a member of. You can determine what groups a user is in with the groups <username> command, or look in the /etc/group and /etc/passwd files.
If you have a section like this, I'd add the rules above here like so:
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
# my extra rules
Cmnd_Alias NAMEOFTHIS=/bin/date
%users ALL=(root) NOPASSWD: NAMEOFTHIS
| Allow a specific user or group root access without password to /bin/date |
1,284,435,541,000 |
I have an issue: I need to change the permissions of a symlink from 777 to 755, and I have no idea how to do it. I have tried the chmod command but it's not working.
I want
lrwxrwxrwx 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0
to
lrwxr-xr-x 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0
|
Some systems support changing the permission of a symbolic link, others do not.
chmod -- change file modes or Access Control Lists (OSX and FreeBSD, using -h)
-h If the file is a symbolic link, change the mode of the link itself rather than the file that
the link points to.
chmod - change file mode bits (Linux)
chmod never changes the permissions of symbolic links; the chmod
system call cannot change their permissions. This is not a problem
since the permissions of symbolic links are never used. However, for
each symbolic link listed on the command line, chmod changes the
permissions of the pointed-to file. In contrast, chmod ignores
symbolic links encountered during recursive directory traversals.
Since the feature differs, POSIX does not mention the possibility.
From comments, someone suggests that a recent change to GNU coreutils provides the -h option. At the moment, that does not appear in the source-code for chmod:
while ((c = getopt_long (argc, argv,
("Rcfvr::w::x::X::s::t::u::g::o::a::,::+::=::"
"0::1::2::3::4::5::6::7::"),
long_options, NULL))
and long_options has this:
static struct option const long_options[] =
{
{"changes", no_argument, NULL, 'c'},
{"recursive", no_argument, NULL, 'R'},
{"no-preserve-root", no_argument, NULL, NO_PRESERVE_ROOT},
{"preserve-root", no_argument, NULL, PRESERVE_ROOT},
{"quiet", no_argument, NULL, 'f'},
{"reference", required_argument, NULL, REFERENCE_FILE_OPTION},
{"silent", no_argument, NULL, 'f'},
{"verbose", no_argument, NULL, 'v'},
{GETOPT_HELP_OPTION_DECL},
{GETOPT_VERSION_OPTION_DECL},
{NULL, 0, NULL, 0}
};
Permissions are set with chmod. Ownership is set with chown. GNU coreutils (like BSD) supports the ability to change a symbolic link's ownership. This is a different feature, since the ownership of a symbolic link is related to whether one can modify the contents of the link (and point it to a different target). Again, this started as a BSD feature (OSX, FreeBSD, etc), which is also supported with Linux (and Solaris, etc). POSIX says of this feature:
-h
For each file operand that names a file of type symbolic link, chown shall attempt to set the user ID of the symbolic link. If a group ID was specified, for each file operand that names a file of type symbolic link, chown shall attempt to set the group ID of the symbolic link.
So much for the command-line tools (and shell scripts). However, you could write your own utility, using a feature of POSIX which is not mentioned in the discussion of the chmod utility:
int chmod(const char *path, mode_t mode);
int fchmodat(int fd, const char *path, mode_t mode, int flag);
The latter function adds a flag parameter, which is described thus:
Values for flag are constructed by a bitwise-inclusive OR of flags from the following list, defined in <fcntl.h>:
AT_SYMLINK_NOFOLLOW
If path names a symbolic link, then the mode of the symbolic link is changed.
That is, the purpose of fchmodat is to provide the feature you asked about. But the command-line chmod utility is documented (so far) only in terms of chmod (without this feature).
fchmodat, by the way, appears to have started as a poorly-documented feature of Solaris which was adopted by the Red Hat and GNU developers ten years ago, and suggested by them for standardization:
one more openat-style function required: fchmodat
Austin Group Minutes of the 17 May 2007 Teleconference
[Fwd: The Austin Group announces Revision Draft 2 now available]
According to The Linux Programming Interface, since 2.6.16, Linux supports AT_SYMLINK_NOFOLLOW in these calls: faccessat, fchownat, fstatat, utimensat, and linkat was implemented in 2.6.18 (both rather "old": 2006, according to OSNews).
Whether the feature is useful to you, or not, depends on the systems that you are using.
| How to change the Symlink permission? |
1,284,435,541,000 |
I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files?
I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.
|
You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder. For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak:
#!/bin/sh
ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak
mkdir -p $CLONE_DIR
inotifywait -mr --format='%w%f' -e create "$ORIG_DIR" | while IFS= read -r file; do
    echo "$file"
    DIR=$(dirname "$file")
    mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
    cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done
Since they are hard links, they should be updated when the program modifies them but not deleted when the program removes them. You can delete the hard linked clones normally.
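The persistence property the script relies on can be seen with plain ln (the paths here are illustrative):

```shell
# A hard link is a second directory entry for the same inode, so the
# data survives as long as at least one link remains.
dir=$(mktemp -d)
echo payload > "$dir/orig"
ln "$dir/orig" "$dir/clone"
rm "$dir/orig"            # the watched program "deletes" its file...
cat "$dir/clone"          # ...but the clone still reads "payload"
rm -rf "$dir"
```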
Note that this approach is nowhere near atomic so you rely on this script to create the hard links before the program can delete the newly created file.
If you want to clone all changes to /tmp, you can use a more distributed version of the script:
#!/bin/sh
TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p $CLONE_DIR
wait_dir() {
    inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while IFS= read -r file; do
        echo "$file"
        DIR=$(dirname "$file")
        mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
        cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
    done
}
trap "trap - TERM && kill -- -$$" INT TERM EXIT
inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while IFS= read -r file; do
if ! [ -d "$file" ]; then
continue
fi
echo "setting up wait for $file"
wait_dir "$file" &
done
| Watch /tmp for file creation and prevent deletion of files? [duplicate] |
1,284,435,541,000 |
When I change the permissions on a file using chmod, existing file descriptors can continue to access the file under the previous permissions.
Can I cause those existing file descriptors to close or fail or become unusable immediately after the permission change?
|
The kernel doesn't check permissions on file descriptions. They can even be duplicated to other processes that never had access to the original file by fd passing.
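This is easy to demonstrate from a shell (run it as a non-root user to see the effect matter, since root bypasses permission checks anyway):

```shell
# An already-open descriptor keeps working after the mode changes:
# permissions are checked at open() time, not on each read().
tmp=$(mktemp)
echo secret > "$tmp"
exec 3< "$tmp"      # open fd 3 while we still have read permission
chmod 000 "$tmp"    # a fresh open() would now fail for a non-root user
cat <&3             # but the existing fd still yields "secret"
exec 3<&-
rm -f "$tmp"
```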
The only thing I think you could try would be to manually find the processes with open file descriptors, and pull a sneaky trick to close them1. An example of such a "sneaky trick" is to attach a debugger (gdb) and use it to close the fd.
This is a very extreme thing to do. There's no way to know how a process will behave if its FD is suddenly closed. In some cases a process may have the file mapped into memory, so even if you managed to close the descriptor, you would also have to remove the memory mapping; doing that behind the process's back would make it crash with a segmentation fault.
Much better would be to find which processes are using a file and manually kill them. At least that way you can ask them to shutdown nicely and not corrupt other data.
1 as mentioned in comments, the best shot of making this work would be to call dup2 rather than close to change the fd to point to /dev/null instead of the original file. This is because the code won't expect it to have closed and may do some very weird and insecure things when the fd (number) gets recycled.
| Can I get UNIX file permissions to take effect immediately for all processes? |
1,284,435,541,000 |
I've faced a really strange issue today, and am totally helpless about it.
Some of the servers I manage are monitored with Nagios. Recently I saw a disk usage probe failing with this error:
DISK CRITICAL - /sys/kernel/debug/tracing is not accessible: Permission denied
I wanted to investigate and my first try was to check this directory's permissions, and compare these with another server (which is working well). Here are the commands I ran on the working server, and you'll see that as soon as I cd into the directory, its permissions are changed:
# Here we've got 555 for /sys/kernel/debug/tracing
root@vps690079:/home/admin# cd /sys/kernel/debug
root@vps690079:/sys/kernel/debug# ll
total 0
drwx------ 30 root root 0 Jul 19 13:13 ./
drwxr-xr-x 13 root root 0 Jul 19 13:13 ../
…
dr-xr-xr-x 3 root root 0 Jul 19 13:13 tracing/
drwxr-xr-x 6 root root 0 Jul 19 13:13 usb/
drwxr-xr-x 2 root root 0 Jul 19 13:13 virtio-ports/
-r--r--r-- 1 root root 0 Jul 19 13:13 wakeup_sources
drwxr-xr-x 2 root root 0 Jul 19 13:13 x86/
drwxr-xr-x 2 root root 0 Jul 19 13:13 zswap/
# I cd into the folder, and it (./) becomes 700!!
root@vps690079:/sys/kernel/debug# cd tracing/
root@vps690079:/sys/kernel/debug/tracing# ll
total 0
drwx------ 8 root root 0 Jul 19 13:13 ./
drwx------ 30 root root 0 Jul 19 13:13 ../
-r--r--r-- 1 root root 0 Jul 19 13:13 available_events
-r--r--r-- 1 root root 0 Jul 19 13:13 available_filter_functions
-r--r--r-- 1 root root 0 Jul 19 13:13 available_tracers
…
# Next commands are just a dumb test to double-check what I'm seeing
root@vps690079:/sys/kernel/debug/tracing# cd ..
root@vps690079:/sys/kernel/debug# ll
total 0
drwx------ 30 root root 0 Jul 19 13:13 ./
drwxr-xr-x 13 root root 0 Sep 27 10:57 ../
…
drwx------ 8 root root 0 Jul 19 13:13 tracing/
drwxr-xr-x 6 root root 0 Jul 19 13:13 usb/
drwxr-xr-x 2 root root 0 Jul 19 13:13 virtio-ports/
-r--r--r-- 1 root root 0 Jul 19 13:13 wakeup_sources
drwxr-xr-x 2 root root 0 Jul 19 13:13 x86/
drwxr-xr-x 2 root root 0 Jul 19 13:13 zswap/
Have you got any idea what could causes this behavior?
Side note: using chmod to re-establish permissions does not seem to fix the probe.
|
/sys
/sys is sysfs, an entirely virtual view into kernel structures in memory that reflects the current system kernel and hardware configuration, and does not consume any real disk space. New files and directories cannot be written to it in the normal fashion.
Applying disk space monitoring to it does not produce useful information and is a waste of effort. It may have mount points for other RAM-based virtual filesystems inside, including...
/sys/kernel/debug
/sys/kernel/debug is the standard mount point for debugfs, which is an optional virtual filesystem for various kernel debugging and tracing features.
Because it's for debugging features, it is supposed to be unnecessary for production use (although you might choose to use some of the features for enhanced system statistics or similar).
Since using the features offered by debugfs will in most cases require being root anyway, and its primary purpose is to be an easy way for kernel developers to provide debug information, it may be a bit "rough around the edges".
When the kernel was loaded, the initialization routine for the kernel tracing subsystem registered /sys/kernel/debug/tracing as a debugfs access point for itself, deferring any further initialization until it's actually accessed for the first time (minimizing the resource usage of the tracing subsystem in case it turns out it's not needed). When you cd'd into the directory, this deferred initialization was triggered and the tracing subsystem readied itself for use. In effect, the original /sys/kernel/debug/tracing was initially a mirage with no substance, and it only became "real" when (and because) you accessed it with your cd command.
debugfs does not use any real disk space at all: all the information contained within it will vanish when the kernel is shut down.
/sys/fs/cgroup
/sys/fs/cgroup is a tmpfs-type RAM-based filesystem, used to group various running processes into control groups. It does not use real disk space at all. But if this filesystem is getting nearly full for some reason, it might be more serious than just running out of disk space: it might mean that
a) you're running out of free RAM,
b) some root-owned process is writing garbage to /sys/fs/cgroup, or
c) something is causing a truly absurd number of control groups to be created, possibly in the style of a classic "fork bomb" but with systemd-based services or similar.
Bottom line
A disk usage probe should have /sys excluded because nothing under /sys is stored on any disk whatsoever.
If you need to monitor /sys/fs/cgroup, you should provide a dedicated probe for it that will provide more meaningful alerts than a generic disk space probe.
| "cd" into /sys/kernel/debug/tracing causes permission change |
1,284,435,541,000 |
I have a bash script that has to rsync to download files writing them locally, and then needs to set the owner to apache, and the group to a particular user group (that apache is not a member of).
Is there a way to create those files with those ownerships as they're being written by the rsync process, without having to go through and change them after the fact using chown? There are so many files that the time it takes to go through them later is prohibitive.
I have to do this for multiple user groups, so I shouldn't be adding apache to these groups, and certainly can't make all of them the default group.
In other words: is there a way root can create a file as user X and group Y when X is not a member of Y?
I've tried using runuser, but I'm unable to set the group (presumably because apache doesn't belong to the group).
I know you can use chmod to change permissions and add any user/group combination. What I'm asking is if there is a way to open a file for writing and use any user/group combo while creating it.
Attempt using sudo:
[root@centos7 tmp]# groups angelo
angelo : angelo wheel
[root@centos7 tmp]# groups apache
apache : apache
[root@centos7 tmp]# sudo -u angelo -g apache touch angelo-file
Sorry, user root is not allowed to execute '/bin/touch angelo-file' as angelo:apache on centos7
[root@centos7 tmp]# ls -ld angelo-file
ls: cannot access angelo-file: No such file or directory
[root@centos7 tmp]# sudo -u angelo -g angelo touch angelo-file
[root@centos7 tmp]# ls -ld angelo-file
-rw-r--r-- 1 angelo angelo 0 Nov 12 03:13 angelo-file
|
If you want to create a file as a specific user and group without using chown, you can use sudo and specify the user and group:
sudo -u \#49 -g \#58 touch /tmp/something
Note that the user you specify must have permission to write to the directory where you attempt this.
Or, you can start a shell as the current user, with the group set to something else:
sudo runuser "$USER" -g somegroup
I tried this on a Vagrant box with success:
[vagrant@localhost ~]$ sudo runuser "$USER" -g floppy
[vagrant@localhost ~]$ touch testfile
[vagrant@localhost ~]$ ls -l testfile
-rw-r--r--. 1 vagrant floppy 0 Nov 9 15:57 testfile
[vagrant@localhost ~]$
This is despite the "vagrant" user not being part of the "floppy" group.
| Create a file as a different user and group |
1,284,435,541,000 |
I'm trying to set up a Debian server that will run several network-based services. These services need access to an external network drive to store their data. For security reasons, I have set up each service to run under its own user. To allow them all to access the network share, I created a new group, driveaccess, with GID 1003.
I then set up the network share by adding the following to /etc/fstab
//192.168.42.2/Data/ /media/Data cifs guest,rw,mand,gid=1003,forcegid,user=duckies%swordfish 0 0
After mounting the drive, the service accounts see the permissions as
-rwxr-xr-x 1 root driveaccess 1544704 Jun 1 2013 AppData1.dat
And the processes can read the data with no problems, but any attempt to write to the drive fails
touch: cannot touch `test.txt': Permission denied
What do I need to add to the fstab to let everything in the driveaccess group write to the share?
I already executed:
usermod -aG driveaccess serviceaccount1
|
You probably want to add explicit permissions to the mounted file system in the fstab entry:
<your other options>,file_mode=0770,dir_mode=0770
This will be on the safe side by allowing all group members to read, write and execute all files and prohibiting access to any other user of the system. If you still want read access for other users, replace the final 0 with an appropriate value, e.g.
<your other options>,file_mode=0775,dir_mode=0775
for read and execute rights.
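Merged with the entry from the question, the resulting fstab line would look something like this (share path and credentials carried over from the question):

```
//192.168.42.2/Data/ /media/Data cifs guest,rw,mand,gid=1003,forcegid,file_mode=0770,dir_mode=0770,user=duckies%swordfish 0 0
```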
| Granting all users access to mounted CIFS shares |
1,284,435,541,000 |
Actually I want to find all the files and folders that are under the same group,
for example I have this :
[test] user1/sam
a.txt user9/sam
b.txt user4/sam
I'm looking for a command that show me, all files and folders under the same group.
|
Using GNU find, you can search for all directories and files that belong to groupX:
find / -group groupX
From man find:
-group gname
File belongs to group gname (numeric group ID allowed).
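For example (using the caller's own primary group so the sketch works anywhere, and restricting the search to a scratch directory):

```shell
# Find every regular file in a tree that belongs to a given group.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"
find "$dir" -type f -group "$(id -gn)"   # prints both files
rm -rf "$dir"
```

Searching from / works the same way, but as a non-root user you will usually want to append 2>/dev/null to hide permission-denied noise.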
| Find files belonging to a group |
1,284,435,541,000 |
I have a deployment script (based on capifony) that sets the permissions on specific servers for a Symfony2 installation. It contains the following two commands to do this for several directories:
setfacl -R -m u:www-data:rwx -m u:`whoami`:rwX app/cache
setfacl -dR -m u:www-data:rwx -m u:`whoami`:rwX app/cache
These two commands are on the Symfony2 site as a way to fix the permissions, however, these looked strikingly similar to me. So I had a look at the manpages for setfacl, and from what I could understand, the second command does exactly what the first one does with an additional option (which I don't quite understand). My question is, is my assumption correct? If so, would it have the same effect if I removed the first command?
|
The first command will change the permissions of any pre-existing files/directories. The -d in the second command is critical to setting the default permissions going forward for any directories, which in turn will provide a default set of ACLs for any files within these directories.
NOTE: That in both instances the commands will run recursively via the -R switch.
Regarding the -d switch, from the setfacl man page:
-d, --default
All operations apply to the Default ACL. Regular ACL entries in the
input set are promoted to Default ACL entries. Default ACL entries
in the input set are discarded. (A warning is issued if that happens).
This excerpt also explains it fairly well:
There are two types of ACLs: access ACLs and default ACLs. An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional.
Source: 8.2. Setting Access ACLs.
Example
Say I have this directory structure.
$ tree
.
|-- dir1
| |-- dirA
| | `-- file1
| `-- fileA
`-- file1
2 directories, 3 files
Now let's set the permissions using the first setfacl command in your question:
$ setfacl -R -m u:saml:rwx -m u:samtest:rwX .
Which results in the following:
$ getfacl dir1/ file1
# file: dir1
# owner: saml
# group: saml
user::rwx
user:saml:rwx
user:samtest:rwx
group::rwx
mask::rwx
other::r-x
# file: file1
# owner: saml
# group: saml
user::rw-
user:saml:rwx
user:samtest:rwx
group::rw-
mask::rwx
other::r--
Without the -dR command run here, new directories would not be covered by your ACLs:
$ mkdir dir2
$ getfacl dir2
# file: dir2
# owner: saml
# group: saml
user::rwx
group::rwx
other::r-x
But if we remove this directory and run the setfacl -dR ... command and repeat this operation above:
$ rmdir dir2
$ setfacl -dR -m u:saml:rwx -m u:samtest:rwX .
Now the permissions look quite different:
$ getfacl dir1/ file1
# file: dir1/
# owner: saml
# group: saml
user::rwx
user:saml:rwx
user:samtest:rwx
group::rwx
mask::rwx
other::r-x
default:user::rwx
default:user:saml:rwx
default:user:samtest:rwx
default:group::rwx
default:mask::rwx
default:other::r-x
# file: file1
# owner: saml
# group: saml
user::rw-
user:saml:rwx
user:samtest:rwx
group::rw-
mask::rwx
other::r--
And now our newly created directory will pick up these "default" permissions:
$ mkdir dir2
$ getfacl dir2
# file: dir2
# owner: saml
# group: saml
user::rwx
user:saml:rwx
user:samtest:rwx
group::rwx
mask::rwx
other::r-x
default:user::rwx
default:user:saml:rwx
default:user:samtest:rwx
default:group::rwx
default:mask::rwx
default:other::r-x
Having these permissions in place on dir2 will now enforce these permissions on files within dir2 as well:
$ touch dir2/fileA
$ getfacl dir2/fileA
# file: dir2/fileA
# owner: saml
# group: saml
user::rw-
user:saml:rwx #effective:rw-
user:samtest:rwx #effective:rw-
group::rwx #effective:rw-
mask::rw-
other::r--
| setfacl: Are these two commands the same? |
1,284,435,541,000 |
Based on part of the first answer of this questions:
read from a file (the kernel must check that the permissions allow you to read from said file, and then the kernel carries out the actual instructions to the disk to read the file)
It requires root privilege to change the permissions of a file. With root privilege, a user can access any file without worrying about permissions. So, are there any relationships between root and the kernel?
|
First, a clarification:
It requires root privilege to change the permissions of a file.
From man 2 chmod we can see that the chmod() system call will return EPERM (a permissions error) if:
The effective UID does not match the owner of the file, and the process is not privileged (Linux: it does not have the CAP_FOWNER capability).
This typically means that you either need to be the owner of the file or the root user. But we can see that the situation in Linux might be a bit more complicated.
So, are there any relationships between root and the kernel?
As the text you quoted has pointed out, the kernel is responsible for checking that the UID of the process making a system call (that is, the user it is running as) is allowed to do what it is asking. Thus, root's superpowers come from the fact that the kernel has been programmed to always permit an operation requested by the root user (UID=0).
In the case of Linux, most of the various permissions checks that happen check whether the given UID has the necessary capability. The capabilities system allows more fine grained control over who is allowed to do what.
However, in order to preserve the traditional UNIX meaning of the "root" user, a process executed with the UID of 0 has all capabilities.
Note that while processes running as UID=0 have superuser privileges they still have to make requests of the kernel via the system call interface.
Thus, a userspace process, even running as root, is still limited in what it can do as it is running in "user mode" and the kernel is running in "kernel mode" which are actually distinct modes of operation for the CPU itself. In kernel mode a process can access any memory or issue any instruction. In user mode (on x86 CPUs there are actually a number of different protected modes), a process can only access its own memory and can only issue some instructions. Thus a userspace process running as root still only has access to the kernel mode features that the kernel exposes to it.
| What is the relationship between root and kernel? [closed] |
1,284,435,541,000 |
I have tested and confirmed that after rebooting, permissions are reset on /var/log back to whatever is the default for each distro/version listed below. The question is, why?
CentOS 7
Ubuntu >= 15.04
Debian 8/9
From what I can tell, the default permissions for CentOS 7 and Debian 8/9 are 755, root:root. The default permissions for Ubuntu 15.04+ are 775, root:syslog. If I install upstream rsyslog from the PPA, then the default permissions become 755 (confirmed on Ubuntu 16.04, presumably for versions of Ubuntu between 15.04 and 16.04 also).
Any attempt to change the permissions on /var/log from the default results in a reset, presumably on the next boot. I read someone's suggestion on a related post that it could be the rsyslog settings at work, but I toggled those settings to match my expectations and even went with some that couldn't work (e.g., 700), and the result was still the same: a reset back to whatever is the default.
I then uninstalled rsyslog (a VM with a stock rsyslog install, another with upstream rsyslog) and the permissions were still reset, so evidently the reset work is not done by rsyslog.
Is this something specific to systemd? Is this a setting I can actually override and have "stick" between reboots?
Thank you in advance for any help that you can provide.
P.S.
My testing was performed on simple installations of the distro using LVM, but with one volume in an attempt to rule out mount options for the lv being the problem.
|
I'm still digging into the specifics, but it looks like these files play a role in the permissions management of /var/log at boot time:
/usr/lib/tmpfiles.d/var.conf
/usr/lib/tmpfiles.d/00rsyslog.conf
Ironically I found them when I ran grep -ri '/var/log' /var/log on an Ubuntu 16.04 box and saw this message:
./syslog.1:Jul 9 21:18:15 ubuntu-virtual-machine systemd-tmpfiles[616]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
I looked in that file and found this:
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
# See tmpfiles.d(5) for details
q /var 0755 - - -
L /var/run - - - - ../run
d /var/log 0755 - - -
f /var/log/wtmp 0664 root utmp -
f /var/log/btmp 0600 root utmp -
d /var/cache 0755 - - -
d /var/lib 0755 - - -
d /var/spool 0755 - - -
I started tweaking the values for the d /var/log 0755 - - - line, but with no discernable change from my efforts I looked around further in that directory and found the /usr/lib/tmpfiles.d/00rsyslog.conf file.
In that file:
# Override systemd's default tmpfiles.d/var.conf to make /var/log writable by
# the syslog group, so that rsyslog can run as user.
# See tmpfiles.d(5) for details.
# Type Path Mode UID GID Age Argument
d /var/log 0775 root syslog -
root@ubuntu-virtual-machine:/usr/lib/tmpfiles.d# dpkg -S /usr/lib/tmpfiles.d/00rsyslog.conf
rsyslog: /usr/lib/tmpfiles.d/00rsyslog.conf
So the rsyslog package provides a conf include file that attempts to override the values set within the tmpfiles.d/var.conf conf file.
The result is that when I uninstall rsyslog, the tmpfiles.d/var.conf conf file settings apply, which in this case is 0755.
I'll need to research further into whether tmpfiles.d is intended only for package maintainers or whether sysadmins also need to manage files within that area.
Edit:
Turns out that there are three directories, with the first having greatest precedence (and intended for admins to use in order to override settings from the other two):
/etc/tmpfiles.d/*.conf
/run/tmpfiles.d/*.conf
/usr/lib/tmpfiles.d/*.conf
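So, to make a custom /var/log mode survive reboots, the admin-facing override is a drop-in under /etc/tmpfiles.d (the file name and mode below are hypothetical; the d line follows the Type/Path/Mode/UID/GID/Age format seen in var.conf above):

```
# /etc/tmpfiles.d/var-log.conf -- overrides lower-precedence entries
d /var/log 0755 root root -
```

Running systemd-tmpfiles --create should apply it without waiting for a reboot.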
More info:
https://developers.redhat.com/blog/2016/09/20/managing-temporary-files-with-systemd-tmpfiles-on-rhel7/
https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html
| Permissions on /var/log reset on boot |
1,284,435,541,000 |
I'm trying to set a particular USB drive to always mount read only. If I plug it in, it is seen as sdb with a single partition, sdb1. Here are some relevant udevadm lines (not the entire output of course):
$ udevadm info -a -n /dev/sdb1
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0/host21/target21:0:0/21:0:0:0/block/sdb/sdb1':
KERNEL=="sdb1"
SUBSYSTEM=="block"
DRIVER==""
ATTR{ro}=="0"
ATTR{size}=="976768002"
ATTR{stat}==" 473 30586 33938 3460 5 0 40 1624 0 2268 5084"
ATTR{partition}=="1"
OK, so I wrote the following udev rule and saved it as /etc/udev/rules.d/10-usbdisk.rules:
SUBSYSTEM=="block",
ATTR{size}=="976768002",
MODE="0555"
According to this, using size should be enough, but I have also tried other permutations. In any case, the rule does seem to be read (again, selected output lines; you can see the entire output here):
$ udevadm test $(udevadm info -q path -n /dev/sdb1) 2>&1
[...]
read rules file: /etc/udev/rules.d/10-usbdisk.rules
[...]
MODE 0555 /etc/udev/rules.d/10-usbdisk.rules:4
So, it looks like the rule should be applied and it looks like the MODE="0555" is the correct syntax. However, when I actually plug the disk in, I can happily create/delete files on it.
OS: Debian testing (LMDE)
So, what am I doing wrong? How can I mount a particular USB drive as read only automatically using udev1?
1 I know how to do this with fstab but fstab settings are ignored by gvfs. My objective is to have this mounted automatically as read only in the GUI. Presumably this will have to be done via udev or gvfs somehow.
|
Ok, the summary is that Nautilus uses GVFS and you need to tell udev to use GVFS too when reading the fstab entries, you can do this using:
/dev/block-device /mount/point auto x-gvfs-show,ro 0 0
x-gvfs-show will tell udev and anyone interested to use the GVFS helper to mount the filesystem, so GVFS has full control over mounting, unmounting, moving mount points, etc.
Let's look at how drives are mounted in modern Linux systems with GUIs (specifically Nautilus):
Nautilus uses GVFS as a backend to mount FTP, SMB, and block devices, among other things, into the file system. The tool that GNOME designed for this purpose, called Disks, is what modifies the behavior of GVFS. Now here comes the fun.
Nautilus ignores anything that wasn't mounted using GVFS (for example, mounts made via fstab) and gives you only rudimentary control over such mounts: just mount and unmount. (Nautilus doesn't ask GVFS to mount or unmount devices that were not set up through GVFS; that includes udev, fstab, mount, and anything else.) Using the permissions and options stored in fstab/udev you can use these filesystems accordingly, but you can't modify their behavior through GVFS. If something was mounted using sudo mount -o rw /dev/sda3, Nautilus tells udev that it doesn't have permission to modify the mount point, so it passes the responsibility to udev, which in turn asks polkit for permissions. If you had used GVFS, Nautilus itself would unmount the device, without extra permissions, dialogs, etc.
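Applied to the question's goal (a particular drive always mounted read-only in the GUI), a hedged fstab entry; the UUID and mount point are placeholders, and blkid reports the drive's actual filesystem UUID. nofail is an extra assumption here so boot doesn't stall when the drive is unplugged:

```
UUID=XXXX-XXXX  /media/usbdisk  auto  x-gvfs-show,ro,nofail  0  0
```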
| How can I create a udev rule to mount a USB drive read only? |
1,284,435,541,000 |
I want to set up a directory where all new files and directories have a certain access mask and also the directories have the sticky bit set (the t one, which restricts deletion of files inside those directories).
For the first part, my understanding is that I need to set the default ACL for the parent directory. However, new directories do not inherit the t bit from the parent. Hence, non-owners can delete files in the subdirectories. Can I fix that?
|
This is a configuration that allows members of a group, acltest, to create
and modify group files while disallowing the deletion and renaming of files
except by their owner and "others," nothing. Using the username, lev and
assuming umask of 022:
groupadd acltest
usermod -a -G acltest lev
Log out of the root account and the lev account. Log in and become root or use sudo:
mkdir /tmp/acltest
chown root:acltest /tmp/acltest
chmod 0770 /tmp/acltest
chmod g+s /tmp/acltest
chmod +t /tmp/acltest
setfacl -d -m g:acltest:rwx /tmp/acltest
setfacl -m g:acltest:rwx /tmp/acltest
ACL cannot set the sticky bit, and the sticky bit is not copied to subdirectories. But, you might use inotify or similar software to detect changes in the file system, such as new directories, and then react accordingly.
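That limitation is easy to demonstrate in a scratch directory (standing in for /tmp/acltest):

```shell
# Show that the sticky bit on a parent is not inherited by new subdirectories.
parent=$(mktemp -d)
chmod 1770 "$parent"             # group-accessible, sticky bit set
mkdir "$parent/sub"              # created the way a user would create it
stat -c '%a %n' "$parent" "$parent/sub"
# The parent prints mode 1770; the subdirectory's mode has no leading 1.
rm -r "$parent"
```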
For example, in Debian:
apt-get install inotify-tools
Then make a script for inotify, like /usr/local/sbin/set_sticky.sh.
#!/usr/bin/env bash
# Watch /tmp/acltest recursively; set the sticky bit on newly created directories.
inotifywait -m -r -e create /tmp/acltest |
while read -r path event file; do
    case "$event" in
        *ISDIR*)
            chmod +t "$path$file"   # quoted, in case of spaces in names
            ;;
    esac
done
Give it execute permission for root: chmod 0700 /usr/local/sbin/set_sticky.sh. Then run it at boot time from, say, /etc/rc.local or whichever RC file is appropriate:
/usr/local/sbin/set_sticky.sh &
Of course, in this example, /tmp/acltest should disappear on reboot. Otherwise, this should work like a charm.
| Set sticky bit by default for new directories via ACL? |
1,284,435,541,000 |
I am trying to set up an Apache server on my Kubuntu 13.04 laptop. I have installed the apache2 package and run sudo a2enmod userdir; sudo service apache2 restart, but still when I visit http://localhost/~user, it says something like this:
Forbidden
You don't have permission to access /~user on this server.
Apache/2.2.22 (Ubuntu) Server at localhost Port 80
Result of tail /var/log/apache2/access.log
127.0.0.1 - - [02/Aug/2013:16:22:01 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:16:22:02 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:17:35:30 +0200] "GET /~kaiyin HTTP/1.1" 403 501 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:17:35:30 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:17:35:30 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:17:36:26 +0200] "GET /favicon.ico HTTP/1.1" 404 499 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:17:36:26 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:21:05:17 +0200] "GET /~kaiyin HTTP/1.1" 403 501 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:21:05:17 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
127.0.0.1 - - [02/Aug/2013:21:05:17 +0200] "GET /favicon.ico HTTP/1.1" 404 498 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36"
Result of tail /var/log/apache2/error.log
[Fri Aug 02 21:05:17 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:05:17 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:06:54 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:06:54 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:06:59 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /~kaiyin denied
[Fri Aug 02 21:06:59 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:06:59 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:07:17 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /~kaiyin denied
[Fri Aug 02 21:07:17 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
[Fri Aug 02 21:07:17 2013] [error] [client 127.0.0.1] File does not exist: /var/www/favicon.ico
|
The public_html directories need to have their permissions set like this so that the user Apache runs as can access them:
$ chmod -R 755 ~/public_html
Still not working?
If you look in your Apache error logs you might see a line like this:
[Fri Aug 02 21:06:59 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /~kaiyin denied
This is telling you that Apache doesn't have permissions to navigate to your user's directory (~kaiyin) in this example.
How to fix this?
You need to make sure that the read and execute bits are set on the user's home directory, either for a group that Apache is a member of or for the "others" class, so that Apache can traverse down to the public_html folder below it.
Example
/home
|-- [drwxr-x---] /home/sam
/home/sam
|-- [drwxr-xr-x] /home/sam/public_html
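The fix above can be reproduced in a scratch directory standing in for /home/sam (the real commands would target the actual home directory):

```shell
# Grant traverse access on the home dir so Apache (as "others") can reach
# public_html, and make the content itself world-readable.
home=$(mktemp -d)                 # stands in for /home/sam
mkdir -p "$home/public_html"
chmod 750 "$home"                 # before: others cannot traverse
chmod o+x "$home"                 # after: mode 751, traverse allowed
chmod -R 755 "$home/public_html"  # world-readable content
stat -c '%a %n' "$home" "$home/public_html"
rm -r "$home"
```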
References
How can I make a public HTML folder in Ubuntu?
| Apache2 userdir enabled, but still have no access |
1,284,435,541,000 |
I'm having some doubts about how to install and configure things so that Linux can correctly read/write to an NTFS-formatted hard drive used as a backup target for various machines (Windows included; that's why I need NTFS).
For now, I've read some pages, and I feel I need step-by-step guidance from someone who has already done this, so as not to ruin things here.
What I need is to be able to save a Linux file, with its chown and chmod settings, to a NTFS filesystem, and be able to retrieve this information back.
What I have today is a NTFS that saves all files with the owner:group of who mounted the volume, and permissions rwxrwxrwx for all.
I read this article but it is too much information and I could not understand some things when trying to actually implement:
Is it stable in the current version?
Does Ubuntu 10.04 have all things needed already? Or do I need to install anything?
What is the relation of POSIX ACL to this? Do I need install anything regarding this or just ntfs-3g will do?
Where are Ubuntu packages to run with apt-get?
If I map the users (with usermap) and bring the hard drive to another computer with different users, will I be able to read the files (under Linux/Windows)?
For one thing, I noticed usermap was not ready to use. So I downloaded and compiled (but did not install, because I was afraid of messing things up here) the latest version of ntfs-3g. The README file says:
> TESTING WITHOUT INSTALLING
>
> Newer versions of ntfs-3g can be
> tested without installing anything and
> without disturbing an existing
> installation. Just configure and make
> as shown previously. This will create
> the scripts ntfs-3g and lowntfs-3g in
> the src directory, which you may
> activate for testing :
>
> ./configure
> make
>
> then, as root :
> src/ntfs-3g [-o mount-options] /dev/sda1 /mnt/windows
>
> And, to end the test, unmount the
> usual way :
> umount /dev/sda1
But it tells nothing about the mount-options that I need to use to have full backups (full == backing up / restoring files, owners, groups and permissions).
This faq says:
Why have chmod and chown no effect?
By default files on NTFS are owned by root with full access to everyone.
To get standard per-file protection you should mount with the "permissions"
option. Moreover, if you want the permissions to be interoperable with a
specific Windows configuration, you have to map the users.
Also, I used the ntfs-3g.usermap /dev/sdb2 tool to create the map file and got this result:
# Generated by usermap for Linux, v 1.1.4
:carl:S-1-5-21-889330461-3416208041-4118870141-511
:default:S-1-5-21-2592120051-4195220491-4132615201-511
carl:carl:S-1-5-21-889330462-3416208046-4118870148-1000
Now, this default was mapped because I typed "default" for one file that was under the default user during the inquiry. I'm not sure I did that right. I don't care about any users but carl (and root, for that matter), or any groups but users. I saw the FAQ telling me to answer the group question with the username. Shouldn't the group be "users" instead? And how can I check, booting Windows, whether this mapping is correct?
Summary:
I need rsync to save Linux files and Windows files from various computers, to a NTFS external USB HD, without losing file permissions.
I don't know how to install and run the driver ntfs-3g to allow chown, chmod and anything else that is needed to make that possible. What options, and where?
All computers have carl username, but that doesn't guarantee that their SID, UID or GID are the same.
The environment is composed of 18 "documents" folders, 6 of them Linux, 6 of them Win7, 6 of them virtualbox Win XP. All of them will be a single "documents" folder into the NTFS external hard drive.
Reference:
I also read this forum, and maybe it is useful to someone trying to help me here.
Also thought of these other three solutions, making the filesystem ext. But the external HD may be used in Windows boxes; I could not install or have write to install drivers, so it needs to be readable easily by any Windows and NTFS is the standard.
All my Google searches turned up material too technical to follow.
|
You can use ntfs-3g, but make sure you place the mappings file in the right place. Once you do that you should see file ownerships in ../User/name match the unix user.
However, if you just want to use it as backup you should probably just save a big tarball onto the ntfs location. If you also want random access you can place an ext2 image file and loop mount it. That will save you from a lot of these headaches.
Ok, assuming you will mount NTFS under /ntfs
run ntfs-3g.usermap /dev/sdb1 (or whatever your ntfs partition is). Answer the questions.
Then mkdir /ntfs/.NTFS-3G. Then cp UserMapping /ntfs/.NTFS-3G/UserMapping. Now put an entry in /etc/fstab:
/dev/sdb1 /ntfs ntfs-3g defaults 0 0
Then mount /ntfs. The command ls -l /ntfs/Users/Carl should show your Linux user as the owner of files there.
| Is NTFS under linux able to save a linux file, with its chown and chmod settings? |
1,300,982,059,000 |
As I understand Unix file systems, any file on a unix system must belong to a group and a user of the said system. A file cannot belong to a group or user that does not exist on the system.
From that assumption, there are a few questions that come to mind. What happens to the group and user attributes when media is transferred between computers, be it via a flash drive, a CD-ROM, or a network share? To whom does the file belong on the new system?
Can you limit the data to only work on your system? (Not talking about encryption here, just basics.)
Also, when you transfer data between two computers, are there ways to ensure that the group and user attributes stay intact (what belongs to root will belong to root on the new system and the same with the normal user).
|
On all native unix filesystems, file ownership is stored in the form of user and group IDs. This is also the case for basic NFS operation (although there are other possibilities at least in NFSv4) and for traditional unix archive formats such as tar.
A file can in fact belong to a user or group that doesn't exist. The file belongs to a particular ID, but there's no obligation that the ID is listed in /etc/passwd or other user or group database. For example, if you store your user database on NIS or LDAP, and the database server is temporarily inaccessible, the users' files still exist.
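This is easy to see on any file: the kernel stores only numeric IDs in the inode, and names are looked up in the user database purely for display.

```shell
# Ownership is stored on disk as numeric IDs; names are display-only.
f=$(mktemp)
stat -c 'uid=%u gid=%g' "$f"   # the raw numeric IDs stored with the file
ls -ln "$f"                    # ls -n prints those numbers instead of names
rm "$f"
```

If a file's uid has no entry in /etc/passwd, ls -l simply prints the raw number instead of a name.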
When you carry removable media from one system to another, you should either ensure that user and group IDs match where relevant, or ignore ownership (e.g. by using a non-unix filesystem). Root is a bit of a special case because its defining characteristic is that the user ID is 0, everywhere.
The only way to control what someone in physical possession of the media will do with the data is to use an intrinsic means of protection, i.e., one that is not dependent on how they access the system, and keep something to yourself. You can use a mathematical means of protection: cryptography (encryption for confidentiality, signing for integrity; you keep the password to yourself), or a physical means of protection (e.g. a locked box).
| How is file ownership affected across different systems? |
1,300,982,059,000 |
I have the two users equah and hoster on my machine.
I created a samba share test1 in /home/equah, which is accessible by user equah without any problem.
I also created the share test2 in /smbtest and changed ownership to user equah, which is accessible by equah too.
But when I create the share test3 in /home/hoster/sharetest and try to connect, Nautilus prompts with Failed to mount Windows share: Permission denied, which is what I would like to get working.
ls -l shows the following details on the described directories:
drwx------ 14 equah equah 4096 Sep 8 20:09 /home/equah
drwxr-xr-x 2 equah equah 4096 Sep 8 20:33 /smbtest
drwxrwxrwx 3 equah equah 4096 Sep 8 20:44 /home/hoster/sharetest
drwx------ 19 hoster hoster 4096 Sep 8 20:20 /home/hoster
I also saw that the access control system had applied permissions on the hoster home directory, which I removed to see if this was the cause, but without any success.
I currently use a fresh Arch Linux installation with samba 4.8.5-1.
My samba configuration (/etc/samba/smb.conf) contains:
[global]
workgroup = EQGROUP
server string = eq-host samba server
server role = standalone server
log file = /var/log/samba/%m.log
max log size = 50
dns proxy = no
[test1]
comment = test
path = /home/equah
valid users = equah
[test2]
comment = test
path = /smbtest
valid users = equah
[test3] # <== Not Working ?
comment = test
path = /home/hoster/sharetest
valid users = equah
My thought is that some permission setting might prevent the logged-in Samba user from accessing the contents of a directory when any parent directory is owned by another user. However, creating a share in /home/hoster/sharetest/test and changing ownership of both sharetest and test to equah also does not work to share only the test folder.
|
You've got a classic ownership/permissions problem here. You've told SAMBA to allow access to /home/hoster/sharetest only to equah but your underlying filesystem permissions deny access to that user (drwx------ 19 hoster hoster 4096 Sep 8 20:20 /home/hoster).
Allow equah access to the directory and it should be OK:
chmod a+x /home/hoster
Or force the access by equah to be performed by hoster
# add to smb.conf share definition
force user = hoster
In general this kind of problem can be diagnosed using log level = 3 and looking in the SAMBA server log files.
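Putting the pieces together, a minimal sketch of the configuration with both the diagnostic and the workaround applied (share name and paths taken from the question; use either the chmod fix or force user, not necessarily both):

```
[global]
    log level = 3

[test3]
    comment = test
    path = /home/hoster/sharetest
    valid users = equah
    force user = hoster
```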
| samba - permission denied when share in another users home directory |
1,300,982,059,000 |
I want to mount a Samba share to a directory in my home dir. I created the directory, but mount the share as root. Upon mounting, the owner of my dir changes to root and my user doesn't have permission to write files on the share. How do I modify my mount line to be able to mount with write permissions? It currently looks like this: sudo mount -t cifs //IP/share/ /mount/point/ -o rw,username=user,password=pass,domain=domain
|
Use the uid and gid mount options:
uid=arg
sets the uid that will own all files or directories on the mounted
filesystem when the server does not provide ownership information. It
may be specified as either a username or a numeric uid. When not
specified, the default is uid 0. The mount.cifs helper must be at
version 1.10 or higher to support specifying the uid in non-numeric
form. See the section on FILE AND DIRECTORY OWNERSHIP AND PERMISSIONS
below for more information.
forceuid
instructs the client to ignore any uid provided by the server for
files and directories and to always assign the owner to be the value
of the uid= option. See the section on FILE AND DIRECTORY OWNERSHIP
AND PERMISSIONS below for more information.
gid=arg
sets the gid that will own all files or directories on the mounted filesystem when the server does not provide ownership
information. It may be specified as either a groupname or a numeric
gid. When not specified, the default is gid 0. The mount.cifs helper
must be at version 1.10 or higher to support specifying the gid in
non-numeric form. See the section on FILE AND DIRECTORY OWNERSHIP AND
PERMISSIONS below for more information.
| directory changes permission when mounted |
1,300,982,059,000 |
I mean, if two users have the same name, how does the system know that they're actually different users when it enforces file permissions?
This doubt came to my mind while I was considering to rename my home /home/old-arch before reinstalling the system (I have /home on its own partition and I don't format it), so that I could then have a new, pristine /home/arch. I wondered if the new system would give me the old permissions on my files or if it would recognize me as a different arch.
|
In Unix, users are identified by their ID (uid), which must be unique (in the scope of the local system).
So even if it were possible to create 2 different users with the same name
(adduser on my system refuses to do this, see this question for further information Can separate unix accounts share a username but have separate passwords?), they would need to get different uids. While you may be able to manipulate files containing the user information to match your criteria, every program is based on the assumption that uids are unique on the system, so such users would be identical.
EDIT: The other answer demonstrated a case where you have 2 different user names for the same uid - as far as the system is concerned though, this is like having two different names for the same user, so constructs like this should be avoided if possible, unless you specifically want to create an alias for a user on the system (see the unix user alias question on
serverfault for more information on the technicalities).
The system uses these uids to enforce file permissions.
The uid and gid (group id) of the user the file belongs to are written into the metadata of the file. If you carry the disk to another computer with a different user that randomly shares the same uid, the file will suddenly belong to this user on that system. Knowing that uids are usually
not more than 16-bit integers on a unix system, this shows that the uids
are not meant to be globally unique, only unique in scope of the local system.
| How does Linux identify users? |
1,300,982,059,000 |
Given the following directory tree:
.
├── d1
│ └── workspace
├── d2
│ └── workspace
├── d3
│ └── workspace
├── d4
│ └── workspace
└── d5
└── workspace
I need to set the permissions for all workspace directories as below:
chmod -R 774 d1/workspace
chmod -R 774 d2/workspace
...
How can I do the above operations in one command for all workspace directories? I can run the following command:
chmod -R 774 *
But this also changes the mode of parent directories, which is not desired.
|
You can use wildcards on the top level directory.
chmod 774 d*/workspace
Or to make it more specific you can also limit the wildcard, for example to d followed by a single digit.
chmod 774 d[0-9]/workspace
A more general approach could be with find.
find d* -maxdepth 1 -name workspace -type d -exec chmod 774 "{}" \;
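The whole scenario can be reproduced in a scratch directory to confirm that only the workspace directories change mode:

```shell
# Recreate the d1..d5/workspace tree, then chmod only the workspace dirs.
top=$(mktemp -d)
for i in 1 2 3 4 5; do
    mkdir -p "$top/d$i/workspace"
done
chmod 774 "$top"/d*/workspace
stat -c '%a %n' "$top/d1" "$top/d1/workspace"
# Only the workspace directory shows mode 774; d1 keeps its original mode.
rm -r "$top"
```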
| How to chmod only on subdirectories? |
1,300,982,059,000 |
I know about the chattr +i filename command which makes a file read only for all users. However, the problem is that I can revoke this by using chattr -i filename.
Is there a way to make a file readable by everyone on the system, but not writable by anyone, even the root, and with no going back (No option to make it writable again)?
|
Put it on a CD or a DVD. The once-writable kind, not the erasable ones. Or some other kind of a read-only device.
Ok, I suppose you want a software solution, so here are some ideas: You could possibly create an SELinux ruleset that disables the ioctl that chattr uses, even for root. Another possibility would be to use capabilities: setting +i requires the CAP_LINUX_IMMUTABLE capability, so if you can arrange the capability bounding set of all processes to not include that, then no one can change those flags. But you'd need support from init to have that apply to all processes. Systemd can do that, but I think it would need to be done for each service separately.
However, if you do that, remember that a usual root can modify the filesystem from the raw device (that's what debugfs is for), so you'd need to prevent that, too, as well as prevent modifying the kernel (loading modules). Loading modules can be prevented with the kernel.modules_disabled sysctl, but I'm not sure about preventing access to raw devices. And make all the relevant configuration files also immutable.
Anyway, after that, you'd also need to prevent changing the way the system boots, otherwise someone could reboot the system with a kernel that allows overriding the above restrictions.
| Make file read only on Linux even for root |
1,300,982,059,000 |
After looking through the help, I didn't find much difference between them.
-g, --gid GROUP
The group name or number of the user's initial login group. The group
name must exist. A group number must refer to an already existing
group.
-G, --groups GROUP1[,GROUP2,...[,GROUPN]]]
A list of supplementary groups which the user is also a member of.
Each group is separated from the next by a comma, with no intervening
whitespace. The groups are subject to the same restrictions as the
group given with the -g option. The default is for the user to belong
only to the initial group.
If they are the same. Why both they exist?
|
-g sets the initial, or primary, group. This is what appears in the group field in /etc/passwd. On many distributions the primary group name is the same as the user name.
-G sets the supplementary, or additional, groups. These are the groups in /etc/group that list your user account. This might include groups such as sudo, staff, etc.
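The distinction is visible on any existing account with id (no root needed):

```shell
id -gn    # the primary group: what useradd -g set (the /etc/passwd group field)
id -Gn    # all groups: the primary group plus the -G supplementary groups
```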
| What is the difference between -g and -G options in useradd |
1,300,982,059,000 |
I am trying to chroot into an old HD to change a forgotten password,
but chroot says permission denied. What gives? I am root! The hard drive I am trying to chroot into has an old version of Edubuntu 7.10; might that have anything to do with it?
root@h:~# chroot /media/usb0/
chroot: failed to run command `/bin/bash': Permission denied
|
Chrooting into, or recovering, an Ubuntu/Debian Linux system
Boot from an Ubuntu live CD. If you installed a 32-bit system, use a 32-bit live CD; if 64-bit, use a 64-bit live CD.
Mount the Linux Partitions using
# sudo blkid
Output:
sysadmin@localhost:~$ sudo blkid
[sudo] password for sysadmin:
/dev/sda1: UUID="846589d1-af7a-498f-91de-9da0b18eb54b" TYPE="ext4"
/dev/sda5: UUID="36e2f219-da45-40c5-b340-9dbe3cd89bc2" TYPE="swap"
/dev/sda6: UUID="f1d4104e-22fd-4b06-89cb-8e9129134992" TYPE="ext4"
Here my / Partition is /dev/sda6
Mount the / Partition to mount point using
# sudo mount /dev/sda6 /mnt
Then bind-mount the Linux access points: devices, proc, and sys.
Linux Device
# sudo mount --bind /dev/ /mnt/dev
proc system information
# sudo mount --bind /proc/ /mnt/proc
Kernel information to user space
# sudo mount --bind /sys /mnt/sys
If you need networking inside the chroot, copy the resolver configuration as well (optional).
# cp /etc/resolv.conf /mnt/etc/resolv.conf
Change the Linux root to be the device we mounted earlier in step 2
# sudo chroot /mnt
Now try to change the root password it will work.
| chroot permission denied! But I'm root! |
1,300,982,059,000 |
I have a file path.
Is there any single command to see the file/directory permissions of all the intermediate directories in the path?
|
Here are two oneliners. One ls call per path component:
$ (IFS=/; set -f -- $PWD; for arg; do path="${path%/}/$arg"; ls -dal "$path"; done)
Output:
# drwxr-xr-x 31 root admin 1122 4 Nov 22:08 /
# drwxr-xr-x 9 root admin 306 3 Nov 17:36 /Users
# drwxr-xr-x+ 67 janmoesen staff 2278 7 Nov 14:46 /Users/janmoesen
# drwxr-xr-x+ 53 janmoesen staff 1802 4 Nov 22:07 /Users/janmoesen/Sites
# drwxr-xr-x 28 janmoesen staff 952 7 Nov 15:01 /Users/janmoesen/Sites/example.com
With just one call to ls with all paths:
$ (IFS=/; set -f -- $PWD; for arg; do path="${path%/}/$arg"; paths+=("$path"); done; ls -dal "${paths[@]}")
Output:
# drwxr-xr-x 31 root admin 1122 4 Nov 22:08 /
# drwxr-xr-x 9 root admin 306 3 Nov 17:36 /Users
# drwxr-xr-x+ 67 janmoesen staff 2278 7 Nov 14:46 /Users/janmoesen
# drwxr-xr-x+ 53 janmoesen staff 1802 4 Nov 22:07 /Users/janmoesen/Sites
# drwxr-xr-x 28 janmoesen staff 952 7 Nov 15:01 /Users/janmoesen/Sites/example.com
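If util-linux is installed, the namei utility does this in a single command, with no loop needed:

```shell
# List owner, group, and mode for every component of a path.
namei -l /tmp
```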
| Is there a way to see the permissions of all the intermediate directories of a path..? [duplicate] |
1,300,982,059,000 |
I have a daemon (apache/samba/vsftpd/...) running on SELinux enabled system and I need to allow it to use files in a non-default location. The standard file permissions are configured to allow access.
If the daemon is running in permissive mode, everything works. When set back to enforcing it doesn't work anymore and I get a SELinux AVC denial messages.
How can I configure the system to allow the access in enforcing mode?
|
Background
SELinux adds another layer of permission checks on Linux systems. On SELinux enabled systems regular DAC permissions are checked first, and if they permit access, SELinux policy is consulted. If SELinux policy denies access, a log entry is generated in audit log in /var/log/audit/audit.log or in dmesg if auditd isn't running on the system.
SELinux assigns a label, called security context, to every object (file, process, etc) in the system:
Files have security context stored in extended attributes. These can be viewed with ls -Z.
SELinux maintains a database mapping paths patterns to default file contexts. This database is used when you need to restore default file contexts manually or when the system is relabeled. This database can be queried with semanage tool.
Processes are assigned a security context when an executable is run (execve syscall). Process security contexts can be viewed with most system monitoring tools, for example with ps Z $PID.
Other labeled objects also exist, but are not relevant to this answer.
SELinux policy contains the rules that specify which operations between contexts are allowed. SELinux operates on whitelist rules, anything not explicitly allowed by the policy is denied. The reference policy contains policy modules for many applications and it is usually the policy used by SELinux enabled distributions. This answer is primarily describing how to work with a policy based on the reference policy, which you are most likely using if you use the distribution provided policy.
When you run your application as your normal user, you probably do not notice SELinux, because default configuration places the users in unconfined context. Processes running in unconfined context have very few restrictions in place. You might be able to run your program without issues in user shell in unconfined context, but when launched using init system it might not work anymore in a restricted context.
Typical issues
When files are in a non-default location (not described in default policy) the issues are often related to the following reasons:
Files have incorrect/incompatible file context: Files moved with mv keep their metadata including file security contexts from old location. Files created in new location inherited the context from parent directory or creating process.
Having multiple daemons using the same files: The default policy does not include rules to allow the interaction between the security contexts in question.
Files with incorrect security context
If the files are not used by another daemon (or other confined process) and
the only change is the location where files are stored, the required changes to SELinux configuration are:
Add a new rule to file context database
Apply correct file context to existing files
The file context of the default location can be used as a template for the new location. Most policy modules include man page documentation (generated using sepolicy manpages) explaining possible alternative file contexts with their access semantics.
File context database uses regualr expression syntax, which allows writing overlapping specifications. It is worthwhile to note that applied context is the last specification found [src].
To add a new entry to file context database:
semanage fcontext -a -t <type> "/path/here(/.*)?"
After the new context entry is added to the database, the context from the database can be applied to your files using restorecon <files>. Running restorecon with the -vn flags will show which file contexts would be changed, without applying any changes.
Testing a new file context without adding a new entry in database
The context can be changed manually with the chcon tool. This is useful when you want to test a new file context without adding an entry to the file context database.
The new file context is specified in the arguments to chcon. When used with the --reference= option, the security context of a reference file is copied to the target files.
using a specific context (default_t):
chcon -t default_t <target files>
or using a reference:
chcon --reference=<path to default location> <target files>
Note about different file systems & mount points
If the new location is its own mount point, the context can be set with a mount option. Context set with mount option isn't stored on disk, so it can also be used with file systems that do not support extended attributes.
mount <device> <mount point> -o context="<context>"
Allowing processes running in different security contexts to use the same files
Option 1: Booleans
The reference policy includes tunable options, called booleans, which enable/disable certain additional rules. Many of them allow inter-operation between system daemons that do not usually use the same files.
All tunable options and their descriptions can be listed using semanage boolean -l. audit2allow might also be able to tell you directly which boolean needs to be enabled.
To enable/disable a boolean using semanage:
semanage boolean --on <boolean name>
semanage boolean --off <boolean name>
Booleans are the simplest way to modify the policy. However, not all situations can be addressed by toggling a boolean, and some booleans grant very broad, overly permissive access.
Option 2: Extend policy with a new module
If no boolean exists to allow the access, the policy needs to be modified by adding a custom module.
A simple module adding the required rules to allow the access can be generated from the log files using audit2allow with the following steps:
Set the daemon's domain (security context) to permissive mode. In permissive mode the policy is not enforced, but log entries are generated for accesses the policy would normally deny.
semanage permissive -a <domain>
Test your daemon in normal operation to generate log entries.
Create a new policy module and insert it.
audit2allow -a -M <name>
semodule -i <name>.pp
Re-enable enforcing mode
semanage permissive -d <domain>
This method works best when there are only a few security contexts involved. In a complex configuration you will quite likely have to write your own policy module. Some resources for getting started are the Gentoo wiki and the reference policy API documentation.
| Configure SELinux to allow daemons to use files in non-default locations |
1,300,982,059,000 |
I just lost some data in folder A, which was inside folder B, after doing rm -rf B. Before I could realize what I had done, it was all over. Now that a lesson has been learnt, I wish to make some of my folders idiot-proof, to avoid the next time I do something similar and want to kill myself.
One way I can think of is to write a bash function and alias it to rm. This function would look in each sub-folder for a hidden file such as .dontdelete. When found, it would ask if I really want to continue. I cannot make the folder write-protected since there is a process that constantly writes to it. Is there a better way to do this?
|
In researching your question I came across this technique which might help you in the future.
You can apparently touch a file in the directory like so:
touch -- -i
Now when you run the command rm -fr * in a directory where the -i is present you'll be presented with the interactive prompt from rm.
$ ls
file1 file2 file3 file4 file5 -i
$ rm -fr *
rm: remove regular empty file `file1'? n
rm: remove regular empty file `file2'? n
rm: remove regular empty file `file3'? n
rm: remove regular empty file `file4'? n
rm: remove regular empty file `file5'? n
A similar effect can be achieved by leaving an alias in place so that rm always runs rm -i. This can get annoying, so what I've often seen done is to keep this alias in place and bypass it when you really want to delete without being prompted.
alias rm='rm -i'
Now in directories you'll be greeted like this:
$ ls
file1 file2 file3 file4 file5
$ rm -r *
rm: remove regular empty file `file1'?
To override the alias:
$ \rm -r *
This still doesn't stop a rm -fr however. But it does provide you with some protection.
References
How to prevent yourself from accidentally deleting files in Unix/Linux
| Making a directory protected from 'rm -rf' |
1,300,982,059,000 |
I have an ISO file and mounted it under /mnt/isofile. Then I copied this file to another folder. But the contents are read-only and belong to root. I tried to use chmod and chown, but it prompts with the message:
it is read only file system.
What is going on here?
NOTE: There is a tar file in the .iso. I want to compress it, but that fails with the same "read only file system" message.
|
ISO 9660 is by design a read-only file system. This means that all the data has to be written in one go to the medium. Once written, there is no provision for altering the stored content. Therefore ISO 9660 is not suitable to be used on random-writable media, such as hard disks.
You need to copy the whole directory tree to another directory, make your changes, and then burn a new image.
| ISO file readonly? |
1,300,982,059,000 |
My error is:
mount.nfs4: access denied by server while mounting fileserver:/export/path/one
My question is:
where would the detailed log information be on the server (under systemd)?
More information:
I asked a similar question from the Ubuntu client perspective on AskUbuntu. My focus in this question is on the Arch Linux server. In particular, I am looking for logs on the server that will help me understand the problem.
Here's the background:
Our small LAN is running an Arch Linux NFS v4 file server. We have several clients running Ubuntu 15.10 and 16.04. We have one client running Ubuntu 14.04. The 14.04 client will not connect to the file server. The others all connect fine. The settings are the same on all clients. And all clients are listed in /etc/exports on the server.
I need to find more detailed error information on the Arch Linux server. However, journalctl does not show anything related to NFS, and it does not contain any entries related to the NFS access-denied errors.
The 14.04 client can ping the fileserver as well as log in via SSH. The user name / ID as well as group match. (I'm using the same user account / uid on both client and server. It is uid 1000.)
Even more info:
$ sudo mount -a (on client)
mount.nfs4: access denied by server while mounting fileserver:/export/path/one
mount.nfs4: access denied by server while mounting fileserver:/export/path/two
The client can ping the fileserver (and vice versa):
$ ping fileserver
PING fileserver (192.168.1.1) 56(84) bytes of data.
64 bytes from fileserver (192.168.1.1): icmp_seq=1 ttl=64 time=0.310 ms
The client successfully logs into the LAN-based fileserver:
$ ssh fileserver
Last login: Tue Aug 16 14:38:26 2016 from 192.168.1.2
[me@fileserver ~]$
The fileserver's mount export and rpcinfo are exposed to the client:
$ showmount -e fileserver # on client
Export list for fileserver:
/export/path/one/ 192.168.1.2
/export/path/two/ 192.168.1.2,192.168.1.3
$ rpcinfo -p fileserver (on client)
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 58344 status
100024 1 tcp 58561 status
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
100003 4 tcp 2049 nfs
100003 4 udp 2049 nfs
This is the error when mounting the export directly:
$ sudo mount -vvv -t nfs4 fileserver:/export/path/one /path/one/
mount: fstab path: "/etc/fstab"
mount: mtab path: "/etc/mtab"
mount: lock path: "/etc/mtab~"
mount: temp path: "/etc/mtab.tmp"
mount: UID: 0
mount: eUID: 0
mount: spec: "fileserver:/export/path/one"
mount: node: "/path/one/"
mount: types: "nfs4"
mount: opts: "(null)"
mount: external mount: argv[0] = "/sbin/mount.nfs4"
mount: external mount: argv[1] = "fileserver:/export/path/one"
mount: external mount: argv[2] = "/path/one/"
mount: external mount: argv[3] = "-v"
mount: external mount: argv[4] = "-o"
mount: external mount: argv[5] = "rw"
mount.nfs4: timeout set for Tue Aug 16 16:10:43 2016
mount.nfs4: trying text-based options 'addr=192.168.1.1,clientaddr=192.168.1.2'
mount.nfs4: mount(2): Permission denied
mount.nfs4: access denied by server while mounting fileserver:/export/path/one
|
I was having exactly the same problem, with both client and server running Arch Linux. The solution was to use the hostname in /etc/exports instead of the IP address. I changed this:
/srv/nfs 192.168.10(rw,fsid=root,no_subtree_check)
/srv/nfs/media 192.168.10(rw,no_subtree_check)
/srv/nfs/share 192.168.10(rw,no_subtree_check)
To this:
/srv/nfs iguana(rw,fsid=root,no_subtree_check)
/srv/nfs/media iguana(rw,no_subtree_check)
/srv/nfs/share iguana(rw,no_subtree_check)
This resulted in a slightly different problem:
[root@iguana data]# mount -t nfs4 frog:/srv/nfs/media /data/media
mount.nfs4: Protocol not supported
I don't have a lot of experience with NFS4; apparently you are not supposed to include the NFS root path in the mount command. This finally worked and mounted the volume:
[root@iguana data]# mount -t nfs4 frog:/media /data/media
| Where are NFS v4 logs under systemd? |
1,300,982,059,000 |
Accidentally, I ran sudo rm -r /tmp. Is that a problem?
I recreated it using sudo mkdir /tmp. Does that fix the problem?
After I recreated the directory, in the Places section of the sidebar in Nautilus on Ubuntu 14.04 I can see /tmp, which wasn't there before. Is that a problem?
One last thing: do I have to run sudo chown $USER:$USER /tmp to make it accessible as it was before? Would there be any side-effects after this?
By the way, I get this seemingly related error when I try to use bash autocompletion:
bash: cannot create temp file for here-document: Permission denied
|
/tmp can be treated as a typical directory in most cases. You can recreate it, give it back to root (chown root:root /tmp) and set 1777 permissions on it so that everyone can use it (chmod 1777 /tmp). This is even more important if your /tmp is on a separate partition (which makes it a mount point).
By the way, since many programs rely on temporary files, I would recommend a reboot to ensure that all programs resume as usual. Even if most programs are designed to handle these situations properly, some may not.
| Deleted /tmp accidently |
1,300,982,059,000 |
I found a strange thing while playing with pi3B.
I want to create a file in /sys/class/gpio (just poking around, no specific reason) but I get a Permission Denied. Below is some information.
pi@raspberrypi:/sys/class/gpio $ groups
pi adm dialout cdrom sudo audio video plugdev games users input netdev gpio i2c spi
pi@raspberrypi:/sys/class/gpio $ ls -ld .
drwxrwx--- 2 root gpio 0 May 6 00:28 .
pi@raspberrypi:/sys/class/gpio $ touch somefile
touch: cannot touch 'somefile': Permission denied
As you can see, I am in the gpio group, and that group has write permission on the directory /sys/class/gpio.
So the question is: why can't I create new files in /sys/class/gpio, even though a group I am a member of has write permission on the directory?
I tried logging in again and rebooting after adding the pi user to the gpio group, and that was several days ago.
OS: raspbian stretch
tried newgrp
|
The /sys directory is special: you can't just poke around and create files.
Wikipedia excerpt:
Modern Linux distributions include a /sys directory as a virtual filesystem (sysfs, comparable to /proc, which is a procfs), which stores and allows modification of the devices connected to the system, whereas many traditional UNIX and Unix-like operating systems use /sys as a symbolic link to the kernel source tree.
Entries in /sys are created by the kernel and by drivers; you cannot just create them from the command-line. You might edit some as root, but you cannot generally make new ones from userspace except by loading kernel modules or otherwise installing drivers or modifying the kernel.
| Have group permission but unable to create file |
1,300,982,059,000 |
In Ubuntu 14.04, listing the contents of the directory /var/spool/cron with ls -l provides the following permissions on the directories within (irrelevant columns snipped):
drwxrwx--T daemon daemon atjobs
drwxrwx--T daemon daemon atspool
drwx-wx--T root crontab crontabs
What purpose does setting a sticky bit on a directory without the executable bit serve?
|
From the manual page for sticky:
STICKY DIRECTORIES
A directory whose `sticky bit' is set becomes an append-only directory, or, more accurately, a directory in which the deletion of files is restricted. A file in a sticky directory may only be removed or renamed by a user if the user has write permission for the directory and the user is the owner of the file, the owner of the directory, or the super-user. This feature is usefully applied to directories such as /tmp which must be publicly writable but should deny users the license to arbitrarily delete or rename each others' files.
Any user may create a sticky directory. See chmod(1) for details about modifying file modes.
The upshot of this is that only the owner of a file in a sticky directory can remove the file. In the case of the cron tables, this means that I can't go in there and remove your cron table and replace it with one of my choosing, even though I may have write access to the directory. It is for this reason that /tmp is also sticky.
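The lower-case/upper-case distinction from the question (t with others-execute, T without, as on the cron directories) is easy to reproduce on a throwaway directory; the path below is illustrative:

```shell
# Sketch: toggle the sticky bit on a scratch directory and inspect the mode.
dir=$(mktemp -d)
chmod 1777 "$dir"        # world-writable plus sticky; others-execute set
stat -c '%a %A' "$dir"   # 1777 drwxrwxrwt  (lower-case 't')
chmod o-x "$dir"         # drop others-execute, keep the sticky bit
stat -c '%a %A' "$dir"   # 1776 drwxrwxrwT  (upper-case 'T', as on the cron dirs)
rmdir "$dir"
```

ls -ld shows the same t/T distinction in its last mode character.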
| Why would a directory have the sticky bit set without the executable bit? |
1,300,982,059,000 |
I'm trying to copy a file from my homedir to /usr. How do I setup the permissions to allow this?
$ chmod 777 KeePass-2.14.zip
$ cp KeePass-2.14.zip /usr/keepass/
cp: cannot create regular file `/usr/keepass/KeePass-2.14.zip': Permission denied
$ sudo cp KeePass-2.14.zip /usr/keepass/
cp: cannot stat `KeePass-2.14.zip': Permission denied
$
|
I'm guessing that sudo cp can't stat KeePass-2.14.zip because $HOME is on an NFS mount, and the NFS server doesn't grant your machine root permission to the NFS share.
Try:
cp KeePass-2.14.zip /tmp
sudo cp /tmp/KeePass-2.14.zip /usr/keepass/
| How to copy a file from my home folder to /usr |
1,300,982,059,000 |
I understand the concept of managing permissions on Linux with chmod using the first digit as the user, the second as the group and the third as other users as described on this answer in Understanding UNIX permissions and file types.
Let's say I have a Linux system with 5 users: admin, usera, userb, userc and guest. By default, the users usera, userb and userc have execute permission on all files inside /usr/bin, so these users can use the system's command line, since those files have 755 permissions. So far, everything is fine. However, I'd like to forbid the user guest from executing files in /usr/bin. I know I could achieve that by changing the permissions of all files inside this folder to something like 750 with chmod, but if I did that I'd mess things up for usera, userb and userc, because they would also be forbidden from executing the files.
On my computer, all the files in /usr/bin belong to the group root, so I know I could create a new group newgroup, change the group of all those files to it, and add usera, userb and userc to newgroup. But doing that sounds like far too much modification of the system's default settings. Does anyone know a smarter way of solving this problem?
How can I forbid a single user from using the command line (or executing any file on PATH) without an overcomplicated solution that requires changing the permissions of many files?
|
Use ACLs to remove the permissions. In this case, you don't need to modify the permissions of all the executables; just remove the execute permission from /usr/bin/ to disallow traversal of that directory and therefore access of any files within.
setfacl -m u:guest:r /usr/bin
This sets the permissions of user guest to just read for the directory /usr/bin, so they can ls that directory but not access anything within.
You could also just remove all permissions:
setfacl -m u:guest:- /usr/bin
| Is it possible to forbid a specific user from executing files on /usr/bin without changing all files permission to 750? |
1,300,982,059,000 |
I have checked many similar questions but the solutions didn't work for me.
On my previous Debian wheezy installation I could mount devices from the GUI with no permission problems, even after upgrading to jessie. But on my new Debian jessie installation, devices mount in a read-only state, whether NTFS partitions on the same HDD as my Debian installation or external USB devices. For both the root user and normal users, I can't write or modify data on the mounted devices.
I have found these lines in syslog that seem to be related:
udisksd[1281]: Mounted /dev/sda4 at /media/<user>/<uuid> on behalf of uid 1000
udisksd[1281]: Cleaning up mount point /media/<user>/<uuid> (device 8:4 is not mounted)
udisksd[1281]: Unmounted /dev/sda4 on behalf of uid 1000
kernel: [ 125.190099] ntfs: volume version 3.1.
udisksd[1281]: Mounted /dev/sda4 at /media/<user>/<uuid> on behalf of uid 1000
org.gtk.Private.UDisks2VolumeMonitor[1224]: index_parse.c:191: indx_parse(): error opening /media/<user>/<uuid>/BDMV/index.bdmv
org.gtk.Private.UDisks2VolumeMonitor[1224]: index_parse.c:191: indx_parse(): error opening /media/<user>/<uuid>/BDMV/BACKUP/index.bdmv
org.gnome.Nautilus[1224]: Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
kernel: [ 137.739543] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739579] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739655] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739678] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739702] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739767] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739791] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739814] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739894] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
kernel: [ 137.739921] ntfs: (device sda4): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
I'm trying to figure out what makes the difference between the two installations. In my new installation, unlike the previous one, I didn't install the full gnome task but only the minimal GNOME packages. The other difference is that the first time I created a fresh partition table and formatted all the partitions, ext4 and NTFS, then installed Windows and then Debian; the second time I reused the same partition table and only formatted the ext4 partitions. Both times dual-booting with Windows.
The output of cat /etc/mtab for two internal and external mounted devices reads as follows:
/dev/sdb1 /media/<user>/<uuid> ntfs rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1 0 0
/dev/sda4 /media/<user>/<uuid> ntfs rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1 0 0
|
After hours of searching, there seem to be different causes for this issue, and different solutions for each one.
I'm not expert enough to provide a comprehensive answer, so I'll point to some frequent situations on the topic:
Ownership/permission issues for mounted devices on mount points:
File permissions won't change
USB drive auto-mounted by user but gets write permissions for root only
Damaged file-system that for security reasons mounts the device as read-only:
Permission Denied on External Hard Drive
Hibernated windows that doesn't permit a write access to windows partitions on dual-boot systems:
Unable to mount Windows (NTFS) filesystem due to hibernation
And the one that led me to answer is the type of mounting based on the file-system:
Why can't I write on External Hard disk?
My problem was the missing NTFS driver package ntfs-3g, which caused the system to use the Linux kernel NTFS driver ntfs. As mentioned on the Debian Wiki NTFS page, ntfs, the Linux kernel NTFS driver, provides read-only access, while ntfs-3g, the userspace NTFS driver via FUSE, provides read and write access.
Running # apt-get install ntfs-3g followed by a system reboot solved the problem for me.
| Permission denied on mounted devices |
1,300,982,059,000 |
I have a file that a colleague and I are editing together, on a Unix system. We are using Unix group permissions to edit it. We have one Unix group that we are both members of. Whenever I save the file, it changes the Unix group to one that he is not a member of. Is there any way to stop it from doing that?
|
Your options are to set the setgid bit (chmod g+s) on the directory to make files created within it match its group ID, or to use the newgrp command to open a shell with the desired group ID before editing the file.
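The setgid-directory behaviour can be sketched on a scratch directory (group names and IDs will differ on your system):

```shell
# Sketch: files created inside a setgid directory inherit the directory's group.
dir=$(mktemp -d)
chmod g+rx,g+s "$dir"      # the 's' appears in the group triad of the mode
stat -c '%A' "$dir"        # e.g. drwxr-s---
touch "$dir/shared.txt"
stat -c '%G' "$dir" "$dir/shared.txt"   # both lines show the directory's group
rm -r "$dir"
```

For the shared-editing scenario in the question, you would additionally chgrp the directory to the common group once, and every file saved into it thereafter keeps that group.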
| Changing Unix group for files |
1,300,982,059,000 |
This seems very strange to me. I'm running kernel 2.6.37.2 and ran:
~]$ cp -r /proc/ here
~]$ rm -rf here
I get some permission-denied errors when copying, as expected, and I eventually hit Ctrl-C. I then get Permission denied on a lot of files when trying to remove the new directory and files.
As a note, I found this weird behavior because a friend sent me a .tgz of a snapshot of his /proc dir. I extracted the directory and when I was finished looking through it I had the same problem.
rm -rf as root does work.
lsattr shows the e attribute (which is what all of my files/directories show).
|
If there is a non-empty directory where you don't have write permission, you can't remove its contents.
$ mkdir foo
$ touch foo/bar
$ chmod a-w foo
$ rm -rf foo
rm: cannot remove `foo/bar': Permission denied
The reason is that rm is bound by permissions like any other command, and permission to remove bar requires write permission on foo. This doesn't apply when you run rm as root because root always has the permission to remove a file.
To make the directory tree deletable, make all the directories in it writable (the permissions of regular files don't matter when it comes to deletion with rm -f). You can use either of these commands:
chmod -R u+w here # slow if you have a lot of regular files
find here -type d -exec chmod u+w {} +
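Putting the fix together end to end (a sketch on a scratch directory; the failing rm itself is shown above, and note that root would bypass the permission check entirely):

```shell
# Sketch: restore directory write permission, then the removal succeeds.
tmp=$(mktemp -d) && cd "$tmp"
mkdir foo && touch foo/bar
chmod a-w foo        # this is the state that makes `rm -rf foo` fail
chmod -R u+w foo     # the fix: restore write permission on the directories
rm -rf foo           # now succeeds for the owner
ls -A .              # empty: foo is gone
```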
| I can't remove a directory tree with rm -rf |
1,300,982,059,000 |
The process I'm running sometimes generates a core file, which has the following file permissions:
server:~ # ls -l /mnt/process/core/core_segfault
-rw------- 1 root root 245760 Dec 2 11:29 /mnt/process/core/core_segfault
The issue is that only root user can open it for investigation, while I'd like everyone with access to it to be able to read it without me always setting permissions manually.
How could I set default permissions to something like -rw-rw-rw-?
|
Since core files contain the complete memory layout of the process at the time it crashed, they may contain sensitive information. For this reason, core files are created with ownership set to the uid of the process at the time of its crash, and with rather restrictive permissions. There is no setting to change that easily.
However, what you can do is to set the kernel.core_pattern sysctl setting to a program (which must start with a pipe character, |). The kernel will then call that program when a core file is generated, instead of dumping it to disk. This program should be able to generate the core file with the permissions you want.
Examples of programs that do so are systemd-coredump and apport.
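For illustration, the setting is roughly of this shape on a systemd-based system. Treat the helper path and argument list as assumptions to verify against your own distribution, since they vary between systemd versions:

```
# Illustrative fragment for /etc/sysctl.d/50-coredump.conf; apply with `sysctl --system`.
kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
```

With systemd-coredump in place, dumps land under /var/lib/systemd/coredump and are listed and extracted with coredumpctl, which handles access control for you instead of raw file permissions.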
| How to set default core file permissions |
1,300,982,059,000 |
I am trying to understand permissions in detail. I was reading about setuid and its uses. However, this particular case confuses me.
I have made a small script and now I have set the suid bit for the script as below.
chmod u+s ramesh
I see the permissions set as below.
-rwsrw-r-- 1 ramesh ramesh 29 Sep 30 10:09 ramesh
Now, I believe that with setuid any user could execute the script. Next, I ran the command
chmod u-x ramesh
It gives me the permission as,
-rwSrw-r-- 1 ramesh ramesh 29 Sep 30 10:09 ramesh
Now, I understand the S denotes setuid with no executable bit. That is, no one can execute this file.
So my question is, what practical purposes do the setting of S bit have? I am trying to understand from an example perspective for setting this bit.
|
Now, I believe with setuid any user could execute the script.
Not quite. To make the script executable by every user, you just need to set a+rx permissions:
chmod a+rx script
setuid means that the script is always executed with the owner's permissions, that is, if you have the following binary:
martin@dogmeat ~ % touch dangerous
martin@dogmeat ~ % sudo chown root:root dangerous
martin@dogmeat ~ % sudo chmod a+rx,u+s dangerous
martin@dogmeat ~ % ll dangerous
-rwsrwxr-x 1 root root 0 Sep 30 17:23 dangerous*
This binary will always run as root, regardless of the user executing it. Obviously this is dangerous, and you have to be extremely careful with setuid, especially when you are writing setuid applications. Also, you shouldn't use setuid on scripts at all, because it's inherently unsafe; on Linux the kernel ignores the setuid bit on interpreted scripts anyway.
Now, I understand the S denotes setuid with no executable bit. That is, no one can execute this file.
So my question is, what practical purposes do the setting of S bit have? I am trying to understand from an example perspective for setting this bit.
I don't think that there is a practical purpose; IMO it's just a possible combination of the permission bits.
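The s/S distinction itself is easy to reproduce on a throwaway file (setting setuid on a file you own needs no root, and is harmless here since nothing executes it):

```shell
# Sketch: 's' vs 'S' in the owner triad of the mode string.
f=$(mktemp)
chmod u+x,u+s "$f"
stat -c '%A' "$f"   # owner triad ends in 's': setuid and execute both set
chmod u-x "$f"
stat -c '%A' "$f"   # owner triad ends in 'S': setuid kept, execute removed
rm -f "$f"
```

One side effect of the S state is that the setuid bit is remembered: a later chmod u+x restores full setuid-executable behaviour without re-adding u+s.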
| what is the purpose of setuid enabled with no executable bit? |
1,300,982,059,000 |
How can I permanently change the ownership (or at least the group) of an LVM volume?
I figured that I have to use udev, but I don't know what the rule should look like.
Let's say I want to change the ownership of LVM/disk to user/group virtualbox, how would I do that?
|
On Debian (and hopefully your distro as well) all the LVM metadata is already loaded into udev (by some of the rules in /lib/udev/rules.d). So you can use a rules file like this:
$ cat /etc/udev/rules.d/92-local-oracle-permissions.rules
ENV{DM_VG_NAME}=="vgRandom", ENV{DM_LV_NAME}=="ora_users_*", OWNER="oracle"
ENV{DM_VG_NAME}=="vgRandom", ENV{DM_LV_NAME}=="ora_undo_*", OWNER="oracle"
ENV{DM_VG_NAME}=="vgSeq", ENV{DM_LV_NAME}=="ora_redo_*", OWNER="oracle"
You can use udevadm to find out what kinds of things you can base your udev rules on. All the E: lines can be matched via ENV in udev rules, e.g. the E: DM_LV_NAME=ora_data line in the output below, which is the kind of property the rules above match on:
# udevadm info --query=all --name /dev/dm-2
P: /devices/virtual/block/dm-2
N: dm-2
L: -100
S: block/253:2
S: mapper/vgRandom-ora_data
S: disk/by-id/dm-name-vgRandom-ora_data
S: disk/by-id/dm-uuid-LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89
S: disk/by-uuid/787651c2-e4c7-40e2-b0fc-1a3978098dce
S: vgRandom/ora_data
E: UDEV_LOG=3
E: DEVPATH=/devices/virtual/block/dm-2
E: MAJOR=253
E: MINOR=2
E: DEVNAME=/dev/dm-2
E: DEVTYPE=disk
E: SUBSYSTEM=block
E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
E: DM_NAME=vgRandom-ora_data
E: DM_UUID=LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89
E: DM_SUSPENDED=0
E: DM_UDEV_RULES=1
E: DM_VG_NAME=vgRandom
E: DM_LV_NAME=ora_data
E: DEVLINKS=/dev/block/253:2 /dev/mapper/vgRandom-ora_data /dev/disk/by-id/dm-name-vgRandom-ora_data /dev/disk/by-id/dm-uuid-LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89 /dev/disk/by-uuid/787651c2-e4c7-40e2-b0fc-1a3978098dce /dev/vgRandom/ora_data
E: ID_FS_UUID=787651c2-e4c7-40e2-b0fc-1a3978098dce
E: ID_FS_UUID_ENC=787651c2-e4c7-40e2-b0fc-1a3978098dce
E: ID_FS_VERSION=1.0
E: ID_FS_TYPE=ext4
E: ID_FS_USAGE=filesystem
E: FSTAB_NAME=/dev/mapper/vgRandom-ora_data
E: FSTAB_DIR=/opt/oracle/oracle/oradata
E: FSTAB_TYPE=ext4
E: FSTAB_OPTS=noatime
E: FSTAB_FREQ=0
E: FSTAB_PASSNO=3
Also, you can match on sysfs attributes, in either ATTR (device only) or ATTRS (parents too). You can see all the attributes like this:
# udevadm info --attribute-walk --name /dev/dm-2
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/virtual/block/dm-2':
KERNEL=="dm-2"
SUBSYSTEM=="block"
DRIVER==""
ATTR{range}=="1"
ATTR{ext_range}=="1"
ATTR{removable}=="0"
ATTR{ro}=="0"
ATTR{size}=="41943040"
ATTR{alignment_offset}=="0"
ATTR{discard_alignment}=="0"
ATTR{capability}=="10"
ATTR{stat}=="36383695 0 4435621936 124776016 29447978 0 3984603551 342671312 0 191751864 467456484"
ATTR{inflight}==" 0 0"
Though that matching is more useful for non-virtual devices (e.g., you'll get a lot of output if you try it on /dev/sda1).
| Permanently changing the ownership (or group) of LVM volume |
1,300,982,059,000 |
I got a new drive, and I can copy files to it fine with a simple cp. However, for some weird reason I get Permission denied with ffmpeg.
The permissions seem fine, unless I'm missing something:
> ll /media/manos/6TB/
drwxrwxrwx 13 manos 4096 Apr 16 00:56 ./
drwxr-x---+ 6 manos 4096 Apr 16 00:49 ..
-rwxrwxrwx 1 manos 250900209 Apr 15 17:28 test.mp4*
..
But ffmpeg keeps complaining:
> ffmpeg -i test.mp4 test.mov
ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-avisynth --enable-cuda --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
test.mp4: Permission denied
Simply copying, like below, works fine:
> cp test.mp4 test.mp4.bak
'test.mp4' -> 'test.mp4.bak'
Any ideas on what is going on? This is pretty annoying. Note that ffmpeg is installed at /snap/bin/ffmpeg.
|
So after a lot of digging I figured the issue is with snap package manager. Apparently by default, snap can't access the media directory so we need to manually fix this.
Check if ffmpeg has access to removable-media like below
> snap connections | grep ffmpeg
desktop ffmpeg:desktop :desktop -
home ffmpeg:home :home -
network ffmpeg:network :network -
network-bind ffmpeg:network-bind :network-bind -
opengl ffmpeg:opengl :opengl -
optical-drive ffmpeg:optical-drive :optical-drive -
pulseaudio ffmpeg:pulseaudio :pulseaudio -
wayland ffmpeg:wayland :wayland -
x11 ffmpeg:x11 :x11 -
Add that permission if it's missing
sudo snap connect ffmpeg:removable-media
| "Permission denied" with ffmpeg (via snap) on external drive |
1,300,982,059,000 |
[root@localhost ~]# vim /usr/lib64/sas12/smtpd.conf
pwcheck_method: saslauthd
mech_list: PLAIN LOGIN
log_level:3
:wq
An error occurs.
"/usr/lib64/sas12/smtpd.conf" E212: Can't open file for writing.
Why root can't open file for writing?
|
Check that the /usr/lib64/sas12 directory already exists:
root@host:~# ls /usr/lib64/sas12
If it is not the case, you must create the directory before attempting to create the file:
root@host:~# mkdir -p /usr/lib64/sas12
root@host:~# vim /usr/lib64/sas12/smtpd.conf
Your vim command should now work as expected.
| Why root Can't open file for writing? |
1,300,982,059,000 |
Why can't I edit files owned by root but being e.g. somewhere deep in my personal directory, it says:
sudoedit: existingFile: editing files in a writable directory is not permitted
While I have the following function defined:
function sunano {
export SUDO_EDITOR='/usr/local/bin/nano'
sudoedit "$@"
}
And I edit like this:
sunano existingFile
Where the file is indeed owned by root:
ls -l existingFile
Proves that:
-rwxr-xr-x 1 root root 40 Jun 15 2015 existingFile
|
The manpage says
Files located in a directory that is writable by the invoking user may not be edited unless that user is root (version 1.8.16 and higher).
If you can write to the directory containing the file, then you can edit it in practice without needing sudoedit (although you may not be able to read its current contents): you can move it out of the way and create a new file with the same name. In your particular case, you can read the file, and you should find that at least some editors will allow you to edit it (at least those which save files by writing a temporary file and renaming it into place).
The reasoning behind this feature is given in sudo bug 707: basically, allowing users to edit files in directories they can write to with sudoedit can allow them to circumvent the restrictions set up in sudoedit’s configuration (and effectively edit any file on the system).
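The directory-write loophole described above is easy to demonstrate in a throwaway directory. This sketch (paths are made up for illustration) replaces a read-only file purely via the rename trick:

```shell
set -e
tmp=$(mktemp -d)
printf 'original\n' > "$tmp/file"
chmod 444 "$tmp/file"              # the file itself is read-only

printf 'edited\n' > "$tmp/file.new"
mv -f "$tmp/file.new" "$tmp/file"  # succeeds anyway: rename only needs
                                   # write permission on the directory
cat "$tmp/file"                    # prints: edited
```

The file's own mode bits never enter into it, which is exactly why sudoedit refuses to cooperate in user-writable directories.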
| sudoedit root owned file in a non-root directory |
1,300,982,059,000 |
If user smith's home directory has the following permissions:
$ ls -l /home/staff
drwxr-x--- 51 smith staff 4096 Sep 18 09:08 smith/
is it possible, somehow, to prevent him to change his home directory's permission to, for example, to 755?
|
One way is to use per-user groups (i.e. one group for each user) and then set the home directory permissions to root:smith, mode 0770.
Another (more hacky) way is to script this: Create a script that inspects all home directories (get them via setpwent()/getpwent()) that reside under /home (e.g. not /root) and make it either warn when there's a discrepancy or change the permissions on the spot.
I've done the latter myself in a multi-user environment in the past and it worked for years like a charm.
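A minimal sketch of that second (scripted) approach, assuming homes live under /home and treating anything looser than 0750 as a discrepancy (both the prefix and the mask are arbitrary choices here):

```shell
set -e
# Warn about home directories whose mode grants more than 0750 allows.
getent passwd | while IFS=: read -r user _ _ _ _ home _; do
    case $home in /home/*) ;; *) continue ;; esac
    [ -d "$home" ] || continue
    mode=$(stat -c '%a' "$home")
    # flag any permission bit set outside the allowed 0750 mask
    if [ $(( 0$mode & ~0750 & 0777 )) -ne 0 ]; then
        echo "loose permissions on $home ($mode, owner $user)"
    fi
done
```

Run from cron, it either mails you the output, or you can replace the echo with `chmod 0750 "$home"` to fix things on the spot.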
| how to prevent a user from changing his home directory permissions? |
1,300,982,059,000 |
How can I figure out which process is changing the permissions of a file?
on a Debian server, I have the problem that something is changing the permissions on /dev/null every day at 6:20 (since 3 weeks). When I set the correct permissions, they are set back between a few minutes. Then I set it again and after that permissions stay correct until next day 6:20. It doesn't matter at which time I set the permissions.
|
Install auditd and run:
sudo auditctl -a exit,always -F arch=b64 -S fchmod -S chmod -S fchmodat \
-F path=/dev/null -k dev-null-chmod
sudo auditctl -a exit,always -F arch=b32 -S fchmod -S chmod -S fchmodat \
-F path=/dev/null -k dev-null-chmod
You'd find the culprit in the output of:
sudo ausearch -ik dev-null-chmod
You'll see the command name, pid and parent pid in there. If the command name is chmod, you'll probably want to know what ran that command. If the ppid is no longer there, you may want to also monitor all the process creation and/or executed commands with the audit system again or with bsd process accounting.
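auditd is the right tool here because it identifies the process. For completeness, a crude cron-style check (the helper name is invented) can at least tell you *that* the mode drifted, though not who did it:

```shell
set -e
# Report when a file's permission bits differ from what we expect.
watch_mode() {
    actual=$(stat -c '%a' "$1")
    [ "$actual" = "$2" ] || echo "mode of $1 drifted to $actual (expected $2)"
}

watch_mode /dev/null 666   # a healthy /dev/null is crw-rw-rw-
```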
| monitor file permission changes |
1,300,982,059,000 |
First, some background:
/dev/md1 is a RAID-0 array serving as primary file store. It is mounted to /var/smb.
/dev/md2 is another RAID-0 array storing backup snapshots taken from /dev/md1. It is mounted to /var/smb/snapshots.
Three directories are made available via Samba: /var/smb/files (publicly-shared files), /var/smb/private (private files), and /var/smb/snapshots (providing read-only access to backup snapshots).
Only users in the smbusers group are allowed to access the files and snapshots shares; similarly, only users in the smbprivate group are allowed to access the files in private. Additionally, Linux permissions prohibit users not in the respective groups from accessing the files and private directories, both on the local system and within the snapshots Samba share.
This is great, because it means that we have a fully functional file server with a self-help "restore from backup" option (users can simply access the snapshots share and retrieve the file(s) they want to restore themselves), but so far I lack one key ingredient: Non-root access on the local system to the /var/smb/snapshots directory.
The snapshots must be strictly read-only to all regular users, however of course the file system must be mounted read-write to allow the backup operation to take place. The permissions on these directories are currently:
root@odin:/var/smb# ll
total 40
drwxrwxr-x 7 root root 4096 2011-04-11 15:18 ./
drwxr-xr-x 14 root root 4096 2011-04-10 19:07 ../
drwxrwx--- 15 kromey smbusers 4096 2010-12-07 13:09 files/
drwxrwx--- 7 kromey smbprivate 4096 2010-04-07 07:08 private/
drwxrwx--- 3 root root 4096 2011-04-11 15:16 snapshots/
Now, what I want is to provide access to the snapshots directory to non-root users, but in a strictly read-only fashion. I can't mount /dev/md2 read-only, though, because I have to have it read-write to run backups; I can't simply re-mount it read-write for a backup and then re-mount it back to read-only, because that provides a window of time when the backups could be written to by another user.
Previously I did this by making my snapshots directory a read-only NFS export (only to localhost) and mounting that locally (the original secured under a directory lacking traversal rights for non-root users), but this feels like a hack, and it seems like there should be a better way to accomplish this. I did try the mount --bind option, but it seems to lack the ability to have different access levels (i.e. read-only versus read-write) on the two directories (unless I'm missing something: mount -r --bind dir1 dir2).
Any ideas how I can accomplish this without NFS, or is that my best option?
TL;DR: How can I make the contents of a file system available read-write to a select user, but strictly read-only to everyone else, while maintaining original permissions and ownerships on the files backed up to this file system?
|
This answer works on Debian (tested on lenny and squeeze). After investigation, it seems to work only thanks to a Debian patch; users of other distributions such as Ubuntu may be out of luck.
You can use mount --bind. Mount the “real” filesystem under a directory that's not publicly accessible. Make a read-only bind mount that's more widely accessible. Make a read-write bind mount for the part you want to expose with read-write access.
mkdir /media/hidden /media/hidden/sdz99
chmod 700 /media/hidden
mount /dev/sdz99 /media/hidden/sdz99
mount -o bind,ro /media/hidden/sdz99/world-readable /media/world-readable
mount -o bind /media/hidden/sdz99/world-writable /media/world-writable
In your use case, I think you can do:
mkdir /var/smb/hidden
mv /var/smb/snapshot /var/smb/hidden
mkdir /var/smb/snapshot
chmod 700 /var/smb/hidden
chmod 755 /var/smb/hidden/snapshot
mount -o bind,ro /var/smb/hidden/snapshot /var/smb/snapshot
I.e. put the real snapshot directory under a restricted directory, but give snapshot read permissions for everyone. It won't be directly accessible because its parent has restricted access. Bind-mount it read-only in an accessible location, so that everyone can read it through that path.
(Read-only bind mounts only became possible several years after bind mounts were introduced, so you might remember a time when they didn't work. I don't know offhand since when they work, but they already worked in Debian lenny (i.e. now oldstable).)
| Make all files under a directory read-only without changing permissions? |
1,300,982,059,000 |
[Disclaimer: there's no malicious intent to this question, I'm trying to understand the ln -s command for a school project]
Say I have a file system with my home folder, /home/anna. /home/bob is a folder I can't access, with a file I can't access, foo.txt
Can I successfully run ln -s /home/bob/foo.txt in my home folder? Is it correct to assume that if I can, it will produce a link I can't access (with the same permissions as foo.txt)?
What if I DID have read privileges on foo.txt, just not access to /home/bob?
What about the reverse case, where I could access /home/bob but not read foo.txt?
|
Yes, you can create a symbolic link to any location.
Can I successfully run ln -s /home/bob/foo.txt in my home folder? Is it correct to assume that if I can, it will produce a link I can't access (with the same permissions as foo.txt)?
Correct. The access restrictions of the target file apply. If you create a symlink to a restricted resource, you simply won't be able to access it. It is not even required that the target file actually exists.
A demo:
$ ln -s /etc/shadow foo
$ file foo
foo: symbolic link to /etc/shadow
$ cat foo
cat: foo: Permission denied
$ ln -s /etc/nonexistent bar
$ file bar
bar: broken symbolic link to /etc/nonexistent
What if I DID have read privileges on foo.txt, just not access to /home/bob?
If you don't have permissions on the parent directory, you can't access the contained file. So with a symlink you still wouldn't be able to access it. Creating a symlink doesn't affect the permissions.
What about the reverse case, where I could access /home/bob but not read foo.txt?
Again, you can create a symlink to it, but not access the file.
| Can I create a symbolic link to a file I can't access? |
1,300,982,059,000 |
I'm trying to run a game called "Dofus" in Manjaro Linux. I've installed it with packer, which put it under the /opt/ankama folder. This folder (and every file inside it) is owned by user root and group games. As instructed by the installing package, I've added myself (user familia) to the games group (by not doing so, "I would have to input my password every time I tried to run the updater").
However, when running the game, it crashes after inputting my password (which shouldn't be required). Checking the logs, I've got some errors like those:
[29/08 20:44:07.114]{T001}INFO c/net/NetworkAccessManager.cpp L87 : Starting request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D
[29/08 20:44:07.291]{T001}INFO c/net/NetworkAccessManager.cpp L313 : Request GET http://dl.ak.ankama.com/updates/uc1/projects/dofus2/updates/check.9554275D Finished (status : 200)
[29/08 20:44:07.292]{T001}ERROR n/src/update/UpdateProcess.cpp L852 : Can not cache script data
So, I suspect Permission Denied errors. An error message appears a moment after starting;
it translates to "An error has happened while writing to the disk - verify if you have sufficient rights and enough disk space".
Then, after some research, I came across "auditd" that can log file accesses in a folder. After setting it up, and seeing which file accesses were unsuccessful, this is the result.
All of those errors actually refer to a single file, /opt/ankama/transition/transition, with a syscall to open(). This file's permissions are rwxrwxr-x (775). So, I have rwx permissions on it, yet it gives me error exit -13, which is an EACCES error (Permission Denied).
I've already tried to reboot the computer, to log in and log out. None of them worked.
If I set the folder permissions to familia:games, it runs with no trouble, I don't even need to input my password. However, it doesn't seem right this way. Any ideas of why I get Permission Denied errors even though I have read/write/execute permissions?
Mark has said that I could need +x permissions in all directories of the path prefix. The path itself is /opt/ankama/transition/transition. The permissions for the path prefixes are:
/opt - drwxr-xr-x(755), ownership root:root
/opt/ankama - drwxr-xr-x(755), ownership root:games
/opt/ankama/transition - drwxrwxr-x(775), ownership root:games
However, one thing that I've noticed is that all subfolders of /opt/ankama are 775, even though the folder itself is 755. I don't think this means anything, and changing the permissions to 775 doesn't work.
Also, Giel suggested that I could have AppArmor running on my system. However, running # cat /sys/module/apparmor/parameters/enabled gives me N.
|
First, when you add yourself to a group, the change is not applied immediately. The easiest thing is to logout and log back in.
Then there are write permissions of data files (as mentioned already in some of the comments). However, the solutions are not good for security.
Add a group for the game. Do not add any user to this group.
Make the game executable by chmod -R ugo+rX game-directory
Give write permissions to group only and no-one else using chmod -R ug+w,o-w game-directory
Add game to group chgrp -R game-group game-directory, chmod -R g+s game-directory
or just addgroup game-group; chgrp -R game-group game-directory; chmod -R u=rwX,g=rwXs,o=rX game-directory
If the game needs to change permissions then you can do the same but for the user instead of the group, i.e.
adduser game-owner; addgroup game-group; chown -R game-owner:game-group game-directory; chmod -R u=rwXs,g=rwXs,o=rX game-directory
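The capital X used throughout these recipes adds the execute bit only to directories (and to files that already carry an execute bit for someone), which is what keeps plain data files non-executable. A quick demonstration in a scratch directory:

```shell
set -e
tmp=$(mktemp -d)
mkdir "$tmp/sub"
touch "$tmp/sub/data"

chmod -R go-rwx "$tmp/sub"   # lock group/other out entirely
chmod -R ugo+rX "$tmp/sub"   # X: search bit for the dir, nothing for the file

stat -c '%A %n' "$tmp/sub" "$tmp/sub/data"
# the directory ends up rwxr-xr-x, the file rw-r--r--
```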
| Why do I get "Permission Denied" errors even though I have group permission? |
1,300,982,059,000 |
I have the su executable with the following permissions:
bash-4.2# ls -la /bin/su
-rws--s--- 1 root wheel 59930 Sep 14 2012 ./su
When I am logged in as a user, not in the wheel group and try to run su, I get an error, which is correct:
bash-4.2$ su
bash: /bin/su: Permission denied
After that I add this user to wheel group from root:
bash-4.2# usermod -a -G wheel user
But for the same terminal session I still can't run su:
bash-4.2$ su
bash: /bin/su: Permission denied
For the new sessions I can run su.
How to allow to run su instantly after I added the user to the appropriate group?
|
Simply have the user run
newgrp wheel
This will start a new shell with the group ID changed to that of wheel. If you want to start a new shell and kill off the previous one, use
exec newgrp wheel
instead.
This is because the kernel still has the previous group set associated with the currently running processes.
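The difference is easy to see: id with no argument reports the groups the kernel attached to the current process, while naming the user makes it re-read the group database:

```shell
# Groups of this very process -- fixed when the session started:
id -Gn
# Groups according to the database right now -- re-read on each call:
id -Gn "$(id -un)"
# After `usermod -a -G wheel user`, only the second line shows wheel
# until the user logs in again (or runs newgrp / exec newgrp).
```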
| How to allow to run su instantly after I added the user to the appropriate group |
1,300,982,059,000 |
Recently I came across to command:
chmod -R 6050 /usr/lib/hadoop-yarn/bin/container-executor
I don't know what that mean? I know file permissions like 777 etc. in a mode rwx for owner group others. But this results in
---Sr-s---. 1 root hadoop 36024 Oct 17 20:40 container-executor
Can someone please explain a bit?
|
The 050 should be clear: it sets the read and execute bits for the group.
The leading 6 sets the set-user-ID and set-group-ID bits (see man 2 chmod).
Effectively this means that executing container-executor can only be done by root or members of the group hadoop, and that the executable runs with the effective uid being root and the effective gid being hadoop.
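You can reproduce the resulting mode string on any scratch file you own (no root needed, since chmod only requires ownership):

```shell
set -e
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 6050 "$tmp/demo"
stat -c '%A %a' "$tmp/demo"   # ---Sr-s--- 6050
```

The capital S in the owner column flags a set-user-ID bit with no underlying owner execute bit; the lowercase s in the group column means set-group-ID on top of group execute.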
| What is chmod 6050 good for |
1,300,982,059,000 |
After getting a new VPS with Debian 9, I created a new user using root.
I created a new username called joe with this command adduser joe. Then, I used usermod -aG sudo joe to grant administrative privileges.
After that, I logged out and used Putty to login as joe. I entered the password for joe. After entering the password, it displayed this message:
Could not chdir to home directory /home/joe: Permission denied
-bash: /home/joe/.bash_profile: Permission denied
I checked the directory of /home/joe by using this command:
sudo ls -al /home/joe
total 20
drw-r--r-- 2 joe joe 4096 Feb 7 16:32 .
drwxr-xr-x 4 root root 4096 Feb 7 16:32 ..
-rw-r--r-- 1 joe joe 220 Feb 7 16:32 .bash_logout
-rw-r--r-- 1 joe joe 3526 Feb 7 16:32 .bashrc
-rw-r--r-- 1 joe joe 675 Feb 7 16:32 .profile
How can I enter into /home/joe directory after login as joe?
|
Apparently /home/joe doesn't have execute permission for the user. Execute permission on a directory allows you to traverse it.
Try sudo chmod 755 /home/joe and then log in again.
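The broken listing above shows drw-r--r-- on the directory: the read bits are there, but the x (search) bits are missing, and chdir needs search permission. The before/after is easy to reproduce on a scratch directory:

```shell
set -e
tmp=$(mktemp -d)        # stand-in for /home/joe
chmod 644 "$tmp"
stat -c '%A' "$tmp"     # drw-r--r-- : readable, but not traversable
chmod 755 "$tmp"        # the fix from above
stat -c '%A' "$tmp"     # drwxr-xr-x : chdir works again
```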
| Could not chdir to home directory /home/user: Permission denied |
1,300,982,059,000 |
I have a ReadyNAS box named "storage" that I believe is based on Debian. I can ssh into it as root. I'm trying to reconfigure the webserver, but I'm running into a file permissions problem that I just don't understand. I can't do anything with /etc/frontview/apache/apache.pem even as root! It doesn't appear to have any special permissions compared to other files in the same directory and I can work with those.
storage:~# whoami
root
storage:~# cd /etc/frontview/apache/
storage:/etc/frontview/apache# ls -lah apache.pem*
-rw------- 1 admin admin 4.0k Jul 10 2013 apache.pem
-rw------- 1 admin admin 4.0k Jun 9 05:57 apache.pem.2017-02-04
-rw------- 1 admin admin 1.5k Jun 9 05:57 apache.pem.orig
storage:/etc/frontview/apache# touch apache.pem
touch: creating `apache.pem': Permission denied
storage:/etc/frontview/apache# touch apache.pem.2017-02-04
storage:/etc/frontview/apache# rm -f apache.pem
rm: cannot unlink `apache.pem': Operation not permitted
What is so special about this file that it can't be touched? I can't delete it. I can't change the permissions on it. I can't change the owner of it.
The directory seems to be fine. It has space left, it isn't mounted read-only. In fact I can edit other files in the same directory.
# ls -ld /etc/frontview/apache
drwxr-xr-x 8 admin admin 4096 Jun 9 05:44 /etc/frontview/apache
# df /etc/frontview/apache
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hdc1 2015824 504944 1510880 26% /
|
I just found the problem. The "immutable" attribute was set on that file. ls doesn't show it. You need a different command to see it:
# lsattr apache.pem*
----i--------- apache.pem
-------------- apache.pem.2017-02-04
-------------- apache.pem.orig
Once I remove the immutable bit, I can edit that file:
# chattr -i apache.pem
# touch apache.pem
| Permission denied for only a single file in a directory as root user on an ext3 filesystem under RAIDiator OS |
1,300,982,059,000 |
I have a problem with my sudo permissions in an MS ActiveDirectory to Debian LDAP authentication/authorization setup.
What I have so far
I've configured nslcd with libpam-ldap via ldaps and ssh login is working great.
getent passwd myuser
myuser:*:10001:10015:myuser:/home/myuser:/bin/bash
On my ActiveDirectory Server, the Unix Package is installed which adds the necessary attributes like posixGroup, posixAccount, gid, gidNumber, uid, uidNumber and so on.
My example user looks like this:
(I choose 10000+ to be on the safe side)
cn: myuser
uid: myuser
uidNumber: 10015
gidNumber: 10000
I can restrict SSH logins by adding the following to /etc/nslcd.conf
filter passwd (&(objectClass=posixAccount)(|(memberOf=CN=group1,OU=groups,DC=domain,DC=com)(memberOf=CN=group2,OU=groups,DC=domain,DC=com)))
This specifies that only users with objecClass=posixAccount and group either group1 or group2 can login.
So far so good. However, I can't tell sudo to use those groups.
Here is what I tried
in /etc/sudoers
// This one works, but only because the user has gidNumber=10000 set.
// It doesn't matter if a group with this ID actually exist or not.
// So it's not really permission by LDAP group.
%#10000 ALL=(root) ALL
// This is what I want, but it doesn't work.
%group1 ALL=(root) ALL
The Problem
Somehow I need to tell sudo to take the requesting username, check what ldap-groups it belongs to and then see if the permissions for that group are sufficient to execute the command or not.
Unfortunately I have no idea where to start. Everything else works so far and I'm only stuck on sudo permissions. I thought about mapping the user's gidNumber field to the group's gidNumber field, but I don't know if mapping a user field to a group field is even possible.
I don't think so, since mapping in nslcd is specified like this
map passwd field1 field2
and passwd tells nslcd that it has to map user fields. Instead of passwd I could use groups, but not both of them.
|
Sorry about the long post, but it seems to work. I just had a typo in my sudoers file. It took me a while to find, though, since the syntax was still correct but I couldn't execute any commands.
However, it's working now.
// Problem was that one ALL was missing, allowing me to execute no root cmds.
%group1 ALL=(root) !/bin/su
// Fixed it
%group1 ALL=(root) ALL, !/bin/su
Update: I realized a bit late but I also changed the following in /etc/nsswitch.conf
sudoers: ldap files
I just didn't think it was the fix because I still had the above mentioned sudoers typo.
Problem solved :)
| Sudo permissions by ldap groups via nslcd |
1,300,982,059,000 |
I have a PC with Ubuntu 16.04 installed. Recently I want to install some packages but have trouble installing them. After some digging, I found that the failure seems to be related to the linux user account system. The problem is that any file with a name prefixed by passwd. cannot be created in /etc path.
# ls /etc/passwd.*
ls: cannot access '/etc/passwd.*': No such file or directory
# touch /etc/passwd.test-test-test
touch: cannot touch '/etc/passwd.test-test-test': Permission denied
# ls /etc/passwe.*
ls: cannot access '/etc/passwe.*': No such file or directory
# touch /etc/passwe.test-test-test
#
I can create that file in other paths, such as / or /usr, but not in /etc, and I can create files with other names in /etc, but not with file names prefixed by passwd.. I can't reproduce this problem on other PCs.
I have tried other commands:
nano /etc/shadow.xxx
echo xxx > /etc/shadow.xxx
touch /etc/test-temp-file && mv /etc/test-temp-file /etc/shadow.xxx
systemctl stop apparmor
Reboot the system
Nothing works.
What could cause this problem?
Here are some debug command outputs:
# ls -ld /etc
drwxr-xr-x 136 root root 12288 Aug 12 10:07 /etc
# lsattr -d /etc
----------I--e-- /etc
# ls -dZ /etc
? /etc
# type -a touch
touch is /usr/bin/touch
touch is /bin/touch
# file "$(command -v touch)"
/usr/bin/touch: symbolic link to /bin/touch
Here is the strace output:
# strace touch /etc/passwd.test-test-test
execve("/usr/bin/touch", ["touch", "/etc/passwd.test-test-test"], [/* 22 vars */]) = 0
brk(NULL) = 0x8da000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=80559, ...}) = 0
mmap(NULL, 80559, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9bc360e000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1868984, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9bc360d000
mmap(NULL, 3971488, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9bc3033000
mprotect(0x7f9bc31f3000, 2097152, PROT_NONE) = 0
mmap(0x7f9bc33f3000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c0000) = 0x7f9bc33f3000
mmap(0x7f9bc33f9000, 14752, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9bc33f9000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9bc360c000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9bc360b000
arch_prctl(ARCH_SET_FS, 0x7f9bc360c700) = 0
mprotect(0x7f9bc33f3000, 16384, PROT_READ) = 0
mprotect(0x60e000, 4096, PROT_READ) = 0
mprotect(0x7f9bc3622000, 4096, PROT_READ) = 0
munmap(0x7f9bc360e000, 80559) = 0
brk(NULL) = 0x8da000
brk(0x8fb000) = 0x8fb000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1668976, ...}) = 0
mmap(NULL, 1668976, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9bc3473000
close(3) = 0
open("/etc/passwd.test-test-test", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = -1 EACCES (Permission denied)
utimensat(AT_FDCWD, "/etc/passwd.test-test-test", NULL, 0) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0
read(3, "# Locale name alias data base.\n#"..., 4096) = 2995
read(3, "", 4096) = 0
close(3) = 0
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "touch: ", 7touch: ) = 7
write(2, "cannot touch '/etc/passwd.test-t"..., 41cannot touch '/etc/passwd.test-test-test') = 41
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, ": Permission denied", 19: Permission denied) = 19
write(2, "\n", 1
) = 1
close(1) = 0
close(2) = 0
exit_group(1) = ?
+++ exited with 1 +++
|
I found out why. It is because ISecTP (Endpoint Security for Linux Threat Prevention) was installed on my PC. It includes "Access Protection", which uses either the fanotify kernel interface, or injection of a custom module into the kernel (configurable which of these it does), to cause accesses to arbitrary paths to be denied. I wasn’t aware of it because I’m not the only one who uses the PC. After uninstalling it, everything is fine now.
Thank you, everyone, for your help!
| Why can’t I create a file with a name prefixed by “passwd.” in “/etc”? |
1,434,136,955,000 |
I'm getting a permissions error in CentOS 7 when I try to create a hard link. With the same permissions set in CentOS 6 I do not get the error. The issue centers on group permissions. I'm not sure which OS version is right and which is wrong.
Let me illustrate what's happening. In my current working directory, I have two directories: source and destination. At the start, destination is empty; source contains a text file.
[root@tc-dlx-nba cwd]# ls -l
total 0
drwxrwxrwx. 2 root root 6 Jun 12 14:33 destination
drwxrwxrwx. 2 root root 21 Jun 12 14:33 source
[root@tc-dlx-nba cwd]# ls -l destination/
total 0
[root@tc-dlx-nba cwd]# ls -l source/
total 4
-rw-r--r--. 1 root root 8 Jun 12 14:20 test.txt
[root@tc-dlx-nba cwd]#
As you can see, regarding the permissions the two directories are 777, with both the owner and group set to root. The text file's owner and group are also both set to root. However, the text file's permissions are read-write for the owner but read-only for the group.
When I'm logged in as root, I have no problem creating a hard-link in the destination directory pointing to the text file (in the source directory).
[root@tc-dlx-nba cwd]# ln source/test.txt destination/
[root@tc-dlx-nba cwd]# ls destination/
test.txt
However, if I log in as another user, in this case, admin, I cannot create the link. I get: "Operation not permitted."
[root@tc-dlx-nba cwd]# rm -f destination/test.txt
[root@tc-dlx-nba cwd]# su admin
bash-4.2$ pwd
/root/cwd
bash-4.2$ ln source/test.txt destination/
ln: failed to create hard link ‘destination/test.txt’ => ‘source/test.txt’: Operation not permitted
What happens actually makes sense to me, but since the above is allowed in CentOS 6, I wanted to check to see if I was misunderstanding something. To me, it seems like a bug in CentOS 6 that has been fixed in CentOS 7.
Anyone know what gives? Am I right believing that the above behavior is the correct behavior? Is it CentOS 6 that is correct? Or, are both right and perhaps there is some subtle group permissions issue that I'm missing? Thanks.
Edit: I tried the same test just now on a Debian v7 VM that I have. Debian agrees with CentOS 7: "Operation not permitted."
Edit #2: I just tried the same thing on Mac OS X (Yosemite). That worked the way CentOS 6 did. In other words, it allowed the link to be created. (Note: On OS X, the root group is called "wheel." That's the only difference, as far as I can tell.)
|
I spun up some fresh CentOS 6 and 7 VMs and was able to recreate the exact behavior you showed. After doing some digging, it turns out that this is actually a change in the kernel's default behavior with respect to hard and soft links, made for the sake of security. The following pages pointed me in the right direction:
http://kernel.opensuse.org/cgit/kernel/commit/?id=561ec64ae67ef25cac8d72bb9c4bfc955edfd415
http://kernel.opensuse.org/cgit/kernel/commit/?id=800179c9b8a1
If you make the file world writable, your admin user will be able to create the hard link.
To revert to the CentOS 6 behavior system-wide, new kernel parameters were added. Set the following in /etc/sysctl.conf:
fs.protected_hardlinks = 0
fs.protected_symlinks = 0
then run
sysctl -p
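Before changing anything, you can inspect the current values; reading them requires no root privileges. A quick check (assuming a kernel new enough to expose these knobs, i.e. 3.6+):

```shell
# 1 = restricted (the CentOS 7 / new default), 0 = legacy CentOS 6 behavior
cat /proc/sys/fs/protected_hardlinks
cat /proc/sys/fs/protected_symlinks
```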
As for why your program opts to use links instead of copying files: why create an exact copy of a file when you can just create another directory entry that points to the same blocks? This saves disk space and the operation is less costly in terms of CPU and I/O. A hard link is not a new file at all — it is an additional name for the same inode. If you delete the original name after creating a hard link, the data is unaffected; a file's data is only freed once its link count drops to zero (and no process still has it open).
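Incidentally, the "same file" point is easy to demonstrate in a scratch directory (filenames here are arbitrary):

```shell
cd "$(mktemp -d)"
echo hello > original.txt
ln original.txt link.txt           # second name for the same inode
ls -i original.txt link.txt        # identical inode numbers
stat -c '%h' link.txt              # link count is now 2
rm original.txt
cat link.txt                       # data survives: hello
```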
| Hard link permissions behavior different between CentOS 6 and CentOS 7 |
1,434,136,955,000 |
With umask, I can determine the permissions for newly created files. But if I am a member of multiple groups, how do I set the default group for newly created files?
This question seems relevant, but its answers relate how the system administrator can change the default group for a particular user. I am not system administrator, but just a mere user, and have no permission to do usermod -g even on myself. So how would I proceed to set the default group for newly created files?
|
To change your default group on the fly, use newgrp:
newgrp some_group
After running that command, you will be in a new shell with your group set to some_group and files that you create will be in group some_group. newgrp may or may not ask for a password depending on how permissions are set.
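A quick way to see the effect: files you create are stamped with your current effective group, which is exactly what newgrp changes. A minimal check (run it before and after newgrp to compare; the temp filename is arbitrary):

```shell
# Show the current effective group, then confirm a new file inherits it
id -gn
tmp=$(mktemp)
stat -c '%G' "$tmp"   # same group name that id -gn printed
rm -f "$tmp"
```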
Related: To find out which groups you belong to, run groups.
| When a member of multiple groups, how do I set the default group for newly created files? |
1,434,136,955,000 |
I use the 'tap' net device with KVM to connect my VM to the Internet. But I have to be root, or use 'sudo', which is inconvenient. I think I can put my user account into some group so I can access the net device without root privileges. I tried the netdev group, but it does not work. My account is already in the kvm group.
What else should I do? Or is there any way to allow me using KVM freely without permission issue?
|
The group is whoever has read and write permissions to /dev/net/tun. The default setup varies from distribution to distribution. The ownership and permissions of devices is set by udev.
Create a file /etc/udev/rules.d/zzz_net_tun.rules containing
KERNEL=="tun", GROUP="netdev", MODE="0660", OPTIONS+="static_node=net/tun"
This will make the device accessible by all users in the netdev group. The setting takes effect when the device is created, so if it already exists, do chgrp netdev /dev/net/tun; chmod 660 /dev/net/tun.
(adapted from the Gentoo Wiki)
| Which user group can use the 'tap' net device? |
1,434,136,955,000 |
I am trying to understand the difference in behaviour between FreeBSD ACLs and Linux ACLs. In particular, the inheritance mechanism for the default ACLs.
I used the following on both Debian 9.6 and FreeBSD 12:
$ cat test_acl.sh
#!/bin/sh
set -xe
mkdir storage
setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
touch outside
cd storage
touch inside
cd ..
ls -ld outside storage storage/inside
getfacl -d storage
getfacl storage
getfacl outside
getfacl storage/inside
umask
I get the following output from Debian 9.6:
$ ./test_acl.sh
+ mkdir storage
+ setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
+ touch outside
+ cd storage
+ touch inside
+ cd ..
+ ls -ld outside storage storage/inside
-rw-r--r-- 1 aaa aaa 0 Dec 28 11:16 outside
drwxr-xr-x+ 2 aaa aaa 4096 Dec 28 11:16 storage
-rw-rw----+ 1 aaa aaa 0 Dec 28 11:16 storage/inside
+ getfacl -d storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::rwx
mask::rwx
other::---
+ getfacl storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:mask::rwx
default:other::---
+ getfacl outside
# file: outside
# owner: aaa
# group: aaa
user::rw-
group::r--
other::r--
+ getfacl storage/inside
# file: storage/inside
# owner: aaa
# group: aaa
user::rw-
group::rwx #effective:rw-
mask::rw-
other::---
+ umask
0022
Notice that the outside and inside files have different permissions. In particular, the outside file has -rw-r--r--, which is the default for this user and the inside file has -rw-rw----, respecting the default ACLs I assigned the storage directory.
The output of the same script on FreeBSD 12:
$ ./test_acl.sh
+ mkdir storage
+ setfacl -d -m u::rwx,g::rwx,o::-,m::rwx storage
+ touch outside
+ cd storage
+ touch inside
+ cd ..
+ ls -ld outside storage storage/inside
-rw-r--r-- 1 aaa aaa 0 Dec 28 03:16 outside
drwxr-xr-x 2 aaa aaa 512 Dec 28 03:16 storage
-rw-r-----+ 1 aaa aaa 0 Dec 28 03:16 storage/inside
+ getfacl -d storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::rwx
mask::rwx
other::---
+ getfacl storage
# file: storage
# owner: aaa
# group: aaa
user::rwx
group::r-x
other::r-x
+ getfacl outside
# file: outside
# owner: aaa
# group: aaa
user::rw-
group::r--
other::r--
+ getfacl storage/inside
# file: storage/inside
# owner: aaa
# group: aaa
user::rw-
group::rwx # effective: r--
mask::r--
other::---
+ umask
0022
(Note that Debian's getfacl also shows the default ACLs even without -d, whereas FreeBSD's does not, but I don't think the actual ACLs for storage are different.)
Here, the outside and inside files also have different permissions, but the inside file does not have the group write permission that the Debian version does, probably because the mask in Debian retained the w while the mask in FreeBSD lost the w.
Why did FreeBSD lose the w mask but Debian retained it?
|
In short, the two systems apply the umask differently when a default ACL is present.
A umask of 0022 clears exactly the write bit for group and other — which matches the w that FreeBSD's mask lost. You can change the umask to remove the write prohibition and check that the result changes accordingly.
The Solaris (SunOS) manual describes the Debian/Linux behavior (see the comments there as well): "… The umask(1) will not be applied if the directory contains default ACL entries. …" FreeBSD evidently applies the umask to the new file's mode even when default ACL entries exist, so the ACL mask ends up restricted to r--.
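The umask claim itself is easy to verify on plain files, with no ACLs involved: 0022 masks out exactly the group and other write bits at creation time. A minimal sketch in a scratch directory:

```shell
cd "$(mktemp -d)"
umask 0022
touch masked
stat -c '%a' masked    # 644: group/other write cleared by the umask
umask 0002
touch relaxed
stat -c '%a' relaxed   # 664: group write survives
```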
| Why did FreeBSD lose the w mask but Debian retained it? |