1,370,359,393,000 |
I'm taking some early forays into setting up a basic LAMP box. It's my first time setting up the software I'll use as opposed to just being handed a working environment, so go easy on me :)
I have installed Apache, and the corresponding htdocs folder has permissions of drwxr-xr-x. I can copy from remote to local fine, but when trying to copy a small directory I get permission denied.
I should mention I am logging in using my own admin user account on the box, and of course htdocs is not owned by me.
So I figure, in my naivety, that I just need to sudo the command - that didn't work. Okay, next I'll "fix" the permissions to 774 based on what I read on the web. Nope, that did not work either. I am thinking, "do I need to add write access to the third 'user' (other)?" That seems a weird one.
Then I read a forum thread where the guy was told that because the folder was root owned, he'd have to scp the files into his home/ dir on the remote host, then sudo cp them to the apache folder.
It seems a long-winded method to me, but before I try that, I thought I'd ask here whether it is true, whether there are any best practices, and whether any of my assumptions are wrong.
Secondly - what is appropriate permissions for htdocs?
I'm still in the early stages and will probably eventually set up some FTP access, but it would be good to know.
|
There are many ways to skin this cat. Here are some for you to consider:
The htdocs tree almost certainly doesn't have to be owned by root. What matters is that it be readable by the Apache user. Depending on the *ix system in question, that may be apache, www-data, or something else. The default file mode you give above, drwxr-xr-x (abbreviated 755) is fine for this.
So, the question is, who should own this tree, and which group should it belong to. This may be enough:
$ sudo chown -R dan:apache /var/www
This says user dan owns /var/www and everything under it (-R, recursive) and that group apache has some permissions to it, too. If httpd is running as group apache, it probably gets enough permission to read files in the tree and change directories within it, sufficient for most sites.
Another way is to go with whatever permissions you have and simply tell scp to impersonate the owner of the /var/www/ tree:
mybox$ scp ~/site-mirror/index.html www@example.com:/var/www/htdocs
That copies the local copy of the root index.html file to the appropriate location on example.com, logging in as user www. You can use whatever user name and host name you need here. You just need the ability to log in remotely as the owner of the /var/www/ tree. If you can't do that, consider going with option #1, at least to get things set up in a way that does allow you to scp files directly.
If you set up pre-shared keys for SSH, you won't even have to give a password.
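The key setup boils down to two commands. Here is a hedged sketch (the user name www and host example.com are just examples from above; the demo generates into a throwaway directory rather than ~/.ssh):

```shell
keydir=$(mktemp -d)    # stand-in for ~/.ssh in this demo
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519"   # key pair, no passphrase
ls "$keydir"
# Then install the public key on the server (requires the real remote host):
# ssh-copy-id -i "$keydir/id_ed25519.pub" www@example.com
```

After that, scp and rsync as www@example.com authenticate with the key instead of prompting for a password.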
Instead of scp, I recommend you use rsync for web site development:
mybox$ rsync -ave ssh --delete ~/site-mirror/ www@example.com:/var/www/htdocs
This mirrors the contents of ~/site-mirror on mybox (your local work machine) into /var/www/htdocs on example.com, logging in as user www. The advantage of using rsync over raw scp is that you don't have to copy and re-copy files that haven't changed. The Rsync algorithm computes the changes and sends only that.
| Permissions "problem" using SCP to copy to root owned folder from local |
1,370,359,393,000 |
I have a regular file and I changed its permissions to 444. I understand that since the file is write-protected we can't modify its contents, but when I try to remove the file using rm, it prints a warning asking whether I want to remove a write-protected file. My doubt is: doesn't whether a file can be deleted depend on the directory permissions? Why does rm generate a warning even when the directory has write and execute permission? Does file deletion also depend on the file's own permissions, or is it entirely dependent on the directory permissions?
|
Because the standard requires it:
3. If file is not of type directory, the -f option is not specified,
and either the permissions of file do not permit writing and the
standard input is a terminal or the -i option is specified, rm
shall write a prompt to the standard error and read a line from the
standard input. If the response is not affirmative, rm shall do
nothing more with the current file and go on to any remaining
files.
So a) this is a matter specific to the rm utility (it doesn't say anything about how permissions work in general) and b) you can override it with either rm -f file or true | rm file (in the latter case standard input is a pipe, not a terminal, so no prompt is issued).
Also, this has been rm's behaviour for quite a long time -- 46 years, or maybe even longer.
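Both escape hatches from b) can be seen in a throwaway directory (a sketch; the file names are made up):

```shell
dir=$(mktemp -d) && cd "$dir"
touch a b
chmod 444 a b      # write-protected: an interactive rm would now prompt
rm -f a            # -f: never prompt
true | rm b        # stdin is a pipe, not a terminal, so rm doesn't prompt either
ls "$dir"          # empty: both files are gone
```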
| Why rm gives warning when deleting a write protected file? |
1,370,359,393,000 |
I've created a new user for FTP.
useradd -g www-data -d /srv/www/vhosts/project/ black
I made a mistake, I actually need the user to be in the group www.
How can I change the group?
|
You can use:
sudo groupadd www # Add a new group, if it doesn't exist
usermod -a -G www black # Add the existing user black to www
usermod -g www black # Change the primary group of black to www
To confirm that it has been added:
groups black
For removing the user from a secondary group, in this case the www-data group:
gpasswd -d black www-data
| How to change usergroup? |
1,370,359,393,000 |
If I have a file that I want to make world-readable, but it is deep in several layers of directories that are not world-executable, I have to change the permissions for the whole path and the file.
I can do chmod 755 -R /first/inaccessible/parent/dir but that changes permissions for all the other files in the path directories and makes the file itself executable when I just want it to be readable.
Is there a straightforward way to do this in bash?
|
Integrating chepner's astute comment regarding the directories only really needing execute-permission:
Setup:
$ mkdir -p /tmp/lh/subdir1/subdir2/subdir3
$ touch /tmp/lh/subdir1/subdir2/subdir3/filehere
$ chmod -R 700 /tmp/lh
$ find /tmp/lh -ls
16 4 drwx------ 3 user group 4096 Oct 23 12:01 /tmp/lh
20 4 drwx------ 3 user group 4096 Oct 23 12:01 /tmp/lh/subdir1
21 4 drwx------ 3 user group 4096 Oct 23 12:01 /tmp/lh/subdir1/subdir2
22 4 drwx------ 2 user group 4096 Oct 23 12:01 /tmp/lh/subdir1/subdir2/subdir3
23 0 -rwx------ 1 user group 0 Oct 23 12:01 /tmp/lh/subdir1/subdir2/subdir3/filehere
Prep:
$ f=/tmp/lh/subdir1/subdir2/subdir3/filehere
Do it:
$ chmod o+r "$f"
$ (cd "$(dirname "$f")" && while [ "$PWD" != "/" ]; do chmod o+x .; cd ..; done)
chmod: changing permissions of `.': Operation not permitted
$ find /tmp/lh -ls
16 4 drwx-----x 3 user group 4096 Oct 23 12:01 /tmp/lh
20 4 drwx-----x 3 user group 4096 Oct 23 12:01 /tmp/lh/subdir1
21 4 drwx-----x 3 user group 4096 Oct 23 12:01 /tmp/lh/subdir1/subdir2
22 4 drwx-----x 2 user group 4096 Oct 23 12:01 /tmp/lh/subdir1/subdir2/subdir3
23 0 -rwx---r-- 1 user group 0 Oct 23 12:01 /tmp/lh/subdir1/subdir2/subdir3/filehere
If you really prefer the intermediate directories to also have other-execute permissions, just change the chmod command to chmod o+rx.
The error message I got from the above results from my non-root userid attempting to change the permissions of the /tmp directory, which I don't own.
The loop runs in a subshell to isolate the changing of directories from your current shell's $PWD. It starts the loop by entering the directory containing the file then loops upwards, chmod'ing along the way, until it lands in the root / directory. The loop exits when it reaches the root directory -- it does not attempt to chmod the root directory.
You could make a script-file or function out of it like so:
function makeitreadable() (
chmod o+r "$1"
cd "$(dirname "$1")" &&
while [ "$PWD" != "/" ]
do
chmod o+x .
cd ..
done
)
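Exercising it on a throwaway tree looks like this (a sketch; the function body from above is repeated so the snippet is self-contained, and the paths are made up):

```shell
makeitreadable() (
    chmod o+r "$1"
    cd "$(dirname "$1")" &&
    while [ "$PWD" != "/" ]; do
        chmod o+x .
        cd ..
    done
)

base=$(mktemp -d)
mkdir -p "$base/a/b"
touch "$base/a/b/secret.txt"
chmod -R 700 "$base"
makeitreadable "$base/a/b/secret.txt" 2>/dev/null   # chmod errors on dirs we don't own are expected
stat -c '%a %n' "$base/a/b" "$base/a/b/secret.txt"  # dirs gain o+x (701), the file gains o+r (704)
```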
| Quickly make a file readable and its path executable? |
1,370,359,393,000 |
I have a script which loads the SSH key from a variable (as part of a script in a CI environment), in order not to keep the private key file in the public repository. However, ssh-add complains about wrong permissions (and it seems it's not possible to bypass that check). So my approach is to find a method of changing the permissions of an anonymous pipe which is created on the fly.
For example:
$ stat <(:)
File: ‘/dev/fd/63’
Size: 0 Blocks: 0 IO Block: 512 fifo
Device: 397f3928h/964639016d Inode: 818277067 Links: 0
Access: (0660/prw-rw----) Uid: ( 501/ kenorb) Gid: ( 20/ staff)
Access: 2015-10-10 22:33:30.498640000 +0100
Modify: 2015-10-10 22:33:30.498640000 +0100
Change: 2015-10-10 22:33:30.498640000 +0100
Birth: 2015-10-10 22:33:30.498640000 +0100
shows 0660 permission. I've checked my umask and it seems it has nothing to do with that.
Here is a simple test (on OS X, which by default has 0660):
$ ssh-add <(cat ~/.ssh/id_rsa)
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0660 for '/dev/fd/63' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
On Linux it seems to work, because it's 0500 by default. Where is this permission controlled from?
To clarify, I'm not looking to change the permission of any file, as I'd like to use an anonymous pipe.
The question is:
How do I temporarily change the permissions of a pipe?
|
So far I've found the following workaround using named FIFO:
$ mkfifo -m 600 fifo
$ cat ~/.ssh/id_rsa >fifo | ssh-add fifo
Identity added: fifo (fifo)
where the option -m sets the FIFO permission.
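Note that -m sets the mode exactly, without it being masked by the umask. A quick check (a sketch in a temporary directory):

```shell
cd "$(mktemp -d)"
mkfifo -m 600 fifo
mode=$(stat -c '%a %F' fifo)
echo "$mode"        # 600 fifo
rm fifo             # remove the named pipe once the key has been added
```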
| How to change permission of anonymous pipe? |
1,370,359,393,000 |
I need to pass a user and a directory to a script and have it spit out a list of what folders/files in that directory that the user has read access to. MS has a tool called AccessChk for Windows that does this but does something like this exist on the Unix side? I found some code that will do this for a specific folder or file but I need it to traverse a directory.
|
TL;DR
find "$dir" ! -type l -print0 |
sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r'
You need to ask the system whether the user has read permission. The only reliable way is to switch the effective uid, effective gid and supplementary gids to those of the user and use the access(R_OK) system call (even that has some limitations on some systems/configurations).
The longer story
Let's consider, for instance, what it takes for $user to have read access to /foo/file.txt (assuming neither /foo nor /foo/file.txt is a symlink).
He needs:
search access to / (no need for read)
search access to /foo (no need for read)
read access to /foo/file.txt
You can see already that approaches that check only the permission of file.txt won't work because they could say file.txt is readable even if the user doesn't have search permission to / or /foo.
And an approach like:
sudo -u "$user" find / -readable
Won't work either because it won't report the files in directories the user doesn't have read access (as find running as $user can't list their content) even if he can read them.
If we forget about ACLs or other security measures (apparmor, SELinux...) and only focus on traditional permission and ownership attributes, to get a given (search or read) permission, that's already quite complicated and hard to express with find.
You need:
if the file is owned by you, you need that permission for the owner (or have uid 0)
if the file is not owned by you, but the group is one of yours, then you need that permission for the group (or have uid 0).
if it's not owned by you, and not in any of your groups, then the other permissions apply (unless your uid is 0).
In find syntax, here is an example with a user of uid 1 and gids 1 and 2; that would be:
find / -type d \
\( \
-user 1 \( -perm -u=x -o -prune \) -o \
\( -group 1 -o -group 2 \) \( -perm -g=x -o -prune \) -o \
-perm -o=x -o -prune \
\) ! -type d -o -type l -o \
-user 1 \( ! -perm -u=r -o -print \) -o \
\( -group 1 -o -group 2 \) \( ! -perm -g=r -o -print \) -o \
! -perm -o=r -o -print
That one prunes the directories that user doesn't have search right for and for other types of files (symlinks excluded as they're not relevant), checks for read access.
Or for an arbitrary $user and its group membership retrieved from the user database:
groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "
find / -type d \
\( \
-user "$user" \( -perm -u=x -o -prune \) -o \
\( -group $groups \) \( -perm -g=x -o -prune \) -o \
-perm -o=x -o -prune \
\) ! -type d -o -type l -o \
-user "$user" \( ! -perm -u=r -o -print \) -o \
\( -group $groups \) \( ! -perm -g=r -o -print \) -o \
! -perm -o=r -o -print
The best here would be to descend the tree as root and check the permissions as the user for each file.
find / ! -type l -exec sudo -u "$user" sh -c '
for file do
[ -r "$file" ] && printf "%s\n" "$file"
done' sh {} +
Or with perl:
find / ! -type l -print0 |
sudo -u "$user" perl -Mfiletest=access -l -0ne 'print if -r'
Or with zsh:
files=(/**/*(D^@))
USERNAME=$user
for f ($files) {
[ -r $f ] && print -r -- $f
}
Those solutions rely on the access(2) system call. That is instead of reproducing the algorithm the system uses to check for access permission, we're asking the system to do that check with the same algorithm (which takes into account permissions, ACLs...) it would use would you try to open the file for reading, so is the closest you're going to get to a reliable solution.
Now, all those solutions try to identify the paths of files that the user may open for reading, that's different from the paths where the user may be able to read the content. To answer that more generic question, there are several things to take into account:
$user may not have read access to /a/b/file but if he owns file (and has search access to /a/b, and he's got shell access to the system), then he would be able to change the permissions of the file and grant himself access.
Same thing if he owns /a/b but doesn't have search access to it.
$user may not have access to /a/b/file because he doesn't have search access to /a or /a/b, but that file may have a hard link at /b/c/file for instance, in which case he may be able to read the content of /a/b/file by opening it via its /b/c/file path.
Same thing with bind-mounts. He may not have search access to /a, but /a/b may be bind-mounted in /c, so he could open file for reading via its /c/file other path.
That leaves finding the paths that $user would be able to read. To address 1 and 2, we can't rely on the access(2) system call anymore. We could adjust our find -perm approach to assume search access to directories, or read access to files, as soon as you're the owner:
groups=$(id -G "$user" | sed 's/ / -o -group /g'); IFS=" "
find / -type d \
\( \
-user "$user" -o \
\( -group $groups \) \( -perm -g=x -o -prune \) -o \
-perm -o=x -o -prune \
\) ! -type d -o -type l -o \
-user "$user" -print -o \
\( -group $groups \) \( ! -perm -g=r -o -print \) -o \
! -perm -o=r -o -print
We could address 3 and 4 by recording the device and inode numbers of all the files $user has read permission for, and reporting all the file paths that have those dev+inode numbers. This time, we can use the more reliable access(2)-based approaches:
Something like:
find / ! -type l -print0 |
sudo -u "$user" perl -Mfiletest=access -0lne 'print 0+-r,$_' |
perl -l -0ne '
($r,$p) = /(.)(.*)/;
($dev,$ino) = stat$p or next;
$readable{"$dev,$ino"} = 1 if $r;
push @{$p{"$dev,$ino"}}, $p;
END {
for $i (keys %readable) {
for $p (@{$p{$i}}) {
print $p;
}
}
}'
And merge both solutions with:
{ solution1; solution2
} | perl -l -0ne 'print unless $seen{$_}++'
As should be clear if you've read everything thus far, part of it at least only deals with permissions and ownership, not the other features that may grant or restrict read access (ACLs, other security features...). And as we process it in several stages, some of that information may be wrong if the files/directories are being created/deleted/renamed or their permissions/ownership modified while that script is running, like on a busy file server with millions of files.
Portability notes
All that code is standard (POSIX, Unix for t bit) except:
-print0 is a GNU extension now also supported by a few other implementations. With find implementations that lack support for it, you can use -exec printf '%s\0' {} + instead, and replace -exec sh -c 'exec find "$@" -print0' sh {} + with -exec sh -c 'exec find "$@" -exec printf "%s\0" {\} +' sh {} +.
perl is not a POSIX-specified command but is widely available. You need perl-5.6.0 or above for -Mfiletest=access.
zsh is not a POSIX-specified command. That zsh code above should work with zsh-3 (1995) and above.
sudo is not a POSIX-specified command. The code should work with any version as long as the system configuration allows running perl as the given user.
| How to recursively check if a specific user has read access to a folder and its contents |
1,370,359,393,000 |
I'm using the Raspbian (a distribution made for Raspberry Pi, which is based on Debian).
I have some scripts that use i2c.
Normally only root has read and write permissions for i2c.
I'm using this command to add i2c r/w permissions for normal user:
# chmod a+rw /dev/i2c-*
However after reboot, these devices have their default permissions.
What is the best way to make my i2c available for r/w for a normal user permanently?
Is there a more "elegant" way than adding my script to init.d that runs the command above after my Raspberry Pi boots?
|
You can do this using udev. Create a file in /etc/udev/rules.d with the suffix .rules, e.g. local.rules, and add a line like this to it:
ACTION=="add", KERNEL=="i2c-[0-1]*", MODE="0666"
MODE="0666" is rw for owner, group and world. Something you can do instead of, or together with, that is to specify a GID for the node, e.g.:
GROUP="pi"
If you use this instead of the MODE setting, the default, 0660 (rw for owner and group) will apply, but the group will be pi, so user pi will have rw permissions. You can also specify the OWNER the same way.
Pay attention to the difference between == and = above. The former is to test if something is true, the latter sets it. Don't mix those up by forgetting a = in ==.
You have to reboot for this to take effect (or reload the rules with udevadm control --reload-rules and re-trigger the device, e.g. with udevadm trigger).
Reference: "Writing udev rules"
| How can I set device rw permissions permanently on Raspbian? |
1,370,359,393,000 |
For my home network I wanted to buy a NAS which supports disk encryption and NFS, since it is important to me that the backup is encrypted but also that it preserves owners, groups and permissions (hence NFS). That way, I thought, I could use something like rsnapshot or rBackup to back up my data and keep multiple snapshots. Unfortunately I didn't find any NAS which supports NFS and encryption at the same time, so I was wondering if there is any possibility to get this using a NAS without NFS (using for example CIFS instead). So I am looking for a backup solution which meets the following requirements:
backup to a NAS in my local network (i.e. I don't want to use a local usb drive)
it should preserve owners, groups, permissions and symbolic links
it should be encrypted
there should be multiple snapshots available like in rsnapshot or rBackup
it should be easy to access the files of a snapshot
it should not be too slow
Any ideas how to do this in detail?
Edit:
I just try to sum up the answers so far and want to ask some additional question to clarify some points. It seems to be the most flexible option to use a container, FUSE or otherwise "faked" filesystem which doesn't depend on the target device. In this approach I can use any backup script I like and the encryption is done by the client CPU. The possibilities are:
EncFS
Truecrypt
dmcrypt/luks
S3QL
I am not sure if it is possible to read and write on the NAS via S3QL from two clients simultaneously. Is it correct that this is no problem for the other approaches? Concerning the permissions, in any case I just have to make sure it works with NFS. For example, I could make my backup script preserve numerical uid/gid and set up no users on the NAS at all.
EncFS seems to be the easiest solution so far. With Truecrypt and dmcrypt/LUKS I have to choose the container size in advance, which seems not as flexible as EncFS. However, are there any significant differences between those solutions concerning read/write performance and stability?
Another interesting approach mentioned so far is to use duplicity as a backup script which does the encryption via gpg by itself.
|
You can use EncFS on top of NFS
encfs /encrypted_place_at_nfs /mnt/place_to_access_it_unencrypted
| Best way to make encrypted backups while preserving permissions to a windows file system |
1,332,495,956,000 |
I have a remote mount with cifs, and it would seem there is no way to have bash execute scripts from that mount. Is it possible to enable such execution?
ls -lh ini*
-rwxrwxr-x 1 alan 500 222 2012-03-23 10:16 initall.sh
bash --version
GNU bash, version 4.2.8(1)-release (i686-pc-linux-gnu)
./initall.sh
bash: ./initall.sh: Permission denied
The cifs mount also seems to support unix extensions, as I am able to chmod the file correctly.
Here follows the mount options:
user,auto,pass=***,uid=alan,user=***
|
The user mount option turns off exec by default. Change the mount options to include exec explicitly.
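For example, the fstab entry might look like this (a sketch; server, share, mount point and credentials are placeholders, and note that exec must come after user, since later options override earlier ones):

```shell
# /etc/fstab -- "user" implies noexec (along with nosuid and nodev);
# adding "exec" after it turns execution back on:
# //server/share  /mnt/share  cifs  user,auto,exec,pass=***,uid=alan,user=alan  0  0

# For an already-mounted share, a remount also works (needs root):
# sudo mount -o remount,exec /mnt/share
```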
| Is it possible to enable execution of files from a cifs mount in bash? |
1,332,495,956,000 |
Updated (and snipped) with more details below.
I've set up a cron script and I'm trying to debug why it's not running. [Snipped context testing, which is all OK; see revision 2 for details] The command itself, in case it helps (arrows indicate line-wrapping for legibility), is:
/usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
[Snipped permissions testing, which is all ok; see below and revision 2 for details]
Checking crontab (again, wrapped for legibility), I get:
[blackero@XXXXXXXXXXX to]$ sudo crontab -u cronuser -l
MAIL="blackero@localhost"
30 9 * * * cronuser /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
20 18 7 * * cronuser /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
22 18 7 * * cronuser echo "Test" > /path/to/test.txt
↪ 2> /path/to/error.txt
Update #1 at 2012-02-08 12:32 Z
[Snip: Having tried derobert's suggestion (revision 3)], I know that the cronuser can run the script properly and can write to the two .log files. (One of the first things the process.php script does is download a file by FTP; it is successfully doing that too.) But, even after fixing the MAIL="" line (both by removing it and by changing it to MAILTO="blackero@localhost"), the cron task still doesn't run, nor does it send me any email.
A friend suggested that I retry the
9 12 8 * * cronuser /bin/echo "Test" > /var/www/eDialog/test.txt
↪ 2> /var/www/eDialog/error.txt
task, after passing the full path to /bin/echo. Having just tried that, it also didn't work and also generated no email, so I'm at a loss.
Update #2 at 2012-02-08 19:15 Z
After a very useful chat conversation with oHessling, it would seem that the problem is with PAM. For each time that cron has tried to run my job, I have /var/log/cron entries:
crond[29522]: Authentication service cannot retrieve authentication info
crond[29522]: CRON (cronuser) ERROR: failed to open PAM security session: Success
crond[29522]: CRON (cronuser) ERROR: cannot set security context
I fixed that by adding the following line to /etc/shadow:
cronuser:*:15217:0:99999:7:::
As I found on a forum, if the user does not appear in /etc/shadow, then pam won't continue processing the security request. Adding * as the second column means this user cannot log in with a password (as no hash is specified). Fixing that led to a different error in /var/log/cron, so, double-checking my crontab I noticed I had specified the username each time.
Correcting that means my crontab now reads:
[blackero@XXXXXXXXXXX ~]$ sudo crontab -u cronuser -l
MAILTO="blackero@localhost"
30 9 * * * /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
52 18 8 * * /usr/bin/php -C /etc /path/to/process.php
↪ >>/path/to/stdout.log 2>>/path/to/stderr.log
9 12 8 * * /bin/echo "Test" > /path/to/test.txt
↪ 2> /path/to/error.txt
but now /var/log/cron shows me:
Feb 8 18:52:01 XXXXXXXXXXX crond[16279]: (cronuser) CMD (/usr/bin/php -C /etc
↪ /path/to/process.php >>/path/to/stdout.log 2>>/path/to/stderr.log)
and nothing comes into stdout.log or stderr.log. No mail was sent to me, and none of the other files in /var/log/ has any entry in the right timeframe. I'm running out of ideas as to where to look to see what's going wrong.
|
I've found the problem: the -C command-line switch I was sending to php, which should have been -c. I've no idea why cron wasn't reporting that to me in any manner, let alone a useful manner (or how I somehow managed to get it into the crontab with a capital C but test it on the CLI with a lowercase one), but running it yet again on the CLI with a colleague here acting as my monkey, it was suddenly obvious.
Now how stupid do I feel?
Well, at least it's resolved now and cron is happily running my damn script. Thank you everyone for all your help.
| Frustrating issue where neither cron nor su -c runs my job (permissions?) |
1,332,495,956,000 |
When I write:
chmod g=rws,u=rwx,o=rx folder_name
I get:
drwxrwSr-x
But S is not the same as s, right?
|
You specified the s bit for the group without the x bit. Executable and setgid are separate bits; the ls command just combines them into a single letter to save space. You need
chmod g=rwxs,u=rwx,o=rx folder_name
i.e. chmod 2775 folder_name. You did chmod 2765 folder_name.
S means setuid/setgid without the executable bit; it's shown in uppercase because that's rarely useful and likely a sign of a mistake that needs to be fixed.
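The difference is easy to see side by side (a sketch using a throwaway directory):

```shell
cd "$(mktemp -d)"
mkdir shared
chmod g=rwxs,u=rwx,o=rx shared
with_x=$(ls -ld shared | cut -c1-10)     # drwxrwsr-x : lowercase s = setgid + execute
chmod g=rws shared
without_x=$(ls -ld shared | cut -c1-10)  # drwxrwSr-x : uppercase S = setgid, no execute
echo "$with_x $without_x"
```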
| Directory special permission problem |
1,332,495,956,000 |
When a process successfully gets an fd using open(flags=O_RDWR), it will be able to read/write that file as long as the fd isn't closed (for a regular file on a local filesystem), even if some other process uses chmod to remove the read/write permission for the corresponding user. Does the Linux kernel check file permissions against the inode or against the open file description? And what about when the process tries to execute that file using execveat -- will the kernel read the disk to check the x bit and the suid bit? What kind of permissions are recorded in the open file description: does it contain a full ACL, or simply readable/writable bits, meaning that every other operation (execveat, fchdir, fchmod, etc.) checks the on-disk info?
What if I transfer this fd to another process whose fsuid doesn't have the read/write/execute bit on that file (according to the on-disk filesystem info)? Will the receiving process be able to read/write/execute the file through the fd?
|
execveat is handled by do_open_execat, which specifies that it wants to be able to open the target file for execution. The file-opening procedure is handled via do_filp_open and path_openat, with a path-walking process which is documented separately. The result of all this, regardless of how the process starts, is a struct file and its associated struct inode, which stores the file's mode and, if relevant, a pointer to the ACLs. The inode data structure is shared by all the file descriptions which reference the same inode.
The kernel guarantees that the inode information in memory is up-to-date when retrieved. This can be maintained in the dentry and inode caches in some cases (local file systems, ext4, ext3, XFS, and btrfs in particular), in others it will involve some I/O (in particular over the network).
The permission check itself is performed a little later, by bprm_fill_uid; that takes into account the current permissions on the inode, and the current privileges of the calling user.
As discussed previously, permissions are only verified when a file is opened, mapped, or its metadata altered, not when it’s read or written; so file descriptors can be passed across processes without new permission checks.
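That last point is easy to demonstrate (a sketch with a temporary file): the permission check happens at open(2), so reads through an fd that is already open keep working after the mode bits are removed.

```shell
f=$(mktemp)
echo secret > "$f"
exec 3< "$f"        # permission is checked here, at open time
chmod 000 "$f"      # strip every permission bit from the inode
got=$(cat <&3)      # the already-open fd is unaffected
echo "$got"         # secret
exec 3<&-           # close the fd
rm -f "$f"
```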
| How does Linux check permission for file descriptor? |
1,332,495,956,000 |
The problem is that I have PHP files that do not work in the browser. I suspect it's because the user is missing read permissions.
The files are located in a dir called "ajax"
drwxrwxrwx. 2 root root 4096 Sep 13 14:33 ajax
The content of that dir:
-rwxrwxrwx. 1 root root 13199 Sep 13 14:33 getOrderDeliveryDates.php
-rwxrwxrwx. 1 root root 20580 Sep 13 14:33 getParcelShops.php
-rwxrwxrwx. 1 root root 1218 Sep 13 14:33 index.php
-rwxrwxrwx. 1 root root 814 Sep 13 14:33 lang.php
-rwxrwxrwx. 1 root root 6001 Sep 13 14:33 prod_reviews.php
I'm 100% certain I'm logged in as root:
[root@accept: nl (MP-git-branch)] $
Double-checking with the id command:
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
It is driving me nuts.
tried sudo (even though I already am root).
sudo chmod 777 filename
tried chown (even though I already am the owner root).
sudo root filename
There are no errors or warnings at all.
OS is CentOS 6
|
CentOS (and other Fedora/RHEL derivatives) enables an additional security mechanism known as SELinux. It applies additional restrictions on most system daemons. These additional restrictions are checked after regular unix permissions.
For non-default configurations you often need to adjust SELinux. Files contain a specific security label, which is used by SELinux to apply the security policy. If your problem only occurs with some files, you need to correct the SELinux labels on the problematic files. Use chcon with --reference= option to copy the label from a file which works to apply the same label on your problematic file(s):
chcon --reference=<path to working file> <path to not working file(s)>
If your files are in non-standard location, you should add a rule in file labeling database. This avoids problems the next time file system is relabeled or restorecon is used. Choose the label appropriately or use the already applied label (check the existing security labels with ls -lZ).
Adding a labeling rule for /path/to/directory and its contents using semanage:
semanage fcontext -a -t httpd_user_rw_content_t '/path/to/directory(/.*)?'
If your files are on different file system, you can use context option for the mount point to apply/override the default labeling.
| Cannot change permissions as root on a file owned by root |
1,332,495,956,000 |
When modifying permissions on Windows I backup the ACLs to a file first using a commands like:
subinacl /noverbose /output=C:\temp\foldername_redir_permissions_backup_star_star.txt /subdirectories "W:\foldername\*.*"
and...
subinacl /noverbose /output=C:\temp\foldername_redir_permissions_backup.txt /subdirectories "W:\foldername\"
...to back them up.
And then if they need to be restored, a command like...
subinacl /playfile C:\temp\foldername_redir_permissions_backup_star_star.txt
...can be used to restore them.
So can the same thing be done for POSIX permissions on Linux / Unix? And what about ACL extended permissions?
|
setfacl is designed to accept getfacl output as input. Meaning you can run getfacl, save the output to a file, do your thing, then restore the ACL. The exact procedure can vary depending on your platform. On Linux though:
# Take a peek at the current ACL
[root@vlp-fuger ~]# getfacl newFile
# file: newFile
# owner: root
# group: root
user::rw-
group::r--
group:provisor:rwx
mask::rwx
other::r--
# Backup ACL
[root@vlp-fuger ~]# getfacl newFile > newFile.acl
# Remove the group permission, add another that we'll later want to get rid of
[root@vlp-fuger ~]# setfacl -x g:provisor newFile
[root@vlp-fuger ~]# setfacl -m g:ihtxadm:r-x newFile
[root@vlp-fuger ~]# getfacl newFile
# file: newFile
# owner: root
# group: root
user::rw-
group::r--
group:ihtxadm:r-x
mask::r-x
other::r--
# Restore ACL to where it was
[root@vlp-fuger ~]# setfacl --restore=newFile.acl
# Resulting ACL
[root@vlp-fuger ~]# getfacl newFile
# file: newFile
# owner: root
# group: root
user::rw-
group::r--
group:provisor:rwx
mask::rwx
other::r--
You can also use --set-file on the setfacl you use to restore and set it to - if you want to pipe the old ACL in. You can also use getfacl -R to backup the ACL's of entire directory trees.
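A minimal unprivileged sketch of that recursive form, using only the base owner/group/other entries and throwaway paths so it needs neither root nor extended ACL entries:

```shell
umask 022
tmp=$(mktemp -d); cd "$tmp"
mkdir -p tree/sub
touch tree/file tree/sub/file2
chmod 640 tree/file
getfacl -R tree > acl.bak     # recursive backup of the whole tree
chmod 600 tree/file           # change something...
setfacl --restore=acl.bak     # ...and put it back from the backup
stat -c '%a' tree/file        # prints 640 again
cd /; rm -rf "$tmp"
```

Note that the paths recorded by getfacl -R are relative, so the restore is run from the same working directory as the backup.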
| Equivalent of subinacl.exe on linux / unix? |
1,332,495,956,000 |
It is a production server - CentOS 6.1
Some user with root access logged on to the server in the past, removed the SUID bit from the /bin/su binary, and then exited root. Now we are not able to switch back to root. SSH access for root is also disabled on the server, so root cannot log in to the machine directly. Since we can neither su to root nor ssh in as root, we can't set the SUID bit on /bin/su back. (We are also unable to switch between users using su.)
How it should had been:
$ ll /bin/su
-rwsr-xr-x. 1 root root 30092 Mar 10 2011 /bin/su
How it is right now:
$ ll /bin/su
-rwxr-xr-x. 1 root root 30092 Mar 10 2011 /bin/su
Is there any way we can switch back to root or set the SUID bit in any way?
Note: We want to avoid a reboot into single-user/no-networking mode because the server is in use 24x7 and getting downtime is difficult. If a reboot were possible, we could simply log in as root in single-user mode and reset that bit.
Feel free to give creative answers. I can test your answers on our test environment.
|
Some options:
sudo -i, that's the most obvious alternative.
sudo -l then look for a command that you are allowed to use that could solve the problem, like: editing a file executed by root (crontab, logrotate), executing yum/rpm...
go to the console, and connect as root (only ssh is restricted if I understood)
open a graphical session; some distributions have tools to become root that don't rely on sudo. Also, many of them have an update manager... Maybe you can reinstall the package which provides su.
if you have a configuration management tool like puppet/chef/ansible/fai... Push the configuration!
investigate your crontabs to see if you can edit a file to escalate.
if your server is connected to a central authentication system (especially LDAP/NIS), create an account with high permissions (group wheel, or a user with uid=0).
if it's a virtual server, shut it down, then mount and edit the filesystem.
Some silver bullets:
reboot your server in single user mode (red hat) or specify init=/bin/sh (Debian and rhel/CentOS 7), then fix the permission.
reboot the server in a CD/DVD/USB/NetBoot and use the recovery (or just mount and edit)
And some really ugly:
find a vulnerability in your system to compromise it!
If your sysadmin did a good job, a regular user can't do any of those things (but you are the sysadmin)
| SUID accidently removed from /bin/su file |
1,332,495,956,000 |
I would like to lock down an Arch Linux user account to the maximum extent possible. The only functionality required for the account is to accept a non-terminal SSH session which allows the client to create a tunnel to the internet.
The situation is that I want to share my remote connection with a few friends. I will provide them with an SSH key for the account and configure their programs as necessary.
The complication is that I don't want to place 100% faith in their ability to secure the key file. I'd rather minimize the potential damage of a compromise while it's still hypothetical - and take the opportunity to learn more about security.
Is there any way I can achieve a completely isolated and/or locked down account? Can I allow SSH connections but refuse terminal access?
I appreciate any help!
|
When you add keys to an authorized_keys file you have several options to restrict what that key can do. In this situation, you can disallow running any commands. Simply prefix it with command="".
For example:
command="" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDc7nKsHpuC6W/U131p0yDh455sLE9pWmFxdK...
When the user wants to connect, they have to pass -N to ssh. This tells the ssh client not to try running a command, but to just open a connection (and do tunneling if configured). If the client is started without -N, it'll immediately disconnect.
For example:
ssh -N -D 8080 host.example.com
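On OpenSSH 7.2 and newer there is also the restrict option, which disables everything (pty allocation, port/agent/X11 forwarding, user rc) in one keyword and lets you re-enable only what you need. A hedged sketch of such an authorized_keys line (the key material and comment are placeholders):

```
restrict,port-forwarding,command="" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... tunnel-only
```

Here port-forwarding selectively re-enables tunneling, which restrict would otherwise have turned off.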
| How can I lock down a user account to the point that it can read/write/execute as little as possible? |
1,332,495,956,000 |
I'm writing a bash script that does a little house-cleaning for me (clearing the log files from any Rails projects in the current directory). I'm making it executable, and I'm not sure what best practice dictates as far as setting the "group" and "others" file permissions. Should I just set permissions to 700 (only the owner can rwx)?
Part of my confusion is how "ownership" is determined when a file is copied from one system to another. If my UID is 509 and I set my_file.sh file-permissions to 700, I'm guessing the file-ownership is determined by storing a UID on the file. If I share my_file.sh and someone downloads it to their system, does the UID get changed to match their own? Does it depend on how the file is transferred (scp, git, http, etc.)?
|
I'm not sure what best practice dictates as far as setting the "group" and "others" file permissions.
A normal approach would be 755, so group and other have read-execute permission. Pretty much everything in (e.g.) /usr/bin is set that way.
If I share my_file.sh and someone downloads it to their system, does the UID get changed to match their own? Does it depend on how the file is transferred (scp, git, http, etc.)?
Almost certainly it does get set to their UID, but there are methods that can retain the original value -- for example, if you tar the file and then open it somewhere as root, you get the original numerical UID.
There's generally no significance to that, however, unless you have some particular reason you want to deploy the file somewhere with a specific UID. If your concern is about the ability to read and modify the script, there's no way to prevent that regardless of what UID or methodology you use. Someone who transports a file to another system where they have root access can do whatever they want with it. However, they cannot replace or modify the original unless they have write permission on the original.
On the local system, a file with read-execute permission can be copied (since it is readable) and the copy will have the UID of the user who made the copy. You can set a file executable but not readable, but there is no purpose in doing so; a file must be executable and readable in order to be executed. In other words, if you want everyone to be allowed to use the script, you need minimally world read-execute permission on it.
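A small sketch of the 755 case, run entirely in a throwaway directory (the filename is illustrative, and stat -c is the GNU form):

```shell
umask 022
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$tmp/my_file.sh"
chmod 755 "$tmp/my_file.sh"       # owner rwx, group/other r-x
stat -c '%a' "$tmp/my_file.sh"    # prints 755
"$tmp/my_file.sh"                 # prints hi -- readable and executable
rm -rf "$tmp"
```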
| What permissions should I set for an executable I'm likely to share? |
1,332,495,956,000 |
I have a machine that I can only access using SSH.
I was messing with the hostnames, and now it says:
ssh: unable to resolve hostname
I know how to fix it in /etc/hosts.
Problem is, I need sudo to fix them because my normal account doesn't have permissions.
What's the best way to fix the hosts?
|
You don't need sudo to fix that, try pkexec,
pkexec nano /etc/hosts
pkexec nano /etc/hostname
After running pkexec nano /etc/hosts, add your new hostname in the line that starts with 127.0.1.1 like below,
127.0.0.1 localhost
127.0.1.1 your-hostname
And also don't forget to add your hostname inside /etc/hostname file after running pkexec nano /etc/hostname command,
your-hostname
Restart your PC. Now it works.
| How to edit /etc/hosts without sudo? |
1,332,495,956,000 |
I have some bash scripts that I use with the user 'root' to manage iptable rules.
The problem is that I want these things at the same time:
The script must be owned by root
Permissions must be 700
I want to have an executable binary that certain user can execute. This executable will run the mentioned script as root.
This used to work, and is still what I use in older distributions:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
setuid(0);
system("/root/iptables/my-iptables-script.sh");
return 0;
}
So I compile this and then use sudo chown root and sudo chmod 4777. This way the user can now execute the binary and run the script owned by root.
But now I installed Ubuntu 13.10 and when I run that binary I get "permission denied" for the script.
Is it possible that something changed in this respect since 12.04?
What can I do?
|
The easiest and cleanest solution is probably to use sudo.
You can configure it to allow a given unix group to run exactly this script as root.
%iptablegroup ALL = (root) NOPASSWD: /path/to/script
Then all you have to do is add the needed users to that group and everything should be fine.
| How to let a user execute a script owned by root? Setuid doesn't seem to work anymore [duplicate] |
1,332,495,956,000 |
If I was to chmod 777 blah.sh on server A and then move it to server B, would it retain the permissions, or would it default to some other permissions once it hit server B?
|
That depends entirely and completely on the method of transfer.
rsync will preserve permissions with -p, ownership with -o (using numeric ids only if you pass --numeric-ids). -a is equivalent to -rlptgoD, unless you negate parts of it with --no-o or --no-p.
scp will not normally preserve permissions, but will if you pass -p.
cp has roughly the same flags (-p for permissions, -a for everything and recursing).
tar has the -p flag as well.
cpio will always preserve permissions.
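A quick unprivileged sketch of the tar case: with -p, the 777 mode survives the pipe even though the extracting user's umask would otherwise mask it off (paths are throwaway):

```shell
umask 022
tmp=$(mktemp -d)
touch "$tmp/blah.sh"
chmod 777 "$tmp/blah.sh"
mkdir "$tmp/dest"
# pack on "server A", unpack with -p on "server B"
tar -C "$tmp" -cf - blah.sh | tar -C "$tmp/dest" -xpf -
stat -c '%a' "$tmp/dest/blah.sh"   # prints 777
rm -rf "$tmp"
```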
| File permissions when transferring to another server |
1,332,495,956,000 |
Context: I am making an in-browser control panel that gives me one button access to a library of scripts (.sh and .php) that I've written to process various kinds of data for a project. It's a "one stop shop" for managing data for this project.
I've made good progress. I have apache, PHP and MySQL running, and I have my front end up at http://localhost. Good so far!
Now the problem I'm having: I have an index.php which works fine, except the default apache user (which on my machine is called "_www") seemingly doesn't have permissions to run some of my scripts.
So when I do:
<?php
echo `ls`;
echo `whoami`;
echo `/Path/To/Custom/Script.sh`;
?>
I get the output of ls and whoami, but I get nothing back from the custom script. If I run the custom script as me (in an interactive shell), of course it works.
Finally, my question: What's the right way to configure this. Have the webserver run as me? Or change permissions so that _www can run my custom scripts?
|
The first-best thing would be to put the script in a standard location (such as /usr/local/bin) where the web server would have sufficient permissions to execute it.
If that's not an option, you can change the group of the script using chgrp groupname path, then make it executable for the group by chmod g+x path. If the _www user isn't already in that group, add it to the group by usermod -aG groupname _www.
| Permissions: What's the right way to give Apache more user permissions? |
1,332,495,956,000 |
I have noticed rm -rf has a strange behavior in my macOS BigSur.
I created a directory for a font installation:
~/code ❯ ls -la 10:06:54
total 16
drwxr-xr-x 21 fredguth staff 672 Oct 30 08:47 .
drwxr-xr-x+ 71 fredguth staff 2272 Nov 5 10:07 ..
drwxr-xr-x 7 fredguth staff 224 Nov 5 09:57 FontPro <<<<======== This directory
...
I am user fredguth, the owner of the directory.
~/code ❯ whoami 3m 34s 10:21:34
fredguth
I am trying to rm -rf FontPro.
This happens:
rm: FontPro/tfm/MinionPro-MediumItCapt-osf-l1-ly1--lcdfj.tfm: Permission denied
rm: FontPro/tfm/MinionPro-BoldIt-lf-t2a--base.tfm: Permission denied
...
rm: FontPro/dvips/a_fzbwjk.enc: Permission denied
rm: FontPro/dvips: Directory not empty
rm: FontPro: Directory not empty
I don't get it. I use rm -rf for non-empty directories, and I don't want to use sudo if I don't strictly need to.
Is there any macOs setting preventing me from rm -rf, or am I missing something else?
|
Newbie mistake. On comments suggestion:
~/code ❯ ls -ld FontPro/tfm
drwxr-xr-x 8156 root staff 260992 Oct 26 21:26 FontPro/tfm
I just realized that there is this subdir that is owned by root.
This is the culprit.
~/code ❯ sudo chown -R fredguth FontPro 33s 10:33:20
~/code ❯ ls -ld FontPro/tfm 10:33:39
drwxr-xr-x 8156 fredguth staff 260992 Oct 26 21:26 FontPro/tfm
~/code ❯ rm -rf FontPro 10:33:44
~/code ❯
working now.
P.S. @JG7 and @roaima, if you post an answer I can mark yours as the right one.
| Permission denied on trying to "rm -rf" my own directory on macOS command line |
1,332,495,956,000 |
I've set up wireguard and I am very happy with it, I can't help but wonder two things though:
How come wg-quick can only be used by root ?
Can I give another user permission to do wg-quick <up/down> <interface> without actual root access?
I am guessing that it has to do with moving/copying/symlinking the file into a root-owned directory, and reloading a daemon or something. Anyhow, I'd love some elaboration and a suggested solution.
|
Warning: Following this answer is a security risk, It gives full root access with no password needed, to the user.
Using a hint from @ctrl-alt-delor's answer, I managed to craft a working solution.
In my scenario, I wanted a non-sudo user called jenkins to be able to run wg-quick <up/down>, just like the author.
Run: sudo visudo
root ALL = (ALL) ALL
%admin ALL = (ALL) ALL
Below added lines -------
jenkins ALL = (ALL) NOPASSWD: /opt/homebrew/bin/bash
jenkins ALL = (ALL) NOPASSWD: /usr/sbin/networksetup
jenkins ALL = (ALL) NOPASSWD: /sbin/ifconfig
jenkins ALL = (ALL) NOPASSWD: /opt/homebrew/bin/wg
jenkins ALL = (ALL) NOPASSWD: /opt/homebrew/bin/wg-quick
It's important to list the prerequisites before the final command that we want to execute! Otherwise, it will not work! I hope that this will help someone!
After this, the normal user jenkins was able to run this command.
If you encounter the error
"wg-quick: Version mismatch: bash 3 detected, when bash 4+ required"
then simply run
which bash
and copy the output path into a shebang at the beginning of the wg-quick file (/opt/homebrew/bin/wg-quick).
In my case, the top of wg-quick now has: #!/opt/homebrew/bin/bash
After this change, the user can run wg-quick up/down and execute this job from an Automator/sh script :)
Resource used:
https://osxdaily.com/2014/02/06/add-user-sudoers-file-mac/
| Is there a 'good' way to enable a user to use wg-quick without root access? |
1,332,495,956,000 |
On a Linux machine (a computing cluster, actually), I copied a folder from another user (who granted me permissions to do so using the appropriate chmod).
This folder contains symbolic links to files I cannot access. I want to update them so that they point to copies of the same files, that I own.
However, when I try to do so using ln -sf, I get Permission denied.
Why is that so?
That's the link:
$ ls -l 50ATC_Rep2.fastq
lrwxrwxrwx 1 bli cifs-BioIT 55 21 nov. 13:45 50ATC_Rep2.fastq -> /pasteur/homes/mmazzuol/Raw_data/CHIP_TEST/BM50.2.fastq
I don't have permission to access its target, but I have a copy of it. That's the new target I want:
$ ls -l ../../../raw_data/CHIP_TEST/BM50.2.fastq
-rwxr-xr-x 1 bli cifs-BioIT 4872660831 21 nov. 14:00 ../../../raw_data/CHIP_TEST/BM50.2.fastq
And that's what happens when I try ln -sf:
$ ln -sf ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq
ln: accessing `50ATC_Rep2.fastq': Permission denied
It seems that the permissions of the current target is what counts, not the permissions on the link itself.
I can circumvent the problem by first deleting the link, then re-creating it:
$ rm 50ATC_Rep2.fastq
rm: remove symbolic link `50ATC_Rep2.fastq'? y
$ ln -s ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq
$ ls -l 50ATC_Rep2.fastq
lrwxrwxrwx 1 bli cifs-BioIT 40 21 nov. 18:57 50ATC_Rep2.fastq -> ../../../raw_data/CHIP_TEST/BM50.2.fastq
Why can I delete the link, but not update it?
|
It appears as if the GNU ln implementation on Linux uses the stat() function to determine whether the target exists or not. This function is required to resolve symbolic links, so when the target of the pre-existing link is not accessible, the function returns EACCES ("permission denied") and the utility fails. This has been verified with strace to be true on an Ubuntu Linux system.
To make the GNU ln use lstat() instead, which does not resolve symbolic links, you should call it with its (non-standard) -n option (GNU additionally uses --no-dereference as an alias for -n).
ln -s -n -f ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq
Reading the POSIX specification for ln, I can't really make out whether GNU ln does this for some undefined or unspecified behaviour in the specification or not, but it is possible that it uses the fact that...
If the destination path exists and was created by a previous step, it is unspecified whether ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files; or will continue processing the current source_file.
The "unspecified" bit here may give GNU ln the license to behave as it does, at least if we allow ourselves to interpret "a previous step" as "the destination path is a symbolic link".
The GNU documentation for the -n option is mostly concerned about the case when the target is a symbolic link to a directory:
'-n'
'--no-dereference'
Do not treat the last operand specially when it is a symbolic link
to a directory. Instead, treat it as if it were a normal file.
When the destination is an actual directory (not a symlink to one),
there is no ambiguity. The link is created in that directory. But
when the specified destination is a symlink to a directory, there
are two ways to treat the user's request. 'ln' can treat the
destination just as it would a normal directory and create the link
in it. On the other hand, the destination can be viewed as a
non-directory--as the symlink itself. In that case, 'ln' must
delete or backup that symlink before creating the new link. The
default is to treat a destination that is a symlink to a directory
just like a directory.
This option is weaker than the '--no-target-directory' ('-T')
option, so it has no effect if both options are given.
The default behaviour of GNU ln when the target is a symbolic link to a directory, is to put the new symbolic link inside that directory (i.e., it dereferences the link to the directory). When the target of the pre-existing link is not accessible, it chooses to emit a diagnostic message and fail (allowed by the standard text).
OpenBSD ln (and presumably ln on other BSD systems), on the other hand, will behave like GNU ln when the target is a symbolic link to an accessible directory, but will unlink and recreate the symbolic link as requested if the target of the pre-existing link is not accessible. I.e., it chooses to continue with the operation (allowed by the standard text).
Also, GNU ln on OpenBSD behaves like OpenBSD's native ln, which is mildly interesting.
Removing the pre-existing symbolic link with rm is not an issue whatsoever, as you appear to have write and executable permissions for the directory it's located in.
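An unprivileged sketch of the difference -n makes, using throwaway paths: with -sfn the link itself is replaced in place, rather than the link's directory target being dereferenced:

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir real1 real2
ln -s real1 link       # link -> real1 (a directory)
ln -sfn real2 link     # -n: replace the link itself, don't descend into real1
readlink link          # prints real2
cd /; rm -rf "$tmp"
```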
| Why permission denied upon symbolic link update to new target with permissions OK? |
1,332,495,956,000 |
I am running RHEL6. I have a requirement to have "umask 077" in my /etc/bashrc which I am not allowed to change. We have a folder designated for group collaboration where we would like everyone in the same group to be able to rwx. Therefore, users must set "umask 002" manually or in their local .bashrc file or remember to chmod. They often forget and the administrator gets called upon to "fix" permissions because the owner of the file is not available.
Is there a way I can force the folder to "umask 002"?
I've read that I should use setfacl but I think umask overrides this.
|
See How do I force group and permissions for created files inside a specific directory?
What I tested was to create a directory /var/test. I set the group to be tgroup01. I made sure anything created under /var/test would be set to the tgroup01 group. I then made sure the default group permissions for anything underneath /var/test were rwx.
sudo mkdir /var/test
sudo chgrp tgroup01 /var/test
sudo chmod 2775 /var/test
sudo setfacl -m "default:group::rwx" /var/test
If I then create a directory foo or touch a file blah, they have the correct permissions
ls -al /var/test
drwxrwsr-x+ 3 root tgroup01 .
drwxr-xr-x 5 root root ..
-rw-rw-r-- 1 userA tgroup01 blah
drwxrwxr-x+ 2 userA tgroup01 foo
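As an unprivileged sketch of why the 2 in 2775 matters: the setgid bit on a directory is inherited by new subdirectories, which is what keeps the group membership "sticky" (throwaway directory; stat -c is the GNU form):

```shell
umask 022
tmp=$(mktemp -d)
chmod 2770 "$tmp"          # setgid bit on the parent
mkdir "$tmp/foo"
stat -c '%a' "$tmp/foo"    # prints 2755 on Linux: setgid was inherited
rm -rf "$tmp"
```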
| How can I override the umask setting for all users for a specific folder? |
1,332,495,956,000 |
I am trying to do a chroot as a certain user. For one user it works, for other users it does not, and I have no idea what is going on.
My /etc/passwd in the chroot directory looks like this (relevant part):
test0:x:1000:1000:test0:/home/test:/bin/bash
test1:x:1001:1001:test1:/home/test:/bin/bash
sudo chroot --userspec=test0 chroot_dir/ /bin/bash --login works well
sudo chroot --userspec=test1 chroot_dir/ /bin/bash --login says chroot: failed to run command ‘/bin/bash’: Permission denied
details of /bin/bash in chroot: -rwxr-xr-x 1 user user 455188 Sep 19 08:58, where user is my username in the system.
Any ideas why user test1 does not work? If you need any more information, just please ask, I will put them in. Thanks a lot in advance.
|
With chroot (and no user namespaces, which is the case here), the directories and files necessary to run the command you give to chroot need to be accessible to the user you specify. This includes:
the chroot’s root;
bin and bin/bash in the chroot;
lib and any libraries therein used by bash, if any (ldd bin/bash will tell you what they are);
when bash gets going, home/test and any startup scripts (.bashrc etc. if necessary).
Running chmod -R 777 obviously fixes all this; you can use more restrictive permissions, as long as user id 1001 can read and execute the appropriate files. chmod -R 755 bin lib and chmod 755 . would allow bash to start.
| chroot: failed to run command ‘/bin/bash’: Permission denied |
1,332,495,956,000 |
I'm just starting with my own Virtual Server (and Linux). I've an apache2 and a few WordPress sites. I need to send mails via PHP (contact forms). I managed to install ssmtp with the help of a few tutorials. It sends mail with an gmail account. I'm not sure about the right permissions of the ssmtp.conf:
When I chmod 600 /etc/ssmtp/ssmtp.conf I can't send mails from the command line, and php-contact forms are also not working.
When I chmod 640 /etc/ssmtp/ssmtp.conf I can send mails from the commandline, but php-contact forms are not working.
When I chmod 666 /etc/ssmtp/ssmtp.conf I can't send mails from the commandline and php-contact forms are working fine.
Obviously I would like to stay with 666, but I'm not sure if this could be a security problem.
|
It appears that you have your Gmail password in the configuration file, so you want the third number to be 0 (no permissions for Others). Ideal is 640. You can change the ownership of the configuration file (using the command chown), e.g. chown root:mail /etc/ssmtp/ssmtp.conf.
You can send from the command line using sudo or as root. Your web server user also need to be a member of group mail. Or you can change that to root:www-data if the user group of the web server is www-data.
| Permissions for /etc/ssmtp/ssmtp.conf |
1,332,495,956,000 |
I often find that the unix way of handling file permissions is powerful, particularly when combined with ACLs, but rather difficult to handle. Setting file modes, group ownership and extended attributes correctly for every created file can quickly become tedious.
Is there any approach out there which replaces this concept (probably per mount) with something simpler, where files inherit permissions from their containing directories by default?
I know this would probably violate a number of POSIX expectations, but on the other hand, things like quiet vfat mounts already disregard mode and several permission changes, so that shouldn't prevent new ideas from being developed.
To give an example, I'm looking for something where I can be sure that as long as a user drops his file inside a certain directory, it will be writable and deletable by a given group, and readable only by the rest of the world, no matter the user's umask and current group.
Reasons why what I know so far doesn't seem sufficient:
Permissive file in restrictive dir: Changing the umask to 0777 and the mode of a directory to 0770, one can grant read-write access within a group and lock out the rest of the world. The dir should also have to have the sgid bit set so its files get the correct group instead of the user's primary group. But an umask of 0777 has a risk of opening large holes in places not restricted in this fashion, and umask doesn't count for much if people start moving stuff around using e.g. mv.
ACL defaults: Using setfacl one can set defaults for newly created files in a given directory. This is better than the above, but it only works for newly created files. Again this won't work if people start moving files around, and again it won't work for cases where the umask is too restrictive.
|
Is there any approach out there which replaces this concept (probably
per mount) with something simpler, where files inherit permissions
from their containing directories by default?
Yeah, they're called default ACLs:
[root@ditirlns02 acl-test]# setfacl -m d:u:jadavis6:rwx --mask .
[root@ditirlns02 acl-test]# getfacl .
# file: .
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:jadavis6:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
[root@ditirlns02 acl-test]# mkdir subDir
[root@ditirlns02 acl-test]# getfacl subDir
# file: subDir
# owner: root
# group: root
user::rwx
user:jadavis6:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:jadavis6:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
[root@ditirlns02 acl-test]# getfacl testFile
# file: testFile
# owner: root
# group: root
user::rw-
user:jadavis6:rwx #effective:rw-
group::r-x #effective:r--
mask::rw-
other::r--
[root@ditirlns02 acl-test]# getfacl subDir/testFile
# file: subDir/testFile
# owner: root
# group: root
user::rw-
user:jadavis6:rwx #effective:rw-
group::r-x #effective:r--
mask::rw-
other::r--
[root@ditirlns02 acl-test]# mkdir subDir/nestedDir
[root@ditirlns02 acl-test]# getfacl subDir/nestedDir
# file: subDir/nestedDir
# owner: root
# group: root
user::rwx
user:jadavis6:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:jadavis6:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
Kind of a belabored example, but it illustrates that default ACL's inherit to subdirectories at creation time, and apply directly (as effective ACE's) to both directories and files. By design, changes in default ACL's won't actively descend down. Unix strives to be as inert as possible, so the expectation is that if you want new permissions to be applied to files that already exist then you'll do some setfacl or chmod magic to get it done. Changing automatically isn't even desirable. You would constantly be hearing about files that were accidentally left way too open, or how the admin didn't think about a particular directory nested underneath the changed directory that was used for an application that's now locked out, etc.
But an umask of 0777 has a risk of opening large holes in places not restricted in this fashion
Well this doesn't really relate to your first point, but POSIX ACL's take care of this too. ACL mask trumps the umask setting in a user's shell in terms of permissibility (actually they kind of work together, insofar as ACL mask will deny rights and umask just won't give them, leading to an implict deny). You can modify it with the setfacl command:
[root@ditirlns02 acl-test]# setfacl -m m:r-x testFile
[root@ditirlns02 acl-test]# getfacl testFile
# file: testFile
# owner: root
# group: root
user::rw-
user:jadavis6:rwx #effective:r-x
group::r-x
mask::r-x
other::r--
As you can see, even though the basic DAC on my personal account has me at "rwx" my account still only gets "r-x" because the ACL mask prevents that from happening. You can also manage default ACL masks the same way as other default ACL entries:
[root@ditirlns02 acl-test]# getfacl afterMask
# file: afterMask
# owner: root
# group: root
user::rwx
user:jadavis6:rwx #effective:r-x
group::r-x
mask::r-x
other::r-x
default:user::rwx
default:user:jadavis6:rwx #effective:r-x
default:group::r-x
default:mask::r-x
default:other::r-x
[root@ditirlns02 acl-test]# getfacl subDir
# file: subDir
# owner: root
# group: root
user::rwx
user:jadavis6:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:jadavis6:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
I went back to the directory from before so you could see the "no automatic recalculation" in action.
Again this won't work if people start moving files around
I kind of got ahead of myself, so I've addressed why this isn't desirable behavior but I can elaborate. Basically, it's true that if you change the default ACL's you'll almost always want to change the ACL's for things that already exist. The problem lies in the fact that you can't design a system to properly anticipate what the permissions should be. If you did that, you would open yourself up to other security and stability concerns.
For instance:
You have a folder at /srv/applicationX/shares/accounting/deftManeuver that holds proprietary information about your company's performance in various markets, only certain people should have access to it.
/srv/applicationX/shares is shared out over samba or NFS and is used company-wide.
A new department spins up, different group memberships require you to give them rwx on the shares directory.
The new department now has access to the proprietary information. and worse yet, you don't even realize that permissions are set up that way because it's been a year since you've had to do anything with deftManeuver so you forgot it was even there.
That's kind of a drastic example, but it illustrates the problem. Leaving permissions inert, the platform can at least say "Well these permissions used to be acceptable, so maybe they're still pretty close to what they're trying to do." Whereas in the Windows world, you have permissions changing access controls on files you don't even know exist.
This way, you can set up deftManeuver once with the appropriate restrictions, and if you later need to open things up, the platform, by forcing you to explicitly say "I want xyz on directory abc and its descendants", can at least hedge its bet against you not doing a recursive setfacl.
In my work life, I've been saved by this feature several times. I've opened a directory up too much to fix a problem, I've been told "hey hey hey, no don't do that" by the security people, and in the interim period, only new files had insecure permissions on them rather than the cumulative information built up over several years/decades.
EDIT (optional ACL rant):
This isn't to say that there aren't actual issues with POSIX ACL's, just that the objections listed here are either dealt with in-model or are features rather than defects.
The problem with regular POSIX ACL's is expressiveness. You still only get rwx, but more operations should be targetable. Windows/NTFS takes a shotgun approach to permissions, including stuff that doesn't make sense (like no native concept of a mask, or a per-user delete permission instead of the Unix way of collapsing it into "write"/append permissions, etc.), but it includes a lot of things that do make a lot of sense, like having a right to append, a right to change permissions, a right to take ownership, etc.
There are also little things like not being able to set mask per-user or (better yet) per-group:
[root@ditirlns02 acl-test]# setfacl -m m:g:testGroup:rwx .
setfacl: Option -m: Invalid argument near character 3
[root@ditirlns02 acl-test]#
So there's no way to explicitly allow certain effective permissions to exceed the mask. (What's good as a general rule isn't always right in a particular case, and this forces people to decide between lengthy workarounds for different users or setting an overly permissive mask. Guess which route is usually taken...)
I honestly don't think anybody does permissions in a way that is at once comprehensive, expressive, and secure.
| Permissions by path instead of file mode bits |
1,332,495,956,000 |
Consider the following scenario:
I have rwx access to a directory as a member of the group id of the directory.
The system admin does not let users run chown (see this thread for details)
How can I take recursive ownership of the directory?
I believe I can do the following, assuming that I want to own A
cp -R A B
rm -R A
mv B A
but this is tedious and can require a large amount of space if A is large.
|
You only really need to take ownership of directories. Ordinary files will take care of themselves the next time you modify them, symbolic links and pipes don't matter, and I'm going to assume there are no devices or other exotic types.
You can make a recursive copy of the directories, but make hard links from the regular files instead of copying them. With GNU coreutils (Linux, Cygwin):
cp -al A B
Every regular file A/dir/file is hard-linked as B/dir/file. You can then remove the source tree.
If you don't have GNU coreutils, you can use rsync instead:
cd A
rsync -a --link-dest=$PWD . ../B
To make sure that deleting A will not actually remove any file, check that all regular files have a hard link count of at least 2 — the following command should not output anything:
find A -type f -links 1
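Putting the pieces together, here is a hedged end-to-end sketch (GNU coreutils assumed; the directory names linkdemo_A and linkdemo_B stand in for A and B):

```shell
# demo setup (placeholder tree)
mkdir -p linkdemo_A/docs
echo "hello" > linkdemo_A/docs/notes.txt

# hard-link copy: new directories, but the same inodes for the regular files
cp -al linkdemo_A linkdemo_B

# safety check: every regular file should now have at least 2 links,
# so this find should print nothing
find linkdemo_A -type f -links 1

# once the check is clean, removing the source leaves the data intact
rm -rf linkdemo_A
```

After the removal, each file's link count drops back to 1 and the data is reachable only through the new tree, now owned by whoever ran the copy.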
| chown not permitted, but I have write access. How can I take recursive ownership? |
1,332,495,956,000 |
I am trying to make my laptop camera accessible by a guest system on it.
With the guest system not running, I open it in virt-manager, go to "Show virtual hardware details" → "Add Hardware" → "USB Host Device". Here I choose my camera (001:002 Chicony Electronics Co., Ltd HD User Facing) and click "Finish". The procedure seems to be the same as described in the KVM documentation.
This results in the following stanza added to the XML config of the guest machine.
<hostdev mode="subsystem" type="usb" managed="yes">
<source>
<vendor id="0x04f2"/>
<product id="0xb6dd"/>
</source>
<address type="usb" bus="0" port="6"/>
</hostdev>
This looks correct according to the Red Hat's manual on attaching and updating a device with virsh.
However, I cannot run the guest with it because qemu is denied permission.
Error starting domain: internal error: qemu unexpectedly closed the monitor: 2022-03-13T05:27:57.240470Z qemu-system-x86_64: -device {"driver":"usb-host","hostdevice":"/dev/bus/usb/001/002","id":"hostdev0","bus":"usb.0","port":"6"}: failed to open /dev/bus/usb/001/002: Permission denied
Traceback (most recent call last):
File "/gnu/store/r9jxh3pv020qa05pza3jiky2vppn68mx-virt-manager-3.2.0/share/virt-manager/virtManager/asyncjob.py", line 65, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/gnu/store/r9jxh3pv020qa05pza3jiky2vppn68mx-virt-manager-3.2.0/share/virt-manager/virtManager/asyncjob.py", line 101, in tmpcb
callback(*args, **kwargs)
File "/gnu/store/r9jxh3pv020qa05pza3jiky2vppn68mx-virt-manager-3.2.0/share/virt-manager/virtManager/object/libvirtobject.py", line 57, in newfn
ret = fn(self, *args, **kwargs)
File "/gnu/store/r9jxh3pv020qa05pza3jiky2vppn68mx-virt-manager-3.2.0/share/virt-manager/virtManager/object/domain.py", line 1329, in startup
self._backend.create()
File "/gnu/store/7c16ipd35j0fdl6mrjbg3v9zsn8iivi0-python-libvirt-7.9.0/lib/python3.9/site-packages/libvirt.py", line 1353, in create
raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2022-03-13T05:27:57.240470Z qemu-system-x86_64: -device {"driver":"usb-host","hostdevice":"/dev/bus/usb/001/002","id":"hostdev0","bus":"usb.0","port":"6"}: failed to open /dev/bus/usb/001/002: Permission denied
The device that it is trying to access is correct:
$ lsusb -s 001:002
Bus 001 Device 002: ID 04f2:b6dd Chicony Electronics Co., Ltd HD User Facing
The device is owned by root. It seems that read access is not enough for qemu.
$ LC_ALL=C ls -l /dev/bus/usb/001/002
crw-rw-r-- 1 root root 189, 1 Mar 13 06:15 /dev/bus/usb/001/002
My guess is that the device is owned by root for a good security reason. Similarly, virt-manager does not prompt me to run qemu as root. How do I safely manage permissions to allow the guest access the camera?
Another approach, which I initially tried, was to use GNOME Boxes to enable access to the camera device in the respective guest settings. It tries to use SPICE USB redirection, which is similar to what is described in the SPICE user manual, but uses qemu-xhci host adapter instead of ich9-ehci1. However, when I try to flip the switch in the guest settings for the camera device, it just notifies that its redirection failed. Here are the relevant parts in my guest machine configuration, which seem to be OK:
<controller type="usb" index="0" model="qemu-xhci" ports="15">
<alias name="usb"/>
<address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</controller>
<redirdev bus="usb" type="spicevmc">
<alias name="redir0"/>
<address type="usb" bus="0" port="2"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
<alias name="redir1"/>
<address type="usb" bus="0" port="3"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
<alias name="redir2"/>
<address type="usb" bus="0" port="4"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
<alias name="redir3"/>
<address type="usb" bus="0" port="5"/>
</redirdev>
So, how can I make the camera available to the guest?
|
On a Debian machine, spice-client-glib-usb-acl-helper from SPICE GTK is used to pass through USB devices to VMs on demand or per configuration. It sets ACLs (Linux Access Control Lists) on the USB device nodes after validating the request via PolicyKit.
The helper is invoked with user privileges by the program the user runs, but it controls access to privileged resources, so it needs to be setuid in order to run with elevated privileges. On a Debian machine, it is made setuid during the installation process:
$ LC_ALL=C ls -l /usr/libexec/spice-client-glib-usb-acl-helper
-rwsr-xr-x 1 root root 18512 Mar 1 2023 /usr/libexec/spice-client-glib-usb-acl-helper
Note the s for the setuid bit in the permissions.
However, on a Guix System, the installation is assumed to be carried out by an unprivileged user, so additional configuration is necessary after installation. Furthermore, the Guix store cannot contain setuid programs for security reasons; setuid programs must instead be declared in the system configuration file.
Here is an inelegant example of such additions to make the SPICE USB passthrough work on a Guix System:
(use-modules …
(gnu system setuid)
(gnu packages spice))
(use-service-modules …
virtualization dbus)
(operating-system …
(services
(append (list …
(service libvirt-service-type)
(service virtlog-service-type)
(simple-service 'spice-polkit polkit-service-type (list spice-gtk)))))
(setuid-programs
(append (list (setuid-program
(program (file-append spice-gtk "/libexec/spice-client-glib-usb-acl-helper"))))
%setuid-programs)))
Note the simple-service line, which extends the polkit policy, and the setuid-programs snippet, which extends the list of default setuid programs.
With such a configuration, it is possible to use the switches in the interface in GNOME Boxes to pass through USB devices to the guest machine.
| How to make a USB device available to a QEMU guest? |
1,332,495,956,000 |
I added the SELinux label svirt_sandbox_file_t to /home
chcon -Rt svirt_sandbox_file_t /home
The label is shown using:
[user@localhost ~]$ ls -Z
unconfined_u:object_r:svirt_sandbox_file_t:s0 Desktop
unconfined_u:object_r:svirt_sandbox_file_t:s0 Documents
...
How can I remove the svirt_sandbox_file_t label again?
I tried rebooting, I added a /home/.autorelabel to trigger relabeling, but the label won't go away. I am using Fedora 23.
|
I believe that if you set /etc/selinux/config to disabled and reboot, then set it back to enforcing and reboot again, it will relabel, if you've had trouble getting it to relabel otherwise. It's weird that restorecon didn't work though.
If you want to reset things the hard way, the /home directory itself should be:
system_u:object_r:home_root_t
and each user home directory (and the files within it) should be:
unconfined_u:object_r:user_home_dir_t:s0
You can set these with either the chcon command, or using a combination of semanage and restorecon
chcon -t home_root_t /home
chcon -Rt user_home_dir_t /home/*
or
semanage fcontext -a -t home_root_t /home
semanage fcontext -a -t user_home_dir_t /home/*
restorecon -R /home
please note that generally speaking chcon is used to force an immediate change, while leaving the defaults in place, so that a restorecon will restore it to the default contexts. In your case that seems to have gone wrong for some reason.
Generally semanage fcontext is intended to write a local context file to /etc/selinux/targeted/contexts/files/file_contexts.local
a wealth of information on the current context, and default context's can be found in:
/etc/selinux/targeted/contexts/default_contexts
/etc/selinux/targeted/contexts/files/file_contexts
/etc/selinux/targeted/contexts/files/file_contexts.homedirs
It is possible that somehow those files were somehow damaged, and overall there are many sub contexts that may not be fully restored by the above actions depending on how those files have been modified. It may be a good idea to examine those files and see if you can find your added context mapping and remove it that way as well.
Theoretically you could also take a virtual machine or another machine (or perhaps just find them online) and copy the known-good defaults into their proper directories, then allow the system to relabel in order to get the proper defaults. This too will have some shortcomings, though.
At the end of the day a bit of trial and error will be necessary. The chcon/semanage commands listed above should give you the broad strokes, but it's possible some of your subdirectories will have their own contexts.
Some additional contexts that may be helpful (all of these are in /home/username) would be:
ls -laZ /home/username
# context                                 directory
unconfined_u:object_r:cache_home_t:s0 .cache
unconfined_u:object_r:config_home_t:s0 .config
unconfined_u:object_r:dbus_home_t:s0 .dbus
unconfined_u:object_r:gconf_home_t:s0 .gconf
unconfined_u:object_r:gconf_home_t:s0 .gconfd
unconfined_u:object_r:gpg_secret_t:s0 .gnupg
unconfined_u:object_r:gconf_home_t:s0 .local
unconfined_u:object_r:ssh_home_t:s0 .ssh
Please note that this is based on my home directory, there will be more that you may have to hunt down, but if you get most of those correct, you should be more or less back on track.
| how to remove SELinux label? |
1,332,495,956,000 |
I struggled with this problem on FreeBSD recently, but thank God for ZFS which solved it for me there. However I have it again in CentOS with ext4 and don't know if there is an easy way around it (or any way around it). What I want is a directory in which any user in a certain group has guaranteed read/write access to the files, regardless of clueless users' umasks, poor FTP client upload decisions, etc.. I don't think it's possible, but I'd like to be wrong. It looks like the reason it's not possible is that ext4 ACLs cannot override file permissions, only intersect with them. For example:
# mkdir bar
# chmod 700 bar
# getfacl bar
# file: bar
# owner: root
# group: mygroup
# flags: -s-
user::rwx
group::rwx #effective:---
group:mygroup:rwx #effective:---
mask::---
other::---
default:user::rwx
default:group::rwx
default:group:mygroup:rwx
default:mask::rwx
default:other::---
You can see that the default ACL and mask both specify rwx for mygroup but the file permissions trump that and result in ---. Unfortunately that means if a user's FTP client (for example) uploads files as 640, others in the group wouldn't be able to mess with it. Is there a way around this?
|
The permissions granted by an ACL are additive, but perhaps you're expecting them to be recursive? (they aren't)
You can almost get what you want with ACLs. You need to start out by setting the ACL like the one above recursively on every file and directory in the tree. Be sure to include the default:group:mygroup:rwx setting on directories. Now, any new directory will get those settings automatically applied to it, and any new file in those directories likewise.
There are two times when this still fails:
when someone moves a file or directory from outside the tree. Since the inode already exists, it won't get the defaults set on it.
when someone extracts files from an archive using an ACL-aware program which overwrites the defaults.
I don't know any way to handle those two other than writing a cron job to periodically run chgrp -R mygroup DIRECTORY; chmod g+rwx -R DIRECTORY. This may or may not be practical depending on the number of files in your shared directory.
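As a sketch of what that periodic job would do (the directory name and crontab line are placeholders, and the caller's own primary group stands in for mygroup; note g+rwX with a capital X, so plain files don't become executable):

```shell
# stand-in for the shared tree; simulate a "bad" upload the group can't write
mkdir -p shared_demo/incoming
touch shared_demo/incoming/upload.dat
chmod 640 shared_demo/incoming/upload.dat

# what the cron job would run, e.g. every 15 minutes:
#   */15 * * * *  chgrp -R mygroup /path/to/shared && chmod -R g+rwX /path/to/shared
chgrp -R "$(id -gn)" shared_demo
chmod -R g+rwX shared_demo   # capital X: x only on dirs and already-executable files
```

After the fix, the file is group-writable (rw-rw----) but still not executable, and the directories remain traversable by the group.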
Here's a slightly modified version of a script I use to fix ACLs on a tree of files. It completely overwrites any ACLs on anything in the tree with a specific list of read-write groups and read-only groups.
#! /usr/bin/env perl
use strict;
use warnings;
use String::ShellQuote;
use Cwd 'abs_path';
# Usage: fix-permissions.pl DIRECTORY RW_GROUP1,RW_GROUP2... RO_GROUP1,RO_GROUP2...
my $dir= $ARGV[0];
my @rw_groups= $ARGV[1] ? split(',', $ARGV[1]) : ();
my @ro_groups= $ARGV[2] ? split(',', $ARGV[2]) : ();
-d $dir or die "No such directory '$dir'\n";
$dir= abs_path($dir);
$dir =~ m|/[^/]+/| or die "Cowardly refusing to run on a top-level directory: $dir\n";
# Give all files rw-r----- and all directories rwxr-x---
# then give each rw_group read/write access, then each ro_group
# read-only access to the whole tree
my $dir_perm= join(',',
'u::rwx',
'g::r-x',
'o::---',
'd:u::rwx',
'd:g::r-x',
'd:o::---',
( map { "g:$_:rwx" } @rw_groups ),
( map { "d:g:$_:rwx" } @rw_groups ),
( map { "g:$_:r-x" } @ro_groups ),
( map { "d:g:$_:r-x" } @ro_groups )
);
my $file_perm= join(',',
'u::rwx',
'g::r-x',
'o::---',
( map { "g:$_:rw-" } @rw_groups ),
( map { "g:$_:r--" } @ro_groups )
);
for (
"find ".shell_quote($dir)." -type d -print0 | xargs -0 -r setfacl --set ".shell_quote($dir_perm),
"find ".shell_quote($dir)." ! -type d -print0 | xargs -0 -r setfacl --set ".shell_quote($file_perm)
) {
0 == system($_) or die "command failed: $_\n";
}
| Can ACLs override file perms on Linux? |
1,332,495,956,000 |
I need to provide user access to Ubuntu 14.04 Server, only limited to certain folder. To enjoy the ssh security and not to open up new service and ports (ie, ftp), I'd like to stick with sftp. However, just creating a user and enabling ssh access is too generous - the user then can log on via ssh and see whatever there is that is viewable by everybody.
I need the user to find themselves in a specific directory after login, and, according to their privileges, read/write files, as well as create folders where permitted. No access to any file or directory above the user's "root" folder.
What would be the suggested method to achieve this? Is there some very restricted shell type for this? I tried with
$ usermod -s /bin/false <username>
But that does not let the user cd into subfolders of their base folder.
|
If you want to restrict a user to SFTP, you can do it easily in the SSH daemon configuration file /etc/ssh/sshd_config. Put a Match block at the end of the file:
Match User bob
ForceCommand internal-sftp
ChrootDirectory /path/to/root
AllowTCPForwarding no
PermitTunnel no
X11Forwarding no
If the jail directory is the user's home directory as declared in /etc/passwd, you can use ChrootDirectory %h instead of specifying an explicit path. This syntax allows specifying a group of user accounts as SFTP-only — all users whose group as declared in the user database is sftponly will be restricted to SFTP:
Match Group sftponly
ForceCommand internal-sftp
ChrootDirectory %h
AllowTCPForwarding no
PermitTunnel no
X11Forwarding no
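One detail worth noting: sshd refuses to chroot unless every component of the ChrootDirectory path is owned by root and writable only by root, so the user is usually given a writable subdirectory inside it. A minimal layout sketch (paths and the upload directory name are assumptions; the chown steps are shown as comments because they require root):

```shell
# stand-in for /home/bob; in real use each of these would be:
#   sudo chown root:root /home/bob
mkdir -p chroot_demo/upload
chmod 755 chroot_demo        # the chroot itself: not group- or world-writable

# the user writes only inside a subdirectory they own, e.g.:
#   sudo chown bob:sftponly /home/bob/upload
chmod 775 chroot_demo/upload
```

If the ownership or modes on the chroot path are wrong, the SFTP session is dropped immediately after authentication, which is a common source of confusion with this setup.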
| Provide sftp read/write access to folder and subfolders, restrict all else |
1,332,495,956,000 |
I have two servers, client and server in fairly obvious roles: server hosts the NFS share and client has it mounted. There is a shared group among several users on client called shared which also exists on server. My permissions on server for the share look like this:
user@server $ ls -al /export/share/
drwxrwsr-x+ 3 shared shared 4096 Apr 19 01:25 .
drwxr-xr-x 3 root root 4096 Apr 12 20:10 ..
The goal is pretty clearly displayed: I'd like all members of the shared group to be able to create, write, and delete files in this directory. On client, an ls -la of the mounted directory yields the same results.
The NFS exports file on server looks like this:
/export/share 10.0.0.0/24(rw,nohide,insecure,no_subtree_check,async)
The mount on client in /etc/fstab looks like this:
10.0.0.1:/export/share /mnt/share nfs _netdev,noatime,intr,auto 0 0
The output of mount from client:
10.0.0.1:/export/share on /mnt/streams type nfs (rw,noatime,intr,vers=4,addr=10.0.0.1,clientaddr=10.0.0.2)
However, I still can't seem to be able to create files in that directory using a user in the group.
For instance, a user jack:
user@server $ id jack
uid=1001(jack) gid=1001(jack) groups=1001(jack),1010(shared)
If I try touching a file in the mounted folder on client, permission is denied:
user@client $ sudo -u jack touch /mnt/share/a
touch: cannot touch `/mnt/share/a': Permission denied
Why isn't this working as expected? Shouldn't I be able to create files and folders as jack in this folder since he's a member of the shared group?
|
How are you disseminating the user/group info that's contained in /etc/passwd and /etc/group? You typically need to use NIS, LDAP, or rsync the /etc/passwd /etc/group files to all the machines that are automounting these mounts. Otherwise the clients know nothing of the permissions on the filesystem.
You might want to peruse the NIS Howto.
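A quick way to check is to compare the numeric IDs on both machines, since NFS compares the uid/gid numbers, not the names. A sketch (the user and host names are placeholders):

```shell
# run this on both client and server for the user in question and
# compare the numbers in the output
u=$(id -un)                  # substitute e.g. jack
getent passwd "$u"
id -u "$u"
id -g "$u"
# remotely, something like:  ssh server "id jack"
```

If jack's uid or the shared group's gid differs between the hosts, the server sees the client's writes as coming from a different, unauthorized identity, which matches the "Permission denied" symptom above.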
| Default directory permissions over NFS |
1,332,495,956,000 |
At my school we have a shared server environment with several users and several groups.
Relevant groups:
students
daemon (runs apache)
I want to allow students to have full access to files they own and no access to other students' files. Two different students should not even be able to see each other's files.
I also want apache to be able to read and execute all student files. Specifically I want apache to be able to read a password file owned by each student, I also want the owner of the password file to have full access to it.
From my understanding, the best way to do this is to change the group owner of the password file to be apache.
So after reading this,
https://serverfault.com/questions/357108/what-are-the-best-linux-permissions-to-use-for-my-website
it seems a simple chgrp would fix it.
But then I run into this:
You must be owner of the file(s) as well as a member of the destination group (or root) to use this operation.
So each of the students are not a part of the daemon group, they cannot run this command.
Giving them that group would be pointless, in that they would be able to see other students' password files as well.
From the previous thread I gathered that the current security settings are unfit, and I have scheduled a meeting with my system administrator tomorrow.
But I'm still unsure what I should ask my systemadmin to do.
I can't really ask him to manually change the permissions for every password file on the server, the filenames and locations are different and many students are not even set up yet.
Allowing students to have full access to chgrp seems dangerous.
My inclination is to ask him to create some type of script that would prompt the student for a file and then run chgrp in place of the student, thus giving apache group ownership. This seems viable, but also pretty complicated, as I'm still pretty new to Linux. Would he be able to do something like this easily?
I've also considered ACLs, but my train of thought goes right back to chgrp: giving students access to setfacl seems dangerous.
|
ACLs are the answer. The students don't need any special permission to run setfacl, a user can set the ACL of any file that he owns.
If you need to set up your system for ACLs, see Make all new files in a directory accessible to a group
Tell students that if they need a file to be accessible to Apache, then they must run
setfacl -m group:daemon:r ~/path/to/password.file
setfacl -m group:daemon:x ~ ~/path ~/path/to
The x permission on the directories is necessary to access files (including subdirectories) in these directories.
| How to configure permissions to allow apache to securely have access to a file in a shared environment? |
1,332,495,956,000 |
Simple question and there is perhaps a simple answer.
I have several directories in my home folder that I would like to make available as a directory on my webserver. So, what I did was to create a symlink:
iMac:/Library/WebServer/Documents/ ls -ltr
-rw-rw-r-- 1 root admin 44 Nov 20 2004 index.html.en
-rw-rw-r-- 1 root admin 31958 May 18 2009 PoweredByMacOSXLarge.gif
-rw-rw-r-- 1 root admin 3726 May 18 2009 PoweredByMacOSX.gif
-rwxr-xr-x 1 mego admin 0 Jan 6 2011 favicon.ico
lrwxrwxr-x 1 mego admin 52 Jul 26 13:45 myadmin -> /Users/mego/Downloads/phpMyAdmin-3.4.3.2-english
iMac:/Library/WebServer/Documents/ ln -s /Users/mego/opt/rel/src/main/web/ rel
iMac:/Library/WebServer/Documents/ ls -ltr
-rw-rw-r-- 1 root admin 44 Nov 20 2004 index.html.en
-rw-rw-r-- 1 root admin 31958 May 18 2009 PoweredByMacOSXLarge.gif
-rw-rw-r-- 1 root admin 3726 May 18 2009 PoweredByMacOSX.gif
-rwxr-xr-x 1 mego admin 0 Jan 6 2011 favicon.ico
lrwxrwxr-x 1 mego admin 52 Jul 26 13:45 myadmin -> /Users/mego/Downloads/phpMyAdmin-3.4.3.2-english
lrwxrwx--- 1 mego admin 47 Oct 12 09:58 rel -> /Users/mego/opt/rel/src/main/web/
Permissions on /Users/mego/opt/rel are recursively set to a+rx so everybody can read and execute.
When I try to change the permission, i.e. "chmod a+rx rel" and "chmod -R a+rx /Users/mego/opt/rel", zero effect.
The output of
ls -ld / /Users /Users/mego /Users/mego/opt /Users/mego/opt/rel /Users/mego/opt/rel/src /Users/mego/opt/rel/src/main /Users/mego/opt/rel/src/main/web
iMac:~/ ls -ld / /Users /Users/mego /Users/mego/opt /Users/mego/opt/rel /Users/mego/opt/rel/src /Users/mego/opt/rel/src/main /Users/mego/opt/rel/src/main/web
drwxrwxr-t@ 39 root admin 1394 Sep 14 15:30 /
drwxr-xr-x 7 root admin 238 Aug 29 10:04 /Users
drwxr-xr-x+ 98 mego staff 3332 Oct 15 10:59 /Users/mego
drwxrwxr-x 19 mego staff 646 Oct 14 20:47 /Users/mego/opt/rel
drwxrwxr-x 5 mego staff 170 May 31 08:01 /Users/mego/opt/rel/src
drwxrwxr-x 6 mego staff 204 Oct 12 08:42 /Users/mego/opt/rel/src/main
drwxrwxr-x 5 mego staff 170 Oct 12 08:42 /Users/mego/opt/rel/src/main/web
iMac:~/
Must be something related to the user's home folder. But strangely enough, another folder "myadmin" has correct permissions and it works. What am I doing wrong?
Thank you in advance.
|
/Users/mego has an ACL that may be preventing access. That's what the + after the traditional unix permissions on the output of ls -l for this directory indicates. Run ls -lde /Users/mego to view this ACL.
Note that if a user is denied access to /Users/mego (what matters is the executable bit), they won't have access to anything under it. So if the web server user doesn't have execute permission on /Users/mego, it doesn't matter that /Users/mego/opt/rel is world-readable: the web server user won't be able to reach that far. It doesn't matter that a symbolic link is involved, either: access through a symbolic link involves traversing the path to the target.
Use chmod to manipulate the ACL. The examples in the man page should get you going (if you can't figure out what you need from the examples, ask here, and post the output of ls -lde /Users/mego).
| Why can't I change permission on a symlink on Mac? |
1,332,495,956,000 |
If I copy a file with a base ACL of:
u::rw-,g::r--,o::r--
into a directory with a default ACL of:
u::rwx,g::r-x,g:users:rwx,m::rwx,o::r-x
I obtain a file with mask of m::r--. I would have expected the union of the permissions of the two group entries (i.e. m::rwx).
Why it is so? Does it depend on the mode parameter used by cp in the creation of the file?
|
(I assume you're working on Linux, the workings of ACLs differ between unix variants.)
cp doesn't do anything special when you copy the file; it creates the file with the mode of the original file, masked by the mask of the directory. Since cp doesn't do anything to the file's mask, the mask ends up being the intersection of the directory mask (rwx) and the file's group permissions (r).
For example, cp creates the destination file with a call along the lines of:
open("dir/file", O_WRONLY|O_CREAT|O_EXCL, 0644)
where 0644 is the mode of the source file.
| ACL mask does not work as expected |
1,332,495,956,000 |
I have a 100 GB of xml doc that I'm migrating to a database in waves.
In vim, I can edit the file, but I'm unable to save changes with :wq, :q or :xx.
I get the error message this file is read only - press ! to override.
Nothing works, so I use the :q! which ignores my changes and exits the vim.
How can I save my changes?
Bonus question
If I don't wait for the entire file to load into memory and press Ctrl + c to view what has been populated, will saving that document only save what was loaded into the memory and delete the rest?
|
The message file is read only - press ! to override means that you don't have permissions to write to the file which you are editing, so the changes you have made can't be written in that file.
Easiest solution is to write the edited file content to the other file with :w file_name, assuming that file_name is a path to a file in which you do have permissions to write.
Other than that you need to find out why you don't have permissions to write to your original file.
You can do that with ls -l file_name.
(I have explained the output of ls -l file_name here.)
Now that you understand a bit more about permissions, you have 3 options:
Use sudo vim file_name to edit the file as the root user.
Give other users write permissions with chmod.
Change the owner of the file with chown.
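For option 2, a sketch of reproducing and fixing the permission problem outside of vim (big.xml is a placeholder name; this assumes you own the file):

```shell
touch big.xml                # placeholder for the real file
chmod 444 big.xml            # read-only: vim now warns on :w
chmod u+w big.xml            # option 2: restore the owner's write bit
ls -l big.xml                # should show -rw-r--r--
```

If the file is owned by someone else and you have sudo rights, a common in-editor alternative is :w !sudo tee % > /dev/null, which pipes the buffer through sudo tee instead of restarting vim as root.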
| Unable to write changes to a file in Vim editor |
1,437,283,631,000 |
I was trying to backup some directories and some of the copies made by sudo cp -av resulted in being owned by root while others preserved their attributes. Is this a known issue or am I missing something?
The source (ext4) is a former ubuntu system disk being used externally, directory structure intact but it is only used for storage, not for boot. The username/groupname and uid/gid is the same as in the previous system.
The destination (btrfs) formatted from NTFS, using the 4.1.2 btrfs-progs.
$ sudo cp -av /mnt/src/home/user/thecakeisalie/ /mnt/dest/subvol/
drwx------ 6 user user 4096 Jul 18 09:11 /mnt/src/home/user/thecakeisalie/
drwx------ 3 root root 4096 Jul 18 20:36 /mnt/dest/subvol/thecakeisalie/
File: ‘/mnt/src/home/user/thecakeisalie/’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 812h/2066d Inode: 9044504 Links: 6
Access: (0700/drwx------) Uid: ( 1000/user) Gid: ( 1000/user)
Access: 2015-07-18 20:21:08.725414953 -0700
Modify: 2015-07-18 09:11:06.873427304 -0700
Change: 2015-07-18 20:08:34.161737231 -0700
Birth: -
File: ‘/mnt/dest/subvol/thecakeisalie/’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 805h/2053d Inode: 660098 Links: 3
Access: (0700/drwx------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2015-07-18 20:36:23.909377491 -0700
Modify: 2015-07-18 20:36:09.729089386 -0700
Change: 2015-07-18 20:36:09.729089386 -0700
Birth: -
Testing some other directories under /mnt/src/home/user/thecakeisalie/ resulted in the expected behaviour with ls -l, stat outputs exactly matching.
Some 'well-behaving' directories were created this afternoon, but I also tested this on ones that haven't been touched since way before I started to use the drive externally, and some of them are OK as well.
After backup I chown-ed everything so there is no issue but I am really curious what the cause could be. I googled a lot but I was either not using the right search phrase or this is well-known.
mjturner below had a point so I tried the cp -a commands to my ~/Download dir on the internal system disk (ext4) and the results are the same therefore I don't think it's a btrfs issue.
Last week I fixed up an old laptop where the circumstances were similar: to upgrade Ubuntu 13.10 I had to install Ubuntu 15.04 on another new partition and after boot I did sudo cp -a the entire home from the old system. 13.10 had 2 users (alpha, bravo) and 15.04 had been set up with 1 user (alpha). bravo's entries ended up showing the GID/UID (of course) whereas alpha's looked and worked the same as before. (I have to check whether the GID/UID of old and new alpha where the same).
Some extra info on the current system, uname:
Linux 3.19.0-22-generic #22-Ubuntu SMP Tue Jun 16 17:14:22 UTC 2015 i686
I am going to do a massive clean-up on the source drive (getting rid of the system dirs and moving the main storage dirs to the root) and I'll test again.
In the meantime, are there any other commands that I can use to test the source and destination for differences? It doesn't matter how low I have to dig (I wanted to brush up on C anyway).
|
I realized that I forgot to mention that I aborted cp -a after checking the destination in another terminal, as I was copying 300+ GB of data.
Thanks to Gilles' comment I started testing to see whether it only happens to directories or not. As the tests prove below, basically all files are written as root and the old attributes are applied to the file/directory once it has finished copying.
TEST_1: 3 gb folder and CTRL-C during sudo cp -a: the current file is truncated, left as root and so is the directory.
home/Download# ls -l
total 20
drwx------ 3 root root 4096 Jul 19 15:11 ./
drwxr-xr-x 3 user user 12288 Jul 19 15:11 ../
drwx------ 2 root root 4096 Jul 19 15:11 thecakeisalie/
home/Download# cd thecakeisalie/; ls -l
total 16164
drwx------ 2 root root 4096 Jul 19 15:11 ./
drwx------ 3 root root 4096 Jul 19 15:11 ../
-rw------- 1 user user 2109623 May 19 2013 file1
-rw------- 1 user user 2520465 May 19 2013 file2
-rw------- 1 root root 393216 Jul 19 15:11 file3
TEST_2: Allow sudo cp -a to finish:
home/Download# ls -l
total 20
drwx------ 3 user user 4096 Jul 19 15:11 ./
drwxr-xr-x 3 user user 12288 Jul 19 15:11 ../
drwx------ 2 user user 4096 Jul 19 15:11 thecakeisalie/
home/Download# cd thecakeisalie/; ls -l
total 16164
drwx------ 3 user user 4096 Jul 19 15:11 ./
drwxr-xr-x 3 user user 12288 Jul 19 15:11 ../
-rw------- 1 user user 2109623 May 19 2013 file1
(...)
-rw------- 1 user user 2520465 May 19 2013 last_file
| `sudo cp -a` changes ownership to root (instead of preserving the original user) |
1,437,283,631,000 |
I'm trying to set up postfix, dovecot and procmail to work together with virtual users. In the end I want to have virtual users and the possibility to add rules to sort incoming mail. For the last thing, I need procmail (right?).
When I send an email to my server, I don't get it in my Maildir, and see this in mail.log:
Jun 17 21:01:03 cs postfix/smtpd[24811]: connect from dub0-omc2-s13.dub0.hotmail.com[157.55.1.152]
Jun 17 21:01:03 cs postfix/smtpd[24811]: D8C9F44D88: client=dub0-omc2-s13.dub0.hotmail.com[157.55.1.152]
Jun 17 21:01:03 cs postfix/cleanup[24816]: D8C9F44D88: message-id=<[email protected]>
Jun 17 21:01:04 cs postfix/qmgr[24806]: D8C9F44D88: from=<my-test-email>, size=1617, nrcpt=1 (queue active)
Jun 17 21:01:04 cs procmail[24818]: Denying special privileges for "/etc/procmailrcs/default.rc"
Jun 17 21:01:04 cs postfix/smtpd[24811]: disconnect from dub0-omc2-s13.dub0.hotmail.com[157.55.1.152]
Jun 17 21:01:04 cs postfix/pipe[24817]: D8C9F44D88: to=<my-virtual-email>, relay=virtualprocmail, delay=0.18, delays=0.15/0/0/0.02, dsn=2.0.0, status=sent (delivered via virtualprocmail service)
Jun 17 21:01:04 cs postfix/qmgr[24806]: D8C9F44D88: removed
How can I fix the line "Denying special privileges" procmail spits out?
camilstaps@cs:/# ls -al /etc/procmailrcs
total 12
drwxr-xr-x 2 root vmail 4096 Jun 17 19:48 .
drwxr-xr-x 97 root root 4096 Jun 17 19:47 ..
-rw------- 1 vmail postfix 44 Jun 17 19:48 default.rc
Here's my /etc/postfix/master.cf:
smtp inet n - - - - smtpd
submission inet n - n - - smtpd
pickup unix n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr unix n - n 300 1 qmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
virtualprocmail unix - n n - - pipe flags=DRXhuq user=vmail
argv=/usr/bin/procmail -m E_SENDER=$sender E_RECIPIENT=$recipient ER_USER=$user ER_DOMAIN=$domain ER_DETAIL=$extension NEXTHOP=$nexthop /etc/procmailrcs/default.rc
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}
I'm on Ubuntu Server 13.04.
|
man procmail states:
Denying special privileges for "x"
Procmail will not take on the identity that comes with the rcfile because
a security violation was found (e.g. -p or variable assignments on the
command line) or procmail had insufficient privileges to do so.
In the presented case the error message is caused by variable assignments on the command line, e.g. E_SENDER=$sender.
Possible Fixes:
Use another "non special to procmail" directory to store the script instead of /etc/procmailrcs
(As I understand it, the /etc/procmailrcs magic is not required in this case)
OR
Use positional parameters on the command line and do the assignments in the *.rc file
procmail script invocation:
/usr/bin/procmail -m /etc/procmailrcs/default.rc $sender $recipient $user $domain $extension $nexthop
procmail script (initial part):
# DROPPRIVS - magical procmail variable; the assignment causes side effects
DROPPRIVS=yes
E_SENDER=$1
E_RECIPIENT=$2
ER_USER=$3
ER_DOMAIN=$4
ER_DETAIL=$5
NEXTHOP=$6
| Procmail: Denying special privileges for "/etc/procmailrcs/default.rc" |
1,437,283,631,000 |
I'm using munin 1.4.5 on OpenSUSE 11.4. Lately logrotate was updated to fix some permission problems and after that complained with
Mar 3 12:15:05 lucien logrotate: error: "/var/log/munin" has insecure permissions. It must be owned and be writable by root only to avoid security problems. Set the "su" directive in the config file to tell logrotate which user/group should be used for rotation.
Mar 3 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-html.log: Bad file descriptor
Mar 3 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-limits.log: Bad file descriptor
Mar 3 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-update.log: Bad file descriptor
Mar 3 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-graph.log: Bad file descriptor
Mar 3 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-node.log: Bad file descriptor
So I added su directives to /etc/logrotate.d/munin and /etc/logrotate.d/munin-node:
/var/log/munin/munin-html.log
/var/log/munin/munin-nagios.log
/var/log/munin/munin-limits.log
/var/log/munin/munin-update.log {
su munin munin
daily
missingok
rotate 7
compress
copytruncate
notifempty
create 640 munin munin
}
/var/log/munin/munin-graph.log {
su munin www
daily
missingok
rotate 7
compress
copytruncate
notifempty
create 660 munin www
}
/var/log/munin/munin-cgi-graph.log {
su wwwrun munin
daily
missingok
rotate 7
compress
copytruncate
notifempty
create 640 wwwrun www
}
/var/log/munin/munin-node.log {
su munin munin
daily
missingok
rotate 7
compress
copytruncate
notifempty
create 640 munin munin
}
Now logrotate doesn't rotate anymore.
Mar 5 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-html.log: Bad file descriptor
Mar 5 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-limits.log: Bad file descriptor
Mar 5 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-update.log: Bad file descriptor
Mar 5 12:15:05 lucien logrotate: error: error reading /var/log/munin/munin-graph.log: Bad file descriptor
Mar 5 12:15:05 lucien logrotate: error: error setting owner of /var/log/munin/munin-cgi-graph.log-20120305: Operation not permitted
Mar 5 12:15:05 lucien logrotate: error: error opening /var/log/munin/munin-node.log: Permission denied
An ls -la of /var/log/munin/ is here.
How do I get logrotate to work again with munin?
|
Turns out this was a bug introduced in logrotate-3.7.9-6.9.1 and fixed in logrotate-3.7.9-6.12.1.
| logrotate doesn't work for munin after last update on OpenSUSE 11.4 |
1,437,283,631,000 |
When I plug in my external hard-drive when running KDE, it prompts me to mount the device (by clicking an icon, no sudo involved), and once I've done that I am the owner of the files. This is great.
When using other window managers (awesome, fluxbox, etc), I have to mount manually (sudo mount...) and thus root becomes the owner. sudo chown -R myname /mount_point just gives me "operation not permitted" errors. How can I make myself as user the owner of the file system on the external drive?
I use this drive for backups and having to do that as root is tedious (and I wouldn't be surprised if it's dangerous as well).
|
You should add a line to your /etc/fstab file with the path to your device, the path to where you want to mount it, then include "user,noauto" as the file system mount options. This will keep the system from trying to mount it at boot time, but allow you to mount the device as an ordinary user without using sudo. For example here is a line I use to mount my sd card reader:
/dev/sdf1 /mnt/sd auto noauto,user 1 1
Then I can just mount /mnt/sd as an ordinary user any time I want to access my card.
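If the drive holds a filesystem without Unix ownership (FAT or NTFS), chown cannot work on it at all; ownership has to be set at mount time instead. A hedged sketch of such an fstab line (the device path and uid/gid 1000 are assumptions, not from the answer above):

```
/dev/sdf1  /mnt/sd  vfat  noauto,user,uid=1000,gid=1000  0  0
```

With uid= and gid= set, every file on the mounted drive appears owned by that user, with no chown needed afterwards.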
| how to chown mounted device? |
1,437,283,631,000 |
On CentOS 7, I am trying to debug an issue where the nginx amplify agent cannot read /proc/$pid/io even though it is owned by the proper user.
One of the nginx worker processes right now is pid 5693:
# ps aux | grep 5693
nginx 5693 0.5 0.0 129000 14120 ? S Jul18 16:10 nginx: worker process
the nginx user has permission to read the file:
# ls -lAh /proc/5693/io
-r-------- 1 nginx nginx 0 Jul 20 11:30 /proc/5693/io
...but can't actually read it:
# sudo -u nginx /bin/sh -c 'cat /proc/5693/io'
cat: /proc/5693/io: Permission denied
...even though selinux is disabled:
# sestatus
SELinux status: disabled
Root is able to read /proc/5693/io just fine, and the nginx user can read other files in /proc/5693.
It seems like there must be some other security mechanism in place that is preventing the access, but I have no idea what it might be.
|
According to what proc(5) has to say on /proc/[pid]/io, "Permission to access this file is governed by a ptrace access mode PTRACE_MODE_READ_FSCREDS check; see ptrace(2)." The "Ptrace access mode checking" section of the ptrace(2) man page contains a list of things that are checked to grant or deny permission, including whether the process is marked dumpable, whether you have the same fsuid as the target process, etc.; it might be worth having a look at it.
The documentation was added very recently, check upstream.
https://lwn.net/Articles/692203
http://man7.org/linux/man-pages/man5/proc.5.html
I suspect you need to change the GID your process is running under, in addition to the UID.
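As a starting point for that kind of debugging, a small sketch (using the current shell's pid rather than the nginx worker from the question) that inspects the credentials and the system-wide dumpable policy the ptrace check consults:

```shell
# Substitute the target pid (e.g. the nginx worker) for $$
pid=$$
# All four UID/GID columns (real, effective, saved, fs) are consulted
grep -E '^(Uid|Gid):' /proc/$pid/status
# System-wide policy for reading suid/non-dumpable processes
cat /proc/sys/fs/suid_dumpable
```

If any of the four UIDs of the target differ from yours, the PTRACE_MODE_READ_FSCREDS check fails even though the file mode looks readable.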
| Owner can't read /proc/$pid/io |
1,437,283,631,000 |
When running as non-root, if I try to use readlink(2) on a /proc/<pid>/exe for a process not owned by me I get a permission error. So how then does ps with the -f option, which isn't setuid root, determine the executables for processes of a different user?
|
The -f option does not display the full path to the executable, it displays the command line used to invoke the executable. This information is world-readable, from /proc/PID/cmdline, unlike the path to the executable from /proc/PID/exe which can only be read by the user who executed the process.
You can check what data ps is reading by observing its system calls — run strace ps -ef -p 1 | less:
…
stat("/proc/1", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
open("/proc/1/stat", O_RDONLY) = 6
read(6, "1 (init) S 0 1 1 0 -1 4202752 78"..., 1024) = 191
read(6, "", 833) = 0
close(6) = 0
open("/proc/1/status", O_RDONLY) = 6
read(6, "Name:\tinit\nState:\tS (sleeping)\nT"..., 1024) = 752
read(6, "", 272) = 0
close(6) = 0
…
open("/proc/1/cmdline", O_RDONLY) = 6
read(6, "/sbin/init", 2047) = 10
close(6) = 0
…
If you pass the c option, then ps reports the command name from /proc/PID/stat, which is also world-readable. This is the basename of the executable (with no path information) truncated to 16 characters.
I don't think ps has an option to report the path to the executable found in /proc/PID/exe. You can list it with lsof (the txt file descriptor) — and it predictably complains /proc/1/exe (readlink: Permission denied) when asked to print the information about another user's process.
N.B. My answer is about Linux. The details of what information can be reported about other users' processes and how it works are very different across Unix variants.
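The difference can be sketched directly against /proc (using the current shell's pid so the example is self-contained; substitute another user's pid to see the denial described above):

```shell
pid=$$                                    # any target pid works here
tr '\0' ' ' < /proc/$pid/cmdline; echo    # world-readable command line
awk '{print $2}' /proc/$pid/stat          # world-readable comm name
readlink /proc/$pid/exe \
  || echo "readlink denied (not your process)"
```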
| How does ps get the executable of processes of other users? |
1,437,283,631,000 |
I have a question concerning permissions.
I'm running lighttpd and a ftp server.
I want to add a ftp user that is able to upload files to /var/www, which then are viewable in a browser.
What is the safest way to set this up (apart from not using ftp)?
|
usermod -a -G ftp user         # add the account "user" to the ftp group
chown -R :ftp /var/www/html    # give the ftp group ownership of the web root
chmod -R g+w /var/www/html     # allow the group to write there
| File permissions issue with webserver and ftp server |
1,437,283,631,000 |
My default umask is 077. When I create a directory, it has permissions 700:
mkdir AA
$ stat -c'%A %n' AA/
drwx------ AA/
now I want to set default permissions recursively to 750:
setfacl -R --default --modify g::rx,o::--- AA
and confirm it works as expected:
$ touch AA/zz
$ stat -c'%A %n' AA/zz
-rw-r----- AA/zz
Now I want to copy another existing directory ZZ inside my new AA:
$ stat -c'%A %n' ZZ ZZ/zz
drwx------ ZZ
-rw------- ZZ/zz
that existing directory has permissions 700 and file inside has 600.
$ cp -r --no-preserve=all ZZ/ AA/
$ stat -c'%A %n' AA/ZZ AA/ZZ/zz
drwx------ AA/ZZ
-rw------- AA/ZZ/zz
but my umask is not honored, even though I have used --no-preserve=all to specifically not transfer existing permissions from the existing ZZ.
How can I make cp act the same as if when I use touch to create new files?
Regardless what the original permissions are, I want to copy over an existing directory structure, while honoring my default umask/setfacl settings.
|
Solution:
Using Debian 11.5 / cp (GNU coreutils) 8.32
1) This will use your user umask:
$ cp -r --no-preserve=all ZZ/ AA/
2) This will use effective umask from destination directory:
$ cp -r ZZ/ AA/
3) This will not use any umask:
$ cp -r --preserve=all ZZ/ AA/
As a user who can create files in the AA directory, you are not restricted from overriding the default mode of the files you create. That includes when using a utility such as cp. An ACL does not restrict you from creating files with any permissions, as long as you have write permission.
Conclusion:
Everything works as expected. The cp utility would need one more option that applies the user's umask and the ACL, if one exists.
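A small sketch reproducing the three cases above (GNU cp assumed; run in an empty scratch directory). Only case 3 is certain to preserve the source mode; the others depend on your umask and on any default ACL at the destination:

```shell
umask 077
mkdir -p ZZ AA
touch ZZ/zz; chmod 700 ZZ; chmod 600 ZZ/zz
cp -r --no-preserve=all ZZ AA/case1   # 1) user umask applies
cp -r ZZ AA/case2                     # 2) destination defaults apply
cp -r --preserve=all ZZ AA/case3      # 3) source modes preserved
stat -c '%a %n' AA/case1/zz AA/case2/zz AA/case3/zz
```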
#=========================================================
This is part of manual for "acl":
CHANGES TO THE FILE UTILITIES
On a system that supports ACLs, the file utilities ls(1), cp(1), and mv(1) change their behavior in the following way:
• For files that have a default ACL or an access ACL that contains more than the three required ACL entries, the ls(1) utility in the long form produced by ls -l displays a plus sign (+) after the permission string.
• If the -p flag is specified, the cp(1) utility also preserves ACLs. If this is not possible, a warning is produced.
......
#----------------------------------------------------------------
STANDARDS
The IEEE 1003.1e draft 17 (“POSIX.1e”) document describes several security extensions to the IEEE 1003.1 standard. While the work on 1003.1e has been abandoned, many UNIX style systems implement parts of POSIX.1e draft 17, or of earlier drafts.
Linux Access Control Lists implement the full set of functions and utilities defined for Access Control Lists in POSIX.1e, and several extensions. The implementation is fully compliant with POSIX.1e draft 17; extensions are marked as such. The Access Control List manipulation functions are defined in the ACL library (libacl, -lacl). The POSIX compliant interfaces are declared in the <sys/acl.h> header. Linux-specific extensions to these functions are declared in the <acl/libacl.h> header.
| cp overrides my default permissions settings, when copying files with: cp -r --no-preserve=all |
1,437,283,631,000 |
I am using sftp with the internal-sftp for debian.
What I'm trying to accomplish is to jail all users to a specific folder, which is working fine. I also need to have a single user that has "admin" rights on sftp but is not a root user. The admin user will be putting files in the sftp users' directories, so they will be able to access them.
The admin user will be a "non-technical" person using winscp or other client to do stuff. There is no way I can force him to use bash.
I came up with the following solution:
SFTP configuration
Using sshd_config I set up this:
Match group users
ChrootDirectory /home
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp -d %u
Match group sftponly
ChrootDirectory /home
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp -d admin/sftp/%u
So my 'admin' user has all the sftp users in his home. 'admin' is also in the users group. all other users are created in the sftponly group. 'admin' is also in the sftponly group.
Directory setup
The directory setup is as follows:
-/
-home
-admin
-sftp
-user1
-user2
I created a script for creating the sftp users that perform the following:
add user $U:
useradd -d / -g 1000 -M -N -s /usr/sbin/nologin $U
set user $U password
echo "$U:$P" | chpasswd
create directory /home/admin/sftp/$U
mkdir $SFTP_PATH/$U
set ownership
chown $U:sftponly $SFTP_PATH/$U
set permissions
chmod u=rx,go= -R $SFTP_PATH/$U
chmod g+s $SFTP_PATH/$U
Setup ACL
setfacl -Rm u:admin:rwx,u:$U:r-x,g::--- $SFTP_PATH/$U
setfacl -d -Rm u:admin:rwx,u:$U:r-x,g::--- $SFTP_PATH/$U
So far so good.
Now what I want to have in point 6 is a setup that will allow the user admin to create a subdirectory in $SFTP_PATH/$U that will be accessible to $U itself. This works fine for the first directory created (user tester):
# pwd
/home/admin/sftp/tester
# ls -alh
dr-xrwx---+ 2 tester sftponly 4.0K Oct 22 16:06 tester
# su admin
$ cd /home/admin/sftp/tester
$ mkdir subdir
$ ls -alh
admin@server:/home/admin/sftp/tester$ ls -alh
total 20K
dr-xrwx---+ 3 tester sftponly 4.0K Oct 22 22:41 .
drwxrwx---+ 28 admin sftponly 4.0K Oct 22 15:19 ..
dr-xrwx---+ 2 admin users 4.0K Oct 22 22:41 subdir
$ cd subdir
admin@storage:/home/admin/sftp/tester/subdir$ mkdir nesteddir
mkdir: cannot create directory ‘nesteddir’: Permission denied
When I test the ACL I get:
admin@storage:/home/admin/sftp/tester$ getfacl subdir/
# file: subdir/
# owner: admin
# group: users
user::r-x
user:admin:rwx
user:tester:r-x
group::---
mask::rwx
other::---
default:user::r-x
default:user:admin:rwx
default:user:tester:r-x
default:group::---
default:mask::rwx
default:other::---
So my question is: Being admin, and having setfacl grant admin rwx, why can I create the directory subdir but cannot create the directory nesteddir?
Is there something I am missing here?
I know of proftpd and pureftp but if possible I would like to use the ssh way. If there is no way to do this this way I would appreciate to point me in the right direction and recommend software that would be able to achieve this setup out of the box.
Please note: user admin has his own directory under /home/admin/sharedfiles/, where he stores files that are then shared with the sftp users. The files are shared using hard links in their folders. For example if admin wants to share a file (the files are very big like 500GB) with 3 users he just puts hardlinks in their folders to those files and the can download them without having to copy the big files to the folder of each user.
The issue occured when admin wanted to put different categories of shares in different folders fo the users.
EDIT:
I noticed that if I change the ownership of the newly created folder to 'tester' - then the creating of nested directories is possible for the admin user. However I still have to change the ownership of the nested directory to allow for further directory nesting.
# chown tester:sftponly subdir
# su admin
$ cd /home/admin/sftp/tester/subdir
$ mkdir nested # <----- works fine
$ cd nested
$ mkdir deepdir
mkdir: cannot create directory ‘deepdir’: Permission denied
So if I wanted to create the next nested directory then I have to chown tester:sftponly nested and then as user admin I can create the deepdir directory.
Please note that the ACL is inherited and theoretically user admin has rwx permissions to all files and directories under the first folder, that is subdir.
Maybe this will help in finding the reason for failing setfacl?
|
Notice that the group varies when creating subdir:
drwxrwx---+ 28 admin sftponly 4.0K Oct 22 15:19 ..
dr-xrwx---+ 2 admin *users* 4.0K Oct 22 22:41 subdir
Nested directory creation is possibly restricted by the subdir's distinct group (users instead of sftponly).
| Why does default setfacl fail for nested directories? |
1,437,283,631,000 |
Can I do this?
sudo chown -R myUsernName /usr/lib
I mean can I do this without worrying that my OS will be broken? Or permissions will be screwed up?
Here is the reason why I would like to do it
https://stackoverflow.com/questions/6752873/node-js-npm-install-fails
and here is why I don't and I came here to ask you guys:
How to get back sudo on Ubuntu?
|
Two things:
(1) There is absolutely no advantage to this. The files in /usr/lib are supposed to be owned by root/system, as MANY things on the system which are owned by root depend on them.
(2) This is also a very good way to break your system.
Just to make a point, follow this general rule of thumb:
If in doubt, don't do it.
| Is it safe to chown on /usr/lib? |
1,437,283,631,000 |
Suppose I have a folder, containing other files and folders, and I'd like to find out recursively which subfiles and subfolders have non-default permissions (i.e., not 644 or 755).
Which command can be used to do that? The command should output a list of the relevant files and folders, and their permissions.
|
You can do the entire task using just find:
$ find . -type f ! \( -perm 755 -o -perm 644 \) -printf "%m\t%p\n"
Example
Make all permutations of permissions (000-777).
$ touch {0..7}{0..7}{0..7}
$ for i in {0..7}{0..7}{0..7}; do chmod $i $i;done
$ find . -type f | wc -l
512
A sample of our find command's list of files it's finding:
$ find . -type f ! \( -perm 755 -o -perm 644 \) -printf "%m\t%p\n"| head -10
734 ./734
376 ./376
555 ./555
663 ./663
256 ./256
336 ./336
2 ./002
152 ./152
527 ./527
416 ./416
If we run our find command we can confirm that it worked:
$ find . -type f ! \( -perm 755 -o -perm 644 \) -printf "%m\t%p\n" | grep 755
$ find . -type f ! \( -perm 755 -o -perm 644 \) -printf "%m\t%p\n" | grep 644
| Which command to use to find all files/folders with non-default permissions? |
1,437,283,631,000 |
Back with another probably very very basic UNIX question.
I understand the premise of gzipped tape archives (.tgz) is that they preserve uid, gid, permissions...
However, it seems that this isn't portable. For example, what if the user john makes a .tgz on one UNIX machine and extracts it on a machine without that user, or with a user with the same name but a different UID.
How does this work?
|
Really old tar formats only store numerical user and group identifiers, and so they have the problem you describe.
However beginning with the POSIX standard from 1988, tar formats such as the Unix standard tar format or pax also store the username and group name, so they can preserve ownership by name. Given a tarball containing a file owned by uid 1234, with username john, tar will look for a user named john, and extract the file with that ownership if possible (potentially with a uid that’s not 1234), falling back to uid 1234 if there is no such user.
None of this is perfect, which is why tar doesn’t restore ownership unless run as root (aside from the fact that it needs to be root to change ownership anyway); by default files are extracted with the ownership of the running user.
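You can see both stored forms with GNU tar (demo.txt is just a throwaway file for illustration):

```shell
touch demo.txt
tar -cf demo.tar demo.txt
tar -tvf demo.tar                    # owner/group listed by name
tar --numeric-owner -tvf demo.tar    # stored numeric uid/gid instead
```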
| File ownership preservation with .tgz |
1,437,283,631,000 |
If the current user only has execute (--x) permissions on a file, under which user does the interpreter (specified by #!/path/to/interpreter at the beginning of the file) run?
It couldn't be the current user, because he doesn't have permission to read the file. It couldn't be root, because then arbitrary code included in the interpreter would gain root access.
As which user, then, does the interpreter process run?
Edit: I think my question assumes that the file has already been read enough to know which interpreter it specifies, when in reality it wouldn't get that far. The current shell (usually b/a/sh) interpreting the command to execute the target file would attempt to read it, and fail.
|
If the user has no read permission on an executable script, then trying to run it will fail, unless she has the CAP_DAC_OVERRIDE capability (eg. she's root):
$ cat > yup; chmod 100 yup
#! /bin/sh
echo yup
^D
$ ./yup
/bin/sh: 0: Can't open ./yup
The interpreter (whether failing or successful) will always run as the current user, ignoring any setuid bits or setcap extended attributes of the script.
Executable scripts are different from binaries in the fact that the interpreter should be able to open and read in order to run them. However, notice that they're simply passed as an argument to the interpreter, which may not try to read them at all, but do something completely different:
$ cat > interp; chmod 755 interp
#! /bin/sh
printf 'you said %s\n' "$1"
^D
$ cat > script; chmod 100 script
#! ./interp
nothing to see here
^D
$ ./script
you said ./script
Of course, the interpreter itself may be a setuid or cap_dac_override=ep-setcap binary (or pass down the script's path as an argument to such a binary), in which case it will run with elevated privileges and could ignore any file permissions.
Unreadable setuid scripts on Linux via binfmt_misc
On Linux you can bypass all the restrictions on executable scripts (and wreck your system ;-)) by using the binfmt_misc module:
As root:
# echo ':interp-test:M::#! ./interp::./interp:C' \
> /proc/sys/fs/binfmt_misc/register
# cat > /tmp/script <<'EOT'; chmod 4001 /tmp/script # just exec + setuid
#! ./interp
id -u
EOT
As an ordinary user:
$ echo 'int main(void){ dup2(getauxval(AT_EXECFD), 0); execl("/bin/sh", "sh", "-p", (void*)0); }' |
cc -include sys/auxv.h -include unistd.h -x c - -o ./interp
$ /tmp/script
0
Yuppie!
More information in Documentation/admin-guide/binfmt-misc.rst in the kernel source.
The -p option may cause an error with some shells (where it could be simply dropped), but is needed with newer versions of dash and bash in order to prevent them from dropping privileges even if not asked for.
| Who runs the interpreter for files that are execute-only? |
1,437,283,631,000 |
I have some scripts in a folder that I run often. These scripts are updated frequently. To be more specific, every time we do a deployment on our server, we replace the scripts with updated ones from our git repo.
Do we have to make them executable every time?
|
If you are simply checking out from git, you should be able to set the executable mode flag on the files in git itself.
If you are committing from *Nix (including macOS) then you can usually¹ just chmod +x the file before you git add git commit.
If you are committing from somewhere that doesn't have an executable bit, or perhaps from Windows, see the answer to How to create file execute mode permissions in Git on Windows?.
This should result in the files having executable mode set on them as they are updated by git during a git pull or git checkout etc.
¹Note this can only work if you've cloned onto a filesystem that stores the executable bit +x and has been mounted in a way that allows it; some filesystems might not, such as NTFS or FAT32.
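As a sketch (deploy.sh is a hypothetical script name; run inside a working clone):

```shell
chmod +x deploy.sh
git add deploy.sh               # stages the content with mode 100755
git ls-files -s deploy.sh       # should show 100755 for the staged entry
# or set the bit in the index without touching the working tree:
git update-index --chmod=+x deploy.sh
```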
| Is it necessary to set the executable bit on scripts checked out from a git repo? |
1,437,283,631,000 |
I'm trying to create a subdirectory under an existing directory tree. I want to know whether only the permissions of the directory where I'll be creating my subdirectory matter, or whether the parent directories also have some effect on the permission to create the directory.
I'll be doing this programmatically, so I need to be sure that I have covered a wide range of scenarios.
|
Yes, they matter. To create a directory, you need to be able to write to its parent directory. Creating a directory is just like creating a file (after all, everything is a file) so you need write access to the parent. In addition, you need to be able to get to the parent directory which means you need execute access to all directories in the tree:
$ sudo tree -pgu
.
└── [drwxr-xr-x terdon terdon] dir1
└── [drwx------ bob bob ] dir2
└── [drwxr-xr-x terdon terdon] dir3
In the example above, dir2 is owned by bob. This means I cannot cd into it, and I can't cd into its subdirectory dir3 either:
$ cd dir1/dir2/
bash: cd: dir1/dir2/: Permission denied
$ cd dir1/dir2/dir3
bash: cd: dir1/dir2/dir3: Permission denied
If I give myself execute access to dir2, I will be able to move to both dir2 and dir2/dir3, but I still won't have the right to create files/directories in dir2:
$ sudo tree -pgu
.
└── [drwxr-xr-x terdon terdon] dir1
└── [drwx--x--x bob bob ] dir2
└── [drwxr-xr-x terdon terdon] dir3
$ cd dir1/dir2/
$ ls
ls: cannot open directory '.': Permission denied
$ touch file
touch: cannot touch 'file': Permission denied
As you can see above, while I can move into the directory, I can't list its contents because I don't have read access to it and I can't create anything there because I don't have write access.
So, to be able to create a new file or directory inside a directory you need:
Execute permissions on every parent directory of your target directory.
Execute and write permissions for the target directory.
| Required permission to create directory |
1,437,283,631,000 |
I want to make a backup of my home directory to an NTFS partition (an unfortunate limitation). However, when I last tried using just cp, the attributes (owner, etc) went away. How can I make a backup while still preserving these attributes? My first instinct is to make a tarball, but I'm not sure if this will work.
For reference, I'm running Ubuntu Raring devel.
|
Unfortunately, the NTFS permissions model and the Unix one don't look alike at all. There simply is no way to sanely map between them.
Use tar, but read the documentation carefully so all permissions get faithfully stored (including ACLs and SELinux contexts).
| How can I backup a directory to NTFS while preserving Unix file attributes? |
1,437,283,631,000 |
I started off with changing to the folder I want to change permissions for, and that is the opt folder.
$ cd /opt/
test@testVM:/opt$
So I tried changing the permissions for this folder now using:
sudo chmod 775
And that hasn't worked. It showed this message:
Try 'chmod --help' for more information.
There is something I am forgetting or leaving out.
Please can you show me what I am doing wrong?
Thanks in advance.
|
You forgot the "change what" part of the command.
Most commands are like a simple "verb-noun" type structure. (Which, if you think about it, tends to explain why we sound like Yoda when we talk)
You said "chmod 755"... which is the verb... where's the noun?
sudo chmod 755 . # the '.' means 'here'
-or-
sudo chmod 755 /opt # always better to specify exactly what you want
My question is going to be: Why do you want to do that? What need do you have to change the permissions of /opt? (not that it's vitally important for me to know, but you should know that changing permissions of anything that's not in your /home folder is usually not a good idea. Think about what you're doing.)
| Change Folder Permissions |
1,437,283,631,000 |
I have a zip file containing a lot of files and directories for a certain application that should run on Linux. Some files need to be set as executable, but the zip file format AFAIK does not preserve execution rights.
I need to manually set the execution right on files after extracting the archive and (I am getting to it) my question is:
If I do not know which files need to be executable, is it a good idea to add the execution permission recursively to the whole directory? Can it pose any security risks? Is anyone aware of any other problems it may cause?
|
There isn't any direct security risk with making a file executable unless it's setuid or setgid. Of course, there's the indirect risk that something you expect to be inert — some data file that you'd normally open in an application — can also be executed directly on your system with nefarious consequences. For example, if you have a file called README that actually contains a rootkit program, opening it in a text editor is safe, but you'd better not execute it.
A reasonable heuristic to recognize files that are meant to be executable is to look at their first few bytes and recognize executable signatures. This is a matter of convenience rather than security, but if you're willing to make all files executable anyway, it means you don't have a security concern, only a usability one. Here's a possible heuristic:
for x in *; do
case $(file - <"$x") in
*executable*) chmod +x -- "$x";;
esac
done
Here's another heuristic which should differ only in corner cases ($'\177' is ksh/bash/zsh syntax, replace it by a literal character 0177 = 127 = 0x7f in other shells).
for x in *; do
case $(head -c 4 <"$x") in
'#!'*|$'\177'ELF) chmod +x -- "$x";;
esac
done
In both cases, just because a file is recognized as executable doesn't mean you can execute it on your system; for example a binary for the wrong processor architecture will be happily made executable. Here's a different approach that makes all scripts executable, but dynamically linked binaries only if they're for the right architecture and you have the required libraries, and misses statically linked binaries altogether.
for x in *; do
case $(head -c 2 <"$x") in
'#!') chmod +x -- "$x";;
*) if ldd -- "$x" >/dev/null 2>/dev/null; then chmod +x "$x"; fi;;
esac
done
| Setting execution right to whole directory - is it good or bad idea? |
1,437,283,631,000 |
When the file owner is part of several groups, how does ls -l decide which group to show? For example, on MacOS, I see
drwx------+ 48 flow2k staff 1536 Feb 5 10:11 Documents
drwxr-xr-x+ 958 flow2k _lpoperator 30656 Feb 22 16:07 Downloads
Here the groups shown for the two directories are different (staff and _lpoperator). What is this based on? I am a member of both groups.
|
I think this question stems from a misunderstanding of how groups work. The groups listed in ls -l are not the group that the user is potentially in, but the group that the file is owned by. Each file is owned by a user and a group. Often, this user is in the group, but this is not necessary. For example, my user is in the following groups:
$ groups
audio uucp sparhawk plugdev
but not in, say, the group cups. Now, let's create a file.
$ touch foo
$ ls -l foo
-rw-r--r-- 1 sparhawk sparhawk 0 Feb 23 21:01 foo
This is owned by the user sparhawk, and the primary group for me, which is also called sparhawk. Let's now change the group owner of the file.
$ sudo chown -v sparhawk:cups foo
changed ownership of 'foo' from sparhawk:sparhawk to sparhawk:cups
$ ls -l foo
-rw-r--r-- 1 sparhawk cups 0 Feb 23 21:01 foo
You can see that the group that now owns the file is not a group that I am in.
This concept allows precise manipulation of file permissions. For example, you could create a group with members X, Y, and Z, and share files between the three of them. You could further make X the file's owner with write permission, while only giving the group (and hence Y and Z) read permission.
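The owner/group/other split described above can be sketched with plain mode bits (a scratch file via mktemp; changing the owning group with chgrp additionally requires being a member of the target group, or root, so only the mode is adjusted here):

```shell
# Give the owner read+write, the group read-only,
# and everyone else no access at all.
f=$(mktemp)
chmod u=rw,g=r,o= "$f"

# The octal mode is now 640: rw- r-- ---
stat -c '%a' "$f"
rm -f "$f"
```

With the owning group set appropriately (via chgrp), this is exactly the X-writes, Y-and-Z-read arrangement.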
| Which owner group does `ls -l` show? |
1,437,283,631,000 |
I am trying to recursively change the permission of all files and directories in my project.
I found a post on the magento forum saying that I can use these commands:
find ./ -type f | xargs chmod 644
find ./ -type d | xargs chmod 755
chmod -Rf 777 var
chmod -Rf 777 media
It worked for find ./ -type d | xargs chmod 755.
The command find ./ -type f returned a lot of files, but I get chmod: access to 'fileXY.html' not possible: file or directory not found on all files, if I execute find ./ -type f | xargs chmod 644.
How can I solve this?
PS: I know that he recommended to use 777 permission for my var and media folder, which is a security risk, but what else should we use?
|
I’m guessing you’re running into files whose names contain characters which cause xargs to split them up, e.g. whitespace. To resolve that, assuming you’re using a version of find and xargs which support the appropriate options (which originated in the GNU variants, and aren’t specified by POSIX), you should use the following commands instead:
find . -type f -print0 | xargs -0 chmod 644
find . -type d -print0 | xargs -0 chmod 755
or better yet,
chmod -R a-x,a=rX,u+w .
which has the advantages of being shorter, using only one process, and being supported by POSIX chmod (see this answer for details).
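To see why the capital X matters here, a quick sketch on a throwaway directory: the a-x clause first strips every execute bit, and X then re-adds execute only where it makes sense, i.e. on directories:

```shell
d=$(mktemp -d)            # mktemp -d creates the directory with mode 700
touch "$d/page.html"
chmod 777 "$d/page.html"  # deliberately wrong starting point

chmod -R a-x,a=rX,u+w "$d"

stat -c '%a %n' "$d" "$d/page.html"
# the directory ends up 755, the regular file 644
```

The clauses are applied left to right, so by the time X is evaluated the regular file has no execute bits left and gains none back.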
Your question on media and var is rather broad, if you’re interested in specific answers I suggest you ask it as a separate question with more information on what your use of the directories is.
| xargs + chmod - file or directory not found |
1,340,490,908,000 |
There's been more than one occasion where something has gone wrong with my system because, all of a sudden, the permissions on certain key files changed to -rw------- (and it sometimes takes a long time to find which file is the culprit). Once I do a chmod 777 filename, everything appears to be fine.
For example, I was trying to install vncserver on an Ubuntu machine. For whatever reason, vncserver failed to start, so I rebooted the machine. Then I was unable to log into an Xfce session because the /home/user/.Xauthority file was -rw------- instead of -rwxrwxrwx. Doing chmod 777 /home/user/.Xauthority corrected my issue.
This wasn't the only time I've experienced something along these lines. So my question is: what causes this to happen? Do I need to watch what I install?
|
What causes a file to lose permissions is either a program changing the permissions (rare) or a program recreating a new file with the same name and different permissions. The latter is what is happening here.
The .Xauthority file is maintained through the xauth utility. Whenever xauth changes the file, it first creates a new version, then moves it into place. This avoids having a malformed half-written file if xauth fails in the middle for any reason (disk full, power failure, …).
The .Xauthority file is always (re-)created with mode 600 (accessible only to the owner, with read and write permissions, i.e., rw-------) because these are the permissions that make sense for the file. The file contains confidential data, so it must not be accessible to other users. The file isn't executable, so it doesn't have any execute permission.
Whatever problem you're trying to solve, you're doing it wrong. The permissions 777 on .Xauthority are nonsensical. In most common situations, .Xauthority will have the correct data automatically. Occasionally, you might need to copy cookies from one file to another with xauth merge, sometimes preceded by xauth extract. I suggest that you ask a question to find out what you should be doing instead; be sure to describe your problem precisely.
To summarize: in this case, your permissions don't stick because they don't make sense, so the program that normally manipulates the file doesn't bother to replicate them.
| What causes files to lose permissions? |
1,340,490,908,000 |
I have a prompt that asks me to delete all the files in a directory that the owner (u) can't r, w, nor x, in one command.
I tried this command:
find data -type f ! -perm -u=rwx -exec rm -f {} \;
... but I think it removes too many files.
|
I think you want this, which assumes that you are using GNU find specifically:
find -type f \! -perm /u=rwx -exec echo rm -f {} \;
Note that I added an echo for testing.
If the files that get printed match your expectations, take it out. :)
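A small sandbox check of the predicate (GNU find; the file names are made up): -perm /u=rwx matches when the owner has any of r, w or x, so negating it keeps only files where the owner has none of them:

```shell
d=$(mktemp -d)
touch "$d/locked" "$d/readable"
chmod 000 "$d/locked"     # owner has no permissions at all -> matches
chmod 400 "$d/readable"   # owner can still read            -> excluded

find "$d" -type f ! -perm /u=rwx
# prints only .../locked
```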
| Delete all files without user permissions |
1,340,490,908,000 |
Example script:
#!/bin/sh -e
sudo useradd -m user_a
sudo useradd -m user_b -g user_a
sudo chmod g+w /home/user_a
set +e
sudo su user_a <<EOF
cd
umask 027
>> file_a
>> file_b
>> file_c
ls -l file_*
EOF
sudo su user_b <<EOF
cd
umask 000
rm -f file_*
ls -l ~user_a/
set -x
mv ~user_a/file_a .
cp ~user_a/file_b .
ln ~user_a/file_c .
set +x
ls -l ~/
EOF
sudo userdel -r user_b
sudo userdel -r user_a
Output:
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c
total 0
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c
+ mv /home/user_a/file_a .
+ cp /home/user_a/file_b .
+ ln /home/user_a/file_c .
ln: failed to create hard link ‘./file_c’ => ‘/home/user_a/file_c’: Operation not permitted
+ set +x
total 0
-rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a
-rw-r----- 1 user_b user_a 0 Jul 11 12:26 file_b
userdel: user_b mail spool (/var/mail/user_b) not found
userdel: user_a mail spool (/var/mail/user_a) not found
|
Which system are you running? On Linux, that behaviour is configurable, through /proc/sys/fs/protected_hardlinks (or sysctl fs.protected_hardlinks).
The behaviour is described in proc(5):
/proc/sys/fs/protected_hardlinks (since Linux 3.6)
When the value in this file is 0, no restrictions are placed
on the creation of hard links (i.e., this is the historical
behavior before Linux 3.6). When the value in this file is 1,
a hard link can be created to a target file only if one of the
following conditions is true:
The calling process has the CAP_FOWNER capability ...
The filesystem UID of the process creating the link matches
the owner (UID) of the target file ...
All of the following conditions are true:
the target is a regular file;
the target file does not have its set-user-ID mode bit
enabled;
the target file does not have both its set-group-ID and
group-executable mode bits enabled; and
the caller has permission to read and write the target
file (either via the file's permissions mask or because
it has suitable capabilities).
And the rationale for that should be clear:
The default value in this file is 0. Setting the value to 1
prevents a longstanding class of security issues caused by
hard-link-based time-of-check, time-of-use races, most
commonly seen in world-writable directories such as /tmp.
On Debian systems protected_hardlinks and the similar protected_symlinks default to one, so making a link without write access to the file doesn't work:
$ ls -ld . ./foo
drwxrwxr-x 2 root itvirta 4096 Jul 11 16:43 ./
-rw-r--r-- 1 root root 4 Jul 11 16:43 ./foo
$ mv foo bar
$ ln bar bar2
ln: failed to create hard link 'bar2' => 'bar': Operation not permitted
Setting protected_hardlinks to zero lifts the restriction:
# echo 0 > /proc/sys/fs/protected_hardlinks
$ ln bar bar2
$ ls -l bar bar2
-rw-r--r-- 2 root root 4 Jul 11 16:43 bar
-rw-r--r-- 2 root root 4 Jul 11 16:43 bar2
| Why can I not hardlink to a file I don't own even though I can move it? |
1,340,490,908,000 |
I know I could just put something like sudo mypassword in my .bash_profile, but I don't want to run every command as root.
I want password to autofill under following circumstances:
only the commands requiring root privileges
only commands where I explicitly elevate to root with sudo
Example:
sudo cd /var/root #When I type this
Password: #I don't want to be prompted for my password
#I want to fill it from my `.bash_profile`
But:
cd /var/root #When I type this
-bash: cd: /var/root: Permission denied #I still want this, or the like, returned
I saw this post on increasing sudo timeout, but I don't think it's quite equivalent. For example, I want it to carry across different shell log-in sessions. I could be wrong.
Any suggestions regarding what to (or not to!) add to my .bash_profile, or which method (timeout vs profile) is preferable would be greatly appreciated! Thank you in advance.
|
If you don't want to be challenged for your password every time, then I'd recommend setting NOPASSWD in your /etc/sudoers file rather than hardcoding your password in your login scripts. At least this way your primary login's password will remain intact and not be completely exposed in your .bash_profile.
To make this change run the command sudo visudo, and change your user accounts entry to something like this:
userX ALL=(ALL) NOPASSWD: ALL
| How do I fill my password automatically from .bash_profile when running command as sudo? |
1,340,490,908,000 |
I'm attempting to start a service that our company created. In one particular environment, it's failing to start on three "permission denied" errors. I'm not sure which files it is failing on, so I would like to log all permissions-related errors while I attempt to start the daemon.
I've found auditd, but I've been unable to place a watch on the whole disk for specifically permissions-related errors. What is the best way to audit all permissions-related errors?
|
You could use strace to view all filesystem activity of the processes related to the daemon, and see which ones fail when the permission denied errors appear.
If the error comes from a shell script that starts the service, you can run sh -x /path/to/startup/script (or bash -x /path/to/startup/script if the script begins with #!/bin/bash) and the shell will print each line as it executes it.
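As a sketch of the second approach, with a made-up stand-in script (your real startup script goes in its place): the -x trace prints each command with a leading + just before it runs, so the failing command is the one printed immediately above the error message.

```shell
# A throwaway script standing in for the real startup script:
cat > /tmp/start-demo.sh <<'EOF'
echo "starting service"
cat /etc/nonexistent.conf
echo "service started"
EOF

# Each line is echoed (prefixed with '+') as it executes:
sh -x /tmp/start-demo.sh
```

Here the trace shows + cat /etc/nonexistent.conf right before cat's error, pinpointing the offending file.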
| A service is failing to start on permissions denied--which files is it failing on? |
1,340,490,908,000 |
I am trying to run the statistics software Stata 11 on Ubuntu 11.10. as a regular user and I get the following error message:
bash: xstata: Permission denied
The user privileges seem OK to me, though:
-rwxr-x--x 1 root root 16177752 2009-08-27 16:29 xstata*
I would very much appreciate some advice on how to resolve this issue!
|
In the ls output you can see the file owner (root) and group (root). The user privileges apply to the file owner (rwx), the file group (r-x) and others (--x). Because you are not root (and I suppose that you are not in the root group), only the others part (--x) applies to you. Thus you can execute the file, but not read it; and since xstata is most likely a script, the interpreter needs to read it, so execute permission alone is not enough. As a quick fix, try chmod +r xstata; this gives read permission to all.
| "Permission denied" when starting binary despite "rwx" privilege
1,340,490,908,000 |
I have this situation
$ ls -la
-rw-r--r-- 1 user user 123 Mar 5 19:32 file-a
-rwx---rwx 1 user user 987 Mar 5 19:32 file-b
I would like to overwrite file-b with file-a, but I would like to preserve all permissions and ownership of file-b.
This does not work, because it uses permissions of file-a
cp file-a file-b # << edit: this works as expected! My fault!
mv file-a file-b
This works, but it can be called only from a shell. Imagine a situation where I can only call execve or a similar function.
cat file-a > file-b
I know, that I can execute something like
sh -c "cat file-a > file-b"
but this introduces difficulties with escaping filenames, so I don't want to use it.
Is there some common command that can do this, or should I write my own helper C program for this task?
|
A simple command to copy a file without copying the mode is
dd if=file-a of=file-b
but then you get dd’s verbose status message written to stderr.
You can suppress that
by running the command in a shell and adding 2> /dev/null,
but then you’re back to square 1.
If you have GNU dd, you can do
dd if=file-a of=file-b status=none
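A quick check that the target's mode really survives (scratch files; 707 is just an easily recognisable mode):

```shell
src=$(mktemp); dst=$(mktemp)
echo 'new contents' > "$src"
chmod 707 "$dst"               # give the target a distinctive mode

dd if="$src" of="$dst" status=none

cat "$dst"                     # prints: new contents
stat -c '%a' "$dst"            # prints: 707
rm -f "$src" "$dst"
```

dd truncates and rewrites the existing file rather than replacing it, so the inode, and with it the mode and ownership, is untouched.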
| Overwrite file preserving target permissions without shell invocation |
1,340,490,908,000 |
At first I wanted to install a package on a server to which I don't have root access. Since I don't have root access I tried to build it myself but I get an error in the configuration stage.
Here is the commands I run:
cd ~
git clone https://github.com/stella-emu/stella.git
cd stella/
./configure --prefix=$HOME/atari
Then I get the following error:
Running Stella configure...
mkdir: cannot create directory `/tmp/cg-2059': Permission denied
config.guess: cannot create a temporary directory in /tmp
Looking for C++ compiler... none found!
Is there any way I can fix this?
Here are some diagnosis information
-bash-4.2$ ls -ld /tmp
drwxr-xr-x 7 root root 4096 Dec 9 20:39 /tmp
-bash-4.2$ find /tmp -mindepth 1 -maxdepth 1 -printf x | wc -c
12
-bash-4.2$ mkdir ~/tmp
-bash-4.2$ ls
amin bs94 Maildir public_html skel.tar.gz speedtest_cli.py speedtest.py stella tajdari tmp
-bash-4.2$ cd stella/
-bash-4.2$ TMPDIR="$HOME/tmp" ./configure --prefix=$HOME/atari
Running Stella configure...
Looking for C++ compiler... none found!
-bash-4.2$ type -a c++ g++ clang++
c++ is /usr/bin/c++
g++ is /usr/bin/g++
-bash: type: clang++: not found
-bash-4.2$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 7.11 (wheezy)
Release: 7.11
Codename: wheezy
So now I'm trying to use junest but again after running:
git clone git://github.com/fsquillace/junest ~/.local/share/junest
export PATH=~/.local/share/junest/bin:$PATH
I get:
-bash-4.2$ junest
mktemp: failed to create directory via template `/tmp/junest.XXXXXXXXXX': Permission denied
Error: null argument
-bash-4.2$ junest -u
mktemp: failed to create directory via template `/tmp/junest.XXXXXXXXXX': Permission denied
Error: null argument
|
See roaima’s answer for the mktemp issue.
Even with that fixed though, you won’t be able to build the current release of Stella. Stella needs GCC 4.8 or later to build, but Debian 7 only has GCC 4.7. You’ll need an older release of Stella (such as 3.7.2 which is the version in Debian 7; I think 4.2 should be OK too).
(I’m the Debian Stella maintainer.)
| mktemp: failed to create directory via template Permission denied |
1,340,490,908,000 |
I have a directory that I cannot delete with rmdir. I always get a permission denied error. But when I list the directory (with ls -l) I get this:
drwxrwxrwx 2 user user 4096 Aug 28 09:34 directory
stat gives me that:
File: `directory/'
Size: 4096 Blocks: 16 IO Block: 32768 directory
Device: 12h/18d Inode: 102368771 Links: 2
Access: (0777/drwxrwxrwx) Uid: ( 1000/ user) Gid: ( 1000/ user)
Access: 2015-08-31 03:00:20.630000002 +0200
Modify: 2015-08-28 09:34:16.772930001 +0200
Change: 2015-08-31 12:25:04.920000000 +0200
So how do I delete that directory?
|
If you are trying to delete a directory foo/bar/, the permissions of bar aren't the relevant factor. Removing the name bar from the directory foo is a modification of foo, so you need write permission on foo.
In your case, check the current directory's permissions with ls -ld .
You might find this answer to "why is rm allowed to delete a file under ownership of a different user?" enlightening.
| How to delete that directory? |
1,340,490,908,000 |
So I'm messing around on a test server and accidentally ran the following (which resulted in SSH breaking):
# chmod -R 777 /var
Because it's a test server, I'd rather not re-install right now, I have things I would like to test.
I understand that 777 is very bad set of mode bits on a live server, and so I already understand that it would be a very bad thing to do on a server with anything valuable on it.
Is there anyway to get SSH functioning again?
|
Reset all UIDs and GIDs:
for i in $(rpm -qa); do rpm --setugids $i; done
Reset all permissions:
for i in $(rpm -qa); do rpm --setperms $i; done
Try to restart:
service sshd restart
Does that help?
| 777'd some files. How do I repair SSH? |
1,340,490,908,000 |
I want to build a tar file as regular (non-root) user with some prepared binaries and configuration files like this;
etc/binaries.conf
usr/bin/binary1
usr/bin/binary2
that are mean to be extracted into the file system under the / directory.
Like a traditional software package .deb, .rpm etc but I need to be "package manager independent". So probably I will just have a .tar file (maybe some gzip, bzip, lzip should be added to the mix but that's outside).
PROBLEM / QUESTION
My problem here is that I don't want to build this tar as the root user. Is there a way to build it as a regular (non-root) user so that, when the .tar file is distributed to the machines and the real root user extracts those binaries, they end up installed as files owned by root rather than by the user who created the archive?
EXAMPLE
Because right now, when I just create the .tar file as a regular (non-root) user with
$ tar cf dist.tar dist/
And then extract the .tar as root user with
# tar xf dist.tar -C /
I see the binaries and the config file with the regular user as owner, not the root user.
$ ls -la /usr/bin/binary1
-rwxr-xr-x 1 user user 30232 jun 20 19:06 /usr/bin/binary1
And I want to have
$ ls -la /usr/bin/binary1
-rwxr-xr-x 1 root root 30232 jun 20 19:06 /usr/bin/binary1
Just to clarify, this hand made packaging is very specific for some task in a closed infrastructure, so right now, using .deb, .rpm or any other more sophisticated packaging system is not an option.
|
The extraction is what determines the ownership, not the creation of the archive. You can see that by looking at the archive's table of contents, e.g.,
tar tvf dist.tar
If you create the archive as a regular user with
tar --owner=0 --group=0 -cf dist.tar dist
the entries will be recorded as owned by root (UID 0, GID 0), and extracting the archive as root will produce root-owned files.
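A quick way to confirm the recorded ownership without extracting anything is to list the archive; this sketch uses scratch paths and GNU tar:

```shell
d=$(mktemp -d)
mkdir -p "$d/dist/usr/bin"
touch "$d/dist/usr/bin/binary1"

# Build the archive as a regular user, but record uid/gid 0:
tar -C "$d" --owner=0 --group=0 -cf "$d/dist.tar" dist

# Every entry is listed as root/root, regardless of who ran tar:
tar tvf "$d/dist.tar"
rm -rf "$d"
```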
| Create, as a regular user, a tar with files owned by root |
1,340,490,908,000 |
Whenever I've attempted to run growisofs via sudo I've always gotten the following error message.
$ sudo -i growisofs
:-( growisofs is being executed under sudo, aborting!
See NOTES paragraph in growisofs manual page for further details.
$ sudo -s growisofs
:-( growisofs is being executed under sudo, aborting!
See NOTES paragraph in growisofs manual page for further details.
Which leads me to having to do a sudo su - followed by growisofs.
$ sudo su - -c growisofs
growisofs: previous "session" device is not specified, do use -M or -Z option
-or-
$ sudo su -
# growisofs ...
Is there a alternative way I can do this without having to do the su -?
Background
This behavior is built into the tool growisofs to thwart giving access to the filesystem with elevated privileges.
http://fy.chalmers.se/~appro/linux/DVD+RW/growisofs.1m.html
excerpt
NOTES
If executed under sudo(8) growisofs refuses to start. This is done for the following reason. Naturally growisofs has to access the data set to be recorded to DVD media, either indirectly by letting mkisofs generate ISO9660 layout on-the-fly or directly if a pre-mastered image is to be recorded. Being executed under sudo(8), growisofs effectively grants sudoers read access to any file in the file system. The situation is intensified by the fact that growisofs parses MKISOFS environment variable in order to determine alternative path to mkisofs executable image. This means that being executed under sudo(8), growisofs effectively grants sudoers right to execute program of their choice with elevated privileges. If you for any reason still find the above acceptable and are willing to take the consequences, then consider running following wrapper script under sudo(8) in place for real growisofs binary.
#!/bin/ksh
unset SUDO_COMMAND
export MKISOFS=/path/to/trusted/mkisofs
exec growisofs "$@"
But note that the recommended alternative to the above "workaround" is actually to install growisofs set-root-uid, in which case it will drop privileges prior accessing data or executing mkisofs in order to preclude unauthorized access to the data.
|
What growisofs is doing here is looking for the SUDO_COMMAND environment variable, and aborting if the variable is found. The reason sudo su - works is because su - clears the environment.
Rather than having to get a full shell, you can do:
sudo env -i growisofs
This will wipe the environment, just like su -. The only difference is that su - will also put the basic variables (set in /etc/profile and such) back, whereas env -i won't (it leaves a completely empty environment).
A more precise solution would be:
sudo env -u SUDO_COMMAND growisofs
This will preserve the environment except for SUDO_COMMAND.
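To convince yourself that env -u strips just that one variable, a small sketch using a dummy value rather than a real sudo session:

```shell
# Set SUDO_COMMAND by hand (as sudo would), then remove it with
# env -u before running the child command:
SUDO_COMMAND=/usr/bin/growisofs \
    env -u SUDO_COMMAND sh -c 'echo "${SUDO_COMMAND:-unset}"'
# prints: unset
```

The rest of the environment passes through untouched, which is exactly why this is more precise than env -i.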
| How can I run growisofs via sudo? |
1,340,490,908,000 |
I am trying to set default permissions on my directory structure using acl. I would like to have following default permissions for directories and for files respectively:
drwx--x---
-rw-r-----
but when I set default permissions for group to x only:
setfacl -R -d -m g::x my_dir
then newly created directories have my desired permissions, but newly created files have -rw------- instead of -rw-r-----. In other words, I am trying to remove r permission from directories, while preserving r permission on files.
How can I achieve this ?
|
Linux/Solaris ACLs don't support this. You can't set different default ACLs for files and directories.
Having directories that can be traversed but whose content cannot be listed (executable but not readable) is rarely useful. The fact that is works at all is a bit of a historical accident. Yes, it can occasionally be useful — but do you really need it? (You may want to ask this as a separate question.)
If you really need directories and files with different permissions, here are a few possibilities you can consider:
Have your application change ownership of the files that it creates instead of relying on intrinsic filesystem properties.
Make everything private by default (setfacl -d -m group:mygroup:X) and use one of the suggestions in Group+rx permission only in directories using ACL?:
Expose group-public files through bind mounts rather than directly.
Watch the tree with inotify and run setfacl on new regular files.
| setting default permissions with setfacl |
1,340,490,908,000 |
I'm trying to connect my Velleman K8055 board via USB to my PC.
For this I have the udev rule
SUBSYSTEM !="usb_device", ACTION !="add", GOTO="velleman_rules_end"
ATTRS{idVendor}=="10cf", ATTRS{idProduct}=="5500", MODE="0660", GROUP="k8055", SYMLINK+="k8055_0"
ATTRS{idVendor}=="10cf", ATTRS{idProduct}=="5501", MODE="0660", GROUP="k8055", SYMLINK+="k8055_1"
ATTRS{idVendor}=="10cf", ATTRS{idProduct}=="5502", MODE="0660", GROUP="k8055", SYMLINK+="k8055_2"
ATTRS{idVendor}=="10cf", ATTRS{idProduct}=="5503", MODE="0660", GROUP="k8055", SYMLINK+="k8055_3"
LABEL="velleman_rules_end"
from jeremyz's k8055 github repo.
After plugging the board in, I even get the k8055_0 symlink, but its ownership is root:root.
But I want users from the group k8055 to be able to access this link (which is not possible with root:root permissions).
|
GROUP and MODE do have an effect. They affect the device node, not the symbolic link.
Linux doesn't support permissions on symbolic links. All symbolic links are world-readable and cannot be written to (only overwritten by a new link). So it doesn't matter that the symbolic link belongs to root: other users can access it anyway.
Since the device node has the group and permissions you specify, you are getting the desired access control. Users in the k8055 group can access the device (via the symlink or directly); users outside that group can see where the symbolic link points to but then cannot access the device.
| udev GROUP and MODE assignments on symbolic link have no effect |
1,340,490,908,000 |
Well, I'm just too green at Linux, but I'm stuck on a thing that I should know, and I don't.
My file has the following permission bits sets:
-r-xr-xr-x
is owned by root (but that should not matter, since -x is set even for any user). It is not writable, and it resides on a CD-ROM (a virtual ISO mounted as a CD-ROM), which sounds OK, but I can't execute it:
It says "Permission Denied"
What am I missing? The mount itself has execute permission, so it should execute; why doesn't it?
EDIT
I solved the issue, but not my doubt, since explicitly running bash ./autorun.sh works. (I need a root account anyway for what's inside, but it works.)
|
The most likely explanation is Patrick's: the filesystem is mounted with the noexec option, so the execute permission bits on all files are ignored, and you cannot directly execute any program residing on this filesystem. Note that the noexec mount option is implied by the user option in /etc/fstab (supposedly for security reasons, even though unlike the nodev and nosuid options, noexec does not in fact provide any security). If you use user and want to have executable files, use user,exec.
It's also possible that the shebang line of the script points to a file that exists but isn't executable — in that case, the error message confusingly refers to the script even though the error is with the interpreter. However it's unlikely that the shebang would point to a wrong existing file (if the error was “not found”, a dangling shebang would be more plausible).
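To check whether the filesystem a file lives on is mounted noexec, you can look up its mount options; this sketch reads /proc/mounts directly (replace / with your CD-ROM's mount point, e.g. /media/cdrom):

```shell
# Print the mount options (field 4 of /proc/mounts) for the
# filesystem mounted on /. If "noexec" appears in the output,
# execute permission bits on that filesystem are ignored.
awk '$2 == "/" { print $4 }' /proc/mounts
```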
| Can't execute a file with execute permission bit set [duplicate] |
1,340,490,908,000 |
Let's say you open a file on which you have write permission.
Meanwhile you change permissions and remove write permission while you still have the file open in some editor.
What will happen if you edit and save it?
|
The permissions of a file are checked when the file is opened. Changing the permissions doesn't affect what processes that already have the file open can do with it. This is used sometimes with processes that start with additional privileges, open a file, then drop those additional privileges: they can still access the file but may not be able to reopen it.
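You can watch this happen from the shell: open a file descriptor, revoke all permissions, and the already-open descriptor keeps working (scratch file):

```shell
f=$(mktemp)
echo 'secret data' > "$f"

exec 3< "$f"       # open the file for reading on fd 3
chmod 000 "$f"     # nobody may open it again...

cat <&3            # ...but the existing descriptor still reads it
# prints: secret data
exec 3<&-          # close fd 3
rm -f "$f"
```

Any attempt to open the file anew after the chmod would fail (for a non-root user), which is the "may not be able to reopen it" case.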
However editors typically do not keep a file open. When an editor opens a document, what happens under the hood is that the editor loads the file contents in memory and closes the file. When you save the document, the editor opens the file and writes the new content.
Editors can follow one of two strategies when saving a file. They can create a new file, then move it into place. Alternatively, they can open the existing file and overwrite the old contents. Overwriting has the advantage that the file's permission and ownership do not change, and that it works even in a read-only directory. The major disadvantage of overwriting is that if saving fails midway (editor crash, system crash, disk full, …), you are left with a truncated document. Different editors choose different strategies; the good one do write-to-new-then-move if possible, and overwrite only in a read-only directory (after making a backup somewhere else).
If the editor follows the new-then-move strategy, the permissions on the file don't matter: the editor will create a new file, and it only needs write permission on the directory for that. There are two exceptions: if the directory has the sticky bit, changing the ownership of the file (but not the permission) may make it impossible for the process to move the new file into place. Another exception is on systems that support delete permission through ACLs (such as OSX): revoking the delete permission from the file may make the move impossible.
If the editor follows the overwrite strategy, revoking write permission will make saving impossible. (However, some editors that overwrite by default may fall back to new-then-move.)
In Vim, you can force the overwrite strategy by setting the backupcopy option to yes; see also why inode value changes when we edit in "vi" editor?. In Emacs, you can force the overwrite strategy by setting the backup-by-copying variable to t.
| File permissions and saving |
1,340,490,908,000 |
I have set up a new ec2 ubuntu box and installed apache and php5. By default you log onto the box with the ubuntu user who has sudo privileges.
I believe apache runs as root and php5 runs as www-data.
I think that I want all of the files in /var/www to be chown'ed to www-data:www-data, and folders set as 755 and files as 644 - unless there is an issue with that.
Things run fine like this, but my issue is that when I rsync files from my laptop to the server it changes the ownership on them (and adds new files) as ubuntu:admin
I have been through the rsync manual and searched Google, and I've seen a syntax like:
rsync -avzcO (source) (destination) --owner=www-data --group=www-data
However, it appears that --owner and --group don't take an argument but instead are meant to force the remote files to have the same owner and group as on the local file system, if rsync is being run as a superuser.
So, I haven't found a solution with rsync to set the remote user and group of files during rsync.
What am I doing wrong?
Thoughts:
Maybe my files in /var/www should just be owned by ubuntu:admin ?
Maybe I have the rsync syntax wrong
I guess I could rsync as the www-data user but that doesn't seem like a good idea
I could use suPHP like I would do on a shared host, but that seems like quite a bit of trouble
|
You don't want them owned by www-data. Apache initially runs as root and then drops privileges to www-data. You don't want your web content writable by the user that owns the apache processes. This creates a security vulnerability.
In the event that Apache is breached, ideally the www-data user will have no access to any of the system. This is the most secure configuration. If the web content is owned by www-data and Apache is breached, then the attacker can overwrite any of your web content.
Your web content should be owned by a normal user (this excludes nobody, www-data and root). Only things that need to be writable by Apache should be owned by www-data.
| How should I rsync files in /var/www if I want them to be owned by www-data? |
1,340,490,908,000 |
I know that root can do anything, but is there a way to at least alert root that they are about to delete a folder that perhaps shouldn't be deleted?
We keep a work directory in our /tmp folder, and from time to time an administrator will come along and purge the /tmp folder with roughly sudo rm -rf *. Is there a way to give something like a prompt or alert that they are about to delete a specific folder? Something along the lines of:
The folder XXXXX is protected from deletion - do you really want to delete this folder? (y/n)
I know, the best solution is to move this folder elsewhere (the /tmp folder is called temp for a reason after all!), but that has other problems. Hence my question.
Asking this question makes me wonder, is it bad practice to actually blindly delete all the contents of the /tmp folder? Isn't a better approach to only delete files that are more than a certain age?
|
Moving your work folder is the solution. You're right that it is a little bit dangerous to wipe out files in /tmp blindly — normally, it's done either on system boot/shutdown or by using an access-time based deletion program (like tmpwatch). But by its definition, the space is volatile and it's not reasonable to expect otherwise.
If you really want to prevent this, though, SE Linux could do it. You would give the directory a particular label, and configure it so that root doesn't normally have the unlink permission for objects with that label. This seems like significantly more work than just moving the directory to a better shared location, though — and since it causes an SE Linux audit message rather than the nice "are you sure y/n" prompt you're imagining, it seems like it'll eventually cause frustrating confusion.
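A sketch of the age-based approach mentioned in the question (this is roughly what tmpwatch automates; the 7-day threshold is an arbitrary choice):

```shell
# Delete only regular files untouched for more than 7 days,
# instead of blindly running rm -rf *.
mkdir -p scratch
touch -t 202001010000 scratch/stale   # fake an old modification time
touch scratch/fresh
find scratch -type f -mtime +7 -print   # add -delete to actually remove
```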
| Give a warning when something is about to be deleted (by root) |
1,340,490,908,000 |
If I have a folder with e.g. the following permissions (0700):
drwx------ 2 patrick patrick 4096 Sep 12 00:00 test
If I understand it correctly, this means only the folders owner can:
read: list files
write: create, delete and rename files (but this only works if execute is set as well)
execute: access file contents and meta-information
If I now add a file to the folder with the permission (0777):
-rwxrwxrwx 1 patrick patrick 0 Sep 12 00:00 testfile
There is no way another user can access the file, even though the file itself can be read, written and executed by all users. Am I correct, or am I missing some finer points of the Unix permission system?
If I'm correct, is there any reason (except keeping things nice and tidy) to still worry about correct file permissions in that folder? Are there cases (e.g. accesses via relative paths?) where other users may still access the said file?
|
A file could be hard-linked elsewhere (and on Linux, bind-mounted elsewhere), and that will bypass the permissions of the containing original directory:
# mkdir dir; echo yup > dir/in
# ln dir/in out
# touch out-b; mount --bind dir/in out-b
# chmod 700 dir
# su user
user$ cat out out-b
yup
yup
user$ cat dir/in
cat: dir/in: Permission denied
user$ ls dir/in
ls: cannot access 'dir/in': Permission denied
user$ ls dir
ls: cannot open directory 'dir': Permission denied
As such a hard-link or bind-mount can happen inadvertently (as part of something else, etc) it's never a good idea to give more permissions than necessary.
drwx------ 2 patrick patrick 4096 Sep 12 00:00 test
If I understand it correctly, this means only the folders owner can:
execute: access file contents and meta-information
No, the x (execute/search) permission on a directory doesn't mean that. It means that only the owner can access the files via the entries of this directory. Acessing the files' contents is controlled by their own permissions.
| Can a user access files if he does not have access to the parent directory? |
1,340,490,908,000 |
I got an exercise to create a directory called Projekte, and I'm supposed to give the groups Auftrag and Support the permissions r and w, but the others only r.
I just realised that this is impossible. What can I do?
Thanks for any help
|
You would not be able to do this by creating a new group, as you need some users to have read and some to have read/write. That is, unless you need the users from Auftrag and Support to have read/write and everyone else to have read permission, in which case you could create a group containing all the users from Auftrag and Support, and set group write and world read.
Alternatively, and assuming your filessytem supports them, you could use extended ACLs:
https://wiki.archlinux.org/index.php/Access_Control_Lists
For example:
# setfacl -m "g:Auftrag:rw" /file/path
# setfacl -m "g:Support:rw" /file/path
| How can I give permissions of files to multiple groups? |
1,340,490,908,000 |
I have 5 internal drives and 3 external.
I would like the internal drive's files to be owned by my default user hutber
I have tried to chown them with sudo as seen here:
It looks successful, however
I am unsure if its possible to change, but here is my mount options for the drive
And just an overview of all drivers. With the drive in question on display.
|
Is this drive mapped in /etc/fstab? If so, you can modify the options there: the "nosuid" option needs to be removed, as others have pointed out, and you can also add "gid=ownerGroupID,uid=ownerID" to the options list so that files on the drive are explicitly mapped to a particular uid/gid and become usable to you.
| hdd mount option to grant ownership of files on drive |
1,340,490,908,000 |
Because files are created by default with permission 666, and the umask (in permission-bit form) is subtracted bit-wise from this permission, can we do something to give execute permission without using permission characters (r,w,x)?
I am referring to using a bit-wise mask, e.g.
umask 002
not setting permission characters such as
umask u+x
umask u=rwx
|
This is not possible. umask only removes permissions but never adds them. Thus you get execute permission only if the creating open() call requests it, which is the case when a compiler creates an executable file.
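A quick demonstration of that: even with the most permissive mask, a plain touch/open() creates the file with at most 666:

```shell
umask 000          # take nothing away
touch plain_file   # open() asks for 0666, so we get 0666 -- never 0777
umask 022          # a common default, for comparison
touch masked_file  # 0666 & ~0022 = 0644
ls -l plain_file masked_file
```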
| umask XXX (permission bits) to give execute permission to files |
1,340,490,908,000 |
Is it possible to give two users different permissions on the same directory? I want to use it for ftp: userFull gets R+W and userLim gets only Read, depending on who logs on. I'm getting stuck on the ownership versus group rights... (I use CentOS+Directadmin and Proftpd)
So the following is what I want if it's possible at all:
/home/myDir - userFull - read & write
/home/myDir - userLim - read only
|
Yes, by using ACL - Access Control Lists. (If not available, install it via yum install acl.)
Before you start setting ACL, you initially need to enable ACL support for filesystem, for doing it manually use:
mount -o remount,acl $filesystem
But you need to enter this command every time you boot the system. To avoid this, you can enable acl when the filesystem is mounted, by using fstab.
Eg. /etc/fstab (for your home directory), if you are using ext4 file system:
LABEL=/home /home ext4 defaults,acl 1 2
For more information go to redhat documentation link.
By setfacl you can assign permission like::
setfacl -m u:Full:rwx /home/myDir
setfacl -m u:Lim:rx /home/myDir
After that by getfacl, you can view the permissions:
getfacl /home/myDir
For more info, please visit CentOS documentation page.
| Two users with different permissions on same directory |
1,340,490,908,000 |
I need to create a hierarchy of UNIX groups. Something like below:
A
|\
| \
B c
|\
D e
|\
f g
...where A, B and D are UNIX groups and c, e, f and g are UNIX accounts that are members of those specific groups. I have googled a lot, but it seems that this is not possible.
Currently, we have the following:
Group A has members c.
Group B has members e.
Group D has members f,g.
UPDATE:
@John's post made me realize that I needed to re-frame my requirements to remove the ambiguity.
What I require is:
Limit access to a directory only to members of group B (so B is the group owner of that folder). As group D is a sub-group of B, members of D would be members of group B and have access to that directory as well.
But members of Group B needs to have the same rights as members of group A. (So if group A is a directory group owner then automatically group B is the directory group-owner).
By the way, this is a real-world problem where I have full control over group B and its members; and limited or no control over other groups and their members. So I cannot create new groups and give membership to members from group A or D.
|
With normal unix permissions, you can't do this.
With ACLs you can (or should be able to).
You need to be using a filesystem that supports ACLs. Most modern linux filesystems do.
The basic command is setfacl
In your example, if group B owns directory /B you would add access rights for group D as follows:
setfacl -m group:B:rwx,group:D:rwx /B
This is only the most basic example but might get the idea across. This does require careful and explicit setting of access control, but can do much more than basic unix permissions. It isn't nearly as capable as full AD group policy and the like, though.
Here's some documentation of ACLs in general
| How do I create a hierarchy of UNIX groups as below? |
1,340,490,908,000 |
Is it normal that, when I am logged in as root and then use su to become another user, I can't access that user's screen sessions?
In this case, screen complains about it not having permissions on /dev/pts/x.
I assume that it can't control the terminal which was opened as root in the way it needs: am I right?
|
In general, you can change the ownership of /dev/pts/x to the user that you su to, as root, before you actually su. That way, the user that you su to will have access to attach the screen to your origin terminal.
# chown someuser /dev/pts/x
# su - someuser
$ screen -dr somescreen
If this is something you want to make more smooth, you could look into how ownership is set on terminal devices, so that you could, say, make them group read/writable, and have a small group where users have access. This can have severe security implications, so do take care if you're exploring that path!
| GNU Screen does not work when su'ing from root to a normal user |
1,340,490,908,000 |
Consider the following line in a /etc/sudoers file:
username ALL=(ALL) ALL, !/usr/bin/passwd
As far as I know, this allows user username to run anything with sudo except /usr/bin/passwd. But apparently the user is still able to get a root shell using sudo -s/sudo -i and do whatever he likes. Have I understood this correctly? What would be a better configuration if I indeed want to disallow the user from changing any password as root?
|
Without using additional security levels like SELinux, you cannot do this. It would be a bad idea anyway, since there are plenty of other ways to lock other users out once one can get (nearly full) root rights via sudo.
See https://serverfault.com/questions/36759/editing-sudoers-file-to-restrict-a-users-commands
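A safer pattern than blacklisting is to whitelist only the commands the user actually needs; a sudoers sketch (the listed commands are just examples):

```
username ALL=(root) /usr/sbin/service apache2 restart, /usr/bin/apt-get update
```

Everything not listed — including passwd, sudo -s and sudo -i — is then denied.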
| exclude commands from user's sudo permissions |
1,340,490,908,000 |
I've got the basics of primary and secondary groups down, but still have some questions I can't seem to find solid answers to:
Can many users belong to the same primary group?
Can one user's primary group be a secondary group of another user?
|
Yes, they can.
$ id foo
uid=1002(foo) gid=1002(foo) groups=1002(foo)
$ id bar
uid=1003(bar) gid=1003(bar) groups=1003(bar)
Changing the primary group of user foo to bar which is the primary group for user bar:
$ sudo usermod -g bar foo
Now:
$ id foo
uid=1002(foo) gid=1003(bar) groups=1003(bar)
$ id bar
uid=1003(bar) gid=1003(bar) groups=1003(bar)
Yes, it can be.
$ id foo
uid=1002(foo) gid=1002(foo) groups=1002(foo)
$ id bar
uid=1003(bar) gid=1003(bar) groups=1003(bar)
Adding user bar to group foo which is the primary group of user foo:
$ sudo usermod -a -G foo bar
Now:
$ id foo
uid=1002(foo) gid=1002(foo) groups=1002(foo)
$ id bar
uid=1003(bar) gid=1003(bar) groups=1003(bar),1002(foo)
| Primary and secondary groups |
1,340,490,908,000 |
To start with, I apologize if this is a painfully obvious/trivial issue, I'm still learning the ins and outs of linux/unix.
I work with a few servers that require access via ssh and private key to log into. So, the command is something like this:
ssh -i /path/to/key.pem [email protected]
I've created a bash script that lets me use my own command, access, with a basic switch statement on the arguments that follow to control which server I log into. For example, access server1 would issue the appropriate ssh command to log into server1.
The Problem
The ssh call just hangs and I'm left with an empty terminal that won't accept SIGINT (Ctrl + C), and I must quit the terminal and open it up again to even use it.
As far as I can tell, this might be a permissions thing for the private key. Its permissions are currently 600. Changing it to 644 gives me an error that the permissions are too open and exits the ssh attempt. Any advice?
|
There is ssh_config, made for exactly this, where you can specify host aliases and keys without resorting to one-off bash scripts. It is stored in your ~/.ssh/config in this format:
Host host1
Hostname 000.000.000.000
User user
IdentityFile /path/to/key.pem
and then you can simply call
ssh host1
to get to 000.000.000.000
If you really want to be effective and have even shorter shortcuts, a bash alias is more suitable than a bash script.
alias access="ssh -i /path/to/key.pem [email protected]"
If you really want to use a bash script, you need to force ssh to allocate a TTY on the remote server using the -tt option:
ssh -tti /path/to/key.pem [email protected]
For more tips, you can browse through the manual page for ssh and ssh_config.
| Unable to SSH Using a Private Key Through Bash Script |
1,340,490,908,000 |
What permissions are needed? I mean the minimum permission needed to actually cd into a directory.
It's just a general question I have. If someone has execute permission, is that enough to be able to access a directory via the cd command? Thanks!
|
x - execute - permission needed to cd into the directory.
r - read - permission needed to do a ls inside the directory.
w - write - permissions needed to create a new file (or sub-directory) inside the directory.
| To cd into a directory [duplicate] |
1,340,490,908,000 |
I installed RHEL 5.1 on a virtual machine. I would like to install VMware Tools, but I keep getting an error. I am performing the installation via the tar procedure. I get the following error:
bash: ./VMware-install.pl: /usr/bin/perl: bad interpreter: Permission denied
The ./VMware-install.pl and /usr/bin/perl files have full rwx permissions, but I keep getting the same error.
Does anyone know how to fix this?
|
Simplify your situation: This is not a VMware install problem, it's a "Why doesn't the system recognize /usr/bin/perl?" problem. Once that's fixed, you should be able to install VMware... at least, you've overcome the first hurdle.
So, try: /usr/bin/perl -e 'print "Hello, world\n";' and see what you get. This will be your first clue into the underlying problem.
If it works, try /usr/bin/perl ./VMware-install.pl
If it doesn't work, it's something weird and will probably take more investigation, such as what filesystem perl is located on and such.
But I'd start at zooming in on /usr/bin/perl.
| bash bad interpreter and permission denied |
1,340,490,908,000 |
Context:
I'm trying to convert a .deb package to .rpm using alien, I use this command:
$ alien -r foo.deb
but it complains thusly:
> Warning: alien is not running as root!
> Warning: Ownerships of files in the generated packages will probably be wrong.
I think all alien needs root for is to guarantee that it has permission to create foo.deb's root-owned files for the foo.rpm output, but I'm not sure.
Questions:
Do packages always need some root-owned files?
Why do they need root-owned files at all?
If I'm wrong, why does alien need root?
|
Rpm and deb packages contain archives of the files to install (cpio archives in the case of rpm, tar in the case of deb). These archives contain metadata about each file, including its name, modification date, owning user and group, and permissions. When a package is installed, each file ends up having the ownership described in the archive (unless a post-installation script modifies it).
Most files installed by packages are owned by root, because no user is authorized to modify them.
Alien converts packages by unpacking the archive and repacking it (as well as other things like converting pre/post-installation scripts). For example, to convert an rpm into a deb, alien calls cpio to extract the archive to a temporary location, then tar to build a new archive. If the unpacking is not done with root permissions, then all the temporary files will be owned by the user who is doing the unpacking, so when the files are packed into the new archive, they will end up being owned by that user.
Alien doesn't actually need to run as root since it doesn't need to modify anything in the system. Fakeroot runs alien (or any other command) in an environment where that command receives fake information about filesystem operations, pretending that operations that normally require root (such as changing file ownership) have succeeded. This way, the unpacking is done as root and sets correct file ownership (as far as alien and its subprocesses are concerned) and thus the repacking creates the intended archive.
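A small illustration of the ownership metadata involved: GNU tar can record root ownership in an archive without running as root, which is the same effect fakeroot gives alien during repacking (the file name here is arbitrary):

```shell
echo hello > payload
# Record root:root in the archive's metadata, no root privileges needed
tar -cf pkg.tar --owner=root --group=root payload
tar -tvf pkg.tar   # the listing shows root/root as the recorded owner
```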
| Why does file ownership matter within an RPM or DEB package? |
1,340,490,908,000 |
As shown by the following code:
ll
total 136
-rwxr-xr-x 1 kaiyin kaiyin 19067 May 9 2013 dbmeister.py
-rwxr-xr-x 1 kaiyin kaiyin 1617 Jul 29 2011 locuszoom
-rwxr-xr-x 1 kaiyin kaiyin 112546 May 9 2013 locuszoom.R
./locuszoom
-bash: ./locuszoom: Permission denied
locuszoom is executable globally, but still can't be executed. The files are on a harddisk mounted at /media/data1.
|
The harddisk needs to be remounted so that the exec mount option is included.
excerpt from mount man page
FILESYSTEM INDEPENDENT MOUNT OPTIONS
....
exec Permit execution of binaries.
You can do this 1 of 2 ways.
Examples
Via the command line.
$ mount -o remount,exec /media/data1
Or in your /etc/fstab.
# <file system> <dir> <type> <options> <dump> <pass>
/dev/sdb1 /media/data1 ext4 rw,exec,noauto 0 0
| File executable by all, yet still cannot be executed? |
1,379,574,688,000 |
I have this file structure:
> APPLICATION 1
>> CONTROLLER (@JuniorProgrammers)
>> MODELS (symlink to SHARED/MODELS)
>> VIEWS (@Designers)
> APPLICATION 2
>> CONTROLLER (@JuniorProgrammers)
>> MODELS (symlink to SHARED/MODELS)
>> VIEWS (@Designers)
> SHARED
>> MODELS (@SeniorProgrammers)
I need PHP to be able to read Folder 1.1's contents, but programmers who FTP into Folder 1 must not be able to read the symlink (they CAN see the symlink, but not FOLLOW it, i.e. no reads or writes through it).
The @ is the groups of users that have read/write access of each layer.
|
Symlinks themselves have 777 because, in Unix, file security is judged on a per-file/inode basis. If it's the same data they're operating on, it should have the same security conditions, regardless of the name you gave the system to open it.
[root@hypervisor test]# ls -l
total 0
lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab
[root@hypervisor test]# chmod o-rwx symTest
[root@hypervisor test]# ls -l
total 0
lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab
[root@hypervisor test]# :-(
Since permissions are set on the inode, it won't work with hard links even:
[root@hypervisor test]# echo "Don't Test Me, Bro" > testing123
[root@hypervisor test]# ls -l
total 4
lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab
-rw-r--r--. 1 root root 19 Jun 8 16:06 testing123
[root@hypervisor test]# ln testing123 newHardLink
[root@hypervisor test]# ls -l
total 8
-rw-r--r--. 2 root root 19 Jun 8 16:06 newHardLink
lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab
-rw-r--r--. 2 root root 19 Jun 8 16:06 testing123
[root@hypervisor test]# chmod 770 testing123
[root@hypervisor test]# chmod 700 newHardLink
[root@hypervisor test]# ls -lh
total 8.0K
-rwx------. 2 root root 19 Jun 8 16:06 newHardLink
lrwxrwxrwx. 1 root root 10 Jun 8 16:01 symTest -> /etc/fstab
-rwx------. 2 root root 19 Jun 8 16:06 testing123
A symlink isn't an inode (the inode is what actually stores the data you're wanting to secure), so in the Unix model it would only complicate things to have two different sets of permissions protecting the same data.
It sounds like this is an attempt to give different groups of people different levels of access to the same file. If that's the case, you're actually supposed to use POSIX ACLs (via setfacl and getfacl) to give appropriate permissions on the target of the symlink.
EDIT:
To elaborate on the direction you're probably wanting to go in, it's something like:
# setfacl -m u:apache:r-- "Folder 2.1"
# setfacl -m g:groupOfProgrammers:--- "Folder 2.1"
# setfacl -m g:groupOfProgrammers:r-x "Folder 1"
The above gives the apache user (substitute with whatever user your apache/nginx/whatever is running as) read-only access to the target of the symlink, and gives groupOfProgrammers read access to the directory the symlink is in (so that groupOfProgrammers can get a complete directory listing there), but turns all permission bits off for the same target of the symlink.
| How Do I Block Read Access to a Symbolic Link? |
1,379,574,688,000 |
"Not working" might be slightly inaccurate, since it still seems to mount and umount things properly, but it's inaccessible to users other than the root user, which defeats the purpose entirely.
It's still installed on the system, and works fine from the root account, but my regular user accounts don't have sufficient privileges to use it. I'm not aware of having changed any permissions relating to mounting, and purging and re-installing pmount does nothing (both with apt-get and with aptitude).
Any idea what I kicked over by accident?
|
I assume your users are not in the correct group (plugdev), from man pmount:
Important note for Debian: The permission to execute pmount
is restricted to members of the system group plugdev. Please add all
desktop users who shall be able to use pmount to this group by
executing
adduser user plugdev
(as root).
Don't forget to either logout after you added the user to the group or use sg plugdev to switch to the new group.
| pmount not working on Debian wheezy |
1,379,574,688,000 |
[nathanb@ka /x/sim/nathanb/nbsim1] ls -al ,nvram
-rw-r--r-- 1 root root 2097152 Jul 5 2011 ,nvram
[nathanb@ka /x/sim/nathanb/nbsim1] sudo chmod a+w ,nvram
chmod: changing permissions of `,nvram': Operation not permitted
The volume is mounted rw, obviously, since I can modify other stuff. But even if I su as root, I can't chmod this file.
[root@ka /x/sim/nathanb/nbsim1] chmod +w ,nvram
chmod: changing permissions of `,nvram': Operation not permitted
I did an strace on the chmod, and I see this:
stat64(",nvram", {st_mode=S_IFREG|0644, st_size=2097152, ...}) = 0
chmod(",nvram", 0666) = -1 EPERM (Operation not permitted)
Here's the output of stat
[root@ka /x/sim/nathanb/nbsim1] stat ,nvram
Name: ,nvram Size: 2097152 Perms: 0644/-rw-r--r--
Type: Regular File Blocks: 4120 User: 0/root
Inode: 205777 IOsize: 32768 Group: 0/root
Access Time: Sat Jul 23 09:27:31 2011 Links: 1
Modify Time: Tue Jul 5 18:36:35 2011 FS device: 28
Change Time: Sat Jul 23 09:30:35 2011 Maj/Min: 0/0
And just to prove that there's not weird uid stuff going on:
[root@ka /x/sim/nathanb/nbsim1] id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),503(mailman),21(slocate),30(gopher),500(http),14(uucp),188(xelus),16(radadmin)
Any ideas?
|
You tagged this under /nfs.. If that file is on an NFS filesystem, you might need to export it on the server with no_root_squash to allow root on the clients to change the permissions on the file system.
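If NFS does turn out to be the culprit, the option lives on the server side; a sketch of an /etc/exports line (the path and client name here are guesses based on the question, and the other options are assumptions):

```
/x/sim  ka(rw,sync,no_root_squash)
```

followed by exportfs -ra on the server to re-read the exports.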
| Why can I not chmod a file? |
1,379,574,688,000 |
How can I mount a Windows partition so that the files within it don't have execution permission? I mount a Windows partition using:
sudo mount /dev/sda3 win
win is a folder in my home dir.
This of course works. But files in the mounted partition are given execute permission, or to be specific, 777.
How can I mount the partition so that files are given 666 or some other permission?
|
man mount has a section "Mount options for ntfs" (assuming your file system is NTFS and not FAT) where it says,
uid=value, gid=value and umask=value
Set the file permission on the filesystem. The umask value is given in octal. By default, the files are owned by root and not readable by somebody else.
sudo mount /dev/sda3 win/ -o fmask=111
will mount the ntfs file system with all files having
rw-rw-rw- permissions.
Directories will still be executable, but this is needed to allow you to cd into them.
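To make this persistent, the same masks can go into /etc/fstab; a sketch (the mount point is an example; dmask=0022 keeps directories traversable while fmask=0111 strips execute from files):

```
/dev/sda3  /home/user/win  ntfs  fmask=0111,dmask=0022  0  0
```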
| Mounting a Windows partition without giving execute permission |
1,379,574,688,000 |
I am trying to parental control myself by restricting web access via OpenDNS. The OpenDNS account password will be handed to someone trustworthy. Now, I want to put some restriction on the /etc/resolv.conf, perhaps using a key or password, but not the root password. Also I do not want to compromise the accessibility by the kernel. Is this possible?
|
No, not the way you're trying to do it. Root has access to every file on the system. You can make it harder to modify the file (note: it has to be publicly readable), but if you have root access, you can't prevent yourself from modifying it.
There is no password protection feature for files. Even if there was one, being root, you could remove it. (You can encrypt a file, but that makes it unreadable.)
One way to make it harder to modify the file is to set the immutable attribute: chattr +i /etc/resolv.conf. Then the only way to modify it will involve running chattr -i /etc/resolv.conf. (Or going to a lower level and modifying the disk content — with a very high risk of erasing your data if you do it wrong.)
If you want to put a difficult-to-bypass filter on your web browsing, do it in a separate router box. Let someone else configure it and don't let them give you the administrator password.
| Password protecting a system file? (e.g. /etc/resolv.conf) |
1,379,574,688,000 |
On AIX, how can I search several directories and those below them for files that do not have permissions of exactly 755?
So I want to search /path/to/, /path/to/mydir, /path/to/mydir/andthisoneto, etc., but not /path.
|
If I understand correctly, you want this (note that -not and -mindepth are GNU extensions; AIX's find uses the portable ! negation and may lack -mindepth):
find /path -mindepth 2 -type f ! -perm 0755
Or maybe just this, if my understanding is off:
find /path/to -type f ! -perm 0755
| Search for file permissions other than 755 |
1,379,574,688,000 |
I added a new group: ircuser and a new user: ircuser
In visudo I placed this line:
myuser localhost=(ircuser) NOPASSWD: /usr/bin/irssi
Created ircuser directory, where config files, caches, etc should be saved:
drwxrwx--- 2 ircuser ircuser 4096 Mar 2 10:28 ircuser
When issuing the command:
sudo -Hu ircuser /usr/bin/irssi
or
sudo -u ircuser /usr/bin/irssi
The program can't save the config file in the ircuser directory.
** ERROR **: Couldn't create /home/myuser/_web/ircuser/.irssi directory
aborting...
Aborted
But, it is being run as ircuser:
ps auxw | grep irssi
ircuser 11962 0.0 0.0 23684 2504 pts/6 S+ 11:18 0:00 /usr/bin/irssi
So, albeit irssi is run by ircuser it can't write to a directory owned by the same user?
What do I need to change to allow it saving there?
|
The problem probably lies in not having eXecute permission on one of the parent directories leading up to ircuser's home directory. In order for any user to traverse a directory (not necessarily look into it), that user must have execute permission on it, either via a group or via other. If you have these permissions:
drwxrwx--- 2 myuser myuser 4096 Mar 2 10:28 /home/myuser
And ircuser is not part of the myuser group, then ircuser can't access any file underneath even if it has permissions for that directory. If you try this instead:
drwxrwx--x 2 myuser myuser 4096 Mar 2 10:28 /home/myuser
Then ircuser can't browse myuser's home directory, but it can potentially access some file beneath it such as /home/myuser/_web/ircuser
UPDATE: A few more details I left out from the above description. Permissions are evaluated as you traverse the file system. It's possible to be able to access a folder starting from the current directory that you can't access starting from the root directory. If you change your working directory to somewhere else, you will lose your handle on the current directory and lose access to the files in it. If you use something like sudo su - ircuser, su will switch to the home directory of ircuser before dropping root privileges. At that point, you have a valid handle for ircuser's home directory because it's the current working directory. If you start irssi, it will be running in ircuser's home directory as ircuser. If you try to access .irssi, that will work because you have eXecute permission on the current directory. If you have to traverse a directory where you are lacking eXecute permission, it will fail. For example, opening up the file /home/myuser/_web/ircuser/.irssi, or even starting from the current directory and using the relative path ../../_web/ircuser/.irssi, will fail because it requires traversing /home/myuser, where you have no eXecute permission.
| Run sudo as another non-root user and save in this user's home directory |
1,379,574,688,000 |
Usually, when we run
ls -l
we can see something like -rwx------, where the owner has the full set of permissions.
And usually the owner of the system files is the root. Are there any system files where the root doesn't have all permissions, e.g. only read? If they exist, why do they have so strict permission policy even for the root?
|
Files that aren't meant to be executed don't have the x permission even for root so he doesn't accidentally execute something. Files that root should think twice before overwriting lack the w permission for root. Root can override this without changing the file's permissions, but most programs prompt before doing so. I believe read permission isn't checked at all for root.
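The execute-bit point can be checked directly: on Linux, execve() requires at least one x bit to be set, so even root cannot run a mode-644 script by path (a quick sketch, assuming a Linux system):

```shell
printf '#!/bin/sh\necho ran\n' > noexec.sh
chmod 644 noexec.sh        # readable, but no x bit anywhere
if ./noexec.sh 2>/dev/null; then
    echo "unexpectedly executed"
else
    echo "execution refused"   # happens even when run as root
fi
```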
| System files with strict permissions |
1,379,574,688,000 |
What does the a in chattr +ia <filename> do? And why would you add the a in combination with the i? Note: I know the i is for immutable.
|
The letters `acdeijstuADST' select the new attributes for the files:
append only (a), compressed (c), no dump (d), extent format (e),
immutable (i), data journalling (j), secure deletion (s), no
tail-merging (t), undeletable (u), no atime updates (A), synchronous
directory updates (D), synchronous updates (S), and top of directory
hierarchy (T).
from the manpage for chattr
Files with this flag will fail to be opened for writing. This also blocks certain potentially destructive system calls such as truncate() or unlink().
$ touch foo
$ chattr +a foo
$ python
> file("foo", "w") #attempt to open for writing
[Errno 1] Operation not permitted: 'foo'
> quit()
$ truncate foo --size 0
truncate: cannot open `foo' for writing: Operation not permitted
$ echo "Appending works fine." >> foo
$ cat foo
Appending works fine.
$ rm foo
rm: cannot remove `foo': Operation not permitted
$ chattr -a foo
$ rm foo
This option is designed for log files.
| what does the "a" in chattr +ia do? |
1,379,574,688,000 |
In Linux the following two commands work as expected:
mkdir -m 555 new_directory
mkdir -p a/b/c
But the following does not work as expected:
mkdir -m 555 -p a/b/c
the three directories are created, but only the last one receives the 555 permission. The a and b directories have the default permissions.
So how can I accomplish the goal described in the title? Is it possible?
BTW, I selected 555 as a random case; it fails with 666 and 777 too.
|
If you expressly list the directories, parent first, you can achieve your stated aim of creating the directories in one command:
mkdir -m 555 -p a a/b a/b/c
With shells with support for csh-style brace expansion such as bash you can simplify this a little at the expense of readability:
mkdir -m 555 -p a{,/b{,/c}}
Notice, however, that for permissions 555 both commands will fail if it actually needs to create any of the parent directories: such directories are created with permissions that do not allow writing, and therefore next level directories cannot be created.
Finally, a bash shell script that will also give you the functionality to create the multiple directories in one command as requested, by wrapping the complexity in a function. This one will attempt to apply the permissions to newly created directories from the bottom up, so it will be possible to end up with directories that have no write permission:
mkdirs()
{
local dirs=() modes=() dir old
# Grab arguments
[[ "$1" == '-m' ]] && modes=('-m' "$2") && shift 2
dir=$1
# Identify missing directories
while [[ "$dir" != "$old" ]]
do
[[ ! -d "$dir" ]] && dirs+=("$dir")
old="$dir"
dir="${dir%/*}"
done
# Create necessary directories and maybe fix up permissions
for dir in "${dirs[@]}"
do
mkdir -p "${modes[@]}" "$dir" || return 1
[[ -n "${modes[1]}" ]] && chmod "${modes[1]}" "$dir"
done
}
Example
mkdirs -m 555 a/b/c
ls -ld a a/b a/b/c
dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a
dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a/b
dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a/b/c
As always, this function can be put standalone into an executable script that's somewhere in your $PATH:
#!/bin/bash
mkdirs()
{
...as above...
}
mkdirs "$@"
| How to create many nested directories and set the permissions on all of them in one command? |