| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,636,902,417,000 |
I created directory test, created file 1.txt in test, wrote 'Before' in this file.
Then I went
cd ..
and used the command:
tar -cvzf ./test.tgz ./test
Then I entered the test dir again. Opened the 1.txt file again. Changed content to "After". I saved the file and changed chmod to read only by executing:
chmod -w ./1.txt
So for now my 1.txt is read only. Then I go up
cd ..
and extract the test.tgz archive.
tar -xvzf ./test.tgz
Then I go again to test dir, do
cat 1.txt
and get "Before".
It is not logical that this happens, since the file was set to be read-only.
Why does it happen?
|
Tar didn't overwrite the existing read-only file, it removed it and then created a new file with the same name. This is a consequence of the way -x works; it replaces existing versions of a file by design in order to accommodate the old incremental backup method of appending files to an existing archive. A tar archive might have multiple versions of a file in it; only the last one will appear on disk after extraction is completed. A side effect of this is that tar also removes existing copies of files even if they appear only once in the archive.
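The behaviour is easy to reproduce in a scratch directory (a sketch of the steps from the question; GNU tar assumed):

```shell
# tar -x unlinks the existing read-only file and creates a fresh one,
# so no write ever happens through the old file's permissions.
cd "$(mktemp -d)"
mkdir test
printf 'Before' > test/1.txt
tar -czf test.tgz test
printf 'After' > test/1.txt
chmod -w test/1.txt
tar -xzf test.tgz       # succeeds despite the read-only file
cat test/1.txt          # → Before
```

The unlink needs only write permission on the containing directory, which is why the file's own read-only bit is irrelevant.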
| Tar overwrites read only files |
1,636,902,417,000 |
I have two tar files which are basically backups of my /data folder on android. The problem is the permissions and ownership of the files inside latest back-up is messed up. However the permissions in older tar is perfect.
I want a way to read permissions from old.tar and copy over the changes to latest.tar.
I tried extracting files to my own system using -p and --same-owner flag but the ownership changes to root instead of system because my computer doesn't have a user or group called system but my android phone does. I thought I'd extract the files, and write a script which uses stat and grep to read permissions and ownership and sets the permissions and ownerships of the files in latest.tar. But it doesn't seem to work that way. Can anyone give me some help on this issue.
|
I'm not sure Tar can do that. It should be possible, though, to untar both backups (to different directories), then replay the metadata with something along the lines of
cd /mnt/oldbackup
getfacl -R . > /tmp/oldperms.acl
cd /mnt/newbackup
setfacl --restore=/tmp/oldperms.acl
as the output of getfacl can be fed back to setfacl --restore according to the manpage. getfacl -R records the owner, group and mode of each file, so both trees must share the same relative layout, and restoring ownership requires running setfacl as root.
| Copy just permissions and ownership from one tar file to another |
1,636,902,417,000 |
Can the output of a list command contain permissions? For example, when executing find / -name filename 2> /dev/null and I get results, is it possible to have my results include the file permissions? Thanks in advance!
|
Check out this command:
$ find . -name '*.sh' -printf 'depth=%d/sym perm=%M/perm=%m/size=%s/user=%u/group=%g/name=%p/type=%Y\n'
depth=1/sym perm=-rwxr-xr-x/perm=755/size=1678/user=root/group=root/name=./yadpanned.sh/type=f
depth=1/sym perm=-rwxr-xr-x/perm=755/size=154/user=root/group=root/name=./remove.sh/type=f
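For a quick check, the %m (octal mode) and %p (path) directives alone are often enough; a minimal sketch (GNU find assumed, file name is just an example):

```shell
# Print the octal mode next to each matching path.
cd "$(mktemp -d)"
touch deploy.sh
chmod 755 deploy.sh
find . -name '*.sh' -printf 'perm=%m name=%p\n'   # → perm=755 name=./deploy.sh
```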
| List permissions with find command [duplicate] |
1,636,902,417,000 |
I'd like to cover the bases on a vulnerability which tries to download itself and save the result in a newly created directory inside the /tmp/ directory.
To be on the safe side, I wish to make it impossible to create folders inside /tmp/. Or if that is not feasible, I would like to prevent creating folders in just one specific directory inside /tmp.
|
Use ls -l -d /tmp/ and you will see that the permissions are set to drwxrwxrwt, i.e. d: a directory; rwx: read, write and execute permissions for owner, group and others (in that order); t: the sticky bit, meaning only file owners are allowed to delete files (not the group, despite the permissions). Let's leave the sticky bit aside for the moment and mention that a directory needs to be executable to be accessible.
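These bits can also be read with stat (GNU coreutils), which avoids parsing ls output:

```shell
# %A prints the symbolic mode, %a the octal one; on a typical Linux
# system /tmp is mode 1777, where the leading 1 is the sticky bit.
stat -c '%A %a' /tmp
```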
Now if you want to restrict write permission for others (the owner and group are root), use chmod o-w /tmp/ (as root, i.e. using sudo).
HOWEVER: /tmp/ is rather important for many processes that need temporary data, so I would suggest not restricting permissions on this folder at all!
Since you are heading for a specific folder the simplest would be to manually create that folder (as root) and then restrict permission for it:
sudo mkdir /tmp/badfolder
sudo chmod -R o-w /tmp/badfolder/
Side note on chmod: -R means recursive; u,g,o stand for user, group, other; + and - add or remove the r,w,x (read, write, execute) permissions. E.g. to allow group members to write to a file, use chmod g+w file.
Update:
In case the process is running as root, you also need to set the 'i' attribute. From man chattr
A file with the `i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file and no data can be written to the file. Only the superuser or a process possessing the
CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
This would also apply if the folder was not owned by root. Simply use
chattr +i /tmp/badfolder
Use chattr -i /tmp/badfolder for removing it and -R for doing either recursively.
| Preventing the creation of a specific directory |
1,636,902,417,000 |
Trying to install tor-browser on my Linux Mint 17 system, I screwed up my sudo privileges.
I wanted to run the command:
sudo chown $USER -Rv /usr/bin/tor-browser/
but instead I typed
sudo chown $USER -Rv /usr/bin/ tor-browser/
and then the ownership and permissions went out the door.
Now I get a message every time I want to use sudo:
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
|
First restart the pc, press SHIFT key while Ubuntu is booting.
This will bring you up the boot menu.
Go to Advanced Options.
Select your OS version in (recovery mode), and press Enter Key.
example : Ubuntu 14.04 (recovery mode)
It will bring you up another screen. Now select “Drop to root shell prompt” and press Enter.
It will load a command line at the bottom of the screen.
Now run each of the following commands.
# mount -o remount,rw /
# mount -a
# chown root:root /usr/bin/sudo
# chmod 4755 /usr/bin/sudo
# init 6
This will fix the sudo error above.
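The 4755 in the chmod step combines the setuid bit (4) with rwxr-xr-x (755); you can verify the effect on any scratch file:

```shell
# The setuid bit shows up as an 's' in the owner's execute slot.
cd "$(mktemp -d)"
touch fakesudo
chmod 4755 fakesudo
stat -c '%A %a' fakesudo   # → -rwsr-xr-x 4755
```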
| Restore sudo privileges after "sudo chown $USER -Rv /usr/bin" |
1,636,902,417,000 |
After having installed the MySQL database on openSUSE I realized that for all files in /usr/bin the owner was changed to the "mysql" user of the "mysql" group. Maybe there was some mistake of mine. The worst problem was with the /usr/bin/sudo command, which obviously did not work, but I've taken back the ownership to root (having logged to root) and it is OK now.
Should I change owner of all files in /usr/bin to root or may this cause some malfunctioning of other programs? Should they also have the "Set UID" option marked in the Privileges tab as sudo does?
|
Yes, all files under /usr should be owned by root, except that files under /usr/local may or may not be owned by root depending on site policies. It's normal for root to own files that only a system administrator is supposed to modify.
There are a few files that absolutely need to be owned by root or else your system won't work properly. These are setuid root executables, which run as root no matter who invoked them. Common setuid root binaries include su and sudo (programs to run another program as a different user, after authentication), sudoedit (a companion to sudo to edit files rather than run an arbitrary program), and programs to modify user accounts (passwd, chsh, chfn).
In addition, a number of programs need to run with additional group privileges, and need to be owned by the appropriate group (and by the root user) and have the setgid bit set.
You can, and should, restore proper permissions from the package database. If you attempt to repair manually, you're bound to miss something and leave some hard-to-diagnose bugs lying around. Run the following commands:
rpm -qa | xargs rpm --setugids --setperms
| Should /usr owner be root? |
1,636,902,417,000 |
Trying to understand why ssh-agent has sgid bit and found this post ssh-agent has sgid
I have another question, why the group ownership of ssh-agent is nobody not root? What is the reason behind it? Will it still work if group ownership is root?
|
If it were setgid root then the agent would run as group root, which likely has broader permissions than the user it started as. That could be a security risk; at the least, running something as root unnecessarily is a red flag (even the group) and requires extra attentiveness.
Setting the group ownership to nobody, which is a group that shouldn't have any meaningful permissions or files attached, means that ssh-agent doesn't get any more rights than the user started with. As the linked question says, the reason it's setgid in the first place is to prevent ptracing the program, rather than because it actually needs different permissions. In the discussion thread linked from the other question, one of the developers notes:
it would seem that the group is of no consequence. It's the fact that the binary is setgid anygroup that's important.
nobody is a handy group to use when you only want a side effect of setgid, not the behaviour itself.
I imagine it would still work with setgid root. I just tried that here, and it didn't complain at all and seemed to work in cursory testing. That said, I can't think of any actual reason to change it to that - everyone seems to be better off with it running as group nobody than group root.
I don't suggest changing the permissions of files installed by your package manager, in any case, because they tend to get upset about any modifications to the files they control.
| Why ssh-agent group ownership is not root |
1,636,902,417,000 |
If I'm not a sudoer, is it possible to view the list of sudoers?
Does /etc/group show this information?
|
No, you're unable to find out who has access to sudo rights if you yourself do not have such access directly. You could possibly "back into it" by seeing which users, if any, are members of the Unix group "wheel".
Example
This shows that user "saml" is a member of the wheel group.
$ getent group wheel
wheel:x:10:saml
Being a member of the "wheel" group typically allows for full sudo rights through this rule that's often in a system's sudoers file, /etc/sudoers.
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
But there are no guarantees that the administrator of a given system decided to hand out sudo rights in this manner. They could just as easily have done it like so:
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
saml ALL=(ALL) ALL
In which case, without sudo rights you could never gain access to a system's /etc/sudoers file to see this entry.
What about /etc/group?
This file only shows users that have a 2nd, 3rd, etc. group associated with them. Often user accounts have only a single group associated, in which case you'd need a slightly different command to find out a given user's primary group:
$ getent passwd saml
saml:x:1000:1000:saml:/home/saml:/bin/bash
Here user "saml" has the primary group 1000. This GID equates to this group:
$ getent group 1000
saml:x:1000:saml
But none of this actually tells you anything as to which user accounts have sudo rights.
Why the big secret?
This is all done to prevent what's known as a side channel attack. Leaking information such as which accounts have privileges would hand important information to a would-be attacker who managed to gain access to any account on a given system. So it's often best to mask this info from any non-privileged account.
| View list of sudoers with no sudo privileges |
1,636,902,417,000 |
I am on slackware64 v14.0 and I have file that belongs to me:
-rwxrwxr-x+ 1 nass shares 137934 Mar 7 00:06 myfile.csv*
I am a member of the "shares" group.
The folder that contains myfile looks like this
drwxrwsr-x+ 12 nass shares 4096 Mar 12 04:54 winmx/
I now want to give ownership of this file to another user of this pc.
The other user is also a member of the shares group.
However,
chown otheruser myfile.csv
does not do the trick. I get a
chown: changing ownership of 'myfile.csv': Operation not permitted
I had recently asked a similar question about gid, but this is not the same problem.
How can I solve this ?
|
You (as a regular user) can't "give away" your files. Root, however, can do it.
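A minimal illustration (nobody is just an example target user; the chown only succeeds when run as root):

```shell
# Unprivileged chown fails because giving files away requires
# the CAP_CHOWN capability, which root has and regular users lack.
cd "$(mktemp -d)"
touch myfile.csv
if chown nobody myfile.csv 2>/dev/null; then
    stat -c '%U' myfile.csv    # as root: nobody
else
    echo "chown: not permitted (run as root)"
fi
```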
| can not chown a file from my $user to another $user [duplicate] |
1,636,902,417,000 |
The executable files that gcc creates have execution permissions
-rwxrwxr-x
which are different than the permissions that the source file has.
-rw-rw-r--
How does gcc set these permissions ?
|
Four things intervene to determine the permission of a file.
1. When an application creates a file, it specifies a set of initial permissions. These initial permissions are passed as an argument of the system call that creates the file (open for regular files, mkdir for directories, etc.).
2. The permissions are masked with the umask, which is an attribute of the running process. The umask indicates permission bits that are removed from the permissions specified by the application. For example, a umask of 022 removes the group-write and other-write permissions. A umask of 007 leaves the group-write permission but makes the file completely off-limits to others.
3. The permissions may be modified further by access control lists. I won't discuss these further in this post.
4. The application may call chmod explicitly to change the permissions to whatever it wants. The user who owns a file can set its permissions freely.
Some popular choices of permission sets for step 1 are:
666 (i.e. read and write for everybody) for a regular file.
600 (i.e. read and write, only for the owner) for a regular file that must be remain private (e.g. an email, or a temporary file).
777 (i.e. read, write and execute for everybody) for a directory, or for an executable regular file.
It's the umask that causes files not to be world-writable even though applications can and usually do include the others-write permission in the file creation permissions.
In the case of gcc, the output file is first created with permissions 666 (masked by the umask), then later chmod'ed to make it executable. Gcc could create an executable directly, but doesn't: it only makes the file executable when it's finished writing it, so that you don't risk starting to execute the program while it's incomplete.
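Steps 1 and 2 are easy to observe from the shell, since touch creates files with requested mode 666 and the umask trims it:

```shell
cd "$(mktemp -d)"
umask 022
touch source.c            # requested 666, masked to 644
stat -c '%a' source.c     # → 644
umask 077
touch private.c           # requested 666, masked to 600
stat -c '%a' private.c    # → 600
```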
| How does gcc handle file permissions? |
1,636,902,417,000 |
Much to my immediate chagrin, removing write permission from a file does not seem to protect it from rm -f:
touch foo
chmod a-w foo
rm -f foo
How can I protect a file from accidental deletion when rm will be called with the -f flag? It looks like chattr +i foo would work, but it requires root on my system (is that intended?), so I'm looking for a non-root solution.
|
To prevent files from being added to or deleted from a directory, you can remove the write permission on the directory.
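A sketch of the idea (as a regular user; root ignores permission bits entirely):

```shell
# rm needs write permission on the DIRECTORY, not on the file itself,
# because deleting a file means removing its entry from the directory.
cd "$(mktemp -d)"
mkdir vault
touch vault/foo
chmod a-w vault
rm -f vault/foo 2>/dev/null || echo "delete blocked"
chmod u+w vault            # restore so the scratch dir can be cleaned up
```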
| Protect files from `rm -f` |
1,636,902,417,000 |
I'm trying to read two specific files, namely status and smaps_rollup for all the processes under /proc. All process directories have dr-xr-xr-x permission and I'm able to enter every one of these directories.
For all the processes the permissions for both of these files are -r--r--r--.
Here's the bizarre behavior. Let's say I try to read both the files for PID 1. I can read status file, but not smaps_rollup. See below:
$ cd /proc/1
$ ls -l status smaps_rollup
-r--r--r-- 1 root root 0 Apr 5 18:34 smaps_rollup
-r--r--r-- 1 root root 0 Mar 21 12:18 status
$ grep "Swap:" status
VmSwap: 1072 kB
$ grep "Swap:" smaps_rollup
grep: smaps_rollup: Permission denied
$ cat smaps_rollup
cat: smaps_rollup: Permission denied
I looked for related questions and came across some of them[1][2][3][4]. None of them has the same problem. The solutions to these other problems had to do with fixing the missing executable permission on the directory. That's not the case here.
Here is mount info for proc:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
I'm running Arch Linux with kernel 6.2.7-arch1-1 provided by Arch Linux.
I'm looking for a correct explanation of this bizarre behavior. And is there something I can do to fix this problem, besides using sudo as a workaround?
Can't read file even though permissions are correct
Why I can't read file?
File read permissions for 'others' not working
How can I check read permission of /proc// files?
|
This doesn’t appear to be explicitly documented; it’s documented transitively in man 5 proc, through the documentation for /proc/[pid]/maps:
Permission to access this file is governed by a ptrace
access mode PTRACE_MODE_READ_FSCREDS check; see ptrace(2).
The smaps documentation says
The first of these lines shows the same information as is
displayed for the mapping in /proc/[pid]/maps.
Since the latter is sensitive, it is protected in a similar fashion. smaps_rollup is less sensitive and could be opened up, but the patch to do so hasn't gone anywhere as far as I can tell.
It should be possible to use capabilities to work around this, but I haven’t tried.
Many files, directories and links in /proc aren’t necessarily as accessible as indicated by their permissions; in particular, many files require the same privileges as maps and smaps, PTRACE_MODE_READ_FSCREDS. These requirements are detailed in man 5 proc. This means that the visible permissions should only be considered an upper bound on permissions; in particular, they are useful to determine whether a /proc entry is intended for updating kernel settings (it’s writable) rather than only viewing them.
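One way to see the ptrace check in action without sudo: a process always passes the check against itself, so /proc/self is readable even when the same file under another user's PID is not:

```shell
# Reading your own smaps_rollup always works; reading another user's
# PID (e.g. 1) fails the PTRACE_MODE_READ_FSCREDS check unless you're root.
grep -m1 '^Rss' /proc/self/smaps_rollup
```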
| "Others" cannot read 'smaps_rollup' file with -r--r--r-- permission under /proc/PID/. Why? |
1,636,902,417,000 |
I am debugging a program and not quite sure why I can not drop privileges.
I have root permissions via sudo and I can call setgid/setuid, but the operation is not supported.
Basic code to reproduce (golang):
package main
import (
"fmt"
"os"
"strconv"
"syscall"
)
func main() {
if os.Getuid() != 0 {
fmt.Println("run as root")
os.Exit(1)
}
uid, err := strconv.Atoi(os.Getenv("SUDO_UID"))
check("", err)
gid, err := strconv.Atoi(os.Getenv("SUDO_GID"))
check("", err)
fmt.Printf("uid: %d, gid: %d\n", uid, gid)
check("gid", syscall.Setgid(gid))
check("uid", syscall.Setuid(uid))
}
func check(message string, err error) {
if err != nil {
fmt.Printf("%s: %s\n", message, err)
os.Exit(1)
}
}
Example output:
$ sudo ./drop-sudo
uid: 1000, gid: 1000
gid: operation not supported
System info:
$ uname -a
Linux user2460234 4.15.0-34-generic #37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
|
Your programming language simply does not support such things.
It's complex to do this on Linux because the raw setuid() and setgid() system calls affect only the calling thread, not the whole process; the C libraries (e.g. GNU and musl) hide this complexity by applying the change to every thread. It continues to be one of the known problems with threads on Linux.
The Go runtime does not replicate the mechanism of the C libraries. The implementation of those functions is not a real system call, and has not been since 2014. (Go 1.16 later changed syscall.Setuid and syscall.Setgid to apply to all threads on Linux.)
Further reading
Jonathan de Boyne Pollard (2010). The known problems with threads on Linux. Frequently Given Answers.
Michał Derkacz (2011-01-21). syscall: Setuid/Setgid doesn't apply to all threads on Linux. Go bug #1435
| Why I can not drop sudo root privileges? |
1,636,902,417,000 |
In the / directory, I use:
ls -alFd /tmp
to check the /tmp directory permissions and I get drwxrwxrwt.
I know rwxrwxrw means user, group, other permissions are read, write and execute.
But I don't know the meaning of the d and t. of the drwxrwxrwt, can someone explain it?
|
The d letter means it's a directory (a folder if you prefer that name).
The t letter means the sticky bit is set on the directory: within it, only a file's owner (and root) can delete or rename that file, even though the directory is world-writable.
You may want to take a look at this page if you want to know more about the sticky file permission.
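You can recreate the mode on a scratch directory to see how the t renders:

```shell
# 1777 = sticky bit (1) + rwxrwxrwx (777), the classic /tmp mode.
cd "$(mktemp -d)"
mkdir shared
chmod 1777 shared
stat -c '%A' shared    # → drwxrwxrwt
```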
| What's meaning of the `d` and `t.` of the `drwxrwxrwt.` in linux? [duplicate] |
1,636,902,417,000 |
This question is regarding samba file access.
I have created a folder A, and under folder A created two folders B and C. And also created three users A, B and C.
User A has access to all three folders but User B has only access to folder B and User C has only access to folder C.
Permission of B & C folders are:
drwxrwxr-x 3 a b 4096 May 10 16:22 b
drwxrwxr-x 3 a c 4096 May 10 16:43 c
Problem:
When user B creates any new file under folder B, it's permission becomes
drwxr-x--- 2 b b 4096 May 10 16:21 New Folder
whereas I want it to keep the owner, group and permission same as folder B for any newly created files.
|
Folder b and c are owned by user b and c.
A file created by a user will belong to that user.
You can use the user permission for b and c, and the group permissions for a.
If you set the SGID bit (g+s) on a folder, created files will get the group permission of that folder.
mkdir a
chown a:a a
chmod g+s a
mkdir a/b
chown b:a a/b
mkdir a/c
chown c:a a/c
(assuming all users are in a group of the same name.)
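A runnable sketch of the group-inheritance part (the chgrp needs root or membership in the target group; nogroup stands in for the real group here and may not exist on every system):

```shell
cd "$(mktemp -d)"
mkdir proj
chmod g+s proj                          # setgid: new files inherit the folder's group
chgrp nogroup proj 2>/dev/null || true  # give the folder a non-default group (may need root)
touch proj/report.txt
stat -c '%G' proj proj/report.txt       # both lines show the same group
```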
| Keep same file owner for newly created files |
1,636,902,417,000 |
I mounted a ISO image with Furius ISO Mount. I cd to the mounted directory and tried to copy a file with
sudo cp file /dir
but cp writes error message
cp: cannot stat `file': Permission denied
The permissions of file are -r--r--r--
sudo chmod 777 file writes
chmod: cannot access `file': Permission denied
Do you know where the problem could be?
|
It sounds to me like Furius ISO Mount could be responsible for this issue. I've seen similar problems with broken file system modules.
I usually mount ISOs via the loop device of the kernel. You can use it this way:
mount some.iso /mnt -o loop=/dev/loop0
| Permission denied (even as root) on a mounted ISO image with Furius ISO Mount |
1,636,902,417,000 |
I don't understand unix users, groups, permissions, etc. For example, things managed by the chmod, chgrp, usermod, groupadd, etc. commands. How do all these things work?
|
I'd start here: Filesystem Permissions
| Unix users, groups, and permissions |
1,636,902,417,000 |
Compiled a binary from the golang source, but it won't execute. I tried downloading the binary, which also didn't work. Permissions all seem to be right. Running the file from go for some reason works.
Output of ~/go$ go run src/github.com/exercism/cli/exercism/main.go:
NAME:
exercism - A command line tool to interact with http://exercism.io
USAGE:
main [global options] command [command options] [arguments...]
Output of ~/go/bin$ ./exercism:
bash: ./exercism: Permission denied
Output of ~/go/bin$ ls -al:
total 9932
drwxr-xr-x 2 joshua joshua 4096 Apr 28 12:17 .
drwxr-xr-x 5 joshua joshua 4096 Apr 28 12:17 ..
-rwxr-xr-x 1 joshua joshua 10159320 Apr 28 12:17 exercism
Output of ~/go/bin$ strace ./exercism:
execve("./exercism", ["./exercism"], [/* 42 vars */]) = -1 EACCES (Permission denied)
write(2, "strace: exec: Permission denied\n", 32strace: exec: Permission denied
) = 32
exit_group(1) = ?
+++ exited with 1 +++
|
Check that noexec is not in effect on the mount point in question. Or choose a better place to launch your script from.
$ mount | grep noexec
[ snip ]
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
$ cat > /dev/shm/some_script
#!/bin/sh
echo hi
$ chmod +x /dev/shm/some_script
$ /dev/shm/some_script
bash: /dev/shm/some_script: Permission denied
$ mv /dev/shm/some_script .
$ ./some_script
hi
noexec exists specifically to prevent security issues that come from having world-writable places storing executable files; you might put a file there, but someone else might rewrite it before you execute it, and now you're not executing the code you thought you were.
| Despite execution privilege, getting permission denied |
1,636,902,417,000 |
I have to run some tests on a server at the University. I have ssh access to the server from the desktop in my office. I want to launch a python script on the server that will run several tests during the weekend.
The desktop in the office will go on standby during the weekend and as such it is essential that the process continues to run on the server even when the SSH session gets terminated.
I know about nohup and screen and tmux, as described in questions like:
How to keep processes running after ending ssh session?
How can I close a terminal without killing the command running in it?
What am I doing right now is:
ssh username@server
tmux
python3 run_my_tests.py -> this script does a bunch of subprocess.check_output of an other script which itself launches some Java processes.
Tests run fine.
I use Ctrl+B, D and I detach the session.
When doing tmux attach I reobtain the tmux session which is still running fine, no errors whatsoever. I kept checking this for minutes and the tests run fine.
I close the ssh session
After this if I log in to the server via SSH, I do am able to reattach to the running tmux session, however what I see is something like:
Traceback (most recent call last):
File "run_my_examples.py", line 70, in <module>
File "run_my_examples.py", line 62, in run_cmd_aggr
File "run_my_examples.py", line 41, in run_cmd
File "/usr/lib64/python3.4/subprocess.py", line 537, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib64/python3.4/subprocess.py", line 858, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.4/subprocess.py", line 1456, in _execute_child
raise child_exception_type(errno_num, err_msg)
PermissionError: [Errno 13] Permission denied
I.e. the process that was spawning my running tests, right after the end of the SSH session, was completely unable to spawn other subprocesses. I have chmoded the permissions of all files involved and nothing changes.
I believe the servers use Kerberos for login/permissions, the server is Scientific Linux 7.2.
Could it be possible that the permissions of spawning new processes get removed when I log off from the ssh sessions? Is there something I can do about it? I have to launch several tests, with no idea how much time or space they will take...
The version of systemd is 219
The file system is AFS, using fs listacl <name> I can confirm that I do have permissions over the directories/files that are used by the script.
|
Thanks to Mark Plotnick I was able to identify and fix the issue.
The problem is the interaction between the AFS file system used by the server and Kerberos handling the authentication. The same issue was brought up in this question on SO.
Basically what is happening is that when I ssh into the server, Kerberos gives the authentication token to the session. This token is used also to access the AFS file system. When closing the SSH session this token gets destroyed and the processes running start to get permission denied errors when trying to access files on the AFS.
The way to fix this is to start a new window inside screen/tmux and launch the command:
kinit && aklog
After that you can detach from screen/tmux and close the ssh session safely.
The commands above create new Kerberos tokens and associate those with the screen/tmux session, in this way when the ssh connection is closed the initial tokens get revoked but since the subprocesses now use those you created they don't suffer permission denied errors.
To summarize:
ssh username@server
tmux
Launch the process you need to keep running
Create a new window with Ctrl+B, C
kinit && aklog
Detach from the session with Ctrl+B, D
Close ssh session
| Why do I get permission denied error when I log out of the SSH session? |
1,636,902,417,000 |
Assuming that you can't reach the internet nor reboot the machine, how can I recover from
chmod -x chmod
?
|
1 - Use a programming language that implements chmod
Ruby:
ruby -e 'require "fileutils"; FileUtils.chmod 0755, "/bin/chmod"'
Python:
python -c "import os; os.chmod('/bin/chmod', 0o755)"
Perl:
perl -e 'chmod 0755, "/bin/chmod"'
Node.js:
node -e 'require("fs").chmodSync("/bin/chmod", 0o755)'
C:
$ cat - > restore_chmod.c
#include <sys/types.h>
#include <sys/stat.h>
int main () {
chmod( "/bin/chmod", 0000755 );
}
^D
$ cc restore_chmod.c
$ ./a.out
2 - Create another executable with chmod
By creating an executable:
$ cat - > chmod.c
int main () { }
^D
$ cc chmod.c
$ cat /bin/chmod > a.out
By copying an executable:
$ cp cat new_chmod
$ cat chmod > new_chmod
3 - Launch BusyBox (it has chmod inside)
4 - Using Gnu Tar
Create an archive with specific permissions and use it to restore chmod:
$ tar --mode 0755 -cf chmod.tar /bin/chmod
$ tar xvf chmod.tar
Do the same thing but on the fly, not even bothering to create the file:
tar --mode 755 -cvf - chmod | tar xvf -
Fetch a known-good copy from another machine, creating the archive there and restoring it locally with the permissions preserved (othermachine is a placeholder):
$ ssh othermachine 'tar -C /bin -cf - chmod' | tar -xpf -
Another possibility would be to create the archive regularly and then edit it to alter the stored permissions.
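Spelled out end to end, the on-the-fly variant can be verified on a scratch file (GNU tar; -p makes extraction honour the stored mode rather than the umask):

```shell
# Restore the execute bit on a file without ever calling chmod.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho ok\n' > tool     # created without execute permission
tar --mode=0755 -cf - tool | tar -xpf -  # archive forces mode 0755, extraction replaces the file
./tool                                   # → ok
```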
5 - cpio
cpio allows you to manipulate archives; in the header that cpio -o writes for each file, the three bytes after the first 21 hold the low octal digits of the file permissions; if you edit those, you're good to go:
echo chmod |
cpio -o |
perl -pe 's/^(.{21}).../${1}755/' |
cpio -i -u
6 - Dynamic loaders
/lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod
(the loader's path varies by architecture and distribution, e.g. /lib/ld-linux.so.2 on 32-bit x86)
7 - /proc wizardry (untested)
Step by step:
Do something that forces the inode into cache (attrib, ls -@, etc.)
Check kcore for the VFS structures
Use sed or something similar to alter the execution bit without the kernel realising it
Run chmod +x chmod once
8 - Time Travel (git; yet untested)
First, let's make sure we don't get everything else in the way as well:
$ mkdir sandbox
$ mv chmod sandbox/
$ cd sandbox
Now let's create a repository and tag it to something we can go back to:
$ git init
$ git add chmod
$ git commit -m '1985'
And now for the time travel:
$ rm chmod
$ git update-index --chmod=+x chmod
$ git checkout-index -f -- chmod
There should be a bunch of other git-based solutions, but I should warn you that you may hit a git script that actually tries to use the system's chmod
9 - Fighting Fire with Fire
It would be great if we could fight an Operating System with another Operating System. Namely, if we were able to launch an Operating System inside the machine and have it have access to the outer file system. Unfortunately, pretty much every Operating System you launch is going to be in some kind of Docker, Container, Jail, etc. So, sadly, that is not possible.
Or is it?
Here is the Emacs solution:
Ctrl+x b > *scratch*
(set-file-modes "/bin/chmod" (string-to-number "0755" 8))
Ctrl+j
10 - Vim
The only problem with the Emacs solution is that I'm actually a Vim kind of guy. When I first delved into this topic Vim didn't have a way to do this, but in recent years someone made amends with the universe, which means we can now do this:
vim -c "call setfperm('chmod', 'rwxrwxrwx') | quit"
| How can I recover from a `chmod -x chmod`? [duplicate] |
1,636,902,417,000 |
I'm on Debian 8. While How to set default file permissions for all folders/files in a directory? is about permissions, I'd like something similar for ownership.
Whenever I login as root and add a file to a daemons config directory, the ownership of the newly created file is root:root. While this is OK for most situation, here it isn't. I'd like to have the ownership set to daemon:daemon automatically when I create a file somewhere under the config directory.
How do I accomplish that?
|
You can't.
You can use chmod to set the setgid bit on a directory (chmod g+s directory/) and that will cause all files created in the directory to be in the same group as the directory itself. But that only affects the group, not the owner.
You can also set your umask or ACLs on the directory to affect the default permissions of files created.
But you can't automatically set the owner of a file you (root) created to some other user. You have to do that with chown.
You're just going to have to get used to the chown, chgrp, and chmod commands.
| How to set default owner per directory? |
1,636,902,417,000 |
Suppose the owner/user doesn't have the write permission on a directory but he has it on a file under it. Can the file here be edited or not? If yes, is there any situation where the file cannot be edited?
|
Yes, the file can be edited.
As far as the directory is concerned, the file can not be edited if you remove the execute permission on the directory for the target (owner/group/others).
EDIT: If you want the owner to not be able to edit the file by changing the permission of the directory (assuming the same user owns the directory and file), then you can simply remove the execute permission on the directory for the owner. For example you can make the permission for the owner as rw- i.e. 6.
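A small sketch of the first point: appending to the file needs only write permission on the file itself plus execute (traverse) permission on the directory, not write permission on the directory:

```shell
d=$(mktemp -d)
echo 'original' > "$d/file"

chmod 555 "$d"              # directory: read+execute only, no write
echo 'edited' >> "$d/file"  # still works: only the file's own 'w' matters

# Removing execute on the directory is what makes the file unreachable
# for a non-root user:
#   chmod 444 "$d"; cat "$d/file"   -> Permission denied
chmod 755 "$d"              # restore for cleanup
```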
| Can a file be edited with the 'write' permission on it but not on its parent directory? |
1,636,902,417,000 |
I want to allow a non-root user to delete read-only subvolumes (snapshots).
Exactly what capabilities/rights do I need to grant so that he can remove his own read-only snapshots?
I've already mounted the btrfs with -o user_subvol_rm_allowed so users can remove read/write snapshots.
I need it to augment the otherwise brilliant SnapBtr.py, so non-root users can operate it.
|
A user can not delete readonly snapshots directly, but he can make them writeable first and then delete them. For this you need to use the btrfs property command:
btrfs property set -ts /path/to/snapshot ro false
If the user is the owner of the snapshot, this should make it writeable and therefore deletable.
| What privileges are needed to delete a read-only btrfs subvolume? |
1,636,902,417,000 |
A script of mine works fine when I run it, but fails when run by a different user, with an error of the form
chmod: changing permissions of `/A/B/C/D/E': Operation not permitted
(Here /A/B/C/D/E is a directory. FWIW, the script resides in /A/B/C/D.)
In case it matters, the permissions structure of the directory in question and all its ancestors is as follows:
drwxrwsrwx kjo11 proj1 /A/B/C/D/E/
drwxrwsrwx cwr8 proj1 /A/B/C/D/
drwxrwsr-x root proj1 /A/B/C/
drwxrwsr-x root proj1 /A/B/
drwxr-xr-x root root /A/
drwxr-sr-x root root /
(In this listing, kjo11 is my $USER name. As it happens, cwr8 is the $USER name of the user for which the script fails. In any case, we are both in the proj1 group.)
The output of uname is Linux.
I'm stumped. The only detail that catches my attention is that the error is Operation not permitted, as opposed to Permission denied, but I can't make anything more out of this.
What conditions can cause such errors on running chmod?
|
Only the owner of a file, or the superuser, may alter the permissions on a file. This is true, even if the user is a member of the group that owns the file and the permissions of the file and parent directory would suggest setting permissions should be possible.
You can control the permissions of files and directories at creation time, by using the umask facility of your shell:
$ umask 002
$ mkdir -p targetdir
$ ls -ld targetdir
...
drwxrwxr-x 2 dan wheel 2 19 Mar 15:13 targetdir
If you're doing this in a script, it's probably a good idea to save the original umask value so you can restore it upon successful creation of your directories.
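Saving and restoring the umask in a script could look like this (a sketch):

```shell
old_umask=$(umask)    # remember the caller's umask

umask 002             # make new files/dirs group-writable
parent=$(mktemp -d)
mkdir "$parent/targetdir"
stat -c '%a' "$parent/targetdir"    # -> 775 (i.e. 777 & ~002)

umask "$old_umask"    # restore the original umask when done
```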
| script failing with "chmod: ... Operation not permitted" |
1,636,902,417,000 |
Can someone explain why I get permission denied when running touch -m on this file even though it is group writable and I can write to the file fine?
~/test1-> id
uid=1000(plyons) gid=1000(plyons) groups=1000(plyons),4(adm),20(dialout),24(cdrom),46(plugdev),109(lpadmin),110(sambashare),111(admin),1002(webadmin)
~/test1-> ls -ld .; ls -l
drwxrwxr-x 2 plyons plyons 4096 Feb 14 21:20 .
total 4
-r--rw---- 1 www-data webadmin 24 Feb 14 21:29 foo
~/test1-> echo the file is writable >> foo
~/test1-> touch -m foo
touch: setting times of `foo': Operation not permitted
~/test1-> lsattr foo
-------------e- foo
~/test1-> newgrp - webadmin
~/test1-> id
uid=1000(plyons) gid=1002(webadmin) groups=1000(plyons),4(adm),20(dialout),24(cdrom),46(plugdev),109(lpadmin),110(sambashare),111(admin),1002(webadmin)
~/test1-> touch -m foo
touch: setting times of `foo': Operation not permitted
~/test1-> echo the file is writable >> foo
~/test1->
|
From man utime:
The utime() system call changes the access and modification times of the inode specified by filename to the actime and modtime fields of times respectively.

If times is NULL, then the access and modification times of the file are set to the current time.

Changing timestamps is permitted when: either the process has appropriate privileges, or the effective user ID equals the user ID of the file, or times is NULL and the process has write permission for the file.
So, to change only the modification time for the file (touch -m foo), you'd need to either be root, or the owner of the file.
Being able to write to the file only gives you permission to update both the modified and access times to the current time; you can not update either separately, nor set them to a different time.
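The owner case is easy to demonstrate (a sketch, run as the file's owner): setting the times to "now" and setting an explicit timestamp both succeed, and the explicit timestamp is exactly what a non-owner with mere write permission is denied.

```shell
f=$(mktemp)

touch -m "$f"                    # the "times is NULL" case: mtime = now

# Explicit timestamp: permitted only for the owner (or root)
touch -m -d '@946684800' "$f"    # 2000-01-01 00:00:00 UTC
stat -c '%Y' "$f"                # -> 946684800
```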
| cannot touch -m a writable file |
1,636,902,417,000 |
I know that this setfacl command grants permissions:
setfacl -m u:user:rwx myfolder
But is there one to take permissions away, such as:
setfacl -m u:user:-rwx myfolder
|
Yes, you can specify - in the relevant permission field to set an ACL with restricted permissions:
setfacl -m u:user:--- myfolder
This doesn’t work to restrict the permissions of the owner, which makes sense since the owner of a file can change the ACLs anyway. It does work to restrict the permissions of a non-owner user who would otherwise have access to the file (through group or “other” permissions).
| Can I take away permission from a specific user in the command setfacL? |
1,636,902,417,000 |
Is it possible to detect the files in a folder whose permissions have changed? I have read about the find command; it detects files by their last-modification date, but changing permissions does not change that date.
|
Check out the stat command; it shows three timestamps: when the file was last accessed, when it was last modified, and when its permissions (status) were last changed.
The one you're interested in is the change time; see the output below for an example file I have just chmod'ed:
prompt::11:26:45-> stat ideas.md
File: ‘ideas.md’
Size: 594 Blocks: 8 IO Block: 4096 regular file
Device: 27h/39d Inode: 117 Links: 1
Access: (0770/-rwxrwx---) Uid: ( 0/ root) Gid: ( 992/ vboxsf)
Context: system_u:object_r:vmblock_t:s0
Access: 2014-12-21 19:15:29.000000000 +0000
Modify: 2014-12-21 19:15:29.000000000 +0000
Change: 2014-12-22 11:26:45.000000000 +0000
Birth: -
Or, as @0xC0000022L says, you could use stat -c to show just the output you need:
prompt::11:32:46-> stat -c %z ideas.md
2014-12-22 11:26:51.000000000 +0000
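To scan a whole folder for such files, find can compare the change time (which chmod does update) against a reference file. A sketch:

```shell
d=$(mktemp -d)
touch "$d/a" "$d/b"

ref=$(mktemp)        # reference point: 'now'
sleep 1              # make the upcoming change strictly newer
chmod 600 "$d/a"     # updates a's ctime, but not its mtime

# -cnewer: files whose status (ctime) changed after ref was last modified
find "$d" -type f -cnewer "$ref"    # -> only $d/a
```

In practice the reference file would be touched once, and the find re-run periodically to spot permission changes since then.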
| Detect changes in permissions |
1,636,902,417,000 |
If I run chmod 770 ./folderName, then users who are not the owner or in the group owning ./folderName (i.e. users in the “others” category) cannot access ./folderName/folderB or ./folderName/fileC, even after running:
chmod 777 ./folderName/folderB
chmod 777 ./folderName/fileC
Right? Does that rule apply to all Linux distributions? Thank you.
|
That's correct. Removing execute permission will prevent access to the directory and all subdirectories, even if subdirectories are more permissive.
This will probably hold true for anything Unix-like (and it may even be a POSIX requirement).
| `chmod 770 folderName` restrics access to subdirectories & subfiles? |
1,636,902,417,000 |
I don't understand why the permissions do not change for the user when I run the chmod command with fakeroot.
Initially, the file has these permissions:
-rwxr-xr-x a.txt*
When I try to change the permission for the file using chmod it works fine:
chmod 111 a.txt
---x--x--x a.txt*
When I run it with fakeroot, it doesn't seem to work correctly. It sets the permissions for group and other correctly, but not for the user: read and write stay set, no matter what the first digit in the chmod command is.
fakeroot chmod 111 a.txt
-rwx--x--x a.txt*
Am I missing something?
|
Fakeroot doesn't carry out all the file metadata changes, that's the point: it only pretends to the program that runs under it. Fakeroot does not carry out changes that it can't do, such as changing the owner. It also does not carry out changes that would cause failures down the line. For example, the following code succeeds when run as root, because root can always open files regardless of permissions:
chmod 111 a.txt
cp a.txt b.txt
But when run as a non-root user, cp fails because it can't read a.txt. To avoid this, chmod under fakeroot does not remove permissions from the user.
Fakeroot does pretend to perform the change for the program it's running.
$ stat -c "Before: %A" a.txt; fakeroot sh -c 'chmod 111 a.txt; stat -c "In fakeroot: %A" a.txt'; stat -c "After: %A" a.txt
Before: -rwx--x--x
In fakeroot: ---x--x--x
After: -rwx--x--x
Generally speaking, file metadata changes done inside fakeroot aren't guaranteed to survive the fakeroot call. That's the point. Make a single fakeroot call that does both the metadata changes and whatever operations (such as packing an archive) you want to do with the changed metadata.
| Issue with changing permissions with fakeroot |
1,636,902,417,000 |
I've been assigned to lock down all /var/log files so that they cannot be read except by the root user. I've been stumped by the /var/log/boot.log file. It seems that after every boot the file, no matter what its previous permission state, gets set to 644 permissions.
I've gone through the exercise of changing the umask in a number of key /etc/init.d files and functions to no avail.
Anybody got any idea as to the specific program doing this and maybe how to get the perms on /var/log/boot.log to be 600?
|
Via a fgrep -r boot.log /usr, plymouth turns out to be to blame. The plymouth manual page is, uh, kinda lacking on CentOS 6, though a romp through the source code does show that there is a no_boot_log option, apparently settable by passing no-boot-log somewhere (assuming you're okay with no logs from plymouth). Ah! With more digging there is a world_readable flag that twiddles the mode used for the open(2) call, except this is set only as the third argument to
log_is_opened = ply_logger_open_file (session->logger, filename, true);
Sad trombone. Anyways, you'll probably be fiddling with the initrd image to customize this, or maybe filing bug reports with RedHat to a) write some damn docs so that less source code spelunking is required and b) offer an option somehow to configure that mode perhaps via kernel arg or something.
| What program specifically sets /var/log/boot.log to 644 perms in RHEL/Centos 6? |
1,636,902,417,000 |
I usually find the answers to all my Unix related problems already posted as questions and answers. However, this particular issue has had me stumped for the past hour so I thought I’d ask my first question on this site.
Problem
I have a development / staging server running CentOS 5.11.
Running locate as a regular user results in no output (not even an error message):
locate readdir
However, running the command as the superuser prints a list of valid results:
$ sudo locate readdir
/home/anthony/repos/php-src/TSRM/readdir.h
/home/anthony/repos/php-src/ext/standard/tests/dir/readdir_basic.phpt
... etc.
strace usually helps me debug any such issues and running strace locate readdir shows:
stat64("/var/lib/mlocate/mlocate.db", 0xbff65398) = -1 EACCES (Permission denied)
access("/", R_OK|X_OK) = -1 EACCES (Permission denied)
exit_group(1) = ?
Check permissions
I checked the ownership and permissions of the locate binary and its default database. As expected the command is setgid with slocate as the group owner while the database has the appropriate ownership and permissions.
$ ls -l /usr/bin/locate
-rwx--s--x 1 root slocate 22280 Sep 3 2009 /usr/bin/locate
$ sudo ls -l /var/lib/mlocate/mlocate.db
-rw-r----- 1 root slocate 78395703 May 8 04:02 /var/lib/mlocate/mlocate.db
$ sudo ls -ld /var/lib/mlocate/
drwxr-x--- 2 root slocate 4096 Sep 3 2009 /var/lib/mlocate/
There are also no unusual file attributes:
$ sudo lsattr /usr/bin/locate /var/lib/mlocate/mlocate.db
------------- /usr/bin/locate
------------- /var/lib/mlocate/mlocate.db
Compare with working system
Meanwhile, everything works as expected on the Production server. Running locate readdir as a regular (non-root) user returns a list of results as it should:
$ locate readdir
/usr/include/php/TSRM/readdir.h
/usr/lib/perl5/5.8.8/i386-linux-thread-multi/auto/POSIX/readdir.al
/usr/share/man/man2/readdir.2.gz
For comparison, I also ran this command through strace but I then got the same permission denied error as on the staging server. I was wondering how this could be until I read the manual page for sudo. Listed in the Bugs section:
Programs that use the setuid bit do not have effective user ID privileges while being traced.
So, unfortunately, I can’t use strace for debugging.
I compared the results of all the above commands between the Staging and Production servers and there’s no difference between them. Both systems have the mlocate-0.15-1.el5.2 RPM with no modifications to their files as shown by rpm -V mlocate.
Other considerations
I thought it might be related to the fact that on the problematic staging server, my login is authenticated using Winbind but I created a regular local user on the same box and I still have the same issue. There’s obviously something else that I’m missing but I simply don’t know what it is.
I suspect it is related to the setgid file permission, maybe PAM or possibly SELinux. I don’t know much about either PAM or SELinux: I’ve only ever looked at PAM when configuring Winbind authentication while SELinux was installed with the OS but I’ve never used it.
Note: the production server has been subject to far fewer modifications than the development server which has had some experimentation.
|
The problem was the permissions for / (the root directory) and the clue for finding that was this line from your strace output:
access("/", R_OK|X_OK) = -1 EACCES (Permission denied)
You were missing group read permission settings for /. But because you still had x (execute) permission, which allows you to traverse a directory, you could still access all of the files on the filesystem, which is why most everything continued working while those permissions were in effect. The only thing you were not allowed to do is list the contents of /. Most commands don't need to list /, they either use pathnames relative to the current directory or absolute pathnames that access specific well-known directories off the root (like /etc and /var).
For security reasons, locate, even though it has access to a complete inventory of filenames generated by a privileged user, insists on reporting only results that the calling user would be able to find by scanning the whole filesystem from the root. Since you couldn't list /, which makes scanning anything straight from the root a non-starter, locate would report nothing at all.
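A quick check for this condition on any box (a sketch; 755 is the conventional mode for the root directory):

```shell
# If group/other are missing the 'r' bit here, locate will go silent
# for ordinary users, exactly as the strace output above showed
ls -ld /
stat -c '%a' /

# The fix (as root) would be:
#   chmod 755 /
```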
| No output from locate command |
1,636,902,417,000 |
System file: ext4
I changed the owner of files to apache: with the command:
chown -R apache: wp.localhost
Then, I could not change the permissions of directories in wp.localhost nor the wp.localhost itself
I used the command chmod +w wp.localhost, for example, and I do not see any permission change on it.
I also changed the group of the folders with the command below, but that did not solve the problem.
chown -R apache:users wp.localhost
Commands and permissions before and after:
#ls -ld wp.localhost
drwxr-xr-x 6 apache users 4096 Mar 28 15:26 wp.localhost/
# chmod +w wp.localhost
# ls -ld wp.localhost
drwxr-xr-x 6 apache users 4096 Mar 28 15:26 wp.localhost/
|
If you want to grant global write permission on that directory, you have to do
chmod a+w wp.localhost [1]
This is because omitting the 'who is affected' letter (u, g, o or a) implies a, but won't set bits that are set in your current umask. So, for example, if your umask was 0022, the 'write' bit is set in the 'group' and 'other' positions, and chmod will ignore it if you don't specify a explicitly.
The chmod man page is explicit about this:
If none of these ['who is affected' letters] are given, the effect is
as if a were given, but bits that are set in the umask are not
affected.
[1] Think carefully before doing this!
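The difference is easy to reproduce (a sketch, assuming the common default umask of 022):

```shell
umask 022
d=$(mktemp -d)/dir
mkdir "$d"           # created with mode 755 under umask 022

chmod +w "$d"        # no 'who' letter: umask filters out group/other 'w'
stat -c '%a' "$d"    # -> still 755

chmod a+w "$d"       # explicit 'a' is not filtered by the umask
stat -c '%a' "$d"    # -> 777
```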
| chmod does not change permissions of certain directories |
1,636,902,417,000 |
I am trying to setup a password-less SSH configuration between two machines and I am having a problem. There are a ton of howtos out there that I have followed and have had no success. Here are the steps that I've taken
Generate the authentication keys on the client. (Pressed enter when prompted for a passphrase)
[root@box1:.ssh/$] ssh-keygen -t rsa
Copy the public key to the server.
[root@box1:.ssh/$] scp id_rsa.pub root@box2:.ssh/authorized_keys
Verified the authorized key was created successfully on the server
Executed the following command:
[root@box1:.ssh/$] ssh root@box2 ls
And I was still prompted for a password. I read a note on one howto that said "depending on the version of SSH that is running..." (although it did not specify which versions needed this), it might require:
The public key in .ssh/authorized_keys2
Permissions of .ssh to 700
Permissions of .ssh/authorized_keys2 to 640
I also followed those steps and had no success. I have verified that the home, root, and .ssh directories are not writable by group (according to https://unix.stackexchange.com/tags/ssh/info).
Anyone have any ideas what I'm missing?
Thanks
EDIT: I also copied the public key to the second box using the ssh-copy-id command and that generated the .ssh/authorized_keys file.
[root@box1:.ssh/$] ssh-copy-id -i id_rsa.pub root@box2
EDIT2: Including version information
// box1 (system keys were generated on)
Linux 2.6.34
OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 June 2010
// box2
Linux 2.6.33
Dropbear client v0.52
EDIT3: Debug output
[root@box1:.ssh/$] ssh -vvv root@box2 ls
OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to box2 [box2] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug3: Not a RSA1 key file /root/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /root/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: Remote protocol version 2.0, remote software version dropbear_0.52
debug1: no match: dropbear_0.52
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.5p1 Debian-6
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,3des-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes256-cbc,twofish256-cbc,twofish-cbc,twofish128-cbc,blowfish-cbc
debug2: kex_parse_kexinit: aes128-ctr,3des-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes256-cbc,twofish256-cbc,twofish-cbc,twofish128-cbc,blowfish-cbc
debug2: kex_parse_kexinit: hmac-sha1-96,hmac-sha1,hmac-md5
debug2: kex_parse_kexinit: hmac-sha1-96,hmac-sha1,hmac-md5
debug2: kex_parse_kexinit: zlib,[email protected],none
debug2: kex_parse_kexinit: zlib,[email protected],none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug2: dh_gen_key: priv key bits set: 132/256
debug2: bits set: 515/1024
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug3: check_host_in_hostfile: host 192.168.20.10 filename /root/.ssh/known_hosts
debug3: check_host_in_hostfile: host 192.168.20.10 filename /root/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 3
debug1: Host 'box2' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:3
debug2: bits set: 522/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /root/.ssh/id_rsa (0x54b1c340)
debug2: key: /root/.ssh/id_dsa ((nil))
debug1: Authentications that can continue: publickey,password
debug3: start over, passed a different list publickey,password
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /root/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /root/.ssh/id_dsa
debug3: no such identity: /root/.ssh/id_dsa
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
EDIT4: Another interesting development. Instead of generating the keys on box1 (running OpenSSH) and copying them to box2 (running dropbear) I did it in reverse:
[root@box2:.ssh/$] dropbearkey -t rsa -f id_rsa
[root@box2:.ssh/$] dropbearkey -y -f id_rsa | grep "^ssh-rsa" >> authorized_keys
[root@box2:.ssh/$] scp authorized_keys root@box1:.ssh/
And with that I am successfully able to issue commands password-less from box2 to box1 ONLY if I specify the ID file:
[root@box2:.ssh/$] ssh -i id_rsa root@box1 ls
Still unable to issue commands from box1 (OpenSSH) to box2 (dropbear).
|
I found the source of the problem. There was a vague message in /var/log/messages about strange ownership that tipped me off. So I checked, and the permissions of /root, /root/.ssh, and /root/.ssh/* were all correct (700), but the ownership was default.default. I'm not sure how that happened... but I ran:
[root@box1:.ssh/$] chown root.root /root
[root@box1:.ssh/$] chown root.root /root/.ssh
[root@box1:.ssh/$] chown root.root /root/.ssh/*
This changed the ownership to root, and passwordless login now works in both directions.
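For anyone with similar symptoms, the usual ~/.ssh checklist can be applied in one go. A sketch, shown against a scratch directory rather than a live home:

```shell
# Simulate a home with wrong modes, then apply the conventional fixes
home=$(mktemp -d)
mkdir "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 777 "$home/.ssh" "$home/.ssh/authorized_keys"   # deliberately wrong

chmod 700 "$home/.ssh"                       # sshd rejects group/world access
chmod 600 "$home/.ssh/authorized_keys"
chown -R "$(id -un):$(id -gn)" "$home/.ssh"  # ownership must match the user

stat -c '%a' "$home/.ssh" "$home/.ssh/authorized_keys"   # prints 700 then 600
```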
| How to setup password-less ssh with RSA keys |
1,636,902,417,000 |
On Mac using terminal and "chown" command I can set owner for a folder like this:
sudo chown -R _www somefolder
However this replaces me with _www.
I.e. I'm not in the list of owners anymore.
I then have to open folder properties in Finder, add myself as a second owner and set permissions using the GUI.
And this is what the ACL looks like:
Is there a way to add TWO owners using terminal?
In other words how to add a second owner to a folder using terminal?
Not necessarily chown.
PS: Just in case.. on the screenshots users "_www" and "Oleg (Я)" have permissions "Read and write".
|
Found the answer (type this in Terminal):
sudo chmod +a 'Oleg allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity' somefolder
Where 'Oleg' is a user name and 'somefolder' is a folder name in question.
The permissions inside single quotes after the 'allow' keyword are just copied from the output of ls -le
Now both users '_www' and 'Oleg' can read, write files and subdirectories, etc.
That was the intention.
Strictly speaking, yes: you cannot add a second "owner" in the POSIX-attributes sense, e.g. via chown.
However, on a Mac you can give owner-like permissions to numerous users via ACLs, as Philippos commented (thanks for the hint).
| How to add a second owner of a folder using terminal on Mac? |
1,636,902,417,000 |
I just switched over from Kubuntu to Fedora KDE Spin. Now, I have an LVM setup where my home partition is on its own volume. I created a user with the same username as the one I had in the last distro. I was dismayed to find out that I couldn't log in. The screen just went blank, then I saw a blinking cursor, and then I was bounced back to the login screen. I tried to log in from a tty, which worked, but I got this wonderful message:
-- user: /home/user: change directory failed: Permission denied
Okay. So I logged in as root and tried to chown everything back to user. Nothin' goin'. I chmoded my /home/user directory. Still nothing. I'm officially at a loss as to what I should try next, and I thought you fine folks might help. Here's some information for you:
id -u user
1000
stat /home/user/
File: '/home/user/'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd02h/64770d Inode: 11272193 Links: 22
Access: (0700/drwx------) Uid: ( 1000/user) Gid: ( 1000/user)
Context: system_u:object_r:unlabeled_t:s0
Access: 2017-06-16 19:42:02.224062623 -0400
Modify: 2017-06-16 19:42:00.651082621 -0400
Change: 2017-06-16 19:42:00.651082621 -0400
Birth: -
It all looks good to me, but what do I know? Strangely enough, when I am logged in as user, I can cd into that directory with no issues.
|
Based upon the OP's information (which, btw, was excellent for the question), the SELinux context was incorrect. In the OP's question, the Context showed as:
Context: system_u:object_r:unlabeled_t:s0
However, a home directory should have user_home_dir_t.
To resolve the situation, run restorecon -Rv /home (using /home ensures that home directories for other users are updated as well; one could fix just the particular user's home directory with restorecon -Rv /home/user). The result should be similar to:
File: ‘/home/user’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd00h/64768d Inode: 18642668 Links: 16
Access: (0700/drwx------) Uid: ( 1000/ user) Gid: ( 1000/ user)
Context: unconfined_u:object_r:user_home_dir_t:s0 <-- THE CONTEXT
Access: 2017-06-16 19:10:34.914968689 -0600
Modify: 2017-06-16 18:30:31.135767008 -0600
Change: 2017-06-16 18:30:31.135767008 -0600
Birth: -
Using -R also ensures that directories inside /home/user are properly adjusted. For example, the .ssh directory has a context of unconfined_u:object_r:ssh_home_t:s0.
| Can't log in, seemingly locked out of home directory |
1,636,902,417,000 |
How do I grant a specific user the right to change user and group ownership of files and directories inside a specific directory?
I did a Google search and saw that there is such a thing as setfacl, which allows for granting users specific rights to change permissions for files and directories. From what I read, though, this command does not allow granting chown permissions.
So, say a file has
user1 user1 theFile1
user1 user1 theDirectory1
Issuing the following command would fail.
[user1@THEcomputer]$ chown user2 theFile
I do have root access on the computer. Is there a way to grant a user to issue chown commands inside a directory?
UPDATE: How to add a user to a group.
Here is the command from the article that I used to add hts to the datamover group.
[root@Venus ~]# usermod -a -G datamover hts
[root@Venus ~]# exit
logout
[hts@Venus Receive]$ groups
hts wireshark datamover
[hts@Venus Receive]$
UPDATE (address comment by RuiFRibeiro):
Changing the ownership of the files in the directory does not work; see the listing below.
[datamover@Venus root]$ ls -la
total 311514624
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 .
drwxr-xr-x 4 root root 4096 Aug 20 16:52 ..
-rwxrwxrwx. 1 datamover datamover 674 Aug 31 16:47 create_files.zip
drwxrwxrwx 2 datamover datamover 4096 Oct 17 17:07 dudi
-rwxrwxrwx. 1 datamover datamover 318724299315 Oct 13 15:47 Jmr400.mov
-rwxrwxrwx. 1 datamover datamover 182693854 Aug 31 16:47 Jmr_Commercial_WithSubtitles.mov
-rwxrwxrwx. 1 datamover datamover 80607864 Aug 31 16:47 Jmr_DataMover_Final.mov
drwxrwxrwx. 2 datamover datamover 122880 Aug 23 11:54 ManyFiles
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 Receive
drwxrwxrwx 2 datamover datamover 4096 Oct 14 13:40 sarah
-rwxrwxrwx 1 datamover datamover 3184449 Oct 14 14:05 SourceGrid_4_40_bin.zip
[datamover@Venus root]$ cd ./Receive/
[datamover@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts hts 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[datamover@Venus Receive]$ chown datamover:datamover ./Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of './Jmr_Commercial_WithSubtitles.mov': Operation not permitted
Here is an attempt as the owner of the file:
[hts@Venus Receive]$ chown datamover:datamover Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of 'Jmr_Commercial_WithSubtitles.mov': Operation not permitted
So as you can see, neither possibility works.
UPDATE (address countermode's answer)
Group ownership may be changed by the file owner (and root). However, this is restricted to the groups the owner belongs to.
Yes, one does have to log out first. Here is the result of my attempt:
[hts@Venus ~]$ groups hts
hts : hts wireshark datamover
[hts@Venus ~]$ cd /mnt/DataMover/root/Receive/
[hts@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts hts 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[hts@Venus Receive]$ chown hts:datamover ./Jmr_Commercial_WithSubtitles.mov
[hts@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts datamover 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[hts@Venus Receive]$ chown datamover:datamover ./Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of ‘./Jmr_Commercial_WithSubtitles.mov’: Operation not permitted
[hts@Venus Receive]$
Adding hts to the datamover group does indeed allow me to change the group part of the ownership, so that's a partial answer and a validation of the statement.
|
Only root has permission to change the ownership of files. Reasonably modern versions of Linux provide the CAP_CHOWN capability; a user who holds this capability may also change the ownership of arbitrary files. CAP_CHOWN is global: once granted, it applies to any file in a local file system.
Group ownership may be changed by the file owner (and root). However, this is restricted to the groups the owner belongs to. So if user U belongs to groups A, B, and C but not to D, then U may change the group of any file that U owns to A, B, or C, but not to D. If you seek for arbitrary changes, then CAP_CHOWN is the way to go.
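The group-ownership rule can be checked directly from a shell (a minimal sketch; /tmp/chgrp_demo is just a scratch file name):

```shell
# A file owner may change a file's group, but only to a group the
# owner is a member of (root, or a CAP_CHOWN holder, is exempt).
rm -f /tmp/chgrp_demo
touch /tmp/chgrp_demo                          # we own this file
chgrp "$(id -gn)" /tmp/chgrp_demo && echo ok   # our own primary group: allowed
# chgrp to a group we do not belong to would instead fail with
# "Operation not permitted".
```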
CAUTION: CAP_CHOWN has severe security implications; a user with a shell that has the CAP_CHOWN capability could get root privileges. (For instance, chown libc to yourself, patch in your Trojan horses, chown it back, and wait for a root process to pick it up.)
Since you want to restrict the ability to change ownership to certain directories, none of the readily available tools will help you. Instead you may write your own variant of chown that enforces the intended restrictions. This program needs to have the CAP_CHOWN capability, e.g.
setcap cap_chown+ep /usr/local/bin/my_chown
CAUTION
Your program will probably mimic the genuine chown, e.g. my_chown user:group filename(s). Perform your input validation very carefully: check that each file satisfies the intended restrictions, and in particular watch out for soft links that point out of bounds.
If you want to restrict access to your program to certain users, you may either create a special group, set the group ownership of my_chown to this group, set its permissions to 0750, and add all permitted users to this group. Alternatively you may use sudo with suitable rules (in which case you also don't need the capability magic). If you need even more flexibility, you'll have to code the rules you have in mind into my_chown.
| How to grant a user rights to change ownership of files/directories in a directory |
1,411,584,047,000 |
Is there some permissions setup that would allow a user, say, john, to add a file to a directory d, but not be able to remove an existing file owned by another user, say, root?
My understanding is that this is not possible, since one needs execute permissions to add a file to the directory, but this also gives on the power to unlink any file in the directory.
(I'm using Mac OS 10.9, but this question presumably applies to all POSIX-ish systems.)
|
Yes, to do so, you would want to set the sticky bit for that directory.
excerpt
Another important enhancement involves the use of the sticky bit on
directories. A directory with the sticky bit set means that only the
file owner and the superuser may remove files from that directory.
Other users are denied the right to remove files regardless of the
directory permissions. Unlike with file sticky bits, the sticky bit on
directories remains there until the directory owner or superuser
explicitly removes the directory or changes the permissions.
That is, you would give user john write and execute permissions on the directory d so that they can add files to it, and then mark the directory as "sticky" with chmod +t /path/to/d to ensure that john (and any other users with write access) are only able to delete files (or subdirectories) that they own.
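A minimal demonstration (the directory name is arbitrary):

```shell
# Create a world-writable directory and set the sticky bit (mode 1777,
# the same scheme /tmp uses). Anyone may create files in it, but only
# a file's owner (or root) may delete or rename entries.
mkdir -p /tmp/dropbox_demo
chmod 1777 /tmp/dropbox_demo
ls -ld /tmp/dropbox_demo | cut -c1-10   # drwxrwxrwt
```

The trailing "t" in the mode string is the sticky bit; it shows as lowercase "t" because the others-execute bit is also set.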
| Ability to add files to a directory but not remove existing files |
1,411,584,047,000 |
On OS X, a friend of mine changed the permissions on /usr/bin recursively using Finder, in order to grant write access to everyone.
Here is how it's done:
Go to /usr/bin in Finder, then mess with the permissions at the bottom of the info window:
After that, you can no longer run Terminal.app for example. But you can still run Disk Utility, which is needed to recover from this without a terminal.
Here is the error that you have in this case:
Last login: Fri Jul 4 15:39:24 on ttys001
login(27006,0x7fff78115310) malloc: *** error for object 0x7fceb3412cc0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Luckily, I quickly found a question mentioning this problem here.
My first thought was that this is a hardware problem (maybe some random corruption on the hard drive / in RAM / etc..).
How is this error related to the wrong permissions in /usr/bin?
While trying to work on the broken system to get a clean difference listing, I got this:
$ sudo -s
sudo: effective uid is not 0, is sudo installed setuid root?
Here is the result of diskutil verifyPermissions (which solves the problem BTW):
(Too big to be posted here)
Each line is of the form:
Permissions differ on "usr/bin/sudo"; should be -r-s--x--x ; they are -rwxr-xrwx
but I left only the filename, the permission it should have and the current permission:
http://pastie.org/9358204
Permissions differ on "usr/bin/login"; should be -r-sr-xr-x ; they are -rwxr-xrwx
|
The issue is indeed that the Finder's way of modifying permissions doesn't only affect the indicated bits as one might think. For some reason it zeroes out the first octal of the file's mode and it leaves the executable bits untouched. So, some vital programs get their setuid/setgid and sticky bits stripped off which makes them either useless or behave erratically.
The setuid bit is needed by some of the programs in /usr/bin/ because they must interact with the system on a lower level than the usual programs. For example, sudo, passwd, newgrp or login need privileges beyond the ones given to an ordinary user and that's why you need a password to execute them. If you remove their setuid bit, they simply can't do their job, which causes them to exit prematurely or even crash.
So, for example, the correct permissions of /usr/bin/login are 4555 or -r-sr-xr-x and after your friend's manipulation we have 0757 or -rwxr-xrwx. The Terminal.app calls /usr/bin/login to attach your user to a tty (see man stty) and the missing setuid bit causes it to fail. The freeing of an unallocated pointer is probably a bug related to that. On OS X 10.6.8, I don't get this pointer error, but Terminal.app quits immediately after starting and I find an entry like login[6647]: pam_open_session(): system error in /var/log/system.log.
Edit. As Antoine Lecaille mentions in a comment, a simple way to make Terminal.app dysfunctional is to issue $ sudo chmod -s /usr/bin/login. Note that you can't even open a new window afterwards since this also relies on a call to login. To undo it, just do $ sudo chmod +s /usr/bin/login.
I tested the Finder's effect on permissions as follows:
$ # create a directory with the same permissions as /usr/bin
$ mkdir -m 755 test
$ sudo chown root:wheel test
$ ls -l | grep test
drwxr-xr-x 2 root wheel 68 Jul 6 15:01 test
$ # create 4096 empty files with all possible permissions
$ cd test
$ sudo touch file_{0..7}{0..7}{0..7}{0..7}
$ for perms in {0..7}{0..7}{0..7}{0..7}; do sudo chmod $perms file_$perms; done
The for loop may take a minute to complete because chmod is slow. After this, you have files file_wxyz with permissions wxyz in the folder test. For example
$ ls -l file_4555
-r-sr-xr-x 1 root wheel 0 Jul 6 15:02 file_4555
Now we can pull your friend's stunt and change the permissions of the folder and all its contents using the Finder: $ open . and Cmd+I and do what you explained in your post. I decided to grant read permissions to the group wheel and read+write rights to everyone.
Now let's see what happened to our files: The following pipe lists the directory, reads out the column containing the permissions, sorts it and suppresses duplicate lines:
$ ls -l | awk '{print $1}' | sort -u
-rw-r--rw-
-rw-r--rwx
-rw-r-xrw-
-rw-r-xrwx
-rwxr--rw-
-rwxr--rwx
-rwxr-xrw-
-rwxr-xrwx
total
$ ls -l file_4555
-rwxr-xrwx 1 root wheel 0 Jul 6 15:02 file_4555
As you can see, the setuid/setgid/sticky bits are no longer set; read and write permissions are the same for all files; and the permissions now only differ in their execute bits (of which there are eight possible combinations).
| Why do some applications stop working when permissions are changed in /usr/bin? |
1,411,584,047,000 |
I'm using a PHP script to call shell_exec and execute wget to download some files to /var/www/dir/. (Internal tool, so security isn't much of an issue)
The directory has 777 permissions. But when I run wget, all of the files are 644 by default. Ideally, I would like 665 for group write access for group www-data.
How do I set the permissions for wget downloaded files? I don't want to run chmod -R after every call.
|
The permissions that are applied to new files that get created are controlled by the user's umask in a given shell. You can see what they are using the command umask.
$ umask
0002
To get the permissions of these new files set to 665 you'll need to set the umask to this:
$ umask 112
The mask specifies which bits to "mask" out of the default creation mode. Note that regular files are created with a base mode of 666 (no execute bits), so umask 112 actually yields 664 (rw-rw-r--) rather than 665; you cannot get execute bits on downloaded files via the umask alone, but 664 already provides the group write access you want.
Incorporating using shell_exec
You could do something like this to enable the umask using shell_exec in PHP:
"umask 112; ...wget..."
The semicolon above designates that these are actually 2 commands. The umask will run first, followed by the second command wget ....
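A quick shell check of what actually happens for a newly created file (the scratch path is arbitrary):

```shell
# New regular files get mode 666 & ~umask; with umask 112 that is 664.
rm -f /tmp/umask_demo
( umask 112; touch /tmp/umask_demo )   # subshell keeps the umask change local
stat -c '%a' /tmp/umask_demo           # prints 664 with GNU stat
```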
| Changing default permissions for wget? |
1,411,584,047,000 |
I know about the different permissions and how to change permissions, etc. But I have just seen one of my files has the permissions:
-rwsr-x--- 1 root scott 26974 Dec 8 2010 extjob
What does the "s" mean, in the permissions?
There is another question on U&L that has a little to do with the same permissions, but doesn't explain what they mean.
|
It is setuid. You can refer to this link to get more details.
You can also try man setuid.
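You can reproduce that exact mode string on a scratch file: the setuid bit is octal 4000, and it displays as an "s" in the owner's execute slot when the owner-execute bit is also set.

```shell
rm -f /tmp/setuid_demo
touch /tmp/setuid_demo
chmod 4750 /tmp/setuid_demo           # 4000 (setuid) + 750 (rwxr-x---)
ls -l /tmp/setuid_demo | cut -c1-10   # -rwsr-x---
# When executed, a setuid binary runs with the *file owner's* user ID
# rather than the caller's -- which is why tools like passwd use it.
```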
| What does the "s" mean in the following permission:"-rwsr-x---" |
1,411,584,047,000 |
[root@nixos:/etc/nix]# sudo chmod 777 /etc/nix/nix.conf
chmod: changing permissions of '/etc/nix/nix.conf': Read-only file system
I remember this is some filesystem / Linux kernel utility to change this, I'm not sure what it's called though?
|
This file is managed by Nix - hence it's in the store path - and read only:
readlink -f /etc/nix/nix.conf
/nix/store/9cidrvc5n3fjf9zplxrwiyh0g9nq07bb-nix.conf
Instead, to modify this file, set the nix.extraOptions option in configuration.nix and rebuild.
https://github.com/NixOS/nix/pull/3111
Nix config can also be set at ~/.config/nix/nix.conf though see here for more info: https://nixos.org/manual/nix/unstable/command-ref/conf-file.html
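For example (a sketch; the two options shown are only placeholders for whatever settings you need, and they take effect after the next nixos-rebuild switch):

```nix
# /etc/nixos/configuration.nix
{
  # Lines here are appended verbatim to the generated nix.conf:
  nix.extraOptions = ''
    keep-outputs = true
    keep-derivations = true
  '';
}
```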
| Nixos unable to modify or chmod nix config - '/etc/nix/nix.conf' |
1,411,584,047,000 |
I am getting a permission denied error on CentOS 6.10 64 bit
Kindly note that the "#" indicates a Root Level User prompt.
# cd /tmp
# chmod 777 file*
# /bin/ls -l file*
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_00.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_01.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_02.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_03.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_04.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_05.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_06.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_07.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_08.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_09.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_10.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_11.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_12.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_13.dat
-rwxrwxrwx 1 apache apache 824 Sep 17 17:15 file_14.dat
-rwxrwxrwx 1 apache apache 1 Sep 17 17:15 file_15.dat
# cat file* > file.dat
cat: file_00.dat: Permission denied
cat: file_02.dat: Permission denied
# /bin/ls -l file.dat
-rw-rw-r-- 1 root root 10713 Sep 17 17:32 file.dat
The size of the full file is 10713, which is 824*13+1, meaning every file was successfully copied except file_00.dat and file_02.dat. A complete copy would be 12361 bytes (824*15+1). However, there is nothing different about these two files, except that the machine refuses to let me read them.
The command "chmod 777" is redundant, just to emphasize the situation. Before running that command, permissions were all in the form "-rw-r--r--", which still means that I should not be getting a permission denied error.
There is no "." on the permissions, so theoretically SELinux should not be involved, but even if it is involved, why is it only picking on just two files?
I can repeat the process that creates these files, and it will choose a different set of files to be unreadable.
Does anybody have an explanation and fix for this?
UPDATE:
I have modified the process that creates the files. Previously it was receiving the data from a JavaScript client that broke an XLSX file into chunks to allow uploading spreadsheets of massive size. The server would receive the chunks as base64, decode each chunk to binary, then save it in a temporary file to be concatenated into a final XLSX file.
What it does now is save each temporary chunk as base64 (100% ASCII). Once all the chunks are uploaded, it reads each file, then decodes it to binary, and appends it to the final XLSX file.
Works fine. I think we'll leave it that way.
As a test, I wrote a quick 3 line program to read one of the base64 chunks, decode it to binary, then save it. Then I tried to read result. Guess what? Permission denied on the binary file.
So apparently, what makes the file unreadable is some pattern of data inside the file.
Using this method resolves the issue, but I still would like to know how a pattern of binary data inside a file creates a "Permission denied" error on the outside.
|
FINAL UPDATE
Turns out our service provider had a Red-Hat Linux anti-virus program running. Which, obviously, I was unaware of.
Turn off the anti-virus, and all files magically become readable. Turn it back on, and a certain select few of the files happen to match some virus signature.
The anti-virus should only be on the look-out for executable files (these files were mode 644 when the problem surfaced), and there should be a different error message than a generic "Permission denied".
Oh well. Henceforth we will encode the files in Base64, problem solved.
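The Base64 workaround is just a lossless round-trip; a minimal sketch with a throwaway binary chunk:

```shell
# Store each chunk as ASCII (base64) and decode it back when assembling:
printf '\000\001\377' > /tmp/chunk.bin      # arbitrary binary bytes
base64 /tmp/chunk.bin > /tmp/chunk.b64      # 100% ASCII on disk
base64 -d /tmp/chunk.b64 > /tmp/chunk.out   # decode on demand
cmp -s /tmp/chunk.bin /tmp/chunk.out && echo identical
```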
Thanks again to all who helped.
| Permissions denied on files despite 777 mode |
1,411,584,047,000 |
I'm trying to figure how is the sticky bit used in NFS v3.
RFC 1813 says on page 22:
0x00200 Save swapped text (not defined in POSIX).
What do they mean by "swapped text"?
In "NFS Illustrated", the author, Brent Callaghan, says it means not to cache. However, I haven't seen this explanation in other places.
|
The text section of an executable is the actual executable code; that is what "swapped text" refers to. On Linux this request is ignored: it is just an optimisation hint from the admin, and the kernel can make that decision by itself without the prompt.
It says that if the executable's text gets swapped out and the process ends, the swapped copy should be kept for next time. On Linux, (local) executables are not swapped out, as it is just as quick to reload them from the file. It may be a bit different for NFS.
The sticky bit has other meanings for other file types:
For executables, the "save text" meaning you described.
For directories, it stops non-owners from deleting files.
I assume NFS behaves the same; when I used it 20 years ago it did.
from: http://netbsd.gw.com/cgi-bin/man-cgi?sticky+7+NetBSD-current
Later, on SunOS 4, the sticky bit got an additional meaning for files
that had the bit set and were not executable: read and write operations
from and to those files would go directly to the disk and bypass the
buffer cache. This was typically used on swap files for NFS clients on
an NFS server, so that swap I/O generated by the clients on the servers
would not evict useful data from the server's buffer cache.
| What does the "sticky bit" mean in NFS? |
1,411,584,047,000 |
There is a process that I own whose documentation claims I can send SIGABRT to in order to get some debugging information. However, when I try to send SIGABRT, I get back "Operation not permitted".
I have also tried sending the same signals to other processes I own to make sure that there isn't some underlying block preventing me from sending SIGABRT altogether, but they respond in the appropriate manner. It's just this one program, but it's every instance of that program. A system call trace of the process shows that it never receives the signal.
I have tried running /bin/kill explicitly in order to rule out any weirdness in my shell's builtin kill, and, other than some minor output differences, there was no change in behavior.
root can send SIGABRT to the process and it works as I expect it to.
I've been in this game for a while, but I've never seen an instance where a user is not able to send a signal to a process that he owns, nor have I seen an instance where a user can send one signal but not another.
The OS is FreeBSD 9.0 and the process is a ruby process that is part of a Phusion Passenger Ruby-on-Rails application running under Apache.
I'm currently at a total loss. Does anyone have any idea what is going on?
Update: Turns out that the security.bsd.conservative_signals sysctl was set to 1, and that prevents many signals from being delivered to setuid processes, according to the man page. Setting it to 0 solves the problem.
While there was a setuid call somewhere up the process chain — the process is a child of an Apache httpd, and Apache changes its uid to relinquish root permissions — the process itself is not setuid, and its EUID, RUID, and SVUID are all the same as the user sending the signal. The only inspection of the process that I can find that would indicate that any setuid happened is the P_SUGID flag in ps's "flags" field. ("Had set id privileges since last exec") It seems like that shouldn't be the case, but it's handled in an Apache module and I don't know its exact methods.
For the record, it's a ruby process that's functioning as part of a Ruby on Rails application being handled by mod_passenger, AKA mod_rails.
|
From the latest version of the kill(2) manpage:
For a process to have permission to send a signal to a process designated
by
pid,
the user must be the super-user, or
the real or saved user ID of the receiving process must match
the real or effective user ID of the sending process.
A single exception is the signal SIGCONT, which may always be sent
to any process with the same session ID as the sender.
In addition, if the
security.bsd.conservative_signals
sysctl
is set to 1, the user is not a super-user, and
the receiver is set-uid, then
only job control and terminal control signals may
be sent (in particular, only SIGKILL, SIGINT, SIGTERM, SIGALRM,
SIGSTOP, SIGTTIN, SIGTTOU, SIGTSTP, SIGHUP, SIGUSR1, SIGUSR2).
In what sense do you own the process? What exactly is the status of the process relating to real uid, effective uid, what binary it is running, owner and setid-bits of that binary, etc?
| `kill -s TERM` works, `kill -s ABRT` gets "Operation not permitted" |
1,411,584,047,000 |
I accidentally wreaked unknown amounts of havoc on my web server by running
sudo chown -R myuser:mygroup * .*
in /var/www, not remembering that .* would include the parent directory (as ..). I realized what was happening after a second or so, but by then it was too late, half the directories in /var had been "re-owned". I know I can reset most of it with
sudo chown -R root:root /var
but what files are there that need to be owned by specific non-root users (or groups) that I would have to change manually?
This is on Gentoo, and here's a directory listing:
$ ls -l /var
drwxr-xr-x 9 root root 4096 May 12 2009 cache
drwxr-xr-x 4 root root 4096 Aug 20 22:49 db
drwxr-xr-x 3 root root 4096 Aug 20 22:42 dist
drwxr-xr-x 4 root root 4096 Nov 1 2009 edata
drwxr-xr-x 2 root root 4096 Jun 17 2008 empty
drwxr-xr-x 5 git git 4096 Feb 13 2010 git
drwxr-xr-x 23 root root 4096 Jul 19 03:22 lib
drwxrwxr-x 3 root uucp 4096 Aug 12 00:14 lock
drwxr-xr-x 10 root root 4096 Aug 20 03:10 log
lrwxrwxrwx 1 root root 15 Nov 7 2008 mail -> /var/spool/mail
drwxr-xr-x 10 root root 4096 Aug 21 00:22 run
drwxr-xr-x 8 root root 4096 Feb 13 2010 spool
drwxr-xr-x 2 root root 4096 Jun 17 2008 state
drwxr-xr-x 13 root root 4096 Dec 23 2009 svn
drwxrwxrwt 5 root root 4096 Aug 14 01:53 tmp
drwxr-xr-x 13 root root 4096 Aug 11 20:21 www
drwxr-xr-x 2 root root 4096 Dec 14 2008 www-cache
I can provide listings of subdirectories but that gets pretty long pretty fast. (dist, edata, git, svn, and www are things I manage myself so ownership in those won't be an issue)
|
Well, "/var" is generally for data generated by programs, so it may not be possible to tell you exactly who should own what without duplicating your system. I can think of two ways you might fix it:
Set up another version of your web server on a spare or virtual machine and then check /var.
Just change to root/root and then see what errors come up (most of the directories will have this ownership structure).
The downside to 1 is the amount of time it will take; the plus side being that it will be accurate. Item 2 is much faster but less accurate even if it's mostly true. The big problem here is that on an important production box 2 may not be feasible.
| What files in /var need to have specific owners? |
1,411,584,047,000 |
Situation:
Linux machine running in Azure
looking for a public domain that returns 112 results
the packet response size is 1905 bytes
Case 1:
interrogating google DNS 8.8.8.8 - it returns un-truncated response. Everything is OK.
Case 2:
interrogating Azure DNS 168.63.129.16 - it returns a truncated response and tries to switch to TCP, but it fails there, with error "unable to connect to server address". However, it works perfectly well if I run the interrogation with "sudo".
The problem can be reproduced all the time:
Without sudo:
$ dig aerserv-bc-us-east.bidswitch.net @8.8.8.8
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49847
;; flags: qr rd ra; QUERY: 1, ANSWER: 112, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;aerserv-bc-us-east.bidswitch.net. IN A
;; ANSWER SECTION:
aerserv-bc-us-east.bidswitch.net. 119 IN CNAME bidcast-bcserver-gce-sc.bidswitch.net.
bidcast-bcserver-gce-sc.bidswitch.net. 119 IN CNAME bidcast-bcserver-gce-sc-multifo.bidswitch.net.
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.189.137
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.205.98
--------
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.28.65
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 59 IN A 35.211.213.32
;; Query time: 12 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Oct 03 22:28:09 EEST 2019
;; MSG SIZE rcvd: 1905
[azureuser@testserver~]$ dig aerserv-bc-us-east.bidswitch.net
;; Truncated, retrying in TCP mode.
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 168.63.129.16#53(168.63.129.16) for aerserv-bc-us-east.bidswitch.net failed: timed out.
With sudo:
[root@testserver ~]# dig aerserv-bc-us-east.bidswitch.net
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> aerserv-bc-us-east.bidswitch.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8941
;; flags: qr rd ra; QUERY: 1, ANSWER: 112, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;aerserv-bc-us-east.bidswitch.net. IN A
;; ANSWER SECTION:
aerserv-bc-us-east.bidswitch.net. 120 IN CNAME bidcast-bcserver-gce-sc.bidswitch.net.
bidcast-bcserver-gce-sc.bidswitch.net. 120 IN CNAME bidcast-bcserver-gce-sc-multifo.bidswitch.net.
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.211.56.153
.......
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.207.61.237
bidcast-bcserver-gce-sc-multifo.bidswitch.net. 60 IN A 35.207.23.245
;; Query time: 125 msec
;; SERVER: 168.63.129.16#53(168.63.129.16)
;; WHEN: Thu Oct 03 22:17:18 EEST 2019
;; MSG SIZE rcvd: 1905
I checked everything I found over the internet; nowhere did I see an explanation of why this works as intended only when run from the root account or with sudo permissions when the response packet is too big and the response gets truncated, forcing the DNS query to switch from UDP to TCP.
Adding "options edns0" or "options use-vc" or "options edns0 use-vc" to /etc/resolv.conf doesn't help either.
Same behavior in CentOS 7.x, Ubuntu 16.04 and 18.04
Update: tested with curl and telnet, the behavior is the same. Works with sudo or from root account, fails without sudo or from standard account.
Can anyone please provide some insight about why it needs superuser permissions when switching from UDP to TCP and help with some solution, if any?
UPDATE:
I know this is long post, but please read it all before answering.
Firewall is set to allow any to any.
Port 53 is open on TCP and UDP in all the test environments I have.
SELinux/AppArmor is disabled.
Update2:
Debian9 (kernel 4.19.0-0.bpo.5-cloud-amd64 ) works correctly without the sudo.
RHEL8 (kernel 4.18.0-80.11.1.el8_0.x86_64) works correcly, but with huge delays (up to 30sec), without sudo.
Update3:
List of distributions I was able to test and it doesn't work:
RHEL 7.6, kernel 3.10.0-957.21.3.el7.x86_64
CentOS 7.6, kernel 3.10.0-862.11.6.el7.x86_64
Oracle7.6, kernel 4.14.35-1902.3.2.el7uek.x86_64
Ubuntu14.04, kernel 3.10.0-1062.1.1.el7.x86_64
Ubuntu16.04, kernel 4.15.0-1057-azure
Ubuntu18.04, kernel 5.0.0-1018-azure
Ubuntu19.04, kernel 5.0.0-1014-azure
SLES12-SP4, kernel 4.12.14-6.23-azure
SLES15, kernel 4.12.14-5.30-azure
So, basically the only distribution I tested and is without problems is Debian 9. Since RHEL 8 has huge delays, which may trigger time outs, I cannot consider it fully working.
So far, the biggest difference between Debian 9 and the rest of distributions I tested is the systemd (missing on debian 9)... not sure how to check if this is the cause.
Thank you!
|
"Can anyone please provide some insight about why this works like this and help with some solution, if any?"
SHORT ANSWER:
A default Azure VM is created with broken DNS: systemd-resolved needs further configuration. sudo systemctl status systemd-resolved will quickly confirm this. /etc/resolv.conf points to 127.0.0.53- a local unconfigured stub resolver.
The local stub resolver systemd-resolved was unconfigured. It had no forwarder set so after hitting 127.0.0.53 it had nobody else to ask. Ugh. Jump to the end to see how to configure it for Ubuntu 18.04.
If you care about how that conclusion was reached, then please read the Long Answer.
LONG ANSWER:
Why DNS Responses Truncated over 512 Bytes:
TCP [RFC793] is always used for full zone transfers (using AXFR) and
is often used for messages whose sizes exceed the DNS protocol's
original 512-byte limit.
Source: https://www.rfc-editor.org/rfc/rfc7766
ANALYSIS:
This was trickier than I thought. So I spun-up an Ubuntu 18.04 VM in Azure so I could test from the vantage point of the OP:
My starting point was to validate nothing was choking-off the DNS queries:
sudo iptables -nvx -L
sudo apparmor_status
All chains in the iptables had their default policy set to ACCEPT and although Apparmor was set to "enforcing", it wasn't on anything involved with DNS. So no connectivity or permissions issues observed on the host at this point.
Next I needed to establish how the DNS queries were winding through the gears.
cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search ns3yb2bs2fketavxxx3qaprsna.zx.internal.cloudapp.net
So according to resolv.conf, the system expects a local stub resolver called systemd-resolved. Checking the status of systemd-resolved per the hint given in the text above we see it's erroring:
sudo systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-10-08 12:41:38 UTC; 1h 5min ago
Docs: man:systemd-resolved.service(8)
https://www.freedesktop.org/wiki/Software/systemd/resolved
https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
Main PID: 871 (systemd-resolve)
Status: "Processing requests..."
Tasks: 1 (limit: 441)
CGroup: /system.slice/systemd-resolved.service
└─871 /lib/systemd/systemd-resolved
Oct 08 12:42:14 test systemd-resolved[871]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
<Snipped repeated error entries>
/etc/nsswitch.conf set the order sources of sources used to resolved DNS queries. What does this tell us?:
hosts: files dns
Well, the DNS queries will never hit the local systemd-resolved stub resolver as it's not specified in /etc/nsswitch.conf.
Are the forwarders even set for the systemd-resolved stub resolver?!?!? Let's review that configuration in /etc/systemd/resolved.conf
[Resolve]
#DNS=
#FallbackDNS=
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes
Nope: systemd-resolved has no forwarder set to ask if a local ip:name mapping is not found.
The net result of all this is:
/etc/nsswitch.conf sends DNS queries to DNS if no local IP:name mapping found in /etc/hosts
The DNS server to be queried is 127.0.0.53 and we just saw this is not configured from reviewing its' config file /etc/systemd/resolved.conf. With no forwarder specified in here, there's no way we'll successfully resolve anything.
TESTING:
I tried to override the stub resolver 127.0.0.53 by directly specifying 168.63.129.16. This failed (note that without the @ prefix, dig treats the address as a second name to look up, so the query still went to the local stub):
dig aerserv-bc-us-east.bidswitch.net 168.63.129.16
; <<>> DiG 9.11.3-1ubuntu1.9-Ubuntu <<>> aerserv-bc-us-east.bidswitch.net 168.63.129.16
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 24224
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;168.63.129.16. IN A
;; Query time: 13 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Oct 08 13:26:07 UTC 2019
;; MSG SIZE rcvd: 42
Nope: seeing ;; SERVER: 127.0.0.53#53(127.0.0.53) in the output tells us that we've not overridden it and the local, unconfigured stub resolver is still being used.
However using either of the following commands overrode the default 127.0.0.53 stub resolver and therefore succeeded in returning NOERROR results:
sudo dig aerserv-bc-us-east.bidswitch.net @168.63.129.16
or
dig +trace aerserv-bc-us-east.bidswitch.net @168.63.129.16
So any queries that relied on using the systemd-resolved stub resolver were doomed until it was configured.
SOLUTION:
My initial - incorrect - belief was that TCP/53 was being blocked: the whole "Truncated 512" was a bit of a red herring. The stub resolver was not configured. I made the assumption - I know, I know, "NEVER ASSUME" ;-) - that DNS was otherwise configured.
How to configure systemd-resolved:
Ubuntu 18.04
Edit the hosts directive in /etc/nsswitch.conf as below by prepending resolve to set systemd-resolved as the first source of DNS resolution:
hosts: resolve files dns
Edit the DNS directive (at a minimum) in /etc/systemd/resolved.conf to specify your desired forwarder, which in this example would be:
[Resolve]
DNS=168.63.129.16
Restart systemd-resolved:
sudo systemctl restart systemd-resolved
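Before restarting, it can be worth sanity-checking that the DNS= line actually made it in. A minimal sketch (check_conf is a hypothetical helper, demonstrated here on temporary files rather than the live /etc/systemd/resolved.conf):

```shell
# Verify a forwarder is really configured before blaming the network.
check_conf() {
  if grep -Eq '^[[:space:]]*DNS=[^[:space:]]' "$1"; then
    echo "forwarder configured"
  else
    echo "no DNS= line: stub resolver has nowhere to forward"
  fi
}
tmp=$(mktemp)
printf '[Resolve]\n#DNS=\n' > "$tmp"              # commented default, as shipped
check_conf "$tmp"
printf '[Resolve]\nDNS=168.63.129.16\n' > "$tmp"  # configured forwarder
check_conf "$tmp"
```

A config file containing only commented-out defaults (the first case) is effectively unconfigured, which is exactly the situation diagnosed above.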
RHEL 8:
Red Hat does almost everything for you with respect to setting up systemd-resolved as a stub resolver, except they didn't tell the system to use it!
Edit the hosts directive in /etc/nsswitch.conf as below by prepending resolve to set systemd-resolved as the first source of DNS resolution:
hosts: resolve files dns
Then restart systemd-resolved:
sudo systemctl restart systemd-resolved
Source: https://www.linkedin.com/pulse/config-rhel8-local-dns-caching-terrence-houlahan/
CONCLUSION:
Once systemd-resolved was configured my test VM's DNS behaved in the expected way. I think that about does it....
| Unable to run DNS queries when response is bigger than 512 Bytes and truncated |
1,411,584,047,000 |
Please excuse me if this is too basic and you're tempted to throw an RTFM at me.
I want to prevent users from copying certain files while granting them read access to the same files. I thought this was impossible until I came across this example in the SELinux Wiki:
allow firefox_t user_home_t : file { read write };
So I was thinking, is it possible to give the files in question a mode of 0700 for instance, and use SELinux to grant read access only to the application that the users will normally be using to read the files?
Again, I'm sorry if this is too basic, it's just that I'm on a tight schedule and I want to give an answer to my boss one way or the other (if it's possible or not) as soon as possible and I know nothing about SELinux so I'm afraid reading on my own to determine whether it's possible or not would take me too much time. Please note that I'm not averse to reading per se and would hugely appreciate pointers to the relevant documentation if it exists.
So basically, my question is, is there a way to do this in SELinux or am I wasting my time pursuing such an alternative?
P.S. I'm aware that granting read access can allow users who are really intent on copying the files to copy and paste them from within the application they'll read them with; I'm just looking for a first line of defense.
EDIT
To better explain my use case:
The files in question are a mixture of text and binaries.
They need to be read by proprietary commercial software: they are simulation models for an electronics simulation software.
These models are themselves proprietary and we don't want the users simulating with them leaking them out for unauthorized use.
The software only needs to read the models and run a few scripts from these files; it will not write their contents anywhere.
In short, I want only the simulation software to have read and execute access to these files while preventing read access for the users.
|
I think it's important to note that the cat isn't the problem in my comment above, but shell redirection. Are you trying to restrict copying of binaries or text files? If it's binaries, then I believe you can work something out with rbash (see http://blog.bodhizazen.net/linux/how-to-restrict-access-with-rbash/).
However, if it's text files, I'm not sure how you can prevent someone from just copying from their local terminal.
I'm not sure any general SELinux solution would be helpful here. Does your application that reads files need to write data anywhere? If not and these files only need to be read by your application, you could just give your application's type read-only access to the files of the type you would like it to read and don't give it write anywhere.
I think some more information on the exact permissions required by your use-case might be helpful, sorry for the vague answer.
UPDATE - MORE SPECIFIC ANSWER
I think you can achieve what you want without SELinux, as this is how many things are handled (e.g. normal users changing their password in /etc/shadow via the passwd command):
Make a separate user and/or group for your commercial software (might already be done)
Give the files read-only access by said user and/or group
Make sure normal users do not have access to those files
Make your executable setuid or setgid (depending on whether you used a group or user) e.g. chmod g+s or chmod u+s
When users run the application, they will now have the same permissions that the application user or group has, thereby allowing read access to those specific files within the desired application.
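The permission-bit part of steps 2–4 can be sketched locally (the file name is a placeholder; a real setup would also need root to chown the binary and data files to the application's user or group):

```shell
d=$(mktemp -d)
touch "$d/app"                # stand-in for the commercial binary
chmod 750 "$d/app"            # owner rwx, group r-x, others nothing
chmod u+s "$d/app"            # step 4: setuid, so it runs with the owner's EUID
stat -c '%A' "$d/app"         # the owner's x becomes s: -rwsr-x---
[ -u "$d/app" ] && echo "setuid bit set"
```

The same pattern with chmod g+s sets the setgid bit, shown as s in the group triplet instead.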
UPDATE2 - MULTIPLE USERS AND GROUPS
If you have multiple applications and groups, you can likely achieve the functionality you are looking for with sudo. Many people are aware of its ability to let you run commands as root, but its usefulness goes far beyond that example. I'm not sure this is an ideal setup, but it's one way to do what you're attempting.
You can still make all the application files owned by the application, but then you can make separate groups for each set of files.
This is what your /etc/sudoers or a file in /etc/sudoers.d/ could look like:
User_Alias FILEGROUP1 = user1, user2
User_Alias FILEGROUP2 = user3, user4
Cmnd_Alias MYEDITOR = /usr/local/bin/myeditor, /usr/local/bin/mycompiler
FILEGROUP1 ALL=(:fileset1) NOPASSWD: MYEDITOR
FILEGROUP2 ALL=(:fileset2) NOPASSWD: MYEDITOR
Where user1 and user2 need access to files owned by the group fileset1 and user3 and user4 need access to files owned by the group fileset2. You could also use groups instead of users.
The users could access their files through the editor by doing sudo -g fileset1 /usr/local/bin/myeditor or something similar.
It might help to create some wrapper scripts for the necessary sudo -g commands for your users, especially since it sounds like it may be a graphical application.
More details:
http://www.garron.me/linux/visudo-command-sudoers-file-sudo-default-editor.html
https://serverfault.com/questions/166254/change-primary-group-with-sudo-u-g
http://linux.die.net/man/8/sudo
| SELinux: Can I disable copying of certain files? |
1,411,584,047,000 |
Does the suid bit have any special meaning for device files in Linux ?
|
I believe it does not. This bit is only used on executable files. It's defined in the Linux kernel headers as S_ISUID. If you grep the kernel sources for this constant, you will find that it is only used in:
should_remove_suid function, which is used on FS operations that should remove SUID/SGID bit,
prepare_binprm function in fs/exec.c which is used when preparing an executable file to set the EUID on exec,
pid_revalidate function in fs/proc/base.c which is used to populate procfs,
notify_change function in fs/attr.c which is used when changing file attributes,
is_sxid function in include/linux/fs.h which is only used by XFS and GFS specific code and notify_change function,
in filesystem specific code (of course)
So it seems to me that this bit is only used (from userspace perspective) when executing files. At least on Linux.
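Consistent with that, the bit is only meaningful on executables, and it is easy to inspect from userspace. A small sketch on a scratch directory (file names are made up):

```shell
d=$(mktemp -d)
touch "$d/node" "$d/tool"
chmod 4755 "$d/tool"          # setuid (4) plus rwxr-xr-x on a regular file
find "$d" -perm -4000         # lists only files with the setuid bit: $d/tool
```

The same `find -perm -4000` scan is how setuid binaries are usually audited system-wide; setting the bit on a non-executable file is harmless but does nothing.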
| Does the suid bit have any meaning for device files? |
1,411,584,047,000 |
Recently I tried to move a directory that I own to another directory (which I also own), but I couldn't. I then noticed that I don't own the parent directory.
This made me wonder what are the rules for moving a directory in UNIX. Do you need to have read/write permissions to both it and its parent? Also what happens if it contains files or directories that you don't own?
|
Your user needs write/execute (wx) permissions on a directory to create or delete any files in it (even files you don't own and can't read). There is no need to own the directory.
Thus, to move a directory you need wx permissions on the parent directory (to be able to operate on the files and dirs in it), plus wx on the directory you're going to move and on all nested directories (permissions on the files in it don't matter at all, if you're not going to change them).
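A minimal sketch of the parent-directory rule (the denial step is guarded, because root's CAP_DAC_OVERRIDE bypasses ordinary permission checks):

```shell
d=$(mktemp -d); mkdir "$d/src" "$d/dst"; touch "$d/src/f"
chmod -w "$d/src"                  # source parent not writable: entry can't go
if [ "$(id -u)" -ne 0 ]; then      # root would bypass this check entirely
  mv "$d/src/f" "$d/dst/" 2>/dev/null || echo "move denied"
fi
chmod u+w "$d/src"                 # restore write on the parent
mv "$d/src/f" "$d/dst/" && echo "move ok"   # ownership of f never mattered
```

The move succeeds or fails purely on the directories' permissions; the file's own mode and owner are never consulted.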
| How do permissions work when moving directories? |
1,411,584,047,000 |
I have a group of files that I cannot access via cat or vim although I have read access.
The parent directory has permissions set as follows:
drwxrws--- 2 test test_grp 94424 May 10 20:01 my_test_grp
Then the file that I would like to access has permissions set as follows:
-rwxrwx--- 1 test test_grp 3398 May 10 19:40 my_test_file.txt
When I execute id -gn, it returns the following: test_grp
When I try to execute cat, I receive the following:
cat: my_test_file.txt: Permission denied
I've tried logging out and logging back in to the terminal, to no avail. Are there any other recommended steps to remediate this issue?
Also, I do not have sudo access.
Update
After doing a df -h I can see that the directory is in a location whose filesystem is a mounted NFS. Could that be a factor as well?
|
A possible explanation is that there are permissions on the server which NFS is unable to express. The permissions transmitted over the network are not what determines whether an access is authorized: the server gets to decide. Normally the permissions and the access control decision are based on the same information, and therefore they're consistent. However, if something along the way loses some information about permissions, then the two may be inconsistent.
Some examples of permissions that NFS is unable to express are access control lists (you're using NFSv3 which doesn't always support ACL; and NFSv4 has ACL but they aren't exactly the same as Linux's), and Linux security frameworks such as SELinux and AppArmor.
If this is the problem, then diagnosing it without access to the server would require a lot of guesswork. Without help from the server administrator, you're unlikely to resolve this problem.
| Unable to read file I have read permissions on |
1,411,584,047,000 |
To add weight to a discussion I'm having, I'm trying to find concrete examples of why having the /root directory world readable is bad from a security point of view.
I have found plenty of instances online of people repeating the mantra that it's really not good to give /root say, 755 perms, but with no further evidence.
Could someone please provide a scenario where a system's security can be compromised if this is the case? The less contrived the better - so, for example, how can a freshly installed Centos system suffer if /root has 755 perms?
EDIT - Thanks for the replies, but so far no concrete examples. To put it another way, how could you use the fact that /root is visible to compromise the system? Are there any examples of programs being installed and assuming that /root is not accessible to everyone?
EDIT 2 - I think the consensus so far is that it's not a great security risk, other than someone not checking perms and using the directory as if it were private to root.
|
Fundamentally I think it comes down to a choice made by the core developers and nothing more than that. Why? Because by default, there should be almost nothing of any value to anyone in /root. No one should be logging in as the root user for general stuff.
For example, on FreeBSD everyone can read /root. Some files within /root can not be read for security reasons but you can still "see" those files are there with ls (just can not read them). For example, .history is set -rw------- but .login is -rw-r--r--.
FreeBSD has a slightly different approach to security to Linux. Historically FreeBSD has been for servers and while it can be run as a Desktop it really is better (by default) as a server.
Personally, I see nothing wrong with this set up (/root can be read).
The /root on FreeBSD has almost nothing in it except for configs really. Mail should be forwarded to a real user. No one should be logging in as the root user. The account should only be used for installation of and configuration of software as well as maintenance tasks. Other than a few security sensitive files (like .history) there is nothing to hide in /root in my opinion, on FreeBSD.
For more reading on this, try the FreeBSD handbook section on security. I did not see anything on their choice to make /root readable in a quick scan but there is a lot info there.
| Examples of why a world readable /root directory is bad? |
1,411,584,047,000 |
I have an NFS mount in fstab:
10.0.12.10:/share1 /net/share1 nfs rw 0 0
which defaults to root as owner and group and 777 permissions. How do I specify another owner and different permissions? I can use chown and chmod, but it certainly should be possible straight from the mount command?
The system OS is Ubuntu Server 14.04.
|
It isn't possible from the mount command, because mount has to handle a variety of different filesystem types - including ones that might not support 'classic' ugo unix style permissions.
You are "stuck with" chown/chgrp/chmod. (Where applicable).
Bear in mind the server has permissions on its own filesystem. It may well be doing some manner of mapping - more commonly you'll see root -> nobody, but NFSv4 and idmap opens a whole new can of worms there. (It doesn't apply direct uid/gid ownership, but rather maps userids against a common directory.)
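So the usual pattern is a one-time chown/chmod on the mounted tree after mounting. A rough sketch, using a temporary local directory as a stand-in for /net/share1 so the commands can run anywhere (on a real NFS mount this needs root on the client and matching permission on the server side):

```shell
mountpoint=$(mktemp -d)            # stand-in for /net/share1
# After 'mount /net/share1', run once to fix ownership and permissions:
chown "$(id -u):$(id -g)" "$mountpoint"
chmod 0775 "$mountpoint"
stat -c '%a' "$mountpoint"         # → 775
```

The change is stored on the server's filesystem, so it persists across remounts; there is no mount option to do it for you on NFS.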
| How to specify owner and permissions for an NFS mount? |
1,411,584,047,000 |
It's straightforward to realise that the permissions of a file are not relevant to the ability to delete that file. The ability to modify the directory listing is controlled by the directory's permissions.
However, for years I have believed that the purpose of the write permission was to allow modification of the directory, and the execute permission is for 'search' - listing files, or changing into the directory.
Today I discovered that one cannot rm a file in a directory unless both the write and execute bits are set. In fact, without execute set, write appears almost useless.
$ tree foo/
foo/
└── file_to_delete
0 directories, 1 file
$ chmod -x foo
$ ls -ld foo
drw-rw-r-- 2 ire_and_curses users 4096 Sep 18 22:08 foo/
$ rm foo/file_to_delete
rm: cannot remove ‘foo/file_to_delete’: Permission denied
$ chmod +x foo/
$ rm foo/file_to_delete
$ tree foo/
foo/
0 directories, 0 files
$
I find this behaviour pretty surprising. For directories, what is the reason that execute is required to make write useful in practice?
|
Without the execute bit, you can't run a stat() on the files in the directory, which means you can't determine the inode information of those files. To remove a file, you must know information which would be returned by stat().
A demonstration of this:
$ ls -ld test
drw------- 2 alienth alienth 4096 Sep 18 23:45 test
$ stat test/file
stat: cannot stat ‘test/file’: Permission denied
$ strace -e newfstatat rm test/file
newfstatat(AT_FDCWD, "test/file", 0x1a3f368, AT_SYMLINK_NOFOLLOW) = -1 EACCES (Permission denied)
newfstatat(AT_FDCWD, "test/file", 0x7fff13d4f4f0, AT_SYMLINK_NOFOLLOW) = -1 EACCES (Permission denied)
rm: cannot remove ‘test/file’: Permission denied
+++ exited with 1 +++
You can also demonstrate this with a simple ls -l. The metadata info of the directory may be readable and writable to your user, but without execute you can't determine the details of the file within the directory.
$ ls -l test
ls: cannot access test/file: Permission denied
total 0
-????????? ? ? ? ? ? file
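The whole sequence from the question can be reproduced as a small script (the denial step is guarded, since root bypasses ordinary permission checks):

```shell
d=$(mktemp -d); mkdir "$d/foo"; touch "$d/foo/file_to_delete"
chmod u=rw,go= "$d/foo"            # write but no execute on the directory
if [ "$(id -u)" -ne 0 ]; then      # root's CAP_DAC_OVERRIDE would bypass this
  rm "$d/foo/file_to_delete" 2>/dev/null || echo "denied: w without x"
fi
chmod u=rwx "$d/foo"               # add execute (search) back
rm "$d/foo/file_to_delete" && echo "removed once x was added"
```

Without x, even path resolution into the directory fails, which is why the stat() calls in the strace above return EACCES before unlink is ever attempted.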
| Why are both write and execute permissions on a directory necessary to be able to delete files? [duplicate] |
1,411,584,047,000 |
CentOS 6.x / OpenVZ
Recently my VPS provider moved my OpenVZ container to a new server. After that move, I've noticed that files/directories for one of my user accounts show an odd owner / group.
For example I'll see stuff like this:
[root@exampleserver ~] ls -l /home/foouser
-rw-rw-r-- 1 65534 65534 370123 Jan 1 2014 ExampleFile.txt
I'd expect to see "foouser" as the owner/group instead of 65534.
Likewise, when I try switching to the user, I get an error:
[root@exampleserver ~] su - foouser
su: warning: cannot change directory to /home/foouser: Permission denied
-bash: /home/foouser/.bash_profile: Permission denied
My guess is that some numeric identifier for the user accounts didn't persist with the move so now my user account isn't associated with old files.
What would cause this and how can I fix it?
|
65534 is the conventional "nobody"/overflow UID & GID value. Your VPS provider made some sort of mistake when they copied over your container. For example, they used rsync but failed to use its --numeric-ids option.
The user IDs inside your container don't exist outside the container and some copy tools, upon seeing UIDs and GIDs that they can't resolve, revert them to defaults. That's how this kind of mistake can happen. A competent virtualization provider shouldn't make that mistake though.
Your options are:
chown/chgrp all the files back to what they should be. However, the information on what the original owners and groups were has been lost, so in some cases it might not be obvious how to reconstruct them (e.g. "should this or that file be writable by the web server user or my own user?").
Complain to the provider and get them to redo the copy, doing it properly this time.
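For the first option: if an older copy of the tree with correct ownership still exists (a previous backup, say), GNU chown/chmod can copy the metadata over file by file. A rough sketch with hypothetical one-file trees standing in for the old and live copies:

```shell
# old/ still has correct ownership; live/ has the mangled 65534 owners.
old=$(mktemp -d); live=$(mktemp -d)
touch "$old/ExampleFile.txt" "$live/ExampleFile.txt"
( cd "$old" && find . -type f ) | while IFS= read -r f; do
  chown --reference="$old/$f" "$live/$f"   # copy owner:group per file
  chmod --reference="$old/$f" "$live/$f"   # and the mode bits, if wanted
done
echo "ownership copied"
```

On the real tree you'd run this as root; files that exist only in the live copy still need manual judgment, as noted above.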
| Why am I seeing a number as owner / group instead a name? |
1,411,584,047,000 |
I use this command to check whether gpg password is valid or not:
gpg -o /dev/null --local-user $KEY_ID -as <(echo 1234)
This is a hacked command to check gpg password that I posted here. I can see that the /dev/null file permission changed from 666 to 644 and sometimes it become corrupted and some scripts are not able to write to /dev/null:
I can fix this by recreating /dev/null using the following command:
rm -f /dev/null; mknod -m 666 /dev/null c 1 3
1st question is, how do I prevent the /dev/null permission change or the file become corrupted when running the command so I don't have to manually recreate the /dev/null.
2nd question (if the 1st question does not have solution): How do I verify in bash whether the /dev/null file is corrupted or not? I want to use this method to recreate /dev/null as the last resort.
Any ideas?
|
From man gpg, looking at the effect of the -o parameter:
-o file
Write output to file. To write to stdout use - as the filename.
Therefore, the command you issue creates an ordinary file named /dev/null with your default permissions.
In other words, issuing this command you override the c in the crw-rw-rw- permissions, i.e. its original definition as a character special device, rendering it totally useless for its original purpose.
In order to achieve trouble-free what you are trying to do, and as specified in the man page, you should definitely use the hyphen (-) as the filename.
Then, as with any other *nix command, feel free to redirect the standard output to /dev/null (>/dev/null).
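As for the second question in the post, a quick way to check whether /dev/null is still the real device (a character device with major number 1, minor number 3) rather than a regular file left behind by the bad -o:

```shell
# A healthy /dev/null is a character device, major 1, minor 3.
# stat's %t/%T print the device numbers in hex.
if [ -c /dev/null ] && [ "$(stat -c '%t:%T' /dev/null)" = "1:3" ]; then
  echo "/dev/null ok"
else
  echo "/dev/null needs recreating"
fi
```

This could guard a script that recreates the node with mknod as a last resort, though with -o - in place it should never trigger.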
| How to prevent /dev/null file's permission change or corrupted? |
1,411,584,047,000 |
I configured a CentOS server to be a SFTP server that receives customer files in a secure way. Then I need to be able to access these files via SMB.
The 'root' of my SFTP is in /var/inbound/
Then under /var/inbound/ I have one directory for each customer (e.g. /var/inbound/customer1/
Then in order to jail users, I have a sub-directory called uploads under each customer directory (e.g. /var/inbound/customer1/uploads/)
I managed to make the permissions work as expected and everything is fine and dandy to support customer access to the SFTP. One important aspect is that I 'jailed' users to their /var/inbound/ directories.
Here is now I created the /var/inbound directory:
sudo mkdir /var/inbound
sudo chown root.root /var/inbound #root must be owner of directory
And here is how I create the sub-directories for each customer:
sudo mkdir -p /var/inbound/[username]/uploads
sudo chown root /var/inbound/[username]
sudo chmod go-w /var/inbound/[username]
sudo chown [username]: /var/inbound/[username]/uploads
sudo chmod 770 /var/inbound/[username]/uploads
NOTE: Both the /var/inbound/[username]/ and
/var/inbound/[username]/uploads/ directories need a special set of
permissions. Perform the following commands, replacing [username] with
the user in question.
Now I'll spare you from the remaining SSH/SFTP configuration. But suffice to say that I can get users to be jailed to their own directories, and that I disabled their SSH/console access using SCPONLY.
Now where things get complicated...
I now need to give SMB access to a specific account (let's call it fileaccess) to the /var/inbound/ directory, which will be accessible from a Windows Server host. I do manage to see the /var/inbound directory as a share from Windows, including its sub-directories. However I cannot see some files, and I have no write access to the files I am meant to have access to either.
$ ls -l /var/inbound
total 0
drwxr-xr-x. 3 root root 20 Jan 5 11:53 testuser
$ ls -l /var/inbound/testuser
total 0
drwxrwxr-x. 2 testuser sftponly 53 Jan 5 12:26 uploads
Now here is the directory I want to access with the fileaccess account:
$ ls -la /var/inbound/testuser/uploads/
total 12
drwxrwx---. 2 testuser sftponly 53 Jan 5 15:12 .
drwxr-xr-x. 3 root root 20 Jan 5 11:53 ..
-rw-r--r--. 1 fileaccess sftponly 30 Jan 5 12:26 test2.txt
-rw-r--r--. 1 testuser sftponly 26 Jan 5 12:25 test3.txt
-rw-rw-r--. 1 dmgmadmin dmgmadmin 14 Jan 5 11:53 test.txt
When I connect via SMB with the fileaccess account, I can only see the test.txt, but I cannot open the file (access denied).
Here is my smb.conf. As you can see I've been trying a series of different options:
[global]
workgroup = <MYDOMAINNAMEGOESHERE>
security = user
passdb backend = tdbsam
[inbound]
comment = Incoming files (as %u)
path = /var/inbound/
valid users = fileaccess
guest ok = No
read only = No
writeable = Yes
browseable = Yes
create mask = 0640
directory mask = 0750
NOTE: While I do have a domain, this CentOS machine is not part of it. It does have an entry on my Windows AD DNS, and is configured to use the DNS server -- but that is the end of it. I want this machine to be isolated. So attempts to connect to this server are made with local CentOS accounts.
I am particularly concerned that this might be a Linux file-system access issue, and that necessary changes might conflict with required SFTP permissions (e.g. SFTP requires the /var/inbound/[username]/ directories to be owned by root).
I wonder if there is a way to enforce in the SMB.conf the access rights for the account in question, so that account has browse/read/right permissions. I tried all sorts of config options in smb.conf (I've been reading the manual for smb.conf here).
|
Seems like I was chasing a zebra all along.
Thanks to the help of users derobert, terdon and others in the /dev/chat channel, we found out that the issue is indeed SELinux. In fact, the CentOS wiki documentation on Samba says the following:
"Now we're going to use the semanage command (part of the SELinux
package) to open up the directory(s) you desire to share with the
network. That's right. Without doing this, you'll start up samba and
get a bunch of blank directories and panic thinking the server deleted
all your data!"
So the command that I needed to perform was:
sudo semanage fcontext -a -t samba_share_t '/var/inbound(/.*)?'
sudo restorecon -R /var/inbound
And boom! Now I can access the files as expected.
| Linux and SMB permissions not working as expected |
1,411,584,047,000 |
On my up-to-date Arch Linux, lsblk works fine without sudo:
$ lsblk -o NAME,FSTYPE
NAME FSTYPE
sda
├─sda1 ext4
├─sda2 ext4
├─sda3 swap
├─sda4
└─sda5 ext4
sr0
$ lsblk --version
lsblk from util-linux 2.26.2
On my Ubuntu 14.04, getting the filesystem types needs sudo:
$ lsblk -o NAME,FSTYPE
NAME FSTYPE
sda
├─sda1
├─sda2
├─sda3
├─sda4
├─sda5
├─sda6
│ └─lvmg-homelvm (dm-0)
└─sda7
sdb
└─sdb1
└─lvmg-homelvm (dm-0)
$ sudo lsblk -o NAME,FSTYPE
NAME FSTYPE
sda
├─sda1 ntfs
├─sda2 ntfs
├─sda3 ext4
├─sda4
├─sda5 btrfs
├─sda6 LVM2_member
│ └─lvmg-homelvm (dm-0) btrfs
└─sda7 swap
sdb
└─sdb1 LVM2_member
└─lvmg-homelvm (dm-0) btrfs
$ apt-cache policy util-linux
util-linux:
Installed: 2.20.1-5.1ubuntu20.4
Candidate: 2.20.1-5.1ubuntu20.4
Why? And which other columns need sudo?
Additional info:
On Arch:
$ ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Jun 19 16:19 /dev/sda
brw-rw---- 1 root disk 8, 1 Jun 19 16:19 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jun 19 16:19 /dev/sda2
brw-rw---- 1 root disk 8, 3 Jun 19 16:19 /dev/sda3
brw-rw---- 1 root disk 8, 4 Jun 19 16:19 /dev/sda4
brw-rw---- 1 root disk 8, 5 Jun 19 16:19 /dev/sda5
$ groups
wheel locate systemd-journal networkmanager fuse muru
(My primary group is muru, not wheel, despite what the order may suggest.)
On Ubuntu:
$ ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Jun 12 17:05 /dev/sda
brw-rw---- 1 root disk 8, 1 Jun 12 17:05 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jun 12 17:05 /dev/sda2
brw-rw---- 1 root disk 8, 3 Jun 12 17:05 /dev/sda3
brw-rw---- 1 root disk 8, 4 Jun 12 17:05 /dev/sda4
brw-rw---- 1 root disk 8, 5 Jun 12 17:05 /dev/sda5
brw-rw---- 1 root disk 8, 6 Jun 12 17:05 /dev/sda6
brw-rw---- 1 root disk 8, 7 Jun 12 17:05 /dev/sda7
brw-rw---- 1 root disk 8, 16 Jun 12 17:05 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jun 12 17:05 /dev/sdb1
$ groups
muru adm cdrom sudo dip plugdev lpadmin sambashare debian-tor libvirtd autopilot
On Arch:
$ stat -c "%A %U %G" `which lsblk`
-rwxr-xr-x root root
On Ubuntu:
$ stat -c "%A %U %G" `which lsblk`
-rwxr-xr-x root root
|
The behavior of lsblk changed in Ubuntu's util-linux package at version 2.25.2-4ubuntu2:
util-linux (2.25.2-4ubuntu2) vivid; urgency=low
Add missing libudev-dev build-dependency. This
makes the "LABEL" information of lsblk available for non-root users
(closes: #776905)
-- Michael Vogt Tue, 03 Feb 2015 09:06:46 +0100
@muru did additional testing to determine that FSTYPE, UUID, and LABEL are the only fields which need sudo in util-linux version 2.20.1-5.
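A rough version check along those lines (the 2.25.2 threshold is inferred from the changelog above, and lsblk may be absent entirely, so both cases are handled):

```shell
# Compare the installed util-linux version against the changelog threshold.
ver=$(lsblk --version 2>/dev/null | awk '{print $NF}')
# sort -V -C exits 0 when its input is already in version order,
# i.e. when 2.25.2 <= $ver here:
if [ -n "$ver" ] && printf '2.25.2\n%s\n' "$ver" | sort -V -C; then
  result="no sudo needed for FSTYPE/UUID/LABEL"
else
  result="sudo needed for FSTYPE/UUID/LABEL (old or missing lsblk)"
fi
echo "$result"
```

This matches the observations in the question: 2.26.2 on Arch passes the check, 2.20.1 on Ubuntu 14.04 does not.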
| Why and when does `lsblk` require `sudo`? |
1,411,584,047,000 |
Whenever I create new directories in my home (or its subdirectories) they do not have write permission, even though umask is set correctly. Files I make DO have write permission.
[mmanary@seqap33 ~]$ umask
0002
[mmanary@seqap33 ~]$ mkdir testDir
[mmanary@seqap33 ~]$ touch testFile
[mmanary@seqap33 ~]$ ls -l
dr-xr-x--- 2 mmanary mmanary 0 Apr 15 10:25 testDir
-rw-rw-r-- 1 mmanary mmanary 0 Apr 15 10:26 testFile
If I switch to a shared group storage directory, then new directories DO have write permission. I can switch them with chmod easily, BUT when using tar, the new directory cannot be written in to so the tar fails with "Permission Denied". Any help is appreciated.
Edit: I have read other suggested questions, but not seem to apply directly because they involve more complicated cases (other users involved). In case this helps:
[mmanary@seqap33 ~]$ getfacl .
# file: .
# owner: mmanary
# group: mmanary
user::rwx
group::r-x
other::---
Edit2: On advice from comments, my filesystem is NFS
|
Talked to the infrastructure people, and the answer is that there are extended ACLs in place that act differently based on location, and that they were erroneously set.
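For contrast, this is what the question expected and what happens when no default ACLs are in play: mkdir simply honors the umask.

```shell
tmp=$(mktemp -d)
( cd "$tmp" && umask 0002 && mkdir testDir )
stat -c '%a' "$tmp/testDir"   # 0777 & ~0002 = 0775 when no default ACL interferes
```

When a directory on the NFS share carries a default ACL, that ACL's mask overrides the umask for new entries, which is how testDir could come out dr-xr-x--- despite umask 0002. getfacl on the parent directory (with -d for defaults) is the place to look.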
| mkdir permissions do not correspond to umask (change depending on location) |
1,411,584,047,000 |
I installed Noip and ran the command which created the config file
/usr/local/bin/noip2 -C
and then I ran the run command
/usr/local/bin/noip2
and it returned
Can't locate configuration file /usr/local/etc/no-ip2.conf. (Try -c). Ending!
I checked the location of the file and it was definitely there.
Any idea why it could not locate the file?
Output of strace:
execve("/usr/local/bin/noip2", ["/usr/local/bin/noip2"], [/* 15 vars */]) = 0
brk(0) = 0x1375000
uname({sys="Linux", node="raspberrypi", ...}) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6f33000
access("/etc/ld.so.preload", R_OK) = 0
open("/etc/ld.so.preload", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=44, ...}) = 0
mmap2(NULL, 44, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0) = 0xb6f32000
close(3) = 0
open("/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\270\4\0\0004\0\0\0"..., 512) = 512
lseek(3, 7276, SEEK_SET) = 7276
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1080) = 1080
lseek(3, 7001, SEEK_SET) = 7001
read(3, "A.\0\0\0aeabi\0\1$\0\0\0\0056\0\6\6\10\1\t\1\n\2\22\4\24\1\25"..., 47) = 47
fstat64(3, {st_mode=S_IFREG|0755, st_size=10170, ...}) = 0
mmap2(NULL, 39740, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6f07000
mprotect(0xb6f09000, 28672, PROT_NONE) = 0
mmap2(0xb6f10000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xb6f10000
close(3) = 0
munmap(0xb6f32000, 44) = 0
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=44950, ...}) = 0
mmap2(NULL, 44950, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb6efc000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/arm-linux-gnueabihf/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\214y\1\0004\0\0\0"..., 512) = 512
lseek(3, 1198880, SEEK_SET) = 1198880
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1360) = 1360
lseek(3, 1198444, SEEK_SET) = 1198444
read(3, "A.\0\0\0aeabi\0\1$\0\0\0\0056\0\6\6\10\1\t\1\n\2\22\4\24\1\25"..., 47) = 47
fstat64(3, {st_mode=S_IFREG|0755, st_size=1200240, ...}) = 0
mmap2(NULL, 1242408, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6dcc000
mprotect(0xb6eef000, 28672, PROT_NONE) = 0
mmap2(0xb6ef6000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x122) = 0xb6ef6000
mmap2(0xb6ef9000, 9512, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb6ef9000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6f32000
set_tls(0xb6f324c0, 0xb6f32b98, 0xb6f37048, 0xb6f324c0, 0xb6f37048) = 0
mprotect(0xb6ef6000, 8192, PROT_READ) = 0
mprotect(0xb6f36000, 4096, PROT_READ) = 0
munmap(0xb6efc000, 44950) = 0
rt_sigaction(SIGHUP, {SIG_IGN, [], 0x4000000 /* SA_??? */}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [], 0x4000000 /* SA_??? */}, NULL, 8) = 0
rt_sigaction(SIGUSR1, {SIG_IGN, [], 0x4000000 /* SA_??? */}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {SIG_IGN, [], 0x4000000 /* SA_??? */}, NULL, 8) = 0
rt_sigaction(SIGALRM, {0xa568, [], 0x4000000 /* SA_??? */}, NULL, 8) = 0
getcwd("/home/pi", 4096) = 9
lstat64("/home/pi/noip2", 0xbef6f670) = -1 ENOENT (No such file or directory)
open("/usr/local/etc/no-ip2.conf", O_RDWR) = -1 EACCES (Permission denied)
open("/usr/local/etc/no-ip2.conf", O_RDONLY) = -1 EACCES (Permission denied)
write(2, "Can't locate configuration file "..., 79Can't locate configuration file /usr/local/etc/no-ip2.conf. (Try -c). Ending!
) = 79
exit_group(-1) = ?
Thanks
|
open("/usr/local/etc/no-ip2.conf", O_RDWR) = -1 EACCES (Permission denied)
open("/usr/local/etc/no-ip2.conf", O_RDONLY) = -1 EACCES (Permission denied)
noip2 tries to open its configuration file for reading and writing, and when this fails it tries again just to read, which also fails. The failure is due to a lack of permission; the error message is unhelpfully generic.
Check the permissions of the configuration file and of the directories leading to it (well, / and /usr are surely ok, or your system would be broken in more visible ways).
ls -ld /usr/local /usr/local/etc /usr/local/etc/no-ip2.conf
The directories must have at least the x permission bit for the user running the command — probably for all users. The file itself must have at least the r permission bit. The directories should have the r permission bit (strictly speaking, it isn't required, but it's the normal thing; see Do the parent directory's permissions matter when accessing a subdirectory? for details).
You probably want chmod a+rX /usr/local/etc /usr/local/etc/no-ip2.conf, unless the configuration file is supposed to be confidential (e.g. because it contains a password).
If one of the entries has + after the r/w/x permission bits, then there is a security framework such as SELinux which may be imposing additional restrictions.
| Noip “Can't locate configuration file”, but the file is there |
1,411,584,047,000 |
I have been following this guide on installing debian-kit on my Sony Xperia Tablet Z and the installation goes fine until I try to apt-get install andromize which fails with the error groupadd: failure while writing changes to /etc/group. I also get the same message if I try to add a user with adduser.
I have partitioned my external SD card to 10gb FAT32 and 20GB ext2 and used the mk-debian -i /dev/block/vold/179:37 which is the correct partition.
If I look in /etc/ I can see that there are additional files called group- and passwd- as well as group and passwd but I have no idea if this is relevant.
I'm logged in as root and the partition is loaded readwrite because all other apt-get installs work, it just fails on any that modify the users / groups.
ls -l /etc/group returns the following...
-rw-r--r--. 1 root root 476 Jul 17 19:13 /etc/group
|
The solution in the end (inspired by @steeldriver) was to download this app from the play store because under Android KitKat you need to change the SELinux mode to permissive.
A combination of that and apt-get install selinux-policy-default fixed the permission problems and LXDE now works great on my Sony Xperia Tablet Z
| groupadd failure while writing changes to /etc/group |
1,411,584,047,000 |
I was having some permission problems and used the following command on directory Media:
chmod -R ugo+r Media
It didn't help so then I did:
chmod -R 775 Media
Now I get this error when I try to cd:
jeff@nacho:/DataVolume/shares$ cd Media
-bash: cd: Media: Permission denied
Even though when I do a directory listing everything looks fine:
drwxrwxr-x 8 root share 65536 Oct 15 22:38 Media
drwxrwxr-x 11 root share 65536 Oct 15 23:52 Public
Note: I can cd to Public with no problem. If I su then I can access the directory.
What am I missing?
Extra info:
jeff@nacho:/var/www/Admin/webapp/htdocs$ mount |grep -vE '^none'
/dev/md0 on / type ext3 (rw,noatime)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755,size=50M)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,size=50M)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
tmpfs on /tmp type tmpfs (rw,size=50M)
/var/log on /var/log.hdd type none (rw,bind)
ramlog-tmpfs on /var/log type tmpfs (rw,size=20M)
/dev/sda4 on /DataVolume type ext4 (rw,noatime,nodelalloc)
/DataVolume/cache on /CacheVolume type none (rw,bind)
/DataVolume/shares on /shares type none (rw,bind)
/DataVolume/shares on /nfs type none (rw,bind)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
|
I "fixed" my problem by adding myself to the group share:
su
usermod -aG share jeff
/etc/init.d/ssh restart
Though that doesn't help me understand why I couldn't get a directory listing to begin with (even after chmod -R 777), but the point is moot now.
| Can't get directory listing of folder I have permissions to |
1,411,584,047,000 |
[vagrant@localhost vagrant]$ sudo chmod 755 luser-demo01.sh
[vagrant@localhost vagrant]$ ls -l
total 5
-rwxrwxrwx 1 vagrant vagrant 101 May 30 19:44 luser-demo01.sh
-rwxrwxrwx 1 vagrant vagrant 3464 May 30 19:16 Vagrantfile
[vagrant@localhost vagrant]$ chmod 755 luser-demo01.sh
[vagrant@localhost vagrant]$ ls -l
total 5
-rwxrwxrwx 1 vagrant vagrant 101 May 30 19:44 luser-demo01.sh
-rwxrwxrwx 1 vagrant vagrant 3464 May 30 19:16 Vagrantfile
[vagrant@localhost vagrant]$
I am trying to change file permissions for the above .sh file from 777 to 755 using chmod, but it's not working. I have tried using other permissions like 600 and 666 with chmod but the file permissions never change. I'm using CentOS 7.
My commands are the following: sudo chmod 755 <filename> or chmod 755 <filename>.
I was recommended to use chown to gain ownership of the file. However, after trying that, I see that even that is not working as expected. I am attaching an image below for reference. I am probably using the commands wrong.
[vagrant@localhost vagrant]$ chown -c root luser-demo01.sh
changed ownership of 'luser-demo01.sh' from vagrant to root
[vagrant@localhost vagrant]$ ls -l
total 5
-rwxrwxrwx 1 vagrant vagrant 101 May 30 19:44 luser-demo01.sh
-rwxrwxrwx 1 vagrant vagrant 3464 May 30 19:16 Vagrantfile
[vagrant@localhost vagrant]$
The output of the findmnt command is as follows:
[vagrant@localhost vagrant]$ findmnt -T /vagrant
TARGET SOURCE FSTYPE OPTIONS
/vagrant /vagrant vboxsf rw,nodev,relatime
/vagrant vagrant vboxsf rw,nodev,relatime
|
The most common cause for this kind of behavior (note: no error messages when trying to change permissions or ownership) is that the files are located in a filesystem that does not support Unix-style file ownerships/permissions, like a VFAT/FAT32/ExFAT filesystem, or a SMB/CIFS share from a system that won't support the Unix extensions of the SMB protocol.
Such filesystems typically allow setting a default owner/group for all files and directories within, and perhaps one set of permissions for all files and another for all directories. These are usually set using mount options at filesystem mount time.
There are no error messages because the filesystem driver "knows" that changing the per-file ownerships and permissions is fundamentally not possible, so the driver simply does nothing and reports a successful operation whenever asked to change owners/permissions.
To identify the filesystem type used, run findmnt --target . in the directory that contains the files whose permissions you would like to change. Please edit your question to add the output of that command in it as text, not as a picture of text.
The filesystem type appears to be vboxsf, indicating this is a VirtualBox share from the host system. If VirtualBox is running as a regular user in the host system, it will have no privileges to change file ownerships on the host side. And if the host filesystem happens to be VFAT/FAT32/ExFAT, there will be no way to store the permissions for individual files, so the driver cannot rely on the host filesystem capabilities at all. So it will have to assume it must provide all the ownership/permissions emulation by itself, just like a VFAT filesystem driver has to do.
See man mount.vboxsf for mount options to use for setting the permissions.
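Since per-file changes are a no-op on vboxsf, the practical fix is to set the emulated owner and modes at mount time. A sketch using Vagrant's synced-folder options (the names and modes here are assumptions; see man mount.vboxsf for the full option list):

```ruby
# Vagrantfile (excerpt): pin the emulated ownership and permissions of
# the shared folder instead of running chmod/chown inside the guest.
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant",
    owner: "vagrant", group: "vagrant",
    mount_options: ["dmode=775", "fmode=755"]
end
```

Mounting by hand inside the guest would pass the same options, e.g. mount -t vboxsf -o uid=1000,gid=1000,dmode=775,fmode=755 vagrant /vagrant.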
| Changing file permissions from 777 to 755 or changing the owner doesn't do anything |
1,411,584,047,000 |
On a multi-user system, what protects against any user accessing any other users files via root? As context, the question is based on my understanding as follows:
There are two commands related to root privileges, sudo and su. With sudo, you don't become another user (including root). sudo has a pre-defined list of approved commands that it executes on your behalf. Since you are not becoming root or another user, you just authenticate yourself with your own password.
With su, you actually become root or another user. If you want to become user Bob, you need Bob's password. To become root, you need the root password (which would be defined on a multi-user system).
ref's: howtogeek.com:
su switches you to the root user account and requires the root account’s password. sudo runs a single command with root privileges –
it doesn’t switch to the root user.
and
If you execute the su bob command, you’ll be prompted to enter Bob’s password and the shell will switch to Bob’s user account; Similar description at computerhope.com
tecmint.com:
‘sudo‘ is a root binary setuid, which executes root commands on behalf
of authorized users
If you become root, you have access to everything. Anyone not authorized to access another user's account would not be given the root password and would not have sudo definitions allowing it.
This all makes sense until you look at something like this link, which is a tutorial for using sudo -V and then sudo su - to become root using only your own password.
If any user can become root without the root password, what mechanism protects user files from unauthorized access?
|
The major difference between sudo and su is the mechanism used to authenticate. With su the user must know the root password (which should be a closely guarded secret), while sudo is usually configured to ask for the user's own password. In order to stop all users causing mayhem, the privileges granted through the sudo command can, fortunately, be configured using the /etc/sudoers file.
Both commands run a command as another user, quite often root.
sudo su - works in the example you gave because the user (or a group where the user is a member) is configured in the /etc/sudoers file. That is, they are allowed to use sudo. Armed with this, they use the sudo to temporarily gain root privileges (which is default when no username is provided) and as root start another shell (su -). They now have root access without knowing root's password.
Conversely, if you don't allow the user to use sudo then they won't be able to sudo su -.
Distros generally have a group (often called wheel) whose members are allowed to use sudo to run all commands. Removing them from this group will mean that they cannot use sudo at all by default.
The line in /etc/sudoers that does this is:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
While removing users from this group would make your system more secure, it would also result in you (or other system administrators) being required to carry out more administrative tasks on the system on behalf of your users.
A more sensible compromise would configure sudo to give you more fine grained control of who is allowed to use sudo and who isn't, along with which commands they are allowed to use (instead of the default of all commands). For example,
## Allows members of the users group to mount and unmount the
## cdrom as root
%users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
(only useful with the previous %wheel line commented out, or no users in the wheel group).
Presumably, distros don't come with this finer grained configuration as standard as it's impossible to forecast what the admin's requirements are for his/her users and system.
Bottom line is - learn the details of sudo and you can stop sudo su - while allowing other commands that don't give the user root shell access or access to commands that can change other users' files. You should give serious consideration to who you allow to use sudo and to what level.
WARNING: Always use the visudo command to edit the sudoers file as it checks your edits for you and tries to save you from the embarrassing situation where a misconfigured file (due to a syntax error) stops you from using sudo to edit any errors. This is especially true on Debian/Ubuntu and variants where the root account is disabled by default.
| What mechanism prevents any user from accessing any other user's files via root? |
1,411,584,047,000 |
So for convenience, I store all my data on my Windows partition so that I can access my data easily from both Linux and Windows. However, I tried compiling a C++ program with g++, and found out that I cannot run the program with ./program_filename, as it tells me
bash: program_filename: Permission denied
Doing
cp program_filename ~/program_filename
and running it from my home directory works just fine, however.
So I tried chmod +rwx program_filename, but ls -l shows that the permissions are still set as -rw-------. for all files in the directory. Nothing changes when I do this as root, either.
Is there a simple fix for this?
(In case it's useful, I am running Fedora 16 x64)
|
Make sure that your mount options allow the execute permission bit.
There are mount options one can use to limit the permissions of files within the mounted filesystem: the generic noexec option prevents all files from being executable, while the FAT-specific option showexec grants the permission only to files with the extensions .exe, .com and .bat. Note also that noexec is implied by user and users.
If you use user or users you can still get the execute permission bit working by mounting with explicitly specified exec mount option after the user or users option.
See mount manpage for details.
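For instance, if the Windows partition is mounted via /etc/fstab, an explicit exec placed after user re-enables the execute bit (the device, mount point and filesystem type below are assumptions; adapt them to your setup):

```
# /etc/fstab (excerpt) -- order matters: exec must come after user
/dev/sda3  /mnt/windows  ntfs-3g  user,exec,uid=1000  0  0
```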
| Why can't I run programs on another partition in Linux? |
1,411,584,047,000 |
If I type:
ls -l file.txt
I see that the rights for that file are equivalent to "456":
4 = owner (r--)
5 = group (r-x)
6 = others (rw-)
Which are the rights for root in this case? Does it have 777?
Could the rights be changed so that the root will have less permissions than the owner?
|
I would check this page out. It talks about file permissions in depth.
But to answer your question directly, no:
The super user "root" has the ability to access any file on the system.
In your example for instance if the file is owned by say bob and the group owner was also bob, then you would see something like this:
-r--r-xrw-. 1 bob bob 8 Jan 29 18:39 test.file
The third permission group, (rw-), would also apply to root, since root falls under "others". If you tried to edit that file as root you would see that you have no problem doing so.
But to test your theory even more, if the file was owned by root:
-r--r-xrw-. 1 root root 8 Jan 29 18:40 test.file
And you again went to edit the file, you would see that you still have no problem editing it.
Finally if you did the extreme:
chmod 000 test.file
ls -lh test.file
----------. 1 root root 8 Jan 29 18:41 test.file
And you went again to edit the file you would see (at least in vi/vim) "test.file" [readonly]. But you can still edit the file and force save it with :wq!.
Testing @Stéphane Chazelas claim with a shell script file:
#!/bin/sh
echo "I'm alive! Thanks root!"
[root ~]# ls -lh test.sh
----------. 1 atgadmin atgadmin 31 Jan 30 10:59 test.sh
[root ~]# ./test.sh
-bash: ./test.sh: Permission denied
[root ~]# sh test.sh
I'm alive! Thanks root!
@Shadur already said it so I'm just going to quote instead of restating it:
Note: The execution bit is checked for existence, not whether it's applicable to root.
| What are the root permissions for a file? |
1,370,359,393,000 |
I have a directory shared through Samba. I want users to be able to create/modify/delete files but not create/erase directories. I haven't found a way to do it. Maybe with SELinux? But how?
|
The elegant way would be using richacls. But that is not an official part of the kernel yet and thus may be difficult to use for you.
An easy workaround would be to use the samba parameters directory mask and force directory security mode to render newly created directories useless (inaccessible) to the users so that they learn not to create directories.
The funny (and portable!) way would be to create so many (invisible) subdirectories that the file system's subdirectory limit is reached. If a new subdirectory is needed the admin would simply rename one of them.
| Prevent creating directories but allow creating files |
1,370,359,393,000 |
I have a Python script that needs to open a file in a directory that I created: /var/www/html/myDIR/myFILE.htm
The directory needed to be created as root using sudo mkdir /var/www/html/myDIR as required by the parent folder.
As a result, my Python script cannot touch /var/www/html/myDIR/myFILE.htm.
What minimum permissions are required to allow scripts (that are not running as root) access to this file (or any file in this position)?
|
When creating the directory, set its group ownership to the same group as the user who will be running the script. Include the group permission g+wx. The script will then be able to create and edit files in that directory.
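A minimal sketch of that setup, run as root; the www-data group is an assumption, so substitute whatever group the user running your Python script belongs to:

```shell
# Create the directory as root, then give the script's group access to it.
DIR=/var/www/html/myDIR     # path from the question
GROUP=www-data              # assumption: group of the user running the script

mkdir -p "$DIR"
chgrp "$GROUP" "$DIR"
chmod 2775 "$DIR"           # g+rwx; the setgid bit keeps new files in the group
```

With g+wx on the directory, the script can create, modify and delete myFILE.htm there without running as root.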
| How do I allow a script access to a file? |
1,370,359,393,000 |
If I set a passphrase on my private key like so:
openssl rsa -des -in insecure.key -out secure.key
and I remove the passphrase like so:
openssl rsa -in secure.key -out insecure.key
then my private key (insecure.key) ends up with a file mode of 644.
How can I tell openssl to create insecure.key with a file mode of 600 (or anything)?
I know that I can simply chmod the file afterwards, but what if I lose connection? Then there's a private key on the filesystem that anybody could read.
|
You can try to set umask before converting it
umask 077; openssl rsa -in secure.key -out insecure.key
Edit: To not affect other files in the current shell environment by the umask setting execute it in a subshell:
( umask 077; openssl rsa -in secure.key -out insecure.key )
| Remove passphrase from private key and set specific file mode |
1,370,359,393,000 |
if I have a folder that's restricted with say 600, thus no access for group or everyone, but the folder contains files with 777, would this be safe?
Are there any work-arounds to access the 777 file as group or everyone, despite it residing inside a 600 folder?
|
You can't access/enter a directory (or create files in it) with permissions set to 600 as a regular user, because the execute (traverse) bit is missing. You are also unable to access the files inside it at all, and listing only half-works (the names can be read, but nothing about them), with said folder permissions.
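A quick demonstration as a regular (non-root) user, on a scratch directory standing in for your folder:

```shell
d=$(mktemp -d)
touch "$d/f" && chmod 777 "$d/f"

chmod 600 "$d"                       # rw- for owner: the x (traverse) bit is gone
# for a non-root user both of these now fail, despite the file itself being 777:
cat "$d/f"  2>/dev/null || echo "file unreachable"
( cd "$d" ) 2>/dev/null || echo "cannot enter"
ls "$d"     2>/dev/null || true      # listing names alone still sort-of works (r is set)

chmod 700 "$d"                       # restore so the directory can be cleaned up
```

So no: there is no workaround for group/others, and even the owner is locked out of the files until the x bit is restored.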
| Permissions on folders vs files? |
1,370,359,393,000 |
I am logged into a CentOS 7 server as root. I created a folder /somefolder. I want someusername to be able to write to that folder via scp from a remote computer. What command should I run so that someusername is able to type in scp /some/directory/in/remotepc someusername@centos7server:/somefolder/ and successfully transfer the file?
I can guess something like chmod -R u+rw /somefolder, but that is just a guess. And how would I specify which user?
|
While Anthon's answer is technically correct, I'm writing this one to explain where octal permissions come from, and how to calculate them. Octal permissions are one of the most important concepts in the *nix world.
Why This Concept is Important
Since the birth of Unix circa 1969-1974 on a discarded DEC PDP-7 (see photo and history) and Linus Torvalds' creation of Linux circa 1994 as a Unix-like clone, file permissions have always existed at a granular level.
Granular file permissions means that if need be, a user can grant permissions starting at the file level, and work their way up the ladder to the directories, then to the directories' parents, all the way to the root.
Windows on the other hand, did not have granular permissions until the release of Windows 2000, and even now Windows Permissions are very tough to manage without the use of the GUI Window, or an add on Active Directory Server to achieve the behavior of a *nix system.
How Octal Permissions Work
All *nix file permissions work on 2 concepts:
The User class - a.k.a. UGO (User, Group, Others)
The Mode class - a.k.a. RWX (Read, Write eXecute)
As Jared Heeschen states in his article:
Now we look at the other way chmod can be used - with numbers. This is
the more commonly-used format, but also the least user-friendly.
Since a computer works in binary, the file permissions also work in binary. If we look at a permission string as:
rwx rwx rwx rwx
========================
111 110 101 100
when converted to base 10, we get:
rwx rwx rwx rwx
========================
111 110 101 100
 7   6   5   4
Thanks, Jared for the Math
The Final Step
Having converted our binary representations to decimal numbers, we can now combine the permission for all three parts of the user class:
ls -al:
U G O
========
-rwxrwxrwx owner group file-count date filename
Mode: 0777
U G O
========
-rw-r--r-- owner group file-count date filename
Mode: 0644
As a shortcut, we can use this Handy Permissions Calculator and Decoder. Once these octal numbers are understood, a user can use the chmod command and use the octal sequences to quickly change modes. As always, for more, type man chmod
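The correspondence is easy to verify on any file by setting an octal mode with chmod and reading it back with stat (GNU stat shown):

```shell
f=$(mktemp)
chmod 644 "$f"                  # 6 = 110 = rw-, 4 = 100 = r--
mode=$(stat -c '%a %A' "$f")    # octal and symbolic forms side by side
echo "$mode"                    # prints: 644 -rw-r--r--
```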
References
How Linux file permissions work
Using chmod - octal mode
| assigning read/write privileges for a folder to a user in CentOS 7 |
1,370,359,393,000 |
I have a script that runs regularly via cron, that creates a tar.gz file for the purpose of backing up a directory.
For reasons beyond my control, the only user who can execute the script via cron is a root user. So the resulting tar file can not be moved or deleted by any other user.
So, as part of the script, I want to execute a chown and chmod on the tar file so that other users can manipulate it.
But is it good enough to just change permissions on the tar file, or will the root user permissions also be saved to the files inside the tar? When a user unpacks the tar file, will they be able to act on those files as if they created the files themselves?
|
If the user extracting is an "ordinary" user, the files will be owned by that user (by default).
From the manual page of tar
--same-owner
try extracting files with the same ownership as exists in the archive (default for superuser)
--no-same-owner
extract files as yourself (default for ordinary users)
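So yes, changing the permissions of the tarball itself is enough. A sketch of how the end of such a cron script could look (the paths and the backupuser name are assumptions), demonstrated here against a scratch directory:

```shell
workdir=$(mktemp -d)                      # stands in for the real backup location
mkdir -p "$workdir/site" && echo hi > "$workdir/site/index.html"

archive="$workdir/site.tar.gz"
tar -C "$workdir" -czf "$archive" site    # root's cron job creates the tarball

# hand the archive itself over; ownership recorded inside the tar is unaffected:
#   chown backupuser:backupuser "$archive"   # hypothetical user
chmod 664 "$archive"                      # others can read it; group can move/delete it
```

When an ordinary user later runs tar -xzf on it, the extracted files are owned by that user (--no-same-owner is the default for non-root), regardless of the root ownership stored in the archive.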
| If I change permissions on a tar file, will that apply to the files inside it? |
1,370,359,393,000 |
I'm setting up a debian box with shared webhhosts.
These users don't have ssh permissions, just ftp.
The users are allowed to use PHP and I setup suphp for that so the php processes runs under their own user account, etc.
I'm a little bit worried about the security of the system files, especially the /etc folder. I notice that most files in this directory have permissions like:
drwxr-xr-x 2 root root 4096 Mar 4 20:00 pam.d
-rw-r--r-- 1 root root 1358 Mar 5 00:48 passwd
-rw------- 1 root root 1358 Mar 5 00:48 passwd-
drwxr-xr-x 2 root root 4096 Feb 18 14:22 pear
drwxr-xr-x 4 root root 4096 Apr 29 2010 perl
drwxr-xr-x 6 root root 4096 Feb 18 14:22 php5
drwxr-xr-x 2 root root 4096 Mar 4 17:42 phpmyadmin
Are the read-world permissions which debian standard gives the files in /etc really needed? What's the best mask I can give those files? Are there any files in /etc that should be world readable?
|
The default permissions are fine, and needed. If you e.g. didn't leave passwd world readable, a lot of user-related functionality would stop working. Files such as /etc/shadow shouldn't be (and aren't) world readable.
Trust the OS to get this right, unless you know very well that the OS is wrong.
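The split is easy to see on a stock system: the world-readable account list lives in /etc/passwd, while the password hashes are kept out of reach in /etc/shadow:

```shell
ls -l /etc/passwd /etc/shadow
# typical output (exact modes vary slightly by distro):
#   -rw-r--r-- 1 root root   ... /etc/passwd
#   -rw-r----- 1 root shadow ... /etc/shadow
```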
| debian security /etc permissions |
1,370,359,393,000 |
This is my user
$ id
uid=1000(pzk) gid=1000(pzk) groups=1000(pzk)
This is my directory structure
$ ls -tlrh
total 12K
d-w--w--w- 2 root root 4.0K Apr 13 10:53 write-for-everyone
dr--r--r-- 2 root root 4.0K Apr 13 10:53 read-for-everyone
d--x--x--x 2 root root 4.0K Apr 13 10:53 execute-for-everyone
From given permissions for write-for-everyone, I should be able to create a file inside write-for-everyone. But, I am NOT.
$ touch write-for-everyone/x
touch: cannot touch 'write-for-everyone/x': Permission denied
Please help me in figuring this out.
|
The w bit on a directory controls making changes to the list of filenames in the directory, so creating, renaming and removing files. But any of those operations also involves access the files themselves within the directory, and for that, the x permission is needed. A system call involved would be something like open("dir/file1", O_WRONLY | O_CREAT).
The w does not give any access without the x bit.
On the other hand, reading the list of files in the directory works with only the r bit, as that only requires accessing the directory itself, not the files within. A system call involved would be something like open("dir", O_RDONLY).
In a sense, the x bit on directory dir controls access past the slash in a path like dir/somefile.
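The difference is easy to reproduce (run as a regular user; root bypasses these checks):

```shell
d=$(mktemp -d)

chmod 300 "$d"                 # -wx------ : w plus traverse
touch "$d/created"             # succeeds: x reaches the dir, w adds the name
ls "$d" 2>/dev/null || echo "listing denied (no r)"

# with w alone the same touch is refused for non-root users:
#   chmod 200 "$d"; touch "$d/other"   ->  Permission denied

chmod 700 "$d"                 # restore for cleanup
```

So to make your write-for-everyone directory usable, it needs at least -wx (chmod 0333 for everyone), not w alone.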
| touch command not able to create file in write-permitted directory |
1,370,359,393,000 |
The freedesktop organization defines the standard for .desktop files. Unfortunately it defines not the permissions of the file (see freedesktop mailinglist) and software is distributed with
a) executable .desktop files
b) non executable .desktop files
c) mixed a) and b) in one software package.
This is not very satisfying for Linux distributors, who aim to provide a consistent system. I want to use the broad audience of sx, to find out
what advantage has a .desktop file without execution bit? Is there any reason for not having all .desktop files executable if the filesystem alows it?
Are there known security problems? Are there programs which have difficulties with executable .desktop files?
|
One obvious reason a .desktop file does not necessarily have the executable bit set is that these files were not intended to be executable in the first place. A .desktop file contains metadata that tells the desktop environment how to associate programs with file types, but it was never designed to be executed itself.
However, as a .desktop file indirectly tells the graphical environment what to execute, it has an indirect capacity to launch whatever program is defined in it, opening the door to exploits. To prevent malicious .desktop files from launching hostile or unwanted programs, KDE and GNOME developers introduced a custom hack that somewhat deviates from the intended purpose of the Unix execute permission in order to add a security layer. With this new layer, only .desktop files with the executable bit set are taken into account by the desktop environment.
Just turning a non-executable file like a .desktop one into an executable one would be a questionable practice, because it introduces a risk. Non-binary executable files with no shebang are executed by a shell (be it bash or sh or whatever). Asking the shell to execute a file which is not a shell script has unpredictable results.
To avoid that issue, a shebang needs to be present in the .desktop files and should point to the right command designed to handle them, xdg-open, like for example Thunderbird does here:
#!/usr/bin/env xdg-open
[Desktop Entry]
Version=1.0
Name=Thunderbird
GenericName=Email
Comment=Send and Receive Email
...
In this case, executing the .desktop file will do whatever xdg-open (and your Desktop Environment) believe is the right thing to do, possibly just opening the file with a browser or a text editor which might not be what you expect.
| What is the advantage of .desktop files without executable bit set? |
1,370,359,393,000 |
After adding new ssh key to .ssh/authorized_hosts I can no longer ssh to the machine without entering password.
What is even more funny is that the .ssh directory is suddenly inaccessible when I'm logged in via ssh (no direct console access):
pi@prodpi ~ $ ls -la
drw------- 2 pi pi 4096 Mar 13 2015 .ssh
pi@prodpi ~ $ cd .ssh/
-bash: cd: .ssh/: Permission denied
pi@prodpi ~ $ ls .ssh/
ls: cannot access .ssh/authorized_keys: Permission denied
ls: cannot access .ssh/known_hosts: Permission denied
authorized_keys known_hosts
pi@prodpi ~ $ sudo ls .ssh/
authorized_keys known_hosts
The user is pi. What, if not directory permissions, could prevent me from accessing the folder as its owner and potentially break ssh login?
|
To enter a directory, you need the execute permission set on it.
This should do it:
chmod u+x .ssh/
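For reference, the conventional OpenSSH modes (which sshd's StrictModes option checks before accepting key logins), demonstrated here on a scratch copy rather than the live ~/.ssh:

```shell
home=$(mktemp -d)                         # stand-in for the real home directory
mkdir "$home/.ssh"
touch "$home/.ssh/authorized_keys" "$home/.ssh/known_hosts"

chmod 700 "$home/.ssh"                    # drwx------ : owner only, with traverse
chmod 600 "$home/.ssh/authorized_keys" "$home/.ssh/known_hosts"
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```

On the real system that would be chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys ~/.ssh/known_hosts.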
| Cannot cd to .ssh |
1,370,359,393,000 |
I was trying to experiment with users, groups, and permissions. The results can be seen below:
vagrant@cats:/$ ls -l | grep home
drwxr-xr-x 5 root admin 4096 Sep 28 05:49 home
vagrant@cats:/$ cat /etc/group | grep "^admin"
admin:x:1002:vagrant
vagrant@cats:/$ cd home
vagrant@cats:/home$ pwd
/home
vagrant@cats:/home$ cd ..
vagrant@cats:/$ sudo chmod 770 home
vagrant@cats:/$ ls -l | grep home
drwxrwx--- 5 root admin 4096 Sep 28 05:49 home
vagrant@cats:/$ cd home
-bash: cd: home: Permission denied
vagrant@cats:/$ ?
I don't understand why I can't get in. The user vagrant is in the group admin, the group admin owns the directory home and only the owner and or group members can read, write or execute files in home. But for some reason I'm locked out. What am I missing here?
|
If you have done changes to your user (adding or changing groups etcetera), you need to log out and then in again for them to take effect. Or you can change to your own user in a subshell (su vagrant) and try again.
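You can see the stale group list directly; the session keeps whatever groups were active at login (the admin group and vagrant user below are the names from the question):

```shell
id -nG                    # groups of the current session (fixed at login)
id -nG "$(whoami)"        # groups as recorded in the account database

# pick up the new membership without a full logout:
#   newgrp admin          # subshell with admin as the active group
#   su - vagrant          # fresh login shell re-reads /etc/group
```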
| What part of chmod 770 am I misunderstanding? [duplicate] |
1,370,359,393,000 |
When I want to grant access to another user to my file, I use chmod 777 file, but if I want to be sure I'm granting permission just for that user, how can I do it?
-- update
The file is owned by "root", so it's mine if I access it with sudo, I suppose (or maybe I'm confused.. please correct me).
I want to share a folder called /Data in the root. The other user I want to share it is the root of an embedded system, which I'm accessing with telnet and NFS.
The files inside /Data are generated by me, and every time I generate them, I have to use the command chmod 777 /Data so I can access them from the embedded system.
I'm using Ubuntu on my computer, and a locally-compiled Linux on the embedded system.
|
You need to find a group that only you and that user is part of, and give correct permission to the group, not the world.
Could be easier with access control lists, if available.
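A sketch of the group route with assumed names (datashare for the group, embedded for the NFS-side user); the mode demonstration uses a scratch directory standing in for /Data:

```shell
share=$(mktemp -d)          # stands in for /Data

# one-time setup on the real system, as root:
#   groupadd datashare
#   usermod -aG datashare embedded
#   chgrp -R datashare /Data

chmod 2770 "$share"         # rwx for owner and group, nothing for others;
stat -c '%a' "$share"       # the setgid (2) bit keeps new files in the group
```

Where POSIX ACLs are available, setfacl -m u:embedded:rwX /Data grants that single user directly, with no shared group needed.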
| How do I give all the permissions to a file for a single user that's not me? |
1,370,359,393,000 |
I'm using apache2 and postgres running on Ubuntu Server 10.04.
I have removed the startup scripts for both of these apps and I'm using supervisor to monitor and control them. The problem I have run into is that both of these need directories in /var/run (with the correct permissions for the users they run under) for pid files. How do I create these during startup as they need to be created as root and then chown'd to the correct user?
Edit
It seems the best way to so this is to creat the directories with custom init scripts. As I have no shell scripting skills at all how do I go about this?
|
In reply to this comment:
There are currently no startup scripts
for the services. The supervisor
daemon is started by the init.d
scripts and then the other services
are started by this service, which
should not run as root.
If your supervisor is started from an init.d script, then just create another init.d script configured to run before the supervisor starts (how you achieve this is totally dependent on your flavor of *nix).
In its start method create needed directories with required permissions.
In its stop method tear those directories down.
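A sketch of such an init script (the user names, paths and the defensive id check are assumptions; hook it in before the supervisor with e.g. update-rc.d on Debian):

```shell
#!/bin/sh
# /etc/init.d/runtime-dirs (sketch): create pid-file directories at boot.

RUNBASE=${RUNBASE:-/var/run}

start_rundirs() {
    for pair in apache2:www-data postgresql:postgres; do
        dir="$RUNBASE/${pair%%:*}"
        user="${pair##*:}"
        mkdir -p "$dir"
        chmod 755 "$dir"
        # chown only if the account exists (it should on the target box)
        id "$user" >/dev/null 2>&1 && chown "$user:$user" "$dir" || true
    done
}

stop_rundirs() {
    rm -rf "$RUNBASE/apache2" "$RUNBASE/postgresql"
}

case "${1:-}" in
    start) start_rundirs ;;
    stop)  stop_rundirs ;;
esac
```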
| Create directory in /var/run/ at startup |
1,370,359,393,000 |
I messed up some permission issues by trying to change permissions on what I thought was just one directory but what turned out to be '/'. Now I am having sudo problems:
In the console as a non-root user, when I try to login as root, I get:
sudo su
sudo: unable to stat /etc/sudoers: Permission denied
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
However I can get root terminal access to a directory by using the GUI Nemo file browser then right clicking and clicking 'open as root'. Most of the other posts with similar issues have had this issue be due to having incorrect file/directory permissions but I don't think this is the exact problem because when I do ls -ld /etc/ / ls -l /etc/sudoers I get:
drwxr-xr-x 157 root root 12288 Dec 15 15:36 /etc/
-rw-r--r-- 1 root root 755 Dec 15 15:36 /etc/sudoers
The update system also seems to not work.
I have tried:
apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall sudo
but this does not seem to really do anything productive
This is the contents of sudoers:
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
|
This is not a problem with the sudoers configuration file itself. You can check its syntax using visudo -c; if you run that against your file, you will see that it parses OK.
I can't identify the problem with only the info you provided, but here are some things you can try.
Make sure that all the path /etc/sudoers is executable
Make sure that the / directory is with permissions 755(drwxr-xr-x)
Try to reconfigure the package with the default values running dpkg-reconfigure as root
Please provide info about the new permissions on /.
PS: What I find odd is that your sudoers file has write permissions (it should normally be mode 0440); remember, you should only edit the sudoers file with visudo.
| sudo: unable to stat /etc/sudoers: Permission denied … Mint 18.2 Cinnamon |
1,370,359,393,000 |
I'm trying to give the ubuntu user on my machine the ability to rm folders and files (mainly in the /www/ folder) without the need to invoke a password or the need to use sudo.
I've been following this article and ended up trying this in my sudoers file.
ubuntu ALL = (ALL) NOPASSWD: /bin/rm
This unfortunately doesn't do the trick.
|
First of all, you should probably consider why this isn't working in the first place. If you don't understand why they are owned as different users there is a good chance that you will be breaking something by attempting to hack around the limitation. It is quite likely that there is a security issue you will be opening yourself up to by doing this.
Additionally, sudo is the right method for a lot of things, but if you are setting it up without a password, that is another indication that you are doing something wrong from a security standpoint. Don't assume that because it's "your" machine and "you are the only one using it" you don't need to worry about these things. Especially if you are running a service like an HTTP server, there is a reason the files being served are owned by a different user than the one you normally use on the system!
All caveats aside, the proper fix for this is to use the normal file system permission levels to give your ubuntu user permission to operate on the files without having to change to another user or escalate to root privilege levels. This is probably easiest done by adding the ubuntu user to the group that owns /www.
# Find out what group owns /www
stat -c %G /www
# Add the ubuntu user to that group
adduser ubuntu <group_that_owns_www>
Now assuming your files have group as well as owner permissions (i.e. 664 or similar for files and 775 for directories) you should be good to go for normal file operations with no special sudo interventions.
Note: After adding a user to a group, you have to actually log in again in order for the system to recognize you as part of the new group.
| Give user permission to rm without password or sudo |
1,370,359,393,000 |
On my CentOS 7.6, I have created a folder (called many_files) with 3,000,000 files, by running:
for i in {1..3000000}; do echo $i>$i; done;
I am using the command find to write the information about files in this directory into a file. This works surprisingly fast:
$ time find many_files -printf '%i %y %p\n'>info_file
real 0m6.970s
user 0m3.812s
sys 0m0.904s
Now if I add %M to get the permissions:
$ time find many_files -printf '%i %y %M %p\n'>info_file
real 2m30.677s
user 0m5.148s
sys 0m37.338s
The command takes much longer. This is very surprising to me, since in a C program we can use struct stat to get inode and permission information of a file and in the kernel the struct inode saves both these information.
My Questions:
What causes this behavior?
Is there a faster way to get file permissions for so many files?
|
The first version only requires reading the directory with readdir(3)/getdents(2), when run on a filesystem supporting this feature (ext4: filetype feature displayed with tune2fs -l /dev/xxx, xfs: ftype=1 displayed with xfs_info /mount/point ...).
The second version additionally requires a stat(2) of each file, which needs a further inode lookup and thus more seeks on the filesystem and device, possibly much slower on a rotating disk if the cache is cold. This stat is not required when looking only for the name, inode number and file type, because the directory entry alone is enough:
The linux_dirent structure is declared as follows:
struct linux_dirent {
unsigned long d_ino; /* Inode number */
unsigned long d_off; /* Offset to next linux_dirent */
unsigned short d_reclen; /* Length of this linux_dirent */
char d_name[]; /* Filename (null-terminated) */
/* length is actually (d_reclen - 2 -
offsetof(struct linux_dirent, d_name)) */
/*
char pad; // Zero padding byte
char d_type; // File type (only since Linux
// 2.6.4); offset is (d_reclen - 1)
*/
}
the same information is available via readdir(3):
struct dirent {
ino_t d_ino; /* Inode number */
off_t d_off; /* Not an offset; see below */
unsigned short d_reclen; /* Length of this record */
unsigned char d_type; /* Type of file; not supported
by all filesystem types */
char d_name[256]; /* Null-terminated filename */
};
This was suspected, and then confirmed by comparing (on a smaller sample...) the two outputs of:
strace -o v1 find many_files -printf '%i %y %p\n'>info_file
strace -o v2 find many_files -printf '%i %y %M %p\n'>info_file
Which on my Linux amd64 kernel 5.0.x just shows as main difference:
[...]
getdents(4, /* 0 entries */, 32768) = 0
close(4) = 0
fcntl(5, F_DUPFD_CLOEXEC, 0) = 4
-write(1, "25499894 d many_files\n25502410 f"..., 4096) = 4096
-write(1, "iles/844\n25502253 f many_files/8"..., 4096) = 4096
-write(1, "096 f many_files/686\n25502095 f "..., 4096) = 4096
-write(1, "es/529\n25501938 f many_files/528"..., 4096) = 4096
-write(1, "1 f many_files/371\n25501780 f ma"..., 4096) = 4096
-write(1, "/214\n25497527 f many_files/213\n2"..., 4096) = 4096
-brk(0x55b29a933000) = 0x55b29a933000
+newfstatat(5, "1000", {st_mode=S_IFREG|0644, st_size=5, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "999", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "998", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "997", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "996", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "995", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "994", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "993", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "992", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "991", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+newfstatat(5, "990", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
[...]
+newfstatat(5, "891", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
+write(1, "25499894 d drwxr-xr-x many_files"..., 4096) = 4096
+newfstatat(5, "890", {st_mode=S_IFREG|0644, st_size=4, ...}, AT_SYMLINK_NOFOLLOW) = 0
[...]
| Huge performance difference of the command find with and without using %M option to show permissions |
1,370,359,393,000 |
I want certain files to be able to be altered by myself on my basic account. To me, they are high priority files, with many backups. But we have some young'uns in the house and I don't quite trust them. I feel like they will find a way to delete the files. Is there a way I could hide them, or make them invisible without a command needed to be input from the command line?
|
Directory permissions:
The write bit allows the affected user to create, rename, or delete
files within the directory, and modify the directory's attributes
The read bit allows the affected user to list the files within the
directory
The execute bit allows the affected user to enter the directory, and
access files and directories inside
The sticky bit states that files and directories within that
directory may only be deleted or renamed by their owner (or root)
You can save the files under the ownership of root user and thus this will require them to use password before accessing those files.
As said in directory permissions, you can take away 'write bit' and 'execute bit' thus not allowing them to enter directory. only give them read permission so that they can view files without altering and deleting them.
You can learn about the use of the sticky bit (link here) to disable the renaming and deletion of files inside that directory by anyone but their owners. If they have the root password, then hiding files is the only way to protect them: root is the god of the system, and whoever holds the root password is the real god of your system!
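The directory-permission approach suggested above can be sketched with a scratch directory standing in for the real one (note that root still bypasses these checks, as said above):

```shell
# Scratch directory standing in for the folder that holds the important files
dir=$(mktemp -d)
chmod 700 "$dir"     # owner-only: others cannot list, enter, or delete inside
stat -c '%a' "$dir"
```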
| Is there a way to protect a file from being deleted, but not from being altered? |
1,370,359,393,000 |
I'm wondering if Linux/Unix has something equivalent to the permission system in Android
In Android there are permissions that an app needs to ask for, and that a user will see and need to approve when she installs a new app (for example there are permissions for the Camera and for accessing the network)
|
What you're looking for is called Mandatory Access Control, or MAC. Android enables it by default and it is tightly integrated into the userland APIs, but the technologies that lie at base of the MAC in Android (i.e., SELinux) are part of the default Linux kernel. Additionally, there exists another framework for MAC called AppArmor, which android does not use but which has similar features.
Configuring SELinux or AppArmor is not for the faint of heart. However, many distributions ship with default SELinux and/or AppArmor policies that you can use. For instance, Ubuntu ships with AppArmor enabled by default, and RHEL/CentOS ship with a few SELinux rule sets that you can choose from, with the least restrictive one of the set being enabled by default. Debian, too, has an SELinux rule set that it ships with, but it is not enabled by default, and it is not as well tested with SELinux enabled.
Most of the distributions that ship with MAC enabled don't have a very restrictive set of rules; after all, if it gets in the way too much, people will just disable it, and then you don't reap the benefits. However, it's certainly possible to enable a more restrictive set of rules -- it just means you may need to debug things a bit more, as most applications on the Linux desktop are not tested with MAC enabled.
One feature of some SELinux rule sets is the "SELinux sandbox". If you use that, applications that run inside it will have very few permissions. This can be useful to test an application without risk of it misbehaving and eating your files. For more information on that, you can read https://www.linux.com/learn/run-applications-secure-sandboxes-selinux.
| Does Linux have something equivalent to Android's permissions? |
1,370,359,393,000 |
Out of the box the ESET NOD32 antivirus for Linux 64bit running on Linux Mint 18 wrongly installs the service configuration file as an executable, flooding the system log:
/var/log/syslog
with such text:
Configuration file /lib/systemd/system/esets.service is marked executable. Please remove executable permission bits. Proceeding anyway.
|
I am showing the ESET NOD32 service as an example, but this applies generally to all /lib/systemd/system/*.service files.
Long-listing the service file:
ls -l /lib/systemd/system/esets.service
reveals the execution bits set:
-rwxr-xr-x 1 root root 360 Sep 22 08:53 /lib/systemd/system/esets.service
Solution is to set the proper user rights:
sudo chmod 644 /lib/systemd/system/esets.service
And you will no longer see such messages in your syslog.
I have already reported this cosmetic problem to the ESET development team.
| syslog: Service is marked executable. Please remove executable permission bits. Proceeding anyway |
1,370,359,393,000 |
If I have two users called john and sally. Both are part of the users group. john creates a directory with permissions 775. sally then puts a file there with 644 permissions.
Even though the file obviously has no group write privileges. Can john then modify/delete that file in the directory he owns but the file he does not own?
|
He can delete the file because unlinking depends on the directory's permissions, not the file's. In the same way he can effectively modify it, since he can remove it and replace it with a new file of the same name in the directory.
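This is easy to verify with a scratch directory (a sketch; no special privileges needed):

```shell
dir=$(mktemp -d)           # a writable directory, standing in for john's
touch "$dir/testfile"
chmod 444 "$dir/testfile"  # the file itself is read-only
rm -f "$dir/testfile"      # succeeds anyway: unlink checks the directory
[ ! -e "$dir/testfile" ] && echo "deleted"
```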
| Directory permissions vs file permssions |
1,370,359,393,000 |
I want to edit the /var/www files using any editor like phpstorm or eclipse etc, without changing the default user/groups setting for /var/www.
Since phpstorm is invoke by a script, I don't know how to make the phpstorm part of www-data group so it get write permission.
What are other options otherwise?
|
PhpStorm should use the same permissions as the user that runs/ launches the script (yourself). Add yourself to the www-data group, or set up a new group.
Let's say you created a new group called "www-pub" and added yourself to it as per instructions.
Remember to log out and back in to have the new group membership take effect. Test your group membership with the groups command.
Then, change permissions of the /var/www directory as follows:
chown -R :www-pub /var/www
Set the group of all files in /var/www to "www-pub" recursively
chmod -R o+r /var/www
Allows everyone (including apache) to read all files in /var/www
chmod -R g+w /var/www
Allows group members to write to all files in /var/www
find /var/www -type d -exec chmod g+s {} \;
sets new files to retain the group of the directory they are created in
If you need apache to also be able to write files (or read files without giving read to everyone else), add apache's user (usually "www-data") to the "www-pub" group.
Also, in PhpStorm's Deployment Options, make sure "override default permissions on (files/ directories)" are either unchecked or set to allow writing by group.
This process should also work for IntelliJ IDEA/ Webstorm.
| How to edit /var/www files using phpstorm? |
1,370,359,393,000 |
I get this error when I try creating a directory:
[rex <03:57 PM> /var/tmp/pb82]$ mkdir foo
mkdir: cannot create directory `foo': Permission denied
But doesn't the following output indicate that I should be able to create directories there since I am a member of the www-data group to which that directory belongs?
[rex <03:57 PM> /var/tmp/pb82]$ ls -l ..
total 8
drwxrwxr-x 5 root www-data 4096 Aug 7 15:32 jinfo
drwxrwxr-x 3 root www-data 4096 Aug 7 20:43 pb82
[rex <03:58 PM> /var/tmp/pb82]$ whoami
rex
[rex <03:58 PM> /var/tmp/pb82]$ groups rex
users www-data
Edit: in response to @UlrichDangel:
[rex <04:08 PM> /var/tmp/pb82/jinfo]$ id
uid=1008(rex) gid=100(users) groups=100(users)
|
You probably added yourself to the www-data group and didn't log back in afterwards. To pick up the new group membership without logging out, you can use
sg www-data
to get a new shell with the appropriate permissions.
groups will return the data from the database and not your effective permissions - from man groups:
Print group memberships for each USERNAME or, if no
USERNAME is specified, for the current process (which may differ if the groups database has changed).
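You can see both views side by side (a quick sketch: id reports the current process, while groups with a username reads the database):

```shell
id -nG              # effective groups of this shell's process
groups "$(id -un)"  # membership according to the database; may differ until re-login
```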
| Why this error: "cannot create directory `foo': Permission denied" |
1,370,359,393,000 |
I have multiple users on my system. I'd like to have shared directories like music, videos, pictures, etc. The problem is that I want users to be able to write new files to any directory, but not be able to delete or modify files they don't own. With standard Unix permissions, if you can add a file to a directory you can also delete other users' files. I'd also like to make sure all the files in these directories are always readable by the users group.
Can I do this with POSIX ACL's? or do I need something more advanced like SELinux (or other security framework).
example of what I don't want to work.
su - root
mkdir /home/music
chmod 775 /home/music
chgrp users /home/music
su - user1 /home/music
touch /home/music/testfile
ll /home/music/testfile
su - user2
rm /home/music/testfile
ll /home/music
|
If I understand you correctly you want for your music/video etc. directories the same semantic as for /tmp.
For this, you could put the sticky bit on the directories. To quote from the chmod man-page:
RESTRICTED DELETION FLAG OR STICKY BIT
The restricted deletion flag or sticky bit is a single bit, whose
interpretation depends on the file type. For directories, it prevents
unprivileged users from removing or renaming a file in the directory
unless they own the file or the directory; this is called the
restricted deletion flag for the directory, and is commonly found on
world-writable directories like /tmp.
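A sketch of setting this up (using a scratch path here; for the real thing you would likely use something like chgrp users /home/music followed by chmod 1775 /home/music):

```shell
dir=$(mktemp -d)   # stand-in for /home/music
chmod 1777 "$dir"  # leading 1 = sticky bit: only a file's owner (or root) may delete it
ls -ld "$dir"      # mode shows as drwxrwxrwt
```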
| What's the best way to configure shared filesystem directories? (beyond standard unix perms) |
1,370,359,393,000 |
I have script that can be runned from different users on the same machine. This script should write logs to the same file on every run.
Minimal version of script:
#!/usr/bin/env bash
# 2
touch /var/tmp/lll.log # 3
chmod 666 /var/tmp/lll.log # 4 (You can comment this line, but this will change nothing)
echo ghghhghg >> /var/tmp/lll.log # 5
There is no problem when it started from root and then from other user, but error thrown when order is opposite.
./savetmp.sh: line 5: /var/tmp/lll.log: Permission denied
Output of ls -ld /var/tmp /var/tmp/lll.log:
.rw------- 9 armoken 1 May 10:52 /var/tmp/lll.log
drwxrwxrwt - root 1 May 10:52 /var/tmp
cat /proc/sys/fs/protected_regular:
1
How to fix that?
|
When you run this as the user armoken the file is created according to your current permissions settings, which are such that you can read/write the file but no-one else can:
ls -l /var/tmp/lll.log
-rw------- 9 armoken 1 May 10:52 /var/tmp/lll.log
So when other users try to write to this file they have no permission to do so.
However, it's more complicated than this, because you have the protected regular files security feature enabled in your system's kernel (cat /proc/sys/fs/protected_regular returns non-zero). This means that, regardless of these permissions, in a world-writable sticky directory such as /var/tmp no-one other than the owner can open an existing file with O_CREAT (which is what shell redirections like >> use) - not even root - unless the file is owned by the owner of the directory itself.
So, if you want everyone to be able to read/write this file in this directory you need to set it up so that root owns it and that anyone can write to it. But bear in mind this means other people can erase or change content in the file too.
#!/bin/sh
if [ ! -f /var/tmp/lll.log ]
then
# File does not exist
if [ "$(id -u)" -eq '0' ]
then
# We are root so create the file (and continue)
>/var/tmp/lll.log
chmod a=rw /var/tmp/lll.log
else
echo 'ERROR: Log file does not exist. Have your systems administrator create it before proceeding' >&2
exit 1
fi
fi
# Now anyone can read/write the contents of the file
echo 'This is a test message' >>/var/tmp/lll.log
This is not defensive coding, though, as anyone can still create the file and prevent others from using it.
A better solution might be to use a logger. For example, this will write to the files managed through journalctl (and/or /var/log/user.log otherwise)
logger 'This is a test message'
journalctl --since today | tail
…
May 01 10:14:24 myServer myUser[18892]: This is a test message
…
| Root couldn't write to file with rw permissions for all users and owned by other user |
1,370,359,393,000 |
I have a group called homeperms on an Ubuntu system, with a few users:
$ cat /etc/group | grep "homeperms"
homeperms:x:1004:jorik,tim.wijma,vanveenjorik,jorik_c
And I've done $ sudo chgrp -R homeperms /home.
But when I try to
md /home/flask.
I get a permission denied error (Happens with any other folder name too).
I don't want to 777 the folders since I'm going to be dealing with web server stuff.
Permissions of home in /
drwxr-xr-x 12 vanveenjorik homeperms 4096 Jul 6 09:06 home
permissions inside /home:
drwxr-xr-x 6 vanveenjorik homeperms 4096 Jul 3 20:08 19150
drwxr-xr-x 3 codeanywhere-ssh-key homeperms 4096 Jul 6 08:00 codeanywhere-ssh-key
drwxr-xr-x 2 vanveenjorik homeperms 4096 Jul 6 09:13 downloads
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 08:43 jorik
drwxr-xr-x 4 jorik_c homeperms 4096 Jul 6 08:09 jorik_c
drwxrwxr-x 4 vanveenjorik homeperms 4096 Jul 3 20:15 mkdir_python
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 09:09 tim.wijma
drwxr-xr-x 3 vanveenjorik homeperms 4096 Jul 3 18:20 ubuntu
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 09:27 vanveenjorik
drwxrwxr-x 3 vanveenjorik homeperms 4096 Jul 3 22:28 venvs
I am trying to do this on the user 'jorik_c' and with sudo this (of course) works flawlessly
Before this gets marked as duplicate, the answer to this question, didn't help.
|
By typing the command chgrp -R homeperms /home, you effectively changed the group ownership of /home and everything underneath it to homeperms.
BUT, the group still does not have WRITE access on the directory. Per your output:
drwxr-xr-x 12 vanveenjorik homeperms 4096 Jul 6 09:06 home
Remember, file permissions display as: OWNER, GROUP, EVERYONE ELSE
You can fix it quickly by any of the following:
# retaining your existing permissions + adding the WRITE bit for GROUP
chmod 775 /home
# similar, accomplishes the same in symbolic form
chmod g+w /home
| Can't create directory in directory owned by group |
1,370,359,393,000 |
I have read this and this, and found that my problem is different and more specific.
I understand the following points.
+x on the directory grants access to the files' inodes through this specific directory
the meta-information of a file, which is used by ls -l, is stored in its i-node, but the file name is not part of it (the name lives in the directory entry)
From the 2 points above, since ls without -l does not need to access the i-nodes of the files in the directory, it should successfully list the file names and return 0.
However, when I tried it on my machine, the file names are listed, but there were some warnings like permission denied, and the return code is 1.
b03705028@linux7 [~/test] chmod 500 permission/
b03705028@linux7 [~/test] ls --color=no permission/
f1*
b03705028@linux7 [~/test] chmod 400 permission/
b03705028@linux7 [~/test] ls --color=no permission/
ls: 無法存取 'permission/f1': 拒絕不符權限的操作
f1
b03705028@linux7 [~/test] echo $0
bash
The Chinese characters basically talk about permission denied
My unix distribution is Linux 4.17.11-arch1
|
I suspect ls in your case is an alias to something like ls --color=auto; in that case, ls tries to find information about the files contained inside the directory to determine which colour to use.
ls --color=no
should list the directory without complaining.
If it still complains, then you may be using another option, like -F or --classify, that needs to access file metadata (-F/--classify looks at the file type, for example).
To be sure that you run ls without going through an alias, use either of
command ls
or
\ls
To remove an alias for ls, use
unalias ls
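A self-contained way to see the difference (a sketch; run it as a normal user, since root bypasses permission checks). A bare ls needs only the directory entries, while anything that must stat() each file needs x on the directory:

```shell
dir=$(mktemp -d)
touch "$dir/f1"
chmod 400 "$dir"                    # r-- : readable, but not searchable
command ls "$dir"                   # lists f1 from the directory entries alone
command ls -l "$dir" 2>&1 || true   # -l must stat() each entry, so it complains
chmod 700 "$dir"                    # restore so the directory can be removed
```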
| Why ls without -l return 1 when permission to directory is r-- |
1,370,359,393,000 |
How to avoid apache allow download certain files? (for example .py and .sh)
I want to avoid files to be served and downloaded like
www.site.com/file.sh
www.site.com/file.py
but I've tried to do this
<Files ~ "^\.sh">
Order deny,allow
deny from all
</Files>
<Files ~ "^\.py">
Order deny,allow
deny from all
</Files>
also this
<FilesMatch "\.(sh|py)$">
Order deny,allow
deny from all
</FilesMatch>
Nothings seems to work, I've tried to put the file in
/etc/apache2/apache2.conf
also
/etc/apache2/sites-available/default
Nothing works, apache still let me download the files.
|
Your file match clauses seem to be incorrect.
<Files ~ "^\.py">
matches files whose names start with the characters .py (and since your file names don't begin with a dot, it matches nothing). You'll want
<Files ~ "\.py$">
instead.
But your FilesMatch regular expression looks correct. So maybe the problem lies elsewhere. Perhaps your Apache uses the new-style access control directives only?
Try replacing the old-style
Order deny,allow
deny from all
with the new-style equivalent:
Require all denied
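Putting both corrections together, a sketch for a modern (2.4+) Apache, placed in the server or vhost configuration:

```apache
<FilesMatch "\.(sh|py)$">
    Require all denied
</FilesMatch>
```

For Apache 2.2 keep the Order deny,allow / deny from all form, but with the corrected pattern.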
| How to avoid apache allow download certain files? |
1,370,359,393,000 |
What are the standard ownership settings for files in the .gnupg folder?
After doing sudo chown u:u * mine now looks like this:
drwx------ 2 u u 4,0K jan 18 22:53 crls.d
drwx------ 2 u u 4,0K jan 18 22:33 openpgp-revocs.d
drwx------ 2 u u 4,0K jan 18 22:33 private-keys-v1.d
-rw------- 1 u u 0 sep 28 02:12 pubring.gpg
-rw-rw-r-- 1 u u 2,4K jan 18 22:33 pubring.kbx
-rw------- 1 u u 32 jan 18 22:28 pubring.kbx~
-rw------- 1 u u 600 jan 19 22:15 random_seed
-rw------- 1 u u 0 sep 28 02:13 secring.gpg
srwxrwxr-x 1 u u 0 jan 20 10:20 S.gpg-agent
-rw------- 1 u u 1,3K jan 18 23:47 trustdb.gpg
However, before that, originally at least pubring.gpg,secring.gpg and random_seed were owned by root.
|
The .gnupg directory and its contents should be owned by the user whose keys are stored therein and who will be using them. There is in principle no problem with a root-owned .gnupg directory in your home directory, if root is the only user that you use GnuPG as (in that case one could argue that the directory should live in /root or that you should do things differently).
I can see nothing wrong with the file permissions in the file listing that you have posted. The .gnupg folder itself should additionally be inaccessible by anyone other than the owner and user of the keys.
The reason why the files may initially have been owned by root could be because GnuPG was initially run as root or by a process executing as root (maybe some package manager software or similar).
GnuPG does permission checks and will warn you if any of the files have unsafe permissions. These warnings may be turned off (don't do that):
--no-permission-warning
Suppress the warning about unsafe file and home directory
(--homedir) permissions. Note that the permission checks that
GnuPG performs are not intended to be authoritative, but rather
they simply warn about certain common permission problems. Do
not assume that the lack of a warning means that your system is
secure.
Note that the warning for unsafe --homedir permissions cannot be
suppressed in the gpg.conf file, as this would allow an attacker
to place an unsafe gpg.conf file in place, and use this file to
suppress warnings about itself. The --homedir permissions
warning may only be suppressed on the command line.
The --homedir directory referred to above is the .gnupg directory, usually at $HOME/.gnupg unless changed by using --homedir or setting GNUPGHOME.
Additionally, the file storing the secret keys will be changed to read/write only by default by GnuPG, unless this behaviour is turned off (don't do that either):
--preserve-permissions
Don't change the permissions of a secret keyring back to user
read/write only. Use this option only if you really know what
you are doing.
This applies to GnuPG 2.2.3, and the excerpts above are from the gpg2 manual on an OpenBSD system.
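A sketch of resetting everything to the usual state (assumes the default location; the mkdir is only there so the commands are safe to run if the directory is missing, and the chown must be run as root if the files currently belong to root):

```shell
GNUPGHOME="${GNUPGHOME:-$HOME/.gnupg}"
mkdir -p "$GNUPGHOME"
chown -R "$(id -un)" "$GNUPGHOME"                # owned by the keys' user, not root
chmod 700 "$GNUPGHOME"                           # directory: owner-only
find "$GNUPGHOME" -type f -exec chmod 600 {} +   # files: owner read/write only
```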
| What are the standard ownership settings for files in the `.gnupg` folder? |
1,370,359,393,000 |
I am running FreeBSD 10.2 and used the Let's Encrypt py27-certbot package to create an SSL Certificate.
Now I want to access that Certificate, however when I attempt to run
sudo cd /usr/local/etc/letsencrypt/live/
I am unable to access it (after the command runs, I am in the same directory I ran cd from.)
Shouldn't root be able to access any file (especially one it created?)
|
Try to become root (sudo su -) and then access the contents of the file/folder.
sudo elevates your permissions only for the single command, which runs in a child process; any directory change made there cannot persist in your own shell, which is why you end up where you started.
If you are not a member of a group that has execute permission on a directory, you will not be allowed to enter that directory. Below, I have removed the execute bit from the permissions of group wheel, of which this user is a member (previously drwxr-xr-x):
drwxr--r-x 2 root wheel 128 Sep 1 18:48 zfs
[user@host /etc]$ sudo cd zfs
[user@host /etc]$
I am able to execute the command sudo cd zfs and it runs fine. But when the command completes I find that my working path is not inside the zfs directory.
Verify the permissions of the directory that you are attempting to enter. The user or member of the group must have the execute permission.
| Why Is Root Unable to Access a Directory FreeBSD? |
1,370,359,393,000 |
I understand that with Unix file permissions, there's "user", "group", and "world" octets. For the sake of this discussion, let's assume that setuid/sticky bits don't exist.
Consider the following example:
$ echo "Hello World" > strange
$ chmod 604 strange
$ ls -l strange
-rw----r-- 1 mrllama foo 12 Apr 13 15:59 strange
Let's assume that there's another user, john, who is a member of my group, foo.
What permissions does John have regarding this file?
Does the system go with the most specific permission match (i.e. John isn't owner, but he's in the foo group, so use the permissions for the "group" octet)?
...or does it go by the most permissive of the octets that apply to him (i.e. John meets the criteria for "group" and "world", so it goes with the more permissive of the two)?
Bonus questions:
What if the permissions were instead 642? Can John only read, only write, or both?
Are there any reasons to have strange permissions like 604?
|
When determining access permissions using Unix-style permissions, the current user is compared with the file's owner, then the group, and the permissions applied are those of the first component which matches. Thus the file's owner has the owner's permissions (and only those), members of the file's group have the group's permissions (and only those), everyone else has the "other users'" permissions.
Thus:
John has no permissions for this file.
The most specific permission match wins, not the most permissive (access rights aren't cumulative).
With permissions 642, John could read the file.
There are reasons to give permissions such as 604: this allows the group to be excluded, which can be handy in some situations — I've seen it on academic systems with a students group, where staff could create files accessible to anyone but students.
root has access to everything, regardless of the permissions defined on the file.
For more complex access control you should look into SELinux and POSIX ACLs. (SELinux in particular can even limit what root has access to.)
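The first-match rule can be made concrete with a little shell arithmetic (a sketch; the 0 prefix forces octal interpretation):

```shell
mode=604
owner=$(( 0$mode >> 6 & 7 ))   # 6 = rw-
group=$(( 0$mode >> 3 & 7 ))   # 0 = ---
other=$(( 0$mode      & 7 ))   # 4 = r--
# John matches the group class first, so his rights are 0 (nothing);
# the more permissive "other" bits are never consulted for him.
echo "owner=$owner group=$group other=$other"   # prints owner=6 group=0 other=4
```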
| Restrictive "group" permissions but open "world" permissions? |
1,370,359,393,000 |
I have an RHEL 6 system which has 20 users. I have 20 ports on which separate versions of a service is running. I want user a to access port a, but not other ports. Is there a way to do this? (possibly by modifying iptables)?
|
You can use the "owner" iptables module to do this. As an example to restrict port 999 to the user 'fred' only you can use:
iptables -I OUTPUT -p tcp --dport 999 -j REJECT
iptables -I OUTPUT -p tcp --dport 999 -m owner --uid-owner fred -j ACCEPT
Each rule is inserted (-I) at the top of the OUTPUT chain, so run them in this order (REJECT first, then ACCEPT); the ACCEPT rule then sits above the REJECT rule and is matched first for fred's traffic.
| allowing users to access certain ports on server |
1,370,359,393,000 |
Here's my situation whenever I reboot:
$ systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
nginx.service loaded failed failed A high performance web server and a reverse proxy server
...
$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2014/01/18 05:44:47 [emerg] 254#0: open() "/run/nginx.pid" failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed
$ cd /run
$ ls -al | grep nginx
$ sudo systemctl start nginx
$ ls -al | grep nginx
-rw-r--r-- 1 root root 4 Jan 18 06:27 nginx.pid
I don't understand how nginx.pid could have incorrect permissions before it's even created, or what one would do to resolve that.
I'm using Arch and I've seen similar issues relating to chroot-jail, but I did not install nginx in a chroot.
|
The issue seems to be you are using an unprivileged user to test the Nginx configuration. When the test occurs, it attempts to create /run/nginx.pid, but fails and this causes the configuration test to fail. Try running nginx as root.
$ sudo nginx -t
or
$ su - -c "nginx -t"
This way, the Nginx parent process will have the same permissions it has when run by systemctl.
If this resolves the error at testing, but not when run from systemctl, you may want to check this page on investigating systemd errors.
| Systemd fails to start Nginx on reboot, but it works manually |
1,370,359,393,000 |
I have two users, Alice and Bob. Bob should be allowed to list, ls, Alice's home directory. Alice also has a file in her home directory that Bob should also be allowed to read.
I run these commands as root:
[root@corvatsch ~]# setfacl -m user:bob:r /home/alice/
[root@corvatsch ~]# setfacl -m user:bob:r /home/alice/file
This yields the following result in the ACLs:
[root@corvatsch ~]# getfacl -c /home/alice/
user::rwx
user:bob:r--
group::---
mask::r--
other::---
and
[root@corvatsch ~]# getfacl -c /home/alice/file
getfacl: Removing leading '/' from absolute path names
user::rw-
user:bob:r--
group::r--
mask::r--
other::r--
It looks as if Bob should now be able to read Alice's home folder as well as the content of the her file.
When Bob tries that, he gets:
[bob@corvatsch ~]$ ls -l /home/alice/
ls: cannot access /home/alice/file: Permission denied
total 0
-????????? ? ? ? ? ? file
(Note the questionmarks!) and
[bob@corvatsch ~]$ cat /home/alice/file
cat: /home/alice/file: Permission denied
Looks like Bob can read the home directory, although in a weird way: ls lists the file but seems unable to read its attributes.
And cat-ing the file does not work at all.
Can somebody explain what I am missing?
NOTE: (I'm running CentOS 6.4)
|
The /home/alice/ directory needs execute (x) permission for the user accessing it: on a directory, r only allows listing the names of the entries, while x is what permits traversing the directory to stat or open the files inside. Grant Bob rx on the directory instead of just r.
EDIT: BTW, the question marks indicate that ls could list the directory entries but could not stat the files to read their permissions and other metadata.
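As a sketch, the corrected commands would look like this (to be run as root on the machine from the question, using the paths above):

```shell
# rx on the directory: r to list entry names, x to traverse it
setfacl -m user:bob:rx /home/alice/
# r on the file itself is unchanged
setfacl -m user:bob:r /home/alice/file
```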
| ACL, ls, "permission denied" and a lot of questionmarks |
1,370,359,393,000 |
How can I modify my sshd_config so that it blocks access to all users except the root user?
I've had a look and I tried
AllowUsers root
DenyUsers *
But that doesn't do anything
|
I tried this myself, adding only the AllowUsers root line, which worked without a hitch. Probably an obvious question, but since you didn't mention it explicitly: did you restart the sshd service after making the modification?
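For reference, a minimal sketch of the relevant sshd_config fragment (assuming the stock /etc/ssh/sshd_config location):

```
# /etc/ssh/sshd_config
AllowUsers root
```

After saving, restart the daemon, e.g. with `systemctl restart sshd` or `service sshd restart` depending on your init system; changes to sshd_config only take effect on a restart or reload.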
| Block SSH to all but root user |
1,370,359,393,000 |
I want to install RhodeCode on a test server at work. However, the internet access is restricted for that server, and RhodeCode has a lot of dependencies (I don't even have Python on that server). So I have to take a snapshot of the entire OS from the server, restore it in a virtual machine at home, install RhodeCode and everything else required, then copy it back at work - I already have some apps on the server, and I would like to avoid reinstalling them.
The first solution would be to take the HDD home (yes, I can do it, but I would like to avoid it).
The second solution would be to use Clonezilla and backup/restore the partitions.
However, is there another way to do it, using tar or something like it, while preserving the permissions and ACLs?
Update: Due to limited hardware resources I can't use VMware (or an equivalent) to run a virtual machine with RhodeCode.
A solution that is filesystem independent would be great, so I can use ext3 in the virtual machine.
|
The only way I have found to do this while also changing the filesystem type is to work at the file level. FSArchiver excels at backing up a whole partition as a set of files, independent of the filesystem it will be restored onto. I have migrated a whole system with FSArchiver, but you then have to fix your bootloader and other machine-specific configuration. FSArchiver also ships on the SystemRescueCD, and may be available in a standard package repo.
Working at the block level would be much simpler; I would prefer partimage if the filesystem change were not a hard requirement.
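A tar-based alternative can also work at the file level. Here is a minimal sketch using hypothetical /tmp paths; for a real migration you would archive / as root with the same flags, adding GNU tar's --acls and --xattrs to carry ACLs and extended attributes, and excluding /proc, /sys, /dev and /run:

```shell
set -e
rm -rf /tmp/tar-demo /tmp/tar-demo.tgz /tmp/tar-restore

# Hypothetical demo tree standing in for the real system
mkdir -p /tmp/tar-demo/etc
printf 'secret\n' > /tmp/tar-demo/etc/app.conf
chmod 600 /tmp/tar-demo/etc/app.conf

# -c create, -p preserve permissions, -z gzip, -f archive file
tar -cpzf /tmp/tar-demo.tgz -C /tmp tar-demo

# Extract elsewhere; -p restores the recorded modes exactly
mkdir -p /tmp/tar-restore
tar -xpzf /tmp/tar-demo.tgz -C /tmp/tar-restore
stat -c '%a' /tmp/tar-restore/tar-demo/etc/app.conf   # prints 600
```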
| How can I move the entire OS to a different server? |