1,393,534,555,000 |
A brief introduction to my question: the ps command prints information about system processes.
But when I log in as root and change the execute (x) permission of ps:
chmod -x /bin/ps
chmod u+x /bin/ps
ls -l /bin/ps
-rwxr--r-- 1 root root ...... ps
Then I create a new shell script called call_ps_via_root.sh:
#!/bin/bash
ps
and set the x permission
chmod +x /bin/call_ps_via_root.sh
ls -l /bin/call_ps_via_root.sh
-rwxr-xr-x 1 root root ...... call_ps_via_root.sh
After this, I log in as a normal user, sammy, and when I type ps, it prints
Permission denied
and I type /bin/call_ps_via_root.sh
It is still denied. How can I make it work via call_ps_via_root.sh?
|
You can't: if you make /bin/ps executable only by root, it will be executable only by root. You can't just wrap a script around it to bypass the permission check.
set-user-id
If you want a normal user to be able to run ps as root, you have to look at the set-uid permission. From the setuid article on Wikipedia:
setuid and setgid (short for "set user ID upon execution" and "set
group ID upon execution", respectively) are Unix access rights
flags that allow users to run an executable with the permissions of
the executable's owner or group. They are often used to allow users on
a computer system to run programs with temporarily elevated privileges
in order to perform a specific task. While the assumed user id or
group id privileges provided are not always elevated, at a minimum
they are specific.
See also the man page of chmod
sudo
If instead you want a normal user to execute something that only root can execute, use sudo. It allows you to configure which users can execute what.
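For the sudo route, a minimal sudoers entry could look like the sketch below (the username sammy is taken from the question; the drop-in file name is an assumption, and you should always edit such files with visudo):

```
# /etc/sudoers.d/ps-sammy  (edit with: visudo -f /etc/sudoers.d/ps-sammy)
sammy ALL = (root) NOPASSWD: /bin/ps
```

The user would then run sudo ps. Check sudoers(5) on your distribution before deploying.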
| Call 'ps' as a normal user in Linux |
1,393,534,555,000 |
I need to display the permissions details for all the files names in a directory that begin with just "_".
I have tried various commands using ls and find and no joy.
|
Try:
ls -l -- -*
The -- indicates that what follows are not command line options.
Based on your update, for underscores you should just be able to do:
ls -l _*
Though the -- won't hurt (it just has no effect in that case).
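A quick way to see the difference, sketched in a throwaway directory (file names are made up for the demo):

```shell
cd "$(mktemp -d)"
touch -- '-dash.html' '_underscore.html'  # -- also protects touch here
ls -l _*         # underscore names need no special handling
ls -l -- -*      # without --, ls would treat -dash.html as options
```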
| display permission for all files that start with - |
1,393,534,555,000 |
I want to be able to write to a folder.
I already ran chmod 755 and changed the owning group, but it does not seem to work.
What other settings must be adjusted?
myName@homeserver /sharedfolders/media/Media % ll
total 64
drwxr-xr-x+ 308 nobody users 20480 Dec 18 18:42 Movies
drwxr-xr-x+ 47 nobody users 4096 Nov 8 14:23 TvShows
myName@homeserver /sharedfolders/media/Media % id
uid=1000(myName) gid=985(users) groups=985(users),973(docker),998(wheel)
myName@homeserver /sharedfolders/media/Media % touch hello
touch: cannot touch 'hello': Permission denied
myName@homeserver /sharedfolders/media/Media % ls -ld .
drwxr-sr-x+ 11 nobody users 4096 Mar 19 2020 .
% getfacl .
# file: .
# owner: nobody
# group: users
# flags: -s-
user::rwx
user:nobody:---
group::rwx
group:116:r-x
group:nobody:---
mask::rwx
other::r-x
default:user::rwx
default:user:nobody:---
default:group::rwx
default:group:116:r-x
default:group:nobody:---
default:mask::rwx
default:other::r-x
|
You need to give yourself write permission on the directory where you’re trying to create files. Since you want to use the group permissions for this,
sudo chmod g+w .
will do this for the current directory, and then
touch hello
should work.
Your chmod 755 command doesn’t set the group write permission bit; see Understanding UNIX permissions and file types for details.
Note that the + at the end of the permissions string indicates that ACLs are set; you can check those with
getfacl .
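The effect of the suggested fix can be sketched on a scratch directory (the path is a stand-in, not the Media directory itself):

```shell
d=$(mktemp -d)     # stand-in for /sharedfolders/media/Media
chmod 755 "$d"     # the situation: rwxr-xr-x, group cannot write
chmod g+w "$d"     # the fix: now rwxrwxr-x
stat -c '%A' "$d"  # prints drwxrwxr-x
```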
| Why do I get permission denied even though I am part of the owner group? |
1,393,534,555,000 |
I am a teacher and I use Linux which is great! But students are curious about this "new" operating system they do not know and in GUI they tweak program settings which affects hidden files inside /home/user:
[profesor@240-kateder ~]$ ls -a
. .dbeaver4 .gtkrc-2.0 .sane
.. .dbeaver-drivers .icons .swt
.bash_history .dropbox .kde4 .themes
.bash_logout .eclipse .local .thumbnails
.bash_profile .esd_auth .lyx .ViberPC
.bashrc .FlatCAM .masterpdfeditor .w3m
.cache .FreeCAD .mozilla .Xauthority
.config .gimp-2.8 .pki .xinitrc
.convertall .gnupg .qucs .xournal
This is unwanted because over time program interfaces will change so dramatically that programs will be missing toolbars, buttons, main menus, status menus... and students end up with completely different GUI, so they are calling me about the issue and we spend too much time.
Now to optimize this I have to make sure that program settings (hidden files inside /home/user) aren't changed, so I tried to change them with sudo chmod -R 555 ~/.*, but this didn't work out well for all of the programs, because some programs want to manipulate their settings at boot and therefore fail to start without sudo. And students don't have sudo privileges.
But sudo chmod -R 555 ~/.* worked for .bash_profile, .bash_logout, .bashrc, .bash_history, .xinitrc so I was thinking if I would:
prevent user from deleting .bash_profile, .bash_logout, .bashrc, .bash_history, .xinitrc
copy all hidden setting files into a folder /opt/restore_settings
set up .bash_profile to clean all settings in the user's home directory on login using rm -r ~/.* (I assume this wouldn't delete the files from point 1, if I protect them) and then restore settings from /opt/restore_settings.
I want to know your opinion about this idea, or whether there is a better way to do it. And I need a way to prevent users from deleting the files from point 1; otherwise this can't work.
|
Totally different approach: create a group students, and give each student their own account with membership in students. Have a script that restores a given home directory from a template to a known good state, possibly deleting all extra dot files. Tell the students about this script.
If you have a number of computers, centralize this approach (user management on a single central server), and use a central file server for student home directories, so each student gets the same home directory on any machine.
Together with proper (basic chmod) permissions everywhere, this will ensure that each student can only wreak havoc in their own home directory, and can restore it when it breaks, possibly losing their own customizations in the process, so they'll be more cautious next time.
BTW, that's a very standard setup for many users on a cluster of machines.
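A minimal sketch of such a restore script. The TEMPLATE and HOME_DIR defaults below are throwaway paths so the sketch can be tried safely; in real use they would be /opt/restore_settings and /home/<student>, and the script would run as root followed by a chown to the student:

```shell
#!/bin/sh
# Reset a student home directory from a known-good template (sketch).
set -eu
TEMPLATE=${TEMPLATE:-$(mktemp -d)}
HOME_DIR=${HOME_DIR:-$(mktemp -d)}

# Simulate a known-good template and a student-modified home.
echo 'known-good' > "$TEMPLATE/.bashrc"
echo 'broken'     > "$HOME_DIR/.bashrc"
touch "$HOME_DIR/.stray_settings"

rm -rf "$HOME_DIR"             # wipe the broken state, stray dot files included
cp -a "$TEMPLATE" "$HOME_DIR"  # restore the known-good copy (preserves modes)
```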
| Good way to prevent student from messing program settings in /home/user |
1,393,534,555,000 |
This may be a weird question, but please bear with me.
Let's say I have a file as
-rwxr-w--- user3 user2 4095 somefile
Right now I'm in user2's shell (is that the right way to say it?),
and if I open 'somefile' with the vi editor, I guess this somefile belongs to user3, so if I execute a command line from within vim, am I executing the command in user3's shell?
|
No: vim is not set-user-id (that is, it will not change the effective user ID). Running a command line from vim will give you a shell (that is the word) as user2.
By the way, to edit the file you must either
be user user3
belong to group user2; merely being user user2 is not enough.
There used to be a bug in Red Hat 4.x (or there still is) when running visudo, which allowed you to run a shell as root. This was a minor bug, as you must already be in the sudoers file to run visudo.
| Which user does vi run commands as? |
1,393,534,555,000 |
I would like to write to a Device File (of a printer) located at /dev/usb/lp0. The file is owned by lp user and group. This file is created whenever I connect my printer device to the system.
I tried adding myself to the lp group. However the lp0 file doesn't appear when the printer is connected. Removing myself from the group fixes the issue.
One solution to get write permission is to -
Detect whenever the device is connected
Trigger a shell script that runs sudo chmod 0666 /dev/usb/lp0
This led me to the answer at https://unix.stackexchange.com/a/28711
The shell script is successfully triggered but it doesn't run the sudo command*, since the shell script was not executed from the terminal. I have tried using sudo and gksudo, both have failed to prompt me to enter password i.e, I am unable to escalate permissions through a background shell script.
What I have tried
setuid from Unix & Linux @ StackExchange, but it seems to be a very bad idea.
echo 'my_insecure_password' | sudo -S command, it didn't work*.
I did not try Polkit, which was suggested in other answers, due to the level of its complexity. But I am willing to go for it with proper directions.
|
Adding yourself to the lp group is probably the best solution here. That would not cause the lp0 file not to appear. (It's theoretically possible that your system has been configured to cause lp0 not to appear if you're in the lp group, but 1. that would have to be a local configuration, not a default setup from a distribution; and 2. I don't see why someone would have set this up.)
What follows is for academic interest only. In your scenario, you don't need this.
If you needed to change the permissions on the device file, then How to run custom scripts upon USB device plug-in? is not exactly what you need — that's for more complex cases that require a script. To change the Unix permissions or the ownership on a device file, use OWNER, GROUP and MODE assignments directly in the udev rule. That is, do create a file under /etc/udev/rules.d, but the line in that file should have something like GROUP="mygroup" instead of RUN="/path/to/script".
If you want to do something more complex, such as setting an access control list, you'll need to run a script. You don't need to escalate permissions in that script: it's already running as root! Just call the program you need to run as root, e.g. setfacl.
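As a sketch, such a udev rule might look like the fragment below (the file name, SUBSYSTEM match, and mode are assumptions; check your device's attributes with udevadm info first):

```
# /etc/udev/rules.d/99-usb-printer.rules
SUBSYSTEM=="usbmisc", KERNEL=="lp[0-9]*", GROUP="lp", MODE="0660"
```

Reload the rules (udevadm control --reload) and re-plug the printer for it to take effect.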
| Escalate permissions through a background shell script |
1,393,534,555,000 |
What is the most reliable way to give all users read/write privileges for a given directory, all of its subdirectories, and files in CentOS 7?
In an eclipse web application project that uses Maven, I am getting the following compilation error in the pom.xml:
Parent of resource: /home/user/workspace/MinimalDbaseExample/target/m2e-wtp is marked as read-only.
Since this sounds like a permissions issue, I typed in the following in the CentOS 7 terminal:
chmod -R ugo+rw /home/user/workspace/MinimalDbaseExample/target/
And I also tried:
chmod -R 0777 /home/user/workspace/MinimalDbaseExample
But eclipse is still showing the compilation error, even after multiple Project Clean and Maven Update operations. However, I am able to import the same zipped project file into a Windows version of eclipse with no permission-related compilation error, so I wonder whether my chmod statements actually opened up the file permissions on the CentOS 7 machine.
Is there a better statement syntax that can reliably open up read write permissions to all users for the given directory and all its recursive subdirectories and files?
|
You said you wanted to grant read and write permissions to all subdirectories and files under: /home/user/workspace/MinimalDbaseExample ... right?
Octal 0777 permissions grant rwxrwxrwx symbolically.
Octal 0755 permissions grant rwxr-xr-x symbolically.
Octal 0666 permissions grant rw-rw-rw- symbolically.
To set read/write/execute permissions to the /home/user/workspace/MinimalDbaseExample directory and all files and folders within it, choose which permission set you want, and do the following as an example:
1) Make your present working directory : /home/user/workspace
2) Type: chmod -R 0777 MinimalDbaseExample/
Following this procedure exactly, grants the folder MinimalDbaseExample/ and all files and subdirectories therein 0777/drwxrwxrwx permissions.
I tested this by setting up some dummy directories under my '~' directory and verified that it worked.
Credit goes to this thread, but it should not be this complex at all... I hope you make progress.
https://stackoverflow.com/questions/3740152/how-to-set-chmod-for-a-folder-and-all-of-its-subfolders-and-files-in-linux-ubunt
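If you want directories and regular files treated differently (directories need x to be traversable, while data files usually shouldn't be executable), a common variant is the find-based approach below, sketched on a placeholder tree rather than the real project path:

```shell
top=$(mktemp -d)   # stand-in for /home/user/workspace/MinimalDbaseExample
mkdir -p "$top/target/m2e-wtp"
touch "$top/pom.xml"

find "$top" -type d -exec chmod 777 {} +  # directories: rwxrwxrwx
find "$top" -type f -exec chmod 666 {} +  # files: rw-rw-rw-
```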
| reliable way to give all read/write access recursively in CentOS 7 |
1,393,534,555,000 |
Long story short, I've got an Ubuntu server box that was recently compromised (I believe through a known exploit of an older version of Tomcat, which has already been updated). Part of the exploit set the permissions on pretty much everything to 777.
In attempting to fix the incorrect permissions I inadvertently set /lib to 644 instead of 755 as it should have been. As a result of this, no programs can be run (including but not limited to chmod), and the system cannot fully boot (either normally or into recovery mode).
Is there any way to gracefully recover from this mistake, or do I basically have to reinstall Ubuntu from scratch at this point?
The one thing I can do is access a limited command prompt through GRUB. Using this I can browse the machine's filesystem, but I haven't found any way to use it to modify any permissions. Is there a way to do this using GRUB's command prompt?
|
Well, you can recover from the permissions problem by booting a live CD/DVD/USB-drive, mounting your root filesystem (in a subdirectory), and running the chmod command there. SystemRescueCd is a distribution designed especially for this sort of repair, but any live CD that can handle your root filesystem will work.
But if your server has been compromised, it's very hard to be sure you've rooted out every trace of the compromise. The cracker could have left backdoors in surprising places. You're probably better off wiping the drive and reinstalling from scratch.
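From the live system, the repair might look like the sketch below. The device name is an assumption, so the mount line is left commented and $MNT stands in for the mounted root, letting the chmod/find part be tried on any scratch directory:

```shell
# mount /dev/sda1 /mnt        # from the live CD; device name is an assumption
MNT=${MNT:-$(mktemp -d)}      # stand-in for /mnt in this sketch
mkdir -p "$MNT/lib/firmware"  # simulate the damaged tree
chmod 644 "$MNT/lib"          # the mistake: directories without x

chmod 755 "$MNT/lib"                           # restore the top-level directory
find "$MNT/lib" -type d -exec chmod 755 {} \;  # and every directory below it
```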
| Accidentally set /lib permissions to 644 |
1,393,534,555,000 |
I edited /etc/resolvconf.conf a few weeks ago to add this line:
name_servers="1.0.0.1 2606:4700:4700::1111,2606:4700:4700::1001"
Today, I went back to edit it again, and sudo vim /etc/resolvconf.conf opens the file read-only and overriding with w! fails. I tried su, and sudo sh -c "vim /etc/resolvconf.conf", and nothing's working.
ls -l /etc/resolvconf.conf
-rw-r--r-- 1 root root 320 Jan 4 00:05 /etc/resolvconf.conf
What's happening here? How is this possible?
|
Most likely the file has the immutable flag set.
To check :
lsattr /etc/resolvconf.conf
----i---------- /etc/resolvconf.conf
To remove immutable flag
chattr -i /etc/resolvconf.conf
| I can't write to a /etc/resolvconf.conf as root anymore? |
1,393,534,555,000 |
I created folder /home/john/Desktop/test.
I want to give it access to user john itself and to user mike.
I created group:
sudo groupadd jm
And added users to same group:
sudo usermod -a -G jm john
sudo usermod -a -G jm mike
Then gave right:
sudo chgrp -R jm /home/john/Desktop/test
sudo chmod -R 770 /home/john/Desktop/test
When I login with mike and write cd /home/john/Desktop/test ,
it writes Permission denied.
What may be the problem?
Output of ls -la:
drwxrwx---+ 2 john jm 4096 Nov 7 15:35 test
|
To summarise the discussions in the comments below the question itself:
For a user to have access to a directory, the user also has to have at least execute permissions on all directories above that directory, and on the directory itself. This may be achieved through either of the user, group or "other" permission bits.
For user mike to have access to the directory /home/john/Desktop/test, the user must therefore have x permissions on all of the directories
/,
/home,
/home/john,
/home/john/Desktop, and on
/home/john/Desktop/test.
If the user is not the owner of a directory in this list, they must be part of a group that has x permissions on it, or the directory must have x permissions set for "others".
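The whole chain of directories can be inspected in one go with namei (from util-linux). Sketched here against /tmp, since the question's path only exists on the asker's machine:

```shell
# replace /tmp with /home/john/Desktop/test on the machine in question
namei -l /tmp   # prints one line per path component with its permissions
```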
Related:
Do the parent directory's permissions matter when accessing a subdirectory?
Execute vs Read bit. How do directory permissions in Linux work?
| One folder - multiple user permission |
1,393,534,555,000 |
I thought that I knew how to set up permissions in Linux. I apparently don't.
I have a user called "web3". This user was automatically created by ISPConfig (A server management application like CPannel).
I also have an application that I installed on the server called "Drush". I installed "Drush" while logged in as root. This application is located at:
/root/.composer/vendor/drush/drush/drush
This file and its containing folder have the following permissions:
-rwxr-xr-x 1 root root
drwxr-xr-x 9 root root
Since the file allows read and execute permissions to everyone, how come every time I log in as the web3 user and try to run the aforementioned application I get the following error message:
/root/.composer/vendor/drush/drush/drush: Permission denied
I have faced this problem before, but I resorted to giving full sudo root permissions to the user I was having problems with. On a local development environment this is not a big deal, but I am managing my own dedicated server now and this sledgehammer solution will not do.
What am I doing wrong?
I'd appreciate any help!
|
/root/ is root's home directory. The permissions on /root/ are hopefully 700, preventing anyone but root from traversing the entire directory tree below it.
You're being prevented from running the binary as a non-root user by permissions further up the directory tree.
Installing anything into /root/ is unusual; you would normally install executable code to be used by multiple users into /opt/ or another shared directory.
So those are the two main things that are 'wrong'. You need to find a better location to install the code, and to ensure the full path is accessible to the users you want to use it.
Lastly, as others have pointed out, while you often need to be root to complete an install, the resulting files should only be owned by root if absolutely necessary. In many cases, specific users are created (such as the www-data user or an oracle user), which limits exposure if the code is compromised. I don't know your application, but it might be worth either installing it as the web3 user, or installing it as root but changing ownership afterwards to a non-privileged user created specifically for the task.
You should resist the urge to open up the permissions on /root/ to fix the issue, and sudo is a sticking plaster over the problem. The real problem is that executable code should not be installed in root's home directory.
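A relocation along those lines might be sketched as follows. The SRC and PREFIX defaults are throwaway paths so the commands can be tried safely; on the real server they would be /root/.composer/vendor/drush and /usr/local, run as root:

```shell
SRC=${SRC:-$(mktemp -d)}        # stand-in for /root/.composer/vendor/drush
PREFIX=${PREFIX:-$(mktemp -d)}  # stand-in for /usr/local

# Simulate the installed tree for the demo.
mkdir -p "$SRC/drush"
printf '#!/bin/sh\necho drush-ok\n' > "$SRC/drush/drush"
chmod 755 "$SRC/drush/drush"

# Copy it out of /root and expose a symlink on the PATH.
mkdir -p "$PREFIX/lib" "$PREFIX/bin"
cp -a "$SRC" "$PREFIX/lib/drush"
ln -s "$PREFIX/lib/drush/drush/drush" "$PREFIX/bin/drush"
```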
| User Permissions problem in CentOS 7: “Permission denied” |
1,393,534,555,000 |
I had never gone outside of Windows before last week, when I was asked to make a VM with Elasticsearch/Logstash/Kibana/JDK 8 on it.
I'm using a regular non sudoer user.
I extracted each tar.gz file and got 4 folder.
First, to launch Elasticsearch I had to make the 'elasticsearch' file executable using chmod +x.
Then it failed because other files needed to be executable.
Then it failed because some files in the bin folder of the java JDK needed to be executable.
Is there another way to install stuff on Linux? Or is it each time a game of "let's see which file I need to chmod"?
Thanks.
Ps : I'm using CentOs 7
|
Is there another way to install stuff on Linux? Or is it each time a game of "let's see which file I need to chmod"?
Yeah, it's called vendors using the native package managers, which solves a whole host of problems related to releasing software, including permissions. Some ISVs just have a habit of letting "the admin will figure it out" become the standard expectation. There are some cases where you have to do that, but a lot of problems have already been solved if vendors would just adhere to a standard workflow.
In your specific case, you probably didn't pass the -p option to your tar command, which would have instructed it to preserve file permissions on the extracted files, if they were included in the tar archive to begin with (you just have to try it and see).
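A quick check of -p's effect, sketched with a throwaway archive:

```shell
cd "$(mktemp -d)"
mkdir src && printf '#!/bin/sh\n' > src/tool && chmod 755 src/tool
tar -czf app.tar.gz -C src tool

mkdir out
tar -xzpf app.tar.gz -C out  # -p: apply the modes stored in the archive
stat -c '%a' out/tool        # 755, as recorded at pack time
```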
To your original question:
Why do I need to set some file as executable?
It's just an access control measure. It gives you a way to keep a file from being run on its own. For example, a directory tree for an installer may include the final executables, but you want to make sure nothing can be executed other than the installer program. Or you're working on a script and don't want anyone to execute it until you're finished.
| Why do I need to set some file as executable? |
1,393,534,555,000 |
The bits 750 produce the -rwxr-x--- permissions on a given file, for the user who owns the file and for "the" group. My query regards the group: which group on the system receives these permissions?
Am I right in assuming that they would be awarded to the group that the owner belongs to? If so, is this always the case?
|
Am I right in assuming that they would be awarded to the group that the owner belongs to?
Wrong. A file can belong to any user and any group; there is no such relationship between user and group.
Every file has a user owner and a group owner. These are separate entities. If you run ls -l it will show you the owner user and owner group of every file, e.g.:
$ ls -l
-rw-r--r-- 1 user_owner group_owner 22K May 2 13:06 file.png
-rw-r--r-- 1 user_owner group_owner 22K May 2 13:12 file.jpg
To change user owner, you use chown command. To change group owner, you use chgrp command.
You can also use chown and specify both, user and a group, by separating them with colon like this:
$ chown user:group file
| With a file with 750 permission, which group has "5" permission? |
1,393,534,555,000 |
So I have a user that is able to go in /etc and read files like httpd.conf.
I would like to deny that user access to any critical locations such as /etc, /var and so on.
What would be the best way to do so? I don't really want to modify my whole folder/file permission just for one user.
|
You might want to set up a chroot jail; have a look at Jailkit.
The jail changes the root, as in /, to a new path.
As a simple example as a start.
First off, you would most likely want the chroot directory on a separate
partition, so that the user can't fill your system partition.
But for the sake of simplicity:
Make chroot directory:
# mkdir /usr/chroot_test
# cd /usr/chroot_test
Make system directories:
# mkdir bin etc home lib var
Add some basic tools:
Here one can use ldd to find dependencies.
# ldd /bin/bash
linux-gate.so.1 => (0xb774d000)
libtinfo.so.5 => /lib/libtinfo.so.5 (0xb770a000)
libdl.so.2 => /lib/libdl.so.2 (0xb7705000)
libc.so.6 => /lib/libc.so.6 (0xb755a000)
/lib/ld-linux.so.2 (0xb774e000)
Copy them to chroot's lib:
# cp /lib/{libtinfo.so.5,libdl.so.2,libc.so.6,ld-linux.so.2} lib/
Now enter chroot
# chroot /usr/chroot_test
bash-4.2# ls
bash: ls: command not found
bash-4.2# pwd
/
bash-4.2# exit
exit
OK. Works. Add some more tools:
# sed '/=>/!d;/=>\s*(/d;s/.*=>\s*\([^ ]*\) .*/\1/' < <(ldd /bin/{ls,cat,vi}) | sort -u
... copy
Etc.
Then add chroot login (http://kegel.com/crosstool/current/doc/chroot-login-howto.html).
But, as mentioned, by using jailkit this can be simplified: http://olivier.sessink.nl/jailkit/howtos_chroot_shell.html.
| Taking away a User's Read/Write/Execute Permission |
1,393,534,555,000 |
I've set some Spotify UI settings in /Users/username/Library/Application Support/Spotify/prefs that I'd like to keep. I'm having an issue where the application overwrites this file every time it launches. I've tried to prevent this from happening with chmod a-w prefs and running ll returns that its permissions are -r--r--r-- with my username as the owner and staff as the group. When I start Spotify, it resets the file to default and changes the permission back to -rw-r--r--. I'm never asked for my sudo password during this. How is this happening?
|
The file and its directory belong to your user, so the application running as your user can do what it likes with them.
In this context, the most likely explanation is that Spotify is deleting and completely re-writing the file. This requires write permission on the directory, not on the file.
You could try removing all write permissions from the parent directory (and even chown root it):
chmod 555 '/Users/username/Library/Application Support/Spotify'
This might cause other problems with the application, but unfortunately there is very little you can do to prevent the app re-writing the file.
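The underlying rule can be sketched in a scratch directory: replacing a read-only file needs write permission on the directory, not on the file itself.

```shell
d=$(mktemp -d)
touch "$d/prefs" && chmod 444 "$d/prefs"  # the file itself is read-only
rm -f "$d/prefs"                          # allowed: we can write the directory
echo 'fresh defaults' > "$d/prefs"        # "the app" re-creates it from scratch
```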
| How is a -r--r--r-- file being modified by an application? |
1,621,975,863,000 |
I am currently trying to set all files with extensions .html in the current directory and all its subdirectories to be readable and writeable, and executable by their owner and only readable (not writeable or executable) by groups and others. However, some of the files have spaces in their name, which I am unsure how to deal with.
My first attempt:
chmod -Rv u+rw,go+r '* *.html'
When I tried my first attempt, I get the following message:
chmod: cannot access '* *.html': No such file or directory
failed to change mode of '* *.html' from 0000 (---------) to 0000 (---------)
My second attempt:
find . -type f -name "* *.html" | chmod -Rv u+rw,go+r
I added a pipe operator in order to send the find command's output to chmod. However, when I tried my second attempt, I get the following:
chmod: missing operand after ‘u+rw,go+r’
After my attempts, I'm still confused on how to deal with spaces in file names in order to change set permissions recursively. What is the best way to deal with this issue? Any feedback or suggestion is appreciated.
|
Use the -exec predicate of find:
find . -name '* *.html' -type f -exec chmod -v u+rw,go+r {} +
(here, adding rw-r--r-- permissions only as it makes little sense to add execute permissions to an html file as those are generally not meant to be executed. Replace + with = to set those permissions exactly instead of adding those bits to the current permissions).
You can also add a ! -perm -u=rw,go=r before the -exec to skip the files which already have (at least) those permissions.
With the sfind implementation of find (which is also the find builtin of the bosh shell), you can use the -chmod predicate followed by -chfile (which applies the changes):
sfind . -name '* *.html' -type f -chmod u+rw,go+r -chfile
(there, no need to add the ! -perm... as sfind's -chfile already skips the files that already have the right permissions).
That one is the most efficient because it doesn't involve executing a separate chmod command in a new process for every so many files but also because it avoids looking up the full path of every file twice (sfind calls the chmod() system call with the paths of the files relative to the directories sfind finds them during crawling, which means chmod() doesn't need to look up all the paths components leading to them again).
With zsh:
chmod -v -- u+rw,go+r **/*' '*.html(D.)
Here using the shell's recursive globbing and the D and . glob qualifiers to respectively include hidden files and restrict to regular files (like -type f does). Add ^f[u+rw,go+r] to also skip files which already have those permissions.
You can't use chmod's -R for that in combination with globs. Globs are expanded by the shell, not matched by chmod, so with chmod -Rv ... *' '*.html (note that those * must be left unquoted for the shell to interpret them as globbing operators), you'd just pass a list of html files to chmod and only if any of those files were directories would chmod recurse into them and change the permissions of all files in there.
| How do I handle file names with spaces when changing permissions for certain files in the current directory and all its subdirectories? |
1,621,975,863,000 |
Working on Raspbian, after compiling with make, I run ls -I myFile in order to see the permissions of the file and whether it is marked as executable, as it should be. But instead of getting something like -rw-r--r-- 1 pi pi 24204 Dec 26 09:49 myFile, I get what I would get if I ran a plain ls, except without myFile listed. What am I doing wrong?
|
The answer is found in man ls,
-I, --ignore=PATTERN
do not list implied entries matching shell PATTERN
In other words, you asked ls to ignore the file, to only list the other files.
If you want a long list, you should use the option -l lower case 'ell'.
-l use a long listing format
So try
ls -l myFile
| ls -I myExecutable is listing the rest of the dir files instead of showing info about the file |
1,621,975,863,000 |
Is there a way to check if you have permission to reboot without actually running sudo reboot? I don't want to try it, because if I have permission, then it will just reboot the server and I don't want this to happen. Just need to check if I have the permission to reboot.
I don't have read permission to /etc/sudoers as I'm not root. Any solution other than trying the command?
|
Only write the wtmp record:
reboot -w
-w: only write a wtmp reboot record and exit.
| How to check if you have permission to reboot without actually running the reboot command? |
1,621,975,863,000 |
After every reboot I have to run chmod o+rw /dev/ttyS0 to be able to print to my POS printer via the serial port from bash.
Is it possible to save permissions and also baud rate, bits, stop bit and parity after the device is closed?
|
You can check the group owner of /dev/ttyS0 with:
ls -l /dev/ttyS0
and then add your user to this group:
usermod -a -G {group-name} username
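To make the permissions persistent without a chmod at every boot, a udev rule is the usual route; a sketch (the file name and mode are assumptions):

```
# /etc/udev/rules.d/99-ttyS0.rules
KERNEL=="ttyS0", GROUP="dialout", MODE="0660"
```

The line settings (baud rate, data bits, stop bits, parity) are properties of the open device rather than the file, so they cannot be "saved" on the device node; they are typically re-applied at boot from a startup script, e.g. stty -F /dev/ttyS0 9600 cs8 -cstopb -parenb (values here are examples, not taken from the question).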
| How to set custom permissions on /dev/ttyS0 so that they persist after reboot? |
1,621,975,863,000 |
If you have 10k rep on AskUbuntu, you can see this answer, which I deleted, because I'm apparently wrong.
Basically, a user's question was about how to change permissions of theme files in /usr/share/themes so they could be opened for writing.
I answered that sudo chmod a+rw /usr/share/themes would do the trick, and admittedly gksudo whateveryouwannado might be the better option, but I probably take a lot of risks with my system that others don't with theirs (and got yelled at by muru for it).
Specifics aside, due solely to the existence of themes in /usr/share/, I was convinced /usr/share was for stuff writable by all users, like themes, about which I stand corrected.
So, where should packages / I put stuff that should be writable and readable by all users? (Pretending for a moment that security is not an issue, which it isn't for me.)
|
The point is, you should not create directories with write permission for all users. At the very least, restrict write access to a group of users.
By convention, all local modifications of a system are in /usr/local/$dir. In your scenario I would advise /usr/local/share/themes.
Do not be so quick to dismiss security concerns. Take, for instance, the example of a web server: world-writable directories are often abused to upload scripts that run with the privileges of the web-server user (often www-data). If you store the themes writable by all users, they will be changed and subverted to distribute malware without any need to escalate to root.
As I commented in the original question, I manage web servers with hundreds of virtual hosts, and what we do is create a group for each virtual host, then add users to that group.
I'll leave here a link about applying setuid and setgid to directories:
https://www.gnu.org/software/coreutils/manual/html_node/Directory-Setuid-and-Setgid.html
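A sketch of such a group-writable, setgid directory. It is done on a throwaway path so it can be tried without root; on a real system you would mkdir /usr/local/share/themes, chgrp it to your shared group, and apply the same chmod as root:

```shell
dir=$(mktemp -d)      # stand-in for /usr/local/share/themes
chmod 2775 "$dir"     # rwxrwsr-x: setgid makes new entries inherit the group
touch "$dir/mytheme"  # files created here keep the directory's group
```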
| What's the convention for "public" files? |
1,621,975,863,000 |
I have read a lot about how great UNIX ACLs are. For example, one good example can be viewed here. However, are there any disadvantages or configurations that cannot be expressed using ACL?
|
This will probably get closed for soliciting opinions or being too broad, but I'll do my best. "UNIX ACL" is a really indirect way of referring to it; I'm supposing you mean POSIX-style ACLs. The chief drawback there is the lack of expressiveness in the operations you can specify, since the scheme just extends the traditional read/write/execute permissions so that you can name more users than just the owner and more groups than just the file's primary group.
Most of these limitations aren't really important though, and rwx does what most people want. Other access controls usually get to "close enough" territory once you factor in file attributes such as making a file immutable or append-only (via chattr).
There are other ACL implementations than just POSIX, though. There's one out there for implementing NFS ACL's at the filesystem level called richacls but support is incomplete for the time being.
POSIX ACL's also don't really control capability execution on their own. They had to add that ability to SELinux so that you could do things like give a user CAP_CHOWN in general but restrict them from doing that with files that have a particular SELinux type.
| UNIX ACL disadvantages [closed] |
1,621,975,863,000 |
I'm trying to run Motion app on a Linux box.
For various security reasons, I'd rather run it as regular user than sudo. However, when I run "service motion start" without the sudo in front, it says:
chown: changing ownership of `/var/run/motion': Operation not permitted
Googling doesn't seem to yield any results relevant to this situation.
|
When I've installed motion in the past it's been done so that it's running as its own designated user, typically motion. I'd suggest doing the same thing here for your installation as well.
EDIT #1
The OP asked how this was done. If you install the motion package from the Debian/Ubuntu or Fedora repositories, everything needed to run motion as a dedicated user (motion in my case) is set up out of the box by default.
If you look at the files that would typically get installed with motion, a SYSV init script is often provided, /etc/init.d/motion. Within this script, on Ubuntu, is a section like this:
case "$1" in
start)
if check_daemon_enabled ; then
if ! [ -d /var/run/motion ]; then
mkdir /var/run/motion
fi
chown motion:motion /var/run/motion
log_daemon_msg "Starting $DESC" "$NAME"
if start-stop-daemon --start --oknodo --exec $DAEMON -b --chuid motion ; then
log_end_msg 0
else
log_end_msg 1
RET=1
fi
fi
;;
If you look at the start-stop-daemon line you'll notice that when motion is started ($DAEMON) the switch --chuid motion is passed, which will run the motion daemon process as user motion.
Something similar is done on my Fedora & CentOS systems as well in their corresponding /etc/init.d scripts for motion.
| Run Linux "motion" app as user |
1,621,975,863,000 |
I have a list of users (user1,user2,user3,superuser). user1, user2 and user3 belong to a usergroup called normalusers . Now, I need to issue the access control list command for the user superuser to view the home pages of the users (user1, user2, user3). I have a setfacl command as below.
setfacl -m user:superuser:rx /home/user1
The above command works perfectly fine and the user superuser has access to user1 directory. Now, I need to issue the rights to the remaining users too. I wanted to apply the ACL rules to all the users inside the home directory. So, I issued the following command.
setfacl -m user:superuser:rx /home/
However, the above command did not allow me to view all the users. I was wondering if the setfacl command can be modified to access all the home directories belonging to a particular group.
|
You need the --recursive switch:
setfacl -R -m user:superuser:rx /home/
Otherwise the only thing that you are changing is the /home directory acl.
| setfacl access issues |
1,621,975,863,000 |
In Linux, I had created a userid. After creating this, I encountered a problem that the .EXE Files are not opened on simple click. They seem to be not privileged for my user account.
How can I overcome from this?
|
Assuming these .exe files were actually compiled for Linux (and your specific architecture), you need to ensure they have execute permissions:
chmod +x your_file_names_here
To make sure these files are actually meant to run on Linux, check the output of
file one_file_name_here
| Privileges on Linux? |
1,621,975,863,000 |
I'm trying to execute this command but it does not get the expected results
chmod -R u-x+Xrw,g-x+Xrw,o-x+Xr *
I want all my directories to be executable, and all my files to NOT be, this is for a storage folder of a web server where I dont want anyone executing anything. just read and write, but i want it's directories to be traversable.
For some reason, the files end up being executable too, what am I doing wrong?.
EDIT: I know how to solve the problem with solutions shown in other answers, however i would like to know why the command as written here keeps the x flag on files when it seems it would remove it.
|
Since you want different permissions for files (read & write) vs directories (read & execute), I'd recommend using two separate commands instead of trying to combine them into one. The wildcard * will match files and directories.
Secondly, the X permission adds "execute" ...
if the file is a directory or already has execute permission for some user
... so if the file started off with any (user, group, or other) execute permission, then it will end up with executable permissions.
Consider two separate commands:
find /base/path -type d -exec chmod u+rx,g+rx,o+rx {} +
and
find /base/path -type f -exec chmod u-x+rw,g-x+rw,o=r {} +
Adjust the permission sets according to your own policies; the above commands:
on directories: add read & execute for everyone
on files: removes execute for user and group
on files: adds read & write for user and group
on files: sets other to only read
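As for why the original one-liner keeps files executable: chmod applies the operations in each clause left to right, so after u-x the file still has execute for group and others, and the +X that follows immediately puts user execute back (and likewise for the other clauses). This is easy to reproduce with a throwaway file:

```shell
cd "$(mktemp -d)"
touch demo
chmod 755 demo                        # rwxr-xr-x to start
chmod u-x+Xrw,g-x+Xrw,o-x+Xr demo     # the command from the question
stat -c '%A' demo                     # -rwxrwxr-x: every removed x came back via X
```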
| Adding execute for specific users and groups on folders and remove it from files |
1,621,975,863,000 |
I have two partitions that I mount during boot, which aren't on my boot drive (SSD), but on internal HDD. I can access the files fine using Nautilus, I can edit them (e.g. with PhpStorm) but when I try to open a file using the "browse" functionality (e.g. when uploading a file using Firefox or opening a repository in GitKraken) I get a "permission denied" error. I can't even upload a picture of the problem here, so this is what it says:
Could not read the contents of www.
Error opening directory '/media/www': Permission denied
I really don't know what's wrong. Anyone have an idea?
The drives in question are the last 2 lines in my fstab file:
/etc/fstab
UUID=5221e846-702d-47b0-bc92-aad063ae8fcd / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda2 during installation
UUID=6C58-64C5 /boot/efi vfat umask=0077 0 1
/dev/disk/by-uuid/01D398F590634270 /media/www auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/01D2EB44D1D7F3E0 /media/wouter auto nosuid,nodev,nofail,x-gvfs-show 0 0
mount | grep /dev/sd
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sdb3 on /media/www type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096,x-gvfs-show)
/dev/sdb2 on /media/wouter type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096,x-gvfs-show)
|
I solved the problem. It is a problem with Firefox' permissions.
Open "Ubuntu Software"
Find Firefox in the "Installed" tab and click on it
Choose "Permissions"
Make sure "Read/write files on removable storage devices" is on
| Permission denied when browsing for files using Ubuntu 18.04 |
1,621,975,863,000 |
I am running a PHP script on my Apache server and from the script I need to copy some files (to run a Bash script that copies files). I can copy to a directory /tmp with no problems, but when I want to copy to /tmp/foo then I get this error:
cp: cannot create regular file '/tmp/foo/file.txt': Permission denied
even though the permissions for the directory /tmp and /tmp/foo are set to the same value.
Do you know what is the problem?
|
The /tmp directory is writable by all users, but if you created /tmp/foo under your own account, its permissions apply only to you! If you want to make it writable for other users (or programs), change its permissions with this command:
chmod 777 /tmp/foo
If there are already other files inside this directory, add the -R flag to the above command. (Note that mode 777 makes the directory world-writable; the chown approach in the update below is the safer fix.)
Update:
Use this command to change /tmp/foo owner from your own to apache default user:
sudo chown www-data:www-data /tmp/foo -R
also please check your apache2 configuration to see which user it has for running the php scripts.
| cp: cannot create regular file: Permission denied |
1,621,975,863,000 |
we want to check that all files and folders under /hadoop/hdfs are with permissions - hdfs:hadoop
is it possible to do this test with find command ?
in case find capture files/folder that not have this permissions then find will print these files/folders
|
You may list all entries under /hadoop/hdfs that does not belong to user hdfs and group hadoop with
find /hadoop/hdfs ! '(' -user hdfs -group hadoop ')' -ls
The -ls at the end will list the found pathnames in a format that is reminiscent of the output from ls -l.
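If the goal is also to fix the stragglers, the same predicate can feed chown. A hedged sketch, wrapped in a hypothetical helper so it works for any tree/user/group:

```shell
# Sketch: chown everything under a tree that doesn't already match
# the wanted owner and group (run as root for the real hdfs:hadoop case).
fix_ownership() {
    # $1 = tree, $2 = user, $3 = group
    find "$1" ! '(' -user "$2" -group "$3" ')' -exec chown "$2:$3" {} +
}

# e.g. (as root): fix_ownership /hadoop/hdfs hdfs hadoop
```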
| find + how to verify that all files and folders are with groups and owner hdfs:hadoop |
1,621,975,863,000 |
I'm curious if anyone can help me with what the best way to protect potentially destructive command line options is for a linux command line application?
To give a very hypothetical scenario: imagine a command line program that sets the maximum thermal setting for a processor before emergency power off. Let's further pretend that there are two main options, one of which is --max-temperature (in Celsius), which can be set to any integer between 30 & 50. There is also an override flag --melt which would disable the processor from shutting down via software regardless of how hot the processor got, until the system electrically/mechanically failed.
Certainly an option like --melt is dangerous, and could cause physical destruction in the worst case. But again, let's pretend that this type of functionality is a requirement (albeit a strange one). The application has to run as root, but if there was a desire to help ensure the --melt option wasn't accidentally triggered by confused or inexperienced users, how would you do that?
Certainly a very common anti-pattern (IMO) is to hide the option, so that --help or the man page doesn't reveal its existence, but that is security through obscurity and could have the unintended consequence of a user triggering it, but not being able to find out what it means.
Another possibility is to change the flag to a command line argument that requires the user to pass --melt OVERRIDE, or some other token as a signifier that they REALLY mean to do this.
Are there other mechanisms to accomplish the same goal?
|
I'm assuming you're looking at this from the POV of the utility programmer. This is broad enough that there isn't (and can't be) a single right answer, but some things come to mind.
I think most utilities just have a single "force" flag (-f), that overrides most safety checks. On the other hand, e.g. dpkg has a more fine-grained --force-things switch, where things can be a number of different keywords.
And apt-get makes you write a complete sentence to verify in some cases, like removing "essential" packages. See below. (I think it's not just a command line option here, since essential packages are e.g. those that are required to install packages, so undoing a mistaken action may be very hard. Besides, the whole operation may not be known up front, before apt has had a chance to calculate the package dependencies.)
Then, I think cdrecord used to make the user wait a couple of seconds before actually starting the work, so that you had a chance to verify the settings were sane while the numbers were running down.
Here's what you get if you try to apt-get remove bash:
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
bash
0 upgraded, 0 newly installed, 2 to remove and 2 not upgraded.
After this operation, 2,870 kB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
?] ^C
Which one to choose is up to you as the program author - you'll have to base the decision on the danger level of the action, and on your own level of paranoia. (Be it based on caring about your users, or on the fear of getting blamed for the mess.)
Something that has the potential to cause the processor to literally (halt and) catch fire probably goes in the high end of the "danger" axis and probably warrants something like the "type 'Yes, do what I say'" treatment.
That said, one thing to realise is that many of the actual kernel-level interfaces are not protected by any means. Instead, there are files under /sys that can change things just by being opened and written to, no questions asked apart from the file access permissions. (i.e. you need to be root.)
This goes for hard drive contents too (as we should know), and, in one case two years back, to the configuration variables of the motherboard firmware. It seems it was possible to "brick" computers with a misplaced rm -rf.
No, really. See lwn.net article and the systemd issue tracker.
So, whatever protections you would implement, you would only protect the actions done using that particular tool.
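A minimal sketch of the "type the exact phrase" guard, modeled on the apt-get prompt above (the function name and the dangerous action are illustrative):

```shell
confirm_dangerous() {
    printf 'You are about to do something potentially harmful.\n'
    printf "To continue type in the phrase 'Yes, do as I say!'\n?] "
    IFS= read -r reply
    [ "$reply" = 'Yes, do as I say!' ]   # non-zero exit tells the caller to abort
}

# usage sketch:
#   confirm_dangerous || exit 1
#   melt_processor    # hypothetical dangerous action
```

Requiring an exact, hard-to-type-by-accident sentence (rather than a y/n prompt) forces the user to slow down, which is the whole point for an option this destructive.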
| How to protect potentially destructive command line options? |
1,621,975,863,000 |
Imagine this scenario in a LAN: one Linux NFS fileserver (srv) and three Linux clients (A, B, C). There are files / directories on srv with root ownership and no access rights granted to non-root users. Those are the files this question is concerned with. I'll call them "root-restricted files".
A is the local sysadmin. He or she will need to access root-restricted files on srv freely.
B is a local developer who has sudo rights on his or her machine. However, B should not be able to read or write (or traverse) root-restricted files/directories on server. In fact, B should also not be able to access files on srv not owned by groups B belongs to, even though B has sudo rights.
C is a local user with no sudo rights. C should have access to normal files on srv, but no permissions to local or server root-restricted files.
Given:
srv at 192.168.1.1
A at 192.168.1.2
B at 192.168.1.3
C at 192.168.1.4
Would this /etc/exports accomplish the goals?
/srv/nfs 192.168.1.2(rw,no_root_squash)
/srv/nfs 192.168.1.3(rw,root_squash)
/srv/nfs 192.168.1.4(rw,root_squash)
Which other NFS options are recommended? But most importantly, is root_squash capable of achieving this solution if we assume the IP address cannot be spoofed?
Next, assuming a developer with sudo rights on their machine could spoof their IP address and look like 192.168.1.2, which has no_root_squah, what solution is needed? LDAP + Kerberos? Something else?
Can our goal be accomplished with NFS at all? Is something like SSHFS or Samba 4 a better solution?
(Editing suggestions welcome if "root-restricted files" is not the best term.)
|
NFS simply uses the UID/GID provided by the client. Using the root_squash option in exports for the share maps the root user to the anonymous user (nobody/nogroup). This doesn't prevent a malicious/compromised client from providing some other UID/GID, which might allow access to other files.
If you want to secure your NFS server from spoofed users, you need to use Kerberos to authenticate your NFS users. NFS with Kerberos also provides optional data integrity and encryption. To get a quick overview of what is involved, there is a quick howto in Ubuntu wiki.
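With kerberized NFS in place, the exports entry can then demand a Kerberos security flavor instead of trusting client IPs. A hypothetical sketch (sec=krb5 is authentication only, krb5i adds integrity, krb5p adds encryption as well):

```
# /etc/exports — hypothetical entry
/srv/nfs 192.168.1.0/24(rw,sec=krb5p,root_squash)
```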
| How to keep users with local sudo rights from having sudo rights on NFS file server? |
1,621,975,863,000 |
I want to create a directory where multiple users will be able to contribute to the same files and I want each file that any user creates to have write permission by default for everyone in the group.
I did setgid for a directory and all new files have the right group. However new files are still created without write permissions in the group.
Here is an illustration of what I'm trying to do:
(as a root user):
mkdir --mode=u+rwx,g+rws,o-rwx /tmp/mydir
chown root.mygroup /tmp/mydir
touch /tmp/mydir/test.txt
Then when I do ls -la /tmp/mydir/ I'm getting
drwxrws--- 2 root mygroup 4096 Sep 12 12:04 .
drwxrwxrwt 11 root root 4096 Sep 12 12:04 ..
-rw-r--r-- 1 root mygroup 0 Sep 12 12:03 test.txt
So, write permission never gets populated for a group for all new files authored by members of that group. I understand that other group users still can override that by doing chmod g+w for specific files such as test.txt in the example above and this is the right behavior in most of the cases, but is there a way to recursively alter that for a specific directory and allow write permissions to be automatically set for a group as well as the owner for all new files within that dir?
|
Default permissions for new files and folders are determined by the umask. If you configure the default umask for your users to 002, group permissions will be set to rw for new files (and rwx for new directories). Configuring the umask for all users can be done using pam_umask.
To use pam_umask, on Debian based distributions you should configure the module in /etc/pam.d/common-session by appending following to the end of the file:
session optional pam_umask.so
Then configure the desired umask value in /etc/login.defs.
Note that the mask configured using PAM isn't applied to all Gnome applications (for details, see How to set umask for the entire gnome session). However sessions launched from ssh or tty are not affected.
If you do not want to alter the default umask on your system, you can use POSIX Access Control Lists. When ACL is set for a directory, new files inherit the default ACL. ACLs can be set and modified using setfacl and getfacl respectively. Some file systems might need additional mount flag to enable ACLs.
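The umask effect is easy to see with throwaway files; the values below follow the 002 recommendation above:

```shell
cd "$(mktemp -d)"
umask 002; touch group_writable      # created as 666 & ~002 = 664
umask 022; touch group_readonly      # created as 666 & ~022 = 644
stat -c '%a %n' group_writable group_readonly
```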
| Auto set write permission for a group |
1,621,975,863,000 |
I have 2 Ubuntu servers: 12.04.5 LTS and 16.04.1 LTS, in a local network, where I'm a administrator of the both servers and can be the super users on them.
Let's say each server is A and B, and I log in to the server A now.
When I want to copy a file from A to a directory in B, where root authority of B is required to put the file, how can I do that?
My trial was as follows but it didn't work due to no authority for the server B:
sudo scp /foobar/foo/bar.txt user@serverB:/bar.txt
scp: /bar.txt: Permission denied
The sudo power affects only the permission of the source and doesn't affect the permission of the destination directory.
Of course, If I change the permission of the destination directory appropriately, I can copy the file without no permission error. But changing the permission every time when I copy files is a little annoying.
And root login is not allowed to both servers as the default configuration of Ubuntu is so.
If any of you know some good way, please teach me.
I use bash shell.
|
If you can read the file on the source machine as a regular user (rather than root), consider a pull scp rather than a push one:
serverB:~$ sudo scp user@serverA:/foobar/foo/bar.txt /bar.txt
If you cannot read the file on the source machine, you'll need to do the two steps you described. There is no way to have sudo work on more than one machine.
| Can I extend the power of sudo to the distination directorys/files on bash shell? |
1,621,975,863,000 |
I'm looking for all the alternative ways you can think of to list the files which are executable by anyone (owner, group, others) in the current directory and subdirectories.
For alternative ways, I mean those not using the find command:
find -L . -type f -perm -111
find -L . -type f -perm -a=x
One method that I'd like to see is a combination of ls and grep.
|
By using ls and grep:
ls -lAR | grep "^\-..x..x..x"
-l: long listing format, which shows the permissions as rwx characters.
-A: include hidden entries (all except . and ..).
-R: recurse into subdirectories.
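A quick sanity check of the pattern with throwaway files. Note one caveat: the pattern matches the plain x bit only, so setuid/setgid executables (which show s in those positions) would be missed:

```shell
cd "$(mktemp -d)"
touch exec_all exec_owner plain
chmod 755 exec_all      # rwxr-xr-x: executable by all
chmod 700 exec_owner    # rwx------: owner only
chmod 644 plain         # rw-r--r--: not executable
ls -lA | grep "^\-..x..x..x"   # only exec_all matches
```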
| Alternative ways to list files executable by all |
1,621,975,863,000 |
I have 2 users, GM and FTP. I want GM to have read access, and FTP to have all access. How can I do this?
|
You need to first understand that permissions are file-based.
If you want file /var/example.txt readable by GM and writable by FTP, you could set its owner to FTP (chown FTP /var/example.txt) and set its permissions to 644 (chmod 644 /var/example.txt or chmod u=rw,go=r /var/example.txt). This will make the file readable by everyone but only writable by FTP.
If you only want GM (rather than everyone) to have read access, put both users in the same group, change the file's group to it with chgrp, and set the permissions to 640.
For more details on file permissions, refer to the chmod man page.
| Chmod/chown permissions question [closed] |
1,621,975,863,000 |
I have a sed script that changes some content in /etc/shadow. The actual change isn't important, I will put it just as an example:
root@device:~ sed -i 's/root:\(.*\):0:0/root:\1:10:0/' /etc/shadow
sed: can't create temp file '/etc/passwdH5HWP7': Permission denied
As the output shows, there seems to be some permission error, but I am running the command as root.
The sed being used is from BusyBox v1.22.1 on an embedded distribution.
If I try the example in the home folder there is no error. Also, I can edit /etc/shadow normally via a text editor.
Is sed creating files as a nonroot user?
|
The reason is probably that / (containing /etc) is a read only filesystem, but has a symlink for /etc/shadow, /etc/passwd, and other dynamic files that lands on a read-write filesystem.
This will allow you to edit the shadow and passwd files directly. The sed -i fails because its implementation doesn't actually update in place. Rather, it creates a temporary file next to the target, writes the changes to that, and then replaces the original file with the temporary copy. The error message you are seeing says that sed can't create the (temporary) file /etc/passwdH5HWP7.
Solution? Don't use -i: write the result to a temporary file on a writable filesystem yourself, then copy it back over the original. (The optional argument to -i only sets a backup suffix; it doesn't relocate the temporary file.)
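A sketch of the redirect-and-copy-back approach, shown on throwaway files; on the real system the target would be /etc/shadow and the temporary file would live on the writable filesystem:

```shell
cd "$(mktemp -d)"
printf 'root:x:0:0\n' > shadow                      # stand-in for /etc/shadow
sed 's/root:\(.*\):0:0/root:\1:10:0/' shadow > shadow.new
cat shadow.new > shadow   # cat > writes through a symlink, keeping it intact
rm shadow.new
cat shadow                # root:x:10:0
```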
| Is sed run as a different user? |
1,621,975,863,000 |
I sort-of-cloned an existing Debian 7.x distro by copying the contents of the root filesystem (not the special dirs of course) to another HDD. I booted, things seem to run, but - I get some weird errors. One of them - sudo and su wouldn't run, complaining about lack of a setuid permission for the binaries. Well, I fixed that, and now they don't complain, but - maybe there are other files whose permission has been screwed up during the copy? Is there someway to verify and fix all relevant file permissions?
|
Original host (or another similar freshly installed distro):
getfacl -R / > permissions.acl
your host (run setfacl from the root directory, since getfacl strips the leading "/" from the recorded paths):
setfacl --restore=permissions.acl
from here
| setuid (and other) permissions lost when copying / elsewhere - what to do? |
1,621,975,863,000 |
I maintain a shared public computer. Every user has their own directory in /home. We don't like to make /home a network mount because it takes a long time for new users to login, but we would like to set it so users can't write to their /home directories. We have network storage available to users that they should use instead, but we have no way to force them to use it.
Ideally I want to prevent them from writing data to the local machine at all, instead directing them to write to the storage server. If that's not possible, I'd at least like to limit their disk quota on the local machine so they can't consume all the space. How can I go about doing that?
|
You could create per directory limits by mounting filesystem image files on subdirectories in /home. This won't disable /home, but it will solve your problem in so far as it will prevent people from writing more than a fixed amount.
A filesystem image file works like this:
Create an empty file of a fixed size, e.g. 100 MB:
dd if=/dev/zero of=/var/home/bob.img bs=1024 count=100000
You don't have to use /var/home for these; it's a directory which otherwise would not exist. These files should be owned by root and set to mode 600 so that no one else can read them. Make sure, obviously, that you have room to create a 100 MB file for every user, but keep in mind you will be able to eliminate everything in /home at the same time and free up that space.
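On systems with fallocate(1), the image can be created faster than with dd while still reserving the space up front (a sparse file from truncate -s would not reserve blocks, which defeats the quota-like behavior). Shown with a temp-directory path so the commands are harmless:

```shell
img="$(mktemp -d)/bob.img"   # stand-in for /var/home/bob.img
fallocate -l 100M "$img"     # reserves the blocks, unlike truncate -s
stat -c '%s' "$img"          # 104857600 bytes
```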
Create a filesystem in the image:
mke2fs -m 0 /var/home/bob.img
This will warn you that bob.img is not a block special device -- proceed anyway.
The first time you do this, presumably you want to copy in the user's existing home, so you'll have to mount it temporarily:
mount /var/home/bob.img /mnt/tmp
mv /home/bob/* /mnt/tmp
mv /home/bob/.* /mnt/tmp
The last one will ask if you want to overwrite . and ... Don't. It's just for moving "hidden" dot files in the toplevel, which the first mv will have left behind. You could also use a filebrowser or some other method to do the move. /home/bob should now be empty, and you can move the mounted image there:
umount /mnt/tmp
mount /var/home/bob.img /home/bob
Voila, everything seems to be back the way it was -- except /home/bob is now an independent filesystem, and user bob won't be able to put more than 100 MB there. Also, because it is an existing image file, this space will be reserved for bob and won't get taken up with anything else.
You'll need a init service to mount all these at boot time; it could be as simple as:
#!/bin/bash
for img in /var/home/*.img; do
    name=$(basename "$img" .img)
    mount "$img" "/home/$name"
done
They should be unmounted automatically when the system shuts down. The data is as safe as it would be anywhere else.
| How to disable local file writes? |
1,621,975,863,000 |
If user A is member of group foo, is it then possible for A to share a file for all members within foo without root permissions?
chown foo:foo file
Is not permitted without privileges.
A can say
chmod o+rw file
but if A do not want to make it public for other users than those within foo, that does not work.
|
You can create a directory for the group where you want the shared files to live.
Then, you can set setgid bit on the directory, which forces all newly created files to inherit the group from the parent directory. This way you don't need to chgrp the files.
So, for example:
mkdir /shared
chgrp sharegroup /shared
chmod g+swr /shared
Now, if any user creates a files in /shared, its owner will be that user and group will be sharegroup.
You also need to make sure that the default umask for users is 002 (so new files are created with mode 664), which means group members can write to files too.
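The group inheritance can be checked with throwaway files; the caller's own primary group stands in for sharegroup so the commands run unprivileged:

```shell
cd "$(mktemp -d)"
mkdir shared
chgrp "$(id -gn)" shared
chmod g+rwxs shared              # setgid, group-writable directory
touch shared/newfile
stat -c '%G' shared/newfile      # same group as the directory, not the creator's
```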
| Store a file for all users within a group |
1,621,975,863,000 |
note: I don't want to use a udev rule.
I need to change (programmatically) the permissions of some device. To understand what I have to do (in code) I want to do this using just chmod command.
So far, I've tried this:
root# ls -l /dev/sdb
brw-rw-rw- 1 root disk 8, 16 Apr 7 05:27 /dev/sdb
root# chmod 0600 /dev/sdb
root# ls -l /dev/sdb
brw------- 1 root disk 8, 16 Apr 7 05:27 /dev/sdb
as you can see, /dev/sdb has read and write permissions only for the owner (root). But I'm still able to create new files and read files from my connected flash drive.
What am I missing? How can I use chmod to prevent users from writing to some device?
|
If I understand you right then there is some file system on /dev/sdb that you have mounted. What matters here are the permissions in the file system that resides on /dev/sdb, the permissions of /dev/sdb are completely irrelevant for your question. Except that with permissions 0666 anyone can bypass the access control mechanisms for that file system and access the content on the device arbitrarily, but this is a different issue.
If you want to restrict access to the files within the file system, then you have to assign appropriate ownership and permissions to the files (beginning with the root of that file system). For file systems like FAT mount(8) lets you set the ownership and access permissions for all files within the file system. If you want to expose the entire tree to certain users only and hide it from all others, then mount it somewhere where only those users have access to. But note that any user can see that something is being mounted (mount(8) or df(1) will show them).
chroot(1) is not going to help you at all.
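For the FAT case mentioned above, the ownership and permissions can be fixed at mount time. A hypothetical fstab line — the device, mount point, and uid/gid values are examples only:

```
# /etc/fstab — assumption: a vfat filesystem on /dev/sdb1
/dev/sdb1  /mnt/usb  vfat  uid=1000,gid=1000,umask=077,nofail  0  0
```

Here umask=077 means only the user with uid 1000 can read or write anything on the volume.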
| Can I change permissions on a device with chmod? |
1,621,975,863,000 |
I have some folders owned by the account/user "openerp", and I created a new folder (web_theme) under this directory.
When I try to open/read this folder (web_theme) it throws "Do not have the Permissions".
Note that the new folder is owned by root:
drwxrwxr-x 7 openerp openerp 4096 Oct 7 10:25 web
drwxrwxr-x 4 openerp openerp 4096 Oct 7 10:30 web_calendar
drwxrwxr-x 4 openerp openerp 4096 Oct 7 10:30 web_rpc
drwxrwxr-x 4 openerp openerp 4096 Oct 7 10:30 web_tests
drwx------ 4 root root 4096 Oct 18 02:42 web_theme
Tried below commands
su/sudo chmod -R 0770 web_theme
chmod -R 0755 web_theme
chmod 666 web_theme
It Throws Error:
chmod: changing permissions of `web_theme': Operation not permitted
chmod: cannot read directory `web_theme': Permission denied
How do I resolve this?
|
an addon to cbliard's answer:
if you find the numeric notation of permissions a bit tedious, you can also use a symbolic form (which i find easier to read).
the following will allow all users (that is: the owner of the file, its group and all others) to both read all files/directories within web_theme and to execute those that need it. the capital X will mark as executable only those files/dirs that are directories or already executable "for some user". in practice this means that it will mark directories as "executable" (which is needed to traverse them), but not ordinary files:
chmod -R a+rX web_theme
as cbliard has stated you have to be superuser (root) or the owner of the files/directories (in this case this is also root) to run this command. since sudo seems to be forbidden for this particular task on your machine, try something like:
openerp@vv:~$ cd ~/instances/openerp/webclient/addons/web_theme
openerp@vv:...$ su
root@vv:...# chmod -R a+rX .
| Change Folder Permissons in Debian |
1,621,975,863,000 |
I have read that if folder has only x set permission to execute it actually means that you are permitted to search this directory. So how to search it?
|
You misunderstand. “Search” permission is a bit of a misnomer; if you have execution permission but not read permission on a directory, you can access a file in this directory only if you know its name. That is, given a name, you can search the file with this name (and, more importantly, you can access the file that you find). You do that in the usual way, by accessing the file directoryname/filename. You can't browse the list of entries in the directory, so you can't make more advanced searches such as pattern matching. That would require the read permission; the read permission is precisely what lets you browse the list of entries in the directory.
See also Execute vs Read bit. How do directory permissions in Linux work?
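A quick demonstration with throwaway files — note this must be run as a regular user, since root bypasses these permission checks entirely:

```shell
cd "$(mktemp -d)"
mkdir d
echo hello > d/known.txt
chmod 111 d                      # execute (search) only, no read
content=$(cat d/known.txt)       # works: we know the file's name
echo "$content"                  # hello
ls d 2>/dev/null || echo 'listing denied'   # listing needs the read bit (non-root)
chmod 700 d                      # restore so the temp tree can be cleaned up
```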
| How to search folder with only x set (execution) permission |
1,621,975,863,000 |
I used these instructions to install mongodb on my OS X machine. I did not make the files owned by root though, I used my local user (markdsievers), and installed to /usr/local/mongodb and usr/local/mongodb_data. I've chmod'ed and chown'ed all files and subdirectories of those to rwxrwxr-x markdsievers staff.
As user markdsievers I can start up the database without error using:
$ sudo mongod --dbpath=/usr/local/mongodb_data
However, if I start it with:
$ mongod --dbpath=/usr/local/mongodb_data/
I get:
Unable to create / open lock file for lockfilepath: /usr/local/mongodb_data/mongod.lock errno:13 Permission denied
What am I missing here?
|
First, an aside: storing your mongo data in /usr/local/mongodb_data seems a little strange; most behind-the-scenes storage is in /var/, or, for self-installed applications, /var/local/. See hier(7) or the Filesystem Hierarchy Standard for more details. (The FHS is mis-named: because it is descriptive, not prescriptive, it is not a standard. But it is worth reading.)
Your mongodb.lock file is owned by root because you executed:
sudo mongod --dbpath=/usr/local/mongodb_data
sudo(8) executes programs with a different effective user id (see seteuid(2), setreuid(2) for details). Because you didn't specify any other user with a -u option, sudo(8) defaulted to the root account. Thus, your lock file was created with root owner and group. (Compare sudo id with id to see what changes.)
What is strange though, is that the lock file should have been removed when you stop the mongod database. Be sure you're stopping it properly -- not only so the lock files are removed -- but also so you know data has been properly saved to disk.
| permission denied when executing a binary |
1,621,975,863,000 |
The terms "file permission" and "file mode" are often used interchangeably. However, some tools exclusively use one term or the other. Interestingly, the venerable chmod tool specifically refers to "file mode".
Is there a technical or historical difference between them?
|
“Mode” is defined as
A collection of attributes that specifies a file's type and its access permissions.
They aren’t interchangeable, a file’s mode is more than its permissions.
A file’s mode can be retrieved using stat, and the various values extracted using macros defined in sys/stat.h.
See Understanding UNIX permissions and file types for more details on file types and permissions.
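The distinction also shows up in the stat(1) utility, which exposes the same st_mode field: %f prints the whole raw mode (type bits included) while %a prints only the permission bits. A small sketch, using /tmp purely as a convenient example path:

```shell
stat -c 'raw st_mode (hex): %f' /tmp     # e.g. 43ff; the 4000 part is S_IFDIR
stat -c 'file type: %F' /tmp             # decoded from the type bits
stat -c 'permissions only: %a' /tmp      # e.g. 1777; just the permission bits
```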
| Is there a difference between file permissions and mode? |
1,621,975,863,000 |
I have the following directory
$ ls -al
total 16
drwxr-xr-x 4 root root 4096 Oct 21 14:50 .
drwxr-xr-x 24 root root 4096 Oct 21 11:28 ..
drwxrwx--- 8 root mygroup 4096 Jan 12 2022 foobar
Is there a way to configure foobar's permissions, so that each new file created belongs by default to mygroup group?
|
Setting the setgid bit for a directory should do precisely what you're after (assuming your Unix-like OS and/or filesystem supports it):
chmod g+s foobar
After that, the permissions should look like:
drwxrws--- 8 root mygroup 4096 Jan 12 2022 foobar
If I remember my Unix history correctly, the BSD branch of the Unix family tree used to set the group owner of any new files based on the group of the directory the file was created in by default. On the other hand, the SystemV-based branch set the group ownership based on the primary group of the process creating the file.
In the strict old SysV style, the commands sg and newgrp would have been important tools when users were working with group projects. But then SunOS 4.x and System V Release 4 allowed the system administrator to use the setgid bit on directories to choose whether the BSD or classic SysV group ownership assignment style was in effect in that directory.
This "setgid on directories" behavior was adopted by several OSs of the Unix family, including Linux. But I think there are OSs that didn't adopt it (maybe in the BSD side, as the SysV branch's "optional behavior" was already their standard behavior?). And the choice of filesystem type might also affect whether this feature is available or not.
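A quick sketch of the Linux behaviour with throwaway names (the chgrp step from the question is omitted here, so the inherited group is simply the creating user's own group):

```shell
mkdir foobar
chmod 770 foobar
chmod g+s foobar
stat -c '%A' foobar          # drwxrws--- : the lowercase s marks setgid

# On Linux, files created inside now inherit foobar's group owner.
touch foobar/newfile
stat -c '%G' foobar/newfile
```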
| Configure directory so that each new file has always same group ownership |
1,621,975,863,000 |
Is there a philosophy behind running a folder as an executable in linux?
user@node main % ls -lash ./bin
total 0
0 drwxrwxrwx 2 user staff 64B May 23 21:04 .
0 drwxr-xr-x 6 user staff 192B May 23 21:04 ..
user@node main % ./bin
zsh: permission denied: ./bin
Permission denied implies that it may be allowed. If it's not, then why is it permission denied rather than something like can't run a directory?
Or is it just a weird artifact of the API when directories are involved in this way?
P.S. I am aware that x flag is adopted in the directory context to allow/deny cd-ing into them and long-listing (ls -l) them, this is not what this question is about.
P.S.S. In Python, a directory can be treated as a python "executable" if it has a certain file structure inside. (I.e. It's possible to pass a directory instead of a python file to be run by the python interpreter).
|
Running a folder isn’t possible using Linux APIs. In particular, execve returns EACCES when an attempt is made to do so — this is what Zsh represents as “permission denied”, probably because that error can also be returned if execute permission is denied. The canonical error message for EACCES is “Permission denied”; execve uses it to cover a variety of errors, including any attempt to run a file which is not a regular file, which is what is happening here.
Most shells behave like Zsh, but a couple handle this differently; for example, Bash outputs
bash: ./bin: Is a directory
Zsh can also be instructed to “run” a folder by changing to it, with the autocd option (setopt autocd). fish always changes to a folder if you try and run it.
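The underlying failure can be observed from any POSIX shell: attempting to exec a directory fails, each shell prints its own message for the same execve error, and the exit status is 126 ("found but cannot execute"). A sketch, with /tmp standing in for any directory:

```shell
# Each shell prints its own message for the same underlying execve failure.
sh -c '/tmp' 2>&1 || echo "exit status: $?"   # 126: found but cannot execute
```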
| Is "running a folder" possible in Linux? |
1,621,975,863,000 |
My user is already in 'wheel' group of CentOS and the 'ping' command doesn't work:
ping: socket: Operation not permitted
A guide here about using 'chmod +s' for 'ping':
https://github.com/MichaIng/DietPi/issues/1012#issuecomment-532840857
However, I saw another command 'chmod g+s', how does it differ from 'chmod +s'?
|
chmod +s sets both the setuid and setgid bits, while chmod g+s sets only the setgid bit (and chmod u+s sets only the setuid bit).
The setuid and setgid bits let the program run as the owner and/or the owner's group, rather than as the user and group of whoever actually started it. For example, a program may always run as if it had been started by root.
Let's say you have a file with ownership root:adm... chmod g+s would give the program access to some logs (bad?)... chmod +s would in addition let the program run with full root privileges (much, much worse!).
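A quick sketch on a scratch file makes the difference visible in the mode bits (note that re-applying a plain numeric mode clears the set-id bits on a regular file):

```shell
touch demo_bin
chmod 755 demo_bin

chmod g+s demo_bin
stat -c '%A %a' demo_bin   # -rwxr-sr-x 2755 : setgid only

chmod 755 demo_bin         # a plain numeric mode clears the set-id bits again
chmod +s demo_bin
stat -c '%A %a' demo_bin   # -rwsr-sr-x 6755 : setuid and setgid together
```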
| What's the difference between 'chmod g+s' and 'chmod +s' |
1,621,975,863,000 |
I've just installed the latest Linux Mint version and I'm trying to set up some Steam games. I only have 10 GB on my home directory, so I'm willing to create a game library inside the root directory (/usr/games/steamlibrary). I was able to create a folder there, but I have no idea how to let Steam write there. I get the following error: 'New Steam library folder must be writeable'. How can I fix this?
|
Instead of thinking about how to make Steam able to write there, you should think about how to make that folder writeable by you;
you will probably get the same result, but the logic is different and it mostly affects security: I would not be happy if an application could easily write into my root!
Anyway, I would make the folder writable by you (run it with sudo, since the folder currently belongs to root):
chown -hR $USER:$GROUP /usr/games/steamlibrary
where $USER is your user and $GROUP is your group.
This way you are able to write to that directory (even though it resides under the root filesystem), and thus so can Steam.
| How to allow Steam to write into root directory? |
1,621,975,863,000 |
I'm using rsync to back up a set of files in /etc. The 'source' files are on an ext4 filesystem, and the 'destination' is an ext4 partition on a USB thumb drive. My incantation is similar to this:
rsync -av --recursive --files-from=my/etcfiles /etc ./my/backup/etc/
Not unexpectedly, I get errors with this including:
rsync: failed to set times on "/my/backup/etc/somefile": Operation not permitted (1)
rsync: mkstemp "/my/backup/etc/somefile.erGL4a" failed: Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1196) [sender=3.1.2]
I think this is because the a option in rsync preserves ownership (root) of the files I'm backing up, and so root privileges are required to complete rsync's operations.
This rsync operation completes successfully when run under sudo, but I need to set this up as a cron job, and using sudo in a crontab brings an issue (password storage) I'd like to avoid.
Another possibility may be changing ownership of the root-owned files (using the chown=USER:GROUP option) during the rsync backup. I've not tried that because it occurs to me that, even if it works without sudo, the ownership would have to be restored if the backups were ever needed.
I've been stewing over this for the better part of a day now, and growing weary of wrangling with rsync's myriad options. So - My question is this:
How can I avoid using sudo to make backups of files in /etc without committing an even worse bodge?
|
Set up a root cron job and then the script will run as root anyways. To access the root crontab, run sudo crontab -e.
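As a sketch, such a root crontab entry might look like this; the schedule and the absolute paths are assumptions to be adapted:

```
# Hypothetical entry in root's crontab (installed via: sudo crontab -e).
# Runs nightly at 02:30 as root, so no sudo is needed inside the job.
30 2 * * * rsync -a --files-from=/root/my/etcfiles /etc /root/my/backup/etc/
```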
| Can I avoid using `sudo` to back up files owned by root? |
1,533,638,364,000 |
I've been trying to set up a shared folder to store some files for a group of people, for example /home/project. At the moment, I've done the following:
Created a group, let's call it "members", and added two users, user1 and user2.
When I run cat /etc/group I get the following return:
members:x:1005:user1,user2
Which at least seems to be correct. Then I create the directory and assign permissions following, to be honest, some internet guides.
mkdir /home/project
sudo chown -R root.members /home/project
sudo chmod 775 /home/project
sudo chmod 2775 /home/project
All of that seems to go fine, but when I create a test text file as user2, user1 can read that file, but doesn't have write permissions. What am I doing wrong?
|
Sometimes it's the simple things that we overlook when dealing with technical issues. Don't we all do it from time to time? Group membership is acquired when the user logs out and back in again.
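To verify, id -nG lists the groups of the current session, and newgrp offers a stopgap without a full re-login; a sketch assuming the group name from the question:

```shell
id -nG                  # groups of the *current* session, not of /etc/group
# After being added to "members", either log out and back in, or run:
#   newgrp members
# to get a subshell in which id -nG includes the new group.
```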
| Setup Group Permissions but no Write Access |
1,533,638,364,000 |
With root-user, I've executed this command:
setfacl -R -d -m u:MYUSER:rwx /myfolder
When I then change to that user ( su MYUSER ) and try to remove a file ( rm /myfolder/somefile.sql then I get the this error:
rm: cannot remove 'somefile.sql': Permission denied
I can't mv it either; then I get this error:
mv: cannot move 'somefile.sql' to 'someotherfile.sql': Permission denied
I've added MYUSER to /etc/sudoers, - so when I run: sudo rm /myfolder/somefile.sql, then I'm prompted for MYUSERs password; and then it works. But I need it to work without sudo, so I can run it as a crontab-job.
If I write getfacl /myfolder, then I get this output:
# file: /myfolder/
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:MYUSER:rwx <-- That looks right, doesn't it?
default:group::r-x
default:mask::rwx
default:other::r-x
... Why in the name of Zeus can't I remove files in this directory?
|
MYUSER is a default owner, but not an effective owner.
You need to run both
setfacl -R -d -m u:MYUSER:rwx /myfolder
setfacl -R -m u:MYUSER:rwx /myfolder
note that the second command does not have the default (-d/--default) flag.
This should result in getfacl giving
# file: /myfolder/
# owner: root
# group: root
user::rwx
user:MYUSER:rwx
group::r-x
other::r-x
default:user::rwx
default:user:MYUSER:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
| Unable to remove or change files after setfacl rwx-command |
1,533,638,364,000 |
I would like to setup a node.js https server using a certificate I already have on my debian8 machine.
This certificate's group is set to libretodoapi (a user / group I've created to run the node.js app). The permission 640 should allow read access to that file:
root@nijin:/# ls -l /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem
-rw-r----- 1 root libretodoapi 1704 Jan 11 23:11 /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem
That said, trying to access the file as libretodoapi fails:
root@nijin:/# sudo -u libretodoapi cat /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem
cat: /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem: Permission denied
The predecessor folders all belong to root:
root@nijin:~# namei -lo /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem
f: /etc/letsencrypt/archive/api.libretodo.org/privkey1.pem
drwxr-xr-x root root /
drwxr-xr-x root root etc
drwxr-xr-x root root letsencrypt
drwx------ root root archive
drwxr-xr-x root root api.libretodo.org
-rw-r----- root libretodoapi privkey1.pem
I don't believe that there is a bug somewhere. Much rather, I think I don't know something about unix permissions which can explain that behavior. Do you know what I am missing?
|
All directories in the hierarchy, from the root (/) down to the parent directory of the file, must have x permissions for the user/group to enable them to access the file.
The execute permission on a directory enables a user to access the directory while the read permission enables a user to list its content.
See also the question Execute vs Read bit. How do directory permissions in Linux work?
| Why can't I access this lets-encrypt certificate file, even though I've set up the group? |
1,533,638,364,000 |
I've noticed that unlike most logs, /var/log/auth.log isn't world-readable. What sensitive data is logged to auth.log that would make it have these more-restricted permissions? (I'm trying to determine if making it world-readable is safe). This is on Ubuntu 14.
|
It contains logs of all connections, which can be considered private information, at least on a multi-user system: it’s only the administrator’s business to know when and how the users log in to a system (and even that knowledge needs careful consideration). As Giacomo Catenazzi points out though, there are other ways of obtaining this information, which by default aren’t restricted.
More importantly perhaps, it also contains logs of sudo commands, which could easily contain sensitive information (and is also one of the reasons you should avoid specifying passwords on command lines). Again on a multi-user system, the administrator probably doesn’t want all the users to be able to see everything that’s done with sudo...
Note that this is just the default auth.log setup, and as with any aspect of logging, system administrators can reconfigure it any way they want.
| /var/log/auth.log permissions |
1,533,638,364,000 |
Because of the default umask settings on my systems, file permissions always default to no access for group and other. This is fine typically but annoying when I'm installing software that I need others to access. Is there a quick way I can reset the permissions of all files and folders in a tree after an install, based on the user permissions?
Basically copy user except for write.
rwx------ to rwxr-xr-x
rw------- to rw-r--r--
|
You can use find to do this :
find <dirpath> -perm 700 -type d -exec chmod 755 {} \; ## For directories
find <dirpath> -perm 600 -type f -exec chmod 644 {} \; ## For files
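For example, on a small scratch tree (throwaway names), the two find commands rewrite exactly the modes they match:

```shell
mkdir -p tree/sub
chmod 700 tree/sub
touch tree/file
chmod 600 tree/file

find tree -perm 700 -type d -exec chmod 755 {} \;
find tree -perm 600 -type f -exec chmod 644 {} \;

stat -c '%a %n' tree/sub tree/file    # 755 tree/sub and 644 tree/file
```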
| Set permissions based on user permissions |
1,533,638,364,000 |
I created a websites folder into the / directory, and gave it full permission with sudo chmod -R 777 /websites/.
After that, I made a change in /etc/nginx/conf.d/default.conf to point to the websites directory:
server {
listen 80;
server_name localhost;
location / {
root /websites;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /websites/nginx/html;
}
}
But I am having an 403 Forbidden, when I tried to browse to public ip of the server.
Why is it happening? How can I solve it?
I have this in the nginx error.log:
2017/08/27 20:41:03 [error] 3849#3849: *37 "/websites/index.html" is
forbidden (13: Permission denied), client: **.**.130.159, server:
localhost, request: "GET / HTTP/1.1", host: "**.**.**.120"
|
The error log very clearly says:
Your nginx would try to read /websites/index.html, but it can't
This is why it gives a 403 error; it is not a configuration problem.
It is because of the 13: Permission denied, which is a system error. Thus, your nginx is configured well: it tries to read that file, but it can't.
The next question is why it can't. First, you should check what it can do. Switch to the user that nginx runs as (it is probably www-data, so the command is: sudo -u www-data /bin/bash), and try to read that file yourself (cat /websites/index.html).
The next step depends on the result.
@sebasth is right in his comment:
Possibly wrong permissions on the file/folder, or/and SELinux policy
not permitting access. If you have SELinux enabled you should check
audit logs (tools such as audit2why might be helpful).
I think the two most probable causes are:
Something wasn't set up correctly with the permissions, despite your chmod command looking okay
There is some SELinux policy "making your life nicer".
| Nginx 403 error, when nginx.conf set to serve from /websites |
1,533,638,364,000 |
I was going through my old code and I stumbled upon some strange behaviour.
The folder and the files inside it are chowned to me; however, I am not allowed to enter it, or even list its contents, unless of course I am the root user.
Here is a screenshot. Could you please explain?
GNU coreutils : 8.25,
OS : Ubuntu 16.04.2 LTS
|
Directories must have the executable bit set to be "searchable". That is what lets you enter them with cd and access the files inside; a long listing (ls -l) also needs it, since ls must stat each entry. Try chmod +x on the Scripts dir, for example. This is different from regular files, where the executable bit allows them to be executed (of course).
More information here: Execute vs Read bit. How do directory permissions in Linux work? and In Linux, "Write" Permission Is Equivalent To "Execute" For Directories?
| cd and ls permissions denied [duplicate] |
1,533,638,364,000 |
I want to delete directory 982899. It is located under directory big.
When I first try to delete 982899, it shows many lines of messages like this:
rm: cannot remove `982899/.../...v': Permission denied
So I use chmod 777 . to make directory big be able to change everything.
However, after it, rm -rf 982899 still shows the same messages:
rm: cannot remove `982899/.../...v': Permission denied
I even executed chmod 777 982899, but nothing changed!
Why ? What should I do to delete directory 982899?
|
rm -rf 982899 will try to recursively remove anything inside that directory, and then, once it is empty, remove the directory itself. So your problem may be that you do not have permission to delete the items inside 982899. You can try to chmod -R 777 982899, or chown -R <your_user> 982899 to get around this. Be careful though that chxxx commands use an uppercase -R for recursive operation.
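A minimal sketch on a throwaway tree; 700 is enough here, since only the owner needs to delete, and note the uppercase -R:

```shell
# Reproduce the situation: contents the owner cannot remove as-is.
mkdir -p 982899/inner
touch 982899/inner/file.v
chmod 000 982899/inner      # a plain user now cannot delete file.v

chmod -R 700 982899         # recursively restore the owner's access
rm -rf 982899               # now succeeds
```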
| rm: cannot remove `/../...v': Permission denied |
1,533,638,364,000 |
I am SSH'ing into a debian server, and to avoid having multiple connections at the same time, I use tmux.
I changed the permissions of a directory (here, /opt/syncserver), and set the owner to the group and user www-data.
The permissions of this directory are equivalent to 770 in chmod, which means rwxrwx--- (read/write/exec to owner and group).
I then added the main user (that we will call user1 here) to the group www-data, because he wasn't in it previously. I then tried to cd into the newly modified directory, without success (Permission denied error).
Creating a new shell in the same tmux session does not solve the problem either as it should (see the probable duplicate of this question).
I tried launching another SSH session, still with the same user, and had no problem going into the directory.
How can new shells created in a tmux session not take into account the modifications of permissions? Is there a way to fix this, or am I just completely mistaken and did something wrong at the beginning?
Creating a new tmux session (with the other one still attached) does not solve the problem either. I guess restarting completely tmux should solve the problem, but I would like to avoid this and to know why does this happen.
|
How can new shells created in a tmux session not take into account the modifications of permissions?
The uid, gid, and supplementary groups associated with a process are only reset at login time. New shells created in a tmux session are not new logins, they're just new children of the tmux process.
To get your group memberships updated, you have to re-login, or use one of a very small set of commands (newgrp, su, sudo) that will start subshells with re-initialized groups (but those commands won't help you re-initialize the credentials of an already-running process like tmux).
| Why are user permissions modifications not taken into account in a tmux session? [duplicate] |
1,533,638,364,000 |
I know that there is no difference between 0022 and 022, referring to the linked question. I have a file 1.c with permissions 0066. But when I change the mode of 1.c to 1066 and then check the permissions of the file with ls -l, the first digit affects the permissions. Each time, different permission bits change with changes in this first digit. What does it actually signify?
[vm4 ~]# ls -l 1.c
----rw-rw- 1 root root 10 Dec 23 22:48 1.c
[vm4 ~]# chmod 1066 1.c
[vm4 ~]# ls -l 1.c
----rw-rwT 1 root root 10 Dec 23 22:48 1.c
[vm4 ~]# chmod 2066 1.c
[vm4 ~]# ls -l 1.c
----rwSrw- 1 root root 10 Dec 23 22:48 1.c
[vm4 ~]# chmod 5066 1.c
[vm4 ~]# ls -l 1.c
---Srw-rwT 1 root root 10 Dec 23 22:48 1.c
|
Yes, there is a difference between 0022 and 022. Not for umask, but yes for chmod.
The permissions are described by three letters per user, group,and others.
That is usually rwxrwxrwx (or - where needed) in a ls output:
$ touch 1.c
$ ls 1.c
-rw-r--r-- 1 user user 0 Feb 13 09:01 1.c
Where each set bit is shown by a letter, and unset bits are shown with -.
Therefore:
rwx means 111 which is binary for an octal value of 7.
rw- means 110 which is binary for an octal value of 6.
r-- means 100 which is binary for an octal number of 4.
But besides the basic rwx, there are some other letters which represent additional permission bits (setuid, setgid, sticky). These are 3 more bits, written as the leading digit of a four-digit octal number and represented like this:
For files:
0644 ==> rw-r--r--
1644 ==> rw-r--r-T # sticky bit set (ignored for regular files in Linux)
0644 ==> rw-r--r--
2644 ==> rw-r-Sr-- # setgid without group execute: capital S
0655 ==> rw-r-xr-x
2655 ==> rw-r-sr-x # Run with group ID: SGID
0644 ==> rw-r--r--
4644 ==> rwSr--r-- # setuid without user execute: capital S
0755 ==> rwxr-xr-x
4755 ==> rwsr-xr-x # Run with user ID: SUID
Full permissions (7):
$ chmod 7777 1.c; ls -l 1.c
-rwsrwsrwt 1 user user 0 Feb 13 09:01 1.c
For directories:
SGID means that new files inside this dir will inherit the directory's group owner.
SUID is mostly ignored on directories in Linux and Unix; BSD varies.
Sticky protects files inside from being removed or renamed by anyone other than their owner (or the directory's owner, or root).
Links:
- Sticky bit
In Linux: the Linux kernel ignores the sticky bit on files.
When the sticky bit is set on a directory, files in that directory may only be unlinked or renamed by root or the directory owner or the file owner.
- SetUID and SetGID
- Directories and the Set-User-ID and Set-Group-ID Bits
- System Administration Guide: Security Services
| what is significance of first digit in 0022 when I run umask on linux? [duplicate] |
1,533,638,364,000 |
I have an NFS directory that is using file-based authentication.
I can ssh to the server with my username/password.
Running id, I can get the the uid for my user.
I have mounted the NFS share.
I can't read/write to the mounted directory, due to permissions.
How do I read/write to this directory, using the uids that I have retrieved from the server? Should I create a local user with the same uid?
Side question: how does my password play into this? If someone gets my uid (which seems small and brute-forceable), can they easily read/write into my directory?
|
I have an NFS directory that is using file-based authentication.
There is no such thing as "file-based authentication". You are probably using "UNIX authentication".
How do I read/write to this directory, using the uids that I have retrieved from the server? Should I create a local user with the same uid?
Yes. One sane way to manage an NFS network is to have the same uids across all systems. For large networks, this is best done with a distributed authentication mechanism such as NIS, Kerberos, or LDAP.
Side question, how does my password play into this? If someone gets my uid (which seems small and brute-forcable), can they easily read/write into my directory?
Absolutely. From a security perspective, NFS is like an open barn door. It was initially designed for LANs where every host is trusted.
You can run NFS over VPN, but you must still trust every VPN member. Bottom line, if you care about security you shouldn't be using NFS at all, there are tons of better solutions for distributed data these days.
Update: By way of suggesting a specific solution rather than saying "tons", you can use FUSE sshfs to securely connect to a specific user's directory over SSH, secured by the usual SSH protocols.
| Mounting an NFS directory, with filesystem permissions for a user that doesn't exist locally |
1,533,638,364,000 |
I am trying to write a script that runs on boot that checks to see if the boot filesystem is read only, if it is, then run fsck to fix permissions and reboot.
Point 1: I am having trouble figuring out how to grep just the permissions line from a ls command.
Point 2: I am also having trouble figuring out how to check if that command returns a string without w in it.
For example, an ls -l command returns a line similar to the following:
drwx-----+ 5 Admin staff 170 Oct 12 05:41 Documents
I want to grab just the following string:
drwx-----+
Then, check if it does not contain write permissions.
Below is the script I have so far.
#!/bin/bash
$DIR='home'
#If $DIR has only read permissions run loop
if [[ #point 1 = #point 2 ]] then
#fix permissions on $DIR
umount ${DIR}
fsck
reboot
fi
|
Do not ever parse the output of ls. Scripting 101.
man find
man findmnt
The root file system is never mounted read-only if the system has reached the multiuser target or runlevel. This means that the only user who can ever find the root file system mounted read-only is root. And root can write everywhere regardless of permissions. Therefore, to check whether the root filesystem is readonly you can simply try to touch a file:
if touch /testfile ; then
# The root filesystem is read-write
rm /testfile
else
# The root filesystem is read-only
# Do something about it
fi
However, this should not be needed. The system should drop into a single-user shell if it cannot mount the root file system read-write.
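Rather than grepping ls output at all, the root filesystem's mount options can be read directly; a sketch using /proc/mounts:

```shell
# Read the mount options of the root filesystem straight from the kernel.
opts=$(awk '$2 == "/" { print $4; exit }' /proc/mounts)
echo "root options: $opts"
case ",$opts," in
  *,ro,*) echo "root is mounted read-only" ;;
  *)      echo "root is mounted read-write" ;;
esac
```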
| How do I grep a directories permissions and see if it lacks write permissions? |
1,533,638,364,000 |
All the scripts in my ~/bin have '#!/bin/sh' as the first line, yet the scripts won't run without sh. This problem appeared after I shifted all my personal data to an ntfs partition and made symlinks to them in the home directory. I think I have set the permissions correctly in all the affected folders (774 for directories and files in bin and 664 for all the other files), and added this entry to /etc/fstab:
UUID=5BE8B8E020325D09 /mnt/DATA ntfs-3g auto,users,permissions 0 0
That's how the problem looks:
saga@terminal:~$ syncit a b
bash: /home/saga/bin/syncit: Permission denied
saga@terminal:~$ sh ~/bin/syncit a b
<works correctly>
Any idea what's happening here?
output of ls -l:
for /mnt/DATA/bin/:
-rwxrwxr-- 1 saga 1001 67 Sep 22 23:56 bldcpp
-rwxrwxr-- 1 saga 1001 23 Sep 22 23:56 cnl
-rwxrwxr-- 1 saga 1001 62 Sep 22 23:57 conct
-rwxrwxr-- 1 saga 1001 23 Sep 22 23:57 cx
-rwxrwxr-x 1 saga 1001 479 Sep 22 23:44 defperms
-rwx------ 1 saga saga 0 Sep 22 23:48 ju
-rwxrwxr-- 1 saga 1001 27 Sep 22 23:57 mke
-rwxrwxr-- 1 saga 1001 58 Sep 22 01:54 process
-rwxrwxr-- 1 saga 1001 329 Sep 22 23:58 ptr
-rwxrwxr-- 1 saga 1001 1336 Sep 1 22:48 syncit
-rwxrwxr-- 1 saga 1001 639 Sep 8 12:13 vidplacer
for ~/bin:
lrwxrwxrwx 1 saga saga 13 Sep 22 04:04 /home/saga/bin -> /mnt/DATA/bin
output of mount:
saga@terminal:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=1963732k,nr_inodes=490933,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=396576k,mode=755)
none on /dev/.bootchart/proc type proc (rw,relatime)
/dev/mapper/terminal--vg-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sda10 on /mnt/DATA type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allo w_other,blksize=4096)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=396576k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
gvfsd-fuse on /root/.gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
I moved my data to sda10
|
/mnt/DATA is mounted with the noexec flag. As a quick fix remount /mnt/DATA without noexec, i.e.
mount -o remount,exec /mnt/DATA
For persistence, modify the according entry in /etc/fstab such that this flag is not set by default.
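For the fstab fix: the users option implies noexec (along with nosuid and nodev), so an explicit exec after it is what re-enables execution; a sketch based on the entry from the question:

```
# "exec" must come after "users", which would otherwise imply noexec.
UUID=5BE8B8E020325D09 /mnt/DATA ntfs-3g auto,users,permissions,exec 0 0
```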
| scripts won't execute |
1,533,638,364,000 |
I think this is a weird question, I don't know if it is possible, but here I go:
I have a shared directory on a server, so people can use it from their computers. Let's say I have a directory called Mantenimientos/ and inside it I have two other directories, Fisico/ and Logico/. I want people to have permission to write into those last two directories, but I don't want them to have permission to change those directories' names or move them.
Is that possible?
OS: Solaris 10 5/08
English is not my native tongue, if there's something you can't understand please ask, and any correction is welcome as well.
|
Renaming directories requires write permission in the parent directory, so let's say you have
BASE
BASE/Mantenimientos
BASE/Mantenimientos/Fiscio
BASE/Mantenimientos/Logico
The Mantenimientos directory would be made r-x, and the Fiscio and Logico directories would be given rwx permissions.
e.g.
$ ls -ld Mantenimientos
drwxr-xr-x 4 root root 4096 Aug 30 13:04 Mantenimientos/
$ cd Mantenimientos
$ ls -Al
total 4
drwxrwxrwx 2 root root 4096 Aug 30 13:04 Fiscio/
drwxrwxrwx 2 root root 4096 Aug 30 13:04 Logico/
So I can write to the two directories, but not to the Mantenimientos directory. This means I can not rename them
$ mv Fiscio changed
mv: cannot move 'Fiscio' to 'changed': Permission denied
But I can create files
$ echo a file > Fiscio/file1
$ echo another > Logico/file2
$
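The layout above can be reproduced with plain chmod — a minimal sketch, run as root inside BASE (directory names taken from the example):

```shell
mkdir -p Mantenimientos/Fiscio Mantenimientos/Logico
chmod 755 Mantenimientos                               # rwxr-xr-x: users may enter and list, but not rename inside
chmod 777 Mantenimientos/Fiscio Mantenimientos/Logico  # rwxrwxrwx: anyone may create files within
```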
| How to change permission only inside the directory? |
1,533,638,364,000 |
There is a file in a directory that me and the other users want to be able to read. I want the other users not to be able to see the other files in the directory. They should be able to read a specific file in a directory if they know the name of the file.
|
I'm assuming you want to create a directory which other members of your group cannot list, but where you can make files accessible to them anyway... Accessing a directory requires the "execute" permission, listing its contents requires the "read" permission. If you make a directory executable but not readable, users can access files stored within but can't list its contents.
Given a shared group:
mkdir -m710 demo
chgrp shared demo
echo "secret" > demo/file1
chmod 640 demo/file1
Then other users in the shared group will be able to view the contents of demo/file1, but ls demo will fail.
Note that if others guess the names of other files, they will be able to access those files, if they have the permission. So make sure to keep the other files private.
| Limited access to a directory's contents |
1,533,638,364,000 |
How do I make there be no group owner of a file in Mac OSX, since
chgrp nogroup file
doesn't work? If I try, the group owner doesn't change at all.
|
Use chgrp nobody file instead.
| How do I make there be no group owner of a file in Mac OSX? |
1,533,638,364,000 |
So basically I need to list files in subdirectories which have all permissions for user and group but none for other — i.e. rwxrwx---.
All I got is:
ls -d */*
to show the subdirectories, but now how do I do the permission part? I know I need to use "|", but what command do I pipe to?
Thanks!
|
You should use the find command. To get all files and directories with rwxrwx--- in the branch of current directory use:
find . -perm 770
If you only need to check for files:
find . -type f -perm 770
If you only need to check the immediate subdirectories (in FreeBSD/OSX):
find . -depth 2 -perm 770
or Linux:
find . -mindepth 2 -maxdepth 2 -perm 770
| List file in sub directory with certain permission |
1,533,638,364,000 |
After making a rm related mistake on some of my files (which was fine because I have backups), I started thinking on how I could limit myself to making such mistakes. I have not done anything with file permissions, so I started reading up on that, but I think I might be missing something. I essentially would like to set my data backups (one: a Windows NT Filesystem external hard drive; two: a UNIX-based server, but I don't know the exact details of it) so that when I would try to remove or otherwise manipulate my files, I would either not be able to do so without sudo or some other override, or get some kind of "are you sure?"-prompt. Or, perhaps there is some kind of "standard" file permissions for data backups that people use that are better than what I'm thinking about?
I have been playing around a bit with chmod and various setups, but I can't seem to get it right. As far as I understand it 755 seem to be kind of standard, but as far as I understand that would not stop me from doing the kind of mistakes I did. What about 555? Should you setup folders differently from your files? What do people generally use to protect their long-term storage, i.e. files that you won't access that often?
|
If you remove the 'w' bit, you can't accidentally overwrite whatever you removed the 'w' bit from. If that's a directory, that means you can't add or remove files from that directory; if that's a file, that means you can't change the file. Downside of that method, however, is that you lose data (IMO, file permissions are part of your backup data).
An alternative is to use the 'immutable' extended attribute:
chattr +i file_or_directory
Downside of that method is that it can be confusing: ls -l tells you you can write to the file, yet if you try it the kernel says 'permission denied', even as root; you have to remember the extended attributes (which lsattr can tell you about)
On your final remark: the best way to avoid accidentally changing files which you don't need to access all that often is to, simply, not make them available for normal usage. Don't automatically mount the filesystem; make the mount step be part of your backup procedure. If you do that, all these issues go away.
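A small demonstration of the write-bit approach from the first paragraph (run as a regular user — root bypasses these checks, so you won't see the denial when privileged):

```shell
mkdir -p archive
touch archive/photo.jpg
chmod a-w archive                       # directory read-only: no adds or removes inside
rm archive/photo.jpg 2>/dev/null || echo "rm blocked"
chmod u+w archive                       # restore the bit before intentional changes
```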
| File permissions on data backups |
1,533,638,364,000 |
I want to identify all files which do not have any permissions for others, irrespective of the permissions for user and group. What would the find command look like? E.g.
drwxrws--- 2 jboss users 4096 Sep 14 2012 answ
|
find . ! \( -perm -o=r -o -perm -o=w -o -perm -o=x \)
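With GNU find, the same condition can be written more compactly using the `/` permission syntax (a GNU findutils feature, so not portable to every find). A quick demonstration:

```shell
tmp=$(mktemp -d)
touch "$tmp/private" "$tmp/public"
chmod 770 "$tmp/private"                # no bits at all for others
chmod 774 "$tmp/public"                 # others may read
find "$tmp" -type f ! -perm /o=rwx      # matches only files with no "other" bits set
```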
| Identify all files having no permission for others using find command |
1,533,638,364,000 |
I've a weird problem on my linux machine, I've multiple users, we can say u1, u2, u3... who all belong to a group G. I have a group folder in /home who belong to one of these user (we'll say u1), and I wanted to allow other G users to read, write and execute in this folder, so I changed the folder's group to G (the owner still is u1), and set rwx permissions for the owner, for the group (G) and 000 for others, but G users can't access the folder...
Why is that ? any ideas ?
Thanks !
|
follow this instruction:
1) make sure that all the users u1,u2,u3 are in the group G:
lid -g GroupName
the output must contain all the users.
2) set the group owner of the directory "recursively":
chown -R u1:GroupName /home/u1
Note: if you don't set the group owner recursively, you won't be able to view inner files and directories.
3) set the permissions of group owner of the directory "recursively":
chmod -R g+rwx /home/u1
Note: if you don't set the group owner permissions recursively, the changes won't be applied for inner files and directories.
now if you type ls -l /home/u1, the output will be like this:
drwxrwx---. 16 u1 GroupName 4096 Jan 8 2015 u1
Also note that group membership is read at login: if the users were added to group G during an existing session, they must log out and back in (or use newgrp G) before the new membership takes effect.
I hope you get your problem solved soon :)
| Allowed group can't access a folder [closed] |
1,533,638,364,000 |
While doing a wargame challenge, I ran into an issue with permissions. The info given by /proc/PID/status is not in adequation with the permissions that should be given to the processus.
I am user user1. I am supposed to use a program which is SETUID:
-r-sr-x--- 1 user2 user1 6297 Jun 20 2013 program
So it should execute with the effective UID of user2.
I'm temporarily stopping the program just after launch, to prevent it from terminating:
~/program "test" &
PID=$!
kill -SIGSTOP $PID
echo $PID
Then, I cat /proc/$PID/status, and I see:
Uid: 1003 1003 1003 1003
Gid: 1003 1003 1003 1003
The IDs are:
$ id user1
uid=1003(user1) gid=1003(user1) groups=1003(user1)
$ id user2
uid=1035(user2) gid=1035(user2) groups=1035(user2),1003(user1)
Given the manual (man 5 proc), /proc/$PID/status should give Uid, Gid: Real, effective, saved set, and filesystem UIDs (GIDs).
But here, the process has the effective ID of user1 whereas it should have the effective ID of user2.
I thought this might be because I stop the program too early, so I tried to attach gdb to it, and continue execution until it actually executes code from the main function of program (sources are given), but the effective UID given by /proc/$PID/status is still the one of user1 and not of user2.
Am I missing something?
Edit: remove the source of the challenge, I'm probably not authorized to post it.
|
It's because you are too early: if you wait until the UIDs are changed, your process runs as user2. This worked for me:
./program "test" &
PID=$!
sleep 0.0005
kill -SIGSTOP $PID
grep ^Uid /proc/$PID/status
Another approach is to add a delay with usleep() in the program and send the SIGSTOP during that sleep; the program then runs with user2 as its effective UID. You can check that too, but without attaching gdb or strace. Most probably it's a Linux kernel internal detail that the process needs some time to change its UIDs.
When running the process from a terminal the execve() syscall is called; from the manpage:
If the set-user-ID bit is set on the program file pointed to by
filename, [...] and the calling process is not being ptraced, then
the effective user ID of the calling process is changed to that of the
owner of the program file.
When you attach gdb to the process, you will not see the uid of user2, because you're ptracing the process, as described in the manual page above. Otherwise you could attach to a sudo process and gain root permissions.
However, this program never gets a segmentation fault (SIGSEGV) unless you force one with kill -SIGSEGV $PID. If your program gets a SIGSEGV, the launch_debugger() routine is called. It starts gdb with just your program binary as argument, replacing the currently running process. The debugger therefore has the privileges of user2, so you can do whatever you want in there with user2's permissions.
You can then, for example, do the following inside gdb:
(gdb) file bash
Reading symbols from /bin/bash...(no debugging symbols found)...done.
(gdb) run
Starting program: /bin/bash
user2@host:~$ id
uid=1035(user2) gid=1003(user1) groups=1035(user2),1003(user1)
Now, consider the same binary with a setuid bit and the owner is root.
| Wrong eUID in `/proc/PID/status` when SETUID is used |
1,533,638,364,000 |
There is no stat command in Solaris 10. Is there any way to get numeric file permission?
|
GNU stat is available in the SUNWgnu-coreutils package. If you're not able to install that, the pkgproto command is an alternative.
From the manual page:
pkgproto /bin=bin /usr/bin=usrbin /etc=etc
f none bin/sed=/bin/sed 0775 bin bin
f none bin/sh=/bin/sh 0755 bin daemon
f none bin/sort=/bin/sort 0755 bin bin
f none usrbin/sdb=/usr/bin/sdb 0775 bin bin
f none usrbin/shl=/usr/bin/shl 4755 bin bin
d none etc/master.d 0755 root daemon
f none etc/master.d/kernel=/etc/master.d/kernel 0644 root daemon
f none etc/rc=/etc/rc 0744 root daemon
It's trivial to extract that output so that you just have the octal file permissions.
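For instance, assuming the octal mode is always the fourth whitespace-separated field (as in the sample output above), awk can pull it out:

```shell
# Print only the octal-permissions column of pkgproto output:
pkgproto /etc=etc | awk '{ print $4 }'
```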
| Stat substitution command to capture numeric file permission in Solaris 10 |
1,533,638,364,000 |
I have the following folders:
drwx------ 4 ccote domain^users 4096 Apr 17 11:18 ccote
drwxrwx--- 2 ccote ccote_jponchar_gnicolas 4096 Feb 20 10:58 ccote-jponchar-gnicolas
drwx------ 14 gnicolas domain^users 4096 Nov 28 2014 gnicolas
drwx------ 3 jgodbout domain^users 4096 Oct 24 2014 jgodbout
drwx------ 2 jponchar domain^users 4096 Sep 22 2014 jponchar
drwxr-xr-x 2 pagagne domain^users 4096 Jun 2 15:28 pagagne
drwx------ 4 plavigne domain^users 4096 Feb 26 14:57 plavigne
I want to give the pagagne user access to the ccote-jponchar-gnicolas folder. I use the following command to add that user to the ccote_jponchar_gnicolas group for that folder:
usermod -a -G ccote_jponchar_gnicolas pagagne
However, that user receives the following message when trying to access the folder:
bash: cd: ccote-jponchar-gnicolas/: Permission denied
What is wrong ?
|
The user pagagne will need to log out and log back in to his/her shell for the group to be visible in his groups (alternatively, newgrp ccote_jponchar_gnicolas starts a subshell with the new group active immediately).
You could also check if the user has indeed been added to the group:
groups pagagne
| Adding user to the group owning a folder doesn't give that user access |
1,533,638,364,000 |
I use debian jessie and I have done one of those bad mistakes and broke my system with a mistyped command and worse mistakes that follow in such situations.
Trying to fix some permissions I mistakenly used chmod recursively on root folder:
# chmod -R 0644 /
and then realizing immediately I rushed in doing something to stop it but the system was frozen and the worse mistake was the hard powering off the system.
Now I think I have some user manager problem and after booting with some "failed to start service" messages I don't have the Gnome user login and I can't also login in console. And this is what that flashes several times and then stays on screen:
[ ok ] Created slice user-113.slice
Starting user manager for UID 113...
[ ok ] Started user manager for UID 113
[ ok ] Stopped user manager for UID 113
[ ok ] Removed slice user-113.slice
|
The good news is that all your data is still there. The mixed news is that your system installation may or may not be recoverable — it depends where chmod stopped.
You will need to boot into a rescue system to repair it. From the rescue system, mount your broken installation somewhere, say /mnt. Issue the following commands:
chmod 755 /mnt
find /mnt -type d -perm 644 >/mnt/bad-permissions
find /mnt -type d -exec chmod 755 {} +
The first find command saves a record of directories with bad permissions into a file. The purpose is to see where permissions have been modified. The second find command changes all directories to be publicly accessible.
You now have a system where all directories listed in /mnt/bad-permissions and all files in these directories are world-readable. Furthermore files in these directories are not executable. Depending on which files were affected, this may be easily repairable or not. See Wrongly set chmod / 777. Problems? for what you can try to get the system functional, to which you should add
chmod a+x /bin/* /sbin/* /usr/bin/* /usr/sbin/* /lib*/ld-*
But even if you manage to get something working, there's a high risk that some permissions are still wrong, so I recommend reinstalling a new system, then restoring your data. How do I replicate installed package selections from one Debian system to another? (Debian Wheezy) should help.
| Broken system after chmod -R 644 / |
1,533,638,364,000 |
In the terminal, some operating system commands require root privileges and some don't. What is the mechanism for controlling this? Is each command actually a separate program with its own execution permissions or is there a table in Bash? (Am I correct that Bash is the command shell and the terminal is a user interface that passes it commands?) I'm referring to operating system commands as opposed to running applications from the terminal.
|
Yes, each application typically has its own permissions, set via "permission bits" on the actual executable. You can see these if you use the command ls -l on the various executables that you're trying to run.
$ ls -l /sbin/ | grep autrace
-rwxr-x---. 1 root root 15792 Aug 24 14:40 autrace
03:03:22-slm~ $ autrace
bash: /usr/sbin/autrace: Permission denied
But there are some commands where the "data" that they'll attempt to touch/access is what's restricted so looking at the permissions is not sufficient:
$ ls -l /sbin/ | grep "\bfdisk"
-rwxr-xr-x. 1 root root 230512 Apr 25 05:19 fdisk
$ fdisk -l
$
Here the command executed as my userid, but that user does not have permissions to access the information about the physical disks on my system, and so fdisk shows me no output. If I elevate myself to root using sudo I can see the output as I intended:
$ sudo fdisk -l | head -10
Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5D1229E8-1234-1234-1234-ABCDEFG128790
Device Start End Size Type
/dev/sda1 2048 411647 200M EFI System
Mechanisms that control this
There is no centralized control, all the control is decentralized and stored with the files as permissions bits on either the applications/executables (as I showed above), or on the data files that these tools will use, or on the directories where the files are contained.
Command shell
Your description isn't quite right with respect to Bash being a shell, and the terminal being a user interface that passes it commands. Rather the terminal is an application that is executed, and within it there's a shell running, typically Bash but it can be any number of shells.
For example
Here's the output of the ps command that shows how the processes for my current shell are structured:
$ ps axf | less
...
8549 ? Sl 0:08 /usr/libexec/gnome-terminal-server
8552 ? S 0:00 \_ gnome-pty-helper
10286 pts/13 Ss 0:00 \_ bash
12783 pts/13 Sl+ 5:49 | \_ vinagre
12868 pts/14 Ss 0:00 \_ bash
15742 pts/14 R+ 0:00 \_ ps axf
15743 pts/14 S+ 0:00 \_ less -r
Here you can see that my terminal, gnome-terminal, is running on top and it has child processes underneath it. These child processes are 2 bash shells where one of them is running an application called vinagre and another is running this ps command that I'm showing you here.
Additional restrictions
What I've described above is the foundation for how executables can be used by users on the system. But these are just the basics. Beyond these there are additional technologies such as ACLs (Access Control Lists) and access control policies for the various executables.
ACLs are pretty straightforward, giving users outside of the traditional model of owner, group, and other, a more granular control.
Tools such as SELinux and AppArmor take this same approach but introduce, at the Linux kernel level, the ability to put rules in place that restrict how application X can interact with the system as a whole. For example, if you're running a Samba server, that application would need access to your filesystem outside the areas in which it would normally operate; you'd have to add extra policies to allow this.
excerpt of SELinux man page
NSA Security-Enhanced Linux (SELinux) is an implementation of a
flexible mandatory access control architecture in the Linux operating
system. The SELinux architecture provides general support for the
enforcement of many kinds of mandatory access control policies,
including those based on the concepts of Type Enforcement®, Role-
Based Access Control, and Multi-Level Security. Background
information and technical documentation about SELinux can be found at
http://www.nsa.gov/research/selinux.
Command are part of what?
If you're confused whether a application is an actual file on the system, or if it's something else you can use the command type to determine this.
$ type pwd
pwd is a shell builtin
$ type fdisk
fdisk is /usr/sbin/fdisk
So in the above examples, pwd is built into Bash, whereas fdisk is an actual file that resides at /usr/sbin/fdisk.
NOTE: For anything that's built-in, they're governed by the permissions of the Bash shell from which they're being invoked!
| What is the mechanism for differentiating root requirements for terminal commands? |
1,533,638,364,000 |
I'm having issue similar to this:
How to copy files as Jenkins "post build" action if i don't have privileges to destination directory
I'm willing to move/copy/rsync files from jenkins workspace to
/var/www/app
with rights setted to
apache:apache
I've added jenkins to group apache, but the instance of jenkins cannot copy files to /var/www/app.
I've also tried with setting privileges of /var/www/app to apache:jenkins but still, Jenkins keep spitting out error: Permission denied or Operation not permitted
PS: Forgot to add OS is centOS ;)
EDIT 1:
This is log from jenkins script runnig:
[workspace] $ /bin/sh -xe /tmp/hudson1379987233097some_more_numbers.sh
+ sh /path_to_sh_script/script.sh sending incremental file list
application/
rsync: failed to set permissions on "/var/www/app/application":
Operation not permitted (1)
And this is the script itself :)
#!/bin/bash
rsync -avzh /path/to/jenkins/jobs/app/workspace/default/application /var/www/app ;
rsync -avzh /path/to/jenkins/jobs/app/workspace/default/library
/var/www/app ;
rsync -avzh
/path/to/jenkins/jobs/app/workspace/default/public /var/www/app ;
|
After a long and fruitful discussion in the comments, and following this link the user managed to solve the problem adding
--no-perms --omit-dir-times
to the rsync options.
Preliminary attempts to solve the issue:
I guess if security does not concern you for a short period of time, you can try
chmod a+rwx /var/www/app
and then try to write to this directory. Note that if there are subdirectories you must do it recursively with:
chmod --recursive a+rwx /var/www/app
If it's successful, then you can start removing permissions gradually and this will help you pinpoint the problem.
Verify that the user jenkins is indeed a member of the apache group with:
groups jenkins
| Privileges for jenkins at apache's folder |
1,533,638,364,000 |
How does Unix or Linux stop other users from accessing or modifying each others files? I know permissions are a part of it. Is there a specific concept in use?
|
The concept is the permissions concept.
To explain it, we first need to look at two things:
the uid and the filesystem.
The uid
In Unix-like systems, every user has a User ID (UID)
in addition to the user name and other properties.
The uid is a number.
This uid can't be changed by the user itself.
You can check your user uid with the command
$ id
uid=1001(username) gid=1001(username) groups=1001(username)
The uid in this example output is 1001.
The uid is unique only within one operating system installation.
If you have two computers, two different users can have the same uid.
The filesystem
A Unix-like filesystem stores, for every file, the uid of the owner of the file and the permissions the owner has for this file, and much more.
You can see the permissions of a file if you execute
$ ls -l file
-rw-r--r-- 1 username usergroup 1145 27 Feb 07:15 file
The permissions for the owner are "-rw-r--r--", so the owner can read the file and write to the file.
How it all works
For the final understanding of the permission scheme, we need the kernel of the Unix system.
Every time we access a file or do something with a file, we don't touch the file directly. Instead we use a system call which asks the kernel to do the work for us.
Inside the system call, there are routines to get the uid of the user and to get the permissions of the file. The kernel then checks if the owner of the file is equal to the uid of the user; and if so, it then checks if the user has the needed permission.
The trick is that a normal user can't influence the kernel. The kernel is started in the boot process of the computer and then can be influenced only by the root user (with uid = 0).
The user can't change his uid, and he can't change permissions of a file he doesn't own (he would have to change his uid) because the kernel manages these things and only root can change them.
You can find more about permissions here and more about system calls and how they work here. And if you want to know what else is stored in the filesystem, read this about inodes.
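Both inputs to the kernel's check can be inspected from the shell — a small demonstration, assuming GNU coreutils stat:

```shell
touch demo.txt
chmod 640 demo.txt
stat -c 'owner uid: %u  mode: %a' demo.txt   # what the filesystem stores for the file
id -u                                        # who you are; the kernel compares the two
```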
What I left out
This answer will give you only a small idea of the permission scheme, but I hope it will help you understand the things I left out better, like
Groups, which work like the uid scheme
Permissions for all users
File types
sudo and su
Permissions over networks
... (If you want to know more, get a good book about operating systems.)
| User sharing in Unix |
1,533,638,364,000 |
I have a file on my Darwin system and the permissions are:
-rwxr-xr-x@
User: read, write, execute
Group: read, execute
Other: read, execute
What is the 11th notation the @ mean?
In addition to this, I was led to believe that files/directories only had 10 places for their permissions? This, including the missing d from the front, would make 11.
|
@ means there are "extended attributes" — extra metadata that macOS attaches to the file beyond the standard permission bits. You can list the attribute names with ls -l@ file and show their contents with xattr -l file.
| Explain the "@" symbol in this permissions example [duplicate] |
1,533,638,364,000 |
I have set the umask to 0.
So:
$umask
0000
I do
echo 'test' > test.txt
And test.txt is created. If I do: ls -l test.txt I see:
$ls -l test.txt
-rw-rw-rw- 1 jim None 5 Jun 30 22:50 test.txt
Why aren't the rights rwxrwxrwx?
|
The shell uses 0666 for the default permissions when creating a new file. As umask only removes permissions, never adds them, that is what the resultant file will have.
| Doesn't umask apply to files? |
1,533,638,364,000 |
OK, I formatted my flash to ext4 file system,
changed all the permissions to 777 and mounted it to /var/www/html/web.
Now, when i access localhost/web it gives the following error:
"You don't have permission to access /web/cv on this server."
But when I normally access localhost it loads index.html that locates in /var/www/html directory, it means it has to do with mounting of flash.
Can't I hold my web directory inside my flash card in Linux?
Why it gives permission error, maybe it could be related to Apache server?
All guesses and solutions would be greatly appreciated.
BTW I am using Redhat Linux Enterprise Server 6
|
You're probably running into SELinux issues: the directories on the flash drive probably aren't labelled such that httpd_t can touch them. Run setenforce 0; service httpd restart and attempt to access the site again to confirm. If that is what's going on, you can either configure SELinux to run in permissive mode (a last-ditch "just trying to get it to work" solution) or run a recursive restorecon on /var/www.
| Permission error to access mounted directory in localhost |
1,533,638,364,000 |
I want to run the below command
echo 1000 > /sys/class/backlight/intel_backlight/brightness
I cannot do it like below, because bash is the process that actually redirects output to the root-owned brightness file:
sudo echo 1000 > /sys/class/backlight/intel_backlight/brightness
So how would I run this command, given that I:
do not want to be prompted for root password
do not want to login as root, execute command and exit
should use available sudo permissions available for current user to execute command
|
What you're asking is impossible. Your current process is not being run by root, and only the root user can issue setuid. As such, another process has to be launched as the root user first (by using a setuid executable, in this case, sudo).
Here is the closest thing to what you're asking, with the impossible removed:
echo 1000 | sudo tee /sys/class/backlight/intel_backlight/brightness
If you don't want this to prompt for a password, add a line in /etc/sudoers (by using visudo), like so (replace rag with your username):
Cmnd_Alias BACKLIGHT = /usr/bin/tee /sys/class/backlight/intel_backlight/brightness, ! /usr/bin/tee /sys/class/backlight/intel_backlight/brightness *
rag ALL=(root) NOPASSWD: BACKLIGHT
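For completeness, an equivalent pattern is to run the entire redirection inside a root shell — the sudoers whitelist would then have to match this exact sh invocation instead of tee:

```shell
sudo sh -c 'echo 1000 > /sys/class/backlight/intel_backlight/brightness'
```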
| run a command as different user with available sudo permissions |
1,533,638,364,000 |
Running in Csh when using Tilde Notation performing a
~/
at the command line, I receive a "Permission denied." error. This directory is owned by the user performing the command and has the permissions.
myhostname% ~/
/my/home/dir/: Permission denied.
Checking the permissions:
myhostname% whoami
myuser
myhostname% cd ..
myhostname% pwd
/my/home
myhostname% ls -la
total 40
drwxr-xr-x 7 myuser mygroup 4096 Sep 16 10:49 .
drwxr-xr-x 3 root root 4096 Sep 27 2010 ..
drwxr-xr-x 19 myuser mygroup 4096 Jan 15 13:36 dir
I think I'm missing a setting somewhere in the .cshrc file but I'm not sure why or what is causing this. The reason I believe it's in that area is because when I exit back out into BASH I can perform the same operation (I guess this is sort of obvious since I'm switching profiles). Is there something glaringly obvious I'm missing?
myhostname% exit
logout
-bash-3.2$ ~/
-bash: /home/me/: is a directory
|
A directory cannot be executed as a command, even though it has the execute permission bit set. For directories the execute bit means something else: it lets a user access (traverse) the directory and open its contents (listing the files still requires the read bit). So when you type ~/ at the csh prompt, csh tries to run your home directory as a command, which fails with "Permission denied"; bash gives the friendlier diagnostic "is a directory".
1,533,638,364,000 |
I downloaded the Source Code Pro family of fonts, but cannot install them via the Font Viewer. If I give myself admin powers, I can manually add them to the rest of the fonts, so I'm guessing it's a permission issue.
What would I need to do to install them via the Font Viewer?
As a slight aside, I also read something about making sure the fonts were in the font cache as well. Is that necessary? If so, what's that entail?
|
You can copy the fonts into the ~/.fonts folder (create it first with mkdir -p ~/.fonts if needed) and run the fc-cache -fv command to rebuild the font cache. You don't need admin privileges for any of this.
| Trying to install fonts on Linux Mint Lisa, but it looks like I have a permissions or ownership problem |
1,533,638,364,000 |
I have a Python virtualenv, and the Python executable is located in the /bin directory. In this virtualenv I'll have to execute some unsafe code, that can damage my system. I tried to chmod a-r on the virtualenv, and now nobody can write there, but its parent directory is now unprotected.
So I thought I could change permissions on /bin/python, so that it can write nowhere, how can I do this?
I tried chmod a-r bin/python but it is still allowed to remove files and directories even outside the env.
|
Firstly, chmod a-r bin/python does not prevent python from removing files. It only removes read permission on that file, which controls who may read the executable itself — not what the interpreter can do to other files.
If you wish to run unsafe code in a "jail", I suggest using a chroot jail. Bear in mind that for chroot to be effective, the python executable should not be run with root privileges.
| chmod - change permissions on a file |
1,533,638,364,000 |
I have a folder on a Linux machine that I would like to be read only for members of one group, read write for members of another group, and read write for the owner. Others would not be able to access this folder at all.
Is there a way to do this without using ACLs?
|
The simple answer is "no" ... that's exactly what ACL's are for, controlling access to resources :-)
The normal unix model only includes one set of permissions for the group owner, not two. You could perhaps hack this by making the folder read-only for world and read-write for the group and owner, but that has the obvious drawback of being world-readable. If you need more fine-grained control, use ACLs. That's what they were designed to do.
| Different access rights for different groups for a folder on Linux |
1,685,040,610,000 |
In my case when I create a file or folder as user ludow the owner of the file or folder is root
exemple
❯ whoami
ludow
❯ touch test
❯ ls -al | grep test
-rwxrwxrwx 1 root root 0 30 oct. 21:02 test
chown not working
❯ chown -v ludow:ludow test
membership of 'test' changed from root:root to ludow:ludow
the owner doesn't change
❯ ls -al | grep test
-rwxrwxrwx 1 root root 0 30 oct. 21:02 test
all my files are owned by root, even those that shouldn't be
here is some information about my environment
❯ neofetch
' ludow@Spiron
'o' ------------
'ooo' OS: Artix Linux x86_64
'ooxoo' Host: Inspiron 15 5510
'ooxxxoo' Kernel: 6.0.5-x64v1-xanmod1
'oookkxxoo' Uptime: 54 mins
'oiioxkkxxoo' Packages: 1252 (pacman), 5 (flatpak)
':;:iiiioxxxoo' Shell: zsh 5.9
`'.;::ioxxoo' Resolution: 1920x1080, 1920x1080
'-. `':;jiooo' DE: Plasma 5.26.2
'oooio-.. `'i:io' WM: KWin
'ooooxxxxoio:,. `'-;' Theme: Artix-dark [Plasma], Artix-dark [GTK2/3]
'ooooxxxxxkkxoooIi:-. `' Icons: [Plasma], Colloid-nord-dark [GTK2/3]
'ooooxxxxxkkkkxoiiiiiji' Terminal: alacritty
'ooooxxxxxkxxoiiii:'` .i' CPU: 11th Gen Intel i5-11320H (8) @ 4.500GHz
'ooooxxxxxoi:::'` .;ioxo' GPU: Intel TigerLake-LP GT2 [Iris Xe Graphics]
'ooooxooi::'` .:iiixkxxo' Memory: 3500MiB / 7696MiB
'ooooi:'` `'';ioxxo'
'i:'` '':io'
'` `'
what would be a solution to restore the default behavior, without reinstalling os?
/etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=5895-EEC1 /boot/efi vfat umask=0077 0 2
UUID=04cddafd-0517-4528-a181-d4592f483992 / xfs defaults,noatime 0 1
UUID=9cfe2ed5-6cc5-4a67-8bf8-bad85c9a3f3d swap swap defaults,noatime 0 0
UUID=05F56DAC5B0B310A /home ntfs defaults,noatime 0 2
tmpfs /tmp tmpfs defaults,noatime,mode=1777
|
Your home directory is an NTFS partition (from Microsoft Windows). NTFS has a very different permissions model to Linux and so users cannot be directly mapped into Linux out of the box.
The behaviour you're seeing is the default behaviour where all files in the partition are automatically interpreted as being owned by root.
I believe there is now a way to map NTFS users into Linux users, you may need to spend some time on Google figuring out how. There's some reference to it here: https://man.archlinux.org/man/extra/ntfs-3g/ntfsusermap.8.en
As a general rule, it's not a great idea to have your Windows home directory be exactly the same as your Linux home directory. Applications will try to store files on the root of the home directory with configuration and caches etc. If you happen to install the same app on both OS, you may find that the cache or config differ enough to confuse the app in one OS or the other.
Generally it's better to have a sub directory (perhaps even ~/Documents) be shared, but the root ~ be separate.
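If a separate Linux home partition isn't feasible right now, a quicker workaround is to mount the NTFS partition with an explicit Linux owner via the `uid=`/`gid=` mount options of ntfs-3g. A sketch based on the fstab line from the question, assuming your user's UID and GID are 1000 (check with `id`):

```
UUID=05F56DAC5B0B310A /home ntfs-3g defaults,noatime,uid=1000,gid=1000 0 2
```

This makes every file on the partition appear owned by that user rather than root; it does not give true per-file ownership like the usermap approach does.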
| My files are created with the wrong owner [duplicate] |
1,685,040,610,000 |
I'm writing a script that should be as POSIX compliant as possible, so I'm avoiding bashims. It requires elevated permissions (root) for some parts, and prompts the user if necessary - which requires checking whether the current user is root.
There are various approaches, e.g. checking UID, EUID, id -u, etc. These are covered in many questions related to the detection itself: [1], [2], [3], [4], [5], etc., but they are usually bash-related, and have posix-related warnings in comments as an afterthought.
I don't know which shells support which methods. Is there a single approach that is POSIX compliant and would work on most shells?
|
id -u is specified by POSIX, so any POSIX-compliant environment should support
[ "$(id -u)" = 0 ]
as a test for effective root.
My tests on Busybox (with its ash and id) suggest that it supports the above, at least with the compilation options used for my Busybox. (I imagine it’s possible to build a Busybox which doesn’t meet the POSIX spec for these features, but then it would be out of scope for your question really.)
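For use in a larger script, the check is often wrapped in a helper; a minimal sketch (the function name is my own, not part of any standard):

```shell
# POSIX-portable effective-root test, using only id -u as above.
is_root() {
    [ "$(id -u)" = 0 ]
}

if is_root; then
    echo "running as root"
else
    echo "elevation needed for some steps" >&2
fi
```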
| POSIX-compliant elevation detection |
1,685,040,610,000 |
I don't know how to describe very well the situation. I don't have access to a directory, but I do have access to some stuff within that directory.
For example:
In /path
[me@pc path]$ ls
ls: cannot open directory '.': Permission denied
But I can still go into a couple of folders e.g.
[me@pc path]$ cd folder_1
[me@pc path/folder_1]$
How can I list all the folders I have access to? If I run ls or find ./ I get Permission denied so I don't get to see anything, but I know I have access to some folders because I am able to cd into them.
|
It's not possible to get a listing without changing the permissions on /path or changing users.
In the UNIX file permission model, accessing the contents of a directory and listing its contents are considered separate permissions, with the listing controlled by the directory's read permission, and the ability to access files and folders under the directory controlled by its execute permission.
In this case, you have execute permissions for /path, but not read permissions. This means you are not able to view the listing for the directory, but you are allowed to access files and directories contained within the folder if you have access to those. You are even able to create new files and directories within /path if you have write permission. But you are not allowed to get a listing of its contents.
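The split between the read and execute bits is easy to reproduce; a small demonstration (paths under /tmp are invented for the example) that recreates the situation described for /path:

```shell
# Directory that can be entered (x) but not listed (no r), like /path.
mkdir -p /tmp/demo_path/folder_1
chmod 311 /tmp/demo_path           # -wx for owner, --x for group/others
chmod 755 /tmp/demo_path/folder_1  # contents themselves stay accessible
stat -c '%A' /tmp/demo_path        # d-wx--x--x
```

With these bits, `cd /tmp/demo_path/folder_1` succeeds while `ls /tmp/demo_path` is denied (for non-root users).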
| How to list directories I have access to? |
1,685,040,610,000 |
As far I know, a file's ownership on Linux depends on the file's owner's UID.
What happens if a user in a different machine has the same UID as a user on the server and then the file is copied to the server? Who owns that file?
What happens if a user on a different machine has UID that is not the same as any user on the server and then the file is copied to the server? Who owns that file?
I have created few users and a group. Then copy pasted:
$ sudo adduser --gecos "" --disabled-password --no-create-home user1
$ sudo adduser --gecos "" --disabled-password --no-create-home user2
$ sudo adduser --gecos "" --disabled-password --no-create-home user3
$ sudo adduser --gecos "" --disabled-password --no-create-home user4
$ sudo addgroup userstart
$ sudo gpasswd -M user1,user2,user3,user4 userstart
$ sudo chown :userstart /home/blueray/Desktop/Permissions
$ sudo runuser -u user1 -- cp /home/blueray/Desktop/Permissions/test.html /home/blueray/Desktop/Permissions/test-copy.html
$ ls -la /home/blueray/Desktop/Permissions
total 72
drwxrwxr-x 2 blueray userstart 4096 Feb 8 11:57 .
drwxr-xr-x 3 blueray blueray 4096 Feb 8 11:55 ..
-rw-r--r-- 1 user1 user1 31017 Feb 8 11:57 test-copy.html
-rw-rw-r-- 1 blueray blueray 31017 Feb 6 05:50 test.html
The user who copied the file seems to own the file. Is it always the case?
|
Generally speaking, a non-privileged user cannot create files with different ownership than his own UID, so when he copies a file, the new file in the destination will always be owned by the UID of the user who ran the cp command.
This only applies for the case that a non-privileged user (non-root) copies the files, and it doesn't matter if he copies them from a remote machine or from a local one, or who was the original owner of the file.
If some user copies a file to a remote machine, the file will belong to the UID of that user on the remote machine. For instance, let's say you have user foo that has UID 100 on machine A, and on machine B there's also a user foo but with UID 101. If user foo copies a file from machine A to machine B (and it doesn't matter who was the original owner of the file and what was the method of copying), it will be created on machine B under the same user, but with his UID on machine B - 101. And again, this doesn't apply to copies ran by root.
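A quick way to see the first rule locally (file names are invented for the demo): whoever runs cp owns the new copy, whatever the source file's ownership was.

```shell
# The copy is created under the caller's own UID.
printf 'data\n' > /tmp/demo_uid_src
cp /tmp/demo_uid_src /tmp/demo_uid_copy
[ "$(stat -c '%u' /tmp/demo_uid_copy)" = "$(id -u)" ] && echo "copy owned by caller"
```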
| How does Linux handle permissions of files created on a different machine? |
1,685,040,610,000 |
What's the difference in AIX ACLs with permit versus specify:
This is what the documentation says:
"The permit, deny, and specify keywords are defined as follows:
permit
Grants the user or group the specified access to the file
deny
Restricts the user or group from using the specified access to the file
specify
Precisely defines the file access for the user or group
If a user is denied a particular access by either a deny or a specify keyword, no other entry can override that access denial."
source: https://www.ibm.com/docs/el/aix/7.1?topic=system-aixc-access-control-list
Don't know if this is a very subtle english issue, and me not being native speaker.
Want to understand the difference.
Here an example:
attributes: SUID
base permissions:
owner (frank): rw-
group (system): r-x
others : ---
extended permissions:
enabled
permit rw- u:dhs
deny r-- u:chas, g:system
specify r-- u:john, g:gateway, g:mail
permit rw- g:account, g:finance
both "specify" and "permit" seem to work the same.
Edit:
Thank you user sllabres for the detailed answer.
|
Interesting question.
From a quick check, there is a difference in how overlapping permissions are combined.
If a user has read permission on a file and write permission via a group, and both are 'permit' ACLs, the user is able to read and write the file. (The permissions are logically ORed together.)
If there is a 'specify' ACL with e.g. only read permission, only that read permission is valid and write permissions from e.g. a group are ignored.
If there are multiple 'specify' ACLs, they seem to be combined with logical AND.
Example
testuser@testserver: /home/testuser >
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
disabled
permit r-- u:testuser
With these permissions (ACL disabled) user 'testuser' can write to the file test (via the others permissions) but not read it.
testuser@testserver: /home/testuser >
# echo "data" > test
testuser@testserver: /home/testuser >
# cat test
cat: 0652-050 Cannot open test.
testuser@testserver: /home/testuser >
Enabling the ACL gives 'testuser' the ability to read the file, but he can no longer write to it, since the ACL now grants only read permission.
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
enabled
permit r-- u:testuser
testuser@testserver: /home/testuser >
# echo "data" > test
The file access permissions do not allow the specified action.
ksh: test: 0403-005 Cannot create the specified file.
testuser@testserver: /home/testuser >
# cat test
data
Extending the ACL with the user's group (staff) and write permission for the group results in read permission (testuser is permitted to read in the ACL) plus write permission via the staff group, of which testuser is a member. (Logically ORed.)
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
enabled
permit r-- u:testuser
permit -w- g:staff
testuser@testserver: /home/testuser >
# echo "data" > test
testuser@testserver: /home/testuser >
# cat test
data
If the user's read permission is changed from 'permit' to 'specify', only the read permission is valid, and the write permission via the staff group is no longer valid.
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
enabled
specify r-- u:testuser
permit -w- g:staff
testuser@testserver: /home/testuser >
# echo "hi" > test
The file access permissions do not allow the specified action.
ksh: test: 0403-005 Cannot create the specified file.
testuser@testserver: /home/testuser >
# cat test
data
If both ACLs in this example (u:testuser and g:staff) are changed to 'specify', no read or write access is allowed (logical AND).
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
enabled
specify r-- u:testuser
specify -w- g:staff
testuser@testserver: /home/testuser >
# echo "data" > test
The file access permissions do not allow the specified action.
ksh: test: 0403-005 Cannot create the specified file.
testuser@testserver: /home/testuser >
# cat test
cat: 0652-050 Cannot open test.
Changing the specify ACL for g:staff to read and write grants only the read permission, not read and write as it would with a permit ACL.
# aclget test
*
* ACL_type AIXC
*
attributes:
base permissions
owner(root): rw-
group(system): ---
others: -w-
extended permissions
enabled
specify r-- u:testuser
specify rw- g:staff
testuser@testserver: /home/testuser >
# echo "hi" > test
The file access permissions do not allow the specified action.
ksh: test: 0403-005 Cannot create the specified file.
testuser@testserver: /home/testuser >
# cat test
data
testuser@testserver: /home/testuser >
| AIX ACLs difference "permit" versus "specify" |
1,685,040,610,000 |
I am writing a shell script to do some complex task which is repetitive in nature.
To simplify the problem statement, the last step in my complex task is to copy a file from a particular path (which is found based on some complex step) to a pre-defined destination path. And the file that gets copied will have the permission as 600. I want this to be changed to 644.
I am looking at an option where in I can instruct the system to copy that file and change the permission all in one command. Something like -
cp -<some_flag> 644 <source_path> <destination_path>
You may say, I can change the permission of the file once it is copied in a 2 step process. However, there is problem with that as well. Reason being, the source path is obtained as an output of another command. So I get the absolute path for the file and not just the file name to write my script to call chmod with a name.
My command last segment looks like -
...| xargs -I {} cp {} /my/destination/path/
So I don't know the name of the file to call chmod on after the copy.
|
Just include the chmod in your xargs call:
...| xargs sh -c 'for file; do
cp -- "$file" /my/destination/path/ &&
chmod 700 /my/destination/path/"$file";
done' sh
See https://unix.stackexchange.com/a/156010/22222 for more on the specific format used.
Note that if your input to xargs is a full path and not a file name in the local directory, you will need to use ${file##*/} to get the file
name only:
...| xargs sh -c 'for file; do
cp -- "$file" /my/destination/path/ &&
chmod 700 /my/destination/path/"${file##*/}";
done' sh
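As an aside, if GNU coreutils is available, install(1) can copy the file and set the destination mode in a single command, which avoids the separate chmod entirely. A sketch with invented /tmp paths standing in for the real source and destination:

```shell
# install -m copies and applies the mode in one step.
mkdir -p /tmp/demo_dest
printf 'hi\n' > /tmp/demo_src.txt
chmod 600 /tmp/demo_src.txt

file=/tmp/demo_src.txt
install -m 644 -- "$file" /tmp/demo_dest/"${file##*/}"
stat -c '%a' /tmp/demo_dest/demo_src.txt   # 644
```

In the xargs pipeline this would replace the `cp` + `chmod` pair with a single `install -m 644` call.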
| How to copy a file and change destination file permission in one step? |
1,685,040,610,000 |
I am running Docker on Linux Mint 20 (ubuntu 20.04 base). I have installed docker through the software manager app.
I am trying to run docker commands without using sudo, but they are still not working. Docker is installed. When I run just the docker command, it gives me the expected output.
Observe the following terminal output. I have added my current user to the 'docker', yet permission issues still persist.
myUser@myUser-devDesktop:~$ sudo usermod -a -G docker ${USER}
[sudo] password for myUser:
myUser@myUser-devDesktop:~$ grep 'docker' /etc/group
docker:x:137:myUser
myUser@myUser-devDesktop:~$ docker images ls
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json?filters=%7B%22reference%22%3A%7B%22ls%22%3Atrue%7D%7D: dial unix /var/run/docker.sock: connect: permission denied
myUser@myUser-devDesktop:~$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
myUser@myUser-devDesktop:~$
|
Adding yourself to a group with adduser or usermod is not effective until you've run a suitable PAM invocation, since PAM is usually responsible for setting the user and group IDs for your process. Right now, you've told the system configuration files that when you log in, you should be granted membership in the docker group, but that doesn't affect your current session.
You can either log out and log back in again, which will cause your new session to have the proper group, or run su - myUser to spawn a new shell with the new privileges.
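A quick way to check whether the new group is actually active in the current session (the group name comes from the question; the rest is a sketch):

```shell
# id -nG lists the groups of the *current process*, which is what
# matters for opening /var/run/docker.sock.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    echo "docker group is active in this shell"
else
    echo "not active yet: log out/in, or start a subshell with: newgrp docker"
fi
```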
| Can not run docker commands without sudo |
1,685,040,610,000 |
I'm trying to append the address information to a file. I am getting an error message of
tee: /OR_595.txt: Permission denied
I'm using the following code to create the file.
cstates=($(awk -v FS=^ '{print $5}' "$1"))
for i in "${cstates[@]}"
do
:
if [[ ! -f "./$2/$i/${i}_595.txt" ]]; then
echo "Making ${i}_595.txt File"
touch "./$2/$i/${i}_595.txt"
chmod a+x "./$2/$i/${i}_595.txt"
else echo "File ${i}_595.txt already exists"
fi
done
This code is writing to the file.
file_name="$1"
while IFS=^ read -r company_name address1 address2 city state zip phone
do
printf "Company Name: %s\nCompany Address: %s%s, %s, %s, %s\nCompany Phone Number: %s\n\n" \
"${company_name}" "${address1}" "${address2}" "${city}" "${state}" "${zip}" "${phone}" | tee -a "${outputdir}/${state}_595.txt" > /dev/null
done < $file_name
I've checked the permissions on the each of the folders, subfolders and the file
drwxrwxr-x 56 jh78454 jh78454 4096 Feb 19 14:58 States
drwxrwxr-x 2 jh78454 jh78454 4096 Feb 19 15:14 WA
-rwxrwxr-x 1 jh78454 jh78454 0 Feb 19 15:14 WA_595.txt
I've looked at the permissions via WinSCP for several of the folders and files and they are all the same. Not sure why I'm getting permission denied error.
|
Not sure why I'm getting permission denied error.
Apparently ${outputdir} expands to an empty string (the variable is not defined or empty) and ${state} expands to OR. This way ${outputdir}/${state}_595.txt expands to /OR_595.txt.
/OR_595.txt points to a file named OR_595.txt in the root directory /. This file probably doesn't exist, and in any case a regular user normally cannot create arbitrary files in /.
Define outputdir so ${outputdir}/${state}_595.txt points to a file you can write to. You're using tee -a so maybe the design is the file already exists. I guess the first snippet is supposed to create the file. It uses ./$2/$i/, so you need to set outputdir in the second snippet accordingly.
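The failure mode and the fix are easy to reproduce (the /tmp path below is invented for the demo):

```shell
outputdir=""             # simulates the undefined variable in the script
state=OR
echo "${outputdir}/${state}_595.txt"     # /OR_595.txt -- a root-level path

outputdir=/tmp/demo_out                  # any writable directory fixes it
mkdir -p "$outputdir"
rm -f "$outputdir/${state}_595.txt"
echo "test line" | tee -a "${outputdir}/${state}_595.txt" > /dev/null
cat "$outputdir/OR_595.txt"              # test line
```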
| tee: /OR_595.txt: Permission denied |
1,685,040,610,000 |
I am trying to create a function in Bash that creates a directory, then sets the ownership, like this:
create_dir() {
local PATH=$1
local OWNER=$2
# create log directory
echo -e "\n* Creating directory $PATH"
if [[ -d $PATH ]]
then
echo " * $PATH already exist, no action done"
else
echo " * $PATH does not exist, creating dir"
sudo mkdir $PATH
fi
sudo chown $OWNER $PATH
}
then I call it in the same script like this:
# create upload directory, and set owner
create_dir "/www/upload" "deployer"
when I execute the script:
sudo ./dir_setup.sh
it results the following:
* Creating log directory /www/upload
* /www/upload already exist, no action done
./dir_setup.sh: line 15: sudo: command not found
Sudo command is installed and works as expected. The same happens with or without using sudo when calling dir_setup.sh
If I change the line to
sudo chown $OWNER $PATH
to
/usr/bin/sudo chown $OWNER $PATH
it works fine. I am logged in as the user "deployer" that I created; it has sudoer rights and owns dir_setup.sh, which has 777 permissions.
If I take that line from the function and put it somewhere else, it also works fine.
Any idea why do I need to call sudo with /usr/bin/sudo instead of just calling it using sudo when inside a function? And why does it work outside the function?
|
You've redefined the reserved variable $PATH, which contains the set of locations in which to search for commands such as sudo.
Don't use all capitals for your variable names, and all will be well.
create_dir() {
local path="$1"
local owner="$2"
# create log directory
printf "\n* Creating directory %s\n" "$path"
if [[ -d "$path" ]]
then
printf " * %s already exists, no action done\n" "$path"
else
printf " * %s does not exist, creating dir\n" "$path"
sudo mkdir "$path"
fi
sudo chown "$owner" "$path"
}
Notice also that I've double-quoted the variables when they are used, so that their values don't get parsed by the shell.
You could also simplify this at the expense of some logging:
create_dir() {
local path="$1" owner="$2"
printf "\n* Creating directory %s\n" "$path"
sudo mkdir -p "$path"
sudo chown "$owner" "$path"
}
| SUDO not found when calling inside a function in BASH script |
1,685,040,610,000 |
I'm looking for something like test -x but which only succeeds when a file is executable by "other", e.g. after chmod o+x FILE.
I could try to parse the output of ls -l,
-rw-r--r-x 1 me me 0 Dec 23 10:47 t
but is there a more elegant way?
|
One line to execute in shell (/bin/sh or /bin/bash):
PERMISSIONS=$(stat -c '%a' FILENAME); [ $((0${PERMISSIONS} & 0001)) -ne 0 ] && echo "executable by others" || echo "not executable by others"
Should be not a problem to create a script based on this.
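A slightly more declarative alternative with GNU find (the file name below is made up): `-perm /o=x` matches when the others-execute bit is set, so no octal arithmetic is needed.

```shell
# Set up a file with o+x and test for the others-execute bit.
f=/tmp/demo_exec_check
touch "$f"
chmod 644 "$f"
chmod o+x "$f"     # mode is now 645

if [ -n "$(find "$f" -maxdepth 0 -perm /o=x)" ]; then
    echo "executable by others"
else
    echo "not executable by others"
fi
```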
| How do I check if a file is executable by other |
1,685,040,610,000 |
I want to be able to shutdown (or restart) my system without having to enter my password. My /etc/doas.conf looks like this and my user is in the wheel group
permit nopass :wheel as root cmd /sbin/poweroff
permit nopass :wheel as root cmd /sbin/reboot
permit :wheel
I thought this would be enough so I can enter
$ poweroff
but I get the message
poweroff: must be superuser.
when I do
$ doas poweroff
I still have to enter my password.
How can I configure doas so that my user can poweroff or reboot without having to enter my password? And is it possible to configure it so that I don't have to enter doas at all?
|
The commands that you enter in the doas.conf file (which you should enter with a full path for safety) have to occur exactly like that on the command line. This means that to power off your system, you would type
doas /sbin/poweroff
You may obviously set up a handy alias for this:
alias poweroff='doas /sbin/poweroff'
With that alias in effect, you would just have to use poweroff to power off your system.
Additionally, the last match in the doas.conf file counts. In your case, the permit :wheel matches due to you being in the wheel group, and this does not specify nopass, which means that you will have to use your password with doas to run /sbin/poweroff.
Simply delete that last rule in the doas.conf file (or move it to the top):
permit :wheel
permit nopass :wheel as root cmd /sbin/poweroff
permit nopass :wheel as root cmd /sbin/reboot
| shutdown without password using doas |
1,685,040,610,000 |
Is there an option in find that allows me to suppress the error messages that I get from it trying to access directories for which I don't have access?
I know I can just discard stderr, but it seems like such an obvious need that I'm not convinced that an option that does this does not exist, despite me not finding one in the documentation.
|
To avoid getting permission errors from find, you would have to avoid provoking these errors. You do that by avoiding entering directories that are not accessible.
Find and display the pathnames of directories that are not readable by the current user, but don't descend into them, GNU find style:
find / -type d ! -readable -prune
The -prune action removes the pathname currently under investigation from the search path of find.
With standard find, you would have to combine -perm and -user and -group in a complicated way to test the permissions on each directory depending on the ownerships of the directory. I think I've tried to do that a couple of times, but it's difficult.
To only care about the "others" permission bits:
find / -type d ! -user "$(id -u)" ! -group "$(id -g)" ! -perm -005 -prune
This would find any directory not owned by the current user, not belonging to the current user's group, and whose permission bits do not allow "others" to read (list) or execute (enter) it, and then prune these from the search path.
The full thing, testing all the permission bits, may possibly look something like
find / -type d \( \( -user "$(id -u)" ! -perm -500 \) -o \
\( ! -user "$(id -u)" -group "$(id -g)" ! -perm -050 \) -o \
\( ! -user "$(id -u)" ! -group "$(id -g)" ! -perm -005 \) \) -prune
The difference between this and the -readable of GNU find is that -readable also considers ACLs etc.
To discard permission errors from find, redirect its standard error stream to /dev/null.
| Discard "access denied" stderr natively in find |
1,685,040,610,000 |
I'm trying to create a bash script that will be able to perform a find command to get the permissions and full path of every file/directory in a directory tree, creating a file for each unique permission and printing each full file path to that permission's file. Then in the future I could read these files and use them to restore permissions to the way they were when I ran the script.
For instance:
drwxrwxrwx /home/user/testDirectory
-rwxrwxrwx /home/user/testDirectory/testFile
drwxr-xr-x /home/user/testDirectory/directory2
-rwxr-xr-x /home/user/testDirectory/directory2/test2
The above would create 2 files (e.g. 777.txt and 755.txt) that would each have 2 of the lines.
I am struggling with the logic to create a file for each unique permission and then send the full file path.
What I have so far (I doubt the array is necessary, but I have played with sorting the array by permission which I can do with -k 1.2 on the sort command to ignore the d flag):
declare -a PERMS
i=0
while read line
do
PERMS[$i]="$line"
(( i++ ))
done < <( find /opt/sas94 -printf ""%M/" "%p/"\n")
|
Try this:
#!/bin/bash
while IFS= read -r file; do
    stat -c '%A %n' "$file" >> "$(stat -c '%a' "$file").txt"
done < <(find "$1")
Usage:
./script.sh /path/to/directory
the first stat -c '%A %n' "$file" prints the permissions and path to the file, e.g. -rw-rw-rw- /foo/bar
the second stat -c '%a' "$file" prints the permissions in octal form, e.g. 666
The output of the first stat is appended to the filename created by the second stat with suffix .txt.
| Bash Script to create a file for each unique permission in a listing/find command and printing each directory/file path instance of that permission |
1,685,040,610,000 |
Bit of history first: where I work, some developers/shareholders have brought their intellectual property together to form the company. IP still remains theirs alone, individually, as well as the source code.
In addition, we have also had some problems with industrial espionage from 3rd parties a few years back.
All of this had led some of those developers/company owners to come up with unorthodox measures to ensure that, even if stolen, our binaries cannot be used.
Current problem: we're renting a supercomputer to do some hard number crunching in order to meet a deadline. Trouble is, the executable in question has a static dependency on a text file buried deep inside our network directory structure.
Why not just recompile without this 'dependency'? Because the developer in question is currently away on a personal trip, and isn't expected to return in order to recompile this code before our deadline is met.
Execution:
./run.sh
Error output:
forrtl: No such file or directory
forrtl: severe (29): file not found, unit 1, file /foo/bar/.xyz
Image PC Routine Line Source
number_crunch 000000000048B933 Unknown Unknown Unknown
number_crunch 0000000000499ADB Unknown Unknown Unknown
number_crunch 0000000000445941 Unknown Unknown Unknown
number_crunch 0000000000403BFE Unknown Unknown Unknown
libc.so.6 00002AAAAB6C10BD Unknown Unknown Unknown
number_crunch 0000000000403B09 Unknown Unknown Unknown
Contents of run.sh:
#!/bin/bash
#SBATCH --nodes=10
#SBATCH --job-name=number_crunch
#SBATCH --cpus-per-task=8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/share/intel/ics2013/composer_xe_2013_sp1.2.144/compiler/lib/intel64/
module load glibc
./number_crunch
What I need: some way to trick the binary into acknowledging the /foo/bar/.xyz structure, without having root powers.
Is this possible? I know that alias does not allow for slashes in the alias name, and ln requires that I have permission to write on /.
|
What about patching your binary in-place? strings yourbinary | grep -F /foo/bar/.xyz should print out /foo/bar/.xyz. If /foo/bar/.xyz is sufficiently unique in the strings, you could do:
sed -i "s_/foo/bar/\.xyz_/control/.xyz_g" yourbinary
where /control/ is a directory you have control over. The replacement string's length (in number of bytes) must be equal to the original string's length. If the replacement string is shorter, you may be able to pad it with null bytes: sed -i "s_/foo/bar/\.xyz_/contr/xyz\x00\x00\x00_g" yourbinary (o, l, and . were removed for null bytes), but the success of this may depend on whether or not there are hardcoded dependencies on the length of /foo/bar/.xyz. Alternatively, you can make the path longer by adding some / characters (/tmp/////.xyz).
If the replacement string is longer, you're probably out of luck for this style of in-place patching. However, you may be able to combine this with a symlink solution if necessary, where /control/xyz is a path of suitable length but it points to a longer path where the real file resides.
If you have the expertise and you need more control over which instances of the string are replaced, you can do this with a hex editor instead of sed.
I would test this change before doing anything important with it.
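The technique can be rehearsed end-to-end on a throwaway file first (the "binary" and the replacement path below are invented; note /tmp/dem/.xyz is exactly 13 bytes, the same length as /foo/bar/.xyz):

```shell
# Fake binary: the hardcoded path embedded between NUL bytes.
printf 'HEAD\0/foo/bar/.xyz\0TAIL' > /tmp/demo_bin

# Equal-length in-place replacement, as in the answer.
sed -i 's_/foo/bar/\.xyz_/tmp/dem/.xyz_g' /tmp/demo_bin

grep -ac /tmp/dem/.xyz /tmp/demo_bin    # 1
```

The file size must be unchanged afterwards; if it grew or shrank, the replacement length was wrong and the patched binary is likely corrupt.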
| Create symlink to a directory I don't have permissions over |
1,685,040,610,000 |
I'm trying to read physical memory as a non-root user using /dev/mem. Checking the permissions of /dev/mem:
~/w/e/setup ❯❯❯ ls -lha /dev/ | grep mem
crw-r----- 1 root kmem 1, 1 Oct 15 09:29 mem
From my understanding, a member of the kmem group should be allowed to read from /dev/mem. I check my group memberships:
~/w/e/setup ❯❯❯ groups
docker users video uucp kmem wheel autologin
The current user is a member of the kmem group, so I try to read a bit from /dev/mem:
~/w/e/setup ❯❯❯ head /dev/mem | hexdump -C
head: cannot open '/dev/mem' for reading: Operation not permitted
To my surprise, the operation is not permitted. The same operation works when I login as root.
Can someone explain, why I cannot read from /dev/mem as a member of group kmem?
How can I enable non-root read-only access to /dev/mem for a specific user?
|
/dev/mem can only be opened by processes with CAP_SYS_RAWIO; head, not running as root, doesn’t have that capability. You can “fix” this using setcap (but only do this on a copy of the binary...):
cp /usr/bin/head .
sudo setcap cap_sys_rawio+ep head
./head /dev/mem | hexdump -C
Enabling access to /dev/mem for a specific user thus involves group membership (so that the device can be opened) and binary capabilities.
| Non-root read access to /dev/mem by kmem group members fails |
1,685,040,610,000 |
I'm trying to set the ACLs for a particular file, but using the options
R for recursive
d for default
m for modify
does not seem to have any effect, as indicated below:
/home/pkaramol/Desktop/somedir
$ getfacl afile
# file: afile
# owner: pkaramol
# group: pkaramol
user::rw-
group::rw-
other::---
/home/pkaramol/Desktop/somedir
$ sudo setfacl -Rdm u:bullwinkle:rwx afile
/home/pkaramol/Desktop/somedir
$ getfacl afile
# file: afile
# owner: pkaramol
# group: pkaramol
user::rw-
group::rw-
other::---
|
The use of -Rd really only makes sense when dealing with directories. To modify the ACLs for a given file and add another user you merely do this:
$ sudo setfacl -m u:user1:rwx somefile
$ getfacl somefile
# file: somefile
# owner: root
# group: root
user::rw-
user:user1:rwx
group::r--
mask::rwx
other::r--
Per man setfacl page:
-R, --recursive
Apply operations to all files and directories recursively. This
option cannot be mixed with `--restore'.
-d, --default
All operations apply to the Default ACL. Regular ACL entries in the
input set are promoted to Default ACL entries. Default ACL entries in
the input set are discarded. (A warning is issued if that
happens).
| Setting access control list does not have any effect |
1,685,040,610,000 |
I want to mount one of my media folders of my Synology DiskStation (DS414J, DSM 6.2) on my laptop (Manjaro running on Kernel 4.17.18) via SMB/CIFS. I set up a DiskStation user called media that has read/write access to this specific folder. I mount the folder with the following /etc/fstab entry:
//{disk station IP}/{folder}/ /home/{user}/NAS/{folder} cifs auto,x-systemd.automount,cache=none,rsize=130048,wsize=57344,users,user=media,pass={the password},workgroup=WORKGROUP,ip={disk station IP} 0 0
Mounting and read access works (I can access the files and e.g. play them with VLC) with the regular user. However, when I try to perform any write operations, I get "Permission denied" error.
Output of ls -la on the share shows following:
drwxr-xr-x 2 root root 0 01. Jan 2018 .
drwxr-xr-x 2 root root 0 01. Jan 2018 ..
-rwxr-xr-x 1 root root 5,8M 01. Jan 2018 '01.file'
-rwxr-xr-x 1 root root 3,7M 01. Jan 2018 '02.file'
-rwxr-xr-x 1 root root 3,2M 01. Jan 2018 '03.file'
How do I configure my laptop to allow my regular user to have read/write access to the share?
|
Your share has world-read access, hence anyone who can access the mount point can read the contents. When your system mounts the share, it maps the share owner (which has r/w access) to root, hence your regular user can't perform any write operations.
You can change this mapping to set your regular user as the owner and group of the share by using uid= and gid= mount options. This should allow write access.
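A sketch of the adjusted fstab entry from the question, assuming the regular user's UID and GID are 1000 (check with `id`); the added options are `uid=` and `gid=`:

```
//{disk station IP}/{folder}/ /home/{user}/NAS/{folder} cifs auto,x-systemd.automount,uid=1000,gid=1000,cache=none,rsize=130048,wsize=57344,users,user=media,pass={the password},workgroup=WORKGROUP,ip={disk station IP} 0 0
```

After remounting, `ls -la` should show the files owned by your regular user instead of root, and writes should succeed.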
| Allow write access for regular user on CIFS share |