1,415,049,504,000
Why is umask setting a different permission for a directory and file I have created

Consider:

    [user@server1 ~]$ umask 0770
    [user@server1 ~]$ mkdir TEST2; touch TEST2.txt;
    [user@server1 ~]$ ls -l
    d------rwx 2 user group_name 4096 Mar  5 05:16 TEST2
    -------rw- 1 user group_name    0 Mar  5 05:16 TEST2.txt

Now shouldn't the file TEST2.txt have the permission 007, as umask is set to 0770?
umask doesn't enforce rights, it forbids them. Have a look at strace:

file:

    open("newfile", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3

directory:

    mkdir("newdir", 0777) = 0

touch doesn't ask for execute rights for a file (which wouldn't make sense).
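The final mode is simply the mode requested by the creating call with the umask bits cleared. A minimal sketch of that arithmetic (shell arithmetic only, no files are created; 0666 and 0777 are the modes touch/mkdir request, as shown in the strace output above):

```shell
# Final mode = requested mode AND NOT umask
umask 0770
file_mode=$(printf '%04o' $(( 0666 & ~0770 & 0777 )))
dir_mode=$(printf '%04o' $(( 0777 & ~0770 & 0777 )))
echo "file=$file_mode dir=$dir_mode"   # file=0006 dir=0007
```

This reproduces exactly what ls showed: `-------rw-` (006) for the file and `d------rwx` (007) for the directory.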
Different permission between directory and file with 'umask'
I've installed hddtemp on my Arch Linux, but it needs to be run with root permissions. I want to execute it as a normal user without using sudo. How can I do this?
It's possible to assign users in a group permission to run an executable by using the /etc/sudoers mechanism. For instance, to permit all users in the users group to run hddtemp with root permissions, run visudo as root and add:

    %users ALL = (root) NOPASSWD: /path/to/hddtemp
Make a programme executable by common users
On Fedora 35, I created my own service to schedule a backup. I have the script in /usr/local/bin/ and the service file plus timer in /lib/systemd/system/.

    ls -an /usr/local/bin/

prints

    -rwxr--r--. 1 0 0 3360 Dec  1 18:31 backup.sh

and

    ls -an /lib/systemd/system/schedule-backup_root*
    -rw-r--r--. 1 0 0 230 Dec  1 18:14 /lib/systemd/system/schedule-backup_root.service
    -rw-r--r--. 1 0 0 388 Dec  1 16:49 /lib/systemd/system/schedule-backup_root.timer

But then when I start the service with systemctl start schedule-backup_root.service:

    Dec 01 18:36:13 fallen-robot systemd[1]: Started Nightly snapshot backup job for ROOT volume.
    Dec 01 18:36:13 fallen-robot systemd[75159]: schedule-backup_root.service: Failed to locate executable /usr/local/bin/backup.sh: Permission denied
    Dec 01 18:36:13 fallen-robot systemd[75159]: schedule-backup_root.service: Failed at step EXEC spawning /usr/local/bin/backup.sh: Permission denied
    Dec 01 18:36:13 fallen-robot systemd[1]: schedule-backup_root.service: Main process exited, code=exited, status=203/EXEC
    Dec 01 18:36:13 fallen-robot systemd[1]: schedule-backup_root.service: Failed with result 'exit-code'.

My service file looks like this:

    [Unit]
    Description=Nightly snapshot backup job for ROOT volume

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/backup.sh -s / -b /run/media/borko/BackupTest/ -t "Fallen Robot ROOT Backup Report"

    [Install]
    WantedBy=default.target

Why can't it access the file?
Disable SELinux or set it to permissive:

    setenforce 0

This disables it temporarily. Test again after disabling it, and see if this solves the issue.
systemctl cannot access service file, Permission denied
In the following example directory

    $ ls -l
    total 0
    -rw-r--r--  1 user testgroup  0 12 feb 12:00 file1
    -rw-rw-r--  1 user testgroup  0 12 feb 12:00 file2
    -rw--w-r--  1 user testgroup  0 12 feb 12:00 file3
    -rw-r--r--  1 user testgroup  0 12 feb 12:00 file4
    -rw-rwxr--  1 user testgroup  0 12 feb 12:00 file5

I would like to find all the files where the group permissions are exactly -w-, that is 2 (only write permission). I am using

    $ bash --version
    GNU bash, versione 4.4.23(1)-release (amd64-portbld-freebsd12.0)

on FreeBSD 12, so it is not GNU find. My attempt was

    $ find . -perm -g+w
    ./file2
    ./file3
    ./file5

but this returns all the files having at least group write permission; I would like instead to list the files whose group is only permitted to write. How do I accomplish this?
You can add more conditions to exclude files with other permission bits set:

    find . -perm -g+w ! -perm -g+r ! -perm -g+x

or (as proposed in steeldriver's comment)

    find . -perm -g+w ! -perm /g+rx

Example:

    $ ls -l file*
    -rw-r--r-- 1 bodo bodo 0 Feb 12 12:50 file1
    -rw-rw-r-- 1 bodo bodo 0 Feb 12 12:50 file2
    -rw--w-r-- 1 bodo bodo 0 Feb 12 12:50 file3
    -rw--wxr-- 1 bodo bodo 0 Feb 12 12:50 file4
    -rw-rwxr-- 1 bodo bodo 0 Feb 12 12:50 file5
    $ find . -perm -g+w ! -perm -g+r ! -perm -g+x
    ./file3
    $ find . -perm -g+w ! -perm /g+rx
    ./file3
Find all files with group write only permissions
I need to create a kind of "backup operator" account, which can read all the files on a system for copying to a backup system, without permission to modify any of them, including those that belong to root. The root account seems to be the only one capable of doing that, but then the root account is not prevented from running anything it wants. The other option I can think of is placing an account in a group that has read rights and making that account a member of every user's group. The basic rwx permissions in Linux don't seem created for that. Does Linux have something more sophisticated for such a purpose, such as something ACLs might offer?

The permissions are for a user who logs in from a remote backup server and backs up all the files to the remote server. If the backup server gets compromised, that account should not be able to log into the server being backed up and do some damage. Accounts on the backed-up server should likewise not be able to log into the backup server and do some damage if it gets compromised.
I suppose usually one would just run the backup utility as root, through cron or through a forced command on an SSH key, and then trust the utility not to do anything dangerous.

Using ACLs to give permissions on all files on the system would be a bit awkward, since you'd need to have the ACLs set for each and every file individually (POSIX ACLs don't really have a concept of giving access to a subtree; you just have default ACLs that automatically get copied to new files). And the owners of those files can just go and remove those permissions, accidentally or on purpose. Security-conscious programs (like SSH or GPG) might also get a bit angry if they notice your files are readable by someone else. (They don't even need to know about ACLs to do that, since the traditional permission bits mask the permissions granted by the ACLs, so any access granted by ACLs is evident in the traditional permission bits.)

However, there actually is a way. The Linux capabilities system contains a capability just for that:

    CAP_DAC_READ_SEARCH
        Bypass file read permission checks and directory read and execute
        permission checks; invoke open_by_handle_at(2); use the linkat(2)
        AT_EMPTY_PATH flag to create a link to a file referred to by a
        file descriptor.

(I'm not sure how that last one is related to the others, but I'll ignore it...)

If you have a particular utility you want to have that capability, you can give it to it with setcap:

    # setcap "CAP_DAC_READ_SEARCH+ep" /path/to/backupcmd

Though now, anyone who can run the binary /path/to/backupcmd will have that ability. So you probably want to protect that particular file from access by arbitrary users. For example, make it owned by root:backup, with permissions rwx--x---, where backup is the group of users who are supposed to be able to run it:

    # chown root:backup /path/to/backupcmd
    # chmod 710 /path/to/backupcmd
How to configure an account to have read permissions on all files on a system, including root's files?
When I want to vimdiff root files, I use the following alias, as per this suggestion.

    alias sudovimdiff='SUDO_EDITOR=vimdiff sudoedit'

I can then use the following command.

    $ sudovimdiff /root/a /root/b

However, if one of the files is writable by my user, the command fails.

    $ sudovimdiff /root/a /tmp/b
    sudoedit: /tmp/b: editing files in a writable directory is not permitted

Is there a way to vimdiff one root and one non-root file, using my user's environment settings (i.e. sudoedit)?
This may be useful for that sudoedit error message:

    sudoedit: ... editing files in a writable directory is not permitted

Try modifying the sudoers file: run sudo visudo and add the line

    Defaults !sudoedit_checkdir

More here.
Can I sudoedit a file in a writable directory when using vimdiff?
We have a directory and want to protect it from being removed or renamed, but we need to be able to rename, remove and create its contents. What can we do?
Permission to remove or rename a directory is determined by its parent's permissions, not its own (just like for other files). Just set the permissions on the directory to what you need and remove write permission from its parent (chmod a-w). Depending on your use case you may also want to make the directory sticky (chmod +t) - then users can't move around others' files, only their own.
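A minimal sketch of that arrangement, with hypothetical directory names (GNU stat is assumed for the mode check):

```shell
# "precious" itself is protected by its parent's permissions, while its
# contents stay fully writable by everyone
mkdir -p holder/precious
chmod 1777 holder/precious   # contents: anyone may create/delete; the
                             # sticky bit (1) keeps users off each
                             # other's files
chmod 555 holder             # parent: read+search but no write, so
                             # "precious" can't be renamed or removed
stat -c '%a %n' holder holder/precious   # 555 holder / 1777 holder/precious
```

Note that root (or a process with CAP_DAC_OVERRIDE) can still rename or remove the directory regardless.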
Permissions to change only directory content and not itself
I have a USB flash drive with an ext4 file system, and its files are owned by my user on my local machine - say, by myuser@myhost with 700 permissions. If I unplug my flash drive and plug it into another Linux machine, can users of that machine access the files on the flash drive? What if there is also a user named myuser there - can he access those files?
Filesystems designed for Unix, such as ext4, track the user via a number, the user ID; the user name is not recorded. You can see your own user ID with the command id -u. You can see the user ID that owns a file with ls -ln /path/to/file.

If you take an ext4 filesystem to a different machine, the files will still have the same permissions, and they will have the same user ID. This may or may not be the right user: in general, different machines don't have the same user IDs for the same users, unless this requirement was taken into account when creating the users or the machines pool from the same user database.

Permissions on a file only protect that file inside one system. Permissions on a removable drive have no effect for someone who pops the drive into their own computer.

If you want to exchange files via USB, FAT32 is usually the filesystem of choice; it's what most flash drives are formatted with when they're sold. If you need to store files with names or attributes that FAT32 doesn't support, create an archive (e.g. .tar.gz).
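A quick way to see the numeric tracking in action (a sketch; any scratch file name works):

```shell
# create a file and compare its stored owner to your own numeric UID
touch demo_file
id -u            # your numeric user ID
ls -ln demo_file # column 3 shows that same number; plain "ls -l" merely
                 # looks the name up in this machine's user database
```

On another machine the stored number stays the same, but the name it resolves to (if any) comes from that machine's user database.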
Permissions on an ext4 filesystem on a removable drive used in different machines
After editing /etc/group, how does this update take effect without restarting the system on Unix? Is there any command we need to run?
Any changes to /etc/group take effect immediately: the file is parsed whenever group membership is looked up. If you are modifying membership for a user who is already logged in, though, that user may need to log out and back in for the membership change to take effect.
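You can confirm that a change is visible immediately by querying the group database (a sketch; it assumes the root group exists, as it does on virtually every Linux system):

```shell
# getent consults /etc/group (via NSS) on every call, so an edit shows
# up on the very next lookup - no daemon restart involved
getent group root
```

To pick up a new membership in an existing session without relogging, `newgrp <group>` starts a new shell with the updated group list, or `sg <group> -c '<command>'` runs a single command under it.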
how to initialize `/etc/group`?
I've just installed Debian Sid on my computer but I cannot mount external USB drives as you can see in the image. Mounting failed Cannot mount "SENZATITOLO" Not authorized What should I do?
You are using a lightweight window manager. I've had this problem before in another such window manager, so it sounds like a problem with polkit to me.

Does your system have a directory called /etc/polkit-1/localauthority/? If so, create a file (as root). I use the nano text editor here; paste into the terminal with CTRL + SHIFT + V.

    su root
    nano /etc/polkit-1/localauthority/50-local.d/55-storage.pkla

Add the following lines:

    [Storage Permissions]
    Identity=unix-group:plugdev
    Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.drive-eject;org.freedesktop.udisks.drive-detach;org.freedesktop.udisks.luks-unlock;org.freedesktop.udisks.inhibit-polling;org.freedesktop.udisks.drive-set-spindown
    ResultAny=yes
    ResultActive=yes
    ResultInactive=no

Now add yourself to the plugdev group:

    usermod -a -G plugdev <your username>

Log out and log back in. If it is polkit, you should be able to mount media now.
How to mount USB stick on Debian Sid?
Background: I am working on a RHEL 5 cluster. I want my Fortran program to read the file /home/bob/inputs/input_1.

I asked Bob to give me permission to read all contents of inputs:

    [bob@server]$ chmod -R a+r /home/bob/inputs/*

I linked these to a shared directory:

    [david@server]$ ln -s /home/bob/inputs/ /home/share/inputs/

My (Fortran) program tried to read /home/share/inputs/input_1 and said:

    File /home/share/inputs/input_1 not found!

I tried to locate the file myself (in the process, Bob gave a+rwx permissions):

    [david@server]$ ls -ltrh /home/share/inputs/input_1
    lrwxrwxrwx 1 bob bob 33 Oct 25 15:42 /home/share/inputs/input_1 -> /home/bob/inputs/input_1

From this, I concluded that a) input_1 exists and b) all users have rwx permission. I tried to read it:

    [david@server]$ more /home/share/inputs/input_1
    /home/share/inputs/input_1: No such file or directory

and am told that it does not exist. I look for the target file /home/bob/inputs/input_1 but am denied permission:

    [david@server]$ ls -ltrh /home/bob/inputs/input_1
    ls: /home/bob/inputs/input_1 Permission denied

Something bizarre happens if I ls the directory contents:

    [david@server]$ ls -ltrh /home/bob/inputs/
    ?--------- ? ? ? ? ? input_1
    ?--------- ? ? ? ? ? input_2
    ?--------- ? ? ? ? ? input_3
    ... (n-4 lines omitted)
    ?--------- ? ? ? ? ? input_n

although if Bob does this, he gets:

    -rwxrwxrwx 1 bob bob 269 May 24 input_1
    ... (n-2 lines omitted)
    -rwxrwxrwx 1 bob bob 2.0K Jan 19 input_n

Questions: Is there a simple explanation for this apparently (to me) inconsistent behavior? Where do I go from here?
You need execute permission on /home/bob/inputs. You can set it with:

    chmod a+x /home/bob/inputs
Why is bash giving me (apparently) conflicting information about a file?
For one specific user I want to be able to restart Apache. This user does have sudo privileges, and I could run sudo /etc/init.d/apache2 reload, but I want to include this restart script in a git post-receive hook, so it would prompt for the password and fail. So the question is: what is the proper way to allow this user to restart Apache without requiring sudo? I want to restrict this to only restarting Apache, and only for this particular user.
You should consider using sudo with the NOPASSWD config. See man 5 sudoers. Example:

    Host_Alias LOCAL=192.168.0.1
    user_foobar LOCAL=NOPASSWD: /etc/init.d/apache2
How do I restart apache as non-root (using a git-hook)?
I am running Ubuntu 10.04 LTS. I want to use my laptop to play music at a party. My screensaver does not need a password to deactivate. I would like to allow people to use my computer to play the music they like, but I would like to prevent them from having access to certain directories, in a manner the same as (or similar to) how Linux prevents unauthorized people from installing programs through the Synaptic package manager. I would like this to work at the level of the command line and the file browser, but with the root password it should still be possible to get access.

Is this done by changing the permissions of the directory? If so, how - which command do I use from the terminal? Will that also prevent people from executing the files in the directory? Can I also block their searching of the directory and its contents?
The simplest way I can think of is simply creating a new user partyuser and giving it read permission on a 'public' music directory. To make the music directory (with its subdirectories and files) readable by others, run:

    # chmod -R o+r /path/to/music_dir

That way this user cannot list or access files in your own user's home directory by default. The only easy way to do that is by becoming root. If you wish to be able to become root as the partyuser, simply add the user to the /etc/sudoers file and use sudo. Also remember to add the user to the appropriate groups that enable use of audio and/or graphics etc. on the system.
How to make a folder request root password to view execute?
I have a development machine that runs on CentOS. Whenever I pull from git using git pull, I get a "permission denied" error: git apparently doesn't have permission to overwrite the files needed when I do a pull. Thus every time I have to sudo git pull to get it to work. I would rather not do a sudo git pull, because I'd like everyone to be able to pull from our development server.

How do I configure git to have the proper permissions to just be able to pull without sudoing? Is this because I may not have configured git properly? If so, how do I configure git to allow the correct permissions?

Not sure if this helps, but which git reveals: /usr/bin/git

Example error: if I execute

    git commit -m "my fun message"

I get:

    error: Unable to append to .git/logs/refs/heads/stage: Permission denied
    fatal: cannot update HEAD ref
Git itself doesn't have any permissions; it relies entirely on the operating-system-level permissions. If you're the only person using that git repo, then do this:

    cd dir_of_repo
    sudo chown -R ${LOGNAME} $(pwd)
    sudo chmod -R u+rwX $(pwd)

If you're sharing this with other people, then you probably need to read Understanding UNIX permissions and chmod.
Permissions issue with git
I'm running Wowza Media Server on my server as "root". The problem is that all files created (recorded) by Wowza are "root:root" and aren't writable, editable, or deletable by any other users. How can I make it so that Wowza records files that are writable by other users? I'd assume I'd use a group to facilitate this, but I'm not sure as to the recommended way to do this. Should I create a specific user to run Wowza as? How can I make this happen?
Wowza should really be run as a different user. I suggest creating a dedicated user and group for Wowza; any files created by Wowza will then be owned by its user and its primary group. To create them:

    groupadd wowza    # Create a group for Wowza
    useradd -c 'Wowza Media Server' -d /path/to/media -g wowza wowza

The above commands create a group called wowza and a user called wowza. If needed you can invoke su as a wrapper to run the server as that user:

    su -l -c 'umask 002; wowza-media-server' wowza

The above command, when run as root, invokes wowza-media-server as user wowza, so any files it creates will be owned by user wowza and group wowza. The umask 002 ensures that any files created by wowza-media-server will be group writable. Then you can add users to that group and they will be able to write to any files created by wowza-media-server.
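A minimal sketch of what the umask 002 part buys you (runnable as any user; the file name is arbitrary):

```shell
# with umask 002, newly created files keep the group-write bit:
# 0666 & ~0002 = 0664 (-rw-rw-r--)
umask 002
touch group_shared_file
stat -c '%a' group_shared_file   # 664: group members can write
```

With the default umask 022 the same touch would yield 644, and other members of the wowza group could read but not modify the recordings.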
How to configure permissions to allow file access?
I am setting the umask to a new value as below. However, although I am applying rwx for the user, this does not seem to be respected(?)

    /home/pkaramol $ umask u=rxw,g=rw,o=r
    /home/pkaramol $ umask -S
    u=rwx,g=rw,o=r
    /home/pkaramol $ rm -rf afile && touch afile
    /home/pkaramol $ ls -l afile
    -rw-rw-r-- 1 pkaramol pkaramol 0 Mar  2 10:30 afile

edit:

    $ mount | grep -E '\s/\s'
    /dev/sda3 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
    $ mount | grep -i home
    /home/pkaramol
touch creates files with permissions 666 (-rw-rw-rw-) by default (i.e. with umask 000). umask can only subtract permissions ("take permissions away") - in your case only o=w is affected. It cannot add any bits (such as u=x) to newly created files; you have to use chmod for that.
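A sketch that makes the subtraction visible: a symbolic umask argument names the permissions to keep, and the shell stores the complement; adding a bit afterwards is chmod's job (GNU stat assumed for the mode checks):

```shell
umask u=rwx,g=rw,o=r   # stored mask is the complement: 0013
umask                  # prints 0013
touch afile            # 0666 & ~0013 = 0664 -> -rw-rw-r--
chmod u+x afile        # adding execute must be done with chmod
stat -c '%a' afile     # 764
```

The u=x bit "missing" from the new file was never in the mask at all; it simply was not in touch's 0666 request to begin with.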
umask not being applied? [duplicate]
From https://unix.stackexchange.com/a/350629/674:

    For directories, the execute permission is a little different; it
    means you can do things to files within that directory (e.g. read
    or execute them).

cd into a directory needs execute permission, but does it do something to some file in the directory, and if so, how? The best I can think of is that cd does something to the file . under the directory - but why doesn't cd just deal with the directory itself, instead of any file under it, so as to avoid needing execute permission? Thanks.
On a directory, the execute permission is known as the search permission. It is required in order to access a directory, in a general sense: access files inside the directory, as in the quote above, but also access the directory itself. cd uses chdir, which is defined as requiring search permission on all components in a path it’s given (see EACCES there).
Why does cd need execution permission of a directory? [duplicate]
What are reasonable PDF file permissions? Lately I've found that, inside a directory with many PDF files, all files have all possible permission bits set. Being uncomfortable with that, I thought about changing them. However, I don't know what would be reasonable for PDF files, considering that I want to be able to open and, sometimes, change them (e.g. with a PDF "text maker"). Here is a screenshot of how I found them (file_1.pdf), and how I am considering changing them (file_2.pdf). I suppose the initial permission set was lost while I copied them from a backup HDD. Could you please also suggest an effective way of copying them around without losing their permission settings? Thank you.
No, a PDF file is not an executable binary or script and should never need to be executable. Assuming the documents live on a Unix filesystem, you may remove the executable bits using

    chmod a-x *.pdf

If some of your file systems are non-Unix file systems, the permissions on your files may be messed up like this regardless of how you copy the files around between them. On Unix file systems, I tend to use rsync -a (or rsync --archive) to copy files between hosts or local directories, to preserve permissions and timestamps.
Does a PDF file need execution permissions?
I'm getting a permission denied error on an index.php file of a site running on Nginx. The error is below:

    2018/01/19 05:50:01 [error] 9664#9664: *17 FastCGI sent in stderr: "PHP message: PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0 Unable to open primary script: /var/www/the-site/index.php (Permission denied)" while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: www.the-site.com, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm-the-site.sock:", host: "www.the-site.com"

Permissions on the file are:

    -rw-rw-r--. 1 root root 418 Aug  2 17:49 index.php

Changing the file permissions to 777 (temporarily) does not help:

    -rwxrwxrwx. 1 root root 418 Aug  2 17:49 index.php

However, if I move the file:

    mv index.php index-old.php

and replace it with a new index.php with the following content:

    <?php phpinfo(); ?>

then that works fine. The user and group are the same, and the permissions are now weaker:

    $ ls -l index*
    -rwxrwxrwx. 1 root root 418 Aug  2 17:49 index-old.php
    -rw-r--r--. 1 root root  20 Jan 19 05:56 index.php

Here's the result of ls -Z:

    $ ls -Z index*
    -rwxrwxrwx. root root unconfined_u:object_r:user_tmp_t:s0 index-old.php
    -rw-r--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 index.php
    -rw-r--r--. 1 root root 20 Jan 19 05:56 index.php
               ^ this one

The dot after the permission bits indicates a SELinux security context. If you're running a SELinux system, it would need to match so that nginx can read the file. You can use ls -Z to view the security context, and restorecon to restore the default security context (based on the file location, I think), or chcon to change it. Something like this:

    $ restorecon /var/www/the-site/index.php

or this, for the full directory:

    $ restorecon -r /var/www/the-site

(I can't test that anywhere now; check the syntax.) See, e.g., the Red Hat documentation on SELinux labels.
File with same owner, group and less strict permissions can't be opened by nginx
I have a directory with the following permissions:

    lighttpd  drwx------

which contains the following file:

    lighttpd.pid  -rw-r--r--

Unfortunately, when trying to run

    cat lighttpd/lighttpd.pid

with a user that isn't the owner, nor a member of the owning group, I get the message:

    cat: lighttpd/lighttpd.pid: Permission denied

How would I enable a user that isn't an owner, nor a group member, to access lighttpd.pid?
Grant execute/search (x) permission for 'others' on the lighttpd directory:

    $ chmod o+X lighttpd

The capital X file mode bit selector in chmod enables execute/search only if the file is a directory (or already has execute permission for some user). The execute/search bit, when set on a directory, allows the affected users to enter the directory and access files and directories inside it. In addition, they need read (r) permission on the files themselves (which, according to the question, is already set). Without read (r) permission on the directory, users are not able to obtain the contents of the directory, so they will need to know the name of the file they are going to access in advance.

If the underlying filesystem supports POSIX Access Control Lists, you can also grant the execute/search permission on the directory with setfacl for a specific user, without adjusting the owner or group assignment:

    $ setfacl -m u:user:x lighttpd

You can determine if the filesystem supports POSIX ACLs by verifying whether it's been mounted with the acl mount option, by running mount:

    $ mount | grep /dev/sdaX
    /dev/sdaX on /mountpoint type ext4 (rw,acl)

If acl is not present in the output of mount, it might still be one of the default options for that filesystem type. You can verify this with tune2fs:

    $ sudo tune2fs -l /dev/sdaX | grep acl
    Default mount options:    user_xattr acl

If acl is not enabled, or you for some reason do not want to grant all users the ability to enter the directory, you can follow Larkeith's advice and link the file you want the users to be able to access to another pathname in the filesystem.
How to Allow User Access to a Specific File in a Restricted Directory?
I am trying to understand the precedence and combination of the permission options set in fstab when mounting a disk with those that are associated with each file on disk, in the case of ext4 being the file system in use. More specifically:

- exec and the executable flag
- suid and the setuid flag
- dev
- defaults vs. nothing at all

For instance, does rw in fstab mean that the files will have read and write permissions when mounted? What will happen if they have only read associated with the file? Do the mount options affect the permissions of the mounted files as stored on disk? Or do they filter them out somehow, keeping only what is allowed in both? What happens to files newly created on the mounted disk?

There are many different articles out there about Linux permissions, but none of those I stumbled upon tackles this particular issue in its entirety. If someone has a link to such an article it would be very nice to share it!
Mount options don't affect the stored permission bits, but they affect the effective permissions. For example, it's possible to have a file with execute permissions (i.e. chmod a+x myfile has succeeded, ls -l shows the file having execute permissions, etc.), but if the filesystem is mounted with the noexec option, then attempting to execute the file results in a "permission denied" error. Similarly, the ro option causes any attempt to write to fail, the nodev option causes any attempt to access a device to fail (even though device nodes can be created), and the nosuid option causes any attempt to execute a file to ignore the setuid and setgid bits.

Another way to put it is that the algorithm to decide whether a file operation is allowed goes something like this:

1. If write permission is needed and the filesystem is mounted ro, deny immediately.
2. If execute permission is needed and the filesystem is mounted noexec, deny immediately.
3. If the file is a device and the filesystem is mounted nodev, deny immediately.
4. If the file's owner is the user of the process attempting access, allow or deny based on the user permission bits stored in the filesystem.
5. If the file's group is one of the groups of the process attempting access, allow or deny based on the group permission bits stored in the filesystem.
6. Otherwise, allow or deny based on the "other" permission bits stored in the filesystem.

(I simplified to show only the most important parts for our purposes here. Other considerations include access control lists, extended attributes such as immutable and append-only, and security modules such as SELinux and AppArmor. The ultimate complete and accurate - but not easy-to-read - reference would be the source code, e.g. the may_open function in the Linux kernel.)

And the setuid/setgid determination is not done (the setuid/setgid bits from the file metadata are not taken into account) if the filesystem is mounted nosuid.
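The mount-option checks above can be sketched as a toy shell function - a deliberately simplified model, not the kernel's actual logic; it only looks at the mount options and the three execute bits:

```shell
# may_exec <mount-opts-csv> <octal-mode>
# succeeds iff execution would pass the mount-option check AND at least
# one execute bit is set in the stored mode
may_exec() {
  case ",$1," in *,noexec,*) return 1 ;; esac   # mount option wins first
  [ $(( 0$2 & 0111 )) -ne 0 ]                   # then the stored x bits
}

may_exec "rw,relatime" 755 && echo "run allowed"   # run allowed
may_exec "rw,noexec"   755 || echo "blocked"       # blocked by noexec
may_exec "rw,relatime" 644 || echo "blocked"       # blocked: no x bit
```

Note how the second call is denied even though the stored bits say 755 - the option filters the effective permission without touching what is stored on disk.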
How fstab mount options work together with per file defined permissions in linux
I want to change files with KomodoEdit that need sudo permissions. I can't start KomodoEdit with sudo, though (for whatever reason). Can I somehow grant Komodo permission to edit those files (in particular I am talking about apache2 files and /etc/hosts)?
Use sudoedit <file>. It creates a local copy of the file, edits it with user rights, and copies it back to the original location. The advantage is that the editor runs as a regular user. To specify a different editor than the default one, you can set EDITOR temporarily:

    EDITOR=/usr/bin/someeditor sudoedit /etc/hosts

This requires the sudo package to be installed and the user to be added to the sudo group.
Change files with Editor that need sudo permissions
I use Arch Linux x64. I'm studying web development, and in order to edit files served by Apache under /srv/http (the directory Apache serves), I created a group containing my user and Apache's user, so I could edit the files without the need to move them between directories. The thing is, I can properly edit files within the directory with my user, but whenever I save them, the file's user and group revert to my user and group. For example:

    Me: user1:users
    Apache: http:http
    Directory ownership: http:development

I open the file /srv/http/index.html with my user, which looks like this (read and write permission for owner and group)...

    -rw-rw-r-- 1 http development 1034 Mar 20 20:48 index.html

and when I save it, the file ownership reverts to this:

    -rw-rw-r-- 1 user1 users 1034 Mar 20 20:48 index.html

I fail to understand what's happening, because if I type groups to see my user's active groups I get

    lp wheel network video audio storage users development

which indeed says I'm a member of development. I think it's something else. Could anybody tell me what's happening and how I can correct it at save time? I know it's not a big issue, but I want to correct it before it grows into a real problem.

PS - I use the Sublime editor, if that matters.
In UNIX, only root can change the owner of files. As a consequence, we can conclude that the owner of the file is not changing when you edit it. Instead what must be happening is that your editor is writing out the edited contents into a new file and replacing the old file with the new one. Because it is a brand new file, the file ends up being tagged with you are the owner. The are some advantages to updating files in this manner: It is atomic: readers always see the old version or the new version, never a partially written new version. It is easier to recover from errors. If an error such as disk full occurs, just delete the new temporary file (before renaming it on top of the old version) to roll back. If you were updating the file in place you might be left unable to complete and update and also unable to roll back. You can "update" a file that you do not have write access to (because you never actually write the old file). Any users that still have the file open can continue to use the old version as long as they need, so they are not disrupted. Useful for executable files! There are also disadvantages: You require write permission on the directory in which the file resides (or at least, somewhere else on the same filesystem), in order to create a new temporary file in it and then rename that temporary file. You cannot preserve the owner of the file and you may or may not be able to preserve its group. There is a long laundry list of other things that you might preserve by replicating them in the new temporary file before moving it in place, such as the permissions, the extended attributes, whether or not the file is a symlink to an actual file elsewhere, resource forks (MacOS), etc... Unless you are very careful and very exhaustive, it's hard not to miss one or more of those. So it's a compromise. 
Automated tasks such as background scripts, software installation, and the like, usually opt for replacing the old version with a brand new file, especially because of atomicity. Text editors and other human tasks usually opt for editing the file in place. I am unfamiliar with your editor, but it appears to be making the opposite choice from most other editors. You will have to see if you can configure it to stop doing that. By the way, it's actually much better if the files inside your document root are owned by you, not by the apache user. It provides better assurance that the web server (if compromised, for example) cannot edit the files. So you might consider ignoring this particular "problem" and considering it a good thing.
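The save-via-rename strategy described above can be sketched in a few lines of shell (illustrative only; the file names are made up, and this is not Sublime's actual implementation):

```shell
set -e
dir=$(mktemp -d)
printf 'old contents\n' > "$dir/index.html"

# 1. Write the complete new version to a temporary file in the same directory.
printf 'new contents\n' > "$dir/index.html.tmp"

# 2. rename(2) atomically replaces the old file: readers see either the old
#    version or the new one, never a half-written file. The result is a brand
#    new inode, owned by whoever performed the save.
mv -f "$dir/index.html.tmp" "$dir/index.html"

saved=$(cat "$dir/index.html")
echo "$saved"
rm -rf "$dir"
```

An editor that instead opened the existing file and wrote into it would keep the inode, and therefore the owner and group, intact.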
File owner changes after editing a group editable file
1,415,049,504,000
Trying to run an NZBGet (Python) script, I've tried running it manually with: /mnt/local/ext001/MEDIA/NZBGet/scripts/videosort/VideoSort.py but this results in: bash: /mnt/local/ext001/MEDIA/NZBGet/scripts/videosort/VideoSort.py: Permission denied I've tried running it with sudo and via su, and its permissions are currently 777, but I still get the same message. How can permission be denied? EDIT: It seems the partition is mounting with noexec, despite using the following: /mnt/local/ext001 ext4 auto,rw,exec,async,user,suid,noatime,nodiratime,relatime 0 2 Any idea why it is not accepting the exec option?
You're right that the order of the mount options is important here. From the man page: users Allow every user to mount and unmount the filesystem. This option implies the options noexec, nosuid, and nodev (unless overridden by subsequent options, as in the option line users,exec,dev,suid). The exec option is before the users option, not subsequent to it, so the users option overrides it and sets the volume to noexec.
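Following that rule, reordering the options so that exec comes after user should restore executability. A corrected version of the question's fstab line (fields copied from the question; a sketch, not a verified configuration):

```
/mnt/local/ext001  ext4  auto,rw,async,user,exec,suid,noatime,nodiratime,relatime  0  2
```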
Unable to run python script - Permission Denied
1,415,049,504,000
I have a folder in which anybody's PHP scripts need to be able to create subfolders/files and unlink files. I run sudo chown -R apache:apache /var/www/public_html/a but after that, my FTP user cannot upload files to that folder. If I instead run sudo chown -R yulichika:users /var/www/public_html/a then FTP can access the folder, but the PHP scripts get permission errors. I do not want to set the whole folder to 0777, so how can I give two users permission to operate on the same folder? Thanks.
You can use access control list (ACL) commands. First make apache the owner of the directory: sudo chown -R apache:apache /var/www/public_html/a Now set an ACL so that the FTP user can upload files. FOR USER sudo setfacl -R -m u:yulichika:rwx /var/www/public_html/a FOR GROUP sudo setfacl -R -m g:users:rwx /var/www/public_html/a Hope this will solve your problem.
centos folder permission ftp user and apache
1,415,049,504,000
I made a very simple program in C which writes "Test string" into file named "file.txt": root@3:~# cat test.c #include "unistd.h" #include "string.h" #include "stdio.h" main() { FILE *fp; int fd; fp = fopen("file.txt", "w"); fd = fileno(fp); write(fd, "Test string\n", strlen("Test string\n")); } root@3:~# I created a file named "file.txt" before I execute the test: root@3:~# ls -l file.txt -r-------- 1 root root 0 sept 21 22:28 file.txt root@3:~# As seen above, file.txt has only read permissions. However, if I execute the test, the "Test string" is written into "file.txt": root@3:~# strace ./test execve("./test", ["./test"], [/* 22 vars */]) = 0 brk(0) = 0x188d000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc03a44e000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=30251, ...}) = 0 mmap(NULL, 30251, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fc03a446000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\357\1\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1599504, ...}) = 0 mmap(NULL, 3713112, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc039ea6000 mprotect(0x7fc03a028000, 2093056, PROT_NONE) = 0 mmap(0x7fc03a227000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x181000) = 0x7fc03a227000 mmap(0x7fc03a22c000, 18520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fc03a22c000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc03a445000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc03a444000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 
0) = 0x7fc03a443000 arch_prctl(ARCH_SET_FS, 0x7fc03a444700) = 0 mprotect(0x7fc03a227000, 16384, PROT_READ) = 0 mprotect(0x7fc03a450000, 4096, PROT_READ) = 0 munmap(0x7fc03a446000, 30251) = 0 brk(0) = 0x188d000 brk(0x18ae000) = 0x18ae000 open("file.txt", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3 write(3, "Test string\n", 12) = 12 exit_group(12) = ? root@3:~# cat file.txt Test string root@3:~# How can this happen?
You are able to write to the file because you are the root user. Consider the following: > cat file.txt > ls -al file.txt -r-------- 1 sdanna staff 0 Sep 21 20:43 file.txt > ./a.out Segmentation fault: 11 > sudo ./a.out > cat file.txt Test string Here ./a.out is your presented program. As you can see, when I run the command as a normal user, I receive a segmentation fault as I try to operate on the null pointer returned by the failed fopen. If I run the command as root it works fine. The root user can always write to the file unless the file's extended attributes are changed to prevent modification. The path_resolution(7) man page on linux, sums up the situation nicely: On a traditional UNIX system, the superuser (root, user ID 0) is all- powerful, and bypasses all permissions restrictions when accessing files. On Linux, superuser privileges are divided into capabilities (see capabilities(7)). Two capabilities are relevant for file permissions checks: CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH. (A process has these capabilities if its fsuid is 0.) The CAP_DAC_OVERRIDE capability overrides all permission checking, but grants execute permission only when at least one of the file's three execute permission bits is set. The CAP_DAC_READ_SEARCH capability grants read and search permission on directories, and read permission on ordinary files. The root user on Linux has both of the required capabilities.
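The point about root is easy to see without C; this shell sketch attempts the same kind of write against a read-only file, and the outcome depends only on who runs it:

```shell
f=$(mktemp)
chmod 400 "$f"      # read-only, even for the file's owner

# For a normal user the open(2) behind the redirection fails with EACCES;
# a process with CAP_DAC_OVERRIDE (root) sails through.
if { echo "Test string" > "$f"; } 2>/dev/null; then
    result=written
else
    result=denied
fi
echo "uid=$(id -u): $result"
rm -f "$f"
```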
write() system call ignores file permissions
1,415,049,504,000
If a directory /foo/bar/baz has 700 (rwx------) for its permissions, thereby making it inaccessible to all but its owner (and the superuser), does it matter what the "group" and "other" permissions are for the directories and files under /foo/bar/baz? What motivated this question is that I noticed that the files /root/.bashrc /root/.profile in a freshly-installed Debian system have 644 (rw-r--r--) as their permissions. I would have expected 600 (rw-------) instead. But then I realized that /root itself has permission 700. This made me wonder whether a 700 permission of a directory would render moot the g- and o- permissions of anything under it. (If the answer is "no", i.e., if the g- and o- permissions of the contents still matter even when a component along the path has 700 permissions, then I'd like to know if there's any reason not to change the permissions of the files above to 600.)
Yes it does (could). If you create a file underneath /foo/bar/baz which is readable by others and then create a hard link to this file in an accessible path, they'll be able to read it regardless of the permissions on /foo/bar/baz.
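A sketch of the mechanics (all paths are made up; you would need a second user account to actually see the bypass, this only shows that the link is the same inode reachable through an unrestricted path):

```shell
top=$(mktemp -d)
cd "$top"
mkdir -p foo/bar/baz public
chmod 700 foo                      # only the owner can descend into foo
chmod 755 public                   # ... but public is open to everyone

echo secret > foo/bar/baz/data.txt
chmod 644 foo/bar/baz/data.txt     # world-readable file bits

# A hard link is just another name for the same inode; permissions live on
# the inode, while traversal is checked separately for each path.
ln foo/bar/baz/data.txt public/leak.txt

links=$(stat -c %h public/leak.txt)    # now 2 names for one inode
content=$(cat public/leak.txt)
echo "links=$links content=$content"
cd / && rm -rf "$top"
```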
Permissions under a 700 (rwx------) directory
1,415,049,504,000
I have a 4TB external hard drive connected to a Linux server. The fstab entry for this drive is set so that only one particular non-root user has access to it: /dev/disk/by-uuid/CEE0476DE0388DA9/ /mnt/USBexternal ntfs-3g defaults,auto,uid=51343,gid=50432,umask=077 0 0 From a remote location, this user has been successful at doing rsync backups to this external hard drive. However, the external drive doesn't stay mounted as reliably as an internal hard drive does. Every couple of days I'm having to log in as root and run this command: mount -a I would like to give this user the ability to mount this drive, but when the non-root user runs mount -a, it tells them they do not have permission to do this: nonrootuser@server:~$ mount -a mount: only root can do that When the non-root user tries to mount this drive specifically, it tells them it is already mounted (even though it isn't): nonrootuser@server:~$ mount /mnt/USBexternal/ mount: according to mtab, /dev/sdb1 is already mounted on /mnt/USBexternal As mentioned, the drive is not actually mounted, but (because of the output above) if the non-root user tries to unmount the drive, it says their request disagrees with fstab: nonrootuser@server:~$ umount /mnt/USBexternal/ umount: /mnt/USBexternal/ mount disagrees with the fstab How can I give this user the ability to mount this drive, without giving them any other administrative powers?
You can setup an entry in the /etc/sudoers file for this user to be able to use the mount command. Add something like the following to the end of the /etc/sudoers file: username ALL=NOPASSWD: /usr/bin/mount, /sbin/mount.ntfs-3g, /usr/bin/umount Be sure that the exact path to each executable is correct for your system. For example, your mount command might be in /bin instead of /usr/bin. Adding the mount.ntfs-3g part is important to provide that access for the user. I can see in your mount command that you are using a ntfs-3g filesystem type. You could, instead, create a shell script to handle the mounting/unmounting and place that in your sudoers file. For example: create /usr/local/bin/mount-ntfs-drive script: #!/bin/bash device_path="/dev/disk/by-uuid/CEE0476DE0388DA9/" mount_point="/mnt/USBexternal" if [ "$1" = "-u" ] ; then # do unmount /bin/umount $mount_point else # do mount /bin/mount $device_path $mount_point fi edit /etc/sudoers file: username ALL=NOPASSWD: /usr/local/bin/mount-ntfs-drive Be sure to do chmod +x /usr/local/bin/mount-ntfs-drive. Also, when your user runs the file, they will need to use the fully qualified path for it to work. It might work from their path but not sure. sudo /usr/local/bin/mount-ntfs-drive
Allow NonRoot User to Mount a Particular NTFS External Hard Drive
1,415,049,504,000
How do I grant read, write and execute to a specific group? What I did: adduser test addgroup developer setfacl -m g:developer:rwx /opt/spago41/ When I log in as test I can't run startup.sh in /opt/spago41/. Is the setfacl command not working?
I think you were missing the "recursive" parameter: setfacl -Rm g:developer:rwx /opt/spago41/
How do I allow rwx access to a specific group with ACLs?
1,415,049,504,000
When I write ls -la the output is : tusharmakkar08-Satellite-C660 tusharmakkar08 # ls -la total 88 drwxr-x---+ 10 root root 4096 Apr 18 19:43 . drwxr-xr-x 4 root root 4096 Mar 18 17:35 .. drwxr-xr-x 4 root root 32768 Jan 1 1970 CFB1-5DDA drwxrwxrwx 2 root root 4096 Feb 23 00:09 FA38015738011473 drwxrwxrwx 2 root root 4096 Apr 17 14:00 Local drwxrwxrwx 2 root root 4096 Mar 19 05:04 Local\040Disk1 drwxrwxrwx 2 root root 4096 Apr 18 19:43 Local\134x20Disk1 drwxrwxrwx 2 root root 4096 Feb 23 00:09 Local Disk drwxrwxrwx 1 root root 24576 Apr 19 15:15 Local\x20Disk1 drwxrwxrwx 2 root root 4096 Feb 23 00:08 PENDRIVE Now I want to change the permissions of CFB1-5DDA, but I am unable to do so. When I run chmod 777 CFB1-5DDA, the permissions remain unchanged. The output of sudo blkid -c /dev/null is tusharmakkar08-Satellite-C660 tusharmakkar08 # sudo blkid -c /dev/null /dev/sda2: UUID="FA38015738011473" TYPE="ntfs" /dev/sda3: LABEL="Local Disk" UUID="01CD72098BB21B70" TYPE="ntfs" /dev/sda4: UUID="2ca94bc3-eb3e-41cf-ad06-293cf89791f2" TYPE="ext4" /dev/sda5: UUID="CFB1-5DDA" TYPE="vfat" The output of cat /etc/fstab is tusharmakkar08-Satellite-C660 tusharmakkar08 # cat /etc/fstab # /etc/fstab: static file system information. 
# # #Entry for /dev/sda4 : UUID=2ca94bc3-eb3e-41cf-ad06-293cf89791f2 / ext4 defaults 01 #Entry for /dev/sda2 : UUID=FA38015738011473 /media/sda2 ntfs-3g defaults,locale=en_IN 0 0 #Entry for /dev/sda5 : UUID=CFB1-5DDA /media/tusharmakkar08/CFB1-5DDA vfat defaults 0 0 /dev/sda3 /media/tusharmakkar08/Local\134x20Disk1 fuseblk defaults,nosuid,nodev,allow_other,blksize=4096 0 0 /dev/sda3 /media/tusharmakkar08/Local\x20Disk1 ntfs-3g defaults,nosuid,nodev,locale=en_IN 0 0 #/dev/sda3 /media/tusharmakkar08/Local\134x20Disk1 ntfs defaults,nls=utf8,umask=0222,nosuid,nodev 0 0 And the output of mount is tusharmakkar08-Satellite-C660 tusharmakkar08 # mount /dev/sda4 on / type ext4 (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset) cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu) cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct) cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event) /dev/sda2 on /media/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /media/tusharmakkar08/CFB1-5DDA 
type vfat (rw) /dev/sda3 on /media/tusharmakkar08/Local\x20Disk1 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfsd-fuse on /run/user/tusharmakkar08/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=tusharmakkar08)
chmod 777 CFB1-5DDA fails because CFB1-5DDA is a mount point and the mounted file system is vfat. So you are trying to write metadata which the file system does not support (i.e. cannot store). Simple as that. strace chmod 777 CFB1-5DDA shows you the kernel error. In order to change the access rights you have to change the mount (-o remount or umount; mount).
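Because FAT cannot store per-file owner or mode bits, both are fixed for the whole mount through mount options instead of chmod. An fstab line along these lines would do it (the uid/gid values are illustrative; use your own from id -u and id -g):

```
UUID=CFB1-5DDA  /media/tusharmakkar08/CFB1-5DDA  vfat  uid=1000,gid=1000,fmask=0022,dmask=0022  0  0
```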
Unable to change permissions of file system root
1,415,049,504,000
I've just upgraded my development machine and have moved over a website I was working on. However, the permissions don't seem to have moved over properly. The dev machine is a Linux machine which runs Apache, where all the folders and sub-folders were set to 755 and all the files and files within all sub folders were set to 644. Instead of me having to run the commands: chmod 755 chmod 644 Is there a quicker way of doing this without having to do each and every file and folder individually?
for files: find . -type f -exec chmod 0644 {} + for dirs: find . -type d -exec chmod 0755 {} + (the explicit . makes the command portable, since running find without a starting path is a GNU extension, and {} + batches many files into each chmod invocation instead of forking once per file)
PHP file permissions for Development machine
1,415,049,504,000
This command: sudo chown -R root:root directory will remove the SUID bit and reset all capabilities for files. I wonder why it's done silently and it's not mentioned in the man page. Weirdly the SGID bit is not removed. And it doesn't matter who the file or directory belonged to prior to running this command. Also SUID/SGID bits are not removed for directories (though they are useless in this case). Presumably it's done in the name of security but to me it must not be done silently. This gets even worse: $ setcap cap_sys_rawio,cap_sys_nice=+ep test $ getcap -v test test cap_sys_rawio,cap_sys_nice=ep $ chown -c -v -R 0:0 . ownership of './test' retained as root:root ownership of '.' retained as root:root $ getcap -v test test The capabilities on the test file are removed completely silently. It's as if the command is doing a lot more than requested.
The permissions and capability sets aren’t cleared by the chown utility, they’re cleared by the chown system call (on Linux): When the owner or group of an executable file is changed by an unprivileged user, the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify whether this also should happen when root does the chown(); the Linux behavior depends on the kernel version, and since Linux 2.2.13, root is treated like other users. In case of a non-group-executable file (i.e., one for which the S_IXGRP bit is not set) the S_ISGID bit indicates mandatory locking, and is not cleared by a chown(). When the owner or group of an executable file is changed (by any user), all capability sets for the file are cleared. As alluded to above, this is partially specified by POSIX: Unless chown is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set-group-ID bits of other file types may be cleared. If it were to inform the user about this, the chown utility would have to explicitly check for further changes made to files’ metadata when it invokes the chown function. As far as the rationale is concerned, I suspect it’s to reduce the potential for gotchas for the system administrator — chown root:root on Linux can be considered as safe, even if a user prepared a setuid binary ahead of time. The GNU chown man page doesn’t mention this behaviour, but as is often the case with GNU software, the man page documents the utility only partially; its “SEE ALSO” section points to the system call documentation (which is admittedly overkill for most users) and the info page, which does describe this behaviour: The chown command sometimes clears the set-user-ID or set-group-ID permission bits. 
This behavior depends on the policy and functionality of the underlying chown system call, which may make system-dependent file mode modifications outside the control of the chown command. For example, the chown command might not affect those bits when invoked by a user with appropriate privileges, or when the bits signify some function other than executable permission (e.g., mandatory locking). When in doubt, check the underlying system behavior. (I’m limiting this to Linux based on your tags on the question; since Linux restricts owner changes to privileged processes, there are fewer security implications than on some other Unix-style systems. See explanation on chown(1) POSIX spec for details.)
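The clearing is easy to reproduce without root and without actually changing the owner, because on Linux any successful chown(2) on a non-directory strips the set-user-ID bit (a sketch against a throwaway file):

```shell
f=$(mktemp)
chmod 4755 "$f"                 # set-user-ID plus rwxr-xr-x
before=$(stat -c %A "$f")       # -rwsr-xr-x

# Even a chown to the file's current owner goes through chown(2),
# and the kernel drops S_ISUID on the way.
chown "$(id -u):$(id -g)" "$f"
after=$(stat -c %A "$f")        # -rwxr-xr-x

echo "$before -> $after"
rm -f "$f"
```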
Why does chown reset/remove the SUID bit and reset capabilities?
1,415,049,504,000
I'm looking at the result of running ls -l on /proc/<pid>/fd/: lr-x------ 1 root root 64 Apr 22 23:13 0 -> /dev/null lrwx------ 1 root root 64 Apr 22 23:13 1 -> 'socket:[19700]' lrwx------ 1 root root 64 Apr 22 23:13 2 -> 'socket:[19700]' ... What do the permissions mean on the symlinks? The first thing that occurs to me is that they represent the "mode" of the file descriptors. However, if that is indeed the case, why would stdout be readable? Furthermore, why would all of the descriptors be executable?
Linux' /proc filesystem presents objects which are actually not files as files: it's using a known API to present objects as files. For the file descriptors, no actual symlink exists. But a symlink is a convenient way to present information about the file descriptor. Such a symlink is created on-the-fly when needed (and might possibly stay cached in the VFS, so it will usually have the date of the first time it was displayed). Usually on Linux symlinks have all possible rights set, because it is their target that is validated instead. But here, the rights are presented to reflect the way the file descriptor was opened. They are presented as owned by the user (or sometimes root when for example the process is set as non-dumpable or non-ptracable) and limited to user access attributes even if the access attributes are not really checked (the ownership check is enforced, see below). Many details are documented in proc(5) at the /proc/[pid]/fd/ entry for example: /proc/[pid]/fd/ [...] For file descriptors for pipes and sockets, the entries will be symbolic links whose content is the file type with the inode. A readlink(2) call on this file returns a string in the format: type:[inode] [...] Permission to dereference or read (readlink(2)) the symbolic links in this directory is governed by a ptrace access mode PTRACE_MODE_READ_FSCREDS check; see ptrace(2). so one can only inspect processes belonging to the same user (or not even that, if a process is set as non-dumpable/non-ptracable, and other special caveats). What I didn't manage to find documented in proc(5) is that usually the access rights presented on the symlinks for proc/[pid]/fd/ reflect the way the file descriptor was opened. 
So opening read-only (ls -l /proc/self/fd/9 9</dev/null), write-only (ls -l /proc/self/fd/9 9>/dev/null) or read-write (ls -l /proc/self/fd/9 9<>/dev/null) will respectively be displayed with these access rights: lr-x------ l-wx------ lrwx------ Likewise, (non-named) pipes created with pipe(2) will have one FD in read mode and one in write mode. Sockets are bidirectional: there's no notion of "opening" them read-only or write-only, actually there's never an open(2) system-call for them. They will be seen as lrwx------ to reflect they can be read or written to.
What do the permissions mean in /proc/<pid>/fd/?
1,415,049,504,000
I have an issue reading man pages that live on an NFS mount. I managed to isolate it to the following minimal example. The man page file is on an NFS mount /data and its path is /data/program.1 I can print the file to the console with cat /data/program.1 so I definitely have read permissions (755 on the directories involved and 644 on the file, with nothing else like ACL or sticky bits etc.) However, man -l /data/program.1 does not work in general. But mysteriously enough, immediately after reading the file or its metadata (e.g. a successful ls /data/program.1) suddenly man -l /data/program.1 does work for a short while (~30 seconds), looks related to some cache. Although it still seems nondeterministic (after the ls, it mostly works but if I do it repeatedly, some attempts do not work, then works again back and forth) However, strangely enough, the whole problem only exists on some client machines, on other client machines of the same NFS server (with identical mount options) there is no issue whatsoever. When it "doesn't work" it outputs man: /data/program.1: Permission denied Using strace man -l /data/program.1 I see the following relevant line: stat("/data/program.1", 0x7ffe5ac9c9e0) = -1 EACCES (Permission denied) And if I just run man program (with the appropriate MANPATH), I see: access("/data/program.1", R_OK) = -1 EACCES (Permission denied) I therefore thought the access call cannot be done, but when I compile my own C program to call it, it works (prints 0): #include <unistd.h> #include <stdio.h> int main(){ printf("%d", access("/data/program.1", R_OK)); } What could be the issue here? I looked at the source code of man and perhaps it has something to do with this line (https://git.savannah.gnu.org/cgit/man-db.git/tree/src/man.c#n3746) drop_effective_privs()? Otherwise I cannot explain why everything has access to the file (cat, head, my own C program etc.), but man doesn't (except when another program has recently read the metadata). 
Ubuntu 18.04 is installed both on the clients and the server. The mount looks like this: x.x.x.x:/srv/nfs/data on /data type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=x.x.x.x,fsc,local_lock=none,addr=x.x.x.x)
Ubuntu 18.04 has AppArmor enabled by default and it blocks network access from man, including accessing files over NFS. To allow network/NFS access for man, add the following to /etc/apparmor.d/local/usr.bin.man: # TCP/UDP network access for NFS network inet stream, network inet6 stream, network inet dgram, network inet6 dgram, Then run systemctl reload apparmor. Alternatively, disable AppArmor, e.g. by setting the apparmor=0 kernel parameter.
Man cannot read manpage from NFS, although the file is readable
1,415,049,504,000
I would like to request information on using rsync. I tried reading the manuals, but the examples are few and confusing for me. I do not need advanced features or live sync or remote sources or remote destinations. Everything is with ext4. Just using my laptop's HDD and an external HDD over USB. On Ubuntu. My ultimate objective is to move the contents of my /home to an external drive. Wipe my laptop, switch it to LVM, re-install Ubuntu, update, install the same programs I had before, then boot a live USB and copy the contents of my backed up /home (now on my external HDD) onto the /home of the new installation (installed with the same username and UID as last time). I would like to keep all permissions and ownership the same. I tried copy-pasting everything onto the external drive, but I got error messages. I know that doing a copy-paste from the GUI on a live USB will change everything to root ownership (which would be double plus not good). I see all of these flags in the man page ... and all I understand is rsync /home/jonathan /media/jonathan/external-drive/home/jonathan from rsync /source/file/path /destination/file/path I already use this hard drive to back up most folders and big files like Movies, etc. Is there a way to copy-paste what I want, while saving permissions, and only adding the hitherto ignored .config files and only changing changed files? I would like to be able to do this manually about once a week to back up settings AND my personal files in case I ever need to reinstall in an emergency or my hard drive fails.
Here is a quick rsync setup and what it does. rsync -avz /home/jonathan /media/jonathan/external-drive/home/jonathan This will recursively copy the files, preserving attributes, permissions, ownership, etc., from /home/jonathan to the external folder. For safekeeping you could also do a tar to get everything together and then send one file over. tar zcvf /media/jonathan/external-drive/home/jonathan/jonathansFiles.tgz /home/jonathan then uncompress it later. tar zxvf jonathansFiles.tgz
Using rsync to back up /home folder with same permissions, recursive
1,415,049,504,000
The sysadmins at work want us to remove all world permissions from all files and directories within a specific filesystem. Normally, I would agree with such security practices. However, I don't understand the point in doing so because the parent directory only allows users within a specific group to access the filesystem. For example, the parent directory has permissions like: drwxr-x--- 5 root group 4096 Apr 7 15:27 /example And the subdirectories have permissions like: drwxrwxrwx 2 user group 4096 Apr 10 00:34 /example/dir1 drwxrwxrwx 2 user group 4096 Apr 8 12:52 /example/dir2 drwxrwx--- 2 user group 4096 Apr 12 08:13 /example/dir3 Note: /example/dir1 and /example/dir2 are what currently exists. /example/dir3 is what our sysadmins want the permissions to be. Is there any benefit to removing world permissions from /example/dir1 and /example/dir2 when only users in group can access this filesystem? What are the best practices?
Accountability: with permissions like dir3's, you can tell at a glance who has access to a file. In the other case, you have to walk back up the path until you find the restrictive permission. Safety: one day you may need a dir4 accessible by the web server, so your sysadmin has to open up example as well, but now he does not know what the correct permissions of dir1 and dir2 should be. In general, the best practice is to always use minimal permissions: that way, any additional permissions you do see document the additional requirements (a sort of lazy man's documentation; we all tend to be lazy about documentation). Note: some programs complain (and stop) if they find "world/all" permissions: they usually check just the immediate permissions (why do complex checks? and with volumes mounted multiple times, it could be difficult, and not portable, to detect possible attacks)
Is there a benefit to securing all directories under a parent directory?
1,415,049,504,000
Assuming there is a PHP website and I want to block an IP at the firewall level based on the site's code execution. The site is run under a non-root user. I was going to pass the IP from the site code to a script (writeable by root only) like #!/bin/bash function validate_ip() { ... code here ...} if validate_ip $1; then /usr/sbin/iptables -I INPUT -s $1 -j DROP echo 'blocked'; else echo 'bad IP $1'; fi using the suid bit. I want to add additional IP validation to avoid XSS and other bad things (consider it paranoia if you like), so I do not want to allow the site to call iptables directly. The script does not work: can't initialize iptables table 'filter': Permission denied (you must be root) because bash drops the suid bit. There is a workaround: allowing iptables in sudo, but I don't think it's secure. I have no time/possibility to develop/buy a binary which will do the task. One suggestion was a binary wrapper around the script, but I hesitate; perhaps there is a better way? So, the question is: how can I allow a non-root app to block an IP in the iptables firewall in a secure way?
Instead of making your bash script suid root, run your bash script through sudo. As a side benefit, this also lets you easily lock down who can run your script as root and also the arguments passed. You could, for example, only allow: phpuser ALL=(root) NOPASSWD: /usr/local/sbin/your-script [0-9][0-9][0-9].[0-9][0-9][0-9].[0-9][0-9][0-9].[0-9][0-9][0-9] then make sure your PHP script always formats each IP address octet as three digits. If it's too hard to have PHP call sudo (which it shouldn't be!) you can have the script do it itself, with something like: #!/bin/sh [ "$(id -u)" -eq 0 ] || exec sudo -- "$0" "$@" # rest of script here (I'm not entirely sure iptables will be happy with the leading 0s, if not you can strip them off). PS: Please quote your variables in your shell script: if validate_ip "$1"; then /usr/sbin/iptables -I INPUT -s "$1" -j DROP # ⋮
how to run setuid task properly?
1,504,963,396,000
The directory /path/to/dir is meant to be used by any member of the group examplegroup. All the members of this group should be able to modify the content of the directory without limitations (read/write files, create files, run executables). It is obvious that the group owner of the directory must be examplegroup. But what could be an appropriate choice for the user owner of the directory? root, one of the users of the examplegroup, nobody, or someone else?
Whoever is the owner will be able to change permissions of the directory, which would allow him to remove the group-write permission. So it should be a user who is trusted with that ability. It could be the group leader, for instance. Or it could be someone outside the group if none of them are so trusted. If you set it to root, then only system administrators will be able to change directory permissions. But if permissions never need to change, this shouldn't be a problem.
Owner of shared directory
1,504,963,396,000
I have a website being served by Nginx and I've recently set up Travis builds and deployments for it. Nginx is running the website as the www-data user. I've created a user deploy so that Travis can log in to the server through SSH and deploy the website. Deployed files are being stored with the deploy user as owner, which is different from the user that is running the website (www-data). I'm afraid of running into permission problems with this setup. Should I use the same user www-data/deploy to run and deploy the website? Using this approach, will I have problems by allowing the user running the website to log in remotely through SSH? Please enlighten me regarding this.
Actually, the files should not be owned by www-data because that means Nginx can modify them, which in most cases is not what you want (unless it is a CMS that needs to self-update). So all the files should be owned by deploy. If it is a CMS and you need to write in a few folders, then those very few (one?) folders should indeed be owned by www-data. That can cause a problem if the deployment does the very first installation as well. Either offer the user to run a manual step, or have a special tool do that job, but if it is a one-time thing, just do it manually; it's going to be easy enough (especially because you only have one folder like that, right?) The CMS can also tell you if there is such a problem and stop instead of serving pages. That way you know immediately and you can avoid having problems when you try to upload a file or some similar action. Of course, the files are not owned by www-data but they need to be readable by www-data. So either make them readable by others (-rw-r--r--) or look into setting the group to www-data (-rw-r-----). In most cases, I've seen people not even take the risk of using the group. They just let others access the files because it is safer that way. Of course, it also means that Nginx would have no access rights to the deploy user and group.
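A sketch of that layout (paths are scratch stand-ins, and the chown lines, which need root, are left as comments so the rest runs unprivileged):

```shell
# Sketch of the layout described above. Paths are examples.
site=$(mktemp -d)                          # stand-in for the deployed site root
mkdir -p "$site/public" "$site/uploads"
touch "$site/public/index.html"
# chown -R deploy:deploy "$site"           # code belongs to the deploy user
# chown www-data: "$site/uploads"          # only the writable area belongs to nginx
find "$site" -type d -exec chmod 755 {} +  # directories enterable by others (nginx)
find "$site" -type f -exec chmod 644 {} +  # files readable by others
chmod 750 "$site/uploads"                  # uploads: owner and group only
```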
Should I deploy with the same user who is running the website?
1,504,963,396,000
I use Ubuntu and someone advised me to change permissions on a file with sudo chmod " +x ". It is not clear to me whether +x means changing permissions to something very general such as 777 or 775, or something totally different. I've tried to Google +x unix and +x linux, but I couldn't find data on it in a quick search. Here are the steps I have done (thus I'm stuck at stage 4):

1. install wget
2. run wget http://files.drush.org/drush.phar
3. run sudo mv drash.phar /usr/local/bin/drush
4. run sudo chmod +x /usr/local/bin/drush
5. test drush
x is a symbolic representation of the execute mode (it can be given in numeric format as well) that the chmod command uses to set the execute permission of a file (in the case of directories it sets their searchable mode). + sets the execute mode and - unsets it. From the man page of chmod:

    The format of a symbolic mode is [ugoa...][[+-=][perms...]...], where perms
    is either zero or more letters from the set rwxXst, or a single letter from
    the set ugo. Multiple symbolic modes can be given, separated by commas.

See this link for more information: chmod
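A small demonstration that +x adds the execute bit to whatever mode the file already has, rather than setting a blanket mode like 777:

```shell
# +x is relative: it adds execute permission to the existing bits.
# A bare +x (no u/g/o prefix) acts like a+x, filtered through the umask.
umask 022         # pinned so the result is deterministic
f=$(mktemp)
chmod 644 "$f"    # start from rw-r--r--
chmod +x "$f"     # add execute for everyone the umask allows
stat -c %a "$f"   # prints: 755
```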
What does the argument " +x " mean in Unix (regarding permissions)? [duplicate]
1,504,963,396,000
I have yet another frustrating problem. I have a group of users belonging to the "testing" group. I have a folder located at /var/log/projects with the setgid bit set. This is so any new files or folders that get created in /projects will always retain the group ownership of "testing".

    [root@system log]# ll | grep projects
    drwxr-s---. 4 root testing 4096 Jun 10 19:36 projects

When I touch a file or create a folder in that directory they inherit the correct perms and ownership.

    [root@system log]# touch /var/log/projects/testfile
    [root@system log]# ll /var/log/projects/
    total 4
    -rw-r--r--. 1 root testing 0 Jun 10 19:49 testfile

And when I create a new folder it works as expected.

    [root@system projects]# mkdir folder1
    [root@system projects]# ll
    total 8
    drwxr-sr-x. 2 root testing 4096 Jun 10 19:52 folder1
    -rw-r--r--. 1 root testing 0 Jun 10 19:49 testfile

So far so good. However, I am using this folder for remote syslogs from other systems. When I start the rsyslogd service, any folders or files created by that process inherit the ownership of root:root.

    drwx--S---. 2 root root 4096 Jun 10 19:44 remotehost

I was under the impression that the purpose of the setgid bit was for my use case. Can anyone tell me what I am doing wrong, or how I can fix this so that any folders/files created by the rsyslogd process have the group ownership of "testing"? This is on a RHEL 6 server.
Since rsyslog ignores the setgid bit, I was able to correct the issue by using the following directives in my rsyslogd.conf custom template config:

    $template TmplAuth, "/var/log/projects/%FROMHOST-IP%/%PROGRAMNAME%.log"
    $template TmplMsg, "/var/log/projects/%FROMHOST-IP%/%PROGRAMNAME%.log"

    $umask 0000
    $DirCreateMode 0750
    $FileCreateMode 0640
    $FileGroup testing
    $DirGroup testing

    authpriv.* ?TmplAuth
    *.info,mail.none,authpriv.none,cron.none ?TmplMsg

NOTE: $DirCreateMode and $FileCreateMode will not work until you override the default umask with the $umask 0000 directive.
setgid sticky bit not working
1,504,963,396,000
As I understand it, a file has 3 sets of permissions: owner permissions, group permissions, and everybody else's permissions. Moreover, the file is assigned to an owner and a group. How does Linux combine all this information to actually determine the permissions a given user has over the file? For example, say a file is:

    --- rw- --x

That is, the owner has no permissions, the group can read/write, and everybody else can only execute. Now user "Joe" comes to this file. Depending on which groups Joe belongs to, and whether or not Joe is the owner of this file, what can he do with it? He could execute the file, because x is set for everybody. But if "Joe" is the owner, x is forbidden for the owner. What takes precedence?
Linux permissions are exclusive of each other. So, owner permissions apply only to owner, group permissions apply to everyone in group except owner, and others permissions apply to others i.e. not group and owner. Only one of these permissions will be used depending on the UID and GID of the process that tries to access the file. In your case, if Joe is the owner of the file, he can't do anything regardless of which group he is in. If Joe is not the owner, but belongs to the group, he can read and write, but not execute. If Joe is neither owner nor part of group, he can only execute.
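The exclusive-precedence rule can be modelled as a tiny shell function: choose exactly one permission triplet from the identity, then test the requested bit. This is a toy model for a single supplementary group, not how the kernel implements it:

```shell
# Toy model of the Unix access check: exactly one triplet ever applies.
# Usage: can_access MODE FILE_OWNER FILE_GROUP USER USER_GROUP PERM
#   MODE is three octal digits (e.g. 061 for "--- rw- --x"),
#   PERM is r, w or x. Returns 0 if access is allowed.
can_access() {
    mode=$1 fowner=$2 fgroup=$3 user=$4 ugroup=$5 perm=$6
    if [ "$user" = "$fowner" ]; then
        triplet=$(( (0$mode >> 6) & 7 ))   # owner bits win, even if they are 0
    elif [ "$ugroup" = "$fgroup" ]; then
        triplet=$(( (0$mode >> 3) & 7 ))   # otherwise group bits, if a group matches
    else
        triplet=$(( 0$mode & 7 ))          # otherwise the "others" bits
    fi
    case $perm in r) bit=4 ;; w) bit=2 ;; x) bit=1 ;; esac
    [ $(( triplet & bit )) -ne 0 ]
}

# The example file: mode 061 (--- rw- --x), owned by alice:staff
can_access 061 alice staff joe   staff w && echo yes || echo no   # prints: yes
can_access 061 alice staff alice staff w && echo yes || echo no   # prints: no
```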
Given the permissions, owner and group of a file, what's the algorithm that determines whether a given user can read/write/execute a file? [duplicate]
1,504,963,396,000
I've followed these instructions to use google-drive-ocamlfuse to mount Google Drive folders on a headless server, but I've encountered an issue: unless I run the command to mount my ~/drive folder as root (via sudo) it throws an error.

    (precise)lukes@localhost:~$ google-drive-ocamlfuse -label me ~/drive
    /fuse: failed to exec fusermount: No such file or directory

So I figured I'd require root privileges and ran sudo google-drive-ocamlfuse -label me /home/lukes/drive

    (precise)lukes@localhost:~$ sudo google-drive-ocamlfuse -label me /home/lukes/drive/
    [sudo] password for lukes:
    (precise)lukes@localhost:~$ ls -l
    ls: cannot access drive: Permission denied
    total 4
    drwx--x--- 3 lukes 1001 4096 May 24 17:00 Downloads
    d????????? ? ?     ?       ?            ? drive

Huh? That's a weird-looking output from ls, so I figured because I mounted it as root I need to run sudo ls -l

    (precise)lukes@localhost:~$ sudo ls -l
    total 8
    drwx--x--- 3 lukes 1001  4096 May 24 17:00 Downloads
    drwxrwxr-x 2 lukes lukes 4096 May 24 18:29 drive

So the drive folder is owned correctly. Not sure what I can do to fix the fact I can't cd into it. N.B. I can sudo su and then cd drive && ls no problems, but I can't edit any of the files that are in my Google Drive folder, which defeats the point of having mounted them in the first place.
When you mount a FUSE filesystem, by default, only the user doing the mounting can access it. You can override this by adding the allow_other mount option, but this is a security risk if the filesystem wasn't designed for it (and most filesystems accessed via FUSE aren't): what are the file permissions going to allow other users to do? Furthermore only root can use allow_other, unless explicitly authorized by root. Anyway, you should do the mounting as your ordinary user, not as root. FUSE is designed to be used as an ordinary user. Depending on your distribution and how your system is configured, you may need to be in the fuse group. Check the permissions on /dev/fuse: you can use FUSE iff you have read-write access to it. Anyway, the error you got doesn't indicate a permission problem. The command fusermount should be in /bin or /usr/bin, on every user's $PATH. If you don't have it, the most likely explanation is that you need to install it. For example, on Debian/Ubuntu/…, install the fuse package.
Mounting Google Drive with google-drive-ocamlfuse
1,504,963,396,000
The main reason I want this is my heavy use of dircolors, especially for ls --color=auto. For example, whenever a .mp3 file is copied from NTFS, it will have permissions set by umask 022, which ought to be the standard value in most modern distros. However, for audio files this makes no sense: because their permissions get set to 755 (rwxr-xr-x), they will get the same color as an executable shell script, while I'd really like to have this color reserved for true executables. This is not Windows; even with the x permission set for owner/group/other you cannot expect ./track1.mp3 to work in a terminal and make it pick a default console player. So I'd like to have a certain umask ONLY for audio files, i.e. that any files like .mp3, .wav, .ogg and so on would always get their mode set to 644, while leaving all other files copied to this place at their default umask of 022. Is there any way to accomplish this? (NOTE: cp --preserve will NOT preserve original permissions on NTFS either, since NTFS is notoriously ignorant of *nix permission semantics.)
I would use the install tool to copy from NTFS. install -m644 file1 ... fileN destination_directory
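For a whole tree of audio files, install can be driven by find; the source and destination here are scratch directories standing in for the NTFS mount and the music library:

```shell
# Copy every .mp3 from an example source tree, forcing mode 644
# regardless of what the NTFS driver reported. Paths are stand-ins
# for /mnt/ntfs and the destination music directory.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/track1.mp3"
chmod 755 "$src/track1.mp3"    # the bogus "executable" bit from NTFS
find "$src" -name '*.mp3' -exec install -m 644 -t "$dst" {} +
stat -c %a "$dst/track1.mp3"   # prints: 644
```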
Setting correct permissions automatically for certain file type when file is copied from non-Linux file system
1,504,963,396,000
normally I can only do 3 levels: owner, group, others But I want 6 for group1, 5 for group2, 0 for all other groups how can I do this?
Traditional Unix permissions are limited to owner, group and other. But most modern unices support access control lists. On modern Linux systems, ACL support is enabled by default in the filesystem, but you may need to install the ACL utilities getfacl and setfacl (e.g. on Debian/Ubuntu/Mint you may need to install the acl package).

    chmod u=rwx,go= somefile
    setfacl -m g:group1:rw -m g:group2:rx somefile
how to set more than 3 level of access permissions for a single file/folder?
1,504,963,396,000
I have a VirtualBox VM with Arch Linux running on my Windows PC (which I unfortunately have to use for work). I use this to work on my Windows PC with a Linux environment as an alternative to Cygwin. I have set up a VirtualBox shared folder which shares my C:\ drive with my Linux VM, but I seem to be unable to change the file permissions within any of the folders. This is a problem, as now git thinks all of my file permissions have changed.

    » ll README.txt
    -rwxrwx--- 1 root vboxsf 4.5K Oct 28 10:42 README.txt
    » chmod 644 README.txt
    » ll README.txt
    -rwxrwx--- 1 root vboxsf 4.5K Oct 28 10:42 README.txt
    » sudo chmod 644 README.txt
    » ll README.txt
    -rwxrwx--- 1 root vboxsf 4.5K Oct 28 10:42 README.txt
    » git diff README.txt | cat
    diff --git a/README.txt b/README.txt
    old mode 100644
    new mode 100755

How do I fix this? The folder was mounted using Automount from the VirtualBox Manager on Windows.
This will not work, as it is unlikely that your host's mapped-in filesystem (i.e. the Windows C: drive, so most likely NTFS) supports the full range of permission bits that Linux git expects. In a similar situation I have exported a Linux directory via Samba and used that from Windows and Linux without problems. This however has the disadvantage that you cannot access the data when the VM is not running.
In a Virtualbox VM how do I set the filesystem permissions?
1,504,963,396,000
I have two computers with the same user, me@Home and me@Work. I usually kept their folders synced by bringing Work home (it's a laptop) and rsyncing over LAN. However, now I'm not bringing Work home anymore, so I started using the uni's ssh server to keep my computers synced. me@Work -> my_name@Uni, me@Home <- my_name@Uni. However, when rsyncing from Work to Uni, using -avuz, which should preserve ownership, file ownership is lost. I made some tests and the issue seems to be the unmatched "me" user at Uni. Not only that, directories owned by www-data also didn't have their ownership preserved (since there isn't such a user at Uni either), which, one can imagine, caused me some trouble. I don't have root access at Uni, nor can my username be changed. Is there any way I can make this work without setting up an ssh server myself or starting to bring Work home again?
rsync can't preserve ownership if it's being run by a non-root user on the destination system, because only the superuser is allowed to create files that are owned by someone else. Instead of using rsync create a tar file on the intermediate system. Then when you restore it on the ultimate target system, you can do so as root in order to give the original ownership to the files.
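A sketch of that workflow, with the ssh legs shown as comments (hostnames are placeholders) and a local demonstration that tar -p round-trips permission bits; restoring ownership additionally requires extracting as root:

```shell
# On Work, pack and push via the Uni account (hosts are placeholders):
#   tar -C "$HOME" -czf - . | ssh my_name@uni 'cat > home.tar.gz'
# On Home, extract as root so --same-owner can restore ownership:
#   ssh my_name@uni cat home.tar.gz | tar -C /home/me -xpzf - --same-owner
# Local demonstration that tar round-trips permission bits with -p:
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/secret"
chmod 640 "$src/secret"
tar -C "$src" -cf - . | tar -C "$dst" -xpf -
stat -c %a "$dst/secret"    # prints: 640
```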
Preserve ownership with rsync and nonexistent user
1,504,963,396,000
Does setting u+s on a directory imply u+x?
No, there's a difference ;-)

    # ls -l x
    -rw-r--r-- 1 root root 0 Jan 27 20:07 x
    # chmod u+s x
    # ls -l x
    -rwSr--r-- 1 root root 0 Jan 27 20:07 x
    # chmod u+x x
    # ls -l x
    -rwsr--r-- 1 root root 0 Jan 27 20:07 x

See, e.g. http://www.linuxnix.com/2011/12/suid-set-suid-linuxunix.html You see the difference more clearly:

    Capital S: chmod 4655 (no execute)
    small s:   chmod 4755 (execute set)

As for when you would need a capital S? Good question...
Setuid directory permission implies execute permission?
1,504,963,396,000
I installed Apache via yum on CentOS 6.4. I changed the DocumentRoot in /etc/httpd/conf/httpd.conf to point to /home/djc/www:

    DocumentRoot "/home/djc/www"
    <Directory "/home/djc/www">
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

FS permissions:

    djc@vm ~ $ ls -l
    drwxrwxr-x. 3 djc djc 4096 Jan 14 11:17 www

No SELinux:

    djc@vm ~ $ sestatus
    SELinux status:    disabled

What am I missing?
For the new docroot to be accessible by Apache, the Apache user must be able to access all directories in the path leading up to /home/djc/www. So even though /home/djc/www is accessible to everyone, /home/djc must be executable by the Apache user. So, for example, if you have:

    $ ls -ld ~
    drwx------ 1 djc djc 0 Jan 13 15:16 /home/djc

you can make it accessible like this and it should be enough:

    $ chmod o+x ~
    $ ls -ld ~
    drwx-----x 1 djc djc 0 Jan 13 15:16 /home/djc
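A quick way to find which component blocks Apache is to walk the path and test the search (execute) bit at each step; util-linux users get the same report from namei -m /home/djc/www. A small sketch:

```shell
# Walk each directory component of a path and report whether the current
# user can search (enter) it. Diagnostic sketch only; it checks the bit
# for the invoking user, so run it as (or reason about) the Apache user.
check_path() {
    oldifs=$IFS
    IFS=/
    set -- $1              # split the path on slashes
    IFS=$oldifs
    p=
    for comp in "$@"; do
        [ -n "$comp" ] || continue
        p="$p/$comp"
        if [ -x "$p" ]; then
            printf 'ok %s\n' "$p"
        else
            printf 'DENIED %s\n' "$p"
        fi
    done
}

check_path /tmp    # prints: ok /tmp
```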
Apache throws 403 Forbidden after moving DocumentRoot (CentOS 6.4)
1,504,963,396,000
I set some additional locations in the PATH environment variable in my ~/.bashrc so that these are included/sourced in logins and non-interactive scripts that are scheduled with cron. I've noticed though that on one system the PATH is modified correctly, but none of the scripts within will run despite ownership and permissions being set correctly (as far as I can tell).

    $ ls -l
    total 756
    -rw-r-xr-x 1 slackline slackline    300 Sep  6 07:35 backup
    -rwxr-xr-x 1 slackline slackline    978 Dec 30 10:28 bbc_mpd
    -rwxr-xr-x 1 slackline slackline 355483 Nov 29 07:31 get_iplayer
    -rwxr-xr-x 1 slackline slackline    110 Sep  6 07:35 rsync.albums
    -rwxr-xr-x 1 slackline slackline    114 Sep  6 07:35 rsync.climbing
    -rwxr-xr-x 1 slackline slackline    108 Sep  6 07:35 rsync.films
    -rwxr-xr-x 1 slackline slackline    125 Sep  6 07:35 rsync.mixes
    -rwxr-xr-x 1 slackline slackline    117 Sep  6 07:35 rsync.pics
    -rwxr-xr-x 1 slackline slackline    117 Sep  6 07:35 rsync.torrents
    -rwxr-xr-x 1 slackline slackline     95 Sep  6 07:35 rsync.work

The contents of one of my scripts, which synchronizes a directory to my NAS to back it up:

    $ cat ~/bin/rsync.work
    #!/bin/bash
    source ~/.keychain/$HOSTNAME-sh
    /usr/bin/rsync -avz /mnt/work/* readynas:~/work/.

which fails to run when called:

    $ rsync.work
    bash: /home/slackline/bin/rsync.work: Permission denied

but works when preceded with bash -x:

    $ bash -x /home/slackline/bin/rsync.work
    + source /home/slackline/.keychain/kimura-sh
    ++ SSH_AUTH_SOCK=/tmp/ssh-P3GL1A3Juwhe/agent.4209
    ++ export SSH_AUTH_SOCK
    ++ SSH_AGENT_PID=4210
    ++ export SSH_AGENT_PID
    + /usr/bin/rsync -avz /mnt/work/android /mnt/work/arch /mnt/work/classes /mnt/work/doc /mnt/work/linux /mnt/work/lost+found /mnt/work/nc151.tar /mnt/work/nc152now-11.rar /mnt/work/personal /mnt/work/ref /mnt/work/scharr 'readynas:~/work/.'
    sending incremental file list
    sent 1,176,907 bytes  received 19,786 bytes  30,296.03 bytes/sec
    total size is 27,852,538,230  speedup is 23,274.59

    $ set -x ; ~/bin/rsync.work ; set +x
    + /home/slackline/bin/rsync.work
    bash: /home/slackline/bin/rsync.work: Permission denied
    + set +x

    $ set -x ; bash -x ~/bin/rsync.work ; set +x
    + bash -x /home/slackline/bin/rsync.work
    + source /home/slackline/.keychain/kimura-sh
    ++ SSH_AUTH_SOCK=/tmp/ssh-P3GL1A3Juwhe/agent.4209
    ++ export SSH_AUTH_SOCK
    ++ SSH_AGENT_PID=4210
    ++ export SSH_AGENT_PID
    + /usr/bin/rsync -avz /mnt/work/android /mnt/work/arch /mnt/work/classes /mnt/work/doc /mnt/work/linux /mnt/work/lost+found /mnt/work/nc151.tar /mnt/work/nc152now-11.rar /mnt/work/personal /mnt/work/ref /mnt/work/scharr 'readynas:~/work/.'
    sending incremental file list
    sent 1,174,755 bytes  received 19,786 bytes  39,165.28 bytes/sec
    total size is 27,852,538,230  speedup is 23,316.52
    + set +x

My ~/.bashrc has the following line in it:

    $ grep PATH ~/.bashrc
    # Additions to system PATH
    PATH="/home/slackline/bin:$PATH:/usr/local/stata/:/usr/local/stattransfer/"
    export PATH

And I can run the rsync command at the command line myself (so it's not a case of permission being denied on the SSH connection):

    $ /usr/bin/rsync -avz /mnt/work/* readynas:~/work/.
    sending incremental file list
    sent 1,176,723 bytes  received 19,786 bytes  32,781.07 bytes/sec
    total size is 27,852,538,230  speedup is 23,278.17

(The backup is obviously up to date.) The version of Bash installed is:

    $ eix -Ic bash
    [I] app-admin/eselect-bashcomp (1.3.6@08/29/13): Manage contributed bash-completion scripts
    [I] app-shells/bash (4.2_p45@08/16/13): The standard GNU Bourne again shell
    [I] app-shells/bash-completion (2.1@08/28/13): Programmable Completion for bash
    [I] app-shells/gentoo-bashcomp (20121024@08/28/13): Gentoo-specific bash command-line completions (emerge, ebuild, equery, repoman, layman, etc)
    Found 4 matches.

The permissions on the directory (and its structure) are:

    $ ls -l ~/ | grep bin
    drwxr-xr-x 2 slackline slackline 4096 Dec 30 10:29 bin
    $ stat -c"%n (%U) %a" / /home /home/slackline /home/slackline/bin
    / (root) 755
    /home (root) 755
    /home/slackline (slackline) 755
    /home/slackline/bin (slackline) 755

And an strace shows:

    $ strace rsync.work
    strace: Can't stat 'rsync.work': No such file or directory
    $ echo $PATH
    /home/slackline/bin:~/bin:/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.8.2:/usr/games/bin:/usr/local/stata/:/usr/local/stattransfer/:/usr/local/stata/:/usr/local/stattransfer/
    $ ls -l ~/bin/ | grep work
    -rwxr-xr-x 1 slackline slackline 95 Sep  6 07:35 rsync.work
    $ rsync.work
    bash: /home/slackline/bin/rsync.work: Permission denied

I can't work out what's going wrong here and would be grateful for any thoughts/ideas on how to troubleshoot this.

EDIT: Tidied up the various edits made in response to questions to hopefully read a bit more coherently and make it easier to follow what I'd tried and how it fits in with Mark Plotnick's solution.
You mentioned in the comments that your home directory's filesystem is mounted with the users mount option:

    $ grep home /etc/fstab
    LABEL=home /home ext4 noatime,users 0 4

The users mount option implies noexec. From mount(8):

    users  Allow every user to mount and unmount the filesystem. This option
           implies the options noexec, nosuid, and nodev (unless overridden
           by subsequent options, as in the option line users,exec,dev,suid).
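If user mounting should stay allowed but scripts under /home need to run, the implied flag can be overridden later on the same option line, since later options win. A hypothetical edit of the fstab line above:

```
# /etc/fstab: 'exec' re-enables execution; nosuid/nodev stay implied by 'users'
LABEL=home   /home   ext4   noatime,users,exec   0 4
```

followed by mount -o remount /home to apply it without rebooting.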
permission denied on scripts in ~/bin
1,504,963,396,000
Possible Duplicate: Can't rename a directory that I own

I am trying to understand why, when a dir X is owned by user A, A cannot rename it when the parent dir of X is owned by user B. Can anyone please explain?

    $ ls -l ~
    drwxr-xr-x 11 root root 4096 Jan 31 09:43 mymedia
    ~/mymedia$ ls -l
    drwxr-xr-x 6 rag rag 4096 Jan 31 08:34 Entertainment
    ~/mymedia$ mv Entertainment/ entertainment
    mv: cannot move `Entertainment/' to `entertainment': Permission denied
When you rename a file, you don't change the file, you change its parent directory. A file name is an entry in a directory. Think of phone directories, to change the name associated with a phone number in a directory, you need to modify the directory, not the phone line. The name is associated with the phone line only in that directory. That phone number may be in another directory under a different name (hard links). There's a caveat though for renaming directories as directories contain a reference to their parent (their .. entry). To be able to move a directory, it's not enough to have write permission to the old parent (to remove the entry) and the new parent (to add a new entry), you also need to have write permission to the directory itself to update the .. entry (if the old and new parent are different).
why cannot rename subdir when parent dir owner is not the same user [duplicate]
1,504,963,396,000
We have created an asset management web app using PHP. This app allows users to browse, upload, rename, and replace assets (binary files like images and 3D models). We developed in a Windows environment and everything works fine, but when we hosted it on a Linux server (Joyent) we are running into permission issues in some scenarios. Below is the environment setup. Our public web root folder is located at /home/jill/web/public (webroot). All assets are located under /home/jill/web/public/assets (assets root) and in sub folders as well. Below are the use cases for the way we handle asset management:

1. Bulk upload all assets using an FTP program with the FTP user (ftpuser)
2. Bulk upload all assets using the webadmin user (jill), who is the owner of /home/jill/web/public
3. Upload assets using the web app (default web user is www)

In all the above use cases, we also overwrite assets with the latest modified files. Now there is a Flash application (game) hosted in the webroot which accesses all those assets and loads them in the app. We get permission errors when we try to overwrite/update a file using a user id different from the one which originally created it. Example: try to overwrite a file using the web app which was initially uploaded using the FTP user, and vice versa. What is the way we can handle this scenario, so that any file or directory created under webroot/assets by any user can be modified by any user? I am a developer not familiar with Unix/Linux, but I think it is something to do with handling groups and group permissions; I am not sure how to set it up.
There are basically two things to this:

1) Permissions on the files and directory that already exist: Alan's answer mostly covers this: create a special group to which you add all users that might need to write the files. Make sure that the directory where you are uploading is itself writable for that group: chmod 0775 path/to/the/directory. Any existing files will need chmod 0664. The "magic" numbers are octal and stand for triplets: setuid, owner permissions, group permissions, world permissions. The setuid part is of no interest for you; keep it 0. For the others, each octal digit (0-7) tells you the permissions: if the 0th bit is on, the file/directory is executable (for a directory it means it can be entered); if the 1st bit is on, it is writable; and the 2nd bit manages readability. For example, 0754 would mean that the owner has all permissions, group members can read and execute, and the rest of the world may only read it. You can write the same with mnemonics like this: chmod u=rwx,g=rx,o=r. See man chmod on a Linux system for an in-depth explanation.

2) Permissions on newly created files: Look for the umask setting in whatever does the uploading. This says with what permissions new files are created. Again, see man umask on a Linux/UNIX system. The idea is that whatever bits you set in umask (the same notation as for chmod explained above) are excluded from the permissions on newly created files. For example, if you set your umask to 0023, your files will be created with all permissions for you, not writable for your default group, and neither writable nor executable by anybody else. It is usually a bad idea to set the 0th bit here, since on directory creation it makes the directory non-executable, which blocks entering the directory at all (for the set of users for which the bit is set; in the 0023 example, for "anybody else").

In addition to this, it might be worth assigning default ACLs to the directory if the underlying filesystem supports them. That would allow finer-grained access control (ACL stands for Access Control List, similar to the Windows one). See man setfacl for more information.

BIG FAT WARNING: It's not a good idea to make files world-writable! Keep the rights at the minimum that works.
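Both parts can be sketched together: mode bits on the existing directory, plus a umask so new uploads come out group-writable. The location here is a scratch stand-in for the real assets directory:

```shell
# Sketch combining both parts: a group-writable directory with setgid
# (so new entries keep the shared group), and a umask that keeps group
# write on newly created files.
umask 002                         # part 2: new files come out 664 (rw-rw-r--)
shared=$(mktemp -d)               # stand-in for webroot/assets
chmod 2775 "$shared"              # part 1: rwxrwsr-x on the directory
touch "$shared/upload.bin"        # what an uploading process would create
stat -c %a "$shared/upload.bin"   # prints: 664
```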
How to handle user permission issue?
1,504,963,396,000
I'm trying to deploy a Rails application into /home/app/myapp, but when the application tries to connect to MySQL, I get this error:

    ** [out :: 192.168.110.50] /home/app/myapp/shared/bundle/ruby/1.9.1/gems/mysql2-0.3.11/lib/mysql2/mysql2.so: failed to map segment from shared object: Operation not permitted - /home/app/myapp/shared/bundle/ruby/1.9.1/gems/mysql2-0.3.11/lib/mysql2/mysql2.so

The 'app' user has root privileges, so it makes no sense. After googling, I found that noexec on the home folder can block system calls. This is my fstab file:

    $ cat /etc/fstab
    #
    # /etc/fstab
    # Created by anaconda on Wed Oct 17 16:48:10 2012
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/VG00-LVbarra  /      ext4  defaults                1 1
    UUID=3d5ccda7-932f-4b48-a010-9ddcb99873c0  /boot  ext4  defaults  1 2
    /dev/mapper/VG00-LVhome   /home  ext4  defaults,noexec,nosuid  1 2
    /dev/mapper/VG00-LVtmp    /tmp   ext4  defaults,noexec,nosuid  1 2
    /dev/mapper/VG00-LVusr    /usr   ext4  defaults                1 2
    /dev/mapper/VG00-LVvar    /var   ext4  defaults,noexec,nosuid  1 2

How do I remove the noexec flag from the home folder? Thank you!
Looks like mprotect failed, but anyway, to remove the noexec flag, change

    /dev/mapper/VG00-LVhome /home ext4 defaults,noexec,nosuid

to

    /dev/mapper/VG00-LVhome /home ext4 defaults,nosuid

and remount /home with

    mount -o remount /home
Remove noexec from Home folder
1,504,963,396,000
Possible Duplicate: Redirecting stdout to a file you don't have write permission on

I'm trying to install Drupal according to the instructions given in this tutorial: http://how-to.linuxcareer.com/how-to-install-drupal-7-on-ubuntu-linux and am stuck on a step:

    $ cd /etc/apache2/sites-available
    $ sudo sed 's/www/www\/drupal/g' default > drupal
    bash: drupal: Permission denied

The permissions for /var/www/drupal are set to 777.
The tutorial does not use sudo and requires a root shell. You can get a root shell with sudo -i. In case you prefer sudo, the redirection is handled by the shell and not by the sudo command, so you can't create a file in /etc/apache2/sites-available by redirecting the output as you did. According to the sudo manual, you should use a subshell like:

    $ cd /etc/apache2/sites-available
    $ sudo sh -c "sed 's/www/www\/drupal/g' default > drupal"
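Another common pattern uses tee, which opens the target file itself, so only tee (run via sudo when the target is root-owned) needs write permission; shown here without sudo so it runs unprivileged:

```shell
# tee opens the output file itself, so only tee, not the calling shell,
# needs write access. With a root-owned target this becomes:
#   sed 's/www/www\/drupal/g' default | sudo tee drupal >/dev/null
out=$(mktemp)    # unprivileged stand-in for the target file
printf '%s\n' 'DocumentRoot /var/www' |
    sed 's/www/www\/drupal/g' | tee "$out" >/dev/null
cat "$out"       # prints: DocumentRoot /var/www/drupal
```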
permission denied when redirecting sudo sed output [duplicate]
1,504,963,396,000
Say two people with different primary Unix GIDs share and need to frequently edit the same file. The users are not members of each other's primary Unix GIDs, but they are both members of a common second group. The accepted answer (and other posted answers) to this question suggest setting the sticky bit on the file's parent directory so that such a file does not get the user's primary group ID whenever one of the users modifies the file (i.e. they claim it removes the need to call newgrp in every login session whenever one of the users wants to edit the shared file). However, I thought that something like this could only be done with the setgid bit. From Wikipedia:

    Setting the setgid permission on a directory (chmod g+s) causes new files
    and subdirectories created within it to inherit its group ID, rather than
    the primary group ID of the user who created the file (the owner ID is
    never affected, only the group ID).

Why would the sticky bit help with this?
You're right, it's the setgid bit that has this effect. The sticky bit has an effect on a directory too, but it's unrelated: it means that only the owner of a file can delete it, as opposed to anyone with write permission on the directory (think /tmp).
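The two bits side by side on a scratch directory, shown in octal where setgid contributes 2000 and sticky contributes 1000:

```shell
# Demonstrate the two distinct bits on the same directory.
d=$(mktemp -d)
chmod 755 "$d"                    # plain rwxr-xr-x baseline
chmod g+s "$d"; stat -c %a "$d"   # prints: 2755  (setgid: new entries keep the group)
chmod g-s "$d"
chmod +t "$d";  stat -c %a "$d"   # prints: 1755  (sticky: only owners may delete)
```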
Sticky bit vs setgid for facilitating shared write access
1,504,963,396,000
I have two Linux installations on my computer, with /home on a different partition shared between the two installs, and each install has a different username to avoid conflicts. The thing is, I'm a developer; I don't want to mix users, but I want to set up permissions for a shared folder. Example:

    Ubuntu, main user: raul, home folder: /home/raul
    Fedora, main user: ricardo, home folder: /home/ricardo

I want a /home/shared where raul & ricardo have permissions over this folder, maybe www-data and root, but not any other user on any Linux distro. I hope you get my problem.

EDIT: This seems to be more complex than expected. This note is the best I can explain, with my actual English level, so please be nice. On the same computer, I have distros A and B installed. Distros A and B are sharing /home in another partition but have different users... so I have /home/a for user A in distro A, /home/b and you know... So I'd like to have a folder, for example /home/shared, where users A and B can both read and write to the folder, like part of the same group, BUT user A doesn't exist on distro B and vice versa. Then how do I tell each distro to make me a group with a user from another distro?
Not sure exactly what your question is. Can you be more specific? Specifically, I'm having difficulty parsing

    I want a /home/shared where raul & ricardo have permission over this folder,
    maybe www-data and root, but any other user on any linux distro.

Do you want to know how to set up a shared folder/partition? If so, you could just set up a group in each installation with the same group id. Then perhaps use acl to make sure the group has rw permission on the partition. man addgroup says

    A GID will be chosen from the range specified for system GIDS in the
    configuration file (FIRST_GID, LAST_GID). To override that mechanism you
    can give the GID using the --gid option.

So you could do

    addgroup [options] [--gid ID] group

where group and ID are the same in both installations. For a tutorial about acl see Using ACLs with Fedora Core 2, and see my answer to a recent question about sharing a directory between two users. Obviously, you'll need to mount the partition with acl support on both installations. Once acl is set up, all files and directories in the folder will have group permissions rw, and so raul from one installation and ricardo from the other installation will both be able to read and write to that folder.

EDIT: In response to raul's comment below: If I understand your question correctly, and you are trying to share data between two www-data users on two installations, then this is a slightly different question than the one you seemed to be asking with raul and ricardo, because in this case the users would be the same. www-data would typically be created by a web server installation like apache, so creating them with matching ids would be difficult unless it was already the case (see below). I think there should be no problem in altering the uids/gids after the event to match, but I'm not 100% sure about that. Perhaps the experts here can advise. Note that Debian defaults to uid/gid=33 for www. It is possible it would not be the same for other Linux distributions. However, if your installations were both the same distribution, the ids would very likely match. Indeed, if this were the case, you could just use the www-data group as your group, and you would not have to do anything.
How can I setup group permission for different user on multiple Linux installations?
1,504,963,396,000
I am connecting to a remote server over SSH but am running into a permissions issue. My remote account is thelq and is primarily part of the group thelq. I'm also part of the generic group users. Another remote account called game is primarily part of the group users. On the remote server I can freely view and edit all of game's files. However, on the local server I'm apparently not part of the users group. Strangely, explicitly specifying the users group for the gid option still doesn't allow me access to the users group.

Example of what I tried on the command line:

[thelq@quackwall ~]$ sshfs -o idmap=user -o uid=1000 -o gid=1000 -o allow_other -o default_permissions quackgame:/home game-home
thelq@quackgame's password:
[thelq@quackwall ~]$ echo something >> game-home/game/INIT-SETUP
bash: game-game/game/INIT-SETUP: Permission denied

I'm confused about what else to do, as Linux permissions are not my strong point. I was thinking that logging in as me on the remote machine would allow me access to everything the groups say I can, but apparently not. Any suggestions?

As per @penguin359's request:

[thelq@quackwall ~]$ ls -ld game-home
drwxr-xr-x 2 thelq allusers 4096 Apr 5 19:06 game-home
[thelq@quackwall ~]$ sshfs quackgame:/home game-home
thelq@quackgame's password:
[thelq@quackwall ~]$ ls -ld game-home
drwxr-xr-x 1 root root 4096 Jan 5 18:20 game-home
[thelq@quackwall ~]$ ssh quackgame id
thelq@quackgame's password:
uid=1000(thelq) gid=1000(thelq) groups=1000(thelq),4(adm),20(dialout),24(cdrom),46(plugdev),100(users),107(sambashare),109(lpadmin),110(admin)
[thelq@quackwall ~]$ ssh quackgame ls -ld /home/game
thelq@quackgame's password:
drwxr-xr-x 11 game users 4096 2011-04-11 21:59 /home/game
According to your output from ssh quackgame ls -ld /home/game, /home/game is only writable by the file owner, game, and not by the users group. Try running chmod g+w /home/game on quackgame and see if it works.
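Whether the change took effect can be confirmed with stat; this is a local sketch using a scratch directory in place of /home/game:

```shell
tmp=$(mktemp -d)
chmod 755 "$tmp"       # same mode as /home/game above: drwxr-xr-x
chmod g+w "$tmp"       # add write permission for the group
stat -c '%a' "$tmp"    # prints 775 (drwxrwxr-x)
rm -rf "$tmp"
```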
Groups ssh user is a part of don't apply on SSHFS
1,504,963,396,000
I have files in my git directory that have permission 600. When I git-push from one computer and git-pull on another, the permissions change to 664. Is there a way to preserve the permissions (600) after git-pull? Thanks
As mentioned by @Kusalananda, git normally only tracks execute permissions. In order to save more permissions information, you would need to implement a pre-commit hook that would gather up the permissions info and store it separately, and another hook to restore permissions on pull. etckeeper is basically a collection of tools that does this for the purpose of placing your /etc directory under version control. You might want to adapt it to your purposes, or perhaps study what it does to do something similar yourself.
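A minimal sketch of what such hooks could do. The .permissions file name and the two helper functions are my own invention, not anything git provides: the save step would be called from a pre-commit hook (and .permissions committed), the restore step from a post-merge hook after a pull.

```shell
# Save: record the octal mode of every file (except .permissions itself)
# into a .permissions manifest, skipping the .git directory.
save_perms() {
    find . -path ./.git -prune -o -type f ! -name .permissions \
        -printf '%m %p\n' > .permissions
}

# Restore: re-apply the recorded modes from the manifest.
restore_perms() {
    while read -r mode path; do
        chmod "$mode" "$path"
    done < .permissions
}
```

A round trip then looks like: save_perms before committing; after a pull flattens a file to 664, restore_perms puts it back to 600.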
Permission changed from 600 to 664 after git push-pull
1,504,963,396,000
I have multiple folders that need to be accessed by multiple users. Specifically, Medusa (which I use to track which episodes of TV shows I do not have backed up yet), Plex (which I use to stream my digital backups to various devices around my home), and Media (which is the user login I ssh into the machine with). The issue I'm having is that no matter what I do, I cannot get all three users to have access to the folder at the same time. I have added all users to a group and set the file/folder permissions to allow full rwx permissions for user, group and everyone, but still cannot seem to get all three to work at the same time.

All of the files/folders are on a second hard drive (sdb1) rather than on the boot drive, in case that matters for some reason. The mount point for the hard drive is /media/media/storage2; however, I run all commands starting at /media/ because for some reason if I don't, the user is unable to access anything on the drive in the end.

I use the command sudo chown -R plex:server /media/ to change the owner:groupOwner of the files to plex when I want to watch something. server is the group I created containing all users. I use the command sudo find /media/ -type d -exec chmod 777 {} \; to change the permissions for all folders, and sudo find /media/ -type f -exec chmod 777 {} \; for all files. I do realize that 777 is not good for security; I originally was going to use 775, but until I figure this out I was hoping 777 would fix the issue, and it hasn't. I'm not sure what else I can try to make this happen.

Edit: Tried JHuffard's suggestion: ran sudo chown -R plex /media/, sudo chgrp -R server /media/ and sudo chmod -R 777 /media/, and everything is still locked down, despite permissions showing drwxrwxrwx for all relevant directories and files.
These ACL commands are for Linux only.

First, set all ownership and permissions to something standard.

chown -R root:root /media
find /media -type d -exec chmod 0755 {} +
find /media -type f -exec chmod 0644 {} +

Files

Next, decide how to use Access Control Lists (ACLs) appropriately. (You know the details about which users and/or groups require read or write access to which files or directories, but these were not specified in the question.) Some examples follow. Keep in mind that each example is setting an explicit ACL in order to get the ACLs defined correctly for files (not directories just yet). Later, ACLs and default ACLs can be applied to directories. Below, -m tells setfacl to modify (add or update) the ACL entries that follow it.

# Give medusa user (u) read-write; give group_name (g) read; give others (o) read.
find /media -type f -exec setfacl -m u:medusa:rw-,g:group_name:r--,o:r-- {} +

# Give plex user (u) read-write.
find /media -type f -exec setfacl -m u:plex:rw- {} +

# Give server group (g) read-write.
find /media -type f -exec setfacl -m g:server:rw- {} +

# Give media user (u) read-write.
find /media -type f -exec setfacl -m u:media:rw- {} +

# Give media user (u) read-write, server group (g) read-write, others (o) read.
find /media -type f -exec setfacl -m u:media:rw-,g:server:rw-,o:r-- {} +

Directories

Whichever ACLs were applied to the files can be applied to the directories as well, but a slight variance applies in that one can also set the default ACL (-d). By using the -d switch, all new filesystem objects in the directory inherit the defined ACLs automatically. It is important to remember that one must set both an ACL for the directory itself and a default ACL if automatically applying ACLs to new files. Also note that, below, execute (x in rwx) is required to change into directories (cd); but this does not mean that the execute bit applies to files. Rather, the execute bit applies to new directories only.

# For each directory itself:
find /media -type d -exec setfacl -m u:media:rwx,g:server:rwx,o:r-x {} +

# To set a default ACL in each directory - the same command as above with the -d switch:
find /media -type d -exec setfacl -d -m u:media:rwx,g:server:rwx,o:r-x {} +

Repeat the two commands above for each ACL, changing users and/or group according to objectives. This action stacks the ACLs so that one can add as many ACLs as desired and accomplish the automatic assignment of the ACLs for each new filesystem object.

One can use the "ugo" method (e.g. rwx) or octal (e.g. 7):

rwx  r--  rw-  r-x
 7    4    6    5

In other words, the following entry specifications are equivalent.

setfacl -m u:media:rwx,g:server:rwx,o:r-x
setfacl -m u:media:7,g:server:7,o:5

The group and other entries work the same way: g:groupname:--- or in combination as follows.

u:username:---,g:groupname:---,o::---

I have noticed that a single colon also seems to work for "other".

u:username:---,g:groupname:---,o:---

Not specifying a username or group name applies the entry to the current user/group ownership. Not knowing exactly what user or group requires what level of access, it's difficult to be more precise. One might need to analyze first, possibly starting the process deeper in the directory tree.

It might be helpful when first playing around with ACLs to know how to remove them all: setfacl -Rb /media. Also, one might use info and/or man to read the manual on setfacl, getfacl, and acl. There are also many questions and answers on ACLs. Just be sure to discern whether the ACL Q/A is for Linux because that's the OS in question. (ACLs are implemented differently according to major OS variants.)

The standard ownership and permissions that were set at the top of this answer will be augmented by the ACLs. Wherever an ACL exists, you'll notice that a + sign appears - something like the mockup below.
drwxr-xr-x  2 root root 4096 Jul 8 16:00 dir_without_acl
drwxr-xr-x+ 2 root root 4096 Jul 8 16:00 dir_with_acl

Services accessing these files may need to be restarted.
Linux File / Folder permissions
1,504,963,396,000
I wrote this testing code, and found that this program can always read the file successfully, even after I cancelled the read permission while it was paused in getchar().

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <sys/types.h>

int main(){
    int f = open("a.txt", O_RDONLY);
    uint8_t data[200];
    printf("Got %d from read", pread(f, (void *)data, 200, 0));
    getchar();
    printf("Got %d from read", pread(f, (void *)data, 200, 0));
}

This program printed Got 9 from read twice, even though I ran chmod a-r a.txt during the pause. I'm pretty sure that I'm just a normal user and my process doesn't have CAP_DAC_OVERRIDE; why doesn't the second pread() return any error?

My guess is that, when doing read/write, the kernel only checks file permissions on the open file description, which is created with open(), and these don't change even if I change the file permissions on the underlying filesystem. Is my guess correct?

Extra question: If I'm right about this, then what about mmapped regions? Does the kernel only check the permissions recorded in the page table when I read/write/execute that mmapped region? Is it true that the inode data stored in the filesystem is only used when creating an open file description or an mmap region?
Yes, permissions are only checked at open time and recorded. So you can't write to a file descriptor that you opened for read-only access, regardless of whether you are potentially able to write to the file. The kernel consults in-memory inodes rather than the ones stored in the filesystem. They differ in the reference count for open files, and mount points get the inode of the mounted file.

"If I'm right about this, then what about mmapped regions?"

Same. (The PROT_* flags passed to mmap() are equivalent to the O_RDWR / O_RDONLY / O_WRONLY flags passed to open().)

"Does the kernel only check permissions recorded in the page table when I read/write/execute that mmapped region?"

I'm not sure when else it could check permissions recorded in the page table :-). As far as I understand your question: yes.

"Is it true that inode data stored in the filesystem is only used when creating an open file description and mmap region?"

Inode permissions are also checked for metadata operations, e.g. mkdir() (and similarly open() with O_CREAT). And don't forget chdir(), which is different from any open() call. (Or at least, it is different from any open() call on current Linux.) I'm not sure about SELinux-specific permissions.
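This open-time behaviour is easy to reproduce from the shell as well (a sketch; any scratch file will do): the read still succeeds through the already-open descriptor even after every permission bit has been stripped from the inode.

```shell
tmp=$(mktemp)
printf 'hello' > "$tmp"
exec 3< "$tmp"        # permissions are checked here, at open time
chmod 000 "$tmp"      # revoke all permissions on the file
cat <&3               # still prints 'hello': read() does not re-check the inode
exec 3<&-             # close the descriptor
rm -f "$tmp"
```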
Does Linux kernel check file permissions on inode or open file description?
1,504,963,396,000
When SELinux is disabled, I have no issues, but when it's Enforcing then I'm facing this:

[systemd] failed to get d-bus session: Failed to connect to socket /run/dbus/system_bus_socket: Permission denied

Audit.log:

sealert -a /var/log/audit/audit.log
100% done
found 2 alerts in /var/log/audit/audit.log
--------------------------------------------------------------------------------

SELinux is preventing /usr/sbin/zabbix_agentd from connectto access on the unix_stream_socket /run/dbus/system_bus_socket.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that zabbix_agentd should be allowed connectto access on the system_bus_socket unix_stream_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'zabbix_agentd' --raw | audit2allow -M my-zabbixagentd
# semodule -i my-zabbixagentd.pp

I created a policy as suggested above and restarted zabbix-agent; now the zabbix agent log shows:

[systemd] failed to get d-bus session: An SELinux policy prevents this sender from sending this message to this recipient, 0 matched rules; type="method_call", sender="(null)" (inactive) interface="org.freedesktop.DBus" member="Hello" error name="(unset)" requested_reply="0" destination="org.freedesktop.DBus" (bus)

sealert -a /var/log/audit/audit.log
39% done
type=AVC msg=audit(1534885076.573:250): avc: denied { connectto } for pid=10654 comm="zabbix_agentd" path="/run/dbus/system_bus_socket" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tclass=unix_stream_socket
**** Invalid AVC allowed in current policy ***
Well, first you have to identify the denial you are getting from SELinux. The easiest (in my opinion) way to do that is via the sealert utility.

First install the setroubleshoot-server package with:

yum install setroubleshoot-server

Then run:

sealert -a /var/log/audit/audit.log

You will probably get a lot of output; look for your specific denial, and follow the recommendations. But be sure NOT to allow things that shouldn't be allowed!

Here is an example of a denial, and the suggested workaround from sealert (my emphasis):

SELinux is preventing /usr/libexec/postfix/qmgr from using the rlimitinh access on a process.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that qmgr should be allowed rlimitinh access on processes labeled postfix_qmgr_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'qmgr' --raw | audit2allow -M my-qmgr
# semodule -i my-qmgr.pp

Additional Information:
Source Context                system_u:system_r:postfix_master_t:s0
Target Context                system_u:system_r:postfix_qmgr_t:s0
Target Objects                Unknown [ process ]
Source                        qmgr
Source Path                   /usr/libexec/postfix/qmgr
Port
Host
Source RPM Packages           postfix-2.10.1-6.el7.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.13.1-102.el7_3.16.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     centos
Platform                      Linux centos 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64
Alert Count                   5
First Seen                    2018-04-18 18:02:32 CEST
Last Seen                     2018-08-22 09:11:22 CEST
Local ID                      855f168c-1e47-4c6b-8a1e-f8fddce5d426

The example above concerns Postfix; again, look for your denial, and insert a local policy.
How to add exception in SELinux?
1,504,963,396,000
While trying to create a test environment using mount --bind I was surprised to find that it sometimes fails with permissions errors because root cannot access the source directory. This only appears to affect NFS file-systems. Is there a way to mount --bind a directory which root cannot access? Perhaps by inode number directly?

Example

I have an NFS mount which the ordinary vagrant:vagrant user can access fully:

vagrant@ubuntu-xenial:/tmp$ find nfs_mount/ -ls
4375   4 drwxr-xr-x 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/
257090 4 drwxr-xr-x 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source
257091 4 drwx------ 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source/path
257092 4 drwx------ 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source/path/is
257093 4 drwx------ 2 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source/path/is/here

... but root:root cannot:

vagrant@ubuntu-xenial:/tmp$ sudo find nfs_mount/ -ls
4375   4 drwxr-xr-x 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/
257090 4 drwxr-xr-x 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source
257091 4 drwx------ 3 vagrant vagrant 4096 Mar 20 21:28 nfs_mount/source/path
find: ‘nfs_mount/source/path’: Permission denied

If I attempt to mount --bind it fails:

vagrant@ubuntu-xenial:/tmp$ mkdir /tmp/bindtarget
vagrant@ubuntu-xenial:/tmp$ sudo mount --bind /tmp/nfs_mount/source/path/is/here/ /tmp/bindtarget/
mount: mount /tmp/nfs_mount/source/path/is/here/ on /tmp/bindtarget failed: Permission denied

The NFS mount at /tmp/nfs_mount is provided by localhost:/srv, and if I go directly to the source file-system the directory permissions don't pose a problem:

vagrant@ubuntu-xenial:/tmp$ sudo mount --bind /srv/source/path/is/here/ /tmp/bindtarget/
vagrant@ubuntu-xenial:/tmp$ findmnt /tmp/bindtarget
TARGET          SOURCE                              FSTYPE OPTIONS
/tmp/bindtarget /dev/sda1[/srv/source/path/is/here] ext4   rw,relatime,data=ordered

NFS setup in case it matters:

vagrant@ubuntu-xenial:/tmp$ showmount -e localhost
Export list for localhost:
/srv *
vagrant@ubuntu-xenial:/tmp$ cat /etc/exports
/srv/ *(rw,sync,no_subtree_check)

Environment

Ubuntu 16.04 (Xenial64)
Linux ubuntu-xenial 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Please note that doing any mount in /tmp is hazardous, because some cleaning task might suddenly decide to do its work in /tmp and not care about mountpoints, thus wiping old files not actually belonging to /tmp. That said, I'll use the /tmp examples from the OP.

Method 1: If you're in full control of the NFS environment, just add the no_root_squash option to the export options: this will prevent the root user on the client from being mapped as nobody on the server and losing rights.

Method 2: Otherwise, here's a relatively simple solution, the one you're looking for, in the same vein as accessing a still-in-use deleted file: using /proc. For simplicity this requires two terminals.

User terminal:

vagrant@ubuntu-xenial:/tmp$ cd /tmp/nfs_mount/source/path/is/here/
vagrant@ubuntu-xenial:/tmp/nfs_mount/source/path/is/here$ echo $$
12345

Root terminal: root can get a reference to the wanted directory, still unreadable, but mountable:

# ls -l /proc/12345/cwd
lrwxrwxrwx. 1 vagrant vagrant 0 Mar 21 01:18 /proc/12345/cwd -> /tmp/nfs_mount/source/path/is/here
# ls -l /proc/12345/cwd/
ls: cannot open directory '/proc/12345/cwd/': Permission denied
# mount --bind /proc/12345/cwd /tmp/bindtarget
# ls /tmp/bindtarget
ls: cannot open directory '/tmp/bindtarget': Permission denied

That's it.
How to create a mount --bind when root does not have permission to access the source directory?
1,504,963,396,000
After reading about removing the execute permission from chmod, I got curious. Is it possible to recover from removing the execute permission from ld-linux.so without rebooting if I haven't yet exited bash? Every command appears to stop functioning.
You would need a statically linked (or already running) utility that can do a chmod operation. If you had a statically linked BusyBox or a similar emergency shell installed, that would probably do it. In some old distributions, the basic package management utility (e.g. dpkg or rpm) used to be statically linked to enable libc and loader upgrades. Nowadays there are apparently other ways to do that. But if your package management utility happened to be statically linked and the package containing ld-linux would be still in the cache directory of the package management tools, you might be able to force-reinstall the ld-linux package and fix it that way.
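An already-running dynamically linked interpreter (say, a Python shell started before the accident) also qualifies: its loader is already mapped into memory, so it can repair the file itself. A sketch using a throwaway file as a stand-in for ld-linux.so — note that in a real recovery the interpreter must already be running, since launching it fresh would itself need the broken loader, and the real path (e.g. /lib64/ld-linux-x86-64.so.2) varies by distribution:

```shell
tmp=$(mktemp)                 # stand-in for the real ld-linux.so in this demo
chmod 000 "$tmp"              # simulate the lost permissions
# From the (already-running) interpreter, restore sane permissions:
python3 -c 'import os, sys; os.chmod(sys.argv[1], 0o755)' "$tmp"
stat -c '%a' "$tmp"           # back to 755
rm -f "$tmp"
```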
Recovering from removing execute permission from ld-linux.so
1,504,963,396,000
I need to grant access via SFTP to a specific folder with full write permissions from the root of this folder. I made it work but can't figure out a way to provide write permission on the / of the root. I read that the common way to solve this is just to create a subfolder for each user, but this one contains existing files which are used all around the website. In short:

/ should not be readable (this is correct)
/uploads/ is not writable (**but should** by any means)
/uploads/* is writable (and should)

This is what I have done so far:

/var/www/uploads is owned by root:root with 755 permissions. (775 prevents the user from even logging in)
/var/www/uploads/* is owned by newuser:sftp with 775 permissions.

Relevant /etc/ssh/sshd_config:

Match group sftp
    ChrootDirectory %h
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand internal-sftp
AllowGroups ssh-users sftp

Users are created like this:

useradd -d /var/www/uploads -m newuser -g sftp -s /bin/false

Thanks a lot!
"I made it work but can't figure out a way to provide write permission on the / of the root."

It is not possible. The chroot directory cannot be writable by the user you are chrooting. That is a requirement defined in the manual page for sshd_config:

At session startup sshd(8) checks that all components of the pathname are root-owned directories which are not writable by any other user or group.
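The usual workaround (the subfolder approach the question mentions) is to keep the chroot root root-owned and non-writable, and give the user a writable subdirectory inside it. A sketch using a throwaway directory as a stand-in for /var/www/uploads — in the real setup these commands run as root and the subdirectory gets chown newuser:sftp:

```shell
chrootdir=$(mktemp -d)        # stands in for /var/www/uploads (must be root:root in reality)
chmod 755 "$chrootdir"        # chroot root: readable, NOT group/other-writable
mkdir "$chrootdir/files"      # writable area for the sftp user
chmod 775 "$chrootdir/files"  # real setup: chown newuser:sftp "$chrootdir/files" first
stat -c '%a' "$chrootdir" "$chrootdir/files"
```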
sftp restricted access to a directory
1,504,963,396,000
Good day everybody! Recently I've installed Arch on my computer and I'm still trying to figure out why the mounted devices (HDD) are missing from the Home folder tree in my file manager. To see more clearly what I mean: [screenshot] but when I run it with sudo I get this view: [screenshot] I'm assuming I must be missing some permissions for this? All extra devices are mounted to the /mnt folder, which in its properties indicates root as the owner. How can I make it look the same without running sudo?

UPDATE

cat /etc/fstab

# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
# UUID=7b8195aa-4480-433e-b258-7b5607977dbb
/dev/sda1 / ext4 rw,relatime,data=ordered 0 1
# UUID=dec7470a-c024-4f87-aa38-03d1d0bc214c
/dev/sda5 /home ext4 rw,relatime,data=ordered 0 2
# UUID=e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2 LABEL=Movies
/dev/sdb1 /mnt/movies ext4 rw,relatime,data=ordered 0 2
# UUID=4eafe188-0b7d-4083-9ef2-c3370e881455 LABEL=Media
/dev/sdb2 /mnt/media ext4 rw,relatime,data=ordered 0 2
# UUID=bb278797-cdd3-4f28-acb8-809935e48bb9
/dev/sda6 none swap defaults 0 0

sudo blkid

/dev/sda1: UUID="7b8195aa-4480-433e-b258-7b5607977dbb" TYPE="ext4" PARTUUID="3cde3cdd-01"
/dev/sda5: UUID="dec7470a-c024-4f87-aa38-03d1d0bc214c" TYPE="ext4" PARTUUID="3cde3cdd-05"
/dev/sda6: UUID="bb278797-cdd3-4f28-acb8-809935e48bb9" TYPE="swap" PARTUUID="3cde3cdd-06"
/dev/sdb1: LABEL="Movies" UUID="e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2" TYPE="ext4" PARTUUID="0008db75-01"
/dev/sdb2: LABEL="Media" UUID="4eafe188-0b7d-4083-9ef2-c3370e881455" TYPE="ext4" PARTUUID="0008db75-02"

UPDATE 2

cat /etc/fstab

# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
# UUID=7b8195aa-4480-433e-b258-7b5607977dbb
/dev/sda1 / ext4 rw,relatime,data=ordered 0 1
# UUID=dec7470a-c024-4f87-aa38-03d1d0bc214c
/dev/sda5 /home ext4 rw,relatime,data=ordered 0 2
# UUID=e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2
LABEL=Movies /mnt/movies ext4 rw,user,auto,acl 0 2
# UUID=4eafe188-0b7d-4083-9ef2-c3370e881455
LABEL=Media /mnt/media ext4 rw,user,auto,acl 0 2
# UUID=bb278797-cdd3-4f28-acb8-809935e48bb9
/dev/sda6 none swap defaults 0 0

mount | grep sdb

/dev/sdb2 on /run/media/admin/Media type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
/dev/sdb1 on /run/media/admin/Movies type ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
Solved! OK, I'll contribute my answer to this thread; maybe it will help somebody else with the same issue. There's a quick workaround to this, and I'll try to reproduce my steps.

Firstly, my fstab looked like this:

# UUID=e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2 LABEL=Movies
/dev/sdb1 /mnt/movies ext4 rw,relatime,data=ordered 0 2

But thanks to @Bahamut I managed to change it to this, and the mountable devices appeared in the Home folder tree:

# UUID=e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2
LABEL=Movies /mnt/movies ext4 rw,user,auto,acl 0 2

After that I changed it again to:

# UUID=e2fc4ba5-3b2a-4dd8-9d35-eba0d1f83fc2
LABEL=Movies /mnt/movies ext4 uid=1000,gid=100,umask=0022,auto,nosuid,nodev,rw,relatime,data=ordered 0 2

This is only for security measures; you can read more on fstab here. Do note that uid and gid may differ on your system from the example above.

But now I faced the issue that every time I reboot, the system asks for the root password, which is kinda annoying, but yes, might be good for security. I started to google for a workaround and found a suggestion to edit the sudoers file to remove sudo when using mount, but that is highly not recommended for security purposes, so I skipped that workaround and don't recommend it either.

So, after I clicked on the partition icon to mount it, the pop-up window appeared, and at the bottom left corner I clicked Details -> Action, which showed:

org.freedesktop.udisks2.filesystem-mount-system

As I found out later, this is a policy rule, but one can change it; just run in a terminal:

sudo gedit /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy

(you can use nano as well, or your favourite editor) and in the opened file search for the lines:

<action id="org.freedesktop.udisks2.filesystem-mount-system">
  <defaults>
    <allow_any>auth_admin</allow_any>
    <allow_inactive>auth_admin</allow_inactive>
    <allow_active>auth_admin_keep</allow_active>
  </defaults>
</action>

and change them to:

  <defaults>
    <allow_any>yes</allow_any>
    <allow_inactive>yes</allow_inactive>
    <allow_active>yes</allow_active>
  </defaults>
</action>

Save & close the file, then sudo reboot. That's it!
Mounted devices are not shown in File manager folder tree items
1,504,963,396,000
Please refer to the folder hierarchy below.

folder1
    -> file11.txt
    -> file12.txt
    -> folder11.backup
        -> file111.txt
        -> file112.txt
        -> file113.bak
        -> folder111
        -> and many more folders and files
folder2
    -> file21.txt
    -> file22.txt
    -> file23.bak
folder2.backup
    -> file111.txt
    -> file112.txt
    -> folder111
    -> folder112
        -> file1121.bak
        -> file1122.txt
    -> and many more folders and files
folder3
    -> folder31
        -> folder311
            -> folder3111.backup
                -> file3111.txt
                -> file3112.txt
                -> folder3111
                -> and many more folders and files
            -> folder3112
                -> file31121.bak
                -> file31121.txt

I want to change the ownership (chown) and permissions (chmod) with the following rules:

all folders/subfolders EXCEPT folders that end in ".backup". In my example folder hierarchy, the following folders and their contents will be ignored: folder11.backup, folder2.backup and folder3111.backup

all files EXCEPT those that have the extension ".bak". But if a file, irrespective of its extension, is inside a .backup folder, it is excluded because of rule 1.

Thanks for the help. :)
(1) The directories:

find . -mindepth 1 -type d -not -name '*.backup' \
    -not -path '*.backup/*' -print0 | xargs -0 chmod MODE

(2) The files:

find . -type f -not -name '*.bak' \
    -not -path '*.backup/*' -print0 | xargs -0 chmod MODE

For testing you may run the command lines with ls -ld instead of chmod ....
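To sanity-check the selection before touching anything, the same find expressions can be pointed at a throwaway copy of the tree and run with -print; this sketch builds a tiny version of the hierarchy above:

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p folder1/folder11.backup folder2
touch folder1/file11.txt folder2/file23.bak folder1/folder11.backup/file111.txt
# Files that WOULD be changed: everything except *.bak and .backup contents.
find . -type f -not -name '*.bak' -not -path '*.backup/*' -print
# -> ./folder1/file11.txt
```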
I want to change permissions on all folders/files excluding some of them
1,504,963,396,000
I just learned something that shocked me, because I did not have a clue it was a fact. If I have a directory with the following permissions:

user@host:~$ ls -la testdir
total 8
drwxrwxrwx  2 user user 4096 Mar 3 20:36 .
drwx------ 34 user user 4096 Mar 3 20:36 ..
-rw-r--r--  1 user user    0 Mar 3 20:36 testfile 1
-rw-r--r--  1 user user    0 Mar 3 20:36 testfile 2

Even though the files testfile 1 and testfile 2 have write permissions only for the owner, everyone can write to them. Until now, I thought that a directory's permissions only affected the directory itself.

So now for my question - what good are file permissions on files, if everything seems to be set by the permissions of the directory that the files reside in?

==== EDIT 1 ====

On the other hand, look at these permissions:

[user@geruetzel2 default]$ ls -la
total 24
drwxr-xr-x.  2 root root   41 Dec 19 23:07 .
drwxr-xr-x. 96 root root 8192 Mar  3 20:28 ..
-rw-r--r--.  1 root root  354 Dec 19 23:07 grub
-rw-r--r--.  1 root root 1756 Nov 30 19:57 nss
-rw-------.  1 root root  119 Mar  6  2015 useradd

If I do a cat useradd as non-root here, I get a permission denied error. Why is that? The directory has read permissions for "other", so it should be readable? There seems to be a difference between the two examples I gave, but I don't see the reason for the different behavior.
The directory permissions "only" affect the content of the directory. So anybody with write permission on the directory can, for example, delete files or folders in that directory, even if the permissions of the files or folders themselves grant no write access. It may be easier to understand if you open the folder with vi or any other text editor. In Unix and Linux, "everything is a file". If you, for example, edit a file with vi, it will not edit the file in place but will make a copy and delete the original when saved. On the other hand, a user not owning the file couldn't echo directly into that file.
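The "write on the directory is what lets you delete" point can be checked with a scratch directory: the file below is read-only, yet removing it succeeds, because removal only modifies the directory entry, not the file's data.

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch testfile
chmod 444 testfile      # no write permission on the file itself
rm -f testfile          # succeeds anyway: we have write on the directory
ls -A                   # empty - the file is gone
```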
How do permissions on a directory affect files in it?
1,504,963,396,000
I have two Linux OSes installed in two partitions on my system. However, I have a few usernames which are common to both partitions. For example, I have the user friend in both OSes. I have noticed that friend in one OS has access to the files of friend in the other OS. Is this expected? The two friend accounts in the two OSes are different accounts (coincidentally with the same username). If this is expected, then is there a way to prevent this type of access being granted just because they have the same username? I don't want the files of friend of OS1 to be accessible to friend of OS2.
It sounds like you have the partitions mountable to each other; one way to prevent cross-OS access would be to prevent this cross-mounting. Another way is to ensure that each account has a different UID. The UID is what matters, not the login name.
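The reason the UID is what matters is that ownership is stored on disk as numbers, not names; ls merely translates them through /etc/passwd at display time. A quick way to see the raw IDs (sketch):

```shell
tmp=$(mktemp)
stat -c '%u:%g' "$tmp"   # the numeric UID:GID actually stored in the inode
id -u                    # the current user's UID - matches the stat output
ls -ln "$tmp"            # ls -n shows the same numeric IDs instead of names
rm -f "$tmp"
```

A second installation that mounts this filesystem resolves those numbers against its own /etc/passwd, which is why a same-UID user there gets access.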
how to prevent access to other user with same user id (UID)
1,504,963,396,000
I'm moving files over from my old server to my new server. By the time I'm doing it for real (as opposed to just testing) I'll have all the "user" users (the non-system users, i.e. >= 1000) created on the new server. My only fear is that I have some file that I'm moving over in the home directory that will belong to one of these users (say apache for instance) that doesn't exist on the new server. I'm using rsync -az -e ssh src dest as user on the new server to do the copying. It does preserve the usernames (as opposed to ids) for the users that exist. However instead of complaining about non-existent users, it just falls back on the numeric ids if the user isn't found. The behavior seems to be as described in this paragraph from the man page: If a user or group has no name on the source system or it has no match on the destination system, then the numeric ID from the source system is used instead. See also the comments on the "use chroot" setting in the rsyncd.conf manpage for information on how the chroot setting affects rsync’s ability to look up the names of the users and groups and what you can do about it. While I haven't read the entire man page word for word, what I have read doesn't offer me any options to complain about non-existent users. What is the best way to make sure that whatever users exists as owners/groups of files under a directory (say /home) exist on the new machine. If it's not doable with rsync, what's the best way to get a list of all the users/groups that exist so I can manually check that they exist on the new machine, or fix them before copying them. Summary: How do I make sure that after I run rsync, none of the files have been copied using numeric ids instead of name ids?
The rsync command doesn't have a mechanism for handling this directly, so I would use a different approach. I would scan the source filesystem tree, collecting the usernames (and groups) of all files present there:

# List of usernames owning files under 'src'
find src -printf "%u\n" | sort -u | tee /tmp/src.users

# List of group memberships for files under 'src'
find src -printf "%g\n" | sort -u | tee /tmp/src.groups

# Copy files to target system
scp -p /tmp/src.{users,groups} dest:/tmp/

I would then ensure that all users existed on the target system ready for rsync to use. Run these commands on the target system:

# List "missing" users
getent passwd | cut -d: -f1 | sort -u | comm -13 - /tmp/src.users

# List "missing" groups
getent group | cut -d: -f1 | sort -u | comm -13 - /tmp/src.groups
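After the rsync run, you can also verify on the target that nothing ended up owned by a bare numeric ID; GNU find has predicates for exactly this. A sketch — point it at your actual destination path:

```shell
dest=/home    # the rsync destination to audit (example path)
# List files whose owner UID or group GID has no matching name on this system;
# empty output means no file fell back to a numeric-only owner or group.
find "$dest" \( -nouser -o -nogroup \) -ls
```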
How to get rsync to complain if user not found
1,504,963,396,000
I accidentally did chmod 770 / which removed all access to the system. I booted to rescue mode, dropped to a root shell and tried to chmod 755 /, but I get the message changing permissions of /: read-only file system and nothing happens. How can I set the root dir back to 755? Or am I completely locked out?
When you make any changes to the filesystem from the recovery root shell, you have to remount the partition with read-write permissions:

mount -o remount,rw /

Then you can proceed with changing the permissions of the root directory.
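Putting the whole rescue sequence together (the remount and the chmod of / need root, so the chmod step is demonstrated here on a scratch directory instead):

```shell
# In the rescue shell, as root:
#   mount -o remount,rw /
#   chmod 755 /
# Demonstrating the mode change itself on a throwaway directory:
d=$(mktemp -d)
chmod 770 "$d"        # the accidental mode
chmod 755 "$d"        # the fix
stat -c '%a' "$d"     # prints 755
rmdir "$d"
```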
Accidentally removed execute permission from root directory.
1,504,963,396,000
thisisme@ubuntu:/home$ ls -al
total 28
drwxr-xr-x  4 root     root      4096 4月 19  2012 .
drwxr-xr-x 24 root     root      4096 4月  9 12:34 ..
drwxr-xr-x 66 thisisme thisisme  4096 4月 12 10:15 thisisme
drwx------  2 root     root     16384 11月 18 00:07 lost+found

As above, my home folder permission is set as rwxr-xr-x, which means everyone on my computer can access my home folder (/home/thisisme) because all three x flags are set, but in fact (tested with a guest login session) only I can access my home folder. But why not set the permission as drwxrwx--- or something like drwx------?
You can set the permissions as drwxrwx--- (770) or drwx------ (700) depending on your preference. The first allows the owner and users in the folder's group to access the directory and add new files to it, while the second only allows the owner to access the directory. There should be no difference between the first and second in your case, unless you have other users added to your group (thisisme). Do note that even if users can add files and read the directory list, they may not be able to read or modify any other files or folders inside that have different permissions that prevent them from reading or writing to it. Another thing to note is that the reason why you cannot access home folders in guest sessions is because Ubuntu uses apparmor to restrict access to certain folders in guest sessions, including but not limited to /home. If you want to test if other users can access your home folder, you should do it from a new user account.
Understanding home folder permission
1,504,963,396,000
I have a couple of files that I want to move to another's user home directory. I don't have permissions to write to that user's home directory, but I know his password. I know how to copy the file using scp (see here). However, if I want to move the file, copying and then removing the original file is inefficient. Is there a way to move the file, without using sudo (I don't know the root's password)?
Subject to certain assumptions that the target user can actually access the file in its original location, the following approach could work:

SRC='/path/to/existing/file'
DST='/path/to/new/file'
su target_user -c "ln -f '$SRC' '$DST'" && rm -f "$SRC"

This "moves" the file to the new user's location, but does not change the ownership or permissions. Note that hard links only work within a single filesystem.
Move file to another user's home directory (without sudo)?
1,407,059,257,000
When I use the command crontab -e on my Debian server as a non root user (in this case as postgres), I can't edit it because of

"/tmp/crontab.SJlY0Y/crontab" [Permission Denied]

crontab -l on the other hand works fine. How can I fix this problem? Here are the current permissions:

$ ls -l /tmp/crontab.SJlY0Y/crontab
-rw------- 1 root postgres 1.2K Aug 3 11:44 /tmp/crontab.SJlY0Y/crontab
$ ls -l /var/spool/cron
total 12K
drwxrwx--T 2 daemon daemon 4.0K Sep 12 2012 atjobs
drwxrwx--T 2 daemon daemon 4.0K Jun 9 2012 atspool
drwx-wx--T 2 root crontab 4.0K Aug 3 11:15 crontabs
$ ls -l /var/spool/cron/crontabs
total 12K
-rw------- 1 git crontab 1.3K Mar 2 16:48 git
-rw------- 1 postgres crontab 1.4K Aug 3 11:15 postgres
-rw------- 1 root root 2.3K Jul 20 20:32 root
$ ls -l /usr/bin/crontab
-rwsr-xr-x 1 root root 36K Jul 3 2012 /usr/bin/crontab
$ ls -ld /tmp/
drwxrwxrwt 6 root root 4.0K Aug 3 11:43 /tmp/
$ ls -l /usr/bin/crontab
-rwsr-xr-x 1 root root 36K Jul 3 2012 /usr/bin/crontab

The ownership and permission should actually be

-rwxr-sr-x 1 root crontab 35880 Jul 3 2012 /usr/bin/crontab

Since Debian sarge, crontab is setgid crontab, not setuid root, as requested in bug #18333. This is the cause of your problem: the crontab program expects to run setgid, not setuid, so it creates the temporary file as the user and group it's running as, which are root and the caller's primary group instead of the calling user and the crontab group. Reinstall the cron package: apt-get --reinstall install cron (as root). Check that /var/spool/cron/crontabs has the correct permissions and ownership:

drwx-wx--T 2 root crontab 4096 Oct 8 2013 /var/spool/cron/crontabs

In the future, don't mess with permissions of system files.
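If reinstalling is not convenient, the ownership and mode can also be restored by hand (as root; this assumes Debian's cron packaging, where the expected mode is 2755, i.e. setgid crontab). The scratch-file demo below only shows that mode 2755 corresponds to the -rwxr-sr-x string from the listing:

```shell
# The manual fix (root required):
#   chown root:crontab /usr/bin/crontab
#   chmod 2755 /usr/bin/crontab
# 2755 maps to -rwxr-sr-x, demonstrated on a temporary file:
f=$(mktemp)
chmod 2755 "$f"
ls -l "$f" | cut -c1-10   # -rwxr-sr-x
rm -f "$f"
```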
Cannot edit crontab as non root user
1,407,059,257,000
I've got a directory and a file in it, with the directory marked as read-only:

$ mkdir directoryname
$ touch directoryname/filename
$ chmod a-w directoryname

I cannot delete the file, even if I pass the -f flag to rm:

$ rm -f directoryname/filename
rm: cannot remove `directoryname/filename': Permission denied

Is there a way to force rm to delete this file? Obviously, I could temporarily give directoryname write permissions, but I'm looking for a more convenient way. I believe the underlying unlink syscall fails in the same way. Is there a way to get around that?
What about: sudo rm directory/filename or: su -c "rm directory/filename" depending on your distro and/or setup. You are giving yourself a temporary root for the duration of the above commands and as root is almighty on Unix/Linux you are allowed to do anything. This contrasts with MS Windows where you can remove access to the administrator account (although there are ways around that). SELinux can help as can various extended attributes tools (such as chattr) but in the end, they can be bypassed as root can alter the extended attributes and can configure (and even disable) SELinux.
How do I delete a file in a read-only directory?
1,407,059,257,000
I need to ensure that, when deleting a specific user from a system, all of his/her files are removed. User creation/deletion will happen a lot on this system, so I want to reuse UID's and want to ensure the new user does not have access to any files of the old user. My question is two-fold: Is there a general and easy way to find all files owned by a specific user? Or is a system-wide search -uid n my only option? If a system-wide search is the only option, then which directories are generally writeable by a normal user (suppose a distribution following FHS)? His home directory /tmp ?? The user does not have sudo privileges, so he can only write in places that are world-writable in a standard Unix filesystem.
I did a bit of research of my own. Main source: http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/c23.html

Long story short: nothing prohibits a user from creating files, other than directory permissions. However, in a standard Linux FHS layout, only some directories are writeable by everyone. As long as you use a distribution that follows this convention, you should only check the following directories (as shown by a test on my own system):

/dev/shm (mounted by default in some distributions)
User home directory
/var/tmp
/var/run/screen/S-rubenf
/tmp
/mnt/usb-disk (mounted with gid=users)

Source:

find / -type d 2>/dev/null | while read -r DIR; do
    if touch "$DIR/test_can_be_removed123" 2>/dev/null; then
        rm "$DIR/test_can_be_removed123"
        echo "$DIR" >> writable_directories
    fi
done
Which directories are writeable in a system following FHS?
1,407,059,257,000
I just performed a fresh Ubuntu install and I am seeing the following in lsof:

userA@az1:~$ lsof
COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd unknown /proc/1/cwd (readlink: Permission denied)
init 1 root rtd unknown /proc/1/root (readlink: Permission denied)
init 1 root txt unknown /proc/1/exe (readlink: Permission denied)
init 1 root NOFD /proc/1/fd (opendir: Permission denied)
kthreadd 2 root cwd unknown /proc/2/cwd (readlink: Permission denied)
kthreadd 2 root rtd unknown /proc/2/root (readlink: Permission denied)
kthreadd 2 root txt unknown /proc/2/exe (readlink: Permission denied)
kthreadd 2 root NOFD /proc/2/fd (opendir: Permission denied)

Is this normal? If not, how do I fix it? Trying to search for this particular error has led me nowhere. I am concerned that something is wrong because root is getting Permission denied errors. ls -la result for the proc folder:

dr-xr-xr-x 145 root root 0 Jan 13 17:33 proc

ls -la results for contents are:

dr-xr-xr-x 9 root root 0 Jan 13 17:34 1

and for the contents of process 1:

sudo ls -la /proc/1/
total 0
dr-xr-xr-x 9 root root 0 Jan 13 17:34 .
dr-xr-xr-x 145 root root 0 Jan 13 17:33 ..
dr-xr-xr-x 2 root root 0 Jan 13 17:42 attr
-rw-r--r-- 1 root root 0 Jan 13 17:42 autogroup
-r-------- 1 root root 0 Jan 13 17:42 auxv
-r--r--r-- 1 root root 0 Jan 13 17:34 cgroup
--w------- 1 root root 0 Jan 13 17:42 clear_refs
-r--r--r-- 1 root root 0 Jan 13 17:34 cmdline
-rw-r--r-- 1 root root 0 Jan 13 17:42 comm
-rw-r--r-- 1 root root 0 Jan 13 17:42 coredump_filter
-r--r--r-- 1 root root 0 Jan 13 17:42 cpuset
lrwxrwxrwx 1 root root 0 Jan 13 17:35 cwd
-r-------- 1 root root 0 Jan 13 17:35 environ
lrwxrwxrwx 1 root root 0 Jan 13 17:34 exe
dr-x------ 2 root root 0 Jan 13 17:35 fd
dr-x------ 2 root root 0 Jan 13 17:42 fdinfo
-r-------- 1 root root 0 Jan 13 17:42 io
-r--r--r-- 1 root root 0 Jan 13 17:42 latency
-r--r--r-- 1 root root 0 Jan 13 17:35 limits
-rw-r--r-- 1 root root 0 Jan 13 17:42 loginuid
dr-x------ 2 root root 0 Jan 13 17:42 map_files
-r--r--r-- 1 root root 0 Jan 13 17:35 maps
-rw------- 1 root root 0 Jan 13 17:42 mem
-r--r--r-- 1 root root 0 Jan 13 17:42 mountinfo
-r--r--r-- 1 root root 0 Jan 13 17:42 mounts
-r-------- 1 root root 0 Jan 13 17:42 mountstats
dr-xr-xr-x 5 root root 0 Jan 13 17:42 net
dr-x--x--x 2 root root 0 Jan 13 17:42 ns
-r--r--r-- 1 root root 0 Jan 13 17:42 numa_maps
-rw-r--r-- 1 root root 0 Jan 13 17:42 oom_adj
-r--r--r-- 1 root root 0 Jan 13 17:42 oom_score
-rw-r--r-- 1 root root 0 Jan 13 17:42 oom_score_adj
-r--r--r-- 1 root root 0 Jan 13 17:42 pagemap
-r--r--r-- 1 root root 0 Jan 13 17:42 personality
lrwxrwxrwx 1 root root 0 Jan 13 17:35 root
-rw-r--r-- 1 root root 0 Jan 13 17:42 sched
-r--r--r-- 1 root root 0 Jan 13 17:42 schedstat
-r--r--r-- 1 root root 0 Jan 13 17:42 sessionid
-r--r--r-- 1 root root 0 Jan 13 17:42 smaps
-r--r--r-- 1 root root 0 Jan 13 17:42 stack
-r--r--r-- 1 root root 0 Jan 13 17:35 stat
-r--r--r-- 1 root root 0 Jan 13 17:42 statm
-r--r--r-- 1 root root 0 Jan 13 17:35 status
-r--r--r-- 1 root root 0 Jan 13 17:42 syscall
dr-xr-xr-x 3 root root 0 Jan 13 17:35 task
-r--r--r-- 1 root root 0 Jan 13 17:42 timers
-r--r--r-- 1 root root 0 Jan 13 17:42 wchan
It appears that you did not run lsof as root, given that you show a prompt with $. Run sudo lsof to execute the lsof command as root. Some information about a process, such as its current directory (pwd), its root directory (root), the location of its executable (exe) and its file descriptors (fd) can only be viewed by the user running the process (or root). That's normal behavior. Sometimes the permission to access files in /proc doesn't match the permission in the directory entries, it's finer-grained (for example, it depends on processes' effective UID as well as real UID). You might get “permission denied” as root in some unusual circumstances, when you're root only in a namespace. If you just installed a new machine, you won't be seeing this.
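A quick way to see this for yourself: the fd directory of any process is owner-only by design, so only the owner (or root) can open it. For example:

```shell
# Each process's fd directory is mode dr-x------ (0500),
# so lsof needs to run as the process owner or as root:
stat -c '%a %U' /proc/self/fd
# To inspect another user's process, elevate:
#   sudo lsof -p 1
```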
root permission denied on /proc/1/exe
1,407,059,257,000
I'm trying to build a project, and when I use the command make, I get the following errors:

/bin/sh: line 4: .deps/ipset_bitmap_ipmac.Tpo: Permission denied
make[2]: *** [ipset_bitmap_ipmac.lo] Error 126

This file, .deps/ipset_bitmap_ipmac.Tpo, was created by make during the build with the following permissions: -rw-r--r--, notice that there's no x. But then make wants to execute the file immediately, which fails. If I go to the file and add executable permissions manually, then the build continues past that point if I re-run make. Except that the make command will crash again once it reaches the next file. The only option I have is to keep chmoding every single new file. My question is, why is make creating these new files without +x?

Side notes: I'm on CentOS5, umask -S returns: u=rwx,g=rx,o=rx, sudo doesn't help at all.
With a name like .deps/ipset_bitmap_ipmac.Tpo, it's pretty likely that the file was not meant to be executable. What's happening here is that there's a line in the Makefile that looks like $(SOME_VARIABLE) .deps/ipset_bitmap_ipmac.Tpo or more likely $(SOME_VARIABLE) $(ANOTHER_VARIABLE) where the value of ANOTHER_VARIABLE is .deps/ipset_bitmap_ipmac.Tpo, or some variant on this. Due to a bug in the makefile, or in the program that generated it, or because your computer has an unsupported configuration, the variable SOME_VARIABLE (which should have been the name of the program) wasn't defined. More help may be forthcoming if you tell us what project you're trying to build and exactly where you got it, how you unpacked it, how you configured it, what build command you ran.
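The failure mode is easy to reproduce with a deliberately broken makefile (entirely hypothetical names, not the project's real makefile): when the variable that should hold the compiler command expands to nothing, the .Tpo file itself becomes the command, and make fails with Error 126 exactly as in the question.

```shell
demo=$(mktemp -d)
mkdir -p "$demo/.deps"
touch "$demo/.deps/foo.Tpo"        # created 644, no execute bit
printf 'COMPILE =\nfoo:\n\t$(COMPILE) .deps/foo.Tpo\n' > "$demo/Makefile"
make -C "$demo" foo                # sh: Permission denied / Error 126
```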
Files created by 'make' aren't getting executable permissions by default
1,407,059,257,000
I have a folder with some files, which each have different permissions. Some are read-only, some are read/write, some are read/execute. How can I copy the permissions from user to group, so that for example a file marked read-write for user will be set to read-write for group, but a file read-executable for a user will be set read-executable for the group. Can this be done with a single command?
chmod from coreutils understands such assignments: chmod g=u file
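For example (and chmod -R g=u works the same way on a whole tree):

```shell
f=$(mktemp)
chmod 640 "$f"        # user rw, group r
chmod g=u "$f"        # group now copies the user bits
stat -c '%a' "$f"     # 660
rm -f "$f"
```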
Copy file permissions from user to group
1,407,059,257,000
I have been learning Python and Bash scripting, and up to now I have given my scripts 777 permissions. Having learned more about how to set permissions, and knowing this is a bad idea, I ran the following command (on Mac OS X, v10.6.8):

find / -type f -perm 0777

I was expecting to only get a small list of files that I had assigned 777 permissions, but instead I got a huge list. (1,503 to be exact.) These include my small number of scripts, as well as all of my image files, html and css files, example code that came with ebooks, as well as files related to applications such as Adobe Flash Catalyst, Smith Micro Anime Studio, something called Microsoft Play Ready (I assume this is part of Silverlight), and others. I'm kind of new at this, but isn't this a problem? Should any files have 777 permissions?
It is really not normal to give files 777 permission. I cannot think of a legitimate reason for any file to have such permission, but if exists, it had better have a really really good reason. I advise you to change those permissions immediately. At the minimum, remove the write permission of others, with chmod -R o-w Most probably you also want to remove the write permission of the group too, with chmod -R g-w Personally I really don't like seeing image/audio/video/document files with executable bits on. I would remove those too, with something like find /path -name '*.png' -exec chmod a-x {} \; In the future, be more careful. It is security 101 to not give more permissions than really needed. For example you can start by setting all files to 0400 and all directories to 0500 and then see what is not working and gradually add the bare minimum permissions. What is the output of umask in your system? If it is more permissive than 0022 then I think you might want to add in your ~/.bashrc the command umask 0022. This has the effect that new files you create will not have write permission for group and others by default.
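The tightening steps above, sketched on a scratch file so you can see the resulting mode:

```shell
d=$(mktemp -d)
touch "$d/photo.png"
chmod 777 "$d/photo.png"           # the problematic mode
chmod o-w,g-w,a-x "$d/photo.png"   # the clean-up from the answer
stat -c '%a' "$d/photo.png"        # 644
rm -rf "$d"
```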
A bunch of files have 777 permissions
1,407,059,257,000
I'm not exactly sure the right question to ask, so I'll try to explain what I'm trying to do. I have an internal web application (in PHP) that I want to be able to create a folder. The trouble is that the Apache user www-data doesn't have any access to the parent folder that I want my folder to be created in. I don't think it's appropriate to give www-data access to the parent folder, so I'm wondering if I can create a script somewhere that www-data can run which has more privileges than www-data does. The script would simply do something like this (psuedocode): FOLDER_NAME = sanitise(<arg-val-1>) mkdir /some-path/$FOLDER_NAME Where would it be appropriate to create this script, and how would it be run by www-data as root? (Or alternatively, is there a better way to solve the problem?) I'm running Debian Linux.
You can't do what you state in a useful way, but there's undoubtedly something that's close enough and that will do what you really want. Even if you arranged to create the directory, the www-data user would still not be able to access /some-path/subdirectory, because the subdirectory can only be accessed through the parent directory. (There are ways around this, but none that I recommend. You can have a process that can access both /some-path and /some-path/subdirectory change to /some-path/subdirectory, then drop privileges; the resulting process will still be able to access its current directory (but not through its absolute path). You can bind-mount the directory in another location, but if you're going to do that you might as well create the directory elsewhere.) Arrange for these directories to be located under a directory that www-data can at least access (x permission bit). If the problem is that the directory must belong to another user and another group, set an access control list on the directory (setfacl -m user:www-data:x /some-path) — see How to restrict to run commands in specific directory through SUDOERS? for more information. If the www-data user cannot write to /some-path, you'll still need elevated privileges to create the directory. You'll need to do at least two things, perhaps three: create the subdirectory as a user with sufficient privileges; if necessary, change the ownership of the subdirectory; if necessary, change the permissions of the subdirectory. If the subdirectory must belong to the www-data user, you can create it as a group who can write to /some-path. If necessary, set an ACL that allows some-group to write to /some-path: setfacl -m group:some-group:rwx /some-path. Then give www-data the right to execute the mkdir command with sudo. 
Run visudo and add the following rule:

www-data ALL = ( : some-group) /bin/mkdir /some-path/[0-9A-Z_a-z]*, !/bin/mkdir /some-path/[!-0-9A-Z_a-z]*

This allows www-data to run sudo -g some-group mkdir /some-path/foo-bar to create subdirectories in /some-path. If the subdirectory must belong to another user who can write to some-path, run the mkdir command as that user. You might be able to arrange for the directory to have the correct permissions and ownership at creation time. For the sudoers file:

www-data ALL = (some-user : some-group) /bin/mkdir -m 775 /some-path/[0-9A-Z_a-z]*, !/bin/mkdir /some-path/[!-0-9A-Z_a-z]*

Run sudo -u some-user -g some-group mkdir -m 775 /some-path/foo-bar to create a group-writable directory belonging to some-user:some-group under /some-path.
How to allow www-data to create a folder without giving read access to the parent folder
1,407,059,257,000
Possible Duplicate: Getting "Not found" message when running a 32-bit binary on a 64-bit system

ts3user@...:~/ts3$ dir
CHANGELOG  LICENSE  doc  ...  ts3server.pid  ts3server_linux_x86  ts3server_minimal_runscript.sh  ts3server_startscript.sh  tsdns
ts3user@...:~/ts3$ ./ts3server_linux_x86
sh: ./ts3server_linux_x86: No such file or directory

As you can see, the dir command reports the existence of the teamspeak executable. However, when I try to launch it, it states that the file does not exist. What is that? I did chmod 0777 to that directory and chmod 0755 to ts3server_linux_x86.
TeamSpeak has two server packages: "Server amd64" and "Server x86". You are trying to execute the 32-bit version, and I guess your Linux is 64-bit. Two solutions:

download the 64-bit package
install the ia32 libs to be able to run 32-bit binaries: sudo apt-get install ia32-libs
Linux isn't sure whether a file exists or not [duplicate]
1,407,059,257,000
I got some Mercurial repositories which are served by Apache over HTTP. But there is a dedicated user performing some automated tests, which needs to check out the repositories locally. Recently this started to fail, seemingly due to lacking rights for files in the largefiles subdirectory in .hg: -rw------- 2 www-data www-data 6.3M 2012-01-02 17:23 9358b828fb64feb37d3599a8735320687fa8a3b2 Default umask should be 022. And I used the setgid settings for the directories in .hg according to the multiple committers wiki page, which does not cover .hg/largefiles though. However, as far as I understand it, setting the gid for this directory wouldn't solve the problem, that hg sets such restrictive rights on those files. My other user trying to access this repositories via the filesystem is also in the www-data group, thus an additional read right for group would be sufficient to solve my problem. How can I convince Mercurial, or the system to grant this right properly for new files? I am using: Mercurial Distributed SCM (version 2.1)
It turns out that this is a problem in Mercurial and that there isn't an easy work-around for Mercurial 2.1. I've just sent three patches to the Mercurial mailinglist to fix this — hopefully you'll see the fix in Mercurial 2.1.1 in a week. The problem is that the largefiles extension is creating the .hg/largefiles/<hash> files by writing data into a temporary file which is then later renamed to the real name. It creates its temporary files using the standard tempfile module in Python. The module restricts the permissions to 600 since you normally don't want anybody to read your temporary files. The largefiles extension didn't take this into account and just renamed the file. My patches fix this by taking the permissions of .hg/store into account when creating the temporary files. This should bring largefiles into line with the rest of Mercurial.
How to change file permissions for newly generated files in largefiles directory of Mercurial?
1,407,059,257,000
I'm a TA for an introductory Python programming class (all work is done from the terminal), and I'm writing a project submission script. Ideally I want a directory setup like this:

submissions/
    user1/
        project1/
        ...
    user2/
        project1/
        ...
    ...

Ideally, I'd like nothing inside the submissions directory to be readable or writable to any of the users, and I'll provide a submission script that copies all of their work into the right place. Currently I have a little Perl script that does the submission part and works when I use it, but how can I set up the permissions (or the script) so that the students don't get a permission denied error while running the script (requiring write permissions to the submissions directory), but they don't simultaneously have write permissions?
For completeness, here's the simplest solution. I follow it with my view of why a versioning system is not appropriate. I ended up enabling the setuid bit on the submission executable (chmod 4755), so that when students ran it, the program ran as me. All copying of files would then transfer ownership to me, and I could make the entire directory inaccessible to the students. This did involve a few hurdles in getting past Perl's added setuid security (untainting the input, untainting the PATH), but it worked out nicely in the end. The final script is around 20 lines. The reason a versioning system is unacceptable is because this is a first course in programming. The students have never used a terminal, and are confused enough by the idea of ssh-ing into a server and transferring files back and forth. The details of a version control system are for upper-division classes (especially when a student is prone to terminal typos!). And you'd be surprised today how many of those courses do actually require one to learn a version control system (I took four courses that did in my undergraduate). A less significant reason I couldn't do a versioning system is that I couldn't have super-user privileges on the server, and wouldn't ask the professor to set up a versioning system just because I couldn't figure out the stupid setuid bit. All I want for this course is a simple one-step submission to a private place. The problem was with my execution and lack of knowledge about unix permissions (not knowing about the setuid bit), not the structure of my solution. So Sean C.'s comment was the tip, but I didn't recognize it at first because it seemed like he meant the point was to run the script as root, and I didn't want that to protect the world from my own naivete about unix. Here is the actual working script, and its permissions. Please point out any security holes you might see. 
-rwsr-xr-x 1 260s12ta 260s12ta 600 2012-01-16 23:19 submit

#!/usr/bin/perl
use File::Copy;

$ENV{"PATH"} = "/usr/bin"; # appease the perl-suid security for shell calls

my $username = getlogin() or die "Couldn't access user login: $!\n";
my $dir = "/home/260s12ta/labs/$username";

foreach (@ARGV) {
    my $filename;
    if ($_ =~ /([\w-]+\.tar\.gz)/) { # untaint the given filename
        $filename = $1;
    } else {
        die "\nInvalid submission: $_ is not a .tar.gz file.\n";
    }
    print "Submitting $filename... ";
    copy($filename, $dir) or do { print "Submission failed: $!\n"; next; };
    chmod(0600, "$dir/$filename") or do { print "Submission failed: $!\n"; next; };
    print "OK!\n";
}
Permissions for a submission script
1,407,059,257,000
I'm running the latest version of Ubuntu, and have mounted a SMB share via a line in rc.local. The share mounts correctly, and I can browse files freely, create new files, and then delete them without problems. But when I try and rsync a directory onto the mounted share: rsync -a --delete /MySource/ /SharedMountPoint/ I get lots of errors: rsync: failed to set times on "/SharedMountPoint/SomeDir": Operation not permitted (1) and similar errors about being unable to create temp files. All the files and directories on the share are listed with numeric uid/guid - which I suppose is reasonable, as they were originally created via a sync from a windows box. I have no great need for access control - its just a box on a LAN that me and my family use as a dropbox - I'd basically just like anyone to be able to access it (provided they've done basic authentication).
When mounting, use -o uid=youruid. Then, all files on that cifs share will be owned by you so that you can edit/remove them. E.g.:

mount -o uid=1000 //nas/share /SharedMountPoint

You can find your numeric uid in /etc/passwd:

grep "^$(whoami):" /etc/passwd | cut -d: -f3

or simply:

id -u
NAS box mounted via CIFS - problem with permissions
1,407,059,257,000
I have noticed that some files in my home directory have read and write permissions for group and even other. If I am the only person who I want to give access to my machine, is there any reason to enable group or other permissions for files or directories?
It depends on the file or directory. For example, some web server setups allow the machine's users to publish files as http://server.name/~username, with the files typically living in that user's subdirectory. httpd will probably need execute permissions on the directory containing the files and all of the directories above it in the path, due to the way it processes URLs. In other words, if you have ~username/public_html set to 777, but ~username is 700, Apache probably can't serve the files. The broader answer to the question requires you to consider all the daemons running in the system. They typically do not run as either root or your user, so they do not automatically have permissions for any files in your directory unless given them explicitly.
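The traversal requirement can be sketched like this (illustrative paths, not a real Apache setup): giving the home directory mode 711 lets httpd pass through without being able to list it, while public_html stays world-readable.

```shell
home=$(mktemp -d)                  # stand-in for ~username
mkdir "$home/public_html"
echo 'hello' > "$home/public_html/index.html"
chmod 711 "$home"                  # x but not r: traversable, not listable
chmod 755 "$home/public_html"
stat -c '%a %n' "$home" "$home/public_html"
```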
I am the only person with access to my machine. Any reasons to enable group/other permissions on files and directories?
1,407,059,257,000
I need to mount a directory in a way that prevents users from accessing it. However I need to have access to all attributes (including permissions) from root. Is the following method safe or is there a way around it:

mkdir /mnt/protect
chmod 700 /mnt/protect
mkdir /mnt/protect/some_dir
mount /dev/sdXn /mnt/protect/some_dir
Yes, it should be secure, since any non-root user will not be able to read or enter any directory under /mnt/protect - that is, unless you do something stupid like create a hard link to some file under /mnt/protect in a place that is accessible to others. [Edit]: As Maciej has pointed out, it is actually (almost always?) forbidden to create cross-device hardlinks. What you could (but, of course, should never) do is create a bind mount to some place under the "protected" filesystem. That would constitute a security breach.
mounting volume protected against user access
1,407,059,257,000
I have done the following on my Asus WL-520gu:

Installed the dd-wrt v24-sp2 mini (svn:13064)
Updated for USB support
Installed the optware package
Activated the transmission client

but I keep getting a permission error for files. I think it is a user access thing. How do I resolve this issue? Is there any way to ignore user permissions on a drive?

Update: I think it is due to the permissions of the user under which the transmission daemon is running. Can I change that user to root? I know where but don't know how: /etc/init.d/transmission.
Don't change the daemon to run as root. Change the permissions on the folder where your daemon has to write so that it is allowed to do so. Assuming it's running as user transmission, run something like this as root:

chown transmission /mnt/data/torrents/downloads
chmod u+rw /mnt/data/torrents/downloads
"Error: permission denied" error from Transmission Client
1,407,059,257,000
I read in man sudo the following:

--preserve-env
    Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.

But I wonder, even if I don't have that permission, what stops me from running sudo env MY_VAR=$MY_VAR <cmd>?
Interesting question. First of all, setting the environment manually is tedious, so --preserve-env is obviously more convenient. For the security aspect: the sudo configuration can disallow you from executing env with sudo, or better, only allow a specific list of commands which does not include env. I would consider a sudoers configuration which is not restricted to a limited list of commands as full root access. It is not just env. Why not do sudo bash -c 'export MY_VAR="$MY_VAR"; exec <cmd>'? It is similarly problematic to having a restricted shell that you should not be able to escape. If you do not strictly limit what can be run, there is likely an escape route. This part of the sudoers man page supports this in my opinion:

SETENV and NOSETENV
    These tags override the value of the setenv option on a per-command basis. Note that if SETENV has been set for a command, the user may disable the env_reset option from the command line via the -E option. Additionally, environment variables set on the command line are not subject to the restrictions imposed by env_check, env_delete, or env_keep. As such, only trusted users should be allowed to set variables in this manner. If the command matched is ALL, the SETENV tag is implied for that command; this default may be overridden by use of the NOSETENV tag.

So if a user is allowed to use all commands including env, he/she can (by default) also use --preserve-env. If the configuration only allows a specific command, it defaults to NOSETENV, so the user can neither use --preserve-env nor call env to bypass it.

TL;DR: it is unlikely that you lack permission to use --preserve-env but are able to use env.
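The environment-passing mechanics themselves are easy to demonstrate without sudo (with an unrestricted sudo entry, the same thing works through sudo env, which is exactly the bypass discussed above):

```shell
MY_VAR=hello
# env sets the variable in the child's environment; the child sees it:
env MY_VAR="$MY_VAR" sh -c 'echo "$MY_VAR"'   # prints hello
```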
`sudo --preserve-env=MY_VAR` vs `sudo env MY_VAR=$MY_VAR`
1,407,059,257,000
Yesterday, while writing an answer to How to get full path names of all opened pdf files (in zathura) - like rofi does, I noticed something weird about the ownership of files in the /proc/PID/ directory for zathura processes - most of them are owned by root instead of the user (cas) I ran zathura as. For example:

$ cd ~/Manuals
$ zathura X399\ Taichi.pdf &
[1] 4055396
$ ls -lF /proc/4055396/fd
ls: cannot open directory '/proc/4055396/fd': Permission denied
$ ls -lFd /proc/4055396/fd
dr-x------ 2 root root 0 Mar 31 13:04 /proc/4055396/fd/

huh? why is that owned by root? I ran it as cas. Most, but not all of the files/dirs in /proc/4055396 are owned by root:

$ ls -lF /proc/4055396
ls: cannot read symbolic link '/proc/4055396/cwd': Permission denied
ls: cannot read symbolic link '/proc/4055396/root': Permission denied
ls: cannot read symbolic link '/proc/4055396/exe': Permission denied
total 0
-r--r--r-- 1 root root 0 Mar 31 13:04 arch_status
dr-xr-xr-x 2 cas cas 0 Mar 31 13:04 attr/
-rw-r--r-- 1 root root 0 Mar 31 13:04 autogroup
-r-------- 1 root root 0 Mar 31 13:04 auxv
-r--r--r-- 1 root root 0 Mar 31 13:04 cgroup
--w------- 1 root root 0 Mar 31 13:04 clear_refs
-r--r--r-- 1 root root 0 Mar 31 13:02 cmdline
-rw-r--r-- 1 root root 0 Mar 31 13:04 comm
-rw-r--r-- 1 root root 0 Mar 31 13:04 coredump_filter
-r--r--r-- 1 root root 0 Mar 31 13:04 cpu_resctrl_groups
-r--r--r-- 1 root root 0 Mar 31 13:04 cpuset
lrwxrwxrwx 1 root root 0 Mar 31 13:04 cwd
-r-------- 1 root root 0 Mar 31 13:04 environ
lrwxrwxrwx 1 root root 0 Mar 31 13:02 exe
dr-x------ 2 root root 0 Mar 31 13:04 fd/
dr-xr-xr-x 2 cas cas 0 Mar 31 13:04 fdinfo/
-rw-r--r-- 1 root root 0 Mar 31 13:04 gid_map
-r-------- 1 root root 0 Mar 31 13:04 io
-r-------- 1 root root 0 Mar 31 13:04 ksm_merging_pages
-r-------- 1 root root 0 Mar 31 13:04 ksm_stat
-r--r--r-- 1 root root 0 Mar 31 13:04 limits
-rw-r--r-- 1 root root 0 Mar 31 13:04 loginuid
dr-x------ 2 root root 0 Mar 31 13:04 map_files/
-r--r--r-- 1 root root 0 Mar 31 13:04 maps
-rw------- 1 root root 0 Mar 31 13:04 mem
-r--r--r-- 1 root root 0 Mar 31 13:04 mountinfo
-r--r--r-- 1 root root 0 Mar 31 13:04 mounts
-r-------- 1 root root 0 Mar 31 13:04 mountstats
dr-xr-xr-x 57 cas cas 0 Mar 31 13:04 net/
dr-x--x--x 2 root root 0 Mar 31 13:04 ns/
-r--r--r-- 1 root root 0 Mar 31 13:04 numa_maps
-rw-r--r-- 1 root root 0 Mar 31 13:04 oom_adj
-r--r--r-- 1 root root 0 Mar 31 13:04 oom_score
-rw-r--r-- 1 root root 0 Mar 31 13:04 oom_score_adj
-r-------- 1 root root 0 Mar 31 13:04 pagemap
-r-------- 1 root root 0 Mar 31 13:04 patch_state
-r-------- 1 root root 0 Mar 31 13:04 personality
-rw-r--r-- 1 root root 0 Mar 31 13:04 projid_map
lrwxrwxrwx 1 root root 0 Mar 31 13:04 root
-rw-r--r-- 1 root root 0 Mar 31 13:04 sched
-r--r--r-- 1 root root 0 Mar 31 13:04 schedstat
-r--r--r-- 1 root root 0 Mar 31 13:04 sessionid
-rw-r--r-- 1 root root 0 Mar 31 13:04 setgroups
-r--r--r-- 1 root root 0 Mar 31 13:04 smaps
-r--r--r-- 1 root root 0 Mar 31 13:04 smaps_rollup
-r-------- 1 root root 0 Mar 31 13:04 stack
-r--r--r-- 1 root root 0 Mar 31 13:02 stat
-r--r--r-- 1 root root 0 Mar 31 13:04 statm
-r--r--r-- 1 root root 0 Mar 31 13:04 status
-r-------- 1 root root 0 Mar 31 13:04 syscall
dr-xr-xr-x 6 cas cas 0 Mar 31 13:04 task/
-rw-r--r-- 1 root root 0 Mar 31 13:04 timens_offsets
-r--r--r-- 1 root root 0 Mar 31 13:04 timers
-rw-rw-rw- 1 root root 0 Mar 31 13:04 timerslack_ns
-rw-r--r-- 1 root root 0 Mar 31 13:04 uid_map
-r--r--r-- 1 root root 0 Mar 31 13:04 wchan

zathura is NOT setuid root:

$ type -p zathura
/usr/bin/zathura
$ ls -l /usr/bin/zathura
-rwxr-xr-x 1 root root 305456 Nov 28 03:34 /usr/bin/zathura

It is version 0.5.2, and the package was last upgraded on November 28 last year:

$ zathura --version
zathura 0.5.2
girara 0.3.7 (runtime: 0.4.0)
(plugin) cb (0.1.10) (/usr/lib/x86_64-linux-gnu/zathura/libcb.so)
(plugin) pdf-poppler (0.3.1) (/usr/lib/x86_64-linux-gnu/zathura/libpdf-poppler.so)
(plugin) ps (0.2.7) (/usr/lib/x86_64-linux-gnu/zathura/libps.so)
(plugin) djvu (0.2.9) (/usr/lib/x86_64-linux-gnu/zathura/libdjvu.so)
$ dpkg -l zathura
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=============================================
ii  zathura        0.5.2-1      amd64        document viewer with a minimalistic interface
$ ls -l /var/cache/apt/archives/zathura_0.5.2-1_amd64.deb
-rw-r--r-- 1 root root 175712 Nov 28 04:41 /var/cache/apt/archives/zathura_0.5.2-1_amd64.deb

If I use qpdfview, atril, or okular instead of zathura, the permissions are fine:

$ qpdfview X399\ Taichi.pdf
$ pgrep qpdfview
4071588 qpdfview X399 Taichi.pdf
$ ls -lFd /proc/4071588/fd
dr-x------ 2 cas cas 0 Mar 31 13:16 /proc/4071588/fd/
$ atril X399\ Taichi.pdf &
[1] 4080297
$ ls -lFd /proc/4080297/fd
dr-x------ 2 cas cas 0 Mar 31 13:20 /proc/4080297/fd/
$ okular X399\ Taichi.pdf &
[1] 4081710
$ ls -lFd /proc/4081710/fd
dr-x------ 2 cas cas 0 Mar 31 13:21 /proc/4081710/fd/

All of the above were run from the same instance of bash, same environment, same everything. Not in a VM or container, or anything "unusual". So, what is up with zathura? Is it zathura? Or is it some weird namespaces-related behaviour by systemd or cgroups or something like that?

The system is running Debian sid (updated yesterday), with kernel Linux hex 6.1.0-6-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.15-1 (2023-03-05) x86_64 GNU/Linux. The last reboot was 14 days ago when I upgraded the kernel; it has been running fine since then.

NOTE: I vaguely recall that when I was working on my answer yesterday, I saw the same root ownership thing for ONE of the instances of running atril, but I can't replicate that now. I may have mis-remembered this, it was very late at night (> 3am) and I was tired.
zathura, however, consistently has root-owned stuff in /proc/PID/ - every time I run it.
Zathura applies a number of settings to protect itself and the user from potentially malicious content; among other things, it disables dumping using prctl(PR_SET_DUMPABLE, SUID_DUMP_DISABLE), which changes the ownership of files inside /proc/PID: The files inside each /proc/[pid] directory are normally owned by the effective user and effective group ID of the process. However, as a security measure, the ownership is made root:root if the process's "dumpable" attribute is set to a value other than 1. Despite the constant’s name, this applies to non-suid binaries too. There’s an exception to this: directories which are world-readable and world-executable (currently the pid directory itself, along with attr, fdinfo, net and task) keep the process’ effective uid and gid. This is done so that callers can still stat the /proc/PID directory.
zathura and ownership of files in /proc/PID
1,407,059,257,000
I am learning about UNIX file permissions and I saw that on my Ubuntu system, /var/mail has the setgid bit set. Why is this? $ ls /var/mail/ -dl drwxrwsr-x 2 root mail 4096 Feb 23 05:57 /var/mail/ This book I'm reading says: Setgid is useful when you have groups of users who need to share a lot of files. Having them work from a common directory with the setgid attribute means that correct group ownership will be automatically set for new files, even if the people in the group don’t share the same primary group. That description doesn't sound like anything useful for /var/mail since users don't directly manipulate that directory. The files created in /var/mail end up with the group owner "mail", but doesn't this already happen? Only "mail" can create new files in the directory (and root). The only useful case I can think of is when a sysadmin adds a new mail account with sudo touch /var/mail/<user>. That file would still have the "mail" group owner.
The local email is not handled by just one service, but several services. Besides the actual Mail Transfer Agent (MTA for short: typically postfix, exim, sendmail or similar), there can be mail filtering/post-processing utilities (like the old procmail), services to enable remote access to user mailboxes (various POP and IMAP services), mailing list management utilities, and many others. Historically, such services used to be run as root, because they needed to be able to access every user's mailbox, and the mailboxes had to be accessible only by their owners. But it soon turned out that having the email system running as root was a big chunk of code that was ripe for exploits. Many, many vulnerabilities were found and fixed, but eventually it was recognized that running the mail system as root was a bad idea. The solution for that was to create the group mail, and make all the components of the email services that need to deliver mail to users' mailboxes setgid to that group. But by that time, the amount of mail-related tools was already so large that it was impossible to guarantee a perfect change-over. So, as an insurance, the parent directory of user inboxes, /var/mail/ was also made setgid mail, to ensure that all software that delivers mail to users' inboxes will automatically create any new inboxes with the correct group. The remaining task was to patch or configure all mail delivery programs to use the correct umask for users' inboxes: when the mail system was running as root, they could have used umask 077 (for permissions -rw-------), but with the group mail in effect, umask 007 (for permissions -rw-rw----) was needed. But this was an adjustment of a pre-existing requirement, rather than adding a new responsibility to enforce the correct group, so it was a simpler change. (Of course, adding the enforcement of the correct group was definitely a good idea - but making /var/mail setgid mail made that code change optional and less urgent.) 
And yes, most programs that deliver mail to users' inboxes in /var/mail/ will automatically create a mailbox file for a user if the file does not exist - so having a user with no mailbox file until the user receives their first incoming email is perfectly valid. To summarize: having /var/mail/ setgid mail is one-part insurance policy against misconfigured mail delivery programs, and one-part a historical remnant from the transition away from the dark ages when the email services ran fully as root all the time.
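The setgid-directory mechanism described above is easy to observe with a scratch directory. The path here is a throwaway mktemp directory and the group is simply the current user's primary group, so this is just an illustrative sketch, not a real mail setup:

```shell
# Create a scratch directory and give it mode 2770, i.e. rwxrwx---
# plus the setgid bit -- the same idea as a setgid /var/mail.
dir=$(mktemp -d)
chmod 2770 "$dir"

# A file created inside inherits the directory's group (not, in
# general, the creating process's primary group).
touch "$dir/inbox"

dirmode=$(stat -c '%A' "$dir")
dirgroup=$(stat -c '%G' "$dir")
filegroup=$(stat -c '%G' "$dir/inbox")
echo "$dirmode dir-group=$dirgroup file-group=$filegroup"

rm -r "$dir"
```

The `s` in the group execute position of the directory's mode is the setgid bit; any mail delivery program creating an inbox inside gets the directory's group for free.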
Why is /var/mail setgid?
1,407,059,257,000
I have setup a directory and some files with setfacl. jobq@workstation:~/Pool$ getfacl /etc/jobq getfacl: Removing leading '/' from absolute path names # file: etc/jobq # owner: root # group: jobq user::rwx user:jobq:rw- group::r-x group:jobq:rwx mask::rwx other::r-x jobq@workstation:~/Pool$ sudo getfacl /etc/jobq/log.txt getfacl: Removing leading '/' from absolute path names # file: etc/jobq/log.txt # owner: root # group: jobq user::rw- group::rw- group:jobq:rwx mask::rwx other::r-- jobq@workstation:~/Pool$ groups jobq However, when I run a command, like ls -al /etc/jobq I'm getting permission errors: ls: cannot access '/etc/jobq/log.txt': Permission denied total 0 d????????? ? ? ? ? ? . d????????? ? ? ? ? ? .. Since user jobq is in the group jobq, they should have access to the directory. What am I misunderstanding? How can I fix this?
The problem comes from this ACL on /etc/jobq: user:jobq:rw- This means that user jobq can’t “search” the directory, which is what stops ls from showing its contents. To fix this, you need to add the x permission. See Execute vs Read bit. How do directory permissions in Linux work? for details. See also Restrictive "group" permissions but open "world" permissions? to understand why the group permissions don’t help here. Thus another solution would be to drop the user ACL for jobq, and rely on the group permissions instead.
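The difference between reading a directory's names and "searching" (traversing) it can be reproduced with plain mode bits, which behave like the rw- ACL entry here. A scratch directory stands in for /etc/jobq; note this only demonstrates the denial as a non-root user, since root bypasses these checks:

```shell
dir=$(mktemp -d)
touch "$dir/log.txt"

# rw- on a directory, like the user:jobq:rw- ACL entry: the names are
# readable, but entries cannot be stat()ed without the x (search) bit.
chmod u=rw "$dir"
mode=$(stat -c '%A' "$dir")
ls "$dir" >/dev/null 2>&1 || true         # listing names may still work
stat "$dir/log.txt" >/dev/null 2>&1 \
  && echo "stat: ok (running as root?)" \
  || echo "stat: Permission denied"

chmod u=rwx "$dir"                        # restore x so cleanup works
rm -r "$dir"
echo "$mode"
```

This is exactly why ls showed the question marks: it could read the entry names but could not stat them.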
ls throws errors when trying to access directory guarded with ACL
1,407,059,257,000
I've learned that besides the standard *nix file permissions, macOS also has file flags, and that they originated with BSD Unix. macOS's set of such flags is: UF_NODUMP Do not dump the file. UF_IMMUTABLE The file may not be changed. UF_APPEND The file may only be appended to. UF_OPAQUE The directory is opaque when viewed through a union stack. UF_HIDDEN The file or directory is not intended to be displayed to the user. SF_ARCHIVED The file has been archived. SF_IMMUTABLE The file may not be changed. SF_APPEND The file may only be appended to. You can see these extra flags with an extra switch to ls, though the switch varies: ls -lo - BSD and perhaps older versions of macOS ls -lO - current versions of macOS You can change the flags with the chflags command: FreeBSD man page There are corresponding system calls chflags, lchflags, fchflags to change these flags: macOS man page But I can't seem to find a system call to read the flags. Surely ls calls some function to get them? The syscalls that can change them don't seem to be able to also return their current state. What am I missing? (If this belongs on StackOverflow then please feel free to move it there.)
The flags can be read using stat(2) on macOS and the BSDs; they appear in the st_flags field of struct stat.
API/syscall to read or list BSD/macOS file flags
1,407,059,257,000
I have an Ubuntu 18.04 machine with qbittorrent-nox and Jellyfin as a media server. The quick start guide I followed for qBittorrent recommended having it under a separate user (qUser). Jellyfin runs under another user (mainUser). The torrent Downloads folder must be owned by qUser or else it can't seed or download. When completed, the torrent is owned by qUser and has incorrect permissions. Jellyfin needs the files to be under a directory owned by mainUser and have the permissions set to 755. What I have had to do is download the file to a separate qBittorrent owned Downloads dir, use chown to change ownership to mainUser, run chmod to change the permissions to 755 and finally move it to a library directory for Jellyfin. While this works, it is not efficient. What could I do to make this process streamlined to where I could simply have qBitorrent download to a Jellyfin library directory? Edit: Once the torrent is completed, it won't have the correct permissions to be read by Jellyfin. To fix this, I added a small command to execute on torrent completion: chmod -R 775 "%F/"
Create a group media with both qUser and mainUser being members of this group: addgroup media adduser qUser media adduser mainUser media Set the group of your torrent files to media and both processes should be able to read files downloaded by qBittorrent: chgrp -R media path/to/torrents
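To avoid having to chgrp after every download, the shared directory can also carry the setgid bit so new files land in the right group automatically. This is a sketch using a scratch directory and the current user's primary group (a real setup would apply the same chmod to the download directory owned by the media group):

```shell
shared=$(mktemp -d)        # stands in for the qBittorrent download dir
chmod 2775 "$shared"       # rwxrwsr-x: group-writable plus setgid

touch "$shared/episode.mkv"   # as qBittorrent would when downloading

mode=$(stat -c '%A' "$shared")
dirgroup=$(stat -c '%G' "$shared")
filegroup=$(stat -c '%G' "$shared/episode.mkv")
echo "$mode"

rm -r "$shared"
```

With the setgid bit in place, every new torrent file is created with the directory's group, so Jellyfin (whose user is in the same group) can read it immediately.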
Permissions with qBittorrent and Jellyfin
1,407,059,257,000
I am using rsnapshot to make daily backups of a MYSQL database on a server. Everything works perfectly except the ownership of the directory is root:root. I would like it to be root:backups to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has sudo permissions but I don't want to have to type in the password every time I make a local copy of the backups. This user is part of the backups group.) In /etc/rsnapshot.conf I have this line: backup_script /usr/local/bin/backup_mysql.sh mysql/ And in the file /usr/local/bin/backup_mysql.sh I have: umask 0077 # backup the database date=`date +"%y%m%d-%h%m%s"` destination=$date'-data.sql.gz' /usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction --quick --lock-tables=false --routines data | gzip -c > $destination /bin/chmod 660 $destination /bin/chown root:backups $destination The file structure that results is: /backups/ ├── [drwxrwx---] daily.0 │   └── [drwxrwx---] mysql [error opening dir] ├── [drwxrwx---] daily.1 │   └── [drwxrwx---] mysql [error opening dir] The ownership of the backup data file itself is correct, as root:backups, but I cannot access that file because the folder it is in, mysql, belongs to root:root.
In the default /etc/rsnapshot configuration file is the following: # Specify the path to a script (and any optional arguments) to run right # after rsnapshot syncs files # cmd_postexec /path/to/postexec/script You can use cmd_postexec to run a chgrp command on the resulting files which need their group ownership changing.
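A post-exec script can then apply the same chgrp/chmod the backup script already uses, but to the whole snapshot tree. The snippet below rehearses those commands on a scratch tree with the current user's group so it runs anywhere; the real script would use the backups group and the /backups path from the question:

```shell
umask 022
root=$(mktemp -d)                       # stands in for /backups
mkdir -p "$root/daily.0/mysql"
touch "$root/daily.0/mysql/data.sql.gz"

grp=$(id -gn)                           # stands in for "backups"
chgrp -R "$grp" "$root/daily.0"
# Capital X: add execute only for directories (and files that are
# already executable), so data files stay non-executable.
chmod -R g+rX "$root/daily.0"

dirmode=$(stat -c '%A' "$root/daily.0/mysql")
filemode=$(stat -c '%A' "$root/daily.0/mysql/data.sql.gz")
echo "$dirmode $filemode"

rm -r "$root"
```

The g+rX recursion is what fixes the "error opening dir" problem: the mysql directories become group-searchable while the dump files remain plain data files.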
Rsnapshot: folder ownership permissions to 'backups' group instead of root
1,407,059,257,000
How can you run a command (e.g. iftop or similar) that requires root privileges from a non-root user and without using SUDO in front? Alternatively, how can you give root privileges to a user without becoming root? Ideally, I want to run the iftop command in the following way: [user@pc]$ iftop And not like: [user@pc]$ sudo iftop [root@pc]$ iftop
How can you run a command (e.g. iftop or similar) that requires root privileges from a non-root user and without using SUDO in front? There are at least 2 methods you can use to allow non-root users use iftop but both of them require root access. The safer method is to assign cap_net_raw capability to iftop binary: sudo setcap cap_net_raw+ep "$(command -v iftop)" The less safe method is to assign setuid root: sudo chmod +s "$(command -v iftop)" Alternatively, how can you give root privileges to a user without becoming root? You can't.
Running a command with root privileges without SUDO and not as root user
1,407,059,257,000
I was trying to learn how setuid works. So I made a dummy program which just prints the current user: #include<bits/stdc++.h> using namespace std; int main(){ cout << system("id -a") << "\n"; cout << system("whoami") << "\n"; } I compiled and created the executable my-binary under the user anmol: -rwxrwxr-x 1 anmol anmol 9972 Feb 1 16:54 my-binary Then, I set the setuid option using chmod +s: -rwsrwsr-x 1 anmol anmol 9972 Feb 1 16:54 my-binary If I execute it normally, I get the following output: uid=1000(anmol) gid=1000(anmol) groups=1000(anmol),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),122(sambashare) anmol Now, if I change to another user using su user2, and then execute it, I get: uid=1001(user2) gid=1001(user2) groups=1001(user2) user2 And when I execute it using sudo ./my-binary, I get: uid=1001(root) gid=1001(root) groups=1001(root) root As far as I understand, no matter how I run it, shouldn't I get the 1st output every time? I checked other similar questions over here and some suggested I check whether the filesystem is mounted with the nosuid option, so I executed mount | grep /dev/sda1 and got the output: /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered) which means that this option is not enabled. Any hints on why I am not getting the expected output?
The system(3) library function runs its command argument by passing it to /bin/sh -c, and the /bin/sh on Linux (either bash, dash or mksh) gives up any setuid or setgid privileges unless called with the -p option. The bash manpage says: If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied [...] the effective user id is set to the real user id. With dash (the /bin/sh from Debian/Ubuntu), this is kind of new: it wasn't yet the case in Debian 9 Stretch (2017), and it's only a Debian-specific change, still not in the upstream sources as of 2020-02-04. bash has had this since its 2.X versions (first included in RedHat 7.X, 2000). This is still not the case with other shells (ksh93, zsh, etc) or with the /bin/sh from other systems (OpenBSD, FreeBSD, Solaris; but not NetBSD where it was changed to work like bash). If they have it, their privileged mode works differently than in bash: it's turned on by default when the shell is run in setuid mode, and you have to turn it off with set +p in order to have the shell drop the setuid privileges. If you want to use system(), popen(), or run a shell or an executable shell script directly from your setuid binary, then you should give up any "split personality" and completely switch over to your real or effective credentials via setres[ug]id: % cat a.cc #include <unistd.h> #include <stdlib.h> int main(){ uid_t euid = geteuid(); setresuid(euid, euid, euid); system("id"); } % c++ a.cc % chmod u+s a.out % ./a.out uid=1002(fabe) % su -c ./a.out uid=1002(fabe) If you want just to check that your binary really switched its effective credentials, do it directly, not by invoking an external program via system(): #include <iostream> #include <unistd.h> #include <pwd.h> using namespace std; int main(){ uid_t euid = geteuid(); struct passwd *pw = getpwuid(euid); cout << "euid=" << euid; if(pw) cout << ", " << pw->pw_name; cout << endl; }
setuid not working [duplicate]
1,407,059,257,000
When I use mkdir -pm 764 a/b/c, only c gets the 764 permissions, while a and b get the default permissions. Why is that? Why don't all the directories get 764 permissions?
The mkdir utility creates a single directory. When used with -m it creates the directory and effectively runs chmod on it with the given permissions (although this does not happen in two steps, which could be important under some circumstances). With -p, any intermediate directories that do not already exist are created. The mode given to -m still only applies to the last name in the pathname, since that is the directory that you're wanting to create (the intermediate directories are created to allow the creation of that directory with the given mode). The POSIX standard for mkdir says that each intermediate directory should be created with the mode (S_IWUSR|S_IXUSR|~filemask)&0777 where filemask is your shell's umask value. In the "Application Usage" section, it says [...] For intermediate pathname components created by mkdir, the mode is the default modified by u+wx so that the subdirectories can always be created regardless of the file mode creation mask; if different ultimate permissions are desired for the intermediate directories, they can be changed afterwards with chmod. This means that the mode for the intermediate directories is set to allow you to create a directory that potentially has no user write or execute permissions. If the intermediate directories were also created without execute and/or write permissions, the last components of the directory path could not be created. In your specific case, use mkdir -p -m 764 a/b/c chmod 764 a/b chmod 764 a If you know for sure that none of the directories previously existed, use mkdir -p -m 764 a/b/c chmod -R 764 a
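The behaviour is easy to verify in a scratch directory; the modes shown assume a 022 umask and GNU coreutils:

```shell
cd "$(mktemp -d)"
umask 022

mkdir -p -m 700 a/b/c

# Intermediates get the default mode modified by u+wx (755 here);
# only the final component receives the -m mode.
amode=$(stat -c '%A' a)
bmode=$(stat -c '%A' a/b)
cmode=$(stat -c '%A' a/b/c)
echo "a=$amode a/b=$bmode a/b/c=$cmode"

# Fix up the intermediates afterwards, as the answer suggests:
chmod 700 a a/b
afixed=$(stat -c '%A' a)
echo "a=$afixed"
```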
Regarding permissions on intermediate folders created using "mkdir -pm 764 a/b/c"
1,407,059,257,000
I'm currently trying to enable the Trash feature in an NTFS partition mounted automatically on boot. To do that I'm using the permissions option in my fstab: UUID=1CACB8ABACB88136 /media/FILES ntfs defaults,permissions,relatime 0 0 then I changed the permissions: sudo chown :users -R /media/FILES/ sudo chmod g+rwx -R /media/FILES/ It works great, except that I still don't have the Trash feature. I can read, write, and execute as a member of the users group, but I cannot use the Trash feature in Nautilus, only permanent delete. Any thoughts? BR
Hey guys, I've found the solution: removing my old .Trash folder that was there but wasn't working. sudo rm -rf /media/FILES/.Trash-1000 worked like a charm, and I'm now able to move files to the Trash from Nautilus. And I'm pretty sure that if I create a new user, they will get their own trash too.
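For reference, the directory name follows the freedesktop.org Trash specification: on a volume outside the home directory, each user's trash directory is keyed to their numeric uid, which is why a stale one has to be removed and why every user gets a fresh one of their own:

```shell
# The per-user trash directory on a mounted volume is named after the
# numeric user id (1000 in the answer above):
uid=$(id -u)
trashdir=".Trash-$uid"
echo "$trashdir"
```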
How can I enable Trash feature in a NTFS partition with permissions?
1,407,059,257,000
For some reason, the permissions of the folder /home/folder1 sometimes change. How can I find out who is changing the permissions? Or, better, how can I prevent the permissions of this folder from being changed? Linux distribution: CentOS Linux release 7.2.1511 (Core)
Use the audit package to accomplish this task. Ensure the auditd service is running, and set it to start on boot: chkconfig auditd on Set a watch on the required file to be monitored by using the auditctl command: auditctl -w /home/folder1 -p war -k monitor-folder1 That is: auditctl: the command used to add entries to the audit database. -w: insert a watch for the file system object at the given path, e.g. /etc/shadow. -p: set the permissions filter for a file system watch. r=read, w=write, x=execute, a=attribute change. -k: set a filter key on an audit rule. The filter key is an arbitrary string of text that can be up to 31 bytes long. It can uniquely identify the audit records produced by a rule. For a permanent watch, you must add your rule to /etc/audit/audit.rules on RHEL5, RHEL6, RHEL7 or CentOS 7 (or /etc/audit.rules on RHEL4) in order for it to persist after reboot. For more detail follow the link https://access.redhat.com/solutions/10107
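A persistent version of the same watch can also be written as a rules file. On CentOS 7 the conventional place is a file under /etc/audit/rules.d/ (the filename below is arbitrary), using the auditctl syntax without the command name:

```
# /etc/audit/rules.d/folder1.rules
-w /home/folder1 -p war -k monitor-folder1
```

After adding the file, load the rules with augenrules --load (or restart auditd), and query matching events later with ausearch -k monitor-folder1.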
Linux permissions are changing automatically
1,407,059,257,000
What would be the most efficient way for multiple users (around 15) to be able to sudo to another user's account to run privileged commands? To make it clear: I have a main user called mainaccount that has sudo/root access, and I also have 15 other users that need to be able to switch to or run commands as mainaccount (su - mainaccount) for managing a test environment. How can I do this? Edit: I am asking how this is done, so that if user user1 wants to run a command as mainaccount (su - mainaccount), they can do so without entering mainaccount's password, using their own password instead. I guess it's kind of like the wheel group, where you can add multiple users, but this one is just for switching to or running commands as mainaccount.
That (and making back-ups) is pretty much the traditional use of the operator user and group... Set up a group - eg. mainusers - and add the users allowed to "become" mainaccount. In /etc/sudoers add: %mainusers ALL = (root) su - mainaccount This will let members of mainusers become mainaccount by using su - mainaccount. By doing so as root, they don't need to give a password for the su command. Alternatively, %mainusers ALL = (mainaccount) ALL lets members of mainusers run any command as mainaccount. Letting the mainaccount user be a member of the sudo group (i.e. allowed to sudo to root and run commands as root) will let any user who first becomes mainaccount then use sudo to become root. That said, this sounds like a bad idea! It may be better to let mainaccount - and the users belonging to mainusers who can become them - only be allowed to run a limited number of privileged commands (perhaps only the commands in a dedicated directory), maybe as root. sudo can be used to set this up too. You may look at man sudoers -- and at the example sudoers file in /usr/share/doc/sudo/examples/ -- for more inspiration. Look especially at how they use aliases and the operator user/group in the example file. There, "operators" may do daily maintenance work -- like shutting down the computer, killing processes, starting/stopping/adding printers, mounting CDROMs, and such things -- but far from everything root (and members of the sudo group) can do. This is a more appropriate setup for allowing "trusted users" to do some day-to-day admin work. If you're running several computers, it may also be a good idea to limit their privileges to only one or two computers (eg. groups of users have special rights on "their" computer, but not on the other computers). So if I were you, I would think twice and perhaps rethink this - especially the number of users you intend to "promote".
If you have to do this, I would suggest the operator solution: put them in a group, and use sudo to give them a limited set of privileged commands they can run (as root) to fix day-to-day problems. But don't let them all be able to ascend to full root status! If you really need someone with full root privileges, then pick a couple among the dozen that you really trust and know are knowledgeable, and add them to the sudo group as full co-administrators... that would be a lot cleaner and easier to control than what you proposed.
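To make the restrictive variant concrete, a drop-in sudoers file (always edited with visudo -f so a syntax error can't lock you out) could look like the following; the directory path and alias name here are purely illustrative:

```
# /etc/sudoers.d/mainusers
# A trailing slash means "any executable directly in this directory".
Cmnd_Alias MAINTASKS = /usr/local/tasks/
%mainusers ALL = (mainaccount) MAINTASKS
```

Members of mainusers could then run, say, a hypothetical sudo -u mainaccount /usr/local/tasks/restart-testenv, authenticating with their own password, without gaining full access to the mainaccount account.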
Sudo access to another user's account