I have an unusual requirement where I need to mount the same filesystem on a client multiple times, with each mount offering a different view of the underlying data, based on the group permissions of the underlying directories and files.

I have achieved this in the past with NFS and the all_squash and anongid options in /etc/exports, making a specific mount appear as though the user had a specific group ID. It effectively filtered access to the underlying filesystem by forcing the accessing user's group. Unfortunately I can't use that in this scenario, as the filesystem will be Amazon EFS (effectively an NFS server, but without any configuration options).

I have looked at bindfs, and it provides a force-group option, but this is the reverse of what I want: it forces all files to have a specific group, rather than forcing the client to have a specific group while looking at the files unchanged. I did see a mention of something called filterfs, but it appears to be long dead.

Does anybody know a way to get a filtered view of a file system for a single user, by effectively changing the user's group on an ad-hoc basis (without using sudo, since the user is a webserver daemon)?
Thanks to @sourcejedi for pointing me in the right direction.

In the original NFS setup, all_squash was used to make a daemon user appear to have a specific group (set by anongid). For this example, assume the group ID is 601. This view onto the original filesystem could therefore enforce permissions on files and directories based on the mounted filesystem's anongid being 601: permissions are effectively evaluated at the level of the NFS mount, independent of the daemon user's actual group permissions. Another NFS mount onto the same filesystem with different all_squash settings effectively shows a different view of the files, as if the user had different group membership.

Using bindfs --map, the same result can be achieved with a slightly different setup. A sample configuration binds a filesystem such that any files or directories with group ID 601 in the underlying filesystem appear to have group ID 599 in the mounted filesystem:

bindfs --map=@601/@599 --create-for-group=601 --create-for-user=600 --create-with-perms='u=rwD:g=rwD:o=' $FS_ROOT $MOUNT_ROOT/view601

Now, when listing files in $MOUNT_ROOT/view601, the daemon user sees any file that has group 601 as instead having group 599. By giving the daemon membership of group 599, the permissions are effectively enforced again based on the mount. If a different mount mapped gid 602 to 599, then files in the same underlying filesystem that originally had group 602 (rather than 601) would now be mapped to 599, making them available to the daemon.
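To make the two-view idea above persistent across reboots, the mounts could also be expressed in /etc/fstab using bindfs's mount.fuse helper. This is a hypothetical sketch: the source path, mount points, and GIDs are illustrative, and it assumes bindfs is installed so that the fuse.bindfs filesystem type is available:

```
# /etc/fstab -- two bindfs views of the same tree (illustrative paths/GIDs)
/srv/data  /mnt/view601  fuse.bindfs  map=@601/@599,create-for-group=601,create-for-user=600,create-with-perms=u=rwD:g=rwD:o=  0  0
/srv/data  /mnt/view602  fuse.bindfs  map=@602/@599,create-for-group=602,create-for-user=600,create-with-perms=u=rwD:g=rwD:o=  0  0
```

Each mount maps a different underlying group onto gid 599, so the same daemon user (a member of group 599) sees a different filtered view under each mount point.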
Equivalent to NFS all_squash
1,560,005,118,000
I want to remove the .html file from the /home/user1/html/ directory. I have tried nearly all of the solutions posted on a myriad of other web sites. Nothing is working.

user1@comp1:~/html$ sudo rm -f .html
rm: cannot remove '.html': Permission denied

Properties of directory:

user1@comp1:~$ ls -al
total 0
drwxrwxrwx 1 user1 user1 4096 Aug 21 14:48 html

Properties of file:

user1@comp1:~/html$ ls -al
total 3912
-rwxrwxrwx 0 user1 user1 1365246 Aug 20 17:20 .html

Things I have tried on the directory (all ran successfully):

sudo chown $USER:$USER ./html
sudo chmod 777 ./html
sudo chmod -R 777 ./html

Things I have tried on the file (all ran successfully):

sudo chown $USER:$USER .html
sudo chmod 777 .html
sudo chmod 777 .

I tried looking at the file's attributes (did not run successfully):

user1@comp1:~/html$ lsattr .html
lsattr: Inappropriate ioctl for device While reading flags on .html

strace with sudo:

user1@comp1:~/html$ strace sudo rm -f .html
execve("/usr/bin/sudo", ["sudo", "rm", "-f", ".html"], [/* 17 vars */]) = -1 EPERM (Operation not permitted)
write(2, "strace: exec: Operation not perm"..., 38strace: exec: Operation not permitted
) = 38
exit_group(1) = ?
+++ exited with 1 +++

strace without sudo:

user1@comp1:~/html$ strace rm -f .html
execve("/bin/rm", ["rm", "-f", ".html"], [/* 17 vars */]) = 0
brk(NULL) = 0x805000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=39157, ...}) = 0
mmap(NULL, 39157, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcfcb47e000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\t\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1868984, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb470000
mmap(NULL, 3971488, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fcfcae30000
mprotect(0x7fcfcaff0000, 2097152, PROT_NONE) = 0
mmap(0x7fcfcb1f0000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1c0000) = 0x7fcfcb1f0000
mmap(0x7fcfcb1f6000, 14752, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb1f6000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb460000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcfcb450000
arch_prctl(ARCH_SET_FS, 0x7fcfcb460700) = 0
mprotect(0x7fcfcb1f0000, 16384, PROT_READ) = 0
mprotect(0x60d000, 4096, PROT_READ) = 0
mprotect(0x7fcfcb425000, 4096, PROT_READ) = 0
munmap(0x7fcfcb47e000, 39157) = 0
brk(NULL) = 0x805000
brk(0x826000) = 0x826000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1668976, ...}) = 0
mmap(NULL, 1668976, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcfcb28d000
close(3) = 0
ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0
newfstatat(AT_FDCWD, ".html", {st_mode=S_IFREG|0777, st_size=1365246, ...}, AT_SYMLINK_NOFOLLOW) = 0
unlinkat(AT_FDCWD, ".html", 0) = -1 EACCES (Permission denied)
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0
read(3, "# Locale name alias data base.\n#"..., 4096) = 2995
read(3, "", 4096) = 0
close(3) = 0
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "rm: ", 4rm: ) = 4
write(2, "cannot remove '.html'", 21cannot remove '.html') = 21
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, ": Permission denied", 19: Permission denied) = 19
write(2, "\n", 1
) = 1
lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
close(0) = 0
close(1) = 0
close(2) = 0
exit_group(1) = ?
+++ exited with 1 +++
You must run fsck on the partition where the file is. To do that, boot into single-user mode and run something like:

fsck.ext4 /dev/yourpartdevice

(change ext4 to match the filesystem type, and replace yourpartdevice with the affected partition). But...

"lsattr: Inappropriate ioctl for device While reading flags on .html"

looks like it could be a hardware problem, and fsck may not be capable of repairing the file.

UPDATE for other users reading this answer: bad RAM can cause all sorts of strange behaviour, so it is a good idea to check your RAM before running fsck, because faulty RAM can make fsck behave very destructively. Good luck!
Yet another `rm: cannot remove 'file': Permission denied`
1,560,005,118,000
https://stackoverflow.com/questions/12996397/command-not-found-when-using-sudo is exactly what I am looking for, but none of the answers worked for me. I am using Arch Linux. I am trying to run a command in the present working directory:

Workspace$ sudo ./SomeBinary -some_args
sudo: ./<SomeBinary>: command not found
Workspace$ sudo pwd
/home/SomeUser/Workspace

My /etc/sudoers file:

Defaults env_keep += "LANG LANGUAGE LINGUAS LC_* _XKB_CHARSET"
Defaults env_keep += "HOME"
Defaults env_keep += "XAPPLRESDIR XFILESEARCHPATH XUSERFILESEARCHPATH"
Defaults env_keep += "QTDIR KDEDIR"
Defaults env_keep += "XDG_SESSION_COOKIE"
Defaults env_keep += "XMODIFIERS GTK_IM_MODULE QT_IM_MODULE QT_IM_SWITCHER"
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

Defaults mail_badpass
Defaults log_output
Defaults!/usr/bin/sudoreplay !log_output
Defaults!/usr/local/bin/sudoreplay !log_output
Defaults!REBOOT !log_output

root ALL=(ALL) ALL

%wheel ALL=(ALL) ALL

%sudo ALL=(ALL:ALL) ALL

#includedir /etc/sudoers.d

I have mounted the working directory onto a remote directory using sshfs:

sshfs RemoteUbuntu.local:/media/ExtHDD/Workspace ~/Workspace

The remote filesystem is BTRFS-formatted.
There's something else at play here, I suspect with ./SomeBinary. I ran these two tests on a CentOS 7 box using sudo and they both worked without issue.

$ cat sudy.bash
#!/bin/bash
whoami
echo "hi"
pwd

Which results in this output:

$ sudo ./sudy.bash
root
hi
/home/vagrant

And if I copied the whoami executable to my /home/vagrant directory and ran it:

$ which whoami
/usr/bin/whoami
$ cp /usr/bin/whoami .
$ ll whoami
-rwxr-xr-x 1 vagrant vagrant 28984 Aug 5 00:23 whoami

And when I run it via sudo:

$ sudo ./whoami
root

Command not found

Curiously, the only way I could induce that message with sudo is when the execute bit is removed from my sudy.bash script. For example:

$ chmod -x sudy.bash

Run as myself:

$ ./sudy.bash
-bash: ./sudy.bash: Permission denied

Run via sudo:

$ sudo ./sudy.bash
sudo: ./sudy.bash: command not found

NOTE: The same thing happens with the copied whoami run as sudo ./whoami as well.
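The execute-bit behaviour described above can be reproduced without sudo. A minimal sketch (the script path is a throwaway temp file; the exact error wording differs between bash and sudo, but the underlying cause is the same missing execute permission):

```shell
# Demo: a script that exists but lacks the execute bit.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$tmpdir/demo.sh"

chmod -x "$tmpdir/demo.sh"       # remove execute permission
"$tmpdir/demo.sh" 2>/dev/null    # shell reports "Permission denied"
st_noexec=$?                     # 126: command found but not executable

chmod +x "$tmpdir/demo.sh"
out=$("$tmpdir/demo.sh")         # now runs normally and prints "hi"
rm -rf "$tmpdir"
echo "status without +x: $st_noexec, output with +x: $out"
```

Exit status 126 is the shell's conventional "found but not executable" code, which sudo surfaces as "command not found" instead.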
sudo: ./<SomeBinary>: command not found when run from mounted folder [closed]
1,560,005,118,000
I have a font file which I created using a Python script from GitHub. The owner and group are root. I don't actually know what this means... I can guess what owner means, and I can guess that a group is a group permission that can be used to determine access, but I'm not entirely clear on the difference between the group being root and the owner being root, or how one even creates a group.

Anyway, the permissions for "other" are read-only, and when I try to install the font using the font viewer I am denied based on permissions. I also want to transfer this file to my Windows machine and to friends etc., so I want to take away all permissions from this file. Or give all permissions on this file? I'm not clear on the semantics there... basically I want to make it so that anyone can read, write, execute, use, install, whatever this file.

I tried running chmod o+rx {{font name}} but that didn't seem to do anything. I then tried chown o+rx {{font name}}, and that threw an error because I didn't write a user name. But if I don't want to write a specific user name, and I just want to make it so that anyone can do anything with this file, what would I do, and why would I do it that way?

I also realize that I could just use the CLI to sudo-install the font, but I want to understand why permissions work this way and what exactly I'm doing. When I download a picture from the internet, anyone can edit, delete, move, or use the file, etc. So, having never really dealt with permissions, I'm not really clear on how a picture from the internet differs from files one has set permissions for. I get conceptually that it's a security thing, but the exact logic is not clear to me.

EDIT: The top answer to this question essentially answered my question: How to change permissions from root user to all users?

I'm still not clear on the full limits or theory of permissions, but in order to make a file usable by anyone I just have to change the permissions while running as root, using this command:

chmod a+rwX {(unknown)}

I know what rwx means: read, write, execute. And plus means to grant those permissions. But what does a mean? I read to use o, which is "other", but I'm guessing that a is "anyone"? I know how to give anyone permission, but how can I make anyone the owner? That's probably the more specific question I'm asking.
"root" is the superuser account. It has the permissions to do anything it wants on the computer.

Groups are used for access control, and a user can belong to several groups. Organizations may set up different groups for different departments, e.g. accounting, personnel, engineering, etc. This way they can limit access to files to only that group. Most *nix OSs will automatically create a group with the same name as the user when creating the user account.

The "other" category is limited to everyone who has an account on the computer. It does not mean everyone in the world.

File transfers between computers are usually done with scp (secure copy). It allows you to copy a file from the local computer to a distant computer, or, conversely, from a remote computer to the local computer. However, to do this you will need a user account on the remote machine.

When you download a picture from the internet you are using either HTTP or FTP to do the download. If you really want anyone in the world to be able to download the fonts, you would have to set up a web server or an anonymous FTP server. I strongly advise against doing that if you are unfamiliar with Unix.
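To make the permission classes concrete, here is a small sketch (run against a throwaway temp file, not the actual font) showing how u (user/owner), g (group), o (other), and a (all three) behave, including why a+rwX does not add execute permission to a plain file:

```shell
# Symbolic chmod classes demo on a temporary regular file.
f=$(mktemp)
chmod u=rw,g=,o= "$f"          # owner read/write only
mode1=$(stat -c %a "$f")       # 600
chmod a+r "$f"                 # grant read to user, group, AND other
mode2=$(stat -c %a "$f")       # 644
chmod a+rwX "$f"               # X adds execute only for directories or
                               # files that are already executable
mode3=$(stat -c %a "$f")       # 666 -- no execute bit appears
rm -f "$f"
echo "$mode1 $mode2 $mode3"
```

Note that chmod only changes permissions; changing the *owner* is a separate operation (chown), and on Linux only root may give a file away to another user.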
How do I chmod or chown a file so that anybody in the world can access it? [duplicate]
1,560,005,118,000
For a long time I have been receiving the following mails:

From: Cron Daemon <[email protected]>
Subject: Cron <root@cloud-vps> test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
To: [email protected]

/etc/cron.daily/clamscan_daily:
Starting a daily scan of / directory.
Amount of data to be scanned is 4.4G.
LibClamAV Warning: cli_scanxz: decompress file size exceeds limits - only scanning 27262976 bytes
LibClamAV Warning: cli_scanxz: decompress file size exceeds limits - only scanning 27262976 bytes
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4072 bytes @ offset 24, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4051 bytes @ offset 45, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4073 bytes @ offset 23, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4067 bytes @ offset 29, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4090 bytes @ offset 6, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4091 bytes @ offset 5, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4091 bytes @ offset 5, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4092 bytes @ offset 4, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4090 bytes @ offset 6, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4093 bytes @ offset 3, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4092 bytes @ offset 4, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
LibClamAV Warning: fmap_readpage: pread fail: asked for 4094 bytes @ offset 2, got 0
[...]
WARNING: Can't open file /sys/module/jbd2/uevent: Permission denied

/etc/cron.daily/logrotate:
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: NO)'
error: error running shared postrotate script for '/var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/mariadb-slow.log /var/log/mysql/error.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1

Can you tell me how to fix these errors? I have AppArmor disabled.
You need to resolve the password issue at the bottom of that email: the logrotate script is unable to access your MySQL server.

WARNING: Can't open file /sys/module/jbd2/uevent: Permission denied

/etc/cron.daily/logrotate:
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: NO)'
error: error running shared postrotate script for '/var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/mariadb-slow.log /var/log/mysql/error.log '
run-parts: /etc/cron.daily/logrotate exited with return code 1

Specifically this:

mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: NO)'

Try logging into the MySQL database that is local to this server and verify the username/password work as expected. Then make sure that you can use mysqladmin to do the same thing this cron job is attempting.
Cron Daemon - LibClamAV errors
1,560,005,118,000
When logging in, the prompt is different than it is after executing /bin/bash:

-bash-4.2$ exec bash
bash-4.2$

How do I get the - back in front of the bash? There are certain commands, like tcpdump, that only work in the original shell:

-bash-4.2$ tcpdump -i port1 -w /home/user/$HOSTNAME-port1.pcap -c10000 -G300
tcpdump: WARNING: port1: no IPv4 address assigned
tcpdump: listening on port1, link-type EN10MB (Ethernet), capture size 65535 bytes
0 packets captured
0 packets received by filter
0 packets dropped by kernel
-bash-4.2$ exec bash
bash-4.2$ tcpdump -i port1 -w /home/user/$HOSTNAME-port1.pcap -c10000 -G300
tcpdump: port1: You don't have permission to capture on that device
(socket: Operation not permitted)

Update for @ctrl-alt-delor:

-bash-4.2$ groups
nuage
-bash-4.2$ exec bash
bash-4.2$ groups
nuage

Update for @Mikel:

bash-4.2$ exec bash -l
bash-4.2$

Update for @Mark Plotnick:

-bash-4.2$ type tcpdump
tcpdump is aliased to 'sudo /usr/sbin/tcpdump'
-bash-4.2$ exec bash
bash-4.2$ type tcpdump
tcpdump is /usr/sbin/tcpdump
[root@host nuage]# cat /etc/sudoers | grep nuage
[root@host nuage]# exit
bash-4.2$ group
bash-4.2$ groups
nuage
bash-4.2$ whoami
nuage
Problem: after executing exec bash, the aliases present in the default login shell are lost, causing some commands to not work as expected, such as tcpdump in the following example:

-bash-4.2$ tcpdump -i port1 -w /home/user/$HOSTNAME-port1.pcap -c10000 -G300
tcpdump: WARNING: port1: no IPv4 address assigned
tcpdump: listening on port1, link-type EN10MB (Ethernet), capture size 65535 bytes
0 packets captured
0 packets received by filter
0 packets dropped by kernel
-bash-4.2$ exec bash
bash-4.2$ tcpdump -i port1 -w /home/user/$HOSTNAME-port1.pcap -c10000 -G300
tcpdump: port1: You don't have permission to capture on that device
(socket: Operation not permitted)

The fix: you can do what I did and figure out how the aliases change between the different prompts:

-bash-4.2$ type tcpdump
tcpdump is aliased to 'sudo /usr/sbin/tcpdump'
-bash-4.2$ exec bash
bash-4.2$ type tcpdump
tcpdump is /usr/sbin/tcpdump

and then change the script to use sudo tcpdump instead of plain tcpdump. Alternatively, you can restore all the aliases present in the original login shell by running exec -a -bash bash (which is apparently not recommended; see @ctrl-alt-delor's comment).
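The reason the aliases disappear is that exec bash starts a non-login shell, which skips the login startup files (such as ~/.bash_profile) where those aliases were defined, while the original -bash is a login shell. A minimal sketch of that difference, using a throwaway HOME directory so the real profile files are untouched (this assumes the system-wide /etc/profile prints nothing):

```shell
# A marker variable set only in the fake .bash_profile.
fake_home=$(mktemp -d)
echo 'PROFILE_MARK=login' > "$fake_home/.bash_profile"

# Plain (non-login) bash does not read .bash_profile...
no_login=$(HOME="$fake_home" bash -c 'echo "mark=${PROFILE_MARK:-unset}"')

# ...but a login shell (bash -l) does.
with_login=$(HOME="$fake_home" bash -l -c 'echo "mark=${PROFILE_MARK:-unset}"')

echo "plain bash: $no_login"
echo "login bash: $with_login"
rm -rf "$fake_home"
```

This is why exec bash -l (as suggested in the comments) gets closer to the original environment than plain exec bash.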
Get back to default login shell after running /bin/bash
1,560,005,118,000
I am trying to write a Bash script that takes the name of a file or directory as an argument and reports whether it is a directory, a regular file, or something else. The script should also report whether the user has read, write, and execute permission on the file or directory. Obviously something like this will get the raw information:

#!/bin/sh
read -p "Enter a filename: " fname
file $fname
ls -la $fname

However, I am wondering if we could drop the file command completely, pass the filename variable to ls, and then write some kind of if/then/else statement based on the results. The ls command gives the file type and a full breakdown of the permissions the user running it has on the file, so can I build custom output from the results of an ls command?
You can use the shell test to check for file type and permissions:

#!/bin/sh
read -rp 'Enter a filename: ' fname

if [ -f "$fname" ]; then
    echo 'File is a regular file'
elif [ -d "$fname" ]; then
    echo 'File is a directory'
else
    echo 'File is not a regular file or a directory'
fi

[ -r "$fname" ] && read=read
[ -w "$fname" ] && write=write
[ -x "$fname" ] && execute=execute

echo "User has ${read} ${write} ${execute} permissions" | tr -s ' '

The if construct reports on the file type by checking whether it is a regular file (-f) or a directory (-d); otherwise it reports that it is neither. Then we check the various permissions: read (-r), write (-w), and execute (-x), setting a variable for each. If the user does not have one of these permissions, the corresponding variable remains unset, so in the final echo line that variable expands to nothing. This is wrapped up with a call to tr -s ' ' to squeeze any extra spaces left behind by missing permissions.
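The same checks can be exercised non-interactively; here is a hypothetical variant that tests a throwaway temporary file instead of prompting with read (variable names kind and perms are illustrative):

```shell
# Non-interactive version of the type/permission checks.
fname=$(mktemp)
chmod u=rw "$fname"          # readable and writable by the owner, not executable

if [ -f "$fname" ]; then kind='regular file'
elif [ -d "$fname" ]; then kind='directory'
else kind='other'
fi

perms=''
[ -r "$fname" ] && perms="$perms read"
[ -w "$fname" ] && perms="$perms write"
[ -x "$fname" ] && perms="$perms execute"

echo "$fname is a $kind with$perms permission"
rm -f "$fname"
```

Because the temp file has mode rw- for its owner, the output lists read and write but not execute.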
How to validate basic file information [duplicate]
1,560,005,118,000
In journaling filesystems (for example ext4, XFS, ZFS, JFS, btrfs), there are file access permission rules. Hence, if I mount an HDD which has a Unix OS on it and I access a file on this disk without root privileges, reading or writing it will fail. But what happens if the current username and password are the same as those of the file's owner on that disk? If access still fails, what information is involved in distinguishing these two different users with the same username and password?
UNIX file permission metadata stored by any of the file systems you mentioned records a numeric ID, not a name: the standard owner, group, and mode bits live in each file's inode, while richer metadata such as POSIX ACLs is stored in extended attributes. The file system driver is aware of this metadata but not how to enforce it; enforcement happens in the kernel.

On Linux, users are identified by a user ID, determined from the name used at login and kept for the login session. Names can technically repeat in the /etc/passwd database, though. Furthermore, the associated password has no bearing on file permissions: if the login session has the same user ID, it has the same permissions. Changing your password won't affect file system permissions; it affects the login session and the password you type when using sudo or su, but the metadata on the file system only indicates which numeric user and group a file is associated with.
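The numeric-ID point is easy to observe directly: stat can print the stored UID/GID, while the owner name is looked up at display time from the local account database. A small sketch on a throwaway file:

```shell
# The filesystem records numeric IDs; names are resolved locally at display time.
f=$(mktemp)
uid=$(stat -c %u "$f")    # numeric owner ID as stored on disk
gid=$(stat -c %g "$f")    # numeric group ID as stored on disk
owner=$(stat -c %U "$f")  # name resolved from the UID via /etc/passwd (or NSS)
echo "uid=$uid gid=$gid owner=$owner"
rm -f "$f"
```

Mount the same disk on a machine whose /etc/passwd maps that UID to a different name (or to no name at all) and stat will report a different owner name, or just the bare number, while the on-disk metadata is unchanged.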
Besides username and password, what information is involved in the permission management of a journaling filesystem?
1,560,005,118,000
I have the following problem: I have two servers (Debian), each with Tomcat installed. The first server (192.168.0.100) has Tomcat installed with the default permissions on every folder (and that's OK). But on my second server (192.168.0.101), someone ran chmod -R 777 on every folder in Tomcat, and I don't know how to restore the original permissions on every folder. So my question is: is it possible to copy the permissions of the Tomcat folders from the first server (192.168.0.100) and then apply those permissions to the Tomcat folders on the second server (192.168.0.101)? Regards
Yes, it is possible. You can either copy the folder with scp -pr, preserving permissions, and replace the old one with it; or, if the contents differ but the set of files does not, you can copy the reference folder (preserving permissions) and clone its permissions onto the local one:

chmod -R --reference REMOTEFOLDER LOCALFOLDER
chown -R --reference REMOTEFOLDER LOCALFOLDER

The following answer could also be helpful: https://unix.stackexchange.com/a/20646
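Here is a minimal sketch of the --reference mechanism on single files (GNU chmod; the file names are throwaway temp files standing in for the "good" and "damaged" copies):

```shell
# Clone permissions from a reference file onto a target file.
ref=$(mktemp) && target=$(mktemp)
chmod 750 "$ref"                  # the "good" permissions to copy
chmod 777 "$target"               # the damaged permissions

chmod --reference="$ref" "$target"
cloned=$(stat -c %a "$target")    # target now matches the reference: 750
echo "target mode is now $cloned"
rm -f "$ref" "$target"
```

With -R the same idea applies per path, which is why the directory trees on both servers need to match for the recursive --reference approach to work.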
copy folder permission to another server
1,560,005,118,000
I have a nagios notification command that looks like this:

/usr/bin/printf "%b" "NotificationType=$NOTIFICATIONTYPE$\nService=$SERVICEDESC$\nHost=$HOSTALIAS$\nAddress=$HOSTADDRESS$\nState=$SERVICESTATE$\nDateTime=$LONGDATETIME$\nAdditionalInfo=$SERVICEOUTPUT$\n" > $$(mktemp -p $CONTACTADDRESS1$ service.XXXXXXXX.alert)

Adding in some newlines for readability:

/usr/bin/printf "%b" "NotificationType=$NOTIFICATIONTYPE$\nService=$SERVICEDESC$\nHost=$HOSTALIAS$\nAddress=$HOSTADDRESS$\nState=$SERVICESTATE$\nDateTime=$LONGDATETIME$\nAdditionalInfo=$SERVICEOUTPUT$\n" \
    > $$(mktemp -p $CONTACTADDRESS1$ service.XXXXXXXX.alert)

This does actually create a new file in the directory defined by $CONTACTADDRESS1$, which in my case is /home/alert/NagiosAlerts. Only recently delving into permissions and ACLs to this depth, I understood that I needed to set the setgid bit and some default ACLs on this folder to ensure that my nagios user was able to write to this directory and that my alert user could read and write files created here. So I set up the following:

# setfacl -Rdm g:nagios:rw /home/alert/NagiosAlerts/
# setfacl -Rm g:nagios:rw /home/alert/NagiosAlerts/

The folder permissions look like this now:

[kanmonitor01]# pwd
/home/alert/NagiosAlerts
[kanmonitor01]# ll ..
total 4
drwxrws---+ 2 alert nagios 4096 Dec 21 14:27 NagiosAlerts
[kanmonitor01]# getfacl .
# file: .
# owner: alert
# group: nagios
# flags: -s-
user::rwx
group::rwx
group:nagios:rw-
mask::rwx
other::---
default:user::rwx
default:group::rw-
default:group:nagios:rw-
default:mask::rw-
default:other::---

So, as the nagios user, I touch a new file:

[nagios@kanmonitor01]$ whoami
nagios
[nagios@kanmonitor01]$ touch file.bagel
[nagios@kanmonitor01]$ ll file.bagel
-rw-rw----+ 1 nagios nagios 0 Dec 21 14:57 file.bagel

That looks OK to me. The group permission of the file is rw-, which I think is the expected outcome.

However, when the nagios service executes the command at the beginning of the question, I end up with this:

[@kanmonitor01]# ll service.iCSThqzg.alert
-rw-------+ 1 nagios nagios 178 Dec 21 14:51 service.iCSThqzg.alert
[kanmonitor01]# getfacl service.iCSThqzg.alert
# file: service.iCSThqzg.alert
# owner: nagios
# group: nagios
user::rw-
group::rw-          #effective:---
group:nagios:rw-    #effective:---
mask::---
other::---

So it has an ACL, just not the one I wanted it to have. This is not the expected behavior to me. I see lots of questions about certain binaries not honoring the ACL because of default behavior and such; it looks like in my case mktemp is somehow causing the issue. I was trying to avoid having to chmod every file every time. Not sure what a good course of action is here. At the end of the day I need the nagios user to be able to write files to this directory and the alert user to be able to read/write those same files. ACLs seem like the way to go...
mktemp explicitly creates the file with permissions 0600:

$ strace -e open mktemp -p .
[...]
open("./tmp.EuTEGOcoEJ", O_RDWR|O_CREAT|O_EXCL, 0600) = 3

So that overrides those default ACLs: mktemp explicitly asks that group and other not have permissions. A way to deceive it so that doesn't happen would be wrong. You could chmod the file after creation (go against mktemp's will after the fact):

file=$(mktemp....) && chmod g+r -- "$file" && ... > "$file"

Or use mktemp only to find the file's name but not create it, and let the shell redirection create it:

umask 077; ... > "$(mktemp -u ...)"

In that case, the ACL does take precedence over the umask. The -u (here for GNU mktemp) stands for "unsafe", though: while mktemp will try to come up with a unique file name, it cannot guarantee that the file won't be created (possibly as a symlink to somewhere else, which is the kind of thing one might need to worry about here) in between the time it outputs the name and the shell opens it. Even set -o noclobber cannot safeguard against it, as it also has a race condition (you'd need zsh's sysopen -o excl here).
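The 0600 behaviour and the post-creation chmod fix are easy to verify; a small sketch on a throwaway file:

```shell
# mktemp creates files with mode 0600 regardless of umask or default ACLs,
# so group access must be granted after creation.
f=$(mktemp)
before=$(stat -c %a "$f")   # 600: group and other get nothing
chmod g+r -- "$f"           # relax group permissions after the fact
after=$(stat -c %a "$f")    # 640: group can now read
echo "before=$before after=$after"
rm -f "$f"
```

On a directory with a default ACL, that chmod g+r is also what re-enables the group:nagios:rw- entry's effective rights, since chmod on the group bits adjusts the ACL mask.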
What can I change with my nagios command so that it honors the destination folder ACL?
1,560,005,118,000
The documentation for setfsuid() says:

Normally, the value of the filesystem user ID will shadow the value of the effective user ID.

Does "shadow the value" mean that the value of the filesystem user ID will be used instead of the effective user ID? If this is what it means, why did they say "Normally"? Is there a situation where the effective user ID will be used instead of the filesystem user ID? Note that they say the same thing about the filesystem group ID and the effective group ID in the setfsgid() documentation:

Normally, the value of the filesystem group ID will shadow the value of the effective group ID.
The part about shadowing refers to the following sentence: In fact, whenever the effective user ID is changed, the filesystem user ID will also be changed to the new value of the effective user ID. So, since usually programs don't change the FSUID (or even know about it!), it's always going to be the same as the EUID. The exception being programs that explicitly know to change it directly. The FSUID is used for filesystem accesses, the EUID for other things. The rationale is there in the man page: the FSUID existed originally so that a file server could act on behalf of some regular user, but could not be affected by that same user via signals.
Will "filesystem user ID" always be used instead of the "effective user ID"?
1,560,005,118,000
I want to run the zypper command without typing sudo, for example: zypper update I attempted to change the permission bits of the zypper binary located in /usr/bin, assuming that would allow me to run the zypper command without sudo. -rwxr-sr-x 1 root root 1942112 Oct 10 19:21 /usr/bin/zypper I also added the current user to the root group, so this file should be runnable as root.
Strange: I added setuid instead, and it works. For a binary to run with root privileges when invoked by any other user, it must be setuid (the listing above shows the setgid bit, which is not enough). You can set it as follows: $ sudo /usr/bin/chmod 4755 /usr/bin/zypper Be aware that this lets every user on the system manage packages as root, which is effectively full root access.
Launching zypper command with root privilege
1,560,005,118,000
I have a process P which is spawned by a process owned by root. After P is created, setgid() and setuid() are called and it runs as user U. The process P attempts to create a file f in a folder F (in the root file system) which is owned by root and has the following permissions: drwxrwx--- 2 root root The function call looks like this: open(path , O_CREAT | O_RDWR , 0660); If I run the command ps -e -o cmd,uid,euid,ruid,suid,gid,egid,rgid,sgid the result is the following: /my/process 500 500 500 500 500 500 500 500 This confirms that the process P is not running as root. However, strangely enough, even though the process runs as user U, the file f is created under the folder F, which should only be writable by root and its group members: -rw-rw---- 1 U U So the file is owned by U. If I try doing the same from the bash I get a "Permission Denied" as expected: $ touch /F/f touch: cannot touch `/F/f': Permission denied If I set the folder F permissions to: drwx------ 2 root root then the open() call fails with "Permission Denied" as expected. Why can P create the file in that folder when writing permission has been granted to the root group? The ps command shows that all uid and gid values are set to the related user's IDs, so how is this possible? These are the group memberships of root and U: $groups root root : root $groups U U : U G So U has G as a secondary group $lid -g root root(uid=0) sync(uid=5) shutdown(uid=6) halt(uid=7) operator(uid=11) $lid -g U U(uid=500) $lid -g G U(uid=500) This shows that only U is a member of G
Like @jdwolf mentions in the comments, the issue might be supplementary groups: setgid() doesn't remove them. As a simple test, ./drop here is a program that calls setregid() and setreuid() to change the GID and UID to nobody, and then runs id: # id uid=0(root) gid=0(root) groups=0(root) # ./drop uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup),0(root) There's still the zero group. Adding setgroups(0, NULL) (before the setuid()) removes that group: # ./drop2 uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup) Of course, that doesn't add any of the other target user's groups.
Linux open() syscall and folder permissions
1,560,005,118,000
If we create a shared directory, and allow say a root user and a group sharedgroup the permissions -rwxrwxr--, and we want every new file to have the permissions -rwxrwxr--, but the permission of the parent directory to be rwxrwxr-w. The way to do this (as far as I know) would be to set the default umask to 0003, but it appears once we close the terminal the umask is reset. So how do we make the change permanent only for a directory, because we wouldn't want to change the umask of the entire system.
You cannot customize umask on a per-directory basis. The typical way to solve your particular scenario is to use a setfacl default ACL (setfacl -d), which is inherited by everything created inside the directory.
Setting a default umask permanently
1,560,005,118,000
I am working on some Qt application, which communicates with Bluetooth hardware. Now, if I run this app as normal user: [user@workstation]: /mnt/projects/btProjectBuild/debug>$ ./btClient I get following warning/error: qt.bluetooth.bluez: Missing CAP_NET_ADMIN permission. Cannot determine whether a found address is of random or public type. However, If I run same app with sudo prefix (as root): [user@workstation]: /mnt/projects/btProjectBuild/debug>$ sudo ./btClient I do not get this warning/error. I am using ArchLinux Linux workstation 4.12.8-2-ARCH #1 SMP PREEMPT Fri Aug 18 14:08:02 UTC 2017 x86_64 GNU/Linux. Where do I configure bluez to get rid of this warning/error?
This error comes from qt5 bluetooth library not from bluez directly and there is working solution, explained in "Bluetooth LE scan as non root?".
Qt application bluetooth error
1,560,005,118,000
I use my ikiwiki for personal notes only on my laptop locally (the html pages are under ~/public_html/mywiki) and now I am trying to edit it with emacs and push from command line. I have some questions about this: Is the following workflow correct: cd ~/mywiki edit and save ~/mypage.mdwm with emacs git add ~/mypage.mdwm git commit -m "mypage edit" git push Since I also sometimes want to edit it from the web interface, I tested it and noticed that it doesn't seem that I have to pull before editing. If I save an edit from the web interface the directory ~/mywiki is updated magically without using git pull. Is this correct so far or is there a better workflow? After editing and saving the page from the web interface it is saved with root permissions in ~/mywiki how can I make ikiwiki to save everything with my username as group and owner?
ad question 1: This seems to be correct. If you set git_wrapper to git_wrapper: /home/user/mywiki/.git/hooks/post-commit (instead of git_wrapper: /home/user/mywiki.git/hooks/post-update), you don't need the push step. You may also think about another working clone of your wiki. But as long as you have a single-user setup and you don't edit via web interface and editor at the same time, it should be fine to work inside srcdir as you described. See also this question: Why do I need 3 git repositories for ikiwiki if I want to commit locally) ad question 2: I am not quite sure where the problem comes from, maybe that you ran ikiwiki with sudo during setup. I suggest the following to fix it: Make sure that public_html is owned by you (sudo chown myuser:myuser ~/public_html) Resetup the wiki via cloning: Clone the bare repository: git clone --bare ~/mywiki.git ~/newiki.git (even if the files in mywiki.git are owned by root, the files in ~/newiki.git will be owned by myuser) cp ~/mywiki.git/config ~/newiki.git/config Make new srcdir: git clone ~/newiki.git ~/newiki (~/newiki will be your new srcdir) Make new config file: cp ~/mywiki.setup ~/newiki.setup and rename all occurrences of mywiki to newiki. Then run (without sudo): ikiwiki --setup newiki.setup --getctime Test in your browser: 127.0.0.1/~myuser/newiki If everything works you may (after a backup) delete mywiki and rename newiki to mywiki if you want.
Using ikiwiki via command line: Workflow and permission problem
1,560,005,118,000
I have a game executable at ~/Games/factorio/bin/x64/factorio that I want to run from dmenu. I've created the shortcut below: [Desktop Entry] Type=Application Name=Factorio Path=/home/[USERNAME]/Games/factorio/bin/x64 Exec=factorio Terminal=false ...with [USERNAME] obviously being my username. dmenu picks up the file and displays the entry, but when I select it, nothing happens. I created another desktop file for pavucontrol below: [Desktop Entry] Type=Application Name=pavucontrol Comment=Sound manager for PulseAudio Path=/usr/bin Exec=pavucontrol Terminal=false This desktop file (pavucontrol.desktop) has the exact same syntax as factorio.desktop, yet actually works. Is there something I'm missing? I've checked the file permissions for both factorio and factorio.desktop, and both have full read permissions and write permissions for the owner. Both are marked as executable. Here is some system information if that helps: OS: Antergos Linux x86_64 Model: NC839AA-ABA a6838f Kernel: 4.12.3-1-ARCH Shell: bash 4.4.12 DE: i3
Something that always worked for me was putting the whole path in the Exec section as follows: [Desktop Entry] Type=Application Name=Factorio Exec=/home/[USERNAME]/Games/factorio/bin/x64/factorio Terminal=false Note that the Path key only sets the working directory for the launched program; it does not affect where the Exec binary is looked up, so a bare Exec=factorio only works if factorio is in $PATH (which /home/[USERNAME]/Games/factorio/bin/x64 is not, while /usr/bin/pavucontrol is found normally).
.desktop file will not launch the desired program, despite being identical in syntax to a working file
1,560,005,118,000
My PHP script is used to register a new user along with his photo. On Debian everything was fine, but when I installed RHEL on my server, problems began. The directory /tmp/ has rights 777 and "upload/" has 777 with chown apache:apache. Below is a fragment of httpd's error_log: [Wed Jun 07 15:25:29.363766 2017] [:error] [pid 22867] [client 10.31.242.73:49624] PHP Warning: move_uploaded_file(upload/1268_org.jpg): failed to open stream: Permission denied in /var/www/html/inc/classes/user.inc.php on line 76, referer: http://10.31.242.72/index2.php?mnu=10041 [Wed Jun 07 15:25:29.363808 2017] [:error] [pid 22867] [client 10.31.242.73:49624] PHP Warning: move_uploaded_file(): Unable to move '/tmp/phpmY6k8j' to 'upload/1268_org.jpg' in /var/www/html/inc/classes/user.inc.php on line 76, referer: http://10.31.242.72/index2.php?mnu=10041 I don't have any idea what's wrong with it. Maybe I skipped something?
I found the solution on this website. SELinux was to blame. I just added the httpd_sys_rw_content_t context to the upload directory by typing: semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/upload(/.*)?" followed by restorecon -Rv /var/www/html/upload to apply the new context to the existing directory.
PHP move_uploaded_file permission denied only on RHEL
1,560,005,118,000
I downloaded and compiled the source code of FreeBSD with: git clone https://github.com/freebsd/freebsd.git /usr/src cd /usr/src make clean make buildworld and literally everything would exit on signal 12. I tried rebooting the system, but reboot exited on signal 12, so I had to press the power button to shutdown my device. When I boot to FreeBSD again, I can't even login. Firstly it tells me Jun 4 08:10:32 init: /bin/sh on /etc/rc terminated abnormally, going to single user mode Enter full pathname of shell or RETURN for /bin/sh: And if I send a RETURN, an error would occur: pid 33 (sh), uid 0: exited on signal 12 Jun 4 08:10:51 init: single user shell terminated, restarting Enter full pathname of shell or RETURN for /bin/sh: The worst thing about this problem is that the same error occurs even when I enter Single User Mode. How can I fix this?
You had the bad luck to upgrade your system at a very rare moment when the CURRENT branch was changing its ABI, and ignored the safe procedure detailed here (the 20170523 entry): https://github.com/freebsd/freebsd/blob/master/UPDATING At this point - old kernel, new userland, which is the only unsupported configuration there is (new kernel, old userland is fine) - I'd say the easiest way out is to reinstall, without reformatting partitions.
FreeBSD: everything exited on signal 12 after "make buildworld"
1,560,005,118,000
I don't understand how my Linux machine is operating on new files. I have an Amazon Linux AMI (RHEL based distro) and when I execute umask I get 0002, so I gather that whenever I create new stuff, other users won't get write access. But then I go to my home directory and I type: $ mkdir myDir $ touch myDir/myFile $ ls -l | grep myDir and I get drwxrwxr-x 2 myself myself 4096 May 11 22:37 myDir and for the file inside the folder: $ ls -l myDir -rw-rw-r-- 1 myself myself 0 May 11 22:37 myFile So apparently there's more going on than my umask, since myFile's permissions are more restrictive than just write protection. Digging deeper, if I try: $ sudo touch /var/run/myPidFile.pid $ ls -l /var/run/ | grep myPidFile.pid -rw-r--r-- 1 root root 0 May 11 22:42 myPidFile.pid So myPidFile.pid gets a much more restrictive default permission under /var/run than myFile gets under my home folder. We could blame the root umask, but if I run umask as root I get 0022, which is indeed more restrictive than my user's 0002 umask but still doesn't explain why the execute bit isn't set. So how can I understand a folder's default permission on Linux?
The umask is most of the puzzle. Root has a different umask. This is pretty typical. The part of the puzzle that you're missing is that the umask is a mask. When an application creates a file, it specifies some permissions; the umask is a filter for these permissions that removes some permission bits. The file only has permission bits that the application included. For example, an application that intends to create a non-executable file (such as touch) passes the permission bits 666 (in octal); with the umask 002 this results in permissions 664, i.e. rw-rw-r--: the umask removed the write-other bit. When creating a directory, the application (such as mkdir) would normally allow execution, and so specify 777 as the permissions; the umask 002 results in permissions 775 on the directory, i.e. rwxrwxr-x. You can see what permissions the application uses by observing the system calls it makes. For example: $ strace -e open,mkdir touch foo … skipping opening of dynamically linked libraries etc. … open("foo", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 +++ exited with 0 +++ $ strace -e open,mkdir mkdir goo … skipping opening of dynamically linked libraries etc. … mkdir("goo", 0777) = 0 +++ exited with 0 +++
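The masking described above can be observed directly in a shell, without strace (run in a scratch directory):

```shell
cd "$(mktemp -d)"

umask 002        # mask out the other-write bit from whatever is requested
touch somefile   # touch requests 666
mkdir somedir    # mkdir requests 777

stat -c '%a %n' somefile somedir   # prints: 664 somefile / 775 somedir
```

The file never gets execute bits because touch never asked for them; the umask can only remove permission bits, never add them.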
See permissions for new files on a given directory
1,490,107,419,000
Dir created inside a loop fs denies access, but has correct permissions. init.sh - creates an fs image and mounts it (user and group ids are 1000): #!/bin/bash mkdir -p out-dir dd if=/dev/zero of=out-dir.img bs=1024 count=125 /sbin/mkfs.ext4 out-dir.img guestmount -o uid=$(id -u) -o gid=$(id -g) -a out-dir.img -m/dev/sda out-dir create.sh - creates a dir and does cd: #!/bin/bash mkdir -m 700 out-dir/test cd out-dir/test The cd gives: ./create.sh: line 4: cd: out-dir/test: Permission denied Then, ls -lan out-dir: drwxr-xr-x 4 1000 1000 1024 Mar 21 15:27 . drwxrwxr-x 3 1000 1000 4096 Mar 21 15:27 .. drwx------ 2 1000 1000 12288 Mar 21 15:27 lost+found drwx------ 2 1000 1000 1024 Mar 21 15:27 test How to establish the correct mapping?
This is the option: -o default_permissions. guestmount --fuse-help: ... -o default_permissions enable permission checking by kernel
guestunmount: can't cd into a dir, but the permissions are ok
1,490,107,419,000
I have a small Linux device that may choose to sync its time with a handheld device when said device connects to it. My program has been running as root, and I've just been using date --set commands. But I'm trying to move said program to a less privileged user. Since I'm using systemd now, I think I should be using timedatectl to set the time rather than date directly. I've proven to myself that I can do this as root. But I don't know how to drive it as non-root. I could use a specific sudo item, but I was hoping to not have my program running sudo. If that's the only way though, I know how to do that. If that's the only way, then just answer that :) I hoped that if I made my user a member of the systemd-timesync group, I might be able to, but with or without said group, I get the following error: > timedatectl set-time "2017-3-2 01:40:30" Failed to set time: The name org.freedesktop.PolicyKit1 was not provided by any .service files I have no idea what that means, or how to fix it, or if I should, or if it's possible.
I would use sudo. You express reluctance to that approach, but you would only be granting your user root access to run the single timedatectl command. This ought be able to be solved with PolicyKit as well, but it would effectively have the same result of allowing a user to run a single command as root. So the risks would be similar-- and you already understand how to solve the problem with sudo.
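As a sketch, a sudoers entry restricted to that one command might look like the following (add it via visudo; the username and the binary path are assumptions, check the path on your system with `which timedatectl`):

```
yourusername ALL=(root) NOPASSWD: /usr/bin/timedatectl set-time *
```

The user can then run `sudo timedatectl set-time "2017-3-2 01:40:30"` without a password, but no other timedatectl subcommand or root command.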
Allow non-root user to use timedatectl
1,490,107,419,000
I have vsftpd running on Ubuntu 16.04 LTS. During installation a ftp user is created with a home directory of /srv/ftp and hence this is the default FTP directory. Here are my vsftpd.conf file permissions that I've set. listen_ipv6=YES anonymous_enable=YES local_enable=YES write_enable=YES local_umask=022 anon_umask=011 anon_upload_enable=YES anon_mkdir_write_enable=YES dirmessage_enable=YES use_localtime=YES xferlog_enable=YES connect_from_port_20=YES chroot_local_user=YES allow_writeable_chroot=YES secure_chroot_dir=/var/run/vsftpd/empty pam_service_name=vsftpd What I'm trying to do is upload files as an anonymous user to the ftp server. I am able to login as an anonymous user but when I'm trying to upload, I'm getting, 200 PORT command successful. Consider using PASV. 553 Could not create file. Now there are numerous sources on the internet who are getting the same error but none of the solutions are solving my error. I know there is something about the permissions that I'm missing. The /srv/ftp permissions are set to 755.
I have installed vsftpd and filezilla, went through your .conf and added options accordingly: $ sudo cat /etc/vsftpd/vsftpd.conf | grep -v "#" anonymous_enable=YES local_enable=YES write_enable=YES local_umask=022 anon_upload_enable=YES anon_mkdir_write_enable=YES dirmessage_enable=YES xferlog_enable=YES connect_from_port_20=YES chown_uploads=YES chown_username=abdullah xferlog_std_format=YES chroot_local_user=YES listen=NO listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES Filezilla did give some feedback, and I had to replace the option chown_username=abdullah with my own existing user name. Then I ran into a permission problem, which was solved by changing the ownership of the ftp folder /var/ftp/pub from root to ftp. After that I was able to upload & bind the files but not modify them, since we have a umask option set.
Not able to upload as anonymous user in vsftpd
1,490,107,419,000
I try to kill a service as a other user. I login as a user "usernoroot" and kill a service of a root user "userroot"! Therefore I have a killscript.sh in the folder of "usernoroot" like: #!/bin/sh kill -9 $1 and make this script executable: chown root:root /home/usernoroot/killscript.sh chmod 755 /home/usernoroot/killscript.sh Now I try to run ./killscript.sh <pid> but getting: ./killscript.sh: 2: kill: Operation not permitted What can I do to run this script successfully? EDIT I have installed sudo: apt-get install sudo add my user to group sudo adduser usernoroot sudo and add the script "killscript.sh to the sudoers nano /etc/sudoers usernoroot ALL=(ALL) NOPASSWD: /home/usernoroot/killscript.sh Now I can execute ./killscript.sh 222 to quit the process with the id 222 without any PW.
First of all, you are running the script as usernoroot, which means you don't have permission to kill any process you don't own. To kill arbitrary processes on the system you can use the sudo tool to run your script as the root user: sudo ./killscript.sh <pid> There is another way to do it, but I don't recommend it: it creates a serious security problem and, if used in the wrong way, can cause big trouble. You can set the setuid bit on the kill tool (you have to be root to do this): chmod 4755 /bin/kill Then anyone can run the kill tool as the root user. I don't recommend this way.
Run script as other user
1,490,107,419,000
Is there a way in Ubuntu to set TCP ports permissions for individual users? For example, userA is only allowed to open ports between 3000-3010. So if userA ran the following php -S 0.0.0.0:3001, it would work. But if they try running php -S 0.0.0.0:3200, they will get a permission denied.
Without involving MAC (SELinux or AppArmor), you could do this quick'n dirty hack with iptables. Note that the owner match is only valid for locally generated packets, i.e. in the OUTPUT and POSTROUTING chains, so you can only filter the traffic userA's processes send, not the incoming connections themselves: iptables -P INPUT DROP iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i ethX -p tcp -m multiport --dports 3000:3010 -j ACCEPT iptables -A OUTPUT -o ethX -p tcp -m owner --uid-owner <userA_UID> -m multiport ! --sports 3000:3010 -j DROP However, it will report nothing to the user, and it will still allow the user to bind any port; services on other ports will just suffer the symptoms of having the port blocked (and note that the last rule also blocks userA's outgoing client connections, since those use ephemeral source ports).
User Defined TCP Port Permissions
1,490,107,419,000
I'm trying to move away from running cron-scheduled jobs with root, so the thought process is to create a system account with no login (/dev/null home, /sbin/nologin shell) to run each cron job we need ran. I'm just curious how to give these accounts the proper permission to run where they need to be without changing the ownership of normal files and folders that are typically restricted to root. For instance, say I want this system account to output log files of what it's doing to /var/log, However, /var/log/ is owned by root, and is set to 755. This process won't be able to create log files there without running as root, correct? Am I correct in assuming using Linux Kernel Capabilities is the best way to do this?
One way you can achieve that is to put the logs inside a sub-folder under /var/log and then set the permissions on that sub-folder. Another way is to log to syslog with logger and use a filter to redirect the logs to a specific file, e.g.: # /etc/rsyslog.d/10-myrules.conf if $programname == ["script1", "script2"] then { action(type="omfile" file="/var/log/myscripts/sys.log") stop } And you should probably also set up a logrotate rule while you're at it.
Assigning Privileges to System Accounts
1,490,107,419,000
Context I often create a multiuser GNU screen session for demonstration purposes. I do it by creating a named session with: screen -S tutorial And then performing ^A:multiuser on ^Aaclchg student1,student2,student3,... -wx "#?" And that works, the students can connect with screen -r grochmal/tutorial and can see what I do. (It even locks their PTS 'cause they do not have permission for ^Ad). Question What I'd like to do though is to setup aclumask so I could make my life easier since I sometimes forget to use aclchg and use acladd (and a funny student can write swear words on the terminal). According to how I understand man screen the following should be equivalent to what I do above: screen -S tutorial ^A:multiuser on ^A:aclumask ?-wx ^A:acladd student1,student2,student3,... And then I could add the aclumask ?-wx to my .screenrc and never worry again about funny students. Unfortunately that is not the case, and the aclumask line seems to have no effect on the permissions granted by acladd. I must be doing something wrong. What is the proper way of using aclumask with users that are not yet known to screen?
The OP got me to find the last bits of what I needed for my configuration. Seeing as it's an old question I stumbled into, and I found few examples of this elsewhere, I figured I could drop my solution here. I found that the umask is read right-to-left, as well as between the | separators. If you'd like to add a permission for one user and remove it from all others, try this in .screenrc: aclchg * -w "#?" aclchg root -w+w "#?" aclumask [ -wx | root+wx ]
GNU screen: how do I use :aclumask to set permissions to unknown users?
1,490,107,419,000
Using a NAS device which provides samba/smb services, mounts to a Slackware box work, but the ownership is root, and users don't have permission to write. So: mount -t cifs //192.168.1.12/NAS1 /u/NAS1 -o rw,user=joeuser,password=passjoe,domain=MYSMBGROUP This mounts /u/NAS1 with a uid,gid of root,root and users cannot write or create directories on the NAS. I have looked around with google and cannot find this exact situation with a solution. On Slackware 14, it appears that only root can mount. I also tried using file_mode=0775,dir_mode=0775, and was still unable to create files or directories. My question is, how do I control the mounted uid,gid, so that I can have users in the gid write to the drive?
Not sure why, but I did get the right response when I did: mount -t cifs //192.168.1.2/NAS1 /u/NAS1 -o uid=100,gid=100 The NAS is now mounted with that user,group, and I will work on permissions so that group members can modify. I guess I just asked too early in my frustration process. (grin)
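As an untested sketch, the same mount could be made permanent in /etc/fstab along these lines (the uid/gid values, the credentials file path, and the modes are assumptions; adjust them to the user and group that should own the files):

```
//192.168.1.2/NAS1  /u/NAS1  cifs  credentials=/etc/cifs-creds,uid=100,gid=100,file_mode=0775,dir_mode=0775  0  0
```

Combining uid/gid with file_mode/dir_mode gives group members write access without making the share world-writable; the credentials file keeps the password out of the fstab itself.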
Slackware cifs mount - how to control permissions?
1,490,107,419,000
When I create files with a different group I see that the permissions are different I did this as root: groupadd stack useradd stack1 gpasswd stack (choose a password) su stack1 touch testfile I did ls -l and I see permissions rw-rw-r-- newgrp stack (enter the password I previously created for that group) touch test2 I did ls -l and I see permissions of file test2 rw-r--r-- And where can I change that option, I think it has to do with umask but I'm not sure. Thanks.
As you know, the difference is due to the different umask values. When you create a file you specify the maximum permissions the file should have. For touch this would be rw-rw-rw-. The umask is then used to reduce the permissions. In general you should use su - stack1 to switch to a user. newgrp is a difficult program to write. It typically is suid as it needs to manipulate the groups. Ideally it would be built into the shell like umask is, so it would alter the groups for the current process, but this is incompatible with it being suid. So typically it is a suid binary that prompts for the password, and if validation succeeds it will replace itself with a shell. This shell can run its startup code. If your ~/.bashrc file has a umask command in it, either directly or indirectly, then that would explain the difference in values.
Why I get different permissions when I create a file with a different group and how can I configure that?
1,490,107,419,000
Environment Distro: CentOS7 Kernel: 3.10.0-427.10.1.lve1.4.7.el7.x86_64. Scenario This is a shared hosting environment and I just noticed that only /dev/mqueue and /dev/shm have 1777 permissions (/tmp and /var/tmp also but they are besides the point here). Questions Does that pose a threat to the security of the server? For instance, a system user might occupy the directories with useless junk and fill their disk quota; Given that the entire /dev directory is mounted on devtmpfs, does that mean that everything will be flushed/deleted from the directory once a reboot takes place? What is the difference between tmpfs and devtmpfs? Here's what's currently mounted: Filesystem Size Used Avail Use% Mounted on /dev/sdi1 148G 730M 140G 1% / devtmpfs 59G 0 59G 0% /dev tmpfs 59G 0 59G 0% /dev/shm tmpfs 59G 4.1G 55G 7% /run tmpfs 59G 0 59G 0% /sys/fs/cgroup /dev/sdh1 148G 14G 127G 10% /usr /dev/sda1 2.0G 269M 1.6G 15% /boot /dev/sdg1 148G 7.7G 133G 6% /var /dev/sdd1 148G 468M 140G 1% /tmp /dev/sdc1 493G 13G 455G 3% /ssd /dev/sde1 493G 37G 431G 8% /localbkp /dev/sdf1 8.0T 515G 7.1T 7% /home tmpfs 12G 0 12G 0% /run/user/0 tmpfs 12G 0 12G 0% /run/user/1242 tmpfs 12G 0 12G 0% /run/user/1507 tmpfs 12G 0 12G 0% /run/user/1812 Thank you.
Does that pose a threat to the security of the server? For instance, a system user might occupy the directories with useless junk and fill their disk quota; Sure, but they can already just fill up memory by malloc()ing too much anyway (Yes, you can use ulimit(), but that's a per process limit). If you want to protect users from each other's memory usage, you'll have to put them in different containers. Given that the entire /dev directory is mounted on devtmpfs, does that mean that everything will be flushed/deleted from the directory once a reboot takes place? Yes. What is the difference between tmpfs and devtmpfs? From the kernel's CONFIG_DEVTMPFS documentation: This creates a tmpfs filesystem, and mounts it at bootup and mounts it at /dev. The kernel driver core creates device nodes for all registered devices in that filesystem. All device nodes are owned by root and have the default mode of 0600. Userspace can add and delete the nodes as needed. This is intended to simplify bootup, and make it possible to delay the initial coldplug at bootup done by udev in userspace. It should also provide a simpler way for rescue systems to bring up a kernel with dynamic major/minor numbers. Meaningful symlinks, permissions and device ownership must still be handled by userspace.
Permissions of /dev/shm and /dev/mqueue
1,490,107,419,000
I'm running a live Ubuntu from my USB stick that has two partitions (SYSTEM & DATA). In DATA I need to create a file whose name starts with a *. When I run touch *.o I get a No such file or directory error. If I try to create it with vi/m I get an error saying that it can't open the file for writing. However, I can create the file on my SYSTEM partition. The stick uses a GPT partition table and both partitions are formatted with the FAT32 file system. I successfully created a *.o file on another FAT32 system though, so I assume it is not related to the file system itself. I suppose it is some permission issue? I tried sudo mount -o rw,remount /media/ubuntu/DATA, because I thought maybe the mounting was wrong, but that didn't help either. I also tried to chown -R ubuntu:ubuntu, but no luck there as well. Do you guys have any idea what the problem could be? For those wondering why I need those files: my makefile is creating those *.o files to compile the project.
It seems the answer is simply that FAT32 doesn't allow a literal * in filenames, according to https://en.wikipedia.org/wiki/Filename#Comparison_of_filename_limitations . So you're out of luck here, maybe reformatting to ext4 is an option?
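For contrast, the character itself is perfectly legal on typical Linux filesystems such as ext4; it only has to be quoted so the shell doesn't try to glob-expand it:

```shell
cd "$(mktemp -d)"
touch './*.o'    # quoted, so the shell passes the literal name through
ls -b            # shows a file literally named *.o
```

So the restriction really does come from FAT32's reserved characters, not from Linux or the shell.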
Can't create *.o file on partition
1,490,107,419,000
I am using some cloud storage services (like Dropbox, MEGA, Amazon Cloud Drive, etc). Most of these do not support symlinks and file permissions (simple read/write/execute permissions for user, group, all). Dropbox supports file permissions but not symlinks. MEGA supports neither and the (third-party) Linux clients I found for Amazon Cloud Drive support neither symlinks or permissions. I was wondering whether there is a light-weight file system that implements symlinks and file permissions on top of a file system that does not support them natively. I am thinking about something where permissions are stored in an additional file (maybe per directory) and if that file system is mounted (presumably through FUSE) then it would read that permission list and show the correct permissions in the FUSE-mounted file system. Similarly, it could use a special file for symlinks and transparently make these work on an underlying file system that does not support it. Before I start writing my own FUSE file system that does that, I wanted to know whether someone else already had the same idea and I don't have to reinvent the wheel... (Note that I am aware of security issues with storing permissions in a separate file, as they could be changed by changing that file. However, if the underlying file system does not support permissions, then I'm not sure whether there is a way to securely implement permissions on top of that, because if one has access to the underlying file system, one can do anything anyway. This is more of a convenience thing for me. If I make a file executable on one client that syncs with cloud storage, I want it to become executable on the other clients as well. Similarly for symlinks.)
It turns out that posixovl (https://sourceforge.net/projects/posixovl/) does exactly what I was looking for. So the answer to the question is: Yes, such a FUSE file system exists. Thanks Gilles for suggesting it!
Is there a light-weight file system that implements symlinks and file permissions on top of another file system that does not support those?
1,490,107,419,000
I have a web server and it is used by a few developers. The web site runs under the website user. The other users are user1, user2, etc. I have given user1, user2, etc. sudo access to the website user. The issue I'm having now is that users fail to copy scripts from website because some scripts are not readable directly by the users. And even if I try to cp using sudo it fails, because website doesn't have write permission to the users' directories. I do not want to change the file permissions for security reasons. I saw somewhere that I can do this using tar, but couldn't figure out how. Can someone help... Thanks!
You can do (as user1) something like sudo -u website cat ~website/somefile > ~user1/somefile Note that ~user1/somefile will first be created by the user running the shell (user1), and the cat will be executed as website. You can use tar(1) with the same trick, for multiple files: sudo -u website tar cf - ~website/foo ~website/bar | tar xf - Run as user1 in his directory, that will "create" a tar archive on stdout as website, and the other tar (without sudo, so running as the same user as the one running the shell, that is user1) will unpack that virtual tar file to the current directory (to which user1 can write). UPDATE Note that tar will create the subdirectories leading to a file; you can avoid that behaviour by specifying -C so tar will enter the specified directory before starting: sudo -u website tar -C ~website -cf - foo bar | tar xf - This way, foo and bar will be created in the current directory without leading subdirectories (but if you added blah/baz, it would create blah as a subdir in which baz resides)
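The plumbing of that tar pipe can be tried locally without sudo, using two scratch directories in place of the two home directories:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/foo" "$src/bar"

# pack from $src, unpack into $dst, no intermediate archive file on disk:
tar -C "$src" -cf - foo bar | tar -C "$dst" -xf -

ls "$dst"    # both files arrived, without leading subdirectories
```

With sudo -u website prefixed only to the left-hand tar, the read side runs as website while the write side runs as the invoking user, which is the whole point of the trick.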
How to copy file from one use to another?
1,490,107,419,000
So I need to create a ssh login for a colleague of mine that will allow him to install new things (most notably python packages) and restart services. I would be happy to give him full permissions to everything except a) few env files that have my access keys and passwords and b) elevating his permissions even more. What would be a good way to go about doing this? I'm running Ubuntu 14
You can't give a user access to “everything except for a few files”. If they can modify the system configuration then they can change those settings and give themselves full access. If they can install services that run as root then they can install a program that will give them full access. If they can install programs that you run then they can install a program that will give out your secrets the first time you run it. If you don't trust your fellow administrators, don't store any secrets on the machine. Put your confidential files and the services that need to be co-administered on separate machines. You can of course use virtual machines or containers, they don't have to be separate physical machines. Run the co-administered services in a VM/container and give your colleague administrator access only in the VM/container.
How to give a user permission to everything except env files?
1,490,107,419,000
I have a backup server in my LAN which mounts the home dir of user@laptop and creates a backup each hour using a python script. The problem that I'm having is that I get some hundred "permission denied" errors from rsync. Some files won't copy if I start the backup as root, others won't copy if I start it as user. The first thought that came to my mind was to set group ownership of home from user to root recursively. But I'm not sure if I should really do that.. Does anyone know how to proceed with this? Some info about the setup: uid and gid numbers are identical for user and root on both computers. This is how I import/export home: Export with: /etc/exports 192.168.178.10(ro,sync,no_subtree_check,root_squash) Mount with: /etc/auto.user -fstype=nfs4,ro,tcp 192.168.178.20:/home/username
Your export line says 192.168.178.10(ro,sync,no_subtree_check,root_squash) The root_squash entry means "when the remote user root tries to access a file, pretend the user is nobody instead". This means the remote root user has no privileged access at all. Instead, change root_squash to no_root_squash, i.e. 192.168.178.10(ro,sync,no_subtree_check,no_root_squash) Now the remote root user will have root-level read access to the files.
How do I get access permissions right for my backup?
1,490,107,419,000
I've installed PG 9.5 (/usr/pgsql-9.5/) and when I start it manually with postgres -D it has no problems, but if I try to use systemctl I get an error. By looking to journalctl -xen output, I see: /bin/sh /usr/postgresql-9.5/bin/postgresql95-check-db-dir: permission denied These are the permissions: -rwxr-xr-x. root root system_u:object_r:postgresql_exec_t:s0 postgresql95-checkdb_dir I cannot understand if it's a SELinux problem or something else. Any help? Putting PostgreSQL in permissive mode (for example semanage permissive -a postgresql_t) solved the problem, but if I can, I want it to stay enforced. Do you know what kind of problem it is?
The problem is the wrong context (postgresql_exec_t). The solution: semanage fcontext -a -t bin_t "/usr/pgsql-9.5/bin(/.*)?" restorecon -vR /usr/pgsql-9.5/bin Note the new context bin_t. Reading this, I had thought that postgresql_exec_t was the correct context.
Failed to start PostgreSQL 9.5 with systemctl - SELinux
1,490,107,419,000
I have a use case where my folder on the linux server needs to have its permissions opened up so I can use a sudo account to move files from my folder to shared folders. What then happens is often times I am not the one to log out (ssh connection disconnects) and my home folder permissions stay open. I don't care about the permissions being open except for the fact that when I try to ssh back in, the open folder permissions prevent my key authentication from working and I'm forced to enter a password. I want to know if I can setup a way to always set my permissions to 700 upon exiting (graceful or otherwise) the ssh session. Alternatively, if there is a way to make key authentication work when my home folder is set to 777 that would also help. By the way, in case it's helpful, here is what I'm actually trying to do. I use scp to move a locally built .jar from my machine to the home folder on the linux server. I then have to move that .jar from my home folder to a shared folder where the .jar can be executed. To move that .jar to the shared folder I have to use a provided sudo account, however, that sudo account cannot access my home folder unless I open the permissions.
Using information from your comment, the solution seems to be to fix the underlying issue rather than answer the question you've actually asked. I use scp to move a locally built .jar from my machine to the home folder on the linux server. I then have to move that .jar from my home folder to a shared folder where the .jar can be executed. To move that .jar to the shared folder I have to use a provided sudo account, however, that sudo account cannot access my home folder unless I open the permissions. As you have noticed, setting your home directory's permissions to 0777 prevents ssh from working. This is by design. Instead, create a subdirectory to contain your jar file and safely relax the permissions on that directory. Add execute permission to group and others for your home directory so that your sudo account can pass through it to the target folder: chmod 711 "$HOME" mkdir -m777 "$HOME/subdir" At this point, consider as an example that $HOME might be /home/serge. Now, although ls /home/serge fails for your sudo account with a permission error, it will be able to search through your home directory and into the subdirectory: ls /home/serge/subdir. If your sudo account and your own account have a group in common - or it can be arranged for them to have a group in common - you can relax only the group permissions on the subdirectory: chmod 710 "$HOME" chmod 770 "$HOME/subdir" chgrp {whatever} "$HOME/subdir" Alternatively, transfer the file to /tmp (or /var/tmp) instead of to your home directory and avoid the entire difficulty.
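The layout can be sketched on a scratch path first (the /tmp/demo-home path is illustrative; on the real system the 711 goes on your actual home directory):

```shell
# Stand-in for $HOME so the demo doesn't touch a real home directory
mkdir -p /tmp/demo-home/subdir
chmod 711 /tmp/demo-home          # others may traverse, but not list
chmod 777 /tmp/demo-home/subdir   # drop-box any account can write into
stat -c '%a %n' /tmp/demo-home /tmp/demo-home/subdir
```

With this in place, another account cannot run ls on /tmp/demo-home itself, but it can read and write files under /tmp/demo-home/subdir if it knows the path.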
change home folder permissions on exit/disconnect
1,490,107,419,000
My application on the server wants to copy a file from a remote directory mounted by sshfs to a local directory. The application code: shutil.copy('/data/somdir/somefile.txt','/var/www/App/localfolder' ) The permissions of /data are as follows: drwxrwxrwx 1 1027 root 4096 May 6 10:16 data So every user (including Apache) should be able to access the folder, but in the logfile I get: IOError: [Errno 13] Permission denied Some edits and updates on my question: I set allow_other when mounting via SSHFS, and all the directories along the path to the source file have permissions of at least 755. So any user has read access to it.
Mount with the option allow_other. If you rely on these permissions being enforced, also add the option default_permissions.
Apache does not have the permission to copy files from a mounted directory
1,490,107,419,000
I have two mounts /mount1 and /mount2. I ran the command: rsync -azrt /mount1/* /mount2/ to clone everything from /mount1 to /mount2. I then altered the /etc/fstab (see below) to remove /mount1 and mount /mount2 to /mount1, but things (including my email server's local user folders) are not working properly for permission reasons anymore, even though when comparing the permissions with the mounts before and after they are identical?! /etc/fstab before (working): UUID="3999A4F22570EAC4" /mount2 ntfs-3g nobootwait,permissions,locale=en_US.utf8 0 2 mhddfs#/mount3,/mount4 /mount1 fuse defaults,allow_other,nobootwait,nonempty,uid=1000,gid=1000,umask=007 0 0 /etc/fstab after (not working): UUID="3999A4F22570EAC4" /mount1 ntfs-3g nobootwait,permissions,locale=en_US.utf8 0 2 Where UUID="3999A4F22570EAC4" is /mount2 that has the content of the previous /mount1
Generic FUSE options 1) I noticed allow_other wasn't set on the ntfs-3g filesystem mount. The default for FUSE is not to allow access by other users. mhddfs is a FUSE filesystem and so is ntfs-3g (but see next section). 2) When you use allow_other, you also want to consider permissions checking. The default for FUSE is not to check permissions. So just adding allow_other to a filesystem can make it accessible by all users. This is probably undesirable; separate user IDs are often used to contain services, like the CUPS printer daemon, in case they are compromised by network attack. To enable user/group/mode permissions checks on generic FUSE filesystems, the option is called default_permissions. NTFS-3G specific behaviour 1) According to its man page, ntfs-3g will enable allow_other by default. (FUSE defaults will only allow the root user to do that. Not a problem here though, as you're using mount which runs as root). 2) It sounds like the ntfs-3g option permissions enabled permission checking for you. Otherwise, you wouldn't have noticed any permission errors. (SELinux might do, but you're not using SELinux, because you're on Ubuntu. Ubuntu AppArmor is described as being path-based, so from what you've described I think it's unlikely to be causing a problem). Thesis I believe your ntfs-3g mount is set up to perform permission checks, and FUSE is not separately blocking access by other users. This sounds sensible for a mount in fstab which is used to provide system directories like /var/mail. However your mhddfs mount is not performing permission checks itself, because it does not have default_permissions set. That would explain why the mhddfs setup was able to work (despite options for uid,gid,umask which only allow access to your user-id 1000). You don't show the underlying filesystems, so I don't know whether they're checking permissions, but I suspect that mhddfs is simply running as root and avoiding the permissions checks that way.
Here's a test you could run on the mhddfs mount. It should show if the permission bits are being checked or not. mkdir dir chmod a-w dir # make directory read-only touch dir/t # attempt writing to directory To solve your permission errors, you need to determine which user(s) should have what access to the files in question, and set the correct permissions accordingly. You've never said what user (or even what software) is failing the permission checks so it's hard to be any more specific.
Migrating mounts with identical permissions not working
1,490,107,419,000
Is there a way to set up a list of directories and files with their correct default permissions, which can be used as a reference to compare against and to fix incorrect permissions caused by system or user changes, or by installed software? For example: you install some software package into /usr/lib/, but it modifies the permissions of a folder or file; using a backed-up list of permissions for those files and folders, they can be compared and corrected if needed. file1 is -rwxrwxrwx but should be -rwxr-xr-x folder1 is drwxrwxrwx but should be drwx------ and so on and so forth… and use the backup list in a script to check the directories and files and correct them all with chown, chmod and setfacl. How can this be achieved? If possible, show examples of how it can be done. This might even be useful for a Linux server in general, if specific permissions need to be kept or set to prevent modifications taking place where they shouldn't, and perhaps have it run automatically after each reboot or system update, and have its list automatically pick up new entries on the fly without much user interaction required.
As a disclaimer, please be careful when doing batch changes to permissions. If you have a bug in a script that changes permissions, it can be nasty. That said, consider this example: Create a directory in which you can experiment, and change to that directory: mkdir /tmp/experiment cd /tmp/experiment Create a bunch of files in directories: mkdir -p {a,b,c,d}/{e,f,g,h}/{i,j,k,l} touch {a,b,c,d}/{m,n,o,p} touch {a,b,c,d}/{e,f,g,h}/{q,r,s,t} touch {a,b,c,d}/{e,f,g,h}/{i,j,k,l}/{u,v,w,x} As an experiment, give all the files random permissions: for i in $(find . -type f); do chmod $(($RANDOM % 8))$(($RANDOM % 8))$(($RANDOM % 8)) $i done Also give the directories random permissions, but retain permissions for the owner: for i in $(find . -type d); do chmod 7$(($RANDOM % 8))$(($RANDOM % 8)) $i done Create a permission restoration script using stat (using find -exec rather than piping to xargs, so that file names containing whitespace don't break things): find . -exec stat --printf="chmod %a %n\n" {} + > /tmp/perms.sh Note the output format: head -n3 /tmp/perms.sh chmod 715 . chmod 700 ./b chmod 250 ./b/n (Paths containing spaces or shell metacharacters would need quoting in the generated script; that's fine for this experiment.) Now trash the permissions: find . -exec chmod 777 {} + You could now restore the permissions using the script: bash /tmp/perms.sh To verify that this works, you can find the new permissions the same way you did before, but save them to a different file: find . -exec stat --printf="chmod %a %n\n" {} + > /tmp/perms.sh_new Then compare the two files and note that there are no differences: diff /tmp/perms.sh{,_new}
How to Backup and Compare File Permissions?
1,490,107,419,000
Upon installation, I have created an extra partition and mounted it as /data. The partition is visible, but I get a Permission denied error when trying to create a file or directory in it. Doing it with sudo does work. I am using ext4 filesystem. I have tried deleting the partition, then creating it again and setting up fstab to use a new partition. That changed nothing. How do I make the extra partition behave normally, e.g. be writable by users?
This should fix your problem: sudo chown -R $USER:adm /data chmod 0775 /data This will give you and all users in the adm group read and write access; all other users not in the adm group have only read access. The group adm is one of the default groups for all users in Ubuntu. For another distro, you could check which groups are assigned to new users by default and use one of those. Alternatively, you could create a new group (e.g. data) and add the users that should get access to /data to that group. If you want all users to have access to /data, irrespective of the groups they are in, then the chmod line should look like this: chmod 0777 /data
Permission denied when writing a file
1,490,107,419,000
So recently I accidentally started changing the ownership of everything in / to my unprivileged account :(. It happened because I was switching between users and shells and the directory changed to / without me noticing. Luckily I had -c enabled, so I realised there was something wrong quite quickly (just after the home dir). I then ran chown root:root -R on all the files owned by me in /. Now I'm having problems with xscreensaver, and su'ing returns failed auth. I can still use sudo though. Is there maybe a list of the correct permissions somewhere? I'm running the latest Mint XFCE.
I found the simplest method of fixing all the permissions. https://serverfault.com/a/117149/191095 getfacl -R / > /root/perms.acl setfacl --restore=/root/perms.acl It works perfectly. Now my xscreensaver and logging in as root work again :-)
Permission mix-up on Mint
1,490,107,419,000
I'm creating an FTP server with vsftpd, and I've almost finished. The only thing remaining: when I upload a file (logged in as a user U), the file belongs to a group which has the same name (so group name = U), but the user is in a different group. Let's give an example: user=publichttp usergroup=ftpusers (and only 1 group) When I upload a file, the file is uploaded with 775 permissions as I want, but ls -l shows me that the file owner is publichttp:publichttp and not publichttp:ftpusers, as it should be and as I want. The folder permissions in /home/: drwxrwxr-x 3 publichttp ftpusers 4096 nov. 8 17:20 publichttp in /home/publichttp/: -rwxrwxr-x 1 publichttp publichttp 98789 nov. 8 17:20 Extras.Txt (I want) -rwxrwxr-x 1 publichttp ftpusers 98789 nov. 8 17:20 Extras.Txt I don't know how to do that, searched all day... vsftpd.conf: listen=YES connect_from_port_20=YES use_localtime=YES xferlog_enable=YES dirmessage_enable=YES ftpd_banner=myftp. anonymous_enable=NO local_enable=YES write_enable=YES nopriv_user=publichttp secure_chroot_dir=/var/run/vsftpd/empty chroot_local_user=YES chroot_list_enable=YES chroot_list_file=/etc/vsftpd.chroot_list #empty file allow_writeable_chroot=YES userlist_enable=YES userlist_deny=NO userlist_file=/etc/vsftpd.user_list #just contains publichttp anon_upload_enable=YES anon_mkdir_write_enable=YES local_umask=002 file_open_mode=0777 Is it possible? (the simplest solution is the best) Thanks!
vsftpd takes all information from /etc/passwd and /etc/group for local users. To make files uploaded by your publichttp user belong to the ftpusers group, you need to set the primary GID of the publichttp user to the ftpusers group in /etc/passwd.
(vsftpd) uploaded file group
1,490,107,419,000
I'm attempting to cat a file with 770 permissions as a user who is part of the group (but not the owner of the file). This should provide sufficient permissions to my understanding, yet I encounter a Permission denied nevertheless. What am I missing? [altay@arch ~]$ ls -l test.txt -rwxrwx--- 1 http http 24 Sep 15 18:56 test.txt [altay@arch ~]$ groups altay lp wheel http network video audio storage autologin users [altay@arch ~]$ cat test.txt cat: test.txt: Permission denied [altay@arch ~]$ sudo cat test.txt The file is not readable through my group membership, despite the 770 permissions.
After adding yourself to a group, you need to log out and back in (or start a fresh session) for the change to take effect. Otherwise your shell will only act as though you're in the groups that you were in when your session started.
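You can see the discrepancy directly: id without arguments reports the groups of the current process, while id with a user name reports what the user database currently says — the two differ until a new session starts.

```shell
id -Gn                 # groups this shell actually holds (fixed at login)
id -Gn "$(id -un)"     # groups per the user database (reflects usermod etc.)
# To pick up a newly added group without fully logging out:
#   newgrp http        # starts a subshell with the new group active
```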
Read file permission denied despite sufficient permissions [duplicate]
1,490,107,419,000
Scenario: with a script run by a normal user I want to copy several files, and replicate the dir structure. For example: cp --parents /lib/libacl.so /tmp/my_root/ cp --parents /lib/libc.so.6 /tmp/my_root/ Expected result: the first cp creates /tmp/my_root/lib and puts libacl.so in there; the second cp puts libc.so.6 in /tmp/my_root/lib. The problem is that the first cp creates /tmp/my_root/lib with the following permissions: dr-xr-xr-x., so the second cp fails with Permission denied. Of course, if I run the script as root everything works fine. One solution could be to create the dir first and then copy the file, so the dir will have the proper permissions, but I was wondering if there was a better way of doing this, maybe some flag of cp? I checked the man page but didn't find anything.
I cannot see a solution with cp. You can use rsync to do the same sort of copies; it also creates a non-writeable lib dir, but it is capable of then adding new files to that dir. Your commands become: rsync -LR /lib/libacl.so /tmp/my_root/ rsync -LR /lib/libc.so.6 /tmp/my_root/ The -R preserves the directory structure. I added -L so that symbolic links are followed, as that seems to be what cp does, though usually this is not what is wanted. You can add -a to preserve permissions and timestamps (and owner/group if root). rsync is often used to copy files over the network, but it is a very versatile command. If you don't have rsync, some other commands which copy files, but create a writeable dir, are tar and cpio: tar cf - /lib/libc.so.6 | tar -C /tmp/my_root/ -xf - Or use tar chf - instead, so that symbolic links are followed. For cpio: ls /lib/libc.so.6 | cpio -pd /tmp/my_root/ with -L to follow links.
Folder permissions with cp --parents doesn't allow to copy another file
1,490,107,419,000
I am trying to get a script going that will move files in the apache webserver directory for me without sudo. In my mucking around I somehow got edit and move permissions working for user nhergert, but when I tried to trace my steps again it didn't work with tom. Permissions of the file (user www-data and group www-data are owners, the sticky bit is not set, permissions of 770): -rwxrwx--- 1 www-data www-data 1766 Jun 23 16:28 index4.html Members in group www-data: > getent group www-data www-data:x:33:nhergert,tom tom and nhergert are both members of www-data: > id nhergert uid=1000(nhergert) gid=1000(nhergert) groups=1000(nhergert),4(adm),24(cdrom),27(sudo),29(audio),30(dip),33(www-data),46(plugdev),109(lpadmin),124(sambashare) > id tom uid=1001(tom) gid=1001(tom) groups=1001(tom),33(www-data),1000(nhergert) Any ideas? Thanks!
Sorry everyone! Apparently changes to group membership don't take effect until the next time the user logs in. So tom's membership in www-data applied once I closed his terminal session and he logged back in.
Can't get server access permissions working
1,490,107,419,000
I have two writeable directories branch1 and branch2 tied together with AUFS on Debian 8 to the mount point union. Mount options: br=branch1=rw:branch2=rw branch1 and branch2 each contain a subdirectory dir with permissions 700. When I change the permissions with chmod 755 union/dir, only the first directory branch1/dir is altered, branch2/dir ramains as it is. Problem: Group and other users can't access union/dir even after setting chmod 755 because branch2/dir is still chmod 700. Is there a way to make AUFS apply changed permissions to all directories in the union or is it always limited to the topmost one?
It sounds to me like you have done everything right, so I would suggest filing a bug against aufs. PS: Kontrollfreak has done that already, and the fix is to use the dirperm1 mount option, according to http://sourceforge.net/p/aufs/bugs/21/#1293 . Thanks for your research @Kontrollfreak .
AUFS only changes permissions of the topmost directory
1,430,917,514,000
I am using Debian 7.0 and I added "debian-transmission" to the group "pi" with: usermod -a -G pi debian-transmission I made this change because transmission was not able to write to the HDD mounted by pi. Later I realised that pi also belongs to sudo. My question is: does debian-transmission now also belong to the sudo supplementary group just because it became a member of the pi group, which belongs to sudo? Or did the above command only make it a member of the pi group?
Sounds like you are getting confused between the pi user and the pi group. You cannot add a group to a group, only users. There is a group with the same name as the user, but the groups that user belongs to have no effect on the other users that are in the group. So your command only made debian-transmission a member of the pi group; it did not gain sudo membership, because group membership is not transitive.
What if I add a user to a supplementary group?
1,430,917,514,000
What I want to do is have the usual server running user files like this: http://users.example.com/~user1/stuff.php which would be stored somewhere in /home/user1/www or something like that, but running with that user's permissions. So, scripts in the /home/user1/www directory can't access files/folders in the /home/user2/www directory (unless UNIX permissions permit it). I read a solution using vhosts and a new pool for each vhost, but for hundreds of users that's probably too heavy. Is there any workaround? Note that I'm running nginx on a Raspberry Pi, so this is just a home project, nothing really serious.
Since nginx doesn't run php directly, but instead forwards requests to a php application, you care what the php binary is running as. I'm assuming you're running php-fpm, but the general idea isn't specific to php-fpm. The configuration page for php-fpm shows the directives that can be set. We're interested in the user and group params, as they control what user and group the php instances will run as (and therefore what permissions the script will run as). The chroot directive may also be of interest in creating a secure system (you may want to restrict each user to only the files available in /home/user1/www, not higher in the directory hierarchy). As you can see, they're set on a per-pool basis. So you could create several pools, one per user, and have each run under that user and group. You could also chroot to that user's webroot. This is the solution you linked. But since you have to set up a new php-fpm pool for every user, it's not something you can do on-demand. Instead, make creating the php-fpm pool part of your user creation process.
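A per-user pool stanza might look roughly like this (a sketch only — the pool name, socket path, and file location are illustrative and vary by distro; check the php-fpm pool documentation for the exact directives your version supports):

```ini
; e.g. /etc/php-fpm.d/user1.conf (location is distro-specific)
[user1]
user = user1               ; scripts in this pool run as user1
group = user1
listen = /run/php-fpm/user1.sock
listen.owner = nginx       ; nginx must be able to connect to the socket
listen.group = nginx
;chroot = /home/user1/www  ; optionally confine the pool to the webroot
pm = ondemand              ; spawn workers only when requests arrive
pm.max_children = 5
```

In the matching nginx location block, that user's requests would then be passed with fastcgi_pass unix:/run/php-fpm/user1.sock;. With many users, pm = ondemand keeps idle pools from consuming memory.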
nginx - userdir with user permissions
1,430,917,514,000
I was trying to run a script to set my Oracle environment in RHEL. I ran it as ./foo.env, but it wouldn't run because of a permissions issue. I then ran it as . ./foo.env, and it ran successfully. What's the difference between the two, exactly?
Running ./foo.env means you're trying to execute the file as a shell script. Running a file as a shell script means that the file must have executable permission for your account. Running . ./foo.env is the equivalent of source ./foo.env, which means you only need to have read permission to the file.
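You can watch both behaviours with a scratch file (the path and variable name are illustrative):

```shell
# Create a readable but non-executable file
printf 'export DEMO_ORACLE=set\n' > /tmp/foo.env
chmod 644 /tmp/foo.env

/tmp/foo.env 2>/dev/null || echo "execute: permission denied"
. /tmp/foo.env && echo "source: DEMO_ORACLE=$DEMO_ORACLE"
```

This is also why sourcing matters for environment scripts in the first place: variables set in an executed child process would vanish when it exits, while . runs the file in your current shell.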
Permissions issue with script
1,430,917,514,000
Good Day, There is a Linux user and group called "developers" (with no sudo access), with home folder /home/developers. There is nginx running on the same server, and there is a web folder for a site called "mysite.com". That web folder is /usr/local/nginx/html/mysite.com/. The folder's ownership is "nginx:developers" and its permissions are 770. Now here comes the problem. Users scp their files to the server using the "developers" user. Files get stored in the /home/developers folder WITH "developers" as owner (obviously). Users then have to copy those files, let's say "testfile.py", into /usr/local/nginx/html/mysite.com/. After the "developers" user copies a file into /usr/local/nginx/html/mysite.com/, the ownership of the file stays "developers:developers". Therefore, nginx cannot read the file. How can I make that work? How can I allow the user "developers" to transfer files into /usr/local/nginx/html/mysite.com/ with nginx:nginx ownership? (without giving sudo access). Or what can I do at the group level to make sure nginx can read those files? Edit: nginx and php-fpm run as "nginx". Thanks in advance.
what can I do on group level to make sure nginx will read those files? It doesn't sound like there would be any particular issue with putting the nginx user into the developers group: As root: usermod -a -G developers nginx A user can be in multiple groups. You can see which ones nginx is currently in with: grep nginx /etc/group # Or more simply: id nginx If putting nginx in developers poses some problem, you'll have to create a new group, put both the nginx and developers user in that, and make sure the transferred files have that GID. Note that you must log in again before new group memberships become available. For non-login accounts (presumably nginx is one) you can just use su or start a process which does the same (in this case, the nginx server itself should work).
How to allow users to copy files with different user and group?
1,430,917,514,000
I have a computer with read-write permissions for one of its directories. I lost the password that would let me log in to that directory. How can I delete this folder? I am using the following command to delete directories, but it is not working for this one: rm -rf /path/to/dir I am running Fedora Linux on my laptop. I used Cryptkeeper to protect this folder with a password and to encrypt it, and I can't access it now. I want to remove this folder.
cryptkeeper stores the files in a 'hidden' directory, one whose filename starts with a .. If you would normally have the files under /path/to/ in a directory called dir, then you should do ls -a /path/to/ and see a directory .dir there. That is the directory that contains the encrypted version of the files. Confirm that this is so with ls -l /path/to/.dir (notice the .); you should see the names of the files you had stored, and then proceed to delete with rm -rf /path/to/.dir/.
Delete a directory under Cryptkeeper without access to its account
1,430,917,514,000
In my Debian GNU/Linux based workstation I have a big nice disk (3TB), apart from my ssd where the OS is installed. I recently got an intel nuci5 and set it up as a home server, together with my Gigabit Ethernet I am able to transfer files between the server and the workstation at full speed (~120Mbyte/sec). What I would like to do is move the disk from my workstation to my server, set up an nfs-share on the server, mount the disk on my workstation and have everything still working like it was when the disk was local. What I don't know is how to set up /etc/exports and /etc/fstab on the server and /etc/fstab on the local machine for this to work. I know the basics for these files but I would like some help to get the correct parameters from start. Here is what I have now on my workstation. mount | grep green /dev/sdc1 on /mnt/green type ext4 (rw,relatime,data=ordered) cat /etc/fstab | grep nuci5 nuci5:/media/share /mnt/nuci5 nfs defaults 0 0 nuci5:/mnt/extra /mnt/nuci5-extra nfs defaults 0 0 ls -lah /mnt/ | grep green drwxr-xr-x 11 mihaly mihaly 4,0K okt 9 20:56 green cat /etc/passwd | grep mihaly mihaly:x:1000:1000:Mihaly Bak,,,:/home/mihaly:/bin/bash On my server: cat /etc/passwd | grep mihaly mihaly:x:1000:1000:Mihaly Bak,,,:/home/mihaly:/bin/bash cat /etc/exports /media/share 192.168.1.2(rw,sync,no_subtree_check) /media/share 192.168.1.*(ro,sync,no_subtree_check,insecure,all_squash) /mnt/extra/ 192.168.1.2(rw,sync,no_subtree_check) 192.168.1.2 is the IP of my workstation. Being that I have the same uid on both machines for my user and my user already owns all the files this should be rather easy, if I have understood anything correctly about nfs and linux permissions.
In your /etc/exports you need to replace the 192.168.1.* with 192.168.1.0/24, you can only use wildcards in hostnames. You also need to create the mountpoints on the client system, you only show the current mountpoint /mnt/green; /mnt/nuci5 and /mnt/nuci5-extra must also exist. Maybe they do, but you filtered those out in that case. Beyond that it should work. Personally I use async in /etc/exports because I'm not that worried about possible data loss and more interested in speed. Of course you need to make your own decision about that. I also use mount options soft,intr because I don't want things to hang indefinitely if the NFS server is not reachable; again, make your own decision about that.
Moving physical disk from local computer to network server
1,430,917,514,000
Is it possible to mount a flash drive without read permission?
You can choose the permissions of the files and directories on a vfat filesystem via the mount options. Pass fmask to indicate the permission bits to be masked off (i.e. not set) on files, and dmask for directories — the values work the same way as in umask. For example, to allow non-root users to only traverse directories but not list their content, and to create files and directories and overwrite existing files but not read back from any file, you can use fmask=055,dmask=044 (4 = block read permission, 5 = block read and execute permissions). You can assign a group with more or fewer permissions; for example, if you want only the creator group to be allowed to create directories, you can use the options gid=creator,fmask=055,dmask=046. This is a handy way of preventing the creator of a file from reading back the data written to the file. However, this is a rare requirement, and the obvious downside is exactly that: the creator cannot read back the data they wrote.
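Since each mask simply clears bits out of the full 0777 mode, you can preview the resulting modes with shell arithmetic before mounting (the octal values are the ones from the example above):

```shell
# effective mode = 0777 & ~mask
printf 'fmask=055 -> files get %03o\n' "$((0777 & ~0055))"   # rwx-w--w-
printf 'dmask=044 -> dirs  get %03o\n' "$((0777 & ~0044))"   # rwx-wx-wx
```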
Mounting a device without read permissions
1,430,917,514,000
In order to find a workaround for a problem I had yesterday (see the question here) I came up with another experiment. After inserting a flash drive (vfat) and mounting its only partition, I wondered: what if I change the permissions on the mount point? Well, that should solve all my problems, so I proceeded (as root): At first I tried to change the owner: root# chown root:root /media/MOUNT_POINT Note: /media/MOUNT_POINT was created automatically by the system. What a surprise when the command's answer was: Operation not permitted. What? There are things that are not allowed even as root? OK, that didn't stop me, and then I tried: root# chmod 000 /media/MOUNT_POINT This time, no messages, but after ls -l /media I got drwx------ 4 miranda miranda 4096 Apr 10 05:41 24EE-9E3C As you can see, the folder still has all its permissions. I tried all combinations from 000 to 666 (with a script of course) and the result was the same. What's happening? What am I missing? Or, even more important: can this be done? Thanks in advance.
The vfat filesystem does not support permissions. When you try to modify ownership or permissions on the mount point while the partition is mounted, it applies to the root directory of the mounted file system, not the directory that you are mounting on top of. If your goal is to make the filesystem read-only, try mounting with -o ro. You can do it without unmounting with mount -o remount,ro /media/MOUNT_POINT.
Operation not permitted. For root user?
1,430,917,514,000
Can anyone please tell me how I can change the permissions of files/directories owned by me only? Below is the command that lists files/directories owned by me: find . -user <username>
If your find already supports -exec {} +: find . -user "$username" -exec chmod ... {} + If it at least supports -print0 and your xargs supports -0: find . -user "$username" -print0 | xargs -0 chmod ... Or if there are no newlines in the file paths: find . -user "$username" | xargs -d '\n' chmod ...
list out owned file/directories and change its permissions
1,430,917,514,000
I have a PHP script that is attempting to create a file, but it cannot because of permissions. I would like to identify which user the system sees as the one requesting to create the file. I would like to know this in a general way, not just a specific solution on how to allow files to be created by PHP.
Unless the script is changing its UID, which requires the script to have the SUID bit set in its permissions, it is running as the user who invoked it. If this is a script run by a web server, that would usually be the userid of the web server. There are various ways for a script to determine what its UID is. If the scripting language has getuid and geteuid functions available, they can be used to get the real UID and effective UID. If not, running the id command from the script (either directly, or as a shell command) will return the UID being used. File creation permissions are controlled by the directory. The UID needs execute access to all the directories on the path. To create the file, the UID needs write and execute access to the containing directory. To list the created file, the UID needs read and execute access to the directory. Permissions can be gained by user, group, or other permissions. A last-ditch approach would be to change the permissions on the directory to 777 or 773 and run the script. If the script is not blocked by permissions of a directory on the path, the file will have the script's UID as the owner. Be sure to restore the original permissions after the test. I usually use group access to permit the web server to write directories. This is done by setting the group on the directories to the group id the web server (script) runs as.
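For example, a quick way to see which identity a script runs under is to call id from inside it (a minimal sketch; the labels are just for readability):

```shell
# Print the real and effective user the current process runs as,
# plus its group memberships.
printf 'real user: %s\n' "$(id -run)"
printf 'effective user: %s\n' "$(id -un)"
printf 'groups: %s\n' "$(id -Gn)"
```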
How do I identify the user that is attempting to create a file?
1,430,917,514,000
I have a couple of questions regarding Solaris 11... First off, as far as I can tell, Solaris has dropped the auth solaris.grant in Solaris 11... Why was it removed? Is it no longer needed? Did it get a new name? Are there alternatives, e.g. using several other auths to gain the same result? Can it be reintroduced by editing a file like /etc/security/auth_attr (or whatever the file that lists auths is called)? Second, how exactly does a user with the authority to grant or delegate rights (profiles, roles, auths) actually assign them to other users? Which commands are involved? Can rights be granted both permanently and transiently?
The .grant authorization has evolved into two finer-grained ones: .assign, which is unrestricted, and .delegate, with which you can only delegate profiles already assigned to you.
solaris.grant and granting/delegating rights?
1,430,917,514,000
I'm using openSUSE and my computer has only 2 accounts: root and me. I have a directory /srv/www/htdocs/abc. I just want to set it up so that when new folders or files are created in this directory they will have their permissions set to 777. When new files are created in this directory, I have to set them to 777 again to gain write access. I use root and type: chmod me /srv/www/htdocs But it doesn't work. How can I do that?
The use of chmod is for changing permissions; the command chown is for changing ownership of files & directories.

ownership

To change a directory's ownership you can use the following command:

$ sudo chown -R me /srv/www/htdocs

NOTE: we're using the sudo facility to elevate our privileges to the same level as root for these commands, without having to become root.

permissions

To change the permissions on this directory:

$ sudo chmod -R 777 /srv/www/htdocs

umask

Using the command umask sets up your terminal so that when files & directories are created their permissions can be influenced a bit. Caution should be used when using umask since there are situations where it won't give you permissions exactly the way you think. For a directory it seems fine:

$ for i in `seq 1 7`;do echo "umask: 00$i"; umask 00$i; rm -fr blah; mkdir blah;ls -l|grep blah;done
umask: 001
drwxrwxrw- 2 saml saml 4096 Jul 29 14:39 blah
umask: 002
drwxrwxr-x 2 saml saml 4096 Jul 29 14:39 blah
umask: 003
drwxrwxr-- 2 saml saml 4096 Jul 29 14:39 blah
umask: 004
drwxrwx-wx 2 saml saml 4096 Jul 29 14:39 blah
umask: 005
drwxrwx-w- 2 saml saml 4096 Jul 29 14:39 blah
umask: 006
drwxrwx--x 2 saml saml 4096 Jul 29 14:39 blah
umask: 007
drwxrwx--- 2 saml saml 4096 Jul 29 14:39 blah

However it won't let you have files exactly the way you might intend them to be with particular umasks. See umask 006 for example below:

$ for i in `seq 1 7`;do echo "umask: 00$i"; umask 00$i; rm -fr blah; touch blah;ls -l|grep blah;done
umask: 001
-rw-rw-rw- 1 saml saml 0 Jul 29 14:40 blah
umask: 002
-rw-rw-r-- 1 saml saml 0 Jul 29 14:40 blah
umask: 003
-rw-rw-r-- 1 saml saml 0 Jul 29 14:40 blah
umask: 004
-rw-rw--w- 1 saml saml 0 Jul 29 14:40 blah
umask: 005
-rw-rw--w- 1 saml saml 0 Jul 29 14:40 blah
umask: 006
-rw-rw---- 1 saml saml 0 Jul 29 14:40 blah
umask: 007
-rw-rw---- 1 saml saml 0 Jul 29 14:40 blah

There are others, this is just to highlight an example! So what should you do?
Given you're dealing with an Apache directory (based on the path /srv/www/htdocs) I'd look for a Unix group that your user me and the user that Apache is running as are both members of. You can use the groups command to determine this:

$ groups saml
saml : saml vboxusers jupiter newgrp

$ groups apache
apache : apache

You can also use the id command:

$ id saml
uid=500(saml) gid=501(saml) groups=501(saml),502(vboxusers),503(jupiter),10000(newgrp)

$ id apache
uid=48(apache) gid=48(apache) groups=48(apache)

On my system Apache is run by a user apache. Looking at this user you can see that it's in a single group apache as well. So one approach would be to add the user me to this group. For example, add me to the apache group:

$ sudo usermod -a -G apache me

The other approach would be to create another group and add both apache and me to this secondary group (for example, apacheplus), and then run this command on /srv/www/htdocs:

$ sudo chgrp -R apacheplus /srv/www/htdocs
Set default permissions for newly created folders and files in Linux
1,430,917,514,000
I just rsync'd a bunch of files, from a Windows box running cygwin sshd, to a CentOS 6.4 box. I ran rsync -e sshd... on CentOS to do this. Then I plugged in a USB drive on the CentOS box, formatted as ext4 using mkfs.ext4, and mounted it at /mnt/backup (with no extra options). Then I did chown on /mnt/backup, and ran rsync -vrlptg to copy the files from the CentOS box to /mnt/backup. A handful of random files (a few dozen of a few hundred thousand, mainly from 4 different directories, but not all of the files in those directories) failed with permissions errors. But when I ls -l on the CentOS box, it shows that I own all of them. If I sudo rsync instead, it copies everything without complaining. Why does it seem that rsync thinks I don't have permission to copy my own files to my own drive? Update: although earlier it said myuser owned them, I've since run a sudo rsync -e sshd and now most (but not all) files, despite being in a folder in my home drive on the CentOS box (/home/myuser) are now no longer owned by myuser myuser but instead are owned by 513 or dialout, which I never set up as users.
I wonder if your issue is related to this excerpt I found on a blog detailing how to use Cygwin, rsync, and ssh? The title of the article is: SSH and Rsync within Cygwin. When using an NTFS file system, Cygwin will, by default, apply POSIX-style file permissions using NTFS file permissions. In some cases this may not be desirable, as this can make it difficult to work with the files on the Windows server outside of Cygwin. This behavior can be altered by modifying the /etc/fstab file. Simply add/edit the line in this file to read as follows: none /cygdrive cygdrive user,noacl,posix=0 0 0 This would explain why the permissions were showing up as UID 513 or user dialout.
rsync "Permission denied" despite ownership
1,430,917,514,000
On an HP-UX B.11.31 that mounts a remote disk via NFS using mount point /BK_RESTORE, I would like to access a subdirectory as the oracle user, but I cannot, even though permissions are correct. Using a different normal user, like bsp, works as expected.

(from root)
root> ls -ld / /BK_RESTORE /BK_RESTORE/erpln /BK_RESTORE/erpln/import-su-macchina-di-test
drwxr-xr-x 41 root root 8192 Jul 8 09:43 /
drwxrwxrwx 2 root sys 131072 Jul 8 10:06 /BK_RESTORE
drwxrwxrwx 2 root sys 131072 Jul 8 09:44 /BK_RESTORE/erpln
drwxrwxrwx 2 root sys 131072 Jul 8 10:05 /BK_RESTORE/erpln/import-su-macchina-di-test

(from bsp)
bsp> ls -ld / /BK_RESTORE /BK_RESTORE/erpln /BK_RESTORE/erpln/import-su-macchina-di-test
drwxr-xr-x 41 root root 8192 Jul 8 09:43 /
drwxrwxrwx 2 bsp bsp 131072 Jul 8 10:20 /BK_RESTORE
drwxrwxrwx 2 bsp bsp 131072 Jul 8 09:44 /BK_RESTORE/erpln
drwxrwxrwx 2 bsp bsp 131072 Jul 8 10:05 /BK_RESTORE/erpln/import-su-macchina-di-test

(from oracle)
oracle> ls -ld / /BK_RESTORE /BK_RESTORE/erpln /BK_RESTORE/erpln/import-su-macchina-di-test
/BK_RESTORE not found
/BK_RESTORE/erpln not found
/BK_RESTORE/erpln/import-su-macchina-di-test not found
drwxr-xr-x 41 root root 8192 Jul 8 09:43 /

Please note that oracle lists the mount point with ls / but not with ls -l / (without giving any error). Moreover, when changing to this directory as the oracle user, I get this error:

cd /BK_RESTORE
sh: /BK_RESTORE: Permission denied.

Do you have an idea of what is happening? Thank you very much
So, it seems that HP-UX tricked me: while mount shows the file system as NFS, it was really a CIFS one. And, since no username and password were provided when mounting it, authentication is done via the cifslogin command. Probably this command was already issued for the root and bsp users, while it was never issued for the oracle user. Please note that cifslogin credentials are stored in a cifsdb database. I think that on this server all credentials were stored years ago, and now everyone here is completely unaware of this mechanism.
Directory "not found" on HP-UX for NFS mount point
1,430,917,514,000
What should the permissions be for /? I ask because I completely removed all files from my drive and reinstalled the OS, but I still get "permission denied" errors for /bin/zsh when I try to log in as a non-root user. The only thing that is the same is the root of the drive. If that's the issue, I would like to not have to format and reinstall again to fix it. (any suggestions for that problem would be appreciated too.)
The root directory should be owned by root:root with mode 755 (drwxr-xr-x): start cmd:> ls -ld / drwxr-xr-x 30 root root 4096 27. Jun 21:36 /
Permissions on `/`
1,430,917,514,000
I was about to back up my Arch Linux, following this guide, to my FritzBox (which enforces an NTFS system that I mounted via samba) when I remembered that NTFS is not capable of keeping permissions and other stuff like symlinks. There is also no encryption available, which is really bad because it would break the idea of encrypting my laptop :) So I was wondering if there is a method which can do incremental backups like rsync but creates something like an encrypted tarball 'on the fly'?
I would mount ecryptfs on the ntfs filesystem and still use rsync - as long as you don't need to read the files from windows. Just watch out for long path names as you might hit some trouble: http://www.telmon.org/?p=631 As a side note, I haven't tried this but the encfs4win project looks good if you need access from windows as well: http://members.ferrara.linux.it/freddy77/encfs.html
backup / on ntfs filesystem encrypted
1,430,917,514,000
Given the fact that we can give any number of groups rwx via POSIX ACLs, are there any special privileges given to the owning group? For example, we can grant access to any number of users with ACLs, but only the owning user (and root) can manipulate permissions. Is it just traditional that elevated permissions were given to a group of people, or does the owning group have additional rights?
Not all filesystems handle ACLs. ACLs are a more general mechanism (they give more fine-grained control) than the traditional Unix user/group/others permissions, but they are harder to get right. One concrete difference: the owning group is the ACL_GROUP_OBJ entry of the ACL, and once a mask entry exists, the group permission bits shown by ls (and set by chmod) represent the ACL mask, which caps the permissions of every named user, named group, and the owning group. Select what is best for your case.
Function of owning group versus ACL group permissions
1,430,917,514,000
My current primary question motivator:

$ ls -l /sys/devices/platform/samsung
total 0
-rw-r--r-- 1 root root 4096 27. jaan 14:17 battery_life_extender
drwxr-xr-x 3 root root 0 19. jaan 18:40 leds
-r--r--r-- 1 root root 4096 26. jaan 23:37 modalias
-rw-r--r-- 1 root root 4096 27. jaan 12:57 performance_level
drwxr-xr-x 2 root root 0 24. jaan 00:35 power
drwxr-xr-x 4 root root 0 19. jaan 18:40 rfkill
lrwxrwxrwx 1 root root 0 27. jaan 13:03 subsystem -> ../../../bus/platform
-rw-r--r-- 1 root root 4096 27. jaan 13:03 uevent
-rw-r--r-- 1 root root 4096 26. jaan 23:37 usb_charge

I'd like to modify these without sudo during a desktop session. A privileged startup script is perfectly OK. It feels like the solution is to have some sort of generic insmod parameters?
One solution would be a script which changes permissions on those files using chmod (or hands them to your user with chown), and then setting up your system so that it starts the script on bootup.
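A minimal sketch of such a script, assuming the attribute names from the question and a hypothetical fixup_attrs helper; run it as root at startup (e.g. from rc.local or a systemd unit):

```shell
#!/bin/sh
# Relax the mode of selected sysfs attributes so an unprivileged desktop
# user can write them. Use chown to a dedicated group (or to your user)
# instead of mode 664 if you want tighter control.
fixup_attrs() {
    dir=$1; shift
    for attr in "$@"; do
        if [ -e "$dir/$attr" ]; then
            chmod 664 "$dir/$attr"
        fi
    done
}

# At boot (as root), for the attributes from the question:
# fixup_attrs /sys/devices/platform/samsung \
#     performance_level battery_life_extender usb_charge
```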
Is there a generic approach to automatically make some sysfs controls ch{own,mod} user-accessible?
1,355,922,388,000
I'm writing software for an embedded Linux system and I'm using an NFS share as the root directory. The root filesystem resides in /srv/nfs/rootfs, and it is exported using the following /etc/exports: /srv/nfs *(rw,no_root_squash,no_subtree_check,async,wdelay) The contents of /srv/nfs/rootfs need to be owned by root, otherwise the target system will have trouble mounting /dev. However, I need to be able to modify the files in /srv/nfs/rootfs as a regular user, and I don't want to add sudo to my scripts or run sudo on every other command. Is there some way of configuring NFS to fake root privileges on /srv/nfs/rootfs? I was thinking of trying to run nfs using fakeroot, but that does not seem like the best solution to me.
You can use ACLs on /srv/nfs/rootfs to give additional user(s) write access - see man setfacl / man getfacl for more info. If you decide to go this way, using the default ACLs might be a good idea - this would ensure that the ACLs set for directories get propagated on file/directory creation. Yet there is a catch: the newly created files will be owned by the user creating them. Hence you might want to run chown root: periodically on your NFS export or hook something to FAM (File Alteration Monitor, e.g. gamin). As a side note: exporting writable things world-wide (/srv/nfs *(rw,...)) is almost never a good idea, no matter how much one thinks his/her environment is isolated from the rest of the universe.
Can I export an NFS share with faked root privileges
1,355,922,388,000
I am using Ubuntu 12.04 and have configured apache to serve from ~/public_html. I am trying to serve some directory contents over http on the LAN. When I did the following: ln -s ../Videos/android-internals-marakana/ android-internals-marakana I was able to see the specified directory at localhost (in a browser) with my public_html directory contents as follows:

k4rtik: public_html $ ls -l
total 12
lrwxrwxrwx 1 k4rtik k4rtik 37 May 27 15:59 android-internals-marakana -> ../Videos/android-internals-marakana/
drwxrwxr-x 2 k4rtik k4rtik 4096 May 19 13:05 cgi-bin
-rw-rw-r-- 1 k4rtik k4rtik 1406 May 19 12:20 favicon.ico
-rw-r--r-- 1 k4rtik k4rtik 178 May 19 10:21 nindex.html

But when I similarly try creating a link to the android documentation with ln -s ../bin/android-sdk-linux/docs/ droid-docs I get the symbolic link in the directory listing but not at localhost in the browser. I have checked everything I could on my own - directory permissions, validity of the link, typing the dir name in the url directly (received Forbidden - You don't have permission to access /droid-docs on this server. there). Any clue on what's going on and how to get this to work? Is it because the bin folder is somehow special compared to other folders in my home directory?
As Ulrich Dangel points out in his comment above - the whole directory hierarchy leading to the required directory should be accessible to apache in order for it to serve the directory and its listing. I had to chmod ~/bin/android-sdk-linux to 775 which was originally set to 770.
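One way to spot the blocking component is to list the permissions of every directory on the path (walk_perms is a hypothetical helper; namei -l from util-linux does much the same):

```shell
# Print each directory component leading to a path; the web server
# needs execute (x) permission on every one of them.
walk_perms() {
    p=$1
    while [ "$p" != "/" ] && [ "$p" != "." ] && [ -n "$p" ]; do
        ls -ld "$p"
        p=$(dirname "$p")
    done
    ls -ld /
}

# Example, for the path from the question:
# walk_perms ~/bin/android-sdk-linux/docs
```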
How does apache determine what directory to show from public_html?
1,355,922,388,000
I've installed Transmission on my Raspberry Pi with Raspbian on. Want to download torrents (legally of course) to an external hard drive. Permission is denied. root is owner and group for the drive I've tried to change permissions on the drive following a lot of different instructions here and other forums but can't make it. Found some information on that it's impossible to change permissions on a disk with exFAT. What workaround could I do? My main user is "pi" and I think that's the one Transmission uses. EDIT: Add content to fstab proc /proc proc defaults 0 0 PARTUUID=50913804-01 /boot/firmware vfat defaults 0 2 PARTUUID=50913804-02 / ext4 defaults,noatime 0 1 # a swapfile is not a swap partition, no line here # use dphys-swapfile swap[on|off] for that UUID=67E3-17ED /mnt/67E3-17ED auto defaults,nofail 0 0 UUID=652F-FA93 /mnt/652F-FA93 auto defaults,nofail 0 0 EDIT 2: lsblk --fs $ lsblk --fs NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS sda ├─sda1 vfat FAT32 EFI 67E3-17ED 196,9M 0% /mnt/67E3-17ED └─sda2 exfat 1.0 8TB 652F-FA93 5,8T 20% /mnt/652F-FA93 mmcblk0 ├─mmcblk0p1 vfat FAT32 bootfs D3E6-3F09 436,8M 14% /boot/firmware └─mmcblk0p2 ext4 1.0 rootfs cb6f0e18-5add-4177-ab98-e9f0235e06b3 42,7G 58% / EDIT 3: Changed fstab pi@raspberrypi:~ $ lsblk --fs NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS sda |-sda1 | vfat FAT32 EFI 67E3-17ED 196,9M 0% /mnt/67E3-17ED `-sda2 exfat 1.0 8TB 652F-FA93 5,8T 20% /mnt/652F-FA93 mmcblk0 |-mmcblk0p1 | vfat FAT32 bootfs | D3E6-3F09 436,8M 14% /boot/firmware `-mmcblk0p2 ext4 1.0 rootfs cb6f0e18-5add-4177-ab98-e9f0235e06b3 48,4G 54% / pi@raspberrypi:~ $ ls -la /mnt totalt 265 drwxr-xr-x 4 root root 4096 21 okt 16.28 . drwxr-xr-x 18 root root 4096 10 okt 06.06 .. drwxr-xr-x 4 root root 262144 7 nov 10.27 652F-FA93 drwxr-xr-x 2 root root 512 1 jan 1970 67E3-17ED Thanks in advance for the help!
1. You should confirm that you have the packages installed that are needed to handle EXFAT file systems: $ sudo apt update ... $ sudo apt install exfat-fuse exfat-utils If they're already installed, apt install will inform you of this, and do nothing further. 2. Here's the change that should probably be made to your /etc/fstab file: FROM: UUID=67E3-17ED /mnt/67E3-17ED auto defaults,nofail 0 0 UUID=652F-FA93 /mnt/652F-FA93 auto defaults,nofail 0 0 TO: UUID=67E3-17ED /mnt/67E3-17ED auto defaults,nofail 0 0 UUID=652F-FA93 /mnt/652F-FA93 auto uid=pi,gid=pi,defaults,nofail 0 0 I say probably because I don't think you intended to download any torrents to your EFI (FAT) partition, so there's no point in changing anything. In fact, you may not actually need to include the FAT partition in your /etc/fstab file at all. But, if I'm wrong, you can give the FAT partition the same treatment you gave the EXFAT partition. If repairing the permissions is all you're interested in, you need not read the remainder of this answer. I include the remainder only to provide some background that may be helpful in the future (if your future includes such things as editing /etc/fstab files :) The cause of this permissions confusion when using EXFAT is basic: The EXFAT filesystem has no owner/permission metadata. The owner/permission data is set when the filesystem is mounted, and it cannot be changed (unless the filesystem is remounted). This is why you see questions from time-to-time asking why chown and chmod operations fail on an EXFAT partition. We've seen that ownership of an EXFAT partition is set at mount time using the uid= and gid= parameters. Permissions may also be changed; the umask, dmask and fmask parameters are used for this purpose. All (or most) of this is covered in the system manual: man mount.exfat-fuse. The challenge here is knowing the name of the manual! 
:) Which brings up a few final points with respect to formulating an entry in /etc/fstab: I feel that use of the auto parameter in the third field (fs_vfstype) of /etc/fstab is a mistake... if you're using/editing /etc/fstab you should at least know what file system type you're going to mount! Likewise, I feel the same re use of the defaults parameter in the fourth field (fs_mntops). I dislike the use of UUIDs for identifying volumes to be mounted; a UUID is effectively a random number, and why use a random number to identify a volume to be mounted in /etc/fstab? ... will you remember it next week? I much prefer LABELs for identifying mounts. In the case of an EXFAT partition, the command for creating a label is: sudo exfatlabel /dev/sda2 "TORRENT_STORE" Consequently, my final suggested change to your /etc/fstab entry is this: LABEL=TORRENT_STORE /mnt/652F-FA93 exfat uid=pi,gid=pi,rw,user,nofail 0 0
How to give Transmission permissions to write on external HD on Raspbian Raspberry Pi media server?
1,355,922,388,000
Given that an app is writing files with 700 permissions, which means they are not readable by the group, I need these files to be readable by a special backup user account. ACL seems to be completely useless since chmod modifies the mask and users other than the owner or other groups still have no permissions. Periodically doing a recursive chmod? Hackish and a waste of resources, especially on very large file sets. Using root for this (via sudo)? Not secure. No fine-grained control. Not appropriate for rsync initiated on a remote machine, for example a NAS. Using the same user as the app making those files, i.e. using the owner user for backups? Still hackish and not desired, since the backup user should have only read access and only for given directories. Especially important for rsync initiated from outside. Mounting the given directory to another directory using something like bindfs? The performance hit is significant and not justified for such a simple permission-related task. I wonder why Linux doesn't allow administrators to dictate applications' minimal allowed permissions on files (the maximum permissions can be limited using a mask, but a minimum cannot be enforced or transparently added, which seems weird). Any solutions?
In a similar situation I've taken the chmod approach, but managed through a loop started at boot time and driven by inotifywait. This is a cut-down version of the code I actually use, which also handles ownership/groupship, and logs all changes or errors:

#!/bin/bash
inotifywait --monitor --recursive --event create,attrib --format '%w%f' "$@" |
    while IFS= read -r item
    do
        # Avoid looping when we apply the fix-up chmod
        perm=$(stat -c %A "$item")

        if [[ -d "$item" ]] && [[ ! "$perm" =~ dr.xr.xr-x ]]
        then
            # Directory with wrong permissions
            printf 'dir\t%s\n' "$item"
            chmod ug+rx,o=rx "$item"
        fi

        if [[ ! -d "$item" ]] && [[ ! "$perm" =~ -r..r..r.. ]]
        then
            # Item (not a directory) with wrong permissions
            printf 'other\t%s\n' "$item"
            chmod ug+rw,o-w,o+r "$item"
        fi
    done

Although you get the hackishness of chmod and the code's inability to handle filenames containing newlines, this is moderated by a tool that only fires when a file is created or its attributes modified. Acceptable for large sane datasets. You would probably want to supplement this with an occasional process that fixed up all permissions unconditionally. This example ensures user/group permissions have a minimum of read (and execute if a directory), and others' permissions are the same but without write:

find /path/to/base -type d -exec chmod ug+rx,o=rx {} \; -o -exec chmod ug+rw,o-w,o+r {} \;

If I were writing the solution today I'd probably consider incron rather than the loop.
Override 700 permissions so it became group-readable or at least accessible for given user but the owner?
1,355,922,388,000
On my embedded system I enabled CONFIG_CONFIGFS_FS=y to have access to the configfs. When booted, I mounted it with the help of mount -t configfs none /sys/kernel/config. That works like a charm: # mount | grep configfs configfs on /sys/kernel/config type configfs (rw,relatime) Now I try to create a folder device-tree, as I wanted to try out the dynamic loading of dtbo files from userspace. Unfortunately I get an error: # mkdir -p /sys/kernel/config/device-tree/overlays/dummy mkdir: can't create directory '/sys/kernel/config/device-tree/': Operation not permitted I already made sure that CONFIG_OF_DYNAMIC and CONFIG_OF_OVERLAY are set. The permissions of /sys/kernel/config are: # ls -la /sys/kernel/config/ total 0 drwxr-xr-x 2 root root 0 May 31 16:57 . drwxr-xr-x 8 root root 0 May 31 15:56 .. So I'd have guessed that writing to this directory as root should not be a problem at all. Any hints on how I could investigate this issue?
My problem was that I used the mainline kernel 6.1 (LTS), which does not support CONFIG_OF_CONFIGFS. So I downloaded a dtbo-configfs device driver from here: https://github.com/ikwzm/dtbocfg, compiled it and loaded it into the kernel. Then, after mounting the configfs, I already had the device-tree directory available.
mkdir in configfs not permitted
1,355,922,388,000
If a user already has read and write permissions on a file, can't they just copy the contents of the file, delete it, and recreate it allowing execute permissions for themselves? Does a lack of the 'x' bit ever stop a user from being able to execute the contents of the file?
And if the user is the owner of the file, they can set it to be readable and then read it, even if there was no read permission to begin with. And they can set it to be writeable and edit it. And they can remove it if they have the write bit on the containing directory. So the "w" and "r" bits seem useless too, following your logic. Those are called file modes, not just permissions, for a reason. They only become permissions when we talk about the access of other users to a given file. There is usually no problem if a user runs a program. It is, finally, why computers exist: for users to be able to run programs. Sometimes users write their own programs (or scripts) and they might need to be able to build and run them. The system must be designed so that a program running under ordinary user privileges can't do much harm to other users. Some programs are more privileged (some can be SUID, for example), but a user can't create such a program themselves; they need a superuser to set a sensitive special mode bit on a file. However, if your goal is to create a very confined environment which won't let users run arbitrary programs but only those you've installed as an admin, you need to use additional measures. Those measures include:

- Mounting all user-writeable directories (homes, tmp and so on) as no-execute, so a user can't create an "executable" file. However, this doesn't prevent it fully; users can still run their own scripts by passing them to installed interpreters, and often this is enough to do anything they want.
- Using mandatory access controls like SELinux. This is very secure, however, it is cumbersome as well.
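The interpreter loophole mentioned above is easy to demonstrate (a minimal sketch using a throwaway temp file):

```shell
# A script with no execute bit cannot be run directly, but handing it
# to an interpreter sidesteps the x bit entirely.
s=$(mktemp)
echo 'echo ran anyway' > "$s"
chmod a-x "$s"
"$s" 2>/dev/null || echo 'direct execution denied'
sh "$s"
```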
Is the unix permissions execute bit redundant?
1,355,922,388,000
Is there a way to create nested directories which all have the same user/group in a single command? That single command would have the same effect as the following two commands: mkdir -p new-1/new-2/new-3 chown -R myUser:myUser new-1
I cannot add a comment, so I am posting this as an answer. Have a look at install, see man install(1). install -d -g myUser -o myUser new-1 new-1/new-2 new-1/new-2/new-3 Or, if you don't want to repeat the directory names (run as root): sudo -g myUser -u myUser mkdir -p new-1/new-2/new-3
Create nested directories with the same user/group in a single command
1,355,922,388,000
I am using EndeavourOS with i3 and am trying to put stuff on a thumb drive, but the Thunar file manager isn't seeing it. When I run fdisk -l and lsusb I see the thumb drive, but it doesn't show up in Thunar. I tried mount /dev/sdc, but I get the response mount: /dev/sdc: can't find in /etc/fstab. I also tried chmod -R 0777 /dev/sdc which runs without error, but nothing changes. I am not really sure what to do at this point. Below is the useful bit of my output from fdisk and lsusb.

lsusb:
Bus 001 Device 005: ID 154b:00ed PNY USB 3.1 FD

fdisk:
Disk /dev/sdc: 28.91 GiB, 31042043904 bytes, 60628992 sectors
Disk model: USB 3.1 FD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 304380E4-7F38-4C05-9262-5A90B5275272
With help from Freddy in the comments I found the issue was that the thumb drive didn't have a partition. Creating one with GParted solved the problem!
Unable to see thumb drive through file manager
1,666,864,697,000
In my Home Lab, I have an Ubuntu 20.04 Server and a Raspberry Pi as a VPN. I have two separate Samba shares on both machines. What I'm trying to do is mount the Samba share from the Ubuntu server on the RPi and then connect to the RPi's Samba share. In order to have write access to the Ubuntu server's Samba share, I'm trying to change the ownership of the cifs-mounted share on the RPi server.

Ubuntu Server (Samba share)
  --> mounted via cifs on a folder inside the Raspberry Pi's Samba share
Raspberry Pi (Samba share) --VPN--> My Laptop

The command I'm mounting with: sudo mount -t cifs -o credentials=xyz //ip_address/folder_name /path/to/mount Running sudo chown username:username * -v returns changed ownership of "files" from root:root to username:username but when I check, it's still root:root. Is there a setting that I'm missing, or should I forward some ports (different from 139 and 445) and not deal with this setup?
Try using the mount options; network filesystems inherit their permissions at the filesystem level, at mount time. Here are a few options that could help: mount -t cifs -o rw,uid=1000,user=$User //$Server/$Share /mnt/$Directory https://www.samba.org/~ab/output/htmldocs/manpages-3/mount.cifs.8.html
Can't change ownership of cifs mounted Samba Share
1,666,864,697,000
User: root /usr/bin/rsync -hprltaq --include '*/' --include '*.gz' --exclude '*' "/var/cache/backup/backup-20210817mysql.tar.gz" <storage_IP>::my-app/backup/sql_backup/ error: rsync: chgrp "/backup/sql_backup/.backup-20210817mysql.tar.gz.nWPkhd" (in my-app) failed: Operation not permitted (1) rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1196) [sender=3.1.2] Even though rsync is throwing an error, the file is being copied to the destination (storage server). Permissions on the storage server:

ls -al backup/
drwxr-xr-x 9 nobody nogroup 4096 Oct 3 2020 .
drwxr-xr-x 81 nobody nogroup 12288 Aug 17 08:00 ..
drwxrwxrwx 2 nobody nogroup 4.0K Aug 17 12:54 sql_backup

I have also tried 777 permissions but am getting the same error.
The -a (--archive) flag you specified to rsync includes the request to set owner and group for the destination files/directories to those of the source items. It seems likely that the storage server is not running the rsync service as root, and the user under which it is running is not a member of the group that owns the source file /var/cache/backup/backup-20210817mysql.tar.gz. There are three possible solutions that I can see:

1. Test if the storage server has extended attributes available in its filing system, and if so then use them to store owner/group and other useful metadata: rsync -M--fake-super --numeric-ids -ahq ... This would be the best solution. To save and restore files, always include -M--fake-super --numeric-ids.

2. Remove owner/group identity from the metadata written to the storage server. You would then be responsible for restoring this "manually": rsync -ahq --no-o --no-g ...

3. Run the service on the storage system as root. This presupposes a Linux/UNIX-based operating system, that you can change the service owner, and that you are happy with the risks. Not recommended, but just listed as a possibility.
rsync: chgrp "/backup/sql_backup/.backup-20210817mysql.tar.gz.nWPkhd" (in my-app) failed: Operation not permitted (1)
1,666,864,697,000
I have an SSH-only user account called pgbackrest with its password disabled. I created the directory /etc/pgbackrest/ as root (sudo mkdir -p /etc/pgbackrest), then created a config file in that directory, changed its ownership to the pgbackrest account, and ran sudo chmod 640 /etc/pgbackrest/pgbackrest.conf.

When I try to edit it with either of:

    sudo su - pgbackrest    # then: vim /etc/pgbackrest/pgbackrest.conf
    sudo -u pgbackrest vim /etc/pgbackrest/pgbackrest.conf

I get "/etc/pgbackrest/pgbackrest.conf" [Permission Denied] in the vim editor.

Do the permissions on the parent directory prohibit the edit? 640 means the owner and group members can read and write. I am accessing the file as the owner, so why am I getting access denied?
In order for a user to be able to access a file, they also need permissions on all the directories in the path leading to it. At a bare minimum they need x permission on each directory. In your case (from comments) you have:

    drwxr-x--- 3 root root 4096 Aug 10 22:56 pgbackrest

And the file itself:

    -rw-r----- 1 pgbackrest pgbackrest 0 Aug 10 01:12 pgbackrest.conf

This means the pgbackrest user cannot get to the file to read it. A simple solution would be to add x permissions:

    chmod a+x /etc/pgbackrest

Now anyone who knows the name of a file in that directory can try to access it, but the file permissions still take effect. So only the pgbackrest user, or anyone in the pgbackrest group, will be able to open the file; everyone else will be blocked.
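A quick way to see the effect in isolation, using scratch paths under /tmp instead of /etc:

```shell
# Recreate the shape of the problem, then apply the fix from above.
mkdir -p /tmp/pgdemo/pgbackrest
printf 'config\n' > /tmp/pgdemo/pgbackrest/pgbackrest.conf
chmod 750 /tmp/pgdemo/pgbackrest                    # rwxr-x--- like /etc/pgbackrest
chmod 640 /tmp/pgdemo/pgbackrest/pgbackrest.conf    # rw-r----- like the config file

# The fix: grant traverse (x) to everyone; read on the file is still
# controlled by the file's own 640 mode.
chmod a+x /tmp/pgdemo/pgbackrest
stat -c '%a' /tmp/pgdemo/pgbackrest                 # -> 751
```

With 751 on the directory, any user who knows the file name can reach it, but only the owner and group named on the file itself can open it.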
Reading/Accessing a file owned by user account which is under a directory that is owned by root
1,666,864,697,000
I'm pretty new to Linux and I really don't know how to fix this. I accidentally ran sudo chmod -x /*. Now every command I run returns "Permission denied", even the sudo command. Please help!
Unfortunately you will not be able to do much from that OS, because you cannot run any binaries in /usr/bin/ or /usr/sbin/. What you can do is make a live USB of any Linux OS, boot into the live environment, mount your root partition at some mountpoint (say /mnt), undo what you did, i.e. run sudo chmod +x /mnt/*, then finally reboot.
Accidentally ran "sudo chmod -x /*", is a fix possible?
1,606,321,004,000
I have a private web server which runs under user 'nobody'. This web server occasionally needs to access another server over SSH automatically. When this happens, I cannot be present to enter a password. Therefore, I created a file with permissions 700 and assigned it to user nobody (chmod and chown). However, when accessing that file using 'sudo -u nobody cat testfile', I still cannot access it. I therefore assume I cannot create an rsa_id with the correct permissions for the web server to use. Is it possible to create a file which the nobody user can use to connect over SSH? If not, is the next best thing to create a user specifically for the server, with a home directory, and then use that user's rsa_id? PS: I'm on CentOS. Thanks for reading!
"I created a file with permission 700 and assigned user nobody (chmod and chown) to it. ... I still cannot access the file"

You can't if it's in a directory that nobody cannot access. Find out which user the web server actually runs as; it might not be nobody to begin with. Then create a distinct SSH key for it and put it somewhere the web server can read. The config directory might be a good candidate (/etc/apache or /etc/httpd or whatever it is in CentOS).

Then note that unless the scripts on your web server run under distinct users, you probably can't limit access to that SSH key to just one script; anything that runs in the web server can use it.
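A hedged sketch of the key setup. The directory path is a made-up placeholder; in practice you would put the key under the web server's config directory and chown it to the actual service user:

```shell
# Generate a passphrase-less key pair for non-interactive SSH,
# then restrict it so only its owner can read it.
keydir=/tmp/webserver-ssh          # placeholder for e.g. /etc/httpd/ssh
mkdir -p "$keydir"
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keydir/webserver_key"
chmod 600 "$keydir/webserver_key"

# The web application would then connect with something like:
#   ssh -i /tmp/webserver-ssh/webserver_key -o BatchMode=yes user@remote 'command'
```

BatchMode=yes makes ssh fail immediately instead of prompting, which is what you want for a daemon. The public key (webserver_key.pub) goes into authorized_keys on the remote host.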
How to allow user 'nobody' to access rsa_id?
1,606,321,004,000
I have a hard drive (formatted with NTFS) that I need to auto-mount in fstab. I basically want this to be like my home directory, so I (and other programs) should be able to write to it, read files, create directories and so on. Here is what I currently have in my fstab:

    UUID=7099E21207CE11EC /mnt/v auto umask=022,dmask=022,uid=1000,gid=985 0 0

I went through a lot of iterations with that and I don't remember exactly what I tried, but the furthest I got was being able to read from it. I'm using Arch, btw; my user ID is 1000 and I have a group called users which has the ID 985, although I tried setting the gid to 1000 as well. Sorry if this is a noob question, but I don't really understand file permissions and ownership and really need this to work.
As @parsa-mousavi pointed out, you definitely have to add the "rw" option to the 4th field:

    UUID=7099E21207CE11EC /mnt/v auto rw,umask=022,dmask=022,uid=1000,gid=985 0 0

BTW: you will often see "defaults" here instead, which among other things means "rw".

That's the obvious thing, but there might be other pitfalls:

- Verify that the filesystem is mounted with the correct filesystem type by checking the output of mount | grep '/mnt/v'. You want to see "ntfs-3g" here. I haven't used NTFS in years, but in the old days the "ntfs" driver had limited capabilities (it only allowed read access), whereas the third-generation NTFS driver also supports safe write access.
- If the filesystem hasn't been unmounted cleanly, Linux drivers may have problems (again, limitations). When you boot Windows it will repair these issues (this usually happens automatically during system start). Make sure you shut Windows down cleanly before trying again in Linux.
- Try dropping the umask, dmask, uid and gid parameters. The man page says: "By default, files and directories are owned by the effective user and group of the mounting process, and everybody has full read, write, execution and directory browsing permissions. You can also assign permissions to a single user by using the uid and/or the gid options together with the umask, or fmask and dmask options."
Auto mount partition with read/write permissions
1,606,321,004,000
    $ mkdir test
    $ chown gtgteq:users test
    $ chmod g+s test
    $ touch test/a
    $ touch b
    $ mv b test/
    $ ls -l test
    total 0
    -rw-r--r-- 1 gtgteq users  0 a
    -rw-r--r-- 1 gtgteq gtgteq 0 b

How can I automatically change the group of moved files (b)?
Eventually I wrote a bash function:

    mvs() {
        local dest
        if [[ $# -ne 2 ]]; then
            return 1
        fi
        if [[ -d $2 ]]; then
            dest="$2/$(basename "$1")"
        else
            dest="$2"
        fi
        mv "$1" "$2" || return $?
        chown "$USER":users -R "$dest"
        chmod g+rw -R "$dest"
        find "$dest" -type d -exec chmod g+xs {} ';'
    }
How to apply group change via the dir's setgid bit when moving files with 'mv'?
1,606,321,004,000
A short introduction about myself: I have just installed Feren OS. As a web developer I work with the Linux and Unix command line on a daily basis, but only for quite simple stuff, so I still consider myself a Linux noob.

Edit: This question has now been downvoted once. If you downvote it, please consider letting me know what is wrong with it.

My problem

First of all, I installed Apache2 from the command line (using apt). I created a fully functioning website in /var/www/projects/some_website/web. This part is what I am familiar with from my daily job.

I also installed PHPStorm (flatpak) from the store. The store itself is (according to its About box) a Feren OS custom-made store. However, I'd say that has nothing to do with the problem I'm having. In Feren OS I am logged in as user "paul".

Now when, as user "paul", I open the terminal, I can go anywhere. For instance I can go to /var/www. I can create directories there, chown them, etc. Like I said, this part I am familiar with.

However, from PHPStorm I can go to /var, but there I can't see all the directories. Even when I type /var/www in the address box there, it can't go there. As a comparison I also opened the text editor Kate. That program can access /var/www as I would expect it to.

What I tried to fix it

The answers I found on the internet mainly mention that the user should be added to the www-data group and that /var/www should be chowned by that group, for example this one. While I consider this solution nonsense in my case, I did try it. I consider it nonsense because other applications running under the same user can access the directory. Of course I could be wrong here, so please let me know :)

To be sure, I added user "paul" to the "www-data" group and chowned /var/www to that group, as you can see in the attached screenshot.

Another possible solution I found is installing PHPStorm via snap instead.
I haven't tried that yet, because I would like to understand what I am doing wrong first. Last but not least, I uninstalled the flatpak package in the Feren OS store and then re-installed the flatpak package from the command line:

    flatpak install PhpStorm

This also does not change anything.

My questions

1. What is the difference between PHPStorm on the one hand and Konsole and Kate on the other, that makes PHPStorm unable to access /var/www while all the other tools can?
2. What should I do to fix this problem?

Investigation

As asked in the comments, I checked which user is running PHPStorm:

    ps -ef | grep -i phpstorm
    paul 2241 1332 0 13:09 ? 00:00:00 /usr/libexec/flatpak-bwrap --args 31 phpstorm
    paul 2267 2241 0 13:09 ? 00:00:00 /usr/libexec/flatpak-bwrap --args 31 phpstorm
    paul 2268 2267 0 13:09 ? 00:00:00 /bin/sh /app/extra/phpstorm/bin/phpstorm.sh
    paul 2309 2268 99 13:09 ? 00:00:19 /app/extra/phpstorm/jbr/bin/java -classpath /app/extra/phpstorm/lib/bootstrap.jar:/app/extra/phpstorm/lib/extensions.jar:/app/extra/phpstorm/lib/util.jar:/app/extra/phpstorm/lib/jdom.jar:/app/extra/phpstorm/lib/log4j.jar:/app/extra/phpstorm/lib/trove4j.jar:/app/extra/phpstorm/lib/jna.jar -Xms128m -Xmx968m -XX:ReservedCodeCacheSize=240m -XX:+UseConcMarkSweepGC -XX:SoftRefLRUPolicyMSPerMB=50 -ea -XX:CICompilerCount=2 -Dsun.io.useCanonPrefixCache=false -Djdk.http.auth.tunneling.disabledSchemes="" -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Djdk.attach.allowAttachSelf=true -Dkotlinx.coroutines.debug=off -Djdk.module.illegalAccess.silent=true -Dawt.useSystemAAFontSettings=lcd -Dsun.java2d.renderer=sun.java2d.marlin.MarlinRenderingEngine -Dsun.tools.attach.tmp.only=true -XX:ErrorFile=/home/paul/java_error_in_PHPSTORM_%p.log -XX:HeapDumpPath=/home/paul/java_error_in_PHPSTORM.hprof -Didea.paths.selector=PhpStorm2020.1 -Djb.vmOptionsFile=/home/paul/.var/app/com.jetbrains.PhpStorm/config/JetBrains/PhpStorm2020.1/phpstorm64.vmoptions -Didea.platform.prefix=PhpStorm com.intellij.idea.Main
    paul 2403 2387 0 13:09 pts/1 00:00:00 grep --color=auto -i phpstorm

And as a comparison, the same for Kate (which can access the directory):

    ps -ef | grep -i kate
    paul 2761 1332 4 13:12 ? 00:00:00 /usr/bin/kate -b --tempfile
    paul 2767 1271 0 13:12 ? 00:00:00 tags.so [kdeinit5] tags local:/run/user/1000/klauncherkWHCgQ.1.slave-socket local:/run/user/1000/kateencBio.1.slave-socket
    paul 2776 1271 0 13:12 ? 00:00:00 file.so [kdeinit5] file local:/run/user/1000/klauncherkWHCgQ.1.slave-socket local:/run/user/1000/kateSKaSMP.2.slave-socket
    paul 2780 1271 0 13:12 ? 00:00:00 thumbnail.so [kdeinit5] thumbnail local:/run/user/1000/klauncherkWHCgQ.1.slave-socket local:/run/user/1000/kateVcVjeG.4.slave-socket
    paul 2791 2387 0 13:12 pts/1 00:00:00 grep --color=auto -i kate
The developer of Feren OS told me that the problem most likely occurs because of the Flatpak package's sandbox permissions. Then the JetBrains support desk told me something similar and advised using snap or the JetBrains Toolbox instead to install PHPStorm. I have now installed PHPStorm via their Toolbox and it runs like a charm.
Why is PHPStorm (flatpak) unable to access /var/www?
1,606,321,004,000
I want to capture slow queries via logrotate; I want them to rotate weekly, and I want to keep a year's worth. The logs take the form:

    -rw-r-----. 1 mysql root 1239 Feb 21 18:46 mysqld1-slow.log
    -rw-r-----. 1 mysql root  885 Feb 11 14:48 mysqld2-slow.log
    -rw-r-----. 1 mysql root  885 Feb 22 08:58 mysqld3-slow.log
    -rw-rw-rw-. 1 mysql root  802 Feb 11 14:47 mysqld-slow.log

Because the logs are written to so frequently, how can I make sure nothing is missed by logrotate? The process itself doesn't create the file; it needs the original to be there. I was thinking this would do it:

    /var/log/mysqld*-slow.log {
        missingok
        notifempty
        weekly
        rotate 52
        compress
        delaycompress
        create 0644 mysql root
    }

So it should compress the old log and create the same filename with the right permissions, but I'm unsure how logrotate handles a file that is being written to in the middle of the move.
It would appear the copytruncate directive can accomplish this:

    copytruncate
        Truncate the original log file in place after creating a copy, instead of
        moving the old log file and optionally creating a new one. It can be used
        when some program cannot be told to close its logfile and thus might
        continue writing (appending) to the previous log file forever. Note that
        there is a very small time slice between copying the file and truncating
        it, so some logging data might be lost. When this option is used, the
        create option will have no effect, as the old log file stays in place.

I have put this in place.
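For reference, here is the question's stanza with copytruncate added. The create line is dropped, since per the man page it has no effect when copytruncate is used (a sketch, not a tested production config):

```
/var/log/mysqld*-slow.log {
    missingok
    notifempty
    weekly
    rotate 52
    compress
    delaycompress
    copytruncate
}
```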
logrotate a frequently written to log without losing data
1,606,321,004,000
An example of my question would be the /home directory:

    drwxr-xr-x 8 root root 4096 Jan 29 23:44 home/

So the owner of /home is root. But I'm the owner of my personal home folder:

    drwx--x--- 85 teo teo 4096 Jan 30 16:22 teo/

Why is my user able to modify things under the teo/ folder if /home is owned by root? I mean, modifications in my personal folder are also modifications in the /home folder, because it is a subfolder of /home, and I'm not in the root group.
In Unix, there are files and directories (and some "weird" files like pipes and devices, but permissions on them work just like on plain files), and symbolic links (in essence, files containing the name of the file they point to). A directory is just a list of file names and references to the corresponding physical files. This way you can have the same file appearing under different names, or under the same (or another) name in different directories.

There are three basic permissions on filesystem objects: r(ead), w(rite) and e(x)ecute.

- For regular files, read means being able to read its contents (e.g. copy it, view it, ...); write means being able to modify its contents (overwrite, append, truncate to length zero; note that this is independent of reading, so you can have a file you can modify but not read); execute means running it as a program.
- For directories, read means listing its contents (file names); write means modifying it (adding/deleting entries); execute means using the directory to get at the files themselves (if you have r but not x on a directory, you can see the file names but not reach the files).
- A symbolic link's permissions are irrelevant; just take it as mentioned above: a short file containing the file name pointed to, with that content processed normally.

Yes, these are quite orthogonal (independent). The system classifies permissions into three groups: the owner, the group the object belongs to, and everybody else. Each user belongs to one (or more) groups. When checking whether an operation is allowed: if you are the owner, the owner permissions rule; if you aren't the owner but belong to the group, the group permissions are considered; otherwise the "other" permissions are checked. True, this allows rather nonsensical combinations of permissions, but it is a simple model, and some day you'll find a use for some "nonsense" combination. The owner of an object has the power to change its permissions at will.

In your case: creating, renaming or deleting entries inside teo/ modifies only the directory teo/ itself, which you own and have write and search permission on. It does not modify /home, so the permissions on /home (beyond the x needed to traverse it) never come into play.
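The three permission classes described above can be inspected directly; a tiny sketch with stat, using a scratch file in /tmp:

```shell
# Mode 640: owner rw-, group r--, other --- (three independent sets)
f=/tmp/perm-classes-demo.txt
touch "$f"
chmod 640 "$f"
stat -c '%A %U %G' "$f"    # e.g. "-rw-r----- youruser yourgroup"
```

The first triplet applies only when the accessing user is the owner, the second only when they are a group member but not the owner, and the third otherwise; the kernel picks exactly one set per access check.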
Why can owners of files and folders modify their contents if they don't have permissions on the parent directory?
1,606,321,004,000
I'm having a small issue. I have a passwordless user (jenkins) on a Unix system. This user is used by Jenkins to perform some commands. Because I installed nvm on this system, I needed to add a section to /etc/profile to let users know where the binary is. Executing sudo -u jenkins nvm gives the following error:

    [USER@HOST ~]$ sudo -u jenkins nvm
    /bin/nvm: line 6: /bin/nvm.sh: No such file or directory

For any other user this setting works, but not for the jenkins user. The jenkins user does not have its own home directory in the /home folder; I guess that's the reason it's not working. How can I apply these settings to the jenkins user?

    Linux xx 3.10.0-957.21.3.el7.x86_64 #1 SMP Fri Jun 14 02:54:29 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
In order to have the shell read and execute commands from /etc/profile, /etc/bashrc and the like when switching users with sudo, you probably need to use the -i or --login option, i.e. sudo -i -u jenkins nvm.
Use /etc/profile settings for passwordless user
1,569,513,366,000
I am having a simple permissions issue whose solution, or proper setup, is eluding me. I have two servers sitting side by side on the same network: .80 is the web server and .40 is a file server. I am trying to mount a directory that lives on .40 onto .80 and serve it as web content (only images). My mount is successful, and I can navigate the remote directory just fine, as well as add and remove items from it. But when images are viewed in a browser, I get 403 Forbidden, or in layman's terms, Permission Denied.

The remote directory on .40 looks like this:

    drwxrwxrwx 2 zak zak 4.0K Sep 26 10:36 images

The local directory on .80 looks like this (prior to mount):

    drwxrwxrwx 1 zak zak 4.0K Sep 26 10:36 images

I am using a private key to auto-mount this directory on boot:

    sshfs [email protected]:/Private/images /var/www/zak_site/images

Why am I able to navigate, add and remove from this mount, but cannot view it via HTTP?

UPDATE: Apache error log:

    [Thu Sep 26 11:29:04.607441 2019] [core:error] [pid 10718] (13)Permission denied: [client 174.31.53.188:58857] AH00035: access to /images/hawks2.png denied (filesystem path '/var/www/zak_site/images') because search permissions are missing on a component of the path

Note: I will also need to be able to upload via HTTP once I get the permissions straight, but if I run into trouble there I'll ask a separate question.
OK, there were TWO components that helped solve this.

1. I set user_allow_other in /etc/fuse.conf. However, that was not enough. After reading the man page I saw the option flag -o allow_other for the mount command. This allows Apache to see and change the files without being a user named in the permissions.
2. So I changed the command to:

       sshfs -o allow_other [email protected]:/Private/images /var/www/zak_site/images

All is well, with Apache displaying my images now.
Permissions problem with remote mount
1,569,513,366,000
I have a directory and I'd like all the files in it to always be permissioned and owned in a certain way, regardless of which user edits them. This gets close:

    # users: wilma, betty
    # group: devs contains wilma and betty.
    # directory: 'dir'
    chgrp -R devs dir
    find dir -type d -exec chmod 2770 \{\} \+
    find dir -type f -exec chmod 660 \{\} \+
    setfacl -d -R -m group:devs:rwx dir
    setfacl -R -m group:devs:rwx dir

But it still gets messed up if one of the users uses chmod (which resets the ACL masks), and in a couple of other situations that I've seen in real life but haven't yet pinned down how they happened! I basically don't want those users to be able to change the permissions. The only way I can think to do this is to resort to making a vfat fs and mounting that over the dir!
You can use the FUSE filesystem bindfs to accomplish this. E.g. suppose you have a directory of files that you want users wilma and betty to share, both with write access, and both unable to restrict the access of the other. Say Wilma is the main user (you could pick a third user for this purpose) and they're both in the 'devs' group.

    wilma% mkdir src -m760
    wilma% echo some content >src/file-created-by-wilma
    wilma% mkdir shared
    wilma% bindfs src shared -g devs -u wilma -p770 -m wilma:betty --create-with-perms=660

Now that's set up, create some files inside this shared mount as each of the two users...

    wilma% cd shared
    wilma% echo hi>file-created-by-wilma-inside-bindfs

    betty% cd /path/to/shared
    betty% echo hi>file-created-by-betty-inside-bindfs

View the files inside the mount. Note that each user sees themself as the owner!

    wilma% ls -l
    -rwxrwx--- 1 wilma devs file-created-by-wilma
    -rwxrwx--- 1 wilma devs file-created-by-wilma-inside-bindfs
    -rwxrwx--- 1 wilma devs file-created-by-betty-inside-bindfs

    betty% ls -l
    -rwxrwx--- 1 betty devs file-created-by-wilma
    -rwxrwx--- 1 betty devs file-created-by-wilma-inside-bindfs
    -rwxrwx--- 1 betty devs file-created-by-betty-inside-bindfs

View the 'actual' files...

    wilma% ls -l src/
    -rw-rw---- 1 wilma wilma file-created-by-wilma
    -rw-rw---- 1 wilma wilma file-created-by-wilma-inside-bindfs
    -rw-rw---- 1 wilma wilma file-created-by-betty-inside-bindfs
Force permissions/ownership on a directory and everything in it
1,569,513,366,000
I'm trying to capture an RTSP stream with ffmpeg. Everything goes fine if I save the video to my home folder, but I can't save it to another directory: ffmpeg says 'Permission denied' even though the directory permissions are 777. In short:

    ffmpeg -i 'rtsp://192.168.0.161:554/11' -c:v copy -an new.mp4              # good
    ffmpeg -i 'rtsp://192.168.0.161:554/11' -c:v copy -an folder777/new.mp4    # Permission denied

Ubuntu Server 18.04.02, ffmpeg snap package v4.1. Any suggestions?
The thing is, I was trying to save the video to folders located on a mounted partition, and my snap package had no connection to the "removable-media" interface. After connecting it, everything works great.
ffmpeg permissions trouble
1,569,513,366,000
What I understand is that you can change the permissions of a file for its owner with, say:

    chmod u=0 file.txt

In this case, we removed the r, w and x permissions for the owner of the file. But under what circumstances would we want to do that? If you are the file owner, why would you want to downgrade the permissions on your own file?
It is not protection against an intelligent actor, because as the owner they could chmod() the file at any time, giving their permissions back.

It might be useful against programs, if you want to keep your own programs from playing with some file for whatever reason. However, it is typically more practical to simply move the file away.

It can also be useful when the underlying filesystem driver doesn't support chmod(). For example, on davfs or vfat the file modes are determined by the mount flags and not by the filesystem metadata.
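A small sketch of the first point: the owner can always undo the restriction without any elevated privileges (run as a regular user; root bypasses these mode checks anyway):

```shell
f=/tmp/self-locked-demo.txt
printf 'secret\n' > "$f"
chmod 000 "$f"        # owner locks themself out of reading and writing...
chmod u=rw "$f"       # ...and can restore access at any time, no sudo needed
stat -c '%a' "$f"     # -> 600
```

The chmod() call only checks file ownership, not the current mode bits, which is why locking yourself out is never permanent.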
Under what circumstances would a user/superuser change the permission of a file for its owner? [closed]
1,523,703,847,000
When I click "Apply" in the settings of DrJava, it says:

    Could not save changes to your ".drjava" file in your home directory.
    java.io.IOException: Permission denied

Permissions are:

    drwxr-xr-x 30 pypaut pypaut 4096 avril 14 12:24 /home/pypaut
    -rw-r--r--  1 root   root   1259 janv.  1  2017 /home/pypaut/.drjava
Something created the .drjava file with root ownership (or later chown'd it that way). You'll need to reset it to be owned by your user:

    sudo chown pypaut:pypaut ~/.drjava

If you do not have root-level authority to do this, you could rename the file aside and let DrJava create a fresh one on the next save:

    mv ~/.drjava ~/.aside-drjava
How to change color settings in DrJava ? (error)
1,523,703,847,000
Let me explain the situation. I have 15 users assigned to a bunch of different groups. I gave rwx permissions on a directory bills to all but 2 of the 15 users (I simply made a group specifically for that). But there's a subdirectory, access, on which I need to give r-x permissions to a group of 2 users, and rwx permissions to another group also made up of 2 users. I will also need to do the same thing on different directories later on. So, is this even possible? If so, how can I do it?
You cannot assign multiple groups to a single directory with the classic mode bits alone. For that you will want to use Access Control Lists (ACLs), just like "muru" said above. Tutorial on ACLs. Here's another great tutorial by Benjamin Cane: Benjamin Cane - ACL tutorial.
Giving different permissions to two groups for a single directory?
1,523,703,847,000
As part of an exercise at my university, we need to do the following task: respect the "least privileges" rule; create 5 user groups, each with its own folder (G1 => Folder_G1); grant user G1_Stefan the right to read the folder Folder_G2.

So what I did is the following:

    [Stefan@centos---exam ffhs]$ getfacl verkauf
    # file: verkauf
    # owner: root
    # group: Verkauf
    user::rwx
    user:Stefan:r--
    group::rwx
    group:Technik:---
    group:HR:---
    group:Projekt:---
    mask::rwx
    other::r-x

Stefan has access to the folder "hr" because he is an employee of the HR department. He is nevertheless a special user, because he should have read-only access to "verkauf" (= sales). All his colleagues in the HR department have no right to access the "verkauf" folder. This is why, for the "verkauf" folder, I set special ACL permissions for the group HR (---) and an exception for the user Stefan (r--).

The thing is that it does not work:

    [Stefan@centos---exam ffhs]$ cd verkauf
    bash: cd: verkauf: Permission denied

And I do not know how to work around it. Thanks for your help.
You also need search (execute, x) permission on a directory to be able to cd into it; read alone is not enough.
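In the ACL shown in the question, that means changing Stefan's entry from r-- to r-x (setfacl -m u:Stefan:r-x verkauf). The difference between read and search can be reproduced with plain mode bits in a scratch directory:

```shell
d=/tmp/search-bit-demo
mkdir -p "$d"
chmod 400 "$d"       # r-- only: a non-root owner can list names but not cd in
chmod 500 "$d"       # r-x: cd and file lookups inside now work
stat -c '%A' "$d"    # -> dr-x------
```

The x bit on a directory controls path traversal: without it, every access through the directory fails with "Permission denied", no matter what the r bit says.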
Files permission - problem with an exception to grant to specific user
1,523,703,847,000
I am trying to set up some security restrictions on my Linux system. For example, denying access to the ping command or the disk utility application can easily be done by restricting the permissions on the binaries to 750:

    /bin/ping
    /usr/bin/gnome-disks

and a user won't be able to run them. But the problem is that the user can obtain the same binary from elsewhere and place it in their home folder. Because the user cannot be stopped from granting permissions to their own files, they can run those binaries and bypass the permissions set on the system files. How can I stop a user from doing this?
Firstly we'll remove the execute bit from all files in $HOME:

    chmod a-x "$HOME"/*

Then we make sure that new files the user creates don't get the execute bit, by setting a umask in their shell startup files (umask applies per process, not per directory):

    umask 006

However, users can still manually set something to +x and then execute it themselves. Stopping them from doing this is more complicated, as you'd need to take ownership of any files they create and then add them to a group which gives them read/write access but not the ability to change the permissions. (A more robust approach is to mount the filesystem holding their home directory with the noexec option.)
Prohibit access for certain programs to specific user groups
1,523,703,847,000
I found these command lines on the net:

    find . -type f -exec chmod 644 {} +
    find . -type d -exec chmod 755 {} +

I'm not sure what they do when executed... In theory, I guess the first searches for all files and sets their permissions to 644, and the second searches for all folders and sets them to 755, but I don't think anything happened once I pressed Enter.

Also, I needed these lines because I wanted to set these permissions for my WordPress configuration, but I accidentally typed them in the / directory and not in /var/www/html/wordpress... Can I stay calm, or did I do something wrong that will cause my server problems? For now, it seems to work normally...

One more thing: can you tell me the best and fastest way to set the permissions of all files inside /var/www/html/wordpress to 644 and all folders inside /var/www/html/wordpress to 755?

--UPDATE-- I checked the terminal history and it seems I was in /root when I executed these lines, so that's great news!
It's easy to recover from an error like that with a RHEL-based distribution, but with Debian, at this point the easiest thing to do is to reinstall. Next time you have to write:

    find /var/www/html/wordpress -type f -exec chmod 644 {} +
    find /var/www/html/wordpress -type d -exec chmod 755 {} +
chmod - What does this command do?
1,486,312,508,000
I am trying to build a Git project using Jenkins on an EC2 instance. The custom workspace I want to keep is /home/ec2-user/xyz. I get the following error:

    java.io.IOException: Failed to mkdirs:

I figured this is due to permissions: the 'jenkins' user doesn't have permission on that folder. So I changed the ownership of xyz, added jenkins as a user and a group, and tried changing the permissions to 777. Still the error persists. I tried building in /var/www/ instead and it builds correctly. Any suggestions?
The problem might be that you gave permissions on the xyz directory but the jenkins user still cannot get there: it needs x (traverse) on /home and on /home/ec2-user as well, and write permission on whichever directory Jenkins must create entries in. One option is to change the group of /home/ec2-user to jenkins and give it 775 permission. However, the cleanest way would be to use the default values for the Jenkins directories.
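The failure mode can be reproduced without Jenkins at all. A sketch with scratch paths, showing the "parent lacks write permission" case (the first mkdir is only refused for a non-root user; root bypasses the check):

```shell
parent=/tmp/jenkins-ws-demo
mkdir -p "$parent"
chmod 555 "$parent"                 # r-x: traversable, but not writable
mkdir "$parent/xyz" 2>/dev/null || echo "mkdir blocked by parent permissions"
chmod 775 "$parent"                 # grant write to the owner and group
mkdir -p "$parent/xyz" && echo "mkdir ok"
```

Java's File.mkdirs() returns false in exactly the blocked case, which Jenkins surfaces as "Failed to mkdirs".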
Jenkins permission to build in home dir
1,486,312,508,000
Say I want to create a symlink for a folder /media/drive/here (owned by a group) pointing to a folder /home/pepe/private/here. Do all the intermediate folders need to have the x bit set? What does the computer do when, from /media/drive, I execute cd here? Does it internally just do cd /home/pepe/private/here? (I had to set the x bit on every intermediate folder in this situation so that other users could access just my private folder here, but I'm still not sure this is correct; I thought only the permissions on /home/pepe/private/here mattered, not those of its parent folders.)
To complement @Rabin's comment, you can confirm this by stat'ing the root path:

    # stat /
      File: '/'
      Size: 4096          Blocks: 8          IO Block: 4096   directory
    Device: fe01h/65025d  Inode: 2           Links: 23
    Access: (0755/drwxr-xr-x)  Uid: (0/root)  Gid: (0/root)
    Access: 2017-01-03 11:51:24.202486304 +0000
    Modify: 2016-06-20 16:31:24.210935643 +0100
    Change: 2016-06-20 16:31:24.210935643 +0100
     Birth: -

If / didn't have the o+x bit set, you would not be able to access your home directory as a regular user. Symbolic links are basically used to keep things simple; for instance, on most systems /lib and /lib64 point to the exact same location, and would otherwise be copies of the same directory. Also note this excerpt from man chmod:

    chmod never changes the permissions of symbolic links; the chmod system call
    cannot change their permissions. This is not a problem since the permissions
    of symbolic links are never used.

Hence you cannot change the permissions of symlinks, and your user must be privileged to access all the resources that the symlink traverses.
Symlink and folder permissions
1,486,312,508,000
I'm trying to learn about file and directory permissions in Unix/Linux. I think I get the general idea: if I want to cat ~/foo/bar/text.txt, I need x+r permission on ~, ~/foo and ~/foo/bar, and r permission on ~/foo/bar/text.txt.

But let's say I don't have x permission on ~/foo/bar and that somehow my current directory is ~/foo/bar (maybe the root user ran su there). If I then type cat text.txt, from my test it says I don't have permission, even though I have rwx on text.txt. So my question is: when I type cat text.txt, does the system really interpret it as cat ./text.txt, and so check the current directory's permissions as well? (I thought directories I didn't mention in the path wouldn't be checked, but maybe the current one is checked anyway?) So in this case (my current directory is ~/foo/bar and I want to cat text.txt), the permissions on ~/foo don't matter, but those on ~/foo/bar do?
The file open operation requires reading the file's directory entry (for its attributes and physical location), which is stored in the current directory (itself a file). That is why you are getting the permission error. Otherwise, your assumptions are right: the system only checks the path components actually specified. This would make the file accessible in your su example if you simply added x permission on the current directory. Note that the r permission is not needed to look up a directory entry by name; it is only required to list the directory's contents.
open file and current directory permissions
1,486,312,508,000
I have exported a handful of shares on my Synology, e.g. /volume2/Home_Data/Downloads.

On my CentOS 7 box I would like to mount this and have it available for all users of the system. This works fine when I mount to /mnt/nfs/.

/etc/fstab entry:

    diskstation.davis.local:/volume2/Home_Data/ /mnt/nfs/ nfs4 user,nfsvers=4,nosuid,bg,noexec 0 0

However, I need it mounted at /mnt/nfs/downloads. When mounted there, only root has the share mounted; other users cannot see it.

/etc/fstab entry:

    diskstation.davis.local:/volume2/Home_Data/ /mnt/nfs/downloads nfs4 user,nfsvers=4,nosuid,bg,noexec 0 0

I thought it could be a permissions issue, but the permissions on /mnt/nfs and /mnt/nfs/downloads are the same.

Permissions:

    /mnt/:
    total 4
    drwxr-xr-x.  4 root root   26 Dec 15 12:28 .
    dr-xr-xr-x. 17 root root 4096 Dec 15 12:02 ..
    drwx------.  6 root root   64 Dec 15 12:38 nfs
    drwx------.  2 root root    6 Dec  3 11:30 tmp

    /mnt/nfs/:
    total 0
    drwx------. 6 root root 64 Dec 15 12:38 .
    drwxr-xr-x. 4 root root 26 Dec 15 12:28 ..
    drwx------. 3 root root 18 Dec 15 12:37 downloads

Any ideas what I can try?
Go into the Synology control panel and make sure the NFS share option is checked. Then, prior to mounting it on CentOS, do a chmod -R 777 /mnt to make everything under /mnt read-write-execute for all users. I have a few Synology boxes NFS-mounted on my Linux systems and they work well. This is for NFSv3.

If you cannot get it to work from the web-browser login to the Synology, then open an SSH connection to the Synology (e.g. with putty.exe). From there you can view the Synology operating system, which is Linux-based and will look very familiar, and you can dive deeper into how the NFS server is working on the Synology box.
NFS share mounting issue
1,486,312,508,000
My version of usermod does not seem to support LDAP, so when I run

    usermod -g <group> <username>

I get an error that the user name does not exist. Is there a different way to switch my primary group?
Unless you've got the proper tooling set up, you won't be able to change a user's primary GID from an LDAP client. You will need to make the change on the LDAP server itself, by modifying the gidNumber attribute (or whatever attribute your schema uses to store primary GIDs) on the user object.
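For example, assuming a standard posixAccount schema, the change could be expressed as an LDIF like the following and applied with ldapmodify. The DN, admin account, and group number here are made-up placeholders — substitute your own:

```ldif
dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
replace: gidNumber
gidNumber: 1001
```

Then apply it with something like:

    ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -f change-gid.ldif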
Switch primary group with LDAP user
1,486,312,508,000
I'm trying to create a directory in my home directory and I'm getting a "permission denied" error. I am in /home/User/, as most solutions suggest. I'm also aware that I could use:

    sudo mkdir

While working around the problem, I also discovered that I can no longer use mv, cp, or rm without sudo. I was able to run all of these commands without a problem before I added myself to the sudoers file. Did I remove myself from some permissions somehow?

Thanks
Try running sudo chown -R $(whoami) ~ and also sudo chmod -R u=rwx ~. This sets your home directory (and everything in it) as owned by you and gives you full permissions on it.
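Before changing anything, it can help to confirm which files actually ended up owned by someone else — a sketch using GNU find (the depth limit is just to keep the output short):

```shell
# List anything under $HOME (two levels deep) not owned by the current user;
# these would be the files a stray root-run mv/cp left behind.
find ~ -maxdepth 2 ! -user "$(whoami)" -ls
```

If that prints nothing, the problem is elsewhere (e.g. the mode bits rather than ownership).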
permission denied for basic commands
1,486,312,508,000
How could I go about finding uneven file/directory permissions within a directory structure? I've made some attempts at using the find command, similar to:

    find /bin ! \( -perm 777 -o -perm 776 -o -perm 775 -o -perm 774 -o -perm 773 \
      -o -perm 772 -o -perm 771 -o -perm 770 -o -perm 760 -o -perm 750 -o -perm 740 \
      -o -perm 730 -o -perm 720 -o -perm 710 -o -perm 700 -o -perm 600 -o -perm 500 \
      -o -perm 400

but I run out of command line before I can complete the remaining permutations, plus an -exec ls -lL {} \;.

I've also been doing manual things similar to:

    ls -lL /bin | grep -v ^-rwxr-xr-x | grep -v ^-rwx--x--x | grep -v ^-rwsr-xr-x \
      | grep -v ^-r-xr-xr-x | grep -v ^-rwxr-xr-t

but again, I run out of command line before I can complete the remaining permutations.

Both methods seem unusually awkward. Is there a better, faster, easier way? Note that I'm restricted in the shell I'm using (sh) and platform (IRIX 6.5.22).
Are you looking for executable files?

    find . -type f -perm /+x

Regardless, the / mode is more than likely your friend. Here is the man page:

    -perm /mode
           Any of the permission bits mode are set for the file. Symbolic
           modes are accepted in this form. You must specify `u', `g' or
           `o' if you use a symbolic mode. See the EXAMPLES section for
           some illustrative examples. If no permission bits in mode are
           set, this test matches any file (the idea here is to be
           consistent with the behaviour of -perm -000).

UPDATE: right, I thought you were looking for uneven numbers (executable ones)... this should work (still using the third perm parameter of find).

Sample data:

    $ ls
    000  001  002  003  004  005  006  007  010  020  030  040  050  060  070
    100  200  300  400  500  600  700

Find command:

    $ find . -type f \( -perm /u-x,g+x -o -perm /u-w,g+w -o -perm /u-r,g+r \
        -o -perm /g-x,o+x -o -perm /g-w,o+w -o -perm /g-r,o+r \
        -o -perm /u-x,o+x -o -perm /u-w,o+w -o -perm /u-r,o+r \) | sort
    ./001
    ./002
    ./003
    ./004
    ./005
    ./006
    ./007
    ./010
    ./020
    ./030
    ./040
    ./050
    ./060
    ./070

Basically you are saying: give me files where the group has permissions the owner does not, or where world has permissions the group does not, or where world has permissions the owner does not.

Note: find has three perm parameters:

    -perm mode
    -perm -mode
    -perm /mode

PS: I'm not all too sure of the value of this...
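Since the asker is stuck with plain sh on IRIX, where GNU find's /mode syntax may not exist, one portable alternative is to compare the permission columns of an ls listing with awk. This is only a sketch: it interprets "uneven" as "group grants something the owner doesn't, or other grants something the group doesn't", and it does not specially handle s/S/t/T letters in the mode string:

```shell
# Write the filter once, then pipe any "ls -l"-style listing through it.
cat > /tmp/uneven.awk <<'EOF'
/^[-d]/ {
    u = substr($1, 2, 3); g = substr($1, 5, 3); o = substr($1, 8, 3)
    # For each of the r/w/x positions, flag the entry if a "wider"
    # class has the bit while a "narrower" class lacks it.
    for (i = 1; i <= 3; i++) {
        if ((substr(g, i, 1) != "-" && substr(u, i, 1) == "-") ||
            (substr(o, i, 1) != "-" && substr(g, i, 1) == "-")) {
            print; next
        }
    }
}
EOF

ls -lL /bin | awk -f /tmp/uneven.awk
```

Because the filter lives in a file, the command line stays short no matter how many permutations it covers, and `ls -lRL` can be substituted to recurse.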
How would I find uneven file permissions within a directory structure?