I was debugging kcheckpass under Arch Linux (because I failed to log in with kcheckpass). Somehow I believe this particular problem is not within kcheckpass.

int main(int argc, char **argv)
{
#ifdef HAVE_PAM
    const char *caller = KSCREENSAVER_PAM_SERVICE;
#endif
    const char *method = "classic";
    const char *username = 0;
#ifdef ACCEPT_ENV
    char *p;
#endif
    struct passwd *pw;
    int c, nfd, lfd;
    uid_t uid;
    time_t nexttime;
    AuthReturn ret;
    struct flock lk;
    char fname[64], fcont[64];

    // disable ptrace on kcheckpass
#if HAVE_PR_SET_DUMPABLE
    prctl(PR_SET_DUMPABLE, 0);

Before the execution of the very first line, prctl(PR_SET_DUMPABLE, 0):

$ ls /proc/$(pidof kcheckpass)/exe -al
lrwxrwxrwx 1 wuyihao wuyihao 0 Jan 16 16:16 /proc/31661/exe -> /cker/src/build/kcheckpass/kcheckpass

And after executing it:

$ ls /proc/$(pidof kcheckpass)/exe -al
ls: cannot read symbolic link '/proc/31661/exe': Permission denied

The same happens with /proc/31661/root and /proc/31661/cwd. I don't see any connection between core dumps and the read permission of /proc/$PID/exe.

UPDATE: A minimal example reproduces this problem:

#include <sys/prctl.h>
#include <stdio.h>

int main(){
    prctl(PR_SET_DUMPABLE, 0);
    return 0;
}

UPDATE 2: kcheckpass and the minimal example are both: -rwsr-xr-x 1 root root
When you remove the dumpable attribute, a number of /proc/<pid>/ files and links become unreadable by other processes, even those owned by the same user. The prctl manpage reads:

Processes that are not dumpable can not be attached via ptrace(2) PTRACE_ATTACH; see ptrace(2) for further details. If a process is not dumpable, the ownership of files in the process's /proc/[pid] directory is affected as described in proc(5).

And the proc manpage reads:

/proc/[pid]
Each /proc/[pid] subdirectory contains the pseudo-files and directories described below. These files are normally owned by the effective user and effective group ID of the process. However, as a security measure, the ownership is made root:root if the process's "dumpable" attribute is set to a value other than 1.

And finally, the ptrace manpage reads:

Ptrace access mode checking
Various parts of the kernel-user-space API (not just ptrace() operations) require so-called "ptrace access mode" checks, whose outcome determines whether an operation is permitted (or, in a few cases, causes a "read" operation to return sanitized data). (...) The algorithm employed for ptrace access mode checking determines whether the calling process is allowed to perform the corresponding action on the target process. (In the case of opening /proc/[pid] files, the "calling process" is the one opening the file, and the process with the corresponding PID is the "target process".) The algorithm is as follows: (...) Deny access if the target process "dumpable" attribute has a value other than 1 (...), and the caller does not have the CAP_SYS_PTRACE capability in the user namespace of the target process.
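The minimal example can also be driven from Python via ctypes, which makes it easy to read the attribute back in the same process. A sketch assuming Linux; the constant values are taken from <sys/prctl.h>:

```python
import ctypes

# Constants from <sys/prctl.h> (Linux).
PR_GET_DUMPABLE = 3
PR_SET_DUMPABLE = 4

libc = ctypes.CDLL(None, use_errno=True)

before = libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0)   # usually 1
libc.prctl(PR_SET_DUMPABLE, 0, 0, 0, 0)            # what kcheckpass does
after = libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0)

# The process can still read its own /proc entries, but other
# unprivileged processes now fail the ptrace access-mode check on
# /proc/<pid>/exe, cwd, root, etc.
print(before, after)
```

Running `ls -l /proc/<pid>/exe` from another unprivileged shell before and after the `PR_SET_DUMPABLE` call reproduces the "Permission denied" seen in the question.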
Symbolic link becomes unreadable after prctl(PR_SET_DUMPABLE, 0);
I have a directory, owned by me (e.g. /sync), and a process, run by me, which I want to have full read and write access to this directory (in my case the process is Resilio Sync, formerly known as BitTorrent Sync). All files in this directory are personal projects and documents. 99% of them are owned by me, but sometimes, for unavoidable reasons, some directories and files are created by root. How would I go about somehow letting the process alter, move and/or delete such directories and files? I've tried a combination of the setgid bit (setting g+s on /sync, so that all files inherit the group) and custom ACL rules (to try to have the permissions propagate to newly created directories automatically), but, as described in this answer and its comments, it can't be done without inotify (which I'd like to avoid for simplicity). However, I was wondering, maybe there's some other way to go about this? Like giving one particular process more power in a certain directory and everything in it, ignoring all file permissions? And if that is possible, are there any security implications I would have to look out for?
I don't think this goal requires sticky bits. Let's say the process is running as the user resilio, and your user account is olegs. (I see that it is your account running the process; I add this for the sake of demonstration.)

# Change all ownership to root:root
chown -R root:root /sync

# Make sure only root (and members of the group root) can get a directory listing.
chmod 0750 /sync

# Now, let's augment standard permissions with ACLs.
# Set default ACL entries for all new file system objects in /sync.
# (The root user already has permission.)
setfacl -d -m u:resilio:7 /sync
setfacl -d -m u:olegs:7 /sync

# Apply an ACL to all existing files (and dirs) to give olegs and
# resilio full control over the directory contents.
setfacl -R -m u:resilio:7 /sync
setfacl -R -m u:olegs:7 /sync

Now these users have full control over the directory: root, olegs, and resilio. No other user can see the contents of the /sync directory. Although the entries specify 7 (read/write/execute), directories become rwx and files effectively become 6 (read/write).
Let a process read and write all files inside a certain directory, at any depth
So when I look at the permissions of the /etc/sudoers file, it is like so -r--r----- 1 root root 705 Nov 2 19:57 /etc/sudoers Now, wouldn't this mean it's not writable? So how does the root user manage to write to it?
The root user always has full write access to any file, regardless of its mode. Perhaps the best example is /etc/shadow, which is mode 000 but of course modifiable by root:

[root@centos7 ~]# ls -pl /etc/shadow
----------. 1 root root 1353 Oct 26 07:40 /etc/shadow
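The bypass is easy to observe without touching /etc/shadow: make a file mode 000 and try to read it. A hedged sketch; whether the open succeeds depends only on whether the process runs as root:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"secret")
os.close(fd)
os.chmod(path, 0o000)          # ----------, like /etc/shadow

try:
    with open(path) as f:
        f.read()
    readable = True
except PermissionError:
    readable = False

# An ordinary user is stopped by the mode bits; root is not, because
# root's file accesses bypass them (CAP_DAC_OVERRIDE).
print("readable:", readable, "| running as root:", os.geteuid() == 0)
os.remove(path)
```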
How is root able to write to the sudoers file with permissions set to 440?
I'm a non-admin user on a Linux system. No sudo, so I can't manage any groups. A team of ~5 other users needs shared access (+w) to a subfolder I created. I don't want to give 777 permissions on the folder; I only want the team to have full permission to everything in that folder. What's the most efficient way to handle this? Most discussions suggest either groups (sudo needed) or changing ownership. Neither seems right. It wasn't clear from my searches whether ACLs are the way to go, so if they are, can you explain how that works for such a shared folder?
As the comments suggest, a shared group is the "proper" way to handle what you want. But if that is not possible, you can use ACLs as you suggest. Simply run setfacl -m u:someuser:rwx thefolder and repeat for each user you want to give access. To have newly created files pick up the same permissions, additionally set them as default ACLs with setfacl -d -m u:someuser:rwx thefolder.
Allowing +w permissions for a sub-directory to a team of users, without sudo access
Please explain in detail (including the tty-related parts) how a sudo foreground process in an X terminal emulator is actually killed on Ctrl-C. See the following example:

$ sudo -u test_no_pw sleep 999 &
[1] 16657
$ ps -o comm,pid,ppid,ruid,rgid,euid,egid,suid,sgid,sid,pgid -t
COMMAND  PID   PPID  RUID RGID EUID EGID SUID SGID SID   PGID
zsh      15254 15253 1000 1000 1000 1000 1000 1000 15254 15254
sudo     16657 15254 0    1000 0    1000 0    1000 15254 16657
sleep    16658 16657 1002 1002 1002 1002 1002 1002 15254 16657
ps       16660 15254 1000 1000 1000 1000 1000 1000 15254 16660
$ fg
[1] + running sudo -u test_no_pw sleep 999
^C
$ # it was killed

Before I interrupted the sudo I started strace on it in another terminal:

# strace -p 16657
Process 16657 attached
restart_syscall(<... resuming interrupted call ...>) = ? ERESTART_RESTARTBLOCK (Interrupted by signal)
--- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL, si_value={int=809122100, ptr=0x54552036303a3934}} ---
[...SNIP...]

So the sender is SI_KERNEL, interesting. I asked yesterday in IRC channels and Googled, but got only hazy or incorrect answers. Most people said that the terminal or shell will send the SIGINT to sudo, but it seems to me that that cannot happen according to kill(2):

For a process to have permission to send a signal it must either be privileged (under Linux: have the CAP_KILL capability), or the real or effective user ID of the sending process must equal the real or saved set-user-ID of the target process. In the case of SIGCONT it suffices when the sending and receiving processes belong to the same session. (Historically, the rules were different; see NOTES.)

I suspect it has something to do with sending an ASCII ETX (3) character to a pseudo-terminal, but I'm far from understanding it. (Why does the signal originate from the kernel?)
Related but hazy/incorrect: https://stackoverflow.com/questions/34337840/cant-terminate-a-sudo-process-created-with-python-in-ubuntu-15-10 I'm mostly interested in how it works on Linux.
(This is an attempt to clarify and answer the question; improvements and corrections are welcome.)

First, let's remove the sudo and the & + fg from the scenario, as they do not affect the situation (I assume you used those mostly to get the PID). The question then becomes: 1) how does a process running in the foreground of a terminal receive SIGINT; 2) what changes when the terminal is a pseudo-terminal under X11 (e.g. xterm)?

SIGINT (like SIGQUIT and SIGTSTP) is generated by the kernel's controlling-terminal driver when it intercepts a Ctrl-C character, which is why you see SI_KERNEL as the source. This happens regardless of X11 or pseudo-terminals. It is nicely illustrated in "Advanced Programming in the Unix Environment, 2nd Edition" (APUE2), Figure 9.7, page 272 (I won't paste it here for copyright reasons, but I'm sure it can be found). It is further explained on page 275, section "9.8 Job Control". The relevant Linux kernel code is likely this: http://lingrok.org/xref/linux-linus/drivers/tty/n_tty.c#1254

Now adding pseudo-terminals to the mix: the pseudo-terminal kernel code still uses the standard terminal code mentioned above. Thus, when the master side of the PTY (the X terminal) receives the X11 key event for Ctrl-C and sends it to the slave PTY, the character is detected by the kernel terminal driver, and it sends SIGINT to the foreground process group (sudo in your case). This is illustrated in APUE2, Figure 19.1, page 676.

On APUE2 page 706 there is a short paragraph, "Signal Generation", mentioning that signals can also be sent at the direct request of the master PTY using ioctl(2) (e.g. http://lingrok.org/xref/linux-linus/drivers/tty/pty.c#482), however I believe this is not the case here. Comments welcome.
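The mechanism can be demonstrated without sudo or xterm: create a pseudo-terminal, let a child run as its foreground process group, and write the ETX byte to the master side; the kernel's line discipline, not the writer, delivers SIGINT. A Python sketch of this (the timings are arbitrary):

```python
import os
import pty
import signal
import time

pid, master = pty.fork()
if pid == 0:
    # Child: pty.fork() made us a session leader whose controlling
    # terminal is the pty slave, i.e. the foreground "job".
    # Restore the default SIGINT action (Python normally converts
    # SIGINT into KeyboardInterrupt).
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    time.sleep(30)
    os._exit(1)  # only reached if no signal arrives

# Parent: play the terminal emulator and feed a single ETX byte
# (what the Ctrl-C key produces) into the master side of the pty.
time.sleep(0.2)            # let the child install its handler
os.write(master, b"\x03")

_, status = os.waitpid(pid, 0)
os.close(master)

# The line discipline, not this process, generated the signal.
print(os.WIFSIGNALED(status), os.WTERMSIG(status) == signal.SIGINT)
```

Note that the parent never calls kill(); the SIGINT is generated inside the kernel when the tty driver sees the 0x03 byte, which matches the SI_KERNEL seen in the strace output.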
How is sudo interrupted in xterm on Ctrl-C? [duplicate]
I was reading about setuid on Wikipedia. One of the examples goes as follows: 4700 SUID on an executable file owned by "root" A user named "tails" attempts to execute the file. The file owner is "root," and the permissions of the owner are executable—so the file is executed as root. Without SUID the user "tails" would not have been able to execute the file, as no permissions are allowed for group or others on the file. A default use of this can be seen with the /usr/bin/passwd binary file. I do not understand this. How can user "tails" execute this file at all, since he is not the owner of the file, and group and other permissions are not available? I tried to recreate this scenario, and indeed: $ su -c 'install -m 4700 /dev/null suidtest' $ ls -l suidtest -rws------ 1 root root 0 21 dec 07:48 suidtest* $ ./suidtest bash: ./suidtest: Permission denied I only got this working with permissions of 4755. Also, the default use mentioned in the example on Wikipedia (the /usr/bin/passwd) has in fact 4755 permissions. Is the example correct and am I missing something, or is this a mistake?
You are right and the Wikipedia article is wrong. See below for an example:

$ ls -l /usr/bin/passwd
-rwsr-xr-x. 1 root root 30768 Feb 22 2012 /usr/bin/passwd
$ sudo cp /usr/bin/passwd /tmp/
$ cd /tmp
$ ls -l passwd
-rwxr-xr-x 1 root root 30768 Dec 21 07:43 passwd
$ sudo chmod 4700 passwd
$ ls -l passwd
-rws------ 1 root root 30768 Dec 21 07:43 passwd
$ ./passwd
bash: ./passwd: Permission denied
$ sudo chmod 4701 passwd
$ ./passwd
Changing password for user vagrant.
Changing password for vagrant.
(current) UNIX password:
$
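The class selection this answer relies on can be written out explicitly. This toy check ignores ACLs and capabilities (a real kernel additionally lets a CAP_DAC_OVERRIDE process execute a file if any execute bit is set); the uids and gids used here are made up:

```python
import stat

def may_execute(mode, file_uid, file_gid, uid, gids):
    """Classic Unix DAC check: exactly one permission class applies --
    owner if the uid matches, else group, else other."""
    if uid == file_uid:
        return bool(mode & stat.S_IXUSR)
    if file_gid in gids:
        return bool(mode & stat.S_IXGRP)
    return bool(mode & stat.S_IXOTH)

# 4700 on a root-owned file: root may run it as the owner, but a user
# such as "tails" (uid 1000 here, an assumption) falls through to the
# "other" class, which has no execute bit.
print(may_execute(0o4700, 0, 0, 0, [0]))         # True  (owner class)
print(may_execute(0o4700, 0, 0, 1000, [1000]))   # False (other class)
print(may_execute(0o4701, 0, 0, 1000, [1000]))   # True  (the final 1 adds x)
```

This is why the 4701 experiment in the answer works: the setuid bit only changes the effective uid of the running process, it never grants the execute permission itself.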
setuid example from Wikipedia: 4700
Recently I've noticed that logrotate does not rotate my logs.

user1@host:~$ /usr/sbin/logrotate /home/user1/logrotate.conf -v

gives me errors:

error: error setting owner of /home/logs/mylog.log.1 to uid 10111 and gid 10111: Operation not permitted
error: error creating output file /var/lib/logrotate/status.tmp: Permission denied

That gid confuses me, as user1 is only a member of a group with a different gid:

user1@host:~$ id
uid=10111(user1) gid=1001(mygroup) groups=1001(mygroup)

However, there's another group called user1, but, as I mentioned, the actual user user1 is not a member of it:

user1@host:~$ cat /etc/group | grep user1
user1:x:10111

It's something simple here, but I can't see it.

UPDATE: here's what logrotate.conf looks like:

/home/logs/*.log {
    rotate 7
    daily
    copytruncate
    compress
    notifempty
}

logrotate 3.8.7

UPDATE 2:

user1@host:~$ ls -la /home/logs/
-rw-r--r-- 1 user1 mygroup 1358383344 Dec 19 00:58 mylog.log
Try a different user, one having the default group membership scheme: for each user userx there is a distinct userx group of which that user is a member. If logrotate succeeds with the different user account, then apply similar group membership settings to the user1 account having difficulty. Separately, the second error is unrelated to group membership: by default logrotate writes its state to /var/lib/logrotate/status, which an unprivileged user cannot create; point it at a writable file with the -s option, e.g. logrotate -s /home/user1/logrotate.status /home/user1/logrotate.conf.
logrotate fails to rotate logs: error setting owner
I am using rsync to backup some folders in my home directory to a remote server using the following commands: cd rsync -Favz --inplace --delete --delete-excluded folder1 folder2 folder3 remote-server:/remote/path/ All files are owned by my user, both locally and remotely. It works fine except with files that have mode r--r--r--, even though my user is the owner of those files and the parent directory. This is what rsync reports for those files: rsync: open "/remote/path/somefolder/somefile" failed: Permission denied (13) ... rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.0] As a temporary solution I am able to log on the remote machine (same user), delete the files and re-run rsync which will then create those files in those modes, however next time I run the script it will fail again. Is there a way to have rsync overwrite non-writable files on the remote side (or delete them and create them again) or do I have to settle with first deleting them remotely and then running the backup script? This question is quite similar to How to backup /etc/{,g}shadow files with 0000 permission? except that the latter talks about non-readable files and my question is about readable but non-writable files.
This is a side effect of the --inplace option, which is intended more for disk-to-disk synchronization than for transfers over a network. Its other side effects, like leaving files in an inconsistent state if a transfer is interrupted, can be problematic too. Try synchronizing without --inplace; rsync's delta-transfer algorithm is already extremely efficient.
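The failure mode is reproducible without rsync: --inplace needs write permission on the file itself, while the default temp-file-plus-rename strategy only needs write permission on the containing directory. A Python sketch with made-up filenames:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "somefile")
with open(target, "w") as f:
    f.write("old")
os.chmod(target, 0o444)            # r--r--r--, like the problem files

# --inplace behaviour: open the existing file for writing -> EACCES
# (unless running as root, which bypasses the mode bits).
try:
    with open(target, "r+"):
        pass
    inplace_ok = True
except PermissionError:
    inplace_ok = False

# Default behaviour: write a temporary file and rename it over the
# target; this needs write permission on the directory, not the file.
tmp = os.path.join(d, ".somefile.tmp")
with open(tmp, "w") as f:
    f.write("new")
os.replace(tmp, target)

with open(target) as f:
    content = f.read()
print(inplace_ok, content)
```

The rename path is also why plain rsync can "overwrite" a read-only file on the receiver: it never opens the old file for writing at all.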
How to backup readable but not writable files using rsync?
I'm trying to set up a git server on my FreeNAS server. The problem I have is with setting up the permissions for different users/groups just as I want. Basically I have two different groups: git-auth-user, which contains all users that should have rwx access to the directory containing all repositories (I should limit x to directories only, I'd think, but for now that's a little detail), and git-unauth-user, which is basically just the git daemon that should hand out read-only access. I thought that running

setfacl -m "g:git-auth-user:rwx:fd:allow" git/

would work to give my git-auth-user all rights, but that doesn't happen. From searching it seems like the classic permissions still limit the overall permissions ACLs can hand out. Does this mean I have to basically give others full rights (so basically chmod 777 dir)? But then I assume everybody that doesn't get their rights limited via ACLs would have full access as well, which is obviously not what I want. Is there any way around having to set the classic permission rights for others to the most permissive I want to hand out via ACLs, or if not, is there an ACL that completely denies access to everybody that doesn't get special access rights?

Edit: ls -la (so chmod 770 for the directory):

drwxrwx---+ 2 root wheel 2 Jun 22 23:45 git

and

$ getfacl git/
# file: git/
# owner: root
# group: wheel
group:git-auth-user:rwx-----------:fd----:allow
owner@:rwxp--aARWcCos:------:allow
group@:rwxp--a-R-c--s:------:allow
everyone@:------a-R-c--s:------:allow

Now when a user of the group git-auth-user tries to create a new directory inside the git directory I get

$ mkdir test.git
mkdir: test.git: Permission denied

On the other hand, if I use chmod -R 777 git it works just fine, but that's obviously a really bad solution because it gives everybody complete access to the directory, while my dream solution would be no access for anyone except git-auth-user (i.e. my user git-ro also has write access to the directory; now I could specifically remove all rights for that user per ACLs, but this obviously doesn't scale. I'm sure there must be a better solution to this that I'm overlooking).
ACLs, if present, override the usual chmod bits. Also, NFSv4 ACLs don't have masks. I believe the problem here is that you only set 'rwx', and not 'rwxp'. The 'p' is APPEND_DATA/ADD_SUBDIRECTORY, which is what controls... well, adding subdirectories. So try setfacl -m "g:git-auth-user:rwxp:fd:allow" git/ instead.
ACLs and classic permissions
In my Vagrant instance:

vagrant@archlinux:~$ sudo pip2 install vcard
Downloading/unpacking vcard
  Downloading vcard-0.9.tar.gz
  Running setup.py (path:/tmp/pip_build_root/vcard/setup.py) egg_info for package vcard
Requirement already satisfied (use --upgrade to upgrade): isodate in /usr/lib/python2.7/site-packages (from vcard)
Installing collected packages: vcard
  Running setup.py install for vcard
    Installing vcard script to /usr/bin
Successfully installed vcard
Cleaning up...
vagrant@archlinux:~$ ls -l $(which vcard)
which: no vcard in (/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl)
total 0
vagrant@archlinux:~$ ls -l /usr/bin/vcard
-rwxr-x--- 1 root root 286 Nov 9 10:42 /usr/bin/vcard

/usr/bin/vcard is only executable by root. What gives? On my up-to-date Arch Linux machine it works as expected:

$ sudo pip2 install vcard
[...same output as above...]
$ ls -l $(which vcard)
-rwxr-xr-x 1 root root 286 Nov 9 10:40 /usr/bin/vcard

/usr/bin/vcard is executable by everyone. It seems this is caused by a restrictive umask:

vagrant@archlinux:~$ sudo bash -c umask
0027

Turns out it's set in both the vagrant and root users' .profile, for unknown reasons:

vagrant@archlinux:~$ sudo grep ^umask /root/.profile
umask 027
vagrant@archlinux:~$ grep ^umask ~/.profile
umask 027
vcard is executable not only by root but also by any member of the group root. This is caused by the umask being 007, or an even more restrictive value such as the 027 shown in the question, at the moment pip2 is started. Just change it with umask 002 beforehand, or start pip2 with:

python -c "import os; os.umask(2); os.system('pip2 install vcard')"
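The arithmetic is simple: a new file's mode is the requested mode ANDed with the complement of the umask. A sketch, which assumes (as distutils effectively does for installed scripts) a requested mode of 0o777:

```python
import os
import tempfile

old = os.umask(0o027)              # the value set in .profile
path = os.path.join(tempfile.mkdtemp(), "vcard")

# Request an executable script, i.e. mode 0o777 before the umask.
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o777)
os.close(fd)

mode = os.stat(path).st_mode & 0o777
os.umask(old)                      # restore the previous umask
print(oct(mode))                   # 0o750: rwxr-x---, nothing for "other"
```

0o777 & ~0o027 = 0o750, which is exactly the -rwxr-x--- mode observed on the Vagrant box.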
Why is my pip-installed Python script not executable for everyone?
In my lab, a group of people work each at one workstation, and they share a number of drives over NFS. They run shared software residing on one of those NFS drives, and they run it on the NFS drives where the data is. The current setup has all of them using the same user account, let's call it one, on all hosts. Lab members log in as joe@myhost to have a desktop with their own customizations, but for any real work on the scientific data, which belongs to user one, group users, they must become one. Note that some per-user customization of the environment is achieved by defining the one account host-specifically, so that its home directory is local, i.e. one@myhost:/home/one (all of these accounts are UID 502 and GID 100 for historic reasons). So each host's one account can have specific environment variables set alongside the common PATH pointing at the shared programs. There is currently a debate going on in the lab in which I am suggesting that it is better practice on Unix to have separate accounts for separate users and to replicate the current functionality of the single account by using groups. I have listed the following disadvantages of the single account:

- difficulty managing several software versions,
- almost meaningless auditing/logging,
- everybody can delete everybody else's files and programs,
- when one member leaves the lab, ideally passwords should be changed.

Now, this network is behind a firewall, and it holds just scientific data that would be meaningless to an intruder and has no economic value. Plus, everything is secondary to productivity in a fast-paced scientific lab, including security. Since the data is frequently backed up, the ability to delete data or programs is also not a major concern. Because the one account approach is very convenient and was established many years ago (2001?), I am having difficulty convincing anybody to go to one account per user.
Perhaps I am using the wrong arguments, or perhaps the scenario I just described is such that the one approach is a practical middle ground, and I am being inflexible in my reasoning. In either case, I would appreciate it if you could help me find out whether I am wrong or, if I am right, suggest how I could better argue my case.
Well, I can think of two counter-points to the idea of separate accounts.

One, there is a big flaw with classic Unix groups versus a common shared tree of files: each user, and each script they run, needs to have a umask that keeps files and directories group-writable, and you need to apply the setgid bit to all directories. In practice, this is hard, because so many scripts specify a restrictive umask, and because if you rsync -a or cp -a or tar -x files from a directory you own into the common tree, it's easy to forget to add the setgid bit and change the group to the shared one. What you end up with is a sub-tree of files owned by Joe, and Joe is on vacation or out to lunch, and someone needs to modify those files, and you have to go find the sysadmin to fix it. IMHO the Unix permissions design is simple and efficient but not terribly practical in the real world.

Two, if this common tree of files contains binaries, and everyone has added these paths to their $PATH, then you have no security at all. Any one of your users could hijack the whole group, so they'd better all really trust each other. Also, you're using NFS, so if the users can take over root on their workstations, they can access the central files as any user they like.

As to the idea of revoking user accounts, you get that by removing them from the local workstation, which is where the password for one was stored anyway (right?), because with NFS the file server has no clue what password they typed, only that their workstation's kernel said "Hi, I'm UID 502, do these things on my behalf".

So, before pushing the issue in your organization, you should think about whether you can actually accomplish what you want. If you want the program versions to be less chaotic, maybe have users ask for you to install them, owned by an administrator. Security is better when an administrator is in control of everything in $PATH.
If you want separate accounts so that you can see who is changing files, you might have to go as far as writing a daemon to monitor the shared tree with inotify (I suggest writing it in Perl) and force the g+rwx bits after each 'open' event and set the UID after each 'write' event. Hope that helps...
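The first counter-point can be seen in a few lines: the setgid bit on a directory controls group inheritance, but it does nothing about the creating process's umask, so a file created under a typical umask of 022 still ends up group-read-only. A sketch, assuming a filesystem that honours the setgid bit:

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
os.chmod(d, 0o2775)                 # rwxrwsr-x: setgid directory
setgid_set = bool(os.stat(d).st_mode & stat.S_ISGID)

old = os.umask(0o022)               # a common restrictive script umask
p = os.path.join(d, "data")
with open(p, "w"):
    pass
os.umask(old)                       # restore the previous umask

mode = os.stat(p).st_mode & 0o777
# setgid took care of the group, but the umask still stripped g+w.
print(setgid_set, oct(mode))        # True 0o644: not group-writable
```

This is exactly the "Joe is on vacation" scenario: the file lands in the right group but nobody else in the group can write to it.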
Single user for sharing vs. multiple users
I have just run into a case in which the owner group of a file has more permissions than the owner user of the file.

user1@pc:/tmp$ ls testfile -l
----rw---- 1 user1 user1 9 Okt 16 13:16 testfile

Since the user user1 has no permission to read the file, I get this:

user1@pc:/tmp$ cat testfile
cat: testfile: Permission denied

This surprised me, as user1 is a member of the group user1, which has permission to read the file. Interestingly, when doing this:

root@pc:/tmp$ addgroup user2 user1
Adding user `user2' to group `user1' ...
Adding user user2 to group user1
Done.
root@pc:/tmp$ su user2
user2@pc:/tmp$ cat testfile
content of testfile
user2@pc:/tmp$

I can read testfile's content. It seems the permissions granted (or not) at the user-owner level take precedence over anything later, like the permissions existing due to group membership. My question is whether there is a reference for this behaviour I experience on my Linux system (that is, that not having user permissions takes away group permissions). Also, is there a use case for this behaviour?
The file permissions specifically do not allow read, write or execute of that file by the owner (user1). If you were to change the owner to another user, then you would be able to read the file under the group permissions. Excerpt from the file-system permissions wiki page, section "Classes":

The effective permissions are determined based on the user's class. For example, the user who is the owner of the file will have the permissions given to the owner class regardless of the permissions assigned to the group class or others class.
What has priority - owner/user vs group permission? [duplicate]
I'm using Debian 7 and have created a new user (website) with an htdocs directory like this:

$ sudo adduser website
$ sudo mkdir -p /home/website/htdocs
$ sudo chown -R website /home/website

Now I'd like users of another group (developers) to have access to the user's directory. I have tried:

$ sudo chown -R :developers /home/website

I can see that the group is assigned (with ls -la or stat) and that the users are in the group, but they don't have access, right?

drwxr-xr-x 3 website developers 4096 May 3 09:09 website

I also want to:

- Allow another group, contractors, access to website files
- Restrict access of website users to their home directories only
- Ensure new website files inherit these permissions

Do you have to use access control lists, or is there a better way to do this (like not using a separate user for each site)?
It's hard to give precise commands without knowing the O/S or distribution; and yes! ACLs would work, but there's a standard way, too.

There are adduser and useradd; one of them on your distribution may create the user's home directory automatically. If so, then the contents of the /etc/skel/ directory would be copied into the user's home directory, permissions set, and perhaps some other appropriate actions might take place.

There may exist groups pre-defined for sharing, such as 'staff'; but if we want to create our own group for sharing, there's nothing wrong with that. So, create a new group or use an existing group. Make sure that users who are to be members of the group have been defined as such with usermod, or with vigr perhaps, according to the operating system. Each user currently logged in must log out and back in again to become a member of a new group.

Create a directory common for all users, such as /home/share_directory/, or any other directory that makes the most sense for your situation. A relevant best practice is not to use a directory within any user's home directory. If no one except the owner and group should be able to see files in the directory, then change the directory's permissions to 0770. If reading is OK by "others", then use 0775. The owner of the directory should almost certainly be root.

chown root:group_name /home/share_directory/

Next, set the setgid bit, so that new files inherit the directory's group.

chmod g+s /home/share_directory/

If no user should be able to modify another user's files, then also set the sticky bit.

chmod +t /home/share_directory/

These examples set both the setgid and sticky bits at the same time using octal notation.

chmod 3775 /home/share_directory/

or

chmod 3770 /home/share_directory/

For the updated question, it seems as though ACL is the right tool. Most Linux distributions now include the acl option in the default mount options. If your distribution doesn't include the acl option by default, then a little work is needed to start using it.
First mount the file systems with the acl option in /etc/fstab. sudo vim /etc/fstab UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,acl 0 1 If needed, remount the filesystem: sudo mount -o remount,acl /. Then make a group to which a user may belong for this purpose. You may need to install the ACL tools as well: apt-get install acl. sudo groupadd developers sudo usermod -a -G developers $username (Or the group might be "contractors".) A currently logged in user must log out and back in again to become a member of the new group. Of course, do not do this if you have content in the /var/www directory that you want to keep, but just to illustrate setting it up to start: sudo rm -rf /var/www sudo mkdir -p /var/www/public sudo chown -R root:developers /var/www/public sudo chmod 0775 /var/www/public sudo chmod g+s /var/www/public sudo setfacl -d -m u::rwx,g::rwx,o::r-x /var/www/public sudo setfacl -m u::rwx,g::rwx,o::r-x /var/www/public sudo setfacl -d -m u::rwx,g:contractors:rwx,o::r-x /var/www/public sudo setfacl -m u::rwx,g:contractors:rwx,o::r-x /var/www/public Above, the difference between the setfacl commands are these: the first instance uses the default group (group owner of the directory) while the second specifies a group explicitly. The -d switch establishes the default mask (-m) for all new filesystem objects within the directory. Yet, we also need to run the command again without the -d switch to apply the ACL to the directory itself. Then replace references to "/var/www" with "/var/www/public" in a config file and reload. sudo vim /etc/apache2/sites-enabled/000-default sudo /etc/init.d/apache2 reload If we wanted to restrict delete and rename from all except the user who created the file: sudo chmod +t /var/www/public. This way, if we want to create directories for frameworks that exist outside the Apache document root or maybe create server-writable directories, it's still easy. 
Apache-writable logs directory:

sudo mkdir /var/www/logs
sudo chgrp www-data /var/www/logs
sudo chmod 0770 /var/www/logs

Apache-readable library directory:

sudo mkdir /var/www/lib
sudo chgrp www-data /var/www/lib
sudo chmod 0750 /var/www/lib

A little bit of "play" in a directory that does not matter should help get this just right for your situation.

On restrictions, I use two different approaches: the shell rssh, which was made to provide SCP/SFTP access but no SSH access; or, to restrict users to their home directories, the internal-sftp subsystem, configured in /etc/ssh/sshd_config:

Subsystem sftp internal-sftp
Match group sftponly
    ChrootDirectory /home/%u
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp

Create a group named, for example, sftponly. Make the users members of the sftponly group. Change their home directories to / because of the chroot. The directory /home/username should be owned by root. You can also set the user's shell to /bin/false to prevent SSH access. Mostly I am concerned about interactive access, so I generally go with the rssh path. (They can't write anywhere except where I have defined write ability.)
Giving group permissions to other user's files
1,413,459,296,000
Here is what happens:

$ chmod 600 foobar.txt
$ ls -l
total 1
-rwx------ 0 sampablokuper sampablokuper 13 Feb 19 21:00 foobar.txt

Why is the last line not reading as follows?

-rw------- 0 sampablokuper sampablokuper 13 Feb 19 21:00 foobar.txt

N.B. This is occurring on a server of which I am not the sysadmin. The server is running Linux kernel "3.8.0-33-generic" under the following OS:

$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"
MCS Linux 2013/2014 (x86_64)
VERSION = 2013
NAME="Ubuntu"
VERSION="12.04.3 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.3 LTS)"
VERSION_ID="12.04"
Turns out, the server was using a file system of type "cifs". This was discovered by running the command df -T.
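A quick way to check both things at once — whether the mode actually took effect and what filesystem type the file lives on — is a short sketch like the following (a throwaway file is used here; on a cifs mount without Unix extensions the reported mode would not match what chmod requested):

```shell
f=$(mktemp)                       # throwaway test file
chmod 600 "$f"
mode=$(stat -c '%a' "$f")         # numeric mode actually in effect
fstype=$(df -T "$f" | awk 'NR==2 {print $2}')   # filesystem type column
echo "mode=$mode fstype=$fstype"
rm -f "$f"
```

If fstype comes back as cifs (or another network filesystem), the mismatch between chmod and ls is explained by the server, not the local kernel.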
chmod 600 fails silently
1,413,459,296,000
I know how to use setgid on directories, to enforce that whole directory structure has uniform user:group ownership. Is there a similar way to set the umask for a directory, so that the whole directory structure "inherits" specific permissions (i.e. 750/640)?
Here is an ugly hack to apply to a directory:

mount -o loop,umask=027,uid=test /opt/dev_test /home/test/test2

Since a umask mount option can be applied to an NTFS or VFAT partition, I created a block-device image using the dd command, formatted it with mkfs.vfat, and mounted it with the command above.

Test result, inside the test2 directory:

[test@test-server test2]$ touch xyz
[test@test-server test2]$ ls -rlt xyz
-rwxr-x--- 1 test root 0 Jan 28 23:22 xyz
[test@test-server test2]$ umask
0002

Outside the test2 directory:

[test@test-server test2]$ cd ../
[test@test-server ~]$ touch xyz
[test@test-server ~]$ ls -rlt xyz
-rw-rw-r-- 1 test test 0 Jan 28 23:22 xyz
[test@test-server ~]$ umask
0002
set umask (permissions) similarly as setgid on a directory
1,413,459,296,000
I am running Ubuntu 12.04 LTS. To be able to view my web project in the browser I did the following:

chown -R www-data:www-data /var/www/project

Now I wanted to open the project in NetBeans, but it does not have the permissions to read or write. So I created another group called netbeans and added the current user and the www-data user to that group:

chgrp -R netbeans /var/www/project

But NetBeans still cannot write or even read. And it seems that Apache can only read the project folder when its owner is www-data. Any ideas how to solve this?
So, assuming you want to allow the users www-data and netbeans to access /var/www/project by adding both of them to the group netbeans:

# you might have done this already, but no harm will be done by
# executing these commands again:
groupadd netbeans
adduser www-data netbeans
adduser netbeans

# set the user 'www-data' and the group 'netbeans' as the owners
chown www-data:netbeans -R /var/www/project

# allow group members to read and write files
chmod g+rw -R /var/www/project
Make web project writable by apache and other user
1,413,459,296,000
I have a question about permissions. Distro: Debian GNU/Linux 6.0.4 (squeeze) So I have a folder with a php script in it. Both folder and file are owned by User1 If I do this: php script.php I get a permission denied error. Fair enough; I'm User2, not User1. If I do sudo php script.php this works because my user account is setup for that. So I wrote this script to accept an optional argument to output some stuff, like a "debug" mode, so if I do this: sudo php script.php debug It outputs some stuff to CLI. So far so good. But sometimes the output from the script is too long and I can't scroll up to see the output from the beginning, so I want to be able to redirect the output to a file. So I have tried the following: This gives me a permission denied error. sudo php script.php debug >> results.txt This works sudo touch results.txt sudo chown User1 results.txt sudo chmod 777 results.txt sudo php script.php debug >> results.txt Alternatively, this works sudo su - php script.php debug >> results.txt So what I want to know is... why did the first one give me a permission denied error? I thought the point of the sudo prefix is that I run the command as root, which works for running the php script...but it seems to me that this doesn't extend to executing the redirect? Can someone explain why? EDIT: So I found something from another post on here that works: sudo php script.php debug | sudo tee results.txt The answer there confirms my suspicion about the redirect still being run under my user and not sudo, but it doesn't explain why...can anybody explain why?
When working on a command line the shell first does the redirections and after that does the execve to the external command. This means that the redirections are performed with unchanged user rights (because the shell doesn't care about sudo getting involved later). The sudo command which could easily do >> results.txt doesn't see that but just gets a changed file descriptor for /dev/stdout. Two possibilities: You make the redirection to a file / directory which is writable to you (User2). You write a wrapper script which does that and call the wrapper script via sudo.
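Another workaround, equivalent to the sudo tee trick, is to make the redirection happen inside a shell that itself runs as root: sudo sh -c 'php script.php debug >> results.txt'. A minimal unprivileged sketch showing that sh -c performs the redirection in the child shell rather than in the caller (paths here are throwaway):

```shell
tmp=$(mktemp -d)
# the child shell, not the invoking shell, opens results.txt for
# appending; under sudo that child runs as root, so a root-owned
# target file would also work
sh -c "echo 'written by the child shell' >> $tmp/results.txt"
cat "$tmp/results.txt"
```

This is why the bare sudo php ... >> results.txt form fails: the file is opened by your own shell, before sudo ever runs.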
Redirecting output in the shell, permissions
1,413,459,296,000
Here is my example; I am logged in as root. I created a group called devel using the groupadd command, then created two users dev1 and dev2 using the useradd command:

useradd -g devel dev1
useradd -g devel dev2

Passwords were set for both users.

su dev1
... now I create some files and directories.
exit
su dev2
...
ls /home/dev1

The response is: Permission Denied.

I am new to all these things; could somebody help me find a way to access these files and directories? Also, please let me know how to edit files for which permissions like chmod 774 have explicitly been set, since in the present scenario I just get a "permission denied" error.
I would go over some of the basics of unix file permissions to get started. Here are some links to get you started. A Unix/Linux Permissions Refresher UNIX permissions made easy Unix - File Permission / Access Modes Unix/Linux Permissions - a tutorial In general you don't want 2 users accessing files in each other's home directories (/home/). It's best to make a directory somewhere else with the permissions that are shared by both. For starters you could create a directory for them under /usr/local, /var/tmp, or even make your own top level directory such as /projects, and put a directory in one of those locations that they're able to access. EDIT #1 Per feedback from @peterph here's a good primer on how to make use of unix ACLs (Access Control Lists) in addition to the traditional chmod permissions (rwxrwxr-x) type. ACL's: Your Answer to Unix File Sharing
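A minimal sketch of the shared-directory idea described above, using a temporary path and the caller's own primary group so it runs without root (in real use, substitute a path like /projects and the devel group). The setgid bit on the directory makes new files inherit the directory's group:

```shell
base=$(mktemp -d)                 # stand-in for /projects
mkdir "$base/shared"
chgrp "$(id -gn)" "$base/shared"  # in real use: chgrp devel /projects/shared
chmod 2775 "$base/shared"         # leading 2 = setgid bit on the directory
touch "$base/shared/demo"
# files created inside inherit the directory's group, so both dev1
# and dev2 (members of devel) could read and write each other's files
ls -ld "$base/shared" "$base/shared/demo"
```

With real users, dev1 and dev2 would both belong to devel and work inside the shared directory instead of in each other's home directories.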
How to allow different users of the same group to access/edit files/directories created by a user of the same group
1,413,459,296,000
I need to allow a certain user access to these commands: sudo sync echo 3 | sudo tee /proc/sys/vm/drop_caches (This particular developer is doing some DB performance testing and flushing the fs cache gives them cleaner numbers, so please refrain from comments on why we're using this command.) Can I restrict the user to a very specific command with sudo, e.g. tee /proc/sys/vm/drop_caches? From what I understand, I can only restrict them to the command tee itself. I don't want to allow the user to be able to use tee as root on anything they want, however. I guess the other option is to give them write access to that particular file, but is that crazy talk, i.e. giving some non-admin write access to a file in /proc? Any pointers or ideas on the best way to restrict this are appreciated.
Sudo accepts command line arguments. So you can very well go ahead and make changes to the sudoers file such that tee is allowed when the argument is /proc/sys/vm/drop_caches; for everything else, sudo will deny execution. If you want tighter control, drop a neat and tidy shell script replacement somewhere under /usr/bin or /usr/local/bin with tighter permissions, and then in the sudoers configuration allow users to execute that script as root on that particular host.
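As a sketch, such a sudoers entry could look like the following (dbuser is a placeholder for the developer's account, and the binary paths may differ on your distribution — check with which sync and which tee). sudo matches command arguments literally, so any other argument to tee, including extra flags like -a, is denied:

```
# /etc/sudoers.d/drop_caches -- edit with: visudo -f /etc/sudoers.d/drop_caches
dbuser ALL = (root) NOPASSWD: /bin/sync, /usr/bin/tee /proc/sys/vm/drop_caches
```

The developer then runs exactly: sudo sync; echo 3 | sudo tee /proc/sys/vm/drop_caches.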
Specific command sequence with sudo and file permissions
1,413,459,296,000
I read the following about newgrp: "The newgrp command is used to change the current group ID during a login session." This made me think: how can I change my default primary group permanently? I imagine I can have a newgrp line in my shell's startup files, but is there a way to change my primary group ID for every login session without resorting to newgrp? I am interested in a generic solution, but in case it depends on the distribution, I am interested in solutions for Ubuntu 11.10 and for Red Hat Enterprise Linux Server (I have administrator privileges in the former, but not in the latter). Addendum: From the great answer @Shawn provided below to these questions, I read "I won't be able to do this without root privileges". This made me wonder: why? Assuming that I have privileges to run newgrp immediately after login, wouldn't this be the same as changing my default primary GID for all practical purposes?
On Linux (not BusyBox), Solaris, NetBSD, OpenBSD:

usermod -g group

The usermod command modifies the system account files to reflect the changes that are specified on the command line.

-g, --gid GROUP
    The group name or number of the user's new initial login group. The group must exist. Any file from the user's home directory owned by the previous primary group of the user will be owned by this new group. The group ownership of files outside of the user's home directory must be fixed manually.

On FreeBSD:

pw usermod -g group

On BusyBox:

addgroup -g user group

You won't be able to do this without root privileges.
Changing my default primary GID for every login session
1,413,459,296,000
I don't want wine to: have access to the network (extra: separately defined access control per program that uses wine to start) have access to the main users files (run wine as different user??) How can I do this? Are there any solutions? (Using Ubuntu 11.04)
With winetricks you can sandbox the wine prefix to remove all links to $HOME. I believe the command is winetricks sandbox but winetricks has changed so much lately I'm not positive.
How can I limit wine's permissions?
1,413,459,296,000
I am setting up SFTP access to one of my machines running Linux with the Dropbear SSH server. When I SFTP onto the machine remotely, I can see the entire filesystem on it, even if I might not have write access. How do I control what directories a user can see when connecting to my machine via SFTP? For example, what if I only want to make one directory, e.g. /ftp/, visible and accessible? Thanks.
I believe you'll need to run your dropbear ssh server inside a chroot'd jail if you want to restrict it to certain directories. If you were using a recent OpenSSH, I'd suggest using the ChrootDirectory setting in your sshd_config. It doesn't appear as though dropbear has a similar parameter, so you'll have to do it manually.
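For reference, this is roughly what the OpenSSH equivalent looks like (ftpusers is a placeholder group name; the chroot target must be root-owned and not group- or world-writable, and dropbear itself has no such option):

```
# /etc/ssh/sshd_config (OpenSSH, not dropbear)
Subsystem sftp internal-sftp

Match Group ftpusers
    ChrootDirectory /ftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

After reloading sshd, members of ftpusers see /ftp as their filesystem root and nothing above it.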
Set visible directories for SFTP access?
1,413,459,296,000
After I install any project on my Debian (Buster) machine with sudo cmake install or sudo make install, the binary gets placed inside /usr/local/bin, but although the PATH variable is set correctly, and even after a reboot, bash and fish can't find the command for binaries installed that way. This happened with cmake and nvim so far. For nvim, for example, I followed the build-from-source instructions: cloned the repo with git, ran make CMAKE_BUILD_TYPE=Release, then sudo make install. Now if I run nvim, the command is not found, but if I run sudo nvim the binary starts correctly. I compared the file permissions of binaries that are perfectly executable without sudo rights inside /usr/bin, and they have exactly the same permissions as the binaries inside /usr/local/bin: -rwxr-xr-x 1 root root. What am I doing wrong, and why are the binaries inside /usr/bin executable without sudo while the files installed from source into /usr/local/bin are not? This is my PATH variable:

/usr/local/bin:/usr/bin:/bin:/usr/games

Additional info: If I run /usr/local/bin/nvim, this is the output:

fish: The file “/usr/local/bin/nvim” is not executable by this user

If I run type -a nvim the output is:

type: Could not find 'nvim'

If I run sudo ./pathlld /usr/local/bin/nvim I get the following output:

drwxr-xr-x 19 root root 4096 Dec 22 12:17 /
/dev/nvme0n1p2 on / type ext4 (rw,relatime)
drwxr-xr-x 14 root root 4096 May 5 13:19 /usr
drwxr-xr-x 7 root root 4096 Mar 24 15:51 /usr/local
drwx------ 2 root root 4096 May 5 14:21 /usr/local/bin
-rwxr-xr-x 1 root root 10319072 May 5 14:21 /usr/local/bin/nvim

I'm running a custom OS by the company Siemens called "Siemens Industrial OS"; it is basically Debian Buster with a realtime patch.
The problem is this: drwx------ 2 root root 4096 May 5 14:21 /usr/local/bin group and others have no execute and read permission on /usr/local/bin. Run as root: chmod 755 /usr/local/bin to restore the standard permissions for this directory.
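To find which component blocks access in cases like this, you can print the permissions of every directory on the way down to the binary. A minimal sketch (it walks a path known to exist everywhere; substitute /usr/local/bin/nvim in practice) — any directory missing the x (search) bit for your user makes everything beneath it unreachable:

```shell
path=/bin/sh          # substitute the binary you cannot run
dir=$path
while [ "$dir" != / ]; do
  dir=$(dirname "$dir")
  # mode, owner and group of each parent directory (-L follows symlinks)
  printf '%s\t%s\n' "$(stat -Lc '%A %U %G' "$dir")" "$dir"
done
```

In the question's output, the drwx------ on /usr/local/bin is exactly the line such a walk would flag.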
binaries not executable by user (Permission denied) after cmake install / make install to /usr/local/bin
1,413,459,296,000
I have managed to execve("/bin/sh") in an Apache process, as a security exercise. The ls command works fine, for example in the root directory and in /bin, but not in /tmp, where it outputs nothing even though the directory is world readable. Furthermore, I created a file foo in /tmp and changed its owner to www-data, yet I get cat: /tmp/foo: No such file or directory. What could be the problem?
As mentioned by Artem, apache or php-fpm may be running as a systemd service with PrivateTmp=true mentioned here https://www.freedesktop.org/software/systemd/man/systemd.exec.html That would result in the web server running in its own mount namespace with a different /tmp and /var/tmp to the rest of the system. Assuming this is the problem you can use nsenter to execute your script in the namespace of another process. Process ID 1 should be in the same namespace to the majority of your system. nsenter -mt 1 /bin/sh
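You can confirm the namespace split by comparing mount-namespace identifiers under /proc. A minimal sketch (reading another process's ns link, such as PID 1's, needs root; apache2 is a placeholder for the actual server process name):

```shell
# each process's mount namespace shows up as a symlink like mnt:[4026531840]
readlink /proc/self/ns/mnt
# as root, compare with the init process and the web server:
#   readlink /proc/1/ns/mnt "/proc/$(pidof apache2)/ns/mnt"
# different identifiers mean the service has its own private /tmp
```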
Find the reason why "ls /tmp" doesn't work in a security exploit
1,413,459,296,000
I am trying to create a custom mlocate db for my home directory. When running the updatedb it complains about inability to open a temporary file. 55;~/>uname -a Linux yoga 4.12.14-lp151.28.59-default #1 SMP Wed Aug 5 10:58:34 UTC 2020 (337e42e) x86_64 x86_64 x86_64 GNU/Linux 56;~/>updatedb --version updatedb (mlocate) 0.26 ... 57;~/>updatedb -l 0 -o ~/.home-mlocate.db -U ~/ updatedb: can not open a temporary file for `/home/<user>/.home-mlocate.db' Prepending updatedb with sudo or running as root does not change the outcome. Running simply sudo updatedb without any arguments succeeds. More generally unless the database is the default one updatedb can not create temporary file: yoga:~ # /usr/bin/whoami root yoga:~ # /usr/bin/updatedb ; echo $? 0 yoga:~ # /usr/bin/updatedb -o /var/lib/mlocate/mlocate.db ; echo $? 0 yoga:~ # /usr/bin/updatedb -o /var/lib/mlocate/custom-mlocate.db ; echo $? /usr/bin/updatedb: can not open a temporary file for `/var/lib/mlocate/custom-mlocate.db' 1 yoga:~ # /usr/bin/strace /usr/bin/updatedb -o /var/lib/mlocate/custom-mlocate.db 2>&1 1>\dev\null | grep "openat.*custom-mlocate.db" openat(AT_FDCWD, "/var/lib/mlocate/custom-mlocate.db", O_RDWR) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/var/lib/mlocate/custom-mlocate.db.6JiH9O", O_RDWR|O_CREAT|O_EXCL, 0600) = -1 EACCES (Permission denied) yoga:~ # My operating system is openSUSE Leap 15.1 and my /home directory is on an ext4 file system. What is the problem and how it is to be resolved?
As was suggested by @fra-san the problem was caused by security policies. The solution is Make sure you have audit daemon running and audit2allow installed. For opensuse audit daemon is in the package audit, and audit2allow is in policycoreutils. Install if not present and start the daemon as root systemctl start auditd Run the offending program, e.g. updatedb -o ~/custom-mlocate.db -U ~/ as a normal user. The rest should be executed logged in as root (bad) or prepending each line with sudo (good). Examine the last few lines of /var/log/audit/audit.log tail -n 20 /var/log/audit/audit.log | grep -i denied You are interested in the line that starts with type=AVC and where the name of the offending command appears. There are two possibilities: The line contains avc: denied. Your system uses SELinux. The line contains apparmor="DENIED". It means your system uses AppArmor for security. If it is AppArmor, consult AppArmor manuals. E.g. https://doc.opensuse.org/documentation/leap/security/html/book.security/part-apparmor.html for OpenSuse. For SELinux: Copy the line you identified in step 3 to a separate file. E.g. tail -n 20 /var/log/audit/audit.log | grep -i "denied.*updatedb" > /var/log/audit/audit-partial-tmp.log Check that it is OK cat /var/log/audit/audit-partial-tmp.log and audit2allow -w -i /var/log/audit/audit-partial-tmp.log Create SELinux module audit2allow -i /var/log/audit/audit-partial-tmp.log -M custom-selinux-module Make new policy active semodule -i custom-selinux-module.pp Run the program as a normal user to check whether it is ok. E.g. updatedb -o ~/custom-mlocate.db -U ~/ Cleanup rm /var/log/audit/audit-partial-tmp.log custom-selinux-module.pp
updatedb can not create temporary file for a custom database file
1,413,459,296,000
I want to give a non-root user the rights to view and write a specific root directory. Is this possible? If so, how?
Yes, assuming you have root privileges, it is possible. root can do anything (except on newer Macs) - even if it's not ordinarily a good idea. One way to do this is to use your root privileges to simply add your non-root user (call him nonroot for this answer) to the sudoers file: sudo visudo Once the sudoers file is open, you can add the following: nonroot ALL = (ALL) ALL Save & close the sudoers file, and it is done. Note this gives user nonroot the same privileges as root.... and root can do anything. But I may have misunderstood your question... If your question was how to allow nonroot to access files or folders under /root (e.g. /root/librarydir), the answer is a little different. Instead of the line above, add this line instead: nonroot ALL = (root) sudoedit /root/librarydir/* This gives user nonroot the ability to make changes to files in that location.
How can I grant a non-root user access to a root library?
1,413,459,296,000
I was running several apps that were installed with Snap Store. I had not been using the system for some time and blindly ran:

sudo apt-get update
sudo apt-get upgrade
sudo snap refresh

and rebooted. When I start any of these apps, or even snap-store itself, it just silently exits. Everything else works as expected. To better diagnose the problem I tried to start the apps from the command line:

pdebski@system:~$ ps -ea | grep snap
764 ? 00:00:01 snapd
pdebski@system:~$ snap list
Name Version Rev Tracking Publisher Notes
chromium 83.0.4103.61 1165 latest/stable canonical✓ -
core 16-2.45 9289 latest/stable canonical✓ core
core18 20200427 1754 latest/stable canonical✓ base
gnome-3-28-1804 3.28.0-17-gde3d74c.de3d74c 128 latest/stable canonical✓ -
gtk-common-themes 0.1-36-gc75f853 1506 latest/stable canonical✓ -
kde-frameworks-5-core18 5.61.0 32 latest/stable kde✓ -
midori v8.0-31-gf6b3b1e 550 latest/stable kalikiana -
snap-store 3.31.1+git187.84b64e0b 415 latest/stable canonical✓ -
snapd 2.45 7777 latest/stable canonical✓ snapd
pdebski@system:~$ snap run snap-store
/snap/snap-store/415/bin/desktop-launch: line 51: /home/pdebski/.config/user-dirs.dirs: Permission denied
18:13:30:0737 GLib-GIO g_app_info_get_name: assertion 'G_IS_APP_INFO (appinfo)' failed
18:13:30:0740 Gtk Failed to load module "appmenu-gtk-module"
Unable to init server: Could not connect: Connection refused
18:13:30:0746 Gtk cannot open display: :1
pdebski@system:~$ ls -l .co*/u*s
-rw------- 1 pdebski pdebski 632 mar 21 2018 .config/user-dirs.dirs

I do not want user-dirs.dirs or any other file in my home directory world-readable, but nevertheless I changed the permissions to check what happens:

pdebski@system:~/.config$ chmod go+r u*s
pdebski@system:~/.config$ ls -ld .
drwxr-xr-x 27 pdebski pdebski 4096 cze 5 19:52 .
pdebski@system:~/.config$ ls -al u* -rw-r--r-- 1 pdebski pdebski 632 mar 21 2018 user-dirs.dirs pdebski@system:~/.config$ ls -ald ../.c*g drwxr-xr-x 27 pdebski pdebski 4096 cze 5 19:52 ../.config pdebski@system:~/.config$ snap-store /snap/snap-store/415/bin/desktop-launch: line 51: /home/pdebski/.config/user-dirs.dirs: Permission denied 20:45:44:0906 GLib-GIO g_app_info_get_name: assertion 'G_IS_APP_INFO (appinfo)' failed 20:45:44:0951 Gtk Failed to load module "appmenu-gtk-module" Unable to init server: Could not connect: Connection refused 20:45:45:0012 Gtk cannot open display: :1 I am stuck. What's wrong?
I have Manjaro, but I have had this issue several times already; a quick Snap Store reinstall solved it (try following their instructions at https://snapcraft.io/snap-store). After that I had rectangles instead of text characters, which I solved via:

sudo rm /var/cache/fontconfig/*
sudo rm ~/.cache/fontconfig/*
snap-store or basically any snap app cannot be run: user-dirs.dirs: Permission denied
1,413,459,296,000
I can preserve ownership of folderB and all files and folders inside when creating and extracting a tar file as follows: tar -cpf out.tar folderA/folderB sudo tar -xpf out.tar --same-owner However, folderA is owned by root when extracting unless the folder already exists. Is there any way to preserve ownership of the entire folder hierarchy with tar?
This happens because tar -cpf out.tar folderA/folderB doesn’t store folderA as a separate object in the tarball, so it doesn’t have any way of recording the ownership and permissions of folderA. To preserve the ownership, you need to tell tar to do so when you create the tarball; with GNU tar at least, the following works: tar -cpf out.tar --no-recursion folderA --recursion folderA/folderB This stores folderA (and its permissions etc.) without recursing, and folderA/folderB with its contents.
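A throwaway check that folderA itself is now stored as a member of the archive (GNU tar assumed, since --no-recursion/--recursion are GNU options):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p folderA/folderB
touch folderA/folderB/file
# store folderA itself (no recursion), then its contents (with recursion)
tar -cpf out.tar --no-recursion folderA --recursion folderA/folderB
tar -tf out.tar    # folderA/ now appears as its own entry
```

Because folderA/ is now a member in its own right, extracting with --same-owner restores its ownership too.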
Preserve ownership of entire folder hierarchy in tar?
1,413,459,296,000
I have a partition that's NFS-mounted from a NetApp SAN. I can create files in that partition, and I can chown those files to another user, any user, even root. How am I able to do so? I thought the kernel would prevent such a thing. I have done this again and again today, using multiple user IDs on the file. I cannot do this in /tmp or in my home directory, which is locally mounted. I've never seen this behaviour before. Also, I note that setcap/getcap are not found on this machine. I have checked my shell's capabilities and they are all 0's:

$ echo $$
15007
$ cat /proc/15007/task/15007/status
Name: bash
State: S (sleeping)
SleepAVG: 98%
Tgid: 15007
Pid: 15007
PPid: 14988
TracerPid: 0
Uid: 71579 71579 71579 71579
Gid: 10000 10000 10000 10000
FDSize: 256
Groups: 9000 10000 10001 10013 10018 10420 24611 36021 ...
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000

I am on a Red Hat 5.3 virtual machine:

$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.3 (Tikanga)

Running an old kernel:

$ uname -r
2.6.18-274.7.1.el5

The NFS mount uses defaults:

$ cat /etc/fstab
...
mynetapp00:/home /mnt/home nfs defaults 0 0

For user authentication, we're using Windows Active Directory with ldap on the Linux side:

$ grep passwd /etc/nsswitch.conf
passwd: files ldap

I'm able to do anything as sudo:

User mikes may run the following commands on this host:
    (ALL) ALL

because I'm one of the ADMINS (contents of /etc/sudoers):

User_Alias ADMINS = fred, tom, mikes
ADMINS ALL=(ALL) ALL

...But I don't know how that's germane, because sudo isn't involved.
In any event, I was able to create a file and give it my ownership as a user "john" who's not found in /etc/sudoers: # grep john /etc/sudoers # su - john $ touch /mnt/home/blah $ chown mikes /mnt/home/blah $ ls -l /mnt/home/blah -rwxrwxrwx 1 mikes DomainUsers 0 Oct 23 19:45 /mnt/home/blah ...and chown is not aliased (but we knew that, because if chown was an alias or some other program, then I would be able to change ownership in /tmp too): $ alias alias l.='ls -d .* --color=tty' alias ll='ls -l --color=tty' alias ls='ls --color=tty' alias vi='vim' alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde' $ which chown /bin/chown P.S. I'm not kidding: $ id uid=71579(mikes) gid=10000(DomainUsers) $ touch /mnt/home/blah $ chown john /mnt/home/blah $ ls -l /mnt/home/blah -rwxrwxrwx 1 john DomainUsers 0 Oct 23 19:04 /mnt/home/blah $ id john uid=37554(john) gid=10000(DomainUsers) $ chmod 755 /mnt/home/blah chmod: changing permissions of `/mnt/home/blah': Operation not permitted $ rm /mnt/home/blah $ ls -l /mnt/home/blah ls: /mnt/home/blah: No such file or directory $ touch /tmp/blah $ chown john /tmp/blah chown: changing ownership of `/tmp/blah': Operation not permitted
Yes, chown is the purview of the kernel, but remember that the NetApp is beyond the kernel's reach.  For local filesystems, the kernel translates user I/O requests into local hardware I/O operations (on the storage device).  For remote (e.g., NFS) filesystems, the kernel translates user I/O requests into network communications, asking / telling the server to do what the user wants. There is no guarantee that the server will do as requested.  For example, NetApp servers can be configured to support Unix-style permissions and Windows-style permissions simultaneously.  Unix/Linux clients will see the Unix-style permissions (user, group, mode, and maybe ACL).  Windows clients will see the Windows-style permissions (user but not group, attributes, ACL, and maybe extended attributes).  The NetApp internally stores a combination of the file properties, and enforces access based on some murky, proprietary algorithm, so a Unix operation might be refused because of a Windows-style permission restriction that the Unix client can't see. TL;DR: The NetApp server enforces permissions.  Therefore, the NFS driver for NetApp might be written not to do any permission checks, but to send all user requests to the server.  And so the decision to allow the chown to execute is probably being made 100% at the NetApp. I don't know why that would happen.  It might be a bug.  That would surprise me a little, since NetApp has been around for 25 years; I would expect a bug that big to be reported and fixed by now.  It might be a configuration setting on the NetApp.  That doesn't really make sense, but maybe the administrator of the server doesn't quite understand what he's doing (or perhaps there is some obscure policy reason why it would be configured this way).
How come I, as a normal user, am able to change ownership of a file?
1,413,459,296,000
I accidentally executed the following: chmod -R 741 /* (forgot the dot, yes). So the next thing I know is my terminal riddled with errors and I stop it in /mnt/ directory. At this point I realise what a terrible mistake I made and my terminal hangs dead. After some poking around I returned 755 to my home directory and everything seems working fine now. However, could I break something in the system? I didn't use sudo so I guess I didn't screw up too bad? And can I restore at least some default permissions for the home dir, granted that I have another user untainted by my blunders?
As others have pointed out, without root permissions you won't have damaged any part of the system itself. You will have changed permissions on files and directories that you own. Here is how you can return them to a (mostly) sane set of values: chmod u=rwx,go= ~ # Ensure we can access our home directory cd # Let's go... find . -type d -exec chmod u=rwx,go=rx {} + # All directories find . ! -type d -exec chmod u=rw,go=r {} + # Everything else find .bash_history .config .gnupg .ssh -exec chmod go= {} + # Lock out sensitive data This will set all directory permissions so that you can read/write/search them, and everyone else can read/search them. Change go=rx to go= if you want to prevent anyone accessing your directories. It will then set all file permissions so that you can read/write them, and everyone else can read them. Change go=r to go= if you want to keep your file contents private. Finally, it will remove all access for sensitive directories for everyone except yourself. If you have any executable files (programs, scripts) you will need to add the executable bit back in: chmod a+x ~/some/important/program As before, this gives everyone ("all") rights to execute the program. Change a+x to u+x if you want that right just for yourself.
Recursive chmod without sudo
1,413,459,296,000
How can I revoke all access (r,w,x) for a particular user to a file or directory tree (while still giving read permission to others)? Does setfacl with mask allow this ?
Yes setfacl should do it. Try the below, does it work ? setfacl -m u:user:--- file Where: -m is to modify the file/directory ACL user is the username for which you want to change permission --- will be the no permissions, replacing r,w,x file is the name of the file for which you want to change permissions
How to revoke access for a particular user?
1,413,459,296,000
I am setting up an automated backup job for some computers on my network. There is a server that will, daily, run an rsync command to backup each of the other computers. I'd like the user that the rsync job runs as to be able to read everyone's home directories (including sensitive files like encrypted secret SSH keys) but not be able to write anywhere on the system (except for /tmp). I'd also like to prevent normal users from reading each other's home directories, especially the sensitive parts. My first thought was to make a group comprising of only the backup user. Then I'd have the users chgrp their files to the backup group. Not being members themselves, they wouldn't be able to read each other's files but the backup user could read everything they wanted backed up. However, users cannot chgrp to a group they are not a part of. I can't add them to the group since that would enable users to read each other's home directories. I had considered giving the backup user a NOPASSWD entry in the sudoers file that allowed him to only run the exact rsync command it needs as root, but that seems potentially disastrous if I don't set it up right (if there was a way to make a symlink to /etc/sudoers and to get the rsync command to use it as a destination, for example).
TL,DR: run the backup as root. There's nothing wrong with authorizing the precise rsync command via sudo, as long as you carefully review the parameters; what would be wrong would be to allow the caller to specify parameters. If you want the backup user to be able to read file, see Allow a user to read some other users' home directories The idea is to create a bindfs view of the filesystem where this user can read everything. But the file level isn't the best level to solve this particular problem. The problem with backups made by rsync is that they're inconsistent: if a user changes file1 then file2 while the backup is in progress, but the backup reaches file2 before file1, then the backup will contain the old version of file2 and the new version of file1. If file2 is the new version of file1 and file1 is removed, that means that this file won't appear in the backup at all, which is clearly bad. The solution to this problem is to create a snapshot of the filesystem, and run the backup from that. Depending on your snapshot technology, there may be a way to ensure that a user can read the snapshot. If not, mount the snapshot and use the generic filesystem-based solution. And even if there is, rsync is still problematic, because if you run it as an ordinary user, it won't be able to back up ownership. So if you're backing up multiple users' directories, you need to run the backup as root.
File System Permissions: User who can backup all files
1,413,459,296,000
My solution (Arch Linux ARM on a Raspberry Pi) requires that a non-privileged user have access to the /dev/ttyAMA0 port. In the final implementation that user will be automatically logged in and a start-up script will be launched, but that is off-topic. The problem is that the /dev/ttyAMA0 port (owned by root:tty) has 0620 permissions, and though the non-privileged user is put in the tty group, the file permissions do not give him read access, and that is not good enough. In this thread I was told I should use the /etc/tmpfiles.d feature to fix the permissions. However, adding an /etc/tmpfiles.d/solution.conf file with the one line F /dev/ttyAMA0 0660 root tty does not change a thing. Perhaps I am not using the tmpfiles.d feature correctly.
It turns out that this problem was specific to the Raspberry Pi: the /dev/ttyAMA0 serial port that's linked to the hardware GPIO pins is by default initialized for virtual console access. I had to remove any reference to /dev/ttyAMA0 in /boot/cmdline.txt and reboot; /dev/ttyAMA0 then had proper group permissions (read+write), although the group name was now uucp. That is no problem, of course: I simply put my user in that group. Had I wanted to change the ownership or permissions of /dev/ttyAMA0, that could be done by editing the rule files in the /usr/lib/udev/rules.d directory.
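On the last point, rather than editing the shipped files under /usr/lib/udev/rules.d, a drop-in override under /etc/udev/rules.d takes precedence; a hypothetical rule (file name and values are assumptions, adjust to taste):

```
# /etc/udev/rules.d/99-ttyAMA0.rules
# give the serial port to group uucp with group read/write
KERNEL=="ttyAMA0", GROUP="uucp", MODE="0660"
```

After creating the file, replugging the device or rebooting (or `udevadm trigger`) should apply the new ownership.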
Changing the /dev/tty* permissions at startup
1,413,459,296,000
I've been learning the Linux OS over the last year or so and I am still very confused on how permissions vary from different configurations. I am trying to set up my local development environment and in so doing I notice that on the production VPS (CentOS), my file/directory permissions behave differently than my local setup (Mint). The remote server file structure has been set up where owner and group are myuser:myuser and rwx permissions are set to 755 for directories and 644 for files which works fine with the website's requirements. However, I have to change ownership to www-data:www-data and permissions on some folders to 777 in order to have it work the same locally. I've added myuser to the Apache group but the problem persists. That leads me to believe that my local shell user differs from my local Apache user, and that on the remote system, the shell and Apache users are the same. Is this right? I'm concerned about the security implications of changing these settings. I've read that you shouldn't give the web server write access. However, drupal requires it on the file repository dir called "files" and although the remote system files are owned by myuser:myuser (and if that user is the Apache user) then doesn't that mean the web server still has write access? So then what determines how these users should be set up? I assume that I'll be changing the Apache run user and or group but could someone explain the method and best practices for doing so? Should the same rules apply for local vs remote?
You are basically asking two separate questions.

How to set permissions on your local system to mirror the production one? You need to know the server configuration; in this case that includes the configuration of the HTTP daemon (httpd, a.k.a. Apache here), usually found in /etc/httpd or /etc/apache2. You also need to know which credentials the daemon runs with. Then you should be able to set your local permissions either in exactly the same way or effectively the same way (i.e. different user/group names but the same access rights when the daemon asks for a file).

Are write permissions for an HTTP daemon OK? It depends. Generally, the less writing a daemon can do, the better. On the other hand, in most cases (unless serving static/read-only content) that is not viable. If that is the case, several ways of hardening the system are at hand:

run the daemon under a special user which will only have read access to the document root and write access only where necessary (the extended ACL utilities getfacl/setfacl are your friends). On Linux you may also employ additional security models (grsecurity/SELinux/AppArmor/...).
properly written data handling: sane treatment of bogus input. In this case "sane" means "do not write anything".
write access through a "proxy": having the writing part (the model, in the often-used MVC pattern) in a separate process won't hurt security-wise, but it has to be implemented properly to be of any benefit.
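Mirroring the production scheme from the question (755 on directories, 644 on files) is a two-line find job; a sketch, using a throwaway directory in place of your real document root:

```shell
# Normalize a tree to 755 directories / 644 files.
docroot=$(mktemp -d)                 # substitute your real document root
mkdir -p "$docroot/sub"
touch "$docroot/index.html" "$docroot/sub/page.html"

find "$docroot" -type d -exec chmod 755 {} +   # dirs: rwxr-xr-x
find "$docroot" -type f -exec chmod 644 {} +   # files: rw-r--r--
```

The -type tests keep directory and file modes separate, which is what makes this safer than a recursive chmod -R.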
How to set up users, ownership and permissions across local and remote servers?
1,413,459,296,000
I want to mount an NFS file system with user/group ownership of <admin>. How can it be done?

mkdir /cert
chown admin.admin /cert
mount -t nfs 192.168.2.149:/portalweb /cert/
Create a user and group admin with a non-interactive shell on the NFS server, assuming that the admin user and group already exist on the NFS client. The non-interactive shell will prevent admin on the NFS client from gaining login access to the NFS server. This works because NFS matches the uid and gid of the server with its clients, so any file permissions assigned to the exported directories will remain intact as long as the uid and gid match between the server and client for the admin user and group. (ACL is an NFSv3-specific option.)

Note down the admin uid and gid (primary) on the client machine and use them to create the account on the NFS server. For example, uid/gid of admin on the client machine = 502.

NFS server, as root:

useradd -u 502 -s /sbin/nologin admin
mkdir /portalweb
chmod 770 /portalweb
chown admin /portalweb
chgrp admin /portalweb
ls -ld /portalweb
getfacl /portalweb

You can allow collaboration among admin group members through a setgid bit placed on the /portalweb directory.

vim /etc/exports
/portalweb 192.168.2.149(ro,sync,root_squash)
:wq
exportfs -rv

NFS client, as root:

mkdir /cert
vim /etc/fstab
192.168.2.149:/portalweb /cert nfs ro,nfsvers=4 0 0
:wq
mount -a
df -h -F nfs
mount | grep nfs
192.168.1.71:/exports on /cert type nfs (r0,nfsvers=4,addr=192.168.2.49,clientaddr=192.168.2.50)

The root user cannot access the files in /cert, because root has been squashed to user and group "nobody" (see /etc/exports on the NFS server). But root does have the privilege to mount the NFS exports on the client machine, by default. If you prefer to use the autofs service: normal users like admin do not have the privilege to set up automounting of NFS directories with autofs, unless they have been given special administrator privileges, as with sudo users.

ls -ld /cert
drwxrwx---. 12 admin admin 4096 Dec 10 /cert
ls /cert
ls: cannot open directory /cert: Permission denied

su - admin

As the admin user (only the admin user, or the admin group if properly configured with uids matching between client and server, can access the /cert contents):

ls /cert

As any other user:

ls /cert
ls: cannot open directory /cert: Permission denied
Mounting a NFS file system by changing default owner
1,413,459,296,000
I have mounted a remote share via my fstab using the line:

//path/to/target /media/f cifs gid=<mygroup's id>,dir_mode=0775,file_mode=0775 0 0

As a result, everything under /media/f winds up with permissions that look like this:

$ ls -al
drwxrwxr-x 0 root mygroup ...

I have made the user www-data a member of mygroup, with the goal of allowing a Django webapp to write files within /media/f. However, it doesn't work: I get permission errors. In an effort to fix the problem, I changed the mount line to set both the gid and uid so that the mount point has user www-data and group mygroup. So now my mount point looks like this:

$ ls -al
drwxrwxr-x 0 www-data mygroup ...

And everything works fine. The question: why is it that my webapp is able to write to /media/f when that folder is owned by www-data:mygroup but not when it is owned by root:mygroup (knowing that www-data is a member of mygroup)? I have tried remounting as well as restarting in the hopes of getting the membership of www-data (the user) in the group mygroup to "stick", but it just doesn't work. Oddly, when set up with the root:mygroup ownership, if I sudo su www-data and then try to write to /media/f from the terminal, everything works fine. Any idea what's going on there? It's as if the uwsgi process that's running Django isn't really running with the full permissions I've tried to grant to www-data. Thoughts?
It turns out this was quite specific to the context described above. I was using uWSGI to serve my site in emperor mode. I set the parameters uid=www-data and gid=www-data. I expected this to cause my vassal processes to have the permissions associated with the user and group www-data, as well as the permissions associated with any group to which www-data (the user) belongs. This assumption is incorrect: vassals do not run (by default) with any supplementary group IDs. It turns out uWSGI (in recent versions) has a fix for this. You can manually specify add-gid=mygroup in the uWSGI configuration, and you can specify this parameter as many times as needed to add as many GIDs to a vassal process as your heart desires. This feature is only available as of uWSGI 1.9.15, so you might need to upgrade to use this approach. Full writeup here.
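A minimal sketch of what that vassal configuration could look like; the option names follow the answer (uWSGI >= 1.9.15), everything else is a placeholder:

```ini
; hypothetical uWSGI vassal config
[uwsgi]
uid = www-data
gid = www-data
; grant a supplementary group; repeat the option for additional groups
add-gid = mygroup
```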
Different write behavior for owner and member of group despite 775 permissions
1,413,459,296,000
I have quite a strange problem using cron. I distilled it thus far: I created following simple bash script in /home/user1/cron_dir/cron.sh: #!/bin/bash echo "Success" As user1 I created following crontab: */1 * * * * sh /home/user1/cron_dir/cron.sh This gets installed and runs as expected (getting a "Success"-message from cron in my local mail). However, if I log out from my user1 account, wait a couple of minutes to perform the cron job, log back in and check my local mail, I get: sh: 0: Can't open /home/user1/cron_dir/cron.sh Edit: Thanks to garethTheRed I realized the problem: my home directory is encrypted. Of course the directory is only accessible when I'm logged in.
Answering this question because it'd be bad form to just edit your question to put the answer inline (really, just answer your own question, it's okay if done in good faith).

I had a similar problem: jobs started via cron appearing to work the first few times, but then failing. The symptoms all traced back to an inability to access the user's home directory and files within it. The same script and setup had worked just fine on a previous Ubuntu box.

The answer is that yes, if you chose to encrypt your $HOME directory during the Ubuntu install, you'll find that cron jobs will not be able to access files under it, unless you happen to have manually logged into the machine to cause the file system to be decrypted and kept mounted.

I said yes to that option because it sounded like a good idea, but I'm not firmly wedded to it. The solution I'm going with is to not encrypt my home directory, which means I have to remove encryption from it. It looks like that's a careful process of shifting all the relevant contents out of the folder, unmounting it, and shifting them all back in. Not pleasant. The basic process I followed is below.

NB: Be very careful and read through all the steps before following them, especially the final ones, as I suspect that once you uninstall ecryptfs it will be very tricky to get your old encrypted home folder back. If you're not sure of what you're doing, don't try this, as the risk of data loss is very real. I only plowed on ahead because I knew I had backups and could reinstall easily.

1. Add a new user fixer using adduser (because you need to be logged in as somebody other than yourself to shift your home directory around), and give them sudo rights.
2. Using sudo, create a new folder to transfer the contents of your home directory to: sudo mkdir /home/chrisc.unencrypted
3. Copy the contents of your home directory to the new unencrypted folder using rsync -aP /home/chrisc/ /home/chrisc.unencrypted (note the trailing slash on the source, so the contents land directly in the target). Make sure that all the hidden files have moved too (e.g. .bash_profile, etc.)
4. Remove the /home/chrisc.unencrypted/.ecryptfs folder.
5. Log out (and possibly reboot, as you need the encrypted /home/chrisc folders to be unmounted).
6. Log in again as fixer and use sudo su to run as root.
7. Check that the contents of /home/chrisc.unencrypted match what they should be. This is quite important, because the next few steps will remove your ability to see the original home folder.
8. Rename the old (encrypted) home using mv /home/chrisc /home/chrisc.old. You may find you need to reboot first to ensure that nothing is using that folder (otherwise you'll get a device-in-use message preventing the rename).
9. Rename the unencrypted home folder to be the user's default folder: mv /home/chrisc.unencrypted /home/chrisc
10. Uninstall the ecryptfs tools using apt-get remove ecryptfs-utils libecryptfs0. If I didn't do this, then logging in as chrisc I saw an empty home directory (as if it was still mounting the encrypted home directory and hiding my actual unencrypted home directory). I had to reboot to get it to be unmounted and the real unencrypted /home/chrisc to be visible.
11. Log in again as your original user and check.

It may be possible to remove the configuration folder for ecryptfs instead, or there may be a per-user configuration somewhere that says "when you log in as chrisc, mount the ecryptfs volume available at /home/chrisc/.Private". If you could sever that link, then you probably wouldn't need to uninstall ecryptfs.

If your new home folder doesn't look like it contains the right things, you should be able to restore the encrypted home folder by reversing the moves: making chrisc.old be chrisc again, and the unencrypted home folder chrisc.unencrypted. But that will only work up until the point you uninstall ecryptfs.
Cron can't access my home directory when I'm logged out
1,413,459,296,000
I'm required to empty the Linux buffer cache in a Python script that runs on a Debian wheezy VM. As root I run sync; echo 3 | sudo tee /proc/sys/vm/drop_caches, but the script is run by a user without root privileges. I've thought of the following possibilities:

1. give the user write permissions on the file /proc/sys/vm/drop_caches (which doesn't seem to work, as I get Operation not permitted when I chmod 646 /proc/sys/vm/drop_caches)
2. set the setuid bit on tee, which should work, but then the user could go apesh*t with tee
3. set the setuid bit on the script and remove the user's write permissions on it, so he couldn't alter it (but then again, that's bad, as the user may interact with the code)
4. write a tiny bash script featuring only the empty-the-buffer-cache step, then remove write/read permissions, set the setuid bit and add execute permission for the user

What is the sanest way to solve this?
From your list, only the fourth possibility (a small setuid program in a safe directory, e.g. /usr/local/bin, not changeable by the user) might work and could be safe. Note, however, that the Linux kernel ignores the setuid bit on interpreted scripts, so it would have to be a small compiled wrapper rather than a bash script. The easier and better option is to add the following line to /etc/sudoers (use visudo for this):

YOURUSERNAME ALL = NOPASSWD: /sbin/sysctl vm.drop_caches=3

and then include the line sudo /sbin/sysctl vm.drop_caches=3 in your script.
How to enable a non-root user to empty the linux buffer cache
1,413,459,296,000
I run Debian and I'm looking to limit a folder to be accessible by only one user. How do I set the permissions up so that only that one specific user can enter it, and no other user? The folder I'm talking about is in /home/user1/ and is named protected. How do I allow only user1 to enter this folder, and not other users that have root access? (Preferably only a password would do, as the users are not really "root" users; they're just standard users with access to all files.) Thanks!
Since ordinary permissions cannot keep out users with root-level access, you will need to encrypt your folder using something like TrueCrypt. A simpler, command-line-based alternative is encfs, which is really easy to use: sudo apt-get install encfs and you are ready to go.
Password protect a single folder ( not a web folder )
1,413,459,296,000
I have a file on a remote server that I want to transfer to my android device over ssh, only using the android device in the process. Using this setup, I tried an scp from the android device scp remote_user@remote_host:file file After being prompted for the password I got permission denied. I then tried to transfer it from the remote server scp -P 2222 file root@SSHDroid-ip:/mnt/extSdCard/file Without being prompted for the password I now get the message that the network (of the android device) is unreachable: lost connection. Is this a permission problem? I have transferred files over ssh from the remote server before, so I suppose the problem is on the side of the android device. Edit. I can transfer the file, from the remote server to the android device via scp, to the home path of the SSHDroid server on the android device. This home path is very cumbersome and deep, and can not be reached with the regular android API of the device. So I can transfer it to the home path of the SSHDroid server, but not to the path of my SD card on the android device. Where can I change/check the permission settings of the android device?
Physically go to the remote_host and change the file owner to remote_user. sudo chown remote_user /path/to/file Then you should have permissions to copy the file.
Using scp to transfer files to an android device
1,413,459,296,000
I was trying to add execute permissions to sh files in a folder. For that I mistakenly used: find . -print0 -iname '*.sh' | xargs -0 chmod -v 744 and the output was : mode of `.' changed from 0755 (rwxr-xr-x) to 0744 (rwxr--r--) mode of `./codis.sh' changed from 0644 (rw-r--r--) to 0744 (rwxr--r--) mode of `./ne fil.sw' changed from 0644 (rw-r--r--) to 0744 (rwxr--r--) mode of `./.whois1.sh.swo' changed from 0644 (rw-r--r--) to 0744 (rwxr--r--) mode of `./new file' changed from 0644 (rw-r--r--) to 0744 (rwxr--r--) mode of `./ezik.sh' changed from 0644 (rw-r--r--) to 0744 (rwxr--r--) mode of `./.whois1.sh.swp' changed from 0600 (rw-------) to 0744 (rwxr--r--) mode of `./whois1.sh' retained as 0744 (rwxr--r--) I now know that the correct usage for the find part was find . -iname '*.sh' -print0 So I created another find like so: find . \! -iname '*.sh' -print0 | xargs -0 chmod 600 so that I may set back the permissions for non-sh files (yes, I see that some files have 644 perms, not 600 but it does not matter now). The output for this command is : chmod: cannot access `./ne fil.sw': Permission denied chmod: cannot access `./.whois1.sh.swo': Permission denied chmod: cannot access `./new file': Permission denied chmod: cannot access `./.whois1.sh.swp': Permission denied I used sudo too but still nothing... I see I do not understand permissons properly... If I understand correctly I need x permisions for directory direc too in order to execute commands in said directory.
Your find cmd also finds the current directory ".". The rights of this directory will then be set to 600 and therefore you'll lose the rights to touch the files within this directory. So cd .., chmod 700 said directory and then run your reverting find, which now excludes the current directory, like this: find . \! -path . \! -iname '*.sh' -print0 | xargs -0 chmod 600
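As an aside, the original mistake with -print0 placement is easy to demonstrate, since find evaluates its expression left to right; a small sketch:

```shell
# Why the position of -print0 matters.
tmp=$(mktemp -d)
touch "$tmp/a.sh" "$tmp/b.txt"

# -print0 before the test: every visited path is printed first; the
# -iname test afterwards has no action attached, so it filters nothing.
wrong=$(find "$tmp" -print0 -iname '*.sh' | tr '\0' '\n' | grep -c .)

# test first, then -print0: only matching names are printed.
right=$(find "$tmp" -iname '*.sh' -print0 | tr '\0' '\n' | grep -c .)

echo "wrong=$wrong right=$right"   # wrong counts 3 paths, right counts 1
```

That is exactly how the directory "." and the non-.sh files ended up being chmodded in the first place.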
Why can't I chmod these files that I could earlier?
1,400,707,244,000
It is my understanding that the owner (user) of a file can have lower permissions than group or other. What is the justification for this? Can only root and the owner change the permissions on a file? If root removes the owner's permissions, can the owner add their permissions back with chmod?
This probably wouldn't happen, but it's possible someone would do it to prevent themselves from accidentally modifying the file. They can't lock themselves out of a file they own, though, so it's more or less a polite suggestion rather than an access control at that point.

Only the file owner or someone with the CAP_FOWNER capability (which root has by default, unless you remove it from the bounding set) can change the permissions on a file. This behavior can be further restricted at the MAC level so that only certain security contexts can exercise these rights (so that, for example, sshd users couldn't chmod a particular type of file).

Yes. The owner is the one to whom maintenance of access controls has been delegated. They're assumed to be the ones best placed to give or take away rights to other users.
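On the third question, a quick sketch shows the owner restoring their own access (run as a normal user; root bypasses the permission check anyway):

```shell
# The owner stays in control of the mode bits, even after dropping
# their own read permission.
f=$(mktemp)
echo secret > "$f"
chmod 044 "$f"      # owner: no access; group/other: read (if not root)
chmod 644 "$f"      # ...yet the owner may still chmod it back
cat "$f"            # readable again
```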
Why can user have lower permissions than group or other? Who can change them?
1,400,707,244,000
Can one directory be owned by two groups (RHEL 6.2)? I have created one directory, mkdir /opt/shared, then I created two groups:

groupadd sales
groupadd marketing

I manually added 21 users to each of these groups, then I changed the ownership with chown root:sales /opt/shared and then chown root:marketing /opt/shared. Then I changed the mode: chmod 2775 /opt/shared. The problem is, only the group marketing has rwx permission.
A directory (like everything else) can only have one group (the "ACL_GROUP_OBJ" in ACL terminology). But via ACL (setfacl) you can define permissions for other groups, too. Even (via default ACLs) for newly created objects in the directory.
Can one directory be owned by two groups?
1,400,707,244,000
In Windows this is easy. I never fully figured out Linux permissions, I have a directory such as this: /Photos The current ACL permissions are as follows: owner: root - full access group: photos -full access everyone: no access the photos group currently has full access. But I would like to add an additional group that have read only access, called readOnlyPhotos, without enabling read access to "everyone". That is, I want that folder to be inaccessible to anyone except for people who are either in photos group (who get full access) or readOnlyPhotos group (who get read only access). Also any new files and folders that get created in the /Photos folder should inherit these default permissions. How can I configure the permissions on the /Photos folder to achieve this effect?
You cannot do this using traditional Unix permissions, you'll have to use ACLs, that's Access Control Lists. Traditional Unix permissions do not accommodate more than 1 group per file/directory. Window's uses something similar. The commands to apply ACLs are called setfacl and getfacl. See my answer on this Q&A titled: Getting new files to inherit group permissions on Linux for an example that's similar to what you're looking for.
How to add a read only group permission to a folder that already has a default group, and have it inherited for all newly created files & folders?
1,400,707,244,000
I just set up a basic nginx server on my Debian 7.0 server. The config file /etc/nginx/nginx.conf is barely modified, and begings with: user www-data; # nginx shall run as user `www-data` I then set up a server configuration (file /etc/nginx/sites-available/filenamehere), which includes the following directives: root /home/diti/www access_log /home/diti/logs/access.log; I executed setfacl so that nginx can read all data in /home/ directories, and created the file /home/diti/logs/access.log as a regular user (username is diti). $ whoami diti $ groups diti diti : diti $ ls -lh /home/diti/logs/access.log total 12K -rw-r-x---+ 1 www-data diti 8.9K Mar 13 16:47 access.log $ tail /home/diti/logs/access.log tail: cannot open `access.log' for reading: Permission denied $ getfacl /home/diti/logs/access.log getfacl: Removing leading '/' from absolute path names # file: home/diti/logs/access.log # owner: www-data # group: diti user::rw- user:www-data:r-x group::--- mask::r-x other::--- How comes I am not able to read the log file, owned by www-data:diti? I should be able to, since I (diti) belong to group diti, and group permissions for the log file are set to r-x. Is it because of ACL? My filesystem is ext4.
getfacl shows you that the owning group does not have any permission (group::---). The ls output -rw-r-x---+ is confusing: once a file carries ACL entries (the trailing +), the group column shows the ACL mask, not the owning group's permissions. Here the r-x belongs to the named entry user:www-data:r-x, i.e. "someone specific and different from the owner", while members of the group diti get nothing, which is why you cannot read the file.
Not allowed to read a file with correct group permissions? ACL?
1,400,707,244,000
I'm trying to execute some binary with bash. I am getting a "Permission denied" message despite having given the full privileges (chmod 777) and being the 'root' user: This is the file description: -rwxrwxrwx 1 root root 641K Aug 22 15:04 wrapid This is the error message: bash: ./wrapid: Permission denied Output of strace ./wrapid: execve("./wrapid", ["./wrapid"], [/* 13 vars */]) = -1 EACCES (Permission denied) write(2, "strace: exec: Permission denied\n", 32strace: exec: Permission denied ) = 32 exit_group(1) = ? +++ exited with 1 +++ Output of ldd ./wrapid: /usr/bin/ldd: line 104: lddlibc4: command not found not a dynamic executable Output of file wrapid: wrapid: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x817251da41b3c8684a68f6f4afa1b4cd8f116072, not stripped Output of uname -a: Linux WR-IntelligentDevice 3.4.43-grsec-WR5.0.1.7_standard #2 SMP PREEMPT Thu Aug 22 16:27:28 CST 2013 i686 GNU/Linux
According to the info provided, you are trying to run a 64-bit executable (file reports x86-64) on a 32-bit kernel (uname -a reports i686). It won't work that way. You either need a 32-bit binary or a 64-bit kernel with 64-bit glibc libraries.
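A quick way to confirm such a mismatch without running the binary; /bin/ls stands in for ./wrapid here:

```shell
# Byte 5 of an ELF header (EI_CLASS) is 1 for 32-bit, 2 for 64-bit.
uname -m                                        # kernel arch, e.g. i686
cls=$(od -An -tu1 -j4 -N1 /bin/ls | tr -d ' ')
if [ "$cls" = 2 ]; then echo "64-bit ELF"; else echo "32-bit ELF"; fi
```

If the binary's class is 64-bit while uname -m reports a 32-bit architecture, execve will fail exactly as in the strace output above.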
permission denied when executing binary despite "rwx" privilege and root user
1,400,707,244,000
My system configuration is the following: a UEFI computer, dual-booting Windows 8 and Linux Mint 15 Olivia. I would like to be able to run Linux executables that are stored on an NTFS partition rather than having to copy them each time to a Linux partition. Whenever I tick the Execute check box it de-ticks itself, I guess because the file is on an NTFS partition. I've come across this post: https://askubuntu.com/questions/11840/how-to-chmod-on-an-ntfs-or-fat32-partition Searching on my system, I have found that the mentioned settings can be made in the Disks settings dialog. I thought about appending exec at the end of the default parameters, but I'm not sure about these things: According to answer 2 of the above question, NTFS is POSIX-compatible and "To enable this, you need a 'User Mapping File'". So, is the User Mapping File a file stored on the Linux partition? (i.e. does it store Linux-specific permissions on my Linux partition instead of directly changing the permissions on the NTFS partition?). In short: can I safely do this and still use Windows with this partition?
Adding exec while nosuid is set is relatively safe (comparable to a native home directory on a Linux system). Unsafe would be suid together with exec. And no, you should not need a user mapping file for this: you are already mapping everything to user and group aybe.
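For completeness, a hedged sketch of an fstab line along those lines; the UUID, mount point and uid/gid are placeholders:

```
# hypothetical /etc/fstab entry for the shared NTFS partition
UUID=0123-ABCD  /media/windata  ntfs-3g  defaults,uid=1000,gid=1000,exec,nosuid,nodev  0  0
```

exec allows running binaries from the partition, while nosuid keeps any setuid bits ignored, matching the "relatively safe" combination above.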
Running executables on an NTFS partition?
1,400,707,244,000
I've checked the manpages, the mount, the permissions ... (edit: combined history into one sequence as requested. Starting to seem a not-simple problem. Nothing new since last edit, just bundled up all pretty) ~/sandbox/6$ editfunc doit ~/sandbox/6$ -x doit + doit + find . + cp /bin/ln /bin/id . + sudo chown jthill:jthill id ln + chmod g+s id ln + mkdir protected + chmod 770 protected + touch data + set +xv ~/sandbox/6$ ls -A data id ln protected ~/sandbox/6$ ls -Al total 92 -rw-r--r-- 1 jthill jthill 0 Nov 8 02:39 data -rwxr-sr-x 1 jthill jthill 31432 Nov 8 02:39 id -rwxr-sr-x 1 jthill jthill 56112 Nov 8 02:39 ln drwxrwx--- 2 jthill jthill 4096 Nov 8 02:39 protected ~/sandbox/6$ sudo su nobody [nobody@home 6]$ ./id uid=619(nobody) gid=617(nobody) egid=1000(jthill) groups=617(nobody) [nobody@home 6]$ ./ln ln protected ./ln: failed to create hard link ‘protected/ln’ => ‘ln’: Operation not permitted [nobody@home 6]$ ./ln data protected ./ln: failed to create hard link ‘protected/data’ => ‘data’: Operation not permitted [nobody@home 6]$ ln ln protected ln: failed to create hard link ‘protected/ln’ => ‘ln’: Permission denied [nobody@home 6]$ ln data protected ln: failed to create hard link ‘protected/data’ => ‘data’: Permission denied [nobody@home 6]$ exit ~/sandbox/6$
Found it. If the sysctl fs.protected_hardlinks is set, hard links by someone not the owner (and without CAP_FOWNER) are only allowed if the target is:

not special
not setuid
not both setgid and executable
both readable and writable

according to fs/namei.c. Some guy on SO wanted to have a dropbox folder people could add to but not see into (I think that's a Windows feature), I figured this was one of the few places a setgid would be good and the smoketest drove me here. Thanks to all and especially Anthon who suggested checking the source. (edit: sysctl spelling)
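You can check whether this restriction is active on a given machine with a one-liner:

```shell
# 1 means the hardlink restrictions from fs/namei.c are enforced.
cat /proc/sys/fs/protected_hardlinks
# equivalently, via the sysctl spelling of the same knob:
# sysctl fs.protected_hardlinks
```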
setgid binary doesn't have permission, mount's right, I'm missing something, but what, please?
1,400,707,244,000
This is a followup on my question here. I am setting up my first webserver and am fumbling with what user accounts to create and what permissions to provide for better security. Below is what I have. For the 2 developers, I have 2 accounts (added to the supplementary group devs), and only they are allowed to ssh to the server. For the web application (Django based), I have created 1 normal user, app, with shell access (I haven't configured it as a --system user; it belongs to group app). The 2 developers, after ssh-ing to the server, will su to app for any updates and for starting/stopping the application. User app is not allowed to perform su (blocked by not being added to the group setting in /etc/pam.d/su using pam_wheel.so). I also have a 3rd account with no su capabilities for backup-related tasks, where a cron job will ssh and fetch log files, status, etc. Let me know if security aspects need to be made better. (PS: I am a novice here)
su requires sharing a password. I prefer sudo. So, developers would run either sudo -u app command to run command as app, or run sudo -u app -i to start an interactive shell as app. Possibly sudo -u app -i /bin/bash if you've set app's shell to something like /bin/false or /bin/true. If they don't need a full shell as the app, but rather only need to restart the app, you can limit the commands they can run as app. Use a default ACL on the directories they need to access which grants access to the devs and to the app so you don't have filesystem permission issues. The principle of least privilege is what you need to follow, IMHO. If they don't need to do it, don't give them access to do it. Typically I prefer to use keys only for ssh. If you can do that, disable passwords for the devs, and set the sudo rules to not require a password. Then there's no passwords needed for anyone, and thus no password to disclose / lose / reset. Reading assignment for this evening, because it's a bit much for this post: "how do filesystem ACLs work" and "how do I configure sudo". Perhaps followed by managing ssh keys.
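For the "limit the commands they can run as app" point, a hedged sketch of such a sudoers rule; the %devs group matches the question, but the restart script path is an assumption:

```
# hypothetical /etc/sudoers.d/devs entry; edit with visudo -f
# members of devs may run exactly this command as the app user
%devs ALL = (app) NOPASSWD: /home/app/bin/restart-app.sh
```

A developer would then run sudo -u app /home/app/bin/restart-app.sh, and nothing else, as app.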
New webserver: What all user accounts should I create and permissions to provide for
1,400,707,244,000
If you administer a Linux server, what steps do you take to properly grant someone else access? Assuming they will log in over ssh, don't already have an account, and will need root access. (Temporarily).
Creating the account

When granting someone access to a Linux system you usually use the useradd command:

$ useradd someuser

Granting filesystem permissions

If the user will be working with any files on the system, then add them to the corresponding groups based on which files they'll be working with. You can use the usermod command for this:

$ usermod -a -G group1 someuser

This will append the user to the group group1.

Granting sudo permissions

Once their account's been created, I'll grant them very specific sudo rights (only if needed) using the visudo command. See the /etc/sudoers file for more info on how to grant specific rights. Do not do this:

someuser ALL=(ALL) ALL

unless you intend to give them full root access to the system. Also think about setting a time frame for how long the account should be valid; you might want to set a timeout on it so that it's only valid for a window of time. If they need to just run the date command, for example:

Cmnd_Alias DATE=/bin/date
someuser ALL=NOPASSWD: DATE

(a first column of ALL here would grant the rule to every user, not just this one). Also you might want to create Unix groups based on roles, perhaps myadmins & regusers, then use these groups in your sudoers file when granting access:

%myadmins ALL=(ALL) ALL
%regusers ALL=(ALL) DATE
How do I grant someone else access to my linux server
1,400,707,244,000
I just used the Truecrypt GUI to encrypt my entire USB hard drive (currently /dev/sdb). I formatted it to ext4. Now, I can mount it in my home directory with the command truecrypt --mount /dev/sdb USB-Device For some reason I don't have read/write permission. So my question is: How do I get read/write permission? Is it a drive-specific issue, or a system issue? It wouldn't surprise me if there was just a group I had to add my user account to. Additionally, I'm running Arch Linux, but I'm pretty sure that's not entirely relevant.
Usually something along the lines of this for any new filesystem: mount /dev/newthing /mount/point chown owner:group /mount/point/ chmod 750 /mount/point/ (obviously using different values for owner / group / 750 depending on your requirements) It doesn't matter if newthing is encrypted or not. In the end you have a regular filesystem which follows regular permission rules. If for some reason you don't have permission, unless you opened the crypt container itself in read-only mode, it's a question of chown/chmod. Nothing special about it.
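Those three steps can be rehearsed on any directory - a sketch where a temporary directory stands in for /mount/point, and owner/group are your own user and primary group (handing files to a different owner would need root):

```shell
mp=$(mktemp -d)                      # stands in for /mount/point
chown "$(id -un):$(id -gn)" "$mp"    # owner:group of your choice
chmod 750 "$mp"                      # rwx for owner, r-x for group, nothing for others
stat -c '%a %U:%G' "$mp"             # confirm the resulting mode and ownership
rm -r "$mp"
```

The same commands work unchanged whether the filesystem underneath is plain or a mounted crypt container.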
Permissions in Ext4 Truecrypt Drive
1,400,707,244,000
I've been gently experimenting with roles in Solaris, and wonder a bit about setting a password for it. I want the role I've created to be used by several users, so setting the password to the same as one user (as is done for the root-role) is not an option. Having several users "sharing" (knowing) a single password, I've heard, is a bad idea - that is after all the rationale behind sudo. So for now, I've set a blank password (just "Enter"). I did this not by leaving the field in /etc/shadow blank nor by setting it to "NP"... I did it by using passwd to set it to nothing (pressed "Enter" twice) - and the resulting encrypted entry was surprisingly long and garbled. So my first question; is it safe to leave a role with blank ("Enter") as password? After all, only logged-in users with that role can assume it... A few more questions: Is there some way in the role-specification to specify that a user should authenticate with his own password - rather than the role's - to switch to the role (without changing the role's password to that of the user, as I assume is done with the root-role)? If not, are there other ways (e.g. by using sudo - maybe in combination with su? If so, how?) to accomplish this? How is the root-role bound to the password of the "first user"? Is it some field in the role-specification that makes it happen automatically? What happens behind the scenes to make it happen?
Yes, it is safe to remove the password from a role. In fact at my site we do it more or less by default (except for the root role). As you point out a user that assumes a role has already been authenticated so asking him to authenticate once again is really just too much authentication IMHO. I believe this also answers your second question. Just remove password from the role! A few notes on how to remove a password from a role. In the following that role is named roleX. In Solaris 10 It has always been enough for me simply to do: passwd -r files -d roleX In Solaris 11 Something has been changed by Sun/Oracle wrt enforcement of the PASSREQ parameter in /etc/default/login (see man page for login). In order to create a role without a password you need to do as in Solaris 10 on each role account as well as globally setting the PASSREQ parameter to 'NO' in /etc/default/login. As I see it PASSREQ acts as a last line of defense. You still need to physically remove the password from each account in order for the account not to have a password. I wish Solaris had a setting like PASSREQROLE (my proposal) that would say if it was ok for role accounts not to have a password (rather than for all accounts as is the interpretation of PASSREQ).
Solaris/OpenIndiana: Password for roles?
1,400,707,244,000
I have noticed that I am unable to preserve ownership if I rsync -o files. However, ownership is preserved when I move them. This is all without admin privileges. What is the rationale for this? Several threads, e.g. (1), seem to echo the need for admin privileges.
When you move a file within the same filesystem, this detaches the file from its original location and attaches it to the new location. The file data is unchanged, and the file metadata — the inode — is also unchanged. So the file retains its ownership, permissions, times, and any other attribute: only its name and containing directory changes (and also the inode change time (ctime)). When you copy a file (with rsync or any other utility), this creates a new file with the same contents, belonging to you, with its modification time set to the date the copy was finished. Depending on the copy utility, it may additionally copy some of the file's metadata over from the original, e.g. the owning user with rsync -o. Moving an inode only requires write permission on the source directory (to detach it) and on the target directory (to reattach it). It doesn't require that you own the file or even can read or write it. On the other hand, you cannot create a file belonging to another user, or give away a file to another user (except for programs running as root). So copying a file as non-root cannot preserve ownership (unless the user doing the copy owned the original file).
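The inode-level difference described above is easy to observe with stat - a minimal sketch on a temporary directory (same filesystem, so the mv is a pure rename):

```shell
d=$(mktemp -d)
echo data > "$d/a"
ino=$(stat -c %i "$d/a")             # remember the inode number

mv "$d/a" "$d/b"                     # rename: same inode, metadata untouched
[ "$ino" = "$(stat -c %i "$d/b")" ] && echo "mv kept the inode"

cp "$d/b" "$d/c"                     # copy: brand-new inode, fresh metadata
[ "$ino" != "$(stat -c %i "$d/c")" ] && echo "cp created a new inode"
rm -r "$d"
```

A move across filesystems behaves like copy-then-delete, so it is the copy case that applies there.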
Unable to preserve ownership in a copy but able with a move?
1,400,707,244,000
I have a script that I've made that needs to have root permissions to enable and disable bluetooth features. I am binding this to a button so it is not feasible to log in as root to run the script. How do I properly set the file permissions for the script? I know that it's good practice to make it so that only root can edit and read the file, but how do I give it full execution permissions? BT_RFKILL=$(rfkill list | grep tpacpi_bluetooth_sw | sed 's/\([0-9]\+\):.*/\1/') BT_STATE=$(rfkill list $BT_RFKILL | grep "Soft blocked: yes") if [ "x" == "x$BT_STATE" ]; then sixad --stop sleep 2s rfkill block $BT_RFKILL else rfkill unblock $BT_RFKILL sleep 2s sixad --start fi exit 0 The script runs perfectly if I sudo it, but that's not ideal since I'd love run it through a simple key binding.
The secure way is probably to use sudo on the lines of your script that call sixad and rfkill (I'm assuming both need root privileges). Then configure sudoers to allow those commands to be run without a password by the user or group which is supposed to run the script.
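A sketch of the sudoers side of that (the username tom is hypothetical, and the binary paths are assumptions - check yours with command -v rfkill sixad), edited in with visudo:

```
tom ALL=(root) NOPASSWD: /usr/sbin/rfkill, /usr/bin/sixad
```

Then the script would call sudo rfkill ... and sudo sixad ... on just those lines, and the key binding can run the script as the normal user without any password prompt.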
Running user script with root permissions
1,400,707,244,000
I stumbled across the directory /etc/ssl/private on Ubuntu (12.04), it has following permission: drwx--x--- 2 root ssl-cert 4096 7月 8 2012 private/ I wonder what does this mean for group ssl-cert? And why is it set this way?
Having execute permission on a directory is required in order to read the inodes of the files within that directory. Within that directory is a single file, ssl-cert-snakeoil.key, which has read permission only for the ssl-cert group (and root). So this combination of permissions is the most minimal permission set that would allow a member of the ssl-cert group to access the file. Restricting access to this file is important because it contains the private key for any services you run that make use of SSL. The idea is that only users (which in this case would correspond to services, e.g. the apache user) that require access to the key are members of this group. All other users are forbidden. The private key needs to stay secret to guarantee that you are who you say you are when a client establishes an encrypted connection to your service.
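The layout is easy to reproduce to see the modes involved - a sketch where a temporary directory stands in for /etc/ssl/private and your own primary group stands in for ssl-cert:

```shell
d=$(mktemp -d)/private
mkdir "$d"
chgrp "$(id -gn)" "$d"                    # stand-in for the ssl-cert group
chmod 710 "$d"                            # rwx owner, x-only group, nothing for others
touch "$d/ssl-cert-snakeoil.key"
chmod 640 "$d/ssl-cert-snakeoil.key"      # read for owner and group only
stat -c '%a %n' "$d" "$d/ssl-cert-snakeoil.key"
```

Group members can then open the key by its full name thanks to the x bit, but cannot list the directory (no r bit); root's copy differs only in using root:ssl-cert as owner and group.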
What does it mean if a directory has only x (executable) permission for certain user/group?
1,400,707,244,000
I am looking for something like sudo (or something that can be run from sudo) that allows me to run a program with my uid and primary gid, but with an additional supplementary group.
Warning #1: this is an alternative 'hacky' approach. Warning #2: not all applications/scripts might be able to handle this ambiguous uid/gid. Create a new user with the same uid and gid by using useradd, with its --non-unique option to allow multiple users with the same uid/gid to exist and its --groups option to specify additional groups. E.g.:
useradd --uid 1001 --gid 1001 --groups 1002,1003 --shell /bin/bash \
    --home /home/cloneduser cloneduser
You should then run the program as this new user. I suggest debugging it using id.
Run program with additional supplementary group
1,400,707,244,000
I would like to know why my Lubuntu 11.10 does not open my .dvi files created with Latex. Please, can anybody help me solve this problem? When I use evince L1.dvi, in the terminal, the evince opens but not the file and the messages on the terminal are: evince L1.dvi (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-buttons.css:159:10: Expected valid border (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:102:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:117:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:134:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:153:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:165:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:175:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:186:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:198:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:208:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:218:16: Themeing engine 'adwaita' not found (evince:3556): Gtk-WARNING **: Theme parsing error: gtk-bars.css:223:16: Themeing engine 'adwaita' not found warning: kpathsea: configuration file texmf.cnf not found in these directories: /usr/share/texmf/web2c:/usr/share/texmf-texlive/web2c:/usr/local/share/texmf/web2c. kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 cmti10 mktexpk: Permissão negada kpathsea: Appending font creation commands to missfont.log. 
page: Warning: font `cmti10' at 600x600 not found, trying `cmr10' instead kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 cmr10 mktexpk: Permissão negada page: Warning: font `cmti10' not found, trying metric files instead kpathsea: Running mkofm cmti10 mkofm: Permissão negada kpathsea: Running mktextfm cmti10 mktextfm: Permissão negada page: Warning: metric file for `cmti10' not found, trying `cmr10' instead kpathsea: Running mkofm cmr10 mkofm: Permissão negada kpathsea: Running mktextfm cmr10 mktextfm: Permissão negada page: Error: could not load font `cmti10' warning: kpathsea: configuration file texmf.cnf not found in these directories: /usr/share/texmf/web2c:/usr/share/texmf-texlive/web2c:/usr/local/share/texmf/web2c. kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 cmti10 mktexpk: Permissão negada kpathsea: Appending font creation commands to missfont.log. page: Warning: font `cmti10' at 600x600 not found, trying `cmr10' instead kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 1+0/600 --dpi 600 cmr10 mktexpk: Permissão negada page: Warning: font `cmti10' not found, trying metric files instead kpathsea: Running mkofm cmti10 mkofm: Permissão negada kpathsea: Running mktextfm cmti10 mktextfm: Permissão negada page: Warning: metric file for `cmti10' not found, trying `cmr10' instead kpathsea: Running mkofm cmr10 mkofm: Permissão negada kpathsea: Running mktextfm cmr10 mktextfm: Permissão negada page: Error: could not load font `cmti10' (evince:3556): EvinceView-CRITICAL **: ev_document_model_set_document: assertion `EV_IS_DOCUMENT (document)' failed (evince:3556): EvinceDocument-CRITICAL **: ev_document_get_n_pages: assertion `EV_IS_DOCUMENT (document)' failed (evince:3556): EvinceDocument-CRITICAL **: ev_document_get_max_page_size: assertion `EV_IS_DOCUMENT (document)' failed
Ubuntu sets up evince to use AppArmor, which prevents it from accessing certain files even though the files have appropriate permissions. See Evince fails to start because it cannot read .Xauthority for a different but related problem. Do you have a custom TeX installation? If so, evince is probably being prevented from writing the font files by AppArmor. See Ubuntu bug 846639, which shows how to fix the AppArmor configuration for your system. A simple workaround is to view the file once in another viewer such as xdvi, so that the fonts are generated. Then evince will be able to read them. Or run: allneeded L1.dvi You can also run the commands allcm and allec to generate some common fonts.
Problem opening .dvi files
1,400,707,244,000
I have a weird problem on my notebook. When I want to login to the system (for example on tty1), immediately after I put in my username I get a Permission Denied error, even with root! I booted a liveCD and checked /etc/{passwd,shadow}, but both are accessible by root and the users are still in there. What could be the problem? How do I fix it?
It is definitely a PAM error, as mentioned in the comments. I had the same issue; I just downloaded the latest PAM source from the link below and compiled it as instructed on the page. http://www.linuxfromscratch.org/blfs/view/svn/postlfs/linux-pam.html
Why do I immediately get a "Permission Denied" error on login, even with root?
1,400,707,244,000
There seems to be a path inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user, and wish to restrict some /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:
[groups]
grp_W = a, b, c, g
grp_X = a, d, f, e
grp_Y = a, e,
[/]
* =
@grp_Y = rw
[somerepo1:/projectPot]
@grp_W = rw
[somerepo2:/projectKettle]
@grp_X = rw
What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, where I give everyone access and restrict it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions; even fully defined such as:
[/]
a = rw
b =
c =
d =
e =
f =
g = rw
[somerepo1:/projectPot]
a = rw
b = rw
c = rw
d =
e = rw
f =
g = rw
[somerepo2:/projectKettle]
a = rw
b
c
d = rw
e = rw
f = rw
g
Which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.
After a bunch of headaches, I let this idle with * = rw at the SVNParentPath level. Coming back to it, the obvious suddenly hit me: the read order was the issue. Firstly, my example naming conventions were flat out wrong, as it should be [<repo_name>:<path-in-repo>]. My actual conventions were correct, so syntax is not the root cause. The main issue is that the authz file expects an order of 'specificity', where the first rule read, i.e. the first available match, is applied. In my case, everything would match at the root and it would be one-and-done. Thus reversing my example ordering:
[groups]
grp_W = a, b, c, g
grp_X = a, d, f, e
grp_Y = a, e,
[ProjectPot:/]
@grp_W = rw
[ProjectKettle:/]
@grp_X = rw
[/]
* =
@grp_Y = rw
makes it accepted and behave as expected. This behavior is NOT DOCUMENTED and in my opinion is a serious snafu over something utterly trivial.
Setting SVN permissions with davsvnauthz
1,400,707,244,000
When traversing a filesystem with large numbers of files, is it quicker to do so as root compared to any other user? For example, if there are several million files under /data, and /data is owned by user123, would a recursive grep complete more quickly for root than for user123? I'm curious whether there's an optimization that skips a permissions check, or whether it's going to perform a stat for every file anyway, and so the check is just a conditional. And, whether this would be generally applicable, or by filesystem. I've picked up a superstitious habit of running an extremely large operation like that as root to speed it up, but haven't found a good way of testing whether it actually helps.
There's no special treatment of root in Linux there - though root could have the powers to change access modes, modify ACLs, disable SELinux enforcement etc., the user is still subject to these. So, really, the kernel can't take any shortcuts there. (Thinking about what it means that you're working inside Linux UID namespaces makes it even more complicated. That universally powerful root might not even exist from the view of processes that you're running.) or whether it's going to perform a stat for every file anyway, stat is a syscall, i.e., something that userland does to learn more about a file. It's not a given that a recursive grep even needs to do that; the inefficiency of getting all directory entries and then separately doing that on every entry is what led to the existence of specialized calls that combine the two, getdents(64). The result doesn't contain any access information, but instead of checking whether the current user can access a file found that way before accessing it, it would be sensible to just go ahead and try - if it fails, you couldn't. That saves one context switch per file. So, how could one actually exploit root-style privileges to make a recursive grep faster? The answer probably lies in minimizing context switches between the userland grep and kernel-side system functionality. Short of writing something like a kernel module that gives one a flattened view of all the files a process would be able to access, alongside some functionality to convert a hit position back to an actual file path, I don't see an immediate clean way of extending the Linux kernel to avoid having to open, and if possible, read each file and directory. Linux has this model where files are really meant to be accessed from user space, to get all the concurrency, safety, caching and memory allocation behavior into something that has well-defined (by the process owning the file handle) semantics.
Is traversing a Linux filesystem faster as root
1,400,707,244,000
For some reason, there isn't a "staff" user group on a folder that I just created. How do I add/permit the "staff" user group to a folder that I just created? This post seems to suggest typing this in the terminal: sudo chgrp -R staff ./folderName Can the command work if I don't use the "-R"? I don't really want it to apply to the subsequent folders underneath it. Also, this reddit post suggests that I need to also try this first: sudo chown <owner's username> ./folderName I thought "chown" is change owner? Which user do I need to change to? What should <owner's username> be? Kindly please be patient and guide me. I'm very new to these unix commands. I just want to make sure that I'm doing the right thing before I stuff something up.
You want to change the group of a single directory: chgrp staff ./directoryName Remember that all commands come with documentation, so man chgrp can help to confirm this.
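One caveat for the read/write goal in the title: chgrp only changes the group, it does not grant the group write access - that needs the group write bit too. A sketch, with your own primary group standing in for staff:

```shell
dir=$(mktemp -d)/directoryName
mkdir "$dir"
chgrp "$(id -gn)" "$dir"    # your primary group stands in for: chgrp staff ./directoryName
chmod g+rwx "$dir"          # group members can now read, write and enter it
stat -c '%a %G' "$dir"
```

man chmod documents the symbolic g+rwx form alongside the octal one.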
Adding/permitting "staff" to read/write in folder
1,400,707,244,000
Issue: I have a dual-boot PC, Ubuntu / Windows 10, that shares access to an NTFS disk partition (mounted as /DATA/ in Ubuntu). I need to avoid the "Permission denied" error when a chmod command is executed on a file in such a shared partition, regardless of the user calling this command. This is because chmod is called as part of bigger procedures and the users cannot just avoid them, and when they return an error the whole procedure stops. What I tried: /DATA/ is now being mounted with the permissions option (the mapping file is active) and under a non-root user that has the ID of 1001, and all users are part of the group with ID of 1003, to which rwx is allowed, i.e.: UUID=... /DATA ntfs auto,users,rw,permissions,umask=007,uid=1001,gid=1003 0 0 This solution ALMOST works. Everyone can r+w and, when the user 1001 calls chmod, we don't get an error. It does not make any change indeed, but that is not a problem. The problem is that for other users the chmod command still triggers errors, as they are not considered the owners of the files. Is there a way to give ownership of the partition mounted on /DATA/ to all users? Or to the user who logs in first, at least? Or at least make the chmod command never return an error?
Does the program that calls chmod hard-code the path to /bin/chmod? If not, if it just runs whichever chmod program is first in the PATH, try creating a directory that contains only a symlink called 'chmod' to /bin/true. e.g. (as root): # mkdir /usr/local/dummy # ln -s /bin/true /usr/local/dummy/chmod Then set the PATH to have this directory first (PATH="/usr/local/dummy:$PATH") before running the program. You can create a wrapper script to set the PATH and then run the program. You might want to make a symlink for chown too. BTW, this is stating the obvious, but you don't want this PATH setting to be the default. You only want it when running the program that triggers the problem.
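The trick can be rehearsed safely in a scratch directory first - this sketch shows that with the dummy directory at the front of PATH, chmod succeeds but changes nothing:

```shell
dummy=$(mktemp -d)
ln -s /bin/true "$dummy/chmod"            # 'chmod' now just succeeds silently

f=$(mktemp)
/bin/chmod 600 "$f"                       # the real chmod sets the mode
( PATH="$dummy:$PATH"; chmod 777 "$f" )   # dummy chmod: exits 0, changes nothing
stat -c %a "$f"                           # still 600
rm -r "$dummy" "$f"
```

The subshell keeps the PATH override from leaking into the rest of the session; in the real setup, the wrapper script would export the PATH before launching the problem program.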
Allow all users to use chmod on a NTFS file system
1,400,707,244,000
Problem: I have a git repository mounted via sshfs and cannot commit changes with the following error message: fatal: cannot update the ref 'HEAD': unable to append to '.git/logs/HEAD': Permission denied Note that I can cp -a .git/logs/HEAD .git/logs/HEAD.bu printf foo > .git/logs/HEAD mv .git/logs/HEAD.bu .git/logs/HEAD without a problem, but printf foo >> .git/logs/HEAD gives me the 'Permission denied' as well. Question: What do I need to change about my configuration to be able to commit from my local machine to the remote repository? What I tried: Given the above symptoms, I assume the issue lies with appending to a file. I found Git repository on SSHFS: unable to append to '.git/logs/HEAD': Invalid argument which refers to https://github.com/libfuse/sshfs/issues/82 suggesting the issue (note the slightly different error message) could be solved by mounting the remote file system with writeback_cache=no. The latter source quotes the man page referencing the following caveat/workaround: CAVEATS / WORKAROUNDS [...] O_APPEND When writeback caching is enabled, SSHFS cannot reliably support the O_APPEND open flag and thus signals an error on open. To enable support for unreliable O_APPEND (which may overwrite data if the file changes on the server at a bad time), mount the file system with -o unreliable_append. However, this section is not in my man page: sshfs -V SSHFS version 3.7.0 FUSE library version 3.9.1 using FUSE kernel interface version 7.31 fusermount3 version: 3.9.1 I found that the writeback-cache feature that I tried to disable was actually removed (after being disabled and re-enabled more than once before). So I guess I should be good, but clearly there (still) is a problem. A further complication that I should probably mention is that my user name and ID on the remote system do not match the local ones, so I need to use the idmap feature.
Here is the corresponding fstab entry: <remote-user>@<remote-machine>: /mnt/ssh/<remote-machine> sshfs _netdev,user,idmap=user,allow_other 0 0 Also, my /etc/fuse.conf contains user_allow_other Background: To avoid answer just telling me not to do this: I know how git works. I know I can clone the repository locally, commit there, and push to the remote one over ssh. Why I don't do it? - Because I track code that can only be tested on the remote machine and I do want to test it before committing. So to some extent this is 'just' a convenience issue to avoid having to: Edit code on the local copy. Commit the changes to the local copy. Push to the remote copy. SSH to the remote machine (or switch terminals). Test the code on the remote machine. Checkout another branch (to allow force-pushing). End the SSH sessions (or switch [back] terminals). Edit the code. Amend the previous commit on the local copy. Force-push to the remote copy. SSH to the remote machine (or switch terminals). Checkout the force-pushed branch. Repeat steps 5 - 11 (seven steps!) until I'm happy. Instead, I want to: SSH to the remote machine (or switch terminals). Edit code on the remote copy from the remote machine. Test the code on the remote machine. Edit code on the remote copy from the remote machine. Repeat steps 3 - 4 (two steps!) until I'm happy. End the SSH sessions (or switch [back] terminals). Commit the changes to the remote copy from the local machine. Why don't I simply commit from the remote machine? - Because I want to sign my commits but can't entrust the remote machine with the private key. So the best alternative I could come up with is: SSH to the remote machine (or switch terminals). Edit code on the remote copy from the remote machine. Test the code on the remote machine. Edit code on the remote copy from the remote machine. Repeat steps 3 - 4 (two steps!) until I'm happy. Commit the changes to the remote copy from the remote machine. 
Checkout another branch (to allow force-pushing). End the SSH sessions (or switch [back] terminals). Pull from the remote copy. Amend (sign) the previous commit on the local copy. Force-push to the remote copy. SSH to the remote machine (or switch terminals). Checkout the force-pushed branch. So on the one hand, I'd like to get rid of these extra steps (things get more complicated when adding feature branches as those need to be properly checked out on both copies and configured for proper tracking), on the other I want to understand why it doesn't 'just work'(tm). Update: Following up on a comment by @tukan, I reproduced the error with debug output: Mount remote with debug output: mount -o sshfs_debug MOUNTPOINT SSHFS version 3.7.0 executing <ssh> <-x> <-a> <-oClearAllForwardings=yes> <-2> <USER@SERVER> <-s> <sftp> USER@SERVER's password: Server version: 3 Extension: versions <2,3,4,5,6> Extension: [email protected] <1> Extension: [email protected] <1> Extension: [email protected] <2> Extension: [email protected] <2> Extension: [email protected] <1> remote_uid = 0 In a different terminal, access the mounted share: cd MOUNTPOINT/DIR_WITH_WRITE_PERMISSIONS [00002] LSTAT [00002] ATTRS 45bytes (188ms) Verify regular writing works: echo foo > foobar [00003] LSTAT [00003] STATUS 38bytes (46ms) [00004] LSTAT [00004] STATUS 38bytes (32ms) [00005] LSTAT [00005] ATTRS 45bytes (242ms) [00006] OPENDIR [00006] HANDLE 29bytes (31ms) [00007] READDIR [00008] READDIR [00007] NAME 668bytes (58ms) [00009] READDIR [00010] READDIR [00008] NAME 483bytes (65ms) [00011] READDIR [00012] READDIR [00009] STATUS 37bytes (27ms) [00010] STATUS 37bytes (27ms) [00013] CLOSE [00014] LSTAT [00011] STATUS 37bytes (27ms) [00012] STATUS 37bytes (27ms) [00013] STATUS 28bytes (26ms) [00014] STATUS 38bytes (31ms) [00015] OPEN [00016] LSTAT [00015] HANDLE 29bytes (153ms) [00016] ATTRS 45bytes (158ms) [00017] FSTAT [00017] ATTRS 45bytes (29ms) [00018] WRITE [00018] STATUS 28bytes (28ms) [00019] CLOSE 
[00019] STATUS 28bytes (28ms) Trigger error by attempting to append: echo bar >> foobar [00020] LSTAT [00020] STATUS 38bytes (74ms) [00021] LSTAT [00021] STATUS 38bytes (57ms) [00022] LSTAT [00022] ATTRS 45bytes (52ms) [00023] OPENDIR [00023] HANDLE 29bytes (53ms) [00024] READDIR [00025] READDIR [00024] NAME 668bytes (68ms) [00026] READDIR [00027] READDIR [00025] NAME 597bytes (77ms) [00028] READDIR [00029] READDIR [00026] STATUS 37bytes (47ms) [00030] CLOSE [00027] STATUS 37bytes (47ms) [00031] OPEN [00032] LSTAT [00028] STATUS 37bytes (47ms) [00029] STATUS 37bytes (47ms) [00030] STATUS 28bytes (26ms) [00031] STATUS 43bytes (28ms) [00032] ATTRS 45bytes (29ms) zsh: permission denied: foobar Hope this helps to find the root cause of my problem. Note: Based on the answer by @Devidas (and the lack of a solution even after a desparate attention-seeking bounty week), I cross-posted this to the corresponding GitHub issue.
Such a big and detailed question - let's solve this step by step. The error is "Permission denied", Linux error code EACCES 13 /* Permission denied */. When I searched for EACCES in the sshfs repo I found only two instances, both in the file sshfs.c: one is about file permissions in the local context - the one you demonstrated; the other is SSH_FX_PERMISSION_DENIED, a permission-denied error coming from ssh itself. From the data I have I can say with near certainty that, as you have permission on the local machine, the reason printf foo >> .git/logs/HEAD gives permission denied while printf foo > .git/logs/HEAD does not is that either you don't have permission on the remote machine, or the remote server doesn't support O_APPEND (refer to issue 117). You can verify it using strace. That is the "why" part. How to solve it depends on your reply: which case is it? Do let me know so that I can help you further, and feel free to comment if you disagree.
git over sshfs (with idmap): unable to append to '.git/logs/HEAD': Permission denied
1,400,707,244,000
I'm using a CentOS 8 Linux with multiple users, all belonging to the same group, accessing a number of folders/subfolders and files in the same FS (xfs). I want all files and folders to have write permission for the group. Setting umask to 0002 allows new files created by a user to have the right permission, but I have tar and other compressed files being extracted by users, and the extracted files maintain the permission they had in origin and are not changed, resulting in some files having permission only for the owner and not the group. I'm trying to find a way to set the permission automatically, without the need for users to run a chmod to allow write for the group. I tried assigning g+s on the main folder, but I can only get new folders to inherit the group permission, not the single files. I tried enabling ACLs, but again I don't get files to inherit the parent folder's permission. This is how my main folder looks:
drwxrwsr-x+ 4 owner group 4.0K Mar 6 10:26 test
And the content after extracting a tgz file in it:
drwxrwsr-x+ 8 owner group 202 Mar 6 09:56 folder1
drwxrwsr-x+ 8 owner group 202 Mar 6 10:12 folder2
But then when I reach the first folder with files, the file permissions are just for the owner:
ll test/folder1
-rwx------. 1 owner group 195K Jun 6 2018 file1
-rwx------. 1 owner group 225K Aug 4 2018 file2
-rwx------. 1 owner group 211K Aug 20 2018 file3
-rwx------. 1 owner group 100K Sep 9 2018 file4
-rwx------. 1 owner group 200K Oct 24 2018 file5
-rwx------. 1 owner group 199K Nov 9 2018 file6
Even after executing setfacl -R -m d:o:rwx test, the files are not changing their permission. Is there a way to force all files created in, or extracted from a compressed archive into, a folder to inherit the permission from the main folder?
I'm still looking for a better solution, but for now I created a script that pipes tar's output to a chmod command:
#!/bin/bash -
set -o pipefail
tar xvf "$@" | xargs -rd '\n' chmod 770 --
I don't like it much, because instructing 100+ users to use a different command will be tricky, but if I don't find a better solution I'll keep this. Just for info: from the man page, tar seems to have an option, --no-same-permissions, which should ignore the archived file permissions, but it seems to work only if the umask permissions are more restrictive than the ones on the extracted files; I wonder if that's a bug.
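An alternative that avoids parsing tar's output is to normalize permissions after extraction with find - a self-contained sketch (g+rwX rather than a blanket 770, so files only become group-executable if they were already owner-executable):

```shell
# build a throwaway archive whose file is 700 (owner-only), then fix up after extract
work=$(mktemp -d); cd "$work"
mkdir src; echo hi > src/f; chmod 700 src/f
tar cf a.tar src

mkdir out
tar xf a.tar -C out                 # extracted file keeps its owner-only mode
find out -exec chmod g+rwX {} +     # g+rwX: group rw everywhere, x only where already executable
stat -c '%a %n' out/src/f           # 770 now
```

In the real setup the find would run over the directory the users extracted into; it could also be wrapped in the same kind of script as above.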
How to have files extracted from archive inherit permission from parent folder
1,400,707,244,000
So I am the user david and according to ls -la the file permissions are 700 and the owner is david. I can not understand why I am not allowed to write to the file. The stat command returns something interesting that there are 2 Access: one for 700 and one for 500. The 500 would explain why I can't write to the file but why doesn't that show up when I do ls -la? Also I am not able to sudo anything since I do not know the password for david david@traverxec:~/bin$ ls -la total 16 drwx------ 2 david david 4096 Mar 1 17:43 . drwx--x--x 5 david david 4096 Oct 25 17:02 .. -r-------- 1 david david 802 Oct 25 16:26 server-stats.head -rwx------ 1 david david 363 Oct 25 16:26 server-stats.sh -rw-r--r-- 1 david david 0 Mar 1 17:43 test david@traverxec:~/bin$ stat server-stats.sh File: server-stats.sh Size: 363 Blocks: 8 IO Block: 4096 regular file Device: 801h/2049d Inode: 10901 Links: 1 Access: (0700/-rwx------) Uid: ( 1000/ david) Gid: ( 1000/ david) Access: 2020-03-01 17:27:22.389179535 -0500 Modify: 2019-10-25 16:26:29.049613415 -0400 Change: 2019-10-27 16:24:21.437108121 -0400 Birth: - david@traverxec:~/bin$ echo "test" >> server-stats.sh -bash: server-stats.sh: Operation not permitted david@traverxec:~/bin$ id uid=1000(david) gid=1000(david) groups=1000(david),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),109(netdev) Edit: I am able to make files within the directory. I'm not too sure but it doesn't seem like it is mounted from somewhere else... hmmm david@traverxec:~/bin$ touch test david@traverxec:~/bin$ ls -la total 16 drwx------ 2 david david 4096 Mar 1 17:43 . drwx--x--x 5 david david 4096 Oct 25 17:02 .. -r-------- 1 david david 802 Oct 25 16:26 server-stats.head -rwx------ 1 david david 363 Oct 25 16:26 server-stats.sh -rw-r--r-- 1 david david 0 Mar 1 17:43 test david@traverxec:~/bin$ df -h . Filesystem Size Used Avail Use% Mounted on /dev/sda1 3.9G 1.5G 2.3G 40% / david@traverxec:~/bin$ findmnt -T . 
TARGET SOURCE FSTYPE OPTIONS / /dev/sda1 ext4 rw,relatime,errors=remount-ro david@traverxec:~/bin$
Answer: The file is marked as immutable which means that not even root can modify this file david@traverxec:~/bin$ lsattr server-stats.sh ----i---------e---- server-stats.sh Fix: This fix won't work for me since I do not have root/sudo access but here it is for anyone else sudo chattr -i server-stats.sh
Linux File Permissions lying to me
1,400,707,244,000
I am trying to avoid installing common Node packages redundantly for each user. I would like to install certain common Node packages globally. However, on Arch Linux, I encounter permissions issues. npm install [package] -g fails with message: Missing write access to /usr/lib/node_modules This succeeds: sudo npm install [package] -g However, then we get errors like this when a regular user tries to use the package: Error: EACCES: permission denied, open '/usr/lib/node_modules/[package]/lib/[file].js' What is the right way to do this, assuming we are required to install some packages globally. EDIT: see the reason for the requirements here.
In general all packages should be installed locally. This ensures you can have multiple applications running on different versions (like needed) of the same package. A global package-update might unleash hell in terms of broken dependencies and compatibility. Do a global install when a package provides an executable command you want to run from the shell. BUT if there is an already globally installed package you want to use in development: use npm link <global-package>. This will create a local link to that package (working only with npm >= 1.0 and with an OS supporting symlinks). For further information read: npm-1-0-global-vs-local-installation npm-1-0-link
How to install node package globally - the right way?
1,400,707,244,000
I've been working on writing my own Linux container from scratch in C. I've borrowed code from several places and put up a basic version with namespaces & cgroups. Basically, I clone a new process with all the CLONE_NEW* flags to create new namespaces for the cloned process. I also set up UID mapping by inserting 0 0 1000 into the uid_map and gid_map files. I want to ensure that the root inside the container is mapped to the root outside. For the filesystem, I am using a base image of stretch created with debootstrap. Now, I am trying to set up the network connectivity from inside the container. I used this script to set up the interface inside the container. This script creates a new network namespace of its own. I edited it slightly to mount the net-namespace of the created process onto the newly created net-namespace via the script. mount --bind /proc/$PID/ns/net /var/run/netns/demo I can just get into the new network namespace as follows: ip netns exec ${NS} /bin/bash --rcfile <(echo "PS1=\"${NS}> \"") and successfully ping outside. But from the bash shell when I get inside the cloned process, by default I am unable to ping. I get the error: ping: socket: Operation not permitted I've tried setting up the capabilities cap_net_raw and cap_net_admin. I would like some guidance.
I would prefer to work from a more complete specification. However, from careful reading of the script and your description, I conclude you are entering a network namespace (using the script) first, and entering a user namespace afterwards. The netns is owned by the initial userns, not your child userns. To run ping, you need cap_net_raw in the userns that owns the netns. I think. There is a similar answer here, which provides links to reference documentation: Linux Capabilities with User Namespaces (I think ping can also work without privilege if you have access to ICMP sockets. But at least on my Fedora 29, this does not seem to be used. Unprivileged cp "$(which ping)" && ping localhost shows the same socket: Operation not permitted. Not sure why it has not been adopted).
Ping not working in a new C container
1,400,707,244,000
I am using the following steps to set permissions and default acl permissions. chown as required find . -type d -exec chmod 770 {} \; find . -type f -exec chmod 660 {} \; find . -type d -exec chmod g+s {} \; setfacl -Rdm g::rw . setfacl -Rdm u::rw . setfacl -Rdm o::- . This produces the desired results except that newly created sub-directories cannot be entered (by owner or group) because they have 660 permissions instead of 770. How can I correct this one issue without changing the default permissions for new files? Currently we have to do this after creating a new sub-directory: chmod ug+x <sub-directory> I want to eliminate that manual step because my users often don't know how to do it. I want them to be able to create a directory in the file manager and have immediate access to it. UPDATE: The umask for all users is set as follows: cat /etc/profile # /etc/profile umask 006
You're setting default ACLs that don't have the execute permission at all, so nobody will have execute permission. Instead, as Patrick mentioned in a comment, set default ACLs with the X permission. Lowercase x means that the ACL entry grants the execute permission. Uppercase X grants the execute permission only if some user (usually the owner) has the execute permission. Unless you're doing something unusual, always put X in a default ACL wherever there's an r. There are few reasons to have a file or directory that's readable but not executable or vice versa. Unix implements execution and reading as separate permissions, and for directories it implements traversal and listing as separate permissions, but there are very few cases where this is desirable: usually a certain set of users is allowed to modify a file, a larger set is allowed to access it without modifying it, and for regular files the file may or may not be executable but that property doesn't depend on who is trying to access it. setfacl -Rdm g::rwX . setfacl -Rdm u::rwX . setfacl -Rdm o::- .
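The uppercase-X rule is easy to see with plain chmod, which uses the same convention as the ACL entries above (a quick sketch with made-up file names; run it as a regular user in a scratch directory):

```shell
umask 022
tmp=$(mktemp -d) && cd "$tmp"

touch datafile && chmod 600 datafile   # no execute bit anywhere
mkdir subdir  && chmod 700 subdir      # owner already has the search (x) bit

chmod a+X datafile subdir

stat -c '%a %n' datafile   # 600 datafile : X added nothing to the file
stat -c '%a %n' subdir     # 711 subdir   : X granted search to group/other
```

Because subdir had an execute bit to begin with (as directories normally do), X propagated it; the plain data file stayed non-executable.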
newly created directories should be executable by default
1,400,707,244,000
I've set permissions on the socket to 777 yet Nginx keeps stating that it's being denied permission to access it, and yes, I've restarted the server. Nginx is being started as root (not the best way but it's just the way it is and I'm not the one who set it this way) and the socket in question is owned by a user for the app. If it's important, the socket is for a Rails app running on a Puma web server. The distro I'm using is Red Hat. I've tried following what I found here but when I try to run grep nginx /var/log/audit/audit.log | audit2allow -m nginx I get this error: compilation failed: mynginx.te:6:ERROR 'syntax error' at token '' on line 6: /usr/bin/checkmodule: error(s) encountered while parsing configuration /usr/bin/checkmodule: loading policy configuration from mynginx.te Thinking it might be the command I'm running I tried: sudo cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx but I still get the same error. After opening the audit log in less and searching for anything related to nginx or even denied, I get nothing; there is nothing in the audit.log related to nginx. I'm trying to do the same thing for another system with a different app (same OS) and again I'm running into the same issue. However, according to the audit log (which finally shows something) I get this: type=USER_CMD msg=audit(1508924031.284:1165419): user pid=30802 uid=502 auid=502 ses=5121 msg='cwd="/home/user/selinux-nginx-rhel/nginx" cmd=73656D6F64756C65202D69206E67696E782E7070 terminal=pts/0 res=success' If it's showing res=success, why is nginx still being denied? Also, when I try audit2allow for this project I get a blank policy.te file like the one in this user question, though of course the underlying cause is probably not the same as I'm running on RHEL. Also, I'm not sure if SELinux is running, but doing getenforce returns: Disabled. I think ultimately it's a user permissions issue, as moving the socket location to a place anyone can access solves the issue.
I was facing the same issue. You have to disable SELinux. For detailed steps please follow the link: http://blog.odoobiz.com/2017/11/rhel-wsgi-nginx-error-permission-denied.html
nginx errors with failed (13: permission denied) for socket despite socket permissions being set to 777
1,400,707,244,000
I came across a piece of code where chmod permissions are getting mapped to an integer. 33204 // -rw-rw-r-- 36863 // -rwsrwsrwt 36855 // -rwsrwSrwt 36799 // -rwSrwsrwt 36351 // -rwsrwsrwx 36343 // -rwsrwSrwx How are these permissions mapped to an integer number? I am trying to find what permissions the numbers 33261 and 41453 map to. I looked at various links, but I could not find one that converts these numbers back into permissions. Can someone help me with the conversion? Thanks in advance!
They only make "sense" in octal. Here's the first line using one of my programs to convert: $ hex 33204 33204: 33204 0100664 0x81b4 text "\201\264" utf8 \350\206\264 The 0100664 means it's a regular file with read/write (user), read/write (group) and read-only (other). The chmod manual pages should mention this, but the first bit (the S_IFREG value) is not mentioned — even by POSIX — as often as the other flags. Here's an example from a header: #define S_IFMT 00170000 #define S_IFSOCK 0140000 #define S_IFLNK 0120000 #define S_IFREG 0100000 #define S_IFBLK 0060000 #define S_IFDIR 0040000 #define S_IFCHR 0020000 #define S_IFIFO 0010000 #define S_ISUID 0004000 #define S_ISGID 0002000 #define S_ISVTX 0001000 Further reading: understanding and decoding the file mode value from stat function output
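For the two values the question actually asks about, printf does the conversion in one step (33261 and 41453 are taken from the question; the scratch file is made up for the round-trip demo):

```shell
# Decimal st_mode -> octal: file type in the top digits, permission bits below.
printf '%o\n' 33204   # 100664: regular file (100), -rw-rw-r--
printf '%o\n' 33261   # 100755: regular file, -rwxr-xr-x
printf '%o\n' 41453   # 120755: symbolic link (120), rwxr-xr-x

# And back again: stat prints the raw mode in hex with %f.
demo=$(mktemp) && chmod 664 "$demo"
stat -c '%f %A' "$demo"   # 81b4 -rw-rw-r--  (0x81b4 == 33204)
rm -f "$demo"
```

So 33261 is a regular file with mode 755, and 41453 is a symbolic link (symlinks conventionally show mode 777 or 755 and ignore it).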
How to map chmod access permissions to an integer
1,400,707,244,000
I currently have chroot users whose home directories contain both an 'upload' directory and a 'download' directory. Originally the permissions on the upload directory were chown user:sftpadmin upload chmod 370 upload and the permissions on the download directory were chown user:sftpadmin download chmod 570 download The purpose of the sftpadmin group is that service accounts that are members of it can place/retrieve files for the user in the respective directories. Now we have a request to allow the users the ability to delete files in the download directory after they are finished with them. However the only option I can come up with to accomplish this is setting the permissions on the download dir to chmod 770 download However this would grant the chroot'ed users the ability to write any file to this directory, which I would like to avoid. Is there any combination of permissions I can set that would allow them the ability to read, download, and delete the files in the directory, without allowing them to write files to the download directory? It would look something like: Allow user to remove (delete) a file Will not allow user to change the file. Will not allow user to add a file to the directory.
Well, it depends. It's not possible with standard POSIX permissions, as deleting a file needs the same permission as adding one: write permission on the containing directory. If however your file system supports NFSv4 access control lists (e.g. ZFS), it is possible, as there are distinct control entries "write-data" (-> create files) and "delete-child". You just have to set the "allow delete-child" entry on the directory for the particular user, but not the "allow write-data" entry (or instead: set "deny write-data"). See https://linux.die.net/man/5/nfs4_acl for a detailed description
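A quick way to convince yourself of the first point: deletion is a write on the directory, not on the file (a sketch with made-up names; run it as a regular user, since root bypasses these permission checks entirely):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir download
touch download/report.txt

chmod 555 download                       # r-x: can list and read files, no write bit
rm download/report.txt 2>/dev/null || echo "delete denied"

chmod 755 download                       # add write on the directory itself
rm -f download/report.txt && echo "delete allowed"
```

Note that the file's own mode never changed between the two attempts; only the directory's write bit did.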
Granting user ability to delete a file without giving them write permissions to the directory
1,454,573,738,000
I have mounted a shared network drive. See the output of the df -h command below. Filesystem Size Used Avail Use% Mounted on /dev/sda1 550G 362G 161G 70% / tmpfs 1.9G 80K 1.9G 1% /dev/shm //10.143.19.121/myfolder/ 1.1T 678G 438G 61% /mnt/extstorage //10.143.19.121/myfolder/ is the folder mounted from the network. I have mounted it by adding the line below to /etc/fstab: //10.143.19.121/myfolder/ /mnt/extstorage cifs uid=root,rw,umask=0000,directio,username=MyUser,password=MyPassword123! 0 0 I can access the folder. [root@caresurvey /]# cd /mnt/extstorage/ [root@caresurvey extstorage]# But I cannot do operations inside it. The error is shown below; it says permission denied. [root@caresurvey extstorage]# mkdir TestDir mkdir: cannot create directory `TestDir': Permission denied The permissions on //10.143.19.121/myfolder/ are properly set to read and write for all users within the network. I'm not sure about the file format of the shared folder, but as far as I've read it doesn't matter as long as it is shared. My server is running CentOS release 6.4 (Final). Am I missing something? I have also edited /etc/ssh/sshd_config to un-comment/enable PermitRootLogin yes because I read a thread where a user was able to fix his issue by doing that. Any help will be greatly appreciated. Below is the output of the mount command. 
/dev/sda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) /tmp on /tmp type none (rw,bind) /var/tmp on /var/tmp type none (rw,bind) /home on /home type none (rw,bind) none on /sys/kernel/config type configfs (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev) //10.143.19.121/myfolder/ on /mnt/extstorage type cifs (rw) nfsd on /proc/fs/nfsd type nfsd (rw) EDIT (09-22-2015@02:42PM - GMT+8): Here is the output of ls -ld /mnt/extstorage. drwxr-xr-x 1 root root 0 Jul 10 15:26 /mnt/extstorage Note that I am running as root.
Thanks for your replies. I have found out that the NTFS-level permission was not set correctly, which is why I could not write to the folder. After asking the network administrator to set the permission correctly, file operations were permitted. Also, I tried sharing a folder using NFS from CentOS (a separate server). I was able to replicate the issue and prove to my network administrator that there was a problem with his NTFS-level permission. I created a shared folder using NFS, mounted it on my server, then left the permission at the default (not 0777). I was able to mount the folder, but could not do file operations. I changed it to 0777 (chmod 0777), and then the file operations were allowed. Cheers!
Mount successful, Directory accessible, but cannot do Operations (cp, mkdir, etc.)
1,454,573,738,000
Is there any technical merit/necessity to numerous *nix commands (mkdir, mkfifo, mknod) having a -m (--mode) option? I ask this because as near as I can tell, umask (both the shell command and the syscall) provides everything you need to control a file's permissions: For example, I can do this: mkdir -m 700 "$my_dir" ..but I can just as easily do: old_umask=`umask` \ && umask 0077 \ && mkdir "$my_dir" umask "$old_umask" To be clear, I can see that the former is much more user-friendly and convenient (especially for command-line usage), but I don't really see a technical advantage of the former over the latter. Note also that I understand the merits of this flexibility at the underlying syscall level: if I want to call open or sem_open or what have you with at most 600 permissions, it makes sense that I would just pass S_IRUSR | S_IWUSR to the open syscall, never bother with the umask syscall, saving a syscall round-trip (or two, if I want to then reset the umask since the umask call modifies the current umask) and my code is simpler/cleaner. This does not apply in the command line example because the -m/--mode option of such a command will have to call umask to zero out the umask of that command's process anyway, to ensure the mode/permission bits that it's supposed to set on the new file/whatever are set. (E.g. if my shell's umask is 022, then mkdir -m 777 /tmp/foo can only work as expected if it's first calling umask internally to zero out the umask it inherited from the shell.) So what I want to make sure I didn't miss in my considering of the problem is, is there something you could not accomplish with just the umask command, without relying on the -m/--mode options of the mk* commands?
There are things you can't do with umask alone: create a regular file with permissions above 0666. create a directory with permissons above 0777. So you do need chmod or --mode as well. If, for security reasons, you never want to create an object with temporarily higher rights than intended, chmod without umask isn't enough either. In some corner cases you have to use even both resulting in the rather ugly sequence umask / mkdir / chmod / umask. (Example: create a group temp directory (01770).) So --mode can be replaced with chmod and umask, but not with only one of them.
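The answer's point is easy to check: the first two commands below produce identical results, but the setgid group directory in the last step is out of umask's reach, since umask can only clear bits out of 0777 (a sketch; directory names are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp"

mkdir -m 700 d1              # explicit mode; the shell's umask is ignored
(umask 0077 && mkdir d2)     # same result, via the umask in a subshell
stat -c '%a %n' d1 d2        # 700 d1, 700 d2

# setuid/setgid/sticky bits are above 0777, so they need -m or a chmod:
mkdir -m 2770 shared
stat -c '%a %n' shared       # 2770 shared
```

Running the umask in a subshell also sidesteps the save/restore dance from the question, since the parent shell's umask is never touched.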
umask vs -m, --mode command options
1,454,573,738,000
I have 2 storage drives, 1 is NTFS and the other is ext4. Both are mounted in /media as Storage_1 and Storage_2. I've run chown -R *user*:*user* /media/Storage_* I've tried putting umask=022 in /etc/bash.bashrc and ~/.bashrc My fstab looks like this: # Entry for /dev/sdc2 UUID=F88275C4827587C0 /media/Storage_1 ntfs-3g defaults,umask=022,uid=1000,gid=1000 0 0 # Entry for /dev/sdb1 UUID=b4ef7aaa-97e8-4bdb-bba4-382469b23749 /media/Storage_2 ext4 defaults 0 2 I've tried setting umask=022,uid=1000,gid=1000 on sdb1; it doesn't work. When I save files to my NTFS drive they adhere to the umask variable (when I download a picture it's given -rwxr-xr-x) but when I save files to my ext4 drive they don't (they're given -rw-r-----). How do I get my ext4 drive to automatically save files with the same permissions as my NTFS one, or will I have to format it to NTFS?
The purpose of the umask option of mount is to set the visible permissions of every file on the filesystem when the filesystem itself does not support Unix permissions (usually, permissions are stored in the filesystem, when it supports them). That is why the umask option of mount exists for NTFS (this filesystem does not support Unix permissions), while it has no reason to exist for ext4 (which does support Unix permissions).
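Since ext4 stores real permissions, the knob that matters there is the umask of the process creating each file, not a mount option; a quick demonstration (file names are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp"

(umask 022 && touch a)   # 666 & ~022 = 644
(umask 027 && touch b)   # 666 & ~027 = 640, matching the question's -rw-r-----
(umask 002 && touch c)   # 666 & ~002 = 664, the group-writable variant

stat -c '%a %n' a b c
```

So the -rw-r----- files in the question point at the umask of whatever program saved them (often set per-application or per-session), not at the fstab entry.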
files saves to ntfs drive adhere to umask, ext4 does not
1,454,573,738,000
After creating an NFS share between two servers, lb1 (nfs-client) and data-server1 (nfs-kernel-server), users with permission on the NFS server do not have access on the NFS client. data-server1 configuration: $ cat /etc/exports /data 10.132.246.167(rw,no_subtree_check) $ ls -la / | grep data drwx--x--x 3 u1 users 4.0K Sep 6 03:55 data/ $ ls -la /data drwxr-xr-x 3 u1 users 4.0K Sep 6 02:31 prod/ drwxrwsr-x 2 www-data www-data 4.0K Sep 6 02:31 keys/ $ awk -F: '$0=$1 " uid="$3 " gid="$4' /etc/passwd | grep 'root\|u1\|ftp\|www-data' root uid=0 gid=0 www-data uid=33 gid=33 u1 uid=115 gid=100 ftp uid=999 gid=100 lb1 configuration: $ mount 10.132.245.223:/data /data $ mount 10.132.245.223:/data on /data type nfs4 (rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.132.246.167,minorversion=0,local_lock=none,addr=10.132.245.223) $ sudo -u u1 ls -la / | grep data drwx--x--x 3 u1 users 4.0K Sep 6 03:55 data/ $ sudo -u u1 ls -la /data drwxr-xr-x 3 u1 users 4.0K Sep 6 02:31 prod/ drwxrwsr-x 2 www-data www-data 4.0K Sep 6 02:31 keys/ $ awk -F: '$0=$1 " uid="$3 " gid="$4' /etc/passwd | grep 'root\|u1\|ftp\|www-data' root uid=0 gid=0 www-data uid=33 gid=33 u1 uid=115 gid=100 ftp uid=999 gid=100 On the NFS server (i.e., data-server1), root, u1, and ftp users have proper rwx permissions for the subdirectories of /data and can access the filesystem without any problems. However, on the NFS client (i.e., lb1), root and ftp get permission denied errors when trying to simply list the directory contents of /data within the NFS share. User u1, on the other hand, works perfectly. This is one of my first uses of NFS.
Everything looks as expected. Since /data is rwx--x--x, only the owner, u1, can list it. Others can access files and subdirectories in it, subject to permissions on those files and subdirectories. In addition, userid 0 on NFS clients is mapped to userid 65534 (on some systems, -2) on servers unless you have no_root_squash in the export line (or, if running NFSv4, do explicit userid mapping on the server). Here are some details from the exports(5) man page: Very often, it is not desirable that the root user on a client machine is also treated as root when accessing files on the NFS server. To this end, uid 0 is normally mapped to a different id: the so-called anonymous or nobody uid. This mode of operation (called `root squashing') is the default, and can be turned off with no_root_squash. By default, exportfs chooses a uid and gid of 65534 for squashed access. These values can also be overridden by the anonuid and anongid options.
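If remote root really must keep its identity (with the security trade-off the answer describes), the export line from the question would need the extra option; a hypothetical variant:

```
/data 10.132.246.167(rw,no_subtree_check,no_root_squash)
```

followed by exportfs -ra on the server to reload the export table. For the ftp user, the real fix is the directory permission: with /data at rwx--x--x, only u1 (the owner) can list it; group users can still reach prod/ and keys/ by full path.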
How do I set up NFS to respect user and group permissions?
1,454,573,738,000
I've just upgraded from RHEL 5 to 6.5 and setup fail2ban anew. I can't seem to get my custom action to work now, supposedly because of a permission issue. I wan't to know what am I doing wrong, and how can I make fail2ban run the script successfully. I have a fail2ban action set up to run a generic shell script. I prefer things this way. actionstart = <script> <jail> start Where <script> is specified in the jail.local file: action = run-script[jail=drupal-reg-lim,script=/path/to/script.sh] But I get a 126 exit code when I try to run the script, which, according to tldp.org, means that there is a "Permission problem or command is not an executable". This happens regardless of the contents of script.sh - whether it is completely empty, or only has a shebang (#!/bin/sh). Trying to run sudo /path/to/script.sh works. I tried changing the actionstart so that it would log, and placed the script as /etc/fail2ban/action.d/script.sh: actionstart = <script> <jail> start || logger -dit fail2ban-run-script <script> <jail> start failed: exitcode = $? Actually outputs the following into /var/log/messages: Apr 23 05:49:25 xx1 fail2ban-run-script[8236]: /etc/fail2ban/action.d/script.sh drupal-reg-lim start failed: exitcode = 126 Also trying to run sh works. Only running a script fails. I also tried chmod 777 script.sh: -rwxrwxrwx. 1 root root 10 Apr 23 05:12 script.sh Another thought was that this might have to do with the SELinux security context (logging id output gave me uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:system_r:fail2ban_t:s0), but I haven't been successful at testing this (sudo -t fail2ban_t -r system_r /bin/sh fails with Permission denied even though fail2ban can run sh; I might be doing something wrong in my sudo). Something that might be useful to know is that fail2ban uses Python's os.system() to execute the command. 
But my attempt at recreating it this way has failed too: >>> import os >>> print os.system('/etc/fail2ban/action.d/script.sh || echo "No"') 0 All of this leaves me with nothing. I'd be glad to get any kind of help. (Side note: fail2ban didn't log the 126 exit code directly; it spat out an error with a code of 0x7e00, which is the raw wait status os.system returns: the exit code 126 (0x7e) sits in the high byte. Took me some time to figure out, so I thought I'd share.)
As always, it had nothing to do with anything except an error in the script I was trying to run! Specifically, the script I was calling was trying to run another script located in the same directory. To get the name of the common directory, rather than using BASE_DIR=$(dirname $(readlink -f "$0")) I accidentally wrote BASE_DIR=$(readlink -f "$0"), and needless to say, "${BASE_DIR}/another_script.sh arg1 arg2..." failed (it was trying to run the nonexistent nonsense path /path/to/script.sh/another_script.sh) with status code 126.
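A minimal reconstruction of the fix (all file names here are made up for the demo): the sibling script only resolves once dirname strips the script file from the readlink -f result, and an exec that fails on permissions is exactly what produces status 126:

```shell
tmp=$(mktemp -d)

cat > "$tmp/helper.sh" <<'EOF'
#!/bin/sh
echo "helper ran"
EOF

cat > "$tmp/main.sh" <<'EOF'
#!/bin/sh
# readlink -f "$0" alone is the script FILE; dirname turns it into the folder.
BASE_DIR=$(dirname "$(readlink -f "$0")")
"$BASE_DIR/helper.sh"
EOF

chmod +x "$tmp/main.sh" "$tmp/helper.sh"
"$tmp/main.sh"                               # prints: helper ran

chmod -x "$tmp/helper.sh"                    # simulate a permission problem
status=0; "$tmp/main.sh" 2>/dev/null || status=$?
echo "exit status: $status"                  # 126: found but not executable
```

The 126 in the second run is the POSIX shell's "found but not executable" status, which is why it also shows up for path mistakes like the one above.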
fail2ban permission denied on script
1,454,573,738,000
I'm trying to back up my Linux system (Arch Linux on a laptop) to a remote computer on the same network using ssh (or NFS). I use rsync extensively to back up my /home directory to the remote computer over ssh and NFS with no issues whatsoever. What I want to do is run the rsync command on the laptop within a script and copy all of the contents of / (excluding some directories of course, which I already have figured out) to the remote computer. The issue I'm having is about permissions. I have disabled root login on the ssh server (OpenSSH, running on an Ubuntu Server 12.04 computer), disabled password authentication, and enabled RSA key login so I can run the backup scripts automatically using cron on the laptop. Now, to be able to copy all of the system's files under /* (on the laptop) I need to run rsync as root. Doing that introduces a problem since I'm not able to access the ssh server as root. If I try to do it over NFS I get another permission problem, since the root user isn't allowed to access the NFS mount, as my regular user is. I'm writing here so I can get suggestions on how to solve this problem. I would prefer to do it over ssh since it usually works a lot faster for me, but I'm willing to use NFS or even Samba (haven't tried that one) if nothing else works.
OK, so the problem is that you're acting as root and root doesn't have keys. sudo su ssh-keygen then copy the keys over to your backup server. Finally, rsync [stick your options here] / user@backup-server:/path/to/backups
Permission issues when doing system backup using rsync [duplicate]
1,454,573,738,000
After setting up Ubuntu with default settings, it allows users to log in with a password, so I used passwd -l [username] to forbid password login both over SSH and on the console. But when I use sudo to execute commands, it needs the password, so I can't use sudo any more. Is there any way I can use sudo again? A related question: how can I forbid password login both over SSH and on the console but keep sudo password authentication available?
You could boot up in single user mode to get a root console. You could then mount your filesystem and fix the affected account. (This assumes GRUB hasn't been password-locked as well: hold down Shift during boot to bring up GRUB, and add 'single' to the boot statement after the word 'splash'.) As for best practice, I agree with strugee's comment. Just remembered: Ubuntu has that 'recovery mode' GRUB option, so you may not even have to edit your boot command...
Can't sudo after lock password, can I recover from that?
1,454,573,738,000
Catch 22: If I open my text editor without using sudo, I don't have permission to save any changes. If I open it with root privileges, any files I create are owned by root. I want to be able to create files that are owned by me (non-root user), and also edit files that require root access in the same session. Possible? Is this a Linux thing or a text editor setting? I'm using Sublime Text 2.
If you use sudoedit to edit your root-owned text files, then your editor will be running as you. Sudoedit works by making a temporary copy of the root-owned file(s), owned by you, and invoking your editor (chosen via $SUDO_EDITOR, $VISUAL, $EDITOR, or the sudoers config file) on it. When you quit the editor, it copies the temporary file(s) back if they're modified. Full details are in the man page.
I want to modify root files while also being able to create non-root files in a single text editor session.
1,454,573,738,000
Is there a way to terminate a root process without entering an administrator's password? You might be wondering why I'd want to do this 'cause it sounds fishy. Well I just need to end a process started by a daemon from an agent but daemons runs as root and agents are user specific.
Users are only allowed to kill or otherwise signal their own processes. You say that the process is started by a daemon running as root. Does that process need to run as root as well? If it doesn't, make it drop privileges and run as the desired user. If the process must run as root, you'll have to provide a way for the agent to elevate its privileges to kill the process. This can take the form of a setuid root helper executable, or an entry in the sudoers file (with the NOPASSWD tag). The sudo entry has the advantage that its use will be logged. Note that there is no atomic way to send a signal to a process. Sending a signal is asynchronous: it is possible that you obtain the PID of a process, then you send a signal to that process, but the process has died in the meantime and its process ID has already been reused by another process. There's a way to avoid this involving the parent of the process you want to kill, but it's complicated. The process ID will not disappear until its parent acknowledges the child's death (a zombie process remains until then). To use this effectively, you need the parent to know that it must hold on until no agent is going to want to kill the child. If you're not concerned with the race condition, you can give the agent the permission to run pkill name_of_process_to_kill, if you know that there will be a single process with that name. If you can identify the process by a file it has open, you can use fuser -k /path/to/file. If you can, modify the process so that it listens to a termination request on a pipe or socket. Set up the permissions or authentication on the pipe or socket according to your needs.
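A hypothetical sudoers entry (installed via visudo) for the pkill approach; agentuser and the process name myproc are placeholders for your own names:

```
agentuser ALL = (root) NOPASSWD: /usr/bin/pkill -x myproc
```

Keep in mind that sudoers matches the arguments literally, so the agent must run exactly sudo pkill -x myproc; a looser pattern (or no arguments at all) would let the agent kill arbitrary processes.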
Terminate Root Processes
1,513,188,343,000
I'm trying to set up a deployer user for CI with TeamCity. I followed the instructions from this question on ServerFault: What's the best way of handling permissions for apache2's user www-data in /var/www? The problem is that the TeamCity application is creating directories with 755 permissions and Apache (2.4) can't write in some of them. If I change the permissions to 775 manually, Apache can write to them. Here's what I did to set up the permissions: I created a user teamcity. Added the www-data group to the user as a secondary group Changed the ownership of /var/www to root:www-data Changed permissions for directories with: find /var/www -type d -exec chmod 2775 {} + and for files with: find /var/www -type f -exec chmod 0664 {} + Added umask 0002 to /etc/profile Tested: su teamcity umask >0002 touch testfile ls -l >-rw-rw-r-- 1 teamcity teamcity 0 May 25 10:38 testfile cd /var/www touch testfile ls -l >-rw-rw-r-- 1 teamcity www-data 0 Mai 25 10:42 testfile For directories it's the same. They are rw for user and group. After a deployment the permissions of directories and files are 755 and not 775 as expected. The TeamCity application is started as a service: start-stop-daemon --start -c teamcity --exec /opt/TeamCity/bin/runAll.sh start It seems that I missed some detail, but can't find it. System: Debian jessie Apache 2.4 TeamCity 9 Solution: I changed the startup script for the TeamCity service by adding umask 002 before the startup command.
From the description, it seems that TeamCity is ignoring the umask. Perhaps it sets the umask in its service script (which was not mentioned in the question). If so, you could modify the script. If not, since it's apparently a closed-source application and in Java, your ability to thwart that is limited. You could make a cron job (running once per minute) which fixes the directory permissions. Further reading: How can I set the umask from within java? teamcity-vagrant (possibly relevant script) Install a Teamcity v9.x CentOS Build Agent , example showing where to start looking for the service script in systemd.
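The effect of the one-line fix can be seen without TeamCity: the subshells below stand in for the service launched with and without umask 002 in its startup script (directory names are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp"

(umask 022 && mkdir deploy_default)   # what the unpatched service produced
(umask 002 && mkdir deploy_patched)   # with umask 002 before the launch command

stat -c '%a %n' deploy_default deploy_patched   # 755 vs 775
```

This also explains why setting umask in /etc/profile had no effect: the service is started by start-stop-daemon, not by a login shell, so it never reads profile files.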
umask not working as expected in other than home directory
1,513,188,343,000
I'm trying to configure Samba running on Gentoo Linux to share my external NTFS drive with two other machines, one running Gentoo as well and the other running Windows 7. Previously this drive was connected to a Samba-enabled router (Zyxel Keenetic Giga II) and I could connect to it using the login/pass pair specified in the web interface. I had both read and write access. Now I'm trying to configure Samba to allow anyone who specifies the valid login/pass pair to have full access. The login/pass are unique (I do not use that username anywhere else). I managed to connect both Linux and Windows machines, but only in read-only mode. I get Permission denied on all attempts to write, even though the permissions from ls show that I should be able to write.

The network structure is:

sambaserv: Samba server hostname
sambauser/sambapass: Samba login credentials
myuserserv: my user login
linuxclient: Linux client hostname
myuserclient: my user login
winclient: Win 7 client hostname

Here's what I have done:

sambaserv: ls -l /mnt
... drwxrwxr-x 1 myuserserv myuserserv 4096 2 June 01:08 storage

sambaserv: /etc/fstab
/dev/sdc1 /mnt/storage ntfs-3g defaults,uid=1000,gid=1000,umask=0002,noatime 0 0
Here 1000 is the ID of myuserserv. I'd like to use this drive for purposes other than Samba sharing, so I didn't specify sambauser instead.

sambaserv: created sambauser by issuing these commands:
useradd sambauser
passwd sambauser
pdbedit -a -u sambauser

sambaserv: testparm
$ sudo testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[storage]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions

[global]
server string = sambaserv
log file = /var/log/samba/log.%m
max log size = 50
dns proxy = No
idmap config * : backend = tdb
hosts allow = 192.168.1., 127.

[storage]
comment = Storage
path = /mnt/storage
valid users = sambauser
read only = No
create mask = 0775
directory mask = 0775

I have no idea how file permissions are handled considering that the drive is NTFS, but these would be okay if it wasn't.

linuxclient: ls -l /mnt
... drwxrwxr-x 1 myuserclient myuserclient 0 2 juin 01:08 storage

linuxclient: /etc/fstab
//sambaserv/storage /mnt/storage cifs credentials=/home/myuserclient/.smbcredentials,iocharset=utf8,sec=ntlm 0 0

winclient: typed the sambauser/sambapass pair in "Connect network drive" under My Computer.

How do I get write access under Linux and Windows?
I have been pointed to a solution (not the solution). If I add sambauser to the group of myuserserv on sambaserv, the problem goes away. However, this is not a good solution, because it requires messing with user groups, which I might not be able to do in a different environment.
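An alternative that avoids touching group membership (untested here, and the user/group names are taken from the question) is Samba's force user / force group share options, which make the server perform all filesystem access on the share as a fixed identity:

```ini
[storage]
    comment = Storage
    path = /mnt/storage
    valid users = sambauser
    read only = No
    ; map every connection's filesystem access to the drive's owner
    force user = myuserserv
    force group = myuserserv
```

Since the drive is mounted with uid=1000,gid=1000 (myuserserv), forced access as that identity should get the write bit without changing any group memberships.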
Samba shares are read-only from both Windows and Linux
1,513,188,343,000
I'm running the latest version of Crunchbang 64-bit on a Dell E5510 notebook. Installing my printer and scanner worked fine - I just used an existing Ubuntu tutorial. However, scanning without starting Simple-Scan as sudo produces an error message. My main account is listed as part of the group scanner, according to the output of less /etc/group. To my limited knowledge, this should suffice, shouldn't it? What more steps are necessary in order to run simple-scan without sudo? Thanks in advance.
I found a similar question asked on SU. Also this Ubuntu how-to could be useful. And there's a bug regarding permissions to access a scanner. Summarizing these sources:

Add saned to the group which owns your scanner device: sudo adduser saned scanner. Note that the device may be owned by a different group, so use sane-find-scanner to determine the device and ls -al /dev/bus/usb/XXX/XXX to identify the group. Then add saned to this group.

There's a not very secure way to run saned as root. Add this line to /etc/rc.local (before exit 0):
chmod -R a+w /dev/bus/usb

Edit/create the following file, /etc/xinetd.d/saned:
service saned
{
    socket_type = stream
    server = /usr/sbin/saned
    protocol = tcp
    user = root
    group = root
    wait = no
    disable = no
}

Edit /etc/default/saned: RUN_AS_USER=root and RUN=yes. Reboot.
Crunchbang: Can't use simple-scan without administrator rights
1,513,188,343,000
On one of my servers, I have ProFTPD installed in Debian 7. I need special permission only for 2 specific folders. These are: append - user must be able to append data to an existing file rename - user must not be able to rename file if same exists How can I do this?
As far as I understand there are no special operations like append and rename that you can configure. An append is effectively opening an existing file and writing new content to it (even if you only add to the file), so you will need write privileges on that file. A rename is effectively moving a file from an old location to a new location, so you will need write privileges in that directory.
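A quick illustration of the first point, using ordinary shell commands rather than ProFTPD itself: whether an append succeeds is decided purely by the write bit on the file.

```shell
f=$(mktemp)
echo "first line" > "$f"
chmod u+w "$f"            # write bit present: appending is allowed
echo "second line" >> "$f"
wc -l < "$f"              # the file now has both lines
```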
How to set special folder permission on proFTPD?
1,513,188,343,000
I ran the command fdisk -l to find out what my external drive is formatted to, and found out it uses GPT partitions and the filesystem is HFS+. When I try to create a new folder on the external drive I receive the following message:

chmod: changing permissions of 'file_name/': Read-only file system

If I run mount this is the output:

/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev)
/dev/sdc2 on /media/dev/andre backup type hfsplus (ro,nosuid,nodev,uhelper=udisks2)
/dev/sde2 on /media/dev/andre_clients type hfsplus (ro,nosuid,nodev,uhelper=udisks2)

So now I ran umount /dev/sde2, unplugged the device, then reconnected it and ran dmesg | tail, which returned this:

[429154.613747] sd 14:0:0:0: [sde] Assuming drive cache: write through
[429154.615995] sd 14:0:0:0: [sde] Test WP failed, assume Write Enabled
[429154.616993] sd 14:0:0:0: [sde] Asking for cache data failed
[429154.616997] sd 14:0:0:0: [sde] Assuming drive cache: write through
[429154.669277] sde: sde1 sde2
[429154.671369] sd 14:0:0:0: [sde] Test WP failed, assume Write Enabled
[429154.672742] sd 14:0:0:0: [sde] Asking for cache data failed
[429154.672747] sd 14:0:0:0: [sde] Assuming drive cache: write through
[429154.672751] sd 14:0:0:0: [sde] Attached SCSI disk
[429157.047244] hfsplus: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only.

Would it now be safe to run sudo mount -o remount,rw /dev/sde2 /media/dev/andre_clients without losing any information?
Note: it seems you need to mount an HFS+ volume read/write, which is a bit problematic because of its journal. However, you can mount it read/write as seen here and here. The problem is that /dev/sde2 is mounted read-only, according to the ro flag in the parentheses in the last line:

/dev/sde2 on /media/dev/andre_clients type hfsplus (ro,nosuid,nodev,uhelper=udisks2)

Therefore you can't change anything on this disk. Remount it as read/write (rw):

sudo mount -o remount,rw /partition/identifier /mount/point

In your case:

sudo mount -o remount,rw /dev/sde2 /media/dev/andre_clients

Before you do that, though, make sure you remount the right partition identifier by using dmesg | tail, e.g.:

[25341.272519] scsi 2:0:0:0: Direct-Access [...]
[25341.273201] sd 2:0:0:0: Attached scsi generic sg1 type 0
[25341.284054] sd 2:0:0:0: [sde] Attached SCSI removable disk
[...]
[25343.681773] sde: sde2

The most recent sdX: sdXX line gives you a hint about which partition identifier (the sdXX one) your device is identified with. You can also check which dev your device is connected to with:

ll /dev/disk/by-id/

This will give you all symbolic links for the device and its partitions:

lrwxrwxrwx 1 root root 9 Jul 22 16:02 usb-manufacturername_*serialnumber* -> ../../sdb
lrwxrwxrwx 1 root root 10 Jul 22 16:02 usb-manufacturername_*serialnumber*-part1 -> ../../sdb1
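If you do decide to take the risk the kernel message mentions, the hfsplus driver accepts a force mount option (a sketch only; back it up first, since writing to a journaled HFS+ volume this way is explicitly unsupported):

```
sudo mount -o remount,force,rw /dev/sde2 /media/dev/andre_clients
```

The safer route is to disable journaling on the volume from a Mac (Disk Utility) and then mount it read/write normally.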
Changing file permissions on an HFS+ filesystem
1,513,188,343,000
I was wondering how to restrict access to a specific drive in Unix on the Mac. I was thinking of doing this in Terminal, where I create a file with something like mkfile 6k secure_access. The secure_access file would be on the external drive and would only allow a specific user to access the drive, thereby preventing other users from accessing it.
You can create an encrypted disk image on your external drive, which will require a password to mount. As long as you don't give out the password, other users will not be able to mount the image. See: http://support.apple.com/kb/ht1578 for details. Basically, you use Disk Utility located in the Applications -> Utilities folder to create a new disk image, give it a name, a size, and then in the Encryption dialog, select an encryption type. Disk Utility will then prompt you for a password which you must use to mount the disk image.
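The same image can also be created from Terminal with hdiutil (a sketch; the size, filesystem, volume name and path are placeholders to adjust):

```
hdiutil create -size 1g -fs "Journaled HFS+" -encryption AES-256 \
        -volname Secure /Volumes/ExternalDrive/secure.dmg

# later, mount it (you will be prompted for the password):
hdiutil attach /Volumes/ExternalDrive/secure.dmg
```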
Restricting access to files on an external drive
1,513,188,343,000
I can copy the /etc/passwd file which is owned by root to my home directory with the following permissions: -rw-r--r--. 1 root root 2751 Dec 24 21:26 /etc/passwd I can't do the same with: -rw-------. 1 root root 43591 Dec 27 18:32 /var/log/messages So I am guessing that the read permission for others is making the copying possible? For making a hardlink, I created file1 as root user in my home directory and tried creating a hardlink as a regular user. I wasn't able to. -rw-r--r--. 1 root root 0 Dec 27 18:39 file1 What is preventing the creation of hardlink? The directory permissions where file1 resides are: drwx------. 18 student student 4096 Dec 27 18:42 . I am able to rename the file1 to file2 so I am guessing, same as copy, the read permission on others is making that happen? Lastly, I am not able to move file2 to a different location. Why? Edit: I have looked at a question that explains permissions about hardlinks but I don't understand what permissions are needed to copy, move and rename files and directories of other users.
So I am guessing that the read permission for others is making the copying possible?

Copying a file is just reading the file contents, creating a new file, and writing the data there. You need permission to read the original file, and write permission on the directory of the new file to be able to create it. See Execute vs Read bit. How do directory permissions in Linux work?

What is preventing the creation of hardlink?

Probably the fs.protected_hardlinks sysctl, mentioned in Hard-link creation - Permissions?. If set, you can only create a hard link if you either own the file or have read and write permission to it, in addition to write access to the directory of the new link. If it's not set, you just need write access to the directory of the new link. There's a similar knob for symlinks, fs.protected_symlinks. Both are meant to stop various vulnerabilities where a privileged process follows a link that could be modified by an unprivileged process, e.g. in /tmp. The knobs are described in the proc(5) man page. As mentioned in the comments, the symlink/hardlink protections are likely to be enabled by default in most common Linux distributions.

I am able to rename the file1 to file2 so I am guessing, same as copy, the read permission on others is making that happen? Lastly, I am not able to move file2 to a different location. Why?

Moving a file within a single filesystem is the same as renaming it. You need write permission to both the old and the new directory, and if moving a directory, write permission to the directory itself. ("Moving" a file across filesystems requires making a new file, copying the contents over, and removing the original.) The man page for the rename() system call describes at least some of the requirements in the descriptions of the EACCES and EPERM errors.

The common theme here is that the data of directories is filenames (or "hard" links), which then point at the inodes describing the actual files. The access permissions of a directory control access to that data, that is, to changes to filenames. Hence, creating, deleting, moving and renaming files requires write permission on the directory or directories containing the affected filenames. In all cases, you also need the search/access permission (x bit) to all directories on the path to the affected files/filenames. (See path_resolution(7).)
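The rename rule is easy to verify yourself: put a read-only file in a directory you can write to, and mv still succeeds, because only the directory entry changes, not the file (a sketch using a scratch directory):

```shell
d=$(mktemp -d)
touch "$d/file1"
chmod 444 "$d/file1"          # no write permission on the file itself
mv -f "$d/file1" "$d/file2"   # succeeds anyway: renaming modifies the directory
ls "$d"                       # only file2 remains
```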
What permissions on a file or directory are needed to cp, mv and rename or make a hard/symbolic link by a different user?
1,513,188,343,000
I have a computer with Ubuntu 13.10 installed. The user (say Walesa) has changed the ownership of the etc folder and all its subfolders from root to Walesa using a privileged file manager. As sudo was disabled, he rebooted, hoping it would be re-enabled again. But security does not allow login after entering username and password, saying "owner of /etc/profile is not root". However, a command-line login as "I have no name!@Walesa" is possible. Is there a way to restore ownership of /etc and all its subfolders to root using this command line?
Doing:

sudo chown -R root.root /etc

on the command line will set /etc and everything underneath to owner root and group root. However on my system (Ubuntu 12.04) not everything under /etc is in group root. The following list might help (generated with sudo find /etc ! -gid 0 -ls | cut -c 29-):

root dovecot 5348 Apr 8 2012 /etc/dovecot/dovecot-sql.conf.ext
root dovecot 782 Apr 8 2012 /etc/dovecot/dovecot-dict-sql.conf.ext
root dovecot 410 Apr 8 2012 /etc/dovecot/dovecot-db.conf.ext
root shadow 2009 Dec 23 16:10 /etc/shadow
root lp 4096 Mar 12 19:38 /etc/cups
root lp 540 Mar 12 19:38 /etc/cups/subscriptions.conf
root lp 108 Sep 1 2012 /etc/cups/classes.conf
root lp 4096 Oct 8 2012 /etc/cups/ppd
root lp 2751 Mar 12 07:38 /etc/cups/printers.conf
root lp 2751 Mar 11 21:06 /etc/cups/printers.conf.O
root lp 108 Jun 6 2012 /etc/cups/classes.conf.O
root lp 540 Mar 12 19:24 /etc/cups/subscriptions.conf.O
root lp 4096 Mar 28 2012 /etc/cups/ssl
root sasl 12288 Jun 6 2012 /etc/sasldb2
root daemon 144 Oct 25 2011 /etc/at.deny
root dialout 66 Oct 31 2012 /etc/wvdial.conf
root lightdm 0 Apr 21 2012 /etc/mtab.fuselock
root shadow 981 Feb 19 23:38 /etc/gshadow
root dovecot 1306 Jun 6 2012 /etc/ssl/certs/dovecot.pem
root ssl-cert 4096 Jun 6 2012 /etc/ssl/private
root dovecot 1704 Jun 6 2012 /etc/ssl/private/dovecot.pem
root ssl-cert 1704 Apr 21 2012 /etc/ssl/private/ssl-cert-snakeoil.key
root fuse 216 Oct 18 2011 /etc/fuse.conf
root dip 4096 Oct 31 2012 /etc/ppp/peers
root dip 1093 Mar 28 2012 /etc/ppp/peers/provider
root dip 4096 Mar 28 2012 /etc/chatscripts
root dip 656 Mar 28 2012 /etc/chatscripts/provider
Ownership of etc folder is changed how to restore it using commandline?
1,513,188,343,000
Whenever I create or copy a few shell files to a USB storage device, I am not able to make them executable. If I create test.sh, its default file permissions will be 644, but when I execute chmod 777 test.sh no error is reported and echo $? also returns 0. Still, ls -l shows the permissions as 644 and I cannot execute it with ./test.sh.
Yes, this can occur if your device is formatted with a filesystem that does not support that kind of permission setting, such as VFAT. In those cases, the umask is made up on the fly from a setting in the fstab (or the hotplugging equivalent). See, most probably, man mount for details. For example, for VFAT, we find: Mount options for fat uid=value and gid=value Set the owner and group of all files. (Default: the uid and gid of the current process.) umask=value Set the umask (the bitmask of the permissions that are not present). The default is the umask of the current process. The value is given in octal. etc.
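For a VFAT stick the usual workaround is to pick the effective permissions at mount time, e.g. with a (hypothetical) /etc/fstab line like:

```
/dev/sdb1  /mnt/usb  vfat  uid=1000,gid=1000,umask=022  0  0
```

With umask=022, every file on the stick appears as rwxr-xr-x (755) owned by uid/gid 1000, so scripts are executable; chmod on individual files still has no effect, because FAT has nowhere to store per-file permission bits.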
can't change file permission
1,513,188,343,000
I am set up as a sudoer on my Debian machine, and I use it all the time to install software etc... I came across this interesting situation that I am scratching my head about: I tried to enable the fancy progress bar in apt using this command: sudo echo 'Dpkg::Progress-Fancy "1";' > /etc/apt/apt.conf.d/99progressbar I have permissions issues: /etc/apt/apt.conf.d/99progressbar: Permission denied However, if I su, then run the command, everything just works. Why is this so?
Because sudo cmd > file is interpreted as (sudo cmd) > file by your shell, i.e. the redirection is done as your own user. The way I get around that is:

cmd | sudo tee file

Addition: that will also show the output of cmd on your console; if you don't want that, you'll have to redirect it (e.g. to /dev/null).
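Another common workaround is to run the whole command line, redirection included, inside a root shell with sudo sh -c. The principle, that the redirection belongs to whichever shell parses it, can be demonstrated even without sudo by using a scratch path instead of /etc:

```shell
d=$(mktemp -d)
# with privileges you would write, e.g.:
#   sudo sh -c 'echo "Dpkg::Progress-Fancy \"1\";" > /etc/apt/apt.conf.d/99progressbar'
# here the *inner* shell opens the target file, demonstrated on a scratch path:
sh -c "echo 'Dpkg::Progress-Fancy \"1\";' > \"$d/99progressbar\""
cat "$d/99progressbar"
```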
How come root can do this, but sudo can't? [duplicate]
1,513,188,343,000
I'm running CentOS 6.2 and installed nginx as root. After the install I changed the owner and group of the installed files to a dedicated user and group to keep things a bit more secure. I logged in as root and ran yum update, which updated nginx, and I noticed a lot of the file owners and groups were reverted back to root. Is there a way I can retain the ownership I want when performing updates? Maybe log in as the nginx user and perform the update (is that even possible or recommended?)
What you're doing is bad. Stop it. If the application nginx is owned by the user nginx and running as the user nginx, then when the application is exploited it can write over its own files. You don't want this. Application binaries should almost always be owned by root. Services should almost always run as nobody or another similarly non-privileged account. Likewise, you don't want your web content owned by the same user running the web server, because that allows an attacker to change your content (i.e., deface your site). You want to use as much privilege separation and as few privileges as possible:

Applications owned by root (so only root is allowed to modify them)
Services executed by non-privileged users (so they have little or no access to the system)
Wherever possible, the service user should not own the content for that service
Should I change the permissions of binaries?
1,513,188,343,000
Does a normal* user have permission to write anywhere other than his own home dir (without sudo and other privilege-escalation tools)? I say normal because I do not know of more categories than root and normal. Let's say the user involved installed the system and does the administrative things with sudo <command>. I use Ubuntu, by the way. Thanks.
Yes. A normal/unprivileged user can write to /tmp and /var/tmp, for legitimate reasons. Also, if the user or group permissions of a given file/directory include those of the user, he or she can write to those files or directories as well. Having said that, giving a normal user write access to operating system files and directories is shooting yourself in the foot, as far as that analogy goes. There is a lot to say about this, but this is not the place. If you are curious about why, I suggest searching for and reading articles about "UNIX/Linux system administration best practices".
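The two world-writable locations can be checked directly; note the trailing t (sticky bit) in the mode of /tmp, which stops users from deleting each other's files there:

```shell
stat -c %a /tmp                 # typically 1777: world-writable with the sticky bit
f=$(mktemp /tmp/demo.XXXXXX)    # any user may create a file here
echo "scratch data" > "$f"
cat "$f"
```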
Does a user have permissions to write outside /home/userDir?
1,513,188,343,000
I'm trying to wrap my head around this command: find /home/ -type d \( -perm -0002 -a ! -perm -1000 \) 2>/dev/null I understand that it's going to look in the 'home' directory for all directories and redirect all stderr messages to a file (to suppress them), however I'm confused by the part in the middle, specifically: \( -perm -0002 -a ! -perm -1000 \) What do the slashes mean? I'm assuming ! -perm -1000 means to look for directories without those permissions, and -perm -0002 means to look for directories with those permissions, but I'm only used to seeing permissions in the form 644 (for example). Why are there four digits? Also, what does the -a flag do?
In the manual you can find:

Operators
Operators join together the other items within the expression. They include for example -o (meaning logical OR) and -a (meaning logical AND). Where an operator is missing, -a is assumed.

So literally \( -perm -0002 -a ! -perm -1000 \) means: permission bit 0002 is set AND permission bit 1000 is not set. Bit 0002 means writable by others; bit 1000 means the sticky bit is set. So this expression searches for objects that are writable by others and do not have the sticky bit set. The fourth (leading) digit is where the setuid, setgid and sticky bits live, which is why four digits appear here instead of the usual three. The backslashes are not part of find's syntax at all: they stop the shell from interpreting the parentheses, so that find receives them as expression-grouping operators.

Restricting the search to directories is done with -type d: d means directory, so find looks only for objects of type directory.

From the find manual, -perm has several forms: -perm 0002 will match all files with this exact permission setting, while -perm -0002 will match all files where this permission bit is set (which means 0772, 1752... whatever combination, as long as the file is writable by others).
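A small experiment on a scratch tree shows which directories the expression selects:

```shell
d=$(mktemp -d)
mkdir "$d/ww" "$d/ww_sticky" "$d/plain"
chmod 0777 "$d/ww"          # world-writable, no sticky bit -> matched
chmod 1777 "$d/ww_sticky"   # world-writable WITH sticky bit (like /tmp) -> excluded
chmod 0755 "$d/plain"       # not world-writable -> excluded
find "$d" -mindepth 1 -type d \( -perm -0002 -a ! -perm -1000 \)
```

Only $d/ww is printed: the world-writable directory without the sticky bit, which is exactly the "potentially unsafe" case the original command hunts for under /home.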
-perm flag in find
1,513,188,343,000
Possible Duplicate: Redirecting stdout to a file you don't have write permission on I'm trying to create an md5sum for an ISO image created with the Ubuntu Customization Kit tools. The ISO is created by the tools, which have to be run with sudo, in ~/tmp/remaster-new-files/ which has permissions: drwxr-xr-x 2 root root remaster-new-files So I cd to that directory and run sudo md5sum my.iso > my.iso.md5 and I get the following error: bash: my.iso.md5: Permission denied I can create the md5 sum somewhere else and use sudo mv to move it into place, exactly where it would be if the sudo md5sum command succeeded. Also, if I change user to root with sudo su root, I can run the md5sum command successfully. Why can't I use sudo to create files in this directory, given that I can use sudo to move files to it?
The problem is that the redirection is done by the shell, as the current user, before the command is run, so sudo does not come into play. Use instead:

md5sum my.iso | sudo tee my.iso.md5
Why can I copy files to, but not create files in, this directory? [duplicate]
1,513,188,343,000
I changed owner of /etc folder by accident when I was doing work on web server and now owner of /etc folder and all of its subdirectories is www-data. I can't use sudo anymore for anything and in recovery mode console restarts after like 30 seconds and then it freezes. Is there any way for me to fix this without reinstalling ubuntu.
Maybe search a little more: https://superuser.com/questions/501818/changing-ownership-without-the-sudo-command#501824

Reboot and hold down the right Shift key to bring up the GRUB 2 boot menu. Then follow these instructions to enter single-user mode: How do I boot into single user mode from grub? In single-user mode you can fix the file permissions because you are automatically the root user. Generally speaking, if it's just the file ownership that changed, you can run:

chown -R root:root /etc

That will change ownership and group back to the default root. I have an Ubuntu Server 12.04 LTS here and there are a small number of files/directories beneath /etc that have a different group ownership. Aside from these, all files are owned by root. The files with a different group ownership are:

/etc:
-rw-r----- 1 root daemon 144 Oct 26 2011 at.deny
drwxr-s--- 2 root dip 4096 Aug 22 12:01 chatscripts
-rw-r----- 1 root shadow 697 Oct 31 12:58 gshadow
-rw-r----- 1 root shadow 1569 Oct 31 13:00 shadow

/etc/chatscripts:
-rw-r----- 1 root dip 656 Aug 22 12:01 provider

So you can run the chgrp command on those files after initially running chown. Then you should have everything restored back to how it should be. It shouldn't take an average user more than 10 minutes, e.g.:

chgrp shadow /etc/shadow

Oh, and one final step: after you've made the changes, reboot.
Changed owner of /etc folder, can't use sudo anymore
1,513,188,343,000
I am looking for a way to ensure that a particular user or group is not given permission to remove any file on the system, but only to read and execute files.
There are some users on my system who don't know anything about the PC, so they try to explore it just for learning purposes. My intention is to give them a separate account with which they can explore the system without accidentally removing or changing any file on the system, while I, as another user, should be able to make any changes on the system.

That is roughly the way a Unix/Linux system normally works. A user only has the right to delete or modify (a) files or directories that he owns or (b) files or directories for which a group that he is in has write permission. The system administrator (that's you, I presume) has control over everything. So, just make sure that these new users are in their own individual group. Unix was designed to be a multi-user system. So, from the start, Unix/Linux gives normal users only limited permissions. Generally, no normal user can mess with system files. Only the system administrator, called root, can do that. Some systems allow normal users to get root's capabilities by running sudo. Make sure that your /etc/sudoers file does not give that capability to them. If you want to be severe, do not give them ownership even of a home directory. They would still have write permission to /tmp and /var/tmp, but that shouldn't cause trouble unless they create files so big that they fill up the partition.
Prevent a user from removing file
1,513,188,343,000
Created a folder "Sample_dir" and analysed its permissions. $ mkdir Sample_dir $ ll Sample_dir/ total 36 drwxrwxr-x 2 user user 4096 Jul 1 19:26 ./ drwx------ 71 user user 28672 Jul 1 19:26 ../ Looking at the first entry, I thought the argument that had to me given to chmod to achieve these permissions should be 1775. $ chmod 1775 Sample_dir/ $ ll Sample_dir/ total 36 drwxrwxr-t 2 user user 4096 Jul 1 19:26 ./ drwx------ 71 user user 28672 Jul 1 19:26 ../ But, the ls output has changed. ll has been aliased to ls -alF and the name of the folder now appears in white text with a blue background. Please explain.
The permissions you got were the permissions you asked for. The 't' comes from the '1' in the '1775' permission string you specified, and sets what is called the "sticky bit". This tells the system that files in that directory can only be renamed or removed by the file's owner, the directory's owner, or the root user. To get the permissions you wanted initially, you would have needed to use "775" or "0775" as the permissions argument to chmod. The changed colors in the ls output are just your dircolors configuration highlighting a directory that has the sticky bit set.
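You can watch the effect of the leading digit directly on a scratch directory:

```shell
d=$(mktemp -d)
chmod 0775 "$d"
stat -c %a "$d"     # 775: plain rwxrwxr-x, matching the original listing
chmod 1775 "$d"
stat -c %a "$d"     # 1775: same permissions plus the sticky bit
ls -ld "$d"         # ls now shows 't' in place of the final 'x'
```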
Meaning of chmod 1775
1,433,196,238,000
For some special purpose, I want to prevent non-root users of the Linux Server from changing or renaming the filenames. However, they can modify and write to the contents of the file. How to do this from command line.
To rename a file, write permission on the file doesn't matter: renaming a file is a change to the directory, not the file. It changes the directory entry to point at the file under a different name. So all you need to do is change the permissions of the directory. For instance:

chown root: .
chmod 755 .

That will prevent users from renaming files in there, but also from creating or deleting files. If you still want them to be able to do that, you could instead make the directory writable but also set the t bit. With that bit set, users (other than the owner of the directory, who is not restricted) can only delete or rename the files they own:

chown root:people-who-can-create-files-here .
chmod 1775 .
chown root:people-who-can-modify-the-files file1-that-must-not-be-renamed ...
chmod 664 file1-that-must-not-be-renamed ...
How to prevent users from renaming files while providing write permissions on Linux [closed]
1,433,196,238,000
I was so sure of myself, I thought I knew about permissions. Until someone asked me this. Having these users:

User    Group
----    -----
juan    juan
pedro   pedro
maria   maria
jose    jose
miguel  miguel
eric    eric
lola    lola
paola   paola

And this directory: /opt/privado with owner = juan:juan, and per-user permissions:

juan: 111
pedro: 110
maria and jose: 101
miguel and eric: 100
lola: 000

There is no common permission set for creating a group myGroup and assigning, for example, 110, because I need different permissions for different groups of users. How can this be done on Unix? Really the issue is for Linux, but maybe the solution is the same.
You want to use POSIX ACLs for this, if you can't create sensible groups. See the setfacl command.
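A sketch of how that could look with setfacl, assuming the filesystem is mounted with ACL support and the listed users exist (run as root; mask handling omitted for brevity):

```
# rwx per user, mirroring the table in the question
setfacl -m u:pedro:rw-                /opt/privado
setfacl -m u:maria:r-x -m u:jose:r-x  /opt/privado
setfacl -m u:miguel:r-- -m u:eric:r-- /opt/privado
setfacl -m u:lola:---                 /opt/privado
getfacl /opt/privado                  # review the resulting entries
```

juan already gets --x as the owner via the normal mode bits, and paola (no entry) falls through to the "other" permissions.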
How can I set these permissions?
1,433,196,238,000
I can't set other's setuid bit. Why? Is there some security lock? $ ls -l -rwxrwxr-x 1 allexj allexj 16784 Mar 11 17:30 a.out $ chmod o=+s a.out $ ls -l -rwxrwx--- 1 allexj allexj 16784 Mar 11 17:30 a.out
Traditional Unix permissions consist of 12 bits:

user   group  other  extra
0 0 0  0 0 0  0 0 0  0 0 0
r w x  r w x  r w x  s s t

Those extra bits allow you to enable three additional behaviors[1]:

The first bit (usually depicted as a lowercase s letter) is the setuid bit, which, when you run an executable with it, allows any user to have their EUID[2] set to the UID of the executable's owner.

The second bit (also depicted as a lowercase s letter) is the setgid bit; it has the same effects and implications as the setuid bit, but, unlike setuid, it works with groups and EGIDs, effectively allowing any user to run an executable as if they had their group set to that of the file owner's group.

The third bit is the sticky bit[3] (also called the restricted deletion flag). It's almost always used for directories; it allows anyone to create a file in the directory and makes sure the files "stick" to their owner, not allowing anyone else (except for root) to remove them. On modern systems its most common use is with the /tmp directory: everyone can create files in it, and only they are allowed to remove them. This makes sure that everybody can share a directory while simultaneously guaranteeing that none of the users can mess with the others' files. For directories this bit has mostly the same behavior on most Unix (and Unix-like) systems. Its use for files, however, is not uniform across different Unix (and Unix-like) systems; for example, modern Linux kernels completely ignore it, and other systems may have some special uses[4].

So, now that we know how these bits operate on modern systems, ask yourself a question: if the 12th bit were not dedicated to the aforementioned behavior, what would the setuid bit's behavior look like if we applied it to the "other" users? With setuid and setgid it's obvious, but what does it mean to "allow any user to have their effective user ID as any other user's ID"?

To me it looks like a division by zero; it makes no logical sense, as the answer could be "the effective user ID would be any and all of the other possible user IDs at the same time". There are no security locks or implications; it just doesn't make any sense to have such a thing. And the original Unix designers probably didn't see much sense in having such a silly thing either, and decided to find a better application for it. (They could also have just used 11 bits for permissions, but I like to think that they actually figured they needed all 12, as they could use the extra bit for something helpful, which they also ended up doing. Also, working with an even number of bits is easier.)

Hope this helps.
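The three extra bits map onto the leading octal digit (4 = setuid, 2 = setgid, 1 = sticky), which you can verify with chmod and stat on scratch objects:

```shell
f=$(mktemp)
chmod 4755 "$f"; stat -c %a "$f"    # 4755: setuid  (ls -l would show rwsr-xr-x)
chmod 2755 "$f"; stat -c %a "$f"    # 2755: setgid  (rwxr-sr-x)
d=$(mktemp -d)
chmod 1777 "$d"; stat -c %a "$d"    # 1777: sticky  (rwxrwxrwt, like /tmp)
```

This is also why chmod o+s (as attempted in the question) quietly does nothing: there is no fourth s bit belonging to "others" for it to set.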
Why I can't set the setuid for "others"?