1,379,574,688,000 |
I am using the CentOS 7.
I wrote my first bash script like this.
#!/bin/bash
echo 'this is my first code'
and I saved it as hello_world
I made a directory in my root home directory.
mkdir bin
Then I moved the script to the ~/bin directory.
Then I did this:
export PATH=~/bin:"$PATH"
source ~/bin
Then I tried to run the script with the below command.
hello_world
but instead of seeing this is my first code, I got a bash: /root/bin/hello_world: Permission denied error.
|
For a script to be executable without executing it with an explicit interpreter (as in bash ~/bin/hello_world), the script file has to have its "executable bit" set. This is done with chmod (see its manual):
chmod u+x ~/bin/hello_world
This sets the executable bit for the owner of the file.
Or,
chmod +x ~/bin/hello_world
This sets the executable bit according to your current umask. Assuming that your umask is 022 (a common default), this will make it executable for all users.
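Putting the pieces together, the whole flow can be checked end-to-end in a throwaway directory (paths here are illustrative, so your real ~/bin is left untouched):

```shell
#!/bin/sh
# Recreate the scenario in a temporary directory instead of ~/bin.
tmp=$(mktemp -d)
cat > "$tmp/hello_world" <<'EOF'
#!/bin/bash
echo 'this is my first code'
EOF
chmod u+x "$tmp/hello_world"   # grant the owner execute permission
PATH="$tmp:$PATH"              # put the directory on PATH (no "source" needed)
hello_world                    # prints: this is my first code
rm -r "$tmp"
```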
The source step that you did is nonsense and should have given you an error message (you can't source a directory).
If you need the setting of the new PATH to be "permanent", then add the export PATH line to your shell's startup file (~/.bashrc if you're using bash as your interactive shell).
Also, avoid working at an interactive root prompt. Use an unprivileged user account for testing and exploring, and use sudo from that account for those few times that you need to do administrative tasks.
| bash: /root/bin/hello_world: Permission denied |
1,379,574,688,000 |
I originally login with $HOME as /home/oleg
I need to run a command with sudo - this happens to be an npm install command-
sudo npm install -g suman
however, the postinstall script for the suman module, as it's currently configured, writes to the original user's home directory /home/oleg/.suman....but because I run the above npm install command with sudo, I do not have access to /home/oleg/.suman.
Is there any hope to give the root user access to the /home/oleg/.suman directory?
Or should the postinstall script for suman simply install to the root user's home directory?
It looks like, since I wrote suman :), that I could chmod the contents of the .suman directory to 777, giving the root user full access that way. I guess my question is: what are the minimum file permissions to define on the .suman directory to give read/write/execute access to only the root user and the logged-in user?
|
TL;DR: none of your suggestions are good. Instead, when running as root, store state files under /var (something like /var/lib/suman).
Root already has permission
Root has the permission to access all files in the system. So don't change the directory's permissions: it wouldn't make any difference to root, but it would allow everyone else to read and write in that directory. Despite popular belief, it's extremely rare for chmod 777 to do anything useful.
Root has the permission to access all files in the system, and normally that's enough. There are a few exceptions that have to do with certain filesystem types that handle users differently from “normal” filesystems. The two main cases are:
NFS: root on a client is typically mapped to a different user on the server, usually nobody. This means that when root opens a file, it's done with the permissions of nobody.
FUSE (which includes ecryptfs, which is commonly used to encrypt home directories): unless configured otherwise (with the option allow_other, which only root can use), FUSE filesystems are only available to the user who mounted them.
In those cases, there are files that root can't access directly despite having apparent permission. Root can still effectively access the files by switching to the account that owns the files — these are implementation limitations, not security restrictions — but it's a little inconvenient.
But root should use that permission carefully
If you have a program that's commonly invoked as root but with HOME set to another user's home directory, you should try to avoid creating files in a user's home directory that the user can't access.
If /home/oleg/.suman already exists and is owned by oleg, it doesn't matter if you write files there, because the owner of a directory can always erase files in that directory (what it takes is write permission, and the owner can always grant themselves permission). When oleg runs the same program, it'll replace the root-owned files if those files need to be overwritten. Don't create subdirectories, however: oleg would be unable to access them or remove them even if they're empty.
The problem is if you run the program for the first time as root and it creates /home/oleg/.suman. In a nutshell, don't do that — the question is how to avoid it.
The solutions
If you run sudo -H then the HOME environment variable is set to root's home directory and so the program won't be accessing /home/oleg.
But sometimes it makes sense to run a program as root (because it needs root permissions) but with the home directory set to your own (to read your configuration files). This of course only applies in cases where it's ok for a program running as root to read that configuration file — user interface customizations are fine, but the configuration file shouldn't contain things like executable code (e.g. no shell escapes through preprocessing). If that's the case, the program should read files from $HOME but not write.
If the program needs to store some state files, then when it's running as root, it should store them in a system directory (under /var), not in the user's home directory.
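A minimal sketch of that policy, assuming hypothetical paths (/var/lib/suman is an illustration, not suman's actual layout): the program picks its state directory based on the effective user id, so root never writes into another user's home.

```shell
#!/bin/sh
# Sketch: choose a state directory depending on who is running,
# so that root stores state under /var, not in a user's home.
state_dir() {
    if [ "$1" -eq 0 ]; then
        echo /var/lib/suman          # system-wide state when running as root
    else
        echo "$HOME/.suman"          # per-user state otherwise
    fi
}
state_dir "$(id -u)"
```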
| Using sudo to write files to user's home |
1,379,574,688,000 |
I am surprised that /mnt, /media or even /opt are not writable by myself on my own system (Linux-Ubuntu). Should I always use sudo for any action on these paths?
Or would it be better to add myself to the root group?
I guess this might be the thing to do:
sudo chown root:nowox /opt /mnt /media
sudo chmod u+w /opt /mnt /media
|
Do not add yourself to the root group; this may have many unintended side effects, granting more access than you intended.
These directories are intentionally not writable by normal users. In the event you need to make manual changes to them (which will be rare), you can perform those operations as root via sudo.
| Should I take ownership of /media, /mnt, and /opt or join root group? |
1,379,574,688,000 |
I constantly have problems with read/write/execute permissions between Apache and me. There is a user "konrad" (that's me) in the "konrad" group, and there is a user "www-data" in the "www-data" group used by Apache. When I ("konrad") create a directory, Apache has no rights to write to it, which causes problems.
So now I have the following "idea": I will add myself ("konrad" user) to the "www-data" group (where also Apache's user belongs) and then I chown all my www projects, so that they will belong to user "konrad" but group: "www-data". And I will chmod the projects so that this group will have all permissions to files and directories (I think that would be 770).
Then I will change my primary group from "konrad" to "www-data", so everytime I'll create a new directory/file Apache will also have a full access to it.
The question is: is this a good idea? I don't have a great experience with permissions or even Unix itself. So maybe I'm missing something. But it seems reasonable to me.
|
Apache runs as a non-privileged user, known as www-data on Debian-based distros, for a very good reason: security.
When dealing with daemons, it is considered good security practice to give up privileges and to avoid, as much as possible, creating configuration files or data files owned by the non-privileged user that runs the daemon. That way, if the Apache user is compromised, attackers will have a much harder time messing things up or defacing a site.
Where possible, I recommend creating sites under different users, giving read rights to the www-data group only, and granting www-data write access only in the directories that really need it. However, even this can be avoided using mod-ruid2.
mod-ruid2 actually allows running each site/vhost as its owner, which makes the security model of the pages much easier to deal with. It removes the need to create world-writable directories, and it also guarantees that if one vhost is compromised, the attacker is not able to plant malware in the other vhosts.
mod-ruid2 is also advisable for people with a hosting model; we use it here to run a few hundred sites, with considerable success.
Unfortunately, the documentation about mod-ruid2 is a bit scant, and I had to write a more elaborate post to describe it here in Unix and Linux.
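As a sketch of the group-based setup recommended above: the commands below operate on a throwaway directory so they can be run safely. In reality the tree would live somewhere like /var/www/site1 and you would also run chown -R site1:www-data on it (user, group and path names are illustrative).

```shell
#!/bin/sh
# Per-site permissions: owner gets full access, the www-data group gets
# read-only access, with write access only where really needed.
site=$(mktemp -d)
mkdir "$site/uploads"
touch "$site/index.php"
chmod -R u=rwX,g=rX,o= "$site"   # group: read (and directory traversal) only
chmod g+w "$site/uploads"        # write access only where required
stat -c '%a %n' "$site" "$site/uploads" "$site/index.php"
```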
| Is giving all permissions to www-data group a good idea? |
1,379,574,688,000 |
I'm trying to set my local computer (which has Linux Mint 13 Maya) so that I can chmod & chown any file with my regular max user account.
Following this page,
https://askubuntu.com/questions/159007/how-do-i-run-specific-sudo-commands-without-a-password
I've done the following:
#edit the /etc/sudoers file via `visudo`
sudo visudo
#in the file, added these lines:
Cmnd_Alias NOPASS_CMNDS = /bin/chmod, /bin/chown
max ALL=(ALL) NOPASSWD: NOPASS_CMNDS
Then saved. (I got the locations for chmod and chown using which)
So, my visudo file now looks like this:
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
Cmnd_Alias NOPASS_CMNDS = /bin/chmod, /bin/chown
max ALL=(ALL) NOPASSWD: NOPASS_CMNDS
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
This is the output from sudo -l
$ sudo -l
Matching 'Defaults' entries for max on this host:
env_reset, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User max may run the following commands on this host:
(ALL) NOPASSWD: /bin/chmod, /bin/chown
(ALL : ALL) ALL
I then open a new shell tab and try to sudo chmod a file which is owned by a different user & group, and it asks me for a password:
$ ls -l tmp/0000000001
-rw------- 1 www-data www-data 19245781 Sep 10 16:59 tmp/0000000001
$ sudo chmod +w tmp/0000000001
[sudo] password for max:
Am I missing something here? I don't know if I've done it wrong or have misunderstood what I was actually trying to change.
Do I need to reboot, or reload/restart something to see the change?
|
The issue here is that there are two rules for this user:
(ALL) NOPASSWD: /bin/chmod, /bin/chown
(ALL : ALL) ALL
The second one comes from the line in sudoers reading
%sudo ALL=(ALL:ALL) ALL
Sudo uses the last matching rule in the file (i.e. the first match when reading from the bottom up), so when you need different options for a subset of commands, you need to make sure that they are listed below the more generic line.
In other words, you need to make sure that the line
max ALL=(ALL) NOPASSWD: NOPASS_CMNDS
is placed after the line
%sudo ALL=(ALL:ALL) ALL
in the file.
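Concretely, the tail of the sudoers file from the question would be reordered like this (only the order changes, not the rules themselves):

```
# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

# Listed after the %sudo rule so that it wins for these commands
Cmnd_Alias NOPASS_CMNDS = /bin/chmod, /bin/chown
max     ALL=(ALL) NOPASSWD: NOPASS_CMNDS
```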
| Can't give myself NOPASSWD privelege for chmod/chown [duplicate] |
1,379,574,688,000 |
I have two pdf files in my server:
-rwxr-xr-x. 1 apache apache 1422861 Sep 11 2014 file1.pdf
-rwxr-xr-x. 1 apache apache 340815 Aug 27 13:06 file2.pdf
I can access the first one by going to www.myserver.com/pdffiles/file1.pdf
I just uploaded the second one and changed permissions to apache:apache and rwxr-xr-x so I wouldn't have problems with permissions, but when I try to access the second file with www.myserver.com/pdffiles/file2.pdf I get this:
Forbidden
You don't have permission to access
/pdffiles/file2.pdf on this
server.
What am I missing?
I got the following on my ssl_error_log:
[Thu Aug 27 13:30:46.755295 2015] [core:error] [pid 3025]
(13)Permission denied: [client x.x.x.x:60230] AH00132: file
permissions deny server access: /var/www/myserver/file2.pdf
|
I didn't know the problem was SELinux, but I discovered it was when I temporarily turned enforcement off with setenforce 0 and the file became accessible.
This is how it looked when I listed the files with ls -alZ
-rwxr-xr-x. apache apache unconfined_u:object_r:httpd_sys_rw_content_t:s0 file1.pdf
-rwxr-xr-x. apache apache unconfined_u:object_r:user_home_t:s0 file2.pdf
so I fixed it with:
chcon unconfined_u:object_r:httpd_sys_rw_content_t:s0 file2.pdf
and also I turned SELinux back on with setenforce 1.
| One of my pdf files in my apache server can be accessed the other can't, with the same permissions and same directory |
1,379,574,688,000 |
On my old Debian Wheezy I can see, as a normal user without sudo, the processes of all users in htop. On my new Debian Wheezy I only see my own processes.
Old system kernel: 3.2.0-4
New system kernel: 3.14.32
The difference that I noticed is that on the old system /proc/1/ has r-xr-xr-x permissions and on the new one only r-x------.
The line from /etc/fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
mount returns the following:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
When I add hidepid=0 and reboot:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults,hidepid=0 0 0
mount still returns the same as without:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
What should I do to see all processes as a normal user?
|
I found the problem.
The kernel was compiled with grsecurity, which hides processes from other users.
With the default kernel everything works fine.
| htop shows only the processes of the user that's running it? |
1,379,574,688,000 |
I am trying to understand file/dir permissions in Linux.
A user can list the files in a directory using
cd test
ls -l
Even if the user issuing the above commands does not have read, write or execute permission on any of the files inside the test directory, he can still list them, because he has read permission on the test directory itself.
Then why, in the following scenario, can user B change the permissions of a file he owns even though he does not have write permission on the parent directory?
User A, makes a test directory and gives other users ability to write in it:
mkdir test
chmod o+w test
User B, creates a file in test folder.
cd test
touch b.txt
User A removes write permission of others from the directory
chmod o-w test
User B can successfully change permissions, even though permissions are part of the directory and this user does not have write permission on the parent directory of the file he owns
chmod g-r b.txt
why does chmod not fail since the user cannot modify the directory which has the file information - permissions etc?
|
When you change a file's metadata (permissions, ownership, timestamps, …), you aren't changing the directory, you're changing the file's inode. This requires the x permission on the directory (to access the file), and ownership of the file (only the user who owns the file can change its permissions).
I think this is intuitive if you remember that files can have hard links in multiple directories. The directory contains a table that maps file names to inodes. If a file is linked under multiple names in multiple directories, that's still one inode with one set of permissions, ownership, etc., which shows that the file's metadata is in the inode, not in the directory.
Creating, renaming, moving or deleting a file involves modifying the directory, so it requires write permission on the directory.
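The behaviour is easy to reproduce with a single user owning both the directory and the file (paths below are throwaway; the mechanics are the same as with user B in the question):

```shell
#!/bin/sh
# A file's owner can chmod it even without write permission
# on the containing directory.
tmp=$(mktemp -d)
mkdir "$tmp/test"
touch "$tmp/test/b.txt"
chmod 644 "$tmp/test/b.txt"      # rw-r--r--
chmod u-w "$tmp/test"            # drop our write permission on the directory
chmod g-r "$tmp/test/b.txt"      # still succeeds: we own the file's inode
stat -c %a "$tmp/test/b.txt"     # → 604
chmod u+w "$tmp/test"            # restore so cleanup works
rm -r "$tmp"
```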
| Why does chmod succeed on a file when the user does not have write permission on parent directory? |
1,379,574,688,000 |
I need to set the same permissions of owner to group recursively to all elements in a directory.
|
There's a fairly simple answer
(although I don't know for sure whether it works on all versions of *nix);
simply do
chmod -R g=u *
i.e., set the group permissions equal to the user permissions.
This is documented in chmod(1):
The format of a symbolic mode is [ugoa...][[+-=][perms...]...],
where perms is either zero or more letters from the set rwxXst,
or a single letter from the set ugo. …
︙
The letters rwxXst select ….
Instead of one or more of these letters, you can specify exactly one of the letters ugo:
the permissions granted to the user who owns the file (u),
the permissions granted to other users
who are members of the file’s group (g), and the permissions granted
to users that are in neither of the two preceding categories (o).
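A minimal demonstration of g=u on a throwaway file (GNU stat is assumed for the octal output):

```shell
#!/bin/sh
# Copy the owner's permission bits to the group.
tmp=$(mktemp -d)
touch "$tmp/f"
chmod 750 "$tmp/f"     # owner rwx, group r-x, others none
chmod g=u "$tmp/f"     # group now gets the same bits as the owner
stat -c %a "$tmp/f"    # → 770
rm -r "$tmp"
```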
| Set group permissions as owner permissions |
1,379,574,688,000 |
I'm trying to setup ssh-host in Cygwin and am getting the below error:
*** Warning: The permissions on the directory /var are not correct.
*** Warning: They must match the regexp d..x..x..[xt]
*** ERROR: Problem with /var directory. Exiting.
As of now, the /var directory has the below permissions.
$ ls -ld /var
drws--Srwx+ 1 Prashant Prashant 0 Mar 11 22:29 /var
How do I set d..x..x..[xt] permissions for /var?
|
In Cygwin, it's not possible to change group permissions unless the group is Users or Root. Refer to 'chmod' cannot change group permission on Cygwin.
You won't be able to change the group permission until you change var's group owner to Users, so the best solution is:
chown :Users /var
chmod 757 /var
chmod ug-s /var
chmod +t /var
The last step of setting sticky bit is not really necessary though.
| Permission issues while doing ssh setup in Cygwin |
1,379,574,688,000 |
I'm not very experienced with Linux and I made a big, big mistake. I ran the following command:
chown -R [ftpusername]:[ftpusername] /
I meant to run this:
chown -R [ftpusername]:[ftpusername] ./
See the problem?
I tried to correct my mistake by changing the owner of all files to root:
chown -R root:root /
Now I'm getting permissions errors when trying to access my websites, but my biggest concern is that I want to make sure I haven't caused any security vulnerabilities here.
Questions:
Was changing ownership of everything to root the right thing to do?
I think running chown caused some of the folder and file permissions to be changed. Is that normal? Would this cause any security vulnerabilities?
|
Was changing ownership of everything to root the right thing to do?
No. It is, however, the quickest way I can think of to get the system to normal state.
There are plenty of processes which require some directories/files to be owned by their user. Examples include logs, caches, and the working/home directories of processes like MySQL, LightDM, etc. Log files especially can create a lot of problems.
There are some applications which are setuid/setgid, and so need their owner/group to be something specific. Examples include /usr/bin/at, /usr/bin/crontab, etc.
I think running chown caused some of the folder and file permissions to be changed. Is that normal?
I doubt the modes got changed. If they did, it most definitely is not normal.
Would this cause any security vulnerabilities?
Since you just set /usr/bin/crontab to be owned by root, you now have a setuid application that opens an editor. I doubt any vulnerabilities compare to that. Of course, this is a blatant vulnerability, so something more insidious might now pop up. Overall, I'd recommend simply re-installing the system - or hopefully you have full-disk backups.
Apparently, chown(3) is supposed to clear the setuid and setgid bits if the running process doesn't have the appropriate privileges. And man 2 chown for Linux says:
When the owner or group of an executable file are changed by an
unprivileged user the S_ISUID and S_ISGID mode bits are cleared.
POSIX does not specify whether this also should happen when root does
the chown(); the Linux behavior depends on the kernel version. In
case of a non-group-executable file (i.e., one for which the S_IXGRP
bit is not set) the S_ISGID bit indicates mandatory locking, and is
not cleared by a chown().
So, it seems the devs and the standards committees have provided safeguards.
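One way to audit the aftermath is to list every setuid/setgid file and compare the result against a known-good system or the package database. A possible sketch using standard find options:

```shell
#!/bin/sh
# List setuid/setgid regular files on the root filesystem.
# -xdev keeps find on one filesystem; unreadable directories are skipped.
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null || true
```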
| Accidentally screwed up permissions big time -- what should I do? [duplicate] |
1,379,574,688,000 |
This question was asked/answered previously, and I know it's safe to chown on /usr/local for my admin user account, which I've done to install git with homebrew (using brew install git).
sudo chown -R $USER:admin /usr/local
Now I'm wondering if I should change ownership back to root.
Does it matter who owns /usr/local?
Note: This question is oriented towards macOS, but may also apply to Unix systems.
|
Let's start with some "history". /usr/local is typically used to store user programs/data that were not installed with the base operating system. Commonly, when you build programs from source using automake, they will install somewhere under /usr/local. Because the main operating system itself doesn't rely on this directory, its permissions are really up to the administrator's preference.
Now, we can also consider this from another angle. The user permission bits (+r, +w, +x) on a directory apply only to the owner of that directory. Typically, the owner's permissions are broader than the group or other permissions, which means that the owner of /usr/local has elevated privileges over other accounts. If, however, the group and/or other permissions are equal to or greater than the owner's on that directory, then who the actual owner is isn't as important.
So, the question you need to ask yourself is which users will be using the "stuff" located in /usr/local, and whether you care that other users have access to the same "stuff". Your answer will probably affect not only which user owns this directory, but also what the user, group and other permissions on it will be.
Using home-brew, it's usually a good idea to have this directory owned by the user administering the home-brew package system. For more of a shared system, it is more common that this directory will be owned by root, as even though this directory is user-controlled, often the administrator of the system wants to have control over what gets deposited into this directory.
As a reference, here's an example of a stock Linux (Ubuntu) machine, followed by a stock Mac OS X machine:
user@ubuntu:~$ ls -ld /usr/local
drwxr-xr-x 10 root root 4096 Sep 10 2013 /usr/local
Mac:~ user$ ls -ld /usr/local
drwxrwxr-x 12 root admin 408 12 Apr 04:32 /usr/local/
| sudo chown -R $USER:admin /usr/local - revert back to ROOT? |
1,379,574,688,000 |
When editing a file with PHPStorm, the group is changed from www-data to my local user. I've added my local user to the www-data group, and if I edit the file with gedit/vim/etc. the group stays as www-data.
I suspect PHPStorm must be running as a different user, but when I check ps, it shows as my local user. What would cause this program to alter the group of a file it's editing?
|
What most likely causes this is that the editor backs up the original file and then writes a new file, which therefore gets the current group id of the user; the editor would have to explicitly reset the group to the original one.
You could try to use newgrp to change the group of the user doing the editing before starting PHPStorm:
newgrp www-data
| PHPStorm changes group when saving file |
1,379,574,688,000 |
My question concerns Apache and the different ways it can operate. For this particular machine I am the only user, and I will be using the box mainly to run WordPress on a LAMP Stack with Ubuntu 12.04.3 LTS. I have Apache installed and it is running as the www-data user. For the purposes of this example, lets call my user foo.
If I just set up a base configuration of WordPress, the system won't be able to write to the directories, for reasons I still don't quite understand, because www-data is the Apache user, and isn't foo, the owner of the files.
From what I understand, I don't want to make my files owned by www-data as this can be very dangerous and insecure. However, might it be a good idea to make my files group writable, or is that not the best option either?
If someone could shed a little light on the workings of Apache, that would mean a lot, as well as the best configuration to put my server in so that the files are writable by my user but can be accessed by applications such as WordPress.
|
Noel, the www-data user is the user on whose behalf Apache runs your WordPress code (and any other code generating Web pages for your users, e.g. Django code - the Python web-site engine).
The www-data user is created with the minimal permissions possible, because it may end up executing malicious code, and such code can take as much control over your system as the user is allowed to take. Suppose that the WordPress engine contains a vulnerability. Say, it allows the user to convert an image file from .jpg to .gif format by running convert from ImageMagick, and the vulnerability is that it doesn't check that the supplied filename is a filename and only a filename.
If a malevolent cracker supplies "image.png; ldd image.png" and WordPress executes convert image.png; ldd image.png in the shell without filtering out the "; ldd image.png" part (appended to the filename by the cracker in order to have it executed in the shell), your Apache will run ldd image.png in addition to converting the image. If image.png is in fact an executable file named image.png that the cracker supplied to you (if you allow other people to publish on your site using the WordPress engine), ldd image.png can result in arbitrary code execution via an ldd vulnerability, as described here: http://www.catonmat.net/blog/ldd-arbitrary-code-execution/.
Obviously, if that code is run as root user, it can infect all programs in your system and take total control of it. Then you're screwed (your virtual hosting can start sending spam, trying to infect everyone with viruses, eat up all your hosting budget etc.).
Thus, WordPress should be run with the minimal privileges possible, in order to minimize the damage from a potential vulnerability, and any file that www-data can write to should be treated as possibly compromised.
Why not run WordPress as your foo user? Suppose you've got a per-user installation of programs (e.g. in /home/foo/bin) and run WordPress as the foo user. Then a vulnerability in WordPress can be used to infect those programs, and if you later run one of those programs with sudo, you're screwed - it will take total control over the system. Likewise, if you store any password or private key that the foo user can read, a cracker who hacked your WordPress will be able to read it too.
As for the overall mechanism of Apache functioning, here is a summary:
1) On your VPS computer there's a single Apache2 process that runs as root. It has to run as root, because it needs root privileges to ask the Linux kernel to create a socket on TCP port 80.
A socket (see Berkeley sockets) is an operating-systems programming abstraction used by modern operating system (OS) kernels to represent network connections to applications. WordPress developers can think of a socket as a file. When two programs, a client and a server, on two different computers speak to each other over the network using the TCP/IP protocol, the OS kernels handle the TCP/IP details by themselves, and the programs just see a file-like object - the socket. When the client program (e.g. Mozilla) writes something to its socket, the kernel of the client computer's OS delivers that data to the kernel of the server computer's OS using TCP/IP. The server program (Apache2, on behalf of WordPress) can then read those data from its socket.
How does the client find the server, and how does the server distinguish between clients? Both server and client are identified by a pair (IP address, TCP port number). There are well-known ports for well-known protocols, such as 80 for http, 443 for https, 22 for ssh etc.; server computers listen for connections on them. IMPORTANTLY, only the root user can create sockets on well-known ports. That's why the first instance of Apache2 runs as root.
When a server program (Apache2) wants to start listening on a port, it creates a so-called passive socket on port 80 with several system calls (socket(), bind(), listen() and accept()). A system call is a request from a program to its OS kernel; to read about system calls, use e.g. man 2 socket (here 2 means section 2 of the man pages - system calls; see man man for section numbers). A passive socket can't really transfer data; the only thing it does is establish connections with clients, such as Mozilla's tab.
2) The client (a Mozilla tab) wants to establish a TCP/IP connection to your server. It creates a socket on a NON-WELL-KNOWN port such as 14369, which doesn't need root privileges. Then it exchanges 3 messages with Apache through the passive socket on your server computer's port 80. This process (establishing the TCP/IP connection with 3 messages) is called the 3-way handshake.
3) When the TCP/IP connection is successfully established, Apache2 (running as root) invokes the accept() system call and the Linux kernel creates an active socket on the server's port 80, corresponding to the connection with Mozilla's tab. Your WordPress application will talk to the client through this active socket.
4) Apache2 (running as root) forks another instance of Apache2 to run the WordPress code with lower privileges. That instance runs your WordPress code as the www-data user.
5) Mozilla, and Apache2 running the WordPress code as the www-data user, start exchanging HTTP data over the established connection, writing to and reading from their respective sockets via the send()/recv() system calls.
Basically, WordPress is just a program whose output is an HTML page: Apache2, running as www-data, runs that program and writes its output (the HTML page) to the active socket, and Mozilla on the client side receives that page and shows it.
| Apache ownership and permissions for wordpress |
1,379,574,688,000 |
On an Ubuntu Linux I have a directory with the setuid bit set (drwsr-xr-x) which I want to unset.
Neither chmod 755 nor chmod 0755 nor chmod 00755 (I though maybe the first 0 is interpreted as just "this is octal") cleared the setuid bit. However, chmod u-s did.
What is the correct numeric mode to clear the setuid bit?
|
Interestingly, this seems to be impossible using GNU chmod, and that's a feature. From the info entry on chmod on my system; note how, whilst the entry on setting the bits makes reference to symbolic and numeric modes, the entry on clearing them refers only to symbolic (ug-s) mode:
27.4 Directories and the Set-User-ID and Set-Group-ID Bits
These convenience mechanisms rely on the set-user-ID and
set-group-ID bits of directories. If commands like `chmod' and
`mkdir' routinely cleared these bits on directories, the mechanisms
would be less convenient and it would be harder to share files.
Therefore, a command like `chmod' does not affect the set-user-ID or
set-group-ID bits of a directory unless the user specifically mentions
them in a symbolic mode, or sets them in a numeric mode.
[...]
If
you want to try to set these bits, you must mention them explicitly in
the symbolic or numeric modes, e.g.:
[...]
If you want to try to
clear these bits, you must mention them explicitly in a symbolic mode,
e.g.:
[...]
This behavior is a GNU extension. Portable scripts
should not rely on requests to set or clear these bits on directories,
as POSIX allows implementations to ignore these requests.
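In other words, the symbolic form is the portable way to clear the bit. A quick demonstration on a throwaway directory (GNU stat is assumed for the octal output):

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir "$tmp/d"
chmod 4755 "$tmp/d"    # numeric modes *can* set the setuid bit
chmod 755 "$tmp/d"     # on GNU chmod this may silently leave the bit set
chmod u-s "$tmp/d"     # symbolic mode clears it reliably
stat -c %a "$tmp/d"    # → 755
rm -r "$tmp"
```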
| Clear setuid permission using numeric mode |
1,379,574,688,000 |
I understand this first example:
> mkdir foo
> chmod u-w foo
> touch foo/test
touch: cannot touch `foo/test': Permission denied
> echo "BAD" >> foo/test
bash: foo/test: Permission denied
This makes sense: I don't have write permission on the directory so I shouldn't be able to write any changes. I can not touch nor create a file that can be appended to. Why does this work however?
> mkdir bar
> touch bar/test
> chmod u-w bar
> echo "BAD" >> bar/test
> cat bar/test
BAD
|
You have no write permission on the directory. That means you cannot modify the directory. Creating or removing a file in the directory (which includes creating or deleting a file, as well as moving the file in or out of the directory) modifies the directory. If you modify a file inside the directory (by appending or overwriting it), that doesn't modify the directory itself.
You can also modify the file's metadata (dates, permissions, etc.) as long as you own the file, regardless of the permissions on the directory and on the file. You can even indirectly modify a file's access time by reading it, even if reading is the only permission you have on the file. Access to file metadata isn't controlled by permissions.
The only permission on the directory that's relevant to modifying files inside it is the execute permission. It controls whether you can access the file at all. (The read permission on the directory controls whether you can list the directory's files; with read but not execute, you can see the file names but not access the files; with execute but not read, you can access files in the directory, but only if you know their name.) As long as you can access the file, the directory's permissions don't matter further.
If you want to make a whole directory tree read-only, you can't do it by changing the permissions on the directory alone, you have to change the permission of every file. Alternatively, create a read-only view.
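To illustrate that last point on a scratch tree (paths made up for the demonstration): write permission has to be stripped from every entry, which a recursive chmod can do:

```shell
umask 022                      # make the starting permissions predictable
dir=$(mktemp -d)
mkdir -p "$dir/sub"
echo data > "$dir/sub/file"    # created as 0644 under this umask

chmod -R a-w "$dir"            # strip write from every file and directory

perms=$(stat -c '%a' "$dir/sub/file")
echo "file mode after chmod -R a-w: $perms"   # 444: read-only for everyone

chmod -R u+w "$dir"            # restore write so the scratch tree can be removed
rm -rf "$dir"
```

Note that the file owner (or root) can still undo this with another chmod; it only affects ordinary permission checks.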
| How come I can append to files to a directory without write permissions? |
1,379,574,688,000 |
I have some users which have different $HOME directories than /home/.
The user homes are under /pkg/home, and /pkg is owned by a different user, but all users have group access to /pkg. It seems that sshd will restrict access to authorized_keys (e.g. /pkg/home/usera/.ssh/authorized_keys) because the user is not the owner of the full path.
Is there any option for sshd_config to change this restriction?
|
It's all or nothing: if you turn the StrictModes option off, sshd will never check any file modes. There's no way to say that certain odd cases are ok, such as a group-writable directory (which is ok if the user is alone in the group).
OpenSSH checks the permissions and ownership of ~/.ssh/authorized_keys and its containing directories recursing upwards. However, it stops the comparison when it reaches the home directory. For example, in the classical arrangement where the authorization file is /home/joe/.ssh/authorized_keys and /home/joe is the user's home directory, only /home/joe/.ssh/authorized_keys, /home/joe/.ssh and /home/joe are checked.
So while your scenario is highly dubious (/pkg should be owned by root, with additional group permissions if required), it should not impact ssh.
If any symbolic links are involved, note that ssh expands all symlinks before starting its checks.
The system logs might have relevant information. Check if your failed login attempts cause any log message.
Check that your version of ssh performs the same checks as mine (I looked at the source of OpenSSH 5.5p1) by running a debug mode daemon on a custom port (sshd -d -p 2222). Use strace -f -efile sshd -d -p 2222 if necessary to check which files' permission the server checks. If these permission checks aren't the issue, adding more -d flags might throw some light.
If you have AppArmor, there's also the possibility that it is restricting the ssh server to reading files in users' .ssh directories. If you have AppArmor and home directories in a nonstandard location, you'll need to update AppArmor policies (not just for SSH). See Evince fails to start because it cannot read .Xauthority.
| Use SSH key authentication with custom user $HOME |
1,379,574,688,000 |
In a build script I want to create a temporary file that has a unique generated name, then write some data to it, and then rename that file to its final name (the reason for this procedure is that I want the renaming to happen (mostly) atomic).
But if I use mktemp, the new file will have very restrictive permissions:
Files are created u+rw, and directories u+rwx, minus umask
restrictions.
... and these permissions will be preserved when the file is renamed. However, I want the resulting file to have "normal" permissions; that is, it should have the same permissions as files created by touch or by a shell redirection.
What is the recommended way to achieve this? Is there a (common) alternative to mktemp; or is there a simple and reliable way to change the file permissions to the normal umask permissions afterwards?
|
You can use chmod =rw "$file" after creating the temp file. The GNU man page says:
A combination of the letters ugoa controls which users' access to the file will be changed: [...] If none of these are given, the effect is as if (a) were given, but bits that are set in the umask are not affected.
So, =rw gives read-write permissions reduced by the umask, similar to passing 0666 as the permissions argument to open(), which is what e.g. touch filename, or echo > filename would do.
That's a POSIX feature, so other implementations should support it too.
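Putting it together, here is a minimal sketch of the create-then-rename procedure (GNU tools assumed; the target name is made up):

```shell
umask 022                           # a typical default umask
workdir=$(mktemp -d)
target="$workdir/data.txt"          # illustrative final name

tmp=$(mktemp "$target.XXXXXX")      # mktemp creates this with mode 0600
printf 'some data\n' > "$tmp"

chmod =rw "$tmp"                    # rw reduced by umask -> 0644 here
mv "$tmp" "$target"                 # atomic rename within the same directory

perms=$(stat -c '%a' "$target")
echo "final mode: $perms"
rm -rf "$workdir"
```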
| How to create a temporary file that has "normal" permissions? |
1,379,574,688,000 |
I'm not sure if something similar has already been asked.
I'm currently trying to push emails to our spam filter when they are moved to or out of the Junk folder so it can learn them as spam/ham. To do that I followed this guide: https://workaround.org/ispmail/stretch/filtering-out-spam-with-rspamd in the section "Learning from user actions".
The sieve scripts are created following the instructions, they have been processed by sievec and permissions have been granted following the guide. The two shell scripts have also been created accordingly. The only real difference is that we don't have the user or group "vmail". I set it to dovecot:root which should be the counterpart of our system. So the folder looks like this:
drwxr-xr-x 2 dovecot root 4,0K Mai 7 10:52 .
drwxr-xr-x 3 root root 4,0K Jul 29 2019 ..
-rw-r--r-- 1 dovecot root 85 Mai 7 10:47 learn-ham.sieve
-rw-r--r-- 1 root root 246 Mai 7 10:47 learn-ham.svbin
-rw-r--r-- 1 dovecot root 86 Mai 7 10:47 learn-spam.sieve
-rw-r--r-- 1 root root 250 Mai 7 10:47 learn-spam.svbin
-rw-r--r-- 1 dovecot root 509 Mär 16 13:57 mailfilter.sieve
-rw-r--r-- 1 root root 398 Mai 6 18:02 mailfilter.svbin
-rwx------ 1 dovecot root 41 Mai 7 10:52 rspamd-learn-ham.sh
-rwx------ 1 dovecot root 42 Dez 14 10:42 rspamd-learn-spam.sh
When the sieve script executes and is supposed to call the shell scripts, I get the following errors:
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: Fatal: execvp(/etc/dovecot/sieve/global/rspamd-learn-spam.sh) failed: Permission denied
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: Error: write(program stdin) failed: Broken pipe
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: program `/etc/dovecot/sieve/global/rspamd-learn-spam.sh' terminated with non-zero exit code 84
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: Error: sieve: pipe action: failed to pipe message to program `rspamd-learn-spam.sh': refer to server log for more information. [2020-05-12 17:16:28]
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: sieve: left message in mailbox 'Junk'
Mai 12 17:16:28 mail dovecot[4119]: imap(user)<8778><xIGQ8nSlFMZ/AAAB>: Error: sieve: Execution of script /etc/dovecot/sieve/global/learn-spam.sieve failed
Besides the fact that I have no clue what the "server log" refers to, I just can't figure out what exactly the problem is. Sure it seems like a permission error, but how could it be fixed?
About our system: Debian 10.4 with dovecot 2.3.4.1 and pigeonhole 0.5.4
EDIT:
I found one mistake: I had set the sieve_pipe_bin_dir to the wrong folder. It now points to the folder containing the two .sh files, but still I get those errors:
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: Fatal: execvp(/etc/dovecot/sieve/global/rspamd-learn-spam.sh) failed: Permission denied
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: Error: write(program stdin) failed: Broken pipe
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: program `/etc/dovecot/sieve/global/rspamd-learn-spam.sh' terminated with non-zero exit code 84
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: Error: sieve: pipe action: failed to pipe message to program `rspamd-learn-spam.sh': refer to server log for more information. [2020-05-22 15:40:06]
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: sieve: left message in mailbox 'Junk'
Mai 22 15:40:06 mail dovecot[18547]: imap(user)<18686><57dcxDymXJ5/AAAB>: Error: sieve: Execution of script /etc/dovecot/sieve/global/learn-spam.sieve failed
No matter which owner I set (root:root or dovecot:root; the only other users that are not "human users" would be something like _apt, bin, nslcd, daemon, dovenull or www-data), I get the same errors. Any idea what could cause that?
EDIT2:
I now changed my approach by trying to pipe directly to rspamc. Here my learn-spam.sieve script:
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "rspamc" ["learn_spam"];
Accordingly I changed the 90-plugin.conf to contain
sieve_pipe_bin_dir = /usr/bin/rspamc
where rspamc resides. Now I'm getting the error
Jun 03 09:48:34 mail dovecot[1536]: imap(user)<10486><xVI6QSmnpLN/AAAB>: Error: sieve: pipe action: failed to pipe message to program: program `rspamc' not found
Jun 03 09:48:34 mail dovecot[1536]: imap(user)<10486><xVI6QSmnpLN/AAAB>: sieve: left message in mailbox 'Junk'
Jun 03 09:48:34 mail dovecot[1536]: imap(user)<10486><xVI6QSmnpLN/AAAB>: Error: sieve: Execution of script /etc/dovecot/sieve/global/learn-spam.sieve failed
What went wrong? Or is the Pigeonhole pipe command only able to call shell scripts?
|
It seems I've found what was not working: for some reason dovecot didn't have execute permission on the shell scripts. So the solution was actually sudo -u dovecot chmod +x *.sh
So correct file permissions in my case look like this:
/etc/dovecot/sieve/global # ls -la
insgesamt 44K
drwxr-xr-x 2 dovecot root 4,0K Jul 8 07:33 .
drwxr-xr-x 3 root root 4,0K Jul 29 2019 ..
-rw-r--r-- 1 dovecot root 144 Jun 5 10:06 learn-ham.sieve
-rw-r--r-- 1 root root 306 Jun 5 10:07 learn-ham.svbin
-rw-r--r-- 1 dovecot root 86 Jun 17 15:45 learn-spam.sieve
-rw-r--r-- 1 root root 250 Jun 17 15:45 learn-spam.svbin
-rw-r--r-- 1 dovecot root 509 Mär 16 13:57 mailfilter.sieve
-rw-r--r-- 1 dovecot root 462 Jul 29 2019 mailfilter.sieve~
-rw-r--r-- 1 root root 398 Mai 6 18:02 mailfilter.svbin
-rwxrwxr-x 1 dovecot root 41 Jun 5 10:25 rspamd-learn-ham.sh
-rwxrwxr-x 1 dovecot root 42 Jul 8 07:33 rspamd-learn-spam.sh
| Calling Bash script from Sieve script |
1,379,574,688,000 |
I'm working on a Python + Django application and it writes logs.
As I run the app locally, logged in as my user, I would like to enable my user to write logs to /var/logs.
I tried to add my user to the syslog group: sudo usermod -a -G syslog mauro, but it does not work.
I wouldn't like to change path permissions (i.e. chmod 777 /var/logs), so I can use the same set of settings for all environments.
Is there another way to do that, than change path permissions?
|
Assuming a Unix/Linux environment with a filesystem that supports ACLs, you can apply an ACL like this:
sudo setfacl -m u:mauro:rw /var/logs
This utility (setfacl) sets Access Control Lists (ACLs) of files and directories, i.e. it sets what permissions a given user or group has on a particular file or directory.
Refer to the man page setfacl(1).
Also, setfacl has a recursive option (-R), just like chmod. Combined with it, you can use the capital X permission, which means:
execute only if the file is a directory or already has
execute permission for some user (X)
So the new command will be:
setfacl -R -m u:mauro:rwX /var/logs
| Permission to write to log |
1,379,574,688,000 |
I have a git (actually git-annex) repository I'm trying to make shared, part of which involves setting the set-group-id bit on several directories. This is on a Debian GNU/Linux Stretch box, on an ext4 filesystem. For some odd reason, chmod g+s DIRECTORY is being ignored (blank lines around chmod block added for readability):
$ stat objects
File: objects
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd06h/64774d Inode: 12353692 Links: 260
Access: (0775/drwxrwxr-x) Uid: ( 1000/ anthony) Gid: ( 1025/git-books)
Access: 2018-07-30 14:43:13.831641743 -0400
Modify: 2018-07-28 14:28:14.970667931 -0400
Change: 2018-07-30 14:46:38.179597449 -0400
Birth: -
$ chmod g+s objects
$ echo $?
0
$ stat objects
File: objects
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd06h/64774d Inode: 12353692 Links: 260
Access: (0775/drwxrwxr-x) Uid: ( 1000/ anthony) Gid: ( 1025/git-books)
Access: 2018-07-30 14:43:13.831641743 -0400
Modify: 2018-07-28 14:28:14.970667931 -0400
Change: 2018-07-30 14:50:43.355539381 -0400
Birth: -
What I've checked so far:
There do not appear to be any weird mount options (e.g., nosuid) that might block it from working. I checked both fstab and /proc/mounts, which shows /dev/mapper/slow-srv /srv ext4 rw,relatime,nobarrier,errors=remount-ro,stripe=384,data=ordered 0 0
There does not appear to be any weird ACL on the directory; to be sure I did setfacl -b objects. Even after doing so, chmod continued to not work.
strace on chmod shows the syscall succeeding, with the sgid bit set: fchmodat(AT_FDCWD, "annex", 02775) = 0
Other directories on the same filesystem have the set-group-id bit set. In fact, I set some earlier in the same session, in a different git-annex repository.
|
It turns out that although I had created that group and added myself to it several days ago, and this was what I thought was a new ssh connection, it wasn't really. Due to using OpenSSH's connection multiplexing feature (ControlMaster/ControlPath/etc.), I was actually logging in on a connection that was ~10 days old, so my session (processes) didn't have the new group set. I confirmed this with id.
After logging in via ssh -o ControlPath=none HOST, id confirms my session has the git-books group, and the chmod g+s works.
As to why that didn't give a permission denied error, it seems the standard requires this behavior for files and permits implementations to ignore the bits:
If the calling process does not have appropriate privileges, and if the group ID of the file does not match the effective group ID or one of the supplementary group IDs and if the file is a regular file, bit S_ISGID (set-group-ID on execution) in the file's mode shall be cleared upon successful return from chmod().
Additional implementation-defined restrictions may cause the S_ISUID and S_ISGID bits in mode to be ignored.
Single Unix Spec v4 2018 edition, chmod. http://pubs.opengroup.org/onlinepubs/9699919799/functions/chmod.html (registration may be required).
Probably, returning an error would merely be sane, not conformant ☹.
| Why is chmod g+s on a directory being ignored? |
1,379,574,688,000 |
My Fedora 27 x64 fails to boot after hard reset. It shows:
Failed to mount POSIX Message Queue File System,
Failed to start Remount and Kernel File Systems,
Failed to mount Kernel Debug File System,
Failed to mount Huge Pages File System [3]
and lots of other failures comes after these.
See https://photos.app.goo.gl/qBUxT40zA2MTLTwO2
In all these cases
Failed at step EXEC spawning /usr/bin/mount: Permission denied
is given as a reason.
How can it be? Doesn't it recognize it's own filesystems?
I have 3 kernels:
vmlinuz-4.14.16-300.fc27.x86_64
vmlinuz-4.15.13-300.fc27.x86_64
vmlinuz-4.15.14-300.fc27.x86_64
no matter which one I try to boot the same happens.
So far I have:
Checked filesystem integrity with fsck. All partitions are clean.
Checked disk health reported by SMART and performed both - short and long tests. Disk is perfectly healthy.
Rebuilt initramfs. Mounted boot, proc, sys, dev in /mnt, chroot and sudo dracut.
Followed suggestions and:
Performed
fsck -f on /dev/mapper/fedora-home, got:
tree extents for i-node 524820 (on level 2) could be narrower. Fix?<y>Y
Allowed to fix this.
And the same for /dev/mapper/fedora-root, /dev/sda1 (boot partition) confirmed they are clean. One more error of the same kind was found for an extra partition for data files.
rpm -V --all | grep -v " [cg] " returned as follows:
.M....... /run/libgpod
..5....T. /var/lib/selinux/targeted/active/commit_num
.......T. /var/lib/selinux/targeted/active/file_contexts
.......T. /var/lib/selinux/targeted/active/homedir_template
S.5....T. /var/lib/selinux/targeted/active/policy.kern
.M.....T. /var/lib/selinux/targeted/active/seusers
.M.....T. /var/lib/selinux/targeted/active/users_extra
.M....... /var/run/pluto
not exists /var/run/abrt
.M....... /var/log/audit
not exists /usr/lib/systemd/system-preset/85-display-manager.preset
S.5....T. /usr/share/icons/Crux/icon-theme.cache
S.5....T. /usr/share/icons/Mist/icon-theme.cache
rpm -V "$(rpm -q --whatprovides /usr/bin/mount)"
.M....G.. g /var/log/lastlog
fixfiles check /usr
libsemanage.semanage_make_sandbox: Error removing old sandbox directory /var/lib/selinux/targeted/tmp. (Read only file system).
genhomedircon: Could not begin transaction: Read only file system
Among many lines similar to to the one below:
Would relabel /usr/src/handbrake/trunk/build/contrib/lib from unconfined_u:object_r:usr_t:s0 to unconfined_u:object_r:lib_t:s0
a few interesting ones are:
Would relabel /usr/sbin/mount.nilfs2 from unconfined_u:object_r:bin_t:s0 to unconfined_u:object_r:mount_exec_t:s0
Would relabel /usr/sbin/umount.nilfs2 from unconfined_u:object_r:bin_t:s0 to unconfined_u:object_r:mount_exec_t:s0
Would relabel /usr/sbin/mkfs.nilfs2 from unconfined_u:object_r:bin_t:s0 to unconfined_u:object_r:fsadm_exec_t:s0
Proved RAM works fine - memtest86 didn't find any errors during 3.5 passes and over 8h of test time.
9. Disabled SELinux (SELINUX=disabled in /etc/selinux/config) and restarted. The system started without any error! This proves the problem is in the SELinux policies. I believe I should start with checking those 6 top SELinux policies that have been changed somehow (see p. 5). The question is how to do it wisely.
Checked local modifications to SELinux config files and file_contexts:
semanage module -C -l
Module name Priority Language
semanage fcontext -C -l
fcontext SELinux type Context
/usr/bin/mount all files system_u:object_r:samba_share_t:s0
/usr/share/dnfdaemon/dnfdaemon-system all files system_u:object_r:rpm_exec_t:s0
/var/run/media/przemek/extra(/.*)? all files system_u:object_r:samba_share_t:s0
/var/www/html/photo all files system_u:object_r:httpd_sys_rw_content_t:s0
/var/www/html/photo/_cache all files system_u:object_r:httpd_sys_rw_content_t:s0
/var/www/html/photo/config all files system_u:object_r:httpd_sys_rw_content_t:s0
/var/www/html/photo/content all files system_u:object_r:httpd_sys_rw_content_t:s0
/var/www/html/photo/content/folders.json all files system_u:object_r:httpd_sys_rw_content_t:s0
/var/www/html/photo/iv-config/language all files system_u:object_r:httpd_sys_rw_content_t:s0
Interestingly, the fcontext of /usr/bin/mount has changed.
The system runs 24h/day as a simple home server (www, mail, etc.).
From time to time (say once a few weeks) it freezes completely. HDD keeps writing something (repetitive, although irregular sound). No reaction to keyboard, mouse, remote SSH access. Many times I have tried to leave it overnight, but it does not recover, so I am forced to hard reset it each time this happens. This time I haven't waited, but hard reset it after just a few minutes. Unfortunately since then it cannot boot.
I remembered that a minute or less before the system froze, a Firefox message box appeared telling me that some script had become unresponsive. I don't remember my choice (kill it/wait).
Hardware: Gigabyte GB-BACE-3160 Brix PC with Hitachi HTS725032A9A364 2.5" HDD and 4GB LPDDR3 RAM (default clock).
More details [here]
|
The problem was caused by improper file context of the /usr/bin/mount file: samba_share_t.
The file context change wasn't caused by some error due to hard reset, but... by my imprudent decision to follow the first suggestion of SELinux Alert Browser. See the screenshot below.
This first suggestion was to change /usr/bin/mount file context to samba_share_t to allow smbd to access getattr.
The solution was:
to delete invalid file context, restore default and relabel the file:
[root@atlas ~]# ls -Z /usr/bin/mount
system_u:object_r:samba_share_t:s0 /usr/bin/mount
[root@atlas ~]# semanage fcontext -d /usr/bin/mount
[root@atlas ~]# restorecon -v /usr/bin/mount
Relabeled /usr/bin/mount from system_u:object_r:samba_share_t:s0 to system_u:object_r:mount_exec_t:s0
[root@atlas ~]# ls -Z /usr/bin/mount
system_u:object_r:mount_exec_t:s0 /usr/bin/mount
reboot system.
It could be done in emergency console, but I have used the console to put SELinux into permissive mode, boot system and then change the file context as described above.
When I checked the modified SELinux contexts of files (see p. 10 of my initial post), I noticed that the context of mount looked suspicious. At that moment I realized that shortly before the problem started I had imprudently followed the first suggestion of the SELinux Alert Browser to change the mount file context.
The same suggestion appeared now, after system repair and restart, so I was able to attach the screenshot below.
Credit to @sourcejedi for pointing out that SELinux may be causing the problem and for his kind help!
| What causes permission to be denied for mounting rootfs, home, messeage queue, kernel file system, during boot? |
1,379,574,688,000 |
I have what I think is a very normal, very typical use case. I am surprised it seems (so far) that there is not a solution. I assume I am overlooking something obvious. Many people must need a solution for this use case.
Up to five different users in the same group have login accounts on one computer. They do not all log in at the same time.
Update 1: The documents are text files and spreadsheets that reside on our local server. We don't want to host documents on Google or any outside server.
We do not need real-time collaboration.
These users in the group "team" need to collaborate by reading and writing the files in a shared directory. The directory containing these files resides on a local server. The shared directory can be mounted on the client by standard Linux methods. We could use ACLs if needed because BTRFS has built-in support.
All users can log into the server via SSH using keys. The user and group ID's are the same on client and server. None of the users have sudo permissions or any other special permissions. All they have in common is membership in group "team".
The shared directory is not under any user's home directory. It is owned by the same group "team" with rwx permissions and it's path is fully accessible to all the users in "team". We can change permissions as required, but no users outside of "team" group shall be able to read or write files in this directory.
The client and server both run Arch Linux and both also run BTRFS.
We tried NFS for around ten years and we had many permissions / access problems. One of our top support issues was resolving permission problems for users. We decided to switch away from NFS because we never found a good solution for the permissions problems.
We switched to SSHFS because "we could just use normal file system permissions". So far we have not been able to achieve our simple goal stated above with SSHFS. See here and here.
We don't hear a lot of good things about Samba, so we never tried it. What else is there?
This seems to be such a common use case. How would it normally be resolved?
We don't even have a complex case. For example, all machines (servers and clients) in our network run Linux. And all machines are on a local LAN. It's simple. But I have not found a solution that will work.
|
It's actually fairly easy. Most people seem to think that they need to set permissions of all files under a given directory, but this is not true. Just the toplevel directory will do:
chown :team /path/to/dir
chmod 2770 /path/to/dir
First we set the group owner to the team group, which you say already exists and contains the people who should be able to access the correct directory. Next, we set the permissions to "set group id" on the directory (so any files created below that directory will be owned by the team group, too; not strictly necessary, but I find it to be a useful reminder), and give full permissions to anyone in the team group as well as the directory owner. Anyone not in that group will have others permission, which is empty here.
The result of this setup will be that for anyone not in the team group, even doing an ls on that directory will get Permission denied. People who are a member of the correct group, however, will be able to read and write in that directory, provided no files are individually set to file permissions which are wrong. If no such files should be protected against writing, correct the permissions now:
chmod -R g+rw /path/to/dir
find /path/to/dir -type d -print0 | xargs -0 chmod g+x
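The setgid inheritance can be seen on a scratch directory standing in for /path/to/dir (the group is simply the current user's own group here, since creating a real "team" group would need root):

```shell
umask 022
dir=$(mktemp -d)
chmod 2770 "$dir"                # rwxrws--- : setgid set on the directory

mkdir "$dir/sub"                 # a new subdirectory inherits the setgid bit
perms=$(stat -c '%a' "$dir/sub")
echo "subdirectory mode: $perms" # 2755: the leading 2 is the inherited setgid

rm -rf "$dir"
```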
In the absence of ACLs, setting the permissions for future (i.e., newly-created) files is not something you can do by creatively setting permission bits; users need to set their umask. You can do this by way of a line in /etc/profile:
umask 002
It's important to set that, because the default on many distributions will be 022 (allowing read, but not write, permissions to group members), 077 (allowing nothing to other users), or 027 (allowing read, but not write, to group; and nothing to others). None of those options are what you want.
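A small sketch of the difference the umask makes for newly created files (the file name is made up):

```shell
old=$(umask)
umask 002                        # group members get write on new files
d=$(mktemp -d)
: > "$d/shared.txt"              # plain creation: 0666 & ~002 = 0664
perms=$(stat -c '%a' "$d/shared.txt")
echo "mode under umask 002: $perms"
umask "$old"
rm -rf "$d"
```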
If you have different requirements for different parts of your filesystem, or you want to ensure that people don't mess things up by fiddling with their umask individually, you can use default ACLs instead:
find /path/to/dir -type d -print0 | xargs -0 setfacl -m d:g:team:rwx -m d:o:---
Once you've set it all up, it really does not matter whether you use SSHFS or NFS. If it still does not work, however, it's probably best to come back with more specific questions.
| How to let users collaborate by editing files in a shared directory |
1,379,574,688,000 |
I can't manage to set x bit to created file.
archemar@foobar:~/D> echo echo hello world > v.sh
archemar@foobar:~/D> ls -l v.sh
-rw-rw-r--+ 1 archemar group1 17 Apr 12 08:12 v.sh
no x-bit, let's look at acl
archemar@foobar:~/D> getfacl v.sh
# file: v.sh
# owner: archemar
# group: group1
user::rw-
group::rwx #effective:rw-
group:group1:rwx #effective:rw-
mask::rw-
other::r--
group1 is rwx in acl !!
let's look at acl for local dir
archemar@foobar:~/D> getfacl .
# file: .
# owner: FTP_D_adm
# group: admin
user::rwx
group::rwx
group:group2:rwx
group:admin:rwx
group:group1:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:group1:rwx
default:mask::rwx
default:other::r-x
I am part of group1:
archemar@foobar:~/D> id
uid=1001(archemar) gid=1001(group1) groups=1001(group1),16(dialout),33(video)
let's try to execute
archemar@foobar:~/D> ./v.sh
-bash: ./v.sh: Permission denied
Setting g+x by hand is trivial, but the real files will come through FTP. Is there a way to have the x bit set automatically?
OS is suse 11.4, directory is NFS 3 mounted, ACL is set on filesystem.
|
This has been answered peripherally in these two questions:
How does umask affect ACLs?
https://superuser.com/questions/180545/setting-differing-acls-on-directories-and-files
The relevant bits are generally from man setfacl:
The perms field is a combination of characters that indicate the permissions: read (r), write (w), execute (x), execute only if the file is
a directory or already has execute permission for some user (X).
Alternatively, the perms field can be an octal digit (0-7).
(Emphasis mine)
The relevant section from the first question in the anwser by @slm♦ is the following:
To summarize
Files won't get execute permission (masking or effective). Doesn't matter which method we use: ACL, umask, or mask & ACL.
Directories can get execute permissions, but it depends on how the masking field is set.
The only way to set execute permissions for a file which is under ACL permissions is to manually set them using chmod.
Which basically means that it seems you cannot do what you want to do with ACL, since very few programs actually explicitly say that they want to create an executable file.
| setting 'x' (executable) bit using ACL |
1,379,574,688,000 |
Are there any documents, which provide a reason for my /root being marked as not writeable by its owner? (r-xr-x---)
I am aware that its owner would often have write access anyway, by virtue of CAP_DAC_OVERRIDE. However it still surprised me to see this. So I'm curious whether there is anything I can learn from it!
Debian's approach looks more natural to my eyes. On Debian, the permission is rwx------.
$ rpm -q --whatprovides /root
filesystem-3.2-37.fc24.x86_64
$ sudo dnf info filesystem | grep Release
Release : 37.fc24
$ grep ^VERSION= /etc/os-release
VERSION="25 (Workstation Edition)"
|
This was changed in Fedora around 2009. Source: https://bugzilla.redhat.com/show_bug.cgi?id=517575
Credit to @jordanm for pointing this out. I have attempted to copy the relevant quotes. Disclaimer: I'm sure this rendering has lost something in the process.
The changes take away write permissions for root so that you also need DAC_OVERRIDE in order to write. We then dropped capabilities on things that needed to be root, but are network facing, or setuid.
Critical response
Anyways, this was a well-intentioned idea, but in reality it won't work without significant further work because a process with uid 0 but not CAP_DAC_OVERRIDE is still perfectly capable of rewriting e.g. /usr/bin/bash which still has u+w, or /root/.bashrc for that matter. The answer to this sort of thing is SELinux. Any objections to a patch to revert back to mode 755 for directories?
Answer from the author:
What problem does [your software] have? If its trying to write to system directories, it should have a problem.
Reply:
It's not a big deal, the code to effectively revert it in rpm-ostree is small and shouldn't be hard to carry over time.
I just wanted to cross-link the bugs so that anyone else who hit this can see the change we did in rpm-ostree.
Third party interjection: It's about kludges that are needed in any tool of the class to cope with this.
https://github.com/projectatomic/rpm-ostree/pull/335
Link to the Fedora bug that introduced this, and also change things so
it's also used for the "compose" case because:
Again it doesn't add security
Tools that operate on "compose" repos have to work around this
when doing checkouts, see e.g. https://lists.freedesktop.org/archives/xdg-app/2016-June/000241.html
| Why does Fedora create /root with permissions `r-xr-x---`? |
1,379,574,688,000 |
An Ubuntu 16.04 udev rule is defined:
target='SUBSYSTEMS=="usb", ATTRS{product}=="Metrologic Scanner", GROUP:="username"'
Command to append a rule to test udev file fails:
sudo echo $target > /etc/udev/rules.d/test.txt
What must be done to overcome the response \ error:
bash: /etc/udev/rules.d/test.txt: Permission denied
Examples and explanations are highly appreciated: thank you
|
You could use this instead and it will work
echo "$target" | sudo tee --append /etc/udev/rules.d/test.txt
The tee command with the --append (short: -a) option appends the echoed string to the specified file; nothing is overwritten. tee also writes to STDOUT, which can be redirected to /dev/null if desired.
Another way to do this is
sudo bash -c "echo '$target' > /etc/udev/rules.d/test.txt"
(note the outer double quotes, so that your own shell expands $target before sudo runs; with single quotes the root shell would see an empty, unexported variable) but I do recommend sticking with the first example, because echo "$target" will be run without root privileges
| Permission denied: writing a udev rule to to a test file in /etc/udev/rules.d/ [duplicate] |
1,459,725,470,000 |
I am making a installer bash, playing with it a little too. But I just came across a serious problem: As I run the bash with sudo ./install.sh, all the files it copies are owned by root and therefore read-only for others.
This makes installed program rather useless. In my case, the installed program is Tomcat web application, meaning tomcat will not be able to use it.
Therefore the question:
Is sudo ./install.sh the right way installation batches should work?
yes: In that case, how do I properly use cp command to ensure the files belong to a) issuer b) specific user. Or do I need other command?
no: In this case, how do I properly perform administrative tasks (such as apt-get install ...) from the batch?
|
Copy the files using the install command. It can set owner and permissions. From the man page:
-g, --group=GROUP
set group ownership, instead of process' current group
-m, --mode=MODE
set permission mode (as in chmod), instead of rwxr-xr-x
-o, --owner=OWNER
set ownership (super-user only)
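A short sketch of the idea (the tomcat owner/group, file names and destination path are illustrative assumptions, not from the question):

```shell
# copy a file and set its mode in a single step; -o/-g would additionally
# set owner and group, but those flags require root
mkdir -p /tmp/demo_dest
printf 'payload\n' > /tmp/demo_src.txt
install -m 644 /tmp/demo_src.txt /tmp/demo_dest/demo.txt
# as root, e.g.: install -o tomcat -g tomcat -m 644 app.war /var/lib/tomcat/webapps/
ls -l /tmp/demo_dest/demo.txt
```

Unlike cp followed by chown/chmod, there is no window in which the copied file exists with the wrong permissions.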
| How to ensure proper file permissions when running installer bash as sudo? |
1,459,725,470,000 |
I wrote the following simple linux kernel module to test the param feature:
#include<linux/module.h>
int a = 5;
module_param(a, int, S_IRUGO);
int f1(void){
printk(KERN_ALERT "hello world\n");
printk(KERN_ALERT " value passed: %d \n", a);
return 0;
}
void f2(void){
printk(KERN_ALERT "value of parameter a now is: %d \n", a);
printk(KERN_ALERT "bye bye qworld\n");
}
module_init(f1);
module_exit(f2);
MODULE_AUTHOR("l");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("experimanting with parameters");
Now when I try to echo a value to it, I get the "Permission Denied" error, as expected:
[root@localhost param]# insmod p.ko
[root@localhost param]# dmesg -c
[ 7247.734491] hello world
[ 7247.734498] value passed: 5
[root@localhost param]# echo 32 >/sys/module/
Display all 145 possibilities? (y or n)
[root@localhost param]# echo 32 >/sys/module/p/parameters/a
bash: /sys/module/p/parameters/a: Permission denied
So far so good.
However, I can write to the file a using vim.
It does try to warn me with the following messages at the status line:
"/sys/module/p/parameters/a"
"/sys/module/p/parameters/a" E667: Fsync failed
WARNING: Original file may be lost or damaged
don't quit the editor until the file is successfully written!
Press ENTER or type command to continue
But I force the write with ! and get out of vim, and to my surprise the value of the parameter is re-written!
[root@localhost param]# vim /sys/module/p/parameters/a
[root@localhost param]# cat /sys/module/p/parameters/a
32
(Original value was 5 and I wrote 32 using vim).
Not only that, the value of the parameter in the module is changed as well!!:
[root@localhost param]# rmmod p.ko
[root@localhost param]# dmesg -c
[ 7616.109704] value of parameter a now is: 32
[ 7616.109709] bye bye qworld
[root@localhost param]#
What does this mean? READ Only permissions can just be overruled by a userland application like vim? What is the use of permission bits then..?
|
The /sys (sysfs) filesystem is somewhat special; many operations are not possible, for example creating or removing a file. Changing the permissions and ownership of a file or setting an ACL is permitted; that allows the system administrator to allow certain users or groups to access certain kernel entry points.
There is no special case that restricts a file that's initially read-only for everyone from being changed to being writable for some. That's what Vim does when it is thwarted in its initial attempt to save.
The permissions are the only thing that prevent the file from being written. Thus, if they're changed, the file content changes, which for a module parameter changes the parameter value inside the module.
Normally this doesn't have any security implication since only root can change the permissions on the file and root can change the value through /dev/kmem or by loading another module. It's something to keep in mind if root is restricted from loading modules or accessing physical memory directly by a security framework such as SELinux; the security framework needs to be configured to forbid problematic permission changes under /sys. If a user is given ownership of the file, they'll be able to change the permissions; to avoid this, if a specific user needs to have permission to read a parameter, don't chown the file to that user, but set an ACL (setfacl -m u:alice:r /sys/…).
| Why am I able to write a module parameter with READ ONLY permissions? |
1,459,725,470,000 |
I have a program that runs a command that is something like this:
/home/myuser/bin>> /usr/bin/sudo -u otheruser script.py /home/otheruser/file.txt
This works, but now I need this to work when the program runs from different locations, so I changed it to use the full path:
/home/myuser/bin>> /usr/bin/sudo -u otheruser /home/myuser/bin/script.py /home/otheruser/file.txt
That results in:
can't open file '/home/myuser/bin/runmacroscript.py': [Errno 13] Permission denied
It's the same file, so why does a full path make a difference?
|
Your otheruser cannot access /home/myuser/bin/runmacroscript.py. The directory permissions on either or both of /home/myuser or /home/myuser/bin are too restrictive.
The reason it works when you are already in the /home/myuser/bin directory is that otheruser doesn't have to traverse the directory tree to get there.
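One quick way to locate the blocking component is namei from util-linux, which lists the permissions of every directory along a path. A sketch reproducing the situation in a temporary directory (the myuser/bin layout mirrors the question):

```shell
# a file can be unreachable via its full path when a parent directory
# lacks the execute (search) bit for the accessing user
d=$(mktemp -d)
mkdir -p "$d/myuser/bin"
printf 'print("hi")\n' > "$d/myuser/bin/script.py"
chmod 700 "$d/myuser"                 # other users cannot traverse this
namei -l "$d/myuser/bin/script.py"    # shows perms of each path component
```

Any component without x for otheruser along the real path is the culprit.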
| Why can't another user access my file when it's a full path? |
1,459,725,470,000 |
Loop devices, e.g. for mounting raw disk images, can be managed without root privileges using udisks.
For testing purposes, an image can be created and formatted like so:
dd if=/dev/urandom of=img.img bs=1M count=16
mkfs.ext4 img.img
And then setup using udisks
udisksctl loop-setup -f img.img
This creates a loop device for the image and mounts it to a new directory under /run/$USER, just like any local hard drive managed by udisks. Only the permissions are not what I expected.
# ls -l /run/media/$USER/
drwxr-xr-x 3 root root 1024 Apr 10 11:19 [some id]
drwx------ 1 auser auser 12288 Oct 30 2012 [a device label]
The first one listed is the loop device, owned by root and not writable by anybody else. The second one is a local hard drive or an USB pen device mounted for comparison, belonging to the user who mounted it. I know that I could fix this with a simple chmod executed as root.
But why does udisks assign different permissions and owners? Can it be configured to do otherwise?
|
I had a detailed look into the udisks2 source code and found the solution there.
The devices correctly mounted under user permissions were formatted with old filesystems, like FAT. These accept uid= and gid= mount options to set the owner. Udisks automatically sets these options to the user and group id of the user that issued the mount request.
Modern filesystems, like the ext series, do not have such options but instead remember the owner and mode of the root node. So chown auser /run/media/auser/[some id] indeed works persistently. An alternative is passing -E root_owner[=uid:gid] to mkfs.ext4, which sets the uid and gid of the newly created filesystem's root directory (defaulting to those of its creator).
| Mount image user-readable with udisks2 |
1,459,725,470,000 |
I have set up two user groups, students and faculty on Ubuntu 12.04 and created a number of students and faculty accounts. The problem is a student can currently see & read all of the files of a fellow student :-/
I would like to prevent students from seeing/reading each others directories/files, but permit someone in the faculty group freely access to the student groups.
I'm not sure how to go about this, can anyone offer pointers on
how to implement this policy? I know how to set/change groups, but not
how to limit the policy to what they can do/see. (I've been a Linux
user for a while, but administering more than my own account is new to
me)
Also, would I have to change the umask for all student accounts to make sure this policy doesn't get circumvented with new files/directories students create subsequently?
Would I as root execute chmod go-rx /home/* on each student homedirectory to accomplish this goal, or am I going about this the wrong way?
UPDATE: Just to clarify, my goal is to have this as a default setup, I don't expect I can prevent informed/curious students from changing their own permissions - and I'm willing to live with that.
|
I think I would attempt to do this using ACLs as well. The only other method I can conceive of doing this would be as follows.
Create 2 groups students & faculty
Each user's home dir. would be like this:
drwxrws---. 253 student1 faculty 32768 Nov 29 16:39 student1
This would allow anyone in the faculty group access to student1's directory, but no one else, except the owner, student1.
chown -R student1.faculty /home/student1
find /home/student1 -type d -exec chmod ug+rwx,g+s,o-rwx {} +
The trouble with this approach is that it can be a bit fragile if the owner were to mess with the group ownership, or were to mv files into this directory. Only newly created files/directories would inherit the group ownership + setgid bit.
This setup requires that all the preexisting files/directories under /home need to be adjusted using steps #3 and #4 above.
ACLs
As I said above, I think I would still do this using ACLs. I would consult this tutorial on ACLs, titled: Using ACLs with Fedora Core 2 (Linux Kernel 2.6.5). The title makes it sound dated but the commands are still relevant.
| restrict student account permissions, give faculty access |
1,459,725,470,000 |
I need /var/www/config/config.json to be read by my app but not by users calling myapp.com/config/config.json. How would I do that?
|
There are two main methods. One is to run your app as a different user from your web server, change the permissions on files so that they can only be read by the user running the app, and then proxy to it using your web server. You can use nginx as a proxy, for example:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:1234;
proxy_set_header X-Real-IP $remote_addr;
}
}
If that's no good for you, you could instead set up a rule like this to have your web server block such requests directly:
location ~ /config\.json$ {
deny all;
}
This can also be done with Apache using something like this in your .htaccess:
<Files ~ "config\.json$">
Order allow,deny
Deny from all
</Files>
| Is there a Linux file permission that allows the app to read a file but not a user? |
1,459,725,470,000 |
I have a temp directory set up where users can place whatever files they need to send to other users via HTTP. The owner of this directory is an SFTP user, and cannot run cron jobs.
I have one shell user that can run cron jobs, but does not have permission to edit files in the SFTP user's directory.
I have an admin user that can access the SFTP user's directory when using sudo, but can't (read: I'd really rather not) run cron jobs.
So, here's the conundrum. How do I get a nightly cron job to run as a shell user to delete files older than 1 week within the SFTP user's directory, with the admin user's privileges?
|
Edit the /etc/sudoers file (use visudo!) and add an entry that allows the shell user to have sufficient privileges to run a specific command, without having to enter a password. If you use a script, make sure the script cannot by edited by anyone but root.
In /etc/sudoers, where shelluser is the shell user name:
shelluser ALL=NOPASSWD: /usr/bin/clean-up-sftp-temp-directory
In a /usr/bin/clean-up-sftp-temp-directory script, you can put something like:
#!/bin/sh
rm -f /home/sftpuser/will-be-deleted/*
After making the script executable, you should be able to call sudo clean-up-sftp-temp-directory and add it to the shell user's crontab.
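Since the stated goal is deleting only files older than one week, the script body could use find's -mtime test rather than a blanket rm (a sketch; the hard-coded question path is shown in a comment):

```shell
#!/bin/sh
# in the real cron script the path would be fixed, e.g.:
# drop_dir=/home/sftpuser/will-be-deleted
drop_dir=${DROP_DIR:-$(mktemp -d)}
# delete regular files not modified within the last 7 days
find "$drop_dir" -type f -mtime +7 -delete
```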
| Emptying a directory owned by another user weekly |
1,459,725,470,000 |
On campus, everyone's primary group is user and each person is additionally associated to groups depending on the courses he or she is taking, lab he or she works at, etc.
My coworker and I are members of group foo, so we use newgrp foo and umask 7 to ensure our files are accessible to the two of us without granting everyone permission. Neither of us minded this.
However, we now need our PATH environment variable to first point to our lab's bin folder before the rest in the PATH. We thought a simple script would work, but it doesn't as the PATH's contents don't persist after executing newgrp:
#!/bin/tcsh
setenv PATH "/path/to/lab/bin:$PATH"
newgrp foo
The default shell is tcsh. Does anyone have any suggestions?
Thanks!
|
Unless you need to type a password when you run newgrp (a very rarely used feature), you don't need to use newgrp to make files owned by the appropriate group. You can use chmod instead. For example, instead of the following workflow:
newgrp lab1
mkdir project1
$EDITOR project1/file1
you can do this:
mkdir project1
chgrp lab1 project1
$EDITOR project1/file1
chgrp lab1 project1/file1
On most current unices, project1/file1 will either already belong to lab1 like the directory that contains it (*BSD), or you can force this behavior (Linux, Solaris, …):
mkdir project1
chgrp lab1 project1
chmod g+s project1
$EDITOR project1/file1
All of this requires that your umask be set to 002 or 007.
It's easier to manage permissions if access control lists (ACL) are supported. ACL support must be present in the disk filesystem driver and enabled in the mount options, and again for the network filesystem if applicable. ACLs support is not yet generalized, so you might not have it.
To see if you can use ACLs, on a Linux client, try running
touch foo
setfacl -m user:myfriend:rwx foo
ls -l foo
If the permissions of foo show up as -rw-rw-r--+ or similar (with a + at the end), ACLs are enabled. If the setfacl utility isn't available, then your campus network probably doesn't have ACLs all around.
If you do have ACLs, then you don't need to have a permissive umask, you can stick with 022 or 077. With ACLs, to set up a group-writable directory (where newly created files will be writable by the group as well), do
mkdir project1
setfacl -m group:lab1:rwx project1; setfacl -d -m group:lab1:rwx project1
In addition to not requiring a permissive umask, ACLs let you share files between an arbitrary set of users and groups.
| Changing group and retaining environment variables |
1,459,725,470,000 |
I am writing an application on an embedded Linux device that sets the Timezone as a non-root user. To boil things down to the essence, I want to execute this:
dbus-send --system --dest=org.freedesktop.timedate1 --print-reply /org/freedesktop/timedate1 \
org.freedesktop.timedate1.SetTimezone string:'America/New_York' boolean:false
But when I do that, I always get this error:
Error org.freedesktop.DBus.Error.AccessDenied: Permission denied
Other info
The device is running with SELinux disabled and polkit is not installed or running.
I have a d-bus configuration file /etc/dbus-1/system.d/org.freedesktop.timedate1.conf as follows:
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
<policy user="root">
<allow own="org.freedesktop.timedate1"/>
<allow send_destination="org.freedesktop.timedate1"/>
<allow receive_sender="org.freedesktop.timedate1"/>
</policy>
<policy user="ceres">
<allow own="org.freedesktop.timedate1"/>
<allow send_destination="org.freedesktop.timedate1"/>
<allow receive_sender="org.freedesktop.timedate1"/>
</policy>
<policy context="default">
<allow send_destination="org.freedesktop.timedate1"/>
<allow receive_sender="org.freedesktop.timedate1"/>
</policy>
</busconfig>
It is the ceres user that I want to grant the ability to change the timezone.
My questions
How do I enable a specific non-root user to set the Timezone via this D-Bus interface?
More generally, how do I determine the cause for a "Permission denied" error?
journalctl --follow shows nothing when this happens. Where should I look instead?
|
You can't do it with dbus configuration only, as Lennart explains in the link below...
You'd need either polkit or (as suggested in commments) sudo timedatectl...
See this bug report:
dbus methods: fall back to checking Linux capabilities when compiled without PolKit
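If polkit is installed, a rules file could grant just this one action to the ceres user. This is a sketch, not a tested configuration: the file name is arbitrary, and org.freedesktop.timedate1.set-timezone is the polkit action id registered by systemd-timedated:

```javascript
// /etc/polkit-1/rules.d/50-ceres-timezone.rules (hypothetical file name)
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.timedate1.set-timezone" &&
        subject.user == "ceres") {
        return polkit.Result.YES;
    }
});
```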
| How do I allow a non-root user to set the timezone via d-bus? |
1,459,725,470,000 |
I am running Kali Linux 2020.2 on a VirtualBox VM; I am trying to change my password from the default, using the command:
sudo passwd ollie
For user ollie (me). It prompts me for the new password, and then to retype it. Both times I entered the correct password; it says:
passwd: password successfully changed!
But then later, when doing an update with the sudo command, it prompts me for the password. I try the new one: incorrect, it says. Then I typed the default one, and it continues! I have tried quite a few things from online, but they all result in this. How can I permanently change my password without it resetting every reboot (it does)? Are there certain files I should edit?
|
I had the same issue long time ago with Kali, and I used to successfully change a user password from root instead of using sudo:
$ sudo -i
# passwd ollie
| Cannot change password for user |
1,459,725,470,000 |
How can I change the ownership of a directory with nobody:nogroup?
Everything I tried ended up with "operation not permitted".
cat /etc/debian_version
10.2
root@torrent:/srv# chown -R rtorrent:rtorrent rtorrent
chown: cannot read directory 'rtorrent/.local/share': Permission denied
chown: changing ownership of 'rtorrent/.local': Operation not permitted
chown: changing ownership of 'rtorrent/.bash_history': Operation not permitted
chown: changing ownership of 'rtorrent/session/rtorrent.dht_cache': Operation not permitted
chown: changing ownership of 'rtorrent/session': Operation not permitted
chown: changing ownership of 'rtorrent/.rtorrent.rc': Operation not permitted
chown: changing ownership of 'rtorrent/download': Operation not permitted
chown: changing ownership of 'rtorrent/watch': Operation not permitted
chown: changing ownership of 'rtorrent': Operation not permitted
rm -r download/
rm: cannot remove 'download/': Permission denied
root@torrent:/srv/rtorrent# ls -al
total 32
drwxr-xr-x 6 nobody nogroup 4096 Jan 24 18:16 .
drwxr-xr-x 3 root root 4096 Jan 24 16:46 ..
-rw------- 1 nobody nogroup 47 Jan 24 18:16 .bash_history
drwxr-xr-x 3 nobody nogroup 4096 Jan 24 18:15 .local
-rw-r--r-- 1 nobody nogroup 3224 Jan 24 18:16 .rtorrent.rc
drwxr-xr-x 2 nobody nogroup 4096 Jan 24 16:46 download
drwxr-xr-x 2 nobody nogroup 4096 Jan 24 18:21 session
drwxr-xr-x 2 nobody nogroup 4096 Jan 24 16:46 watch
|
Your container appears to run as a user (rootless) container, built on user namespaces.
In order to work, user containers have an associated uid/gid mapping to convert host uid/gid to container uid/gid. The overall host range for these is 2^32 wide (starting with 0 being real root user). From this, the allocated range to the container is usually kept at 2^16 (which is compatible with historical uid ranges).
Any host uid which has no range translation to an uid inside the container will appear as nobody (resp: nogroup for gid) inside the user container. As this container's root has no rights over such an uid, it can't alter it and the operation fails as when run by a normal user.
Here's a link from Proxmox describing your problem:
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
However you will soon realise that every file and directory will be
mapped to "nobody" (uid 65534), which is fine as long as
you do not have restricted permissions set (only group / user readable files, or accessed directories), and
you do not want to write files using a specific uid/gid, since all files will be created using the high-mapped (100000+) uids.
There are tools dedicated to translating those ranges, so a prepared system tree layout can be shifted into a range suitable for the target container. Those tools must be run from the host (or, in the case of "recursive" containers, from the container having "spawned" the user namespace). For example:
https://github.com/jirutka/uidmapshift
which is a reimplementation of apparently defunct project nsexec's uidmapshift:
https://github.com/fcicq/nsexec
You can of course do this manually by calculating the right target uid:gid and using chown (from host). If there's one value and a simple mapping it should be easy. Here's an example (using a running user LXC container):
Container (called buster-amd64):
user@buster-amd64:~$ ls -n test
-rw-r--r--. 1 65534 65534 0 Jan 24 21:09 test
root@buster-amd64:/home/user# chown user:user test
chown: changing ownership of 'test': Operation not permitted
Host (displaying same file):
user@host:~$ ls -n ~/.local/share/lxc/buster-amd64/rootfs/home/user/test
-rw-r--r--. 1 1000 1000 0 Jan 24 22:09 /home/user/.local/share/lxc/buster-amd64/rootfs/home/user/test
The command below gets the pid of the container's init process (which is 1 inside the container; here we get the pid as seen on the host). Any other process of the container would work as well:
user@host:~$ lxc-info -Hpn buster-amd64
22926
user@host:~$ cat /proc/22926/uid_map
0 1410720 65536
This mapping should have been defined in the LXC configuration:
user@host:~$ grep lxc.idmap ~/.local/share/lxc/buster-amd64/config
lxc.idmap = u 0 1410720 65536
lxc.idmap = g 0 1410720 65536
If the user container's uid is 1000 and the file/directory should belong to this user, then the new host's uid should be 1410720 + 1000 = 1411720
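The mapping arithmetic from above can be sketched in shell, using the numbers from this example:

```shell
# host uid = map host start + container uid (for uids inside the map range)
map_host_start=1410720   # second field of /proc/<pid>/uid_map
container_uid=1000
host_uid=$((map_host_start + container_uid))
echo "$host_uid"   # -> 1411720
```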
On the host, this time as (real) root user:
root@host:~# chown 1411720:1411720 ~user/.local/share/lxc/buster-amd64/rootfs/home/user/test
In case the container's filesystem(s) is not directly mounted somewhere on an host's filesystem (eg: using LVM backing store or tmpfs mount) and thus not reachable, this works too with a running container (and should probably be preferred anyway):
root@host:~# chown 1411720:1411720 /proc/22926/root/home/user/test
And now on the container:
user@buster-amd64:~$ ls -n test
-rw-r--r--. 1 1000 1000 0 Jan 24 21:09 test
And its root user now has rights over this file, because it's in the correct uid/gid mapping.
root@buster-amd64:~# chown root:root ~user/test
root@buster-amd64:~#
There is work in progress on the kernel side with a feature called shiftfs which is still changing form to help alleviate these problems by doing this translation over a bind mount.
| Debian change owner of nobody:nogroup |
1,459,725,470,000 |
I was cleaning up groups and permissions on my home system today, and re-familiarized myself with umasks. It seems that the default on my system (Ubuntu 18.10) is 002, but that for root it's 0022.
Ignoring the extra bit for the current purposes, this got me thinking. With a default of 002, any user in the group associated with my username will be able to edit my files. With 022, they won't. Now, I've never made use of this group before (nor root's group for that matter), so I have no idea why one would ever use it, nor which permissions would be appropriate in such a case.
In principle, why would you choose one of these options over the other? For bonus points, why would (this part of) the umask for root be different from ordinary users?
|
With a default of 002, any user in the group associated with my username will be able to edit my files.
The idea isn't that there would be other users in the user's personal group. The idea is that there might be groups for projects or such, and they might have multiple users as members. The group owner of the files can then be the project (not any of the users personally), enforced by setgid on the project directory. With the umask allowing for write access to the group, such files would be writable by all members of the project group.
It works something like this, assuming foo is a user using umask 002:
# [add user foo to group proj]
# mkdir /work/proj
# chmod u=rwx,g=rwx,o=,g+s /work/proj
# ls -ld /work/proj
drwxrws--- 2 root proj 4096 Jun 14 17:48 /work/proj/
$ umask
0002
$ cd /work/proj
$ echo "some data here" > file.txt
$ ls -l file.txt
-rw-rw-r-- 1 foo proj 15 Jun 14 17:51 file.txt
Note how the file was created a) with the group owner proj (because of the setgid on the directory), and b) with write permission to the whole group (because of the umask). Instead of requiring the users to have a proper umask, the directory could be set up with a default ACL, which replaces the function of the umask if set.
There's more on per-user groups and the umask at least here and in Red Hat's manuals (That's the manual for RHEL 4, the newer ones seemed to be briefer on the matter.) Also related is this q: Why does every user have their own group?
Nothing here prevents the users from manually messing up the permissions. It
can't be prevented since the file owner can always change the permissions as they see fit. Nowadays people also more often use network services for collaboration, and such gymnastics with the file and directory permissions aren't necessary.
| Why choose a umask of 002 over 022? |
1,459,725,470,000 |
Say file.txt has the following permissions:
-rwxrw-r-- 1 chris group_b 347 2016-12-25 10:19 file.txt
The rw- are the permissions for the group in which file.txt belongs to.
Now let's say that Process A wants to access file.txt, the euid of Process A is paul (I am using names here instead of actual IDs).
Now (correct me if I'm wrong) each process has egid as well as a list of supplementary group IDs.
For example say that the egid for Process A is group_a and it has two supplementary group IDs, which are group_b and group_c.
This means that the process group permissions for file.txt (rw-) applies to Process A, correct?
Also, if Process A created a new file or a new directory, which group will the newly created file or directory belongs to (Process A has 3 groups), is it the egid?
|
This means that the process group permissions for file.txt (rw-) applies to Process A, correct?
Yes, correct.
Also, if Process A created a new file or a new directory, which group will the newly created file or directory belongs to (Process A has 3 groups), is it the egid?
Assuming that the directory the new file/directory is created into does not have its setgid bit set, yes.
If you want Process A to create its new files and directories with group group_b instead of the default group_a, you could start Process A with command sg group_b Process_A.
Or if a particular directory was designated for collaboration for group_c, then the owner of that directory could have set the group for that directory to group_c and then set the setgid bit for that directory:
mkdir /some/directory
chgrp group_c /some/directory
chmod g+rwxs /some/directory
Now, any new files and sub-directories created in that directory will automatically get their group ownership set to group_c. Any new sub-directories will inherit both the group ownership and the setgid bit, so this behavior will automatically propagate to any sub-directories and sub-sub-directories and so on as soon as they are created, unless the creator explicitly changes the group ownership or permissions.
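The inheritance can be verified quickly in a scratch directory (a sketch that works under any account, so no specific group names are used):

```shell
# new subdirectories of a setgid directory inherit its group and the bit
d=$(mktemp -d)
chmod g+s "$d"
mkdir "$d/sub"
ls -ld "$d/sub"   # mode shows an s (or S) in the group-execute position
```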
| How does group permissions work? |
1,459,725,470,000 |
Say I had to change the permissions of some file in /etc/ssl to allow a program to read a private key file:
$ cd /etc
$ chgrp ssl-cert ssl/private/key.pem
$ chmod g+r ssl/private/key.pem
$ git status
On branch master
nothing to commit, working directory clean
How do I tell etckeeper that some file permissions have changed in order to commit them? I know that the permissions are kept in /etc/.etckeeper, but couldn't find any way to update that file.
|
git itself does not track ownership and permission information, apart from the executable bit. The solution for you is to use etckeeper's own metadata.
Looking into the documentation, we have:
Most VCS, including git, mercurial and bazaar have only limited tracking of file metadata, being able to track the executable bit, but not other permissions or owner info. (darcs doesn't even track executable bits.) So file metadata is stored separately. Among other chores, etckeeper init sets up a pre-commit hook that stores metadata about file owners and permissions into a /etc/.etckeeper file. This metadata is stored in version control along with everything else, and can be applied if the repo should need to be checked back out.
So, the ownership of your directories is kept in /etc/.etckeeper, which is monitored by git as well. ;)
etckeeper commit should solve your problem.
Depending on your scale, I would think about more complex and useful configuration management tools like Salt, Ansible, Puppet, Chef and so on.
| Update and commit changed file permissions in etckeeper |
1,459,725,470,000 |
I am sharing /share/global/usr/share from a server to /usr/share on a client via NFS. When the client writes into it I get a "Read-only filesystem" error.
Server
Filesystem permissions ok:
$> ls -la /share/global/usr/
drwxrwxrwx 2 nobody nogroup 4096 Dec 6 14:37 share
Exports are rw for client IP 192.168.101.250, other internal IPs are ro.
$> grep usr /etc/exports
/share/global/usr/share 192.168.0.0/16(ro,subtree_check,all_squash) 192.168.101.250(rw,subtree_check,all_squash)
Server can write here:
$> echo HELLO > /share/global/usr/share/REMOVEME && chmod 666 /share/global/usr/share/REMOVEME && echo ok
ok
Client
IP address matches (static):
$> ip addr | grep inet
inet 192.168.101.250/24 brd 192.168.101.255 scope global enp0s8
fstab specifies rw:
$> grep usr /etc/fstab
192.168.101.254:/share/global/usr/share /usr/share nfs rsize=8192,wsize=8192,timeo=3,intr,rw
and it's mounted rw:
$> mount | grep usr
192.168.101.254:/share/global/usr/share on /usr/share type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,hard,proto=tcp,port=0,timeo=3,retrans=2,sec=sys,clientaddr=192.168.101.250,local_lock=none,addr=192.168.101.254)
Read is ok:
$> ls -al /usr/share/REMOVEME
-rw-rw-rw-. 1 nfsnobody nfsnobody 7 Dec 6 15:14 /usr/share/REMOVEME
Problem
On client:
$> sudo -i
$> echo foo > /usr/share/REMOVEME
-bash: /usr/share/REMOVEME: Permission denied
I also can't create new files here.
Everything in the configuration looks okay to me. Why can't I write to the shared directory on the client?
Server is Ubuntu 16.04, client is CentOS 7.
|
/etc/exports wants the specific IP addresses to appear first, IP ranges after.
i.e.
/share/global/usr/share 192.168.101.250(rw,subtree_check,all_squash) 192.168.0.0/16(ro,subtree_check,all_squash)
| "Read-only filesystem" on NFS share, permissions, mounts and exports file seem ok |
1,459,725,470,000 |
I'm trying to set up hylafax+ in Fedora, either with the rpm file or from source code. hylafax+ is not prepared for systemd, so I wrote, among others, the following file "/etc/systemd/system/hylafax-faxgetty-ttyACM0.service", which works fine in Ubuntu and openSUSE:
[Unit]
Description=HylaFAX faxgetty for ttyACM0, ...
[Service]
User=root
Group=root
Restart=always
RestartSec=30
ExecStart=/usr/sbin/faxgetty ttyACM0
[Install]
WantedBy=multi-user.target
but gives me the error "Can not setup permissions (uid)" in fedora.
When I run the code:
/usr/sbin/faxgetty -D ttyACM0
manually as root, it seems to run (the process persists).
I found the single location where the error message is produced in the source code of hylafax+ and modified it slightly to be more informative, like this:
faxApp::setupPermissions(void)
{
if (getuid() != 0)
faxApp::fatal("The fax server must run with real uid root.\n");
uid_t euid = geteuid();
const passwd* pwd = getpwnam(FAX_USER);
if (!pwd)
faxApp::fatal("No fax user \"%s\" defined on your system!\n"
"This software is not installed properly!", FAX_USER);
if (euid == 0) {
if (initgroups(pwd->pw_name, pwd->pw_gid) != 0)
faxApp::fatal("Can not setup permissions (supplementary groups)");
if (setegid(pwd->pw_gid) < 0)
faxApp::fatal("Can not setup permissions (gid)");
if (seteuid(pwd->pw_uid) < 0) {
char buf[50];
sprintf(buf,"Perm.for %s %d euid: %d",FAX_USER, pwd->pw_uid, euid);
// faxApp::fatal("Can not setup permissions (uid)");
faxApp::fatal(buf);
}
Now it gives me:
FaxGetty[6359]: Perm.for uucp 10 euid: 0
The respective entries of my password files:
/etc/passwd:
uucp:x:10:10:Facsimile Agent:/var/spool/hylafax:/bin/bash
/etc/group:
uucp:x:10:uucp
Can anybody tell me what might be going wrong?
|
From your output it looks like the following is happening:
Your application is running as root.
It is able to change groups to a lower privileged group
It is attempting to switch to a lower privileged user but failing.
First, seteuid, like a lot of syscalls, sets errno, which will tell you the actual error. It is best to print this out as part of your error message to get the actual reason for failure.
However, it is most likely a permission error. Permission errors where, as far as you can tell, the process should have permission to perform the action (which is strange as root, since root should be able to do anything) are an indication that SELinux (or a similar service like AppArmor) is at work. They are the only services I know of that can block the root user from some action.
The quickest way to tell if SELinux is at fault (I am not that familiar with AppArmor) is to check if it is on (i.e. "enforcing")
sestatus
and then checking for avc denials in the audit log
sudo grep avc /var/log/audit/audit.log
If this returns anything then SELinux is blocking something. You can further prove it is SELinux by temporarily setting it to permissive with the following
sudo setenforce 0
If you are now able to do what you require then it is definitely SELinux. You have two options now: permanently set SELinux to permissive, lowering the security of your system (discouraged), or generate the rules required by your application (e.g. with audit2allow).
| systemd permission issue with user root |
1,459,725,470,000 |
Is it possible to create an ACL to deny access to a specific user (say jdoe) to a specific file?
I'm not interested in the trivial solution of an ACL that gives access to the file to all users except jdoe. This solution has the disadvantage that any user created successively in the system won't have access to the file.
Creating a group of all users except jdoe and granting group access to the file bears the same disadvantage.
The command setfacl -x u:jdoe /path/file won't work as it removes only created ACLs.
|
Sure, to demonstrate, as root...
touch /tmp/test
setfacl -m u:jdoe:--- /tmp/test
getfacl /tmp/test
su - jdoe
cat /tmp/test
exit
rm /tmp/test
It could be done to every file in a directory by default as well:
mkdir /var/data/not-for-jdoe
setfacl -m u:jdoe:--- /var/data/not-for-jdoe
setfacl -d -m u:jdoe:--- /var/data/not-for-jdoe
Above, the -m switch modifies the ACL, and the -d switch makes the entry a default entry, inherited by all new filesystem objects in the directory. The --- is the permission set and can have other values; the symbolic triplets correspond to the usual octal digits:
rwx = 7
r-- = 4
rw- = 6
r-x = 5
The group and other masks work the same way: g:groupname:--- or in combination: u:username:---,g:groupname:---,o::---. Not specifying a username or group name applies the mask to current user/group ownership.
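Those symbolic triplets are the same ones ordinary chmod uses, and they map to the familiar octal digits. A quick sketch with plain chmod on a scratch file (no ACLs involved, just to illustrate the mapping; GNU stat assumed):

```shell
# walk through the symbolic/octal pairs on a throwaway file
f=$(mktemp)
chmod 700 "$f" && stat -c '%A' "$f"   # -rwx------  (rwx = 7)
chmod 400 "$f" && stat -c '%A' "$f"   # -r--------  (r-- = 4)
chmod 600 "$f" && stat -c '%A' "$f"   # -rw-------  (rw- = 6)
chmod 500 "$f" && stat -c '%A' "$f"   # -r-x------  (r-x = 5)
rm -f "$f"
```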
| Is it possible to create a "negative" ACL? |
1,459,725,470,000 |
If I run this script, how do I pass super user permissions to it? I wrote this just to setup new machines with the basics. I don't want to run every command with elevated permissions, but the commands that do have sudo I want to run with them.
How do I have some commands run with sudo and others run as the regular user?
#!/bin/sh
# If Linux, install nodeJS
if [ "$(uname)" = 'Linux' ];
then
export IS_LINUX=1
# Does it have aptitude?
if [ -x "$(which apt-get)" ];
then
export HAS_APT=1
# Install NodeJS
sudo apt-get install --yes nodejs
fi
# Does it have yum?
if [ -x "$(which yum)" ];
then
export HAS_YUM=1
# Install NodeJS
sudo yum install nodejs npm
fi
# Does it have pacman?
if [ -x "$(which pacman)" ];
then
export HAS_PACMAN=1
# Install NodeJS
pacman -S nodejs npm
fi
fi
# If OSx, install Homebrew and NodeJS
if [ "$(uname)" = 'Darwin' ];
then
export IS_MAC=1
if test ! "$(which brew)"
then
echo "================================"
echo " Installing Homebrew for you."
echo "================================"
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
export HAS_BREW=1
elif [ -x "$(which brew)" ];
then
export HAS_BREW=1
brew update
fi
# Install NodeJS
brew install --quiet node
fi
# Does it have python?
if [ -x "$(which python)" ];
then
export HAS_PYTHON=1
if [ -x "$(which pip)" ];
then
pip list --outdated | cut -d ' ' -f1 | xargs -n1 pip install -U
export HAS_PIP=1
fi
fi
# Does it have node package manager?
if [ -x "$(which npm)" ];
then
export HAS_NPM=1
else
echo "NPM install failed, please do manually"
fi
# Does it have ruby gems?
if [ -x "$(which gem)" ];
then
export HAS_GEM=1
fi
The rest of the bash script (that I didn't include for length) installs packages from an array using npm, apt, yum, brew, or pacman, depending on the machine. It only installs simple things like git, wget, etc.
|
The first time sudo is invoked, a password is prompted for. Then, depending on configuration, if it is invoked again within N minutes (default 5 minutes IIRC), one does not need to enter the password again.
You could do something like:
sudo echo >/dev/null || exit 1
or perhaps something like:
sudo -p "Become Super: " printf "" || exit 1
at start of script.
If you want to prevent anyone from doing sudo ./your_script you should check EUID as well (bash):
if [[ $EUID -eq 0 ]]
then
printf "Please run as normal user.\n" >&2
exit 1
fi
or something like:
if [ "$(id -u)" = "0" ]
...
In any case, also check which shell you target, e.g.:
https://wiki.debian.org/DashAsBinSh
https://wiki.ubuntu.com/DashAsBinSh
https://lwn.net/Articles/343924/
etc.
To "keep it alive" one could do something like:
while true; do
sleep 300
sudo -n true
kill -0 "$$" 2>/dev/null || exit
done &
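If you control the machines, an alternative to the keep-alive loop is a sudoers rule that exempts just the package-manager commands from the password prompt. A sketch (edit with visudo; the username and binary paths are assumptions, verify them with which apt-get etc.):

```
# /etc/sudoers.d/pkg-install — edit via: visudo -f /etc/sudoers.d/pkg-install
youruser ALL=(root) NOPASSWD: /usr/bin/apt-get, /usr/bin/yum, /usr/bin/pacman
```

With that in place, sudo apt-get install --yes nodejs in the script runs without prompting, while every other sudo invocation still asks for a password.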
| How do I maintain sudo in a bash script? |
1,459,725,470,000 |
I am trying to change my password as a non-root user:
passwd
The data is updated in /etc/shadow, but checking the permissions I get:
---------- 1 root root 6076 Jan 27 17:14 /etc/shadow
cat /etc/shadow
cat: /etc/shadow: Permission denied
Clearly there are no permissions on the file for anyone,
yet the passwd command succeeds, and I am indirectly updating data in a resource I have no permissions on (the shadow file)!
So can anyone explain the mechanism by which the update takes place in the background?
An explanation with reference to the system calls involved would be very useful.
|
The passwd utility is installed setuid, which means that when it runs, it runs as the user that owns the file, not as the user that called it. In this case, passwd belongs to root, so the setuid bit causes the program to run with root privileges. It is therefore able to make changes to the passwd and shadow files.
If you look at the permissions for the passwd utility, you'll see something like this:
-r-sr-xr-x 2 root wheel 8.2K 19 Jan 17:24 /usr/bin/passwd
This is from my FreeBSD system - what you see will depend on the OS you are using. The s in the owner execute position (4th column) indicates the setuid bit.
For further reference, the syscall is setuid, and is part of the standard C library.
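The setuid bit itself is just a mode bit; you can set and inspect it on any scratch file with chmod u+s (a sketch only — flipping the bit on a plain data file does not make it a working setuid program, the kernel honors the bit on execution):

```shell
f=$(mktemp)
chmod 755 "$f"
chmod u+s "$f"        # set the setuid bit
ls -l "$f"            # owner triplet now shows an s: -rwsr-xr-x
[ -u "$f" ] && echo "setuid bit is set"
rm -f "$f"
```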
| How passwd command from non-root account succeeds |
1,459,725,470,000 |
I'm writing a find command to look for files or folders with broken permissions (files should be rw, directories rwx) and trying to ls -l (give or take) the results.
The following find command looks like it works, but the ls part is giving me trouble.
find . '(' -not -readable ')' -or \
'(' -not -writable ')' -or \
'(' '(' -not -executable ')' -and -type d ')'
Adding -ls or -exec ls -l {} \; to the end works until it gets to a directory it can't read. That gives a permission denied error and bails out completely without finishing. Running ls -ld $(<that command>) works, as far as I can tell, but it feels like I'm missing something simple in find.
As an aside, I'm not worried about POSIX compliance, so I'd rather use -or instead of -o and such for readability.
|
Once you hit a directory that's not executable, find tries to go into it, but it can't because, well, it's not executable. You need to tell it not to try by using -prune.
And put that condition first, so it's not short-circuited.
find . '(' '(' -not -executable ')' -and -type d -and -prune ')' -or \
'(' -not -readable ')' -or \
'(' -not -writable ')'
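For reference, -prune simply stops find from descending into a directory that matched the preceding tests; the effect is easy to see on a scratch tree with a simple name-based condition (directory names made up for the demonstration):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/skipme/inner" "$tmp/keep"
touch "$tmp/skipme/inner/file" "$tmp/keep/file"
# skipme matches, -prune stops the descent, and -or short-circuits -print,
# so neither skipme nor anything under it is listed
find "$tmp" -name skipme -prune -or -print
rm -rf "$tmp"
```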
| How to ignore -ls errors in find |
1,459,725,470,000 |
Using Debian I am unable to create files that by default has read permission for all users.
For example:
# umask
0002
# touch test
# ls -l test
-rw-rw----+ 1 user user 0 Jun 25 18:18 test
Is there any specific restriction on creating readable files?
|
Because your system uses ACLs, the file has extended permissions. Try:
getfacl test
to see the exact file permissions.
| Can't create a+r file |
1,459,725,470,000 |
I'm trying to change some permissions on a folder. I'm running FreeNAS and using the windows permissions settings (not Unix). If I right click the file and go to properties and then security, it shows that the only person who can make changes is the: root(Unix user\root).
NOTE: For obvious reasons I can't login to the windows share using the root user.
So how would I go about changing the settings to allow my account to change the permissions?
|
For NTFS
In researching this I found this AskUbuntu Q&A titled: How do I use 'chmod' on an NTFS (or FAT32) partition?. According to this thread there are several ways to go about this.
Methods
Control the permissions at mount time.
$ sudo mount -t ntfs -o \
rw,auto,user,fmask=0022,dmask=0000 /dev/whatever /mnt/whatever
Using a user mapping file
Contrary to what most people believe, NTFS is a POSIX-compatible filesystem, and it is possible to use permissions on NTFS.
Consult the ntfs-3g man page as well as this ntfs-3g documentation on advanced ownership and permissions. The user mappings is covered in this topic titled: User Mapping.
You can then generate a user mapping file with the ntfsusermap utility shipped with ntfs-3g.
For CIFS
In your case you're dealing with CIFS (shares mounted via mount.cifs) so the above would not be applicable. In that case you can use the command-line tools getcifsacl & setcifsacl. The man page for setcifsacl has the following examples:
Add an ACE
$ setcifsacl -a "ACL:CIFSTESTDOM\user2:DENIED/0x1/D" <file_name>
$ setcifsacl -a "ACL:CIFSTESTDOM\user1:ALLOWED/OI|CI|NI/D" <file_name>
Delete an ACE
$ setcifsacl -D "ACL:S-1-1-0:0x1/OI/0x1201ff" <file_name>
Modify an ACE
$ setcifsacl -M "ACL:CIFSTESTDOM\user1:ALLOWED/0x1f/CHANGE" <file_name>
Set an ACL
$ setcifsacl -S "ACL:CIFSTESTDOM\Administrator:0x0/0x0/FULL,
ACL:CIFSTESTDOM\user2:0x0/0x0/FULL," <file_name>
| Changing CIFS permissions on FreeNAS? |
1,459,725,470,000 |
I am using Ubuntu 12.04 LTS 64-bit to chroot into a SquashFS filesystem that I just extracted to my hard drive (with unsquashfs) from a Kali Linux v1.0.5 32-bit pendrive, for customization:
luis@Fujur:$ sudo chroot /media/Datos/Temporal/squashfs/modificando
root@Fujur:/# ls
0 boot etc initrd.img media opt root sbin srv tmp var
bin dev home lib mnt proc run selinux sys usr vmlinuz
I have been able to modify files (/etc/rc.local, add users with adduser and some other minor changes to the extracted filesystem), but the created new users have this problem:
root@Fujur:/# ls /home -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
as you can see, the owner is "root", and the group is "plugdev", when both should be the same name of the user account (luis/potato in this example).
Knowing why this happens would be nice, but I think I could solve it if I could change file/directory permissions; however, I cannot do that either:
root@Fujur:/tmp# cd /home/
root@Fujur:/home# ls -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
root@Fujur:/home# chown potato potato
root@Fujur:/home# ls -la
total 8
drwxrwx--- 1 root plugdev 0 mar 24 22:45 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 4096 mar 24 00:29 luis
drwxrwx--- 1 root plugdev 0 mar 24 22:45 potato
and even in /tmp there is no luck:
root@Fujur:/# cd /tmp
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:52 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
root@Fujur:/tmp# mkdir test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
root@Fujur:/tmp# chmod a+x test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
root@Fujur:/tmp# chmod a-x test
root@Fujur:/tmp# ls -la
total 4
drwxrwx--- 1 root plugdev 0 mar 24 22:57 .
drwxrwx--- 1 root plugdev 4096 sep 5 2013 ..
drwxrwx--- 1 root plugdev 0 mar 24 22:57 test
It is such a strange thing: I can make directories and files, even editing files, but not changing permissions.
Maybe I have not chrooted correctly? When I chroot into a partition to restore GRUB I do:
$ sudo mount --bind /dev /mnt/dev
prior to chroot, but I think this is not the case.
I believe I could be misusing the chroot command. Any ideas, please?
ADDED: this is the result of mount.
Outside (before) chroot:
root@Fujur:/# mount
/dev/sda6 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda7 on /media/Datos type fuseblk (rw)
gvfs-fuse-daemon on /var/lib/lightdm/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=lightdm)
Inside (after) chroot:
root@Fujur:/# mount
warning: failed to read mtab
EDIT: I just tested and I have the same problem outside chroot, that is, without chrooting: I cannot change file/directory permissions. There are still no errors, but the changes are not made.
|
The line
/dev/sda7 on /media/Datos type fuseblk (rw)
from mount's output tells you that /media/Datos is an NTFS partition (type fuseblk).
NTFS cannot store ownership and permissions in the same way Linux/Unix filesystems like ext{2..4} can. That's why you can set ownership/permissions but they do not persist.
You'll need to switch to a "proper" filesystem (e.g. ext4) for that.
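If the partition has to stay NTFS, the workaround is to pick the apparent owner at mount time, since ntfs-3g synthesizes ownership rather than reading it from disk. A possible /etc/fstab entry (device and mount point taken from the mount output above; the uid/gid values are assumptions, check yours with id):

```
# /etc/fstab — present the whole NTFS partition as owned by uid/gid 1000
/dev/sda7  /media/Datos  ntfs-3g  uid=1000,gid=1000,umask=022  0  0
```

This makes everything on the partition appear owned by that user; chown/chmod inside it will still not persist.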
| Can not change permissions of files/directories in a chrooted filesystem |
1,459,725,470,000 |
I've been using the terminal for almost everything: in fact, I often don't even log in through the interface, I use the tty1 and go to the web with text-browsers. So, the external drive doesn't auto-mount, and I use sudo mount /dev/sdb1 /mnt/JMCF125_DE to mount it. It works, but listing shows there's a difference. The files' description when auto-mounting via the GUI (Unity on Ubuntu) looks like:
-rw------- 1 jmcf125 jmcf125
In manual mount, the same files' properties look like this:
-rwxrwxrwx 1 root root
Which makes sense since I had to use sudo to mount. But how come the system doesn't have to? How can my mounts work exaclty like the systems'? Also, I heard every action in the GUI goes through a background shell: can I see what commands are printed there?
|
The default GUI uses Gvfs to mount removable drives and other dynamic filesystems. Gvfs requires D-Bus. You can launch D-Bus outside of an X11 environment, but it's tricky. If you have D-Bus running, you can make gvfs mounts from the command line with gvfs-mount.
The program pmount provides a convenient way to mount removable drives without requiring sudo. Pmount is setuid root, so it can mount whatever it wants, but it only allows a whitelist of devices and mount points so it can safely be called by any user.
It is not true that every action in the GUI goes through a background shell. A few do but most don't.
| Why does a manual mount set different file ownership? |
1,459,725,470,000 |
I got some files in directory:
drwxrws-wt 2 me mygroup 4,0K 10.1. 12:34 .
-rw-r----- 1 someone mygroup 10G 10.1. 11:22 someonesfile
me and someone are regular users without supplementary groups.
How to take ownership of that file using me account?
If me do:
$ chown me someonesfile
chown: doing bla bla bla: permission denied
However me can "change" ownership by replacing file with new one:
cp someonesfile myfile && mv -f myfile someonesfile
So my main question is if there is any easier (cheaper) way to change file ownership in described environment without using root account or other privilege elevations. Basically I wanted to know if me can somehow take advantage of directory permissions to somehow reset ownership/permissions without making copy of whole file.
I've also noticed that editing file with vim and forcing overwrite with :w! will change owner of file, is that same as doing cp && mv? At least touch someonesfile will fail with permission denied.
|
Yes, vim will remove the original file and create a new one to put the new content in.
Your cp && mv -f is the way to go.
Note that when the t (sticky) bit is set on the directory, as it is in your case, having write permission on the directory is not enough to replace or remove a file: you also need to be the owner of the file or of the directory (as you are).
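You can confirm that the cp && mv route creates a brand-new file (which is why it ends up owned by you) by watching the inode number change; a single-user sketch:

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo data > someonesfile
stat -c '%i' someonesfile                       # inode of the original
cp someonesfile myfile && mv -f myfile someonesfile
stat -c '%i' someonesfile                       # a different inode: a new file
cd / && rm -rf "$tmp"
```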
| Taking file ownership when file and directory is readable/writable |
1,459,725,470,000 |
I recently 'hardened' two Ubuntu servers using Bastille, and now I get permission denied: scp whenever I try to scp files in.
SSH login works fine.
I've tried adding an /scp-dump folder with 777 permissions and still get the same error, so I don't believe it is a permission issue.
Tailing /var/log/auth.log doesn't really give any information, apart from
Oct 1 23:08:39 localhost sshd[20876]: Accepted publickey for some-user from [redacted ip] port 49250 ssh2
Oct 1 23:08:40 localhost sshd[20884]: Received disconnect from [redacted ip]: 11: disconnected by user
Using the -v flag with scp outputs the following:
Executing: program /usr/bin/ssh host some-domain.com, user (unspecified), command scp -v -t -- /scpdump
OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
debug1: Reading configuration data /Users/some-user/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: Connecting to some-domain.com [12.34.56.78] port 22.
debug1: Connection established.
debug1: identity file /Users/some-user/.ssh/id_rsa type 1
debug1: identity file /Users/some-user/.ssh/id_rsa-cert type -1
debug1: identity file /Users/some-user/.ssh/id_dsa type -1
debug1: identity file /Users/some-user/.ssh/id_dsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 8e:83:21:4a:9c:be:57:56:b1:07:5a:14:68:8a:47:dc
debug1: Host 'some-domain.com' is known and matches the RSA host key.
debug1: Found key in /Users/some-user/.ssh/known_hosts:17
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/some-user/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to some-domain.com ([12.34.56.78]:22).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug1: Sending env LC_CTYPE = C
debug1: Sending env LC_MESSAGES = en_AU.utf-8
debug1: Sending env LC_TIME = en_AU.utf-8
debug1: Sending env LC_MONETARY = en_AU.utf-8
debug1: Sending env LC_NUMERIC = en_AU.utf-8
debug1: Sending env LC_COLLATE = en_AU.utf-8
debug1: Sending command: scp -v -t -- /scpdump
zsh:1: permission denied: scp
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
Transferred: sent 2880, received 2504 bytes, in 0.6 seconds
Bytes per second: sent 4563.7, received 3967.9
debug1: Exit status 126
lost connection
Any idea where the permission denied might be coming from, config files I can look into, or other logs I should be looking at?
|
zsh:1: permission denied: scp together with Exit status 126 means the remote shell found scp but was not allowed to execute it; check the permissions of the scp binary on the remote side. Have you tried running scp on that machine to pull the files from elsewhere (vs. push)?
| scp permission denied after 'hardening' with bastille |
1,459,725,470,000 |
I'm doing the following to mount a remote server to a specific path on my server:
sshfs [email protected]:/backup/folder/ /home/myuser/server-backups/
However when I mount the server the folder permissions change (they become 700), and when I test my rsnapshot.conf file I get the following error:
snapshot_root /home/myuser/server-backups/ - snapshot_root exists \
but is not readable
What am I doing wrong ? should I mount the remote server with another user ?
|
FUSE has options to control who has access to the files. I'm guessing you want sshfs -o allow_other.
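Note that -o allow_other is refused for non-root users unless it is enabled system-wide in the FUSE configuration; the relevant line is commented out by default on many distributions:

```
# /etc/fuse.conf
user_allow_other
```

After uncommenting it, re-run the sshfs command with -o allow_other so that other local users (including rsnapshot running under another account) can read the mount.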
| Permissions issues with mounting remote server into a specific folder |
1,669,930,542,000 |
As root, I created /test and set the default ACL with
setfacl -m -d dog:rwx /test
I verified the output of getfacl
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:dog:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
Now, as the user dog, if I tried to create a directory in /test, I get
mkdir: can't create directory 'sub': Permission denied
Why is this so? If I setfacl without the default, getfacl shows user:dog:rwx instead of default:user:dog:rwx and dog could create a sub-directory there.
Note: It was tested inside a VM, and the text on the VM screen is not copyable, so I added a screen capture instead.
|
setfacl -d -m ... and setfacl -m ... behave differently: default ACLs exist only to be inherited. So assign a regular ACL when you want to give a user permissions on the object itself, and assign default ACLs when you want permissions to be inherited by new objects created under a directory.
Explanation
First, I will create the directory /test (as root) and check its permissions:
$> mkdir /test ; ls -ld /test
#Output
drwxr-xr-x 2 root root 4096 Dec 1 17:52 test
As you can see, the group and other have no write permissions by default. Let's take a look at the difference between both setfacl commands.
Using setfacl -m guest:rwx /test
When I run that command (as root) I can see the following outputs by using the following commands:
ls -ld /test
#Output
drwxrwxr-x+ 2 root root 4096 Dec 1 17:57 test
As you can see above the directory has now write permissions for the group. By the way, if you remove ACLs by using: setfacl --remove-all /test and you use ls -ld /test you will notice that /test permissions are reverted to previous ones (drwxr-xr-x).
getfacl -e /test
#Output
# file: test
# owner: root
# group: root
user::rwx
user:guest:rwx #effective:rwx
group::r-x #effective:r-x
mask::rwx
other::r-x
I used getfacl -e to print all effective rights, which are important for understanding how the ACLs work.
Now I will try to create a file under /test directory with guest and edgar users:
(user:guest)> touch /test/fuzz
#All is ok!
(user:edgar)> touch /test/buzz
#touch: cannot touch '/test/buzz': Permission denied
You can notice that guest user was able to create /test/fuzz file while edgar was not. That behavior is correct because of the ACL assigned to guest.
Using setfacl -dm guest:rwx /test
In my case the syntax setfacl -m -d guest:rwx /test is not valid (I'm using openSUSE Tumbleweed). You can also use setfacl -m d:guest:rwx /test.
Now running the command with Default ACLs I have the following:
ls -ld /test
drwxr-xr-x+ 2 root root 4096 Dec 1 18:39 test
getfacl -e /test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:user:guest:rwx #effective:rwx
default:group::r-x #effective:r-x
default:mask::rwx
default:other::r-x
As you can see now, the /test directory has no write permissions for the group. And by using getfacl I can see that the default guest user has rwx permissions, but as I said before, these are only used for inheritance. So if you want the dog user (in your case) to be able to write to the /test directory, use: setfacl -m dog:rwx /test.
About Default ACLs or inherited ACLs
I will set the following ACL and Default ACL:
#ACL
setfacl -m guest:rwx /test
#Default ACL
sudo setfacl -dm guest:r /test
Now I will create a directory under /test as guest user:
(user:guest) /test: mkdir only_r
(user:guest) /test: ls -ld only_r
drwxr-xr-x+ 2 guest guest 4096 Dec 1 19:19 only_r/
(user:guest): getfacl -e only_r
# file: only_r/
# owner: guest
# group: guest
user::rwx
user:edgar:r-- #effective:r--
user:guest:r-- #effective:r--
group::r-x #effective:r-x
mask::r-x
other::r-x
default:user::rwx
default:user:edgar:r-- #effective:r--
default:user:guest:r-- #effective:r--
default:group::r-x #effective:r-x
default:mask::r-x
default:other::r-x
Now I will try to change to only_r dir and create a file
(user:guest) /test: cd only_r
(user:guest) /test/only_r: touch fuzz
(user:guest) /test/only_r: ls
fuzz
As you can see above, I was able to change to only_r and create a file even though the ACLs grant no execute or write permissions (if a directory lacks execute permission, I cannot cd into it). However, this behavior is correct, because the Unix permissions and the ownership (drwxr-xr-x+ 2 guest guest) allow the guest user to cd into /test/only_r and create files there.
Finally with edgar user I will try to cd to /test/only_r and create some file:
(user:edgar) : cd /test/only_r
cd: permission denied: /test/only_r
(user:edgar) : echo jaja > only_r/fuzzbuzz
permission denied: only_r/fuzzbuzz
| Set Default ACL rwx to a directory, can't create a sub-directory as the user |
1,669,930,542,000 |
I have been having several issues with a CentOS 9 VM related to file permissions. I've never had this much trouble before, and I'm wondering if it has something to do with the security options and file systems I selected during install (GUI STIG and ext4).
Example issue 1:
Two python files in the same directory, with the same permissions displayed by ls and stat
$ls -al config.py run_app.py
-rwx------. 1 myuser myuser 20K Aug 4 19:33 config.py
-rwx------. 1 myuser myuser 50K Jul 8 10:51 run_app.py
$stat config.py run_app.py
File: config.py
Size: 19873 Blocks: 40 IO Block: 4096 regular file
Device: fd05h/64773d Inode: 1971283 Links: 1
Access: (0700/-rwx------) Uid: ( 1000/myuser) Gid: ( 1000/myuser)
Context: unconfined_u:object_r:user_home_t:s0
File: run_app.py
Size: 51016 Blocks: 104 IO Block: 4096 regular file
Device: fd05h/64773d Inode: 1969096 Links: 1
Access: (0700/-rwx------) Uid: ( 1000/myuser) Gid: ( 1000/myuser)
Context: unconfined_u:object_r:user_home_t:s0
But lsattr doesn't work right:
$lsattr config.py run_app.py
--------------e------- config.py
lsattr: Operation not permitted While reading flags on run_app.py
$sudo lsattr run_app.py
--------------e------- run_app.py
I also cannot cat/edit/run run_app.py. While all three operations work just fine on config.py. Doing anything with run_app.py requires sudo/root.
Example issue 2:
I cannot install python packages into a virtual environment, but I can install them to the local user environment.
myuser@COS9-VM:~/sandbox
$python3 -m venv myvenv
myuser@COS9-VM:~/sandbox
$. myvenv/bin/activate
(myvenv) myuser@COS9-VM:~/sandbox
$python3 -m pip install pyyaml
Traceback (most recent call last):
File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/__main__.py", line 29, in <module>
from pip._internal.cli.main import main as _main
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/main_parser.py", line 8, in <module>
from pip._internal.cli import cmdoptions
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._internal.cli.parser import ConfigOptionParser
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/parser.py", line 12, in <module>
from pip._internal.configuration import Configuration, ConfigurationError
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/configuration.py", line 21, in <module>
from pip._internal.exceptions import (
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/exceptions.py", line 7, in <module>
from pip._vendor.pkg_resources import Distribution
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_vendor/pkg_resources/__init__.py", line 80, in <module>
from pip._vendor import appdirs
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 846, in exec_module
File "<frozen importlib._bootstrap_external>", line 982, in get_code
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_vendor/appdirs.py'
(myvenv) myuser@COS9-VM:~/sandbox
$deactivate
myuser@COS9-VM:~/sandbox
$python3 -m pip install pyyaml
Defaulting to user installation because normal site-packages is not writeable
Collecting pyyaml
Using cached PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Installing collected packages: pyyaml
WARNING: Value for scheme.platlib does not match. Please report this to <https://github.com/pypa/pip/issues/10151>
distutils: /home/myuser/.local/lib/python3.9/site-packages
sysconfig: /home/myuser/.local/lib64/python3.9/site-packages
WARNING: Additional context:
user = True
home = None
root = None
prefix = None
Successfully installed pyyaml-6.0
I am out of ideas... What am I missing?
|
After scouring the internet, I have an answer. Of course the answer was on Stack Overflow/Stack Exchange already (here), but it took me days to track it down.
My VM was running fapolicyd as part of the STIG compliance configuration I enabled at installation. This daemon hooks itself into the file-permission decision-making process, so it can deny access regardless of a file's displayed permissions. Its rules files by default block access to certain executable files in non-system binary/executable directories, based, as far as I can tell, on its determination of the file's MIME type. In my example config.py had no shebang, whereas run_app.py did. That was enough to get the latter classified as text/x-python, while leaving the former alone.
Once I stopped/disabled the fapolicyd service, I was able to use files according to their displayed permissions/ACLs.
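If you suspect the same daemon, you can reproduce the MIME-type distinction it keys on with file(1). This is only a sketch: the exact type names vary between versions of file, and the file names here are throwaway demo files, not the ones from my project.

```shell
# two otherwise-identical Python files: one without a shebang, one with
printf 'x = 1\n' > config_demo.py
printf '#!/usr/bin/env python3\nx = 1\n' > run_app_demo.py

# fapolicyd classifies files by MIME type; the shebang changes the result
file --mime-type -b config_demo.py    # plain text
file --mime-type -b run_app_demo.py   # some python script type

# if fapolicyd turns out to be the culprit, this is the off switch:
#   sudo systemctl stop fapolicyd && sudo systemctl disable fapolicyd
```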
| File permissions not matching allowed operations...? |
1,669,930,542,000 |
I have a script that automatically creates and runs a VM. That script is used by many people. You basically call the script giving it some information like what PCI or USB devices you want to pass through and which iso to use to install the OS and then the script runs sudo qemu-system-x86_64 with the appropriate parameters.
So if you break it down, you could currently call my script like this:
./create-vm.sh /home/me/os-images/windows10.iso
And this works fine.
But now I want to take it a step further and use sudo virt-install ... instead of sudo qemu-system-x86_64 ... and that is causing major issues because with virt-install it can't access the iso file anymore. Presumably because it drops its root privileges and uses the qemu user even if I run it with sudo...
So now I have to make a difficult decision:
Do I move the iso file to /var/lib/libvirt/images? (No because the user might need that file in the exact location where it is right now.)
Do I copy the iso to /var/lib/libvirt/images? (No because the user might not have enough disk space and it just seems like a waste of resources.)
Do I set user = root or user = me in /etc/libvirt/qemu.conf? (No, because that is a global setting that might mess up other qemu stuff the user is doing. - I have tried it though and it causes libvirtd.service to crash.)
Do I add the group of the iso file to the qemu user? (No, because that could have unwanted side effects, potentially giving qemu more access in situations where the user wouldn't want it. - Nevertheless, I've tried it and it didn't work, presumably some SELinux magic is blocking it...)
Do I change the owner of the iso file to qemu? (No, because that might have unwanted side effects. - Besides that, when I try it I still get permission denied errors, probably because of SELinux.)
Do I mount the iso and make the mountpoint available to the qemu user? (No, because iso files can be very complex and some data will not be available in the mountpoint.)
Do I mount the folder containing the iso? (No because the iso file would still have the same owner/group.)
I just can't seem to find a good solution. What am I supposed to do now? I really need some of the functionality that virt-install offers over qemu-system-x86_64.
Note: In reality there is not just one iso image, but also a floppy image, some other iso files containing drivers and an ACPI table file. I get permission errors for all of these files from virt-install.
|
The reason chown qemu:qemu $ISO alone doesn't work is because qemu likely doesn't have search permissions for $HOME.
virt-install attempts to detect this situation and print a warning. virt-manager will go a step further and offer to try and fix it, using roughly setfacl --modify user:qemu:x $DIR for every directory in the chain leading to the iso. This, coupled with libvirt chowning and SELinux-labeling the file automatically (the default behavior), should be enough to get the VM booting. Yes this behavior does have unwanted side effects, but there's no way around it if you want the VM to be run as a different non-root user.
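That per-directory ACL fix can be scripted; here is a dry-run sketch of roughly what virt-manager does, using the example path from the question. Drop the echo to actually apply the ACLs.

```shell
# grant the qemu user search (x) permission on every directory leading
# to the ISO; shown as a dry run -- remove "echo" to apply for real
iso=/home/me/os-images/windows10.iso
dir=$(dirname "$iso")
while [ "$dir" != "/" ]; do
    echo sudo setfacl --modify user:qemu:x "$dir"
    dir=$(dirname "$dir")
done
```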
That said, if what you want is a VM that belongs entirely to one user, look at qemu:///session compared to the default qemu:///system libvirt URI. Here's an explanation of the difference. Though you'll lose access to things like PCI passthrough in that case, since those features require libvirtd to have host admin access.
| Clean way of running virt-install with an iso file that's in your home directory that user qemu can't access |
1,669,930,542,000 |
I am storing some Unix socket files under a sub-directory beneath $XDG_RUNTIME_DIR. On the documentation, I can read:
Files in this directory MAY be subjected to periodic clean-up. To ensure that your files are not removed, they should have their access time timestamp modified at least once every 6 hours of monotonic time or the 'sticky' bit should be set on the file.
I wonder if it is necessary to set the sticky bit on each socket file in order to avoid periodic clean-up or if it is sufficient to set the sticky bit in the sub-directory in which I am storing all the socket files.
|
The part of the XDG Base Directory specification you cited speaks about setting the sticky bit on files:
Files in this directory MAY be subjected to periodic clean-up. To ensure that your files are not removed, they should have their access time timestamp modified at least once every 6 hours of monotonic time or the 'sticky' bit should be set on the file.
The specification is somewhat ambiguous here, as file can mean any filesystem entity, a non-directory filesystem entity, or a regular file depending on context. But the sticky bit has a special effect on directories on Linux, and is even named differently when applied to directories in the chmod(1) manpage:
The restricted deletion flag or sticky bit is a single bit, whose
interpretation depends on the file type. For directories, it
prevents unprivileged users from removing or renaming a file in the
directory unless they own the file or the directory; this is called
the restricted deletion flag for the directory, and is commonly found
on world-writable directories like /tmp.
Because of this it's reasonable to assume that file in the XDG documentation in this context means non-directory filesystem entity.
But as the specification is not completely unambiguous it will depend on the implementation of the clean-up mechanism of your distribution.
It looks like there is currently no such periodic clean-up at least on Fedora and on Linux Mint, but as this may change in the future and there is no telling how the distributions will interpret this part of the spec, it's safer to set it on every file/socket you want to exclude from periodic cleanups.
EDIT: For systemd based distributions, pam_systemd is responsible for managing $XDG_RUNTIME_DIR. It currently only performs creation on the first login and deletion on the last logout. Also systemd creates sockets in subdirectories of $XDG_RUNTIME_DIR and doesn't set the sticky bit on anything. This strongly suggests that at least no systemd based distribution implements periodic clean-up yet.
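To be safe either way, setting the sticky bit on each socket is cheap. A minimal demonstration follows; the myapp subdirectory and socket name are placeholders, and /tmp is only a fallback so the snippet works when $XDG_RUNTIME_DIR is unset.

```shell
# mark a runtime file so a hypothetical cleaner would skip it
dir=${XDG_RUNTIME_DIR:-/tmp}/myapp
mkdir -p "$dir"
: > "$dir/service.sock"          # stand-in for the real socket file
chmod +t "$dir/service.sock"     # set the sticky bit on the file itself
stat -c '%A' "$dir/service.sock" # mode string now ends in T (sticky, non-exec)
```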
| Sticky bit and socket files in XDG_RUNTIME_DIR |
1,669,930,542,000 |
Q1. If you setup root with a restrictive umask, it will affect the files created when you run ansible, right? (Unless you tell ansible to ensure a specific mode aka. permissions).
A1. Looks like it, ansible does not automatically reset the umask.
Using ansible to upgrade ansible in envronment with restrictive umask
[ansible-project] Temporarily setting umask
Q2. What can we conclude from this?
Is there a way to write ansible roles so that they will never depend on the umask?
Are there disadvantages of writing an ansible role to work under this constraint?
|
Ansible now answers this question by adding a umask option to modules (like the ones mentioned) that create files without a specific mode.
pip_module.html#options
Unable to Set Umask / Mode for Git Module #10279 - Closed
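For example, with the pip module a role can pin the umask explicitly instead of inheriting root's. This is a sketch: the package name is a placeholder, and note the value is a quoted string rather than a bare octal literal.

```yaml
- name: install a python package with predictable permissions
  pip:
    name: somepackage      # placeholder package name
    umask: "0022"          # quoted string, not a YAML octal number
```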
I don't see a big disadvantage if you add this more explicit option where appropriate. Maybe just a small annoyance when reading the role.
I note the copy module does not implement umask. You can specify mode, it's just that this has slightly different semantics. If you could specify umask, and the file already existed (regardless of content), then its mode would not be changed. (In the role I'm working on, I think it's useful that the copy module made me think about this).
I note that the original message mentions pip breaking permissions on the ansible install under /usr/lib/python2.6/site-packages. Surely this would also be an issue with running pip manually. It sounds like a defect in pip... but unlike OS packages, pip packages can also be installed inside a virtualenv, so it is much more complex for pip to determine the user's intention. Arguably it is more of an example of a potentially unanticipated issue from using a restrictive umask.
| How reasonable to make ansible work on systems with restrictive umask? |
1,669,930,542,000 |
I'm trying to set up a directory where a user can traverse a directory but not prove the existence of anything in the directory. I have tried setting permissions such that the user has execute but not read permissions. Unfortunately the error returned (either "No such file or directory" or "Permission denied") will confirm whether a given item exists in the directory.
For example, here is what I currently see:
$ sudo ls -l permTest/
total 4
drwxr-x--- 2 root root 4096 Aug 10 12:35 exists
$ ls -ld permTest
drwxr-x--x 3 root root 4096 Aug 10 12:35 permTest/
$ ls permTest
ls: cannot open directory 'permTest': Permission denied
$ ls permTest/doesnotexist
ls: cannot access 'permTest/doesnotexist': No such file or directory
$ ls permTest/exists
ls: cannot open directory 'permTest/exists': Permission denied
I would like the error messages in the final line to be identical.
I'm trying to set this up on MapRFS if that's relevant. It is broadly POSIX compliant.
I have read the answers at How does one create a directory that can't be seen and can only be accessed via its absolute path name?, but did not find a solution to my problem.
|
It is not possible to hide the existence of a file from a user who can name its path and list it:
$ ls -l permTest/insidedir/doesexist
-rw-r--r-- 1 root root 0 Aug 10 01:55 permTest/insidedir/doesexist
even if both directories (permTest and insidedir) are owned by root and have only x permissions:
$ sudo ls -la permTest/insidedir/
total 8
d--x--x--x 2 root root 4096 Aug 10 01:55 .
d--x--x--x 3 root root 4096 Aug 10 01:54 ..
-rw-r--r-- 1 root root 0 Aug 10 01:55 doesexist
$ ls -la permTest/
ls: cannot open directory permTest/: Permission denied
$ ls -la permTest/insidedir/
ls: cannot open directory permTest/insidedir/: Permission denied
| How can I allow a user to traverse a directory but prevent them confirming the existence of any other files/directories |
1,669,930,542,000 |
There are free VST and AAX (Pro Tools) plugins installed as part of 360 Spatial Workstation. These are greyed out (as if requiring a license) or invisible (the VSTs, e.g. in Ableton Live). If you log in as an administrator they appear with no issues.
This is not acceptable, because in a large music studio, we naturally do not let guests, artists, students, and so on, install programs or other admin tasks.
The program should have all the permissions necessary to change almost any system component, and should not require system-level restrictions in the actual plugins.
So the other parts of "360 Spatial Workstation" also known as FB360 now in collaboration with a team at Facebook work fine, just not the plugins in DAW programs.
It seems that if you are not the administrator while wanting to use any of the Digital Audio Workstation components of this program, the DAW cannot find the plug-ins on the system, and refuses to acknowledge the "free" license (for example the Pro Tools .aax plugins).
The systems are OSX 10.11, facing the same issue with all non-admin users whether local or ldap users.
I am used to solving minor problems in similar programs with relatively simple Bash scripts, or by tweaking unix-style config files or plists in ~/Library/Application Support/$Application as well as ~/Library/Preferences and the corresponding local, system-wide files, but there are no files here to modify.
I've tried to post questions at the developers' help desk but can't seem to log in or create an account.
I have also tried opening the permissions wide to 0777 on all of the components inside the Applications package, as well as on the individual plugins. I also wanted to force the components to run suid as an administrator, but so far this has not worked; these are the sorts of unix/linux tricks (on OSX, here) that I am most hoping to get tips on. It seems like the way the Pro Tools plug-ins, but also the VST plugins, are implemented contains additional levels of access restriction within their packages, which is not normal for these plug-ins in my experience.
I also tried using the sudo system to allow specific users to access the plugins, and this made no difference.
|
You can't use the plugin portions of this package at this time, as a non-admin user. You and I both know that is a security risk, and others may know it is not necessary, but that is the answer.
I verified that what you mentioned before about sudo was correct, and as well, there were no unix-level tweaks I could work out. I agree it would be cool if in the future people add more to this post in regards to how to generally tweak the VST or other plugin formats. A great resource is Steinberg (their developers' site is here), where you can read up on the VST specifications. You may even be able to modify the 360 plugins with info that is there, and basic Unix/OSX knowledge. Maybe Avid has a similar resource for the AAX format.
I played around with their installer a bit, and read over the support group. Actually it seems to have quickly turned into a general forum about VR, and users-helping-users, not so much of a support group; but in general the VR movement looks awesome, so I have big hopes for the project!
Someone asked about a month ago about non-admin use, and someone gave a non-response of "well you had to be admin to install it, didn't you," so I think the venue is not right for these type of security-centered questions. I had also the same experience as you, with their pre-FB support pages, which seem to be closed now.
I have experience either personally, or as admin for clients, with almost all the leading music hardware, DAWs, and plugins, and none of them, I mean zero--and only one piece of hardware I know of (and that not even certainly)--needs strict admin access to work. A program like Pro Tools has all the rights, all on its own, to tweak the audio and video settings in your system, and does not need its plugins, to run strictly with root privileges.
Spatial Workstation should work with every DAW according to their own site, so apparently they've just left out support for non-admin users, which seems a bit awkward to me. To speculate further wouldn't be justified, but the sure answer is that it is not yet ready for prime time, and certainly, at least, not purposefully built for educational or group use.
| How can I tweak unix settings for 360 Spatial Workstation (specifically the vst and aax audio plugins) to make it run as a non-administrator? |
1,669,930,542,000 |
Say one has a resource hungry command that users on a server need to run. I want to wrap said command with a wrapper script that will parse the arguments passed and ensure that the command is only being used under certain conditions or times.
The problem is that if the program itself is not executable the wrapper won't be able to run it either. I'd also like the command not to run as root.
Is this possible?
|
The simple answer is: It is not possible to force your users to use your wrapper script.
The reason for this is fairly simple; a shell script is an interpreted program. That means that bash (or some other shell process) must read the file in order to run the commands that are called in it.
This in turn means that a user who has permission to run the wrapper script, must have permission to do everything that is done in the wrapper script. In the vast majority of cases*, a shell script, even one with lots of internal logic and conditionals, does exactly the same thing when you run it as it would if you typed the entire script into your command prompt, line by line.
If you are merely trying to make it difficult for uneducated users to slow down your system, there are a multitude of ways of doing this, such as what @mikeserv suggests in a comment on your question. I can think of at least five more ways offhand**, many of which could be used in combination; the crucial thing to understand about these is that they're not secure. They don't actually prevent the user from using the command directly instead of the wrapper script, and they also don't (and can't) prevent the user from making his own copy of the wrapper script (which he must have read permissions on to be able to run at all) and modifying it however he likes.
It is possible to write a short C program to perform the function of your wrapper script, which compiles to a binary executable, and then make that C program SUID*** so it is the only way the user can run the command you are talking about, but that's beyond my scope and area of expertise.
Other options involve extremely odd workarounds (hacks) like setting a cronjob to modify your sudoers file to allow permissions to run the command only during specific times of day...but that's getting really, really weird and Bad Idea territory.
I think the standard way to accomplish this (although still without forcing tech-savvy users to use your wrapper script) would be:
(I'll pretend the command to restrict is date.)
Ensure that inside your script, its call to date uses the absolute path: /bin/date (You can find out what this is by running which date.) Also ensure your script has a proper shebang, so that it can be run without needing to type bash ./myscript but can just be run as ./myscript, and ensure it is readable and executable by everyone. (chmod 555 myscript)
Put your wrapper script in /usr/local/bin/ and rename it as date.
Check that users have /usr/local/bin at the start of their $PATH variable. (Just log in as a user and run echo "$PATH".) They should already have this by default. It doesn't have to be at the very start as long as it's in their path before /bin (or whatever the location of the original date command is).
If they don't have it in their path, you can add it by running: echo 'PATH="/usr/local/bin:$PATH"' | sudo tee /etc/profile.d/my_path_prefix.sh
Now any time a user tries to run the command directly, he will actually be running your wrapper script, because the directory where your wrapper script is appears first in his $PATH.
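A minimal sketch of such a wrapper, using date as in the steps above. The maintenance-flag policy is a made-up placeholder (the real script would hold whatever time or argument checks you need), and the wrapper is written to a temp file here purely for demonstration; in step 2 it would be installed as /usr/local/bin/date.

```shell
wrapper=$(mktemp)                    # stands in for /usr/local/bin/date
cat > "$wrapper" <<'EOF'
#!/bin/sh
# hypothetical policy check: refuse to run while a flag file exists
if [ -e /var/run/date-maintenance ]; then
    echo "date is disabled during maintenance" >&2
    exit 1
fi
exec /bin/date "$@"
EOF
chmod 555 "$wrapper"                 # readable and executable by everyone
"$wrapper" +%Y                       # otherwise behaves like the real date
```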
A much more hack-y "blackhat" sort of a solution would be to actually mask the original binary, not by putting another version earlier in the path for users, but by putting the wrapper script in place of the command itself, in its original location. Use at your own risk:
Put the command itself somewhere outside the normal bin directories so no one has it in their path. You could move it to, for example, /var/local. (There may be a better place, but this is a hack already, so it doesn't matter much, does it?)
Ensure that the call to the date within your wrapper script points to the new location for date—its absolute path: /var/local/date in my example.
Move your wrapper script into date's old location, with date's original name.
The main caveat is that every time anyone tries to run that command, including system init scripts, they will get your wrapper script instead.
This is purely a hack and would not qualify as good system administration. But it is possible and you may as well know that it could be done. The better solution is what I posted above.
*The exceptions to this have to do with modifying the environment and programs that behave differently when they are run interactively vs. when they are run from a script. These exceptions have nothing to do with permissions, though, so they're not relevant to this discussion.
**Ask about them in the comments if you are interested and I'll expand on them.
***NOT suid root. If you do this, just create a user, put him in a group which is the only one with permission to run the command you are talking about (chmod 010 or something) and then chown your fresh-compiled wrapper binary to be owned by that user and set its suid bit with chmod 4511.
| Allow only wrapper script but not command |
1,669,930,542,000 |
I have this website, where if the user submits a form, a python script is executed through a php page, and the python script creates a zip file and should offer it to the user for download through a link. The file could be huge (a few GB).
As I'm working on a university server, I'm strictly bound to their server rules and capabilities. Here's the problem:
The website is stored in /data/mywebsite, which has limited diskspace. Of course this is owned by www-data as it's mainly accessible by my Apache server.
I'm offered 1 TB storage in /experimentdata/, which is ONLY ACCESSIBLE by a single, specific user, say theuser. This is because this folder is a samba mount that can be accessed by a single and specific user-id.
To create the file in /experimentdata, I use a sudo -u theuser command that will create the file /experimentdata/downloadme.zip as user theuser. Now my problem is: How can I offer this file through a link for download through Apache?
I thought of using a symbolic link that I put in, e.g., /data/mywebsite/download/downloadme.zip. The problem with that is that the user www-data has absolutely no permission to read the file!
How can I let the user download the file /experimentdata/downloadme.zip with the user www-data through the user theuser?
I would like to explicitly say that involving sudo -u theuser is absolutely fine. But I don't know how to make a link out of that to somewhere outside my website folder.
PS: If you require any additional information, please ask.
|
I think the thing to do is have your php/python return the data directly instead of apache. Your code can do the same thing that apache does. In my experience this is much better than opening up another directory and/or using sudo, or changing file permissions for apache, etc.
If the program produces the large file faster than the internet connection, then you can stream the data directly from your program, which eliminates the extra data file and the code to manage it and the mechanisms to remember it.
This answer on Stack Overflow shows how the code works in php. https://stackoverflow.com/a/4357904/5484716.
For programs that will be called this way, eliminate all stderr stream output and make sure the return code from your python process accurately reflects the success or failure of the process.
The examples below show the popen() calls you would use in the above example scenario from stackoverflow. I've prepended exec 2>/dev/null; to the shell command. This ensures that no output goes to standard error, even from the shell itself, because having data coming on both stderr and stdout can be a source of deadlocks with popen().
If you want to download the disk file to your user:
$fp = popen('exec 2>/dev/null; sudo -u theuser cat yourfile.zip', 'r');
If you want to download the data from the active process:
$fp = popen('exec 2>/dev/null; sudo yourpythonscript arg argN', 'r');
These command lines are shell commands and need to be quoted appropriately for shell meta characters.
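The effect of the exec 2>/dev/null prefix used above can be verified in isolation: everything written to standard error after the exec vanishes, so only the payload reaches the pipe.

```shell
# only "payload" survives; the diagnostic is swallowed by /dev/null
out=$(sh -c 'exec 2>/dev/null; echo payload; echo diagnostic >&2' 2>&1)
echo "$out"    # prints: payload
```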
In the second method, the server would begin sending the data immediately. When the user successfully submits the form, they immediately see a "save as" dialog from their browser. As soon as the user selects the output file, your php script transmits the data directly across the wire and into the remote file.
The python script should print only the zip data on standard output, and return an exit code that accurately represents the success or failure of the zip process. In python the script should write the output on sys.stdout, for example zf = ZipFile(sys.stdout, ....
It is critical to call pclose() and check the return value, because that will be the only way you know if the zip succeeded or not. If pclose() returns anything other than 0, something is wrong.
How the file is handled by the client depends on the settings of these response headers, among others: content-type:, content-encoding:, and content-disposition:. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html, and look at the response-header and the entity-header information.
| Access a file in PHP through a symbolic link from a user that normally doesn't have permission |
1,669,930,542,000 |
What are the differences or similarities among the FreeBSD flags, simmutable/uimmutable and sunlink/uunlink?
Reading man chflags, I see these flags:
schg, schange, simmutable
set the system immutable flag (super-user only)
sunlnk, sunlink
set the system undeletable flag (super-user only)
uchg, uchange, uimmutable
set the user immutable flag (owner or super-user only)
uunlnk, uunlink
set the user undeletable flag (owner or super-user only)
I currently understand the immutable attribute in the same way as a Linux manual page on chattr describes it:
A file with the 'i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file and no data can be written to the file. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
How do "immutable" and "undeletable" differ in FreeBSD?
|
From the manpage of the syscall chflags(2):
SF_IMMUTABLE The file may not be changed.
SF_NOUNLINK The file may not be renamed or deleted.
[...]
UF_IMMUTABLE The file may not be changed.
UF_NOUNLINK The file may not be renamed or deleted.
The flags prefixing with SF_ may only be set or unset by the super-user. The others prefixing with UF_ may be set or unset by either the owner of a file or the super-user.
Note: If one of the SF_ flags is set, a non-super-user cannot change any flags, and even the super-user can change flags only if securelevel is 0.
The security level can be set with a sysctl(8) on the kern.securelevel variable.
| What is the difference between the FreeBSD flags, immutable and unlink? |
1,669,930,542,000 |
I am running Arch and have just installed XAMPP. I have changed my document root to /srv, which is a separate EXT4 partition. I have set the Apache HTTP server to use a group called "http". I have added my user to the http group. I want every file created in the /srv folder to be assigned to the http group, and for all the files to have group read and execute access.
I have ran the following commands:
sudo chgrp -R http /srv/
sudo chmod -R g+rwxs /srv/
And set the umask at the bottom of my ~/.bashrc to:
umask 002
However I notice 2 things:
When I extract a zip file that is owned by the http group, the group of the immediate folder that is created by my archive application is owned by http but all the subdirectories and files are still owned by the default "user" group.
Creating a new file with nano gives me a different permission (-rw-rw----) than creating one with Gedit (-rw-r--r--).
What umask should I be using to force user - all, group - read/execute, others - none?
How do I force all files to be owned by http regardless of what program creates them there?
|
That is not such an easy problem as one might guess. The point is that
directories' sticky bit
umask
even ACLs (and richacls)
are defaults only. Nothing prevents a process from changing these values after the creation of a file or directory. An archive program may even be interested in restoring the original values.
I cannot reproduce your zip experience. I have Zip 3.0 (openSUSE 13.1). Here the sticky group of the parent folder is successfully inherited.
Using ACLs for default ACLs (adding the group explicitly i.e. not as file group (ACL_GROUP_OBJ) but as pure ACL entry) does not prevent an application from making modifications but in my experience this happens less often.
The only safe way is a privileged daemon which either regularly checks for new files / directories or keeps itself informed via FAM. This daemon can then change the file ownership so that normal processes can still modify (and maybe delete) the file but cannot change its access rights any more. Maybe this can be done with FUSE, too.
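For cooperating programs, though, the defaults go a long way: the setgid bit makes new entries inherit the directory's group, and a umask of 027 yields exactly the asked-for permissions (u=rwx, g=rx, o=none; group write is dropped on files too, which is usually what you want). A small demonstration in a throwaway directory:

```shell
top=$(mktemp -d)               # in real use this would be /srv, group http
chmod g+s "$top"               # setgid: children inherit the group of $top
umask 027                      # new dirs: rwxr-x---   new files: rw-r-----
mkdir "$top"/child
: > "$top"/child/file
stat -c '%A %G' "$top"/child   # drwxr-s--- <group of $top>
stat -c '%A' "$top"/child/file # -rw-r-----
```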
| How to set a directory's permissions for group read/execute access |
1,669,930,542,000 |
I am running CentOS 6.4 and I have installed Wordpress on it. (along with LAMP)
Now the problem is that I cannot make any write changes to any files in the wordpress editor, e.g. header.php, style.css etc. Wordpress says the following where the 'Update' button is supposed to be: You need to make this file writable before you can save your changes.
Notes:
Now Apache is running as the root user (default)
Here are the
permissions on the themes folder where all the above mentioned files
lie:
drwxrwxr-x. 5 root bluegig 4096 Jul 7 17:32 themes
drwxrwxr-x. 3 root apache 4096 Jul 7 23:15 uploads
I ran the chmod 775 command on both 'themes' and 'uploads'. Doing a chmod 777 gets me write permissions, but I don't believe that is very safe... Is there any other/better way of doing it?
(bluegig is the name of my domain, don't know why that is there...)
What I can do:
I can read and execute in Wordpress
I can upload files into the uploads folder from within wordpress
I cannot:
Make any changes to files within wordpress (via the editor)
How do I enable write permissions so that I can modify files in
wordpress?
Note, I did not log into an ftp account from within WP.
|
The installation of Apache may appear to be running as root, but in actuality it's running as the user apache. You can check this by looking in this file:
$ grep "^User" /etc/httpd/conf/httpd.conf
User apache
Your entire wordpress directory should likely be owned by this user if you're planning on managing the installation using wordpress through the web UI.
I usually create a separate directory for wordpress like this:
$ pwd
/var/www
$ ls -l | grep wordpress
drwxr-xr-x. 5 apache apache 4096 Apr 25 19:27 wordpress
Here's the contents of the wordpress directory just so you can see it:
-rw-r--r--. 1 apache apache 395 Jan 8 2012 index.php
-rw-r--r--. 1 apache apache 5009441 Jan 23 13:40 latest.tar.gz
-rw-r--r--. 1 apache apache 19929 May 6 2012 license.txt
-rw-r--r--. 1 apache apache 9177 Jan 25 11:25 readme.html
-rw-r--r--. 1 apache apache 4663 Nov 17 2012 wp-activate.php
drwxr-xr-x. 9 apache apache 4096 Dec 11 2012 wp-admin
-rw-r--r--. 1 apache apache 271 Jan 8 2012 wp-blog-header.php
-rw-r--r--. 1 apache apache 3522 Apr 10 2012 wp-comments-post.php
-rw-rw-rw-. 1 apache apache 3466 Jan 23 17:15 wp-config.php
-rw-r--r--. 1 apache apache 3177 Nov 1 2010 wp-config-sample.php
drwxr-xr-x. 7 apache apache 4096 Apr 24 20:15 wp-content
-rw-r--r--. 1 apache apache 2718 Sep 23 2012 wp-cron.php
drwxr-xr-x. 9 apache apache 4096 Dec 11 2012 wp-includes
-rw-r--r--. 1 apache apache 1997 Oct 23 2010 wp-links-opml.php
-rw-r--r--. 1 apache apache 2408 Oct 26 2012 wp-load.php
-rw-r--r--. 1 apache apache 29310 Nov 30 2012 wp-login.php
-rw-r--r--. 1 apache apache 7723 Sep 25 2012 wp-mail.php
-rw-r--r--. 1 apache apache 9899 Nov 22 2012 wp-settings.php
-rw-r--r--. 1 apache apache 18219 Sep 11 2012 wp-signup.php
-rw-r--r--. 1 apache apache 3700 Jan 8 2012 wp-trackback.php
-rw-r--r--. 1 apache apache 2719 Sep 11 2012 xmlrpc.php
I usually also manage any Apache configs related to wordpress in their own wordpress.conf file under the /etc/httpd/conf.d/ directory.
# wordpress.conf
Alias / "/var/www/wordpress/"
<Directory "/var/www/wordpress/">
Order Deny,Allow
Deny from all
#Allow from 127.0.0.1 192.168.1
Allow from all
AllowOverride all
</Directory>
#RewriteLog "/var/www/wordpress/rewrite.log"
#RewriteLogLevel 3
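The ownership/permission layout shown above can be previewed in a scratch directory; this is only a sketch, using a temporary path in place of /var/www/wordpress (on the real system you would additionally run chown -R apache:apache as root):

```shell
# Scratch stand-in for /var/www/wordpress; real setup also needs
# chown -R apache:apache (run as root).
docroot=$(mktemp -d)/wordpress
mkdir -p "$docroot/wp-admin"
touch "$docroot/index.php"
# u=rwX,go=rX yields 755 on directories and 644 on plain files in one pass
chmod -R u=rwX,go=rX "$docroot"
stat -c '%a %n' "$docroot" "$docroot/index.php"
```

The symbolic X is what makes a single recursive chmod safe here: it grants execute only to directories (and to files that already had an execute bit), so files end up 644 and directories 755.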
| CentOS - Issue with write permissions |
1,669,930,542,000 |
I'm using plain Ubuntu Desktop 11.04 and installed my lamp stack using lamp-server. I am trying to use Netbeans as my IDE.
Currently, all virtual hosts are being run from /var/www/vhostname -- but as I have not configured any groups or permissions, if I try to open any of the files through Netbeans it does not have write permission.
How can I properly set up permissions (or configure Apache or Netbeans) so that:
Files created by a php script can be rw by Netbeans
Files created by Netbeans can be rw by Apache
I attempted to chown everything to my user/group which gave Netbeans write permission, but then Apache did not have write permission.
Note: This is purely for a development machine -- not used in production, and I am the only user on this box.
UPDATE
I used to use the method in the answer I marked as accepted, but nowadays I do something much simpler:
I set Apache to run as my user and my group (this is done either in httpd.conf, apache2.conf, or envvars depending on your distro)
I chown /var/www to my user and group
Voila, Apache has read/write access, and I have read/write access while working on projects.
|
What I recommend doing has been mostly described in this Ask Ubuntu question.
For this particular case I would install suPHP which in short allows you to execute PHP scripts as your user under Apache.
By doing the following:
sudo chown -R youruser:youruser /var/www
find /var/www/ -type d -exec chmod 0755 {} \;
find /var/www/ -type f -exec chmod 0644 {} \;
Install suphp-common and libapache2-mod-suphp from this ppa (What are PPAs and how do I use them?)
Disable mod_php5 and enable mod_suphp
sudo a2enmod suphp
sudo a2dismod php5
Update your virtual hosts to include this line at the bottom of them:
suPHP_UserGroup youruser youruser
Replacing youruser with the user you use to edit files on the server. Restart Apache.
From this point forward Apache will execute all php scripts as your user, which means they can be owned by your user/group and there is no need to use crazy permissions like 777. Since everything runs as your user, all files created by the php scripts will be owned by your user as well! There are many other cool things you can do with suPHP; however, from what it sounds like, this is all you'll need to get started.
| How to configure permissions to allow gedit, apache, and an IDE play together? |
1,669,930,542,000 |
This may be a dumb question, but how does a symbolic link preserve permissions?
$ls -ld /proc/1/exe
ls: cannot read symbolic link '/proc/1/exe': Permission denied
so I look up what the link point to with sudo:
$sudo readlink -f /proc/1/exe
/usr/lib/systemd/systemd
$ls -ld /usr/lib/systemd #check if r+x for the dir to traverse it
drwxr-xr-x 14 root root 4.0K May 18 19:34 /usr/lib/systemd/ #yes I do
So I do have rx permission for others, but with symbolic link /proc/1/exe I cannot read the dir (traverse it) without sudo. Why?
|
There is no "good" (i.e. conforming to all the relevant standards) way of achieving what is desired here (showing only some but not all of the content (metadata) of a directory).
But the kernel does tell you that you have no permissions on this object if you ask it:
$ test -r /proc/2072/exe ; echo $?
1
$ test -w /proc/2072/exe ; echo $?
1
$ test -x /proc/2072/exe ; echo $?
1
| permission denied, even when I have permission |
1,669,930,542,000 |
As I understand, the setuid bit means that any user who can execute a file executes it as the file's owner. Is it possible, using ACLs or something, so that only a select list of users execute the file as the owner, and everyone else executes it as themselves?
I'm thinking of it as a lightweight alternative to sudo. Or do I have to use sudo (or su) for this?
|
It is not possible the way you want it, where if one user executes the file then it is executed setuid, but if another user executes the file it is executed without setuid. The setuid bit is not included in access control lists. It is either on for a file or off.
You can get partially toward your goal. You can allow only certain users to execute a setuid file, but other users will be forbidden from executing it at all.
For example, consider the fictitious "privileged_command" program. If you want to make it setuid but only allow members of the adm group to run it through an acl, then you can do this by:
$ chown root.root privileged_command
$ chmod 4000 privileged_command
$ ls -l privileged_command
---S------ 1 root root 152104 Nov 17 21:15 privileged_command
$ setfacl -m g:adm:rx privileged_command
This is a very simple example, and very doable with just group permissions. But you can go on to use setfacl to make a complex set of users/groups who can and can't execute the program. The only issue is, everyone who is capable of running the command will run it setuid. Anyone else will just get permission denied and won't be able to execute it at all.
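The group-permissions-only variant mentioned above can be sketched on a scratch file (the root.root ownership is skipped here so it runs unprivileged; mode 4750 is the setuid bit plus rwx for the owner and rx for the group):

```shell
cd "$(mktemp -d)"
touch privileged_command
chmod 4750 privileged_command    # setuid bit + owner rwx + group rx + others none
stat -c '%A' privileged_command  # -> -rwsr-x---
```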
This is as close as I can think of to what you are actually asking. That being said, what you are asking is probably a Bad Idea™. Best practices for extra privileges are for users who are authorized to exercise those privileges to be required to actively assume that authority when and only when they intend to act with that authority. This is the whole purpose behind sudo. Making certain commands automatically act with different privileges depending on who is executing them is a recipe for accidental misuse. That's the reason why you don't normally just log in as root all the time.
Also, the use of ACLs this way is a recipe for later security holes. ACLs are rarely used, and even more rarely needed. In this case, using them to control who can run a program setuid, it's not immediately obvious who has what privileges, there is no central file or repository showing who has what authority, or what ACL conditions are on which files. It can quickly become an administrative nightmare.
I won't say that there is never a place for using an ACL, but I can count on one hand with a lot of unused fingers the number of times I've seen a convincing case that it's a good idea.
| Is it possible to apply setuid only for a specific user? |
1,669,930,542,000 |
I've written and compiled a short program to allow any user to change the contents of my /sys/class/backlight/intel_backlight/brightness file, but I fail to escalate their permissions. What could I be missing?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define FILENAME "/sys/class/backlight/intel_backlight/brightness"
int main (int argc, char * argv[])
{
    setuid(0); // I didn't intend to keep this, but I included it just in case
    printf("euid %d\n", geteuid());
    system("whoami");
    // Attempt to open FILENAME for writing; print "Can't open..." on failure
    FILE *out = fopen(FILENAME, "w");
    if (out == NULL) {
        printf("Can't open output file %s\n", FILENAME);
        return EXIT_FAILURE;
    }
    fprintf(out, "%s\n", argv[1]); // argv[1] is the brightness value, e.g. 75
    fclose(out);
    return EXIT_SUCCESS;
}
Yet, whoami consistently returns exampleuser instead of root, and the program consistently fails to open the output file.
I compile it and set the uid bit then run the program:
$ gcc -o example.bin example.c # compile
$ sudo chown root:root example.bin # set owner & group
$ sudo chmod 4770 example.bin # set uid bit
$ ./example.bin 75 # execute
euid 1000
exampleuser
Can't open output file /sys/class/backlight/intel_backlight/brightness
The target output file does exist:
$ ls -l /sys/class/backlight/intel_backlight/brightness
-rw-r--r-- 1 root root 4096 May 2 07:57 /sys/class/backlight/intel_backlight/brightness
I'm running Ubuntu 14.04 LTS
|
Either the filesystem doesn't support setuid executables (because it's mounted with the nosuid option, or because it's a FUSE filesystem mounted by a non-root user), or there is a security framework such as SELinux or AppArmor that prevents setuid here (I don't think Ubuntu sets up anything like this though). That, or you didn't actually run these commands — you've made the file non-executable by others, so they'd only work if you were in the root group, which you shouldn't be.
This isn't a good way to do it anyway. It's a lot simpler to change the permissions on the file.
chgrp users /sys/class/backlight/intel_backlight/brightness
chmod g+w /sys/class/backlight/intel_backlight/brightness
Use a group that you're a member of, if you aren't a member of the users group.
Add these commands to /etc/rc.local or some other script that is executed near the end of the boot sequence.
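A minimal /etc/rc.local along those lines might look like the sketch below; the group users and the backlight path are the ones from this answer, so adjust them for your hardware and group membership:

```shell
#!/bin/sh -e
# /etc/rc.local sketch: make the brightness file group-writable at boot
chgrp users /sys/class/backlight/intel_backlight/brightness
chmod g+w /sys/class/backlight/intel_backlight/brightness
exit 0
```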
| Setuid, SUID bit not providing root privileges |
1,669,930,542,000 |
Looks like my MTD2 and MTD3 partitions are write-protected. The OS is booting from an SD card on an ARM Cortex-A9 processor.
root@Xilinx-ZC702-2013_3:~# mount /dev/mmcblk0p1 /mnt/
root@Xilinx-ZC702-2013_3:~#
root@Xilinx-ZC702-2013_3:~# cd /mnt/flash/
root@Xilinx-ZC702-2013_3:/mnt/flash#
root@Xilinx-ZC702-2013_3:/mnt/flash# ls
BOOT.BIN image.ub rootfs.jffs2
root@Xilinx-ZC702-2013_3:/mnt/flash# flashcp -v image.ub /dev/mtd2
While trying to open /dev/mtd2 for read/write access: Permission denied
root@Xilinx-ZC702-2013_3:/mnt/flash#
Even I tried this:
root@Xilinx-ZC702-2013_3:/mnt/flash# flash_eraseall -j /dev/mtd2
flash_eraseall has been replaced by `flash_erase <mtddev> 0 0`; please use it
flash_erase: error!: /dev/mtd2
error 13 (Permission denied)
Here are additional info:
root@Xilinx-ZC702-2013_3:/mnt/flash# mtdinfo
Count of MTD devices: 4
Present MTD devices: mtd0, mtd1, mtd2, mtd3
Sysfs interface supported: yes
root@Xilinx-ZC702-2013_3:~# cat /proc/mtd
dev: size erasesize name
mtd0: 00500000 00010000 "boot"
mtd1: 00020000 00010000 "bootenv"
mtd2: 001202c0 00010000 "image"
mtd3: 00500000 00010000 "jffs2"
Also:
root@Xilinx-ZC702-2013_3:/mnt/flash# mtd_debug info /dev/mtd3
mtd.type = MTD_NORFLASH
mtd.flags = MTD_BIT_WRITEABLE
mtd.size = 5242880 (5M)
mtd.erasesize = 65536 (64K)
mtd.writesize = 1
mtd.oobsize = 0
regions = 0
root@Xilinx-ZC702-2013_3:/mnt/flash# mtd_debug info /dev/mtd2
mtd.type = MTD_NORFLASH
mtd.flags = MTD_BIT_WRITEABLE
mtd.size = 1180352 (1M)
mtd.erasesize = 65536 (64K)
mtd.writesize = 1
mtd.oobsize = 0
regions = 0
root@Xilinx-ZC702-2013_3:/mnt/flash# mtd_debug info /dev/mtd1
mtd.type = MTD_NORFLASH
mtd.flags = MTD_CAP_NORFLASH
mtd.size = 131072 (128K)
mtd.erasesize = 65536 (64K)
mtd.writesize = 1
mtd.oobsize = 0
regions = 0
root@Xilinx-ZC702-2013_3:/mnt/flash# mtd_debug info /dev/mtd0
mtd.type = MTD_NORFLASH
mtd.flags = MTD_CAP_NORFLASH
mtd.size = 5242880 (5M)
mtd.erasesize = 65536 (64K)
mtd.writesize = 1
mtd.oobsize = 0
regions = 0
root@Xilinx-ZC702-2013_3:/mnt/flash#
How do I solve this issue?
dmesg output
I think something is wrong here:
4 ofpart partitions found on MTD device spi32766.0
Creating 4 MTD partitions on "spi32766.0":
0x000000000000-0x000000500000 : "boot"
0x000000500000-0x000000520000 : "bootenv"
0x000000520000-0x0000006402c0 : "image"
mtd: partition "image" doesn't end on an erase block -- force read-only
0x0000006402c0-0x000000b402c0 : "jffs2"
mtd: partition "jffs2" doesn't start on an erase block boundary -- force read-only
full dmesg output is given here
Booting Linux on physical CPU 0x0
Linux version 3.8.11 (root@xilinx) (gcc version 4.7.3 (Sourcery CodeBench Lite 2 013.05-40) ) #3 SMP PREEMPT Mon Apr 7 19:02:27 IST 2014
CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c5387d
CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
Machine: Xilinx Zynq Platform, model: .
Memory policy: ECC disabled, Data cache writealloc
On node 0 totalpages: 262144
free_area_init_node: node 0, pgdat c0bed1c0, node_mem_map c0c0a000
Normal zone: 1520 pages used for memmap
Normal zone: 0 pages reserved
Normal zone: 193040 pages, LIFO batch:31
HighMem zone: 528 pages used for memmap
HighMem zone: 67056 pages, LIFO batch:15
PERCPU: Embedded 7 pages/cpu @c1415000 s6592 r8192 d13888 u32768
pcpu-alloc: s6592 r8192 d13888 u32768 alloc=8*4096
pcpu-alloc: [0] 0 [0] 1
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 260096
Kernel command line: console=ttyPS0,115200
PID hash table entries: 4096 (order: 2, 16384 bytes)
Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
__ex_table already sorted, skipping sort
Memory: 1024MB = 1024MB total
Memory: 1027124k/1027124k available, 21452k reserved, 270336K highmem
Virtual kernel memory layout:
vector : 0xffff0000 - 0xffff1000 ( 4 kB)
fixmap : 0xfff00000 - 0xfffe0000 ( 896 kB)
vmalloc : 0xf0000000 - 0xff000000 ( 240 MB)
lowmem : 0xc0000000 - 0xef800000 ( 760 MB)
pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB)
modules : 0xbf000000 - 0xbfe00000 ( 14 MB)
.text : 0xc0008000 - 0xc04effd4 (5024 kB)
.init : 0xc04f0000 - 0xc0bbf9c0 (6975 kB)
.data : 0xc0bc0000 - 0xc0bedee0 ( 184 kB)
.bss : 0xc0bedee0 - 0xc0c09670 ( 110 kB)
Preemptible hierarchical RCU implementation.
NR_IRQS:16 nr_irqs:16 16
MIO pin 47 not assigned(00001220)
xslcr mapped to f0002000
Zynq clock init
sched_clock: 16 bits at 54kHz, resolution 18432ns, wraps every 1207ms
ps7-ttc #0 at f0004000, irq=43
Console: colour dummy device 80x30
Calibrating delay loop... 1332.01 BogoMIPS (lpj=6660096)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 512
CPU: Testing write buffer coherency: ok
Setting up static identity map for 0x35eec0 - 0x35eef4
L310 cache controller enabled
l2x0: 8 ways, CACHE_ID 0x000000c0, AUX_CTRL 0x72360000, Cache size: 524288 B
CPU1: Booted secondary processor
Brought up 2 CPUs
SMP: Total of 2 processors activated (2664.03 BogoMIPS).
devtmpfs: initialized
NET: Registered protocol family 16
DMA: preallocated 256 KiB pool for atomic coherent allocations
xgpiops e000a000.ps7-gpio: gpio at 0xe000a000 mapped to 0xf004e000
bio: create slab <bio-0> at 0
GPIO IRQ not connected
XGpio: /amba@0/gpio@41200000: registered, base is 252
SCSI subsystem initialized
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Switching to clocksource xttcps_clocksource
NET: Registered protocol family 2
TCP established hash table entries: 8192 (order: 4, 65536 bytes)
TCP bind hash table entries: 8192 (order: 4, 65536 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP: reno registered
UDP hash table entries: 512 (order: 2, 16384 bytes)
UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
bounce pool size: 64 pages
jffs2: version 2.2. (NAND) (SUMMARY) © 2001-2006 Red Hat, Inc.
msgmni has been set to 1478
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
e0001000.serial: ttyPS0 at MMIO 0xe0001000 (irq = 82) is a xuartps
console [ttyPS0] enabled
xdevcfg f8007000.ps7-dev-cfg: ioremap f8007000 to f00c8000 with size 100
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
SCSI Media Changer driver v0.25
xqspips e000d000.ps7-qspi: master is unqueued, this is deprecated
m25p80 spi32766.0: found n25q128, expected n25q128
m25p80 spi32766.0: n25q128 (16384 Kbytes)
4 ofpart partitions found on MTD device spi32766.0
Creating 4 MTD partitions on "spi32766.0":
0x000000000000-0x000000500000 : "boot"
0x000000500000-0x000000520000 : "bootenv"
0x000000520000-0x0000006402c0 : "image"
mtd: partition "image" doesn't end on an erase block -- force read-only
0x0000006402c0-0x000000b402c0 : "jffs2"
mtd: partition "jffs2" doesn't start on an erase block boundary -- force read-only
xqspips e000d000.ps7-qspi: at 0xE000D000 mapped to 0xF00CA000, irq=51
libphy: XEMACPS mii bus: probed
xemacps e000b000.ps7-ethernet: invalid address, use assigned
xemacps e000b000.ps7-ethernet: MAC updated 96:ec:fa:13:9f:95
xemacps e000b000.ps7-ethernet: pdev->id -1, baseaddr 0xe000b000, irq 54
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ULPI transceiver vendor/product ID 0x0424/0x0007
Found SMSC USB3320 ULPI transceiver.
ULPI integrity check: passed.
xusbps-ehci xusbps-ehci.0: Xilinx PS USB EHCI Host Controller
xusbps-ehci xusbps-ehci.0: new USB bus registered, assigned bus number 1
xusbps-ehci xusbps-ehci.0: irq 53, io mem 0x00000000
xusbps-ehci xusbps-ehci.0: USB 2.0 started, EHCI 1.00
usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb1: Product: Xilinx PS USB EHCI Host Controller
usb usb1: Manufacturer: Linux 3.8.11 ehci_hcd
usb usb1: SerialNumber: xusbps-ehci.0
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 1 port detected
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
i2c /dev entries driver
xi2cps e0004000.ps7-i2c: 400 kHz mmio e0004000 irq 57
i2c i2c-0: Added multiplexed i2c bus 1
i2c i2c-0: Added multiplexed i2c bus 2
i2c i2c-0: Added multiplexed i2c bus 3
at24 3-0054: 1024 byte 24c08 EEPROM, writable, 1 bytes/write
i2c i2c-0: Added multiplexed i2c bus 4
i2c i2c-0: Added multiplexed i2c bus 5
i2c i2c-0: Added multiplexed i2c bus 6
i2c i2c-0: Added multiplexed i2c bus 7
i2c i2c-0: Added multiplexed i2c bus 8
pca954x 0-0074: registered 8 multiplexed busses for I2C switch pca9548
xadcps f8007100.ps7-xadc: enabled: yes reference: external
sdhci: Secure Digital Host Controller Interface driver
sdhci: Copyright(c) Pierre Ossman
sdhci-pltfm: SDHCI platform and OF driver helper
mmc0: Invalid maximum block size, assuming 512 bytes
mmc0: SDHCI controller on e0100000.ps7-sdio [e0100000.ps7-sdio] using ADMA
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 10
sit: IPv6 over IPv4 tunneling driver
NET: Registered protocol family 17
NET: Registered protocol family 40
VFP support v0.3: implementor 41 architecture 3 part 30 variant 9 rev 4
Registering SWP/SWPB emulation handler
Freeing init memory: 6972K
mmc0: new high speed SDHC card at address 1234
mmcblk0: mmc0:1234 SA04G 3.67 GiB
mmcblk0: p1
|
When you see weird kernel behavior, dmesg is a great first thing to check. In your case, it gave an important pointer:
4 ofpart partitions found on MTD device spi32766.0
Creating 4 MTD partitions on "spi32766.0":
0x000000000000-0x000000500000 : "boot"
0x000000500000-0x000000520000 : "bootenv"
0x000000520000-0x0000006402c0 : "image"
mtd: partition "image" doesn't end on an erase block -- force read-only
0x0000006402c0-0x000000b402c0 : "jffs2"
mtd: partition "jffs2" doesn't start on an erase block boundary -- force read-only
Your /proc/mtd shows an erase block size of 64 KiB (0x10000). The size of the "image" partition (0x1202c0) is indeed not a multiple of the erase block size. The closest (though slightly smaller) is 0x120000 (1152 KiB); the next largest is 0x130000 (1216 KiB).
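You can verify the misalignment with shell arithmetic, using the sizes from the /proc/mtd output above; a non-zero remainder means the partition does not end on an erase-block boundary, which is exactly why the kernel forces it read-only:

```shell
size=$((0x1202c0))       # "image" partition size from /proc/mtd
erase=$((0x10000))       # 64 KiB erase block
echo $(( size % erase )) # -> 704, i.e. not erase-block aligned
```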
| While trying to open /dev/mtd2 for read/write access: Permission denied |
1,669,930,542,000 |
I would like to use AIDE to help me verify the integrity of my home directory on a shared Linux system. I am not an administrator of this system. I have built and installed AIDE in my home directory and it seems to work properly.
The sysadmin has set permissions on /home to 0751. This allows users to enter /home, but not list the contents of the directory.
For demonstration purposes, consider this overly simple aide.conf:
database=file:/home/kccricket/aide.db
database_out=file:aide.db.new
/home/kccricket R
Given this setup, running aide -i will output:
open_dir():Permission denied: /home
AIDE, version 0.15.1
### AIDE database at aide.db.new initialized.
The resulting AIDE database will be empty. If I run the same command with -V255 (highest verbosity), I can see that AIDE examines every directory in / and then attempts to do the same with /home. It chokes because it can't list the contents of /home.
Is there a way to make this work, short of asking the sysadmin to change the perms on /home?
|
The solution to this problem is to build AIDE from the current development snapshot or the 0.16a2 alpha release.
Version 0.16a2 includes a new option:
root_prefix
The prefix to strip from each file name in the file system before applying
the rules and writing to database. Aide removes a trailing slash from the
prefix. The default is no (an empty) prefix. This option has no effect in
compare mode.
In the case of this question, the new aide.conf file would be:
database=file:/home/kccricket/aide.db
database_out=file:aide.db.new
root_prefix=/home/kccricket
/ R
Thanks to Hannes von Haugwitz ([email protected]) of the AIDE team for this information.
| Can AIDE scan a directory inside a directory the user doesn't have read permissions for? |
1,457,623,331,000 |
I am taking my first steps installing a VPS web server (Debian 8 + Apache 2 + PHP 5.6) and need some help with file/folder permissions, please.
I already found some similar topics about this subject (not exactly the same), and all from 4 or 5 years old. Some of them point to solutions using deprecated PHP methods. Maybe today there are some new methods or solutions.
Well, Apache 2 runs as the www-data user/group. So I created a user called webadmin, put it in the www-data group, and made it the owner of the main web site folder:
adduser webadmin
usermod -a -G www-data webadmin
chown -R webadmin:www-data /var/www/website.com
I also changed the permissions of the public_html folder like this:
find /var/www/website.com/public_html -type f -exec chmod 644 {} +
find /var/www/website.com/public_html -type d -exec chmod 755 {} +
find /var/www/website.com/public_html -type d -exec chmod g+s {} +
And created the folder that will receive the uploaded files (only static files - images):
mkdir /var/www/website.com/public_html/uploads
chown webadmin:www-data /var/www/website.com/public_html/uploads
chmod 774 /var/www/website.com/public_html/uploads
Now, what is happening: all the folders and PHP/HTML files that I upload using SFTP (logged in as webadmin) of course get webadmin as owner. When I use PHP to create subfolders and upload the images, they get the www-data user as owner. In this scenario I am having some permission issues, such as being unable to delete files through SFTP.
I could assign Apache to run as the webadmin user and www-data group, but I think that could create security issues (couldn't it?). Please, is there a standard best practice for configuring the server, or the PHP script that creates the folders and uploads the files, to avoid this issue?
|
Your problem is that new files are not group-writable. Instead of using setgid on the directory to set the group, use ACLs, which offer more flexibility.
The default ACL entry is inherited for new files and directories, granting permissions defined in the ACL. To set a default ACL entry for group webadmin to allow rwx:
setfacl -m default:g:webadmin:rwx /var/www/website.com/public_html/uploads
New files created in the directory will be readable and writable for group webadmin.
If ACLs are not an option, you need to change the umask, which determines the default UNIX permissions for new files. Sensible choices for a new umask are 002 (world-writable bit masked) and 007 (all world permission bits masked, i.e. only group and owner have access).
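How those umask values shape newly created files can be seen in a scratch directory; a plain file is created with mode 666 minus the masked bits:

```shell
cd "$(mktemp -d)"
(umask 022; touch default.txt)  # 666 & ~022 = 644
(umask 002; touch group.txt)    # 666 & ~002 = 664
(umask 007; touch strict.txt)   # 666 & ~007 = 660
stat -c '%a %n' default.txt group.txt strict.txt
```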
To set Apache umask, copy systemd unit file to /etc:
cp /lib/systemd/system/apache2.service /etc/systemd/system/
Configure umask in /etc/systemd/system/apache2.service by appending UMask=<umask> to [Service] section. Note that this affects all files created by Apache.
Changing umask for sftp is only required if you need to change uploaded files/directories from PHP/Apache. The default umask 022 creates files group and world readable, but not group writable. Easiest way to configure default umask for sftp is with pam_umask.
To apply your custom umask only for users in a specific group and only when using sftp, append /etc/pam.d/sshd with:
session [default=1 success=ignore] pam_succeed_if.so user notingroup <yourgroup>
session optional pam_umask.so umask=<umask>
The first rule tells pam to skip the next rule if the user is not in group <yourgroup>, i.e. only apply the next rule if the user is in group <yourgroup>. The second rule sets the umask to <umask>.
Addendum: Note that existing files moved with mv retain their original permissions and ownership. You can apply default permissions from umask and directory setgid/ACLs by copying files with cp -d instead.
| Apache PHP uploads - ownership and permissions |
1,457,623,331,000 |
I think my Linux laptop has been hacked, for three reasons:
Whenever I saved files into the Home folder, the files wouldn't appear - not even in the other folders on my computer.
An unfamiliar .txt file has showed up in my Home folder. Having noticed it, I didn't open it. I immediately had a suspicion that maybe my laptop has been hacked.
When checking my Firewall status, it turned out that it was inactive.
Thus, I have taken the following steps:
I backed-up all of my recent files using two USB Sticks that aren't as important as other USB Sticks which I own - so in case those USB Sticks get infected with the potential malware, it wouldn't infect my other backed-up important files.
I've used ClamTK in order to scan the aforementioned suspicious file -
but apparently, for some reason, it hasn't detected any threats.
I've used chkrootkit for another scan. This is the output (up until that point, nothing seemed to have been infected):
Searching for suspicious files and dirs, it may take a while... The following suspicious files and directories were found:
/usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-id
/usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-id
And also:
Searching for Linux/Ebury - Operation Windigo ssh... Possible Linux/Ebury - Operation Windigo installetd
I was trying - twice - to scan my laptop with F-PROT, with fpscan,
using Ultimate Boot CD. But when I tried getting into the PartedMagic section of the disc in order to use the tool, it just wouldn't work. Twice.
So I was not able to use it whatsoever.
When typing sudo freshclam, I got the following output:
ERROR: /var/log/clamav/freshclam.log is locked by another process
ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).
Then, I scanned the computer using rkhunter.
These are the warnings I got:
/usr/bin/lwp-request [ Warning ]
Performing filesystem checks
Checking /dev for suspicious file types [ Warning ]
Checking for hidden files and directories [ Warning ]
And this is the summary:
System checks summary
=====================
File properties checks...
Files checked: 143
Suspect files: 1
Rootkit checks...
Rootkits checked : 365
Possible rootkits: 0
Applications checks...
All checks skipped
The system checks took: 1 minute and 10 seconds
All results have been written to the log file: /var/log/rkhunter.log
One or more warnings have been found while checking the system.
Please check the log file (/var/log/rkhunter.log)
So, after all that - I do not have access to the rkhunter log file as root:
n-even@neven-Lenovo-ideapad-310-14ISK ~ $ sudo su
neven-Lenovo-ideapad-310-14ISK n-even # /var/log/rkhunter.log
bash: /var/log/rkhunter.log: Permission denied
What should I be doing now?
Help much appreciated!
Thanks a lot.
|
Based on the details in your question, your system is clean.
You're making backups. OK.
clamav comes up clean. That's fine, too.
Based on your output of chkrootkit, your system is clean. Those files listed as suspicious are benign. The Ebury/Windigo detection is a false positive: https://github.com/Magentron/chkrootkit/issues/1
Some of the live discs you tried didn't work. That's OK.
There might already be an updater running as a daemon.
You're trying to execute the log file. View it in a pager instead, like less /var/log/rkhunter.log.
From a logical standpoint, chkrootkit and rkhunter aren't of much use when they scan the same system they execute on, since they are not realtime scanners and any decently packaged rootkit would have sabotaged the scanners before they were run. Also, both have heuristics that result in plenty of false positives.
Saved files not appearing is rarely an indication of system compromise. Without knowing the contents of the "suspicious" .txt file you mention, no conclusion can be drawn from it. DEADJOE is a backup file created by the JOE text editor. The firewall in Linux Mint is disabled by default.
Edit: Added info on DEADJOE file.
| bash: /var/log/rkhunter.log: Permission denied (as root - Linux Mint 18.3) |
1,457,623,331,000 |
I had a strange situation where I've found a number of files and folders that had 000 permissions set. This was easily repairable via:
sudo find . -perm 000 -type f -exec chmod 664 {} \;
sudo find . -perm 000 -type d -exec chmod 775 {} \;
Unfortunately I suddenly realized the problem was a bit more complicated with some odd permissions such as 044 and some other strange settings. It turns out that these are strewn about and unpredictable.
Is there a way to search for permissions such as 0** or other such very limiting permission configurations?
|
I'd use something like this:
find . ! -perm -u=r ! -perm -u=w ! -perm -u=x -ls
Or if you prefer the octal notation:
find . ! -perm -400 ! -perm -200 ! -perm -100 -ls
Unfortunately, I have no idea how to express it as a single -perm option.
The syntax above is standard except for the -ls part (common but not POSIX), which you can replace with -exec ls -disl {} + on systems where find doesn't support -ls to get similar output.
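A quick way to convince yourself the expression works is to run it in a scratch directory; 044 here is one of the odd owner-less modes from the question:

```shell
cd "$(mktemp -d)"
touch normal.txt odd.txt
chmod 644 normal.txt
chmod 044 odd.txt    # no owner permissions at all
find . ! -perm -u=r ! -perm -u=w ! -perm -u=x
# -> ./odd.txt
```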
| How to `find` all files and folders with 0** permissions? |
1,457,623,331,000 |
The /tmp folder has full permissions:
drwxrwxrwt 28 root root 20480 Jan 24 03:14 /tmp
File /tmp/test.txt already exists, with permission 666, owned by user test1.
-rw-rw-rw- 1 test1 test1 0 Jan 24 02:34 /tmp/test.txt
As user test2, I run echo hello >> /tmp/test.txt
bash: /tmp/test.txt: Permission denied
Even when I switch to root, I still can't append to that file. It seems like only the owner of a file in /tmp can modify its content, regardless of the permission bits.
Virtualization: vmware
Operating System: Ubuntu 20.04.3 LTS
Kernel: Linux 5.13.0-27-generic
Architecture: x86-64
|
To anyone having the same problem in the future: /tmp is a sticky directory by default, and on recent kernels the fs.protected_regular sysctl blocks this kind of append (which opens the file with O_CREAT) to files you don't own in world-writable sticky directories, even as root. Therefore, to change others' files you must remove the sticky bit: sudo chmod -t /tmp. Note that you should turn it back on afterward: sudo chmod +t /tmp.
As suggested in comments, a better solution is that you can also create a new folder in /tmp to do the trick without touching the permissions on /tmp.
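Following that suggestion, the sticky-bit behaviour can be inspected on a fresh directory set up like /tmp instead of touching /tmp itself:

```shell
d=$(mktemp -d)
chmod 1777 "$d"     # world-writable with the sticky bit, like /tmp
stat -c '%a' "$d"   # -> 1777
chmod -t "$d"       # clear only the sticky bit
stat -c '%a' "$d"   # -> 777
```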
| Permission denied in /tmp |
1,457,623,331,000 |
Can we set permissions on a Linux box (Scientific Linux in my case), or for a particular directory, such that one can read files but cannot copy, move, or delete them?
Update: My Scenario is,
We developed an GUI program which are having some images which we have created by consuming lot of time and efforts. Our directory structure is :
/GUI/program/GUI.exe
/GUI/images/A/A1.jpeg A2.png .... A200.png
/GUI/images/B/B1.png B2.png .... B200.png
.
.
/GUI/images/I/I1.png I2.png .... I200.png
No need to say ./GUI.exe calls images which required by user to interact.
Now I need to hide /GUI/images/*.
Is there any way?
|
If a file can be read, it can be copied. You can, however, stop the file from being deleted or moved, by not giving write permissions to the directory where the file resides.
Edited with additional info since the question has been amended:
Given the scenario you've now added to the question, you could do this:
create a user that will be used only for this program, e.g. guiuser
change the ownership of the /GUI/images directory to e.g. guiuser
change the permissions of the directory and files inside it so that only guiuser has read permission
change the owner of the program GUI.exe to be owned by guiuser
change permissions of the program to run setuid (chmod u+s /GUI/program/GUI.exe)
When your users run the program, that program will have the access rights of guiuser, so the program will be able to read the files even though the ordinary user doesn't have permission.
| How do I disable copy permissions? |
1,457,623,331,000 |
I rely heavily on aliases in my .bashrc and it crossed my mind that I could improve my privacy / security
and somewhat harden my Ubuntu 20.04 Desktop by changing recursively the permissions of my home directory
and of its subdirectories and files (600 for the files and 700 for the directories). And so I ran:
sudo chmod -R 600 /home/undoxed && sudo chmod -R u+X /home/undoxed
where undoxed is the name of my administrative user.
The recursive flag has not spared .bash_logout, .bashrc, and .profile.
From what I earlier noticed when tinkering with /etc/adduser.conf, setting in it DIR_MODE=0700 along with setting umask in .bashrc to 0077 and then adding a new user
with useradd results in 600 permissions of any file in the new user's home directory with the exception of: .bash_logout .bashrc .profile.
The said three files got 644 permissions, instead of 600 as I expected.
Why is it so that these three files were spared? Could making these files 600, as did I manually with chmod -R 600, make my
OS unusable? Could there be some other consequences of these files having such permissions when compared to 644?
So far everything seems all right but I've not rebooted after running sudo chmod -R 600 /home/undoxed && sudo chmod -R u+X /home/undoxed
today and I'm anxious to do so.
|
TL,DR: .bash* can be 600, but chmod -R 600 is dangerous.
You can make your home directory accessible to only you:
chmod 700 ~
This doesn't need to be recursive. It's impossible to access a file without accessing a directory that it's in. For a directory, this means that it's impossible to access a directory without accessing its parent directory. So making a directory inaccessible (no x permission) makes everything under it inaccessible.
(There is one way to bypass the need to access the containing directory, which is to already have access: if a process already has a file open, it stays open, even if something changes that would now make it impossible for the process to open the file. Here “open file” includes having a directory as the process's working directory. The something that changes can be, for example, a permission change, or a move to a different directory, or the process reducing its privileges.)
There are a few circumstances in which you may need to keep parts of your home directory accessible to other users or to system services (e.g. making .plan accessible to fingerd or ~/public_html to httpd). They're uncommon nowadays when most people use individual machines which don't run any public services. In such a case:
Make your home directory traversable by everyone, but only readable and writable by you: chmod 711 ~
Make the content of your home directory private: chmod go= ~/* ~/.[!.]* ~/..?* (non-hidden files, hidden files except . and those starting with .., and hidden files starting with .. other than .. itself; ignore the error if one of these patterns doesn't match anything)
Allow read (r) access, plus execute/traverse (x) access for directories, to the specific files and directories that need it.
These permissions allow any local user to check whether a file by a given name exists (whether ls ~jerzy/somefile fails with “permission denied” or “no such file or directory”) but not to list the files in your home directory.
Configuration files for programs that you use, such as bash, don't need to be public. The only processes that need to access them run on your account. You can chmod 600 ~/.bash* if you like. It won't make any practical difference if your home directory is only accessible to you anyway, but it won't hurt.
If you set your umask to 077, all your new files will be only accessible to you.
Do not run chmod -R 600. As root, this can make your system so hard to restore that reinstalling is easier. As a non-privileged user, it's easier to recover from, but still painful.
chmod -R 600 removes execute permission from directories, and for a directory, the “execute” permission (the x in chmod, bit 1 in numerical values) means the ability to access file in that directory. The “read” (r, 4) permission only allows listing files in the directory. So chmod -R 600 ~ forbids everyone, even you, from accessing files in your home directory. Then chmod -R u+X ~ restores execute permissions for directories, but only if the system hasn't crashed in between.
Furthermore the sequence removes execute permission from all regular files. Some regular files need execute permission. This obviously includes any independent software that you may have installed in your home directory, and personal scripts or other programs. This can also include files that aren't generally thought of as directly executable; for example, older versions of Ubuntu used the execute permission to indicate that certain kinds of files were trusted, including .desktop files (though newer versions don't use this mechanism anymore).
The sequence also makes all files writable. It can be useful to make some files read-only, for example important files that you wanted to avoid overwriting or deleting accidentally. Many version control programs make certain files read-only because they're internal state files that normally never change, or to indicate that users aren't supposed to change them directly, or to indicate that a file is locked. However, this is rarely critical.
(Incidentally, there are a few files that must be private, such as SSH keys. A recursive chmod in your home directory that adds non-user permissions would break this, and in particular could make it impossible to log into your account over SSH.)
If you want to make all your files private individually, don't change the permissions that apply to you.
chmod -R go= ~
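A small demonstration of the difference, using a scratch directory name so it is safe to run anywhere: go= clears only the group/other bits and leaves the user bits (including x on directories) intact, which a blanket 600 would not.

```shell
mkdir -p private/sub
touch private/file
chmod 755 private private/sub    # typical directory permissions
chmod 644 private/file           # typical file permissions
chmod -R go= private             # strip group/other bits only
stat -c '%a %n' private private/sub private/file
# prints: 700 private
#         700 private/sub
#         600 private/file
```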
| Could setting the permission on these files to 600 render my OS unusable? .bash_logout, .bashrc, .profile |
1,457,623,331,000 |
I copied a project directory to a portable hard disk, wiped my laptop and copied the project back, but now git reports loads of changes due to files that previously had 644 permission changing to 755.
I could recursively chmod all files and directories to 644, but then I get a load more changes in git (so looks like not everything was 644 previously). Is there any way to chmod only files that have 755 permissions?
|
The documentation for find (see man find) writes,
-perm mode File's permission bits are exactly mode (octal or symbolic). [...] See the EXAMPLES section for some illustrative examples.
So you can match files and change their permissions like this
find path/to/files -type f -perm 0755 -exec echo chmod 0644 {} +
Remove the echo when you are comfortable that it's showing you what you expect, and run it again.
| Recursive chmod to 644 where current permission equals 755 |
1,457,623,331,000 |
Let's suppose we have two users: alice and bob.
Now Bob wants to move Alice's ~/Documents directory into his home folder.
What's the best workflow to do that, updating the permissions (from Alice to Bob)?
That is, all the rights Alice has on /home/alice/Documents/ (directories and files, recursively) should be transferred to Bob on /home/bob/Documents/ (directories and files, recursively), and Alice's rights should be removed from /home/bob/Documents.
|
If you change the file owner using chown, the permissions for alice would be transferred to bob. So here's the flow:
sudo mv ~bob/Documents ~bob/Documents.orig
sudo mv ~alice/Documents/ ~bob/Documents
sudo chown -PR bob ~bob/Documents
Edit:
In case you want to overwrite the group as well, use
sudo chown -PR bob:bob ~bob/Documents
Or:
sudo chown -PR bob: ~bob/Documents
to use bob's primary group.
However, beware that this could be problematic in case ~alice/Documents had non-default group permissions. In that case it might be better to use something like
sudo find ~bob/Documents -group alice -exec chown -h bob: {} +
If ACLs are in use, you may want to check those as well.
| Moving a directory from a user to another user, keeping the correct permissions |
1,457,623,331,000 |
I read "Linux Bible 10th Edition" at 130 page. Exercise №7:
Create a /tmp/FILES directory. Find all files under the /usr/share
directory that are more than 5MB and less than 10MB and copy them to
the /tmp/FILES directory.
My command looks like find /usr/share -type f -size +5M -size -10M -exec cp {} /tmp/FILES \;. I ran it as a regular user and got
cp: error copying '/bla/bla' to '/lol/kek': Input/output error find: '/usr/share/bla-bla': Permission denied
After that I tried to run it as the super user and got an error (without Permission denied):
cp: error copying '/bla/bla' to '/lol/kek': Input/output error
Please explain the reason for these errors even when I run the command as the super user. Thank you.
P.S. Please also explain why the command with -exec needs the empty {}.
|
“Input/output error” indicates a low-level I/O error (EIO) either while reading a source file or while writing a target file. This means you have a problem with your storage; dmesg will give you more information.
Such errors are not related to privileges or permissions, which is why running cp as root doesn’t make them disappear (unlike the “Permission denied” error).
Understanding the -exec option of `find` explains the use of {} with -exec.
| Linux Bible 10th Edition: why -exec cp can't copy files even with super user privileges? |
1,457,623,331,000 |
I am completely new to *NIX based OSes. One of the things that baffles me is that a process or program may execute setuid(0) and then perform privileged operations and revert back to its normal uid.
My question is what is the mechanism in *NIX to prevent any arbitrary process from possessing root ?
If I write a simple C program that calls setuid(0) under what conditions will that call succeed and under what conditions will it fail ?
|
The basic idea is that a process may only reduce its privileges. A process may not gain any privileges. There is one exception: a process that executes a program from a file that has a setuid or setgid flag set gains the privileges expressed by this flag.
Note how this mechanism does not allow a program to run arbitrary code with elevated privileges. The only code that can be run with elevated privileges is setuid/setgid executables.
The root user, i.e. the user with id 0, is more privileged than anything else. A process with user 0 is allowed to do anything. (Group 0 is not special.)
Most processes keep running with the same privileges. Programs that log a user in or start a daemon start as root, then drop all privileges and execute the desired program as the user (e.g. the user's login shell or session manager, or the daemon). Setuid (or setgid) programs can operate as the target user and group, but many switch between the caller's privileges and their own additional privileges depending on what they're doing, using the mechanisms I am going to describe now.
Every process has three user IDs: the real user ID (RUID), the effective user ID (EUID), and the saved user ID (SUID). The idea is that a process can temporarily gain privileges, then abandon them when it doesn't need them anymore, and gain them back when it needs them again. There's a similar mechanism for groups, with a real group ID (RGID), an effective group ID (EGID), a saved group ID (SGID) and supplementary groups. The way they work is:
Most programs keep the same real UID and GID throughout. The main exception is login programs (and daemon launchers), which switch their RUID and RGID from root to the target user and group.
File access, and operations that require root privileges, look at the effective UID and GID. Privileged programs often switch their effective IDs depending on whether they're executing a privileged operation.
The saved IDs allow switching the effective IDs back and forth. A program may switch its effective ID between the saved ID and the real ID.
A program that needs to perform certain actions with root privileges normally runs with its EUID set to the RUID, but calls seteuid to set its EUID to 0 before running the action that requires privileges, and calls seteuid again to change the EUID back to the RUID afterwards. In order to perform the call to seteuid(0) even though the EUID at the time is not 0, the SUID must be 0.
The same mechanism can be used to gain group privileges. A typical example is a game that saves high scores of local users. The game executable is setgid games. When the game starts, its EGID is set to games, but it changes back to the RGID so as not to risk performing any action that the user isn't normally allowed to do. When the game is about to save a high score, it changes its EGID temporarily to games. This way:
Because the high score file requires privileges that ordinary users don't have, the only way to add an entry to the high score file is to play the game.
If there's a security vulnerability in the game, the worst that it can do is grant a user permission to the games group, allowing them to cheat on high scores.
If there's a bug in the game that doesn't result in the program calling the setegid function, e.g. a bug that only causes the game to write to an unintended file, then that bug doesn't allow cheating on high scores, because the game doesn't have the permission to write to the high score file without calling setegid.
What I wrote above describes a basic traditional Unix system. Some modern systems have other features that complement the traditional Unix privilege model. These features come in addition to the basic user/group effective/real system and sometimes interact with it. I won't go into any detail about these additional features, but I'll just mention three features of the Linux security model.
The permission to perform many actions is granted via a capability rather than to user ID 0. For example, changing user IDs requires the capability CAP_SETUID, rather than having user ID 0. Programs running as user ID 0 receive all capabilities unless they go out of their way, and programs running with CAP_SETUID can acquire root privileges, so in practice running as root and having CAP_SETUID are equivalent.
Linux has several security frameworks that can restrict what a process can do, even if that process is running as user ID 0. With some security frameworks, unlike with the traditional Unix model and capabilities, a process may gain privileges upon execve due to the security framework's configuration rather than due to flags in the executable file's metadata.
Linux has user namespaces. A process running as root in a namespace only has privileges inside that namespace.
| What is required by a process to set its uid to 0 (root)? |
1,457,623,331,000 |
Let's say I have a file called run.sh that contains:
#! /bin/sh -x
x-www-browser index.html
And ls -l run.sh says:
-rwxr-xr-x 1 myusername myusername 39 Jan 9 19:32 run.sh
And ./run.sh says:
bash: ./run.sh: Permission denied
Why does it not work? Why does sh -x run.sh work perfectly?
More info, since it's not so easy apparently
If I do a sudo, it will not output an error, but won't do anything either.
myusername@crunchbang:/mnt/data$ sudo ./run.sh
[sudo] password for myusername:
myusername@crunchbang:/mnt/data$
|
The filesystem run.sh is on has been mounted noexec.
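To confirm that diagnosis, list the mount options of the filesystem holding the script (the path below is the one from the question; findmnt ships with util-linux):

```shell
# a "noexec" in the options column explains the Permission denied
findmnt -n -o TARGET,OPTIONS --target /mnt/data/run.sh
```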
| .sh file cannot be executed without explicitly calling sh |
1,457,623,331,000 |
Drive A is 2TB in a closet at home.
Drive B is 2TB in my office at work.
I'd like drive A to be the one I use regularly and
to have rsync mirror A to B nightly/weekly.
The problem I have with this is that multiple users have
stuff on A.
I have root run rsync -avz from A to $MYNAME:B
Root can certainly read everything on A, but doesn't
have permission to write non-$MYNAME stuff on B.
How am I supposed to be doing this? Should I have a
passwordless private key on A that logs into root on B?
That seems super dangerous.
Also, I'd prefer to use rsnapshot but it looks like they demand that I draw from B to A using the passwordless private key to root's account that I'm so frightened by.
|
If it is intended as a backup (I'm looking at the tag), not as a remote copy of working directory, you should consider using tools like dar or good old tar. If some important file gets deleted and you won't notice it, you will have no chance to recover it after the weekly sync.
Second advantage is that using tar/dar will let you preserve ownership of the files.
And the third one - you will save bandwidth because you can compress the content.
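A minimal sketch of the tar approach (the destination path is a placeholder): ownership and permissions are recorded in the archive at creation time and restored on extraction when run as root, and -z compresses the stream:

```shell
# dated, compressed archive of /home; modes and ownership are stored
# inside the archive itself
tar -czf "/backup/home-$(date +%F).tar.gz" -C / home
```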
| How do I backup via rsync to a remote machine, preserving permissions and ownership? |
1,457,623,331,000 |
I know that ls -l lists the permissions of every file in a directory but what is the command if I want to see the permissions of just a specific file?
|
To get all the info provided by ls -l for a single file or folder, use the -d option and specify the file:
ls -ld filename
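If you want the same information in a script-friendly form, GNU stat can print just the fields you ask for (filename is a placeholder; BSD/macOS stat uses -f with different format codes):

```shell
# %A symbolic mode, %a octal mode, %U/%G owner and group, %n name
stat -c '%A (%a) %U:%G %n' filename
```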
| What is the command to find the read/write permissions for a single file? |
1,457,623,331,000 |
I tried n latest
cp: cannot create directory '/usr/local/lib/node_modules': Permission denied
cp: cannot create regular file '/usr/local/bin/node': Permission denied
cp: cannot create symbolic link '/usr/local/bin/npm': Permission denied
cp: cannot create symbolic link '/usr/local/bin/npx': Permission denied
cp: cannot create directory '/usr/local/include/node': Permission denied
I already made folder
sudo mkdir -p /usr/local/n && chown -R $(whoami) /usr/local/n/
I am on Ubuntu 18.04.
With sudo
sudo n latest
sudo: n: command not found
|
I solved it
sudo mkdir -p /usr/local/n && sudo chown -R $(whoami) /usr/local/n/
And
sudo chown -R $(whoami) /usr/local/bin /usr/local/lib /usr/local/include /usr/local/share
| Why is permisson denied with n latest? |
1,457,623,331,000 |
Symbolic links are not working as expected on a standard Ubuntu 16 LTS. I get "Permission denied" where I expected to have access, and it is still not working even after chown.
Full example:
sudo rm /tmp/file.txt # if exist, remove
cd ~
sudo chmod 666 data/file.txt
ls -l data/file.txt # "-rw-rw-rw-" as expected
more data/file.txt # working fine
sudo ln -sf $PWD/data/file.txt /tmp/file.txt # fine
ls -l /tmp/file.txt # "lrwxrwxrwx", /tmp/file.txt -> /home/thisUser/file.txt
more /tmp/file.txt # fine
sudo chown -h postgres:postgres /tmp/file.txt
sudo more /tmp/file.txt # DOES NOT WORK! But it is sudo! And 666!
|
These actions result in an error message: Permission denied. The directory /tmp has permissions that include the sticky bit, and the error is a result of the kernel configuration for fs.protected_symlinks.
To show the setting: sysctl fs.protected_symlinks (it equals 1 when enabled). To disable it temporarily, which is not recommended: sysctl -w fs.protected_symlinks=0. To turn it off permanently, which is again not recommended, add fs.protected_symlinks = 0 to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and run sysctl -p.
See patchwork.kernel.org for more information.
To avoid link rot, the leading summary paragraphs on symbolic links from the hyperlink follow.
Kees Cook - July 2, 2012, 8:17 p.m.
This adds symlink and hardlink restrictions to the Linux VFS.
Symlinks:
A long-standing class of security issues is the symlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given symlink (i.e. a
root process follows a symlink belonging to another user). For a likely
incomplete list of hundreds of examples across the years, please see:
http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
The solution is to permit symlinks to only be followed when outside
a sticky world-writable directory, or when the uid of the symlink and
follower match, or when the directory owner matches the symlink's owner.
| Symbolic link not working as expected when changes user |
1,457,623,331,000 |
I'm a user (not root) on a RHEL server. I want to be able to continue editing a file in my home directory, but set a permission so I don't accidentally delete it. Is this possible? Can I make it so I need to be secondarily prompted before deleting it (eg. "Are you sure? y/n")?
|
You remove a file by unlinking it from the directory it's referenced in (a file can also be referenced in more than one directory, or several times under different names in one directory; that's what we usually call hard links).
The permissions that matter here are not those of the file, but those of the directory it's linked from.
So if you place it in a directory you don't have write access to:
mkdir important-files
echo test > important-files/myfile
chmod a-w important-files # make the directory not writable
Then you won't be able to delete myfile, but you'll still be able to modify the file as long as it's writable.
| Prevent myself from accidentally deleting a file |
1,457,623,331,000 |
I am trying to set up an hourly rsync between my local machine and a remote server. I have already created an ssh key to enable a password-less connection to the remote machine.
Now however when I execute the following command from my root account:
rsync -avzhep /home/ vps:/
I got the following error:
rsync: Failed to exec /home/: Permission denied (13)
This seems strange to me, as I am root and I can normally access the /home/ directory.
Could you please advise what I am doing wrong?
|
The -e option is used to specify a different remote shell, so rsync is actually trying to execute "/home/" as that shell, which is not permitted. Try:
rsync -avzh /home/ vps:/
By the way:
The "/" at the end of /home/ indicates that you want to copy the content of /home to the remote root directory ("vps:/").
If your target is the remote "/home" directory, you should use :
rsync -avzh /home/ vps:/home
or
rsync -avzh /home vps:/
| RSync - Permission denied (13) while executing rsync as a root |
1,457,623,331,000 |
I've made today the greatest error on my server using root user:
chown -R 33:33 /
instead of chown -R 33:33 . within some webroot folder.
Well, this brought ssh down. I managed to get it working again; so far apache, mysql and php are still working, but I don't know what will happen if I ever restart them, or whether the server will fail upon rebooting.
Is there any "index" or package which will enable reverting these permission to the right / previous ones?
Here is the console output which help me realize and abort that operation:
Can I do anything to recover?
|
No, no chance. You have to reinstall the system.
There are lists on the internet describing how to re-chown (or chmod) the filesystem, in an attempt to solve this without reinstalling, but you can never cover all files. I'm sorry for the bad news: the only correct solution is reinstalling, even if you aborted the command after a while.
The system may not even boot anymore. Most of the services probably don't start anymore.
I think every system administrator has had to learn that the hard way. That's why I have some rules for myself:
Whenever running a command with -R, re-read it at least 3 times before pressing Enter. Then:
Read it again.
Sure?
Press Enter (and keep the fingers crossed).
| Recovering from a chown -R / [duplicate] |
1,457,623,331,000 |
I have a directory to store invoices -
drwxrwxr-x 2 me www-data 49152 Sep 9 13:38 invoices
There are two applications that write files to this dir.
PHP web application
-rw-r--r-- 1 www-data www-data 7681 Sep 9 13:38 invoice_1.html
Python script
-rw-rw-r-- 1 me me 8911 Sep 4 06:04 invoice_2.html
Now I want to overwrite invoice_2.html from the web application. How do I do that?
I don't want to add www-data to the me group. I don't know exactly how, but I worry that would make my server vulnerable to security threats.
Help me out.
Thanks.
|
Two options (both carried out as root):
First
If you're happy to have me be a member of the www-data group:
Add the user me to the www-data group:
# usermod -a -G www-data me
Set the SetGID flag on the invoices directory:
# chmod g+s /<path>/<to>/invoices
Now, any files created in the invoices directory will have their group set to www-data (the group of the directory) due to the SetGID bit being set. As the user me is in this group, then the user will have permission to write to that file.
Second
If you don't want the user me to be a member of the www-data group, then...
Create a new group - invoices.
# groupadd invoices
Add the users me and www-data to this group.
# usermod -a -G invoices me
# usermod -a -G invoices www-data
Change the group of the invoices directory to this new group (invoices).
# chown :invoices /<path>/<to>/invoices
Make sure that the group invoices has write permission on the directory:
# chmod g+w /<path>/<to>/invoices
Set the SetGID flag on the invoices directory:
# chmod g+s /<path>/<to>/invoices
Now, the invoices directory will be owned by the invoices group, and any files created within it will have their group set to invoices due to the SetGID bit being set on the directory. Both me and www-data can write there, as they are members of the invoices group, which has write permission on the directory.
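A quick way to verify that the SetGID bit took effect (using a scratch directory name, so this is safe to run anywhere):

```shell
mkdir -p demo-invoices
chmod 2775 demo-invoices         # the leading 2 is the SetGID bit
stat -c '%a %A' demo-invoices    # prints: 2775 drwxrwsr-x
```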
| Create files that both www-data and myuser can edit |
1,457,623,331,000 |
I've had this issue on several setups, and I'm unsure of how to handle it.
At first, all of /var is owned by root:root. Clearly I don't want the web directory to be owned by root, so I do chown apache:apache /var/www. However, when someone is ssh'd in as root, if they do something like an svn update or edit a file, it's going to change the ownership back to root.
Is there any way to fix this? I've heard of using something with suPhp, but I'm unsure if it's necessary.
|
Note: In your case, the best would be to just drop root privileges for updates and run your scripts with your apache user:
su apache -c "./update-script"
Otherwise, use chmod g+s /var/www. New files and sub-directories created inside this directory will share the same owner/group as the parent directory, by default. (This spreads recursively.)
According to the coreutils manual this is a GNU-ish extension which is not portable. This seems to work only for the group id, but I think it should be enough to deal with this general kind of issue. (Using umask 002 when running the script might help also.)
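The umask 002 part can be checked in isolation: with it, new files come out group-writable (666 masked by 002 gives 664), which is what lets members of the shared group edit each other's files in a SetGID directory. The file name below is just an example:

```shell
(
  umask 002
  touch report.html              # hypothetical file name
  stat -c '%a' report.html       # prints: 664
)
```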
| Permissions of webserver's root directory |
1,457,623,331,000 |
I went from Windows 7 to Debian 9, copying most of the files I'm using for my projects from an NTFS drive.
I see that :
all the folders I have copied are now with rights drwxrwxrwx instead of
drwxr-xr-x.
all the files have those rights too, instead of -rw-r--r--.
Is there an easy way to correct this, recursively ?
a chmod I think, but I'm not used with its parameters.
Files and folders shall have differents rights.
|
You can use find, like this:
find . -type d -print0 | xargs -r -0 chmod 0755
find . -type f -print0 | xargs -r -0 chmod 0644
The first command chmods the directories and the second one the files.
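The same result can be had without xargs, using find's own -exec ... + (which is POSIX); swap the -exec clause for -print first if you want to preview which paths will be touched:

```shell
find . -type d -exec chmod 0755 {} +
find . -type f -exec chmod 0644 {} +
```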
| All files or folders copied from an NTFS drive are with drwxrwxrwx rights. Can I correct this easily? |
1,457,623,331,000 |
On my Linux Red Hat machines, I do the following from root
# su - starus
$ <-- now I am in starus user
$ su - moon <-- now I want to access moon user from starus user
Password:
But I get prompted for a password!
Please advise why I am prompted for a password even though I added the following with visudo:
moon ALL=(starus) NOPASSWD: ALL
What is wrong?
I also tried to run the following script as user moon, but a password is required:
starus@host sudo -u moon /home/USER261/test.bash
[sudo] password for starus:
|
First of all, root can become any user without needing a password. That's one of the privileges of being the super user. So, with su - starus, you can switch to starus without being prompted. However, at that point, you are starus and no longer root, so you do need a password to switch to moon.
The simple solution is to switch back to root first (just run exit) and then switch to moon.
Now, visudo is irrelevant here. You're not using sudo so any changes you make there (in /etc/sudoers, the file that visudo edits) won't affect the behavior of su, only that of sudo which is not the same program.
In any case, the line you show (moon ALL=(starus) NOPASSWD: ALL) simply means that the user moon can run any command as the user starus with sudo without needing to enter a password. It doesn't mean that anyone can become moon without knowing moon's password. It just means that commands like this don't need a password:
moon@host $ sudo -u starus command
If you are logged in as moon, you can use sudo to run a command as starus without a password.
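For what the question actually wants, starus running commands as moon without a password, the sudoers rule would have to point the other way (edit it with visudo):

```
starus ALL=(moon) NOPASSWD: ALL
```

With that rule in place, the sudo -u moon /home/USER261/test.bash invocation from the question would no longer prompt for a password.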
| visudo + how to get access to user from other user |
1,457,623,331,000 |
Are there any limitations on SFTP that prevents a user from copying between filesystems? I have a SLES with SFTP, and users can't copy/move files between filesystems, even when the target is chmod 777-ed and the user is root -- Filezilla just says "failed". Creating a directory on the target filesystem works fine, as does copying/moving within a filesystem, and if the user SSHes in they can copy to the target filesystem no problem.
There's no SELinux, AppArmor, grsecurity, etc. What could be the problem?
UPDATE: the server is a SLES 10.4
|
SFTP doesn't have a command to move files, only a rename command. In OpenSSH (the de facto standard implementation), this is implemented with the rename system call, which moves a file inside a filesystem. There is no command that can move a file to an arbitrary location, nor is there a command to copy a remote file to another remote location.
With only SFTP access and not shell access, the only way to copy a file is to download and reupload it. You can create symbolic links.
| Can't copy/move between filesystems with SFTP |
1,457,623,331,000 |
I know what it means for a file to have the suid permission: when other users have execute permission for it, they execute it as the owner of the file. But what does it imply when a folder has the suid permission? I did some testing and it seems nothing special happens for the folder. Could anyone help explain a little? Thanks.
I'm using Oracle Linux 7.6.
root:[~]# cat /etc/*release*
Oracle Linux Server release 7.6
NAME="Oracle Linux Server"
VERSION="7.6"
ID="ol"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Oracle Linux Server 7.6"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:6:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.6
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.6
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Oracle Linux Server release 7.6
cpe:/o:oracle:linux:7:6:server
root:[~]#
Below is my testing on a freshly installed server.
root:[~]# pwd
/root
root:[~]# ls -lad /root
dr-xr-x---. 9 root root 4096 Aug 16 22:07 /root
root:[~]# mkdir test
root:[~]# ls -lad test
drwxr-xr-x. 2 root root 4096 Aug 16 22:07 test
root:[~]#
root:[~]# useradd a
root:[~]# passwd a
Changing password for user a.
New password:
BAD PASSWORD: The password is a palindrome
Retype new password:
passwd: all authentication tokens updated successfully.
root:[~]# chmod u+s test
root:[~]#
root:[~]# su - a
[a@localhost ~]$ cd /root/test
-bash: cd: /root/test: Permission denied
[a@localhost ~]$ cd /root
-bash: cd: /root: Permission denied
[a@localhost ~]$ logout
root:[~]#
root:[~]# ls -lad /root
dr-xr-x---. 10 root root 4096 Aug 16 22:07 /root
root:[~]# chmod o+x /root
root:[~]#
root:[~]# su - a
Last login: Fri Aug 16 22:08:54 CST 2019 on pts/0
[a@localhost ~]$ cd /root/test
[a@localhost test]$
[a@localhost test]$ pwd
/root/test
[a@localhost test]$ ls -la .
total 8
drwsr-xr-x. 2 root root 4096 Aug 16 22:07 .
dr-xr-x--x. 10 root root 4096 Aug 16 22:07 ..
[a@localhost test]$ touch file1
touch: cannot touch ‘file1’: Permission denied
[a@localhost test]$ logout
root:[~]#
root:[~]# chmod o+w test/
root:[~]#
root:[~]# su - a
Last login: Fri Aug 16 22:09:31 CST 2019 on pts/0
[a@localhost ~]$
[a@localhost ~]$ cd /root/test
[a@localhost test]$ touch file1
[a@localhost test]$ ls -la
total 8
drwsr-xrwx. 2 root root 4096 Aug 16 22:11 .
dr-xr-x--x. 10 root root 4096 Aug 16 22:07 ..
-rw-rw-r--. 1 a a 0 Aug 16 22:11 file1
[a@localhost test]$ mkdir folder1
[a@localhost test]$ ls -la
total 12
drwsr-xrwx. 3 root root 4096 Aug 16 22:11 .
dr-xr-x--x. 10 root root 4096 Aug 16 22:07 ..
-rw-rw-r--. 1 a a 0 Aug 16 22:11 file1
drwxrwxr-x. 2 a a 4096 Aug 16 22:11 folder1
[a@localhost test]$
As you can see, it seems the files and folders that user a created in /root/test didn't inherit its owner and group. The owner and group are a, not root. Are there any problems with my testing? I'm new to Linux.
|
That doesn't mean anything on your Oracle Linux or on any Linux system.
However it may have meaning on FreeBSD. Quoting from the chmod(2) manpage:
If mode ISUID (set UID) is set on a directory, and the MNT_SUIDDIR option
was used in the mount of the file system, then the owner of any new files
and subdirectories created within this directory are set to be the same
as the owner of that directory. If this function is enabled, new directories will inherit the bit from their parents. Execute bits are removed
from the file, and it will not be given to root. This behavior does not
change the requirements for the user to be allowed to write the file, but
only the eventual owner after it has been created. Group inheritance is
not affected.
This feature is designed for use on fileservers serving PC users via ftp,
SAMBA, or netatalk. It provides security holes for shell users and as
such should not be used on shell machines, especially on home directories. This option requires the SUIDDIR option in the kernel to work.
Only UFS file systems support this option. For more details of the suiddir mount option, see mount(8).
This is not supported on other *BSD systems like NetBSD or OpenBSD.
| What does it mean for a folder to have suid permission? [duplicate] |
1,457,623,331,000 |
If I create any new file/directory/link,
sham@mohet01-ubuntu:~$ ls -l
total 48
drwxr-xr-x 3 sham sham 4096 Apr 5 19:03 Desktop
drwxrwxr-x 2 sham sham 4096 Apr 7 11:19 docs
drwxr-xr-x 3 sham sham 4096 Apr 5 18:28 Documents
drwxr-xr-x 2 sham sham 4096 Apr 5 18:56 Downloads
-rw-r--r-- 1 sham sham 8980 Apr 5 10:43 examples.desktop
drwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Music
drwxr-xr-x 2 sham sham 4096 Apr 5 18:46 Pictures
drwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Public
drwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Templates
drwxr-xr-x 2 sham sham 4096 Apr 5 03:46 Videos
I see the group name as sham. The user sham is the owner of these files.
Question:
How can a group name be the same as the owner name? What does it imply for a group name to be the same as the owner name?
|
User names and group names exist in two independent namespaces, so the same name does not need to imply anything. It is simply a group which happens to have this name (the numeric group ID will likely differ from the numeric user ID, for example).
Nevertheless, many Linux distributions create a new group together with a new user account, and this group becomes the default group for that user (containing, by default, only this one user). So matching group and user names usually (!) imply that the file belongs to a group with only this one user in it. (But there is nothing preventing an admin from adding more users to this group, or even creating a group of this name that is not related to the user of the same name in any way.)
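The two databases can be inspected directly; a small sketch using root, a name that exists in both the user and the group database on typical Linux systems:

```shell
# The user database and the group database are separate namespaces;
# the same name can appear in both, with independently assigned IDs.
getent passwd root    # entry from the user database
getent group root     # entry from the group database
```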
| How group name works on creation of any file? |
1,457,623,331,000 |
In short, I'm trying to create a file server with a directory in which any user in a group has read-write-execute permission on any file placed in that directory. My research suggests that ACL is the right tool for the job, but I've run into an issue where it does not seem to be behaving as expected.
I'm running the latest Ubuntu Server LTS 16.04.1, and I've ensured that ACL is enabled for the drive in question.
For this example, I have 2 users, alex and usera, and both users belong to the fileserver group. I have created a test directory like so:
alex@tstsvr:/$ sudo mkdir -p /srv/fstest/test
alex@tstsvr:/$ sudo chown root:fileserver /srv/fstest/test
alex@tstsvr:/$ sudo chmod 770 /srv/fstest/test
In that directory, alex creates a simple test file:
alex@tstsvr:/$ cd /srv/fstest/test/
alex@tstsvr:/srv/fstest/test$ echo 123 > test.txt
$ ll
total 12
drwxrwx--- 2 root fileserver 4096 Dec 7 17:09 ./
drwxr-xr-x 4 root root 4096 Dec 7 16:46 ../
-rw-rw-r-- 1 alex alex 4 Dec 7 17:09 test.txt
As we can see, the file belongs to him and is in his group. Next he sets the file permissions to 770 and sets some ACL for the fileserver group to have rwx permissions on the file.
alex@tstsvr:/srv/fstest/test$ chmod 770 test.txt
alex@tstsvr:/srv/fstest/test$ setfacl -m g:fileserver:rwx test.txt
alex@tstsvr:/srv/fstest/test$ ll
total 12
drwxrwx--- 2 root fileserver 4096 Dec 7 17:09 ./
drwxr-xr-x 4 root root 4096 Dec 7 16:46 ../
-rwxrwx---+ 1 alex alex 4 Dec 7 17:09 test.txt*
Now for usera, everything seems to be working perfectly:
usera@tstsvr:/srv/fstest/test$ getfacl test.txt
# file: test.txt
# owner: alex
# group: alex
user::rwx
group::rw-
group:fileserver:rwx
mask::rwx
other::---
usera@tstsvr:/srv/fstest/test$ cat test.txt
123
But, if user alex changes the permissions to 700...:
alex@tstsvr:/srv/fstest/test$ chmod 700 test.txt
It seems ACL is not able to override those permissions, and usera is no longer able to read that file:
usera@tstsvr:/srv/fstest/test$ getfacl test.txt
# file: test.txt
# owner: alex
# group: alex
user::rwx
group::rw- #effective:---
group:fileserver:rwx #effective:---
mask::---
other::---
usera@tstsvr:/srv/fstest/test$ cat test.txt
cat: test.txt: Permission denied
My understanding was that because the file has a named group ACL entry, it would override those file permissions, but this seems to not be the case.
Did I do something wrong, am I misunderstanding, or is this not actually possible?
|
The POSIX permission bits take priority over your ACL: the group-class bits of the file mode are mapped onto the ACL mask. So when you chmod the file after the ACL is given, you are actually changing the ACL mask, which caps the effective permissions of every named user and group entry. You can restore the mask with setfacl -m m::rwx test.txt. There is a great write-up here: https://serverfault.com/questions/352783/why-does-chmod1-on-the-group-affect-the-acl-mask
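The arithmetic behind it can be sketched in a few lines (a didactic model, not a real ACL API): the effective permission of each named entry is the bitwise AND of the entry and the mask, and chmod rewrites the mask via the group class.

```shell
entry=7                     # group:fileserver:rwx from the setfacl call
mask=0                      # chmod 700 set the group class, i.e. the mask, to ---
echo "$(( entry & mask ))"  # 0 -> effective ---, hence Permission denied
mask=7                      # restoring the mask (e.g. setfacl -m m::rwx)
echo "$(( entry & mask ))"  # 7 -> effective rwx again
```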
| ACL named group permissions not overriding file permissions. Why? |
1,457,623,331,000 |
I connected to bastion-staging (ftp server-name) through ssh (from local machine).
I have get access through sudo bash.
Now I am trying to ssh from bastion-staging (myserver-name) to ecash (another-server).
But when I run:
ssh root@ecash
I get an error:
WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '/root/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /root/.ssh/id_rsa
Moreover, I have no password for ecash; I've been told I can ssh as root to another server (e.g. ecash).
What should I do now?
|
Fix the permissions for the file indicated in the error message (running the following as root):
chmod 600 /root/.ssh/id_rsa
The id_rsa file contains a private key, required in your case to connect to the ecash server. It should be protected from access by unauthorised accounts (much like a password).
With 777 permissions, however, the file is readable by anyone, and SSH refuses to use it. Changing the permissions to 600 makes the file readable and writable by the owner (root) account only, which satisfies ssh's requirement that the key not be accessible by other accounts.
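The effect of the fix can be demonstrated on a throwaway file (a stand-in for the key, not a real one):

```shell
f=$(mktemp)         # stand-in for /root/.ssh/id_rsa
chmod 777 "$f"
stat -c '%a' "$f"   # 777: world-readable, ssh refuses such a key
chmod 600 "$f"
stat -c '%a' "$f"   # 600: owner read/write only, acceptable to ssh
rm -f "$f"
```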
| Error permission denied through ssh |
1,457,623,331,000 |
I'm trying to write a Bash script where I go through folders recursively, listing and counting files and folders.
In a way it works, but if "find" gets to a directory where permission is denied, it just continues the script, skipping the directory without counting the files in it or telling me the directory was permission denied
(other than a message in the terminal, which I can't use since the script is run through file-manager custom actions).
I would like "find", when it finds a permission-denied folder, to stop the searching process and report to me which folder has permission denied, so I know what it's skipping and why.
Half my code looks like this
#!/bin/bash
allfolders=("$@")
nfolders="0"
Nfilesinfolders="0"
filesinfolder="0"
results=""
noteadded="0"
for directory in "${allfolders[@]}"; do
echo "This is where I try and insert the code examples below."
echo "and want it to exit with a zenity error"
nfolders=$(( nfolders + 1 ))
echo "$nfolders"
if [[ $nfolders -ge 11 ]]
then
if [[ $noteadded -ge 0 ]]
then
results+="\n"
results+="Not adding any more folders to the list. Look at the top for total number of files"
noteadded=1
fi
else
results+="$directory\n"
fi
echo "This below attempt only worked on the top folder not folders in it"
if [[ -r "$directory" ]] && [[ -w "$directory" ]]
then
filesinfolder=$(find "$directory" -depth -type f -printf '.' | wc -c)
Nfilesinfolders=$(( Nfilesinfolders + filesinfolder ))
else
zenity --error --title="Error occured check message" --text="The directory\n $directory\n is not readable or write-able to you $USER\n please run as root"
exit $?
fi
done
Here are then some of my failed attempts
find "$directory" -depth -type d -print0 | while IFS= read -r -d $'\0' currentdir
do
echo "Checking "$currentdir" in directory "$directory""
if [[ ! -r "$currentdir" ]] && [[ ! -w "$currentdir" ]]
then
zenity --error --title="Error occurred check message" --text="The directory\n $currentdir\n is not readable or write-able to you $USER\n please run as root"
exit $?
fi
done
The above code seems like its just getting skipped and continues on the script.
The next one looked like this. I could get it to report an error, but not tell me which folder went wrong.
shredout=$(find "$directory" -depth -type d -print0 2>&1 | grep "Permission denied" && echo "found Permission Denied" && checkfolderperm="1" )
if [[ $checkfolderperm -eq 1 ]]
then
zenity --error --title="Error occurred check message" --text="The directory\n $directory\n is not readable or write-able to you $USER\n please run as root"
exit $?
fi
But the above also seems like its just getting skipped.
the last one is all-most like my first try.
while IFS= read -r -d $'\0' currentdir; do
echo "going through file = $currentdir in folder $directory"
if [[ ! -r "$currentdir" ]] && [[ ! -w "$currentdir" ]]
then
zenity --error --title="Error occured check message" --text="The directory\n $currentdir\n is not readable or write-able to you $USER\n please run as root"
exit $?
fi
done < <(find "$directory" -depth -type d -print0)
but that also gets skipped.
Is there any way for me to go through folders with find, then stop and report if a directory is permission denied?
I've come across bash "traps" and bash "functions" but can't figure out if they are my solution or how to use them.
This is the resulting code after help from "meuh".
It stops the script and reports exactly which folder(s) it doesn't have permission to access. Hope it can help others like it did me.
if finderrors=$(! find "$directory" -depth -type d 2>&1 1>/dev/null)
then
zenity --error --title="Error occurred check message" --text="$finderrors"
exit $?
fi
|
find will set its return code to non-zero if it saw an error. So you can do:
if ! find ...
then echo had an error >&2
fi |
while ...
(I'm not sure what you want to do with the find output).
To collect all the error messages from find on stderr (file descriptor 2) you can redirect 2 to a file. Eg:
if ! find ... 2>/tmp/errors
then zenity --error --text "$(</tmp/errors)"
fi |
while ...
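A runnable sketch of this pattern, using a nonexistent path to trigger the error (a permission-denied directory behaves the same way, since find reports it on stderr and exits non-zero):

```shell
# Capture stderr into a variable while discarding normal output:
# 2>&1 sends stderr to the captured stream, then 1>/dev/null drops stdout.
if ! errors=$(find /no/such/directory -type d 2>&1 1>/dev/null)
then
    echo "find failed: $errors"
fi
```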
| Exit bash when find gets to a folder with permission denied |
1,457,623,331,000 |
I am having a very strange permission issue when trying to access any of the files in a certain directory as a specific user (adventho). This had been working fine for several months; I just recently noticed these errors, even though I haven't changed anything on the system for a while. This is what happens when trying to access any of the files as the user:
# su adventho
adventho@snail:/root
$ stat /home/adventho/public_html/hotelimg/187-1-1403380618.jpg
stat: cannot stat `/home/adventho/public_html/hotelimg/187-1-1403380618.jpg': Permission denied
However I can access it fine as root:
root@snail:~# stat /home/adventho/public_html/hotelimg/187-1-1403380618.jpg
File: `/home/adventho/public_html/hotelimg/187-1-1403380618.jpg'
Size: 528535 Blocks: 1040 IO Block: 4096 regular file
Device: 906h/2310d Inode: 918000 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1030/adventho) Gid: ( 1008/adventho)
Access: 2014-12-15 17:23:44.318374774 -0500
Modify: 2014-06-21 15:56:58.000000000 -0400
Change: 2014-10-23 16:44:57.502377342 -0400
Birth: -
In fact, doing an ls -la on the directory produces a bunch of "?" in the output, even for . and ..:
d????????? ? ? ? ? ? .
d????????? ? ? ? ? ? ..
-????????? ? ? ? ? ? 106-1-1239840962_800_600_180_135.jpg
-????????? ? ? ? ? ? 106-1-1239840962_800_600_240_180.jpg
-????????? ? ? ? ? ? 106-1-1239840962_800_600.jpg
-????????? ? ? ? ? ? 106-2-1239840963_800_600_180_135.jpg
-????????? ? ? ? ? ? 106-2-1239840963_800_600_240_180.jpg
-????????? ? ? ? ? ? 106-2-1239840963_800_600.jpg
-????????? ? ? ? ? ? 106-3-1239840964_800_600_180_135.jpg
-????????? ? ? ? ? ? 106-3-1239840964_800_600_240_180.jpg
-????????? ? ? ? ? ? 106-3-1239840964_800_600.jpg
But if I do ls -ld hotelimg/ I get an output:
drw-rw-r-- 2 adventho www-data 69632 Dec 15 17:23 hotelimg/
If I add anything after the slash, I get permission denied:
$ ls -ld hotelimg/../index.php
ls: cannot access hotelimg/../some_existent_file: Permission denied
$ ls -ld hotelimg/.
ls: cannot access hotelimg/.: Permission denied
$ ls -ld hotelimg/../
ls: cannot access hotelimg/../: Permission denied
I tried doing an strace on the ls and this is the output:
$ strace ls /home/adventho/public_html/hotelimg/187-1-1403380618.jpg
execve("/bin/ls", ["ls", "/home/adventho/public_html/hotel"...], [/* 13 vars */]) = 0
brk(0) = 0x1db6000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a148000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26612, ...}) = 0
mmap(NULL, 26612, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f931a141000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260f\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=126232, ...}) = 0
mmap(NULL, 2226160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9319d0b000
mprotect(0x7f9319d29000, 2093056, PROT_NONE) = 0
mmap(0x7f9319f28000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1d000) = 0x7f9319f28000
mmap(0x7f9319f2a000, 2032, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9319f2a000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/librt.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220!\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=31744, ...}) = 0
mmap(NULL, 2128856, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9319b03000
mprotect(0x7f9319b0a000, 2093056, PROT_NONE) = 0
mmap(0x7f9319d09000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f9319d09000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libacl.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\"\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=35320, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a140000
mmap(NULL, 2130560, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f93198fa000
mprotect(0x7f9319902000, 2093056, PROT_NONE) = 0
mmap(0x7f9319b01000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f9319b01000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\357\1\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1603600, ...}) = 0
mmap(NULL, 3717176, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931956e000
mprotect(0x7f93196f0000, 2097152, PROT_NONE) = 0
mmap(0x7f93198f0000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x182000) = 0x7f93198f0000
mmap(0x7f93198f5000, 18488, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f93198f5000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\r\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=14768, ...}) = 0
mmap(NULL, 2109696, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931936a000
mprotect(0x7f931936c000, 2097152, PROT_NONE) = 0
mmap(0x7f931956c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f931956c000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\\\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=131107, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13f000
mmap(NULL, 2208672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f931914e000
mprotect(0x7f9319165000, 2093056, PROT_NONE) = 0
mmap(0x7f9319364000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7f9319364000
mmap(0x7f9319366000, 13216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9319366000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libattr.so.1", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0000\25\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=18672, ...}) = 0
mmap(NULL, 2113880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9318f49000
mprotect(0x7f9318f4d000, 2093056, PROT_NONE) = 0
mmap(0x7f931914c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f931914c000
close(3) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13e000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a13c000
arch_prctl(ARCH_SET_FS, 0x7f931a13c7a0) = 0
mprotect(0x7f931914c000, 4096, PROT_READ) = 0
mprotect(0x7f9319364000, 4096, PROT_READ) = 0
mprotect(0x7f931956c000, 4096, PROT_READ) = 0
mprotect(0x7f93198f0000, 16384, PROT_READ) = 0
mprotect(0x7f9319b01000, 4096, PROT_READ) = 0
mprotect(0x7f9319d09000, 4096, PROT_READ) = 0
mprotect(0x7f9319f28000, 4096, PROT_READ) = 0
mprotect(0x61a000, 4096, PROT_READ) = 0
mprotect(0x7f931a14a000, 4096, PROT_READ) = 0
munmap(0x7f931a141000, 26612) = 0
set_tid_address(0x7f931a13ca70) = 22762
set_robust_list(0x7f931a13ca80, 0x18) = 0
futex(0x7fff8335414c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7f931a13c7a0) = -1 EAGAIN (Resource temporarily unavailable)
rt_sigaction(SIGRTMIN, {0x7f9319153ad0, [], SA_RESTORER|SA_SIGINFO, 0x7f931915d0a0}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {0x7f9319153b60, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7f931915d0a0}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
statfs("/sys/fs/selinux", 0x7fff833540a0) = -1 ENOENT (No such file or directory)
statfs("/selinux", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=1440781, f_bfree=1145015, f_bavail=1071826, f_files=366480, f_ffree=337819, f_fsid={-205162666, 1274914527}, f_namelen=255, f_frsize=4096}) = 0
brk(0) = 0x1db6000
brk(0x1dd7000) = 0x1dd7000
open("/proc/filesystems", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a147000
read(3, "nodev\tsysfs\nnodev\trootfs\nnodev\tb"..., 1024) = 385
read(3, "", 1024) = 0
close(3) = 0
munmap(0x7f931a147000, 4096) = 0
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=110939968, ...}) = 0
mmap(NULL, 110939968, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f931257c000
close(3) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=39, ws_col=153, ws_xpixel=0, ws_ypixel=0}) = 0
stat("/home/adventho/public_html/hotelimg/187-1-1403380618.jpg", 0x1db70d0) = -1 EACCES (Permission denied)
open("/usr/share/locale/locale.alias", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2570, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f931a147000
read(3, "# Locale name alias data base.\n#"..., 4096) = 2570
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f931a147000, 4096) = 0
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "ls: ", 4ls: ) = 4
write(2, "cannot access /home/adventho/pub"..., 70cannot access /home/adventho/public_html/hotelimg/187-1-1403380618.jpg) = 70
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, ": Permission denied", 19: Permission denied) = 19
write(2, "\n", 1
) = 1
close(1) = 0
close(2) = 0
exit_group(2) = ?
I notice that it mentions selinux, however it is not installed. Just to be double sure, I installed policycoreutils (which installed 55 other packages) and executed sestatus and the output was "disabled". Everything that has ever been installed on the server (with the only exception of lfd/csf) has been from the repositories.
I am stumped as to what is causing these permission denied errors.
|
Read permissions on a directory only allow you to list its contents. To actually be able to access the contents, you need execute permissions. Conversely, having only execute permissions will allow you to access the contents, but not list them. See Execute vs Read bit. How do directory permissions in Linux work?
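The situation and the fix can be reproduced on a throwaway directory (the modes mirror the drw-rw-r-- hotelimg/ directory from the question):

```shell
d=$(mktemp -d)
chmod 664 "$d"       # read and write but no execute (search) bit
stat -c '%A' "$d"    # drw-rw-r-- : listable, but entries are inaccessible
chmod 775 "$d"       # add the execute bit
stat -c '%A' "$d"    # drwxrwxr-x : entries can now be accessed
rmdir "$d"
```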
| Cannot access a directory with permissions drw-rw-r-- [duplicate] |