date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,533,526,043,000 |
I have a file that has 10 parent directories. When I change the file's owner with the chown command, the new owner still can't write to the file because the file's parent directories still have a different owner. Is there a way to also change the parent directories' owner?
If not, what's the best way to temporarily grant a user permission to change a file inside nested directories?
|
The user only needs the execute bit set on each directory containing a file to be able to access the file. If the execute bit is set on the directory and the user has write permission on the file, he can edit that file. Without write permission on the directory he won't be able to create or delete files (even if he owns them), but he will be able to edit them. Note though that without read permission (only execute is set) the user won't be able to get a directory listing; he'll have to know the exact name of the file to be able to access it.
So if our file is at dirname/filename.txt and user owns the file:
rwx--x--x dirname
User can edit dirname/filename.txt
User cannot create dirname/filename2.txt
User cannot delete dirname/filename.txt
User cannot ls dirname
rwxr-xr-x dirname
User can edit dirname/filename.txt
User cannot create dirname/filename2.txt
User cannot delete dirname/filename.txt
User can ls dirname
rwxrwxrwx dirname
User can do anything
NOTE
These rules do not apply if the directory has the sticky bit applied. The sticky bit changes the behavior completely (the /tmp directory has the sticky bit applied).
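A quick way to sanity-check the first case (rwx--x--x) is to build it on a throwaway directory; a minimal sketch, assuming GNU coreutils (whose stat -c prints the octal mode):

```shell
# Sketch of the rwx--x--x case on a temporary directory
d=$(mktemp -d)
mkdir "$d/dirname"
echo hello > "$d/dirname/filename.txt"
chmod u=rwx,go=x "$d/dirname"    # rwx--x--x
stat -c '%a' "$d/dirname"        # prints 711
cat "$d/dirname/filename.txt"    # prints hello: x on dirname is enough to reach the file
```

The listing restriction only bites for a user other than the owner, so it is not shown here.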
| change owner of a file and its parent directories |
1,533,526,043,000 |
I want to know if the following scenario is possible.
I am a user called www-admin and I am in the group www-admin. I am also a sudoer.
I want to have all the files I create in the folder /var/www/domain to have the group domain and the user domain.
If I, as user www-admin, create and edit files inside /var/www/domain, they should all end up owned domain:domain.
Is this possible?
|
You can set up a directory so that files created in it belong to a particular group regardless of the effective group ID of the process which creates them. This is called BSD semantics and you should set SGID bit for the directory to enable it:
chgrp domain /var/www/domain
chmod g+s /var/www/domain
This does not change the group of the files and directories already in /var/www/domain; you'll need to take care of that manually (for example using -R with chgrp above). Note that all subdirectories subsequently created in the directory will also inherit SGID, automatically enabling BSD semantics for the subdirectories as well.
The same semantics is not possible for the owner, though.
If you need to achieve this for both the owner and the group you probably need to ensure that the code which creates files under /var/www/domain runs with effective user domain and effective group domain. You can use sudo to do this:
sudo -u domain -g domain your_command
If domain is the primary group of the user domain, the following will suffice:
sudo -u domain your_command
Since this solution easily takes care of both owner and group there is no need for BSD semantics.
If you don't want to change the effective user and group of the process which creates the files (for example because it is a large server performing a number of other unrelated functions), you may need to externalize the part of the functionality which creates the files into a separate process whose effective UID and GID can be changed accordingly or you can use BSD semantics and try to achieve your ultimate goal by relying solely on the group.
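A minimal sketch of the setgid mechanics on a throwaway directory (assuming GNU stat; the umask is pinned only to make the modes reproducible):

```shell
# The setgid bit shows up as a leading 2 in the octal mode,
# and new subdirectories inherit it automatically
umask 022
d=$(mktemp -d)
mkdir "$d/domain"
chmod 2775 "$d/domain"        # same as chmod 775 followed by chmod g+s
stat -c '%a' "$d/domain"      # prints 2775
mkdir "$d/domain/sub"
stat -c '%a' "$d/domain/sub"  # prints 2755: the setgid bit was inherited
```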
| Create / Edit files in specific folder using different user / group |
1,533,526,043,000 |
I am putting some scripts in ~/bin and am wondering what appropriate file permissions are. To extend my question somewhat, what permissions make sense for */bin folders all over my system, and why?
|
Usually:
they are writable by the owner (root for /bin, /usr/bin, ...)
they are executable and readable by everyone else
But your question should instead be:
who should be able to modify the directory?
who should be able to read the content and execute the binaries?
Once you answer these questions the permissions are straightforward.
An example:
$ ls -ld /bin /opt/local/bin /usr/bin ${HOME}/bin
drwxr-xr-x 8 corti corti 272 Apr 11 2011 /Users/corti/bin
drwxr-xr-x 39 root wheel 1326 Jul 21 19:37 /bin
drwxr-xr-x 948 root admin 32232 Oct 10 08:36 /opt/local/bin/
drwxr-xr-x 1205 root wheel 40970 Oct 5 09:01 /usr/bin
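If you create ~/bin yourself, install -d lets you make the directory and set the conventional mode in one step; a sketch on a temporary path standing in for $HOME:

```shell
# install -d creates the directory (like mkdir -p) with an explicit mode
home=$(mktemp -d)           # stand-in for $HOME in this sketch
install -d -m 755 "$home/bin"
stat -c '%a' "$home/bin"    # prints 755
```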
| What are appropriate execution permissions for ~/bin? |
1,533,526,043,000 |
Why does setting umask to 0077 make a gpg public key unavailable to apt when installing a package? E.g.:
umask 0077
curl -fsSLo /usr/share/keyrings/brave-browser-beta-archive-keyring.gpg https://brave-browser-apt-beta.s3.brave.com/brave-browser-beta-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/brave-browser-beta-archive-keyring.gpg] https://brave-browser-apt-beta.s3.brave.com/ stable main">/etc/apt/sources.list.d/brave-browser-beta.list
apt update
apt install brave-browser-beta
The above does not work, I get this output:
Err:4 https://brave-browser-apt-beta.s3.brave.com stable InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0B31DBA06A8A26F9
Reading package lists... Done
W: GPG error: https://brave-browser-apt-beta.s3.brave.com stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0B31DBA06A8A26F9
E: The repository 'https://brave-browser-apt-beta.s3.brave.com stable InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
This does work:
umask 0022
curl -fsSLo /usr/share/keyrings/brave-browser-beta-archive-keyring.gpg https://brave-browser-apt-beta.s3.brave.com/brave-browser-beta-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/brave-browser-beta-archive-keyring.gpg] https://brave-browser-apt-beta.s3.brave.com/ stable main">/etc/apt/sources.list.d/brave-browser-beta.list
apt update
apt install brave-browser-beta
Why does setting umask to 0077 (and then downloading the public key) make a gpg public key unavailable to apt? The key was downloaded as root, and apt update was run as root too, so why this issue?
|
apt runs download-related operations using a sandbox user by default, _apt. I can’t check right now, but it’s possible that apt update key verification is done using this user too, which would mean the keys have to be readable by the _apt user.
See Why are directory permissions preventing "sudo apt install" using a file? for a similar problem.
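The umask effect itself is easy to demonstrate on throwaway files (a sketch assuming GNU stat; the keyring names are just placeholders):

```shell
# umask 0077 strips group/other bits from newly created files,
# so the downloaded keyring ends up unreadable to the _apt user
d=$(mktemp -d)
umask 0077
touch "$d/keyring.gpg"
stat -c '%a' "$d/keyring.gpg"    # prints 600: only the owner can read it
umask 0022
touch "$d/keyring2.gpg"
stat -c '%a' "$d/keyring2.gpg"   # prints 644: world-readable, so _apt can use it
```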
| Why does setting `umask` to `0077` (and then downloading public key) make a gpg public key unavailable for apt? |
1,533,526,043,000 |
Given,
touch /tmp/abc
ln -vs abc /tmp/def
$ ls -l /tmp/???
-rw-rw-r-- 1 ubuntu ubuntu 0 Apr 10 22:10 /tmp/abc
lrwxrwxrwx 1 ubuntu ubuntu 3 Apr 10 22:10 /tmp/def -> abc
Why am I getting:
$ sudo chown syslog: /tmp/def
chown: cannot dereference '/tmp/def': Permission denied
$ sudo chown --dereference syslog: /tmp/def
chown: cannot dereference '/tmp/def': Permission denied
Ref:
chown(1):
--dereference
affect the referent of each symbolic link
(this is the default), rather than the
symbolic link itself
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
|
This answer supposes that this is in place:
# sysctl fs.protected_symlinks
fs.protected_symlinks = 1
The root user (which this security feature was especially intended to affect), like any other user, is subject to the sysctl fs.protected_symlinks described in proc(5):
/proc/sys/fs/protected_symlinks (since Linux 3.6)
When the value in this file is 0, no restrictions are
placed on following symbolic links (i.e., this is the
historical behavior before Linux 3.6). When the value in
this file is 1, symbolic links are followed only in the
following circumstances:
• the filesystem UID of the process following the link
matches the owner (UID) of the symbolic link (as
described in credentials(7), a process's filesystem UID
is normally the same as its effective UID);
• the link is not in a sticky world-writable directory;
or
• the symbolic link and its parent directory have the
same owner (UID)
A system call that fails to follow a symbolic link because
of the above restrictions returns the error EACCES in
errno.
Here:
root != ubuntu : fail
/tmp is a sticky world-writable directory: fail
def's owner (ubuntu) is not the same as /tmp's owner (root): fail
hence EACCES = Permission denied
Here are two other cases that would work instead:
Prevent the 1st condition from failing
sudo chown --no-dereference root: /tmp/def # this must be root, not syslog
Now that the symbolic link is owned by root, the 1st condition won't fail, allowing you to run:
sudo chown syslog: /tmp/def
to successfully affect /tmp/abc.
If so intended, def can be changed back to its previous owner or to syslog too:
sudo chown --no-dereference ubuntu: /tmp/def
or
sudo chown --no-dereference syslog: /tmp/def
Prevent the 2nd condition from failing
Do the same experiment in a non-sticky directory (even if it is world-writable):
sudo mkdir -m 777 /tmp/notsticky
sudo mv /tmp/abc /tmp/def /tmp/notsticky/.
which now allows you to run:
sudo chown syslog: /tmp/notsticky/def
They can be moved back if that's the intention:
sudo mv /tmp/notsticky/abc /tmp/notsticky/def /tmp/
sudo rmdir /tmp/notsticky
In addition, as suggested in a comment, one can again prevent the first condition from failing by doing the lookup as the original ubuntu user, since it is already the owner of the symbolic link. For example using:
realpath -z /tmp/def | xargs -0 sudo chown syslog:
or, if there's no special character such as a Line Feed (LF / \n) at the end of the target filename (which would be removed by the shell interpreter), as in the OP's case, simply:
sudo chown syslog: "$(realpath /tmp/def)"
| chown cannot dereference, Permission denied |
1,692,594,761,000 |
It seems like a no-brainer question, but I did not manage to find any real information. On my Ubuntu server I have created a custom /etc/cron.d config file, e.g. /etc/cron.d/MyCronTab; the reason I put everything here is ease of finding, and the files are easy to modify.
Now, I'm not going to put anything sensitive at all in these crontabs, but I see that by default Ubuntu really likes 644 (root can read/write, everyone can read) on these files. I guess that could make sense, so that people know what tasks will be running in the background even if they can't alter them.
But in my case it seems like a bad idea to even expose this information about my specific crontab files, since they are root and admin tasks anyway.
So I changed the permissions on my own files (/etc/cron.d/MyCronTab) so that only root can read and write them, and even if a crontab has a task run by, say, another user, it still runs without issue, which seems perfect.
Something I'm worried about is: will Ubuntu or the cron daemon reset my config file permissions back to 644 on updates so everyone can read them, or do they persist forever in this directory?
|
It’s your file, you control its permissions — neither package updates nor the cron daemon itself will change them.
As a general rule, while many files under /etc are provided by the system, /etc is the system administrator’s domain, and the system will preserve changes made there. Even changes made to system-provided configuration files are preserved by default (in case of conflict during upgrade, the administrator is asked how to handle it).
On Debian and well-behaved derivatives (including Ubuntu), this requirement is described in the Policy section on configuration files; packages can either delegate their configuration file handling to dpkg, or handle it themselves in their maintainer scripts, which
must be idempotent (i.e., must work correctly if dpkg needs to re-run them due to errors during installation or removal), must cope with all the variety of ways dpkg can call maintainer scripts, must not overwrite or otherwise mangle the user’s configuration without asking, must not ask unnecessary questions (particularly during upgrades), and must otherwise be good citizens.
Even on first installation, existing configuration files are preserved; this means that if at some future point you end up installing a package which conflicts with one of your own configuration files, dpkg will ask you what to do about it. However, purging a package will remove all its configuration, including your own files which are considered as “belonging” to the package; it’s best to ensure /etc is covered by your backup strategy, and it’s also a good idea to track changes to /etc with etckeeper.
| When you alter permissions of files in /etc/cron.d in Ubuntu, do they persist across updates? |
1,692,594,761,000 |
To be able to test out of disk situations I tried to set up a file-based size-limited file system like this:
$ dd if=/dev/zero of=file.fs bs=1MiB count=1
$ mkfs.ext4 file.fs
$ udisksctl loop-setup -f file.fs
Mapped file file.fs as /dev/loop1.
$ udisksctl mount --options rw -b /dev/loop1
Mounted /dev/loop1 at /media/myuser/29877abe-283b-4345-a48d-d172b7252e39
$ ls -l /media/myuser/29877abe-283b-4345-a48d-d172b7252e39/
total 16
drwx------ 2 root root 16384 Dec 2 22:08 lost+found
But as can be seen, it's made writable only for root. How do I make it writable for the user that is running the commands?
I can't chown or chmod it since that also gives "Operation not permitted".
I tried with some options to udisksctl like -o uid=<id> but then I get an error about that mount option not being allowed.
Since this should be able to run for normal users I can't use root or sudo.
I am on Ubuntu 22.04.1.
|
Yeah, that's kind of mean :) But you can work around:
mkfs.ext4 takes a -d directory/ option with which you can specify a directory containing the initial content for the file system; if you already know which directories you'll later want to populate, that would be a good place to start.
mkfs.xfs supports -p protofile; that probably does exactly what you want to do. A file myprotofile containing naught but:
thislinejustforbackwardscompatibility/samefornextline
1337 42
d--777 1234 5678
where the first line is just a single string for backwards compatibility, which will be ignored; the second line must contain two numbers that will be ignored. (See man mkfs.xfs for more details than I remember off the top of my head.)
The third line contains a filemode uid gid tuple, describing the root directory. Replace 1234 with your user id of choice, and 5678 with the group id of your choice.
A subsequent
mkfs.xfs -p myprotofile -f file.fs
should do (but your image file needs to be at least 16 MB in size for a default-configured mkfs.xfs), so
dd if=/dev/zero of=file.fs bs=1MiB count=16
mkfs.xfs -p myprotofile -f file.fs
udisksctl loop-setup -f file.fs
works and automounts the filesystem rw on my system (but that's not necessarily the case on your system – your mount invocation should work; the --options rw seems a bit superfluous, though).
| Create writable file system using udisksctl |
1,692,594,761,000 |
I'm not a Linux expert and I would like to know how I can fix this situation:
there is a user called demo whose id is 1000; this user is associated with the www-data group, so the demo user can actually remove / edit / create all the files of a project, e.g.:
now suppose that I'm logged in as the user foo, which has the id 1001. If I try to edit a file, e.g. .env, I will get permission denied.
Is there a way to give the same permissions as the group owner (1000) to the user foo?
So that I can simply add the user foo to the demo user's group, and then foo can actually remove / edit / create as if he were the user demo.
How can I achieve this?
Thanks in advance.
|
The owner of a file is… unique.
You can grant read, write and execute permissions to anyone else, anyone member of some group but…
you won't deprive the owner of its unique (shared with root) privilege to remove the file.
If you want other users to also be capable of removing some file they do not own then… the only possibility is to enable them to usurp the owner's id using su.
Therefore, if you need other users to be able to remove files owned by www-data, you'll have to authorize them to use su www-data.
| How to allow users of a group to have the same permission of the owner? |
1,692,594,761,000 |
I have read permission on a file.
Which permissions are required in the / and /path directories to be able to cat /path/file.txt, and why?
|
In order to access any file located under the /path directory, the user must have the x (execute) permission granted on each of the directories in the path (including the root directory /).
The reason for this is that cat will use the open system call, which can fail with the following errors:
EACCES (…) or search permission is denied for one of the directories
in the path prefix of pathname, (…)
Note that this man page invites you to learn more about path resolution in Linux (see path_resolution(7)).
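A small sketch of this on a throwaway tree (assuming GNU stat): execute permission on the directory is what lets cat reach the file, even when the directory itself is not readable by group/other.

```shell
# x on every path component is enough to cat the file
base=$(mktemp -d)
mkdir "$base/path"
echo hi > "$base/path/file.txt"
chmod 711 "$base/path"       # x but no r for group/other
stat -c '%a' "$base/path"    # prints 711
cat "$base/path/file.txt"    # prints hi
```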
| Which directory permission are needed to read the content of a file? [duplicate] |
1,692,594,761,000 |
In KDE setting you can assign a shortcut to execute your arbitrary command.
To assign a command to shortcut in KDE, you can do the following. In System Settings -> Shortcuts -> Custom Shortcuts, right click, choose New -> Global Shortcut -> Command/URL. Go to Action tab and fill in the command. And in Trigger tab assign the actual shortcut. And this mechanism works normally for the non-sudo commands.
But unfortunately, when I use the command that needs root privileges (let's assume sudo systemctl start something), it is just not executed.
Is there a way to bypass this limitation? I want to be able to trigger an action that requires elevated privileges.
|
One solution is to use an executable, which is owned by root and has a SUID bit.
Most likely, you do not have your own binary and instead you want to use a command. Note that if you set the SUID bit on a bash script, it will be ignored; see Why does setuid not work?.
So, for your command you need to compile a binary.
Create the file start_smth.c:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main(void)
{
    /* Become root; this only succeeds because the binary is SUID root. */
    if (setuid(0) != 0) {
        perror("setuid");
        return 1;
    }
    return system("systemctl start something");
}
Compile it and set permissions:
$ gcc start_smth.c -o start_smth
$ sudo chown root:root start_smth
$ sudo chmod 4755 start_smth
In the Command/URL field in settings fill in the path to the start_smth binary file, for example /home/user/bin/start_smth.
If you have many commands you need to prepare in such way, you may use this script compile_and_set_permissions.sh:
#!/bin/bash
FILE="$1"
FILE_NO_EXT="${FILE%.*}"
gcc "$FILE" -o "$FILE_NO_EXT"
sudo chown root:root "$FILE_NO_EXT"
sudo chmod 4755 "$FILE_NO_EXT"
Then pass a C file as a parameter:
$ compile_and_set_permissions.sh stop_smth.c
| How do I run a sudo command from kde shortcuts command? |
1,692,594,761,000 |
I made fish function in ~/.config/fish/functions/confgit.fish:
function confgit
/home/john/Projects/confgit $argv
end
But when I run this function it just says:
fish: The file “/home/john/Projects/./confgit” is not executable by this user
/home/john/Projects/./confgit $argv
^
in function 'confgit'
The confgit file is a normal Python script. If I run it with ./confgit it runs fine.
These are the permissions of the script:
-rwxr-xr-x 1 john john 5.8K 29. nov 02.04 confgit*
How can I fix this so I can use this function?
Thank you for the help.
|
I worked to reproduce your problem, and the closest thing I could emulate was this:
# file: ~/bin/janstest
echo $argv
# file: ~/bin/janstest2
function janstest
~/bin/janstest $argv
end
janstest It works!
and file permissions as:
stew@stewbian ~> ls -l ~/bin/jans*
-rwxr-xr-x /home/stew/bin/janstest*
-rwxr-xr-x /home/stew/bin/janstest2*
When I run it I get a similar error:
stew@stewbian ~> ~/bin/janstest2
Failed to execute process '/home/stew/bin/janstest2'. Reason:
exec: Exec format error
The file '/home/stew/bin/janstest2' is marked as an executable but could not be run by the operating system.
stew@stewbian ~ [125]>
The solution was to prepend #!/usr/bin/fish to the script.
stew@stewbian ~> cat ~/bin/janstest2
#!/usr/bin/fish
function janstest
~/bin/janstest $argv
end
janstest It works
stew@stewbian ~> ~/bin/janstest2
It works
| Fish: The file is not executable by this user |
1,692,594,761,000 |
Both permission sets mean the owner can read, write, execute; group and world can read, execute.
I expect 4755 to be r--rwxr-xr-x, but instead it is -rwsr-xr-x.
What is it?
Copying my comment to StephenKitt below
I've been visualising the permissions as literal bits, so that 755 is 111101101. But now we have a 100 in front of those – giving 100111101101 – which is somehow combined to put an 's' in that first field. It's not an AND, because 7&4 is 4, and in fact 7 is as high as we can go. So clearly my visualisation/expectation is astray.
|
Your binary visualisation is correct, but your character visualisation isn’t (and that’s fine, it’s a bit complicated).
The first three bits are mapped as character variations in each of the character triplets which follow:
the topmost bit, setuid, is mapped to the executable location of the user permissions, as follows:
- if neither the executable bit nor the setuid bit are set;
x if only the executable bit is set;
S if only the setuid bit is set;
s if both bits are set;
the second bit, setgid, is likewise mapped to the executable location of the group permissions, again using -, x, S and s;
the third bit, the sticky bit, is mapped to the executable location of the “other” permissions, but using -, x, T and t.
Thus the twelve bits fit in nine characters:
   11       10        9
    |        |        |
8 7 6    5 4 3    2 1 0
r w x/s  r w x/s  r w x/t
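You can watch this mapping with GNU stat on a throwaway file (a sketch; %A prints the symbolic mode string):

```shell
# The setuid bit folds into the owner execute slot as 's' or 'S'
f=$(mktemp)
chmod 4755 "$f"
stat -c '%A' "$f"   # prints -rwsr-xr-x : lowercase s, setuid AND execute set
chmod 4655 "$f"
stat -c '%A' "$f"   # prints -rwSr-xr-x : uppercase S, setuid without execute
```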
| File permissions: what is the difference between 755 and 4755 |
1,692,594,761,000 |
Currently trying to create this:
Owner: Read/write/execute
Group: Read/write/execute
other: Read/write/execute
I understand the chmod # is 777; however, is there some way to prevent the "other" class from being able to delete? Would that change the chmod #? And if it does, what is the #?
|
If you mean "delete files created by other people" then you want the sticky bit on the directory.
This is commonly seen on directories such as /tmp:
% ls -ld /tmp
drwxrwxrwt 15 root root 36864 Apr 7 21:46 /tmp
That "t" at the end means the directory is "sticky" and people can only delete their own files. So userA could put a file there; userB can put a file there. But userA can not delete userB's file. People can still delete and modify their own files, but they can't change other people's files.
To set that flag you want permission 1777 (chmod 1777 dir).
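A quick sketch on a throwaway directory (assuming GNU stat) showing the trailing "t" appear:

```shell
# Mode 1777 = rwxrwxrwx plus the sticky bit, shown as the trailing 't'
d=$(mktemp -d)/shared
mkdir "$d"
chmod 1777 "$d"
stat -c '%A' "$d"   # prints drwxrwxrwt
```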
| How do I give a class permissions to write, but not to delete? Does the affect the chmod octal #? |
1,692,594,761,000 |
I was wondering if anyone knew of a way to block certain commands for particular users/groups on any Linux distro (or most, if not all, distros) at certain times of the day.
What I would ideally like to do is prevent someone from doing reboots or shutdowns at night time, roughly 8PM–6AM.
Thanks all!
|
If it's just for yourself, then a simple option would be to have a cron job set permissions on /usr/sbin/reboot twice a day:
0 20 * * * chmod 0 /usr/sbin/reboot
0 6 * * * chmod 755 /usr/sbin/reboot
Now, on modern systems, /usr/sbin/reboot is just a symlink to /usr/bin/systemctl, so this would prevent you from performing a variety of other activities.
| Block commands at certain times of the day |
1,692,594,761,000 |
When a user creates a folder over SFTP it gets permissions of
drwxr-xr-x
I need it to have
drwxrw-r--
I know I can change the permissions with chmod, but it would save me lots of time and effort if the folder could be created with the correct permissions from the start. Is there a way to change the default permissions for when a specific user creates a folder?
|
Directories are typically created with all permission bits set (see for example mkdir, when the mode isn’t specified explicitly), except those masked by the current umask, so you can set that for the user you’re interested in;
umask 013
will produce the result you’re after under such circumstances.
For your specific sftp requirements, see Proper way to set the umask for SFTP transactions?
Other approaches can be used if your file system supports ACLs and you don’t need to limit this to a single user; see How to set default file permissions for all folders/files in a directory? for details.
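A sketch of the arithmetic on a throwaway path: mkdir's default mode 777 masked by umask 013 gives 764, i.e. rwxrw-r-- (assuming GNU stat; the umask is set in a subshell so it doesn't leak):

```shell
# 777 & ~013 = 764, which prints as drwxrw-r--
(
  umask 013
  d=$(mktemp -d)/newdir
  mkdir "$d"
  stat -c '%a %A' "$d"   # prints: 764 drwxrw-r--
)
```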
| Default permissions on new folder |
1,692,594,761,000 |
I am trying to make it simpler and easier for users to know whether the setgid or sticky bit is set in a file's permissions, by just printing setgid: ON/OFF and sticky bit: ON/OFF. How would I do that? I know about ls -ld and awk, but beyond that I don't know what to do.
|
Use -g file to see if the file exists and its setgid bit is set. Use -u file to see if the file exists and its setuid bit is set. The "sticky bit" can be tested with -k file. Don't confuse setuid with it.
[ -g "$myfile" ] && printf "%s has setgid set\n" "$myfile"
[ -u "$myfile" ] && printf "%s has setuid set\n" "$myfile"
[ -k "$myfile" ] && printf "%s has sticky bit set\n" "$myfile"
See the test documentation (manpage)
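Putting the three flags together into the ON/OFF report the question asks for; a hedged sketch exercised on throwaway files (any shell with test(1) will do):

```shell
# Report each special bit as ON or OFF using the test(1) flags
f=$(mktemp); d=$(mktemp -d)
chmod u+s "$f"; chmod g+s "$f"; chmod +t "$d"
[ -u "$f" ] && echo "setuid: ON"     || echo "setuid: OFF"
[ -g "$f" ] && echo "setgid: ON"     || echo "setgid: OFF"
[ -k "$d" ] && echo "sticky bit: ON" || echo "sticky bit: OFF"
```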
| how can I with a bash script tell if Sticky bit and Setgid are on the file |
1,692,594,761,000 |
I just noticed that the /root/ has 700 permission by default on Ubuntu, Debian as well as Nixos. Why is this handled differently than other directories for example /bin/?
What is so special about /root/ besides just being the home directory of the root user?
I wanted to give a user permission to view a directory within /root - but that requires executable permission set on the directory itself. (Do the parent directory's permissions matter when accessing a subdirectory?)
|
It is of course possible to change this permission but inadvisable.
The basic principle here is that root is NOT to be used as a regular user. You only login as root to perform security sensitive operations such as system upgrades. Therefore anything you must do as root should not in general be viewable by other users.
On that basis, root's working area should remain strictly off limits, to provide you with a safe space to work. This goes doubly for some automatically generated files which by default get written to a user's home. For example, ~/.bash_history may inadvertently expose sensitive information. Better to black out the whole home directory than risk compromising your system.
If you are not forced to do something as root then don't. If root must share something then create a new directory (maybe in /usr/share) and create appropriate new groups to manage access.
| Why does the /root/ directory have 700 permission by default? |
1,692,594,761,000 |
Given a linux server:
There are two partitions; one is mounted on /, and the other is mounted on /data.
There is a user named alice.
alice's uid is 1001.
alice created many private files on /data. That is, only the user of uid 1001 can access the files.
Then:
I clean reinstall the linux OS, and keep the data partition mounted on /data.
I create a new user named alice. However, the uid of the new alice is not guaranteed to be 1001; let's say it is 1002.
Now:
alice cannot access her files on /data, because her uid(1002) is not equal to the uid(1001) of the files.
In practice, how does one solve this commonly seen issue?
|
Either create the user alice with a uid of 1001 or change the ownership of the files from 1001 to 1002.
Create a user with a specific uid:
useradd alice -u 1001
find all files owned by 1001 and chown them to alice (this will also change the gid to alice's primary group):
find /data -uid 1001 -print0 | xargs -0I{} chown alice: {}
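The -uid predicate can be tried safely on a throwaway tree, substituting the current uid for 1001 (a sketch; note also that `find /data -uid 1001 -exec chown alice: {} +` is an equivalent, slightly cheaper spelling of the xargs pipeline):

```shell
# Demonstrate the -uid predicate without needing root or a real alice user
d=$(mktemp -d)
touch "$d/a" "$d/b"
find "$d" -type f -uid "$(id -u)" | wc -l   # prints 2
```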
| How to keep files accessible after reinstalling linux OS? |
1,692,594,761,000 |
On FreeBSD 12.0-RELEASE-p3 ls -l /dev/ada1 gives me:
crw-r----- 1 root operator [skipped] /dev/ada1
If I use the command gpart recover /dev/ada1 from a non-root user account, who is in the group operator (and wheel), gpart does the recovery. It definitely writes on the disk.
But why is the non-root user not limited to just read access to the disk? The operator group has only read permission on /dev/ada1!
The sudoers file only consists of:
% grep -v '^#' /usr/local/etc/sudoers | grep -v '^$'
root ALL=(ALL) ALL
|
The gpart(1) program doesn't write anything to /dev/ada1.
It does all its operations by issuing GEOM_CTL ioctls on /dev/geom.ctl. In order to use ioctl(2) on a device file, you don't need write permissions to it; you only need to be able to open() it in read-only mode. And the operator group has read permissions on /dev/geom.ctl.
| How do operator/wheel groups work on FreeBSD? |
1,692,594,761,000 |
Upon login with .bashrc, how to set the user's group to a non-default one, say targetgroup?
Specifically, my problem is that I can execute, on the command line:
newgrp - targetgroup
but when I include this line in .bashrc, the terminal freezes upon login.
This question relates to Problem while running "newgrp" command in script but I have insufficient reputation to comment.
So I tried:
echo "Before newgrp"
/usr/bin/newgrp - targetgroup <<EONG
echo "hello from within newgrp"
id
EONG
echo "After newgrp"
which gives:
Before newgrp
Before newgrp
Before newgrp
Before newgrp
Before newgrp
Before newgrp
Before newgrp
Before newgrp
^C
After newgrp
so the trick for KornShell does not appear to work for bash, as I had to exit with ^C.
Is there any way to make newgrp work, or another .bashrc line that would set the group to targetgroup upon every login? (NB: I don't have superuser privileges.)
|
For CentOS 6 you can try adding (without the dash)
newgrp targetgroup
to your .bash_profile.
At least for CentOS 6.10 this changes the effective group to targetgroup in the interactive shell
for new login shells
after source ~/.bash_profile
The group will not be changed when "only" starting another bash or sourcing ~/.bashrc in an existing console.
| Set to non-default user group upon login in bash |
1,692,594,761,000 |
I'm learning UNIX file permissions / ID inheritance and would like to clarify something:
I have this list of permissions, users, groups and files:
-rwxr-xr-x userA A foo
-rw-rwsr-x userB B bar
RealUID of userA is 100, GroupID of userA is 240
RealUID of userB is 102, GroupID of userB is 241
I need to know what would happen if userB executes foo:
Does userB's RealUID change to userA's RealUID?
Does userB's EffectiveUID change to userA's EffectiveUID?
Since userB is executing a file of userA, does userB's RealUID
get saved into SavedUID, then after executing, it reverts back to normal?
Does executing file also changes userB's GroupID?
|
Well, for starters, users don't have RealUIDs.
Users have UIDs. Period.
(The situation with GIDs is a little more complex.)
Processes have real UIDs and effective UIDs (and more).
Secondly, executing a file
will never change the real UID or the real GID of a process.
Thirdly, executing foo will not change any of a process's IDs,
because it does not have the setUID or the setGID bit set in its mode.
And why have you bothered to stipulate a file bar
that does have the setGID bit set in its mode,
when you don't ask any questions about it?
Please do some more research and edit your question to be more coherent.
| ID inheritance – Which IDs? |
1,692,594,761,000 |
I'm trying to setup Postfix from Entware (a repo for embedded devices).
There is no SElinux involved and chroot is disabled in master.cf.
# postconf -n
command_directory = /opt/sbin
compatibility_level = 2
config_directory = /opt/etc/postfix
daemon_directory = /opt/libexec/postfix
data_directory = /opt/var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
default_database_type = cdb
inet_protocols = ipv4
mail_spool_directory = /opt/var/mail
manpage_directory = no
myhostname = domain.nl
mynetworks = 1.1.2.1,8.9.1.1
queue_directory = /opt/var/spool/postfix
shlib_directory = /opt/lib/postfix
smtputf8_enable = no
unknown_local_recipient_reject_code = 550
The issue is that postfix set-permissions isn't able to figure out the root user name. This distribution comes by default with a "root" user named "admin". At least I think that the user name is the issue, because of:
# postfix set-permissions
find: unknown user root
# ls -lah /opt/sbin/postdrop
-rwxr-xr-x 1 NewRootUser root 246.8K Sep 8 22:33 /opt/sbin/postdrop
Regression
With the help of https://wiki.zimbra.com/wiki/Steps_to_fix_permission_and_ownership_of_Postfix_binaries_manually_due_to_bug_on_zmfixperm is tried to fix the differences (755 was already set):
# chown AdminUserName:postdrop /opt/sbin/postdrop
# chown AdminUserName:postdrop /opt/sbin/postqueue
# chmod g+s /opt/sbin/postdrop
# chmod g+s /opt/sbin/postqueue
Result:
# postfix check
postsuper: fatal: scan_dir_push: open directory defer: Permission denied
Question
How to make postfix set-permissions learn the new root user name?
Or how to manually do the steps that postfix set-permissions should do?
Or where in the postfix source code can one find the actions that are executed for flag set-permissions?
|
Applications which expect and use the user name root will understandably fail when there is no such user on the system. However, you can have more than one user with the same UID. Usually, you should probably not configure the system with multiple user names sharing the same UID, just as you should not rename the root user.
You can add another UID 0 account, which has the username root. This possibly solves issues with applications which use user names instead of numeric UIDs. To add an alias root for UID 0 with disabled password and login, append following to /etc/passwd:
root:x:0:0:root:/root:/bin/false
Full syntax is explained in man 5 passwd.
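One way to verify that the alias took effect (a sketch; on a stock system the name root already resolves):

```shell
# Both lookups should succeed and report UID 0
getent passwd root
id -u root      # prints 0
```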
| How to fix postfix set-permissions without user named root? |
1,692,594,761,000 |
Right now, I have a script that runs daily that chmod's the home directories, removing all "other" permissions from the directories and the "group" write permission. See below.
#Removing all other permissions on all home directories and write from group
ls /home | sed 's/ //g' |
while read i; do
chmod -R o-rwx /home/$i
chmod -R g-w /home/$i
done
This works great; however, I would prefer if the script checked if there were any files that needed to be changed and then act upon them instead of just doing it every time, regardless if it needs it.
I assume I could put this whole thing in a sub function inside of a if statement, but I don't know what test I would run inside that if statement.
How can I test if the directories would need changed?
|
Use find:
find /home/* -maxdepth 0 -type d -perm /g+w,o+rwx -exec chmod g-w,o-rwx '{}' +
If you want to do this recursive, as you use -R in your examples, use this
find /home/* -perm /g+w,o+rwx -exec chmod g-w,o-rwx '{}' +
Edit:
Brief explanation of find options, for details see man find:
The -perm together with /mode means any of the bits is set.
The classical -exec syntax is -exec command '{}' ';'. The characters {} are replaced with the file name, the quotes are there to protect them from the shell, on most shells this is not necessary, but it doesn't hurt. The ';' is the end of the command, here quoting is necessary for most shells, but an alternative form is \;. The drawback is that there is one call to chmod per file changed. This is avoided with the alternative form where the command is terminated with +. This form calls exec with many names, and is in effect similar to xargs, it even saves the overhead of calling xargs.
| How to check if there are "other" permissions in home directories |
1,692,594,761,000 |
Wayland (or unprivileged Xorg) is able to access DRM and input devices by fd-passing from systemd-logind or equivalent. I can see these devices (for the first seat) in loginctl seat-status seat0.
However I do not see a backlight device (/sys/class/backlight/*/) in this list of devices.
Additionally, while GNOME is able to control my backlight, my user has not been granted backlight permission through the sysfs file owner/group or ACL:
$ ls -ld /sys/class/backlight/intel_backlight
lrwxrwxrwx. 1 root root 0 May 24 17:12 /sys/class/backlight/intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight
$ cd /sys/class/backlight/intel_backlight
$ $ ls -l
total 0
-r--r--r--. 1 root root 4096 May 27 22:09 actual_brightness
-rw-r--r--. 1 root root 4096 May 27 22:17 bl_power
-rw-r--r--. 1 root root 4096 May 27 22:17 brightness
lrwxrwxrwx. 1 root root 0 May 27 22:09 device -> ../../card0-eDP-1
-r--r--r--. 1 root root 4096 May 27 22:17 max_brightness
drwxr-xr-x. 2 root root 0 May 27 22:09 power
lrwxrwxrwx. 1 root root 0 May 24 17:12 subsystem -> ../../../../../../../class/backlight
-r--r--r--. 1 root root 4096 May 27 22:17 type
-rw-r--r--. 1 root root 4096 May 27 22:17 uevent
$ getfacl bl_power brightness
# file: bl_power
# owner: root
# group: root
user::rw-
group::r--
other::r--
# file: brightness
# owner: root
# group: root
user::rw-
group::r--
other::r--
What mechanism are the unprivileged processes in my GNOME session using, to control the backlight despite not being root?
EDITED TO ADD: the device /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1 is shown in loginctl seat-status, and this is the parent device of the backlight device.
I am using gnome-shell 3.28.2-1.fc28 with Wayland. systemd is version 238-8.git0e0aa59.fc28.
|
The backlight is set by gsd-backlight-helper, a gnome-settings-daemon helper which runs as root, thanks to a PolicyKit setting allowing the active user to do so. /usr/share/polkit-1/actions/org.gnome.settings-daemon.plugins.power.policy contains the following:
[...]
<action id="org.gnome.settings-daemon.plugins.power.backlight-helper">
[...]
<defaults>
<allow_any>no</allow_any>
<allow_inactive>no</allow_inactive>
<allow_active>yes</allow_active>
</defaults>
[...]
| What mechanism allows the unprivileged graphical session to control the backlight device? |
1,692,594,761,000 |
I wrote/tweaked a custom kernel module and installed it.
It works as expected, but I've noticed that other kernel modules on my system are compressed with xz and have 0444 permissions, whereas I did not compress mine and I installed it with the executable bit set (0555 permissions).
$ stat --format=%A /path/to/my-module.ko
-r-xr-xr-x
$ stat --format=%A /path/to/other-module.ko.xz
-r--r--r--
Does this have any implications -- performance, security, or otherwise?
I plan on compressing mine and setting the permissions to match what other modules are using, but I don't know the underlying motivation for the compression and permissions they're using.
|
About permissions:
There is no need to set the executable bit or a write flag on a module. The module file should be readable and that is it: insmod, modinfo, modprobe and the like only need to read the module file. Read permission for group or others may possibly be needed to debug the module via objdump, nm, etc.
There is no real reason for anybody to set the executable bit on a module.
About compression:
The Linux kernel has a built-in XZ decompression implementation. It can read (decompressing them first) the initrd image, kernel modules, and even itself (the trailing z in vmlinuz tells you the kernel image is compressed).
I don't know which distro you use, but if your kernel modules are compressed, that is your distro's convention. Compressed modules are of course smaller than uncompressed ones, although if a module is compiled without debug symbols the difference in size is small. On the other hand, it's better to use compression and save the space for something else, considering how many modules are installed that will never be needed.
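A minimal sketch of compressing a module and matching the 0444 permissions seen on the other modules (the file name is a stand-in; whether the kernel actually loads .xz modules depends on its configuration):

```shell
touch /tmp/my-module.ko             # stand-in for a real .ko file
xz -zf /tmp/my-module.ko            # produces /tmp/my-module.ko.xz
chmod 0444 /tmp/my-module.ko.xz
stat -c '%a' /tmp/my-module.ko.xz   # prints 444
```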
| kernel module: set executable bit? compress with xz? |
1,692,594,761,000 |
How the website was created:
The website's files were copied from another machine as a compressed folder:
WEBSITE.tar.gz
I decompressed and moved the content to /var/html/www in the new webserver, and that gives as result that the files are for example located as follows:
/var/html/www/index.html
/var/html/www/css/styles.css
/var/html/www/img/photo.jpeg
The permissions of the files and directories resulted different than they were in the previous machine, so I researched and found this solution to quickly fix the permissions:
sudo chmod -R u+rwX,go+rX,go-w /var/www/html/*
I checked the permissions of the files and now they did show the way I expected:
cd /var/www/html
ls -a -l
drwxr-xr-x. 2 root root size month day hour css
drwxr-xr-x. 2 root root size month day hour img
-rw-r--r--. 1 root root size month day hour index.html
cd /var/www/html/img
ls -a -l
-rw-r--r--. 1 root root size month day hour photo.jpeg
cd /var/www/html/css
ls -a -l
-rw-r--r--. 1 root root size month day hour styles.css
The problem:
I still get this message when trying to access a page of the website:
Forbidden You don't have permission to access /index.html on this
server
Then I did an experiment:
sudo mv index.html index.html.backup
sudo cp index.html.backup index.html
And the page loaded normally now, but wouldn't show the image and the styles. So if I complete the process of copying I will get the image and the styles to show in the website:
sudo mv css css.backup
sudo cp css.backup css
sudo mv img img.backup
sudo cp img.backup img
It worked, but what's the explanation? I don't want to look past the mystery, I want to know what's the "normal" way to solve the problem, other than copying the files.
|
With GNU coreutils ls, including on Linux, a dot at the end of the mode bits means an SELinux 'context' applies -- use ls [-l] -Z (or --context) to see details -- and recent RedHat/CentOS enables SELinux by default to restrict access to all kinds of resources, including files.
Unless you want to use SELinux features to control access, the simple way to mostly ignore it is to run [sudo] restorecon [-R] on the files (see man restorecon); the ways to disable SELinux are given in man setenforce.
Related question What does a dot after the file permission bits mean?
| Why website pages' access is denied by Apache unless I replace them with copies? |
1,692,594,761,000 |
Please read the following steps to understand my problem,
Execute following commands
mkdir ~/src
mkdir ~/destination
sudo mount ~/src ~/destination --bind -o ro
src folder has been bind mounted to destination folder. When I view the destination folder with nautilus it is read only. But the ll command gives same file permission for both src and destination folders.
What is the reason for this?
How can I view the actual permission of the 'destination` folder?
How does the nautilus display it as read only?
|
There are two things at play here:
Directory permissions.
Mount options.
The directory permissions are independent of the fact that the mount was read-only. Hence, ls will still list the permissions assigned to the folder, without regards for how it was mounted.
In the same way, if I mount a folder with noexec, ls will show that the executable files in there are still executable. Mounting the folder with noexec (or ro in your case) does not change the permissions on files and directories.
The file manager appears to be smarter though, and knows that the directory is mounted read-only. It obviously queries more than just the permission bits to find this out.
From comments: "In my use case I want to read the mount permission with a shell command".
The command mount, without any options, will output all the currently mounted partitions and their mount options:
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32841600k,nr_inodes=8210400,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6572324k,mode=755)
(etc.)
The mount options are listed in parenthesis.
You may also investigate /etc/mtab for the currently mounted filesystems. This file has the same format as /etc/fstab, so you could do
awk '{ print $2, $4 }' /etc/mtab
to only get the mount points and the mount options, for example.
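If findmnt (from util-linux) is available, it can report just the options of a single mount point, which avoids the parsing:

```shell
# Print only the mount options of one mount point
findmnt -no OPTIONS /proc
```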
| ls -la doesn't display correct permission for a mount --bind folder |
1,692,594,761,000 |
I can control which users can run su or gksu by, for example, including the line auth required pam_wheel.so deny group=nosu in /etc/pam.d – then members of group nosu won’t be able to use su or gksu --su-mode.
However, this won’t stop anyone from using pkexec (and it is futile to prohibit the use of su without prohibiting the use of pkexec, since apparently pkexec offers same functionality…). Is there any similar way to control who may and who may not use pkexec?
|
You could remove all permissions for the group, nosu, using an ACL.
setfacl -m g:nosu:--- /usr/bin/pkexec
After setting the ACL, users who are not members of the nosu group can still use pkexec normally.
| How to prohibit users from using pkexec? |
1,692,594,761,000 |
I'm mounting a Windows Samba share on my Linux machine (SUSE 11) using the following.
mount -t cifs -o username=myname,password=12345 //10.10.0.78/smb /share/smb
It works fine except it has the following permissions
drwxr-xr-x 1 root root 4096 Nov 10 19:35 smb
If I try to change the permissions with
sudo chmod 777 /share/smb
I get permission denied even as root. How can I get around this so that non root users can access the share?
NOTE
On the Windows side the share has full access to all users
|
I found the solution, I had to set the permissions during the mount not after
mount -t cifs -o username=myname,password=12345,dir_mode=0777,file_mode=0777 //10.10.0.78/smb /share/smb
| Linux mount Windows samba share for all users |
1,692,594,761,000 |
Basically I want to know if my understanding of unix users is correct.
1) A unix user is basically a different set of permissions on some set of files and directories in a filesystem. For example a user may own some set of files and directories and execute different kind of actions on those files (execute/read/write)
2) Unix groups also have permissions and if a particular user is assigned to a group then user's permissions are extended to group's permissions.
3) Every process is started on behalf of a particular user.
Are the statements above correct?
4) When I download and install an application a bunch of directories and files are created. Is a user also created to manage newly installed application?
5) When I login to a unix system as a normal user and run an app by double-clicking on it on behalf of what user the process will run?
6) When I start a unix system a bunch of process are also started. On behalf of what user are they started?
|
1) Yes.
2) Yes.
3) Yes.
4) It depends: complex programs (databases, Tomcat-like web servers) do, smaller ones (e.g. a GIF generator) don't.
5) It depends: if no set-user-ID bit is set, the program will run as the clicking user.
6) Mostly root; some as www (if you have a web server), some as bin or mail.
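The last three points can be seen in practice with ps, which shows on whose behalf each process runs (output varies by system):

```shell
# The USER column shows which user each process runs as
ps -eo user,pid,comm | head -n 5
```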
| Understanding unix user [closed] |
1,692,594,761,000 |
I am unable to override previously set ACL settings.
This is what it looks like:
root@ip-xxx-xxx:/srv/www# getfacl grace.staging.site.com.au/
# file: grace.staging.site.com.au/
# owner: web
# group: www-data
user::rwx
group::r-x
group:www-data:rwx
group:dev:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:www-data:rwx <---------
default:mask::rwx
default:other::r-x
As you can see, the default group is default:group:www-data:rwx - this setting has been applied recursively. However, each time I create a new file or directory, they're attributed to luqo33:dev:
luqo33@ip-xxx-xxx:/srv/www/grace.staging.site.com.au$ touch test_file
luqo33@ip-xxx-xxx:/srv/www/grace.staging.site.com.au$ ls -al
total 20
drwxrwxr-x+ 3 web www-data 4096 Jun 28 19:14 .
drwxr-xr-x 5 web www-data 4096 Jun 28 18:33 ..
drwxrwxr-x+ 2 web www-data 4096 Jun 28 18:33 logs
-rw-rw-r--+ 1 luqo33 dev 0 Jun 28 19:14 test_file <-------
I need to make all files and directories to be owned by web:www-data. Clearly, in spite of the fact that there is default ACL for the group (www-data), it does not have any effect. What am I missing?
|
That's not what the default entry means in an ACL; if you look at the new file you created you'll see it already has an ACL (the + at the end of the ls output), and getfacl test_file will show it has group:www-data:rwx associated with it.
If you want the newly created file to be owned by www-data then you need to add the setgid bit on the directory.
Without the flag, if I create a file then it's in my group:
$ ls -ld .
drwxr-xr-x 2 sweh www-data 4096 Jun 28 17:37 ./
$ touch x
$ ls -l x
-rw-r--r-- 1 sweh sweh 0 Jun 28 17:38 x
I now add the setgid bit to the directory and the new file has group ownership defaulting to www-data
$ sudo chmod g+s .
$ ls -ld .
drwxr-sr-x 2 sweh www-data 4096 Jun 28 17:38 ./
$ touch y
$ ls -l y
-rw-r--r-- 1 sweh www-data 0 Jun 28 17:38 y
| Problem overriding ACL for default group ownership |
1,692,594,761,000 |
I have two home folders: /home/masi and /home/masi_backup and I would like to find the differences between files of the two directories.
Pseudocode
vimdiff <`ls -la /home/masi` <`ls -la /home/masi_backup`
How can you compare the differences of ownerships between the two directories?
|
Something like this:
vimdiff <(find /home/masi -printf "%P %u:%g %m\n" | sort) <(find /home/masi_backup -printf "%P %u:%g %m\n" | sort)
(this gives names without the leading /home/masi or /home/masi_backup, owning user and group, and permissions — the latter weren't mentioned in the question but seem useful, drop %m if you don't want them).
| Find differences of ownerships between two home folders? |
1,692,594,761,000 |
Just like the question. Is it possible to white-list certain executables from a noexec mounted FS?
for instance, mine looks like this:
/dev/vg/lv on /tmp type ext4 (rw,noexec,nosuid)
|
No. The mount options trump all. That's what they're for: to ensure that nothing ever gets executed directly from that filesystem.
To counter noexec, you can run most programs indirectly by invoking their launcher:
If the program is a script (starting with a shebang), invoke the interpreter and pass it the script as its first argument.
If the program is a dynamically linked executable, invoke the dynamic loader (e.g. /lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2) and pass it the binary as its first argument.
If you have a filesystem mounted with noexec, you can make a view of a directory where all files are executable with bindfs. Bindfs doesn't allow setting permissions on a per-file basis however.
Of course you can make a copy of the file elsewhere and make that executable.
If the filesystem is mounted nosuid, there's no way to make the files setuid. That would break security. To make a setuid file, you need to have access to the owning user account. Making a copy and making that setuid, or remounting without the nosuid option, are the only solutions.
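A small demonstration of the interpreter trick (scratch file, not an actual noexec mount): a script with no execute bit still runs when handed to its interpreter explicitly.

```shell
printf '#!/bin/sh\necho hello from noexec\n' > /tmp/demo.sh
chmod 644 /tmp/demo.sh    # not executable, as if blocked by noexec
sh /tmp/demo.sh           # the interpreter runs it anyway
```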
| White-list certain binaries and scripts inside a noexec/nosuid mount? |
1,692,594,761,000 |
I would like to install sphinx-doc from the sources so I git clone the module then installed it with sudo python setup.py install.
Using /usr/local/lib/python2.7/dist-packages/pytz-2016.4-py2.7.egg
Searching for MarkupSafe==0.23
Best match: MarkupSafe 0.23
Removing MarkupSafe 0.18 from easy-install.pth file
Adding MarkupSafe 0.23 to easy-install.pth file
Using /usr/local/lib/python2.7/dist-packages
Finished processing dependencies for Sphinx==1.4b1.dev-20160423
Then I realized that I don't have the permissions to use it:
$ sphinx-quickstart
bash: /usr/local/bin/sphinx-quickstart: Permission denied
$ ls -al /usr/local/bin/sphinx-quickstart
-rwxr-x--- 1 root root 357 Apr 23 16:56 /usr/local/bin/sphinx-quickstart
The question is, how to install it with the correct permissions?
I often have this kind of issues when I have to use sudo to create a folder or to mount a drive to /media. This is a bit off topic, but /media is 755 and as a regular user I cannot mount my own drives on my own computer without being root.
Is that normal?
|
If you want to install Python packages from source, you should do so in a virtualenv. That way you minimize the chance of breaking your system's Python, and you make it possible to simply remove the installed package later without fear of removing too much.
In order to do so you must first install virtualenv, e.g. using
sudo apt-get install python-virtualenv
after that is installed create a virtualenv somewhere and activate it:
sudo mkdir /opt/util
sudo chown $USER /opt/util
virtualenv /opt/util/sphinx-doc
source /opt/util/sphinx-doc/bin/activate
after that run your
python setup.py install
in the git cloned directory, you should not have to use sudo.
As long as the virtualenv is active you should be able to run sphinx-doc or any (other) utilites the python setup.py install creates. You can also run those when the virtualenv is not active by using /opt/util/sphinx-doc/bin/<UTILNAME> (for which you might want to make an alias).
/opt/util/sphinx-doc can be changed to whatever you want. But if you put such virtualenvs next to each other, you can easily create some script for automatic updating of any pip installed packages, for defining aliases etc.
| File permissions while installing a python module |
1,692,594,761,000 |
This document about File ACLs makes mention that the masking mechanism was put in place to solve the problem of
... POSIX.1 applications that are unaware of ACLs will not suddenly and unexpectedly start to grant additional permissions once ACLs are supported.
What would be an example of such a situation?
If there was a file with extended ACLs setup according to these intentions by the system admin:
The file owner should have rwx permissons
The users in the file's group should have no access (---)
Others should have no access (---)
An exception to the above three is that the system group audit has r-- permissions on files
I would imagine the corresponding extended ACL for a file would be:
# file: path/to/file
# owner: foo
# group: bar
user::rwx
group::---
group:audit:r--
mask::r--
other::---
In this example, if the mask mechanism was not in place and a tool unaware of extended ACLs attempted to change the group permissions to --x (it is a strawman argument) the group:: entry would end up having group::--x. Why would this "unexpectedly ... grant additional permissions"?
# file: path/to/file
# owner: foo
# group: bar
user::rwx
group::--x
group:audit:r--
other::---
Based on my understanding, users in the owning group but not in the audit would gain the ability to execute. Users in the audit group but not the owning group would not. Users in both groups would gain the ability to execute. I don't understand why the mask is needed.
If I am misunderstanding something, please explain. It's possible that my strawman does not describe the situation that the quote is talking about. If that is the case, please describe such a situation.
|
If the mask and its link to the S_IRWXG bits weren't the case, applications that did various standard things with chmod(), expecting it to work as chmod() has traditionally worked on old non-ACL Unixes, would either leave gaping security holes or see what they think to be gaping security holes:
Traditional Unix applications expect to be able to deny all access to a file, named pipe, device, or directory with chmod(…,000). In the presence of ACLs, this only turns off all user and group permissions if the old S_IRWXG maps to the mask. Without this, setting the old file permissions to 000 wouldn't affect any ACL entries for specific users/groups and other users/groups would, surprisingly, still have access to the object.Temporarily changing a file's permission bits to no access with chmod 000 and then changing them back again was an old file locking mechanism, used before Unixes gained advisory locking mechanisms, that — as you can see — people still use today.
Traditional Unix scripts expect to be able to run chmod go-rwx and end up with only the object's owner able to access the object. Again — as you can see — this is still the received wisdom even now, decades after the invention of Unix ACLs. And again, this doesn't work unless the old S_IRWXG maps to the mask, because otherwise that chmod command wouldn't turn off any ACL entries for specific users/groups, leading to users/groups other than the owner retaining access to something that is expected to be accessible only to the owner.
A system where the permission bits were otherwise separate from and anded with the ACLs would require file permission flags to be rwxrwxrwx in most cases, which would confuse the heck out of the many Unix applications that complain when they see what they think to be world-writable stuff.A system where the permission bits were otherwise separate from and ored with the ACLs would have the chmod(…,000) problem mentioned before.
Further reading
Craig Rubin (1989-08-18). Rationale for Selecting Access Control List Features for the Unix System. NCSC-TG-020-A. DIANE Publishing. ISBN 9780788105548.
Portable Applications Standards Committee of the IEEE Computer Society (October 1997).
Draft Standard for Information Technology—Portable Operating System Interface (POSIX)—Part 1: System Application Program Interface (API)— Amendment #: Protection, Audit and Control Interfaces [C Language] IEEE 1003.1e. Draft 17.
https://unix.stackexchange.com/a/406545/5132
| Example of situation where ACL unaware tools would grant unintended permssions |
1,692,594,761,000 |
I'm trying to write a "janitor" script that will run as a cron job in one specific directory. It is supposed to create an archive folder with the date of creation in the name, and then find and move all files of a certain type into this new folder.
Here is my test code:
#!/bin/bash
today=$(date +'%m:%d:%Y')
target="Archived-$today"
mkdir -p $target
find . -type f -name "*.zip" -exec mv -i {} /$target \;
It manages to create the folder correctly, but is unable to move the files it finds into the folder. I have only been doing this as a small test, and both the script and the files have been created by the same user. If I add sudo to the beginning it tries to move the files, but instead what happens is that it only deletes the files from the current directory, but does not place them in the newly created directory.
I am not trying to move .zip files. Just an example.
I have tried by having chmod 777 on both files and folders. Same thing happens.
I am running ubuntu 14.04 LTS.
If there is a much better way to do this, please tell me.
Any pointers in the right direction would be very much appreciated.
Edit
Now it works.
I updated the find statement to:
find . -maxdepth 1 -type f -name "*.zip" -exec mv -t "$target/" {} \;
|
Find files in the current dir, not subdirs:
find . -maxdepth 1 -type f -name '*.zip' -exec mv -t "$target/" {} \;
Exclude-dirs method:
find . -type f -not -path "$target/*" -name '*.zip' -exec mv -t "$target/" {} \;
Note that this will exclude only today's archive while older archives remain, so I recommend the first command, or create the archive dirs outside the main dir.
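Putting it together, a corrected version of the whole janitor script could look like this (a sketch: the date format is changed because ':' in directory names confuses some tools, and all expansions are quoted):

```shell
#!/bin/bash
today=$(date +'%m-%d-%Y')
target="Archived-$today"
mkdir -p "$target"
# only the top level; -t puts the destination before the file list
find . -maxdepth 1 -type f -name '*.zip' -exec mv -t "$target/" {} +
```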
| Bash script unable to move files |
1,692,594,761,000 |
As root, I try to attach strace to a running kworker process, without success.
root@rasal# whoami
root
root@rasal:/# cat /proc/sys/kernel/yama/ptrace_scope
0
root@rasal:/# ps ax | grep kworker
1030 ? S< 0:00 [kworker/u17:0]
root@rasal:/# strace -fp 1030
strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
The etc/sysctl.d/10-ptrace.conf file simply states:
A PTRACE scope of "0" is the more permissive mode.
This is exactly what I have, see above. Is there any reason why this should fail? Or is this a bug?
|
The kworker "process" that you show is a kernel thread and not a normal process. There is no userspace portion for it and thus no syscalls. Even if it worked it couldn't possibly show anything.
On top of everything else, I guess that tracing kernel threads (under whatever fictional scenario one can imagine) would most probably freeze the system.
| Why am I unable to attach `strace` to a kworker process? |
1,692,594,761,000 |
I have an apache running under user apache with a lot of permissions (rw to multiple directories). Now I want to let some users upload programs/scripts via webform and execute them, with php function exec() in file upload php page. However, I don't want those programs to be able to write anything to hard drive. It seems program sudo might do what I need, but I don't know how to use it.
Shorter: how to run programs and scripts in readonly mode under powerful user?
In case it matters, my system is Centos 6.
|
You're right, sudo to a user with read but not write permission will run a command in a way that only has write access to files you give it permission for.
sudo -u some_user cmdname
Running arbitrary user-uploaded programs requires extreme security precautions. Local-root exploits are unfortunately not uncommon in Linux. Letting users run programs they upload without some kind of jail / containment, if not a virtual machine, is unwise.
You should build your system so it's still at least probably secure even if the uploaded program takes advantage of an unpatched root exploit, to elevate its privileges from nobody to root.
| Run program in readonly mode |
1,692,594,761,000 |
I have a soft link that was accidentally moved as a result of a user drag/drop operation in a Filezilla UI. Is there a way to prevent a user from moving the link but leaving all other permissions intact?
Update:
To solve this problem we changed the owner of the link to the root user.
|
No, there is not.
If the user has permissions to write the directory that contains the symlink, then they will be able to do the following things:
Remove all kinds of files from that directory
Create all kinds of files in that directory
Rename files within that directory
Move files into the directory (assuming they also have write permission on the directory the file comes from).
Move files out of the directory (assuming they also have write permission on the directory the file is going to).
Perhaps you can use the sticky bit to achieve what you want? The sticky bit restricts operations on files within the directory to the owner of the file involved. So then the user would only be able to move or remove the symlink if they were the owner of the symlink. Be aware that the sticky bit is global per directory, so its effect will not be restricted by user nor by file type (symlink or otherwise).
chmod +t directory # set sticky bit
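A quick way to confirm the bit took effect (scratch directory; in ls output the sticky bit shows as a trailing t, or T when others lack execute permission):

```shell
d=$(mktemp -d)
chmod +t "$d"
ls -ld "$d"    # last mode character is t or T
```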
| Deny permission to move a soft link |
1,432,937,697,000 |
I just installed mongodb version 3.0.3, on Ubuntu. I edited the conf file to change the default data directory to "/home/user/mongodb", and gave it the following permissions:
drwxr-xr-x 4 mongodb mongodb 4096 May 29 23:26 mongodb
I haven't changed anything else in the conf file. When I try to start the mongod service as
sudo service mongod start
and connect to the shell via the mongo command, I get the following error
MongoDB shell version: 3.0.3
connecting to: test
2015-05-29T23:58:49.450+0200 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-05-29T23:58:49.452+0200 E QUERY Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
at connect (src/mongo/shell/mongo.js:179:14)
at (connect):1:6 at src/mongo/shell/mongo.js:179
exception: connect failed
The log file says this, every time I try to start the server:
2015-05-30T00:01:02.552+0200 I CONTROL ***** SERVER RESTARTED *****
2015-05-30T00:01:02.581+0200 I STORAGE [initandlisten] exception in initAndListen std::exception: boost::filesystem::status: Permission denied: "/home/user/mongodb/storage.bson", terminating
2015-05-30T00:01:02.581+0200 I CONTROL [initandlisten] dbexit: rc: 100
As far as I can tell, the permissions are correct. What could be the problem?
|
A user's home directory, by default, can be read, written, and listed only by its owner, so other users (other than the root user) can't even list the contents of another user's home folder. So you need to ensure that the mongodb user has access to the newly provided directory.
You can check that with a command like
namei -m /home/user/mongodb
output example
f: /home/scantlight/mongodb
dr-xr-xr-x /
drwxr-xr-x home
drwx------ scantlight
drwxr-xr-x mongodb
As you can see, my home folder has rwx permissions for me, the owner, while other users have no permissions, so the contents of my home folder can't even be listed by another user (except root).
Of course you don't want to give other users read or write permission on your home directory, but you do want them to be able to reach the destination folder. This is where the x flag comes in. When the x flag is set on a file it means the file can be executed — hence it's called the execute flag — but when it is set on a folder while neither r (read) nor w (write) is set, it means that other users, or users in the same group, can pass through this folder to reach a folder deeper in the tree.
So to ensure that the mongodb user can reach, read, and write the /home/user/mongodb folder, your home folder needs the execute flag (x flag) set for all other users:
chmod o+x /home/user
after this, previous output example should look like
f: /home/scantlight/mongodb
dr-xr-xr-x /
drwxr-xr-x home
drwx-----x scantlight
drwxr-xr-x mongodb
Note the x permission for all other users. If this doesn't help :) just check the file permissions inside the /home/user/mongodb folder — maybe there is some issue with the permissions of the files it contains.
| Mongodb permissions error after changing data directory |
1,432,937,697,000 |
I've made a file as root, and written a string in it.
Now I've changed mode to "0" like this:
root# ls -al transit/
total 4.0K
---------- 1 root root 6 Jan 5 18:15 27050
root#
If I try to tail, head, or cat it, it works:
root# cat transit/27050
320646
root#
Why is it possible to read it?
|
Refer to the answer here.
Basically, rootness trumps permissions.
Permissions 000 means only root can read or write the file.
I'm not aware of any extra special use for the combination of root
ownership and 000 permissions.
Also, you could find some worthy information from this question as well.
So, as user Hauke Laging points out in his answer,
Always assume that root (and any other user/process with
CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH) can do everything
unless an LSM (SELinux, AppArmor or similar) prevents him from doing
that.
That means also that you should assume that all your keystrokes can be
read. Passwords aren't really safe. If you want a serious level of
security then you must use a system which is completely controlled by
you (and not even used by anyone else).
So, even permissions 000 cannot restrict the root user from reading file contents unless there is any LSM preventing the root user from reading the file contents.
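A minimal demonstration of the mode bits themselves (root's bypass can't be proven from a non-root shell, so it's left as a comment):

```shell
# chmod 000 removes every permission bit; stat confirms the mode is 0.
f=$(mktemp)
echo 320646 > "$f"
chmod 000 "$f"
stat -c '%a %A' "$f"    # 0 ----------
# root# cat "$f"        # still prints 320646 — CAP_DAC_OVERRIDE wins
rm -f "$f"
```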
| Still able to read file after changing permissions |
1,432,937,697,000 |
OS: Linux CentOS 7, NFSv4
On one machine I exported an NFS share group-owned by the nfsgroup with 2770 privileges for group collaboration:
groupadd -g 5000 nfsgroup
chown nobody:nfsgroup /home/groupshare
chmod 2770 /home/groupshare
Then, on the other machine I add the same group and assign it to the root user. I then try to access the mounted NFS share and get a 'Permission denied' error:
groupadd -g 5000 nfsgroup
usermod -a -G nfsgroup root
ls -l /mnt/groupshare # Permission denied!
Note: For this I tried to re-login as root and even reboot the machine, the result is the same: Permission denied.
I then do the same thing for the regular account (named user) and have no access problems
usermod -a -G nfsgroup user
su - user
ls -l /mnt/groupshare # Works as expected, no permission errors
The only way I can access the share under the root is by changing the effective group (despite supplementary nfsgroup is there):
su - root
newgrp nfsgroup
ls -l /mnt/groupshare # No permission errors
I find this behavior inconsistent and weird. Can someone please shed a light on why it behaves this way?
One piece of information, that maybe somehow relevant is as follows. Both the id (under user account) and id user return the same output, in particular groups=1000(user),5000(nfsgroup), while the id (under root account) produces groups=0(root) and id root outputs, as expected, groups=0(root),5000(nfsgroup).
|
I think I found what caused the problem. When an NFS client accesses an NFS share, the server checks the UID and GID of the accessing user. By default the NFS server comes with the root_squash option enabled, which maps a client accessing the share as root to the UID/GID of nfsnobody.
After I had added the no_root_squash option to the export in /etc/exports file the problem disappeared.
Apparently when the NFS server 'squashes' root's UID/GID it disregards the supplementary groups altogether (this seems wrong to me, but the NFSv4 standard apparently sees it differently).
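For illustration, the server-side export line might look like this (the subnet and other options are assumptions — adjust to your network):

```
# /etc/exports on the NFS server — no_root_squash disables root UID/GID mapping
/home/groupshare  192.168.1.0/24(rw,sync,no_root_squash)
```

Be aware that no_root_squash lets remote root act as root on the export, so it is usually reserved for trusted hosts.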
| Do root's supplementary groups behave differently than regular account ones for NFS shares? |
1,432,937,697,000 |
I've accidentally removed myself from sudoers file by doing usermod -G user group without -a, and now I am not in the sudoers group. I've tried doing su - and entering root's password but it says su: Authentication failure. Is there a way to add myself back? Using Fedora 20.
|
You can do this in single-user mode.
Restart the system; at the GRUB menu, press the up or down arrow so the GRUB screen pauses.
Press "e" to edit the GRUB entries,
Select the kernel line and press "e" again to enter edit mode
Now add "1" or "single" at the end of the line, and press Enter.
Press "b" to boot with this setting. Fedora will now start in single-user mode.
Now you can reset root password with below command
passwd
You can also edit the file /etc/sudoers to assign sudo privileges to other users.
UPDATE ---
For GRUB2
Use the arrow keys to select the boot entry you want to edit
Press e to start editing that entry
Use the arrow keys to go to the line that starts with linux or linux16
If you have a UEFI system it's the line that starts with linuxefi
Go to the end of that line, add a space, then rw, then another space, and init=/bin/bash
Press Ctrl-x or F10 to boot that entry
Now you can reset root password with below command
passwd
| Fedora: How can I add myself back to sudoers file? |
1,432,937,697,000 |
I was reading the man page of chown. I don't understand why S_ISUID and S_ISGID mode should be cleared when the function returns successfully.
|
I think you're pointing to this from the man page:
When the owner or group of an executable file are changed by an
unprivileged user the S_ISUID and S_ISGID mode bits are cleared.
So why are they cleared now. You see they are only cleared in case of an executable file.
Because if the bits (SUID/SGID) were kept across the ownership change, the file would afterwards execute with the privileges of the new owner or group — so an unprivileged user could manufacture a binary that runs with privileges they shouldn't have. That would be a huge security breach, so the kernel clears the bits.
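A quick way to observe this on a scratch file (on Linux, even a no-op chown of a non-directory drops the bits):

```shell
# Set SUID+SGID on an executable file, then chown it to its current
# owner — the kernel clears the special bits anyway.
f=$(mktemp)
chmod 6755 "$f"
stat -c '%a' "$f"                   # 6755
chown "$(id -un):$(id -gn)" "$f"
stat -c '%a' "$f"                   # 755 — SUID/SGID gone
rm -f "$f"
```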
| Why the S_ISUID and S_ISGID mode bits got cleared when the owner or group of an executable file are changed by an unprivileged user |
1,432,937,697,000 |
Similar to https://stackoverflow.com/questions/15143614/file-ownership-changes-to-root-after-saving-from-a-program-in-ubuntu but I can't use the answer since I'm not running a command-line app as sudo. I'm running a desktop app on Mint 16 32-bit on a shared file (locally shared, i.e. just on a local drive with 777 perms and nobody:users ownership), which then is not able to be overwritten by another user when they go to use it because it becomes adminuser:adminuser and 644.
How can I share this file between users, and keep it from switching ownership/perms whenever the main admin user uses it?
|
Without the ability to use sudo your options become limited to essentially 2.
Method #1
You can either put the users into the same Unix group (/etc/group) so that they're able to access the same files & directories.
Example
$ more /etc/group
somegroup:x:1001:adminuser,nobody
You then need to set the parent directory that contains this file like so:
$ chgrp somegroup parentdir
$ chmod g+rwxs parentdir
This method will force any files or directories created underneath parentdir to have the group set to somegroup. This method works fairly well, by and large, but can be a bit fragile if parentdir's permissions or ownership gets messed up. Also this method doesn't work if files and/or directories are moved into the directory from some other location.
Method #2
The more robust way to do this would be to make use of access control lists (ACLs) on the file or directory of interest, using the command setfacl.
$ setfacl -Rdm g:somegroup:rx somedir
$ ll -d somedir/
drwxrwxr-x+ 2 saml saml 4096 Feb 17 20:46 somedir/
You can then confirm that the ACL has been applied using getfacl.
$ getfacl somedir/
# file: somedir/
# owner: saml
# group: saml
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:somegroup:r-x
default:mask::rwx
default:other::r-x
Setting the permissions above on the parent directory will enforce that a default ACL will get applied to any new files or sub-directories contained within somedir.
References
Getting new files to inherit group permissions on Linux
| Desktop app run by admin user (but not explicitly with sudo) takes file ownership of shared file |
1,432,937,697,000 |
This question is related to the two other questions I had earlier, about enabling Raspberry Pi to act as a motion sensor that will try to ssh into a more powerful server when it detects motion (the more power server will then do additional processing via a script). So here's what I did:
On the Raspberry Pi, I installed Linux motion app
I also used ssh-keygen on the Raspberry Pi and then using ssh-copy-id copies public keys to the more power server, so that the Raspberry Pi can ssh to the server without having to type in the password.
On the motion.conf file, there's a line for on_motion_detected event for when the motion is detected by Raspberry Pi, on that line, I have something like:
ssh [email protected] '/exec/some/script/here'
But the script on the more powerful server is never executed because the motion daemon is running as user 'motion', rather than the user (pi) that did the ssh-keygen that the remote server accepts. I know this because:
If I change the on_motion_detected command to:
on_motion_detected echo hello_world | wall
this command gets executed and I see it on all the terminals that are ssh'd into the Raspberry Pi
Or, if instead of on the on_motion_detected line, I simply run ssh [email protected] '/exec/some/script/here' on the Raspberry Pi's command line (as user 'pi'), it also gets triggered by the server.
So the question is, how do tell the Raspberry Pi's operating system to 'use' the key of the 'pi' user when the 'motion' user tries to ssh into the more powerful server, in that on_motion_detected event?
|
One option is copying your ssh keys from the pi user to the motion user.
(Assuming that your home user of pi and motion is /home/pi and /home/motion)
# mkdir /home/motion/.ssh/
# cp -a /home/pi/.ssh/* /home/motion/.ssh/
# chown -R motion /home/motion/.ssh/
Explanation:
If not specified otherwise, the ssh command uses the keys in ~/.ssh/id_*, where ~/ is the home directory of the user who executed the command. So if you run as the motion user, ssh will try to use the keys in /home/motion/.ssh/ instead of those in /home/pi/.ssh.
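An alternative, if you'd rather not duplicate the keys, is to point ssh at pi's key explicitly with -i in motion.conf (user@server and the script path are placeholders; the motion user must be granted read access to the key file, which weakens its protection):

```
# motion.conf — reference pi's private key directly
on_motion_detected ssh -i /home/pi/.ssh/id_rsa user@server '/exec/some/script/here'
```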
| Linux motion user - run it as ssh key as pi to remote server |
1,432,937,697,000 |
I just installed npm and node.js, and I couldn't access npm. And I'm like "why?" and my OS is like "because /usr/local/bin is at 700 permissions" and I'm like "should it really be that way?" /usr/local is supposed to be .. the local user's bin folder? Then why does it require root access?
It is filled with GAE stuff. Maybe Google App Engine changed it, I don't know.
|
No, /usr/local/bin and pretty much everything in it should be set 755.
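Assuming the directory is owned by root, restoring the conventional mode is one command (run as root or via sudo):

```shell
# Restore world read/execute on /usr/local/bin:
chmod 755 /usr/local/bin
stat -c '%a %n' /usr/local/bin    # 755 /usr/local/bin
```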
| Should my /usr/local/bin be 700 permissions? |
1,432,937,697,000 |
I've got an external drive, and I'd like to have a linux partition on it. I formatted everything correctly on one machine with Fedora20 (I have one ntfs and one ext4 partition), but when I plug it in my older machine with Fedora12, it doesn't automount the ext4 partition. I have found a solution that involves putting something like the following in /etc/fstab:
UUID=0123-abcd /media/MYEXTDISC ext4 ...,umask=000,dmask=000,...
But this has two problems:
It doesn't help since the old machine with Fedora12 does not support umask and dmask for ext4.
I would have to do something like that on every linux computer I use, which is impossible, since some of them are at work where I can't modify /etc/fstab.
My idea was that the the filesystem could provide some option like "treat all non-root files as non-root files" or "give all non-root files the rights 666 or 777. However, I don't know if this is possible with ext4. I can change the filesystem to anything linuxy, the HDD is empty at this moment.
|
It seems your problem is that each installation has a unique owner. Linux identifies users by number, or UID. You can see your user id with the id command:
$ id
At any rate, your first user on Fedora 20 has a UID of 1000, while Fedora 12 has a UID of 500.
You either need to relax the permissions, use a common group on each install, or use the same UID for your users.
It is possible you may be running into problems with selinux as Fedora 12 and Fedora 20 auto mount in very different locations. Check for selinux problems or set selinux in permissive mode.
With an ext4 partition, as you can see, you use chown and chmod to manage permissions.
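If you go the route of simply relaxing the permissions, chmod's capital X is useful: it adds execute only where it makes sense (directories, and files that are already executable). A sketch on a scratch tree:

```shell
# a+rwX: everyone gets read/write; execute is added to directories only.
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/file"
chmod -R a+rwX "$d"
stat -c '%a' "$d/sub"    # 777
stat -c '%a' "$d/file"   # 666
rm -rf "$d"
```

On the real drive this would be something like `sudo chmod -R a+rwX /media/MYEXTDISC` (mount point taken from the question).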
| Configure external HDD for use with multiple linux PCs |
1,432,937,697,000 |
Some how, can't remember why, I got into the habit of downloading source to the directory /opt, which I chown to my user/group.
I have this feeling that it is not a good thing to do. Is there anything wrong with owning a directory that is outside of your own home directory?
|
First of all, another question explains the /opt directory.
Now, your question about the ownership depends on the environment. Is this is a work system, or personal system?
If it's entirely your computer system, download wherever you want to!
That seems like a fine place to maintain ownership of on your own system.
If this is a work computer, it would make more sense to have a more restricted account own that directory, in case other people need the source you're downloading for any reason like using it or auditing it.
| Bad to own a directory outside of your home directory? |
1,432,937,697,000 |
I prefer to keep my /home/dotancohen/ directory as permissions 0750. However, I do need Apache to access /home/dotancohen/someProject/public_html/. I know that I could configure the home directory as 0755 and all subdirectories other than ~/someproject/ as ___0 but that is a pain. How might I allow Apache to access the ~/someproject/public_html/ directory yet keep the home directory as 0750?
I tried to symlink /var/www/someProject to /home/dotancohen/someProject/ but in any case Apache fails to get past the 0750 barrier on /home/dotancohen/. I suppose that I could add the www-data user (Apache) to the dotancohen group, but I feel that is giving it too much power.
Alternatively, I could keep the web files in /var/www/someProject/ but due to other reasons I prefer to keep them under my home directory.
|
There's a few different options available to you. Here are the ones I can think of; each has its own merits and disadvantages.
You can set the world execute bit on the parent directories. That way, anyone who knows the full path to a file will be able to access it, but no one else can. This still does leave well-known files up for grabs though unless you protect them with more restrictive permissions (things like ~/.bashrc, ~/.gnupg, ~/.Xauthority and so on may be of interest to an attacker so would need their permissions tightened).
You may be able to leverage ACLs to do the same thing with more granularity, e.g. only allowing the www-data user or group execute access to the directories, read/execute access to any directory the web server needs to provide a content listing for and read access to files it should serve.
You could add the www-data user to the dotancohen group and then revoke group permissions on everything except what you want Apache to be able to access. That's probably the easiest approach that opens up as little as possible, but it gets trickier if you are already using group permissions for some other purpose.
Or, as you say, you could move the publicly-served files out of your home directory entirely. This is definitely the easiest setup to get right in terms of permissions, and it's certainly the choice I would make unless there's some compelling reason not to. Depending on your setup and specific needs, it may even be practical to use 0750 or 0770 permissions on such a public root with appropriate ownership, which would restrict access to only yourself and the web server. Owner yourself, group www-data and permissions 0710 throughout such a directory tree would probably be about as tight as you can go, but means the web server must know the full name of every file it will access under that directory.
As an aside, you may want to consider migrating to FHS-compliant /srv rather than /var.
| Allow world access to directory under 0750 directory |
1,432,937,697,000 |
I currently have a small Ubuntu Server 12.04 machine (test environment) with about 3 non-root users created. Each user has their own public_html directory under their home...thereby allowing them to deploy multiple apps as named virtual hosts. Each user belongs to the Apache www-data group, set up as follows:
sudo usermod -a -G www-data [username]
sudo chown -R [username]:www-data /home/[username]/public_html
sudo chmod 2750 /home/[username]/public_html
Now as the root user, I am in the process of creating a bash script that will automate the creation of the folders for the VirtualHost under a prompted user's public_html as well as creating an associated entry in /etc/apache2/sites-available/. The script (run with sudo) will prompt for the user ($uzer) and the desired virtual host name ($vhost). So far after running a few checks I eventually get to the following...
mkdir -vp /home/$uzer/public_html/$vhost
mkdir -vp /home/$uzer/public_html/$vhost/www
mkdir -vp /home/$uzer/public_html/$vhost/logs
mkdir -vp /home/$uzer/public_html/$vhost/backups
I need to change the ownership of these newly created folders, so I'm unsure whether I should be doing the following:
chown -vR $uzer:www-data /home/$uzer/public_html/$vhost
chmod 2750 /home/$uzer/public_html/$vhost
My questions:
Is my folder structure correct/ideal?
I know I've used recursive (-R) option, but should I be repeating the same for $vhost/www, $vhost/logs and $vhost/backups?
Am I correct in thinking that the chmod above is probably redundant?
Is there a way I can run the mkdir commands as the user $uzer?
|
Q: Is my folder structure correct/ideal?
A: Folder structure seems fine.
Q: I know I've used recursive (-R) option, but should I be repeating the same for $vhost/www, $vhost/logs and $vhost/backups?
A: It would be redundant to run it on those directories
Q: Am I correct in thinking that the chmod above is probably redundant?
Yes, technically it's redundant because your initial sudo that creates the directories is setting the 'set group id bit', but setting that bit (the 2 in 2750) is not a guarantee. I've seen directories with this on where users have either moved files into the directory or accidentally changed the group on files, so I'd leave it.
Is there a way I can run the mkdir commands as the user $uzer?
root$ su $uzer -c "mkdir ..."
Also you could save a step on the chmod of the /www, /log, & /backups by using the mkdir --mode=... switch.
For example
mkdir -vp --mode=2750 /home/$uzer/public_html/$vhost
mkdir -vp --mode=2750 /home/$uzer/public_html/$vhost/www
mkdir -vp --mode=2750 /home/$uzer/public_html/$vhost/logs
mkdir -vp --mode=2750 /home/$uzer/public_html/$vhost/backups
| Changing permissions on user files for automating Apache VirtualHost creation |
1,432,937,697,000 |
On my system I have three partitions: one is shared between W7 and Linux Mint (NTFS), and the other two are OS-specific.
In my home directory I have created a symbolic link to another directory on the shared partition.
I have a simple .cpp file there which I compiled via g++ name.cpp. Usually, this would also make the file executable, but this time I had to manually chmod 755 it.
Strangely, this didn't work either, the console said it did not have the required permission. So I executed sudo chmod 755 a.out. This asked me for my password, and reported no errors. However, it had no effect. a.out was not executable. I've noticed some other strange behaviors in symlink directories too.
Whats going on and how can I fix it?
Edit:
My mount options:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda6 during installation
UUID=7c50dab1-730b-4d3c-a944-51da19c8e2c6 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda7 during installation
UUID=12e39b76-7f19-4c6d-a724-81ea29211db1 none swap sw 0 0
/dev/sda5 /media/yannbane/Shared ntfs defaults,fmask=117,dmask=007,gid=46 0 0
|
As you can see, the fmask option is set to 117, which masks out the execute bits for everyone (and all permissions for others), so nothing on the partition is executable. If you don't want any restrictions, you may set it to 0 and remount. But please be aware: any restriction here was added to avoid problems and pitfalls.
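If you want owner and group to be able to execute files on that partition while still locking others out, a relaxed mask would look like this (a sketch — on NTFS there is no per-file execute bit, so this makes every file executable for owner and group):

```
# /etc/fstab — fmask/dmask 007 yields rwxrwx--- on files and directories
/dev/sda5 /media/yannbane/Shared ntfs defaults,fmask=007,dmask=007,gid=46 0 0
```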
| Cannot execute files on another partition |
1,432,937,697,000 |
I'm a user on a Debian machine. When I create a file in my home directory, the default permissions appear to be 700, even though umask returns 0022:
eulerz@foo:~$ touch testing
eulerz@foo:~$ ls -l testing
-rwx------ 1 eulerz users 0 2012-03-15 19:34 testing
In addition, when I create a file in the tmp directory, it doesn't show up as executable, but it does when I move it to my home directory:
eulerz@foo:~$ touch /tmp/made_in_tmp
eulerz@foo:~$ ls -l /tmp/made_in_tmp
-rw-r--r-- 1 eulerz users 0 2012-03-15 19:39 /tmp/made_in_tmp
eulerz@foo:~$ mv /tmp/made_in_tmp ~
eulerz@foo:~$ ls -l /u/eulerz/made_in_tmp
-rwxr--r-- 1 eulerz users 0 2012-03-15 19:39 /u/eulerz/made_in_tmp
and, of course, chmod doesn't change this:
eulerz@foo:~$ chmod -v u-x made_in_tmp
mode of `made_in_tmp' changed to 0644 (rw-r--r--)
eulerz@foo:~$ ls -l /u/eulerz/made_in_tmp
-rwxr--r-- 1 eulerz users 0 2012-03-15 19:39 /u/eulerz/made_in_tmp
What the heck?
Why is this happening? Where is it telling my home directory "set new things as u+x NO MATTER WHAT"?
And this just started happening recently; the older files in my home directory don't have this problem (but I made a copy of one and it did.)
|
The helpdesk got back to me and explained that it's due to the merging of Windows NTFS permissions with regular POSIX permissions, since the Isilon is configured to be accessible by both NFS and CIFS. So removing the CIFS access would fix the permissions issue.
| Chmod u-x isn't changing anything and I have no idea why |
1,432,937,697,000 |
/var/www$ wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
This results in:
--2012-02-08 21:20:17-- http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
Resolving ftp.drupal.org... 64.50.233.100, 64.50.236.52
Connecting to ftp.drupal.org|64.50.233.100|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2728271 (2.6M) [application/x-gzip]
drupal-7.0.tar.gz: Permission denied
Cannot write to `drupal-7.0.tar.gz' (Permission denied).
eyedea@eyedea-ER912AA-ABA-SR1810NX-NA620:/var/www$ ^C
eyedea@eyedea-ER912AA-ABA-SR1810NX-NA620:/var/www$ wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
--2012-02-08 21:46:34-- http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
Resolving ftp.drupal.org... 64.50.236.52, 64.50.233.100
Connecting to ftp.drupal.org|64.50.236.52|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2728271 (2.6M) [application/x-gzip]
drupal-7.0.tar.gz: Permission denied
Cannot write to `drupal-7.0.tar.gz' (Permission denied).
I checked the permissions of /var/www and i can't change them. What's going on here?
|
It's totally normal: your /var/www directory belongs to the root user and root group with these rights: drwxr-xr-x.
It's far better to have /var/www owned by root, because it prevents possible security flaws in Apache or PHP from writing to and changing the source code on this server.
What you can do about that:
Run wget with root rights. For instance:
$ sudo wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
or
$ su -c "wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz"
Download it from your $HOME and untar it afterwards
$ cd ~; wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz
Ignore those security recommendations and change rights of /var/www
$ sudo chown `id -u`:`id -g` /var/www
EDIT: If you have broken your /var/www tree with a chmod -R 777 /var/www/* and haven't burned in hell, you can thank god and quickly execute these commands before he comes for you:
$ sudo find /var/www -type d -exec chmod 755 {} \;
$ sudo find /var/www -type f -exec chmod 644 {} \;
| Permission Denied when downloading Drupal |
1,432,937,697,000 |
I have setup a Freenas system that I would like to mount on my Ubuntu Desktop. Freenas is configured with cifs service.
When I open the freenas server in nautilus smb://freenas/homenas/ I can create and delete files without a problem.
Mounting the server also works:
sudo mount -t cifs //192.168.1.108/homenas /media/bigHD -o username=test,password=123
The owner of all the files in /media/bigHD is tom (my non root account).
However when I create a file under /media/bigHD its is owned by root. I really do not wish to chown every file/folder after I create it.
How can I fix this?
|
I figured out the problem: I had to specify the gid and uid. It would have saved so much time if I had just read the man page more carefully...
uid=arg
    sets the uid that will own all files or directories on the mounted
    filesystem when the server does not provide ownership information.
    It may be specified as either a username or a numeric uid. When not
    specified, the default is uid 0. The mount.cifs helper must be at
    version 1.10 or higher to support specifying the uid in non-numeric
    form. See the section on FILE AND DIRECTORY OWNERSHIP AND
    PERMISSIONS below for more information.
gid=arg
    sets the gid that will own all files or directories on the mounted
    filesystem when the server does not provide ownership information.
    It may be specified as either a groupname or a numeric gid. When
    not specified, the default is gid 0. The mount.cifs helper must be
    at version 1.10 or higher to support specifying the gid in
    non-numeric form. See the section on FILE AND DIRECTORY OWNERSHIP
    AND PERMISSIONS below for more information.
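With that, the mount command from the question becomes (uid/gid 1000 assumed for the first local user — substitute the output of id -u and id -g for your account):

```
sudo mount -t cifs //192.168.1.108/homenas /media/bigHD \
    -o username=test,password=123,uid=1000,gid=1000
```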
| Changing owner of NAS drive |
1,432,937,697,000 |
Here's a sequence of commands and the resulting output:
$ touch testfile
$ stat -c'%a %A' testfile
644 -rw-r--r--
What must I do so that when a user follows that sequence, I get this output instead:
664 -rw-rw-r--
|
POSIX defines the utility umask, which sets the file mode creation mask — either for the current shell session (and the processes it starts) or, persistently, for every newly invoked shell (via .bash_profile, .bashrc, etc.).
Show the currently set mask in octal or symbolic form:
$ umask
0022
$ umask -S
u=rwx,g=rx,o=rx
The octal numbers indicate the values which are getting removed from the full access:
$ umask 0002 # or: umask g+w
$ touch testfile
$ stat -c'%a %A' testfile
664 -rw-rw-r--
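To make the mask the default for every new shell, put the call in a shell startup file (bash shown as an example):

```
# ~/.bashrc (or ~/.profile) — newly created files become group-writable
umask 0002
```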
| How to make files created by a specific user to have specific permissions by default |
1,432,937,697,000 |
Assume you are user x, so running id gives
uid=1001(x) gid=1001(x) groups=1001(x)
And there is also a user y with
uid=1002(y) gid=1002(y) groups=1002(y)
Now as root we create a file readme in user's x home directory like this:
# cd /home/x
# touch readme
# echo "hello" > readme
# chown root:y readme
# chmod 640 readme
And we make a copy of less
# cd /home/x
# cp /usr/bin/less .
# chown y:x less
# chmod 6110 less
I would expect user x to be able to read readme by running ./less readme because of the setuid and setgid, but I get a "permission denied" error. Why?
This is my logic, but probably something is wrong.
chmod 6110 gives only execution rights to the owner (y) and members of the group (x). Since user x belongs to group x, he can execute less. Then the setuid makes the effective UID to be the same as y, and the setgid makes the effective GID the same as the group of the owner, again y. And since readme's group is y, less should have read permission.
|
The error lies here:
the setgid makes the effective GID the same as the group of the owner, again y.
The setgid bit causes the effective gid to be that of the owner group of the binary, which is x here (chown y:x less).
less ends up running with an effective uid corresponding to y’s, and an effective gid corresponding to x’s. Since readme is owned by root:y, it can’t be read.
| Why "permission denied" when running `less` with chmod 6110? |
1,432,937,697,000 |
I'm using setrlimit() from within my C++ code to try and set the RLIMIT_NOFILE to RLIM_INFINITY (getrlimit then set rlim_cur & rlim_max to RLIM_INFINITY and setrlimit()), but I get "Operation not permitted" error. The code runs as root.
is it even possible to set RLIM_INFINITY for RLIMIT_NOFILE?
|
RLIMIT_NOFILE is capped by the maximum defined by /proc/sys/fs/nr_open, and trying to set it above that results in EPERM. For a brief period (with kernel 2.6.28), it was possible to set it to RLIM_INFINITY, but that caused huge performance issues with some programs — see the revert commit for details.
This is documented in the corresponding EPERM entry in man setrlimit, and the description of /proc/sys/fs/nr_open in man 5 proc.
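You can inspect the ceiling — and, as root, raise it — directly; the sysctl value below is just an example:

```shell
# The hard ceiling for RLIMIT_NOFILE:
cat /proc/sys/fs/nr_open
# Raise it (as root); afterwards setrlimit() can go up to the new value:
#   sysctl -w fs.nr_open=2097152
```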
| Can you set RLIMIT_NOFILE to RLIM_INFINITY? |
1,432,937,697,000 |
I was wondering if it's possible to create a rule in /etc/security/time.conf in which you restrict users to log in not just by username but instead by the group they belong to.
|
The pam_group module has a similar configuration file, group.conf, which lets you restrict groups with a certain time definition.
| It's possible to restrict access with PAM based on groups? |
1,432,937,697,000 |
I have a folder /stuff that is owned by root:stuff with setgid set so all new folders' have group set to stuff.
I want it so:
New files have rw-rw----:
User: read and write
Group: read and write
Other: none
New folders have rwxrwx---:
User: read, write, and execute
Group: read, write, and execute
Other: none
If I set default ACLs with setfacl then it seems to apply to both files and folders. For me, this is fine for Other since both files and folders get no permissions:
setfacl -d -m o::---- /stuff
But what do I do for User and Group? If I do something like above then it will be set on all files and folders.
And I can't use umask.
I have a shared drive. I am trying to make it so folks in stuff can read/write/execute but nobody else (Other) can. And I want to make sure that by default files do not get the execute bit set, regardless of what the account's umask is.
|
There is no way to differentiate between files and directories using setfacl only.
Instead you can workaround the issue with using inotify-tools to detect new created files/dirs, then apply the correct ACLs for each one recursively:
1- You have to install inotify-tools package first.
2- Recover the default /stuff directory acls
sudo setfacl -bn /stuff
3- SetGID
sudo chmod g+s /stuff
4- Execute the following script in the background for testing purpose, for a permanent solution wrap it within a service.
#!/bin/bash
sudo inotifywait -m -r -e create --format '%w%f' /stuff | while IFS= read -r NEW
do
# when a new dir created
if [ -d "$NEW" ]; then
sudo setfacl -m u::rwx "$NEW"
sudo setfacl -m g::rwx "$NEW"
# when a new file created
elif [ -f "$NEW" ]; then
sudo setfacl -m u::rw "$NEW"
sudo setfacl -m g::rw "$NEW"
fi
# setting no permissions for others
sudo setfacl -m o:--- "$NEW"
done
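For the "wrap it within a service" step, a minimal systemd unit could look like the sketch below; the unit name and the install path of the script are assumptions, so adjust them to wherever you put the watcher:

```ini
# /etc/systemd/system/stuff-acl.service (hypothetical name and path)
[Unit]
Description=Apply default ACLs to new files and dirs under /stuff

[Service]
# Install the watcher script above as /usr/local/bin/stuff-acl.sh
ExecStart=/usr/local/bin/stuff-acl.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now stuff-acl.service.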
| How do I set different default permissions for files vs folders using setfacl? |
1,432,937,697,000 |
So I was thinking about learning some C this week, so I went about setting everything up. I was checking to make sure I had gcc installed using gcc -v, and I got the error
bash: /usr/bin/gcc: Permission denied
After this, I tried the same command using sudo, but got the error
sudo: gcc: command not found
If it's relevant, my Linux version is Pop!_OS 20.04 LTS, running kernel 5.8.0-7630-generic.
What should I do to resolve this issue?
|
So after a little digging, I managed to discover which gcc version was installed by using the command
ls /usr/lib/gcc/x86_64-linux-gnu
This revealed a folder called 9, and assuming this was the version of gcc installed, I used
sudo apt-get install --reinstall gcc-9
Following this, gcc -v worked fine without sudo.
| Cannot execute gcc due to "permission denied" |
1,432,937,697,000 |
I have a directory in which I am storing all my shell scripts, and I would like new files to be made executable by default so that I don't have to run chmod u+x [file] every time. Is there a way to make this happen? I tried chmod -R u+x [directory], but this only makes all the existing files executable, not ones that I add later. Is there a shell command or perhaps a shell script that you can suggest that can make this happen? Thanks.
|
To make permissions apply to new files, you need an ACL (access control list). The main tool to do this is setfacl.
You can set ACLs on directories so that new files created in them are always world-writable, or owned by a specific group. You are specifically interested in making new files executable.
That would be done with:
sudo setfacl -Rm d:u::rwx dir
That means, "recursively set default user permissions as rwx for new files". When I experiment I get this:
$ mkdir dir
$ getfacl dir
user::rwx
group::r-x
other::r-x
$ setfacl -Rm d:u::rwx dir
$ getfacl dir
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::r-x
default:other::r-x
Cool, we've added some default: lines which now say that new files in this directory will have these specific permissions applied. But when I touch the new file we see:
touch dir/file
ls -l dir
-rw-r--r-- 1 usr grp 0 Aug 19 10:57 file
It's not user-executable! The man page says:
The perms field is a combination of characters that indicate the read (r), write (w), execute (x) permissions. Dash characters in the perms field (-) are ignored. The character X stands for the execute permission if the file is a directory or already has execute permission for some user. Alternatively, the perms field can define the permissions numerically, as a bit-wise combination of read (4), write (2), and execute (1). Zero perms fields or perms fields that only consist of dashes indicate no permissions.
I've made the relevant part of that bold. We can set the x ACL so that new files are executable, BUT that will only apply if the file already has execute permissions for some user.
This is a limitation. I assume it's a security limitation so that malicious applications can't stick any file they like in a directory, have it automatically become executable, and then run it.
To demonstrate how ACLs could be used to do something similar, I'll show another example:
setfacl -Rm d:g::rw dir
touch dir/file1
ls -l dir/file1
-rw-rw-r-- 1 usr grp 0 Aug 19 11:00 dir/file1
You can see that I told the ACLs to add a default rule to make new files group-writable. When I made the new file, I confirmed that it was group writable (while new files are usually only group readable).
| Make every file in a directory executable by default? |
1,432,937,697,000 |
How can I whitelist a directory for execution with firejail?
In particular, I would like to execute Firefox Nightly in firejail. But I get the following error:
$ firejail --profile=/etc/firejail/firefox.profile --whitelist=$HOME/software/firefox-nightly ./firefox
Reading profile /etc/firejail/firefox.profile
Reading profile /etc/firejail/whitelist-usr-share-common.inc
Reading profile /etc/firejail/firefox-common.profile
Reading profile /etc/firejail/disable-common.inc
Reading profile /etc/firejail/disable-devel.inc
Reading profile /etc/firejail/disable-exec.inc
Reading profile /etc/firejail/disable-interpreters.inc
Reading profile /etc/firejail/disable-programs.inc
Reading profile /etc/firejail/whitelist-common.inc
Reading profile /etc/firejail/whitelist-var-common.inc
Warning: networking feature is disabled in Firejail configuration file
Parent pid 769552, child pid 769553
Warning: An abstract unix socket for session D-BUS might still be available. Use --net or remove unix from --protocol set.
Post-exec seccomp protector enabled
Seccomp list in: !chroot, check list: @default-keep, prelist: unknown,
Child process initialized in 91.60 ms
Exec failed with error: Permission denied
and testing with a shell:
$ firejail --profile=/etc/firejail/firefox.profile --whitelist=$HOME/software/firefox-nightly sh
[...]
$ ls -l firefox
-rwxr-xr-x 1 vinc17 vinc17 16928 2020-05-16 13:22:44 firefox
$ ./firefox
sh: 2: ./firefox: Permission denied
Note: /etc/firejail/disable-exec.inc has noexec ${HOME}. But adding --ignore='noexec ${HOME}' just after firejail has no effect. Moving the directory under /usr/local has no effect either.
|
I had the same problem, so I asked on the firejail GitHub repo. Here is the answer I received:
If you want to execute software from inside your home, you need to ignore noexec ${HOME} and ignore apparmor.
cat > ~/.config/firejail/firefox-developer-edition.local <<'EOF'
ignore noexec ${HOME}
ignore apparmor
whitelist ${HOME}/files/Portable/FirefoxDeveloperEdition
EOF
For you, the path would be slightly different, I guess:
whitelist ${HOME}/software/firefox-nightly
Ref: https://github.com/netblue30/firejail/issues/3794
| Whitelist a directory for execution with firejail |
1,432,937,697,000 |
I've made a Makefile to simplify my life, and this Makefile calls a script in a bin directory I've created. After running my command make something, I got the following error:
/bin/sh: 1: .docker/bin/echo-title: Permission denied
After searching a bit, and thanks to this answer, I gave execute permission to the user who created the file (aka me) with the chmod command. My question is: since I'm the owner of the file, shouldn't I have execute permission right away? And if not... why?
This is for a personal project, but at work we're also using Makefiles and bin scripts, exactly this way (I actually copied and pasted the base content of the files) and I don't have to run a chmod command to run the scripts. Why is that so?
(Running other commands in the Makefile that don't involve bin script work well.)
|
It's not recommended to give all newly created files execute permissions as most files don't need to be executed.
umask(2) is used to determine what the default file permissions will be. If you just run umask with no options it will print the current value for your user. This is the value that will be subtracted from the permissions a particular application uses when creating a file, typically 666 (rw-rw-rw-) permissions for files or 777 (rwxrwxrwx) permissions for directories.
So if your umask value is 002 (pretty common) then when you create a new file it will get permissions of 664 (rw-rw-r--).
You can modify the default umask value by running umask new_value, e.g. umask 044, although since you can only subtract from 666 for files, you won't be able to use it to make files executable by default. Also, to make it persistent, you would need to add that to your rc or profile config file.
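The subtraction described above can be checked directly in a throwaway directory (the octal output assumes GNU stat, as on Linux):

```shell
# 666 (file default) minus umask 027 gives 640 (rw-r-----):
d=$(mktemp -d)
(
  umask 027          # applies only inside this subshell
  touch "$d/demo"
)
stat -c '%a' "$d/demo"   # prints 640
rm -r "$d"
```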
Related:
Why doesn't umask change execute permissions on files?
How umask works
how to give execute permission by umask
Why does umask 077 not allow the user to execute files/directories?
| Why is execute permission denied for bin file I created? |
1,432,937,697,000 |
I'm testing redirecting a program's stdout to /dev/stdout in a Docker Alpine container.
I can't figure out why I can echo to stdout as my user, but not via the su command.
docker exec -it 779ddea6ec33 bash # root user
bash-4.4# su - http -c "echo 1 >> /dev/stdout"
-sh: can't create /dev/stdout: Permission denied
# why did the command above fail?
bash-4.4# whoami
root
bash-4.4# su - root -c "echo 1 >> /dev/stdout"
1
docker exec -u http -it 779ddea6ec33 bash # http user
bash-4.4$ whoami
http
bash-4.4$ echo 1 >> /dev/stdout
1
# but this command works
some ls:
bash-4.4# ls -lad /dev/stdout
lrwxrwxrwx 1 root root 15 Jul 7 16:47 /dev/stdout -> /proc/self/fd/1
bash-4.4# ls -lad /proc/self/fd/1
lrwx------ 1 root root 64 Jul 7 18:09 /proc/self/fd/1 -> /dev/pts/0
bash-4.4# ls -lad /dev/pts/0
crw--w---- 1 root tty 136, 0 Jul 7 18:09 /dev/pts/0
stat:
bash-4.4# stat /dev/stdout
File: '/dev/stdout' -> '/proc/self/fd/1'
Size: 15 Blocks: 0 IO Block: 4096 symbolic link
Device: 4dh/77d Inode: 8013573 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-07-07 18:09:33.000000000
Modify: 2019-07-07 16:47:08.000000000
Change: 2019-07-07 16:47:08.000000000
bash-4.4# stat /dev/pts/0
File: /dev/pts/0
Size: 0 Blocks: 0 IO Block: 1024 character special file
Device: 4eh/78d Inode: 3 Links: 1 Device type: 88,0
Access: (0620/crw--w----) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2019-07-07 18:15:28.000000000
Modify: 2019-07-07 18:15:28.000000000
Change: 2019-07-07 17:48:22.000000000
|
/dev/stdout is a special file.
Let's say I'm logged in as two users (user1 on tty1 and user2 on tty2).
/dev/stdout for user1 refers to /dev/tty1 and for user2 refers to /dev/tty2.
Here the http user is trying to write to /dev/stdout, which belongs to the current user (root):
bash-4.4# su - http -c "echo 1 >> /dev/stdout"
-sh: can't create /dev/stdout: Permission denied
In the other case http is writing to a file which it owns.
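You can watch this resolution happen with readlink (Linux procfs assumed):

```shell
# /dev/stdout is a fixed symlink into the *calling* process's fd table:
readlink /dev/stdout          # -> /proc/self/fd/1

# ...and /proc/self/fd/1 resolves to whatever fd 1 currently is
# (a pty, pipe, or file). After `su - http`, fd 1 is still root's
# pty device, which the http user may not be allowed to open.
readlink /proc/self/fd/1
```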
| stdout is accessible to user but not from su - user |
1,432,937,697,000 |
There is some strange behavior while excluding a path from find:
find ~ -not -path "~/sandboxes/*" -name 'some-file.vmdk'
gives:
/home/user/VMs/win/some-file.vmdk
find: ‘/home/user/sandboxes/debian7.amd64.buildd/root/...’: Permission denied
find: ‘/home/user/sandboxes/debian7.amd64.buildd/var/...’: Permission denied
What's wrong?
P.S. unfortunately -prune doesn't work too:
find ~ -path "/home/user/sandboxes/*" -prune -o -name 'some-file.vmdk'
gives more weird results:
/home/user/nemu_vm/win/some-file.vmdk
/home/user/sandboxes/debian7.amd64.buildd
/home/user/sandboxes/debian9.amd64.buildd
Useful link
|
Your command
find ~ -path "/home/user/sandboxes/*" -prune -o -name 'some-file.vmdk'
prints
/home/user/sandboxes/debian7.amd64.buildd
/home/user/sandboxes/debian9.amd64.buildd
because the default action when no action is supplied is to output the found pathnames. The above pathnames are found, and then those paths are pruned. Pruning a search path does not exclude these pathnames from being printed.
However, if you add -print to the very end, as in
find "$HOME" -path "$HOME/sandboxes" -prune -o -name 'some-file.vmdk' -print
then those pathnames would not be printed. This is because now you have an explicit action (the -print), so no default actions are triggered. The -print only applies to the right hand side of -o.
Note that the * is not needed, and that the variable $HOME is easier to work with than ~, especially in scripts.
Your first command,
find ~ -not -path "~/sandboxes/*" -name 'some-file.vmdk'
very likely does not work as ~ is not expanded within quotes.
Assuming you used $HOME instead, it also does not prune the search path, which means it would still enter ~/sandboxes, but it would never print any pathnames from beneath that path. Since it enters the directory, it would still give you the permission errors when it reaches the inaccessible directories.
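The difference between the default action and an explicit -print is easy to reproduce with a throwaway tree (the paths below are scratch directories, not your real ones):

```shell
top=$(mktemp -d)
mkdir -p "$top/sandboxes/sub" "$top/keep"
touch "$top/keep/some-file.vmdk" "$top/sandboxes/sub/some-file.vmdk"

# No explicit action: the default -print also fires for the pruned dir
find "$top" -path "$top/sandboxes" -prune -o -name 'some-file.vmdk'

# Explicit -print on the right of -o: only the real match is printed
find "$top" -path "$top/sandboxes" -prune -o -name 'some-file.vmdk' -print
```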
| find path excluding doesn't work when permission denied |
1,432,937,697,000 |
I test a Docker container (created from Nvidia CUDA image) started with the command:
docker run -i -t xxxxxx /bin/bash
I can see the root prompt, but still don't have privileges for some operations; for example, when I execute:
dmesg
I see "Permission denied". Why?
|
In modern Linux, being root does not necessarily mean having ultimate permissions. The capabilities mechanism provides more fine-grained control over permissions by breaking root's power into parts which may be granted to or revoked from a specific task individually, and Docker uses this mechanism.
By default, Docker drops many dangerous capabilities when it starts the containerized process, even if this process runs on behalf of the root user. This is because the host kernel is shared between all containers and the host system; thus, some system calls from a privileged containerized process may reveal information about the host (as in your case) or even affect the "outer world". That is why you see "Permission denied" even while you run dmesg(1) as root.
Internally, dmesg(1) calls syslog(2) system call to obtain the kernel log. As per man capabilities, this system call requires specific capability - CAP_SYSLOG:
CAP_SYSLOG (since Linux 2.6.37)
* Perform privileged syslog(2) operations.
See syslog(2) for information on which operations require privilege.
This capability is dropped in Docker containers by default, so, dmesg(1) in your container fails.
If you trust the vendor of the image, or just don't care a lot about security, you may start the container with additional capability (--cap-add syslog):
docker run -it --cap-add syslog nvcr.io/nvidia/cuda:9.0-devel-ubuntu16.04
This will solve your issue.
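A quick way to see which capabilities a process actually holds, inside or outside a container (the decode step assumes capsh from the libcap package is installed; the raw bitmasks work everywhere on Linux):

```shell
# Capability bitmasks of the current process, from procfs:
grep Cap /proc/self/status

# Decode the effective set into capability names, if capsh is available:
if command -v capsh >/dev/null; then
    capsh --decode="$(awk '/CapEff/ {print $2}' /proc/self/status)"
fi
```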
| No root permission in a docker container image |
1,432,937,697,000 |
By default, only root can create CPU sets (and manipulate tasks in existing ones):
$ cset shield -c0
cset: **> [Errno 13] Permission denied: '/cpusets//user'
cset: insufficient permissions, you probably need to be root
If I granted user trusted the right to run sudo cset, the commands he/she will run, e.g.
sudo cset shield -e command
would run as root, unless we do
sudo cset shield -e sudo -- -u trusted command
which is quite complex, especially regarding what environment is inherited by command through these layers...
Is there a way to grant trusted rights to manipulate CPU sets without changing identity?
|
According to the cpuset man page:
The permissions of a cpuset are determined by the permissions of the
directories and pseudo-files in the cpuset filesystem, normally
mounted at /dev/cpuset.
Using a small, sudo-callable script that creates a cpuset and adapts the ownership/permissions of the corresponding folder and the files in it, a user would be allowed to create his own cpuset.
Then the user can use and modify this cpuset directly without root permissions and create child cpusets for it.
See also https://serverfault.com/questions/478946/how-can-i-create-and-use-linux-cgroups-as-a-non-root-user .
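As a hedged sketch of that "small, sudo-callable script" idea — the mountpoint /dev/cpuset and the file names cpus/mems follow the (legacy) cpuset man page quoted above; on cgroup-based systems the paths and file names differ:

```shell
# Write the helper; it is meant to be invoked as:
#   sudo ./mkcpuset <name> <cpu-list>     e.g.  sudo ./mkcpuset mine 0-1
cat > mkcpuset <<'EOF'
#!/bin/sh
set -e
cs=/dev/cpuset/$1              # assumes the cpuset fs is mounted at /dev/cpuset
mkdir "$cs"
echo "$2" > "$cs/cpus"         # CPUs this set may use
echo 0    > "$cs/mems"         # memory node 0
# hand the pseudo-files to the invoking user, so no further root is needed
chown -R "${SUDO_UID:-0}:${SUDO_GID:-0}" "$cs"
EOF
chmod +x mkcpuset
```

After running it once via sudo, the user can move tasks into the set and create child cpusets without changing identity.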
| Granting cpusets permissions to non-root user |
1,432,937,697,000 |
pCloud is a cloud storage service that allows Linux users to mount their cloud storage inside of their home directory, appearing as:
/home/username/pCloudDrive/
As far as I can tell, the pCloudDrive directory is only accessible by the user and not by root.
Running ls -l inside the home directory (as root) displays:
d????????? ? ? ? ? ? pCloudDrive
and in pcmanfm (as root), pCloudDrive is described as "inode/x-corrupted type".
From my experience with Linux, root should be able to access everything, because every other file and directory belongs to it.
What I would like to know is:
How is pCloudDrive's true nature being occluded?
Is there a way to access the pCloudDrive directory and contents as root?
|
I have no direct experience with it, but it looks like pCloud is mounted as a FUSE file system. A FUSE file system is not accessible by root by design. The aim is to prevent mounted file systems from doing nasty things (see an explanation in libfuse's FAQ).
To let root, or other users, access a FUSE file system, you have to mount it with the options -o allow_root or -o allow_other. You also need to uncomment/add user_allow_other in /etc/fuse.conf, otherwise your user will not be able to set the aforementioned options.
Your experience may be the same of many other users, puzzled by an apparently non-intuitive behavior. See, as an example, this question on serverfault.
Of course, since pCloud appears not to be open source, there might actually be no allowed or easy way to change how it mounts its volume.
Obviously, root can access a FUSE file system given that it can impersonate other users. For instance:
# sudo -u your_user ls /home/your_user/fuse_mount_point
(executed as root) should just work.
| Is pCloudDrive really inaccessible to root? |
1,432,937,697,000 |
Today, one of my coworkers ran into trouble setting up a new intern to work. An account was set up by our IT group for the intern, but we were not able to access it. When I investigated, I found:
$ ls -ld /acct/c33408
drwxr-xr-x 1 root system 1 May 23 09:48 /acct/c33408
Clearly the admin who set up the account forgot to change owner and group. However what puzzles me is that the account directory is shown as both readable and searchable by everyone. However when I attempt to see what is in it:
$ ls /acct/c33408
ls: /acct/c33408: The file access permissions do not allow the specified action.
And similarly for trying to change to that directory.
Why will it not show me the contents of this directory when the permissions appear to allow it? I have checked and there is no ACL on this directory.
This is an AIX network, and the directory is on an NFS mount.
|
I suspect that the permissions on the root of the NFS filesystem (/acct/c33408) are stricter than the permissions of the stub directory. Before the NFS filesystem is mounted, you're able to inspect the stub directory permissions, but once you trigger the automounter to mount the NFS filesystem, the new permissions do not allow your user to access it.
I tested this locally (with a different directory structure) with similar results:
# stub directory permissions
user@host/$ ls -ld /u/testdir
drwxr-xr-x 1 root system 1 Jun 2 17:00 /u/testdir
# trigger automount as root
user@host/$ sudo -s
root@host/# cd /u/testdir
root@host/u/testdir# exit
# NFS permissions
user@host/$ ls -ld /u/testdir
drwx------ 2 root somegroup 4096 Dec 1 2015 /u/testdir
# ls on the NFS mountpoint as a regular user
user@host:/$ ls -l /u/testdir
ls: /u/testdir: Permission denied
total 0
| Why can I not see this "readable" directory? |
1,519,655,901,000 |
I'm using Linux Mint 18.3 and I have a school task to find all log files on one Linux machine without any error messages. I need to put together a command and explain it thoroughly. I think I have found a way to use find, but there is one access-denied message regarding gvfs that I'm not sure how to handle. Can you help me assemble a simple and smart command that doesn't just blindly filter out any error messages but only leaves out those places where it really makes no sense to look?
My first try:
# find / -type f -name '*.log'
seems to return all log files but the result includes:
find: '/run/user/1000/gvfs': Permission denied
Then I tried to leave out one folder:
# find / -type d \( -name run \) -prune -o -type f -name '*.log' -print
but it doesn't seem smart to leave out the whole run folder so started to specify, to narrow it to one specific path maybe. Found this post, and unix.stackexchange.com/a/77592 answer, and tried to leave out this specific path:
# find / -name '*.log' -path '/run/user/1000/gvfs' -prune -o -type f -name '*.log' -print
but it doesn't seem to work as I expect, returning still the same, among seemingly all log files:
find: '/run/user/1000/gvfs': Permission denied
Now I've run into an understanding problem: where is my thinking wrong, or is leaving out this one specific path the simplest and smartest thing to do after all?
|
Log files are stored in /var/log so there's really no need to run find on the entire root directory. If you insist on doing so and want to exclude that directory so that you don't get errors then your syntax should be:
find / -wholename /run -prune -o -type f -name '*.log' -print
That directory is the mountpoint for FUSE and doesn't contain any log files, and /run itself has directories (at least in CentOS, Fedora, and RHEL) inside which will give permission errors, so the above command will exclude the directory altogether. I don't have Mint installed, so you can edit the command to prune lower until you receive errors.
Also, one thing to keep in mind is that log files don't always end in .log such as messages, dmesg, cron, and secure.
| How to find all log files recursively while leaving out one specific access denied path? |
1,519,655,901,000 |
I'm using Linux Mint 18. Let's say there are several shared directories. Each directory belongs to a specific group. One of these is foo and must be shared by Pippo and Pluto, so I've created the group foo and added Pippo and Pluto to it. All contents created inside foo and its subdirectories must be accessible to both Pippo and Pluto regardless of whether the creator is Pippo or Pluto.
The problem is that when creating a new file or directory, the group ID of the content is set to Pluto or Pippo (depending on who created the content) and not to foo, even though I've changed the group of the foo folder and its subdirectories and files to foo.
What's the way to give to new contents the group foo?
EDIT
Outside the foo folder every file created by Pippo or Pluto must have the group set respectively to Pippo and Pluto.
|
The default behaviour (as you have found out) is to give a file the same UID and GID as the owner. If you want to override that, you need to set the setgid bit on the directory that stores the file. This will not affect already-existing files and directories, so you'll need to fix the permissions on each directory underneath as well (as @cas mentioned, you probably want to allow the members of the foo group to modify each others' work (and view the directories), so you'll want to add those permissions too.)
To fix this (assuming you've already created the directory structure), run the following from the root of the structure:
find . -type d -exec chmod g=rwsX {} \;
chgrp -R foo *
I used find to select just the directories, as setting the setuid/setgid bits on files has a different meaning. Of course, if you're creating a new directory, you just have to run the chmod/chgrp commands without using find.
EDIT: Fixed the chmod permissions.
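A self-contained check of the setgid behaviour described above (using your own primary group as a stand-in for foo):

```shell
d=$(mktemp -d)/shared
mkdir "$d"
chgrp "$(id -gn)" "$d"     # stand-in for: chgrp foo
chmod 2770 "$d"            # the leading 2 is the setgid bit
stat -c '%a %A' "$d"       # prints: 2770 drwxrws---

touch "$d/newfile"
# the new file inherits the directory's group, not the creator's primary group
stat -c '%G' "$d/newfile"
```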
| Permission on shared directories |
1,519,655,901,000 |
I need to discover what permissions a user has on a CentOS system. Is it possible to find which directories the user can access and which commands he can execute? This doesn't refer to ACLs.
|
To be able to execute a file, the file must
Be owned by the user and be executable by the user, or
Belong to the same group as the user and be executable by that group, or
Be executable by "others".
The following find command finds such files in the current directory (for the current user and their primary group only):
uid=$( id -u ) # the user's ID
gid=$( id -g ) # the primary group ID
find . -type f \( \
\( -user "$uid" -perm -0100 \) -o \
\( -group "$gid" -perm -0010 \) -o \
-perm -0001 \) -print
-0100 means "at least executable by user", and -0010 and -0001 are the equivalent for "group" and "others".
The same criteria hold for the accessibility of folders (if I'm not entirely mistaken), so changing -type f to -type d should give you the accessible folders. One may additionally want to check the folders for the "read" bit too, obviously (-0500, -0050 and -0005 instead of the permissions above).
For folders, this may be a solution:
find . -type d \( \
\( -user "$uid" -perm -0500 \) -o \
\( -group "$gid" -perm -0050 \) -o \
-perm -0005 -o -prune \) -print
I've added -prune at the end so that we don't descend into folders that the user wouldn't be able to access anyway.
Change the dot to a slash to search on the whole system.
It's also easy to turn it around to only print the names of e.g. folders that the user can't access:
find . -type d \( \
\( -user "$uid" -perm -0500 \) -o \
\( -group "$gid" -perm -0050 \) -o \
-perm -0005 -o -print -prune \)
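The file test above can be exercised against a scratch directory to confirm it behaves as described (scratch file names are illustrative):

```shell
uid=$( id -u ); gid=$( id -g )
d=$(mktemp -d)
touch "$d/runnable" "$d/plain"
chmod u+x "$d/runnable"       # executable by its owner (us)
chmod a-x "$d/plain"          # no execute bits at all

find "$d" -type f \( \
    \( -user "$uid" -perm -0100 \) -o \
    \( -group "$gid" -perm -0010 \) -o \
    -perm -0001 \) -print      # prints only .../runnable
```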
| How to retrieve permissions of a user |
1,519,655,901,000 |
I have a shell script called copy.sh in my web root directory with the following lines in it:
rsync -rzv -e 'ssh -p 199' test.txt [email protected]:/home/testuser/txt
ssh [email protected] -p 199 . /home/testuser/lty.sh > ltylog.txt 2>&1
This script first copies the text file to my remote server and then runs a shell script to sync that file across two other servers. I generated an ssh public/private key pair for testuser on my current server and copied it to the remote server. I can ssh to the remote server without any password:
ssh [email protected] -p 199
I can run the above command and get into my remote server. On my host machine, when I'm logged in as testuser, I can execute that shell script without any issues: it copies the files and runs the remote script. I set my script permissions to 777 for now. But I need to run this script from PHP using shell_exec like this:
<?php
shell_exec('. /var/www/html/copy.sh');
echo "<pre>";
echo file_get_contents("ltylog.txt");
echo "<pre>";
?>
But when I run this PHP, nothing happens, because I think the user executing the shell script is the Apache user instead of testuser. Yet when I shell_exec pwd or something instead of the shell script, it runs flawlessly. I disabled PHP safe mode and gave 777 permissions to all the files, but I still can't get this to work.
P.S. I know it's a big security risk putting shell scripts in the web root with these permissions, but this is not a prod system and I'm testing a small web application for our internal purposes. The server executing this has no internet access at all. Could someone help me fix this? I'm looking for a small solution, since this small web app is only used by one or two persons. Thanks.
|
If I understood your issue correctly and you don't care about security risks, you can install sudo, add www-data (www-data is the default user used by nginx/apache) to the sudoers file with all permissions and no password required, and use it to execute a command as another user.
You can do it like this:
Install sudo:
apt-get install sudo
Then add the user to the config:
nano /etc/sudoers
Add this in the last line:
www-data ALL=(ALL) NOPASSWD: ALL
And finally you can edit your php to perform a command with sudo:
shell_exec('sudo -u testuser /var/www/html/copy.sh');
@Edit
I have managed to make it work on my server.
Just follow these steps and it should work for you as well.
Try replacing your PHP code with this:
<?php
echo shell_exec('/bin/sh /var/www/html/copy.sh'); #this will display the result in your browser
echo "<pre>";
echo file_get_contents("ltylog.txt");
echo "<pre>";
?>
Then make sure that www-data has access to copy.sh file:
You can either give it a 777 chmod like this:
chmod 777 /var/www/html/copy.sh
or you can make the file belong to user www-data (used by apache):
chown www-data:www-data /var/www/html/copy.sh
but if you choose to use the second option, make sure that www-data still can execute the file by applying chmod like so:
chmod 755 /var/www/html/copy.sh
Make the file executable:
chmod +x /var/www/html/copy.sh
change the copy.sh code to this:
rsync -rzv -e 'ssh -p 199' test.txt [email protected]:/home/testuser/txt
ssh [email protected] -p 199 /bin/sh /home/testuser/lty.sh > ltylog.txt 2>&1
Finally, make sure that testuser has access to the following files:
/home/
/home/testuser/
/home/testuser/lty.sh
/home/testuser/ltylog.txt
If you don't care about security, you can simply type this in the console of a remote server
chmod 777 -R /home/
Or you can check each file manually to make sure that the permissions are set right.
| Enabling shell script to run as different user with PHP |
1,519,655,901,000 |
I have created an LVM partition in myvol and a file system on it:
sudo lvcreate -L 10G myvol -n part1
sudo mkfs.ext4 /dev/mapper/myvol-part1
The new partition appeared in the file manager, but when I open it I can't create or delete files without root privileges. I tried to remount it with different commands, but in the end it was accessible only by root. How can I mount it for a user?
|
You can change the mount folder's permissions so that other users are able to access it.
chown -R user:group <folder name>
P.S. Mount the partition onto the folder before running chown.
| LVM partition mounts only for root |
1,519,655,901,000 |
I can't change permissions on files which are mounted with cifs from windows share. I can only change write permission.
I mounted share using:
//10.0.0.1/share on /some/path/to/folder
type cifs (rw,username=usr,password=passwd,domain=10.0.0.1,uid=32,gid=1001,
iocharset=utf8,dir_mode=0770,sec=ntlm,_netdev)
Where uid is my username.
When I try to change permission of some file inside share, like /some/path/to/folder/simple/file.inside to 777:
sudo chmod 777 file.inside
The permissions don't change to 777; instead, ls -l shows -rwxr-xr-x
When I change it to 000, the result is: -r-xr-xr-x
The only difference between those two is that the owner is not allowed to write. I am confused why that is and how to fix it.
|
https://www.samba.org/samba/docs/man/manpages-3/mount.cifs.8.html#id2532725
The core CIFS protocol does not provide unix ownership information or mode for files and directories. Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. Attempting to change these values via chmod/chown will return success but have no effect.
Therefore, it's not implemented yet.
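Since chmod is a no-op on such a mount, the practical way to get the permissions you want is to pick them at mount time with the file_mode= and dir_mode= options of mount.cifs (the octal values below are examples; the rest mirrors your mount options):

```
# e.g. in /etc/fstab — file_mode/dir_mode set the permissions every file
# and directory will appear to have on this mount:
//10.0.0.1/share  /some/path/to/folder  cifs  username=usr,password=passwd,uid=32,gid=1001,iocharset=utf8,file_mode=0664,dir_mode=0775,sec=ntlm,_netdev  0  0
```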
1,519,655,901,000 |
Let's say I was told to make changes to the sudoers file with the following... What does that mean, and how do I actually do it?
www-data ALL = NOPASSWD: /bin/rm /etc/vsftpd/vusers/[a-zA-Z0-9]*
I believe that it's setting the premissions for those folders, and I think I use the visudo command to do it... but I'm not sure what the www-data means or anything like that. Can anyone shed some light on this for me?
|
The first word in the line indicates who this line applys to. www-data is a user, you can find it in /etc/passwd.
NOPASSWD means this user doesn't have to authenticate when calling sudo. It is mostly used when a process, rather than a human, will be calling sudo.
The next part is what www-data has access to.
So this line means that the user www-data can execute /bin/rm on the files found in /etc/vsftpd/vusers/[a-zA-Z0-9]* as root without supplying their password.
| How to interpret line in sudoers |
1,519,655,901,000 |
I want to access some sites and files which I don't have access to on my home computer. However, I can ssh to a remote server which does have the ability to access these sites and files. Is there a way in which I can use the permissions of the remote server to browse the internet on my home computer?
Any help would be greatly appreciated.
|
Absolutely, this is what the -D option is for:
ssh -D 12345 -N user@host
... will establish a SOCKS proxy that will use the remote server's Internet connection and will be mapped on localhost's port 12345. The -N option is not necessary; it keeps ssh from opening a shell.
Now you have to configure your Internet browser to use that SOCKS proxy.
Maybe it is best to have a profile dedicated to this proxified connection, and use it only when necessary. With firefox you may want to create a special profile, named e.g. "socks", configured to use the SOCKS proxy. You then can call it from the command line with firefox -p socks -no-remote.
There are also Firefox extensions, like e.g. FoxyProxy, that allow you to switch temporarily to a predefined proxified connection to the Internet.
With Chrome (the example below is with the Ubuntu's derivative called chromium), you can also open a temporary browsing session with some special proxy settings, like:
chromium-browser --temp-profile --proxy-server="socks://127.0.0.1:12345"
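If you use the tunnel often, the same thing can be kept in ~/.ssh/config; the host alias below is a made-up example:

```
# ~/.ssh/config
Host socks-tunnel
    HostName host
    User user
    DynamicForward 12345    # equivalent to ssh -D 12345
```

Then `ssh -N socks-tunnel` brings the proxy up.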
| Can I use the permissions of a remote server through ssh? |
1,519,655,901,000 |
I have two users on my Mac. Both are me, but one is work mode, the other is non-work mode. I have an ongoing issue with installing via homebrew.
$ brew install x
Error: Can't create update lock in /usr/local/var/homebrew/locks!
Fix permissions by running:
sudo chown -R $(whoami) /usr/local/var/homebrew
Of course, executing this suggested code solves the problem -- until I need to brew install using my other user, then I need to change ownership again. How can I set the permissions so that both users can install with homebrew?
|
I don't know about homebrew in particular, but in theory you could use sudo to install software. Then files are accessed with root privileges, which may or may not be what you want.
In general though, if you want multiple unprivileged users to be able to write to the same location, it isn't the owner of that location that you want to change, but its group. You could create a group called homebrewers:
sudo dscl . -create /Groups/homebrewers
You'll then want to find a group ID that doesn't exist. For this I used:
dscl . -list /Groups \
| sed 's@^@/Groups/@' \
| ( while read grp; \
do dscl . -read "${grp}" PrimaryGroupID; \
done ) \
| sort -snk 2
I found that the highest group number in use was 501, so 4200 was available.
So, I set the PrimaryGroupID to 4200 and the Password to * (unused). Do not forget to set these! If you forget, your groups list will be corrupted and you will likely have to boot into single-user mode to correct it.
sudo dscl . -append /Groups/homebrewers PrimaryGroupID 4200
sudo dscl . -append /Groups/homebrewers Password '*'
Then add your two users to that group. The example here uses shortnames (from whoami) of user1 and user2:
sudo dscl . -append /Groups/homebrewers GroupMembership user1
sudo dscl . -append /Groups/homebrewers GroupMembership user2
Note that you may have to log out and log back in for these changes to take effect.
Finally, you'll want to change the directory's group to be homebrewers and its permissions to be group-writable:
chown -R :homebrewers /usr/local/var/homebrew
chmod -R g+w /usr/local/var/homebrew
If you want, you can even change the owner to root to no ill effects:
sudo chown -R root /usr/local/var/homebrew
All commands shown here were tested on Mac OS X 10.4.11 on a PowerBook G4. Much has changed since the move to Intel, so the commands as shown may not work exactly as given on a newer release. The underlying concepts will remain the same.
| Multiuser Homebrew privileges |
1,519,655,901,000 |
I'm trying to create a ramfs mount point in /tmp/ram using:
Created an entry in /etc/fstab with the following line:
ramfs /tmp/ram ramfs rw,nodev,noexec,nosuid,async,user,noauto 0 0
(I've also tried replacing user with users. Also tried using x-mount.mkdir=0770)
Created a directory with permissions 0775 at /tmp/ram using normal user (not root).
Mounted the ramfs filesystem using the command mount /tmp/ram using normal user.
But after the mounting - the directory is always with the ownership user=root, group=me (me is the username/groupname of my normal user) and permissions 0755, which doesn't allow me to create a file in the directory.
Any idea how to proceed?
I'd like to mount that filesystem using normal user - not root...
I don't want to use root privileges at all for this mounting, that's why there's a line at /etc/fstab.
|
It happens because the root directory of a mount point is provided already by the mounted filesystem driver. Thus, the inode parameters (incl. permission settings) are coming from it, and they overlap the original settings of the /tmp/ram.
Some filesystems provide a feature to fix or change their permissions via a mount parameter, although it exists for a different reason: if a filesystem doesn't carry adequate permission information (vfat), or its security model is too alien to unix (cifs), this is a way for the sysadmin to provide one at mount time. Ramfs doesn't have this feature.
The "user" parameter only enables the mounting or unmounting of the fs by users, but doesn't change its security parameters. It is probably not your intention (I think you want to produce a very fast tmp reachable by all of the users concurrently).
Note, a simple optimization: instead of ramfs, you could also use tmpfs. Tmpfs content is also mainly in RAM, but it can be swapped out if unused. Ramfs content is always in physical memory. Tmpfs can be parametrized as you wish; for example, mode=1777 would make it behave like /tmp (everybody can create/delete files, but only their own).
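For reference, a sketch of what the tmpfs variant could look like in /etc/fstab (the size here is an arbitrary example):

```
# tmpfs variant: swappable, world-writable with sticky bit, like /tmp
tmpfs  /tmp/ram  tmpfs  rw,nodev,nosuid,size=512M,mode=1777  0  0
```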
You have to run the chmod/chown commands after the mount happened. The linux mount tools don't provide a facility for that easily.
I suggest making an init script for that in /etc/init.d (the existing init scripts show the syntax, so it can be done easily) and doing the mount/chmod on reboots.
| mount ramfs for all users |
1,519,655,901,000 |
I have installed a Red Hat 6.8 machine, on which I have installed a certificate on the default keystore 'cacerts' successfully. When trying to invoke a software which is using SSL and is trying to access the keystore 'cacerts' (invoked as applicative user - not root), I receive the following error message: 'java.io.FileNotFoundException: Permission denied'.
From my research online, any user should have access to the 'cacerts' keystore (although the owner of the file is 'root').
|
Use sudo. That prevents permission escalation
| Permission denied to cacerts file - SSL |
1,519,655,901,000 |
A user has use of an application running on a Linux server. The application provides the user with an API that allows reading and writing files on the server, but does not offer any means of executing a file. Is that enough to ensure the user cannot execute commands on the server?
The underlying filesystem is not mounted with noexec.
The user can choose which file to read and write, and can create new files with arbitrary names. The user can delete files.
The application does not have access to "system" files, running as a relatively standard unprivileged user account similar to what a desktop user would have.
|
Arbitrary names at arbitrary locations, limited only by filesystem permissions, can probably be escalated into arbitrary code execution. There are a lot of files in $HOME that are automatically run upon login, for example. And new ones are added (e.g., all the systemd user session stuff is fairly recent). Or maybe $HOME/bin is by default put at the front of $PATH.
Other good targets for an attacker would be ~/.ssh; I wasn't supposed to have login access, but will I once I install an authorized_keys file? You can of course disable this via config in /etc/ssh/, but that's just one program. There are probably others.
I have no doubt you could secure this, but it'd be a lot of work (and you'd have to be very careful on OS upgrades!)
If however you can limit it to arbitrary files, but only in certain directories (say, only in /srv/yourapp/ and subdirs), that is safe (provided it's programmed correctly).
| If a user can only read and write files, is that sufficient to prevent execution? |
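The execute-bit point can be sketched on a throwaway file: without any x bit the file cannot be run directly, though note that feeding it to an interpreter still works, which is exactly why the auto-run files in $HOME are the real danger:

```shell
# A file the API user could create: readable and writable, but no execute bit
mkdir -p exec_demo
printf '#!/bin/sh\necho pwned\n' > exec_demo/payload
chmod 600 exec_demo/payload

# Direct execution is refused: execve() requires an execute bit, even for root
if ./exec_demo/payload 2>/dev/null; then
    echo "executed"
else
    echo "execution denied"
fi

# ...but an interpreter will run the contents regardless of the x bit
sh exec_demo/payload    # prints: pwned
```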
1,519,655,901,000 |
I am writing a script that downloads packages to a specific folder.
However, I want to make it possible for all users to download packages to that folder and use any packages installed there. How do I do that?
I want to check/change the permission for the /usr/local/src folder. I don't know how I've to use the if/else statement properly. In text it'll look like (I guess): if stat/permission of src folder isn't 777 then chmod to 777
|
You can use stat -c "%a" /usr/local/src to get the full permissions. But you should consider 1777 instead of 777.
So something like
if [ "$(stat -c '%a' /usr/local/src)" == "777" ]
then
# something
else
# something else
fi
In answer to your other question, if the permissions are already 777 then there will be no effect.
EDIT: corrected typos. @Alexej Magura why would I use double brackets? As far as I'm aware that would turn it into an arithmetic expression ..
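A runnable sketch of that check against a scratch directory (standing in for /usr/local/src, which would normally need root to change):

```shell
# Scratch directory standing in for /usr/local/src
mkdir -p perm_demo
chmod 755 perm_demo

if [ "$(stat -c '%a' perm_demo)" = "777" ]; then
    echo "permissions already open"
else
    echo "fixing permissions"
    chmod 1777 perm_demo    # 1777: world-writable plus the sticky bit
fi

stat -c '%a' perm_demo      # prints: 1777
```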
| Check/change folder permission in shell script |
1,519,655,901,000 |
I've created a user 'www' and added it to the 'www-data' group. I've set the home directory of 'www' to /var/www/ also. I would like to use 'www' to transfer files in and out of my web server by FTP
The problem is when I run the command:
sudo chown -R www-data:www-data /var/www/
..I don't have permission to write files via FTP
However when I run:
sudo chown -R www:www /var/www
..I have full FTP access but get a 'Forbidden' message in my browser.
Any advice on how to get full FTP access including all subfolders would be really appreciated.
|
That means you already have a www-data user, which Apache uses, and it should have the necessary permissions in /var/www.
The simplest solution would be to use that same user, but you could also assign the www-data group to your new user and make sure the /var/www directory structure allows the group to write to it:
chown -R www-data:www-data /var/www
chmod -R ug+rw /var/www
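What the ug+rw part does to the mode bits can be sketched on a throwaway file, no web server required:

```shell
mkdir -p www_demo
touch www_demo/index.html
chmod 600 www_demo/index.html       # start owner-only

chmod ug+rw www_demo/index.html     # add read+write for user and group
stat -c '%a' www_demo/index.html    # prints: 660
```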
| Ubuntu server 16.04 - Get full FTP access to /var/www/ |
1,519,655,901,000 |
I was careless for just a second and managed to type (being logged as root) on my Ubuntu system:
chown foobar /*
chown foobar /*/*
What is the possible extent of the damage, and how can I revert it?
|
It seems like there isn't really much that needs fixing, at least on a fresh install of Ubuntu 15.10. Of course, if you've installed stuff, you will have files and directories that I don't. However, I believe this output will show the proper permissions to keep Ubuntu running. Some programs may be broken because of the command you ran, but Ubuntu will at least run, and you can go about reinstalling applications from there.
If something doesn't work, try setting the owner to the group. It might not have been the same originally, but it's worth a shot if the app isn't working.
By running shopt -s extglob; find /!(proc|tmp|dev|run|root|lost+found) -maxdepth 1 -ls | awk '$5!="root" || $6!="root"' (Thanks @terdon), I came up with the following:
131226 4 -rw-r----- 1 root shadow 824 Jun 21 14:34 /etc/gshadow
131284 4 -rw-r----- 1 root shadow 1212 Jun 21 14:34 /etc/shadow
131095 4 drwxr-s--- 2 root dip 4096 Oct 21 2015 /etc/chatscripts
131103 4 drwxr-xr-x 5 root lp 4096 Jul 19 07:00 /etc/cups
find: `/mnt/hgfs': Protocol error
1064478 4 drwxr-xr-x 16 zw zw 4096 Jul 19 07:26 /home/zw
655571 36 -rwxr-sr-x 1 root shadow 35536 Apr 22 2015 /sbin/unix_chkpwd
655516 36 -rwxr-sr-x 1 root shadow 35576 Apr 22 2015 /sbin/pam_extrausers_chkpwd
150670 4 drwxrwsrwt 2 root whoopsie 4096 Oct 21 2015 /var/metrics
150669 4 drwxrwsr-x 2 root mail 4096 Oct 21 2015 /var/mail
150668 4 drwxrwxr-x 14 root syslog 4096 Jul 19 07:00 /var/log
150664 4 drwxrwsrwt 2 root whoopsie 4096 Oct 21 2015 /var/crash
150666 4 drwxrwsr-x 2 root staff 4096 Oct 19 2015 /var/local
The command excludes /root and /lost+found, as everything under /root and /lost+found is owned by root. Make sure to set the ownership accordingly.
The command excludes /proc, /tmp, /dev and /run as these directories contain files that are reset upon reboot.
/mnt and /media may have had special permissions set on subdirectories. A reboot may fix the ones under /media, but I'm not sure about /mnt.
There aren't very many directories you need to pay attention to, since most of them are owned by root. If you have any extra directories under /*/* that I don't have, try setting their owners to root or their corresponding groups. For everything that does match, just fix the permissions.
I would reverse the two commands by running what you ran, but replacing foobar with root. Then you can fix the other permissions afterward.
| Damage by chown command at / |
1,519,655,901,000 |
I am trying to get my CentOS v7 server to run IPv6. Root works, it can ping using "ping6 ipv6.google.com", and ifconfig looks great; I see the lines:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 149.202.217.90 netmask 255.255.255.0 broadcast 149.202.217.255
inet6 fe80::ec4:7aff:fec4:d912 prefixlen 64 scopeid 0x20<link>
inet6 2001:41d0:1000:1c5a:: prefixlen 64 scopeid 0x0<global>
But as an unprivileged user, I can't ping ipv6 and I don't see the inet6 addresses in ifconfig.
What is happening? Why can't my users see the same interfaces, setup in the same way as root?
[edit]
As requested, ip a s and ping6 -c1 ipv6.google.com output:
root
[root@rabbit ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 2001:41d0:1000:1c5a::/64 scope global
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
link/ether 5e:63:58:37:5d:30 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether 32:ad:47:94:1f:b1 brd ff:ff:ff:ff:ff:ff
4: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 32
link/ether 7e:52:08:a5:1a:dd brd ff:ff:ff:ff:ff:ff
5: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 32
link/ether 3e:ba:b9:d1:09:3b brd ff:ff:ff:ff:ff:ff
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 0c:c4:7a:c4:d9:12 brd ff:ff:ff:ff:ff:ff
inet 149.202.217.90/24 brd 149.202.217.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:41d0:1000:1c5a::/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::ec4:7aff:fec4:d912/64 scope link
valid_lft forever preferred_lft forever
7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 0c:c4:7a:c4:d9:13 brd ff:ff:ff:ff:ff:ff
8: teql0: <NOARP> mtu 1500 qdisc noop state DOWN qlen 100
link/void
9: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN
link/ipip 0.0.0.0 brd 0.0.0.0
10: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN
link/sit 0.0.0.0 brd 0.0.0.0
11: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN
link/tunnel6 :: brd ::
[root@rabbit ~]# ping6 -c1 ipv6.google.com
PING ipv6.google.com(par03s15-in-x0e.1e100.net) 56 data bytes
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=6.61 ms
--- ipv6.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.615/6.615/6.615/0.000 ms
user (pryormic)
[pryormic@rabbit ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 2001:41d0:1000:1c5a::/64 scope global
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
link/ether 5e:63:58:37:5d:30 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
link/ether 32:ad:47:94:1f:b1 brd ff:ff:ff:ff:ff:ff
4: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 32
link/ether 7e:52:08:a5:1a:dd brd ff:ff:ff:ff:ff:ff
5: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 32
link/ether 3e:ba:b9:d1:09:3b brd ff:ff:ff:ff:ff:ff
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 0c:c4:7a:c4:d9:12 brd ff:ff:ff:ff:ff:ff
inet 149.202.217.90/24 brd 149.202.217.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:41d0:1000:1c5a::/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::ec4:7aff:fec4:d912/64 scope link
valid_lft forever preferred_lft forever
7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 0c:c4:7a:c4:d9:13 brd ff:ff:ff:ff:ff:ff
8: teql0: <NOARP> mtu 1500 qdisc noop state DOWN qlen 100
link/void
9: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN
link/ipip 0.0.0.0 brd 0.0.0.0
10: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN
link/sit 0.0.0.0 brd 0.0.0.0
11: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN
link/tunnel6 :: brd ::
[pryormic@rabbit ~]$ ping6 -c1 ipv6.google.com
ping: icmp open socket: Operation not permitted
[edit2]
I've added ifconfig output below:
root
[root@rabbit ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 149.202.217.90 netmask 255.255.255.0 broadcast 149.202.217.255
inet6 fe80::ec4:7aff:fec4:d912 prefixlen 64 scopeid 0x20<link>
inet6 2001:41d0:1000:1c5a:: prefixlen 64 scopeid 0x0<global>
ether 0c:c4:7a:c4:d9:12 txqueuelen 1000 (Ethernet)
RX packets 12131475 bytes 2122218137 (1.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1113935 bytes 690582284 (658.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 0c:c4:7a:c4:d9:13 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6632 bytes 1169904 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
inet6 2001:41d0:1000:1c5a:: prefixlen 64 scopeid 0x0<global>
loop txqueuelen 0 (Local Loopback)
RX packets 332704 bytes 448694222 (427.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 332704 bytes 448694222 (427.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
user (pryormic)
[pryormic@rabbit ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 149.202.217.90 netmask 255.255.255.0 broadcast 149.202.217.255
ether 0c:c4:7a:c4:d9:12 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 0c:c4:7a:c4:d9:13 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
The following command should give unprivileged users the capability to use ping6. As root, run
setcap cap_net_raw+ep /usr/bin/ping6
| Unprivileged ping6 not working |
1,519,655,901,000 |
Output of ls -l:
drwxrwxrwx 16 btsync btsync 4096 Feb 25 15:41 documents
I can cd into the folder no problems. Now:
sudo chmod 776 documents
I can no longer cd into the folder:
bash: cd: documents: Permission denied
Even though:
$ groups $(whoami)
lp wheel network video audio storage btsync users
What is going on here? I belong to the group that owns the folder, so I should be able to cd into it.
|
If you added this user to the new group recently, note that new group memberships only take effect after logging in again.
Command
groups
shows the groups available in the current shell, but
groups $(whoami)
returns the groups that you will have after logging in again. You can also pick up the new group in the current session with exec newgrp btsync.
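The same split can be seen with id: without an argument it reports the current process's groups (fixed at login time), while with a user name it consults the group database:

```shell
# Groups of the current process (what the running shell can actually use)
id -Gn

# Groups according to the user database (reflects recent usermod changes)
id -Gn "$(id -un)"

# The primary group is always among the process's groups
id -Gn | grep -qw "$(id -gn)" && echo "primary group present"
```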
| No permissions? |
1,519,655,901,000 |
I have several samba shares, one of which is used for storing some backups.
I want to set a disk space quota on said share, as the ZFS volume the share resides on is huge and multi-purpose... I don't want the backups taking up more than 20Tb.
Can anyone point me in the right direction on how I can achieve this?
Thanks in advance!
|
To set the quota for a filesystem:
zfs set quota=20TB poolname/backup-filesystem
To query the current quota setting:
zfs get quota poolname/backup-filesystem
Note that quotas can only be set on ZFS filesystems (i.e. made with zfs create pool/fsname), not on subdirectories (made with mkdir).
Subdirectories of a ZFS filesystem are included within that filesystem's quota usage. Child filesystems fall under their parent's quota (if any) unless given their own with a zfs set quota=size pool/parent/child command, and even then that is only an additional restriction: child filesystems, like snapshots, are still included in the parent's total quota usage.
| Setting a disk space quota on a samba share residing on an ZFS pool |
1,519,655,901,000 |
I have been trying now for several days to get my new server up and running. I am running CentOS with MergerFS to pool my drives and samba to host to my windows machines. All of this running in Proxmox as well.
Over the weekend I got a couple of hard drives to start my server out with and am unable to get the shares to work correctly with samba. I have narrowed down the issue and it is being caused by labels. SELinux requires my mergerfs pool to have a label of samba_share_t but for some reason, mergerfs is not letting me change it from fusefs_t. All of my drives are ext4, I am seeing a lot of posts online that say this can be caused by using ntfs but that can't be my issue.
Things I have tried:
I have attempted to modify the fstab to include an option to set the
context to samba_share_t, but when I do that I get an error saying that
fuse (used by mergerfs) init does not support the "context" option.
I have tried manually changing the label of the pool with chcon and I
get an error that the operation is not supported.
I have tried adding the pool folder with semanage and then manually running
restorecon and it still doesn't make a change to that specific folder.
Windows being able to see the folder but not able to access it is such a tease, so close yet so far away. If possible, I would like to not have to disable SELinux.
|
I was able to resolve the issue with just a simple setting change
setsebool -P samba_share_fusefs=1
and then restarting the smb service.
| SELinux + MergerFS (fuse) not working well together |
1,519,655,901,000 |
I lost my drive with my media center on it, and I realized that I had an old backup lying around.
But when I went to restore my backup, I realized that all of the files had been copied with a Windows utility, and more than likely my file permissions are all messed up.
It will be a few hours before I'll be able to attempt to boot to this, but is there anything I can do, even manually, to restore this to a bootable condition? I assume, at the least, that the execute bit has been unset on any and all files.
|
Sorry, it would be very hard to restore your system from this backup. You didn't just lose file permissions, you also lost file ownership and symbolic links. With so much lost, restoring manually would be an arduous process, there'd be a lot to do manually and it would be difficult to ensure you have them all.
It would be far easier to do a new, clean installation, and then restore selected configuration files (and any data files, of course) from your backup. If your backup at least preserved timestamps, you should be able to find the files that didn't come with the original system through their timestamps (you can use something like find /path/to/backup -type f -newer SOMEFILE to list the files that were modified more recently than SOMEFILE); this may mix some software updates with your changes. In principle, the files you modified should be under /etc or under home directories. You may have installed things under /opt or /usr/local as well.
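The find -newer trick can be sketched with fabricated timestamps (GNU touch -d); the file names here are invented:

```shell
mkdir -p backup_demo
# An "original system" file, the reference file, and a later edit
touch -d '2020-01-01' backup_demo/system.conf
touch -d '2021-01-01' backup_demo/SOMEFILE
touch -d '2022-01-01' backup_demo/my-edit.conf

# Only files modified more recently than the reference are listed
find backup_demo -type f -newer backup_demo/SOMEFILE    # prints: backup_demo/my-edit.conf
```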
| Restore file permissions after Windows copy |
1,519,655,901,000 |
After the website of a client has been hacked, I found some files with the following permissions set
What exactly do S and T stand for?
Also, which command do I use to set those permissions?
Thanks in advance
|
There are many information sources about the topic, but reading here, on wikipedia and other similar questions asked on StackExchange like the following :
https://askubuntu.com/questions/88391/whats-an-upper-t-at-the-end-of-unix-permissions
https://superuser.com/questions/509114/what-does-directory-permission-s-mean-not-lower-case-but-in-upper-case
Uppercase S in permissions of a folder
we can assume that:
Sticky Bit
is mainly used on folders to prevent other users from deleting or moving
a folder's contents even though they have write permission on the folder.
If the sticky bit is enabled on a folder, its contents can be deleted or
moved only by the owner who created them and by the root user.
It can also be set on individual files, as in your case.
How to set Sticky Bit
# symbolic way :
chmod +t /path/to/folder/or/file
# Numerical way :
chmod 1757 /path/to/folder/or/file
If you see T (uppercase) in the permission string, it indicates that the sticky bit is set but the execute permission is not set in the all-users (other) portion. If the sticky bit t is lowercase, the execute permission for all users is enabled as well.
SetGID / SetUID (set-group-ID, set-user-ID) bit
On most systems, if a directory's set-group-ID bit is set, newly
created subfiles inherit the same group as the directory, and newly
created subdirectories inherit the set-group-ID bit of the parent
directory.
Same logic for the SetUID bit.
How to SetGID / SetUID
# add the setuid bit
chmod u+s /path/to/folder/or/file
# remove the setuid bit
chmod u-s /path/to/folder/or/file
# add the setgid bit
chmod g+s /path/to/folder/or/file
# remove the setgid bit
chmod g-s /path/to/folder/or/file
Similarly to the above, if you see S (uppercase), the setgid bit is set but the group execute bit isn't. If the s is lowercase, both the setgid bit and the group execute bit are set.
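The uppercase/lowercase distinction is easy to verify with stat on a scratch file (the sticky-bit case is shown; setgid behaves the same way in the group slot):

```shell
touch sticky_demo
chmod 0644 sticky_demo
chmod +t sticky_demo
stat -c '%A' sticky_demo    # prints: -rw-r--r-T  (sticky set, others' x NOT set)

chmod o+x sticky_demo
stat -c '%A' sticky_demo    # prints: -rw-r--r-t  (sticky set, others' x set)
```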
| File Permissions Sticky&User Execution |
1,519,655,901,000 |
I'm trying to pre-emptively avoid permission issues on my portable USB hard drive. The plan is to put a couple of Virtual Machines on the portable hard drive, and use it between two PCs both running Mint 17.2 + VirtualBox.
Both PCs are mine, and have been configured with the same username and password (not sure if that makes any difference).
I'm keen to format the hard drive as Ext4 for best support and performance. But, my concern is permission issues may come up between the two PCs. Alternatively, I suppose I could use NTFS but I don't know whether NTFS might have similar permission issues in Mint.
I've looked at exFAT, and I know fuse-exfat works well (I use it for Windows USB sticks), but for Virtual Machines I suspect Ext4 or NTFS would achieve better performance (least CPU overhead).
Any advice would be much appreciated.
|
Permissions on unix-like systems are based on the user's UID, not on their username. So, if all relevant users have the same UIDs on both systems, there will be no permissions problems with the mounted ext4 USB drive.
Debian (and debian-based distros like Mint) start creating users with UID=1000, so if you created the same users in the same order on both systems, they will have the same UIDs.
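The point is easy to verify: what the filesystem records is the numeric UID; the name is looked up only when a tool displays it:

```shell
touch uid_demo
stat -c '%u' uid_demo    # numeric UID stored on disk
stat -c '%U' uid_demo    # name resolved from the user database at display time
id -u                    # matches the numeric UID shown above
```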
| Avoiding permission issues on portable hard drive in Linux Mint |
1,519,655,901,000 |
I am running a script to deploy a website on a server. It gives me the following error:
DEBUG [4223cc8a] Command: /usr/bin/env chmod +x /tmp/mysite_staging/git-ssh.sh
DEBUG [4223cc8a] changing permissions of `/tmp/mysite_staging/git-ssh.sh'
DEBUG [4223cc8a] : Operation not permitted
It is complaining that the deploy user cannot change permission of that file. I already have set it so that the deploy user can read,write,execute the file, as the user is in the deploy group:
$ ls -l
total 4
-rwxrwx--x. 1 root deploy 93 Aug 5 09:22 git-ssh.sh
So how can I enable the deploy user to change the permission of this file? This is on CentOS. My temporary solution was to make the deploy user the owner of the file.
|
Only the owner of a file, or the root user, can change the permissions of a file. You need either to change ownership of the file so it is owned by the deploy user, or run the script as root.
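A minimal sketch of the ownership rule: the owner of a file may set its mode bits freely, regardless of what the current permissions are:

```shell
touch owner_demo
chmod 640 owner_demo        # the owner may set any mode...
stat -c '%a' owner_demo     # prints: 640

chmod u+x owner_demo        # ...and change it again at will
stat -c '%a' owner_demo     # prints: 740
```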
| Give permissions to change permissions |
1,519,655,901,000 |
0. Add user bkupusr to group extbk
leeand00@hostname:/home/leeand00/$ sudo groupadd extbk
leeand00@hostname:/home/leeand00/$ sudo usermod -G extbk bkupusr
1. Created directory structure:
leeand00@hostname:/home/leeand00/$ mkdir appdir2
leeand00@hostname:/home/leeand00/$ mkdir appdir2/appuser1
leeand00@hostname:/home/leeand00/$ mkdir appdir2/appuser2
2. Setup the default permissions for anything else we create in there here after (no recursion for existing stuff)
leeand00@hostname:/home/leeand00/$ setfacl -dm g:extbk:r ./appdir2
3. Create a directory and a file:
leeand00@hostname:/home/leeand00/$ cd appdir2
leeand00@hostname:/home/leeand00/appdir2/$ touch file1
leeand00@hostname:/home/leeand00/appdir2/$ mkdir dir1
leeand00@hostname:/home/leeand00/appdir2/$ echo "Hi" >> file1
4. Stop other groups from reading and executing appdir2
leeand00@hostname:/home/leeand00/appdir2/$ cd ..
leeand00@hostname:/home/leeand00/$ chmod o-xr ./appdir2
5. Attempt to access from a user in the extbk group
bkuser@hostname:/home/leeand00/$ cd ./appdir2
bash: cd: appdir2: Permission denied
bkuser@hostname:/home/leeand00/$ cat ./appdir2/file1
cat: appdir2/file1: Permission denied
However if I change the other permissions as follows:
leeand00@hostname:/home/leeand00/$ chmod o+x ./appdir2
Then I am able to access the file again.
bkuser@hostname:/home/leeand00/$ cd ./appdir2
bkuser@hostname:/home/leeand00/$ cat ./appdir2/file1
hi
But then so can anyone else in another group...so is there a way to allow access only to the groups that are in the ACL, (and to the group and owner) without allowing access to other?
|
There are two sets of FACL rules associated with folder ./appuser2: the FACL rules for folder ./appuser2 itself, and a second set of FACL rules that specify the default FACL rules that are applied to files and folders created within folder ./appuser2.
The steps you've outlined above set the "default" FACL rules that are applied to files and folders created within ./appuser2, but you have not defined a FACL rule set for the folder ./appuser2 itself. This is part of the reason why members of the group extbk cannot access ./appuser2 and its contents.
Another misconfiguration issue that requires correction is this: any user who requires access to folder ./appuser2 must be granted execute 'x' permission on that directory. As stated in the chmod(1) manual, for folders, the execute 'x' permission grants a user search permissions on the folder--i.e., the user is granted permission to execute a change directory action into the folder to access the folder's contents.
Here's an example based on your original comments for you to consider:
Listing 1: FACL permissions example
sudo su -
mkdir -p /opt/appdir2/{appuser1,appuser2}
setfacl -bR /opt/appdir2/
chmod 750 /opt/appdir2/appuser2/
find /opt/appdir2/ -ls
1049001 4 drwxr-xr-x 4 root root 4096 Jul 26 22:02 /opt/appdir2/
1049051 4 drwxr-xr-x 2 root root 4096 Jul 26 22:02 /opt/appdir2/appuser1
1049053 4 drwxr-x--- 2 root root 4096 Jul 26 22:02 /opt/appdir2/appuser2
getfacl -p /opt/appdir2/appuser2/
# file: /opt/appdir2/appuser2/
# owner: root
# group: root
user::rwx
group::r-x
other::---
#==========================================================
# FACL rules for folder `/opt/appdir2/appuser2/'.
setfacl -m g:extbk:r-x /opt/appdir2/appuser2/
getfacl -p /opt/appdir2/appuser2/
# file: /opt/appdir2/appuser2/
# owner: root
# group: root
user::rwx
group::r-x
group:extbk:r-x
mask::r-x
other::---
#==========================================================
# FACL rules for files and folders created
# within folder `/opt/appdir2/appuser2/'.
setfacl -dm g:extbk:r-x /opt/appdir2/appuser2/
getfacl -p /opt/appdir2/appuser2/
# file: /opt/appdir2/appuser2/
# owner: root
# group: root
user::rwx
group::r-x
group:extbk:r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:extbk:r-x
default:mask::r-x
default:other::---
echo "Hello" >/opt/appdir2/file1
echo "World" >/opt/appdir2/appuser2/file2
find /opt/appdir2/ -ls
1049001 4 drwxr-xr-x 4 root root 4096 Jul 26 22:13 /opt/appdir2/
1049051 4 drwxr-xr-x 2 root root 4096 Jul 26 22:02 /opt/appdir2/appuser1
1049053 8 drwxr-x--- 2 root root 4096 Jul 26 22:13 /opt/appdir2/appuser2
1049071 4 -rw-r----- 1 root root 6 Jul 26 22:13 /opt/appdir2/appuser2/file2
1049055 4 -rw-r--r-- 1 root root 6 Jul 26 22:13 /opt/appdir2/file1
getfacl -p /opt/appdir2/appuser2/file2
# file: /opt/appdir2/appuser2/file2
# owner: root
# group: root
user::rw-
group::r-x #effective:r--
group:extbk:r-x #effective:r--
mask::r--
other::---
#==========================================================
# Ensure users who are members of the group `extbk'
# are granted access to folder /opt/appdir2/appuser2/
# and its contents.
usermod -a -G extbk deleteme
su - deleteme
[deleteme]$ find /opt/appdir2/ -ls
1049001 4 drwxr-xr-x 4 root root 4096 Jul 26 22:13 /opt/appdir2/
1049051 4 drwxr-xr-x 2 root root 4096 Jul 26 22:02 /opt/appdir2/appuser1
1049053 8 drwxr-x--- 2 root root 4096 Jul 26 22:13 /opt/appdir2/appuser2
1049071 4 -rw-r----- 1 root root 6 Jul 26 22:13 /opt/appdir2/appuser2/file2
1049055 4 -rw-r--r-- 1 root root 6 Jul 26 22:13 /opt/appdir2/file1
[deleteme]$ cat /opt/appdir2/appuser2/file2
World
[deleteme]$ exit
#==========================================================
# Ensure users who are NOT members of the group `extbk'
# are denied access to folder /opt/appdir2/appuser2/
# and its contents.
gpasswd -d deleteme extbk
su - deleteme
[deleteme]$ find /opt/appdir2/ -ls
1049001 4 drwxr-xr-x 4 root root 4096 Jul 26 22:13 /opt/appdir2/
1049051 4 drwxr-xr-x 2 root root 4096 Jul 26 22:02 /opt/appdir2/appuser1
1049053 8 drwxr-x--- 2 root root 4096 Jul 26 22:13 /opt/appdir2/appuser2
find: '/opt/appdir2/appuser2': Permission denied
1049055 4 -rw-r--r-- 1 root root 6 Jul 26 22:13 /opt/appdir2/file1
[deleteme]$ cat /opt/appdir2/appuser2/file2
cat: /opt/appdir2/appuser2/file2: Permission denied
[deleteme]$ exit
| Using ACL Permissions without allowing other groups to access a directory? |
1,519,655,901,000 |
I guess there's something I'm missing here but I need to solve this in order to feel confident while learning how unix/linux OS work.
I've seen some similar questions on this topic but I don't think they solve my problem since both my users have the same privileges.
I have a localhost in my machine.
Inside my var/www/html/ directory I have directory_A created as 'root' user with a web project I developed in it. It works perfectly.
I also have directory_B which I downloaded from github to test a project with my regular user let's say 'regular_user' and I get the following error:
**Forbidden**
You don't have permission to access /directory_B/ on this server.
So I check my permissions on both directories and compare them:
drwxr-xr-x 4 root root 4096 Jul 13 20:25 directory_A
drwx------ 5 regular_user regular_user 4096 Jul 13 20:47 directory_B
So I can see my user has permission to read, write and execute. What am I missing here? Why do I get this error when browsing directory_B, where I'm supposed to have permission, but not when browsing directory_A, even though root has equivalent permissions over its own directory?
|
When you access the directory via a server/browser combination of any kind, your credentials are not shared, so the server does not know that the person accessing the files is you. Try
chmod a+x directory_B
and
chmod a+r directory_B/*
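If it's still unclear why those two commands help, here is a self-contained sketch of the same situation using a throwaway directory instead of the real docroot (the web server typically runs as its own user, e.g. www-data, which is why the "other" permission bits are the ones that matter):

```shell
# A directory without the execute bit for "other" is impassable to
# every account except its owner -- which is exactly why the web
# server, running as a different user, answers 403 Forbidden.
demo=$(mktemp -d)
mkdir "$demo/directory_B"
echo hello > "$demo/directory_B/index.html"

chmod 700 "$demo/directory_B"        # drwx------ : owner only
stat -c '%A' "$demo/directory_B"     # prints: drwx------

chmod a+x "$demo/directory_B"        # others may now traverse it
chmod a+r "$demo/directory_B"/*     # others may now read the files
stat -c '%A' "$demo/directory_B"     # prints: drwx--x--x
rm -rf "$demo"
```

With the traverse (x) bit on the directory and the read bit on the files, the server can serve the content even though its credentials are not yours.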
| Getting 'Forbidden error' browsing a page in localhost with permission to read, write and execute |
1,519,655,901,000 |
I would like a way to deny write access to a set of folders for a particular process. For example, I would love to have a way in which YUM/RPM is not allowed to write into /usr/bin.
|
You can chroot the software into a bind mount setup where these directories are mounted read-only.
mkdir /foo
mount --bind / /foo
mount --rbind /dev /foo/dev
mount --bind /proc /foo/proc
mount --bind /run /foo/run
mount -t tmpfs tmpfs /foo/tmp
mount --bind /sys /foo/sys
mount --bind /usr/bin /foo/usr/bin
mount -o remount,ro /foo/usr/bin
chroot /foo rpm …
Note that hostile processes running as root can escape a chroot, so this is not a secure confinement, only a way to ensure that a non-malicious process isn't writing where it isn't supposed to.
An alternative approach would be to set up SELinux rules. These constrain even processes running as root, so if set up correctly (which is nontrivial, and requires more than file access blocking) it can be a secure confinement.
If the process isn't running as root, just make sure that the permissions on the directory don't allow the user to write there. You can use an ACL that excludes a specific user, e.g.
setfacl -m user:alice:0 /some/dir
to make /some/dir inaccessible to the user alice, or
setfacl -R -m user:alice:rX /some/dir
to make it and files under it readable but not writable.
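A minimal sketch of this non-root ACL approach, runnable without privileges. It uses a numeric UID (12345 here) purely as a placeholder so no real alice account has to exist, and assumes the acl tools are installed on a filesystem that supports ACLs:

```shell
# Deny one specific user all access to a directory while leaving the
# normal owner/group/other permission bits untouched.
dir=$(mktemp -d)
setfacl -m u:12345:0 "$dir"        # that UID gets no permissions at all
getfacl -p "$dir" | grep '12345'   # shows the deny entry for that UID

# Read-only variant from above: readable and searchable, not writable.
setfacl -R -m u:12345:rX "$dir"
rm -rf "$dir"
```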
| How to guard a set of folders from being written into by a specific process? |
1,519,655,901,000 |
I'd like to have a normal user account assigned to the root group (which I've done already) and change the mode of all files owned by the root user to 770. If I understood it well, as a result every user assigned to the root group will obtain full access to those files and can, in effect, be treated as a root user.
My question is: if I do this, can the system take damage?
|
Even assuming you meant chmod g=u rather than chmod 770, it may well break some of the PAM security modules, including those that manage logins. It will break ssh logins, as ssh checks permissions on $HOME and all parent directories.
If, as you suggest in your comments, you simply want to avoid using sudo, there are some options that spring to mind:
Login as root
Run sudo -s at the start of your session
Continue using sudo but configure it to stop asking you for a password
Given your requirements, of all of these I would recommend only the third option.
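For the third option, the change is a sudoers rule; a sketch, assuming your account is named alice (a placeholder -- and always edit with visudo rather than writing the file directly):

```
# run: sudo visudo
# then add a rule such as:
alice ALL=(ALL) NOPASSWD: ALL
```

Be aware this removes the password prompt for everything alice runs through sudo, so it trades away much of sudo's safety net.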
| Linux Root group change chmod for root owned to 770 |
1,519,655,901,000 |
On a new external hard drive (Intenso 05-1204-18A), I made (with GParted) two partitions:
Disk /dev/sdc: 931.5 GiB, 1000204883968 bytes, 1953525164 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfa00a60d
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 50794495 50792448 24.2G b W95 FAT32
/dev/sdc2 50794496 1953523711 1902729216 907.3G 83 Linux
(I am using Linux 3.19.3-3-ARCH GNU/Linux)
When I mount the first (by using the file manager, but it works from the terminal too), I can see:
drwxr-x---+ 3 felicien felicien 60 Apr 16 16:31 .
drwxr-xr-x 3 root root 60 Apr 16 15:58 ..
drwxr-xr-x 4 felicien felicien 16384 Jan 1 1970 INTENSO WIN
I can mkdir and everything in this directory. When I mount the second:
drwxr-x---+ 4 felicien felicien 80 Apr 16 16:32 .
drwxr-xr-x 3 root root 60 Apr 16 15:58 ..
drwxr-xr-x 4 felicien felicien 16384 Jan 1 1970 INTENSO WIN
drwxr-xr-x 3 root root 4096 Apr 16 16:02 Intenso Linux
I have to chown the directory to be able to write into it. Why do I have write permission with FAT32 but not with EXT4?
Thanks.
|
The fat32 filesystem has no notion of ownership or permissions. The man page for mount lists these options that help make it look closer to what Unix users expect:
uid=value and gid=value
Set the owner and group of all files. (Default: the uid and gid
of the current process.)
umask=value
Set the umask (the bitmask of the permissions that are not
present). The default is the umask of the current process.
So when you mounted it, it was mounted with your userid, groupid, and umask (which I'm guessing is 022). All files and directories will be owned by you, and will have permissions rwxr-xr-x.
ext4, on the other hand, is a classic Unix filesystem that stores userid, groupid, and permissions information. If you create a directory while running as root, it will be owned by root, until you use chown to change it. You can change the group or other permissions, using chmod, to make an object be writable by multiple users.
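So for the FAT32 partition, the fix is to set the desired owner at mount time rather than with chown (which cannot work on vfat). A sketch using the device from the question; the mount point /mnt/intenso is made up, and uid/gid 1000 is assumed to be felicien's based on the ls output:

```
# One-off mount, run as root:
mount -o uid=1000,gid=1000,umask=022 /dev/sdc1 /mnt/intenso

# Or permanently, as a line in /etc/fstab:
/dev/sdc1  /mnt/intenso  vfat  uid=1000,gid=1000,umask=022  0  0
```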
| Can't write on recently created EXT4 partition |
1,519,655,901,000 |
I just made a terrible mistake, I tried to fix it by myself but I need some help.
Due to a syntax error, all file and folder permissions were being changed: thankfully I saw this while it was happening and successfully stopped it. "Only" /bin, /boot, and /dev are affected. Here is an excerpt of what happened (full log: http://pastebin.com/4BkbXEqD):
Apr 11 21:34:08 *** sftp-server[20582]: set "/.newrelic" mode 40754
Apr 11 21:34:08 *** sftp-server[20582]: set "/bin" mode 40554
Apr 11 21:34:09 *** sftp-server[20582]: set "/boot" mode 40554
Apr 11 21:34:09 *** sftp-server[20582]: set "/cgroup" mode 40754
Apr 11 21:34:09 *** sftp-server[20582]: set "/dev" mode 40754
Apr 11 21:34:09 *** sftp-server[20582]: opendir "/.newrelic"
Apr 11 21:34:09 *** sftp-server[20582]: closedir "/.newrelic"
Apr 11 21:34:09 *** sftp-server[20582]: opendir "/.newrelic"
Apr 11 21:34:10 *** sftp-server[20582]: closedir "/.newrelic"
Apr 11 21:34:10 *** sftp-server[20582]: opendir "/"
Apr 11 21:34:10 *** sftp-server[20582]: closedir "/"
Apr 11 21:34:10 *** sftp-server[20582]: opendir "/bin"
Apr 11 21:34:10 *** sftp-server[20582]: closedir "/bin"
Apr 11 21:34:10 *** sftp-server[20582]: set "/bin/mknod" mode 100754
Apr 11 21:34:10 *** sftp-server[20582]: set "/bin/cat" mode 100754
Apr 11 21:34:10 *** sftp-server[20582]: set "/bin/ping6" mode 100754
For instance, MySQL is not running anymore, and I'm afraid more damage has been done.
I've tried to restore read and exec permissions on those folders and files but in fact I don't know in which state they were previously.
How can I revert my system to a working state?
|
In fact it seems that the system was more affected than the log was saying!
Maybe because of the dev folder, maybe symlinks... I don't know, but after crawling forums etc. and trying folder by folder, I finally saved the server with
for package in $(rpm -qa); do rpm --setperms $package; done
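Two related rpm options may also help here (run as root; verifying every package can take a while on a large system, and the output path below is just an example):

```
# See which installed files currently differ from the RPM database;
# an 'M' in the first column marks a mode (permission) mismatch.
rpm -Va > /tmp/rpm-verify.txt

# Restore recorded permissions and -- since ownership may have been
# touched too -- recorded owners and groups, for every package:
for package in $(rpm -qa); do
    rpm --setperms "$package"
    rpm --setugids "$package"
done
```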
A special thx for @Gilles and his superb answer
A special NO Thx! @dhag for editing my question up to removing my politeness formula
| Broke permissions on /bin, /boot, and /dev; how to clean the mess? |
1,519,655,901,000 |
I copied some files from a data DVD to /home/emma (ext4), and all of the files are read only.
This is what all of the files are like:
emma@emma-W54-55SU1-SUW:~$ stat cd/Drivers/Drivers_List.rtf
File: ‘cd/Drivers/Drivers_List.rtf’
Size: 28120 Blocks: 56 IO Block: 4096 regular file
Device: 801h/2049d Inode: 656521 Links: 1
Access: (0400/-r--------) Uid: ( 1000/ emma) Gid: ( 1000/ emma)
Access: 2014-01-17 05:34:46.000000000 +0000
Modify: 2014-01-17 05:34:46.000000000 +0000
Change: 2015-02-01 23:11:04.226865424 +0000
Birth: -
When I try to delete them, I get rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied, even though I'm the owner. Changing the mode to 777 doesn't work either. The only thing that works is deleting them as root, using sudo.
I thought only the i (immutable) attribute made files unable to be deleted by their owner, so what's going on?
I'm using Xubuntu 14.10.
Results of various commands:
(Please note: I created directory cd myself, and then copied directory Drivers to it from the DVD.)
emma@emma-W54-55SU1-SUW:~$ ls -dlh cd
drwxrwxr-x 3 emma emma 4.0K Feb 3 01:44 cd
emma@emma-W54-55SU1-SUW:~$ ls -dlh cd/Drivers
dr-x------ 11 emma emma 4.0K Feb 3 02:15 cd/Drivers
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-r-------- 1 emma emma 28120 Jan 17 2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ chmod 660 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-rw-rw---- 1 emma emma 28120 Jan 17 2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ chmod 777 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-rwxrwxrwx 1 emma emma 28120 Jan 17 2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ lsattr cd/Drivers/Drivers_List.rtf
-------------e-- cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -alh cd/Drivers
total 48K
dr-x------ 11 emma emma 4.0K Feb 3 02:15 .
drwxrwxr-x 3 emma emma 4.0K Feb 3 01:44 ..
dr-x------ 7 emma emma 4.0K Jan 14 2014 01Chipset
dr-x------ 3 emma emma 4.0K Jan 14 2014 02Video
dr-x------ 9 emma emma 4.0K Jan 14 2014 03Lan
dr-x------ 9 emma emma 4.0K Jan 14 2014 04CReader
dr-x------ 3 emma emma 4.0K Jan 17 2014 05Touchpad
dr-x------ 3 emma emma 4.0K Jan 14 2014 06Airplane
dr-x------ 2 emma emma 4.0K Jan 17 2014 07Hotkey
dr-x------ 12 emma emma 4.0K Jan 14 2014 08IME
dr-x------ 7 emma emma 4.0K Jan 14 2014 09Audio
-r-------- 1 emma emma 162 Feb 24 2012 ~$ivers_List.rtf
(I've already deleted cd/Drivers/Drivers_List.rtf using sudo as a test.)
|
I've found the answer myself here.
Deleting a file requires write permission on the directory that contains it, not on the file itself. Because cd/Drivers has no write bit for anyone (dr-x------), even its owner can't delete files from it; root bypasses permission checks, which is why sudo works. Running chmod u+w cd/Drivers would also fix it.
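In other words, unlinking a file needs the write bit on the parent directory, not on the file. A throwaway-directory sketch of exactly this situation (run as a regular user; root bypasses these checks):

```shell
# Recreate the DVD-copy situation: a directory without owner write
# permission, holding an otherwise ordinary file.
dir=$(mktemp -d)
mkdir "$dir/Drivers"
echo data > "$dir/Drivers/file.rtf"
chmod u-w "$dir/Drivers"             # dr-x------, like after the copy

rm -f "$dir/Drivers/file.rtf" 2>/dev/null \
  || echo "still there: directory is not writable"

chmod u+w "$dir/Drivers"             # fix the *directory*, not the file
rm -f "$dir/Drivers/file.rtf" && echo "gone"
rm -rf "$dir"
```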
| Why can't I delete my files? |