2017-02-05
I've managed to put my laptop into an unbootable state while performing an update, and I suspect I know exactly what is wrong. To correct the problem, I need access to at least /boot, and preferably my home directory, on my HDD. I've booted from an Ubuntu 16.04 image on a bootable USB thumb drive, and I've managed to mount and decrypt my hard disk. The issue now is that I can see the folder structure on my hard disk, but most of the folders appear to be empty, and it seems as though I may simply not have permissions over any of the files. In the case of my home directory, that's understandable (I'd need to impersonate myself? sudo is not working because my user account does not exist in this image), but I'm not sure why I can't see any files in /boot. I know the relevant passwords. How can I convince my HDD that I'm me? Any direction would be much appreciated. Thanks, Patrick EDIT: I've managed to grant myself permissions over the contents of my home directory using chmod, so in the worst case I should be able to do an extraction... but why does /boot still appear to be empty, even after modifying permissions? Incidentally, I had backed up the files I botched in my home directory before it all went horribly wrong; I've tried copying them into /boot, but Ubuntu doesn't seem to see them at boot. Why didn't this work? Thanks again, Patrick
If /boot appears empty, it's probably because you are looking at the mount point where /boot is normally mounted, not at the boot partition itself. With an encrypted root, /boot lives on its own unencrypted partition, and the USB boot session doesn't know where that partition should be mounted - which also explains why files copied into the empty mount point were invisible at boot: they landed on the root filesystem, not the boot partition.
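A sketch of the usual repair route from the live session. The device names /dev/mapper/ubuntu--vg-root and /dev/sda1 below are assumptions; substitute whatever lsblk shows for your decrypted root volume and your boot partition:

```
# mount the decrypted root, then the real boot partition *inside* it
sudo mount /dev/mapper/ubuntu--vg-root /mnt
sudo mount /dev/sda1 /mnt/boot

# optionally chroot so repair tools (e.g. update-grub) see the normal layout
sudo chroot /mnt
```

After this, /mnt/boot shows the actual kernel and initrd files rather than an empty mount point.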
How can I take permissions over /boot on my mounted HDD from bootable disc?
UPDATE: I now know my issue was database corruption, but discerning it was somewhat tricky--apparmor appeared to be the cause for longer than it should've. I didn't note when first posting that even after putting mysql in complain mode and sending apparmor both stop and teardown commands my syslog still showed the apparmor message...feeding my irrational fear of the protection layer--still not sure how this happened. I finally got mysql separated from apparmor, but it still couldn't lock its own files. Ergo database corruption--dang. My backups worked fine on a new server. INITIAL POST: Mysql server is being blocked by (I think) apparmor, but I'm at wits' end to determine why/how. I'm not overly familiar with apparmor. I know I shouldn't uninstall apparmor--for at least two reasons--but I've used enough profanity (and given this issue too much time) to not consider it. Hopefully I'm merely missing something simple and will learn here. The failure began today and follows no system changes. MySQL's error log laments permissions Can't open and lock privilege tables: Table 'servers' is read only I've been unable to find anyone with this issue who isn't currently moving their default database store. I moved mine as well--two years ago. The apparmor config is unchanged since 2014/04/21: /files/bak/tmp/ rw, /files/bak/tmp/* rwk, /files/bak/mysql/ rw, /files/bak/mysql/** rwk, I've verified filesystem permissions: # find mysql/ -type d -exec chmod 700 {} \; # find mysql/ -type f -exec chmod 660 {} \; # chown -R mysql: mysql I reloaded apparmor, installed apparmor-utils, pushed mysql to complain # aa-complain mysql # apparmor_status apparmor module is loaded. 5 profiles are loaded. 4 profiles are in enforce mode. /sbin/dhclient /usr/lib/NetworkManager/nm-dhcp-client.action /usr/lib/connman/scripts/dhclient-script /usr/sbin/tcpdump 1 profiles are in complain mode. /usr/sbin/mysqld 1 processes have profiles defined. 0 processes are in enforce mode. 
0 processes are in complain mode. 1 processes are unconfined but have a profile defined. /sbin/dhclient (495) ...but viewing syslog still suggests apparmor is blocking mysql after service mysql start: apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=13899 comm="apparmor_parser" Before I found the apparmor issue I tried restoring DBs from backups, which also failed with write permissions: Can't create/write to file '/files/bak/mysql/dbCooper/wp_cb_contact_form.MYI' I verified the filesystem is 'rw' (even though the above find -exec would have failed anyway): mount /dev/xvdf on /files/bak type ext4 (rw,noatime) I've even tried stopping apparmor, but the syslog still shows can't open and lock privilege tables after this: # service apparmor stop [...redacted teardown msg...] # /etc/init.d/apparmor teardown * Unloading AppArmor profiles # service apparmor status apparmor module is loaded. 0 profiles are loaded. 0 profiles are in enforce mode. 0 profiles are in complain mode. 0 processes have profiles defined. 0 processes are in enforce mode. 0 processes are in complain mode. 0 processes are unconfined but have a profile defined. Is it possible for mysql to lock database files and fail to unlock them when the daemon crashes? If so how would I clear the lock? I'm currently running my DB with mysqld --skip-grant-tables ...so I know the executable can run, and the databases are at least somewhat valid (the sites all appear normal). Am I missing something? thanks for reading.
Database corruption. Occam's Razor prevails. I moved backups to a new server and updated the db location/apparmor config. I cringed as I restarted everything. I'd spent hours convincing myself AppArmor was a cavernously complex and difficult beast, but my reticence was completely without cause--it worked perfectly on the first try. Amazing how it just works when no files are corrupted--should be an Apple ad. Now if I could just recover the WordPress site my web developer updated extensively while I was running under --skip-grant-tables but before I realized what the cause was. :-/
apparmor: mysql permissions--with no recent changes
2016-02-20
Right to business, here is what I am trying to do: foverzar@subsystem /home> ls -aln total 16 drwxr-xr-x 4 0 0 4096 Dec 12 23:07 ./ drwxr-xr-x 17 0 0 4096 Dec 4 13:51 ../ drwx------ 9 1000 1000 4096 Dec 13 22:05 foverzar/ drwxrwx--- 2 1001 1001 4096 Dec 12 23:11 tor/ foverzar@subsystem /home> cd tor cd: Permission denied: “./tor” foverzar@subsystem /home> id foverzar uid=1000(foverzar) gid=1000(foverzar) groups=1000(foverzar),10(wheel),1001(tor) Basically my question is: why can't I access the dir, even though my user is part of the "tor" group and the permissions are set to 770?
"Have you tried to reboot?" (c) OK, never mind - I figured out that usermod -aG tor foverzar was executed during the same session as the commands above. And since adding supplementary groups only updates the /etc/group file, which is read only at login, nothing would work until the user logs in again. Hope that helps anyone else around.
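For what it's worth, a full re-login can be avoided: assuming the membership is already recorded in /etc/group, newgrp starts a subshell whose process carries the new group:

```
$ newgrp tor    # new shell, group list re-read from /etc/group
$ id -nG        # 'tor' now appears in this shell's group list
$ cd /home/tor  # works inside this subshell
```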
Access rights for directory, when you are a part of owner group [duplicate]
I'm trying to install numpy onto my system. I'm kept from installing the package, however, because I don't have permission to edit certain folders. This "permission block" is the standard safety mechanism that OS developers use to keep users (like me) from blindly hacking away at some important structure. I could run sudo python setup.py install, and that would override my permission block, but this is risky: if a script contained in a package were unsafe, a call to sudo to install the package would overlook any threats. As opposed to blindly calling sudo, can I specifically give an installation permission to install into some specific folder?
You can use virtualenv. virtualenv allows you to install python libraries into a directory separate from the system libraries. You can set it to a directory where you have write permission (e.g. your home directory). Alternatively, you can change the group of the files with chgrp py-installer -R /my/python/packages/directory and run your installer as that group with sudo -g py-installer python setup.py install. This is a bit riskier, as you may end up with incorrect folder permissions if you aren't careful.
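A minimal sketch of the virtualenv route using the stdlib venv module. The path /tmp/demo-venv is arbitrary, and --without-pip is only there to keep the sketch dependency-free - omit it in real use so the environment gets its own pip for installing numpy:

```shell
# create an isolated environment in a directory we own - no sudo needed
python3 -m venv --without-pip /tmp/demo-venv

# packages installed through this environment's pip would land under
# /tmp/demo-venv, never in the system site-packages
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'   # prints /tmp/demo-venv
```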
When installing a package, can I give write permission to specific files?
I have an external drive mounted to /media/usbhdd; the owner is debian-transmission and the group is gebruikers. In the group gebruikers I have added the users debian-transmission and pi. But when I want to change or delete something as user pi it doesn't work (I get a permission error). My /etc/fstab looks like this: /dev/sda1 /media/usbhdd vfat uid=debian-transmission,gid=gebruikers 0 0 With the command id pi I get this: uid=1000(pi) gid=1000(pi) groups=1000(pi),4(adm),20(dialout),24(cdrom),27(sudo),29(audio),44(video),46(plugdev),60(games),100(users),101(input),108(netdev),999(spi),998(i2c),997(gpio),115(debian-transmission),1002(gebruikers) And with the command ls -l /media/usbhdd: total 192 drwxr-xr-x 2 debian-transmission gebruikers 32768 Aug 30 2014 disk1 drwxr-xr-x 3 debian-transmission gebruikers 32768 Oct 14 11:42 Network Trash Folder drwxr-xr-x 2 debian-transmission gebruikers 32768 Aug 31 2014 shares drwxr-xr-x 3 debian-transmission gebruikers 32768 Oct 14 11:42 Temporary Items drwxr-xr-x 4 debian-transmission gebruikers 32768 Oct 14 12:26 series drwxr-xr-x 3 debian-transmission gebruikers 32768 Oct 14 12:26 movies How can I write/change/delete with user pi without changing the uid in the fstab file? EDIT This was the trick: /dev/sda1 /media/usbhdd vfat uid=debian-transmission,gid=gebruikers,umask=0000 0 0
As you can see in the results of ls -l inside your device, the group and "others" permissions have the "write" flag cleared: only the owning user may write. vfat has no on-disk Unix permissions, so the bits you see are synthesized at mount time. If you want to keep the mounting ids intact, add a permission-mask mount option to your /etc/fstab entry to enable write permission for the group; for example umask=0002 yields rwx for user and group and r-x for others.
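A hedged sketch of the fstab entry (device, mount point and names taken from the question; the mask values are one reasonable choice). dmask applies to directories and fmask to files, each subtracted from 0777, so dmask=0002 gives directories 775 and fmask=0113 gives files 664:

```
/dev/sda1  /media/usbhdd  vfat  uid=debian-transmission,gid=gebruikers,dmask=0002,fmask=0113  0  0
```

A single umask=0002 (as in the question's EDIT with umask=0000) also works, but it marks every file executable - a known vfat quirk that the separate fmask avoids.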
Rights, permissions and groups
I have a website running on apache with the web root folder at /var/www. I have different people managing different aspects of the website. Designers access /var/www/images and /var/www/media Developers access /var/www/code A 3rd party outside contractor accesses /var/www/code/3rd_party_code. The entire /var/www/ is owned by www-data as that is what apache is run as. Is there any Linux users and/or grouping permission scheme I can use to make sure people are only able to change their parts of the website? Note also there is some overlap. For instance, developers can also see the 3rd party code. Also, designers access different folders (/var/www/images and /var/www/media) and chroot will not work (I think!). Access will be only via SFTP. I will disable ssh shell access as necessary. This is on Ubuntu 12.04.5 LTS.
Seems like access control lists is exactly what I need. I will read up on this: ACLS in Ubuntu. Thanks Andy and Jakuje.
SFTP access to different parts of the apache webroot for different users
What is the default file permission of /etc/shadow in CentOS? Is there any difference in the default file permission of /etc/shadow in previous versions of CentOS?
That's an interesting observation; the only concrete evidence I could find, so far, is this SCAP mailing list thread talking about the change from RHEL5's default permissions of 0400 to RHEL6's default of 0000. You can also observe that in the list of Common Configuration Enumerations on the Working Group's now-archived website.
What is the file permission of /etc/shadow in CentOS?
What are the default file permissions in CentOS 5, CentOS 6 and CentOS 7? Is there any difference in the default file permissions between these OSs?
Default permissions are based on your umask setting. RHEL 6.6: # umask 0022 I don't believe that has changed by default - can't find anything authoritative. umask is something that you should set as part of the default profile.
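The arithmetic is easy to check in any shell: a new file starts from 666 (directories from 777) and the umask bits are cleared from that. A quick sketch in a throwaway location:

```shell
umask 022
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c %a /tmp/umask-demo      # prints 644 (666 with the 022 bits cleared)

umask 027
rm -f /tmp/umask-demo-2
touch /tmp/umask-demo-2
stat -c %a /tmp/umask-demo-2    # prints 640 (group loses write, others lose all)
```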
What are the default file permissions in CentOS?
There are some newcomers to Linux who need to use it in terminal mode, but they are hopelessly unable to accept that using chmod and chown is dead simple. They would be satisfied only by a tool that asks them, in TUI mode, to select their target and permission rules. Midnight Commander would be fine if it had a recursive mode, since they do not know which file is causing the issue and would like to apply the permission rules per directory. It wouldn't be hard to make a dialog script for this, but I'm guessing someone already did, and it would be nicer to customize an existing one than to start from scratch. What tool or script do you know of for this?
The only TUI-based solution that I've found is inside a file manager called Last File Manager, aka lfm. On F12 there is an option to set permissions and to do it recursively. Others that I've checked: Midnight Commander: there is a suggestion for Midnight Commander to have this feature, but as of now it is still not available. ranger: didn't find a recursive chmod/chown feature. These would be what I was after, but they have a GUI instead of a TUI: https://github.com/IgnorantGuru/permz https://github.com/gitpan/Gtk2-Chmod https://github.com/homebru/permit
TUI Tool to change directory permission rules?
I'm trying to update the ownership of everything in a directory tree from me to root. I'm using find to do it a little more carefully than a recursive chown. Here are the commands I use to change ownership of all files and directories in my tree: cd /opt/mydir # Update files sudo find . -type f -execdir chown root:root "{}" + # Update directories sudo find . -type d -execdir chown root:root "{}" + These work fine for all files and directories. But I noticed there are symbolic links in /opt/mydir that point to files somewhere in the same directory tree, that I still retain ownership of. For example: lrwxrwxrwx 1 civfan civfan 6 Jul 18 2013 halt -> reboot -rwxr-xr-x 1 root root 14832 Jun 25 2013 reboot This looks wrong and seems likely to cause me permission problems later if I don't fix it now. How do I change ownership of all symbolic link files, too?
The ownership of a symbolic link doesn't matter; it's the referenced entity that does. That said, use find . -type l to discover symbolic links in a directory tree, and use chown -h and/or chmod -h to operate on the symbolic link itself rather than its target: find . -type l -exec chown -h root:root {} +
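A quick sketch in a scratch directory showing the discovery step (the chown itself needs root, so it is left as a comment):

```shell
mkdir -p /tmp/linkdemo
touch /tmp/linkdemo/target
ln -sf target /tmp/linkdemo/link

# -type l matches the symbolic links themselves, not their targets
find /tmp/linkdemo -type l          # prints /tmp/linkdemo/link

# then, as root, -h changes the link's own ownership rather than the target's:
# find /tmp/linkdemo -type l -exec chown -h root:root {} +
```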
How do you use 'find' to update ownership of all directories, files, and symbolic links in a folder?
I've followed these instructions and now I'm able to access my Macintosh HD from Ubuntu. The problem is that I can't access the folder I need (for example Desktop); it says that I don't have enough permissions to access the folder, and I see the folder with an "X" on it. I've tried to use gksudo nautilus. It works, but I can't be running the terminal every time I have to access a file on my HD. Is there a solution to edit the permissions permanently?
You could change the ownership of the folder with: sudo chown -R username:groupname /mount/mac/Desktop Replace username and groupname with your own user and group.
Access Macintosh HD from Ubuntu - Permission denied
I'm using Oracle Developer Days virtual box, which is a Linux distribution by Oracle for development environments (link). Not sure which distribution it is, but in case it is relevant: [oracle@localhost ~]$ lsb_release -a LSB Version: :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch Distributor ID: EnterpriseEnterpriseServer Description: Enterprise Linux Enterprise Linux Server release 5.5 (Carthage) Release: 5.5 Codename: Carthage Host is Mac OS, although that certainly shouldn't be relevant for the problem. I've set up a shared directory between the host and the vm, and as pointed out in this question I've added the oracle Linux user to the group vboxsf (upvoted the answer of course). I'm using the shared directory in order to load files into the database using Oracle's external tables. I've got quite a few files to load repetitively and I really need to automate the process and to control it from the host. The remaining problem is that Oracle, when loading the file, needs to write a new file in the same shared folder, basically including a log and a list of bad registers that could not be loaded. I also would need to check these log files from the host. The problem: Oracle can't write the file. It gives the following error: ORA-29913: error al ejecutar la llamada de ODCIEXTTABLEOPEN ("error in executing ODCIEXTTABLEOPEN callout") ORA-29400: error de cartucho de datos ("data cartridge error") error opening file /media/sf_sisifo01/restapi1/TWEET_LOAD_3205.log 29913. 00000 - "error in executing %s callout" *Cause: The execution of the specified callout caused an error. *Action: Examine the error messages take appropriate action. (Sorry about the mixture of languages; the vm is in English but I installed SQL Developer in Spanish. Anyway, it is quite understandable.) It cannot write to the shared directory, which is mounted at /media/sf_sisifo01. If I try to write to the same directory with the oracle Linux user, there is no problem: I can do touch /media/sf_sisifo01/restapi1/TWEET_LOAD_3205.log and it works.
And my guess is that Oracle should be using that same user. I've tried to give permissions to the shared directory both from the root user and from the host (although I suspect the host cannot control that - anyway, my knowledge of Linux administration is quite limited) - no success. The permissions for the shared directory are the following: [oracle@localhost ~]$ ls -l /media/sf_sisifo01 total 200 drwxrwx--- 1 root vboxsf 476 Mar 30 09:08 restapi1 and I haven't managed to give r and x permissions to all users. Thanks for reading and for your help!
Sometimes on a fresh day you see things differently... I've added all users to the group vboxsf. Not that I find the solution particularly elegant (I'd accept a better answer if somebody posts one), but it works and I don't quite see the harm in it. There were two users in the vm besides oracle; their ids are davfs2 and dm. No idea what they're used for.
Oracle virtual box share folders when executing oracle command
2015-02-23
An odd situation came up recently. User1 needed to be able to change files in a directory where the files and the directory were owned by User2 and in group User2. In order to facilitate this editing, the permissions were changed to 757 recursively for the directory structure. Thus a listing looked something like the following. drwxr-xrwx 3 user2 user2 4096 Nov 19 19:41 . drwxr-xr-x 3 user2 user2 4096 Nov 19 19:41 .. drwxr-xrwx 3 user2 user2 4096 Nov 19 19:41 directory1 drwxr-xrwx 3 user2 user2 4096 Nov 19 19:41 directory2 drwxr-xrwx 3 user2 user2 4096 Nov 19 19:41 directory3 -rwxr-xrwx 3 user2 user2 42 Nov 19 19:41 file1 User1 was able to read the files; however, attempts to create new files or edit/copy over existing files failed. The error was something like the following. $ touch file1 touch: cannot touch 'file1': Permission denied Thinking that maybe the drive was write protected somehow, User1 asked User2 to change the file. User2 was able to do so without any issues, thus indicating the drive was not write protected. Looking at df and /etc/fstab, the file appeared to be on a locally mounted hard drive. Other info: User1 is in group User2. (This was originally thought not to be the case.) There were no locks on the file. It appeared as though SELinux was disabled (as indicated by sestatus). While I recognize normally you would not want to set an entire directory to allow anyone to write to it, this is a special case. An almost identical build on a separate machine worked. The output of getfacl is the same for the files and directories. # file: . # owner: user # group: user user::rwx group::r-x other::rwx What can cause this protection and how can it be undone?
I was going to simply delete the question since one of the original facts was incorrect. I decided that, since the obvious answer was missed and there is an alternative not-so-obvious answer, to post the possible causes and solutions to this problem to aid those that may run into a similar issue in the future. 1) If you have this problem and you think that user1 is not part of group user2, personally verify this by having user1 check their groups or by examining the passwd file. In this case, user1 was mistakenly added with the following /etc/passwd entry and no entry in /etc/group. user1:x:1001:1000:User1:/home/user1:/bin/bash while user2 had the following in /etc/passwd user2:x:1000:1000:User2:/home/user2:/bin/bash and in /etc/group user2:x:1000:user2 user1 Group permissions take priority over Other permissions, thus writing was not allowed. This could be fixed by changing the group permissions or removing user1 from the user2 group. This was the easy answer that, had the initial assumptions been correct, many people probably would have gotten. Note to self: when something doesn't work, verify it for yourself. 2) The less obvious answer comes from using file Access Control Lists (ACLs). If the user in question has specific permissions assigned, they take priority over the general permissions. While this may be known by those who have used ACLs, I suspect many don't even know they exist. Here is an example of how this can block a user. $ sudo setfacl -m u:user:r-x . $ ls -la total 0 drwxr-xrwx+ 2 root root 60 Nov 21 20:46 . drwxrwxrwt. 12 root root 300 Nov 21 20:45 .. -rw-rw-r--. 1 user user 0 Nov 21 20:46 dog $ touch cat touch: cannot touch ‘cat’: Permission denied $ getfacl . # file: . # owner: root # group: root user::rwx user:user:r-x group::r-x mask::r-x other::rwx To undo this $ sudo setfacl -b . $ sudo getfacl . # file: . # owner: root # group: root user::rwx group::r-x other::rwx $ touch cat $ ls -la cat -rw-rw-r--.
1 user user 0 Nov 21 20:51 cat Thank you @andcoz for asking about the groups, which ultimately made me go back and re-verify, and thank you @Rianto Wahyudi for mentioning getfacl, which I had not seen/used before.
How can a user be prevented from changing a file that has write permissions for other users?
I have some sensitive data on a network share hosted on a Windows machine (drive D on mymachine.mywork.com) that I need to mount, as follows: sudo mount -t cifs mymachine.mywork.com :/D /mnt Will other users of the machine be able to view or modify my mounted network share? If so, how would I go about restricting all access to myself only? Thanks.
One thing you could try doing is burying the mount point somewhere in a directory that only you can get to. So, if your username is 'bob', then try something like this: mkdir -p /home/bob/securedir/mnt chmod 0700 /home/bob/securedir and then modify your mount command from above to this: sudo mount -t cifs mymachine.mywork.com :/D /home/bob/securedir/mnt Since nobody but bob (and root) can descend into /home/bob/securedir, other users can't reach the mount at all. The cifs filesystem also accepts uid=, gid=, file_mode= and dir_mode= mount options if you want to control the apparent ownership and permissions of the mounted files directly.
Permissions of CIFS Mounts
I have an external HDD. It's portable, so I plug it in all over the place. Needless to say, these different places have different passwd files and uids. The files are mine and I need to access them everywhere. Currently, every time I plug it in somewhere I have to sudo find . -type f -exec chmod a+rw \{\} \+ sudo find . -type d -exec chmod a+rwx \{\} \+ Which assumes I have root access, of course. I could run this command (without sudo) at the end of a session instead, but either way it's a faff and takes a lot of time (it's a ½TB drive). One option is to use a FS that does not understand permissions, exFAT, for example. However I'm using BTRFS because I use the snapshot facility. Is there a way at the FS level I can disable permissions - leave everything wide open? Or if you have another way, how do you do this?
Using bindfs may help. root% mkdir /mnt/wide-open-backups -m777 root% bindfs /mnt/external-drive /mnt/wide-open-backups \ -r \ -m youruser \ -p 666 This gives youruser access via /mnt/wide-open-backups and they will see everything as if it was owned by them. The -r makes it read only.
External disk - how to ensure readable by all without needing root perms?
I'm not able to ls on a folder that I have just transferred from win7 to OSX via a FAT32 drive. I don't know how to search for an answer for this issue. I've attempted the following: sudo chmod u=rwx myfolder/ sudo chmod a+rx myfolder/ ...to no avail. I have found that sudo ls seems to work. Why would this be?
Wow, I would have never thought this could happen, but it turns out that there was a file in that directory named 'ls' without an extension, so it was overriding the system default while I was in that directory, running ls via the cwd's supposed executable of it. A rare and embarrassing case, but true, and not completely obvious while attempting to troubleshoot. I suppose this is one of the oldest issues in the book.
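The effect is easy to reproduce whenever "." ends up on PATH ahead of the system directories (which also explains why sudo ls worked: sudo typically resets PATH via secure_path). A sketch:

```shell
mkdir -p /tmp/shadow
printf '#!/bin/sh\necho not-the-real-ls\n' > /tmp/shadow/ls
chmod +x /tmp/shadow/ls
cd /tmp/shadow

# with "." first on PATH, the local file shadows the system binary
PATH=".:$PATH" ls       # prints not-the-real-ls

# an absolute path bypasses the lookup entirely
/bin/ls > /dev/null
```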
`ls` fails for directory copied from Win and OSX [closed]
I have an SFTP server running on Debian. The directory is chrooted and is set up with privileges as follows:

/sftp/ (750)
 +--- testagent2 (755) ---- writespace (755)
 +--- testagent3 (755) ---- writespace (755)
 +--- testagent4 (755) ---- writespace (755)

All agents can log in without issues. The problem is that only testagent2 can write within his writespace; the rest cannot, even though testagent3 and testagent4 can still download files. I have experimented with various permission settings, such as 750, 755, 775, and 777, but it doesn't make a difference. How can I correct this error?
Given the 755 permissions, make sure that the appropriate users own their respective writespaces!
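Assuming each agent's primary group matches the user name (adjust to your actual groups), the fix as root would look something like:

```
chown testagent3:testagent3 /sftp/testagent3/writespace
chown testagent4:testagent4 /sftp/testagent4/writespace
# 755 then grants each owner rwx, while the chroot parents stay root-owned
# as sshd's ChrootDirectory requires
```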
How to correct error where 2 out of 3 users on chrooted network cannot gain write access?
I have a Raspberry Pi that's running a Raspbmc distribution and I've noticed that a lot of the directories are either owned by the user 501 and the group dialout or both the user and group root. It's frustrating for me to move files from the main filesystem on the SD card to the external drive because I always need root access (and it makes automating tasks a pain too), so I'd really like to be able to chown it to the user pi. I've read up a little bit on what the 501 user and the dialout group are and don't see why I shouldn't do this, but my knowledge of Unix permissions is basic at best so I'd like to know if I've missed any considerations before I go ahead and change the permissions recursively on the entire drive. So my question would be: Is there any harm in doing a chown -R pi on the external drive?
If you create a common user between the systems where this disk is moving, you can then make the ownership on this disk that single user, and you'll no longer have to deal with this. Simply add a user on both systems, and make sure that this user's UID (user ID) and GID (group ID) are the same numbers on both systems. The names are immaterial; it's the numbers that need to be kept in sync, so that the UID/GID is recognized across both systems as a single user/group. When creating a user, these are the values the system uses to decide which user/group owns files. Example Say I have this directory; its user/group is saml & saml. $ ls -ld . drwx------. 245 saml saml 32768 Oct 26 22:41 . Using the -n switch to ls you can see what the numbers are for these fields. $ ls -ldn . drwx------. 245 500 501 32768 Oct 26 22:41 . So we need to make sure that I have the same user/group on both systems (saml/saml) and the UID/GID needs to be 500/501 as well. If you look in the /etc/group file you'll see the group saml + GID. $ grep "^saml" /etc/group saml:x:501: Looking in the /etc/passwd file you'll see the user saml + UID. $ grep "^saml" /etc/passwd saml:x:500:501:Sam M. (local):/home/saml:/bin/bash When running the useradd command you can control which UID/GID to use. $ sudo useradd -u 500 -g 501 saml
What are the ramifications of recursively chown'ing the directories on an external drive that currently has 501:dialout or root:root permissions?
I cannot open any files on my Manjaro Linux laptop. I really can't remember why it happened, but I restarted my computer and now I don't have read/write access to any folder or file, even though I've logged in.
Check the mode in which the partition is mounted, i.e. whether it is mounted in read-only or read-write mode. You can use the mount command to check that. If this is the issue, then you can fix it by editing the /etc/fstab file.
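For example (the remount is a temporary fix; correcting the /etc/fstab entry makes it permanent):

```
$ findmnt -no OPTIONS /        # starts with "ro," if the root fs is read-only
$ sudo mount -o remount,rw /   # switch the running system back to read-write
```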
Cannot access any files Manjaro Linux
I am trying to configure ProFTPd to change the group for newly created files/directories. In my config I have this: <Directory /home/*> GroupOwner www </Directory> which does not seem to work. All users are added to the www group. Debug shows nothing regarding a group change. I'm using FreeBSD 9.0-RELEASE. EDIT: I'm willing to try any other FTP server that makes this easier.
After deep research, I found out that proftpd is not capable of changing the group of a newly uploaded file. However, a workaround was found: simply change the group of the user's home folder, after which all newly uploaded files will inherit the group from the home folder. Not much of a solution, but at least something. =)
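For reference, the inheritance the workaround relies on is filesystem behavior rather than a proftpd feature: BSD filesystems give new files the parent directory's group by default, and on Linux the setgid bit on a directory produces the same effect. A sketch (group names are omitted so it runs for any user):

```shell
mkdir -p /tmp/sgid-demo
chmod g+s /tmp/sgid-demo        # setgid bit: new entries inherit the dir's group
touch /tmp/sgid-demo/upload

# both paths report the same group
stat -c '%n %G' /tmp/sgid-demo /tmp/sgid-demo/upload
```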
proftpd does not change default group for new files
Possible Duplicate: Permissions: What's the right way to give Apache more user permissions? Before I knew better, I used /home/someuser/public_html/scripts as a place from which shared scripts could be accessed by various users' php scripts. include('/home/someuser/public_html/scripts/somefile.php'); Something changed when we went from one server to another, and now the only setting that worked was public_html at 0755. Since we now have some customers with access to their own accounts, this is not acceptable. I tested a workaround by adding a user to the same group as someuser and putting another folder at /home/someuser/test_folder and setting it to 0750. Now my users who share the group can access test_folder. A user that is not in the group cannot, but if I chmod public_html from 755 to 750, I get permission errors even from the users that are in the same group. I checked lsattr and the only attribute that is set is "I" on public_html. Any ideas on what to try next?
The solution was # setfacl -m g:someuser:rx /home/someuser/public_html Read that like this "Set File Access Control List, Modify, Group:someuser:read,execute, /home/someuser/public_html" This forum question is what pointed me in the right direction.
Group Permission on public_html [duplicate]
2012-04-09
How do I configure sudoers to prevent the Sorry, user ****** is not allowed to execute error message? Background For the purpose of testing how a python script runs under a less privileged user and group (a daemon account), there is a need to run: $ sudo -u _denyhosts -g _denyhosts python /usr/local/bin/denyhosts.py -c /usr/share/denyhosts/denyhosts.cfg -n --purge --sync --verbose The result is: Sorry, user ****** is not allowed to execute '/usr/bin/python /usr/local/bin/denyhosts.py -c /usr/share/denyhosts/denyhosts.cfg -n --purge --sync --verbose' as _denyhosts:_denyhosts on ***.***.***. The guess is that this command fails due to extra configuration required in sudoers. $ sudo -l Matching Defaults entries for *** on this host: editor=/usr/bin/nano, env_reset, env_keep+=BLOCKSIZE, env_keep+="COLORFGBG COLORTERM", env_keep+=__CF_USER_TEXT_ENCODING, env_keep+="CHARSET LANG LANGUAGE LC_ALL LC_COLLATE LC_CTYPE", env_keep+="LC_MESSAGES LC_MONETARY LC_NUMERIC LC_TIME", env_keep+="LINES COLUMNS", env_keep+=LSCOLORS, env_keep+=SSH_AUTH_SOCK, env_keep+=TZ, env_keep+="DISPLAY XAUTHORIZATION XAUTHORITY", env_keep+="EDITOR VISUAL", env_keep+="HOME MAIL" User *** may run the following commands on this host: (ALL) ALL I already tried to add the group _denyhosts to sudoers by executing sudo visudo and inserting the line: %_denyhosts ALL=(ALL) ALL Saving and trying again does not help.
The line %_denyhosts ALL=(ALL) ALL means that users in the _denyhosts group are allowed to run any command as any user. This isn't what you're trying to do: you need to allow the user ****** to run commands as the user _denyhosts and the group _denyhosts. Something like: ****** ALL = (_denyhosts : _denyhosts) ALL
How to configure sudoers to allow running sudo command under other group and user name?
1,334,009,265,000
I have the following directory: $ ll -d neptune drwxrws---+ 5 beamin psych 4096 Mar 7 16:18 neptune $ getfacl neptune # file: neptune # owner: beamin # group: psych # flags: -s- user::rwx group::r-x group:sysadmins:rwx group:psych:rwx mask::rwx other::--- default:user::--- default:group::r-x default:group:sysadmins:rwx default:group:psych:rwx default:mask::rwx default:other::--- I am logged in as beamin: $ id beamin uid=1000(beamin) gid=1000(beamin) groups=1000(beamin),2000(sysadmins) $ umask 0002 However, when I create a directory or file, this is what I get: $ cd neptune $ mkdir dir $ touch file $ ll total 8 d---rws---+ 2 beamin psych 4096 Mar 7 16:25 dir ----rw----+ 1 beamin psych 0 Mar 7 16:25 file Why is this?
I suspect the answer is in the properties stored in that ACL you dumped: Why the owner has no permission: default:user::--- Why group has rwx: default:group:psych:rwx Why others have no permission: default:other::--- This ACL stuff overrides traditional Unixy behavior.
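The effect of those default: entries can be reproduced in a scratch directory (a sketch; needs the setfacl/getfacl tools and a filesystem with ACL support):

```shell
tmp=$(mktemp -d)
# Mirror the relevant defaults from the question:
# owner gets nothing, group gets rwx, others get nothing
setfacl -d -m u::---,g::rwx,o::--- "$tmp"
touch "$tmp/file"
ls -l "$tmp/file"    # the owner's permission bits are gone, as in the question
getfacl "$tmp/file"
rm -r "$tmp"
```

New files inherit the default ACL, intersected with the mode the creating program requests, which is why the owner bits vanish even though the umask looks sane.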
Why are directories created with permissions 2070 and files with 060 in a directory with setgid bit?
1,334,009,265,000
Looking at the files in my /etc/profile.d directory: cwellsx@DESKTOP-R6KRF36:/etc/profile.d$ ls -l total 32 -rw-r--r-- 1 root root 96 Aug 20 2018 01-locale-fix.sh -rw-r--r-- 1 root root 1557 Dec 4 2017 Z97-byobu.sh -rwxr-xr-x 1 root root 3417 Mar 11 22:07 Z99-cloud-locale-test.sh -rwxr-xr-x 1 root root 873 Mar 11 22:07 Z99-cloudinit-warnings.sh -rw-r--r-- 1 root root 825 Mar 21 10:55 apps-bin-path.sh -rw-r--r-- 1 root root 664 Apr 2 2018 bash_completion.sh -rw-r--r-- 1 root root 1003 Dec 29 2015 cedilla-portuguese.sh -rw-r--r-- 1 root root 2207 Aug 27 12:25 oraclejdk.sh This is Ubuntu on the "Windows Subsystem for Linux (WSL)". Anyway the content of oraclejdk.sh is like this: export J2SDKDIR=/usr/lib/jvm/oracle_jdk8 export J2REDIR=/usr/lib/jvm/oracle_jdk8/jre export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/WindowsApps/CanonicalGroupLimited.Ubuntu18.04onWindows_1804.2019.522.0_x64__79rhkp1fndgsc:/snap/bin:/usr/lib/jvm/oracle_jdk8/bin:/usr/lib/jvm/oracle_jdk8/db/bin:/usr/lib/jvm/oracle_jdk8/jre/bin export JAVA_HOME=/usr/lib/jvm/oracle_jdk8 export DERBY_HOME=/usr/lib/jvm/oracle_jdk8/db I'm pretty sure it's run when the bash shell starts. My question is, why don't all the *.sh files have the x permission bit set? Don't all shell scripts need the x permission bit set in order to be executable? Please consider me a bit of a novice.
A shell script only needs to be executable if it is to be run as ./scriptname If it is executable, and if it has a valid #!-line pointing to the correct interpreter, then that interpreter (e.g. bash) will be used to run the script. If the script is not executable (but still readable), then it may still be run with an explicit interpreter from the command line, as for example in bash ./scriptname (if it's a bash script). Note that you would have to know what interpreter to use here as a zsh script might not execute correctly if run with bash, and a bash script likewise would possibly break if executed with sh (just as a Perl script would not work correctly if executed by Python or Ruby). Some script, as the one you show, are not actually scripts but "dot-scripts". These are designed to be sourced, like . ./scriptname i.e. used as an argument to the dot (.) utility, or (in bash), source ./scriptname (the two are equivalent in bash, but the dot utility is more portable) This would run the commands in the dot-script in the same environment as the invoking shell, which would be necessary for e.g. setting environment variables in the current environment. (Scripts that are run as ordinary are run in a child environment, a copy of its parent's environment, and can't set environment variables in, or change the current directory of, their parent shells.) A dot-script is read by (or "sourced by") the current shell, and therefore do not have to be executable, only readable. I can tell that the script that you show the contents of is a dot-script since it does not have a #!-line (it does not need one) and since it just exports a bunch of variables. I believe I picked up the term "dot-script" from the manual for the ksh93 shell. I can't find a more authoritative source for it, but sounds like a good word to use to describe a script that is supposed to be sourced using the . command.
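The distinction is easy to demonstrate with a two-line dot-script (a minimal sketch):

```shell
tmp=$(mktemp -d)
printf 'export DEMO_VAR=hello\n' > "$tmp/env.sh"   # no #!-line, not executable

sh "$tmp/env.sh"                            # runs in a child shell...
echo "after running:  ${DEMO_VAR:-unset}"   # ...so the variable is lost

. "$tmp/env.sh"                             # sourced into the current shell
echo "after sourcing: ${DEMO_VAR:-unset}"   # hello

rm -r "$tmp"
```

This is exactly why the profile.d fragments above don't need the x bit: the login shell reads them with ., it never executes them.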
Must shell scripts be executable?
1,334,009,265,000
Windows has a super-administrator account that has not only elevated privileges to perform protected system functions, but unfettered access to anything on the computer, regardless of user ownership. Does Linux have an equivalent? My understanding is that in terms of access to all user accounts, root is just another user (it would defeat user security if any user could become any other user simply by doing it via root).
In Linux/Unix the user with user id 0 is such a super administrator. The user is usually called "root", but the magic is really behind the id and not the name. That user is especially not bound to local file access permissions and can read and write any file. That user also has the ability to change to any other user without needing a password.
Is there a Linux equivalent of the Windows super-administrator?
1,334,009,265,000
Please see my screenshot below. User chj executes chmod +x ichsize.out, but it fails with Operation not permitted. ichsize.out has world-rw permission enabled, but it seems that's not enough. -rw-rw-rw- 1 nobody nogroup 27272 May 26 18:51 ichsize.out The owner of ichsize.out is nobody, because that file is created by the Samba server, serving a [projects] directory location like this: [projects] comment = VS2019 Linux-dev project output path = /home/chj2/projects browseable = yes read only = no guest ok = yes create mask = 0666 #(everybody: read+write) directory mask = 0777 #(everybody: list+modify+traverse) hide dot files = no The Samba client accessed this share with guest identity, and requested creating the ichsize.out file. The system is a Raspberry Pi based on Debian version 11 (bullseye). Ubuntu 20.04 exhibits the same. So I'd like to know: how can I write my smb.conf so that any user on the RasPi can do chmod +x on that file?
If you don't need to worry about the user that owns the files in this share you can use the force user configuration setting to allow Samba users to run commands such as chmod. This will mean that all files will appear to be owned by the account connecting to the share (i.e. if Alice and Bob both connect to the share, Alice will see that she owns all the files, and Bob will also see that he owns all the files), but as a result anyone can run chmod. Example, assuming that shareuser is a valid user account on your Samba server, that sharegroup contains the set of users permitted to access this Share, and that /home/_share exists and is owned by shareuser with permissions of at least 0700: [Share] comment = Everyone owns these files path = /home/_share browseable = yes read only = no guest ok = no force user = shareuser valid users = "@sharegroup" ; vfs objects = acl_xattr recycle catia Or one that I haven't tested, which allows for guest users: [Share] comment = Everyone owns these files path = /home/_share browseable = yes read only = no guest ok = yes force user = shareuser In a domain joined context, it's even possible to have Samba act on files with true Windows ACLs and ownerships. For example, in the Windows world it's possible for a group to own files and have permissions to change access rights, etc. Seeing as you have guest ok = yes in your context I suspect this isn't relevant, but I'm mentioning it for potential future readers. On the other hand, if you really do mean, "how can I write my smb.conf so that any user on the RasPi can do chmod +x on that file" [my italics for emphasis] then you should know that the smb.conf configuration file is irrelevant for users on the Pi itself. Local UNIX/Linux controls apply to users on the Pi and thus you cannot run chmod on files that you don't own.
What is the requirement to execute chmod +x? 'rw' is not enough!
1,334,009,265,000
I am just curious because for ordinary user scripts I must check if the corresponding file is readable by the user and writable, like this (just a snippet for the read operation): if ! [ -r "$1" ]; then dump_args "$@" print_error__exit "! [ -r $1 ]" "The file is not readable by you!" fi My question is: is there, on a typical Linux system (say Mint or Debian), any file that is not readable or not writable by root? My belief, in general, is that root can do anything, but everything has limits, right? Which is why I am asking this question. Thank you.
Many files under /proc are not writable, and there are files under /sys that are writable but not readable. Examples: echo something > /proc/$$/cmdline cat /sys/block/sda/device/delete (Be careful: echo > /sys/block/sda/device/delete will "detach" sda from the system) This is so because /proc and /sys are special filesystems. It has nothing to do with the usual file permissions. There are other examples: read-only filesystems such as DVDROMs filesystems that are mounted read-only networked filesystems where root is not mapped to a high-privilege user, e.g. NFS with root squash device files for devices that only allow reading or writing
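Both points are easy to observe with harmless read-only commands (a sketch):

```shell
ls -l /proc/self/cmdline    # mode -r--r--r--: no write bit at all
ls -l /proc/self/status     # reported size is 0...
wc -c < /proc/self/status   # ...yet reading it yields real content,
                            # generated on the fly by the kernel
```

The zero size is another hint that these aren't ordinary files: the kernel synthesizes the contents at read time rather than storing them.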
Is there any file on a Linux system that is not readable or not writable by root?
1,334,009,265,000
I know how to set read/write/execute with three numbers from the Arch wiki's file permissions page. I also know that chmod 7777 will set the setuid, setgid, and sticky bits to true. But which number controls these bits? Is it the first or the last number? Which value is set to true when a 1, 2, or 4 is passed as this value? Edit: I have added this information to the Arch wiki for future reference. My question is much smaller in scope than the proposed duplicate.
From the chmod man page: A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Any omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and sticky (1) attributes. The second digit selects permissions for the user who owns the file: read (4), write (2), and execute (1); the third selects permissions for other users in the file’s group, with the same values; and the fourth for other users not in the file’s group, with the same values.
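So it is the first (leftmost) of the four digits that carries the special bits: 4 for setuid, 2 for setgid, 1 for sticky. A quick demonstration in a scratch directory (a sketch):

```shell
tmp=$(mktemp -d); touch "$tmp/demo"

chmod 4755 "$tmp/demo"; stat -c '%A %a' "$tmp/demo"   # -rwsr-xr-x 4755 (setuid)
chmod 2755 "$tmp/demo"; stat -c '%A %a' "$tmp/demo"   # -rwxr-sr-x 2755 (setgid)
chmod 1755 "$tmp/demo"; stat -c '%A %a' "$tmp/demo"   # -rwxr-xr-t 1755 (sticky)

rm -r "$tmp"
```

In the symbolic output the special bits appear as s in the owner/group execute slots and t in the others slot.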
How to control setuid, setgid, and sticky bits from chmod's numeric method? [duplicate]
1,334,009,265,000
The default permissions on Ubuntu (or even some BSD) distributions for the /etc/passwd file are 644. It is pointed out in questions like this that /etc/passwd is a sort of user database and it is convenient to make it universally readable. But this file may also contain (possibly) sensitive information about the users in the GECOS field. Shouldn't this information be protected anyway? Or is there another way (newer than GECOS) to store and secure this kind of data?
There are multiple newer ways to store this kind of data, including but not limited to LDAP and NIS. The question you have to ask is why there's private information in /etc/passwd in the first place.
/etc/passwd permissions and GECOS field
1,334,009,265,000
If I try to kill root's process or another user's process it says you can't do this. But I can do that while shutting the system down. Isn't this a security problem?
Those systems that let unprivileged users shut down the system usually only do it for users that are logged in locally, that is, users that have physical access to the machine and could for instance just as well pull the power chord or press the power button/switch. In that case, it's better to let them shut down the system so it can be done gracefully and so that we have a record of who triggered the shut down. Where the source of electrical power can be secured and access to the power button removed to regular users, it's generally possible to remove that possibility.
Why can we kill another user's process while shutting the system down
1,334,009,265,000
I created user small, added him to group kek and allowed that group to only read files in the user's home directory. Then I chowned all files to root:kek. However, small can still delete files in his home directory. Commands I ran: useradd -ms /bin/bash small groupadd kek usermod -a -G kek small chown -R root:kek /home/small/* chmod -R g=r /home/small/* Then when I try to remove a file: $ ls -l total 16 -rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile -rw-r--r-- 1 root kek 39 Jun 21 09:17 flag.txt -rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py -rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py $ rm flag.txt $ ls -l total 12 -rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile -rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py -rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py $ whoami small Why does this happen?
Whether a file can be deleted or not is not a property of the file but of the directory that the file is located in. A user may not delete a file that is located in a directory that they can't write to. Files (and subdirectories) are entries in the directory node. To delete a file, one unlinks it from the directory node and therefore one has to have write permissions to the directory to delete a file in it. The write permissions on a file determines whether one is allowed to change the contents of the file. The write permissions on a directory determines whether one is allowed to change the contents of the directory. Related: Execute vs Read bit. How do directory permissions in Linux work?
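This is straightforward to verify as an unprivileged user (a sketch; root bypasses these checks, so run it as a normal user):

```shell
tmp=$(mktemp -d)
touch "$tmp/flag.txt"

chmod 555 "$tmp"          # directory: read + search, but no write
rm -f "$tmp/flag.txt" 2>/dev/null \
  && echo "deleted" || echo "blocked"   # blocked: can't unlink the entry

chmod 755 "$tmp"          # give the directory write permission back
rm -f "$tmp/flag.txt" && echo "deleted"
rmdir "$tmp"
```

The fix for the question above is therefore to remove small's write permission on the directory itself, not on the files inside it.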
User can delete file with read permission only
1,334,009,265,000
I am trying to restore a database with the following command: $ sudo -u postgres pg_restore -C -d dvdrental dvdrental.tar [sudo] password for t: However, I am receiving the following error message: could not change directory to "/home/t/mydata/.../relation model/SQL/implementations/Implementations, i.e. relational database management systems/postgreSQL/general/web/postgresqltutorial/databases": Permission denied pg_restore: [archiver] could not open input file "dvdrental.tar": No such file or directory I was wondering why I can't change directory to the current directory with permission denied? File permission bits are: -rw-rw-r-- 1 t t 2838016 May 26 2013 dvdrental.tar Is it because one of its ancestry directory is not both readable and executable by any one? The file has many ancestry directories, and how can I verify that?
The current directory, and all its parent directories, have to be accessible for the postgres user, i.e. have the executable/searchable bit set for whichever owner/group/other permission applies on each directory when determining postgres’s permissions, or grant that permission using ACLs. To check the permissions, use namei: namei -l /path/to/directory See How to check if a user can access a given file? for details.
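If namei isn't available, the same walk up the tree can be scripted by hand (a sketch; the starting path is a placeholder, substitute your own):

```shell
p=/home/t/mydata    # hypothetical: the directory you want to check
while [ "$p" != / ]; do
  ls -ld "$p"
  p=$(dirname "$p")
done
ls -ld /
# Every line printed must grant x (search) permission to the user
# in question; the first one that doesn't is the blocker.
```

In this case, a missing x bit for the postgres user anywhere along the chain is enough to produce the "could not change directory" error.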
Why can't I change directory to the current directory with permission denied?
1,334,009,265,000
I just installed some software using apt-get, and its owner and group is "logger". Since I installed the software using sudo, why isn't the owner and group "root"? I am pretty sure about a year ago, I renamed user pi with the new name logger. Could this have caused it, and if so, why? michael@rp3:~ $ ls -l /usr | grep local drwxrwsr-x 12 root staff 4096 Dec 23 16:49 local michael@rp3:~ $ ls -l /usr/local total 32 drwxrwsr-x 2 root staff 4096 Dec 23 16:47 bin drwxrwsr-x 2 root staff 4096 Apr 10 2017 etc drwxrwsr-x 2 root staff 4096 Apr 10 2017 games drwxrwsr-x 2 root staff 4096 Apr 10 2017 include drwxrwsr-x 4 root staff 4096 Jun 4 2017 lib lrwxrwxrwx 1 root staff 9 Apr 10 2017 man -> share/man drwxrwsr-x 2 root staff 4096 Apr 10 2017 sbin drwxrwsr-x 7 root staff 4096 Dec 23 15:20 share drwxrwsr-x 2 root staff 4096 Apr 10 2017 src michael@rp3:~ $ sudo apt-get install test-client Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: test-utils The following NEW packages will be installed: test-client test-utils 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/1,575 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue? [Y/n] y WARNING: The following packages cannot be authenticated! test-utils test-client Install these packages without verification? [y/N] y Selecting previously unselected package test-utils. (Reading database ... 41030 files and directories currently installed.) Preparing to unpack .../test-utils_0.1.1-jessie_armhf.deb ... Unpacking test-utils (0.1.1-jessie) ... Selecting previously unselected package test-client. Preparing to unpack .../test-client_0.1.2-jessie_armhf.deb ... Unpacking test-client (0.1.2-jessie) ... Setting up test-utils (0.1.1-jessie) ... Setting up test-client (0.1.2-jessie) ... 
michael@rp3:~ $ ls -l /usr/local total 40 drwxrwxr-x 6 logger logger 4096 Dec 23 16:49 test-client drwxrwxr-x 3 logger logger 4096 Dec 23 16:49 test-utils drwxrwsr-x 2 root staff 4096 Dec 23 16:49 bin drwxrwsr-x 2 root staff 4096 Apr 10 2017 etc drwxrwsr-x 2 root staff 4096 Apr 10 2017 games drwxrwsr-x 2 root staff 4096 Apr 10 2017 include drwxrwsr-x 4 root staff 4096 Jun 4 2017 lib lrwxrwxrwx 1 root staff 9 Apr 10 2017 man -> share/man drwxrwsr-x 2 root staff 4096 Apr 10 2017 sbin drwxrwsr-x 7 root staff 4096 Dec 23 15:20 share drwxrwsr-x 2 root staff 4096 Apr 10 2017 src michael@rp3:~ $ cat /etc/passwd | grep 'apt\|logger\|root\|michael' root:x:0:0:root:/root:/bin/bash michael:x:1001:1001:,,,:/home/michael:/bin/bash _apt:x:109:65534::/nonexistent:/bin/false logger:x:1000:1000:,,,:/home/logger:/bin/bash michael@rp3:~ $ cat /etc/group | grep 'apt\|logger\|root\|michael' root:x:0: michael:x:1001: wireshark:x:114:michael logger:x:1000: michael@rp3:~ $ sudo cat /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # Please consider adding local content in /etc/sudoers.d/ instead of # directly modifying this file. # # See the man page for details on how to write a sudoers file. 
# Defaults env_reset Defaults mail_badpass Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL:ALL) ALL michael ALL=(ALL:ALL) ALL anton ALL=(ALL:ALL) ALL # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL # See sudoers(5) for more information on "#include" directives: #includedir /etc/sudoers.d michael@rp3:~ $ ls -l /etc/sudoers.d total 8 -r--r----- 1 root root 27 Oct 18 2016 010_pi-nopasswd -r--r----- 1 root root 958 Jan 11 2016 README michael@rp3:~ $ sudo cat /etc/sudoers.d/* pi ALL=(ALL) NOPASSWD: ALL # # As of Debian version 1.7.2p1-1, the default /etc/sudoers file created on # installation of the package now includes the directive: # # #includedir /etc/sudoers.d # # This will cause sudo to read and parse any files in the /etc/sudoers.d # directory that do not end in '~' or contain a '.' character. # # Note that there must be at least one file in the sudoers.d directory (this # one will do), and all files in this directory should be mode 0440. # # Note also, that because sudoers contents can vary widely, no attempt is # made to add this directive to existing sudoers files on upgrade. Feel free # to add the above directive to the end of your /etc/sudoers file to enable # this functionality for existing installations if you wish! # # Finally, please note that using the visudo command is the recommended way # to update sudoers content, since it protects against many failure modes. # See the man page for visudo for more information. # michael@rp3:~ $
apt-get, or rather dpkg, installs package contents using whatever user is recorded as owning the various files in the package. This is typically root:root, but can be anything; you’ll commonly see root:games in game packages, root:www-data for certain directories in web-server-related packages, etc. (Ownership and permissions can also be set by maintainer scripts, but that’s usually not necessary.) If a package is created manually on a Raspberry Pi-style system, without paying too much attention to ownership (and not using fakeroot), it would perfectly be possible to end up with a package containing files owned by pi:pi, identified numerically. On your system, these would end up belonging to logger:logger. You can see the ownership information contained in a packages by using dpkg-deb -c.
What user does apt-get install software under?
1,334,009,265,000
$ ls sess.vim -lh -rw-r--r-- 1 root root 11K Feb 26 18:52 sess.vim I want this file to be readable for everyone and writable by no one (except by root). Thus I set its permissions to 644 and ownership to root:root. $ echo "text" >> sess.vim zsh: permission denied: sess.vim Seems fine. After some changes in vim I do :w! (force write) and the file is saved successfully. Now: $ ls sess.vim -lh -rw-r--r-- 1 MY_USERNAME users 11K Feb 26 19:06 sess.vim Wt.. Why? How?
Using :w! in vim is similar to the following: echo 'test' > sess.vim.temp mv sess.vim.temp sess.vim The mv command only cares about the directory permissions; the permissions of the file are not relevant. This is because you are modifying the directory, not writing to the file. To accomplish your goal, you will also need to adjust the permissions of the directory the file resides in.
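You can watch the rename succeed over a read-only file in a scratch directory (a sketch):

```shell
tmp=$(mktemp -d)
printf 'old\n' > "$tmp/sess.vim"
chmod 444 "$tmp/sess.vim"        # the file itself is read-only

printf 'new\n' > "$tmp/sess.vim.temp"
mv -f "$tmp/sess.vim.temp" "$tmp/sess.vim"   # works: only the directory
                                             # needs to be writable
cat "$tmp/sess.vim"              # new
rm -rf "$tmp"
```

This also explains why the owner changed: the replacement file was created fresh by your user, so it carries your ownership, not root's.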
Vim writes to file without having permissions [duplicate]
1,334,009,265,000
I'm installing minikube as part of the Dockerfile below: FROM jenkins/jnlp-agent-alpine RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && \ install minikube-linux-amd64 /home/jenkins/minikube -o jenkins -g jenkins -m 777 && \ rm minikube-linux-amd64 Once the image is built and run: $ docker build -t app:latest . $ docker run -it app:latest bash # the minikube binary exists bash-5.1$ ls -l minikube -rwxrwxrwx 1 jenkins jenkins 74953166 Jul 19 15:44 minikube # however, running the minikube binary returns a No such file or directory error: bash-5.1$ ./minikube bash: ./minikube: No such file or directory As part of debugging, I made the jenkins user the owner of minikube and set its permissions to 777, though it still didn't help. Why does the No such file or directory error appear, and how can I solve it?
The minikube binary is linked against the GNU C library, but your image is based on Alpine which uses musl. Running minikube fails because the dynamic linker it specifies (/lib64/ld-linux-x86-64.so.2) isn’t present. If you want to use minikube, you need to either find a musl-based build (or a static build), or switch to a base image which uses the GNU C library.
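You can see which loader a binary requests with readelf (from binutils, which may need installing; the ./minikube path is taken from the question):

```shell
# Which dynamic loader does the binary name in its ELF header?
readelf -l ./minikube | grep -i 'program interpreter'
# glibc builds typically request /lib64/ld-linux-x86-64.so.2;
# on Alpine that path doesn't exist, so exec fails with ENOENT,
# which the shell reports as "No such file or directory":
ls -l /lib64/ld-linux-x86-64.so.2
```

The confusing part of the error is that it refers to the missing interpreter, not to ./minikube itself, which is why the file clearly being present doesn't help.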
No such file or directory when running binary, though the binary exists
1,334,009,265,000
I've been using Linux for ages - I practically live in it, but I've never actually thought to ask why root always owns the .. directory. If you chmod the totality of a subdirectory structure to some other user root still owns ... Under the hood, why is that?
.. is the parent directory, so whoever own that owns ... If you run ls -ld .. in a subdirectory of your home directory, you should see that ..’s owner is yourself: cd ~/Desktop ls -ld .. Changing a hierarchy’s owner won’t change ..’s owner (looking from the top of the changed hierarchy) because .. is outside of the changed hierarchy. In your own home directory, you’ll see root as the owner of .. because .. is typically /home, and that’s owned by root: cd ls -lid .. /home (you’ll see that both have the same inode number).
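The identity of .. with the parent directory is visible from the inode numbers (a sketch):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/sub"
ls -lid "$tmp" "$tmp/sub/.."   # identical inode numbers: .. IS the parent
rm -r "$tmp"
```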
Why does root own the dot dot ".." directory?
1,334,009,265,000
I've created a remote mounted drive by adding this to my /etc/fstab: \\192.x.x.x\web /mnt/web cifs username=X,password=X,domain=X and mounting it with sudo mount /mnt/web (which works perfectly!) The problem is that I can only mount the drive as root. Running mount /mnt/web (without sudo) results in the error mount: only root can mount \192.x.x.x\web on /mnt/web I read this guide that suggests the following syntax //192.168.1.100/data /media/corpnet cifs username=johnny,domain=sealab,noauto,rw,users 0 0 When I change my entry to use this syntax like this: \\192.x.x.x\web /mnt/web cifs username=X,password=X,domain=X,noauto,rw,users 0 0 and run mount /mnt/web I get mount.cifs: permission denied: no match for /mnt/web found in /etc/fstab I then read this question along with it's highest voted answer, but the same error appears. I have checked that my web folder in the /mnt directory has CHMOD 775, which should be ok. What could be wrong?
UPDATE (see the discussion in the comments): You are typing \\ instead of //. For Linux you must use // even if the network file system is running inside Windows. The old post: You are writing mount /mnt/web, but the directory you wrote in /etc/fstab was /media/corpnet, so you need to write /mnt/web in /etc/fstab... So change /media/corpnet //192.168.1.100/data /media/corpnet cifs username=johnny,domain=sealab,noauto,rw,users 0 0 To /mnt/web: //192.168.1.100/data /mnt/web cifs username=johnny,domain=sealab,noauto,rw,users 0 0 Or if you can't edit fstab, change your command to mount /media/corpnet (and you must create this directory too). Good luck, and if that works, please select this as the correct answer.
Mouting a remote drive with cifs
1,334,009,265,000
I am concerned about the possibility that a normal user can delete important files like /etc/passwd or files in /boot. They can do it because the permissions on /etc and /boot are drwxr-xr-x. Should I worry about this or am I missing something? Thank you
No. The permissions that you see can be split into four components: type of entry, owner permissions, group permissions, and "all" permissions; "all" simply refers to anyone who is neither the owner or a member of the group. What the permissions mean depend on whether the entry is a file or a directory. A more thorough description of how permissions work is here. So, for this example: $ ls -dl /etc /etc/passwd /etc/shadow drwxr-xr-x 58 root root 4096 Feb 13 19:08 /etc -rw-r--r-- 1 root root 1887 Oct 11 21:49 /etc/passwd -rw-r----- 1 root root 970 Oct 11 21:49 /etc/shadow For /etc: d: the entry is a directory. rwx: the owner of the directory (root) has full permissions to view and modify (add/delete/rename) file entries, and change to ("cd") this directory. r-x: members of the group (also called root, but is not the same as the user called root) have permissions to view file entries and change to ("cd") this directory. r-x: everyone else has permissions to view file entries and change to ("cd") this directory. Note that having permission to read a directory does not mean that you can read the contents of individual files: that is what file permissions are for. Individual files work in a similar way, but the permissions refer to reading, writing and executing the file itself. For /etc/passwd: -: the entry is a regular file. rw-: the owner (root) can read and write to this file, but not run it directly from the command line. r--: members of the group (root) can only read this file. r--: everyone else can read this file. Originally the /etc/passwd file did have (encrypted) passwords in it, but that was judged to be a security risk so the passwords were moved to a "shadow" copy of the password file called /etc/shadow. It is only accessible by the root user and group (-rw-r-----): regular users cannot view it.
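stat prints the same fields without counting ls columns by eye (a sketch):

```shell
# Type+permissions, octal mode, owner:group, and name for each path
stat -c '%A  %a  %U:%G  %n' /etc /etc/passwd
```

The leading d or - in %A is the entry type; the remaining nine characters are the three rwx triplets described above.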
Normal user can delete important files
1,334,009,265,000
Hi guys! This is my first post here and, unfortunately, it's being made in very ugly circumstances! I'm running Debian Jessie x64 and an hour ago I installed some fonts for figlet into the /usr/share/figlet directory. I couldn't use them because I was unzipping them with sudo and they didn't have the proper permissions. Without giving it much thought I ran sudo chmod 644 .* Now nothing works (the browser, even Terminator can't find its icons, etc). I read a few minutes ago that the sudo working directory is the root's home and now I'm frankly panicking! I have two questions: what exactly did I do (did I change all the permissions on everything on my system to 644?) and how do I revert it, barring a full backup restore? I ran two searches looking for files and directories that were modified in the last hour, but that wasn't very productive. I googled for answers, but couldn't find an answer pertaining to the exact command that I ran, hence I'm posting a dedicated question: I want to know exactly what happened. I'm very curious about what I did because the newly-unpacked figlet fonts themselves were left with their permissions unchanged! Thank you very much in advance!
Assuming you're running Bash (with typical settings), running sudo chmod 644 .* from /usr/share/figlet would end up running sudo chmod 644 . .. (echo .* will display . ..). This is equivalent to sudo chmod 644 /usr/share/figlet /usr/share Fixing your system is straightforward: sudo chmod 755 /usr/share /usr/share/figlet
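You can preview what a glob will hit before handing it to chmod (a sketch; note that bash 5.2 and later enables globskipdots by default, which excludes . and .. from the expansion):

```shell
tmp=$(mktemp -d); cd "$tmp"; touch .hidden
echo .*    # historically expands to ". .. .hidden",
           # so "chmod 644 .*" hits the directory AND its parent
cd /; rm -r "$tmp"
```

Echoing a glob first is a cheap sanity check before any destructive chmod, chown, or rm.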
How to revert a "sudo chmod 644 .*"?
1,334,009,265,000
I've been thoroughly confused for quite some time with ls just plain refusing to work in some places, even though I have read permissions. After messing around a bit, it turns out that ls works fine, as long as I run it with --color=never, but as soon as I use auto or always, I get the familiar Permission denied error on everything where I lack execution permissions. What causes this and how can I stop it while keeping my ls output in color? Update: Okay, finally figured it out (as usual, directly after asking for help). You need execute permissions to enter directories, so cd and ls --color doesn't work on directories without it. I have no idea why I can still ls --color=never on directories without it though. Curious why that is?
To see the contents of a directory (the names of the entries) requires only read permission on the directory. That means you can run /bin/ls and see all the names without a problem. But to decide which color the names should be displayed with, ls uses other properties from the entries. It uses metadata from the file (permissions, size, filetype, etc.) This requires that it stat() the file, and that requires execute permission on the directory to succeed. Just the names of the files in a directory: you only need read permission. For metadata about the files in a directory: you need read and execute permission
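A scratch directory shows both behaviours side by side (a sketch; run it as an unprivileged user, since root bypasses the execute check):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/d"; touch "$tmp/d/file"

chmod 444 "$tmp/d"            # readable, but no execute (search) bit
ls --color=never "$tmp/d"     # names list fine: read permission suffices
ls --color=always "$tmp/d"    # Permission denied: coloring needs stat()

chmod 755 "$tmp/d"; rm -r "$tmp"
```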
Running `ls` with `--color=auto|always` needs execute permissions
1,334,009,265,000
I am installing Voipmonitor whose setup script has this step: sudo echo " * * * * * root php /var/www/html/php/run.php cron" >> /etc/crontab I am getting this error -bash: /etc/crontab: Permission denied The file permissions are: -rw-r--r-- 1 root root 51 Feb 15 04:45 /etc/crontab
The command does not work because sudo applies only to the command itself; the redirection is performed by the calling shell with the current user's permissions, so it is denied. In other words, echo runs as root, but the >> /etc/crontab append is opened by your unprivileged shell, outside of sudo. This will work: sudo /bin/bash -c '( echo " * * * * * root php /var/www/html/php/run.php cron" >> /etc/crontab )'
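A common alternative keeps echo unprivileged and lets tee, running under sudo, open the file for appending (a sketch of the same fix):

```shell
echo ' * * * * * root php /var/www/html/php/run.php cron' \
  | sudo tee -a /etc/crontab > /dev/null
```

Here the file is opened by tee, which does run as root; the > /dev/null just suppresses tee echoing the line back to the terminal.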
bash file permission error while appending to a file [duplicate]
1,334,009,265,000
-edit- what's even more curious is that if I chmod 777 /var/run/php-fastcgi/php-fastcgi.socket it works. If it's not www-data, php-www (nor root), then what user is trying to access the socket? :| -edit2- I added chown www-data:$FASTCGI_GROUP $SOCKET to the end of the script below (right after spawn-fcgi) and that solves the problem, but I'm confused: www-data is in the php-www group. Why must it be the owner? I didn't change FASTCGI_USER back to www-data because it would defeat the purpose (it would allow the PHP files to access all my files as www-data, which I don't want). Essentially what I wanted to do is have the PHP process not be www-data, so if it gets compromised its damage is limited to the very few PHP sites I have. What I did was create the user php-www and add its group to www-data. When I log in as www-data I can access everything in php-www; however, php-www can't access anything but my PHP sites. Perfect. I got php+nginx running. But changing it gives me a problem. I see www-data mentioned in an init.d script which changes the ownership of a folder. It's fine and I changed it to php-www. That's not a problem. The problem is the spawn script. #!/bin/bash FASTCGI_USER=php-www FASTCGI_GROUP=php-www SOCKET=/var/run/php-fastcgi/php-fastcgi.socket PIDFILE=/var/run/php-fastcgi/php-fastcgi.pid CHILDREN=6 PHP5=/usr/bin/php5-cgi /usr/bin/spawn-fcgi -s $SOCKET -P $PIDFILE -C $CHILDREN -u $FASTCGI_USER -g $FASTCGI_GROUP -f $PHP5 The user/group lines used to say www-data but now I changed them to php-www. I started php-fastcgi and nginx. When I visit my site I get a 502 bad gateway error. When I look in nginx logs I see this line: connect() to unix:/var/run/php-fastcgi/php-fastcgi.socket failed (13: Permission denied) while connecting to upstream Permission denied!?! Why!?! www-data does have the group php-www, and stat on the folder and socket shows owner and group php-www. I can access the PHP file as both php-www and www-data. Why am I getting a permission error?
And what am I doing wrong? In case you want to see my processes, # ps aux | egrep "php|www" shows:

www-data   548  0.0  0.1   1908   492 ?      Ss   18:08   0:00 /usr/sbin/fcgiwrap
www-data   586  0.0  0.1   1908   488 ?      Ss   18:08   0:00 /usr/sbin/fcgiwrap
php-www   1611  0.0  1.9  19312  5020 ?      Ss   18:20   0:00 /usr/bin/php5-cgi
php-www   1612  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
php-www   1613  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
php-www   1614  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
php-www   1615  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
php-www   1616  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
php-www   1617  0.0  0.7  19312  1856 ?      S    18:20   0:00 /usr/bin/php5-cgi
www-data  1776  0.0  0.6   5428  1684 ?      S    18:27   0:00 nginx: worker process
php-www   1967  0.0  1.9  19312  5020 ?      Ss   18:40   0:00 /usr/bin/php5-cgi
php-www   1968  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
php-www   1969  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
php-www   1970  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
php-www   1971  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
php-www   1972  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
php-www   1973  0.0  0.7  19312  1856 ?      S    18:40   0:00 /usr/bin/php5-cgi
root      2110  0.0  0.2   3300   736 pts/1  S+   18:55   0:00 egrep php|www
The socket probably isn't group readable and writeable.
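A minimal sketch of what "group readable and writable" means for the socket, using a plain scratch file as a stand-in (the real socket is created by spawn-fcgi, which also documents -M, -U and -G options for setting the socket's mode and ownership directly; check your version's man page):

```shell
# Stand-in for /var/run/php-fastcgi/php-fastcgi.socket
SOCKET=/tmp/php-fastcgi.socket.demo
touch "$SOCKET"
chmod 660 "$SOCKET"            # owner and group may read/write; others may not
stat -c '%a %U %G' "$SOCKET"   # mode, owner, group
```

With mode 660 and group php-www on the socket, any member of php-www (including www-data) can connect, without www-data having to be the owner.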
permission error with php/nginx and not using www-data
1,334,009,265,000
I'm using rsync to sync files from my NAS to a Remote Server. The files on the Remote Server appear to have the same user, group and permissions as the source. However rsync wants to copy the files anyhow. Here is the output from my NAS to my Remote Server:

root@omv:/share/Music# rsync -auHs --dry-run --progress --itemize-changes --numeric-ids -e ssh /share/Music/Bond/Born user@somedomain:/home/user/media/Music/Bond/Born
sending incremental file list
cd+++++++++ Born/
<f+++++++++ Born/01 Bond - Quixote.flac
<f+++++++++ Born/02 Bond - Winter.flac
<f+++++++++ Born/03 Bond - Victory.flac
<f+++++++++ Born/04 Bond - Oceanic.flac
<f+++++++++ Born/05 Bond - Kismet.flac
<f+++++++++ Born/06 Bond - Korobushka.flac
<f+++++++++ Born/07 Bond - Alexander The Great.flac
<f+++++++++ Born/08 Bond - Duel.flac
<f+++++++++ Born/09 Bond - Bella Donna.flac
<f+++++++++ Born/10 Bond - The 1812.flac
<f+++++++++ Born/11 Bond - Dalalai.flac
<f+++++++++ Born/12 Bond - Hymn.flac
<f+++++++++ Born/13 Bond - Victory (Mike Batt Mix).flac
<f+++++++++ Born/Folder.jpg
root@omv:/share/Music#

Here is the checksum on the NAS:

root@omv:/share/Music# sha256sum 'Bond/Born/'*
d510925c0cba8b01b4d95935248ef5b863e4ed9a7e8c7e537b3acbd18767c882  Bond/Born/01 Bond - Quixote.flac
5dd40480fdc1ca7d20bfa3696c99a8b918636be2af7e59101d4cd9b04726dd44  Bond/Born/02 Bond - Winter.flac
b0c1236caf10c1a4c04ee4cce16d9b2d47e6a5fecfcc8fabd513c585a4156039  Bond/Born/03 Bond - Victory.flac
4348e1502c36b0824742e4c47544efee2b3040626fdd94f937634b55c6736c1f  Bond/Born/04 Bond - Oceanic.flac
8e8bbf0817d1625a183547b31e60d9677bb454d18bb6c2b14ed7729a8acb4627  Bond/Born/05 Bond - Kismet.flac
19b0ea2b2f2ad45bb3cee3876037b238fa9a5915eccf44c766262de17085ff33  Bond/Born/06 Bond - Korobushka.flac
37fa58c31263aeb475ace4760b86d26fbf130a47bf80e51c61b4b2e0e003fa07  Bond/Born/07 Bond - Alexander The Great.flac
6ad69fe39d57b7be43538c36a674ed89492d805087319a386bb8ddda78ae364e  Bond/Born/08 Bond - Duel.flac
ae3de5d17b1ad56ce1d6ef7323532c015f863ab1548c198cdac41386e56c46d3  Bond/Born/09 Bond - Bella Donna.flac
ab85604b04ad72cc01a886d61b437354ee2eb058fe5d1188199c360445d4c926  Bond/Born/10 Bond - The 1812.flac
6babc0a824b42375efa5d584dd1662687022f9830d16c70504b2e1b86c17e71a  Bond/Born/11 Bond - Dalalai.flac
11f2ad493c5c0d8beee89a00e9502010421579cffc941fe3d1e72cdd70a5f12b  Bond/Born/12 Bond - Hymn.flac
b15ad119dfd14a76d5fd47ce5993a6ebccec10171f1e58f5238bf830f05a3134  Bond/Born/13 Bond - Victory (Mike Batt Mix).flac
537e3ab0f6cf762319e36b19a786951d507d66dc26ea6409f4000cac508a58ab  Bond/Born/Folder.jpg

Here is the checksum on the Remote Server:

user@10:~/media/Music$ sha256sum 'Bond/Born/'*
d510925c0cba8b01b4d95935248ef5b863e4ed9a7e8c7e537b3acbd18767c882  Bond/Born/01 Bond - Quixote.flac
5dd40480fdc1ca7d20bfa3696c99a8b918636be2af7e59101d4cd9b04726dd44  Bond/Born/02 Bond - Winter.flac
b0c1236caf10c1a4c04ee4cce16d9b2d47e6a5fecfcc8fabd513c585a4156039  Bond/Born/03 Bond - Victory.flac
4348e1502c36b0824742e4c47544efee2b3040626fdd94f937634b55c6736c1f  Bond/Born/04 Bond - Oceanic.flac
8e8bbf0817d1625a183547b31e60d9677bb454d18bb6c2b14ed7729a8acb4627  Bond/Born/05 Bond - Kismet.flac
19b0ea2b2f2ad45bb3cee3876037b238fa9a5915eccf44c766262de17085ff33  Bond/Born/06 Bond - Korobushka.flac
37fa58c31263aeb475ace4760b86d26fbf130a47bf80e51c61b4b2e0e003fa07  Bond/Born/07 Bond - Alexander The Great.flac
6ad69fe39d57b7be43538c36a674ed89492d805087319a386bb8ddda78ae364e  Bond/Born/08 Bond - Duel.flac
ae3de5d17b1ad56ce1d6ef7323532c015f863ab1548c198cdac41386e56c46d3  Bond/Born/09 Bond - Bella Donna.flac
ab85604b04ad72cc01a886d61b437354ee2eb058fe5d1188199c360445d4c926  Bond/Born/10 Bond - The 1812.flac
6babc0a824b42375efa5d584dd1662687022f9830d16c70504b2e1b86c17e71a  Bond/Born/11 Bond - Dalalai.flac
11f2ad493c5c0d8beee89a00e9502010421579cffc941fe3d1e72cdd70a5f12b  Bond/Born/12 Bond - Hymn.flac
b15ad119dfd14a76d5fd47ce5993a6ebccec10171f1e58f5238bf830f05a3134  Bond/Born/13 Bond - Victory (Mike Batt Mix).flac
537e3ab0f6cf762319e36b19a786951d507d66dc26ea6409f4000cac508a58ab  Bond/Born/Folder.jpg
user@10:~/media/Music$

All the checksums look OK, so why does rsync still want to copy the files to the remote server? TIA
The <s in that dry-run incremental output would indicate that the files would be sent, and the +s that they would be created anew. That indicates the files don't exist on the target.

Your target specification is user@somedomain:/home/media/Music/Bond/Born, but you show us:

user@10:~/media/Music$ sha256sum 'Bond/Born/'*

Where ~ represents your home directory. As you can double-check with echo ~, your home directory is unlikely to be /home. More likely it's something like /home/user, so the target specification should rather be something like user@somedomain:/home/user/media/Music/Bond/Born, or just user@somedomain:media/Music/Bond/Born, where we use a relative path on the remote server. As sshd lands you in your home directory, that would be a path relative to your home directory on the remote server.

Also, since the source specification (/share/Music/Bond/Born) doesn't end in a /, that's asking rsync to create a Born directory inside the target directory, so it will try to create /home/media/Music/Bond/Born/Born. Note this important paragraph in the rsync manpage:

A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name"[...]

So the command should look like:

rsync -auHs --dry-run --progress --itemize-changes --numeric-ids -e ssh \
    /share/Music/Bond/Born/ \
    user@somedomain:/home/user/media/Music/Bond/Born
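The tilde point is easy to check for yourself; ~ and a bare relative path in an ssh/rsync target both resolve against the remote $HOME, not against /home (the path below is illustrative):

```shell
# ~ expands to the current user's home directory, e.g. /home/user -- not /home
printf '%s\n' "$HOME"
# A relative path in an rsync/ssh target is resolved against that home directory
target="media/Music/Bond/Born"
printf '%s\n' "$HOME/$target"
```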
Can you tell me why rsync is saying my files are different
1,334,009,265,000
When I am trying to open crontab I see the following output:

ubuntu@macaroon:~$ crontab -l
crontabs/ubuntu/: fopen: Permission denied

When I add sudo it opens fine; however, the jobs don't work there:

ubuntu@macaroon:~$ sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
# omitted such info
* * * * * /usr/bin/env python3 /home/ubuntu/main.py date >> ~/main_script_cronjob.log

What is missing from this machine? How do I fix this behaviour? Other machines work fine with the regular crontab command without sudo. Here I have to do a workaround:

sudo crontab -u ubuntu -e

Then it opens the correct crontab for the ubuntu user.

UPDATE: Additional information on the crontab details:

ubuntu@macaroon:~$ ls -l /usr/bin/crontab
-rwxrwxrwx 1 root crontab 39568 Mar 23  2022 /usr/bin/crontab
ubuntu@macaroon:~$ sudo namei -l /var/spool/cron/crontabs/ubuntu
f: /var/spool/cron/crontabs/ubuntu
drwxr-xr-x root   root    /
drwxr-xr-x root   root    var
drwxr-xr-x root   root    spool
drwxr-xr-x root   root    cron
drwx-wx--T root   crontab crontabs
-rw------- ubuntu root    ubuntu

It is not a Docker container. It is a physical machine.
The crontab command has lost its permissions. For example, on my (Raspbian) system the permissions include setgid crontab (the s permission bit for the group):

-rwxr-sr-x 1 root crontab 30452 Feb 22  2021 /usr/bin/crontab

You can confirm this by running ls -l /usr/bin/crontab on your problematic system and adding the result to your question. Here's how you'd fix the missing setgid bit:

sudo chmod u=rwx,go=rx,g+s /usr/bin/crontab

(I prefer symbolic values: User=rwx, Group,Others=rx, Group+setgid. You could equally use the octal 2755.)

Remember also to tell us if you're trying to run this under Docker, as it often strips permissions by default.
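You can see exactly what that symbolic mode does by reproducing it on a scratch file (a temp file here, rather than touching the real /usr/bin/crontab):

```shell
f=$(mktemp)
chmod u=rwx,go=rx,g+s "$f"   # same mode as the fix above, i.e. octal 2755
stat -c '%A %a' "$f"         # -rwxr-sr-x 2755
```

The s in the group triple is the setgid bit: when crontab runs, it runs with the crontab group, which is what lets an unprivileged user write into /var/spool/cron/crontabs (mode drwx-wx--T root crontab, as shown in the question).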
Crontab doesn't open for user
1,334,009,265,000
I toggle my resolution with:

sudo xrandr --output eDP-1-1 --mode "3840x2400"

How do I make xrandr not require sudo? So, e.g., I can do:

xrandr --output eDP-1-1 --mode "3840x2400"

I am in the video group. I would not like to expose my password in a regular file, but am willing to make some kind of system script which automatically becomes root temporarily just for xrandr purposes. (The script will be called when I click a toggle button in a menubar, so in fact it will be a Python script which detects the resolution and toggles between 3840x2400 and 1920x1200.)

EDIT: I believe some strange circumstance caused xrandr to require being run as root. This may have been the nvidia-drm.modeset kernel parameter, or something like having to manually re-add modelines despite them existing. If you find yourself in this situation, I really don't know what to suggest; please feel free to comment. Somehow the requirement to use sudo went away with a reboot. I am accepting the top answer.
xrandr doesn't require sudo. If a user is logged in to an X session, they can run xrandr to query and change screen settings.

It's possible to set up /etc/sudoers such that a user (or users) can run a command as root without needing to enter a password. This is best used in combination with a short and simple wrapper script that does just the one thing that needs root and nothing else. Then allow that script to be run as root by the user with NOPASSWD:.

Wrapper scripts like this should, preferably, take no arguments or other user-supplied input if at all possible. And if user input is unavoidable, the script should take the bare minimum necessary to do the required job and validate all input before using it (and, for better security, instead of checking for known-bad things and rejecting them, it should check for known-good things and reject everything else). E.g.:

#!/bin/sh
xrandr --output eDP-1-1 --mode "3840x2400"

Save this as, e.g., mode3840x2400.sh, make it executable with chmod, and add something like the following to /etc/sudoers (do not edit this file directly, use visudo):

username ALL = NOPASSWD: /full/path/to/mode3840x2400.sh

Alternatively, here's an example of a wrapper script that accepts an argument requesting either of your desired resolution settings while rejecting everything else. It defaults to 1920 if no argument is supplied.

#!/bin/sh
res=${1:-1}
case "$res" in
  1) xrandr --output eDP-1-1 --mode "1920x1200" ;;
  2) xrandr --output eDP-1-1 --mode "3840x2400" ;;
  *) echo "Error: unknown argument '$res'" > /dev/stderr; exit 1;
esac

BTW, as mentioned above, xrandr doesn't need sudo to work; any user logged in to X can run it. I'm only using it as an example of something that could be run as root.
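Since the question's end goal is a toggle, the wrapper's decision logic could look like the sketch below. The xrandr query is stubbed with a variable here; on a real system you'd fill it from something like xrandr output parsing (the exact parsing is an assumption, not something the answer above prescribes):

```shell
# Stand-in for the value queried from xrandr, e.g.:
#   current=$(xrandr | awk '/\*/ {print $1; exit}')
current="1920x1200"
case "$current" in
    1920x1200) next="3840x2400" ;;
    *)         next="1920x1200" ;;
esac
echo "$next"   # the mode to pass to: xrandr --output eDP-1-1 --mode "$next"
```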
How to make xrandr work without sudo?
1,334,009,265,000
I understood from this answer that after composer require, Drupal will harden file permissions. How come a program which is not Linux (in this case Composer, or Drupal) changes permissions for Linux? Isn't permission change a task for a human user (from a root or sudo account)?
Isn't permission change a task for a human user (from root or sudo account)?

No. And besides, there's hardly any concept of a "human" in computer software: humans always interact through hardware and various software components. Nothing is "special" about chmod or sudo.
How come a program which is not Linux changes permissions for Linux?
1,334,009,265,000
We have a bash script, run as the user "useradm", which runs commands like:

sudo su - platfrmapi; sh script.sh
sudo su - platfrmapi; cp script.sh script_2.sh

The user switching is happening perfectly, but the logs which are created as part of script.sh are owned by useradm instead of platfrmapi. Are we missing something?
Your commands are chained together with a semicolon, meaning they are performed independently! First, the sudo command runs and switches to the platfrmapi user; that process must eventually exit, which allows the second command to run, which is sh script.sh; that second command is still running as the original user, because the sudo su - command has exited.

What you seem to want is for sh script.sh to run as the platfrmapi user, so do this:

sudo -u platfrmapi sh script.sh

... assuming the useradm user has the correct sudo permissions to execute sh script.sh.

It would be more direct to ensure that the script is executable (chmod +x script.sh, and has the proper shebang line) and then execute it directly with:

sudo -u platfrmapi ./script.sh
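The "performed independently" point can be demonstrated without sudo at all: replace su - platfrmapi with any child shell and observe that whatever follows the semicolon runs in the original shell, not in the child:

```shell
inner=$(sh -c 'echo "$$"')   # PID of a child shell, standing in for "sudo su - platfrmapi"
outer=$$                     # PID of the shell that runs the command after the semicolon
echo "child shell: $inner, original shell: $outer"
```

The two PIDs differ: by the time the second command runs, the child shell (and with it the identity switch) is already gone.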
" sudo su - " is switching the user but files are getting created with parent id
1,643,140,252,000
I understand we use "umask" for setting different security levels:

umask value   Security level   Effective permission (directory)
022           Permissive       755
026           Moderate         751
027           Moderate         750
077           Severe           700

Can we set umask to 028 or any other value so that it includes a number greater than "7"?
The shell's umask command takes the permission mask as an octal number, in base 8. Base 8 only has the eight digits 0 to 7, unlike decimal (ten digits) or hexadecimal (sixteen). So, no, you can't use umask 028 in the shell, it doesn't mean anything. Of course the umask is just a pile of bits, a number, and it could be represented in decimal or hex too. E.g. in C code, 022 (octal) is the same number as 18 (decimal), so the system call to set umask to 022 could be written as umask(18). But given there's three permission bits (rwx), and three bits can represent eight different values, octal is a rather useful way to represent permission bits. Also, it may be useful to consider umask values for what they mean with regard to permissions granted to other users, instead of single-word descriptions.
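You can check the table's arithmetic directly: for a newly created directory, the effective mode is 777 with the umask bits removed (scratch directory below, using one of the table's values):

```shell
umask 027                          # one of the octal values from the table
mkdir /tmp/umask-demo.$$
stat -c '%a' /tmp/umask-demo.$$    # 750 = 777 minus the 027 bits
```

Each octal digit is three permission bits (rwx), which is exactly why no digit can exceed 7.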
Can "umask" variable include a number greater than "7"?
1,643,140,252,000
I mistakenly ran chmod -x * in my /home thinking I was in a subfolder (wrong terminal). Now I'm getting all kinds of errors from different applications. Is there a way for me to detect which files exactly were modded and revert them?
If you did this in your home directory, there’s a good chance it only affected directories; you can restore the permissions there by running chmod u+x */ This will give you execute (search) permission on all the non-hidden directories in the current directory.
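The reason `*/` is safe here is that the trailing slash makes the glob match only directories, so regular files keep whatever mode they already have. The accident and the fix, reproduced in a scratch directory:

```shell
umask 022
cd "$(mktemp -d)"
mkdir somedir; touch somefile
chmod -x somedir somefile      # re-create the accident: the directory loses its search bit
chmod u+x */                   # */ expands to somedir/ only -- somefile is untouched
stat -c '%A %n' somedir somefile
```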
Restore executable permissions to files
1,643,140,252,000
What is the situation where ls -l returns a list of subdirectories in the form below?

d????????? ? ? ? ? ? Subdirectory

Running tree on that directory returns "0 directories, 0 files", for example. The system seems to know the name of the subdirectory but cannot find it. Which missing link confuses ls?

Late note. On directories, thus not on files, see also: Linux local directory permissions as question-marks for non-root
You have read, but not execute/search, permission on the containing directory. Easy to reproduce with:

$ mkdir -p foo/bar; chmod -x foo; ls -l foo
ls: cannot access 'foo/bar': Permission denied
total 0
d????????? ? ? ? ? ? bar

On Linux and BSD, ls is able to tell that it's a directory from the d_type field of the directory entry, but not much more. That may also happen in other situations where ls is not able to access the actual inode, but only the directory entry which points to it (as when the file or directory inode has disappeared before ls was able to stat() it -- see this, or when it's an inaccessible mount point -- see this).
What does the output `d?????????` in `ls -l` mean? [duplicate]
1,643,140,252,000
I have a local SSD disk, which is mounted via /etc/fstab on my Ubuntu machine:

/dev/sdb2 /media/Store ntfs-3g rw,nosuid,nodev,default_permissions,umask=0002,uid=deniss,gid=deniss 0 0

I can read and write anything on that drive. Now I have installed nginx and php-fpm, added myself to the www-data group and www-data to my group, so I can edit files from both groups:

$ groups deniss
deniss : deniss adm cdrom sudo dip www-data plugdev lpadmin lxd sambashare
$ groups www-data
www-data : www-data deniss

The nginx user (www-data) can read and write all files on the mounted drive; the problem starts when it tries to chmod files on the drive:

$ sudo -u www-data chmod 644 test.txt
chmod: changing permissions of '/media/Store/file.txt': Operation not permitted

Not that I need www-data to chmod files, but there are local websites running on the drive, and chmod is sometimes integrated into libraries and frameworks and I cannot disable it.
NTFS is not a unix filesystem and is not capable of using unix ownership, groups, or permissions. When an NTFS filesystem is mounted on linux, ONE user and ONE group are used to simulate ownership of ALL files and directories, and ONE set of permissions is also used for all files/dirs on the NTFS mount. This is what the default_permissions,umask=0002,uid=deniss,gid=deniss part of your /etc/fstab entry is setting up.
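Because the mode is fixed at mount time, the only place to change what every file "reports" is the mount options themselves. A sketch of an fstab line using the question's UUID and ids; ntfs-3g also accepts separate fmask/dmask options if files and directories should differ (the specific masks here are illustrative, not anything the question requires):

```text
UUID=6F7C5E910607D747 /media/storage1 ntfs-3g uid=1000,gid=1000,dmask=0022,fmask=0133,auto,rw 0 0
```

With that, every directory appears as 755 and every file as 644; a chmod from a library or framework still can't change anything, but for typical web-app code that no longer matters.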
User within a group cannot chmod on local disk mounted as NTFS
1,643,140,252,000
On Linux, how do I allow Steam to download games into the root drive? Space on my home drive is very limited, but trying to create a new library folder in /usr throws the error "Drive is read-only". Specifics: I am on Pop!_OS dual-booting with Windows and I've set up the boot, home and swap partitions on my SSD with 2, 10 and 2 GB respectively, but have set the root partition to be on my HDD, which allowed me to allocate more space since Windows isn't on it.
The filesystem itself probably isn't read-only, although you can double-check that it's not mounted with ro in /proc/mounts. What's more likely is that, since /usr is owned by root and typically has 755/rwxr-xr-x permissions, you're simply not allowed to write directly inside that directory to create a new directory for your Steam library.

To allow you to do so, you need to first create a subdirectory that has write access for your user (/opt is generally a better place for these system-independent installations, but you can do it under /usr, too, if you want):

# as the user you run steam as
sudo mkdir /opt/steamlibrary
sudo chown "$(id -un)" /opt/steamlibrary

Now you should be able to create your Steam library under /opt/steamlibrary.
On Linux, how do I allow Steam to download games into the root drive?
1,643,140,252,000
I was installing Hadoop on my Ubuntu machine. This is my PATH now:

echo $PATH
/usr/lib/jvm/java-1.11.0-openjdk-amd64/bin:/home/miki/.local/bin:/opt/hadoop-3.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

If I try ll:

miki@miki:~$ ll
ll: command not found

My bashrc edit line:

echo 'export HADOOP_HOME=/opt/hadoop-3.2.0;export PATH=$HADOOP_HOME/bin:$PATH' > ~/.bashrc

The ~/.bashrc file now has only one line:

export HADOOP_HOME=/opt/hadoop-3.2.0

All previous scripts were deleted. It is also strange that the letters have changed color. Why?
By using ... > ~/.bashrc, you have replaced the content with just the echo output. So you removed all the other content of your .bashrc file.

You can recover the default .bashrc with:

cp /etc/skel/.bashrc ~/

Then run your command again, but make sure to use >> instead of > to append to the file instead of replacing it.
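The difference between the two redirection operators, which is exactly what caused the damage, on a scratch file:

```shell
f=$(mktemp)
echo "first line"  > "$f"    # > truncates the file before writing
echo "second line" >> "$f"   # >> appends, keeping existing content
echo "third line"  > "$f"    # truncates again: only this line survives
cat "$f"
```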
After the edit of ~/.bashrc file: ll command not found
1,643,140,252,000
Is there a way to restrict an application's access to the system time in Linux? I want to make the application launch as abstracted from the environment as possible. While you can restrict access to devices and the file system with permissions, it's not clear how to restrict access to the system clock, because this is not a standard system device.
If you want to deny access to the system time, you’d pretty much have to write a system call filter; nowadays that would be a seccomp filter. See Kees Cook’s simple tutorial, or for more complex requirements, libseccomp. You’d need to deny access to gettimeofday, clock_gettime, and time, at least; the details depend on whether you’re trying to deal with an adversarial application (where you might also want to deny access to exec, system etc. to prevent an application from running external applications — although seccomp filters are inherited so that might not matter —, and deny access to external, direct or indirect time sources such as the network or even file systems). If you want to give the application an artificial time, look at faketime and libfaketime.
Is there a way to restrict the application access to the system time in linux?
1,643,140,252,000
I am trying to make a folder and its files read-only so I do not accidentally delete them. I have run:

chmod -R 444 myfolder/

but when I then right-click on the folder and go Properties > Permissions, it is still showing as read and write. I also tested by modifying a file, and the modification succeeds. In addition, when I try to change the permission in the file manager GUI to read-only, it immediately flips back to read and write. I am under the impression that 4 means read-only access. Is this correct?

EDIT: I think my issue has to do with how the drive is mounted. Here is the fstab entry:

UUID=6F7C5E910607D747 /media/storage1 ntfs-3g uid=1000,gid=1000,umask=0022,auto,rw 0 0
Note: This post was made before the OP gave the additional info that he's using a Windows filesystem (NTFS) on a Linux machine. I was under the impression he's using a native Linux filesystem.

You need to set the read, write and execute flags for the owner, and the read and execute flags for the group, on myfolder. The execute flag is needed to enter the folder. Without it you get a "permission denied" when trying to cd myfolder as a user belonging to the group or others.

chmod 755 myfolder gives access to the group and others; chmod 750 myfolder gives access to just the group and locks others out.

Set the ownership to root and the group to users:

sudo chown root:users myfolder

Now, only root can create new files in myfolder, e.g. sudo touch mytest; the new file gets the owner root and the group root. To force new files to get the group users, you need to set the SGID bit on myfolder. This can be done in two ways, with equal results:

sudo chmod g+s myfolder    (adding the sgid bit)

or

sudo chmod 2755 myfolder   (same, plus setting user, group, others)

Doing ls -l should then show something like this:

drwxr-sr-x myfolder    # last x optional depending on your others setting

If you now sudo touch mytest2 in myfolder, mytest2 belongs to root and the group users, with the permission 644.

Existing files in myfolder would be treated like this:

cd myfolder
sudo chown root:users *
sudo chmod 644 *

1 = execute
2 = write
4 = read
read + write = 4 + 2 = 6

P.S.: You can replace root with any user, and users with any group.

Update, as requested by @Rastapopoulos, a further explanation:

Let's assume myfolder belongs to tom. When doing chmod -R 444 myfolder/, you set the folder, and all files within it, to read-only for the user (tom), the group, and others. So nobody would be able to enter the folder, not even tom (except root), because it's lacking the execute flag. Doing chmod 644 myfolder still doesn't let tom enter the folder.

The correct way is to set the read, write and execute flags for tom, and the read and execute flags for the group/others (executable flag = 1), e.g.:

chmod 755 myfolder    (only setting the permission on myfolder, not its files)

To change only the permission of the files in myfolder, but not the permission of myfolder itself, you'd do:

chmod 444 myfolder/*

But you probably still want to edit/write your files as owner/tom, so you'd rather do:

chmod 644 myfolder/*   (or 640)
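The SGID-directory behaviour described above can be checked on a scratch directory (native filesystem only; as the note says, none of this applies on NTFS):

```shell
umask 022
d=$(mktemp -d)
chmod 2755 "$d"        # rwxr-xr-x plus the setgid bit
stat -c '%A' "$d"      # drwxr-sr-x
touch "$d/newfile"
# the new file inherits the directory's group, not the creator's primary group
stat -c '%G %n' "$d" "$d/newfile"
```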
Setting an NTFS file to be read only from Linux
1,643,140,252,000
I'm trying to create a backup script that saves my thumbdrive as an image. I'm planning on having it automatic. I've always seen the thumbdrive listed as /dev/sdb, and created a script that will save it as a gzipped tarball. While trying to make a copy of it via dd, I noticed this error: dd: failed to open ‘/dev/sdb’: Permission denied I wondered if it was just a fluke, so I tried piping a cat command to dd and got this error instead: cat: /dev/sdb: Permission denied 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.000620078 s, 0.0 kB/s Of course, since I'm a superuser, I can sudo it out -- but that removes an element of the script being automatic. Why does this device need to have superuser permission if I'm not modifying it? Furthermore, is there a way to bypass this?
Why does reading a device require admin permissions?

Firstly, there are a couple of issues here:

1. Mounting the physical storage device and the partitions it contains.
2. Accessing and manipulating the files on it.

If the filesystem is permissions-based, e.g. ext2/3/4, then permissions are defined on a per-file basis.

In terms of why you would require the user to have special admin privileges to mount a device, there are a few reasons, which are more likely to apply to enterprise-type situations and less likely to apply to personal computing, although they can still be relevant.

It prevents reading in abusive programs. Once a disk is mounted, an entire collection of untrusted programs is potentially available for execution which can abuse the operating system. If you were administering that system, you could be more confident that wouldn't happen if casual users couldn't upload their own programs off their own disks.

Writing / saving sensitive secret data. If you had corporate secrets, or users' passwords on a system, and someone could connect an unauthorised storage device, they could make copies to it.

You can get around the sudo issue by running the entire script as sudo, and then using sudo to switch back to an ordinary user inside the script for the commands you don't want to run as root (yes, you can do that). E.g.:

file: script.sh

#!/bin/sh
# this dd command now works
dd if=<source> of=<target> bs=<byte size>
# a normal command you want to run as the "MathManiac" user
sudo -u "MathManiac" bash -c "touch foo.bar"

So you then run this script with:

sudo ./script.sh

Another quick and dirty, but generally not recommended, way is simply to put sudo in front of the command inside the script; when the interpreter encounters the sudo it will halt script execution and ask you for your sudo password before continuing.
Why does reading a device require admin permissions?
1,643,140,252,000
I have three users on a Red Hat 6 machine:

tiger
gourav
sourav

Users gourav and sourav are in the brother group. Now user tiger creates a directory tiger_gaurav and wants to give read and write permission to only the gourav user. When I try to give permission via the group, sourav also gets permission to access that directory. Please help me with this.
Using ACLs this can be done (Red Hat - Setting Access ACLs):

$ setfacl -m u:gourav:rwx,d:u:gourav:rwx,d:u:tiger:rwx,m:rwx path/tiger_gaurav

u:gourav:rwx — grants user gourav read, write, and execute on the directory
d:u:gourav:rwx — sets the default rule that will grant user gourav read, write, execute permission on files created in the directory
d:u:tiger:rwx — sets the default rule that will grant user tiger read, write, execute permission on files created in the directory
m:rwx — sets the mask; the effective permissions of the owning group and of all named users/groups are limited (ANDed) by this value

That said, creating a group is likely much simpler to maintain in the long run.
Give full permissions to the owner and only one other user
1,643,140,252,000
Can someone tell me what I'm missing here? I'm in a group that owns the "mediawiki" directory and all of its subdirectories, but I can't write to the folder for some reason. I'm connected to SSH, but I've tried re-authenticating to SSH and even rebooting the server.

[02.26.2016/10:50:59] myuser@wikiserver $ ls -la
total 16
drwxrwxr-x  4 www-data www-data 4096 Feb 26 10:45 .
drwxr-xr-x 13 root     root     4096 Feb 23 17:42 ..
drwxrwxr-x  2 www-data www-data 4096 Feb 23 18:20 html
drwxr-xr-x 15 www-data www-data 4096 Feb 26 10:25 mediawiki
[02.26.2016/10:50:59] myuser@wikiserver $ touch mediawiki/test.txt
touch: cannot touch ‘mediawiki/test.txt’: Permission denied
[02.26.2016/10:53:48] myuser@wikiserver $ groups myuser
myuser : myuser adm cdrom sudo dip www-data plugdev lpadmin sambashare

Any advice would be greatly appreciated. I simply want everyone in that group to be able to write to the mediawiki folder and its subdirectories.
The group www-data does not have write permissions on that folder, only the owner can write to that directory.
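A sketch of the corresponding fix on a scratch directory (on the real server this would presumably be a sudo chmod g+w on the mediawiki tree, which the answer implies but does not spell out):

```shell
d=$(mktemp -d)
chmod 755 "$d"       # reproduce the mediawiki directory's mode: drwxr-xr-x
chmod g+w "$d"       # give the owning group write permission
stat -c '%A' "$d"    # drwxrwxr-x -- the group write bit is now set
```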
User belongs to group, but can't write to folder that's owned by group
1,643,140,252,000
Is there any way to archive a folder and keep the owners and permissions intact? I'm doing a backup of some files, which I want to move using a usb-stick, which has a FAT filesystem. So the idea was to keep all this information and file setting within an archive. I know that the -p option for tar keeps the permissions, but still not the ownership.
tar's default mode is to preserve ownership and permissions on archive creation; I don't believe there's even an option not to store the data. When you extract an archive, if you're a normal user, the default is to use stored permissions minus the umask and set the owner to whoever's extracting; if you're superuser, the default is to use stored permissions and ownership verbatim. There are options to control how these metadata are restored on extraction (see the man page).
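A quick check of the store-and-restore behaviour; ownership restore needs root on extraction, but the permission bits can be verified as any user:

```shell
umask 022
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/secret.conf"
chmod 600 "$src/secret.conf"               # restrictive mode to preserve
tar -C "$src" -cf "$src.tar" secret.conf   # modes and ownership are stored by default
tar -C "$dst" -xpf "$src.tar"              # -p: restore the stored permissions
stat -c '%a' "$dst/secret.conf"            # 600
```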
How do I archive a folder keeping owners and permissions intact?
1,643,140,252,000
I am packaging one of my projects for Debian. The project expects a directory to be created at XDG_DATA_HOME or ~/.local/share/, where the data files will be kept. Now I am trying to create and feed the initial data using the postinst script shipped with the .deb package. The problem is that, since packages are installed as root, the directory is getting created as root, and the user who is installing it won't have write permission on it. My question is: how can I create the directory so that the user who is installing the package will have write permission on it and all subdirectories?
What you are asking does not make sense: the user doing the installing is always root. If you want new users to automatically have this file in their home directory, then you add it to /etc/skel. If an existing user does not have it, then the program needs to be capable of dealing with that, possibly by automatically creating it, possibly by copying defaults from /etc/skel, or perhaps /usr/share.
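If the program itself creates the directory on first run, as the answer suggests, the XDG lookup from the question is a one-liner; "myapp" below is a placeholder name:

```shell
# Resolve the data directory per the XDG spec, defaulting to ~/.local/share
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"   # myapp is a placeholder
mkdir -p "$data_dir"   # runs as the user, so ownership is automatically correct
echo "$data_dir"
```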
How to create a directory which will have access to the user who is installing a package?
1,643,140,252,000
I am trying to back up several servers' /etc by using rsync from another server. Here's a relevant snippet:

PRIVKEY=/path/to/private.key
RSYNC_OPTS="--archive --inplace --no-whole-file --quiet"
ssh -n -i $PRIVKEY root@${ip} "rsync $RSYNC_OPTS /etc 192.168.25.6::"'$(hostname)'

where ${ip} is the IP address of the server to be backed up, while 192.168.25.6 is the IP address of the server holding the backup. Everything went well, except for the /etc/{,g}shadow files on some servers. Since their permissions are 0000, rsync seems to not want to read them ("Permission denied (13)" errors). A quick check using the following:

ssh -i $PRIVKEY root@${ip} "cat /etc/{,g}shadow"

successfully dumped the files. Is there a way to get around this rsync limitation?

EDITs:
The backup server is Ubuntu 12.04. The servers to be backed up are a smorgasbord of Ubuntu, RHEL, OEL, and CentOS servers.
I've tried adding -t to the ssh options, and prefixing rsync with sudo, but still get the same errors.
File permissions do not really apply to root: Programs running as root can read and write files regardless of protection settings. (However, even root cannot execute a file unless one of the execute bits is set; it does not matter which one). That explains why cat can do it. But apparently, rsync runs its own check in order to rein in what might get copied. So it's not a "limitation" but intended behavior (not that it's any consolation to you). I say "apparently" because I haven't found any documentation for this behavior. (Apologies if I'm just restating your question! Not quite sure from your description.) If your problem is limited to the shadow files, I'd be inclined to add the call to cat to your backup script, and ignore the rsync error for these files. You could add logic to only sync the shadow files when they've changed, but really they're small enough that I wouldn't bother.
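A sketch of that cat fallback, wrapped in a function. The $PRIVKEY variable is carried over from the question; the IP address and destination in the commented call are placeholders, so treat this as illustrative rather than a tested recipe:

```shell
# Copy the mode-0000 shadow files via a remote cat (which works because root
# ignores read permission bits), then clamp the local copies back to 0000.
# PRIVKEY is assumed to be set as in the question.
backup_shadow() {
    ip=$1; dest=$2
    mkdir -p "$dest/etc"
    ssh -n -i "$PRIVKEY" "root@$ip" 'cat /etc/shadow'  > "$dest/etc/shadow"
    ssh -n -i "$PRIVKEY" "root@$ip" 'cat /etc/gshadow' > "$dest/etc/gshadow"
    chmod 000 "$dest/etc/shadow" "$dest/etc/gshadow"
}
# Example (placeholder host/destination):
# backup_shadow 192.168.25.10 "/backup/192.168.25.10"
```

Run after the main rsync pass, this keeps the backup complete while letting you ignore the rsync errors for those two files.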
How to backup /etc/{,g}shadow files with 0000 permission?
1,643,140,252,000
On my machine there are user accounts. My question is how to restrict access to one's user files, meaning no one else can access my files at all. How can I set up this restriction? And how many root users are possible on one Linux machine? Edit: I'm one of the users of my system, named TOM (my system actually has two users, TOM and JERRY). I installed all the packages as root using yum install <package name> Restriction: no one else may access the packages and files belonging to the TOM user
You can't run yum as a regular user like TOM: yum can only be used with root privileges, and once a package is installed with yum, every user can use it. But if you have a source tarball (for a package built with the GNU autotools) you can change the installation directory, like this:

./configure --prefix=/home/myusername/bin
make; make install

and put /home/myusername/bin in your PATH variable. With this solution the package is installed under your own home directory. If your file is just a binary or a text file, you can simply set the proper permissions. And if you want to go further and write a program which only works for a specific UID and USERNAME, you can check the shell's UID variable, because it's read-only and you can trust it. See the script below and put it in a check.sh file: (I wrote a shell script because this is the Unix & Linux forum)

#!/bin/bash
COUNTER=0
FLAG=0
declare -r MYUID=500
declare -r USNAME=Sepahrad
for i in $(cut -d: -f3 /etc/passwd)
do
    COUNTER=$(( COUNTER + 1 ))
    [ "$i" -eq "$MYUID" ] && TMP=$(head -n "$COUNTER" /etc/passwd | tail -n1 | cut -d: -f1) && [ "$TMP" == "$USER" ] && [ "$MYUID" -eq "$UID" ] && FLAG=1
done
if [ $FLAG -eq 1 ]
then
    echo "Hello $USNAME ($UID)"
else
    echo "You don't have permission to see this file, $USER ($UID)"
    exit 1
fi

This script only runs for the user which has UID=500 and the Sepahrad username. (You can change those values in an editor.) And if you don't want any user, even root, to see your source, you can compile it with the shc command:

shc -v -r -T check.sh
How to restrict one user files to another user
1,643,140,252,000
There is one user, let's call him B, and he needs read-only access to a file that is owned by another user, A. How can I grant user B read access to user A's file?
Standard UNIX permissions aren't quite that granular. You either need to, put A and B in the same group[*] and limit the group permissions to read only or use ACLs For #1, if this is the only file you care about, create a group called readerb, put user B into it, change the group ownership on the file (chgrp) to readerb, and then set the group permissions to read only (chmod). To be fair, user A doesn't even need to be in the group, assuming they still own the file. To do #2 you need to make sure your distribution supports ACLs, you have the ACL utilities installed, and your filesystems are mounted with ACL support. With that in place, you would use setfacl -m u:B:r thefile.b to give user B read access to thefile.b. [*] technically users A and B don't need to be in the same group; you could just put user B into the group, or use a group user B is currently the only member of.
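Option #1 rehearsed on a scratch file; the group name readerb is the example from above, and the group-management commands are shown as comments since they need root and an existing group:

```shell
# Group-based read-only sharing: owner keeps rw, the group gets r only.
# One-time setup (as root):
#   groupadd readerb
#   usermod -aG readerb B
f=$(mktemp)
# chgrp readerb "$f"          # hand the file's group to readerb (as root or owner)
chmod 640 "$f"                # owner rw-, group r--, others ---
stat -c '%A' "$f"             # -rw-r-----
rm -f "$f"
```

After that, any member of readerb can read the file but not write it, and everyone else is locked out.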
File Privileges
1,643,140,252,000
My task was to configure a directory so that users in a group could only delete files they own. I used chmod 1771 RandD, as suggested by lab instructions, to accomplish this. When running ls -l, the permissions were displayed as drwxrwx--t. I understand why there is a t at the end of the permissions, since the last 1 in chmod 1771 RandD is responsible for other permissions. However, what is the point of the first 1 if t is not displayed in the user's permissions section?
Some of the characters in ls -l’s output serve multiple purposes; this is what’s happening here with the last character in the permissions. t means that the file has the execute bit for others set, and the sticky bit set. If the sticky bit wasn’t set, you’d see x instead; if the execute bit wasn’t set, you’d see T. In chmod, all four digits serve different purposes: the first sets “special” bits (including the sticky bit), the second sets owner permissions, the third sets group permissions, and the fourth sets “other” permissions. See Understanding UNIX permissions and file types for details.
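A quick way to see all three renderings on a scratch directory:

```shell
# The final permissions character combines the others-execute bit and the
# sticky bit: x = execute only, t = execute + sticky, T = sticky only.
d=$(mktemp -d)
chmod 0771 "$d"; stat -c '%A' "$d"   # drwxrwx--x : execute, no sticky
chmod 1771 "$d"; stat -c '%A' "$d"   # drwxrwx--t : execute + sticky
chmod 1770 "$d"; stat -c '%A' "$d"   # drwxrwx--T : sticky, no execute
rmdir "$d"
```

The same doubling-up happens in the owner and group columns with the setuid and setgid bits, which render as s or S in place of x.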
Why is the sticky bit mentioned twice in chmod but only once in the ls output?
1,643,140,252,000
I've always heard that sticky bit should be only used with directories, and I understand that, but what is the meaning if it is applied to a file. If I set the sticky bit to a file and do an ls -la I see a capital T, but I don't know if this influences the behaviour of the file.
This is a case of RTFM From man 1 chmod: Restricted Deletion Flag or Sticky Bit The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp. For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit.
sticky bit on files and directories
1,643,140,252,000
I ran into an issue on my NAS where I mistakenly reset the permissions on the root file tree to 664. This ended up disrupting many of the NAS functions, like ssh and, after much debugging, logrotate. My question here is: is there a list of system tools that are known to be sensitive to file permissions? It would help me create a script that resets such systems to "canonical" permissions.
As @muru suggests in the comments, 664 permissions for a directory are not appropriate for any "system tools." If applied recursively, those perms are likely to break a large number of UNIX tools, not least of which the ability to search the PATH to find executables, etc., etc. mtree is a convenient tool to back up your permissions, so that if something happens you can restore them. As root, and with your permissions in a sound initial state, create a backup: # mtree -c -p / > /var/backup/root-mtree.out -c ... Create an mtree specification -p ... Begin from path "/" To restore your directory permissions to what was backed up: mtree -u -p / < /var/backup/root-mtree.out -u ... update permissions/datestamp/flags/owner/group of entries If you're facing this problem without having had the foresight to create an mtree backup first, then you may want to: Determine whether you do have a backup of your filesystem elsewhere, such as in a zfs or btrfs snapshot. If so, you could take an mtree "impression" from the backup filesystem, and use that to restore your main filesystem. Create a separate installation of whatever OS on any available media (thumb drive, small hard disk on a spare computer, etc.) and again, use that as a reference to see what the permissions should be. This base install may not contain all of the custom-created directories your production installation has (had), but having the base OS directories and file set correctly should go a long way toward restoring some basic functionality to your production system and better permit you to suss out what the remaining fixes are.
List of systems that are sensitive to file permissions
1,643,140,252,000
So I just bought a new Samsung T7 Portable SSD. I initially intended to format it to exFAT, for use with both Windows, MacOS and Linux, but upon inspection, the disk comes with a default file system of HPFS/NTFS/exFAT. I didn't know that was a thing, but I decided to test it out. To test it out, I simply copied a few ASCII text files to the disk, but regardless of method for copying, and file extension, they all get the executable flag set. I don't understand why. Why is it like this, and how can I avoid it? I want the files copied exactly as they are. Complete output showing changed permissions. user@ubuntu:~$ echo "test text file" > test.txt user@ubuntu:~$ echo "test test test" > test user@ubuntu:~$ echo "print('test')" > test.py user@ubuntu:~$ user@ubuntu:~$ ls -l test* -rw-rw-r-- 1 user user 15 July 18 01:20 test -rw-rw-r-- 1 user user 14 July 18 01:20 test.py -rw-rw-r-- 1 user user 15 July 18 01:20 test.txt user@ubuntu:~$ user@ubuntu:~$ mkdir /media/user/T7/testdir user@ubuntu:~$ cp test /media/user/T7/testdir/ user@ubuntu:~$ rsync test.txt /media/user/T7/testdir/ user@ubuntu:~$ rsync -a test.py /media/user/T7/testdir/ user@ubuntu:~$ user@ubuntu:~$ ls -l /media/user/T7/testdir total 384 -rwxr-xr-x 1 user user 15 July 18 01:23 test -rwxr-xr-x 1 user user 14 July 18 01:20 test.py -rwxr-xr-x 1 user user 15 July 18 01:23 test.txt Here you can see I've tried both cp, rsync and rsync -a, but they end up as executables every single time. Why? Edit: I tried doing exactly the same to a WD HDD that comes with NTFS by default. There, the files get the 777 permission (rwxrwxrwx). Does it have something to do with the disk itself? Clearly my knowledge is lacking here.
HPFS/NTFS/exFAT is a partition type. It claims the partition contains one of the named filesystem types, but that does not have to be the complete truth. Try lsblk -o +FSTYPE or look into /proc/mounts while the partition is mounted to see the actual filesystem type. Anyway, HPFS is unlikely, so the SSD most likely is already formatted with either an NTFS or exFAT filesystem. In terms of use with Linux, both these filesystem types lack a certain property: they don't support Unix-style ownership/group/permissions information. NTFS has ACLs which could be used to implement Unix-style ownerships and permissions; it could even support Linux's ACLs if necessary. But before it can do that, the Linux NTFS driver needs a conversion table between Unix style user and group IDs (UIDs and GIDs, basically just simple numbers) and Windows-style security IDs (SIDs: long strings of groups of numbers separated by dashes). If this is not provided, the driver won't be able to know how it should record the file permissions information on the filesystem, and it falls back to working just like with a filesystem that cannot support the concept of users and permissions at all. exFAT is a filesystem designed for removable media: it is assumed that whoever physically possesses the media will be able to read everything stored on it anyway, so there is not much point for permissions. So like FAT32 and other filesystems in the FAT family, it has no real concept of file ownerships and permissions at all, and no way to store them. But Linux - or any Unix-like system - fundamentally requires that every file must be associated with some user and some group, and must have at least the classic set of user/group/other permissions, or a more complex ACL. All the system calls and operating system commands expect every file to have those. So if the filesystem does not support those, the filesystem driver needs to fake them.
For the purpose of providing fake ownerships and permissions when the filesystem has none, both the NTFS-3G and exFAT filesystem drivers support a set of mount options which you can use to define two sets of permissions: one for all files, and another for all directories. Without being able to store permissions information to the metadata of each file on the filesystem, that's all you can get. The difference between the WD NTFS HDD and the Samsung SSD indicates that the Samsung most likely already has an exFAT filesystem on it, and the exFAT and NTFS filesystems simply have different default settings for faking the permissions... or the NTFS HDD has an ACL on its root directory that would be expressed in Windows as Everyone - Full Control, configured to be inherited by any new file or sub-directory. Since "Everyone" in Windows is a globally-defined standard SID, it's one of the very few SIDs the Linux NTFS driver will be able to understand by default.
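As an illustration of those mount options (the device name /dev/sdb1 and mount point are assumptions; this is a sketch, not a definitive recipe): fmask and dmask are bitmasks of permissions to remove from 0777, so the masks below give files 0644 and directories 0755, i.e. nothing spuriously executable:

```shell
# fmask/dmask are masks of *removed* permission bits, applied against 0777.
file_mode=$(printf '%o' $(( 0777 & ~0133 )))   # 644: rw-r--r--
dir_mode=$(printf '%o'  $(( 0777 & ~0022 )))   # 755: rwxr-xr-x
echo "files get $file_mode, directories get $dir_mode"
# The actual mount (assumed device and mount point, exFAT filesystem):
# sudo mount -t exfat -o uid=$(id -u),gid=$(id -g),fmask=0133,dmask=0022 \
#     /dev/sdb1 /mnt/t7
```

With such options the whole filesystem presents uniform, sane permissions, which is usually good enough for removable media.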
Files get executable when copied
1,643,140,252,000
My folder has ownership mike:adm and its permissions are 770 (note that zero). My user is mirko and is a member only of the mirko group. Imagine that root changes the ownership of a file inside the folder to mirko:mirko. My user cannot cd into the folder. What can my user do with the file? What can't it do, and why? I suppose I cannot delete the file, because I have neither 'x' nor 'w' on the folder. But can I read and write the file's contents?
You can not delete a file from a directory where you don't have write permission - assuming a normal unix-like file system. The important idea is that there are 3 things which are needed for a file:

The data blocks which hold the actual contents of the file.
The meta-data which holds the information about the file, such as the owner, the modification date, the permissions, and how to find the data blocks.
The name of the file and how to find the meta-data. Unix allows you to have more than one name for the same file.

A directory is pretty much just a list of pairs of (file name component, pointer to meta-data). When you "delete" a file all you are doing is removing the pair with the matching name component from the directory. When all the names of a file have been removed then the data blocks are available for reuse. From this it is clear that to "delete" a file you need to be able to alter the directory, which requires write permission on the directory. Note that removing a file, or adding an additional name to a file, does not require any permissions on the file itself; these are just operations on directories. Background The meta-data is stored in things called i-nodes. The "pointer to meta-data" stored in the directory is actually an index into the array of i-nodes stored on the disk. Usually these days the array is stored in a series of blocks split across the disk. You can use df -i to see how many entries are in use and the total number available.
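This is easy to verify with a scratch directory (note that root bypasses permission checks, so try it as a normal user):

```shell
# Removing write permission from the directory blocks deletion of a file in
# it, even for the file's owner; the file itself is untouched.
d=$(mktemp -d)
touch "$d/mine"
chmod 500 "$d"          # r-x------: can list and enter, cannot modify entries
rm -f "$d/mine" 2>/dev/null || echo "cannot remove: directory not writable"
chmod 700 "$d"          # restore write permission so cleanup succeeds
rm -r "$d"
```

The mirror image also holds: with write and execute on the directory you can delete a file you have no permissions on at all, which is exactly what the sticky bit on /tmp exists to prevent.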
Can I delete a file I own in a directory which I cannot write?
1,643,140,252,000
I have a directory /var/opt/gitlab/backups with the following permissions: [user@localhost ~]$ sudo ls -la /var/opt/gitlab/backups total 1316296 drwx------. 2 git root 63 1月 21 21:44 . drwxr-xr-x. 21 root root 4096 1月 21 21:39 .. -rw-------. 1 git git 1347880960 1月 21 21:44 1642819457_2022_01_21_14.1.2-ee_gitlab_backup.tar Now the following command does not remove anything sudo rm -rf /var/opt/gitlab/backups/* While the following command removes the directory and everything inside sudo rm -rf /var/opt/gitlab/backups/ Also the following command will remove the specific file sudo rm -rf /var/opt/gitlab/backups/1642819457_2022_01_21_14.1.2-ee_gitlab_backup.tar It's only the file wildcard way does not work (which unfortunately is what I want) However what I want is only removing the files inside and not removing the directory. I suspect it's because of the permission settings but changing the permissions is not an option for me. The directory owner and permissions are set automatically by a third-party software and I would like not to mess around. Is there any way to achieve the "removing all files inside the directory but not the directory itself" effect?
Wildcards are expanded by your shell. In order for rm /var/opt/gitlab/backups/* to work, then you must have permission to list the contents of /var/opt/gitlab/backups/. Consider for example as a non-root user I run this command: $ echo /var/* /var/cache /var/db /var/empty /var/lib /var/lock /var/log /var/mail /var/run /var/spool /var/svc.d /var/tmp Then the shell expands the * to the list of non-hidden files in that directory, then echo prints those values. However, if I try to do the same thing with a directory that I don't have the ability to access: $ echo /root/* /root/* The shell doesn't have permission to enumerate the content, and therefore cannot expand the *. If you really must use the wildcard, then you can try: $ sudo /bin/sh -c 'rm -rf /var/opt/gitlab/backups/*' With that, you run a new shell (/bin/sh) as root. That shell will have permission to read the content of the directory and can expand * to the contents.
sudo rm -rf directory/* does not work with certain permission settings [duplicate]
1,643,140,252,000
Leaving aside the need for that, I wanted to write (create a file) into /sys/devices/pci0000:00/{one-of-the-devices}/. Running touch a returns touch: cannot touch 'a': Permission denied. (I read somewhere that giving write permissions to the given folder is not enough — if one of the parent folders in the hierarchy does not have write permissions. I tested that and it does not seem to hold true.) Anyway, I obviously tried using sudo and even impersonating the root user with sudo su root, but keep getting permission denied. Does this mean there are folders in the file system that only kernel space is allowed to write to (as opposed to user space)? Perhaps virtual file systems that the OS refreshes/writes to intermittently? Perhaps the folder is a link and I do not know?
Yes, most virtual file systems like /proc and /sys on Linux can't be used arbitrarily, because they don't store files; they provide access to objects internal to the kernel. So it's not that the OS refreshes or writes to these file systems intermittently: virtual file systems don't store data which is refreshed by the kernel; reading from and writing to a virtual file system results in reading from and writing to data in the kernel. New directories and files appear in /proc and /sys when new underlying data structures are added; trying to create directories and files there yourself is meaningless.
Are there specific folders in the file system that cannot be written to?
1,643,140,252,000
I have an external hard-drive formatted as NTFS which I use to back-up and store files from both Linux and Windows (as I am dual-booting). I recently bought a new computer and installed Linux Mint 20 on it, and I would like to copy some of the files from my back-up to my computer's internal HD. I noticed that every single file in every single subfolder I copied from the hard-drive has had the option Allow executing file as program enabled in its permissions. How can I safely recursively run through a directory and set all files in all subdirectories as non-executable (including hidden ones and in hidden folders starting with .)? Also, is there a way of preventing this to happen in a NTFS hard-drive or would I be better off creating two partitions on it, an EXT4 for Linux and a NTFS for Windows?
The answer is to use: chmod -R -x+X . The -x first removes the execute bit from every file and directory; the +X (capital X) then sets it again, but only on directories and on files that still have an execute bit set (at that point, none). The net effect is that directories stay traversable while regular files lose their execute bit. See chmod(1)
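A small rehearsal on a scratch tree, using the spelled-out a-x+X form (equivalent here, and it avoids the mode being mistaken for a command option):

```shell
# a-x strips every execute bit; +X then re-adds execute on directories only,
# since no regular file still has an execute bit at that point.
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/sub/file"
chmod 755 "$d/sub" "$d/sub/file"
chmod -R a-x+X "$d"
stat -c '%a %n' "$d/sub" "$d/sub/file"   # 755 for the dir, 644 for the file
rm -r "$d"
```

The ordering matters: chmod applies the clauses left to right per file, which is why +X no longer sees any executable regular files.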
Recursively setting all files in a directory as non-executable [duplicate]
1,643,140,252,000
We have some applications in various subdirectories that we limit access to by unix group, and we want them to be execute only. So, for example, if we want members of the "chem" group to have execute-only access to a.out, we have: $ ll -d /tmp/dir1 drwx--x--- 2 root chem 18 Oct 20 07:50 /tmp/dir1/ $ ll /tmp/dir1/a.out -rwx--x--- 1 root chem 8728 Oct 20 07:50 /tmp/dir1/a.out Now from the bash shell I (a member of the "chem" group) can add this directory to my path and execute a.out: $ export PATH=/tmp/dir1:$PATH $ a.out This is a test. . . From tcsh, however, I can't: $ set path = ( /tmp/dir1 $path ) $ a.out a.out: Command not found. I have to add group read permission to the parent directory for a tcsh user to be able to run a.out (bash works in both cases): $ ll -d /tmp/dir1 drwxr-x--- 2 root chem 18 Oct 20 07:50 /tmp/dir1/ After the chmod I log out and back in (under tcsh), then: $ set path = ( /tmp/dir1 $path ) $ a.out This is a test. . . We expected that read permission on the parent directory would not be needed. I've looked all over for documentation that would explain the apparent in difference in behavior between bash and tcsh in this case, but no luck. Can anyone explain this? By the way, I did these tests under SLES 12 and CentOS 7, and the behavior was the same in both cases.
bash searches every directory in $PATH (from first to last) every time you run a command, trying to run the command out of that directory. It either succeeds or fails. So if /tmp/dir1 is in your $PATH, and a.out is in that directory, you can try to run a.out - bash will try to run /tmp/dir1/a.out, and that succeeds because you have execute permission. csh is different. When you change (or set) the $path, csh builds a hash table of everything it finds in every directory in the path, and it remembers what order those directories appear in the path. By default, when you run a command (without using a full pathname), csh looks in its hash tables to see if it remembers that command being on the path somewhere (and if more than one directory in your path has it, it picks the first one it finds in the order listed in $path). If it doesn't find it in the hash table, then it assumes no such command exists, and you get an error. This was useful long ago, when the amount of time it took to walk down all the directories in the search path and look for the command was actually significant. It's not so significant anymore, but csh has always behaved this way, and it still does today. In order for csh to build its hash table, it needs read permission on all of the directories in $path, because it needs to be able to know exactly what executables live in which directories in the $path. Because you've denied read permission for /tmp/dir1, csh can't build the hash table for it, and it treats it as if there are no commands in that directory at all. That's why csh can't find your a.out file in a directory in the $path that it doesn't have read permission for. In order for csh to be able to find commands in directories on the $path that it has no read permission on, you have to turn off the hashing function. You do that by running the builtin command unhash.
Once you run that, the hash table function is turned off, and csh will fall back to just trying every directory in the search path to see if it can run the command out of it, just like bash does. Another side effect of this hashing behavior is that, if you add executables to directories that are on the csh search path, csh won't know about it. So suppose you put emacs in /usr/local/bin (which is on your search path). Then you try to run emacs, and ... Command not found. To resolve this (without turning off hashing), you run rehash which tells csh to go and build a fresh set of hash tables, upon which it will pick up anything new that's been added.
PATH behaves differently in bash and csh when directory is execute only?
1,643,140,252,000
Okay, so I have a laptop with Windows 10 and KDE neon dual boot. While trying to fix a problem (Linux root did not have enough space left) I accidentally moved /usr to /home. I tried to fix it but nothing I found online helped at all. I couldn't use sudo or su anymore, and to move /usr back to root I needed those permissions of course. And as if it couldn't get worse, my laptop died. So now my Linux can't even boot up anymore and I can't even access a terminal. My questions are: If my research is correct my only option is to re-install KDE neon. Is that the case or does anyone know of some other way to still save the system? If I do need to re-install I would appreciate a step by step guide - maybe someone knows a good one; I couldn't find any. Normally I get help from a Linux group at my university but they are not available right now, plus their knowledge is limited too. So please know that I myself have only very little knowledge of all of this, so please keep the answers as simple as possible. If there is already a question like this please link it as I obviously did not find it. Thanks in advance!
Use a live session to move it back from /home/ to /usr/. In /home/ there should only be user home directories, and on a desktop that usually means just one, so it should be easy to do. Even if you moved it to /home/$USER/ it's not a problem: that directory holds folders like Desktop and Downloads, so the stray usr is easy to identify.
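The repair itself is a single mv, rehearsed here on a scratch tree; the real commands from the live session are shown as comments (the device name /dev/sda2 is an assumption, check with lsblk first):

```shell
# Real fix, from a live session (device name assumed; check with lsblk):
#   sudo mount /dev/sda2 /mnt
#   sudo mv /mnt/home/usr /mnt/usr
#   sudo umount /mnt
# Rehearsal on a throwaway tree standing in for the mounted root:
mnt=$(mktemp -d)
mkdir -p "$mnt/home/usr/bin"      # /usr accidentally sitting under /home
mv "$mnt/home/usr" "$mnt/usr"     # the actual repair step
ls -d "$mnt/usr/bin"
rm -r "$mnt"
```

Since mv within one filesystem only rewrites directory entries, this is fast and safe even for a huge /usr, provided /home and / are on the same partition.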
Accidentally moved /usr now kde is broken. Do I need to re-install?
1,643,140,252,000
I have just installed Mint 129.3. I am the only one on my computer system. I am trying to change the samba config file to rename my network. When I edit the file in terminal it will not let me save it. Says I need elevated privileges. I was under the impression that I was the sole admin on my system, I'm the only user. Thanks for your time and help.
By default system files are owned and protected for the root user (a special admin account). On many Linux distributions there are special commands to run a program as root. A few examples For terminal-based applications use sudo sudo vim your_filename.txt For graphical applications it's a little more challenging On old Ubuntu variants you may be able to use gksu gksu gedit your_filename.txt Newer versions have a special prefix for accessing files as root, by putting admin:// before the full path gedit admin:///path/to/your_filename.txt
Admin privilege
1,643,140,252,000
I am getting this error: Warning: Identity file /dev/fd/63 not accessible: Bad file descriptor. when running this command: ssh -Y '[email protected]' -i <(cat << EOF -----BEGIN RSA PRIVATE KEY----- MIIEogIBAAKCAQEAgbUQXIfIWtMJpYcTn5C+LStaL8NICo/0l1V33IQ8pQADUk+Tq+cfotyiHrRl JXRyn8KJe8zmAQs7uSR3drVdj2KNFhXnFsEbXYxjAS93ZutO1Z2eBvvKcp/W8AoOr7r2JtTXaGml W18/0Fot83UcVRdqYI4CCv5hhYN7oGDYT94d8d0yFtuIhXf8IlkCgYEAkugROAktxuG1AgQ9KGP5 ......... a3ZAHHf5F2rn0oW0X5YNtEWqhGknYQkkiztqaWAPM4bAP7gpDIqYyqh81soqYHxxP9q2Ch634NPb BMmdZb9hMb/PY9bJNKwZt/yO7W0yq1zzjXFIqhymGDqkc/E4/K+V+svsDIV8VtainrY= -----END RSA PRIVATE KEY----- EOF ) nix-collect-garbage I am just trying to run the 'nix-collect-garbage' command on the remote machine. Perhaps the temporary file/fd has the wrong permissions? Is there a way to give it the right permissions? I assume it's a permissions problem with the process substitution but not sure how to resolve it.
ssh will close all its file descriptors, except for the standard in, out and err, before doing anything interesting, even before parsing its command line switches. So you cannot use process substitutions (or any shell features which are using the /dev/fd/ mechanism) to pass file arguments to -i or other options.
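A common workaround is a short-lived real key file with 0600 permissions instead of the process substitution (a sketch; the key material is a placeholder and the ssh invocation mirrors the one in the question):

```shell
# Write the key to a private temporary file, use it, then remove it.
keyfile=$(mktemp)            # mktemp creates the file with mode 0600
chmod 600 "$keyfile"         # make the requirement explicit anyway
cat > "$keyfile" << 'EOF'
-----BEGIN RSA PRIVATE KEY-----
...key material here...
-----END RSA PRIVATE KEY-----
EOF
# ssh -Y -i "$keyfile" [email protected] nix-collect-garbage
rm -f "$keyfile"
```

The 0600 mode matters in its own right: ssh refuses private keys that are readable by group or others.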
Cannot cat .pem file inline as -i option to ssh
1,643,140,252,000
I just discovered a mistake in the permissions setting of our system. It's kind of serious because it allows normal users to access something they shouldn't see. Currently the mistake has been fixed, but I wonder how many users have ever accessed these files. By "accessing", I mean reading from them, for example with vi (without saving), less, cat, cp, scp, ... One strategy I can think of is grepping through users' ~/.history files, but they could have deleted the relevant commands.
If you didn't have some sort of auditing mechanism in place at the time the file's permissions were wrong, it's pretty much impossible.  While your idea of searching the users' history files is not a terrible one (if you don't have concerns over the ethical and privacy issues), a simple grep won't find things like

cd (directory_where_file_is)
ls -l
less *

or cases where the user said vi (some_random_file), and then did :e (the_sensitive_file) from within vi.
Who have previously accessed a shared file?
1,643,140,252,000
I kind of know the answer, but what I want to know is the implementation details: Suppose root is logged in, and we have a file which has permissions 000. Then root can write and read that file. But my question is, how is this implemented? Does it somehow follow from the usual rules (owner, group, other; rwx) or does the system look at the user and say, ok, this is root, so he has privileged rights to read, write, execute any file? I am asking this because I am writing a rest-api application which mimics files and users from unix ( urls for example /galois/home/gauss are considered as unix directories, while for example urls as /galois/home/gauss/iris.pfa are considered as (executable) files in unix. For a little bit more detail, see the readme here: https://github.com/orgesleka/unix-acl-sql ). And now my question is, if I can somehow deduce from the usual acl of unix that root is privileged, or if I have to "hard-code" this?
You cannot deduce this using the ACL mechanisms. You have to encode bypasses. In some operating system kernel codebases you will find

if (suser())

or similar tests here and there. (Example from OpenBSD) This is a function to test that the effective user ID of the relevant process is zero. In other operating system kernel codebases you will find this replaced with checks against a set of bitflag privileges that a set of credentials possesses. (Example from FreeBSD) But the same consideration applies: the privileges, if possessed, bypass the access checks and there are explicit bypass checks that have to be coded. Both models are poor ones to be copying just for the sake of it, especially with WWW-facing code. Better to leave both ideas, that of a single distinguished magic user ID and that of magic flags that user accounts can possess, out of your design and out of your code. Similarly, the 3-bit permissions model is not a good one to be copying. Look at NFS-style ACLs, where flags do not do the double duties that they do in the old 3-bit model. They have been around for a couple of decades, now.
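The shape of such a bypass, reduced to a shell sketch: the can_read function and its three arguments are invented for illustration, and it only models the owner-read bit, but it shows why the root case must be hard-coded ahead of the ordinary checks rather than derived from them:

```shell
# Access check with an explicit privilege bypass, mimicking if (suser()).
can_read() {    # can_read <caller-uid> <owner-uid> <owner-perm-digit>
    caller=$1; owner=$2; perms=$3
    if [ "$caller" -eq 0 ]; then
        return 0                  # hard-coded bypass; not derivable from perms
    fi
    if [ "$caller" -eq "$owner" ] && [ $(( perms & 4 )) -ne 0 ]; then
        return 0                  # owner with the read bit set
    fi
    return 1
}
can_read 0 1000 0    && echo "uid 0: allowed despite mode 0"
can_read 1001 1000 6 || echo "uid 1001: denied"
```

Deleting the first if leaves a system with no privileged user at all, which is exactly the point: nothing in the permission bits themselves implies root's power.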
Is root a privileged user in unix?
1,643,140,252,000
I'm looking for a way to isolate a user from the system completely and only allow the user to log in to the system and interact with opened programs. One way I could achieve this is by stripping all permissions for every individual folder and/or file. My goal is to have the user log in to my system with access to only one Java executable, which the user can then start and interact with. Is there any better way of doing this? Thank you
You probably want to set these users' default shell to /bin/rbash -- restricted bash (bash -r). See man rbash. This is done with the command chsh -s /bin/rbash {USER}. To prevent them from executing commands other than the one you want, set their PATH environment variable to only the folder containing the executable you want them to run, in their ~/.bash_profile: export PATH="$HOME/bin" Edit: At this point, it is still possible for an attacker to break out of rbash. ssh allows running remote commands, so they can still do something like ssh ... cp /bin/bash bin, to add bash to the set of commands they can run. The problem is that when run directly from ssh, it seems bash (or rbash) does not read .bash_profile, nor .profile, nor /etc/profile. The link (2) proposes solutions to this problem. man sshd_config describes all the options which can be set in /etc/ssh/sshd_config. Apparently, the script ~/.ssh/rc (as well as /etc/sshrc, see link (4)) is run when we need it to be - whenever a user runs a command over ssh. But I haven't tested this. At this point, it may or may not still be possible to break out of rbash using scp. You'll also need to disable SFTP in /etc/ssh/sshd_config. This is done by commenting out the line: Subsystem sftp /usr/lib/openssh/sftp-server Links: (0) https://stackoverflow.com/questions/402615/how-to-restrict-ssh-users-to-a-predefined-set-of-commands-after-login (1) https://serverfault.com/questions/28858/is-it-possible-to-prevent-scp-while-still-allowing-ssh-access (2) https://serverfault.com/questions/133242/disabling-all-commands-over-ssh (3) https://access.redhat.com/solutions/65822 (4) https://docstore.mik.ua/orelly/networking_2ndEd/ssh/ch08_04.htm
Prohibit any interaction with system for user
1,643,140,252,000
This is on a Mac but I figure it's a Unixy issue. I just forked a Github repo (this one) and cloned it to a USB stick (the one that came with the device for which the repo was made). Upon lsing I notice that README.md sticks out in red. Sure enough, its permissions are: -rwxrwxrwx 1 me staff 133B 15 Jun 08:59 README.md* I try running chmod 644 README.md but there's no change. What's going on here?
Because the 'executability' of a file is a property of the file entry on UNIX systems, not of the file type like it is on Windows. In short, ls will list a file as being executable if any of the owner, group, or everyone has execute permissions for the file. It doesn't care what the file type is, just what the permissions are. This behavior gives two significant benefits: You don't have to do anything special to handle new executable formats. This is particularly useful for scripting languages, where you can just embed the interpreter with a #! line at the top of the file. The kernel doesn't have to know that .py files are executable, because the permissions tell it this. This also, when combined with binfmt_misc support on Linux, makes it possible to do really neat things such as treating Windows console programs like native binaries if you have Wine installed. It lets you say that certain files that are technically machine code can't or shouldn't be executed. This is also mostly used with scripting languages, where it's not unusual to have libraries that are indistinguishable in terms of file format from executables. So, using the Python example above, it lets you say that people shouldn't be able to run arbitrary modules from the Python standard library directly, even though they have a .py extension. However, this all kind of falls apart if you're stuck dealing with filesystems that don't support POSIX permissions, such as FAT (or NTFS if you don't have user-mappings set up). If the filesystem doesn't store POSIX permissions, then the OS has to simulate them. On Linux the default is to have read, write and execute permissions set for everyone, so that users can just do what they want with the files. Without this, you wouldn't be able to execute scripts or binaries off of a USB flash drive, because the kernel doesn't let you modify permissions on such filesystems per-file. 
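The #! mechanism described above is easy to see in action; a minimal sketch using /bin/sh as the interpreter and a throwaway path:

```shell
# Write a two-line script; the kernel reads the #! line to find the interpreter.
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo "hello from a script"
EOF

# A freshly created file has no execute bits, so the kernel refuses to run it:
/tmp/hello.sh 2>/dev/null || echo "not executable yet"

# Grant execute permission and run it again:
chmod +x /tmp/hello.sh
/tmp/hello.sh
```

No registration of the "script format" was needed anywhere — the permission bit alone made the file runnable.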
In your particular case, git stores the attributes it sees on the files when they are committed, and the original commit of the README.md file (or one of the subsequent commits to it) probably happened on a Windows system, where such things are handled very differently, and thus git just stores the permissions as full access for everyone, similarly to how Linux handles filesystems without permissions support.
Why would README.md show up as an executable?
1,643,140,252,000
The setuid concept appears to become gradually obsolete, being replaced by proper permissions and authentication models in the few cases it is still in use. Examples: ping has been converted to use capabilities, mount is being superseded by udisks, SSH can replace su and sudo. On the system I type these words, I found 18 binaries with the suid bit set, most of them variations over the task of switching the user context for another program (su, sg, pkexec, etc.) and mount (normal and fuse). None of them seem to be irreplaceable. For what tasks is the suid bit still required then?
Ping has moved from setuid root to setcap cap_net_raw. Moving from setuid root to setcap reduces the risks if the program exhibits unintended behavior. But that's the same basic mechanism: a process receives extra privileges based on metadata in the executable that it's instantiated from. The concept isn't obsolete, it's being refined. This is not a new phenomenon: there's a long trend of reducing the amount of privileges conferred to programs. Twenty years ago, executables setuid to a user other than root were common, but (without any change in the OS permission model) those have practically disappeared. Nowadays executables that require the permission to access extra files are made setgid, not setuid, so that if they are compromised, the attacker gains no more than the privileges the program has. The danger of a setuid executable is that if it is compromised, the attacker can replace the content of the executable by a Trojan that will compromise the account of anyone who executes the program. There is also a long trend towards not making programs setuid or setgid, but rather giving users the permission to execute them via sudo. Sudo has the benefit of finer-grained control (over who can execute the program, and over what arguments can be passed to the program), easier deployment on networks (simply deploy the configuration file /etc/sudoers, rather than having to be careful about permissions when installing, upgrading and deploying the program), and logging. Mount needs to be setuid root¹ to let non-root users mount partitions. It can indeed be replaced by other programs such as pmount (which itself needs to be setuid root) or a service-based mechanism such as udisks. There are really only two programs that absolutely positively must be setuid root: su and sudo. (And any other similar program.) Since they must be able to grant any privilege, they must have the highest privilege in the system. SSH cannot completely replace su and sudo. 
SSH can be used as a privilege escalation mechanism, but it isn't suitable for all circumstances. Everything in an SSH session goes through a tunnel. This is not always desirable: you can't pass file descriptors or shared memory via an SSH channel, there is a performance loss (SSH encrypts data, even when talking locally). Also, SSH is useless for a system administrator who wants to repair the system if the SSH daemon has crashed or has been misconfigured. ¹ or setcap sys_admin but there's no real differences between being root and what you can do indirectly with the sys_admin privilege — such as mounting a partition with a setuid root binary or mounting something over /etc or /bin.
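The finer-grained control sudo offers can look like this — a hypothetical /etc/sudoers fragment (the user name, device, and mount point are made up for illustration; always edit sudoers with visudo):

```text
# Let user "alice" run exactly this one mount invocation as root,
# and nothing else:
alice ALL = (root) /bin/mount /dev/sdb1 /mnt/backup
```

Because the arguments are part of the rule, alice cannot mount anything else or pass extra options, which is a level of control a plain setuid mount binary cannot express.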
What tasks still require the suid bit on modern systems? [closed]
1,643,140,252,000
I was trying to get the user nagios to have access to a subdirectory in another user's home directory, /home/alert/NagiosAlerts/. I was going crazy setting permissions that should have been more than high enough to get this to work (777-type stuff for testing) but was still getting permission denied on simple touch tests. I started writing up a question here to ask and got linked to this dupe target, which had the following in its answer: The directory needs to be searchable to allow users to enter it or its subdirectory "project" So that is what fixed my issue as well. I changed the group of the folder /home/alert and set the group permissions to --x, and I am now able to create files in /home/alert/NagiosAlerts/. Why did I have to assign those rights in the parent directory /home/alert? I would have figured the rights in /home/alert/NagiosAlerts would have been fine? If I touch /home/alert/NagiosAlerts/file, shouldn't only the permissions on NagiosAlerts matter? Why would alert matter?
You need to have permission to transit /home/alert in order to access /home/alert/NagiosAlerts. The executable bit gives that permission for a directory.
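A quick way to see this in action with throwaway directories (note: run it as a normal user — root bypasses these permission checks):

```shell
mkdir -p /tmp/demo_parent/child
echo data > /tmp/demo_parent/child/file

# Remove the execute (search) bit from the parent: it is still readable,
# but it can no longer be *transited* to reach anything inside it.
chmod 644 /tmp/demo_parent
cat /tmp/demo_parent/child/file 2>/dev/null || echo "blocked by parent"

# Restore the search bit and the child becomes reachable again:
chmod 755 /tmp/demo_parent
cat /tmp/demo_parent/child/file
```

The child's own permissions never changed; only the parent's x bit decided whether the path could be traversed.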
Why does the group need --x permission on the parent directory?
1,643,140,252,000
Hi, I'm having a real problem trying to run a script as root; the script is to use sudo to run wp-cli as the required user. The command I am using is sudo -u hstestsite1 -s "cd /home/hstestsite1/public_html; /usr/local/bin/wp plugin list" I receive /bin/bash: cd /home/hstestsite1/public_html; /usr/local/bin/wp plugin list: Permission denied However, if I su to that user and run the command, it all works fine su - hstestsite1 cd /home/hstestsite1/public_html; /usr/local/bin/wp plugin list +---------+----------+--------+---------+ | name | status | update | version | +---------+----------+--------+---------+ | akismet | inactive | none | 3.3.3 | | hello | inactive | none | 1.6 | +---------+----------+--------+---------+ Any ideas? Cheers.
When you give sudo -s a command string, sudo escapes special characters such as the semicolon before handing it to the shell, so the whole string gets looked up as one long command name. Instead, start a shell explicitly inside your sudo command and have that shell run the multiple commands, like so: sudo -u hstestsite1 -s sh -c "cd /home/hstestsite1/public_html; /usr/local/bin/wp plugin list"
sudo as user permission denied on command
1,643,140,252,000
debian8@hwy:~$ sudo cat /etc/sudoers |grep debian8 debian8 ALL=(ALL:ALL) NOPASSWD:ALL It means debian8 is a permitted user to execute a command as the superuser. I want to write something into a log file. debian8@hwy:~$ trafficlog="/var/log/traffic.log" debian8@hwy:~$ sudo echo -n `date "+%Y-%m-%d %H:%M:%S"` >> $trafficlog bash: /var/log/traffic.log: Permission denied debian8 has root permission, so why can't it write the date record into the traffic log?
Because the sudo command ends where the command does. Using implied parentheses, this is what you're doing (and let me add that you're doing it in a particularly convoluted way, but that aside): (sudo echo -n `date "+%Y-%m-%d %H:%M:%S"`) >> $trafficlog As you can see, you execute the echo command as root, but the redirection happens as debian8. What would work would be echo -n "$(date "+%Y-%m-%d %H:%M:%S")" | sudo tee --append $trafficlog
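Both working shapes — letting tee open the file itself, or pushing the redirection into a child shell — can be sketched with an ordinary throwaway path (substitute sudo tee / sudo sh -c when the target file is root-owned):

```shell
log=/tmp/traffic_demo.log
rm -f "$log"

# Shape 1: the pipe — tee opens the log file itself, so only tee needs
# the elevated privileges (sudo tee -a "$log" in the real case):
printf '%s\n' "$(date '+%Y-%m-%d %H:%M:%S')" | tee -a "$log" >/dev/null

# Shape 2: run the whole command line, redirection included, inside a
# child shell (sudo sh -c '... >> file' in the real case):
sh -c "date '+%Y-%m-%d %H:%M:%S' >> '$log'"

wc -l < "$log"
```

In both shapes the process that actually opens the file for appending is the one that would carry the elevated privileges, which is exactly what the original one-liner got wrong.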
why Permission denied when to write something into a log file? [duplicate]
1,476,395,110,000
To repair mongodb running on arch linux arm, the doc says that the specific command must be run with the same user as the one running the service, to avoid permission issues later on. The dbpath folder belongs to the mongodb user, so I guess it's the user running the service. How can I find the password to su it to issue the repair command? Or is there a better approach?
Service accounts are typically locked, i.e. there is no password that you can login with. If you are the administrator of the system, then become root by way of su or sudo, and as root issue su mongodb.
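A sketch of the usual moves, assuming the account is named mongodb (on many systems its login shell is /usr/sbin/nologin, so su needs -s to override it; the repair command and dbpath below are placeholders — check your own setup):

```shell
# Inspect the service account's passwd entry; the last field is its shell.
getent passwd mongodb || echo "no mongodb account on this machine"

# As root, switch to the account without any password, overriding a
# nologin shell (privileged — shown as comments):
#   su -s /bin/sh mongodb
# Or run just the single repair command as that user:
#   sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb
```

Running the one command via sudo -u is usually the cleaner approach, since no interactive shell for the service account is ever needed.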
How to su to a daemon user?
1,476,395,110,000
I recently had a terrible accident with filezilla which has literally cost me days to fix. In the course of 2 years of using Linux Mint (Ubuntu kernel) this is the third time I've managed to kill the system, have learnt something every time. Hardware has not yet caused a problem, more human error/ignorance. This time, it killed Windows 7 as well which sits on a physically separate hard drive in the case. I had mounted the Windows drive to access a file as the accident occurred and it somehow must have destroyed something in there quite badly. So yeah, I was just trying to update a website and saw all this stuff in the left panel of filezilla so pressed delete to get rid of it, thinking it was just some old folders from the previous time I updated the site and it was the whole file system! So it all froze, used xkill to close it and over the next hours ext4magic to recover some files which somewhat worked, but unfortunately Linux and windows both needed to be reinstalled. Was now at least able to put Linux on the bigger drive but don't want this to happen again. Why on Earth does a program like Filezilla which is there to move files to and from servers need to have the power to destroy a computer?! So have been wanting to set up things for safety. Created another user account called "admin_acc" from the gui and took away my user accounts admin rights. My user account is still a member of adm cdrom dip plugdev lpadmin sambashare . Is that too many for safety? Have installed safe-rm. Although the command cat /etc/safe-rm.conf lists all the 25 protected folders, including / and /home , the user-specific blacklist which is supposed to live in ~/.safe-rm does not exist! Do I have to simply create it myself? Installed safe-rm from the Mint installer wonder if it was the full version.. I'd like to have all the main folders and hidden folders of the home directory in that custom list. Am scared that that's not even enough though.. 
Apparently safe-rm just places a wrapper around rm and the full path /bin/rm still runs it as usual. Who's to know what command filezilla or some other program is going to execute in the background? Can see that applications as well as users can belong to groups as per this post. The goal is that nothing other than the superuser can delete any folders or hidden out of /home or /home/Documents or /home/pictures etc ever again.. What's the best way of achieving this, how many different measures need to be taken? No use just taking away filezilla's power need to make sure that no other application is going to be like a weapon sitting around waiting for someone to carelessly pull the trigger.. Would be much grateful for some expert guidance here so my fresh, clean install can run peacefully for a long time now.
Do not use FileZilla as root or to connect as a root user. A normal user usually cannot do what you have described (not without sudo or su or a really misguided sudoers config (e.g., NOPASSWD)). You need to understand that there are two sides as well. Say you are a normal user on host1 and use FileZilla to connect to host2, you can still destroy host2 if you connect as the root user. As long as you run programs as a normal user and connect to other hosts as a normal user, you usually can't do much damage outside of that user's home directory. If your website files are located somewhere else (e.g., /var/www), then you can either recursively chown those files or more appropriately, add the normal user to a group that can read/write to those files (e.g., www-data). The concept of user permissions is actually really simple, yet complicated to explain. I suggest investing a few hours to a day in research on the subject to get a basic understanding, learn about the 4-2-1 rule and such. Also, as you said yourself, you do not know what FileZilla is going to do ... you may want to consider using more basic tools like ssh and sftp. It pays to learn these things and you also have more power to restrict yourself (e.g., aliasing rm to rm -i may be good for you). P.S. I doubt filezilla actually killed your Windows partition from what sounds like a rm -rf /. Rather, it probably took out the bootloader.
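For the /var/www case, the group-based setup suggested above looks roughly like this (the paths, the group name www-data, and the user name are assumptions that vary by distro):

```shell
# Run once as root (privileged — shown as comments):
#   chown -R root:www-data /var/www    # the web group owns the site files
#   chmod -R g+rwX /var/www            # group may read/write files, enter dirs
#   usermod -a -G www-data youruser    # add your normal user to the group

# New group membership only shows up in sessions started after
# re-logging in; check the live set of the current shell with:
id -nG
```

After that, your normal user can edit the site over plain sftp without ever connecting as root.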
restricting rights of user account and applications for safety, protecting home directory contents from accidents [closed]
1,476,395,110,000
I'm trying to launch a small script which fixes a bug in iceweasel icons. Here is the script. You can find it as a workaround in the bug report for n in 16 32 48; do inkscape -z -w $n -h $n -e /usr/share/iceweasel/browser/chrome/icons/default/default${n}.png /usr/share/icons/hicolor/scalable/apps/iceweasel.svg; done for n in 16 32 48 64 128; do inkscape -z -w $n -h $n -e /usr/share/icons/hicolor/${n}x${n}/apps/iceweasel.png /usr/share/icons/hicolor/scalable/apps/iceweasel.svg; done I created a file tempiceweasel.sh with the few lines above. I gave it execute permission: # chmod +x tempiceweasel.sh # ls -la tempiceweasel.sh -rwxr-xr-x 1 user user 349 mars 9 16:33 tempiceweasel.sh When I launched the script I have permissions errors: # ./scripts/tempiceweasel.sh Nothing to do! ./scripts/tempiceweasel.sh: ligne 3: /usr/share/iceweasel/browser/chrome/icons/default/default16.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 4: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 3: /usr/share/iceweasel/browser/chrome/icons/default/default32.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 4: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 3: /usr/share/iceweasel/browser/chrome/icons/default/default48.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 4: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 7: /usr/share/icons/hicolor/16x16/apps/iceweasel.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 8: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! 
./scripts/tempiceweasel.sh: ligne 7: /usr/share/icons/hicolor/32x32/apps/iceweasel.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 8: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 7: /usr/share/icons/hicolor/48x48/apps/iceweasel.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 8: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 7: /usr/share/icons/hicolor/64x64/apps/iceweasel.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 8: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée Nothing to do! ./scripts/tempiceweasel.sh: ligne 7: /usr/share/icons/hicolor/128x128/apps/iceweasel.png: Permission non accordée ./scripts/tempiceweasel.sh: ligne 8: /usr/share/icons/hicolor/scalable/apps/iceweasel.svg: Permission non accordée It seems I don't have the right to write files in these directories. I don't understand why; I'm running as root and the permissions of these files are all like the below: -rw-r--r-- 1 root root 93 févr. 14 14:25 default16.png -rw-r--r-- 1 root root 325 févr. 14 14:25 default32.png -rw-r--r-- 1 root root 1845 févr. 14 14:25 default48.png Any ideas why I can't write these files?
The commands you copy-pasted were supposed to be single-line commands. Instead, they were broken into three lines each. So each command is the same as if you did: # for n in 16 32 48; do inkscape -z -w $n -h $n -e # /usr/share/iceweasel/browser/chrome/icons/default/default${n}.png # /usr/share/icons/hicolor/scalable/apps/iceweasel.svg; done So basically, in each step of the loop, it's trying to run inkscape, then it is trying to run the image file as an executable, then it is trying to do the same for the svg file. The reason you got errors about permissions is, of course, that the images don't have execute permission. The "nothing to do" came from inkscape, which was missing its parameters. The three lines should be all on the same line. Or the more appropriate way to write this, since you are writing a shell script rather than a single command, would be: for n in 16 32 48 do inkscape -z -w $n -h $n -e \ /usr/share/iceweasel/browser/chrome/icons/default/default${n}.png \ /usr/share/icons/hicolor/scalable/apps/iceweasel.svg done Note the backslashes at the end of the lines - they mean the following line is a continuation of the current one. The same applies to the second loop.
Impossible to write files with root credentials
1,476,395,110,000
I'm a new user of CentOS and now I want to create a new user of my system and let it can only access one directory. First I create a group named test. Then: useradd -g test -d /home/disk/disk1/testDir testuser The disk1 is a real disk which is mounted in disk1 folder. And now I can see the testDir folder and its ll output is : drwx------ 2 testuser test 4096 Jul 27 14:48 testDir And after I set the password and login with testuser by putty. It says: Could not chdir to home directory /home/disk/disk1/testDir: Permission denied The folder exists and owned by testuser. I do not understand why it got permission denied?
When you cannot access a directory even though you have the right permissions on it, the first thing you should check is your access rights on the parent directories: ls -ld /home/disk and: ls -ld /home/disk/disk1 You need at least execute permission on those directories to access their children.
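util-linux ships namei, which prints the permissions of every path component in one shot — handy for spotting exactly which parent is missing the x bit. A sketch against a throwaway path (falling back to ls -ld where namei isn't installed):

```shell
mkdir -p /tmp/na/deep

# One line per path component, each with its mode bits:
namei -l /tmp/na/deep 2>/dev/null || ls -ld / /tmp /tmp/na /tmp/na/deep
```

Run against /home/disk/disk1/testDir, any component lacking x for your user stands out immediately.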
Could not chdir to home directory when create and login a new user?
1,476,395,110,000
So I am a little bit confused about file permission categories. Here is what I do understand — correct me if I am mistaken. Every file has an owner (one user), denoted by the symbol u, and every file may have a group of users who own the file (is the owner usually the same as the group?); that group can have access to it, including the owner himself as a member of that group. And others refers to all the people who are not in the group and not the owner, right? So when dealing with permissions, is the user the owner? And does all (a) refer to the owner (u) + group (g) + others?
On a POSIX filesystem, every file has a user (the file's owner), a group, and permissions for the user, the group, and everyone else. For every user, access to a given file is determined as follows: if the user is the file's owner, the owner permissions apply; if the user is a member of the file's group, the group permissions apply; in all other cases, the other permissions apply. The order here is significant; thus you can have a file which is owned by you, with permissions 0077 (everything for the group and others, nothing for the owner), and you will not have access to it! But since you're the owner you can change that with chmod. This can be useful in some cases where you want to deny access to a specific group and allow anyone else access (think of a students group in an academic context). Strictly speaking permissions aren't matched to the end user, but to a process's effective user, which may be different (e.g. for set-uid binaries). There can also be other factors affecting groups, e.g. on NFS mounts. Permissions can be set with chmod (see Understanding UNIX permissions and their attributes for details), and one option is to use the letter you mention: u for the user permissions, g for the group permissions, and o for other permissions. Using a with chmod applies the permissions to all three categories. In modern systems other access permissions can apply on top of or instead of these permissions; look up ACLs (setfacl), SELinux etc.
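The owner-first matching order can be demonstrated with a throwaway file (run as a normal user — root's CAP_DAC_OVERRIDE skips these checks):

```shell
f=/tmp/owner_denied
touch "$f"

# Mode 0077: nothing for the owner, everything for group and others.
chmod 0077 "$f"
ls -l "$f"

# The owner class is checked first and it matches, so the generous
# group/other bits are never consulted; as a non-root owner this fails:
cat "$f" 2>/dev/null || echo "owner denied"

# Being the owner, we can still repair the mode with chmod:
chmod 0644 "$f"
```

The key point is that only one class ever applies to a given process — the classes are tried in order, not combined.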
File permissions categories
1,476,395,110,000
I'm trying to write a shell script to preserve and rewrite all permissions/groups in a large directory with several subdirectories in the event they get changed or are not properly created or if mirroring the users/permissions for this directory on a different machine. Something like: chown adam:brown /var/blarg chmod 770 /var/blarg chown adam:brown /var/blarg/toast.file chmod 777 /var/blarg/toast.file ...etc Doing this by hand will take a long time. I was wondering is there an existing command/script to accomplish this task?
As yeti suggested in the comments, I used the find command to find all files and directories within the directory and output their permissions/owners into chown and chmod commands. I added the verbose -v option so when running the resulting shell scripts you can see the success/errors of the commands: find /var/blarg -printf 'chown -v %u:%g "%p"\n' > chowns.sh find /var/blarg -printf 'chmod -v %m "%p"\n' > chmods.sh (quoting %p keeps paths that contain spaces intact). Now just make the resulting .sh files executable: chmod +x chowns.sh chmods.sh Then run them and output the verbose feedback to a txt file: ./chmods.sh > chmods_results.txt Boom.
Script to recursively check permissions and owners of a directory and write shell script to recreate them
1,476,395,110,000
I am trying to: mount my 2TB external USB hard drive as my home directory at /home/peter ensure that the home directory is owned by me (not root) do all this automatically at bootup Currently: my drive is formatted to ext4 my drive is empty I am running debian 7 I can reformat to another filesystem type if necessary, but I want to use the full 2TB on the drive. The following fstab line mounts the drive incorrectly owned by root: UUID=xxxx /home/peter ext4 nodev,nosuid 0 2 How can I mount the drive so that it is owned by peter (that's my login user on the PC)?
The solution was simply to chown the home directory after the mount took place: $ chown peter:peter /home/peter while using the following fstab settings: UUID=xxxx /home/peter ext4 defaults 0 2 This hadn't worked before with other fstab settings, but now /home/peter remains owned by peter each time I restart (previously root kept taking ownership of this directory on restart).
fstab mount drive as my /home
1,476,395,110,000
file name : abc permissions : -rwxrwxrwx shell : ksh I am logged with the user which is the owner of this file. Contents of file AccessTime = 20130424-161120 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 4:30,-3--users UptimeSecs = 16253.96 AccessTime = 20130424-170309 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 5:22,-3--users UptimeSecs = 19362.82 AccessTime = 20130424-173741 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 5:57,-2--users UptimeSecs = 21434.49 AccessTime = 20130424-180537 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 6:25,-1--users UptimeSecs = 23111.03 AccessTime = 20130424-191315 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 7:32,-1--users UptimeSecs = 27168.95 AccessTime = 20130425-101909 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 22:38,-2--users UptimeSecs = 81522.99 AccessTime = 20130425-124617 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-24--11:41 UptimeMins = 1-day--1:05 UptimeSecs = 90350.71 AccessTime = 20130430-161311 ActualShmKeyDec = 1090650862 ActualShmKeyHex = 0x410202ee Dev = 4457198 Inode = 
64770 FtokShmKeyDec = 1090650862 FtokShmKeyHex = 0x410202ee LastBootTime = 2013-04-28--06:01 UptimeMins = 2-days--10:12 UptimeSecs = 209527.13 Now when I try to delete this file..I get this error. rm: cannot remove `abc': Permission denied Any reason why I cannot remove this file ?
To delete a file you have to modify the containing directory so that it no longer lists that file. It seems you have no w (write) permission on that directory. In that case you cannot create or delete files there; you can only modify/delete the contents of the files themselves.
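A quick sketch of why the file's own wide-open mode doesn't help here (run as a normal user; root bypasses the check):

```shell
mkdir -p /tmp/lockdir
touch /tmp/lockdir/abc

chmod 777 /tmp/lockdir/abc   # the file itself is wide open
chmod 555 /tmp/lockdir       # but its directory is not writable

# Unlinking modifies the directory, so without w on the directory
# (as a non-root user) this fails:
rm -f /tmp/lockdir/abc 2>/dev/null || echo "cannot remove"

# With write permission on the directory restored, rm succeeds:
chmod 755 /tmp/lockdir
rm -f /tmp/lockdir/abc
```

Note that rm never consulted the file's rwxrwxrwx bits at all — only the directory's write bit mattered.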
Not able to remove a file [duplicate]
1,476,395,110,000
Possible Duplicate: How to apply changes of newly added user groups without needing to reboot? I am on Ubuntu 11.04. I am creating another user and placing an existing user in the group of other user, hoping to write in the home directory of other user. # uname -a Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux # whoami sachin # su root # useradd -m -U foo // create user foo # usermod -a -G foo sachin // add user `sachin' to group `foo' # chmod 770 /home/foo/ # exit # whoami sachin # cd /home/foo/ bash: cd: /home/foo/: Permission denied # groups sachin sachin : sachin foo This is totally weird. Though user sachin is in group foo, and group bits for /home/foo/ is set to rwx, sachin can't chdir to /home/foo/. I am not able to understand this. But, if at the exit step, I switch to sachin user from root, this is what happens: # uname -a Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux # whoami sachin # su root # useradd -m -U foo // create user foo # usermod -a -G foo sachin // add user `sachin' to group `foo' # chmod 770 /home/foo/ # su sachin # whoami sachin # cd /home/foo/ # ls examples.desktop Now, whatever is happening here is totally incomprehensible. Does su sachin inherits some permissions from the root user at this step? A solution that was suggested was to use newgrp, which updates the group and other user information. So if I do newgrp - sachin, then I get access to the directory of the newly created user, but if I create one more user and follow the same steps sachin does not get access to the latest users directory. This behavior is indeed very baffling. Any explanations would be much appreciated.
Your current login shell process keeps the group configuration it had before. In particular, compare the output of groups sachin with groups. After logging out and in again, the difference is gone. If you cannot re-login, for reasons unclear to me, you have to cascade the newgrp stuff. Internally, newgrp does nothing but "re-log you in" in a new layer of processes. If you log out, you have to press Ctrl-D as often as you have newgrp layers running. Alternatively, you terminate the inner newgrp layer and start another one after a usermod. Then you again have access to the current set of groups.
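The mismatch is easy to observe: groups with no argument reports the live group set of the current shell process, while querying a user name re-reads the database — a sketch:

```shell
# Group set inherited by this shell when it was started:
groups

# Group set as currently recorded in the user database (what a fresh
# login, or a new newgrp layer, would pick up):
id -nG "$(id -un)"

# After `usermod -a -G somegroup user`, the two outputs differ until you
# either log in again or stack a new layer with:  newgrp somegroup
```

When the two listings agree, the running shell is up to date; when they diverge, a re-login (or newgrp) is what synchronizes them.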
Incompatible group permissions in Linux - Is it a bug? [duplicate]
1,476,395,110,000
I was trying to test the rmdir command by removing a test directory located in my Downloads directory. I have read and write rights on Downloads. I issued rmdir -p /Users/myself/Downloads/test and got rmdir: /Users/myself/Downloads: Permission denied , but the test directory was deleted. So why do I have this message? Should I care? I'm using OSX Lion 10.7.3.
From man rmdir: -p, --parents remove DIRECTORY and its ancestors; e.g., `rmdir -p a/b/c' is similar to `rmdir a/b/c a/b a' So your rmdir call tries to delete test (succeeds), then tries to delete the parent directory Downloads and fails... I think. I'd rather have expected some "directory not empty" error, because why shouldn't you have the permissions to delete this folder?
OSX : rmdir "permission denied" but directory removed
1,476,395,110,000
I have my Apache server running in the /srv/http/ directory. I changed the group of it to httpadmin and then added my user to that group. Then, I changed the permissions of that directory to rwxrwxr-x, which means everyone in the group httpadmin should be able to write in that directory, right? Yet, I can't create files in it. What am I missing?
After changing your group, you have to log out and log in again for your new group assignment to be active. Alternatively, you can run newgrp httpadmin to start a new shell in which the group membership already applies.
How to write in directory out of home?
1,476,395,110,000
I have a folder under /mnt/ with drwxrwxrwx permissions, owned by root:root. I then mount a USB drive (exFAT) to this folder and it becomes drwxr-xr-x. The issue is that now I cannot scp to that folder via WinSCP, since there is no permission for the group to write to the folder, and I am unable to scp as the root user. I am mounting the drive via fstab with the following: /dev/sdb2 /mnt/USB exfat defaults,dmask=0000,umask=0000,rw 0 0 How do I either: 1) give the group write permission, or 2) mount it as a non-root user so that that user can write? I've attempted chown and chmod to no avail. chown, even when run as root, returns Operation not permitted. I am able to write to the mount as the root user when in SSH (such as mkdir), so the mount is writable, but only by root.
ExFAT filesystems don't support Unix permissions. The Unix permissions are set at mount time. The ownership/permissions of the mountpoint (/mnt/USB) has nothing to do with whatever gets mounted over it. It's just a placeholder in the file tree. To fix it now, try: sudo mount -o remount,umask=0,dmask=0,fmask=0,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt/USB Update your /etc/fstab entry to add the fmask=0 and uid= and gid= options. You'll have to hard-code your UID and GID, with the values from id -u;id -g.
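For fstab the same idea applies, except the uid/gid have to be hard-coded since $(id -u) can't be used there — 1000 is an assumption below, check your own values with id -u and id -g:

```text
# /etc/fstab — give the exFAT mount to uid/gid 1000 with fully open masks:
/dev/sdb2  /mnt/USB  exfat  defaults,rw,uid=1000,gid=1000,umask=0,dmask=0,fmask=0  0  0
```

With ownership assigned at mount time this way, the non-root user can write directly and no chown/chmod afterwards is needed (or possible).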
Have drwxrwxrwx permissions on folder, but after mounting to it it becomes drwxr-xr-x which disallows members of the group to write. How do I fix it?
1,476,395,110,000
When providing code to another, it appears to be good practice to remove all elevation-specific code — that is, provide apt update rather than sudo apt update (or su -c 'apt update'). However, this causes all kinds of problems for those new to UNIX-based permission elevation — I and many others I have met used to merely run everything as the superuser using the aforementioned utilities. Additionally, when invoking certain commands programmatically, superuser elevation shall be necessary on some systems, but not on others. This means that including the permission elevator can break configurations on some systems, because things shall begin to be owned by the superuser when they shouldn't be. flatpak is an example of some of this (except the configuration corruption) — 50% of the installations I've used necessitated elevation to invoke most of its commands, the other 50% didn't, by default. Considering that UAC elevation on Windows 7+ appears to work incredibly well even in edge cases in my experience with it, I am seriously surprised that no comprehensive methods of remediating this issue appear to exist. Consequently, I ask whether anyone knows of any methods of programmatically determining whether a command necessitates user elevation to invoke it. I am aware that switching users when invoking certain commands can cause its behaviour to affect that user, but in those instances, I cannot imagine that the users able to invoke the command would be restricted (lest it be rendered useless). https://unix.stackexchange.com/search?q=determine+whether+superuser+elevation+required returns nothing, and although how to determine through code if command needs root elevation? and How to check if the "sudo" permission will be necessary to run a command? appear as if they're asking what I want know, the few responses available at both appear to be focussed upon shell script code to sidestep the issue rather than providing language-agnostic solutions.
No such way exists; it's "administrator contextual knowledge" that you need to run apt as root and flatpak not. Other programs will ask for privilege elevation when they need it.

As for UAC elevation on Windows 7+ appearing to work incredibly well: UAC elevation is the Windows GUI equivalent of sudo or PolicyKit on Linux/freedesktop. You can't tell whether a program will require admin privileges under Windows, either, until it requests them. The Windows graphical shell does have a way to mark links to executables as "run as a different user", but that's not the same as saying the executable can be spotted as being in need of privilege elevation.
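In practice, the closest workable approach is the one most tooling uses: try the command unprivileged first, and elevate only on failure. Below is a minimal POSIX-shell sketch of that heuristic; the function name and the stderr pattern matching are my own assumptions, not a standard interface.

```shell
#!/bin/sh
# Heuristic "try first, elevate on failure" wrapper (a sketch, not a
# standard tool): run the command unprivileged; if it fails and its
# stderr looks like a permission problem, retry under sudo.
run_maybe_elevated() {
    # Capture stderr only; stdout is discarded for simplicity here.
    err=$("$@" 2>&1 >/dev/null)
    status=$?
    if [ "$status" -ne 0 ] && printf '%s' "$err" | grep -qi 'permission denied'
    then
        # May prompt for a password. Note that the failed first attempt
        # may already have had side effects, which is one reason this
        # pattern cannot be made fully robust.
        sudo -- "$@"
        return $?
    fi
    [ -n "$err" ] && printf '%s\n' "$err" >&2
    return "$status"
}
```

The fragility is inherent: matching on error text is locale-dependent, and a partially-completed first attempt can leave state behind. That is precisely why no general-purpose "does this need root?" predicate exists.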
How to programmatically determine whether superuser elevation is necessary to invoke code?
1,476,395,110,000
I have a storage device whose filesystem does not support the most important metadata (permissions, owners, etc.). I think it would help a lot if I could somehow inject this kind of metadata into the system from an external source. The last time I saw something similar was the now truly ancient umsdos filesystem, decades ago: it stored POSIX ownership and permission information in small database files in every directory of a FAT16 filesystem. I have no say in changing the storage, and no influence over the permission flags it reports, but I would like to emulate them locally. Is this somehow possible on Linux?
The POSIX overlay filesystem might solve your problem — it’s a FUSE-based overlay capable of adding POSIX metadata to non-POSIX file systems. Its last official release dates back to 2018 but its source repo is still being updated.
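For the record, a typical invocation looks roughly like the following; the device name and mount points are placeholders, and you should check mount.posixovl(1) on your system, as this is a sketch rather than a verified recipe:

```shell
# Mount the non-POSIX filesystem normally, e.g. a VFAT volume:
sudo mount -t vfat /dev/sdb1 /mnt/stick

# Overlay a POSIX view on top of it; -S names the underlying source tree:
sudo mount.posixovl -S /mnt/stick /mnt/posix-view

# chmod/chown now work inside the view; posixovl persists the metadata
# in hidden files on the underlying filesystem:
sudo chown alice:alice /mnt/posix-view/somefile
chmod 600 /mnt/posix-view/somefile
```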
Emulating file metadata on a filesystem that doesn't support it
1,476,395,110,000
The man pages state that:

-perm -mode means that all of the permission bits in mode are set for the file.
-perm /mode means that any of the permission bits in mode are set for the file.

When I created two directories in /tmp with permissions 1777 and 1755 and ran these commands, both commands found both directories:

find / -perm -1000 -type d
find / -perm /1000 -type d

This is why I'm confused. I'm using CentOS 7 as my distribution.
Octal 8#1000 is binary 2#1_000_000_000; it has only one bit set, the sticky bit, so "all" and "any" make no difference: /tmp has all of that one bit set, and has any of that one bit set.

You'd see a difference for values that have at least 2 bits set, such as -perm -5000 vs -perm /5000 (8#5000 being 2#101_000_000_000, with 2 bits set), where the former returns files that have both the setuid and sticky bits set, and the latter files that have either of them (or both) set.

You typically use / for things like -perm /111 (is executable by someone), -perm /444 (is readable by someone) or -perm /6000 (either setuid or setgid, i.e. dangerous), and - for things like -perm -111 (is executable by everyone) or -perm -600 (is both readable and writable by its owner), often negated (! -perm -... -exec chmod ...+... {} +).
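The difference is easy to reproduce with directories carrying different subsets of those bits; the directory names below are throwaway examples in a temp directory:

```shell
#!/bin/sh
# Demonstrate -perm -MODE ("all bits set") vs -perm /MODE ("any bit set").
d=$(mktemp -d)
mkdir "$d/both" "$d/setuid_only" "$d/sticky_only"
chmod 5755 "$d/both"         # setuid + sticky: both bits of 5000
chmod 4755 "$d/setuid_only"  # setuid only: one bit of 5000
chmod 1755 "$d/sticky_only"  # sticky only: the other bit of 5000

# "all of 5000": only the directory with BOTH bits matches
find "$d" -mindepth 1 -perm -5000 -type d

# "any of 5000": all three directories match
find "$d" -mindepth 1 -perm /5000 -type d

rm -rf "$d"
```

With a single-bit mode like 1000, the two match sets collapse into one, which is exactly what the question observed.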
-perm -1000 and -perm /1000 produce the same results when finding files with the sticky bit set?
1,476,395,110,000
I'm trying to set up the HashiCorp Vault Agent as a systemd service. I can manually run the agent as the user vault. Note, in case it's important, here's the /etc/passwd entry for that user:

vault:x:994:989::/home/vault:/bin/false

So I need to do sudo su -s /bin/bash vault to get a vault session. With that in mind, I can run vault agent -config=<pathToConfig> and it works. Now here is the /usr/lib/systemd/system/vault-agent.service I've set up:

[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/bin/vault agent -non-interactive -config=/etc/vault.d/agent-config-prod.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=no
RestartSec=5
TimeoutStopSec=30
StartLimitIntervalSec=60
StartLimitBurst=3
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

This is a service configuration I've found in multiple places, but I always get the same issue:

Error storing PID: could not open pid file: open ./pidfile: permission denied

I tried replacing the ExecStart= with /bin/whoami, just to be sure: yes, it's indeed vault. Permissions and location of that ./pidfile (default install location), /etc/vault.d/pidfile:

drwxr-xr-x. 108 root  root  8192 May 15 16:32 etc
drwxr-xr-x    3 vault vault  113 May 15 17:43 vault.d
-rwxrwxrwx    1 vault vault    0 May 15 17:48 pidfile  # not the default permissions, but I am desperate

I am really suspicious that the sudo su -s /bin/bash vault command perhaps grants the vault user more privileges. If so, how do I incorporate that into my service? I ran systemctl daemon-reload every time, and SELinux is disabled.

P.S.: if someone has a great link about how to set up a systemd unit for the Vault AGENT (not running as root), I'll take it.

EDIT, about the sudo su -s /bin/bash vault:

$ sudo -s /bin/bash vault
/bin/vault: cannot execute binary file
$ su -s /bin/bash vault
Password: (and I have no password, or I don't know it)

So that's why I'm using the full sudo su -s /bin/bash vault command.
The option ProtectSystem=full literally mounts /etc read-only for the process defined in the service. From the systemd.exec documentation:

Takes a boolean argument or the special values "full" or "strict". If true, mounts the /usr/ and the boot loader directories (/boot and /efi) read-only for processes invoked by this unit. If set to "full", the /etc/ directory is mounted read-only, too.

You should either move the pidfile to a location writable by that process, or remove the ProtectSystem=full option from the service file. You should also look into all of the other systemd service options you are using whose effects you are unsure of; there are a number of other restrictions in there that may cause problems with your setup.
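If you'd rather keep the hardening, a narrower alternative is a drop-in override that makes just the agent's directory writable again. This assumes systemd ≥ 231 (where ReadWritePaths exists) and that the pidfile stays under /etc/vault.d:

```ini
# /etc/systemd/system/vault-agent.service.d/override.conf
[Service]
# Punch a writable hole through ProtectSystem=full for the agent's directory
ReadWritePaths=/etc/vault.d
```

Then run systemctl daemon-reload and restart the unit. Cleaner still would be pointing the agent's pidfile at a path under /run (e.g. via RuntimeDirectory=), so /etc can stay read-only.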
systemd service user does not have the same rights/permissions as the same user in a shell
1,476,395,110,000
I have lost the permissions to create files/folders in /home/user without using sudo. I have a few snippets below of what I mean. I tried to install nyxt-git from the AUR beforehand and it errored out. Then I shut my computer down for the night, and now when I booted it up this morning I get the errors below. How might I change my permissions back? I'm not entirely sure whether it's related to my attempt at installing the package, but it was the last thing I did before having this issue.

⋊> ~ pwd
/home/user
⋊> ~ touch test.py
touch: cannot touch 'test.py': Permission denied
⋊> ~ rm Untitled.ipynb                                11:28:37
rm: cannot remove 'Untitled.ipynb': Permission denied
⋊> ~ mkdir testing
mkdir: cannot create directory ‘testing’: Permission denied

My OS is EndeavourOS and I am running AwesomeWM. The output of ls -ld "$HOME" is:

dr-xr-xr-x 54 hank hank 4096 May 14 11:35 /home/hank/
Your home directory permissions: dr-xr-xr-x 54 hank hank 4096 May 14 11:35 /home/hank/ are missing the write permission for the owner of the directory (you). The fix ought to be very simple: chmod u+w /home/hank You might have to invoke that chmod command under sudo, but try it as yourself first.
Don't have permissions to create folders or delete files in $HOME