Is there a way to create a user account in Solaris which allows the users to run one command only? No login shell or anything else. I could possibly do it with /usr/bin/false in /etc/passwd and just get the user to ssh <hostname> <command>, but is there a nicer way to do it?
|
You could use a forced command if the users can only connect through ssh. Essentially, whenever the user connects through ssh with a certain key and a certain username, you force them to execute a command (or a script) that you specify in .ssh/authorized_keys. Any command issued by the user will be ignored.
For example:
# in .ssh/authorized_keys
command="cd /foo/bar && /path/to/scripts/my_script.sh arg1 arg2",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa public_key
| Creating a UNIX account which only executes one command |
I need to perform backup of a server to my computer using Duplicity:
duplicity /etc sftp://[email protected]//home/backup
Before this can be done, I need to allow password-less access by doing the following:
$ ssh-keygen
$ ssh-copy-id [email protected]
$ ssh [email protected]
My question is, how do I restrict the command to just this SFTP transfer in the public key that is generated?
command="restrict to sftp",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA…
And since I am on a dynamic IP address, how do I overcome the "missing known host" problem each time my IP changes?
|
Question #1
My question is, how do I restrict the command to just this SFTP transfer in the public key that is generated?
There are 2 methods for doing this.
1. -- Restricting through sshd
This method involves setting up the SFTP feature within your SSH daemon, sshd. This is controlled through the /etc/ssh/sshd_config configuration file. NOTE: This will restrict the user backup so that it is only allowed to SFTP into the server.
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp
## You want to put only certain users (i.e users who belongs to sftpusers
## group) in the chroot jail environment. Add the following lines at the end
## of /etc/ssh/sshd_config
Match User backup
ForceCommand internal-sftp
2. -- Restricting through authorized_keys
This method doesn't involve any changes to the sshd_config file. You can limit a user + a SSH key to a single command via the command= feature which you've already mentioned in your question. The trick is in what command you include. You can put the SFTP server in this command= line, which has the same effect as setting up the SFTP server in your sshd_config file.
# User backup's $HOME/.ssh/authorized_keys file
command="/usr/libexec/openssh/sftp-server" ssh-dss AAAAC8ghi9ldw== backup@host
NOTE: if the user has write access to ~/.ssh/authorized_keys, they can read and/or modify it. For example, they could download it, edit it, and re-upload it with the command=... stripped away, granting themselves unfettered command access, including a shell. If the user has write access to ~/.ssh, they could also simply unlink and recreate the file, or chmod it to regain write access. Many solutions exist, such as keeping the ~/.ssh/authorized_keys files in a non-user-writable place, for example with:
Match Group sftponly
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
Question #2
And since I am on a dynamic IP address, how do I overcome the "missing known host" problem each time my IP changes?
This is trickier but doable using the from= feature within the authorized_keys file as well. Here we're limiting access from only the host, somehost.dyndns.org.
from="somehost.dyndns.org",command="/usr/libexec/openssh/sftp-server",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAAC8ghi9ldw== backup@host
The additional settings after the command= are equally important, since they'll limit the use of SSH key even further.
breakdown of features
from="hostname1,hostname2" - Restricts access to the specified IP address or hostname patterns
command="command" - Runs the specified command after authentication, ignoring whatever the client asked for
no-pty - Does not allocate a pty (does not allow interactive login)
no-port-forwarding - Does not allow port forwarding
no-X11-forwarding - User won't be able to remotely display X11 GUIs
no-agent-forwarding - User won't be able to forward their SSH agent through this host to other internal hosts
To get rid of the message about the "missing known hosts" you can add this SSH option to the client when it connects like so:
$ ssh -o StrictHostKeyChecking=no ....
See the man page, ssh_config for full details about this switch.
Restricting the user's shell
For both solutions above you'll likely want to lock down the backup user by limiting this user's shell in the /etc/passwd file as well. Typically you'll want to set it to scponly, but there are other choices for this as well. See this U&L Q&A titled: "Do you need a shell for SCP?" for ways of doing this.
The use of /sbin/nologin can also be used if you opt to use the chroot feature from sshd_config as outlined in #1 above. However if you opt to use the method outlined in #2, then you'll likely have to use scponly or something else for the user's shell in /etc/passwd.
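For example, after changing the shell (say with usermod -s /usr/bin/scponly backup, run as root), you can confirm it by reading the seventh colon-separated field of the passwd entry. A small sketch, using a hard-coded sample entry (the UID/GID values are made up):

```shell
# sample /etc/passwd entry for the backup user (hypothetical values)
entry='backup:x:1010:1010::/home/backup:/usr/bin/scponly'

# field 7 is the login shell
shell=$(printf '%s\n' "$entry" | cut -d: -f7)
echo "$shell"
```

On a live system you would feed it the output of `getent passwd backup` instead of the hard-coded line.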
BONUS - Extending #2 above
If you need to expose a set of commands for this user you can also do this. Create a script like so, /home/backup/commands.sh:
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in   # check the command the client asked for
"diskspace")
df -h
;;
"dirlist")
ls -1
;;
"apache_restart")
/etc/init.d/apache restart
;;
*)
echo "Unknown command"
esac
You then setup the authorized_keys file like so:
command="/bin/sh /home/backup/commands.sh" ssh-dss AAAAC8ghi9ldw== backup@host
The backup user can then run these commands like so:
# diskspace
$ ssh -q user@remote_host diskspace
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/dev-root 39G 2.2G 35G 6% /
# dirlist
$ ssh -q remote_host dirlist
commands.sh
dump.sql
References
SSH Tips and Tricks
Five Minutes to an Even More Secure SSH
| Restrict password-less backup with SFTP |
The Glassfish application server provides scripts to administer the application server, including starting and stopping it, and I would like to prevent the root user from running these scripts. The reason is that some key developers forget to administer the server as the non-privileged user, and if they restart the application server as root, then the application server ends up running as root [*].
It isn't an option to avoid giving root access, and the developers forget because they are so used to doing this on their local machines. I would like the asadmin script to either switch to running as the non-privileged user or display an error message each time it is run by root.
Bash shell is used.
[*]: I've tried to fix the rights on files, but even though I've tracked down lots of files which root owns and chmod them, the application experience strange errors and I have to run as root again.
|
Similar to the other answers, but in the direction you wanted.
if [[ $EUID -eq 0 ]]; then
echo "This script must NOT be run as root" 1>&2
exit 1
fi
Alternatively, you can use sudo within the script to force execution as the non-privileged user using the -u flag to specify the user to run as. I don't use Glassfish, but here's a proto-example pseudo script.
#!/bin/bash
if [ "$1" = "start" ]; then
sudo -u nobody /usr/bin/glassfish
fi
Hopefully you get the idea. Sorry I don't really know what the script looks like, or the name of the non-privileged user.
| How to hinder root from running a script |
Using command="" in authorized_keys, I can restrict the commands that can be run by a particular key.
What commands do I need to allow in order to have a functioning git remote?
From the Pro Git book I can infer that git-upload-pack and git-receive-pack are required, but is there anything else?
Note I still want to be able to log into the user normally, just not with this key.
|
Git includes a git-shell command suitable for use as a Git-only login shell. It accepts exactly the following commands:
git receive-pack
git upload-pack
git upload-archive
git-receive-pack
git-upload-pack
git-upload-archive
cvs server (used for emulating a CVS server, and not required for the Git protocol)
So these are the only commands you need to allow. Every version of Git I have access to only uses the hyphenated versions.
git-shell itself may be good enough for what you want to do, too.
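Since you still want normal logins with other keys, one approach (a sketch, not from the original answer; it assumes git-shell is on the server's PATH and that your sshd supports escaped quotes inside the command option) is to force git-shell for this key only in authorized_keys:

```
command="git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA… user@host
```

git-shell then refuses anything outside the command list above, while your other keys keep full shell access.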
You can verify what Git is running for any particular command by setting GIT_SSH to a shim that echoes the arguments. Make a script ssh.sh:
#!/bin/bash
echo "$@" >&2
Then run:
GIT_SSH="./ssh.sh" git push
and you will see the remote command it tried to run.
| What commands does git use when communicating via ssh? |
I've got a remote server my.server.com and I'd like to allow another user (stranger) to scp files onto the box.
Goals:
give as little access to the stranger user as possible
stranger should not be able to log in at all
if possible, arrange one-way copying - they should only be able to copy files onto the server, not off
How can I accomplish this?
Edit: I've removed all the cross posts. This question only exists here. Please help!
Edit 2: I ended up settling for a normal scp transfer. Selected @Lambert's answer because it was the most complete. Thanks for all the help!
|
If you want to set up all the limiting you mention, I would suggest using ProFTPd.
Using the sftp_module you are able to only allow a secure session. See http://www.proftpd.org/docs/contrib/mod_sftp.html for details about the sftp functionality. Near the bottom of the page an example configuration is listed.
Using the DefaultRoot directive you can isolate the granted user into his/her own directory
Using the <LIMIT> structure you are able to limit the FTP commands you want to allow, i.e. READ so the user can not retrieve files. See http://www.proftpd.org/docs/howto/Limit.html for details.
When you setup the sftp configuration in ProFTPd you probably want to have it to listen on another port than ssh, for example 2222. Configure your firewall and/or router to allow traffic coming from stranger to the port you choose for ProFTPd. Another possibility is to run ProFTPd's sftp module on port 22 and reconfigure ssh to listen on another port.
A sample configuration can look like:
<IfModule mod_sftp.c>
<VirtualHost a.b.c.d>
# The SFTP configuration
Port 2222
SFTPEngine on
SFTPLog /var/log/proftpd_sftp.log
# Configure the RSA, DSA, and ECDSA host keys, using the same host key
# files that OpenSSH uses.
SFTPHostKey /etc/ssh_host_rsa_key
SFTPHostKey /etc/ssh_host_dsa_key
SFTPHostKey /etc/ssh_host_ecdsa_key
<Limit READ>
DenyAll
</Limit>
DefaultRoot ~ users,!staff
</VirtualHost>
</IfModule>
Note: This is not a complete ProFTPd configuration; you should review and modify the ProFTPd default configuration to make it fit your needs.
There is another possibility to just use OpenSSH for this:
Create the user stranger and set a password for the user:
useradd -m -d /home/stranger -s /bin/true stranger
passwd stranger
Edit the file /etc/ssh/sshd_config and check if the following line exists, add it if it does not exists:
Subsystem sftp internal-sftp
Next add a Match block at the bottom of /etc/ssh/sshd_config:
Match User stranger
ChrootDirectory %h
ForceCommand internal-sftp -P read,remove
AllowTcpForwarding no
Note: the user will be able to overwrite an existing file.
Restart the sshd daemon.
Set the owner of the directory /home/stranger to root:
chown root:stranger /home/stranger
Note: root must be the owner and may be the only one to have write permission if ChrootDirectory is used. An alternative might be to add -d %u to the ForceCommand internal-sftp line and set ChrootDirectory /home but a user will be able to cd / and see other usernames with ls
Create an upload directory for the user:
mkdir /home/stranger/upload; chown stranger:stranger /home/stranger/upload
Now you can logon as user stranger using:
sftp stranger@somehost
When you upload a file it should be ok:
sftp> put myfile
Uploading myfile to /upload/myfile
myfile 100% 17 0.0KB/s 00:00
sftp> get myfile
Fetching /upload/myfile to myfile
/upload/myfile 0% 0 0.0KB/s --:-- ETA
Couldn't read from remote file "/upload/myfile" : Permission denied
| How can I set up one-way scp? |
I would like to lock down an Arch Linux user account to the maximum extent possible. The only functionality required for the account is to accept a non-terminal SSH session which allows the client to create a tunnel to the internet.
The situation is that I want to share my remote connection with a few friends. I will provide them with an SSH key for the account and configure their programs as necessary.
The complication is that I don't want to place 100% faith in their ability to secure the key file. I'd rather minimize the potential damage of a compromise while it's still hypothetical - and take the opportunity to learn more about security.
Is there any way I can achieve a completely isolated and/or locked down account? Can I allow SSH connections but refuse terminal access?
I appreciate any help!
|
When you add keys to an authorized_keys file you have several options to restrict what that key can do. In this situation, you can disallow running any commands. Simply prefix it with command="".
For example:
command="" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDc7nKsHpuC6W/U131p0yDh455sLE9pWmFxdK...
When the user wants to connect, they have to pass -N to ssh. This tells the ssh client not to try running a command, but to just open a connection (and do tunneling if configured). If the client is started without -N, it'll immediately disconnect.
For example:
ssh -N -D 8080 host.example.com
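As an aside (this assumes your server runs OpenSSH 7.2 or newer, which the question doesn't state), the restrict option can express the same intent more compactly: it turns everything off and lets you re-enable only what you need:

```
restrict,port-forwarding,command="" ssh-rsa AAAAB3NzaC1yc2E…
```

Here restrict disables pty allocation, X11, and agent forwarding in one word, while port-forwarding re-enables just the tunneling.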
| How can I lock down a user account to the point that it can read/write/execute as little as possible? |
In Linux distributions like RedHat you can create a user with options --disabled-login and ---disabled-password (see man page for command adduser link).
I wonder if it is possible for an administrator to check, after user creation, whether login and password are disabled for a given user. Is there any way to do this?
|
This information can be gathered using the passwd utility.
From man passwd
-S, --status
Display account status information. The status information consists of 7 fields. The first field is the user's login name. The second field indicates if the user account has a locked password
(L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and
inactivity period for the password. These ages are expressed in days.
To check the status of every user on the system, run
passwd -a -S
A disabled (locked) user might look like this:
apache L 08/30/2019 0 99999 7 -1
Note the L, indicating the account is locked.
A regular user might look like this:
panki P 09/23/2019 0 99999 7 -1
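If you want to check this from a script, the second field of the passwd -S output is the one to test. A sketch (the status line is hard-coded here for illustration; on a real system you would use the output of passwd -S username, which requires root):

```shell
# sample output line from `passwd -S` for a locked account
status_line='apache L 08/30/2019 0 99999 7 -1'

status=$(printf '%s\n' "$status_line" | awk '{print $2}')
case "$status" in
    L)  echo "password is locked" ;;
    NP) echo "no password set" ;;
    P)  echo "usable password" ;;
esac
```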
| How to check if unix account has been created with "--disabled-login" and "---disabled-password" |
Linux box a is connected to a home DSL line with dynamic DNS registration, hosting a tmux session to which multiple clients connect in read-only mode over SSH. All users connect using the same credentials: user b.
Example: ssh [email protected] tmux attach -t screencast
It all works fine but I have had a user do "naughty" stuff from the box out to the Internet. That's unacceptable as I am responsible for my Internet contract with the ISP; how do I completely jail every user apart from granting the ability to use the account b, over ssh, using tmux to watch session screencast on my a machine?
I am thinking about updating ipchains straight after a user connects over ssh, allowing traffic back to that ip address only but... with multiple viewers sharing the same account?
|
I don't completely understand your requirements: which machine are the users to be jailed on? Can they do anything that doesn't involve the network? Nonetheless I think I can tell you what the necessary building blocks are.
To restrict a user to specific network connections, see
How to restrict internet access for a particular user on the lan using iptables in Linux. In short, to restrict user 1234's network traffic to connecting to 192.0.2.42 (the IP address of machine A) via ssh:
iptables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -p tcp -d 192.0.2.42 --dport 22 -j ACCEPT
iptables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP
Remember to block IPv6 as well (with the analogous ip6tables rules) if you have it.
On machine A, to restrict the restricted users to the account B, the most effective method is to arrange for these users not to have credentials to other accounts. You can use Match directives in sshd_config to restrict connections from certain IP addresses to authenticating certain users, but this may not be a good thing as it would prevent you from obtaining administrative access.
Match Address 192.0.2.99
AllowTCPForwarding No
AllowUsers B
PasswordAuthentication No
X11Forwarding No
To restrict account B to a single command, there are two ways:
Give the users a private key (preferably one per user), and set restrictions for this key in the authorized_keys file with a command= directive.
command="tmux attach-session -r -t screencast" ssh-rsa …
Set the user's shell to a script that launches tmux with the right arguments. This has the advantage that you can allow password authentication, but may be harder to get right in a way that doesn't allow the user to break out to a shell prompt.
I think that tmux doesn't allow shell escapes in a read-only session, but I'm not sure; check that users can't escape at that point.
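A minimal sketch of the second approach (the install path and session name are assumptions based on the question; this is not hardened, so verify there is no way to detach and respawn a shell):

```shell
# install a tiny login shell that only attaches read-only to the session
cat > /tmp/tmux-viewer-shell <<'EOF'
#!/bin/sh
exec tmux attach-session -r -t screencast
EOF
chmod +x /tmp/tmux-viewer-shell
# then, as root: chsh -s /tmp/tmux-viewer-shell B
# (in practice you would install it under /usr/local/bin and list it in /etc/shells)
```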
| How to "jail" a user account's network capabilities on Linux? |
I know how to restrict standard users to run a command by removing execute permissions for that command. But it's possible restrict standard users to run a command with a specific option/argument?
For example a standard user should be able to run the following command:
ls
but not:
ls -l
I think that this can be possible since there are some commands like chsh or passwd which a standard user can run them, but he get permission denied when he runs chsh root or passwd -a -S.
|
I think the only way would be to write your own wrapper to the command/utility in question and have it decide what is allowed or not allowed based on the (E)UID of the user who started it. The tools you mention that do this such as chsh or passwd have this functionality built into their implementation.
How to write a wrapper for ls
#!/usr/bin/perl
use strict;
use warnings;

my $problematic_uid = 1000;    # For example
my $is_problematic  = $< == $problematic_uid;

if ($is_problematic and grep { $_ eq '-l' } @ARGV) {
    die "Sorry, you are not allowed to use the -l option to ls\n";
}
# pass everything else through to the real ls
exec '/new/path/to/ls', @ARGV;
You need to ensure that the path to the original ls isn't in your user's PATH, which is why I wrote /new/path/to/ls. The problem is that this wrapper requires your user to be able to execute the original ls, so the user may still circumvent it by calling the original ls directly.
| Restrict standard users to run a command with a specific argument |
I'd like to use a passwordless key to perform e.g. unison synchronization while being able to SSH into the server only with a password-protected key. The usual way of using scponly is changig the login-shell of my server account, but that is too global. Can an entry in authorized_keys achieve this instead?
|
You can use the command keyword in authorized_keys to restrict execution to one single command for a particular key, like this:
command="/usr/local/bin/mysync" ...sync public key...
Update:
If you specify a simple script as the command, you can verify the command the user originally supplied:
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
/path/to/unison\ *)
$SSH_ORIGINAL_COMMAND
;;
*)
echo "Rejected"
;;
esac
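You can dry-run the dispatch logic without SSH by setting the variable yourself; a sketch (the unison path is the placeholder from the script above):

```shell
# simulate what sshd would export for `ssh host /path/to/unison -batch -server`
SSH_ORIGINAL_COMMAND='/path/to/unison -batch -server'

case "$SSH_ORIGINAL_COMMAND" in
    /path/to/unison\ *)
        echo "would run: $SSH_ORIGINAL_COMMAND"
        ;;
    *)
        echo "Rejected"
        ;;
esac
```

Anything that doesn't start with the unison path prints Rejected instead.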
| How to associate only one public key with a restricted shell like scponly? |
I have a machine running Ubuntu 12.04 LTS with OpenSSH server installed. I have created a user with a jailed home account.
When I log in with the user's account, I can only access his home (duh).
In addition to the user's home folder, I would like to give him read access to a particular partition (/Volumes/Storage). How can I do this?
|
You will need to ensure that the chroot the user is put in has access to the directory, by bind mounting it into the chroot tree (into a subdirectory, so the mount doesn't mask the chroot itself):
mkdir -p /path/to/chroot/Storage
mount --bind /Volumes/Storage /path/to/chroot/Storage
The user will also need to have necessary filesystem permissions to read the data on the drive. The easiest way to accomplish this would be to put the user in a supplementary group that has ownership of the files (assuming standard 755 and 644 permissions).
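If the bind mount should survive reboots, the equivalent /etc/fstab entry would look something like this (the chroot path is a placeholder, as above):

```
/Volumes/Storage  /path/to/chroot/Storage  none  bind  0  0
```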
| SFTP: give user access to folder outside home |
I have a small problem. I have two accounts on a PC. One is used for "normal" operations, usually through the graphical interface, the other is only used for remote management (upload files through sftp, execute through ssh)
Now, I'd like to disable local login for the second account and allow it to be used only through ssh and sftp. If this is unfeasible, I at least need it to be unable to log in on the graphical interface (mainly because I don't want the X standard folders - Desktop, Downloads... - to appear in the home folder).
I'm using lubuntu (so LXDE as DE) 16.04 x64
|
You could create a group for the users you want to be able to log in graphically (just to make it easier if you decide to add another user later). I'll assume graphical-users, but the name doesn't matter.
Add the group: addgroup graphical-users
Add your normal user to this group: adduser frarugi graphical-users.
Edit your PAM config for lxdm (probably /etc/pam.d/lxdm) [or whatever login manager you're using, possibly lightdm] to add something like this, probably right below @include common-auth:
auth required pam_succeed_if.so quiet_success user ingroup graphical-users
I confess I haven't tested this, and am hoping the PAM setup in Lubuntu is similar to Debian.
| Force login through ssh only |
I am using /bin/rbash for some users. It's working as expected, but there is a loophole: when users run bash or dash, they get an unrestricted shell. To block those commands I have added the functions below to their .bashrc files.
bash() {
echo "WARNING: NOT ALLOW!!"
}
sh() {
echo "WARNING: NOT ALLOW!!"
}
So my question is:
1# can we use functions with multiple names as below
func1,func2 () {
# do stuff
}
2# I also tried:
case $BASH_COMMAND in # check each command`
bash|dash|sh)
echo "WARNING: NOT ALLOW!!"
;;
esac
3# /bin/rbash -> bash it's just a soft link of bash, then how does it work as restricted?
There are also some commands I want to prevent users from executing, such as unset HISTFILE and kill -9 $$.
Is there any alternate way to achieve the same?
|
Do not do this. rbash should only be used within an already secure environment unless you know what you are doing. There are many ways to break out of a restricted bash shell that are not easy to predict in advance.
Functions can easily be overridden simply by doing command bash or command sh.
As for your questions:
You can't define multiple functions at the same time directly. You'd have to do something like this:
x() { foo; }
alias f1=x
alias f2=x
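If you do want several function names sharing one body without aliases, a loop with eval also works; a sketch (keep in mind this is still trivially bypassed with command bash, as noted above):

```shell
deny() { echo "WARNING: NOT ALLOWED!"; }

# give the same body to several function names
for name in bash sh dash; do
    eval "$name() { deny; }"
done

sh    # now runs the deny function instead of /bin/sh
```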
rbash works because bash checks the value of argv[0] on launch. If the basename, with leading dashes stripped, is equal to RESTRICTED_SHELL_NAME (defaulting to rbash, see config.h), it runs in restricted mode. This is the same way that it runs in POSIX-compliance mode if invoked as sh. You can see this in the following code from shell.c in bash 4.2, lines 1132-1147:
/* Return 1 if the shell should be a restricted one based on NAME or the
value of `restricted'. Don't actually do anything, just return a
boolean value. */
int
shell_is_restricted (name)
char *name;
{
char *temp;
if (restricted)
return 1;
temp = base_pathname (name);
if (*temp == '-')
temp++;
return (STREQ (temp, RESTRICTED_SHELL_NAME));
}
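You can see the argv[0] check in action by invoking bash under the name rbash yourself; a sketch (assumes bash is installed and built with the default RESTRICTED_SHELL_NAME; /tmp is used only for convenience):

```shell
# any link or copy named exactly "rbash" triggers restricted mode
ln -sf "$(command -v bash)" /tmp/rbash

# cd is one of the operations a restricted shell forbids,
# so this prints an error mentioning "restricted"
/tmp/rbash -c 'cd /' 2>&1 | grep restricted
```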
| Bash restricted Shell using rbash |
I'm trying to create a guest account in arch linux that only allows the user to use simple applications like firefox. I do not want them to have access to programs like terminal or grub-customizer etc. How would I do this?
|
In the case of GNOME, please use the following as support in getting where you want to be.
Security
Disable access to any command line
https://help.gnome.org/admin/system-admin-guide/stable/lockdown-command-line.html.en
Ensure that the installation works as wanted before activating this, or ensure that one can log in as root with the right password. Preferably, disable all access to the command line completely; that way the devices cannot be altered once installed except through chroot.
Disable user list at login screen
How to disable the user list on GDM3 login screen?
Disable repartitioning for the user
https://help.gnome.org/admin/system-admin-guide/stable/lockdown-repartitioning.html.en
Disable altering settings
https://help.gnome.org/admin/system-admin-guide/stable/dconf-lockdown.html.en
Disable the user from saving files:
https://help.gnome.org/admin/system-admin-guide/stable/lockdown-file-saving.html.en
Disable printing (such as from firefox or libreoffice)
https://help.gnome.org/admin/system-admin-guide/stable/lockdown-printing.html.en
No keyring package installed
You can also lockdown mozilla firefox settings - or at least you used to. Try googling the subject.
| Create guest account with restricted access to applications |
I am trying to secure a custom application as much as possible from outside tampering.
I've seen many pages on jailing a user, but they usually include many exceptions, and I want to lock down this user as much as possible.
The user only needs to execute an application that is a websocket++ client & server that needs the ability to:
Accept incoming connections port forwarded from 443 to another port, for example 8000
Seek outgoing connections
Communicate with a local PostgreSQL server
Read from & write to a few specific files in the directory where the application is executed
Get output from ntpd -c 'rv'
Accept keyboard input
How can my intent be implemented?
|
if you really
want to lock down this user as much as possible
create a virtual machine. A chroot doesn't really isolate the process.
If a real virtual machine is too heavy, you can have a look at Linux containers, a lightweight form of virtual machine. They are harder to configure, though.
If you want something even more lightweight, you can try to configure SELinux. It is perhaps even harder to set up, but it should do exactly what you want.
chroot is not intended as a security measure, and there are various ways to work around it.
| Absolutely jail a user with minimum IP, file, & command rights |
It's clear from the documentation about cron that if cron.allow and cron.deny both exist, then cron.allow takes precedence and is the one consulted.
What is the case for at.allow / at.deny?
Everywhere I've searched and checked does not say it explicitly.
I use Ubuntu.
|
The FreeBSD and Solaris manuals are clear: if at.allow exists, at.deny is ignored. The Linux manual is slightly less explicit but the behavior is the same.
Despite the convergence of BSD and Solaris, this is not universal. On AIX, if a user is listed in both at.allow and at.deny, then they cannot use at.
| at.allow and at.deny precedence (in Ubuntu)? |
I've created a local port forwarding and I was trying to ssh into my own port.
During the process, I found that my own Linux won't recognize me even when I have the right password
chulhyun@chulhyun-Inspiron-3420:~$ su
password:
root@chulhyun-Inspiron-3420:/home/chulhyun# ssh root@localhost -p 2200
root@localhost's password:
Permission denied, please try again.
root@localhost's password:
root@chulhyun-Inspiron-3420:/home/chulhyun# exit
exit
chulhyun@chulhyun-Inspiron-3420:~$ ssh chulhyun@localhost -p 2200
chulhyun@localhost's password:
Permission denied, please try again.
I've tried it both as root@localhost and chulhyun@localhost (chulhyun is my user name). In both cases I have no problem logging into the account but when I enter that password when they ask for my password during ssh'ing, they say its wrong...
What am I missing here? Is there supposed to be a separate kind of password for network login?
update
here's my sshd_config
# Package generated configuration file
# See the sshd_config(5) manpage for details
# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
update
here's the result of
ssh -vvv chulhyun@localhost -p 2200
chulhyun@chulhyun-Inspiron-3420:~$ ssh -vvv chulhyun@localhost -p 2200
OpenSSH_5.9p1 Debian-5ubuntu1.4, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to localhost [127.0.0.1] port 2200.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/chulhyun/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /home/chulhyun/.ssh/id_rsa type -1
debug1: identity file /home/chulhyun/.ssh/id_rsa-cert type -1
debug1: identity file /home/chulhyun/.ssh/id_dsa type -1
debug1: identity file /home/chulhyun/.ssh/id_dsa-cert type -1
debug1: identity file /home/chulhyun/.ssh/id_ecdsa type -1
debug1: identity file /home/chulhyun/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.4
debug2: fd 3 setting O_NONBLOCK
debug3: put_host_port: [localhost]:2200
debug3: load_hostkeys: loading entries for host "[localhost]:2200" from file "/home/chulhyun/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /home/chulhyun/.ssh/known_hosts:2
debug3: load_hostkeys: loaded 1 keys
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 130/256
debug2: bits set: 501/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 0a:2c:c5:31:6e:46:76:f6:e2:fb:3e:ac:77:96:36:2a
debug3: put_host_port: [127.0.0.1]:2200
debug3: put_host_port: [localhost]:2200
debug3: load_hostkeys: loading entries for host "[localhost]:2200" from file "/home/chulhyun/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /home/chulhyun/.ssh/known_hosts:2
debug3: load_hostkeys: loaded 1 keys
debug1: Host '[localhost]:2200' is known and matches the RSA host key.
debug1: Found key in /home/chulhyun/.ssh/known_hosts:2
debug2: bits set: 518/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/chulhyun/.ssh/id_rsa ((nil))
debug2: key: /home/chulhyun/.ssh/id_dsa ((nil))
debug2: key: /home/chulhyun/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,password
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup gssapi-keyex
debug3: remaining preferred: gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_is_enabled gssapi-keyex
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug3: authmethod_lookup gssapi-with-mic
debug3: remaining preferred: publickey,keyboard-interactive,password
debug3: authmethod_is_enabled gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_1000' not found
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_1000' not found
debug1: Unspecified GSS failure. Minor code may provide more information
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_1000' not found
debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/chulhyun/.ssh/id_rsa
debug1: read PEM private key done: type RSA
debug3: sign_and_send_pubkey: RSA 69:f0:21:fa:39:b5:5e:79:48:25:4d:b2:dc:59:86:23
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /home/chulhyun/.ssh/id_dsa
debug3: no such identity: /home/chulhyun/.ssh/id_dsa
debug1: Trying private key: /home/chulhyun/.ssh/id_ecdsa
debug3: no such identity: /home/chulhyun/.ssh/id_ecdsa
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password
chulhyun@localhost's password:
debug3: packet_send2: adding 64 (len 59 padlen 5 extra_pad 64)
debug2: we sent a password packet, wait for reply
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
Permission denied, please try again.
|
This is the answer that I finally reached after the discussion in the comments.
From the comments, I realized that connecting to port 2200 is in fact an attempt to log in to the remote destination server. Based on this discovery, not only was I typing in the wrong password, I was also trying to log in with the wrong username. The username that I have on the destination server is 'kwagjj', so obviously trying chulhyun@localhost was totally wrong.
So I tried ssh kwagjj@localhost -p 2200 with the password for the kwagjj account that I have at the destination server, and I succeeded in getting in.
The point is, when utilizing the local port that you have forwarded, treat it as if you're facing the remote server at the very end of that local port.
| Linux can't recognize the right password when ssh'ing? |
1,356,445,124,000 |
Is it possible to have bashrc grab every single command the user types, save for those containing a given word?
Like the way you can use aliases to change what the user meant: you can alias, for instance, 'cd' to nothing, so that the user can't use that command anymore.
Maybe that way you can have a given user only able to apply one command?
|
If you only need users to access files remotely with sftp or rsync, but not be able to run shell commands, then use rssh or scponly.
If you need users to be able to run only a few programs, set a restricted shell for them, such as rbash or rksh. In a restricted shell, PATH cannot be changed and only programs in the path can be executed. Beware not to allow programs that allow the user to run other programs, such as the ! or | command in vi. Access to files remains controlled by the file permissions.
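The effect of a restricted shell can be previewed without creating a user, since bash -r behaves like rbash (a sketch; the exact wording of the error message varies between bash versions):

```shell
# A restricted bash refuses cd, PATH assignments, and slash-containing commands.
out=$(echo 'cd /tmp' | bash -r 2>&1)
echo "$out"
```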
| Bashrc disable-ing all but a given command for a given user |
1,356,445,124,000 |
I have a (home) Ubuntu machine; I created the "x" user (who is not root) during the installation, so often when I want to run apt-get or other things that require write access to /usr or /var I need to sudo.
My question is: is there a safe way to set up the "x" user so that he has more rights and I don't have to sudo or su?
What would be the optimal way to do the user account management? (On a home machine - so no production.)
|
Not on Linux. On Solaris you can use RBAC (Role Based Access Control) and provide additional permissions.
On Linux the proper way to do this is to use sudo.
We all do it.
That is the way of things.
The way of the Force.
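If the goal is mainly less typing, sudo itself can be relaxed for specific commands; a hedged sketch for the question's user x (the exact paths are assumptions, and the file should be edited with visudo):

```
# /etc/sudoers.d/x
x ALL=(root) NOPASSWD: /usr/bin/apt-get, /usr/bin/apt
```

This removes the password prompt only for the package manager, while everything else still requires the normal sudo authentication.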
| Optimal way to setup user account / accesses? |
1,356,445,124,000 |
I've set up SSH access for some friends. I've created the users to log in via SSH, but they connect to ~; how can I set which folder they can connect to?
I have a directory called /usbdrv/ which points to my USB drive.
PS: They shouldn't be able to go into the parent directory.
|
Setting a user's home directory only determines the directory where they are by default. Users can see the rest of the filesystem.
If you want an account to be restricted to file transfer and to only have access to a specific directory tree, you need to “jail” that user. This is supported natively by OpenSSH; for example, if you put those friends (and only them) in the friends group:
Match Group friends
ForceCommand internal-sftp
ChrootDirectory %h
#AuthorizedKeysFile /etc/sshd/friends/%u.authorized_keys
The ChrootDirectory confines these users to their home directory. If they all have the same home directory, they'll all be able to use the same SSH keys, which may not be what you want. Uncomment the AuthorizedKeysFile line if you don't want these users to be able to upload their own authorized keys.
If you want to treat these users independently from an authentication point of view, don't want them to be able to manipulate their keys, and want to give them all access to the same directory tree, then you can set a particular directory instead:
Match Group friends
ForceCommand internal-sftp
ChrootDirectory /pub
If you want to give these users access to multiple parts of the filesystem, you can make a combined view using a bind mount.
ForceCommand internal-sftp restricts these users to SFTP access (e.g. with Filezilla or over SSHFS). If you want to allow other methods such as rsync, you need a fancier configuration, e.g. using rssh (read the CHROOT guide).
| Create a user that can only connect to a specific directory |
1,356,445,124,000 |
When I work on a Linux test server (Debian 11) I have root, and I want to block other users from opening new sessions to this server during my work.
Is it possible?
|
When you are logged in as root, you can create a file named /var/run/nologin (historically /etc/nologin), and this should prevent non-root users from logging in. When you are done with your work as root, you can then delete that file to restore access for other users.
See the man page for nologin (5):
Name
nologin - prevent unprivileged users from logging into the system
Description
If the file /etc/nologin exists and is readable, login(1)
will allow access only to root. Other users will be shown the contents of this file and their logins will be refused.
On at least some systems this is controlled through PAM with the pam_nologin module. On Debian this module is the first account entry for login and sshd, placed before the inclusion of common-account:
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
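A minimal maintenance pattern built on that (a sketch assuming it is run as root; the message text is illustrative):

```shell
# Block non-root logins, do the work, then restore access.
nologin=/etc/nologin
echo 'System maintenance in progress - please try again later.' > "$nologin"
# ... root-only maintenance work goes here ...
msg=$(cat "$nologin")   # this is what a refused non-root login would be shown
rm -f "$nologin"        # normal logins work again
echo "$msg"
```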
| How to prevent other users from creating new SSH sessions? |
1,356,445,124,000 |
I'm setting up an Ubuntu server to receive ssh connections from clients so I will then be able to connect back to their machine (reverse SSH tunneling). I searched for a way to prohibit any action from the client on the server, and I found different solutions, but none seems as simple as just configuring the authorized_keys file of a specific client on the server by adding:
command="sleep x seconds"
Am I missing something important that would make that solution not a good one?
|
If the restriction is for a number of users I would set up a Match block in /etc/sshd_config, to apply restrictions for that group of users. If the restriction is for a particular ssh key I would consider taking your suggested route and blocking commands in the ~/.ssh/authorized_keys file.
Match block in /etc/sshd_config
Read the documentation for Match, and as usual when changing how a connection subsystem such as ssh works, ensure you have a root login available on the remote system to revert or adjust any changes.
# Add users to the UNIX group "restrictedusers"
Match Group restrictedusers
AllowTCPForwarding yes
X11Forwarding no
AllowAgentForwarding no
ForceCommand echo No logins permitted
Keys in ~/.ssh/authorized_keys
Not only force a command but also disable X11Forwarding, agent forwarding, etc. by prefixing access control options to an existing or new key entry. Ensure you have a login available on the remote system to revert or adjust any changes to the file.
restrict,port-forwarding,command="echo No logins permitted" ssh-ed25519 ed25519-public-key-code-goes-here...
In both cases, once this is set up the caller needs to set up the reverse callback for you. For example, this one connects the remoteHost's local port 50123/tcp to the caller's local port 123/tcp
ssh -fN -R 50123:localhost:123 remoteHost
Best of all, this can all be prepared in the caller's ~/.ssh/config for remoteHost, so that they would then just need ssh remoteHost to start the callback:
Host remoteHost
ForkAfterAuthentication yes
SessionType none
# listenPort targetHost:targetPort
RemoteForward 50123 localhost:123
| Using sleep command in ssh authorized_key to prevent user's actions |
1,356,445,124,000 |
Assume you can install something on a system because you have sudo rights to do so, but only have sudo rights for the installer. In that case it is fairly easy to create a package that installs a binary owned by root with the setuid bit set during installation, and have that binary execute any command that you feed it, as root. This makes it insecure to allow any given user limited sudo access to a package manager that can arbitrarily change permissions. The other obvious (IMO) security hole is that a package can update the /etc/sudoers file and grant the user all kinds of additional rights.
As far as I know, neither apt-get nor yum has an option you can set, or a check on how they are invoked, that installs packages to the normal, default locations, but in a limited way (e.g. not overwriting already available files, or not setting setuid bits).
Did I miss something, and does installation with such restrictions exist? Is it available in other installers? Or are there other known workarounds that would make such restrictions ineffective (and implementing them a waste of time)?
|
This is probably doable with an SELinux policy (and probably not doable without SELinux or another security module that can confine root), but it's pointless.
As you note, a package could declare that it installs /etc/sudoers. Even if you make an ad hoc rule to somehow prevent that, the package could drop a file in /etc/sudoers.d. Or it could drop a file in /etc/profile.d, to be read the next time any user logs in. Or it could add a service that's started by root at boot time. The list goes on and on; it's unmanageable, and even if you caught the problematic cases, you'd have prevented so many packages from installing that you might as well not bother (for example, that facility wouldn't allow most security updates). Another thing the package could do is to install a program that you'd be tricked into using later (for example, if you forbid write access to /bin altogether, it could install /usr/local/bin/ls) and which injects a backdoor via your account the next time you invoke the program. To prevent a package installation from injecting a potential security hole, you need to either restrict the installation to trusted packages, or to make sure you never use the installed packages.
Basically, if you don't trust a user, then you can't let them install arbitrary packages on your system. Let them install software in their home directory if they need something that isn't in the distribution.
If you want to give an untrusted user the ability to install more packages (from a predefined list of sources that you approve as safe) or upgrade existing packages on the main system, that can be safe, but you need to take precautions, in particular to disable interaction during the installation. See Is it safe for my ssh user to be given passwordless sudo for `apt-get update` and `apt-get upgrade`? for some ideas about apt-get upgrade.
Under recent Linux versions (kernel ≥ 3.8), any user can start a user namespace in which they have user ID 0. This basically allows a user to install their own distribution in their own directory.
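That last point can be checked from any unprivileged shell (a sketch; it requires util-linux's unshare, and unprivileged user namespaces to be enabled, which some distributions and container runtimes disable):

```shell
# Map the current user to uid 0 inside a fresh user namespace.
uid=$(unshare --user --map-root-user id -u)
echo "uid inside the namespace: $uid"
```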
| Disallow `apt-get`, `yum` to install setuid binaries when itself run via sudo |
1,356,445,124,000 |
I have a Linux machine (Kubuntu 13.04).
A friend of mine asked me to give him an account with the ability to use sudo. So I made an account for him and added it to the sudo group in /etc/group.
I don't care if he installs or manages any programs, but I don't want him to access my home folder /home/myaccount and its sub folders. How can I do this?
|
Probably, you cannot. If he has "full root privileges" (and the needed knowledge), he can circumvent any limitation you impose on him.
Try to reverse your approach: does he really need full root privileges? Why? Give him sudo privileges only for the minimal required set of commands. E.g. to install an application through package management, he only needs to run rpm (or yum or apt-get) as root, not every program.
As stated by @Evan-Teitelman, even this could be dangerous. In the sudoers file you can even define which command parameters a user may use, so you could let him use only a high-level command-line interface of the package manager (such as apt-get or zypper) and let him install software only from official repositories.
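To illustrate that warning, a sketch of such a minimal sudoers entry (the username friend is hypothetical; edit with visudo, and note that even an argument wildcard like this one can be abused, e.g. through apt-get's -o options, so the tighter the argument list the better):

```
# /etc/sudoers.d/friend
friend ALL=(root) /usr/bin/apt-get update, /usr/bin/apt-get install *
```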
P.S. In any case, if he reboots the machine using a live CD, he'll be able to read anything unless you encrypt the home file system.
| How do I isolate my file system data from another system root? |
1,356,445,124,000 |
I have a problem: I need to give someone access, but the only way to connect to the database is via SSH, so if I have understood everything correctly, I need to create a Linux account for him (on a Debian 10 machine). The user created doesn't have any rights to write in folders, so I guess he can't break anything.
But the problem is that he can read any file in the filesystem. I don't really know what the best practice is; I know chmod could fix everything, but I don't know if it is good practice to remove all rights from others and change the group of every single file to my main Linux account.
I guess I should remove the execution rights on the commands under the /bin folder and change the group of the files?
Thanks for reading! I hope I was clear.
|
What you asked for...
What you are looking for is chroot. This will set the / root of the filesystem to a location of your choice.
If you chroot to /home/bob for the user bob this location will look like / for bob. He will not see the rest of the filesystem. Because of this you want to place any programs he needs to run below this folder.
As we now know of chroot we can then find plenty of answers and guides:
How can I chroot ssh connections?
Chroot users with OpenSSH: An easier way to confine users to their home directories
Chrooted SSH/SFTP Tutorial (Debian Lenny)
What you want...
If the database is accessible from the Debian machine and that is all that is needed, then you are looking for an SSH tunnel. You still need to have a user account, but this can be totally locked down. The important SSH settings are:
AllowTcpForwarding yes - we are allowed to have a tunnel
ForceCommand /bin/false - if you try to log in via ssh you will not get a shell
ChrootDirectory /opt/dummy_location/%u - If you somehow get a shell anyway we have limited view of the filesystem to an empty location
With this knowledge we can again find plenty of prior art:
ssh tunneling only access
How to restrict an SSH user to only allow SSH-tunneling?
The above handles the ssh connection. If the user has physical access to the server then remember to set the shell for the user as well:
usermod -s /bin/false userbob
With the above in order, you can search around to see how to connect with any database client. As all the magic with SSH happens at the network layer, this works for all clients! When the tunnel is up, it looks like the database is running on the local machine.
Some clients are aware of SSH tunnels and make your life a little easier. A common client would be HeidiSQL - see How to connect to a MySQL database over a SSH tunnel with HeidiSQL.
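Tying this together on the client side, a sketch of an ~/.ssh/config entry (the host names, the user tunneluser, and the MySQL port 3306 are all illustrative; SessionType requires OpenSSH 8.7 or newer):

```
Host dbtunnel
    HostName debian-server.example.com
    User tunneluser
    SessionType none
    LocalForward 3306 db.internal.example.com:3306
```

With ssh dbtunnel running, any database client pointed at 127.0.0.1:3306 reaches the remote database.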
If you go the tunnel route then please please please test with a regular account first to make sure it works before you start to lock down the tunnel user!
And finally you should be using SSH keys instead of passwords. But this combined with the complexity of chroot is best left as the last thing to implement.
| Removing commands from a specific user |
1,356,445,124,000 |
I want to use an existing Ubuntu machine (with a public IP) to allow a user to SSH into it and then use it to SSH/RemoteDesktop/FTP into another Linux machine (with a private IP, but within the first machine's network).
I would create a new user account for this purpose, but I want to:
Restrict sudo access to this user
Not be able to go into other users' /home/otheruser directories
How to do it?
|
What you are trying to setup is usually called a jumpbox.
Users are not added to the wheel group automatically, so you only need to useradd the new user, and you are good to go
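A hedged sketch of a server-side lockdown for such an account, assuming it is called jumpuser (the name and the exact option set are illustrative, not from the answer):

```
# /etc/ssh/sshd_config on the jumpbox
Match User jumpuser
    AllowTcpForwarding yes
    X11Forwarding no
    AllowAgentForwarding no
    PermitTTY no
```

With this in place, ssh -J jumpuser@jumpbox user@private-host still works, since ProxyJump only needs TCP forwarding, but the account gets no interactive terminal on the jumpbox itself.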
| SSH and SCP piggyback ubuntu machine for a new user |
1,356,445,124,000 |
Background: We have a policy in the company to deactivate the login possibility as much as possible, which is understandable.
I am just wondering if there are any other side effects if you specify /usr/sbin/nologin as the login shell of an account? Apart from the login capability are there any other capabilities or features which will be deactivated? Any other known side effects?
|
Most daemons will reject user access for users whose shells don't appear in /etc/shells (and those that authenticate using PAM can be made to do so with a line of configuration). But you'll need to check the ones you care about (FTP daemons, IMAP server, etc) to make sure their behaviour is what you want. You may need to add /usr/sbin/nologin (or /bin/false) to the /etc/shells file.
I'm assuming that this is additional to giving such users disabled passwords, which (if working correctly) means the disabled shell is a backstop to the primary restriction.
| Does /usr/sbin/nologin have any side effects? |
1,356,445,124,000 |
A small family setup...
I have a color printer, and a Linux computer with CUPS installed. I want to allow the kids to print, but only in draft mode, and only in greyscale.
With CUPS I prevented the kids' account from accessing the printer. Then I set up a second printer, for the same hardware printer, but with different default options (draft and greyscale), and allowed the kids to access this new printer.
It works: when they print, the default options for this new printer are indeed draft and greyscale. But they are just that, default options. They can change them.
Is there a way to prevent users from changing the options of a printer?
|
So, with meuh's help, it was actually quite easy to implement a working solution.
Edit the printer's PPD, locate the GUI entries concerning the color setup and printing mode, and comment out the unwanted options (comments are lines starting with *% in a PPD file).
In my case, it was :
*OpenUI *ColorModel/Output Mode: PickOne
*OrderDependency: 10 AnySetup *ColorModel
*DefaultColorModel: KGray
*%*ColorModel RGB/Color: "<</cupsColorSpace 1/cupsBitsPerColor 8/cupsRowStep 0>>setpagedevice"
*%*ColorModel CMYGray/High Quality Grayscale: "<</cupsColorSpace 1/cupsBitsPerColor 8/cupsRowStep 1>>setpagedevice"
*ColorModel KGray/Black Only Grayscale: "<</cupsColorSpace 1/cupsBitsPerColor 8/cupsRowStep 2>>setpagedevice"
*CloseUI: *ColorModel
*OpenUI *OutputMode/Print Quality: PickOne
*OrderDependency: 10 AnySetup *OutputMode
*DefaultOutputMode: FastDraft
*%*OutputMode Normal/Normal: "<</OutputType(0)/HWResolution[600 600]>>setpagedevice"
*OutputMode FastDraft/Draft: "<</OutputType(-2)/HWResolution[300 300]>>setpagedevice"
*%*OutputMode Best/Best: "<</OutputType(1)/HWResolution[600 600]>>setpagedevice"
*%*OutputMode Photo/High-Resolution Photo: "<</OutputType(2)/HWResolution[1200 1200]>>setpagedevice"
*CloseUI: *OutputMode
Easy !
| Is it possible to prevent users to modify printer options in CUPS? |
1,356,445,124,000 |
I have a server where MAXWEEKS is set to 13 in /etc/default/passwd. I know that this setting means that the password will expire after 13 weeks. My question is: if the user does NOT change the password, does the account become locked out, or will the system force the user to change the password when the user logs on? What setting can be used to automatically revoke access after a certain number of weeks of inactivity?
|
The MAXWEEKS setting only controls the amount of time a password may be used before the user is forced to change it. The user will get a prompt with a request to change the password before the user can log in to the system again.
Locking an account after a period of inactivity requires a more advanced authorization management tool. NIS+ might be something on Solaris that can do this for you.
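For reference, a sketch of the relevant lines in /etc/default/passwd; MAXWEEKS is from the question, while WARNWEEKS is a standard companion setting on Solaris added here for illustration:

```
# /etc/default/passwd (Solaris)
MAXWEEKS=13    # password expires after 13 weeks; the next login forces a change
WARNWEEKS=2    # start warning the user 2 weeks before expiry
```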
| MAXWEEKS on Solaris is equals to 13 |
1,356,445,124,000 |
I am new to Linux and interested in only allowing login via direct access to the machine (the active, physically present user). I don't even want to be able to log in remotely myself. Can someone please point me in the right direction on how to do this? I am using Ubuntu.
|
By default, Ubuntu doesn't allow anyone to log in remotely – you'd need to install a server for that (for example, for an SSH server, the openssh-server package), and then activate it.
So, unless you're explicitly doing that, nobody can log in remotely.
If you already did install such a server, you can either uninstall it (sudo apt remove {name of the server package}) or just disable it (sudo systemctl disable --now {name of the service}, for example sudo systemctl disable --now sshd). But again, this will not be necessary unless you specifically set up a server first.
| How to lock down Ubuntu linux so that only the active user can log in |
1,356,445,124,000 |
I need to create a user which can SSH to a server but cannot run any commands or view anything, for the purpose of tunneling the connection without risk of revealing credentials. How do I do this? I looked on Google but found no sources.
Edit: Would it also be possible to show a certain message to the user upon login, instead of the usual 'Last logged in from xxx at yyy'?
|
Sleepshell http://www.mariovaldez.net/software/sleepshell/ or
rbash http://en.wikipedia.org/wiki/Restricted_shell
could be what you need.
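A third option, which also covers the edit about showing a message: a forced command in ~/.ssh/authorized_keys. This is a one-line sketch; the key material is elided, the message text is illustrative, and the restrict option needs OpenSSH 7.2 or newer:

```
restrict,port-forwarding,command="echo 'Tunnel only - no shell access on this host'; sleep 8h" ssh-ed25519 AAAA...
```

The echo replaces the usual login output, and the sleep keeps the session (and any tunnels through it) alive for up to eight hours.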
| How to create user with only SSH permissions [duplicate] |
1,456,364,840,000 |
Why do open() and close() exist in the Unix filesystem design?
Couldn't the OS just detect the first time read() or write() was called and do whatever open() would normally do?
|
Dennis Ritchie mentions in «The Evolution of the Unix Time-sharing System» that open and close along with read, write and creat were present in the system right from the start.
I guess a system without open and close wouldn't be inconceivable, however I believe it would complicate the design.
You generally want to make multiple read and write calls, not just one, and that was probably especially true on those old computers with very limited RAM that UNIX originated on. Having a handle that maintains your current file position simplifies this. If read or write were to return the handle, they'd have to return a pair: a handle and their own return status. The handle part of the pair would be useless for all other calls, which would make that arrangement awkward.
Leaving the state of the cursor to the kernel allows it to improve efficiency, and not only by buffering. There's also some cost associated with path lookup; having a handle allows you to pay it only once. Furthermore, some files in the UNIX worldview don't even have a filesystem path (or didn't; now they do with things like /proc/self/fd).
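The "handle maintains your current file position" point can be seen even from the shell, where exec 3< plays the role of open() and reads through fd 3 share one cursor (a sketch using dd with bs=1 so nothing is read ahead):

```shell
printf 'hello world' > /tmp/openclose.demo
exec 3< /tmp/openclose.demo                  # open(): fd 3 now carries a file offset
first=$(dd bs=1 count=5 <&3 2>/dev/null)     # read() 5 bytes from the start
second=$(dd bs=1 count=6 <&3 2>/dev/null)    # the next read() continues at offset 5
exec 3<&-                                    # close()
echo "$first/$second"
rm -f /tmp/openclose.demo
```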
| On Unix systems, why do we have to explicitly `open()` and `close()` files to be able to `read()` or `write()` them? |
1,456,364,840,000 |
I'm trying to learn UNIX programming and came across a question regarding fork(). I understand that fork() creates an identical copy of the currently running process, but where does it start? For example, if I have the code
int main (int argc, char **argv)
{
int retval;
printf ("This is most definitely the parent process\n");
fflush (stdout);
retval = fork ();
printf ("Which process printed this?\n");
return (EXIT_SUCCESS);
}
The output is:
This is most definitely the parent process
Which process printed this?
Which process printed this?
I thought that fork() creates an identical process, so I initially thought that in this program the fork() call would be recursively called forever. I guess the new process created by fork() starts after the fork() call?
If I add the following code, to differentiate between a parent and child process,
if (child_pid = fork ())
printf ("This is the parent, child pid is %d\n", child_pid);
else
printf ("This is the child, pid is %d\n", getpid ());
after the fork() call, where does the child process begin its execution?
|
The new process will be created within the fork() call, and will start by returning from it just like the parent. The return value (which you stored in retval) from fork() will be:
0 in the child process
The PID of the child in the parent process
-1 in the parent if there was a failure (there is no child, naturally)
Your testing code works correctly; it stores the return value from fork() in child_pid and uses if to check if it's 0 or not (although it doesn't check for an error)
| After fork(), where does the child begin its execution? |
1,456,364,840,000 |
I have read the man pages of curl, but I can't understand what those parameters (-k, -i and -X) mean. I see them used in a REST API call; can someone please explain what these three parameters do? It's not clear in the documentation.
Thank you in advance.
|
-k, --insecure: If you curl a website which is using a self-signed SSL certificate, curl will give you an error because it couldn't verify the certificate. In that case, you can use the -k or --insecure flag to skip certificate validation.
Example:
[root@arif]$ curl --head https://xxx.xxx.xxx.xxx/login
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a
"bundle" of Certificate Authority (CA) public keys (CA certs).
If the default bundle file isn't adequate, you can specify an
alternate file using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented
in the bundle, the certificate verification probably failed
due to a problem with the certificate (it might be expired,
or the name might not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate,
use the -k (or --insecure) option.
[root@arif]$ curl -k --head https://xxx.xxx.xxx.xxx/login
HTTP/1.1 302 Moved Temporarily
Date: Thu, 07 Dec 2017 04:53:44 GMT
Transfer-Encoding: chunked
Location: https://xxx.xxx.xxx.xxx/login
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: JSESSIONID=xxxxxxxxxxx; path=/; HttpOnly
-i, --include: This flag includes the HTTP response headers in the output. HTTP headers usually consist of the server name, date, content type, etc.
Example:
[root@arif]$ curl https://google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
[root@arif]$ curl -i https://google.com
HTTP/1.1 301 Moved Permanently
Location: https://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Thu, 07 Dec 2017 05:13:44 GMT
Expires: Sat, 06 Jan 2018 05:13:44 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 220
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alt-Svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339;
quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000;
v="41,39,38,37,35"
<HTML><HEAD><meta http-equiv="content-.....
-X, --request: This flag is used to send a custom request method to the server. Most of the time we do GET, HEAD, and POST, but if you need a specific method like PUT or DELETE then you can use this flag. The following example sends a DELETE request to google.com:
Example:
[root@arif]$ curl -X DELETE google.com
..........................
<p><b>405.</b> <ins>That’s an error.</ins>
<p>The request method <code>DELETE</code> is inappropriate for the URL
<code>/</code>. <ins>That’s all we know.</ins>
| What is the meaning of "curl -k -i -X" in Linux? |
1,456,364,840,000 |
I always assumed the st_blocks field returned by stat()/lstat()... system calls and which du uses to get the disk usage of files was expressed in 512 bytes units.
Checking the POSIX specification, I now see POSIX makes no guarantee for that. The perl documentation for its own stat() function also warns against making that assumption.
In any case, as indicated by POSIX, that block size is not related to the st_blksize field returned by stat(), so has to be found elsewhere.
Checking the GNU du or GNU find source code, I see evidence HP/UX uses 1024 bytes units instead. GNU find adjusts its -printf %b output to always give a number of 512-byte units which is probably the source of my confusion.
Is there any other Unix-like system, still currently in use where st_blocks is not expressed in 512 byte units? Can that be filesystem dependant (as POSIX suggests)? I guess mounting an HP/UX NFS share could do it.
|
st_blksize is called "the optimum I/O size" and is unrelated to the units used for st_blocks. The optimum I/O size is of course filesystem specific. This is a result of the fast filesystem (FFS) development at Berkeley in 1981/1982. Before that, there was no optimum block size in the available filesystems.
st_blocks is expressed in units of DEV_BSIZE, which indeed is 1024 on HP-UX. DEV_BSIZE is a platform-specific constant. Later, when FFS was renamed to UFS, there was a second filesystem in BSD UNIX with different behavior related to indirect blocks, and that required this new stat() field. Before, du just knew the filesystem's algorithm for indirect blocks.
If you run an HP-UX NFS fileserver with other NFS clients, you get wrong df reports from the HP-UX NFS server, unless HP fixed the problem during the past 15 years, in which I had no access to recent versions of HP-UX. I know of no other UNIX with similar NFS-related bugs.
BTW: up to NFSv3, NFS assumes a blocksize of 512 and HP would need to convert their NFS reports in the server. NFSv4 does not make that implicit assumption, but HP-UX still reports wrong numbers.
I know no other UNIX that is based on a 1024 DEV_BSIZE.
| On what UNIX-like system / filesystem is the st_blocks field returned by stat() not a number of 512-byte units? |
1,456,364,840,000 |
It's possible to mount both Apple HFS+ and Microsoft NTFS filesystems under Linux.
Both filesystems support multiple content streams per file, though this is not widely used.
Apple uses the term fork.
Microsoft uses the term Alternate Data Stream.
Are there (semi) standard ways to access these filesystem features from Linux? If so are the methods uniform between the two filesystems or are only arcane ad-hoc methods available? (Perhaps something with ioctls?)
|
ntfs-3g can read alternate data streams in NTFS. From its manpage:
Alternate Data Streams (ADS)
NTFS stores all data in streams. Every file has exactly one unnamed
data stream and can have many named data streams. The size of a file
is the size of its unnamed data stream. By default, ntfs-3g will only
read the unnamed data stream.
By using the options "streams_interface=windows", with the ntfs-3g
driver (not possible with lowntfs-3g), you will be able to read any
named data streams, simply by specifying the stream's name after a
colon. For example:
cat some.mp3:artist
Named data streams act like normal files, so you can read from them,
write to them and even delete them (using rm). You can list all the
named data streams a file has by getting the "ntfs.streams.list"
extended attribute.
For hfs+ I couldn't find anything conclusive (e.g. kernel documentation), but this question at Super User points to a suggestion:
Add /rsrc to the end of the file name to access the resource fork. I have no idea where that's documented, if anywhere. Edit: Just to clarify, I was referring to command-line usage; for example cp somefile/rsrc destfile will copy the resource fork of somefile to a file called destfile. All command-line functions work this way. I haven't tested it with anything graphical.
| Can I read and write to alternate HFS+ file forks or NTFS data streams from Linux? |
1,456,364,840,000 |
I was debugging a fuse filesystem that was reporting wrong sizes for du. It turned out that it was putting st_size / st_blksize [*] into st_blocks of the stat structure. The Linux manual page for stat(2) says:
struct stat {
…
off_t st_size; /* total size, in bytes */
blksize_t st_blksize; /* blocksize for file system I/O */
blkcnt_t st_blocks; /* number of 512B blocks allocated */
…
};
What is st_blksize for if st_blocks is in 512B blocks anyway?
[*] which looks wrong anyway, as integer division doesn't account for the fractional part…
|
st_blocks is defined as
Number of blocks allocated for this object.
The size of a block is implementation-specific. On Linux it’s always 512 bytes, for historical reasons; in particular, it used to be the typical size of a disk sector.
st_blksize is unrelated; it’s
A file system-specific preferred I/O block size
for this object. In some file system types, this
may vary from file to file.
It indicates the preferred size for I/O, i.e. the amount of data that should be transferred in one operation for optimal results (ignoring other layers in the I/O stack).
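For illustration, Python's os.stat() exposes the same struct stat fields, so the relationship between the two can be checked directly (a sketch; the 8192-byte file size is arbitrary):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 8192)
    f.flush()
    os.fsync(f.fileno())      # force allocation so st_blocks is up to date
    st = os.stat(f.name)

print("st_size    =", st.st_size)          # 8192 bytes
print("st_blocks  =", st.st_blocks)        # always in 512-byte units on Linux
print("allocated  =", st.st_blocks * 512)  # bytes actually allocated on disk
print("st_blksize =", st.st_blksize)       # preferred I/O size, fs-specific

os.unlink(f.name)
```

On a typical Linux filesystem the allocated size is at least the file size (more for sparse-unfriendly layouts), while st_blksize is usually the filesystem block size, e.g. 4096.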
| Why is st_blocks always reported in 512-byte blocks? |
1,456,364,840,000 |
There are two Linux kernel functions:
get_ds() and get_fs()
According to this article, I know ds is short for data segment.
However, I cannot guess what "fs" is short for.
Any explanations?
|
The FS comes from the additional segment register named FS on the 386 architecture (end of second paragraph).
My guess is that after DS for Data Segment and ES for Extra Segment Intel just went for the next characters in the alphabet (FS, GS). You can see the 386 register on the wiki page, on the graphic on the right side.
From the linux kernel source on my Linux Mint system (arch/x86/include/asm/uaccess.h):
/*
* The fs value determines whether argument validity checking should be
* performed or not. If get_fs() == USER_DS, checking is performed, with
* get_fs() == KERNEL_DS, checking is bypassed.
*
* For historical reasons, these macros are grossly misnamed.
*/
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define KERNEL_DS MAKE_MM_SEG(-1UL)
#define USER_DS MAKE_MM_SEG(TASK_SIZE_MAX)
#define get_ds() (KERNEL_DS)
#define get_fs() (current_thread_info()->addr_limit)
#define set_fs(x) (current_thread_info()->addr_limit = (x))
| What is "fs" short for in kernel function "get_fs()"? |
1,456,364,840,000 |
It was explained in this Stack Overflow thread that each logical terminal has a "pseudo-terminal", and that writing to one:
$ cat some-file.txt > /dev/ttys002
will cause the data to appear in that terminal window. What's the reason for providing a file-like API to terminal windows? Is there any use case where this is helpful?
Till here copied verbatim.
Not limited to pseudo-terminals, it is available to /dev/tty* as well.
|
Most hardware devices offer a file-like API. This is done because it makes both the design of the operating system and the design of applications simpler. The OS only has to have a file API and not a separate terminal API and a separate disk API and a separate sound API and so on. Applications that are not using features specific to a particular kind of hardware can use the file API without caring whether they are accessing a regular file or a hardware device.
A lot of hardware has capabilities that are specific to a type of device. Applications can invoke these capabilities via ioctl. Some hardware doesn't appear as files because you can't simply read or write a stream of bytes to it. For example Linux doesn't expose network interfaces as device files, because network interfaces work on packets, not on individual bytes.
Historically, terminals were hardware devices. Nowadays most terminals are provided by emulators, either in a graphical environment or over the network. Nonetheless even pseudo-terminals appear like hardware devices, because the kernel contains special handling to track which processes are running on which terminal.
On every unix variant, /dev/tty means “the current terminal for this process”. In other words, whenever a process opens that file, it designates the process's controlling terminal. This allows a process to interact via its terminal even when its standard input and output descriptors have been redirected.
Each terminal has an associated device file, which is either a hardware terminal (tty, for example /dev/tty1, /dev/tty2, … for the text mode virtual consoles on Linux, or /dev/ttyS0, … for serial ports on Linux) or an emulated terminal (pty, short for pseudo-terminal; /dev/pts/NUMBER on Linux). This is the file through which processes exchange data with the terminal driver or emulator.
It's because terminals are files that you can run applications and display their output to the terminal. When you run a program at the command line, by default, its output goes to the terminal, but you can redirect it to a file.
| Use case of providing file-like API to terminal/console |
1,456,364,840,000 |
Does FreeBSD support in any way the TWAIN API? If not, what is the API that could be used to read data from imaging devices (webcams) - preferably in a portable way.
|
The FOSS scanner/imaging API is SANE. You may need to install Linux compatibility files in order to allow it to access the webcam as a V4L device.
| TWAIN API support on FreeBSD |
1,456,364,840,000 |
I have spent half a day, but still can't figure out why the dash shell launched with an execl call just becomes a zombie.
Below is a minimal test case — I just fork a child, duplicate the std[in,out,err] descriptors, and launch sh.
#include <cstdio>
#include <fcntl.h>
#include <cstring>
#include <stdlib.h>
#include <cerrno>
#include <unistd.h>
int main() {
int pipefd[2];
enum {
STDOUT_TERM = 0,
STDIN_TERM = 1
};
if (pipe(pipefd) == -1) { //make a pipe
perror("pipe");
return 0;
}
pid_t pid = fork();
if (pid == 0)
{// Child
dup2(pipefd[STDIN_TERM], STDIN_FILENO);
dup2(pipefd[STDOUT_TERM], STDOUT_FILENO);
dup2(pipefd[STDOUT_TERM], STDERR_FILENO);
execl("/bin/sh","sh", (char*)NULL);
// Nothing below this line should be executed by child process. If so, print err
perror("For creating a shell process");
exit(1);
}
__asm("int3");
puts("Child launched");
}
When I launch it in a debugger, look at the pid variable at the line with the breakpoint (above the puts() call), and then look at the corresponding process with ps, every time I get something like
2794 pts/10 00:00:00 sh <defunct>
I.e., it's a zombie
|
You're leaving a zombie, trivially, because you didn't wait on your child process.
Your shell is immediately exiting because you've set up its STDIN in a nonsensical way. pipe returns a one-way communications channel. You write to pipefd[1] and read it back from pipefd[0]. You did a bunch of dup2 calls which led the shell to attempt to read (STDIN) from the write end of the pipe.
Once you swap the numbers in your enum, you get the shell sitting forever on read. That's probably not what you want either, but it's about all you can expect when you've got a shell piped to itself.
Presuming you're attempting to use the shell from your parent process, you need to call pipe twice (and both in the parent): one of the pipes you write to (and the shell reads from, on stdin) and the other the shell writes to (stdout/stderr) and you read from. Or if you want, use socketpair instead.
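As a sketch of that two-pipe layout (here in Python, whose os.pipe/os.dup2/os.execl mirror the C calls):

```python
import os

# Two one-way pipes: parent writes commands in, reads output back.
in_r, in_w = os.pipe()    # parent -> shell (the shell's stdin)
out_r, out_w = os.pipe()  # shell -> parent (the shell's stdout/stderr)

pid = os.fork()
if pid == 0:
    # Child: wire the read end to stdin, the write end to stdout/stderr,
    # close every original pipe fd, then exec the shell.
    os.dup2(in_r, 0)
    os.dup2(out_w, 1)
    os.dup2(out_w, 2)
    for fd in (in_r, in_w, out_r, out_w):
        os.close(fd)
    os.execl("/bin/sh", "sh")
    os._exit(1)           # only reached if execl failed

# Parent: close the ends the child uses.
os.close(in_r)
os.close(out_w)

os.write(in_w, b"echo hello from the shell\n")
os.close(in_w)            # EOF on its stdin makes the shell exit cleanly
reply = os.read(out_r, 4096)
os.close(out_r)
os.waitpid(pid, 0)        # reap the child -- no zombie left behind
print(reply.decode(), end="")
```

Closing the write end of the shell's stdin pipe is what lets the shell terminate; waitpid then reaps it.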
| The «sh» being launched with execl() becomes a zombie |
1,456,364,840,000 |
I reversed statements in if/else, corrected now.
I am reading a code snippet from Advanced Programming in the UNIX® Environment:
The program tests its standard input to see whether it is capable of seeking.
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
int main(void){
if(lseek(STDIN_FILENO,0, SEEK_CUR) == -1)
printf("cannot seek\n");
else{
printf("seek ok\n");
}
}
I compile and run it (under Ubuntu 18.04.2 LTS) but don't understand the following behaviors.
//1
$ ./a.out
cannot seek
//2
$ ./a.out < /etc/passwd
seek ok
//3
$ cat < /etc/passwd | ./a.out
cannot seek
//4
$ ./a.out < /var/spool/cron/FIFO
cannot seek
Why does //1 print "cannot seek"? An empty stdin should be able to seek, I think. Is it because stdin has not been opened yet? I heard that normally stdin, stdout and stderr are opened when a program starts to run.
Why is //2 OK and //3 not? I think they are the same.
|
//1 ./a.out:
If you do no redirection of stdin (no pipe and no <), stdin is inherited from the parent process. As you run a.out interactively in a shell, it inherits the terminal device that gets your keyboard input as stdin.
Terminal devices aren't usually seekable because they represent user interaction, but according to the POSIX standard lseek may return success and simply do nothing. On Linux, lseek on a terminal fails with ESPIPE.
//2 ./a.out < /etc/passwd:
Here stdin is redirected to an open file. As /etc/passwd should be a regular file, it is seekable.
//3 cat < /etc/passwd | ./a.out:
Here you start two processes (cat and ./a.out) and connect them with a pipe.
cat (without other arguments) reads its stdin (/etc/passwd) and copies it to its stdout (the pipe connecting to ./a.out). This is not the same case as //2. From the perspective of ./a.out, stdin cannot seek because it is only a pipe connecting to another process.
//4 ./a.out < /var/spool/cron/FIFO:
Here you have a named pipe or a similar special file. This case is similar to //3: you have a unidirectional connection to another process, and these are not seekable.
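The distinction can be checked directly; here is a sketch in Python, whose os.lseek wraps the same system call:

```python
import errno
import os
import tempfile

# A regular file is seekable:
with tempfile.TemporaryFile() as f:
    assert os.lseek(f.fileno(), 0, os.SEEK_CUR) == 0  # succeeds, offset 0

# A pipe is not: on Linux, lseek() fails with ESPIPE.
r, w = os.pipe()
try:
    os.lseek(r, 0, os.SEEK_CUR)
    seekable = True
except OSError as e:
    seekable = False
    assert e.errno == errno.ESPIPE
print("pipe seekable:", seekable)   # pipe seekable: False
os.close(r)
os.close(w)
```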
| Problems when testing whether standard input is capable of seeking |
1,456,364,840,000 |
Is there any API that can be used to manipulate Linux routing table? I want to write a program that listens to sockets and then modifies routing table accordingly, just a simple code, but need an API.
|
You can use Netlink. From the wiki,
Netlink was designed for and is used to transfer miscellaneous
networking information between the Linux kernel space and user space
processes. Networking utilities such as iproute2 use Netlink to
communicate with the Linux kernel from user space. Netlink consists of
a standard socket-based interface for user space processes and an
internal kernel API for kernel modules. It is designed to be a more
flexible successor to ioctl. Originally, Netlink used the AF_NETLINK
socket family.
My personal preference would be bash scripts for such tasks, since I can specify the iptables rules/routing in my script itself. If you are using a programming language like C, you can invoke system() and then use the return value in your program.
There is one API named haxwithaxe available from here
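For illustration, here is a minimal sketch of a Netlink routing-table dump (RTM_GETROUTE) using a raw AF_NETLINK socket in Python; the constants are taken from <linux/rtnetlink.h> and <linux/netlink.h>, and real code would loop over the multipart reply and parse the route attributes rather than just peek at the first message header:

```python
import os
import socket
import struct

NETLINK_ROUTE = 0
RTM_GETROUTE = 26       # request a routing-table dump
RTM_NEWROUTE = 24       # type of each route entry in the reply
NLMSG_DONE = 3          # terminates a multipart dump
NLM_F_REQUEST = 0x1
NLM_F_DUMP = 0x300      # NLM_F_ROOT | NLM_F_MATCH

sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_ROUTE)
sock.bind((0, 0))       # let the kernel assign our netlink port id

# struct nlmsghdr (16 bytes) followed by struct rtmsg (12 bytes)
rtmsg = struct.pack("=BBBBBBBBI", socket.AF_INET, 0, 0, 0, 0, 0, 0, 0, 0)
header = struct.pack("=IHHII",
                     16 + len(rtmsg),                 # total message length
                     RTM_GETROUTE,
                     NLM_F_REQUEST | NLM_F_DUMP,
                     1,                               # sequence number
                     os.getpid())
sock.send(header + rtmsg)

reply = sock.recv(65536)
msg_len, msg_type = struct.unpack_from("=IH", reply)
print("first reply message: type", msg_type, "length", msg_len)
sock.close()
```

Dumping the table needs no privileges; modifying routes (RTM_NEWROUTE with NLM_F_CREATE) uses the same message layout but requires CAP_NET_ADMIN.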
| API for IPROUTE2 in any programming language |
1,456,364,840,000 |
We are experiencing a weird issue on our servers (Debian 8.9). We have an API which is a PHP application. It queries an Elasticsearch instance that is on a separate server.
Every 2 hours, we experience 500 errors; this lasts for 1 or 2 minutes, rarely more:
[2017-10-19 20:52:10] +2 hours
[2017-10-19 22:51:59] +2 hours
[2017-10-20 00:52:02] +2 hours
[2017-10-20 02:52:14] +2 hours
[2017-10-20 04:52:28] +2 hours
Sometimes it is +4 hours or +6.
Here is the detail of the error:
request.CRITICAL: Uncaught PHP Exception Elastica\Exception\Connection\HttpException:
"Operation timed out"
Which is quite clear. The API tries to connect to the elasticsearch instance until it reaches the specified timeout of the http client.
What could cause this? How to debug this kind of issue?
Of course, when checking all the URL referrers later, everything is OK.
|
They finally managed to find the issue, and the root cause is totally stupid. In fact, a monitoring view of the ES cluster sent a lot of queries to ES, about 6 times more than the application itself!
Every 2 hours the memory usage was too high and the server was unavailable for several minutes until the memory was cleared (garbage collection).
Other parameters were also optimized and/or increased.
| Timeouts between an API and an elasticsearch server every 2 hours |
1,456,364,840,000 |
I am trying to extract some info about github repositories using its API, apparently jq is the way to go.
I can use this command to view all the available info:
curl 'https://api.github.com/repos/tmux-plugins/tpm' | jq .
Output:
{
"id": 19935788,
"node_id": "MDEwOlJlcG9zaXRvcnkxOTkzNTc4OA==",
"name": "tpm",
"full_name": "tmux-plugins/tpm",
"private": false,
"owner": {
"login": "tmux-plugins",
"id": 8289877,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjgyODk4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8289877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmux-plugins",
"html_url": "https://github.com/tmux-plugins",
"followers_url": "https://api.github.com/users/tmux-plugins/followers",
"following_url": "https://api.github.com/users/tmux-plugins/following{/other_user}",
"gists_url": "https://api.github.com/users/tmux-plugins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmux-plugins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmux-plugins/subscriptions",
"organizations_url": "https://api.github.com/users/tmux-plugins/orgs",
"repos_url": "https://api.github.com/users/tmux-plugins/repos",
"events_url": "https://api.github.com/users/tmux-plugins/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmux-plugins/received_events",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/tmux-plugins/tpm",
"description": "Tmux Plugin Manager",
"fork": false,
"url": "https://api.github.com/repos/tmux-plugins/tpm",
"forks_url": "https://api.github.com/repos/tmux-plugins/tpm/forks",
"keys_url": "https://api.github.com/repos/tmux-plugins/tpm/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/tmux-plugins/tpm/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/tmux-plugins/tpm/teams",
"hooks_url": "https://api.github.com/repos/tmux-plugins/tpm/hooks",
"issue_events_url": "https://api.github.com/repos/tmux-plugins/tpm/issues/events{/number}",
"events_url": "https://api.github.com/repos/tmux-plugins/tpm/events",
"assignees_url": "https://api.github.com/repos/tmux-plugins/tpm/assignees{/user}",
"branches_url": "https://api.github.com/repos/tmux-plugins/tpm/branches{/branch}",
"tags_url": "https://api.github.com/repos/tmux-plugins/tpm/tags",
"blobs_url": "https://api.github.com/repos/tmux-plugins/tpm/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/tmux-plugins/tpm/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/tmux-plugins/tpm/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/tmux-plugins/tpm/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/tmux-plugins/tpm/statuses/{sha}",
"languages_url": "https://api.github.com/repos/tmux-plugins/tpm/languages",
"stargazers_url": "https://api.github.com/repos/tmux-plugins/tpm/stargazers",
"contributors_url": "https://api.github.com/repos/tmux-plugins/tpm/contributors",
"subscribers_url": "https://api.github.com/repos/tmux-plugins/tpm/subscribers",
"subscription_url": "https://api.github.com/repos/tmux-plugins/tpm/subscription",
"commits_url": "https://api.github.com/repos/tmux-plugins/tpm/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/tmux-plugins/tpm/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/tmux-plugins/tpm/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/tmux-plugins/tpm/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/tmux-plugins/tpm/contents/{+path}",
"compare_url": "https://api.github.com/repos/tmux-plugins/tpm/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/tmux-plugins/tpm/merges",
"archive_url": "https://api.github.com/repos/tmux-plugins/tpm/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/tmux-plugins/tpm/downloads",
"issues_url": "https://api.github.com/repos/tmux-plugins/tpm/issues{/number}",
"pulls_url": "https://api.github.com/repos/tmux-plugins/tpm/pulls{/number}",
"milestones_url": "https://api.github.com/repos/tmux-plugins/tpm/milestones{/number}",
"notifications_url": "https://api.github.com/repos/tmux-plugins/tpm/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/tmux-plugins/tpm/labels{/name}",
"releases_url": "https://api.github.com/repos/tmux-plugins/tpm/releases{/id}",
"deployments_url": "https://api.github.com/repos/tmux-plugins/tpm/deployments",
"created_at": "2014-05-19T09:18:38Z",
"updated_at": "2021-03-03T04:30:43Z",
"pushed_at": "2021-02-23T11:07:55Z",
"git_url": "git://github.com/tmux-plugins/tpm.git",
"ssh_url": "[email protected]:tmux-plugins/tpm.git",
"clone_url": "https://github.com/tmux-plugins/tpm.git",
"svn_url": "https://github.com/tmux-plugins/tpm",
"homepage": null,
"size": 204,
"stargazers_count": 6861,
"watchers_count": 6861,
"language": "Shell",
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 251,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 79,
"license": {
"key": "mit",
"name": "MIT License",
"spdx_id": "MIT",
"url": "https://api.github.com/licenses/mit",
"node_id": "MDc6TGljZW5zZTEz"
},
"forks": 251,
"open_issues": 79,
"watchers": 6861,
"default_branch": "master",
"temp_clone_token": null,
"organization": {
"login": "tmux-plugins",
"id": 8289877,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjgyODk4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8289877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmux-plugins",
"html_url": "https://github.com/tmux-plugins",
"followers_url": "https://api.github.com/users/tmux-plugins/followers",
"following_url": "https://api.github.com/users/tmux-plugins/following{/other_user}",
"gists_url": "https://api.github.com/users/tmux-plugins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmux-plugins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmux-plugins/subscriptions",
"organizations_url": "https://api.github.com/users/tmux-plugins/orgs",
"repos_url": "https://api.github.com/users/tmux-plugins/repos",
"events_url": "https://api.github.com/users/tmux-plugins/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmux-plugins/received_events",
"type": "Organization",
"site_admin": false
},
"network_count": 251,
"subscribers_count": 83
}
How would I extract just the "description"?
How do I extract the "language" & "description"?
I ask question 2 as I have seen examples online (when trying to find an answer for myself), that show multiple fields being extracted in one, this would be helpful to me and others finding this question.
Thank you!
|
In all cases below, file.json is the name of a file containing your JSON document. You could obviously use jq as you've done in the question instead, and have it read from a pipe connected to the output of curl.
Pulling the requested fields out, one by one:
$ jq -r '.description' file.json
Tmux Plugin Manager
$ jq -r '.language' file.json
Shell
The -r option is used above (and below) to get the "raw data" rather than JSON encoded data.
Getting both at once (you'd have issues telling them apart if any of them contain embedded newline characters):
$ jq -r '.language, .description' file.json
Shell
Tmux Plugin Manager
Getting them as a CSV record (will be properly quoted so that embedded commas and newlines will be parsable by a CSV parser, and embedded double quotes will be CSV encoded too):
$ jq -r '[.language, .description] | @csv' file.json
"Shell","Tmux Plugin Manager"
Tab-delimited (embedded newlines and tabs will show up as \n and \t respectively):
$ jq -r '[.language, .description] | @tsv' file.json
Shell Tmux Plugin Manager
Letting jq produce shell code containing two variable assignments. The values will be properly quoted for the shell.
$ jq -r '@sh "lang=\(.language)", @sh "desc=\(.description)"' file.json
lang='Shell'
desc='Tmux Plugin Manager'
Getting the shell to actually evaluate these statements:
$ eval "$( jq -r '@sh "lang=\(.language)", @sh "desc=\(.description)"' file.json )"
$ printf 'lang is "%s" and desc is "%s"\n' "$lang" "$desc"
lang is "Shell" and desc is "Tmux Plugin Manager"
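If jq isn't available, the same extraction can be done with any JSON parser; here is a sketch using Python's json module, with an inline sample standing in for the API response:

```python
import json

# Inline sample standing in for file.json / the GitHub API response.
doc = json.loads('{"description": "Tmux Plugin Manager", "language": "Shell"}')

print(doc["description"])   # Tmux Plugin Manager
print(doc["language"])      # Shell

# Both at once, tab-separated (like jq's @tsv):
print("\t".join([doc["language"], doc["description"]]))
```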
| how to extract fields of info from github api using jq |
1,456,364,840,000 |
I've currently got code that forks two processes. The first reads a http streaming radio and pushes the data down a pipe (opened with pipe() ) for the second process to read, decode and output to the sound card using OSS.
I've been trying to debug the decoding part (separate issue) and I've come across a situation where the pipe has a file descriptor of 0 when I print it out. As far as I can tell this means stdin. Is this a known problem with pipe, that it can accidentally open one of the standard file descriptors? If so, how do I get around it?
My pipe/fork code is below. There is quite a bit of other code, that I hope is irrelevant.
//this is the "switch channel" loop
while(1)
{
/*create the pipes
*
* httpPipe is for transfer of the stream between the readProcess and the playProcess
*
* playPPipe is for transfer of commands from the main process to the playProcess
*
* readPPipe is for transfer of commands from the main process to the readProcess
*
*/
if(pipe(httpPipe) == -1)
{
cout << "ERROR:: Error creating httpPipe: " << endl;
}
if(pipe(PlayPPipe) == -1)
{
cout << "ERROR:: Error creating PlayPPipe: " << endl;
}
if(pipe(ReadPPipe) == -1)
{
cout << "ERROR:: Error creating ReadPPipe: " << endl;
}
cout << "httpPipe:" << httpPipe[0] << ":" << httpPipe[1] << endl;
cout << "PlayPPipe:" << PlayPPipe[0] << ":" << PlayPPipe[1] << endl;
cout << "ReadPPipe:" << ReadPPipe[0] << ":" << ReadPPipe[1] << endl;
pid = fork();
if(pid == 0)
{
/* we are in the readProcess
* this process uses libcurl to read the icestream from the url
* passed to it in urlList. It then writes this data to writeHttpPipe.
* this continues until the "q" command is sent to the process via
* readPPipe/readReadPPipe. when this happens the curl Callback function
* returns 0, and the process closes all fds/pipes it has access to and cleans
* up curl and exits.
*/
rc = 0;
close(httpPipe[0]);
writeHttpPipe = httpPipe[1];
close(ReadPPipe[1]);
readReadPPipe = ReadPPipe[0];
rc = readProcess(urlList.at(playListNum));
if(rc > 0)
{
cout << "ERROR:: has occured in reading stream: " << urlList.at(playListNum) << endl;
close(writeHttpPipe);
close(readReadPPipe);
exit(16);
}
}else if(pid > 0)
{
pid = fork();
if(pid ==0)
{
/* we are in the PlayProcess
* the playProcess initialises libmpg123 and the OSS sound subsystem.
* It then reads from httpPipe[0]/readHttpPipe until it recieves a "q" command
* via PlayPPipe[0]/readPlayPPipe. at which point it closes all fd's and cleans
* up libmpg123 handles and exits.
*/
close(httpPipe[1]);
sleep(1);
close(PlayPPipe[1]);
readPlayPPipe = PlayPPipe[0];
playProcess();
exit(0);
}else if(pid > 0)
{
/* This is the main process
* this process reads from stdin for commands.
* if these are valid commands it processes this command and
* sends the relevant commands to the readProccess and the PlayProcess via
* the PlayPPipe[1]/writePlayPPipe and ReadPPipe[1]/writeReadPPipe.
* It then does suitable clean up.
*/
string command;
//close ends of pipe that we don't use.
close(ReadPPipe[0]);
close(PlayPPipe[0]);
close(httpPipe[0]);
close(httpPipe[1]);
//assign write ends of pipes to easier variables
writeReadPPipe = ReadPPipe[1];
writePlayPPipe = PlayPPipe[1];
rc = 0;
//wait for input
while(1)
{
cin >> command;
cout << command << endl;
rc = 0;
/* Next channel command. this sends a q command to
* readProccess and playProcess to tell them to cleanup and exit().
* then it breaks out of the loop, increments the playListNum and
* we start all over again. The two processes get forked, this time with a new channel
* and we wait for input.
*/
if(command == "n")
{
rc = sendCommand("q");
if(rc != 0)
{
cout << "ERROR: failed to send command: " << command << ":" << endl;
}
break;
}
/* Quit program command.
* This sends a command to the two proceesses to cleanup and exit() and then exits.
*
*/
if(command == "q")
{
rc = sendCommand("q");
if(rc != 0)
{
cout << "ERROR: failed to send command: " << command << ":" << endl;
}
exit(0);
}
}
}else
{
cout << "ERROR:: some thing happened with the fork to playProcess..." << endl;
}
}else
{
cout << "ERROR:: some thing happened with the fork to readProcess..." << endl;
}
//clean up the pipes otherwise we get junk in them.
close(writePlayPPipe);
close(writeReadPPipe);
delete json;
//Parse JSON got from the above url into the list of urls so we can use it
JsonConfig *json = new JsonConfig(parms->GetParameter("URL"));
json->GetConfigJson();
json->ParseJson();
json->GetUrls(urlList);
cout << "####---->UrlListLength: " << urlList.size() << endl;
//increment which url in the list we are going to be playing next.
playListNum++;
//if the playlist is greater than or equal to the urlist size then we are back at the start of the list
if(playListNum >= (int)urlList.size())
{
playListNum = 0;
}
}
This loop is so that I can go through a list of radio stations. When 'n' is pressed it sends a command to the two child processes which shut them down cleanly, and then closes all the pipes and loops back around, opening all them up again and forking the two processes again.
The first time it goes through the loop it seems like this works, but on the second time around I get the following output.
URL: 192.168.0.5:9000/playlist
GetConfigJsonurl: 192.168.0.5:9000/playlist
httpPipe:3:4
PlayPPipe:5:6
ReadPPipe:7:8
GetConfigJsonurl: 192.168.0.5:9000/playlist
####---->UrlListLength: 2
httpPipe:0:4
PlayPPipe:7:8
ReadPPipe:9:10
So basically I would like to know how to stop pipe from opening up the std file descriptors.
|
A new file descriptor always occupies the lowest integer not already in use.
$ cat >test.c
main(){exit(open("/dev/null",0));}
^D
$ cc test.c
$ ./a.out; echo $?
3
$ ./a.out <&-; echo $?
0
$ ./a.out >&-; echo $?
1
The system doesn't care about "standard file descriptors" or anything like that. If file descriptor 0 is closed, then a new file descriptor will be assigned at 0.
Is there any place in your program or in how you're launching it that may be causing close(0)?
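The lowest-free-descriptor rule is a kernel property, not a C library one, so the same behavior as the C test above can be sketched from Python as well (this is an illustrative sketch, not part of the original answer):

```python
import os

r1, w1 = os.pipe()   # with 0, 1, 2 open, these land on higher numbers
os.close(0)          # free descriptor 0 (stdin)
r2, w2 = os.pipe()   # the kernel reuses the lowest free number: 0
assert r2 == 0       # the new read end took stdin's old slot
```

So if a pipe end in your program ends up at descriptor 0, something closed stdin first.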
| stop pipe() opening stdin |
1,456,364,840,000 |
What is the best way to retrieve the current time zones of a number of countries, on a daily basis? (that would take into account DST changes, of course)
Reliably
If possible the Linux way (i.e. either using internal resources, or a Linux website API)
(I'm on Ubuntu 10.04)
|
If you just want the timezone, then timezones are stored in /usr/share/zoneinfo.
If you want to be able to retrieve the current time for a number of different cities or countries, then you can pull them from the Date and Time Gateway.
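If a scripting language is acceptable, the system tzdata under /usr/share/zoneinfo can be queried directly. As a sketch, modern Python (3.9+, so not available on a stock Ubuntu 10.04) ships a zoneinfo module that reads those files and accounts for DST automatically:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # backed by /usr/share/zoneinfo on Linux

# Current UTC offset for a few zones, DST taken into account.
for zone in ("Europe/Paris", "America/New_York", "Asia/Tokyo"):
    print(zone, datetime.now(ZoneInfo(zone)).utcoffset())

# Tokyo observes no DST, so its offset is always +9 hours.
assert datetime.now(ZoneInfo("Asia/Tokyo")).utcoffset() == timedelta(hours=9)
```

Running this daily from cron would pick up any DST transitions as they happen.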
| Retrieve countries timezones |
1,456,364,840,000 |
#include <linux/io_uring.h>
main.c:1:10: fatal error: linux/io_uring.h: No such file or directory
#include <linux/io_uring.h>
^~~~~~~~~~~~~~~~~~
Kernel version 5.4.0-80.
I have not found a way to install the API header files. The ABI should be supported though.
|
On Ubuntu (which I’m guessing is what you’re using, based on the kernel version), you’ll find linux/io_uring.h in linux-libc-dev. Install that:
sudo apt install linux-libc-dev
and you should find the header in /usr/include/linux.
Programs written using liburing use that library’s headers, so installing that is unlikely to help; but if you want to try, the relevant package is liburing-dev. The io_uring.h header there defines the same interface as the kernel UAPI’s io_uring.h.
| Missing header file linux/io_uring.h |
1,456,364,840,000 |
How and where does this Gnome applet get weather information? Same question for sunrise and sunset times.
I suppose there is a web API it queries but which one and can I use it?
(sorry for the screenshot in french)
|
gnome-weather uses libgweather underneath which in turn uses several GWeatherProviders (defined in gweather-weather.h) to get weather information for your particular geo-location:
* GWeatherProvider:
....
* @GWEATHER_PROVIDER_METAR: METAR office, providing current conditions worldwide
* @GWEATHER_PROVIDER_IWIN: US weather office, providing 7 days of forecast
* @GWEATHER_PROVIDER_YAHOO: Yahoo Weather Service, removed in 3.27.1
* @GWEATHER_PROVIDER_YR_NO: Yr.no service, worldwide but requires attribution
* @GWEATHER_PROVIDER_OWM: OpenWeatherMap, worldwide and possibly more reliable, but requires attribution and is limited in the number of queries
....
You could look into the source code and see how they do it:
weather-metar.c,
weather-iwin.c,
weather-yrno.c,
weather-owm.c. See also weather.c
Sunrise and sunset times are computed in weather-sun.c
| How does Gnome clock/calendar applet get weather, sunset and sunrise time information? |
1,456,364,840,000 |
I'm using the Ansible module for Bluecat to make an authorized API call to get some information about a subnet. The response looks something like this:
"result": {
"changed": false,
"failed": false,
"json": "b'{\"id\":12345,\"name\":\"SUBNET NAME\",\"properties\":\"CIDR=10.2.2.0/24|allowDuplicateHost=enable|pingBeforeAssign=disable|inheritAllowDuplicateHost=true|inheritPingBeforeAssign=true|gateway=10.2.2.1|inheritDNSRestrictions=true|inheritDefaultDomains=true|inheritDefaultView=true|\",\"type\":\"IP4Network\"}\\n'",
"msg": "",
"status": 200
}
As you can see, all the useful data is in that json field, but it's some string literal abomination with escaped quotes and newlines. If I run
- debug:
msg: "{{ result | json_query('json.name') }}"
in Ansible, it gives me back the msg field instead! I can get the entire json field, but not anything inside it. If I tinker with it a little bit and trim the b at the beginning, the inner single quotes, and the extra backslash by the newline at the end, then jq .json | fromjson parses it correctly. But I'm fairly certain b'' just means byte encoding and shouldn't break the parsing, but it does. And what's with the double backslashes at the end?
Do I have any options beyond using some sed black magic to wipe out all of the escape characters? Why would a web API return a string literal like this?
|
Strip what's outside the braces {} and Ansible will parse the dictionary
subnet: "{{ result.json[2:-3] }}"
gives
subnet:
id: 12345
name: SUBNET NAME
properties: CIDR=10.2.2.0/24|allowDuplicateHost=enable|pingBeforeAssign=disable|inheritAllowDuplicateHost=true|inheritPingBeforeAssign=true|gateway=10.2.2.1|inheritDNSRestrictions=true|inheritDefaultDomains=true|inheritDefaultView=true|
type: IP4Network
Optionally, use a more robust stripping. For example, the expression below gives the same result
subnet: "{{ result.json|regex_replace(_regex, _replace) }}"
_regex: '^.*?\{(.*)\}.*$'
_replace: '{\1}'
If you want to parse the attribute properties too, the expression below
subnet_prop: "{{ subnet|combine({'properties': dict(_prop)}) }}"
_prop: "{{ subnet.properties.split('|')|select|map('split', '=')|list }}"
gives
subnet_prop:
id: 12345
name: SUBNET NAME
properties:
CIDR: 10.2.2.0/24
allowDuplicateHost: enable
gateway: 10.2.2.1
inheritAllowDuplicateHost: 'true'
inheritDNSRestrictions: 'true'
inheritDefaultDomains: 'true'
inheritDefaultView: 'true'
inheritPingBeforeAssign: 'true'
pingBeforeAssign: disable
type: IP4Network
The boolean values are represented by strings in the dictionary above. If this is a problem, replace the split filter with regex_replace and from_yaml. Do this also if the split filter is not available
_prop: "{{ subnet.properties.split('|')|
select|
map('regex_replace', '^(.*)=(.*)$', '[\\1, \\2]')|
map('from_yaml')|list }}"
gives
subnet_prop:
id: 12345
name: SUBNET NAME
properties:
CIDR: 10.2.2.0/24
allowDuplicateHost: enable
gateway: 10.2.2.1
inheritAllowDuplicateHost: true
inheritDNSRestrictions: true
inheritDefaultDomains: true
inheritDefaultView: true
inheritPingBeforeAssign: true
pingBeforeAssign: disable
type: IP4Network
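As an alternative to slicing inside Jinja, note that the b'...' wrapper is just the repr of a Python bytes object, so it can be decoded safely outside of Ansible, e.g. in a custom filter plugin or helper script. A standard-library-only sketch (the field value below is abbreviated from the question):

```python
import ast
import json

# The API returned the *repr* of a Python bytes object as a string.
raw = "b'{\"id\":12345,\"name\":\"SUBNET NAME\",\"type\":\"IP4Network\"}\\n'"

payload = ast.literal_eval(raw)      # -> bytes, without eval'ing arbitrary code
data = json.loads(payload.decode())  # json.loads tolerates the trailing newline

assert data["name"] == "SUBNET NAME"
assert data["id"] == 12345
```

This also explains the double backslash: the newline inside the bytes literal gets escaped once more when the repr is embedded in the outer JSON string.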
| How to parse an escaped json string with ansible/jmespath/jq? |
1,456,364,840,000 |
What specifically needs to be changed in the below in order for a program running on an Ubuntu-latest GitHub runner to successfully connect to a Twisted web server running on localhost of the same GitHub runner?
The same code works on a Windows laptop, with only minor changes like using where twistd on Windows versus which twistd on Ubuntu as shown below.
SOME RELEVANT CODE:
The code to start the Twisted web server on localhost is in a script called twistdStartup.py that reads as follows:
import subprocess
import re
import os
escape_chars = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')
def runShellCommand(commandToRun, numLines=999):
proc = subprocess.Popen( commandToRun,cwd=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
while True:
line = proc.stdout.readline()
if line:
if numLines == 1:
return escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n'))
else:
print(escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n')))
else:
break
print("About to create venv.")
runShellCommand("python3 -m venv venv")
print("About to activate venv.")
runShellCommand("source venv/bin/activate")
print("About to install flask.")
runShellCommand("pip install Flask")
print("About to install requests.")
runShellCommand("pip install requests")
print("About to install twisted.")
runShellCommand("pip install Twisted")
##Set environ variable for the API
os.environ['PYTHONPATH'] = '.'
print("Done updating PYTHONPATH. About to start server.")
twistdLocation = runShellCommand("which twistd", 1)
startTwistdCommand = twistdLocation+" web --wsgi myAPI.app &>/dev/null & "
print("startTwistdCommand is: ", startTwistdCommand)
subprocess.call(startTwistdCommand, shell=True)
print("startTwistdCommand should be running in the background now.")
The code in a calling program named startTheAPI.py that invokes the above twistdStartup.py is:
myScript = 'twistdStartup.py'
print("About to start Twisted.")
subprocess.Popen(["python", myScript], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print("Just finished starting Twisted.")
The logs produced by that step in the GitHub Action job are as follows:
About to start Twisted.
Just finished starting Twisted.
The results of running the curl command on the same Ubuntu-latest GitHub runner in the next step of the same job after waiting 30 seconds for startup are as follows:
$ curl http://localhost:1234/api/endpoint/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to localhost port 1234 after 1 ms: Connection refused
The contents of a callTheAPI.py program that runs the curl command would look like:
import subprocess
import re
import os
escape_chars = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')
def runShellCommand(commandToRun, numLines=999):
proc = subprocess.Popen( commandToRun,cwd=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
while True:
line = proc.stdout.readline()
if line:
if numLines == 1:
return escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n'))
else:
print(escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n')))
else:
break
print("About to call API.")
runShellCommand("curl http://localhost:1234/api/endpoint/")
print("Done calling API.")
SUMMARY:
The twistdStartup.py script is running, but is failing to provide any output to the logs despite all the print() commands. The curl command fails to connect to the correctly-stated http://localhost:1234/api/endpoint/
|
Regarding:
The curl command fails to connect to the correctly-stated http://localhost:1234/api/endpoint/
The main problem here is that every command you run with runShellCommand("somecommand") is executed in a new shell, so all changes such as functions, variables, environment variables, etc. disappear after the command is executed.
For example, try running this simple script in your twistdStartup.py:
print("Running echo $PATH")
runShellCommand("echo $PATH")
print("Running PATH=/")
runShellCommand("PATH=/")
print("Running echo $PATH")
runShellCommand("echo $PATH")
The output (in my case):
Running echo $PATH
/home/edgar/.local/bin:/home/edgar/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/go/bin
Running PATH=/
Running echo $PATH
/home/edgar/.local/bin:/home/edgar/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/go/bin
As you can see above the assignment Running PATH=/ is ignored when I run again runShellCommand("echo $PATH").
The possible solution (untested with all your commands) here is that you wrap all your runShellCommand method calls in a single method call (or convert the code to a shell script), like this:
Section twistdStartup.py:
runShellCommand(
"""
set -e
echo About to create venv.
python3 -m venv venv
echo About to activate venv.
. venv/bin/activate
echo About to install flask.
pip install Flask
echo About to install requests.
pip install requests
echo About to install twisted.
pip install Twisted
export PYTHONPATH='.'
echo Done updating PYTHONPATH. About to start server.
twistdLocation=$(which twistd)
echo "Running ${twistdLocation} web --wsgi customControllerAPI.app &>/dev/null &"
(
$twistdLocation web --wsgi myAPI.py >/dev/null 2>&1
)&
echo startTwistdCommand should be running in the background now.
""")
Testing on GitHub Actions I noticed that source venv/bin/activate caused a failure because source was not a valid command (possibly because the default shell for Ubuntu on GitHub is dash).
Instead of source you have to use . (which is far better): . venv/bin/activate.
Because of the error above the command: which twistd was not working correctly because the venv was not sourced. Thus the command:
$twistdLocation web --wsgi customControllerAPI.app &>/dev/null will fail and the Flask API will never run (for that reason you get the message: Failed to connect to localhost port 1234 after 1 ms: Connection refused)
Regarding:
but is failing to provide any output to the logs despite all the print() commands
By looking your file api_server.py I see you are calling the python script setup.py:
apiServer = subprocess.Popen(["python", "setup.py"], stdout=subprocess.DEVNULL, cwd=str(path))
Here you are not getting the output of the command python setup.py, so I suggest you remove that line and add these ones:
apiServer = subprocess.Popen(["python", "setup.py"], cwd=str(path),stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
line = apiServer.stdout.readline()
if line:
print(self.escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n')))
else:
break
You may want to change the line twistdLocation=$(which twistd) to twistdLocation=$(command -v twistd). See Why not use "which"? What to use then?
Also I suggest you add the line set -e in the first line of the script. That command is used to abort the execution of the following commands if some error has been thrown (so in your case if some dependencies fails to install itself then the scripts stops).
About ($twistdLocation web --wsgi myAPI.py >/dev/null 2>&1 )&: I used that in order to prevent the Python subprocess from waiting to read output from that command, since that was causing the server to get stuck indefinitely.
If you are interested in the logs (stdout and stderr) of the command: $twistdLocation ... you should redirect the output to a file, like this:
(
$twistdLocation web --wsgi myAPI.py >/tmp/twistd.logs 2>&1
)&
Also you will have to edit your GitHub action to include a command that cats the content of /tmp/twistd.logs, like this:
steps:
- uses: actions/checkout@v3
- shell: bash
name: Start the localhost Server
run: |
echo "Current working directory is: "
echo "$PWD"
python code/commandStartServer.py
echo "Checking twistd logs"
cat /tmp/twistd.logs
So in these files you have to have the following code:
api_server.py
import os
import re
class api_server:
def __init__(self):
pass
escape_chars = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')
def startServer(self):
path = os.getcwd()+"/api/"
print(path)
print("About to start .")
import subprocess
#The version of command on next line runs the server in the background. Comment it out and replace with the one below it if you want to see output.
apiServer = subprocess.Popen(["python", "setup.py"], cwd=str(path),stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
line = apiServer.stdout.readline()
if line:
print(self.escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n')))
else:
break
#The second version of the command below will print output to the console, but will also halt your program because it runs the server in the foreground.
#apiServer = subprocess.Popen(["python", "setup.py"], cwd=str(path))
print("Just finished starting .")
def destroyServer(self):
... # your current code
setup.py
import subprocess
import re
import os
escape_chars = re.compile(r'\x1B\[[0-?]*[ -/]*[@-~]')
def runShellCommand(commandToRun, numLines=999):
proc = subprocess.Popen( commandToRun,cwd=None, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
line = proc.stdout.readline()
if line:
if numLines == 1:
return escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n'))
else:
print(escape_chars.sub('', line.decode('utf-8').rstrip('\r|\n')))
else:
break
runShellCommand(
"""
set -e
echo About to create venv.
python3 -m venv venv
echo About to activate venv.
. venv/bin/activate
echo About to install flask.
pip install Flask
echo About to install requests.
pip install requests
echo About to install twisted.
pip install Twisted
export PYTHONPATH='.'
echo Done updating PYTHONPATH: $PYTHONPATH. About to start server.
twistdLocation=$(which twistd)
echo "Running ${twistdLocation} web --wsgi customControllerAPI.app &>/dev/null &"
(
$twistdLocation web --wsgi myAPI.py >/dev/null 2>&1
)&
echo startTwistdCommand should be running in the background now.
""")
localhost-api.yml
name: localhost-api
on:
push:
branches:
- main
jobs:
start-and-then-call-localhost-api:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- shell: bash
name: Start the localhost Server
run: |
echo "Current working directory is: "
echo "$PWD"
python code/commandStartServer.py
echo "Checking twistd logs"
cat /tmp/twistd.logs
- shell: bash
name: Call the localhost API
run: |
echo "Current working directory is: "
echo "$PWD"
python code/commandCallAPI.py
- shell: bash
name: Destroy the localhost Server
run: |
echo "Current working directory is: "
echo "$PWD"
python code/commandDestroyServer.py
| Ubuntu-latest GitHub runner cannot Connect to localhost API |
1,456,364,840,000 |
For process_vm_readv the linux man page states:
[...] (Avoid) spanning memory pages (typically 4KiB) in a single remote iovec element. (Instead, split the remote read into two remote_iov elements and have them merge back into a single write local_iov entry. The first read entry goes up to the page boundary, while the second starts on the next page boundary.)
I get why this is a thing but I don't quite understand how I should work around it. Do I somehow need to find out where the page boundaries are? Or does the function figure this out on its own as long as I provide 2 remote_iov elements? And if I read more than 4KiB and potentially cross 2 page boundaries, do I need to split the remote element into 3 parts?
|
You should really read the whole paragraph -- that way of splitting the iovecs is not a hard requirement. It's only supposed to help in the case of a partial read, though it's not clear how it could help ;-)
That manpage is quite dubious and confusing; my testing shows that process_vm_readv() will always error out if the iov_base of the first iovec from the remote_iov list is not a valid address, but return a partial read if any of the pages spanned by iov_base + iov_len or the rest of the iovecs are not mapped in (which is expected and useful, but contradicts the emphasized parts below).
Note, however, that these system calls do not check the memory regions
in the remote process until just before doing the read/write. Consequently, a partial read/write (see RETURN VALUE) may result if one of
the remote_iov elements points to an invalid memory region in the
remote process. No further reads/writes will be attempted beyond that
point. Keep this in mind when attempting to read data of unknown
length (such as C strings that are null-terminated) from a remote
process, by avoiding spanning memory pages (typically 4KiB) in a single
remote iovec element. (Instead, split the remote read into two
remote_iov elements and have them merge back into a single write
local_iov entry. The first read entry goes up to the page boundary,
while the second starts on the next page boundary.)
[...]
This return
value may be less than the total number of requested bytes, if a partial read/write occurred. (Partial transfers apply at the granularity
of iovec elements. These system calls won't perform a partial transfer
that splits a single iovec element.)
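If you do want to follow the manpage's (optional, per the above) splitting suggestion: no syscall is needed to find the boundaries. The page size comes from sysconf(_SC_PAGESIZE) (or mmap.PAGESIZE in Python), and cutting the remote range at multiples of it is plain arithmetic. A sketch of the split (each resulting piece would become one remote_iov element, so yes, crossing two boundaries yields three pieces):

```python
import mmap

def split_at_pages(addr, length, page=mmap.PAGESIZE):
    """Cut [addr, addr+length) into (start, len) pieces that never
    cross a page boundary -- one entry per touched page."""
    out = []
    end = addr + length
    while addr < end:
        next_boundary = (addr // page + 1) * page  # first boundary above addr
        chunk_end = min(next_boundary, end)
        out.append((addr, chunk_end - addr))
        addr = chunk_end
    return out

page = mmap.PAGESIZE  # typically 4096
# A read starting 100 bytes before a boundary and crossing two boundaries:
iovs = split_at_pages(3 * page - 100, 100 + page + 50, page)
assert iovs == [(3 * page - 100, 100), (3 * page, page), (4 * page, 50)]
assert sum(l for _, l in iovs) == 100 + page + 50
```

The benefit, per the manpage's reasoning, is that a partial read then tells you exactly which page was the first unmapped one, since transfers stop at iovec granularity.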
| Read arbitrary amount of memory with process_vm_readv |
1,456,364,840,000 |
Many fields in structs defined by the Unix API have prefixes, like sa_ in sa_handler defined in struct sigaction. Why is it so? Why isn't sa_handler called just handler?
|
This goes back a long way, all the way to the first C versions. They didn't have a separate symbol table for structure members; the names were added to the global symbol table, with the obvious nasty global namespace pollution that causes. The workaround was the same one you use on enums today: prefix them with a couple of letters to avoid the name collisions.
https://stackoverflow.com/a/10325945/799204
| Why fields in structs defined by the Unix API have prefixes? |
1,456,364,840,000 |
Why isn't there the Unix API? I mean, as there is the Windows API.
I know a lot of things in the Unix world are modular, and those things put together create a whole system. This sounds good but does create some problems when you try to make a native Unix app.
For example, you want to program a nice word processor with a cool name WP. The Windows version of WP will be built by calling the Windows API. You can either code this directly in C, or use any of the various wrapper libraries out there. But still, the program must be constructed by calling winapi, which provides every single functionality that a programmer may need to build a Windows app, from basic system calls to GUI, 3D, multimedia or anything else with more than a decade of backwards compatibility. If it weren't like this, Wine could never exist.
Now you want to create a Unix version of WP. The standard C and C++ libraries and the POSIX API are very stable and well supported in all Unix variants. The problem occurs when you try to do more. So you need to create a window for WP, but how? There is X11, but it is not the only option. People think X11 should be replaced and are now making two incompatible replacements, Wayland and Mir. Even for X11, there are Xlib and xcb. xcb claims to be 'better', and that is true in some ways, but where is the documentation? You eventually choose Xlib for the task, but the X11 standard by itself only defines very basic features. Anything else you'd expect for a GUI application, such as window events or clipboard support, needs to be dealt with via extensions, by calling XInternAtom. This is merely my personal opinion, but the use of Atoms in X is extremely unintuitive. Another problem is that not every window manager for X supports these extensions well. So let's just leave this dirtiness to the developers of GTK+ and Qt, who break backwards compatibility with each new version. Is it even possible to have portable drag-and-drop support on Linux?
It really seems to me that the Unix community is killing themselves in the Desktop world. I know that the things I mentioned don't even matter when setting up a BSD server, but they do matter if you ever try to build a portable native Linux app.
What is making up all of this mess? Is there really an effort to clean this up and standardize things for the modern desktop environment of Unix? Why isn't there the Unix API? Will there ever be one?
|
It really seems to me that the Unix community is killing themselves in the Desktop world.
I think there is a misconception that any form of Unix exists in order to compete in the home PC market. There are some linux distros which have this focus; the first one was really Ubuntu, but it is worth considering that part of Ubuntu's original vision was to develop a user friendly operating system that could be used in parts of the world where having to pay hundreds of dollars per computer for a Microsoft license was not feasible and could mean the difference between having computers (in schools, government, etc.) and not having them.
I have not stayed up to date with how successful that's been, but in any case, it seems to me that it is a very different goal than wanting to out Apple Apple or something. Apples and oranges, as they say. Or apples and aardvarks.1 This aardvark is nothing like an apple! No, it isn't. Why would you think it is?
Now you want to create a Unix version of WP [...] You eventually choose Xlib to do the task
Only if you are a crazy person who is unlikely to complete a word processor with mass consumption potential.
Software exists in distinct layers that are assembled in stacks. X is a layer that is used on various platforms as part of the GUI stack. On those platforms, X lib is used to implement higher level libraries such as Gtk and Qt, both of which have a portable API; they can be used on OSX and Windows, neither of which uses X. On those platforms, a different lower level library/API is used to implement Gtk and Qt. This means higher level programs, such as end user GUI applications, written for Gtk or Qt, can be used anywhere with relatively minor changes.2
Those are the libraries that are used directly to implement GUI applications on POSIX systems, which is why most such applications are usually not that hard to port from one "near compliant" POSIX system to another.
So the "problem" you are referring to does not actually exist in the way you've presented it. The Unix community is not killing itself, it's doing exactly what it intends to do.
1. To explain better what I mean by this, consider the role of the free market in the evolution of GNU/Linux vs. Windows. The latter clearly is, by intention, the product of free market forces, but the former case is much more ambiguous. As good Westerners, we of course then put theory first and say, "Well obviously a product that is shaped by the free market is better than one that is not." But this is not quite true -- what it really means is that one will be prone to sell better. You seem to be asking why there's not more of an effort to make the product sell better in this sense. The answer, of course, is that there's less motivation to do so, which raises the question of what there is motivation to do. The development of GNU/Linux (in particular) has been guided a lot by developers out to please themselves, and not so much other people or a mass market. Whether or not this produces a better system I guess depends on how close your perspective is to the people who created it. The history of UNIX does involve more market forces, but it is/was a highly specialized audience. Put another way: It's a technical OS for technical people, but there is a rabbit hole available for the general public.
2. In contrast, if you write something with the Windows API, there is only one place you can use it without some very major changes.
| Why isn't there a Unix API? [closed] |
1,456,364,840,000 |
From https://unix.stackexchange.com/a/436631/674
the file /proc/$$/environ ... does not reflect any changes to the environment, but just reports what the program received when it was execed by the process.
From APUE:
Each program is also passed an environment list. Like the
argument list, the environment list is an array of character
pointers, with each pointer containing the address of a
null-terminated C string. The address of the array of pointers is
contained in the global variable environ:
extern char **environ;
Access to specific environment variables is normally through the
getenv and putenv functions, described in Section 7.9,
instead of through the environ variable. But to go through the
entire environment, the environ pointer must be used.
Are /proc/$$/environ and the global variable environ independent from each other or consistent with each other?
Do the strings accessed via environ also not reflect any changes to the environment, but just reports the environment received by execve()?
Or do the strings accessed via environ always reflect any change to them, just like that getenv always get the up-to-date environment strings?
Do the strings accessed via getenv always reflect any change and are always up-to-date?
Thanks.
|
/proc/$$/environ and the variable environ are independent. environ does reflect changes to the environment, and in fact the value of the pointer in environ also changes when environment variables are added to the environment via putenv() (but this is an implementation detail.)
We'll have to distinguish between the system call level, and the library level. At the system call level, the only mechanism related to the environment is the envp argument to the execve call. This parameter is expected to contain name=value pairs that make up the environment of the new program. This environment is copied to the stack of the new process, where the user space startup code can pick it up.
At the library level, we have
the global variable environ, which points to a copy of the environment
the functions getenv() and putenv() for examining and modifying the environment
the exec* family of functions (not including execve) which either implicitly (via environ) or explicitly (passed via a parameter) access the environment
The exec* library functions ultimately call the execve system call. The environ variable does not point to the environment on the stack; instead the environment is copied to the process heap before the environ variable is set up (this is again an implementation detail.)
Why doesn't /proc/$$/environ reflect changes to the environment? /proc/$$/environ is a virtual file provided by the kernel, and the kernel has no way of knowing what is going on at this low level in the address space in a user process. The kernel has no knowledge of the environ variable, and is unaware of the data structures used by the process to store the environment.
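The divergence is easy to observe from a running process. As an illustrative sketch on Linux (os.environ updates the C-level environment via putenv(), while /proc/self/environ stays frozen at its execve-time contents):

```python
import os

os.environ["DEMO_ADDED_AFTER_EXEC"] = "yes"   # goes through putenv() -> environ

# The kernel's view is the snapshot made at execve() time:
with open("/proc/self/environ", "rb") as f:
    snapshot = f.read().split(b"\0")
names = {entry.split(b"=", 1)[0] for entry in snapshot if entry}

assert b"DEMO_ADDED_AFTER_EXEC" not in names        # absent from the snapshot
assert os.environ["DEMO_ADDED_AFTER_EXEC"] == "yes" # visible via environ/getenv
```

A child process started after the assignment would see the variable, because the exec* library functions pass the current (heap) environment, not the kernel snapshot.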
| Do the strings accessible via global variable `environ` not reflect any change to the environment? |
1,456,364,840,000 |
I have a some internally available servers (all Debian), that share a LetsEncrypt wildcard certificate (*.local.example.com). One server (Server1) keeps the certificate up-to-date and now I'm looking for a process to automatically distribute the .pem-files from Server1 to the other servers (e.g. Server2 and Server3).
I don't allow root logins via SSH, so I believe I need an intermediary user.
I've considered using a cronjob on Server1 to copy the updated .pem-files to a user's directory, from which an unprivileged user uses scp or rsync (private key authentication) via another cronjob to copy the files to Server2/3. However, to make this a more secure process, I wanted to restrict the user's privileges on Server2/3 to chroot to their home directory and only allow them to use scp or rsync. It seems like this isn't a trivial configuration and most methods are outdated, flawed or require an extensive setup (rbash, forcecommand, chroot, ...).
I've also considered to change the protocol to sftp, which should allow me to use the restricted sftp environment, via OpenSSH but I have no experience.
An alternative idea was to use an API endpoint (e.g. FastAPI, which is already running on Server1) or simply a webserver via HTTPS with custom API-Secrets or mTLS on Server1 to allow Server2/3 to retrieve the .pem-files.
At the moment, the API/webserver approach seems most reasonable and least complex, yet feels unnecessarily convoluted. I'd prefer a solution that doesn't require additional software.
Server1 has .pem-files (owned by root) and Server2/3 need those files updated regularly (root-owned location). What method can I use to distribute those files automatically in a secure manner?
|
I've settled on an rsync-only user, that can only rsync data to a predefined directory using ssh-keys (https://gist.github.com/jyap808/8700714). I rsync the files with a script that runs after successful letsencrypt deployments. On the receiving servers, I have an inotifywait service running that moves the files to the appropriate locations right after they've synced onto the server.
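For reference, the key pieces of such a setup might look like the following sketch. All paths and names are hypothetical, and the rrsync helper (shipped with the rsync package; on older Debian releases it lives under /usr/share/doc/rsync/scripts/ instead of /usr/bin) is one common way to restrict a key to a single target directory:

```
# On Server2/3, in ~certsync/.ssh/authorized_keys: the key may only
# run rrsync, writing into one staging directory.
restrict,command="/usr/bin/rrsync -wo /home/certsync/incoming" ssh-ed25519 AAAA... cert-push@server1

# On Server1, run from a deploy hook after a successful renewal:
rsync -a /etc/letsencrypt/live/local.example.com/ \
    certsync@server2:/

# On Server2/3, a root-owned watcher installs whatever lands in staging:
inotifywait -m -e close_write --format '%w%f' /home/certsync/incoming/ |
while read -r file; do
    install -o root -g root -m 0600 "$file" /etc/ssl/private/
done
```

The exact rrsync flags and the install destination depend on your distribution and which services consume the certificates, so treat this as a starting point rather than a drop-in configuration.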
| How to distribute HTTPS certificate/key securely and automatically on internal servers |
1,456,364,840,000 |
We are developing an embedded device which will integrate with some of our services in the future. This device has a limited set of functionalities and user-defined mods for particular use cases. Based on the ARM architecture, the device runs a modified version of Debian. For network and main configuration setup I need to write a small web service, and it should be secure and light. I tested this with a mix of lightweight web servers such as lighttpd and languages like Python, and managed to get a prototype working: a functioning web service that clients can integrate with to push the configuration in the initial setup step. My concern is that even though this is lightweight, I don't want to deploy a full-featured web server with a high level of access to the device just to configure it, even if I disable the process after configuration.
Is there any way to have small rest api other than having full-blown web server ? I already tested restbed C++ rest api. which is complex and python based server less web service. but I don't want to deploy python either. since this service only transmit like below 10 parameters to the client and it's overkill. is there any secure way to implement this without daemon like service.
|
Although this question is a little vague and open to opinion, I'll throw mine out there. Go has a very easy-to-use HTTP server package right in the standard library. It looks a lot like C, compiles to native executables on almost any platform and architecture, and you can host a very simple web server with very few lines, as below.
package main
import (
"fmt"
"log"
"net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}
func main() {
http.HandleFunc("/", handler)
log.Fatal(http.ListenAndServe(":8080", nil))
}
If you want security, you can very easily set up TLS by creating some self-signed certs and simply replacing the http.ListenAndServe call with err := http.ListenAndServeTLS(":10443", "cert.pem", "key.pem", nil)
It's very lightweight and easy to run anywhere. As Eli smartly pointed out in the comments, cross compilation is also very easy to do, meaning quick builds and deploys to your embedded devices.
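For the self-signed certificates mentioned above, a quick way to generate a throwaway pair is an openssl one-liner; the subject and file names here are just examples matching the ListenAndServeTLS call:

```shell
# Generate a self-signed certificate and key for testing (example subject/paths)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=localhost"
```

For production on a real device you would of course provision a properly issued certificate instead.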
| writing small web service for embedded device based on debian [closed] |
1,393,182,323,000 |
I want to run a command on Linux in a way that it cannot create or open any files to write. It should still be able to read files as normal (so an empty chroot is not an option), and still be able to write to files already open (especially stdout).
Bonus points if writing files to certain directories (i.e. the current directory) is still possible.
I’m looking for a solution that is process-local, i.e. does not involve configuring things like AppArmor or SELinux for the whole system, nor root privileges. It may involve installing their kernel modules, though.
I was looking at capabilities and these would have been nice and easy, if there were a capability for creating files. ulimit is another approach that would be convenient, if it covered this use case.
|
It seems that the right tool for this job is seccomp. Based on sync-ignoring code by Bastian Blank, I came up with this relatively small file that causes all its children to be unable to open a file for writing:
/*
* Copyright (C) 2013 Joachim Breitner <[email protected]>
*
* Based on code Copyright (C) 2013 Bastian Blank <[email protected]>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#define _GNU_SOURCE 1
#include <errno.h>
#include <fcntl.h>
#include <seccomp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define filter_rule_add(action, syscall, count, ...) \
if (seccomp_rule_add(filter, action, syscall, count, ##__VA_ARGS__)) abort();
static int filter_init(void)
{
scmp_filter_ctx filter;
if (!(filter = seccomp_init(SCMP_ACT_ALLOW))) abort();
if (seccomp_attr_set(filter, SCMP_FLTATR_CTL_NNP, 1)) abort();
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_WRONLY, O_WRONLY));
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_RDWR, O_RDWR));
return seccomp_load(filter);
}
int main(__attribute__((unused)) int argc, char *argv[])
{
if (argc <= 1)
{
fprintf(stderr, "usage: %s COMMAND [ARG]...\n", argv[0]);
return 2;
}
if (filter_init())
{
fprintf(stderr, "%s: can't initialize seccomp filter\n", argv[0]);
return 1;
}
execvp(argv[1], &argv[1]);
if (errno == ENOENT)
{
fprintf(stderr, "%s: command not found: %s\n", argv[0], argv[1]);
return 127;
}
fprintf(stderr, "%s: failed to execute: %s: %s\n", argv[0], argv[1], strerror(errno));
return 1;
}
Here you can see that it is still possible to read files:
[jojo@kirk:1] Wed, der 06.03.2013 um 12:58 Uhr Keep Smiling :-)
> ls test
ls: cannot access test: No such file or directory
> echo foo > test
bash: test: Permission denied
> ls test
ls: cannot access test: No such file or directory
> touch test
touch: cannot touch 'test': Permission denied
> head -n 1 no-writes.c # reading still works
/*
It does not prevent deleting files, or moving them, or other file operations besides opening, but that could be added.
A tool that enables this without having to write C code is syscall_limiter.
| How to prevent a process from writing files |
1,393,182,323,000 |
I'm logged in remotely over SSH with X forwarding to a machine running Ubuntu 10.04 (lucid). Most X11 applications (e.g. xterm, gnome-terminal) work fine. But Evince does not start. It seems unable to read ~/.Xauthority, even though the file exists, and is evidently readable (it has the right permissions and other applications read it just fine).
$ evince
X11 connection rejected because of wrong authentication.
Cannot parse arguments: Cannot open display:
$ echo DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY
DISPLAY=localhost:10.0 XAUTHORITY=
$ strace evince
…
access("/home/gilles/.Xauthority", R_OK) = 0
open("/home/gilles/.Xauthority", O_RDONLY) = -1 EACCES (Permission denied)
…
$ ls -l ~/.Xauthority
-rw------- 1 gilles gilles 496 Jul 5 13:34 /home/gilles/.Xauthority
What's so special about Evince that it can't read ~/.Xauthority? How can I make it start?
|
TL,DR: it's Apparmor's fault, and due to my home directory being outside /home.
Under a default installation of Ubuntu 10.04, the apparmor package is pulled in as an indirect Recommends-level dependency of the ubuntu-standard package. The system logs (/var/log/syslog) show that Apparmor is rejecting Evince's attempt to read ~/.Xauthority:
Jul 5 17:58:31 darkstar kernel: [15994724.481599] type=1503 audit(1341503911.542:168): operation="open" pid=9806 parent=9805 profile="/usr/bin/evince" requested_mask="r::" denied_mask="r::" fsuid=1001 ouid=1001 name="/elsewhere/home/gilles/.Xauthority"
The default Evince configuration for AppArmor (in /etc/apparmor.d/usr.bin.evince) is very permissive: it allows arbitrary reads and writes under all home directories. However, my home directory on this machine is a symbolic link to a non-standard location which is not listed in the default AppArmor configuration. Access is allowed under /home, but the real location of my home directory is /elsewhere/home/gilles, so access is denied.
Other applications that might be affected by this issue include:
Firefox, but its profile is disabled by default (by the presence of a symbolic link /etc/apparmor.d/disable/usr.bin.firefox -> /etc/apparmor.d/usr.bin.firefox).
CUPS PDF printing; I haven't tested, but I expect it to fail writing to ~/PDF.
My fix was to edit /etc/apparmor.d/tunables/home.d/local and add the line
@{HOMEDIRS}+=/elsewhere/home/
to have the non-standard location of home directories recognized (note that the final / is important; see the comments in /etc/apparmor.d/tunables/home.d/ubuntu), then run /etc/init.d/apparmor reload to update the Apparmor settings.
If you don't have administrator privileges and the system administrator is unresponsive, you can copy the evince binary to a different location such as ~/bin, and it won't be covered by the Apparmor policy (so you'll be able to start it, but will not be afforded the very limited extra security that Apparmor provides).
This issue has been reported as Ubuntu bug #447292. The resolution handles the case when some users have their home directory as listed in /etc/passwd outside /home, but not cases such as mine where /home/gilles is a symbolic link.
| Evince fails to start because it cannot read .Xauthority |
1,393,182,323,000 |
I am just testing out a new Ubuntu (Vivid 15.04) install on Vagrant, and getting problems with mysql and logging to a custom location.
In /var/log/syslog I get
/usr/bin/mysqld_safe: cannot create /var/log/mysqld.log: Permission denied
If I ls -l /var I get
drwxrwxr-x 10 root syslog 4096 Jun 8 19:52 log
If I look in /var/log the file doesn't exist
I thought I had temporarily disabled AppArmor just to isolate whether it was that or something else causing the problem, but I'm not sure if it's still creating an issue (edit: I think it may still be enabled, so not sure if this is an AppArmor issue or simple permissions).
If I try manually creating the file as mysql I get denied as well (I temp allowed it bash access to test, I will remove after).
touch /var/log/mysql.log
touch: cannot touch ‘/var/log/mysql.log’: Permission denied
If I look at another running server (CentOS) it has the permissions as above (and writes as the mysql user), so I'm wondering: how does mysql normally get permission to access the /var/log directory, and how can I get it to access that folder via normal running?
Here is my apparmor profile for mysql
/usr/sbin/mysqld {
#include
#include
#include
#include
#include
capability dac_override,
capability sys_resource,
capability setgid,
capability setuid,
network tcp,
/etc/hosts.allow r,
/etc/hosts.deny r,
/etc/mysql/** r,
/usr/lib/mysql/plugin/ r,
/usr/lib/mysql/plugin/*.so* mr,
/usr/sbin/mysqld mr,
/usr/share/mysql/** r,
/var/log/mysqld.log rw,
/var/log/mysqld.err rw,
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
/var/log/mysql/ r,
/var/log/mysql/* rw,
/var/run/mysqld/mysqld.pid rw,
/var/run/mysqld/mysqld.sock w,
/run/mysqld/mysqld.pid rw,
/run/mysqld/mysqld.sock w,
/sys/devices/system/cpu/ r,
/var/log/mysqld.log rw,
# Site-specific additions and overrides. See local/README for details.
#include
}
I also added the above file to the apparmor.d/disable directory.
Note: I added this line /var/log/mysqld.log rw, it wasn't originally there, and has same issue (after doing an apparmor reload).
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/tcpdump
0 profiles are in complain mode.
1 processes have profiles defined.
1 processes are in enforce mode.
/sbin/dhclient (565)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 systemd[1]: Starting MySQL Community Server...
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: 150608 20:33:33 mysqld_safe Logging to '/var/log/mysqld.log'.
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: chmod: cannot access ‘/var/log/mysqld.log’: No such file or directory
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: 150608 20:33:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 8 20:33:33 vagrant-ubuntu-vivid-64 mysqld_safe[11231]: /usr/bin/mysqld_safe: 126: /usr/bin/mysqld_safe: cannot create /var/log/mysqld.log: Permission denied
|
It seems to me that most people create a directory named mysql inside /var/log and change the owner of this directory to the mysql user.
sudo mkdir /var/log/mysql
sudo chown mysql:mysql /var/log/mysql
That should do it. Be sure to update the server's logging location and restart it. After you've tested re-enable mysql's apparmor profile.
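Updating the logging location is typically done in the MySQL configuration file; a sketch (the section placement and path are examples, adjust to your setup):

```
[mysqld_safe]
log_error = /var/log/mysql/mysqld.log

[mysqld]
log_error = /var/log/mysql/mysqld.log
```

After editing, restart the server and confirm the log file appears under the new directory owned by mysql.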
| Permission denied writing to mysql log |
1,393,182,323,000 |
I have a Docker container (LXC) which runs MySQL. Since the idea behind Docker is generally "one running process per container," if I define AppArmor profiles targeting the MySQL binary, will they be enforced? Is there a way for me to test for this?
|
First, cgroups are not used to isolate an application from others on a system. They are used to manage resource usage and device access. It's the various namespaces (PID, UTS, mount, user...) that provide some (limited) isolation.
Moreover, a process launched inside a Docker container will probably not be able to manage the AppArmor profile it is running under. The approach currently taken is to setup a specific AppArmor profile before launching the container.
It looks like the libcontainer execution driver in Docker supports setting AppArmor profiles for containers, but I can't find any example or reference in the doc.
Apparently AppArmor is also supported with LXC in Ubuntu.
You should write an AppArmor profile for your application and make sure LXC/libcontainer/Docker/... loads it before starting the processes inside the container.
Profiles used this way should be enforced, and to test it you should try an illegal access and make sure it fails.
There is no link between the binary and the actually enforced profile in this case. You have to explicitly tell Docker/LXC to use this profile for your container. Writing a profile for the MySQL binary will only enforce it on the host, not in the container.
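For recent Docker versions, the usual pattern is to load a profile on the host and then pass its name to docker run via --security-opt. A sketch, where the profile name docker-mysql and its rules are hypothetical examples:

```shell
# Write a minimal (hypothetical) profile for the container to a temp file
cat > /tmp/docker-mysql-profile <<'EOF'
#include <tunables/global>
profile docker-mysql flags=(attach_disconnected) {
  #include <abstractions/base>
  /var/lib/mysql/** rwk,
  deny /etc/shadow r,
}
EOF
# On the host, load it (requires root and an AppArmor-enabled kernel):
#   apparmor_parser -r -W /tmp/docker-mysql-profile
# Then start the container confined by it:
#   docker run --security-opt apparmor=docker-mysql mysql
```

The key point is that the profile is named and loaded outside the container, and Docker applies it when launching the container's processes.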
| AppArmor profiles in Docker/LXC |
1,393,182,323,000 |
I have my "root" partition split into two: a normal one that contains most files and a second one used for those areas that can "grow". Concretely, this means, I have symlinks like:
/var/log -> /part1/log
/var/cache -> /part1/cache
/var/spool -> /part1/spool
This worked well until I started using AppArmor which keeps complaining about things like cupsd looking at files in /part1/log/cups/..
I currently work around this problem by adding entries for each application protected by AppArmor, but this is tedious.
Is there some way to tell AppArmor once and for all that if access to /var/log/FOO is allowed then access to /part1/log/FOO should also be allowed?
|
It can be done in /etc/apparmor.d/tunables/alias as follows:
# Alias rules can be used to rewrite paths and are done after variable
# resolution. For example, if '/usr' is on removable media:
# alias /usr/ -> /mnt/usr/,
#
# Or if mysql databases are stored in /home:
# alias /var/lib/mysql/ -> /home/mysql/,
/var/log -> /part1/log,
/var/cache -> /part1/cache,
/var/spool -> /part1/spool,
See also:
https://manpages.debian.org/unstable/apparmor/apparmor.d.5.en.html#Alias_rules
https://bugs.launchpad.net/apparmor/+bug/1485055/comments/2
| How to teach AppArmor about toplevel symlinks |
1,393,182,323,000 |
If I tell applications like VLC or Audacity to record from my webcam or microphone, they just wake the hardware if sleeping and do their work without my interference.
Although this is a good thing, I've always wondered, due to privacy concerns: is there a way to restrict applications access to a hardware device?
While writing this, I came up with the idea of using something like SELinux or AppArmor to restrict access to /dev/something. Is this possible? Could there be a better or easier way?
Also, is there any more hardware besides the webcam and microphone that I should be concerned about?
|
I guess the traditional way would be to make pseudo-users (like the games-user) for the program or set of programs, assign this user to the groups for the devices it should access (e.g. camera), and run the program(s) SUID as this user. If you removed permissions for "others" (neither owner nor group), only the owner and members of the group, including the pseudo-user, could access it.
Furthermore, you could use the group of the program(s) to restrict which users are allowed to execute the program(s). Make a group (e.g. conference) for the users allowed to make video conferences, and restrict execution of the associated programs (the ones granted special access to the camera and mic) to this group only.
+++
Another way is running the program SGID as the special-group belonging to the device, and remove permission for "others". This of course only work if the program need to access just one restricted device.
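The group-based idea above boils down to ordinary Unix permission bits on the device node. The demonstration below uses a scratch file standing in for the device; on a real system you would run the equivalent chgrp/chmod as root on e.g. /dev/video0, and the group name camera is just an example:

```shell
# Demonstrate the 660 permission model on a scratch file (stand-in for /dev/video0)
touch fakedev
chmod 660 fakedev        # rw for owner and group; nothing for "others"
stat -c '%a' fakedev     # shows the octal mode

# Real-world equivalent (requires root; names are examples):
#   groupadd camera
#   chgrp camera /dev/video0 && chmod 660 /dev/video0
#   usermod -aG camera alice   # grant a user access via group membership
```

Note that on many distributions udev rules reset device-node ownership at plug-in time, so a persistent setup would put the chgrp/chmod into a udev rule rather than run it by hand.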
| Restrict applications to access certain hardware (webcam, microphone...) |
1,393,182,323,000 |
Can anyone explain this line in the log.smbd? Searching the internet for apparmor details gets so many hits I cannot find the information. This occurs when smbd is started.
kernel: [908896.070790] type=1400 audit(1442305563.416:371): apparmor="STATUS"
operation="profile_replace" profile="unconfined" name="/usr/sbin/nmbd"
pid=16870 comm="apparmor_parser"
|
I would beg to differ with Mark's answer.
Any time I type sudo service mysql restart I see a similar message in syslog:
audit: type=1400 apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=5014 comm="apparmor_parser"
If I then type sudo aa-status I see that mysql is in the "nn processes are in enforce mode" list, with 0 processes in complain mode and 0 processes unconfined but with a profile defined.
So I think this rather confusing message is just AppArmor saying: I just found a process matching profile="unconfined" and I am going to perform operation="profile_replace"
These messages also appear when the pc is rebooted, presumably for the same reason, apparmor loads first, then as other processes load it confines them.
| what is apparmor "profile_replace" log message |
1,393,182,323,000 |
Is there a way to create an AppArmor profile for each Firefox profile, when running multiple profiles off a single installation of Firefox? Or more generally for any application supporting multiple profiles, Thunderbird, etc. Generally all the AppArmor profiles I find for these apps only contain the whole app, unless I missed something.
Usually you launch a Firefox or Thunderbird with a command line argument to specify a different profile. However I can find nothing in the AppArmor profile syntax to match against app arguments.
I know libvirt does this somehow by creating an AppArmor profile for each virtual machine, so there must be some way.
|
AppArmor works by executable. It can't figure out that Firefox has loaded a different profile and so it should use a different AppArmor profile.
AppArmor does support change rules, which allow an application to change which profile applies to it. The intended use case is precisely to allow an application to switch to a more restrictive profile once it's finished initializing and figured out what it needs to access in this particular instance. So if Firefox was AppArmor-aware, it would be possible to give it change_profile rule and have it apply the transition once it's figured out which profile to run as. As far as I know, this hasn't been done.
What you can do without programming is make multiple copies or hard links of the firefox-bin executable, and define different profiles for each of them (AppArmor is based on the path to the executable, so different hard links need not use the same profile, unlike SELinux which is based on inodes). This requires root and isn't so convenient, which is why the change profile feature was added to AppArmor.
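Since AppArmor attaches profiles by executable path, two hard links to the same binary can carry different profiles. The demo below uses scratch files to show the mechanism; on a real system the link target would be firefox-bin and the paths below are hypothetical examples:

```shell
# Two hard links: one inode, two paths (AppArmor matches on the path)
touch firefox-bin.demo
ln firefox-bin.demo firefox-work.demo
stat -c '%i %n' firefox-bin.demo firefox-work.demo   # same inode number for both

# Real-world sketch (requires root; paths are examples):
#   ln /usr/lib/firefox/firefox-bin /usr/lib/firefox/firefox-work
#   /usr/lib/firefox/firefox-work -P work    # launch the "work" Firefox profile
# and define a separate profile file, e.g. /etc/apparmor.d/usr.lib.firefox.firefox-work
```

The hard link costs no extra disk space, but it must live on the same filesystem as the original binary and may be undone by package upgrades.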
| AppArmor: Are multiple profiles per application (Firefox, Thunderbird) possible? Syntax? |
1,393,182,323,000 |
In researching this U&L Q&A titled: permission denied for ptrace under GDB, the question came up, "Is there other software similar to SELinux & AppArmor?".
User @IwillnotexistIdonotexist mentioned 2 that I'd never heard of: Smack & Yama. He found them by searching through the Linux source code. So now we're up to 4.
SELinux
AppArmor
Smack
Yama
Are there others?
|
In searching for Linux Security Modules, I came across the wikipedia page, titled: Linux Security Modules.
These are the following LSM's listed there:
SELinux
AppArmor
Smack
TOMOYO Linux
YAMA LSM
Linux Intrusion Detection System
FireFlier
CIPSO
Multi ADM
Of the modules listed, the first 4, SELinux, AppArmor, Smack, and TOMOYO Linux are the only ones accepted into the official Linux Kernel, since version 2.6.
| Are there other LSM (Linux Security Modules) in addition to SELinux and AppArmor? |
1,393,182,323,000 |
I am running Ubuntu 16.04 with apparmor 2.10.95-0ubuntu2.7. I often need to comment on software of dubious quality. I want to employ apparmor to guard my system from harm.
I created an apparmor wildcard profile like this:
/home/username/testing/** {
somerules
}
Unfortunately, this profile has no effect. It works as expected as soon as I put the exact path without a wildcard:
/home/username/testing/client42/executable {
somerules
}
On the manpage, it looks like globbing is supported for profiles:
PROFILE = ( PROFILE HEAD ) [ ATTACHMENT SPECIFICATION ] [ PROFILE FLAG CONDS ] '{' ( RULES )* '}'
PROFILE HEAD = [ 'profile' ] FILEGLOB | 'profile' PROFILE NAME
This wiki article says so, too. There even is a user reporting success.
What am I missing?
Do wildcards in profiles need to be explicitly enabled in a configuration file?
Is globbing disabled in the Ubuntu build?
|
Tinkering around with this problem today, I found the wildcard profile working as expected after a reboot. It looks like setting the profile to enforce mode with aa-enforce /etc/apparmor.d/<profile> or reloading the profile with apparmor_parser -r /etc/apparmor.d/<profile> as described here and here is not sufficient for wildcard profiles. I am unsure if reloading the service via systemctl reload apparmor is sufficient to activate the wildcard profile, but a system restart definitely is.
| apparmor wildcard profile with globbing |
1,393,182,323,000 |
Environment:
OS: Debian GNU/Linux 9.3 (stretch)
Kernel parameter: security=apparmor
Here is my test profile (created by aa-genprof):
/etc/apparmor.d/usr.bin.telnet.netkit
#include <tunables/global>
/usr/bin/telnet.netkit {
#include <abstractions/base>
/lib/x86_64-linux-gnu/ld-*.so mr,
/usr/bin/telnet.netkit mr,
deny network,
}
Take effect by:
sudo systemctl reload apparmor.service
AppArmor status:
$ sudo aa-status | grep telnet
/usr/bin/telnet
/usr/bin/telnet.netkit
But when I test the telnet program:
$ telnet.netkit 127.0.0.1 22
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u2
Network access is NOT denied.
Here is the process status:
$ ps auxZ | grep -v unconfined | grep telnet
/usr/bin/telnet.netkit (enforce) test 10410 0.0 0.0 19504 2852 pts/1 S+ 18:26 0:00 telnet.netkit 127.0.0.1 22
Netstat:
$ netstat -nap | grep telnet
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:56710 127.0.0.1:22 ESTABLISHED 10410/telnet.netkit
Can anyone help to find out what's wrong with the profile? Thanks a lot!
|
If I'm reading correctly, you are running Debian.
Well, the problem is that the Debian kernel lacks the code needed to block network connections (the same goes for D-Bus mediation), as the patches are not mainline (yet; I know there has been work to change the situation).
1,393,182,323,000 |
I've installed and enabled snappy on my openSUSE Leap 15.1 system according to this documentation: https://snapcraft.io/docs/installing-snap-on-opensuse
When adding the repository, I used the one for my specific version: https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.1/
However, after enabling the service, it keeps crashing with exit code 42 just a few seconds after I start it. The socket seems to be okay:
opensuse:~ # systemctl status snapd.socket
● snapd.socket - Socket activation for snappy daemon
Loaded: loaded (/usr/lib/systemd/system/snapd.socket; disabled; vendor preset: disabled)
Active: active (listening) since Tue 2019-12-31 15:22:47 CET; 1h 58min ago
Listen: /run/snapd.socket (Stream)
/run/snapd-snap.socket (Stream)
Dec 31 15:22:47 opensuse systemd[1]: Starting Socket activation for snappy daemon.
Dec 31 15:22:47 opensuse systemd[1]: Listening on Socket activation for snappy daemon.
When I manually start snapd.service it shows up okay right after starting it:
opensuse:~ # systemctl status snapd.service
● snapd.service - Snappy daemon
Loaded: loaded (/usr/lib/systemd/system/snapd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-12-31 17:23:34 CET; 999ms ago
Main PID: 3014 (snapd)
Tasks: 10 (limit: 4915)
CGroup: /system.slice/snapd.service
└─3014 /usr/lib/snapd/snapd
Dec 31 17:23:34 opensuse systemd[1]: Starting Snappy daemon...
Dec 31 17:23:34 opensuse snapd[3014]: AppArmor status: apparmor is enabled but some kernel features are missing: dbus
Dec 31 17:23:34 opensuse snapd[3014]: daemon.go:346: started snapd/2.42.4-lp151.1.1 (series 16; classic; devmode) opensuse-leap/15.1 (amd64) linux/4.12.14-lp151.28.36-defau.
Dec 31 17:23:34 opensuse snapd[3014]: daemon.go:439: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 31 17:23:34 opensuse systemd[1]: Started Snappy daemon.
But then, after a few seconds it fails:
opensuse:~ # systemctl status snapd.service
● snapd.service - Snappy daemon
Loaded: loaded (/usr/lib/systemd/system/snapd.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2019-12-31 17:23:39 CET; 36s ago
Process: 3014 ExecStart=/usr/lib/snapd/snapd (code=exited, status=42)
Main PID: 3014 (code=exited, status=42)
Dec 31 17:23:34 opensuse systemd[1]: Starting Snappy daemon...
Dec 31 17:23:34 opensuse snapd[3014]: AppArmor status: apparmor is enabled but some kernel features are missing: dbus
Dec 31 17:23:34 opensuse snapd[3014]: daemon.go:346: started snapd/2.42.4-lp151.1.1 (series 16; classic; devmode) opensuse-leap/15.1 (amd64) linux/4.12.14-lp151.28.36-defau.
Dec 31 17:23:34 opensuse snapd[3014]: daemon.go:439: adjusting startup timeout by 30s (pessimistic estimate of 30s plus 5s per snap)
Dec 31 17:23:34 opensuse systemd[1]: Started Snappy daemon.
Dec 31 17:23:39 opensuse snapd[3014]: daemon.go:540: gracefully waiting for running hooks
Dec 31 17:23:39 opensuse snapd[3014]: daemon.go:542: done waiting for running hooks
Dec 31 17:23:39 opensuse snapd[3014]: daemon stop requested to wait for socket activation
Running /usr/lib/snapd/snapd directly gives me:
opensuse:~ # /usr/lib/snapd/snapd
AppArmor status: apparmor is enabled but some kernel features are missing: dbus
cannot run daemon: when trying to listen on /run/snapd.socket: socket "/run/snapd.socket" already in use
What do?
|
Never mind, found it (I think). Apparently the service only keeps running when there are any snaps installed. Since I hadn't installed any snaps yet, it terminated itself after starting. After installing the first snap, snapd kept running in the background.
In order to install, first you have to stop the daemon:
systemctl stop snapd.socket
and then do the actual install; otherwise, the error originally posted (cannot run daemon) will appear.
| snapd fails on openSUSE Leap 15.1 |
1,393,182,323,000 |
Under my new Linux Mint 18.3 (64 bit) installation on my notebook, the command
$ sudo apparmor_status
returns the following:
apparmor module is loaded.
apparmor filesystem is not mounted.
Is AppArmor configured for usage under Linux Mint or do I have to enable and configure it first? Because, as you can see, the command does not return a list of active AppArmor profiles.
|
To enable apparmor you need to boot your system with apparmor=1 security=apparmor option.
Edit your /etc/default/grub by modifying the line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
To:
GRUB_CMDLINE_LINUX_DEFAULT="apparmor=1 security=apparmor quiet"
Then update grub and reboot:
sudo update-grub
sudo reboot
Debian : Enable AppArmor
Enable the AppArmor LSM:
$ sudo mkdir /etc/default/grub.d
$ echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=1 security=apparmor"' \
| sudo tee /etc/default/grub.d/apparmor.cfg
$ sudo update-grub
$ sudo reboot
| Is AppArmor actively used by Linux Mint 18? |
1,393,182,323,000 |
I installed bind9 with chroot on Debian11, described by this tutorial: https://wiki.debian.org/Bind9#Debian_Jessie_and_later
It works fine, but as soon as I turn on dynamic zone updates, it will fail with this reason in the syslog:
Jul 18 19:22:52 NS kernel: [12161.968582] audit: type=1400 audit(1658164972.109:107): apparmor="DENIED" operation="open" profile="named" name="/var/bind9/chroot/" pid=18104 comm="named" requested_mask="r" denied_mask="r" fsuid=106 ouid=0
I thought it might be some missing option from /etc/apparmor.d/local/usr.sbin.named so I added /var/bind9/chroot to it, now the file looks like this:
/var/bind9/chroot/** r,
/var/bind9/chroot/etc/bind/** r,
/var/bind9/chroot/usr/** rw,
/var/bind9/chroot/var/** rw,
/var/bind9/chroot/dev/** rw,
/var/bind9/chroot/run/** rw,
Then I restarted the apparmor and named services, but the problem is the same. If I check with the apparmor_status command, it shows the right named process id, so there is no other rogue process running. Except for this, the chrooted named works fine. If I turn off enforcing for this profile, or disable AppArmor completely, then the dynamic updates also work, but I would like to fix this properly.
UPDATE:
If I modify /etc/apparmor.d/local/usr.sbin.named to this:
/var/bind9/chroot/** r,
/var/bind9/chroot/etc/bind/** rw,
/var/bind9/chroot/usr/** rw,
/var/bind9/chroot/var/** rw,
/var/bind9/chroot/dev/** rw,
/var/bind9/chroot/run/** rw,
then the dynamic zone updates work. But I still get the error messages I noticed before, exactly when a dynamic zone update is triggered. It is a bit annoying that I get those messages.
|
The log message (wrapped for readability)
Jul 18 19:22:52 NS kernel: [12161.968582] audit: type=1400 audit(1658164972.109:107): \
apparmor="DENIED" operation="open" profile="named" name="/var/bind9/chroot/" pid=18104 \
comm="named" requested_mask="r" denied_mask="r" fsuid=106 ouid=0
indicates the named process was trying to read the directory /var/bind9/chroot and was denied.
The rule examples in man 5 apparmor.d say (emphasis mine):
When AppArmor looks up a directory the pathname being looked up will
end with a slash (e.g., /var/tmp/); otherwise it will not end with a
slash. Only rules that match a trailing slash will match directories.
Some examples, none matching the /tmp/ directory itself, are:
/tmp/*
Files directly in /tmp.
/tmp/*/
Directories directly in /tmp.
/tmp/**
Files and directories anywhere underneath /tmp.
/tmp/**/
Directories anywhere underneath /tmp.
In other words, your first rule
/var/bind9/chroot/** r,
only allows reading files within the /var/bind9/chroot/ directory, but reading the directory listing is not allowed. And that is apparently what named wants to do.
To fix it, you would need to add a line:
/var/bind9/chroot/ r,
because ** won't match an empty string.
| Bind9 dynamic zone updates are denied by apparmor in Debian11 |
1,393,182,323,000 |
I've compiled a kernel (linux-libre-xtreme) with this configuration, it has most LSMs enabled: YAMA, SMACK, AppArmor, TOMOYO and SELinux. However, when I start the apparmor service with OpenRC I get:
# rc-service apparmor start
* Stopping AppArmor ...
* Unloading AppArmor profiles
* Root privileges not available [ !! ]
* Starting AppArmor ...
* Loading AppArmor profiles ...
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
* /etc/apparmor.d/usr.bin.apache2 failed to load
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
And other profiles also complain, however this doesn't happen with other kernel that I've compiled too (linux-libre-lts-apparmor, see its configuration here)
What am I doing wrong? If I do cat /sys/module/apparmor/parameters/enabled with the linux-libre-xtreme kernel, I get N, but with linux-libre-lts-apparmor, it says Y, so I know it's not something with kernel parameters from the bootloader.
|
Solved by disabling CONFIG_DEFAULT_SECURITY_DAC=y; it seems only one CONFIG_DEFAULT_SECURITY_* option may be enabled at a time.
EDIT: I also discovered that, for AppArmor to be enabled by default when booting, SECURITY_APPARMOR_BOOTPARAM_VALUE must be set to "1", like this: CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=1
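For reference, the relevant fragment of the kernel configuration would then look something like this (a sketch combining the settings above; any other CONFIG_DEFAULT_SECURITY_* entries must be left unset):

```
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=1
CONFIG_DEFAULT_SECURITY_APPARMOR=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
```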
| Enabling AppArmor in Linux |
1,393,182,323,000 |
I realise this is probably a bad question but I'm stuck. After a lot of googling I'm struggling to fix the problem. I've been trying to get AppArmor to work on Debian. I've been following the instructions from https://wiki.debian.org/AppArmor/HowToUse.
Part of the instruction told me to do
sudo perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
sudo update-grub
sudo reboot
Because I didn't understand the perl command, I did this in my VM, and now I can't use Firefox, not even in safe mode; I get a segfault.
Fontconfig error: Cannot load default config file
(firefox:3875): Pango-WARNING **: failed to choose a font, expect ugly output. engine-type='PangoRenderFc', script='common'
Crash Annotation GraphicsCriticalError: |[0][GFX1]: no fonts - init: 1 fonts: 0 loader: 0 (t=0.206719) [GFX1]: no fonts - init: 1 fonts: 0 loader: 0
[3875] ###!!! ABORT: unable to find a usable font (Sans): file /tmp/buildd/firefox-47.0.1/gfx/thebes/gfxTextRun.cpp, line 1875
[3875] ###!!! ABORT: unable to find a usable font (Sans): file /tmp/buildd/firefox-47.0.1/gfx/thebes/gfxTextRun.cpp, line 1875
Segmentation fault
apt-cache policy apparmor
apparmor:
Installed: 2.9.0-3
Candidate: 2.9.0-3
Version table:
2.10.95-4~bpo8+2 0
100 http://ftp.uk.debian.org/debian/ jessie-backports/main amd64 Packages
*** 2.9.0-3 0
500 http://ftp.uk.debian.org/debian/ jessie/main amd64 Packages
500 http://mirror.bytemark.co.uk/debian/ jessie/main amd64 Packages
100 /var/lib/dpkg/status
ls -l on /etc/fonts/fonts.conf returns the following:
-rw-r--r-- 1 root root 5533 Nov 23 2014 /etc/fonts/fonts.conf
I tried exporting the font config path with the command
export FONTCONFIG_PATH=/etc/fonts
however this didn't help.
I know this is looking for a font that doesn't exist because I checked the path but now I'm at an impasse, out of ideas and can't find anymore help from Google.
|
I have managed to solve the issue. The way I did this was by editing /etc/default/grub and changing GRUB_CMDLINE_LINUX from
GRUB_CMDLINE_LINUX=" apparmor=1 security=apparmor"
to
GRUB_CMDLINE_LINUX=""
Then after running sudo update-grub and sudo reboot the issue was fixed. However this stopped AppArmor from working, as it gave the error:
apparmor.common.AppArmorException: 'Warning: unable to find a suitable fs in /proc/mounts, is it mounted?\nUse --subdomainfs to override.\n'
However I managed to solve this by using the commands from the debian guide again.
sudo perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
sudo update-grub
sudo reboot
After the reboot I tried to launch Firefox; I didn't get any errors and it is all working fine now. However, after trying the same fix on my PC, I started having the segfault issue again and the fix didn't work there. After comparing the AppArmor profiles in /etc/apparmor.d, I found that the profile rules were different.
Rules on segfaulting pc:
# Last Modified: Tue Aug 2 11:32:25 2016
#include <tunables/global>
/usr/lib/firefox/firefox {
#include <abstractions/base>
/usr/bin/firefox r,
}
Rules on working pc:
# Last Modified: Tue Aug 2 11:32:25 2016
#include <tunables/global>
/usr/bin/firefox {
#include <abstractions/base>
#include <abstractions/bash>
/bin/dash ix,
/usr/bin/firefox r,
}
I added #include <abstractions/bash> and /bin/dash ix, to the config file, then changed the path to /usr/bin/firefox and now the issue is fixed after rebooting.
| AppArmor is causing Firefox segfaults |
1,355,584,342,000 |
I am going through some primers on LSM implementations so eventually I am digging a bit into AppArmor and SELinux.
I am aware of this discussion but this does not make very clear one question I am having in regard to these two LSM implementations:
Is it a fact that:
SELinux must be applied system-wide (hence the auto-relabeling process on first boot, which takes as long as a filesystem scan), whereas
AppArmor provides the flexibility to define policies only for those processes/scripts you'd like (via the interactive auditing process)?
|
As I answered to the other question, a major difference between AppArmor and SELinux is labeling. AppArmor is path based while SELinux adds additional label to every object. This is why auto relabeling is done at first boot to apply the default file labels. Otherwise it would not be possible to write meaningful policy for file access, as every file would be considered the same (due having same labels).
Both AppArmor and SELinux have unconfined domains, which do not restrict processes. Both systems also have complain mode (called permissive domain in SELinux), which only log policy violations but not enforce the policy. Both AppArmor and SELinux are enabled system-wide and it is possible in both systems to run processes which are not restricted by the security module.
When it comes to automatic policy generation, both systems have similar tools and mechanisms.
AppArmor profiles can be generated using aa-genprof and aa-logprof. aa-genprof creates a basic profile and sets it in complain mode. After running the program the rules can be generated from log files.
SELinux tools are policygentool and audit2allow. The major difference again is the file labeling, but policygentool can automatically create file types for program data (var), configuration files and log files. The policy can then be loaded in permissive mode and rules can be then generated from logs using audit2allow.
| SELinux vs AppArmor applicability |
1,355,584,342,000 |
We have a few workstations that have access to a network share with all our company files. This is correct, because users may need to use them.
However our boss is concerned that someone (upon resignation maybe) may plug in a USB drive and take those files home or things like that. Since we can't forbid USB drives (we may need them for regular work) is there something we could do to improve security, making data theft more difficult?
I am the administrator of those workstations and the users don't have root powers (obviously).
Maybe SELinux/AppArmor can forbid copying from one particular directory to another?
|
No, you can't. If they can read the files they can copy them.
EDIT:
I've been thinking about it, and maybe you can do something. If those files should only be accessed by a couple of programs (you said something about CAD files), you could set the program's owner to a new user (say, CADuser) and change the permissions of all those files so that only CADuser can read them. Setting the setuid bit on the CAD program will make whoever runs it effectively become that user while the program runs, so they can access and work with the files; but when they are not running the program, they won't have access to those files. I have not tested it, but I am fairly sure that if a file is saved to any other location it will be owned by the program's user, so it won't be accessible outside the program, thus complicating the act of copying it to a USB drive.
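As a side note on the mechanics: the setuid bit shows up as a leading 4 in the octal file mode. A minimal sketch (the file is just a scratch placeholder, not an actual CAD binary):

```shell
# Create a scratch file, set setuid plus rwxr-x--- permissions,
# and show the resulting octal mode (GNU stat)
tmp=$(mktemp)
chmod 4750 "$tmp"
stat -c %a "$tmp"   # prints 4750; the leading 4 is the setuid bit
rm -f "$tmp"
```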
| How to protect against data leak? |
1,355,584,342,000 |
I have a file with a secret and a generator application that reads it and generates something similar to a license.
There are users on that Linux machine who are allowed to use that application.
Because the app reads that secret file, these users must have read permission on the file.
Is there a way to remove the read permission from these users, and let the file be read only through the app when they run it?
I want only those who run the app to be able to read the file, and only through the app; not simply cat it and view its contents.
I saw a way solving it using
chmod 400 secret_file
chmod 110 generator
chmod u+s generator
This way users in the same group as generator can execute generator and can't read secret_file.
But because generator is setuid, it can still read secret_file.
This is a nice solution, but I wanted to have the user's name in generator, and using that solution I always get the owner's name.
This is how I get the user's name from c/cpp application:
#include <pwd.h>
#include <sys/types.h>  // uid_t
#include <unistd.h>     // geteuid()
#include <string>
uid_t uid = geteuid();
struct passwd *pw = getpwuid(uid);
std::string user_name = pw->pw_name;
Is there another way to solve this issue?
Can somehow apparmor help? (I couldn't understand)
A follow-up question: is there a way to make a file executable only through a specific script?
What I mean is that I don't want generator to be executed from a shell. I want it to be executed only from another script, generator.sh, which calls generator, because I do more stuff in generator.sh. I want a user who bluntly runs generator to fail, and a user who runs generator.sh to succeed.
|
Use getuid() instead of geteuid().
From the execve() man page:
If the set-user-ID bit is set on the program file referred to by
pathname, then the effective user ID of the calling process is
changed to that of the owner of the program file.
The "real" user ID, which getuid() returns is unchanged, that is, it's the UID of the user starting the process.
The other thing:
I don't want generator to be executed from a shell. I want it to be executed only from another script generator.sh which calls generator, because I do more stuff in generator.sh.
is harder, since you can't have setuid scripts. But you could have a setuid wrapper that runs the script (and nothing else), and that script can then run the final binary. Or use sudo to run the script. You might have problems with the shell disliking being setuid, though, see Setuid bit seems to have no effect on bash and Allow setuid on shell scripts .
Also note that including the shell may bring more possible issues security-wise, and that with any setuid program, you should be careful to make sure the execution environment is clean. That includes environment variables and file descriptors, but possibly other inheritable features too. One reason to use sudo is that it at least tries to deal with this sort of issues.
Another, possibly better way would be to implement the privileged process as a network service instead, and have the users just run a simple dumb client program.
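The same distinction is visible from the shell: id -u reports the effective UID, id -ru the real UID. In a normal (non-setuid) process they match; under a setuid binary they would differ:

```shell
# Effective vs. real UID; equal here because the shell is not setuid
echo "effective=$(id -u) real=$(id -ru)"
```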
| Allow file read only through application |
1,355,584,342,000 |
I have a ubuntu 20.04 server running docker. Recently the default apparmor profile seems to have started enforcing a restriction on mount points in docker containers. So the containers write directly to the root filesystem rather than the mount.
Outside of docker I can navigate the mounts with no issues but when executing a shell in containers it is as if the mount points are not mounted.
I have narrowed this down to being caused by apparmor and disabling apparmor allows mounting and everything works as I would expect. The containers seem to be using the docker-default profile.
My question is: how do I enable mounting in Docker containers, either globally or for individual containers? I would rather not completely disable AppArmor for this.
|
So it turns out my issue was actually with Docker starting before filesystems were mounted. I believe I can alter the systemd file for docker to delay starting until my mounts are in place. The containers were binding to the mount point as a directory and writing directly to the root filesystem.
Incidentally, you can change the AppArmor profile used for containers with the security_opt option, and load a new profile with apparmor_parser. My containers didn't have mount, but nor should they need it if the mounts are already in place.
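For the ordering fix, a sketch of a systemd drop-in for Docker (create it with systemctl edit docker.service; /mnt/data is a placeholder for the actual mount point):

```
[Unit]
RequiresMountsFor=/mnt/data
```

systemd will then order docker.service after the corresponding mount unit and pull that mount in as a requirement.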
| How do I enable mounting filesystems in docker containers using apparmor |
1,355,584,342,000 |
firejail --net=none creates a sandbox looking like a computer without any network interfaces.
Is it possible to achieve the same result with AppArmor? It looks like AppArmor's deny network just denies everything, but doesn't hide the network interfaces from the application.
|
The reason firejail makes the network interfaces "disappear" is that it runs the application in a new network namespace:
DESCRIPTION
Network namespaces provide isolation of the system resources
associated with networking: network devices, IPv4 and IPv6 protocol
stacks, IP routing tables, firewall rules, the /proc/net directory
(which is a symbolic link to /proc/PID/net), the /sys/class/net
directory, various files under /proc/sys/net, port numbers (sockets),
and so on.
[...]
The new network namespace has initially no interface (except its own local instance of the lo interface), that's why the application doesn't see any interface (except lo) nor can do any useful network-related action.
AppArmor creates additional restrictions to a program when it accesses resources, that will prevent operations normally permitted by the user running the program to succeed. So you could perhaps imagine AppArmor could be configured to prevent an application to successfully access or interact with the the various resources mentioned in the previous quote, but it won't make them disappear. The application will get a difference in the result: it won't receive an answer telling the result is empty or that there's no such object, but will instead receive an error when asking for it. The answer to the question is thus: no.
Note: firejail --net=none does more than just isolating the network namespace. It does much more work, including preventing to even query about those interfaces (thus also getting errors when trying), and isolating most other namespaces too (user, pid, mount, ...).
There are plenty of other tools available for isolation. Even if in some cases it's possible they overlap in functionalities, they are often all used together. For example Firejail can be used along Apparmor, SELinux (an alternate method to AppArmor) or cgroups. Or for example there's also the use of seccomp(2) which can in some cases lie to an application telling it the requested action was successfully done while it wasn't. That's an example, I don't think it's usable either for preventing to see network interfaces.
| AppArmor equivalent to Firejail --net=none |
1,355,584,342,000 |
As far as I know, AppArmor cannot grant privileges to a program that otherwise doesn't have it (i.e. it can only further restrict). Given that, would it be okay to "allow all" for all programs at first, and iteratively add more rules & fine-tune the existing ones?
People on the Web seem to disagree with that, as if the ruleset had to be right from the first moment, but I don't understand why that should be the case. Isn't some level of protection better than none?
To clarify:
Is an iterative approach technically supported by AppArmor? (Perhaps there is no such thing as "allow all", or whitelisting by default and blacklisting specific things?)
Are there any security risks of such approach?
|
Sure it would be technically possible... but it's not the most efficient way to work.
If you first allow everything, you'll need to keep guessing what things would be good to disallow. You'll be effectively working blind.
If you first disallow everything, you'll get program error messages, giving you information about things the program is trying to do and failing (probably because of overly strict AppArmor rules). You'll also get messages in the system audit log telling you exactly what AppArmor prevented the program from doing, which will be a great help in allowing precisely the things the program needs and no more.
Note that AppArmor profiles can also be set in complain mode: in that mode, AppArmor won't actually stop the program from doing anything, but generates audit log messages as if it would. So, if you need to minimize the disruption to a particular application you're developing an AppArmor profile for, you can start with the "allow-nothing" profile in complain mode, look at the resulting audit log, add rules to allow things that look legitimate and are currently generating audit messages, and keep iterating this way until any audit messages about that application are about things you don't want the application do in the first place.
At that point, you can be reasonably confident that the AppArmor profile is either completely correct or very close to it, and can switch it into enforcing mode, where the AppArmor restrictions are actually applied. At that point, it is wise to do some testing, just in case you've missed something... but after this procedure, you can be quite confident that the resulting profile won't accidentally allow something dangerous you haven't thought of.
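Complain mode can also be set directly in the profile header instead of via aa-complain; a sketch of such a starting profile (the program path is illustrative):

```
# /etc/apparmor.d/usr.bin.myapp
/usr/bin/myapp flags=(complain) {
  #include <abstractions/base>
  # start with (almost) nothing allowed; watch the audit log
  # and add rules iteratively
}
```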
| Is iteratively adding more AppArmor rules bad? |
1,355,584,342,000 |
I accidentally deleted the /etc/apparmor/ folder on my debian 4.19.28-2 box by having the wrong folder selected in a gui file browser.
My questions are:
Should I be worried?
#apparmor_status
apparmor module is loaded.
Is there a way to find what should have been in that folder? Perhaps a way to search apt for all packages that provided a file in that folder? Or perhaps a way to know from the output of apparmor_status?
#apparmor_status
apparmor module is loaded.
13 profiles are loaded.
13 profiles are in enforce mode.
{snip 13 lines}
0 profiles are in complain mode.
83 processes have profiles defined.
83 processes are in enforce mode.
{snip 83 lines}
|
You can try to reinstall it with
apt-get install --reinstall apparmor
| I've accidentily deleted /etc/apparmor/, what should I do to restore it? |
1,355,584,342,000 |
I know that using apparmor, one can reduce process capabilities(7). But is it possible to gain them?
For example: ping requires CAP_NET_RAW. Its binary has no setuid bit set and doesn't have any file capabilities. Is it possible to give CAP_NET_RAW to it without touching the binary itself (e.g. by creating an AppArmor rule)?
Grsecurity seems to have an RBAC system too, so maybe that is an option?
|
No, it's not possible.
AppArmor works in addition to the standard Linux permission checks, not instead of. It can't grant any privilege the program didn't already have.
| Using apprarmor/grsec to gain capabilities for file |
1,355,584,342,000 |
I don't really know how to add this rule to the profile, since the name looks weird; it's not an absolute path on my system:
kernel: type=1400 audit(1353749970.556:556): apparmor="ALLOWED"
operation="open" parent=1
profile="/usr/lib/firefox/firefox{,*[^s][^h]}"
name=2F4170706C69636174696F6E2F7468656D65732F4C696F6E2D7468656D652D72656C6F61646564202F67746B2D322E302F67746B7263
pid=14778 comm="firefox" requested_mask="r" denied_mask="r" fsuid=1000
ouid=1000
Does anyone know what sort of rules I should add to Firefox's profile? With this unknown item denied read access, Firefox looks ugly (no GTK theme applied).
|
The string is encoded because it contains special characters.
I decoded the string with aa-decode 2F4170706C69636174696F6E2F7468656D65732F4C696F6E2D7468656D652D72656C6F61646564202F67746B2D322E302F67746B7263 and found out the issue was caused by a newly introduced GTK theme.
Now it's fixed; simply add a line to that profile, e.g. /Application/themes/** r,
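aa-decode essentially just hex-decodes the name= field. As a sketch, the same can be done with standard tools (this assumes bash's printf, which understands \xHH escapes):

```shell
# Decode an AppArmor hex-encoded name; "2F746D70" is the hex for "/tmp"
hex="2F746D70"
printf '%b\n' "$(printf '%s' "$hex" | sed 's/../\\x&/g')"
```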
| Fix this apparmor rule? |
1,355,584,342,000 |
I want to run some analytics on my mail.log (Postfix 3.2.13 on Ubuntu 20.04 LTS), including updating a database of undeliverable emails, so I wrote a script, excluded mail.log from the generic /var/log logrotate configuration, and created a new /etc/logrotate.d/mail_log which ran the script in the postrotate section. Although the script was getting invoked, it was unable to generate the db file:
postfix/postmap[540039]: fatal: open /etc/postfix/bad_recipients.db: Read-only file system
Thinking that this might actually be a permissions issue, I added a sudoers rule for the syslog user (the /var/log/mail files are owned by syslog user) and amended the logrotate script:
/var/log/mail.log
{
rotate 30
daily
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate
sudo /usr/local/sbin/mailfail.sh
endscript
}
But I still get the same error reported at the top of each mail.log (Read-only file system) and the database is not updated.
That the script executes at all suggests it's not a chroot, permissions, or sudo misconfiguration issue with the script. The other files being written to have permissions for the syslog user (the user owning the log files).
Rsyslogd appears to be the only executable in the chain which is subject to an apparmor profile - but adding the path /etc/postfix* (rwk) to the profile and switching from enforce to complain had no impact on the error.
(Running the script from the command line works as expected.)
|
This is probably caused by systemd’s protection features, which are enabled for logrotate and Postfix. In particular, ProtectSystem, if set to “full” or “strict”, will result in /etc being read-only.
You should move anything you want to be able to modify to /var, or if you can’t avoid changing /etc, override the relevant units (systemctl edit) to change ProtectSystem to “true”, which will protect /usr but not /etc.
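As a sketch, the override could be created with systemctl edit postfix.service (the exact unit name may differ on your system) and would contain:

```
[Service]
ProtectSystem=true
```

The same kind of drop-in can be applied to logrotate.service if that is the unit doing the writing.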
| Postfix thinks its on a read only filesystem? |
1,355,584,342,000 |
I have a home server with Proxmox 5 installed and some services in Docker containers.
All was fine until yesterday.
I rebooted the server, and now all services in all containers cannot bind their sockets because of permission denied. I'm frustrated...
Here some technical details
Linux server 4.10.15-1-pve #1 SMP PVE 4.10.15-15 (Fri, 23 Jun 2017 08:57:55 +0200) x86_64 GNU/Linux
Docker version 18.03.0-ce, build 0520e24
docker-compose version 1.20.1, build 5d8c71b
caddy docker-compose.yml
version: '2'
services:
caddy:
container_name: caddy
image: zzrot/alpine-caddy:latest
restart: unless-stopped
network_mode: "host"
environment:
- PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
hostname: caddy
volumes:
- /etc/localtime:/etc/localtime:ro
- /mirror/config/caddy-config/certs:/root/.caddy
- /mirror/config/caddy-config/caddy:/etc/Caddyfile
docker-compose up output
root@server:~/compose/caddy# docker-compose up
Creating caddy ... done
Attaching to caddy
caddy | Activating privacy features... done.
caddy | 2018/03/23 19:55:21 listen tcp :443: socket: permission denied
caddy exited with code 1
mariadb docker-compose.yml
version: '3.1'
services:
mariadb:
container_name: mariadb
image: mariadb
restart: always
ports:
- 3306:3306/udp
- 3306:3306/tcp
environment:
- PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- MYSQL_ROOT_PASSWORD=password
hostname: mariadb
volumes:
- /mirror/config/mariadb-config/databases:/var/lib/mysql
- /mirror/config/custom.cnf:/etc/mysql/conf.d/config-file.cnf
- /mirror/config/logs:/config/logs
docker-compose up output
mariadb_1 | 2018-03-23 13:20:36 139659836417920 [Warning] Failed to create a socket for IPv6 '::': errno: 13.
mariadb_1 | 2018-03-23 13:20:36 139659836417920 [Warning] Failed to create a socket for IPv4 '0.0.0.0': errno: 13.
mariadb_1 | 2018-03-23 13:20:36 139659836417920 [ERROR] Can't create IP socket: Permission denied
mariadb_1 | 2018-03-23 13:20:36 139659836417920 [ERROR] Aborting
mariadb_1 |
mariadb_mariadb_1 exited with code 1
What could be the reason for this?
Update: some new details
kernel: audit: type=1400 audit(1521896913.536:10071): apparmor="DENIED" operation="create" profile="docker-default" pid=16502 comm="mysqld" family="inet" sock_type="dgram" protocol=0 requested_mask="create" denied_
audit[16271]: AVC apparmor="DENIED" operation="create" profile="docker-default" pid=16271 comm="caddy" family="inet" sock_type="dgram" protocol=0 requested_mask="create" denied_mask="create"
|
I have added security_opt to docker-compose and problem has gone.
security_opt:
- apparmor:unconfined
But I do not consider this option a completely correct solution to the problem.
| docker socket: permission denied |
1,355,584,342,000 |
UPDATE: I now know my issue was database corruption, but discerning it was somewhat tricky--apparmor appeared to be the cause for longer than it should've.
I didn't note when first posting that even after putting mysql in complain mode and sending apparmor both stop and teardown commands my syslog still showed the apparmor message...feeding my irrational fear of the protection layer--still not sure how this happened. I finally got mysql separated from apparmor, but it still couldn't lock its own files. Ergo database corruption--dang. My backups worked fine on a new server.
INITIAL POST:
Mysql server is being blocked by (I think) apparmor, but I'm at wits' end to determine why/how. I'm not overly familiar with apparmor.
I know I shouldn't uninstall apparmor--for at least two reasons--but I've used enough profanity (and given this issue too much time) to not consider it. Hopefully I'm merely missing something simple and will learn here.
The failure began today and follows no system changes. MySQL's error log laments permissions
Can't open and lock privilege tables: Table 'servers' is read only
I've been unable to find anyone with this issue who isn't currently moving their default database store. I moved mine as well--two years ago. The apparmor config is unchanged since 2014/04/21:
/files/bak/tmp/ rw,
/files/bak/tmp/* rwk,
/files/bak/mysql/ rw,
/files/bak/mysql/** rwk,
I've verified filesystem permissions:
# find mysql/ -type d -exec chmod 700 {} \;
# find mysql/ -type f -exec chmod 660 {} \;
# chown -R mysql: mysql
I reloaded apparmor, installed apparmor-utils, pushed mysql to complain
# aa-complain mysql
# apparmor_status
apparmor module is loaded.
5 profiles are loaded.
4 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/tcpdump
1 profiles are in complain mode.
/usr/sbin/mysqld
1 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
1 processes are unconfined but have a profile defined.
/sbin/dhclient (495)
...but viewing syslog still suggests apparmor is blocking mysql after service mysql start:
apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=13899 comm="apparmor_parser"
Before I found the apparmor issue I tried restoring DBs from backups, which also failed with write permissions:
Can't create/write to file '/files/bak/mysql/dbCooper/wp_cb_contact_form.MYI'
I verified the filesystem is 'rw' (even though the above find -exec would have failed anyway):
mount
/dev/xvdf on /files/bak type ext4 (rw,noatime)
I've even tried stopping apparmor, but the syslog still shows can't open and lock privilege tables after this:
# service apparmor stop
[...redacted teardown msg...]
# /etc/init.d/apparmor teardown
* Unloading AppArmor profiles
# service apparmor status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
Is it possible for mysql to lock database files and fail to unlock them when the daemon crashes? If so how would I clear the lock?
I'm currently running my DB with
mysqld --skip-grant-tables
...so I know the executable can run, and the databases are at least somewhat valid (the sites all appear normal). Am I missing something?
thanks for reading.
|
Database corruption.
Occam's Razor prevails. I moved backups to a new server and updated the db location/apparmor config. I cringed as I restarted everything. I'd spent hours convincing myself AppArmor was a cavernously complex and difficult beast, but my reticence was completely without cause--it worked perfectly on the first try.
Amazing how it just works when no files are corrupted--should be an Apple ad.
Now if I could just recover the WordPress site my web developer updated extensively while I was running under --skip-grant-tables but before I realized what the cause was.
:-/
| apparmor: mysql permissions--with no recent changes |
1,355,584,342,000 |
BIND9 is denying queries from IPs outside the local network (external IPs) on Ubuntu.
options {
listen-on port 53 { any; };
directory "/var/bind";
allow-query { any; };
allow-query-cache { any; };
allow-transfer { none; };
recursion no;
dnssec-validation auto;
auth-nxdomain no;
};
include "/etc/bind/zones.conf";
include "/etc/bind/reverse-zones.conf";
include "/etc/bind/named.conf.default-zones";
Example of zones.conf
zone "test.test" IN {
type slave;
file "zones/test.test.zone";
masters { 1.1.1.1; };
};
Also, I saw a denied in my logs so added allow-query-cache { any; }; however this made no difference.
Log:client 192.168.3.100#64088 (test.test.SUB.DOMAIN.INTERN): query (cache) 'test.test.SUB.DOMAIN.INTERN/A/IN' denied
After running "nslookup test.test 172.1.1.5" (DNS timeout).
Now nothing shows in the syslog out of the ordinary. This is what BIND shows before it loads the zones (with no errors):
adjusted limit on open files from 4096 to 1048576
found 18 CPUs, using 18 worker threads
using 18 UDP listeners per interface
using up to 18432 sockets
loading configuration from '/etc/bind/named.conf'
reading built-in trusted keys from file '/etc/bind/bind.keys'
using default UDP/IPv4 port range: [1024, 65535]
using default UDP/IPv6 port range: [1024, 65535]
no IPv6 interfaces found
listening on IPv4 interface lo, 127.0.0.1#53
listening on IPv4 interface eth0, 172.1.1.5#53
generating session key for dynamic DNS
sizing zone task pool based on 162 zones
using built-in root key for view _default
set up managed keys zone for view _default, file 'managed-keys.bind'
command channel listening on 127.0.0.1#953
managed-keys-zone: loaded serial 2
zone 0.in-addr.arpa/IN: loaded serial 1
/var/bind is in a non-standard location, but I have checked the logs after editing the AppArmor profile and see no issue.
I can successfully query bind from the same subnet.
/etc/default/bind9:
# run resolvconf?
RESOLVCONF=no
# startup options for the server
# OPTIONS="-u bind"
OPTIONS="-4 -u bind"
This change was to disable ipv6
I'm a RHEL guy; I set up the server successfully on CentOS 7 (1503) and found out the guys overseas with the slave want to run Ubuntu. So this could be an OS config error on my part.
|
This was an upstream loadbalancer issue - responses were being received by the wrong IP and being dropped by firewalls on the other end.
| BIND9 denying queries from IPs outside localnet (External IPs) - Ubuntu 14.04 |
1,355,584,342,000 |
What are all the ways AppArmor profiles can be matched to processes? One seems to be path based (e.g. the /sbin/dhclient profile is applied when /sbin/dhclient is executed) but is this due to /sbin/dhclient appearing in the sbin.dhclient profile or because of the way sbin.dhclient is named?
Also, for non-path based profile matching (e.g. for the docker-default profile) how is AppArmor told which processes to apply the profile to?
|
AppArmor profiles using a mangled path of the command for the filename is just a convention. From man 7 apparmor:
Profiles are traditionally stored in files in /etc/apparmor.d/ under
filenames with the convention of replacing the / in pathnames with .
(except for the root /) so profiles are easier to manage (e.g. the
/usr/sbin/nscd profile would be named usr.sbin.nscd).
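The mangling convention above can be sketched in shell (a minimal illustration, not part of the man page):

```shell
# Derive the conventional profile filename from a binary's path:
# strip the leading / and replace the remaining slashes with dots.
path=/usr/sbin/nscd
profile_name=$(printf '%s' "${path#/}" | tr '/' '.')
echo "$profile_name"   # usr.sbin.nscd
```

This reproduces the man page's example of /usr/sbin/nscd becoming usr.sbin.nscd.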
The profile name, if it contains a file glob, applies to files matched by that glob. From the AppArmor Core Policy Reference:
The attachment specification is used by AppArmor to determine which
executables a profile will attach to. If alternate profile name is not
supplied the attachment specification is also used as the profiles
name and if an attachment specification is not specified a profile
name must be provided.
The name of a profile is very important in AppArmor. It provides not only
a name(s) that users can associate to the set of profile rules, but is
also used for labeling, ipc, and in the case that the name is an
attachment specification it determines to which executables the
profile attaches.
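To make the distinction concrete, here is a hypothetical profile fragment showing both forms (the paths and the profile name are made up for illustration):

```
# Form 1: the attachment specification doubles as the profile name.
/usr/bin/someapp {
  # rules ...
}

# Form 2: a named profile with an explicit attachment specification (glob).
profile my-app /opt/myapp/bin/* {
  # rules ...
}
```

In the second form, AppArmor attaches the my-app profile to any executable matched by the /opt/myapp/bin/* glob, regardless of which file under /etc/apparmor.d/ the profile lives in.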
| How does AppArmor match profiles to processes? |
1,355,584,342,000 |
Context: I installed apparmor-utils and apparmor-profiles manually, since when trying
apt-get install apparmor-utils apparmor-profiles
can't find the packages.
aa-status
works as usual.
I also did
sed -i -e 's/GRUB_CMDLINE_LINUX_DEFAULT="/&security=apparmor /' /etc/default/grub
to enable it.
However when I try
aa-genprof
it says:
import apparmor.aa as apparmor ... No module named apparmor
What is the problem if I already installed it?
|
I think you are missing the package that provides that Python module:
sudo apt install python3-apparmor
If you can’t install that, you need to fix your repository configuration; you will then also be able to install the Debian AppArmor packages.
| No module named apparmor. Why? |
1,355,584,342,000 |
I have Ubuntu 16.04 and lately I reinstalled AppArmor:
sudo rm -rf /etc/apparmor*
sudo apt-get install apparmor --reinstall
sudo service apparmor restart
When I try to parse a profile with apparmor_parser I get an error:
AppArmor parser error for my.profile in my.profile at line 1:
Could not open 'tunables/global'
I checked my AppArmor folder and noticed it is missing some files:
root@ubuntu:/etc/apparmor.d# ls ./tunables/
home.d multiarch.d xdg-user-dirs.d
Whereas before I removed the files I had these:
root@ubuntu:~# ls /etc/apparmor.d/tunables/
alias apparmorfs dovecot global home home.d kernelvars multiarch multiarch.d proc securityfs sys xdg-user-dirs xdg-user-dirs.d
It seems that the installation didn't install all the required files.
I also tried these packages:
apt-get install apparmor-utils apparmor-easyprof apparmor-easyprof-ubuntu
But I still don't have important files such as tunables/global.
Any idea how I can reinstall AppArmor so it matches the default Ubuntu installation?
|
I went to this place:
https://launchpad.net/ubuntu/xenial/+source/apparmor
download:
https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/apparmor/2.10.95-0ubuntu2.10/apparmor_2.10.95.orig.tar.gz
Inside this tar file I went to ../profiles/apparmor.d and extracted all of the content to /etc/apparmor.d with:
cp -r ./apparmor.d/ /etc/apparmor.d/
But it's weird that I needed to do it manually.
I would be glad if someone could share an automatic way to do it with apt-get.
| Re-installation of AppArmor misses some files |
1,355,584,342,000 |
Every time I run virsh destroy ${KVM} as root I get the following error (virsh shutdown ${KVM} shows absolutely no reaction; nothing happens):
error: Failed to destroy domain ${KVM}
error: Failed to terminate process 11956 with SIGTERM: Permission denied
When I run shutdown -h now inside the KVM, it hangs forever until I kill the qemu-system-x86_64 process (kill ${PID_OF_QEMU_PROCESS}). As stated in the syslog, apparmor is blocking the calls (both for virsh shutdown and virsh destroy):
apparmor="DENIED" operation="ptrace" profile="/usr/sbin/libvirtd" pid=23212
comm="libvirtd" requested_mask="trace" denied_mask="trace" peer="unconfined"
In the qemu configuration file /etc/libvirt/qemu.conf I tried to disable Apparmor (security_driver = "none"), but I still get the same error.
Some details: OS = Debian 9, Kernel = 4.14.0-0.bpo.2-amd64, libvirt-version = 3.0.0-4.
Does anyone know how to fix the problem without disabling apparmor?
|
Setting security_driver = "none" will not disable AppArmor in the kernel, only some of the support in libvirt itself.
Looking at the apparmor profile in current stable (debian 9/stretch) and the one currently in unstable, I see quite some differences.
I believe you could add the following rule in /etc/apparmor.d/local/usr.sbin.libvirtd (this is one of the many differences between the two versions):
ptrace (trace) peer=unconfined,
Then restart the AppArmor service with service apparmor restart.
Other rules would probably be needed to make everything work though.
Another solution would be to set the profile to "complain" mode with aa-complain /usr/sbin/libvirtd; that will prevent AppArmor from denying anything while still logging the issues.
You could later use aa-logprof to generate the missing rules (after carefully reviewing them) or try to get the apparmor profile files from unstable.
| KVM: Can not destroy VM (Permission denied) - AppArmor blocking Libvirt |
1,509,513,280,000 |
I just installed Debian 9.2.1 on an old laptop as a cheap server. The computer is not physically accessed by anyone other than myself, so I would like to automatically login upon startup so that if I have to use the laptop itself rather than SSH, I don't have to bother logging in. I have no graphical environments installed, so none of those methods would work, and I've tried multiple solutions such as https://superuser.com/questions/969923/automatic-root-login-in-debian-8-0-console-only
However, all it did was result in no login prompt being shown at all, so I reinstalled Debian.
What can I do to automatically log in without a graphical environment? Thanks!
|
Edit /etc/systemd/logind.conf and change #NAutoVTs=6 to NAutoVTs=1
Create /etc/systemd/system/[email protected]/override.conf by running:
systemctl edit getty@tty1
Paste the following lines
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I 38400 linux
enable the [email protected] then reboot
systemctl enable [email protected]
reboot
Arch Linux docs: getty
| Automatically Login on Debian 9.2.1 Command Line |
1,509,513,280,000 |
I have started running Jessie (Debian 8) with a LightDM/Xfce desktop on my HTPC after it ground to a near-halt on Windows 7. One of the things that I cannot get past is having to type the password -- not a normal thing to do for watching TV.
Following the instructions on the Debian Wiki I got as far as my login being automatically selected. But this still requires the password, and half-fixes like empty / trivial passwords are not allowed.
Is it possible to go straight to the Xfce session without login/password?
|
I solved it using the Debian wiki page and this page on LinuxServe -- especially the comment!
When I run /usr/sbin/lightdm --show-config I get two files: /etc/lightdm/lightdm.conf and /usr/share/lightdm/lightdm.conf.d/01_debian.conf
These I edited so that in /usr/share/lightdm/lightdm.conf.d/01_debian.conf it says:
greeter-session=lightdm-greeter
session-wrapper=/etc/X11/Xsession
and in /etc/lightdm/lightdm.conf it says:
autologin-user=username
autologin-user-timeout=0
The trick was that, as the comment at the end of the second link says, the autologin settings need to be in the [SeatDefaults] section of the file. There are two places where the lines appear, commented, and I had uncommented the first one.
It was a bit strange because in normal settings files for Debian, lines like these don't appear twice -- but I should have taken a better look!
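For reference, the autologin lines end up under the [SeatDefaults] header, roughly like this (username is a placeholder for your own login):

```
[SeatDefaults]
autologin-user=username
autologin-user-timeout=0
```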
| auto login on xfce in jessie |
1,509,513,280,000 |
How do I open Fedora 19 without a user password?
My user name is mz2, and I want to login without a password.
|
According to the passwd man page:
-d This is a quick way to delete a password for an account. It will
set the named account passwordless. Available to root only.
so you can do this (as root):
passwd -d mz2
then you can login without a password
| How to open Fedora without a user password? |
1,509,513,280,000 |
How to autologin a specified user with xdm?
I know it's possible with other display managers but I wasn't able to figure out how xdm has to be configured to autologin a certain user.
Is it possible? Or should I rather remove xdm and simply use an initscript with startx?
|
I haven't used xdm in a long while but as far as I know autologin is not supported by xdm (and, as per one of the devs, not needed).
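If you do go the startx route instead, one common approach is to autologin on tty1 (via getty) and start X from the login shell; a minimal, hypothetical ~/.bash_profile fragment (assumes getty autologin is already configured):

```
# Start X automatically on the first virtual console, if not already running
if [ -z "$DISPLAY" ] && [ "$(tty)" = /dev/tty1 ]; then
  exec startx
fi
```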
| How to autologin with XDM? |
1,509,513,280,000 |
A computer is being used for shift work, currently with a shared account that autologins at boot. Certain UI applications start up at autologin and it's important that they run continuously.
How can we securely authenticate individual users into this system (such as with smartcard) without ever logging out? The individual users only unlock and lock the screen with their credentials, they don't own the account per se, and the account never logs out.
A suggested method is to connect the console to a KVM that supports authentication. This would require physical security of the system having that open console, and trust in the KVM. Clearly better than what we're doing now. Is there an elegant in-computer solution? It seems like a window manager ought to be configurable or modifiable to do such a thing. Maybe there are tutorials on this and I'm simply not using the right search terms?
|
You could try to implement smartcard-based authentication with PAM and map the smartcard certificate subject-DNs of all authorized users to this generic local login.
Newer versions of sssd have smartcard support. Not sure how mature it is though.
| Multiple user authentications for same account/environment |
1,509,513,280,000 |
When I supply the username and password to telnet to log in, the connection is automatically closed after the following command:
echo password | telnet mymachine -l mysuername
The console outputs:
Connected to mymachine.
Connection closed by foreign host.
Then the connection is closed. Is there a way to maintain the connection while still supplying username and password to the login at one time?
Update:
I wrote a shell function based on the accepted answer that does the login without typing the username and password every time:
logon() {
  cat > ./loginscript <<'EOF'
#!/usr/bin/expect
spawn telnet remotemachine -l username
expect Password:
send password\r
expect -- {$ }
interact
EOF
  chmod 700 ./loginscript
  ./loginscript
}
|
If you want to continue with an interactive session, the usual way is to use expect to do the login:
#!/usr/bin/expect
spawn telnet mymachine -l myusername
expect Password:
send password\r
expect -- {$ }
interact
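A variant of the script above takes the host, user, and password as arguments instead of hard-coding them (a sketch; lassign requires Tcl 8.5 or later, which modern expect builds ship with):

```
#!/usr/bin/expect
# Usage: ./login.exp host user password
lassign $argv host user password
spawn telnet $host -l $user
expect Password:
send "$password\r"
expect -- {$ }
interact
```

Keep in mind that passing a password on the command line exposes it to other users via the process list.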
| NOT close telnet when supplying username and password using echo |
1,509,513,280,000 |
I switched from bash to the fish shell. I liked it and decided to use it on my servers too. How can I start tmux automatically on an ssh connection? I followed this instruction for bash, but fish is different and the recipe doesn't work without substantial rewriting.
|
Byobu, a terminal multiplexer based on tmux, offers an autostart feature.
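If you'd rather not install Byobu, the bash recipe can be translated to fish directly; a minimal sketch for ~/.config/fish/config.fish (the session name main is arbitrary, and status is-interactive assumes a reasonably recent fish):

```fish
# Auto-attach to a tmux session on interactive SSH logins, unless already inside tmux
if status is-interactive; and set -q SSH_CONNECTION; and not set -q TMUX
    exec tmux new-session -A -s main
end
```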
| How can I start tmux automatically in fish shell while connecting to remote server via ssh |
1,509,513,280,000 |
I have a headless computer (TS-7680) with Debian Jessie that I access via Putty command line. The computer does not have any GUI and is only accessed by the command line. It will be put into the field with a program that needs to restart automatically if there is a temporary power outage.
I know how to get the program to run automatically. However, I am having trouble getting past the login. Every time I boot the computer, I am prompted for the root login. I do not have a password on this computer. After the boot-up code, it looks like this:
Debian GNU/Linux 8 ts7680 ttyAMA0
ts7680 login:
At which point, I must type root to get to the command prompt root@ts7680:~#
Does anyone know how to autologin? I have googled all over, but cannot find an answer. I've tried this solution with no luck.
|
Assuming systemd treats ttyAMA0 as a serial port the same way it would treat ttyS0 on a PC, you need to edit the command started by the [email protected].
(You could check if systemctl status serial-getty@ttyAMA0 shows it is active.)
The base version is in /lib/systemd/system/[email protected] and inside it we find the command that starts the getty:
ExecStart=-/sbin/agetty --keep-baud 115200,38400,9600 %I $TERM
Create an override file for the service to start the agetty with --autologin root:
Create the directory /etc/systemd/system/[email protected], and a file called override.conf in it with the following content:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --keep-baud 115200,38400,9600 %I $TERM
systemctl edit [email protected] will help with doing this.
Note that the terminal type passed as an argument to agetty needs to match what your serial terminal actually is. This has been the subject of various approaches in systemd over the years. It has variously been hardwired to vt102 and (indirectly) inherited from the kernel/bootstrap loader. The current approach (as of 2020) is rather complex in how it makes its decision.
However, it only eventually picks from the three values linux, vt220, and whatever the kernel/bootstrap loader says for the Linux console. The first is never right for any real terminal, and unlikely to even approximately match a terminal emulator over a serial cable. The second is unlikely to be right, especially when it comes to colour. Neither matches PuTTY, whose correct terminal type is putty (or putty-256color). And the third probably won't be putty, either; unless you've redirected the Linux /dev/console to the serial terminal and PuTTY as well, and properly reconfigured the boot loader with the console terminal type in lockstep.
So for best results you also need to set the TERM environment variable in that override file:
Environment=TERM=putty-256color
Then reload systemd and we can check that the new configuration is in place:
# systemctl daemon-reload
# systemctl cat serial-getty@ttyAMA0 | grep Exec
(we should see the new command on the last ExecStart line.)
If you want to only autologin after a key press, add -p or --login-pause to the agetty command line.
The page you linked talks about configuring automatic login on a virtual console: they are configured through [email protected] and the command line used for agetty is a bit different (it seems to be just missing the --keep-baud option). In that case we would use, say /etc/systemd/system/[email protected]/override.conf for tty1 instead.
There's an answer in Ask Ubuntu with more details about overriding systemd configuration.
On a system with sysvinit instead of systemd, you need to add/modify the line corresponding to the serial port in /etc/inittab:
T0:23:respawn:/sbin/getty -L ttyAMA0 --autologin root 38400 vt100
| How do you configure autologin in Debian Jessie? |
1,509,513,280,000 |
Can someone help me set up autologin in text console mode as root on Fedora? Usually I can do it with a script like this:
/sbin/autologin.sh:
#!/bin/bash
0</dev/$1 1>/dev/$1 2>&1
cat /etc/issue
shift
exec $*
and in /etc/inittab do the login by calling that script:
1:2345:respawn:/sbin/autologin.sh tty1 login -f root
..
..
Now I can't do that, since Fedora uses /etc/init/tty.conf:
stop on runlevel [016]
respawn
instance $TTY
exec /sbin/mingetty $TTY
I know it's dangerous to autologin, especially as root, but I don't care about security here.
|
Replace the exec line in /etc/init/tty.conf with:
exec /sbin/mingetty --autologin root $TTY
| autologin console as root on fedora |
1,509,513,280,000 |
I'm working with Linux (Fedora) without a GUI (I've disabled it). In the system, there is one user (tarik). I would like to log in automatically without typing the username and password. I have already deleted the password using the command:
passwd -d tarik
However, I don't know how to automatically log in to the user without typing the username.
Thanks
|
Under Fedora, I'll presume you use systemd as the init system and that the console you want to log in from is a virtual console (tty[N]), that is to say not a serial console, and additionally that agetty is running at startup:
What you need is to override the default parameters given to agetty. Create a drop-in file, for example /etc/systemd/system/[email protected]/autologin.conf, and add the following lines:
[Service]
ExecStart=
ExecStart=-/sbin/agetty -o '-p -f -- \\u' --noclear --autologin tarik %I $TERM
| Login to Linux automatically without input username and password |