General problem

I want to write a script that interacts with the user even though it is in the middle of a chain of pipes.

Concrete example

Concretely, it takes a file or stdin, displays lines (with line numbers), asks the user to input a selection of line numbers, and then prints the corresponding lines to stdout. Let's call this script selector. Then basically, I want to be able to do

```
grep abc foo | selector > myfile.tmp
```

If foo contains

```
blabcbla
foo abc bar
quux
xyzzy abc
```

then selector presents me (on the terminal, not in myfile.tmp!) with options

```
1) blabcbla
2) foo abc bar
3) xyzzy abc
Select options:
```

after which I type in 2-3 and end up with

```
foo abc bar
xyzzy abc
```

as the contents of myfile.tmp.

I've got a selector script up and running, and basically it is working perfectly if I don't redirect input and output. So `selector foo` behaves like I want. However, when piping things together as in the above example, selector prints the presented options to myfile.tmp and tries to read a selection from the grepped input.

My approach

I've tried to use the -u flag of read, as in

```
exec 4< /proc/$PPID/fd/0
exec 4> /proc/$PPID/fd/1
nl $INPUT >&4
read -u4 -p"Select options: "
```

but this doesn't do what I hoped it would.

Q: How do I get actual user interaction?
Using /proc/$PPID/fd/0 is unreliable: the parent of the selector process may not have the terminal as its input. There is a standard path that always refers to the current process's terminal: /dev/tty.

```
nl "$INPUT" >/dev/tty
read -p "Select options: " </dev/tty
```

or

```
exec </dev/tty >/dev/tty
nl "$INPUT"
read -p "Select options: "
```
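Putting this together, a minimal sketch of such a selector (the pick_lines helper and the "N-M" range syntax are illustrative assumptions, not part of the original script):

```shell
#!/bin/bash
# Sketch of "selector": the menu and the prompt go to /dev/tty, the
# chosen lines go to stdout, so it works in the middle of a pipeline.

pick_lines() {            # print lines N..M of stdin, given a "N-M" range
    sed -n "$(printf '%s' "$1" | tr '-' ',')p"
}

selector() {
    input=$(cat "${1:--}")                  # file argument, or stdin
    printf '%s\n' "$input" | nl >/dev/tty   # menu to the terminal
    IFS= read -rp "Select options: " sel </dev/tty
    printf '%s\n' "$input" | pick_lines "$sel"
}
```

With this sketch, `grep abc foo | selector > myfile.tmp` prompts on the terminal and writes only the selected lines to myfile.tmp.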
How to read user input when using script in pipe
I'm investigating the behaviour of a script that is normally run as an automated process (e.g. cron, Jenkins). The script can (eventually) invoke commands that behave differently (seeking user input) when run interactively; for example, patch will ask what to do with a reversed patch, and svn will ask for passwords, but I need to see what happens when they're run non-interactively.

Persuading patch that it's non-interactive is fairly easy; I just need to redirect stdout to be a non-tty:

```
$ </dev/null > >(cat) /path/to/myscript --args
```

However svn will connect to the controlling terminal if it exists; editing the scripts to pass --non-interactive isn't really an option, as this is coming from several levels deep and it'd be difficult to be certain I'd found every invocation.

Is there a way to invoke a script/command non-interactively, without a controlling terminal (so that /dev/tty doesn't exist)? I'd prefer stdout/stderr to still go to my terminal.

(I found the question Run script in a non interactive shell? but the answers to that discuss the differences between the cron and user environment; I've already eliminated all differences except non-interactivity.)
You need to start another session not attached to a terminal, so for instance:

```
$ setsid sh -c 'tty; ps -jp "$$"; echo test' < /dev/null > log 2>&1
$ cat log
not a tty
  PID  PGID   SID TTY          TIME CMD
19506 19506 19506 ?        00:00:00 sh
test
```

See also the start-stop-daemon command found on some Linux distributions. There's also a daemon command.
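A quick way to confirm the effect (a sketch; setsid here is the util-linux command): inside the new session, opening /dev/tty fails, which is exactly what stops prompts like svn's.

```shell
# In a session with no controlling terminal, /dev/tty cannot be opened,
# so anything that insists on prompting the terminal has nowhere to go.
setsid sh -c '
    if sh -c ": </dev/tty" 2>/dev/null; then
        echo "have a controlling terminal"
    else
        echo "no controlling terminal"
    fi
' </dev/null
```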
Invoke a command/script disconnected from the controlling terminal?
I have a number of functions defined in my .bashrc, intended to be used interactively in a terminal. I generally precede them with a comment describing their intended usage:

```
# Usage: foo [bar]
# Foo's a bar into a baz
foo() {
  ...
}
```

This is fine if browsing the source code, but it's nice to run type in the terminal to get a quick reminder of what the function does. However this (understandably) doesn't include comments:

```
$ type foo
foo is a function
foo ()
{
    ...
}
```

Which got me thinking "wouldn't it be nice if these sorts of comments persisted so that type could display them?" And in the spirit of Python's docstrings I came up with this:

```
foo() {
  : Usage: foo [bar]
  : "Foo's a bar into a baz"
  ...
}

$ type foo
foo is a function
foo ()
{
    : Usage: foo [bar];
    : "Foo's a bar into a baz";
    ...
}
```

Now the usage is included right in the type output! Of course, as you can see, quoting becomes an issue which could be error-prone, but it's a nicer user experience when it works.

So my question is: is this a terrible idea? Are there better alternatives (like a man/info for functions) for providing users of Bash functions with additional context? Ideally I'd still like the usage instructions to be located near the function definition so that people viewing the source code also get the benefit, but if there's a "proper" way to do this I'm open to alternatives.

Edit: these are all fairly simple helper-style functions and I'm just looking to get a little extra context interactively. Certainly for more complex scripts that parse flags I'd add a --help option, but for these it would be somewhat burdensome to add help flags to everything. Perhaps that's just a cost I should accept, but this : hack seems to work reasonably well without making the source much harder to read or edit.
I don't think that there is just one good way to do this. Many functions, scripts, and other executables provide a help message if the user provides -h or --help as an option:

```
$ foo() {
  [[ "$1" =~ (-h|--help) ]] && {
    cat <<EOF
Usage: foo [bar]
Foo's a bar into a baz
EOF
    return
  }
  : ...other stuff...
}
```

For example:

```
$ foo -h
Usage: foo [bar]
Foo's a bar into a baz
$ foo --help
Usage: foo [bar]
Foo's a bar into a baz
```
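The `:` docstrings from the question can also be surfaced on demand. This helper is a sketch (the name `usage` and the sed cleanup are assumptions): it pulls the `:` lines back out of bash's `type` output:

```shell
# Hypothetical helper: extract ':'-style docstring lines from a
# function's body as printed by bash's "type" builtin.
usage() {
    type "$1" |
        grep -E '^[[:space:]]*: ' |      # keep only the ':' lines
        sed -e 's/^[[:space:]]*: //' \
            -e 's/;$//' \
            -e 's/^"\(.*\)"$/\1/'        # drop trailing ';' and outer quotes
}
```

For a function defined as `foo() { : "Usage: foo [bar]"; ...; }`, `usage foo` would print just `Usage: foo [bar]`.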
Displaying usage comments in functions intended to be used interactively
I have written some scripts and stored them in my ~/bin folder. I'm already able to run them just by calling their names during a shell session. However, they aren't running interactively (I mean, my ~/.bashrc aliases are not loaded). Is there a way to mark them to run interactively by default? Or must I source ~/.bashrc inside them to use any aliases defined there? Other alternatives are welcome!
If you add the -i option to your hashbang(s) it will specify that the script runs in interactive mode:

```
#!/bin/bash -i
```

Alternatively you could call the scripts with that option:

```
bash -i /path/to/script.sh
```
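A quick sketch to check the effect: in an interactive shell, `$-` contains `i`, which is also the test that ~/.bashrc's own guard uses, so aliases defined there become available.

```shell
#!/bin/bash -i
# With -i in the shebang this shell is interactive: "$-" contains "i",
# so ~/.bashrc is read (and its aliases defined) before the body runs.
case $- in
    *i*) echo "interactive" ;;
    *)   echo "non-interactive" ;;
esac
```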
Is it possible to define a bash script to run interactively by default?
I often want a temporary directory where I can unpack some archive (or create a temporary project) and look around some files. It is unpredictable in advance how long a particular directory may be needed. Such directories often clutter the home directory, /tmp, and project directories. They often have names like weak passwords — qqq, 1, test — that become undescriptive a month later. Is there some shell command or external program that can help manage such throw-away directories, so that they get cleaned up automatically when I lose interest in them, where I don't need to invent a name for them, but that can be given a name and made persistent easily? If there is no such tool, is it a good idea to create one?
It doesn’t quite cover all the features you mention (easily making the temporary directory persistent), but I rather like Kusalananda’s shell for this. It creates a temporary directory, starts a new shell inside it, and cleans the temporary directory up when the shell exits. Before the shell exits, if you decide you want to keep the temporary directory, send a USR1 signal to shell; typically

```
kill -USR1 $PPID
```

When you exit, shell will tell you where to find the temporary directory, and you can move it somewhere more persistent.

"If there is no such tool, is it a good idea to create one?" This is the best kind of tool to create — you already know it would be useful for you.
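The core of such a tool is small. A hedged sketch (tmpshell and its messages are invented names; real implementations like Kusalananda's shell handle more edge cases):

```shell
# Sketch: run a shell in a fresh temporary directory; remove the
# directory on exit unless SIGUSR1 was received while inside it.
tmpshell() {
    dir=$(mktemp -d) || return 1
    keep=0
    trap 'keep=1' USR1
    ( cd "$dir" && "${SHELL:-/bin/sh}" )
    trap - USR1
    if [ "$keep" -eq 1 ]; then
        echo "kept: $dir"
    else
        rm -rf "$dir"
        echo "removed: $dir"
    fi
}
```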
Is there some interactive analogue of `mktemp` that helps to organize throw-away directories?
I'm writing a pretty ad-hoc install script for something. Not many control constructs, basically just a list of commands. I'd like the user to confirm each command before it gets executed. Is there a way to let bash do that, without prefixing every command with a shell function name?
You could use extdebug:

```
shopt -s extdebug
trap '
  IFS= read -rn1 -d "" -p "run \"$BASH_COMMAND\"? " answer <> /dev/tty 1>&0
  echo > /dev/tty
  [[ $answer = [yY] ]]' DEBUG
cmd1
cmd2
...
```

For reference, the zsh equivalent would be:

```
TRAPDEBUG() {
  read -q "?run \"$ZSH_DEBUG_CMD\"? " || setopt errexit
  echo > /dev/tty
}
cmd1
cmd2
...
```

More portably:

```
run() {
  printf '%s ' "run $@?" > /dev/tty
  IFS= read -r answer < /dev/tty
  case $answer in
    [yY]*) "$@";;
  esac
}
run cmd1
run cmd2
run cmd3 > file
```

Beware that in run cmd3 > file, the file will be truncated even if you say n. So you may want to write it:

```
run eval 'cmd3 > file'
```

Or move the eval to the run function as in:

```
run() {
  printf '%s ' "run $@?" > /dev/tty
  IFS= read -r answer < /dev/tty
  case $answer in
    [yY]*) eval "$@";;
  esac
}
run cmd1
run 'cmd2 "$var"'
run 'cmd3 > file'
```

Another portable one, but with even more limitations:

```
xargs -pL1 env << 'EOF'
cmd1 "some arg"
cmd2 'other arg' arg\ 2
ENV_VAR=value cmd3
EOF
```

It only works for commands (ones found in $PATH), arguments can only be literal strings (no variables or other shell constructs, though xargs understands some forms of quoting of its own), and you can't have redirections, pipes...
Prompt for confirmation for every command
I am writing a shell script to install all my required applications on my Ubuntu PC in one shot (while I can take a stroll or do something else). For most applications, adding -y to the end of the apt-get install statement has worked well to avoid the need for any user involvement. My script looks something like this:

```
#!/bin/bash
add-apt-repository ppa:webupd8team/sublime-text-3 -y
apt-get update -y
apt-get upgrade -y
apt-get install synaptic -y
apt-get install wireshark -y
```

Though I no longer have to worry about Do you want to continue? [Y/n] or Press [ENTER] to continue or ctrl-c to cancel adding it, the problem is with wireshark, which requires a response to an interactive debconf prompt. How can I avoid this mandatory intervention?
Configure the debconf database:

```
echo "wireshark-common wireshark-common/install-setuid boolean true" | sudo debconf-set-selections
```

Then, install Wireshark:

```
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark
```

You might also want to suppress the output of apt-get. In that case:

```
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark > /dev/null
```
How to choose a response for interactive prompt during installation from a shell script
From what I understand, a daemon is a background process, but a daemon requires its own config file to set environment variables. E.g. the Hadoop daemon requires hadoop-env.sh to set the environment variable JAVA_HOME; you can't simply get the variable from ~/.bashrc. The reason is that a daemon, being a background process, is non-interactive, while ~/.bashrc is meant to be used only in interactive sessions, to prevent cases like alias cp='cp -i'. And the latest ~/.bashrc has a safeguard at the top of the file that rejects non-interactive callers, i.e. without the -i option it will return early:

```
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac
```

It makes me wonder why bash doesn't divide the config files into 3 groups, such as:

```
~/.bashrc_interactive
~/.bashrc_non_interactive
~/.bashrc_global    # (both interactive and non-interactive)
```

Then a user could simply set JAVA_HOME in ~/.bashrc_non_interactive or ~/.bashrc_global, with no need to add this environment variable in each daemon file, over and over again. Is there any reason or restriction why bashrc doesn't support non-interactive shells in that way, or in any other way? Or am I misunderstanding some concepts?
You already have the opportunity to set BASH_ENV to the pathname of a file that non-interactive bash scripts parse before running. This allows you to do, in a crontab for example,

```
@hourly BASH_ENV="$HOME/.bashrc_non_interactive" "$HOME/bin/mybashscript"
```

or even

```
BASH_ENV="$HOME/.bashrc_non_interactive"

@hourly "$HOME/bin/mybashscript"
@daily  "$HOME/bin/myotherbashscript"
```

$BASH_ENV is usually empty, but there's nothing stopping you from setting it globally on your server, pointing it to a file under /etc that does

```
if [ -f "$HOME/.bashrc_non_interactive" ]; then
    . "$HOME/.bashrc_non_interactive"
fi
```

However, if a script needs specific variables set, such as JAVA_HOME etc., then it may be a better idea to set BASH_ENV explicitly on a script-by-script basis, or to explicitly source the relevant file from within the script itself, or to just set the variables in the script. Collecting all the things any non-interactive shell may want to use in a single file may slow down scripts and will potentially also pollute the environment of scripts with things they do not need.
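A minimal demonstration of the mechanism (the files here are created just for the example): a variable exported in the BASH_ENV file is visible to a non-interactive bash script.

```shell
# BASH_ENV names a file that non-interactive bash sources before
# running a script, so JAVA_HOME set there reaches the script.
env_file=$(mktemp)
script=$(mktemp)
echo 'export JAVA_HOME=/opt/java' > "$env_file"
echo 'echo "JAVA_HOME=$JAVA_HOME"' > "$script"
BASH_ENV="$env_file" bash "$script"
rm -f "$env_file" "$script"
```

This prints JAVA_HOME=/opt/java, even though the variable was never set in the calling shell.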
Why no such non-interactive version of bashrc?
Updated: This is not a file system problem.

I used to be able to enter:

```
$ echo kødpålæg
```

But now bash/zsh change this to:

```
bash$ echo kddddddddplg
zsh$ echo k<c3><b8>dp<c3><a5>l<c3><a6>g
```

I can run cat and enter 'kødpålæg' with no problem:

```
$ cat
kødpålæg
kødpålæg
```

This is both with this environment:

```
$ locale
LANG=C
LANGUAGE=C
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=C
```

and in this:

```
$ locale
LANG=da_DK.utf8
LANGUAGE=da_DK.utf8
LC_CTYPE="da_DK.utf8"
LC_NUMERIC="da_DK.utf8"
LC_TIME="da_DK.utf8"
LC_COLLATE="da_DK.utf8"
LC_MONETARY="da_DK.utf8"
LC_MESSAGES="da_DK.utf8"
LC_PAPER="da_DK.utf8"
LC_NAME="da_DK.utf8"
LC_ADDRESS="da_DK.utf8"
LC_TELEPHONE="da_DK.utf8"
LC_MEASUREMENT="da_DK.utf8"
LC_IDENTIFICATION="da_DK.utf8"
LC_ALL=da_DK.utf8
```

csh does not change 'kødpålæg'. How can I get the old behaviour back, so I can enter 'kødpålæg'?

Running any of these gives the old behaviour:

```
LC_ALL=en_GB.utf-8 luit
LC_ALL=da_DK.utf-8 luit
LC_ALL=en_GB.iso88591 luit
LC_ALL=da_DK.iso88591 luit
```

but only for that single session.

This:

```
$ od -An -vtx1
ø
```

gives:

```
c3 b8 0a
```

So it seems the input from Konsole to bash is UTF-8.

```
$ konsole --version
QCoreApplication::arguments: Please instantiate the QApplication object first
Qt: 5.5.1
KDE Frameworks: 5.18.0
Konsole: 15.12.3

$ bash --version
GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

$ zsh --version
zsh 5.1.1 (x86_64-ubuntu-linux-gnu)

$ dpkg -l csh
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name  Version        Architecture  Description
+++-=====-==============-=============-========================================
ii  csh   20110502-2.1u  amd64         Shell with C-like syntax
```
I'd say most likely your terminal is misconfigured and sends and displays characters in some single-byte character set, probably ISO8859-1 or ISO8859-15 given the sample characters you show, instead of the locale's charset.

There is typically no ø, å, æ character in the C locale, and the ISO8859-1(5) encodings of those characters (0xf8, 0xe5, 0xe6) don't form valid characters in UTF-8. Line editors like readline or zle need to decode those into characters, as they need to know how many bytes make up a display column so they can do cursor positioning correctly.

Moreover, in the C locale, which on most systems uses ASCII, since there is no character in ASCII with the 8th bit set, that 8th bit would be understood by bash as meaning Meta. 0xF8 would be understood as meaning Meta+x (0x78 (x) | 0x80), because that's what some terminals send upon Alt+x or Meta+x. While M-x is not bound to anything by default in bash, ß would be understood as M-_ and insert the last word. You can turn that off with:

```
bind 'set convert-meta off'
```

Shells like csh are too ancient to even be aware that characters may be made of several bytes or take up anything but a single column width, so they don't bother.

To verify that theory, run:

```
od -An -vtx1
```

and enter those characters followed by ^D^D and see what encoding you see. If you see 0xf8 for ø, that means I'm right. If you see 0xc3 0xb8 instead, which is the UTF-8 encoding of ø, that means I'm wrong.

Or change the locale to da_DK.iso88591 (check in locale -a for the exact name of the locale on your system) and see if that works better.

Now as to why your terminal may send the wrong encoding for those characters: maybe it was started in a locale where the charset was iso8859-1. Maybe it's configured to ignore the locale and use a specific charset (look for charset or encoding in its configuration). Or maybe you've sshed in from another system where the locale was using ISO8859-1(5) as its charset.

I can reproduce that behaviour if, from a UTF-8 terminal, I run:

```
LC_ALL=en_GB.iso885915 luit
```

and from within luit change the locale to C or a UTF-8 one and enter non-ASCII characters.
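The od check can also be run non-interactively by piping the character in instead of typing it:

```shell
# \303\270 is the UTF-8 encoding of "ø": the two bytes c3 b8.
# In ISO8859-1 the same character would be the single byte f8.
printf '\303\270' | od -An -vtx1
```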
Non-ascii chars are no longer displayed in bash
Such as in a bash script:

```
read -p "Only UI (y/n)" Temp_Answer
```

Is it possible to do this while a Makefile is running? I want to do different things in the Makefile based on $Temp_Answer (y or n).
Accessing shell variables

All exported shell environment variables are accessible like so:

```
$(MYBASEDIR)
```

Example

Say I have this Makefile:

```
$ cat Makefile
all:
	@echo $(FOO)
```

Now with no variable set, big surprise, we get nothing:

```
$ printenv | grep FOO
$
$ make
$
```

With the variable just set, but not exported:

```
$ FOO=bar
$ printenv | grep FOO
FOO=bar
$ export -p | grep FOO
$
$ make
$
```

Now with an exported variable:

```
$ export FOO
$ export -p | grep FOO
declare -x FOO="bar"
$ make
bar
```

Reading input from user

Sure you can read input from the user within a Makefile. Here's an example:

```
all:
	@echo "Text from env. var.: $(FOO)"
	@echo ""
	@while [ -z "$$CONTINUE" ]; do \
	    read -r -p "Type anything but Y or y to exit. [y/N]: " CONTINUE; \
	done ; \
	[ $$CONTINUE = "y" ] || [ $$CONTINUE = "Y" ] || (echo "Exiting."; exit 1;)
	@echo "..do more.."
```

With this in place you can now either continue or stop the Makefile.

Example

Pressing y will continue:

```
$ make
Text from env. var.: bar

Type anything but Y or y to exit. [y/N]: y
..do more..
$
```

Pressing anything else, such as n, will stop it:

```
$ make
Text from env. var.: bar

Type anything but Y or y to exit. [y/N]: n
Exiting.
make: *** [all] Error 1
```

References

- The Basics: Getting environment variables into GNU Make
- Linux / Unix: Bash Shell See All Exported Variables and Functions
- How to prompt for target-specific Makefile variable if undefined?
How to get variables from the command line while a Makefile is running?
I'm trying to figure out what words the --interactive option of cp accepts as input. For your convenience, here's code that sets up files for experimentation:

```
touch example_file{1..3}
mkdir example_dir
cp example_file? example_dir
cp -i example_file? example_dir
```

The shell then asks interactively for each file whether it should be overwritten. It seems to accept all sorts of random input:

```
cp: overwrite 'example_dir/example_file1'? q
cp: overwrite 'example_dir/example_file2'? w
cp: overwrite 'example_dir/example_file3'? e
```

I tried looking into the source code of cp, but I don't know C, and searching for overwrite is of no help. As far as I can tell it accepts some words as confirmation for overwriting, and everything else is taken as a no. The problem is that even words like ys seem to be accepted as yes, so I don't know what works and what doesn't. I'd like to know how exactly this works, and to have some proof of it by means of documentation or intelligible snippets of source code.
The POSIX standard only specifies that the response needs to be "affirmative" for the copying to be carried out when -i is in effect.

For GNU cp, the actual input at that point is handled by a function called yesno(). This function is defined in the lib/yesno.c file in the gnulib source distribution, and looks like this:

```
bool
yesno (void)
{
  bool yes;

#if ENABLE_NLS
  char *response = NULL;
  size_t response_size = 0;
  ssize_t response_len = getline (&response, &response_size, stdin);

  if (response_len <= 0)
    yes = false;
  else
    {
      /* Remove EOL if present as that's not part of the matched response,
         and not matched by $ for example.  */
      if (response[response_len - 1] == '\n')
        response[response_len - 1] = '\0';
      yes = (0 < rpmatch (response));
    }

  free (response);
#else
  /* Test against "^[yY]", hardcoded to avoid requiring getline,
     regex, and rpmatch.  */
  int c = getchar ();
  yes = (c == 'y' || c == 'Y');
  while (c != '\n' && c != EOF)
    c = getchar ();
#endif

  return yes;
}
```

If NLS ("National Language Support") is not used, you can see that the only reply that the function returns true for is a response that starts with an upper- or lower-case Y character. Any additional or other input is discarded.

If NLS is used, the rpmatch() function is called to determine whether the response was affirmative or not. The purpose of the rpmatch() NLS library function is to determine whether a given string is affirmative or not (with support for internationalisation).

On BSD systems, the corresponding function is found in src/bin/cp/utils.c:

```
/*
 * If the file exists and we're interactive, verify with the user.
 */
int
copy_overwrite(void)
{
	int ch, checkch;

	if (iflag) {
		(void)fprintf(stderr, "overwrite %s? ", to.p_path);
		checkch = ch = getchar();
		while (ch != '\n' && ch != EOF)
			ch = getchar();
		if (checkch != 'y' && checkch != 'Y')
			return (0);
	}
	return 1;
}
```

This is essentially the same as the non-NLS code path in the GNU code.
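This is easy to confirm empirically in the C locale, where rpmatch()'s yes-pattern is ^[yY] (a sketch using throwaway files; cp -i reads its answer from stdin, so it can be piped):

```shell
# In the C locale, any reply starting with y/Y (even "ys") is a yes;
# anything else is a no.
dir=$(mktemp -d) && cd "$dir"
mkdir d
echo old > f && cp f d/
echo new > f
printf 'ys\n' | LC_ALL=C cp -i f d/ 2>/dev/null   # "ys" -> affirmative
cat d/f                                            # now "new"
echo newer > f
printf 'q\n'  | LC_ALL=C cp -i f d/ 2>/dev/null   # "q" -> negative
cat d/f                                            # still "new"
```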
Copying files interactively: "cp: overwrite"
I use the following code as part of a much larger script:

```
mysql -u root -p << MYSQL
create user '${DOMAIN}'@'localhost' identified by '${DOMAIN}';
create database ${DOMAIN};
GRANT ALL PRIVILEGES ON ${DOMAIN}.* TO ${domain}@localhost;
MYSQL
```

As you can see, it creates an authorized, all-privileged DB user and a DB instance with the same value (the password will also be the same value):

DB user ====> ${domain}.
DB user password ====> ${domain}.
DB instance ====> ${domain}.

This is problematic because I need the password to be different. Of course, I could change the password manually from ${domain} after the whole script has finished executing, but that's not what I want. What I want is to type/paste the password directly on execution, interactively. In other words, I want being prompted for the DB user's password to be an integral part of running the script.

I've already tried the following code, which failed:

```
mysql -u root -p << MYSQL
create user '${DOMAIN}'@'localhost' identified by -p;
create database ${DOMAIN};
GRANT ALL PRIVILEGES ON ${DOMAIN}.* TO ${domain}@localhost;
MYSQL
```

What is the right way to insert a password interactively, by typing/pasting directly during script execution (instead of changing it manually after the script runs)?
Just have the user store the variable beforehand with read:

```
echo "Please enter password for user ${domain}: "; read -s psw

mysql -u root -p << MYSQL
create user '${domain}'@'localhost' identified by '${psw}';
create database ${domain};
GRANT ALL PRIVILEGES ON ${domain}.* TO ${domain}@localhost;
MYSQL
```

Here, the command read reads user input and stores it in the $psw variable. Note that just after entering the password value, you'll be prompted for the MySQL root password in order to connect to the MySQL database (the -p flag = interactive password prompt).
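A slightly more defensive variant (a sketch; get_password is an invented name): since the input is hidden, prompting twice catches typos before they become the database password.

```shell
# Prompt twice with hidden input and loop until both entries match;
# the result is left in $psw for the mysql heredoc that follows.
get_password() {
    while :; do
        IFS= read -rs -p "Password for ${1}: " psw && echo >&2
        IFS= read -rs -p "Retype password: " psw2 && echo >&2
        [ "$psw" = "$psw2" ] && break
        echo "Passwords differ, try again." >&2
    done
}
```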
Making mysql CLI ask me for a password interactively
I'm trying to bring up a terminal to interactively ask for a file, and open it using a GUI program:

```
foot /bin/sh -c 'file=`fzf`;(setsid xdg-open "$file" &)'
```

I'm using setsid, because otherwise the terminal takes down the xdg-open with it when it exits. The command above, however, doesn't work: it still exits without showing anything on the screen. However, when I add a sleep at the end, it does work:

```
foot /bin/sh -c 'file=`fzf`;(setsid xdg-open "$file" &); sleep 0.0000000001'
```

The terminal exits, but the process started by xdg-open remains running. What is going on here? Is there a cleaner way, so that I can avoid the sleep (because I assume the exact time to sleep depends on the system)? I tried using disown, but this doesn't work at all (even with the sleep).
The background process:

1. detaches from the terminal (setsid);
2. runs xdg-open.

If the terminal disappears before step 1 is finished, the whole process group receives a SIGHUP and is killed. setsid prevents the SIGHUP from reaching xdg-open, but that doesn't help if setsid (or the subshell working to invoke setsid) is killed before setsid has done its job.

The fix is to detach from SIGHUP in the foreground:

```
foot /bin/sh -c 'file=`fzf`; setsid sh -c "xdg-open \"\$1\" &" sh "$file"'
```

Another solution would be to ignore SIGHUP:

```
foot /bin/sh -c 'file=`fzf`; trap "" SIGHUP; xdg-open "$file" &'
```
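The second fix is easy to demonstrate in isolation (a sketch with sleep standing in for xdg-open): a process that ignores SIGHUP survives the signal that would otherwise kill it.

```shell
# A child that ignores SIGHUP keeps running when the signal arrives,
# just as xdg-open would survive the terminal closing.
( trap '' HUP; sleep 1; echo "survived SIGHUP" ) &
pid=$!
sleep 0.2            # give the child time to install the trap
kill -HUP "$pid"     # would normally terminate it
wait "$pid"          # returns 0: the child ran to completion
```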
Running a program outside of terminal
On my source REDHAT Linux 7 host I fire this command for passwordless login, never prompting for a password:

```
ssh -i /app/axmw/ssh_keys/id_rsa -o PasswordAuthentication=no root@<target-host> -vvv
```

This works for a list of hosts, and I can determine non-interactively (with no password prompt) whether ssh is working or not. However, on one particular host, 10.0.66.66, it prompts me for the password despite the -o PasswordAuthentication flag:

```
ssh -i /app/axmw/ssh_keys/id_rsa -o PasswordAuthentication=no root@10.0.66.66 -vvv
```

Debug output of the above ssh command is as below:

```
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug2: resolving "10.0.66.66" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 10.0.66.66 [10.0.66.66] port 22.
debug1: Connection established.
debug1: identity file /app/axmw/ssh_keys/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /app/axmw/ssh_keys/id_rsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version Sun_SSH_2.2
debug1: no match: Sun_SSH_2.2
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to 10.0.66.66:22 as 'root'
debug3: hostkeys_foreach: reading file "/home/user1/.ssh/known_hosts"
debug3: record_hostkey: found key type RSA in file /home/user1/.ssh/known_hosts:315
debug3: load_hostkeys: loaded 1 keys from 10.0.66.66
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c
debug2: host key algorithms: [email protected],rsa-sha2-512,rsa-sha2-256,ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected],zlib
debug2: compression stoc: none,[email protected],zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: gss-group1-sha1-toWM5Slw5Ew8Mqkay+al2g==,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: host key algorithms: ssh-rsa,ssh-dss
debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour
debug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour
debug2: MACs ctos: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha2-256-96,hmac-sha2-512-96,hmac-sha1-96,hmac-md5,hmac-md5-96
debug2: MACs stoc: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha2-256-96,hmac-sha2-512-96,hmac-sha1-96,hmac-md5,hmac-md5-96
debug2: compression ctos: none,zlib
debug2: compression stoc: none,zlib
debug2: languages ctos: de-DE,en-US,es-ES,fr-FR,it-IT,ja-JP,ko-KR,pt-BR,zh-CN,zh-TW,i-default
debug2: languages stoc: de-DE,en-US,es-ES,fr-FR,it-IT,ja-JP,ko-KR,pt-BR,zh-CN,zh-TW,i-default
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: diffie-hellman-group-exchange-sha256
debug1: kex: host key algorithm: ssh-rsa
debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha2-256 compression: none
debug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha2-256 compression: none
debug1: kex: diffie-hellman-group-exchange-sha256 need=32 dh_need=32
debug1: kex: diffie-hellman-group-exchange-sha256 need=32 dh_need=32
debug3: send packet: type 34
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<8192<8192) sent
debug3: receive packet: type 31
debug1: got SSH2_MSG_KEX_DH_GEX_GROUP
debug2: bits set: 2034/4095
debug3: send packet: type 32
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug3: receive packet: type 33
debug1: got SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: ssh-rsa SHA256:HCTDUmgLFN9OFvbuusL5Z9hZbUXQyZTqS0hGwkbapxA
debug3: hostkeys_foreach: reading file "/home/user1/.ssh/known_hosts"
debug3: record_hostkey: found key type RSA in file /home/user1/.ssh/known_hosts:315
debug3: load_hostkeys: loaded 1 keys from 10.0.66.66
debug1: Host '10.0.66.66' is known and matches the RSA host key.
debug1: Found key in /home/user1/.ssh/known_hosts:315
debug2: bits set: 1961/4095
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 4294967296 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 4294967296 blocks
debug2: key: /app/axmw/ssh_keys/id_rsa (0x55e36c8cde60), explicit
debug3: send packet: type 5
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic,publickey,password,keyboard-interactive
debug3: start over, passed a different list gssapi-keyex,gssapi-with-mic,publickey,password,keyboard-interactive
debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive
debug3: authmethod_lookup gssapi-keyex
debug3: remaining preferred: gssapi-with-mic,publickey,keyboard-interactive
debug3: authmethod_is_enabled gssapi-keyex
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug3: authmethod_lookup gssapi-with-mic
debug3: remaining preferred: publickey,keyboard-interactive
debug3: authmethod_is_enabled gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:2019)
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available (default cache: KEYRING:persistent:2019)
debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /app/axmw/ssh_keys/id_rsa
debug3: send_pubkey_test
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic,publickey,password,keyboard-interactive
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred:
debug3: authmethod_is_enabled keyboard-interactive
debug1: Next authentication method: keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
Password:
```

Can you please let me know how to enforce a no-password prompt for hosts like 10.0.66.66 when the PasswordAuthentication=no option is not helping?
debug1: Next authentication method: keyboard-interactive

You're being prompted for "keyboard-interactive" authentication, which is technically separate from "password" authentication. Keyboard-interactive is like password, but the server provides the prompt message. It's often used with things like RSA tokens and yubikeys.

You can disable keyboard-interactive by setting KbdInteractiveAuthentication to "no":

KbdInteractiveAuthentication
    Specifies whether to use keyboard-interactive authentication. The argument to this keyword must be yes (the default) or no.

Alternately, if you're not running this command interactively, you may want to enable batch mode:

BatchMode
    If set to yes, user interaction such as password prompts and host key confirmation requests will be disabled. This option is useful in scripts and other batch jobs where no user is present to interact with ssh(1). The argument must be yes or no (the default).
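Putting those options together, a per-host entry in ~/.ssh/config is usually the cleanest place for them. A sketch, assuming OpenSSH and that the host address and key path from the question match your setup:

```
# ~/.ssh/config -- per-host entry (adjust the host and key path)
Host 10.0.66.66
    IdentityFile /app/axmw/ssh_keys/id_rsa
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    # For scripted use, fail outright instead of ever prompting:
    BatchMode yes
```

The same settings can also be passed on the command line with -o if you prefer not to touch the config file.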
PasswordAuthentication=no flag does not work on one strange host
1,453,743,545,000
I'm using tcsh, and have to source a group .cshrc file. This file echoes some messages, which is fine for normal shells, but causes problems with programs like scp and rsync. I can see the solution taking one of a few forms, but am unable to implement any of them.

Only execute echos when appropriate

I've scoured the rsync and tcsh man pages, but I can't find any variables that are guaranteed to be set or unset when it is called from ssh/rsync/whatever. $PROMPT is the same as normal, $shlvl is 1, and nothing else looks promising.

Redirect to stderr

rsync/scp/etc don't seem to care about what comes over stderr, so if I could, I would

echo $MSG >&2

But this doesn't even work from the shell. Instead, it writes $MSG to a file named 2. When I look through the history, it seems that something (xterm? readline? tcsh?) is inserting spaces, so what was actually run was

echo $MSG > & 2

So the observed behavior makes sense given the actual input to tcsh.

Redirect to /dev/stderr

I've also tried

echo $MSG > /dev/stderr

Which works for ssh, but for scp and rsync, I get the message '/dev/stderr: Permission denied.' and the key difference seems to be where the file is symlinked. Adding

ls -l /dev/stderr /proc/self/fd/2

to the cshrc file shows

# For ssh
lrwxrwxrwx 1 root root 15 Apr 11 09:58 /dev/stderr -> /proc/self/fd/2
lrwx------ 1 <me> <mygrp> 64 May 24 14:34 /proc/self/fd/2 -> /dev/pts/6

# For scp
lrwxrwxrwx 1 root root 15 Apr 11 09:58 /dev/stderr -> /proc/self/fd/2
l-wx------ 1 <me> <mygrp> 64 May 24 15:07 /proc/self/fd/2 -> pipe:[378204842]

However, since the permission denied message comes across on stderr, the scp/rsync process is able to do its thing, so I can live with this solution, but would rather not get this spurious error message.
The idiom I use is

if ( $?prompt ) then
    # interactive commands here
endif

note that it's spelled $prompt (lowercase), not $PROMPT.

% echo $prompt
%U%m%u:%B%~%b%#
% ssh localhost 'echo $prompt'
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Password:
prompt: Undefined variable.

If $prompt is always set, then one of your startup files might be setting it unconditionally. It should go inside the if ( $?prompt ) test too, e.g.

if ( $?prompt ) then
    set prompt='%B%m%b %C3>'
    # interactive commands here
endif

Testing if the input is a terminal might work too.

if ({ test -t 0 }) then
    # interactive commands here
endif
Suppress startup messages on stdout?
1,453,743,545,000
I have been doing this for a while:

sudo su -

but it uses 'sh' rather than 'bash', which is what I'd like to do. Which command will log me in as root and get me a bash shell even if that's not the default the system gives me?
Try this command:

sudo -i bash
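Once inside, it is easy to confirm that you really ended up in bash running as root; a quick sanity check (not part of the answer above):

```shell
# Run these inside the elevated shell:
whoami                            # should print: root
echo "${BASH_VERSION:-not bash}"  # a version string confirms bash
```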
On OS X, how do I log in interactively as root starting from my normal user account?
1,453,743,545,000
I want to count number of interactively removed files and directories:

for f in /tmp/mydir/* ; do
    rm -ir "$f"
done

How to do it in most concise/elegant way? Example:

abc@def:/tmp/mydir$ tree
.
├── 1
├── 2
├── 3
├── 4
├── A
│   ├── 1
│   ├── 2
│   └── 3
├── B
│   ├── 1
│   └── 2
└── C

3 directories, 9 files

If all answers are y (yes), then I expect answer: 7 (count: 1, 2, 3, 4, A, B, C) or: 10 (count: 1, 2, 3, 4, A/1, A/2, A/3, B/1, B/2, C). Both versions are welcome. I know that I can count files/directories before and after running interactive rm, but this is not the case because in fact I iterate through files stored in text file.
simply use:

rm -vri files | wc -l

This will include dirs, too (i.e. the removal of A). It works because -v sends only each successfully removed file (or dir) to stdout, while all other output (the prompts, errors) goes to stderr. In your example the output will be 12, as there are 3 dirs and 9 files.
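To see the counting mechanism in isolation, here is a small throwaway demonstration (the file layout is invented; `yes` answers every prompt so it runs unattended):

```shell
# Three files and one directory, every rm -i prompt answered "y".
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b" "$tmpdir/c"
mkdir "$tmpdir/d"

# -v prints one line per successful removal on stdout;
# the prompts go to stderr and are therefore not counted.
count=$(yes | rm -vri "$tmpdir"/* | wc -l)
echo "$count"
```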
Count deleted files with interactive rm (rm -i)
1,453,743,545,000
I have to convert multiple video files in a folder. I have to rename each one of them but I need the script to prompt me to write a custom name for each filename. Here is what I've got so far:

First I need to remove the filename spaces which my script does with that command:

for f in *' '*; do mv "$f" "`echo $f | sed -e 's/ /_/g'`"; done

I remove the extension of the files because I don't need it for the conversion. I use this command:

for x in *; do mv $x $(echo ${x%*.*}); done

After that I need a for loop to rename each file in the pattern:

for i in * ; do mv $i $customname; done

The problem is that I need in that phase the script to prompt me what name to add in the variable $customname for each file. Something like this:

Rename file1 to: .......
Rename file2 to: .......
Just use read to read one line from STDIN, like this.

for FILE in *; do
    echo "Rename '$FILE' to:"
    read NAME
    mv "$FILE" "$NAME"
done

This example would require you to press Ctrl-C to abort when you're done, but you could add an extra if case to abort e.g. if no name was entered.

EDIT: If you want all your destination files to have the same file extension as the original file (e.g. if a file was called pogo.jpg you'd want the renamed file to keep its .jpg ending, and not have to type that manually) you can change the mv command in the above loop to:

mv "$FILE" "$NAME.${FILE##*.}"

The ${FILE##*.} stuff in the string above means:

${FILE...}  display the variable $FILE, but…
##          remove longest matching string from the beginning (just one # matches shortest)
*           match zero or more of any character
.           match a literal period

Only if the entire expression (*.) matches will it be removed, thus if there is no period in the original file name, you'll get a weird result, and the new filename will be NEWNAME.OLDNAME.
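That extra if case could look like this; a sketch (the wrapper name is mine, not from the answer) where empty input skips a file and "q" stops early:

```shell
# Prompt goes to stderr so stdout stays clean; names are read from
# stdin, so the loop also ends cleanly at end-of-input.
rename_interactive() {
    for FILE in *; do
        printf 'Rename "%s" to (empty = skip, q = quit): ' "$FILE" >&2
        IFS= read -r NAME || break
        [ "$NAME" = q ] && break
        [ -z "$NAME" ] && continue
        mv -- "$FILE" "$NAME"
    done
}
```

Run it in the directory that holds the files to rename.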
For loop for renaming files with prompt for each filename
1,453,743,545,000
I have read the following in this question:

A shell running a script is always a non-interactive shell, but the script can emulate an interactive shell by prompting the user to input values.

I don't know if the above statement is correct; I thought the following is correct:

A shell running a script, where the script allows you to input data, is an interactive shell (and not an "emulation" of an interactive shell like the quote says).
A shell running a script, where the script does not allow you to input data, is a non-interactive shell.

Which statement is correct?
A shell running a script is a non-interactive shell. A non-interactive shell can still use e.g. read to read data from standard input. If standard input is a terminal, this may provide a level of "interaction", but it does not make the shell executing the script an interactive shell. The script will be "interactive" though.

The text is confusing because it uses the word "interactive" to mean two things:

A shell that was started in order to execute a shell script is non-interactive (in the sense that it does not have job control, it does not provide a prompt by itself by default etc. etc.). This is a technical term for the type of a shell, just like "login shell" and "interactive shell".

The action of acquiring data by this same script may be "interactive" (if not reading from e.g. a pipe or a file). But then again, any command that takes data from standard input may be said to be interactive. tr 'a-z' 'A-Z' will, by itself, "interactively" turn all lowercase ASCII characters to uppercase.
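The first, technical sense is easy to observe from the shell's $- flags; a small demonstration (sh -c behaves like a script here):

```shell
# An interactive shell has "i" in $-; a shell running a command
# string or script does not, even though it can still read stdin.
sh -c 'case $- in
  *i*) echo "interactive shell" ;;
  *)   echo "non-interactive shell" ;;
esac'
# prints: non-interactive shell
```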
Confused about the meaning of an interactive and non-interactive shell when running a script
1,453,743,545,000
Is there a simple way to let the user interactively choose one of the lines of the output of lsblk -f?

NAME    FSTYPE LABEL        MOUNTPOINT
sda
├─sda1  ntfs   WINRE_DRV
├─sda2  vfat   SYSTEM_DRV
├─sda3  vfat   LRS_ESP
├─sda4
├─sda5  ntfs   Windows8_OS  /media/Win8
├─sda6  ntfs   Data         /media/Data
├─sda7  ext4   linux        /
├─sda8
├─sda9  ntfs   4gb-original
├─sda10 ntfs   PBR_DRV
└─sda11 ext4   home         /home

And let the chosen line be filled in a variable to use in the continuation of the script? I thought it would be perfect if the user could use the arrow keys to go up and down through the lines and press enter to select one. (I think I've seen this before in some configuration scripts during install.) If that's not possible, at least how could I get numbers in front of each line to let the user choose using read?
You're looking for dialog. It's a very powerful tool and uses ncurses to provide a lot of options. I suggest you read through its manpage. Specifically, you want the --menu option:

--menu text height width menu-height [ tag item ] ...

As its name suggests, a menu box is a dialog box that can be used to present a list of choices in the form of a menu for the user to choose. Choices are displayed in the order given. Each menu entry consists of a tag string and an item string. The tag gives the entry a name to distinguish it from the other entries in the menu. The item is a short description of the option that the entry represents. The user can move between the menu entries by pressing the cursor keys, the first letter of the tag as a hot-key, or the number keys 1-9. There are menu-height entries displayed in the menu at one time, but the menu will be scrolled if there are more entries than that.

On exit the tag of the chosen menu entry will be printed on dialog's output. If the "--help-button" option is given, the corresponding help text will be printed if the user selects the help button.

Unfortunately, implementing it in a sane manner using the output of a command that contains spaces is quite complex because of various quoting issues. At any rate, I didn't manage to do it, and had to resort to using eval. Nevertheless, it does work and does what you asked for:

#!/usr/bin/env bash
tmp=$(mktemp)
IFS=
eval dialog --menu \"Please choose a filesystem:\" 50 50 10 $(lsblk -f | sed -r 's/^/"/;s/$/" " "/' | tr $'\n' ' ') 2>$tmp
D=$(tr -d '│├└─' < $tmp | sed 's/^[ \t]*//' | cut -d' ' -f1)
printf "You chose:\n%s\n" "$D"

The sed just formats the output of lsblk so that there are quotes around each output line (that's dialog's "tag"), followed by a quoted space (that's dialog's "item"), and the tr replaces newlines with spaces. When the choice is read back, tr -d strips the tree-drawing characters so only the device name remains.
The result is an ncurses dialog box titled "Please choose a filesystem:", containing a scrollable inner menu with one entry per lsblk line (NAME FSTYPE LABEL MOUNTPOINT, sda, ├─sda1, ... └─sda8 [SWAP]) and OK/Cancel buttons at the bottom.
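If dialog is unavailable, the question's fallback — numbers in front of each line plus read — is exactly what bash's select builtin provides. A sketch (not from the answer above):

```shell
# select numbers each entry, shows the PS3 prompt, and loops
# until a valid number is entered.
PS3='Select a line: '
IFS=$'\n'
lines=$(lsblk -f 2>/dev/null | sed 1d)
# Only prompt when stdin is actually a terminal:
if [ -n "$lines" ] && [ -t 0 ]; then
    select line in $lines; do
        [ -n "$line" ] && break
    done
    printf 'You chose: %s\n' "$line"
fi
```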
Interactive multiple choice in a bash script
1,453,743,545,000
Specifically Up/Down for history navigation.

What I already know

I understand dash is a minimalistic, no bloat, (somewhat) strict POSIX shell. I understand the philosophy behind it, and the reason features, that have become basic in other interactive shells, are not present in dash. I've looked into the following resources:

man page
Arch Linux Wiki page

As well as a bunch of other pages and answers specific about history in dash. From my reading, I gather there is a history mechanism, as well as a vi mode. However I can't find how to map the <ESCAPE> key, or the [UP]/[DOWN] arrow keys (or any key) to any meaningful action. No bind or bindkey builtin commands either.

My goal - make dash minimally usable as an interactive shell for debugging purposes.

Questions

Is there a default mapping?
Is there a way to manipulate the keyboard mapping in dash?
Is there another source of information that can shed some light on interactive usability of dash?
I ran into this problem on macOS and ended up finding that the simplest option was to use rlwrap (https://github.com/hanslub42/rlwrap). In my particular case, in iTerm, I set up a profile for dash with the following command to execute it whenever I want to use the dash shell (-l is for login mode; -E is for emacs command line editor vs -V for vi):

/opt/homebrew/bin/rlwrap /bin/dash -l -E
Any way to bind keyboard to dash (Debian Almquist Shell)?
1,453,743,545,000
Is there a way for setting an environment variable in bash such that its value is not passed directly after = but is prompted for separately? For example, something akin to

$ TEST=<

(syntax does not actually work) instead of

$ TEST=test
As Stephen answered in the comments, shells that adhere to the POSIX spec will have a way to read input into a variable. bash includes several extra flags to the read built-in command, none of which you need for your situation:

read TEST

will leave your terminal waiting for you to enter a line of input, which will be assigned to the TEST variable.
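If you do want a prompt (and, for secrets, no echo), two of those extra bash flags come in handy; a small sketch wrapping them (the helper name is invented):

```shell
# Prompt on stderr and read one raw line into the named variable.
prompt_var() {
    # $1 = variable name, $2 = prompt text
    printf '%s: ' "$2" >&2
    IFS= read -r "$1"
}
```

Use it as prompt_var TEST 'Value for TEST' && export TEST; add -s to the read for password-style hidden input.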
Setting environment variables by prompt instead of in command line
1,453,743,545,000
I'm piping output of an interactive command (ghci) through sed-based script to add some colors:

ghci | colorize.sh

where colorize.sh is something like:

#!/bin/bash
trap '' INT
sed '...some pattern...'

Now if I hit Ctrl-C I want only ghci to receive it (it does not terminate), and I want sed to thrive (or perhaps get restarted?) and still process the output of ghci. This script does not work and I don't know why.
First, let me start out by saying that this doesn't answer your question, but I hope might help clarify what's happening. I suspect that what you think is happening might not really be happening. Consider this simple example:

# The 'writer' reads input from standard input and
# echos it to standard output. It handles SIGINT by
# printing INT to standard output.
$ cat writer
#!/bin/bash
function foo() {
    echo "INT"
}
trap foo INT
while read x; do echo $x; done

# The 'reader' reads input from standard input and pipes what is
# read to 'sed', which converts it to upper case. It ignores SIGINT.
# When it receives EOF on standard input, it writes "done".
$ cat reader
#!/bin/bash
trap '' INT
cat | sed -e 's/\(.*\)/\U\1/'
echo "done"

Now, when I run both, piping the output of writer into reader:

$ ./writer | ./reader
hello
HELLO
^CINT
^CINT
^CINT
world
WORLD
^D
done
$

The writer script reads from standard input and writes to the standard output – the pipe. The reader script reads from standard input — the pipe — and writes to standard output. When I hit Ctrl-C, the writer writes "INT"; the reader ignores the signal (multiple times). Eventually, I enter Ctrl-D (EOF), and the writer terminates. When the reader receives the EOF, it terminates and writes "done".

Note that the reader ignores the SIGINT more than once, and that neither the pipe nor sed is interrupted when the writer handles the SIGINT.
How to trap INT signal infinitely many times?
1,453,743,545,000
I want the color of my zsh prompt to be decided based on whether I'm inside a tmux session or not. In bash, it can be done by checking the value of $TMUX, but I can't find an equivalent method in zsh. Is it possible in zsh?
In zsh, the prompt_subst option is off by default. If you want to use variable substitutions in your prompt, turn it on.

setopt prompt_subst
PS1='$foo'

For $TMUX, though, you don't need this. The value doesn't change during the session, so you can initialize PS1 when the shell starts.

setopt prompt_subst
if (($+TMUX)); then
  PS1='[tmux:${TMUX_PANE//\%/%%}] %# '
else
  PS1='[not tmux] %# '
fi

Note that prompt expansion happens after variable substitution; this is why the percent signs in the variable's value need to be protected.
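To get the color the question asks about, the same branch works with zsh's %F prompt color escapes; a sketch for ~/.zshrc (the colors are arbitrary):

```
if (($+TMUX)); then
    PS1='%F{green}%m:%~ %#%f '   # inside tmux
else
    PS1='%F{blue}%m:%~ %#%f '    # outside tmux
fi
```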
Format zsh prompt according to the value of an environment variable
1,453,743,545,000
A lot of my workflow involves using a sudo interactive session (sudo -i) as a service user that is able to run certain things that my personal username can't. When I do this, I like to preserve my PS1 variable and some other little bash niceties. As I can't modify the service user's .bashrc, I have a script set up in my home directory to export this as I like it. For example, my workflow might look like:

ssh me@remote
sudo -i -u service_user
. /home/me/ps1.sh
*service user commands here*
exit

I'd like to roll the sudo command and sourcing the PS1 script into one command. My thought was to use something like sudo -i -u service_user -c sh ". /home/me/ps1.sh", which will pass the command in - the problem is that the session will immediately exit after the command runs, rather than hanging in interactive mode. Short of requesting the admin allow the PS1 variable to be preserved through sudo, is there anything I can do?
You could use --rcfile to tell bash to read your ps1.sh file instead of the service_user's .bashrc:

sudo -i -u service_user bash --rcfile /home/me/ps1.sh

The execution flow then will be something like sudo running bash -lc 'bash --rcfile /home/me/ps1.sh' as service_user. If you want to source the service_user's .bashrc, you can do so in the ps1.sh file.

The -i in sudo -i is not "interactive", it's a login shell (i.e., a shell is started which will process the target user's .profile or equivalent and run the given command). That's why the execution has a bash -l wrapping the actual command. If the service_user's .profile is irrelevant, you can just do:

sudo -u service_user bash --rcfile /home/me/ps1.sh
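A hypothetical /home/me/ps1.sh used as that rcfile might look like this, pulling in the service user's own settings first as suggested above:

```
# /home/me/ps1.sh -- used via: bash --rcfile /home/me/ps1.sh
# Optionally source the service user's normal setup first:
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"

# Then the personal bits to keep:
PS1='\u@\h:\w\$ '
alias ll='ls -l'
```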
Is it possible to begin a sudo interactive session and also provide an initial command?
1,453,743,545,000
I'm looking for a command that invokes readline or similar, primed with the current $PWD, to let the user edit the current directory, then cd to the edited value. E.g.

> cd ~/a/b/c/d
> pwd
/home/alice/a/b/c/d

Then run the proposed icd command (for "interactive cd", inspired by imv in renameutils). It prompts the user as follows:

> icd
icd> /home/alice/a/b/c/d

Then the user can, e.g. press Alt-b, Alt-b, Alt-t, resulting in:

icd> /home/alice/a/c/b/d

(Alt-t transposing b and c) Upon pressing Enter, the icd command changes the current directory to /home/alice/a/c/b/d. Ideally icd would have some autocompletion. Maybe even visual indication of whether the current value is an existing/valid directory. This can nearly be done in zsh by typing

> cd `pwd`

then pressing Tab. But a command like icd would save keystrokes.

Related: Interactive cd (directory browser)
For bash and any other shell supporting readline you might be able to use this function

icd() { local a; read -ei "${1:-$PWD}" -p "$FUNCNAME> " a && cd "$a"; }

Usage

icd       # Starts editing with $PWD
icd /root # Starts editing with /root
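For zsh, which the question mentions, the builtin editor vared gives a similar effect without readline; a sketch (assumed, not from the answer) for your ~/.zshrc:

```
icd() {
    local a=${1:-$PWD}
    vared -p 'icd> ' a && cd "$a"
}
```

vared pre-fills the edit buffer with the variable's current value, so you get the same "edit the PWD, then Enter" behaviour.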
sh: is there a command to interactively edit the PWD?
1,453,743,545,000
I only want to determine from my POSIX shell script, if it is running interactively, but for some reason, the following function:

running_interactively() {
    printf '%s' ${-} | grep -F i > /dev/null 2>&1
}

returns false even if I run the script in terminal. Am I doing it wrong, or is the definition of interactive script somehow different from my plain idea of running the script by a user in terminal? Snippet of the code:

#!/bin/sh
set -u

running_interactively() {
    # echo $- returns only u
    printf '%s' ${-} | grep i > /dev/null 2>&1
}

print_error_and_exit() {
    # redirect all output from this function to standard error stream
    if running_interactively
    then
        exec >&2
    else
        echo wrong again, smart ass
    fi
    ...
}

print_error_and_exit someArgs
A shell script is, unless it's sourced by an interactive shell, very seldom run in an interactive shell environment. This means that $- would not include an i. What you could check is to see whether standard input is connected to a terminal or not. This is done using the -t test with an argument of 0 (the file descriptor of the standard input stream):

running_interactively () { [ -t 0 ]; }

This assumes that by "is running interactively" you mean "able to read input directly from a terminal". An additional test on file descriptor 2 (standard error) would also be possible as a test of being able to do full interaction with the user in a script. User interaction mainly happens on standard input (user input) and standard error (prompts, diagnostic messages, etc.):

running_interactively () { [ -t 0 ] && [ -t 2 ]; }

However, testing on file descriptor 1 (standard output) would fail if the output of the script was redirected or piped.
Confused about determining if a shell script is running interactively
1,453,743,545,000
Here is my loop

ls -ltrd * | head -n -3 | awk '{print $NF}' | while read y; do
    rm -iR $y
done

output:

rm: descend into directory oct_temp17?
rm: descend into directory oct_temp18?
rm: descend into directory oct_temp19?
....

It does not prompt for user confirmation which rm -i should ideally when run manually. I'm using bash shell on Linux 3.10. I am naive to Unix / Linux. Can you please suggest how I can make the script ask me for confirmation for every folder from the ls output?
1. Why your example did not work as expected

rm's prompts need STDIN to receive your feedback. In your example you used STDIN to pipe a list to the while loop though, thus rm was getting the answers to its prompts from your ls/head/awk commands, instead of the user. The solution is to not use STDIN for providing the list to the loop - e.g.:

for y in $(ls -ltrd * | head -n -3 | awk '{print $NF}'); do
    rm -iR $y
done

2. Safer way to do this

Be ready for filenames containing spaces (you don't need awk to get the filename; you can just tell ls to only print the filename in the 1st place):

IFS=$'\n'
for y in $(ls -1qtrd * | head -n -3); do
    rm -iR "$y"
done

3. An easier way to do this

As Archemar pointed out: You don't even need a loop (as long as there are no spaces in filenames).

rm -iR $(ls -1qtrd * | head -n -3)
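The stdin clash in point 1 is easy to reproduce without rm at all; an invented two-line example:

```shell
# Inside a `... | while read` pipeline, anything in the body that
# reads stdin consumes the piped list -- just like rm -i's prompt did.
printf 'a\nb\n' | while IFS= read -r y; do
    IFS= read -r answer   # stands in for rm -i reading its y/n
    echo "item=$y stolen-answer=$answer"
done
# prints: item=a stolen-answer=b
```

Besides the for loop, another commonly used fix is to point the prompt back at the terminal inside the loop: rm -iR "$y" < /dev/tty.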
rm -iR does not work inside a loop
1,345,896,409,000
Is there such a thing as list of available D-Bus services? I've stumbled upon a few, like those provided by NetworkManager, Rhythmbox, Skype, HAL. I wonder if I can find a rather complete list of provided services/interfaces.
On QT setups (short commands and clean, human readable output) you can run:

qdbus

which will list the services available on the session bus, and

qdbus --system

which will list the services available on the system bus.

On any setup you can use dbus-send:

dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames

Just like qdbus, if --session or no message bus is specified, dbus will send to the login session message bus. So the above will list the services available on the session bus. Use --system if you want instead to use the system wide message bus:

dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames
A list of available D-Bus services
1,345,896,409,000
I heard that FIFOs are named pipes. And they have exactly the same semantics. On the other hand, I think Unix domain socket is quite similar to pipe (although I've never made use of it). So I wonder if they all refer to the same implementation in Linux kernel. Any idea?
UNIX domain sockets and FIFO may share some part of their implementation but they are conceptually very different. FIFO functions at a very low level. One process writes bytes into the pipe and another one reads from it. A UNIX domain socket has similar behaviour as a TCP/IP or UDP/IP socket. A socket is bidirectional and can be used by a lot of processes simultaneously. A process can accept many connections on the same socket and attend several clients simultaneously. The kernel delivers a new file descriptor each time connect(2) or accept(2) is called on the socket. The packets will always go to the right process. On a FIFO, this would be impossible. For bidirectional communication, you need two FIFOs, and you need a pair of FIFOs for each of your clients. There is no way of writing or reading in a selective way, because they are a much more primitive way to communicate. Anonymous pipes and FIFOs are very similar. The difference is that anonymous pipes don't exist as files on the filesystem so no process can open(2) it. They are used by processes that share them by another method. If a process creates pipes and then performs, for example, a fork(2), its child will inherit its file descriptors and, among them, the pipe. (File descriptors to named pipes/FIFOs can also be passed in the same way.) The UNIX domain sockets, anonymous pipes and FIFOs are similar in the fact they provide interprocess communication using file descriptors, where the kernel handles the system calls and abstracts the mechanism.
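The "named rendezvous" property of FIFOs is easy to see in a few lines of shell; a toy sketch (the temporary path is invented):

```shell
# Two otherwise unrelated processes meet through a filesystem name.
fifo=$(mktemp -u)          # a fresh pathname, not yet created
mkfifo "$fifo"

( echo "hello over the fifo" > "$fifo" ) &   # writer
IFS= read -r msg < "$fifo"                   # reader
echo "$msg"                                  # prints: hello over the fifo

wait
rm -f "$fifo"
```

Opening a FIFO blocks until both a reader and a writer exist, which is what synchronizes the two sides here; a socket, by contrast, would let a server accept many such peers through one name.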
Are FIFO, pipe & Unix domain socket the same thing in Linux kernel?
1,345,896,409,000
This is a follow-up question to A list of available DBus services. The following python code will list all available DBus services.

import dbus

for service in dbus.SystemBus().list_names():
    print(service)

How do we list out the object paths under the services in python? It is ok if the answer does not involve python although it is preferred. I am using Ubuntu 14.04
QT setups provide the most convenient way to do it, via qdbus:

qdbus --system org.freedesktop.UPower

prints

/
/org
/org/freedesktop
/org/freedesktop/UPower
/org/freedesktop/UPower/Wakeups
/org/freedesktop/UPower/devices
/org/freedesktop/UPower/devices/line_power_ADP0
/org/freedesktop/UPower/devices/DisplayDevice
/org/freedesktop/UPower/devices/battery_BAT0

As to the python way... per the official docs (under standard interfaces):

There are some standard interfaces that may be useful across various D-Bus applications.

org.freedesktop.DBus.Introspectable

This interface has one method:

org.freedesktop.DBus.Introspectable.Introspect (out STRING xml_data)

Objects instances may implement Introspect which returns an XML description of the object, including its interfaces (with signals and methods), objects below it in the object path tree, and its properties.

So here's a very simplistic example that should get you started. It uses xml.etree.ElementTree and dbus:

#!/usr/bin/env python
import dbus
from xml.etree import ElementTree

def rec_intro(bus, service, object_path):
    print(object_path)
    obj = bus.get_object(service, object_path)
    iface = dbus.Interface(obj, 'org.freedesktop.DBus.Introspectable')
    xml_string = iface.Introspect()
    for child in ElementTree.fromstring(xml_string):
        if child.tag == 'node':
            if object_path == '/':
                object_path = ''
            new_path = '/'.join((object_path, child.attrib['name']))
            rec_intro(bus, service, new_path)

bus = dbus.SystemBus()
rec_intro(bus, 'org.freedesktop.UPower', '/org/freedesktop/UPower')

It recursively introspects org.freedesktop.UPower starting from e.g. /org/freedesktop/UPower and prints all object paths (node names):

/org/freedesktop/UPower
/org/freedesktop/UPower/Wakeups
/org/freedesktop/UPower/devices
/org/freedesktop/UPower/devices/DisplayDevice
/org/freedesktop/UPower/devices/battery_BAT0
/org/freedesktop/UPower/devices/line_power_ADP0

which is pretty much what you'd get if you used d-feet (not that you'd need it).
How to list all object paths under a dbus service?
1,345,896,409,000
We can check the details of system V message queue with the help of ipcscommand. Is there any command to check POSIX message queue in Linux?
There is no command I know of but there exists a libc function call which can get the statistics:

man 3 mq_getattr

mq_getattr() returns an mq_attr structure in the buffer pointed by attr. This structure is defined as:

struct mq_attr {
    long mq_flags;    /* Flags: 0 or O_NONBLOCK */
    long mq_maxmsg;   /* Max. # of messages on queue */
    long mq_msgsize;  /* Max. message size (bytes) */
    long mq_curmsgs;  /* # of messages currently in queue */
};
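On Linux specifically there is also a filesystem view of POSIX queues, documented in mq_overview(7); whether it is already mounted depends on the distribution. Illustrative commands (the queue name here is invented, and mounting requires root):

```
# Mount the mqueue pseudo-filesystem if it isn't mounted yet:
mkdir -p /dev/mqueue
mount -t mqueue none /dev/mqueue

# Each queue shows up as a file; reading it prints its statistics,
# e.g. QSIZE:129 NOTIFY:2 SIGNO:0 NOTIFY_PID:8260
ls /dev/mqueue
cat /dev/mqueue/myqueue
```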
linux command to check POSIX message queue
1,345,896,409,000
Passing a password on command line (to a child process started from my program) is known to be insecure (because it can be seen even by other users with ps command). Is it OK to pass it as an environment variable instead? What else can I use to pass it? (Except of environment variable) the easiest solution seems to use a pipe, but this easiest solution is not easy. I program in Perl.
Process arguments are visible to all users, but the environment is only visible to the same user (at least on Linux, and I think on every modern unix variant). So passing a password through an environment variable is safe. If someone can read your environment variables, they can execute processes as you, so it's game over already. The contents of the environment is at some risk of leaking indirectly, for example if you run ps to investigate something and accidentally copy-paste the result including confidential environment variables in a public place. Another risk is that you pass the environment variable to a program that doesn't need it (including children of the process that needs the password) and that program exposes its environment variables because it didn't expect them to be confidential. How bad these risks of secondary leakage are depends on what the process with the password does (how long does it run? does it run subprocesses?). It's easier to ensure that the password won't leak accidentally by passing it through a channel that is not designed to be eavesdropped, such as a pipe. This is pretty easy to do on the sending side. For example, if you have the password in a shell variable, you can just do echo "$password" | theprogram if theprogram expects the password on its standard input. Note that this is safe because echo is a builtin; it would not be safe with an external command since the argument would be exposed in ps output. Another way to achieve the same effect is with a here document: theprogram <<EOF $password EOF Some programs that require a password can be told to read it from a specific file descriptor. You can use a file descriptor other than standard input if you need standard input for something else. 
For example, with gpg: get-encrypted-data | gpg --passphrase-fd 3 --decrypt … 3<<EOP >decrypted-data $password EOP If the program can't be told to read from a file descriptor but can be told to read from a file, you can tell it to read from a file descriptor by using a file name like `/dev/fd/3. theprogram --password-from-file=/dev/fd/3 3<<EOF $password EOF In ksh, bash or zsh, you can do this more concisely through process substitution. theprogram --password-from-file=<(echo "$password")
How to pass a password to a child process?
1,345,896,409,000
For intercepting/analyzing network traffic, we have a utility called Wireshark. Do we have a similar utility for intercepting all the interprocess communication between any two processes in Unix/Linux? I have created some processes in memory and I need to profile how they communicate with each other.
This depends a lot on the communication mechanism. At the most transparent end of the spectrum, processes can communicate using internet sockets (i.e. IP). Then wireshark or tcpdump can show all traffic by pointing it at the loopback interface. At an intermediate level, traffic on pipes and unix sockets can be observed with truss/strace/trace/..., the Swiss army chainsaw of system tracing. This can slow down the processes significantly, however, so it may not be suitable for profiling. At the most opaque end of the spectrum, there's shared memory. The basic operating principle of shared memory is that accesses are completely transparent in each involved process, you only need system calls to set up shared memory regions. Tracing these memory accesses from the outside would be hard, especially if you need the observation not to perturb the timing. You can try tools like the Linux trace toolkit (requires a kernel patch) and see if you can extract useful information; it's the kind of area where I'd expect Solaris to have a better tool (but I have no knowledge of it). If you have the source, your best option may well be to add tracing statements to key library functions. This may be achievable with LD_PRELOAD tricks even if you don't have the (whole) source, as long as you have enough understanding of the control flow of the part of the program that accesses the shared memory.
Is there a way to intercept interprocess communication in Unix/Linux?
1,345,896,409,000
In the list of signals defined on a Linux system, there are two signals designated as user-defined signals (SIGUSR1 and SIGUSR2). Other signals are raised or caught in specific situations, but the SIGUSRs are left for applications to use. So why only two such signals?
Historically, Unix had only these two signals, but modern systems have the real-time signals SIGRTMIN...SIGRTMAX. Due to the wacky and unportable semantics of the signal APIs, there is almost no use case where signals would be preferable over other communication mechanisms like pipes. Therefore, allocating a new signal number has never been seen as necessary.
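The real-time range is easy to inspect; a Linux-only sketch (the exact numbers vary by platform and libc):

```python
import signal

# On Linux with glibc, a couple of signals below SIGRTMIN are reserved for
# the threading implementation, so SIGRTMIN is usually 34 rather than 32.
rt_count = signal.SIGRTMAX - signal.SIGRTMIN + 1
print(signal.SIGRTMIN, signal.SIGRTMAX, rt_count, "real-time signals")
```

So applications that really do need more than SIGUSR1/SIGUSR2 typically pick numbers in the SIGRTMIN+n range.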
Why there are only two user defined signals?
1,345,896,409,000
When using a MySQL client (e.g. mysql) how can I determine whether it connected to the server using a Unix socket file or by using TCP/IP?
Finding the transport

Try netstat -ln | grep mysql — if you have shell access, the output shows how the server is listening.

On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number.

If you'd like to know the connection type from within the mysql CLI, use the \s (status) command.

    mysql> \s

The output will contain a line like one of the following (on Unix):

    Connection: 127.0.0.1 via TCP/IP

or

    Connection: Localhost via UNIX socket

Forcing a particular transport

To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option. For example:

    shell> mysql --host=127.0.0.1
    shell> mysql --protocol=TCP

The --protocol={TCP|SOCKET|PIPE|MEMORY} option explicitly specifies a protocol to use for connecting to the server. It is useful when the other connection parameters normally would cause a protocol to be used other than the one you want. For example, connections on Unix to localhost are made using a Unix socket file by default:

    shell> mysql --host=localhost

To force a TCP/IP connection to be used instead, specify a --protocol option:

    shell> mysql --host=localhost --protocol=TCP

Protocol types:

    TCP: TCP/IP connection to local or remote server. Available on all platforms.
    SOCKET: Unix socket file connection to local server. Available on Unix only.
    PIPE: Named-pipe connection to local or remote server. Available on Windows only.
    MEMORY: Shared-memory connection to local server. Available on Windows only.
A Unix socket file connection is faster than TCP/IP, but can be used only when connecting to a server on the same computer.
How can I determine the connection method used by a MySQL client?
1,345,896,409,000
The special variable $RANDOM has a new value every time it's accessed. In this respect, it is reminiscent of the "generator" objects found in some languages. Is there a way to implement something like this in zsh?

I tried to do this with named pipes, but I did not find a way to extract items from the fifo in a controlled manner without killing the "generator" process. For example:

    % mkfifo /tmp/ints
    % (index=0
    while ( true ) do
      echo $index
      index=$(( index + 1 ))
    done) > /tmp/ints &
    [1] 16309
    % head -1 /tmp/ints
    0
    [1]  + broken pipe  ( index=0 ; while ( true; ); do; echo $index; index=$(( ...

Is there some other way to implement such a generator-type object in zsh?

EDIT: This does not work:

    #!/usr/bin/env zsh
    FIFO=/tmp/fifo-$$
    mkfifo $FIFO
    INDEX=0
    while true; do echo $(( ++INDEX )) > $FIFO; done &
    cat $FIFO

If I put the above in a script and run it, the output is rarely the expected single line

    1

rather, it usually consists of several integers; e.g.

    1
    2
    3
    4
    5

The number of lines produced varies from one run to the next.

EDIT2: As jimmij pointed out, changing echo to /bin/echo takes care of the problem.
ksh93 has disciplines which are typically used for this kind of thing. With zsh, you could hijack the dynamic named directory feature:

Define for instance:

    zsh_directory_name() {
      case $1 in
        (n)
          case $2 in
            (incr) reply=($((++incr)))
          esac
      esac
    }

And then you can use ~[incr] to get an incremented $incr each time:

    $ echo ~[incr]
    1
    $ echo ~[incr] ~[incr]
    2 3

Your approach fails because in head -1 /tmp/ints, head opens the fifo, reads a full buffer, prints one line, and then closes it. Once closed, the writing end sees a broken pipe.

Instead, you could either do:

    $ fifo=~/.generators/incr
    $ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
    $ seq infinity > $fifo &
    $ exec 3< $fifo
    $ IFS= read -rneu3
    1
    $ IFS= read -rneu3
    2

There, we leave the reading end open on fd 3, and read reads one byte at a time, not a full buffer, to be sure to read exactly one line (up to the newline character).

Or you could do:

    $ fifo=~/.generators/incr
    $ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
    $ while true; do echo $((++incr)) > $fifo; done &
    $ cat $fifo
    1
    $ cat $fifo
    2

That time, we instantiate a pipe for every value. That allows returning data containing any arbitrary number of lines. However, in that case, as soon as cat opens the fifo, the echo and the loop are unblocked, so more echos could be run by the time cat reads the content and closes the pipe (causing the next echo to instantiate a new pipe).
A workaround could be to add some delay, for instance by running an external echo as suggested by @jimmij, or to add some sleep, but that would still not be very robust. Or you could recreate the named pipe after each echo:

    while mkfifo $fifo && echo $((++incr)) > $fifo && rm -f $fifo
    do : nothing
    done &

That still leaves short windows where the pipe doesn't exist (between the unlink() done by rm and the mknod() done by mkfifo) causing cat to fail, and very short windows where the pipe has been instantiated but no process will ever write again to it (between the write() and the close() done by echo) causing cat to return nothing, and short windows where the named pipe still exists but nothing will ever open it for writing (between the close() done by echo and the unlink() done by rm) where cat will hang.

You could remove some of those windows by doing it like:

    fifo=~/.generators/incr
    (
      umask 077
      mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo &&
      while
        mkfifo $fifo.new && {
          mv $fifo.new $fifo && echo $((++incr))
        } > $fifo
      do : nothing
      done
    ) &

That way, the only problem is if you run several cats at the same time (they all open the fifo before our writing loop is ready to open it for writing), in which case they will share the echo output.

I would also advise against creating fixed-name, world-readable fifos (or any file for that matter) in world-writable directories like /tmp unless it's a service to be exposed to all users on the system.
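The "leave the reading end open and pull one value at a time" variant translates outside the shell too. A hedged Python sketch with a forked generator process writing one value per line over a named pipe:

```python
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "incr")
os.mkfifo(fifo)

pid = os.fork()
if pid == 0:                                   # generator process
    with open(fifo, "w", buffering=1) as f:    # line-buffered writer
        for n in range(1, 4):
            f.write(f"{n}\n")
    os._exit(0)

gen = open(fifo)       # keep the reading end open; pull one line per "call"
first = gen.readline()
second = gen.readline()
print(first, second)   # 1 then 2
gen.close()
os.waitpid(pid, 0)
```

Because the reader holds its end open and consumes exactly one line per readline(), there is no broken-pipe problem and no race over how many values get flushed.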
How to implement "generators" like $RANDOM?
1,345,896,409,000
As far as I understand, one end of a pipe has both read and write fds and the other end also has read and write fds. That's why, when we are writing using fd[1], we close the read end, e.g. fd[0], on the same side of the pipe, and when we are reading from the second end using fd[0], we close fd[1] of that end. Am I correct?
Yes, a pipe made with pipe() has two file descriptors: fd[0] for reading and fd[1] for writing. You don't strictly have to close the unused ends, but it is good practice: a reader only sees end-of-file once every copy of the write end has been closed. (On some systems, such as Solaris and FreeBSD, pipes are bidirectional; on Linux fd[0] is read-only and fd[1] is write-only.)

Edit: in the comments you want to know how this relates to ls | less, so I'll explain that too:

Your shell has three open file descriptors: 0 (stdin), 1 (stdout) and 2 (stderr). When a shell executes a command it does something like this (I've simplified it a bit):

    pid = fork();
    if(pid == 0) {
        /* I am the child */
        execve(...);   /* Whatever the user asked for */
    } else {
        waitpid(pid);  /* Wait for child to complete */
    }

File descriptors 0, 1 and 2 are inherited by the child, so input/output works as expected. If you do ls | less, something slightly different happens to do the redirection:

    int pfd[2];
    pipe(pfd);
    pid1 = fork();
    if(pid1 == 0) {
        /* This is ls, we need to remap stdout to the pipe.
           We don't care about reading from the pipe */
        close(pfd[0]);
        close(1);
        dup2(pfd[1], 1);
        execve(...);
    } else {
        pid2 = fork();
        if(pid2 == 0) {
            /* This is less, it reads from the pipe */
            close(pfd[1]);
            close(0);
            dup2(pfd[0], 0);
            execve(...);
        } else {
            waitpid(pid1);
            waitpid(pid2);
        }
    }

So the shell creates the pipe, forks, and just before executing remaps the pipe to the stdin or stdout of the child processes, making data flow from process one to process two. As shell pipes only carry data in one direction, each process uses one end of the pipe and closes the other (the parent closes both ends too, after the children have duplicated them onto stdin or stdout).
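The fork/dup2/execve plumbing above can be reproduced almost line for line in Python. A sketch using echo | tr in place of ls | less, plus a second pipe back to the parent so the result can be inspected:

```python
import os

# Pipe 1: echo -> tr.  Pipe 2: tr -> parent, so we can check the result.
r1, w1 = os.pipe()
r2, w2 = os.pipe()

pid1 = os.fork()
if pid1 == 0:                        # child 1: the "ls" side
    os.close(r1); os.close(r2); os.close(w2)
    os.dup2(w1, 1); os.close(w1)     # stdout -> write end of pipe 1
    os.execvp("echo", ["echo", "hello"])

pid2 = os.fork()
if pid2 == 0:                        # child 2: the "less" side
    os.close(w1); os.close(r2)
    os.dup2(r1, 0); os.close(r1)     # stdin  <- read end of pipe 1
    os.dup2(w2, 1); os.close(w2)     # stdout -> pipe 2, back to the parent
    os.execvp("tr", ["tr", "a-z", "A-Z"])

# Parent: close every pipe end it doesn't use, or tr would never see EOF.
os.close(r1); os.close(w1); os.close(w2)
result = os.read(r2, 1024)
os.waitpid(pid1, 0)
os.waitpid(pid2, 0)
print(result)  # b'HELLO\n'
```

Note how closing the unused ends in the parent is what lets tr see end-of-file on its stdin once echo exits.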
Does one end of a pipe have both read and write fd?
1,345,896,409,000
In a Debian lenny server running postgresql, I noticed that a lack of semaphore arrays is preventing Apache from starting up. Looking at the limits, I see 128 arrays used out of 128 arrays maximum, for semaphores. I know this is the problem because it happens on a semget call. How do I increase the number of arrays? PS: I need Apache running to make use of phppgadmin.
If you read the manpage for semget, in the Notes section you'll notice: System wide maximum number of semaphore sets: policy dependent (on Linux, this limit can be read and modified via the fourth field of /proc/sys/kernel/sem). On my system, cat /proc/sys/kernel/sem reports: 250 32000 32 128 So do that on your system, and then echo it back after increasing the last number: printf '250\t32000\t32\t200' >/proc/sys/kernel/sem (There are tab characters between the numbers, so I'm using printf to generate them.)
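The same four values can also be read programmatically on Linux; a small sketch (field order as described in the semget notes quoted above: SEMMSL, SEMMNS, SEMOPM, SEMMNI):

```python
# /proc/sys/kernel/sem holds: max semaphores per set (SEMMSL), system-wide
# max semaphores (SEMMNS), max ops per semop() call (SEMOPM), and the max
# number of semaphore sets (SEMMNI) -- the limit Apache was hitting.
with open("/proc/sys/kernel/sem") as f:
    semmsl, semmns, semopm, semmni = map(int, f.read().split())
print("semaphore sets limit (SEMMNI):", semmni)
```

A system that shows 128 sets in use out of a SEMMNI of 128 is exactly the situation described in the question.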
How do I increase the number of semaphore arrays in Linux?
1,345,896,409,000
Is there a way of implementing the publish / subscribe pattern from the command line without using a server process? This need only work on one machine. The main thing I want to avoid by not having a server process is having to configure a machine to use these tools. I'm also quite keen on not having to deal with the possibility of my server process dying.

This might look something like:

    # client 1
    subscribe name | while read line; do echo $line; done

    # client 2
    subscribe name | while read line; do echo $line; done

    # server
    echo message | publish name

Related links

POSIX ipc provides a serverless message queue and there are command-line clients for it (1) (2) (3). This could be used together with some sort of state storage to implement the above.

ZMQ provides a protocol for pub / sub communication. There are command-line tools analogous to nc for using ZMQ, such as zmcat. These could be used to set up a minimal command-line pub/sub pattern with a server.

Linux provides another IPC mechanism called named pipes (c.f. mkfifo). I don't know what the intended behaviour with multiple consumers is, but some initial experimentation suggests that each message is only received by one of the consumers.
All subscribers need to be notified of new data in a way that doesn't affect other subscribers, and the server must not have to keep track of what data subscribers have received. This makes a FIFO useless for this purpose. Ironically, a regular file will do exactly what you want, because file descriptors on regular files keep track of file changes independently. You can combine this with overwriting, which ensures all changes are published before a new overwrite occurs, meaning you are only storing one message.

    touch pubsub

    tail -f pubsub | while read line; do echo $line; done
    tail -f pubsub | while read line; do echo $line; done

    echo "message" | cat > pubsub

You will get "file truncated" on standard error, which is expected behavior, but if you don't want to see it, add 2> /dev/null.

tail is actually doing everything read and echo do, but it's written like that because I assume you want to incorporate it in a script.
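The key property, independent read offsets on a regular file, is easy to demonstrate directly. A minimal Python sketch with two "subscribers":

```python
import tempfile

# Two subscribers each hold their own open file description on one regular
# file, so each keeps an independent read offset and both receive the
# published line -- unlike a FIFO, where each byte goes to a single reader.
path = tempfile.NamedTemporaryFile(delete=False).name
sub1 = open(path)
sub2 = open(path)

with open(path, "a") as pub:        # the publisher appends a message
    pub.write("message\n")

seen1 = sub1.readline()
seen2 = sub2.readline()
print(seen1, seen2)                 # both subscribers see "message"
```

tail -f adds the polling-for-new-data part on top of exactly this mechanism.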
Command line pub / sub without a server?
1,345,896,409,000
I'm trying to access a process' stdio streams from outside its parent process. I've found the /proc/[pid]/fd directory, but when I try $ cat /proc/[pid]/fd/1 I get a No such file or device error. I know for certain that it exists, as Dolphin (file explorer) shows it. I also happened to notice the file explorer lists it as a socket and trying to read from it as suggested here produces a similar error. This appeared odd to me as stdio streams are typically pipes, rather than sockets, so I'm not sure what's up here. I'd like to point out also that the processes are started by the same user and attempting to access it with sudo didn't work either. I apologise if this question appears noobish, but I'd sincerely appreciate some guidance - perhaps there's a better way of accessing the stdio pipes?
tl;dr; As of 2020, you cannot do that (or anything similar) if /proc/<pid>/fd/<fd> is a socket. The stdin, stdout, stderr of a process may be any kind of file, not necessarily pipes, regular files, etc. They can also be sockets. On Linux, the /proc/<pid>/fd/<fd> are a special kind of symbolic links which allow you to open from the scratch the actual file a file descriptor refers to, and do it even if the file has been removed, or it didn't ever have any presence in any file system at all (e.g. a file created with memfd_create(2)). But sockets are a notable exception, and they cannot be opened that way (nor is it obvious at all how that could be implemented: would an open() on /proc/<pid>/fd/<fd> create another connection to the server if that fd is a connected socket? what if the socket is explicitly bound to a local port?). Recent versions of Linux kernels have introduced a new system call, pidfd_getfd(2), which allows you to "steal" a file descriptor from another process, in the same way you were able to pass it via Unix sockets, but without the collaboration of the victim process. But that hasn't yet made its way in most Linux distros.
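The pipe-versus-socket asymmetry is easy to observe; a Linux-only sketch (it relies on the /proc fd links described above):

```python
import os
import socket

# A pipe fd *can* be reopened through the /proc "magic" symlink...
r, w = os.pipe()
via_proc = open(f"/proc/self/fd/{w}", "wb", buffering=0)
via_proc.write(b"hi")
piped = os.read(r, 2)
print(piped)  # b'hi'

# ...but a socket fd cannot: open() on it fails with ENXIO,
# the "No such device or address" error from the question.
a, b = socket.socketpair()
try:
    open(f"/proc/self/fd/{a.fileno()}", "rb")
    sock_err = None
except OSError as e:
    sock_err = e
print(sock_err)
```

The same distinction applies to /proc/&lt;pid&gt;/fd/&lt;fd&gt; for another process, subject to the usual ptrace-style permission checks.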
/proc/[pid]/fd/[0, 1, 2]: No such file or device - even though file exists
1,345,896,409,000
I know that with ipcs(1) command, one can monitor System V message queues, shared memory and semaphores, but how do I monitor POSIX message queues, shared memory and semaphores. For POSIX message queues, I can mount a pseudo filesystem, as stated in mq_overview(7). Thank you in advance for any help.
I don't believe there are any commands that allow you to monitor POSIX message queues specifically. As you mentioned, all of the details are exposed via a pseudo filesystem, usually mounted under /dev/mqueue. Once you've done that, you can then use file management commands like ls, rm, cat, etc. to inspect and manage the queue details.
ipcs(1) POSIX equivalent to System V
1,345,896,409,000
I have two cooperating programs. One program just writes its output to a file and the other one then reads from the file and spits the data out for the front end to work with. I have been reading about named pipes and domain sockets, but I am having trouble seeing what advantages they offer to just using a temp file. It just seems like a formal way of communicating to me.
If you need to save the intermediate file after the processing is done, then inter-process communication (such as through a pipe or socket) is not particularly valuable.  Similarly, if you need to run the two programs at vastly different times, you should just do it the way you're doing it now. Back when Unix was created, disks were very small, and it was common for a rather benign command to consume all the free space in a file system. For example, some_command_that_produces_a_lot_of_output | grep some_very_obscure_string produces output that's much smaller than the size of the output of the first command (i.e., the size of the intermediate file that would be created if you ran the commands the way you're running your programs). Data flowing through pipes and sockets is (probably) not written to disk at all.  Therefore, these IPC solutions may be more efficient (faster) than disk-based solutions. more secure than disk-based solutions, if the intermediate data are more sensitive than the end result.
Advantages of using named pipes and sockets rather than temporary files
1,345,896,409,000
I'd like to know how to create a terminal device to simulate a piece of hardware connected through a serial port. Basically, a tty device with a certain baud rate that can be read from and written to between two processes. From what I understand, a pseudo-terminal is what I'm looking for, and makedev can apparently make one. I've also found the following set of instructions:

    su to root
    cd /dev
    mkdir pty
    mknod pty/m0 c 2 0
    mknod pty/s0 c 3 0
    ln -s pty/m0 ttyp0
    ln -s pty/s0 ptyp0
    chmod a+w pty/m0 pty/s0

Is there a better way of making a pseudo-terminal, or is this pretty much the standard way of making one in the shell?
That's probably how pty device files get created, but you don't want to do that whenever you want a pty. Any given machine usually has a complement of pty device files already created. Pseudo TTYs are fairly OS specific and you don't mention what you want to do this on. For a modern linux, I'd take a look at openpty(3). You can find working example code in the OpenSSH source code, sshpty.c. You will probably have to find code that calls pty_allocate() to fully understand.
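On a modern Linux you rarely need mknod at all; higher-level runtimes expose the same openpty(3) facility. A sketch using Python's wrapper:

```python
import os

# os.openpty() wraps openpty(3): it returns a connected master/slave pair,
# and the slave side looks like a real terminal to applications.
master, slave = os.openpty()
print(os.ttyname(slave))          # e.g. /dev/pts/4
is_tty = os.isatty(slave)

# Whatever is written to the master appears as terminal input on the slave
# (a fresh pty starts out in canonical mode, so input is delivered by line).
os.write(master, b"hi\n")
data = os.read(slave, 16)
print(is_tty, data)
```

The process simulating the serial hardware would sit on the master side, while the program under test opens the slave device name like any other tty.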
Creating a terminal device for interprocess communication
1,345,896,409,000
If file descriptors are specific to each process (i.e. two processes may use the same file descriptor id to refer to different open files) then how is it possible to share transfer file descriptors (e.g. for shared mmaps) over sockets etc? Does it rely on the kernel being mapped to the same numerical address range under each process?
When you share a file descriptor over a socket, the kernel mediates. You need to prepare data using the cmsg(3) macros, send it using sendmsg(2) and receive it using recvmsg(2). The kernel is involved in the latter two operations, and it handles the conversion from a file descriptor to whatever data it needs to transmit the file descriptor, and making the file descriptor available in the receiving process. How can same fd in different processes point to the same file? provides useful background. The sending process sends a file descriptor which means something in relation to its (private) file descriptor table; the kernel knows what that maps to in the system-wide open file table, and creates a new entry as necessary in the receiving process’ file descriptor table.
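Python's socket module wraps this kernel mediation; a sketch using send_fds/recv_fds (Python 3.9+), which handle the cmsg/sendmsg/recvmsg details of SCM_RIGHTS:

```python
import os
import socket

# Pass the read end of a pipe across a Unix socket pair.
parent_sock, child_sock = socket.socketpair()
r, w = os.pipe()

socket.send_fds(parent_sock, [b"fd coming"], [r])
msg, fds, flags, addr = socket.recv_fds(child_sock, 32, 1)
received_fd = fds[0]

# The receiver's fd *number* may differ from the sender's, but the kernel
# made both file-descriptor-table entries point at the same open file
# description in the system-wide table.
os.write(w, b"hello")
data = os.read(received_fd, 5)
print(msg, data)
```

Numerical addresses play no role: the receiver just gets a fresh slot in its own table that the kernel wired to the same underlying object.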
Sharing file descriptors
1,345,896,409,000
When we run this with a POSIX shell, $ cmd0 | cmd1 STDOUT of cmd0 is piped to STDIN of cmd1. Q: On top of this, how can I also pipe STDOUT of cmd1 to STDIN of cmd0? Is it mandatory to use redirect from/into a named pipe (FIFO) ? I don't like named pipes very much because they occupy some filesystem paths and I need to worry about name collisions. Or do I have to call pipe(2) via C, Python or some general purpose programming languages? (Both cmd0 and cmd1 do some network I/O, so they won't block each other forever.)
On systems with bi-directional pipes (not Linux), you can do:

    cmd0 <&1 | cmd1 >&0

On Linux, you can do:

    : | { cmd1 | cmd2; } > /dev/fd/0

That works because on Linux (and Cygwin, but generally not other systems) /dev/fd/x, where x is a fd to a pipe (named or not), acts like a named pipe; that is, opening it in read mode gets you the reading end and in write mode gets you the writing end.

With the yash shell and its x>>|y pipeline redirection operator:

    { cmd0 | cmd1; } >>|0

With shells with coproc support:

zsh:

    coproc cmd0
    cmd1 <&p >&p

ksh:

    cmd0 |&
    cmd1 <&p >&p

bash 4+:

    coproc cmd0
    cmd1 <&"${COPROC[0]}" >&"${COPROC[1]}"

Dedicated tool approaches (shamelessly copied from answers on this very similar question):

Using pipexec:

    pipexec [ A /path/to/cmd0 ] \
            [ B /path/to/cmd1 ] \
            '{A:1>B:0}' '{A:0>B:1}'

Or using dpipe:

    dpipe cmd0 = cmd1

Or using socat:

    socat EXEC:cmd0 EXEC:cmd1,nofork                 # using socketpairs
    socat EXEC:cmd0,commtype=pipes EXEC:cmd1,nofork  # using pipes

I don't know about python, but it's also relatively easily done in perl:

    perl -e 'pipe STDOUT,STDIN; exec "cmd0 | cmd1"'

You can always also resort to named pipes, but finding unique names for them and some safe directory to create them in, restricting access to them (so other processes can't open them and interfere) and cleanup afterwards become additional problems which are hard to overcome reliably and portably.

In any case, beware of deadlocks!
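Where shell support is lacking, a socketpair gives the same mutual plumbing without any named pipe. A hedged Python sketch, with cat standing in for one of the two commands as a trivial echo peer:

```python
import socket
import subprocess

# A socketpair yields two bidirectional endpoints; handing one end to an
# external command as both its stdin and stdout wires it up both ways.
ours, theirs = socket.socketpair()
p = subprocess.Popen(["cat"], stdin=theirs, stdout=theirs)
theirs.close()                      # keep only the child's copy alive

ours.sendall(b"ping\n")             # talk to the peer...
reply = ours.recv(16)               # ...and hear back on the same endpoint
print(reply)
ours.close()                        # EOF for cat
p.wait()
```

The parent plays the role of cmd0 here; a real setup would hand the other endpoint to a second subprocess the same way. The deadlock warning above still applies: both peers must avoid blocking writes that the other side never drains.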
Shell: mutual piping of STDIN/STDOUT of two commands [duplicate]
1,345,896,409,000
When I look at journalctl, it tells me the PID and the program name (or service name?) of a log entry. Then I wondered: logs are created by other processes, so how does systemd-journald know the PID of those processes, when processes may only write raw strings to the Unix domain socket that systemd-journald is listening on?

Also, does systemd-journald always use the same technique to detect the PID of a piece of log data, even when processes produce log entries using functions like sd_journal_sendv()? Is there any documentation I should read about this?

I read JdeBP's answer and know systemd-journald listens on a Unix domain socket, but even if it can know the peer socket address of whoever sends the log message, how does it know the PID? What if that sending socket is opened by many non-parent-child processes?
It receives the pid via the SCM_CREDENTIALS ancillary data on the unix socket with recvmsg(), see unix(7). The credentials don't have to be sent explicitly. Example:

    $ cc -Wall scm_cred.c -o scm_cred
    $ ./scm_cred
    scm_cred: received from 10114: pid=10114 uid=2000 gid=2000

Processes with CAP_SYS_ADMIN can send whatever pid they want via SCM_CREDENTIALS; in the case of systemd-journald, this means they can fake entries as if logged by another process:

    # cc -Wall fake.c -o fake
    # setcap CAP_SYS_ADMIN+ep fake
    $ ./fake `pgrep -f /usr/sbin/sshd`
    # journalctl --no-pager -n 1
    ...
    Dec 29 11:04:57 debin sshd[419]: fake log message from 14202

    # rm fake
    # lsb_release -d
    Description:    Debian GNU/Linux 9.6 (stretch)

systemd-journald handles datagrams and credentials sent via ancillary data in the server_process_datagram() function from journald-server.c.

Both the syslog(3) standard function from libc and sd_journal_sendv() from libsystemd will send their data via a SOCK_DGRAM socket by default, and getsockopt(SO_PEERCRED) does not work on datagram (connectionless) sockets. Neither systemd-journald nor rsyslogd accept SOCK_STREAM connections on /dev/log.
scm_cred.c

    #define _GNU_SOURCE 1
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <err.h>

    int main(void){
        int fd[2];
        pid_t pid;
        if(socketpair(AF_LOCAL, SOCK_DGRAM, 0, fd)) err(1, "socketpair");
        if((pid = fork()) == -1) err(1, "fork");
        if(pid){        /* parent */
            int on = 1;
            union {
                struct cmsghdr h;
                char data[CMSG_SPACE(sizeof(struct ucred))];
            } buf;
            struct msghdr m = {0};
            struct ucred *uc = (struct ucred*)CMSG_DATA(&buf.h);
            m.msg_control = &buf;
            m.msg_controllen = sizeof buf;
            if(setsockopt(fd[0], SOL_SOCKET, SO_PASSCRED, &on, sizeof on))
                err(1, "setsockopt");
            if(recvmsg(fd[0], &m, 0) == -1) err(1, "recvmsg");
            warnx("received from %d: pid=%d uid=%d gid=%d",
                pid, uc->pid, uc->uid, uc->gid);
        }else           /* child */
            write(fd[1], 0, 0);
        return 0;
    }

fake.c

    #define _GNU_SOURCE 1
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <err.h>

    int main(int ac, char **av){
        union {
            struct cmsghdr h;
            char data[CMSG_SPACE(sizeof(struct ucred))];
        } cm;
        int fd;
        char buf[256];
        struct ucred *uc = (struct ucred*)CMSG_DATA(&cm.h);
        struct msghdr m = {0};
        struct sockaddr_un ua = {AF_UNIX, "/dev/log"};
        struct iovec iov = {buf};
        if((fd = socket(AF_LOCAL, SOCK_DGRAM, 0)) == -1) err(1, "socket");
        if(connect(fd, (struct sockaddr*)&ua, SUN_LEN(&ua))) err(1, "connect");
        m.msg_control = &cm;
        m.msg_controllen = cm.h.cmsg_len = CMSG_LEN(sizeof(struct ucred));
        cm.h.cmsg_level = SOL_SOCKET;
        cm.h.cmsg_type = SCM_CREDENTIALS;
        uc->pid = ac > 1 ? atoi(av[1]) : getpid();
        uc->uid = ac > 2 ? atoi(av[2]) : geteuid();
        uc->gid = ac > 3 ? atoi(av[3]) : getegid();
        iov.iov_len = snprintf(buf, sizeof buf, "<13>%s from %d",
            ac > 4 ? av[4] : "fake log message", getpid());
        if(iov.iov_len >= sizeof buf) errx(1, "message too long");
        m.msg_iov = &iov;
        m.msg_iovlen = 1;
        if(sendmsg(fd, &m, 0) == -1) err(1, "sendmsg");
        return 0;
    }
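The SO_PASSCRED/SCM_CREDENTIALS handshake from scm_cred.c can also be reproduced from Python; a Linux-only sketch:

```python
import os
import socket
import struct

# With SO_PASSCRED set on the receiving end, the kernel attaches an
# SCM_CREDENTIALS message (pid, uid, gid of the sender) to each datagram,
# without the sender doing anything special -- which is exactly how
# journald learns the PID.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_PASSCRED, 1)

sender.send(b"hello")
ucred_size = struct.calcsize("3i")   # struct ucred: pid_t, uid_t, gid_t
msg, ancdata, flags, addr = receiver.recvmsg(
    16, socket.CMSG_SPACE(ucred_size))
level, ctype, cdata = ancdata[0]
pid, uid, gid = struct.unpack("3i", cdata)
print(pid, uid, gid)   # our own pid/uid/gid, filled in by the kernel
```

Since both ends live in the same process here, the credentials come back as our own pid/uid/gid, verified by the kernel rather than claimed by the sender.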
How does journald know the PID of a process that produces log data?
1,345,896,409,000
Why are pseudo-terminals a separate feature on Unix-like systems? What makes them superior to a pair of pipes or FIFOs for implementing terminal emulators?
Terminals are different from other forms of I/O, and a terminal emulator needs to present itself as a terminal. A terminal (including a pseudoterminal) has certain attributes, such as its line length and supported control sequences. Programs can ask for these, for example, in general ls will determine whether its output is going to a terminal, and then adjust its colors and tabulation to match the terminal. You can test this: ls | cat will not give you separate columns. A pseudoterminal is used to pass the appropriate values for the terminal emulator. As another example, programs like sudo and ssh will, for security reasons, read the password from the terminal directly, you can't pipe them in. Terminals are used to control processes. If you press ^C, the terminal will send SIGINT to its foreground process. This is the terminal's job. This means that, in order for things like ^C to work, there must be a terminal. Similarly, hanging up on a terminal (or on a modern system, closing the terminal emulator's window) will send SIGHUP to all processes associated with it. The pseudoterminal handles this, a pair of pipes can't. In general, all processes except daemons have a controlling terminal. You can use ps to tell you which processes belong to which terminals.
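The distinction programs like ls test for is directly observable; a small sketch:

```python
import os

# A pipe end is not a terminal; a pseudoterminal slave is -- which is why
# "ls | cat" loses its columns while ls on a pty keeps them.
pipe_r, pipe_w = os.pipe()
pty_master, pty_slave = os.openpty()
print(os.isatty(pipe_r), os.isatty(pty_slave))  # False True
```

Under the hood, isatty() works by issuing a terminal-only ioctl, which succeeds on the pty slave and fails on the pipe.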
Pseudo-terminals vs. a pair of pipes
1,345,896,409,000
Despite reading through tons of DBus tutorials, I still struggle to understand the whole concept. In my opinion this was one of the best explanations so far: http://telepathy.freedesktop.org/doc/book/sect.basics.dbus.html

The reason to use DBus is that I want to exchange data between different programs. In my opinion, it would suffice to provide a server or, as it is named in Figure 2-2, a service. This service provides several methods over an interface which I share with the client. The client then invokes a method and gets an answer.

So what am I missing? Why is there a need for additional objects? I guess it's just to stick to the Java conventions of objects and classes: each object represents an instance. I would really like someone to confirm that.

What is the benefit of the first system over the second?
Not by convention but to facilitate high-level bindings. Native Objects and Object Paths Your programming framework probably defines what an "object" is like; usually with a base class. For example: java.lang.Object, GObject, QObject, python's base Object, or whatever. Let's call this a native object. The low-level D-Bus protocol, ..., does not care about native objects. However, it provides a concept called an object path. The idea of an object path is that higher-level bindings can name native object instances, and allow remote applications to refer to them. Edit: Probably you can just use the API and the message bus daemon built in libdbus in order to avoid the use of objects so you will end with your communication approach of a client that invokes a method and gets an answer. However be aware that libdbus is intended to be a low-level backend for the higher level bindings so much of the libdbus API is only useful for binding implementation.
What's the sense of DBus objects?
1,345,896,409,000
I have 3 different programs that I would like to intercommunicate with each other. I have an engine that needs to communicate with 2 bots and the bots with the engine. The engine is written in C++ and the bots can be written in any language. The engine writes output to stdout and both bots need to read the output. Depending on the output from the engine, one of the bots will write a response to stdout (it's a turn-based game). Here is a crude diagram attempting to illustrate what I mean.

My current approach is like the following:

    mkfifo fifo0 fifo1 fifo2
    ./engine | tee fifo1 fifo2 < fifo0 &
    ./bot1 > fifo0 < fifo1 &
    ./bot2 > fifo0 < fifo2

I read this post on circular I/O which suggests using tail and tee but I am not sure how to make that work with my requirements. Is it possible to do this with pipes? How would this be done with pipes?
You've got the < fifo0 in the wrong place. You want it to be engine's stdin, not tee's:

    mkfifo fifo0 fifo1 fifo2
    < fifo0 ./engine | tee fifo1 fifo2 &
    ./bot1 > fifo0 < fifo1 &
    ./bot2 > fifo0 < fifo2

Note that many utilities start to buffer their output when it doesn't go to a tty device (here a pipe, or possibly a socket pair if the shell is ksh93). On GNU systems and FreeBSD, you may try to use the stdbuf command to disable that buffering:

    mkfifo fifo0 fifo1 fifo2
    < fifo0 stdbuf -o0 ./engine | tee fifo1 fifo2 &
    stdbuf -o0 ./bot1 > fifo0 < fifo1 &
    stdbuf -o0 ./bot2 > fifo0 < fifo2
Many-to-one two-way communication of separate programs
1,345,896,409,000
Is RabbitMQ, for inter-process communication, like pipes and named pipes? How does RabbitMQ compare to named pipes, leaving distributed systems aside? (RabbitMQ, for those who haven't encountered it, is an open source middleware: an enterprise message broker that speaks AMQP.)
Is RabbitMQ, for inter process communication, like pipes and named pipes? No. That's not the best way to comprehend RabbitMQ, or indeed message-passing broker-based middlewares in general. If you are looking for a paradigm to hang your metaphorical hat on in order to start understanding RabbitMQ and its ilk, don't think of low-level IPC at all. Think about Unix mail. Programs generate messages. They have headers and bodies. They even have (optional) message IDs, MIME content types, timestamps, and reply-to addresses. They get sent to a broker. The broker routes them, and according to the routing topology they get dropped into queues, from which they are retrieved by other programs. There are fan-out exchanges which create multiple copies of messages to be sent onwards. There are even dead-letter boxes. It's not quite mail, of course, once one gets into the details. The routing topology is under client program control, using the same client-server protocol as is used for sending and receiving messages. The DNS is, largely, not involved. It's not store-and-forward. Fan-out exchanges are only very roughly like mailing lists. Client programs can use the protocol to purge queues (https://askubuntu.com/a/707523/43344) and to set TTLs on messages. There are various degrees of durability and persistence. The reception of messages can involve handshaking, positive and negative, and programmatically forced redelivery. There's a security paradigm for controlling which clients have what access to what parts of the infrastructure, allowing administrators to (say) restrict where clients logged in with user credentials "JdeBP" can send messages to. But mail is a good first approximation for understanding the concepts, far better than starting by comparing to IPC or RPC subsystems, anyway.
are named pipes (mkfifo) the predecessor of RabbitMQ? [closed]
1,345,896,409,000
I grepped the ps output for dbus with the following output: 102 742 0.0 0.0 4044 1480 ? Ss Apr16 27:13 dbus-daemon --system --fork --activation=upstart xralf 2551 0.0 0.0 4076 212 ? Ss Apr16 0:14 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session dwm xralf 2554 0.0 0.0 3936 224 ? S Apr16 0:00 /usr/bin/dbus-launch --exit-with-session dwm xralf 2555 0.0 0.0 4248 1684 ? Ss Apr16 0:07 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session root 9970 0.0 0.0 3944 476 pts/5 S May08 0:00 dbus-launch --autolaunch f6ddc5d5c514b5fb84725db7000007cd --binary-syntax --close-stderr root 9971 0.0 0.0 3268 308 ? Ss May08 0:00 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session Everything was run automatically. Could you explain what is happening in the system and if it's secure? Notice especially username 102, //bin/dbus-daemon.
You didn't provide much information about your system, though. A DBus system usually has two buses: a system bus and a session bus. A session bus is started per user (in your case for root and xralf): lines 3 to 6. Line 2 is a dbus service that was requested by your window manager. A system bus is needed for system-wide message exchange. This is your first line, started under UID 102. The reason for the UID to be shown instead of a user name could be that the user name is longer than 8 characters. You could check your /etc/passwd to look up this UID. This is what it looks like on my system: message+ 924 1 0 13:31 ? 00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation and a corresponding /etc/passwd entry: messagebus:x:106:110::/var/run/dbus:/bin/false dbus-launch is a utility to start a message bus. In more recent distributions this is done by systemd.
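A side note on the UID lookup: you do not have to parse /etc/passwd by hand; Python's pwd module queries the same account database. A small sketch, where the fallback mirrors what ps prints when no matching entry is found:

```python
import pwd

def name_for_uid(uid):
    """Resolve a numeric UID the way ps would, falling back to the number."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return str(uid)   # no passwd entry, so it is shown numerically

owner = name_for_uid(0)   # UID 0 exists on every system
print(owner)
```

On a system like the one above, name_for_uid(102) would return the daemon account's name, or the bare "102" if there is no entry.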
dbus-launch and dbus-daemon - what's happening
1,345,896,409,000
This is a combination of a programming and a Linux question, but I think it fits better here. I am writing an application that works with ipcs (shared memory segments), and after each run I check whether any ipcs are left over using the bash command ipcs. I noticed a lot more than I created, so I thought they were part of the system software. I decided to examine each one and see where it is connected. After closing the process each one is connected to, I noticed that one of the processes attached to a shared memory segment is the system clock. By system clock I mean the clock that tells the time at the bottom right of the panel (or top, depending on how you set things up) and not the CPU clock. Why, out of all the processes that the system runs, does the clock need a shared memory segment?
By system clock I mean the clock that tells the time down right of the panel "System clock" generally refers to the clock maintained by the kernel; applications such as date and GUI clocks such as the one you refer to make calls to it like this. Why, out of all the processes that the system runs, does the clock need a shared memory segment? There's probably dozens of different GUI and DE based clocks available for linux so there's no way to say specifically. This implies it involves multiple processes which is certainly not necessary for a GUI clock, but if it is integrated with the desktop, who knows -- it could also possess some functionality you haven't discovered yet. You have a lot of choices, IPC wise, when programming. What method you use depends on the exact requirements but also perhaps some personal preference. I'm more a sockets n' serialization kinda guy, but shared mem is very popular; when I run ipcs -a I get a few dozen entries under "Shared Memory Segments". Interestingly, if I run it on a headless system I get none, so presumably those are all related to GUI applications. Glib and D-bus may have facilities built on shared mem used by such programs.
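For a feel of how a clock and a panel might share a segment, here is a minimal sketch of the POSIX flavour of shared memory using Python's multiprocessing.shared_memory. Note this creates a segment under /dev/shm rather than a System V one, so it shows up with ls /dev/shm, not in ipcs; the string and size here are made up for illustration:

```python
from multiprocessing import shared_memory

# "Clock" side: create a named segment and publish the display string.
shm = shared_memory.SharedMemory(create=True, size=32)
shm.buf[:5] = b"12:34"

# A "panel" process would attach to the same segment by name with
#   shared_memory.SharedMemory(name=shm.name)
# and see the identical bytes. Here we simply read them back.
shown = bytes(shm.buf[:5]).decode()
print(shown)

shm.close()
shm.unlink()   # like ipcrm: remove the segment once nobody needs it
```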
Why does the clock need a shared memory segment?
1,345,896,409,000
What is the best way of checking the current status of different types of IPC in Linux (including UIDs)? I want to inspect named pipes, half-duplex pipes, Unix domain sockets, and signals. I know that for System V we have ipcs.
lsof(8) is probably your best option. Lesser options include ipcs(1), fuser(1), netstat(8), ps(1), and rummaging through /proc.
Linux - check IPC stats
1,345,896,409,000
From The Linux Programming Interface, under "data transfer" under "communication", we have "byte stream", "message" and "pseudoterminal". Does a pseudoterminal belong under byte stream instead, just as a pipe does? If not, why not?
Consider the various modes a pseudoterminal can be in: in raw mode, it would behave much like a byte stream, but in cooked mode, it becomes more message-like.
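You can watch that difference with Python's pty module: in canonical ("cooked") mode the line discipline collects input into whole lines, which is the message-like behaviour, while tty.setraw() would turn the descriptor back into a plain byte stream. A sketch (echo is disabled only to keep the output clean):

```python
import os
import pty
import termios

master, slave = pty.openpty()

# Stay in canonical (cooked) mode, but switch echo off so the data
# appears only once.
attrs = termios.tcgetattr(slave)
attrs[3] &= ~termios.ECHO
termios.tcsetattr(slave, termios.TCSANOW, attrs)

os.write(master, b"hello\n")   # "keyboard" input typed through the master
line = os.read(slave, 100)     # canonical mode hands over the complete line
print(line)
```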
Does pseudoterminal transfer byte stream or message?
1,345,896,409,000
I wrote a simple bash script that reads out meta information about currently playing songs via playerctl. Right now the script is just unnecessarily polling the information. I would like the script to only be invoked when the song changes. The actual player I am using is mostly Spotify. Is there any way I can use signals to make this happen? Maybe intercept signals Spotify is sending? I am not (only) interested in the solution to my problem. I would really like to learn more about the topic in general. How do I find out what signals are sent by processes, and how can I intercept and use them, etc.? If that is even a possibility.
playerctl's GitHub page has an example of polling for events with Python. The API might give you additional info.
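Recent playerctl versions also have a follow mode (playerctl metadata --follow; check playerctl --help for your version), which prints a line whenever the track changes, so the script can simply block on the next line of output instead of polling. A sketch of that pattern, with a printf stand-in replacing the real playerctl command so the snippet runs anywhere:

```python
import subprocess

# Stand-in for ["playerctl", "metadata", "title", "--follow"]:
cmd = ["printf", "song one\nsong two\n"]

events = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:      # blocks until the next event arrives
        events.append(line.strip())
        print("now playing:", line.strip())
```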
How do I turn my simple script into a non polling version
1,345,896,409,000
As I understand it, the Linux Security Module (LSM) framework has many hooks which are callbacks for security modules to register functions performing additional security checks before security-sensitive operations. Most of the time, these hooks are placed before the access to an internal data structure like file. One thing that I don't understand is why there are hooks in System V IPC APIs but not in the corresponding POSIX APIs. For example, there is security_ipc_permission which is a hook describe in include/linux/lsm_hooks.h as "affecting all System V IPC operations" and several more hooks specialized for each APIs such as the message queues but no counterpart for the POSIX APIs. Manual investigation reveals that the System V hooks are not used in the POSIX functions (as expected, given the description). But in the case of POSIX message queues and System V message queues for example, while they don't have the same interface, they provide roughly the same functionality. So my question is: what is the rationale for not putting LSM hooks in POSIX functions?
I should have posted this earlier, but I got some elements of an answer from Stephen Smalley, SELinux developer and maintainer, in a conversation on the LSM mailing list, in July 2016. There is no longer an archive for this mailing list for that period, due to MARC stopping archiving that mailing list and Gmane going out of business, but I was able to dig up this email from my backups: [Laurent Georget:] Hi, this series adds LSM hooks in some system calls. I propose them as a RFC because I fail to understand why these LSM hooks are not already present but maybe there is a very good reason, and I'd like to hear it. The first patch adds hooks in mq_timedsend and mq_timedreceive. mq_timedsend and mq_timedreceive are the two system calls used to manipulate POSIX message queues. Although their corresponding SysV counterparts msgrcv and msgsnd have LSM hooks, mq_timedsend and mq_timedreceive have not. The second patch adds calls to the security_file_permission in system calls vmsplice, splice and tee, and adds a new LSM hook security_splice_pipe_to_pipe. These three system calls leverage the internal implementation of pipes in the Linux kernel to perform zero-copy data transfer between pipes, or between a pipe and a file. Although conceptually, any operation done by vmsplice, splice or tee could be performed by sequences of read and write (which do have LSM hooks), there are no calls to LSM hooks in these three system calls. [Stephen Smalley:] I think it is a combination of: these system calls were added after LSM was introduced and thus were not part of the original analysis and instrumentation, POSIX mqueues are at least partly covered by the existing file-based access controls due to being implemented via a pseudo filesystem and therefore it is unclear if we in fact need any additional hooks, Revalidation of access to non-pipe files during splice() is already covered by rw_verify_area() calling security_file_permission() IIUC.
And revalidation support is principally to support revocation of access to files upon relabeling or policy change. Not saying that you are wrong to propose new hooks, but the above may help provide context. About the revalidation part: [Laurent Georget:] So your argument would be that pipes are not subject to revalidation like regular files, and as such, no validation is necessary after their opening succeeds? This makes sense but if this is the general consensus among the security modules developers, this means that information flow control is not something which is expected to be implementable with LSM. [Stephen Smalley:] No, I wouldn't argue that in general; it just hasn't been a major concern to date. So I'm not opposed to adding hooks, although I think we probably ought to have one for pipe creation too so that we can cache the information in the same manner that we do for file open. We also have other problems wrt revalidation even for files, e.g. for memory-mapped files or async i/o. So, here are the reasons why there are no hooks in POSIX message queues (according to Stephen Smalley). LSM was implemented before POSIX message queues. Message queues already benefit from the hooks on inodes. For example, to open a message queue, you would have to go through the security_inode_open hook. Hooks in individual read and write-like operations are only provided for revalidation, and revalidation is mostly useful for regular files, which are permanent storage of information (this argument applies to message queues as well as other strange cases like splice).
Why are there no LSM hooks in the POSIX IPC APIs?
1,345,896,409,000
There are empty files like this in my /tmp directory: qipc_sharedmemory_soliddiskinfomemac5ffa537fd8798875c98e190df289da7e047c05 qipc_systemsem_soliddiskinfomemac5ffa537fd8798875c98e190df289da7e047c05 qipc_systemsem_soliddiskinfosem92d02dca794587d686de797d715edb3b58944546 What are they?
These appear to be files that Qt creates during the course of Inter-process communication. The file names indicate that shared memory and semaphores were used.
What are the files in /tmp that start with "qipc"?
1,429,325,339,000
When I run ipcs -m I get the info below ------ Shared Memory Segments -------- key shmid owner perms bytes nattch status 0x00000 38699014 user 700 8125440 2 dest 0x00072 2064391 root 444 1 0 0x00000 38830088 user 700 299112 2 dest 0x00000 38862857 user 700 181720 2 dest 0x00000 38895626 user 700 244776 2 dest 0x00000 38928395 user 700 156816 2 dest What I want is to get the IDs of the processes which use this shared memory. How do I get them?
ipcs -m -p shows the shmid and the PID of the process that created it (the "cpid"). It also shows a "last operator" or "lpid" - I don't know what that is (the man page doesn't say so I'd having to dig deeper into the docs or source code to find out, and that's crazy talk!). For example, on one of my systems (which happens to be running postgres and apache, amongst other things): $ ipcs -m -p ------ Shared Memory Creator/Last-op PIDs -------- shmid owner cpid lpid 36 postgres 3155864 2367086 38 root 14452 2362481 (apache, pid 14452, is shown with owner root. It gets started as root, but changes to www-data when it pre-forks other processes). We can use awk to extract the creator PID, and pipe that into xargs -n 1 pstree -p to show the tree of PIDs beneath those PIDs. NOTE: pstree only takes a maximum of one PID argument at a time, so we have to use xargs -n 1 to run pstree once per pid. For example (using pstree -A for ASCII output. It'll probably look slightly prettier on your terminal without -A, using the default line-drawing characters): $ ipcs -m -p | awk '$3 ~ /^[0-9]+$/ {print $3}' | xargs -n 1 pstree -A -p postgres(3155864)-+-postgres(1610942) |-postgres(1620056) |-postgres(1761109) |-postgres(1831225) |-postgres(1931537) |-postgres(2123512) |-postgres(2284745) |-postgres(2386392) |-postgres(3155867) |-postgres(3155868) |-postgres(3155869) |-postgres(3155870) |-postgres(3155871) |-postgres(3155872) `-postgres(3159321) apache2(14452)-+-apache2(141263) |-apache2(762459) |-apache2(856005) |-apache2(856006) |-apache2(856008) |-apache2(856009) |-apache2(856010) |-apache2(856438) |-apache2(1369957) |-apache2(1777646) |-apache2(1887781) `-apache2(3746760) If required, this can be post-processed (with awk or whatever) to extract only the PIDs from within the parentheses. BTW, pstree has various other useful options (including -u to show uid transitions, and -a to show the full command line) to change what it outputs and how it formats it. 
If you need to show the pstree for both the cpids and the lpids, use something like: $ ipcs -m -p | awk '$3 ~ /^[0-9]+$/ {printf "%s\n%s\n", $3, $4}' | xargs -n 1 pstree -p
List processes associated with shared memory
1,429,325,339,000
When you have a Linux application that depends on a library (dynamically-linked), how does the application communicate with the library? What inter-process communication method is used?
None. Because a dynamically linked library lives in the same process' memory space - and thus, no second process exists with which you need to do IPC.
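This is easy to demonstrate: call into a dynamically linked library and compare its answer with the process's own state. Nothing is passed between processes; it is an ordinary in-process function call. A sketch with ctypes and the C library:

```python
import ctypes
import os

libc = ctypes.CDLL(None)   # handle to the C library already loaded into us

# The library's code runs inside *this* process, so its getpid() is
# simply our own PID: no second process, no IPC.
same = libc.getpid() == os.getpid()
print(same)
```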
What IPC is used between an application and a library in Linux?
1,429,325,339,000
I am trying my hand on Linux Signals. Where I have created a scenario mentioned below: Initially block all SIGINT signals using sigprocmask(). If sender send SIGUSR1 signal then unblock SIGINT for rest of the process life. However first step is successfully implemented but not able to unblock (or change) process mask using sigprocmask(). What am I doing wrong? #include<stdio.h> #include<signal.h> #include<stdlib.h> sigset_t block_list, unblock_list; void sigint_handler(int sig) { printf("Ouch!!\n"); } void sigusr1_handler(int sig) { sigemptyset(&unblock_list); sigprocmask(SIG_SETMASK, &unblock_list, NULL); } int main(int argc, char **argv) { int count; signal(SIGINT, &sigint_handler); signal(SIGUSR1, &sigusr1_handler); sigemptyset(&block_list); sigaddset(&block_list, SIGINT); sigprocmask(SIG_SETMASK, &block_list, NULL); for(count=1; ;++count) { printf("Process id: %ld\t%d\n", (long)getpid(), count); sleep(4); } exit(EXIT_SUCCESS); } $kill -n SIGINT <pid> $kill -n SIGUSER1 <pid> //This call should unblock sigint_handler() for rest of the process life, but it is only unblocking for one time. Everytime I have call $kill -n SIGUSER1 <pid> to unblock SIGINT. Note: Error handling has been removed for simplicity.
The kernel will restore the signal mask upon returning from a signal handler. This is specified by the standard: When a thread's signal mask is changed in a signal-catching function that is installed by sigaction(), the restoration of the signal mask on return from the signal-catching function overrides that change (see sigaction()). If the signal-catching function was installed with signal(), it is unspecified whether this occurs. On Linux, signal(2) is just a deprecated compatibility wrapper for sigaction(2), so this also occurs when using signal(2).
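The same mask mechanics can be watched from Python, whose signal.pthread_sigmask wraps the underlying call. Blocking a signal leaves it pending; unblocking it from the program's normal flow (rather than from inside a handler, where the change would be undone on return) delivers it immediately. A sketch using SIGUSR1:

```python
import os
import signal

got = []
signal.signal(signal.SIGUSR1, lambda signum, frame: got.append(signum))

# Block SIGUSR1 and send it to ourselves: it is held pending.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)
pending = signal.SIGUSR1 in signal.sigpending()
print("pending:", pending)

# Unblocking from the main flow sticks, and the pending signal is
# delivered right away, running the handler.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
print("handled:", got == [signal.SIGUSR1])
```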
Why below code is not able to unblock SIGINT signal
1,429,325,339,000
We have a kernel module that was building fine for RedHat family of Linux distribution, until the recent RHEL7.5. When trying to build on RHEL7.5, we've got an error of: ...error: ‘GENL_ID_GENERATE’ undeclared... Did some reading, and it seems like this is an change since kernel 4.11+, but RHEL7.5 is based on kernel 3.10+. What happened? Anyway, I know that the value of GENL_ID_GENERATE is simply 0. But can I used use 0 to replace the macro? Will there be a problem with user mode module to communicate with this kernel module? Or, what should be the proper way to fix the problem? Any advice? Thanks and regards, Weishan
Looking at the git commits for netlink it looks like several changes were made to the structure in version 4.11: First, you can omit the .id field completely from your initializer in genl_family as Linux has removed static family IDs. As well, the genl_register_family_with_ops function is not used any more. Instead, as noted in the Linux HOWTO documentation for netlink: Up to linux 4.10, use genl_register_family_with_ops(). On 4.10 and later, include a reference to your genl_ops struct as an element in the genl_family struct (element .ops), as well as the number of commands (element .n_ops).
netlink: GENL_ID_GENERATE definition removed from RHEL7.5 kernel library
1,429,325,339,000
I'm learning how to use Message Queues in Linux and I've found a simple example: https://www.geeksforgeeks.org/ipc-using-message-queues/. With the reader and writer in this link, I can read and write messages through the Message Queue on my Ubuntu. Everything is fine. Well, if I'm right, when we write some messages into a Message Queue, the messages are stored in the Kernel, meaning that the Kernel will allocate some RAM to store them. Let's say I keep writing many messages into a Message Queue but never consume them. As I understand it, more and more RAM will be used. In this case, can I use the command top or ps aux to monitor the increasing usage of RAM? The lines VIRT and RES of the command top are about RAM usage, and the lines VSZ and RSS of the command ps aux are about RAM usage too. In the case above, can I see some of the four numbers (VIRT, RES, VSZ and RSS) increasing? Or can top and ps aux not show us the RAM usage of the Kernel that is used by MQ, FIFO, SHM, domain sockets or other IPC mechanisms?
IPC resources aren’t tied to a given process, so they don’t show up in the data displayed by top, ps etc. You can see this in the example you’re referring to: the message queue is created by the writer but deleted by the reader. To monitor IPC resources, you can use lsipc: lsipc will provide an overview, and lsipc -q will show details of the message queues.
Is RAM usage of IPC a part of the RAM usage of a program
1,429,325,339,000
I have two processes given by their pids: P1 and P2. Is there a simple way of checking whether these processes are communicating via sockets or another inter-process communication mechanism? I need to know this because I have two seemingly unrelated apps that might be communicating under the hood and I want to know if this is really the case.
You can use lsof -p P1 and lsof -p P2 to see the file descriptors open by the two processes. Then you can look at the list of sockets and pipes they each have open, and see if any of them have the same ID. imac:barmar $ sleep 100 | sleep 100 & [1] 51885 imac:barmar $ jobs -l [1]+ 51884 Running sleep 100 51885 | sleep 100 & imac:barmar $ lsof -p 51884 | grep -i pipe sleep 51884 barmar 1 PIPE 0x491a6929f9ea1ca9 16384 ->0x491a6929f9e9fae9 imac:barmar $ lsof -p 51885 | grep -i pipe sleep 51885 barmar 0 PIPE 0x491a6929f9e9fae9 16384 ->0x491a6929f9ea1ca9 Notice that the destination ID of the pipe in the first process is the same as the source ID of the pipe in the second process. That indicates that they're the two ends of the same pipe.
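The check can also be automated from /proc, which is where lsof gets this information: every entry in /proc/<pid>/fd is a symlink whose target names the pipe or socket inode, so a non-empty intersection of two processes' targets reveals a shared channel. A sketch, with a forked child standing in for the second app:

```python
import os
import time

def ipc_nodes(pid):
    """Pipe/socket inode names among a process's open file descriptors."""
    nodes = set()
    fd_dir = f"/proc/{pid}/fd"
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue              # fd vanished between listdir and readlink
        if target.startswith(("pipe:", "socket:")):
            nodes.add(target)
    return nodes

r, w = os.pipe()                  # the channel the two processes share
child = os.fork()
if child == 0:                    # child: hold the fds open for a moment
    time.sleep(2)
    os._exit(0)

shared = ipc_nodes(os.getpid()) & ipc_nodes(child)
verdict = "communicating" if shared else "unrelated"
print(verdict)
os.waitpid(child, 0)
```

Note that reading another process's /proc/<pid>/fd requires owning that process (or root), the same restriction lsof runs into.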
How to verify whether two local processes are communicating via sockets or ipcs?
1,429,325,339,000
My process deadlocks. master looks like this: p=Popen(cmd, stdin=PIPE, stdout=PIPE) for ....: # a few million p.stdin.write(...) p.stdin.close() out = p.stdout.read() p.stdout.close() exitcode = p.wait() child looks something like this: l = list() for line in sys.stdin: l.append(line) sys.stdout.write(str(len(l))) strace -p PID_master shows that master is stuck in wait4(PID_child,...). strace -p PID_child shows that child is stuck in read(0,...). How can that be?! I did close the stdin, why is the child still reading from it?!
parent.py from subprocess import Popen, PIPE cmd = ["python", "child.py"] p=Popen(cmd, stdin=PIPE, stdout=PIPE) for i in range(1,100000): p.stdin.write("hello\n") p.stdin.close() out = p.stdout.read() p.stdout.close() print(out) exitcode = p.wait() child.py import sys l = list() for line in sys.stdin: l.append(line) sys.stdout.write(str(len(l))) Running it: $ python parent.py 99999 Looks like this works fine so the problem must be somewhere else.
Deadlock on read/wait [closed]
1,429,325,339,000
I wish to send a command to process A, from process B, via a FIFO. The command will be a word or a sentence, but wholly contained on a "\n"-terminated line - but could, in general, be a multi-line record, terminated by another character. The relevant portion of the code that I tried looks something like this: Process A: $ mkfifo ff $ read x < ff Process B: (from another terminal window) $ echo -n "cmd" > ff $ echo -n " arg1" > ff $ echo -n " arg2" > ff ... $ echo " argN" > ff However, what's happening is, the read returns with the value cmd, even though the bash man page says it, by default, reads \n-terminated lines, unless the -d delim option is used. So, I next tried specifying -d delim explicitly, $ read -d "\n" x < ff and still the same result. Could echo -n be closing the FIFO's file descriptor? I'm using bash 4.4.x on Ubuntu 18.04.
Yep, that's exactly what happens: $ mkfifo p $ while :; do cat p ; done > /dev/null & $ strace -etrace=open,close bash -c 'echo -n foo > p; echo bar > p' |& grep '"p"' -A1 open("p", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3 close(3) = 0 -- open("p", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3 close(3) = 0 The redirections only take effect for the duration of the single command they're set up on. The workaround on the write side is to either a) use a compound block to group the commands, or b) use exec to open a file descriptor for the duration of the whole script (or until closed). a) { echo -n foo; echo bar; } > p (You could also put the commands in a function and use redirection when calling the function.) b) exec 3>p echo -n foo >&3 echo bar >&3 exec 3>&- # to explicitly close it If you want to fix it on the reading side, you'll need to loop over read and concatenate the strings you get. Since you explicitly want partial non-lines, and to skip over end-of-file conditions, you can't use the exit code of read for anything useful.
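The reading-side loop mentioned at the end can be sketched in Python, with a thread standing in for the second terminal. Each open/write/close in the writer corresponds to one echo -n ... > ff, and the reader keeps reopening the FIFO and concatenating past each writer's EOF until it has a full newline-terminated record:

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "ff")
os.mkfifo(path)

def writers():
    # Each open/write/close is a separate writer, like each echo -n ... > ff.
    for part in ("cmd", " arg1", " argN\n"):
        fd = os.open(path, os.O_WRONLY)
        os.write(fd, part.encode())
        os.close(fd)

t = threading.Thread(target=writers)
t.start()

# Reader: keep reopening and reading past each writer's EOF until a
# full newline-terminated record has been assembled.
buf = b""
while not buf.endswith(b"\n"):
    fd = os.open(path, os.O_RDONLY)
    while True:
        chunk = os.read(fd, 1024)
        if not chunk:             # this writer closed; not end of record
            break
        buf += chunk
    os.close(fd)
t.join()
record = buf.decode().strip()
print(record)
```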
bash: Reading a full record from a fifo
1,429,325,339,000
Obviously O_CREAT and O_EXCL are not required when opening an existing semaphore. O_CREAT is required when creating a new semaphore. O_EXCL is only meaningful when OR-ing with O_CREAT, specifying that if a semaphore with the given name already exists, then an error is returned. Linux manual page for sem_open said that Definitions of the flags values can be obtained by including <fcntl.h> but I did not find any flag in fcntl.h that told me how to open an existing semaphore.
Consider the following example: #include <fcntl.h> #include <sys/stat.h> #include <semaphore.h> #include <stdio.h> int main(int argc, char* argv[]) { const char* const sem_name = "lock.sem"; if (argc == 1) { sem_t* const sem = sem_open(sem_name, O_CREAT, 0644, 0); if (sem == NULL) { perror("sem_open"); return 1; } sem_wait(sem); // Will block sem_close(sem); sem_unlink(sem_name); } else { sem_t* const sem = sem_open(sem_name, 0); if (sem == NULL) { perror("sem_open"); return 1; } sem_post(sem); // Will unblock the other process sem_close(sem); } return 0; } I'm using the argument count to control the behavior of the program. If I supply no parameters (i.e., when argc == 1), then the program will open the semaphore, creating it if it does not already exist; it initializes the value of the semaphore to 0. It then does a sem_wait() on the sem object. Since the semaphore was initialized to 0, this causes the process to block. Now if I run a second instance of this same program, but this time with any non-zero number of arguments (i.e., when argc != 1), then the program will open the semaphore, but will not create it if it does not already exist. Note here that I pass 0 for the oflag parameter (i.e., I'm not passing any flags). The program then does a sem_post(), incrementing the semaphore from 0 to 1. This unblocks the first process. Both processes will close their references to the semaphore, and terminate. If I understand your question correctly, the second case is what you're looking for. If I try to run the second case first (i.e., when there isn't a running instance of the first case), then I get: $ ./a.out foo sem_open: No such file or directory That's comming from the call to perror() because a semaphore with the given name does not exist.
How to open an existing named semaphore?
1,429,325,339,000
When I do a kill -SIGUSR1 $PPID I get kill: (1) - Operation not permitted. How can I overcome this?
My parent process had died for some reason. This caused the issue. I took care of that and the problem got solved.
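Incidentally, the (1) in the error message suggests the target PID was 1, i.e. the process had been re-parented to init, which unprivileged processes may not signal. A cheap way to probe such a situation is kill with signal 0, which performs the existence and permission checks without delivering anything. A sketch, where a forked child probes its (still-living) parent:

```python
import os

child = os.fork()
if child == 0:
    # In the child, probe our still-living parent with signal 0.
    try:
        os.kill(os.getppid(), 0)
        os._exit(0)   # parent exists and we may signal it
    except PermissionError:
        os._exit(1)   # parent exists but belongs to someone else (e.g. init)
    except ProcessLookupError:
        os._exit(2)   # no such process

_, status = os.waitpid(child, 0)
code = os.WEXITSTATUS(status)
print(code)
```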
Sending SIGUSR1 to parent
1,429,325,339,000
I have a systemd-nspawn container in which I am trying to change the kernel parameter for msgmnb. When I try to change the kernel parameter by directly writing to the /proc filesystem or using sysctl inside the systemd-nspawn container, I get an error that the /proc file system is read only. From the Arch wiki I see this relevant documentation: systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys, /proc/sys or /sys/fs/selinux. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container. I thought the container would inherit some properties of /proc from the host, including the kernel parameter value for msgmnb, but this does not appear to be the case as the host and container have different values for msgmnb. The kernel parameter value in the container: cat /proc/sys/kernel/msgmnb 16384 Writing to the proc filesystem inside the container $ bash -c 'echo 2621440 > /proc/sys/kernel/msgmnb' bash: /proc/sys/kernel/msgmnb: Read-only file system For completeness, I also tried sysctl in the container: # sysctl -w kernel.msgmnb=2621440 sysctl: setting key "kernel.msgmnb": Read-only file system I thought this value would be inherited from the host system. I set the value on the host, rebooted and re-created my container. The container (even new ones) maintains the value of 16384. # On the host $ cat /proc/sys/kernel/msgmnb 2621440 I've also tried using the unprivileged -U flag when booting the systemd-nspawn container but I get the same results. I've also tried editing /etc/sysctl.conf in the container tree to include this line before booting the container: kernel.msgmnb=2621440 I also looked into https://man7.org/linux/man-pages/man7/capabilities.7.html and noticed CAP_SYS_RESOURCE, which has a line that reads: CAP_SYS_RESOURCE ...
raise msg_qbytes limit for a System V message queue above the limit in /proc/sys/kernel/msgmnb (see msgop(2) and msgctl(2)); Using sudo systemd-nspawn --capability=CAP_SYS_RESOURCE -D /path/to/container, and then inside the container, when I use msgctl with IPC_SET and pass msqid_ds->msg_qbytes with a value that is higher than what is in /proc/sys/kernel/msgmnb, the syscall returns an error code. It seemed like passing the CAP_SYS_RESOURCE should work here? Nothing I've tried here has changed the value for msgmnb in the container. I can't seem to find documentation on how to achieve my goal. I'd appreciate any help - thank you! EDIT: Trying to determine if the process calling msgctl has the capability. Here is what I found: $ cat /proc/6211/status | grep -i Cap CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000fdecafff CapAmb: 0000000000000000 $ capsh --decode=00000000fdecafff 0x00000000fdecafff=cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_raw,cap_ipc_owner,cap_sys_chroot,cap_sys_ptrace,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap
$ cat /proc/6211/status | grep -i Cap CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000fdecafff CapAmb: 0000000000000000 CapInh is the set of inheritable capabilities, which is not useful for the current program, but could be passed on to any programs this process would exec() if the right conditions apply. It's all zeroes, so there's no capabilities in there anyway. CapEff is the most important one: it is the set of effective capabilities, or the privileged things this process/thread is allowed to do right now. Unfortunately, it is all zeroes here. CapPrm limits the capabilities this particular process/thread is permitted to get for itself or its child processes if it asks for them. And that is also all zeroes. So as long as this process executes the current program, it will never be able to gain any capabilities at all. CapBnd is the bounding set that limits the capabilities the descendants of this program could receive - if they would get them from somewhere else. If this process would exec() a setuid-root program, this is the set of capabilities that would become effective for it all at once. Or if, for example, this process executed a program that had a setcap 'cap_sys_resource=eip' <filename> done on it, this CapBnd value would allow the CAP_SYS_RESOURCE capability to become effective for the executed program and its child processes. So your process currently does not have the CAP_SYS_RESOURCE capability and cannot get it without exec()ing another program. To make the CAP_SYS_RESOURCE immediately effective for your containerized process, you would need to add the option --ambient-capability=CAP_SYS_RESOURCE to your systemd-nspawn command line.
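What capsh --decode does with those hex values is plain bit arithmetic: capability number N is granted when bit N of the mask is set. A sketch with the first eight capability names in kernel bit order (just a subset, for illustration):

```python
CAP_NAMES = [   # bits 0..7 of the capability mask, in kernel bit order
    "cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner",
    "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid",
]

def decode_caps(hex_mask):
    """A toy version of capsh --decode for the first eight capabilities."""
    mask = int(hex_mask, 16)
    return [name for bit, name in enumerate(CAP_NAMES) if mask >> bit & 1]

caps = decode_caps("00000000000000c0")   # bits 6 and 7 set
print(caps)
```

Decoding the all-zero CapEff above with the full table would return an empty list, which is exactly the problem: no capability is effective.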
How to increase kernel parameter (`msgmnb`) for a systemd-nspawn container
1,429,325,339,000
There is the Supermicro X10DAi motherboard and the manual is here. On page 1-11 you can see that each CPU has its own RAM. Let's say program A is offering an API through a local socket /var/run/socketapi. This program is started on CPU 1. Then there is program B connecting to this socket, and it's started on CPU 2. When program B writes a command to the socket, the kernel normally copies the data from the memory space of program B to that of program A. But because the programs run on different CPUs and the memory is not shared between CPUs, there is a problem. How is this solved under recent Linux? Maybe the whole memory of CPU 1 is memory-mapped to CPU 2 using the QPI interface shown in the manual? Or perhaps the programs' IPC won't work and an error occurs? Please provide some reference to Linux source code or documentation.
Yes, CPUs map each other's memory through the CPU interconnect. On Intel-compatible architectures, that is a coherent mapping, so software notices mostly in the form of higher latency when accessing memory connected to the other CPU. As system memory has quite a bit of latency on its own, the difference is not that great. The OS still optimizes on the fly, and might decide to move two processes that have lots of IPC traffic onto the same node. Different architectures might have non-coherent mappings as well, which requires software to be more explicit about memory locality, but scales better with more sockets.
How does local socket IPC work on a multi CPU system?
1,429,325,339,000
Is there any way to redirect stdout (1) from that "pipe" (I don't know exactly how I'm supposed to interpret this, and I would be glad if someone could explain how to treat this, or give me something to read on it) to some other output, e.g. a file or terminal? -bash-4.2$ ls -l /proc/11/fd total 0 lrwx------ 1 us sudo 64 Sep 24 11:26 0 -> /dev/null l-wx------ 1 us sudo 64 Sep 24 11:26 1 -> pipe:[20619] l-wx------ 1 us sudo 64 Sep 24 11:26 2 -> pipe:[20620] lrwx------ 1 us sudo 64 Sep 24 11:26 3 -> socket:[30376] lr-x------ 1 us sudo 64 Sep 24 11:26 4 -> /dev/null l-wx------ 1 us sudo 64 Sep 24 11:26 5 -> pipe:[30639] lrwx------ 1 us sudo 64 Sep 24 11:26 6 -> socket:[27522]
Not in a clean or portable way. You have to attach with a debugger like gdb, open the destination file and dup it onto fd 1, as with:

gdb -p <PID> -batch -ex 'call dup2(open("<PATH>", 2), 1)'

That pipe:[digits] is an "anonymous" pipe, as created by the cmd | cmd shell construct. However, on Linux it's not really anonymous, since you can open it via /proc/<PID>/fd/<NUM>. So you have another option (which is guaranteed to wreak even more havoc than using gdb): open the other side of the pipe, kill whatever program is reading from it, and cat it somewhere else. Stupid example:

% while sleep 1; do TZ=Zulu date; done | wc -c &
[1] 26727
% ps
  PID TTY          TIME CMD
20330 pts/1    00:00:00 bash
26726 pts/1    00:00:00 bash   # this is the while ... done process
26727 pts/1    00:00:00 wc
26745 pts/1    00:00:00 sleep
26746 pts/1    00:00:00 ps
% ls -l /proc/26726/fd/1
... /proc/26726/fd/1 -> 'pipe:[1294932]'
% exec 7</proc/26726/fd/1   # open the other side of the pipe
% kill 26727                # kill wc -c
% cat <&7
Fri 24 Sep 2021 01:25:52 PM UTC
Fri 24 Sep 2021 01:25:53 PM UTC
Fri 24 Sep 2021 01:25:54 PM UTC
...
How to redirect running process output from pipe to something else?
1,429,325,339,000
I'm trying to write a little chess program - actually more of a chess GUI. The chess GUI should use the stockfish chess engine in the background when the player plays against the computer. I got stockfish installed and can run it in the terminal and communicate with it via STDIN and STDOUT; for example, I can type 'isready' and stockfish responds with 'readyok'. Now I'm trying to communicate from the chess GUI to stockfish continuously via some IPC method on Linux. I looked first into pipes, but discarded that because pipes communicate unidirectionally. Then I read of FIFOs and of Bash redirections, and am trying that now. It sort of works, because I can read one line of output from stockfish. But that only works for the first line. When I then send 'isready' to stockfish via a FIFO and try to read the next output from stockfish, there is no response. I use Bash redirections to redirect STDIN and STDOUT of stockfish to FIFOs. I run this script to start stockfish in one terminal:

#!/bin/bash
rm /tmp/to_stockchess -f
mkfifo /tmp/to_stockchess
rm /tmp/from_stockchess -f
mkfifo /tmp/from_stockchess
stockfish < /tmp/to_stockchess > /tmp/from_stockchess

I call this script with ./stockfish.sh And I have this C program, for example (I'm new to C):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fpi;
    fpi = fopen("/tmp/to_stockchess", "w");
    FILE *fpo;
    fpo = fopen ("/tmp/from_stockchess", "r");
    char * line = NULL;
    size_t len = 0;
    ssize_t read;

    read = getline(&line, &len, fpo);
    printf("Retrieved line of length %zu:\n", read);
    printf("%s", line);

    fprintf(fpi, "isready\n");

    read = getline(&line, &len, fpo);
    printf("Retrieved line of length %zu:\n", read);
    printf("%s", line);

    fclose (fpi);
    fclose (fpo);
    return 0;
}

Output from the program in the terminal (but the program doesn't halt, it waits):

Retrieved line of length 74:
Stockfish 11 64 POPCNT by T. Romstad, M. Costalba, J. Kiiski, G.
Linscott

Alas, that doesn't work continuously (I have no loop yet, I'm just trying to read two or more times without a loop); for example, the stockfish script terminates in its terminal (instead of running continuously) after one line is read from the stockfish output FIFO. Or I can only read one line of output from the stockfish output FIFO. If there is an easier way to do IPC with stockfish via STDIN and STDOUT, I can also try that. Thank you.
Since you're already working with C, I'd suggest you manage stockfish in C as well. There is a library function, popen(), that will give you a unidirectional pipe to a process -- that doesn't suit your use case. You can, however, set it up yourself. Consider the following example program:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/**
 * Creates two pipes, forks, and runs the given command. One pipe is
 * connected between the given *out and the standard input stream of the child;
 * the other pipe is connected between the given *in and the standard output
 * stream of the child.
 *
 * Returns the pid of the child on success, -1 otherwise. On error, errno
 * will be set accordingly.
 */
int bi_popen(const char* const command, FILE** const in, FILE** const out)
{
    const int READ_END = 0;
    const int WRITE_END = 1;
    const int INVALID_FD = -1;
    int to_child[2] = { INVALID_FD, INVALID_FD };
    int to_parent[2] = { INVALID_FD, INVALID_FD };

    *in = NULL;
    *out = NULL;

    if (command == NULL || in == NULL || out == NULL) {
        errno = EINVAL;
        goto bail;
    }
    if (pipe(to_child) < 0) {
        goto bail;
    }
    if (pipe(to_parent) < 0) {
        goto bail;
    }

    const pid_t pid = fork();
    if (pid < 0) {
        goto bail;
    }
    if (pid == 0) { // Child
        if (dup2(to_child[READ_END], STDIN_FILENO) < 0) {
            perror("dup2");
            exit(1);
        }
        close(to_child[READ_END]);
        close(to_child[WRITE_END]);
        if (dup2(to_parent[WRITE_END], STDOUT_FILENO) < 0) {
            perror("dup2");
            exit(1);
        }
        close(to_parent[READ_END]);
        close(to_parent[WRITE_END]);
        execlp(command, command, NULL);
        perror("execlp");
        exit(1);
    }

    // Parent
    close(to_child[READ_END]);
    to_child[READ_END] = INVALID_FD;
    close(to_parent[WRITE_END]);
    to_parent[WRITE_END] = INVALID_FD;

    *in = fdopen(to_parent[READ_END], "r");
    if (*in == NULL) {
        goto bail;
    }
    to_parent[READ_END] = INVALID_FD;
    *out = fdopen(to_child[WRITE_END], "w");
    if (*out == NULL) {
        goto bail;
    }
    to_child[WRITE_END] = INVALID_FD;
    setvbuf(*out, NULL, _IONBF, BUFSIZ);
    return pid;

bail: ; // Goto label must be a statement, this is an empty statement
    const int old_errno = errno;
    if (*in != NULL) {
        fclose(*in);
    }
    if (*out != NULL) {
        fclose(*out);
    }
    for (int i = 0; i < 2; ++i) {
        if (to_child[i] != INVALID_FD) {
            close(to_child[i]);
        }
        if (to_parent[i] != INVALID_FD) {
            close(to_parent[i]);
        }
    }
    errno = old_errno;
    return -1;
}

int main(void)
{
    FILE* in = NULL;
    FILE* out = NULL;
    char* line = NULL;
    size_t size = 0;

    const int pid = bi_popen("/bin/bash", &in, &out);
    if (pid < 0) {
        perror("bi_popen");
        return 1;
    }

    fprintf(out, "ls -l a.out\n");
    getline(&line, &size, in);
    printf("-> %s", line);

    fprintf(out, "pwd\n");
    getline(&line, &size, in);
    printf("-> %s", line);

    fprintf(out, "date\n");
    getline(&line, &size, in);
    printf("-> %s", line);

    // Since in this case we can tell the child to terminate, we'll do so
    // and wait for it to terminate before we close down.
    fprintf(out, "exit\n");
    waitpid(pid, NULL, 0);

    fclose(in);
    fclose(out);
    return 0;
}

In the program, I have defined a function bi_popen. The function takes as input the path of a program to run as well as two FILE*: in for input from the command and out for output to the command. bi_popen sets up two pipes, one for communicating from the parent process to the child process, and another for communicating from the child process to the parent process. Next, bi_popen forks, creating a new process. The child process connects its standard output to the write-end of the pipe to the parent, and connects its standard input to the read-end of the pipe from the parent. It then cleans up the old pipe file descriptors and uses execlp to replace the running process with the given command. That new program inherits the standard input/output configuration with the pipes. On success, execlp never returns. In the case of the parent (when fork returns a non-zero value), the parent process closes the unnecessary ends of the pipes, and uses fdopen to create FILE* associated with the relevant pipe file descriptors. It updates the in and out output parameters with those values. Finally, it uses setvbuf on the output FILE* to make it unbuffered (so that you don't have to explicitly flush content you send to the child process). The main function is an example of how to use the bi_popen function. It calls bi_popen with the command as /bin/bash. Anything written to the out stream is sent to bash to execute. Anything bash prints to standard output is available for reading from in. Here's an example run of the program:

$ ./a.out
-> -rwxr-xr-x 1 user group 20400 Aug 29 17:09 a.out
-> /home/user/src/bidirecitonal_popen
-> Sat Aug 29 05:10:52 PM EDT 2020

Note that main writes a series of commands to the child process (here, bash), and the command responded with the expected output. In your case, you could replace "/bin/bash" with "stockfish", then use out to write commands to stockfish and in to read the responses.
Programming - Communicating with chess engine stockfish / FIFOs / Bash redirections
1,429,325,339,000
I want to store the stdout from a process into a buffer and have the buffer emptied once read, FIFO style. I know that I can pipe the stdout, but the pipe/file will keep growing and contain data that I have already read. I just want the fresh data. command > named_pipe & Are there any other inbuilt methods, similar to a buffer in a network socket, that I can redirect data to?
I don't understand how named pipes fail to solve your problem. This example uses two shell sessions, shell_1 and shell_2. I indent I/O from/to shell_2 more than that of shell_1 to make clear which shell each line belongs to.

$ mkfifo my_pipe
(shell_1) $ echo hi > my_pipe    # Blocks waiting for a reader
        (shell_2) $ cat my_pipe  # Unblocks shell_1
        hi
        (shell_2) $ cat my_pipe  # blocks -- does not print "hi" again
(shell_1) $ echo bye > my_pipe   # Unblocks shell_2
        bye                      # Printed by shell_2
What methods exist for capturing stdout into a buffer that is automatically cleared on read?
1,429,325,339,000
I've done some research about this topic but I didn't understand it quite well. From the msgsnd man page: The msgsnd() system call appends a copy of the message pointed to by msgp to the message queue whose identifier is specified by msqid. Does this mean that when I use msgget to create a message queue, the enqueue and dequeue happen automatically with msgsnd and msgrcv? For example, if I want to use a message queue that can simultaneously hold N messages, when I use msgsnd I put a message into the queue, and when I use msgrcv I get it from there and delete that message? If that's the case, I shouldn't manually implement enqueue and dequeue to create a list of N messages, because it's enough to set a value for const void *msgp from int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg); to add one more message to the queue, and it is enough that the message is received by msgrcv for it to be deleted from the queue; otherwise it remains in the queue until it is received by some process. Am I correct? But then how many messages can this queue contain if I'm not the one setting how many it can hold?
According to msgrcv(2), enqueue/dequeue operations are handled internally by the System V API, so you don't need to re-implement them: msgsnd() appends a message to the queue and msgrcv() removes the message it returns, while unread messages simply stay queued. For message queue attributes and limits, use msgctl() with the IPC_INFO (or IPC_STAT) command.
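The queue's capacity is bounded by kernel limits rather than by anything you declare at msgget() time; on Linux you can read them (and, as root, tune them) from procfs. A sketch:

```shell
# System-wide System V message queue limits (Linux):
cat /proc/sys/kernel/msgmax   # max size of a single message, in bytes
cat /proc/sys/kernel/msgmnb   # default max total bytes held in one queue
cat /proc/sys/kernel/msgmni   # max number of queue identifiers
```

`ipcs -q -l` prints the same limits and `ipcs -q` lists the queues that currently exist. A queue therefore holds at most msgmnb bytes of pending messages; once that is reached, msgsnd() blocks (or fails with EAGAIN if IPC_NOWAIT is set).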
Handling multiple messages in message queue
1,429,325,339,000
I'm trying to invoke the CreateItem method on the org.freedesktop.secrets D-Bus service:

busctl --user call org.freedesktop.secrets /org/freedesktop/secrets/collection/login org.freedesktop.Secret.Collection CreateItem "a{sv}(oayays)b"

How can I figure out what kind of arguments to pass for the a{sv}(oayays)b signature?
a{sv}: dictionary with keys being strings and values variants
(oayays): struct of object path (o), two byte arrays (ay) and a string (s)
b: boolean

Check the Secrets API Specification for more details about the parameters for CreateItem.
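A signature is just a flat string of type codes, so it can be read off mechanically. As a sketch, this hypothetical helper names each code in a signature (only the codes relevant here are covered; anything else falls through to a catch-all):

```shell
# Hypothetical helper: print the meaning of each D-Bus type code in a
# signature string, one per line.
decode_sig() {
    sig=$1
    while [ -n "$sig" ]; do
        c=${sig%"${sig#?}"}     # first character of the string
        sig=${sig#?}            # rest of the string
        case $c in
            a)   echo 'a  array of the following single complete type' ;;
            s)   echo 's  string' ;;
            v)   echo 'v  variant' ;;
            o)   echo 'o  object path' ;;
            y)   echo 'y  byte' ;;
            b)   echo 'b  boolean' ;;
            '{') echo '{  dict-entry open' ;;
            '}') echo '}  dict-entry close' ;;
            '(') echo '(  struct open' ;;
            ')') echo ')  struct close' ;;
            *)   echo "$c  (see the D-Bus specification)" ;;
        esac
    done
}

decode_sig 'a{sv}(oayays)b'
```

For discovering a method's signature in the first place, `busctl --user introspect org.freedesktop.secrets /org/freedesktop/secrets/collection/login` prints every method together with its argument signature.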
What is a{sv}(oayays)b dbus signature
1,429,325,339,000
From what I've seen online, you call the kill function in C++ in order to see if a process is alive. The issue with that is that PIDs get recycled, so the PID you're looking for may no longer belong to the same process. I have a program with two processes that are not children of each other. The only way to communicate with them is IPC. I would like my host process to shut down when the client process shuts down. In order to do that, I have to know when the client's process is no longer alive. In Windows, there is what's called a process handle, which keeps the PID from being recycled until the handle is closed. I am wondering how to achieve this for macOS/Linux (POSIX) systems. The problematic code, since PIDs are recycled:

if (0 == kill(pid, 0))
{
    // Process exists.
}
The solution is either to reserve the PID on Windows by caching the process handle and not closing it, or, for POSIX systems, to get the process's start time from the kernel (OS dependent!) and then check if the cached start time equals the current start time. If it doesn't, a PID conflict is detected and the function returns false.

Windows:

#include <windows.h>
#include <iostream>
#include <vector>
#include <map>

using namespace std;

map<DWORD, HANDLE> handles;

bool isProcessAlive(DWORD pid)
{
    HANDLE process;
    if(handles.find(pid) == handles.end())
    {
        process = OpenProcess(SYNCHRONIZE, FALSE, pid);
        handles[pid] = process;
    }
    else
    {
        process = handles[pid];
    }
    DWORD ret = WaitForSingleObject(process, 0);
    bool isRunning = ret == WAIT_TIMEOUT;
    if(!isRunning) //close the cached handle to free the PID and erase from the cache
    {
        CloseHandle(process);
        handles.erase(pid);
    }
    return isRunning;
}

MacOS:

#include <signal.h>
#include <stddef.h>
#include <sys/_types/_timeval.h>
#include <sys/errno.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <cstring>
#include <iostream>
#include <map>
#include <string>

using namespace std;

/**
 * map of unsigned long to creation time, which is either in jiffies, ms,
 * clock ticks, or different on mac even, so we keep it as a string
 */
std::map<unsigned long, string> handles;

/**
 * returns true if the process is alive; caches the process's start time
 * and detects PID reuse by comparing against the cached value
 */
bool isProcessAlive(unsigned long pid)
{
    // Get process info from kernel
    struct kinfo_proc info;
    int mib[] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, (int)pid };
    size_t len = sizeof info;
    memset(&info,0,len);
    int rc = sysctl(mib, (sizeof(mib)/sizeof(int)), &info, &len, NULL, 0);
    //bail out: sysctl failed to verify the PID
    if (rc != 0)
    {
        handles.erase(pid);
        return false;
    }
    //extract start time and confirm PID start time equals org start time
    struct timeval tv = info.kp_proc.p_starttime;
    if(tv.tv_sec == 0)
    {
        handles.erase(pid);
        return false;
    }
    string time = to_string(tv.tv_usec) + "-" + to_string(tv.tv_sec);
    if(handles.find(pid) != handles.end())
    {
        string org_time = handles[pid];
        if(org_time != time)
        {
            cout << "PID Conflict PID:" << pid << " org_time:" + org_time << " new_time:" << time << endl;
            handles.erase(pid);
            return false;
        }
    }
    else
    {
        handles[pid] = time;
    }
    return true;
}

Linux:

#include <iostream>
#include <vector>
#include <map>
#include <signal.h>
#include <dirent.h>
#include <errno.h>
#include <fstream>
#include <iterator>
#include <sstream>
#include <cstring>
#include <cerrno>
#include <ctime>
#include <cstdio>
#include <fcntl.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#include <string>
#include <sys/param.h>

using namespace std;

/**
 * map of unsigned long to creation time, which is either in jiffies, ms,
 * clock ticks, or different on mac even, so we keep it as a string
 */
std::map<unsigned long, string> handles = {};

/**
 * returns true if the process is alive; reads /proc/[PID]/stat and detects
 * PID reuse by comparing the cached creation time with the current one
 */
bool isProcessAlive(unsigned long pid)
{
    ifstream procFile;
    string f = "/proc/"+ std::to_string(pid)+ "/stat";
    procFile.open(f.c_str());
    if(!procFile.fail())
    {
        //get creation time of current pid's process
        char str[255];
        procFile.getline(str, 255); // delim defaults to '\n'
        vector<string> tmp;
        istringstream iss(str);
        copy(istream_iterator<string>(iss),
             istream_iterator<string>(),
             back_inserter<vector<string> >(tmp));
        string creation_time = tmp.at(21);
        //check if the process's creation time matches the cached creation time
        if(handles.find(pid) != handles.end())
        {
            string org = handles[pid];
            //if the pid's creation time is not the cached creation time we assume
            //it's not the same process and the original has closed
            //unlike java the ==,!= actually checks .equals() when comparing
            if(creation_time != org)
            {
                std::cerr << "PID conflict:" + to_string(pid) + " orgCreationTime:" + org + " newCreationTime:" + creation_time;
                handles.erase(pid);
                procFile.close();
                return false;
            }
        }
        handles[pid] = creation_time;
        procFile.close();
        return true;
    }
    handles.erase(pid);
    procFile.close();
    return false;
}
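The same start-time trick can be checked from the shell on Linux: field 22 of /proc/&lt;pid&gt;/stat is the process start time in clock ticks since boot, and it never changes for a given process. A sketch (splitting after the closing ')' because the comm field may itself contain spaces):

```shell
starttime() {
    # Print field 22 (starttime) of /proc/<pid>/stat, or fail if the
    # process is gone. After stripping "pid (comm) ", the remainder
    # starts at field 3, so starttime is the 20th remaining field.
    stat=$(cat "/proc/$1/stat" 2>/dev/null) || return 1
    set -- ${stat##*) }
    echo "${20}"
}

starttime "$$"    # start time of the current shell, in clock ticks
```

Two readings for the same live PID always agree; if a cached value and a fresh reading differ, the PID has been reused by a different process.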
check is Process is Alive from PID while handling recycled PID
1,429,325,339,000
So I wanted to know how files like .xinitrc, .xprofile, and .zprofile are opened by zsh, and in exactly which order. I decided to run strace on the zsh process with a grep command, to see how the open system call is invoked, so that I could eventually determine the order in which these files are loaded. My command:

strace zsh | grep open

But as soon as I ran this, it kept showing me the output for zsh, and grep was not working. When I end the process with ctrl+d, nothing happens either. So is there any way to get this grep output for this kind of process?
strace sends its output to stderr by default. Here, you could do:

strace -o >(grep --color open >&2) zsh

to run zsh while seeing all system calls that have open anywhere in the name, arguments or return value. Or:

strace -e /open zsh

(short for strace -e trace=/open zsh) to see the system calls that have open in their name (such as open, openat, pidfd_open, mq_open, open_by_handle_at, perf_event_open; the list might not be accurate depending on what system and version thereof you're on). Or:

strace -e open,openat zsh

for only those two system calls. That output goes to the terminal along with the shell's prompt and command outputs. You may prefer to send it to a file that you can inspect later on, or live in a separate terminal or screen/tmux pane:

strace -o >(grep --color open > strace.log) zsh
strace -e /open -o strace.log zsh

To also strace calls made in child processes (including by commands executed there and their children), you'd need the -f option. To see what opens your ~/.xinitrc (which has nothing to do with zsh) and when, you can use the audit system (which may not be installed/enabled by default on your system).
How to trace on continuously running process?
1,400,384,865,000
Is it possible to set up a Linux system so that it provides more than 65,535 ports? The intent would be to have more than 65k daemons listening on a given system. Clearly some ports are already in use, so the full range isn't available in any case; treat this as a theoretical exercise in trying to understand where TCP would be restrictive in doing something like this.
Looking at the RFC for TCP, RFC 793 - Transmission Control Protocol, the answer would seem to be no, because a TCP header is limited to 16 bits for the source/destination port fields.

Does IPv6 improve things? No. Even though IPv6 gives us a much larger IP address space, 128 bits vs. 32 bits, it makes no attempt to raise the TCP limitation of 16 bits for the port numbers. Interestingly, in the RFC for IPv6, Internet Protocol, Version 6 (IPv6) Specification, the address fields used in the checksum did need to be expanded. When TCP runs over IPv6, the method used to compute the checksum is changed, as per RFC 2460:

Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.

So how can you get more ports? One approach would be to stack additional IP addresses using more interfaces. If your system has multiple NICs this is easier, but even with just a single NIC, one can make use of virtual interfaces (aka. aliases) to allocate more IPs if needed.

NOTE: Using aliases has been supplanted by iproute2, which you can use to stack IP addresses on a single interface (i.e. eth0) instead.

Example
$ sudo ip link set eth0 up
$ sudo ip addr add 192.0.2.1/24 dev eth0
$ sudo ip addr add 192.0.2.2/24 dev eth0
$ ip addr show dev eth0
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:d0:b7:2d:ce:cf brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global eth0
    inet 192.0.2.2/24 scope global secondary eth0

Source: iproute2: Life after ifconfig

References
OpenWrt Wiki » Documentation » Networking » Linux Network Interfaces
Some useful commands with iproute2
Linux Advanced Routing & Traffic Control HOWTO
Multiple default routes / public gateway IPs under Linux
iproute2 cheat sheet - Daniil Baturin's website
Can TCP provide more than 65535 ports?
1,400,384,865,000
I have a computer with: Linux superhost 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux It runs Apache on port 80 on all interfaces, and it does not show up in netstat -planA inet, however it unexpectedly can be found in netstat -planA inet6: Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp6 0 0 :::5672 :::* LISTEN 2402/beam.smp tcp6 0 0 :::111 :::* LISTEN 1825/rpcbind tcp6 0 0 :::9200 :::* LISTEN 2235/java tcp6 0 0 :::80 :::* LISTEN 2533/apache2 tcp6 0 0 :::34611 :::* LISTEN 1856/rpc.statd tcp6 0 0 :::9300 :::* LISTEN 2235/java ... tcp6 0 0 10.0.176.93:80 10.0.76.98:53704 TIME_WAIT - tcp6 0 0 10.0.176.93:80 10.0.76.98:53700 TIME_WAIT - I can reach it by TCP4 just fine, as seen above. However, even these connections are listed under tcp6. Why?
By default, if you don't specify an address in Apache's Listen directive, it listens on an IPv6 socket and handles IPv4 connections through IPv4-mapped IPv6 addresses. You can take a look at Apache's IPv6 documentation. The output of netstat doesn't mean Apache is not listening on the IPv4 address: what you see under tcp6 is an IPv4-mapped IPv6 address.
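The canonical form of a v4-mapped address is ::ffff:a.b.c.d, i.e. the four IPv4 octets embedded in the low 32 bits of an IPv6 address. A sketch of the conversion:

```shell
# Print the IPv4-mapped IPv6 form of a dotted-quad address, in hex groups.
v4mapped() {
    oldifs=$IFS
    IFS=.
    set -- $1
    IFS=$oldifs
    printf '::ffff:%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
}

v4mapped 10.0.76.98    # prints ::ffff:0a00:4c62
```

This is why netstat shows the IPv4 peers under tcp6: the kernel hands Apache's single v6 socket these mapped addresses, and netstat just prints them back in dotted form. The sysctl net.ipv6.bindv6only controls whether v6 sockets accept such mapped v4 connections at all.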
netstat — why are IPv4 daemons listening to ports listed only in -A inet6?
1,400,384,865,000
I have a system that has two network interfaces with different IP adresses, both of which are in the public address range (albeit via NAT in the case of the first one) and both of which have different gateways. (Long story, it's for testing purposes) The problem is that right now, if I try to ping the address on the second interface, the default route points out via the first interface - and never arrives properly. Is it possible to make sure that responses always go out over the same network interface (and with the same source IP) as they came in on? And if so, how?
You are misunderstanding the problem. Not every packet is a response and not every packet can be matched to some other packet such that "same network interface as they came in on" makes sense. What you want to do is select the gateway for a packet based on its source IP address. This is called source-based routing or policy routing. You can do it with a simple iptables rule, but the best way is to set up two routing tables, one for each public source address: First, create two tables (Replace <NAME1> and <NAME2> with sensible names for your two providers, same with IP1, DEV1, and so on): echo 200 <NAME1> >> /etc/iproute2/rt_tables echo 201 <NAME2> >> /etc/iproute2/rt_tables Add a gateway to each routing table (if needed): ip route add <NET1> dev <DEV1> src <SRC1> table <NAME1> ip route add <NET2> dev <DEV2> src <SRC2> table <NAME2> Then a default route: ip route add default via <IP1> table <NAME1> ip route add default via <IP2> table <NAME2> Then the rules to select the route table based on the source address: ip rule add from <IP1> table <NAME1> ip rule add from <IP2> table <NAME2> See Routing for multiple uplinks/providers for more details.
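Once the tables and rules are in place, you can verify them with read-only queries that need no root. These diagnostic commands (the table names and addresses are the placeholders from above) print the rule list and ask the kernel which route a packet with a given source address would take:

```shell
ip rule show                 # the two "from <IP> lookup <NAME>" rules appear here
ip route show table main     # the ordinary table, unaffected by the rules above

# With the custom tables set up (placeholder values as above):
#   ip route show table <NAME1>
#   ip route get 192.0.2.1 from <IP1>   # reports the gateway chosen for that source
```

`ip route get ... from ...` is the quickest sanity check: it exercises the same rule lookup the kernel performs for real traffic.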
Two interfaces, two addresses, two gateways?
1,400,384,865,000
I have a system with two NICs on it. This machine, and a few accompanying devices will be moved and attached to different LANs or sometimes it'll be using dial-up. eth0: - 10.x.x.x address space - no internet gateway - only a few devices eth1 (when used): - 172.16.x.x or 192.168.x.x or other address spaces - access to the gateway from LAN to internet ppp0 (when used): - internet access through dialup using KPPP I'm using ifconfig to bring interfaces up or down (other than with ppp0, which is handled by KPPP). If I bring up eth1 first, it gets an address from its DHCP and gets the gateway and that is added to routing so there's no trouble reaching the LAN and the internet. If I bring up eth0 first or second, it gets its address and sets the default gateway to within its address space (in the 10.x.x.x range). If I bring up eth0 first and eth1 second, the default gateway is still kept to within the 10.x.x.x range. So no matter what I do, eth0 will override eth1 and "claim" the gateway in the routing. Is there some way to either prevent eth0 from claiming the gateway, or to make sure eth1 (if brought up 2nd) uses its gateway? Or can I somehow prioritize a ranking of which interface's gateway should be used over the others? I basically want to make sure eth1's default address space gateway is used if it's active, and if not, then the ppp0's default gateway is used. I'd like to be able to prevent eth0 from ever having the default gateway.
The DHCP server configuration is wrong. It must not send a default gateway option when it can't provide routing to the rest of the world. If it does send that option then any client may assume that it can send packets for any off-link destination to the specified default gateway. So your box is right in using the default gateway from eth0 if it is told so by DHCP. The solution is to remove the bad option from your DHCP server.
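If you can't fix the server, a client-side workaround is to stop the DHCP client from requesting (and therefore applying) the routers option on that interface. A sketch for /etc/dhcp/dhclient.conf, assuming ISC dhclient is in use and eth0 is the gateway-less network:

```
# /etc/dhcp/dhclient.conf (ISC dhclient)
interface "eth0" {
    # Request everything we normally want *except* routers, so no
    # default gateway is ever installed from this interface's lease:
    request subnet-mask, broadcast-address, domain-name-servers;
}
```

Per-interface request statements override the global request list, so leases on other interfaces still install their gateway normally.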
Can I prevent a default route being added when bringing up an interface?
1,400,384,865,000
We can use the syntax ${var##pattern} and ${var%%pattern} to extract the last and first sections of an IPv4 address:

IP=109.96.77.15
echo IP: $IP
echo 'Extract the first section using ${var%%pattern}: ' ${IP%%.*}
echo 'Extract the last section using ${var##pattern}: ' ${IP##*.}

How can we extract the second or third section of an IPv4 address using parameter expansion? Here is my solution: I use an array and change the IFS variable.

:~/bin$ IP=109.96.77.15
:~/bin$ IFS=. read -a ArrIP<<<"$IP"
:~/bin$ echo ${ArrIP[1]}
96
:~/bin$ printf "%s\n" "${ArrIP[@]}"
109
96
77
15

Also I have written some solutions using the awk, sed, and cut commands. Now, my question is: Is there a simpler solution based on parameter expansion which does not use an array or IFS changes?
Assuming the default value of IFS, you can extract each octet into its own variable with:

read A B C D <<<"${IP//./ }"

Or into an array with:

A=(${IP//./ })
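If you want to stay strictly within parameter expansion (no read, no array, no IFS change), you can peel octets off the front and then trim the rest. A sketch for the second and third sections:

```shell
IP=109.96.77.15

rest=${IP#*.}           # strip the first octet:      96.77.15
second=${rest%%.*}      # keep up to the next dot:    96

rest=${rest#*.}         # strip the second octet too: 77.15
third=${rest%%.*}       # keep up to the next dot:    77

echo "$second $third"   # prints: 96 77
```

Each step is a single expansion, so this works in any POSIX shell, at the cost of needing an intermediate variable.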
Bash: Extract one of the four sections of an IPv4 address
1,400,384,865,000
I try to configure a static IPv4 & IPv6 configuration on CentOS 6.2. The configuration below works perfectly:

# ifconfig eth0 x.x.x.x/29
# route add default gw x.x.x.y
# ip addr add dev eth0 XXXX:C810:3001:D00::3/56
# ip -6 route add default via XXXX:C810:3001:D00::1

However, I want to keep this configuration after a reboot. So I made the following configuration:

Enabling IPv6

[root@test network-scripts]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=test.net
NETWORKING_IPV6=yes

Interface Configuration

[root@test network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
HWADDR="2C:C3:AC:A8:C3:3E"
IPADDR=x.x.x.x
GATEWAY=x.x.x.x
NETMASK=255.255.255.248
TYPE=Ethernet
IPV6INIT=yes
IPV6ADDR=XXXX:C810:3001:D00::3/56
IPV6_DEFAULTGW=XXXX:C810:3001:D00::1
DNS1=208.67.222.222
DNS2=208.67.220.220
# Only DNS{1,2} according to /usr/share/doc/initscripts-9.03.27/sysconfig.txt
# DNS3=2620:0:ccc::2
# DNS4=2620:0:ccD::2

Restarting the Network

[root@test network-scripts]# service network restart
Arrêt de l'interface eth0 : État du périphérique : 3 (déconnecté) [ OK ]
Arrêt de l'interface loopback : [ OK ]
Activation de l'interface loopback : [ OK ]
Activation de l'interface eth0 : État de connexion active : activation
État de chemin actif : /org/freedesktop/NetworkManager/ActiveConnection/3
état : activé
Connexion activée [ OK ]

[root@test network-scripts]# cat /var/log/message
Mar 13 14:32:13 test NetworkManager[8299]: <info> (eth0): device state change: 8 -> 3 (reason 39)
Mar 13 14:32:13 test NetworkManager[8299]: <info> (eth0): deactivating device (reason: 39).
Mar 13 14:32:13 test avahi-daemon[8311]: Withdrawing address record for x.x.x.x on eth0.
Mar 13 14:32:13 test avahi-daemon[8311]: Leaving mDNS multicast group on interface eth0.IPv4 with address x.x.x.x.
Mar 13 14:32:13 test avahi-daemon[8311]: Interface eth0.IPv4 no longer relevant for mDNS.
Mar 13 14:32:14 test kernel: lo: Disabled Privacy Extensions Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) starting connection 'System eth0' Mar 13 14:32:14 test NetworkManager[8299]: <info> (eth0): device state change: 3 -> 4 (reason 0) Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... Mar 13 14:32:14 test NetworkManager[8299]: <info> (eth0): device state change: 4 -> 5 (reason 0) Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Mar 13 14:32:14 test NetworkManager[8299]: <info> (eth0): device state change: 5 -> 7 (reason 0) Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP4 Configure Get) scheduled... Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Beginning IP6 addrconf. Mar 13 14:32:14 test avahi-daemon[8311]: Withdrawing address record for fe80::1ec1:deff:feb8:a2fd on eth0. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP4 Configure Get) started... 
Mar 13 14:32:14 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP4 Configure Get) complete. Mar 13 14:32:15 test avahi-daemon[8311]: Registering new address record for fe80::1ec1:deff:feb8:a2fd on eth0.*. Mar 13 14:32:35 test NetworkManager[8299]: <info> (eth0): IP6 addrconf timed out or failed. Mar 13 14:32:35 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP6 Configure Timeout) scheduled... Mar 13 14:32:35 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP6 Configure Timeout) started... Mar 13 14:32:35 test NetworkManager[8299]: <info> Activation (eth0) Stage 5 of 5 (IP Configure Commit) scheduled... Mar 13 14:32:35 test NetworkManager[8299]: <info> Activation (eth0) Stage 4 of 5 (IP6 Configure Timeout) complete. Mar 13 14:32:35 test NetworkManager[8299]: <info> Activation (eth0) Stage 5 of 5 (IP Configure Commit) started... Mar 13 14:32:35 test avahi-daemon[8311]: Joining mDNS multicast group on interface eth0.IPv4 with address x.x.x.x. Mar 13 14:32:35 test avahi-daemon[8311]: New relevant interface eth0.IPv4 for mDNS. Mar 13 14:32:35 test avahi-daemon[8311]: Registering new address record for x.x.x.x on eth0.IPv4. Mar 13 14:32:36 test NetworkManager[8299]: <info> (eth0): device state change: 7 -> 8 (reason 0) Mar 13 14:32:36 test NetworkManager[8299]: <info> Policy set 'System eth0' (eth0) as default for IPv4 routing and DNS. Mar 13 14:32:36 test NetworkManager[8299]: <info> Activation (eth0) successful, device activated. Mar 13 14:32:36 test NetworkManager[8299]: <info> Activation (eth0) Stage 5 of 5 (IP Configure Commit) complete. IPv6 configuration is not working ... 
[root@test network-scripts]# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 1c:c1:de:b8:a3:fd brd ff:ff:ff:ff:ff:ff inet x.x.x.x/29 brd x.x.x.x scope global eth0 inet6 fe80::1ec1:deff:feb8:a3fd/64 scope link valid_lft forever preferred_lft forever IPv6 addresses of the resolvers are not even in the resolv.conf ! Did I miss a configuration step ? I thought that the IPv6 configuration would be a formality .. [root@test network-scripts]# lsb_release -a LSB Version: :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Distributor ID: CentOS Description: CentOS release 6.2 (Final) Release: 6.2 Codename: Final
Network Manager is trying to override your static configuration settings. As root or sudo user, run: service NetworkManager stop If you don't have service, try: /etc/init.d/NetworkManager stop Also, you can set the static interfaces to not be managed by the NetworkManager, which is what I did in my CentOS configs merely by adding the line NM_CONTROLLED=no to your static config files. Your static configuration files don't have that line, meaning the NetworkManager will try to control those interfaces instead of ignoring them. See here for reference on disabling and/or uninstalling NM.
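For reference, a minimal static ifcfg file with that line added might look like the sketch below (the device name and addresses are placeholders, not taken from the question):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.248
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
NM_CONTROLLED=no
```

With NM_CONTROLLED=no present, the classic network service configures the interface and NetworkManager leaves it alone.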
Static IPv4 & IPv6 configuration on CentOS 6.2
1,400,384,865,000
I have to set up a FTP server on my machine. I have installed vsftpd using the command: sudo apt-get install vsftpd I then edited the configuration file vsftpd.conf in the location /etc. The file contains: #Set the server to run in standalone mode listen=YES #Enable anonymous access local_enable=NO anonymous_enable=YES #Disable write access write_enable=NO #Set root directory for anon connections anon_root=/var/ftp #Limit retrieval rate anon_max_rate=2048000 #Enable logging user login and file transfers. /var/log/vsftpd.log xferlog_enable=YES #Set interface and port listen_address=192.120.43.250 listen_port=21 The IP address 192.120.43.250 is the eth0 for my server. When I run the command sudo vsftpd /etc/vsftpd.conf I get the error: 500 OOPS: could not bind listening IPv4 socket To check to see what was running on port 21, I ran the command: sudo netstat -tulpn And saw that vsftpd process id was 29383 so I issued the command: sudo killserver 29383 And checked again. The vsftpd was still there, but with a different PID. Running the command: sudo killall vsftpd and sudo killall -9 vsftpd Does the same thing. I have already tried reinstalling. Anyone know what is going on and how to fix it?
Remember to comment out listen=YES in your vsftpd.conf file so that you don't run vsftpd in standalone mode. It fixed the problem in my case.
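If you prefer to do that non-interactively, a small sed invocation works. The snippet below demonstrates the substitution on a scratch file so it is self-contained; on a real system you would point the same sed command at /etc/vsftpd.conf (as root) and keep the .bak backup it creates:

```shell
# Demonstrate commenting out listen=YES on a scratch copy.
conf=$(mktemp)
printf 'listen=YES\nanonymous_enable=YES\n' > "$conf"

# The actual edit: prefix the standalone-mode line with '#'.
# (-i.bak edits in place and keeps the original as $conf.bak)
sed -i.bak 's/^listen=YES/#listen=YES/' "$conf"

cat "$conf"
# -> #listen=YES
# -> anonymous_enable=YES
```

Against the real file that would be: sed -i.bak 's/^listen=YES/#listen=YES/' /etc/vsftpd.conf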
Installing vsftpd - 500 OOPS: could not bind listening IPv4 socket?
1,400,384,865,000
On a server wget-1.16 takes 8 minutes to complete: $ wget http://http.debian.net/debian/dists/stable/Release -O - --2017-06-12 23:44:40-- http://http.debian.net/debian/dists/stable/Release [4693/5569] Resolving http.debian.net (http.debian.net)... 2001:4f8:1:c::15, 2605:bc80:3010:b00:0:deb:166:202, 2001:610:1908:b000::148:14, ... Connecting to http.debian.net (http.debian.net)|2001:4f8:1:c::15|:80... failed: Connection timed out. Connecting to http.debian.net (http.debian.net)|2605:bc80:3010:b00:0:deb:166:202|:80... failed: Connection timed out. Connecting to http.debian.net (http.debian.net)|2001:610:1908:b000::148:14|:80... failed: Connection timed out. Connecting to http.debian.net (http.debian.net)|140.211.166.202|:80... connected. HTTP request sent, awaiting response... 302 Found Location: http://cdn-fastly.deb.debian.org/debian/dists/stable/Release [following] --2017-06-12 23:51:02-- http://cdn-fastly.deb.debian.org/debian/dists/stable/Release Resolving cdn-fastly.deb.debian.org (cdn-fastly.deb.debian.org)... 2a04:4e42:3::204, 151.101.12.204 Connecting to cdn-fastly.deb.debian.org (cdn-fastly.deb.debian.org)|2a04:4e42:3::204|:80... failed: Connection timed out. Connecting to cdn-fastly.deb.debian.org (cdn-fastly.deb.debian.org)|151.101.12.204|:80... connected. ... Because it is trying to connect using IPv6 address. curl-7.38.0 on the same machine responds instantly. Because it uses IPv4 address. Do they resolve domain differently? How do they do it? How can I make wget use IPv4 address? UPD $ ip a ... 
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether d8:cb:8a:37:cf:57 brd ff:ff:ff:ff:ff:ff inet 188.40.99.4/26 brd 188.40.99.63 scope global eth0 valid_lft forever preferred_lft forever inet6 2a01:4f8:100:738b::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::dacb:8aff:fe37:cf57/64 scope link valid_lft forever preferred_lft forever $ ip route default via 188.40.99.1 dev eth0 10.0.0.0/24 dev br0 proto kernel scope link src 10.0.0.1 188.40.99.0/26 via 188.40.99.1 dev eth0 188.40.99.0/26 dev eth0 proto kernel scope link src 188.40.99.4
curl and wget do not use different mechanisms for resolving domains (they're using getaddrinfo()). However, curl implements a fast fallback algorithm to improve the user experience in cases where IPv6 connectivity is less than good. This algorithm is described in detail in RFC 6555 (Happy Eyeballs): https://www.rfc-editor.org/rfc/rfc6555 According to curl/lib/connect.h this timeout is set to 200ms: https://github.com/curl/curl/blob/a8e523f086c12e7bb9acb18d1ac84d92dde0605b/lib/connect.h#L43 Both curl and wget support -4/-6 options which will force the connection to either IPv4 or IPv6 respectively.
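As a workaround on the wget side you can force the address family (wget -4 …) or cap the per-attempt wait with wget's --connect-timeout option. The bash function below sketches the same sequential-with-timeout idea; note that real Happy Eyeballs races the attempts in parallel, and the bash-only /dev/tcp redirection is used here purely for illustration:

```shell
#!/bin/bash
# Try each candidate address in turn, giving each at most 1 second,
# and print the first one that accepts a TCP connection.
first_reachable() {
    local port=$1; shift
    local addr
    for addr in "$@"; do
        # /dev/tcp/<host>/<port> is a bash builtin pseudo-device;
        # `timeout` kills attempts that hang on an unreachable address.
        if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
            echo "$addr"
            return 0
        fi
    done
    return 1
}
```

For example, first_reachable 80 2001:4f8:1:c::15 140.211.166.202 would give up on the timing-out IPv6 address after one second instead of waiting out wget's multi-minute default.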
wget uses ipv6 address and takes too long to complete
1,400,384,865,000
When I run ip to get the ip address, I'm getting $ ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000 link/ether 6c:88:14:ba:cb:cc brd ff:ff:ff:ff:ff:ff None of that is an ipv4 address, however ifconfig does show it, $ sudo /sbin/ifconfig lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 1102801 bytes 74417671 (70.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1102801 bytes 74417671 (70.9 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.7.7.57 netmask 255.255.252.0 broadcast 10.7.7.255 inet6 fe80::440:3794:6794:8b1b prefixlen 64 scopeid 0x20<link> inet6 2620:0:28a2:4010:2:2:8c75:f8a1 prefixlen 128 scopeid 0x0<global> ether 6c:88:14:ba:cb:cc txqueuelen 1000 (Ethernet) RX packets 32743430 bytes 48351612590 (45.0 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 14856403 bytes 1590947780 (1.4 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 How can I get the ipv4 information without falling to my deprecated (and trusty) ifconfig?
Apparently ip broke up the MAC address (now in the ip link (device) interface), and the network ip address. The command ip address is what shows the network addresses, 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 6c:88:14:ba:cb:cc brd ff:ff:ff:ff:ff:ff inet 10.7.7.57/22 brd 10.7.7.255 scope global dynamic noprefixroute wlp3s0 valid_lft 1203509sec preferred_lft 1203509sec inet6 2620:0:28a2:4010:2:2:8c75:f8a1/128 scope global dynamic noprefixroute valid_lft 1203512sec preferred_lft 598712sec inet6 fe80::440:3794:6794:8b1b/64 scope link noprefixroute valid_lft forever preferred_lft forever This can be seen in a more compact and user friendly (-brief) format with $ ip -4 -br addr show lo UNKNOWN 127.0.0.1/8 wlp3s0 UP 10.7.7.57/22 Or, you can see it one line (-o), $ ip -o address 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever 3: wlp3s0 inet 10.7.7.57/22 brd 10.7.7.255 scope global dynamic noprefixroute wlp3s0\ valid_lft 1202464sec preferred_lft 1202464sec 3: wlp3s0 inet6 2620:0:28a2:4010:2:2:8c75:f8a1/128 scope global dynamic noprefixroute \ valid_lft 1202466sec preferred_lft 597666sec 3: wlp3s0 inet6 fe80::440:3794:6794:8b1b/64 scope link noprefixroute \ valid_lft forever preferred_lft forever
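If you only want the addresses for scripting, the one-line (-o) output parses cleanly with awk: field 2 is the interface and field 4 is the CIDR address. The here-document below stands in for live ip -4 -o addr output so the example is reproducible:

```shell
# Print "interface address" pairs, stripping the /prefix length from $4.
awk '{ sub(/\/.*/, "", $4); print $2, $4 }' <<'EOF'
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
3: wlp3s0    inet 10.7.7.57/22 brd 10.7.7.255 scope global dynamic noprefixroute wlp3s0\       valid_lft 1202464sec preferred_lft 1202464sec
EOF
# -> lo 127.0.0.1
# -> wlp3s0 10.7.7.57
```

On a live system, pipe the real command through the same awk program: ip -4 -o addr | awk '{ sub(/\/.*/, "", $4); print $2, $4 }'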
How can I get the ipv4 address from `ip link` like I used to see with ifconfig?
1,400,384,865,000
I'm currently stumped by a strange problem… I have a dual stack host to which I want to SSH. If I connect via IPv6 everything works like expected datenwolf@foo ~/ > ssh -6 bar.example.com Password: datenwolf@bar ~/ > However when doing the same via IPv4 it fails datenwolf@foo ~/ > ssh -4 bar.example.com Password: Permission denied (publickey,keyboard-interactive). datenwolf@foo ~/ > Excerpt from /var/log/sshd for the failing login Apr 24 16:34:03 [sshd] SSH: Server;Ltype: Version;Remote: www.xxx.yyy.zzz-38427;Protocol: 2.0;Client: OpenSSH_5.9p1 Debian-5ubuntu1 Apr 24 16:34:03 [sshd] SSH: Server;Ltype: Kex;Remote: www.xxx.yyy.zzz-38427;Enc: aes128-ctr;MAC: hmac-md5;Comp: none [preauth] Apr 24 16:34:04 [sshd] SSH: Server;Ltype: Authname;Remote: www.xxx.yyy.zzz-38427;Name: wolfgangd [preauth] Apr 24 16:34:07 [sshd] pam_access(sshd:account): access denied for user `datenwolf' from `foo.example.com' Apr 24 16:34:07 [sshd] error: PAM: User account has expired for datenwolf from foo.example.com Apr 24 16:34:07 [sshd] Connection closed by www.xxx.yyy.zzz [preauth] Of course the account did not expire and I can perfectly log in via IPv6. Using Google I found various reports on the log messages but none of them matched my problem (in the sense that applying the proposed solutions didn't work for my case). I'm pretty much out of ideas here. 
Update /var/log/sshd for successfull IPv6 login on the very same target machine: Apr 24 16:56:42 [sshd] SSH: Server;Ltype: Version;Remote: 2001:x:x:x:x:x:x:x-46025;Protocol: 2.0;Client: OpenSSH_5.9p1 Debian-5ubuntu1 Apr 24 16:56:42 [sshd] SSH: Server;Ltype: Kex;Remote: 2001:x:x:x:x:x:x:x-46025;Enc: aes128-ctr;MAC: hmac-md5;Comp: none [preauth] Apr 24 16:56:43 [sshd] SSH: Server;Ltype: Authname;Remote: 2001:x:x:x:x:x:x:x-46025;Name: datenwolf [preauth] Apr 24 16:56:47 [sshd] Accepted keyboard-interactive/pam for datenwolf from 2001:x:x:x:x:x:x:x port 46025 ssh2 Apr 24 16:56:47 [sshd] pam_unix(sshd:session): session opened for user datenwolf by (uid=0) I tried logging in from various machines all the same result: IPv6 works, IPv4 doesn't. Update 2 For reference this are the used IP tables. Note that these are battle tested, i.e. they are in use for several years now and were not changed recently. Remote login via IPv4 did work with them. IPv4 iptables: Chain INPUT (policy ACCEPT 2144 packets, 336K bytes) pkts bytes target prot opt in out source destination 132 20762 fail2ban-SSH tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 12M 14G ACCEPT all -- ppp0 * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED 3111 95984 ACCEPT icmp -- ppp0 * 0.0.0.0/0 0.0.0.0/0 18692 1123K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 2 112 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1194 4633 288K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpts:6880:6899 2826 154K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpts:6880:6899 4 160 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:123 44165 3069K REJECT all -- ppp0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain FORWARD (policy ACCEPT 48032 packets, 44M bytes) pkts bytes target prot opt in out source destination 0 0 REJECT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:631 reject-with icmp-port-unreachable 0 0 REJECT udp -- * * 0.0.0.0/0 
0.0.0.0/0 udp dpt:515 reject-with icmp-port-unreachable 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:631 reject-with icmp-port-unreachable 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:515 reject-with icmp-port-unreachable 0 0 REJECT all -- ppp0 ppp0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable 133K 8347K TCPMSS tcp -- * ppp0 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain OUTPUT (policy ACCEPT 14378 packets, 2172K bytes) pkts bytes target prot opt in out source destination Chain fail2ban-SSH (1 references) pkts bytes target prot opt in out source destination 132 20762 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 IPv6 ip6tables Chain INPUT (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 DROP all * * ::/0 ::/0 rt type:0 segsleft:0 484K 86M ACCEPT icmpv6 * * ::/0 ::/0 105K 7943K ACCEPT tcp * * ::/0 ::/0 tcp dpt:22 0 0 ACCEPT udp * * ::/0 ::/0 udp dpt:1194 0 0 ACCEPT tcp * * ::/0 ::/0 tcp dpt:1194 0 0 ACCEPT udp * * ::/0 ::/0 udp dpts:6880:6899 0 0 ACCEPT tcp * * ::/0 ::/0 tcp dpts:6880:6899 0 0 ACCEPT tcp * * ::/0 ::/0 tcp dpt:123 0 0 ACCEPT udp * * ::/0 ::/0 udp dpt:123 0 0 ACCEPT all ppp0,sixxs * ::/0 ::/0 ctstate RELATED,ESTABLISHED 4164K 466M ACCEPT all !ppp0,sixxs * ::/0 ::/0 0 0 REJECT all * * ::/0 ::/0 reject-with icmp6-port-unreachable Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 DROP all * * ::/0 ::/0 rt type:0 segsleft:0 2864 311K ACCEPT icmpv6 * * ::/0 ::/0 0 0 REJECT tcp * * ::/0 ::/0 multiport ports 631 reject-with icmp6-port-unreachable 0 0 REJECT udp * * ::/0 ::/0 multiport ports 631 reject-with icmp6-port-unreachable 0 0 REJECT tcp * * ::/0 ::/0 multiport ports 515 reject-with icmp6-port-unreachable 0 0 REJECT udp * * ::/0 ::/0 multiport ports 515 reject-with icmp6-port-unreachable 0 0 REJECT all ppp0,sixxs ppp0,sixxs ::/0 ::/0 reject-with icmp6-port-unreachable 0 0 accept_with_pmtu_clamp tcp ppp0,sixxs * !2001:x:x::/48 
2001:x:x::/48 tcp dpt:22 18M 14G accept_with_pmtu_clamp all * * ::/0 ::/0 ctstate RELATED,ESTABLISHED 65503 5289K accept_with_pmtu_clamp all !ppp0,sixxs * ::/0 ::/0 0 0 REJECT all * * ::/0 ::/0 reject-with icmp6-port-unreachable Chain OUTPUT (policy ACCEPT 8099K packets, 11G bytes) pkts bytes target prot opt in out source destination 0 0 DROP all * * ::/0 ::/0 rt type:0 segsleft:0 Chain accept_with_pmtu_clamp (3 references) pkts bytes target prot opt in out source destination 0 0 TCPMSS tcp * ppp0,sixxs ::/0 ::/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU 18M 14G ACCEPT all * * ::/0 ::/0 Update 3 This is /etc/sshd/sshd_config of the system I try connect to, stripped of all comments: Port 22 ListenAddress 0.0.0.0 ListenAddress :: PubkeyAuthentication yes PasswordAuthentication no UsePAM yes AllowAgentForwarding yes AllowTcpForwarding yes X11Forwarding yes X11DisplayOffset 10 X11UseLocalhost yes PrintMotd no PrintLastLog no UseDNS yes Subsystem sftp /usr/lib64/misc/sftp-server
After things getting stranger and stranger (see the thread of comments in my question) I finally figured it out. First things first: The authentication process did fail in pam_access.so however not due to some misconfiguration in /etc/security/access.conf as it was suggested. To understand why, we must look at the setup of this box in particular: It acts as a router toward IPv4 which goes natively over the PPP link and IPv6 which is over a 6in4 tunnel. Also this box acts as a DNS recursive resolver, and here it is getting interesting. I did configure the resolver in a way that IPv4 reverse lookups are resolved recursively starting with the IPv4 root servers and IPv6 reverse lookups start with the IPv6 root servers. This setup did work when I first installed it. Now my ISP enters the pictures and people who don't understand, how DNS amplification attacks work. To make a long story short: I know for sure that my ISP messes with incoming DNS packets at random, i.e. some things must be resolved through their own resolvers for some time now, while resolving other DNS addresses recursively on your own works – the official reason is to mitigate DNS amplification attacks, but they're doing it wrong then^1. Since I didn't want to largely change my setup I just threw my ISP's DNS resolvers at the end of my local DNS resolver as nonrecursive forward, so if the recursive resolving attempt times out it tries my ISP's resolvers. This works so far. But when I did configure this I made a small mistake: I entered the ISP's DNS resolvers to work only from hosts within my local scope, i.e. 192.168.0.0/16 but forgot about localhost, aka my router, which is the host I tried to SSH into, for which the resolver would not take the ISP's resolvers into account. 
pam_access.so attempts a reverse lookup on the peer's address, and this closes the circle: because the IPv6 reverse lookup would go to the DNS IPv6 root servers, those packets would pass through the 6in4 tunnel without my ISP messing with them, and get a response. But the IPv4 reverse lookup would not be forwarded to the ISP's resolvers by my own resolver, which would therefore receive no response and ultimately report NXHOST (or run into a timeout). Either way pam_access.so won't see something it likes and just says "you shall not pass". After I fixed that resolver configuration everything now works like a charm again. But I really have to step onto my ISP's toes now… As to how I did resolve it? Well, by yanking up logging verbosity and intensely studying /var/log/everything to see in which order things unfolded. When I saw my resolver logging the reverse lookup attempts I knew what was going on. 1: from a DNS amplification mitigation point of view this is complete nonsense, because I did test and outgoing DNS packets get through just fine – however those are the packets they should filter. In fact every end-customer ISP should drop all UDP packets whose sender address doesn't match one of their customers'
SSH login via IPv6 successfull while using IPv4 to the same host yields "Permission denied"
1,400,384,865,000
I have two machines connected in link local IPv4 over a CAT6 cable. Is there a way from host1 that I can determine host2's IPv4 address? I'm on an Debian-derivative running kernel 3.2.0-34-generic.
Yes, already posted in the comments as a verified solution, but posting as an answer anyway. Try using mDNS. One should install avahi-daemon on the machine you want to resolve (e.g. host2), and at least some Avahi client libraries appropriate for your client system (e.g. host1). These client libraries are usually installed by default on most desktop distributions. Provided your Linux distribution then automatically installs hooks to actually use the Avahi client (mDNS) for lookups, you should then be able to resolve the name host2.local on the client machine.
The Avahi set of tools is an mDNS implementation. Summarized, it provides name services via multicast, for both regular host resolution and service discovery. Mac OS X users might recognize this as "Bonjour", and this is how, for example, iTunes applications find each other (service discovery). However, plain address lookup should work just out of the box.
Avahi is triggered in host name lookups because of the settings in /etc/nsswitch.conf (for me at least on Debian/Ubuntu), like this:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
The .local suffix is exported by the Avahi daemon and is configurable. host2 is just the base hostname of the machine.
Detect other machine's address in link local?
1,400,384,865,000
When it comes to packet filtering/management I never actually know what is going on inside the kernel. There are so many different tools that act on the packets, either from userspace (modifying kernel-space subsystems) or directly in kernel space. Is there any place where each tool documents its interaction with the other tools, or where it acts? I feel like there should be a diagram somewhere specifying what is going on, for people who aren't technical enough to go and read the kernel code. So here's my example: a packet is received on one of my network interfaces and I have:
UFW
iptables
IPv4 subsystem (routing)
IPVS
eBPF
Ok, so I know that UFW is a frontend for iptables, and iptables is a frontend for Netfilter. So now we're in kernel space and our tools are Netfilter, IPVS, IPv4 and eBPF. Again, the interactions between Netfilter and the IPv4 subsystem are easy to find, since these are very old (not in a bad way) subsystems, so a lack of docs would be very strange. This diagram is an overview of the interaction: But what about IPVS and eBPF? What's the actual order in which kernel subsystems act upon the packets when these two are in the kernel? I always find amazing people who try to go into the guts and help others understand, for example this description of the interaction between LVS and Netfilter. But shouldn't this be documented in a more official fashion? I'm not looking for an explanation here as to how these subsystems interact; I know I could find it myself by searching. My question is more general: why is there no official documentation that actually tries to explain what is going on inside these kernel subsystems? Is it documented somewhere that I just don't know of? Is there any reason not to try to explain these tools? I apologize if I'm not making any sense. I just started learning about these things.
Most folks I know who are working with the Linux network stack use the below diagram (which you can find on Wikipedia under CC BY-SA 3.0 license). As you can see, in addition to the netfilter hooks, it also documents XFRM processing points and some eBPF hook points. tc eBPF programs would be executed as part of the ingress and egress qdiscs. BPF networking hook points other than XDP and tc (e.g., at the socket level) are not documented here. As far as I know, IPVS is built on top of netfilter so it wouldn't directly appear here.
How do packets flow through the kernel
1,400,384,865,000
Suppose I want to use : $ ip ntable show dev eth0 inet arp_cache dev eth0 refcnt 4 reachable 20744 base_reachable 30000 retrans 1000 gc_stale 60000 delay_probe 5000 queue 31 app_probes 0 ucast_probes 3 mcast_probes 3 anycast_delay 1000 proxy_delay 800 proxy_queue 64 locktime 1000 inet6 ndisc_cache dev eth0 refcnt 1 reachable 40768 base_reachable 30000 retrans 1000 gc_stale 60000 delay_probe 5000 queue 31 app_probes 0 ucast_probes 3 mcast_probes 3 anycast_delay 1000 proxy_delay 800 proxy_queue 64 locktime 0 What's ndisc_cache?
In IPv4 networks the neighbour tables are populated using the Address Resolution Protocol. Those tables are commonly known as "ARP tables". They map IP addresses (network-layer addresses) to MAC addresses (link-layer addresses) and vice versa. You can list the entries of this table with the command arp -a or ip neigh show. On the other hand, in the IPv6 internet protocol suite, the functionality of the ARP protocol is provided by a more advanced protocol named the Neighbour Discovery Protocol. The ip ntable show command provides information about the neighbour tables of the given network device, therefore:
arp_cache stands for an ARP table (ARP Cache) of an IPv4 network.
ndisc_cache stands for an NDP table (Neighbor Cache) of an IPv6 network.
What's ndisc_cache?
1,400,384,865,000
I am looking at the output of lsof -i and I am getting confused! For example, the following connection between my java process and the database shows as IPv6: [me ~] % lsof -P -n -i :2315 -a -p xxxx COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java xxxx me 93u IPv6 2499087197 0t0 TCP 192.168.0.1:16712->192.168.0.2:2315 (ESTABLISHED) So the output type is IPv6 but it clearly shows an IPv4 address in the NAME column. Furthermore, the connection was configured with an IPv4 address! (In this example, 192.168.0.2) Thanks very much for any insight!
In Linux, IPv6 sockets may be both IPv4 and IPv6 at the same time. An IPv6 socket may also accept packets from an IPv4-mapped IPv6 address. This feature is controlled by the IPV6_V6ONLY socket option, whose default is controlled by the net.ipv6.bindv6only sysctl (/proc/sys/net/ipv6/bindv6only). Its default is 0 (i.e. it's off) on most Linux distros. This could be easily reproduced with: [prompt] nc -6 -l 9999 & nc -4 localhost 9999 & [4] 10892 [5] 10893 [prompt] lsof -P -n -i :9999 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME nc 10892 x 3u IPv6 297229 0t0 TCP *:9999 (LISTEN) nc 10892 x 4u IPv6 297230 0t0 TCP 127.0.0.1:9999->127.0.0.1:41472 (ESTABLISHED) nc 10893 x 3u IPv4 296209 0t0 TCP 127.0.0.1:41472->127.0.0.1:9999 (ESTABLISHED) [prompt] kill %4 %5 The client socket is IPv4, and the server socket is IPv6, and they're connected.
Why does lsof indicate my IPv4 socket is IPv6?
1,400,384,865,000
I'm going to explain my question with an example. I have two servers, A and B, both runs Debian 7.8, both have dual-stack connection to the Internet (I don't know if it matters but they even have the same amount of IPv6 addresses) and they both have the same version of whois installed (without any config file). Now, when I whois google.fr (I chose whois.nic.fr because it shows the IP you're connecting from) from server A, I get this response header: %% %% This is the AFNIC Whois server. %% %% complete date format : DD/MM/YYYY %% short date format : DD/MM %% version : FRNIC-2.5 %% %% Rights restricted by copyright. %% See http://www.afnic.fr/afnic/web/mentions-legales-whois_en %% %% Use '-h' option to obtain more information about this service. %% %% [xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx REQUEST] >> -V Md5.1 google.fr %% %% RL Net [##########] - RL IP [#########.] %% As you can see, whois used a IPv6 address to connect to whois.nic.fr. But, when I do the same at server B, I get this response header: %% %% This is the AFNIC Whois server. %% %% complete date format : DD/MM/YYYY %% short date format : DD/MM %% version : FRNIC-2.5 %% %% Rights restricted by copyright. %% See http://www.afnic.fr/afnic/web/mentions-legales-whois_en %% %% Use '-h' option to obtain more information about this service. %% %% [xx.x.xxx.xxx REQUEST] >> -V Md5.1 google.fr %% %% RL Net [##########] - RL IP [#########.] %% As you can see now, whois used IPv4 at server B. Why doesn't server B use IPv6 when connecting to the whois server? They surely both have connection with IPv6 but one of them chooses to use IPv6 where one does not. Is there any reason for the OS to prioritize connection types?
It turns out to be about the configuration of getaddrinfo(), whose address-sorting behaviour is controlled by /etc/gai.conf. More information about how it can be solved: https://askubuntu.com/questions/32298/prefer-a-ipv4-dns-lookups-before-aaaaipv6-lookups
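Concretely, the fix the linked answer describes is to raise the precedence of the IPv4-mapped range in gai.conf, which makes getaddrinfo() sort IPv4 destinations first (a sketch — check the commented defaults shipped in your distribution's gai.conf before editing, since a precedence line replaces the default table):

```
# /etc/gai.conf
# Prefer IPv4 destinations over IPv6 when both are available:
precedence ::ffff:0:0/96  100
```

A difference in this file (or in its presence) between the two servers would explain why one prefers IPv6 and the other IPv4.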
How does Debian select or prioritize IPv4 and IPv6 connections?
1,400,384,865,000
If I am listening on :::80, is it listening on all ipv6 or all ipv6+ipv4? This is my netstat -tln: tcp 0 0 :::8080 :::*
A listening socket that is bound to ::, i.e. the IPv6 any address (IN6ADDR_ANY), may or may not also listen for connections using IPv4. This depends on several things:
Some operating systems are what is known as dual stack, and on those operating systems it depends on whether the IPV6_V6ONLY socket option is set on the listening socket (by the program that created the socket). Linux-based operating systems and FreeBSD are examples of such operating systems. The default behavior if the option is not explicitly set by a program is OS dependent. On Linux-based operating systems, for example, you can change this default by writing 0 or 1 to /proc/sys/net/ipv6/bindv6only.
Some other operating systems are not dual stack, and on those one cannot ever listen for both IPv6 and IPv4 with a single socket. OpenBSD is one such operating system.
On some operating systems, the output of netstat will tell you whether the socket is dual-stack. FreeBSD's netstat reports dual-stack sockets as tcp46 and udp46 in the first column of the output, for example.
Thanks for your answer @Johan Myreen. I want to improve this answer with examples. I am testing the IPV6_V6ONLY behavior with both values.
1. cat /proc/sys/net/ipv6/bindv6only
0
nc -6 -l 80 # server started
# netstat
tcp6 0 0 :::80 :::* LISTEN
# nc client
nc localhost 80
test
# server response
nc -6 -l 80
test
# from ipv6 now
nc ::1 80
test ipv6
# server response
nc -6 -l 80
test ipv6
2. cat /proc/sys/net/ipv6/bindv6only
1
# server started
nc -6 -l 80
# connect to ipv4
nc localhost 80
nc: connect to localhost port 80 (tcp) failed: Connection refused
# connect to ipv6
nc ::1 80
test ipv6
# server response
nc -6 -l 80
test ipv6
From the above results we can see that the value of /proc/sys/net/ipv6/bindv6only decides whether the socket is IPv6-only or IPv6+IPv4.
does :::80 in netstat output means only ipv6 or ipv6+ipv4?
1,400,384,865,000
I'm working on a custom Ubuntu 20.04 server and am trying to get a dhcp IP for it. The server has so far run on a static IP and when I run dhcp or dhclient it says dhcpd: command not found and dhclient: command not found. The /sbin has no dhcpd or dhclient directories but there is a /etc/dhcp folder which contains dhclient-enter-hooks.d dhclient-exit-hooks.d with scripts in it to which I assume is start/stop dhcp. What I want to know is if it's possible that dhcp or dhclient is not installed in this machine or if I'm missing the installation path and if it isn't which should be the best one to install to get a dhcp IP from.
If you receive command not found when trying to run dhcp or dhclient, it's possible that these are not installed. To install the DHCP client utilities run:
sudo apt install isc-dhcp-client
This will install the isc-dhcp-client package, which includes dhclient. After the installation, you should be able to use the dhclient command to receive an IP address from a DHCP server.
Make sure that /etc/network/interfaces is configured to use DHCP:
auto eth0
iface eth0 inet dhcp
Replace eth0 with the name of the network interface on your system. Restart networking with:
systemctl restart networking
You can run the following command to request an IP address:
sudo dhclient
The dhcpd command you mentioned is for the DHCP server, not the client. If you need to configure and run a DHCP server, you will need to install and set up the isc-dhcp-server package instead.
Configure the network with netplan: How to Configure Networking in Ubuntu 20.04 with NetPlan
The netplan configuration file is located in the /etc/netplan/ folder and has a .yaml extension. Edit the file; you should see a YAML structure defining the network interfaces and their configurations. Set the dhcp4 property to true:
nano /etc/netplan/YOUR_NETPLAN_CONFIG_FILE.yaml
DHCP example:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
If you have multiple network interfaces, you can add similar sections for each interface. Apply the changes:
netplan apply
Request an IP address:
sudo dhclient
Ubuntu source: Ubuntu Network configuration
dhcpd or dhclient not found
1,400,384,865,000
I have an embedded system built using buildroot. I have had a number of network issues, one of which is that my machine cannot see its gateway despite it being on the same subnet. I have tried using wireshark to analyse what is going on without success so as a last resort, I am considering trying to turn off support for IPv6 as I do not need it (my device doesn't need DNS or anything similar, simply needs to be able to communicate with other local machines on its subnet). I have read that I can turn off IPv6 by editing /etc/modprobe.conf but this file does not exist on my setup. Is there anything else I can do to disable IPv6 or is the only option to build the kernel from scratch without IPv6 support?
I agree with Ulrich that this doesn't sound like an IPv6 problem. However, here's how to disable IPv6. In /etc/sysctl.conf set the following options:

    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.all.accept_ra = 0
    net.ipv6.conf.all.disable_ipv6 = 1

If you don't have /etc/sysctl.conf, just create it and add those lines, then reboot.

Alternatively, each of these has an interface in /proc that you can flip (and/or create a script to do this at boot time):

    echo 0 > /proc/sys/net/ipv6/conf/all/autoconf
    echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra
    echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
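After rebooting, the effect can be confirmed from /proc; a minimal sketch (the path only exists when the kernel was built with IPv6 support, hence the fallback branch):

```shell
# Report whether IPv6 is disabled system-wide on this kernel.
if [ "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null)" = "1" ]; then
    echo "ipv6 disabled"
else
    echo "ipv6 enabled or not compiled in"
fi
```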
How can I disable IPv6 in custom built embedded setup
1,400,384,865,000
The task includes an option with an undefined variable. The error was: ansible_all_ipv4_addresses is undefined.

Why would I be getting this error if I am connecting over IPv4? I'm trying to dump it like this:

    "{{ ansible_all_ipv4_addresses[0] }}"

And I can verify that it is valid:

    $ ansible -u centos -m setup 10.1.38.15 | grep ansible_all_ipv4_addresses -A2 -B1
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "172.16.0.13"
            ],

But then, very similar to the above:

    $ ansible -u centos -m debug -a "msg='{{ansible_all_ipv4_addresses}}'" 10.1.38.15
    10.1.38.15 | FAILED! => {
        "msg": "The task includes an option with an undefined variable. The error was: 'ansible_all_ipv4_addresses' is undefined"
    }
For me the problem was that my playbook had

    gather_facts: false

set at the top.

As to why the use of facts does not work with the debug module, see this question: How can the debug module get access to facts on the command line?
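For reference, a minimal playbook shape that makes the fact available again (the host group and task name are placeholders):

```yaml
- hosts: all
  gather_facts: true            # the default; must not be set to false
  tasks:
    - name: Print the first IPv4 address gathered from the host
      debug:
        msg: "{{ ansible_all_ipv4_addresses[0] }}"
```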
Ansible fact is undefined: `ansible_all_ipv4_addresses` is undefined
1,400,384,865,000
OS: GNU/Linux Debian 9.2 64-bit

I disabled IPv6 on one of my servers. And now I'm getting this in mail:

    exim paniclog ...
    IPv6 socket creation failed: Address family not supported by protocol

How do I get rid of it?
First off, one needs to disable IPv6 in exim4. In the following file:

    /etc/exim4/update-exim4.conf.conf

make sure this line is there; if not, add or change it:

    disable_ipv6='true'

But I tried only this solution and the mail kept coming, so digging further...

In the same file, make sure this line is set to true:

    dc_minimaldns='true'

Now edit this file:

    /etc/hosts

Let's suppose this line defines your server name:

    127.0.1.1 server-name

Change it as follows:

    127.0.1.1 server-name.localhost server-name

Now verify that this command:

    hostname --fqdn

returns:

    server-name.localhost

If so, you can update your Exim4 configuration:

    update-exim4.conf

and restart the Exim4 service:

    systemctl restart exim4.service
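As a sanity check, Exim can print the effective value of a main option with -bP; a sketch, assuming the Debian binary name exim4 (a boolean option that is off is printed with a no_ prefix):

```shell
# Should print "disable_ipv6" rather than "no_disable_ipv6" once the
# setting has taken effect.
exim4 -bP disable_ipv6
```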
IPv6 socket creation failed: Address family not supported by protocol
1,400,384,865,000
Similar questions have been asked before, but my setup is a little different and the solutions to those questions are not working.

I have a CentOS 6 server running iptables with 5 interfaces:

    eth0: Management         136.2.188.0/24
    eth1: Cluster1 internal  10.1.0.0/16
    eth2: Cluster1 external  136.2.217.96/27
    eth3: Cluster2 internal  10.6.0.0/20
    eth4: Cluster2 external  136.2.178.32/28

What I'm trying to do is to have traffic from eth1 go out eth2 and be NATed, traffic from eth3 go out eth4 and be NATed, and all other traffic (e.g. SSH to the box itself) use eth0. To do that I configured route tables like so:

    ip route add default via 136.2.178.33 src 136.2.178.37 table 1
    ip route add default via 136.2.217.97 src 136.2.217.124 table 2
    ip rule add fwmark 1 pref 1 table 1
    ip rule add fwmark 2 pref 2 table 2

The source IPs are those of the NAT box. The regular default route the management interface will use is in table 0, as it usually is.

I then configured iptables to mark packets using the mangle table so that they use a specific route table (if I am understanding this correctly), and to NAT particular source traffic out a particular interface:

    iptables -A PREROUTING -t mangle -j CONNMARK --restore-mark
    iptables -A PREROUTING -t mangle -m mark --mark 0x0 -s 10.6.0.0/20 -j MARK --set-mark 1
    iptables -A PREROUTING -t mangle -m mark --mark 0x0 -s 10.1.0.0/16 -j MARK --set-mark 2
    iptables -A POSTROUTING -t mangle -j CONNMARK --save-mark
    iptables -A POSTROUTING -t nat -s 10.6.0.0/20 -o eth4 -j MASQUERADE
    iptables -A POSTROUTING -t nat -s 10.1.0.0/16 -o eth2 -j MASQUERADE
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -j LOG --log-level debug
    iptables -A FORWARD -m state --state NEW -s 10.6.0.0/20 -o eth4 -j ACCEPT
    iptables -A FORWARD -m state --state NEW -s 10.1.0.0/16 -o eth2 -j ACCEPT
    iptables -A FORWARD -j DROP

When I test this (a simple wget of google.com from a client machine) I can see traffic come in the internal interface (eth3 in the test), then go out the external interface (eth4) with the NAT box's external IP as the source IP. So, the NAT itself works. However, when the system receives the response packet it comes in eth4 as it should, but then nothing happens: it never gets un-NATed and never shows up on eth3 to go back to the client machine.

Internal interface:

    11:52:08.570462 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 34573 ecr 0,nop,wscale 7], length 0
    11:52:09.572867 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 35576 ecr 0,nop,wscale 7], length 0
    11:52:11.576943 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 37580 ecr 0,nop,wscale 7], length 0
    11:52:15.580846 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 41584 ecr 0,nop,wscale 7], length 0
    11:52:23.596897 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 49600 ecr 0,nop,wscale 7], length 0

External interface:

    11:52:08.570524 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 34573 ecr 0,nop,wscale 7], length 0
    11:52:08.609213 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835608368 ecr 34573,nop,wscale 7], length 0
    11:52:08.909188 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835608668 ecr 34573,nop,wscale 7], length 0
    11:52:09.572882 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 35576 ecr 0,nop,wscale 7], length 0
    11:52:09.611414 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835609370 ecr 34573,nop,wscale 7], length 0
    11:52:11.576967 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 37580 ecr 0,nop,wscale 7], length 0

So, why is traffic getting out while iptables is not sending the return traffic back to the client? Routing seems correct, since packets leave and arrive on the correct interfaces, so what is iptables doing with the return traffic?
OK, I figured it out. What I had to do was add the internal subnet routes to each route table, then set rules to control which interface traffic routes to/from. With that in place, marking packets with the mangle table in iptables was not needed; just the typical forward and nat rules.

    ip route add 136.2.178.32/28 dev eth4 table 1
    ip route add 10.6.0.0/20 dev eth3 table 1
    ip route add default via 136.2.178.33 src 136.2.178.37 table 1
    ip rule add iif eth4 table 1
    ip rule add from 10.6.0.0/20 table 1

    ip route add 136.2.217.96/28 dev eth2 table 2
    ip route add 10.1.0.0/16 dev eth1 table 2
    ip route add default via 136.2.217.113 src 136.2.217.124 table 2
    ip rule add iif eth2 table 2
    ip rule add from 10.1.0.0/16 table 2

    iptables -A FORWARD -i eth2 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth1 -o eth2 -m state --state NEW -j LOG --log-level debug
    iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
    iptables -A FORWARD -i eth4 -o eth3 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth3 -o eth4 -m state --state NEW -j LOG --log-level debug
    iptables -A FORWARD -i eth3 -o eth4 -j ACCEPT
    iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
    iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
    iptables -t nat -A POSTROUTING -o eth4 -j MASQUERADE
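These ip commands do not survive a reboot. On CentOS 6 they can be persisted in per-interface files under /etc/sysconfig/network-scripts, which ifup feeds line by line to ip route add and ip rule add. A sketch for the Cluster2 side only (the Cluster1 side is analogous with table 2); the two leading comment lines simply name the target files:

```
# /etc/sysconfig/network-scripts/route-eth4
136.2.178.32/28 dev eth4 table 1
10.6.0.0/20 dev eth3 table 1
default via 136.2.178.33 src 136.2.178.37 table 1

# /etc/sysconfig/network-scripts/rule-eth4
iif eth4 table 1
from 10.6.0.0/20 table 1
```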
NAT box with multiple internal and external interfaces
1,400,384,865,000
It seems that when resolving hosts on Alpine Linux, the default behavior is to try IPv6 first, falling back to IPv4. But sometimes it takes a lot of time to resolve, and on some connections IPv6 is blocked entirely, making it frustrating. Is there a way to configure the resolver to try IPv4 first?
I've just found that I can disable IPv6 entirely, and that does the trick for me. Adding to /etc/sysctl.d/local.conf (source):

    # Force IPv6 off
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    net.ipv6.conf.eth0.disable_ipv6 = 1

And reloading the configuration:

    # sysctl --system
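After reloading, you can confirm that the override actually applied (a quick check; the keys mirror the ones set above):

```shell
# Each key should now report "= 1".
sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6
```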
How to resolve IPv4 first on Alpine Linux?
1,400,384,865,000
I'm trying to work out how to connect an inbound IPv4 connection to a port listening on an IPv6 socket on a CentOS box. To demonstrate on a vanilla CentOS 7 server:

Confirm bindv6only is disabled:

    $ cat /proc/sys/net/ipv6/bindv6only
    0

Run netcat listening on an IPv6 port:

    nc -lvn6p 80

On another shell, attempt to telnet to the port via IPv4:

    telnet 127.0.0.1 80
    Trying 127.0.0.1...
    telnet: connect to address 127.0.0.1: Connection refused

Further information: trying to connect via IPv6 works as expected, e.g.:

    telnet ::1 80

However, everything I'm reading suggests that Linux-based IPv6 sockets should accept IPv4 connections too if net.ipv6.bindv6only is disabled in sysctl, which it is.

I've tried socat; it works, but it isn't an elegant solution and requires a separate service to be configured, e.g.:

    socat TCP4-LISTEN:80,reuseaddr,fork TCP6:[::1]:80

ref: https://sysctl-explorer.net/net/ipv6/bindv6only/
ref: https://stackoverflow.com/questions/6343747/ipv6-socket-creation
I don't know if this is your problem, but running yum install nc on CentOS 7 will install nmap-ncat, which does set the SOL_IPV6/IPV6_V6ONLY socket option itself on IPv6 sockets:

    # strace -e trace=setsockopt nc -lvn6p 80
    Ncat: Version 7.50 ( https://nmap.org/ncat )
    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    setsockopt(3, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
    Ncat: Listening on :::80

If you omit the -6 and -4 options, it will bind two different IPv6 and IPv4 sockets:

    # strace -e trace=bind,setsockopt nc -lvnp 80
    Ncat: Version 7.50 ( https://nmap.org/ncat )
    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    setsockopt(3, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
    bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
    Ncat: Listening on :::80
    setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    bind(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
    Ncat: Listening on 0.0.0.0:80

Apparently, the nmap people aren't great fans of the dual-stack sockets feature of Linux ;-)
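A quick way to see which sockets a listener actually bound, without strace, is to query the kernel socket table while the server runs (a sketch; ss ships with iproute2 on CentOS 7, and the column layout varies by version):

```shell
# A v6-only listener shows only a [::]:80 entry; ncat run without -4/-6
# binds two sockets, so both [::]:80 and 0.0.0.0:80 appear.
ss -ltn 'sport = :80'
```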
How to connect to a IPv6 service using IPv4 connection on CentOS 7?