1,427,416,989,000
I've been trying to figure out why I get a literal end-of-transmission character (EOT, ASCII code 4) in my variable if I read Ctrl+D with read -N 1 in bash and ksh93. I'm aware of the distinction between the end-of-transmission character and the end-of-file condition, and I know what Ctrl+D does when using read without -N (it sends EOT, and if the input was empty, the underlying read() returns zero, signalling EOF). But I'm not sure why trying to read a specific number of characters changes this behaviour so radically. I would have expected an EOF condition and that the following loop would exit: while read -N 1 ch; do printf '%s' "$ch" | od done Output when pressing Ctrl+D: 0000000 000004 0000001 The bash manual says about read -N (ksh93 has a similar wording): -N nchars; read returns after reading exactly nchars characters rather than waiting for a complete line of input, unless EOF is encountered or read times out. ... but it says nothing about switching the TTY to raw/unbuffered mode (which is what I assume is happening). The -n option to read seems to work in the same way with regards to Ctrl+D, and the number of characters to read doesn't seem to matter either. How may I signal an end-of-input to read -N and exit the loop (other than testing the value that was read), and why is this different from a "bare" read?
It might be more helpful if the doc pointed out that there's no such thing as an ASCII EOF, that the ASCII semantics for ^D is EOT, which is what the terminal driver supplies in canonical mode: it ends the current transmission, the read. Programs interpret a 0-length read as EOF, because that's what EOF looks like on files that have that, but the terminal driver refusing to deliver character code 4 and instead swallowing it and terminating the read isn't always what you want. That's what's going on here: control character semantics are part of canonical mode, the mode where the terminal driver buffers until it sees a character to which convention assigns a special meaning. This is true of EOT, BS, CR and a host of others (see stty -a and man termios for all the gory details). read -N is an explicit order to just deliver the next N characters. To do that, the shell has to stop asking the terminal driver for canonical semantics. By the way, EOF isn't actually a condition a terminal can set, or enter. If you keep reading past EOF on anything else, you'll keep getting the EOF indicator, but the only EOF the terminal driver can supply is a fake one. Think about it: if the terminal driver actually delivered a real EOF, then the shell couldn't keep reading from it afterwards either. It's all the same terminal. Here:

    #include <unistd.h>
    #include <stdio.h>
    char s[32];
    int main(int c, char **v) {
        do {
            c = read(0, s, sizeof s);
            printf("%d,%.*s\n", c, c, s);
        } while (c >= 0);
    }

Try that on the terminal: you'll see that the terminal driver in canonical mode just interprets EOT to complete any outstanding read, and that it buffers internally until it sees some canonical input terminator regardless of the read buffer size (type a line longer than 32 bytes). The text that's confusing you, "unless EOF is encountered", is referring to a real EOF.
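A quick way to see the difference is to run essentially the same read -N loop against a real file, where a genuine EOF exists: the loop terminates on its own, and no character code 4 ever appears. A minimal bash sketch (the temporary file is just a stand-in for any regular file):

```shell
# Against a regular file, read -N hits a true EOF: read returns nonzero
# and the loop ends -- no EOT (code 4) byte is ever delivered.
tmp=$(mktemp)
printf 'abc' > "$tmp"          # three bytes, no trailing newline
out=""
while IFS= read -r -N 1 ch; do
    out+="$ch"
done < "$tmp"
printf '%s\n' "$out"           # prints: abc
rm -f "$tmp"
```

On a terminal the same loop never sees that failing read, because the only "EOF" the tty can offer is the zero-length canonical-mode read that -N has switched off.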
How to signal end-of-input to "read -N"?
1,427,416,989,000
Of course you can do this sort of thing, read var; if [[ $var = 'y' ]]; then echo "Yes"; fi But is there any way to skip the first step and do something a little more like this, (ideally without needing a subshell): if [[ $(read var) = 'y' ]]; then echo "Yes"; fi Of course the above doesn't work unless you add echo, but I'm looking for something much cleaner and simpler than this: if [[ $(read var; echo "$var") = 'y' ]]; then echo "Yes"; fi It would be ideal if this would work: if [[ read = 'y' ]]; then echo "Yes"; fi
How about using a function to do what you need: user_input () { read var echo $var } if [ "$(user_input)" = 'y' ]; then echo "Yes"; fi At the top I define a function called user_input, which will read the value in, and then echo it out. Then the if statement is able to take that value and check if it is 'y'. I've wrapped the $(user_input) in quotation marks in case the user put in spaces, so that the script doesn't just error out.
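If you adopt this pattern, a slightly hardened variant (a sketch, not part of the original answer) also protects against backslash mangling and whitespace loss inside the function itself, by using read -r with IFS= and printing with printf:

```shell
# Hardened variant of the user_input idea: -r disables backslash
# processing, IFS= preserves leading/trailing whitespace, and printf
# avoids echo's option/escape pitfalls.
user_input() {
    IFS= read -r var
    printf '%s\n' "$var"
}

# Non-interactive demonstration: the "user input" arrives via a pipe.
if [ "$(printf 'y\n' | user_input)" = 'y' ]; then
    echo "Yes"
fi
```

The quoting around "$(user_input)" in the if test remains necessary, exactly as the answer says.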
Succinct way to respond to user input?
1,427,416,989,000
I am trying to echo the variables $f1, $f2, ..., outside of the while loop shown below. From what I understand, the scope of the variable is not correct. Therefore I tried a few of the workarounds as shown in the Bash FAQ:

    while IFS=: read -r f1 f2 f3 f4 f5 f6 f7; do
        echo "Username: $f1. Home directory:$f6"
    done < /etc/passwd
    echo $f1

However, I can't seem to reproduce the fix(es) as shown in the link. Can someone provide an applied example on how to fix this issue?
In your link, the variable used outside (linecount) is not defined as a "loop" variable. It's just modified (incremented) inside the loop body, not in the "while" statement. This is because when the "read -r f1 f2..." part is called, it resets (prepares) the variables used (f1..f7), waits for an input line and assigns the variables according to the input. At the end of the file, it does not get any input line (EOF or pipe error), returns false, and exits the loop. But it has already reset the variables. You can see this for yourself by enabling debug mode: add set -x before your while statement. This will produce something like:

    ++ IFS=:
    ++ read -r f1 f2 f3 f4 f5 f6 f7
    ++ echo 'Username: bind. Home directory:/var/cache/bind'
    Username: bind. Home directory:/var/cache/bind
    ++ IFS=:
    ++ read -r f1 f2 f3 f4 f5 f6 f7
    ++ echo

You can see the last read being called, then your final "echo $f1". So, as stated in your link, create another storage variable before your while, assign it the value inside the loop, and then you'll be able to use it afterwards:

    typeset f0=""
    while IFS=: read -r f1 f2 f3 f4 f5 f6 f7; do
        echo "Username: $f1. Home directory:$f6"
        f0="$f1"
    done < /etc/passwd
    echo $f0
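The same fix works in a fully self-contained form; here a sample file stands in for /etc/passwd (a sketch, with made-up usernames), and f0 still holds the last f1 after the loop because the redirection, unlike a pipe, keeps the loop in the current shell:

```shell
# f0 captures each f1 before the final, failing read clears f1 at EOF.
tmp=$(mktemp)
printf 'alice:x:1000:1000::/home/alice:/bin/bash\nbob:x:1001:1001::/home/bob:/bin/bash\n' > "$tmp"

f0=""
while IFS=: read -r f1 f2 f3 f4 f5 f6 f7; do
    echo "Username: $f1. Home directory:$f6"
    f0=$f1                 # saved before read wipes the variables on EOF
done < "$tmp"

echo "$f0"                 # prints: bob
rm -f "$tmp"
```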
Preserve read statement's last variable assignments outside of "while" loop
1,427,416,989,000
I am trying to write a BASH function that will do x if program a or b finish. example: echomessage() { echo "here's your message" if [[sleep 3 || read -p "$*"]] then clear fi } In this scenario: a = 'sleep 3' which is supposed to run x after 3 seconds b = 'read -p "$*"' which is supposed to run x upon providing any keyboard input. x = 'clear' which clears the echo output if either the program times out with sleep or the user presses a key on the keyboard.
read has a parameter for timeout, you can use: read -t 3 answer If you want read to wait for a single character (whole line + Enter is default), you can limit input to 1 char: read -t 3 -n 1 answer After proper input, return value will be 0, so you can check for it like this: if [ $? == 0 ]; then echo "Your answer is: $answer" else echo "Can't wait anymore!" fi I guess there is no need to implement background jobs in your situation, but if you want to, here is an example: #!/bin/bash function ProcessA() { sleep 1 # do some thing echo 'A is done' } function ProcessB() { sleep 2 # do some other thing echo 'B is done' } echo "Starting background jobs..." ProcessA & # spawn process "A" pid_a=$! # get its PID ProcessB & # spawn process "B" pid_b=$! # get its PID too echo "Waiting... ($pid_a, $pid_b)" wait # wait for all children to finish echo 'All done.'
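Bash additionally documents that read's exit status is greater than 128 when the timeout expires, which lets you tell a timeout apart from ordinary input or EOF. A non-interactive sketch (input arrives through a pipe here, so no timeout actually occurs):

```shell
# read -t exit status: 0 on input received in time, >128 on timeout.
if printf 'y\n' | { IFS= read -t 3 -r answer && [ "$answer" = "y" ]; }; then
    echo "Your answer is: y"
else
    echo "Can't wait anymore!"
fi
```

With a terminal attached you would drop the pipe and let the user race the 3-second timer.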
BASH function to read user input OR interrupt on timeout
1,427,416,989,000
I have a file foo.txt test qwe asd xca asdfarrf sxcad asdfa sdca dac dacqa ea sdcv asgfa sdcv ewq qwe a df fa vas fg fasdf eqw qwe aefawasd adfae asdfwe asdf era fbn tsgnjd nuydid hyhnydf gby asfga dsg eqw qwe rtargt raga adfgasgaa asgarhsdtj shyjuysy sdgh jstht ewq sdtjstsa sdghysdmks aadfbgns, asfhytewat bafg q4t qwe asfdg5ab fgshtsadtyh wafbvg nasfga ghafg ewq qwe afghta asg56ang adfg643 5aasdfgr5 asdfg fdagh5t ewq I want to print all the lines between qwe and ewq in a separate file. This is what I have so far : #!/bin/bash filename="foo.txt" #While loop to read line by line while read -r line do readLine=$line #If the line starts with ST then echo the line if [[ $readLine = qwe* ]] ; then echo "$readLine" read line readLine=$line if [[ $readLine = ewq* ]] ; then echo "$readLine" fi fi done < "$filename"
You need to make some changes to your script (in no particular order): Use IFS= before read to avoid removing leading and trailing spaces. As $line is not changed anywhere, there is no need for the variable readLine. Do not use read in the middle of the loop! Use a Boolean variable to control printing, and make the start and end of printing clear. With those changes, the script becomes:

    #!/bin/bash
    filename="foo.txt"
    #While loop to read line by line
    while IFS= read -r line; do
        #If the line starts with qwe then set var to yes.
        if [[ $line == qwe* ]] ; then
            printline="yes"
            # Just to make each range start very clear, remove in use.
            echo "----------------------->>"
        fi
        # If variable is yes, print the line.
        if [[ $printline == "yes" ]] ; then
            echo "$line"
        fi
        #If the line starts with ewq then set var to no.
        if [[ $line == ewq* ]] ; then
            printline="no"
            # Just to make each range end very clear, remove in use.
            echo "----------------------------<<"
        fi
    done < "$filename"

Which could be condensed in this way:

    #!/bin/bash
    filename="foo.txt"
    while IFS= read -r line; do
        [[ $line == qwe* ]] && printline="yes"
        [[ $printline == "yes" ]] && echo "$line"
        [[ $line == ewq* ]] && printline="no"
    done < "$filename"

That will print the start and end lines (inclusive). If there is no need to print them, swap the start and end tests:

    #!/bin/bash
    filename="foo.txt"
    while IFS= read -r line; do
        [[ $line == ewq* ]] && printline="no"
        [[ $printline == "yes" ]] && echo "$line"
        [[ $line == qwe* ]] && printline="yes"
    done < "$filename"

However, it would be better (if you have bash version 4.0 or newer) to use readarray and loop over the array elements:

    #!/bin/bash
    filename="foo.txt"
    readarray -t lines < "$filename"
    for line in "${lines[@]}"; do
        [[ $line == ewq* ]] && printline="no"
        [[ $printline == "yes" ]] && echo "$line"
        [[ $line == qwe* ]] && printline="yes"
    done

That will avoid most of the issues of using read.
Of course, you could use the recommended (in comments; Thanks, @costas) sed line to get only the lines to be processed: #!/bin/bash filename="foo.txt" readarray -t lines <<< "$(sed -n '/^qwe.*/,/^ewq.*/p' "$filename")" for line in "${lines[@]}"; do : # Do all your additional processing here, with a clean input. done
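If an external tool is acceptable anyway, awk can also do the whole inclusive-range job by itself with a print flag, much like the Boolean variable in the shell versions (a sketch with made-up sample lines):

```shell
# p is switched on at a /^qwe/ line and switched off only after an
# /^ewq/ line has been printed, so both boundary lines are included.
printf 'skip\nqwe\nmiddle\newq\nskip\n' |
    awk '/^qwe/ {p=1} p {print} /^ewq/ {p=0}'
```

The rule order matters: putting the /^ewq/ clear after the print is what makes the range inclusive.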
Read a file line by line and if condition is met continue reading till next condition [closed]
1,554,498,125,000
I'm not looking for work-arounds or solutions for the issue. I'm fine with it not working like that in bash. I just don't understand why it doesn't work. I'm looking for an in-depth answer why the following script doesn't work. All previous internet search results, including posts from unix.stackexchange.com, couldn't really clear this up completely. It has something to do with read reading from stdin which doesn't work because stdin is already "taken" (?) by cat feeding bash via the pipe? Example bash script test.sh: echo "Please say name:" read NAME echo "Hello $NAME" Method 1 calling the script with bash test.sh: $ bash test.sh Please say name: XYZ Hello XYZ $ Method 2 running the script via piping to bash: $ cat test.sh | bash Please say name: $ So the script immediately returns to the prompt, without waiting for input or even printing the second line.
You did read from stdin with read, but what you read was the next line of standard input - namely echo "Hello $NAME". After reading that line, there was no more input and so no further commands to execute, and the script was over. There is only one standard input stream, and you're trying to use it for both code and data. This is the same as how an interactive bash session reads commands from your typing, as well as read responses, as well as whatever any other commands you run want to use standard input for. You can see this happening if we add an extra line to the end of the script: echo "Please say name:" read NAME echo "Hello $NAME" printf 'name=%s\n' "$NAME" This both provides a further command to see the script continue execution, and shows us what was read into NAME: Please say name: name=echo "Hello $NAME" You can see that the variable holds verbatim what was written in the script file - no variable interpolation, execution, or expansion has happened. If you want to read from the terminal, it is possible. The simplest way that's likely to work is to read from standard output instead of standard input (!), which is presumably connected to the TTY: read NAME <&1 This will wait for me to type something, and then move on to the rest of the program. You could also use /dev/tty or $(tty).
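The file-descriptor route can be exercised without a terminal at all. In this sketch (not the exact script from the question) both the "code" and the "user input" come from pipes: the original stdin is duplicated to fd 3 before an inner pipeline replaces stdin, so read can still reach the outer input:

```shell
# Duplicate the original stdin to fd 3, then let an inner pipeline take
# over fd 0; read <&3 still sees the outer input.
printf 'typed-answer\n' | bash -c '
    exec 3<&0                      # fd 3 = the outer stdin (the pipe)
    echo hello | {
        read -r line               # "hello", from the inner pipe
        read -r answer <&3         # "typed-answer", from fd 3
        printf "%s/%s\n" "$line" "$answer"
    }
'
```

This prints hello/typed-answer, showing that the two reads consumed two different streams.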
Why isn't it possible to read from `stdin` with `read` when piping a script to bash?
1,554,498,125,000
I was trying to write a shell script that idly waits for a signal in the background. Since the script doesn't take user input I thought of using read to block the script indefinitely while waiting. In bash the following code seems to work as expected, outputting "signal" every time it receives a SIGUSR1:

    #!/bin/bash
    trap "echo signal" SIGUSR1
    read

    $ ./test
    signal
    signal
    ...

However if I run it with #!/bin/sh or BusyBox ash, sending SIGUSR1 also causes the program to terminate:

    #!/bin/sh
    trap "echo signal" SIGUSR1
    read

    $ ./test
    signal
    $

Shouldn't read only return after reading EOF or IFS from standard input? In this case that didn't happen, so what caused its return?
Dash and BusyBox comply with the POSIX specification by making the signal quit read immediately. Bash doesn't, unless invoked in POSIX mode. The section on signals states When a signal for which a trap has been set is received while the shell is waiting for the completion of a utility executing a foreground command, the trap associated with that signal shall not be executed until after the foreground command has completed. read is not a special built-in, so it is a “utility executing as a foreground command”, even if it happens to run inside the same process. This rules out the bash behavior of running the trap and continuing the execution of read. The section on the execution environment clarifies how signals work when running a utility (once again, including non-special built-ins): If the utility is a shell script, traps caught by the shell shall be set to the default values and traps ignored by the shell shall be set to be ignored by the utility; if the utility is not a shell script, the trap actions (default or ignore) shall be mapped into the appropriate signal handling actions for the utility A built-in is not a shell script, but arguably it's still running as part of a shell that's executing a script, so the first clause could apply. But neither behaviors allow the utility to run the parent's trap. The only possible behaviors are for read to ignore the signal, block it, or let it return immediately. The description of read doesn't mention signals, so it isn't allowed to change the mask of a standard signal, which rules out ignoring. read could arguably block the signal until it finishes (programs may temporarily block signals for internal purposes) — but this would conflict with the expectation that read does not block a SIGINT while it's waiting for input (with Occam's razor saying that it wouldn't make sense to only have this behavior for SIGINT). 
So having read do nothing upon SIGUSR1 (which is how mksh behaves) might be technically compliant, but I don't find this to be reasonable behavior. Dash and BusyBox aren't fully POSIX-compliant. Both (as of the versions in Ubuntu 22.04) set $? to 1 if read is interrupted by a signal, which contradicts the requirement on exit status. The exit status of a command that terminated because it received a signal shall be reported as greater than 128. (In practice that's 128 + signal value, except ATT ksh where it's 256 + signal value.)
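A pattern that behaves the same under both interpretations is to wrap read in a loop and distinguish a signal interruption (exit status greater than 128) from genuine EOF. A sketch (the handler body and input are illustrative; shown non-interactively, so the loop leaves through the EOF branch):

```shell
# Retry read when a trapped signal interrupted it; stop on real EOF.
printf 'one\ntwo\n' | {
    trap 'echo signal' USR1
    while :; do
        if IFS= read -r line; then
            echo "got: $line"
        else
            status=$?
            [ "$status" -gt 128 ] && continue   # interrupted: retry read
            break                               # EOF (or error): leave loop
        fi
    done
}
```

In a shell that runs the trap and returns from read (dash, BusyBox), the continue resumes waiting; in bash the branch is simply never taken because read is restarted internally.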
Why does sending a trapped signal cause `read` to return in POSIX shell but not in Bash?
1,554,498,125,000
I want to know which keys are pressed on my keyboard and print the information to stdout. A tool that can do this is showkey. However, if I want to pass the data of showkey to read:

    while read line; do
        echo "$line" | otherprog
    done <`showkey -a`

OR

    showkey -a | while read line; do
        echo "$line" | otherprog
    done

then showkey waits until a sum of 140 characters has been typed in and then sends the buffered information to read. showkey -a prints the pressed keys line by line, without any buffering. Why does it buffer? How do I avoid this buffering, so that I can read showkey's output truly line by line? Is there an alternative to showkey? Is there a file I can read the pressed keys directly from? What is the correct way to pass data to read? Solution: I've used lornix's solution and included it in my simple keyboard :D!

    stdbuf -o0 showkey -a | while read line; do
        perl -e 'print sprintf "%030s\n",shift' "$line" | aplay &> /dev/null &
    done

Lasership version:

    #!/bin/bash
    MP3=(); for i in mp3/*.mp3; do MP3+=("$i"); done
    NMP3=${#MP3[@]}
    stdbuf -o0 showkey -a 2>/dev/null | while read line; do
        [ -z "$line" ] || ! [[ $line =~ ^[0-9] ]] && continue
        NUM="$(echo "$line" | awk '{print $2}')"
        mplayer "${MP3[$(($NUM % $NMP3))]}" &>/dev/null &
    done

In the same folder, download some laser mp3 files into a folder called mp3.
Try setting showkey output to non-buffering with the stdbuf command: stdbuf -o0 showkey -a | cat - Will show the output as keys are pressed, rather than buffering a line. stdbuf can adjust the buffering of stdin, stdout and stderr, setting them to none, line buffered, or block buffered, with a choosable block size. Very handy.
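A minimal demonstration of the buffering modes, assuming GNU coreutils' stdbuf is installed (here plain cat stands in for showkey, since a pipeline is all that's needed to trigger block buffering):

```shell
# stdbuf adjusts stdio buffering of the program it launches:
#   -o0 = unbuffered stdout, -oL = line-buffered, -oSIZE = block of SIZE.
# cat's output into a pipe would normally be block-buffered; -oL flushes
# at every newline, so the downstream reader sees each line immediately.
printf 'first\nsecond\n' | stdbuf -oL cat
```

Note that stdbuf only works on programs that use C stdio buffering and don't adjust it themselves.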
Print currently pressed keys to stdout and read them line by line
1,554,498,125,000
I'm working on a bash script that parses a tab separated file. If the file contains the word "prompt" the script should ask the user to enter a value. It appears that while reading the file, the "read" command is not able to read from standard input as the "read" is simply skipped. Does anybody have a work around for doing both reading from a file as well as from stdin? Note: The script should run on both Git Bash and MacOS. Below is a little code example that fails: #!/bin/bash #for debugging set "-x" while IFS=$'\r' read -r line || [[ -n "$line" ]]; do [[ -z $line ]] && continue IFS=$'\t' read -a fields <<<"$line" command=${fields[0]} echo "PROCESSING "$command if [[ "prompt" = $command ]]; then read -p 'Please enter a value: ' aValue echo else echo "Doing something else for "$command fi done < "$1" Output: $ ./promptTest.sh promptTest.tsv + IFS=$'\r' + read -r line + [[ -z something else ]] + IFS=' ' + read -a fields + command=something + echo 'PROCESSING something' PROCESSING something + [[ prompt = something ]] + echo 'Doing something else for something' Doing something else for something + IFS=$'\r' + read -r line + [[ -z prompt ]] + IFS=' ' + read -a fields + command=prompt + echo 'PROCESSING prompt' PROCESSING prompt + [[ prompt = prompt ]] + read -p 'Please enter a value: ' aValue + echo + IFS=$'\r' + read -r line + [[ -n '' ]] Sample tsv file: $ cat promptTest.tsv something else prompt otherthing nelse
The simplest way is to use /dev/tty as the read for keyboard input. For example: #!/bin/bash echo hello | while read line do echo We read the line: $line echo is this correct? read answer < /dev/tty echo You responded $answer done This breaks if you don't run this on a terminal, and wouldn't allow for input to be redirected into the program, but otherwise works pretty well. More generally, you could take a new file handle based off the original stdin, and then read from that. Note the exec line and the read #!/bin/bash exec 3<&0 echo hello | while read line do echo We read the line: $line echo is this correct? read answer <&3 echo You responded $answer done In both cases the program looks a bit like: % ./y We read the line: hello is this correct? yes You responded yes The second variation allows for input to also be redirected % echo yes | ./y We read the line: hello is this correct? You responded yes
bash: Prompting for user input while reading file
1,554,498,125,000
I have a bash script where I'm trying to assign a heredoc string to a variable using read, and it only works if I use read with the -d '' option, I.e. read -d '' <variable> script block #!/usr/bin/env bash function print_status() { echo echo "$1" echo } read -d '' str <<- EOF Setup nginx site-config NOTE: if an /etc/nginx/sites-available config already exists for this website, this routine will replace existing config with template from this script. EOF print_status "$str" I found this answer on SO which is where I copied the command from, it works, but why? I know the first invocation of read stops when it encounters the first newline character, so if I use some character that doesn't appear in the string the whole heredoc gets read in, e.g. read -d '|' <variable> -- this works read -d'' <variable> -- this doesn't I'm sure it's simple but what's going on with this read -d '' command option?
I guess the question is why read -d '' works though read -d'' doesn't. The problem doesn't have anything to do with read but is a quoting "problem". A "" / '' that is part of a larger word is simply not recognized at all. Let the shell show you what it sees / executes:

    start cmd:> set -x

    start cmd:> echo read -d " " foo
    + echo read -d ' ' foo

    start cmd:> echo read -d" " foo
    + echo read '-d ' foo

    start cmd:> echo read -d "" foo
    + echo read -d '' foo

    start cmd:> echo read -d"" foo
    + echo read -d foo
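The word-splitting effect is easy to visualize with a small helper that prints each argument it receives bracketed on its own line (a sketch; args is a made-up name):

```shell
# Show how many words the shell actually produced from a command line.
args() {
    for a in "$@"; do printf '<%s>\n' "$a"; done
}

args -d "" foo    # three arguments: the standalone "" survives as an empty word
args -d"" foo     # two arguments: the adjacent "" contributes nothing to -d
```

So read -d '' passes an empty string as the delimiter argument, while read -d'' passes no delimiter argument at all, and read treats the next word as the variable name.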
How does the -d option to bash read work?
1,554,498,125,000
I'm working on a relatively new Linux Mint installation and I haven't noticed any significant issues until just now, I realised I couldn't use any command-line tools that interactively read user input. Instead of processing an enter key as I'd expect by consuming a line of input, it's printing the ^M character sequence to the terminal and continuing to prompt for input. E.g. with git add -p: Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]? y^M After a little more testing I realised that all shell read operations were doing this (in zsh and bash), and sh was entirely unusable. zsh and bash: $ read test value^M^C $ sh: $ echo "test"^M^C $ exit^M^M^M^C $ I opened a new terminal and it seemed unaffected, so I'm not stuck here, but I'd love to know what happened to make this terminal behave as it is. I'll keep the broken one open for a while for testing if people have theories.
This usually happens when a program which set the terminal to raw input mode died unexpectedly and wasn't able to restore the terminal settings to their prior values. A simple stty sane should reset everything to normal. As part of making the terminal "raw", the translation of Carriage-Return (^M, \r) to Line-Feed (^J, \n) is disabled by turning off the ICRNL flag in the c_iflag termios settings. This could also be done from the command line with stty -icrnl. The command line of a shell like bash is usually not affected by this, because bash makes itself the terminal raw and does its own key translation (in order to provide nice line-editing capabilities as moving the insertion point left and right, command line history, etc) and only restores the default terminal settings upon running other commands.
shell read incorrectly processes enter keypress
1,554,498,125,000
I'm trying to write a bash script that will let me save a backup code (lots of numbers) in a file. I've finished the script but it's only letting me save 4096 digits of the code. I tried to do this:

    # Ask for backup code
    read -p "Backup code:" backupcode

    # Check backup code length
    l="${#backupcode}"
    m=4096
    if (( l > m )); then
        echo -e "${RED}ERROR:${NC} Backup is too large! The limit is 4096 digits."
    else
        # Save backup code in the file
        echo $backupcode > "${path[$i]}"
    fi

I think this didn't detect that the backup was too large, so I think there's something going on with the read command. If there's a limit in read, are there any alternatives I can use?
There's no limit for the read command itself. But there's a limit for how much you can type on a single line in the terminal. To see this, try running the command wc -c and typing a very long line. You'll hit that same limit at 4096 bytes. To input more than the limit, either arrange to make the code multi-line with each line being short enough, or input it in some other way than directly reading from the terminal in cooked mode. If you enable readline, bash reads characters one by one, and there's no limit to the line length other than available memory. read -e -p "Backup code:" backupcode However, reading such a long input from the terminal is a very bad user interface. A user isn't going to sit there and type thousands of characters. Instead, read input from the clipboard or from a file.
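That the 4096-byte ceiling lives in the terminal's line discipline and not in read itself can be checked by feeding a long line from a file (a sketch; 8000 is an arbitrary size above the limit):

```shell
# A single 8000-character line read from a file: read takes it whole,
# because the 4096-byte cap is the tty's canonical-mode line buffer.
tmp=$(mktemp)
printf '%08000d\n' 0 > "$tmp"        # one line of 8000 characters
IFS= read -r backupcode < "$tmp"
echo "${#backupcode}"                # prints: 8000
rm -f "$tmp"
```

So reading the code from a file (or the clipboard via a paste into an editor) sidesteps the limit entirely.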
Is there a limit for the read command?
1,554,498,125,000
echo " a" | while read; do echo "$REPLY"; done will output ".....a" (dots standing for the preserved leading white space). However, echo " a" | while read line; do echo "$line"; done will output "a", with the leading white space skipped (OK, because of word splitting). It seems the REPLY variable has the same effect as setting IFS to null: echo " a" | while IFS= read line; do echo "$line"; done From the bash manual, I can't find the reason. Do you have any ideas? Thanks.
From the read man page: Read one line from the standard input, (or from a file) and assign the word(s) to variable name(s). If no names are supplied, the line read is assigned to the variable REPLY. So, $REPLY always holds the whole line, unmodified, while named variables receive the IFS-split words. It could not work otherwise: when several names are supplied, read has to split the line on IFS to distribute the words among them, and that same splitting is what trims the whitespace around a single name.
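The difference is easy to demonstrate non-interactively (a sketch): the same input keeps its spaces in REPLY but loses them in a named variable under the default IFS:

```shell
# No name supplied: the line lands in REPLY verbatim, whitespace included.
printf '  a  \n' | { read -r; printf '[%s]\n' "$REPLY"; }     # prints: [  a  ]

# Named variable: the default IFS trims leading/trailing whitespace.
printf '  a  \n' | { read -r line; printf '[%s]\n' "$line"; } # prints: [a]
```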
why REPLY variable in read builtin skip white space?
1,554,498,125,000
In bash, if I wanted to read, say 3, characters from a pipe, I could do: ... | read -n3 In zsh's read, the closest option seems to be -k: -k [ num ] Read only one (or num) characters. All are assigned to the first name, without word splitting. This flag is ignored when -q is present. Input is read from the terminal unless one of -u or -p is present. This option may also be used within zle widgets. Note that despite the mnemonic ‘key’ this option does read full characters, which may consist of multiple bytes if the option MULTIBYTE is set. And for -u and -p: -u n Input is read from file descriptor n. -p Input is read from the coprocess. A bare echo foobar | (read -k3; echo $REPLY) hangs waiting for input. -p fails with read: -p: no coprocess. Only the following works: echo foobar | (read -k3 -u0; echo $REPLY) This is the first time I'm seeing something that's more difficult to achieve in zsh than in bash. Is there a simpler way to read N characters from stdin (whatever that might be) than this?
It's a bit weird, but it is documented: -k [num] (…) Input is read from the terminal unless one of -u or -p is present. The reason your first attempt hangs there is that it's reading from the terminal. Typing three characters on the terminal does unblock it. To read from standard input when you're asking for a limited number of characters rather than a whole line (with -k or -q), you need to pass -u 0 explicitly. echo foobar | ( read -u 0 -k 3; echo $REPLY )
What's the zsh way to read n characters from stdin?
1,554,498,125,000
read -r -p "put an option: " option echo $option this works but shellcheck gives me: In POSIX sh, read -p is undefined. How to get user input with a prompt into a variable in a posix compliant way?
You could use read without -p: printf "put an option: " >&2 read -r option printf '%s\n' "$option"
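Wrapped into a reusable POSIX function (a sketch; ask is a made-up name), with the prompt sent to stderr so the function still composes with command substitution:

```shell
# POSIX prompt helper: prompt goes to stderr, the reply to stdout.
ask() {
    printf '%s' "$1" >&2
    IFS= read -r reply
    printf '%s\n' "$reply"
}

# Non-interactive demonstration: the "user input" comes from a pipe.
option=$(printf 'y\n' | ask "put an option: " 2>/dev/null)
printf '%s\n' "$option"     # prints: y
```

Printing the prompt to stderr mirrors what read -p does in bash, so interactive behavior is unchanged while output redirection keeps working.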
how to get user input with a prompt into a variable in a posix compliant way
1,554,498,125,000
While I was reading this answer, the author used this command to put the result of a heredoc to a variable: read -r -d '' VAR <<'EOF' abc'asdf" $(dont-execute-this) foo"bar"'' EOF I'm a little confused about the -d option. From the help text for the read command: -d delim continue until the first character of DELIM is read, rather than newline So if I pass an empty string to -d, it means read until the first empty string. What does it mean? The author commented under the answer that -d '' means using the NUL string as the delimiter. Is this true (empty string means NUL string)? Why not use something like -d '\0' or -d '\x0' etc.?
Mostly, it means what it says, e.g.: $ read -d . var; echo; echo "read: '$var'" foo. read: 'foo' The reading ends immediately at the ., I didn't hit enter there. But read -d '' is a bit of a special case, the online reference manual says: -d delim The first character of delim is used to terminate the input line, rather than newline. If delim is the empty string, read will terminate a line when it reads a NUL character. \0 means the NUL byte in printf, so we have e.g.: $ printf 'foo\0bar\0' | while read -d '' var; do echo "read: '$var'"; done read: 'foo' read: 'bar' In your example, read -d '' is used to prevent the newline from being the delimiter, allowing it to read the multiline string in one go, instead of a line at a time. I think some older versions of the documentation didn't explicitly mention -d ''. The behaviour may originally be an unintended coincidence from how Bash stores strings in the C way, with that trailing NUL byte. The string foo is stored as foo\0, and the empty string is stored as just \0. So, if the implementation isn't careful to guard against it and only picks the first byte in memory, it'll see \0, NUL, as the first byte of an empty string. Re-reading the question more closely, you mentioned: The author commented under the answer that -d '' means using the NUL string as delimiter. That's not exactly right. The null string (in the POSIX parlance) means the empty string, a string that contains nothing, of length zero. That's not the same as the NUL byte, which is a single byte with binary value zero(*). If you used the empty string as a delimiter, you'd find it practically everywhere, at every possible position. I don't think that's possible in the shell, but e.g. in Perl it's possible to split a string like that, e.g.: $ perl -le 'print join ":", split "", "foobar";' f:o:o:b:a:r read -d '' uses the NUL byte as the separator. (*not the same as the character 0, of course.) Why not use something like -d '\0' or -d '\x0' etc.? 
Well, that's a good question. As Stéphane commented, originally, ksh93's read -d didn't support read -d '' like that, and changing it to support backslash escapes would have been incompatible with the original. But you can still use read -d $'\0' (and similarly $'\t' for the tab, etc.) if you like it better. Just that behind the scenes, that's the same as -d '', since Bash doesn't support the NUL byte in strings. Zsh does, but it seems to accept both -d '' and -d $'\0'.
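The most common practical use of read -d '' is consuming NUL-delimited streams, such as the output of find -print0, where any other delimiter could legitimately occur inside a record (a sketch with inline sample data standing in for find's output):

```shell
# Consume NUL-separated records; IFS= and -r keep each record intact.
printf 'one\0two with spaces\0' |
    while IFS= read -r -d '' item; do
        printf '[%s]\n' "$item"
    done
```

Each iteration receives exactly one record, spaces, newlines and all, because NUL is the only byte that cannot appear inside it.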
What does the "-d" option of the "read" shell command do when I use it with an empty string as argument?
1,554,498,125,000
I can do this in bash:

    while read -n1 -r -p "choose [y]es|[n]o"
    do
        if [[ $REPLY == q ]]; then
            break
        else
            #whatever
        fi
    done

which works but seems a bit redundant. Can I do something like this instead?

    while [[ `read -n1 -r -p "choose [y]es|[n]o"` != q ]]
    do
        #whatever
    done
You can't use the return code of read (it's zero if it gets valid, nonempty input), and you can't use its output (read doesn't print anything). But you can put multiple commands in the condition part of a while loop. The condition of a while loop can be as complex a command as you like. while IFS= read -n1 -r -p "choose [y]es|[n]o" && [[ $REPLY != q ]]; do case $REPLY in y) echo "Yes";; n) echo "No";; *) echo "What?";; esac done (This exits the loop if the input is q or if an end-of-file condition is detected.)
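The same condition structure can be exercised without a terminal by piping the characters in (a sketch; the -p prompt is dropped since there is no tty, and with -n1 each character is one answer):

```shell
# Three single-character "answers" arrive through the pipe; the loop
# handles y and n, then stops as soon as it reads q.
printf 'ynq' | while IFS= read -n1 -r && [[ $REPLY != q ]]; do
    case $REPLY in
        y) echo "Yes";;
        n) echo "No";;
        *) echo "What?";;
    esac
done
```

An exhausted pipe ends the loop through read's failing exit status, just as end-of-file would interactively.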
How to use user input as a while loop condition
1,554,498,125,000
In a remote CentOS with Bash 5.0.17(1) where I am the only user via SSH I have executed read web_application_root with: $HOME/www or with: ${HOME}/www or with: "${HOME}"/www or with: "${HOME}/www" Aiming to get an output with an expanded (environment) variable such as MY_USER_HOME_DIRECTORY/www. While ls -la $HOME/www works fine, ls -la $web_application_root fails with all examples; an error example is: ls: cannot access '$HOME/www': No such file or directory I understand that read treats all the above $HOME variants as a string (due to the single quote marks in the error) and hence doesn't expand it. How to expand variables inside read?
Variables are not expanded when passed to read. If you want to expand the $VARs or ${VAR}s where VAR denotes the name of an existing environment variable (limited to those whose name starts with an ASCII letter or underscore and followed by ASCII alnums or underscores) and leave all the other word expansions ($non_exported_shell_variable, $1, $#, ${HOME+x}, $((1 + 1)), $(cmd)...) untouched, you could use envsubst (from GNU gettext): IFS= read -r web_application_root || exit web_application_root=$(printf %s "$web_application_root" | envsubst) ls -la -- "$web_application_root" You could make it a shell function that takes a variable name as argument and does both the reading and environment variable expansion with: read_one_line_and_expand_envvars() { IFS= read -r "$1" ret="$?" command eval "$1"'=$(printf %s "${'"$1"'}" | envsubst)' && return "$ret" } To be used for instance as: printf >&2 'Please enter the root dir (${ENVVAR} expanded): ' read_one_line_and_expand_envvars web_application_root || exit printf >&2 'The expanded version of your input is "%s"\n' "$web_application_root" To limit that substitution to a limited set of environment variables, you'd pass the list as a $VAR1$VAR2... literal argument to envsubst: web_application_root=$( printf %s "$web_application_root" | envsubst '$HOME$MYENVVAR' ) (here tells envsubst to only substitute $HOME, ${HOME}, $MYENVVAR and ${MYENVVAR} in its input, leaving all other $VARs untouched). If you want to allow all forms of word expansions¹ (but note that then that makes it a command injection vulnerability), you could do: web_application_root=$(eval "cat << __EOF__ $web_application_root __EOF__") Or again, as a function that takes the variable name as argument: read_one_line_and_perform_shell_word_expansions() { IFS= read -r "$1" ret=$? 
command eval ' case "${'"$1"'}" in (EOF) ;; (*) '"$1"'=$(command eval "cat << EOF ${'"$1"'} EOF") esac' && return "$ret" } printf >&2 'Please enter the root dir ($var/$((...))/$(cmd) allowed): ' read_one_line_and_perform_shell_word_expansions web_application_root || exit printf >&2 'The expanded version of your input is "%s"\n' "$web_application_root" The same function with detailed inline documentation: read_one_line_and_perform_shell_word_expansions() { # first argument of our function is the variable name or REPLY # if not specified. varname=${1-REPLY} # read one line from stdin with read's unwanted default post-processing # (which is otherwise dependant on the current value of $IFS) disabled. IFS= read -r "$varname" # record read's exit status. If it's non zero, a full line could not be # read. We may still want to perform the expansions in whatever much # was read, and pass that exit status to the caller so they decide what # to do with it. ret=$? # We prefix the "eval" special builtin with "command" to make it lose # its "special" status (namely here, exit the script about failure, # something bash only does when in standard mode). command eval ' # the approach we take to expand word expansions would be defeated # if the user entered "EOF" which is the delimiter we chose for our # here-document, so we need to handle it as a special case: case "${'"$varname"'}" in (EOF) ;; (*) # the idea here is to have the shell evaluate the # myvar=$(command eval "cat << EOF # ${myvar} # EOF") # # shell code when $1 is myvar, so that the # # cat << EOF # contents of $myvar with $(cmd), $ENV and all # EOF # # shell code be evaluated, and those $(cmd), $ENV expansions # performed in the process '"$varname"'=$(command eval "cat << EOF ${'"$varname"'} EOF") esac' && # unless eval itself failed, return read's exit status to the caller: return "$ret" } But your problems sounds more like an XY problem. Getting input via read is cumbersome and impractical. 
It's much better to get input via arguments, and then you can leave it to the caller's shell to do the expansions as they intend it. Instead of #! /bin/sh - IFS= read -r var ls -l -- "$var" (and remember that calling read without IFS= and without -r is almost never what you want). Make it: #! /bin/sh - var=${1?} ls -l -- "$var" And then the caller can do your-script ~/dir or your-script "$HOME/dir" or your-script '$$$weird***/dir' or even your-script $'/dir\nwith\nnewline\ncharacters' as they see fit. ¹ word expansion in this context refers to parameter expansion, arithmetic expansion and command substitution. That doesn't include filename generation (aka globbing or pathname expansion), tilde expansion nor brace expansion (itself not a standard sh feature). Using a here-document here makes sure ' and "s are left untouched, but note that there still is backslash processing.
How to expand variables inside read?
1,554,498,125,000
Consider the following shell script echo foo; read; echo bar Running bash my_script outputs 'foo', waits for the return key and outputs 'bar'. While this works fine running it that way, it doesn't work anymore if piped to /bin/bash: $ echo 'echo foo;read;echo bar'|bash directly outputs 'foo' and 'bar' without waiting for a key press. Why doesn't read work anymore when using it this way? Is there any way to rewrite the script in a way it works as file script file as well as a script string piped to /bin/bash?
This is really easy, actually, First, you need to set aside your stdin in some remembered descriptor: exec 9<&0 There. You've made a copy. Now, let's pipe our commands at our shell. echo 'echo foo; read <&9; echo bar' | bash ...well, that was easy. Of course, we're not really done yet. We should clean up. exec 9<&- Ok, now we're done. But we can avoid the cleanup if we just group our commands a little... { echo 'echo foo; read <&9; echo bar' | bash; } 9<&0 The descriptor only survives as long as its assigned compound command does in that case.
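A non-interactive sketch of the same trick, with a here-string standing in for the terminal so the effect is reproducible: the inner read takes its input from fd 9 (the saved stdin), not from the pipe feeding bash.

```shell
# fd 9 carries the "keyboard" input; the pipe carries the script text.
{ echo 'read line <&9; echo "got: $line"' | bash; } 9<<< 'hello'
# prints: got: hello
```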
Wait for key in shell script that may get piped to /bin/bash
1,554,498,125,000
I have some php scripts which I run in sequential order like: php index.php import file1 --offline && php index.php import file2 --deleteUnused && php index.php import file3 Now I have newly discovered the parallel command and I tried something like that: parallel -j 3 -- "php index.php import file1 --offline" "php index.php import file2 --deleteUnused" "php index.php import file3" And it works perfectly. Is it possible that I could have a file where all the above commands are included, and to start parallel with an option to read the commands from the file? Something like this: parallel -XX myFileWithCommands.txt
Create myFileWithCommands.txt: php index.php import file1 --offline php index.php import file2 --deleteUnused php index.php import file3 Then run parallel like this: parallel -j 3 -- < myFileWithCommands.txt
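If parallel is unavailable, a roughly equivalent sketch with xargs runs up to 3 lines of the file concurrently. Note this hands each line to sh -c, so the commands in the file must be trusted.

```shell
# Hypothetical alternative: one shell per line, up to 3 at a time.
xargs -P 3 -I {} sh -c '{}' < myFileWithCommands.txt
```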
How to read commands from file?
1,554,498,125,000
I am using the following counter functionality in a script: for ((i=1; i <= 100; i++ )); do printf "\r%s - %s" "$i" $(( 100 - i )) sleep 0.25 done is there a way I can pause and resume the counter with keyboard input? (preferably with the same key - lets say with space)
Use read with a timeout -t and set a variable based on its output. #!/bin/bash running=true for ((i=1; i <= 100; i++ )); do if [[ "$running" == "true" ]]; then printf "\r%s - %s" "$i" $(( 100 - i )) fi if read -sn 1 -t 0.25; then if [[ "$running" == "true" ]]; then running=false else running=true fi fi done With this, you can press any key to pause or unpause the script. running stores true or false to tell whether we want the loop to do work or not. read -sn 1 -t 0.25 is the key: it reads one character (-n 1), suppresses echo of the keypress (-s) and waits for only 0.25s (-t 0.25). If read times out it returns a non-zero exit status, which we detect with the if, and only if a key was pressed do we toggle the status of running. You can also assign the read char to a variable and check for a specific character to limit it to only one key. if read -sn 1 -t 0.25 key && [[ "$key" = "s" ]] ; then Use "$key" == "" if you want to check for space or enter. Note that one side effect of the read + timeout is that if you hit a key the next loop will execute quicker than normal, which is made more obvious if you hold down a key. An alternative might be to use job flow control. ctrl + s will suspend a terminal and ctrl + q will resume it. This blocks the entire shell and does not work in all shells. You can use ctrl + z to suspend a process, giving you a prompt back, and use fg to resume the process again, which allows you to continue to use the shell.
Bash - pause and resume a script (for-loop) with certain key
1,554,498,125,000
So here's my deal: working in BASH, I have already built out a function which works just fine that accepts an array or any number of parameters, and spits out an interactive menu, navigable by arrows up or down, and concluding with the user hitting enter, having highlighted the menu item they desire (the output of which is either the index or the menu item's value, depending on how the menu is initiated): That's all working fine; I render the menu, then respond to the events parsed from the user's input to the invisible prompt of a read command (auto-triggered after the collection of 3 characters): read -s -n 3 key 2>/dev/null >&2 The output, having been fed into a $key variable is then run through a case statement evaluating against the predicted acceptable inputs: \033[A (up) \033[B (down) "" (enter) which in turn fire the behaviors desired. However, it then dawned on me that with the introduction of 7+ menu items (we may presuppose it shall not exceed 10) it'd be nice to let the user tap the numeric entry of the desired menu item, which would highlight the item in question without submitting it. My problem is this: I got that working just fine too, BUT the user, having typed the numeric key desired (5 in the case of this example) is then obliged to hit the enter key for the read statement to trigger its effect, courtesy of my -n 3 modifier flags on my read. This is counter to the usability model already established, unless, of course, they hit their numeric selection thrice, thereby triggering the 3-char minimum requirement (which is equally counterintuitive). The issue is that \033[A is treated as 3 characters, thereby necessitating the -n 3. 0-9 are treated as SINGLE characters (meaning if I change that to a -n 1, THEY behave as expected, but now the arrow keys fail, collecting only the escape character). So, I guess what I'm wondering is: is there a way to listen for a -n 1 {OR} 3 (whichever comes first)?
I cannot seem to send a \n or \r or similar, as until the read has resolved, they have no effect (meaning I have found no means to simply leave the -n 3 while running a parallel process to check if the entered value is a 0-9 should it prove a single character). I'm NOT MARRIED to this approach. I'm fine with using awk or sed, or even expect (though that last one I'm confused about still). I don't care if it's a read that does the collecting. Edit: SOLUTION read -n1 c case "$c" in (1) echo One. ;; (2) echo Two. ;; ($'\033') read -t.001 -n2 r case "$r" in ('[A') echo Up. ;; ('[B') echo Down. ;; esac esac Status: Resolved @choroba to the rescue! Solution Explanation I'll do my best to paraphrase: His solution involved nesting the two read statements (I'd been trying them sequentially). It was this, coupled with the -t.001 (thereby setting a near-instant timeout on the function) that enabled the carryover read. My problem was that the escape sequences I'd been monitoring were 3 characters in length (hence my setting the -n3 flag). It wasn't until afterwards that it occurred to me that accepting certain single-character inputs would be advantageous, too. His solution was to suggest a ($'\033') case: basically 'Upon reading the escape character...', create another read (this time awaiting TWO characters), set to time out after a millisecond. Since the behavior of read apparently is to "spill over" the remaining input into the next read statement, said statement started its countdown-to-timeout with the sought value having already been seeded. Since that met the defined requirement flags for the read, it then became a simple matter of testing the second set of characters for the case result (and since the initializing function is still getting the response it's expecting, albeit from a different statement, the program carries on as though it had gotten the results I'd been trying to puzzle my way to in the first place).
You can read for -n 1, and read the following two if the first one is \033 and react accordingly. Otherwise, handle the number directly. #!/bin/bash read -n1 c case "$c" in (1) echo One. ;; (2) echo Two. ;; ($'\033') read -t.001 -n2 r case "$r" in ('[A') echo Up. ;; ('[B') echo Down. ;; esac esac
BASH question: using read, can I capture a single char OR arrow key (on keyup)
1,554,498,125,000
I want to grep some line from a log file with an input from another file. I'm using this small command to do that: while read line; do grep "$line" service.log; done < input_strings.txt > result.txt input_strings.txt has like 50 000 strings (one per line). For every of this string I'm currently searching the huge service.log file (with around 2 000 000 lines). So lets say the 1st string of input_strings.txt is found in service.log at line 10 000, this line gets written to my result.txt. After that, the 2nd string of input_strings.txt will be searched in service.log, BUT starting at line 1 of service.log. How can I remember the last line I found the 1st entry in service.log? So that I can start the 2nd search-run there?
If you want to get the matches then you don't need to be using a loop at all. It would be much faster to just use a single grep command: grep -Ff input_strings.txt service.log > results.txt That said, if you want to do literally what you stated in your question, then you could use a variable to keep track of the line on which the last match was found: LINE_NUMBER=0 while read LINE; do # Search for the next match starting at the line number of the previous match MATCH="$(tail -n+${LINE_NUMBER} "service.log" | grep -n "${LINE}" | head -n1)"; # Extract the line number from the match result LINE_NUMBER="${MATCH/:*/}"; # Extract the matching string from the match result STRING="${MATCH#*:}"; # Output the matching string echo "${STRING}"; done < input_strings.txt > result.txt
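A small self-contained sketch of the single-pass approach, with stand-in data: -F takes the strings literally (no regex), -f takes the patterns from a file.

```shell
# Stand-in files for input_strings.txt and service.log.
printf 'Jimmy\nnomatch\n'            > input_strings.txt
printf 'seen Jimmy today\nnothing\n' > service.log
grep -Ff input_strings.txt service.log > result.txt
cat result.txt    # -> seen Jimmy today
```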
read file line by line and remember the last position in file
1,554,498,125,000
I have some text file (for example json). I can use head order for the reading of the first lines. For example: head -n 100 file.json get me 100 first lines back. What is Linux order, which I can use to read some general lines somewhere in file? For example from the line 500 to the line 700.
You can use sed: sed -n 500,700p file.json
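The same range selection on generated input, as a sketch; appending a q command stops sed from reading the rest of a large file once line 700 has been printed, and awk gives an equivalent one-liner.

```shell
seq 1000 > file.txt
sed -n '500,700p; 700q' file.txt | wc -l    # 201 lines (500..700)
awk 'NR >= 500 && NR <= 700' file.txt | wc -l    # 201 lines
```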
read specific lines from the file [duplicate]
1,554,498,125,000
I found bash ignores binary zero on input when reading using the read buildin command. Is there a way around that? The task is reading from a pipe that delivers binary data chunks of 12 bytes at a time, i.e. 2 ints of 16 bit and 2 ints of 32 bit. Data rate is low, performance no issue. Since bash variables are C-style, the obvious read -N 12 struct does not work, bytes beyond a NUL are not accessible. So I figured I need to read the data byte by byte, using read -N 1 byte. Problems easy to fix are escapes (requires -r), and UTF multi-byte character coding (export LC_ALL=C). The problem I'm so far unable to solve is to deal with zero bytes. I thought they'd show up as empty variable byte, but in fact read -r -N 1 byte does not return at all upon zero (ignores zeros) but returns with the next following non-zero byte in the data stream. This is what I was attempting to do, which, as long as no zero comes in, works without flaws: export LC_ALL=C while true; do for ((index = 0; index < 12; index++)) do read -r -N 1 byte if [ -n "${byte}" ]; then struct[${index}]=$(echo -n "${byte}" | od -An -td1) else struct[${index}]=0 fi done ... # some arithmetics reconstructing the four bitfields and processing them done < pipe It turns out the else branch in the if is never taken. A data chunk of 12 bytes that includes a zero does not make the for loop run 12 times, instead it awaits more data to fill the struct array. I demonstrated the behaviour by feeding the pipe 12 bytes using the command echo -en "ABCDE\tGH\0JKL" > pipe Since it is so easy to fool oneself with this, I verified the sending of zeros with ~# mkfifo pipe ~# od -An -td1 <pipe & [1] 25512 ~# echo -en "ABCDE\tGH\0JKL" > pipe ~# 65 66 67 68 69 9 71 72 0 74 75 76 [1]+ Done od -An -td1 < /root/pipe Is there a way to change this behaviour of bash? Or how else can the zero bytes be read?
bash variables can't store NUL bytes (only zsh does, though see also ksh93's printf %B and typeset -b using base64 encoding). Its read builtin will also skip NUL bytes in input. However, here, you could use: LC_ALL=C IFS= read -rd '' -n1 c That is read up to one byte off a NUL-delimited record. So if $c is empty, that means either EOF (but then read's exit status would be non-zero) or a NUL byte was read. For both, you can get the numeric value of that byte with: LC_ALL=C printf -v value %d "'$c" So: while IFS= LC_ALL=C read -rd '' -n1 c && LC_ALL=C printf -v value %d "'$c" do echo "Got byte with value $value" done Would read the input one byte at a time until EOF and support NUL bytes. Or you could always do: value=$(dd bs=1 count=1 2> /dev/null | od -An -vtu1) Or with some od implementations: value=$(od -N1 -An -vtu1) Though that implies forking extra processes and run external executables (and if stdin is a terminal device, that will not put it out of icanon mode like read does).
How to read binary data including zero bytes using BASH builtin read?
1,554,498,125,000
I want to read from stdin until a string delimiter MARKER=$'\0'"BRISH_MARKER" is encountered. I tried: ❯ unset br ; print -rn -- hi${MARKER}world | { IFS= read -d "$MARKER" -r br ; cat -v } ; echo ; typeset -p br Which gives: BRISH_MARKERworld typeset br=hi So read is only using the first character of the given delimiter, \0. I want it to use the whole string. How can I achieve this? The problem I am trying to solve is that I have a process that continuously feeds a stream of data to a zsh process, and the data needs to be broken into different values using a delimiter. I was originally using just \0, but that won't allow me to use values that contain \0, so I am trying to use the current MARKER.
Yes read -d works only with single-character delimiters (in bash and ksh93, it only works with single-byte delimiters). Reading up to a delimiter also means that you need to read one byte at a time (especially with non-seekable inputs such as pipes) to make sure you don't read past the delimiter, which makes it inefficient. I'd suggest using length:value records instead: write-record() { set -o localoptions +o multibyte print -rn -- "$#1:$1" } read-record() { set -o localoptions +o multibyte local len # note that in current versions of zsh, read -k0 (for empty records) # returns a non-zero exit status. eval "$1=" IFS= read -rd: len || return ((len == 0)) || read -u0 -k "$len" "$1" } (disabling multibyte locally in those to work with length in byte and avoid the useless character encoding/decoding here). Then as an example: $ (write-record $'é\0x'; write-record $'foo\0MARKER') | { read-record a && read-record b; printf "<%q>\n" "$a" "$b"; } <é$'\0'x> <foo$'\0'MARKER>
zsh: Read from stdin until a string delimiter
1,554,498,125,000
So I have an auto clicker script that is this simple command: xdotool click --delay 5 --repeat 900000 1 I have to switch to the terminal and Ctrl-C interrupt the script to stop it. Then just run it again to restart. So I started to use the read command to check for key input to avoid this switching back and forth. However that only checks for input at the terminal. I am clicking somewhere else, and want to be able to start and stop from there. Is there a version of read that can check for keystrokes globally?
By typing xinput --list, you get a list of all the input devices in your system. You can also programmatically get the state of each key using xinput --query-state DEVICE_ID. 1 class : KeyClass key[0]=up key[1]=up key[2]=up ... First, you will need to figure out the keycode you want to use. You can do this by running xinput --test DEVICE_ID, and pressing the key. key press ### key release ### Once you find the correct key, make a script like this. #!/bin/bash while true; do # Replace DEVICE_ID and KEYCODE. inp=`xinput --query-state DEVICE_ID | grep -o 'key\[KEYCODE\]=down'` if [ ! -z "$inp" ]; then xdotool click 1 fi done (The brackets are escaped so grep matches them literally, and the pattern looks at the key[...] entries shown by query-state.) This will spam click the mouse button while the user holds down the chosen key.
How to make an auto clicker with global start and stop key
1,554,498,125,000
My bash script needs to read the variables from a text file that contains hundreds of variables, most of which conform to standard bash variable syntax (e.g., VAR1=VALUE1). However, a few lines in this file can be problematic, and I hope to find a simple solution to reading them into bash. Here's what the file looks like: #comment VAR1=VALUE1 VAR_2=VALUE_2 ... VAR5=VALUE 5 (ASDF) VAR6=VALUE 6 #a comment VAR7=/path/with/slashes/and,commas VAR8=2.1 VAR9=a name with spaces VAR_10=true ... VAR_N=VALUE_N The rules about the file structure include: one variable (with value) per line the assignment (=) has no spaces around it the variable name is in the first column (unless it is a comment line) comments can follow the value (after #) the values can include spaces, parens, slashes, commas, and other chars the values can be floating point numbers (2.1), integers, true/false, or strings. string values are not quoted, and they can be a thousand chars long or longer the variable name contains only letters and underscores UPDATE: and numbers. Most of the variables are of a type that would just allow me to source the file into my bash script. But those few problematic ones dictate a different solution. I'm not sure how to read them.
While you can transform this file to be a shell snippet, it's tricky. You need to make sure that all shell special characters are properly quoted. The easiest way to do that is to put single quotes around the value and replace single quotes inside the value by '\''. You can then put the result into a temporary file and source that file. script=$(mktemp) sed <"config-file" >"$script" \ -e '/^[A-Z_a-z][A-Z_a-z]*=/ !d' \ -e s/\'/\'\\\\\'\'/g \ -e s/=/=\'/ -e s/\$/\'/ I recommend doing the parsing directly in the shell instead. The complexity of the code is about the same, but there are two major benefits: you avoid the need for a temporary file, and the risk that you accidentally got the quoting wrong and end up executing a part of a line as a shell snippet (something like dangerous='$(run me)'). You also get a better chance at validating potential errors. while IFS= read -r line; do line=${line%%#*} # strip comment (if any) case $line in *=*) var=${line%%=*} case $var in *[!A-Z_a-z]*) echo "Warning: invalid variable name $var ignored" >&2 continue;; esac if eval '[ -n "${'$var'+1}" ]'; then echo "Warning: variable $var already set, redefinition ignored" >&2 continue fi line=${line#*=} eval $var='"$line"' esac done <"config-file"
Read non-bash variables from a file into bash script
1,554,498,125,000
I am trying to redirect the output of a python script as an input into an interactive shell script. test.py print('Hello') print('world') Say test.py is as above prints "Hello world" which is feed to two variables using Here string redirection as below Interactive script : read a b <<< `python3 test.py` This is not working as expected in Rhel 8 server while it works fine in Rhel 7 Rhel 8: tmp> read a b <<< `python3 test.py` tmp> echo $a $b Hello tmp> cat /etc/redhat-release Red Hat Enterprise Linux release 8.3 (Ootpa) variable b is empty in rhel 8 Rhel 7: tmp> read a b <<< `python3 test.py` tmp> echo $a $b Hello world tmp> cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.8 (Maipo) while the read & Here string works fine in both cases as below tmp> read a b <<< "Hello world" tmp> echo $a $b Hello world
read a b reads two words from one line (words delimited by $IFS characters, and word and line delimiters escaped with \). Your python script outputs 2 lines. Older versions of bash had a bug in that cmd <<< $var or cmd <<< $(cmd2) was applying word splitting to the expansion of $var and $(cmd2) and joining the resulting elements back with spaces to make up the contents of the here-string (see for instance Why does cut fail with bash and not zsh?). That was fixed in version 4.4 which explains why you don't get what you expect any longer. To read the first two lines of the output of a command into the $a and $b variables in bash, use: { IFS= read -r a IFS= read -r b } < <(cmd) Or (not in interactive shells): shopt -s lastpipe cmd | { IFS= read -r a IFS= read -r b } Or without lastpipe: cmd | { IFS= read -r a IFS= read -r b something with "$a" and "$b" } # $a and $b will be lost after the `{...}` command group returns To join lines of the output of a command with spaces, use: cmd | paste -sd ' ' -. Then you can do: IFS=' ' read -r a b < <(cmd | paste -sd ' ' -) If you like. You can also read the lines into the elements of an array with: readarray -t array < <(cmd) And join the elements of the array with the first character of $IFS (space by default) with "${array[*]}".
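A sketch of the two-read pattern under bash, with a fixed two-line producer standing in for the python script:

```shell
# Read the first two lines of a command's output into a and b.
{ IFS= read -r a; IFS= read -r b; } < <(printf 'Hello\nworld\n')
echo "$a $b"    # -> Hello world
```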
backtick or Here string or read is not working as expected in RHEL 8
1,554,498,125,000
I'm using read to read a path from a user like so: read -p "Input the file name: " FilePath The user now enters this string: \Supernova\projects\ui\nebula What can I do to replace \ with /. The result I want is: /Supernova/projects/ui/nebula By the way, the command: echo $FilePath outputs the result: Supernovaprojectsuinebula I have no idea what's wrong with it.
The problem is that read treats backslash in its input as an escape operator (to escape the separators when using read word1 word2 and the newline to allow line continuation). To do what you want, you need to add the -r switch to read which tells it not to do that backslash processing, and you also need to set $IFS to the empty string so it treats the whole line as the whole word to assign to the variable¹: IFS= read -rp "Input file name: " FilePath UnixPath="${FilePath//\\//}" Additionally, your echo commands needs double quotes around the variable substitution: echo "$FilePath", and anyway in general echo can't be used to output arbitrary data², so: printf '%s\n' "$FilePath" ¹ with the default value of $IFS, to prevent it from stripping leading and trailing blanks (space and tab) from the input line ² and for echo as well, backslash is or can be treated specially, even in bash depending on the environment or how it was built.
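A sketch of the substitution on the example path from the question (run under bash; ${var//pattern/replacement} is not POSIX sh): the pattern is an escaped backslash, the replacement a slash.

```shell
FilePath='\Supernova\projects\ui\nebula'
UnixPath="${FilePath//\\//}"
printf '%s\n' "$UnixPath"    # -> /Supernova/projects/ui/nebula
```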
Read a variable with "read" and preserve backslashes entered by the user
1,554,498,125,000
#!/bin/bash echo 123456789 > out.txt exec 3<> out.txt read -n 4 <&3 echo -n 5 >&3 exec 3>&- Was asked the content of out.txt at the end of script, in an interview written exam. I did run the script afterwords and it gave me 123456789. Yet I have no idea what is going on in the script, especially the parts with the exec statements. I looked up the manpage and google search results for exec and could not find anything on the 3<> bit. Could somebody, well versed in shell scripting, explain what is going on here?
echo 123456789 > out.txt writes the string 123456789 in the out.txt file. The exec 3<>out.txt construct opens the file out.txt for reading < and writing > and attaches it to file descriptor #3. read -n 4 <&3 reads 4 characters, leaving fd #3's offset at position 4. echo -n 5 >&3 then writes a 5 at that offset, overwriting the existing 5th character — which is already a 5 — so the file content is unchanged. exec 3>&- closes file descriptor #3. Resulting in cat out.txt 123456789 The section about exec in bash(1) states that: exec [-cl] [-a name] [command [arguments]] If command is specified, it replaces the shell. [...] If command is not specified, any redirections take effect in the current shell [...].
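A variant that makes the in-place write visible (run under bash): writing a different character at the offset left by read shows where fd 3 is pointing.

```shell
echo 123456789 > out.txt
exec 3<> out.txt
read -n 4 <&3     # consumes "1234"; fd 3 is now at offset 4
echo -n X >&3     # overwrites the 5th character
exec 3>&-
cat out.txt       # -> 1234X6789
```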
What is this script doing?
1,554,498,125,000
I have a string -w o rd. I need to split it to w o r d or to an array for 'w' 'o' 'r' 'd' it doesn't really matter. I have tried the following IFS='\0- ' read -a string <<< "-w o rd" echo ${string[*]} rd isn't getting split. How can I make it get split
You can't use IFS in bash to split on nothing (it has to be on a character). There's no characters between r and d in rd. No space and no character isn't the same as the null character. If you want each character as a separate element in the array, one way I can think of is to read each character individually and append it to an array (and using IFS to get rid of spaces and -): bash-4.4$ while IFS=' -' read -n1 c ; do [[ -n $c ]] && foo+=("$c"); done <<<"-w o rd" bash-4.4$ declare -p foo declare -a foo=([0]="w" [1]="o" [2]="r" [3]="d")
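An alternative sketch that avoids read entirely: strip the unwanted characters first, then index into the string one character at a time.

```shell
s='-w o rd'
s=${s//[- ]/}            # drop the dash and the spaces: "word"
chars=()
for ((i = 0; i < ${#s}; i++)); do
    chars+=("${s:i:1}")
done
echo "${chars[@]}"       # -> w o r d
```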
How can I split a string to letters with IFS and read at no space/null, space and at a character -?
1,554,498,125,000
I have this script: #!/bin/bash function main { while read -r file; do do_something "$file" done <<< $(find . -type f 2>/dev/null) } function do_something { echo file:$@ } On linux, it works fine, but on Mac (Bash version 5.2), it treats all files found as one item, and passes the whole string without line breaks to do_something. And if I run this: while read -r file; do echo file:"$file" done <<< $(find . -type f 2>/dev/null) directly in a Bash terminal on the Mac, it also works fine. So what goes wrong?
In older versions of bash, in <<< $param$((arith))$(cmdsubst) where <<< is the here-string operator copied from zsh, such unquoted expansions were subject to $IFS-splitting and the resulting words joined back with space and the result stored in the temporary file which makes up the target of the redirection. That was fixed in 4.4. See corresponding entry in CWRU/changelog: 2015-09-02 redir.c - write_here_string: don't word-split the here string document. The bash documentation has always said this doesn't happen, even though bash has done so for years, and other shells that implement here- strings don't perform any word splitting. The practical effect is that sequences of IFS characters are collapsed to spaces. Fixes bug reported by Clint Hepner <[email protected]> But macos still uses an ancient version of bash. You might have a newer bash installed somewhere else but AFAIK, there, /bin/bash as used in your shebang is 3.2.x, not 5.2. While quoting the $(find...) would address that particular issue, here it's definitely the wrong way to iterate over find's output. See why and correct alternatives at Why is looping over find's output bad practice? Namely, if you have to use a bash loop (would also work in zsh): while IFS= read -rd '' -u3 file; do something with "$file" done 3< <(find . -type f -print0 2> /dev/null) (process substitution, -r, -u, -d, all copied from ksh were all introduced before or in the same version as <<< in 2.05b, so should be available in macos' /bin/bash) Since macos has had zsh preinstalled since forever, you could also switch to that and just write: for file (**/*(ND.)) something with $file See also: When is double-quoting necessary? Understanding "IFS= read -r line" Why is printf better than echo?
"read -r" builtin in bash script acts differently on Mac
1,554,498,125,000
I have a problem with a simple read. I read a list of xml items and then I work with them. At some point I need to ask if I'm sure and accept this response in a variable. My problem is that if I ask inside the "while read linea" loop, the "read -p ..." is ignored and I can't answer the question. xml2 < list | egrep "item" | egrep "url|pubDate|title" | while read linea; do case 1 in $(($x<= 1))) ... ;; $(($x<= 2))) ... ;; $(($x<= 3))) .... if [ $DIFERENCIA -lt $num_dias ]; then ... read -p “Are you sure: ” sure ... fi ... ;; *) let x=1 ;; esac done Thanks
use this one instead : read -p "Are you sure: " sure </dev/tty Quotes should be ascii 0x22, not UNICODE U-201c “ and U-201d ”.
Read line is ignored
1,554,498,125,000
I have two files: In one I have a list of strings, which need to removed if the corresponding line in the other file contains a string "NOPE". If it contains "YES" it stays there. Note that removing one line might destroy the order. The format is like this: 1.txt: Jimmy John Johnson 2.txt: YES NOPE YES Correct Output: Jimmy Johnson What's the simplest way to do this for thousands of entries?
You could do something like paste 2.txt 1.txt | awk '$1 == "YES" {print $2}' (for single-word strings) or awk 'NR==FNR && $0=="YES" {i[FNR]; next} FNR in i' 2.txt 1.txt
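With the sample files from the question (written to a scratch directory here), the paste | awk pipeline keeps exactly the names whose flag line is YES:

```shell
dir=$(mktemp -d)
printf 'Jimmy\nJohn\nJohnson\n' > "$dir/1.txt"
printf 'YES\nNOPE\nYES\n'       > "$dir/2.txt"

# Pair flag and name line by line, keep names flagged YES
paste "$dir/2.txt" "$dir/1.txt" | awk '$1 == "YES" {print $2}'
```

This prints Jimmy and Johnson, matching the expected output.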
How to sort out the wrong entries in the most simple way depending on the corresponding line in the other file?
1,554,498,125,000
Suppose we have the following program, which calls read() twice: #include <stdio.h> #include <unistd.h> #define SIZE 0x100 int main(void) { char buffer1[SIZE]; char buffer2[SIZE]; printf("Enter first input: \n"); fflush(stdout); read(STDIN_FILENO, buffer1, SIZE); printf("Enter second input: \n"); fflush(stdout); read(STDIN_FILENO, buffer2, SIZE); printf("\nFirst input:\n%s", buffer1); printf("\nSecond input:\n%s", buffer2); return 0; } When we call it directly, we can enter 1 for the first input and 2 for the second input in order to have it print: First input: 1 Second input: 2 How can this be achieved when using input redirection? The following methods don't work, since the first read consumes both inputs: Pipe redirection: $ { echo "1"; echo "2"; } | ./main_read Enter first input: Enter second input: First input: 1 2 Second input: Heredoc redirection: $ ./main_read << EOF 1 2 EOF Enter first input: Enter second input: First input: 1 2 Second input: The assumption is that the source code cannot be changed, and that the input must sometimes be shorter than SIZE. Is there any way to signal the first read() to stop reading, in order for the second read() to consume the rest of the input?
This is likely not to provide an acceptable solution to you, but considering that: the source code cannot be changed the shell can not change where the open file descriptors of a running program point to, nor make a running program stop reading from a file descriptor Some alternatives you are left with (short of exploiting a race condition) are: Trying to make sure your program is always fed with SIZE bytes at a time: {
  echo foo | dd bs=256 conv=sync
  echo bar | dd bs=256 conv=sync
} 2>/dev/null | ./main_read Output: Enter first input: 
Enter second input: 

First input:
foo

Second input:
bar This assumes, as a minimum, that SIZE is smaller than the size of the pipe buffer. Wrapping the invocation of your program in an expect (or equivalent) script: expect <<'EOT'
spawn ./main_read
expect "Enter first input:"
send "foo\n"
expect "Enter second input:"
send "bar\n"
expect eof
EOT Or, in a way that allows you to pipe to it the output of other commands, read separately (assuming your operating system provides processes with the /dev/fd/n file descriptors): echo foo | {
  echo bar | expect 4<&0 <<'EOT'
  spawn ./main_read
  set chan [open "/dev/fd/3"]
  gets $chan line
  expect "Enter first input:"
  send "$line\r"
  close $chan
  set chan [open "/dev/fd/4"]
  gets $chan line
  expect "Enter second input:"
  send "$line\r"
  close $chan
  expect eof
EOT
} 3<&0 In both cases, the output is: spawn ./main_read
Enter first input: 
foo
Enter second input: 
bar

First input:
foo

Second input:
bar On systems (such as Linux) that allow for opening pipes in a non-blocking fashion, you may use FIFOs to make the shell read from and write to your program. For instance: mkfifo fifo
{
  exec 3<>fifo
  ./main_read 0<&3
} | sh -c '
   # read a line from the pipe
   # read from a file or a different pipe, write to fifo
   # repeat ...
   # echo the output from the pipe
   ' mysh Though, if expect is available to you, I see no compelling reason to prefer reinventing it.
Note that, however, as others have pointed out, there is no guarantee that all of your program's reads will actually get SIZE bytes.
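The effect of conv=sync is easy to verify on its own: each short input block is padded with NUL bytes up to bs, so the downstream read() always sees full blocks:

```shell
# Three bytes in, one full 16-byte block out: conv=sync pads with NULs
printf foo | dd bs=16 conv=sync 2>/dev/null | wc -c
```

This prints 16 even though only 3 bytes were written into the pipe.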
Providing input for multiple read(stdin) calls via bash input redirection
1,554,498,125,000
Say we have file like so: foo1 bar foo2 foo2 bar bar bar foo3 I want it reduced to: foo1 bar foo2 bar foo3 basically removing duplicates only if they are adjacent...I started writing a bash function but realized I have no idea how to do this: remove_duplicate_adjacent_lines(){ prev=''; while read line; do if test "$line" != "$prev"; then prev="$line"; echo "$line" fi done; } but the problem is the prev is not in scope in the while loop - is there a way to do this with bash somehow?
This is exactly what the uniq utility is for: $ uniq <File foo1 bar foo2 bar foo3 A good example might be bash history: history | uniq The above won't work because of line numbers, but this will: cat ~/.bash_history | uniq will remove repeated adjacent commands From man uniq: Filter adjacent matching lines from INPUT (or standard input), writing to OUTPUT (or standard output). With no options, matching lines are merged to the first occurrence. [Emphasis added]
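Running the question's sample input through uniq shows the adjacent duplicates collapsing while the non-adjacent repeats of bar survive:

```shell
# Only runs of identical adjacent lines are merged
printf 'foo1\nbar\nfoo2\nfoo2\nbar\nbar\nbar\nfoo3\n' | uniq
```

The output is foo1, bar, foo2, bar, foo3, exactly as requested.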
Remove duplicate adjacent lines from file
1,554,498,125,000
bash builtin command read is said to accept input from stdin, but why does the following not read in anything? $ printf "%s" "a b" | read line $ printf "%s" "$line" $ Thanks.
The problem is not with read itself, but with the pipe. In bash, that causes the second command (read, in this case) to run in a subshell. So it will actually read into a line variable, only that variable exists in a subshell and will vanish as soon as the pipeline completes. (Note that other shells behave differently, most notably ksh will run the last command of a pipeline in the current shell, so this snippet might work in ksh. In bash it will not work, as you're seeing.) A possible solution is to use <(...) process substitution for the first part of the pipeline, with an additional < to redirect that to stdin: read line < <(printf "%s" "a b") In this particular case, you could do without the printf command, then <<< would also work: read line <<<"a b"
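A side-by-side sketch of the two behaviours in bash: the variable set through the pipe never reaches the parent shell, while the here-string version does:

```shell
line=unchanged
printf '%s' "a b" | read line    # read runs in a subshell in bash
echo "after pipe:        $line"  # still "unchanged"

read line <<< "a b"              # here-string: read runs in this shell
echo "after here-string: $line"  # now "a b"
```

In ksh the first form would work too, since ksh runs the last element of a pipeline in the current shell.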
How can I make `read` read from stdin? [duplicate]
1,554,498,125,000
I'm trying to read from two fifos (read from one, if there's no content read from the other, and if there's no content in neither of both try it again later), but it keeps blocking the process (even with the timeout option). I've followed some other questions that reads from files (How to read from two input files using while loop, and Reading two files into an IFS while loop) and right now I have this code: while true; do while IFS= read -r msg; statusA=$? IFS= read -u 3 -r socket; statusB=$? [ $statusA -eq 0 ] || [ $statusB -eq 0 ]; do if [ ! -z "$msg" ]; then echo "> $msg" fi if [ ! -z "$socket" ]; then echo ">> $socket" fi done <"$msg_fifo" 3<"$socket_fifo" done Is there something I'm doing wrong? Also, I can't use paste/cat piped or it blocks the process completely.
I've seen the conversation between @etosan and @ilkkachu, and I've tested your proposal of using exec fd<>fifo: it works now. I'm not sure whether that involves some kind of problem (as @etosan has said), but at least it works. exec 3<>"$socket_fifo" # to not block the read
while true; do
    while
        IFS= read -t 0.1 -r msg; statusA=$?
        IFS= read -t 0.1 -u 3 -r socket; statusB=$?
        [ $statusA -eq 0 ] || [ $statusB -eq 0 ]; do
        if [ ! -z "$msg" ]; then
            echo "> $msg"
        fi
        if [ ! -z "$socket" ]; then
            echo ">> $socket"
        fi
    done <"$msg_fifo"
done I'll also consider your warnings about using Bash in this case.
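The key point is that opening the FIFO read-write with exec 3<>fifo means the open() itself never blocks waiting for a peer, and a -t timeout on read keeps the loop from stalling when the FIFO is empty. A minimal self-contained sketch (scratch FIFO path is made up):

```shell
dir=$(mktemp -d)
mkfifo "$dir/fifo"

exec 3<>"$dir/fifo"   # read-write open does not block waiting for a writer
echo hello >&3

# -t bounds the wait so an empty FIFO cannot hang the read
if IFS= read -r -t 5 -u 3 msg; then
    echo "got: $msg"
fi
exec 3<&-
```

On Linux a FIFO opened O_RDWR always has at least one writer (the shell itself), so reads never see EOF either; that is why the timeout, not EOF, must end each polling pass.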
Reading from two fifos in Bash
1,554,498,125,000
I'm trying to figure out how I can reliably loop a read on a pt master I have. I open the ptmx, grant and unlock it as per usual: * ptmx stuff */ /* get the master (ptmx) */ int32_t masterfd = open("/dev/ptmx", O_RDWR | O_NOCTTY); if(masterfd < 0){ perror("open"); exit(EXIT_FAILURE); }; /* grant access to the slave */ if(grantpt(masterfd) < 0){ perror("grantpt"); exit(EXIT_FAILURE); }; /* unlock the slave */ if(unlockpt(masterfd) < 0){ perror("unlockpt"); exit(EXIT_FAILURE); }; comms_in->ptmx = masterfd; Next I save the slave's name (yes I know sizeof(char) is always 1) /* get the path to the slave */ char * slavepathPtr; char * slavePath; size_t slavepathLen; if((slavepathPtr = ptsname(masterfd)) == NULL){ perror("ptsname"); exit(EXIT_FAILURE); }else{ slavepathLen = strlen(slavepathPtr); slavePath = (char *) malloc(sizeof(char) * (slavepathLen + 1)); strcpy(slavePath, slavepathPtr); }; I then create a predictably named symlink to the slave (/dev/pts/number) in /dev/custom/predictable (which was provided as an argument to this program using getopts) and verify that its permissions are safe using calls to access, lstat, readlink, symlink and confirm that the program can continue execution, otherwise it calls unlink on the symlink and terminates the thread. Finally the program ends up in this loop ssize_t read_result; ssize_t write_result; while(1){ if((read_result = read(comms_in->ptmx, ptmxio_read_buffer, sizeof ptmxio_read_buffer)) <= 0){ { /** calls thread ender routine */ pthread_mutex_lock(&COMMS_MUTEX); comms_in->thread_statuses[PTMXIO_THREAD] = THREAD_FAILED; pthread_mutex_unlock(&COMMS_MUTEX); pthread_cond_signal(&SIG_PROGRAM_FINISHED); pthread_exit((void *) comms_in); } }else if((write_result = write(STDOUT_FILENO, ptmxio_read_buffer, read_result)) != read_result){ { /** same as above */ } }; }; On the system, I can run this program and all is swell. The read blocks. 
When the pts symlink is opened with cu or picocom then bytes are successfully read up to the buffer limits either on my end or the kernel's end, depending on who's lower. The problem comes when the slave is closed. At this point, the read returns -1 -> EIO with error text: Input/output error and will continue to do so, consuming a lot of cpu time if I choose to not terminate the thread and loop. When cu or picocom or even just an echo -en "some text" > /dev/pts/number, the read blocks again, until bytes are available. In the case of the redirection into the symlink, obviously if it fills less than a buffer, read just gets that one buffer and continues to return -1 -> EIO again. What's going on? I need a method that doesn't consume a lot of CPU as this runs on a slow embedded application processor and allows me to re-establish reads without losing bytes. I noticed a thread making a call to this: ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) and can't make much sense of what the 3 options are as they're not in my Linux headers anywhere.. Note that 3 is comms_in->ptmx / masterfd. Here is an lstat on the symlink and some extra information, note that the st_mode is unchanged before and after successful and unsuccessful reads. ‘ptmxio_thread’ failed read (-1) on /dev/pts/13 /dev/pts/13: Input/output error ‘ptmxio_thread’ ptsNum (from ioctl) 13 ‘ptmxio_thread’ st_dev: 6, st_ino: 451, st_mode: 0000A1FF, st_nlink: 1 ‘ptmxio_thread’ st_uid: 000003E8, st_gid: 000003E8, st_rdev: 0, st_size: 11 ‘ptmxio_thread’ st_blksize: 4096, st_blocks: 0, st_atime: 1540963806, st_mtime: 1540963798 ‘ptmxio_thread’ st_ctime: 1540963798
It's very simple: you should open and keep open a handle to the slave side of the pty in the program handling the master side. After you got the name with ptsname(3), open(2) it. I noticed a thread making a call to this: ioctl(3, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) and can't make much sense of what the 3 options are as they're not in my Linux headers anywhere.. Note that 3 is comms_in->ptmx / masterfd. ioctl(TCGETS) is tcgetattr(3), which is also called from isatty(3) and ptsname(3). It's defined in /usr/include/asm-generic/ioctls.h. As to the SNDCTL* and SNDRV*, they're because of bugs in older versions of strace. int32_t masterfd = open("/dev/ptmx", O_RDWR | O_NOCTTY); There is no point in making your program needlessly unportable. Use posix_openpt(3) instead. slavepathLen = strlen(slavepathPtr); slavePath = (char *) malloc(sizeof(char) * (slavepathLen + 1)); strcpy(slavePath, slavepathPtr); That's what strdup(3) is for ;-) And you should also handle your read() being interrupted by a signal, unless you're absolutely sure you (and all the library functions you call) set all the signal handlers with SA_RESTART.
read(2) blocking behaviour changes when pts is closed resulting in read() returning error: -1 (EIO)
1,554,498,125,000
The Wikipedia LTO article says that every LTO drive can read out the memory chip of a tape via 13.56 MHz NFC. I expect here to find serial numbers, tape properties and usage data. How can I read this data with free and open-source software on a Linux system?
Method 1: LTO drive An LTO drive has an RFID reader inside to read data from the chip. The client can access this via a SCSI command; specifically, the READ ATTRIBUTE command (operation code 8Ch). READ ATTRIBUTE is invoked along with an Attribute Identifier that specifies the data field to be transferred. For example, according to the IBM SCSI Reference, MEDIUM SERIAL NUMBER can be read with Attribute Identifier 0401h, and LOAD COUNT with 0003h. This is open-source Linux software that sends the READ ATTRIBUTE command to the drive. Serial numbers, tape properties such as medium length and width, and usage data such as load count, total MB written, etc. are supported. Method 2: generic RFID reader Currently, Proxmark3 and ACR122U support LTO cartridge memory. Step 1: Dump all data from the chip with these readers. Install the Proxmark3 software or nfc-ltocm depending on your hardware, place the LTO cartridge onto the reader, and then send the dump command. The binary data of the chip will be stored on your storage device. Step 2: Make this binary data human-readable with this script. Here is a YouTube video demonstration.
Read the chip data from LTO tapes
1,554,498,125,000
If I do this: echo <(cat) I get: /dev/fd/63 so say at the command line I have: myapp -f <(cat) when I run it I get this error: You need to pass a file after the -f flag. The resolved file path was: '/dev/fd/63'. This path did not appear to exist on the filesystem. How can I determine if the result of the process substitution is an actual file (for validation purposes)? Here is my bash code which generated the error: if [[ -L "$file_path" ]]; then file_path="$(readlink "$file_path")"; fi if [[ ! -f "$file_path" ]]; then echo "You need to pass a file after the -f flag. The resolved file path was: '$file_path'. This path did not appear to exist on the filesystem".; return 1; fi if I get rid of my validation, code, I get this: Could not open the following file for reading: /dev/fd/63 EBADF: bad file descriptor, open '/dev/fd/63' The node.js code I am using to read from the path is: const fd = fs.openSync(file_path, 'r'); fs.read(fd, ...);
To determine, in Bash, whether a string value is a path on your current system, use [[ -e "$path" ]]. This checks whether the path exists, and doesn't make any assumptions about the type of file it points to.
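A short sketch of the test: the process-substitution path does exist (it is a pipe exposed under /dev/fd), so -e succeeds where -f may not; the helper name here is made up:

```shell
probe() {
    if [[ -e $1 ]]; then
        echo "exists"
    else
        echo "missing"
    fi
}

probe <(echo hi)       # exists: /dev/fd/NN is a live pipe
probe /no/such/path    # missing
```

Note also that following the symlink with readlink, as in the question, is what broke the check: the resolved pipe:[...] target is not a path on the filesystem, while the /dev/fd/63 symlink itself is.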
How to determine if result of process substitution is a file path
1,554,498,125,000
I wrote this testing code, and find that this program can always read the file successfully even after I canceled the read permission when running getchar(). #include <stdio.h> #include <fcntl.h> #include <unistd.h> #include <stdint.h> #include <sys/types.h> int main(){ int f = open("a.txt",O_RDONLY); uint8_t data[200]; printf("Got %d from read", pread(f, (void *)data, 200, 0)); getchar(); printf("Got %d from read", pread(f, (void *)data, 200, 0)); } This program printed Got 9 from read twice, even though I use chmod a-r a.txt during pausing. I'm pretty sure that I'm just a normal user and my process doesn't have CAP_DAC_OVERRIDE; why doesn't the second pread() return any error? My guess is, when doing read/write, kernel only check file permission on open file description, which is created with open(), and don't change even I changed the file permission on the underlying filesystem. Is my guess correct? Extra question: If I'm right about this, then how about mmaped regions? Do kernel only check permissions recorded in page table when I read/write/execute that mmaped region? Is that true inode data stored in filesystem is only used when creating open file description and mmap region?
Yes, permissions are only checked at open time and recorded. So you can't write to a file descriptor that you opened for read-only access, regardless of whether you could otherwise open the file for writing. The kernel consults in-memory inodes rather than the ones stored in the filesystem. They differ in the reference count for open files, and mount points get the inode of the mounted file. If I'm right about this, then how about mmaped regions? Same. (The PROT_* flags passed to mmap() are equivalent to the O_RDWR / O_RDONLY / O_WRONLY flags passed to open().) Do kernel only check permissions recorded in page table when I read/write/execute that mmaped region? I'm not sure when else it could check permissions recorded in the page table :-). As far as I understand your question: yes. Is that true inode data stored in filesystem is only used when creating open file description and mmap region? Inode permissions are also checked for metadata operations, e.g. mkdir() (and similarly open() with O_CREAT). And don't forget chdir(), which is different from any open() call. (Or at least, it is different from any open() call on current Linux.) I'm not sure about SELinux-specific permissions.
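The same check-at-open behaviour can be demonstrated from the shell without any C: a descriptor opened while the file was readable keeps working after the mode is changed to 000 (shown here with a temporary file; as a non-root user the later open would fail, but the existing descriptor works either way):

```shell
f=$(mktemp)
echo secret > "$f"

exec 3< "$f"     # the permission check happens here, at open()
chmod 000 "$f"   # later mode changes don't affect the open descriptor

IFS= read -r line <&3
echo "still readable: $line"

exec 3<&-
rm -f "$f"
```

This mirrors what the question's pread() test observed: the second read succeeds because the open file description was created before the chmod.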
Does Linux kernel check file permissions on inode or open file description?
1,554,498,125,000
I am trying to set a while read loop line to read an input text file line by line and pass two strings as variables on each line in the text file. while IFS= read -r line do # Read and pass two path strings as variables. read path1 path2 echo "$path1" echo "$path2" done < "$1" It seems to process the next line in the text file at read path1 path2 before it assigns variables for strings in each current line. How can I pass strings as variables on each line before going to the next line?
The second read inside the body of the loop is incorrect here. It actually reads one line ahead of the first read call that is part of the while loop. So for your requirement, just read both variables as part of the first read while read -r path1 path2; do
    echo "$path1"
    echo "$path2"
done < "$1" As you see here, setting IFS= is also incorrect when reading two variables, because resetting the field separator just picks up the line as a whole. With its default value (the whitespace characters space, tab, and newline), reading two variables will split each whitespace-delimited line and store the values separately. This way we could have an n-column delimited line and use n variables to read it. Now the values are available in those variables, which you could pass to other commands as needed. Let's see how this works for a sample input file foo bar
foo1 bar1
foo2 bar2 Running the corrected script in debug mode with -x set $ bash -x script.sh
+ read -r path1 path2
+ echo foo
foo
+ echo bar
bar
+ read -r path1 path2
+ echo foo1
foo1
+ echo bar1
bar1
+ read -r path1 path2
+ echo foo2
foo2
+ echo bar2
bar2
+ read -r path1 path2
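Fed the sample lines on stdin, the corrected loop splits each line into the two variables:

```shell
# Each line is split on whitespace into path1 and path2
printf 'foo bar\nfoo1 bar1\nfoo2 bar2\n' |
while read -r path1 path2; do
    echo "first=$path1 second=$path2"
done
```

Note that if a line held more than two fields, path2 would receive everything after the first field, since the last variable named in read absorbs the remainder of the line.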
Problem using read command within while read loop
1,554,498,125,000
I'm trying to write a simple script that read from standard input, using ; character as delimiter to terminate the input line and that allows user to edit line. Here is my test script: #!/bin/bash while true; do read -e -d ";" -t 180 -p "><> " srcCommand if [ $? != 0 ]; then echo "end;" echo "" exit 0 fi case "$srcCommand" in startApp) echo "startApp command";; stopApp) echo "stopApp command";; end) echo "" exit 0 ;; *) echo "unknown command";; esac done This works but doesn't print the delimiter ';' char: # bash test.sh ><> startApp startApp command ><> stopApp stopApp command ><> end If I remove -e option it prints out ; but user can't correct his mistake using backspace character and echoed strings are just right after the delimiter: # bash test.sh ><> startApp;startApp command ><> stopApp;stopApp command ><> end; How can I print out the delimiter character and allow user to edit line while reading standard input? This is the expected behavior: # bash test.sh ><> startApp; startApp command ><> stopApp; stopApp command ><> end; Thanks
I'd use zsh where the line editor has many more capabilities and is a lot more customizable: #! /bin/zsh -
insert-and-accept() {
  zle self-insert
  # RBUFFER= # to discard everything on the right
  zle accept-line
}
zle -N insert-and-accept
bindkey ";" insert-and-accept
bindkey "^M" self-insert
vared -p "><> " -c srcCommand With bash-4.3 or above, you can do something similar with a hack like: # bind ; to ^Z^C (^Z, ^C otherwise bypass the key binding when entered
# on the keyboard). Redirect stderr to /dev/null to discard the
# useless warning
bind '";":"\32\3"' 2> /dev/null

# new widget that inserts ";" at the end of the buffer.
# If we did bind '";":";\3"', readline would loop indefinitely
add_semicolon() {
  READLINE_LINE+=";"
  ((READLINE_POINT++))
}

# which we bind to ^Z
bind -x '"\32":add_semicolon' 2> /dev/null

# read until the ^C
read -e -d $'\3' -t 180 -p '><> ' srcCommand Note that in that version, the ; is always inserted at the end of the input buffer, not at the current cursor position. Change the add_semicolon to: add_semicolon() {
  READLINE_LINE="${READLINE_LINE:0:READLINE_POINT++};"
} If you want it inserted at the cursor and everything on the right discarded. Or: add_semicolon() {
  READLINE_LINE="${READLINE_LINE:0:READLINE_POINT};${READLINE_LINE:READLINE_POINT}"
  READLINE_POINT=${#READLINE_LINE}
} if you want to insert it at the cursor but want to preserve what's on the right like in the zsh approach. If you don't want the ; in $srcCommand, you can always strip it afterwards with srcCommand="${srcCommand//;}" for instance, but you'd need to insert it in the widget for it to be displayed by zle/readline.
How can I print out the delimiter character and allow user to edit line while reading standard input?
1,554,498,125,000
I have a bash script that takes input commands. echo "Enter the id: "
read id I'd like to know if there's a way to limit the number of characters that can be entered for the id. For example, the user should only be able to enter 5 characters for the id. Is that possible? Thank you.
So with bash there are (at least) two options. The first is read -n 5. This may sound like it meets your needs. From the man page -n nchars read returns after reading nchars characters rather than waiting for a complete line of input, but honor a delim- iter if fewer than nchars characters are read before the delimiter. BUT there's a gotcha here. If the user types abcde then the read completes without them needing to press RETURN. This limits the results to 5 characters, but may not be a good user experience. People are used to pressing RETURN. The second method is just to test the length of the input and complain if it's too long. We use the fact that ${#id} is the length of the string. This results in a pretty standard loop. ok=0 while [ $ok = 0 ] do echo "Enter the id: " read id if [ ${#id} -gt 5 ] then echo Too long - 5 characters max else ok=1 fi done If you want it to be exactly 5 characters then you can change the if test from -gt to -eq.
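The length test can be factored into a small helper, which makes the two branches easy to check in isolation (the function name is made up):

```shell
# Wraps the ${#var} length test from the loop above
check_id() {
    if [ "${#1}" -gt 5 ]; then
        echo "Too long - 5 characters max"
        return 1
    fi
    echo "ok"
}

check_id abcde    # ok
check_id abcdef   # Too long - 5 characters max
```

The same helper can then be called from the read loop in place of the inline if.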
Max character length for Read command (input)
1,554,498,125,000
I use bash printf function to print ASCII codes of characters in an input file, but for some reason printf outputs ascii code 0 for LF characters, instead of 10. Any ideas why? while IFS= read -r -n1 c do ch=$(LC_CTYPE=C printf "%d\n" "'$c") # convert to integer echo "ch=$ch" done < input_file_name To be honest, I am not even sure if this is a problem with printf or it is the read function, which supplies the wrong value of LF... Are there other ways how to convert characters to ASCII using bash commands?
First, your printf invocation works perfectly: $ export c=" "
$ LC_CTYPE=C printf "%d\n" "'$c"
32 But running the script with -vx set shows that the data reaching this line is wrong (I won't paste that output), so I figure it is the read that is wrong. The default line delimiter for read is newline, so I tried altering that. This seems to work: while IFS= read -d\0 -r -n1 c; do ch=$(LC_CTYPE=C printf "%d\n" "'$c") ; echo "ch=$ch"; done < input_file_name
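The character-to-code conversion itself can be wrapped in a tiny helper to check a few known ASCII values, including newline (the helper name is made up):

```shell
# Leading-quote trick: printf %d "'X" prints the code point of X
to_code() { LC_CTYPE=C printf '%d\n' "'$1"; }
nl=$'\n'

to_code a       # 97
to_code ' '     # 32
to_code "$nl"   # 10
```

Passing the newline quoted in a variable avoids it being eaten by word splitting on the command line.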
bash read of a newline, printf reports character 0
1,554,498,125,000
I'm trying to read user input char by char, silently, as follows: while [ 1 ]; do read -s -N 1 ... done While this loop works perfectly using VNC (xterm), it works only partially using putty (xterm) or a Linux terminal, and most of other text terminals. The problem is encountered when I become "wild" with the keyboard and striking multiple keys at the same time, and than some of the keys are echoed despite of the -s mode. I've also tried to redirect output and stty -echo. while the first did not make any difference, the latter would be somehow helpful, minimizing the "echo"s to be less frequent, but not perfect. Any Ideas?
read -s disables the terminal echo only for the duration of that read command. So if you type something in between two read commands, the terminal driver will echo it back. You should disable echo and then call read in your loop without -s: if [ -t 0 ]; then saved=$(stty -g) stty -echo fi while read -rN1; do ... done if [ -t 0 ]; then stty "$saved" fi
Reading char-by-char silently does not work
1,554,498,125,000
An application is loaded by mounting its content from a distant machine to a local dir. On some machines there is slow performance and I'd like to check the read speed of the files in the mounted dir. hdparm -Tt /dev/<dev_name> works for drives and it is exactly what I require from the output but I cannot seem to run it for the mounted directory specifically. Can you suggest me anything which could help to test the read speed for the files in that mounted dir? Thanks in advance.
bonnie++ is a commonly suggested tool for checking disk performance in a directory: bonnie++ -d $DIREC For quick order of magnitude answers I might be inclined to use pv (pipe view) cat file | pv > disk
Read speed from a mounted directory
1,461,439,018,000
In a bash script I'm reading some user input with read. Now I want to provide the possibility to auto-complete the input by hitting the Tab key. Easy example: Let's say, the user shall type in a name from a limited domain. In the script I have an array containing all valid names, those should be included in the auto-completion-suggestions. I already tried something with programmable completion, but I want the script to be portable, i.e., everything should be in this very script. Something comparable would be mysql - if you type SELECT * FROM and the hit Tab it shows all available tables in the database (and actually all columns). Should work on Mac OS X.
Use rlwrap, a readline wrapper. From man rlwrap rlwrap runs the specified command, intercepting user input in order to provide readline's line editing, persistent history and completion. and from the Debian rlwrap package description: This package provides a small utility that uses the GNU readline library to allow the editing of keyboard input for any other command. Input history is remembered across invocations, separately for each command; history completion and search work as in bash and completion word lists can be specified on the command line. A very simple example script: #! /bin/bash ynm=(Yes No Maybe) reply=$(rlwrap -S 'Do you want to continue? ' -H ~/.jakob.history -e '' -i -f <(echo "${ynm[@]}") -o cat) echo "reply='$reply'" This uses rlwrap's one-shot mode to run cat (to get stdin) but accept only one line of input. -o cat is rlwrap's recommended substitute for read. Command-line history is stored in ~/.jakob.history, and the completion items are in the bash array $ynm. rlwrap expects a file as the argument to the -f option. Fortunately, we can use process subsitution <(echo "${ynm[@]}") to supply an array rather than a file. -i enables case-insensitivity for completions. The -e '' stops rlwrap from appending a space after a successful completion (so that $reply ends up containing, e.g., 'Maybe' rather than 'Maybe ' with a trailing space) If you want a default already pre-typed on the input line, you could use the -P or --pre-given option - e.g. add -P Yes to the rlwrap command in the example script above. The user would only have to hit Enter to accept or backspaces or Ctrl-U to erase the default (as is normal for readline in emacs mode). see man rlwrap for details and more options. e.g. you can enable filename completion with -c or --complete-filenames. Check if rlwrap is packaged for your distro (it is for Debian and probably Ubuntu/Mint/etc, at least) before downloading and compiling the source.
Bash script - auto complete for user input based on array data
1,461,439,018,000
I have an album of 11 .flac audio files. (edit: since this issue has been resolved, it's now clear that the precise names and content of the files are irrelevant, so I've renamed them): > find . -name "*.flac" ./two.flac ./ten.flac ./nine.flac ./eight.flac ./seven.flac ./three.flac ./four.flac ./five.flac ./one.flac ./eleven.flac ./six.flac I was converting these to .wav files with a specific sample rate and bit depth. I used ffmpeg in a Bash shell to do this. A command like this one works perfectly for each of the 11 files if I call it manually: ffmpeg -i "./six.flac" -sample_fmt s16 -ar 44100 "./six.wav" However, when I wrote a while loop to run this command for each file: find . -name "*.flac" | sed "s/\.flac$//" | while read f; do ffmpeg -i "$f.flac" -sample_fmt s16 -ar 44100 "$f.wav"; done This worked, but only for 8/11 of the files, with ffmpeg giving the following error messages for the three failures: /ten.flac: No such file or directory ree.flac: No such file or directory ix.flac: No such file or directory For ./ten.flac, the leading . in the relative file path was truncated, resulting in an absolute path, and the other two, ./three.flac and ./six.flac, lose even more characters, including from the start of their basenames. Some of the other eight files had ./ truncated, but this resulted in the correct relative path, so ffmpeg was able to continue in these cases. If I use the same loop structure, but with echo "$f" instead of calling ffmpeg, I see no problems with the output: > find . -name "*.flac" | sed "s/\.flac$//" | while read f; do echo "$f"; done ./two ./ten ./nine ./seven ./three ./four ./five ./one ./eleven ./six So I'm convinced the structure of the loop is fine, with "$f" expanding how I expect it to at each iteration. Somehow, when passing "$f.flac" into ffmpeg, part of the string is truncated, but only for some of the files. I just want to understand why my first attempt exhibited this behaviour. 
I'm not looking for an alternative way to loop over or convert the files (my second attempt, with a different kind of loop, was successful for all files). edit: I accidentally discovered that piping yes | into ffmpeg seems to fix this issue. I added this so I wouldn't be prompted to overwrite existing .wav files. edit: Thanks @roaima for the explanation! Turns out that ffmpeg and read both inherit the same stdin handle, so ffmpeg could consume characters from the start of each line before read got a chance, thus mangling some of the file paths. This explains why yes | ffmpeg ... worked, since it gave ffmpeg a different stdin handle. The original loop works fine with ffmpeg -nostdin .... See http://mywiki.wooledge.org/BashFAQ/089.
Unless you have a directory tree of files (not mentioned, and not shown in your example dataset), you can use a simple loop to process the files for f in *.flac
do
    w="${f%.flac}.wav"
    ffmpeg -i "$f" -sample_fmt s16 -ar 44100 "$w"
done If you really do have a hierarchy of files you can incorporate this into a find search find -type f -name '*.flac' -exec sh -c 'for f in "$@"; do w="${f%.flac}.wav"; ffmpeg -i "$f" -sample_fmt s16 -ar 44100 "$w"; done' _ {} + For efficiency you might want to skip processing of any flac that already has a corresponding wav. After the w= assignment, add [ -s "$w" ] && continue. If you really don't want that then you can further optimise the command thus, find -type f -name '*.flac' -exec sh -c 'ffmpeg -i "$1" -sample_fmt s16 -ar 44100 "${1%.flac}.wav";' _ {} \; For pre-run testing, prefix ffmpeg with echo to see what would get executed without it actually doing so. (Quotes won't be shown, though, so bear that in mind.) It turns out that the question is actually about ffmpeg chewing up the filenames it's supposed to be processing. This is because ffmpeg reads commands from stdin, and the list of files has been presented as a pipe to a while read … loop (also using stdin). Solution: ffmpeg -nostdin … or ffmpeg … </dev/null See Strange errors when using ffmpeg in a loop http://mywiki.wooledge.org/BashFAQ/089
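The stdin-stealing behaviour is reproducible without ffmpeg at all: any command inside the loop that reads stdin will do, and redirecting that command's stdin (the same effect -nostdin has for ffmpeg) restores the loop:

```shell
# cat stands in for ffmpeg: it drains the pipe the loop is reading
printf 'a\nb\nc\n' | while read -r f; do
    echo "loop: $f"
    cat > /dev/null
done
# prints only "loop: a"

# Redirecting the inner command's stdin fixes it
printf 'a\nb\nc\n' | while read -r f; do
    echo "loop: $f"
    cat > /dev/null < /dev/null
done
# prints "loop: a", "loop: b", "loop: c"
```

This also explains the partially mangled names in the question: ffmpeg consumed some bytes from the shared stdin before read got to them.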
Bash variable truncated when passed into ffmpeg
1,461,439,018,000
I have this very simple script: #!/bin/bash read local _test echo "_test: $_test" This is the output. $ ./jltest.sh sdfsdfs _test: I want the variable _test to be local only. Is this possible?
The local builtin only works inside a function. Any variable you set in your script will already be "local" to the script though unless you explicitly export it. So if you remove that it will work as expected: #!/bin/bash read _test echo "_test: $_test" Or you could make it a function: my_read () { local _test read _test echo "_test: $_test" } Even inside the function the local builtin wouldn't work in the way you have written it: Your code is actually setting a variable literally named local: #!/bin/bash read local _test echo "_test: $_test" echo "local: $local" $ ./script.sh sssss aaaaa _test: aaaaa local: sssss
How to read keyboard input and assign it to a local variable?
1,461,439,018,000
I have this line of code that reads a text file line by line. The text file is sometimes generated by a Windows user, sometimes by a Unix user. Therefore, sometimes I see \r\n at the end of the line and sometimes I see only \n. I want my script to be able to deal with both scenarios and read each line separately regardless of whether the linebreak is \r, or \n, or \r\n, or \n\r. while read -r textFileLines; do ... something ...; done < text_file.txt This code works with \n\r (LF CR) at the end of each line, but does NOT work when I have \r\n at the end of the line! TEST Create a new text file using Notepad++ v7.5.4 while read -r LINE; do echo "$LINE"; done < /cygdrive/d/test_text.txt output in Terminal: first_line second_line third_string Why isn't the fourth_output line shown?
If you have some files that are DOS text file and some that are Unix text files, you script could pass all data through dos2unix: dos2unix <filename | while IFS= read stuff; do # do things with "$stuff" done Unix text files would be unmodified by this. To additionally cope with Mac line breaks, I believe you should be able to do dos2unix <filename | mac2unix | while IFS= read stuff; do # do things with "$stuff" done The last line is not outputted by your read loop since it's not terminated, and therefore not a line at all. To detect whether a file has no terminating newline on the last line, and add one if it hasn't, in bash: if [ "$( tail -c 1 filename )" != $'\n' ]; then printf '\n' >>filename fi Related: Why is using a shell loop to process text considered bad practice?
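If installing dos2unix is not an option, one alternative (my own suggestion, not part of the answer above) is to strip a trailing CR inside the loop itself with a parameter expansion:

```shell
# ${line%$'\r'} removes one trailing carriage return, if present,
# so DOS and Unix lines are handled the same way
printf 'dos line\r\nunix line\n' |
while IFS= read -r line; do
    line=${line%$'\r'}
    printf '[%s]\n' "$line"
done
```

This needs bash (for the $'\r' quoting) but avoids an external tool.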
Treat "\r" as nothing in "while read -r"
1,461,439,018,000
I'm trying to pipe a string with special characters (e.g. HG@eg3,l'{TT\"C! to another command (termux-clipboard-set) with the read program. It seems that read was designed to create a temporary variable (e.g. read temp) that should be then passed to another command (e.g. termux-clipboard-set $temp). I'm wondering if there is a faster way to do it with a pipe, like: read | termux-clipboard-set? UPDATE: Sorry, I forgot to mention that I'm looking for something that would work on bash (termux).
For bash, read is not a program. read is a builtin. Simplified, read reads stdin and assigns that input to a variable. The read builtin does not produce any output on stdout, so trying to pipe stdout to something does not produce anything. The question is why. According to the man page, Usage termux-clipboard-set [text] Text is read either from standard input or from command line arguments. If text is read from stdin, why would you want to put something in front? Sure, you could cat | termux-clipboard-set, but just termux-clipboard-set would do the trick.
pipe the read command?
1,461,439,018,000
I have a DE_CopyOldToNew.txt file with a whole bunch of copy commands for copying old file names to new file names. The file contains rows like : cp /migrationfiles/Company\ Name\ GmbH/2014.138_Old\ File\ Name.pdf /appl/data/docs/202403/DE_2014.138_NewFile_1.pdf cp /migrationfiles/Company\ Name\ GmbH/2014.139_Old\ File\ Name.pdf /appl/data/docs/202403/DE_2014.139_NewFile_1.pdf In my shell script I am iterating over each row and execute it. echo "Start!" while read -r line do command "$line" done < /tmp/DE_CopyOldToNew.txt When executing the script I am getting the following for each row that was read... : No such file or directory6: cp /migrationfiles/Company\ Name\ GmbH/2014.138_Old\ File\ Name.pdf /appl/data/docs/202403/DE_2014.138_NewFile_1.pdf When executing the rows manually from the prompt it works without errors...
You say, I have a DE_CopyOldToNew.txt file with a whole bunch of copy commands for copying old file names to new file names. ... and you say you want to execute these cp commands. You are in luck because you have a shell script, i.e., a file containing a set of commands to be executed in order by a shell. All you need to do is to let a shell execute the commands in the script: sh DE_CopyOldToNew.txt This would start sh, a basic shell, and have it execute the file's contents, command by command.
Execute copy commands from file
1,461,439,018,000
Problem 1: I want to get the array items as user inputs at runtime; print the items and print the array length. This is what I have: read -a myarray echo "array items" ${myarray[@]} echo "array length" ${#myarray[@]} At runtime, I gave the following as input, $ ("apple fruit" "orange" "grapes") The output was, array items "apple fruit" "orange" "grapes" array length 4 which is not correct. If I don't ask for user input and instead used an array declared and initialised as part of the code as myarray=("apple fruit" "orange" "grapes") the array length is echoed as 3. So, It seems like my usage of read command is not right. Problem 2: If I add a prompt to the read command as follows, read -p "enter array items: " myarray the first item "apple fruit" gets printed as fruit" and the length is also wrong. If I remove the prompt and add -a, everything is good. If I combine both a and p and give it as read -ap, prompt doesn't popup at all. It waits for values without any message. Why is it so? Can someone explain to me what is wrong?
Problem 1: In your example, read does not get its input from a command line argument, but from stdin. As such, the input it receives does not go through bash's string parser. Instead, it is treated as a literal string, delimited by spaces. So with your input, your array values become: [0]->("apple [1]->fruit" [2]->"orange" [3]->"grapes" To do what you want, you need to escape any spaces you have, to avoid the delimiter from kicking in. Namely, you must enter the following input after invoking read: apple\ fruit oranges grapes Problem 2: In order for read to store the input it receives as an array, you must have an -a switch followed by the array name. So you need: read -a myarray -p "Enter your items"
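A sketch of both points together; the input line is supplied with a here-string instead of the keyboard:

```shell
# the escaped space keeps "apple fruit" as one element, and
# -a myarray stores the words into the array
read -a myarray <<< 'apple\ fruit orange grapes'
echo "length: ${#myarray[@]}"   # length: 3
echo "first:  ${myarray[0]}"    # first:  apple fruit
```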
Reading array values as user input gives wrong array length and only -a or -p works in read
1,461,439,018,000
I need to write a script that will add a line to a text file if Enter is pressed. But, if Ctrl+D is pressed, I need to exit that loop in the bash. touch texttest.txt LINE="0" while true; do read LINE; if [ "$LINE" == "^D" ]; then break else echo "$LINE" >> texttest.txt fi done Currently have something like this but cannot figure out how I am do exit the while loop when Ctrl+D is pressed instead of Enter.
You're overthinking it. All you need is this: cat > texttest.txt Cat will read from STDIN if you've not told it different. Since it's reading from STDIN, it will react to the control character Ctrl+D without your having to specify it. And since Ctrl+D is the only thing that will finish the cat subprocess, you don't even need to wrap it in a loop.
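If you do need a loop (say, to do something with each line), note that read itself returns non-zero at end-of-input, so a while read loop already exits on Ctrl+D without any comparison against "^D" (a sketch along those lines):

```shell
# read returns a non-zero status at end-of-input (Ctrl+D on a
# terminal), which ends the while loop by itself
while IFS= read -r LINE; do
    printf '%s\n' "$LINE" >> texttest.txt
done
```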
Bash Script using read needs to stop executing on Ctrl+D
1,461,439,018,000
Hello I have 2 files with the first file containing a few values for example powershell vectormaps JuniperSA and the second file containing values and an ID appid uid SplunkforSnort 340 powershell 610 vectormaps 729 JuniperSA 826 postfix 933 SplunkforJuniperSRX 929 TA-barracuda_webfilter 952 TA-dragon-ips 954 dhcpd 392 So I'm trying to run a while loop with AWK to get the values and their corresponding IDs but the output file seems to contain something else. This is how I'm trying to run the while loop. while read $line; do awk '/'$line'/ {print $0}' file2.csv > new done < file1 My expected output should be powershell 610 vectormaps 729 JuniperSA 826 but my output is appid uid SplunkforSnort 340 powershell 610 vectormaps 729 JuniperSA 826 postfix 933 SplunkforJuniperSRX 929 TA-barracuda_webfilter 952 TA-dragon-ips 954 dhcpd 392 It seems as if nothing is happening. What am I missing here?
Using awk $ awk 'FNR==NR {a[$1]=$2; next} {$(NF+1)=a[$1]}1' file2 file1 powershell 610 vectormaps 729 JuniperSA 826
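Spelled out with throwaway files (the /tmp paths are just for illustration): FNR==NR is only true while the first file is being read, so its ids are stored in the array a; each line of the second file is then looked up in a.

```shell
# build the lookup table from file2, then append the matching id
# to every name listed in file1
printf 'powershell 610\nvectormaps 729\nJuniperSA 826\n' > /tmp/file2
printf 'powershell\nJuniperSA\n' > /tmp/file1
awk 'FNR==NR {a[$1]=$2; next} {$(NF+1)=a[$1]}1' /tmp/file2 /tmp/file1
# powershell 610
# JuniperSA 826
rm -f /tmp/file1 /tmp/file2
```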
Trying to find complete string values from one file based on another file using AWK
1,461,439,018,000
I am trying to use read, to read from a file descriptor, like so: read -u fd as in in this link. Here is the code I am using in a bash script: MESSAGE=$(read -u $NODE_CHANNEL_FD) echo " parent message => $MESSAGE" >&2 The exact error message: read: Illegal option -u Anyone know what this could possibly be about?
The error message suggests that you are in fact not executing the script using bash. Either make the script executable and add a proper #!-line on the first line of the script, e.g. #!/bin/bash Or, execute the script with bash explicitly: $ bash script.sh You should treat sh and bash as implementing separate languages, and use the correct interpreter for the script you're writing. In this case, you're using read with the -u option. This is originally a ksh extension to the standard read specification, and it's also implemented in bash and zsh. Hence, you need to run the script with bash, zsh or ksh. How to know when to use sh and when to use bash or some other shell? It's simple, you learn the way sh works and what other features other shells add.
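Once the script really runs under bash, read -u works; a minimal sketch (the fd number and message are made up):

```shell
#!/bin/bash
# open file descriptor 3 on some input, read from it, close it again
exec 3< <(printf 'hello\n')
read -u 3 MESSAGE
echo "parent message => $MESSAGE"   # parent message => hello
exec 3<&-
```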
read: illegal option -u
1,461,439,018,000
How can I take input from read, split the words up by spaces, and then put those words into an array? What I want is: $ read sentence this is a sentence $ echo $sentence[1] this $ echo $sentence[2] is (and so on...) I'm using this to process English sentences for a text adventure.
If you are using bash, its read command has a -a option for that. From help read Options: -a array assign the words read to sequential indices of the array variable ARRAY, starting at zero So $ read -a s This is a sentence. Note that the resulting array is zero indexed, so $ echo "${s[0]}" This $ echo "${s[1]}" is $ echo "${s[2]}" a $ echo "${s[3]}" sentence. $
Split words from `read` and store to array?
1,461,439,018,000
Why does this show blank lines instead of folders found by find? ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null \ -o batchmode=yes -o passwordauthentication=no [email protected] \ "sudo find /folder/CFGKCP/KCS\ Pro/Job\ Setup -name JOBCFG.info \ | while read linea; do echo $linea; done";
As Romeo pointed out, you're using double quotes around your command. That means your variables are being expanded before the ssh command runs. So the body of the while loop is echo $linea, and before the ssh, linea probably doesn't exist, so the command that is passed will become just echo, with linea being replaced with an empty string. If you use single quotes around the command, parameter expansion will not happen, and that string will be passed through intact, so do ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null -o batchmode=yes -o passwordauthentication=no [email protected] 'sudo find /folder/CFGKCP/KCS\ Pro/Job\ Setup -name JOBCFG.info | while read linea; do echo $linea; done' or escape the $ to tell your shell not to expand it
Why does read with pipeline fail in an ssh session?
1,461,439,018,000
My goal is to initialize multiple bash variables from the output of one single command. Specifically, line i should be the value of variable i. Example: My command is a Python program with the name init.py: if __name__ == '__main__': print("Value of A") print("Value of B") print() print("Value of D") Desired outcome: echo "A='$A', B='$B', C='$C', D='$D'" # --> A='Value of A', B='Value of B', C='', D='Value of D' What I then tried without success: read A B C D < <(python init.py) # --> Effect: A='Value', B='of', C='A', D='' read -d$'\n' A B C D < <(python init.py) # --> Effect: A='Value', B='of', C='A', D='' IFS=$'\n' read A B C D < <(python init.py) # --> Effect: A='Value of A', B='', C='', D='' IFS=$'\n' read -d$'\n' A B C D < <(python init.py) # --> Effect: A='Value of A', B='', C='', D='' How to solve this problem? How to generalize this to other separators, such as the Null byte \0?
If you have an empty line, using IFS won't work, because multiple \n are squeezed. However, you can use readarray: readarray -t arr < <(python init.py) echo "A='${arr[0]}', B='${arr[1]}', C='${arr[2]}', D='${arr[3]}'" Add -d '' to delimit by \0: readarray -d '' -t arr < <(python init.py) From man bash: -d The first character of delim is used to terminate each input line, rather than newline. If delim is the empty string, mapfile will terminate a line when it reads a NUL character.
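The empty-line behaviour can be checked without the Python program, e.g. with printf standing in for it:

```shell
# readarray keeps the empty third line as an empty element,
# which word splitting on $IFS would squeeze away
readarray -t arr < <(printf 'Value of A\nValue of B\n\nValue of D\n')
echo "C='${arr[2]}' D='${arr[3]}'"   # C='' D='Value of D'
```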
Initialize several bash variables with the output of a single command
1,461,439,018,000
According to this manual -r for read: Do not allow backslashes to escape any characters I understand that generally, the read shell builtin gets input and creates a variable which holds that input as a string in which backslashes would be just literal components and wouldn't escape anything anyway. Is read -r only used in rare exceptional usecases of read (with the common denominator of the output being anything else than a string)?
I understand that generally, the read shell builtin gets input and creates a variable which holds that input as a string in which backslashes would be just literal components and wouldn't escape anything anyway. Plain read var, without -r, when given the input foo\bar, would store in var the string foobar. It treats the backslash as escaping the following character, and removes the backslash. You'd need to enter foo\\bar to get foo\bar. read can be used to read multiple values, like so: $ read a b <<< 'xx yy'; echo "<$a> <$b>" <xx> <yy> (<<< is a "here-string", the following string is provided to the command as input.) It uses the characters in IFS as separators, so whitespace by default. It's these separators that a backslash can be used to escape, making them regular characters, and removing the backslash, also if it appears in front of a regular character. So you'd get: $ read a b <<< 'xx\ yy'; echo "<$a> <$b>" <xx yy> <> $ read a b <<< 'xx\n yy'; echo "<$a> <$b>" <xxn> <yy> Being able to escape the separators is seldom useful, and removing backslashes can also be annoying if someone wants to enter a string with C-style character escapes. In addition, a backslash at the end of a line would make read wait for another line to be read as a continuation of the first, similarly to how continuation lines work in C and in the shell. With read -r, backslashes are just a regular character: $ read -r a b <<< 'value\with\backslashes\ yy'; echo "<$a> <$b>" <value\with\backslashes\> <yy> In many use cases, backslashes aren't something one would expect the user to input, and if there aren't any, read -r is the same as plain read. But in case someone were to (need to) input backslashes, using read -r may reduce the surprises involved. Hence it's probably good to use it, unless you really know you want them to be special for read (in addition to whatever special properties your program might otherwise assign to them).
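The difference is easy to see side by side on the same input:

```shell
# without -r the backslash escapes the following character and is
# removed; with -r it is kept as an ordinary character
read    a <<< 'foo\bar'; echo "$a"   # foobar
read -r b <<< 'foo\bar'; echo "$b"   # foo\bar
```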
Is read -r only used in rare exceptional usecases of read?
1,461,439,018,000
I found this to get user input from the command line, but it fails to recognize the newline characters I put into the input. Doing: #!/bin/bash read -e -p "Multiline input=" variable; printf "'variable=%s'" "${variable}"; Typing 'multi\nline' on Multiline input= makes printf output 'variable=multinline' Typing 'multi\\nline' on Multiline input= makes printf output 'variable=multi\nline' How can printf print the newline I read with read -p, i.e., output multi line instead of multinline or multi\nline? Related questions: What does the -p option do in the read command? bash: read: how to capture '\n' (newline) character? shell: read: differentiate between EOF and newline https://stackoverflow.com/questions/4296108/how-do-i-add-a-line-break-for-read-command Read arguments separated by newline https://stackoverflow.com/questions/43190306/how-to-add-new-line-after-user-input-in-shell-scripting
If typing in \n (as in the two characters \ and n) is acceptable, then you can use printf to interpret it: #!/bin/bash IFS= read -rep "Multiline input=" variable; printf -v variable "%b" "$variable" printf "'variable=%s'\n" "${variable}"; For example: ~ ./foo.sh Multiline input=foo\nbar 'variable=foo bar' From the bash manual: The backslash character ‘\’ may be used to remove any special meaning for the next character read and for line continuation. The "line continuation" bit seems to imply you can't escape newlines unless you use a different character as the line delimiter.
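The %b step can also be seen in isolation, with the user's input replaced by a fixed string:

```shell
# %b interprets backslash escape sequences in its argument,
# turning the two characters \n into an actual newline
s='multi\nline'
printf -v expanded '%b' "$s"
printf '%s\n' "$expanded"
# multi
# line
```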
How to read input lines with newline characters from command line?
1,461,439,018,000
When I run: du -sh ./*/ I get the following error: sort: read failed: ./folder/: Is a directory How do I fix this? Is there something wrong with sort on my system. I am running x86_64 Linux 4.16.8-1-ARCH.
The du utility will never produce that error message. The message comes from sort. The sort utility produces that message when given a command line argument that is a folder when it expects a file. Therefore, it is reasonable to assume that du is in fact a shell function or an alias that calls sort in such a way that sort is given a directory name as command line argument when the alias/function is called in the way it is called in the question. The alias or function is in other words buggy. That du was an alias was later confirmed by the original user in comments.
sort: read failed: ./folder/: Is a directory [closed]
1,461,439,018,000
Here is a test question that I am stuck with: Which output will the following command sequence produce? echo '1 2 3 4 5 6' | while read a b c; do echo result: $c $b $a; done And the correct answer is: 3 4 5 6 2 1 I have no clue why. Can someone please explain it to me? (At first I thought the answer was 3 2 1.)
From the read manpage: Reads a single line from the standard input, or from file descriptor FD if the -u option is supplied. The line is split into fields as with word splitting, and the first word is assigned to the first NAME, the second word to the second NAME, and so on, with any leftover words assigned to the last NAME. Only the characters found in $IFS are recognized as word delimiters. For this reason I commonly use a "trash" variable to collect anything that may be leftover: echo '1 2 3 4 5 6' | while read a b c TRASH; do echo "result is: $c $b $a" echo "trash is: $TRASH" done In use: $ echo '1 2 3 4 5 6' | while read a b c TRASH; do > echo "result is: $c $b $a" > echo "trash is: $TRASH" > done result is: 3 2 1 trash is: 4 5 6
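The same effect can be reproduced without the pipe, using a here-string (which avoids the subshell issue some shells have with piping into read):

```shell
# a and b each get one word; c, the last NAME, collects the rest
read a b c <<< '1 2 3 4 5 6'
echo "result: $c $b $a"   # result: 3 4 5 6 2 1
```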
sequence command and test question
1,461,439,018,000
In bash, is there any way to read in user input but still allow bash variable expansion? I am trying to request the user enter a path in the middle of a program but since ~ and other variables are not expanded as a part of the read builtin, users have to enter in an absolute path. Example: When a user enters a path into: read -ep "input> " dirin [[ -d "$dirin" ]] returns true when a user enters /home/user/bin but not ~/bin or $HOME/bin.
A naive way would be: eval "dirin=$dirin" What that does is evaluate the expansion of dirin=$dirin as shell code. With dirin containing ~/foo, it's actually evaluating: dirin=~/foo It's easy to see the limitations. With a dirin containing foo bar, that becomes: dirin=foo bar So it's running bar with dirin=foo in its environment (and you'd have other problems with all the shell special characters). Here, you'd need to decide what expansions are allowed (tilde, command substitution, parameter expansion, process substitution, arithmetic expansion, filename expansion...) and either do those substitutions by hand, or use eval but escape every character but those that would allow them which would be virtually impossible other than by implementing a full shell syntax parser unless you limit it to for instance ~foo, $VAR, ${VAR}. Here, I'd use zsh instead of bash that has a dedicated operator for that: vared -cp "input> " dirin printf "%s\n" "${(e)dirin}" vared is the variable editor, similar to bash's read -e. (e) is a parameter expansion flag that performs expansions (parameter, command, arithmetic but not tilde) in the content of the parameter. 
To address tilde expansion, which only takes place at the beginning of the string, we'd do: vared -cp "input> " dirin if [[ $dirin =~ '^(~[[:alnum:]_.-]*(/|$))(.*)' ]]; then eval "dirin=$match[1]\${(e)match[3]}" else dirin=${(e)dirin} fi POSIXly (so bashly as well), to perform tilde and variable (not parameter) expansion, you could write a function like: expand_var() { eval "_ev_var=\${$1}" _ev_outvar= _ev_v=${_ev_var%%/*} case $_ev_v in (?*[![:alnum:]._-]*) ;; ("~"*) eval "_ev_outvar=$_ev_v"; _ev_var=${_ev_var#"$_ev_v"} esac while :; do case $_ev_var in (*'$'*) _ev_outvar=$_ev_outvar${_ev_var%%"$"*} _ev_var=${_ev_var#*"$"} case $_ev_var in ('{'*'}'*) _ev_v=${_ev_var%%\}*} _ev_v=${_ev_v#"{"} case $_ev_v in "" | [![:alpha:]_]* | *[![:alnum:]_]*) _ev_outvar=$_ev_outvar\$ ;; (*) eval "_ev_outvar=\$_ev_outvar\${$_ev_v}"; _ev_var=${_ev_var#*\}};; esac;; ([[:alpha:]_]*) _ev_v=${_ev_var%%[![:alnum:]_]*} eval "_ev_outvar=\$_ev_outvar\$$_ev_v" _ev_var=${_ev_var#"$_ev_v"};; (*) _ev_outvar=$_ev_outvar\$ esac;; (*) _ev_outvar=$_ev_outvar$_ev_var break esac done eval "$1=\$_ev_outvar" } Example: $ var='~mail/$USER' $ expand_var var; $ printf '%s\n' "$var" /var/mail/stephane As an approximation, we could also prepend every character but ~${}-_. and alnums with backslash before passing to eval: eval "dirin=$( printf '%s\n' "$dirin" | sed 's/[^[:alnum:]~${}_.-]/\\&/g')" (here simplified on the ground that $dirin can't contain newline characters as it comes from read) That would trigger syntax errors if one entered ${foo#bar} for instance but at least that can't do much harm as a simple eval would. 
Edit: a working solution for bash and other POSIX shells would be to separate the tilde and other expansions like in zsh and use eval with a here-document for the other expansions part like: expand_var() { eval "_ev_var=\${$1}" _ev_outvar= _ev_v=${_ev_var%%/*} case $_ev_v in (?*[![:alnum:]._-]*) ;; ("~"*) eval "_ev_outvar=$_ev_v"; _ev_var=${_ev_var#"$_ev_v"} esac eval "$1=\$_ev_outvar\$(cat << //unlikely// $_ev_var //unlikely// )" } That would allow tilde, parameter, arithmetic and command expansions like in zsh above.
Can Bash Variable Expansion be performed directly on user input?
1,461,439,018,000
script #!/bin/bash -- # record from microphone rec --channels 1 /tmp/rec.sox trim 0.9 band 4k noiseprof /tmp/noiseprof && # convert to mp3 sox /tmp/rec.sox --compression 0.01 /tmp/rec.mp3 trim 0 -0.1 && # play recording to test for noise play /tmp/rec.mp3 && printf "\nRemove noise? " read reply # If there's noise, remove it if [[ $reply == "y" ]]; then sox /tmp/rec.sox --compression 0.01 /tmp/rec.mp3 trim 0 -0.1 noisered /tmp/noiseprof 0.1 play /tmp/rec.mp3 fi Errors with: read error: 0: Resource temporarily unavailable But, the script works if I use the -e flag on read to enable readline
What happens is that one of your SoX programs (sox, play, rec) has changed how stdin is behaving, making it non-blocking. Typically, something is calling fcntl(0, F_SETFL, O_NONBLOCK). When a call to the read() system call is made on a non-blocking file descriptor, the call does not wait: either there is already something to read in the kernel buffer and it is returned, or read() fails and errno is set to EAGAIN. When Bash meets this EAGAIN error while reading from stdin with the read command, it displays the "Resource temporarily unavailable" message you have met. Try adding <&- at the end of each of your SoX commands; this will close stdin for them and they won't be able to alter how it is working.
Why does `read` fail saying "read error: 0: Resource temporarily unavailable"?
1,461,439,018,000
If I run read -r -s INPUT And then interrupt it with Ctrl-C, then the terminal stays in a state where all the input characters are not shown. How can I restore the terminal after such an incident?
The command used to reset the terminal is aptly named: reset However, this would likely clear the terminal as well. You may also try stty echo which would turn on echoing of what you type, or stty sane which should get your terminal back into a sane state. If the Enter key does not seem to work, you may use Ctrl+J instead.
Reset terminal after interrupting `read -r -s`
1,461,439,018,000
While reading a script, the shell will read the script from a file, or from a pipe, or possibly some other source (stdin?). The input may not be seekable under some corner conditions (no way to rewind the file position to a previous position). It has been said that read reads stdin one byte at a time until it finds an unescaped newline character. Should the shell also read one character at a time from its script input? I mean the script, not an additional data text file that could be used. If so: why is that needed? Is it defined in some spec? Do all shells work similarly? Which do not?
Yes for a POSIX compliant shell. The bash developer had this to say: POSIX requires it for scripts that are read from stdin. When reading from a script given as an argument, bash reads blocks. And, indeed, the POSIX spec says this (emphasis mine): When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. That is: (for stdin script) the shell shall read one-character-at-a-time. In C locale, one char is one byte. It seems that posh, mksh, lksh, attsh, yash, ksh, zsh and bash conform to this requirement. However ash (busybox sh) and dash do not.
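The requirement is easy to demonstrate with bash (which conforms): a line placed after a read command in a stdin script is still there for the read builtin to consume, which could not work if the shell had buffered ahead:

```shell
# bash reads its stdin script one line at a time, so "this is data"
# has not been consumed by the shell when the read builtin runs
printf 'read line\nthis is data\necho "got: $line"\n' | bash
# got: this is data
```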
Should the shell read (an script) one character at a time?
1,461,439,018,000
I'm trying to pipe a command's output to the read builtin of my shell and I get a different behaviour for zsh and bash: $ bash -c 'echo hello | read test; echo $test' $ zsh -c 'echo hello | read test; echo $test' hello Though that doesn't work in bash, the following works for both: $ bash -c 'echo hello | while read test; do echo $test; done' hello $ zsh -c 'echo hello | while read test; do echo $test; done' hello Why is that? Am I using read wrong? I find it much more readable to use it in scripts in comparison to test="$(echo hello)" which forces me to handle quoting issues much more carefully.
You are observing the results from what has not been standardized with POSIX. POSIX does not standardize how the interpreter runs a pipe. With the historic Bourne Shell, the rightmost program in a pipeline is not even a child of the main shell. The reason for doing it this way is because this implementation is slow but needs little code - important if you only have 64 kB of memory. Since in this variant, the read command is run in a sub process, assigning a shell variable in a sub process is not visible in the main shell. Modern shells like ksh or bosh (the recent Bourne Shell) create pipes in a way where all processes in a pipe are direct children of the main shell and if the rightmost program is a shell builtin, it is even run by the main shell. This all is needed to let the read program modify a shell variable of the main shell. So only this variant (which, BTW, is the fastest) permits the main shell to see the results from the variable assignment. In your second example, the whole while loop is run in the same sub process and thus allows printing the modified version of the shell variable. There is currently a request to add support to the POSIX shell to get information on whether any of the pipeline commands has a non-zero exit code. To achieve this, a shell must be implemented in a way where all programs from a pipeline are direct children from the main shell. This is close to what is needed to allow echo foo | read val; echo $val to do what is expected, since then only the requirement to run the read in the main shell is missing.
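As an aside (not covered in the answer above): bash 4.2 and later can be asked to run the last element of a pipeline in the main shell with shopt -s lastpipe, provided job control is off, as it is in scripts. That makes the first example behave like zsh:

```shell
#!/bin/bash
shopt -s lastpipe        # run the last pipeline command in this shell
echo hello | read test   # read now executes in the main shell...
echo "$test"             # ...so the variable survives: hello
```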
read from stdin works differently in bash and zsh [duplicate]
1,461,439,018,000
I'd like to know how to emulate the ICANON behavior of ^D: namely, trigger an immediate, even zero-byte, read in the program on the other end of a FIFO or PTY or socket or somesuch. In particular, I have a program whose specification is that it reads a script on stdin up until it gets a zero byte read, then reads input to feed the script, and I'd like to automatically test this functionality. Simply writing to a FIFO doesn't result in the right thing happening, of course, since there's no zero byte read. Help? Thanks!
As far as I know, this behavior is unique to terminal devices, so that's what you have to use. Use a pseudo-tty whose slave side is in ICANON mode, and write Ctl-d ('\4') to the master side.
Triggering zero-byte reads on FIFO/pty
1,461,439,018,000
I am very novice with scripting and I cannot figure this out. I'm trying to know if there is something to read and if not then
With bash specifically, read -t0 will return true if there's something to read on stdin or the end of input has been reached, and false otherwise. if read -t0; then echo "there's something to be read on stdin, or end-of-file is reached" else echo "there's nothing that may be read from stdin at the moment" fi Note that it will return true even if what's there to be read is not a full line or even a full character, and so a subsequent read may still hang waiting for an unescaped line delimiter. If stdin is in non-blocking mode or if stdin is not readable, read -t0 will always return true.
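A sketch using a here-string so that something is guaranteed to be pending on stdin:

```shell
# with a zero timeout, read consumes nothing; its exit status only
# reports whether input is currently available
if read -t 0 <<< 'pending data'; then
    echo "there is something to read"
fi
```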
How to find out if there is something to read before calling while read?
1,461,439,018,000
For example: I want the user to input A=a and I have the command, which I guess is totally wrong. read -p "Enter something:" frsstring=secstring echo $frsstring echo $secstring
I don't know how it could be done with one command, but you could read the entire line, and then split it into the desired variables: #!/bin/bash read -p "Enter something:" line frsstring=`echo "$line" | cut -f1 -d'='` secstring=`echo "$line" | cut -f2 -d'='` echo $frsstring echo $secstring I hope it helps
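An alternative (my own variation, not part of the answer above) is to split with parameter expansion instead of the two cut pipelines; line here stands for what the user typed at the prompt:

```shell
# ${line%%=*} is everything before the first '=',
# ${line#*=}  is everything after it
line='A=a'
frsstring=${line%%=*}
secstring=${line#*=}
echo "$frsstring"   # A
echo "$secstring"   # a
```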
How to read two variables under one read command and echo them separately?
1,461,439,018,000
I created a persistent Debian 9 live usb. The persistence is configured with / union. An unexpected consequence, although obvious in hindsight, is the system lags on non-cached reads: holmes@bakerst:~$ # WRITE to disk holmes@bakerst:~$ dd if=/dev/zero of=tempfile bs=1M count=1024; sync 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.417477 s, 2.6 GB/s holmes@bakerst:~$ # READ from buffer holmes@bakerst:~$ dd if=tempfile of=/dev/null bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.0907808 s, 11.8 GB/s holmes@bakerst:~$ # Clear cache, non-cached READ speed holmes@bakerst:~$ sudo /sbin/sysctl -w vm.drop_caches=3 vm.drop_caches = 3 holmes@bakerst:~$ dd if=tempfile of=/dev/null bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.3935 s, 69.8 MB/s There is a 169X difference between cached and non-cached read operations! What can I do, if anything, to improve performance?
Get a faster USB 3 pendrive, or maybe even a USB SSD :-) You can easily improve reading from the image of the iso file (after a slow start) by putting all the content of the squash file system into RAM with the boot option toram, but I don't think it is easy or meaningful to do that with the content of the file/partition for persistence. See this link for more details. The following screenshot of the grub menu of a persistent live system made by mkusb is from Ubuntu, but looks very similar for Debian. There is already a menuentry for toram.
Speed up persistent live usb disk operations
1,461,439,018,000
I am using GNU bash - version 4.2.10(1). I want to read multiple variables using a single read command in a shell script. So I tried as below: echo " Enter P N R : " read P N R but it's not working. It just asks for a single value for the P variable and returns to the prompt. I googled it but didn't find any solution.
read, without -r expects words on input to be delimited by the characters of the $IFS special parameter (by default SPC, TAB and NL, though as read reads only one line unless it ends in backslash, NL can't count) where backslash can be used to escape the separator or allow a line to be continued on the next physical line (backslash-newline sequences removed). So, here the user must enter the values for P, N, R space or tab separated, like: value_for_P value_for_N value_for_R Or if the values can contain space: value\ for\ P value\ for\ N value for R (here we didn't bother escaping the spaces for R as the rest of the line after the third word would end-up in R anyway; the user would still need to escape a trailing space though). If you want the user to enter the values on 3 lines, you'd need 3 read invocations. You'd then want -r to avoid the backslash processing and make IFS empty: IFS= read -r P IFS= read -r N IFS= read -r R
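A non-interactive sketch of the same mechanics, with a here-string supplying the three space-separated words instead of the terminal:

```shell
# One read call assigns one whitespace-separated word per variable.
read -r P N R <<< "alpha beta gamma"
printf '%s\n' "$P" "$N" "$R"
```

Any words beyond the last variable name would all end up in the last variable, as the answer describes.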
How to use multiple variable for input with read command?
1,461,439,018,000
I opened two terminals (/dev/pts/1 and /dev/pts/2) and started my application from /dev/pts/1. I want to read in real time from /dev/pts/2 but my code doesn't work: actually some of the symbols are shown on /dev/pts/1 and some of them are shown on /dev/pts/2 FILE *f = fopen("/dev/pts/2", "r"); while(1) { char current = fgetc(f); printf("%c", current); fflush(0); }
You have two processes reading from /dev/pts/2. One is the shell (or some application) running there, the other is your application on pts/1. It's random which one is faster reading the available bytes.
How to read from another terminal?
1,461,439,018,000
I use sed with <<< and read to assign all words in a string to variables. What I do is: read -a A0 <<< $(sed '2q;d' /proc/stat) Hence, sed reads the second line of the file and immediately quits. The line sed has read is fed to <<<, which expands the input it receives from sed, and read -a assigns the resulting values to elements of an array. Is the usage of <<< and sed in this way a good idea, or are there obvious reasons against it, and most of all, is there a faster way to do this?
You can use array assignment directly: A0=($(sed '2q;d' /proc/stat)) Beware that this performs globbing: if the output of the command contains shell wildcards, then the words containing wildcards are replaced by the list of matching files if there are any. If the output of the command might contain one of the characters \[?*, temporarily turn off globbing: set -f A0=($(sed '2q;d' /proc/stat)) set +f This can be tiny faster than using read: $ time for i in {1..1000}; do read -a A0 <<< $(sed '2q;d' /proc/stat); done real 0m2.829s user 0m0.220s sys 0m0.480s $ time for i in {1..1000}; do A0=($(sed '2q;d' /proc/stat)); done real 0m2.388s user 0m0.128s sys 0m0.276s With bash 4.0 and above, you can use mapfile: mapfile -t < <(sed '2q;d' /proc/stat) But mapfile seems to be slowest: $ time for i in {1..1000}; do mapfile -t < <(sed '2q;d' /proc/stat); done real 0m3.990s user 0m0.104s sys 0m0.444s
Using sed with herestring (<<<) and read -a
1,461,439,018,000
I have a list of filenames in a file and want to do let the user decide what to do with each. In bash, iterating over filenames is not trivial in itself, so I followed this answer: #!/bin/bash while IFS= read -r THELINE; do read -n 1 -p "Print line? [y/n] " answer; if [ ${answer} = "y" ]; then echo "${THELINE}"; fi; done < tester; When I try to execute this (on a non-empty file), I get an error at the if: line 5: [: =: unary operator expected My best guess is that answer is not set properly, which would be caused by using two calls of read in a "nested" fashion since the following works as expected: #!/bin/bash for THELINE in $(cat "tester"); do read -n 1 -p "Print line? [y/n] " answer; if [ ${answer} = "y" ]; then echo "${THELINE}"; fi; done; What is going on here? I run bash 4.2.24(1)-release (x86_64-pc-linux-gnu) on 3.2.0-37-generic #58-Ubuntu x86_64 GNU/Linux.
First, the error from [ is because answer is empty, so [ sees three arguments: =, y and ]. Always put double quotes around variable substitutions: if [ "$answer" = "y" ]. The reason $answer is empty is that fd 0 is busy with the file input due to the redirection <tester over the while loop. The fix is to read the file on a different file descriptor, such as fd 3: while IFS= read -r line <&3 do read -n 1 -p "Print line? [y/n] " answer if test "$answer" = "y" then echo "$line" fi done 3< tester
Nested read fails
1,461,439,018,000
I have this script: #!/usr/bin/env bash main() { while true; do read -r -ep "> " input history -s "$input" echo "$input" done } main which works well for single line strings. Now I'm looking to allow the user to enter multiline strings, e.g. something like the following: > foo \ > bar foobar how do I modify my read command to allow this functionality?
You explicitly disable the special handling of backslash with -r. If you remove -r from your read invocation, you will be able to read your input with the escaped newline: $ read input hello \ > world $ echo "$input" hello world Compare that with what happens if you use -r (which is usually what you want to do): $ read -r input hello \ $ echo "$input" hello \ Note that without -r, you will have to enter \\ to read a single backslash. Related: Understanding "IFS= read -r line"
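A minimal non-interactive check of the difference, using printf to supply the escaped newline (the value "hello world" is arbitrary):

```shell
# Without -r, the backslash-newline pair acts as a line continuation,
# so read joins the two physical lines into one value.
joined=$(printf 'hello \\\nworld\n' | { read line; printf '%s' "$line"; })
echo "$joined"
```

Adding -r to the same read would instead stop at the first newline and keep the trailing backslash literally, as shown in the answer above.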
How to read multiline input in bash
1,461,439,018,000
on a POSIX shell, no Python and no awk available (so don't bother telling me I should use a "real" programming language) I have to loop through a csv file. https://datacadamia.com/lang/bash/read My initial guess was : while IFS=";" read -r rec_name rec_version rec_license rec_origin rec_modification rec_newlicense do if [ "$name" = "$rec_name" ]; then # if [ "$version" = "$rec_version" ]; then if [ "$license" = "$rec_license" ]; then license="$rec_newlicense" fi # fi fi done < <(tail -n +2 "${output_file%%.*}.csv") But the last line wasn't "posix" compliant. So I tried : while IFS=";" read -r rec_name rec_version rec_license rec_origin rec_modification rec_newlicense do if [ "$name" = "$rec_name" ]; then # if [ "$version" = "$rec_version" ]; then if [ "$license" = "$rec_license" ]; then license="$rec_newlicense" fi # fi fi done < "${output_file%%.*}.csv" That did the trick, somehow, but the header line was processed as well. Another problem was that the fields 'rec_version', 'rec_origin' and 'rec_modification' weren't referenced. How to ignore them ? Because I also tried : while IFS=";" read -r -a rec do if [ "$name" = "${rec[0]}" ]; then # if [ "$version" = "${rec[1]}" ]; then if [ "$license" = "${rec[2]}" ]; then license="${rec[5]}" fi # fi fi done < "${output_file%%.*}.csv" But then I get : read: line 93: illegal option -a So, your take on this ? Regards.
Only <() is non-POSIX in your first try, so just use normal pipes instead: tail -n +2 "${output_file%%.*}.csv" | while IFS=";" read -r rec_name rec_version rec_license rec_origin rec_modification rec_newlicense do if [ "$name" = "$rec_name" ]; then if [ "$license" = "$rec_license" ]; then license="$rec_newlicense" fi fi done That seems a bit complex though. I can't be sure since you don't show the data you're parsing, but I suspect you can do: tail -n +2 "${output_file%%.*}.csv" | while IFS=";" read -r rec_name rec_version rec_license rec_origin rec_modification rec_newlicense do if [ "$name" = "$rec_name" ] && [ "$license" = "$rec_license" ] then license="$rec_newlicense" fi done As for ignoring unused terms, I'm afraid you can't do that for terms that are in the middle. You can easily take the first N terms and ignore the rest with: while read -r var1 var2 rest; do ... done That will read the first 2 variables and save the rest of the line as rest. Unfortunately, if you need to use the last one, you will need to capture all of them. Alternatively, remove them before passing to the shell: tail -n +2 "${output_file%%.*}.csv" | cut -d';' -f1,3,6 | while IFS=";" read -r rec_name rec_license rec_newlicense do if [ "$name" = "$rec_name" ] && [ "$license" = "$rec_license" ] then license="$rec_newlicense" fi done
posix bash, how to read a csv file and ignore some columns?
1,461,439,018,000
In this GitHub repository I have a directory named nwsm. This directory contains the file nwsm.sh that contains a master script (a script that runs other scripts). The directory also contains a few other files which contain sub-scripts that the master script executes, one at a time. In nwsm.sh I declare a few variables and these variables should eventually be used inside all aforementioned sub-scripts. Variable expansion in both nwsm.sh and the sub-scripts should take place the same way, at execution time. Note that the directory doesn't contain any other files besides nwsm.sh and its sub-script files, nor should it contain other files at any time in the future. Variable expansions inside the sub-scripts should occur with the relevant values defined in nwsm.sh, after nwsm.sh starts to run. This is the master script in nwsm.sh (first the variable declarations with the read utility, then the execution of the adjacent files): #!/bin/bash domain="$1" && test -z "$domain" && exit 2 read -sp "Please enter DB root password: " dbrootp_1 && echo read -sp "Please enter DB root password again:" dbrootp_2 && echo if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched" && exit 1 fi read -sp "Please enter DB user password: " dbuserp_1 && echo read -sp "Please enter DB user password again:" dbuserp_2 && echo if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched" && exit 1 fi "$PWD"/tests.sh "$PWD"/wp-cli.sh "$PWD"/nginx.sh "$PWD"/dbstack.sh "$PWD"/certbot.sh How can I ensure that values defined in nwsm.sh will be available to all the sub-scripts while nwsm.sh and the sub-scripts are running?
If you mean you just want to have the variables visible when the main script runs the other scripts, then you just export them: $ cat main.sh #!/bin/sh read foo export foo ./foo.sh $ cat foo.sh #!/bin/sh echo "foo is $foo" $ ./main.sh blah foo is blah $ The other scripts run as subprocesses of the main script, and exported variables get passed to them through the environment. None of this limits the variables to scripts in a particular directory, exported variables are visible to all programs started by the main script. If you want to run some program without passing the variables to them, you'll have to unexport them with export -n first. You could also unexport at the start of the other scripts, to avoid them passing the variables on. Also note that there's no need to clear the variables or unexport them at the end of the main script (or the others). The variables only exist in the memory of the running shell processes, and when the process ends, the variables disappear. (Passing variables to independent processes, on the other hand, would require saving them to a file or some such.) Of course, another way to achieve almost the same kind of modularization would be to split the program into functions, store them in separate files and source those files from the main script. That way, all the variables in the program would be visible to all functions. (Which may or may not be preferable.)
Using variable values defined in one file, in files in the same directory
1,461,439,018,000
Pressing Enter still does its delimiter job but the read command just ends quietly, abstaining from messing with the console scrolling. Basically a read -s that affects only the endline.
You could invoke zsh's line editor (which is fully configurable and generally a lot more advanced than readline (which bash can invoke with read -e)) like: var=$( saved_tty=$(stty -g) var=default-value zsh -c ' zle-line-finish() { # hook run upon leaving the line editor (zle) CURSOR=$#BUFFER # move the cursor to the end zle -R # force a redraw of the editor printf %s $BUFFER # output value on stdout kill $$ # kill ourself to prevent zle cleanup } zle -N zle-line-finish vared -p "Text before [" var' # we need to restore the tty settings by ourselves, as we prevented zsh # from doing so when killing it: stty "$saved_tty" ) printf '] Text after\n' printf 'var = "%s"\n' "$var" Upon running, that gives: Text before [value edited] Text after var = "value edited" While bash now lets you bind keys to shell code widgets, it clears the content of the current line prior to executing the widget, so you'd have to redraw the prompt and value upon your Return handler: var=$(prompt="Text before [" var=default-value bash -c ' bind -x '\''"\r":printf >&2 %s "$prompt$READLINE_LINE"; printf %s "$READLINE_LINE"; exit'\'' 2> /dev/null IFS= read -rep "$prompt" -i default-value') printf '] Text after\n' printf 'var = "%s"\n' "$var"
How do I get `read` to echo all input except for the endline at the end of the typing?
1,461,439,018,000
I'm trying to write a simple shell script for launching commands with urxvt. The idea is this (not the complete script, just the idea): PRMPT="read -r CMD" urxvt -g 55x6-20+20 -e $PRMPT CMD There are two problems with this script. The first one is that read is not fit for this kind of task, as it would ignore options of a command (if I write echo hello read would assign echo to CMD and ignore hello). The second one, which is the one that puzzles me most, is that urxvt -e exits immediately and does not wait for my input. I figure that it has to do with the fact that read is a builtin function, but for example urxvt -e echo hello works fine. Does anybody have any suggestions on how to change the script?
What is your goal? echo is an executable (/bin/echo), while read is a builtin. -e means execute an executable. If you want to use a builtin function of your shell (bash?), start a shell and have it run the builtin: urxvt -e /bin/bash -c 'read -r CMD'
Understanding read built-in
1,461,439,018,000
I'm trying to capture a key press in a shell script (e.g. using read) and not to echo it. The three methods I found were stty -echo, the -s switch and stream redirection. However, on macOS, which seems to use a FreeBSD implementation, none of these work consistently. The following script shows the issue: while true; do stty -echo read -s -n 1 CHAR &>/dev/null stty echo done When pressing the up and down arrows at the same time, sometimes the command echoes A^[[B or B^[[A. This occurs particularly often when the machine is slow (due to low battery), indicating some race condition. Am I missing something? Otherwise, how can I work around this issue?
In your loop, there is a short window of time between the "stty echo" at the end of the loop and the "stty -echo" at the next iteration. Keyboard input received during this window will be echoed, even though no read command is waiting for it. If you don't want echoes, don't call "stty echo" 😉
How to suppress echo of buggy read function
1,461,439,018,000
I have this working: % cat read.sh #!/bin/sh file=list.txt while read line do echo "$line" cut -d' ' -f27 | sed -n '$p' > file2 done < "$file" % cat list.txt sftp> #!/bin/sh sftp> sftp> cd u/aaa sftp> ls -lrt x_*.csv -rwxr-xr-x 0 1001 1001 12274972 May 13 21:07 x_20150501.csv -rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150601.csv -rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150701.csv -rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150801.csv -rw-r--r-- 0 1001 1001 0 May 13 21:44 x_20150901.csv -rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151001.csv -rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151101.csv -rw-r--r-- 0 1001 1001 0 May 13 21:45 x_20151201.csv % cat file2 x_20151201.csv First question: Is there something more glamorous to read just the very last item on the very last line? Would you use cut and sed? This is a redirect of an sftp directory listing. Second question: Whatever is in file2, I want to have it read from an sftp batch file to get that exact file. % cat fetch.sh #!/bin/sh cd u/aaa !sh read.sh !< file2 get bye As you can imagine, sftp doesn't like get provided without any file, so how can I read in file2 to get that file from the sftp server? % sftp -b fetch.sh user@pulse sftp> #!/bin/sh sftp> sftp> cd u/aaa sftp> !sh read.sh sftp> #!/bin/sh sftp> !< file2 x_20151201.csv sftp> get You must specify at least one path after a get command.
You can combine all the actions in one command: sftp user@host:/path/to/file/$(tail -1 file1.txt |tr -s ' ' |cut -d ' ' -f 9) This will fetch the file into the current working directory. If you need to fetch the file into another directory specify the destination directory as a next argument to the sftp command.
Shell read script for sftp
1,461,439,018,000
I am trying to make a clip out of a video file by playing it only for a certain interval. make_mclip.sh #!/bin/bash mediafile=$@ mediafile_fullpath=$PWD/./$mediafile tmpedlfile=$(mktemp) mplayer -edlout $tmpedlfile "$mediafile" &> /dev/null cat $tmpedlfile | while read f do startpos=$(echo $f | awk '{print $1}') endpos=$(echo $f | awk '{print $2}') length=$(echo "$endpos-$startpos" | bc) tmpclip=$(mktemp --suffix='.mclip' --tmpdir=$PWD) echo -e "$mediafile_fullpath\t$startpos\t$length" > $tmpclip mplayer_mclip.sh "$tmpclip" &>/dev/null echo -n "clip name : " read clipname < /dev/tty mv -nv "$tmpclip" "$clipname.mclip" done echo doing rm "$tmpedlfile" mplayer_mclip.sh #!/bin/bash mediafile=$(cat "$@" | awk -F'\t' '{print $1}') startpos=$(cat "$@" | awk -F'\t' '{print $2}') length=$(cat "$@" | awk -F'\t' '{print $3}') mplayer -ss $startpos -endpos $length "$mediafile" &> /dev/null But for some reason the while loop in make_mclip.sh is only run once even if $tmpedlfile contains more than one line; the only exception is if the line mplayer_mclip.sh "$tmpclip" &>/dev/null is removed. What's wrong? P.S. I would also like to know if there is already a program for this.
mplayer is "consuming" tmpedlfile remaining content. You need to add an option for it not to ignore its stdin: mplayer -noconsolecontrols -ss $startpos -endpos $length "$mediafile" &> /dev/null
while loop is running only once?
1,461,439,018,000
Before continuing, please bear in mind that I am aware that I could configure keyboard shortcuts through the settings menu, but that would not be of use for my end goal. I'm trying to create a simple script to take a single keypress as the input, then perform an action. Ideally, I would like to execute a command when I press the up key. While I could integrate, for example, the x key as follows: echo "press key" read -sn 1 key if [ "$key" == "x" ]; then echo "x"; fi I'm struggling to find out how to use the up key where the x is in this example. Is it possible to do so?
Try this: #! /usr/bin/env bash read -p "press key " -sn 1 key #you can use -p "Message" instead of using echo "Message" before. if [ "$key" == $'\x1b' ]; then read -sn2 chars if [[ $chars = '[A' ]]; then echo You pressed 'Up' key fi else echo Pressed another key fi When you press keys like the Up, Right, Left and Down keys, or Fn+N, the terminal sends a sequence starting with the escape character \x1b. So you have to check whether the variable contains that character; if so, you should read the following two characters: [A is for Up key [C is for Right key [B is for Down key [D is for Left key
Specify a Keypress as a variable for "if" command
1,590,801,567,000
How do I parse the output of the dmesg command line by line? I tried using a while loop with read: while read -r L; do echo "line: ${L}"; done < <(dmesg -c --level=err) But it does not echo the lines. I also tried: LINES=$(dmesg -c --level=err); while read -r L; do echo "line: ${L}"; done <<< "$LINES" But that echoes only one line, without content. When I call dmesg -c --level=err directly, it shows 5 lines with content. How can I parse this?
I guess you forgot that the -c switch deletes the dmesg buffer content after the first invocation. That is the simple reason why no lines are echoed on subsequent runs. The first snippet is valid bash code. Ensure your default shell is bash! [[ $SHELL == *bash ]] && echo 'bash is the default shell' || echo >&2 "WTF"
How to parse line by line from dmesg in bash?
1,590,801,567,000
I have a text.txt file like this line1 line2 line3 I want to write a script that loops over each line and echoes out modified line1 modified line2 modified line3 This is the script, which uses a very common solution: while IFS= read -r line; do echo modified $line done <<< $(cat ~/text.txt) But the output I got was: modified line1 line2 line3 What went wrong?
The issue is in the last line, you don't need the variable (command substitution) or cat, since read already can read the file. If you instead do this: while IFS= read -r line; do echo modified $line done < ~/text.txt It works. Additionally, your command would work if you quoted the variable like: "$(cat ~/text.txt)" since bash disregards newlines in variables unless you quote them. But doing it this way is overcomplicating it.
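To see the fixed loop end-to-end, here is a self-contained sketch that first writes a three-line temporary file (the mktemp path is arbitrary, standing in for ~/text.txt):

```shell
# Build a sample file, then read it line by line via redirection.
tmp=$(mktemp)
printf 'line1\nline2\nline3\n' > "$tmp"
count=0
while IFS= read -r line; do
  echo "modified $line"
  count=$((count + 1))
done < "$tmp"
rm -f "$tmp"
```

Because the loop reads from a redirection rather than a pipe, variables set inside it (here count) survive the loop in the current shell.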
Using read command to read file by line in Bash doesn't work
1,590,801,567,000
The problem is to read variables dynamically with the read command in bash, without knowing in advance how many there are, and store them into an array. I tested with: read -p "array : " array[{0..#}] since read -p "array : " array[{0..3}] works, but with no success.
As the read usage output shows, you can use the -a flag: read -p "array: " -a array
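A non-interactive sketch of the -a flag, with a here-string supplying the words instead of the prompt:

```shell
# read -a fills successive array elements; the word count need not be
# known in advance (bash).
read -r -a array <<< "one two three four"
echo "${#array[@]}"
echo "${array[0]}"
```

Each whitespace-separated word becomes one element, so ${#array[@]} tells you after the fact how many values the user entered.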
How to read variables from stdin dynamically and store them into an array
1,590,801,567,000
Using read and typing word followed by the left arrow ←, one gets $ read word^[[D The same goes for the Home and End keys, which lead to ^[[H and ^[[F respectively. How can I handle those characters, so that the left arrow ← moves backward, and Home and End move to the beginning and end of what has been written, respectively?
The readline library usually handles this, and inputrc tells you which codes are emitted. Forcing the shell into interactive mode should enable these features. curses is a library that provides full support for moving the cursor around (if you want a text editor or something). But ultimately, you have to remember that the terminal is the sender and receiver of input/output. So... printing a control sequence that moves the cursor should move the cursor, so you can always write anything to any position on the screen. Terminals differ somewhat in the control sequences they respond to, but check out this for the reference.
How to handle back arrows, End and Home keys in a read prompt
1,590,801,567,000
Given such situation: echo "Please enter your name" read name # user enters: john smith echo $name # prints: john What could cause read to read only the first word of the input? Is there a shell variable that controls this? I came across this in a question on Ask Ubuntu and I'm wondering how to reproduce this behavior.
To accept whatever the user enters, use this form: IFS= read -r name That will accept leading/trailing/inner spaces as well as literal backslashes.
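The effect is easy to reproduce non-interactively; with IFS cleared, the entire line — inner and surrounding spaces included — lands in one variable:

```shell
# IFS= read -r keeps the full line, spaces and backslashes intact.
IFS= read -r name <<< "  john smith  "
printf '[%s]\n' "$name"
```

Without the IFS= prefix, the leading and trailing spaces would be trimmed; with more than one variable name, the line would additionally be split into words.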
Input separator of the `read` builtin in Bash
1,590,801,567,000
FYI I am running busybox. I am able to send data to my ttyS1 using the following command: stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo echo -en 'data here' > /dev/ttyS1 But when I try to read, I do this: stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo cat /dev/ttyS1 But the program ends without any messages. I also tried cat < /dev/ttyS1, which doesn't work either. I am positive that data is being sent to this port since I have an LED indicator showing data is coming in. And the connection settings are set to be the same: 115200 baud, 8 bit, even parity, 1 stop bit.
So I found the answer in another forum. I will put it here: basically, just add the read-timeout settings (time and min) to stty and a while loop to constantly read the port. stty -F /dev/ttyS1 speed 115200 cs8 -cstopb -parenb -echo time 3 min 0 while true; do cat /dev/ttyS1 done That's all.
Reading data from serial port
1,590,801,567,000
I have tried to hook system calls using a Linux kernel module. However, when I open a pdf file using Evince, I find no open, read or write is used on this specific file; only lstat is used. Here is the strace log of strace evince folder1/test.pdf So I wonder: what system call does Evince use to open and read from the file?
As pointed out by @ThomasDickey, you need to pass strace the -f option, in order to include the trace of all the threads. (The clone() syscalls are creating new threads, not processes, but you still need -f to have strace follow threads.) Once you're following all threads, the open becomes apparent, opening the file in read-only mode, on file descriptor 19: open("/home/xytao/folder1/test.pdf", O_RDONLY) = 19 The contents of the file are read using a succession of pread calls into a small 256 character buffer, the first of which gets the %PDF-1.3 header: pread(19, "%PDF-1.3\r\n%\241\263\305\327\r\n3 0 obj\r\n<</Fil"..., 256, 0) = 256 If you go through the trace, you'll see it will then lseek to the end of the file (to determine file size), then read a couple of blocks of data from there. I assume that's information about the pages in the document. The following preads happen at offsets in the middle of the file, but all of them seem to start with some information about "page", so I assume that's evince figuring out about the pages in the PDF document, probably using offsets it got from the end of the file.
What system call does Evince use to open pdf?
1,590,801,567,000
I have a text file which is usually filled with multiple lines which I want to "print" with a while loop. The text inside this file contains variables - my problem is that these variables are not interpreted, unlike a similar test string containing variables stored inside the script. Is it possible to also interpret those variables from my external file, or do I have to parse them beforehand? What is the difference between $LINE_INSIDE and $LINE_OUTSIDE? I tried some suggestions from other questions like ${!variable_name} and different constructs with quote signs but with no luck so far. #!/bin/bash # color.sh BLUE='\033[1;34m' NC='\033[0m' # No Color LINE_INSIDE="${BLUE}Blue Text${NC}" echo -e ${LINE_INSIDE} while read LINE_OUTSIDE; do echo -e ${LINE_OUTSIDE} done < text_file Output: Additional Information: I (indeed) also have shell commands in my input text file which should not be executed. Only the variables should be expanded.
It would probably make more sense to write it as: BLUE=$'\033[1;34m' NC=$'\033[0m' # No Color eval "cat << EOF $(<text_file) EOF " than using a while read loop (that's not the right syntax for reading lines btw). Of course that means that code in there would be interpreted. A $(reboot) in there for instance would cause a reboot, but that's more or less what you're asking for. That also assumes the text_file doesn't contain an EOF line. Another approach that would only do variable (environment variable) substitution (and not command substitution for instance) would be to use GNU gettext's envsubst: BLUE=$'\033[1;34m' NC=$'\033[0m' # No Color export BLUE NC envsubst < text_file Or so that only those two variables are expanded: BLUE=$'\033[1;34m' NC=$'\033[0m' # No Color export BLUE NC envsubst '$BLUE$NC' < text_file
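A reduced sketch of the eval/here-document route from the answer above, with placeholder values instead of the real escape codes and with template standing in for the contents of text_file:

```shell
# Expand variables in a string as if it were a here-document body.
BLUE='<blue>'
NC='<nc>'
template='${BLUE}Blue Text${NC}'
expanded=$(eval "cat << EOF
$template
EOF
")
echo "$expanded"
```

The unquoted EOF delimiter is what makes the shell perform expansion on the here-document body; as the answer warns, command substitutions in the text would be executed too, which is why envsubst is the safer variant when only variable expansion is wanted.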
Interpret variables from read in string with shell script [duplicate]
1,590,801,567,000
Is there a way to validate or confirm that the user wrote what it meant to write in read? For example, the user meant to write "Hello world!" but mistakenly wrote "Hello world@". This is very similar to contact-form validation of an email / phone field. Is there a way to prompt the user with something like "Please retype the input", in read? I found no such option in man read. Note: The input is a password so I don't want to print or compare it with an already existing string.
With the bash shell, you can always do FOO=a BAR=b prompt="Please enter value twice for validation" while [[ "$FOO" != "$BAR" ]]; do echo -e $prompt read -s -p "Enter value: " FOO read -s -p "Retype to validate: " BAR prompt="\nUups, please try again" done unset -v BAR # do whatever you need to do with FOO unset -v FOO read options used: -s Silent mode. If input is coming from a terminal, characters are not echoed. -p prompt Display prompt on standard error, without a trailing newline, before attempting to read any input.
read value validation
1,590,801,567,000
I am trying to read in a file using read in bash 3.2, but read seems to be converting any instance of multiple whitespace into a single space. For example, the code below has two tabs between "hello" and "there", and three spaces between "today" and "world": while read -r LINE; do echo $LINE done <<< "hello there today world" However, when run, it outputs with only a space in between each set of words: hello there today world Instead, I'd like it to output the lines with whitespace preserved, e.g.: hello there today world Is there any way to do this? If not with read, then with something else?
Put quotes around your variable when you print it. It's being expanded then word split so echo is getting hello and there as separate arguments. echo "$LINE" or better printf '%s\n' "$LINE" will keep your whitespace so it's not the read that's changing your whitespace, it's your not quoting the variable later
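A small demonstration of the difference, with a value chosen to contain a run of spaces:

```shell
# Unquoted expansion word-splits the value, collapsing whitespace runs;
# quoting preserves the value exactly.
LINE='hello   world'
unquoted=$(echo $LINE)
quoted=$(printf '%s' "$LINE")
echo "$unquoted"
echo "$quoted"
```

The unquoted form hands echo two separate arguments, which it rejoins with single spaces; the quoted form passes the value through untouched.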
"while read -r LINE; do" is replacing multiple spaces with a single space [duplicate]
1,590,801,567,000
mapfile -t -u 7 arr < textfile gives me bash: mapfile: 7: invalid file descriptor: Bad file descriptor Other, more verbose methods of reading files line by line do allow for such a descriptor, e.g. read_w_while() { while IFS="" read -u 7 -r l || [[ -n "${l}" ]]; do echo "${l}" done 7< textfile } The standard descriptor, 0, is used quite a lot. Does using such a descriptor make scripting more secure from interference? My experience so far (I am an Ubuntu Desktop user) is that I only witnessed such interference when using while IFS="" read -u 7 ... with descriptor 7. What might be the reasons for such interference?
mapfile -t -u 7 arr < textfile gives me. bash: mapfile: 7: invalid file descriptor: Bad file descriptor Yes, it would, if fd 7 isn't open. -u 7 only tells it to read from fd 7, it says nothing about how or where that fd should come to be. In your second snippet, there's an input redirection 7< textfile that opens a file on fd 7, so it should be quite natural that there, reading from fd 7 works. You'd usually use another file descriptor for read if you have a need to use the original stdin at the same time, e.g. if you read from a file with a while read ... loop but also run something that reads from the terminal (stdin) within the loop. Now, strictly speaking, the -u fd option is unnecessary, since the equivalent could be achieved with e.g. read ... 0<&7, but that has the downside that the shell needs to do some juggling to arrange the file descriptors in the correct places: first, duplicate fd 0 to some other number for safekeeping, say fd 9 duplicate fd 7 to fd 0 (closing the old fd 0) run read, which reads from fd 0 duplicate the original fd 0 from fd 9 (closing the current fd 0) close the now unnecessary copy in fd 9 All that, instead of just run read -u 7, which reads from fd 7 (I originally wrote "close fd 0" as an explicit step, but dup2() does close the destination fd automatically)
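Putting the two pieces together for mapfile: open the fd first, then point mapfile at it (bash 4+; the temporary file is just for illustration, standing in for textfile):

```shell
# Open a file on fd 7, let mapfile read from that fd, then close it.
tmp=$(mktemp)
printf 'a\nb\nc\n' > "$tmp"
exec 7< "$tmp"
mapfile -t -u 7 arr
exec 7<&-
rm -f "$tmp"
echo "${#arr[@]}"
```

The exec 7< redirection is the step the failing command was missing: -u 7 only selects the fd, something still has to open it.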
How do I use a nonstandard file descriptor for reading a file into an array with mapfile?
1,590,801,567,000
When I give this command: $echo -n "Hello" Hello$ I get the above output. This means echo -n prints the string without the terminating newline. Now, I pipe the output to read, where the read command is supposed to keep reading until it encounters a newline. $echo -n "Hello" | read $ At the above command, the command prompt is returned. However, I was expecting the above command to hang as the read continuously waits for input because it didn't encounter a newline. Why doesn't this happen?
If you investigate the exit-status of that pipeline, you will notice that read returns 1: $ echo -n "Hello" | read $ echo $? 1 It returns 1 because it encountered an end-of-file condition and therefore failed to read more data. The input stream from echo was closed because echo had finished its task and terminated, closing that side of the pipe. The data read before the input stream from echo was closed would still be available in the REPLY variable: $ echo -n "Hello" | { read; echo "$REPLY"; } Hello In short, read does not wait for further input because it noticed that the input stream was closed. Also (tangentially) related: What is meant by "keeping the pipe open"? You may possibly come across loops like these: while read variable || [ -n "$variable" ]; do # something with "$variable" done This allows for reading input that may not be properly terminated by a final newline character. Without the -n test, and without a final newline character in the data, the last (non-terminated) line would otherwise be skipped. The read utility used in this loop acts like a test. If the input stream that the loop is connected to closes (the read reads past the end of the file), read would return a non-zero exit-status, and the loop would terminate. With the -n test, it does one extra iteration, but that extra iteration will just confirm that, yes, there is no more data to read and now the value $variable is also empty.
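The two observations from the answer combine into one pipeline — the exit status is 1, yet REPLY still holds the partial data (bash, since read with no variable name stores into REPLY):

```shell
# read hits end-of-file (status 1) but keeps the data read so far
# in REPLY.
out=$(echo -n "Hello" | { read; printf '%s:%s' "$?" "$REPLY"; })
echo "$out"
```

This is exactly the case the `while read var || [ -n "$var" ]` idiom guards against: a final line with no terminating newline still delivers data even though read reports failure.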
How is input terminated during piping though newline is not used?