| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,590,801,567,000 |
So I am often guilty of running cat on an executable binary file, and my terminal usually makes some weird noises and isn't happy. Is there some accepted naming convention for giving an extension to a binary/executable file?
I have an executable file (the output of go build -o /tmp/api.exe .) and, as I just mentioned, I named it .exe, but I am wondering if there is a way to check a file before I cat it to see if it's UTF-8 or whatever.
|
The standard naming practice for executables is to give them the name of the command they’re supposed to implement: ls, cat... There is no provision for extensions; they would just end up being typed as part of the command name.
To check what a file contains before feeding it to cat, run file on it:
$ file /bin/ls
/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=b6b1291d0cead046ed0fa5734037fa87a579adee, for GNU/Linux 3.2.0, stripped, too many notes (256)
$ file /bin/zgrep
/bin/zgrep: a /usr/bin/sh script, ASCII text executable
This tells me that cat /bin/zgrep won’t do anything strange to my terminal (it doesn’t even contain escape sequences, which are identified separately by file).
I much prefer using less in general: it will warn about binary files before showing them, and won’t mess up the terminal in any case. It can also be configured to behave like cat for short files (see the -F option).
As mosvy points out, you can make cat safe to use on binaries by adding the -v option, which replaces non-printable characters with visible representations (^ and M- prefixes). (Rob Pike famously considered that this option is harmful — not because of its effects on the terminal, but because of its effect on command style.)
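Building on file, here is a sketch of a hypothetical "safe cat" wrapper (the name scat and the MIME-type check are my own illustration, not an established tool); it only cats files whose MIME type file reports as text, and otherwise prints the file's description instead:

```shell
# scat: only cat files that file(1) classifies as text; for anything
# else, print the file's description instead of dumping it to the terminal.
scat() {
    for f in "$@"; do
        case $(file --brief --mime-type -- "$f") in
            text/*) cat -- "$f" ;;
            *)      printf '%s: skipped (%s)\n' "$f" "$(file --brief -- "$f")" ;;
        esac
    done
}
```

This relies on file supporting --brief and --mime-type, which both the GNU and BSD implementations do.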
| Standard naming practice for executables (binary file) and how to tell whether a file has non-printable characters? |
1,590,801,567,000 |
I use printf "input: "; read -e. I type something, then I press Backspace. When reaching the last character, this deletes the input: part together with it. Backspace doesn't misbehave if I haven't typed anything before, or if I use plain read (no Readline).
|
read -e calls the Readline library, which gives access to several editing tools that a plain read does not have. However, it assumes it starts on an empty line.
A workaround for this problem is to give something (like a space) as the prompt, to avoid the "empty line" assumption:
printf 'input:'; read -e -p ' '
But since that is using the -p option already, it might be simpler to write:
read -e -p 'input: '
| On backspace, `bash read -e` also deletes same-line printf (preexistent) text |
1,590,801,567,000 |
The man page for grep describes the -d ACTION option as follows:
If an input file is a directory, use ACTION to process it. By default, ACTION is read, i.e., read directories just as if they were ordinary files. [...]
Intuitively, I would expect this to mean that a directory bar is treated (for grepping purposes) as the equivalent of a text file containing something more or less along the lines of what vim displays if I type vim bar, i.e., something roughly (up to variation in what sort of explanatory information and/or metadata is at the top and bottom) like:
"============================================================================
" Netrw Directory Listing (netrw v156)
" /home/chris-henry/bar
" Sorted by name
" Sort sequence: [\/]$,\<core\%(\.\d\+\)\=\>,\.h$,\.c$,\.cpp$,\~\=\*$,*,\.o$,\.obj$,\.info$,\.swp$,\.bak$,\~$
" Quick Help: <F1>:help -:go up dir D:delete R:rename s:sort-by x:special
" ==============================================================================
../
./
foobar/
baz/
qux
If this were the case, then grep -H foo bar would produce the output
bar: foobar/
Instead, it gives the message grep: bar: Is a directory. Why is this? And is there any (reasonably straightforward) way to get the intuitive result (not just on this simple search, but also for searches like grep foo * where * may match any or all of text files, binary files, and directories)?
ETA (2021-07-22): As suggested by the accepted answer and confirmed in the comments, grep foo bar itself actually does exactly what I'd expect it to do: It invokes the system call read (ssize_t read(int fd, void *buf, size_t count)) with the file descriptor for bar, just as it would if bar were an ordinary file. And when read, instead of filling *buf with the contents of bar, returns the error code EISDIR, grep prints an appropriate diagnostic message, then continues on to the next file - just as it would if read returned an error code (other than EINTR or, sometimes, EINVAL) and bar were an ordinary file.
The difference between my expectation and reality comes from the behavior of the Linux version (and, judging by comments, most other modern versions) of read, namely that when fd refers to a directory, it automatically returns EISDIR.
ETA2 (2021-07-23): The primary motivation for this question was not a pressing need to get the intuitive behavior described (although I was interested in that as a potential secondary benefit). The motivation was to understand why (GNU) grep seemed, based on its output, to be behaving in a manner that contradicted a statement in its man page.
The answer turned out to be that grep was actually doing just what its man page said it would, but that changes to the (typical) behavior of the system call read make the result, on most modern systems, substantially different from what one would infer based solely on a reading of the grep man page (without being familiar with the behavior of modern read implementations).
While it's true that I would rather, on the whole, that read didn't behave like that, I rather doubt that that behavior contradicts its man page. Given the current situation, I would like to see a line or two added to the grep man page, but it's not wrong as it is, just misleading.
|
Directories do not have an intrinsic representation as text. Many Unix variants allow programs to read from a directory as if it were a regular file, but this is mostly useless since the format of the content depends on the filesystem. Some modern Unix variants, including Linux, outright block programs from reading a directory as if it were a regular file.
For example, here's what happens on FreeBSD (an older version that still allows it — since FreeBSD 13, this is disabled by default) with a directory like your bar:
$ grep -H foo bar
Binary file bar matches
$ grep -H --text foo bar
bar:�"!
.�
..�"!foobar�"!
baz�"!qux
Yes, you can determine that foo is present in the directory representation, but you can't be sure that it's part of a file name. For example (still on that FreeBSD machine):
$ rmdir bar/foobar
$ grep -H --text foo bar
bar:�"!
..�"!foobar�"!
baz�"!foo
Deleting the directory removed it from the filesystem, but it didn't wipe the name of the deleted entry from the on-disk structure that encodes the directory.
When you ask Vim to open a directory, Vim traverses the directory (using dedicated system functions like readdir, not using the generic read function) and displays the results in a nice way.
Grep could implement something like that, but that would be a lot of work relative to the size of grep, it would deviate from grep's core purpose which is to search for the content of files, and the implementation would have to be a compromise that doesn't satisfy many people. Would the directory's representation as text only include file names or also some metadata (why doesn't grep "Jul 20" bar find files modified on July 20)? How would entries be separated (if they're separated by newlines, the representation is ambiguous since file names can contain newlines; if they're separated by null bytes, the output would only be useful for grep --null-data)?
To search in file names, there are already better tools such as shell wildcards, find and locate.
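For illustration, here is roughly what those better tools look like for this example (the directory name bar and pattern foo are from the question; -maxdepth is a GNU/BSD find extension):

```shell
# Search for "foo" among the entry names of bar/ instead of grepping bar itself:
find bar -maxdepth 1 -name '*foo*'   # matches entries directly under bar/
printf '%s\n' bar/*foo*              # shell-glob alternative
```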
| Why does "grep foo bar" print "grep: bar: Is a directory" instead of printing any filenames in bar/ that match the pattern "foo"? |
1,590,801,567,000 |
I have a here document inside a bash script. I want to read a value from it like that :
su myUser<<SESSION
set -x
echo -n "Enter your name and press [ENTER]: "
read name
echo "the name is $name"
SESSION
But when I launch this script as another user, bash does not stop to wait for input and ignores the read command.
Any ideas?
|
As L. Scott Johnson correctly found, the read reads from standard input. The standard input in the shell that su runs is connected to the here-document, so the read reads the literal string echo "the name is " (note that since the here-document is unquoted, the $name has already been expanded to an empty string, or to whatever value it had in the calling shell).
Here is the same thing, but with a quoted here-document, and an extra line outputting $name again:
su username <<'SESSION'
set -x
echo -n "Enter your name and press [ENTER]: "
read name
echo "the name is $name"
echo "What I read was $name"
SESSION
The output would be
Password:
+ echo -n Enter your name and press [ENTER]:
Enter your name and press [ENTER]: + read name
+ echo What I read was echo "the name is $name"
What I read was echo "the name is $name"
To correctly do this, you can't have read reading from standard input. Instead, open a new file descriptor as a copy of standard input, and get read to read from that:
su username 3<&0 <<'SESSION'
set -x
echo -n "Enter your name and press [ENTER]: "
read name <&3
echo "the name is $name"
SESSION
If the shell of the other user is bash or ksh, then read name <&3 may be replaced by read -u3 name.
Note however that you can't expect the name variable to be set in the shell calling su as a child shell can't modify the environment (variables etc.) of a parent shell.
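The 3<&0 trick can be demonstrated without su, which may make it clearer: the here-document takes over fd 0 of the child shell, while fd 3 still refers to the original standard input (here, a pipe carrying a name):

```shell
# fd 3 is duplicated from the original stdin (the pipe) before the
# here-document replaces fd 0, so "read <&3" still sees the piped data.
printf 'Alice\n' | sh 3<&0 <<'SESSION'
read name <&3
echo "the name is $name"
SESSION
```

This prints the name is Alice.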
| Bash - here doc read input value from another user |
1,590,801,567,000 |
I'm writing a bash script where I'm reading these two strings into a variable via read:
log.*.console.log log.*.log
They are separated by space.
How can I rewrite the strings so that the variable's output for the next program called in the script has them in this form: 'log.*.console.log' 'log.*.log'?
I was trying sed and awk but somehow without success.
The whole script:
#!/bin/bash
if [ -z "$*" ] ; then
echo "USAGE: ./some text"
exit 1
else
echo "some text"
read varlogname
i=1
for file in $@
do
echo "Doing" $file
GTAR=$(gtar -xvf $file --wildcards --ignore-case --no-anchored "$varlogname")
for file in ${GTAR[*]}
do
mv $file $file"."${i}
i=$((i+1))
done
done
echo "Files extracted."
fi
exit 0
|
I don't think you want to give the single quotes to gtar. In a command such as somecmd 'foo bar' 'files*.log', the shell handles the quotes: they tell it not to treat special characters specially, and it passes somecmd the arguments foo bar and files*.log. Unix programs do not get the command line as one string, but as a number of argument strings; it is the shell that splits the command line into those strings.
If you want to read multiple values in Bash, you could use read -a array, and then hand the array to the command.
-a array assign the words read to sequential indices of the array
variable ARRAY, starting at zero
i.e.
read -a filenames
gtar "${filenames[@]}"
Indexing the array with [@] (in quotes) expands to the array members as separate words, which is what you want.
Also, you have for file in ${GTAR[*]}. That looks like accessing GTAR as an array, but it isn't one, you've just assigned the output of the gtar command to it, as a string. In this case ${GTAR[*]} is the same as $GTAR. Since the expansion isn't quoted the string undergoes word-splitting at this point, which by default splits on whitespaces. But, after that, the words are also taken as filename globs and expanded to matching filenames.
As long as your filenames don't contain whitespace or glob characters (*?[]), this is fine. But in general, it's not something you want to do.
For proper arrays, you probably always want to use "${array[@]}" instead of [*].
See: Word Splitting, Why does my shell script choke on whitespace or other special characters? and Arrays
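As a self-contained sketch of the read -a approach (a here-string stands in for interactive input; read -a and <<< are bash features, so this is wrapped in bash -c to be runnable from any shell):

```shell
bash -c '
  # read the two patterns into an array, then print each element as-is;
  # quoted ${patterns[@]} prevents both word splitting and globbing.
  read -r -a patterns <<< "log.*.console.log log.*.log"
  printf "<%s>\n" "${patterns[@]}"
'
```

which prints <log.*.console.log> and <log.*.log> on separate lines, showing that each pattern stays one unexpanded word.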
| Replace single quotes for strings divided by blank space from variable |
1,590,801,567,000 |
I am trying to automate software update with bash script. When I am passing version number e.g 7.16.3 I get following error: ") Syntax error Invalid arithmetic operator (error token is "
I could not find any answer relating to passing a value from the read command. My code looks like this:
DATE=`date +'%Y%m%d'`
BSI_SETUP=/opt/bsi/source/setup/elk_${DATE}
OLD_VERSION_FILEBEAT=`/usr/share/filebeat/bin/filebeat version| awk '{print $3 }' 2>/dev/null`
OLD_VERSION_METRICBEAT=`/usr/share/metricbeat/bin/metricbeat version| awk '{print $3 }' 2>/dev/null`
MY_HOME=~
read -p 'Enter filebeat & metricbeat version: ' NEW_VERSION_BEATS
read -p 'Enter CSC environmet: ' CSC_ENV
if [[ ${NEW_VERSION_BEATS} -ne ${OLD_VERSION_FILEBEAT} ]]; then # I get error here
sudo yum install -y $BSI_SETUP/filebeat-*.rpm 2>/dev/null
else
echo "Filebeat is up-to-date"
fi
if [[ ${NEW_VERSION_BEATS} -ne ${OLD_VERSION_METRICBEAT} ]]; then # and here
sudo yum install -y $BSI_SETUP/metricbeat-*.rpm 2>/dev/null
else
echo "Metricbeat is up-to-date"
fi
|
-ne does a numeric comparison, while 7.16.3 is not a number. (Even if it's called a "version number".) As it happens, as far as Bash is concerned, neither would 7.16 be, since Bash only deals with integers. The error is actually clearer within Bash's [ .. ]:
$ [ 7.16.3 -ne 7.16.3 ]
bash: [: 7.16.3: integer expression expected
Use [ "$a" = "$b" ], or [[ $a == "$b" ]] for string equality comparison, != for inequality. ([ "$a" == "$b" ] works in Bash, but isn't standard.)
OTOH, if your error really looks like that, with the ") in front, it's due to a carriage return in the value.
$ var=$'7.16\r'
$ [[ $var -ne 7.16 ]]
")syntax error: invalid arithmetic operator (error token is ".16
In that case, check that your input doesn't come from a Windows text file, or use something like var=${var%$'\r'} to remove the CR.
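If you ever need an ordering comparison rather than plain (in)equality, one hedged approach is sort -V (version sort, available in GNU coreutils and modern BSDs); the function name version_lt is my own illustration:

```shell
# version_lt A B: succeeds if version A sorts strictly before version B.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}
```

With this, version_lt 7.16.3 7.17.0 succeeds, while version_lt 7.17.0 7.16.3 fails.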
| Invalid arithmetic operator (error token is " 7.16.3 when passing float from read command |
1,590,801,567,000 |
While reading line by line, and IFS set to null, we can write:
while IFS= read -r line
do
echo "$line"
done < <(find . -name "*.txt")
Is this not the same as:
while read -r
do
echo "$REPLY"
done < <(find . -name "*.txt")
Why or when is one preferred over the other?
|
From man bash: "If no names are supplied, the line read is assigned to the variable REPLY." In your second attempt there is no name, so the input is stored in the REPLY variable by default.
example:
$ cat infile
1
2
3
$ while read ; do echo $REPLY; done <infile
1
2
3
But the REPLY variable isn't set when you specify a name; in that case the current line is read into the specified name instead.
$ while read tmp; do echo $REPLY; done <infile
$
Why or when is one preferred over the other?
It's up to you: when you want to use the default REPLY variable for storing the lines read, drop the name argument; to store into a different variable name, specify it explicitly. That's all.
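There is one practical difference worth noting in bash, though: with no name, read assigns the line to REPLY without the IFS-driven trimming of leading and trailing whitespace that happens when a name is supplied, so the IFS= prefix becomes unnecessary. A small sketch (wrapped in bash -c since REPLY is a bash/ksh feature):

```shell
bash -c '
  # with a name: leading/trailing IFS whitespace is trimmed
  printf "  padded  \n" | { read -r line; printf "<%s>\n" "$line"; }    # <padded>
  # with no name: REPLY keeps the line verbatim
  printf "  padded  \n" | { read -r;      printf "<%s>\n" "$REPLY"; }   # <  padded  >
'
```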
| Since read has a built in variable $REPLY, why do we need to explicitly state $line or other variable |
1,590,801,567,000 |
I am trying to create a small script for creating simple, all-default Apache virtual host files (it should be used any time I establish a new web application).
This script prompts me for the domain.tld of the web application and also for its database credentials, in verified read operations:
read -p "Have you created db credentials already?" yn
case $yn in
[Yy]* ) break;;
[Nn]* ) exit;;
* ) echo "Please create db credentials and then comeback;";;
esac
read -p "Please enter the domain of your web application:" domain_1 && echo
read -p "Please enter the domain of your web application again:" domain_2 && echo
if [ "$domain_1" != "$domain_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
read -sp "Please enter the app DB root password:" dbrootp_1 && echo
read -sp "Please enter the app DB root password again:" dbrootp_2 && echo
if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
read -sp "Please enter the app DB user password:" dbuserp_1 && echo
read -sp "Please enter the app DB user password again:" dbuserp_2 && echo
if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
Why I do it with Bash
For now I prefer Bash automation over Ansible automation, because Ansible has a steep learning curve and its docs (as well as a printed book I bought about it) were not clear or useful to me in learning how to use it. I also prefer not to use Docker images and then change them after build.
My problem
The entire Bash script (which I haven't brought here in its fullness) is a bit longer, and the above "heavy" chunk of text makes it significantly longer - yet it is mostly a cosmetic issue.
My question
Is there an alternative for the verified read operations? A utility that both prompts twice and compares in one go?
Related: The need for $1 and $2 for comparison with a here-string
|
How about a shell function? Like
function read_n_verify {
read -p "$2: " TMP1
read -p "$2 again: " TMP2
[ "$TMP1" != "$TMP2" ] &&
{ echo "Values unmatched. Please try again."; return 2; }
read "$1" <<< "$TMP1"
}
read_n_verify domain "Please enter the domain of your web application"
read_n_verify dbrootp "Please enter the app DB root password"
read_n_verify dbuserp "Please enter the app DB user password"
Then do your desired action/s with $domain, $dbrootp, $dbuserp.
$1 carries the variable name for the later read from the here-string, which is used because it's easier here than a here-document (which could be used as well).
$2 contains the free-form prompt text; it is passed last to allow for (sort of) "unlimited" text length.
Upper-case TMP and the [ ... ] && syntax are used out of personal preference.
if - then - fi could be used as well, and would eliminate the need for the braces that collect several commands into one single command to be executed as the && branch.
| read-verification alternative (two prompts and if-then comparison alternative) |
1,590,801,567,000 |
NOTE: st is the actual name of the terminal emulator in my question - https://st.suckless.org/.
I want to create a shortcut that if pressed, pops up st and displays the translation of the word in the clipboard.
I tried using this, but is exits immediately and gives an error:
$ st -e "trans $(xclip -o) -t en; read"
child finished with error '256'
But the same works with xterm just as expected:
$ xterm -e "trans $(xclip -o) -t en; read"
Using only one command as the -e option of st works, but I need to execute read or something like that after trans so the terminal won't close immediately.
Is this a bug in st or am I doing something wrong?
|
The -e option is a compatibility mechanism in simple terminal. The command and arguments that you pass, with or without -e, are executed directly, by simple terminal forking and then running execvp() in the child process on exactly the command and arguments it is given. There's no shell involved, and the arguments passed to st are sent on exactly as-is to the target program.
You have passed everything all as one argument. So simple terminal is actually trying to run a command named, literally, trans $(xclip -o) -t en; read (if single-quoted; with double quotes, whatever that becomes after the expansion). Obviously, you have no command named that.
To use a shell command line — such as you have here with expansions, shell built-in commands, and shell command syntax — you have to explicitly invoke a shell to understand it:
st -e sh -c 'trans "$(xclip -o)" -t en; read'
This runs st which starts an sh shell which runs a short shell script, which contains your commands.
| Execute semicolon separated commands passed to the -e flag of st (Simple Terminal) |
1,590,801,567,000 |
So I'm writing a script to basically run my docker applications from quickly, I've got everything working just fine it does everything I coded it to do.
I just have a question about one of my functions:
function prompt_user() {
echo "Enter details for docker build! If it's a new build, you can leave Host Directory and Remote Directory blank."
echo "If you've already assigned variables and are running the host you can leave the already filled vars blank if you entered them before"
echo " "
echo "Enter details:"
read -p "Image Name: " IMAGE_NAME
read -p "IP Address: " IP_ADDRESS
read -p "Port 1: " PORT_ONE
read -p "Port 2: " PORT_TWO
read -p "Container Name: " CONTAINER_NAME
read -p "Node Name: " NODE_NAME
read -p "Host Directory (Can leave this blank if you're building a new image): " HOST_DIRECTORY
read -p "Remote Directory (Can leave this blank if you're building a new image): " REMOTE_DIRECTORY
}
Would there be an easier way to use read less repetitively and assign all the inputs to the vars?
Here is the full script if you'd like to look at it.
|
I'm not sure how much cleaner this is than your existing function, but by using an associative array (requires bash 4.0 or later) combined with a for loop, you could call read just once.
function prompt_user() {
declare -A prompt_questions
vars=(IMAGE_NAME IP_ADDRESS PORT_ONE PORT_TWO CONTAINER_NAME NODE_NAME HOST_DIRECTORY REMOTE_DIRECTORY)
prompt_questions=(
[IMAGE_NAME]='Image Name'
[IP_ADDRESS]='IP Address'
[PORT_ONE]='Port 1'
[PORT_TWO]='Port 2'
[CONTAINER_NAME]='Container Name'
[NODE_NAME]='Node Name'
[HOST_DIRECTORY]="Host Directory (Can leave this blank if you're building a new image)"
[REMOTE_DIRECTORY]="Remote Directory (Can leave this blank if you're building a new image)"
)
cat <<EOF
Enter details for docker build! If it's a new build, you can leave Host Directory and Remote Directory blank.
If you've already assigned variables and are running the host you can leave the already filled vars blank if you entered them before
Enter details:
EOF
for var in "${vars[@]}"; do
read -rp "${prompt_questions[$var]}: " "$var"
done
}
| Using 'read' for more than one variable |
1,590,801,567,000 |
My question is based on the following question/answer.
I am trying to use the read -n 1 a solution as given there.
However, FreeBSD gives me:
read: Illegal option -n
I don't know how to find out, what the FreeBSD equivalent is.
(Please don't tell me RTFM; I searched but can't find the proper info.)
|
This is not dependent on your operating system but on your shell.
In bash and ksh93, read -n N will read a specific number (N) of characters (or bytes).
Other shells, such as dash or ash (which serves as sh on FreeBSD) and pdksh (which is sh and ksh on OpenBSD), do not have a read built-in with this option. The tcsh and csh shells on FreeBSD also do not have read -n.
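If you are stuck in a shell whose read lacks -n, one commonly seen workaround is to read a single byte with dd; this is a sketch (note it reads a byte, not a multibyte character, and at an interactive terminal you would first need something like stty -icanon min 1 time 0, restoring the saved stty -g state afterwards):

```shell
# Read exactly one byte from standard input, using POSIX tools only.
read_one_char() {
    dd bs=1 count=1 2>/dev/null
}
```

For example, printf 'abc' | read_one_char outputs a.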
| What is the FreeBSD equivalent of "read -n"? |
1,590,801,567,000 |
Today I learned some tricks about menu options on the command line.
One of these was
cat << EOF
Some lines
EOF
read -n1 -s
case $newvar in
"1") echo "";
esac
It's really magical.
I can't find any description in the man page about this option. How is the input to the read command passed into the case statement? As far as I know, this usually uses a variable.
I just want to understand the process of this combination further.
while :
do
clear
cat<<EOF
==============================
Menu Install DHCP Tool
------------------------------
Please enter your choice:
(1) Config Network Interface
(2) Check status
(3) Config DHCP server
(Q)uit
------------------------------
EOF
read -n1 -s
case "$REPLY" in
"1") config_network ;;
"2") check_status ;;
"3") config_dhcp ;;
"q") exit ;;
* ) echo "invalid option" ;;
esac
sleep 0.2
done
|
The documentation of read notes that:
If no names are supplied, the line read is assigned to the variable REPLY.
From that point it's a normal case statement. -n1 makes read return after a single character, and -s turns off terminal echo of the input.
| What does "read -n1 -s" mean in this script? |
1,590,801,567,000 |
The following runs, but does nothing, yet it does not error:
while read dates; do ./avg_hrly_all_final.sh ${dates}; done < ./dates_all.csv
I have a list of dates in "dates_all.csv" that have the following form:
2005 01
2005 02
2005 03
And the script I am calling "avg_hrly_all_final.sh" works by passing it 2 positional parameters, example:
./avg_hrly_all_final.sh 2005 01
FOLLOW-UP
xargs -n2 ./the_script.sh <./dates_to_pass.csv
OR
while read dates; do ./the_script.sh ${dates}; done <./dates_to_pass.csv
work; just make sure the file being passed uses the same end-of-line convention as the machine you are running the command on expects ;)
|
This is a likely job for xargs:
printf %s\\n '#!/bin/sh' 'printf "<%s>\n" "$$" "$@"' >avg_hourly.sh
chmod +x ./avg_hourly.sh
xargs -n2 ./avg_hourly.sh <<\IN
2005 01
2005 02
2005 03
IN
xargs will split on whitespace by default and invoke the specified command once for every two arguments (-n2). I just wrote a little dummy avg_hourly.sh script there, which prints its PID followed by its arguments, one per line, each delimited by < and >. The above prints:
<1115>
<2005>
<01>
<1116>
<2005>
<02>
<1117>
<2005>
<03>
...just to demonstrate. You should use <./dates_all.csv rather than my <<\IN here-document as input, though, probably.
| use "read" command to pass lines as positional parameters to a shell script |
1,590,801,567,000 |
I wish to back up some of the files located in my home dir.
That is
simple files at the root of my home
and some directories in my home, listed in ~/worthsaving.txt
Sample worthsaving.txt:
cloud
work/web
work/python
I made this script :
#!/bin/bash
srce=/home/poor
dest=/run/media/poor/backup
mkdir -p $dest
cp -v $srce/* $dest/ # simple files
while IFS= read -r worth
do
mkdir -p $dest/$worth
cp -Rv $srce/$worth $dest/$worth # dirs that matters
done < "$srce/worthsaving.txt"
But I end up with not only what I wish but my entire home in a dir called poor at the root of the backup.
What's wrong?
|
There is a blank line in your list of files to backup. Inside the loop this means that $worth is empty, which in turn results in execution of this command:
cp -Rv /home/poor /run/media/poor/backup/
In case it's not yet clear, this copies your entire home directory to the target directory poor in the destination.
Here's what your loop could correctly look like:
while IFS= read -r worth
do
if [ -n "$worth" ] && [ -d "$srce/$worth" ]
then
mkdir -p "$dest/$worth"
cp -Rv "$srce/$worth" "$dest/$worth" # dirs that matters
fi
done < "$srce/worthsaving.txt"
Although as has been hinted in the comments you could replace all of this with rsync:
rsync -rtR --files-from "$srce/worthsaving.txt" "$srce/" "$dest/"
Personally I'd use -a (--archive) instead of -rt (--recursive --times) but as you didn't use cp -a I've stuck with the minimal recommended flags for rsync too.
As an aside, always double-quote your variables* when you use them. This protects them from word splitting and globbing by the shell. As an example consider a directory my bin assigned by the loop to $worth. Without double-quotes, this would result in this erroneous and unexpected command:
cp -Rv /home/poor/my bin /run/media/poor/backup/my bin
Here, an attempt would be made to copy three files or directories (/home/poor/my, bin from the current directory, and /run/media/poor/backup/my) to the target directory bin.
* At least, always until you understand why you might have a specific case not to do so. And even then, reconsider.
| Why is my entire home backed up? |
1,590,801,567,000 |
Now we're all familiar with not using:
find . -print | xargs cmd
but using
find . -print0 | xargs -0 cmd
To cope with filenames containing e.g. newline, but what about a line I have in a script:
find $@ -type f -print | while read filename
Well, I assumed it would be something like:
find $@ -type f -print0 | while read -d"\0" filename
And if I'd simply done:
find $@ -type f -print0 | while read filename
I'd be seeing the NULLs?
But no, the while loop runs zero times (in both cases). I assume the read failed, presumably because it read a NUL (\0).
Feels like the bash read should sport a "-0" option.
Have I misread what's happening or is there a different way to frame this?
For this example I may well have to recode to use xargs but that's a whole heap of new processes I didn't want to fork.
|
When using read, you can use just -d '' to read up to the next null character.
From the bash manual, regarding the read built-in utility:
-d delim
The first character of delim is used to terminate the
input line, rather than newline. If delim is the empty
string, read will terminate a line when it reads a NUL
character.
You probably also want to set IFS to an empty string to stop read from trimming flanking whitespaces from the data, and to use read with -r to be able to read strings containing backslashes properly. You also need to double quote the expansion $@ if you want your script or shell function to support search paths containing newlines, spaces, filename globbing characters, etc:
find "$@" -type f -print0 |
while IFS= read -r -d '' pathname; do
# use "$pathname" to do something
done
Personally, I would not pass pathnames out of find at all if it's not desperately needed, but execute the needed operations via -exec, e.g.,
find "$@" -type f -exec sh -c '
for pathname do
# use "$pathname" to do something
done' sh {} +
Related topics:
When is double-quoting necessary?
Understanding "IFS= read -r line"
Why is looping over find's output bad practice?
| reading filenames with newlines |
1,590,801,567,000 |
in shell script when you have the following :
read my_variable
Enter is the key that saves your input.
Is there a way to make Tab accomplish the same as Enter, without removing Enter's functionality?
|
It may be overkill but you could obtain that by using read -e, which enables the Readline facility on the read utility. At that point your desired result would be only one key-binding away.
Be careful, though: Readline brings along many other features too, like completion, history, etc., which you might not want for a simple read my_variable. If those are undesirable, you have to explicitly clear the key-bindings and disable the features you don't want for your read -e.
Sample proof-of-concept from command-line:
(bind 'TAB: accept-line'; IFS= read -re var && echo "$var" || echo ko)
You can do that in a script too, although bind will give a warning (which you can still mute by redirecting 2>/dev/null).
An alternative to bind commands in a script is to provide a custom inputrc file prior to invoking the script that you want to be affected. It's not necessary to have a real file; a here-document suffices.
The above example made through scripts:
#!/bin/bash
export INPUTRC=/dev/fd/3
script2.sh 3<<EOF
TAB: accept-line
set history-size 0
EOF
# this example 'inputrc'-like file also disables history support
The above script prepares the custom inputrc file as a Here Document on file-descriptor 3, which the shell running script2.sh will read as indicated by the INPUTRC environment variable.
Then script2.sh:
#!/bin/bash
echo start
bind -q accept-line 2>/dev/null # shows which keys are configured to accept input
IFS= read -re var && echo "$var" || echo ko
echo end
Before waiting for input on the read, the script will print something like:
accept-line can be invoked via "\C-i", "\C-j", "\C-m".
showing that Tab (i.e. Ctrl-I shown above as \C-i) accepts a line just as well as a Return (i.e. Ctrl-M, carriage-return) or a newline (Ctrl-J).
For a more "real world" example:
#!/bin/bash
bind 'TAB: accept-line' &>/dev/null
echo "enter your name:"
IFS= read -re var
echo "your name is: $var"
If you go down this path, have a look at Readline user's guide, at least the reduced one in your man bash. The set convert-meta off setting among others may worth a particular mention in order to have better support for non-ascii characters.
| Shell script on "read" accept via both enter key and tab key |
1,590,801,567,000 |
I am trying to read a password from the user. I used the -s (silent) flag, but read -s is not working from the script; it works if I run it manually from the terminal.
error details
project.sh: 3: read: Illegal option -s
you entered
code
maddy@ElementalX:~/Desktop$ cat project.sh
#!/usr/bin/sh
read -s -p "Enter Password: " pswd
echo "you entered $pswd"
maddy@ElementalX:~/Desktop$
|
The -s option to the built-in utility read is not a standard option, and is unlikely to be implemented in sh. Likewise, the -p option for giving a custom prompt is unlikely to be implemented by a generic sh.
Run your script with bash instead, whose read does support -s for reading from the terminal without echoing the typed-in characters (and also -p). The easiest way to do this is to change the #!-line to point to the bash executable on your system.
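A quick non-interactive way to check that bash's read supports these options (the password is piped in here only for demonstration; interactively you would simply run the script):

```shell
# when stdin is not a terminal, bash skips the prompt and the silencing
printf 's3cret\n' | bash -c '
  read -rs -p "Enter Password: " pswd
  echo "you entered $pswd"
'
```

This prints you entered s3cret, confirming both -s and -p are accepted by bash's read.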
In a non-bash shell, you may get a similar effect with
printf 'Enter password: ' >&2
stty -echo
read password
stty echo
| read -s gives error via script |
1,590,801,567,000 |
Why when reading input with read, and the input is ??? the result is bin src?
$ read
???
$ echo $REPLY
bin src
Running bash on macOS.
|
The data held in the variable REPLY is still ???, but the result from using the variable unquoted with echo, like you are doing, is the same as doing
echo ???
You need to double quote all variable expansions.
When you leave a variable expansion unquoted, two things happen:
The value of the variable is split into multiple words. The splitting happens wherever a character is the same as one of the characters in $IFS (a space, tab and a newline by default). In your case, the result of the splitting is the same as before the splitting (the single word ???) if the value of $IFS is not modified.
Each generated word undergoes filename generation, or "filename globbing". That means that if a word is a globbing pattern, which ??? is, any filenames matching that pattern will replace the pattern. The pattern ??? matches any three character long filename and you obviously have two of those in your current working directory.
Neither of these things happens if the variable expansion is double-quoted.
Example, recreating your issue and solving it:
$ ls
X11R6 games lib libexec mdec ports share xobj
bin include libdata local obj sbin src
$ read
???
$ echo $REPLY
bin lib obj src
$ echo ???
bin lib obj src
$ echo "$REPLY"
???
Another example in the same directory as above, showing that the string that I input is split into two words (??? and s*) and that these then gets used as filename globbing patterns:
$ read
??? s*
$ echo $REPLY
bin lib obj src sbin share src
$ echo "$REPLY"
??? s*
Notice that src gets outputted twice as it matches both ??? and s*.
Related:
Why does my shell script choke on whitespace or other special characters?
When is double-quoting necessary?
Short summary of the above questions and answers: Always double quote all expansions, unless you know exactly which ones don't need it or if you actually want to invoke splitting and globbing.
| read command with ??? input |
1,590,801,567,000 |
I want to make the following script to prompt the user after each iteration, and wait for input before running the next iteration:
#!/bin/sh
DIR=$(pwd)
for f in $DIR/test-data/*.txt
do
echo "$f"
n=$(wc -w < "$f")
k=$(( $n > 6 ? 6 : $n ))
echo $n:$k
java "Permutation" $k < "$f"
read -p "Press Enter to continue"
done
The read command just prints the following and continues without pause:
Press Enter to continue./test.sh: 12: read: arg count
|
POSIX read doesn't have -p, that's a non-POSIX extension implemented in some shells (like bash). You're currently using /bin/sh which is probably a POSIX-compliant shell with limited extensions, if you want to use bash extensions you should consider using /bin/bash instead.
Instead, you can POSIXly do this:
printf 'Press Enter to continue'
read REPLY
| How do I get a user input inside a `for in` loop |
1,590,801,567,000 |
I am using these commands in Kali Linux, but I keep getting an error when I run the second command: "No such file or directory found."
end=7gb
read start _ < <(du -bcm kali-linux-1.0.8.amd64.iso | tail -1); echo $start
parted /dev/sdb mkpart primary $start $end
These are some commands out of a larger set of commands I am using to try to get persistence. I do not actually know what any of these mean.
My request is for an explanation of what each command does so I can fix my errors.
|
read start _
This assigns the first word (according to $IFS) of the input line to the variable start.
du -bcm kali-linux-1.0.8.amd64.iso | tail -1
is a strange way of getting the size of the file, rounded up to the next megabyte.
parted /dev/sdb mkpart primary $start $end
creates a partition on sdb which begins after the space necessary for the ISO file (assuming the default unit for parted is megabytes, which I have not checked) and ends at 7 GB.
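The read start _ behaviour is easy to check with made-up input:

```shell
# the first word goes into start; the rest is swallowed by the _ variable
printf '42 /some/file total\n' | { read -r start _; echo "start=$start"; }
```

This prints start=42.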
| What does these commands do? |
1,590,801,567,000 |
In what way does the final value of number being assigned by read var number (and we enter 5) and number=5 differ? I was making this for loop:
read var number
#number=5
factorial=1
i=1
for i in `seq 1 $number`
do
let factorial=factorial*i
done
echo $factorial
when I noticed that if the number has the value assigned by the read, rather than direct assigning, it doesn't enter my for. I'm guessing it's because of the data type.
|
If you change the first line to
read number
you’ll get the behaviour you’re looking for.
read var number
reads two values and stores them in variables named var and number. If you only input one value, number is left empty, so seq 1 $number expands to seq 1, which just prints 1.
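A quick demonstration of what happens with a single value (piping the input in instead of typing it):

```shell
printf '5\n' | {
    read -r var number
    echo "var=$var number=$number"   # number is empty
    seq 1 $number                    # expands to just: seq 1
}
```

This prints var=5 number= followed by 1, so the factorial loop only ever runs one iteration.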
| For with read value |
1,590,801,567,000 |
This post 'Output file contents while they change' is similar but the answer doesn't work for my case. Tail -f doesn't seem to refresh the output when the file's size doesn't change or when there are no new rows added.
The file I'm trying to monitor/watch in SSH holds the value of a Volt Meter reading and it changes frequently (about every two or three seconds) but tail -f doesn't output the changes. Is there any other command similar to tail that can be used or does this require a custom binary?
|
If the filesize doesn't change then the file isn't being appended to, it's being overwritten.
Depending on how the file is being rewritten, tail -F (capital F) may detect the change and display the new contents.
Otherwise if the file is small (e.g. just one line) then something like
while true
do
    cat file
    sleep 2
done
Will redisplay the file every 2 seconds. Fortunately there's a command that makes this easier
$ watch cat file
| Watch/View file contents but no new lines added |
1,590,801,567,000 |
Is it possible to create a virtual file in unix, whose contents are determined programmatically when the file is accessed, a bit like the files in /proc?
For example, I have a program that retrieves a particular setting by reading/catting a file. However, rather than store that setting directly in a plain text file, I want to be able to retrieve that setting from a database in the background and then pass that information to the program when it reads this virtual file. Is it possible to do so?
|
You could look at Named Pipes.
man fifo for a starting point.
Essentially you create a named pipe, one process (or more) reads from it and another can write to it.
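A minimal sketch of the idea (the path and the generated value are illustrative; a real setup would query the database in the writer):

```shell
dir=$(mktemp -d)
mkfifo "$dir/setting"

# background "writer": blocks until something opens the FIFO for reading,
# then supplies freshly generated content
echo "value-from-db" > "$dir/setting" &

# the consumer reads it like an ordinary file
cat "$dir/setting"

rm -r "$dir"
```

Each open-for-read is served by a corresponding writer, so the content can be generated at access time. Note that unlike /proc files, a FIFO needs a writer present for every read.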
| Creating a virtual file whose contents are determined programmatically [duplicate] |
1,590,801,567,000 |
I have a script that looks something like the following.
find /path -type f |
sed -re 'stuff' |
xargs -Ix sh -c '{
echo "information about x"
./exe < x
read
}'
My goal is to provide each file given to xargs as input to exe. However, I do not want the output of exe to be provided all at once. Instead, I wish to study output of exe for each file, and then continue once I am done studying that output. Hence, I have read.
The problem is, read does not seem to work here. Upon executing the script, all output was produced.
Note that I am not looking for a way to sleep the execution of xargs. Rather, I am looking for a way to pause xargs, potentially indefinitely, until the user wishes to continue.
|
You can redirect the input to read from /dev/tty like this:
read reply < /dev/tty
You can accomplish effectively the same results within the shell, without using xargs and without directly executing a new shell to process each file:
find /path -type f |
sed -re 'stuff' |
while IFS= read -r x
do
echo "information about $x"
./exe < "$x"
read reply < /dev/tty
done
I think you'll find that this method is more efficient (not really an issue in this interactive situation), more portable, requires less syntax, and is easier to read and maintain.
| Pause (with read or similar) in xargs |
1,590,801,567,000 |
I have the following text file:
(Employee Ashley)
insert text here
(Employee Bob)
insert text here
(Employee Joseph)
insert text here
I would like to take the text for each "employee" and save it to a new file. I can manually count the number of employees if that's necessary. How can I do this all from terminal?
I was thinking of "awk" then some way of detecting the open parenthesis as that is the only thing that has the text in between and then in file directory > outfile directory
Could someone help me tie up the loose ends?
|
awk ' /^\(Employee/ { # FILENAME is an awk built-in, so use our own variable
                      outfile = ""
                      for ( i=2 ; i<=NF; i++ )
                          outfile = outfile $i
                      # strip the trailing ")" and add an extension
                      outfile = substr(outfile, 1, length(outfile)-1) ".txt"
                    }
      !/^\(Employee/ { print >> outfile } '
This presumes the first line will always be an employee identifier. The for loop is to allow for surnames or multiple given names (Betty Lou, Mary Jo, etc.)
| Saving data between names in terminal to new files? |
1,590,801,567,000 |
How can I read in POSIX bash input like this:
<name>,<tag1> <tag2> <tag3>…
I tried
while read line; do ... done
but this wants newlines; all I have is spaces.
(Is IFS the solution? If so, how? I don't fully understand IFS.)
|
Use an array:
echo '<name>,<tag1> <tag2> <tag3>' | while IFS=" ," read -a foo; do echo ${foo[@]}; done
Output:
<name> <tag1> <tag2> <tag3>
From man bash:
IFS: The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command.
| Reading input fields separated by spaces |
1,637,076,812,000 |
I am trying to feed a user-input file into a while loop, but it kept failing when I ran the script.
The user-input file genelist contains a list of numbers which I have been using as prefixes of my other files, e.g. 012.laln, 012.model.
genelist:
012
013
025
039
109
.
.
.
This is the script I have been testing on.
#!/usr/bin/env bash
read -pr "genefile: " genelist
read -pr "treefile: " trees
read -pr "workers: " workers
while read -r i; do
while read -r j; do
raxml-ng --sitelh --msa "$i".laln --model "$j".model --tree "${trees}" --workers "${workers}" --prefix "$i"-rT;
done < "$genelist".model;
done < "$genelist"
In order to execute the raxml-ng tool, I need to input files for --msa, --model, --tree, --workers and --prefix for the output file name. I need to repeat the process with multiple files; each 012.laln needs to match with 012.model and generate an output file named 012-rT. The input files for tree and workers are the same for all the files.
I kept on getting error:
line 2: read: `genefile: ': not a valid identifier
line 3: read: `treefile: ': not a valid identifier
line 4: read: `workers: ': not a valid identifier
I tried modifying the way I call the user input file genelist in a few ways, but to no avail.
while read -r "${genelist}" ...
while read -r "${genelist{@}}" ...
while read -r "{genelist}" ...
Before this, I have been using for loops, i.e., the one-liner below. It worked well.
I would like to try while loops if possible.
for i in $(cat genelist); do for j in $(cat $i.model); do raxml-ng --sitelh --msa $i.laln.trgc38_1l --model $j --tree trees --workers 4 --prefix $i-rT; done; done
Questions:
What is the correct and neat way to call the user input file genelist into the while loops?
There are some example I found in here but those are using numbers/number sequences in the loops. The answers suggested using C in for/while loops to solve the issue. But that doesn't seemed to be relevant for my case.
Meanwhile, any better alternative for for/while loops in this case is also welcome!
|
The errors have nothing to do with your loop. You are using read wrong:
$ read -pr "genefile: " genelist
bash: read: `genefile: ': not a valid identifier
The -p option needs an argument, and you're giving it r as the argument if you use -pr. You need:
read -p "genefile: " genelist
or
read -rp "genefile: " genelist
Also, a general albeit personal, note. Don't use read! That just makes the user's life harder for no benefit: it means I can't copy/paste the command into my readme file to redo it since the input is not part of the command. It means I can trivially make a typo since you are asking me to laboriously type out things instead of using tab-completion. It also means this is very hard to automate. Nine times out of ten, it is better to pass everything as an argument when launching instead of blocking execution to request input:
#!/usr/bin/env bash
genelist=$1
trees=$2
workers=$3
while read -r i; do
while read -r j; do
raxml-ng --sitelh --msa "$i".laln --model "$j".model --tree "${trees}" --workers "${workers}" --prefix "$i"-rT;
done < "$i".model;
done < "$genelist"
You can then launch the script with the values as arguments:
./script.sh genelist trees 4
| Read file from user input with a list of prefixes, then call file with prefixes in while loops |
1,637,076,812,000 |
I've written a bash function that accepts a command as an argument, runs it in the background, and allows the user to kill the command by pressing any key. This part works fine.
However, when I pipe it to a whiptail dialog gauge, the whiptail runs as expected, but after it returns, the terminal will no longer display keypresses. I can still run commands, I just don't see what I'm typing printed to the screen. The output is also formatted weirdly, where stdout appears after $.
I'm pretty sure the read command is responsible for this behavior, but I don't understand why. Can anyone offer any insight?
#!/bin/bash
function killOnKeypress() {
local runcommand="${1}"
local args=(${@:2})
# Run the command in the background
"${runcommand}" ${args[@]} &
# Get the process id of $runcommand
local pid=$!
# Monitor $runcommand and listen for keypress in foreground
while kill -0 "${pid}" >/dev/null 2>&1; do
# If key pressed, kill $runcommand and return with code 1
read -sr -n 1 -t 1 && kill "${pid}" && return 1
done
# Set $? to return code of $runcommand
wait $pid
# Return $runcommand's exit code
return $?
}
function count100() {
for ((i = 0; i < 100; i++)); do
echo $i
sleep .02
done
}
killOnKeypress "count100" | whiptail \
--gauge "Counting to 100" \
16 56 \
0
|
While this doesn't answer OP question, it can be useful for someone else landed here, looking for the fix/workaround.
As NickD pointed out in his comment, whiptail sets -echo (and, in my environment, it changes more than just echo).
To fix your script you can put
stty echo
at the end of it.
You can see what your script (whiptail) changes by running 'stty -a' before and after the script.
Of course you can save the outputs to files and make it easier to spot the differences:
stty -a > good_terminal
Then run your script. Your terminal is now messed up; reset it with 'reset', 'tset' or 'stty sane', run the stty command again, and diff afterwards:
stty -a > bad_terminal
diff good_terminal bad_terminal
| No keyboard output on terminal after running a script using read and whiptail |
1,637,076,812,000 |
I have a lab ntlm-extract.ntds file which has usernames and hashes in the format:
domain\username:integer:hash:hash2
For example:
somedomain.local\jcricket:5201:0020cfaecd41954fb9c9da8c61ccacd7:0020cfaecd41954fb9c9da8c61ccacd7
I'm comparing the hashes in the LINE[3]/hash2 column with hashes in the NTLM HIBP database, then I'd like to print out usernames that have matches, but the domain\username keeps losing the \ whatever I try, and I'm not sure if it's on the read or write that it loses it.
The script I have so far is:
#!/usr/bin/bash
while read line
do
IFS=':' read -ra "LINE" <<< ${line}
HASH=${LINE[3]}
HASH=${HASH^^}
printf "Checking for %s\n" $HASH
found=(`grep "$HASH" "./pwned-passwords-ntlm-ordered-by-hash-v7.txt"`)
if [ -n $found ]; then
printf "Match on username %s\n" "${LINE[0]}"
fi
done < "ntlm-extract.ntds"
Following the recommendations the final working script ended up being:
#!/usr/bin/bash
numoflines=(`wc -l ntlm-extract.ntds`)
numcomp=0
while IFS= read -r line; do
IFS=: read -ra hashline <<< "${line}"
passhash="${hashline[3]}"
printf "Checking for %s\n" $passhash
printf "Line %d of %d\n" $numcomp $numoflines
numcomp=$((numcomp+1))
found=''
found=(`grep -m 1 -i "$passhash" "./pwned-passwords-ntlm-ordered-by-hash-v7.txt"`)
wait
if [ -z "$found" ]; then
continue
else
printf "found return value is %s\n" "$found"
printf "%s\n" "${hashline[0]}" >> ./hibp-busted.txt
fi
done < "ntlm-extract.ntds"
|
You need -r on the outer read, not just on the inner read -a. You should also quote "${line}" and (probably) want IFS= unless you explicitly want to strip leading whitespace:
while IFS= read -r line; do
IFS=: read -ra LINE <<< "${line}"; printf '%s\n' "${LINE[@]}"
done < ntlm-extract.ntds
somedomain.local\jcricket
5201
0020cfaecd41954fb9c9da8c61ccacd7
0020cfaecd41954fb9c9da8c61ccacd7
I'd also suggest changing the name of the variable LINE to something that is not all uppercase.
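To see why -r matters on the outer read, compare what happens to the backslash with and without it:

```shell
line='somedomain.local\jcricket:5201:hash'
# without -r, read treats the backslash as an escape character and drops it
printf '%s\n' "$line" | { read l; printf 'without -r: %s\n' "$l"; }
# with -r (and IFS= to preserve whitespace), the line survives intact
printf '%s\n' "$line" | { IFS= read -r l; printf 'with -r:    %s\n' "$l"; }
```

The first line prints somedomain.localjcricket:5201:hash (backslash eaten), the second prints the original string.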
| How to avoid losing my escape characters when reading in variables from a file with bash |
1,637,076,812,000 |
I'm not good at bash and Linux. I'm reading a script and I found the following function.
get_char()
{
SAVEDSTTY=`stty -g`
stty -echo
stty cbreak
dd if=/dev/tty bs=1 count=1 2> /dev/null
stty -raw
stty echo
stty $SAVEDSTTY
}
Basically, it is used to implement Press any key to continue feature like this.
echo "Press any key to continue!"
char=`get_char`
I know you can use the builtin read command to implement this. For example:
read -rsn1 -p "Press any key to continue"
Is there any reason to use this function instead of the builtin read command?
|
The get_char function from the question has problems with keys that generate multibyte characters, and with the fact that stty acts on stdin while dd reads from /dev/tty[1], so I'll use a "fixed" and simplified version of it for the comparison:
get_char(){
_g=$(stty -g); stty raw -echo; dd count=1 2>/dev/null; stty "$_g"
}
Some differences between k=$(get_char) and read -rsn1 k are:
the former is portable; it works the same in dash, bash, zsh, etc. read -rsn1 only works in bash and ksh.
when a key like F1 is pressed, k=$(get_char) will set k to the entire escape generated by the F1 key (^[OP) instead of eating up the leading ^[ (Esc) and leaving the OP for later. The same thing applies to any key which generates multiple characters.
Ctrl-C (VINTR), Ctrl-\ (VQUIT) or Ctrl-Z (VSUSP) will cause k=$(get_char) to set k to the raw control character (\x03, \x1c or \x1a), while it will interrupt, quit or suspend the script when read -rsn1 is used [2].
[1] If it should read from the controlling tty, it's simple to use it as k=$(get_char </dev/tty)
[2] read -rsn1 will fail to restore the tty settings if the script was suspended with Ctrl-Z and then resumed with fg.
Example when used from a shell with a line-editor -- which is itself messing with the tty settings:
$ bash -c 'read -rsn1 foo; echo "{$foo}"'
<Ctrl-Z>
[4]+ Stopped bash -c 'read -rsn1 foo; echo "{$foo}"'
$ fg
bash -c 'read -rsn1 foo; echo "{$foo}"'
f<Enter>
{f}
$
$ bash -c 'read -rsn1 foo; echo "{$foo}"'
<Ctrl-Z>
[4]+ Stopped bash -c 'read -rsn1 foo; echo "{$foo}"'
$ fg
bash -c 'read -rsn1 foo; echo "{$foo}"'
foo<Enter>
{f}
$ oo
bash: oo: command not found
Or when used from a shell which does not mess with the tty settings (eg. dash):
$ bash -c 'read -rsn1 foo; echo "{$foo}"'
<Ctrl-Z>
[1] + Stopped bash -c "read -rsn1 foo; echo \"{\$foo}\""
$ <Blindly type f, g, Enter>bash -c "read -rsn1 foo; echo \"{\$foo}\""
{e}
$
| Is there any reason to use this custom read_char function instead of the builtin read command? |
1,637,076,812,000 |
I have two text files, "${LinkP}" and "${QuestionP}". I want to read these files and store each complete line in the respective array,
IFS=$'\r\n' GLOBIGNORE='*' command eval "LinkA=($(cat "${LinkP}"))"
IFS=$'\r\n' GLOBIGNORE='*' command eval "QuestionA=($(cat "${QuestionP}"))"
Now I want to operate on these using a for loop
nLink=${#LinkA[@]} # Size of array
for ((i = 0; i < nLink; i = i + 1)); do
echo $i
Question=${QuestionA[i]}
echo "Question=${QuestionA[i]}"
done
But the Question variable doesn't contain the full line; it breaks after each space character.
How can I store each question and link (complete line in respective file) in these variable and process them inside for loop.
|
store each complete line in the respective array
is easy with a different approach:
mapfile LinkA < "$LinkP"
See help mapfile for more options, such as -t to remove a trailing delimiter from each line.
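For example, with both files loaded this way, whole lines stay intact in the loop (the file names and contents below are made up for the demonstration):

```shell
#!/bin/bash
LinkP=$(mktemp) QuestionP=$(mktemp)
printf '%s\n' 'link one' 'link two' > "$LinkP"          # sample data
printf '%s\n' 'what is one?' 'what is two?' > "$QuestionP"

mapfile -t LinkA < "$LinkP"        # -t strips the trailing newlines
mapfile -t QuestionA < "$QuestionP"

for ((i = 0; i < ${#LinkA[@]}; i++)); do
    echo "Question=${QuestionA[i]} Link=${LinkA[i]}"
done
rm -f "$LinkP" "$QuestionP"
```

Each array element holds a full line, spaces included, with no word splitting.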
| Reading multiple files and operating on stored Arrays |
1,637,076,812,000 |
Let's say I defined this function in the script:
fct1() {
local msg1=${@}
if [[ "${verb}" = 'tru' ]]; then
echo "I say $msg1"
sleep 1
echo "i repeat"
sleep 1
echo "I saaaaaaaaay $msg1"
else
echo "$msg1"
fi
}
How would I go about making a user call this function from read?
I'm thinking something like
read fct1 "aha aha ahaaaaa"
And the output would be:
"I say aha aha ahaaaaa"
"I repeat"
"I saaaaaaaaay aha aha ahaaaaa"
Basically, how do I use the input on read and not store it in a variable, but use it as a command?
Thanks.
|
If you wanted the message do be read as one line from stdin (entered by the user when the script is used in a terminal) and then passed as argument to the function, you could do:
fct1 "$(line)"
line is no longer a standard command but still fairly widespread. You could replace it with head -n1, but with some implementations, it could read more than one line (though it outputs only one) when the input is not coming from a terminal device.
With bash's read you would have to store it in a variable; that's what read is for: storing input in a variable.
IFS= read -r line && fct1 "$line"
With zsh's read, you can use the -e option which echoes the read data instead of storing it in a variable, so line above can be written there as IFS= read -re:
fct1 "$(IFS= read -re)"
(that's less efficient than using read with a variable as we need to fork a process so zsh can read read's output).
Of course, you could also replace your:
local msg1=${@}
with
local msg1; IFS= read -r msg1 || return
| Treating the input for the read command as a command itself |
1,637,076,812,000 |
Can we use read(), write() on a directory just like on any other file in Unix/Linux? I have a confusion here because directories are also considered as files.
|
Some filesystems allow read() to be used on directories, but this has to be seen as a mistake, since the data structures in such a directory may be undocumented.
You can never use write(), since this would destroy the integrity of the affected directory.
The official interfaces for directories are opendir(), closedir(), readdir(), telldir() and seekdir().
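On Linux you can see both halves of this from the shell: a tool that calls read(2) on a directory is refused, while tools that go through the opendir()/readdir() interface work fine (historically some systems did let read() succeed on directories, so this is Linux-specific behaviour):

```shell
# cat uses read() and is refused on Linux
cat . 2>&1 || echo "read() on a directory failed, as expected"
# ls lists the directory via readdir() without trouble
ls -d . && echo "readdir()-based listing works"
```
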
| Can we use read(), write() on a directory in unix/linux? |
1,637,076,812,000 |
I have a script which guides users through the installation of my software and I want to write a log file in case something bad happens and the user needs support.
The script looks like this:
while true; do
echo "This script will help you with the installation. Do you wish to proceed?"
echo "(1) Yes"
echo "(2) No"
read proceed
case $proceed in
1 ) break;;
* ) echo "Aborting."; exit 8;;
esac
done
unset proceed
I then run it by using ./install.ksh | tee /var/log/myinstall.log and everything works just fine, but the user input to the question is not logged.
When I add echo $proceed after the read command, it is written to the log but displayed twice, like that:
This script will help you with the installation. Do you wish to proceed?
(1) Yes
(2) No
1 #this is the input which is not written to the log
1 # this is the echo which is written to the log
My question is now how I could either suppress the output of the read command or how I could write the echo only to the file but not to STDOUT?
|
You should use script instead, it’s designed for exactly this purpose:
script -c ./install.ksh /var/log/myinstall.log
It will log the input to read as well as any output.
| Writing user input to file using tee |
1,637,076,812,000 |
I am running several dedicated servers on different tmux sessions.
I have to change ports and have to write a command in all the tmux sessions.
The command is: config['Port'] = 12345, 12345 being the port.
I tried to write a script which would take inputs from me and type the whole code with the code I input into all the different tmux sessions, but it doesn't work. The name of the session is 43210.
#!/bin/bash
read -p '43210: ' avar
tmux attach-session -t 43210 "config['Port'] = ${avar}"
But it never works and shows:
usage: attach-session [-dr] [-t target-session]
|
Use double quotes so that ${avar} is expanded by the shell; single quotes would pass the literal variable name instead of its value. However, the usage error comes from something else: attach-session does not accept a command string at all, which is why tmux prints its usage message. To type a command into an existing session, use send-keys instead:
read -p '43210: ' avar
tmux send-keys -t 43210 "config['Port'] = ${avar}" Enter
send-keys delivers the text (followed by the Enter key) to session 43210 as if it had been typed there.
| Bash - Take input from user and send a command having that input in tmux |
1,637,076,812,000 |
I need to create a bash script where a single argument is passed by the user (without using the "read" input option) at the terminal command line that creates a directory with that name, or otherwise notifies the user that such directory exists.
Most of the script seems pretty straight forward, except for the input being passed to mkdir WITHOUT read input option.
Things I tried but failed:
1) Prompting the user to input a name for a directory in the terminal and then running the script again and using the history option as input.
2) Attempting to write the users input to a file, only to realize that it requires read.
Any tips or help would be greatly appreciated. Thank you in advance.
|
a single argument is passed by the user (without using the "read" input option) at the terminal command line that creates a directory with that name, or otherwise notifies the user it exists...
It sounds like ordinary positional commandline parameters are what you need:
Script source code for mkdir1:
#!/bin/sh
mkdir "$1"
Usage
./mkdir1 foo
./mkdir1 foo
mkdir: cannot create directory ‘foo’: File exists
foo was created the first time I ran mkdir1. The second time, it notified me that it already exists.
That seems to meet your specification, and the read builtin was not required.
Explanation
$1 is the first positional parameter. $2 would be the second, etc.
For more complex examples than this question requires, you can find more information on positional parameters from many sources, such as the bash hackers wiki, summarized here:
$0 is usually the name of the shell
$FUNCNAME the function name if inside a function, or $0
$1 - $9 the argument list elements from 1 to 9
${10} - ${N} the argument list elements beyond 9
$* all positional parameters except $0
$@ all positional parameters except $0
$# the number of arguments, not counting $0
| Linux bash script input without using "read" [closed] |
1,637,076,812,000 |
I've two documents: doc1.lst and doc2.lst
I want to take the content of each line and put it as parameters for my SQL query.
I tried something like this, please correct me
file=doc1.lst
while read line
do
p1=$line;
file=doc2.lst
while read line
do
p2=$line;
sqlplus64 $User/$Pass@$ORACLE_SID << EOF2
@update.sql p1 p2
done < echo "Ok"
done < echo "Ok"
EOF2
The thing is that I want to take the value of each line and put it as a parameter (p1 and p2) to be able to update my table as seen in the sqlplus query.
For a better understanding my file doc1.lst looks like :
AAA
ABC
EDF
And my file doc2.lst :
30
10
30
I want to take those values to update my table.
|
As I understand it (the <<EOF2 stuff at the end isn't crystal-clear), the end result you're after is to feed the following into sqlplus64:
@update.sql AAA 30
@update.sql ABC 10
@update.sql EDF 30
To produce this, instead of looping over the contents of both files, you can combine them. Using paste on both files (paste doc1.lst doc2.lst) gives
AAA 30
ABC 10
EDF 30
(paste joins with tabs by default). Changing the delimiter with paste -d ' ' doc1.lst doc2.lst gives
AAA 30
ABC 10
EDF 30
Then we need to add @update.sql as a prefix; this can be done with sed, replacing the start of each line (^) with the prefix:
paste -d ' ' doc1.lst doc2.lst | sed 's/^/@update.sql /'
produces the desired result.
This can then be fed in one shot into sqlplus64:
paste -d ' ' doc1.lst doc2.lst | sed 's/^/@update.sql /' | sqlplus64 $User/$Pass@$ORACLE_SID
If you need exit at the end of the script fed into sqlplus64:
(paste -d ' ' doc1.lst doc2.lst | sed 's/^/@update.sql /'; echo exit) | sqlplus64 $User/$Pass@$ORACLE_SID
If you really want to run things line by line, you can while read each line of the result and feed that to sqlplus64 instead.
| Shell : while read line nested |
1,637,076,812,000 |
I have a long line that comes as output from a git command: a=$(git submodule foreach git status). It looks like this:
a = "Entering 'Dir1/Subdir' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean Entering 'Dir2' HEAD detached at xxxxxx nothing to commit, working tree clean Entering 'Dir3' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean Entering 'Dir4' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean"
I want to separate it into an array:
ARR[0] = "'Dir1/Subdir' On branch master ..."
ARR[1] = "'Dir2' HEAD detached at ..."
etc.
To do that, I have tried to substitute "Entering " for a symbol (I have tried # $ % & \t ...) with a=${a//Entering /$} and it works alright. Then, I try to use IFS and read to separate it into an array: IFS='$' read -ra ARR <<< "$a"
It's here where I am facing problems.
The output that I get from echo ${ARR[@]} is "Dir1/Subdir1", so I think that read is being affected by spaces or by how the output from git is formatted, but I don't understand what is happening or how to fix it. Could you please give me any suggestions?
Thank you.
|
You can use readarray bash builtin and specify the delimiter within the same command:
readarray -d 'char delimiter' array <<< "$variable"
For example:
readarray -d '@' array <<< ${a//Entering /@}
Finally when you print each result you might want to remove the @ (or any other character used as delimiter):
echo ${array[1]%@}
echo ${array[2]%@}
echo ${array[@]%@}
If you want to delete the index 0 (because it contains @) you can reassign the array by copying the items from index 1 to last index:
array=("${array[@]:1}")
Tip: If you want to avoid use ${array[index]%@} each time you want to get some item, you can reassign the array again by removing the @ with:
array=("${array[@]/@}")
| How to separate long string into a string array with IFS and read, or any other method |
1,637,076,812,000 |
Hereafter are two read statements, one that uses a space as a delimiter, and the other \0. Only the first works. What am I doing wrong with the second?
$ IFS=' '; read first second < <(printf "%s " "x" "y" ); echo "$first+$second"
x+y
$ IFS=$'\0'; read first second < <(printf "%s\0" "x" "y" ); echo "$first+$second"
xy+
|
Try using an array, and the mapfile AKA readarray built-in. See help mapfile for details. If you provide an empty string as the argument to mapfile's -d option, it will use a NUL as the delimiter.
First, create a function that can join an array into a single string with an arbitrary separator:
$ joinarray() { local IFS="$1"; shift; echo "$*"; }
This uses the first argument as the output separator, then uses echo to print the remaining arguments as a single string. This isn't limited to joining arrays, it works with any arguments (arrays, scalar variables, fixed strings), but it's particularly useful when used with arrays. It's called joinarray so it doesn't conflict with the standard join command.
Then, using an array called "$array":
$ mapfile -d '' array < <(printf "%s\0" "x" "y" ) # read the data into $array
$ declare -p array # show that the data was read correctly
declare -a array=([0]="x" [1]="y")
$ joinarray + "${array[@]}" # output the array joined by + characters
x+y
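The same mapfile/joinarray pair works for any NUL-separated stream (for example the output of find -print0); a minimal sketch:

```shell
joinarray() { local IFS="$1"; shift; echo "$*"; }
# Read a NUL-separated stream; find -print0 could replace the printf here
mapfile -d '' array < <(printf '%s\0' x y z)
joined=$(joinarray + "${array[@]}")
echo "$joined"
```

Since bash variables cannot hold NUL bytes, the delimiter simply disappears from each stored element.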
| Splitting a null separated string |
1,637,076,812,000 |
I'd like to show that entering passwords via read is insecure.
To embed this into a half-way realistic scenario, let's say I use the following command to prompt the user for a password and have 7z¹ create an encrypted archive from it:
read -s -p "Enter password: " pass && 7z a test_file.zip test_file -p"$pass"; unset pass
My first attempt at revealing the password was by setting up an audit rule:
auditctl -a always,exit -F path=/bin/7z -F perm=x
Sure enough, when I execute the command involving read and 7z, there's a log entry when running ausearch -f /bin/7z:
time->Thu Jan 23 18:37:06 2020
type=PROCTITLE msg=audit(1579801026.734:2688): proctitle=2F62696E2F7368002F7573722F62696E2F377A006100746573745F66696C652E7A697000746573745F66696C65002D7074686973206973207665727920736563726574
type=PATH msg=audit(1579801026.734:2688): item=2 name="/lib64/ld-linux-x86-64.so.2" inode=1969104 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(1579801026.734:2688): item=1 name="/bin/sh" inode=1972625 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=PATH msg=audit(1579801026.734:2688): item=0 name="/usr/bin/7z" inode=1998961 dev=08:03 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
type=CWD msg=audit(1579801026.734:2688): cwd="/home/mb/experiments"
type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574
type=SYSCALL msg=audit(1579801026.734:2688): arch=c000003e syscall=59 success=yes exit=0 a0=563aa2479290 a1=563aa247d040 a2=563aa247fe10 a3=8 items=3 ppid=2690563 pid=2690868 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts17 ses=1 comm="7z" exe="/usr/bin/bash" key=(null)
This line seemed the most promising:
type=EXECVE msg=audit(1579801026.734:2688): argc=6 a0="/bin/sh" a1="/usr/bin/7z" a2="a" a3="test_file.zip" a4="test_file" a5=2D7074686973206973207665727920736563726574
But the string 2D7074686973206973207665727920736563726574 is not the password I entered.
My question is twofold:
Is audit the right tool to get at the password? If so, is there something I have to change about the audit rule?
Is there an easier way, apart from audit, to get at the password?
¹ I'm aware that 7z can prompt for passwords by itself.
|
What's insecure is not read(2) (the system call to read data from a file). It isn't even read(1) (the shell builtin to read a line from standard input). What's insecure is passing the password on the command line.
When the user enters something that the shell reads with read, that thing is visible to the terminal and to the shell. It isn't visible to other users. With read -s, it isn't visible to shoulder surfers.
The string passed on the command line is visible in the audit logs. (The string may be truncated, I'm not sure about that, but if it is it would be only for much longer strings than a password.) It's just encoded in hexadecimal when it contains characters such as spaces that would make the log ambiguous to parse.
$ echo 2D7074686973206973207665727920736563726574 | xxd -r -p; echo
-pthis is very secret
$ perl -l -e 'print pack "H*", @ARGV' 2D7074686973206973207665727920736563726574
-pthis is very secret
That's not the main reason why you shouldn't pass a secret on the command line. After all, only the administrator should be able to see audit logs, and the administrator can see everything if they want. It is worse to have the secret in the logs, though, because they may be accessible to more people later (for example through an improperly secured backup).
The main reason why you shouldn't pass a secret on the command line is that on most systems the command line is also visible to other users. (There are hardened systems where this isn't the case, but that's typically not the default.) Anyone running ps, top, cat /proc/*/cmdline or any similar utility at the right time can see the password. The 7z program overwrites the password soon after it starts (as soon as it's been able to make an internal copy), but that only reduces the window of danger, it doesn't remove the vulnerability.
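On Linux you can see this exposure directly via /proc; a hypothetical demo, where the fake "-pMySecret" argument stands in for 7z's -p option:

```shell
# Hypothetical demo (Linux): any user can read a process's argv via /proc
sh -c 'sleep 2' sh '-pMySecret' &
sleep 1   # give the child a moment to exec
args=$(tr '\0' ' ' < "/proc/$!/cmdline")
wait
printf '%s\n' "$args"
```

The "secret" passed on the command line shows up verbatim, just as it would in ps output.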
Passing a secret in an environment variable is safe. The environment is not visible to other users. But I don't think 7z supports that. To pass the password without making it visible through the command line, you need to pass it as input, and 7z reads from the terminal, not from stdin. You can use expect to do that (or pexpect if you prefer Python to TCL, or Expect.pm in Perl, or expect in Ruby, etc.). Untested:
read -s -p "Enter password: " pass
pass=$pass expect \
-c 'spawn 7z a -p test_file.zip test_file' \
-c 'expect "assword:" {send $::env(pass)}' \
-c 'expect eof' -c 'catch wait result'
unset pass
| Sniff password entered with read and passed as a command line argument |
1,637,076,812,000 |
I'm trying to create an interactive bash script, where i can call given options from 1-n or just them like commands.
It will end up with a simulated prompt and 'read' is used to get the input, ofc.
But, if I type a long enough line, the cursor returns to the beginning of the line and overwrites the prompt as I type.
Prompt is color coded and if i remove color escapes it will be just fine. But i like colors :)
Script prompt is like this:
NOC=$(echo -en '\033[0m') # Default
RED=$(echo -en '\033[00;31m')
YELLOW=$(echo -en '\033[00;33m')
CYAN=$(echo -en '\033[00;36m')
OPROMPT="${RED}[Admin${CYAN}@${RED}bulletproof]#${NOC}"
until [ ! -z "$MCHOICE" ]; do
read -p "${OPROMPT} " -e MCHOICE
done
What am I supposed to do to stop this behavior? I can't seem to figure it out.
If i use echo or printf to display the prompt, it will erase it if i type something and then hit backspace to correct it.
If I'm not clear, here's an example:
This is the prompt (just picture it colored :P)
"[Admin@bulletproof]# "
...now here comes typing commands:
"[Admin@bulletproof]# vpn start my"
... now i continue typing
"domain.lanletproof]# vpn start my"
When it should be:
"[Admin@bulletproof]# vpn start mydomain.lan"
I also noticed that the buffer is different depending on the terminal window size. It doesn't start cutting off at same point when i have the terminal maximized, but has longer "tolerance"
EDIT:
Just figured a way to substantially improve 'buffer' about this one.
If i set the code like this:
until [ ! -z "$MCHOICE" ]; do
printf "$OPROMPT"
read -p " " -e MCHOICE
done
It will allow me to input a lot longer text... Don't know why..
|
When reading from a terminal, bash uses the readline library when executing the read builtin. It also uses readline when inputting command lines. In order to handle line wrapping correctly, readline needs to know if any characters in the prompt string do not take up any space on the screen.
If you were to call readline from C, you'd surround any escape sequence used to change screen colors with Ctrl+A (\001) and Ctrl+B (\002).
Bash allows you to use \[ and \] instead of those control characters when assigning to the command prompt variables (PS1, PS2, etc.). More recent versions of gdb support this, too.
Apparently bash doesn't allow this convenience for read -p. So you'll need to use those control characters.
NOC=$'\001\e[0m\002' # Default
RED=$'\001\e[00;31m\002'
YELLOW=$'\001\e[00;33m\002'
CYAN=$'\001\e[00;36m\002'
OPROMPT="${RED}[Admin${CYAN}@${RED}bulletproof]#${NOC}"
until [ ! -z "$MCHOICE" ]; do
    read -p "${OPROMPT} " -e MCHOICE
done
Tested with bash 4.4.23.
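If you define colors in several places, a small hypothetical helper can do the \001/\002 wrapping for you (a sketch, not part of the original answer):

```shell
# Hypothetical helper: wrap a raw escape sequence in \001/\002 so readline
# knows the sequence occupies no screen columns
rl_wrap() { printf '\001%b\002' "$1"; }

RED=$(rl_wrap '\e[00;31m')
NOC=$(rl_wrap '\e[0m')
OPROMPT="${RED}[Admin@bulletproof]#${NOC}"
```

The resulting strings are byte-for-byte the same as the $'\001\e[...]\002' literals above.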
| Input from readline owerwrites the prompt |
1,637,076,812,000 |
I have a few hundred lines like below in connRefused.log:-
2015-12-12 00:12:10,227 ERROR [Testing-KeepAlive-01] c.v.v.v.Connection [Connection.java : 001] failed to bind to {name=TestGW,direction=BOTH_WAY,username=espada,password=whatever,binds=1,keepAliveInterval=60000,params={Payload=0, useEXP=1},thisOne={id=1001,name=TestGw,ip=192.168.0.1,port=88}}: Connection refused
and below is part of my script (simplified) to read the connRefused.log into array
IFS=$'\n' read -d '' -r -a lines < /path/log/connRefused.log
for xx in "${lines[@]}"
do
??? # what to do here?
echo $Date
echo $ID
echo $Name
echo $IP
echo $Port
done
How can I extract the data I need from the line above and store it in the variables Date, ID, Name, IP and Port? The values come from the thisOne={id=1001,name=TestGw,ip=192.168.0.1,port=88} part, and for $Date I only need the time portion.
|
You don't need to use an array. Since the input data appears to be very regular, I would convert the input data into shell assignment statements, then read them into the shell and evaluate. Like this:
#!/bin/sh
sed '
    s/^[-0-9]* */date=/
    s/,.*thisOne={/ /
    s/}.*//
    s/,/ /g
' "$@" |
while read line
do
    eval $line
    echo date=$date
    echo id=$id
    echo name=$name
    echo ip=$ip
    echo port=$port
done
The sed command converts the input line into this:
date=00:12:10 id=1001 name=TestGw ip=192.168.0.1 port=88
The while loop reads one such line at a time and eval $line causes the line to be executed into the shell, which results in the variables being set to the given values.
The script will process file names from the command line OR standard input (note the "$@" at the end of the sed command).
The sed command converts the line into shell assignment statements via a series of s (substitute) commands:
Replace, only at the beginning of the line (^), any sequence of dashes and digits ([-0-9]*), followed by one or more spaces (*) with date=:
s/^[-0-9]* */date=/
Replace a comma followed by any characters (.*) followed by thisOne= with a space:
s/,.*thisOne={/ /
Delete a closing brace (}) followed by any other characters (.*) to the (implied) end of the line:
s/}.*//
Replace all commas (,) with spaces:
s/,/ /g
In the example script, I recommend temporarily deleting the pipe | to the end of the file and running only the sed command portion of the script so you can experiment and see how it works.
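As a self-contained sketch of just the sed stage, run on a shortened sample line (abbreviated from the question's log format):

```shell
# Shortened sample log line; only the parts the sed script cares about are kept
line='2015-12-12 00:12:10,227 ERROR [Testing-KeepAlive-01] c.v.v.v.Connection failed to bind to thisOne={id=1001,name=TestGw,ip=192.168.0.1,port=88}: Connection refused'
out=$(printf '%s\n' "$line" | sed '
    s/^[-0-9]* */date=/
    s/,.*thisOne={/ /
    s/}.*//
    s/,/ /g
')
echo "$out"
```

This yields exactly the assignment line that eval then executes.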
| Extract fields of a line into shell variables |
1,637,076,812,000 |
My process deadlocks. master looks like this:
p = Popen(cmd, stdin=PIPE, stdout=PIPE)
for ...:  # a few million iterations
    p.stdin.write(...)
p.stdin.close()
out = p.stdout.read()
p.stdout.close()
exitcode = p.wait()
child looks something like this:
l = list()
for line in sys.stdin:
    l.append(line)
sys.stdout.write(str(len(l)))
strace -p PID_master shows that master is stuck in wait4(PID_child,...).
strace -p PID_child shows that child is stuck in read(0,...).
How can that be?!
I did close the stdin, why is child still reading from it?!
|
parent.py
from subprocess import Popen, PIPE

cmd = ["python", "child.py"]
p = Popen(cmd, stdin=PIPE, stdout=PIPE)
for i in range(1, 100000):
    p.stdin.write("hello\n")  # Python 2 str; under Python 3, write bytes (b"hello\n") or pass universal_newlines=True to Popen
p.stdin.close()
out = p.stdout.read()
p.stdout.close()
print(out)
exitcode = p.wait()
child.py
import sys

l = list()
for line in sys.stdin:
    l.append(line)
sys.stdout.write(str(len(l)))
Running it:
$ python parent.py
99999
Looks like this works fine so the problem must be somewhere else.
| Deadlock on read/wait [closed] |
1,637,076,812,000 |
I would like to read a password from stdin, suppress its output and encode it with base64, like so:
read -s|openssl base64 -e
What is the right command for that?
|
The read command sets bash variables, it doesn't output to stdout.
e.g. put stdout into nothing1 file and stderr into nothing2 file and you will see nothing in these files (with or without -s arg)
read 1>nothing1 2>nothing2
# you will see nothing in these files (with or without -s arg)
# you will see the REPLY var has been set
echo REPLY=$REPLY
So you probably want to do something like:
read -s pass && echo $pass |openssl base64 -e
# Read user input into $pass bash variable.
# If read command is successful then echo the $pass var and pass to openssl command.
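Note that echo $pass word-splits the unquoted value and appends a newline that gets encoded too. A sketch that avoids both, shown non-interactively here (read -s from a terminal behaves the same):

```shell
# printf '%s' "$pass" avoids word splitting and the extra trailing newline
encode_pass() { IFS= read -rs pass && printf '%s' "$pass" | openssl base64 -e; }
encoded=$(printf 'secret\n' | encode_pass)
echo "$encoded"
```

For "secret" this prints the exact base64 of the six password bytes, with no newline encoded into it.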
from man bash SHELL BUILTIN COMMANDS read command:
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
One line is read from the standard input, or from the file descriptor fd supplied as an argument to the -u option, and the first word is
assigned to the first name, the second word to the second name, and so on, with leftover words and their intervening separators assigned
to the last name.
-s Silent mode. If input is coming from a terminal, characters are not echoed.
If no names are supplied, the line read is assigned to the variable REPLY.
| Read from stdin and pipe to next command |
1,637,076,812,000 |
How can I read a list of servers entered by the user and save it into a variable?
Example:
Please enter list of server:
(user will enter following:)
abc
def
ghi
END
$echo $variable
abc
def
ghi
I want this to run in a shell script. If I use the following in the script:
read -d '' x <<-EOF
It is giving me an error :
line 2: warning: here-document at line 1 delimited by end-of-file (wanted `EOF')
Please suggest how I can incorporate this in a shell script.
|
You can do
servers=() # declare an empty array
# allow empty input or the string "END" to terminate the loop
while IFS= read -r server && [[ -n $server && $server != "END" ]]; do
    servers+=( "$server" ) # append to the array
done
declare -p servers # display the array
This also allows the user to manually type entries or to redirect from a file.
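A runnable sketch of the same loop, fed from a here-string instead of the keyboard:

```shell
servers=()
# "END" (or an empty line) terminates the loop, exactly as in the answer
while IFS= read -r server && [[ -n $server && $server != "END" ]]; do
    servers+=( "$server" )
done <<< $'abc\ndef\nghi\nEND'
printf '%s\n' "${servers[@]}"
```

The terminating "END" is consumed but never stored.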
| Read list of servers for user? |
1,637,076,812,000 |
I have a list of commands to parse through for an audit, similar to this:
1. -a *policy name=PolicyName -a *policy workflow name=PolicyWorkflow -a *policy action name=PolicyAction -s Server -b Storage -J Node -y 1 Months -o -F -S
2. -a *policy name=PolicyName -a *policy workflow name=PolicyWorkflow -a *policy action name=PolicyAction -s Server -b Storage -J Node -y 1 Months -o -F -S
3. -a *policy name=Policy Name -a *policy workflow name=PolicyWorkflow -a *policy action name=PolicyAction -s Server -b Storage -J Node -y 1 Months -o -F -S
I'm trying to store each of the name=Value pairs in variables. As this is a standard pattern, I used read with success; however, when I get to a line with whitespace (see line 3), it offsets all my variables. I'm unsure how to tackle this without looping through each word of the line and matching specific patterns. Hoping someone has a better solution.
|
In bash and using an array variable instead, you can do something like:
{ IFS=$'\n'; array=($(grep -Po 'name=[^-]+(?=\s*-)' infile)); }
then print the elements of the array (array Index in bash starts from 0):
printf '%s\n' "${array[@]}"
name=PolicyName
name=PolicyWorkflow
name=PolicyAction
name=PolicyName
name=PolicyWorkflow
name=PolicyAction
name=Policy Name
name=PolicyWorkflow
name=PolicyAction
Or to print just a single element:
printf '%s\n' "${array[6]}"
name=Policy Name
We set IFS to the newline character (IFS=$'\n') so that word splitting of the result of the unquoted command substitution $(...) happens only at newline characters.
The { list; } syntax is known as command grouping; here it is used simply to group the two commands.
The array=(...) syntax creates an indexed array variable named array.
With grep -Po 'name=[^-]+(?=\s*-)' infile, we print only the matches (-o): the pattern matches "name=" followed by one or more characters other than a hyphen ([^-]+), up to a position that is followed by zero or more whitespace characters and a hyphen ((?=\s*-)).
The (?=...) syntax is a positive lookahead; grep's -P option interprets the pattern as a Perl-Compatible Regular Expression (PCRE), which supports it.
Further reading:
How to add/remove an element to/from the array in bash?
Is it a sane approach to “back up” the $IFS variable?
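As an alternative sketch that avoids changing IFS, mapfile (bash 4+, GNU grep) can collect the same matches, shown here on one sample line from the question:

```shell
line='-a *policy name=Policy Name -a *policy workflow name=PolicyWorkflow -s Server'
# -t strips the trailing newline from each collected match
mapfile -t array < <(grep -Po 'name=[^-]+(?=\s*-)' <<< "$line")
printf '<%s>\n' "${array[@]}"
```

Each element is one name=... match, whitespace intact, with no IFS juggling in the current shell.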
| Bash script Issue parsing text in line with whitespace characters |
1,637,076,812,000 |
I am reading the content of a file do.sh using a bash while-read loop. I want to execute each line of this file, line by line, so that, later, I can add some logic to the script to handle the failure of any of the intermediate lines and stop the execution of the remaining content. The syntax I am using for this is
cat do.sh|while read lin;do (echo $(" $lin")) ;done
echo $amit
Assuming the content of do.sh is
amit=3
ram=/path/to/some/dir
ls >> amit.log
In the end, I should be able to access the content of the two variables (i.e amit, ram), and the output of ls command should be stored in amit.log
I am not able to figure out the mistake.
|
Sourcing the dot-script using either the standard
. ./do.sh
or non-standard
source ./do.sh
... would have solved the original question, which only asked about a way of executing the script and leaving the created variables in the current environment (and creating the file containing the output of ls).
The updated question additionally asks for a way to terminate the sourcing of the script as soon as there is an error. I'm going to assume "error" means that some command exits with a non-zero exit status.
Usually, one could run a script with set -e enabled to let it terminate at the first error, but since this script must be sourced, we can't do that (it would terminate the current shell). Instead, we may use a feature in bash that allows us to execute an arbitrary command as soon as a command terminates with a non-zero exit status. The command we want to execute is return as soon as there is an error. This will stop the execution of the script and return the exit status in $? in the current shell.
Therefore:
trap 'err=$?; trap - ERR; return "$err"' ERR
. ./do.sh
trap - ERR
This sets the ERR trap to the command err=$?; trap - ERR; return "$err". This command will be executed upon any error, saving the exit status in err, unsetting the trap, and returning the error to the shell in $? (it also leaves the exit status in the variable err). The final trap - ERR resets the ERR trap to the default.
Example:
Our script is modified to include a call to false, which we use to simulate a command that fails. We expect the variables to be set but for the file amit.log to not be created.
$ cat do.sh
amit=3
ram=/path/to/some/dir
false
ls >> amit.log
Run our commands:
$ trap 'err=$?;trap - ERR;return "$err"' ERR
$ . ./do.sh
$ trap - ERR
Show that we got the variables but that the file wasn't created:
$ printf '%s\n' "$amit" "$ram"
3
/path/to/some/dir
$ ls
do.sh
| Read from file; and execute its content line by line; terminate at first error [closed] |
1,637,076,812,000 |
In bash manual, about read builtin command
-d delim The first character of delim is used to terminate the input line,
rather than newline.
Is it possible to specify a character as the delim of read such that it never matches (unless it can match EOF, but is EOF a character?), so that read always reads the entire file at once?
Thanks.
|
Since bash can't store NUL bytes in its variables anyway, you can always do:
IFS= read -rd '' var < file
which will store the content of the file up to the first NUL byte or the end of the file if the file has no NUL bytes (text files, by definition (by the POSIX definition at least) don't contain NUL bytes).
Another option is to store the content of the file as the array of its lines (including the line delimiter if any):
readarray array < file
You can then join them with:
IFS=; var="${array[*]}"
If the input contains NUL bytes, everything past the first occurrence on each line will be lost.
In POSIX sh syntax, you can do:
var=$(cat < file; echo .); var=${var%.}
We add a . which we remove afterwards to work around the fact that command substitution strips all trailing newline characters.
If the file contains NUL bytes, the behaviour will vary between implementations. zsh is the only shell that will preserve them (it's also the only shell that can store NUL bytes in its variables). bash and a few other shells just removes them, while some others choke on them and discard everything past the first NUL occurrence.
You could also store the content of the file in some encoded form like:
var=$(uuencode -m - < file)
And get it back with:
printf '%s\n' "$var" | uudecode
Or with NULs encoded as \0000 so as to be able to use it in arguments to printf %b in bash (assuming you're not using locales where the charset is BIG5, GB18030, GBK, BIG5-HKCSC):
var=; while true; do
    if IFS= read -rd '' rec; then
        var+=${rec//\\/\\\\}\\0000
    else
        var+=${rec//\\/\\\\}
        break
    fi
done < file
And then:
printf %b "$var"
to get it back.
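A quick sketch of the first (read -d '') variant; note that read returns a non-zero status at end-of-file even though the variable is fully populated:

```shell
tmp=$(mktemp)
printf 'line one\nline two\n' > "$tmp"
# read returns non-zero at EOF, but var holds the whole file (up to any NUL)
IFS= read -rd '' var < "$tmp" || true
rm -f "$tmp"
printf '%s' "$var"
```

Unlike $(cat < file), this keeps the trailing newlines.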
| Is there a character as `delim` of `read`, so that `read` reads the entire of a file at once? |
1,517,797,791,000 |
I have this read operation:
read -p "Please enter your name:" username
How could I verify the user's name, in one line?
If it's not possible in a sane way in one line, maybe a Bash function put inside a variable is a decent solution?
The name is just an example; it could be a password or any other common form value.
Verifying here means: requesting the user to enter the name twice and ensuring the two values are the same.
|
That the user typed (or, possibly, copied and pasted...) the same thing twice is usually done with two read calls, two variables, and a comparison.
read -p "Please enter foo" bar1
read -p "Please enter foo again" bar2
if [ "$bar1" != "$bar2" ]; then
echo >&2 "foos did not match"
exit 1
fi
This could instead be done with a while loop and condition variable that repeats the prompts-and-checks until a match is made, or possibly abstracted into a function call if there are going to be a lot of prompts for input.
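A sketch of the loop-and-function variant mentioned above (ask_twice is a hypothetical name; the here-string stands in for interactive typing in this demo):

```shell
# Hypothetical wrapper: re-prompt until the two entries match
ask_twice() {
    local a b
    while true; do
        read -p "Please enter $1: " a
        read -p "Please enter $1 again: " b
        [ "$a" = "$b" ] && { printf '%s\n' "$a"; return 0; }
        echo >&2 "entries did not match, try again"
    done
}

# Non-interactive demo: the first pair mismatches, the second pair matches
name=$(ask_twice name <<< $'bob\nbOb\nalice\nalice' 2>/dev/null)
echo "$name"
```

When stdin is not a terminal, bash suppresses the -p prompts, so only the matched value is printed.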
| One line read verification [closed] |
1,517,797,791,000 |
This is simple
#!/bin/bash
echo "What is your name?"
read name
echo "Your name is: $name"
But what if I don't want to treat a name but a large HTML code block with nested tags and all their special characters? (a block that will be interactively pasted)
How can I save an entire HTML code block into a variable with a bash script, from terminal input?
|
Instead of reading a line with read you can read directly from the input with cat. This will read from stdin (typically the terminal if you type it directly at the prompt) and write to stdout (also the terminal). Use Ctrl/D to end your input:
cat
In the more general case the cat command reads from one more files listed as arguments, or stdin if none are specified, and writes the contents of all inputs to stdout.
Putting this into your program,
#!/bin/bash
echo "What is your name?"
name=$(cat)
printf 'Your name is: "%s"\n' "$name"
In this instance the output is sent to the variable $name.
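One caveat: command substitution strips trailing newlines, which may matter if the HTML block ends with blank lines. A sentinel character works around it (a sketch):

```shell
# Command substitution strips trailing newlines; a sentinel keeps them intact
capture() { v=$(cat; echo .); v=${v%.}; }
capture < <(printf '<div>\n</div>\n\n')
printf '%s' "$v"
```

The appended "." protects the newlines, and ${v%.} removes it again afterwards.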
| Bash: interactively enter and save large html block into a variable from the terminal |
1,517,797,791,000 |
In one directory, I have several PNGs and one text file. The PNGs are named after UPC barcodes, like 052100029962.png, 052100045535.png, etc., and I have one text file upcs.txt where each line contains the UPC code merged with product name, like so:
...
052100029962mccormickpepper
052100045535mccormickonesheet
...
I want to rename each *.png* file to its product name, without the UPC numbers, using its current filename matched to the strings in the text file---so instead of 052100029962.png I would have mccormickpepper.png. This has been surprisingly difficult; I have tried varieties of the following with no success.
for f in $(find . -iname "*.png"); do
while read -r line; do
if [[ "$f" == *"$line"* ]]; then ## also tried =~
cp "$f" "$line";
fi
done < upcs.txt
done
By "varieties," I mean switching the order of operations, e.g., putting the while loop before the for loop, and even creating another text file with only product names so that I could read and compare them also. For the latter, I did:
for f in $(find . -iname "*.png"); do
while read -r line; do
if [[ "$f" == *"$line"* ]]; do
while read -r line2; do
cp "$f" "$line2";
done < upcs_names_only.txt;
fi;
done < upcs.txt;
done
I even tried putting the 2nd while loop before calling the if loop.
Lastly, I know that even if formulated correctly, the above loops would not retain the .png extension. To deal with this, I was prepared to select the results manually and add .png.
What am I doing wrong? Can anyone help me better understand the logic too?
|
You have to use a delimiter for a file containing two fields per row. Here a sed inserts this delimiter and the result is given line by line to mv
#!/bin/bash
while read -r oldname newname; do
    [ -f "${oldname}.png" ] && echo mv -- "${oldname}.png" "${newname}.png"
done < <(sed 's/^[0-9]*/& /' upcs.txt)
Remove echo after testing. Of course I assume your first field is always a sequence of digits and the second field has no whitespaces, does not start with a digit, etc.
If you need any additional actions, like for example to check for duplicated target names, you can do it on the file at first place. Or if you need, in any case mv not to overwrite an existing target, use -n (--no-clobber), also -i (--interactive) would prompt for user input for the same reason.
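To see what the sed stage does to one line of upcs.txt, it simply inserts the space separator that read -r oldname newname needs:

```shell
printf '052100029962mccormickpepper\n' | sed 's/^[0-9]*/& /'
```

The & in the replacement re-inserts the matched digit run, so only a space is added after it.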
| change filename based on current filename matched with separate file content |
1,517,797,791,000 |
I wrote a script that collects application logs on local machine and then from remote machine.
It has variable oldlogsdate which reads the date of logs I want to collect.
For example, if I enter Apr 23 it works fine because there is only one space, but if I enter Apr  4 (two spaces between Apr and 4) it will remove one space.
Note: This oldlogsdate is also set on the remote machine in the user's .bash_profile, and the entered date is then changed using a sed command.
printf "\n";read -p "Which Date Logs you want to collect = " oldlogsdate
ssh -qt [email protected] "sed -i "s/oldlogsdate=.*/oldlogsdate=\'\"$oldlogsdate\"\'/" ~/.bash_profile"
oldlogsdate="Apr 7"
The reason I'm using dates like Apr 23 and Apr  4 (with the extra space) is that they are part of an ls command.
Is there a way to retain double spaces in read command, or some other better way?
|
Read command removes double space and Keeps only one
It doesn't. The read command by itself keeps the interior whitespace as is (however it will trim leading and trailing whitespace unless you set IFS to empty; it also will mangle backslashes if you don't use -r).
read myVar <<<'Apr 4'
echo "$myVar" # will output: Apr 4
That's you who then lose spaces due to not double-quoting the variables:
myVar2='Apr 4'
echo $myVar2 # will mangle spaces, please always use "$myVar2" instead
Not quoting the variables properly not only makes the script misbehave but also may introduce potential security risks.
A bit more correct command would be
ssh -qt [email protected] "sed -i 's/oldlogsdate=.*/oldlogsdate=\"$oldlogsdate\"/' ~/.bash_profile"
But it's not versatile and not safe anyway, because it doesn't take into account that the $oldlogsdate value can potentially contain not only spaces but also apostrophes, slashes, backslashes, dollar signs and exclamation marks.
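A minimal demonstration of the quoting difference described above:

```shell
myVar='Apr  4'   # two spaces, exactly as read stores it
echo $myVar      # unquoted: the shell re-splits and the spaces collapse
echo "$myVar"    # quoted: both spaces preserved
```

The first echo prints "Apr 4", the second "Apr  4".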
| Read command removes double space and keeps only one |
1,517,797,791,000 |
I have the following bash script that I'd like to use as a fuzzy file opener. I create a fifo, spawn a new terminal with fzf running and redirect fzf's output to the fifo. I then call a function that reads from the fifo and opens the files.
My problem is that the while loop inside the open function never ends. How can I close the fifo once all the lines have been read?
#!/usr/bin/env bash
FIFO=/tmp/fuzzy-opener
[ -e "$FIFO" ] || mkfifo "$FIFO"
exec 3<> "$FIFO"
function open {
    while read file; do
        # open every $file based on its mime-type
    done <&3
    echo 'done' # this is never reached
}
alacritty -e sh -c "fzf -m >&3" \
&& open
|
I would suggest a couple of alternatives, because:
based on the script in your question, a FIFO seems actually not needed;
in principle, since you are using Bash, you could take advantage of the NUL character as a delimiter (the only byte that is not allowed in a POSIX file path); unfortunately, though, fzf does not seem to work with file names containing newline characters;
reading from a file in /tmp may pose a major security issue: if someone else created /tmp/fuzzy-opener (as a regular file), your script would happily apply open to its content (though, on some systems, opening a not-owned file in a word-writable sticky directory using exec 3<> will raise an error).
You may use:
function open {
    while IFS= read -r -d '' file
    do
        echo "$file" # Replace with the actual open action
    done
}
alacritty -e sh -c 'fzf -m --disabled --print0 >&3' 3>&1 | open
which can be made portable by removing -d '' and --print0 (losing nothing, given the aforementioned limitation of fzf); or, using an array to store the selected file names:
function open {
    for file
    do
        echo "$file" # Replace with the actual open action
    done
}
mapfile -t -d '' toopen < <(alacritty -e sh -c 'fzf -m --print0 >&3' 3>&1) &&
open "${toopen[@]}"
In both cases, the main point is that fzf's output is redirected to a new file descriptor obtained duplicating the writing end of a pipe and thus connected to a reader command.
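The NUL-delimited read loop can be exercised on its own, without fzf or a terminal emulator (a sketch, with printf standing in for fzf's --print0 output):

```shell
open() {
    while IFS= read -r -d '' file; do
        printf 'would open: %s\n' "$file"   # stand-in for the real open action
    done
}
out=$(printf '%s\0' 'a file' 'another file' | open)
printf '%s\n' "$out"
```

Each NUL-terminated record becomes one iteration, and the loop ends cleanly at end of input.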
| How can I read from named pipe line by line and exit? |
1,517,797,791,000 |
#!/bin/bash
echo -n "Enter a number >"
read number
for var in $number
do
    read number
    echo $var
done
echo "Go!"
I want the numbers from 8 down to 1 to print vertically, with "Go!" at the end. When I run the code, only 8 and Go! print out.
|
Use seq:
#!/bin/bash
echo -n "Enter a number > "
read number
seq "$number" -1 1
echo "Go!"
Output:
Enter a number > 8
8
7
6
5
4
3
2
1
Go!
To improve your code a bit, you could output the prompt to stderr:
>&2 echo -n "Enter a number > "
or use the -p option from read:
read -p 'enter a number > ' number
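If seq is unavailable, a C-style for loop in pure bash does the same; it is wrapped in a function here only to make the demo non-interactive:

```shell
countdown() {
    read -p 'enter a number > ' number
    for (( var=number; var>=1; var-- )); do
        echo "$var"
    done
    echo "Go!"
}
out=$(echo 3 | countdown)   # piped input instead of the keyboard
printf '%s\n' "$out"
```

With piped input, bash suppresses the -p prompt, so only the countdown and "Go!" appear.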
| Using for loop with read command |
1,517,797,791,000 |
For example, the user types foofoo\b\b\bbar, presses enter and gets a var equaling foofoo\b\b\bbar instead of foobar. Yes, the user loses the deletion feature so they need to use another shortcut for deletion. Or at least the other way around: normal backspace (pressing) gives them foobar and some modifier + backspace-key gives them literal backspaces.
Is there a way to enable read to accept literal backspaces?
|
This script (in bash) will accept any character except
^C (ASCII 03 ETX )
^J (ASCII 0A LF )
^M (ASCII 0D CR )
^Z (ASCII 1A SUB )
^\ (ASCII 1C FS )
including all other control characters:
#!/bin/bash
while IFS= read -srn1 a ;do
[[ "${a+x$a}" = "x" ]] && break
var=$var$(printf '%s' "$a")
printf '%s' "$a"
done
printf '\n%s\n' "$var"
Type the backspaces as CTRL-H.
Replace:
printf '\n%s\n' "$var"
with:
printf '%s' "$var" | od -An -tx1
To actually "see" the byte values.
| (How) can I get `read var` to add the literal \b (backspace) to var? |
1,517,797,791,000 |
I am trying to create a while loop that takes content from one file and creates some content in another file. But what I noticed is that it only writes the last line of the input file instead of all the lines. What am I missing here? Or is my approach with echo wrong?
My file called "test" contains a list of strings, for example:
web_pn_iis
wis_healthpartners
I am using the following command to try and create a while loop.
while read -r line;
do
echo " {
\"name\": \"$line\",
\"datatype\": \"event\",
\"searchableDays\": 180,
\"maxDataSizeMB\": 0,
\"totalEventCount\": \"0\",
\"totalRawSizeMB\": \"0\"
}," > myfile.json;
done < test;
but once the command is run, myfile.json only contains the last line read from the test file, i.e. wis_healthpartners:
{
"name": "wis_healthpartners",
"datatype": "event",
"searchableDays": 180,
"maxDataSizeMB": 0,
"totalEventCount": "0",
"totalRawSizeMB": "0"
},
So I think the echo is overwriting the lines as the while loop runs, leaving only the last line. How do I tweak this so that it contains all the lines together?
My desired output is as below.
{
"name": "unix_idx",
"datatype": "event",
"searchableDays": 180,
"maxDataSizeMB": 0,
"totalEventCount": "0",
"totalRawSizeMB": "0"
},
{
"name": "web_pn_iis",
"datatype": "event",
"searchableDays": 180,
"maxDataSizeMB": 0,
"totalEventCount": "0",
"totalRawSizeMB": "0"
},
{
"name": "wis_healthpartners",
"datatype": "event",
"searchableDays": 180,
"maxDataSizeMB": 0,
"totalEventCount": "0",
"totalRawSizeMB": "0"
},
|
In Bash, the > operator intentionally overwrites any existing data in the file, while the >> operator will append.
If you need to make sure the file is empty before you start, you can use printf "" > myfile.json to clear it out before your loop runs, then use >> to continue writing to the end.
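A quick way to see the difference between the two operators (the file here is just a throwaway scratch file):

```shell
f=$(mktemp)
echo first  > "$f"    # creates or truncates the file
echo second > "$f"    # truncates again: "first" is gone
echo third  >> "$f"   # appends to the existing content
content=$(cat "$f")
rm -f "$f"
printf '%s\n' "$content"
```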
| Trying to create a while loop to output content of one file to another |
1,517,797,791,000 |
I need to read PCI device information from files. But it gives unusable output when I use command like that:
cat /proc/bus/pci/05/00.0
Output:
�h��
How could I fix this?
OS: Debian-like Linux x64, Kernel 4.19
|
Not every file under /proc/ contains text.
/proc/bus/pci/05/00.0 (and similar files) contain binary data, not text. They're not meant to be displayed to a terminal, they're meant to be read by a program that understands the binary data format (which will be documented somewhere in the kernel documentation, or in the source code, at least).
If you want to see what's in it, you can use hexdump aka hd:
$ hd /proc/bus/pci/05/00.0
00000000 00 10 72 00 07 04 10 00 03 00 07 01 10 00 00 00 |..r.............|
00000010 01 c0 00 00 04 00 6c d2 00 00 00 00 04 00 28 d2 |......l.......(.|
00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 10 40 30 |..............@0|
00000030 00 00 40 fe 50 00 00 00 00 00 00 00 0a 01 00 00 |[email protected]...........|
00000040
Your output will probably be different because you almost certainly have a different PCI-e device at 05:00.0
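If hd is not installed, od does the same job; for example, piping a few raw bytes through it (the bytes here are made up for illustration):

```shell
# Dump arbitrary binary data as hex, one byte per column,
# without the address column (-An).
bytes=$(printf '\000\020\162' | od -An -tx1)
printf '%s\n' "$bytes"
```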
| Listing PCI Devices By Reading From File (Instead of lspci Command) |
1,517,797,791,000 |
I'm trying to read a list of files from a command and ask the user for input for each file. I'm using one read to read the filenames, and another one to get user input, however this script seems to enter an infinite loop.
foo () {
echo "a\nb\nc" | while read conflicted_file;
do
echo $conflicted_file
while true; do
read -e -p "> " yn
case $yn in
[nN]* ) echo "success"; break;;
[yY]* ) echo "fail"; break;;
* ) echo "invalid input";;
esac
done
done;
}
foo
Removing the outer while read seems to resolve the issue. Any ideas why?
|
read reads from stdin, so both of those reads there will read from the output of echo via that same pipe open on their stdin.
For the read inside the loop to read from the stdin outside the pipe, you could do:
foo () {
printf 'a\nb\nc\n' |
while IFS= read -r conflicted_file; do
printf '%s\n' "$conflicted_file"
while true; do
IFS= read <&3 -re -p "> " yn
case $yn in
[nN]* ) echo "success"; break;;
[yY]* ) echo "fail"; break;;
* ) echo "invalid input";;
esac
done
done
} 3<&0
That is have it duplicated on fd 3 for the whole body of the foo function.
| Nested read statement leads to infinite loop in bash |
1,517,797,791,000 |
I have a file that has a list of bash commands like the following:
echo 'foobar'
echo 'helloworld'
echo 'ok'
And I can execute these commands by simply piping them to /bin/bash like so:
cat commands | /bin/bash
Now, how do I pause the execution right in the middle, and wait for the user input? Using read does not seem to work.
echo 'foobar'
echo 'helloworld'
read -p 'Press ENTER to continue'
echo 'ok'
|
Execute like this:
/bin/bash commands
Piping the file to bash makes the file travel via stdin of bash. In such case read, reading from stdin, reads from the piped stream instead of from the terminal. It consumes echo 'ok'. By specifying the file as an argument to bash you still execute it, this time the stdin is not redirected though.
I assume you want to execute like this. Compare What is the difference between running bash script.sh and ./script.sh?
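The effect is easy to reproduce; in this sketch (file and variable names are arbitrary), the piped run loses everything after read, because read swallows the rest of the script, while running the file as an argument does not:

```shell
tmp=$(mktemp -d)
cat > "$tmp/commands" <<'EOF'
echo one
read x
echo "after:$x"
EOF

# Piped: read consumes the next script line, so "after:" never prints.
piped=$(cat "$tmp/commands" | sh)

# As an argument (stdin redirected from /dev/null so read returns at EOF):
from_file=$(sh "$tmp/commands" < /dev/null)

rm -rf "$tmp"
```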
| Pause execution of a list of commands |
1,517,797,791,000 |
I wish to send a command to process A, from process B, via a FIFO.
The command will be a word or a sentence, but wholly contained on a "\n" terminated line - but could, in general, be a multi-line record, terminated by another character.
The relevant portion of the code that I tried, looks something like this:
Process A:
$ mkfifo ff
$ read x < ff
Process B: (from another terminal window)
$ echo -n "cmd" > ff
$ echo -n " arg1" > ff
$ echo -n " arg2" > ff
...
$ echo " argN" > ff
However, what's happening is, the read returns with the value cmd, even though the bash man page says it, by default, reads \n terminated lines, unless the -d delim option is used.
So, I next tried specifying -d delim explicitly,
$ read -d "\n" x < ff
and still the same result.
Could echo -n be closing the FIFO's file 'descriptor'?
I'm using bash 4.4.x on Ubuntu 18.04.
|
Yep, that's exactly what happens:
$ mkfifo p
$ while :; do cat p ; done > /dev/null &
$ strace -etrace=open,close bash -c 'echo -n foo > p; echo bar > p' |& grep '"p"' -A1
open("p", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
close(3) = 0
--
open("p", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
close(3) = 0
The redirections only take effect for the duration of the single command they're set up on. The workaround on the write side is to either a) use a compound block to group the commands, or b) use exec to open a file descriptor for the duration of the whole script (or until closed).
a)
{ echo -n foo; echo bar; } > p
(You could also put the commands in a function and use redirection when calling the function.)
b)
exec 3>p
echo -n foo >&3
echo bar >&3
exec 3>&- # to explicitly close it
If you want to fix it on the reading side, you'll need to loop over read and concatenate the strings you get. Since you explicitly want partial non-lines, and to skip over end-of-file conditions, you can't use the exit code of read for anything useful.
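Workaround (a) can be sketched end to end with a throwaway FIFO (printf is used here instead of echo -n for portability):

```shell
dir=$(mktemp -d)
mkfifo "$dir/p"

# Reader: a single open of the FIFO, collecting everything into a file.
cat "$dir/p" > "$dir/out" &

# Writer: the compound block keeps one file descriptor open across both
# commands, so the reader sees one continuous stream, "foobar".
{ printf foo; echo bar; } > "$dir/p"

wait
received=$(cat "$dir/out")
rm -rf "$dir"
printf '%s\n' "$received"
```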
| bash: Reading a full record from a fifo |
1,517,797,791,000 |
I have a script that joins together various lists of data fields which then needs to have a few more columns added. The file generated looks like this:
$ cat compiled.csv
"name":"Network Rule 1", "description":"Network Rule 1", "from":["internal"], "source":["any"], "user":["domain\\network_user1"], "to":["external"], "destination":["host.example.com","10.1.2.1"], "port":["8443","22"],
"name":"Network Rule 2", "description":"Network Rule 2", "from":["internal"], "source":["any"], "user":["domain\\network_user2"], "to":["external"], "destination":["host.example.com","10.2.1.1"], "port":["23","25"],
"name":"Network Rule 3", "description":"Network Rule 3", "from":["internal"], "source":["any"], "user":["domain\\network_user3"], "to":["external"], "destination":["host.example.com","10.3.4.1"], "port":["80","143"],
I'm trying to append a few more fields (all the same) to the list; these fields would be something like...
"access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"
The final output should look like this:
"name":"Network Rule 1", "description":"Network Rule 1", "from":["internal"], "source":["any"], "user":["domain\\network_user1"], "to":["external"], "destination":["host.example.com","10.1.2.1"], "port":["8443","22"], "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"
"name":"Network Rule 2", "description":"Network Rule 2", "from":["internal"], "source":["any"], "user":["domain\\network_user2"], "to":["external"], "destination":["host.example.com","10.2.1.1"], "port":["23","25"], "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"
"name":"Network Rule 3", "description":"Network Rule 3", "from":["internal"], "source":["any"], "user":["domain\\network_user3"], "to":["external"], "destination":["host.example.com","10.3.4.1"], "port":["80","143"], "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"
When I try to append the fields in a loop, my double backslash disappears both when running the following command in a script and directly in the shell.
while read LINE; do
echo $LINE \"access\":\"allow\", \"time\":\"00:00:00-23:59:59\", \"notify\":\"yes\"
done < compiled.csv > completed_list.csv
Instead, this results in the following example where the username double backslash has disappeared.
"name":"Network Rule 1", "description":"Network Rule 1", "from":["internal"], "source":["any"], "user":["domain\network_user1"], "to":["external"], "destination":["host.example.com","10.1.2.1"], "port":["8443","22"], "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"
I'm guessing something is wrong with using echo to print the whole line, but what is a way to work around that?
Thank you in advance.
|
read by default interprets backslashes that can be used to escape IFS characters from delimiting, the -r option turns that off.
$ read a b <<< '1\ 2 3'
$ printf "<%s>\n" "$a"
<1 2>
$ read -r a b <<< '1\ 2 3'
$ printf "<%s>\n" "$a"
<1\>
Also, even with just one variable name given, read will remove leading and trailing whitespace (if present in IFS), so you may want to explicitly set IFS to the empty string for it to get an unmodified input line.
Depending on your shell and the settings, echo can also process backslashes when printing. Better use printf "%s\n" "..." instead.
Also, if you print double quotes, it may be easier to single-quote the whole string, instead of escaping each one separately.
So, maybe:
while IFS= read -r LINE; do
echo "$LINE"' "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"'
done < compiled.csv > completed_list.csv
Then again, since it looks like you're just adding a fixed string to the end of each line, you could use sed instead. It'll likely be faster then the shell:
sed -e 's/$/ "access":"allow", "time":"00:00:00-23:59:59", "notify":"yes"/' < compiled.csv > completed_list.csv
| Double backslash disappears when printed in a loop |
1,517,797,791,000 |
While trying to learn how to manipulate the content of files in bash, I encountered the following code example:
while IFS=, read -r col1 col2
do
echo "I got:$col1|$col2"
done < myfile.csv
According to The Open Group Base Specifications Issue 6:
The read utility shall read a single line from standard input.
If my understanding is correct, this means that, for instance, if I want read to read lines from myfile.csv, I should add < myfile.csv to the end of the read command, such as:
read -r col1 col2 < myfiles.csv
However, in the annexed code, < myfile.csv is appended after the done keyword. Why is that?
|
For the purposes of redirection in this example, stdin for everything in the while loop, including the conditional, will be myfile.csv
You could redirect it as you suggest, but then the redirection would be set up separately for each call to read, and it would just read the first line every time.
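A self-contained sketch of the whole-loop redirection (the CSV content is made up): read is called once per iteration, and every call reads the next line from the same redirected stream:

```shell
f=$(mktemp)
printf 'alice,30\nbob,25\n' > "$f"

out=$(
  while IFS=, read -r col1 col2; do
    echo "I got:$col1|$col2"
  done < "$f"   # one redirection feeds every iteration, line by line
)

rm -f "$f"
printf '%s\n' "$out"
```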
| How does "done < file" work in a while loop? |
1,517,797,791,000 |
The following script is meant to trim all media files in the current working directory.
#!/usr/bin/bash
trimmer() {
of=$(echo "${if}"|sed -E "s/(\.)([avimp4kvweb]{3,3}$)/\1trimmed\.\2/")
ffmpeg -hide_banner -loglevel warning -ss "${b}" -to "${ddecreased}" -i "${if}" -c copy "${of}"
echo "Success. Exiting .."
}
get_ddecreased() {
duration="$(ffprobe -v quiet -show_entries format=duration -hide_banner "${if}"|grep duration|sed -E s/duration=\([0-9]*\)\..*$/\\1/)"
echo ${duration}
ddecreased="$(echo "${duration} - ${trimming}"|bc -l)"
echo ${ddecreased}
}
rm_source() {
echo -e "Remove ${if}?[Y/n]?"
read ch
if [[ "${ch}" == 'y' ]]; then
rm "${if}"
fi
}
echo "How much of the beginning would you like to trim?"
read b
echo "How much of the end would you like to trim?"
read trimming
ls *.avi *.mkv *.mp4 *.vob >list_of_files
echo "Prompt before removing the source[Y/n]?"
read ch
while IFS="" read -r if || [[ -n "${if}" ]]; do
if [[ "${ch}" == 'y' ]]; then
get_ddecreased && trimmer && rm_source
elif [[ "${ch}" == 'n' ]]; then
get_ddecreased && trimmer && rm "${if}"
fi
echo $if
done <list_of_files
echo -e "Removing list_of_files."
rm list_of_files
If the user selected y when asked Prompt before removing the source[Y/n] and trimmer has finished trimming the first file rm_source is meant to prompt the user and wait for their input before removing the source file. This does not work as
the script does not wait for the input and proceeds straight to echo -e "Removing list_of_files.", much as if there were no while loop at all.
Neither does the while loop get executed when the user selects n when asked Prompt before removing the source[Y/n] - the script proceeds straight to echo -e "Removing list_of_files." instead of iterating through all the files listed in list_of_files. Why is that?
Yet when I comment out all these lines
if [[ "${ch}" == 'y' ]]; then
get_ddecreased && trimmer && rm_source
elif [[ "${ch}" == 'n' ]]; then
get_ddecreased && trimmer && rm "${if}"
fi
within the while loop all the lines of list_of_files get printed to the screen.
What is wrong with my code?
|
Your code is essentially doing the following:
foo () {
read variable
}
while read something; do
foo
done <input-file
The intention is to have the read in foo read something from the terminal, however, it is being called in a context where the standard input stream is redirected from some file.
This means that the read in foo will read from the input stream coming from the input file, not from the terminal at all.
You may circumvent this by making the loop read from a another file descriptor than standard input:
foo () {
read variable
}
while read something <&3; do
foo
done 3<input-file
Here, the read in the loop reads from file descriptor 3, which is being connected to the input file after the done keyword. This leaves the read in the foo function free to use the original standard input stream.
In the bash shell, rather than using a hard-coded value for the extra filedescriptor, you can have the shell allocate the descriptor in a shell variable:
foo () {
read variable
}
while read something <&"$fd"; do
foo
done {fd}<input-file
This would likely set $fd to an integer like 10 or higher. The exact value is unimportant.
In your current code in the question, you may also fix your issue by avoiding creating and reading from the list of files, and instead use the file globs directly:
for filename in *.avi *.mkv *.mp4 *.vob; do
if [ ! -e "$filename" ]; then
# skip non-existing names
continue
fi
# use "$filename" here
# ... and call your rm_source function
done
This avoids redirections all-together. This also allows your code to handle the odd file with newline characters in its name.
The if statement in the loop, which tests for the existence of the named file, is necessary as the shell will, by default, retain the globbing pattern if there are no matching names for that pattern. You may get rid of the if statement in the bash shell by setting the nullglob shell option using shopt -s nullglob. Setting this option would make the bash shell remove non-matching globs completely.
Note too that this is not the same as in your code if any name matching the globbing patterns is a directory. If you have a directory called e.g. mydir.mp3, then ls would list the contents of that directory. Also, if a filename matching the patterns starts with a dash, the code using ls would likely mistake that name for a set of options.
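The fd-3 pattern above can be exercised without a terminal by letting a pipe stand in for the user's keyboard; a sketch with made-up file names:

```shell
dir=$(mktemp -d)
printf 'file1\nfile2\n' > "$dir/list"

# The loop condition reads names from fd 3; the inner read keeps
# reading from stdin (here a pipe playing the role of the terminal).
out=$(
  printf 'y\nn\n' |
  while read -r name <&3; do
    read -r answer
    echo "$name=$answer"
  done 3< "$dir/list"
)

rm -rf "$dir"
printf '%s\n' "$out"
```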
| Why does the execution of these functions break out of this while loop? |
1,517,797,791,000 |
Hello, I am learning scripting here. I am trying to write a simple script using the 'for' loop.
I have hundreds of folders in a folder called user.
If I run this command, I get a list of folders that I need to move to another folder:
cat failed | awk -F ":" '/FAILED/ {print $1}' | uniq
i.e. folders under users that have failed a task have to be moved to users/failed/failedusers.
What I am currently doing is creating a new file first using
cat failed | awk -F ":" '/FAILED/ {print $1}' | uniq > failedusers
and then I move the folders using the following command:
while read line; do cp -r users/$line users/failed; done < failedusers
My question here is: can I do the same using just the for command, instead of writing the output to another file and using the while read command to get it done?
For example, somehow assign a value to a variable in a loop like
faileduser=`cat failed | awk -F ":" '/FAILED/ {print $1}' | uniq`
and then write something like
mv users/$faileduser user/failed/$faileduser
I am getting all kinds of errors when I am trying to write something like the above.
Thanks
|
With GNU xargs and a shell with support for ksh-style process substitution, you can do:
xargs -rd '\n' -I USER -a <(awk -F : '/FAILED/ {print $1}' failed | sort -u
) cp -r users/USER user/failed/USER
With zsh, you could do:
faileduser=( ${(f)"$(awk -F : '/FAILED/ {print $1}' failed | sort -u)"} )
autoload zargs
zargs -rI USER -- $faileduser -- cp -r users/USER user/failed/USER
Assuming you want to copy USER to user/failed/USER, that is, copy it into user/failed, you could also do (still with zsh):
(( $#faileduser )) && cp -r users/$^faileduser user/failed/
With the bash shell, you could do something similar with:
readarray -t faileduser < <(awk -F : '/FAILED/ {print $1}' failed | sort -u)
(( ${#faileduser[@]} )) &&
cp -r "${faileduser[@]/#/user\/}" user/failed/
Or get awk to prepend the user/ to all the user names:
readarray -t faileduser < <(awk -F : '/FAILED/ {print "user/"$1}' failed | sort -u)
(( ${#faileduser[@]} )) &&
cp -r "${faileduser[@]}" user/failed/
With a for loop, with Korn-like shells (including bash, and would also work with zsh) the syntax would be:
for user in "${faileduser[@]}"; do
cp -r "user/$user" "user/failed/$user"
done
Which in zsh could be shortened to:
for user ($faileduser) cp -r user/$user user/failed/$user
With:
faileduser=`cat failed | awk -F ":" '/FAILED/ {print $1}' | uniq`
(`...` being the archaic and deprecated form of command substitution. Use $(...) instead).
You're storing awk's output in a scalar, not an array variable.
In zsh, you can split it on newline with the f parameter expansion flag like we do directly above on the command substitution:
array=( ${(f)faileduser} )
In bash (or ksh), you could use the split+glob operator after having disabled glob and tuned split:
set -o noglob
IFS=$'\n'
array=( $faileduser )
(yes, in bash, leaving a parameter expansion unquoted invokes an implicit split+glob operator (!), a misfeature inherited from the Bourne shell, fixed in zsh and most modern non-Bourne-like shells).
| Looping through variables which is an output of another command |
1,517,797,791,000 |
I have the following command in a linux script.
#!/bin/bash
for i in "001 ARG1" "002 ARG2" "003 ARG3"
do
set -- $i
echo $1
echo $2
done
001 and ARG1 are essentially tuples.
Is there a way to move those tuples into a text file which I can load instead into the for loop?
So then I would save a text file like this
ARG1
ARG2
ARG3
or
001 ARG1
002 ARG2
003 ARG3
And the script would be
for i in textfile.txt
do
set -- $i
echo $1
echo $2
done
and get the same result?
Also, is there a way to make it so that the 001, 002 is counted automatically? Like how in Python one could set counter = 0 and counter += 1 and assign a variable.
|
Given two space or tab separated words on each line in a file as in your second example of the input file:
while read -r word1 word2; do
echo "$word1"
echo "$word2"
done <textfile.txt
This would read the first word on each line into $word1, and the rest of the line into $word2.
The input for read is given by the input of the while compound command, which gets it from the file via a redirection.
The -r option to read stops it from interpreting \ in any special way, would that character occur in the input.
With a single word per line and a counter:
counter=0
while read -r word; do
counter=$(( counter + 1 ))
echo "$counter"
echo "$word"
done <textfile.txt
This would increment the counter by one in each iteration (for each line read from textfile.txt).
To get a zero-filled three-digit counter, output the counter using printf with a formatting string of %.3d\n:
printf '%.3d\n' "$counter"
... in place of echo "$counter".
For a description of what %.3d\n means, see the documentation of the C library function printf (man 3 printf, the shell equivalent uses mainly the same format string).
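Putting the counter and the zero-padded format together (the input words are arbitrary):

```shell
counter=0
out=""
while read -r word; do
  counter=$((counter + 1))
  # %.3d pads the counter with zeros to three digits: 1 -> 001
  out="$out$(printf '%.3d:%s' "$counter" "$word") "
done <<'EOF'
ARG1
ARG2
ARG3
EOF
printf '%s\n' "$out"
```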
| Iterating through a list of tuples or names from a text file |
1,517,797,791,000 |
This command:
read -d 'z' a < <(printf 'a\n\n\n'); printf "$a"
outputs:
a
bash's read removes excess trailing newlines which is expected.
and by changing the IFS to null character:
IFS= read -d 'z' a < <(printf 'a\n\n\n'); printf "$a"
it outputs:
a
(blank line)
(blank line)
read no longer removes the excess trailing newlines, since IFS no longer includes the newline character ...
but now if we do the same but with m instead of newlines:
IFS=m read -d 'z' a < <(printf 'ammm'); printf "$a"
one would think the output would be:
a
but the actual output is:
ammm
i.e. now read doesn't remove the excess trailing IFS characters (in this case m character).
why?
|
Field splitting specifically ignores leading and trailing IFS whitespace. From the GNU Bash manual, 3.5.7 Word Splitting:
If IFS is unset, or its value is exactly <space><tab><newline>,
the default, then sequences of <space>, <tab>, and <newline> at
the beginning and end of the results of the previous expansions are
ignored, and any sequence of IFS characters not at the beginning or
end serves to delimit words. If IFS has a value other than the
default, then sequences of the whitespace characters space, tab,
and newline are ignored at the beginning and end of the word, as
long as the whitespace character is in the value of IFS (an IFS
whitespace character).
The courtesy isn't extended to non-whitespace characters. You can check this using other instances of field splitting:
bash-5.0$ printf "|%s|\n" $(printf '\n\na\nb\n\n')
|a|
|b|
bash-5.0$ IFS=' '; printf "|%s|\n" $(printf ' a b ')
|a|
|b|
bash-5.0$ IFS=z; printf "|%s|\n" $(printf 'zzazbzz')
||
||
|a|
|b|
||
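So if trailing non-whitespace delimiters have to go, they must be stripped explicitly; a minimal POSIX-shell sketch using parameter expansion:

```shell
a=ammm
# Peel trailing "m" delimiters off one at a time; ${a%m} removes a
# single trailing "m", and the loop stops once nothing changes.
while [ "${a%m}" != "$a" ]; do
  a=${a%m}
done
printf '%s\n' "$a"
```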
| why bash read doesn't remove excess trailing IFS characters |
1,517,797,791,000 |
While studying bash process substitution, I found this:
counter=0
while IFS= read -rN1 _; do
((counter++))
done < <(find /etc -printf ' ')
echo "$counter files"
If I understand correctly, the output of the find command substitutes the "_".
However:
Which mechanism is this?
Additionally: what does read -rN1 do?
Update:
Additional question:
The process substitution redirection is placed after the done of the while loop. How does this work, i.e., why does the while loop take a substitution at that place? Is there anything general I can read about?
|
<(find /etc -printf ' ') is called "process substitution". It will generate one character (a space ' ') for each file. The output of find /etc -printf ' ' is made available in a file (or something that appears as a file). The name of this file is expanded on the command line. The additional < performs the redirection of stdin from that file.
read -rN1 _ reads from (the redirected) stdin into a variable called _, one character at a time, and count those characters (which each represent one file).
Here are the read arguments from man bash:
-r Backslash does not act as an escape character. The backslash is considered
to be part of the line. In particular, a backslash-newline pair may not be
used as a line continuation.
-N nchars
read returns after reading exactly nchars characters rather than waiting for
a complete line of input, unless EOF is encountered or read times out.
Delimiter characters encountered in the input are not treated specially and
do not cause read to return until nchars characters are read.
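The counting mechanism itself can be checked without find; here an explicit bash -c wrapper stands in for the script (the input string is arbitrary):

```shell
# Count characters one at a time with read -N1 (a bash extension);
# each successful 1-character read bumps the counter.
count=$(printf 'abcde' | bash -c '
  c=0
  while IFS= read -rN1 _; do
    c=$((c + 1))
  done
  printf %s "$c"
')
echo "$count"
```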
| Is this bash argument substitution? |
1,517,797,791,000 |
Sometimes it's convenient to use read -t 3 instead of sleep 3. How do I make it work with nohup?
nohup bash -c ' date; read -t 3; date ' | tail -n 2 nohup.out
As you can see, read -t 3 does not wait for three seconds.
|
read -t 3 (a ksh93 extension now also supported by zsh, bash and mksh) is meant to read one line (logical line in that lines may be continued with a trailing backslash as you don't use the -r option) from stdin into $REPLY with a 3 second timeout.
If stdin is a terminal, that will sleep for 3 seconds unless the user presses enter (and the script will be suspended with a SIGTTIN signal if it's started in background).
If it's a regular file, it will read that line from it and return straight away. If it's /dev/zero it will do a very busy read of gigabytes of zeros from there, etc.
nohup is the command you use to detach a command from a terminal. It redirects stdin to /dev/null and stdout and stderr to nohup.out. So typically you would not want to read from the terminal in that case.
read on /dev/null returns straight away with no data returned, that's what /dev/null is for.
If the purpose of using read -t is to have a kind of sleep that can be interrupted by the user (by pressing Enter) like when you want to give them the time to read a message which they can skip, then having read -t return straight away when non-interactive (like when running under nohup) would seem the right thing to do as there's no point delaying the script then.
But if you want to read from the terminal with timeout if stdin is a terminal, and sleep otherwise, then you would do:
if [ -t 0 ]; then
read -t 3
else
sleep 3
fi
[ -t n ] tests whether the file descriptor n (0 being stdin) refers to a terminal device.
You could do read -t 3 < /dev/tty but that would defeat the purpose of nohup by adding back the interaction with the terminal that nohup is meant to guard against.
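A quick way to see [ -t 0 ] in action: when stdin is redirected away from a terminal (here from /dev/null), the test fails and the else branch runs.

```shell
probe() {
  if [ -t 0 ]; then
    echo interactive
  else
    echo detached
  fi
}

result=$(probe < /dev/null)
echo "$result"
```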
| Sometimes it's convenient to use ` read -t 3 ` instead of ` sleep 3 `. How do I make it work with `nohup`? |
1,517,797,791,000 |
I have a file with the following values. I am trying to read from the file and add 1096 to the last value and generate the output on screen. My original file looks like this.
agile_prod_toolkit,30
alsv2_prod_app,30
alsv2_qa_app,15
My expected output should be as below. the third value is second value + 1096
agile_prod_toolkit,30,1126
alsv2_prod_app,30,1126
alsv2_qa_app,15,1111
What I have tried is:
while IFS="," read line ret;do
value=$ret+1095
print $line,$ret,$value
done < final_original
But this does not seem to work. Can someone please tell me what I am doing wrong here?
The output that I am getting when I run the above command is like this:
agile_prod_toolkit,30,30+1095
alsv2_prod_app,30,30+1095
alsv2_qa_app,15,15+1095
|
value=$ret+1095 is not an arithmetic assignment, and the bash shell has no print (perhaps you meant printf?).
You could do
while IFS=, read -r line ret; do
let value=$ret+1095
echo "$line,$ret,$value"
done < final_original
or with the more modern arithmetic syntax and printf
while IFS=, read -r line ret; do
value=$((ret+1095))
printf '%s,%d,%d\n' "$line" "$ret" "$value"
done < final_original
But really shells are not intended for text / arithmetic processing - consider using something like awk or miller instead:
awk -F, '{print $0 FS $NF+1095}' < final_original
mlr -N --csvlite put '$3 = $2 + 1095' < final_original
| Add a numerical value to a variable while reading a file in bash in loop [duplicate] |
1,517,797,791,000 |
I am developing a zsh script that uses read -k.
If I execute my script like this (echo a | myscript), it fails to get input.
Apparently it is due to the fact that -k uses /dev/tty as stdin invariably, and you must tell read to use stdin as in read -u0.
But then if I change it to -u0 (which makes the previous case work) and execute my script without redirecting the tty, it breaks the script: it simply does not behave as it does without -u0.
EDIT: After debugging, it seems the issue is simply that after using -u0, the -k1 option no longer reads a single char and stops. read works in this case as it does without -k, simply buffering all input and saving it as soon as an EOL arrives.
EDIT2: After more debugging I know it's something related to the raw mode not working with -u0. If I add stty raw/cooked before my read then it works (except enter keystroke is now handled with \r not \n), but then when I execute it with non-tty stdin it breaks.
Is there any way to make both modes compatible?
Indeed, I would like to understand why the script behaves differently at all, whether I read with -u0 or not, since fd 0 is by default the same as /dev/tty.
|
read -k (read N characters) and read -q (read y or n) have two modes of operation:
By default, they read from the terminal. They put the terminal in raw mode to read byte by byte (as many times as necessary to read the requested number of characters) rather than line by line.
They can be instructed to read from an existing file descriptor (-u with a number, or -p to read from the pipe used to communicate with the current coprocess). In this case, they just read from the file descriptor.
There's no option to tell zsh to read from a specific source, but change the terminal mode if reading from a terminal. You can arrange it yourself, though: check if standard input is a terminal, and don't pass -u0 if it is.
if [[ -t 0 ]]; then
read -k1 …
else
read -k1 -u0 …
fi
| Issue of read with -u and -k in zsh |
1,517,797,791,000 |
I'm trying to read the remote SSHD server version with bash, without installing an extra tool:
$ cat < /dev/tcp/x.y.z.t/22
SSH-2.0-OpenSSH_7.2 FreeBSD-20160310
^C
CTRL+C is needed, so I tried to read only one line but something strange happens in the output :
$ read version < /dev/tcp/x.y.z.t/22
$ echo "=> version = $version, DONE."
, DONE.ion = SSH-2.0-OpenSSH_7.2 FreeBSD-20160310
I just found out there is an \r character at the end of the version variable value :
$ printf "$version" | od -ct x1z
0000000 S S H - 2 . 0 - O p e n S S H _
53 53 48 2d 32 2e 30 2d 4f 70 65 6e 53 53 48 5f >SSH-2.0-OpenSSH_<
0000020 7 . 2 F r e e B S D - 2 0 1 6
37 2e 32 20 46 72 65 65 42 53 44 2d 32 30 31 36 >7.2 FreeBSD-2016<
0000040 0 3 1 0 \r
30 33 31 30 0d >0310.<
0000045
How can I prevent the bash read builtin from reading the trailing \r character ?
|
The IFS variable can be (locally!) modified to also include \r. This code probably needs more error checking on the arguments and perhaps some thought on how to handle timeouts or other such network issues.
function read-ssh-version {
local IFS=$'\r\n'
read version < /dev/tcp/"$1"/"$2"
echo "$version"
}
Some minimal adhoc testing that the function works and that the global IFS variable hasn't been modified:
bash-5.1$ read-ssh-version 127.0.0.1 22 | od -c
0000000 S S H - 2 . 0 - O p e n S S H _
0000020 9 . 0 \n
0000024
bash-5.1$ echo -n "$IFS" | od -c
0000000 \t \n
0000003
(The \r\n sequence is generally mandatory for Internet protocols, and differs from the typical Unix \n newline sequence, but that's a different question... but that's why that pesky \r is there.)
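An alternative that does not rely on IFS at all is to strip the trailing carriage return with parameter expansion after the read; a sketch with a made-up banner string:

```shell
cr=$(printf '\r')
line="SSH-2.0-Example$cr"
# ${line%"$cr"} removes one trailing carriage return, if present.
line=${line%"$cr"}
printf '%s\n' "$line"
```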
| How to "properly" read remote sshd server version with bash |
1,517,797,791,000 |
I'm binding a function of mine to hotkey:
bind -x '"\em": __my_function'
I would like the function to behave differently depending on if the command line prompt already contains characters or not.
E.g.
$ ***presses ^M***
behaves differently than
$ cd ***presses ^M***
since a command/some text has already been typed into the prompt by the time the user presses ^M.
How do I detect this in bash?
|
__my_function should check if $READLINE_LINE is empty or not. Example:
__my_function() {
if [ "$READLINE_LINE" ]; then
echo foo
else
echo bar
fi
}
Search for READLINE_LINE and READLINE_POINT in man 1 bash.
| How to verify if current command prompt contains already-typed characters |
1,517,797,791,000 |
In Bash, the following command
echo foo | while read line; do echo $line; done
outputs foo; however, the following
alias bar="echo foo | while read line; do echo $line; done"
bar
outputs a \n (or empty space). What is causing this difference in behavior?
|
Use single quotes to defer variable expansion:
alias bar='echo foo | while read line; do echo $line; done'
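To see the difference in expansion timing, compare a double-quoted and a single-quoted alias. (expand_aliases is needed because aliases are off by default in non-interactive shells, and eval forces the alias to be re-read after the variable changes.)

```shell
shopt -s expand_aliases
line=first
alias dq="echo $line"    # $line expands now: the alias body is literally "echo first"
alias sq='echo $line'    # $line stays literal and is expanded each time the alias runs
line=second
eval dq                  # prints: first
eval sq                  # prints: second
```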
| while read loop not working inside alias [duplicate] |
1,517,797,791,000 |
I am on CentOS 7 and I want to define an alias to launch the PostgreSQL shell (psql). I defined this alias and appended it in /etc/profile.d/alias:
alias psql-local="read -p \"PSQL: enter the DB to connect: \" db ; sudo -i -u postgres psql --dbname $db"
It is executable by root.
And, I login as root, and run alias, I get:
alias psql-local='read -p "PSQL: enter the DB to connect: " db ; sudo -i -u postgres psql --dbname '
Note here $db at the end is empty.
Then I run psql-local, but I get error:
[root@lucas_vm ~]
> psql-local
PSQL: enter the DB to connect: jfps
psql: option '--dbname' requires an argument
Try "psql --help" for more information.
Then I enter /etc/profile.d/, and run alias.sh manually, then suddenly I can use this alias:
[root@lucas_vm /etc/profile.d]
> . alias.sh
[root@lucas_vm /etc/profile.d]
> psql-local
PSQL: enter the DB to connect: jfps
psql (10.5)
Type "help" for help.
jfps=#
If I exit psql and run alias again, I see this line changed:
alias psql-local='read -p "PSQL: enter the DB to connect: " db ; sudo -i -u postgres psql --dbname jfps'
Note $db is changed to jfps.
Then, I try to access another database, and it works again.
But when I exit and run alias again, I see --dbname jfps, not the name of the second database. When I echo $db, it is, instead, the name of the second db.
Why?
|
Because you're using double-quotes ("..."), the $db variable will be expanded when you define the alias, not when it's used. Try this instead:
alias psql-local='read -p "PSQL: enter the DB to connect: " db ; sudo -i -u postgres psql --dbname "$db"'
| set alias to read variable and then use in second command; only works when I execute them manually |
1,517,797,791,000 |
I have a file that contains something like the following:
red dog
red cat
red bird
red horse
blue hamster
blue monkey
blue lion
pink pony
pink whale
pink pig
pink dolphin
I need to increment a counter for every color, and then for every animal. So red would be 1, blue 2, pink 3. Next, dog, cat, bird, and horse would be 1, 2, 3, and 4. I need hamster to begin at 1 again because we are starting a new color.
If I do a "while read color animal" of said file, what can I do to compare when color is no longer equal to the previous color?
I am looking for something like this:
1.1
1.2
1.3
1.4
2.1
2.2
2.3
3.1
3.2
3.3
3.4
Any suggestions would be greatly appreciated :)
|
Something like this with awk:
$ awk '$1 != c { cc++; c=$1; ac=0; a="" } $2 != a { ac++; a=$2 } { printf("%d.%d\n", cc, ac) }' file
1.1
1.2
1.3
1.4
2.1
2.2
2.3
3.1
3.2
3.3
3.4
The awk script keeps track of four things:
The most recently read animal name, a.
The most recently read colour, c.
The "animal counter", ac.
The "colour counter", cc.
It updates these variables depending on what's found in the two columns of input.
If the colour is not the same as what's most recently read, increment cc and remember this colour instead. Also reset ac and a.
If the animal is not the same as what's most recently read, increment ac and remember this animal instead.
Then print cc and ac for each line of input.
If the animals on each line are guaranteed to be unique, one could get rid of the a variable.
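Since the question asks about while read color animal, the same numbering can also be done in plain shell; this sketch assumes each line's animal differs from the previous one, as in the sample data:

```shell
number_lines() {
    cc=0 ac=0 prev=
    while read -r color animal; do
        if [ "$color" != "$prev" ]; then
            cc=$((cc + 1))    # new colour: bump the colour counter...
            ac=0              # ...and restart the animal counter
            prev=$color
        fi
        ac=$((ac + 1))
        printf '%d.%d\n' "$cc" "$ac"
    done
}
printf 'red dog\nred cat\nblue hamster\npink pony\n' | number_lines
```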
| Reset counter when change occurs while reading |
1,517,797,791,000 |
I have this idea and maybe it is not feasible, but I think it worth asking.
Let's say a user is running this command:
cat ~/file.txt
Whenever something tries to read from this file, I would like to run a script or command in the background instead and return the response of that script as content.
Somehow like doing:
ln -s /folder /symlink
But for scripts (idea of what I have in mind):
symlinkcommand /filepath/file.txt "command to run"
Hope I made myself clear. Please reply with your ideas and suggestions if any.
Thanks.
|
What you're looking for is possible, but perhaps not exactly as you envision. The way I have seen it done most often (and it is admittedly a very rare occurrence) is to make the file being read a named pipe (aka FIFO) special file, using the mknod command:
mknod file.txt p
You would then need to start the script you want to use to generate the "file contents" in a separate shell, and have it restart automatically when it completes in order to allow for multiple reads of that special file. The STDOUT of the script would be redirected to the named pipe, and the script would pause until some other process - the original user's process - started to read the pipe somehow. Once the script ended, the "EOF" signal would propagate to the user's process, making everything look normal.
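A minimal single-shot sketch of the idea, using mkfifo (the more common equivalent of mknod ... p); in a real setup the writer would be wrapped in a loop so the pipe can be read more than once:

```shell
mkfifo contents.fifo
echo "generated contents" > contents.fifo &   # blocks until a reader opens the pipe
result=$(cat contents.fifo)                   # reading the "file" collects the generator's output
wait
rm contents.fifo
echo "$result"
```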
| Run a command which returns a string when reading a file [duplicate] |
1,517,797,791,000 |
I want to do something with user input to use in grep. Like, instead of
uin="first\nsecond\n"
printf $uin | grep d
which outputs second, I want to have the user input it, like
read uin
where the user could input "first\nsecond\n", and then the variable could be used in the grep line as above. Alternatively, if the user could input actual returns after entering first and second, and those were stored as \n, that would work too. This is meant to be run from the CLI. Thanks.
|
You could do:
echo>&2 "Please enter a multiline string, Ctrl-D, twice if not on an empty line to end:"
input=$(cat)
printf 'You entered: <%s>\n' "$input"
Note that $(...) strips all trailing newline characters, so that can't be used to input text that ends in newline characters.
To read exactly two lines, just call read twice:
IFS= read -r first
IFS= read -r second
printf 'You entered: <%s\n%s>\n' "$first" "$second"
To input one line and have the \n, \a, \x45, \0123¹... sequences expanded in it:
IFS= read -r line
expanded_line=$(printf %b "$line")
printf 'You entered: <%s>\n' "$expanded_line"
Or with bash/zsh or recent versions of ksh93u+m, avoiding the stripping of trailing newline characters:
IFS= read -r line
printf -v expanded_line %b "$line"
printf 'You entered: <%s>\n' "$expanded_line"
In any case
the syntax to read a line is IFS= read -r line, not read line
parameter expansions must be quoted in POSIX shells such as bash.
the first argument of printf is the format, you shouldn't have external data there.
¹ Beware that's the echo style of expansion where octal sequences need a leading 0 (\0ooo), not the usual \ooo one as in the C language and as also recognised in the format argument of printf.
| How to catch newlines in user input |
1,517,797,791,000 |
Consider:
$ read -r a <<<$(echo "foo"; exit 1)
$ echo $?
0
this returns 0, when I really expect a 1. How can I extract the real exit code from the subshell?
|
You'll need multiple steps:
output=$(echo "foo"; exit 1)
status=$?
read -r a <<<"$output" # assuming the "real" code here is more complex
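End-to-end, that looks like this:

```shell
output=$(echo "foo"; exit 1)   # the command substitution runs first...
status=$?                      # ...so $? here is the subshell's exit code
read -r a <<< "$output"
echo "a=$a status=$status"
```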
| How to capture subshell exit code when assigning subshell output to read? [duplicate] |
1,517,797,791,000 |
Suppose I have log.txt
The sample of log.txt are like these format:
Code Data Timestamp
...
C:57 hello 1644498429
C:56 world 1644498430
C:57 water 1644498433
...
If I want to filter lines that contain C:57, I can achieve it with
cat log.txt | grep C:57
then I redirect the output to a new file:
cat log.txt | grep C:57 > filtered_log.txt
However, whenever there's a new change in log.txt I have to run that command again. I want it to execute periodically, on every change to the file, or only when a new line containing the string C:57 appears.
|
You can use tail -f thusly:
tail -f log.txt|grep C:57 >> filtered_log.txt
This continuously reads log.txt, grepping for the token C:57 and appending any matches to filtered_log.txt.
The use of cat to read the log and pipe that to grep is a useless use of cat: grep can read a file directly, so combining cat and grep just wastes I/O.
The one drawback here is that the appearance of filtered output may be delayed due to buffering. This can be circumvented with:
tail -f log.txt|grep --line-buffered C:57 >> filtered_log.txt
or by prefixing grep with the stdbuf -o0 command:
tail -f log.txt|stdbuf -o0 grep C:57 >> filtered_log.txt
| Filter changing file periodically and redirect filtered output to new file |
1,517,797,791,000 |
I have a question about how to read my file correctly in UNIX. I have the file given (attached as an image); this file has tabs between the variables, and as you can see, Blood is one variable and Whole Blood is another. However, when I run gawk '{print $7}' file.txt | head in the terminal, the result is the other attached image. I mean, the system is counting like this:
GTEX-1117F-0003-SM-58Q7G ($1)
B1 ($2)
Blood ($3)
Blood ($4)
Whole ($5)
0013756 ($6)
1188 ($7)
At this position should be Whole Blood and not 1188, so I need to know how could I solve this problem.
I need something like this:
Thanks in advance
|
The input field separator variable (FS) defaults to any kind of whitespace. You want to explicitly set it to a tab. The man page of awk says:
-F sepstring
Define the input field separator. This option shall be
equivalent to:
-v FS=sepstring
And further down:
FS Input field separator regular expression; a <space> by default.
So, to set the FS to a tab:
$ awk -F'\t' '{print $7}' file.txt
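For example, with a made-up tab-separated line similar to the one in the question, you can see the difference -F'\t' makes:

```shell
printf 'GTEX-1117F\tB1\tBlood\tWhole Blood\n' > file.txt
awk '{print $4}' file.txt          # default FS splits on the space too: Whole
awk -F'\t' '{print $4}' file.txt   # tab FS keeps the field whole: Whole Blood
```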
| How to read my file correctly? |
1,517,797,791,000 |
I am trying to use the read command with wget; for that I am using a simple .sh script:
# echo "Please answer by : -> yes <- or -> no <-"
# read answer
# echo $answer
This code works fine locally, but the read command fails when the script is piped from wget: it finishes without waiting for an answer:
# wget -qO - 'https://testserver/pub/test.sh' | bash -x
# + echo 'Please answer by : -> yes <- or -> no <-'
# Please answer by : -> yes <- or -> no <-
# + read answer
#
Thank you for your help.
|
When you run your script with bash in the terminal, bash gets your standard input (you only have one) from the keyboard.
keyboard -> script
When you feed the script to bash over a pipe, that pipe becomes the standard input. So your problem is not related to wget, if you did this:
cat test.sh | bash -x
you'd have the same behaviour, because now the input comes from the pipe, not the keyboard. And bash scripts will inherit that standard input.
pipe -> script
As soon as the data in the pipe finishes, so does bash and the script.
A way to solve that is to download, and then run (you don't need the -O but I'm trying to keep your line mostly as-is). Something like this:
$ wget -qO test.sh 'https://testserver/pub/test.sh' && bash test.sh
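You can reproduce the problem without wget, since any pipe into bash behaves the same way; here the read silently swallows the next line of the script itself:

```shell
out=$(printf 'read answer\necho "got [$answer]"\n' | bash)
echo "output: [$out]"   # empty: the echo line was consumed by read, not executed
```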
| Read command works locally and fail with wget |
1,583,510,638,000 |
I need to create a bash script that gets input from the user and inserts it into an array until the user enters a specific word. For example, if I run the script:
enter variables: 3 4 7 8 ok
I get this array: array=( 3 4 7 8 )
or:
enter variables: 15 9 0 24 36 8 1 ok
I get this array: array=( 15 9 0 24 36 8 1 )
How can I achieve this?
|
With newline as the default separator:
read -a array -p "enter variables: "
If you want a different character than newline, e.g. y:
read -a array -d y -p "enter variables: "
You can only use a single character as delimiter with read.
EDIT:
A solution that works with the ok delimiter:
a=
delim="ok"
printf "enter variables: "
while [ "$a" != "${a%$delim}${delim}" ]; do
read -n1 # read one character
a="${a}${REPLY}" # append character
done
array=(${a%$delim}) # remove "ok" and convert to array
unset a delim # cleanup
echo # add newline for following output
Note: This version also accepts input of the form 3 4 7 8ok (without the last space character),
but line editing with special characters like Del or Backspace doesn't work. They're treated as raw input.
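Another simple variant is to read the whole line into an array and stop copying at the sentinel word (the input variable stands in for what the user types):

```shell
input="3 4 7 8 ok"            # stands in for the user's typed line
read -r -a words <<< "$input"
array=()
for w in "${words[@]}"; do
    [ "$w" = ok ] && break    # stop at the sentinel word
    array+=("$w")
done
echo "${array[@]}"            # 3 4 7 8
```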
| read user input into array until user enter specific entry |
1,583,510,638,000 |
I want to take files matching a user-supplied date, which are in two different locations, and scp them to another server. This is what I have done so far; I am struggling with reading the file names from a file and with the if condition.
path1 (/nrtrdepath/) has 1 file
path2 (/dcs/arch_05/AUDIT_REPORT/SL_AUDIT_REPORT/) 2 files
all files should scp to one location
updated code
#=================================
#description :This script will scp user prompted SL audit files to another SCP /tmp/SL_Audit_Report/ path .
#author :Prabash
#date :20170902
#================================
true > /home/cmb/SL__scripts/Prabash/list2.txt
read -p "Enter Date " n
ls -lrt /nrtrdepath/ | awk {'print $9'} | grep AuditReport_SL_nrtrde_$n.csv >> /home/cmb/SL__scripts/Prabash/list2.txt
ls -lrt /dcs/SL_AUDIT_REPORT/ | awk {'print $9'} | grep AuditReport_SL_ICT_$n.csv.gz >> /home/cmb/SL__scripts/Prabash/list2.txt
ls -lrt /dcs/SL_AUDIT_REPORT/ | awk {'print $9'} | grep AuditReport_SL_BI_$n.csv.gz >> /home/cmb/SL__scripts/Prabash/list2.txt
k=`cat /home/cmb/SL__scripts/Prabash/list2.txt`
while IFS= read -r k ; do
if [[ $k == AuditReport_SL_nrtrde* ]] ; then
scp /nrtrdepath/$k [email protected]:/tmp/SL_Audit_Report/
else
for i in $k; do scp /dcs/SL_AUDIT_REPORT/$i [email protected]:/tmp/SL_Audit_Report/
fi
done
|
It looks like what you want to do is pick three file based on a date string and scp these to another location. This may be done with
#!/bin/sh
thedate="$1"
scp "/nrtrdepath/AuditReport_SL_nrtrde_$thedate.csv" \
"/dcs/SL_AUDIT_REPORT/AuditReport_SL_ICT_$thedate.csv.gz" \
"/dcs/SL_AUDIT_REPORT/AuditReport_SL_BI_$thedate.csv.gz" \
[email protected]:/tmp/SL_Audit_Report/
You would run this with
$ sh ./script "datestring"
where datestring is the string that you want to use as the date in the filename.
This works since scp can copy several files to a single location, just like cp.
With some error checking:
#!/bin/sh
thedate="$1"
if [ ! -f "/nrtrdepath/AuditReport_SL_nrtrde_$thedate.csv" ]; then
printf 'AuditReport_SL_nrtrde_%s.csv is missing\n' "$thedate" >&2
do_exit=1
fi
if [ ! -f "/dcs/SL_AUDIT_REPORT/AuditReport_SL_ICT_$thedate.csv.gz" ]; then
printf 'AuditReport_SL_ICT_%s.csv is missing\n' "$thedate" >&2
do_exit=1
fi
if [ ! -f "/dcs/SL_AUDIT_REPORT/AuditReport_SL_BI_$thedate.csv.gz" ]; then
printf 'AuditReport_SL_BI_%s.csv is missing\n' "$thedate" >&2
do_exit=1
fi
if [ "$do_exit" -eq 1 ]; then
echo 'Some files are missing, exiting' >&2
exit 1
fi
if ! scp "/nrtrdepath/AuditReport_SL_nrtrde_$thedate.csv" \
"/dcs/SL_AUDIT_REPORT/AuditReport_SL_ICT_$thedate.csv.gz" \
"/dcs/SL_AUDIT_REPORT/AuditReport_SL_BI_$thedate.csv.gz" \
[email protected]:/tmp/SL_Audit_Report/
then
echo 'Errors executing scp' >&2
else
echo 'Transfer is done.'
fi
| Read a file line by line and if condition is met continue reading till end |
1,583,510,638,000 |
I am running a benchmark of HDD reads that bypasses the page cache. I have set the O_DIRECT flag and memory-aligned my buffer. The function attempts random reads within a file (lseek64() is used). The data I am getting looks fine until a certain point (32 MB). Please see the averaged data below:
In particular I would like to know why do I have a such a large jump after 32 MB? I use Ubuntu 16.04 File system ext4.
I would really appreciate some help on this.
Thank you.
KB TIME
32 11.2452
64 22.3882
128 45.3915
256 89.6025
512 12.655
1024 402.332
2048 759.456
4096 1512.83
8192 2999.54
16384 5988.16
32768 **85358.8**
double readFileRan(std::string name, unsigned long bytes) {
Time t;
int ID = open(name.c_str(), O_RDONLY | O_DIRECT);
sync();
if ( ID == -1) {
std::cout << "can't open input file!" << std::endl;
return 0;
}
unsigned long reads = bytes / 512;
std::vector<unsigned long> offsets;
for(unsigned long i = 0; i < reads; i++) {
offsets.push_back((rand() % reads) * 512);
}
int BLKSIZE = 512;
char* sector = (char*)memalign(BLKSIZE, BLKSIZE); //size of physical sector
unsigned long numRead = 0;
unsigned long i = 0;
off64_t result = 10;
unsigned long long start = t.start();
while(i <= reads) {
result = lseek64(ID, offsets[i] ,SEEK_SET);
numRead = read(ID, sector, 512);
i = i + 1;
}
unsigned long long end = t.end();
close(ID);
unsigned long long total = end - start;
double mili = t.convertCyclesToSec(total);
std::cout << mili << std::endl;
return mili;
}
|
The time to read a sector depends on the rotation angle of the drive when you try the read, and your sample size is too small to avoid statistical fluctuations from this random process. You're reading every sector just once on average. That's fine when bytes is large and you are taking a lot of samples, but not so great when bytes is small. To get more interesting data, you should always read a fixed large number of sectors independently of the magnitude of bytes.
At some point there can be expected to be a jump in the access time when bytes exceeds the size of a cylinder, and the head has to move from track to track rather than just waiting for the correct sector to fly by (which also takes time, but less time). But this effect can be seen better when reading a partition raw rather than through a filesystem (which is free to map the file sectors non-linearly to the device sectors).
The cylinder sizes of modern disks are, of course, variable, as more sectors can fit on the longer outer tracks than on the shorter inner tracks closer to the spindle.
Trying to measure all of this is further complicated by the fact that you probably have an small memory cache on the disk itself, which is not disabled solely by using O_DIRECT.
| read() randomly a file with O_DIRECT flag on, getting a serious performance hit on 32 MB files size? |
1,583,510,638,000 |
I have simple script
#!/bin/bash
SENTENCE=""
while read word
do
SENTENCE="$SENTENCE $word"
done
whose interaction with the user may result in the following:
a
a
b
a b
c
a b c
d
a b c d
How can I have the string displayed at the right in the same line as the user in order to have the output
a a
b a b
c a b c
d a b c d
|
Assuming the simplest case (a short word, no line-wrapping, no concern about reaching the end of the screen with scrolling), you could do this
#!/bin/bash
SENTENCE=""
tput sc
while read word
do
SENTENCE="$SENTENCE $word"
tput rc
tput hpa 20
printf '%s\n' "$SENTENCE"
tput sc
done
That uses two terminal features which are in most of the terminal descriptions you would use:
save/restore cursor position (the sc and rc parameters), and
horizontal position (the hpa parameter).
You could hardcode the corresponding escape sequences, at the expense of readability...
By the way, some may suggest using the up-arrow escape, but that has the same problem with scrolling at the end of the screen, as also would \e[F (CPL, which is not in your terminal description).
For moving horizontally, you could use the right-cursor with a parameter, e.g.,
tput cuf 20
which would be \e[20C.
At the end of the question, there is comment about \e[1a, but ANSI escape sequences are case-dependent, that is not the same as \e[1A (which moves the cursor up by one line). This may be what you had in mind:
#!/bin/bash
SENTENCE=""
while read word
do
SENTENCE="$SENTENCE $word"
tput cuu1
tput hpa 20
printf '%s\n' "$SENTENCE"
done
which is easier to read than
#!/bin/bash
SENTENCE=""
while read word
do
SENTENCE="$SENTENCE $word"
echo -en '\e[A'
echo -en '\e[20C'
echo "$SENTENCE"
done
| How to display a string at the right of the user insert prompt |
1,583,510,638,000 |
I need to make a shell script that will receive a number of rows and a number of columns and then print a word in a grid with that many rows and columns.
For example: 2 rows, 3 columns
expected output:
word word word
word word word
I know how to use read but I don't know how to get the output.
|
This should put you on track :
wordToPrint='hello'
echo "How many rows?"
read nbRows
echo "How many columns?"
read nbColumns
for ((row=0; row<$nbRows; row+=1)); do
for ((column=0; column<$nbColumns; column+=1)); do
echo -en "$wordToPrint\t"
done
echo
done
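The same logic as a reusable function, using printf instead of echo -e:

```shell
print_grid() {
    local word=$1 rows=$2 cols=$3 r c
    for ((r = 0; r < rows; r++)); do
        for ((c = 0; c < cols; c++)); do
            printf '%s' "$word"
            (( c < cols - 1 )) && printf ' '   # separator between columns only
        done
        printf '\n'
    done
}
print_grid word 2 3
```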
| How to read numbers of rows and columns and print in a specific way |
1,583,510,638,000 |
I have a txt file with content as such:
Adedunmola Okikiola Adewole 512.035
215−39 ^M
Ademir Cleto de Oliveira 055.735
445−13 ^M
Adilson Wagner Gandu 559.995
780−28 ^M
When I run my script,
#!/usr/bin/bash
file="$@"
while IFS= read -r cmd; do
printf '%s\n' "$cmd"
done < "$file"
./readline.sh list.txt
I get outputs formatted as such:
Yves Levi Paixao Lapa 022.485 165"24
Yvin Miguel Juanico Carvalho 623.200 765"20
Yzis Silva Lima Santos 372.341 215"39
Zilmara de Nazare Lucas Pimentel 282.147 230"44
But, I can't make grep work with the following pattern:
./readline.sh list.txt | grep "230\"44"
Which is a code for the course. In this exemplary output, I expected it should give me, at least, the line:
Zilmara de Nazare Lucas Pimentel 282.147 230"44
What am I doing wrong?
EDIT:
Print:
Link to the txt file:
https://drive.google.com/file/d/1azr_GSB2rBHd9dPzx43vrRdqkxkWb60G/view?usp=sharing
EDIT2:
Changing the file encoding to utf-8 make the output equal to what vim or emacs initially showed. But, still, I can't successfully grep "230-44", for example.
|
The problem was that I was using a plain - in the grep pattern, when the text actually contained a different character, − (which gets converted when I paste it into Stack Overflow).
| Unable to match a specific regex with bash |
1,583,510,638,000 |
I am having trouble understanding how to get read to "read" information from the console instead of from user input; effectively a "getline" from the console.
Here is my scenario. I will be sending an echo "information" | ncat "IP" "PORT" to a port located internally on the network, running a daemon to catch the correct information.
If information I send is incorrect I will be sent a predefined message telling me that the information sent was incorrect and to try again. However, if the information is correct, I will get a different response from which I have no idea what the response will be.
Here is a snippet of the BASH script I have tried so far.
if [[read -r line ; echo "$line" ; while "$line" == "Information wrong try again!"]] ; continue
elif [[read -r line ; echo "$line" ; while "$line" != "Information wrong try again!"]] ;break
I am very new to bash, so my use of my syntax may be incorrect.
|
I'm afraid your syntax is all wrong. I think you are looking for something like this:
if [ "$line" = "Information wrong try again!" ]; then
echo "Try again"
else
echo "All's well"
fi
Of course, the details will depend on how you run the script. If you want it to be an infinite loop and re-run the echo "information" | ncat "IP" "PORT" command until it works, you want something like this:
line=$(echo "information" | ncat "IP" "PORT")
while [ "$line" = "Information wrong try again!" ]; do
line=$(echo "information" | ncat "IP" "PORT")
sleep 1 ## wait one second between launches to avoid spamming the CPU
done
## At this point, we have gotten a value of `$line` that is not `Information wrong try again!`, so we can continue.
echo "Worked!"
Or, similarly:
while [ $(echo "information" | ncat "IP" "PORT") = "Information wrong try again!" ]; do
sleep 1 ## wait one second between launches to avoid spamming the CPU
done
## rest of the script goes here
echo "Worked!"
| Using a BASH script to read the output displayed in console from a third source? |
1,583,510,638,000 |
I have a bash script that I wrote to automate some commands, and one of the first lines in the script isn't working on the computer that it needs to run on. The code is below
#!/bin/bash
#some comments
read -p 'press enter to begin'
echo "Please Wait..."
#rest of the script
It is a fairly simple start to the automation script and it works fine on the virtual machine I used to test and run the script, but when the script runs in the working environment it outputs the text
read -p 'press enter to begin' straight to the command line and then stops running, instead of the desired result: a read prompt of press enter to begin that waits for the user's input and then echoes Please Wait... while the script runs. The script is run with:
sudo bash ./path/to/file.sh
I am not sure what to do to fix this problem as I can't find anything else about this online.
I've gone through all the basic troubleshooting steps; made sure the file is executable, run while specifying that it is a bash file (it actually wouldn't run without the bash callout), and running the same read command that is in the script directly on the command line (which outputs the desired result of press enter to begin and waiting for a user input).
Any suggestions? I plan on running the script using the set -x command when have a chance today or tomorrow.
|
I figured out why the script was breaking, when using the read command I had
read -p 'prompt'
Instead of
read -p "prompt"
When I changed it to the double quotes the script ran fine for the beginning part that I was asking about in this question.
Why did this specific format break the script? Idk, the machine I'm running it on is very picky and has some weird formatting preferences about things that should work, and do work in other environments.
| Read command in bash script not executing as a read command and outputting text straight to command line [closed] |
1,583,510,638,000 |
I am trying to get a folder name from a stored variable string.
When I ran the following
path="Folder%20Name/Dir/File"
read -d "/" folder < <(echo ${path/\%20/ })
echo "$folder"
I am getting a blank echo $folder. Where am I going wrong? I have tried read -d "/" folder <<< $"(${path/\%20/ })" with no success.
|
Your first command works just fine on Bash 4.4:
$ path="Folder%20Name/Dir/File"
$ read -d "/" folder < <(echo ${path/\%20/ })
$ echo "$folder"
Folder Name
Though using process substitution here is unnecessary, you could just use a here-string instead:
$ read -d "/" folder <<< "${path/\%20/ }"
As for your second command, you're using the localization quoting $"...", which I don't think you need here, and the parenthesis also get added to the string, so you'd get (Folder Name if you did that with the path variable.
(as an aside, if you ever start using Zsh, don't use path as the name of a variable.)
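For what it's worth, the same result can be had without read at all, using only parameter expansion:

```shell
path="Folder%20Name/Dir/File"
folder=${path%%/*}          # keep everything before the first /
folder=${folder//\%20/ }    # // replaces every %20, not just the first
echo "$folder"              # Folder Name
```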
| Why does read from variable give blank new variable? [closed] |
1,583,510,638,000 |
Could someone perhaps give their advice on this.
I want to take the output file from a mysql query that was run:
$code $IP
123456 192.168.26.176
10051 192.168.20.80
234567 192.168.26.178
and run it in command:
rsync -rvp *.$code.extension root@$IP:/path/of/dest
I am trying this:
while read -r line ; do echo
"$SOURCE_TRANSMIT_DIR"*."$code".class.json
"$DEST_HOST"@"$IP":"$DEST_TRANSMIT_DIR" ; done
Output I get is this:
/opt/file/outgoing/*.123456
10051
234567.class.json [email protected]
192.168.20.80
192.168.26.178:/opt/file/incoming/
Where I would like it to read like this in separate rsync commands:
rsync -rvp *.123456.extension [email protected]:/path/of/dest
rsync -rvp *.234567.extension [email protected]:/path/of/dest
rsync -rvp *.345678.extension [email protected]:/path/of/dest
Hopefully this explains better, sorry for the terrible explanation.
|
I can't see the result of your mysql query, but you can execute it and parse the output with awk to print exactly the command you want. (See the mysql options for suppressing column headers and table borders, e.g. --batch/-B and --skip-column-names/-N.)
mysql -B -N -e "your query" | awk '{print "rsync -rvp *."$1".extension root@"$2":/path/of/dest"}'
You can then pipe the generated commands to sh or bash (commands | sh) to execute the rsync calls. :)
That seems to be the easiest way to do it, to me.
| Read variables in output file & rsync |
1,583,510,638,000 |
The file to read is file.sql, containing the following text:
create table temp
(name varchar(20), id number)
on commit reserve rows;
create table temp1
(name varchar(20), id number)
on commit reserve rows;
select name, id
from temp where id=21;
I want the three queries stored in three different files as below
file1.sql
create table temp
(name varchar(20), id number)
on commit reserve rows
file2.sql
create table temp1
(name varchar(20), id number)
on commit reserve rows
file3.sql
select name, id
from temp where id=21
using ksh scripting, while retaining whitespace
|
re='create table'
csplit -s -k -f file. yourSqlFile "%^$re%" "/^$re/" '/^select name,/' '/./'
for f in file.[0][0-3]; do
k=${f#*.0}
mv "$f" "file$k.sql"
done
for i in {2,1,0};do
j=$((i + 1))
mv "file$i.sql" "file$j.sql"
done
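If every statement ends with a semicolon, as in the sample, an awk record-splitting approach is simpler (a sketch; it assumes no semicolons occur inside the statements):

```shell
cat > file.sql <<'EOF'
create table temp
(name varchar(20), id number)
on commit reserve rows;

select name, id
from temp where id=21;
EOF

awk 'BEGIN { RS = ";" }
     NF { sub(/^\n+/, "")                 # drop blank lines between statements
          print > ("file" ++n ".sql") }' file.sql
```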
| Read a text file and its store contents into different files or variables |
1,344,375,382,000 |
I know you are able to see the byte size of a file when you do a long listing with ll or ls -l. But I want to know how much storage is in a directory including the files within that directory and the subdirectories within there, etc. I don't want the number of files, but instead the amount of storage those files take up.
So I want to know how much storage is in a certain directory recursively? I'm guessing, if there is a command, that it would be in bytes.
|
Try doing this: (replace dir with the name of your directory)
du -s dir
That gives the cumulative disk usage (not size) of unique (hards links to the same file are counted only once) files (of any type including directory though in practice only regular and directory file take up disk space).
That's expressed in 512-byte units with POSIX compliant du implementations (including GNU du when POSIXLY_CORRECT is in the environment), but some du implementations give you kibibytes instead. Use -k to guarantee you get kibibytes.
For the size (not disk usage) in bytes, with the GNU implementation of du or compatible:
du -sb dir
or (still not standard):
du -sh dir
For human readable sizes (disk usage).
See
man du (link here is for the GNU implementation).
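A quick illustration with GNU du (file sizes below are arbitrary):

```shell
mkdir -p demo/sub
head -c 1024 /dev/zero > demo/a        # 1 KiB file
head -c 2048 /dev/zero > demo/sub/b    # 2 KiB file, one level down
du -sb demo                            # cumulative apparent size, recursively
```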
| How to recursively find the amount stored in directory? |
1,344,375,382,000 |
I'd like to write something like this:
$ ls **.py
in order to get all .py filenames, recursively walking a directory hierarchy.
Even if there are .py files to find, the shell (bash) gives this output:
ls: cannot access **.py: No such file or directory
Any way to do what I want?
EDIT: I'd like to specify that I'm not interested in the specific case of ls, but the question is about the glob syntax.
|
In order to do recursive globs in bash, you need the globstar feature from Bash version 4 or higher.
From the Bash documentation:
globstar
If set, the pattern ** used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a /, only directories and subdirectories match.
For your example pattern:
shopt -s globstar
ls -d -- **/*.py
You must run shopt -s globstar in order for this to work. This feature is not enabled by default in bash, by running shopt you are activating the feature.
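For instance, with a small throwaway tree (names made up), the pattern picks up .py files at every depth:

```shell
shopt -s globstar
mkdir -p proj/pkg
touch proj/a.py proj/pkg/b.py proj/pkg/c.txt
printf '%s\n' proj/**/*.py    # proj/a.py and proj/pkg/b.py, but not c.txt
```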
| Recursive glob? |
1,344,375,382,000 |
I want to see how many files are in subdirectories to find out where all the inode usage is on the system. Kind of like I would do this for space usage
du -sh /*
which will give me the space used in the directories off of root, but in this case I want the number of files, not the size.
|
find . -maxdepth 1 -type d | while read -r dir
do printf "%s:\t" "$dir"; find "$dir" -type f | wc -l; done
Thanks to Gilles and xenoterracide for safety/compatibility fixes.
The first part: find . -maxdepth 1 -type d will return a list of all directories in the current working directory.
(Warning: -maxdepth is a GNU extension
and might not be present in non-GNU versions of find.)
This is piped to...
The second part: while read -r dir; do
(shown above as while read -r dir(newline)do) begins a while loop – as long as the pipe coming into the while is open (which is until the entire list of directories is sent), the read command will place the next line into the variable dir. Then it continues...
The third part: printf "%s:\t" "$dir" will print the string in $dir
(which is holding one of the directory names) followed by a colon and a tab
(but not a newline).
The fourth part: find "$dir" -type f makes a list of all the files
inside the directory whose name is held in $dir. This list is sent to...
The fifth part: wc -l counts the number of lines that are sent into its standard input.
The final part: done simply ends the while loop.
So we get a list of all the directories in the current directory. For each of those directories, we generate a list of all the files in it so that we can count them all using wc -l. The result will look like:
./dir1: 234
./dir2: 11
./dir3: 2199
...
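Put together as one runnable sketch (the tree and names are invented for the demo; note that . itself also appears in the output, carrying the grand total):

```shell
# Build a small sample tree.
mkdir -p /tmp/countdemo/dir1 /tmp/countdemo/dir2
touch /tmp/countdemo/dir1/a /tmp/countdemo/dir1/b /tmp/countdemo/dir2/c

cd /tmp/countdemo
find . -maxdepth 1 -type d | while read -r dir
do printf "%s:\t" "$dir"; find "$dir" -type f | wc -l; done
# ".:" shows 3 (all files under the starting point),
# ./dir1 shows 2, ./dir2 shows 1
```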
| How do I count all the files recursively through directories |
1,344,375,382,000 |
I am working through SSH on a WD My Book World Edition. Basically I would like to start at a particular directory level, and recursively remove all sub-directories matching .Apple*. How would I go about that?
I tried
rm -rf .Apple* and rm -fR .Apple*
neither deleted directories matching that name within sub-directories.
|
find is very useful for selectively performing actions on a whole tree.
find . -type f -name ".Apple*" -delete
Here, the -type f makes sure it's a file, not a directory, and may not be exactly what you want since it will also skip symlinks, sockets and other things. You can use ! -type d, which literally means not directories, but then you might also delete character and block devices. I'd suggest looking at the -type predicate on the man page for find.
To do it strictly with a wildcard, you need advanced shell support. Bash v4 has the globstar option, which lets you recursively match subdirectories using **. zsh and ksh also support this pattern. Using that, you can do rm -rf **/.Apple*. This is not POSIX-standard, and not very portable, so I would avoid using it in a script, but for a one-time interactive shell action, it's fine.
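If the goal is to remove matching directories together with their contents (as the question's .Apple* names suggest), a common sketch — sample names here are hypothetical — combines -prune with -exec so find does not try to descend into a directory it is about to delete:

```shell
# Build a sample tree (names are made up for the demo).
mkdir -p /tmp/appledemo/sub/.AppleDouble /tmp/appledemo/.AppleDB
touch /tmp/appledemo/sub/.AppleDouble/junk /tmp/appledemo/keep.txt

# -prune stops descent into each matched directory; rm -rf removes it.
find /tmp/appledemo -type d -name '.Apple*' -prune -exec rm -rf {} +

find /tmp/appledemo   # keep.txt survives; the .Apple* directories are gone
```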
| How do I recursively delete directories with wildcard? |
1,344,375,382,000 |
I made a backup to an NTFS drive, and well, this backup really proved necessary. However, the NTFS drive messed up permissions. I'd like to restore them to normal w/o manually fixing each and every file.
One problem is that suddenly all my text files gained execute permissions, which is wrong ofc. So I tried:
sudo chmod -R a-x folder\ with\ restored\ backup/
But it is wrong as it removes the x permission from directories as well which makes them unreadable.
What is the correct command in this case?
|
If you are fine with setting the execute permissions for everyone on all folders:
chmod -R -x+X -- 'folder with restored backup'
The -x removes execute permissions for all
The +X will add execute permissions for all, but only for directories.
See Stéphane Chazelas's answer for a solution
that uses find to really not touch folders, as requested.
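A quick illustration on a scratch tree (paths invented for the demo; the mode is written a-x+X, with an explicit "who", so it cannot be mistaken for a command-line option):

```shell
mkdir -p /tmp/permdemo/sub
echo hi > /tmp/permdemo/sub/note.txt
chmod 0777 /tmp/permdemo/sub/note.txt    # simulate the bogus execute bits

chmod -R a-x+X /tmp/permdemo             # strip x everywhere, re-add it on dirs

ls -ld /tmp/permdemo/sub /tmp/permdemo/sub/note.txt
# the directory keeps its x bits; note.txt ends up rw-rw-rw-
```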
| How to recursively remove execute permissions from files without touching folders? |
1,344,375,382,000 |
When I try to use sftp to transfer a directory containing files, I get an error message:
skipping non-regular file directory_name
The directory contains a couple of files and two subdirectories.
What am I doing wrong?
|
sftp, like cp and scp, requires that when you copy a folder (and its contents, obviously), you have to explicitly tell it you want to transfer the folder recursively with the -r option.
So, add -r to the command.
| Using sftp to Transfer a Directory? |
1,344,375,382,000 |
When I use
cp -R inputFolder outputFolder
the result is context-dependent:
if outputFolder does not exist, it will be created, and the cloned folder path will be outputFolder.
if outputFolder exists, then the clone created will be outputFolder/inputFolder
This is horrible, because I want to create some installation script, and if the user runs it twice by mistake, he will have outputFolder created the first time, then on the second run all the stuff will be created once again in outputFolder/inputFolder.
I want always the first behavior: create a clone next to the original (as a sibling).
I want to use cp to be portable (e.g. MINGW does not have rsync shipped)
I checked cp -R --parents but this recreates the path all the way up the directory tree (so the clone will not be outputFolder but some/path/outputFolder)
--remove-destination or --update in case 2 do not change anything, still things are copied into outputFolder/inputFolder
Is there a way to do this without first checking for existence of the outputFolder (if folder does not exist then...) or using rm -rf outputFolder?
What is the agreed, portable UNIX way of doing that?
|
Use this instead:
cp -R inputFolder/. outputFolder
This works in exactly the same way that, say, cp -R aaa/bbb ccc works: if ccc doesn't exist then it's created as a copy of bbb and its contents; but if ccc already exists then ccc/bbb is created as the copy of bbb and its contents.
For almost any instance of bbb this gives the undesirable behaviour that you noted in your Question. However, in this specific situation the bbb is just ., so aaa/bbb is really aaa/., which in turn is really just aaa but by another name. So we have these two scenarios:
ccc does not exist:
The command cp -R aaa/. ccc means "create ccc and copy the contents of aaa/. into ccc/., i.e. copy aaa into ccc.
ccc does exist:
The command cp -R aaa/. ccc means "copy the contents of aaa/. into ccc/., i.e. copy aaa into ccc.
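To see the idempotence concretely (scratch paths invented for the demo — running the copy twice leaves the same layout, with no nesting):

```shell
rm -rf /tmp/cpdemo && mkdir -p /tmp/cpdemo/inputFolder
echo data > /tmp/cpdemo/inputFolder/file.txt
cd /tmp/cpdemo

cp -R inputFolder/. outputFolder   # first run: outputFolder is created
cp -R inputFolder/. outputFolder   # second run: same result, no nesting

ls outputFolder                    # just file.txt — no outputFolder/inputFolder
```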
| How to copy a folder recursively in an idempotent way using cp? |
1,344,375,382,000 |
How can I search a wild card name in all subfolders? What would be the equivalent of DOS command: dir *pattern* /s in *nix?
|
You can use find. If, for example, you wanted to find all files and directories that had abcd in the filename, you could run:
find . -name '*abcd*'
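For example, with a small invented tree — note the single quotes, which stop the shell from expanding the * itself before find sees the pattern:

```shell
mkdir -p /tmp/finddemo/deep/deeper
touch /tmp/finddemo/xabcdy.txt /tmp/finddemo/deep/deeper/abcd.log /tmp/finddemo/other.txt

find /tmp/finddemo -name '*abcd*'
# matches xabcdy.txt and deep/deeper/abcd.log, at any depth
```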
| How can I search a wild card name in all subfolders? |